url
stringlengths
14
1.76k
text
stringlengths
100
1.02M
metadata
stringlengths
1.06k
1.1k
https://astronomy.stackexchange.com/questions/33009/have-more-recent-ligo-virgo-gravitational-wave-measurements-narrowed-down-the-sp
# Have more recent LIGO/VIRGO gravitational wave measurements narrowed down the speed of gravity further? This answer to How precise are the observational measurements for the speed of gravity? says: ...in 2013 a Chinese group built a model using Earth's tides that helped them narrow it down. ... [T]he speeds of gravity are from 0.93 to 1.05 times the speed of light with a relative error of about 5%. This provides first set of strong evidences to show that the speed of gravity is the same as the speed of light. This is so far the most accurate measurement I've seen. See the paper for more. In the near future, LIGO may be able to provide more accurate measurements by comparing the distance among detectors and the delay of observation. In case the links break, the papers are: • "Observational evidences for the speed of the gravity based on the Earth tide" TANG KeYun et al. Chinese Science Bulletin, February 2013 Vol.58 No.4-5: 474-477 doi: 10.1007/s11434-012-5603-3 • "Bounding the speed of gravity with gravitational wave observations" Neil Cornish, Diego Blas, Germano Nardini 2017, https://arxiv.org/abs/1707.06101 update: As noted in @amateurAstro's comment and linked well-sourced answer the time between a gravitational wave detection and X-ray burst of GW170817 and GRB 170817A constrains the difference between the speed of gravity and the speed of light to be "...between $$-3 \times 10^{-15}$$ and $$+7 \times 10^{-16}$$ times the speed of light..." So my updated question is: Question: Have more recent LIGO/VIRGO gravitational wave measurements narrowed down the speed of gravity further? • See this related question and answers. – amateurAstro Aug 13 '19 at 2:26 • @amateurAstro I've updated the question to reflect your answer. It was a toss-up; I could have voted to mark this as duplicate as originally written, but this way allows for an updated answer, as well as any further analysis of that event in the subsequent years. Thanks for linking to it! 
It turns out I'd written about that event a few years ago as well; “Who saw” the binary neutron star merger first? What was the sequence of events? (GRB/GW170817) but completely forgot about it. – uhoh Aug 13 '19 at 3:25 • Unfortunately there hasn't been a second GW with an electromagnetic counterpart. And judging from the citations in recent papers, the analysis of GW170817 by Abbott, B. P., et al. 2017 is still the most accurate one (and will likely remain so). – SpaceBread Aug 13 '19 at 12:02 • @SpaceBread then that is the answer to this question. If things change in a few more years (which they certainly might), an answer can be updated or a new answer added. – uhoh Aug 13 '19 at 13:16
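The quoted GW170817/GRB 170817A constraint can be reproduced at the order-of-magnitude level with a back-of-the-envelope calculation. This is a hedged sketch, not the published analysis: the conservative source-distance bound (~26 Mpc) and emission-delay window (−10 s to +1.74 s) are assumptions taken from the Abbott et al. 2017 multimessenger paper, and the simple ratio below ignores the careful systematics in that work.

```python
# Order-of-magnitude sketch of the bound on (v_gw - c)/c from GW170817 / GRB 170817A.
# Assumed inputs (from the conservative analysis in Abbott et al. 2017):
# a lower bound of ~26 Mpc on the source distance, and an intrinsic
# emission-delay window of -10 s to +1.74 s between GW and gamma rays.

C = 2.998e8          # speed of light, m/s
MPC = 3.086e22       # one megaparsec in metres

distance = 26 * MPC          # conservative lower bound on distance, m
travel_time = distance / C   # light travel time, s (~2.7e15 s)

# If the two signals left the source within [dt_min, dt_max] of each other,
# the fractional speed difference is bounded by roughly dt / travel_time.
dt_min, dt_max = -10.0, 1.74
lower = dt_min / travel_time   # ~ -4e-15
upper = dt_max / travel_time   # ~ +7e-16

print(f"(v_gw - c)/c between {lower:.1e} and {upper:.1e}")
```

This roughly reproduces the published interval of about $-3 \times 10^{-15}$ to $+7 \times 10^{-16}$; the small discrepancy on the lower end comes from details of the published treatment that this sketch omits.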
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8555511236190796, "perplexity": 974.1479199384677}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370500482.27/warc/CC-MAIN-20200331115844-20200331145844-00405.warc.gz"}
http://stats.stackexchange.com/questions/16198/what-do-you-call-an-average-that-does-not-include-outliers
# What do you call an average that does not include outliers? What do you call an average that does not include outliers? For example, if you have a set: {90,89,92,91,5} avg = 73.4 but excluding the outlier (5) we have {90,89,92,91(,5)} avg = 90.5 How do you describe this average in statistics? - ## migrated from stackoverflow.com Sep 29 '11 at 11:04 This question came from our site for professional and enthusiast programmers. Another standard test for identifying outliers is to use LQ − (1.5×IQR) and UQ + (1.5×IQR). This is somewhat easier than computing the standard deviation and more general since it doesn't make any assumptions about the underlying data being from a normal distribution. - It's called the trimmed mean. Basically what you do is compute the mean of the middle 80% of your data, ignoring the top and bottom 10%. Of course, these numbers can vary, but that's the general idea. - Using a rule like "biggest 10%" doesn't make sense. What if there are no outliers? The 10% rule would eliminate some data anyway. Unacceptable. –  Jason Cohen Feb 2 '09 at 14:45 See my answer for a statistically-significant way to decide which data qualify as an "outlier." –  Jason Cohen Feb 2 '09 at 14:46 Well, there's no rigorous definition of outlier. As for your response, if there are outliers they will affect your estimate of the standard deviation. Furthermore, standard deviation can be a bad measure of dispersion for non-normally distributed data. –  dsimcha Feb 2 '09 at 14:47 True there's no rigorous definition, but eliminating based on percentile is certainly wrong in many common cases, including the example given in the question. –  Jason Cohen Feb 2 '09 at 14:50 The trimmed mean also enjoys the benefit of including the median as a limiting case, i.e., when you trim 50% of the data on both sides. –  Andrew M Dec 3 '14 at 16:55 The "average" you're talking about is actually called the "mean". 
It's not exactly answering your question, but a different statistic which is not affected by outliers is the median, that is, the middle number. {90,89,92,91,5} mean: 73.4 {90,89,92,91,5} median: 90 This might be useful to you, I dunno. - You are all missing the point. It has nothing to do with the mean, median, mode, stdev etc. Consider this: you have {1,1,2,3,2,400} avg = 68.17 but what we want is: {1,1,2,3,2,400} avg = 1.8 //minus the [400] value What do you call that? –  Tawani Feb 2 '09 at 15:41 @Tawani - they are not all missing the point. What you say needs to be defined using generic terms. You cannot go with a single example. Without general definitions, if the 400 were instead 30, would it still be an outlier? And if it were 14? Or 9? Where do you stop? You need stddevs, ranges, quartiles, to do that. –  Daniel Daranas Feb 2 '09 at 17:05 A statistically sensible approach is to use a standard deviation cut-off. For example, remove any results beyond ±3 standard deviations from the mean. Using a rule like "biggest 10%" doesn't make sense. What if there are no outliers? The 10% rule would eliminate some data anyway. Unacceptable. - I was going to say this approach doesn't work (pathological case = 1000 numbers between -1 and +1, and then a single outlier of value +10000) because an outlier can bias the mean so that none of the results are within 3 stddev of the mean, but it looks like mathematically it does work. –  Jason S Feb 2 '09 at 15:21 en.wikipedia.org/wiki/Chebychev%27s_inequality This applies regardless of the distribution. –  dsimcha Feb 2 '09 at 20:49 The problem is that "outlier" isn't a post-hoc conclusion about a particular realized data set. It's hard to know what people mean by outlier without knowing what the purpose of their proposed mean statistic is. –  Gregg Lind Mar 3 '09 at 20:11 So your categorical statement of "unacceptable" is nonsense, and not really very helpful. The trimmed mean has some useful properties, and some less useful, like any statistic. 
–  Gregg Lind Mar 3 '09 at 20:12 Note that contrary to comments elsewhere in this thread, such a procedure is not associated with statistical significance. –  Nick Cox Dec 3 '14 at 16:51 For a very specific name, you'll need to specify the mechanism for outlier rejection. One general term is "robust". dsimcha mentions one approach: trimming. Another is clipping: all values outside a known-good range are discarded. - There is no official name because of the various mechanisms, such as Q test, used to get rid of outliers. Removing outliers is called trimming. No program I have ever used has average() with an integrated trim() - mean() in R has a trim argument stat.ethz.ch/R-manual/R-devel/library/base/html/mean.html –  Jeromy Anglim Sep 29 '11 at 11:55 In trimming you don't remove outliers; you just don't include them in the calculation. "Remove" might suggest that points are no longer in the dataset. And you don't remove (or ignore) them because they are outliers; the criterion is (usually) just that they are in some extreme fraction of the data. A value not included in a trimmed mean often is only slightly more (or less) than the highest (lowest) value included. –  Nick Cox Dec 3 '14 at 16:48 I don't know if it has a name, but you could easily come up with a number of algorithms to reject outliers: 1. Find all numbers between the 10th and 90th percentiles (do this by sorting then rejecting the first $N/10$ and last $N/10$ numbers) and take the mean value of the remaining values. 2. Sort values, reject high and low values as long as by doing so, the mean/standard deviation change more than $X\%$. 3. Sort values, reject high and low values as long as by doing so, the values in question are more than $K$ standard deviations from the mean. - ... {90,89,92,91(,5)} avg = 90.5 How do you describe this average in statistics? ... There's no special designation for that method. 
Call it any name you want, provided that you always tell the audience how you arrived at your result, and you have the outliers in hand to show them if they request (and believe me: they will request). - The most common way of having a Robust (the usual word meaning resistant to bad data) average is to use the median. This is just the middle value in the sorted list (or halfway between the middle two values), so for your example it would be 90.5 = halfway between 90 and 91. If you want to get really into robust statistics (such as robust estimates of standard deviation etc.) I would recommend a look at the code at The AGORAS group, but this may be too advanced for your purposes. - If all you have is one variable (as you imply) I think some of the respondents above are being over-critical of your approach. Certainly other methods that look at things like leverage are more statistically sound; however, that implies you are doing modeling of some sort. If you just have, for example, scores on a test or ages of senior citizens (plausible cases of your example) I think it is practical and reasonable to be suspicious of the outlier you bring up. You could look at the overall mean and the trimmed mean and see how much it changes, but that will be a function of your sample size and the deviation from the mean for your outliers. With egregious outliers like that, you would certainly want to look into the data-generating process to figure out why that's the case. Is it a data entry or administrative fluke? If so, and it is likely unrelated to the actual true value (which is unobserved), it seems to me perfectly fine to trim. If it is a true value as far as you can tell, you may not be able to remove it unless you are explicit in your analysis about it. - It can be the median. Not always, but sometimes. I have no idea what it is called on other occasions. Hope this helped. (At least a little.) -
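The three approaches discussed in this thread (trimmed mean, the IQR fence, and a standard-deviation cutoff) can be sketched on the question's own data using only the standard library; `scipy.stats.trim_mean` would be the usual off-the-shelf choice for the first. The helper names below are illustrative, not standard terminology.

```python
import statistics

data = [90, 89, 92, 91, 5]

# 1. Trimmed mean: drop the k lowest and k highest values, average the rest.
def trimmed_mean(xs, k=1):
    xs = sorted(xs)
    return statistics.mean(xs[k:len(xs) - k])

# 2. IQR fence: keep values inside [LQ - 1.5*IQR, UQ + 1.5*IQR].
def iqr_filter(xs):
    q1, _, q3 = statistics.quantiles(xs, n=4, method="inclusive")
    iqr = q3 - q1
    return [x for x in xs if q1 - 1.5 * iqr <= x <= q3 + 1.5 * iqr]

# 3. Standard-deviation cutoff: keep values within k sigma of the mean.
# Note: on this data nothing is removed -- the outlier inflates both the
# mean and sigma, exactly the weakness raised in the comments above.
def sigma_filter(xs, k=3):
    mu, sd = statistics.mean(xs), statistics.stdev(xs)
    return [x for x in xs if abs(x - mu) <= k * sd]

print(trimmed_mean(data))                 # 90 (mean of 89, 90, 91)
print(statistics.mean(iqr_filter(data)))  # 90.5 (the 5 is excluded)
print(statistics.median(data))            # 90
```

The IQR fence reproduces the 90.5 from the question, while the median (90) is the simplest robust alternative mentioned in the answers.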
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6783087253570557, "perplexity": 894.2448649030422}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989178.64/warc/CC-MAIN-20150728002309-00147-ip-10-236-191-2.ec2.internal.warc.gz"}
https://math.answers.com/Q/What_is_the_square_root_of_52_rounded_to_the_nearest_tenth
What is the square root of 52 rounded to the nearest tenth? Wiki User 2013-02-26 01:08:20 The square root of 52 is approximately 7.2111, so rounded to the nearest tenth it is 7.2.
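The arithmetic can be checked in one line:

```python
import math

# sqrt(52) = 7.2111..., which rounds to 7.2 at one decimal place.
print(round(math.sqrt(52), 1))  # 7.2
```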
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8176655173301697, "perplexity": 17770.575797936533}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363510.40/warc/CC-MAIN-20211208114112-20211208144112-00488.warc.gz"}
http://www.ma.utexas.edu/mediawiki/index.php?title=Surface_quasi-geostrophic_equation&oldid=109
# Surface quasi-geostrophic equation $\newcommand{\R}{\mathbb{R}}$ The surface quasi-geostrophic (SQG) equation consists of an evolution equation for a scalar function $\theta: \R^+ \times \R^2 \to \R$. In the inviscid case the equation is $$\theta_t + u \cdot \nabla \theta = 0,$$ where $u = R^\perp \theta$ and $R$ stands for the Riesz transform. Fractional diffusion is often added to the equation $$\theta_t + u \cdot \nabla \theta + (-\Delta)^s \theta = 0.$$ The equation is used as a toy model for the 3D Euler equation and Navier-Stokes. The main question is to determine whether the Cauchy problem is well posed in the classical sense. In the inviscid case it is a major open problem, as it is in the supercritical diffusive case when $s<1/2$. It is believed that the inviscid SQG equation presents a similar difficulty as the 3D Euler equation in spite of being a scalar model in two dimensions[citation needed]. The same comparison can be made between the supercritical SQG equation and Navier-Stokes. The key feature of the model is that the drift $u$ is a divergence-free vector field related to the solution $\theta$ by a zeroth order singular integral operator. For the diffusive case, the well posedness of the equation follows from perturbative techniques in the subcritical case ($s>1/2$). In the critical case ($s=1/2$) the proof is more delicate and can be shown using three essentially different methods. In the supercritical regime ($s<1/2$) only partial results are known. Global weak solutions, as well as classical solutions locally in time, are known to exist for the full range of $s \in [0,1]$. ## Conserved quantities The following simple a priori estimates are satisfied by solutions (ordered from strongest, locally, to weakest). • Maximum principle The supremum of $\theta$ does not increase in time: $||\theta(t,.)||_{L^\infty} \leq ||\theta(0,.)||_{L^\infty}$. • Conservation of energy. 
A classical solution $\theta$ satisfies the energy equality $$\int_{\R^2} \theta(0,x)^2 \ dx = \int_{\R^2} \theta(t,x)^2 \ dx + \int_0^t \int_{\R^2} |(-\Delta)^{s/2}\theta(r,x)|^2 \ dx \ dr.$$ In the case of weak solutions, only the energy inequality is available $$\int_{\R^2} \theta(0,x)^2 \ dx \geq \int_{\R^2} \theta(t,x)^2 \ dx + \int_0^t \int_{\R^2} |(-\Delta)^{s/2}\theta(r,x)|^2 \ dx \ dr.$$ • $H^{-1/2}$ estimate The $H^{-1/2}$ norm of $\theta$ does not increase in time. $$\int_{\R^2} |(-\Delta)^{-1/4} \theta(0,x)|^2 \ dx = \int_{\R^2} |(-\Delta)^{-1/4}\theta(t,x)|^2 \ dx + \int_0^t \int_{\R^2} |(-\Delta)^{s/2-1/4}\theta(r,x)|^2 \ dx \ dr.$$ ## Scaling and criticality If $\theta$ solves the equation, so does the rescaled solution $\theta_r(t,x) = r^{2s-1} \theta(r^{2s} t,rx)$. The $L^\infty$ norm is invariant under the scaling of the equation if $s=1/2$. This observation makes $s=1/2$ the critical exponent for the equation. For larger values of $s$, the diffusion is stronger than the drift at small scales and the equation is well posed. For smaller values of $s$, the drift might be dominant at small scales. ## Well posedness results ### Sub-critical case: $s>1/2$ The equation is well posed globally. The proof can be done with several methods using only soft functional analysis or Fourier analysis. ### Critical case: $s=1/2$ The equation is well posed globally. There are three known proofs. • Evolution of a modulus of continuity: An explicit modulus of continuity, comparable to Lipschitz at small scales but growing only logarithmically at large scales, is shown to be preserved by the flow. The method is vaguely comparable to Ishii-Lions. • De Giorgi approach: From the $L^\infty$ bound on $\theta$, it is concluded that $u$ stays bounded in $BMO$. A variation of the parabolic De Giorgi-Nash-Moser argument can be carried out to obtain Hölder continuity of $\theta$. 
• Dual flow method: Also from the information that $u$ is $BMO$ and divergence free, it can be shown that the solution $\theta$ becomes Hölder continuous by studying the dual flow and characterizing Hölder functions in terms of how they integrate against simple test functions. ### Supercritical case: $s<1/2$ The global well posedness of the equation is an open problem. Some partial results are known: • Existence of solutions locally in time. • Existence of global weak solutions. • Global smooth solution if the initial data is sufficiently small. • Smoothness of weak solutions for sufficiently large time. ### Inviscid case The global well posedness of the equation is an open problem. Some partial results are known: •  ???
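The criticality claim in the Scaling section can be checked term by term. A short verification in the article's own notation (no assumptions beyond the definitions already given):

```latex
% Write \tau = r^{2s} t and y = r x, so \theta_r(t,x) = r^{2s-1}\theta(\tau,y).
% Every term of the equation picks up the same factor r^{4s-1}:
\partial_t \theta_r = r^{2s-1}\cdot r^{2s}\,(\partial_t\theta)(\tau,y)
                    = r^{4s-1}\,(\partial_t\theta)(\tau,y), \qquad
(-\Delta)^s \theta_r = r^{2s-1}\cdot r^{2s}\,\big((-\Delta)^s\theta\big)(\tau,y)
                     = r^{4s-1}\,\big((-\Delta)^s\theta\big)(\tau,y).
% Since u = R^\perp\theta is given by a zeroth order operator,
% u_r(t,x) = r^{2s-1} u(\tau,y), hence
u_r \cdot \nabla \theta_r
  = r^{2s-1}\, u(\tau,y) \cdot r^{2s-1}\, r\, (\nabla\theta)(\tau,y)
  = r^{4s-1}\,(u\cdot\nabla\theta)(\tau,y).
% So \theta_r solves the same equation, while
\|\theta_r\|_{L^\infty} = r^{2s-1}\,\|\theta\|_{L^\infty},
% which is scale-invariant exactly when s = 1/2.
```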
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9642338752746582, "perplexity": 334.5438170277988}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646914.32/warc/CC-MAIN-20180319120712-20180319140712-00547.warc.gz"}
https://dsnielsen.com/2017/08/11/forcing-axioms/
# Consistency strength of forcing axioms Previously I’ve only been talking about large cardinals and determinacy theories as if they were the only consistency hierarchies around. There is another important class of axioms, which has the added benefit of being, dare I say it, more useful to mathematicians not working in set theory. The reason for this is probably that these forcing axioms have a handful of consequences of a non-set-theoretic nature, making them easier to apply in (mathematical) practice. When it comes to the consistency strength of these axioms though, things get a lot more hazy: we know very little about the strength of (almost all of) these axioms. I’ll introduce these axioms here and state what is known to date. What is a forcing axiom first of all? The axioms that definitely fit this label are particular instances of the following schema. Definition. For a class of forcing posets $\mathcal C$ and a cardinal $\lambda$ define the forcing axiom $\textsf{FA}_\lambda(\mathcal C)$ as postulating, for every $\mathbb P\in\mathcal C$ and every ${\leq\lambda}$-sized collection $\mathcal D$ of dense subsets of $\mathbb P$, the existence of a filter $G\subseteq\mathbb P$ that meets every $D\in\mathcal D$. We set $\textsf{FA}(\mathcal C):=\textsf{FA}_{\aleph_1}(\mathcal C)$. To mention a few examples: • Martin’s axiom at $\lambda<\mathfrak c$, $\textsf{MA}_\lambda$, is $\textsf{FA}_\lambda(\text{ccc})$, and $\textsf{MA}$ is $\textsf{MA}_\lambda$ for all $\lambda<\mathfrak c$; • The proper forcing axiom, $\textsf{PFA}$, is $\textsf{FA}(\text{proper})$; • Martin’s maximum, $\textsf{MM}$, is $\textsf{FA}(\text{preserves stationary sets of }\omega_1)$; • The subcomplete forcing axiom, $\textsf{SCFA}$, is $\textsf{FA}(\text{subcomplete})$. 
Here, if we focus on the $\aleph_1$ case for Martin’s axiom, the first three axioms come in increasing actual strength, meaning that $\textsf{MA}_{\aleph_1}$ is implied by $\textsf{PFA}$, which again follows from $\textsf{MM}$, and $\textsf{SCFA}$ also follows from $\textsf{MM}$. One peculiar feature of $\textsf{PFA}$ (and $\textsf{MM}$) is that it implies that $\textsf{CH}$ fails. More precisely, $\mathfrak c=\aleph_2$ under $\textsf{PFA}$. On the contrary, $\textsf{SCFA}$ is consistent with $\textsf{CH}$. As for a few applications to get an idea of the mathematical usefulness, we’ll mention the following. Theorem(s). 1. (Bella-Nyikus ’91, $\textsf{MA}_{\aleph_1}$) Every compact Hausdorff space of size strictly less than $2^{\aleph_1}$ is sequentially compact; 2. (Shelah ’74, $\textsf{MA}+\lnot\textsf{CH}$) There exists a non-free Whitehead group; 3. (Baumgartner ’73, $\textsf{PFA}$) Every two $\aleph_1$-dense sets of reals are isomorphic. 4. (Shelah-Steprans ’88, $\textsf{PFA}$) Every automorphism of $\mathcal P(\mathbb N)/\text{Fin}$ is trivial; i.e. is induced by a function $f:\mathbb N\to\mathbb N$. 5. (Farah ’11, $\textsf{PFA}$) Every automorphism of the Calkin algebra is inner. In (3), a set $X\subseteq\mathbb R$ is $\aleph_1$-dense if $(a,b)\cap X$ has size $\aleph_1$ for every pair of reals $a<b$. But okay, say we agree that these forcing-type axioms are indeed useful. Then how strong of a hypothesis are we really assuming? Is it just innocently consistent with $\textsf{ZFC}$, or is it wildly far from it? In the case of $\textsf{MA}$ it’s quite innocent: it’s implied by $\textsf{CH}$ and thus consistent relative to $\textsf{ZFC}$, and even $\textsf{MA}+\lnot\textsf{CH}$ is consistent relative to $\textsf{ZFC}$. As for $\textsf{PFA}$, $\textsf{SCFA}$ and $\textsf{MM}$, we quickly fly through the roof in terms of (upper bounds of) consistency strength. Theorem (Foreman-Magidor-Shelah ’88). 
$\textsf{MM}$, and thus also $\textsf{PFA}$ and $\textsf{SCFA}$, are consistent relative to a supercompact cardinal. How about the lower bound? This is a slow process, as the main (probably only) tool we have for showing lower consistency bounds is via inner model theory, so it ultimately depends on how far the inner model theory programme has come. As it’s incredibly far from a supercompact right now, we simply don’t have the tools yet to find an equiconsistency. As I mentioned in my previous post, inner models have been constructed up to $\textsf{LSA}$, which is in the area of a Woodin limit of Woodins. Sargsyan and Trang have recently shown the lower bound of $\textsf{PFA}$ and $\textsf{SCFA}$ up to this point. Theorem (Sargsyan-Trang ’16). Assume either $\textsf{PFA}$ or $\textsf{SCFA}$. Then there exists a transitive model containing the ordinals and the reals which satisfies $\textsf{LSA}$. A strategy we could also take, which is interesting and useful in its own right, is to try “chopping the axiom into smaller parts” and looking at the consistency strength of these parts. One of the parts we’re particularly interested in is failures of square principles – I’ll use the following terminology in what follows. Definition (Caicedo-Larson-Sargsyan-Schindler-Steel-Zeman ’15). Let $\kappa$ be a cardinal. Then • $\kappa$ is threadable if $\Box(\kappa)$ fails; • If $\kappa=\rho^+$ then $\kappa$ is square inaccessible if $\Box_\rho$ fails. Recall that $\Box_\kappa$ implies $\Box(\kappa^+)$, so every threadable successor cardinal is also square inaccessible. Now the interest in square inaccessible and threadable cardinals originates from the following $\textsf{PFA}$ theorem of Todorčević, and recently Fuchs has shown that the same result holds assuming $\textsf{SCFA}$ as well, improving on a result of Jensen (’14) that $\textsf{SCFA}$ implies that every successor cardinal $\kappa\geq\aleph_2$ is square inaccessible. Theorem (Todorčević ’84). 
$\textsf{PFA}$ implies that every cardinal $\kappa\geq\aleph_2$ is threadable. Theorem (Fuchs ’16). $\textsf{SCFA}$ implies that every cardinal $\kappa\geq\aleph_2$ is threadable. This has led square-failure principles to be regarded as belonging to the hierarchy of forcing axioms. We can contemplate the consistency strength of specific failures of the square principles, yielding a wide array of new axioms. I’ll here consider the strength of square inaccessibility of successors of the following cardinals. • Regular; • Singular; • Singular strong limit; • Weakly compact; • Jónsson; • Inaccessible Jónsson; • Measurable; • Weakly compact Woodin. At this point we have a lot of axioms to consider, and we haven’t even covered variants of the forcing axioms such as the bounded variants $\textsf{BPFA}$ and $\textsf{BMM}$, even stronger versions of $\textsf{MM}$ known as $\textsf{MM}^{++}$ and $\textsf{MM}^{+++}$, and more. I’ll say a bit more about square inaccessibility, but first here’s an overview of what is currently known about the consistency strength of the various axioms (see my Diagrams tab for a pdf download). First of all, don’t be fooled into thinking that e.g. we’re close to finding an equiconsistency for a square inaccessible successor of a weakly compact Woodin: I’ve cherry-picked certain large cardinals, and especially the area between Woodin cardinals and a Woodin limit of Woodins is highly inflated. But okay, let’s justify some of the points in this diagram – I’m here going to focus on the square-failure principles. Firstly, Jensen and Solovay showed that the existence of a square inaccessible successor of a regular cardinal is equiconsistent with the existence of a Mahlo cardinal. As for the upper bounds of the remaining cases we have the following results. Theorem (Jensen ’98). Successors of subcompact cardinals are square inaccessible. Theorem (Zeman ’91). 
Assuming the existence of a measurable subcompact cardinal, there exists a generic extension of $V$ in which $\aleph_{\omega+1}$ is square inaccessible and $\textsf{GCH}$ holds. Jensen’s result gives a measurable subcompact as an upper bound for all the square inaccessibles of the non-singular variant, and Zeman’s ensures that this same upper bound also works for singulars and singular strong limits. Note that subcompacts are both weakly compact and Woodin, so we can lower this upper bound slightly in the case of weakly compacts and weakly compact Woodins. As for the lower bounds, we have the following results. Theorem (Mitchell-Schimmerling-Steel ’94). If there exists a square inaccessible successor of a singular cardinal then there exists an inner model with a Woodin cardinal. Theorem (Adolf ’17). If there exists a square inaccessible successor of a singular strong limit cardinal then there exists a transitive $M$ containing all the ordinals and reals such that $M\models\textsf{ZF}+\textsf{AD}^++\Theta\text{ is regular}$. Theorem (Jensen-Schimmerling-Schindler-Steel ’09). Let $\kappa\geq\aleph_3$ be regular and countably closed and suppose that both $\kappa^+$ is square inaccessible and $\kappa$ is threadable. Then there is a proper class model that satisfies “there is a proper class of strong cardinals” and “there is a proper class of Woodin cardinals”. The first two theorems immediately give lower bounds for the singular cases. The last theorem is useful to us because of the following. Theorem (Todorčević ’86). Every weakly compact cardinal is threadable. Theorem (Rinot ’14). Every regular Jónsson cardinal is threadable. As both weakly compacts and inaccessible Jónssons are countably closed, this supplies us with lower bounds in the weakly compact, inaccessible Jónsson and measurable cases. The Jónsson lower bound is then just the minimum of the singular lower bound and the inaccessible Jónsson lower bound, which is then an inner model with a Woodin cardinal. 
When it comes to lower bounds for square inaccessible successors of weakly compact Woodin cardinals we simply got the trivial lower bound of a weakly compact Woodin, which is strictly above a Woodin limit of Woodins. The reason why I included this one is due to the following recent result of Neeman and Steel. Theorem (Neeman-Steel ’15). The theory $\textsf{SBH}_\delta+\delta^+\text{ is a square inaccessible successor of a weakly compact Woodin}$ is equiconsistent with the theory $\textsf{SBH}_\delta+\delta\text{ is subcompact}$. Here $\textsf{SBH}_\delta$ is a certain iterability hypothesis called Strategic Branches Hypothesis (at $\delta$). So if this hypothesis turns out to be true then we really get an equiconsistency result for square inaccessible successors of weakly compact Woodins. That was it! Phew, and that was just the square principles! I’ll leave it to the reader to find the consistency upper- and lower bounds for the remaining forcing axioms.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 98, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9933289289474487, "perplexity": 490.9890168918886}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794868239.93/warc/CC-MAIN-20180527091644-20180527111644-00277.warc.gz"}
https://en.wikipedia.org/wiki/Summation
# Summation

In mathematics, summation is the addition of a sequence of any kind of numbers, called addends or summands; the result is their sum or total. Besides numbers, other types of values can be summed as well: functions, vectors, matrices, polynomials and, in general, elements of any type of mathematical object on which an operation denoted "+" is defined. Summations of infinite sequences are called series. They involve the concept of limit, and are not considered in this article. The summation of an explicit sequence is denoted as a succession of additions. For example, summation of [1, 2, 4, 2] is denoted 1 + 2 + 4 + 2, and results in 9, that is, 1 + 2 + 4 + 2 = 9. Because addition is associative and commutative, there is no need of parentheses, and the result does not depend on the order of the summands. Summation of a sequence of only one element results in this element itself. Summation of an empty sequence (a sequence with zero elements) results, by convention, in 0. Very often, the elements of a sequence are defined, through a regular pattern, as a function of their place in the sequence. For simple patterns, summation of long sequences may be represented with most summands replaced by ellipses. For example, summation of the first 100 natural numbers may be written 1 + 2 + 3 + 4 + ⋅⋅⋅ + 99 + 100. Otherwise, summation is denoted by using Σ notation, where ${\displaystyle \textstyle \sum }$ is an enlarged capital Greek letter sigma. For example, the sum of the first n natural numbers is denoted ${\displaystyle \textstyle \sum _{i=1}^{n}i.}$ For long summations, and summations of variable length (defined with ellipses or Σ notation), it is a common problem to find closed-form expressions for the result. For example,[a] ${\displaystyle \sum _{i=1}^{n}i={\frac {n(n+1)}{2}}.}$ Although such formulas do not always exist, many summation formulas have been discovered. Some of the most common and elementary ones are listed in this article.
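The closed form for the sum of the first n natural numbers can be spot-checked directly; a quick sketch (not from the article):

```python
# Check the closed form sum_{i=1}^{n} i = n(n+1)/2 against direct addition.
def triangular(n):
    return n * (n + 1) // 2

# The first 100 natural numbers sum to 5050 both ways.
assert sum(range(1, 101)) == triangular(100) == 5050
```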
## Notation

### Capital-sigma notation

Mathematical notation uses a symbol that compactly represents summation of many similar terms: the summation symbol, ${\displaystyle \textstyle \sum }$, an enlarged form of the upright capital Greek letter Sigma. This is defined as: ${\displaystyle \sum _{i\mathop {=} m}^{n}a_{i}=a_{m}+a_{m+1}+a_{m+2}+\cdots +a_{n-1}+a_{n}}$ where i represents the index of summation; ai is an indexed variable representing each successive term in the series; m is the lower bound of summation, and n is the upper bound of summation. The "i = m" under the summation symbol means that the index i starts out equal to m. The index, i, is incremented by 1 for each successive term, stopping when i = n.[b] Here is an example showing the summation of squares: ${\displaystyle \sum _{i=3}^{6}i^{2}=3^{2}+4^{2}+5^{2}+6^{2}=86.}$ Informal writing sometimes omits the definition of the index and bounds of summation when these are clear from context, as in: ${\displaystyle \sum a_{i}^{2}=\sum _{i\mathop {=} 1}^{n}a_{i}^{2}.}$ One often sees generalizations of this notation in which an arbitrary logical condition is supplied, and the sum is intended to be taken over all values satisfying the condition. Here are some common examples: ${\displaystyle \sum _{0\leq k<100}f(k)}$ is the sum of ${\displaystyle f(k)}$ over all (integers) ${\displaystyle k}$ in the specified range, ${\displaystyle \sum _{x\mathop {\in } S}f(x)}$ is the sum of ${\displaystyle f(x)}$ over all elements ${\displaystyle x}$ in the set ${\displaystyle S}$, and ${\displaystyle \sum _{d|n}\;\mu (d)}$ is the sum of ${\displaystyle \mu (d)}$ over all positive integers ${\displaystyle d}$ dividing ${\displaystyle n}$.[c] There are also ways to generalize the use of many sigma signs.
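Sums over a logical condition translate naturally into a filtered comprehension. A small sketch of a sum over the divisors of n, using the identity function in place of μ for simplicity (my choice, not the article's):

```python
# Sigma notation with a logical condition: sum f(d) over all positive
# divisors d of n, here with f the identity (the divisor-sum function).
def divisor_sum(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

# Divisors of 12 are 1, 2, 3, 4, 6, 12.
assert divisor_sum(12) == 1 + 2 + 3 + 4 + 6 + 12  # = 28
```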
For example, ${\displaystyle \sum _{i,j}}$ is the same as ${\displaystyle \sum _{i}\sum _{j}.}$ A similar notation is applied when it comes to denoting the product of a sequence, which is similar to its summation, but which uses the multiplication operation instead of addition (and gives 1 for an empty sequence instead of 0). The same basic structure is used, with ${\displaystyle \textstyle \prod }$, an enlarged form of the Greek capital letter Pi, replacing the ${\displaystyle \textstyle \sum }$.

### Special cases

It is possible to sum fewer than 2 numbers:

• If the summation has one summand ${\displaystyle x}$, then the evaluated sum is ${\displaystyle x}$.
• If the summation has no summands, then the evaluated sum is zero, because zero is the identity for addition. This is known as the empty sum.

These degenerate cases are usually only used when the summation notation gives a degenerate result in a special case. For example, if ${\displaystyle n=m}$ in the definition above, then there is only one term in the sum; if ${\displaystyle n=m-1}$, then there is none.

## Formal definition

Summation may be defined recursively as follows: ${\displaystyle \sum _{i=a}^{b}g(i)=0}$, for b < a; ${\displaystyle \sum _{i=a}^{b}g(i)=g(b)+\sum _{i=a}^{b-1}g(i)}$, for b ≥ a.

## Measure theory notation

In the notation of measure and integration theory, a sum can be expressed as a definite integral, ${\displaystyle \sum _{k\mathop {=} a}^{b}f(k)=\int _{[a,b]}f\,d\mu }$ where ${\displaystyle [a,b]}$ is the subset of the integers from ${\displaystyle a}$ to ${\displaystyle b}$, and where ${\displaystyle \mu }$ is the counting measure.
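The recursive definition (empty sum for b < a, otherwise peel off the last term) transcribes directly into code; a minimal sketch:

```python
# Recursive definition of summation: the empty sum is 0 when b < a,
# otherwise sum_{i=a}^{b} g(i) = g(b) + sum_{i=a}^{b-1} g(i).
def rec_sum(g, a, b):
    if b < a:
        return 0  # empty sum
    return g(b) + rec_sum(g, a, b - 1)

assert rec_sum(lambda i: i, 1, 100) == 5050
assert rec_sum(lambda i: i, 5, 4) == 0  # empty sum
```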
## Calculus of finite differences

Given a function f that is defined over the integers in the interval [m, n], one has ${\displaystyle f(n)-f(m)=\sum _{i=m}^{n-1}(f(i+1)-f(i)).}$ This is the analogue in the calculus of finite differences of the fundamental theorem of calculus, which states ${\displaystyle f(n)-f(m)=\int _{m}^{n}f'(x)\,dx,}$ where ${\displaystyle f'(x)=\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}}}$ is the derivative of f. An example of application of the above equation is ${\displaystyle n^{k}=\sum _{i=0}^{n-1}\left((i+1)^{k}-i^{k}\right).}$ Using the binomial theorem, this may be rewritten ${\displaystyle n^{k}=\sum _{i=0}^{n-1}\left(\sum _{j=0}^{k-1}{\binom {k}{j}}i^{j}\right).}$ The above formula is more commonly used for inverting the difference operator ${\displaystyle \Delta }$ defined by ${\displaystyle \Delta (f)(n)=f(n+1)-f(n),}$ where f is a function defined on the nonnegative integers. Thus, given such a function f, the problem is to compute the antidifference of f, that is, a function ${\displaystyle F=\Delta ^{-1}f}$ such that ${\displaystyle \Delta F=f,}$ that is, ${\displaystyle F(n+1)-F(n)=f(n).}$ This function is defined up to the addition of a constant, and may be chosen as[1] ${\displaystyle F(n)=\sum _{i=0}^{n-1}f(i).}$ There is not always a closed-form expression for such a summation, but Faulhaber's formula provides a closed form in the case of ${\displaystyle f(n)=n^{k},}$ and, by linearity, for every polynomial function of n.

## Approximation by definite integrals

Many such approximations can be obtained by the following connection between sums and integrals, which holds for any increasing function f: ${\displaystyle \int _{s=a-1}^{b}f(s)\ ds\leq \sum _{i=a}^{b}f(i)\leq \int _{s=a}^{b+1}f(s)\ ds,}$ and for any decreasing function f: ${\displaystyle \int _{s=a}^{b+1}f(s)\ ds\leq \sum _{i=a}^{b}f(i)\leq \int _{s=a-1}^{b}f(s)\ ds.}$ For more general approximations, see the Euler–Maclaurin formula.
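The telescoping identity f(n) − f(m) = Σ(f(i+1) − f(i)) can be verified numerically for any concrete f; a quick check (my sketch, not from the article):

```python
# Fundamental theorem of finite differences:
# f(n) - f(m) == sum_{i=m}^{n-1} (f(i+1) - f(i)), a telescoping sum.
def telescope(f, m, n):
    return sum(f(i + 1) - f(i) for i in range(m, n))

f = lambda x: x ** 3
assert telescope(f, 2, 7) == f(7) - f(2)  # all interior terms cancel
```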
For summations in which the summand is given (or can be interpolated) by an integrable function of the index, the summation can be interpreted as a Riemann sum occurring in the definition of the corresponding definite integral. One can therefore expect that for instance ${\displaystyle {\frac {b-a}{n}}\sum _{i=0}^{n-1}f\left(a+i{\frac {b-a}{n}}\right)\approx \int _{a}^{b}f(x)\ dx,}$ since the right hand side is by definition the limit for ${\displaystyle n\to \infty }$ of the left hand side. However, for a given summation n is fixed, and little can be said about the error in the above approximation without additional assumptions about f: it is clear that for wildly oscillating functions the Riemann sum can be arbitrarily far from the Riemann integral.

## Identities

The formulae below involve finite sums; for infinite summations or finite summations of expressions involving trigonometric functions or other transcendental functions, see list of mathematical series.

### General identities

${\displaystyle \sum _{n=s}^{t}C\cdot f(n)=C\cdot \sum _{n=s}^{t}f(n)\quad }$ (distributivity) ${\displaystyle \sum _{n=s}^{t}f(n)\pm \sum _{n=s}^{t}g(n)=\sum _{n=s}^{t}\left(f(n)\pm g(n)\right)\quad }$ (commutativity and associativity) ${\displaystyle \sum _{n=s}^{t}f(n)=\sum _{n=s+p}^{t+p}f(n-p)\quad }$ (index shift) ${\displaystyle \sum _{n\in B}f(n)=\sum _{m\in A}f(\sigma (m)),\quad }$ for a bijection σ from a finite set A onto a set B (index change); this generalizes the preceding formula.
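The left Riemann sum above is easy to compute and compare against a known integral; a short sketch (my example, not from the article):

```python
# Left Riemann sum (b-a)/n * sum_{i=0}^{n-1} f(a + i(b-a)/n),
# an approximation to the definite integral of f on [a, b].
def left_riemann(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))

# Integral of x^2 over [0, 1] is 1/3; the approximation tightens as n grows.
approx = left_riemann(lambda x: x * x, 0.0, 1.0, 100000)
assert abs(approx - 1.0 / 3.0) < 1e-4
```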
${\displaystyle \sum _{n=s}^{t}f(n)=\sum _{n=s}^{j}f(n)+\sum _{n=j+1}^{t}f(n)\quad }$ (splitting a sum, using associativity) ${\displaystyle \sum _{n=a}^{b}f(n)=\sum _{n=0}^{b}f(n)-\sum _{n=0}^{a-1}f(n)\quad }$ (a variant of the preceding formula) ${\displaystyle \sum _{i=k_{0}}^{k_{1}}\sum _{j=l_{0}}^{l_{1}}a_{i,j}=\sum _{j=l_{0}}^{l_{1}}\sum _{i=k_{0}}^{k_{1}}a_{i,j}\quad }$ (commutativity and associativity, again) ${\displaystyle \sum _{k\leq j\leq i\leq n}a_{i,j}=\sum _{i=k}^{n}\sum _{j=k}^{i}a_{i,j}=\sum _{j=k}^{n}\sum _{i=j}^{n}a_{i,j}=\sum _{j=0}^{n-k}\sum _{i=k}^{n-j}a_{i+j,i}\quad }$ (another application of commutativity and associativity) ${\displaystyle \sum _{n=0}^{2t+1}f(n)=\sum _{n=0}^{t}f(2n)+\sum _{n=0}^{t}f(2n+1)\quad }$ (splitting a sum into its odd and even parts, and changing the indices) ${\displaystyle \left(\sum _{k=0}^{n}a_{k}\right)\left(\sum _{k=0}^{n}b_{k}\right)=\sum _{i=0}^{n}\sum _{j=0}^{n}a_{i}b_{j}\quad }$ (distributivity) ${\displaystyle \sum _{i=s}^{m}\sum _{j=t}^{n}{a_{i}}{c_{j}}=\left(\sum _{i=s}^{m}a_{i}\right)\left(\sum _{j=t}^{n}c_{j}\right)\quad }$ (distributivity allows factorization) ${\displaystyle \sum _{n=s}^{t}\log _{b}f(n)=\log _{b}\prod _{n=s}^{t}f(n)\quad }$ (the logarithm of a product is the sum of the logarithms of the factors) ${\displaystyle C^{\sum \limits _{n=s}^{t}f(n)}=\prod _{n=s}^{t}C^{f(n)}\quad }$ (the exponential of a sum is the product of the exponentials of the summands)

### Powers and logarithm of arithmetic progressions

${\displaystyle \sum _{i=1}^{n}c=nc\quad }$ for every c that does not depend on i ${\displaystyle \sum _{i=0}^{n}i=\sum _{i=1}^{n}i={\frac {n(n+1)}{2}}\qquad }$ (Sum of the simplest arithmetic progression, consisting of the n first natural numbers.)[2] ${\displaystyle \sum _{i=1}^{n}(2i-1)=n^{2}\qquad }$ (Sum of first odd natural numbers) ${\displaystyle \sum _{i=0}^{n}2i=n(n+1)\qquad }$ (Sum of first even natural numbers) ${\displaystyle \sum _{i=1}^{n}\log i=\log n!\qquad }$ (A sum of logarithms is the logarithm of the product) ${\displaystyle \sum _{i=0}^{n}i^{2}={\frac {n(n+1)(2n+1)}{6}}={\frac {n^{3}}{3}}+{\frac {n^{2}}{2}}+{\frac {n}{6}}\qquad }$ (Sum of the first squares, see square pyramidal number.)[2] ${\displaystyle \sum _{i=0}^{n}i^{3}=\left(\sum _{i=0}^{n}i\right)^{2}=\left({\frac {n(n+1)}{2}}\right)^{2}={\frac {n^{4}}{4}}+{\frac {n^{3}}{2}}+{\frac {n^{2}}{4}}\qquad }$ (Nicomachus's theorem)[2] More generally, ${\displaystyle \sum _{i=0}^{n}i^{p}={\frac {(n+1)^{p+1}}{p+1}}+\sum _{k=1}^{p}{\frac {B_{k}}{p-k+1}}{p \choose k}(n+1)^{p-k+1},}$ where ${\displaystyle B_{k}}$ denotes a Bernoulli number (that is Faulhaber's formula).

### Summation index in exponents

In the following summations, a is assumed to be different from 1. ${\displaystyle \sum _{i=0}^{n-1}a^{i}={\frac {1-a^{n}}{1-a}}}$ (sum of a geometric progression) ${\displaystyle \sum _{i=0}^{n-1}{\frac {1}{2^{i}}}=2-{\frac {1}{2^{n-1}}}}$ (special case for a = 1/2) ${\displaystyle \sum _{i=0}^{n-1}ia^{i}={\frac {a-na^{n}+(n-1)a^{n+1}}{(1-a)^{2}}}}$ (a times the derivative with respect to a of the geometric progression) {\displaystyle {\begin{aligned}\sum _{i=0}^{n-1}\left(b+id\right)a^{i}&=b\sum _{i=0}^{n-1}a^{i}+d\sum _{i=0}^{n-1}ia^{i}\\&=b\left({\frac {1-a^{n}}{1-a}}\right)+d\left({\frac {a-na^{n}+(n-1)a^{n+1}}{(1-a)^{2}}}\right)\\&={\frac {b(1-a^{n})-(n-1)da^{n}}{1-a}}+{\frac {da(1-a^{n-1})}{(1-a)^{2}}}\end{aligned}}} (sum of an arithmetico–geometric sequence)

### Binomial coefficients and factorials

There exist very many summation identities involving binomial coefficients (a whole chapter of Concrete Mathematics is devoted to just the basic techniques). Some of the most basic ones are the following.
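Several of the progression identities in this section can be spot-checked numerically for small n; a quick sketch (my checks, not from the article):

```python
# Spot-check a few identities from this section for small n.
n = 10
assert sum(range(n + 1)) == n * (n + 1) // 2                      # triangular
assert sum(i * i for i in range(n + 1)) == n * (n + 1) * (2 * n + 1) // 6
assert sum(i ** 3 for i in range(n + 1)) == sum(range(n + 1)) ** 2  # Nicomachus

# Geometric progression: sum_{i=0}^{m-1} a^i = (1 - a^m) / (1 - a).
a, m = 3, 7
assert sum(a ** i for i in range(m)) == (1 - a ** m) // (1 - a)
```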
#### Involving the binomial theorem

${\displaystyle \sum _{i=0}^{n}{n \choose i}a^{n-i}b^{i}=(a+b)^{n},}$ the binomial theorem ${\displaystyle \sum _{i=0}^{n}{n \choose i}=2^{n},}$ the special case where a = b = 1 ${\displaystyle \sum _{i=0}^{n}{n \choose i}p^{i}(1-p)^{n-i}=1}$, the special case where p = a = 1 − b, which, for ${\displaystyle 0\leq p\leq 1,}$ expresses the sum of the binomial distribution ${\displaystyle \sum _{i=0}^{n}i{n \choose i}=n(2^{n-1}),}$ the value at a = b = 1 of the derivative with respect to a of the binomial theorem ${\displaystyle \sum _{i=0}^{n}{\frac {n \choose i}{i+1}}={\frac {2^{n+1}-1}{n+1}},}$ the value at a = b = 1 of the antiderivative with respect to a of the binomial theorem

#### Involving permutation numbers

In the following summations, ${\displaystyle {}_{n}P_{k}}$ is the number of k-permutations of n. ${\displaystyle \sum _{i=0}^{n}{}_{i}P_{k}{n \choose i}={}_{n}P_{k}(2^{n-k})}$ ${\displaystyle \sum _{i=1}^{n}{}_{i+k}P_{k+1}=\sum _{i=1}^{n}\prod _{j=0}^{k}(i+j)={\frac {(n+k+1)!}{(n-1)!(k+2)}}}$ ${\displaystyle \sum _{i=0}^{n}i!\cdot {n \choose i}=\sum _{i=0}^{n}{}_{n}P_{i}=\lfloor n!\cdot e\rfloor ,\quad n\in \mathbb {Z} ^{+}}$, where ${\displaystyle \lfloor x\rfloor }$ denotes the floor function.
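The binomial and permutation identities above are easy to verify for a small n; a quick sketch (my checks, not from the article):

```python
from math import comb, factorial, e, floor

n = 8
# Row sum of Pascal's triangle: sum C(n, i) = 2^n.
assert sum(comb(n, i) for i in range(n + 1)) == 2 ** n
# Derivative identity: sum i*C(n, i) = n * 2^(n-1).
assert sum(i * comb(n, i) for i in range(n + 1)) == n * 2 ** (n - 1)
# Permutation identity: sum i! * C(n, i) = floor(n! * e).
assert sum(factorial(i) * comb(n, i) for i in range(n + 1)) == floor(factorial(n) * e)
```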
#### Others

${\displaystyle \sum _{k=0}^{m}{n+k \choose n}={n+m+1 \choose n+1}}$ ${\displaystyle \sum _{i=k}^{n}{i \choose k}={n+1 \choose k+1}}$ ${\displaystyle \sum _{i=0}^{n}i\cdot i!=(n+1)!-1}$ ${\displaystyle \sum _{i=0}^{n}{m+i-1 \choose i}={m+n \choose n}}$ ${\displaystyle \sum _{i=0}^{n}{n \choose i}^{2}={2n \choose n}}$

### Harmonic numbers

${\displaystyle \sum _{i=1}^{n}{\frac {1}{i}}=H_{n}}$ (that is the nth harmonic number) ${\displaystyle \sum _{i=1}^{n}{\frac {1}{i^{k}}}=H_{n}^{k}}$ (that is a generalized harmonic number)

## Growth rates

The following are useful approximations (using theta notation): ${\displaystyle \sum _{i=1}^{n}i^{c}\in \Theta (n^{c+1})}$ for real c greater than −1 ${\displaystyle \sum _{i=1}^{n}{\frac {1}{i}}\in \Theta (\log _{e}n)}$ (See Harmonic number) ${\displaystyle \sum _{i=1}^{n}c^{i}\in \Theta (c^{n})}$ for real c greater than 1 ${\displaystyle \sum _{i=1}^{n}\log(i)^{c}\in \Theta (n\cdot \log(n)^{c})}$ for non-negative real c ${\displaystyle \sum _{i=1}^{n}\log(i)^{c}\cdot i^{d}\in \Theta (n^{d+1}\cdot \log(n)^{c})}$ for non-negative real c, d ${\displaystyle \sum _{i=1}^{n}\log(i)^{c}\cdot i^{d}\cdot b^{i}\in \Theta (n^{d}\cdot \log(n)^{c}\cdot b^{n})}$ for real b > 1 and non-negative real c, d

[c] Although the name of the dummy variable does not matter (by definition), one usually uses letters from the middle of the alphabet (${\displaystyle i}$ through ${\displaystyle q}$) to denote integers, if there is a risk of confusion. For example, even if there should be no doubt about the interpretation, it could look slightly confusing to many mathematicians to see ${\displaystyle x}$ instead of ${\displaystyle k}$ in the above formulae involving ${\displaystyle k}$. See also typographical conventions in mathematical formulae.
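The harmonic-number growth rate can be seen numerically: H_n − ln n converges to the Euler–Mascheroni constant γ ≈ 0.5772. A quick sketch (my check, not from the article):

```python
import math

# H_n grows like ln n: the gap H_n - ln n tends to gamma ≈ 0.5772.
def harmonic(n):
    return sum(1.0 / i for i in range(1, n + 1))

gap = harmonic(10 ** 6) - math.log(10 ** 6)
assert abs(gap - 0.5772156649) < 1e-5
```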
http://www.almoststochastic.com/2015/
## 2015/09/07

### Matrix Factorisation with Linear Filters (and discussion)

I submitted a preprint on matrix factorisations and linear filters. I managed to derive some factorisation algorithms as linear filtering algorithms. In the paper, I left a discussion to here: estimating parameters via maximising the marginal likelihood. So here it is.

## 2015/06/19

### Online Matrix Factorization via Broyden Updates

I arXived a new preprint titled Online Matrix Factorization via Broyden Updates. Around this April, I was reading about quasi-Newton methods (from this very nice paper of Philipp Hennig), and when I saw the derivation of the Broyden update, I immediately realised that this idea could be used for computing factorisations. Furthermore, it leads to an online scheme, which is preferable! The idea is to solve the following optimization problem at each iteration $k$: \begin{align*} \min_{x_k,C_k} \big\| y_k - C_k x_k \big\|_2^2 + \lambda \big\|C_k - C_{k-1}\big\|_F^2.\end{align*} The motivation behind this cost is in the manuscript. Although the basic idea was explicit, I set a few goals. First of all, I wanted to develop a method where one can sample any column of the dataset and use it immediately. So I modified the notation a bit, as you can see from Eq. (2) in the manuscript. Secondly, I wanted it to be possible to use mini-batches as well: a group of columns at each time. Thirdly, it was obvious that a modern matrix factorization method must handle missing data, so I had to extend the algorithm accordingly. Consequently, I have sorted out all of this except a rule for missing data with mini-batches, which turned out to be harder, so I left that out for this work.

## 2015/03/08

### Tinkering around logistic map

I was tinkering around the logistic map $x_{n+1} = a x_n (1 - x_n)$ today and I wondered what happens if I plot the histogram of the generated sequence $(x_n)_{n\geq 0}$. Can it possess some statistical properties?
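Iterating the logistic map and counting where the orbit lands gives a quick numerical answer to the histogram question. A sketch for the chaotic case a = 4, where the invariant density is known to be the arcsine density 1/(π√(x(1−x))), which piles mass near 0 and 1 (parameter choices here are mine):

```python
# Iterate the logistic map x_{n+1} = a x_n (1 - x_n) and bin the orbit.
def logistic_orbit(a, x0, n):
    xs, x = [], x0
    for _ in range(n):
        x = a * x * (1 - x)
        xs.append(x)
    return xs

orbit = logistic_orbit(4.0, 0.2, 100000)
# At a = 4 the histogram concentrates near the endpoints, not the middle.
edge = sum(1 for x in orbit if x < 0.1 or x > 0.9)
mid = sum(1 for x in orbit if 0.45 < x < 0.55)
assert edge > mid
```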
## 2015/03/04

### Monte Carlo as Intuition

Suppose we have a continuous random variable $X \sim p(x)$ and we would like to estimate its tail probability, i.e. the probability of the event $\{X \geq t\}$ for some $t \in \mathbb{R}$. What is the most intuitive way to do this?
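The most intuitive estimator is the empirical frequency: draw N samples from p and count the fraction that land in the tail. A minimal sketch (the normal example and sample sizes are mine):

```python
import random

# Monte Carlo estimate of P(X >= t): the fraction of N i.i.d. samples
# that fall in the tail.
def tail_prob_mc(sampler, t, n_samples, seed=0):
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples) if sampler(rng) >= t)
    return hits / n_samples

# Standard normal tail at t = 1; the true value is about 0.1587.
est = tail_prob_mc(lambda rng: rng.gauss(0.0, 1.0), 1.0, 200000)
assert abs(est - 0.1587) < 0.01
```

The standard error shrinks like 1/√N, so rare tails (large t) need many samples — the usual motivation for importance sampling.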
https://aptitude.gateoverflow.in/5586/cat-2015-question-66
Answer the following questions based on the information given below: For admission to various affiliated colleges, a university conducts a written test with four different sections, each with a maximum of $50$ marks. The following table gives the aggregate as well as the sectional cut-off marks fixed by six different colleges affiliated to the university. A student will get admission only if he/she gets marks greater than or equal to the cut-off marks in each of the sections and his/her aggregate marks are at least equal to the aggregate cut-off marks as specified by the college.

College | Sectional Cut-off Marks (A. Quant, B. Verbal, C. Logic, D. DI) | Aggregate Cut-off Marks
College 1 | 42 42 42 | 176
College 2 | 45 45 | 175
College 3 | 46 | 171
College 4 | 43 45 | 178
College 5 | 45 43 | 180
College 6 | 41 44 | 176

What is the maximum score required by a Cetking student in Section $\text{D}$ so that the student clears all the colleges' cut-offs?
https://www.storyofmathematics.com/construct-a-line-segment/
# Construct a Line Segment – Explanation and Examples

To construct a line segment connecting two points, you need to line up a straightedge with two points and trace. Constructing a new line segment congruent to another involves creating an equilateral triangle and two circles. The construction of a line segment between any two points is Euclid’s first postulate. Creating a line congruent to a given line is his second proposition. To do the construction and prove that the two lines are indeed congruent, we must first familiarize ourselves with proposition 1, which involves creating an equilateral triangle. Before moving forward, make sure you review the foundations of geometric construction. This topic includes:

• How to Construct a Line Segment
• How to Construct a Congruent Line Segment

## How to Construct a Line Segment

Euclid’s first postulate states that a line can be drawn between any two points. That is, as long as we have two points, we can construct a line segment. To do this, we line up the edge of the straightedge with the two points and draw a line. It is also possible to copy a line segment that already exists. That is, we can construct a congruent line segment.

## How to Construct a Congruent Line Segment

It is also possible to make a congruent copy of a line that already exists. There are two main ways we can do this. First, we can copy a line that already exists so that the new line has a particular end point. We can also cut off a longer line segment to equal the length of a shorter line. In fact, these two constructions are the second and third propositions in the first book of Euclid’s Elements. To do them, however, we need to first look at proposition 1. This tells us how to create an equilateral triangle.

### How to Construct an Equilateral Triangle

We begin with a line, AB. Our goal is to create an equilateral triangle with AB as one of the sides. By definition, an equilateral figure has sides that are all the same length.
Consequently, all of the sides of the triangle we construct will be lines congruent to AB. We begin by drawing two circles with our compass. The first will have center B and radius BA. The second will have center A and radius AB. Now, label either of the two intersection points for the circles as C. Then, connect AC and BC. The triangle ABC is equilateral. How do we know this? BC is a radius of the first circle we drew, while AC is a radius of the second circle we drew. Both of these circles had a radius of length AB. Therefore, BC and AC both have length AB, and the triangle is equilateral.

### Construct a Congruent Segment at a Point

If we are given a line segment AB and a point D, it is possible to construct a new line segment with an endpoint at D and length AB. To do this, we first connect the points B and D. Then, construct an equilateral triangle on the segment BD, and call its third vertex C. Since we already know how to do this, we don’t have to show the construction lines. This also makes the proof easier to follow because the figure is less cluttered. Then, we can make another circle with center B and radius BA. After that, extend the line DB so that it intersects this new circle at E. Next, we construct a circle with center D and radius DE. Finally, we can extend DC so that it intersects this circle at a point F. CF will have the same length as AB. How do we know this? The radius of the circle with center D is DE. Notice that DE is made up of two smaller line segments, DB and BE. Since BE is a radius of the circle with center B and radius AB, BE has the same length as AB. The segment DB is a leg of the equilateral triangle, so its length is equal to BC. Therefore, the length of DE is DB+BE=BC+AB. Now, consider the line segment DF. This is also a radius of the circle with center D, so its length is equal to DE. DF is made up of two parts, DC and CF. DC is equal in length to BC because they are both parts of an equilateral triangle. Therefore, we have AB+BC=DE=DF=DC+CF=BC+CF.
That is, AB+BC=BC+CF. Therefore, AB=CF.

### Cut a Shorter Segment from a Longer Segment

Using the ability to construct a congruent line at a point, we will cut off a section of a longer line segment equal to the length of a shorter segment. We begin with a longer line segment CD and a shorter segment AB. Next, we copy the segment AB and construct a congruent segment CG. Note that we do not have control over the orientation of CG, so it will, in all probability, not line up exactly with CD. Finally, we draw a circle with center C and radius CG. Then, we can identify the point, H, where the circumference of the circle intersects CD. CH will be equal to AB in length. The proof of this is pretty simple. CH is a radius of the circle with center C and radius CG. Therefore CH=CG. But we already know that CG=AB. Therefore, by the transitive property, CH=AB.

## Examples

This section will present some examples of how to connect line segments and how to construct congruent line segments.

### Example 1

Connect points A and B with a line segment.

### Example 1 Solution

In this case, we need to line up our straightedge with the points A and B and trace, as shown.

### Example 2

Construct a line segment congruent to AB.

### Example 2 Solution

We are not given any other points in our figure, so we can construct the congruent segment anywhere we would like. The easiest thing to do then is to make AB the radius of a circle with center B. Then, we can draw a line segment from B to any point, C, on the circle’s circumference. Such a line segment, BC, will also be a radius of the circle, so it will be equal in length to AB.

### Example 3

Construct a line segment congruent to AB with endpoint D.

### Example 3 Solution

We need to remember the steps for constructing a congruent line segment at a point to do this. First, we connect BD. Then, construct an equilateral triangle BDG. Next, we create a circle with radius AB and center B.
If we extend the segment GB, it intersects with this circle, and we call the intersection E. Then, we can create a circle with center G and radius GE. We then extend GD until it intersects this circle, and call that point C. CD will be equal in length to AB. Note: It is important to draw full circles when proving a geometric construction, but arcs are generally fine for the construction itself. In the figure, only part of the circle with center G and radius GE is shown.

### Example 4

Construct a line segment double the length of AB.

### Example 4 Solution

We cannot simply copy the line segment and make its new endpoint A because we do not have control over the congruent segment’s orientation. Instead, we can construct a circle with center A and radius AB. We can then extend the segment in the direction of A until it intersects the circle’s circumference at point C. Since AC and AB are both radii of the circle, they have the same length. Therefore, BC is double the length of AB.

### Example 5

Construct a line segment congruent to AB with the end point at C. Then, put another line segment congruent to AB at the new end point, D.

### Example 5 Solution

Essentially, we have to do multiple iterations of constructing a congruent segment. First, construct a congruent segment at C, as we did in example 3. Then, designate D to be the other end point. Now, we do what we did before. Construct a segment BD. Then, create an equilateral triangle. Next, make a circle with center B and radius AB. We can then extend the segment GB so that it intersects with this new circle at E. Next, we make a circle with center G and radius GE. Finally, we extend GD so that it intersects with the new circle at F.

### Practice Questions

1. True or False: We need at least three points to construct a line segment.
2. True or False: We need exactly three line segments to construct a triangle.
3. True or False: To construct an equilateral triangle, we only need three line segments of different sizes.
4. How many line segments do we need to construct a square?
5. When two line segments measuring $5$ units and $2$ units, respectively, are joined together, what would be the new line segment’s length?

### Open Problems

1. Construct a line segment AB.
2. Create line segments to create a triangle ABC.
3. Construct a line segment congruent to each side of the triangle ABC.
4. Cut off a segment of AB equal to the length of CD.
5. Construct an isosceles triangle inside the triangle ABC with B as one of the vertices.
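The congruent-segment constructions above all reduce to the same fact: any point on a circle of radius |AB| about a point C gives a segment congruent to AB. A quick coordinate sanity check (the specific coordinates are mine, chosen for illustration):

```python
import math

# Numeric sanity check: a circle of radius |AB| about a point C yields
# a segment CH congruent to AB, for any point H on that circle.
def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

A, B = (0.0, 0.0), (3.0, 4.0)   # |AB| = 5
C = (10.0, 2.0)
r = dist(A, B)
# Take the point on the circle in the positive x direction from C.
H = (C[0] + r, C[1])
assert math.isclose(dist(C, H), dist(A, B))
```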
https://www.scitecheuropa.eu/electronegativity/92307/
The new scale of electronegativity: chemical reactions

Martin Rahm, Assistant Professor in Physical Chemistry at Chalmers University of Technology, has redefined the concept of electronegativity with a more comprehensive scale. The concept describes how strongly different atoms attract electrons. Using electronegativity scales, it is possible to predict the approximate charge distribution in different molecules and materials, without needing complex quantum mechanical calculations or spectroscopic studies.

The new scale

Rahm developed the new scale with colleagues, including a Nobel Prize winner, and the work has been published in the Journal of the American Chemical Society. Rahm explains: "The new definition is the average binding energy of the outermost and weakest bound electrons – commonly known as the valence electrons." He adds: "We derived these values by combining experimental photoionization data with quantum mechanical calculations. By and large, most elements relate to each other in the same way as in earlier scales. But the new definition has also led to some interesting changes where atoms have switched places in the order of electronegativity. Additionally, for some elements this is the first time their electronegativity has been calculated."
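To make the definition concrete, here is a minimal sketch of an "electron-weighted average binding energy" calculation. The orbital energies and occupations below are made-up illustrative numbers, not values from Rahm's paper, which derives them from photoionization data combined with quantum mechanical calculations.

```python
def average_binding_energy(orbitals):
    """Electron-weighted average binding energy (eV) of the valence electrons.

    `orbitals` is a list of (occupation, binding_energy_eV) pairs for the
    outermost (valence) electrons of an atom.
    """
    electrons = sum(n for n, _ in orbitals)
    total = sum(n * e for n, e in orbitals)
    return total / electrons

# Hypothetical atom with a filled s subshell (2 electrons, more tightly
# bound) and a partly filled p subshell (4 electrons, less tightly bound).
valence = [(2, 20.0), (4, 10.0)]   # (occupation, binding energy in eV)
print(average_binding_energy(valence))  # about 13.33 eV
```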
https://www.lesswrong.com/posts/bcFhPHcDRbWKcAEfk/modeling-naturalized-decision-problems-in-linear-logic
# Modeling naturalized decision problems in linear logic

Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

The following is a model of a simple decision problem (namely, the 5 and 10 problem) in linear logic. Basic familiarity with linear logic is assumed (enough to know what it means to say linear logic is a resource logic), although knowing all the operators isn't necessary. The 5 and 10 problem is, simply, a choice between taking a 5 dollar bill and a 10 dollar bill, with the 10 dollar bill valued more highly. While the problem itself is trivial, the main theoretical issue is in modeling counterfactuals. If you took the 10 dollar bill, what would have happened if you had taken the 5 dollar bill? If your source code is fixed, then there isn't a logically coherent possible world where you took the 5 dollar bill. I became interested in using linear logic to model decision problems due to noticing a structural similarity between linear logic and the real world, namely irreversibility. A vending machine may, in linear logic, be represented as a proposition "\$1 → CandyBar", encoding the fact that \$1 may be exchanged for a candy bar, being consumed in the process. Since the \$1 is consumed, the operation is irreversible. Additionally, there may be multiple options offered, e.g. "\$1 → Gumball", such that only one option may be taken. (Note that I am using "→" as notation for linear implication.) This is a good fit for real-world decision problems, where e.g. taking the \$10 bill precludes also taking the \$5 bill. Modeling decision problems using linear logic may, then, yield insights regarding the sense in which counterfactuals do or don't exist.

## First try: just the decision problem

As a first try, let's simply try to translate the logic of the 5 and 10 situation into linear logic. We assume logical atoms named "Start", "End", "\$5", and "\$10".
Respectively, these represent: the state of being at the start of the problem, the state of being at the end of the problem, having \$5, and having \$10. To represent that we have the option of taking either bill, we assume the following implications:

TakeFive : Start → End ⊗ \$5
TakeTen : Start → End ⊗ \$10

The "⊗" operator can be read as "and" in the sense of "I have a book and some cheese on the table"; it combines multiple resources into a single linear proposition. So, the above implications state that it is possible, starting from the start state, to end up in the end state, yielding \$5 if you took the five dollar bill, and \$10 if you took the 10 dollar bill. The agent's goal is to prove "Start → End ⊗ \$X", for X as high as possible. Clearly, "TakeTen" is a solution for X = 10. Assuming the logic is consistent, no better proof is possible. By the Curry-Howard isomorphism, the proof represents a computational strategy for acting in the world, namely, taking the \$10 bill.

## Second try: source code determining action

The above analysis is utterly trivial. What makes the 5 and 10 problem nontrivial is naturalizing it, to the point where the agent is a causal entity similar to the environment. One way to model the agent being a causal entity is to assume that it has source code. Let "M" be a Turing machine specification. Let "Ret(M, x)" represent the proposition that M returns x. Note that, if M never halts, then Ret(M, x) is not true for any x. How do we model the fact that the agent's action is produced by a computer program? What we would like to be able to assume is that the agent's action is equal to the output of some machine M. To do this, we need to augment the TakeFive/TakeTen actions to yield additional data:

TakeFive : Start → End ⊗ \$5 ⊗ ITookFive
TakeTen : Start → End ⊗ \$10 ⊗ ITookTen

The ITookFive / ITookTen propositions are a kind of token assuring that the agent ("I") took five or ten.
(Both of these are interpreted as classical propositions, so they may be duplicated or deleted freely). How do we relate these propositions to the source code, M? We will say that M must agree with whatever action the agent took:

MachineFive : ITookFive → Ret(M, "Five")
MachineTen : ITookTen → Ret(M, "Ten")

These operations yield, from the fact that "I" have taken five or ten, that the source code "M" eventually returns a string identical with this action. Thus, these encode the assumption that "my source code is M", in the sense that my action always agrees with M's. Operationally speaking, after the agent has taken 5 or 10, the agent can be assured of the mathematical fact that M returns the same action. (This is relevant in more complex decision problems, such as the twin prisoner's dilemma, where the agent's utility depends on mathematical facts about what values different machines return.) Importantly, the agent can't use MachineFive/MachineTen to know what action M takes before actually taking the action. Otherwise, the agent could take the opposite of the action they know they will take, causing a logical inconsistency. The above construction would not work if the machine were only run for a finite number of steps before being forced to return an answer; that would lead to the agent being able to know what action it will take, by running M for that finite number of steps. This model naturally handles cases where M never halts; if the agent never executes either TakeFive or TakeTen, then it can never execute either MachineFive or MachineTen, and so cannot be assured of Ret(M, x) for any x; indeed, if the agent never takes any action, then Ret(M, x) isn't true for any x, as that would imply that the agent eventually takes action x.

## Interpreting the counterfactuals

At this point, it's worth discussing the sense in which counterfactuals do or do not exist. Let's first discuss the simpler case, where there is no assumption about source code.
First, from the perspective of the logic itself, only one of TakeFive or TakeTen may be evaluated. There cannot be both a fact of the matter about what happens if the agent takes five, and a fact of the matter about what happens if the agent takes ten. This is because even defining both facts at once requires re-using the Start proposition. So, from the perspective of the logic, there aren't counterfactuals; only one operation is actually run, and what "would have happened" if the other operation were run is undefinable. On the other hand, there is an important sense in which the proof system contains counterfactuals. In constructing a linear logic proof, different choices may be made. Given "Start" as an assumption, I may prove "End ⊗ \$5" by executing TakeFive, or "End ⊗ \$10" by executing TakeTen, but not both. Proof systems are, in general, systems of rules for constructing proofs, which leave quite a lot of freedom in which proofs are constructed. By the Curry-Howard isomorphism, the freedom in how the proofs are constructed corresponds to freedom in how the agent behaves in the real world; using TakeFive in a proof has the effect, if executed, of actually (irreversibly) taking the \$5 bill. So, we can say, by reasoning about the proof system, that if TakeFive is run, then \$5 will be yielded, and if TakeTen is run, then \$10 will be yielded, and only one of these may be run. The logic itself says there can't be a fact of the matter about both what happens if 5 is taken and if 10 is taken. On the other hand, the proof system says that both proofs that get \$5 by taking 5, and proofs that get \$10 by taking 10, are possible. How to interpret this difference? One way is by asserting that the logic is about the territory, while the proof system is about the map; so, counterfactuals are represented in the map, even though the map itself asserts that there is only a singular territory. 
And, importantly, the map doesn't represent the entire territory; it's a proof system for reasoning about the territory, not the territory itself. The map may, thus, be "looser" than the territory, allowing more possibilities than could possibly be actually realized. What prevents the map from drawing out logical implications to the point where it becomes clear that only one action may possibly be taken? Given the second-try setup, the agent simply cannot use the fact of their source code being M, until actually taking the action; thus, no amount of drawing implications can conclude anything about the relationship between M and the agent's action. In addition to this, reasoning about M itself becomes harder the longer M runs, i.e. the longer the agent is waiting to make the decision; so, simply reasoning about the map, without taking actions, need not conclude anything about which action will be taken, leaving both possibilities live until one is selected.

## Conclusion

This approach aligns significantly with the less-formal descriptions given of subjective implication decision theory and counterfactual nonrealism. Counterfactuals aren't real in the sense that they are definable after having taken the relevant action; rather, an agent in a state of uncertainty about which action it will take may consider multiple possibilities as freely selectable, even if they are assured that their selection will be equal to the output of some computer program. The linear logic formalization increases my confidence in this approach, by providing a very precise notion of the sense in which the counterfactuals do and don't exist, which would be hard to make precise without similar formalism.
I am, at this point, less worried about the problems with counterfactual nonrealism (such as global accounting) than I was when I wrote the post, and more worried about the problems of policy-dependent source code (which requires the environment to be an ensemble of deterministic universes, rather than a single one), such that I have updated towards counterfactual nonrealism as a result of this analysis, although I am still not confident. Overall, I find linear logic quite promising for modeling embedded decision problems from the perspective of an embedded agent, as it builds critical facts such as non-reversibility into the logic itself.

## Appendix: spurious counterfactuals

The following describes the problem of spurious counterfactuals in relation to the model. Assume the second-try setup. Suppose the agent becomes assured that Ret(M, "Five"); that is, that M returns the action "Five". From this, it is provable that the agent may, given Start, attain the linear logic proposition 0, by taking action "Ten" and then running MachineTen to get Ret(M, "Ten"), which yields inconsistency with Ret(M, "Five"). From 0, anything follows, e.g. \$1000000, by the principle of explosion. If the agent is maximizing guaranteed utility, then they will take the \$10 bill, to be assured of the highest utility possible. So, it cannot be the case that the agent can be correctly assured that they will take action five, as that would lead to them taking a different action. If, on the other hand, the agent would have provably taken the \$5 bill upon receiving the assurance (say, because they notice that taking the \$10 bill could result in the worst possible utility), then there is a potential issue with this assurance being a self-fulfilling prophecy. But, if the agent is constructing proofs (plans for action) so as to maximize guaranteed utility, this will not occur. This solution is essentially the same as the one given in the paper on UDT with a known search order.
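As an operational sketch (my own illustration, not from the post), the second-try setup can be mimicked in ordinary Python by treating Start as a one-use token: taking an action consumes it irreversibly, and the MachineFive/MachineTen facts are only derivable after the corresponding ITook* token exists, so the agent cannot learn Ret(M, ·) before acting. Python has no linear types, so consumption is enforced at runtime rather than by the type system.

```python
class Consumed(Exception):
    """Raised when a linear resource is used a second time."""

class Start:
    """A linear resource: usable exactly once (runtime-checked)."""
    def __init__(self):
        self._used = False
    def consume(self):
        if self._used:
            raise Consumed("Start has already been consumed")
        self._used = True

def take_five(start):
    start.consume()                      # irreversibly uses Start
    return ("End", 5, "ITookFive")       # End ⊗ $5 ⊗ ITookFive

def take_ten(start):
    start.consume()
    return ("End", 10, "ITookTen")       # End ⊗ $10 ⊗ ITookTen

def machine_fact(token):
    """MachineFive/MachineTen: only derivable from an ITook* token."""
    return {"ITookFive": 'Ret(M, "Five")',
            "ITookTen": 'Ret(M, "Ten")'}[token]

s = Start()
end, dollars, token = take_ten(s)        # the agent's chosen proof step
print(dollars, machine_fact(token))      # 10 Ret(M, "Ten")

try:
    take_five(s)                         # the counterfactual is not evaluable
except Consumed:
    print("Start already consumed: no second evaluation")
```

Evaluating one branch makes the other undefinable, matching the sense in which the logic has no counterfactuals even though the "proof system" (here, the programmer choosing which call to write) does.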
http://mathhelpforum.com/algebra/146806-line-slope-print.html
# line with slope

• May 28th 2010, 07:15 PM alysha230893 line with slope Line with slope of 1/2 passing through (2,-4) • May 28th 2010, 07:16 PM Prove It Quote: Originally Posted by alysha230893 Line with slope of 1/2 passing through (2,-4) You have $(x_1, y_1) = (2, -4)$ and $m = \frac{1}{2}$. Plug these values into $y - y_1 = m(x - x_1)$ and then simplify.
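Completing the simplification (my addition): y − (−4) = (1/2)(x − 2) gives y = x/2 − 5, which a short script can sanity-check.

```python
def y(x):
    # Point-slope form y - y1 = m*(x - x1) with m = 1/2 and (x1, y1) = (2, -4),
    # simplified to slope-intercept form.
    return 0.5 * x - 5

assert y(2) == -4                        # passes through (2, -4)
assert (y(10) - y(2)) / (10 - 2) == 0.5  # slope is 1/2
print(y(0))  # -5.0, the y-intercept
```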
https://www.msri.org/workshops/896/schedules/25921
# Mathematical Sciences Research Institute

# Geometric analysis

## Connections for Women: Microlocal Analysis

August 29, 2019 - August 30, 2019
August 30, 2019 (02:00 PM PDT - 03:00 PM PDT)
Speaker(s): Julie Rowlett (Chalmers University of Technology/University of Göteborg)
Location: MSRI: Simons Auditorium

#### 7-Rowlett

Abstract: How do geometric features affect physics? In this talk I will start with a simple example in which we solve the initial value problem for the homogeneous heat equation on a half line, with the Dirichlet boundary condition at the origin. We will see how the Schwartz kernel of the fundamental solution, known as the heat kernel, "feels" the boundary at the origin, and how it also feels the boundary condition when we change between Dirichlet and Neumann boundary conditions. This is of course a very special example, because we can compute everything explicitly. However, even for a simple bounded domain in the plane, there is no analogous closed-form expression for the heat kernel. So, if we wish to understand how the geometric features of a domain in the plane affect the physical flow of heat, we need more tools: the tools of microlocal analysis! The main focus of this talk will be the microlocal construction of the heat kernel for curvilinear polygonal domains both in the plane and also in surfaces. This construction will introduce the concept of "blowing up" and the incredible usefulness of "blowing up." We will see how the microlocal construction can be applied to show that the heat kernel "feels" geometric features like curvature, boundary, and the presence (or lack thereof) of corners.
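For reference (my addition, not part of the abstract): the explicit half-line heat kernels alluded to follow from the method of images applied to the free Gaussian heat kernel on the real line, with the sign of the image term determined by the boundary condition.

```latex
% Free heat kernel on the real line:
%   K(t,x,y) = (4\pi t)^{-1/2} \, e^{-(x-y)^2/(4t)}
% On the half line (0,\infty), the method of images gives
\begin{align*}
K_D(t,x,y) &= \frac{1}{\sqrt{4\pi t}}
  \left( e^{-(x-y)^2/(4t)} - e^{-(x+y)^2/(4t)} \right)
  && \text{(Dirichlet)} \\
K_N(t,x,y) &= \frac{1}{\sqrt{4\pi t}}
  \left( e^{-(x-y)^2/(4t)} + e^{-(x+y)^2/(4t)} \right)
  && \text{(Neumann)}
\end{align*}
% The sign of the image term is exactly how the kernel "feels" the
% boundary condition at the origin.
```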
http://www.gradesaver.com/the-chocolate-war/q-and-a/what-two-truths-did-emile-discover-60187
# What two truths did Emile discover?

This question is from the book The Chocolate War by Robert Cormier. It's found in chapters 7-9, most likely chapter 8. Please answer, and thank you. ((((:
https://www.wisdomandwonder.com/tag/literate-programming
## (Emacs+Org-Mode) Quickly Figure Out How To Write A Function By Decompiling A Macro Using It

When I can't figure out how to write a function to do what I want, I record a macro of what I want to do and then "decompile" it to Elisp using elmacro. This is a super-power package if you want to figure out how something works.

When you run Bash under shell in Emacs on macOS, update_terminal_cwd is never defined, and after every command you get the error message `bash: update_terminal_cwd: command not found`, making the shell unusable. The simplest solution is to define update_terminal_cwd when it isn't defined. Here is the code:

```bash
# Define update_terminal_cwd as a no-op if it isn't already a function.
if [ -z "$(type -t update_terminal_cwd)" ] || [ "$(type -t update_terminal_cwd)" != "function" ]; then
  update_terminal_cwd() {
    true
  }
fi
```

## (Emacs+Org-Mode) Personal Grammar Reminder Affect vs Effect

I always forget a few grammar rules and can't seem to keep them memorized, so I wrote an Elisp snippet to help me remember. Langtool catches this, but it isn't worth waiting for. It seems silly to me to write a reminder, but I bet hundreds of us Emacs users face this. The definition is my own, and includes my opinion about how not to use both words!

```elisp
(defun affect-vs-effect-explanation ()
  "Definition and example of the most frequent use of Affect vs. Effect."
  (interactive)
  (let* ((title "Affect Versus Effect")
         (sep (make-string (length title) ?=))
         (buf (get-buffer-create (concat "*" title "*"))))
    (switch-to-buffer buf)
    (insert (concat title "\n"))
    (insert (concat sep "\n\n"))
    (insert "Affect is a verb. It means \"to have influence upon\". In the
present tense affect is followed by a noun in the form of \"X affects Y\".
For example \"Choosing between tabs or spaces for indentation affects our
happiness.\" In the past tense it is followed by a preposition before the
noun. For example \"Most people are deeply affected by their choice between
using tabs or spaces for indentation.\" Effect is a noun. It is an outcome
or result of a verb. For example \"Choosing spaces for indentation had a
positive effect on her happiness.\" There are other definitions for affect
and effect and you probably shouldn't use them.")
    (help-mode)))
```
http://www.tweetnotebook.com/Wyoming/null-standard-error-formula.html
# null standard error formula

This permits us to use the sample mean to test a hypothesis about the population mean. There is a whole family of distributions. Since your initial (null) hypothesis assumes p = 0.7, THAT is the value you use to test the hypothesis. This step is the same for both one-sample tests. We begin by calculating the standard error of the mean: SE = σ/√n = 12/√55 = 12/7.42 = 1.62. We can ask whether this mean score is significantly lower than the regional mean—that is, are the students in this school comparable to a simple random sample of 55 students from the region? This shows that if the sample size is large enough, very small differences from the null value can be highly statistically significant. You might ask, "Hey, the sample proportion of 0.755 is way lower than the claimed proportion of 0.80 …" The test looks at the proportion (p) of individuals in the population who have a certain characteristic — for example, the proportion of people who carry cellphones.
If we know the population standard deviation or variance, the standard error formula is SE = σ/√n. If we don't know the population standard deviation or variance, we use the sample's standard deviation instead: SE = s/√n. You can use a hypothesis test to test a statistical claim about a population proportion when the variable is categorical (for example, gender or support/oppose) and only one population is involved. The letter p is used two different ways in this example: p-value and p. A Z-test is any statistical test for which the distribution of the test statistic under the null hypothesis can be approximated by a normal distribution. The values for all population parameters in the test statistic come from the null hypothesis. If estimates of nuisance parameters are plugged in as discussed above, it is important to use estimates appropriate for the way the data were sampled. Is there a simple formula I don't know or something? 10 points to whomever walks me through this :). To calculate the test statistic, do the following: calculate the sample proportion by taking the number of people in the sample who have the characteristic of interest (for example, the number who carry cellphones) and dividing by the sample size n. In the special case of Z-tests for the one or two sample location problem, the usual sample standard deviation is only appropriate if the data were collected as an independent sample. More generally, if θ̂ is the maximum likelihood estimate of a parameter θ, and θ0 is the value of θ under the null hypothesis, then (θ̂ − θ0)/SE(θ̂) can be used as a Z-test statistic. The two-sided p-value is approximately 0.014 (twice the one-sided p-value). Next we calculate the z-score, which is the distance from the sample mean to the population mean in units of the standard error: z = (M − μ)/SE. Determine the critical value. State the hypotheses: this step is the same for both one-sample tests.
If we were to test the hypotheses H0: p = 0.7 versus Ha: p > 0.7 using sample results of p̂ = 0.80 from a sample of … Because p0 = 0.80, take p̂ − p0 = 0.755 − 0.80 = −0.045 as the numerator of the test statistic. The chance of being at or beyond (in this case less than) −1.61 is 0.0537. (Keep the negative with the number and look up −1.61 in the above Z-table.) Typical rules of thumb range from 20 to 50 samples. You conclude I am a liar. T-Test: We use the alpha-level and the degrees of freedom to find the critical T value in the T table. If we don't know the population standard deviation or variance, we compute a t-test statistic instead; it has the same form as the z-score, but with the standard error estimated from the sample: t = (M − μ)/(s/√n). This hypothesis states that there is an effect (two-tail), or that the effect is in an anticipated direction (one-tail). (Classical Approach): Set the decision criteria. Compute the test statistic, where p0 is the value in H0. Generally, one appeals to the central limit theorem to justify assuming that a test statistic varies normally. Population Standard Deviation Unknown: If the population standard deviation σ is unknown, then the standardized sample mean has a Student's t distribution, and you will be using the t-score formula for sample means. The claim is that p is equal to "four out of five," or p0 = 4/5 = 0.80. The alternative hypothesis is one of the following: p ≠ p0, p < p0, or p > p0. The formula for the test statistic for a single proportion (under certain conditions) is z = (p̂ − p0)/√(p0(1 − p0)/n), and z is a value on the Z-distribution.
You also need to factor in variation using the standard error and the normal distribution to be able to say something about the entire population of dentists. It is used when the population standard deviation is unknown and the standard error is estimated from the sample standard deviation. See statistical hypothesis testing for further discussion of this issue. The formula for the estimate of the standard error is SE = s/√n. To quantify our inferences about the population, we compare the obtained sample mean with the hypothesized population mean. The test statistic is the standard formula you've seen before. Report the values and interpret their implications for the null hypothesis. Notes on Topic 10: Two Sample T-Tests (Review of One Sample Tests). Topic 8 and Topic 9 presented the statistical procedures that … Decision: If the observed test-statistic value is in the critical region, reject the null hypothesis H0.
Many non-parametric test statistics, such as U statistics, are approximately normal for large enough sample sizes, and hence are often performed as Z-tests. If you knew the value of mu, then there would be nothing to test. Evaluate the Null Hypothesis .
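A numeric check of the proportion z-test described above, using only Python's standard library. The sample size n = 205 is an assumption (the original does not state n); it is chosen so the numbers reproduce the quoted z ≈ -1.61 and p ≈ 0.0537:

```python
import math

def one_sample_prop_z(p_hat, p0, n):
    """Z statistic for testing H0: p = p0 with sample proportion p_hat."""
    se = math.sqrt(p0 * (1 - p0) / n)  # standard error under H0
    return (p_hat - p0) / se

def normal_cdf(z):
    """Standard normal CDF via the error function (no SciPy required)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# n = 205 is hypothetical, chosen so the numbers match the worked example.
z = one_sample_prop_z(0.755, 0.80, 205)
p_value = normal_cdf(z)  # lower-tail p-value for Ha: p < p0
print(round(z, 2), round(p_value, 4))  # z is about -1.61, p about 0.054
```

The same helper works for the two-tailed case by doubling the tail probability.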
http://datagolf.ca/an-intergenerational-approach-to-ranking-pga-tour-players/
# An Intergenerational Approach to Ranking PGA Tour Players

In this article we provide a method for comparing the performances of golfers who did not compete in the same time period. We answer questions of the following nature: “How would the 2015 version of Rory McIlroy perform against the 1995 version of Greg Norman playing the same course with the same equipment?” The statistical approach taken here is motivated by the method we use in our predictive model to adjust scores for field strength and course difficulty within a year (used in Broadie and Rendleman (2012), as well). In that context, because all European Tour and PGA Tour events contain overlapping sets of golfers, we are able to compare relative performances of all golfers even though all golfers do not directly compete against one another. The logic is that although Phil Mickelson and K.T. Kim may never play in the same tournament in a given year (suppose), because they both play in tournaments that contain Rory McIlroy, we are able to compare Mickelson and Kim through their performances relative to McIlroy. The rest of this article is organized as follows: we first provide the intuition behind our approach, then provide results and a discussion of their interpretation, and then conclude with the statistical details.

## Intuition

We use the same logic described above to compare players across generations. A macro example is shown here: That is, we compare the performances of McIlroy and Faldo through their relative performances against Tiger. The method we use is based on this simple logic, but instead of just a single player linking players from different generations, we have hundreds. An obvious critique to this approach is that the Tiger Woods that Faldo played against was not necessarily the same Tiger Woods that McIlroy faced 10-15 years later. To get around this, we break each player’s career into 2-year blocks.
So Tiger Woods in 1997-1998 is a “different player” in our sample than Tiger Woods in 1999-2000; his ability level can be different in the two periods. Therefore, to compare, for example, the 1995 version of Greg Norman to the 2015 version of Rory McIlroy, we first compare Norman to the players (that is, the 2-year blocks of players’ careers) he played against in 1995-1996, and then those players are compared to players they competed against in 1997-1998, and so on, all the way up to 2015. Attentive readers may notice a problem here: the key to this approach is that we have overlap across time of players’ careers (i.e. part of Faldo’s career overlapped with Tiger, and part of Tiger’s overlapped with Rory), but now that we have defined each 2-year segment of a player’s career as distinct, how do we ensure we still have overlap? That is, if every player from 1999-2000 is a distinct “player” from that in 2001-2002, then we have no way to link the performances of these two groups of players. We circumvent this problem by randomly assigning half the players in our sample to have their 2-year blocks defined starting on the odd years (1999-2000, 2001-2002), and the other half of the sample starts on the even years (2000-2001, 2002-2003). Therefore, we are able to link the performance of say, the 2000-2001 version of Tiger to the 2002-2003 version of Tiger by comparing his performances to 2001-2002 Mickelson (because both 2000-2001 version of Tiger and 2002-2003 version of Tiger competed against the 2001-2002 version of Mickelson). For our results, we actually end up getting a value for a player’s performance in each year of their career (this is discussed in detail later). This annual measure should be thought of as a sort of smoothed 3-year average (i.e. Tiger’s 2000 value is affected by his 1999 and 2001 performances as well). The main assumption we are relying on is that within a 2-year period players’ ability is constant on average. 
There can be some players whose performance improves during a 2-year period as long as there are others whose performance declines. We require that on average these discrepancies even out. Connolly and Rendleman (2008) estimate a continuous time-varying golfer-specific ability function. However, for our purposes, we cannot implement this; there would be no way to separate genuine changes in player ability over time from technological advances or improvements in course conditions.

## Ranking PGA Tour Players from 1984-2016

We are using PGA Tour round-level data from 1983-2017 (the reason we have to drop the first and last years in the sample is explained in a later section). The output of this method is a value for each year of each player’s career in our sample. This value is a measure of that player’s performance in that year; we call it the All-Time Performance Index (ATPI). The ATPI is a relative measure, and as such it requires a normalization. The absolute level of the index is irrelevant; what matters are the relative magnitudes. We decide to give the average player on the PGA Tour in the year 2000 an ATPI of zero. The interpretation of the index is best understood with a specific example. The ATPI value of 3.8 assigned to Rory McIlroy in 2015 says the following: the 2015 version of McIlroy would be expected to beat the average PGA Tour player in the year 2000 by 3.8 strokes in a single round, on the same course using the same equipment. Therefore, the ATPI value for each player-year observation represents their scoring average on a “neutral” course relative to the average player in the year 2000. Okay, now to some results (which some people are not going to like, or believe, perhaps). As usual, the plots are interactive so click around. First, we plot the average ATPI across all players (weighted by the number of rounds played) for each year from 1984-2016. Additionally, we plot the ATPI for the best player in each year.
The aggregate annual numbers reflect the expected scoring difference in a single round between the average player in the relevant year and the average player in the year 2000. Next, we basically provide all the ATPI data in this interactive graph. From the dropdown bar choose any player, and his ATPI for all years in which he played a minimum of 25 rounds will be plotted. If you are doubting the validity of the results, please take a long look through the data. Looking at individual players' ATPI over the span of their careers has helped convince us of the validity of this measure. For example, if you think that our measure is systematically biased to favor more recent players, then we should (in general) observe players' ATPI steadily rising over their careers (even if their true ability stays relatively constant). Look up some players that have their entire career contained within 1984-2016 (Leonard, Vijay, Love III, for example). If the measure is not biased toward recent years, you should observe a career arc in a player's ATPI, where they peak in the middle of their career and have lower-quality performance at the beginning and end of their careers. This is generally what you find. Next, here are the best player-years of all time according to the ATPI: This highlights Tiger's greatness, as well as the strength of today's best players. Finally, we provide a list of some notable players' average ATPI over the entire sample period. The players listed are generally those who have all (or most) of their careers contained in our 1984-2016 sample. Keep in mind that, for most of these players, (relatively) poor performances in the last few years of their careers cause their career ATPI averages to be a bit lower than in their primes.

## So... What to Make of This?

If you are willing to accept the assumptions imposed by this approach, the interpretation of these numbers is as has been stated above.
That is, the differences between players' ATPI reflect differences in single-round scoring average in a neutral setting (i.e. technology, course conditions, etc. are held constant). If you are uncertain as to whether we are controlling for technology changes or course conditions, recall the simple example given earlier: Rory is compared to Tiger (they are using the same equipment and playing the same courses), and Tiger is then compared to Faldo (they are also using the same equipment and playing the same courses). And, through this, Rory and Faldo are compared. Of course, we don't think this analysis proves that mid-level players today should be regarded as "greater" golfers than Greg Norman or Tom Watson, for example. The greatness of any athlete will always be measured by their performances relative to their peers. In athletics, Roger Bannister was the first man to break the 4-minute mile barrier, and is held in very high regard because of it - despite the fact that the best high school boys can break 4 minutes in the mile today (although some of that would be attributed to improvements in shoes and track surfaces). It very well could be that if Greg Norman had grown up in the same generation as McIlroy, he would be better than McIlroy. This analysis cannot speak to the validity of that claim. The current generation has modern technology and improved coaching (whether the latter is helpful could be debated) at their disposal to aid the development of their games in their formative years. Further, serious fitness routines have become the norm among competitive golfers. Finally, and we think most importantly, the raw number of serious golfers has grown immensely in the last 30 years, resulting in an increased level of competition that pushes all golfers to get better. All of these factors could contribute to better performances by recent generations of golfers.
It seems natural to think that all sports are continually progressing, and current athletes always have a bit of an edge over those that preceded them.

## Statistical Details

Our results are based on fixed-effects regressions of the following form: $Score_{ijt} = \mu_{i,t;t\pm 1} + \delta_{jt} + \epsilon_{ijt}$ where i indexes player, j indexes a specific tournament-round, and t indexes time. The slightly complex subscript i,t;t±1 indexes a specific player in the years t and t+1, or t and t-1 (depending on whether the player has 2-year blocks on odd or even years). In practice, this is implemented as a regression of score on a set of dummy variables for each 2-year block of a player's career and a set of year-tournament-round dummies. As described earlier, we need overlap between the 2-year segments of different players' careers to connect performances across time. To obtain this overlap, we randomly assign half the players in our sample to have 2-year blocks starting on the even years (2010-2011, 2012-2013), while the other half gets the odd years (2011-2012, 2013-2014). Evidently, we do not want our estimation procedure to be sensitive to this assignment; therefore, we run the estimation many times. Because assignment is random, in some iterations a player will be assigned to odd-numbered 2-year segments, while in others they will be assigned to even-numbered 2-year segments. In each estimation iteration we collect the player fixed effects for every year of their career (they will be the same for each 2-year block), and then the ATPI is calculated as the average value for a given year over all estimation iterations. Let's make this concrete with an example; I'll describe how we come up with Rory McIlroy's ATPI for 2015. Suppose in the first estimation iteration Rory is assigned to be on the even years for his 2-year blocks. We run the regression, and obtain Rory's fixed effect for 2014-2015 (suppose it is 4.0).
We write down this value as a measure of Rory's performance in the years 2014 and 2015. Next, suppose on the second iteration Rory is assigned to be on the odd 2-year block. Now, we run the regression and obtain Rory's fixed effect for 2015-2016 (suppose it's 3.0). We write down this value as a measure of Rory's performance in the years 2015 and 2016. If we decided just to do 2 iterations, Rory's ATPI value for 2015 would be equal to (3.0 + 4.0) / 2 = 3.5. Therefore, it is best to think of Rory's 2015 ATPI as a type of smoothed 3-year average, as it is ultimately obtained by averaging estimates of his performance for the 2-year blocks 2014-2015 and 2015-2016 (clearly, the middle year influences this average the most). The fixed effects estimation is fairly computationally difficult, so we perform just 100 iterations. The estimates do not vary drastically from one iteration to the next, and consequently we think 100 iterations is more than enough to get rid of any statistical oddities that could appear from the random assignment. We drop the first and last years, 1983 and 2017, as the estimation procedure requires that there is a year on either side of the year of interest. To conclude, it is worth mentioning the work in Berry, Reese, and Larkey (1999), who used a conceptually similar method to compare the performances of players in major championships over 5 decades. Their results are also very interesting.
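One estimation iteration of this procedure can be sketched as follows. This is a minimal sketch, not the authors' actual code: the players, years, and synthetic scores are made up, and the real model uses 2-year player blocks and year-tournament-round effects over the full 1983-2017 sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 3 players, 4 years, 3 tournament-rounds per year.
players, years, rounds_per_year = ["A", "B", "C"], [2000, 2001, 2002, 2003], 3
rows = [(p, y, f"{y}-r{r}") for p in players for y in years for r in range(rounds_per_year)]
scores = rng.normal(71, 2, size=len(rows))  # synthetic round scores

# Randomly assign each player to odd- or even-starting 2-year blocks.
offset = {p: int(rng.integers(0, 2)) for p in players}

def block(p, y):
    start = y - ((y - offset[p]) % 2)  # first year of p's block containing y
    return (p, start)

# Regress score on player-block dummies plus year-tournament-round dummies.
blocks = sorted({block(p, y) for p, y, _ in rows})
round_ids = sorted({r for _, _, r in rows})
X = np.zeros((len(rows), len(blocks) + len(round_ids)))
for i, (p, y, r) in enumerate(rows):
    X[i, blocks.index(block(p, y))] = 1.0
    X[i, len(blocks) + round_ids.index(r)] = 1.0

# lstsq returns the minimum-norm solution, which handles the dummy-variable trap.
beta, *_ = np.linalg.lstsq(X, scores, rcond=None)
fixed_effects = {b: beta[j] for j, b in enumerate(blocks)}
```

Repeating this with fresh random assignments and averaging each player's block effects by year would give the smoothed annual index described above.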
http://spartascience.com/research-papers/influence-of-familiarisation-and-competitive-level-on-the-reliability-of-countermovement-vertical-jump-kinetic-and-kinematic-variables/
Research Paper

Influence of familiarisation and competitive level on the reliability of countermovement vertical jump kinetic and kinematic variables

Nibali, ML, Tombleson, T, Brady, PH, and Wagner, P. Influence of familiarization and competitive level on the reliability of countermovement vertical jump kinetic and kinematic variables. J Strength Cond Res 29(10)/2827–2835, 2015.

# KEY TAKEAWAYS

• The countermovement jump assessment (CMJ) as described is homoscedastic, meaning the test is reliable for all measured variables regardless of skill level.
• The CMJ test can be performed without the need for familiarization trials.
• The CMJ test is a reliable measure of assessing vertical JUMP HEIGHT.
• A change in LOAD, EXPLODE, or DRIVE by 1 t-score is a significant change.
• The EXPLODE and DRIVE variables are highly reliable and can be used to determine real changes in performance.
• LOAD is highly variable; however, changes in LOAD may be considered sensitive to training responses and fatigue.

POPULATION: One hundred eighteen male and 60 female athletes participated in this study. The 3 strata comprised 113 high school athletes, 30 college athletes, and 35 professional athletes, competing in the sports of baseball, basketball, American football, rugby union, soccer, tennis, volleyball, and water polo. Subjects were experienced athletes and were engaged in a structured resistance training program with a minimum of 12 months of experience.

# SUMMARY

The questions covered:

• If an athlete performs a vertical jump test and improves their results each time, are the improvements a result of improved athletic ability? Or, is the athlete learning how to perform the test better? (i.e. ‘cheat’ the test to get a better result).
• Is there greater reliability in vertical jump results for professional athletes compared to high school or college athletes?
The study investigated the reliability of three measurements of vertical ground reaction forces during a countermovement jump test. The three force measurements were average eccentric rate of force development (LOAD), average concentric force (EXPLODE), and concentric impulse (DRIVE). 178 athletes performed repeated jump trials with between 24 h and 14 d between trials. The changes in an athlete’s mean scores between trials were compared to identify any learning effect. The non-uniformity of error was compared between the professional, college, and high school athletes to see if the reliability was consistent across different levels of competition. The study found that a reliable measurement can be performed the first time an athlete does a vertical jump test. EXPLODE and DRIVE were highly reliable. LOAD was highly variable between jump trials. However, when the change in eccentric rate of force development (LOAD) is greater than the typical error of measurement, it is considered sensitive to training responses and fatigue. The three variables LOAD, EXPLODE, and DRIVE are converted to standardized t-scores, meaning a t-score change of 1 or more can be considered significant. Therefore, the change can be considered a real change, as it is larger than the typical error.

# ABSTRACT

Understanding typical variation of vertical jump (VJ) performance and confounding sources of its typical variability (i.e., familiarization and competitive level) is pertinent in the routine monitoring of athletes.
We evaluated the presence of systematic error (learning effect) and nonuniformity of error (heteroscedasticity) across VJ performances of athletes that differ in competitive level and quantified the reliability of VJ kinetic and kinematic variables relative to the smallest worthwhile change (SWC). One hundred thirteen high school athletes, 30 college athletes, and 35 professional athletes completed repeat VJ trials. Average eccentric rate of force development (RFD), average concentric (CON) force, CON impulse, and jump height measurements were obtained from vertical ground reaction force (VGRF) data. Systematic error was assessed by evaluating changes in the mean of repeat trials. Heteroscedasticity was evaluated by plotting the difference score (trial 2 - trial 1) against the mean of the trials. Variability of jump variables was calculated as the typical error (TE) and coefficient of variation (%CV). No substantial systematic error (effect size range: -0.07 to 0.11) or heteroscedasticity was present for any of the VJ variables. Vertical jump can be performed without the need for familiarization trials, and the variability can be conveyed as either the raw TE or the %CV. Assessment of VGRF variables is an effective and reliable means of assessing VJ performance. Average CON force and CON impulse are highly reliable (%CV: 2.7%, ×/÷ 1.10), although jump height was the only variable to display a %CV ≤ SWC. Eccentric RFD is highly variable yet should not be discounted from VJ assessments on this factor alone, because it may be sensitive to changes in response to training or fatigue that exceed the TE.
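The typical error (TE) and %CV statistics used above can be computed from a pair of repeat trials. This is a minimal sketch with hypothetical jump heights, using the conventional definition (TE = SD of the difference scores divided by sqrt(2)); the paper's exact computation may differ in detail:

```python
import math

def typical_error(trial1, trial2):
    """Typical error of measurement: SD of the difference scores / sqrt(2)."""
    diffs = [b - a for a, b in zip(trial1, trial2)]
    mean_d = sum(diffs) / len(diffs)
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (len(diffs) - 1)
    return math.sqrt(var_d) / math.sqrt(2)

def percent_cv(trial1, trial2):
    """Typical error expressed as a percentage of the grand mean."""
    grand_mean = (sum(trial1) + sum(trial2)) / (len(trial1) + len(trial2))
    return 100.0 * typical_error(trial1, trial2) / grand_mean

# Hypothetical jump heights (cm) for four athletes on two test days.
day1 = [40.0, 42.0, 44.0, 46.0]
day2 = [41.0, 43.0, 43.0, 47.0]
te = typical_error(day1, day2)
cv = percent_cv(day1, day2)
```

A change larger than the TE (or, in standardized units, a t-score change of 1 or more) would then be flagged as a real change rather than measurement noise.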
https://scipost.org/submissions/1910.04170v2/
A Global View of the Off-Shell Higgs Portal Submission summary As Contributors: Maximilian Ruhdorfer · Ennio Salvioni · Andreas Weiler Arxiv Link: https://arxiv.org/abs/1910.04170v2 (pdf) Date accepted: 2020-01-17 Date submitted: 2020-01-03 01:00 Submitted by: Salvioni, Ennio Submitted to: SciPost Physics Academic field: Physics Specialties: High-Energy Physics - Experiment High-Energy Physics - Phenomenology Abstract We study for the first time the collider reach on the derivative Higgs portal, the leading effective interaction that couples a pseudo Nambu-Goldstone boson (pNGB) scalar Dark Matter to the Standard Model. We focus on Dark Matter pair production through an off-shell Higgs boson, which is analyzed in the vector boson fusion channel. A variety of future high-energy lepton colliders as well as hadron colliders are considered, including CLIC, a muon collider, the High-Luminosity and High-Energy versions of the LHC, and FCC-hh. Implications on the parameter space of pNGB Dark Matter are discussed. In addition, we give improved and extended results for the collider reach on the marginal Higgs portal, under the assumption that the new scalars escape the detector, as motivated by a variety of beyond the Standard Model scenarios. Ontology / Topics See full Ontology or Topics database. Published as SciPost Phys. 8, 027 (2020) In addition to the changes listed below, we wish to elaborate further about four points raised by the referees: a) Concerning the desire of Referee 2 for a comparison with the on-shell signal: We emphasize that one of the main novelties of this work is to analyze the different kinematical features of the derivative and marginal (or renormalizable) portals. This difference is characteristic of the off-shell regime and disappears for on-shell decays, in which case branching ratio limits can immediately be translated into constraints on either type of portal, as already done in Table 3 for illustration. 
In light of this fact, and given the very extensive (and in many cases, technically very advanced) literature that already exists about on-shell decays, we do not believe that including BR(h -> inv) limits would constitute a helpful addition to this manuscript. Nonetheless, for useful comparison we have included distributions for the on-shell signal at the LHC in Fig. 6; see change #8. b) Regarding systematic uncertainties: While a complete analysis is beyond the scope of our work, we have addressed the comments by Referees 2 and 3 through the addition of dedicated results for the HL-LHC in Fig. 2, which serve to quantify the expected effects on hadron collider constraints, as well as through a revision of the last paragraph of Section 3. See change #4. c) About the comment on indirect constraints made by Referee 3: We fully agree that one-loop probes of the derivative Higgs portal (such as, e.g., gg -> hh, similarly to what was done in Ref. [78] for the marginal portal) are an interesting avenue to pursue. However, their analysis requires dedicated work that is in part ongoing, and for which no results are yet available. Other indirect observables, such as the corrections to the couplings of the 125 GeV Higgs boson, can also be relevant in probing the models we consider. These effects are, however, strongly model-dependent, and for this reason we have refrained from discussing them in the text. d) About the observation by Referee 1 that the plots in Fig. 9 would be too small: Given that SciPost Physics is a fully electronic journal, we believe the size of this figure is acceptable. We thank all three referees for their reading of our manuscript and for providing insightful comments. We believe the minor revisions listed below have further improved the quality of the paper, and we are hopeful that the current v2 can be accepted for publication in SciPost Physics.
List of changes We list here all the changes we have made in v2, in order of appearance in the text: 1) On page 1 we have added citations to Ref. [17] and Ref. [28], which appeared on the arXiv after the v1 of our work, with the aim to provide a comprehensive overview of the literature on pNGB DM. 2) We have corrected all the typos and made all the small language improvements suggested by Referee 1. 3) When introducing the marginal Higgs portal just above Eq. (2) we have added the alternative name "renormalizable," as well as citations to the original papers Refs. [30-32]. In addition, a few lines later we have added a citation to Ref. [33] for the current constraints on this portal. 4) Both panels of Fig. 2 have been updated and now include also HL-LHC constraints derived assuming a 1% systematic uncertainty on the total background (weaker limit boundary of the gray bands). A mention of this has been added at the end of the caption of the same figure. The discussion of systematics in the last paragraph of Sec. 3 (page 12) has been revised. In addition, the middle panel of Fig. 10 and its caption have been updated accordingly. 5) In the caption of Fig. 2 we have added further information about the ILC and FCC-ee results. 6) On page 3 we have added a sentence (starting with "Note also that if ...") about the scaling of our bounds with the integrated luminosity. 7) In Eq. (5) we have added the last term in the second line, as well as the related definitions of g_V and y in the text after the equation. These operators play a minor role in our discussion, but we have nevertheless included them for completeness. Correspondingly, footnote 6 has been extended and footnote 7 has been added. The effect of these extra operators on DM direct detection has also been included in Eqs. (19) and (21) of Appendix B. Finally, we have commented on the |\chi|^2 |H|^4 operator in the text after Eq. (5). 
8) Following the wish of Referee 2 for a comparison with the on-shell topology, in Fig. 6 we have added the normalized distributions for the on-shell invisible Higgs signal, which are independent of the type of portal. Comments on this are given in the caption of the same figure and in the text after Eq. (18). We have chosen to include the comparison with the on-shell case for the LHC, which is of the most immediate experimental relevance. In addition, in Fig. 6 we have changed the color for the V+jets (EW) distributions from orange to black, following a suggestion by Referee 1. 9) Just after Eq. (18) we have provided further details on the lepton veto, the central jet veto and the \Delta\phi(\vec{\slashed{p}}_T, j) requirement. 10) As suggested by Referee 2, we have inverted the order of the HE-LHC and FCC analyses in the text (page 11). 11) On page 13 we have added a sentence mentioning the mono-Higgs signal, together with a citation to Ref. [75]. We thank Referee 3 for their interesting observation about this point. Submission & Refereeing History Resubmission 1910.04170v2 on 3 January 2020 Submission 1910.04170v1 on 21 October 2019 Reports on this Submission Report I recommend the manuscript for publication in its present form. • validity: high • significance: high • originality: high • clarity: high • formatting: excellent • grammar: excellent Report I recommend that the manuscript is published in its present form. • validity: high • significance: high • originality: high • clarity: high • formatting: excellent • grammar: excellent
http://www.juyang.co/goal-setting-for-function-fitting-regression/
# Goal setting for function fitting (regression)

Whenever we see the word “optimization”, the first question to ask is “what is to be optimized?” Defining an optimization goal that is meaningful and approachable is the starting point in function fitting. In this post, I will discuss goal setting for function fitting in regression. In the case of a supervised learning problem, the goal essentially contains 2 parts:

1. find a fitting function to minimize the objective function on training data
2. select a fitting function to minimize the prediction error on testing data (the ultimate goal)

Notice that I use 2 different verbs here: find and select, which correspond to model training and model selection, respectively. Let’s talk about model training first:

Goal: find a fitting function to minimize the overall objective function on the training data $$Obj(f, X_{train}, y_{train}) = L(f(X_{train}), y_{train}) + J(f) \tag{1}$$

First, let’s look at the first element, which captures the total prediction error on the training data $$L(f(X_{train}), y_{train})$$. For simplicity, I will omit “train” in the subscript. $$L(f(X), y) = \sum_{i=1}^{N} l(f(x_i), y_i) \tag{2}$$ Input $$X = (X_1, X_2, \dots, X_p)^T, X \in R^{N \times p}, x_i \in R^p$$. In regression, output $$y_i \in R$$; in classification, output $$y_i \in \{1,2,\dots,k\}$$ where $$k$$ represents the number of discrete class labels. In this post, I only discuss the regression problem, and in the next post, I will focus on the classification problem. $$L$$ is an aggregation of $$l$$ over all data points, and it is sometimes averaged by the number of points $$N$$ to represent the mean prediction error. Our goal is to minimize $$L$$ in Equation 2.

## A simple linear regression

Let’s start with a simple case $$p = 1, \beta_0 = 0, N=2$$. Notice that $$N \geq p+1$$ is needed in order to have a unique solution in the linear function. The 2 data points are denoted by $$(x_1,y_1), (x_2, y_2)$$ and the 2 parameters by $$\beta_0, \beta_1$$.
All values are real numbers ($$R$$).

$$\hat y_1 = \beta_1 x_1 \tag{3.1}$$

$$\hat y_2 = \beta_1 x_2 \tag{3.2}$$

The total loss function to be optimized is $$L(f(X), y) = \sum_{i=1}^{N} l(\hat y_i, y_i)$$. A commonly used loss function is the squared error:

$$l(\hat y_i, y_i) = (y_i – \hat y_i)^2 \tag{4}$$

Thus $$L$$ can be written as:

$$L = (y_1 – \beta_1 x_1)^2 + (y_2 – \beta_1 x_2)^2 = (x_1^2 + x_2^2) \beta_1^2 – 2(x_1 y_1 + x_2 y_2)\beta_1 + (y_1^2 + y_2^2) \tag{5}$$

This is a univariate quadratic function of $$\beta_1$$. Equation 5 has the form $$y = ax^2 + bx + c$$, where $$x$$ is $$\beta_1$$ and $$a$$ is $$(x_1^2 + x_2^2)$$, which is non-negative. So the parabola opens upward and has a global minimum. We can compute the derivative of Equation 5 with respect to $$\beta_1$$ and set it to $$0$$ to get the optimal value $$\hat \beta_1$$ for a minimum $$L$$:

$$\frac {\partial L}{\partial \beta_1} = 2(x_1^2 + x_2^2) \beta_1 – 2(x_1 y_1 + x_2 y_2) = 0 \tag{6}$$

Solving Equation 6, we get

$$\hat \beta_1 = \frac {x_1 y_1 + x_2 y_2}{x_1^2 + x_2^2} \tag{7.1}$$

which, for centered data, is the same as

$$\hat \beta_1 = \frac {Cov(X, y)}{Var(X)} \tag{7.2}$$

## Linear regression model

Now let’s extend to the general form of the linear regression model with input $$X = (X_1, X_2, …, X_p)^T, X \in R^{N \times p}$$:

$$\begin{bmatrix}\hat y_1\\ …\\ \hat y_i\\ … \\ \hat y_N\end{bmatrix} = \beta_0\begin{bmatrix}1\\ …\\ 1 \\ … \\1\end{bmatrix} + \beta_1 \begin{bmatrix}(x_1)_1\\ …\\ (x_i)_1\\ … \\ (x_N)_1\end{bmatrix} + … + \beta_j \begin{bmatrix}(x_1)_j\\ …\\ (x_i)_j \\ … \\(x_N)_j\end{bmatrix} + … + \beta_p \begin{bmatrix}(x_1)_p\\ …\\ (x_i)_p \\ … \\(x_N)_p\end{bmatrix} \tag{8}$$

In matrix format, Equation 8 can be written as:

$$\hat y = \textbf{X}\beta \tag{9}$$

$$\beta = [\beta_0, \beta_1, …, \beta_p]^T \tag{10}$$

where $$\textbf{X} \in R^{N \times (p+1)}$$, with a column of $$\textbf{1}$$s prepended as the first column to match the intercept $$\beta_0$$.
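As a sanity check on Equation 7.1, here is a small numerical sketch (the data points are made up): it compares the closed-form estimate against a brute-force grid search over the loss in Equation 5.

```python
import numpy as np

# Two made-up data points for the no-intercept model y-hat = beta_1 * x.
x = np.array([1.0, 2.0])
y = np.array([2.0, 3.9])

# Closed-form estimate from Equation 7.1: (x1*y1 + x2*y2) / (x1^2 + x2^2).
beta_hat = (x @ y) / (x @ x)           # 9.8 / 5 = 1.96

# Brute-force check: the quadratic loss of Equation 5 is minimized at beta_hat.
grid = np.linspace(0.0, 4.0, 100001)
losses = ((y[:, None] - grid[None, :] * x[:, None]) ** 2).sum(axis=0)
beta_grid = grid[np.argmin(losses)]

print(beta_hat, beta_grid)  # both ≈ 1.96
```

The grid search agrees with the analytical minimizer, which is the point of Equation 6: the loss is a parabola in $$\beta_1$$, so a single derivative condition finds its bottom.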
$$y \in R^{N \times 1}$$, and $$\beta \in R^{(p+1) \times 1}$$. Using the squared-error loss function, the matrix form of the total loss $$L$$ is:

$$L(\hat y, y) = (y – \textbf{X}\beta)^T(y – \textbf{X}\beta) \tag{11}$$

Here $$L$$ is also called the Residual Sum of Squares (RSS), which is closely related to the mean squared error (MSE): $$MSE = \frac{RSS}{N}$$. RSS is a function of the parameters $$\beta$$, so we can write the loss as $$RSS(\beta)$$:

$$RSS(\beta) = (y – \textbf{X}\beta)^T(y – \textbf{X}\beta) \tag{12.1}$$

$$RSS(\beta) = y^Ty – \beta^T \textbf{X}^Ty – y^T \textbf{X}\beta + \beta^T \textbf{X}^T \textbf{X}\beta = y^Ty – 2\beta^T \textbf{X}^Ty + \beta^T \textbf{X}^T \textbf{X}\beta \tag{12.2}$$

Notice that $$\beta^T \textbf{X}^Ty$$ and $$y^T \textbf{X}\beta$$ are both scalars, and the transpose of a scalar is itself: $$\beta^T \textbf{X}^Ty = (\beta^T \textbf{X}^Ty)^T = y^T \textbf{X}\beta$$. Similar to Equation 5, Equation 12.2 is a quadratic function of $$\beta$$ through the term $$\beta^T \textbf{X}^T \textbf{X}\beta$$. To find the $$\hat \beta$$ that minimizes $$RSS$$, we take the derivative of Equation 12.2 with respect to $$\beta$$ and set it to zero:

$$\frac {\partial RSS(\beta)} {\partial \beta} = -2 \textbf{X}^Ty + 2 \textbf{X}^T \textbf{X}\hat \beta = 0 \tag {13}$$

Solving Equation 13:

$$\textbf{X}^T \textbf{X}\hat \beta = \textbf{X}^Ty \tag {14.1}$$

$$\hat \beta = (\textbf{X}^T \textbf{X})^{-1} \textbf{X}^Ty \tag {14.2}$$

Computing the best $$\hat \beta$$ analytically is possible because the squared-error loss function is differentiable (and $$\textbf{X}^T \textbf{X}$$ is invertible when $$\textbf{X}$$ has full column rank). The derivation of $$\hat \beta$$ only requires the model to be linear in its parameters; it makes no distributional assumptions about the data. As discussed in the previous post, more assumptions are required when we need to make inferences about the parameters.
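Equation 14.2 can be checked numerically. The sketch below uses synthetic data (all names and numbers are illustrative) and solves the normal equations of Equation 14.1 directly, which is numerically safer than forming the inverse:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic design: N = 50 points, p = 2 features, plus a leading column of
# ones matching the intercept beta_0 (the "column of 1s" in Equation 9).
N, p = 50, 2
X = np.column_stack([np.ones(N), rng.normal(size=(N, p))])
beta_true = np.array([1.0, 2.0, -3.0])
y = X @ beta_true + 0.1 * rng.normal(size=N)

# Equation 14.1 / 14.2: solve X^T X beta = X^T y (no explicit inverse).
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Cross-check against the library least-squares routine.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.allclose(beta_hat, beta_lstsq))  # True
```

Both routes give the same coefficients, because `lstsq` is solving exactly the RSS minimization of Equation 12.1.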
## Squared error and mean

An interesting property of the least-squares fit is that, whenever the model contains an intercept, the fitted function passes through the point of means $$(\bar x, \bar y)$$, where $$\bar x$$ is the vector of column means of $$\textbf{X}$$; that is, $$\bar x^T \hat \beta = \bar y$$. This follows from the first row of the normal equations (Equation 14.1): the column of $$\textbf{1}$$s gives

$$\textbf{1}^T(y – \textbf{X} \hat \beta) = \sum_{i=1}^{N} (y_i – x_i \hat \beta) = 0 \tag{15}$$

so the residuals sum to zero, and dividing by $$N$$ gives $$\bar y = \bar x^T \hat \beta$$. In fact, when the loss function is the squared error, the best prediction of $$y$$ at any point $$\textbf{X} = x$$ is the conditional mean. But the mean has an infamous drawback: it is very sensitive to outliers. How can we mitigate the effect of outliers?

## Absolute error and median

Like the squared error, the absolute-error loss function also considers the difference between each $$\hat y_i$$ and $$y_i$$:

$$l(\hat y_i, y_i) = |y_i – x_i\beta| \tag {17}$$

$$L(\hat y, y) = \sum_{i=1}^{N}|y_i – x_i\beta| \tag {18}$$

Similar to Equation 12, we can differentiate $$L$$ with respect to $$\beta$$. Since $$|f(x)| = \sqrt{f(x)^2}$$, the derivative of $$|f(x)|$$ is $$sign(f(x))\, f'(x)$$ wherever $$f(x) \neq 0$$. For the intercept-only model ($$x_i = 1$$), this gives

$$\frac {\partial L(\hat y, y)}{\partial \beta} = -\sum_{i=1}^{N} sign(y_i – \beta) \tag {19.1}$$

$$sign(y_i – \beta) =\begin{cases} 1, & y_i > \beta \\ -1, & y_i < \beta \\ 0, & y_i = \beta \\ \end{cases} \tag {19.2}$$

The derivative is 0 when there are the same number of positive and negative terms $$y_i – \beta$$. Intuitively, this means $$\beta$$ should be the median of the $$y_i$$. The median, unlike the mean, is less sensitive to outliers, and thus more robust.

## Loss function for linear regression

We have discussed the squared error and the absolute error as loss functions for regression. Both are differentiable (the absolute error everywhere except at 0), which means we can characterize the best parameters analytically.
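A quick way to see the mean/median contrast is to grid-search the best constant prediction under each loss on made-up data containing one outlier:

```python
import numpy as np

# Intercept-only model: predict one constant c for every observation.
y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])   # 100.0 is a deliberate outlier

cs = np.linspace(0.0, 100.0, 200001)         # candidate constants
sq_loss = ((y[:, None] - cs[None, :]) ** 2).sum(axis=0)
abs_loss = np.abs(y[:, None] - cs[None, :]).sum(axis=0)

best_sq = cs[np.argmin(sq_loss)]    # 22.0 == np.mean(y): dragged up by the outlier
best_abs = cs[np.argmin(abs_loss)]  # 3.0  == np.median(y): robust to the outlier

print(best_sq, best_abs)
```

The squared-error minimizer lands on the mean (22.0, far from most of the data), while the absolute-error minimizer lands on the median (3.0), illustrating the robustness argument above.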
Squared-error loss (green curve) places more emphasis on observations with a large margin $$|y_i – \hat y_i|$$, and changes smoothly near a loss of 0. Absolute-error loss (blue curve) is more robust to observations with large margins. Huber loss (yellow curve) combines the properties of both with a threshold $$\delta$$: below the threshold it behaves like the squared error, and above the threshold like the absolute error.

## Loss function for other regression models

So far, I have focused on the linear regression model, which enjoys the benefit of a clean mathematical form and analytical solutions. It lays the foundation for the generalized linear model. Other regression models, such as tree-based models and ensembles, do not use the same linear function as linear regression. I will discuss tree-based models in detail in later posts. Here, I want to emphasize the choice of loss function, regardless of which regression model we are using. It is important to choose a loss function $$l$$ that is differentiable with respect to the fitting function $$f$$, so that we can compute the gradient, which allows us to greedily and iteratively approach the optimization goal. If the loss function $$l$$ is not differentiable, we are essentially facing a black-box optimization problem, which is much more challenging.

## Take-home message

First, in linear regression with an intercept-only model, minimizing the squared error gives the mean of the training targets as the best prediction, while minimizing the absolute error gives the median. Second, different goals (loss functions) generate different predictions. Third, it is important to choose a differentiable loss function.

Demo code can be found on my Github.
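The three losses described above can be sketched numerically. Note that the $$\tfrac{1}{2}r^2$$ scaling below is one common convention for the Huber loss (it keeps the two pieces continuous at $$\delta$$), so the constants differ slightly from the plain squared error in Equation 4:

```python
import numpy as np

def huber(residual, delta=1.0):
    """Huber loss: 0.5*r^2 for |r| <= delta, linear (slope delta) beyond."""
    r = np.abs(residual)
    return np.where(r <= delta, 0.5 * r**2, delta * (r - 0.5 * delta))

r = np.array([0.5, 1.0, 3.0])   # small, borderline, and outlier residuals
h = huber(r)                    # 0.125, 0.5, 2.5
sq = 0.5 * r**2                 # 0.125, 0.5, 4.5 -- explodes on the outlier
ab = np.abs(r)                  # 0.5,   1.0, 3.0

# Huber matches the (scaled) squared error below delta and grows only
# linearly above it, combining smoothness near 0 with robustness to outliers.
print(h, sq, ab)
```

For the outlier residual of 3.0, the Huber loss (2.5) grows linearly like the absolute error, while the squared error (4.5) keeps growing quadratically; that is exactly the trade-off the yellow curve depicts.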
References

• https://web.stanford.edu/~mrosenfe/soc_meth_proj3/matrix_OLS_NYU_notes.pdf
• https://stats.stackexchange.com/questions/92180/expected-prediction-error-derivation
• https://stats.stackexchange.com/questions/34613/l1-regression-estimates-median-whereas-l2-regression-estimates-mean
• http://web.uvic.ca/~dgiles/blog/median2.pdf
• https://web.stanford.edu/~hastie/ElemStatLearn/
https://tex.stackexchange.com/questions/194530/thesis-chapter-headings-any-suggestions
# Thesis chapter headings, any suggestions?

I've found a thesis with really beautiful chapter headings that I would like to use. What package am I going to need, and how would I go about creating a heading that looks like the one below? Also, if you have or know of any nice headings, do please share!

• Welcome to TeX.SX! Please help us to help you and add a minimal working example (MWE) that illustrates your problem. It will be much easier for us to reproduce your situation and find out what the issue is when we see compilable code, starting with \documentclass{...} and ending with \end{document}. – Henri Menke Aug 3 '14 at 10:46
• @HenriMenke I don't think that an MWE is appropriate for this question – Kartik Aug 3 '14 at 13:03

## 1 Answer

Here's an option using the titlesec package. I've used the general form of the \titleformat command:

% from the titlesec package
%\titleformat{ command }
% [ shape ]
% { format }{ label }{ sep }{ before-code }[ after-code ]

You can find more information by studying the documentation, and by viewing other similar questions on this site. Here's a complete MWE to play with:

% arara: pdflatex
\documentclass{report}
\usepackage{lipsum}
\usepackage[explicit]{titlesec}

% title format for the chapter
\titleformat{\chapter}
  {\bfseries\large}
  {}
  {0pt}
  {\titlerule[3pt]~\raisebox{-1.5pt}{\textsc{Chapter~\thechapter}}~\titlerule[3pt]%
   \\\vspace{.05cm}\titlerule\\\filcenter #1 \\\vspace{.25cm}\titlerule}

\begin{document}
\chapter{First chapter}
\lipsum[1]
\chapter{Second chapter}
\lipsum[2]
\end{document}

You mention that you might like to see other ideas; here's another one, again using the titlesec package but without the explicit option - I have used the tcolorbox package to put a box around the chapter number.
I don't know if I'd recommend it for a thesis, but it might give you some further ideas. Here's the code:

% arara: pdflatex
% !arara: indent: {overwrite: yes}
\documentclass{report}
\usepackage{lipsum}
\usepackage{titlesec}
\usepackage{tcolorbox}
\tcbuselibrary{skins}

% title format for the chapter
% custom chapter
\titleformat{\chapter}
  {\normalfont\Large\filleft\bfseries} % format applied to label+text
  {}                                   % label
  {1pc}                                % horizontal separation between label and title body
  {% draw a box around the chapter number
   \begin{tcolorbox}[
       enhanced,flushright upper,
       boxrule=1.4pt,
       colback=white,colframe=black!50!yellow,
       drop fuzzy midday shadow=black!50!yellow,
       width=2.8cm]
     \resizebox{2cm}{!}{\color{gray!80}\thechapter}%
   \end{tcolorbox}\Huge} % before the title body
  []                     % after the title body

\begin{document}
\chapter{First chapter}
\lipsum[1]
\chapter{Second chapter}
\lipsum[2]
\end{document}
https://www.clutchprep.com/chemistry/practice-problems/136152/what-mass-in-grams-of-a-molecular-substance-molar-mass-50-0-g-mol-must-be-added-
# Problem: What mass in grams of a molecular substance (molar mass = 50.0 g/mol) must be added to 500 g of water to produce a solution that boils at 101.02 °C? (Kb = 0.512 °C/m for water.)

a. 43 g
b. 50 g
c. 78 g
d. 92 g
e. 112 g

###### FREE Expert Solution

We're being asked to determine the mass of an unknown compound with a molar mass of 50.0 g/mol, given that the boiling point of the aqueous solution is 101.02 °C.

For this problem, we follow these steps:

Step 1. Establish the necessary equations.
Step 2. Calculate the molality.
Step 3. Calculate the mass.

Step 1. Recall that the boiling point of a solution is higher than that of the pure solvent, and for a non-dissociating solute the change in boiling point (ΔTb) is given by ΔTb = Kb · m, where m is the molality of the solution.
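The steps above can be carried out numerically. The sketch below assumes the standard boiling-point elevation relation ΔTb = Kb · m with a van 't Hoff factor of 1 (molecular, non-dissociating solute):

```python
# Boiling-point elevation: delta_T = Kb * m, with i = 1 for a molecular solute.
Kb = 0.512                  # °C·kg/mol for water
delta_T = 101.02 - 100.00   # °C above the normal boiling point
molar_mass = 50.0           # g/mol
kg_solvent = 0.500          # 500 g of water

molality = delta_T / Kb          # ≈ 1.992 mol solute per kg solvent  (Step 2)
moles = molality * kg_solvent    # ≈ 0.996 mol
grams = moles * molar_mass       # ≈ 49.8 g                           (Step 3)

print(round(grams, 1))  # 49.8
```

The result, about 49.8 g, rounds to answer (b) 50 g.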
https://stacks.math.columbia.edu/tag/0AV3
Lemma 15.23.8. Let $R$ be a Noetherian domain. Let $M$ be a finite $R$-module. Let $N$ be a finite reflexive $R$-module. Then $\mathop{\mathrm{Hom}}\nolimits _ R(M, N)$ is reflexive.

Proof. Choose a presentation $R^{\oplus m} \to R^{\oplus n} \to M \to 0$. Applying $\mathop{\mathrm{Hom}}\nolimits _ R(-, N)$ we obtain $0 \to \mathop{\mathrm{Hom}}\nolimits _ R(M, N) \to N^{\oplus n} \to N' \to 0$ with $N' = \mathop{\mathrm{Im}}(N^{\oplus n} \to N^{\oplus m})$ torsion free. We conclude by Lemma 15.23.5. $\square$

Comment #4657 by Remy: After the second sentence, couldn't you just conclude directly from Tag 15.23.5? (In fact, you don't need to take the image, because exactness on the right would not be needed.)
https://leopard.tu-braunschweig.de/receive/dbbs_mods_00045895
# Drei-, Vier- und Fünf-Grundgrößen-Gleichungen in der Elektrodynamik (Three-, four- and five-fundamental-quantity equations in electrodynamics)

The laws of the electromagnetic field are shown in systems with 3, 4 and 5 fundamental quantities. For a system of equations containing 3 fundamental quantities, the various modes of treatment introducing electrostatically, electromagnetically and symmetrically defined quantities are discussed in detail, and the relations of connexion between these three kinds of quantities and the respective quantities of the system with 4 fundamental quantities are developed and compiled. Moreover, the alternative "rational vs. non-rational", originating in merely geometrical conditions and treated in the preceding paper, is inserted in these relations. The transition from one system to another may be carried out according to the method of variation of quantities or variation of unit; the second method fails in the case of transition from the system with 4 fundamental quantities to the symmetrical one with 3 fundamental quantities. General equations are given for the system with 5 fundamental quantities, from which equations between numerical values result if the field constants η₀ and μ₀ and the quantity [...] are disposed of in a certain physical meaning according to unit and numerical value. Making use of the geometrical "corresponding coefficients" at the same time, one may obtain, from this system with 5 fundamental quantities, all required rational or non-rational ways of writing of systems with 5, 4 and 3 fundamental quantities. In addition, some inferences are drawn for the relations between units and numerical values of the electrical and magnetic quantities.
http://wires.wiley.com/WileyCDA/WiresArticle/articles.html?doi=10.1002%2Fwics.1287
# Examining missing data mechanisms via homogeneity of parameters, homogeneity of distributions, and multivariate normality

This paper reviews various methods of identifying missing data mechanisms. The three well-known mechanisms of missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR) are considered. A number of tests deem rejection of homogeneity of means and/or covariances (HMC) among observed data patterns as a means to reject MCAR. Utility of these tests as well as their shortcomings are discussed. In particular, examples of MAR and MNAR data with homogeneous means and covariances between their observed data patterns are provided for which tests of HMC fail to reject MCAR. More generally, tests of homogeneity of parameter estimates between various subsets of data are reviewed and their utility as tests of MCAR and MAR (in special cases) is pointed out. Since many tests of MCAR assume multinormality, methods to assess this assumption in the context of incomplete data are reviewed. Tests of homogeneity of distributions among observed data patterns for MCAR are also considered. A new nonparametric test of this type is proposed on the basis of pairwise comparison of marginal distributions. Finally, methods of examining missing data mechanism based on sensitivity analysis including methods that model missing data mechanism based on logistic, probit, and latent variable regression models, as well as methods that do not require modeling of missing data mechanism are reviewed. The paper concludes with some practical comments about the validity and utility of tests of missing data mechanism. WIREs Comput Stat 2014, 6:56–73. doi: 10.1002/wics.1287

Conflict of interest: The authors have declared no conflicts of interest for this article.
Figure captions:

• Q–Q plots comparing the distribution of observed data on variable 1 for the two groups of completely observed cases and incomplete cases. The left panels correspond to the missing not at random (MNAR) data and the right panels to the missing at random (MAR) data, for the two cases where missing data are generated according to the models in the text.
• The densities g(x), h(x), and the standard normal density φ(x).
• The JJ-NP test of missing completely at random (MCAR) applied to a set of bivariate normal data with ρ = 0.8 and incomplete data generated according to the logistic regression model with α = −1 and β = −20.
• The intervals marked 'M' are truncated from X to obtain the random variable X̃. The intervals marked 'O' form the range of X̃.
• The JJ-NP test of MCAR applied to a set of incomplete bivariate normal data with ρ = 0.8 and missingness generated according to the logistic regression model with α = −2 and β = −1.65.
• Logistic curves for two different parameter values.
• The JJ-NP test of MCAR applied to a set of data with missingness generated according to a missing at random (MAR) mechanism.
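The idea behind the homogeneity-of-means tests reviewed above can be illustrated with a small simulation. This is not any of the paper's specific tests; the logistic form and all constants are illustrative. Under MCAR, a fully observed variable has the same mean in every missingness pattern; under MAR, where missingness depends on that observed variable, the pattern means diverge:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000
x1 = rng.normal(size=n)                    # always observed
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)   # subject to missingness

# MCAR: x2 deleted with a constant probability.
mcar_missing = rng.random(n) < 0.3
# MAR: probability of deleting x2 depends on the observed x1 (logistic link).
mar_missing = rng.random(n) < 1.0 / (1.0 + np.exp(-2.0 * x1))

# Homogeneity of means: compare mean(x1) across the two missingness patterns.
gap_mcar = abs(x1[mcar_missing].mean() - x1[~mcar_missing].mean())
gap_mar = abs(x1[mar_missing].mean() - x1[~mar_missing].mean())

# Expect: small gap under MCAR, large gap under MAR.
print(gap_mcar, gap_mar)
```

This also hints at the paper's caveat: such comparisons can only reject MCAR; MAR mechanisms that happen to leave the pattern means homogeneous would slip through.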
https://ristretto.black/category/writing/
## Fat outline: put some meat on the backbone of your research

You want to convey your next idea to the world—or your supervisor—but you are unsure of the most effective way. Very often, you would fall back on sending a rough table of contents—the backbone of your research. Unfortunately, this backbone does not fully convey the motivation of the research or the logic you follow, and it is very hard to get direct feedback on. There is an alternative: adding meat to the backbone and producing a fat outline. Suggested by Josh Bernoff, the fat outline is like an ongoing draft of your paper. It contains (of course) how you will organise the content, but also pieces of the actual text, doodles of the graphs you expect to get, keywords, and basically anything you want—or should receive—feedback on. It forces you to think hard about how you motivate your work. You can then more easily convey this motivation to others. And others can tell you where you are going wrong. Use the fat outline as the platform to quickly iterate your ideas in the early phase of your paper. You'll hit two birds with one stone: you check your idea at minimal cost, and you already build momentum towards your next paper.

## Anthropomorphism or the art of humanising nonhuman subjects

Academic writing should be clear and objective. In the pursuit of objectivity, some believe that using the first person, introducing 'I' or 'we' into the text, will make the outcome sound less rigorous or formal. But attempting to avoid the first person may confuse readers, leaving them wondering 'who does what?', as we discussed in our article about the passive voice. Focusing on objectivity may also lead to anthropomorphism. Continue reading "Anthropomorphism or the art of humanising nonhuman subjects"

## Passive voice in scientific writing: angel or devil?

For years, we were told that in scientific writing we needed to use the passive voice to sound formal, neutral and serious.
More recently, the contrary philosophy burst in: suddenly, the passive voice had to be avoided at all costs, as it hides the agent of the sentence and creates confusion. This paradigm shift left many of us in doubt… is using the passive voice in formal, scientific writing right or wrong? Continue reading "Passive voice in scientific writing: angel or devil?"

## Not in the mood to write? Why you should still show up, even if the muse doesn't

Let's face it: we scientists are passionate about our job. We are usually delighted to carry out our scientific tasks (experiments, simulations, reviews, etc.). But when it comes to writing up our findings, the motivation goes down. We rarely feel ready to write and we rarely feel in the mood to write… the consequence: when we sit down and are supposed to write, we start doing other things instead; we procrastinate. And of course, with procrastination come guilt and frustration. Until the deadline dangerously approaches: then, at the last minute, creativity pops up. Well, let us break it to you: that's not really last-minute creativity, that's stress and adrenaline doing their job. In our Road to Bootcamp series of posts, we've already covered how starting to write your work early enough will let you fully benefit from the 'magic' of the writing process, thereby reducing procrastination. In this post, we'll focus on how creativity can be boosted—even when you're convinced that you're not in the mood to write. Continue reading "Not in the mood to write? Why you should still show up, even if the muse doesn't"

## Want to procrastinate less and be an effective writer? Start writing your articles early enough

If you ask researchers about their main issues when it comes to writing, procrastination always appears at the top of the list.
There are several methods that can help you become an effective writer who seldom procrastinates (or who procrastinates effectively—did you know that that's possible?), so on our Road to the Writing Bootcamp we will be dedicating a series of blog posts to this problem. Why do we procrastinate when it comes to writing a scientific document? For multiple reasons, but many of them are related to the fear of the blank page, also known as writer's block. Continue reading "Want to procrastinate less and be an effective writer? Start writing your articles early enough"

## Effective template to write your answer to reviewers

You have just received the reviews for your article. After a long wait, this is the most painful step. The main issue is that reviewers and authors don't speak the same language. To speed up and ease this process, authors should address the comments so that reviewers can easily assess how their feedback has been tackled. What, then, is the most effective way of writing your rebuttal?

## You want to write articles that get accepted? Do reviews.

At the end of my PhD, I started receiving invitations to review articles. At that moment, I felt honoured, as if I had received the membership card of a very selective club. Later, as a postdoc and professor, the number of invitations increased while my time available for such tasks decreased. However, I noticed something interesting that I wanted to test with my students.

## The authorship manifesto

Getting your name on an article is becoming more and more important in the "publish or perish" era. Although I believe writing papers is an excellent objective for doing research, deciding who should be on the paper can become tricky in some cases. Here is the result of an intense discussion during the team building (with the ATM, FLOW and BURN research groups) in 2017. You can jump directly to the summary table at the end if you are in a hurry.

## Does your article address these important issues?
I often need to review articles and give feedback on them. I find my feedback is most efficient when I can focus on the content (results, figures, etc.) and the flow of the article. These aspects of the article are what interest the first author most, even if he or she is also happy to get a review of the typos or other secondary problems. Yet, more often than not, many of my comments are about things that can be more or less automatised. This post is a checklist for the common problems I encounter. Continue reading "Does your article address these important issues?"

## Clear, accurate, concise writing

I'm writing this post following a very interesting talk by Jean-luc Doumont on "clear, accurate, concise writing". This was an updated version of his previous talk on effective written documents. Continue reading "Clear, accurate, concise writing"

## Are you lost after the submission of your manuscript?

After submitting your manuscript, the hard wait for the review starts. You might think that everything is handled perfectly on a first-in-first-out basis. But this is unfortunately not the case. It is not an easy job to be an editor; it takes a lot of effort, time investment and organisation. So you have to do everything to facilitate their work, and this requires some follow-up from your side. Here are the most important steps. Continue reading "Are you lost after the submission of your manuscript?"

## Are you using these time-saving features of LaTeX?

We will cover here some good practices for writing your manuscripts with $\LaTeX$. To make sure the people who review your manuscript before submission (more details) pay attention to the content of your article instead of the little problems, it is worth following these tips. Continue reading "Are you using these time-saving features of LaTeX?"
http://www.sciforums.com/threads/electricity-and-drift-velocity.160762/
# Electricity and drift velocity. Discussion in 'Physics & Math' started by ajanta, Apr 16, 2018.

1. ### ajanta, Registered Senior Member. Messages: 611

In the case of a 12 gauge copper wire carrying 10 amperes of current (typical of home wiring), the individual electrons only move about 0.02 cm per second, or about 1.2 cm per minute (in science this is called the drift velocity of the electrons). Yet the electric bulb lights up immediately when we turn the switch on: closing the switch sets the electrons into motion all along the wire. So it appears as if the electrons are moving very fast, when in fact they are not.

From wiki: u = mσΔV/(ρefℓ), where
u is the drift velocity of the electrons, in m⋅s^−1;
m is the molecular mass of the metal, in kg;
σ is the electric conductivity of the medium at the temperature considered, in S/m;
ΔV is the voltage applied across the conductor, in V;
ρ is the density (mass per unit volume) of the conductor, in kg⋅m^−3;
e is the elementary charge, in C;
f is the number of free electrons per atom;
ℓ is the length of the conductor, in m.

Now about a step-up transformer: suppose the secondary coil is wound from a wire 3×10^8 m long, while the primary coil is suited to a 50 Hz AC supply. My questions:

1. Is the output AC frequency the same as the input AC frequency?
2. If the input AC frequency is 50 Hz, then what is the output frequency: 50 Hz or 0 Hz?

Thanks. Last edited: Apr 16, 2018

3. ### Q-reeus, Valued Senior Member. Messages: 3,618

Regarding 'speed of electricity', and additionally actual conduction charge motions inside a typical metal, here is a cut&paste from a much earlier thread: [page number was wrong - rather go from ~ p38 to p45] Beyond that, the secondary windings are all immersed in essentially the same E = -dA/dt solenoidal emf field owing to the primary windings harmonically exciting the ferromagnetic core magnetically. Hence there is no lengthy delay as you imagine.
There is no lag somehow preventing an initial following of the primary frequency. Without significant non-linearity, there can be no harmonics, hence no issue with a frequency mismatch. Last edited: Apr 16, 2018. ajanta likes this.

5. ### Q-reeus, Valued Senior Member. Messages: 3,618

Forgot to add the following in #2: that the secondary voltage response faithfully follows the primary assumes an open-circuit situation where current flow in the secondary is zero or at least negligible. But typically the secondary will be coupled to a load that in general will be partly resistive and partly reactive, additional to the secondary's own inductive reactance and resistance (capacitive reactance being negligible for mains-frequency transformer windings). In that case the startup response of the secondary circuit current will be quite different from the steady-state situation, which for all practical purposes may take a few cycles to achieve. See e.g.: https://en.wikipedia.org/wiki/RLC_circuit#Transient_response What's shown there is similar to the AC-driven case, except one superposes the transient shown onto a steady-state final sinusoid.
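The ~0.02 cm/s figure quoted at the top of the thread can be sanity-checked with the more common form of the drift-velocity formula, v = I/(nqA). This is a sketch using standard handbook values for copper and 12 AWG wire, none of which come from the thread itself:

```python
import math

# Sanity check of the ~0.02 cm/s drift velocity quoted in the thread,
# using v = I / (n * q * A). Handbook values, not from the thread:
I = 10.0          # current, A
n = 8.5e28        # free-electron density of copper, m^-3 (~1 per atom)
q = 1.602e-19     # elementary charge, C
d = 2.053e-3      # 12 AWG wire diameter, m

A = math.pi * (d / 2) ** 2   # cross-sectional area, m^2
v = I / (n * q * A)          # drift velocity, m/s

print(f"drift velocity: {v * 100:.3f} cm/s")   # prints roughly 0.022 cm/s
```

The result agrees with the quoted figure, which is why the bulb-lights-instantly observation has to be explained by the field propagating along the wire, not by the electrons themselves.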
https://www.mail-archive.com/zope3-users@zope.org/msg01834.html
# Re: [Zope3-Users] newbie problems with new content-type

[EMAIL PROTECTED] wrote:
The 'test' command searches only packages under "instance_lib", and "instance_lib" is 'c:\path\to\your\instance\lib\python'

OK, that was my problem. My package was outside the instance lib directory. I think I had read in the Zope Developers Book that you could put your application anywhere on your hard disk as long as PYTHONPATH includes it. Now the tests are found.

Thanks a lot, Katsutoshi

Cheers
Lorenzo

_______________________________________________
Zope3-users mailing list
Zope3-users@zope.org
http://mail.zope.org/mailman/listinfo/zope3-users
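The general point behind the fix (a package is importable from any directory, as long as that directory is on the interpreter's search path, which PYTHONPATH feeds into) can be sketched like this; the package name below is made up for illustration:

```python
import os
import sys
import tempfile

# Illustration: Python can import a package from any directory on disk,
# as long as that directory is on sys.path (PYTHONPATH entries end up
# there). The package name "mycontenttype" is invented.
pkg_root = tempfile.mkdtemp()
pkg_dir = os.path.join(pkg_root, "mycontenttype")
os.makedirs(pkg_dir)
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("ANSWER = 42\n")

sys.path.insert(0, pkg_root)   # same effect as adding pkg_root to PYTHONPATH
import mycontenttype

print(mycontenttype.ANSWER)    # prints 42: the package is now importable
```

Zope's own 'test' runner is stricter than the interpreter, which is what tripped up the original poster: it only looked under the instance lib directory, regardless of PYTHONPATH.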
https://jd2718.org/2010/06/17/wheres-the-algebra-2-trigonometry-conversion-chart/
Not here yet. NY State is cheating. Well, no, but sort of. NY State hired a private company to cheat for them. They are peeking at your scores, and after they’ve counted and mulled and counted some more, they will decide on the passing score. And the 85 score. Here’s the long version: (and if you want to look, here’s the multiple choice section)

Because this is the first year New York State gave the exam, and because they are unsure of how to write an exam, they devised an extraordinary procedure. First they make all the teachers in all the schools across the State grade the exams very quickly and find the raw scores. We did that Tuesday afternoon and finished Wednesday morning. Then they made us ship all the forms to the vendor’s facility in Arkansas or Montana or somewhere else not in NY. Then, remember how the forms looked, with lots of extra bubbles on the back? That was so the vendor could scan all the forms, collect stats on which questions kids scored how many points on, and prepare a report. Then a group of teachers from around the state goes to Albany, looks at the report, and assigns value or difficulty to each question. Then that report and the original report are summarized and presented to a panel of teachers, assistant principals, principals, math chairmen, special ed coordinators, curriculum coordinators, superintendents, board of education members (and I’m missing some). This panel then decides what score represents passing and what score represents “mastery.” I sat on this panel for Integrated Algebra two years ago (and was easily at three extremes – the least powerful position since I was just a teacher, the least impressively dressed, no jacket, no tie, and the most annoying, but not in a fun way).

Finally New York State takes those two numbers, passing and mastery, sets them at 65 and 85, sets 88 raw points at 100 and 0 raw points at 0, and then fits a cubic function to the four points. That’s how they get the scale.
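The scaling described above pins four (raw, scaled) points and passes the unique cubic through them. Here is a minimal sketch of that using Lagrange interpolation; the two raw cut scores below are invented for illustration, since the real ones are exactly what the panel decides:

```python
# Sketch of the scale described above: the unique cubic through the four
# anchor points (0, 0), (pass_raw, 65), (mastery_raw, 85), (88, 100).
# The raw cut scores 35 and 68 are invented, not the panel's real numbers.
def cubic_scale(anchors):
    """Return the Lagrange-interpolating cubic through four (raw, scaled) points."""
    def scale(x):
        total = 0.0
        for i, (xi, yi) in enumerate(anchors):
            term = yi
            for j, (xj, _) in enumerate(anchors):
                if i != j:
                    term *= (x - xj) / (xi - xj)   # Lagrange basis factor
            total += term
        return total
    return scale

anchors = [(0, 0), (35, 65), (68, 85), (88, 100)]  # hypothetical cut scores
scale = cubic_scale(anchors)
for raw, scaled in anchors:
    assert abs(scale(raw) - scaled) < 1e-9   # the curve hits all four anchors
print(round(scale(50), 1))                   # a raw 50 under this invented curve
```

Four points determine a cubic uniquely, so a cubic regression through the same four points gives the identical curve.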
You can use a cubic regression; same result. Except, I don’t know how much New York State does, and how much the vendor does. The vendor is a private company: Pearson. The process feels iffy to me. But this is exactly what is happening in Albany now (if they are done scanning the forms at the company’s facility in Iowa or Texas or somewhere else not in New York). And this is why the conversion chart will not come out until June 24. At least in time for report cards… I hope.

June 17, 2010, 9:50 pm
whats ur gut feeling on this generous curve or just a normal curve like every other math regents

June 17, 2010, 9:57 pm
Oh, for the good old days of 30 multiple choice questions weighted 2 points each and the student’s choice of 4 out of 7 ten point questions to make up 100/100 points. There was not as much “voodoo mathematics” back in the day – and the answer sheets never had to leave the school!

June 17, 2010, 9:59 pm
The process seems unfair… just can’t put my finger on what’s bothering me somehow. I guess it’s all the ‘passing around’ and the panels etc. If they can make their own regents, those same people should be able to mark it, right? And if not, something is fishy and unfair in that process. Let’s protest! Just kidding, it doesn’t bother me THAT much

June 17, 2010, 10:07 pm
you said you can’t scan the exam (i assume that’s because it’s too time consuming) but thanks for the pictures! Maybe in the same manner can you take a picture of the one page of the answer sheet for the mult. choice? Everyone wants the answers

June 17, 2010, 11:04 pm
I was wondering same thing as Ray.

• June 17, 2010, 11:16 pm
Will do.

June 17, 2010, 11:49 pm
What about the short answers? Is it possible to get those posted?

June 17, 2010, 11:59 pm
All my m/c matched the ones he posted, I’m sure these are all correct:
28. +/- 4
29. population standard deviation 7.4
30.
sum = -11/6 or -2.2 and product = -3/5 or -0.6
31. graph and y = 0 for the asymptote
33. exact value of sin 240 is NEGATIVE radical 3 over 2
34. 604 square feet for the area of the parallelogram
35. (d-8)/5
36. probability to the nearest thousandth, 0.167
37. 0, 60, 180, 300
38. Tennessee = 3780, Vermont = 5040, thus Carol is not correct
39. 33 degrees. Unless you want to see work, and I didn’t originally post, some guy by the name of LI math teacher did on some other post on this blog, but he seems reliable.

5. June 18, 2010, 12:04 am
Noel, all correct, except for 30. I assume that’s a typo, should be $-\frac{11}{5}$
32. Both you and the Regents are wrong. Answer should be $-|x| \sqrt{3x}$

June 18, 2010, 12:07 am
30) yeah sorry it is 5, i did put 5 on my test, hmm didn’t notice when I copied and pasted the guy’s short answers, yeah it should be -11/5.
32) I don’t understand, can you explain why the absolute value of x?

June 18, 2010, 12:14 am
I don’t remember the question fully, but I remember at the end it was like:
5x sqrt(3x) – 6x sqrt(3x)
That yields -x sqrt(3x)
Where do you get |x| from? If x is negative there is no solution in the real number system anyway, because inside the sqrt there is (3x). So the domain would have to be x ≥ 0.

• June 18, 2010, 12:26 am
I posted here. In general the square root of a square is the absolute value of the number. For example, the square root of (-10) squared is 10. One would not be justified in assuming x non-negative without stating so… It’s not right to talk about a solution here, since we are not solving, but simplifying an expression. jd

June 22, 2010, 8:21 pm
Can you explain the probability problem?

June 18, 2010, 12:18 am
im with the Noel guy on this one… that makes no sense!

June 18, 2010, 12:20 am
does anyone remember the original equation for the answer that ended up being (d-8)/5?
• June 18, 2010, 12:29 am
That one is $\frac{\frac{1}{2} - \frac{4}{d}}{\frac{1}{d} + \frac{3}{2d}}$

June 18, 2010, 12:40 am
yeah… thanx so much!

June 18, 2010, 12:43 am
yeah… i had a question on that problem… how did u get (d-8)/5… cause on my exam i kept gettin (d-8)/5d and i dont know what i kept doin wrong!

June 18, 2010, 12:46 am
[(1/2) – (4/d)] / [(1/d) + (3/2d)]
Multiply top and bottom by 2d:
(d – 4(2)) / (2 + 3)
(d – 8) / 5

June 18, 2010, 12:46 am
oh… i just realized my mistake… shoot… oh well!

June 18, 2010, 1:19 am
hmm I’m going to say that I’m going to stop posting, and focus on my two other regents that are next week. I’ll check on the 25th or 28th one last time, to see how everyone did. Even if I get 100 by the state, it isn’t a 100 because 32 is wrong. So sorry to those who I may have offended or anything in my posts, I didn’t mean it, and I guess if you felt I was hounding here, I really didn’t have anything better to do. Good luck to all with scores and other regents if you have any.

June 18, 2010, 1:08 pm
Hey where is the conversion chart? The real one?

June 18, 2010, 10:18 pm
It hasn’t been posted yet, and might not be until the 24th.

June 19, 2010, 3:12 pm
Gracias!

June 21, 2010, 4:44 pm
For 28 it is not +/- 4, it is just +4. If you used -4 you would not have equal roots, as the question asked for the value of k that gives equal roots and equals 0. Roots for +4 came out to both 2, which are equal of course, and roots for -4 came out to be +2 and -2, and those are not equal.

June 22, 2010, 1:25 am
I don’t remember the exact wording of the question, but I think -4 does work, and that seems to be agreed on by everyone else, also. I could help you more if you remember the question.

June 23, 2010, 12:39 am
Now that 28 has been posted on this blog, I can help you.
First, you should label a, b, and c: a = 1, b = -k, and c = 4. Now use the discriminant, which is the part of the quadratic formula under the square root sign. The roots can only be equal if the discriminant is equal to zero, so set it equal to zero:
b^2 - 4ac = 0    (b^2 means b squared)
(-k)^2 - 4(1)(4) = 0
(-1)^2 (k)^2 - 4(1)(4) = 0    (this step just serves to get rid of the negative sign)
k^2 - 16 = 0
Add 16 to each side:
k^2 = 16
Take the square root of each side:
k = +/- 4    (since (-4)^2 = 16)
Check by plugging it in:
x^2 - (-4)x + 4 = 0
x^2 + 4x + 4 = 0
(x + 2)(x + 2) = 0
x = -2

June 23, 2010, 12:31 pm
x^2 - 4x + 4: the solutions are both +2.
x^2 + 4x + 4: the solutions are both -2.
If you solve b^2 - 4ac = 0 algebraically, you will get both +4 and -4 as solutions.

June 21, 2010, 6:35 pm
I feel they should give a generous curve, so that at least most students are able to pass.
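Two of the worked answers debated above can be checked numerically. This is a sketch in plain Python (nothing here is quoted from the exam paper itself):

```python
import math

# Numerical checks of two worked answers from the comment thread above.

# Question 35: ((1/2) - (4/d)) / ((1/d) + (3/(2d))) should simplify to (d - 8)/5.
for d in (1.0, 3.0, 10.0, -7.0):
    lhs = (0.5 - 4.0 / d) / (1.0 / d + 3.0 / (2.0 * d))
    assert abs(lhs - (d - 8.0) / 5.0) < 1e-12

# Question 28: x^2 - kx + 4 = 0 has equal roots exactly when the
# discriminant b^2 - 4ac = k^2 - 16 is zero, i.e. k = +4 or k = -4.
for k in (4.0, -4.0):
    disc = k * k - 4.0 * 1.0 * 4.0
    assert disc == 0.0                        # repeated root in both cases
    root = (k + math.sqrt(disc)) / 2.0        # x = (k ± sqrt(disc)) / 2
    assert root * root - k * root + 4.0 == 0.0

print("answers 35 and 28 check out")
```

The second loop also settles the +/- 4 debate: k = -4 gives the repeated root x = -2, so both signs produce equal roots.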
https://worldwidescience.org/topicpages/a/accurate+phylogenetic+breakpoint.html
#### Sample records for accurate phylogenetic breakpoint

1. Accurate phylogenetic classification of DNA fragments based on sequence composition

Energy Technology Data Exchange (ETDEWEB) McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis; Hugenholtz, Philip; Rigoutsos, Isidore 2006-05-01

Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome datasets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.

2.
Accurate Breakpoint Mapping in Apparently Balanced Translocation Families with Discordant Phenotypes Using Whole Genome Mate-Pair Sequencing Science.gov (United States) Aristidou, Constantia; Koufaris, Costas; Theodosiou, Athina; Bak, Mads; Mehrjouy, Mana M.; Behjati, Farkhondeh; Tanteles, George; Christophidou-Anastasiadou, Violetta; Tommerup, Niels 2017-01-01 Familial apparently balanced translocations (ABTs) segregating with discordant phenotypes are extremely challenging for interpretation and counseling due to the scarcity of publications and lack of routine techniques for quick investigation. Recently, next generation sequencing has emerged as an efficacious methodology for precise detection of translocation breakpoints. However, studies so far have mainly focused on de novo translocations. The present study focuses specifically on familial cases in order to shed some light to this diagnostic dilemma. Whole-genome mate-pair sequencing (WG-MPS) was applied to map the breakpoints in nine two-way ABT carriers from four families. Translocation breakpoints and patient-specific structural variants were validated by Sanger sequencing and quantitative Real Time PCR, respectively. Identical sequencing patterns and breakpoints were identified in affected and non-affected members carrying the same translocations. PTCD1, ATP5J2-PTCD1, CADPS2, and STPG1 were disrupted by the translocations in three families, rendering them initially as possible disease candidate genes. However, subsequent mutation screening and structural variant analysis did not reveal any pathogenic mutations or unique variants in the affected individuals that could explain the phenotypic differences between carriers of the same translocations. In conclusion, we suggest that NGS-based methods, such as WG-MPS, can be successfully used for detailed mapping of translocation breakpoints, which can also be used in routine clinical investigation of ABT cases. 
Unlike de novo translocations, no associations were determined here between familial two-way ABTs and the phenotype of the affected members, in which the presence of cryptic imbalances and complex chromosomal rearrangements has been excluded. Future whole-exome or whole-genome sequencing will potentially reveal unidentified mutations in the patients underlying the discordant phenotypes within each family. 3. [Ceftaroline breakpoints]. Science.gov (United States) Canut, Andrés; Martínez-Martínez, Luis 2014-03-01 Ceftaroline is a new cephalosporin for parenteral use. Notable among its microbiological properties is its ability to inhibit penicillin-binding protein 2a of methicillin-resistant Staphylococcus aureus and its good in vitro activity against several microorganisms of clinical interest. The European Committee on Antimicrobial Susceptibility Testing (EUCAST) has defined both epidemiological breakpoints (defining wild-type populations that lack known acquired mechanisms of resistance) and clinical breakpoints for this compound. The Clinical and Laboratory Standards Institute (CLSI) has also defined clinical breakpoints. Based on the microbiological activity of ceftaroline, clinical categories have been defined for enterobacteria, S. aureus, Haemophilus influenzae, Streptococcus pneumoniae, and beta-hemolytic Streptococcus. EUCAST has also established breakpoints based on pharmacokinetic-pharmacodynamic criteria. 4. Accurate reconstruction of insertion-deletion histories by statistical phylogenetics. Directory of Open Access Journals (Sweden) Oscar Westesson Full Text Available The Multiple Sequence Alignment (MSA) is a computational abstraction that represents a partial summary either of indel history, or of structural similarity.
Taking the former view (indel history), it is possible to use formal automata theory to generalize the phylogenetic likelihood framework for finite substitution models (Dayhoff's probability matrices and Felsenstein's pruning algorithm) to arbitrary-length sequences. In this paper, we report results of a simulation-based benchmark of several methods for reconstruction of indel history. The methods tested include a relatively new algorithm for statistical marginalization of MSAs that sums over a stochastically-sampled ensemble of the most probable evolutionary histories. For mammalian evolutionary parameters on several different trees, the single most likely history sampled by our algorithm appears less biased than histories reconstructed by other MSA methods. The algorithm can also be used for alignment-free inference, where the MSA is explicitly summed out of the analysis. As an illustration of our method, we discuss reconstruction of the evolutionary histories of human protein-coding genes. 5. Using ESTs for phylogenomics: Can one accurately infer a phylogenetic tree from a gappy alignment? Directory of Open Access Journals (Sweden) Hartmann Stefanie 2008-03-01 Full Text Available Abstract Background While full genome sequences are still only available for a handful of taxa, large collections of partial gene sequences are available for many more. The alignment of partial gene sequences results in a multiple sequence alignment containing large gaps that are arranged in a staggered pattern. The consequences of this pattern of missing data on the accuracy of phylogenetic analysis are not well understood. We conducted a simulation study to determine the accuracy of phylogenetic trees obtained from gappy alignments using three commonly used phylogenetic reconstruction methods (Neighbor Joining, Maximum Parsimony, and Maximum Likelihood) and studied ways to improve the accuracy of trees obtained from such datasets.
Results We found that the pattern of gappiness in multiple sequence alignments derived from partial gene sequences substantially compromised phylogenetic accuracy even in the absence of alignment error. The decline in accuracy was beyond what would be expected based on the amount of missing data. The decline was particularly dramatic for Neighbor Joining and Maximum Parsimony, where the majority of gappy alignments contained 25% to 40% incorrect quartets. To improve the accuracy of the trees obtained from a gappy multiple sequence alignment, we examined two approaches. In the first approach, alignment masking, potentially problematic columns and input sequences are excluded from the dataset. Even in the absence of alignment error, masking improved phylogenetic accuracy up to 100-fold. However, masking retained, on average, only 83% of the input sequences. In the second approach, alignment subdivision, the missing data is statistically modelled in order to retain as many sequences as possible in the phylogenetic analysis. Subdivision resulted in more modest improvements to alignment accuracy, but succeeded in including almost all of the input sequences. Conclusion These results demonstrate that partial gene 6. Identifying the important HIV-1 recombination breakpoints. Directory of Open Access Journals (Sweden) John Archer Full Text Available Recombinant HIV-1 genomes contribute significantly to the diversity of variants within the HIV/AIDS pandemic. It is assumed that some of these mosaic genomes may have novel properties that have led to their prevalence, particularly in the case of the circulating recombinant forms (CRFs). In regions of the HIV-1 genome where recombination has a tendency to convey a selective advantage to the virus, we predict that the distribution of breakpoints--the identifiable boundaries that delimit the mosaic structure--will deviate from the underlying null distribution.
To test this hypothesis, we generate a probabilistic model of HIV-1 copy-choice recombination and compare the predicted breakpoint distribution to the distribution from the HIV/AIDS pandemic. Across much of the HIV-1 genome, we find that the observed frequencies of inter-subtype recombination are predicted accurately by our model. This observation strongly indicates that in these regions a probabilistic model, dependent on local sequence identity, is sufficient to explain breakpoint locations. In regions where there is a significant over- (either side of the env gene) or under- (short regions within gag, pol, and most of env) representation of breakpoints, we infer natural selection to be influencing the recombination pattern. The paucity of recombination breakpoints within most of the envelope gene indicates that recombinants generated in this region are less likely to be successful. The breakpoints at a higher frequency than predicted by our model are approximately at either side of env, indicating increased selection for these recombinants as a consequence of this region, or at least part of it, having a tendency to be recombined as an entire unit. Our findings thus provide the first clear indication of the existence of a specific portion of the genome that deviates from a probabilistic null model for recombination. This suggests that, despite the wide diversity of recombinant forms seen in 7. Identifying the Important HIV-1 Recombination Breakpoints Science.gov (United States) Fan, Jun; Simon-Loriere, Etienne; Arts, Eric J.; Negroni, Matteo; Robertson, David L. 2008-01-01 Recombinant HIV-1 genomes contribute significantly to the diversity of variants within the HIV/AIDS pandemic. It is assumed that some of these mosaic genomes may have novel properties that have led to their prevalence, particularly in the case of the circulating recombinant forms (CRFs).
In regions of the HIV-1 genome where recombination has a tendency to convey a selective advantage to the virus, we predict that the distribution of breakpoints—the identifiable boundaries that delimit the mosaic structure—will deviate from the underlying null distribution. To test this hypothesis, we generate a probabilistic model of HIV-1 copy-choice recombination and compare the predicted breakpoint distribution to the distribution from the HIV/AIDS pandemic. Across much of the HIV-1 genome, we find that the observed frequencies of inter-subtype recombination are predicted accurately by our model. This observation strongly indicates that in these regions a probabilistic model, dependent on local sequence identity, is sufficient to explain breakpoint locations. In regions where there is a significant over- (either side of the env gene) or under- (short regions within gag, pol, and most of env) representation of breakpoints, we infer natural selection to be influencing the recombination pattern. The paucity of recombination breakpoints within most of the envelope gene indicates that recombinants generated in this region are less likely to be successful. The breakpoints at a higher frequency than predicted by our model are approximately at either side of env, indicating increased selection for these recombinants as a consequence of this region, or at least part of it, having a tendency to be recombined as an entire unit. Our findings thus provide the first clear indication of the existence of a specific portion of the genome that deviates from a probabilistic null model for recombination. This suggests that, despite the wide diversity of recombinant forms seen in the viral 8. 
DNA Probe Pooling for Rapid Delineation of Chromosomal Breakpoints Energy Technology Data Exchange (ETDEWEB) Lu, Chun-Mei; Kwan, Johnson; Baumgartner, Adolf; Weier, Jingly F.; Wang, Mei; Escudero, Tomas; Munné, Santiago; Zitzelsberger, Horst F.; Weier, Heinz-Ulrich 2009-01-30 Structural chromosome aberrations are hallmarks of many human genetic diseases. The precise mapping of translocation breakpoints in tumors is important for identification of genes with altered levels of expression, prediction of tumor progression, therapy response, or length of disease-free survival as well as the preparation of probes for detection of tumor cells in peripheral blood. Similarly, in vitro fertilization (IVF) and preimplantation genetic diagnosis (PGD) for carriers of balanced, reciprocal translocations benefit from accurate breakpoint maps in the preparation of patient-specific DNA probes followed by a selection of normal or balanced oocytes or embryos. We expedited the process of breakpoint mapping and preparation of case-specific probes by utilizing physically mapped bacterial artificial chromosome (BAC) clones. Historically, breakpoint mapping is based on the definition of the smallest interval between proximal and distal probes. Thus, many of the DNA probes prepared for multi-clone and multi-color mapping experiments do not generate additional information. Our pooling protocol described here with examples from thyroid cancer research and PGD accelerates the delineation of translocation breakpoints without sacrificing resolution. The turnaround time from clone selection to mapping results using tumor or IVF patient samples can be as short as three to four days. 9. Aluminum break-point contacts NARCIS (Netherlands) Heinemann, Martina; Groot, R.A. de 1997-01-01 Ab initio molecular dynamics is used to study the contribution of a single Al atom to an aluminum breakpoint contact during the final stages of breaking and the initial stages of the formation of such a contact.
A hysteresis effect is found in excellent agreement with experiment and the form of the 10. Effects of new penicillin susceptibility breakpoints for Streptococcus pneumoniae--United States, 2006-2007. Science.gov (United States) 2008-12-19 Streptococcus pneumoniae (pneumococcus) is a common cause of pneumonia and meningitis in the United States. Antimicrobial resistance, which can result in pneumococcal infection treatment failure, is identified by measuring the minimum inhibitory concentration (MIC) of an antimicrobial that will inhibit pneumococcal growth. Breakpoints are MICs that define infections as susceptible (treatable), intermediate (possibly treatable with higher doses), and resistant (not treatable) to certain antimicrobials. In January 2008, after a reevaluation that included more recent clinical studies, the Clinical and Laboratory Standards Institute (CLSI) published new S. pneumoniae breakpoints for penicillin (the preferred antimicrobial for susceptible S. pneumoniae infections). To assess the potential effects of the new breakpoints on susceptibility categorization, CDC applied them to MICs of invasive pneumococcal disease (IPD) isolates collected by the Active Bacterial Core surveillance (ABCs) system at sites in 10 states during 2006-2007. This report summarizes the results of that analysis, which found that the percentage of IPD nonmeningitis S. pneumoniae isolates categorized as susceptible, intermediate, and resistant to penicillin changed from 74.7%, 15.0%, and 10.3% under the former breakpoints to 93.2%, 5.6%, and 1.2%, respectively, under the new breakpoints. Microbiology laboratories should be aware of the new breakpoints to interpret pneumococcal susceptibility accurately, and clinicians should be aware of the breakpoints to prescribe antimicrobials appropriately for pneumococcal infections. 
State and local health departments also should be aware of the new breakpoints because they might result in a decrease in the number of reported cases of penicillin-resistant pneumococcus.

11. Breath-holding and its breakpoint
Science.gov (United States)
Parkes, M J
2006-01-01
This article reviews the basic properties of breath-holding in humans and the possible causes of the breath at breakpoint. The simplest objective measure of breath-holding is its duration, but even this is highly variable. Breath-holding is a voluntary act, but normal subjects appear unable to breath-hold to unconsciousness. A powerful involuntary mechanism normally overrides voluntary breath-holding and causes the breath that defines the breakpoint. The occurrence of the breakpoint breath does not appear to be caused solely by a mechanism involving lung or chest shrinkage, partial pressures of blood gases or the carotid arterial chemoreceptors. This is despite the well-known properties of breath-hold duration being prolonged by large lung inflations, hyperoxia and hypocapnia, and being shortened by the converse manoeuvres and by increased metabolic rate. Breath-holding has, however, two much less well-known but important properties. First, the central respiratory rhythm appears to continue throughout breath-holding. Humans cannot therefore stop their central respiratory rhythm voluntarily. Instead, they merely suppress expression of their central respiratory rhythm and voluntarily 'hold' the chest at a chosen volume, possibly assisted by some tonic diaphragm activity. Second, breath-hold duration is prolonged by bilateral paralysis of the phrenic or vagus nerves. Possibly the contribution to the breakpoint from stimulation of diaphragm muscle chemoreceptors is greater than has previously been considered. At present there is no simple explanation for the breakpoint that encompasses all these properties.

12.
Call for the international adoption of microbiological breakpoints for fluoroquinolones and Streptococcus pneumoniae
Science.gov (United States)
Schurek, Kristen N; Adam, Heather J; Hoban, Daryl J; Zhanel, George G
2006-09-01
The use of current Clinical and Laboratory Standards Institute levofloxacin breakpoints for assessing fluoroquinolone resistance in Streptococcus pneumoniae is inadequate for detecting isolates possessing first-step parC mutations. Consequently, the risk for development of fluoroquinolone resistance is greatly underestimated. Adopting microbiological breakpoints for fluoroquinolones and S. pneumoniae, where parC mutations are rare in susceptible isolates, more accurately describes the emergence of resistance and may help to prevent a number of future fluoroquinolone treatment failures. Additionally, we propose that the use of a second fluoroquinolone marker, such as ciprofloxacin, offers the best prediction for detecting an isolate possessing a first-step parC mutation.

13. Precise detection of rearrangement breakpoints in mammalian chromosomes
Directory of Open Access Journals (Sweden)
Gautier, Christian
2008-06-01
Background: Genomes undergo large structural changes that alter their organisation. The chromosomal regions affected by these rearrangements are called breakpoints, while those which have not been rearranged are called synteny blocks. We developed a method to precisely delimit rearrangement breakpoints in a genome by comparison with the genome of a related species. Contrary to current methods, which search for synteny blocks and simply return what remains of the genome as breakpoints, we propose to go further and to investigate the breakpoints themselves in order to refine them. Results: Given some reliable and non-overlapping synteny blocks, the core of the method consists in refining the regions that are not contained in them.
By aligning each breakpoint sequence against its specific orthologous sequences in the other species, we can look for weak similarities inside the breakpoint, thus extending the synteny blocks and narrowing the breakpoints. The identification of the narrowed breakpoints relies on a segmentation algorithm and is statistically assessed. Since this method requires as input synteny blocks with some properties which, though they appear natural, are not verified by current methods for detecting such blocks, we further give a formal definition and provide an algorithm to compute them. The whole method is applied to delimit breakpoints on the human genome when compared to the mouse and dog genomes. Among the 355 human-mouse and 240 human-dog breakpoints, 168 and 146 respectively span less than 50 kb. We compared the resulting breakpoints with some publicly available ones and show that we achieve a better resolution. Furthermore, we suggest that breakpoints are rarely reduced to a point, and instead often consist in large regions that can be distinguished from the surrounding sequences in terms of segmental duplications, similarity with related species, and transposable elements. Conclusion: Our method leads to smaller

14. Kalman Filter Track Fits and Track Breakpoint Analysis
CERN Document Server
Astier, Pierre; Cardini, Alessandro; Cousins, Robert D.; Letessier-Selvon, Antoine; Popov, Boris A.; Vinogradova, Tatiana
2000-01-01
We give an overview of track fitting using the Kalman filter method in the NOMAD detector at CERN, and emphasize how the wealth of by-product information can be used to analyze track breakpoints (discontinuities in track parameters caused by scattering, decay, etc.). After reviewing how this information has been previously exploited by others, we describe extensions which add power to breakpoint detection and characterization.
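A generic sketch of the F-test-style comparison that such breakpoint analyses rest on, comparing the chi-square of an unbroken track fit with that of a refit that adds breakpoint parameters. This is an illustrative reading, not the NOMAD implementation; the function name and the numbers below are ours:

```python
from math import inf

def breakpoint_f_statistic(chi2_unbroken, dof_unbroken, chi2_broken, dof_broken):
    """F-like statistic comparing an unbroken track fit with a refit
    that adds breakpoint parameters (so dof_broken < dof_unbroken).
    Larger values favour the broken-track hypothesis."""
    extra = dof_unbroken - dof_broken      # number of added breakpoint parameters
    if extra <= 0 or dof_broken <= 0:
        raise ValueError("the broken fit must add parameters")
    improvement = (chi2_unbroken - chi2_broken) / extra
    residual = chi2_broken / dof_broken    # reduced chi-square of the broken fit
    return improvement / residual if residual > 0 else inf

# A chi-square that drops sharply once breakpoint parameters are added
# yields a large statistic; a marginal drop yields a small one.
sharp = breakpoint_f_statistic(120.0, 50, 40.0, 48)
mild = breakpoint_f_statistic(52.0, 50, 50.0, 48)
```

As the abstract notes, signed quantities such as the momentum change at the candidate breakpoint can then supplement this unsigned statistic.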
We show how complete fits to the entire track, with breakpoint parameters added, can be easily obtained from the information from unbroken fits. Tests inspired by the Fisher F-test can then be used to judge breakpoints. Signed quantities (such as the change in momentum at the breakpoint) can supplement unsigned quantities such as the various chi-squares. We illustrate the method with electrons from real data, and with Monte Carlo simulations of pion decays.

15. Characterization of the breakpoints of a polymorphic inversion complex detects strict and broad breakpoint reuse at the molecular level
Science.gov (United States)
Puerma, Eva; Orengo, Dorcas J; Salguero, David; Papaceit, Montserrat; Segarra, Carmen; Aguadé, Montserrat
2014-09-01
Inversions are an integral part of structural variation within species, and they play a leading role in genome reorganization across species. Work at both the cytological and genome sequence levels has revealed heterogeneity in the distribution of inversion breakpoints, with some regions being recurrently used. Breakpoint reuse at the molecular level has mostly been assessed for fixed inversions through genome sequence comparison, and therefore rather broadly. Here, we have identified and sequenced the breakpoints of two polymorphic inversions, E1 and E2 (which share a breakpoint), in the extant Est and E1+2 chromosomal arrangements of Drosophila subobscura. The breakpoints are two medium-sized repeated motifs that mediated the inversions by two different mechanisms: E1 via staggered breaks and subsequent repair, and E2 via repeat-mediated ectopic recombination. The fine delimitation of the shared breakpoint revealed its strict reuse at the molecular level regardless of which was the intermediate arrangement. The occurrence of other rearrangements in the most proximal and distal extended breakpoint regions reveals the broad reuse of these regions.
This differential degree of fragility might be related to their sharing the presence, outside the inverted region, of snoRNA-encoding genes.

16. Phylogenetic trees
OpenAIRE
Baños, Hector; Bushek, Nathaniel; Davidson, Ruth; Gross, Elizabeth; Harris, Pamela E.; Krone, Robert; Long, Colby; Stewart, Allen; Walker, Robert
2016-01-01
We introduce the package PhylogeneticTrees for Macaulay2, which allows users to compute phylogenetic invariants for group-based tree models. We provide some background information on phylogenetic algebraic geometry and show how the package PhylogeneticTrees can be used to calculate a generating set for a phylogenetic ideal as well as a lower bound for its dimension. Finally, we show how methods within the package can be used to compute a generating set for the join of any two ideals.

17. Unit roots and structural breakpoints in China's macroeconomic and financial time series
Institute of Scientific and Technical Information of China (English)
LIANG Qi; TENG Jianzhou
2006-01-01
This paper applies unit-root tests to 10 Chinese macroeconomic and financial time series that allow for the possibility of up to two endogenous structural breaks. We found that 6 of the series, i.e., GDP, GDP per capita, employment, bank credit, deposit liabilities and investment, can be more accurately characterized as a segmented trend-stationary process around one or two structural breakpoints, as opposed to a stochastic unit-root process. Our findings have important implications for policy-makers formulating long-term growth strategy and short-run stabilization policies, as well as for causality analysis among the series.

18. Comparison of antimicrobial pharmacokinetic/pharmacodynamic breakpoints with EUCAST and CLSI clinical breakpoints for Gram-positive bacteria
Science.gov (United States)
Asín, Eduardo; Isla, Arantxazu; Canut, Andrés; Rodríguez Gascón, Alicia
2012-10-01
This study compared susceptibility breakpoints based on pharmacokinetic/pharmacodynamic (PK/PD) models and Monte Carlo simulation with those defined by the Clinical and Laboratory Standards Institute (CLSI) and the European Committee on Antimicrobial Susceptibility Testing (EUCAST) for antibiotics used in the treatment of infections caused by Gram-positive bacteria. A secondary objective was to evaluate the probability of achieving the PK/PD target associated with the success of antimicrobial therapy. A 10,000-subject Monte Carlo simulation was executed to evaluate 13 antimicrobials (47 intravenous dosing regimens). Susceptibility data were extracted from the British Society for Antimicrobial Chemotherapy database for bacteraemia isolates. The probability of target attainment (PTA) and the cumulative fraction of response (CFR) were calculated. No antibiotic was predicted to be effective (CFR ≥ 90%) against all microorganisms. The PK/PD susceptibility breakpoints were also estimated and compared with the CLSI and EUCAST breakpoints, and the percentages of strains affected by breakpoint discrepancies were calculated. In the case of β-lactams, breakpoint discrepancies affected <15% of strains. However, larger differences were detected for low doses of vancomycin, daptomycin and linezolid, with PK/PD breakpoints being lower than those defined by the CLSI and EUCAST. When this occurs, an isolate will be considered susceptible based on CLSI and EUCAST breakpoints although the PK/PD analysis predicts failure, which may explain treatment failures reported in the literature. This study reinforces the idea of considering not only the antimicrobial activity but also the dosing regimen to increase the probability of clinical success of an antimicrobial treatment.

19. Rapid mapping of chromosomal breakpoints: from blood to BAC in 20 days
Energy Technology Data Exchange (ETDEWEB)
Lu, Chun-Mei; Kwan, Johnson; Weier, Jingly F.; Baumgartner, Adolf; Wang, Mei; Escudero, Tomas; Munné, Santiago; Weier, Heinz-Ulrich
2009-02-25
Structural chromosome aberrations and the associated segmental or chromosomal aneusomies are major causes of reproductive failure in humans. Despite the fact that carriers of a reciprocal balanced translocation often have no other clinical symptoms or disease, impaired chromosome homologue pairing in meiosis and karyokinesis errors lead to over-representation of translocation carriers in the infertile population and among recurrent pregnancy loss patients. At present, clinicians have no means to select healthy germ cells or balanced zygotes in vivo, but in vitro fertilization (IVF) followed by preimplantation genetic diagnosis (PGD) offers translocation carriers a chance to select balanced or normal embryos for transfer. Although a combination of telomeric and centromeric probes can differentiate unbalanced embryos from normal or balanced ones, the seemingly random position of breakpoints in these IVF patients poses a serious obstacle to differentiating between normal and balanced embryos, which, for most translocation couples, is desirable. Using a carrier with reciprocal translocation t(4;13) as an example, we describe our state-of-the-art approach to the preparation of patient-specific DNA probes that span or 'extend' the breakpoints. With the techniques and resources described here, most breakpoints can be accurately mapped in a matter of days using carrier lymphocytes, and a few extra days are allowed for PGD-probe optimization. The optimized probes will then be suitable for interphase cell analysis, a prerequisite for PGD since blastomeres are biopsied from normally growing day-3 embryos regardless of their position in the mitotic cell cycle.
Furthermore, routine application of these rapid methods should make PGD even more affordable for translocation carriers enrolled in IVF programs.

20. On the Complexity of Rearrangement Problems under the Breakpoint Distance
CERN Document Server
Kovac, Jakub
2011-01-01
Tannier et al. introduced a generalization of the breakpoint distance for multichromosomal genomes. They showed that the median problem under the breakpoint distance is solvable in polynomial time in the multichromosomal circular and mixed models. This is intriguing, since in all other rearrangement models (DCJ, reversal, unichromosomal or multilinear breakpoint models), the problem is NP-hard. The complexity of the small, or even the large, phylogeny problem under the breakpoint distance remained an open problem. We improve the algorithm for the median problem and show that it is equivalent to the problem of finding a maximum-cardinality non-bipartite matching (under linear reduction). On the other hand, we prove that the more general small phylogeny problem is NP-hard. Surprisingly, we show that it is already NP-hard (or even APX-hard) for 4 species (a quartet phylogeny). In other words, while finding an ancestor for 3 species is easy, already finding two ancestors for 4 species is hard. We also show that, in the u...

1. Breakpoints for carbapenemase-producing Enterobacteriaceae: is the problem solved?
Science.gov (United States)
Cantón, Rafael; Canut, Andrés; Morosini, María Isabel; Oliver, Antonio
2014-12-01
The imipenem and meropenem breakpoints for Enterobacteriaceae established by the Clinical and Laboratory Standards Institute (CLSI) are somewhat lower than those established by the European Committee on Antimicrobial Susceptibility Testing (EUCAST), but are identical for ertapenem and doripenem. The differences are primarily due to the various pharmacokinetic/pharmacodynamic (PK/PD) approaches used to define these breakpoints.
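Returning to entry 20 above: the breakpoint distance it generalizes can, in its simplest linear, unsigned, unichromosomal form, be computed directly. A minimal sketch (the multichromosomal models discussed in that entry are more involved):

```python
def breakpoint_distance(a, b):
    """Breakpoints between two orderings of the same n genes (labelled
    1..n): adjacencies of `a` that are not adjacencies of `b`, with
    both ends capped by the sentinels 0 and n+1."""
    n = len(a)
    ax = [0] + list(a) + [n + 1]
    bx = [0] + list(b) + [n + 1]
    # Adjacencies are unordered pairs of neighbouring genes.
    b_adj = {frozenset(p) for p in zip(bx, bx[1:])}
    return sum(frozenset(p) not in b_adj for p in zip(ax, ax[1:]))

# Swapping one adjacent pair of genes breaks two adjacencies:
d = breakpoint_distance([1, 3, 2, 4], [1, 2, 3, 4])  # 2
```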
Both approaches use Monte Carlo simulation with a probability of target attainment (PTA) for reaching the PD target of free drug concentration above the minimum inhibitory concentration (MIC) for at least 40% of the time (~40% fT>MIC). EUCAST uses PTA mean values with confidence intervals (CIs) of 95% and 99%, whereas the CI used by CLSI is 90%. In addition, CLSI uses an "inflated variance" that takes into account the variability of PK parameters in various types of patients, particularly those who are critically ill. By employing this approach, the susceptible CLSI breakpoint captures a higher number of carbapenemase-producing Enterobacteriaceae (CPE) than EUCAST. EUCAST, however, has recently defined cut-off values for screening for CPE. Both committees recommend reporting carbapenem susceptibility results "as tested," demonstrating carbapenemase production only for epidemiological purposes and infection control. New clinical data could potentially modify this recommendation, because carbapenemase production also influences specific treatment guidance concerning carbapenems in combination with other antimicrobials in infections due to CPE. This advice should not be followed when imipenem or meropenem MICs are >8 mg/L, which is coincident with the EUCAST resistant breakpoints for these carbapenems.

2. Fast detection of deletion breakpoints using quantitative PCR
Directory of Open Access Journals (Sweden)
Gulshara Abildinova
2016-01-01
The routine detection of large and medium copy number variants (CNVs) is well established. Hemizygous deletions or duplications in the large Duchenne muscular dystrophy (DMD) gene, responsible for Duchenne and Becker muscular dystrophies, are routinely identified using multiple ligation probe amplification and array-based comparative genomic hybridization. These methods only map deleted or duplicated exons, without providing the exact location of breakpoints.
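The PTA computation described in entry 1 above (the fraction of simulated subjects meeting the ~40% fT>MIC target) can be sketched with a toy one-compartment IV-bolus model. Every PK parameter below is invented for illustration; this is not CLSI's or EUCAST's actual model:

```python
import math
import random

def pta_ft_above_mic(mic, dose=1000.0, tau=8.0, n=10_000, target=0.40, seed=1):
    """Monte Carlo probability of target attainment (PTA) for the PD
    target fT>MIC >= 40% of the dosing interval, under a hypothetical
    one-compartment IV-bolus model with illustrative parameters."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # Between-subject variability: log-normal volume (L) and clearance (L/h).
        v = rng.lognormvariate(math.log(15.0), 0.25)
        cl = rng.lognormvariate(math.log(8.0), 0.30)
        k = cl / v                    # elimination rate constant (1/h)
        c0 = dose / v                 # concentration just after the bolus (mg/L)
        # Time the concentration stays above the MIC within one interval.
        t_above = min(tau, math.log(c0 / mic) / k) if c0 > mic else 0.0
        hits += (t_above / tau) >= target
    return hits / n

# PTA falls as the MIC rises; a PK/PD breakpoint is roughly the highest
# MIC at which the PTA is still judged acceptable (e.g. >= 90%).
low_mic_pta = pta_ft_above_mic(1.0)
high_mic_pta = pta_ft_above_mic(64.0)
```

The committees' differing confidence intervals and "inflated variance" amount to different choices of the distributions sampled in this loop.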
Commonly used methods for the detection of CNV breakpoints include long-range PCR and primer walking, their success being limited by the deletion size, GC content and the presence of DNA repeats. Here, we present a strategy for detecting the breakpoints of medium and large CNVs regardless of their size. The hemizygous deletion of exons 45-50 in the DMD gene and a large autosomal heterozygous PARK2 deletion were used to demonstrate the workflow, which relies on real-time quantitative PCR to narrow down the deletion region and Sanger sequencing for breakpoint confirmation. The strategy is fast, reliable and cost-efficient, making it amenable to widespread use in genetic laboratories.

3. Evaluation of Oxacillin and Cefoxitin Disk and MIC Breakpoints for Prediction of Methicillin Resistance in Human and Veterinary Isolates of the Staphylococcus intermedius Group
Science.gov (United States)
Wu, M. T.; Westblade, L. F.; Dien Bard, J.; Wallace, M. A.; Stanley, T.; Burd, E.; Hindler, J.
2015-01-01
Staphylococcus pseudintermedius is a coagulase-positive species that colonizes the nares and anal mucosa of healthy dogs and cats. Human infections with S. pseudintermedius range in severity from bite wounds and rhinosinusitis to endocarditis; historically, these infections were thought to be uncommon, but new laboratory methods suggest that their true incidence is underreported. Oxacillin and cefoxitin disk and MIC tests were evaluated for the detection of mecA- or mecC-mediated methicillin resistance in 115 human and animal isolates of the Staphylococcus intermedius group (SIG), including 111 Staphylococcus pseudintermedius and 4 Staphylococcus delphini isolates, 37 of which were mecA positive.
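Underlying all of these evaluations is the same categorical reading of an MIC against a breakpoint pair. A minimal sketch with hypothetical cut-off values (real CLSI/EUCAST tables differ per drug and organism, and in whether boundaries are inclusive):

```python
def interpret_mic(mic, s_bp, r_bp):
    """Assign a susceptibility category from an MIC (mg/L), using one
    common convention: S if MIC <= s_bp, R if MIC >= r_bp, and
    I (intermediate) in between. Cut-off values here are hypothetical."""
    if r_bp <= s_bp:
        raise ValueError("resistant breakpoint must exceed susceptible one")
    if mic <= s_bp:
        return "S"
    if mic >= r_bp:
        return "R"
    return "I"

# Re-categorizing the same panel of MICs under two different breakpoint
# pairs shows how a revision alone can shift isolates between categories:
mics = [0.03, 0.12, 1.0, 2.0, 8.0]
old = [interpret_mic(m, 0.06, 2.0) for m in mics]   # ['S', 'I', 'I', 'R', 'R']
new = [interpret_mic(m, 2.0, 8.0) for m in mics]    # ['S', 'S', 'S', 'S', 'R']
```

This is exactly the effect reported in the pneumococcal and S. Typhi entries above: the isolates do not change, only the cut-offs against which their MICs are read.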
The disk and MIC breakpoints evaluated included the Clinical and Laboratory Standards Institute (CLSI) M100-S25 Staphylococcus aureus/Staphylococcus lugdunensis oxacillin MIC breakpoints and cefoxitin disk and MIC breakpoints, the CLSI M100-S25 coagulase-negative Staphylococcus (CoNS) oxacillin MIC breakpoint and cefoxitin disk breakpoint, the CLSI VET01-S2 S. pseudintermedius oxacillin MIC and disk breakpoints, and the European Committee on Antimicrobial Susceptibility Testing (EUCAST) S. pseudintermedius cefoxitin disk breakpoint. The oxacillin results interpreted by the VET01-S2 (disk and MIC) and M100-S25 CoNS (MIC) breakpoints agreed with the results of mecA/mecC PCR for all isolates, with the exception of one false-resistant result (1.3% of mecA/mecC PCR-negative isolates). In contrast, cefoxitin tests performed poorly, ranging from 3 to 89% false susceptibility (very major errors) and 0 to 48% false resistance (major errors). BD Phoenix, bioMérieux Vitek 2, and Beckman Coulter MicroScan commercial automated susceptibility test panel oxacillin MIC results were also evaluated and demonstrated >95% categorical agreement with mecA/mecC PCR results when interpreted using the M100-S25 CoNS breakpoint. The Alere penicillin-binding protein 2a test accurately detected all

4. Evaluation of Oxacillin and Cefoxitin Disk and MIC Breakpoints for Prediction of Methicillin Resistance in Human and Veterinary Isolates of the Staphylococcus intermedius Group
Science.gov (United States)
Wu, M T; Burnham, C-A D; Westblade, L F; Dien Bard, J; Lawhon, S D; Wallace, M A; Stanley, T; Burd, E; Hindler, J; Humphries, R M
2016-03-01

5.
Evolutionary Phylogenetic Networks: Models and Issues
Science.gov (United States)
Nakhleh, Luay
Phylogenetic networks are special graphs that generalize phylogenetic trees to allow for the modeling of non-treelike evolutionary histories. The ability to sequence multiple genetic markers from a set of organisms, and the conflicting evolutionary signals that these markers provide in many cases, have propelled research on and interest in phylogenetic networks to the forefront of computational phylogenetics. Nonetheless, the term 'phylogenetic network' has been used generically to refer to a class of models whose core shared property is tree generalization. Several excellent surveys of the different flavors of phylogenetic networks and methods for their reconstruction have been written recently. However, unlike these surveys, this chapter focuses specifically on one type of phylogenetic network, namely evolutionary phylogenetic networks, which explicitly model reticulate evolutionary events. Further, this chapter focuses less on surveying existing tools, and addresses in more detail issues that are central to the accurate reconstruction of phylogenetic networks.

6. Sequence determinants of breakpoint location during HIV-1 intersubtype recombination
Science.gov (United States)
Baird, Heather A; Galetto, Román; Gao, Yong; Simon-Loriere, Etienne; Abreha, Measho; Archer, John; Fan, Jun; Robertson, David L; Arts, Eric J; Negroni, Matteo
2006-01-01
Retroviral recombination results from strand switching, during reverse transcription, between the two copies of genomic RNA present in the virus. We analysed recombination in part of the envelope gene, between HIV-1 subtype A and D strains. After a single infection cycle, breakpoints clustered in regions corresponding to the constant portions of Env. With some exceptions, a similar distribution was observed after multiple infection cycles, and among recombinant sequences in the HIV Sequence Database.
We compared the experimental data with computer simulations made using a program that only allows recombination to occur whenever an identical base is present in the aligned parental RNAs. Experimental recombination was more frequent than expected on the basis of simulated recombination when, in a region spanning 40 nt from the 5' border of a breakpoint, no more than two discordant bases between the parental RNAs were present. When these requirements were not fulfilled, breakpoints were distributed randomly along the RNA, closer to the distribution predicted by computer simulation. A significant preference for recombination was also observed for regions containing homopolymeric stretches. These results define, for the first time, local sequence determinants for recombination between divergent HIV-1 isolates.

7. Sequence determinants of breakpoint location during HIV-1 intersubtype recombination
Science.gov (United States)
Baird, Heather A.; Galetto, Román; Gao, Yong; Simon-Loriere, Etienne; Abreha, Measho; Archer, John; Fan, Jun; Robertson, David L.; Arts, Eric J.; Negroni, Matteo
2006-01-01
PMID:17003055

8. Chromosomal breakpoints characterization of two supernumerary ring chromosomes 20
Science.gov (United States)
Guediche, N; Brisset, S; Benichou, J-J; Guérin, N; Mabboux, P; Maurin, M-L; Bas, C; Laroudie, M; Picone, O; Goldszmidt, D; Prévot, S; Labrune, P; Tachdjian, G
2010-02-01
The occurrence of an additional ring chromosome 20 is a rare chromosome abnormality, and no common phenotype has yet been described. We report on two new patients presenting with a supernumerary ring chromosome 20, both prenatally diagnosed. The first presented with intrauterine growth retardation and some craniofacial dysmorphism; the second had a normal phenotype except for obesity. Conventional cytogenetic studies showed for each patient a small supernumerary marker chromosome (SMC). Using fluorescence in situ hybridization, these SMCs were shown to correspond to ring chromosomes 20 including parts of the short and long arms of chromosome 20. Detailed molecular cytogenetic characterization showed different breakpoints (20p11.23 and 20q11.23 for Patient 1; 20p11.21 and 20q11.21 for Patient 2) and sizes of the two ring chromosomes 20 (13.6 Mb for case 1 and 4.8 Mb for case 2). A review of the 13 case reports of an extra r(20) ascertained postnatally (8 cases) and prenatally (5 cases) showed varying degrees of phenotypic abnormalities.
We document a detailed molecular cytogenetic characterization of the chromosomal breakpoints of two cases of supernumerary ring chromosome 20. These results emphasize the need to characterize precisely the chromosomal breakpoints of supernumerary ring chromosomes 20 in order to establish genotype-phenotype correlations. This report may be helpful for the prediction of natural history and outcome, particularly in prenatal diagnosis.

9. More accurate recombination prediction in HIV-1 using a robust decoding algorithm for HMMs
Directory of Open Access Journals (Sweden)
Brown, Daniel G
2011-05-01
Background: Identifying recombinations in HIV is important for studying the epidemiology of the virus and aids in the design of potential vaccines and treatments. The previous widely used tool for this task uses the Viterbi algorithm in a hidden Markov model to model recombinant sequences. Results: We apply a new decoding algorithm for this HMM that improves prediction accuracy. Exactly locating breakpoints is usually impossible, since different subtypes are highly conserved in some sequence regions. Our algorithm identifies these sites up to a certain error tolerance, and is more accurate in predicting the location of recombination breakpoints. Our implementation of the algorithm is available at http://www.cs.uwaterloo.ca/~jmtruszk/jphmm_balls.tar.gz. Conclusions: By explicitly accounting for uncertainty in breakpoint positions, our algorithm offers more reliable predictions of recombination breakpoints in HIV-1. We also document a new domain of use for our new decoding approach in HMMs.

10. Investigation of the breakpoint region in stacks with a finite number of intrinsic Josephson junctions
DEFF Research Database (Denmark)
Shukrinov, Yu. M.; Mahfouzi, F.; Pedersen, Niels Falsig
2007-01-01
We study the breakpoint region on the outermost branch of the current-voltage characteristics of stacks with different numbers of intrinsic Josephson junctions.
The effect of the boundary conditions on the breakpoint region is demonstrated. At periodic boundary conditions the breakpoint region is absent ... and the saturated value depends on the coupling between junctions. We explain the results by the parametric resonance at the breakpoint and the excitation of a longitudinal plasma wave by the Josephson oscillations. A way for the diagnostics of the junctions in the stack is proposed.

11. Revised ciprofloxacin breakpoints for Salmonella Typhi: its implications in India
Science.gov (United States)
Balaji, V; Sharma, A; Ranjan, P; Kapil, A
2014-01-01
The rise of multidrug-resistant strains of Salmonella Typhi in the last decade of the previous century led to the use of fluoroquinolones as the drug of choice. However, over the past few years fluoroquinolone resistance has been increasingly reported. When 488 isolates collected between 2010 and 2012 were re-interpreted in accordance with the revised Clinical and Laboratory Standards Institute (CLSI) breakpoints, only 3% of the isolates were susceptible to ciprofloxacin, in comparison to 95% as per the earlier guidelines. Interestingly, a re-emergence of strains susceptible to chloramphenicol, ampicillin and cotrimoxazole is being seen. Amidst the changing susceptibility profile, azithromycin remains a promising alternative.

12. Antimicrobial breakpoint estimation accounting for variability in pharmacokinetics
Directory of Open Access Journals (Sweden)
Nekka, Fahima
2009-06-01
Background: Pharmacokinetic and pharmacodynamic (PK/PD) indices are increasingly being used in the microbiological field to assess the efficacy of a dosing regimen. In contrast to methods using the MIC, PK/PD-based methods reflect in vivo conditions and are more predictive of efficacy. Unfortunately, they entail the use of a single PK-derived value, such as AUC or Cmax, and may thus lead to biased efficacy information when the variability is large.
The aim of the present work was to evaluate the efficacy of a treatment by adjusting classical breakpoint estimation methods to the situation of variable PK profiles. Methods and results We propose a logical generalisation of the usual AUC methods by introducing the concept of "efficiency" for a PK profile, which involves the efficacy function as a weight. We formulated these methods for both classes of concentration- and time-dependent antibiotics. Using drug models and in silico approaches, we provide a theoretical basis for characterizing the efficiency of a PK profile under in vivo conditions. We also used the particular case of variable drug intake to assess the effect of the variable PK profiles generated and to analyse the implications for breakpoint estimation. Conclusion Compared to traditional methods, our weighted AUC approach gives a more powerful PK/PD link and reveals, through examples, interesting issues about the uniqueness of therapeutic outcome indices and antibiotic resistance problems. 13. Genetic mapping and genomic selection using recombination breakpoint data. Science.gov (United States) Xu, Shizhong 2013-11-01 The correct models for quantitative trait locus mapping are the ones that simultaneously include all significant genetic effects. Such models are difficult to handle for high marker density. Improving statistical methods for high-dimensional data appears to have reached a plateau. Alternative approaches must be explored to break the bottleneck of genomic data analysis. The fact that all markers are located in a few chromosomes of the genome leads to linkage disequilibrium among markers. This suggests that dimension reduction can also be achieved through data manipulation. High-density markers are used to infer recombination breakpoints, which then facilitate construction of bins. The bins are treated as new synthetic markers. The number of bins is always a manageable number, on the order of a few thousand. 
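The bin construction described in this entry can be sketched directly: adjacent markers that show the identical genotype pattern across every line of the population carry no recombination breakpoint between them and collapse into one bin, which then acts as a synthetic marker. The genotype coding and toy population below are invented for illustration:

```python
def markers_to_bins(genotypes):
    """Collapse adjacent marker columns that are identical across all lines
    into a single bin (synthetic marker).

    genotypes: list of marker columns, each a tuple of per-line calls.
    Returns (bin_columns, bin_sizes)."""
    bins, sizes = [], []
    for col in genotypes:
        if bins and col == bins[-1]:
            sizes[-1] += 1          # no breakpoint between this marker and the last
        else:
            bins.append(col)        # a breakpoint in some line starts a new bin
            sizes.append(1)
    return bins, sizes

# toy RIL population: 4 lines, 6 markers, A/B parental genotypes (assumed coding)
cols = [("A", "A", "B", "B"), ("A", "A", "B", "B"), ("A", "B", "B", "B"),
        ("A", "B", "B", "B"), ("A", "B", "B", "B"), ("B", "B", "A", "B")]
bins, sizes = markers_to_bins(cols)
print(len(bins), sizes)   # 3 [2, 3, 1]
```

Six markers reduce to three bins here; on real high-density data the same collapse is what brings the marker count down to the "few thousand" the entry mentions.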
Using the bin data of a recombinant inbred line population of rice, we demonstrated genetic mapping using all bins in a simultaneous manner. To facilitate genomic selection, we developed a method to create user-defined (artificial) bins, in which breakpoints are allowed within bins. Using eight traits of rice, we showed that artificial bin data analysis often improves the predictability compared with natural bin data analysis. Of the eight traits, three showed high predictability, two had intermediate predictability, and two had low predictability. A binary trait with a known gene had near-perfect predictability. Genetic mapping using bin data points to a new direction of genomic data analysis. 14. European gene mapping project (EUROGEM): Breakpoint panels for human chromosomes based on the CEPH reference families NARCIS (Netherlands) Attwood, J; Bryant, SP; Bains, R; Povey, R; Povey, S; Rebello, M; Kapsetaki, M; Moschonas, NK; Grzeschik, KH; Otto, M; Dixon, M; Sudworth, HE; Kooy, RF; Wright, A; Teague, P; Terrenato, L; Vergnaud, G; Monfouilloux, S; Weissenbach, J; Alibert, O; Dib, C; Faure, S; Bakker, E; Pearson, NM; Vossen, RHAM; Gal, A; MuellerMyhsok, B; Cann, HM; Spurr, NK 1996-01-01 Meiotic breakpoint panels for human chromosomes 2, 3, 4, 5, 6, 7, 8, 9, 10, 13, 14, 15, 17, 18, 20 and X were constructed from genotypes from the CEPH reference families. Each recombinant chromosome included has a breakpoint well-supported with reference to defined quantitative criteria. The panels 15. Structure of the breakpoint region on current-voltage characteristics of intrinsic Josephson junctions Science.gov (United States) Shukrinov, Yu. M.; Mahfouzi, F.; Suzuki, M. 2008-10-01 A fine structure of the breakpoint region in the current-voltage characteristics of the coupled intrinsic Josephson junctions in the layered superconductors is found.
We establish a correspondence between the features in the current-voltage characteristics and the character of the charge oscillations in superconducting layers in the stack and explain the origin of the breakpoint region structure. 16. Contemporary potencies of minocycline and tetracycline HCL tested against Gram-positive pathogens: SENTRY Program results using CLSI and EUCAST breakpoint criteria. Science.gov (United States) Jones, Ronald N; Wilson, Michael L; Weinstein, Melvin P; Stilwell, Matthew G; Mendes, Rodrigo E 2013-04-01 Tetracycline class agents vary widely in their activity against emerging important antimicrobial-resistant pathogens such as methicillin-resistant Staphylococcus aureus (MRSA) and Acinetobacter spp. Also, published susceptibility breakpoints are discordant between the Clinical and Laboratory Standards Institute (CLSI), the European Committee on Antimicrobial Susceptibility Testing (EUCAST), and regulatory-approved documents. We have assessed the impact of these differences for tetracycline HCL and minocycline when tested against contemporary Gram-positive pathogens. The SENTRY Antimicrobial Surveillance Program (2011) compared minocycline and tetracycline HCL activity via reference methods (M07-A9) using a worldwide collection of S. aureus (SA; 4917 strains with 1955 MRSA), Streptococcus pneumoniae (SPN; 1899), S. pyogenes (GRA; 246), and S. agalactiae (GRB; 217). Regardless of applied categorical breakpoints, minocycline exhibited wider coverage (% susceptible) than tetracycline HCL of 4.5-11.8/0.5-2.6/1.4-2.3/0.4-0.4% for MRSA/SPN/GRB/GRA, respectively. Lower EUCAST susceptible breakpoints produced reduced susceptibility rates for minocycline ranging from no difference (≤0.5 μg/mL) for GRA to -8.9% (≤1 μg/mL) for MRSA (97.2% susceptible by CLSI; 88.3% by EUCAST). 
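The effect of discordant susceptible breakpoints described in this entry can be sketched directly: the same MIC distribution yields different percent-susceptible figures under a CLSI-style cutoff and a lower EUCAST-style cutoff. The cutoffs and MIC values below are illustrative, not the study's data:

```python
def percent_susceptible(mics, s_breakpoint):
    # susceptible = MIC <= susceptible breakpoint (µg/mL)
    return 100.0 * sum(m <= s_breakpoint for m in mics) / len(mics)

# hypothetical minocycline MICs (µg/mL) for a small set of isolates
mics = [0.25, 0.5, 0.5, 1, 1, 2, 4, 4, 8, 16]

clsi_like   = percent_susceptible(mics, 4.0)  # higher cutoff (assumed, CLSI-style)
eucast_like = percent_susceptible(mics, 0.5)  # lower cutoff (assumed, EUCAST-style)
print(clsi_like, eucast_like)   # 80.0 30.0
```

Identical laboratory results, two different susceptibility rates: this is the mechanism behind the CLSI-versus-EUCAST gaps the surveillance data report.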
Use of tetracycline HCL-susceptible results to predict minocycline susceptibility was very accurate (99.0-100.0%), with absolute categorical agreement rates ranging from 92.1% to 98.4% (CLSI) and from 98.4% to 99.6% (EUCAST) for streptococci; the greatest predictive error was noted using the CLSI breakpoints (14.7%) compared to EUCAST criteria (only 5.0%; acceptable), both for MRSA testing dominated by false-resistant results for minocycline. In conclusion, minocycline demonstrates continued superior in vitro activity compared to tetracycline HCL when testing SA (especially MRSA) and pathogenic streptococci. When testing tetracyclines, laboratories must recognize the expanded spectrum of minocycline against 17. Recurrence of Chromosome Rearrangements and Reuse of DNA Breakpoints in the Evolution of the Triticeae Genomes Directory of Open Access Journals (Sweden) Wanlong Li 2016-12-01 Full Text Available Chromosomal rearrangements (CRs) play important roles in karyotype diversity and speciation. While many CR breakpoints have been characterized at the sequence level in yeast, insects, and primates, little is known about the structure of evolutionary CR breakpoints in plant genomes, which are much more dynamic in genome size and sequence organization. Here, we report identification of breakpoints of a translocation between chromosome arms 4L and 5L of Triticeae, which is fixed in several species, including diploid wheat and rye, by comparative mapping and analysis of the draft genome and chromosome survey sequences of the Triticeae species. The wheat translocation joined the ends of breakpoints downstream of a WD40 gene on 4AL and a gene of the PMEI family on 5AL. A basic helix-loop-helix transcription factor gene in the 5AL junction was significantly restructured.
Rye and wheat share the same position for the 4L breakpoint, but the 5L breakpoint positions are not identical, although very close in these two species, indicating the recurrence of 4L/5L translocations in the Triticeae. Although barley does not carry the translocation, collinearity across the breakpoints was violated by putative inversions and/or transpositions. Alignment with model grass genomes indicated that the translocation breakpoints coincided with ancient inversion junctions in the Triticeae ancestor. Our results show that the 4L/5L translocation breakpoints represent two CR hotspots reused during Triticeae evolution, and support breakpoint reuse as a widespread mechanism in all eukaryotes. The mechanisms of the recurrent translocation and its role in Triticeae evolution are also discussed. 18. Monitoring Forest Dynamics in the Andean Amazon: The Applicability of Breakpoint Detection Methods Using Landsat Time-Series and Genetic Algorithms Directory of Open Access Journals (Sweden) Fabián Santos 2017-01-01 -scale projects. In exceptional cases when data quality and quantity were adequate, we recommend the pre-processing approaches, noise-reduction algorithms and breakpoint-detection procedures that can enhance results. Finally, we include recommendations for achieving a faster and more accurate calibration of complex functions applied to remote sensing using genetic algorithms. 19. Impact of CLSI Breakpoint Changes on Microbiology Laboratories and Antimicrobial Stewardship Programs. Science.gov (United States) Heil, Emily L; Johnson, J Kristie 2016-04-01 In 2010, the Clinical and Laboratory Standards Institute (CLSI) lowered the MIC breakpoints for many beta-lactam antibiotics to enhance detection of known resistance among Enterobacteriaceae. The decision to implement these new breakpoints, including the changes announced in both 2010 and 2014, can have a significant impact on both microbiology laboratories and antimicrobial stewardship programs.
In this commentary, we discuss the changes and how implementation of these updated CLSI breakpoints requires partnership between antimicrobial stewardship programs and the microbiology laboratory, including data on the impact that the changes had on antibiotic usage at our own institution. 20. Performance of Vitek 2 for antimicrobial susceptibility testing of Enterobacteriaceae with Vitek 2 (2009 FDA) and 2014 CLSI breakpoints. Science.gov (United States) Bobenchik, April M; Deak, Eszter; Hindler, Janet A; Charlton, Carmen L; Humphries, Romney M 2015-03-01 Vitek 2 (bioMérieux Inc., Durham, NC) is a widely used commercial antimicrobial susceptibility test system. We compared the MIC results obtained using the Vitek 2 AST-GN69 and AST-XN06 cards to those obtained by CLSI broth microdilution (BMD) for 255 isolates of Enterobacteriaceae, including 25 isolates of carbapenem-resistant Enterobacteriaceae. In total, 25 antimicrobial agents were examined. For 10 agents, the MIC data were evaluated using two sets of breakpoints: (i) the Vitek 2 breakpoints, which utilized the 2009 FDA breakpoints at the time of the study and are equivalent to the 2009 CLSI M100-S19 breakpoints, and (ii) the 2014 CLSI M100-S24 breakpoints. There was an overall 98.7% essential agreement (EA). The categorical agreement (CA) was 95.5% using the Vitek 2 breakpoints and 95.7% using the CLSI breakpoints. There was 1 very major error (VME) (0.05%) observed using the Vitek 2 breakpoints (cefazolin) and 8 VMEs (0.5%) using the CLSI breakpoints (2 each for aztreonam, cefepime, and ceftriaxone, and 1 for cefazolin and ceftazidime). Fifteen major errors (MEs) (0.4%) were noted using the Vitek 2 breakpoints and 8 (0.5%) using the CLSI breakpoints. Overall, the Vitek 2 performance was comparable to that of BMD for testing a limited number of Enterobacteriaceae commonly isolated by clinical laboratories. Ongoing studies are warranted to assess performance in isolates with emerging resistance. 1.
Investigation of the breakpoint region in stacks with a finite number of intrinsic Josephson junctions Science.gov (United States) Shukrinov, Yu. M.; Mahfouzi, F.; Pedersen, N. F. 2007-03-01 We study the breakpoint region on the outermost branch of the current-voltage characteristics of stacks with different numbers of intrinsic Josephson junctions. We show that at periodic boundary conditions the breakpoint region is absent for stacks with an even number of junctions. For stacks with an odd number of junctions and for stacks with nonperiodic boundary conditions the breakpoint current increases with the number of junctions and saturates at a value corresponding to the periodic boundary conditions. The region of saturation and the saturated value depend on the coupling between the junctions. We explain the results by the parametric resonance at the breakpoint and excitation of a longitudinal plasma wave by Josephson oscillations. A method for diagnostics of the junctions in the stack is proposed. 2. Influence of Coupling between Junctions on Breakpoint Current in Intrinsic Josephson Junctions Science.gov (United States) Shukrinov, Yu. M.; Mahfouzi, F. 2007-04-01 We study theoretically the current-voltage characteristics of intrinsic Josephson junctions in high-Tc superconductors. An oscillation of the breakpoint current on the outermost branch as a function of coupling α and dissipation β parameters is found. We explain this oscillation as a result of the creation of longitudinal plasma waves at the breakpoint with different wave numbers. We demonstrate the commensurability effect and predict a group behavior of the current-voltage characteristics for the stacks with a different number of junctions. A method to determine the wave number of longitudinal plasma waves from α and β dependence of the breakpoint current is suggested. We model the α and β dependence of the breakpoint current and obtain good agreement with the results of the simulation. 3. 
Distribution of Chromosome Breakpoints in Human Epithelial Cells Exposed to Low- and High-LET Radiation Science.gov (United States) Hada, Megumi; Zhang, Ye; Cucinotta, Francis A.; Feiveson, Alan; Wu, Honglu 2010-01-01 Low- and high-LET radiations produced distinct breakpoint distributions. The difference between the low- and high-LET breakpoint distributions appeared only in break ends involved in interchromosome exchanges. The breakpoint distributions for break ends participating in intrachromosome exchanges were similar. Gene-rich regions do not necessarily have more chromosome breaks. High-LET radiation appeared to produce longer-lived breaks (data not shown) that can migrate a longer distance before rejoining with other breaks. Domains occupied by different segments of the chromosomes may be responsible for the breakpoint distribution. The dose responses for interchromosomal exchanges were linear in all four exposures. However, the dose responses for intrachromosomal exchanges were nonlinear. Increasing dose of high dose rate exposure (Fe-ions or γ-rays) increases the fraction of cells with intrachromosome aberrations, whereas increasing dose of low dose rate exposure (neutrons or γ-rays) does not affect the fraction of cells with intrachromosome aberrations. 4. Influence of coupling between junctions on breakpoint current in intrinsic Josephson junctions OpenAIRE Shukrinov, Yu M.; Mahfouzi, F. 2006-01-01 We study theoretically the current-voltage characteristics of intrinsic Josephson junctions in high-$T_c$ superconductors. An oscillation of the breakpoint current on the outermost branch as a function of coupling $\alpha$ and dissipation $\beta$ parameters is found. We explain this oscillation as a result of the creation of longitudinal plasma waves at the breakpoint with different wave numbers. We demonstrate the commensurability effect and predict a group behavior of the current-voltage ch... 5.
Impact of CLSI Breakpoint Changes on Microbiology Laboratories and Antimicrobial Stewardship Programs OpenAIRE 2016-01-01 In 2010, the Clinical and Laboratory Standards Institute (CLSI) lowered the MIC breakpoints for many beta-lactam antibiotics to enhance detection of known resistance among Enterobacteriaceae. The decision to implement these new breakpoints, including the changes announced in both 2010 and 2014, can have a significant impact on both microbiology laboratories and antimicrobial stewardship programs. In this commentary, we discuss the changes and how implementation of these updated CLSI breakpoin... 6. Tandem repeats and G-rich sequences are enriched at human CNV breakpoints. Directory of Open Access Journals (Sweden) Promita Bose Full Text Available Chromosome breakage in germline and somatic genomes gives rise to copy number variation (CNV) responsible for genomic disorders and tumorigenesis. DNA sequence is known to play an important role in breakage at chromosome fragile sites; however, the sequences susceptible to double-strand breaks (DSBs) underlying CNV formation are largely unknown. Here we analyze 140 germline CNV breakpoints from 116 individuals to identify DNA sequences enriched at breakpoint loci compared to 2800 simulated control regions. We find that, overall, CNV breakpoints are enriched in tandem repeats and sequences predicted to form G-quadruplexes. G-rich repeats are overrepresented at terminal deletion breakpoints, which may be important for the addition of a new telomere. Interstitial deletions and duplication breakpoints are enriched in Alu repeats that in some cases mediate non-allelic homologous recombination (NAHR) between the two sides of the rearrangement. CNV breakpoints are enriched in certain classes of repeats that may play a role in DNA secondary structure, DSB susceptibility and/or DNA replication errors. 7.
Major chromosomal breakpoint intervals in breast cancer tumors co-localize with differentially methylated regions. Directory of Open Access Journals (Sweden) Man-Hung Eric eTang 2012-12-01 Full Text Available Solid tumors exhibit chromosomal rearrangements resulting in gain or loss of multiple loci (copy number variation) and translocations that occasionally result in the creation of novel chimeric genes. In the case of breast cancer, although most individual tumors each have a unique CNV landscape, the breakpoints, as measured over large datasets, appear to be non-randomly distributed in the genome. Breakpoints show a significant regional concentration at genomic loci spanning perhaps several megabases. The proximal cause of these breakpoint concentrations is a subject of speculation but is, as yet, largely unknown. To shed light on this issue, we have performed a bio-statistical analysis on our previously published data for a set of 119 breast tumors and normal controls, where each sample has both high-resolution CNV and methylation data. The method examined the distribution of closeness of breakpoint regions with differentially methylated regions, coupled with additional genomic parameters, such as repeat elements and designated fragile sites in the reference genome. Through this analysis, we have identified a set of 91 regional loci called breakpoint-enriched differentially methylated regions (BEDMRs) characterized by altered DNA methylation in cancer compared to normal cells that are associated with frequent breakpoint concentrations within a distance of 1 Mb. BEDMR loci are further associated with local hypomethylation (66%), concentrations of the Alu SINE repeats within 3 Mb, and tend to occur near a number of cancer-related genes such as the protocadherins, AKT1, DUB3, GAB2. BEDMRs seem to deregulate members of the histone gene family and chromatin remodeling factors, e.g. JMJD1B, which might affect the chromatin structure and disrupt coordinate signaling and repair.
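The closeness analysis this entry describes can be sketched as a permutation test: count how many breakpoints fall within a window of any differentially methylated region, then compare against randomly re-placed control regions. All coordinates, the window size, and the uniform-placement null model below are invented for illustration, not the study's procedure:

```python
import random

def near_any(pos, regions, window):
    # is position `pos` within `window` bp of any (start, end) region?
    return any(start - window <= pos <= end + window for start, end in regions)

def enrichment_p(breakpoints, dmrs, genome_len, window=1_000_000, n_sim=2000, seed=0):
    rng = random.Random(seed)
    observed = sum(near_any(bp, dmrs, window) for bp in breakpoints)
    widths = [e - s for s, e in dmrs]
    hits_ge = 0
    for _ in range(n_sim):
        # null model: re-place each DMR uniformly at random on the chromosome
        shuffled = []
        for w in widths:
            s = rng.randrange(genome_len - w)
            shuffled.append((s, s + w))
        if sum(near_any(bp, shuffled, window) for bp in breakpoints) >= observed:
            hits_ge += 1
    return observed, (hits_ge + 1) / (n_sim + 1)   # empirical p-value

# toy example: breakpoints clustered near two DMRs on a 100 Mb "chromosome"
dmrs = [(10_000_000, 10_050_000), (60_000_000, 60_040_000)]
bps = [9_900_000, 10_400_000, 60_500_000, 59_600_000, 10_200_000, 61_000_000]
obs, p = enrichment_p(bps, dmrs, genome_len=100_000_000)
print(obs, p < 0.05)
```

All six toy breakpoints sit within 1 Mb of a DMR, which random placement almost never reproduces, so the empirical p-value comes out small; real analyses would use a more careful null model than uniform placement.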
From this analysis we propose that preference for chromosomal breakpoints is related to genome structure coupled with alterations in DNA methylation and hence chromatin structure associated 8. Multiple Break-Points Detection in Array CGH Data via the Cross-Entropy Method. Science.gov (United States) Priyadarshana, W J R M; Sofronov, Georgy 2015-01-01 Array comparative genome hybridization (aCGH) is a widely used methodology to detect copy number variations of a genome in high resolution. Knowing the number of break-points and their corresponding locations in genomic sequences serves different biological needs. Primarily, it helps to identify disease-causing genes that have functional importance in characterizing genome-wide diseases. For human autosomes the normal copy number is two, whereas at the sites of oncogenes it increases (gain of DNA) and at the tumour suppressor genes it decreases (loss of DNA). The majority of the current detection methods are deterministic in their set-up and use dynamic programming or different smoothing techniques to obtain the estimates of copy number variations. These approaches limit the search space of the problem due to different assumptions considered in the methods and do not represent the true nature of the uncertainty associated with the unknown break-points in genomic sequences. We propose the Cross-Entropy method, a model-based stochastic optimization technique, as an exact search method to estimate both the number and locations of the break-points in aCGH data. We model the continuous-scale log-ratio data obtained by the aCGH technique as a multiple break-point problem. The proposed methodology is compared with well-established publicly available methods using both artificially generated data and real data. Results show that the proposed procedure is an effective way of estimating the number and, especially, the locations of break-points with a high level of precision.
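The Cross-Entropy idea can be sketched for the simplest case of a single break-point in a noisy log-ratio sequence: sample candidate locations from a parametric distribution, score each by model fit, and re-fit the distribution to the best ("elite") samples until it concentrates on the optimum. The data, scoring function, and CE parameters below are illustrative, not the R package's implementation:

```python
import random
import statistics

def neg_sse(data, k):
    # fit: piecewise-constant mean with one break after index k; higher is better
    left, right = data[:k], data[k:]
    sse = sum((x - statistics.fmean(left)) ** 2 for x in left) \
        + sum((x - statistics.fmean(right)) ** 2 for x in right)
    return -sse

def ce_breakpoint(data, n_samples=80, elite_frac=0.2, iters=25, seed=1):
    rng = random.Random(seed)
    mu, sigma = len(data) / 2, len(data) / 4       # initial sampling distribution
    for _ in range(iters):
        ks = [min(len(data) - 1, max(1, round(rng.gauss(mu, sigma))))
              for _ in range(n_samples)]
        elite = sorted(ks, key=lambda k: neg_sse(data, k), reverse=True)
        elite = elite[: int(n_samples * elite_frac)]
        mu = statistics.fmean(elite)                # CE update: refit to elites
        sigma = max(statistics.pstdev(elite), 0.1)  # keep a little exploration
    return round(mu)

# toy aCGH-like log-ratios: copy-number gain starting at index 30
rng = random.Random(7)
data = [rng.gauss(0.0, 0.2) for _ in range(30)] + [rng.gauss(0.8, 0.2) for _ in range(20)]
est = ce_breakpoint(data)
print(est)   # estimate close to the true break at index 30
```

Extending this to an unknown number of break-points, as the entry does, means also sampling the model dimension; the one-break case shows only the core sample-score-refit loop.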
Availability: The methods described in this article are implemented in the new R package breakpoint and it is available from the Comprehensive R Archive Network at http://CRAN.R-project.org/package=breakpoint. 9. The variant inv(2)(p11.2q13) is a genuinely recurrent rearrangement but displays some breakpoint heterogeneity DEFF Research Database (Denmark) Fickelscher, Ina; Liehr, Thomas; Watts, Kathryn; 2007-01-01 different breakpoint combinations by fluorescence in situ hybridization mapping of 40 cases of inv(2)(p11.2q13) of European origin. For the vast majority of inversions (35/40), the breakpoints fell within the same spanning BACs, which hybridized to both 2p11.2 and 2q13 on the normal and inverted homologues....... Sequence analysis revealed that these BACs contain a significant proportion of intrachromosomal SDs with sequence homology to the reciprocal breakpoint region. In contrast, BACs spanning the rare breakpoint combinations contain fewer SDs, with sequence homology only to the same chromosome arm. Using... 10. A novel approach for determining cancer genomic breakpoints in the presence of normal DNA. Directory of Open Access Journals (Sweden) Yu-Tsueng Liu Full Text Available CDKN2A (encodes p16(INK4A) and p14(ARF)) deletion, which results in both Rb and p53 inactivation, is the most common chromosomal anomaly in human cancers. To precisely map the deletion breakpoints is important for understanding the molecular mechanism of genomic rearrangement and may also be useful for clinical applications. However, current methods for determining the breakpoint are either of low resolution or require the isolation of relatively pure cancer cells, which can be difficult for clinical samples that are typically contaminated with various amounts of normal host cells.
To overcome this hurdle, we have developed a novel approach, designated Primer Approximation Multiplex PCR (PAMP), for enriching breakpoint sequences followed by genomic tiling array hybridization to locate the breakpoints. In a series of proof-of-concept experiments, we were able to identify cancer-derived CDKN2A genomic breakpoints when more than 99.9% of wild-type genome was present in a model system. This design can be scaled up with bioinformatics support and can be applied to validate other candidate cancer-associated loci that are revealed by other more systemic but lower-throughput assays. 11. Preferential Breakpoints in the Recovery of Broken Dicentric Chromosomes in Drosophila melanogaster. Science.gov (United States) Hill, Hunter; Golic, Kent G 2015-10-01 We designed a system to determine whether dicentric chromosomes in Drosophila melanogaster break at random or at preferred sites. Sister chromatid exchange in a Ring-X chromosome produced dicentric chromosomes with two bridging arms connecting segregating centromeres as cells divide. This double bridge can break in mitosis. A genetic screen recovered chromosomes that were linearized by breakage in the male germline. Because the screen required viability of males with this X chromosome, the breakpoints in each arm of the double bridge must be closely matched to produce a nearly euploid chromosome. We expected that most linear chromosomes would be broken in heterochromatin because there are no vital genes in heterochromatin, and breakpoint distribution would be relatively unconstrained. Surprisingly, approximately half the breakpoints are found in euchromatin, and the breakpoints are clustered in just a few regions of the chromosome that closely match regions identified as intercalary heterochromatin. The results support the Laird hypothesis that intercalary heterochromatin can explain fragile sites in mitotic chromosomes, including fragile X.
Opened rings also were recovered after male larvae were exposed to X-rays. This method was much less efficient and produced chromosomes with a strikingly different array of breakpoints, with almost all located in heterochromatin. A series of circularly permuted linear X chromosomes was generated that may be useful for investigating aspects of chromosome behavior, such as crossover distribution and interference in meiosis, or questions of nuclear organization and function. 12. Beckwith–Wiedemann syndrome and uniparental disomy 11p: fine mapping of the recombination breakpoints and evaluation of several techniques Science.gov (United States) Romanelli, Valeria; Meneses, Heloisa N M; Fernández, Luis; Martínez-Glez, Victor; Gracia-Bouthelier, Ricardo; F Fraga, Mario; Guillén, Encarna; Nevado, Julián; Gean, Esther; Martorell, Loreto; Marfil, Victoria Esteban; García-Miñaur, Sixto; Lapunzina, Pablo 2011-01-01 Beckwith–Wiedemann syndrome (BWS) is a phenotypically and genotypically heterogeneous overgrowth syndrome characterized by somatic overgrowth, macroglossia and abdominal wall defects. Other usual findings are hemihyperplasia, embryonal tumours, adrenocortical cytomegaly, ear anomalies, visceromegaly, renal abnormalities, neonatal hypoglycaemia, cleft palate, polydactyly and a positive family history. BWS is a complex, multigenic disorder associated, in up to 90% of patients, with alteration in the expression or function of one or more genes in the 11p15.5 imprinted gene cluster. There are several molecular anomalies associated with BWS and the large proportion of cases, about 85%, is sporadic and karyotypically normal. One of the major categories of BWS molecular alteration (10–20% of cases) is represented by mosaic paternal uniparental disomy (pUPD), namely patients with two paternally derived copies of chromosome 11p15 and no maternal contribution for that. 
In these patients, in addition to the effects of IGF2 overexpression, a decreased level of the maternally expressed gene CDKN1C may contribute to the BWS phenotype. In this paper, we reviewed a series of nine patients with BWS because of pUPD using several methods with the aim to evaluate the percentage of mosaicism, the methylation status at both loci, the extension of the pUPD at the short arm and the breakpoints of recombination. Fine mapping of mitotic recombination breakpoints by single-nucleotide polymorphism-array in individuals with UPD and fine estimation of epigenetic defects will provide a basis for understanding the aetiology of BWS, allowing more accurate prognostic predictions and facilitating management and surveillance of individuals with this disorder. PMID:21248736 13. Beckwith-Wiedemann syndrome and uniparental disomy 11p: fine mapping of the recombination breakpoints and evaluation of several techniques. Science.gov (United States) Romanelli, Valeria; Meneses, Heloisa N M; Fernández, Luis; Martínez-Glez, Victor; Gracia-Bouthelier, Ricardo; F Fraga, Mario; Guillén, Encarna; Nevado, Julián; Gean, Esther; Martorell, Loreto; Marfil, Victoria Esteban; García-Miñaur, Sixto; Lapunzina, Pablo 2011-04-01 Beckwith-Wiedemann syndrome (BWS) is a phenotypically and genotypically heterogeneous overgrowth syndrome characterized by somatic overgrowth, macroglossia and abdominal wall defects. Other usual findings are hemihyperplasia, embryonal tumours, adrenocortical cytomegaly, ear anomalies, visceromegaly, renal abnormalities, neonatal hypoglycaemia, cleft palate, polydactyly and a positive family history. BWS is a complex, multigenic disorder associated, in up to 90% of patients, with alteration in the expression or function of one or more genes in the 11p15.5 imprinted gene cluster. There are several molecular anomalies associated with BWS and the large proportion of cases, about 85%, is sporadic and karyotypically normal. 
One of the major categories of BWS molecular alteration (10-20% of cases) is represented by mosaic paternal uniparental disomy (pUPD), namely patients with two paternally derived copies of chromosome 11p15 and no maternal contribution for that. In these patients, in addition to the effects of IGF2 overexpression, a decreased level of the maternally expressed gene CDKN1C may contribute to the BWS phenotype. In this paper, we reviewed a series of nine patients with BWS because of pUPD using several methods with the aim to evaluate the percentage of mosaicism, the methylation status at both loci, the extension of the pUPD at the short arm and the breakpoints of recombination. Fine mapping of mitotic recombination breakpoints by single-nucleotide polymorphism-array in individuals with UPD and fine estimation of epigenetic defects will provide a basis for understanding the aetiology of BWS, allowing more accurate prognostic predictions and facilitating management and surveillance of individuals with this disorder. 14. Use of SNPs to determine the breakpoints of complex deficiencies, facilitating gene mapping in Caenorhabditis elegans Directory of Open Access Journals (Sweden) Hoffmann Melissa 2005-05-01 Full Text Available Abstract Background Genetic deletions or deficiencies have been used for gene mapping and discovery in various organisms, ranging from the nematode Caenorhabditis elegans all the way to humans. One problem with large deletions is the determination of the location of their breakpoints. This is exacerbated in the case of complex deficiencies that delete a region of the genome, while retaining some of the intervening sequence. Previous methods, using genetic complementation or cytology were hampered by low marker density and were consequently not very precise at positioning the breakpoints of complex deficiencies. 
The identification of increasing numbers of Single Nucleotide Polymorphisms (SNPs) has resulted in the use of these as genetic markers, and consequently in their utilization for defining the breakpoints of deletions using molecular biology methods. Results Here, we show that SNPs can be used to help position the breakpoints of a complex deficiency in C. elegans. The technique uses a combination of genetic crosses and molecular biology to generate robust and highly reproducible results with strong internal controls when trying to determine the breakpoints of deficiencies. The combined use of this technique and standard genetic mapping allowed us to rapidly narrow down the region of interest in our attempts to clone a gene. Conclusion Unlike previous methods used to locate deficiency breakpoints, our technique has the advantage of not being limited by the amount of starting material. It also incorporates internal controls to eliminate false positives and negatives. The technique can also easily be adapted for use in other organisms in which both genetic deficiencies and SNPs are available, thereby aiding gene discovery in these other models. 15. Distribution of Chromosome Breakpoints in Human Epithelial Cells Exposed to Low- and High-LET Radiations Science.gov (United States) Hada, Megumi; Cucinotta, Francis; Wu, Honglu 2009-01-01 16. Breakpoint region in the IV-characteristics of intrinsic Josephson junctions Science.gov (United States) Shukrinov, Yu M.; Mahfouzi, F. 2008-02-01 We study theoretically the IV-characteristics of intrinsic Josephson junctions in HTSC. We solve numerically a set of differential equations for N intrinsic Josephson junctions and investigate the nonlinear dynamics of the system. The charging effect is taken into account. We demonstrate that the breakpoint region in the current-voltage characteristics naturally follows from the solution of the system of the dynamical equations for the phase difference.
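The kind of simulation these entries describe, numerically integrating the dynamical equations for the phase differences in a stack of N capacitively coupled junctions, can be sketched as follows. The equations take a CCJJ-style form with a tridiagonal capacitive-coupling matrix (coupling α, dissipation β, normalized bias current I), but the simplified coupling, boundary treatment, and all parameter values are illustrative assumptions, not the authors' model:

```python
import math

def thomas(sub, diag, sup, rhs):
    # solve a tridiagonal linear system by the Thomas algorithm
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def step(phi, v, I, alpha, beta, dt):
    n = len(phi)
    # capacitive coupling matrix (nonperiodic boundaries): 1+2a on the diagonal,
    # -a off-diagonal; the end junctions couple to one neighbour only
    diag = [1 + alpha] + [1 + 2 * alpha] * (n - 2) + [1 + alpha]
    sub = [0.0] + [-alpha] * (n - 1)
    sup = [-alpha] * (n - 1) + [0.0]
    rhs = [I - math.sin(p) - beta * u for p, u in zip(phi, v)]
    acc = thomas(sub, diag, sup, rhs)          # solve for the phase accelerations
    v2 = [u + a * dt for u, a in zip(v, acc)]  # semi-implicit Euler update
    phi2 = [p + u * dt for p, u in zip(phi, v2)]
    return phi2, v2

# bias the stack above the critical current and average the voltage,
# which is proportional to the mean phase velocity
n, I, alpha, beta, dt = 5, 1.2, 0.5, 0.2, 0.01
phi, v = [0.01 * i for i in range(n)], [0.0] * n   # tiny asymmetric perturbation
samples = []
for k in range(60000):
    phi, v = step(phi, v, I, alpha, beta, dt)
    if k >= 20000:                                 # discard the transient
        samples.append(sum(v) / n)
mean_v = sum(samples) / len(samples)
print(round(mean_v, 2))   # near I/beta = 6 on the resistive branch
```

On the running branch the time-averaged phase velocity is set by the balance β⟨dφ/dt⟩ ≈ I, so the mean voltage lands near I/β; resolving the breakpoint region itself would require sweeping the bias current down and watching where the outermost branch destabilizes.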
In the breakpoint region the plasma mode is a stationary solution of the system, and this fact might be used in applications, particularly in high-frequency devices such as THz oscillators and mixers. 17. Experimental manifestation of the breakpoint region in the current-voltage characteristics of intrinsic Josephson junctions OpenAIRE Irie, A.; Shukrinov, Yu M.; Oya, G. 2008-01-01 The experimental evidence of the breakpoint on the current-voltage characteristics (IVCs) of the stacks of intrinsic Josephson junctions (IJJs) is presented. The influence of the capacitive coupling on the IVCs of Bi$_2$Sr$_2$CaCu$_2$O$_y$ IJJs has been investigated. At 4.2 K, a clear breakpoint region is observed on the branches in the IVCs. It is found that the hysteresis observed on the IVC is suppressed due to the coupling compared with that expected from the McCumber parameter. Measurement... 18. Experimental manifestation of the breakpoint region in the current-voltage characteristics of intrinsic Josephson junctions Science.gov (United States) Irie, A.; Shukrinov, Yu. M.; Oya, G. 2008-10-01 The experimental evidence of the breakpoint on the current-voltage characteristics (IVCs) of the stacks of intrinsic Josephson junctions (IJJs) is presented. The influence of the capacitive coupling on the IVCs of Bi2Sr2CaCu2Oy IJJs has been investigated. At 4.2 K, a clear breakpoint region is observed on the branches in the IVCs. It is found that due to the coupling between junctions, the hysteresis observed on the IVC is small compared to that expected from the McCumber parameter. Measurements agree well with the results predicted by the capacitively coupled Josephson junction model including the diffusion current. 19. Rationale for revised penicillin susceptibility breakpoints versus Streptococcus pneumoniae: coping with antimicrobial susceptibility in an era of resistance.
Science.gov (United States) Weinstein, Melvin P; Klugman, Keith P; Jones, Ronald N 2009-06-01 In January 2008, the Clinical and Laboratory Standards Institute published revised susceptibility breakpoints for penicillin and Streptococcus pneumoniae, and shortly thereafter, the United States Food and Drug Administration similarly revised its breakpoints via changes in the package insert for penicillin. The revised susceptibility breakpoints correspond to parenteral penicillin administered at dosages of 12 million-24 million units per day, with one breakpoint tied to dosages of ≥18 million units per day. Herein, we review the scientific basis for the revisions to the breakpoints, which were supported by microbiologic, pharmacokinetic and/or pharmacodynamic, and clinical data. Clinicians, once again, should feel comfortable prescribing penicillin for pneumococcal pneumonia and other pneumococcal infections outside the central nervous system. 20. Cloning, sequencing, and analysis of inv8 chromosome breakpoints associated with recombinant 8 syndrome. Science.gov (United States) Graw, S L; Sample, T; Bleskan, J; Sujansky, E; Patterson, D 2000-03-01 Rec8 syndrome (also known as "recombinant 8 syndrome" and "San Luis Valley syndrome") is a chromosomal disorder found in individuals of Hispanic descent with ancestry from the San Luis Valley of southern Colorado and northern New Mexico. Affected individuals typically have mental retardation, congenital heart defects, seizures, a characteristic facial appearance, and other manifestations. The recombinant chromosome is rec(8)dup(8q)inv(8)(p23.1q22.1), and is derived from a parental pericentric inversion, inv(8)(p23.1q22.1). Here we report on the cloning, sequencing, and characterization of the 8p23.1 and 8q22 breakpoints from the inversion 8 chromosome associated with Rec8 syndrome. Analysis of the breakpoint regions indicates that they are highly repetitive.
Of 6 kb surrounding the 8p23.1 breakpoint, 75% consists of repetitive gene family members, including Alu, LINE, and LTR elements, and the inversion took place in a small single-copy region flanked by repetitive elements. Analysis of 3.7 kb surrounding the 8q22 breakpoint region reveals that it is 99% repetitive and contains multiple LTR elements, and that the 8q inversion site is within one of the LTR elements. 1. Interphase FISH detection of BCL2 rearrangement in follicular lymphoma using breakpoint-flanking probes NARCIS (Netherlands) Vaandrager, J W; Schuuring, E; Raap, T; Philippo, K; Kleiverda, K; Kluin, P 2000-01-01 Rearrangement of the BCL2 gene is an important parameter for the differential diagnosis of non-Hodgkin lymphomas. Although a relatively large proportion of breakpoints is clustered, many are missed by standard PCR. A FISH assay is therefore desired. Up to now, a lack of probes flanking the BCL2 gene... 2. IGH switch breakpoints in Burkitt lymphoma: exclusive involvement of noncanonical class switch recombination. Science.gov (United States) Guikema, Jeroen E J; de Boer, Conny; Haralambieva, Eugenia; Smit, Laura A; van Noesel, Carel J M; Schuuring, Ed; Kluin, Philip M 2006-09-01 Most chromosomal t(8;14) translocations in sporadic Burkitt lymphomas (BL) are mediated by immunoglobulin class switch recombination (CSR), yet all tumors express IgM, suggesting an incomplete or exclusively monoallelic CSR event. We studied the exact configuration of both the nontranslocated IGH allele and the MYC/IGH breakpoint by applying a combination of low- and high-resolution methods (interphase FISH, DNA fiber FISH, long-distance PCR, and Southern blotting) on 16 BL. IGH class switch events involving the nontranslocated IGH allele were not observed. Thirteen cases had MYC/IGH breakpoints in or nearby IGH switch (S) sites, including five at Smu, three at Sgamma and five at Salpha.
All eight translocations with a breakpoint at Sgamma or Salpha were perfectly reciprocal, without deletion of Cmu-Cdelta or other CH elements. Internal Smu deletions, claimed to be a marker for CSR activity and implicated in stabilization of IgM expression, were found in BL but did not correlate with downstream translocation events. This study shows that switch breakpoints in sporadic BL are exclusively resolved by a noncanonical recombination mechanism involving only one switch region. 3. Phylogenetic Trees From Sequences Science.gov (United States) Ryvkin, Paul; Wang, Li-San In this chapter, we review important concepts and approaches for phylogeny reconstruction from sequence data. We first cover some basic definitions and properties of phylogenetics, and briefly explain how scientists model sequence evolution and measure sequence divergence. We then discuss three major approaches for phylogenetic reconstruction: distance-based phylogenetic reconstruction, maximum parsimony, and maximum likelihood. In the third part of the chapter, we review how multiple phylogenies are compared by consensus methods and how to assess confidence using bootstrapping. At the end of the chapter are two sections that list popular software packages and additional reading. 4. Revisit of fluoroquinolone and azithromycin susceptibility breakpoints for Salmonella enterica serovar Typhi. Science.gov (United States) Das, Surojit; Ray, Ujjwayini; Dutta, Shanta 2016-07-01 In recent years, the increased occurrence of fluoroquinolone (FQ)-resistant Salmonella Typhi isolates has caused considerable difficulty in selecting appropriate antimicrobials for the treatment of typhoid. The World Health Organization (WHO) recommends azithromycin as an empirical treatment option for uncomplicated typhoid. The CLSI updated the breakpoints for disc diffusion (DD) and MIC results of FQs and azithromycin for Salmonella Typhi in 2015, but DD breakpoints for ofloxacin and levofloxacin were not included.
In this study, the inhibition zone diameters and MICs of nalidixic acid, ciprofloxacin, ofloxacin, levofloxacin and azithromycin were determined in Salmonella Typhi Kolkata isolates (n = 146) over a 16-year period (1998 to 2013), and the data were compared with the available CLSI breakpoints. Very major errors and major errors (ME) of FQs were not observed in the study isolates, but the minor error rate of ciprofloxacin (15.8 %) and the ME rate of azithromycin (3.5 %) exceeded the acceptable limit. A positive correlation between FQ MICs and mutations in the quinolone-resistance-determining region (QRDR) showed the reliability of MIC results for determining the FQ susceptibility of Salmonella Typhi (n = 74). Isolates showing decreased ciprofloxacin susceptibility (MIC 0.125-0.5 µg ml-1) were likely to have at least one mutation in the QRDR. The results on DD breakpoints of ofloxacin (resistant, ≤15 mm; intermediate, 16-24 mm; and susceptible, ≥25 mm) and levofloxacin (resistant, ≤18 mm; intermediate, 19-27 mm; and susceptible, ≥28 mm) corroborated those of earlier studies. In view of the emerging FQ- and azithromycin-resistant Salmonella Typhi isolates, DD and MIC breakpoints for those antimicrobials should be revisited routinely. 5. Data Mining Validation of Fluconazole Breakpoints Established by the European Committee on Antimicrobial Susceptibility Testing Science.gov (United States) Cuesta, Isabel; Bielza, Concha; Larrañaga, Pedro; Cuenca-Estrella, Manuel; Laguna, Fernando; Rodriguez-Pardo, Dolors; Almirante, Benito; Pahissa, Albert; Rodríguez-Tudela, Juan L. 2009-01-01 European Committee on Antimicrobial Susceptibility Testing (EUCAST) breakpoints classify Candida strains with a fluconazole MIC ≤ 2 mg/liter as susceptible, those with a fluconazole MIC of 4 mg/liter as representing intermediate susceptibility, and those with a fluconazole MIC > 4 mg/liter as resistant.
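The EUCAST cutoffs just quoted amount to a simple classification rule. The following is an illustrative sketch only; the function name and interface are ours, not from the abstract:

```python
def classify_fluconazole(mic_mg_per_l: float) -> str:
    """Classify a Candida isolate by fluconazole MIC (mg/liter) using the
    EUCAST cutoffs quoted above: <=2 susceptible, 4 intermediate, >4 resistant.
    Illustrative sketch, not code from the cited study."""
    if mic_mg_per_l <= 2:
        return "susceptible"
    if mic_mg_per_l <= 4:
        return "intermediate"
    return "resistant"
```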
Machine learning models are supported by statistical analyses that assess whether the results are statistically meaningful. The aim of this work was to use supervised classification algorithms to analyze the clinical data used to produce the EUCAST fluconazole breakpoints. Five supervised classifiers (J48, Classification and Regression Trees [CART], OneR, Naïve Bayes, and Simple Logistic) were used to analyze two cohorts of patients with oropharyngeal candidosis and candidemia. The target variable was the outcome of the infections, and the predictor variables consisted of values for the MIC or the proportion between the dose administered and the MIC of the isolate (dose/MIC). Statistical power was assessed by determining values for sensitivity and specificity, the false-positive rate, the area under the receiver operating characteristic (ROC) curve, and the Matthews correlation coefficient (MCC). CART obtained the best statistical power for a MIC > 4 mg/liter for detecting failures (sensitivity, 87%; false-positive rate, 8%; area under the ROC curve, 0.89; MCC index, 0.80). For dose/MIC determinations, the target was >75, with a sensitivity of 91%, a false-positive rate of 10%, an area under the ROC curve of 0.90, and an MCC index of 0.80. Other classifiers gave similar breakpoints with lower statistical power. EUCAST fluconazole breakpoints have been validated by means of machine learning methods. These computer tools must be incorporated into the process of developing breakpoints to avoid researcher bias, thus enhancing the statistical power of the model. PMID:19433568 6. Phylogenetic lineages in Entomophthoromycota NARCIS (Netherlands) Gryganskyi, A.P.; Humber, R.A.; Smith, M.E.; Hodge, K.; Huang, B.; Voigt, K.; Vilgalys, R. 2013-01-01 Entomophthoromycota is one of six major phylogenetic lineages among the former phylum Zygomycota.
These early terrestrial fungi share evolutionarily ancestral characters such as coenocytic mycelium and gametangiogamy as a sexual process resulting in zygospore formation. Previous molecular studies ha... 7. Mutation analysis in Duchenne and Becker muscular dystrophy patients from Bulgaria shows a peculiar distribution of breakpoints by intron Energy Technology Data Exchange (ETDEWEB) Todorova, A.; Bronzova, J.; Kremensky, I. [Univ. Hospital of Obstetrics and Gynecology, Sofia (Bulgaria)] [and others] 1996-10-02 For the first time in Bulgaria, a deletion/duplication screening was performed on a group of 84 unrelated Duchenne/Becker muscular dystrophy patients, and the breakpoint distribution in the dystrophin gene was analyzed. Intragenic deletions were detected in 67.8% of patients, and intragenic duplications in 2.4%. A peculiar distribution of deletion breakpoints was found. Only 13.2% of the deletion breakpoints fell in the "classical" hot spot in intron 44, whereas the majority (> 54%) were located within the segment encompassing introns 45-51, which includes intron 50, the richest in breakpoints (16%) in the Bulgarian sample. Comparison with data from Greece and Turkey points to the probable existence of a deletion hot spot within intron 50, which might be a characteristic of populations of the Balkan region. 17 refs., 2 figs. 8. Investigating the role of X chromosome breakpoints in premature ovarian failure Directory of Open Access Journals (Sweden) Baronchelli Simona 2012-07-01 Full Text Available Abstract The importance of the genetic factor in the aetiology of premature ovarian failure (POF) is emphasized by the high percentage of familial cases, and X chromosome abnormalities account for 10% of chromosomal aberrations. In this study, we report the detailed analysis of 4 chromosomal abnormalities involving the X chromosome and associated with POF that were detected during a screening of 269 affected women.
Conventional and molecular cytogenetics were valuable tools for locating the breakpoint regions, and thus the following karyotypes were defined: 46,X,der(X)t(X;19)(p21.1;q13.42)mat, 46,X,t(X;2)(q21.33;q14.3)dn, 46,X,der(X)t(X;Y)(q26.2;q11.223)mat and 46,X,t(X;13)(q13.3;q31)dn. A bioinformatic analysis of the breakpoint regions identified putative candidate genes for ovarian failure near the breakpoint regions on the X chromosome or on autosomes that were involved in the translocation event. The HS6ST1, HS6ST2 and MATER genes were identified, and a review of their functions and the literature revealed an interesting connection to the POF phenotype. Moreover, the 19q13.32 locus is associated with the age of onset of natural menopause. These results support a position effect of the breakpoint on flanking genes, and cytogenetic techniques, in combination with bioinformatic analysis, may help to improve what is known about this puzzling disorder and its diagnostic potential. 9. Fast computation of a string duplication history under no-breakpoint-reuse Science.gov (United States) Brejová, Broňa; Kravec, Martin; Landau, Gad M.; Vinař, Tomáš 2014-01-01 In this paper, we provide an O(n log^2 n log log n log* n) algorithm to compute a duplication history of a string under the no-breakpoint-reuse condition. The motivation of this problem stems from computational biology, in particular, from the analysis of complex gene clusters. The problem is also related to computing edit distance with block operations, but, in our scenario, the start of the history is not fixed, but chosen to minimize the distance measure. PMID:24751867 10.
Diversity of breakpoints of variant Philadelphia chromosomes in chronic myeloid leukemia in Brazilian patients Directory of Open Access Journals (Sweden) Maria de Lourdes Lopes Ferrari Chauffaille 2015-02-01 Full Text Available Background: Chronic myeloid leukemia is a myeloproliferative disorder characterized by the Philadelphia chromosome or t(9;22)(q34.1;q11.2), resulting in the breakpoint cluster region-Abelson tyrosine kinase (BCR-ABL) fusion gene, which encodes a constitutively active tyrosine kinase protein. The Philadelphia chromosome is detected by karyotyping in around 90% of chronic myeloid leukemia patients, but 5-10% may have variant types. Variant Philadelphia chromosomes are characterized by the involvement of another chromosome in addition to chromosome 9 or 22. A variant can be simple, when one other chromosome is involved, or complex, in which two or more chromosomes take part in the translocation. Few studies have reported the incidence of variant Philadelphia chromosomes or the breakpoints involved among Brazilian chronic myeloid leukemia patients. Objective: The aim of this report is to describe the diversity of the variant Philadelphia chromosomes found and highlight some interesting breakpoint candidates for further studies. Methods: The Cytogenetics Section Database was searched for all cases with diagnoses of chronic myeloid leukemia during a 12-year period, and all the variant Philadelphia chromosomes were listed. Results: Fifty (5.17%) of 1071 Philadelphia-positive chronic myeloid leukemia cases were variants. The most frequently involved chromosome was 17, followed by chromosomes 1, 20, 6, 11, 2, 10, 12 and 15. Conclusion: Among all the breakpoints seen in this survey, six had previously been described: 11p15, 14q32, 15q11.2, 16p13.1, 17p13 and 17q21. The fact that some regions are more frequently involved in such rare rearrangements calls attention to a possible predisposition that should be studied further.
Nevertheless, the pathological implication of these variants remains unclear. 11. Susceptibility of Extended-Spectrum-β-Lactamase-Producing Enterobacteriaceae According to the New CLSI Breakpoints Science.gov (United States) Wang, Peng; Hu, Fupin; Xiong, Zizhong; Ye, Xinyu; Zhu, Demei; Wang, Yun F.; Wang, Minggui 2011-01-01 In 2010 the Clinical and Laboratory Standards Institute (CLSI) lowered the susceptibility breakpoints of some cephalosporins and aztreonam for Enterobacteriaceae and eliminated the need to perform screening for extended-spectrum β-lactamases (ESBLs) and confirmatory tests. The aim of this study was to determine how many ESBL-producing strains of three common species of Enterobacteriaceae test susceptible using the new breakpoints. As determined with the CLSI screening and confirmatory tests, 382 consecutive ESBL-producing strains were collected at Huashan Hospital between 2007 and 2008, including 158 strains of Escherichia coli, 164 of Klebsiella pneumoniae, and 60 of Proteus mirabilis. Susceptibility was determined by the CLSI agar dilution method. CTX-M-, TEM-, and SHV-specific genes were determined by PCR amplification and sequencing. blaCTX-M genes alone or in combination with blaSHV were present in 92.7% (354/382) of these ESBL-producing strains. Forty-two (25.6%) strains of K. pneumoniae harbored SHV-type ESBLs alone or in combination. No TEM ESBLs were found. Utilizing the new breakpoints, all 382 strains were resistant to cefazolin, cefotaxime, and ceftriaxone, while 85.0 to 96.7% of P. mirabilis strains tested susceptible to ceftazidime, cefepime, and aztreonam, 41.8 to 45.6% of E. coli strains appeared to be susceptible to ceftazidime and cefepime, and 20.1% of K. pneumoniae were susceptible to cefepime. In conclusion, all ESBL-producing strains of Enterobacteriaceae would be reported to be resistant to cefazolin, cefotaxime, and ceftriaxone by using the new CLSI breakpoints, but a substantial number of ESBL-containing P. 
mirabilis and E. coli strains would be reported to be susceptible to ceftazidime, cefepime, and aztreonam, which is likely due to the high prevalence of CTX-M type ESBLs. PMID:21752977 12. A Dynamic Programming Algorithm For (1,2)-Exemplar Breakpoint Distance. Science.gov (United States) Wei, Zhexue; Zhu, Daming; Wang, Lusheng 2015-07-01 The exemplar breakpoint distance problem is motivated by finding conserved sets of genes between two genomes. It asks to find respective exemplars in two genomes to minimize the breakpoint distance between them. If one genome has no repeated gene (called a trivial genome) and the other has genes repeating at most twice, it is referred to as the (1, 2)-exemplar breakpoint distance problem, EBD(1, 2) for short. Little work has been done on algorithm design for this problem to date. In this article, we propose a parameter to describe the maximum physical span between two copies of a gene in a genome, and based on it, design a fixed-parameter algorithm for EBD(1, 2). Using a dynamic programming approach, our algorithm takes O(4^s n^2) time and O(4^s n) space to solve an EBD(1, 2) instance with two genomes of n genes, where the two copies of each duplicated gene in the second genome span at most s gene copies. Our algorithm can also be used to compute the maximum number of adjacencies between two genomes. The algorithm has been implemented in C++. Simulations on randomly generated data have verified the effectiveness of our algorithm. The software package is available from the authors. 13. Deletion breakpoint mapping on chromosome 9p21 in breast cancer cell line MCF-7 Directory of Open Access Journals (Sweden) Hua-ping XIE 2012-05-01 Full Text Available Objective: To map the deletion breakpoint of chromosome 9p21 in the breast cancer cell line MCF-7. Methods: The deletion of chromosome 9p21 was checked by Multiplex Ligation-dependent Probe Amplification (MLPA) in MCF-7.
Subsequently, the deletion breakpoint was amplified by long-range PCR and the deletion region was narrowed by primer walking. Finally, the deletion position was confirmed by sequencing. Results: The deletion was found by MLPA to start within the MTAP gene and end within the CDKN2A gene. Based on long-range PCR and primer walking, the deletion was confirmed by sequencing to cover the region from chr9:21819532 to chr9:21989622, a deletion size of 170 kb, starting within intron 4 of MTAP and ending within intron 1 near exon 1β of CDKN2A. Conclusions: Long-range PCR is an efficient way to detect deletion breakpoints. In MCF-7, the deletion has been confirmed to be 170 kb, starting within the MTAP gene and ending within the CDKN2A gene. The significance of the deletion warrants further research. 14. Sequence breakpoints in the aflatoxin biosynthesis gene cluster and flanking regions in nonaflatoxigenic Aspergillus flavus isolates. Science.gov (United States) Chang, Perng-Kuang; Horn, Bruce W; Dorner, Joe W 2005-11-01 Aspergillus flavus populations are genetically diverse. Isolates that produce either, neither, or both aflatoxins and cyclopiazonic acid (CPA) are present in the field. We investigated defects in the aflatoxin gene cluster in 38 nonaflatoxigenic A. flavus isolates collected from the southern United States. PCR assays using aflatoxin-gene-specific primers grouped these isolates into eight (A-H) deletion patterns. Patterns C, E, G, and H, which contain 40 kb deletions, were examined for their sequence breakpoints. Pattern C has one breakpoint in the cypA 3' untranslated region (UTR) and another in the verA coding region. Pattern E has a breakpoint in the amdA coding region and another in the ver1 5'UTR. Pattern G contains a deletion identical to the one found in pattern C and has another deletion that extends from the cypA coding region to one end of the chromosome, as suggested by the presence of telomeric sequence repeats, CCCTAATGTTGA.
Pattern H has a deletion of the entire aflatoxin gene cluster from the hexA coding region in the sugar utilization gene cluster to the telomeric region. Thus, deletions in the aflatoxin gene cluster among A. flavus isolates are not rare, and the patterns appear to be diverse. Genetic drift may be a driving force responsible for the loss of the entire aflatoxin gene cluster in nonaflatoxigenic A. flavus isolates when aflatoxins have lost their adaptive value in nature. 15. SoftSearch: integration of multiple sequence features to identify breakpoints of structural variations. Directory of Open Access Journals (Sweden) Steven N Hart Full Text Available BACKGROUND: Structural variation (SV) represents a significant, yet poorly understood, contribution to an individual's genetic makeup. Advanced next-generation sequencing technologies are widely used to discover such variations, but there is no single detection tool that is considered a community standard. In an attempt to fulfil this need, we developed an algorithm, SoftSearch, for discovering structural variant breakpoints in Illumina paired-end next-generation sequencing data. SoftSearch combines multiple strategies for detecting SV, including split-read, discordant read-pair, and unmated pairs. Co-localized split-reads and discordant read pairs are used to refine the breakpoints. RESULTS: We developed and validated SoftSearch using real and synthetic datasets. SoftSearch's key features are 1) not requiring secondary (or exhaustive primary) alignment, 2) portability into established sequencing workflows, and 3) applicability to any DNA-sequencing experiment (e.g. whole genome, exome, custom capture, etc.). SoftSearch identifies breakpoints from a small number of soft-clipped bases from split reads and a few discordant read-pairs which on their own would not be sufficient to make an SV call. CONCLUSIONS: We show that SoftSearch can identify more true SVs by combining multiple sequence features.
SoftSearch was able to call clinically relevant SVs in the BRCA2 gene that were not reported by other tools, while offering significantly improved overall performance. 16. A 350-kb cosmid contig in 3p14.2 that crosses the t(3;8) hereditary renal cell carcinoma translocation breakpoint and 17 aphidicolin-induced FRA3B breakpoints Energy Technology Data Exchange (ETDEWEB) Paradee, W. [Wayne State Univ. School of Medicine, Detroit, MI (United States)]; Wilke, C.M.; Hoge, A. [Univ. of Michigan Medical School, Ann Arbor, MI (United States)] [and others] 1996-07-11 The constitutive fragile site at human chromosomal band 3p14.2, FRA3B, has been described as the most active common fragile site in the human genome. Previous work demonstrated that a 1330-kb YAC clone, YC850A6, spans both the t(3;8) translocation and FRA3B and also encompasses FRA3B-associated breakpoints; this YAC was used to construct a multi-hit cosmid library. Screening of this library resulted in a 350-kb cosmid contig that extends distally from the t(3;8) translocation breakpoint. Seventeen aphidicolin-induced 3p14.2 breakpoints derived from hamster-human hybrids were mapped within this cosmid contig. These breakpoints were found to localize in two distinct clusters, separated by 200 kb, which lie on either side of a region of frequent breakage within FRA3B as defined by FISH analysis using cosmids from the contig. The distribution of these breakpoints, together with the region of frequent chromosomal breakage mapped by FISH analysis, further confirms that FRA3B comprises several hundred kilobases of DNA sequence within 3p14.2. The 350-kb contig and the cosmid library constructed from YAC YC850A6 will be essential for further characterization of the region surrounding FRA3B and for experiments to determine the molecular basis of the fragility of FRA3B. 17.
Specific metaphase and interphase detection of the breakpoint region in 8q24 of Burkitt lymphoma cells by triple-color fluorescence in situ hybridization OpenAIRE Ried, Thomas; Lengauer, Christoph; Cremer, Thomas; Wiegant, Joop; Raap, Anton K.; Van Der Ploeg, Mels; Groitl, Peter; Lipp, Martin 1992-01-01 Triple fluorescence in situ hybridization with a plasmid DNA library from sorted human chromosomes 8 in combination with bacteriophage clones flanking the breakpoint in 8q24 of the Burkitt lymphoma cell line Jl was used for the specific delineation of this breakpoint in individual tumor cells. With this approach, tumor-specific breakpoints in translocation chromosomes can be detected at all stages of the cell cycle with high specificity. 18. The effects of acute L-carnitine administration on ventilatory breakpoint and exercise performance during incremental exercise Directory of Open Access Journals (Sweden) Mojtaba Kaviani 2009-01-01 Full Text Available (Received 31 October 2009; Accepted 10 March 2010) Abstract Background and purpose: Many athletes adopt nutritional manipulations to improve their performance. Among the substances generally consumed is carnitine (L-trimethyl-3-hydroxy-ammoniobutanoate), which has been used by athletes as an ergogenic aid due to its role in the transport of long-chain fatty acids across mitochondrial membranes. Nutritional supplements containing carbohydrates, proteins, vitamins, and minerals have been widely used in various sporting fields to provide a boost to the recommended daily allowance. The aim of this study is to investigate the effects of acute L-carnitine administration on ventilatory breakpoint and exercise performance during incremental exercise. Materials and methods: This study was double-blind, randomized and crossover in design. The subjects were 12 randomly selected active male physical education students, 21.75±0.64 years old, with a mean body mass index (BMI) of 23.7±0.94 kg/m2, divided into 2 groups.
They received orally either 2 g of L-carnitine dissolved in 200 ml of water plus 6 drops of lemon juice, or a placebo (6 ml of lemon juice dissolved in 200 ml of water), 90 minutes before they began to exercise on a treadmill. They performed a modified Conconi test protocol to exhaustion. One-way analysis of variance with repeated measurements was used for data analysis. Results: The results showed that exercise performance improved in the L-carnitine group (2980±155 m) compared with the placebo group (2331±51 m). Furthermore, no significant difference was found in ventilatory breakpoint between the two groups. Conclusion: This finding indicates that administration of L-carnitine 90 minutes prior to exercise may improve performance, although ventilatory breakpoint, one of the anaerobic system indices, was unaffected. J Mazand Univ Med Sci 2009; 19(73): 43-50 (Persian). 19. Using Sorting by Reversal: Breakpoint Graph for Gene Assembly in Ciliates Science.gov (United States) Brijder, Robert; Jan Hoogeboom, Hendrik 2007-09-01 The theory of gene assembly in ciliates has much in common with the theory of sorting by reversal. Both model processes that are based on splicing and have a fixed begin and end product. The main difference is the type of splicing operations used to obtain the end product from the begin product. In this overview paper we show that the concept of the breakpoint graph, known from the theory of sorting by reversal, has many uses in the theory of gene assembly. Our aim is to present the material in an intuitive and informal manner to allow for an efficient introduction into the subject. 20. Concurrent Breakpoints Science.gov (United States) 2011-12-18 ... and raytracer from the Java Grande Forum [16]; and Jigsaw, W3C's leading-edge Web server platform. For Jigsaw, we used a test harness that simulates... [table of benchmark results omitted]. 1.
Fast phylogenetic DNA barcoding DEFF Research Database (Denmark) Terkelsen, Kasper Munch; Boomsma, Wouter Krogh; Willerslev, Eske 2008-01-01 We present a heuristic approach to the DNA assignment problem based on phylogenetic inferences using constrained neighbour joining and non-parametric bootstrapping. We show that this method performs as well as the more computationally intensive full Bayesian approach in an analysis of 500 insect DNA sequences obtained from GenBank. We also analyse a previously published dataset of environmental DNA sequences from soil from New Zealand and Siberia, and use these data to illustrate the fact that statistical approaches to the DNA assignment problem allow for more appropriate criteria for determining the taxonomic level at which a particular DNA sequence can be assigned. 2. Phylogenetic trees in bioinformatics Energy Technology Data Exchange (ETDEWEB) Burr, Tom L [Los Alamos National Laboratory] 2008-01-01 Genetic data is often used to infer evolutionary relationships among a collection of viruses, bacteria, animal or plant species, or other operational taxonomic units (OTU). A phylogenetic tree depicts such relationships and provides a visual representation of the estimated branching order of the OTUs. Tree estimation is unique for several reasons, including: the types of data used to represent each OTU; the use of probabilistic nucleotide substitution models; the inference goals involving both tree topology and branch length; and the huge number of possible trees for a given sample of a very modest number of OTUs, which implies that finding the best tree(s) to describe the genetic data for each OTU is computationally demanding. Bioinformatics is too large a field to review here. We focus on the aspect of bioinformatics that includes the study of similarities in genetic data from multiple OTUs. Although research questions are diverse, a common underlying challenge is to estimate the evolutionary history of the OTUs.
Therefore, this paper reviews the role of phylogenetic tree estimation in bioinformatics, available methods and software, and identifies areas for additional research and development. 3. Entanglement, Invariants, and Phylogenetics Science.gov (United States) Sumner, J. G. 2007-10-01 This thesis develops and expands upon known techniques of mathematical physics relevant to the analysis of the popular Markov model of phylogenetic trees, required in biology to reconstruct the evolutionary relationships of taxonomic units from biomolecular sequence data. The techniques of mathematical physics are numerous and have been developed over a long period. The Markov model of phylogenetics and its analysis is a relatively new technique, where most progress to date has been achieved using discrete mathematics. This thesis takes a group-theoretical approach to the problem, beginning with a remarkable mathematical parallel to the process of scattering in particle physics; this is shown to equate to branching events in the evolutionary history of molecular units. The major technical result of this thesis is the derivation of existence proofs and computational techniques for calculating polynomial group invariant functions on a multi-linear space where the group action is that relevant to a Markovian time evolution. The practical results of this thesis are an extended analysis of the use of invariant functions in distance-based methods and the presentation of a new reconstruction technique for quartet trees which is consistent with the most general Markov model of sequence evolution. 4. Eleven X chromosome breakpoints associated with premature ovarian failure (POF) map to a 15-Mb YAC contig spanning Xq21.
Science.gov (United States) Sala, C; Arrigo, G; Torri, G; Martinazzi, F; Riva, P; Larizza, L; Philippe, C; Jonveaux, P; Sloan, F; Labella, T; Toniolo, D 1997-02-15 Eleven balanced X-autosome translocations associated with premature ovarian failure (POF) were mapped to a YAC contig spanning most of Xq21, constructed between the DXS223 and DXS1171 loci. The contig corresponds to a genomic region of about 15 Mb and contains the whole X-Y homologous region. The most proximal and most distal breakpoints associated with POF were mapped 15 Mb apart. The remaining breakpoints were localized along this large region, in the X-specific and in the X-Y homologous region. Four of the YACs contained two breakpoints in the same or in flanking STS intervals. Our results confirm the cytological findings and suggest that a minimum of eight different genes in Xq21 may be involved in ovary development. Interruption of such loci could be the cause of POF. 5. 'Break-point Checkerboard Plate' for screening of appropriate antibiotic combinations against multidrug-resistant Pseudomonas aeruginosa. Science.gov (United States) Tateda, Kazuhiro; Ishii, Yoshikazu; Matsumoto, Tetsuya; Yamaguchi, Keizo 2006-01-01 The increase of multidrug-resistant Pseudomonas aeruginosa (MDRP) is becoming a serious problem in the clinical setting. Although the checkerboard method for determining the FIC index and synergistic effects of antibiotic combinations is useful, it is not well suited to routine testing, mainly because of its time-consuming and labor-intensive nature. Here we report the 'Break-point Checkerboard Plate', in which breakpoint concentrations, such as 'S' (sensitive) and 'I' (intermediate), were combined in a microtiter plate with 8 antibiotics, including a carbapenem, an aminoglycoside and a fluoroquinolone. The results obtained from 12 strains of MDRP demonstrated a strong synergistic effect of some antibiotic combinations at clinically relevant concentrations.
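The FIC index that the checkerboard method in the entry above is used to determine has a one-line definition: the sum of each drug's MIC in combination divided by its MIC alone, with FICI ≤ 0.5 conventionally read as synergy. A minimal sketch with hypothetical MIC values (not data from the study):

```python
def fic_index(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    # Fractional inhibitory concentration (FIC) index for a two-drug
    # checkerboard; FICI <= 0.5 is the conventional synergy cut-off.
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

# Hypothetical MICs in mg/L, purely for illustration:
print(fic_index(8, 16, 1, 2))  # -> 0.25, read as synergy
```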
Our data suggest the usefulness of the 'Break-point Checkerboard Plate' for screening appropriate antibiotic combinations against drug-resistant organisms, including MDRP. 6. Molecular cloning and analysis of breakpoints on ring chromosome 17 in a patient with autism. Science.gov (United States) Vazna, Alzbeta; Havlovicova, Marketa; Sedlacek, Zdenek 2008-01-15 The breakpoint junction on a ring chromosome 17 in a girl with autism, mental retardation, mild dysmorphism and neurofibromatosis was identified and analysed at the nucleotide level. The extent of the deleted segments was about 1.9 Mb on 17p and about 1.0 Mb on 17q. The structure of the junction between the 17p and 17q arms, especially the lack of significant homology between the juxtaposed genomic regions and the presence of short microhomology at the junction site, indicated non-homologous end joining as the most likely mechanism leading to the rearrangement. In addition to the 17p-17q junction itself, a de novo 1 kb deletion at a distance of 400 bp from the junction was identified, which most likely arose as part of the rearrangement. The defect directly inactivated 3 genes, and the deleted terminal chromosome segments harboured 27 and 14 protein-coding genes from 17p and 17q, respectively. Several of the genes affected by the rearrangement are candidates for the symptoms observed in the patient. Additional rearrangements similar to the 1 kb deletion observed in our patient may remain undetected but can contribute to the phenotype of patients with chromosomal aberrations. They can also be the reason for repeated failures to clone breakpoint junctions in other patients described in the literature. 7.
Genomic instability in rat: Breakpoints induced by ionising radiation and interstitial telomeric-like sequences Energy Technology Data Exchange (ETDEWEB) Camats, Nuria [Institut de Biotecnologia i Biomedicina (IBB), Universitat Autonoma de Barcelona, 08193 Barcelona (Spain); Departament de Biologia Cel.lular, Fisiologia i Immunologia Universitat Autonoma de Barcelona, 08193 Barcelona (Spain); Ruiz-Herrera, Aurora [Departament de Biologia Cel.lular, Fisiologia i Immunologia Universitat Autonoma de Barcelona, 08193 Barcelona (Spain); Parrilla, Juan Jose [Servicio de Ginecologia y Obstetricia, Hospital Universitario Virgen de la Arrixaca, Ctra, Madrid-Cartagena, s/n, El Palmar, 30120 Murcia (Spain); Acien, Maribel [Servicio de Ginecologia y Obstetricia, Hospital Universitario Virgen de la Arrixaca, Ctra, Madrid-Cartagena, s/n, El Palmar, 30120 Murcia (Spain); Paya, Pilar [Servicio de Ginecologia y Obstetricia, Hospital Universitario Virgen de la Arrixaca, Ctra, Madrid-Cartagena, s/n, El Palmar, 30120 Murcia (Spain); Giulotto, Elena [Dipartimento di Genetica e Microbiologia Adriano Buzzati Traverso, Universita degli Studi di Pavia, 27100 Pavia (Italy); Egozcue, Josep [Departament de Biologia Cel.lular, Fisiologia i Immunologia Universitat Autonoma de Barcelona, 08193 Barcelona (Spain); Garcia, Francisca [Institut de Biotecnologia i Biomedicina (IBB), Universitat Autonoma de Barcelona, 08193 Barcelona (Spain); Garcia, Montserrat [Institut de Biotecnologia i Biomedicina (IBB), Universitat Autonoma de Barcelona, 08193 Barcelona (Spain) and Departament de Biologia Cellular, Fisiologia i Immunologia Universitat Autonoma de Barcelona, 08193 Barcelona (Spain)]. E-mail: Montserrat.Garcia.Caldes@uab.es 2006-03-20 The Norwegian rat (Rattus norvegicus) is the most widely studied experimental species in biomedical research although little is known about its chromosomal structure. 
The characterisation of possible unstable regions of the karyotype of this species would contribute to a better understanding of its genomic architecture. The cytogenetic effects of ionising radiation have been widely used for the study of genomic instability, and the importance of interstitial telomeric-like sequences (ITSs) in instability of the genome has also been reported in previous studies in vertebrates. In order to describe the unstable chromosomal regions of R. norvegicus, the distributions of breakpoints induced by X-irradiation and of ITSs in its karyotype were analysed in this work. For the X-irradiation analysis, 52 foetuses (from 14 irradiated rats) were studied, 4803 metaphases were analysed, and a total of 456 breakpoints induced by X-rays were detected, located in 114 chromosomal bands, with 25 of them significantly affected by X-irradiation (hot spots). For the analysis of ITSs, three foetuses (from three rats) were studied, 305 metaphases were analysed and 121 ITSs were detected, widely distributed in the karyotype of this species. Seventy-six percent of all hot spots analysed in this study co-localised with ITSs. 8. Control of onchocerciasis in Africa: threshold shifts, breakpoints and rules for elimination. Science.gov (United States) Duerr, Hans P; Raddatz, Günter; Eichner, Martin 2011-04-01 Control of onchocerciasis in Africa is currently based on annual community-directed treatment with ivermectin (CDTI), which has been assumed not to be efficient enough to bring about elimination. However, elimination has recently been reported to have been achieved by CDTI alone in villages of Senegal and Mali, reviving debate on the eradicability of onchocerciasis in Africa. We investigate the eradicability of onchocerciasis by examining threshold shifts and breakpoints predicted by a stochastic transmission model that has been fitted extensively to data.
We show that elimination based on CDTI relies on shifting the threshold biting rate to a level that is higher than the annual biting rate. Breakpoints become relevant in the context of when to stop CDTI. In order for the model to predict a good chance for CDTI to eliminate onchocerciasis, facilitating factors such as the macrofilaricidal effect of ivermectin must be assumed. A chart predicting the minimum efficacy of CDTI required for elimination, dependent on the annual biting rate, is provided. Generalisable recommendations for strategies for the elimination of onchocerciasis are derived, particularly regarding the roles of vectors, the residual infection rate under control, and a low-spreader problem originating from patients with low parasite burdens. 9. Multiple genetic loci within 11p15 defined by Beckwith-Wiedemann syndrome rearrangement breakpoints and subchromosomal transferable fragments. Science.gov (United States) Hoovers, J M; Kalikin, L M; Johnson, L A; Alders, M; Redeker, B; Law, D J; Bliek, J; Steenman, M; Benedict, M; Wiegant, J 1995-01-01 Beckwith-Wiedemann syndrome (BWS) involves fetal overgrowth and predisposition to a wide variety of embryonal tumors of childhood. We have previously found that BWS is genetically linked to 11p15 and that this same band shows loss of heterozygosity in the types of tumors to which children with BWS are susceptible. However, 11p15 contains > 20 megabases, and therefore, the BWS and tumor suppressor genes could be distinct. To determine the precise physical relationship between these loci, we isolated yeast artificial chromosomes, and cosmid libraries from them, within the region of loss of heterozygosity in embryonal tumors. Five germ-line balanced chromosomal rearrangement breakpoint sites from BWS patients, as well as a balanced chromosomal translocation breakpoint from a rhabdoid tumor, were isolated within a 295- to 320-kb cluster defined by a complete cosmid contig crossing these breakpoints.
This breakpoint cluster terminated approximately 100 kb centromeric to the imprinted gene IGF2 and 100 kb telomeric to p57KIP2, an inhibitor of cyclin-dependent kinases, and was located within subchromosomal transferable fragments that suppressed the growth of embryonal tumor cells in genetic complementation experiments. We have identified 11 transcribed sequences in this BWS/tumor suppressor coincident region, one of which corresponded to p57KIP2. However, three additional BWS breakpoints were > 4 megabases centromeric to the other five breakpoints and were excluded from the tumor suppressor region defined by subchromosomal transferable fragments. Thus, multiple genetic loci define BWS and tumor suppression on 11p15. PMID:8618920 10. Validation of antibiotic susceptibility testing guidelines in a routine clinical microbiology laboratory exemplifies general key challenges in setting clinical breakpoints. Science.gov (United States) Hombach, Michael; Courvalin, Patrice; Böttger, Erik C 2014-07-01 This study critically evaluated the new European Committee on Antimicrobial Susceptibility Testing (EUCAST) antibiotic susceptibility testing guidelines on the basis of a large set of disk diffusion diameters determined for clinical isolates. We report several paradigmatic problems that illustrate key issues in the selection of clinical susceptibility breakpoints, which are of general importance not only for EUCAST but for all guideline systems, i.e., (i) the need for species-specific determinations of clinical breakpoints/epidemiological cutoffs (ECOFFs), (ii) problems arising from pooling data from various sources, and (iii) the importance of the antibiotic disk content for separating non-wild-type and wild-type populations. 11. SVA retrotransposon insertion-associated deletion represents a novel mutational mechanism underlying large genomic copy number changes with non-recurrent breakpoints NARCIS (Netherlands) J. Vogt (Julia); K.
Bengesser (Kathrin); K.B.M. Claes (Kathleen B.M.); K. Wimmer (Katharina); V.-F. Mautner (Victor-Felix); R. van Minkelen (Rick); E. Legius (Eric); H. Brems (Hilde); M. Upadhyaya (Meena); J. Högel (Josef); C. Lazaro (Conxi); T. Rosenbaum (Thorsten); S. Bammert (Simone); L. Messiaen (Ludwine); D.N. Cooper (David); H. Kehrer-Sawatzki (Hildegard) 2014-01-01 Background: Genomic disorders are caused by copy number changes that may exhibit recurrent breakpoints processed by nonallelic homologous recombination. However, region-specific disease-associated copy number changes have also been observed which exhibit non-recurrent breakpoints. The me… 12. Incompletely resolved phylogenetic trees inflate estimates of phylogenetic conservatism. Science.gov (United States) Davies, T Jonathan; Kraft, Nathan J B; Salamin, Nicolas; Wolkovich, Elizabeth M 2012-02-01 The tendency for more closely related species to share similar traits and ecological strategies can be explained by their longer shared evolutionary histories and represents phylogenetic conservatism. How strongly species traits co-vary with phylogeny can significantly impact how we analyze cross-species data and can influence our interpretation of assembly rules in the rapidly expanding field of community phylogenetics. Phylogenetic conservatism is typically quantified by analyzing the distribution of species values on the phylogenetic tree that connects them. Many phylogenetic approaches, however, assume a completely sampled phylogeny: while we have good estimates of deeper phylogenetic relationships for many species-rich groups, such as birds and flowering plants, we often lack information on more recent interspecific relationships (i.e., within a genus). A common solution has been to represent these relationships as polytomies on trees using taxonomy as a guide. Here we show that such trees can dramatically inflate estimates of phylogenetic conservatism quantified using S. P. Blomberg et al.'s K statistic.
Using simulations, we show that even randomly generated traits can appear to be phylogenetically conserved on poorly resolved trees. We provide a simple rarefaction-based solution that can reliably retrieve unbiased estimates of K, and we illustrate our method using data on first flowering times from Thoreau's woods (Concord, Massachusetts, USA). 13. On Nakhleh's metric for reduced phylogenetic networks. Science.gov (United States) Cardona, Gabriel; Llabrés, Mercè; Rosselló, Francesc; Valiente, Gabriel 2009-01-01 We prove that Nakhleh's metric for reduced phylogenetic networks is also a metric on the classes of tree-child phylogenetic networks, semibinary tree-sibling time consistent phylogenetic networks, and multilabeled phylogenetic trees. We also prove that it separates distinguishable phylogenetic networks. In this way, it becomes the strongest dissimilarity measure for phylogenetic networks available so far. Furthermore, we propose a generalization of that metric that separates arbitrary phylogenetic networks. 14. A simple strategy for breakpoint fragment determination in chronic myeloid leukemia. Science.gov (United States) Kamel, A M; Shaker, H M; GadAllah, F H; Hamza, M R; Mansour, O; El Hattab, O H; Moussa, H S 2000-10-15 Molecular characterization is considered a part of the routine work-up of chronic myeloid leukemia (CML) cases. Southern blot analysis using the universal BCR (UBCR) probe on BglII-digested DNA samples is the most commonly used technique, while employing the human 3' bcr probe (PR-1) is usually considered a complementary tool. In this study, we tried to develop a simple and economic strategy for molecular characterization of CML using the 3' probe as it has been shown to be the one capable of locating the breakpoint site. Seventy-eight cases of CML were studied. Molecular analysis was performed using the Southern blot technique. DNA was digested with Bam HI, BglII, EcoRI, and XbaI. 
Hybridization was performed using the human 3' bcr (PR-1) probe. BamHI and BglII could differentiate fragment 1 (F1) showing rearrangement (R) with Bam HI and germline configuration (G) with BglII; F2/3 showing R with both, and F4 showing R with BamHI and G with BglII. F2/3 cases were further divided by HindIII enzyme into F2 showing (G) and F3 showing (R). Fragment 0 showed G with both, but R with EcoRI and/or XbaI, while 3' deletion gave G with all four enzymes. Our results showed a relative incidence of 6.4% for F0, 20.5% for F1, 32.1% for F2, 19.2% for F3, 15.4% for F4, and 6.4% for 3' deletion. Sixty cases were evaluated clinically and hematologically and were followed up for disease evolution and survival. They included 32 cases in early chronic phase, 24 in late chronic phase, two in acceleration, and two in blastic crisis. No significant correlation was encountered between the breakpoint site and any of the clinical and hematological data except those patients with 3' deletion who showed a very short survival. The study emphasizes Southern blotting as the method of choice for molecular characterization of CML and offers a simple and economic strategy for diagnosis and determination of breakpoint fragment. 15. Bayesian phylogenetic estimation of fossil ages Science.gov (United States) 2016-01-01 Recent advances have allowed for both morphological fossil evidence and molecular sequences to be integrated into a single combined inference of divergence dates under the rule of Bayesian probability. In particular, the fossilized birth–death tree prior and the Lewis-Mk model of discrete morphological evolution allow for the estimation of both divergence times and phylogenetic relationships between fossil and extant taxa. 
We exploit this statistical framework to investigate the internal consistency of these models by producing phylogenetic estimates of the age of each fossil in turn, within two rich and well-characterized datasets of fossil and extant species (penguins and canids). We find that the estimation accuracy of fossil ages is generally high, with credible intervals seldom excluding the true age and median relative errors in the two datasets of 5.7% and 13.2%, respectively. The median relative standard deviation (RSD) was 9.2% and 7.2%, respectively, suggesting good precision, although with some outliers. In fact, in the two datasets we analyse, the phylogenetic estimate of fossil age is on average less than 2 Myr from the mid-point age of the geological strata from which it was excavated. The high level of internal consistency found in our analyses suggests that the Bayesian statistical model employed is an adequate fit for both the geological and morphological data, and provides evidence from real data that the framework used can accurately model the evolution of discrete morphological traits coded from fossil and extant taxa. We anticipate that this approach will have diverse applications beyond divergence time dating, including dating fossils that are temporally unconstrained, testing of the 'morphological clock', and uncovering potential model misspecification and/or data errors when controversial phylogenetic hypotheses are obtained based on combined divergence dating analyses. This article is part of the themed issue 'Dating species divergences…' 16.
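The accuracy summary quoted in the fossil-age entry above, the median relative error of estimated ages against true ages, is a simple statistic; a sketch with hypothetical ages in Myr (not data from the paper):

```python
def median_relative_error(estimates, true_values):
    # Median of |estimate - truth| / truth, the accuracy summary
    # reported for the fossil-age analyses above (5.7% and 13.2%
    # for the two datasets); names and data here are illustrative.
    errs = sorted(abs(e - t) / t for e, t in zip(estimates, true_values))
    n = len(errs)
    return errs[n // 2] if n % 2 else (errs[n // 2 - 1] + errs[n // 2]) / 2

# Hypothetical estimated vs. true fossil ages in Myr:
print(median_relative_error([10.0, 22.0, 31.0], [10.0, 20.0, 30.0]))
```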
Localization of X chromosome short arm markers relative to synovial sarcoma- and renal adenocarcinoma-associated translocation breakpoints NARCIS (Netherlands) Sinke, R J; de Leeuw, B; Janssen, H A; Weghuis, D O; Suijkerbuijk, R F; Meloni, A M; Gilgenkrantz, S; Berger, W; Ropers, H H; Sandberg, A A 1993-01-01 A series of thirteen different DNA markers was mapped relative to papillary renal cell carcinoma- and synovial sarcoma-associated translocation breakpoints in Xp11.2 using a panel of tumor-derived somatic cell hybrids in conjunction with Southern blot analysis. Our results indicate that the two tran… 17. Accuracy of carbapenem nonsusceptibility for identification of KPC-possessing Enterobacteriaceae by use of the revised CLSI breakpoints. Science.gov (United States) Landman, David; Salamera, Julius; Singh, Manisha; Quale, John 2011-11-01 Using the updated 2010 CLSI carbapenem breakpoints for the Enterobacteriaceae, nonsusceptibility to ertapenem and imipenem predicted the presence of bla(KPC) poorly, especially among Escherichia coli and Enterobacter species. In regions where KPC-producing bacteria are endemic, testing for nonsusceptibility to meropenem may provide improved accuracy in identifying these isolates. PMID:21880962 19.
Gentamicin susceptibility in Escherichia coli related to the genetic background: problems with breakpoints DEFF Research Database (Denmark) Jakobsen, L.; Sandvang, D.; Jensen, Vibeke Frøkjær 2007-01-01 In total, 120 Escherichia coli isolates positive for one of the gentamicin resistance (GEN(R)) genes aac(3)-II, aac(3)-IV or ant(2'')-I were tested for gentamicin susceptibility by the agar dilution method. Isolates positive for aac(3)-IV or ant(2'')-I had an MIC distribution of 8-64 mg/L, whereas isolates positive for aac(3)-II had MICs of 32 to >512 mg/L, suggesting a relationship between the distribution of MICs and the specific GEN(R) mechanism. The MIC distribution, regardless of the GEN(R) mechanism, was 8 to >512 mg/L, which supports the clinical breakpoint of MIC >4 mg/L suggested… 20. Quartets and unrooted phylogenetic networks. Science.gov (United States) Gambette, Philippe; Berry, Vincent; Paul, Christophe 2012-08-01 Phylogenetic networks were introduced to describe evolution in the presence of exchanges of genetic material between coexisting species or individuals. Split networks in particular were introduced as a special kind of abstract network to visualize conflicts between phylogenetic trees which may correspond to such exchanges. More recently, methods were designed to reconstruct explicit phylogenetic networks (whose vertices can be interpreted as biological events) from triplet data. In this article, we link abstract and explicit networks through their combinatorial properties, by introducing the unrooted analog of level-k networks. In particular, we give an equivalence theorem between circular split systems and unrooted level-1 networks. We also show how to adapt to quartets some existing results on triplets, in order to reconstruct unrooted level-k phylogenetic networks. These results give an interesting perspective on the combinatorics of phylogenetic networks and also raise algorithmic and combinatorial questions. 1.
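Applying a clinical MIC breakpoint such as the MIC > 4 mg/L gentamicin cut-off supported in the entry above reduces to a single comparison; a minimal sketch (the function name is illustrative):

```python
def classify_mic(mic_mg_per_l, breakpoint=4.0):
    # Resistant if the MIC exceeds the clinical breakpoint; the default
    # of 4 mg/L reflects the gentamicin cut-off discussed above.
    return "resistant" if mic_mg_per_l > breakpoint else "susceptible"

print(classify_mic(2))   # -> susceptible
print(classify_mic(64))  # -> resistant
```

Note that, as the abstract argues, a single cut-off sits awkwardly over an MIC distribution that spans 8 to >512 mg/L depending on the resistance mechanism.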
[Localization of 8q24 break-point of Burkitt lymphoma in Japan: relationship to EBV status]. Science.gov (United States) Tatsumi, E; Ohno, H 1997-02-01 It has been reported that the break-point of 8q24 in t(8;14)(q24;q32) is located far upstream of the c-myc gene locus in endemic EBV (Epstein-Barr virus)-positive BL, while the break-point is located close to the 1st intron of the c-myc gene in sporadic EBV-negative BL. Considering that no statistical analysis is available regarding BL in Japan, the break-point of chromosome No. 8 was investigated in 13 BL/L3 cell lines (having t(8;14)) and 4 fresh samples derived from Japanese patients, including 3 EBV-positive BL cell lines, by using long-distance PCR. In this PCR, one primer was set in the 2nd intron of the c-myc gene, and the other primer in an Ig constant region gene (mu, gamma, alpha or epsilon). This long-distance PCR can cover up to 30 kb; thus, it does not generate a product if the 8q24 break-point is located far upstream (more than 50 kb) from the c-myc gene. Among the 3 t(8;14) EBV-positive BL lines, no product was generated in two lines (N831 and Middle 91), while a product was synthesized in one line (Akata), indicating that the 8q24 break-point is near the c-myc gene in Akata. In all the other BL/L3 lines, a product was synthesized. A larger number of BL cases needs to be investigated in order to know which 8q24 break-point pattern is exhibited by EBV-positive BL in Japan; this method is suitable for testing a large number of case materials. 2. Speaking Fluently And Accurately Institute of Scientific and Technical Information of China (English) Joseph DeVeto 2004-01-01 Even after many years of study, students make frequent mistakes in English. In addition, many students still need a long time to think of what they want to say.
For some reason, in spite of all the studying, students are still not quite fluent.When I teach, I use one technique that helps students not only speak more accurately, but also more fluently. That technique is dictations. 3. Phylogenetics and the human microbiome. Science.gov (United States) Matsen, Frederick A 2015-01-01 The human microbiome is the ensemble of genes in the microbes that live inside and on the surface of humans. Because microbial sequencing information is now much easier to come by than phenotypic information, there has been an explosion of sequencing and genetic analysis of microbiome samples. Much of the analytical work for these sequences involves phylogenetics, at least indirectly, but methodology has developed in a somewhat different direction than for other applications of phylogenetics. In this article, I review the field and its methods from the perspective of a phylogeneticist, as well as describing current challenges for phylogenetics coming from this type of work. 4. An Optimization-Based Sampling Scheme for Phylogenetic Trees Science.gov (United States) Misra, Navodit; Blelloch, Guy; Ravi, R.; Schwartz, Russell Much modern work in phylogenetics depends on statistical sampling approaches to phylogeny construction to estimate probability distributions of possible trees for any given input data set. Our theoretical understanding of sampling approaches to phylogenetics remains far less developed than that for optimization approaches, however, particularly with regard to the number of sampling steps needed to produce accurate samples of tree partition functions. Despite the many advantages in principle of being able to sample trees from sophisticated probabilistic models, we have little theoretical basis for concluding that the prevailing sampling approaches do in fact yield accurate samples from those models within realistic numbers of steps. 
We propose a novel approach to phylogenetic sampling intended to be both efficient in practice and more amenable to theoretical analysis than the prevailing methods. The method depends on replacing the standard tree rearrangement moves with an alternative Markov model in which one solves a theoretically hard but practically tractable optimization problem on each step of sampling. The resulting method can be applied to a broad range of standard probability models, yielding practical algorithms for efficient sampling and rigorous proofs of accurate sampling for some important special cases. We demonstrate the efficiency and versatility of the method in an analysis of uncertainty in tree inference over varying input sizes. In addition to providing a new practical method for phylogenetic sampling, the technique is likely to prove applicable to many similar problems involving sampling over combinatorial objects weighted by a likelihood model. 5. [Foundations of the new phylogenetics]. Science.gov (United States) Pavlinov, I Ia 2004-01-01 The evolutionary idea is the core of modern biology. Because of this, phylogenetics, dealing with historical reconstructions in biology, takes a priority position among biological disciplines. The second half of the 20th century witnessed the growth of great interest in phylogenetic reconstructions at the macrotaxonomic level, which replaced the microevolutionary studies dominating during the 1930s-1960s. This meant a shift from population thinking to phylogenetic thinking, but it was not a revival of classical phylogenetics; rather, a new approach emerged that was baptized "the new phylogenetics". It arose as a result of the merging of three disciplines which were developing independently during the 1960s-1970s, namely cladistics, numerical phyletics, and molecular phylogenetics (now basically genophyletics).
Thus, the new phylogenetics could be defined as a branch of evolutionary biology aimed at the elaboration of "parsimonious" cladistic hypotheses by means of numerical methods on the basis of mostly molecular data. Classical phylogenetics, as the historical predecessor of the new one, emerged on the basis of the naturphilosophical worldview, which included a superorganismal idea of biota. According to that view, historical development (the phylogeny) was thought of as analogous to individual development (the ontogeny), so its most basic features were progressive parallel developments of "parts" (taxa), supplemented with the Darwinian concept of monophyly. Two predominating traditions diverged within classical phylogenetics according to their particular interpretations of the relation between these concepts. One of them (Cope, Severtzow) belittled monophyly and paid most attention to progressive parallel developments of morphological traits. Such an attitude turned this kind of phylogenetics into something closer to semogenetics, dealing primarily with the evolution of structures rather than of taxa. Another tradition (Haeckel) considered both monophyletic and parallel origins of taxa jointly; in the middle of the 20th century it was split into… 6. Skeletal Rigidity of Phylogenetic Trees CERN Document Server Cheng, Howard; Li, Brian; Risteski, Andrej 2012-01-01 Motivated by geometric origami and the straight skeleton construction, we outline a map between spaces of phylogenetic trees and spaces of planar polygons. The limitations of this map are studied through explicit examples, culminating in proving a structural rigidity result. 7. Community Phylogenetics: Assessing Tree Reconstruction Methods and the Utility of DNA Barcodes.
Science.gov (United States) Boyle, Elizabeth E; Adamowicz, Sarah J 2015-01-01 Studies examining phylogenetic community structure have become increasingly prevalent, yet little attention has been given to the influence of the input phylogeny on metrics that describe phylogenetic patterns of co-occurrence. Here, we examine the influence of branch length, tree reconstruction method, and amount of sequence data on measures of phylogenetic community structure, as well as the phylogenetic signal (Pagel's λ) in morphological traits, using Trichoptera larval communities from Churchill, Manitoba, Canada. We find that model-based tree reconstruction methods and the use of a backbone family-level phylogeny improve estimations of phylogenetic community structure. In addition, trees built using the barcode region of cytochrome c oxidase subunit I (COI) alone accurately predict metrics of phylogenetic community structure obtained from a multi-gene phylogeny. Input tree did not alter overall conclusions drawn for phylogenetic signal, as significant phylogenetic structure was detected in two body size traits across input trees. As the discipline of community phylogenetics continues to expand, it is important to investigate the best approaches to accurately estimate patterns. Our results suggest that emerging large datasets of DNA barcode sequences provide a vast resource for studying the structure of biological communities. 8. Quantum Simulation of Phylogenetic Trees CERN Document Server Ellinas, Demosthenes 2011-01-01 Quantum simulations constructing probability tensors of biological multi-taxa in phylogenetic trees are proposed, in terms of positive trace preserving maps, describing evolving systems of quantum walks with multiple walkers. Basic phylogenetic models applying on trees of various topologies are simulated following appropriate decoherent quantum circuits. 
Quantum simulations of statistical inference for aligned sequences of biological characters are provided in terms of a quantum pruning map operating on likelihood operator observables, utilizing state-observable duality and measurement theory. 9. A complex chromosome rearrangement involving four chromosomes, nine breakpoints and a cryptic 0.6-Mb deletion in a boy with cerebellar hypoplasia and defects in skull ossification. Science.gov (United States) Guilherme, R S; Cernach, M C S P; Sfakianakis, T E; Takeno, S S; Nardozza, L M M; Rossi, C; Bhatt, S S; Liehr, T; Melaragno, M I 2013-01-01 Constitutional complex chromosomal rearrangements (CCRs) are considered rare cytogenetic events. Most apparently balanced CCRs are de novo and are usually found in patients with abnormal phenotypes. High-resolution techniques are unveiling genomic imbalances in a great percentage of these cases. In this paper, we report a patient with growth and developmental delay, dysmorphic features, nervous system anomalies (pachygyria, hypoplasia of the corpus callosum and cerebellum), a marked reduction in the ossification of the cranial vault, skull base sclerosis, and cardiopathy who presents a CCR with 9 breakpoints involving 4 chromosomes (3, 6, 8 and 14) and a 0.6-Mb deletion in 14q24.1. Although the only genomic imbalance revealed by the array technique was a deletion, the clinical phenotype of the patient most likely cannot be attributed exclusively to haploinsufficiency. Other events must also be considered, including the disruption of critical genes and position effects. A combination of several different investigative approaches (G-banding, FISH with different probes and SNP array techniques) was required to describe this CCR in full, suggesting that CCRs may be more frequent than initially thought. Additionally, we propose that a chain chromosome breakage mechanism may have occurred as a single rearrangement event resulting in this CCR. 
This study demonstrates the importance of applying different cytogenetic and molecular techniques to detect subtle rearrangements and to delineate the rearrangements at a more accurate level, providing a better understanding of the mechanisms involved in CCR formation and a better correlation with phenotype. 10. Susceptibility breakpoints for amphotericin B and Aspergillus species in an in vitro pharmacokinetic-pharmacodynamic model simulating free-drug concentrations in human serum NARCIS (Netherlands) Elefanti, A.; Mouton, J.W.; Verweij, P.E.; Zerva, L.; Meletiadis, J. 2014-01-01 Although conventional amphotericin B was for many years the drug of choice and remains an important agent against invasive aspergillosis, reliable susceptibility breakpoints are lacking. Three clinical Aspergillus isolates (Aspergillus fumigatus, Aspergillus flavus, and Aspergillus terreus) were tested 11. Identification of submicroscopic genetic changes and precise breakpoint mapping in myelofibrosis using high resolution mate-pair sequencing. Science.gov (United States) Lasho, Terra; Johnson, Sarah H; Smith, David I; Crispino, John D; Pardanani, Animesh; Vasmatzis, George; Tefferi, Ayalew 2013-09-01 We used high resolution mate-pair sequencing (HRMPS) in 15 patients with primary myelofibrosis (PMF): eight with normal karyotype and seven with PMF-characteristic cytogenetic abnormalities, including der(6)t(1;6)(q21-23;p21.3) (n = 4), der(7)t(1;7)(q10;p10) (n = 2), del(20)(q11.2q13.3) (n = 3), and complex karyotype (n = 1). We describe seven novel deletions/translocations in five patients (including two with normal karyotype) whose breakpoints were PCR-validated and involved MACROD2, CACNA2D4, TET2, SGMS2, LRBA, SH3D19, INTS3, FOP (CHTOP), SCLT1, and PHF17. Deletions with breakpoints involving MACROD2 (lysine deacetylase; 20p12.1) were recurrent and found in two of the 15 study patients.
A novel fusion transcript was found in one of the study patients (INTS3-CHTOP), and also in an additional non-study patient with PMF. In two patients with der(6)t(1;6)(q21-23;p21.3), we were able to map the precise translocation breakpoints, which involved KCNN3 and GUSBP2 in one case and HYDIN2 in another. This study demonstrates the utility of HRMPS in uncovering submicroscopic deletions/translocations/fusions, and precise mapping of breakpoints in those with overt cytogenetic abnormalities. The overall results confirm the genetic heterogeneity of PMF, given the low frequency of recurrent specific abnormalities, identified by this screening strategy. Currently, we are pursuing the pathogenetic relevance of some of the aforementioned findings. 12. Variable breakpoints target PAX5 in patients with dicentric chromosomes: a model for the basis of unbalanced translocations in cancer. Science.gov (United States) An, Qian; Wright, Sarah L; Konn, Zoë J; Matheson, Elizabeth; Minto, Lynne; Moorman, Anthony V; Parker, Helen; Griffiths, Mike; Ross, Fiona M; Davies, Teresa; Hall, Andy G; Harrison, Christine J; Irving, Julie A; Strefford, Jon C 2008-11-04 The search for target genes involved in unbalanced acquired chromosomal abnormalities has been largely unsuccessful, because the breakpoints of these rearrangements are too variable. Here, we use the example of dicentric chromosomes in B cell precursor acute lymphoblastic leukemia to show that, despite this heterogeneity, single genes are targeted through a variety of mechanisms. FISH showed that, although they were heterogeneous, breakpoints on 9p resulted in the partial or complete deletion of PAX5. Molecular copy number counting further delineated the breakpoints and facilitated cloning with long-distance inverse PCR. This approach identified 5 fusion gene partners with PAX5: LOC392027 (7p12.1), SLCO1B3 (12p12), ASXL1 (20q11.1), KIF3B (20q11.21), and C20orf112 (20q11.1). 
In each predicted fusion protein, the DNA-binding paired domain of PAX5 was present. Using quantitative PCR, we demonstrated that both the deletion and gene fusion events resulted in the same underexpression of PAX5, which extended to the differential expression of the PAX5 target genes, EBF1, ALDH1A1, ATP9A, and FLT3. Further molecular analysis showed deletion and mutation of the homologous PAX5 allele, providing further support for the key role of PAX5. Here, we show that specific gene loci may be the target of heterogeneous translocation breakpoints in human cancer, acting through a variety of mechanisms. This approach indicates an application for the identification of cancer genes in solid tumours, where unbalanced chromosomal rearrangements are particularly prevalent and few genes have been identified. It can be extrapolated that this strategy will reveal that the same mechanisms operate in cancer pathogenesis in general. 13. Huntingtin-associated protein 1 interacts with breakpoint cluster region protein to regulate neuronal differentiation. Directory of Open Access Journals (Sweden) Pai-Tsang Huang Full Text Available Alterations in microtubule-dependent trafficking and certain signaling pathways in neuronal cells represent critical pathogenesis in neurodegenerative diseases. Huntingtin (Htt)-associated protein-1 (Hap1) is a brain-enriched protein and plays a key role in the trafficking of neuronal surviving and differentiating cargos. Lack of Hap1 reduces signaling through tropomyosin-related kinases including extracellular signal regulated kinase (ERK), resulting in inhibition of neurite outgrowth, hypothalamic dysfunction and postnatal lethality in mice. To examine how Hap1 is involved in microtubule-dependent trafficking and neuronal differentiation, we performed a proteomic analysis using taxol-precipitated microtubules from Hap1-null and wild-type mouse brains.
Breakpoint cluster region protein (Bcr), a Rho GTPase regulator, was identified as a Hap1-interacting partner. Bcr was co-immunoprecipitated with Hap1 from transfected neuro-2a cells and co-localized with Hap1A isoform more in the differentiated than in the nondifferentiated cells. The Bcr downstream effectors, namely ERK and p38, were significantly less activated in Hap1-null than in wild-type mouse hypothalamus. In conclusion, Hap1 interacts with Bcr on microtubules to regulate neuronal differentiation. 14. Evaluation of susceptibility test breakpoints used to predict mecA-mediated resistance in Staphylococcus pseudintermedius isolated from dogs. Science.gov (United States) Bemis, David A; Jones, Rebekah D; Frank, Linda A; Kania, Stephen A 2009-01-01 Clinical and Laboratory Standards Institute interpretive breakpoints for in vitro susceptibility tests that predict mecA-mediated oxacillin resistance in Staphylococcus pseudintermedius isolates from animals have been changed twice in the past decade. Moreover, there are no counterpart recommendations for human isolates of S. pseudintermedius. Individual medical and veterinary laboratories variably use interpretive breakpoints identical to those recommended for use with Staphylococcus aureus or identical to those recommended for use with coagulase-negative staphylococci. The purpose of the current study was to examine correlations between oxacillin disk diffusion, oxacillin gradient diffusion, oxacillin microbroth dilution, and cefoxitin disk diffusion tests used to predict mecA-mediated resistance in S. pseudintermedius and to retrospectively estimate, from disk diffusion zone diameter measurements, the prevalence and rate of increase of oxacillin resistance among canine S. pseudintermedius isolates submitted to a veterinary teaching hospital laboratory. Oxacillin disk diffusion zone diameters and oxacillin MICs of ≥0.5 microg/ml were highly correlated with detection of mecA in canine S.
pseudintermedius isolates by polymerase chain reaction. MecA-mediated resistance among S. pseudintermedius isolates from dogs increased from less than 5% in 2001 to near 30% in 2007. More than 90% of the methicillin-resistant S. pseudintermedius isolates in 2006 and 2007 were also resistant to representatives of ≥4 additional antimicrobial drug classes. Cefoxitin disk diffusion, with the resistance breakpoint set appropriately, was also evaluated for predicting mecA-mediated resistance in S. pseudintermedius. 15. SRBreak: A Read-Depth and Split-Read Framework to Identify Breakpoints of Different Events Inside Simple Copy-Number Variable Regions. Science.gov (United States) Nguyen, Hoang T; Boocock, James; Merriman, Tony R; Black, Michael A 2016-01-01 Copy-number variation (CNV) has been associated with increased risk of complex diseases. High-throughput sequencing (HTS) technologies facilitate the detection of copy-number variable regions (CNVRs) and their breakpoints. This helps in understanding genome structure as well as its evolutionary process. Various approaches have been proposed for detecting CNV breakpoints, but currently it is still challenging for tools based on a single analysis method to identify breakpoints of CNVs. It has been shown, however, that pipelines which integrate multiple approaches are able to report more reliable breakpoints. Here, based on HTS data, we have developed a pipeline to identify approximate breakpoints (±10 bp) relating to different ancestral events within a specific CNVR. The pipeline combines read-depth and split-read information to infer breakpoints, using information from multiple samples to allow an imputation approach to be taken. The main steps involve using a normal mixture model to cluster samples into different groups, followed by simple kernel-based approaches to maximize information obtained from read-depth and split-read approaches, after which common breakpoints of groups are inferred.
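The split-read side of such pipelines can be illustrated with a small sketch (hypothetical helper names, not SRBreak's actual code): soft-clip operations in CIGAR strings mark positions where reads stop matching the reference, and positions supported by several reads become putative breakpoints.

```python
import re
from collections import Counter

CIGAR_RE = re.compile(r"(\d+)([MIDNSHP=X])")

def softclip_breakpoints(pos, cigar):
    """Candidate breakpoint positions implied by soft-clips (S) in a CIGAR
    string, given the 1-based leftmost reference position of the read."""
    candidates = []
    ref = pos
    for n, op in CIGAR_RE.findall(cigar):
        length = int(n)
        if op == "S":
            # a leading soft-clip points at `pos`; a trailing one points at
            # the reference coordinate reached by the aligned portion
            candidates.append(ref)
        elif op in "MDN=X":  # operations that consume reference bases
            ref += length
    return candidates

def cluster_candidates(reads, min_support=3):
    """Report positions whose soft-clip evidence recurs in >= min_support
    reads as putative breakpoints; `reads` is a list of (pos, cigar) pairs."""
    counts = Counter(p for pos, cigar in reads
                     for p in softclip_breakpoints(pos, cigar))
    return sorted(p for p, c in counts.items() if c >= min_support)
```

A real implementation would read positions and CIGAR strings from BAM records and combine this evidence with read-depth segmentation, as the abstract describes.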
The pipeline uses split-read information directly from CIGAR strings of BAM files, without using a re-alignment step. On simulated data sets, it was able to report breakpoints for very low-coverage samples including those for which only single-end reads were available. When applied to three loci from existing human resequencing data sets (NEGR1, LCE3, IRGM) the pipeline obtained good concordance with results from the 1000 Genomes Project (92, 100, and 82%, respectively). The package is available at https://github.com/hoangtn/SRBreak, and also as a docker-based application at https://registry.hub.docker.com/u/hoangtn/srbreak/. 16. Caspofungin Etest susceptibility testing of Candida species: risk of misclassification of susceptible isolates of C. glabrata and C. krusei when adopting the revised CLSI caspofungin breakpoints. Science.gov (United States) Arendrup, Maiken Cavling; Pfaller, Michael A 2012-07-01 The purpose of this study was to evaluate the performance of caspofungin Etest and the recently revised CLSI breakpoints. A total of 497 blood isolates, of which 496 were wild-type isolates, were included. A total of 65/496 susceptible isolates (13.1%) were misclassified as intermediate (I) or resistant (R). Such misclassifications were most commonly observed for Candida krusei (73.1%) and Candida glabrata (33.1%). The revised breakpoints cannot be safely adopted for these two species. 17. apex: phylogenetics with multiple genes. Science.gov (United States) Jombart, Thibaut; Archer, Frederick; Schliep, Klaus; Kamvar, Zhian; Harris, Rebecca; Paradis, Emmanuel; Goudet, Jérome; Lapp, Hilmar 2017-01-01 Genetic sequences of multiple genes are becoming increasingly common for a wide range of organisms including viruses, bacteria and eukaryotes. While such data may sometimes be treated as a single locus, in practice, a number of biological and statistical phenomena can lead to phylogenetic incongruence. 
In such cases, different loci should, at least as a preliminary step, be examined and analysed separately. The R software has become a popular platform for phylogenetics, with several packages implementing distance-based, parsimony and likelihood-based phylogenetic reconstruction, and an even greater number of packages implementing phylogenetic comparative methods. Unfortunately, basic data structures and tools for analysing multiple genes have so far been lacking, thereby limiting potential for investigating phylogenetic incongruence. In this study, we introduce the new R package apex to fill this gap. apex implements new object classes, which extend existing standards for storing DNA and amino acid sequences, and provides a number of convenient tools for handling, visualizing and analysing these data. In this study, we introduce the main features of the package and illustrate its functionalities through the analysis of a simple data set. 18. Effects of Breakpoint Changes on Carbapenem Susceptibility Rates of Enterobacteriaceae: Results from the SENTRY Antimicrobial Surveillance Program, United States, 2008 to 2012 Directory of Open Access Journals (Sweden) Robert P Rennie 2014-01-01 Full Text Available In the absence of clinical resistance, breakpoints for many antimicrobial agents are often set high. Clinical failures following use of the agents over time require re-evaluation of breakpoints. This is based on patient response, pharmacokinetic/pharmacodynamic information and in vitro minimal inhibitory concentration data. Data from the SENTRY Antimicrobial Surveillance Program has shown that Clinical and Laboratory Standards Institute breakpoint changes for carbapenems that occurred between 2008 and 2012 in North America have resulted in decreased levels of susceptibility for some species. In particular, reduced susceptibility to imipenem was observed for Proteus mirabilis (35%) and Morganella morganii (80%).
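The arithmetic behind such shifts is simple: lowering a susceptible breakpoint reclassifies isolates near the old cutoff, so the percent-susceptible rate drops even though the MIC distribution is unchanged. A toy calculation (the breakpoint values here are placeholders, not actual CLSI values):

```python
def classify_mic(mic, susceptible_bp, resistant_bp):
    """Classify an MIC (microg/ml) against a breakpoint pair:
    S if mic <= susceptible_bp, R if mic >= resistant_bp, otherwise I."""
    if mic <= susceptible_bp:
        return "S"
    if mic >= resistant_bp:
        return "R"
    return "I"

def percent_susceptible(mics, susceptible_bp, resistant_bp):
    """Percentage of isolates classified susceptible at the given breakpoints."""
    s = sum(1 for m in mics if classify_mic(m, susceptible_bp, resistant_bp) == "S")
    return 100.0 * s / len(mics)

# Placeholder MIC distribution and breakpoints, for illustration only.
mics = [0.25, 0.5, 1, 1, 2, 4, 8]
before = percent_susceptible(mics, susceptible_bp=4, resistant_bp=16)  # older, higher breakpoint
after = percent_susceptible(mics, susceptible_bp=1, resistant_bp=4)    # revised, lower breakpoint
```

With the same seven MICs, the revised (lower) breakpoints yield a smaller percent-susceptible value than the older ones, mirroring the decreases the surveillance data report.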
Minor decreases in susceptibility were also noted for Enterobacter species with ertapenem (5%) and imipenem (4.3%), and Serratia species with imipenem (6.4%). No significant decreases in susceptibility were observed for meropenem following the breakpoint changes. There were no earlier breakpoints established for doripenem. Very few of these Enterobacteriaceae produce carbapenemase enzymes; therefore, the clinical significance of these changes has not yet been clearly determined. In conclusion, ongoing surveillance studies with in vitro minimum inhibitory concentration data are essential in predicting the need for breakpoint changes and in identifying the impact of such changes on the percent susceptibility of different species. 19. Absolute Pitch in Boreal Chickadees and Humans: Exceptions that Test a Phylogenetic Rule Science.gov (United States) Weisman, Ronald G.; Balkwill, Laura-Lee; Hoeschele, Marisa; Moscicki, Michele K.; Bloomfield, Laurie L.; Sturdy, Christopher B. 2010-01-01 This research examined the generality of the phylogenetic rule that birds discriminate frequency ranges more accurately than mammals. Human absolute pitch chroma possessors accurately tracked transitions between frequency ranges. Independent tests showed that they used note naming (pitch chroma) to remap the tones into ranges; neither possessors nor… 20. Proximity Within Interphase Chromosome Contributes to the Breakpoint Distribution in Radiation-Induced Intrachromosomal Exchanges Science.gov (United States) Zhang, Ye; Uhlemeyer, Jimmy; Hada, Megumi; Asaithamby, A.; Chen, David J.; Wu, Honglu 2015-01-01 Previously, we reported that breaks involved in chromosome aberrations were clustered in several regions of chromosome 3 in human mammary epithelial cells after exposures to either low- or high-LET radiation. In particular, breaks in certain regions of the chromosome tended to rejoin with each other to form an intrachromosome exchange event.
This study tests the hypothesis that proximity within a single chromosome in interphase cell nuclei contributes to the distribution of radiation-induced chromosome breaks. Chromosome 3 in G1 human mammary epithelial cells was hybridized with the multicolor banding in situ hybridization (mBAND) probes that distinguish the chromosome in six differently colored regions, and the location of these regions was measured with a laser confocal microscope. Results of the study indicated that, on a multi-megabase-pair scale of the DNA, the arrangement of chromatin was non-random. Both telomere regions tended to be located towards the exterior of the chromosome domain, whereas the centromere region tended towards the interior. In addition, the interior of the chromosome domain was preferentially occupied by the p-arm of the chromatin, which is consistent with our previous finding of intrachromosome exchanges involving breaks on the p-arm and in the centromere region of chromosome 3. Other factors, such as the fragile sites in the 3p21 band and gene regulation, may also contribute to the breakpoint distribution in radiation-induced chromosome aberrations. Further investigations suggest that the 3D chromosome folding is cell type and culture condition dependent. 1. Phylogenetic relationships among Maloideae species Science.gov (United States) The Maloideae is a highly diverse sub-family of the Rosaceae containing several agronomically important species (Malus sp. and Pyrus sp.) and their wild relatives. Previous phylogenetic work within the group has revealed extensive intergeneric hybridization and polyploidization. In order to develop... 2. Cyber-infrastructure for Fusarium (CiF): Three integrated platforms supporting strain identification, phylogenetics, comparative genomics, and knowledge sharing Science.gov (United States) The fungal genus Fusarium includes many plant and/or animal pathogenic species and produces diverse toxins.
Although accurate identification is critical for managing such threats, it is difficult to identify Fusarium morphologically. Fortunately, extensive molecular phylogenetic studies, founded on ... 3. Bilateral renal agenesis/hypoplasia/dysplasia (BRAHD): postmortem analysis of 45 cases with breakpoint mapping of two de novo translocations. Directory of Open Access Journals (Sweden) Louise Harewood Full Text Available BACKGROUND: Bilateral renal agenesis/hypoplasia/dysplasia (BRAHD) is a relatively common, lethal malformation in humans. Established clinical risk factors include maternal insulin dependent diabetes mellitus and male sex of the fetus. In the majority of cases, no specific etiology can be established, although teratogenic, syndromal and single gene causes can be assigned to some cases. METHODOLOGY/PRINCIPAL FINDINGS: 45 unrelated fetuses, stillbirths or infants with lethal BRAHD were ascertained through a single regional paediatric pathology service (male:female 34:11 or 3.1:1). The previously reported phenotypic overlaps with VACTERL, caudal dysgenesis, hemifacial microsomia and Müllerian defects were confirmed. A new finding is that 16/45 (35.6%; m:f 13:3 or 4.3:1) BRAHD cases had one or more extrarenal malformations indicative of a disorder of laterality determination, including: incomplete lobulation of right lung (seven cases), malrotation of the gut (seven cases) and persistence of the left superior vena cava (five cases). One such case with multiple laterality defects and sirenomelia was found to have a de novo apparently balanced reciprocal translocation 46,XY,t(2;6)(p22.3;q12). Translocation breakpoint mapping was performed by interphase fluorescent in-situ hybridization (FISH) using nuclei extracted from archival tissue sections in both this case and an isolated bilateral renal agenesis case associated with a de novo 46,XY,t(1;2)(q41;p25.3). Both t(2;6) breakpoints mapped to gene-free regions with no strong evidence of cis-regulatory potential.
Ten genes were localized within 500 kb of the t(1;2) breakpoints. Whole-mount in-situ expression analyses of the mouse orthologs of these genes in embryonic mouse kidneys showed strong expression of Esrrg, encoding a nuclear steroid hormone receptor. Immunohistochemical analysis showed that Esrrg was restricted to proximal ductal tissue within the embryonic kidney. CONCLUSIONS/SIGNIFICANCE: The previously unreported 4. Influence of sequence identity and unique breakpoints on the frequency of intersubtype HIV-1 recombination Directory of Open Access Journals (Sweden) Abreha Measho 2006-12-01 Full Text Available Abstract Background HIV-1 recombination between different subtypes has a major impact on the global epidemic. The generation of these intersubtype recombinants follows a defined set of events starting with dual infection of a host cell, heterodiploid virus production, strand transfers during reverse transcription, and then selection. In this study, recombination frequencies were measured in the C1-C4 regions of the envelope gene in the presence (using a multiple cycle infection system) and absence (in vitro reverse transcription and single cycle infection systems) of selection for replication-competent virus. Ugandan subtypes A and D HIV-1 env sequences (115-A, 120-A, 89-D, 122-D, 126-D) were employed in all three assay systems. These subtypes co-circulate in East Africa and frequently recombine in this human population. Results Increased sequence identity between viruses or RNA templates resulted in increased recombination frequencies, with the exception of the 115-A virus or RNA template. Analyses of the recombination breakpoints and mechanistic studies revealed that the presence of a recombination hotspot in the C3/V4 env region, unique to 115-A as donor RNA, could account for the higher recombination frequencies with the 115-A virus/template.
Single-cycle infections supported proportionally less recombination than the in vitro reverse transcription assay but both systems still had significantly higher recombination frequencies than observed in the multiple-cycle virus replication system. In the multiple cycle assay, increased replicative fitness of one HIV-1 over the other in a dual infection dramatically decreased recombination frequencies. Conclusion Sequence variation at specific sites between HIV-1 isolates can introduce unique recombination hotspots, which increase recombination frequencies and skew the general observation that decreased HIV-1 sequence identity reduces recombination rates. These findings also suggest that the majority of 5. Influence of sequence identity and unique breakpoints on the frequency of intersubtype HIV-1 recombination Science.gov (United States) Baird, Heather A; Gao, Yong; Galetto, Román; Lalonde, Matthew; Anthony, Reshma M; Giacomoni, Véronique; Abreha, Measho; Destefano, Jeffrey J; Negroni, Matteo; Arts, Eric J 2006-01-01 Background HIV-1 recombination between different subtypes has a major impact on the global epidemic. The generation of these intersubtype recombinants follows a defined set of events starting with dual infection of a host cell, heterodiploid virus production, strand transfers during reverse transcription, and then selection. In this study, recombination frequencies were measured in the C1-C4 regions of the envelope gene in the presence (using a multiple cycle infection system) and absence (in vitro reverse transcription and single cycle infection systems) of selection for replication-competent virus. Ugandan subtypes A and D HIV-1 env sequences (115-A, 120-A, 89-D, 122-D, 126-D) were employed in all three assay systems. These subtypes co-circulate in East Africa and frequently recombine in this human population. 
Results Increased sequence identity between viruses or RNA templates resulted in increased recombination frequencies, with the exception of the 115-A virus or RNA template. Analyses of the recombination breakpoints and mechanistic studies revealed that the presence of a recombination hotspot in the C3/V4 env region, unique to 115-A as donor RNA, could account for the higher recombination frequencies with the 115-A virus/template. Single-cycle infections supported proportionally less recombination than the in vitro reverse transcription assay but both systems still had significantly higher recombination frequencies than observed in the multiple-cycle virus replication system. In the multiple cycle assay, increased replicative fitness of one HIV-1 over the other in a dual infection dramatically decreased recombination frequencies. Conclusion Sequence variation at specific sites between HIV-1 isolates can introduce unique recombination hotspots, which increase recombination frequencies and skew the general observation that decreased HIV-1 sequence identity reduces recombination rates. These findings also suggest that the majority of intra- or intersubtype A 6. Vestige: Maximum likelihood phylogenetic footprinting Directory of Open Access Journals (Sweden) Maxwell Peter 2005-05-01 Full Text Available Abstract Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. 
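The conventional divergence measure mentioned above can be sketched as a sliding-window mismatch fraction over an alignment; windows that diverge less than a neutral threshold are candidate footprints. This toy version (hypothetical function names, not Vestige's probabilistic model) assumes a gap-free pairwise alignment:

```python
def window_divergence(seq_a, seq_b, window=20, step=5):
    """Fraction of mismatched sites per window between two aligned,
    gap-free orthologous sequences (toy alignment)."""
    assert len(seq_a) == len(seq_b)
    out = []
    for start in range(0, len(seq_a) - window + 1, step):
        pairs = zip(seq_a[start:start + window], seq_b[start:start + window])
        mismatches = sum(x != y for x, y in pairs)
        out.append((start, mismatches / window))
    return out

def footprints(divergences, threshold=0.1):
    """Window starts whose divergence falls below the neutral threshold:
    candidate phylogenetic footprints."""
    return [start for start, d in divergences if d < threshold]
```

Vestige generalizes this idea by replacing the raw mismatch count with model parameters from probabilistic molecular evolution (e.g. a Ka/Ks ratio under a codon model), as the abstract describes.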
In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational 7. Jumping translocation in acute monocytic leukemia (M5b) with alternative breakpoint sites in the long arm of donor chromosome 3. Science.gov (United States) McGrattan, Peter; Logan, Amy; Humphreys, Mervyn; Bowers, Margaret 2010-09-01 An 86-year-old man presented with acute hepatic failure, worsening thrombocytopenia, and anemia having been diagnosed and managed expectantly with cytogenetically normal RAEB-1. After 20 months a diagnosis of disease transformation to acute monocytic leukemia (M5b) was made. 
Conventional G-banded analysis of unstimulated bone marrow cultures demonstrated a jumping translocation (JT) involving proximal and distal breakpoints on donor chromosome 3 at bands 3q1?2 and 3q21, respectively. Recipient chromosomes included the long-arm telomeric regions of chromosomes 5, 10, 14, 16, and 19. A low-level trisomy 8 clone was also found in association with both proximal and distal JT clones. Conventional G-banded analysis of unstimulated peripheral blood cultures detected the proximal 3q1?2 JT clone involving recipient chromosome 10 several weeks after transformation to acute monocytic leukemia. Interestingly, JTs involving recipient chromosomes 5, 14, 16, and 19 were not detected in this peripheral blood sample. Palliative care was administered until his demise 2.2 months after disease transformation. There have been fewer than 70 cases of acquired JTs reported in the literature, including one myeloproliferative neoplasm and five acute myeloid leukemias involving a single breakpoint site on donor chromosome 3. Our case is unique as it is the first acquired case to demonstrate a JT involving alternative pericentromeric breakpoint sites on a single donor chromosome consisting of a proximal breakpoint at 3q1?2 and a more distal breakpoint at 3q21. 8. Phylogenetic analysis of otospiralin protein Science.gov (United States) Torktaz, Ibrahim; Behjati, Mohaddeseh; Rostami, Amin 2016-01-01 Background: Fibrocyte-specific protein, otospiralin, is a small protein, widely expressed in the central nervous system as neuronal cell bodies and glia. The increased expression of otospiralin in reactive astrocytes implicates its role in signaling pathways and reparative mechanisms subsequent to injury. Indeed, otospiralin is considered to be essential for the survival of fibrocytes of the mesenchymal nonsensory regions of the cochlea. It seems that other functions of this protein are not yet completely understood. 
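Phylogenetic comparisons of protein sequences such as otospiralin's typically rest on distance-based clustering. Purely as an illustration (a size-weighted UPGMA merge sketch, not the MEGA implementation such studies actually use):

```python
import itertools

def upgma(names, dist):
    """Minimal UPGMA clustering sketch: `dist` maps frozenset({a, b}) to a
    pairwise distance; returns the tree as nested tuples. Illustrative only."""
    clusters = {n: 1 for n in names}  # cluster -> number of leaves it holds
    d = dict(dist)
    while len(clusters) > 1:
        # merge the closest pair of current clusters
        a, b = min(itertools.combinations(clusters, 2),
                   key=lambda p: d[frozenset(p)])
        merged = (a, b)
        size = clusters[a] + clusters[b]
        for c in clusters:
            if c in (a, b):
                continue
            # size-weighted average distance from the new cluster to c
            d[frozenset({merged, c})] = (
                d[frozenset({a, c})] * clusters[a]
                + d[frozenset({b, c})] * clusters[b]) / size
        del clusters[a], clusters[b]
        clusters[merged] = size
    return next(iter(clusters))
```

For three taxa where "human" and "mouse" are closest, they are merged first and the remaining taxon attaches last, which is the grouping a neighbor-joining or UPGMA dendrogram would also show.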
Materials and Methods: Amino acid sequences of otospiralin from 12 vertebrates were derived from the National Center for Biotechnology Information database. Phylogenetic analysis and phylogeny estimation were performed using the MEGA 5.0.5 program, and a neighbor-joining tree was constructed with this software. Results: In this computational study, the phylogenetic tree of otospiralin has been investigated. Therefore, dendrograms of otospiralin were depicted. Alignment was performed with the MUSCLE method using the UPGMB algorithm. Also, an entropy plot was generated to better illustrate amino acid variations in this protein. Conclusion: In the present study, we used otospiralin sequences from 12 different species and, by constructing a phylogenetic tree, suggested an outgroup for some related species. PMID:27099854 9. Analysis of t(9;17)(q33.2;q25.3) chromosomal breakpoint regions and genetic association reveals novel candidate genes for bipolar disorder DEFF Research Database (Denmark) Rajkumar, A.P.; Christensen, Jane H.; Mattheisen, Manuel 2015-01-01 OBJECTIVES: Breakpoints of chromosomal abnormalities facilitate identification of novel candidate genes for psychiatric disorders. Genome-wide significant evidence supports the linkage between chromosome 17q25.3 and bipolar disorder (BD). Co-segregation of translocation t(9;17)(q33.2;q25.3) with psychiatric disorders has been reported. We aimed to narrow down these chromosomal breakpoint regions and to investigate the associations between single nucleotide polymorphisms within these regions and BD as well as schizophrenia (SZ) in large genome-wide association study samples. METHODS: We cross......,856) data. Genetic associations between these disorders and single nucleotide polymorphisms within these breakpoint regions were analysed by BioQ, FORGE, and RegulomeDB programmes. RESULTS: Four protein-coding genes [coding for endonuclease V (ENDOV), neuronal pentraxin I (NPTX1), ring finger protein 213... 10.
Nerve growth factor receptor gene is at human chromosome region 17q12-17q22, distal to the chromosome 17 breakpoint in acute leukemias Energy Technology Data Exchange (ETDEWEB) Huebner, K.; Isobe, M.; Chao, M.; Bothwell, M.; Ross, A.H.; Finan, J.; Hoxie, J.A.; Sehgal, A.; Buck, C.R.; Lanahan, A. 1986-03-01 Genomic and cDNA clones for the human nerve growth factor receptor have been used in conjunction with somatic cell hybrid analysis and in situ hybridization to localize the nerve growth factor receptor locus to human chromosome region 17q12-q22. Additionally, part, if not all, of the nerve growth factor receptor locus is present on the translocated portion of 17q (17q21-qter) from a poorly differentiated acute leukemia in which the chromosome 17 breakpoint was indistinguishable cytogenetically from the 17 breakpoint observed in the t(15;17)(q22;q21) translocation associated with acute promyelocytic leukemia. Thus the nerve growth factor receptor locus may be closely distal to the acute promyelocytic leukemia-associated chromosome 17 breakpoint at 17q21. 11. Translocation breakpoint maps 5 kb 3' from TWIST in a patient affected with Saethre-Chotzen syndrome. Science.gov (United States) Krebs, I; Weis, I; Hudler, M; Rommens, J M; Roth, H; Scherer, S W; Tsui, L C; Füchtbauer, E M; Grzeschik, K H; Tsuji, K; Kunz, J 1997-07-01 Saethre-Chotzen syndrome, a common autosomal dominant craniosynostosis in humans, is characterized by brachydactyly, soft tissue syndactyly and facial dysmorphism including ptosis, facial asymmetry, and prominent ear crura. Previously, we identified a yeast artificial chromosome that encompassed the breakpoint of an apparently balanced t(6;7)(q16.2;p15.3) translocation associated with a mild form of Saethre-Chotzen syndrome. We now describe, at the DNA sequence level, the region on chromosome 7 affected by this translocation event. The rearrangement occurred approximately 5 kb 3' of the human TWIST locus and deleted 518 bp of chromosome 7.
The TWIST gene codes for a transcription factor containing a basic helix-loop-helix (b-HLH) motif and has recently been described as a candidate gene for Saethre-Chotzen syndrome, based on the detection of mutations within the coding region. Potential exon sequences flanking the chromosome 7 translocation breakpoint did not match any known genes in database searches. The chromosome rearrangement downstream of TWIST is compatible with the notion that this is a Saethre-Chotzen syndrome gene and implies loss of function of one allele by a positional effect as a possible mechanism of mutation to evoke the syndrome. 12. Clinical characterization and identification of duplication breakpoints in a Japanese family with Xq28 duplication syndrome including MECP2. Science.gov (United States) Fukushi, Daisuke; Yamada, Kenichiro; Nomura, Noriko; Naiki, Misako; Kimura, Reiko; Yamada, Yasukazu; Kumagai, Toshiyuki; Yamaguchi, Kumiko; Miyake, Yoshishige; Wakamatsu, Nobuaki 2014-04-01 Xq28 duplication syndrome including MECP2 is a neurodevelopmental disorder characterized by axial hypotonia in infancy, severe intellectual disability, developmental delay, mild characteristic facial appearance, epilepsy, regression, and recurrent infections in males. We identified a Japanese family with Xq28 duplications, in which the patients presented with cerebellar ataxia, severe constipation, and small feet, in addition to the common clinical features. The 488-kb duplication spanned from L1CAM to EMD and contained 17 genes, two pseudogenes, and three microRNA-coding genes. FISH and nucleotide sequence analyses demonstrated that the duplication was tandem and in a forward orientation, and the duplication breakpoints were located in AluSc at the EMD side, with a 32-bp deletion, and LTR50 at the L1CAM side, with "tc" and "gc" microhomologies at the duplication breakpoints, respectively. The duplicated segment segregated completely from the grandmother to the patients.
These results suggest that the duplication was generated by fork-stalling and template-switching at the AluSc and LTR50 sites. This is the first report to determine the size and nucleotide sequences of the duplicated segments at Xq28 in three generations of a family, and it provides the genotype-phenotype correlation of the patients harboring this specific duplicated segment. 13. A real-time PCR-based semi-quantitative breakpoint to aid in molecular identification of urinary tract infections. Science.gov (United States) Hansen, Wendy L J; van der Donk, Christina F M; Bruggeman, Cathrien A; Stobberingh, Ellen E; Wolffs, Petra F G 2013-01-01 This study presents a novel approach to aid in diagnosis of urinary tract infections (UTIs). A real-time PCR assay was used to screen for culture-positive urinary specimens and to identify the causative uropathogen. Semi-quantitative breakpoints were used to screen for significant bacteriuria (presence of ≥ 10(5) CFU/ml of uropathogens) or low-level bacteriuria (containing between 10(3) and 10(4) CFU/ml of uropathogens). The 16S rDNA-based assay could identify the most prevalent uropathogens using probes for Escherichia coli, Pseudomonas species, Pseudomonas aeruginosa, Staphylococcus species, Staphylococcus aureus, Enterococcus species and Streptococcus species. A total of 330 urinary specimens were analysed and results were compared with conventional urine culture. Using a PCR Ct value of 25 as the semi-quantitative breakpoint for significant bacteriuria resulted in a sensitivity and specificity of 97% and 80%, respectively. In 78% of the samples with monomicrobial infections the assay contained probes to detect the bacteria present in the urine specimens, and 99% of these uropathogens were correctly identified.
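The semi-quantitative breakpoint used in this assay amounts to a simple threshold rule on the real-time PCR cycle-threshold (Ct) value, where a lower Ct means more template DNA and hence a higher bacterial load. A minimal Python sketch of that rule follows; the Ct = 25 cut-off for significant bacteriuria is taken from the abstract, while the upper Ct bound of 32 for the low-level window (10(3)-10(4) CFU/ml) is an illustrative assumption, since the abstract does not state its exact Ct limits:

```python
# Hedged sketch of a semi-quantitative Ct breakpoint rule.
# Ct <= 25 for significant bacteriuria is from the abstract;
# the low-level upper bound (Ct = 32) is an assumed illustration.

SIGNIFICANT_CT = 25.0   # corresponds to >= 10^5 CFU/ml (from the abstract)
LOW_LEVEL_CT = 32.0     # assumed Ct bound for the 10^3-10^4 CFU/ml window

def classify_bacteriuria(ct_value):
    """Map a real-time PCR Ct value to a semi-quantitative category.

    Lower Ct = more template DNA = higher bacterial load.
    """
    if ct_value <= SIGNIFICANT_CT:
        return "significant bacteriuria"   # >= 10^5 CFU/ml
    if ct_value <= LOW_LEVEL_CT:
        return "low-level bacteriuria"     # 10^3-10^4 CFU/ml (assumed window)
    return "no significant growth"

print(classify_bacteriuria(23.4))
```

In practice such a rule would be applied per probe channel, after the 16S rDNA amplification signal has been assigned to a species.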
In conclusion, this proof-of-concept approach demonstrates that the assay can distinguish bacteriuria from no bacteriuria as well as detect the involved uropathogen within 4 hours after sampling, allowing adequate therapy decisions within the same day as well as drastically reducing subsequent urine culturing. 14. Targeted next-generation sequencing at copy-number breakpoints for personalized analysis of rearranged ends in solid tumors. Directory of Open Access Journals (Sweden) Hyun-Kyoung Kim Full Text Available BACKGROUND: The concept of the utilization of rearranged ends for development of personalized biomarkers has attracted much attention owing to its clinical applicability. Although targeted next-generation sequencing (NGS) for recurrent rearrangements has been successful in hematologic malignancies, its application to solid tumors is problematic due to the paucity of recurrent translocations. However, copy-number breakpoints (CNBs), which are abundant in solid tumors, can be utilized for identification of rearranged ends. METHOD: As a proof of concept, we performed targeted next-generation sequencing at copy-number breakpoints (TNGS-CNB) in nine colon cancer cases including seven primary cancers and two cell lines, COLO205 and SW620. For deduction of CNBs, we developed a novel competitive single-nucleotide polymorphism (cSNP) microarray method entailing CNB-region refinement by competitor DNA. RESULT: Using TNGS-CNB, 19 specific rearrangements out of 91 CNBs (20.9%) were identified, and two polymerase chain reaction (PCR)-amplifiable rearrangements were obtained in six cases (66.7%). Significantly, TNGS-CNB, with its high positive identification rate (82.6%) of PCR-amplifiable rearrangements at candidate sites (19/23), just from filtering of aligned sequences, requires little effort for validation.
CONCLUSION: Our results indicate that TNGS-CNB, with its utility for identification of rearrangements in solid tumors, can be successfully applied in the clinical laboratory for cancer-relapse and therapy-response monitoring. 15. Impact of new Clinical Laboratory Standards Institute Streptococcus pneumoniae penicillin susceptibility testing breakpoints on reported resistance changes over time. Science.gov (United States) Mera, Robertino M; Miller, Linda A; Amrine-Madsen, Heather; Sahm, Daniel F 2011-03-01 The analysis comprised a total of 97,843 U.S. isolates from the Surveillance Network® database for the period 1996-2008. Penicillin resistance, when defined using the old Clinical Laboratory Standards Institute breakpoint (≥2 μg/ml), had an initial rise that started in 1996, peaked in 2000, declined until 2003, and rebounded through 2008 (15.6%, 23.2%, 15.4%, and 16.9%, respectively). Using the new Clinical Laboratory Standards Institute criteria and applying a breakpoint of ≥8 μg/ml to blood and bronchial isolates, resistance was unchanged (0.24% in 2003) but rose to 1.52% in 2008. Using the new meningitis criteria (≥0.12 μg/ml), resistance prevalence was 34.8% in 2008, whereas it was 12.3% using the old criteria (≥2 μg/ml) for cerebrospinal fluid isolates. The rise, fall, and subsequent rebound of penicillin resistance in the United States, presumably influenced by the introduction of the conjugate pneumococcal vaccine, is clearly seen with the old definition, but only the rebound is seen when the new criteria are applied. In the postvaccine period, isolates with minimum inhibitory concentrations of 1 and 2 μg/ml decline, whereas those with minimum inhibitory concentrations of 0.12-0.5 μg/ml increase, which may signal the loss of resistant vaccine serotypes and the acquisition of resistance by nonvaccine serotypes. 16. Comparison of tree-child phylogenetic networks.
Science.gov (United States) Cardona, Gabriel; Rosselló, Francesc; Valiente, Gabriel 2009-01-01 Phylogenetic networks are a generalization of phylogenetic trees that allow for the representation of nontreelike evolutionary events, like recombination, hybridization, or lateral gene transfer. While much progress has been made to find practical algorithms for reconstructing a phylogenetic network from a set of sequences, all attempts to endorse a class of phylogenetic networks (strictly extending the class of phylogenetic trees) with a well-founded distance measure have, to the best of our knowledge and with the only exception of the bipartition distance on regular networks, failed so far. In this paper, we present and study a new meaningful class of phylogenetic networks, called tree-child phylogenetic networks, and we provide an injective representation of these networks as multisets of vectors of natural numbers, their path multiplicity vectors. We then use this representation to define a distance on this class that extends the well-known Robinson-Foulds distance for phylogenetic trees and to give an alignment method for pairs of networks in this class. Simple polynomial algorithms for reconstructing a tree-child phylogenetic network from its path multiplicity vectors, for computing the distance between two tree-child phylogenetic networks and for aligning a pair of tree-child phylogenetic networks, are provided. They have been implemented as a Perl package and a Java applet, which can be found at http://bioinfo.uib.es/~recerca/phylonetworks/mudistance/. 17. Functional and phylogenetic ecology in R CERN Document Server Swenson, Nathan G 2014-01-01 Functional and Phylogenetic Ecology in R is designed to teach readers to use R for phylogenetic and functional trait analyses. Over the past decade, a dizzying array of tools and methods were generated to incorporate phylogenetic and functional information into traditional ecological analyses. 
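The path-multiplicity representation of tree-child networks described in entry 16 above lends itself to a very compact distance computation: encode each network as a multiset of path-multiplicity vectors and count the elements of the symmetric difference, in the same spirit in which the Robinson-Foulds distance counts clusters present in one tree but not the other. A hedged Python sketch follows; the toy 3-taxon vectors are invented for illustration (real vectors must be derived from the networks themselves, one per node, counting paths to each leaf), and any normalization constant the published metric may apply is omitted:

```python
from collections import Counter

def mu_distance(vectors_a, vectors_b):
    """Size of the symmetric difference between two multisets of
    path-multiplicity vectors (each vector given as a tuple).

    Mirrors how Robinson-Foulds counts clusters unique to each tree.
    """
    ca, cb = Counter(vectors_a), Counter(vectors_b)
    diff = (ca - cb) + (cb - ca)   # multiset symmetric difference
    return sum(diff.values())

# Toy example with invented 3-taxon vectors (one per node):
net1 = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0), (1, 1, 1)]
net2 = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 1, 1), (1, 1, 1)]
print(mu_distance(net1, net2))
```

The injectivity result of the paper is what makes this well-defined on tree-child networks: identical multisets imply isomorphic networks, so a distance of zero is meaningful.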
Increasingly these tools are implemented in R, thus greatly expanding their impact. Researchers getting started in R can use this volume as a step-by-step entryway into phylogenetic and functional analyses for ecology in R. More advanced users will be able to use this volume as a quick reference to understand particular analyses. The volume begins with an introduction to the R environment and handling relevant data in R. Chapters then cover phylogenetic and functional metrics of biodiversity; null modeling and randomizations for phylogenetic and functional trait analyses; integrating phylogenetic and functional trait information; and interfacing the R environment with a popular C-based program. This book presents a uni... 18. Attempt to validate breakpoint MIC values estimated from pharmacokinetic data obtained during oxolinic acid therapy of winter ulcer disease in Atlantic salmon ( Salmo salar ) DEFF Research Database (Denmark) Coyne, R.; Bergh, Ø.; Samuelsen, O. 2004-01-01 and between healthy and moribund fish, presents difficulties in generating a clinically meaningful description relevant to the whole population. This issue is discussed, and it is suggested that for this application, the minimum concentration achieved by at least 80% of the treated population might represent...... a useful parameter for describing the concentrations of agents achieved during therapy. The plasma data from this investigation were used to estimate clinically relevant breakpoint minimum inhibitory concentration (MIC) values. The validity of these breakpoint values was discussed with reference... 19. Impact of revised cefepime CLSI breakpoints on Escherichia coli and Klebsiella pneumoniae susceptibility and potential impact if applied to Pseudomonas aeruginosa. 
Science.gov (United States) Hamada, Yukihiro; Sutherland, Christina A; Nicolau, David P 2015-05-01 The CLSI reduced the cefepime Enterobacteriaceae susceptibility breakpoint and introduced the susceptible-dose-dependent (S-DD) category. In this study, MICs were determined for a Gram-negative collection to assess the impact of this change. For Enterobacteriaceae, this resulted in <2% reduction in susceptibility, with 1% being S-DD. If applied to Pseudomonas aeruginosa, the % susceptibility (%S) dropped from 77% to 43%, with 34% being S-DD. The new breakpoints did little to change the Enterobacteriaceae %S, but for P. aeruginosa, a profound reduction was seen in %S. The recognition of an S-DD response to cefepime should alert clinicians to the possible need for higher doses. 20. Phylogenetic trees and Euclidean embeddings. Science.gov (United States) Layer, Mark; Rhodes, John A 2017-01-01 It was recently observed by de Vienne et al. (Syst Biol 60(6):826-832, 2011) that a simple square root transformation of distances between taxa on a phylogenetic tree allowed for an embedding of the taxa into Euclidean space. While the justification for this was based on a diffusion model of continuous character evolution along the tree, here we give a direct and elementary explanation for it that provides substantial additional insight. We use this embedding to reinterpret the differences between the NJ and BIONJ tree building algorithms, providing one illustration of how this embedding reflects tree structures in data. 1. 类群取样与系统发育分析精确度之探索 (Taxon sampling and the accuracy of phylogenetic analyses) Institute of Scientific and Technical Information of China (English) Tracy A. HEATH; Shannon M. HEDTKE; David M. HILLIS 2008-01-01 Appropriate and extensive taxon sampling is one of the most important determinants of accurate phylogenetic estimation.
In addition, accuracy of inferences about evolutionary processes obtained from phylogenetic analyses is improved significantly by thorough taxon sampling efforts. Many recent efforts to improve phylogenetic estimates have focused instead on increasing sequence length or the number of overall characters in the analysis, and this often does have a beneficial effect on the accuracy of phylogenetic analyses. However, phylogenetic analyses of few taxa (but each represented by many characters) can be subject to strong systematic biases, which in turn produce high measures of repeatability (such as bootstrap proportions) in support of incorrect or misleading phylogenetic results. Thus, it is important for phylogeneticists to consider both the sampling of taxa, as well as the sampling of characters, in designing phylogenetic studies. Taxon sampling also improves estimates of evolutionary parameters derived from phylogenetic trees, and is thus important for improved applications of phylogenetic analyses. Analysis of sensitivity to taxon inclusion, the possible effects of long-branch attraction, and sensitivity of parameter estimation for model-based methods should be a part of any careful and thorough phylogenetic analysis. Furthermore, recent improvements in phylogenetic algorithms and in computational power have removed many constraints on analyzing large, thoroughly sampled data sets. Thorough taxon sampling is thus one of the most practical ways to improve the accuracy of phylogenetic estimates, as well as the accuracy of biological inferences that are based on these phylogenetic trees. Science.gov (United States) San Mauro, Diego; Gower, David J; Cotton, James A; Zardoya, Rafael; Wilkinson, Mark; Massingham, Tim 2012-07-01 3. Multiple sequence alignment accuracy and phylogenetic inference. 
Science.gov (United States) Ogden, T Heath; Rosenberg, Michael S 2006-04-01 Phylogenies are often thought to be more dependent upon the specifics of the sequence alignment rather than on the method of reconstruction. Simulation of sequences containing insertion and deletion events was performed in order to determine the role that alignment accuracy plays during phylogenetic inference. Data sets were simulated for pectinate, balanced, and random tree shapes under different conditions (ultrametric equal branch length, ultrametric random branch length, nonultrametric random branch length). Comparisons between hypothesized alignments and true alignments enabled determination of two measures of alignment accuracy, that of the total data set and that of individual branches. In general, our results indicate that as alignment error increases, topological accuracy decreases. This trend was much more pronounced for data sets derived from more pectinate topologies. In contrast, for balanced, ultrametric, equal branch length tree shapes, alignment inaccuracy had little average effect on tree reconstruction. These conclusions are based on average trends of many analyses under different conditions, and any one specific analysis, independent of the alignment accuracy, may recover very accurate or inaccurate topologies. Maximum likelihood and Bayesian, in general, outperformed neighbor joining and maximum parsimony in terms of tree reconstruction accuracy. Results also indicated that as the length of the branch and of the neighboring branches increase, alignment accuracy decreases, and the length of the neighboring branches is the major factor in topological accuracy. Thus, multiple-sequence alignment can be an important factor in downstream effects on topological reconstruction. 4. Phylogenetic Conservatism in Plant Phenology Science.gov (United States) Davies, T. Jonathan; Wolkovich, Elizabeth M.; Kraft, Nathan J. 
B.; Salamin, Nicolas; Allen, Jenica M.; Ault, Toby R.; Betancourt, Julio L.; Bolmgren, Kjell; Cleland, Elsa E.; Cook, Benjamin I.; Crimmins, Theresa M.; Mazer, Susan J.; McCabe, Gregory J.; Pau, Stephanie; Regetz, Jim; Schwartz, Mark D.; Travers, Steven E. 2013-01-01 Phenological events, defined points in the life cycle of a plant or animal, have been regarded as highly plastic traits, reflecting flexible responses to various environmental cues. The ability of a species to track, via shifts in phenological events, the abiotic environment through time might dictate its vulnerability to future climate change. Understanding the predictors and drivers of phenological change is therefore critical. Here, we evaluated evidence for phylogenetic conservatism, the tendency for closely related species to share similar ecological and biological attributes, in phenological traits across flowering plants. We aggregated published and unpublished data on timing of first flower and first leaf, encompassing 4000 species at 23 sites across the Northern Hemisphere. We reconstructed the phylogeny for the set of included species, first, using the software program Phylomatic, and second, from DNA data. We then quantified phylogenetic conservatism in plant phenology within and across sites. We show that more closely related species tend to flower and leaf at similar times. By contrasting mean flowering times within and across sites, however, we illustrate that it is not the time of year that is conserved, but rather the phenological responses to a common set of abiotic cues. Our findings suggest that species cannot be treated as statistically independent when modelling phenological responses. Closely related species tend to resemble each other in the timing of their life-history events, a likely product of evolutionarily conserved responses to environmental cues. The search for the underlying drivers of phenology must therefore account for species' shared evolutionary histories. 5.
In vitro antibacterial activity of ceftobiprole against clinical isolates from French teaching hospitals: proposition of zone diameter breakpoints. Science.gov (United States) Lascols, C; Legrand, P; Mérens, A; Leclercq, R; Muller-Serieys, C; Drugeon, H B; Kitzis, M D; Reverdy, M E; Roussel-Delvallez, M; Moubareck, C; Brémont, S; Miara, A; Gjoklaj, M; Soussy, C-J 2011-03-01 The aims of this study were to determine the in vitro activity profile of ceftobiprole, a pyrrolidinone cephalosporin, against a large number of bacterial pathogens and to propose zone diameter breakpoints for clinical categorisation according to the European Committee on Antimicrobial Susceptibility Testing (EUCAST) minimum inhibitory concentration (MIC) breakpoints. MICs of ceftobiprole were determined by broth microdilution against 1548 clinical isolates collected in eight French hospitals. Disk diffusion testing was performed using 30 μg disks according to the method of the Comité de l'Antibiogramme de la Société Française de Microbiologie (CA-SFM). The in vitro activity of ceftobiprole, expressed by MIC(50/90) (MICs for 50% and 90% of the organisms, respectively) (mg/L), was as follows: meticillin-susceptible Staphylococcus aureus, 0.25/0.5; meticillin-resistant S. aureus (MRSA), 1/2; meticillin-susceptible coagulase-negative staphylococci (CoNS), 0.12/0.5; meticillin-resistant CoNS, 1/2; penicillin-susceptible Streptococcus pneumoniae, ≤ 0.008/0.03; penicillin-resistant S. pneumoniae, 0.12/0.5; viridans group streptococci, 0.03/0.12; β-haemolytic streptococci, ≤ 0.008/0.016; Enterococcus faecalis, 0.25/1; Enterococcus faecium, 64/128; Enterobacteriaceae, 0.06/32; Pseudomonas aeruginosa, 4/16; Acinetobacter baumannii, 0.5/64; Haemophilus influenzae, 0.03/0.12; and Moraxella catarrhalis, 0.25/0.5. According to the regression curve, zone diameter breakpoints could be 28, 26, 24 and 22 mm for MICs of 0.5, 1, 2 and 4 mg/L respectively. 
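The regression-derived correspondence between inhibition-zone diameters and MICs reported above can be read as a step function: the larger the zone, the lower the inferred MIC. A minimal Python sketch follows; the 28/26/24/22 mm cut-offs and their paired MICs of 0.5/1/2/4 mg/L are from the abstract, while treating them as a lookup table and reporting zones below 22 mm as "MIC > 4 mg/L" are our interpretive assumptions:

```python
# Zone diameter (mm) -> inferred ceftobiprole MIC (mg/L), using the
# regression cut-offs proposed in the abstract: 28, 26, 24 and 22 mm
# correspond to MICs of 0.5, 1, 2 and 4 mg/L, respectively.
CUTOFFS = [(28, 0.5), (26, 1.0), (24, 2.0), (22, 4.0)]

def inferred_mic(zone_diameter_mm):
    """Return the MIC consistent with the measured inhibition zone.

    Larger zones imply lower MICs; a zone below the last cut-off is
    reported as beyond the proposed scale (assumed handling).
    """
    for zone, mic in CUTOFFS:
        if zone_diameter_mm >= zone:
            return mic
    return float("inf")   # i.e. MIC > 4 mg/L

print(inferred_mic(27))
```

Such a mapping is only as good as the underlying regression; in routine use the zone diameter would feed into an S/I/R categorisation against the EUCAST MIC breakpoints rather than a point MIC estimate.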
In conclusion, this study confirms the potent in vitro activity of ceftobiprole against many Gram-positive bacteria, including MRSA but not E. faecium, whilst maintaining a Gram-negative spectrum similar to the advanced-generation cephalosporins such as cefepime. Thus ceftobiprole appears to be well suited for the empirical treatment of a variety of healthcare-associated infections. 6. Global patterns of amphibian phylogenetic diversity DEFF Research Database (Denmark) Fritz, Susanne; Rahbek, Carsten 2012-01-01 phylogeny (2792 species). We combined each tree with global species distributions to map four indices of phylogenetic diversity. To investigate congruence between global spatial patterns of amphibian species richness and phylogenetic diversity, we selected Faith’s phylogenetic diversity (PD) index......Aim  Phylogenetic diversity can provide insight into how evolutionary processes may have shaped contemporary patterns of species richness. Here, we aim to test for the influence of phylogenetic history on global patterns of amphibian species richness, and to identify areas where macroevolutionary...... and the total taxonomic distinctness (TTD) index, because we found that the variance of the other two indices we examined (average taxonomic distinctness and mean root distance) strongly depended on species richness. We then identified regions with unusually high or low phylogenetic diversity given... 7. Molecular Phylogenetics: Concepts for a Newcomer. Science.gov (United States) Ajawatanawong, Pravech 2016-10-26 Molecular phylogenetics is the study of evolutionary relationships among organisms using molecular sequence data. The aim of this review is to introduce the important terminology and general concepts of tree reconstruction to biologists who lack a strong background in the field of molecular evolution. 
Some modern phylogenetic programs are easy to use because of their user-friendly interfaces, but understanding the phylogenetic algorithms and substitution models, which are based on advanced statistics, is still important for the analysis and interpretation without a guide. Briefly, there are five general steps in carrying out a phylogenetic analysis: (1) sequence data preparation, (2) sequence alignment, (3) choosing a phylogenetic reconstruction method, (4) identification of the best tree, and (5) evaluating the tree. Concepts in this review enable biologists to grasp the basic ideas behind phylogenetic analysis and also help provide a sound basis for discussions with expert phylogeneticists. 8. Tripartitions do not always discriminate phylogenetic networks. Science.gov (United States) Cardona, Gabriel; Rosselló, Francesc; Valiente, Gabriel 2008-02-01 Phylogenetic networks are a generalization of phylogenetic trees that allow for the representation of non-treelike evolutionary events, like recombination, hybridization, or lateral gene transfer. In a recent series of papers devoted to the study of reconstructibility of phylogenetic networks, Moret, Nakhleh, Warnow and collaborators introduced the so-called tripartition metric for phylogenetic networks. In this paper we show that, in fact, this tripartition metric does not satisfy the separation axiom of distances (zero distance means isomorphism, or, in a more relaxed version, zero distance means indistinguishability in some specific sense) in any of the subclasses of phylogenetic networks where it is claimed to do so. We also present a subclass of phylogenetic networks whose members can be singled out by means of their sets of tripartitions (or even clusters), and hence where the latter can be used to define a meaningful metric. 9. Simple, rapid and accurate molecular diagnosis of acute promyelocytic leukemia by loop mediated amplification technology. 
Science.gov (United States) Spinelli, Orietta; Rambaldi, Alessandro; Rigo, Francesca; Zanghì, Pamela; D'Agostini, Elena; Amicarelli, Giulia; Colotta, Francesco; Divona, Mariadomenica; Ciardi, Claudia; Coco, Francesco Lo; Minnucci, Giulia 2015-01-01 The diagnostic work-up of acute promyelocytic leukemia (APL) includes the cytogenetic demonstration of the t(15;17) translocation and/or the PML-RARA chimeric transcript by RQ-PCR or RT-PCR. These latter assays provide suitable results in 3-6 hours. We describe here two new, rapid and specific assays that detect PML-RARA transcripts, based on the RT-QLAMP (Reverse Transcription-Quenching Loop-mediated Isothermal Amplification) technology, in which RNA retrotranscription and cDNA amplification are carried out in a single tube with one enzyme at one temperature, in fluorescence and real-time format. A single-tube triplex assay detects bcr1 and bcr3 PML-RARA transcripts along with the GUS housekeeping gene. A single-tube duplex assay detects bcr2 and GUSB. In 73 APL cases, these assays detected bcr1, bcr2 and bcr3 transcripts within 16 minutes. All 81 non-APL samples were negative by RT-QLAMP for chimeric transcripts whereas GUSB was detectable. In 11 APL patients in whom RT-PCR yielded equivocal breakpoint type results, RT-QLAMP assays unequivocally and accurately defined the breakpoint type (as confirmed by sequencing). Furthermore, RT-QLAMP could amplify two bcr2 transcripts with particularly extended PML exon 6 deletions not amplified by RQ-PCR. RT-QLAMP reproducible sensitivity is 10(-3) for bcr1 and bcr3 and 10(-2) for bcr2, thus making this assay particularly attractive at diagnosis and leaving RQ-PCR for the molecular monitoring of minimal residual disease during follow-up. In conclusion, PML-RARA RT-QLAMP, compared to RT-PCR or RQ-PCR, is a valid improvement enabling rapid, simple and accurate molecular diagnosis of APL. 10.
Phylogenetic diversity of Amazonian tree communities OpenAIRE Honorio Coronado, Eurídice N.; Dexter, Kyle G.; Pennington, R. Toby; Chave, Jérôme; Lewis, Simon L.; Alexiades, Miguel N.; Alvarez, Esteban; Alves de Oliveira, Atila; Amaral, Iêda L.; Araujo-Murakami, Alejandro; Arets, Eric J. M. M.; Aymard, Gerardo A.; Baraloto, Christopher; Bonal, Damien; Brienen, Roel 2015-01-01 Aim: To examine variation in the phylogenetic diversity (PD) of tree communities across geographical and environmental gradients in Amazonia. Location: Two hundred and eighty-three c. 1 ha forest inventory plots from across Amazonia. Methods: We evaluated PD as the total phylogenetic branch length across species in each plot (PDss), the mean pairwise phylogenetic distance between species (MPD), the mean nearest taxon distance (MNTD) and their equivalents standardized for species richness (ses... 11. Relevant phylogenetic invariants of evolutionary models CERN Document Server Casanellas, Marta 2009-01-01 Recently there have been several attempts to provide a whole set of generators of the ideal of the algebraic variety associated to a phylogenetic tree evolving under an algebraic model. These algebraic varieties have been proven to be useful in phylogenetics. In this paper we prove that, for phylogenetic reconstruction purposes, it is enough to consider generators coming from the edges of the tree, the so-called edge invariants. This is the algebraic analogous to Buneman's Splits Equivalence Theorem. The interest of this result relies on its potential applications in phylogenetics for the widely used evolutionary models such as Jukes-Cantor, Kimura 2 and 3 parameters, and General Markov models. 12. 
Detection of three common translocation breakpoints in non-Hodgkin's lymphomas by fluorescence in situ hybridization on routine paraffin-embedded tissue sections NARCIS (Netherlands) Haralambieva, E; Kleiverda, K; Mason, DY; Schuuring, E; Kluin, PM 2002-01-01 Non-random chromosomal translocations are specifically involved in the pathogenesis of many non-Hodgkin's lymphomas and have clinical implications as diagnostic and/or prognostic markers. Their detection is often impaired by technical problems, including the distribution of the breakpoints over larg 13. Sequencing and Analyzing the "t" (1;7) Reciprocal Translocation Breakpoints Associated with a Case of Childhood-Onset Schizophrenia/Autistic Disorder Science.gov (United States) Idol, Jacquelyn R.; Addington, Anjene M.; Long, Robert T.; Rapoport, Judith L.; Green, Eric D. 2008-01-01 We characterized a "t"(1;7)(p22;q21) reciprocal translocation in a patient with childhood-onset schizophrenia (COS) and autism using genome mapping and sequencing methods. Based on genomic maps of human chromosome 7 and fluorescence in situ hybridization (FISH) studies, we delimited the region of 7q21 harboring the translocation breakpoint to a… 14. Analysis of crossover breakpoints yields new insights into the nature of the gene conversion events associated with large NF1 deletions mediated by nonallelic homologous recombination. Science.gov (United States) Bengesser, Kathrin; Vogt, Julia; Mussotter, Tanja; Mautner, Victor-Felix; Messiaen, Ludwine; Cooper, David N; Kehrer-Sawatzki, Hildegard 2014-02-01 Large NF1 deletions are mediated by nonallelic homologous recombination (NAHR). An in-depth analysis of gene conversion operating in the breakpoint-flanking regions of large NF1 deletions was performed to investigate whether the rate of discontinuous gene conversion during NAHR with crossover is increased, as has been previously noted in NAHR-mediated rearrangements. 
All 20 germline type-1 NF1 deletions analyzed were mediated by NAHR associated with continuous gene conversion within the breakpoint-flanking regions. Continuous gene conversion was also observed in 31/32 type-2 NF1 deletions investigated. In contrast to the meiotic type-1 NF1 deletions, type-2 NF1 deletions are predominantly of post-zygotic origin. Our findings therefore imply that the mitotic as well as the meiotic NAHR intermediates of large NF1 deletions are processed by long-patch mismatch repair (MMR), thereby ensuring gene conversion tract continuity instead of the discontinuous gene conversion that is characteristic of short-patch repair. However, the single type-2 NF1 deletion not exhibiting continuous gene conversion was processed without MMR, yielding two different deletion-bearing chromosomes, which were distinguishable in terms of their breakpoint positions. Our findings indicate that MMR failure during NAHR, followed by post-meiotic/mitotic segregation, has the potential to give rise to somatic mosaicism in human genomic rearrangements by generating breakpoint heterogeneity. 15. Identification of a yeast artificial chromosome that spans the human papillary renal cell carcinoma-associated t(X;1) breakpoint in Xp11.2 NARCIS (Netherlands) Suijkerbuijk, R F; Meloni, A M; Sinke, R J; de Leeuw, B; Wilbrink, M; Janssen, H A; Geraghty, M T; Monaco, A P; Sandberg, A A; Geurts van Kessel, A 1993-01-01 Recently, a specific chromosome abnormality, t(X;1)(p11;q21), was described for a subgroup of human papillary renal cell carcinomas. The translocation breakpoint in Xp11 is located in the same region as that in t(X;18)(p11;q11)-positive synovial sarcoma. We used fluorescence in situ hybridization (FISH)… 16. Sequencing and characterisation of rearrangements in three S. pastorianus strains reveals the presence of chimeric genes and gives evidence of breakpoint reuse.
Directory of Open Access Journals (Sweden) Sarah K Hewitt Full Text Available Gross chromosomal rearrangements have the potential to be evolutionarily advantageous to an adapting organism. The generation of a hybrid species increases opportunity for recombination by bringing together two homologous genomes. We sought to define the location of genomic rearrangements in three strains of Saccharomyces pastorianus, a natural lager-brewing yeast hybrid of Saccharomyces cerevisiae and Saccharomyces eubayanus, using whole genome shotgun sequencing. Each strain of S. pastorianus has lost species-specific portions of its genome and has undergone extensive recombination, producing chimeric chromosomes. We predicted 30 breakpoints that we confirmed at the single nucleotide level by designing species-specific primers that flank each breakpoint, and then sequencing the PCR product. These rearrangements are the result of recombination between areas of homology between the two subgenomes, rather than repetitive elements such as transposons or tRNAs. Interestingly, 28/30 S. cerevisiae-S. eubayanus recombination breakpoints are located within genic regions, generating chimeric genes. Furthermore we show evidence for the reuse of two breakpoints, located in HSP82 and KEM1, in strains of proposed independent origin. 17. Monobactam and aminoglycoside combination therapy against metallo-beta-lactamase-producing multidrug-resistant Pseudomonas aeruginosa screened using a 'break-point checkerboard plate'. Science.gov (United States) Araoka, Hideki; Baba, Masaru; Takagi, Shinsuke; Matsuno, Naofumi; Ishiwata, Kazuya; Nakano, Nobuaki; Tsuji, Masanori; Yamamoto, Hisashi; Seo, Sachiko; Asano-Mori, Yuki; Uchida, Naoyuki; Masuoka, Kazuhiro; Wake, Atsushi; Taniguchi, Shuichi; Yoneyama, Akiko 2010-03-01 Metallo-beta-lactamase-producing multidrug-resistant Pseudomonas aeruginosa (MDR P. aeruginosa) is a cause of life-threatening infections. 
With parenteral colistin not available in Japan, we treated MDR P. aeruginosa sepsis with monobactam and aminoglycoside combination therapy, with screening using a 'break-point checkerboard plate'. 18. Susceptibility breakpoints and target values for therapeutic drug monitoring of voriconazole and Aspergillus fumigatus in an in vitro pharmacokinetic/pharmacodynamic model NARCIS (Netherlands) Siopi, M.; Mavridou, E.; Mouton, J.W.; Verweij, P.E.; Zerva, L.; Meletiadis, J. 2014-01-01 BACKGROUND: Although voriconazole reached the bedside 10 years ago and became the standard of care in the treatment of invasive aspergillosis, reliable clinical breakpoints are still in high demand. Moreover, this has increased due to the recent emergence of azole resistance. METHODS: Four clinical wil… 19. Identification of a yeast artificial chromosome (YAC) spanning the synovial sarcoma-specific t(X;18)(p11.2;q11.2) breakpoint NARCIS (Netherlands) de Leeuw, B; Berger, W; Sinke, R J; Suijkerbuijk, R F; Gilgenkrantz, S; Geraghty, M T; Valle, D; Monaco, A P; Lehrach, H; Ropers, H H 1993-01-01 A somatic cell hybrid containing the synovial sarcoma-associated t(X;18)(p11.2;q11.2) derivative (der(X)) chromosome was used to characterize the translocation breakpoint region on the X chromosome. By using Southern hybridization of DNA from this der(X) hybrid in conjunction with Xp-region specific… 20. An Interaction with Ewing's Sarcoma Breakpoint Protein EWS Defines a Specific Oncogenic Mechanism of ETS Factors Rearranged in Prostate Cancer. Science.gov (United States) Kedage, Vivekananda; Selvaraj, Nagarathinam; Nicholas, Taylor R; Budka, Justin A; Plotnik, Joshua P; Jerde, Travis J; Hollenhorst, Peter C 2016-10-25 More than 50% of prostate tumors have a chromosomal rearrangement resulting in aberrant expression of an oncogenic ETS family transcription factor.
However, mechanisms that differentiate the function of oncogenic ETS factors expressed in prostate tumors from non-oncogenic ETS factors expressed in normal prostate are unknown. Here, we find that four oncogenic ETS (ERG, ETV1, ETV4, and ETV5), and no other ETS, interact with the Ewing's sarcoma breakpoint protein, EWS. This EWS interaction was necessary and sufficient for oncogenic ETS functions including gene activation, cell migration, clonogenic survival, and transformation. Significantly, the EWS interacting region of ERG has no homology with that of ETV1, ETV4, and ETV5. Therefore, this finding may explain how divergent ETS factors have a common oncogenic function. Strikingly, EWS is fused to various ETS factors by the chromosome translocations that cause Ewing's sarcoma. Therefore, these findings link oncogenic ETS function in both prostate cancer and Ewing's sarcoma. 1. Evaluation of cefoxitin disk diffusion breakpoint for detection of methicillin resistance in Staphylococcus pseudintermedius isolates from dogs. Science.gov (United States) Bemis, David A; Jones, Rebekah D; Videla, Ricardo; Kania, Stephen A 2012-09-01 Cefoxitin disk diffusion susceptibility testing is a recommended screening method for the detection of methicillin resistance in human isolates of Staphylococcus aureus and coagulase-negative staphylococci. A retrospective analysis of 1,146 clinical isolates of Staphylococcus pseudintermedius from dogs was conducted to determine if screening by the cefoxitin disk method can be similarly useful with S. pseudintermedius. The distribution of cefoxitin growth inhibition zone diameters within this collection was bimodal and correlated well with the results of methicillin resistance gene (mecA) detection by polymerase chain reaction. Of the isolates, 5% had discordant results and, when retested, 84% of these were in agreement. 
While a greater diversity of isolates and interlaboratory comparisons must be tested, the current study suggests that an epidemiological breakpoint (of approximately ≤ 30 mm = resistant; ≥ 31 mm = susceptible) can be established to predict methicillin resistance in S. pseudintermedius. 2. Mapping Clearances in Tropical Dry Forests Using Breakpoints, Trend, and Seasonal Components from MODIS Time Series: Does Forest Type Matter? Directory of Open Access Journals (Sweden) Kenneth Grogan 2016-08-01 Full Text Available Tropical environments present a unique challenge for optical time series analysis, primarily owing to fragmented data availability, persistent cloud cover and atmospheric aerosols. Additionally, little is known of whether the performance of time series change detection is affected by diverse forest types found in tropical dry regions. In this paper, we develop a methodology for mapping forest clearing in Southeast Asia using a study region characterised by heterogeneous forest types. Moderate Resolution Imaging Spectroradiometer (MODIS) time series are decomposed using Breaks For Additive Season and Trend (BFAST), and breakpoints, trend, and seasonal components are combined in a binomial probability model to distinguish between cleared and stable forest. We found that the addition of seasonality and trend information improves the change model performance compared to using breakpoints alone. We also demonstrate the value of considering forest type in disturbance mapping in comparison to the more common approach that combines all forest types into a single generalised forest class. By taking a generalised forest approach, there is less control over the error distribution in each forest type. Dry-deciduous and evergreen forests are especially sensitive to error imbalances using a generalised forest model, i.e., clearances were underestimated in evergreen forest, and overestimated in dry-deciduous forest.
This suggests that forest type needs to be considered in time series change mapping, especially in heterogeneous forest regions. Our approach builds towards improving large-area monitoring of forest-diverse regions such as Southeast Asia. The findings of this study should also be transferable across optical sensors and are therefore relevant for the future availability of dense time series for the tropics at higher spatial resolutions. 3. Penicillin susceptibility breakpoints for Streptococcus pneumoniae and their effect on susceptibility categorisation in Germany (1997-2013). Science.gov (United States) Imöhl, M; Reinert, R R; Tulkens, P M; van der Linden, M 2014-11-01 Continuous nationwide surveillance of invasive pneumococcal disease (IPD) was conducted in Germany. From July 1, 1997, to June 30, 2013, data on penicillin susceptibility were available for 20,437 isolates. 2,790 of these isolates (13.7 %) originate from patients with meningitis and 17,647 isolates (86.3 %) are from non-meningitis cases. A slight decline in isolates susceptible at 0.06 and 0.12 μg/ml can be noticed over the years. Overall, 89.1 % of the isolates had minimum inhibitory concentrations (MICs) of ≤0.015 μg/ml. In 2012/2013, the first three isolates of Streptococcus pneumoniae with MICs of 8 μg/ml were found. The application of different guidelines with other MIC breakpoints for the interpretation of penicillin resistance leads to differences in susceptibility categorisation. According to the pre-2008 Clinical and Laboratory Standards Institute (CLSI) interpretive criteria, 5.3 % of isolates overall were intermediate and 1.4 % were resistant to penicillin. Application of the 2008-2014 CLSI interpretive criteria resulted in 7.6 % resistance among meningitis cases and 0.5 % intermediate resistance in non-meningitis cases. 
Referring to the 2009-2014 European Committee on Antimicrobial Susceptibility Testing (EUCAST) breakpoints, 7.6 % of the isolates in the meningitis group were resistant to penicillin. In the non-meningitis group, 6.1 % of the isolates were intermediate and 0.5 % were resistant. These differences should be kept in mind when surveillance studies on pneumococcal penicillin resistance are compared. 4. Empirical third-generation cephalosporin therapy for adults with community-onset Enterobacteriaceae bacteraemia: Impact of revised CLSI breakpoints. Science.gov (United States) Hsieh, Chih-Chia; Lee, Chung-Hsun; Li, Ming-Chi; Hong, Ming-Yuan; Chi, Chih-Hsien; Lee, Ching-Chi 2016-04-01 Third-generation cephalosporins (3GCs) [ceftriaxone (CRO) and cefotaxime (CTX)] have remarkable potency against Enterobacteriaceae and are commonly prescribed for the treatment of community-onset bacteraemia. However, clinical evidence supporting the updated interpretive criteria of the Clinical and Laboratory Standards Institute (CLSI) is limited. Adults with community-onset monomicrobial Enterobacteriaceae bacteraemia treated empirically with CRO or CTX were recruited. Clinical information was collected from medical records and CTX MICs were determined using the broth microdilution method. Eligible patients (n=409) were categorised into de-escalation (260; 63.6%), no switch (115; 28.1%) and escalation (34; 8.3%) groups according to the type of definitive antibiotics. Multivariate regression revealed five independent predictors of 28-day mortality: fatal co-morbidities based on McCabe classification [odds ratio (OR)=19.96; P<0.001]; high Pitt bacteraemia score (≥4) at bacteraemia onset (OR=13.91; P<0.001); bacteraemia because of pneumonia (OR=5.45; P=0.007); de-escalation after empirical therapy (OR=0.28; P=0.03); and isolates with a CTX MIC≤1mg/L (OR=0.17; P=0.02). 
Of note, isolates with a CTX MIC≤8mg/L (indicated as susceptible by previous CLSI breakpoints) were not associated with mortality. Furthermore, clinical failure and 28-day mortality rates had a tendency to increase with increasing CTX MIC (γ=1.00; P=0.01). Conclusively, focusing on patients with community-onset Enterobacteriaceae bacteraemia receiving empirical 3GC therapy, the present study provides clinically critical evidence to validate the proposed reduction in the susceptibility breakpoint of CTX to MIC≤1mg/L. 5. Phylogenetic signal dissection identifies the root of starfishes. Science.gov (United States) Feuda, Roberto; Smith, Andrew B 2015-01-01 Relationships within the class Asteroidea have remained controversial for almost 100 years and, despite many attempts to resolve this problem using molecular data, no consensus has yet emerged. Using two nuclear genes and a taxon sampling covering the major asteroid clades we show that non-phylogenetic signal created by three factors--Long Branch Attraction, compositional heterogeneity and the use of poorly fitting models of evolution--have confounded accurate estimation of phylogenetic relationships. To overcome the effect of this non-phylogenetic signal we analyse the data using non-homogeneous models, site stripping and the creation of subpartitions aimed to reduce or amplify the systematic error, and calculate Bayes Factor support for a selection of previously suggested topological arrangements of asteroid orders. We show that most of the previous alternative hypotheses are not supported in the most reliable data partitions, including the previously suggested placement of either Forcipulatida or Paxillosida as sister group to the other major branches. The best-supported solution places Velatida as the sister group to other asteroids, and the implications of this finding for the morphological evolution of asteroids are presented. 6. Progress, pitfalls and parallel universes: a history of insect phylogenetics. 
Science.gov (United States) Kjer, Karl M; Simon, Chris; Yavorskaya, Margarita; Beutel, Rolf G 2016-08-01 The phylogeny of insects has been both extensively studied and vigorously debated for over a century. A relatively accurate deep phylogeny had been produced by 1904. It was not substantially improved in topology until recently when phylogenomics settled many long-standing controversies. Intervening advances came instead through methodological improvement. Early molecular phylogenetic studies (1985-2005), dominated by a few genes, provided datasets that were too small to resolve controversial phylogenetic problems. Adding to the lack of consensus, this period was characterized by a polarization of philosophies, with individuals belonging to either parsimony or maximum-likelihood camps; each largely ignoring the insights of the other. The result was an unfortunate detour in which the few perceived phylogenetic revolutions published by both sides of the philosophical divide were probably erroneous. The size of datasets has been growing exponentially since the mid-1980s accompanied by a wave of confidence that all relationships will soon be known. However, large datasets create new challenges, and a large number of genes does not guarantee reliable results. If history is a guide, then the quality of conclusions will be determined by an improved understanding of both molecular and morphological evolution, and not simply the number of genes analysed. 7. Phylogenetic signal dissection identifies the root of starfishes. Directory of Open Access Journals (Sweden) Roberto Feuda Full Text Available Relationships within the class Asteroidea have remained controversial for almost 100 years and, despite many attempts to resolve this problem using molecular data, no consensus has yet emerged. 
Using two nuclear genes and a taxon sampling covering the major asteroid clades we show that non-phylogenetic signal created by three factors--Long Branch Attraction, compositional heterogeneity and the use of poorly fitting models of evolution--have confounded accurate estimation of phylogenetic relationships. To overcome the effect of this non-phylogenetic signal we analyse the data using non-homogeneous models, site stripping and the creation of subpartitions aimed to reduce or amplify the systematic error, and calculate Bayes Factor support for a selection of previously suggested topological arrangements of asteroid orders. We show that most of the previous alternative hypotheses are not supported in the most reliable data partitions, including the previously suggested placement of either Forcipulatida or Paxillosida as sister group to the other major branches. The best-supported solution places Velatida as the sister group to other asteroids, and the implications of this finding for the morphological evolution of asteroids are presented. 8. Modelling the association of dengue fever cases with temperature and relative humidity in Jeddah, Saudi Arabia-A generalised linear model with break-point analysis. Science.gov (United States) Alkhaldy, Ibrahim 2017-04-01 The aim of this study was to examine the role of environmental factors in the temporal distribution of dengue fever in Jeddah, Saudi Arabia. The relationship between dengue fever cases and climatic factors such as relative humidity and temperature was investigated during 2006-2009 to determine whether there is any relationship between dengue fever cases and climatic parameters in Jeddah City, Saudi Arabia. A generalised linear model (GLM) with a break-point was used to determine how different levels of temperature and relative humidity affected the distribution of the number of cases of dengue fever. 
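A break-point regression of the kind the dengue study describes can be sketched in a few lines. This is not the authors' code: their model is a GLM of case counts, while the sketch below uses ordinary least squares on synthetic data for simplicity, and the hinge-term grid search, the variable names, and the example break at x = 30 are all illustrative assumptions.

```python
import numpy as np

def fit_breakpoint(x, y):
    """Grid-search a single break-point: for each interior candidate c,
    fit y ~ b0 + b1*x + b2*max(0, x - c) by least squares and keep the
    candidate with the smallest residual sum of squares (RSS)."""
    best_c, best_rss, best_beta = None, np.inf, None
    for c in np.unique(x)[1:-1]:  # interior candidates only
        X = np.column_stack([np.ones_like(x), x, np.maximum(0.0, x - c)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta) ** 2))
        if rss < best_rss:
            best_c, best_rss, best_beta = c, rss, beta
    return best_c, best_rss, best_beta

# Synthetic example: the slope of the response changes at x = 30
# (think of x as a temperature-like covariate; purely illustrative).
rng = np.random.default_rng(0)
x = rng.uniform(20.0, 40.0, 200)
y = 2.0 + 0.1 * x + 1.5 * np.maximum(0.0, x - 30.0) + rng.normal(0.0, 0.2, 200)
c, rss, beta = fit_breakpoint(x, y)  # c should land near the true break at 30
```

The same hinge-plus-grid-search idea carries over to a Poisson or other GLM by swapping the least-squares fit for the appropriate likelihood maximisation.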
Break-point analysis was performed to model the effect before and after a break-point (change point) in the explanatory parameters under various scenarios. Akaike information criterion (AIC) and cross validation (CV) were used to assess the performance of the models. The results showed that maximum temperature and mean relative humidity are most probably the better predictors of the number of dengue fever cases in Jeddah. In this study three scenarios were modelled: no time lag, 1-week lag and 2-week lag. Among these scenarios, the 1-week lag model using mean relative humidity as an explanatory variable showed better performance. This study showed a clear relationship between the meteorological variables and the number of dengue fever cases in Jeddah. The results also demonstrated that meteorological variables can be successfully used to estimate the number of dengue fever cases for a given period of time. Break-point analysis provides further insight into the association between meteorological parameters and dengue fever cases by dividing the meteorological parameters into certain break-points. 9. Conflicting phylogenetic position of Schizosaccharomyces pombe NARCIS (Netherlands) Kuramae, Eiko E.; Robert, Vincent; Snel, Berend; Boekhout, Teun 2006-01-01 The phylogenetic position of the fission yeast Schizosaccharomyces pombe in the fungal Tree of Life is still controversial. Three alternative phylogenetic positions have been proposed in the literature, namely (1) a position basal to the Hemiascomycetes and Euascomycetes, (2) a position as a sister… 10. Efficient Computation of Popular Phylogenetic Tree Measures DEFF Research Database (Denmark) Tsirogiannis, Constantinos; Sandel, Brody Steven; Cheliotis, Dimitris 2012-01-01 Given a phylogenetic tree $\mathcal{T}$ of n nodes, and a sample R of its tips (leaf nodes), a very common problem in ecological and evolutionary research is to evaluate a distance measure for the elements in R.
Two of the most common measures of this kind are the Mean Pairwise Distance… software package for processing phylogenetic trees… 11. Insect phylogenetics in the digital age. Science.gov (United States) Dietrich, Christopher H; Dmitriev, Dmitry A 2016-12-01 Insect systematists have long used digital data management tools to facilitate phylogenetic research. Web-based platforms developed over the past several years support creation of comprehensive, openly accessible data repositories and analytical tools that support large-scale collaboration, accelerating efforts to document Earth's biota and reconstruct the Tree of Life. New digital tools have the potential to further enhance insect phylogenetics by providing efficient workflows for capturing and analyzing phylogenetically relevant data. Recent initiatives streamline various steps in phylogenetic studies and provide community access to supercomputing resources. In the near future, automated, web-based systems will enable researchers to complete a phylogenetic study from start to finish using resources linked together within a single portal and incorporate results into a global synthesis. 12. Testing for phylogenetic signal in biological traits: the ubiquity of cross-product statistics. Science.gov (United States) Pavoine, Sandrine; Ricotta, Carlo 2013-03-01 To evaluate rates of evolution, to establish tests of correlation between two traits, or to investigate to what degree the phylogeny of a species assemblage is predictive of a trait value, so-called tests for phylogenetic signal are used. Being based on different approaches, these tests are generally thought to possess quite different statistical performances. In this article, we show that the Blomberg et al. K and K*, the Abouheif index, the Moran's I, and the Mantel correlation are all based on a cross-product statistic, and are thus all related to each other when they are associated with a permutation test of phylogenetic signal.
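The shared cross-product form underlying these tests can be sketched as follows. This is a generic illustration, not code from the paper: the statistic below is a Moran's-I-style cross-product on a centred trait, and the block-structured weight matrix standing in for phylogenetic proximity, the toy two-clade trait, and all names are assumptions made for the example.

```python
import numpy as np

def cross_product_stat(w, z):
    """Cross-product statistic: sum_ij w_ij * z_i * z_j on the centred
    trait, normalised by the trait's sum of squares (this is Moran's I
    up to a constant scaling by n / sum(w))."""
    zc = z - z.mean()
    return float((w * np.outer(zc, zc)).sum() / (zc ** 2).sum())

def permutation_test(w, z, n_perm=999, seed=0):
    """Permutation test of phylogenetic signal: shuffle trait values
    across tips to build the null distribution of the statistic."""
    rng = np.random.default_rng(seed)
    obs = cross_product_stat(w, z)
    null = [cross_product_stat(w, rng.permutation(z)) for _ in range(n_perm)]
    p = (1 + sum(s >= obs for s in null)) / (n_perm + 1)
    return obs, p

# Toy example: two 5-tip clades; weight 1 between tips of the same clade.
w = np.zeros((10, 10))
w[:5, :5] = 1.0
w[5:, 5:] = 1.0
np.fill_diagonal(w, 0.0)
trait = np.array([1.0] * 5 + [-1.0] * 5)
trait += np.random.default_rng(1).normal(0.0, 0.1, 10)  # clade-clustered trait
obs, p = permutation_test(w, trait)  # strong clustering: obs > 0, small p
```

Swapping in a different weight matrix or trait (dis)similarity changes which named test this computes, which is exactly the point the abstract makes.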
What changes is only the way phylogenetic and trait similarities (or dissimilarities) among the tips of a phylogeny are computed. The definitions of the phylogenetic and trait-based (dis)similarities among tips thus determine the performance of the tests. We briefly discuss the biological and statistical consequences (in terms of power and type I error of the tests) of the observed relatedness among the statistics that allow tests for phylogenetic signal. Blomberg et al. K* statistic appears as one of the most efficient approaches to test for phylogenetic signal. When branch lengths are not available or not accurate, Abouheif's Cmean statistic is a powerful alternative to K*. 13. Molecular Phylogenetics: Mathematical Framework and Unsolved Problems Science.gov (United States) Xia, Xuhua Phylogenetic relationship is essential in dating evolutionary events, reconstructing ancestral genes, predicting sites that are important to natural selection, and, ultimately, understanding genomic evolution. Three categories of phylogenetic methods are currently used: the distance-based, the maximum parsimony, and the maximum likelihood method. Here, I present the mathematical framework of these methods and their rationales, provide computational details for each of them, illustrate analytically and numerically the potential biases inherent in these methods, and outline computational challenges and unresolved problems. This is followed by a brief discussion of the Bayesian approach that has been recently used in molecular phylogenetics. 14. On Tree-Based Phylogenetic Networks. Science.gov (United States) Zhang, Louxin 2016-07-01 A large class of phylogenetic networks can be obtained from trees by the addition of horizontal edges between the tree edges. These networks are called tree-based networks.
We present a simple necessary and sufficient condition for tree-based networks and prove that a universal tree-based network exists for any number of taxa that contains as its base every phylogenetic tree on the same set of taxa. This answers two problems posed by Francis and Steel recently. A byproduct is a computer program for generating random binary phylogenetic networks under the uniform distribution model. 15. DendroBLAST: approximate phylogenetic trees in the absence of multiple sequence alignments. Directory of Open Access Journals (Sweden) Steven Kelly Full Text Available The rapidly growing availability of genome information has created considerable demand for both fast and accurate phylogenetic inference algorithms. We present a novel method called DendroBLAST for reconstructing phylogenetic dendrograms/trees from protein sequences using BLAST. This method differs from other methods by incorporating a simple model of sequence evolution to test the effect of introducing sequence changes on the reliability of the bipartitions in the inferred tree. Using realistic simulated sequence data we demonstrate that this method produces phylogenetic trees that are more accurate than other commonly used distance-based methods, though not as accurate as maximum likelihood methods from good quality multiple sequence alignments. In addition to tests on simulated data, we use DendroBLAST to generate input trees for a supertree reconstruction of the phylogeny of the Archaea. This independent analysis produces an approximate phylogeny of the Archaea that has both high precision and recall when compared to previously published analysis of the same dataset using conventional methods. Taken together these results demonstrate that approximate phylogenetic trees can be produced in the absence of multiple sequence alignments, and we propose that these trees will provide a platform for improving and informing downstream bioinformatic analysis.
A web implementation of the DendroBLAST method is freely available for use at http://www.dendroblast.com/. 16. Efficient and accurate fragmentation methods. Science.gov (United States) Pruitt, Spencer R; Bertoni, Colleen; Brorsen, Kurt R; Gordon, Mark S 2014-09-16 Conspectus Three novel fragmentation methods that are available in the electronic structure program GAMESS (general atomic and molecular electronic structure system) are discussed in this Account. The fragment molecular orbital (FMO) method can be combined with any electronic structure method to perform accurate calculations on large molecular species with no reliance on capping atoms or empirical parameters. The FMO method is highly scalable and can take advantage of massively parallel computer systems. For example, the method has been shown to scale nearly linearly on up to 131 000 processor cores for calculations on large water clusters. There have been many applications of the FMO method to large molecular clusters, to biomolecules (e.g., proteins), and to materials that are used as heterogeneous catalysts. The effective fragment potential (EFP) method is a model potential approach that is fully derived from first principles and has no empirically fitted parameters. Consequently, an EFP can be generated for any molecule by a simple preparatory GAMESS calculation. The EFP method provides accurate descriptions of all types of intermolecular interactions, including Coulombic interactions, polarization/induction, exchange repulsion, dispersion, and charge transfer. The EFP method has been applied successfully to the study of liquid water, π-stacking in substituted benzenes and in DNA base pairs, solvent effects on positive and negative ions, electronic spectra and dynamics, non-adiabatic phenomena in electronic excited states, and nonlinear excited state properties. 
The effective fragment molecular orbital (EFMO) method is a merger of the FMO and EFP methods, in which interfragment interactions are described by the EFP potential, rather than the less accurate electrostatic potential. The use of EFP in this manner facilitates the use of a smaller value for the distance cut-off (Rcut). Rcut determines the distance at which EFP interactions replace fully quantum… 17. Accurate determination of antenna directivity DEFF Research Database (Denmark) Dich, Mikael 1997-01-01 The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna for which the radiated power density is known in a finite number of points on the far-field sphere is presented. The main application of the formula is determination of directivity from power-pattern measurements. The derivation is based on the theory of spherical wave expansion of electromagnetic fields, which also establishes a simple criterion for the required number of samples of the power density. An array antenna consisting of Hertzian dipoles is used to test the accuracy and rate of convergence... 18. Myelodysplastic syndrome with a t(2;11)(p21;q23-24) and translocation breakpoint close to miR-125b-1. Science.gov (United States) Thorsen, Jim; Aamot, Hege Vangstein; Roberto, Roberta; Tjønnfjord, Geir E; Micci, Francesca; Heim, Sverre 2012-10-01 The upregulation of oncogenes and the formation of fusion genes are commonly observed in hematological malignancies with recurring balanced translocations. However, in some malignancies exhibiting balanced chromosomal rearrangements, neither oncogene deregulation nor generation of fusion genes appears to be involved, suggesting that other mechanisms are at play. In the rare myelodysplastic syndrome (MDS) containing a t(2;11)(p21;q23-24) translocation, breakpoints near a microRNA locus, miR-125b-1, in 11q24 have been suggested to be pathogenetically involved.
Here we report the detailed mapping and sequencing of the breakpoint located only 2 kilobases from miR-125b-1 in an MDS patient with a t(2;11)(p21;q23-24). 19. The disentangling number for phylogenetic mixtures CERN Document Server Sullivant, Seth 2011-01-01 We provide a logarithmic upper bound for the disentangling number on unordered lists of leaf-labeled trees. This result is useful for analyzing phylogenetic mixture models. The proof depends on interpreting multisets of trees as high-dimensional contingency tables. 20. Advances in phylogenetic studies of Nematoda Institute of Scientific and Technical Information of China (English) 2002-01-01 Nematoda is a metazoan group with extremely high diversity, second only to Insecta. Caenorhabditis elegans is now a favorable experimental model animal in modern developmental biology, genetics and genomics studies. However, the phylogeny of Nematoda and the phylogenetic position of the phylum within the animal kingdom have long been in debate. Recent molecular phylogenetic studies posed great challenges to the traditional nematode classification. The new phylogenies not only placed the Nematoda in the Ecdysozoa and divided the phylum into five clades, but also provided new insights into animal molecular identification and phylogenetic biodiversity studies. The present paper reviews major progress and remaining problems in the current molecular phylogenetic studies of Nematoda, and anticipates future directions in this field. 1. Charles Darwin, beetles and phylogenetics. Science.gov (United States) Beutel, Rolf G; Friedrich, Frank; Leschen, Richard A B 2009-11-01 … changed dramatically. With very large data sets and high throughput sampling, phylogenetic questions can be addressed without prior knowledge of morphological characters. Nevertheless, molecular studies have not led to the great breakthrough in beetle systematics--yet.
Especially the phylogeny of the extremely species rich suborder Polyphaga remains incompletely resolved. Coordinated efforts of molecular workers and of morphologists using innovative techniques may lead to more profound insights in the near future. The final aim is to develop a well-founded phylogeny, which truly reflects the evolution of this immensely species rich group of organisms. 2. Charles Darwin, beetles and phylogenetics Science.gov (United States) Beutel, Rolf G.; Friedrich, Frank; Leschen, Richard A. B. 2009-11-01 . This has changed dramatically. With very large data sets and high throughput sampling, phylogenetic questions can be addressed without prior knowledge of morphological characters. Nevertheless, molecular studies have not lead to the great breakthrough in beetle systematics—yet. Especially the phylogeny of the extremely species rich suborder Polyphaga remains incompletely resolved. Coordinated efforts of molecular workers and of morphologists using innovative techniques may lead to more profound insights in the near future. The final aim is to develop a well-founded phylogeny, which truly reflects the evolution of this immensely species rich group of organisms. 3. Phylogenetic distribution of fungal sterols. Directory of Open Access Journals (Sweden) John D Weete Full Text Available BACKGROUND: Ergosterol has been considered the "fungal sterol" for almost 125 years; however, additional sterol data superimposed on a recent molecular phylogeny of kingdom Fungi reveals a different and more complex situation. METHODOLOGY/PRINCIPAL FINDINGS: The interpretation of sterol distribution data in a modern phylogenetic context indicates that there is a clear trend from cholesterol and other Delta(5 sterols in the earliest diverging fungal species to ergosterol in later diverging fungi. There are, however, deviations from this pattern in certain clades. 
Sterols of the diverse zoosporic and zygosporic forms exhibit structural diversity with cholesterol and 24-ethyl -Delta(5 sterols in zoosporic taxa, and 24-methyl sterols in zygosporic fungi. For example, each of the three monophyletic lineages of zygosporic fungi has distinctive major sterols, ergosterol in Mucorales, 22-dihydroergosterol in Dimargaritales, Harpellales, and Kickxellales (DHK clade, and 24-methyl cholesterol in Entomophthorales. Other departures from ergosterol as the dominant sterol include: 24-ethyl cholesterol in Glomeromycota, 24-ethyl cholest-7-enol and 24-ethyl-cholesta-7,24(28-dienol in rust fungi, brassicasterol in Taphrinales and hypogeous pezizalean species, and cholesterol in Pneumocystis. CONCLUSIONS/SIGNIFICANCE: Five dominant end products of sterol biosynthesis (cholesterol, ergosterol, 24-methyl cholesterol, 24-ethyl cholesterol, brassicasterol, and intermediates in the formation of 24-ethyl cholesterol, are major sterols in 175 species of Fungi. Although most fungi in the most speciose clades have ergosterol as a major sterol, sterols are more varied than currently understood, and their distribution supports certain clades of Fungi in current fungal phylogenies. In addition to the intellectual importance of understanding evolution of sterol synthesis in fungi, there is practical importance because certain antifungal drugs (e.g., azoles target reactions in 4. Influence of clinical breakpoint changes from CLSI 2009 to EUCAST 2011 antimicrobial susceptibility testing guidelines on multidrug resistance rates of Gram-negative rods. Science.gov (United States) Hombach, Michael; Wolfensberger, Aline; Kuster, Stefan P; Böttger, Erik C 2013-07-01 Multidrug resistance (MDR) rates of Gram-negative rods were analyzed comparing CLSI 2009 and EUCAST 2011 antibiotic susceptibility testing guidelines. 
After EUCAST 2011 was applied, the MDR rates increased for Klebsiella pneumoniae (2.2%), Enterobacter cloacae (1.1%), Pseudomonas aeruginosa (0.7%), and Escherichia coli (0.4%). A total of 24% of Enterobacteriaceae MDR isolates and 12% of P. aeruginosa MDR isolates were categorized as MDR due to breakpoint changes. 5. Incidence of extended-spectrum-β-lactamase-producing Escherichia coli and Klebsiella pneumoniae isolates that test susceptible to cephalosporins and aztreonam by the revised CLSI breakpoints. Science.gov (United States) McWilliams, Carla S; Condon, Susan; Schwartz, Rebecca M; Ginocchio, Christine C 2014-07-01 The incidence of aztreonam and cephalosporin susceptibility, determined using the revised CLSI breakpoints, for extended-spectrum-β-lactamase (ESBL)-producing Escherichia coli and Klebsiella pneumoniae isolates was evaluated. Our analysis showed that results for aztreonam and/or ≥1 cephalosporin were reported as susceptible or intermediate for 89.2% of ESBL-producing E. coli isolates (569/638 isolates) and 67.7% of ESBL-producing K. pneumoniae isolates (155/229 isolates). 6. Clinical Correlation of the CLSI Susceptibility Breakpoint for Piperacillin-Tazobactam against Extended-Spectrum-β-Lactamase-Producing Escherichia coli and Klebsiella Species† Science.gov (United States) Gavin, Patrick J.; Suseno, Mira T.; Thomson, Richard B.; Gaydos, J. Michael; Pierson, Carl L.; Halstead, Diane C.; Aslanzadeh, Jaber; Brecher, Stephen; Rotstein, Coleman; Brossette, Stephen E.; Peterson, Lance R. 2006-01-01 We assessed infections caused by extended-spectrum-β-lactamase-producing Escherichia coli or Klebsiella spp. treated with piperacillin-tazobactam to determine if the susceptibility breakpoint predicts outcome. Treatment was successful in 10 of 11 nonurinary infections from susceptible strains and in 2 of 6 infections with MICs of >16/4 μg/ml. All six urinary infections responded to treatment regardless of susceptibility. PMID:16723596 7.
Correlation of Minimum Inhibitory Concentration Breakpoints and Methicillin Resistance Gene Carriage in Clinical Isolates of Staphylococcus epidermidis Directory of Open Access Journals (Sweden) Fereshteh Eftekhar 2011-09-01 Full Text Available Staphylococcus epidermidis is the most important member of the coagulase-negative staphylococci responsible for community- and hospital-acquired infections. Most clinical isolates of S. epidermidis are resistant to methicillin, making these infections difficult to treat. In this study, the methicillin resistance phenotype was compared with methicillin resistance (mecA) gene carriage in 55 clinical isolates of S. epidermidis. Susceptibility was measured by disc diffusion using methicillin discs, and minimum inhibitory concentrations (MIC) were measured using broth microdilution. Methicillin resistance (mecA) gene carriage was detected by specific primers and PCR. Disc susceptibility results showed 90.9% resistance to methicillin. Considering a MIC breakpoint of 4 µg/ml, 78.1% of the isolates were methicillin resistant, 76.36% of which carried the mecA gene. On the other hand, when a breakpoint of 0.5 µg/ml was used, 89.09% were methicillin resistant, of which 93.75% were mecA positive. There was a better correlation of the 0.5 µg/ml MIC breakpoint with both disc diffusion results and mecA gene carriage. The findings suggest that despite the usefulness of molecular methods for rapid diagnosis of virulence genes, gene carriage does not necessarily account for virulence phenotype. Ultimately, gene expression, which is controlled by the environment, would determine the outcome 8. Comprehensive characterization of evolutionary conserved breakpoints in four New World Monkey karyotypes compared to Chlorocebus aethiops and Homo sapiens.
Science.gov (United States) Fan, Xiaobo; Supiwong, Weerayuth; Weise, Anja; Mrasek, Kristin; Kosyakova, Nadezda; Tanomtong, Alongkoad; Pinthong, Krit; Trifonov, Vladimir A; Cioffi, Marcelo de Bello; Grothmann, Pierre; Liehr, Thomas; Oliveira, Edivaldo H C de 2015-11-01 Comparative cytogenetic analysis in New World Monkeys (NWMs) using human multicolor banding (MCB) probe sets were not previously done. Here we report on an MCB based FISH-banding study complemented with selected locus-specific and heterochromatin specific probes in four NWMs and one Old World Monkey (OWM) species, i.e. in Alouatta caraya (ACA), Callithrix jacchus (CJA), Cebus apella (CAP), Saimiri sciureus (SSC), and Chlorocebus aethiops (CAE), respectively. 107 individual evolutionary conserved breakpoints (ECBs) among those species were identified and compared with those of other species in previous reports. Especially for chromosomal regions being syntenic to human chromosomes 6, 8, 9, 10, 11, 12 and 16 previously cryptic rearrangements could be observed. 50.4% (54/107) NWM-ECBs were colocalized with those of OWMs, 62.6% (62/99) NWM-ECBs were related with those of Hylobates lar (HLA) and 66.3% (71/107) NWM-ECBs corresponded with those known from other mammalians. Furthermore, human fragile sites were aligned with the ECBs found in the five studied species and interestingly 66.3% ECBs colocalized with those fragile sites (FS). Overall, this study presents detailed chromosomal maps of one OWM and four NWM species. This data will be helpful to further investigation on chromosome evolution in NWM and hominoids in general and is prerequisite for correct interpretation of future sequencing based genomic studies in those species. 9. Summer holidays as break-point in shaping a tannery sludge microbial community around a stable core microbiota. 
Science.gov (United States) Giordano, Cesira; Boscaro, Vittorio; Munz, Giulio; Mori, Gualtiero; Vannini, Claudia 2016-07-27 Recently, several investigations focused on the discovery of a bacterial consortium shared among different wastewater treatment plants (WWTPs). Nevertheless, the definition of a core microbiota over time represents the necessary counterpart in order to unravel the dynamics of bacterial communities in these environments. Here we performed a monthly survey on the bacterial community of a consortial industrial plant. Objectives of this study were: (1) to identify a core microbiota constant over time; (2) to evaluate the temporal dynamics of the community during one year. A conspicuous and diversified core microbiota is constituted by operational taxonomic units which are present throughout the year in the plant. Community composition data confirm that the presence and abundance of bacteria in WWTPs is highly consistent at high taxonomic level. Our results indicate however a difference in microbial community structure between two groups of samples, identifying the summer holiday period as the break-point. Changes in the structure of the microbial community occur otherwise gradually, one month after another. Further studies will clarify how the size and diversity of the core microbiota could affect the observed dynamics. 10. How does cognition evolve? Phylogenetic comparative psychology. Science.gov (United States) MacLean, Evan L; Matthews, Luke J; Hare, Brian A; Nunn, Charles L; Anderson, Rindy C; Aureli, Filippo; Brannon, Elizabeth M; Call, Josep; Drea, Christine M; Emery, Nathan J; Haun, Daniel B M; Herrmann, Esther; Jacobs, Lucia F; Platt, Michael L; Rosati, Alexandra G; Sandel, Aaron A; Schroepfer, Kara K; Seed, Amanda M; Tan, Jingzhi; van Schaik, Carel P; Wobber, Victoria 2012-03-01 Now more than ever animal studies have the potential to test hypotheses regarding how cognition evolves. 
Comparative psychologists have developed new techniques to probe the cognitive mechanisms underlying animal behavior, and they have become increasingly skillful at adapting methodologies to test multiple species. Meanwhile, evolutionary biologists have generated quantitative approaches to investigate the phylogenetic distribution and function of phenotypic traits, including cognition. In particular, phylogenetic methods can quantitatively (1) test whether specific cognitive abilities are correlated with life history (e.g., lifespan), morphology (e.g., brain size), or socio-ecological variables (e.g., social system), (2) measure how strongly phylogenetic relatedness predicts the distribution of cognitive skills across species, and (3) estimate the ancestral state of a given cognitive trait using measures of cognitive performance from extant species. Phylogenetic methods can also be used to guide the selection of species comparisons that offer the strongest tests of a priori predictions of cognitive evolutionary hypotheses (i.e., phylogenetic targeting). Here, we explain how an integration of comparative psychology and evolutionary biology will answer a host of questions regarding the phylogenetic distribution and history of cognitive traits, as well as the evolutionary processes that drove their evolution. 11. A practical guide to phylogenetics for nonexperts. Science.gov (United States) O'Halloran, Damien 2014-02-05 Many researchers, across incredibly diverse foci, are applying phylogenetics to their research question(s). However, many researchers are new to this topic and so it presents inherent problems. Here we compile a practical introduction to phylogenetics for nonexperts. We outline in a step-by-step manner, a pipeline for generating reliable phylogenies from gene sequence datasets. We begin with a user-guide for similarity search tools via online interfaces as well as local executables. 
Next, we explore programs for generating multiple sequence alignments followed by protocols for using software to determine best-fit models of evolution. We then outline protocols for reconstructing phylogenetic relationships via maximum likelihood and Bayesian criteria and finally describe tools for visualizing phylogenetic trees. While this is not by any means an exhaustive description of phylogenetic approaches, it does provide the reader with practical starting information on key software applications commonly utilized by phylogeneticists. The vision for this article would be that it could serve as a practical training tool for researchers embarking on phylogenetic studies and also serve as an educational resource that could be incorporated into a classroom or teaching-lab. 12. Phylogenetic approaches to natural product structure prediction. Science.gov (United States) 2012-01-01 Phylogenetics is the study of the evolutionary relatedness among groups of organisms. Molecular phylogenetics uses sequence data to infer these relationships for both organisms and the genes they maintain. With the large amount of publicly available sequence data, phylogenetic inference has become increasingly important in all fields of biology. In the case of natural product research, phylogenetic relationships are proving to be highly informative in terms of delineating the architecture and function of the genes involved in secondary metabolite biosynthesis. Polyketide synthases and nonribosomal peptide synthetases provide model examples in which individual domain phylogenies display different predictive capacities, resolving features ranging from substrate specificity to structural motifs associated with the final metabolic product. This chapter provides examples in which phylogeny has proven effective in terms of predicting functional or structural aspects of secondary metabolism. 
The basics of how to build a reliable phylogenetic tree are explained along with information about programs and tools that can be used for this purpose. Furthermore, it introduces the Natural Product Domain Seeker, a recently developed Web tool that employs phylogenetic logic to classify ketosynthase and condensation domains based on established enzyme architecture and biochemical function. 13. Nodal distances for rooted phylogenetic trees. Science.gov (United States) Cardona, Gabriel; Llabrés, Mercè; Rosselló, Francesc; Valiente, Gabriel 2010-08-01 Dissimilarity measures for (possibly weighted) phylogenetic trees based on the comparison of their vectors of path lengths between pairs of taxa have been present in the systematics literature since the early seventies. For rooted phylogenetic trees, however, these vectors can only separate non-weighted binary trees, and therefore these dissimilarity measures are metrics only on this class of rooted phylogenetic trees. In this paper we overcome this problem by splitting in a suitable way each path length between two taxa into two lengths. We prove that the resulting split path-length matrices single out arbitrary rooted phylogenetic trees with nested taxa and arcs weighted in the set of positive real numbers. This allows the definition of metrics on this general class of rooted phylogenetic trees by comparing these matrices through metrics in spaces M_n(R) of real-valued n × n matrices. We conclude this paper by establishing some basic facts about the metrics for non-weighted phylogenetic trees defined in this way using L_p metrics on M_n(R), with p ∈ R_{>0}. 14.
Over half of breakpoints in gene pairs involved in cancer-specific recurrent translocations are mapped to human chromosomal fragile sites Directory of Open Access Journals (Sweden) Pierce Levi CT 2009-01-01 Full Text Available Abstract. Background: Gene rearrangements such as chromosomal translocations have been shown to contribute to cancer development. Human chromosomal fragile sites are regions of the genome especially prone to breakage, and have been implicated in various chromosome abnormalities found in cancer. However, there has been no comprehensive and quantitative examination of the location of fragile sites in relation to all chromosomal aberrations. Results: Using up-to-date databases containing all cancer-specific recurrent translocations, we have examined 444 unique pairs of genes involved in these translocations to determine the correlation of translocation breakpoints and fragile sites in the gene pairs. We found that over half (52%) of translocation breakpoints in at least one gene of these gene pairs are mapped to fragile sites. Among these, we examined the DNA sequences within and flanking three randomly selected pairs of translocation-prone genes, and found that they exhibit characteristic features of fragile DNA, with frequent AT-rich flexibility islands and the potential of forming highly stable secondary structures. Conclusion: Our study is the first to examine gene pairs involved in all recurrent chromosomal translocations observed in tumor cells, and to correlate the location of more than half of breakpoints to positions of known fragile sites. These results provide strong evidence to support a causative role for fragile sites in the generation of cancer-specific chromosomal rearrangements. 15.
Accurate Modeling of Advanced Reflectarrays DEFF Research Database (Denmark) Zhou, Min Analysis and optimization methods for the design of advanced printed reflectarrays have been investigated, and the study is focused on developing an accurate and efficient simulation tool. For the analysis, a good compromise between accuracy and efficiency can be obtained using the spectral domain...... to the POT. The GDOT can optimize for the size as well as the orientation and position of arbitrarily shaped array elements. Both co- and cross-polar radiation can be optimized for multiple frequencies, dual polarization, and several feed illuminations. Several contoured beam reflectarrays have been designed...... using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility... 16. The Accurate Particle Tracer Code CERN Document Server Wang, Yulei; Qin, Hong; Yu, Zhi 2016-01-01 The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusion energy research, computational mathematics, software engineering, and high-performance computation. The APT code consists of seven main modules, including the I/O module, the initialization module, the particle pusher module, the parallelization module, the field configuration module, the external force-field module, and the extendible module.
The I/O module, supported by Lua and Hdf5 projects, provides a user-friendly interface for both numerical simulation and data analysis. A series of new geometric numerical methods... 17. Accurate ab initio spin densities CERN Document Server Boguslawski, Katharina; Legeza, Örs; Reiher, Markus 2012-01-01 We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CA... 18. Accurate thickness measurement of graphene. Science.gov (United States) Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T 2016-03-29 Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. 
A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials. 19. Accurate thickness measurement of graphene Science.gov (United States) Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T. 2016-03-01 Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. 
A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials. 20. Molecular breakpoint cloning and gene expression studies of a novel translocation t(4;15)(q27;q11.2) associated with Prader-Willi syndrome Directory of Open Access Journals (Sweden) Slater Howard R 2005-05-01 Full Text Available Abstract. Background: Prader-Willi syndrome (MIM #176270; PWS) is caused by lack of the paternally-derived copies, or their expression, of multiple genes in a 4 Mb region on chromosome 15q11.2. Known mechanisms include large deletions, maternal uniparental disomy or mutations involving the imprinting center. De novo balanced reciprocal translocations in 5 reported individuals had breakpoints clustering in SNRPN intron 2 or exon 20/intron 20.
To further dissect the PWS phenotype and define the minimal critical region for PWS features, we have studied a 22-year-old male with a milder PWS phenotype and a de novo translocation t(4;15)(q27;q11.2). Methods: We used metaphase FISH to narrow the breakpoint region and molecular analyses to map the breakpoints on both chromosomes at the nucleotide level. The expression of genes on chromosome 15 on both sides of the breakpoint was determined by RT-PCR analyses. Results: Pertinent clinical features include neonatal hypotonia with feeding difficulties, hypogonadism, short stature, late-onset obesity, learning difficulties, abnormal social behavior and marked tolerance to pain, as well as sticky saliva and narcolepsy. Relative macrocephaly and facial features are not typical for PWS. The translocation breakpoints were identified within SNRPN intron 17 and intron 10 of a spliced non-coding transcript in band 4q27. LINE and SINE sequences at the exchange points may have contributed to the translocation event. By RT-PCR of lymphoblasts and fibroblasts, we find that upstream SNURF/SNRPN exons and snoRNAs HBII-437 and HBII-13 are expressed, but the downstream PWCR1/HBII-85 and HBII-438A/B snoRNAs are not. Conclusion: As part of the PWCR1/HBII-85 snoRNA cluster is highly conserved between human and mouse, while no copy of HBII-438 has been found in mouse, we conclude that the PWCR1/HBII-85 snoRNAs are likely to play a major role in the PWS phenotype. 1. Barcoding and Phylogenetic Inferences in Nine Mugilid Species (Pisces, Mugiliformes) Directory of Open Access Journals (Sweden) Neonila Polyakova 2013-10-01 Full Text Available Accurate identification of fish and fish products, from eggs to adults, is important in many areas. Grey mullets of the family Mugilidae are distributed worldwide and inhabit marine, estuarine, and freshwater environments in all tropical and temperate regions.
Various Mugilid species are commercially important in the fishery and aquaculture of many countries. For the present study we have chosen two Mugilid genes with different phylogenetic signals: the relatively variable mitochondrial cytochrome oxidase subunit I (COI) and the conservative nuclear rhodopsin (RHO). We examined their diversity within and among 9 Mugilid species belonging to 4 genera, many of which have been examined from multiple specimens, with the goal of determining whether DNA barcoding can achieve unambiguous species recognition of Mugilid species. The data obtained showed that information based on COI sequences was diagnostic not only for species-level identification but also for recognition of intraspecific units, e.g., allopatric populations of circumtropical Mugil cephalus, or even native and acclimatized specimens of Chelon haematocheila. All RHO sequences appeared strictly species-specific. Based on the data obtained, we conclude that both COI and RHO sequencing can be used to unambiguously identify fish species. Topologies of phylogenies based on RHO and COI sequences coincided with each other, and together they had a good phylogenetic signal. 2. Increased taxon sampling greatly reduces phylogenetic error. Science.gov (United States) Zwickl, Derrick J; Hillis, David M 2002-08-01 Several authors have argued recently that extensive taxon sampling has a positive and important effect on the accuracy of phylogenetic estimates. However, other authors have argued that there is little benefit of extensive taxon sampling, and so phylogenetic problems can or should be reduced to a few exemplar taxa as a means of reducing the computational complexity of the phylogenetic analysis. In this paper we examined five aspects of study design that may have led to these different perspectives.
First, we considered the measurement of phylogenetic error across a wide range of taxon sample sizes, and conclude that the expected error based on randomly selecting trees (which varies by taxon sample size) must be considered in evaluating error in studies of the effects of taxon sampling. Second, we addressed the scope of the phylogenetic problems defined by different samples of taxa, and argue that phylogenetic scope needs to be considered in evaluating the importance of taxon-sampling strategies. Third, we examined the claim that fast and simple tree searches are as effective as more thorough searches at finding near-optimal trees that minimize error. We show that a more complete search of tree space reduces phylogenetic error, especially as the taxon sample size increases. Fourth, we examined the effects of simple versus complex simulation models on taxonomic sampling studies. Although benefits of taxon sampling are apparent for all models, data generated under more complex models of evolution produce higher overall levels of error and show greater positive effects of increased taxon sampling. Fifth, we asked if different phylogenetic optimality criteria show different effects of taxon sampling. Although we found strong differences in effectiveness of different optimality criteria as a function of taxon sample size, increased taxon sampling improved the results from all the common optimality criteria. Nonetheless, the method that showed the lowest overall 3. Fourier transform inequalities for phylogenetic trees. Science.gov (United States) Matsen, Frederick A 2009-01-01 Phylogenetic invariants are not the only constraints on site-pattern frequency vectors for phylogenetic trees. A mutation matrix, by its definition, is the exponential of a matrix with non-negative off-diagonal entries; this positivity requirement implies non-trivial constraints on the site-pattern frequency vectors. We call these additional constraints "edge-parameter inequalities". 
In this paper, we first motivate the edge-parameter inequalities by considering a pathological site-pattern frequency vector corresponding to a quartet tree with a negative internal edge. This site-pattern frequency vector nevertheless satisfies all of the constraints described up to now in the literature. We next describe two complete sets of edge-parameter inequalities for the group-based models; these constraints are square-free monomial inequalities in the Fourier transformed coordinates. These inequalities, along with the phylogenetic invariants, form a complete description of the set of site-pattern frequency vectors corresponding to bona fide trees. Said in mathematical language, this paper explicitly presents two finite lists of inequalities in Fourier coordinates of the form "monomial ≤ 1", each list characterizing the phylogenetically relevant semialgebraic subsets of the phylogenetic varieties. 4. Worldwide phylogenetic relationship of avian poxviruses Science.gov (United States) Gyuranecz, Miklós; Foster, Jeffrey T.; Dán, Ádám; Ip, Hon S.; Egstad, Kristina F.; Parker, Patricia G.; Higashiguchi, Jenni M.; Skinner, Michael A.; Höfle, Ursula; Kreizinger, Zsuzsa; Dorrestein, Gerry M.; Solt, Szabolcs; Sós, Endre; Kim, Young Jun; Uhart, Marcela; Pereda, Ariel; González-Hein, Gisela; Hidalgo, Hector; Blanco, Juan-Manuel; Erdélyi, Károly 2013-01-01 Poxvirus infections have been found in 230 species of wild and domestic birds worldwide in both terrestrial and marine environments. This ubiquity raises the question of how infection has been transmitted and globally dispersed. We present a comprehensive global phylogeny of 111 novel poxvirus isolates in addition to all available sequences from GenBank. Phylogenetic analysis of the Avipoxvirus genus has traditionally relied on one gene region (4b core protein).
In this study we have expanded the analyses to include a second locus (DNA polymerase gene), allowing for a more robust phylogenetic framework, finer genetic resolution within specific groups and the detection of potential recombination. Our phylogenetic results reveal several major features of avipoxvirus evolution and ecology and propose an updated avipoxvirus taxonomy, including three novel subclades. The characterization of poxviruses from 57 species of birds in this study extends the current knowledge of their host range and provides the first evidence of the phylogenetic effect of genetic recombination of avipoxviruses. The repeated occurrence of avian family or order-specific grouping within certain clades (e.g. starling poxvirus, falcon poxvirus, raptor poxvirus, etc.) indicates a marked role of host adaptation, while the sharing of poxvirus species within prey-predator systems emphasizes the capacity for cross-species infection and limited host adaptation. Our study provides a broad and comprehensive phylogenetic analysis of the Avipoxvirus genus, an ecologically and environmentally important viral group, to formulate a genome sequencing strategy that will clarify avipoxvirus taxonomy. 5. Primate molecular phylogenetics in a genomic era. Science.gov (United States) Ting, Nelson; Sterner, Kirstin N 2013-02-01 A primary objective of molecular phylogenetics is to use molecular data to elucidate the evolutionary history of living organisms. Dr. Morris Goodman founded the journal Molecular Phylogenetics and Evolution as a forum where scientists could further our knowledge about the tree of life, and he recognized that the inference of species trees is a first and fundamental step to addressing many important evolutionary questions. In particular, Dr. Goodman was interested in obtaining a complete picture of the primate species tree in order to provide an evolutionary context for the study of human adaptations. 
A number of recent studies use multi-locus datasets to infer well-resolved and well-supported primate phylogenetic trees using consensus approaches (e.g., supermatrices). It is therefore tempting to assume that we have a complete picture of the primate tree, especially above the species level. However, recent theoretical and empirical work in the field of molecular phylogenetics demonstrates that consensus methods might provide a false sense of support at certain nodes. In this brief review we discuss the current state of primate molecular phylogenetics and highlight the importance of exploring the use of coalescent-based analyses that have the potential to better utilize information contained in multi-locus data. 6. Teaching Molecular Phylogenetics through Investigating a Real-World Phylogenetic Problem Science.gov (United States) Zhang, Xiaorong 2012-01-01 A phylogenetics exercise is incorporated into the "Introduction to biocomputing" course, a junior-level course at Savannah State University. This exercise is designed to help students learn important concepts and practical skills in molecular phylogenetics through solving a real-world problem. In this application, students are required to identify… 7. A More Accurate Fourier Transform CERN Document Server Courtney, Elya 2015-01-01 Fourier transform methods are used to analyze functions and data sets to provide frequencies, amplitudes, and phases of underlying oscillatory components. Fast Fourier transform (FFT) methods offer speed advantages over evaluation of explicit integrals (EI) that define Fourier transforms. This paper compares frequency, amplitude, and phase accuracy of the two methods for well resolved peaks over a wide array of data sets including cosine series with and without random noise and a variety of physical data sets, including atmospheric $\mathrm{CO_2}$ concentrations, tides, temperatures, sound waveforms, and atomic spectra. The FFT uses MIT's FFTW3 library.
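The FFT-versus-explicit-integral comparison in this abstract can be illustrated with a small sketch. This uses `numpy.fft` in place of FFTW3, and the off-bin 3.237 Hz test tone and the fine scan grid are illustrative assumptions, not the paper's data sets:

```python
import numpy as np

def ei_amplitude(t, x, f):
    """Explicit-integral (EI) Fourier amplitude at trial frequency f,
    approximating the integral of x(t)*exp(-2*pi*i*f*t) by rectangles."""
    dt = t[1] - t[0]
    coeff = np.sum(x * np.exp(-2j * np.pi * f * t)) * dt
    # One-sided amplitude for a real signal of total duration N*dt.
    return 2.0 * np.abs(coeff) / (t[-1] - t[0] + dt)

# Test signal: a 3.237 Hz cosine, deliberately off the FFT bin grid.
fs, n = 100.0, 1000               # 10 s of data -> FFT bins every 0.1 Hz
t = np.arange(n) / fs
x = np.cos(2 * np.pi * 3.237 * t)

# FFT estimate: frequency of the peak bin.
spec = np.abs(np.fft.rfft(x)) * 2.0 / n
freqs = np.fft.rfftfreq(n, 1.0 / fs)
k = np.argmax(spec)

# EI estimate: scan a fine frequency grid around the FFT peak bin.
grid = np.arange(freqs[k] - 0.5, freqs[k] + 0.5, 0.001)
amps = np.array([ei_amplitude(t, x, f) for f in grid])
f_ei = grid[np.argmax(amps)]
```

The FFT peak frequency is quantized to the 0.1 Hz bin spacing, while the EI scan can land much closer to the true 3.237 Hz, which is the kind of accuracy gap the abstract reports.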
The EI method uses the rectangle method to compute the areas under the curve via complex math. Results support the hypothesis that EI methods are more accurate than FFT methods. Errors range from 5 to 10 times higher when determining peak frequency by FFT, 1.4 to 60 times higher for peak amplitude, and 6 to 10 times higher for phase under a peak. The ability t... 8. [Comparison of microdilution and disk diffusion methods for the detection of fluconazole and voriconazole susceptibility against clinical Candida glabrata isolates and determination of changing susceptibility with new CLSI breakpoints]. Science.gov (United States) Hazırolan, Gülşen; Sarıbaş, Zeynep; Arıkan Akdağlı, Sevtap 2016-07-01 Candida albicans is the most frequently isolated species as the causative agent of Candida infections. However, in recent years, the isolation rate of non-albicans Candida species have increased. In many centers, Candida glabrata is one of the commonly isolated non-albicans species of C.glabrata infections which are difficult-to-treat due to decreased susceptibility to fluconazole and cross-resistance to other azoles. The aims of this study were to determine the in vitro susceptibility profiles of clinical C.glabrata isolates against fluconazole and voriconazole by microdilution and disk diffusion methods and to evaluate the results with both the previous (CLSI) and current species-specific CLSI (Clinical and Laboratory Standards Institute) clinical breakpoints. A total of 70 C.glabrata strains isolated from clinical samples were included in the study. The identification of the isolates was performed by morphologic examination on cornmeal Tween 80 agar and assimilation profiles obtained by using ID32C (BioMérieux, France). Broth microdilution and disk diffusion methods were performed according to CLSI M27-A3 and CLSI M44-A2 documents, respectively. The results were evaluated according to CLSI M27-A3 and M44-A2 documents and new vs. species-specific CLSI breakpoints. 
By using both previous and new CLSI breakpoints, broth microdilution test results showed that voriconazole has greater in vitro activity than fluconazole against C.glabrata isolates. For the two drugs tested, very major error was not observed with the disk diffusion method when the microdilution method was considered as the reference method. Since the "susceptible" category no longer exists for fluconazole vs. C.glabrata, the isolates that were interpreted as susceptible by previous breakpoints were evaluated as susceptible-dose dependent by current CLSI breakpoints. Since species-specific breakpoints remain undetermined for voriconazole, comparative analysis was not possible for this agent. The results obtained 9. Multicenter evaluation of the new Vitek 2 yeast susceptibility test using new CLSI clinical breakpoints for fluconazole. Science.gov (United States) Pfaller, M A; Diekema, D J; Procop, G W; Wiederhold, N P 2014-06-01 A fully automated antifungal susceptibility test system recently updated to reflect the new species-specific clinical breakpoints (CBPs) of fluconazole for Candida (Vitek 2 AF03 yeast susceptibility test; bioMérieux, Inc., Durham, NC) was compared in three different laboratories with the Clinical and Laboratory Standards Institute (CLSI) reference broth microdilution (BMD) method by testing 2 quality control strains, 10 reproducibility strains (4 Candida species and 6 Cryptococcus neoformans strains), and 746 isolates of Candida species (702 isolates, 13 species) and 44 isolates of C. neoformans against fluconazole. Excellent essential agreement (EA) (within 2 dilutions) between the reference and Vitek 2 MICs was observed for fluconazole and Candida species (94.0%). The EA was lower for fluconazole and C. neoformans at 86.4%. The mean times to a result with the Vitek 2 test were 9.1 h for Candida species and 12.1 h for C. neoformans. Categorical agreement (CA) between the two methods was assessed by using the new species-specific CBPs.
For less common species without fluconazole CBPs, the epidemiological cutoff values (ECVs) were used to differentiate wild-type (WT; MIC, ≤ ECV) from non-WT (MIC, >ECV) strains. The CAs between the two methods were 92.0% for Candida species (0.3% very major errors [VME] and 2.6% major errors [ME]) and 84.1% for C. neoformans (4.5% VME and 11.4% ME). The updated Vitek 2 AF03 IUO yeast susceptibility system is comparable to the CLSI BMD reference method for testing the susceptibility of clinically important yeasts to fluconazole when using the new (lower) CBPs and ECVs. 10. SUMAC: Constructing Phylogenetic Supermatrices and Assessing Partially Decisive Taxon Coverage OpenAIRE William A. Freyman 2015-01-01 The amount of phylogenetically informative sequence data in GenBank is growing at an exponential rate, and large phylogenetic trees are increasingly used in research. Tools are needed to construct phylogenetic sequence matrices from GenBank data and evaluate the effect of missing data. Supermatrix Constructor (SUMAC) is a tool to data-mine GenBank, construct phylogenetic supermatrices, and assess the phylogenetic decisiveness of a matrix given the pattern of missing sequence data. SUMAC calcu... 11. Phylogenetic inference under varying proportions of indel-induced alignment gaps Directory of Open Access Journals (Sweden) 2009-08-01 Full Text Available Abstract Background The effect of alignment gaps on phylogenetic accuracy has been the subject of numerous studies. In this study, we investigated the relationship between the total number of gapped sites and phylogenetic accuracy, when the gaps were introduced (by means of computer simulation to reflect indel (insertion/deletion events during the evolution of DNA sequences. The resulting (true alignments were subjected to commonly used gap treatment and phylogenetic inference methods. 
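One of the gap treatments compared in this kind of study, coding gapped sites as binary presence/absence characters, can be sketched in a few lines. The toy alignment, taxon names, and function name are illustrative assumptions:

```python
def code_gaps_binary(alignment):
    """For every column containing at least one gap ('-'), append one
    binary character per taxon: 1 = residue present, 0 = gap."""
    taxa = list(alignment)
    length = len(next(iter(alignment.values())))
    gap_cols = [j for j in range(length)
                if any(alignment[t][j] == '-' for t in taxa)]
    return {t: ''.join('0' if alignment[t][j] == '-' else '1'
                       for j in gap_cols)
            for t in taxa}

# Toy alignment: columns 4 and 5 contain indel-induced gaps.
aln = {
    'taxonA': 'ACGT-A',
    'taxonB': 'ACG--A',
    'taxonC': 'ACGTTA',
}
codes = code_gaps_binary(aln)
```

The resulting binary strings can be appended to the data matrix so that the indels contribute phylogenetic signal instead of being discarded as missing data.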
Results: (1) In general, there was a strong, almost deterministic, relationship between the amount of gap in the data and the level of phylogenetic accuracy when the alignments were very "gappy"; (2) gaps resulting from deletions (as opposed to insertions) contributed more to the inaccuracy of phylogenetic inference; (3) the probabilistic methods (Bayesian, PhyML, and "MLε", a method implemented in DNAML in PHYLIP) performed better at most levels of gap percentage when compared to parsimony (MP) and distance (NJ) methods, with Bayesian analysis being clearly the best; (4) methods that treat gapped sites as missing data yielded less accurate trees when compared to those that attribute phylogenetic signal to the gapped sites (by coding them as binary character data, presence/absence, or as in the MLε method); and (5) in general, the accuracy of phylogenetic inference depended upon the amount of available data when the gaps resulted from mainly deletion events, and the amount of missing data when insertion events were equally likely to have caused the alignment gaps. Conclusion: When gaps in an alignment are a consequence of indel events in the evolution of the sequences, the accuracy of phylogenetic analysis is likely to improve if: (1) alignment gaps are categorized as arising from insertion events or deletion events and then treated separately in the analysis, (2) the evolutionary signal provided by indels is harnessed in the phylogenetic analysis, and (3) methods that 12. Visualizing Phylogenetic Treespace Using Cartographic Projections Science.gov (United States) Sundberg, Kenneth; Clement, Mark; Snell, Quinn Phylogenetic analysis is becoming an increasingly important tool for biological research. Applications include epidemiological studies, drug development, and evolutionary analysis. Phylogenetic search is a known NP-hard problem.
The size of the data sets which can be analyzed is limited by the exponential growth in the number of trees that must be considered as the problem size increases. A better understanding of the problem space could lead to better methods, which in turn could lead to the feasible analysis of more data sets. We present a definition of phylogenetic tree space and a visualization of this space that shows significant exploitable structure. This structure can be used to develop search methods capable of handling much larger datasets. 13. Phylogenetic invariants for group-based models CERN Document Server Donten-Bury, Maria 2010-01-01 In this paper we investigate properties of algebraic varieties representing group-based phylogenetic models. We give the (first) example of a nonnormal general group-based model for an abelian group. Following Kaie Kubjas, we also determine some invariants of group-based models, showing that the associated varieties do not have to be deformation equivalent. We propose a method of generating many phylogenetic invariants, and in particular we show that our approach gives the whole ideal of the claw tree for the 3-Kimura model under the assumption of the conjecture of Sturmfels and Sullivant. This, combined with the results of Sturmfels and Sullivant, would enable us to determine all phylogenetic invariants for any tree for the 3-Kimura model, and possibly for other group-based models. 14. Morphological and molecular convergences in mammalian phylogenetics. Science.gov (United States) Zou, Zhengting; Zhang, Jianzhi 2016-09-02 Phylogenetic trees reconstructed from molecular sequences are often considered more reliable than those reconstructed from morphological characters, in part because convergent evolution, which confounds phylogenetic reconstruction, is believed to be rarer for molecular sequences than for morphologies. However, neither the validity of this belief nor its underlying cause is known.
Here, comparing thousands of characters of each type that have been used for inferring the phylogeny of mammals, we find that on average morphological characters indeed experience many more convergences than amino acid sites, but this disparity is explained by fewer states per character rather than an intrinsically higher susceptibility to convergence for morphologies than sequences. We show by computer simulation and actual data analysis that a simple method for identifying and removing convergence-prone characters improves phylogenetic accuracy, potentially enabling, when necessary, the inclusion of morphologies and hence fossils for reliable tree inference. 15. Phylogenetic structure in tropical hummingbird communities DEFF Research Database (Denmark) Graham, Catherine H; Parra, Juan L; Rahbek, Carsten 2009-01-01 composition of 189 hummingbird communities in Ecuador. We assessed how species and phylogenetic composition changed along environmental gradients and across biogeographic barriers. We show that humid, low-elevation communities are phylogenetically overdispersed (coexistence of distant relatives), a pattern … an expensive means of locomotion at high elevations. We found that communities in the lowlands on opposite sides of the Andes tend to be phylogenetically similar despite their large differences in species composition, a pattern implicating the Andes as an important dispersal barrier. In contrast, along … the steep environmental gradient between the lowlands and the Andes we found evidence that species turnover is comprised of relatively distantly related species. The integration of local and regional patterns of diversity across environmental gradients and biogeographic barriers provides insight … 16.
Consequences of recombination on traditional phylogenetic analysis DEFF Research Database (Denmark) Schierup, M H; Hein, J 2000-01-01 We investigate the shape of a phylogenetic tree reconstructed from sequences evolving under the coalescent with recombination. The motivation is that evolutionary inferences are often made from phylogenetic trees reconstructed from population data even though recombination may well occur (mtDNA … or viral sequences) or does occur (nuclear sequences). We investigate the size and direction of biases when a single tree is reconstructed ignoring recombination. Standard software (PHYLIP) was used to construct the best phylogenetic tree from sequences simulated under the coalescent with recombination. … With recombination present, the length of terminal branches and the total branch length are larger, and the time to the most recent common ancestor smaller, than for a tree reconstructed from sequences evolving with no recombination. The effects are pronounced even for small levels of recombination that may … 17. Probabilistic graphical model representation in phylogenetics. Science.gov (United States) Höhna, Sebastian; Heath, Tracy A; Boussau, Bastien; Landis, Michael J; Ronquist, Fredrik; Huelsenbeck, John P 2014-09-01 Recent years have seen a rapid expansion of the model space explored in statistical phylogenetics, emphasizing the need for new approaches to statistical model representation and software development. Clear communication and representation of the chosen model is crucial for: (i) reproducibility of an analysis, (ii) model development, and (iii) software design. Moreover, a unified, clear and understandable framework for model representation lowers the barrier for beginners and nonspecialists to grasp complex phylogenetic models, including their assumptions and parameter/variable dependencies. Graphical modeling is a unifying framework that has gained in popularity in the statistical literature in recent years.
The core idea is to break complex models into conditionally independent distributions. The strength lies in the comprehensibility, flexibility, and adaptability of this formalism, and the large body of computational work based on it. Graphical models are well-suited to teach statistical models, to facilitate communication among phylogeneticists and in the development of generic software for simulation and statistical inference. Here, we provide an introduction to graphical models for phylogeneticists and extend the standard graphical model representation to the realm of phylogenetics. We introduce a new graphical model component, tree plates, to capture the changing structure of the subgraph corresponding to a phylogenetic tree. We describe a range of phylogenetic models using the graphical model framework and introduce modules to simplify the representation of standard components in large and complex models. Phylogenetic model graphs can be readily used in simulation, maximum likelihood inference, and Bayesian inference using, for example, Metropolis-Hastings or Gibbs sampling of the posterior distribution. 18. Multilocus phylogenetic analysis of the genus Aeromonas. Science.gov (United States) Martinez-Murcia, Antonio J; Monera, Arturo; Saavedra, M Jose; Oncina, Remedios; Lopez-Alvarez, Monserrate; Lara, Erica; Figueras, M Jose 2011-05-01 A broad multilocus phylogenetic analysis (MLPA) of the representative diversity of a genus offers the opportunity to incorporate concatenated inter-species phylogenies into bacterial systematics. Recent analyses based on single housekeeping genes have provided coherent phylogenies of Aeromonas. However, to date, a multi-gene phylogenetic analysis has never been tackled. In the present study, the intra- and inter-species phylogenetic relationships of 115 strains representing all Aeromonas species described to date were investigated by MLPA. 
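The concatenation step behind an MLPA supermatrix like the Aeromonas study's 4705 bp alignment can be sketched as follows. The gene names come from the abstract, but the toy sequences and species are illustrative placeholders, and taxa missing a locus are padded with gaps, a common convention:

```python
def concatenate_loci(loci):
    """Build a concatenated (supermatrix) alignment from per-gene
    alignments; taxa missing a locus are padded with gap characters."""
    taxa = sorted({t for gene in loci.values() for t in gene})
    out = {t: [] for t in taxa}
    for gene, aln in sorted(loci.items()):
        width = len(next(iter(aln.values())))   # aligned gene length
        for t in taxa:
            out[t].append(aln.get(t, '-' * width))
    return {t: ''.join(parts) for t, parts in out.items()}

# Toy per-gene alignments (species and sequences are made up).
loci = {
    'gyrB': {'A.hydrophila': 'ACGT', 'A.caviae': 'ACGA'},
    'rpoD': {'A.hydrophila': 'TTAA', 'A.caviae': 'TTAC', 'A.veronii': 'TTAG'},
}
matrix = concatenate_loci(loci)
```

Every taxon row in the result has the same total length, so the matrix can be handed directly to a tree-building program.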
The study included the independent analysis of seven single gene fragments (gyrB, rpoD, recA, dnaJ, gyrA, dnaX, and atpD), and the tree resulting from the concatenated 4705 bp sequence. The phylogenies obtained were consistent with each other, and clustering agreed with the Aeromonas taxonomy recognized to date. The highest clustering robustness was found for the concatenated tree (i.e. all Aeromonas species split into 100% bootstrap clusters). Both possible chronometric distortions and poor resolution encountered when using single-gene analysis were buffered in the concatenated MLPA tree. However, reliable phylogenetic species delineation required an MLPA including several "bona fide" strains representing all described species. 19. The phylogenetics of succession can guide restoration DEFF Research Database (Denmark) Shooner, Stephanie; Chisholm, Chelsea Lee; Davies, T. Jonathan 2015-01-01 Phylogenetic tools have increasingly been used in community ecology to describe the evolutionary relationships among co-occurring species. In studies of succession, such tools may allow us to identify the evolutionary lineages most suited for particular stages of succession and habitat rehabilita... 20. Quantifying MCMC exploration of phylogenetic tree space. Science.gov (United States) Whidden, Chris; Matsen, Frederick A 2015-05-01 In order to gain an understanding of the effectiveness of phylogenetic Markov chain Monte Carlo (MCMC), it is important to understand how quickly the empirical distribution of the MCMC converges to the posterior distribution. In this article, we investigate this problem on phylogenetic tree topologies with a metric that is especially well suited to the task: the subtree prune-and-regraft (SPR) metric. This metric directly corresponds to the minimum number of MCMC rearrangements required to move between trees in common phylogenetic MCMC implementations. 
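A single subtree prune-and-regraft (SPR) rearrangement, the move underlying the metric discussed in that abstract, can be sketched on nested-tuple trees. This is a toy rooted-tree illustration restricted to leaf subtrees; real implementations operate on unrooted trees and handle many more cases:

```python
def prune_leaf(tree, leaf):
    """Remove `leaf` from a nested-tuple tree, collapsing the
    degree-2 node left behind."""
    if not isinstance(tree, tuple):
        return None if tree == leaf else tree
    kids = [k for k in (prune_leaf(k, leaf) for k in tree) if k is not None]
    return kids[0] if len(kids) == 1 else tuple(kids)

def regraft(tree, leaf, sibling):
    """Reattach `leaf` as the new sister of `sibling`."""
    if tree == sibling:
        return (sibling, leaf)
    if not isinstance(tree, tuple):
        return tree
    return tuple(regraft(k, leaf, sibling) for k in tree)

def spr_move(tree, leaf, sibling):
    """One SPR step: prune a leaf subtree, regraft it elsewhere."""
    return regraft(prune_leaf(tree, leaf), leaf, sibling)

t0 = (('A', 'B'), ('C', 'D'))
t1 = spr_move(t0, 'A', 'C')   # prune A, regraft as sister of C
```

The SPR distance between two topologies is then the minimum number of such moves connecting them, which is why it tracks MCMC mixing so directly.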
We develop a novel graph-based approach to analyze tree posteriors and find that the SPR metric is much more informative than simpler metrics that are unrelated to MCMC moves. In doing so, we show conclusively that topological peaks do occur in Bayesian phylogenetic posteriors from real data sets as sampled with standard MCMC approaches, investigate the efficiency of Metropolis-coupled MCMC (MCMCMC) in traversing the valleys between peaks, and show that conditional clade distribution (CCD) can have systematic problems when there are multiple peaks. 1. Constructing Student Problems in Phylogenetic Tree Construction. Science.gov (United States) Brewer, Steven D. Evolution is often equated with natural selection and is taught from a primarily functional perspective while comparative and historical approaches, which are critical for developing an appreciation of the power of evolutionary theory, are often neglected. This report describes a study of expert problem-solving in phylogenetic tree construction.… 2. Modulated Binding of SATB1, a Matrix Attachment Region Protein, to the AT-Rich Sequence Flanking the Major Breakpoint Region of BCL2 Science.gov (United States) Ramakrishnan, Meera; Liu, Wen-Man; DiCroce, Patricia A.; Posner, Aleza; Zheng, Jian; Kohwi-Shigematsu, Terumi; Krontiris, Theodore G. 2000-01-01 The t(14;18) chromosomal translocation that occurs in human follicular lymphoma constitutively activates the BCL2 gene and disrupts control of apoptosis. Interestingly, 70% of the t(14;18) translocations are confined to three 15-bp clusters positioned within a 150-bp region (major breakpoint region, or MBR) in the untranslated portion of terminal exon 3. We analyzed DNA-protein interactions in the MBR, as these may play some role in targeting the translocation to this region. An 87-bp segment (87MBR) immediately 3′ to breakpoint cluster 3 was essential for DNA-protein interaction monitored with mobility shift assays.
We further delineated a core binding region within 87MBR: a 33-bp, very AT-rich sequence highly conserved between the human and mouse BCL2 gene (37MBR). We have purified and identified one of the core factors as the matrix attachment region (MAR) binding protein, SATB1, which is known to bind to AT-rich sequences with a high propensity to unwind. Additional factors in nuclear extracts, which we have not yet characterized further, increased SATB1 affinity for the 37MBR target four- to fivefold. Specific binding activity within 37MBR displayed cell cycle regulation in Jurkat T cells, while levels of SATB1 remained constant throughout the cell cycle. Finally, we demonstrated in vivo binding of SATB1 to the MBR, strongly suggesting the BCL2 major breakpoint region is a MAR. We discuss the potential consequences of our observations for both MBR fragility and regulatory function. PMID:10629043 3. Evidence of break-points in breathing pattern at the gas-exchange thresholds during incremental cycling in young, healthy subjects. Science.gov (United States) Cross, Troy J; Morris, Norman R; Schneider, Donald A; Sabapathy, Surendran 2012-03-01 The present study investigated whether 'break-points' in breathing pattern correspond to the first ([Formula: see text]) and second gas-exchange thresholds ([Formula: see text]) during incremental cycling. We used polynomial spline smoothing to detect accelerations and decelerations in pulmonary gas-exchange data, which provided an objective means of 'break-point' detection without assumption of the number and shape of said 'break-points'. Twenty-eight recreational cyclists completed the study, with five individuals excluded from analyses due to low signal-to-noise ratios and/or high risk of 'pseudo-threshold' detection. 
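A crude stand-in for this kind of break-point detection is a two-segment least-squares scan: try every candidate split, fit a line to each side, and keep the split with the smallest total error. The synthetic workload/respiratory-frequency data below are assumptions for illustration; the study itself used polynomial spline smoothing:

```python
import numpy as np

def best_breakpoint(x, y):
    """Scan candidate break-points; at each split fit separate
    least-squares lines left and right, and return the split
    with the smallest total squared error."""
    best, best_err = None, np.inf
    for i in range(2, len(x) - 2):
        err = 0.0
        for xs, ys in ((x[:i], y[:i]), (x[i:], y[i:])):
            coef = np.polyfit(xs, ys, 1)
            err += np.sum((np.polyval(coef, xs) - ys) ** 2)
        if err < best_err:
            best, best_err = x[i], err
    return best

# Synthetic data: breathing frequency vs. workload, slope change at 150 W.
x = np.arange(50, 300, 10, dtype=float)
y = np.where(x < 150, 15 + 0.02 * x, 18 + 0.15 * (x - 150))
bp = best_breakpoint(x, y)
```

With noiseless piecewise-linear data the scan recovers the kink (the point at 150 W lies on both lines, so either adjacent split is a perfect fit); with real, noisy data the spline-based approach in the abstract avoids assuming the number and shape of break-points.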
In the remaining participants (n = 23), two separate and distinct accelerations in respiratory frequency (f_R) during incremental work were observed, both of which demonstrated trivial biases and reasonably small ±95% limits of agreement (LOA) for the [Formula: see text] (0.2 ± 3.0 ml O2 kg^-1 min^-1) and [Formula: see text] (0.0 ± 2.4 ml O2 kg^-1 min^-1), respectively. A plateau in tidal volume (V_T) data near the [Formula: see text] was identified in only 14 individuals, and yielded the most unsatisfactory mean bias ± LOA of all comparisons made (-0.4 ± 5.3 ml O2 kg^-1 min^-1). Conversely, 18 individuals displayed a V_T plateau in close proximity to the [Formula: see text], evidenced by a mean bias ± LOA of 0.1 ± 3.1 ml O2 kg^-1 min^-1. Our findings suggest that both accelerations in f_R correspond to the gas-exchange thresholds, and that a plateau (or decline) in V_T at the [Formula: see text] is a common (but not universal) feature of the breathing pattern response to incremental cycling. 4. 38 CFR 4.46 - Accurate measurement. Science.gov (United States) 2010-07-01 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect... 5.
Identification of a Novel P190-Derived Breakpoint Peptide Suitable for Peptide Vaccine Therapeutic Approach in Ph+ Acute Lymphoblastic Leukemia Patients OpenAIRE Ippoliti, Micaela; Defina, Marzia; Gozzini, Antonella; Baratè, Claudia; Aprile, Lara; Pietrini, Alice; Gozzetti, Alessandro; Raspadori, Donatella; Lauria, Francesco; Bocchia, Monica 2012-01-01 Ph+ acute lymphoblastic leukemia (Ph+ ALL) is a high-risk acute leukemia with poor prognosis, in which the specific t(9;22)(q34;q11) translocation results in a chimeric bcr-abl (e1a2 breakpoint) and in a 190 KD protein (p190) with constitutive tyrosine kinase activity. The advent of first- and second-generation tyrosine kinase inhibitors (TKIs) improved the short-term outcome of Ph+ ALL patients not eligible for allo-SCT; yet disease recurrence is almost inevitable. Peptides derived from p190... 6. Phyloclimatic modeling: combining phylogenetics and bioclimatic modeling. Science.gov (United States) Yesson, C; Culham, A 2006-10-01 We investigate the impact of past climates on plant diversification by tracking the "footprint" of climate change on a phylogenetic tree. Diversity within the cosmopolitan carnivorous plant genus Drosera (Droseraceae) is focused within Mediterranean climate regions. We explore whether this diversity is temporally linked to Mediterranean-type climatic shifts of the mid-Miocene and whether climate preferences are conservative over phylogenetic timescales. Phyloclimatic modeling combines environmental niche (bioclimatic) modeling with phylogenetics in order to study evolutionary patterns in relation to climate change. We present the largest and most complete such example to date using Drosera. The bioclimatic models of extant species demonstrate clear phylogenetic patterns; this is particularly evident for the tuberous sundews from southwestern Australia (subgenus Ergaleium). 
We employ a method for establishing confidence intervals of node ages on a phylogeny using replicates from a Bayesian phylogenetic analysis. This chronogram shows that many clades, including subgenus Ergaleium and section Bryastrum, diversified during the establishment of the Mediterranean-type climate. Ancestral reconstructions of bioclimatic models demonstrate a pattern of preference for this climate type within these groups. Ancestral bioclimatic models are projected into palaeo-climate reconstructions for the time periods indicated by the chronogram. We present two such examples that each generate plausible estimates of ancestral lineage distribution, which are similar to their current distributions. This is the first study to attempt bioclimatic projections on evolutionary time scales. The sundews appear to have diversified in response to local climate development. Some groups are specialized for Mediterranean climates, others show wide-ranging generalism. This demonstrates that Phyloclimatic modeling could be repeated for other plant groups and is fundamental to the understanding of 7. Stratification of co-evolving genomic groups using ranked phylogenetic profiles Directory of Open Access Journals (Sweden) Tsoka Sophia 2009-10-01 Full Text Available Abstract Background Previous methods of detecting the taxonomic origins of arbitrary sequence collections, with a significant impact to genome analysis and in particular metagenomics, have primarily focused on compositional features of genomes. The evolutionary patterns of phylogenetic distribution of genes or proteins, represented by phylogenetic profiles, provide an alternative approach for the detection of taxonomic origins, but typically suffer from low accuracy. Herein, we present rank-BLAST, a novel approach for the assignment of protein sequences into genomic groups of the same taxonomic origin, based on the ranking order of phylogenetic profiles of target genes or proteins across the reference database. 
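The ranking idea behind rank-BLAST can be sketched as comparing profiles by rank order rather than raw score. The toy BLAST-like score vectors are assumptions; the actual method combines sequence searches, statistical estimation, and clustering:

```python
def ranks(scores):
    """Rank positions of a profile: highest score gets rank 0."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    r = [0] * len(scores)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def rank_distance(p, q):
    """Disagreement between two profiles' rank orderings
    (sum of absolute rank differences, a Spearman footrule)."""
    return sum(abs(a - b) for a, b in zip(ranks(p), ranks(q)))

# Toy score profiles of three genes against four reference genomes.
gene1 = [90.0, 75.0, 20.0, 5.0]   # same taxonomic origin as gene2:
gene2 = [85.0, 70.0, 25.0, 10.0]  # identical ranking, different scores
gene3 = [10.0, 20.0, 80.0, 95.0]  # a different taxonomic origin
d12 = rank_distance(gene1, gene2)
d13 = rank_distance(gene1, gene3)
```

Because only the ordering matters, genes from the same genome cluster together even when their absolute similarity scores differ, which is the property the approach exploits.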
Results The rank-BLAST approach is validated by computing the phylogenetic profiles of all sequences for five distinct microbial species of varying degrees of phylogenetic proximity, against a reference database of 243 fully sequenced genomes. The approach - a combination of sequence searches, statistical estimation and clustering - analyses the degree of sequence divergence between sets of protein sequences and allows the classification of protein sequences according to the species of origin with high accuracy, allowing taxonomic classification of 64% of the proteins studied. In most cases, a main cluster is detected, representing the corresponding species. Secondary, functionally distinct and species-specific clusters exhibit different patterns of phylogenetic distribution, thus flagging gene groups of interest. Detailed analyses of such cases are provided as examples. Conclusion Our results indicate that the rank-BLAST approach can capture the taxonomic origins of sequence collections in an accurate and efficient manner. The approach can be useful both for the analysis of genome evolution and the detection of species groups in metagenomics samples. 8. Metrics for phylogenetic networks II: nodal and triplets metrics. Science.gov (United States) Cardona, Gabriel; Llabrés, Mercè; Rosselló, Francesc; Valiente, Gabriel 2009-01-01 The assessment of phylogenetic network reconstruction methods requires the ability to compare phylogenetic networks. This is the second in a series of papers devoted to the analysis and comparison of metrics for tree-child time consistent phylogenetic networks on the same set of taxa. In this paper, we generalize to phylogenetic networks two metrics that have already been introduced in the literature for phylogenetic trees: the nodal distance and the triplets distance. We prove that they are metrics on any class of tree-child time consistent phylogenetic networks on the same set of taxa, as well as some basic properties for them. 
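The nodal distance that this paper generalizes can be illustrated on plain trees: sum, over all leaf pairs, the absolute difference in path lengths between the two topologies. This toy version works on small rooted tuple-trees with identical leaf sets, not the tree-child time consistent networks of the paper:

```python
from itertools import combinations

def leaf_depths(tree, depth=0, out=None):
    """Map each leaf to its edge-count depth from the root."""
    if out is None:
        out = {}
    if isinstance(tree, tuple):
        for child in tree:
            leaf_depths(child, depth + 1, out)
    else:
        out[tree] = depth
    return out

def pair_distances(tree):
    """Topological distance for every leaf pair:
    depth(a) + depth(b) - 2*depth(LCA), descending from the root."""
    depths, dist = leaf_depths(tree), {}
    for a, b in combinations(sorted(depths), 2):
        node, d_lca = tree, 0
        while isinstance(node, tuple):
            nxt = [c for c in node
                   if a in leaf_depths(c) and b in leaf_depths(c)]
            if not nxt:
                break           # current node is the LCA
            node, d_lca = nxt[0], d_lca + 1
        dist[(a, b)] = depths[a] + depths[b] - 2 * d_lca
    return dist

def nodal_distance(t1, t2):
    d1, d2 = pair_distances(t1), pair_distances(t2)
    return sum(abs(d1[p] - d2[p]) for p in d1)

t1 = (('A', 'B'), ('C', 'D'))
t2 = (('A', 'C'), ('B', 'D'))
nd = nodal_distance(t1, t2)
```

Identical topologies score zero; swapping which pairs are sisters changes four of the six leaf-pair path lengths by two each.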
To prove these results, we introduce a reduction/expansion procedure that can be used not only to establish properties of tree-child time consistent phylogenetic networks by induction, but also to generate all tree-child time consistent phylogenetic networks with a given number of leaves. 9. Translocation breakpoint at 7q31 associated with tics: further evidence for IMMP2L as a candidate gene for Tourette syndrome. Science.gov (United States) Patel, Chirag; Cooper-Charles, Lisa; McMullan, Dominic J; Walker, Judith M; Davison, Val; Morton, Jenny 2011-06-01 Gilles de la Tourette syndrome is a complex neuropsychiatric disorder with a strong genetic basis. We identified a male patient with Tourette syndrome-like tics and an apparently balanced de novo translocation [46,XY,t(2;7)(p24.2;q31)]. Further analysis using array comparative genomic hybridisation (CGH) revealed a cryptic deletion at 7q31.1-7q31.2. Breakpoints disrupting this region have been reported in one isolated and one familial case of Tourette syndrome. In our case, IMMP2L, a gene coding for a human homologue of the yeast inner mitochondrial membrane peptidase subunit 2, was disrupted by the breakpoint on 7q31.1, with deletion of exons 1-3 of the gene. The IMMP2L gene has previously been proposed as a candidate gene for Tourette syndrome, and our case provides further evidence of its possible role in the pathogenesis. The deleted region (7q31.1-7q31.2) of 7.2 Mb of genomic DNA also encompasses numerous genes, including FOXP2, associated with verbal dyspraxia, and the CFTR gene. 10. On the validity of setting breakpoint minimum inhibition concentrations at one quarter of the plasma concentration achieved following oral administration of oxytetracycline DEFF Research Database (Denmark) Coyne, R.; Samuelsen, O.; Bergh, Ø. 
2004-01-01 Plasma concentrations of oxytetracycline (OTC) were established in two Atlantic salmon (Salmo salar) pre-smolt populations after they had received OTC medicated feed at a rate of 75 mg OTC/kg over 10 days. One population was experiencing an epizootic of furunculosis in a commercial freshwater farm...... and the other was held in a laboratory. Both populations were maintained at approximately 13 °C. The mean plasma concentration in 26 healthy farm fish was 0.25±0.06 mg/l and the 80th percentile was 0.21 mg/l. The mean concentration for 26 laboratory fish was 0.21±0.06 mg/l with an 80th percentile of 0.15 mg/l. The validity of setting a breakpoint minimum inhibitory concentration (MIC) at a quarter of these plasma concentrations was investigated. The MIC of the Aeromonas salmonicida isolated from the farmed fish (n=7) was 0.5 mg/l and the breakpoints generated by application of the 4:1 ratio were in the range 0... 11. Identification of a Novel P190-Derived Breakpoint Peptide Suitable for Peptide Vaccine Therapeutic Approach in Ph+ Acute Lymphoblastic Leukemia Patients Directory of Open Access Journals (Sweden) Micaela Ippoliti 2012-01-01 Full Text Available Ph+ acute lymphoblastic leukemia (Ph+ ALL) is a high-risk acute leukemia with poor prognosis, in which the specific t(9;22)(q34;q11) translocation results in a chimeric bcr-abl (e1a2) breakpoint and in a 190 kDa protein (p190) with constitutive tyrosine kinase activity. The advent of first- and second-generation tyrosine kinase inhibitors (TKIs) improved the short-term outcome of Ph+ ALL patients not eligible for allo-SCT; yet disease recurrence is almost inevitable. Peptides derived from the p190-breakpoint area are leukemia-specific antigens that may mediate an antitumor response toward p190+ leukemia cells. We identified one peptide named p190-13 able to induce in vitro peptide-specific CD4+ T cell proliferation in Ph+ ALL patients in complete remission during TKIs.
Thus this peptide appears to be a good candidate for developing an immune target vaccine strategy possibly synergizing with TKIs for remission maintenance. 12. Clustering with phylogenetic tools in astrophysics CERN Document Server Fraix-Burnet, Didier 2016-01-01 Phylogenetic approaches are finding more and more applications outside the field of biology. Astrophysics is no exception since an overwhelming amount of multivariate data has appeared in the last twenty years or so. In particular, the diversification of galaxies throughout the evolution of the Universe quite naturally invokes phylogenetic approaches. We have demonstrated that Maximum Parsimony brings useful astrophysical results, and we now proceed toward the analyses of large datasets for galaxies. In this talk I present how we solve the major difficulties for this goal: the choice of the parameters, their discretization, and the analysis of a high number of objects with an unsupervised NP-hard classification technique like cladistics. 1. Introduction How do galaxies form, and when? How did galaxies evolve and transform themselves to create the diversity we observe? What are the progenitors to present-day galaxies? To answer these big questions, observations throughout the Universe and the physical mode... 13. Marine turtle mitogenome phylogenetics and evolution DEFF Research Database (Denmark) Duchene, S.; Frey, A.; Alfaro-Núñez, A.; 2012-01-01 The sea turtles are a group of Cretaceous origin containing seven recognized living species: leatherback, hawksbill, Kemp's ridley, olive ridley, loggerhead, green, and flatback. The leatherback is the single member of the Dermochelyidae family, whereas all other sea turtles belong in Cheloniidae...... distributions, shedding light on complex migration patterns and possible geographic or climatic events as driving forces of sea-turtle distribution.
We have sequenced complete mitogenomes for all sea-turtle species, including samples from their geographic range extremes, and performed phylogenetic analyses...... to assess sea-turtle evolution with a large molecular dataset. We found variation in the length of the ATP8 gene and a highly variable site in ND4 near a proton translocation channel in the resulting protein. Complete mitogenomes show strong support and resolution for phylogenetic relationships among all... 14. Morphological Phylogenetics in the Genomic Age. Science.gov (United States) Lee, Michael S Y; Palci, Alessandro 2015-10-05 Evolutionary trees underpin virtually all of biology, and the wealth of new genomic data has enabled us to reconstruct them with increasing detail and confidence. While phenotypic (typically morphological) traits are becoming less important in reconstructing evolutionary trees, they still serve vital and unique roles in phylogenetics, even for living taxa for which vast amounts of genetic information are available. Morphology remains a powerful independent source of evidence for testing molecular clades, and - through fossil phenotypes - the primary means for time-scaling phylogenies. Morphological phylogenetics is therefore vital for transforming undated molecular topologies into dated evolutionary trees. However, if morphology is to be employed to its full potential, biologists need to start scrutinising phenotypes in a more objective fashion, models of phenotypic evolution need to be improved, and approaches for analysing phenotypic traits and fossils together with genomic data need to be refined. 15. Alignment-free phylogenetics and population genetics. Science.gov (United States) Haubold, Bernhard 2014-05-01 Phylogenetics and population genetics are central disciplines in evolutionary biology. Both are based on comparative data, today usually DNA sequences. 
These have become so plentiful that alignment-free sequence comparison is of growing importance in the race between scientists and sequencing machines. In phylogenetics, efficient distance computation is the major contribution of alignment-free methods. A distance measure should reflect the number of substitutions per site, which underlies classical alignment-based phylogeny reconstruction. Alignment-free distance measures are either based on word counts or on match lengths, and I apply examples of both approaches to simulated and real data to assess their accuracy and efficiency. While phylogeny reconstruction is based on the number of substitutions, in population genetics, the distribution of mutations along a sequence is also considered. This distribution can be explored by match lengths, thus opening the prospect of alignment-free population genomics. 16. Concepts of Classification and Taxonomy. Phylogenetic Classification CERN Document Server Fraix-Burnet, Didier 2016-01-01 Phylogenetic approaches to classification have been heavily developed in biology by bioinformaticians. But these techniques have applications in other fields, in particular in linguistics. Their main characteristic is to search for relationships between the objects or species under study, instead of grouping them by similarity. They are thus rather well suited for any kind of evolutionary objects. For nearly fifteen years, astrocladistics has explored the use of Maximum Parsimony (or cladistics) for astronomical objects like galaxies or globular clusters. In this lesson we will learn how it works. 1 Why phylogenetic tools in astrophysics? 1.1 History of classification The need for classifying living organisms is very ancient, and the first classification system can be dated back to the Greeks. The goal was very practical since it was intended to distinguish between edible and toxic foods, or kind and dangerous animals. Simple resemblance was used and has been used for centuries.
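A toy sketch of the word-count distances described in entry 15 follows; the sequences, word length, and the -ln(F)/k mapping to a substitutions-per-site scale are illustrative assumptions, not the estimators the author actually evaluates:

```python
# Crude alignment-free, word-count distance between two DNA sequences.
# Illustrative only: published estimators handle word length, repeats
# and finite-size corrections far more carefully.
from math import log

def kmers(seq, k):
    """Set of overlapping words of length k."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def word_distance(s1, s2, k=4):
    """Fraction F of shared k-mers, mapped to an approximate
    substitutions-per-site scale via -ln(F)/k."""
    a, b = kmers(s1, k), kmers(s2, k)
    f = len(a & b) / min(len(a), len(b))
    return float("inf") if f == 0.0 else -log(f) / k

identical = word_distance("ACGTACGTACGT", "ACGTACGTACGT")
diverged = word_distance("ACGTACGTACGT", "ACGTTCGAACGG")
print(identical, diverged)
```

Identical sequences share all their k-mers, so F = 1 and the distance is 0; substitutions destroy the k words overlapping each changed site, so the shared fraction, and hence the distance, tracks divergence.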
Basically, until the XVIIIth... 17. Phylogenetic paleobiogeography of Late Ordovician Laurentian brachiopods Directory of Open Access Journals (Sweden) Jennifer E. Bauer 2014-12-01 Full Text Available Phylogenetic biogeographic analysis of four brachiopod genera was used to uncover large-scale geologic drivers of Late Ordovician biogeographic differentiation in Laurentia. Previously generated phylogenetic hypotheses were converted into area cladograms, ancestral geographic ranges were optimized and speciation events characterized as via dispersal or vicariance, when possible. Area relationships were reconstructed using Lieberman-modified Brooks Parsimony Analysis. The resulting area cladograms indicate tectonic and oceanographic changes were the primary geologic drivers of biogeographic patterns within the focal taxa. The Taconic tectophase contributed to the separation of the Appalachian and Central basins as well as the two midcontinent basins, whereas sea level rise following the Boda Event promoted interbasinal dispersal. Three migration pathways into the Cincinnati Basin were recognized, which supports the multiple pathway hypothesis for the Richmondian Invasion. 18. MINER: software for phylogenetic motif identification OpenAIRE La, David; Livesay, Dennis R. 2005-01-01 MINER is web-based software for phylogenetic motif (PM) identification. PMs are sequence regions (fragments) that conserve the overall familial phylogeny. PMs have been shown to correspond to a wide variety of catalytic regions, substrate-binding sites and protein interfaces, making them ideal functional site predictions. The MINER output provides an intuitive interface for interactive PM sequence analysis and structural visualization. The web implementation of MINER is freely available at . ... 19. 
Phylogenetics and Computational Biology of Multigene Families Science.gov (United States) Liò, Pietro; Brilli, Matteo; Fani, Renato This chapter introduces the study of the major evolutionary forces operating in large gene families. The reconstruction of duplication history and phylogenetic analysis provide an interpretative framework of the evolution of multigene families. We present here two case studies, the first coming from Eukaryotes (chemokine receptors) and the second from Prokaryotes (TIM barrel proteins), showing how functional and structural constraints have shaped gene duplication events. 20. Phylogenetic estimation with partial likelihood tensors CERN Document Server Sumner, J G 2008-01-01 We present an alternative method for calculating likelihoods in molecular phylogenetics. Our method is based on partial likelihood tensors, which are generalizations of partial likelihood vectors, as used in Felsenstein's approach. Exploiting a lexicographic sorting and partial likelihood tensors, it is possible to obtain significant computational savings. We show this on a range of simulated data by enumerating all numerical calculations that are required by our method and the standard approach. 1. Uncertain-tree: discriminating among competing approaches to the phylogenetic analysis of phenotype data Science.gov (United States) Tanner, Alastair R.; Fleming, James F.; Tarver, James E.; Pisani, Davide 2017-01-01 Morphological data provide the only means of classifying the majority of life's history, but the choice between competing phylogenetic methods for the analysis of morphology is unclear. Traditionally, parsimony methods have been favoured but recent studies have shown that these approaches are less accurate than the Bayesian implementation of the Mk model. 
Here we expand on these findings in several ways: we assess the impact of tree shape and maximum-likelihood estimation using the Mk model, as well as analysing data composed of both binary and multistate characters. We find that all methods struggle to correctly resolve deep clades within asymmetric trees, and when analysing small character matrices. The Bayesian Mk model is the most accurate method for estimating topology, but with lower resolution than other methods. Equal weights parsimony is more accurate than implied weights parsimony, and maximum-likelihood estimation using the Mk model is the least accurate method. We conclude that the Bayesian implementation of the Mk model should be the default method for phylogenetic estimation from phenotype datasets, and we explore the implications of our simulations in reanalysing several empirical morphological character matrices. A consequence of our finding is that high levels of resolution or the ability to classify species or groups with much confidence should not be expected when using small datasets. It is now necessary to depart from the traditional parsimony paradigms of constructing character matrices, towards datasets constructed explicitly for Bayesian methods. PMID:28077778 2. Phylogenetic analysis of cubilin (CUBN) gene. Science.gov (United States) Shaik, Abjal Pasha; Alsaeed, Abbas H; Kiranmayee, S; Bammidi, Vk; Sultana, Asma 2013-01-01 Cubilin, (CUBN; also known as intrinsic factor-cobalamin receptor [Homo sapiens Entrez Pubmed ref NM_001081.3; NG_008967.1; GI: 119606627]), located in the epithelium of intestine and kidney acts as a receptor for intrinsic factor - vitamin B12 complexes. Mutations in CUBN may play a role in autosomal recessive megaloblastic anemia. The current study investigated the possible role of CUBN in evolution using phylogenetic testing. 
A total of 588 BLAST hits were found for the cubilin query sequence and these hits showed a putative conserved domain, the CUB superfamily (as of 27th Nov 2012). A first-pass phylogenetic tree was constructed to identify the taxa which most often contained the CUBN sequences. Following this, we narrowed down the search by manually deleting sequences which were not CUBN. A repeat phylogenetic analysis of 25 taxa was performed using the PhyML, RAxML and TreeDyn software to confirm that CUBN is a conserved protein, emphasizing its importance as an extracellular domain present in proteins mostly known to be involved in development in many chordate taxa but not found in prokaryotes, plants and yeast. No horizontal gene transfers have been found between different taxa. 3. Incongruencies in Vaccinia Virus Phylogenetic Trees Directory of Open Access Journals (Sweden) 2014-10-01 Full Text Available Over the years, as more complete poxvirus genomes have been sequenced, phylogenetic studies of these viruses have become more prevalent. In general, the results show similar relationships between the poxvirus species; however, some inconsistencies are notable. Previous analyses of the viral genomes contained within the vaccinia virus (VACV-Dryvax) vaccine revealed that their phylogenetic relationships were sometimes clouded by low bootstrapping confidence. To analyze the VACV-Dryvax genomes in detail, a new tool-set was developed and integrated into the Base-By-Base bioinformatics software package. Analyses showed that fewer unique positions were present in each VACV-Dryvax genome than expected. A series of patterns, each containing several single nucleotide polymorphisms (SNPs), were identified that ran counter to the results of the phylogenetic analysis. The VACV genomes were found to contain short DNA sequence blocks that matched more distantly related clades.
Additionally, similar non-conforming SNP patterns were observed in (1) the variola virus clade; (2) some cowpox clades; and (3) VACV-CVA, the direct ancestor of VACV-MVA. Thus, traces of past recombination events are common in the various orthopoxvirus clades, including those associated with smallpox and cowpox viruses. 4. Phylogenetic conservatism of environmental niches in mammals. Science.gov (United States) Cooper, Natalie; Freckleton, Rob P; Jetz, Walter 2011-08-01 Phylogenetic niche conservatism is the pattern where close relatives occupy similar niches, whereas distant relatives are more dissimilar. We suggest that niche conservatism will vary across clades in relation to their characteristics. Specifically, we investigate how conservatism of environmental niches varies among mammals according to their latitude, range size, body size and specialization. We use the Brownian rate parameter, σ², to measure the rate of evolution in key variables related to the ecological niche and define the more conserved group as the one with the slower rate of evolution. We find that tropical, small-ranged and specialized mammals have more conserved thermal niches than temperate, large-ranged or generalized mammals. Partitioning niche conservatism into its spatial and phylogenetic components, we find that spatial effects on niche variables are generally greater than phylogenetic effects. This suggests that recent evolution and dispersal have more influence on species' niches than more distant evolutionary events. These results have implications for our understanding of the role of niche conservatism in species richness patterns and for gauging the potential for species to adapt to global change. 5.
Rapid and accurate pyrosequencing of angiosperm plastid genomes Directory of Open Access Journals (Sweden) Farmerie William G 2006-08-01 Full Text Available Abstract Background Plastid genome sequence information is vital to several disciplines in plant biology, including phylogenetics and molecular biology. The past five years have witnessed a dramatic increase in the number of completely sequenced plastid genomes, fuelled largely by advances in conventional Sanger sequencing technology. Here we report a further significant reduction in time and cost for plastid genome sequencing through the successful use of a newly available pyrosequencing platform, the Genome Sequencer 20 (GS 20) System (454 Life Sciences Corporation), to rapidly and accurately sequence the whole plastid genomes of the basal eudicot angiosperms Nandina domestica (Berberidaceae) and Platanus occidentalis (Platanaceae). Results More than 99.75% of each plastid genome was simultaneously obtained during two GS 20 sequence runs, to an average depth of coverage of 24.6× in Nandina and 17.3× in Platanus. The Nandina and Platanus plastid genomes shared essentially identical gene complements and possessed the typical angiosperm plastid structure and gene arrangement. To assess the accuracy of the GS 20 sequence, over 45 kilobases of sequence were generated for each genome using conventional sequencing. Overall error rates of 0.043% and 0.031% were observed in GS 20 sequence for Nandina and Platanus, respectively. More than 97% of all observed errors were associated with homopolymer runs, with ~60% of all errors associated with homopolymer runs of 5 or more nucleotides and ~50% of all errors associated with regions of extensive homopolymer runs. No substitution errors were present in either genome. Error rates were generally higher in the single-copy and noncoding regions of both plastid genomes relative to the inverted repeat and coding regions.
Conclusion Highly accurate and essentially complete sequence information was obtained for the Nandina and Platanus plastid genomes using the GS 20 System. More importantly, the high accuracy 6. A Note on Encodings of Phylogenetic Networks of Bounded Level CERN Document Server Gambette, Philippe 2009-01-01 Driven by the need for better models that allow one to shed light into the question how life's diversity has evolved, phylogenetic networks have now joined phylogenetic trees in the center of phylogenetics research. Like phylogenetic trees, such networks canonically induce collections of phylogenetic trees, clusters, and triplets, respectively. Thus it is not surprising that many network approaches aim to reconstruct a phylogenetic network from such collections. Related to the well-studied perfect phylogeny problem, the following question is of fundamental importance in this context: When does one of the above collections encode (i.e. uniquely describe) the network that induces it? In this note, we present a complete answer to this question for the special case of a level-1 (phylogenetic) network by characterizing those level-1 networks for which an encoding in terms of one (or equivalently all) of the above collections exists. Given that this type of network forms the first layer of the rich hierarchy of lev... 7. Heterogeneous breakpoints on the immunoglobulin genes are involved in fusion with the 5' region of BCL2 in B-cell tumors. Science.gov (United States) Yonetani, N; Ueda, C; Akasaka, T; Nishikori, M; Uchiyama, T; Ohno, H 2001-09-01 The 5' flanking region of the BCL2 gene (5'-BCL2) is a breakpoint cluster of rearrangements with immunoglobulin genes (IGs). In contrast to t(14;18)(q32;q21) affecting the 3' region of BCL2, 5'-BCL2 can fuse to not only the heavy chain gene (IGH), but also two light chain gene (IGL) loci. 
We report here cloning and sequencing of a total of eleven 5'-BCL2 / IGs junctional areas of B-cell tumors, which were amplified by long-distance polymerase chain reaction-based assays. The breakpoints on 5'-BCL2 were distributed from 378 to 2312 bp upstream of the translational initiation site and, reflecting the alteration of regulatory sequences of BCL2, 5'-BCL2 / IGs-positive cells showed markedly higher levels of BCL2 expression than those of t(14;18)-positive cells. In contrast, the breakpoints on the IGs were variable. Two 5'-BCL2 / IGH and two 5'-BCL2 / IGLkappa junctions occurred 5' of the joining (J) segments, suggesting operation of an erroneous variable (V) / diversity (D) / J and V / J rearrangement mechanism. However, two other 5'-BCL2 / IGH junctions affected switch regions, and the kappa-deleting element, which is located 24 kb downstream of the constant region of IGLkappa, followed the 5'-BCL2 in another case. One 5'-BCL2 / IGLkappa and two 5'-BCL2 / IGLlambda junctions involved intronic regions where the normal recombination process does not occur. In the remaining one case, the 5'-BCL2 fused 3' of a Vlambda gene that was upstream of another Vlambda / Jlambda complex carrying a non-producing configuration, indicating that the receptor editing mechanism was likely involved in this rearrangement. Our study revealed heterogeneous anatomy of the 5'-BCL2 / IGs fusion gene leading to transcriptional activation of BCL2, and suggested that the mechanisms underlying the formation of this particular oncogene / IGs recombination are not identical to those of t(14;18). 8. Phylogenetic and biogeographic analysis of sphaerexochine trilobites. Directory of Open Access Journals (Sweden) Curtis R Congreve Full Text Available BACKGROUND: Sphaerexochinae is a speciose and widely distributed group of cheirurid trilobites. 
Their temporal range extends from the earliest Ordovician through the Silurian, and they survived the end Ordovician mass extinction event (the second largest mass extinction in Earth history). Prior to this study, the individual evolutionary relationships within the group had yet to be determined utilizing rigorous phylogenetic methods. Understanding these evolutionary relationships is important for producing a stable classification of the group, and will be useful in elucidating the effects the end Ordovician mass extinction had on the evolutionary and biogeographic history of the group. METHODOLOGY/PRINCIPAL FINDINGS: Cladistic parsimony analysis of cheirurid trilobites assigned to the subfamily Sphaerexochinae was conducted to evaluate phylogenetic patterns and produce a hypothesis of relationship for the group. This study utilized the program TNT, and the analysis included thirty-one taxa and thirty-nine characters. The results of this analysis were then used in a Lieberman-modified Brooks Parsimony Analysis to analyze biogeographic patterns during the Ordovician-Silurian. CONCLUSIONS/SIGNIFICANCE: The genus Sphaerexochus was found to be monophyletic, consisting of two smaller clades (one composed entirely of Ordovician species and another composed of Silurian and Ordovician species). By contrast, the genus Kawina was found to be paraphyletic. It is a basal grade that also contains taxa formerly assigned to Cydonocephalus. Phylogenetic patterns suggest Sphaerexochinae is a relatively distinctive trilobite clade because it appears to have been largely unaffected by the end Ordovician mass extinction. Finally, the biogeographic analysis yields two major conclusions about Sphaerexochus biogeography: Bohemia and Avalonia were close enough during the Silurian to exchange taxa; and during the Ordovician there was dispersal between Eastern Laurentia and 9.
High-resolution phylogenetic microbial community profiling Energy Technology Data Exchange (ETDEWEB) Singer, Esther; Coleman-Derr, Devin; Bowman, Brett; Schwientek, Patrick; Clum, Alicia; Copeland, Alex; Ciobanu, Doina; Cheng, Jan-Fang; Gies, Esther; Hallam, Steve; Tringe, Susannah; Woyke, Tanja 2014-03-17 The representation of bacterial and archaeal genome sequences is strongly biased towards cultivated organisms, which belong to merely four phylogenetic groups. Functional information and inter-phylum level relationships are still largely underexplored for candidate phyla, which are often referred to as microbial dark matter. Furthermore, a large portion of the 16S rRNA gene records in the GenBank database are labeled as environmental samples and unclassified, which is in part due to low read accuracy, potential chimeric sequences produced during PCR amplifications and the low resolution of short amplicons. In order to improve the phylogenetic classification of novel species and advance our knowledge of the ecosystem function of uncultivated microorganisms, high-throughput full length 16S rRNA gene sequencing methodologies with reduced biases are needed. We evaluated the performance of PacBio single-molecule real-time (SMRT) sequencing in high-resolution phylogenetic microbial community profiling. For this purpose, we compared PacBio and Illumina metagenomic shotgun and 16S rRNA gene sequencing of a mock community as well as of an environmental sample from Sakinaw Lake, British Columbia. Sakinaw Lake is known to contain a large number of microbial species from candidate phyla. Sequencing results show that community structure based on PacBio shotgun and 16S rRNA gene sequences is highly similar in both the mock and the environmental communities.
Resolution power and community representation accuracy from SMRT sequencing data appeared to be independent of the GC content of microbial genomes and were higher when compared to Illumina-based metagenome shotgun and 16S rRNA gene (iTag) sequences; e.g. full-length sequencing resolved all 23 OTUs in the mock community, while iTags did not resolve closely related species. SMRT sequencing hence offers various potential benefits when characterizing uncharted microbial communities. 10. Evidence of Statistical Inconsistency of Phylogenetic Methods in the Presence of Multiple Sequence Alignment Uncertainty. Science.gov (United States) Md Mukarram Hossain, A S; Blackburne, Benjamin P; Shah, Abhijeet; Whelan, Simon 2015-07-01 Evolutionary studies usually use a two-step process to investigate sequence data. Step one estimates a multiple sequence alignment (MSA) and step two applies phylogenetic methods to ask evolutionary questions of that MSA. Modern phylogenetic methods infer evolutionary parameters using maximum likelihood or Bayesian inference, mediated by a probabilistic substitution model that describes sequence change over a tree. The statistical properties of these methods mean that more data directly translates to an increased confidence in downstream results, providing the substitution model is adequate and the MSA is correct. Many studies have investigated the robustness of phylogenetic methods in the presence of substitution model misspecification, but few have examined the statistical properties of those methods when the MSA is unknown. This simulation study examines the statistical properties of the complete two-step process when inferring sequence divergence and the phylogenetic tree topology. Both nucleotide and amino acid analyses are negatively affected by the alignment step, both through inaccurate guide tree estimates and through overfitting to that guide tree.
For many alignment tools these effects become more pronounced when additional sequences are added to the analysis. Nucleotide sequences are particularly susceptible, with MSA errors leading to statistical support for long-branch attraction artifacts, which are usually associated with gross substitution model misspecification. Amino acid MSAs are more robust, but do tend to arbitrarily resolve multifurcations in favor of the guide tree. No inference strategies produce consistently accurate estimates of divergence between sequences, although amino acid MSAs are again more accurate than their nucleotide counterparts. We conclude with some practical suggestions about how to limit the effect of MSA uncertainty on evolutionary inference. 11. MINER: software for phylogenetic motif identification. Science.gov (United States) La, David; Livesay, Dennis R 2005-07-01 MINER is web-based software for phylogenetic motif (PM) identification. PMs are sequence regions (fragments) that conserve the overall familial phylogeny. PMs have been shown to correspond to a wide variety of catalytic regions, substrate-binding sites and protein interfaces, making them ideal functional site predictions. The MINER output provides an intuitive interface for interactive PM sequence analysis and structural visualization. The web implementation of MINER is freely available at http://www.pmap.csupomona.edu/MINER/. Source code is available to the academic community on request. 12. Phylogenetic analysis of cubilin (CUBN) gene OpenAIRE Shaik, Abjal Pasha; Alsaeed, Abbas H; Kiranmayee, S; Bammidi, VK; Sultana, Asma 2013-01-01 Cubilin, (CUBN; also known as intrinsic factor-cobalamin receptor [Homo sapiens Entrez Pubmed ref NM_001081.3; NG_008967.1; GI: 119606627]), located in the epithelium of intestine and kidney acts as a receptor for intrinsic factor – vitamin B12 complexes. Mutations in CUBN may play a role in autosomal recessive megaloblastic anemia. 
The current study investigated the possible role of CUBN in evolution using phylogenetic testing. A total of 588 BLAST hits were found for the cubilin query seque... 13. Concepts of Classification and Taxonomy Phylogenetic Classification Science.gov (United States) Fraix-Burnet, D. 2016-05-01 Phylogenetic approaches to classification have been heavily developed in biology by bioinformaticians. But these techniques have applications in other fields, in particular in linguistics. Their main characteristic is to search for relationships between the objects or species under study, instead of grouping them by similarity. They are thus rather well suited for any kind of evolutionary objects. For nearly fifteen years, astrocladistics has explored the use of Maximum Parsimony (or cladistics) for astronomical objects like galaxies or globular clusters. In this lesson we will learn how it works. 14. Taxonomic Identity Resolution of Highly Phylogenetically Related Strains and Selection of Phylogenetic Markers by Using Genome-Scale Methods: The Bacillus pumilus Group Case Science.gov (United States) Espariz, Martín; Zuljan, Federico A.; Esteban, Luis; Magni, Christian 2016-01-01 Bacillus pumilus group strains have been studied due to their agronomic, biotechnological or pharmaceutical potential. Classifying strains of this taxonomic group at species level is a challenging procedure since it is composed of seven species that share among them over 99.5% of 16S rRNA gene identity. In this study, first, a whole-genome in silico approach was used to accurately demarcate B. pumilus group strains, as a case of highly phylogenetically related taxa, at the species level. In order to achieve that, and consequently to validate or correct taxonomic identities of genomes in public databases, an average nucleotide identity correlation, a core-based phylogenomic and a gene function repertory analyses were performed. Eventually, more than 50% of such genomes were found to be misclassified.
Hierarchical clustering of gene functional repertoires was also used to infer ecotypes among B. pumilus group species. Furthermore, for the first time the machine-learning algorithm Random Forest was used to rank genes in order of their importance for species classification. We found that ybbP, a gene involved in the synthesis of cyclic di-AMP, was the most important gene for accurately predicting species identity among B. pumilus group strains. Finally, principal component analysis was used to classify strains based on the distances between their ybbP genes. The methodologies described could be utilized more broadly to identify other highly phylogenetically related species in metagenomic or epidemiological assessments. PMID:27658251 15. Descriptive Statistics of the Genome: Phylogenetic Classification of Viruses. Science.gov (United States) Hernandez, Troy; Yang, Jie 2016-10-01 The typical process for classifying and submitting a newly sequenced virus to the NCBI database involves two steps. First, a BLAST search is performed to determine likely family candidates. That is followed by checking the candidate families with the pairwise sequence alignment tool for similar species. The submitter's judgment is then used to determine the most likely species classification. The aim of this article is to show that this process can be automated into a fast, accurate, one-step process using the proposed alignment-free method and properly implemented machine learning techniques. We present a new family of alignment-free vectorizations of the genome, the generalized vector, that maintains the speed of existing alignment-free methods while outperforming all available methods. This new alignment-free vectorization uses the frequency of genomic words (k-mers), as is done in the composition vector, and incorporates descriptive statistics of those k-mers' positional information, as inspired by the natural vector. 
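The generalized-vector idea described above (k-mer frequencies plus descriptive statistics of k-mer positions) can be illustrated with a toy sketch. This is not the authors' implementation; the choice of statistics (mean and variance of positions) and the function names are assumptions for illustration only.

```python
from itertools import product

def generalized_vector(seq, k=2):
    """Toy alignment-free feature vector: for each possible k-mer,
    record its frequency plus the mean and variance of the positions
    at which it occurs. (Sketch only, not the paper's construction.)"""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    positions = {km: [] for km in kmers}
    for i in range(len(seq) - k + 1):
        km = seq[i:i + k]
        if km in positions:
            positions[km].append(i)
    n = len(seq) - k + 1
    vec = []
    for km in kmers:
        pos = positions[km]
        freq = len(pos) / n
        mean = sum(pos) / len(pos) if pos else 0.0
        var = sum((p - mean) ** 2 for p in pos) / len(pos) if pos else 0.0
        vec.extend([freq, mean, var])
    return vec

def euclidean(u, v):
    """Distance between two genome vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

# k-nearest-neighbor classification then reduces to comparing such vectors:
d = euclidean(generalized_vector("ACGTACGTAC"), generalized_vector("ACGTTTGTAC"))
```

Each genome is mapped to a fixed-length vector (here 3 values per k-mer), so two sequences of different lengths become directly comparable without alignment.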
We analyze five different characterizations of genome similarity using k-nearest neighbor classification and evaluate these on two collections of viruses totaling over 10,000 viruses. We show that our proposed method performs better than, or as well as, other methods at every level of the phylogenetic hierarchy. The data and R code are available upon request. 16. Isolation of chromosome-specific DNA sequences from an Alu polymerase chain reaction library to define the breakpoint in a patient with a constitutional translocation t(1;13) (q22;q12) and ganglioneuroblastoma. Science.gov (United States) Michalski, A J; Cotter, F E; Cowell, J K 1992-08-01 We describe the cytogenetic and molecular characterization of a t(1;13)(q22;q12) constitutional rearrangement occurring in a patient with a relatively benign form of neuroblastoma, called ganglioneuroblastoma. Somatic cell hybrids were generated between mouse 3T3 cells and a lymphoblastoid cell line from this patient, D.G. One isolated subclone, DGF27C11, contained the derivative chromosome, 1pter-q22::13q12-qter, but no other material from either chromosome 1 or 13. Using available DNA probes, the chromosome 13 breakpoint was assigned proximal to all reported markers. In order to generate flanking markers to define this translocation further, an Alu polymerase chain reaction library was constructed from a somatic cell hybrid containing only the proximal, 13pter-13q14, region of chromosome 13. Seven unique sequences have been isolated from the library, three of which lie below and four of which lie above the 13q12 breakpoint. More precise mapping of the distal markers was achieved using a panel of somatic cell hybrids with overlapping deletions of chromosome 13. The paucity of probes in the 1q22 region has made a precise assignment of this breakpoint difficult; however, it has been shown to lie distal to c-SKI and proximal to APOA2.
This refined characterization of the breakpoint is a prerequisite for its cloning, which may yield genes important in the pathogenesis of ganglioneuroblastoma. 17. A high-resolution comparative map between pig chromosome 17 and human chromosomes 4, 8, and 20: Identification of synteny breakpoints DEFF Research Database (Denmark) Lahbib-Mansais, Yvette; Karlskov-Mortensen, Peter; Mompart, Florence; 2005-01-01 We report on the construction of a high-resolution comparative map of porcine chromosome 17 (SSC17) focusing on evolutionary breakpoints with human chromosomes. The comparative map shows high homology with human chromosome 20 but suggests more limited homologies with other human chromosomes. SSC17...... is of particular interest in studies of chromosomal organization due to the presence of QTLs that affect meat quality and carcass composition. A total of 158 pig ESTs available in databases or developed by the Sino-Danish Pig Genome Sequencing Consortium were mapped using the INRA-University of Minnesota porcine...... radiation hybrid panel. The high-resolution map was further anchored by fluorescence in situ hybridization. This study confirmed the extensive conservation between SSC17 and HSA20 and enabled the gene order to be determined. The homology of the SSC17 pericentromeric region was extended to other human... 18. Impact of penicillin nonsusceptibility on clinical outcomes of patients with nonmeningeal Streptococcus pneumoniae bacteremia in the era of the 2008 Clinical and Laboratory Standards Institute penicillin breakpoints. Science.gov (United States) Choi, Seong-Ho; Chung, Jin-Won; Sung, Heungsup; Kim, Mi-Na; Kim, Sung-Han; Lee, Sang-Oh; Kim, Yang Soo; Woo, Jun Hee; Choi, Sang-Ho 2012-09-01 To investigate the impact of penicillin nonsusceptibility on clinical outcomes of patients with nonmeningeal Streptococcus pneumoniae bacteremia (SPB), a retrospective cohort study was performed.
The characteristics of 39 patients with penicillin-nonsusceptible SPB (PNSPB) were compared to those of a group of age- and sex-matched patients (n = 78) with penicillin-susceptible SPB (PSSPB). Susceptibility to penicillin was redetermined by using the revised Clinical and Laboratory Standards Institute (CLSI) penicillin breakpoints in CLSI document M100-S18. Although the PNSPB group tended to have more serious initial manifestations than the PSSPB group, the two groups did not differ significantly in terms of their 30-day mortality rates (30.8% versus 23.1%; P = 0.37) or the duration of hospital stay (median number of days, 14 versus 12; P = 0.89). Broad-spectrum antimicrobial agents, such as extended-spectrum cephalosporins, vancomycin, and carbapenem, were frequently used in both the PNSPB and PSSPB groups. Multivariate analysis revealed that ceftriaxone nonsusceptibility (adjusted odds ratio [aOR] = 4.88; 95% confidence interval [CI] = 1.07 to 22.27; P = 0.041) was one of the independent risk factors for 30-day mortality. Thus, when the 2008 CLSI penicillin breakpoints are applied and the current clinical practice of using wide-spectrum empirical antimicrobial agents is pursued, fatal outcomes in patients with nonmeningeal SPB that can be attributed to penicillin nonsusceptibility are likely to be rare. Further studies that examine the clinical impact of ceftriaxone nonsusceptibility in nonmeningeal SPB may be warranted. 19. Impact of changes in CLSI and EUCAST breakpoints for susceptibility in bloodstream infections due to extended-spectrum β-lactamase-producing Escherichia coli.
Science.gov (United States) Rodríguez-Baño, J; Picón, E; Navarro, M D; López-Cerero, L; Pascual, A 2012-09-01 The impact of recent changes in and discrepancies between the breakpoints for cephalosporins and other antimicrobials, as determined by CLSI and the European Committee on Antimicrobial Susceptibility Testing (EUCAST), was analysed in patients with bloodstream infections caused by extended-spectrum β-lactamase (ESBL)-producing Escherichia coli in Spain. We studied a cohort of 191 episodes of bloodstream infection caused by ESBL-producing E. coli in 13 Spanish hospitals; the susceptibility of isolates to different antimicrobials was investigated by microdilution and interpreted according to recommendations established in 2009 and 2010 by CLSI, and in 2011 by EUCAST. Overall, 58.6% and 14.7% of isolates were susceptible to ceftazidime, and 35.1% and 14.7% to cefepime using the CLSI-2010 and EUCAST-2009/2011 recommendations, respectively (all isolates would have been considered resistant using the previous guidelines). Discrepancies between the CLSI-2010 and the EUCAST-2011 recommendations were statistically significant for other antimicrobials only in the case of amikacin (98.4% versus 75.9% of susceptible isolates; p <0.01). The results varied depending on the ESBL produced. No significant differences were found in the percentage of patients classified as receiving appropriate therapy following the different recommendations. Four out of 11 patients treated with active cephalosporins according to CLSI-2010 guidelines died (all had severe sepsis or shock); these cases would have been considered resistant according to EUCAST-2011. In conclusion, by using current breakpoints, extended-spectrum cephalosporins would be regarded as active agents for treating a significant proportion of patients with bloodstream infections caused by ESBL-producing E. coli. 20.
Evaluation by Data Mining Techniques of Fluconazole Breakpoints Established by the Clinical and Laboratory Standards Institute (CLSI) and Comparison with Those of the European Committee on Antimicrobial Susceptibility Testing (EUCAST)▿ Science.gov (United States) Cuesta, Isabel; Bielza, Concha; Cuenca-Estrella, Manuel; Larrañaga, Pedro; Rodríguez-Tudela, Juan L. 2010-01-01 The EUCAST and the CLSI have established different breakpoints for fluconazole and Candida spp. However, the reference methodologies employed to obtain the MICs provide similar results. The aim of this work was to apply supervised classification algorithms to analyze the clinical data used by the CLSI to establish fluconazole breakpoints for Candida infections and to compare these data with the results obtained with the data set used to set up EUCAST fluconazole breakpoints, where the MIC for detecting failures was >4 mg/liter, with a sensitivity of 87%, a false-positive rate of 8%, and an area under the receiver operating characteristic (ROC) curve of 0.89. Five supervised classifiers (J48 and CART decision trees, the OneR decision rule, the naïve Bayes classifier, and simple logistic regression) were used to analyze the original cohort of patients (Rex's data set), which was used to establish CLSI breakpoints, and a later cohort of candidemia (Clancy's data set), with which CLSI breakpoints were validated. The target variable was the outcome of the infections, and the predictor variable was the MIC or dose/MIC ratio. For Rex's data set, the MIC detecting failures was >8 mg/liter, and for Clancy's data set, the MIC detecting failures was >4 mg/liter, in close agreement with the EUCAST breakpoint (MIC > 4 mg/liter). The sensitivities, false-positive rates, and areas under the ROC curve obtained by means of CART, the algorithm with the best statistical results, were 52%, 18%, and 0.7, respectively, for Rex's data set and 65%, 6%, and 0.72, respectively, for Clancy's data set. 
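The evaluation reported above treats a candidate MIC breakpoint as a binary predictor of clinical failure and scores it by sensitivity and false-positive rate (ROC-style). A minimal sketch of that computation follows; the cohort data below are invented for illustration and do not come from either study's patients.

```python
def breakpoint_metrics(records, cutoff):
    """Score the rule 'MIC > cutoff predicts treatment failure'.
    records: iterable of (MIC in mg/liter, outcome_is_failure)."""
    tp = sum(1 for mic, failed in records if failed and mic > cutoff)
    fn = sum(1 for mic, failed in records if failed and mic <= cutoff)
    fp = sum(1 for mic, failed in records if not failed and mic > cutoff)
    tn = sum(1 for mic, failed in records if not failed and mic <= cutoff)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    false_positive_rate = fp / (fp + tn) if (fp + tn) else 0.0
    return sensitivity, false_positive_rate

# Hypothetical cohort: (fluconazole MIC in mg/liter, outcome == failure).
cohort = [(0.5, False), (1, False), (2, False), (4, False),
          (8, True), (16, True), (32, True), (8, False)]

sens, fpr = breakpoint_metrics(cohort, 4)  # candidate breakpoint "MIC > 4"
```

Scanning `cutoff` over the observed MIC values and plotting (false-positive rate, sensitivity) pairs traces the ROC curve from which the area-under-curve figures quoted above are derived.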
In addition, the correlation between outcome and dose/MIC ratio was analyzed for Clancy's data set, where a dose/MIC ratio of >75 was associated with successes, with a sensitivity of 93%, a false-positive rate of 29%, and an area under the ROC curve of 0.83. This dose/MIC ratio of >75 was identical to that found for the cohorts used by EUCAST to establish their breakpoints (a dose/MIC ratio of 1. A Distance Measure for Genome Phylogenetic Analysis Science.gov (United States) Cao, Minh Duc; Allison, Lloyd; Dix, Trevor Phylogenetic analyses of species based on single genes or parts of the genomes are often inconsistent because of factors such as variable rates of evolution and horizontal gene transfer. The availability of more and more sequenced genomes allows phylogeny construction from complete genomes that is less sensitive to such inconsistency. For such long sequences, construction methods like maximum parsimony and maximum likelihood are often not possible due to their intensive computational requirements. Another class of tree construction methods, namely distance-based methods, require a measure of distances between any two genomes. Some measures, such as the evolutionary edit distance of gene order and gene content, are computationally expensive or do not perform well when the gene content of the organisms is similar. This study presents an information theoretic measure of genetic distances between genomes based on the biological compression algorithm expert model. We demonstrate that our distance measure can be applied to reconstruct the consensus phylogenetic tree of a number of Plasmodium parasites from their genomes, the statistical bias of which would mislead conventional analysis methods. Our approach is also used to successfully construct a plausible evolutionary tree for the γ-Proteobacteria group whose genomes are known to contain many horizontally transferred genes. 2. A phylogenetic blueprint for a modern whale.
Science.gov (United States) Gatesy, John; Geisler, Jonathan H; Chang, Joseph; Buell, Carl; Berta, Annalisa; Meredith, Robert W; Springer, Mark S; McGowen, Michael R 2013-02-01 The emergence of Cetacea in the Paleogene represents one of the most profound macroevolutionary transitions within Mammalia. The move from a terrestrial habitat to a committed aquatic lifestyle engendered wholesale changes in anatomy, physiology, and behavior. The results of this remarkable transformation are extant whales that include the largest, biggest brained, fastest swimming, loudest, deepest diving mammals, some of which can detect prey with a sophisticated echolocation system (Odontoceti - toothed whales), and others that batch feed using racks of baleen (Mysticeti - baleen whales). A broad-scale reconstruction of the evolutionary remodeling that culminated in extant cetaceans has not yet been based on integration of genomic and paleontological information. Here, we first place Cetacea relative to extant mammalian diversity, and assess the distribution of support among molecular datasets for relationships within Artiodactyla (even-toed ungulates, including Cetacea). We then merge trees derived from three large concatenations of molecular and fossil data to yield a composite hypothesis that encompasses many critical events in the evolutionary history of Cetacea. By combining diverse evidence, we infer a phylogenetic blueprint that outlines the stepwise evolutionary development of modern whales. This hypothesis represents a starting point for more detailed, comprehensive phylogenetic reconstructions in the future, and also highlights the synergistic interaction between modern (genomic) and traditional (morphological+paleontological) approaches that ultimately must be exploited to provide a rich understanding of evolutionary history across the entire tree of Life. 3. 
Phylogenetic diversity of Mesorhizobium in chickpea Dong Hyun Kim; Mayank Kaashyap; Abhishek Rathore; Roma R Das; Swathi Parupalli; Hari D Upadhyaya; S Gopalakrishnan; Pooran M Gaur; Sarvjeet Singh; Jagmeet Kaur; Mohammad Yasin; Rajeev K Varshney 2014-06-01 Crop domestication, in general, has reduced genetic diversity in the cultivated gene pool of chickpea (Cicer arietinum) as compared with wild species (C. reticulatum, C. bijugum). To explore the impact of domestication on symbiosis, 10 accessions of chickpea, including 4 accessions of C. arietinum and 3 accessions each of C. reticulatum and C. bijugum, were selected and DNAs were extracted from their nodules. To characterize the chickpea symbionts, preliminary sequence analysis was performed with 9 genes (16S rRNA, atpD, dnaJ, glnA, gyrB, nifH, nifK, nodD and recA), of which 3 genes (gyrB, nifK and nodD) were selected for further phylogenetic analysis based on sufficient sequence diversity. Phylogenetic analysis and sequence diversity for the 3 genes demonstrated that sequences from C. reticulatum were more diverse. Nodule occupancy by the dominant symbiont also indicated that C. reticulatum (60%) could host more varied symbionts than cultivated chickpea (80%). The study demonstrated that wild chickpeas (C. reticulatum) could be used for selecting more diverse symbionts under field conditions, and it implies that chickpea domestication affected symbiosis negatively in addition to reducing genetic diversity. 4. Epitope discovery with phylogenetic hidden Markov models. LENUS (Irish Health Repository) Lacerda, Miguel 2010-05-01 Existing methods for the prediction of immunologically active T-cell epitopes are based on the amino acid sequence or structure of pathogen proteins. Additional information regarding the locations of epitopes may be acquired by considering the evolution of viruses in hosts with different immune backgrounds.
In particular, immune-dependent evolutionary patterns at sites within or near T-cell epitopes can be used to enhance epitope identification. We have developed a mutation-selection model of T-cell epitope evolution that allows the human leukocyte antigen (HLA) genotype of the host to influence the evolutionary process. This is one of the first examples of the incorporation of environmental parameters into a phylogenetic model and has many other potential applications where the selection pressures exerted on an organism can be related directly to environmental factors. We combine this novel evolutionary model with a hidden Markov model to identify contiguous amino acid positions that appear to evolve under immune pressure in the presence of specific host immune alleles and that therefore represent potential epitopes. This phylogenetic hidden Markov model provides a rigorous probabilistic framework that can be combined with sequence or structural information to improve epitope prediction. As a demonstration, we apply the model to a data set of HIV-1 protein-coding sequences and host HLA genotypes. 5. Laboratory Building for Accurate Determination of Plutonium Institute of Scientific and Technical Information of China (English) 2008-01-01 The accurate determination of plutonium is one of the most important assay techniques for nuclear fuel, as well as the key to chemical measurement transfer and the basis of the nuclear material balance. An 6. Comparative assessment of performance and genome dependence among phylogenetic profiling methods Directory of Open Access Journals (Sweden) Wu Jie 2006-09-01 Full Text Available Abstract Background The rapidly increasing speed with which genome sequence data can be generated will be accompanied by an exponential increase in the number of sequenced eukaryotes. With the increasing number of sequenced eukaryotic genomes comes a need for bioinformatic techniques to aid in functional annotation.
Ideally, genome context based techniques such as proximity, fusion, and phylogenetic profiling, which have been so successful in prokaryotes, could be utilized in eukaryotes. Here we explore the application of phylogenetic profiling, a method that exploits the evolutionary co-occurrence of genes in the assignment of functional linkages, to eukaryotic genomes. Results In order to evaluate the performance of phylogenetic profiling in eukaryotes, we assessed the relative performance of commonly used profile construction techniques and genome compositions in predicting functional linkages in both prokaryotic and eukaryotic organisms. When predicting linkages in E. coli with a prokaryotic profile, the use of continuous values constructed from transformed BLAST bit-scores performed better than profiles composed of discretized E-values; the use of discretized E-values resulted in more accurate linkages when using S. cerevisiae as the query organism. Extending this analysis by incorporating several eukaryotic genomes in profiles containing a majority of prokaryotes resulted in similar overall accuracy, but with a surprising reduction in pathway diversity among the most significant linkages. Furthermore, the application of phylogenetic profiling using profiles composed of only eukaryotes resulted in the loss of the strong correlation between common KEGG pathway membership and profile similarity score. Profile construction methods, orthology definitions, ontology and domain complexity were explored as possible sources of the poor performance of eukaryotic profiles, but with no improvement in results. Conclusion Given the current set of 7. The origin and diversification of eukaryotes: problems with molecular phylogenetics and molecular clock estimation. 
Science.gov (United States) Roger, Andrew J; Hug, Laura A 2006-06-29 Determining the relationships among and divergence times for the major eukaryotic lineages remains one of the most important and controversial outstanding problems in evolutionary biology. The sequencing and phylogenetic analyses of ribosomal RNA (rRNA) genes led to the first nearly comprehensive phylogenies of eukaryotes in the late 1980s, and supported a view where cellular complexity was acquired during the divergence of extant unicellular eukaryote lineages. More recently, however, refinements in analytical methods coupled with the availability of many additional genes for phylogenetic analysis showed that much of the deep structure of early rRNA trees was artefactual. Recent phylogenetic analyses of multiple genes and the discovery of important molecular and ultrastructural phylogenetic characters have resolved eukaryotic diversity into six major hypothetical groups. Yet relationships among these groups remain poorly understood because of saturation of sequence changes on the billion-year time-scale, possible rapid radiations of major lineages, phylogenetic artefacts and endosymbiotic or lateral gene transfer among eukaryotes. Estimating the divergence dates between the major eukaryote lineages using molecular analyses is even more difficult than phylogenetic estimation. Error in such analyses comes from a myriad of sources including: (i) calibration fossil dates, (ii) the assumed phylogenetic tree, (iii) the nucleotide or amino acid substitution model, (iv) substitution number (branch length) estimates, (v) the model of how rates of evolution change over the tree, (vi) error inherent in the time estimates for a given model and (vii) how multiple gene data are treated. By reanalysing datasets from recently published molecular clock studies, we show that when errors from these various sources are properly accounted for, the confidence intervals on inferred dates can be very large.
Furthermore, estimated dates of divergence vary hugely depending on the methods 8. Understanding the Code: keeping accurate records. Science.gov (United States) Griffith, Richard 2015-10-01 In his continuing series looking at the legal and professional implications of the Nursing and Midwifery Council's revised Code of Conduct, Richard Griffith discusses the elements of accurate record keeping under Standard 10 of the Code. This article considers the importance of accurate record keeping for the safety of patients and protection of district nurses. The legal implications of records are explained along with how district nurses should write records to ensure these legal requirements are met. 9. Phylogenetics of early branching eudicots: Comparing phylogenetic signal across plastid introns, spacers, and genes Institute of Scientific and Technical Information of China (English) Anna-Magdalena BARNISKE; Thomas BORSCH; Kai MÜLLER; Michael KRUG; Andreas WORBERG; Christoph NEINHUIS; Dietmar QUANDT 2012-01-01 Recent phylogenetic analyses revealed a grade with Ranunculales, Sabiales, Proteales, Trochodendrales, and Buxales as first-branching eudicots, with the respective positions of Proteales and Sabiales still lacking statistical confidence. As previous analyses of conserved plastid genes remain inconclusive, we aimed to use and evaluate a representative set of plastid introns (group I: trnL; group II: petD, rpl16, trnK) and intergenic spacers (trnL-F, petB-petD, atpB-rbcL, rps3-rpl16) in comparison to the rapidly evolving matK and slowly evolving atpB and rbcL genes. Overall patterns of microstructural mutations converged across genomic regions, underscoring the existence of a general mutational pattern throughout the plastid genome. Phylogenetic signal differed strongly between functionally and structurally different genomic regions and was highest in matK, followed by spacers, then group II and group I introns. The more conserved atpB and rbcL coding regions showed distinctly lower
phylogenetic information content. Parsimony, maximum likelihood, and Bayesian phylogenetic analyses based on the combined dataset of non-coding and rapidly evolving regions (>14 000 aligned characters) converged to a backbone topology of eudicots with Ranunculales branching first, a Proteales-Sabiales clade second, followed by Trochodendrales and Buxales. Gunnerales generally appeared as sister to all remaining core eudicots with maximum support. Our results show that a small number of intron and spacer sequences allow similar insights into phylogenetic relationships of eudicots compared to datasets of many combined genes. The non-coding proportion of the plastid genome thus can be considered an important information source for plastid phylogenomics. 10. Phylogenetic positions of several amitochondriate protozoa-Evidence from phylogenetic analysis of DNA topoisomerase II Institute of Scientific and Technical Information of China (English) HE De; DONG Jiuhong; WEN Jianfan; XIN Dedong; LU Siqi 2005-01-01 Several groups of parasitic protozoa, as represented by Giardia, Trichomonas, Entamoeba and Microsporida, were once widely considered to be the most primitive extant eukaryotic group, Archezoa. The main evidence for this is their 'lacking mitochondria' and possessing some other primitive features between prokaryotes and eukaryotes, and being basal to all eukaryotes with mitochondria in phylogenies inferred from many molecules. Some authors even proposed that these organisms diverged before the endosymbiotic origin of mitochondria within eukaryotes. This view was once considered to be very significant to the study of the origin and evolution of eukaryotic cells (eukaryotes). However, in recent years this has been challenged by accumulating evidence from new studies. Here the sequences of DNA topoisomerase II in G. lamblia, T. vaginalis and E.
histolytica were first identified by PCR and sequencing; then, combined with GenBank sequence data from the microsporidian Encephalitozoon cuniculi and other eukaryotic groups of different evolutionary positions, phylogenetic trees were constructed by various methods to investigate the evolutionary positions of these amitochondriate protozoa. Our results showed that because the characteristics of DNA topoisomerase II allow it to avoid the 'long-branch attraction' artifact that affected previous phylogenetic analyses, our trees not only effectively reflect the widely accepted relationships among the major eukaryotic groups, but also reveal phylogenetic positions for these amitochondriate protozoa that differ from those in previous phylogenetic trees. They are not the earliest-branching eukaryotes, but diverged after some mitochondriate organisms such as kinetoplastids and mycetozoans; they are not a united group but occupy different phylogenetic positions. Combined with the recent cytological findings of mitochondria-like organelles in them, we think that though some of them (e.g. diplomonads, as represented 11. Phylogenetic constraints in key functional traits behind species' climate niches DEFF Research Database (Denmark) Kellermann, Vanessa; Loeschcke, Volker; Hoffmann, Ary A; 2012-01-01 adapted to similar environments or alternatively phylogenetic inertia. For desiccation resistance, weak phylogenetic inertia was detected; ancestral trait reconstruction, however, revealed a deep divergence that could be traced back to the genus level. Despite drosophilids’ high evolutionary potential......) for 92–95 Drosophila species and assessed their importance for geographic distributions, while controlling for acclimation, phylogeny, and spatial autocorrelation. Employing an array of phylogenetic analyses, we documented moderate-to-strong phylogenetic signal in both desiccation and cold resistance....
Desiccation and cold resistance were clearly linked to species distributions because significant associations between traits and climatic variables persisted even after controlling for phylogeny. We used different methods to untangle whether phylogenetic signal reflected phylogenetically related species... 12. Automatic selection of reference taxa for protein-protein interaction prediction with phylogenetic profiling DEFF Research Database (Denmark) Simonsen, Martin; Maetschke, S.R.; Ragan, M.A. 2012-01-01 Motivation: Phylogenetic profiling methods can achieve good accuracy in predicting protein–protein interactions, especially in prokaryotes. Recent studies have shown that the choice of reference taxa (RT) is critical for accurate prediction, but with more than 2500 fully sequenced taxa publicly......: We present three novel methods for automating the selection of RT, using machine learning based on known protein–protein interaction networks. One of these methods in particular, Tree-Based Search, yields greatly improved prediction accuracies. We further show that different methods for constituting... 13. Best Practices for Data Sharing in Phylogenetic Research Science.gov (United States) Cranston, Karen; Harmon, Luke J.; O'Leary, Maureen A.; Lisle, Curtis 2014-01-01 As phylogenetic data becomes increasingly available, along with associated data on species’ genomes, traits, and geographic distributions, the need to ensure data availability and reuse becomes more and more acute. In this paper, we provide ten “simple rules” that we view as best practices for data sharing in phylogenetic research. These rules will help lead towards a future phylogenetics where data can easily be archived, shared, reused, and repurposed across a wide variety of projects. PMID:24987572 14.
A common tendency for phylogenetic overdispersion in mammalian assemblages OpenAIRE Cooper, Natalie; RODRIGUEZ, JESUS; Purvis, Andy 2008-01-01 PUBLISHED Competition has long been proposed as an important force in structuring mammalian communities. Although early work recognised that competition has a phylogenetic dimension, only with recent increases in the availability of phylogenies have true phylogenetic investigations of mammalian community structure become possible. We test whether the phylogenetic structure of 142 assemblages from three mammalian clades (New World monkeys, North American ground squirrels and Australasian po... 15. Applications of phylogenetics to solve practical problems in insect conservation. Science.gov (United States) Buckley, Thomas R 2016-12-01 Phylogenetic approaches have much promise for the setting of conservation priorities and resource allocation. There has been significant development of analytical methods for the measurement of phylogenetic diversity within and among ecological communities as a way of setting conservation priorities. Application of these tools to insects has been low, as has uptake by conservation managers. A critical reason for the lack of uptake is the scarcity of detailed phylogenetic and species distribution data across much of insect diversity. Environmental DNA technologies offer a means for the high-throughput collection of phylogenetic data across landscapes for conservation planning. 16. Disentangling the phylogenetic and ecological components of spider phenotypic variation. Science.gov (United States) Gonçalves-Souza, Thiago; Diniz-Filho, José Alexandre Felizola; Romero, Gustavo Quevedo 2014-01-01 An understanding of how the degree of phylogenetic relatedness influences the ecological similarity among species is crucial to inferring the mechanisms governing the assembly of communities.
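A standard quantity behind the phylogenetic-diversity methods mentioned above is Faith's PD: the total branch length of the minimal subtree connecting an assemblage of species. The sketch below illustrates the computation on an invented toy tree (the tree shape, branch lengths, and function name are assumptions for illustration, not from any of the cited studies).

```python
def faith_pd(parents, lengths, assemblage):
    """Faith's phylogenetic diversity: sum of branch lengths on the
    minimal subtree connecting the assemblage to the root.
    parents: child -> parent mapping; lengths: child -> branch length."""
    used = set()
    for taxon in assemblage:
        node = taxon
        while node in parents:      # walk up to the root, collecting branches
            used.add(node)
            node = parents[node]
    return sum(lengths[n] for n in used)

# Toy tree: A and B join at internal node X; X and C attach to the root.
parents = {"A": "X", "B": "X", "X": "root", "C": "root"}
lengths = {"A": 1.0, "B": 1.0, "X": 2.0, "C": 3.0}

pd_ab = faith_pd(parents, lengths, {"A", "B"})  # 1 + 1 + 2 = 4.0
```

Comparing PD across candidate assemblages is what lets such methods rank sites or communities for conservation priority: an assemblage spanning deeper splits of the tree scores higher than one of closely related species.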
We evaluated the relative importance of spider phylogenetic relationships and ecological niche (plant morphological variables) to the variation in spider body size and shape by comparing spiders at different scales: (i) between bromeliads and dicot plants (habitat scale) and (ii) among bromeliads with distinct architectural features (microhabitat scale). We partitioned the interspecific variation in body size and shape into a phylogenetic component (trait values expected from the phylogenetic relationships among species) and an ecological component (trait values independent of phylogenetic relationships). At the habitat scale, bromeliad spiders were larger and flatter than spiders associated with the surrounding dicots, and plant morphology sorted out closely related spiders. Our results showed that spider flatness is phylogenetically clustered at the habitat scale, whereas it is phylogenetically overdispersed at the microhabitat scale, although phylogenetic signal is present at both scales. Taken together, these results suggest that at the habitat scale selective colonization affects spider body size and shape, whereas at finer scales both selective colonization and adaptive evolution determine spider body shape. By partitioning the phylogenetic and ecological components of phenotypic variation, we were able to disentangle the evolutionary history of distinct spider traits and show that plant architecture plays a role in the evolution of spider body size and shape. We also discuss the relevance of considering multiple scales when studying phylogenetic community structure.

17. Mitochondrial genome organization and vertebrate phylogenetics
Directory of Open Access Journals (Sweden). Pereira, Sérgio Luiz. 2000-01-01.
With the advent of DNA sequencing techniques, the organization of the vertebrate mitochondrial genome has been shown to vary between higher taxonomic levels.
The most conserved gene order is found in placental mammals, turtles, fishes, some lizards and Xenopus. Birds, other lizards, crocodilians, marsupial mammals, snakes, the tuatara, lampreys, some other amphibians and one species of fish have gene orders that are less conserved. The most probable mechanism for new gene rearrangements appears to be tandem duplication followed by multiple deletion events, always associated with tRNA sequences. Some rearrangements are typical of monophyletic groups, and data from these groups may be useful for answering phylogenetic questions involving higher vertebrate taxonomic levels. Other features, such as the secondary structure of tRNAs and the start and stop codons of protein-coding genes, may also be useful in comparisons of vertebrate mitochondrial genomes.

18. First phylogenetic analyses of galaxy evolution
CERN Document Server. Fraix-Burnet, D. 2004-01-01.
The Hubble tuning-fork diagram, based on morphology, has always been the preferred scheme for classification of galaxies and is still the only one originally built from historical/evolutionary relationships. By contrast, biologists have long taken into account the parenthood links of living entities for classification purposes. Assuming branching evolution of galaxies as a "descent with modification", we show that the concepts and tools of phylogenetic systematics widely used in biology can be heuristically transposed to the case of galaxies. This approach, which we call "astrocladistics", has first been applied to dwarf galaxies of the Local Group and provides the first evolutionary galaxy tree. The cladogram is sufficiently solid to support the existence of a hierarchical organization in the diversity of galaxies, making it possible to track ancestral types of galaxies. We also find that morphology is a summary of more fundamental properties. Astrocladistics applied to cosmologically simulated galaxies can, uns…

19.
Tanglegrams: a Reduction Tool for Mathematical Phylogenetics.
Science.gov (United States). Matsen, Frederick; Billey, Sara; Kas, Arnold; Konvalinka, Matjaz. 2016-10-03.
Many discrete mathematics problems in phylogenetics are defined in terms of the relative labeling of pairs of leaf-labeled trees. These relative labelings are naturally formalized as tanglegrams, which have previously been an object of study in coevolutionary analysis. Although there has been considerable work on planar drawings of tanglegrams, they have not been fully explored as combinatorial objects until recently. In this paper, we describe how many discrete mathematical questions on trees "factor" through a problem on tanglegrams, and how understanding that factoring can simplify analysis. Depending on the problem, it may be useful to consider an unordered version of tanglegrams and/or their unrooted counterparts. For all of these definitions, we show how the isomorphism types of tanglegrams can be understood in terms of double cosets of the symmetric group, and we investigate their automorphisms. Understanding tanglegrams better will isolate the distinct problems on leaf-labeled pairs of trees and reveal natural symmetries of the spaces associated with such problems.

20. The rapidly changing landscape of insect phylogenetics.
Science.gov (United States). 2016-12-01.
Insect phylogenetics is being profoundly changed by many innovations. Although rapid developments in genomics take center stage, key progress has also been made in phenomics, field and museum science, digital databases and pipelines, analytical tools, and the culture of science. The importance of these methodological and cultural changes to the pace of inference of the hexapod Tree of Life is discussed. The innovations have the potential, when synthesized and mobilized in ways as yet unforeseen, to shine light on the million or more clades of insects and to infer their composition with confidence.
There are many challenges to overcome before insects can enter the 'phylocognisant age', but because of the promise of genomics, phenomics, and informatics, that is now an imaginable future.

1. Zika Virus: Emergence, Phylogenetics, Challenges, and Opportunities.
Science.gov (United States). Rajah, Maaran M; Pardy, Ryan D; Condotta, Stephanie A; Richer, Martin J; Sagan, Selena M. 2016-11-11.
Zika virus (ZIKV) is an emerging arthropod-borne pathogen that has recently gained notoriety due to its rapid and ongoing geographic expansion and its novel association with neurological complications. Reports of ZIKV-associated Guillain-Barré syndrome as well as fetal microcephaly emphasize the need to develop preventative measures and therapeutics to combat ZIKV infection. Thus, it is imperative that models to study ZIKV replication, pathogenesis, and the immune response are developed in conjunction with integrated vector-control strategies to mount an efficient response to the pandemic. This paper summarizes the current state of knowledge on ZIKV, including the clinical features, phylogenetic analyses, pathogenesis, and the immune response to infection. Potential challenges in developing diagnostic tools, treatments, and prevention strategies are also discussed.

2. The Shapley Value of Phylogenetic Trees
CERN Document Server. Haake, Claus-Jochen; Su, Francis Edward. 2007-01-01.
Every weighted tree corresponds naturally to a cooperative game that we call a "tree game"; it assigns to each subset of leaves the sum of the weights of the minimal subtree spanned by those leaves. In the context of phylogenetic trees, the leaves are species, and this assignment captures the diversity present in the coalition of species considered. We consider the Shapley value of tree games and suggest a biological interpretation. We determine the linear transformation M that shows the dependence of the Shapley value on the edge weights of the tree, and we also compute a null-space basis of M.
Both depend on the "split counts" of the tree. Finally, we characterize the Shapley value on tree games by four axioms, a counterpart to Shapley's original theorem on the larger class of cooperative games.

3. Inferring Phylogenetic Networks from Gene Order Data
Directory of Open Access Journals (Sweden). Morozov, Alexey Anatolievich. 2013-01-01.
Existing algorithms allow us to infer phylogenetic networks from sequences (DNA, protein or binary), sets of trees, and distance matrices, but there are no methods to build them using gene order data as an input. Here we describe several methods to build split networks from gene order data, perform simulation studies, and use our methods for analyzing and interpreting different real gene order datasets. All proposed methods are based on intermediate data, which can be generated from the genome structures under study and used as an input for network construction algorithms. Three intermediates are used: a set of jackknife trees, a distance matrix, and a binary encoding. According to the simulations and case studies, the best intermediates are jackknife trees and the distance matrix (when used with the Neighbor-Net algorithm). Binary encoding can also be useful, but only when the methods mentioned above cannot be used.

4. Phylogenetic insights into Andean plant diversification
Directory of Open Access Journals (Sweden). Luebert, Federico. 2014-06-01.
Andean orogeny is considered one of the most important events for the development of current plant diversity in South America. We compare available phylogenetic studies and divergence time estimates for plant lineages that may have diversified in response to Andean orogeny. The influence of the Andes on plant diversification is separated into four major groups: the Andes as a source of new high-elevation habitats, as a vicariant barrier, as a north-south corridor, and as a generator of new environmental conditions outside the Andes.
Biogeographical relationships between the Andes and other regions are also considered. Divergence time estimates indicate that high-elevation lineages originated and diversified during or after the major phases of Andean uplift (Mid-Miocene to Pliocene), although there are some exceptions. As expected, Andean mid-elevation lineages tend to be older than high-elevation groups. Most clades with disjunct distributions on both sides of the Andes diverged during Andean uplift. Inner-Andean clades also tend to have divergence times during or after Andean uplift. This is interpreted as evidence of vicariance. Dispersal along the Andes has been shown to occur in either direction, mostly dated after the Andean uplift. Divergence time estimates of plant groups outside the Andes encompass a wider range of ages, indicating that the Andes may not necessarily be the cause of these diversifications. The Andes are biogeographically related to all neighbouring areas, especially Central America, with floristic interchanges in both directions since Early Miocene times. Direct biogeographical relationships between the Andes and other disjunct regions have also been shown in phylogenetic studies, especially with the eastern Brazilian highlands and North America. The history of the Andean flora is complex, and plant diversification has been driven by a variety of processes, including environmental change, adaptation, and biotic interactions.

5. morePhyML: improving the phylogenetic tree space exploration with PhyML 3.
Science.gov (United States). Criscuolo, Alexis. 2011-12-01.
PhyML is a widely used Maximum Likelihood (ML) phylogenetic tree inference software based on a standard hill-climbing method. Starting from an initial tree, version 3 of PhyML explores the tree space using "Nearest Neighbor Interchange" (NNI) or "Subtree Pruning and Regrafting" (SPR) tree-swapping techniques in order to find the ML phylogenetic tree.
NNI-based local searches are fast but can often get trapped in local optima, whereas the larger (but slower to cover) SPR-based neighborhoods are expected to lead to trees with higher likelihood. Here, I verify that PhyML infers more likely trees with SPRs than with NNIs in almost all cases. However, I also show that the SPR-based local search of PhyML often does not succeed at locating the ML tree. To improve the tree space exploration, I provide a script, named morePhyML, which allows escaping from local optima by performing character reweighting. This ML tree search strategy, named the ratchet, often leads to higher likelihood estimates. Based on the analysis of a large number of amino acid and nucleotide datasets, I show that morePhyML infers more accurate phylogenetic trees than several other recently developed ML tree inference programs in many cases.

6. The complete chloroplast genome sequences of five Epimedium species: lights into phylogenetic and taxonomic analyses
Directory of Open Access Journals (Sweden). Zhang, Yanjun. 2016-03-01.
Epimedium L. is a phylogenetically and economically important genus in the family Berberidaceae. We sequenced the complete chloroplast (cp) genomes of four Epimedium species using Illumina sequencing technology via a combination of de novo and reference-guided assembly; together with the previously reported cp genome sequence of E. koreanum, this constitutes the first comprehensive cp genome analysis of Epimedium. The five Epimedium cp genomes exhibited the typical quadripartite, circular structure and were rather conserved in genomic structure and the synteny of gene order. However, these cp genomes presented obvious variations at the boundaries of the four regions because of the expansion and contraction of the inverted repeat (IR) region and the single-copy (SC) boundary regions.
The trnQ-UUG duplication occurred in all five Epimedium cp genomes but was not found in other basal eudicots. Rapidly evolving cp genome regions were detected among the five genomes, and differences in simple sequence repeats (SSRs) and repeat sequences were identified. Phylogenetic relationships among the five Epimedium species based on their cp genomes were broadly in accordance with the updated system of the genus, but indicate that the evolutionary relationships and the divisions of the genus need further investigation with additional evidence. The availability of these cp genomes provides valuable genetic information for accurate species identification, taxonomy, phylogenetic resolution, and evolution of Epimedium, and will assist in the exploration and utilization of Epimedium plants.

7. Ascospore morphology is a poor predictor of the phylogenetic relationships of Neurospora and Gelasinospora.
Science.gov (United States). Dettman, J R; Harbinski, F M; Taylor, J W. 2001-10-01.
The genera Neurospora and Gelasinospora are conventionally distinguished by differences in ascospore ornamentation: elevated longitudinal ridges (ribs) separated by depressed grooves (veins) in Neurospora, and spherical or oval indentations (pits) in Gelasinospora. The phylogenetic relationships of representatives of 12 Neurospora and 4 Gelasinospora species were assessed with the DNA sequences of four nuclear genes. Within the genus Neurospora, the 5 outbreeding conidiating species form a monophyletic group with N. discreta as the most divergent, and 4 of the homothallic species form a monophyletic group. In combined analysis, each of the conventionally defined Gelasinospora species was more closely related to a Neurospora species than to another Gelasinospora species.
Evidently, the Neurospora and Gelasinospora species included in this study do not represent two clearly resolved monophyletic sister genera, but instead represent a polyphyletic group of taxa with close phylogenetic relationships and significant morphological similarities. Ascospore morphology, the character upon which the distinction between the genera Neurospora and Gelasinospora is based, was not an accurate predictor of phylogenetic relationships.

8. Accurate tracking control in LOM application
Institute of Scientific and Technical Information of China (English). 2003-01-01.
The fabrication of an accurate prototype directly from a CAD model in a short time depends on accurate tracking control and reference trajectory planning in Laminated Object Manufacturing (LOM) applications. An improvement in contour accuracy is acquired by the introduction of a tracking controller and a trajectory generation policy. A model of the X-Y positioning system of the LOM machine is developed as the design basis of the tracking controller. A Zero Phase Error Tracking Controller (ZPETC) is used to eliminate single-axis following error and thus reduce the contour error. A simulation is developed in a Matlab model based on a retrofitted LOM machine, and satisfactory results are obtained.

9. Accurate Switched-Voltage voltage averaging circuit
OpenAIRE. 金光, 一幸; 松本, 寛樹. 2006-01-01.
This paper proposes an accurate Switched-Voltage (SV) voltage averaging circuit. It is presented to compensate for NMOS mismatch error in MOS differential-type voltage averaging circuits. The proposed circuit consists of a voltage averaging circuit and an SV sample/hold (S/H) circuit. It can operate using non-overlapping three-phase clocks. The performance of this circuit is verified by PSpice simulations.

10. Accurate overlaying for mobile augmented reality
NARCIS (Netherlands). Pasman, W; van der Schaaf, A; Lagendijk, RL; Jansen, F.W.
1999-01-01.
Mobile augmented reality requires accurate alignment of virtual information with objects visible in the real world. We describe a system for mobile communications to be developed to meet these strict alignment criteria using a combination of computer vision, inertial tracking and low-latency renderi…

11. PhyDesign: an online application for profiling phylogenetic informativeness
Directory of Open Access Journals (Sweden). Townsend, Jeffrey P. 2011-05-01.
Background: The rapid increase in the number of sequenced genomes for species across the tree of life is revealing a diverse suite of orthologous genes that could potentially be employed to inform molecular phylogenetic studies that encompass broader taxonomic sampling. Optimal usage of this diversity of loci requires user-friendly tools to facilitate widespread, cost-effective locus prioritization for phylogenetic sampling. The Townsend (2007) phylogenetic informativeness measure provides a unique empirical metric for guiding marker selection. However, no software or automated methodology to evaluate sequence alignments and estimate the phylogenetic informativeness metric has been available. Results: Here, we present PhyDesign, a platform-independent online application that implements the Townsend (2007) phylogenetic informativeness analysis, providing a quantitative prediction of the utility of loci to solve specific phylogenetic questions. An easy-to-use interface facilitates uploading of alignments and ultrametric trees to calculate and depict profiles of informativeness over specified time ranges, and provides rankings of locus prioritization for epochs of interest. Conclusions: By providing these profiles, PhyDesign facilitates locus prioritization, increasing the efficiency of sequencing for phylogenetic purposes compared to traditional studies with more laborious and low-capacity screening methods, as well as increasing the accuracy of phylogenetic studies.
Together with a manual and sample files, the application is freely accessible at http://phydesign.townsend.yale.edu.

12. Utilization of complete chloroplast genomes for phylogenetic studies
NARCIS (Netherlands). Ramlee, Shairul Izan Binti. 2016-01-01.
Chloroplast DNA sequence polymorphisms are a primary source of data in many plant phylogenetic studies. The chloroplast genome is relatively conserved in its evolution, making it an ideal molecule for retaining phylogenetic signals. The chloroplast genome is also largely, but not completely, free from ot…

13. Student Interpretations of Phylogenetic Trees in an Introductory Biology Course
Science.gov (United States). Dees, Jonathan; Momsen, Jennifer L.; Niemi, Jarad; Montplaisir, Lisa. 2014-01-01.
Phylogenetic trees are widely used visual representations in the biological sciences and the most important visual representations in evolutionary biology. Therefore, phylogenetic trees have also become an important component of biology education. We sought to characterize the reasoning used by introductory biology students in interpreting taxa…

14.
A higher-level phylogenetic classification of the Fungi
NARCIS (Netherlands). Hibbett, D.S.; Binder, M.; Bischoff, J.F.; Blackwell, M.; Cannon, P.F.; Eriksson, O.E.; Huhndorf, S.; James, T.; Kirk, P.M.; Lücking, R.; Thorsten Lumbsch, H.; Lutzoni, F.; Brandon Matheny, P.; McLaughlin, D.J.; Powell, M.J.; Redhead, S.; Schoch, C.L.; Spatafora, J.W.; Stalpers, J.A.; Vilgalys, R.; Aime, M.C.; Aptroot, A.; Bauer, R.; Begerow, D.; Benny, G.L.; Castlebury, L.A.; Crous, P.W.; Dai, Y.C.; Gams, W.; Geiser, D.M.; Griffith, G.W.; Gueidan, C.; Hawksworth, D.L.; Hestmark, G.; Hosaka, K.; Humber, R.A.; Hyde, K.D.; Ironside, J.E.; Koljalg, U.; Kurtzman, C.P.; Larsson, K.H.; Lichtwardt, R.; Longcore, J.; Miadlikowska, J.; Miller, A.; Moncalvo, J.M.; Mozley-Standridge, S.; Oberwinkler, F.; Parmasto, E.; Reeb, V.; Rogers, J.D.; Roux, Le C.; Ryvarden, L.; Sampaio, J.P.; Schüssler, A.; Sugiyama, J.; Thorn, R.G.; Tibell, L.; Untereiner, W.A.; Walker, C.; Wang, Z.; Weir, A.; Weiss, M.; White, M.M.; Winka, K.; Yao, Y.J.; Zhang, N. 2007-01-01.
A comprehensive phylogenetic classification of the kingdom Fungi is proposed, with reference to recent molecular phylogenetic analyses, and with input from diverse members of the fungal taxonomic community. The classification includes 195 taxa, down to the level of order, of which 16 are described o…

15. Phylogenetic Analysis of Viridans Group Streptococci Causing Endocarditis
Science.gov (United States). Simmon, Keith E.; Hall, Lori; Woods, Christopher W.; Marco, Francesc; Miro, Jose M.; Cabell, Christopher; Hoen, Bruno; Marin, Mercedes; Utili, Riccardo; Giannitsioti, Efthymia; Doco-Lecompte, Thanh; Bradley, Suzanne; Mirrett, Stanley; Tambic, Arjana; Ryan, Suzanne; Gordon, David; Jones, Phillip; Korman, Tony; Wray, Dannah; Reller, L. Barth; Tripodi, Marie-Francoise; Plesiat, Patrick; Morris, Arthur J.; Lang, Selwyn; Murdoch, David R.; Petti, Cathy A.
2008-01-01.
Identification of viridans group streptococci (VGS) to the species level is difficult because VGS exchange genetic material. We performed multilocus DNA target sequencing to assess the phylogenetic concordance of VGS for a well-defined clinical syndrome. The hierarchy of the sequence data was often discordant, underscoring the importance of establishing biological relevance for finer phylogenetic distinctions. PMID:18650347

16. Phylogenetic analysis of viridans group streptococci causing endocarditis.
Science.gov (United States). Simmon, Keith E; Hall, Lori; Woods, Christopher W; Marco, Francesc; Miro, Jose M; Cabell, Christopher; Hoen, Bruno; Marin, Mercedes; Utili, Riccardo; Giannitsioti, Efthymia; Doco-Lecompte, Thanh; Bradley, Suzanne; Mirrett, Stanley; Tambic, Arjana; Ryan, Suzanne; Gordon, David; Jones, Phillip; Korman, Tony; Wray, Dannah; Reller, L Barth; Tripodi, Marie-Francoise; Plesiat, Patrick; Morris, Arthur J; Lang, Selwyn; Murdoch, David R; Petti, Cathy A. 2008-09-01.
Identification of viridans group streptococci (VGS) to the species level is difficult because VGS exchange genetic material. We performed multilocus DNA target sequencing to assess the phylogenetic concordance of VGS for a well-defined clinical syndrome. The hierarchy of the sequence data was often discordant, underscoring the importance of establishing biological relevance for finer phylogenetic distinctions.

17. Phylogenetic diversity (PD) and biodiversity conservation: some bioinformatics challenges
Directory of Open Access Journals (Sweden). Faith, Daniel P. 2006-01-01.
Biodiversity conservation addresses information challenges through estimations encapsulated in measures of diversity. A quantitative measure of phylogenetic diversity, "PD", has been defined as the minimum total length of all the phylogenetic branches required to span a given set of taxa on the phylogenetic tree (Faith 1992a).
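The PD definition just quoted (minimum total branch length spanning a set of taxa) can be sketched directly. This is a hedged toy illustration: the tree topology, branch lengths, and the rooted convention below are invented for the example, not taken from the paper.

```python
# Toy sketch of Faith's PD: total branch length of the minimal subtree
# connecting a chosen set of taxa (here, including the path to the root).

# child -> (parent, branch_length); the root has no entry of its own.
tree = {
    "A": ("n1", 1.0), "B": ("n1", 2.0),
    "C": ("n2", 3.0), "n1": ("n2", 1.5),
    "n2": ("root", 0.5),
}

def pd(taxa):
    """Sum each branch on the paths from the chosen taxa to the root, once."""
    used = set()
    total = 0.0
    for t in taxa:
        node = t
        while node in tree:
            parent, length = tree[node]
            if node not in used:
                used.add(node)
                total += length
            node = parent
    return total

print(pd({"A", "B"}))   # 1.0 + 2.0 + 1.5 + 0.5 = 5.0
print(pd({"A", "C"}))   # 1.0 + 1.5 + 3.0 + 0.5 = 6.0
```

Note that shared deep branches (n1, n2) are counted only once per set, which is the "proper incorporation of shared deep branches" the entry emphasizes: if a taxon is lost from one locality, a deep branch it represented may become unique to another locality and raise that locality's PD.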
While a recent paper incorrectly characterizes PD as not including information about deeper phylogenetic branches, PD applications over the past decade document the proper incorporation of shared deep branches when assessing the total PD of a set of taxa. Current PD applications to macroinvertebrate taxa in streams of New South Wales, Australia, illustrate the practical importance of this definition. Phylogenetic lineages, often corresponding to new, "cryptic", taxa, are restricted to a small number of stream localities. A recent case of human impact causing loss of taxa in one locality implies a higher PD value for another locality, because it now uniquely represents a deeper branch. This molecular-based phylogenetic pattern supports the use of DNA barcoding programs for biodiversity conservation planning. Here, PD assessments side-step the contentious use of barcoding-based "species" designations. Bioinformatics challenges include combining different phylogenetic evidence, optimization problems for conservation planning, and effective integration of phylogenetic information with environmental and socio-economic data.

18. Site-specific time heterogeneity of the substitution process and its impact on phylogenetic inference
Directory of Open Access Journals (Sweden). Philippe, Hervé. 2011-01-01.
Background: Model violations constitute the major limitation in inferring accurate phylogenies. Characterizing properties of the data that are not being correctly handled by current models is therefore of prime importance. One property of protein evolution is the variation of the relative rate of substitutions across sites and over time, the latter phenomenon being called heterotachy. Its effect on phylogenetic inference has recently received considerable attention, which led to the development of new models of sequence evolution.
However, thus far the focus has been on the quantitative heterogeneity of the evolutionary process, thereby overlooking more qualitative variations. Results: We studied the importance of variation of the site-specific amino-acid substitution process over time and its possible impact on phylogenetic inference. We used the CAT model to define an infinite mixture of substitution processes characterized by equilibrium frequencies over the twenty amino acids, a useful proxy for qualitatively estimating the evolutionary process. Using two large datasets, we show that qualitative changes in site-specific substitution properties over time occurred to a significant extent. To test whether this unaccounted-for qualitative variation can lead to an erroneous phylogenetic tree, we analyzed a concatenation of mitochondrial proteins in which Cnidaria and Porifera were erroneously grouped. The progressive removal of the sites with the most heterogeneous CAT profiles across clades led to the recovery of the monophyly of Eumetazoa (Cnidaria + Bilateria), suggesting that this heterogeneity can negatively influence phylogenetic inference. Conclusion: The time-heterogeneity of the amino-acid replacement process is therefore an important evolutionary aspect that should be incorporated into future models of sequence change.

19. Open Reading Frame Phylogenetic Analysis on the Cloud
Directory of Open Access Journals (Sweden). Hung, Che-Lun. 2013-01-01.
Phylogenetic analysis has become essential in researching the evolutionary relationships between viruses. These relationships are depicted on phylogenetic trees, in which viruses are grouped based on sequence similarity. Viral evolutionary relationships are identified from open reading frames rather than from complete sequences. Recently, cloud computing has become popular for developing internet-based bioinformatics tools. Biocloud is an efficient, scalable, and robust bioinformatics computing service.
In this paper, we propose a cloud-based open reading frame phylogenetic analysis service. The proposed service integrates the Hadoop framework, virtualization technology, and phylogenetic analysis methods to provide a high-availability, large-scale bioservice. In a case study, we analyze the phylogenetic relationships among noroviruses. Evolutionary relationships are elucidated by aligning different open reading frame sequences. The proposed platform correctly identifies the evolutionary relationships between members of Norovirus.

20. Visualising very large phylogenetic trees in three dimensional hyperbolic space
Directory of Open Access Journals (Sweden). Liberles, David A. 2004-04-01.
Background: Common existing phylogenetic tree visualisation tools are not able to display readable trees with more than a few thousand nodes. These existing methodologies are based in two-dimensional space. Results: We introduce the idea of visualising phylogenetic trees in three-dimensional hyperbolic space with the Walrus graph visualisation tool and have developed a conversion tool that enables the conversion of standard phylogenetic tree formats to Walrus' format. With Walrus, it becomes possible to visualise and navigate phylogenetic trees with more than 100,000 nodes. Conclusion: Walrus enables desktop visualisation of very large phylogenetic trees in three-dimensional hyperbolic space. This application is potentially useful for visualisation of the tree of life and for functional genomics derivatives, like The Adaptive Evolution Database (TAED).

1. Fast Structural Search in Phylogenetic Databases
Directory of Open Access Journals (Sweden). Piel, William H. 2005-01-01.
As the size of phylogenetic databases grows, the need to search these databases efficiently arises. Thanks to previous and ongoing research, searching by attribute value and by text has become commonplace in these databases.
However, searching by topological or physical structure, especially for large databases and especially for approximate matches, is still an art. We propose structural search techniques that, given a query or pattern tree P and a database of phylogenies D, find trees in D that are sufficiently close to P. The "closeness" is a measure of the topological relationships in P that are found to be the same or similar in a tree in D. We develop a filtering technique that accelerates searches and present algorithms for rooted and unrooted trees, where the trees can be weighted or unweighted. Experimental results on comparing the similarity measure with existing tree metrics and on evaluating the efficiency of the search techniques demonstrate that the proposed approach is promising.

2. Comprehensive phylogenetic analysis of bacterial reverse transcriptases.
Directory of Open Access Journals (Sweden). Toro, Nicolás.
Much less is known about reverse transcriptases (RTs) in prokaryotes than in eukaryotes, with most prokaryotic enzymes still uncharacterized. Two surveys involving BLAST searches for RT genes in prokaryotic genomes revealed the presence of large numbers of diverse, uncharacterized RTs and RT-like sequences. Here, using consistent annotation across all sequenced bacterial species from GenBank and other sources via RAST, available from the PATRIC (Pathosystems Resource Integration Center) platform, we have compiled the data for currently annotated reverse transcriptases from completely sequenced bacterial genomes. RT sequences are broadly distributed across bacterial phyla, but green sulfur bacteria and cyanobacteria have the highest levels of RT sequence diversity (≤85% identity per genome). By contrast, the phylum Actinobacteria, for which a large number of genomes have been sequenced, was found to have low RT sequence diversity.
Phylogenetic analyses revealed that bacterial RTs can be classified into 17 main groups: group II introns, retrons/retron-like RTs, diversity-generating retroelements (DGRs), Abi-like RTs, CRISPR-Cas-associated RTs, group II-like RTs (G2L), and 11 other groups of RTs of unknown function. Proteobacteria had the highest potential functional diversity, as they possessed most of the RT groups. Group II introns and DGRs were the most widely distributed RTs across bacterial phyla. Our results provide insights into bacterial RT phylogeny and a basis for updating annotation systems based on sequence/domain homology.

3. Phylogenetic autocorrelation under distinct evolutionary processes.
Science.gov (United States). Diniz-Filho, J A. 2001-06-01.
I show how phylogenetic correlograms track distinct microevolutionary processes and can be used as empirical descriptors of the relationship between interspecific covariance (V(B)) and time since divergence (t). Data were simulated under models of gradual and speciational change, using increasing levels of stabilizing selection in a stochastic Ornstein-Uhlenbeck (O-U) process, on a phylogeny of 42 species. For each simulated dataset, correlograms were constructed using Moran's I coefficients estimated at five time slices, established at constant intervals. The correlograms generated under different evolutionary models differ significantly, according to F-values derived from analysis of variance comparing Moran's I at each time slice and to Wilks' lambda from multivariate analysis of variance comparing their overall profiles in a two-way design. Under Brownian motion, or with small restraining forces in the O-U process, correlograms were better fit by a linear model. However, increasing restraining forces in the O-U process cause a lack of linear fit, and correlograms are better described by exponential models. These patterns are better fit for gradual than for speciational modes of change.
Correlograms can be used as a diagnostic method and to describe the V(B)/t relationship before using methods to analyze correlated evolution that assume (or perform statistically better when) this relationship is linear. 4. Phylogenetic analysis of heterothallic Neurospora species. Science.gov (United States) Skupski, M P; Jackson, D A; Natvig, D O 1997-02-01 We examined the phylogenetic relationships among five heterothallic species of Neurospora using restriction fragment polymorphisms derived from cosmid probes and sequence data from the upstream regions of two genes, al-1 and frq. Distance, maximum likelihood, and parsimony trees derived from the data support the hypothesis that strains assigned to N. sitophila, N. discreta, and N. tetrasperma form respective monophyletic groups. Strains assigned to N. intermedia and N. crassa, however, did not form two respective monophyletic groups, consistent with a previous suggestion based on analysis of mitochondrial DNAs that N. crassa and N. intermedia may be incompletely resolved sister taxa. Trees derived from restriction fragments and the al-1 sequence position N. tetrasperma as the sister species of N. sitophila. None of the trees produced by our data supported a previous analysis of sequences in the region of the mating type idiomorph that grouped N. crassa and N. sitophila as sister taxa, as well as N. intermedia and N. tetrasperma as sister taxa. Moreover, sequences from al-1, frq, and the mating-type region produced different trees when analyzed separately. The lack of consensus obtained with different sequences could result from the sorting of ancestral polymorphism during speciation or gene flow across species boundaries, or both. 5. Ultrastructure, biology, and phylogenetic relationships of kinorhyncha. 
Science.gov (United States) Neuhaus, Birger; Higgins, Robert P 2002-07-01 The article summarizes current knowledge mainly about the (functional) morphology and ultrastructure, but also about the biology, development, and evolution of the Kinorhyncha. The Kinorhyncha are microscopic, bilaterally symmetrical, exclusively free-living, benthic, marine animals and ecologically part of the meiofauna. They occur throughout the world from the intertidal to the deep sea, generally in sediments but sometimes associated with plants or other animals. A total of 141 species are known from adult stages, while 38 species have been described from juvenile stages. The trunk is arranged into 11 segments as evidenced by cuticular plates, sensory spots, setae or spines, nervous system, musculature, and subcuticular glands. The ultrastructure of several organ systems and the postembryonic development are known for very few species. Almost no data are available about the embryology, and only a single gene has been sequenced for a single species. The phylogenetic relationships within Kinorhyncha are unresolved. Priapulida, Loricifera, and Kinorhyncha are grouped together as Scalidophora, but arguments are found for every possible sister-group relationship within this taxon. The recently published Ecdysozoa hypothesis suggests a closer relationship of the Scalidophora, Nematoda, Nematomorpha, Tardigrada, Onychophora, and Arthropoda. 6. Identifiability of large phylogenetic mixture models. Science.gov (United States) Rhodes, John A; Sullivant, Seth 2012-01-01 Phylogenetic mixture models are statistical models of character evolution allowing for heterogeneity. Each of the classes in some unknown partition of the characters may evolve by different processes, or even along different trees. Such models are of increasing interest for data analysis, as they can capture the variety of evolutionary processes that may be occurring across long sequences of DNA or proteins.
The fundamental question of whether parameters of such a model are identifiable is difficult to address, due to the complexity of the parameterization. Identifiability is, however, essential to their use for statistical inference. We analyze mixture models on large trees, with many mixture components, showing that both numerical and tree parameters are indeed identifiable in these models when all trees are the same. This provides a theoretical justification for some current empirical studies, and indicates that extensions to even more mixture components should be theoretically well behaved. We also extend our results to certain mixtures on different trees, using the same algebraic techniques. 7. Phylogenetic analysis of fungal ABC transporters Directory of Open Access Journals (Sweden) Driessen Arnold JM 2010-03-01 Full Text Available Abstract Background The superfamily of ABC proteins is among the largest known in nature. Its members are mainly, but not exclusively, involved in the transport of a broad range of substrates across biological membranes. Many contribute to multidrug resistance in microbial pathogens and cancer cells. The diversity of ABC proteins in fungi is comparable with those in multicellular animals, but so far fungal ABC proteins have barely been studied. Results We performed a phylogenetic analysis of the ABC proteins extracted from the genomes of 27 fungal species from 18 orders representing 5 fungal phyla, thereby covering the most important groups. Our analysis demonstrated that some of the subfamilies of ABC proteins remained highly conserved in fungi, while others have undergone a remarkable group-specific diversification. Members of the various fungal phyla also differed significantly in the number of ABC proteins found in their genomes, which is especially reduced in the yeasts S. cerevisiae and S. pombe.
Conclusions Data obtained during our analysis should contribute to a better understanding of the diversity of the fungal ABC proteins and provide important clues about their possible biological functions. 8. Accurate colorimetric feedback for RGB LED clusters Science.gov (United States) Man, Kwong; Ashdown, Ian 2006-08-01 We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within +/-0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1. 9. Accurate guitar tuning by cochlear implant musicians. Directory of Open Access Journals (Sweden) Thomas Lu Full Text Available Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision.
This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. 10. Breakpoint of a balanced translocation t(X;14)(q27.1;q32.3) in a girl with severe hemophilia B maps proximal to the factor IX gene. Science.gov (United States) Di Paola, J; Goldman, T; Qian, Q; Patil, S R; Schutte, B C 2004-03-01 Hemophilia B is an X-linked bleeding disorder caused by the deficiency of coagulation factor (F)IX, with an estimated prevalence of 1 in 30 000 male births. It is almost exclusively seen in males, with rare exceptions. We report a girl who was diagnosed with severe hemophilia B. A PAC DNA probe, RP6-88D7 (which contains the FIX gene), hybridized not only to the normal chromosome X but also to the derivative 14. Using a PAC DNA probe, RP11-963P9, which is located proximal to the FIX gene, we obtained signals on the normal and derivative X and also on the derivative 14. We conclude that the breakpoint is located within the DNA sequence of this clone, mapping proximal to the FIX gene. Since the FIX gene seems to be intact in the derivative 14, the breakpoint may affect an upstream regulatory sequence that subjects the gene to position effect variegation (PEV). 11. Efficient Accurate Context-Sensitive Anomaly Detection Institute of Scientific and Technical Information of China (English) 2007-01-01 For program behavior-based anomaly detection, the only way to ensure accurate monitoring is to construct an efficient and precise program behavior model. A new program behavior-based anomaly detection model, called combined pushdown automaton (CPDA) model, was proposed, which is based on static binary executable analysis. The CPDA model incorporates the optimized call stack walk and code instrumentation technique to gain complete context information. Thereby the proposed method can detect more attacks, while retaining good performance. 12.
On accurate determination of contact angle Science.gov (United States) Concus, P.; Finn, R. 1992-01-01 Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments. 13. Accurate Control of Josephson Phase Qubits Science.gov (United States) 2016-04-14 Accurate control of Josephson phase qubits. Matthias Steffen, John M. Martinis, and Isaac L. Chuang, Physical Review B 68, 224518 (2003). 14. Accurate guitar tuning by cochlear implant musicians. Science.gov (United States) Lu, Thomas; Huang, Juan; Zeng, Fan-Gang 2014-01-01 Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. Accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. 15.
Synthesizing Accurate Floating-Point Formulas OpenAIRE Ioualalen, Arnault; Martel, Matthieu 2013-01-01 Many critical embedded systems perform floating-point computations yet their accuracy is difficult to assert and strongly depends on how formulas are written in programs. In this article, we focus on the synthesis of accurate formulas mathematically equal to the original formulas occurring in source codes. In general, an expression may be rewritten in many ways. To avoid any combinatorial explosion, we use an intermediate representation, called APEG, enabling us to rep... 16. Accurate structural correlations from maximum likelihood superpositions. Directory of Open Access Journals (Sweden) Douglas L Theobald 2008-02-01 Full Text Available The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates.
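The core computation in item 16, extracting dominant correlation modes by PCA of an estimated correlation matrix, can be sketched as below. The sample-correlation estimator here is a simplified stand-in for the paper's maximum-likelihood estimator, and the names are illustrative:

```python
import numpy as np

def correlation_from_ensemble(coords):
    """Equal-weight sample correlation across an ensemble of structures.

    coords: (m, n) array of m structures by n coordinates. The original
    method uses a maximum-likelihood estimate; this naive estimator is a
    simplified stand-in.
    """
    x = coords - coords.mean(axis=0)
    cov = x.T @ x / len(coords)
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)

def principal_modes(corr, k=2):
    """Top-k principal components of a symmetric correlation matrix.

    Returns eigenvalues and eigenvectors sorted by decreasing variance
    explained; each eigenvector column is one mode of correlated motion.
    """
    vals, vecs = np.linalg.eigh(corr)     # solver for symmetric matrices
    order = np.argsort(vals)[::-1][:k]
    return vals[order], vecs[:, order]
```

Color-coding the entries of a dominant eigenvector onto the structure gives a rough analogue of the "PCA plots" the abstract goes on to describe.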
We additionally present an easily interpretable method (“PCA plots”) for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology. 17. Evaluation of CLSI M44-A2 Disk Diffusion and Associated Breakpoint Testing of Caspofungin and Micafungin Using a Well-Characterized Panel of Wild-Type and fks Hot Spot Mutant Candida Isolates Science.gov (United States) Arendrup, Maiken Cavling; Park, Steven; Brown, Steven; Pfaller, Michael; Perlin, David S. 2011-01-01 Disk diffusion testing has recently been standardized by the CLSI, and susceptibility breakpoints have been established for several antifungal compounds. For caspofungin, 5-μg disks are approved, and for micafungin, 10-μg disks are under evaluation. We evaluated the performances of caspofungin and micafungin disk testing using a panel of Candida isolates with and without known FKS echinocandin resistance mechanisms. Disk diffusion and microdilution assays were performed strictly according to CLSI documents M44-A2 and M27-A3. Eighty-nine clinical Candida isolates were included: Candida albicans (20 isolates/10 mutants), C. glabrata (19 isolates/10 mutants), C. dubliniensis (2 isolates/1 mutant), C. krusei (16 isolates/3 mutants), C. parapsilosis (14 isolates/0 mutants), and C. tropicalis (18 isolates/4 mutants). Quality control strains were C. parapsilosis ATCC 22019 and C. krusei ATCC 6258. The correlations between zone diameters and MIC results were good for both compounds, with identical susceptibility classifications for 93.3% of the isolates by applying the current CLSI breakpoints.
However, the numbers of fks hot spot mutant isolates misclassified as being susceptible (S) (very major errors [VMEs]) were high (61% for caspofungin [S, ≥11 mm] and 93% for micafungin [S, ≥14 mm]). Changing the disk diffusion breakpoint to S at ≥22 mm significantly improved the discrimination. For caspofungin, 1 VME was detected (a C. tropicalis isolate with an F76S substitution) (3.5%), and for micafungin, 10 VMEs were detected, the majority of which were for C. glabrata (8/10). The broadest separation between zone diameter ranges for wild-type (WT) and mutant isolates was seen for caspofungin (6 to 12 mm versus −4 to 7 mm). In conclusion, caspofungin disk diffusion testing with a modified breakpoint led to excellent separation between WT and mutant isolates for all Candida species. PMID:21357293 18. Refinement of the Saethre-Chotzen syndrome locus between D7S664 and D7S507 which flank a translocation breakpoint in an affected individual Energy Technology Data Exchange (ETDEWEB) Lewanda, A.F. [Johns Hopkins Univ., Baltimore, MD (United States); Children's National Medical Center, Washington, DC (United States)]; Taylor, E.W.; Jabs, E.W. [Johns Hopkins Univ., Baltimore, MD (United States)] [and others] 1994-09-01 Saethre-Chotzen syndrome (SCS) is a common autosomal dominant craniosynostosis disorder that has been mapped to distal chromosome 7p. In addition to craniosynostosis, patients with SCS have facial asymmetry, low frontal hairline, ptosis, deviated nasal septum, brachydactyly, and partial cutaneous syndactyly. We evaluated 66 individuals in 10 SCS families. Linkage analysis was performed with 11 dinucleotide repeat markers between D7S513 and D7S516, spanning a genetic distance of 27 cM. The tightest linkage was to marker D7S664 (Z = 7.16, θ = 0.00), with a confidence interval of 8 cM. Haplotype analysis of those families with informative recombination events showed the disease locus to lie within the 12 cM region between markers D7S513 and D7S507.
We used FISH to physically map the gene using chromosome spreads from the SCS patient with t(2;7)(p23;p22) reported by Reid et al. and YAC clones from a contig spanning the critical interval. These studies confirmed that the breakpoint lies within this region, and in fact identified a microdeletion. Further studies will be targeted towards identification of candidate genes for Saethre-Chotzen syndrome. 19. Deletion of UBE3A in brothers with Angelman syndrome at the breakpoint with an inversion at 15q11.2. Science.gov (United States) Kuroda, Yukiko; Ohashi, Ikuko; Saito, Toshiyuki; Nagai, Jun-Ichi; Ida, Kazumi; Naruto, Takuya; Wada, Takahito; Kurosawa, Kenji 2014-11-01 Angelman syndrome (AS) is characterized by severe intellectual disability with ataxia, epilepsy, and behavioral uniqueness. The underlying molecular deficit is the absence of the maternal copy of the imprinted UBE3A gene due to maternal deletions, which is observed in 70-75% of cases, and can be detected using fluorescent in situ hybridization (FISH) of the UBE3A region. Only a few familial AS cases have been reported with a complete deletion of UBE3A. Here, we report on siblings with AS caused by a microdeletion of 15q11.2-q12 encompassing UBE3A at the breakpoint of an inversion at 15q11.2 and 15q26.1. Karyotyping revealed an inversion of 15q, and FISH revealed the deletion of the UBE3A region. Array comparative genomic hybridization (CGH) demonstrated a 467 kb deletion at 15q11.2-q12, encompassing only UBE3A, SNORD115, and PAR1, and a 53 kb deletion at 15q26.1, encompassing a part of SLCO3A1. Their mother had a normal karyotype, and array CGH detected no deletion of 15q11.2-q12, so we assumed gonadal mosaicism. This report describes a rare type of familial AS detected using the D15S10 FISH test. 20. Effectiveness of antibiotic combination therapy as evaluated by the Break-point Checkerboard Plate method for multidrug-resistant Pseudomonas aeruginosa in clinical use.
Science.gov (United States) Nakamura, Itaru; Yamaguchi, Tetsuo; Tsukimori, Ayaka; Sato, Akihiro; Fukushima, Shinji; Mizuno, Yasutaka; Matsumoto, Tetsuya 2014-04-01 Multidrug-resistant Pseudomonas aeruginosa (MDRP) strains are defined as having resistance to the following 3 groups of antibiotics: carbapenems, aminoglycosides, and fluoroquinolones. Antibiotic combinations have demonstrated increased activity in vitro compared with a single agent. As an in vitro method of determining the combination activity of antibiotics, the Break-point Checkerboard Plate (BC-plate) can be used routinely in clinical microbiology laboratories. We evaluated the effectiveness of the BC-plate for MDRP infections in clinical settings. We retrospectively selected cases of MDRP infection treated with combination therapy of antibiotics in Tokyo Medical University Hospital (1015 beds), Tokyo, Japan, from November 2010 to October 2012. A total of 28 MDRP strains were clinically isolated from 28 patients during the study period. This study design is a case series of MDRP infection. Six infections among the 28 patients were treated based on the results of the BC-plate assay, and the 6 strains tested positive for MBL (metallo-β-lactamase). One patient had pneumonia, 3 had urinary tract infections, 1 had vertebral osteomyelitis, and 1 had a nasal abscess. The combination of aztreonam with amikacin demonstrated the most frequently recognized in vitro effect (5 patients). Next, aztreonam with ciprofloxacin and piperacillin with amikacin revealed equivalent in vitro effects (3 patients each). The clinical cure rate was 83.3% (5/6 patients). Antibiotic combination therapy based on the results of the BC-plate assay may provide effective therapy against MDRP infection in clinical settings. 1. Comprehensive meiotic segregation analysis of a 4-breakpoint t(1;3;6) complex chromosome rearrangement using single sperm array comparative genomic hybridization and FISH.
Science.gov (United States) Hornak, Miroslav; Vozdova, Miluse; Musilova, Petra; Prinosilova, Petra; Oracova, Eva; Linkova, Vlasta; Vesela, Katerina; Rubes, Jiri 2014-10-01 Complex chromosomal rearrangements (CCR) represent rare structural chromosome abnormalities frequently associated with infertility. In this study, meiotic segregation in spermatozoa of an infertile normospermic carrier of a 4-breakpoint t(1;3;6) CCR was analysed. A newly developed array comparative genomic hybridization protocol was used, and all chromosomes in 50 single sperm cells were simultaneously examined. Three-colour FISH was used to analyse chromosome segregation in 1557 other single sperm cells. It was also used to measure an interchromosomal effect; sperm chromatin structure assay was used to measure chromatin integrity. A high-frequency of unbalanced spermatozoa (84%) was observed, mostly arising from the 3:3 symmetrical segregation mode. Array comparative genomic hybridization was used to detect additional aneuploidies in two out of 50 spermatozoa (4%) in chromosomes not involved in the complex chromosome rearrangement. Significantly increased rates of diploidy and XY disomy were found in the CCR carrier compared with the control group (P < 0.001). Defective condensation of sperm chromatin was also found in 22.7% of spermatozoa by sperm chromatin structure assay. The results indicate that the infertility in the man with CCR and normal spermatozoa was caused by a production of chromosomally unbalanced, XY disomic and diploid spermatozoa and spermatozoa with defective chromatin condensation. 2. The best of both worlds: Phylogenetic eigenvector regression and mapping Directory of Open Access Journals (Sweden) José Alexandre Felizola Diniz Filho 2015-09-01 Full Text Available Eigenfunction analyses have been widely used to model patterns of autocorrelation in time, space and phylogeny. In a phylogenetic context, Diniz-Filho et al. 
(1998) proposed what they called Phylogenetic Eigenvector Regression (PVR), in which pairwise phylogenetic distances among species are submitted to a Principal Coordinate Analysis, and eigenvectors are then used as explanatory variables in regression, correlation or ANOVAs. More recently, a new approach called Phylogenetic Eigenvector Mapping (PEM) was proposed, with the main advantage of explicitly incorporating a model-based warping of phylogenetic distance, in which an Ornstein-Uhlenbeck (O-U) process is fitted to the data before eigenvector extraction. Here we compared PVR and PEM with respect to estimated phylogenetic signal, correlated evolution under alternative evolutionary models, and phylogenetic imputation, using simulated data. Despite the similarity between the two approaches, PEM has a slightly higher prediction ability and is more general than the original PVR. Even so, in a conceptual sense, PEM may provide a technique in the best of both worlds, combining the flexibility of data-driven and empirical eigenfunction analyses and the sound insights provided by evolutionary models well known in comparative analyses. 3. Student interpretations of phylogenetic trees in an introductory biology course. Science.gov (United States) Dees, Jonathan; Momsen, Jennifer L; Niemi, Jarad; Montplaisir, Lisa 2014-01-01 Phylogenetic trees are widely used visual representations in the biological sciences and the most important visual representations in evolutionary biology. Therefore, phylogenetic trees have also become an important component of biology education. We sought to characterize reasoning used by introductory biology students in interpreting taxa relatedness on phylogenetic trees, to measure the prevalence of correct taxa-relatedness interpretations, and to determine how student reasoning and correctness change in response to instruction and over time.
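The PVR procedure summarized in item 2 above (pairwise phylogenetic distances → principal coordinates → eigenvectors used as regressors) can be sketched as follows. This is a minimal ordinary-least-squares illustration with illustrative names, not the authors' implementation:

```python
import numpy as np

def pcoa_eigenvectors(dist, k):
    """First k principal coordinates of a pairwise distance matrix.

    Classical PCoA: Gower double-centering of -0.5 * D^2, followed by an
    eigendecomposition; columns are scaled by sqrt(eigenvalue).
    """
    n = len(dist)
    a = -0.5 * dist ** 2
    c = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    g = c @ a @ c
    vals, vecs = np.linalg.eigh(g)
    order = np.argsort(vals)[::-1][:k]        # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

def pvr_fit(trait, dist, k):
    """Regress a species trait on the first k phylogenetic eigenvectors.

    Ordinary least squares with an intercept; returns (coefficients, R^2).
    """
    ev = pcoa_eigenvectors(dist, k)
    x = np.column_stack([np.ones(len(trait)), ev])
    beta, *_ = np.linalg.lstsq(x, trait, rcond=None)
    resid = trait - x @ beta
    tss = (trait - trait.mean()) @ (trait - trait.mean())
    return beta, 1.0 - (resid @ resid) / tss
```

PEM differs mainly in warping the distances with a fitted O-U parameter before the eigenvector-extraction step.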
Counting synapomorphies and nodes between taxa were the most common forms of incorrect reasoning, which presents a pedagogical dilemma concerning labeled synapomorphies on phylogenetic trees. Students also independently generated an alternative form of correct reasoning using monophyletic groups, the use of which decreased in popularity over time. Approximately half of all students were able to correctly interpret taxa relatedness on phylogenetic trees, and many memorized correct reasoning without understanding its application. Broad initial instruction that allowed students to generate inferences on their own contributed very little to phylogenetic tree understanding, while targeted instruction on evolutionary relationships improved understanding to some extent. Phylogenetic trees, which can directly affect student understanding of evolution, appear to offer introductory biology instructors a formidable pedagogical challenge. 4. Breakpoint sites disclose the role of the V(D)J recombination machinery in the formation of T-cell receptor (TCR) and non-TCR associated aberrations in T-cell acute lymphoblastic leukemia. Science.gov (United States) Larmonie, Nicole S D; Dik, Willem A; Meijerink, Jules P P; Homminga, Irene; van Dongen, Jacques J M; Langerak, Anton W 2013-08-01 Aberrant recombination between T-cell receptor genes and oncogenes gives rise to chromosomal translocations that are genetic hallmarks in several subsets of human T-cell acute lymphoblastic leukemias. The V(D)J recombination machinery has been shown to play a role in the formation of these T-cell receptor translocations. Other, non-T-cell receptor chromosomal aberrations, such as SIL-TAL1 deletions, have likewise been recognized as V(D)J recombination associated aberrations. 
Despite the postulated role of V(D)J recombination, the extent of the V(D)J recombination machinery involvement in the formation of T-cell receptor and non-T-cell receptor aberrations in T-cell acute lymphoblastic leukemia is still poorly understood. We performed a comprehensive in silico and ex vivo evaluation of 117 breakpoint sites from 22 different T-cell receptor translocation partners as well as 118 breakpoint sites from non-T-cell receptor chromosomal aberrations. Based on this extensive set of breakpoint data, we provide a comprehensive overview of T-cell receptor and oncogene involvement in T-ALL. Moreover, we assessed the role of the V(D)J recombination machinery in the formation of chromosomal aberrations, and propose an updated mechanistic classification of how the V(D)J recombination machinery contributes to the formation of T-cell receptor and non-T-cell receptor aberrations in human T-cell acute lymphoblastic leukemia. 5. Preliminary Study of Phylogenetic Relationship of Rice Field Chironomidae (Diptera) Inferred From DNA Sequences of Mitochondrial Cytochrome Oxidase Subunit I Directory of Open Access Journals (Sweden) Salman A. Al-Shami 2009-01-01 Full Text Available Problem statement: Chironomidae have been recorded in rice fields throughout the world, including in many countries such as India, Australia and the USA. Some studies provide keys to genus level and note the difficulty of identifying the larvae to species level. Chironomid research has been hindered by difficulties in specimen preparation, identification, morphology and literature. Systematics, phylogenetics and taxonomic studies of insects developed quickly with the emergence of molecular techniques. These techniques provide an effective tool toward more accurate identification of ambiguous chironomid species. Approach: Samples of chironomid larvae were collected from rice plots at Bukit Merah Agricultural Experimental Station (BMAES), Penang, Malaysia.
A 710 bp fragment of the mitochondrial gene Cytochrome Oxidase subunit I (COI) was amplified and sequenced. Results: Five species of Chironomidae were morphologically identified: three species of subfamily Chironominae (Chironomus kiiensis, Polypedilum trigonus, Tanytarsus formosanus) and two species of subfamily Tanypodinae (Clinotanypus sp. and Tanypus punctipennis). The phylogenetic relationship among these species was investigated. High sequence divergence was observed between two individuals of the presumed C. kiiensis, and it is suggested that more than one species may be present. However, the intraspecific sequence divergence was lower between the other species of the Tanypodinae subfamily. Interestingly, Tanytarsus formosanus showed a close phylogenetic relationship to Tanypodinae species, and this presumably reflects co-evolutionary traits of different subfamilies. Conclusion: The sequence of the mtDNA cytochrome oxidase subunit I gene has proven useful to investigate the phylogenetic relationship among the ambiguous species of chironomids. 6. Evaluating Phylogenetic Informativeness as a Predictor of Phylogenetic Signal for Metazoan, Fungal, and Mammalian Phylogenomic Data Sets Directory of Open Access Journals (Sweden) Francesc López-Giráldez 2013-01-01 Full Text Available Phylogenetic research is often stymied by selection of a marker that leads to poor phylogenetic resolution despite considerable cost and effort. Profiles of phylogenetic informativeness provide a quantitative measure for prioritizing gene sampling to resolve branching order in a particular epoch. To evaluate the utility of these profiles, we analyzed phylogenomic data sets from metazoans, fungi, and mammals, thus encompassing diverse time scales and taxonomic groups. We also evaluated the utility of profiles created based on simulated data sets. We found that genes selected via their informativeness dramatically outperformed haphazard sampling of markers.
Furthermore, our analyses demonstrate that the original phylogenetic informativeness method can be extended to trees with more than four taxa. Thus, although the method currently predicts phylogenetic signal without specifically accounting for the misleading effects of stochastic noise, it is robust to the effects of homoplasy. The phylogenetic informativeness rankings obtained will allow other researchers to select advantageous genes for future studies within these clades, maximizing return on effort and investment. Genes identified might also yield efficient experimental designs for phylogenetic inference for many sister clades and outgroup taxa that are closely related to the diverse groups of organisms analyzed. 7. Accurate measurement of unsteady state fluid temperature Science.gov (United States) Jaremkiewicz, Magdalena 2017-03-01 In this paper, two accurate methods for determining the transient fluid temperature are presented. Measurements were conducted for boiling water, since its temperature is known. Initially the thermometers are at ambient temperature; they are then immediately immersed in saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by the thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheath thermocouple located in its center. The temperature of the fluid was determined based on measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method.
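The first-order inertia correction mentioned in item 7 follows from the model tau * dT_ind/dt + T_ind = T_fluid. A minimal sketch, assuming the time constant tau is known from a step-response calibration (function and variable names are illustrative):

```python
import numpy as np

def correct_first_order(t, t_ind, tau):
    """Recover fluid temperature from a first-order thermometer reading.

    Model: tau * dT_ind/dt + T_ind = T_fluid, hence
           T_fluid = T_ind + tau * dT_ind/dt.
    The derivative is approximated with central differences, so noisy
    readings should be smoothed before applying the correction.
    """
    dT = np.gradient(t_ind, t)    # dT_ind/dt on a (possibly uneven) grid
    return t_ind + tau * dT
```

For a sudden immersion into water at temperature T_f, the indicated reading follows T_f * (1 - exp(-t/tau)); the corrected signal recovers T_f almost immediately instead of after several time constants.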
Measurements of the transient temperature of the air flowing through the wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results than measurements using industrial thermometers in conjunction with simple temperature correction based on a first- or second-order inertial thermometer model. By comparing the results, it was demonstrated that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurements of fast-changing fluid temperature are possible due to the low-inertia thermometer and the fast space marching method applied for solving the inverse heat conduction problem. 8. Niche Genetic Algorithm with Accurate Optimization Performance Institute of Scientific and Technical Information of China (English) LIU Jian-hua; YAN De-kun 2005-01-01 Based on a crowding mechanism, a novel niche genetic algorithm was proposed that records the evolutionary direction dynamically during evolution. After evolution, the solutions' precision can be greatly improved by local searching along the recorded direction. Simulation shows that this algorithm can not only keep population diversity but also find accurate solutions. Although this method takes more time than the standard GA, it is worth applying in cases that demand high solution precision. 9. Accurate estimation of indoor travel times DEFF Research Database (Denmark) Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan 2014-01-01 the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihood---both for routes traveled as well as for sub-routes thereof. In...... are collected within the building complex.
Results indicate that InTraTime is superior with respect to metrics such as deployment cost, maintenance cost and estimation accuracy, yielding an average deviation from actual travel times of 11.7 %. This accuracy was achieved despite using a minimal-effort setup... 10. Accurate diagnosis is essential for amebiasis Institute of Scientific and Technical Information of China (English) 2004-01-01 Amebiasis is one of the three most common causes of death from parasitic disease, and Entamoeba histolytica is the most widely distributed parasite in the world. In particular, Entamoeba histolytica infection in developing countries is a significant health problem in amebiasis-endemic areas, with a substantial impact on infant mortality [1]. In recent years a worldwide increase in the number of patients with amebiasis has refocused attention on this important infection. On the other hand, improvements in the quality of parasitological methods and the widespread use of accurate techniques have improved our knowledge about the disease. 11. The first accurate description of an aurora Science.gov (United States) Schröder, Wilfried 2006-12-01 As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting look into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350. 12. New law requires 'medically accurate' lesson plans. Science.gov (United States) 1999-09-17 The California Legislature has passed a bill requiring that all textbooks and materials used to teach about AIDS be medically accurate and objective.
Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material. 13. Universality: Accurate Checks in Dyson's Hierarchical Model Science.gov (United States) Godina, J. J.; Meurice, Y.; Oktay, M. B. 2003-06-01 In this talk we present high-accuracy calculations of the susceptibility near βc for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10⁻⁸ and Δ = 0.4259469 ± 10⁻⁷, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer. 14. Markov invariants, plethysms, and phylogenetics (the long version) CERN Document Server Sumner, J G; Jermiin, L S; Jarvis, P D 2008-01-01 We explore model-based techniques of phylogenetic tree inference using Markov invariants. Markov invariants are group invariant polynomials and are distinct from what is known in the literature as phylogenetic invariants, although we establish a commonality in some special cases. We show that the simplest Markov invariant forms the foundation of the Log-Det distance measure. We take as our primary tool group representation theory, and show that it provides a general framework for analysing Markov processes on trees.
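The Log-Det distance just mentioned has a compact closed form, and a short sketch may help make it concrete. The implementation below follows the standard Lockhart et al. (1994) formulation rather than anything specific to this paper; the example sequences and the pseudocount are invented for illustration:

```python
# Sketch of the Log-Det (paralinear) distance, which the abstract above ties to
# the simplest Markov invariant. Formula follows Lockhart et al. (1994);
# the sequences and the pseudocount are invented for illustration.
import math

NUC = "ACGT"

def _det4(m):
    """Determinant of a 4x4 matrix by Gaussian elimination with partial pivoting."""
    m = [row[:] for row in m]
    d = 1.0
    for i in range(4):
        p = max(range(i, 4), key=lambda r: abs(m[r][i]))
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, 4):
            f = m[r][i] / m[i][i]
            for c in range(i, 4):
                m[r][c] -= f * m[i][c]
    return d

def logdet_distance(seq1, seq2, pseudocount=0.1):
    """d = -1/4 [ln det F - 1/2 (sum of ln row marginals + sum of ln column marginals)]."""
    F = [[pseudocount] * 4 for _ in range(4)]    # pair-count (divergence) matrix
    for a, b in zip(seq1, seq2):
        F[NUC.index(a)][NUC.index(b)] += 1
    total = sum(sum(row) for row in F)
    F = [[x / total for x in row] for row in F]  # normalise to joint frequencies
    rows = [sum(row) for row in F]               # marginal base frequencies
    cols = [sum(F[i][j] for i in range(4)) for j in range(4)]
    return -0.25 * (math.log(_det4(F))
                    - 0.5 * (sum(map(math.log, rows)) + sum(map(math.log, cols))))

d = logdet_distance("ACGTACGTAC", "ACGTACGAAC")  # one mismatch out of ten sites
print(f"Log-Det distance = {d:.3f}")
```

For identical sequences the distance approaches zero (up to pseudocount bias) and it grows as substitutions accumulate; being determinant-based, it stays consistent under non-stationary base composition, which is the property the abstract connects to Markov invariants.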
From this algebraic perspective, the inherent symmetries of these processes become apparent, and focusing on plethysms, we are able to define Markov invariants and give existence proofs. We give an explicit technique for constructing the invariants, valid for any number of character states and taxa. For phylogenetic trees with three and four leaves, we demonstrate that the corresponding Markov invariants can be fruitfully exploited in applied phylogenetic studies. 15. A common tendency for phylogenetic overdispersion in mammalian assemblages. Science.gov (United States) Cooper, Natalie; Rodríguez, Jesús; Purvis, Andy 2008-09-07 Competition has long been proposed as an important force in structuring mammalian communities. Although early work recognized that competition has a phylogenetic dimension, only with recent increases in the availability of phylogenies have true phylogenetic investigations of mammalian community structure become possible. We test whether the phylogenetic structure of 142 assemblages from three mammalian clades (New World monkeys, North American ground squirrels and Australasian possums) shows the imprint of competition. The full set of assemblages display a highly significant tendency for members to be more distantly related than expected by chance (phylogenetic overdispersion). The overdispersion is also significant within two of the clades (monkeys and squirrels) separately. This is the first demonstration of widespread overdispersion in mammal assemblages and implies an important role for either competition between close relatives where traits are conserved, habitat filtering where distant relatives share convergent traits, or both. 16. Phylogenetic and functional diversity in large carnivore assemblages. 
Science.gov (United States) Dalerum, F 2013-06-07 Large terrestrial carnivores are important ecological components and prominent flagship species, but are often extinction-prone owing to a combination of biological traits and high levels of human persecution. This study combines phylogenetic and functional diversity evaluations of global and continental large carnivore assemblages to provide a framework for conservation prioritization both between and within assemblages. Species-rich assemblages of large carnivores simultaneously had high phylogenetic and functional diversity, but species contributions to phylogenetic and functional diversity components were not positively correlated. The results further provide ecological justification for the largest carnivore species as a focus for conservation action, and suggest that range contraction is a likely cause of diminishing carnivore ecosystem function. This study highlights that preserving species-rich carnivore assemblages will capture both high phylogenetic and functional diversity, but that prioritizing species within assemblages will involve trade-offs between optimizing contemporary ecosystem function versus the evolutionary potential for future ecosystem performance. 17. Phylogenetic relationships of Salmonella based on rRNA sequences DEFF Research Database (Denmark) Christensen, H.; Nordentoft, Steen; Olsen, J.E. 1998-01-01 To establish the phylogenetic relationships between the subspecies of Salmonella enterica (official name Salmonella choleraesuis), Salmonella bongori and related members of Enterobacteriaceae, sequence comparison of rRNA was performed by maximum-likelihood analysis. The two Salmonella species were... 18. How Accurately can we Calculate Thermal Systems?
Energy Technology Data Exchange (ETDEWEB) Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A 2004-04-20 I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as K_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore, rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully this will eventually lead to improvements in both our codes and the thermal scattering models that they use. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering and that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium-fueled thermal system, i.e., our typical thermal reactors. 19. Accurate pattern registration for integrated circuit tomography Energy Technology Data Exchange (ETDEWEB) Levine, Zachary H.; Grantham, Steven; Neogi, Suneeta; Frigo, Sean P.; McNulty, Ian; Retsch, Cornelia C.; Wang, Yuxin; Lucatorto, Thomas B.
2001-07-15 As part of an effort to develop high resolution microtomography for engineered structures, a two-level copper integrated circuit interconnect was imaged using 1.83 keV x rays at 14 angles employing a full-field Fresnel zone plate microscope. A major requirement for high resolution microtomography is the accurate registration of the reference axes in each of the many views needed for a reconstruction. A reconstruction with 100 nm resolution would require registration accuracy of 30 nm or better. This work demonstrates that even images that have strong interference fringes can be used to obtain accurate fiducials through the use of Radon transforms. We show that we are able to locate the coordinates of the rectilinear circuit patterns to 28 nm. The procedure is validated by agreement between an x-ray parallax measurement of 1.41±0.17 µm and a measurement of 1.58±0.08 µm from a scanning electron microscope image of a cross section. 20. Accurate determination of characteristic relative permeability curves Science.gov (United States) Krause, Michael H.; Benson, Sally M. 2015-09-01 A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory-measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However, this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia.
Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided the flow rate is sufficiently high and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models. 1. Accurate pose estimation for forensic identification Science.gov (United States) Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk 2010-04-01 In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far-field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric.
Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate and robust to lighting changes and image degradation. 2. Accurate taxonomic assignment of short pyrosequencing reads. Science.gov (United States) Clemente, José C; Jansson, Jesper; Valiente, Gabriel 2010-01-01 3. Phylogenetic systematics of the Eucarida (Crustacea Malacostraca) Directory of Open Access Journals (Sweden) Martin L. Christoffersen 1988-01-01 Full Text Available Ninety-four morphological characters belonging to particular ontogenetic sequences within the Eucarida were used to produce a hierarchy of 128 evolutionary novelties (73 synapomorphies and 55 homoplasies) and to delimit 15 monophyletic taxa. The following combined Recent-fossil sequenced phylogenetic classification is proposed: Superorder Eucarida; Order Euphausiacea; Family Bentheuphausiidae; Family Euphausiidae; Order Amphionidacea; Order Decapoda; Suborder Penaeidea; Suborder Pleocyemata; Infraorder Stenopodidea; Infraorder Reptantia; Infraorder Procarididea; Infraorder Caridea. The position of the Amphionidacea as the sister-group of the Decapoda is corroborated, while the Reptantia are proposed to be the sister-group of the Procarididea + Caridea for the first time. The fossil groups Uncina Quenstedt, 1850, and Palaeopalaemon Whitfield, 1880, are included as incertae sedis taxa within the Reptantia, which establishes the minimum ages of all the higher taxa of Eucarida except the Procarididea and Caridea in the Upper Devonian. The fossil group "Pygocephalomorpha" Beurlen, 1930, of uncertain status as a monophyletic taxon, is provisionally considered to belong to the "stem-group" of the Reptantia.
Among the more important characters hypothesized to have evolved in the stem-lineage of each eucaridan monophyletic taxon are: (1) in Eucarida, attachment of the post-zoeal carapace to all thoracic somites; (2) in Euphausiacea, reduction of the endopod of the eighth thoracopod; (3) in Bentheuphausiidae, vestigial compound eyes, associated with abyssal life; (4) in Euphausiidae, loss of the endopod of the eighth thoracopod and development of specialized luminescent organs; (5) in Amphionidacea + Decapoda, reduced ambulatory ability of the thoracic exopods, a scaphognathite, one pair of maxillipedes, a pleurobranch gill series and a carapace covering the gills, associated with loss of pelagic life; (6) in Amphionidacea, a unique thoracic brood pouch in females formed by the inflated carapace and 4. Ecological and phylogenetic influences on maxillary dentition in snakes Directory of Open Access Journals (Sweden) Kate Jackson 2010-12-01 Full Text Available The maxillary dentition of snakes was used as a system with which to investigate the relative importance of the interacting forces of ecological selective pressures and phylogenetic constraints in determining morphology. The maxillary morphology of three groups of snakes having different diets, with each group comprising two distinct lineages — boids and colubroids — was examined. Our results suggest that dietary selective pressures may be more significant than phylogenetic history in shaping maxillary morphology. 5. Phylogenetic community ecology of soil biodiversity using mitochondrial metagenomics. Science.gov (United States) Andújar, Carmelo; Arribas, Paula; Ruzicka, Filip; Crampton-Platt, Alex; Timmermans, Martijn J T N; Vogler, Alfried P 2015-07-01 High-throughput DNA methods hold great promise for the study of taxonomically intractable mesofauna of the soil. Here, we assess species diversity and community structure in a phylogenetic framework, by sequencing total DNA from bulk specimen samples and assembly of mitochondrial genomes.
The combination of mitochondrial metagenomics and DNA barcode sequencing of 1494 specimens in 69 soil samples from three geographic regions in southern Iberia revealed >300 species of soil Coleoptera (beetles) from a broad spectrum of phylogenetic lineages. A set of 214 mitochondrial sequences longer than 3000 bp was generated and used to estimate a well-supported phylogenetic tree of the order Coleoptera. Shorter sequences, including cox1 barcodes, were placed on this mitogenomic tree. Raw Illumina reads were mapped against all available sequences to test for species present in local samples. This approach simultaneously established the species richness, phylogenetic composition and community turnover at species and phylogenetic levels. We find a strong signature of vertical structuring in soil fauna that shows high local community differentiation between deep soil and superficial horizons at phylogenetic levels. Within the two vertical layers, turnover among regions was primarily at the tip (species) level and was stronger in the deep soil than in leaf litter communities, pointing to layer-mediated drivers determining species diversification, spatial structure and evolutionary assembly of soil communities. This integrated phylogenetic framework opens the application of phylogenetic community ecology to the mesofauna of the soil, among the most diverse and least well-understood ecosystems, and will propel both theoretical and applied soil science. 6. Molecular Phylogenetic: Organism Taxonomy Method Based on Evolution History Directory of Open Access Journals (Sweden) N.L.P Indi Dharmayanti 2011-03-01 Full Text Available Phylogenetics is described as the taxonomic classification of an organism based on its evolutionary history, namely its phylogeny, and as a part of systematics, whose objective is to determine the phylogeny of an organism according to its characteristics. Phylogenetic analysis of amino acid and protein sequences has become an important area of sequence analysis.
Phylogenetic analysis can be used to follow the rapid change of a species such as a virus. The phylogenetic evolutionary tree is a two-dimensional graphic that shows relationships among organisms, or particularly among their gene sequences. The separated sequences are referred to as taxa (singular: taxon), defined as phylogenetically distinct units on the tree. The tree consists of outer branches, or leaves, representing the taxa, while the nodes and branches represent relationships among taxa. When the nucleotide sequences from two different organisms are similar, they are inferred to be descended from a common ancestor. Three methods are commonly used in phylogenetics, namely (1) maximum parsimony, (2) distance, and (3) maximum likelihood. These methods are generally applied to construct the evolutionary tree, or the best tree, for determining sequence variation within a group. Each method is typically suited to different analyses and data. 7. Efficient FPT Algorithms for (Strict) Compatibility of Unrooted Phylogenetic Trees. Science.gov (United States) Baste, Julien; Paul, Christophe; Sau, Ignasi; Scornavacca, Celine 2017-02-28 In phylogenetics, a central problem is to infer the evolutionary relationships between a set of species X; these relationships are often depicted via a phylogenetic tree - a tree having its leaves labeled bijectively by elements of X and without degree-2 nodes - called the "species tree." One common approach for reconstructing a species tree consists in first constructing several phylogenetic trees from primary data (e.g., DNA sequences originating from some species in X), and then constructing a single phylogenetic tree maximizing the "concordance" with the input trees. The obtained tree is our estimation of the species tree and, when the input trees are defined on overlapping - but not identical - sets of labels, is called a "supertree."
In this paper, we focus on two problems that are central when combining phylogenetic trees into a supertree: the compatibility and the strict compatibility problems for unrooted phylogenetic trees. These problems are strongly related, respectively, to the notions of "containing as a minor" and "containing as a topological minor" in the graph community. Both problems are known to be fixed-parameter tractable in the number of input trees k, by using their expressibility in monadic second-order logic and a reduction to graphs of bounded treewidth. Motivated by the fact that the dependency on k of these algorithms is prohibitively large, we give the first explicit dynamic programming algorithms for solving these problems, both running in time [Formula: see text], where n is the total size of the input. 8. Monte Carlo estimation of total variation distance of Markov chains on large spaces, with application to phylogenetics. Science.gov (United States) 2013-03-26 Markov chains are widely used for modeling in many areas of molecular biology and genetics. As the complexity of such models advances, it becomes increasingly important to assess the rate at which a Markov chain converges to its stationary distribution in order to carry out accurate inference. A common measure of convergence to the stationary distribution is the total variation distance, but this measure can be difficult to compute when the state space of the chain is large. We propose a Monte Carlo method to estimate the total variation distance that can be applied in this situation, and we demonstrate how the method can be efficiently implemented by taking advantage of GPU computing techniques. We apply the method to two Markov chains on the space of phylogenetic trees, and discuss the implications of our findings for the development of algorithms for phylogenetic inference. 9. Identification of subtelomeric genomic imbalances and breakpoint mapping with quantitative PCR in 296 individuals with congenital defects and/or mental retardation Directory of Open Access Journals (Sweden) Brockmann Knut 2009-03-01 Full Text Available Abstract Background Submicroscopic imbalances in the subtelomeric regions of the chromosomes are considered to play an important role in the aetiology of mental retardation (MR). The aim of the study was to evaluate a quantitative PCR (qPCR) protocol established by Boehm et al. (2004) in the clinical routine of subtelomeric testing. Results 296 patients with MR and a normal karyotype (500–550 bands) were screened for subtelomeric imbalances by using qPCR combined with SYBR green detection. In total, 17 patients (5.8%) with 20 subtelomeric imbalances were identified. Six of the aberrations (2%) were classified as causative for the symptoms, because they occurred either de novo in the patients (5 cases) or were detected in both the patient and an equally affected parent (1 case). The extent of the deletions ranged from 1.8 to approximately 10 Mb; duplications were 1.8 to approximately 5 Mb in size. In 6 patients, the copy number variations (CNVs) were rated as benign polymorphisms, and the clinical relevance of these CNVs remains unclear in 5 patients (1.7%). Therefore, the overall frequency of clinically relevant imbalances ranges between 2% and 3.7% in our cohort. Conclusion This study illustrates that the qPCR/SYBR green technique represents a rapid and versatile method for the detection of subtelomeric imbalances, with the option to map the breakpoints. Thus, this technique is highly suitable for genotype/phenotype studies in patients with MR/developmental delay and/or congenital defects. 10. Four chromosomal breakpoints and four new probes mark out a 10-cM region encompassing the fragile-X locus (FRAXA).
Science.gov (United States) Rousseau, F; Vincent, A; Rivella, S; Heitz, D; Triboli, C; Maestrini, E; Warren, S T; Suthers, G K; Goodfellow, P; Mandel, J L 1991-01-01 We report the validation and use of a cell hybrid panel which allowed rapid physical localization of new DNA probes in the vicinity of the fragile-X locus (FRAXA). Seven regions are defined by this panel, two of which lie between DXS369 and DXS296, until now the closest genetic markers that flank FRAXA. Of those two interesting regions, one is just distal to DXS369 and defined by probe 2-71 (DXS476), which is not polymorphic. The next one contains probes St677 (DXS463) and 2-34 (DXS477), which are within 130 kb and both detect TaqI RFLPs. The combined informativeness of these two probes is 30%. We cloned from an irradiation-reduced hybrid line another new polymorphic probe, Do33 (DXS465; 42% heterozygosity). This probe maps to the DXS296 region, proximal to a chromosomal breakpoint that corresponds to the Hunter syndrome locus (IDS). The physical order is thus Cen-DXS369-DXS476-(DXS463,DXS477)-(DXS296, DXS465)-IDS-DXS304-tel. We performed a linkage analysis for five of these markers in both the Centre d'Etude du Polymorphisme Humain families and in a large set of fragile-X families. This establishes that DXS296 is distal to FRAXA. The relative position of DXS463 and DXS477 with respect to FRAXA remains uncertain, but our results place them genetically halfway between DXS369 and DXS304. Thus the DXS463-DXS477 cluster presently defines either the closest proximal or the closest distal polymorphic marker with respect to FRAXA. The three new polymorphic probes described here have a combined heterozygosity of 60% and represent a major improvement for genetic analysis of fragile-X families, in particular for diagnostic applications. 11.
Antifungal susceptibilities of bloodstream isolates of Candida species from nine hospitals in Korea: application of new antifungal breakpoints and relationship to antifungal usage. Directory of Open Access Journals (Sweden) Eun Jeong Won Full Text Available We applied the new clinical breakpoints (CBPs) of the Clinical and Laboratory Standards Institute (CLSI) in a multicenter study to determine the antifungal susceptibility of bloodstream infection (BSI) isolates of Candida species in Korea, and determined the relationship between the frequency of antifungal-resistant Candida BSI isolates and antifungal use at hospitals. Four hundred and fifty BSI isolates of Candida species were collected over a 1-year period in 2011 from nine hospitals. The susceptibilities of the isolates to four antifungal agents were determined using the CLSI M27 broth microdilution method. By applying the species-specific CBPs, non-susceptibility to fluconazole was found in 16.4% (70/428) of isolates, comprising 2.6% resistant and 13.8% susceptible dose-dependent isolates. However, non-susceptibility to voriconazole, caspofungin, or micafungin was found in 0% (0/370), 0% (0/437), or 0.5% (2/437) of the Candida BSI isolates, respectively. Of the 450 isolates, 72 (16.0%) showed decreased susceptibility to fluconazole [minimum inhibitory concentration (MIC) ≥4 μg/ml]. The total usage of systemic antifungals varied considerably among the hospitals, ranging from 190.0 to 7.7 defined daily doses per 1,000 patient-days, and fluconazole was the most commonly prescribed agent (46.3%). By Spearman's correlation analysis, fluconazole usage did not show a significant correlation with the percentage of fluconazole-resistant isolates at hospitals. However, fluconazole usage was significantly correlated with the percentage of fluconazole non-susceptible isolates (r = 0.733; P = 0.025) and with the percentage of isolates with decreased susceptibility to fluconazole (MIC ≥4 μg/ml) (r = 0.700; P = 0.036) at hospitals.
Our work represents the first South Korean multicenter study demonstrating an association between antifungal use and antifungal resistance among BSI isolates of Candida at hospitals, using the new CBPs of the CLSI. 12. Isavuconazole and nine comparator antifungal susceptibility profiles for common and uncommon Candida species collected in 2012: application of new CLSI clinical breakpoints and epidemiological cutoff values. Science.gov (United States) Castanheira, Mariana; Messer, Shawn A; Rhomberg, Paul R; Dietrich, Rachel R; Jones, Ronald N; Pfaller, Michael A 2014-08-01 The in vitro activity of isavuconazole and nine antifungal comparator agents was assessed using reference broth microdilution methods against 1,421 isolates of common and uncommon Candida species from a 2012 global survey. Isolates were identified using CHROMagar, biochemical methods and sequencing of ITS and/or 28S regions. Candida spp. were classified as either susceptible or resistant and as wild type (WT) or non-WT using CLSI clinical breakpoints or epidemiological cutoff values, respectively, for the antifungal agents. Isolates included 1,421 organisms from 21 different species of Candida. Among Candida spp., resistance to all 10 tested antifungal agents was low (0.0-7.9 %). The vast majority of each species of Candida, with the exception of Candida glabrata, Candida krusei, and Candida guilliermondii (modal MICs of 0.5 µg/ml), were inhibited by ≤0.12 µg/ml of isavuconazole (99.0 %; range 94.3 % [Candida tropicalis] to 100.0 % [Candida lusitaniae and Candida dubliniensis]). C. glabrata, C. krusei, and C. guilliermondii were largely inhibited by ≤1 µg/ml of isavuconazole (89.7, 96.9 and 92.8 %, respectively). Decreased susceptibility to isavuconazole was most prominent with C.
glabrata, where the modal MIC for isavuconazole was 0.5 µg/ml for those strains that were SDD to fluconazole or WT to voriconazole, and was 4 µg/ml for those that were either resistant or non-WT to fluconazole or voriconazole, respectively. In conclusion, these data document the activity of isavuconazole and the generally low resistance levels to the available antifungal agents in a large, contemporary (2012), global collection of molecularly characterized species of Candida. 13. Toward Accurate and Quantitative Comparative Metagenomics Science.gov (United States) Nayfach, Stephen; Pollard, Katherine S. 2016-01-01 Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341 14. Apparatus for accurately measuring high temperatures Science.gov (United States) Smith, D.D. The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C.
The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

15. Accurate renormalization group analyses in neutrino sector
Energy Technology Data Exchange (ETDEWEB)
Haba, Naoyuki [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Kaneta, Kunio [Kavli IPMU (WPI), The University of Tokyo, Kashiwa, Chiba 277-8568 (Japan); Takahashi, Ryo [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Yamaguchi, Yuya [Department of Physics, Faculty of Science, Hokkaido University, Sapporo 060-0810 (Japan)
2014-08-15
We investigate accurate renormalization group analyses in the neutrino sector between the ν-oscillation and seesaw energy scales. We consider the decoupling effects of the top quark and the Higgs boson on the renormalization group equations of the light neutrino mass matrix. Since the decoupling effects are given at the standard model scale and are independent of high energy physics, our method can be applied to essentially any model beyond the standard model. We find that the decoupling effects of the Higgs boson are negligible, while those of the top quark are not. In particular, the decoupling effects of the top quark affect the neutrino mass eigenvalues, which are important for analyzing predictions such as mass squared differences and neutrinoless double beta decay in an underlying theory existing at a high energy scale.

16.
Accurate Weather Forecasting for Radio Astronomy
Science.gov (United States)
2010-01-01
The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing where pointing is critical. Thus, to maximize productivity the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 nepers, which is sufficient for forecasting and accurate calibration.
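The layered opacity-and-radiative-transfer step described above can be sketched in a few lines. This is a toy isothermal model with invented per-layer absorption coefficients, not output of the Liebe model used in the abstract:

```python
# Toy sketch of summing per-layer absorption into a zenith opacity and turning
# it into an atmospheric radio brightness. All numbers are invented placeholders.
import math

def total_opacity(alphas, dz):
    """Zenith opacity (nepers): sum of per-layer absorption coefficient * thickness."""
    return sum(a * dz for a in alphas)

def sky_brightness(tau, t_atm=260.0):
    """Brightness (K) of an isothermal atmosphere at temperature t_atm (K)."""
    return t_atm * (1.0 - math.exp(-tau))

alphas = [0.02, 0.01, 0.005, 0.001]  # nepers/km for four layers (made up)
tau = total_opacity(alphas, dz=1.0)  # 1 km thick layers -> tau ~ 0.036 nepers
print(tau, sky_brightness(tau))      # small tau -> a few K added to Tsys
```

A real forecast pipeline would evaluate the absorption per layer from forecasted temperature, pressure, and humidity, but the bookkeeping is exactly this sum followed by radiative transfer.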
Reliability is high out to 2 days and degrades slowly for longer-range forecasts.

17. The complete chloroplast genome sequence of Ampelopsis: gene organization, comparative analysis and phylogenetic relationships to other angiosperms
Directory of Open Access Journals (Sweden)
Gurusamy Raman
2016-03-01
Full Text Available Ampelopsis brevipedunculata is an economically important plant that belongs to the Vitaceae family of angiosperms. The phylogenetic placement of Vitaceae is still unresolved. Recent phylogenetic studies suggested that it should be placed in various alternative families including Caryophyllaceae, Asteraceae, Saxifragaceae, Dilleniaceae, or with the rest of the rosid families. However, these analyses provided weakly supported results because they were based on only one of several genes. Accordingly, complete chloroplast genome sequences are required to resolve the phylogenetic relationships among angiosperms. Recent phylogenetic analyses based on the complete chloroplast genome sequence suggested strong support for the position of Vitaceae as the earliest diverging lineage of rosids and placed it as a sister to the remaining rosids. These studies also revealed relationships among several major lineages of angiosperms; however, they highlighted the significance of taxon sampling for obtaining accurate phylogenies. In the present study, we sequenced the complete chloroplast genome of A. brevipedunculata and used these data to assess the relationships among 32 angiosperms, including 18 taxa of rosids. The Ampelopsis chloroplast genome is 161,090 bp in length, and includes a pair of inverted repeats of 26,394 bp that are separated by small and large single copy regions of 19,036 bp and 89,266 bp, respectively. The gene content and order of Ampelopsis is identical to many other unrearranged angiosperm chloroplast genomes, including Vitis and tobacco.
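The quadripartite plastome structure reported above can be checked arithmetically: the large and small single-copy regions plus two copies of the inverted repeat should add up to the total genome length.

```python
# Consistency check on the Ampelopsis plastome figures quoted in the abstract:
# LSC + SSC + 2 * IR should equal the reported total length.
ir, ssc, lsc = 26394, 19036, 89266   # bp, from the abstract
genome_length = lsc + ssc + 2 * ir
print(genome_length)  # 161090, matching the reported 161,090 bp
```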
A phylogenetic tree constructed based on 70 protein-coding genes of 33 angiosperms showed that both Saxifragales and Vitaceae diverged from the rosid clade and formed two clades with 100% bootstrap value. The position of the Vitaceae is sister to Saxifragales, and both are the basal and earliest diverging lineages. Moreover, Saxifragales forms a sister clade to Vitaceae of rosids. Overall, the results of

18. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system
Science.gov (United States)
Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi
2015-02-01
With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance the capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the delayed case: with accurate information travelers prefer the best-condition route, while delayed information reflects past rather than current traffic conditions. Travelers then make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid this negative effect, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality is helpful to improve efficiency in terms of capacity, oscillation, and the gap from system equilibrium.

19. The probability of a gene tree topology within a phylogenetic network with applications to hybridization detection.
Directory of Open Access Journals (Sweden)
Yun Yu
Full Text Available Gene tree topologies have proven a powerful data source for various tasks, including species tree inference and species delimitation.
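The boundedly rational route-choice rule in the two-route abstract above (entry 18) can be sketched as a toy simulation. The travel times and threshold here are invented; this only illustrates the thresholded decision rule, not the paper's traffic model:

```python
# Toy sketch of boundedly rational route choice with threshold BR:
# travelers are indifferent when the (possibly delayed) feedback on the two
# routes differs by less than BR, otherwise they pick the faster-looking route.
import random

def choose_route(t1, t2, br):
    """Return 0 or 1: the chosen route, given feedback times t1, t2."""
    if abs(t1 - t2) < br:
        return random.choice((0, 1))   # indifferent: pick either route
    return 0 if t1 < t2 else 1

random.seed(0)
# Large threshold: choices split roughly evenly despite feedback favoring route 0.
picks = [choose_route(10.0, 10.5, br=2.0) for _ in range(10000)]
share = sum(picks) / len(picks)
print(round(share, 2))   # close to an even 0.5 split
# Zero threshold (fully "rational" use of feedback): everyone floods route 0.
picks = [choose_route(10.0, 10.5, br=0.0) for _ in range(10000)]
print(sum(picks))  # 0
```

The thresholded rule damps the herding that makes accurate-but-delayed feedback counterproductive.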
Consequently, methods for computing probabilities of gene trees within species trees have been developed and widely used in probabilistic inference frameworks. All these methods assume an underlying multispecies coalescent model. However, when reticulate evolutionary events such as hybridization occur, these methods are inadequate, as they do not account for such events. Methods that account for both hybridization and deep coalescence in computing the probability of a gene tree topology currently exist for very limited cases. However, no such methods exist for general cases, owing primarily to the fact that it is currently unknown how to compute the probability of a gene tree topology within the branches of a phylogenetic network. Here we present a novel method for computing the probability of gene tree topologies on phylogenetic networks and demonstrate its application to the inference of hybridization in the presence of incomplete lineage sorting. We reanalyze a Saccharomyces species data set for which multiple analyses had converged on a species tree candidate. Using our method, though, we show that an evolutionary hypothesis involving hybridization in this group has better support than one of strict divergence. A similar reanalysis on a group of three Drosophila species shows that the data are consistent with hybridization. Further, using extensive simulation studies, we demonstrate the power of gene tree topologies for obtaining accurate estimates of branch lengths and hybridization probabilities of a given phylogenetic network. Finally, we discuss identifiability issues with detecting hybridization, particularly in cases that involve extinction or incomplete sampling of taxa.

20. Phylogenetic ANOVA: The Expression Variance and Evolution Model for Quantitative Trait Evolution.
Science.gov (United States)
Rohlfs, Rori V; Nielsen, Rasmus
2015-09-01
A number of methods have been developed for modeling the evolution of a quantitative trait on a phylogeny.
These methods have received renewed interest in the context of genome-wide studies of gene expression, in which the expression levels of many genes can be modeled as quantitative traits. We here develop a new method for joint analyses of quantitative traits within and between species, the Expression Variance and Evolution (EVE) model. The model parameterizes the ratio of population to evolutionary expression variance, facilitating a wide variety of analyses, including a test for lineage-specific shifts in expression level, and a phylogenetic ANOVA that can detect genes with increased or decreased ratios of expression divergence to diversity, analogous to the famous Hudson Kreitman Aguadé (HKA) test used to detect selection at the DNA level. We use simulations to explore the properties of these tests under a variety of circumstances and show that the phylogenetic ANOVA is more accurate than the standard ANOVA (no accounting for phylogeny) sometimes used in transcriptomics. We then apply the EVE model to a mammalian phylogeny of 15 species typed for expression levels in liver tissue. We identify genes with high expression divergence between species as candidates for expression level adaptation, and genes with high expression diversity within species as candidates for expression level conservation and/or plasticity. Using the test for lineage-specific expression shifts, we identify several candidate genes for expression level adaptation on the catarrhine and human lineages, including genes putatively related to dietary changes in humans. We compare these results to those reported previously using a model which ignores expression variance within species, uncovering important differences in performance. We demonstrate the necessity for a phylogenetic model in comparative expression studies and show the utility of the EVE model to detect expression divergence.

1.
Uprooting phylogenetic uncertainty in coalescent species delimitation: A meta-analysis of empirical studies
Institute of Scientific and Technical Information of China (English)
Itzue W CAVIEDES-SOLIS; Nassima M BOUZID; Barbara L BANBURY; Adam D LEACHÉ
2015-01-01
Phylogenetic and phylogeographic studies rely on the accurate quantification of biodiversity. In recent studies of taxonomically ambiguous groups, species boundaries are often determined based on multi-locus sequence data. Bayesian Phylogenetics and Phylogeography (BPP) is a coalescent-based method frequently used to delimit species; however, empirical studies suggest that the requirement of a user-specified guide tree biases the range of possible outcomes. We evaluate fifteen multi-locus datasets using the most recent iteration of BPP, which eliminates the need for a user-specified guide tree and reconstructs the species tree in synchrony with species delimitation (= unguided species delimitation). We found that the number of species recovered with guided versus unguided species delimitation was the same except for two cases, and that posterior probabilities were generally lower for the unguided analyses as a result of searching across species trees in addition to species delimitation models. The guide trees used in previous studies were often discordant with the species tree topologies estimated by BPP. We also compared species trees estimated using BPP and *BEAST and found that when the topologies are the same, BPP tends to give higher posterior probabilities [Current Zoology 61 (5): 866–873, 2015].

2. Utility of ITS sequence data for phylogenetic reconstruction of Italian Quercus spp.
Science.gov (United States)
Bellarosa, Rosanna; Simeone, Marco C; Papini, Alessio; Schirone, Bartolomeo
2005-02-01
Nuclear ribosomal DNA sequences encoding the 5.8S RNA and the flanking internal transcribed spacers (ITS1 and ITS2) were used to test the phylogenetic relationships within 12 Italian Quercus taxa (Fagaceae).
Hypotheses of sequence orthology are tested by detailed inspection of some basic features of oak ITS sequences (i.e., general patterns of conserved domains, thermodynamic stability and predicted conformation of the secondary structure of transcripts) that also allowed more accurate sequence alignment. Analysis of ITS variation supported three monophyletic groups, corresponding to subg. Cerris, Sclerophyllodrys (=Ilex group sensu Nixon) and Quercus, as proposed by Schwarz [Feddes Rep., Sonderbeih. D, 1-200]. A derivation of the "Cerris group" from the "Ilex group" is suggested, with Q. cerris sister to the rest of the "Cerris group." Quercus pubescens was found to be sister to the rest of the "Quercus group." The status of the hybrid species Q. crenata (Q. cerris × Q. suber) and Q. morisii (Q. ilex × Q. suber) was evaluated and discussed. Finally, the phylogenetic position of the Italian species in a broader context of the genus is presented. The utility of the ITS marker to assess the molecular systematics of oaks is therefore confirmed. The importance of Italy as a region with a high degree of diversity at the population and genetic level is discussed.

3. PhyloBayes MPI: phylogenetic reconstruction with infinite mixtures of profiles in a parallel environment.
Science.gov (United States)
Lartillot, Nicolas; Rodrigue, Nicolas; Stubbs, Daniel; Richer, Jacques
2013-07-01
Modeling across-site variation of the substitution process is increasingly recognized as important for obtaining more accurate phylogenetic reconstructions. Both finite and infinite mixture models have been proposed and have been shown to significantly improve on classical single-matrix models. Compared with their finite counterparts, infinite mixtures have a greater expressivity. However, they are computationally more challenging. This has resulted in practical compromises in the design of infinite mixture models.
In particular, a fast but simplified version of a Dirichlet process model over equilibrium frequency profiles implemented in PhyloBayes has often been used in recent phylogenomics studies, while more refined model structures, which are more realistic and fit the data better, have been practically out of reach. We introduce a message passing interface version of PhyloBayes, implementing the Dirichlet process mixture models as well as more classical empirical matrices and finite mixtures. The parallelization is made efficient thanks to the combination of two algorithmic strategies: a partial Gibbs sampling update of the tree topology and the use of a truncated stick-breaking representation for the Dirichlet process prior. The implementation shows close to linear gains in computational speed for up to 64 cores, thus allowing faster phylogenetic reconstruction under complex mixture models. PhyloBayes MPI is freely available from our website www.phylobayes.org.

4. Phylogenetic evidence for a case of misleading rather than mislabeling in caviar in the United Kingdom.
Science.gov (United States)
Johnson, Tania Aspasia; Iyengar, Arati
2015-01-01
Sturgeons and paddlefish are freshwater fish which are highly valued for their caviar. Despite the fact that every single species of sturgeon and paddlefish is listed under CITES, there are reports of illegal trade in caviar where products are deliberately mislabeled. Three samples of caviar purchased in the United Kingdom were investigated for accurate CITES labeling using COI and cyt b sequencing. Initial species identification was carried out using BLAST followed by phylogenetic analyses using both maximum parsimony and maximum likelihood methods. Results showed no evidence for mislabeling with respect to CITES labels in any of the three samples, but we observed clear evidence for a case of misleading the customer in one sample.

5. Fast and accurate exhaled breath ammonia measurement.
Science.gov (United States)
Solga, Steven F; Mudalel, Matthew L; Spacek, Lisa A; Risby, Terence H
2014-06-11
This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic technique known as quartz-enhanced photoacoustic spectroscopy (QEPAS), based on a quantum cascade laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real-time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides a rationale for future innovations.

6. Noninvasive hemoglobin monitoring: how accurate is enough?
Science.gov (United States)
Rice, Mark J; Gravenstein, Nikolaus; Morey, Timothy E
2013-10-01
Evaluating the accuracy of medical devices has traditionally been a blend of statistical analyses, at times without contextualizing the clinical application. There have been a number of recent publications on the accuracy of a continuous noninvasive hemoglobin measurement device, the Masimo Radical-7 Pulse Co-oximeter, focusing on the traditional statistical metrics of bias and precision.
In this review, which contains material presented at the Innovations and Applications of Monitoring Perfusion, Oxygenation, and Ventilation (IAMPOV) Symposium at Yale University in 2012, we critically investigated these metrics as applied to the new technology, exploring what is required of a noninvasive hemoglobin monitor and whether the conventional statistics adequately answer our questions about clinical accuracy. We discuss the glucose error grid, well known in the glucose monitoring literature, and describe an analogous version for hemoglobin monitoring. This hemoglobin error grid can be used to evaluate the required clinical accuracy (±g/dL) of a hemoglobin measurement device to provide more conclusive evidence on whether to transfuse an individual patient. The important decision to transfuse a patient usually requires both an accurate hemoglobin measurement and a physiologic reason to elect transfusion. It is our opinion that the published accuracy data of the Masimo Radical-7 are not good enough to make the transfusion decision.

7. Accurate free energy calculation along optimized paths.
Science.gov (United States)
Chen, Changjun; Xiao, Yi
2010-05-01
The path-based methods of free energy calculation, such as thermodynamic integration and free energy perturbation, are simple in theory, but difficult in practice because in most cases smooth paths do not exist, especially for large molecules. In this article, we present a novel method to build the transition path of a peptide. We use harmonic potentials to restrain its nonhydrogen atom dihedrals in the initial state and set the equilibrium angles of the potentials as those in the final state. Through a series of steps of geometrical optimization, we can construct a smooth and short path from the initial state to the final state. This path can be used to calculate the free energy difference.
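The path-based free energy idea above (entry 7) can be illustrated on an exactly solvable 1D model. This is thermodynamic integration along a coupling parameter for a harmonic well, not the authors' dihedral-restraint scheme; all symbols here are standard TI notation:

```python
# Thermodynamic integration on U(x; lam) = k(lam)/2 * x^2,
# with k(lam) = (1-lam)*k0 + lam*k1. For a harmonic well,
# dA/dlam = <dU/dlam> = (k1-k0)/2 * <x^2>  and  <x^2> = 1/(beta*k(lam)),
# so the integral can be checked against the analytic result
# dA = ln(k1/k0) / (2*beta).
import math

k0, k1, beta = 1.0, 4.0, 1.0
lams = [i / 100 for i in range(101)]

def dA_dlam(lam):
    k = (1 - lam) * k0 + lam * k1
    return (k1 - k0) / 2 * (1.0 / (beta * k))  # exact <x^2> for a harmonic well

# Trapezoidal integration along the path lam: 0 -> 1.
dA = sum((dA_dlam(a) + dA_dlam(b)) / 2 * (b - a) for a, b in zip(lams, lams[1:]))
exact = math.log(k1 / k0) / (2 * beta)
print(round(dA, 4), round(exact, 4))  # the two values agree to ~1e-4
```

In a molecular setting the average ⟨dU/dλ⟩ would come from sampling along the constructed path rather than from a closed form, but the integration step is the same.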
To validate this method, we apply it to a small 10-ALA peptide and find that the calculated free energy changes in helix-helix and helix-hairpin transitions are both self-convergent and cross-convergent. We also calculate the free energy differences between different stable states of the beta-hairpin trpzip2, and the results show that this method is more efficient than the conventional molecular dynamics method in accurate free energy calculation.

8. Accurate fission data for nuclear safety
CERN Document Server
Solders, A; Jokinen, A; Kolhinen, V S; Lantz, M; Mattera, A; Penttila, H; Pomp, S; Rakopoulos, V; Rinta-Antila, S
2013-01-01
The Accurate fission data for nuclear safety (AlFONS) project aims at high precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high current light ion cyclotron at the University of Jyvaskyla. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron induced fission yields. Thanks to a series of mass separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power on the order of a few hundred thousand. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design. For studies of exotic nuclei far from stability a high neutron flux (10^12 neutrons/s) at energies 1 - 30 MeV is desired, while for reactor applications neutron spectra that resemble those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons...

9. Towards Accurate Modeling of Moving Contact Lines
CERN Document Server
Holmgren, Hanna
2015-01-01
A main challenge in numerical simulations of moving contact line problems is that the adherence, or no-slip, boundary condition leads to a non-integrable stress singularity at the contact line.
In this report we perform the first steps in developing the macroscopic part of an accurate multiscale model for a moving contact line problem in two space dimensions. We assume that a micro model has been used to determine a relation between the contact angle and the contact line velocity. An intermediate region is introduced where an analytical expression for the velocity exists. This expression is used to implement boundary conditions for the moving contact line at a macroscopic scale, along a fictitious boundary located a small distance away from the physical boundary. Model problems where the shape of the interface is constant throughout the simulation are introduced. For these problems, experiments show that the errors in the resulting contact line velocities converge with the grid size $h$ at a rate of convergence $...

10. Does a pneumotach accurately characterize voice function?
Science.gov (United States)
Walters, Gage; Krane, Michael
2016-11-01
A study is presented which addresses how a pneumotach might adversely affect clinical measurements of voice function. A pneumotach is a device, typically a mask, worn over the mouth, in order to measure time-varying glottal volume flow. By measuring the time-varying difference in pressure across a known aerodynamic resistance element in the mask, the glottal volume flow waveform is estimated. Because it adds aerodynamic resistance to the vocal system, there is some concern that using a pneumotach may not accurately portray the behavior of the voice. To test this hypothesis, experiments were performed in a simplified airway model with the principal dimensions of an adult human upper airway. A compliant constriction, fabricated from silicone rubber, modeled the vocal folds. Variations of transglottal pressure, time-averaged volume flow, model vocal fold vibration amplitude, and radiated sound with subglottal pressure were performed, with and without the pneumotach in place, and differences noted.
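The pneumotach measurement principle described above reduces to simple arithmetic for a linear resistance element: flow is the measured pressure drop divided by the known resistance. The numbers below are invented for illustration, not calibration data for any real device:

```python
# Sketch of the pneumotach principle: glottal volume flow inferred from the
# pressure drop across a known linear aerodynamic resistance (toy numbers).

def flow_from_pressure(delta_p, resistance):
    """Volume flow (L/s) from pressure drop (cmH2O) over resistance (cmH2O/(L/s))."""
    return delta_p / resistance

R = 0.5                                   # cmH2O per L/s (hypothetical mask element)
samples = [0.05, 0.10, 0.20, 0.10, 0.05]  # measured pressure drops, cmH2O
flows = [flow_from_pressure(dp, R) for dp in samples]
print(flows)  # [0.1, 0.2, 0.4, 0.2, 0.1]
```

The study's concern is precisely that adding this resistance R to the vocal tract changes the waveform being measured.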
The authors acknowledge support of NIH Grant 2R01DC005642-10A1.

11. Accurate lineshape spectroscopy and the Boltzmann constant.
Science.gov (United States)
Truong, G-W; Anstie, J D; May, E F; Stace, T M; Luiten, A N
2015-10-14
Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate measurement of the excited-state (6P1/2) hyperfine splitting in Cs, and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m.

12. Accurate upper body rehabilitation system using Kinect.
Science.gov (United States)
Sinha, Sanjana; Bhowmick, Brojeshwar; Chakravarty, Kingshuk; Sinha, Aniruddha; Das, Abhijit
2016-08-01
The growing importance of Kinect as a tool for clinical assessment and rehabilitation is due to its portability, low cost and markerless system for human motion capture. However, the accuracy of Kinect in measuring three-dimensional body joint center locations often fails to meet clinical standards of accuracy when compared to marker-based motion capture systems such as Vicon.
The length of the body segment connecting any two joints, measured as the distance between three-dimensional Kinect skeleton joint coordinates, has been observed to vary with time. The orientation of the line connecting adjoining Kinect skeletal coordinates has also been seen to differ from the actual orientation of the physical body segment. Hence we have proposed an optimization method that utilizes Kinect Depth and RGB information to search for the joint center location that satisfies constraints on body segment length as well as orientation. An experimental study has been carried out on ten healthy participants performing upper body range of motion exercises. The results report a 72% reduction in body segment length variance and a 2° improvement in Range of Motion (ROM) angle, enabling more accurate measurements for upper limb exercises.

13. Accurate thermoplasmonic simulation of metallic nanoparticles
Science.gov (United States)
Yu, Da-Miao; Liu, Yan-Nan; Tian, Fa-Lin; Pan, Xiao-Min; Sheng, Xin-Qing
2017-01-01
Thermoplasmonics leads to enhanced heat generation due to the localized surface plasmon resonances. The measurement of heat generation is fundamentally a complicated task, which necessitates the development of theoretical simulation techniques. In this paper, an efficient and accurate numerical scheme is proposed for applications with complex metallic nanostructures. Light absorption and temperature increase are, respectively, obtained by solving the volume integral equation (VIE) and the steady-state heat diffusion equation through the method of moments (MoM). Previously, methods based on surface integral equations (SIEs) were utilized to obtain light absorption. However, computing light absorption from the equivalent current is as expensive as O(NsNv), where Ns and Nv, respectively, denote the number of surface and volumetric unknowns. Our approach reduces the cost to O(Nv) by using VIE.
The accuracy, efficiency and capability of the proposed scheme are validated by multiple simulations. The simulations show that our proposed method is more efficient than the approach based on SIEs under comparable accuracy, especially for the case where many incidences are of interest. The simulations also indicate that the temperature profile can be tuned by several factors, such as the geometric configuration of the array, beam direction, and light wavelength.

14. Fast and Provably Accurate Bilateral Filtering.
Science.gov (United States)
Chaudhury, Kunal N; Dabhade, Swapnil D
2016-06-01
The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy.

15.
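The brute-force bilateral filter that entry 14 takes as its O(S)-per-pixel baseline can be sketched in one dimension. The Gaussian kernels match the abstract's setting, but the parameters are arbitrary and this is the direct computation, not the paper's O(1) approximation:

```python
# Direct (brute-force) bilateral filter on a 1D signal: each output sample is a
# weighted average whose weights combine a spatial Gaussian (distance in index)
# with a range Gaussian (distance in value), which preserves edges.
import math

def bilateral_1d(signal, sigma_s=2.0, sigma_r=0.2, radius=4):
    out = []
    for i, center in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))                 # spatial
                 * math.exp(-((signal[j] - center) ** 2) / (2 * sigma_r ** 2))) # range
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# A clean step edge survives: samples across the discontinuity get near-zero
# range weights, so the edge is not blurred.
step = [0.0] * 8 + [1.0] * 8
smoothed = bilateral_1d(step)
print([round(v, 3) for v in smoothed])
```

The double loop is the O(S) cost per sample that the paper's shiftable approximation replaces with N+1 ordinary spatial filterings.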
BLAST-EXPLORER helps you building datasets for phylogenetic analysis
Directory of Open Access Journals (Sweden)
Claverie Jean-Michel
2010-01-01
Full Text Available Abstract Background The right sampling of homologous sequences for phylogenetic or molecular evolution analyses is a crucial step, the quality of which can have a significant impact on the final interpretation of the study. There is no single way for constructing datasets suitable for phylogenetic analysis, because this task intimately depends on the scientific question we want to address. Moreover, database mining software such as BLAST, which is routinely used for searching homologous sequences, is not specifically optimized for this task. Results To fill this gap, we designed BLAST-Explorer, an original and friendly web-based application that combines a BLAST search with a suite of tools that allows interactive, phylogenetic-oriented exploration of the BLAST results and flexible selection of homologous sequences among the BLAST hits. Once the selection of the BLAST hits is done using BLAST-Explorer, the corresponding sequences can be imported locally for external analysis or passed to the phylogenetic tree reconstruction pipelines available on the Phylogeny.fr platform. Conclusions BLAST-Explorer provides a simple, intuitive and interactive graphical representation of the BLAST results and allows selection and retrieval of the BLAST hit sequences based on a wide range of criteria. Although BLAST-Explorer primarily aims at helping the construction of sequence datasets for further phylogenetic study, it can also be used as a standard BLAST server with enriched output. BLAST-Explorer is available at http://www.phylogeny.fr

16. SUMAC: Constructing Phylogenetic Supermatrices and Assessing Partially Decisive Taxon Coverage.
Science.gov (United States)
Freyman, William A
2015-01-01
The amount of phylogenetically informative sequence data in GenBank is growing at an exponential rate, and large phylogenetic trees are increasingly used in research. Tools are needed to construct phylogenetic sequence matrices from GenBank data and evaluate the effect of missing data. Supermatrix Constructor (SUMAC) is a tool to data-mine GenBank, construct phylogenetic supermatrices, and assess the phylogenetic decisiveness of a matrix given the pattern of missing sequence data. SUMAC calculates a novel metric, Missing Sequence Decisiveness Scores (MSDS), which measures how much each individual missing sequence contributes to the decisiveness of the matrix. MSDS can be used to compare supermatrices and prioritize the acquisition of new sequence data. SUMAC constructs supermatrices either through an exploratory clustering of all GenBank sequences within a taxonomic group or by using guide sequences to build homologous clusters in a more targeted manner. SUMAC assembles supermatrices for any taxonomic group recognized in GenBank and is optimized to run on multicore computer systems by parallelizing multiple stages of operation. SUMAC is implemented as a Python package that can run as a stand-alone command-line program, or its modules and objects can be incorporated within other programs. SUMAC is released under the open source GPLv3 license and is available at https://github.com/wf8/sumac.

17. The power and pitfalls of HIV phylogenetics in public health.
Science.gov (United States)
Brooks, James I; Sandstrom, Paul A
2013-07-25
Phylogenetics is the application of comparative studies of genetic sequences in order to infer evolutionary relationships among organisms. This tool can be used as a form of molecular epidemiology to enhance traditional population-level communicable disease surveillance.
Phylogenetic study has resulted in new paradigms being created in the field of communicable diseases, and this commentary aims to provide the reader with an explanation of how phylogenetics can be used in tracking infectious diseases. Special emphasis is placed upon the application of phylogenetics as a tool to help elucidate HIV transmission patterns, and on the limitations of these methods when applied to forensic analysis. Understanding infectious disease epidemiology in order to prevent new transmissions is the sine qua non of public health. However, with increasing epidemiological resolution there may be an associated loss of privacy for the individual. It is within this context that we aim to promote the discussion on how to use phylogenetics to achieve important public health goals while at the same time protecting the rights of the individual.

18. Reconstructions of the axial muscle insertions in the occipital region of dinosaurs: evaluations of past hypotheses on Marginocephalia and Tyrannosauridae using the extant phylogenetic bracket approach.
Science.gov (United States)
Tsuihiji, Takanobu
2010-08-01
The insertions of the cervical axial musculature on the occiput in marginocephalian and tyrannosaurid dinosaurs have been reconstructed in several studies with a view to their functional implications. Most of the past reconstructions of marginocephalians, however, relied on the anatomy of just one clade of reptiles, Lepidosauria, and lack phylogenetic justification. In this study, these past reconstructions were evaluated using the Extant Phylogenetic Bracket approach based on the anatomy of various extant diapsids. Many muscle insertions reconstructed in this study were substantially different from those in past studies, demonstrating the importance of phylogenetically justified inferences based on the conditions in Aves and Crocodylia for reconstructing the anatomy of non-avian dinosaurs.
The present reconstructions show that axial muscle insertions were generally enlarged in derived marginocephalians, apparently correlated with expansion of their parietosquamosal shelf/frill. Several muscle insertions on the occiput in tyrannosaurids reconstructed in this study using the Extant Phylogenetic Bracket approach were also rather different from recent reconstructions based on the same phylogenetic, parsimony-based method. Such differences are mainly due to differences in the initial identification of muscle insertion areas or to different hypotheses on muscle homologies in extant diapsids. This result emphasizes the importance of accurate and detailed observations on the anatomy of extant animals as the basis for paleobiological inferences such as anatomical reconstructions and functional analyses.

19. Array-based comparative genomic hybridization facilitates identification of breakpoints of a novel der(1)t(1;18)(p36.3;q23)dn in a child presenting with mental retardation.
Science.gov (United States)
Lennon, P A; Cooper, M L; Curtis, M A; Lim, C; Ou, Z; Patel, A; Cheung, S W; Bacino, C A
2006-06-01
Monosomy of distal 1p36 represents the most common terminal deletion in humans and results in one of the most frequently diagnosed mental retardation syndromes. This deletion is considered a contiguous gene deletion syndrome and has been shown to vary in deletion size, contributing to the spectrum of phenotypic anomalies seen in patients with monosomy 1p36. We report on an 8-year-old female with characteristics of the monosomy 1p36 syndrome who demonstrated a novel der(1)t(1;18)(p36.3;q23). Initial G-banded karyotype analysis revealed a deleted chromosome 1, with a breakpoint within 1p36.3. Subsequent FISH and array-based comparative genomic hybridization not only confirmed and partially characterized the deletion of chromosome 1p36.3, but also uncovered distal trisomy for 18q23.
In this patient, the duplicated 18q23 is translocated onto the deleted 1p36.3 region, suggesting telomere capture. Molecular characterization of this novel der(1)t(1;18)(p36.3;q23), guided by our clinical array-comparative genomic hybridization, demonstrated a 3.2 Mb terminal deletion of chromosome 1p36.3 and a 200 kb duplication of 18q23 onto the deleted 1p36.3, presumably stabilizing the deleted chromosome 1. DNA sequence analysis around the breakpoints demonstrated no homology, and therefore this telomere capture of distal 18q is apparently the result of a non-homologous recombination. Partial trisomy for 18q23 has not been previously reported. The importance of mapping the breakpoints of all balanced and unbalanced translocations found in the clinical laboratory, when phenotypic abnormalities are present, is discussed.

20. Conventional cytogenetics and breakpoint distribution by fluorescent in situ hybridization in patients with malignant hemopathies associated with inv(3)(q21;q26) and t(3;3)(q21;q26).
Science.gov (United States)
De Braekeleer, Etienne; Douet-Guilbert, Nathalie; Basinko, Audrey; Bovo, Clément; Guéganic, Nadia; Le Bris, Marie-Josée; Morel, Frédéric; De Braekeleer, Marc
2011-10-01
Inv(3)(q21q26)/t(3;3)(q21;q26) is recognized as a distinctive entity of acute myeloid leukemia (AML) with recurrent genetic abnormalities of prognostic significance. It occurs in 1-2.5% of AML and is also observed in myelodysplastic syndromes and in the blastic phase of chronic myeloid leukemia. The molecular consequence of the inv(3)/t(3;3) rearrangements is the juxtaposition of the ribophorin I (RPN1) gene (located in band 3q21) with the ecotropic viral integration site 1 (EVI1) gene (located in band 3q26.2). Following conventional cytogenetics to determine the karyotype, fluorescent in situ hybridization (FISH) with a panel of bacterial artificial chromosome clones was used to map the breakpoints involved in 15 inv(3)/t(3;3) cases.
Inv(3) or t(3;3) was the sole karyotypic anomaly in 6 patients, while additional abnormalities were identified in the remaining 9 patients, including 4 with monosomy of chromosome 7 (-7) or a deletion of its long arm (7q-). Breakpoints in band 3q21 were distributed in a 235 kb region centromeric to and including the RPN1 locus, while those in band 3q26.2 were scattered in a 900 kb region located on either side of and including the EVI1 locus. In contrast to most of the inversions and translocations associated with AML that lead to fusion genes, inv(3)/t(3;3) does not generate a chimeric gene, but rather induces gene overexpression. The wide dispersion of the breakpoints in bands 3q21 and 3q26 and the heterogeneity of the genomic consequences could explain why the mechanisms leading to leukemogenesis are still poorly understood. Therefore, it is important to further characterize these chromosomal abnormalities by FISH.

1. Towards Accurate Application Characterization for Exascale (APEX)
Energy Technology Data Exchange (ETDEWEB)
Hammond, Simon David [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)]
2015-09-01
Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns.
Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia’s production users/developers.

2. Optimizing cell arrays for accurate functional genomics
Directory of Open Access Journals (Sweden)
Fengler Sven
2012-07-01
Background: Cellular responses emerge from a complex network of dynamic biochemical reactions. In order to investigate them, it is necessary to develop methods that allow a high number of gene products to be perturbed in a flexible and fast way. Cell arrays (CA) enable such experiments on microscope slides via reverse transfection of cellular colonies growing on spotted genetic material. In contrast to multi-well plates, CA are susceptible to contamination among neighboring spots, hindering accurate quantification in cell-based screening projects. Here we have developed a quality control protocol for quantifying and minimizing contamination in CA.
Results: We imaged checkered CA that express two distinct fluorescent proteins and segmented images into single cells to quantify the transfection efficiency and interspot contamination.
Compared with standard procedures, we measured a 3-fold reduction of contaminants when arrays containing HeLa cells were washed shortly after cell seeding. We proved that nucleic acid uptake during cell seeding, rather than migration among neighboring spots, was the major source of contamination. Arrays of MCF7 cells developed without the washing step showed a 7-fold lower percentage of contaminant cells, demonstrating that contamination depends on specific cell properties.
Conclusions: Previously published methodological work has focused on achieving high transfection rates in densely packed CA. Here, we focus on an equally important parameter: interspot contamination. The presented quality control is essential for estimating the rate of contamination, a major source of false positives and negatives in current microscopy-based functional genomics screenings. We have demonstrated that a washing step after seeding enhances CA quality for HeLa but is not necessary for MCF7. The described method provides a way to find optimal seeding protocols for cell lines intended to be used for the first time in CA.

3. Accurate paleointensities - the multi-method approach
Science.gov (United States)
de Groot, Lennart
2016-04-01
The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements, and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized.
Recently, a non-heating, relative paleointensity technique was proposed - the pseudo-Thellier protocol - which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old; the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

4. How flatbed scanners upset accurate film dosimetry.
Science.gov (United States)
van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S
2016-01-21
Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length and polarization.
Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high-contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels in the extreme lateral position. Light polarization due to the film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT-type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and, therefore, determination of the LSE per color channel and of the dose delivered to the film.

5. A new dynamic null model for phylogenetic community structure.
Science.gov (United States)
Pigot, Alex L; Etienne, Rampal S
2015-02-01
Phylogenies are increasingly applied to identify the mechanisms structuring ecological communities, but progress has been hindered by a reliance on statistical null models that ignore the historical process of community assembly. Here, we address this and develop a dynamic null model of assembly by allopatric speciation, colonisation and local extinction.
Incorporating these processes fundamentally alters the structure of communities expected due to chance, with speciation leading to phylogenetic overdispersion compared to a classical statistical null model assuming equal probabilities of community membership. Applying this method to bird and primate communities in South America, we show that patterns of phylogenetic overdispersion - often attributed to negative biotic interactions - are instead consistent with a species-neutral model of allopatric speciation, colonisation and local extinction. Our findings provide a new null expectation for phylogenetic community patterns and highlight the importance of explicitly accounting for the dynamic history of assembly when testing the mechanisms governing community structure.

6. Dimensional Reduction for the General Markov Model on Phylogenetic Trees.
Science.gov (United States)
Sumner, Jeremy G
2017-03-01
We present a method of dimensional reduction for the general Markov model of sequence evolution on a phylogenetic tree. We show that taking certain linear combinations of the associated random variables (site pattern counts) reduces the dimensionality of the model from exponential in the number of extant taxa to quadratic in the number of taxa, while retaining the ability to statistically identify phylogenetic divergence events. A key feature is the identification of an invariant subspace which depends only bilinearly on the model parameters, in contrast to the usual multi-linear dependence in the full space. We discuss potential applications including the computation of split (edge) weights on phylogenetic trees from observed sequence data.

7.
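The "classical statistical null model assuming equal probabilities of community membership" that the dynamic model above is contrasted with can be sketched as follows: score an observed community by its mean pairwise phylogenetic distance (MPD) and compare it against communities of the same size drawn uniformly at random from the species pool. This is a generic illustration of that baseline, not the authors' dynamic model; the function names, the toy distance matrix, and the z-score convention (positive suggesting overdispersion) are my own.

```python
import numpy as np

def mpd(dist, members):
    """Mean pairwise phylogenetic distance among community members."""
    sub = dist[np.ix_(members, members)]
    iu = np.triu_indices(len(members), k=1)
    return sub[iu].mean()

def classical_null_z(dist, members, n_null=999, rng=None):
    """Z-score of observed MPD under the classical null model:
    communities of the same size drawn with equal probability from the pool."""
    rng = np.random.default_rng(rng)
    obs = mpd(dist, members)
    pool, k = dist.shape[0], len(members)
    null = np.array([mpd(dist, rng.choice(pool, size=k, replace=False))
                     for _ in range(n_null)])
    # z < 0: members closer than chance (clustering); z > 0: overdispersion.
    return (obs - null.mean()) / null.std()
```

The paper's point is precisely that this baseline ignores speciation, colonisation, and extinction, so patterns it labels as overdispersed can arise from neutral assembly dynamics alone.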
Short sequence effect of ancient DNA on mammoth phylogenetic analyses
Institute of Scientific and Technical Information of China (English)
Guilian SHENG; Lianjuan WU; Xindong HOU; Junxia YUAN; Shenghong CHENG; Bojian ZHONG; Xulong LAI
2009-01-01
The evolution of Elephantidae has been intensively studied in the past few years, especially after 2006. Molecular approaches have made a great contribution to the assumption that the extinct woolly mammoth has a close relationship with the Asian elephant instead of the African elephant. In this study, partial ancient DNA sequences of the cytochrome b (cyt b) gene in the mitochondrial genome were successfully retrieved from Late Pleistocene Mammuthus primigenius bones collected from Heilongjiang Province in Northeast China. Both the partial and complete homologous cyt b gene sequences and the whole mitochondrial genome sequences extracted from GenBank were aligned and used as datasets for phylogenetic analyses. All of the phylogenetic trees, based on either the partial or the complete cyt b gene, reject the relationship constructed from the whole mitochondrial genome, showing an effect of cyt b sequence length on mammoth phylogenetic analyses.

8. Phylogenetics and the correlates of mammalian sleep: a reappraisal.
Science.gov (United States)
Lesku, John A; Roth, Timothy C; Rattenborg, Niels C; Amlaner, Charles J; Lima, Steven L
2008-06-01
The correlates of mammalian sleep have been investigated previously in at least eight comparative studies in an effort to illuminate the functions of sleep. However, all of these univariate analyses treated each species, or taxonomic Family, as a statistically independent unit, which is invalid due to the phylogenetic relationships among species. Here, we reassess these influential correlates of mammalian sleep using the formal phylogenetic framework of independent contrasts.
After controlling for phylogeny using this procedure, the interpretation of many of the correlates changed. For instance, and contrary to previous studies, we found interspecific support for a neurophysiological role for rapid-eye-movement sleep, such as memory consolidation. Also in contrast to previous studies, we did not find comparative support for an energy conservation function for slow-wave sleep. Thus, the incorporation of a phylogenetic control into comparative analyses of sleep yields meaningful differences that affect our understanding of why we sleep.

9. Phylogenetic Analysis of RhoGAP Domain-Containing Proteins
Institute of Scientific and Technical Information of China (English)
Marcelo M. Brandão; Karina L. Silva-Brandão; Fernando F. Costa; Sara T. O. Saad
2006-01-01
Proteins containing a Rho GTPase-activating protein (RhoGAP) domain work as molecular switches involved in the regulation of diverse cellular functions. The ability of these GTPases to regulate a wide number of cellular processes comes from their interactions with multiple effectors and inhibitors, including the RhoGAP family, which stimulates their intrinsic GTPase activity. Here, a phylogenetic approach was applied to study the evolutionary relationship among 59 RhoGAP domain-containing proteins. The sequences were aligned by their RhoGAP domains and the phylogenetic hypotheses were generated using Maximum Parsimony and Bayesian analyses. The character tracing of two traits, GTPase activity and the presence of other domains, indicated a significant phylogenetic signal for both of them.

10. TREEFINDER: a powerful graphical analysis environment for molecular phylogenetics
Directory of Open Access Journals (Sweden)
von Haeseler Arndt
2004-06-01
Background: Most analysis programs for inferring molecular phylogenies are difficult to use, in particular for researchers with little programming experience.
Results: TREEFINDER is an easy-to-use, integrative, platform-independent analysis environment for molecular phylogenetics. In this paper the main features of TREEFINDER (version of April 2004) are described. TREEFINDER is written in ANSI C and Java and implements powerful statistical approaches for inferring gene trees and performing related analyses. In addition, it provides a user-friendly graphical interface and a phylogenetic programming language.
Conclusions: TREEFINDER is a versatile framework for analyzing phylogenetic data across different platforms that is suited to both exploratory and advanced studies.

11. Functionally and phylogenetically diverse plant communities key to soil biota.
Science.gov (United States)
Milcu, Alexandru; Allan, Eric; Roscher, Christiane; Jenkins, Tania; Meyer, Sebastian T; Flynn, Dan; Bessler, Holger; Buscot, François; Engels, Christof; Gubsch, Marlén; König, Stephan; Lipowsky, Annett; Loranger, Jessy; Renker, Carsten; Scherber, Christoph; Schmid, Bernhard; Thébault, Elisa; Wubet, Tesfaye; Weisser, Wolfgang W; Scheu, Stefan; Eisenhauer, Nico
2013-08-01
Recent studies assessing the role of biological diversity for ecosystem functioning indicate that the diversity of functional traits and the evolutionary history of species in a community, not the number of taxonomic units, ultimately drives the biodiversity-ecosystem-function relationship. Here, we simultaneously assessed the importance of plant functional trait and phylogenetic diversity as predictors of major trophic groups of soil biota (abundance and diversity), six years from the onset of a grassland biodiversity experiment. Plant functional and phylogenetic diversity were generally better predictors of soil biota than the traditionally used species or functional group richness. Functional diversity was a reliable predictor for most biota, with the exception of soil microorganisms, which were better predicted by phylogenetic diversity.
These results provide empirical support for the idea that the diversity of plant functional traits and the diversity of evolutionary lineages in a community are important for maintaining higher abundances and diversity of soil communities.

12. Breakpoint phenomenon in layered superconductors
Science.gov (United States)
Shukrinov, Yu M.
2008-10-01
We study theoretically the multiple-branch structure in the IV-characteristics of intrinsic Josephson junctions in HTSC and investigate in detail its outermost branch at different values of the dissipation parameter. A different character of the IV-characteristics in different intervals of the dissipation parameter β was observed. This feature follows from the creation of the longitudinal plasma wave with different wave numbers k. The possibility of experimentally observing the change of the wave vector of the longitudinal plasma wave by changing the temperature is analyzed.

13. Breakpoint phenomenon in layered superconductors
Energy Technology Data Exchange (ETDEWEB)
Shukrinov, Yu M [Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, Dubna, Moscow Region, 141980, Russia, and Photonics and Electronics Science and Engineering Center, Kyoto University, Kyoto 615-8510 (Japan)], E-mail: shukrinv@theor.jinr.ru
2008-10-15
We study theoretically the multiple-branch structure in the IV-characteristics of intrinsic Josephson junctions in HTSC and investigate in detail its outermost branch at different values of the dissipation parameter. A different character of the IV-characteristics in different intervals of the dissipation parameter β was observed. This feature follows from the creation of the longitudinal plasma wave with different wave numbers k. The possibility of experimentally observing the change of the wave vector of the longitudinal plasma wave by changing the temperature is analyzed.

14.
An improved model for whole genome phylogenetic analysis by Fourier transform.
Science.gov (United States)
Yin, Changchuan; Yau, Stephen S-T
2015-10-07
DNA sequence similarity comparison is one of the major steps in computational phylogenetic studies. The sequence comparison of closely related DNA sequences and genomes is usually performed by multiple sequence alignment (MSA). While the MSA method is accurate for some types of sequences, it may produce incorrect results when DNA sequences have undergone rearrangements, as in many bacterial and viral genomes. It is also limited by its computational complexity for comparing large volumes of data. Previously, we proposed an alignment-free method that exploits the full information content of DNA sequences by Discrete Fourier Transform (DFT), but still with some limitations. Here, we present a significantly improved method for the similarity comparison of DNA sequences by DFT. In this method, we map DNA sequences into 2-dimensional (2D) numerical sequences and then apply DFT to transform the 2D numerical sequences into the frequency domain. In the 2D mapping, the nucleotide composition of a DNA sequence is a determinant factor, and the 2D mapping reduces the nucleotide composition bias in the distance measure, thus improving the similarity measure of DNA sequences. To compare the DFT power spectra of DNA sequences with different lengths, we propose an improved even scaling algorithm to extend shorter DFT power spectra to the longest length of the underlying sequences. After the DFT power spectra are evenly scaled, the spectra are in the same dimensionality of the Fourier frequency space, and the Euclidean distances of the full Fourier power spectra of the DNA sequences are then used as the dissimilarity metric. The improved DFT method, with computational performance increased by the 2D numerical representation, is applicable to DNA sequences of any length range.
We assess the accuracy of the improved DFT similarity measure in hierarchical clustering of different DNA sequences, including simulated and real datasets. The method yields accurate and reliable phylogenetic trees.

15. Phylogenetic and chemical diversity of the MAR4 streptomycete lineage
Directory of Open Access Journals (Sweden)
Marisa Paulino
2014-06-01
To date, phylogenetic characterization of 6 representative isolates, based on a partial sequence of the gene encoding 16S rRNA, confirms that these strains belong to the species Streptomyces aculeolatus. Figure 2: Neighbour-joining phylogenetic tree created from 6 partial 16S rRNA gene sequences from Streptomyces aculeolatus strains cultured from the Madeira Archipelago, based on 1000 bootstrap replicates. BLAST matches (deposited in GenBank) are included with species and strain name followed by accession number. Verrucosispora maris and Micromonospora aurantiaca were used as outgroups.

16. Phylogenetic origin of Beckmannia (Poaceae) inferred from molecular evidence
Institute of Scientific and Technical Information of China (English)
Chong-Mei XU; Chang-You QU; Wen-Guang YU; Xue-Jie ZHANG; Fa-Zeng LI
2009-01-01
The phylogenetic origin of Beckmannia remains unknown. The genus has been placed within the Chlorideae, Aveneae (Agrostideae), or Poeae, or treated as an isolated lineage, Beckmanniinae. In the present study, we used nuclear internal transcribed spacer (ITS) and chloroplast trnL-F sequences to examine the phylogenetic relationship between Beckmannia and those genera that have been assumed to be related. On the basis of the results of our studies, the following conclusions can be drawn: (i) Beckmannia and Alopecurus are sister groups with high support; and (ii) Beckmannia and Alopecurus are nested within the Poeae clade with high support. The results of our analysis suggest that Beckmannia should be placed in Poeae.

17.
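The pipeline described in the DFT abstract above (numerical mapping, DFT power spectrum, even scaling to a common length, Euclidean distance) can be sketched in simplified form. Note the assumptions: this uses the classic four-indicator (Voss) mapping rather than the paper's 2D mapping, and plain linear interpolation rather than the paper's improved even scaling algorithm, so it illustrates the idea only, not the published method.

```python
import numpy as np

def power_spectrum(seq):
    """Sum of DFT power spectra of the four nucleotide indicator sequences."""
    spec = np.zeros(len(seq))
    for base in "ACGT":
        u = np.array([1.0 if c == base else 0.0 for c in seq])
        spec += np.abs(np.fft.fft(u)) ** 2
    return spec[1:]  # drop the DC term (pure composition/length offset)

def even_scale(spec, m):
    """Stretch a power spectrum to length m by linear interpolation."""
    n = len(spec)
    return np.interp(np.linspace(0, n - 1, m), np.arange(n), spec)

def dft_distance(a, b):
    """Alignment-free dissimilarity: Euclidean distance of scaled spectra."""
    m = max(len(a), len(b)) - 1
    sa = even_scale(power_spectrum(a), m)
    sb = even_scale(power_spectrum(b), m)
    return np.linalg.norm(sa - sb)
```

Distances of this kind can be fed directly into hierarchical clustering or neighbour-joining to obtain an alignment-free tree, which is how the abstract evaluates its measure.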
Phylogenetic approaches reveal biodiversity threats under climate change
Science.gov (United States)
González-Orozco, Carlos E.; Pollock, Laura J.; Thornhill, Andrew H.; Mishler, Brent D.; Knerr, Nunzio; Laffan, Shawn W.; Miller, Joseph T.; Rosauer, Dan F.; Faith, Daniel P.; Nipperess, David A.; Kujala, Heini; Linke, Simon; Butt, Nathalie; Külheim, Carsten; Crisp, Michael D.; Gruber, Bernd
2016-12-01
Predicting the consequences of climate change for biodiversity is critical to conservation efforts. Extensive range losses have been predicted for thousands of individual species, but less is known about how climate change might impact whole clades and landscape-scale patterns of biodiversity. Here, we show that climate change scenarios imply significant changes in phylogenetic diversity and phylogenetic endemism at a continental scale in Australia, using the hyper-diverse clade of eucalypts. We predict that within the next 60 years the vast majority of species distributions (91%) across Australia will shrink in size (on average by 51%) and shift south on the basis of projected suitable climatic space. Geographic areas currently with high phylogenetic diversity and endemism are predicted to change substantially under future climate scenarios. Approximately 90% of the current areas with concentrations of palaeo-endemism (that is, places with old evolutionary diversity) are predicted to disappear or shift their location. These findings show that climate change threatens whole clades of the phylogenetic tree, and that the outlined approach can be used to forecast areas of biodiversity losses and continental-scale impacts of climate change.

18. ["Phylogenetic presumptions" - can jurisprudence terms promote comparative biology?]
Science.gov (United States) Pesenko, Iu A 2005-01-01 The paper presents the results of a critical analysis of the "phylogenetic presumptions" conception by comparing it with the hypothetico-deductive method of phylogeny reconstruction within the framework of evolutionary systematics. Rasnitsyn (1988, 2002) suggested this conception by analogy with the presumption of innocence in jurisprudence, where it has only moral grounds. The premises of all twelve "phylogenetic presumptions" have long been known as criteria of character homology and polarity or as criteria of relationship between organisms. Many of them are inductive generalizations based on a large body of data and are therefore currently accepted by most taxonomists as criteria or corresponding rules, but not as presumptions with the imperative "it is true until the contrary is proved". The application of the juristic term "presumption" in phylogenetics brings no methodical profit, nor anything that gives better insight into the problems of phylogenetic reconstruction. Moreover, it has ill effects: by analogy with an accused person and his legal defense, it allows a researcher not to prove or substantiate his statements on characters and relationships. Some of Rasnitsyn's presumptions correspond to criteria that have been recognized as invalid because of their non-operationality (the presumption "the apomorphic state corresponds to more effective adaptation") or insufficient ontological grounds (the presumptions "the more complex structure is apomorphic", "the most parsimonious cladogram is preferable", and "every similarity should be considered inherited"). 19.
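One of the presumptions discussed above invokes parsimony ("the most parsimonious cladogram is preferable"). The quantity being minimized can be computed per character with Fitch's small-parsimony algorithm; a minimal sketch on rooted binary trees encoded as nested tuples (the encoding is an illustrative choice, not tied to any particular software):

```python
def fitch_score(tree, states):
    """Minimum number of state changes for one character on a rooted binary
    tree (Fitch, 1971). `tree` is a nested tuple whose leaves are names;
    `states` maps leaf name -> character state."""
    changes = 0
    def post(node):
        nonlocal changes
        if isinstance(node, str):            # leaf: its observed state set
            return {states[node]}
        left, right = node
        a, b = post(left), post(right)
        if a & b:                            # intersection non-empty: no change
            return a & b
        changes += 1                         # disjoint sets force one change
        return a | b
    post(tree)
    return changes
```

Scoring the same character on two candidate topologies and keeping the lower score is exactly the preference the presumption expresses.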
Host specificity and phylogenetic relationships of chicken and turkey parvoviruses Science.gov (United States) Previous reports indicate that the newly discovered chicken parvoviruses (ChPV) and turkey parvoviruses (TuPV) are very similar to each other, yet they represent different species within a new genus of Parvoviridae. Currently, strain classification is based on the phylogenetic analysis of a 561 bas... 20. The Beaver’s Phylogenetic Lineage Illuminated by Retroposon Reads Science.gov (United States) Doronina, Liliya; Matzke, Andreas; Churakov, Gennady; Stoll, Monika; Huge, Andreas; Schmitz, Jürgen 2017-01-01 Solving problematic phylogenetic relationships often requires high quality genome data. However, for many organisms such data are still not available. Among rodents, the phylogenetic position of the beaver has always attracted special interest. The arrangement of the beaver’s masseter (jaw-closer) muscle once suggested a strong affinity to some sciurid rodents (e.g., squirrels), placing them in the Sciuromorpha suborder. Modern molecular data, however, suggested a closer relationship of beaver to the representatives of the mouse-related clade, but significant data from virtually homoplasy-free markers (for example retroposon insertions) for the exact position of the beaver have not been available. We derived a gross genome assembly from deposited genomic Illumina paired-end reads and extracted thousands of potential phylogenetically informative retroposon markers using the new bioinformatics coordinate extractor fastCOEX, enabling us to evaluate different hypotheses for the phylogenetic position of the beaver. 
Comparative results provided significant support for a clear relationship between beavers (Castoridae) and kangaroo rat-related species (Geomyoidea) (p < 0.0015, six markers, no conflicting data) within a significantly supported mouse-related clade (including Myodonta, Anomaluromorpha, and Castorimorpha) (p < 0.0015, six markers, no conflicting data). PMID:28256552 1. Parental Acceptance-Rejection Theory and the Phylogenetic Model. Science.gov (United States) Rohner, Ronald P. Guided by specific theoretical and methodological points of view--the phylogenetic perspective and the universalistic approach respectively--this paper reports on a worldwide study of the antecedents and effects of parental acceptance and rejection. Parental acceptance-rejection theory postulates that rejected children throughout our species share… 2. Evolution & Phylogenetic Analysis: Classroom Activities for Investigating Molecular & Morphological Concepts Science.gov (United States) Franklin, Wilfred A. 2010-01-01 In a flexible multisession laboratory, students investigate concepts of phylogenetic analysis at both the molecular and the morphological level. Students finish by conducting their own analysis on a collection of skeletons representing the major phyla of vertebrates, a collection of primate skulls, or a collection of hominid skulls. 3. A Deliberate Practice Approach to Teaching Phylogenetic Analysis Science.gov (United States) Hobbs, F. Collin; Johnson, Daniel J.; Kearns, Katherine D. 2013-01-01 One goal of postsecondary education is to assist students in developing expert-level understanding. Previous attempts to encourage expert-level understanding of phylogenetic analysis in college science classrooms have largely focused on isolated, or "one-shot," in-class activities. Using a deliberate practice instructional approach, we… 4. 
Graph Triangulations and the Compatibility of Unrooted Phylogenetic Trees CERN Document Server Vakati, Sudheer 2010-01-01 We characterize the compatibility of a collection of unrooted phylogenetic trees as a question of determining whether a graph derived from these trees --- the display graph --- has a specific kind of triangulation, which we call legal. Our result is a counterpart to the well known triangulation-based characterization of the compatibility of undirected multi-state characters. 5. Phylogenetic placement of two species known only from resting spores DEFF Research Database (Denmark) Hajek, Ann E; Gryganskyi, Andrii; Bittner, Tonya; 2016-01-01 Molecular methods were used to determine the generic placement of two species of Entomophthorales known only from resting spores. Historically, these species would belong in the form-genus Tarichium, but this classification provides no information about phylogenetic relationships. Using DNA from... 6. Phylogenetic mixtures and linear invariants for equal input models. Science.gov (United States) Casanellas, Marta; Steel, Mike 2016-09-07 The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. 
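The 'equal input' process described above has a simple closed form. With stationary distribution $\pi$ and total substitution rate $\mu$, every substitution event picks its target state $j$ with probability $\pi_j$ independently of the current state, which gives the standard transition probabilities:

```latex
P_{ij}(t) = \pi_j\left(1 - e^{-\mu t}\right), \quad i \neq j,
\qquad
P_{ii}(t) = e^{-\mu t} + \pi_i\left(1 - e^{-\mu t}\right).
```

Taking four states with uniform $\pi$ recovers the Jukes-Cantor model, and four states with arbitrary $\pi$ recovers Felsenstein 1981, matching the generalization described in the abstract.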
We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees, the so-called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987). 7. Phylogenetic analysis of Escherichia coli strains isolated from human samples Directory of Open Access Journals (Sweden) Abdollah Derakhshandeh 2013-12-01 Full Text Available Escherichia coli (E. coli) is a normal inhabitant of the gastrointestinal tract of vertebrates, including humans. Phylogenetic analysis has shown that E. coli is composed of four main phylogenetic groups (A, B1, B2 and D). Groups A and B1 are generally associated with commensals, whereas group B2 is associated with extra-intestinal pathotypes. Most enteropathogenic isolates, however, are assigned to group D. In the present study, a total of 102 E. coli strains, isolated from human samples, were used. Phylogenetic grouping was done based on the Clermont triplex PCR method using primers targeted at three genetic markers, chuA, yjaA and TspE4.C2. Group A contained the majority of the collected isolates (69 isolates, 67.64%), followed by group B2 (18 isolates, 17.64%) and D (15 isolates, 14.7%), and no strains were found to belong to group B1. The distribution of phylogenetic groups in our study suggests that although the majority of strains were commensals, the prevalence of enteropathogenic and extra-intestinal pathotypes was noteworthy. Therefore, the role of E. coli in human infections including diarrhea, urinary tract infections and meningitis should be considered. 8.
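The Clermont triplex-PCR grouping used in the E. coli record above is a small dichotomous decision tree over the three marker amplicons (chuA, yjaA, TspE4.C2). It can be sketched directly (function name is illustrative; the rule follows Clermont et al. 2000):

```python
def clermont_group(chuA, yjaA, tspE4):
    """Assign an E. coli phylogenetic group from the three Clermont (2000)
    triplex-PCR markers. Each argument is True if the amplicon is present."""
    if chuA:
        return "B2" if yjaA else "D"      # chuA+ splits on yjaA
    return "B1" if tspE4 else "A"         # chuA- splits on TspE4.C2
```

Applied to the study's counts, a strain negative for all three markers would fall in group A, the majority class reported.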
PhyloSift: phylogenetic analysis of genomes and metagenomes Directory of Open Access Journals (Sweden) Aaron E. Darling 2014-01-01 Full Text Available Like all organisms on the planet, environmental microbes are subject to the forces of molecular evolution. Metagenomic sequencing provides a means to access the DNA sequence of uncultured microbes. By combining DNA sequencing of microbial communities with evolutionary modeling and phylogenetic analysis we might obtain new insights into microbiology and also provide a basis for practical tools such as forensic pathogen detection. In this work we present an approach to leverage phylogenetic analysis of metagenomic sequence data to conduct several types of analysis. First, we present a method to conduct phylogeny-driven Bayesian hypothesis tests for the presence of an organism in a sample. Second, we present a means to compare community structure across a collection of many samples and develop direct associations between the abundance of certain organisms and sample metadata. Third, we apply new tools to analyze the phylogenetic diversity of microbial communities and again demonstrate how this can be associated to sample metadata. These analyses are implemented in an open source software pipeline called PhyloSift. As a pipeline, PhyloSift incorporates several other programs including LAST, HMMER, and pplacer to automate phylogenetic analysis of protein coding and RNA sequences in metagenomic datasets generated by modern sequencing platforms (e.g., Illumina, 454). 9. [Molecular evidence on the phylogenetic position of tree shrews]. Science.gov (United States) Xu, Ling; Fan, Yu; Jiang, Xue-Long; Yao, Yong-Gang 2013-04-01 The tree shrew is currently located in the Order Scandentia and is widely distributed in Southeast Asia, South Asia, and South China.
Due to its unique characteristics, such as small body size, high brain-to-body mass ratio, short reproductive cycle and life span, and low cost of maintenance, the tree shrew has been proposed as an alternative experimental animal to primates in biomedical research. However, there is unresolved debate regarding the phylogenetic affinity of tree shrews to primates and their phylogenetic position in Euarchontoglires. To help settle this debate, we summarized the available molecular evidence on the phylogenetic position of the tree shrew. Most nuclear DNA data, including recent genome data, suggested that the tree shrew belongs to the Euarchonta clade harboring primates and flying lemurs (colugos). However, analyses of mitochondrial DNA (mtDNA) data suggested a close relationship to lagomorphs and rodents. These different clustering patterns could be explained by discrepancies between nuclear gene data and mtDNA data, as well as the different phylogenetic approaches used in previous studies. Taking all available conclusions together, the robust data from the whole genome of this species support tree shrews being genetically closely related to primates. 10. Metagenomic species profiling using universal phylogenetic marker genes DEFF Research Database (Denmark) Sunagawa, Shinichi; Mende, Daniel R; Zeller, Georg; 2013-01-01 To quantify known and unknown microorganisms at species-level resolution using shotgun sequencing data, we developed a method that establishes metagenomic operational taxonomic units (mOTUs) based on single-copy phylogenetic marker genes. Applied to 252 human fecal samples, the method revealed... 11. Phylogenetic distribution of plant snoRNA families DEFF Research Database (Denmark) Patra Bhattacharya, Deblina; Canzler, Sebastian; Kehr, Stephanie; 2016-01-01 in much detail. In plants, however, their evolution has attracted comparably little attention.
RESULTS: In order to chart the phylogenetic distribution of individual snoRNA families in plants, we applied a sophisticated approach for identifying homologs of known plant snoRNAs across the plant kingdom... 12. Panorama phylogenetic diversity and distribution of Type A influenza virus. Directory of Open Access Journals (Sweden) Shuo Liu Full Text Available BACKGROUND: Type A influenza virus is one of the important pathogens of various animals, including humans, pigs, horses, marine mammals and birds. Currently, the viral type has been classified into 16 hemagglutinin and 9 neuraminidase subtypes, but the phylogenetic diversity and distribution within the viral type largely remain unclear from the whole view. METHODOLOGY/PRINCIPAL FINDINGS: The panorama phylogenetic trees of influenza A viruses were calculated with representative sequences selected from approximately 23,000 candidates available in GenBank using web servers in NCBI and the software MEGA 4.0. Lineages and sublineages were classified according to genetic distances, topology of the phylogenetic trees and distributions of the viruses in hosts, regions and time. CONCLUSIONS/SIGNIFICANCE: Here, two panorama phylogenetic trees of type A influenza virus covering all the 16 hemagglutinin subtypes and 9 neuraminidase subtypes, respectively, were generated. The trees provided us with whole views and some novel information for recognizing influenza A viruses, including the fact that some subtypes of avian influenza viruses are more complicated than the Eurasian and North American lineages previously recognized. They also provide us with a framework to generalize the history and explore the future of the viral circulation and evolution in different kinds of hosts. In addition, a simple and comprehensive nomenclature system for the dozens of lineages and sublineages identified within the viral type was proposed, which, if universally accepted, will facilitate communications on the viral evolution, ecology and epidemiology. 13.
The ethnobotany of psychoactive plant use: a phylogenetic perspective Directory of Open Access Journals (Sweden) Nashmiah Aid Alrashedy 2016-10-01 Full Text Available Psychoactive plants contain chemicals that presumably evolved as allelochemicals but target certain neuronal receptors when consumed by humans, altering perception, emotion and cognition. These plants have been used since ancient times as medicines and in the context of religious rituals for their various psychoactive effects (e.g., as hallucinogens, stimulants, sedatives). The ubiquity of psychoactive plants in various cultures motivates investigation of the commonalities among these plants, in which a phylogenetic framework may be insightful. A phylogeny of culturally diverse psychoactive plant taxa was constructed with their psychotropic effects and affected neurotransmitter systems mapped on the phylogeny. The phylogenetic distribution shows multiple evolutionary origins of psychoactive families. The plant families Myristicaceae (e.g., nutmeg), Papaveraceae (opium poppy), Cactaceae (peyote), Convolvulaceae (morning glory), Solanaceae (tobacco), Lamiaceae (mints), Apocynaceae (dogbane) have a disproportionate number of psychoactive genera with various indigenous groups using geographically disparate members of these plant families for the same psychoactive effect, an example of cultural convergence. Pharmacological traits related to hallucinogenic and sedative potential are phylogenetically conserved within families. Unrelated families that exert similar psychoactive effects also modulate similar neurotransmitter systems (i.e., mechanistic convergence). However, pharmacological mechanisms for stimulant effects were varied even within families, suggesting that stimulant chemicals may be more evolutionarily labile than those associated with hallucinogenic and sedative effects.
Chemically similar psychoactive chemicals may also exist in phylogenetically unrelated lineages, suggesting convergent evolution or differential gene regulation of a common metabolic pathway. Our study has shown that phylogenetic analysis of traditionally used psychoactive plants 14. Characterization of phylogenetic networks with NetTest Directory of Open Access Journals (Sweden) Valiente Gabriel 2010-05-01 Full Text Available Abstract Background Typical evolutionary events like recombination, hybridization or gene transfer make necessary the use of phylogenetic networks to properly depict the evolution of DNA and protein sequences. Although several theoretical classes have been proposed to characterize these networks, they make stringent assumptions that will likely not be met by the evolutionary process. We have recently shown that the complexity of simulated networks is a function of the population recombination rate, and that at moderate and large recombination rates the resulting networks cannot be categorized. However, we do not know whether these results extend to networks estimated from real data. Results We introduce a web server for the categorization of explicit phylogenetic networks, including the most relevant theoretical classes developed so far. Using this tool, we analyzed statistical parsimony phylogenetic networks estimated from ~5,000 DNA alignments, obtained from the NCBI PopSet and Polymorphix databases. The level of characterization was correlated to nucleotide diversity, and a high proportion of the networks derived from these data sets could be formally characterized. Conclusions We have developed a public web server, NetTest (freely available from the software section at http://darwin.uvigo.es), to formally characterize the complexity of phylogenetic networks. Using NetTest we found that most statistical parsimony networks estimated with the program TCS could be assigned to a known network class.
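One of the theoretical network classes such tools test for is the "tree-child" property: every internal node of a rooted network must have at least one child that is not a reticulation (a node with in-degree greater than one). A minimal sketch of that test, assuming a plain adjacency-dict encoding (the names `children` and `is_tree_child` are illustrative and not NetTest's actual API):

```python
def is_tree_child(children):
    """Return True if the rooted network is tree-child: every non-leaf node
    has at least one child with in-degree 1 (i.e. a non-reticulation child).
    `children` maps each node to the list of its child nodes."""
    indeg = {}
    for node, kids in children.items():
        for k in kids:
            indeg[k] = indeg.get(k, 0) + 1
    for node, kids in children.items():
        # a non-leaf node fails if every one of its children is a reticulation
        if kids and not any(indeg.get(k, 0) <= 1 for k in kids):
            return False
    return True
```

Any tree trivially satisfies the property; a network where some node's only child is a reticulation does not.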
The level of network characterization was correlated to nucleotide diversity and dependent upon the intra/interspecific levels, although no significant differences were detected among genes. More research on the properties of phylogenetic networks is clearly needed. 15. The ethnobotany of psychoactive plant use: a phylogenetic perspective Science.gov (United States) 2016-01-01 Psychoactive plants contain chemicals that presumably evolved as allelochemicals but target certain neuronal receptors when consumed by humans, altering perception, emotion and cognition. These plants have been used since ancient times as medicines and in the context of religious rituals for their various psychoactive effects (e.g., as hallucinogens, stimulants, sedatives). The ubiquity of psychoactive plants in various cultures motivates investigation of the commonalities among these plants, in which a phylogenetic framework may be insightful. A phylogeny of culturally diverse psychoactive plant taxa was constructed with their psychotropic effects and affected neurotransmitter systems mapped on the phylogeny. The phylogenetic distribution shows multiple evolutionary origins of psychoactive families. The plant families Myristicaceae (e.g., nutmeg), Papaveraceae (opium poppy), Cactaceae (peyote), Convolvulaceae (morning glory), Solanaceae (tobacco), Lamiaceae (mints), Apocynaceae (dogbane) have a disproportionate number of psychoactive genera with various indigenous groups using geographically disparate members of these plant families for the same psychoactive effect, an example of cultural convergence. Pharmacological traits related to hallucinogenic and sedative potential are phylogenetically conserved within families. Unrelated families that exert similar psychoactive effects also modulate similar neurotransmitter systems (i.e., mechanistic convergence). 
However, pharmacological mechanisms for stimulant effects were varied even within families, suggesting that stimulant chemicals may be more evolutionarily labile than those associated with hallucinogenic and sedative effects. Chemically similar psychoactive chemicals may also exist in phylogenetically unrelated lineages, suggesting convergent evolution or differential gene regulation of a common metabolic pathway. Our study has shown that phylogenetic analysis of traditionally used psychoactive plants suggests 16. Worldwide Phylogenetic Distributions and Population Dynamics of the Genus Histoplasma. Directory of Open Access Journals (Sweden) Marcus de M Teixeira 2016-06-01 Full Text Available Histoplasma capsulatum comprises a worldwide complex of saprobiotic fungi mainly found in nitrogen/phosphate (often bird guano) enriched soils. The microconidia of Histoplasma species may be inhaled by mammalian hosts, and inhalation is followed by a rapid conversion to yeast that can persist in host tissues causing histoplasmosis, a deep pulmonary/systemic mycosis. Histoplasma capsulatum sensu lato is a complex of at least eight clades geographically distributed as follows: Australia, Netherlands, Eurasia, North American classes 1 and 2 (NAm 1 and NAm 2), Latin American groups A and B (LAm A and LAm B) and Africa. With the exception of the Eurasian cluster, those clades are considered phylogenetic species. Increased Histoplasma sampling (n = 234) resulted in the revision of the phylogenetic distribution and population structure using 1,563 aligned nucleotides from four protein-coding regions. The LAm B clade appears to be divided into at least two highly supported clades, which are geographically restricted to either Colombia/Argentina or Brazil, respectively. Moreover, a complex population genetic structure was identified within the LAm A clade supporting multiple monophylogenetic species, which could be driven by rapid host or environmental adaptation (~0.5 MYA).
We found two divergent clades, which include Latin American isolates (newly named LAm A1 and LAm A2), harboring a cryptic cluster in association with bats. At least six new phylogenetic species are proposed in the Histoplasma species complex, supported by different phylogenetic and population genetics methods, comprising the LAm A1, LAm A2, LAm B1, LAm B2, RJ and BAC-1 phylogenetic species. The genetic isolation of Histoplasma could be a result of differential dispersion potential of naturally infected bats and other mammals. In addition, the present study guides isolate selection for future population genomics and genome-wide association studies in this important pathogen complex. 17. Molecular identification and phylogenetic study of Demodex caprae. Science.gov (United States) Zhao, Ya-E; Cheng, Juan; Hu, Li; Ma, Jun-Xian 2014-10-01 The DNA barcode has been widely used in species identification and phylogenetic analysis since 2003, but there have been no reports in Demodex. In this study, to obtain an appropriate DNA barcode for Demodex, molecular identification of Demodex caprae based on mitochondrial cox1 was conducted. Firstly, individual adults and eggs of D. caprae were obtained for genomic DNA (gDNA) extraction; secondly, the mitochondrial cox1 fragment was amplified, cloned, and sequenced; thirdly, cox1 fragments of D. caprae were aligned with those of other Demodex retrieved from GenBank; finally, the intra- and inter-specific divergences were computed and phylogenetic trees were reconstructed to analyze phylogenetic relationships in Demodex. Results obtained from seven 429-bp fragments of D. caprae showed that sequence identities were above 99.1% among three adults and four eggs. The intraspecific divergences in D. caprae, Demodex folliculorum, Demodex brevis, and Demodex canis were 0.0-0.9, 0.5-0.9, 0.0-0.2, and 0.0-0.5%, respectively, while the interspecific divergences between D. caprae and D. folliculorum, D. canis, and D.
brevis were 20.3-20.9%, 21.8-23.0%, and 25.0-25.3%, respectively. The interspecific divergences were 10 times higher than intraspecific ones, indicating a considerable barcoding gap. Furthermore, the phylogenetic trees showed that the four Demodex species gathered separately, representing independent species; and Demodex folliculorum gathered with canine Demodex, D. caprae, and D. brevis in sequence. In conclusion, the selected 429-bp mitochondrial cox1 gene is an appropriate DNA barcode for molecular classification, identification, and phylogenetic analysis of Demodex. D. caprae is an independent species and D. folliculorum is closer to D. canis than to D. caprae or D. brevis. 18. An efficient and extensible approach for compressing phylogenetic trees KAUST Repository Matthews, Suzanne J 2011-01-01 Background: Biologists require new algorithms to efficiently compress and store their large collections of phylogenetic trees. Our previous work showed that TreeZip is a promising approach for compressing phylogenetic trees. In this paper, we extend our TreeZip algorithm by handling trees with weighted branches. Furthermore, by using the compressed TreeZip file as input, we have designed an extensible decompressor that can extract subcollections of trees, compute majority and strict consensus trees, and merge tree collections using set operations such as union, intersection, and set difference. Results: On unweighted phylogenetic trees, TreeZip is able to compress Newick files in excess of 98%. On weighted phylogenetic trees, TreeZip is able to compress a Newick file by at least 73%. TreeZip can be combined with 7zip with little overhead, allowing space savings in excess of 99% (unweighted) and 92% (weighted). Unlike TreeZip, 7zip is not immune to branch rotations, and performs worse as the level of variability in the Newick string representation increases.
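The branch-rotation immunity credited to TreeZip above comes from normalizing each tree before encoding, so that rotated Newick strings of the same topology collapse to one representation. A toy illustration of such a rotation-invariant canonical form for unweighted, name-only Newick strings (this is not TreeZip's actual bipartition-based encoding, just the underlying idea):

```python
def parse(newick):
    """Parse an unweighted Newick string like '((A,B),C);' into nested tuples."""
    pos = 0
    def node():
        nonlocal pos
        if newick[pos] == "(":
            pos += 1                      # skip '('
            kids = [node()]
            while newick[pos] == ",":
                pos += 1
                kids.append(node())
            pos += 1                      # skip ')'
            return tuple(kids)
        start = pos
        while newick[pos] not in ",();":
            pos += 1
        return newick[start:pos]          # leaf name
    return node()

def canonical(tree):
    """Sort children recursively so branch-rotated trees map to one string."""
    if isinstance(tree, str):
        return tree
    return "(" + ",".join(sorted(canonical(c) for c in tree)) + ")"
```

Two rotations of the same topology canonicalize identically, while a genuinely different topology does not; hashing or deduplicating on the canonical string is then rotation-proof.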
Finally, since the TreeZip compressed text (TRZ) file contains all the semantic information in a collection of trees, we can easily filter and decompress a subset of trees of interest (such as the set of unique trees), or build the resulting consensus tree in a matter of seconds. We also show the ease with which set operations can be performed on TRZ files, at speeds quicker than those achieved on Newick or 7zip-compressed Newick files, and without loss of space savings. Conclusions: TreeZip is an efficient approach for compressing large collections of phylogenetic trees. The semantic and compact nature of the TRZ file allow it to be operated upon directly and quickly, without a need to decompress the original Newick file. We believe that TreeZip will be vital for compressing and archiving trees in the biological community. © 2011 Matthews and Williams; licensee BioMed Central Ltd. 19. Habitat-associated phylogenetic community patterns of microbial ammonia oxidizers. Directory of Open Access Journals (Sweden) Antoni Fernàndez-Guerra Full Text Available Microorganisms mediating ammonia oxidation play a fundamental role in the connection between biological nitrogen fixation and anaerobic nitrogen losses. Bacteria and Archaea ammonia oxidizers (AOB and AOA, respectively) have colonized similar habitats worldwide. Ammonia oxidation is the rate-limiting step in nitrification, and the ammonia monooxygenase (Amo) is the key enzyme involved. The molecular ecology of this process has been extensively explored by surveying the gene of the subunit A of the Amo (the amoA gene). In the present study, we explored the phylogenetic community ecology of AOB and AOA, analyzing 5776 amoA gene sequences from >300 isolation sources, and clustering habitats by environmental ontologies. As a whole, phylogenetic richness was larger in AOA than in AOB, and sediments contained the highest phylogenetic richness whereas marine plankton the lowest.
We also observed that freshwater ammonia oxidizers were phylogenetically richer than their marine counterparts. AOA communities were more dissimilar to each other than those of AOB, and consistent monophyletic lineages were observed for sediments, soils, and marine plankton in AOA but not in AOB. The diversification patterns showed a more constant cladogenesis through time for AOB whereas AOA apparently experienced two fast diversification events separated by a long steady-state episode. The diversification rate (γ statistic) for most of the habitats indicated γ(AOA) > γ(AOB). Soil and sediment experienced earlier bursts of diversification whereas habitats usually eutrophic and rich in ammonium such as wastewater and sludge showed accelerated diversification rates towards the present. Overall, this work shows for the first time a global picture of the phylogenetic community structure of both AOB and AOA assemblages following the strictest analytical standards, and provides an ecological view on the differential evolutionary paths experienced by widespread ammonia 20. Improved integration time estimation of endogenous retroviruses with phylogenetic data. Directory of Open Access Journals (Sweden) Hugo Martins Full Text Available BACKGROUND: Endogenous retroviruses (ERVs) are genetic fossils of ancient retroviral integrations that remain in the genome of many organisms. Most loci are rendered non-functional by mutations, but several intact retroviral genes are known in mammalian genomes. Some have been adopted by the host species, while the beneficial roles of others remain unclear. Besides the obvious possible immunogenic impact from transcribing intact viral genes, endogenous retroviruses have also become an interesting and useful tool to study phylogenetic relationships.
The determination of the integration time of these viruses has been based upon the assumption that both 5' and 3' Long Terminal Repeat (LTR) sequences are identical at the time of integration, but evolve separately afterwards. Previous approaches have used either a constant evolutionary rate or a range of rates for these viral loci, and only single-species data. Here we show the advantages of using different approaches. RESULTS: We show that there are strong advantages in using multiple species data and state-of-the-art phylogenetic analysis. We incorporate both simple phylogenetic information and Markov chain Monte Carlo (MCMC) methods to date the integrations of these viruses based on a relaxed molecular clock approach over a Bayesian phylogeny model, and applied them to several selected ERV sequences in primates. These methods treat each ERV locus as having a distinct evolutionary rate for each LTR, and make use of consensual speciation time intervals between primates to calibrate the relaxed molecular clocks. CONCLUSIONS: The use of a fixed rate produces results that vary considerably with ERV family and the actual evolutionary rate of the sequence, and should be avoided whenever multi-species phylogenetic data are available. For genome-wide studies, the simple phylogenetic approach constitutes a better alternative, while still being computationally feasible. 1. Molecular phylogenetics and species delimitation of leaf-toed geckos (Phyllodactylidae: Phyllodactylus) throughout the Mexican tropical dry forest. Science.gov (United States) Blair, Christopher; Méndez de la Cruz, Fausto R; Law, Christopher; Murphy, Robert W 2015-03-01 Methods and approaches for accurate species delimitation continue to be a highly controversial subject in the systematics community. Inaccurate assessment of species' limits precludes accurate inference of historical evolutionary processes.
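The LTR-divergence clock described in the endogenous retrovirus entry above reduces, in its fixed-rate form, to a one-line estimate: both LTRs start identical at integration and then diverge independently, so expected divergence grows at twice the per-LTR rate. A minimal sketch (illustrative names; this is exactly the fixed-rate simplification the paper argues should be replaced by per-LTR relaxed clocks when multi-species data exist):

```python
def ltr_divergence(ltr5, ltr3):
    """Proportion of differing sites between the paired LTRs (p-distance);
    assumes the two LTRs are already aligned and of equal length."""
    assert len(ltr5) == len(ltr3)
    return sum(a != b for a, b in zip(ltr5, ltr3)) / len(ltr5)

def ltr_integration_age(divergence, rate):
    """Fixed-rate integration age: each LTR accumulates rate * t substitutions
    per site independently after integration, so divergence ~ 2 * rate * t,
    giving t = divergence / (2 * rate). Units follow `rate` (e.g. per Myr)."""
    return divergence / (2.0 * rate)
```

For example, 2% divergence at a rate of 0.002 substitutions/site/Myr dates the integration to roughly 5 Myr.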
Recent evidence suggests that multilocus coalescent methods show promise in delimiting species in cryptic clades. We combine multilocus sequence data with coalescence-based phylogenetics in a hypothesis-testing framework to assess species limits and elucidate the timing of diversification in leaf-toed geckos (Phyllodactylus) of Mexico's dry forests. Tropical deciduous forests (TDF) of the Neotropics are among the planet's most diverse ecosystems. However, in comparison to moist tropical forests, little is known about the mode and tempo of biotic evolution throughout this threatened biome. We find increased speciation and substantial, cryptic molecular diversity originating following the formation of Mexican TDF 30-20 million years ago due to orogenesis of the Sierra Madre Occidental and Mexican Volcanic Belt. Phylogenetic results suggest that the Mexican Volcanic Belt, the Rio Fuerte, and Isthmus of Tehuantepec may be important biogeographic barriers. Single- and multilocus coalescent analyses suggest that nearly every sampling locality may be a distinct species. These results suggest unprecedented levels of diversity, a complex evolutionary history, and that the formation and expansion of TDF vegetation in the Miocene may have influenced subsequent cladogenesis of leaf-toed geckos throughout western Mexico. 2. [Antifungal susceptibility profiles of Candida species to triazole: application of new CLSI species-specific clinical breakpoints and epidemiological cutoff values for characterization of antifungal resistance]. Science.gov (United States) Karabıçak, Nilgün; Alem, Nihal 2016-01-01 The Clinical and Laboratory Standards Institute (CLSI) Subcommittee on Antifungal Susceptibility Testing has newly introduced species-specific clinical breakpoints (CBPs) for fluconazole and voriconazole.
When CBPs cannot be determined, wild-type minimal inhibitory concentration (MIC) distributions are detected and epidemiological cutoff values (ECVs) provide valuable means for the detection of emerging resistance. The aim of this study is to determine triazole resistance patterns in Candida species by the recently revised CLSI CBPs. A total of 140 Candida strains isolated from blood cultures of patients with invasive candidiasis hospitalized in various intensive care units in Turkey and sent to our reference laboratory between 2011 and 2012 were included in the study. The isolates were identified by conventional methods, and susceptibility testing was performed against fluconazole, itraconazole and voriconazole, by the 24-h CLSI broth microdilution (BMD) method. Azole resistance rates for all Candida species were determined using the new species-specific CLSI CBPs and ECVs criteria, when appropriate. The species distribution of the isolates was as follows: C.parapsilosis (n=31), C.tropicalis (n=26), C.glabrata (n=21), C.albicans (n=18), C.lusitaniae (n=16), C.krusei (n=16), C.kefyr (n=9), C.guilliermondii (n=2), and C.dubliniensis (n=1). According to the newly determined CLSI CBPs for fluconazole and C.albicans, C.parapsilosis, C.tropicalis [susceptible (S), ≤ 2 µg/ml; dose-dependent susceptible (SDD), 4 µg/ml; resistant (R), ≥ 8 µg/ml] and C.glabrata (SDD, ≤ 32 µg/ml; R, ≥ 64 µg/ml), and for voriconazole and C.albicans, C.parapsilosis, C.tropicalis (S, ≤ 0.12 µg/ml; SDD, 0.25-0.5 µg/ml; R, ≥ 1 µg/ml) and C.krusei (S, ≤ 0.5 µg/ml; SDD, 1 µg/ml; R, ≥ 2 µg/ml), it was found that three of C.albicans, one of C.parapsilosis and one of C.glabrata isolates were resistant to fluconazole, while two of C.albicans and two of C 3. PhyPA: Phylogenetic method with pairwise sequence alignment outperforms likelihood methods in phylogenetics involving highly diverged sequences.
Science.gov (United States) Xia, Xuhua 2016-09-01 While pairwise sequence alignment (PSA) by dynamic programming is guaranteed to generate one of the optimal alignments, multiple sequence alignment (MSA) of highly divergent sequences often results in poorly aligned sequences, plaguing all subsequent phylogenetic analysis. One way to avoid this problem is to use only PSA to reconstruct phylogenetic trees, which can only be done with distance-based methods. I compared the accuracy of this new computational approach (named PhyPA, for phylogenetics by pairwise alignment) against the maximum likelihood method using MSA (the ML+MSA approach), based on nucleotide, amino acid and codon sequences simulated with different topologies and tree lengths. I present a surprising discovery that the fast PhyPA method consistently outperforms the slow ML+MSA approach for highly diverged sequences even when all optimization options were turned on for the ML+MSA approach. Only when sequences are not highly diverged (i.e., when a reliable MSA can be obtained) does the ML+MSA approach outperform PhyPA. The true topologies are always recovered by ML with the true alignment from the simulation. However, with MSA derived from alignment programs such as MAFFT or MUSCLE, the recovered topology consistently has higher likelihood than that for the true topology. Thus, the failure to recover the true topology by ML+MSA is caused not by insufficient search of tree space, but by the distortion of phylogenetic signal by MSA methods. I have implemented PhyPA in DAMBE, together with two approaches that make use of multi-gene data sets to derive phylogenetic support for subtrees, equivalent to resampling techniques such as bootstrapping and jackknifing. 4. [A phylogenetic analysis of plant communities of Teberda Biosphere Reserve].
Science.gov (United States) Shulakov, A A; Egorov, A V; Onipchenko, V G 2016-01-01 Phylogenetic analysis of communities is based on the comparison of distances on the phylogenetic tree between species of a community under study and those distances in random samples taken out of the local flora. It makes it possible to determine to what extent a community composition is formed by more closely related species (i.e., "clustered") or, on the opposite, is more even and includes species that are less related to each other. The first case is usually interpreted as a result of strong influence of abiotic factors, due to which species with similar ecology, a priori more closely related, would remain. In the second case, biotic factors such as competition may come to the fore and lead to forming a community out of distant clades due to divergence of their ecological niches. The aim of this study was to explore the phylogenetic structure in communities of the northwestern Caucasus at two spatial scales - the scale of areas from 4 to 100 m² and the smaller scale within a community. The list of local flora of the alpine belt has been composed using the database of geobotanical descriptions carried out in Teberda Biosphere Reserve at true altitudes exceeding 1800 m. It includes 585 species of flowering plants belonging to 57 families. Basal groups of flowering plants are not represented in the list. At the scale of communities, three classes, namely Thlaspietea rotundifolii - communities formed on screes and pebbles, Calluno-Ulicetea - alpine meadows, and Mulgedio-Aconitetea - subalpine meadows, have not demonstrated significant distinction of phylogenetic structure. At the within-community level, a larger share of closely related species (a clustered community) is detected for alpine meadows. Communities developing on rocks (class Asplenietea trichomanis) and alpine communities (class Juncetea trifidi) also happen to be significantly clustered.
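The clustering test sketched in this abstract - comparing a community's mean pairwise phylogenetic distance against random draws from the local flora - can be illustrated with a toy sketch. The distances, species labels, and function names below are invented for illustration; this is not the authors' code.

```python
import random
from itertools import combinations

def mpd(community, dist):
    """Mean pairwise phylogenetic distance among species in a community."""
    pairs = list(combinations(community, 2))
    return sum(dist[frozenset(p)] for p in pairs) / len(pairs)

def clustering_z(community, flora, dist, n_null=999, seed=1):
    """Z-score of observed MPD against random draws from the local flora.
    Negative z => the community holds more closely related species (clustered)."""
    rng = random.Random(seed)
    obs = mpd(community, dist)
    null = [mpd(rng.sample(flora, len(community)), dist) for _ in range(n_null)]
    mean = sum(null) / n_null
    sd = (sum((x - mean) ** 2 for x in null) / n_null) ** 0.5
    return (obs - mean) / sd

# Toy flora: species A and B are close relatives; all other pairs are distant.
flora = list("ABCDEF")
dist = {frozenset(p): (1.0 if set(p) == {"A", "B"} else 10.0)
        for p in combinations(flora, 2)}

print(clustering_z(["A", "B", "C"], flora, dist))  # negative => clustered
```

The sign of the z-score carries the interpretation used in the abstract: significantly negative values indicate clustering (abiotic filtering), positive values an even, overdispersed community.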
At the same time, alpine lichen communities proved to have an even phylogenetic structure at the small scale. Alpine communities (class Salicetea herbaceae) that 5. GENOME-WIDE PHYLOGENETIC ANALYSIS OF THE PATHOGENIC POTENTIAL OF VIBRIO FURNISSII Directory of Open Access Journals (Sweden) Thomas Michael Lux 2014-08-01 Full Text Available We recently reported the genome sequence of a free-living strain of Vibrio furnissii (NCTC 11218) harvested from an estuarine environment. V. furnissii is a widespread, free-living proteobacterium and emerging pathogen that can cause acute gastroenteritis in humans and lethal zoonoses in aquatic invertebrates, including farmed crustaceans and molluscs. Here we present the fully annotated genome of Vibrio furnissii NCTC 11218 and analyses to further assess the potential pathogenic impact of V. furnissii. We compared the complete genome of V. furnissii with 8 other emerging and pathogenic Vibrio species. We selected and analysed more deeply 10 genomic regions based upon unique or common features, and used 3 of these regions to construct a phylogenetic tree. Thus, we positioned V. furnissii more accurately than before and revealed a closer relationship between V. furnissii and V. cholerae than previously thought. However, V. furnissii lacks several important features normally associated with virulence in the human pathogens V. cholerae and V. vulnificus. We systematically built phylogenetic trees of all the predicted proteins and grouped them according to GO categories. A striking feature of the V. furnissii genome is the hugely increased Super Integron, compared to the other Vibrio species. Analyses of predicted genomic islands resulted in the discovery of a protein sequence that is present only in Vibrio associated with diseases in aquatic animals. We also discovered evidence of high levels of horizontal gene transfer in V. furnissii. V.
furnissii seems therefore to have a dynamic and fluid genome that could quickly adapt to environmental perturbation or increase its pathogenicity. Taken together, these analyses confirm the potential of V. furnissii as an emerging marine and possible human pathogen, especially in the developing, tropical, coastal regions that are most at risk from climate change. 6. Phylogenetics and differentiation of Salmonella Newport lineages by whole genome sequencing. Directory of Open Access Journals (Sweden) Guojie Cao Full Text Available Salmonella Newport has ranked in the top three Salmonella serotypes associated with foodborne outbreaks from 1995 to 2011 in the United States. In the current study, we selected 26 S. Newport strains isolated from diverse sources and geographic locations and then conducted 454 shotgun pyrosequencing to obtain 16-24× coverage of high-quality draft genomes for each strain. Comparative genomic analysis of 28 S. Newport strains (including 2 reference genomes) and 15 outgroup genomes identified more than 140,000 informative SNPs. A resulting phylogenetic tree consisted of four sublineages and indicated that S. Newport had a clear geographic structure. Strains from Asia were divergent from those from the Americas. Our findings demonstrated that analysis using whole genome sequencing data resulted in a more accurate picture of phylogeny compared to that using single genes or small sets of genes. We selected loci around the mutS gene of S. Newport to differentiate distinct lineages, including those between the invH and mutS genes at the 3' end of Salmonella Pathogenicity Island 1 (SPI-1), the ste fimbrial operon, and Clustered Regularly Interspaced Short Palindromic Repeat (CRISPR)-associated proteins (cas). These genes in the outgroup genomes held high similarity with either S. Newport Lineage II or III at the same loci. S.
Newport Lineages II and III have different evolutionary histories in this region and our data demonstrated genetic flow and homologous recombination events around mutS. The findings suggested that S. Newport Lineages II and III diverged early in the serotype evolution and have evolved largely independently. Moreover, we identified genes that could delineate sublineages within the phylogenetic tree and that could be used as potential biomarkers for trace-back investigations during outbreaks. Thus, whole genome sequencing data enabled us to better understand the genetic background of pathogenicity and evolutionary history of S 7. Modelling heterotachy in phylogenetic inference by reversible-jump Markov chain Monte Carlo. Science.gov (United States) 2008-12-27 The rate at which a given site in a gene sequence alignment evolves over time may vary. This phenomenon--known as heterotachy--can bias or distort phylogenetic trees inferred from models of sequence evolution that assume rates of evolution are constant. Here, we describe a phylogenetic mixture model designed to accommodate heterotachy. The method sums the likelihood of the data at each site over more than one set of branch lengths on the same tree topology. A branch-length set that is best for one site may differ from the branch-length set that is best for some other site, thereby allowing different sites to have different rates of change throughout the tree. Because rate variation may not be present in all branches, we use a reversible-jump Markov chain Monte Carlo algorithm to identify those branches in which reliable amounts of heterotachy occur. We implement the method in combination with our 'pattern-heterogeneity' mixture model, applying it to simulated data and five published datasets. 
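The core idea of the mixture model described above -- summing each site's likelihood over more than one branch-length set -- can be sketched for the simplest possible case: two taxa under the Jukes-Cantor (JC69) model, with each mixture component being a single pairwise distance. The function names and parameter values are illustrative assumptions, not the authors' implementation.

```python
from math import exp, log

def site_lik(is_same, d):
    """JC69 likelihood of one aligned site for two taxa separated by
    d expected substitutions per site."""
    p_same = 0.25 + 0.75 * exp(-4.0 * d / 3.0)
    # A mismatch can be any of the 3 other bases, equally likely under JC69.
    return p_same if is_same else (1.0 - p_same) / 3.0

def mixture_log_lik(sites, dists, weights):
    """Heterotachy-style mixture: each site's likelihood is summed over
    several branch-length sets (here, one distance per component), so
    different sites may be best explained by different components."""
    return sum(
        log(sum(w * site_lik(s, d) for w, d in zip(weights, dists)))
        for s in sites
    )

# Ten sites (eight matches, two mismatches) under a two-component mixture.
sites = [True] * 8 + [False] * 2
print(mixture_log_lik(sites, dists=[0.05, 0.8], weights=[0.7, 0.3]))
```

The reversible-jump step in the paper goes further, letting the sampler add or remove components per branch; the sketch only shows the per-site mixture sum that all such models share.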
We find that complex evolutionary signals of heterotachy are routinely present over and above variation in the rate or pattern of evolution across sites, that the reversible-jump method requires far fewer parameters than conventional mixture models to describe it, and that it serves to identify the regions of the tree in which heterotachy is most pronounced. The reversible-jump procedure also removes the need for a posteriori tests of 'significance' such as the Akaike or Bayesian information criterion tests, or Bayes factors. Heterotachy has important consequences for the correct reconstruction of phylogenies as well as for tests of hypotheses that rely on accurate branch-length information. These include molecular clocks, analyses of tempo and mode of evolution, comparative studies and ancestral state reconstruction. The model is available from the authors' website, and can be used for the analysis of both nucleotide and morphological data. 8. Rooting the tree of life: the phylogenetic jury is still out Science.gov (United States) Gouy, Richard; Baurain, Denis; Philippe, Hervé 2015-01-01 This article aims to shed light on difficulties in rooting the tree of life (ToL) and to explore the (sociological) reasons underlying the limited interest in accurately addressing this fundamental issue. First, we briefly review the difficulties plaguing phylogenetic inference and the ways to improve the modelling of the substitution process, which is highly heterogeneous, both across sites and over time. We further observe that enriched taxon samplings, better gene samplings and clever data removal strategies have led to numerous revisions of the ToL, and that these improved shallow phylogenies nearly always relocate simple organisms higher in the ToL, provided that long-branch attraction artefacts are kept at bay. Then, we note that, despite the flood of genomic data available since 2000, there has been surprisingly little interest in inferring the root of the ToL.
Furthermore, the rare studies dealing with this question were almost always based on methods dating from the 1990s that have been shown to be inaccurate for much shallower issues! This leads us to argue that the current consensus about a bacterial root for the ToL can be traced back to the prejudice of Aristotle's Great Chain of Being, in which simple organisms are ancestors of more complex life forms. Finally, we demonstrate that even the best models cannot yet handle the complexity of the evolutionary process encountered both at shallow depth, when the outgroup is too distant, and at the level of inter-domain relationships. Altogether, we conclude that the commonly accepted bacterial root is still unproven and that the root of the ToL should be revisited using phylogenomic supermatrices to ensure that new evidence for eukaryogenesis, such as the recently described Lokiarchaeota, is interpreted in a sound phylogenetic framework. PMID:26323760 9. Cellular organization in germ tube tips of Gigaspora and its phylogenetic implications. Science.gov (United States) Bentivenga, Stephen P; Kumar, T K Arun; Kumar, Leticia; Roberson, Robert W; McLaughlin, David J 2013-01-01 Comparative morphology of the fine structure of fungal hyphal tips is often phylogenetically informative. In particular, morphology of the Spitzenkörper varies among higher taxa. To date no one has thoroughly characterized the hyphal tips of members of the phylum Glomeromycota to compare them with other fungi. This is partly due to the difficulty of growing and manipulating living hyphae of these obligate symbionts. We observed growing germ tubes of Gigaspora gigantea, G. margarita and G. rosea with a combination of light microscopy (LM) and transmission electron microscopy (TEM). For TEM, we used both traditional chemical fixation and cryo-fixation methods. Germ tubes of all species were extremely sensitive to manipulation.
Healthy germ tubes often showed rapid bidirectional cytoplasmic streaming, whereas germ tubes that had been disturbed showed reduced or no cytoplasmic movement. Actively growing germ tubes contain a cluster of 10-20 spherical bodies approximately 3-8 μm behind the apex. The bodies, which we hypothesize are lipid bodies, move rapidly in healthy germ tubes. These bodies disappear immediately after any cellular perturbation. Cells prepared with cryo-techniques had superior preservation compared to those processed with traditional chemical protocols. For example, cryo-prepared samples displayed two cell-wall layers, at least three vesicle types near the tip, and three distinct cytoplasmic zones. We did not detect a Spitzenkörper with either LM or TEM techniques, and the tip organization of Gigaspora germ tubes appeared to be similar to that of hyphae in zygomycetous fungi. This observation was supported by a phylogenetic analysis of microscopic characters of hyphal tips from members of five fungal phyla. Our work emphasizes the sensitive nature of cellular organization, and the need for as little manipulation as possible to observe germ tube structure accurately. 10. QueTAL: a suite of tools to classify and compare TAL effectors functionally and phylogenetically Science.gov (United States) Pérez-Quintero, Alvaro L.; Lamy, Léo; Gordon, Jonathan L.; Escalon, Aline; Cunnac, Sébastien; Szurek, Boris; Gagnevin, Lionel 2015-01-01 Transcription Activator-Like (TAL) effectors from Xanthomonas plant pathogenic bacteria can bind to the promoter region of plant genes and induce their expression. DNA-binding specificity is governed by a central domain made of nearly identical repeats, each determining the recognition of one base pair via two amino acid residues (a.k.a. Repeat Variable Di-residue, or RVD).
Knowing how TAL effectors differ from each other within and between strains would be useful to infer functional and evolutionary relationships, but their repetitive nature precludes reliable use of traditional alignment methods. The suite QueTAL was therefore developed to offer tailored tools for comparison of TAL effector genes. The program DisTAL considers each repeat as a unit, transforms a TAL effector sequence into a sequence of coded repeats and makes pair-wise alignments between these coded sequences to construct trees. The program FuncTAL is aimed at finding TAL effectors with similar DNA-binding capabilities. It calculates correlations between position weight matrices of the potential target DNA sequences predicted from the RVD sequence, and builds trees based on these correlations. The programs accurately represented phylogenetic and functional relationships between TAL effectors using either simulated or literature-curated data. When using the programs on a large set of TAL effector sequences, the DisTAL tree largely reflected the expected species phylogeny. In contrast, FuncTAL showed that TAL effectors with similar binding capabilities can be found among phylogenetically distant taxa. This suite will help users rapidly analyse any TAL effector genes of interest and compare them to other available TAL genes, and should improve our understanding of TAL effector evolution. It is available at http://bioinfo-web.mpl.ird.fr/cgi-bin2/quetal/quetal.cgi. PMID:26284082 11. QueTAL: a suite of tools to classify and compare TAL effectors functionally and phylogenetically Directory of Open Access Journals (Sweden) Alvaro L Pérez-Quintero 2015-08-01 Full Text Available Transcription Activator-Like (TAL) effectors from Xanthomonas plant pathogenic bacteria can bind to the promoter region of plant genes and induce their expression.
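DisTAL's key idea -- treating each repeat, coded by its RVD, as a single alignment unit rather than aligning nucleotides or amino acids directly -- can be illustrated with a plain edit distance over repeat codes. This is a sketch only: DisTAL's actual scoring scheme differs, and the RVD sequences below are hypothetical.

```python
def repeat_distance(a, b):
    """Edit distance where each aligned unit is a whole repeat (coded by
    its RVD, e.g. "NI" or "HD"), not a single character. Classic
    Levenshtein dynamic programming over lists of repeat codes."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i            # delete all repeats of a[:i]
    for j in range(n + 1):
        dp[0][j] = j            # insert all repeats of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,       # repeat deletion
                           dp[i][j - 1] + 1,       # repeat insertion
                           dp[i - 1][j - 1] + cost)  # match/substitution
    return dp[m][n]

tal1 = ["NI", "HD", "HD", "NG"]   # hypothetical RVD sequences
tal2 = ["NI", "HD", "NN", "NG"]
print(repeat_distance(tal1, tal2))  # 1: one repeat substitution
```

A matrix of such pairwise distances is what a distance-based tree builder (e.g. neighbor-joining) would then consume.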
DNA-binding specificity is governed by a central domain made of nearly identical repeats, each determining the recognition of one base pair via two amino acid residues (a.k.a. Repeat Variable Di-residue, or RVD). Knowing how TAL effectors differ from each other within and between strains would be useful to infer functional and evolutionary relationships, but their repetitive nature precludes reliable use of traditional alignment methods. The suite QueTAL was therefore developed to offer tailored tools for comparison of TAL effector genes. The program DisTAL considers each repeat as a unit, transforms a TAL effector sequence into a sequence of coded repeats and makes pair-wise alignments between these coded sequences to construct trees. The program FuncTAL is aimed at finding TAL effectors with similar DNA-binding capabilities. It calculates correlations between position weight matrices obtained from the RVD sequence, and builds trees based on these correlations. The programs accurately represented phylogenetic and functional relationships between TAL effectors using either simulated or literature-curated data. When using the programs on a large set of TAL effector sequences, the DisTAL tree largely reflected the expected species phylogeny. In contrast, FuncTAL showed that TAL effectors with similar binding capabilities can be found among phylogenetically distant taxa. This suite will help users rapidly analyse any TAL effector genes of interest and compare them to other available TAL genes, and should improve our understanding of TAL effector evolution. It is available at http://bioinfo-web.mpl.ird.fr/cgi-bin2/quetal/quetal.cgi. 12.
Phylogenetic relationships of host insects of Cordyceps sinensis inferred from mitochondrial Cytochrome b sequences Institute of Scientific and Technical Information of China (English) Cheng Zhou; Geng Yang; Liang Honghui; Yang Xiaoling; Li Shan; Zhu Yunguo; Guo Guangpu; Zhou Tongshui; Chen Jiakuan 2007-01-01 This study used sequences of the mitochondrial Cytochrome b (Cytb) gene to estimate phylogenetic relationships among the host Hepialidae insects of Cordyceps sinensis. Genomic DNA of the host insect was extracted from the head part of dead larvae of 18 cordyceps populations and 2 species of Hepialus, and the Cytb fragment of the host insect was amplified by PCR. Nucleotide sequence alignments and homologous sequences of 24 species of host Hepialidae insects of Cordyceps sinensis were obtained from GenBank and were used to construct phylogenetic trees based on the neighbor-joining method. The results showed that the genus Bipectilus diverged earlier than the genera Hepialus and Hepialiscus. Hepialus host insects of Cordyceps sinensis comprise numerous species with different morphological characteristics and geographical distributions. The interspecific genetic differentiations are obvious in Hepialus. Thus, the genus Hepialus might be considered of polyphyletic origin. Cytb sequences have abundant variation among the host insects of Cordyceps sinensis at the species and genus levels. The divergence rate of Cytb sequences among the species in Hepialus ranged from 0.23% to 9.24%, except that Hepialus pratensis and Hepialus jinshaensis have the same sequence. Cytb sequences can be used for species identification of host insects of Cordyceps sinensis, but further confirmation in more host insect species is needed.
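Divergence rates such as the 0.23% to 9.24% quoted above are typically uncorrected pairwise distances between aligned sequences. A minimal sketch (the sequences below are invented, not real Cytb data):

```python
def p_distance(seq1, seq2):
    """Proportion of differing sites between two aligned sequences
    (uncorrected p-distance, often reported as a percentage)."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to equal length")
    diffs = sum(1 for a, b in zip(seq1, seq2) if a != b)
    return diffs / len(seq1)

# Two toy aligned fragments differing at one of ten sites:
a = "ATGACCAACA"
b = "ATGACTAACA"
print(f"{100 * p_distance(a, b):.2f}%")  # 10.00%
```

A full pairwise p-distance matrix over all taxa is exactly the input a neighbor-joining tree builder takes.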
To obtain the Cytb sequence of the host insect by amplifying DNA extracted from the head part of the dead larva in cordyceps turns out to be an effective and accurate approach, which will be useful for studies on the phylogeny and genetic structure of host insects of cordyceps populations, especially for analyzing relationships between C.sinensis and its host insects. 13. Comparative Analysis of Begonia Plastid Genomes and Their Utility for Species-Level Phylogenetics. Science.gov (United States) Harrison, Nicola; Harrison, Richard J; Kidner, Catherine A 2016-01-01 Recent, rapid radiations make species-level phylogenetics difficult to resolve. We used a multiplexed, high-throughput sequencing approach to identify informative genomic regions to resolve phylogenetic relationships at low taxonomic levels in Begonia, from a survey of sixteen species. A long-range PCR method was used to generate draft plastid genomes to provide a strong phylogenetic backbone, identify fast-evolving regions and provide informative molecular markers for species-level phylogenetic studies in Begonia. 14. Osiris: accessible and reproducible phylogenetic and phylogenomic analyses within the Galaxy workflow management system OpenAIRE 2014-01-01 Background Phylogenetic tools and ‘tree-thinking’ approaches increasingly permeate all biological research. At the same time, phylogenetic data sets are expanding at breakneck pace, facilitated by increasingly economical sequencing technologies. Therefore, there is an urgent need for accessible, modular, and sharable tools for phylogenetic analysis. Results We developed a suite of wrappers for new and existing phylogenetics tools for the Galaxy workflow management system that we call Osiris. ... 15. Correcting the disconnect between phylogenetics and biodiversity informatics.
Science.gov (United States) Miller, Joseph T; Jolley-Rogers, Garry 2014-01-14 Rich collections of biodiversity data are now synthesized in publicly available databases, and phylogenetic knowledge now provides a sound understanding of the origin of organisms and their place in the tree of life. However, these knowledge bases are poorly linked, leading to underutilization or, worse, an incorrect understanding of biodiversity because of poor evolutionary context. We address this problem by integrating biodiversity information aggregated from many sources onto phylogenetic trees. PhyloJIVE connects biodiversity and phylogeny knowledge bases by providing an integrated evolutionary view of biodiversity data, which in turn can improve biodiversity research and the conservation decision making process. Biodiversity science must assert the centrality of evolution to provide effective data to counteract global change and biodiversity loss. 16. Bilateral Chondroepitrochlearis Muscle: Case Report, Phylogenetic Analysis, and Clinical Significance Directory of Open Access Journals (Sweden) Sujeewa P. W. Palagama 2016-01-01 Full Text Available Anomalous muscular variants of pectoralis major have been reported on several occasions in the medical literature. Among them, chondroepitrochlearis is one of the rarest. Therefore, this study aims to provide a comprehensive description of its anatomy and subsequent clinical significance, along with its phylogenetic importance in pectoral muscle evolution with regard to primate posture. The authors suggest a more appropriate name, "chondroepicondylaris", to better reflect its proximal attachment to the costochondral junction and distal attachment to the epicondyle of the humerus; in addition, we suggest a new theory of phylogenetic significance to explain the twisting of the pectoralis major tendon in primates that may have occurred with their adoption of bipedalism and arboreal lifestyle.
Finally, the clinical significance of this aberrant muscle is elaborated as a cause of potential neurovascular entrapment and as a possible hurdle during axillary surgeries (i.e., mastectomy). 17. Phylogenetic Group Determination of Escherichia coli Isolated from Animals Samples Directory of Open Access Journals (Sweden) Fernanda Morcatti Coura 2015-01-01 Full Text Available This study analyzes the occurrence and distribution of phylogenetic groups of 391 strains of Escherichia coli isolated from poultry, cattle, and water buffalo. The frequency of the phylogroups was A = 19%, B1 = 57%, B2 = 2.3%, C = 4.6%, D = 2.8%, E = 11%, and F = 3.3%. Phylogroups A (P<0.001) and F (P=0.018) were associated with E. coli strains isolated from poultry, phylogroups B1 (P<0.001) and E (P=0.002) were associated with E. coli isolated from cattle, and phylogroups B2 (P=0.003) and D (P=0.017) were associated with E. coli isolated from water buffalo. This report demonstrates that some phylogroups are associated with the host analyzed, and the results provide knowledge of the phylogenetic composition of E. coli from domestic animals. 18. Phylogenetic biogeography and taxonomy of disjunctly distributed bryophytes Institute of Scientific and Technical Information of China (English) Jochen HEINRICHS; Jörn HENTSCHEL; Kathrin FELDBERG; Andrea BOMBOSCH; Harald SCHNEIDER 2009-01-01 More than 200 research papers on the molecular phylogeny and phylogenetic biogeography of bryophytes have been published since the beginning of this millennium. These papers corroborated assumptions of a complex genetic structure of morphologically circumscribed bryophytes, and raised reservations against many morphologically justified species concepts, especially within the mosses. However, many molecular studies allowed for corrections and modifications of morphological classification schemes.
Several studies reported that the phylogenetic structure of disjunctly distributed bryophyte species reflects their geographical ranges rather than morphological disparities. Molecular data led to new appraisals of distribution ranges and allowed for the reconstruction of refugia and migration routes. Intercontinental ranges of bryophytes are often caused by dispersal rather than geographical vicariance. Many distribution patterns of disjunct bryophytes are likely formed by processes such as short distance dispersal, rare long distance dispersal events, extinction, recolonization and diversification. 19. The evolution of tumour phylogenetics: principles and practice. Science.gov (United States) Schwartz, Russell; Schäffer, Alejandro A 2017-04-01 Rapid advances in high-throughput sequencing and a growing realization of the importance of evolutionary theory to cancer genomics have led to a proliferation of phylogenetic studies of tumour progression. These studies have yielded not only new insights but also a plethora of experimental approaches, sometimes reaching conflicting or poorly supported conclusions. Here, we consider this body of work in light of the key computational principles underpinning phylogenetic inference, with the goal of providing practical guidance on the design and analysis of scientifically rigorous tumour phylogeny studies. We survey the range of methods and tools available to the researcher, their key applications, and the various unsolved problems, closing with a perspective on the prospects and broader implications of this field. 20. Phylogenetic Analysis of Orgyia pseudotsugata Single-nucleocapsid Nucleopolyhedrovirus Institute of Scientific and Technical Information of China (English) 2007-01-01 The Douglas-fir tussock moth Orgyia pseudotsugata (Lepidoptera: Lymantriidae) is a frequent defoliator of Douglas-fir and true firs in western USA and Canada. A single nucleopolyhedrovirus (SNPV) isolated from O.
pseudotsugata larvae in Canada (OpSNPV) was previously analyzed via its polyhedrin gene, but its phylogenetic status was ambiguous. Sequences of four conserved baculovirus genes, polyhedrin, lef-8, pif-2 and dpol, were amplified from OpSNPV DNA in polymerase chain reactions using degenerate primer sets, and their sequences were analyzed phylogenetically. The analysis revealed that OpSNPV belongs to group II NPVs and is most closely related to the SNPVs that infect O. ericae and O. anartoides. These results show the need for multiple, concatenated gene phylogenies to classify baculoviruses. 1. Molecular phylogenetic analysis of an endangered Mexican sparrow: Spizella wortheni. Science.gov (United States) Canales-del-Castillo, Ricardo; Klicka, John; Favela, Susana; González-Rojas, José I 2010-12-01 The Worthen's Sparrow (Spizella wortheni) is an endemic bird species of the Mexican Plateau that is protected by Mexican law. Considering its limited range (25 km²), small population size (100-120 individuals), and declining population, it is one of the most endangered avian species in North America. Although it has been assumed to be the sister taxon of the Field Sparrow (Spizella pusilla), the systematic and evolutionary relationships of Worthen's Sparrow have never been tested using modern molecular phylogenetic methods. We addressed the molecular phylogeny of S. wortheni by analyzing six mitochondrial genes (3571 bp) from all of the natural members of the genus Spizella. Our maximum likelihood and Bayesian analyses indicate that despite the superficial similarity, S. wortheni is not the sister taxon of S. pusilla, but is instead most closely related to the Brewer's Sparrow (Spizella breweri). New insights into the phylogenetic relationships within the genus Spizella are also presented. 2.
Phylogenetic clusters of rhizobia revealed by genome structures Institute of Scientific and Technical Information of China (English) ZHENG Junfang; LIU Guirong; ZHU Wanfu; ZHOU Yuguang; LIU Shulin 2004-01-01 Rhizobia, bacteria that fix atmospheric nitrogen, are important agricultural resources. In order to establish the evolutionary relationships among rhizobia isolated from different geographic regions and different plant hosts for systematic studies, we evaluated the use of the physical structure of rhizobial genomes as a phylogenetic marker to categorize these bacteria. In this work, we analyzed the features of the genome structures of 64 rhizobial strains. These rhizobial strains were divided into 21 phylogenetic clusters according to the features of genome structures evaluated by the endonuclease I-CeuI. These clusters were supported by 16S rRNA comparisons and the genomic sequences of four rhizobial strains, but they are largely different from those based on the current taxonomic scheme (except 16S rRNA). 3. Aspergillus niger contains the cryptic phylogenetic species A. awamori DEFF Research Database (Denmark) Perrone, Giancarlo; Stea, Gaetano; Epifani, Filomena 2011-01-01 Aspergillus section Nigri is an important group of species for food and medical mycology, and biotechnology. The Aspergillus niger ‘aggregate’ represents its most complicated taxonomic subgroup, containing eight morphologically indistinguishable taxa: A. niger, Aspergillus tubingensis, Aspergillus acidus, Aspergillus brasiliensis, Aspergillus costaricaensis, Aspergillus lacticoffeatus, Aspergillus piperis, and Aspergillus vadensis. Aspergillus awamori, first described by Nakazawa, has been compared taxonomically with other black aspergilli and recently has been treated as a synonym of A. niger.
Phylogenetic analyses of sequences generated from portions of three genes coding for the proteins β-tubulin (benA), calmodulin (CaM), and the translation elongation factor-1 alpha (TEF-1α) of a population of A. niger strains isolated from grapes in Europe revealed the presence of a cryptic phylogenetic species... 4. PoInTree: A Polar and Interactive Phylogenetic Tree Institute of Scientific and Technical Information of China (English) Carreras Marco; Gianti Eleonora; Sartori Luca; Plyte Simon Edward; Isacchi Antonella; Bosotti Roberta 2005-01-01 PoInTree (Polar and Interactive Tree) is an application that allows the user to build, visualize, and customize phylogenetic trees in a polar, interactive, and highly flexible view. It takes a FASTA file or multiple alignment formats as input. Phylogenetic tree calculation is based on a sequence distance method and utilizes the Neighbor Joining (NJ) algorithm. It also allows displaying precalculated trees of the major protein families based on Pfam classification. In PoInTree, nodes can be dynamically opened and closed, and distances between genes are graphically represented. The tree root can be centered on a selected leaf. A text search mechanism, color-coding, and labeling display are integrated. The visualizer can be connected to an Oracle database containing information on sequences and other biological data, helping to guide their interpretation within a given protein family across multiple species. The application is written in Borland Delphi and based on the VCL Teechart Pro 6 graphical component (Steema software). 5. Beyond barcoding: a mitochondrial genomics approach to molecular phylogenetics and diagnostics of blowflies (Diptera: Calliphoridae).
Science.gov (United States) Nelson, Leigh A; Lambkin, Christine L; Batterham, Philip; Wallman, James F; Dowton, Mark; Whiting, Michael F; Yeates, David K; Cameron, Stephen L 2012-12-15 Members of the Calliphoridae (blowflies) are significant for medical and veterinary management, due to the ability of some species to consume living flesh as larvae, and for forensic investigations, due to the ability of others to develop in corpses. Because of the difficulty of accurately identifying larval blowflies to species, there is a need for DNA-based diagnostics for this family; however, the widely used DNA-barcoding marker, cox1, has been shown to fail for several groups within this family. Additionally, many phylogenetic relationships within the Calliphoridae are still unresolved, particularly deeper-level relationships. Sequencing whole mt genomes has been demonstrated both as an effective method for identifying the most informative diagnostic markers and as a means of resolving phylogenetic relationships. Twenty-seven complete, or nearly complete, mt genomes were sequenced, representing 13 species, seven genera, four calliphorid subfamilies, and a member of the related family Tachinidae. PCR and sequencing primers developed for sequencing one calliphorid species could be reused to sequence related species within the same superfamily, with success rates ranging from 61% to 100%, demonstrating the speed and efficiency with which an mt genome dataset can be assembled. Comparison of molecular divergences for each of the 13 protein-coding genes and 2 ribosomal RNA genes at a range of taxonomic scales identified novel targets for development as diagnostic markers, which were 117-200% more variable than the markers used previously in calliphorids. Phylogenetic analysis of whole mt genome sequences resulted in much stronger support for family- and subfamily-level relationships.
The Calliphoridae are polyphyletic, with the Polleninae more closely related to the Tachinidae, and the Sarcophagidae are the sister group of the remaining calliphorids. Within the Calliphoridae, there was strong support for the monophyly of the Chrysomyinae and Luciliinae and for the sister 6. Functionally and phylogenetically diverse plant communities key to soil biota OpenAIRE Milcu, Alexandru; Allan, Eric; Roscher, Christiane; Jenkins, Tania; Sebastian T Meyer; Flynn, Dan; Bessler, Holger; Buscot, François; Engels, Christof; Gubsch, Marlén; König, Stephan; Lipowsky, Annett; Loranger, Jessy; Renker, Carsten; Scherber, Christoph 2013-01-01 Recent studies assessing the role of biological diversity for ecosystem functioning indicate that the diversity of functional traits and the evolutionary history of species in a community, not the number of taxonomic units, ultimately drives the biodiversity–ecosystem-function relationship. Here, we simultaneously assessed the importance of plant functional trait and phylogenetic diversity as predictors of major trophic groups of soil biota (abundance and diversity), six years from the onset ... 7. Computational and phylogenetic validation of nematode horizontal gene transfer OpenAIRE Bird David; Scholl Elizabeth H 2011-01-01 Abstract Sequencing of expressed genes has shown that nematodes, particularly the plant-parasitic nematodes, have genes purportedly acquired from other kingdoms by horizontal gene transfer. The prevailing orthodoxy is that such transfer has been a driving force in the evolution of niche specificity, and a recent paper in BMC Evolutionary Biology that presents a detailed phylogenetic analysis of cellulase genes in the free-living nematode Pristionchus pacificus at the species, genus and family... 8. 
Data on taxonomic status and phylogenetic relationship of tits Directory of Open Access Journals (Sweden) Xue-Juan Li 2017-02-01 Full Text Available The data in this paper are related to the research article entitled “Taxonomic status and phylogenetic relationship of tits based on mitogenomes and nuclear segments” (X.J. Li et al., 2016 [1]. The mitochondrial genomes and nuclear segments of tits were sequenced to analyze mitochondrial characteristics and phylogeny. In the data, the analyzed results are presented. The data holds the resulting files of mitochondrial characteristics, heterogeneity, best schemes, and trees. 9. Data on taxonomic status and phylogenetic relationship of tits. Science.gov (United States) Li, Xue-Juan; Lin, Li-Liang; Cui, Ai-Ming; Bai, Jie; Wang, Xiao-Yang; Xin, Chao; Zhang, Zhen; Yang, Chao; Gao, Rui-Rui; Huang, Yuan; Lei, Fu-Min 2017-02-01 The data in this paper are related to the research article entitled "Taxonomic status and phylogenetic relationship of tits based on mitogenomes and nuclear segments" (X.J. Li et al., 2016) [1]. The mitochondrial genomes and nuclear segments of tits were sequenced to analyze mitochondrial characteristics and phylogeny. In the data, the analyzed results are presented. The data holds the resulting files of mitochondrial characteristics, heterogeneity, best schemes, and trees. 10. Applications of next-generation sequencing to phylogeography and phylogenetics. Science.gov (United States) McCormack, John E; Hird, Sarah M; Zellmer, Amanda J; Carstens, Bryan C; Brumfield, Robb T 2013-02-01 This is a time of unprecedented transition in DNA sequencing technologies. Next-generation sequencing (NGS) clearly holds promise for fast and cost-effective generation of multilocus sequence data for phylogeography and phylogenetics. 
However, the focus on non-model organisms, in addition to uncertainty about which sample preparation methods and analyses are appropriate for different research questions and evolutionary timescales, has contributed to a lag in the application of NGS to these fields. Here, we outline some of the major obstacles specific to the application of NGS to phylogeography and phylogenetics, including the focus on non-model organisms, the necessity of obtaining orthologous loci in a cost-effective manner, and the predominant use of gene trees in these fields. We describe the most promising methods of sample preparation that address these challenges. Methods that reduce the genome by restriction digest and manual size selection are most appropriate for studies at the intraspecific level, whereas methods that target specific genomic regions (i.e., target enrichment or sequence capture) have wider applicability, from the population level to deep-level phylogenomics. Additionally, we give an overview of how to analyze NGS data to arrive at data sets applicable to the standard toolkit of phylogeography and phylogenetics, including initial data processing to alignment and genotype calling (both SNPs and loci involving many SNPs). Even though whole-genome sequencing is likely to become affordable rather soon, because phylogeography and phylogenetics rely on analysis of hundreds of individuals in many cases, methods that reduce the genome to a subset of loci should remain more cost-effective for some time to come. 11. Degrees of generators of phylogenetic semigroups on graphs CERN Document Server Buczyńska, Weronika; Kubjas, Kaie; Michalek, Mateusz 2011-01-01 We study the phylogenetic semigroups associated with trivalent graphs introduced by Buczyńska and related to works of Wiśniewski, Sturmfels, Xu, Manon, Jeffrey, Weitsmann. Our main theorem bounds the degree of the generators of a semigroup associated with a graph with first Betti number g by g + 1.
Furthermore, this bound is effective for many graphs. In particular, the caterpillar graph with g loops has generators of degree g + 1 for even g and of degree g for odd g. 12. Application of the phylogenetic analysis in mitochondrial disease study Institute of Scientific and Technical Information of China (English) WANG ChengYe; KONG QingPeng; ZHANG YaPing 2008-01-01 Mitochondrial disease has recently received increasing attention. However, the case-control design commonly adopted in this field is vulnerable to genetic background, population stratification and poor data quality. Although phylogenetic analysis could help solve some of these problems, it has not received adequate attention. This paper is a review of this method as well as its application in mitochondrial disease study. 13. ImOSM: intermittent evolution and robustness of phylogenetic methods. Science.gov (United States) Thi Nguyen, Minh Anh; Gesell, Tanja; von Haeseler, Arndt 2012-02-01 Among the criteria to evaluate the performance of a phylogenetic method, robustness to model violation is of particular practical importance, as complete a priori knowledge of evolutionary processes is typically unavailable. For studies of robustness in phylogenetic inference, a utility to add well-defined model violations to the simulated data would be helpful. We therefore introduce ImOSM, a tool to imbed intermittent evolution as model violation into an alignment. Intermittent evolution refers to extra substitutions occurring randomly on branches of a tree, thus changing alignment site patterns. This means that the extra substitutions are placed on the tree after the typical process of sequence evolution is completed. We then study the robustness of widely used phylogenetic methods: maximum likelihood (ML), maximum parsimony (MP), and a distance-based method (BIONJ) to various scenarios of model violation.
Violation of rates across sites (RaS) heterogeneity and simultaneous violation of RaS and the transition/transversion ratio on two nonadjacent external branches hinder all the methods' recovery of the true topology for a four-taxon tree. For an eight-taxon balanced tree, the violations cause each of the three methods to infer a different topology. Both ML and MP fail, whereas BIONJ, which calculates the distances based on the ML-estimated parameters, reconstructs the true tree. Finally, we report that a test of model homogeneity and goodness-of-fit tests have enough power to detect such model violations. The outcome of the tests can help to gain confidence in the inferred trees. Therefore, we recommend using these tests in practical phylogenetic analyses. 14. galaxieEST: addressing EST identity through automated phylogenetic analysis Directory of Open Access Journals (Sweden) 2004-07-01 Full Text Available Abstract Background Research involving expressed sequence tags (ESTs) is intricately coupled to the existence of large, well-annotated sequence repositories. Comparatively complete and satisfactorily annotated public sequence libraries are, however, available only for a limited range of organisms, rendering the absence of sequences and gene structure information a tangible problem for those working with taxa lacking an EST or genome sequencing project. Paralogous genes belonging to the same gene family but distinguished by derived characteristics are particularly prone to misidentification and erroneous annotation; high but incomplete levels of sequence similarity are typically difficult to interpret and have formed the basis of many unsubstantiated assumptions of orthology. In these cases, a phylogenetic study of the query sequence together with the most similar sequences in the database may be of great value to the identification process.
In order to facilitate this laborious procedure, a project to employ automated phylogenetic analysis in the identification of ESTs was initiated. Results galaxieEST is an open-source Perl-CGI script package designed to complement traditional similarity-based identification of EST sequences through the employment of automated phylogenetic analysis. It uses a series of BLAST runs as a sieve to retrieve nucleotide and protein sequences for inclusion in neighbour-joining and parsimony analyses; the output includes the BLAST output, the results of the phylogenetic analyses, and the corresponding multiple alignments. galaxieEST is available as an on-line web service for identification of fungal ESTs and for download / local installation for use with any organism group at http://galaxie.cgb.ki.se/galaxieEST.html. Conclusions By addressing sequence relatedness in addition to similarity, galaxieEST provides an integrative view on EST origin and identity, which may prove particularly useful in cases where similarity searches 15. Phylogeny reconstruction based on protein phylogenetic profiles of organisms Institute of Scientific and Technical Information of China (English) 2003-01-01 With the coming of the post-genomic era, more and more genomes have been sequenced, and it has become possible to study phylogeny reconstruction at the genome level. This work defines the concept of protein phylogenetic profiles of organisms, which is used for phylogeny reconstruction by proteome comparison. This method is more stable than the prevailing molecular systematics methods and is widely applicable; it will develop rapidly with the continued progress in genome sequencing. 16. Accurate Jones Matrix of the Practical Faraday Rotator Institute of Scientific and Technical Information of China (English) 王林斗; 祝昇翔; 李玉峰; 邢文烈; 魏景芝 2003-01-01 The Jones matrix of practical Faraday rotators is often used in the engineering calculation of non-reciprocal optical fields.
Nevertheless, only the approximate Jones matrix of practical Faraday rotators has been presented until now. Based on the theory of polarized light, this paper presents the accurate Jones matrix of practical Faraday rotators. In addition, an experiment has been carried out to verify the validity of the accurate Jones matrix. This matrix accurately describes the optical characteristics of practical Faraday rotators, including rotation, loss and depolarization of the polarized light. The accurate Jones matrix can be used to obtain accurate results when a practical Faraday rotator transforms polarized light, which paves the way for the accurate analysis and calculation of practical Faraday rotators in relevant engineering applications. 17. Distance-Based Phylogenetic Methods Around a Polytomy. Science.gov (United States) Davidson, Ruth; Sullivant, Seth 2014-01-01 Distance-based phylogenetic algorithms attempt to solve the NP-hard least-squares phylogeny problem by mapping an arbitrary dissimilarity map representing biological data to a tree metric. The set of all dissimilarity maps is a Euclidean space properly containing the space of all tree metrics as a polyhedral fan. Outputs of distance-based tree reconstruction algorithms such as UPGMA and neighbor-joining are points in the maximal cones in the fan. Tree metrics with polytomies lie at the intersections of maximal cones. A phylogenetic algorithm divides the space of all dissimilarity maps into regions based upon which combinatorial tree is reconstructed by the algorithm. Comparison of phylogenetic methods can be done by comparing the geometry of these regions. We use polyhedral geometry to compare the local nature of the subdivisions induced by least-squares phylogeny, UPGMA, and neighbor-joining when the true tree has a single polytomy with exactly four neighbors. Our results suggest that in some circumstances, UPGMA and neighbor-joining poorly match least-squares phylogeny. 18.
Phylogenetic interrelations between serological variants of Bacillus thuringiensis Directory of Open Access Journals (Sweden) Patyka N. V. 2009-06-01 Full Text Available Aim. B. thuringiensis (Bt) are gram-positive, spore-forming, aerobic or facultative anaerobic bacteria able to form, during sporulation, species-specific crystal-like inclusions of protein nature consisting of particular thermolabile δ-endotoxins. Serological Bt variants produce different entomotoxins; their synthesis depends in many respects on the conditions of cultivation. A vast amount of information has accumulated on the entomotoxins: their origin, synthesis, structure, toxic properties and mechanisms of action on insects. These bacteria dominate microbiological methods of pest control in plants and animals. There are more than 70 serovariants of Bt, each selectively specific to a definite group of host insects. However, the description of new variants does not always look justified in light of the phylogenetic systematization based on phenotypic traits. Methods. A comparative phylogenetic analysis of the intraspecific interrelations of Bt was performed on the basis of the cloned 16S rRNA genes of the entomopathogenic bacteria BtH1, BtH10 and BtH14. Results. Phylogenetically homogeneous lines were investigated: the homology of the 16S rRNA of strains 1 and 10 ranged from 90.0 to 94.0%; no distinct genetic isolation between the strains of the 14th and 10th serovars was revealed. Conclusions. The comparison of 16S rRNA nucleotide sequences has shown the existence of strain polymorphism within the group of entomopathogens BtH1, BtH10 and BtH14, connected with their entomocidal activity. 19. Phylogenetic analysis of uroporphyrinogen III synthase (UROS) gene.
Science.gov (United States) Shaik, Abjal Pasha; Alsaeed, Abbas H; Sultana, Asma 2012-01-01 The uroporphyrinogen III synthase (UROS) enzyme (also known as hydroxymethylbilane hydrolyase) catalyzes the cyclization of hydroxymethylbilane to uroporphyrinogen III during heme biosynthesis. A deficiency of this enzyme is associated with the very rare Gunther's disease or congenital erythropoietic porphyria, an autosomal recessive inborn error of metabolism. The current study investigated the possible role of UROS (Homo sapiens [EC: 4.2.1.75; 265 aa; 1371 bp mRNA; Entrez Pubmed ref NP_000366.1, NM_000375.2]) in evolution by studying the phylogenetic relationship and divergence of this gene using computational methods. The UROS protein sequences from various taxa were retrieved from the GenBank database and were compared using Clustal-W (multiple sequence alignment) with default settings, and a first-pass phylogenetic tree was built using the neighbor-joining method, as in DELTA BLAST version 2.2.27+. A total of 163 BLAST hits were found for the uroporphyrinogen III synthase query sequence, and these hits showed a putative conserved domain of the HemD superfamily (as of 14 Nov 2012). We then narrowed the search by manually deleting proteins that were not UROS sequences, as well as sequences belonging to phyla other than Chordata. A repeat phylogenetic analysis of 39 taxa was performed using PhyML and TreeDyn software to confirm that UROS is a highly conserved protein, with approximately 85% conserved sequences in almost all chordate taxa, emphasizing its importance in heme synthesis. 20. Bayesian nonparametric clustering in phylogenetics: modeling antigenic evolution in influenza. Science.gov (United States) Cybis, Gabriela B; Sinsheimer, Janet S; Bedford, Trevor; Rambaut, Andrew; Lemey, Philippe; Suchard, Marc A 2017-01-18 Influenza is responsible for up to 500,000 deaths every year, and antigenic variability represents much of its epidemiological burden.
To visualize antigenic differences across many viral strains, antigenic cartography methods use multidimensional scaling on binding assay data to map influenza antigenicity onto a low-dimensional space. Analysis of such assay data ideally leads to natural clustering of influenza strains of similar antigenicity that correlate with sequence evolution. To understand the dynamics of these antigenic groups, we present a framework that jointly models genetic and antigenic evolution by combining multidimensional scaling of binding assay data, Bayesian phylogenetic machinery and nonparametric clustering methods. We propose a phylogenetic Chinese restaurant process that extends the current process to incorporate the phylogenetic dependency structure between strains in the modeling of antigenic clusters. With this method, we are able to use the genetic information to better understand the evolution of antigenicity throughout epidemics, as shown in applications of this model to H1N1 influenza. Copyright © 2017 John Wiley & Sons, Ltd. 1. Extended molecular phylogenetics and revised systematics of Malagasy scincine lizards. Science.gov (United States) Erens, Jesse; Miralles, Aurélien; Glaw, Frank; Chatrou, Lars W; Vences, Miguel 2017-02-01 Among the endemic biota of Madagascar, skinks are a diverse radiation of lizards that exhibit a striking ecomorphological variation, and could provide an interesting system to study body-form evolution in squamate reptiles. We provide a new phylogenetic hypothesis for Malagasy skinks of the subfamily Scincinae based on an extended molecular dataset comprising 8060bp from three mitochondrial and nine nuclear loci. Our analysis also increases taxon sampling of the genus Amphiglossus by including 16 out of 25 nominal species. Additionally, we examined whether the molecular phylogenetic patterns coincide with morphological differentiation in the species currently assigned to this genus. 
Various methods of inference recover a mostly strongly supported phylogeny with three main clades of Amphiglossus. However, relationships among these three clades and the limb-reduced genera Grandidierina, Voeltzkowia and Pygomeles remain uncertain. Supported by a variety of morphological differences (predominantly related to the degree of body elongation), but considering the remaining phylogenetic uncertainty, we propose a redefinition of Amphiglossus into three different genera (Amphiglossus sensu stricto, Flexiseps new genus, and Brachyseps new genus) to remove the non-monophyly of Amphiglossus sensu lato and to facilitate future studies on this fascinating group of lizards. 2. Molecular phylogenetics of New World searobins (Triglidae; Prionotinae). Science.gov (United States) Portnoy, David S; Willis, Stuart C; Hunt, Elizabeth; Swift, Dominic G; Gold, John R; Conway, Kevin W 2017-02-01 Phylogenetic relationships among members of the New World searobin genera Bellator and Prionotus (Family Triglidae, Subfamily Prionotinae) and among other searobins in the families Triglidae and Peristediidae were investigated using both mitochondrial and nuclear DNA sequences. Phylogenetic hypotheses derived from maximum likelihood and Bayesian methodologies supported a monophyletic Prionotinae that included four well resolved clades of uncertain relationship; three contained species in the genus Prionotus and one contained species in the genus Bellator. Bellator was always recovered within the genus Prionotus, a result supported by post hoc model testing. Two nominal species of Prionotus (P. alatus and P. paralatus) were not recovered as exclusive lineages, suggesting the two may comprise a single species. Phylogenetic hypotheses also supported a monophyletic Triglidae but only if armored searobins (Family Peristediidae) were included. 
A robust morphological assessment is needed to further characterize relationships and suggest classification of clades within Prionotinae; for the time being we recommend that Bellator be considered a synonym of Prionotus. Relationships between armored searobins (Family Peristediidae) and searobins (Family Triglidae) and relationships within Triglidae also warrant further study. 3. Detection and phylogenetic analysis of bacteriophage WO in spiders (Araneae). Science.gov (United States) Yan, Qian; Qiao, Huping; Gao, Jin; Yun, Yueli; Liu, Fengxiang; Peng, Yu 2015-11-01 Phage WO is a bacteriophage found in Wolbachia. Herein, we present the first phylogenetic study of WOs that infect spiders (Araneae). Seven species of spiders (Araneus alternidens, Nephila clavata, Hylyphantes graminicola, Prosoponoides sinensis, Pholcus crypticolens, Coleosoma octomaculatum, and Nurscia albofasciata) from six families were infected by Wolbachia and WO, followed by comprehensive sequence analysis. Interestingly, WO could be detected only in Wolbachia-infected spiders. The relative infection rates of those seven species of spiders were 75, 100, 88.9, 100, 62.5, 72.7, and 100 %, respectively. Our results indicated that both Wolbachia and WO were found in three different body parts of N. clavata, and WO could be passed to the next generation of H. graminicola by vertical transmission. There were three different WO sequences from A. alternidens and two different WO sequences from C. octomaculatum. Only one sequence of WO was found for the other five species of spiders. The discovered WO sequences ranged from 239 to 311 bp. A phylogenetic tree was generated using maximum likelihood (ML) based on the orf7 gene sequences. According to the phylogenetic tree, WOs in N. clavata and H. graminicola were clustered in the same group. WOs from A. alternidens (WAlt1) and C. octomaculatum (WOct2) were closely related to another clade, whereas WO in P.
sinensis was classified as a sole cluster. 4. Dendroscope: An interactive viewer for large phylogenetic trees Directory of Open Access Journals (Sweden) Franz Markus 2007-11-01 Full Text Available Abstract Background Research in evolution requires software for visualizing and editing phylogenetic trees, for increasingly large datasets, such as arise in expression analysis or metagenomics, for example. It would be desirable to have a program that provides these services in an efficient and user-friendly way, and that can be easily installed and run on all major operating systems. Although a large number of tree visualization tools are freely available, some as a part of more comprehensive analysis packages, all have drawbacks in one or more domains. They either lack some of the standard tree visualization techniques or basic graphics and editing features, or they are restricted to small trees containing only tens of thousands of taxa. Moreover, many programs are difficult to install or are not available for all common operating systems. Results We have developed a new program, Dendroscope, for the interactive visualization and navigation of phylogenetic trees. The program provides all standard tree visualizations and is optimized to run interactively on trees containing hundreds of thousands of taxa. The program provides tree editing and graphics export capabilities. To support the inspection of large trees, Dendroscope offers a magnification tool. The software is written in Java 1.4 and installers are provided for Linux/Unix, MacOS X and Windows XP. Conclusion Dendroscope is a user-friendly program for visualizing and navigating phylogenetic trees, for both small and large datasets. 5.
Phylogenetic and Ecological Analysis of Novel Marine Stramenopiles Science.gov (United States) Massana, Ramon; Castresana, Jose; Balagué, Vanessa; Guillou, Laure; Romari, Khadidja; Groisillier, Agnès; Valentin, Klaus; Pedrós-Alió, Carlos 2004-01-01 Culture-independent molecular analyses of open-sea microorganisms have revealed the existence and apparent abundance of novel eukaryotic lineages, opening new avenues for phylogenetic, evolutionary, and ecological research. Novel marine stramenopiles, identified by 18S ribosomal DNA sequences within the basal part of the stramenopile radiation but unrelated to any previously known group, constituted one of the most important novel lineages in these open-sea samples. Here we carry out a comparative analysis of novel stramenopiles, including new sequences from coastal genetic libraries presented here and sequences from recent reports from the open ocean and marine anoxic sites. Novel stramenopiles were found in all major habitats, generally accounting for a significant proportion of clones in genetic libraries. Phylogenetic analyses indicated the existence of 12 independent clusters. Some of these were restricted to anoxic or deep-sea environments, but the majority were typical components of coastal and open-sea waters. We specifically identified four clusters that were well represented in most marine surface waters (together they accounted for 74% of the novel stramenopile clones) and are the obvious targets for future research. Many sequences were retrieved from geographically distant regions, indicating that some organisms were cosmopolitan. Our study expands our knowledge on the phylogenetic diversity and distribution of novel marine stramenopiles and confirms that they are fundamental members of the marine eukaryotic picoplankton. PMID:15184153 6. Arbuscular mycorrhizal fungal communities are phylogenetically clustered at small scales. 
Science.gov (United States) Horn, Sebastian; Caruso, Tancredi; Verbruggen, Erik; Rillig, Matthias C; Hempel, Stefan 2014-11-01 Next-generation sequencing technologies with markers covering the full Glomeromycota phylum were used to uncover phylogenetic community structure of arbuscular mycorrhizal fungi (AMF) associated with Festuca brevipila. The study system was a semi-arid grassland with high plant diversity and a steep environmental gradient in pH, C, N, P and soil water content. The AMF communities in roots and rhizosphere soil were analyzed separately and consisted of 74 distinct operational taxonomic units (OTUs) in total. Community-level variance partitioning showed that the role of environmental factors in determining AM species composition was marginal when controlling for spatial autocorrelation at multiple scales. Instead, phylogenetic distance and spatial distance were major correlates of AMF communities: OTUs that were more closely related (and which therefore may have similar traits) were more likely to co-occur. This pattern was insensitive to phylogenetic sampling breadth. Given the minor effects of the environment, we propose that at small scales closely related AMF positively associate through biotic factors such as plant-AMF filtering and interactions within the soil biota. 7. Nucleotide and amino acid sequences of a coat protein of a Ukrainian isolate of Potato virus Y: comparison with homologous sequences of other isolates and phylogenetic analysis Directory of Open Access Journals (Sweden) Budzanivska I. G. 2014-03-01 Full Text Available Aim. Identification of the widespread Ukrainian isolate(s) of PVY (Potato virus Y) in different potato cultivars and subsequent phylogenetic analysis of the detected PVY isolates based on the NA and AA sequences of the coat protein. Methods. ELISA, RT-PCR, DNA sequencing and phylogenetic analysis. Results. PVY has been identified serologically in potato cultivars of Ukrainian selection.
In this work we have optimized a method for total RNA extraction from potato samples and offered a sensitive and specific PCR-based test system of our own design for diagnostics of the Ukrainian PVY isolates. Part of the CP gene of the Ukrainian PVY isolate has been sequenced and analyzed phylogenetically. It is demonstrated that the CP gene of the Ukrainian isolate of Potato virus Y has a higher percentage of homology with the recombinant isolates (strains) of this pathogen (approx. 98.8–99.8 % homology for both the nucleotide and translated amino acid sequences of the CP gene). The Ukrainian isolate of PVY is positioned in a separate cluster together with the isolates found in Syria, Japan and Iran; these isolates possibly have a common origin. The Ukrainian PVY isolate is confirmed to be recombinant. Conclusions. This work underlines the need and provides the means for accurate monitoring of Potato virus Y in the agroecosystems of Ukraine. Most importantly, the phylogenetic analysis demonstrated the recombinant nature of this PVY isolate, which has been attributed to the strain group O, subclade N:O. 8. Biomimetic Approach for Accurate, Real-Time Aerodynamic Coefficients Project Data.gov (United States) National Aeronautics and Space Administration — Aerodynamic and structural reliability and efficiency depend critically on the ability to accurately assess the aerodynamic loads and moments for each lifting... 9. Network dynamics of eukaryotic LTR retroelements beyond phylogenetic trees Directory of Open Access Journals (Sweden) 2009-11-01 Full Text Available Abstract Background Sequencing projects have allowed diverse retroviruses and LTR retrotransposons from different eukaryotic organisms to be characterized. It is known that retroviruses and other retro-transcribing viruses evolve from LTR retrotransposons and that this whole system clusters into five families: Ty3/Gypsy, Retroviridae, Ty1/Copia, Bel/Pao and Caulimoviridae.
Phylogenetic analyses usually show that these split into multiple distinct lineages but what is yet to be understood is how deep evolution occurred in this system. Results We combined phylogenetic and graph analyses to investigate the history of LTR retroelements both as a tree and as a network. We used 268 non-redundant LTR retroelements, many of them introduced for the first time in this work, to elucidate all possible LTR retroelement phylogenetic patterns. These were superimposed over the tree of eukaryotes to investigate the dynamics of the system, at distinct evolutionary times. Next, we investigated phenotypic features such as duplication and variability of amino acid motifs, and several differences in genomic ORF organization. Using this information we characterized eight reticulate evolution markers to construct phenotypic network models. Conclusion The evolutionary history of LTR retroelements can be traced as a time-evolving network that depends on phylogenetic patterns, epigenetic host-factors and phenotypic plasticity. The Ty1/Copia and the Ty3/Gypsy families represent the oldest patterns in this network that we found mimics eukaryotic macroevolution. The emergence of the Bel/Pao, Retroviridae and Caulimoviridae families in this network can be related with distinct inflations of the Ty3/Gypsy family, at distinct evolutionary times. This suggests that Ty3/Gypsy ancestors diversified much more than their Ty1/Copia counterparts, at distinct geological eras. Consistent with the principle of preferential attachment, the connectivities among phenotypic markers, taken as 10. Complete, accurate, mammalian phylogenies aid conservation planning, but not much Science.gov (United States) Rodrigues, Ana S. L.; Grenyer, Richard; Baillie, Jonathan E. M.; Bininda-Emonds, Olaf R. 
P.; Gittlemann, John L.; Hoffmann, Michael; Safi, Kamran; Schipper, Jan; Stuart, Simon N.; Brooks, Thomas 2011-01-01 In the face of unprecedented global biodiversity loss, conservation planning must balance between refining and deepening knowledge versus acting on current information to preserve species and communities. Phylogenetic diversity (PD), a biodiversity measure that takes into account the evolutionary relationships between species, is arguably a more meaningful measure of biodiversity than species diversity, but cannot yet be applied to conservation planning for the majority of taxa for which phylogenetic trees have not yet been developed. Here, we investigate how the quality of data on the taxonomy and/or phylogeny of species affects the results of spatial conservation planning in terms of the representation of overall mammalian PD. The results show that the better the quality of the biodiversity data the better they can serve as a basis for conservation planning. However, decisions based on incomplete data are remarkably robust across different levels of degrading quality concerning the description of new species and the availability of phylogenetic information. Thus, given the level of urgency and the need for action, conservation planning can safely make use of the best available systematic data, limited as these data may be. PMID:21844044 11. POWER: PhylOgenetic WEb Repeater--an integrated and user-optimized framework for biomolecular phylogenetic analysis. Science.gov (United States) Lin, Chung-Yen; Lin, Fan-Kai; Lin, Chieh Hua; Lai, Li-Wei; Hsu, Hsiu-Jun; Chen, Shu-Hwa; Hsiung, Chao A 2005-07-01 POWER, the PhylOgenetic WEb Repeater, is a web-based service designed to perform user-friendly pipeline phylogenetic analysis. POWER uses an open-source LAMP structure and infers genetic distances and phylogenetic relationships using well-established algorithms (ClustalW and PHYLIP). 
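The distance-then-cluster step that a ClustalW/PHYLIP-style pipeline performs can be sketched in plain Python. Everything below — the toy pre-aligned sequences, the p-distance measure, and the UPGMA clustering — is an illustrative stand-in under stated assumptions, not POWER's actual code (POWER delegates these stages to ClustalW and PHYLIP):

```python
def p_distance(a, b):
    """Fraction of aligned positions (shared-gap columns skipped) that differ."""
    pairs = [(x, y) for x, y in zip(a, b) if (x, y) != ("-", "-")]
    return sum(x != y for x, y in pairs) / len(pairs)

def upgma(names, d):
    """UPGMA clustering over a symmetric pairwise-distance dict.

    Returns the tree as nested tuples, e.g. (('A', 'B'), ('C', 'D')).
    """
    d = dict(d)                        # don't mutate the caller's matrix
    clusters = list(names)
    size = {c: 1 for c in clusters}
    while len(clusters) > 1:
        # join the closest pair of clusters
        a, b = min(((x, y) for i, x in enumerate(clusters)
                    for y in clusters[i + 1:]), key=lambda p: d[p])
        new = (a, b)
        for c in clusters:
            if c != a and c != b:      # size-weighted average distance
                dn = (d[(a, c)] * size[a] + d[(b, c)] * size[b]) / (size[a] + size[b])
                d[(new, c)] = d[(c, new)] = dn
        size[new] = size[a] + size[b]
        clusters = [c for c in clusters if c != a and c != b] + [new]
    return clusters[0]

# Toy pre-aligned 8-bp sequences, invented for illustration only.
seqs = {"A": "ACGTACGT", "B": "ACGTACGA", "C": "ACGTTTTT", "D": "TTTTTTTT"}
names = sorted(seqs)
dist = {}
for i, x in enumerate(names):
    for y in names[i + 1:]:
        dist[(x, y)] = dist[(y, x)] = p_distance(seqs[x], seqs[y])
print(upgma(names, dist))  # → (('A', 'B'), ('C', 'D'))
```

Real pipelines would use a substitution-model-corrected distance (e.g. PHYLIP's dnadist) rather than raw p-distance, and neighbor-joining is usually preferred over UPGMA when rates vary across lineages.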
POWER incorporates a novel tree builder based on the GD library to generate a high-quality tree topology according to the calculated result. POWER accepts either raw sequences in FASTA format or user-uploaded alignment output files. Through a user-friendly web interface, users can sketch a tree effortlessly in multiple steps. After a tree has been generated, users can freely set and modify parameters, select tree building algorithms, refine sequence alignments or edit the tree topology. All the information related to input sequences and the processing history is logged and downloadable for the user's reference. Furthermore, iterative tree construction can be performed by adding sequences to, or removing them from, a previously submitted job. POWER is accessible at http://power.nhri.org.tw. 12. Octocoral mitochondrial genomes provide insights into the phylogenetic history of gene order rearrangements, order reversals, and cnidarian phylogenetics. Science.gov (United States) Figueroa, Diego F; Baco, Amy R 2014-12-24 We use full mitochondrial genomes to test the robustness of the phylogeny of the Octocorallia, to determine the evolutionary pathway for the five known mitochondrial gene rearrangements in octocorals, and to test the suitability of using mitochondrial genomes for higher taxonomic-level phylogenetic reconstructions. Our phylogeny supports three major divisions within the Octocorallia and shows that Paragorgiidae is paraphyletic, with Sibogagorgia forming a sister branch to the Coralliidae. Furthermore, Sibogagorgia cauliflora has what is presumed to be the ancestral gene order in octocorals, but the presence of a pair of inverted repeat sequences suggests that this gene order was not conserved but rather evolved back to this apparent ancestral state. Based on this we recommend the resurrection of the family Sibogagorgiidae to fix the paraphyly of the Paragorgiidae.
This is the first study to show that in the Octocorallia, mitochondrial gene orders have evolved back to an ancestral state after going through a gene rearrangement, with at least one of the gene orders evolving independently in different lineages. A number of studies have used gene boundaries to determine the type of mitochondrial gene arrangement present. However, our findings suggest that this method known as gene junction screening may miss evolutionary reversals. Additionally, substitution saturation analysis demonstrates that while whole mitochondrial genomes can be used effectively for phylogenetic analyses within Octocorallia, their utility at higher taxonomic levels within Cnidaria is inadequate. Therefore for phylogenetic reconstruction at taxonomic levels higher than subclass within the Cnidaria, nuclear genes will be required, even when whole mitochondrial genomes are available. 13. Phylogenetic fields through time: temporal dynamics of geographical co-occurrence and phylogenetic structure within species ranges. Science.gov (United States) Villalobos, Fabricio; Carotenuto, Francesco; Raia, Pasquale; Diniz-Filho, José Alexandre F 2016-04-05 Species co-occur with different sets of other species across their geographical distribution, which can be either closely or distantly related. Such co-occurrence patterns and their phylogenetic structure within individual species ranges represent what we call the species phylogenetic fields (PFs). These PFs allow investigation of the role of historical processes--speciation, extinction and dispersal--in shaping species co-occurrence patterns, in both extinct and extant species. Here, we investigate PFs of large mammalian species during the last 3 Myr, and how these correlate with trends in diversification rates. Using the fossil record, we evaluate species' distributional and co-occurrence patterns along with their phylogenetic structure. 
We apply a novel Bayesian framework to fossil occurrences to estimate diversification rates through time. Our findings highlight the effect of evolutionary processes and past climatic changes on species' distributions and co-occurrences. From the Late Pliocene to the Recent, mammal species seem to have responded in an individualistic manner to climate changes and diversification dynamics, co-occurring with different sets of species from different lineages across their geographical ranges. These findings stress the difficulty of forecasting potential effects of future climate changes on biodiversity. 14. The t(14;18)(q32;q21)/IGH-MALT1 translocation in MALT lymphomas contains templated nucleotide insertions and a major breakpoint region similar to follicular and mantle cell lymphoma. Science.gov (United States) Murga Penas, Eva Maria; Callet-Bauchu, Evelyne; Ye, Hongtao; Gazzo, Sophie; Berger, Françoise; Schilling, Georgia; Albert-Konetzny, Nadine; Vettorazzi, Eik; Salles, Gilles; Wlodarska, Iwona; Du, Ming-Qing; Bokemeyer, Carsten; Dierlamm, Judith 2010-03-18 The t(14;18)(q32;q21) involving the immunoglobulin heavy chain locus (IGH) and the MALT1 gene is a recurrent abnormality in mucosa-associated lymphoid tissue (MALT) lymphomas. However, the nucleotide sequence of only one t(14;18)-positive MALT lymphoma has been reported so far. We here report the molecular characterization of the IGH-MALT1 fusion products in 5 new cases of t(14;18)-positive MALT lymphomas. Similar to the IGH-associated translocations in follicular and mantle cell lymphomas, the IGH-MALT1 junctions in MALT lymphoma showed all features of a recombination signal sequence-guided V(D)J-mediated translocation at the IGH locus. Furthermore, analogous to follicular and mantle cell lymphoma, templated nucleotides (T-nucleotides) were identified at the t(14;18)/IGH-MALT1 breakpoint junctions. On chromosome 18, we identified a novel major breakpoint region in MALT1 upstream of its coding region.
Moreover, the presence of duplications of MALT1 nucleotides in one case suggests an underlying staggered DNA-break process not consistent with V(D)J-mediated recombination. The molecular characteristics of the t(14;18)/IGH-MALT1 resemble those found in the t(14;18)/IGH-BCL2 in follicular lymphoma and t(11;14)/CCND1-IGH in mantle cell lymphoma, suggesting that these translocations could be generated by common pathomechanisms involving illegitimate V(D)J-mediated recombination on IGH as well as new synthesis of T-nucleotides and nonhomologous end joining (NHEJ) or alternative NHEJ repair pathways on the IGH-translocation partner. 15. Impact of revised CLSI breakpoints for susceptibility to third-generation cephalosporins and carbapenems among Enterobacteriaceae isolates in the Asia-Pacific region: results from the Study for Monitoring Antimicrobial Resistance Trends (SMART), 2002-2010. Science.gov (United States) Huang, Chi-Chang; Chen, Yao-Shen; Toh, Han-Siong; Lee, Yu-Lin; Liu, Yuag-Meng; Ho, Cheng-Mao; Lu, Po-Liang; Liu, Chun-Eng; Chen, Yen-Hsu; Wang, Jen-Hsien; Tang, Hung-Jen; Yu, Kwok-Woon; Liu, Yung-Ching; Chuang, Yin-Ching; Xu, Yingchun; Ni, Yuxing; Ko, Wen-Chien; Hsueh, Po-Ren 2012-06-01 This study examined the rates of susceptibility to third-generation cephalosporins and carbapenems among Enterobacteriaceae isolates that had been obtained from patients with intraabdominal infections in the Asia-Pacific region as part of the Study for Monitoring Antimicrobial Resistance Trends (SMART). Susceptibility profiles obtained using 2009 Clinical and Laboratory Standards Institute (CLSI) breakpoints were compared with those obtained using the 2011 CLSI breakpoints. From 2002 to 2010, Escherichia coli and Klebsiella pneumoniae together accounted for more than 60% of the 13714 Enterobacteriaceae isolates analyzed during the study period. Extended-spectrum β-lactamase (ESBL) producers comprised 28.2% of E. coli isolates and 22.1% of K. 
pneumoniae isolates in the Asia-Pacific region, with China (55.6% and 33.7%, respectively) and Thailand (43.1% and 40.7%, respectively) having the highest proportions of ESBL producers. Based on the 2011 CLSI criteria, 77.2% of the Enterobacteriaceae isolates, 40.4% of ESBL-producing E. coli, and 25.2% of ESBL-producing K. pneumoniae isolates were susceptible to ceftazidime. Carbapenems showed in vitro activity against >90% of Enterobacteriaceae isolates in all participating countries, except for ertapenem in South Korea (susceptibility rate 82.2%). Marked differences (>5%) in susceptibility of ESBL-producing E. coli and K. pneumoniae isolates to carbapenems were noted between the profiles obtained using the 2009 CLSI criteria and those using the 2011 CLSI criteria. Continuous monitoring of antimicrobial resistance is necessary in the Asia-Pacific region. 16. Speed-of-sound compensated photoacoustic tomography for accurate imaging CERN Document Server Jose, Jithin; Steenbergen, Wiendelt; Slump, Cornelis H; van Leeuwen, Ton G; Manohar, Srirang 2012-01-01 In most photoacoustic (PA) measurements, variations in speed-of-sound (SOS) of the subject are neglected under the assumption of acoustic homogeneity. Biological tissue with spatially heterogeneous SOS cannot be accurately reconstructed under this assumption. We present experimental and image reconstruction methods with which 2-D SOS distributions can be accurately acquired and reconstructed, and with which the SOS map can be used subsequently to reconstruct highly accurate PA tomograms. We begin with a 2-D iterative reconstruction approach in an ultrasound transmission tomography (UTT) setting, which uses ray refracted paths instead of straight ray paths to recover accurate SOS images of the subject. Subsequently, we use the SOS distribution in a new 2-D iterative approach, where refraction of rays originating from PA sources are accounted for in accurately retrieving the distribution of these sources. 
Both the SOS reconstruction and SOS-compensated PA reconstruction methods utilize the Eikonal equation to m... 17. Phylogenetic comparisons of bacterial communities from serpentine and nonserpentine soils. Science.gov (United States) Oline, David K 2006-11-01 I present the results of a culture-independent survey of soil bacterial communities from serpentine soils and adjacent nonserpentine comparator soils using a variety of newly developed phylogenetically based statistical tools. The study design included site-based replication of the serpentine-to-nonserpentine community comparison over a regional scale (approximately 100 km) in Northern California and Southern Oregon by producing 16S rRNA clone libraries from pairs of samples taken on either side of the serpentine-nonserpentine edaphic boundary at three geographical sites. At the division level, the serpentine and nonserpentine communities were similar to each other and to previous data from forest soils. Comparisons of both richness and Shannon diversity produced no significant differences between any of the libraries, but the vast majority of phylogenetically based tests were significant, even with only 50 sequences per library. These results suggest that most samples were distinct, consisting of a collection of lineages generally not found in other samples. The pattern of results showed that serpentine communities tended to be more similar to each other than they were to nonserpentine communities, and these differences were at a lower taxonomic scale. Comparisons of two nonserpentine communities generally showed differences, and some results suggest that the geographical site may control community composition as well.
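The Shannon-diversity comparison this kind of 16S library study performs reduces to a short calculation over OTU abundance vectors. A minimal sketch, with invented OTU counts for two hypothetical 50-sequence clone libraries (not the study's data):

```python
from math import log

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over OTU abundances."""
    total = sum(counts)
    return -sum((c / total) * log(c / total) for c in counts if c > 0)

# Invented OTU count vectors for two hypothetical 50-sequence libraries.
serpentine = [12, 9, 8, 7, 6, 4, 2, 1, 1]
nonserpentine = [20, 15, 8, 4, 2, 1]
print(round(shannon(serpentine), 3), round(shannon(nonserpentine), 3))  # → 1.961 1.43
```

Richness is just `len(counts)`; the abstract's point is that such single-number summaries can agree even when phylogenetically informed tests show the lineage composition of the libraries differs.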
These results show the power of phylogenetic tests to discern differences between 16S rRNA libraries compared to tests that discard DNA data to bin sequences into operational taxonomic units, and they stress the importance of replication at larger scales for inferences regarding microbial biogeography. 18. Building phylogenetic trees from molecular data with MEGA. Science.gov (United States) Hall, Barry G 2013-05-01 Phylogenetic analysis is sometimes regarded as being an intimidating, complex process that requires expertise and years of experience. In fact, it is a fairly straightforward process that can be learned quickly and applied effectively. This Protocol describes the several steps required to produce a phylogenetic tree from molecular data for novices. In the example illustrated here, the program MEGA is used to implement all those steps, thereby eliminating the need to learn several programs, and to deal with multiple file formats from one step to another (Tamura K, Peterson D, Peterson N, Stecher G, Nei M, Kumar S. 2011. MEGA5: molecular evolutionary genetics analysis using maximum likelihood, evolutionary distance, and maximum parsimony methods. Mol Biol Evol. 28:2731-2739). The first step, identification of a set of homologous sequences and downloading those sequences, is implemented by MEGA's own browser built on top of the Google Chrome toolkit. For the second step, alignment of those sequences, MEGA offers two different algorithms: ClustalW and MUSCLE. For the third step, construction of a phylogenetic tree from the aligned sequences, MEGA offers many different methods. Here we illustrate the maximum likelihood method, beginning with MEGA's Models feature, which permits selecting the most suitable substitution model. Finally, MEGA provides a powerful and flexible interface for the final step, actually drawing the tree for publication. 
Here a step-by-step protocol is presented in sufficient detail to allow a novice to start with a sequence of interest and to build a publication-quality tree illustrating the evolution of an appropriate set of homologs of that sequence. MEGA is available for use on PCs and Macs from www.megasoftware.net. 19. Phylogenetic analysis of the kinesin superfamily from Physcomitrella Directory of Open Access Journals (Sweden) Zhiyuan Shen 2012-10-01 Full Text Available Kinesins are an ancient superfamily of microtubule-dependent motors. They participate in an extensive and diverse list of essential cellular functions, including mitosis, cytokinesis, cell polarization, cell elongation, flagellar development, and intracellular transport. Based on phylogenetic relationships, the kinesin superfamily has been subdivided into 14 families, which are represented in most eukaryotic phyla. The functions of these families are sometimes conserved between species, but important variations in function across species have been observed. Plants possess most kinesin families including a few plant-specific families. With the availability of an ever-increasing number of genome sequences from plants, it is important to document the complete complement of kinesins present in a given organism. This will help develop a molecular framework to explore the function of each family using genetics, biochemistry and cell biology. The moss Physcomitrella patens has emerged as a powerful model organism to study gene function in plants, which makes it a key candidate to explore complex gene families, such as the kinesin superfamily. Here we report a detailed phylogenetic characterization of the 71 kinesins of the kinesin superfamily in Physcomitrella. We found a remarkable conservation of families and subfamily classes with Arabidopsis, which is important for future comparative analysis of function.
Some of the families, such as the kinesin 14s, are composed of fewer members in moss, while other families, such as the kinesin 12s, are greatly expanded. To improve the comparison between species, and to simplify communication between research groups, we propose a classification of subfamilies based on our phylogenetic analysis. 20. Analysis of Acorus calamus chloroplast genome and its phylogenetic implications. Science.gov (United States) Goremykin, Vadim V; Holland, Barbara; Hirsch-Ernst, Karen I; Hellwig, Frank H 2005-09-01 Determining the phylogenetic relationships among the major lines of angiosperms is a long-standing problem, yet the uncertainty as to the phylogenetic affinity of these lines persists. While a number of studies have suggested that the ANITA (Amborella-Nymphaeales-Illiciales-Trimeniales-Aristolochiales) grade is basal within angiosperms, studies of complete chloroplast genome sequences also suggested an alternative tree, wherein the line leading to the grasses branches first among the angiosperms. To improve taxon sampling in the existing chloroplast genome data, we sequenced the chloroplast genome of the monocot Acorus calamus. We generated a concatenated alignment (89,436 positions for 15 taxa), encompassing almost all sequences usable for phylogeny reconstruction within spermatophytes. The data still contain support for both the ANITA-basal and grasses-basal hypotheses. Using simulations we can show that were the ANITA-basal hypothesis true, parsimony (and distance-based methods with many models) would be expected to fail to recover it. The self-evident explanation for this failure appears to be a long-branch attraction (LBA) between the clade of grasses and the out-group. However, this LBA cannot explain the discrepancies observed between tree topology recovered using the maximum likelihood (ML) method and the topologies recovered using the parsimony and distance-based methods when grasses are deleted.
Furthermore, the fact that neither maximum parsimony nor distance methods consistently recover the ML tree when the out-group (Pinus) is deleted, although according to the simulations they would be expected to, suggests that either the generating tree is not correct or the best symmetric model is misspecified (or both). We demonstrate that the tree recovered under ML is extremely sensitive to model specification and that the best symmetric model is misspecified. Hence, we remain agnostic regarding phylogenetic relationships among basal angiosperm lineages. 1. Phylogenetic relationships among Lemuridae (Primates): evidence from mtDNA. Science.gov (United States) Pastorini, Jennifer; Forstner, Michael R J; Martin, Robert D 2002-10-01 The family Lemuridae includes four genera: Eulemur, Hapalemur, Lemur, Varecia. Taxonomy and phylogenetic relationships between L. catta, Eulemur and Hapalemur, and of Varecia to these other lemurids, continue to be hotly debated. Nodal relationships among the five Eulemur species also remain contentious. A mitochondrial DNA sequence dataset from the ND3, ND4L and ND4 genes and five tRNAs (Gly, Arg, His, Ser, Leu) was generated to try to clarify phylogenetic relationships within the Lemuridae. Samples (n=39) from all ten lemurid species were collected and analysed. Three Daubentonia madagascariensis were included as outgroup taxa. The approximately 2400 bp sequences were analysed using maximum parsimony, neighbor-joining and maximum likelihood methods. The results support monophyly of Eulemur, a basal divergence of Varecia, and a sister-group relationship for Lemur/Hapalemur. Based on tree topology, bootstrap values, and pairwise distance comparisons, we conclude that Varecia and Eulemur both represent distinct genera separate from L. catta. H. griseus and H. aureus form a clade with strong support, but the sequence data do not permit robust resolution of the trichotomy involving H. simus, H. aureus/H. griseus and L. catta.
Within Eulemur there is strong support for a clade containing E. fulvus, E. mongoz and E. rubriventer. However, analyses failed to clearly resolve relationships among those three species or with the more distantly related E. coronatus and E. macaco. Our sequencing data support the current subspecific status of E.m. macaco and E.m. flavifrons, and that of V.v. variegata and V.v. rubra. However, tree topology and relatively large genetic distances among individual V.v. variegata indicate that there may be more phylogenetic structure within this taxon than is indicated by current taxonomy. 2. Phylogenetic analysis of a transfusion-transmitted hepatitis A outbreak. Science.gov (United States) Hettmann, Andrea; Juhász, Gabriella; Dencs, Ágnes; Tresó, Bálint; Rusvai, Erzsébet; Barabás, Éva; Takács, Mária 2017-02-01 A transfusion-associated hepatitis A outbreak was found in the first time in Hungary. The outbreak involved five cases. Parenteral transmission of hepatitis A is rare, but may occur during viraemia. Direct sequencing of nested PCR products was performed, and all the examined samples were identical in the VP1/2A region of the hepatitis A virus genome. HAV sequences found in recent years were compared and phylogenetic analysis showed that the strain which caused these cases is the same as that had spread in Hungary recently causing several hepatitis A outbreaks throughout the country. 3. DNA barcoding, phylogenetic relationships and speciation of snappers (genus Lutjanus) Institute of Scientific and Technical Information of China (English) 2010-01-01 The phylogenetic relationships of 13 snapper species from the South China Sea have been established using the combined DNA sequences of three full-length mitochondrial genes (COI, COII and CYTB) and two partial nuclear genes (RAG1, RAG2). The 13 species (genus Lutjanus) were selected after DNA barcoding 72 individuals, representing 20 species. 
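At its core, DNA barcoding of the kind used to select those taxa assigns a query sequence to the species of its closest reference barcode, provided the distance falls below a threshold. A minimal sketch with invented 20-bp toy fragments and an arbitrary 5 % threshold (real COI barcodes are roughly 650 bp and thresholds are calibrated per taxon):

```python
def p_distance(a, b):
    """Proportion of aligned positions that differ (gap columns skipped)."""
    pairs = [(x, y) for x, y in zip(a, b) if "-" not in (x, y)]
    return sum(x != y for x, y in pairs) / len(pairs)

def identify(query, references, threshold=0.05):
    """Nearest-neighbour barcode assignment: return the closest species,
    or None when even the best match exceeds the distance threshold."""
    species, dist = min(((sp, p_distance(query, seq))
                         for sp, seq in references.items()),
                        key=lambda t: t[1])
    return species if dist <= threshold else None

# Invented toy COI fragments; species labels are placeholders.
refs = {
    "Lutjanus sp. A": "ACGTACGTACGTACGTACGT",
    "Lutjanus sp. B": "ACGTACGTTTGTACGTACGT",
}
query = "ACGTACGTACGTACGTACGA"  # one mismatch from sp. A
print(identify(query, refs))    # → Lutjanus sp. A
```

The `None` branch is what flags the misidentifications and substitutions the abstract warns about: a market sample whose best match is still distant gets no confident assignment.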
Our study suggests that although DNA barcoding aims to develop species identification systems, it may also be useful in the construction of phylogenies by aiding the selection of taxa. Combined mitochondrial and nuclear gene data has an advantage over an individual dataset because of its higher resolving power. 4. A taxonomic and phylogenetic revision of Penicillium section Aspergilloides DEFF Research Database (Denmark) Houbraken, J.; Visagie, C.M.; Meijer, M. 2014-01-01 . The taxonomy of these species has been investigated several times using various techniques, but species delimitation remains difficult. In the present study, 349 strains belonging to section Aspergilloides were subjected to multilocus molecular phylogenetic analyses using partial β-tubulin (BenA), calmodulin...... Aspergilloides are phenotypically similar and most have monoverticillate conidiophores and grow moderately or quickly on agar media. The most important characters to distinguish these species were colony sizes on agar media, growth at 30 °C, ornamentation and shape of conidia, sclerotium production and stipe... 5. Host specificity and phylogenetic relationships among Atlantic Ovulidae (Mollusca: Gastropoda) OpenAIRE 2010-01-01 Ovulid gastropods and their octocoral hosts were collected along the leeward coast of Curaçao, Netherlands Antilles. New molecular data of Caribbean and a single Atlantic species were combined with comparable data of Indo-Pacific Ovulidae and a single East-Pacific species from GenBank. Based on two DNA markers, viz. CO-I and 16S, the phylogenetic relationships among all ovulid species of which these data are available are reconstructed. The provisional results suggest a dichotomy between the ... 6. 
Dengue virus type 3 in Brazil: a phylogenetic perspective Directory of Open Access Journals (Sweden) Josélio Maria Galvão de Araújo 2009-05-01 Full Text Available Circulation of a new dengue virus (DENV-3) genotype was recently described in Brazil and Colombia, but the precise classification of this genotype has been controversial. Here we perform phylogenetic and nucleotide-distance analyses of the envelope gene, which support the subdivision of DENV-3 strains into five distinct genotypes (GI to GV) and confirm the classification of the new South American genotype as GV. The extremely low genetic distances between Brazilian GV strains and the prototype Philippines/L11423 GV strain isolated in 1956 raise important questions regarding the origin of GV in South America. 7. A phylogenetic re-analysis of groupers with applications for ciguatera fish poisoning. Directory of Open Access Journals (Sweden) Charlotte Schoelinck Full Text Available Ciguatera fish poisoning (CFP) is a significant public health problem due to dinoflagellates. It is responsible for one of the highest reported incidences of seafood-borne illness and groupers are commonly reported as a source of CFP due to their position in the food chain. With the role of recent climate change on harmful algal blooms, CFP cases might become more frequent and more geographically widespread. Since there is no appropriate treatment for CFP, the most efficient solution is to regulate fish consumption. Such a strategy can only work if the fish sold are correctly identified, and it has been repeatedly shown that misidentifications and species substitutions occur in fish markets. We provide here both a DNA-barcoding reference for groupers, and a new phylogenetic reconstruction based on five genes and a comprehensive taxonomical sampling.
We analyse the correlation between geographic range of species and their susceptibility to ciguatera accumulation, and the co-occurrence of ciguatoxins in closely related species, using both character mapping and statistical methods. Misidentifications were encountered in public databases, precluding accurate species identifications. Epinephelinae now includes only twelve genera (vs. 15 previously). Comparisons with the ciguatera incidences show that in some genera most species are ciguateric, but statistical tests display only a moderate correlation with the phylogeny. Atlantic species were rarely contaminated, with ciguatera occurrences being restricted to the South Pacific. The recent changes in classification based on the reanalyses of the relationships within Epinephelidae have an impact on the interpretation of the ciguatera distribution in the genera. In this context and to improve the monitoring of fish trade and safety, we need to obtain extensive data on contamination at the species level. Accurate species identifications through DNA barcoding are thus an essential tool in controlling CFP since 8. Realization of Quadrature Signal Generator Using Accurate Magnitude Integrator DEFF Research Database (Denmark) Xin, Zhen; Yoon, Changwoo; Zhao, Rende 2016-01-01 -signal parameters, especially when a fast response is required for usages such as grid synchronization. As a result, the parameters design of the SOGI-QSG becomes complicated. Theoretical analysis shows that it is caused by the inaccurate magnitude-integration characteristic of the SOGI-QSG. To solve this problem......, an Accurate-Magnitude-Integrator based QSG (AMI-QSG) is proposed. The AMI has an accurate magnitude-integration characteristic for the sinusoidal signal, which makes the AMI-QSG possess a more accurate First-Order-System (FOS) characteristic in terms of magnitude than the SOGI-QSG. The parameter design process... 9. Fabricating an Accurate Implant Master Cast: A Technique Report.
Science.gov (United States) Balshi, Thomas J; Wolfinger, Glenn J; Alfano, Stephen G; Cacovean, Jeannine N; Balshi, Stephen F 2015-12-01 The technique for fabricating an accurate implant master cast following the 12-week healing period after Teeth in a Day® dental implant surgery is detailed. The clinical, functional, and esthetic details captured during the final master impression are vital to creating an accurate master cast. This technique uses the properties of the all-acrylic resin interim prosthesis to capture these details. This impression captures the relationship between the remodeled soft tissue and the interim prosthesis. This provides the laboratory technician with an accurate orientation of the implant replicas in the master cast with which a passive fitting restoration can be fabricated. 10. Anatomical diversity and regressive evolution in trichomanoid filmy ferns (Hymenophyllaceae): a phylogenetic approach. Science.gov (United States) Dubuisson, Jean-Yves; Hennequin, Sabine; Bary, Sophie; Ebihara, Atsushi; Boucheron-Dubuisson, Elodie 2011-12-01 To infer the anatomical evolution of the Hymenophyllaceae (filmy ferns) and to test previously suggested scenarios of regressive evolution, we performed an exhaustive investigation of stem anatomy in the most variable lineage of the family, the trichomanoids, using a representative sampling of 50 species. The evolution of qualitative and quantitative anatomical characters and possibly related growth-forms was analyzed using a maximum likelihood approach. Potential correlations between selected characters were then statistically tested using a phylogenetic comparative method. Our investigations support the anatomical homogeneity of this family at the generic and sub-generic levels. Reduced and sub-collateral/collateral steles likely derived from an ancestral massive protostele, and sub-collateral/collateral types appear to be related to stem thickness reduction and root apparatus regression. 
These results corroborate the hypothesis of regressive evolution in the lineage, in terms of morphology as well as anatomy. In addition, a heterogeneous cortex, which is derived in the lineage, appears to be related to a colonial strategy and likely to a climbing phenotype. The evolutionary hypotheses proposed in this study lay the groundwork for further evolutionary analyses that take into account trichomanoid habitats and accurate ecological preferences. 11. Automated group assignment in large phylogenetic trees using GRUNT: GRouping, Ungrouping, Naming Tool Directory of Open Access Journals (Sweden) Markowitz Victor M 2007-10-01 Full Text Available Abstract Background Accurate taxonomy is best maintained if species are arranged as hierarchical groups in phylogenetic trees. This is especially important as trees grow larger as a consequence of a rapidly expanding sequence database. Hierarchical group names are typically manually assigned in trees, an approach that becomes unfeasible for very large topologies. Results We have developed an automated iterative procedure for delineating stable (monophyletic) hierarchical groups to large (or small) trees and naming those groups according to a set of sequentially applied rules. In addition, we have created an associated ungrouping tool for removing existing groups that do not meet user-defined criteria (such as monophyly). The procedure is implemented in a program called GRUNT (GRouping, Ungrouping, Naming Tool) and has been applied to the current release of the Greengenes (Hugenholtz) 16S rRNA gene taxonomy comprising more than 130,000 taxa. Conclusion GRUNT will facilitate researchers requiring comprehensive hierarchical grouping of large tree topologies in, for example, database curation, microarray design and pangenome assignments. The application is available at the greengenes website. 12. Phylogenetic incongruence in E.
coli O104: understanding the evolutionary relationships of emerging pathogens in the face of homologous recombination. Directory of Open Access Journals (Sweden) Weilong Hao Full Text Available Escherichia coli O104:H4 was identified as an emerging pathogen during the spring and summer of 2011 and was responsible for a widespread outbreak that resulted in the deaths of 50 people and sickened over 4075. Traditional phenotypic and genotypic assays, such as serotyping, pulsed field gel electrophoresis (PFGE), and multilocus sequence typing (MLST), permit identification and classification of bacterial pathogens, but cannot accurately resolve relationships among genotypically similar but pathotypically different isolates. To understand the evolutionary origins of E. coli O104:H4, we sequenced two strains isolated in Ontario, Canada. One was epidemiologically linked to the 2011 outbreak, and the second, unrelated isolate, was obtained in 2010. MLST analysis indicated that both isolates are of the same sequence type (ST678), but whole-genome sequencing revealed differences in chromosomal and plasmid content. Through comprehensive phylogenetic analysis of five O104:H4 ST678 genomes, we identified 167 genes in three gene clusters that have undergone homologous recombination with distantly related E. coli strains. These recombination events have resulted in unexpectedly high sequence diversity within the same sequence type. Failure to recognize or adjust for homologous recombination can result in phylogenetic incongruence. Understanding the extent of homologous recombination among different strains of the same sequence type may explain the pathotypic differences between the ON2010 and ON2011 strains and help shed new light on the emergence of this new pathogen. 13. Rapid radiation events in the family Ursidae indicated by likelihood phylogenetic estimation from multiple fragments of mtDNA.
Science.gov (United States) Waits, L P; Sullivan, J; O'Brien, S J; Ward, R H 1999-10-01 The bear family (Ursidae) presents a number of phylogenetic ambiguities as the evolutionary relationships of the six youngest members (ursine bears) are largely unresolved. Recent mitochondrial DNA analyses have produced conflicting results with respect to the phylogeny of ursine bears. In an attempt to resolve these issues, we obtained 1916 nucleotides of mitochondrial DNA sequence data from six gene segments for all eight bear species and conducted maximum likelihood and maximum parsimony analyses on all fragments separately and combined. All six single-region gene trees gave different phylogenetic estimates; however, only for control region data was this significantly incongruent with the results from the combined data. The optimal phylogeny for the combined data set suggests that the giant panda is most basal followed by the spectacled bear. The sloth bear is the basal ursine bear, and there is weak support for a sister taxon relationship of the American and Asiatic black bears. The sun bear is sister taxon to the youngest clade containing brown bears and polar bears. Statistical analyses of alternate hypotheses revealed a lack of strong support for many of the relationships. We suggest that the difficulties surrounding the resolution of the evolutionary relationships of the Ursidae are linked to the existence of sequential rapid radiation events in bear evolution. Thus, unresolved branching orders during these time periods may represent an accurate representation of the evolutionary history of bear species. 14. 
gyrB as a phylogenetic discriminator for members of the Bacillus anthracis-cereus-thuringiensis group Science.gov (United States) La Duc, Myron T.; Satomi, Masataka; Agata, Norio; Venkateswaran, Kasthuri 2004-01-01 Bacillus anthracis, the causative agent of the human disease anthrax, Bacillus cereus, a food-borne pathogen capable of causing human illness, and Bacillus thuringiensis, a well-characterized insecticidal toxin producer, all cluster together within a very tight clade (B. cereus group) phylogenetically and are indistinguishable from one another via 16S rDNA sequence analysis. As new pathogens are continually emerging, it is imperative to devise a system capable of rapidly and accurately differentiating closely related, yet phenotypically distinct species. Although the gyrB gene has proven useful in discriminating closely related species, its sequence analysis has not yet been validated by DNA:DNA hybridization, the taxonomically accepted "gold standard". We phylogenetically characterized the gyrB sequences of various species and serotypes encompassed in the "B. cereus group," including lab strains and environmental isolates. Results were compared to those obtained from analyses of phenotypic characteristics, 16S rDNA sequence, DNA:DNA hybridization, and virulence factors. The gyrB gene proved more highly differential than 16S, while, at the same time, as analytical as costly and laborious DNA:DNA hybridization techniques in differentiating species within the B. cereus group. 15. Next-generation polyploid phylogenetics: rapid resolution of hybrid polyploid complexes using PacBio single-molecule sequencing. Science.gov (United States) Rothfels, Carl J; Pryer, Kathleen M; Li, Fay-Wei 2017-01-01 Difficulties in generating nuclear data for polyploids have impeded phylogenetic study of these groups. 
We describe a high-throughput protocol and an associated bioinformatics pipeline (Pipeline for Untangling Reticulate Complexes (Purc)) that is able to generate these data quickly and conveniently, and demonstrate its efficacy on accessions from the fern family Cystopteridaceae. We conclude with a demonstration of the downstream utility of these data by inferring a multi-labeled species tree for a subset of our accessions. We amplified four c. 1-kb-long nuclear loci and sequenced them in a parallel-tagged amplicon sequencing approach using the PacBio platform. Purc infers the final sequences from the raw reads via an iterative approach that corrects PCR and sequencing errors and removes PCR-mediated recombinant sequences (chimeras). We generated data for all gene copies (homeologs, paralogs, and segregating alleles) present in each of three sets of 50 mostly polyploid accessions, for four loci, in three PacBio runs (one run per set). From the raw sequencing reads, Purc was able to accurately infer the underlying sequences. This approach makes it easy and economical to study the phylogenetics of polyploids, and, in conjunction with recent analytical advances, facilitates investigation of broad patterns of polyploid evolution. 16. Highly Accurate Sensor for High-Purity Oxygen Determination Project Data.gov (United States) National Aeronautics and Space Administration — In this STTR effort, Los Gatos Research (LGR) and the University of Wisconsin (UW) propose to develop a highly-accurate sensor for high-purity oxygen determination.... 17. Accurate backgrounds to Higgs production at the LHC CERN Document Server Kauer, N 2007-01-01 Corrections of 10-30% for backgrounds to the H --> WW --> l^+l^- + missing $p_T$ search in vector boson and gluon fusion at the LHC are reviewed to make the case for precise and accurate theoretical background predictions. 18.
Multi-objective optimization of inverse planning for accurate radiotherapy Institute of Scientific and Technical Information of China (English) 曹瑞芬; 吴宜灿; 裴曦; 景佳; 李国丽; 程梦云; 李贵; 胡丽琴 2011-01-01 The multi-objective optimization of inverse planning based on the Pareto solution set, according to the multi-objective character of inverse planning in accurate radiotherapy, was studied in this paper. Firstly, the clinical requirements of a treatment plan… 19. Controlling Hay Fever Symptoms with Accurate Pollen Counts Science.gov (United States) ... counts Controlling Hay Fever Symptoms with Accurate Pollen Counts This article has been reviewed by Thanai ... rhinitis known as hay fever is caused by pollen carried in the air during different times of ... 20. Digital system accurately controls velocity of electromechanical drive Science.gov (United States) Nichols, G. B. 1965-01-01 Digital circuit accurately regulates electromechanical drive mechanism velocity. The gain and phase characteristics of digital circuits are relatively unimportant. Control accuracy depends only on the stability of the input signal frequency.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6119109988212585, "perplexity": 7042.303335734083}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687255.13/warc/CC-MAIN-20170920104615-20170920124615-00587.warc.gz"}
http://math.stackexchange.com/questions/897885/can-a-fraction-be-simplified-like-this
# Can a fraction be simplified like this? Ridiculously embarrassing question, but can $\frac{x^2-x}{x^2-25}$ be simplified to simply $\frac{1-x}{1-25}$? Full thought process here is that this is essentially $\frac{x*x-x}{x*x-25}$ so the $x$s should cancel. The full problem is: $$\frac{x^2-x-30}{x^2-25}$$ ## sorry I'm used to programming forums where a simplest-case example of an error is the way to ask about it. I should have made the full problem clearer earlier as $-$ unfortunately $-$ this led to someone who gave more information being wrong at the final problem and I can't mark both answers right. - Did you try plugging in numbers, like $x = 2$, to see if those two expressions are equal for all $x$? –  JimmyK4542 Aug 15 at 2:18 No but perhaps if you tell us what your thought process was we could further clarify –  illysial Aug 15 at 2:19 @illysial The OP undoubtedly "canceled" the factors of $x^2$. –  Gahawar Aug 15 at 2:21 Was in the process of adding that, thank you. –  theStandard Aug 15 at 2:21 @Gahawar Ah I see that now… I should've noticed earlier! –  illysial Aug 15 at 2:29 If you are unsure, then one way to check whether things like this might be true is to plug in a value for $x$. Let $x = 2$. We get: $$\frac{x^2-x}{x^2 - 25} = \frac{2}{-21} \neq \frac{1-x}{1-25} = \frac{-1}{-24} = \frac{1}{24}$$ So in this case, you made a mistake somewhere. Of course, if you plug in a value and equality does hold, then that doesn't imply it always holds. E.g. $2x \neq x^2$ in general even though it holds when $x = 2$. - A good rule of thumb to avoid the issue of "fortuitous equality" is to use a transcendental constant like $\pi$ rather than "nice" numbers like $2$. When dealing with a polynomial/rational equation, no transcendental constant will ever satisfy it unless the equation is identically true. –  Deepak Aug 15 at 2:29 @Deepak, great point!
–  Kaj Hansen Aug 15 at 2:32 @Deepak that can be generalized even further to: given a function $f(x)$ and a set of values $Q$ that can be produced by plugging in an algebraic number $x$ into $f$ and/or $w$ that produces an algebraic number when plugged into $f$, one should opt to test with a number $u$ that exists outside this group. That way even with funky sines and cosines where certain transcendental constants lead to "fortuitous equality" we know to avoid them –  frogeyedpeas Aug 15 at 7:43 @frogeyedpeas Thank you that's a good point. I was thinking of transcendental equations and how the complementary approach would work but decided to just focus on the problem at hand. –  Deepak Aug 15 at 22:48 No. If you want to divide numerator and denominator by $x^2$, you will get $\dfrac{1-\frac1x}{1-\frac{25}{x^2}}$, which isn't really simpler. If you really want to do something to simplify it, you can rewrite it as $$\frac{x^2-25-x+25}{x^2-25}=\frac{x^2-25}{x^2-25}+\frac{-x+25}{x^2-25}=1+\frac{25-x}{x^2-25}$$ - To simplify a fraction, you want to factor the numerator and denominator and see what cancels. The denominator $x^2-25$ is a difference of squares: $x^2-25=(x+5)(x-5)$, so see if either of those factors divides the numerator. When you say $x$s should cancel you should understand that "cancel" means "divide by". The $25$ term does not have a factor of $x$, so you can't cancel it. - $\displaystyle \frac{x^2-x-30}{x^2-25}=\frac{(x-6)(x+5)}{(x-5)(x+5)}=\frac{x-6}{x-5}$ - I would suggest that this will not contribute to understanding by OP (even though correct as long as $x \neq -5$) –  Ross Millikan Aug 15 at 2:30 He seemed to have thought about the solution after the discussion about improper cancellation. I thought about just saying he should factor the polynomials, but see Gahawars post calling the $x^2$ terms "factors". It seemed that this would confuse the issue. Perhaps I was wrong. 
–  Paul Sundheim Aug 15 at 2:36 (changed which answer was marked as correct, although this was helpful in a secondary sort of way) –  theStandard Aug 15 at 2:37
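The plug-and-check strategy recommended in the answers lends itself to a short script. A sketch in Python (the standard-library `fractions` module gives exact rational arithmetic; the helper names are ours, for illustration):

```python
from fractions import Fraction

def lhs(x):
    # original expression (x^2 - x) / (x^2 - 25)
    return Fraction(x * x - x, x * x - 25)

def wrong(x):
    # the proposed "cancellation" (1 - x) / (1 - 25)
    return Fraction(1 - x, 1 - 25)

def full(x):
    # the full problem (x^2 - x - 30) / (x^2 - 25)
    return Fraction(x * x - x - 30, x * x - 25)

def factored(x):
    # correctly simplified form from the accepted answer: (x - 6) / (x - 5)
    return Fraction(x - 6, x - 5)

print(lhs(2), wrong(2))  # the two values differ, so the cancellation is invalid
print(all(full(x) == factored(x) for x in range(2, 50) if x != 5))
```

Testing a whole range of values, as in the last line, makes a "fortuitous equality" at a single point far less likely (x = 5 is skipped because both denominators vanish there).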
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8925747275352478, "perplexity": 632.7750103582171}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802775404.88/warc/CC-MAIN-20141217075255-00111-ip-10-231-17-201.ec2.internal.warc.gz"}
http://cpr-nuclth.blogspot.com/2013/06/13060372-niladri-sarkar-et-al.html
## Acoustic horizons in nuclear fluids [PDF] Niladri Sarkar, Abhik Basu, Jayanta K. Bhattacharjee, Arnab K. Ray We consider a hydrodynamic description of the spherically symmetric outward flow of nuclear matter, accommodating dispersion in it as a very weak effect. About the resulting stationary conditions in the flow, we apply an Eulerian scheme to derive a fully nonlinear equation of a time-dependent radial perturbation. In its linearized limit, with no dispersion, this equation implies the static acoustic horizon of an analogue gravity model. We, however, show that time-dependent nonlinear effects destabilize the static horizon. We also model the perturbation as a high-frequency travelling wave, and perform a WKB analysis, in which the effect of weak dispersion is studied iteratively. We show that even arbitrarily small values of dispersion make the horizon fully opaque to any acoustic disturbance propagating against the bulk flow, with the amplitude and the energy flux of the radial perturbation undergoing a discontinuity at the horizon, and decaying exponentially just outside it. View original: http://arxiv.org/abs/1306.0372
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9683001041412354, "perplexity": 1892.7552602692228}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945660.53/warc/CC-MAIN-20180422212935-20180422232935-00349.warc.gz"}
https://www.physicsforums.com/threads/dif-eq-problem-dont-know-why-it-is-wrong.109398/
Homework Help: Dif. eq. problem - don't know why it is wrong 1. Feb 4, 2006 UrbanXrisis dif. eq. problem -- don't know why it is wrong Solve for y: $$x \frac{dy}{dx} + 6y=5x$$ with y(1)=2 $$\frac{dy}{dx} =5 - 6\frac{y}{x}$$ $$\frac{6y}{x} \frac{dy}{dx} =5$$ $$3y^2=\frac{5}{2}x^2 +C$$ $$y= \sqrt{\frac{5x^2}{6}+\frac{C}{3}}$$ solve for when y(1)=2 $$4=\frac{5}{6}+\frac{C}{3}$$ $$C=\frac{19}{2}$$ so.. $$y= \sqrt{\frac{5x^2}{6}+\frac{19}{6}}$$ what did I do wrong? because this is not the answer Last edited: Feb 4, 2006 2. Feb 4, 2006 d_leet y/x does not equal (dy)/(dx). You need to divide by x and then find an integrating factor. 3. Feb 4, 2006 UrbanXrisis sorry, that was a typo! could you check it now? 4. Feb 4, 2006 d_leet It looks even worse, you somehow make addition into multiplication. I already told you what you need to do. You need to divide everything by x to get the y' by itself and then find an integrating factor; you should know how to do that if you're being given this kind of problem because it certainly isn't separable. 5. Feb 4, 2006 UrbanXrisis oh thanks. by the way, when I am given a dif. eq. and they ask me "What are the constant solutions of this equation?" what exactly do they want me to find? 6. Feb 4, 2006 ek As said before, you're taking the wrong approach. You need to divide it by x to get it into standard form or whatever it's called, and then get an integrating factor. I get an answer of y = 5x/7 + c/x^6 Last edited: Feb 4, 2006 7. Feb 4, 2006 d_leet Umm... I think they might mean for you to find the solutions that are just a constant function, i.e. y = c, where c is just some constant. 8. Feb 4, 2006 d_leet You forgot to divide c by x^6 9. Feb 4, 2006 ek Ya. Actually I forgot to put in a period and added it in haphazardly without thinking. I'm quite absent-minded sometimes. I'll edit my post. I edited my last post. Last edited: Feb 4, 2006 10.
Feb 4, 2006 Valhalla hmmm $$x\frac{dy}{dx}+6y=5x$$ $$\frac{dy}{dx}+\frac{6}{x}y=5$$ $$e^{6\int\frac{dx}{x}}=x^6$$ multiply through by integrating factor $$x^6\frac{dy}{dx}+6x^5y=5x^6$$ then integrate both sides $$x^6y=\frac{5x^7}{7}+C$$ Divide through by x^6 $$y=\frac{5x}{7}+Cx^{-6}$$ do you see where your mistake is? now solve for C $$y(1)=2$$ $$2=\frac{5}{7}\cdot 1+C$$ $$2-\frac{5}{7}=C$$ $$\frac{9}{7}=C$$ so the specific solution is $$y=\frac{5x}{7}+\frac{9}{7}x^{-6}$$ Last edited: Feb 4, 2006 11. Feb 4, 2006 d_leet The term on the right hand side should be 5x not 5 so your answer is wrong as well. 12. Feb 4, 2006
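Valhalla's closed-form answer is easy to sanity-check numerically without redoing the algebra. A minimal sketch in plain Python (the derivative below is differentiated by hand from the stated solution, not computed symbolically):

```python
def y(x):
    # Valhalla's specific solution: y = 5x/7 + (9/7) * x^(-6)
    return 5 * x / 7 + (9 / 7) * x ** -6

def dy(x):
    # its derivative, taken by hand: y' = 5/7 - (54/7) * x^(-7)
    return 5 / 7 - (54 / 7) * x ** -7

# residual of the original ODE, x*y' + 6y - 5x; should be ~0 for every x
for x in [1.0, 2.0, 3.5, 10.0]:
    print(x, x * dy(x) + 6 * y(x) - 5 * x)
```

Each residual comes out at floating-point noise, and y(1) returns 2 up to rounding, matching the initial condition.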
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9060827493667603, "perplexity": 1281.9741841734854}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864957.2/warc/CC-MAIN-20180623093631-20180623113631-00023.warc.gz"}
http://math.stackexchange.com/questions/163359/limit-of-the-form-infty-infty?answertab=votes
# Limit of the form $\infty - \infty$ Consider: $$\lim_{x \to \infty} \left(x - \ln(e^x + e^{-x})\right)$$ I wasn't sure how to treat the $\infty - \infty$ property. Can I exponentiate the function to get $$e^x - (e^x + e^{-x}) = \frac{1}{e^x}$$ $$\lim_{x \to \infty} \frac{1}{e^x} = 0$$ I feel like I have ignored the limit part of the expression when exponentiating. Do I have to exponentiate the limit expression as well when I do this or can I ignore it for the moment? - The exponential of $x-\ln(e^x+e^{-x})$ is not equal to $e^x - (e^x+e^{-x})$. It is equal to $\frac{e^x}{e^x+e^{-x}}$. –  Arturo Magidin Jun 26 '12 at 16:07 1. You exponentiated wrong: $$\exp(x - \ln(e^x+e^{-x})) = e^xe^{-\ln(e^x+e^{-x})} = \frac{e^x}{e^{\ln(e^x+e^{-x})}} = \frac{e^x}{e^x+e^{-x}}.$$ 2. Since the exponential function is continuous, what we know is that $$\exp\left(\lim_{x\to\infty}f(x)\right) = \lim_{x\to\infty}\exp(f(x))$$ in the sense that if either one exists, then they both exist and they are equal; and if one of them is equal to $\infty$, then so is the other. Equivalently, we have that $$\lim_{x\to\infty} f(x) = \ln\left(\lim_{x\to\infty}\exp(f(x))\right).$$ So you can compute the limit of the exponential, $$\lim_{x\to\infty}\frac{e^x}{e^{x}+e^{-x}}$$ and if you obtain a value $L$, that means that the original limit will be $\ln(L)$. -
–  The Chaz 2.0 Jun 26 '12 at 17:16 Taking $x-\mathrm{Ln}(e^x+e^{-x})=y$ we have $1-e^{-2x}=e^{-y}$. So if $x\rightarrow +\infty$ then $y\rightarrow 0$. - That's a pretty big leap from $x-\ln(e^x+e^{-x})$ to $1-e^{-2x}=e^{-y}$, given that the poster clearly needs a refresher on how exponentiation works. It seems like it should be $1+e^{-2x}$ anyway. –  Thomas Andrews Jun 26 '12 at 16:19 Replace $x$ by $\ln(x)$, apply the formula $\ln a - \ln b = \ln\frac{a}{b}$, take the limit to $\infty$ and you're done (a way without pen and paper). - Do you have any tips on how I could improve my questions? Perhaps we should start a chat room so we do not flood the comments section here by the way. If you're willing to discuss this, feel free to join chat.stackexchange.com/rooms/3895/joe-and-chris –  Joe Jun 26 '12 at 18:37 Okay, I will keep that in mind when I ask questions from now on. –  Joe Jun 26 '12 at 18:42 When I joined the site, I recall reading posts (either from Meta or FAQ somewhere) that asked for users to display what they know about their given problem so far, where they encountered it, their work so far, where exactly they are "stuck", level of answers they should expect, avoiding imperative words in the posts such as "Show that...", et cetera. So, I try to keep all of those things in mind when writing my questions. If there is something specific I could work on, please let me know. Often I am far into a problem and get stuck on one part of it which I detail in my questions. –  Joe Jun 26 '12 at 18:47 Personally, I felt those few questions I stumbled upon yesterday were imperative, did not note where the question or idea came from, so I downvoted. Not one bit of the downvoting was personal, so I hope you don't plan on taking a personal attack of downvoting my questions out of spite. If you do though, I suppose there isn't much I can do about it. Keep in mind that I downvote many questions - currently my up/down ratio is 151/119. 
It has nothing to do with the users themselves, ever. It is purely based off of the question and its wording. –  Joe Jun 26 '12 at 18:52 Here's what I think most of the answers here are trying to say. First rewrite $x$ as $\ln e^x$, then use the properties of logarithms. So we have $$\lim_{x\to\infty}[\ln e^x-\ln(e^x+e^{-x})]=$$ $$\lim_{x\to\infty}\ln\frac{e^x}{e^x+e^{-x}}=$$ $$\lim_{x\to\infty}\ln\frac1{1+e^{-2x}}$$ From here it should be easy. -
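As a numerical cross-check of the answers above, one can tabulate the expression for growing x. A small Python sketch (`direct` and `rewritten` are our names; the rewritten form is the algebraically equal $-\ln(1+e^{-2x})$ from the last answer, which stays usable even where `math.exp(x)` would overflow):

```python
import math

def direct(x):
    # x - ln(e^x + e^{-x}), exactly as written in the question
    return x - math.log(math.exp(x) + math.exp(-x))

def rewritten(x):
    # algebraically equal form -ln(1 + e^{-2x}); log1p keeps precision
    # for small arguments and avoids computing e^x at all
    return -math.log1p(math.exp(-2 * x))

for x in [1, 5, 10, 20]:
    print(x, direct(x), rewritten(x))
```

Both columns shrink toward 0, consistent with $L = 1$ and $\ln(L) = 0$.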
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6977466344833374, "perplexity": 410.55545870128697}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274866.27/warc/CC-MAIN-20140728011754-00268-ip-10-146-231-18.ec2.internal.warc.gz"}
http://www.gradesaver.com/textbooks/science/chemistry/chemistry-the-central-science-13th-edition/chapter-6-electronic-structure-of-atoms-exercises-page-250/6-31b
## Chemistry: The Central Science (13th Edition) The number of photons emitted by the laser per second is $8.073\times10^{16}$ photons/s. *Strategy: 1) Calculate the energy of a photon of this radiation. 2) Find out how much energy is absorbed per second by the detector. 3) Calculate the number of photons emitted by the laser per second. 1) Calculate the energy of a photon of this radiation. *Known variables and constants: - Wavelength of radiation: $\lambda=9.87\times10^{-7}m$ - Planck's constant: $h\approx6.626\times10^{-34}J\cdot s$ - Speed of light in a vacuum: $c\approx2.998\times10^8m/s$ *The energy of a photon of this radiation is: $E_p=\frac{h\times c}{\lambda}=\frac{(6.626\times10^{-34})\times(2.998\times10^8)}{9.87\times10^{-7}}\approx2.013\times10^{-19}J/photon$ 2) Find out how much energy is absorbed by the detector per second. *Known variables: - Total energy absorbed: $E=0.52J$ - Amount of time: $t=32s$ *The amount of energy absorbed per second is: $E_s=\frac{E}{t}=\frac{0.52}{32}=1.625\times10^{-2}J/s$ 3) Calculate the number of photons ($N$) emitted by the laser per second. $N=\frac{E_s}{E_p}=\frac{1.625\times10^{-2}}{2.013\times10^{-19}}\approx8.073\times10^{16}$ photons/s
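The three steps of the worked solution can be reproduced in a few lines of Python (constants exactly as quoted in the solution):

```python
# constants as quoted in the worked solution
h = 6.626e-34    # Planck's constant, J*s
c = 2.998e8      # speed of light, m/s
lam = 9.87e-7    # wavelength, m

E_photon = h * c / lam   # step 1: energy of one photon, J
E_per_s = 0.52 / 32      # step 2: energy absorbed per second, J/s
N = E_per_s / E_photon   # step 3: photons per second

print(E_photon, N)
```

This prints roughly 2.01e-19 J per photon and 8.07e16 photons per second, in line with the stated answer.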
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.946672260761261, "perplexity": 576.7220498987554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945637.51/warc/CC-MAIN-20180422174026-20180422194026-00208.warc.gz"}
https://packages.tesselle.org/chronos/articles/import.html
This vignette uses data available through the fasti package, which is available in a separate repository. fasti provides MCMC outputs from ChronoModel, OxCal and BCal. ## Install the latest version install.packages("fasti", repos = "https://tesselle.r-universe.dev") library(chronos) ChronoModel Two different files are generated by ChronoModel: Chain_all_Events.csv that contains the MCMC samples of each event created in the modeling, and Chain_all_Phases.csv that contains all the MCMC samples of the minimum and the maximum of each group of dates if at least one group is created. ## Read events from ChronoModel output_events <- system.file("chronomodel/ksarakil/Chain_all_Events.csv", package = "fasti") ## Plot events plot(chrono_events) #> Picking joint bandwidth of 49.2 ## Read phases from ChronoModel output_phases <- system.file("chronomodel/ksarakil/Chain_all_Phases.csv", package = "fasti") ## Plot phases plot(chrono_phases) OxCal OxCal generates a CSV file containing the MCMC samples of all parameters (dates, start and end of phases). ## Read OxCal MCMC samples output_oxcal <- system.file("oxcal/ksarakil/MCMC_Sample.csv", package = "fasti") oxcal_mcmc <- read_oxcal(output_oxcal) The phase boundaries cannot be extracted automatically from OxCal output. Use as_phases() to get the phase boundaries: ## Get phase boundaries from OxCal oxcal_phases <- as_phases(oxcal_mcmc, start = c(2, 5, 19, 24), stop = c(4, 18, 23, 26), names = c("IUP", "Ahmarian", "UP", "EPI")) ## Plot phase boundaries plot(oxcal_phases) ## Compute phase boundaries (min-max) groups <- list(IUP = 3, Ahmarian = c(6:12, 14:17), UP = 20:22, EPI = 25) oxcal_groups <- phase(oxcal_mcmc, groups = groups) ## Plot phase boundaries plot(oxcal_groups) BCal BCal generates a CSV file containing the MCMC samples of all parameters (dates, start and end of groups).
```r
## Read BCal MCMC samples
output_bcal <- system.file("bcal/output/rawmcmc.csv", package = "fasti")
bcal_mcmc <- read_bcal(output_bcal)
```

The group boundaries cannot be extracted automatically from the BCal output. Use `as_phases()` to get the group boundaries:

```r
## Get group boundaries from BCal
bcal_phases <- as_phases(bcal_mcmc, start = c(1, 4, 9, 22),
                         stop = c(3, 8, 21, 24),
                         names = c("EPI", "UP", "Ahmarian", "IUP"))

## Plot group boundaries
plot(bcal_phases)

## Compute phase boundaries (min-max)
groups <- list(IUP = 23, Ahmarian = 10:20, UP = 5:7, EPI = 2)
bcal_groups <- phase(bcal_mcmc, groups = groups)

## Plot phase boundaries
plot(bcal_groups)
```
https://encyclopediaofmath.org/index.php?title=Low-dimensional_topology,_problems_in&oldid=50797
# Low-dimensional topology, problems in

Many problems in two-dimensional topology (cf. Topology of manifolds) arise from, or have to do with, attempts to lift algebraic operations performed on the chain complex $\underline{\underline{C}} ( \tilde { K } )$ of a universal covering complex $\tilde { K } ^ { 2 }$ to geometric operations on the complex $K ^ { 2 }$ (here and below, "complex" means a $P L C W$-complex, i.e. a polyhedron with a $C W$-structure, see [a4] for a precise definition; for simplicity, one may think of a polyhedron): the chain complex $\underline{\underline{C}} ( \tilde { K } )$ encodes the relators of the presentation (cf. Presentation) associated to $K ^ { 2 }$ only up to commutators between relators. A first classical example of this phenomenon occurs in the proof of the $s$-cobordism theorem (see [a7]), which thus only works for manifolds of dimension $\geq 6$. In this context, J. Andrews and M. Curtis (see [a8]) asked whether the unique $5$-dimensional thickening of a compact connected $2$-dimensional complex (in short, a $2$-complex) in a $5$-dimensional piecewise-linear manifold (a PL-manifold) is a $5$-dimensional ball. They show that this is implied by the Andrews–Curtis conjecture.

## Andrews–Curtis conjecture.

AC) Any contractible finite $2$-complex $K^2$ $3$-deforms to a point, i.e. there exists a $3$-dimensional complex $L^3$ such that $L^3$ collapses to $K ^ { 2 }$ and to a point: $K ^ { 2 } \swarrow L ^ { 3 } \searrow \operatorname{pt}$. (Cf. [a4] for the precise notion of a collapse, which is a deformation retraction through "free faces".)

Figure: l120170a A sequence of "elementary" collapses yielding a collapse

To a contractible finite $2$-complex there corresponds a balanced presentation (cf. Presentation) $\mathcal{P} = \langle x _ { 1 } , \dots , x _ { n } | R _ { 1 } , \dots , R _ { n } \rangle$ of the trivial group.
$3$-deformations can be translated into a sequence of Andrews–Curtis moves on $\mathcal{P}$:

1) $R _ { i } \rightarrow R _ { i } ^ { - 1 }$;

2) $R _ { i } \rightarrow R _ { i } R _ { j }$, $i \neq j$;

3) $R _ { i } \rightarrow w R _ { i } w ^ { - 1 }$, $w$ any word;

4) add a generator $x_{n+1}$ and a relation $wx_{n+1}$, $w$ any word in $x _ { 1 } , \ldots , x _ { n }$.

Hence, an equivalent statement of the Andrews–Curtis conjecture is: any balanced presentation of the trivial group can be transformed into the empty presentation by Andrews–Curtis moves. Note that redundant relations cannot be added, since by Tietze's theorem (see [a9]) any two presentations of a group become equivalent under insertion and deletion of redundant relations and Andrews–Curtis moves.

Here are some prominent potential counterexamples to AC):

1) $\langle a , b , c | c ^ { - 1 } b c = b ^ { 2 } , a ^ { - 1 } c a = c ^ { 2 } , b ^ { - 1 } a b = a ^ { 2 } \rangle$ (E.S. Rapaport, see [a42]);

2) $\langle a , b | b a ^ { 2 } b ^ { - 1 } = a ^ { 3 } , a b ^ { 2 } a ^ { - 1 } = b ^ { 3 } \rangle$ (R.H. Crowell and R.H. Fox, see [a10], and [a11] for a generalization to an infinite series);

3) $\langle a , b | a b a = b a b , a ^ { 4 } = b ^ { 5 } \rangle$ (S. Akbulut and R. Kirby, see [a12]); this example corresponds to a homotopy $4$-sphere which is shown to be standard by a judicious addition of a $2$-, $3$-handle pair, see [a13] and [a6];

4) $\langle a , b | a = [ a ^ { p } , b ^ { q } ] , b = [ a ^ { r } , b ^ { s } ] \rangle$ (C.McA. Gordon).

An analogue of the conjecture is true in all dimensions different from $2$; in fact, the following generalization of it to non-trivial groups and keeping a subcomplex fixed holds (see [a14] for $n \geq 3$ and [a15] for $n = 1$; cf.
also Homotopy type): Let $n \neq 2$, and let $f : K _ { 0 } \rightarrow K _ { 1 }$ be a simple-homotopy equivalence of connected, finite complexes, inducing the identity on the common subcomplex $L$, where $n = \operatorname { max } ( \operatorname { dim } ( K _ { 0 } - L ) , \operatorname { dim } ( K _ { 1 } - L ) )$. Then $f$ is homotopic rel $L$ to a deformation $K _ { 0 } \stackrel { n + 1 } { \nearrow \searrow } K _ { 1 }$ which leaves $L$ fixed throughout. A deformation is a composition of expansions and collapses; if the maximal cell dimension involved is $n$, the deformation is called an $n$-deformation and is denoted by $K \stackrel { n } { \nearrow \searrow } L$, see [a7]. The corresponding statement for $n = 2$ is called the relative generalized Andrews–Curtis conjecture, relAC') ( "generalized" because the fundamental group of $K_i$ may be non-trivial; "relative" because of the fixed subcomplex). The subcase $L = \emptyset$, i.e. the expectation that a simple-homotopy equivalence between finite $2$-dimensional complexes can always be replaced by a $3$-deformation, is called the generalized Andrews–Curtis conjecture, henceforth abbreviated AC'); see [a4].

Suppose $\mathcal{P} = \langle a _ { 1 } , \dots , a _ { g } | R _ { 1 } , \dots , R _ { n } \rangle$ and $\mathcal{Q} = \langle a _ { 1 } , \dots , a _ { g } | S _ { 1 } , \dots , S _ { n } \rangle$ are presentations of $\pi$ such that

D) each difference $R _ { i } S _ { i } ^ { - 1 }$ is a consequence of commutators $[ R _ { j } , R _ { k } ]$ ($1 \leq j , k \leq n$) of relators;

then the corresponding $2$-dimensional complexes $K ^ { 2 }$ and $L^{2}$ are simple-homotopy equivalent. Furthermore, up to Andrews–Curtis moves the converse is true, see [a16]. Thus, in terms of presentations, AC') states that under the assumption D), $R_i$ can actually be made to coincide with $S _ { i }$ by Andrews–Curtis moves, for all $i$.
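The elementary moves 1)–3) are concrete enough to experiment with mechanically. The following Python sketch (an illustration added here, not part of the original article) represents a relator as a string in which an upper-case letter stands for the inverse of the corresponding generator (a convention chosen for simplicity); the presentation-level move 4) and the geometric interpretation are omitted:

```python
def inv(w):
    """Formal inverse of a word: reverse it and swap the case of each letter
    (upper case denotes the inverse of a generator)."""
    return w[::-1].swapcase()

def reduce_word(w):
    """Freely reduce a word by cancelling adjacent pairs x x^{-1}."""
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def move_invert(rels, i):
    """Move 1): R_i -> R_i^{-1}."""
    rels = list(rels)
    rels[i] = reduce_word(inv(rels[i]))
    return rels

def move_multiply(rels, i, j):
    """Move 2): R_i -> R_i R_j, for i != j."""
    assert i != j
    rels = list(rels)
    rels[i] = reduce_word(rels[i] + rels[j])
    return rels

def move_conjugate(rels, i, w):
    """Move 3): R_i -> w R_i w^{-1}, for any word w."""
    rels = list(rels)
    rels[i] = reduce_word(w + rels[i] + inv(w))
    return rels

# A toy balanced presentation <a, b | ab, b> of the trivial group
# is brought to <a, b | a, b> in two moves:
rels = ["ab", "b"]
rels = move_invert(rels, 1)       # ['ab', 'B']
rels = move_multiply(rels, 0, 1)  # ['a', 'B']
```

For the potential counterexamples listed above, no such trivializing sequence of moves is known; naive search over move sequences grows exponentially and has not settled them.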
Even though AC') is expected to be false, D) implies that the difference $R _ { i } S _ { i } ^ { - 1 }$ between the $i$th relators can be pushed, by Andrews–Curtis moves, to become a product of arbitrarily high commutators of relators, see [a17]. Furthermore, taking the one-point union not only with a finite number of $2$-spheres, but also with certain $2$-complexes of minimal Euler characteristic, eliminates any potential difference between simple-homotopy and $3$-deformations: a simple-homotopy equivalence between finite connected $2$-complexes $K ^ { 2 }$, $L^{2}$ gives rise to a $3$-deformation between the one-point union of $K ^ { 2 }$ (respectively, $L^{2}$) with a sufficiently large number of standard complexes of ${\bf Z} _ { 2 } \times {\bf Z} _ { 4 }$, see [a16]. For a detailed discussion of the status of the conjectures AC), AC') and relAC'), see [a4], Chap. XII.

There is a close relation between $2$-complexes and $3$-manifolds (cf. Three-dimensional manifold): every compact connected $3$-dimensional manifold with non-empty boundary collapses to a $2$-dimensional complex, called a spine (see [a4], Chap. I, §2.2), and thus determines a $3$-deformation class of $2$-complexes. A counterexample to AC) which is a $3$-manifold with spine $K ^ { 2 }$ would disprove the $3$-dimensional Poincaré conjecture (cf. Three-dimensional manifold).

## Zeeman conjecture.

This prominent conjecture on $2$-complexes actually implies the $3$-dimensional Poincaré conjecture. The Zeeman conjecture states that (see [a23]):

Z) if $K ^ { 2 }$ is a compact contractible $2$-dimensional complex, then $K ^ { 2 } \times I \searrow \operatorname{pt}$, where $I$ is an interval.

Note that Z) also implies AC), as $K ^ { 2 } \nearrow K ^ { 2 }\times I \searrow \operatorname {pt}$ would be a $3$-deformation. Examples which fulfil $K ^ { 2 } \times I \searrow \operatorname{pt}$ are the dunce hat, Bing's house and the house with one room, see [a4].
However, $K ^ { 2 } \times I \searrow \operatorname{pt}$ is not even established (as of 1999) for most of the standard $2$-complexes of presentations $\langle a , b | a ^ { p } b ^ { q } , a ^ { r } b ^ { s } \rangle$ with $p s - q r = \pm 1$, even though these are Andrews–Curtis equivalent to the empty presentation. As for AC), there is a straightforward generalization to non-trivial groups, the generalized Zeeman conjecture:

Z') $K ^ { 2 } \stackrel { 3 } { \nearrow \searrow } L ^ { 2 }$ implies $K ^ { 2 } \times I \searrow L ^ { 2 }$ or $L ^ { 2 } \times I \searrow K ^ { 2 }$.

Of course, Z') implies both Z) and AC'). It is open (as of 1999) whether AC') implies Z'), but given a $3$-deformation $K ^ { 2 } \stackrel { 3 } { \nearrow \searrow } L ^ { 2 }$ between finite $2$-complexes, $K ^ { 2 }$ can be expanded by a sequence of $2$-expansions to a $2$-complex $K ^ { \prime 2 } \searrow K ^ { 2 }$ such that $K ^ { \prime 2 } \times I \searrow \operatorname{pt}$, see [a18]. In the special case of expansion of a single $3$-ball, followed by a $3$-collapse, $K ^ { 2 } \nearrow K ^ { 2 } \cup _ { B ^ { 2 } } B ^ { 3 } \searrow L ^ { 2 }$, it is true that $K ^ { 2 } \times I \searrow L ^ { 2 }$, see [a19], [a20], [a21]. This can be viewed as a first step in proving Z') modulo AC'), as every $3$-deformation between finite $2$-complexes can be replaced by one where each $3$-ball is transient, i.e. is collapsed (in general from a different free face) immediately after its expansion, see [a22]. For $L ^ { 2 } = \operatorname {pt}$, this method is called collapsing by adding a cell and works for all above-mentioned examples for $K ^ { 2 } \times I \searrow \operatorname{pt}$. A second general method for collapsing $K ^ { 2 } \times I$ was proposed by A. Zimmermann (see [a24]) and is called prismatic collapsing.
At first one gets rid of the $3$-dimensional part of $K ^ { 2 } \times I$ as follows: for each $2$-cell $C ^ { 2 }$ of $K ^ { 2 }$ one collapses $C ^ { 2 } \times I$ to the union of $\partial C ^ { 2 } \times I$ and a $2$-cell $C ^ { * } \subset C ^ { 2 } \times I$ such that the direct product projection maps $\operatorname { Int } C ^ { * }$ onto $\operatorname { Int } C ^ { 2 }$ homeomorphically. Then one looks for a collapse of the resulting $2$-complex. One may say that prismatic collapsing is a very rough method, but exactly this roughness allows one to give an algebraic criterion for the prismatic collapsibility of $K ^ { 2 } \times I$: the attaching mappings for the $2$-cells of $K ^ { 2 }$ have to determine a basis-up-to-conjugation in the free fundamental group of the $1$-dimensional skeleton (see [a7]) of $K ^ { 2 }$.

Z) becomes true if one admits multiplication of $K ^ { 2 }$ by the $n$-fold product of $I$: for each contractible $K ^ { 2 }$ there exists an integer $n$ such that $K ^ { 2 } \times I ^ { n } \searrow \operatorname{pt}$, see [a19], [a20]. In fact, $n = 6$ suffices for all $K ^ { 2 }$, see [a25]. It is surprising that there is such a large gap between the presently (1999) known ($n = 6$) and Zeeman's conjectured ($n = 1$) values of $n$. On the other hand, a generalization of Z) to higher-dimensional complexes is false, since for any $n > 2$ there exists a contractible complex $K ^ { n }$ of dimension $n$ such that $K ^ { n } \times I$ is not collapsible, see [a26]. The proof of non-collapsibility is based on a very specific (one may say "bad" ) local structure of $K ^ { n }$. So, the idea to investigate Z) for $2$-dimensional polyhedra with a "nice" local structure (such polyhedra are called special) seems to be very promising. In fact, if $K ^ { 2 }$ is a special spine of a homotopy $3$-ball $M ^ { 3 }$, then $K ^ { 2 } \times I$ collapses onto a homeomorphic copy of $M ^ { 3 }$, see [a27].
It follows that Z) is true for all special spines of a genuine $3$-ball and that for special spines of $3$-manifolds, Z) is equivalent to the $3$-dimensional Poincaré conjecture. Surprisingly, for special polyhedra that cannot be embedded in a $3$-manifold, Z) turns out to be equivalent to AC) (see [a28]), so that for special polyhedra, Z) is equivalent to the union of AC) and the $3$-dimensional Poincaré conjecture.

## $\operatorname{Wh} ^ { * }$-question.

Another situation where dimension $2$ presents a severe difficulty in passing from chain complexes to geometry concerns the Whitehead group and the Whitehead torsion of a pair $( K , L )$, where $L$ is a strong deformation retract of $K$ (cf. Whitehead group; Whitehead torsion). All elements of $\operatorname{Wh} ( \pi )$ can be realized by pairs with $\operatorname { dim } K = 3$. Let $\operatorname{Wh} ^ { * } ( \pi ) \subseteq \operatorname{Wh} ( \pi )$ be the set of those torsion values that can be realized by a $2$-dimensional extension, i.e. with $\operatorname { dim } ( K - L ) \leq 2$. The $\operatorname{Wh} ^ { * }$-question is whether $\operatorname{Wh} ^ { * } ( \pi ) \neq \{ 0 \}$ can happen; see [a4]. If so, another related question is whether $\operatorname{ Wh} ^ { * } ( \pi )$ is a subgroup. A famous result of O.S. Rothaus is that there exist examples $\tau \in \operatorname{Wh} ( \pi )$ for dihedral groups $\pi$ with $\tau \notin \operatorname{Wh} ^ { * } ( \pi )$; see [a29]. This result was the basis for work by M.M. Cohen [a26] on the generalization of Z) to higher dimensions.

A $2$-complex $K$ is called aspherical if its second homotopy group $\pi_2 ( K )$ is trivial (or, equivalently, if all $\pi _ { n } ( K )$ for $n \geq 2$ are trivial). J.H.C. Whitehead asked (see [a30]) whether subcomplexes of aspherical $2$-complexes are themselves aspherical. An affirmative answer to this question is called the Whitehead conjecture:

WH) A subcomplex $K$ of an aspherical $2$-complex $L$ is aspherical.
A lot of work has already been done in trying to solve this conjecture, and there are about six false results in the literature which would imply WH). WH) is known to be true if $K$ has at most one $2$-cell, and also in the case where $\pi_1 ( L )$ is either finite, Abelian or free, see [a31]. If $K$ is a subcomplex of an aspherical $2$-complex, then one can show that the second homology of the covering $\overline { K } \rightarrow K$ corresponding to the commutator subgroup is trivial. In fact, J.F. Adams has shown [a32] that $K$ has an acyclic regular covering $K ^ { * } \rightarrow \overline { K } \rightarrow K$ (i.e. $H _ { 2 } ( K ^ { * } ) = H _ { 1 } ( K ^ { * } ) = 0$). A counterexample to WH) can thus be covered by an acyclic complex, but not by a contractible one. In any counterexample $K \subset L$ to WH), the kernel of the inclusion-induced mapping $\pi _ { 1 } ( K ) \rightarrow \pi _ { 1 } ( L )$ has a non-trivial, finitely generated, perfect subgroup, [a33]. J. Howie has shown [a34] that if WH) is false, then there exists a counterexample $K \subset L$ satisfying either

a) $L$ is finite and contractible, and $K = L - e$ for some $2$-cell $e$ of $L$; or

b) $L$ is the union of an infinite ascending chain of finite non-aspherical subcomplexes $K = K _ { 0 } \subset K _ { 1 } \subset \ldots$ such that each inclusion mapping is null-homotopic.

This result has been sharpened by E. Luft, who showed that if WH) is false, then there must even exist an infinite counterexample of type b).

Let $\mathcal{P} = \langle x _ { 1 } , \dots , x _ { g } | R _ { 1 } , \dots , R _ { n } \rangle$ be a finite presentation in which each relator is of the form $x _ { i } = x _ { j } x _ { k } x _ { j } ^ { - 1 }$.
Such a presentation may be represented by a graph $T _ { \mathcal{P} }$ in the following way: for each generator $x_{i}$ of $\mathcal{P}$, define a vertex labelled $i$, and for each relator $x _ { i } = x _ { j } x _ { k } x _ { j } ^ { - 1 }$ define an edge oriented from the vertex $i$ to the vertex $k$ and labelled by $j$. If $T _ { \mathcal{P} }$ is a tree, then $\mathcal{P}$ (or $T _ { \mathcal{P} }$, or the standard $2$-complex $K _ { \mathcal{P} }$ modelled on $\mathcal{P}$) is called a labelled oriented tree. Now, Howie showed [a34] that if the Andrews–Curtis conjecture is true and all labelled oriented trees are aspherical, then there are no counterexamples of type a) to WH). Conversely, if there are no counterexamples of type a) to WH), then all labelled oriented trees are aspherical, which is easy to see since adding an extra relator $x _ { 1 } = 1$ to a labelled oriented tree yields a balanced presentation of the trivial group and hence a contractible complex. So the finite case of WH) can be reduced to the study of the asphericity of labelled oriented trees. Every knot group has a labelled oriented tree presentation (the Wirtinger presentation, see, e.g., [a6]), and by a theorem of C.D. Papakyriakopoulos, [a36], it is known that these labelled oriented trees are aspherical. Every labelled oriented tree satisfying the small cancellation conditions $C ( 4 )$, $T ( 4 )$, or a more refined curvature condition such as the weight or cycle test, [a37], is aspherical. Apart from that, not many classes of aspherical labelled oriented trees are known: Howie, [a35], shows the asphericity of labelled oriented trees of diameter at most $3$, and G. Huck and S. Rosebrock have exhibited two other classes of aspherical labelled oriented trees satisfying certain conditions on the relators. An overview of WH), in which further aspects of this conjecture are treated, can be found in [a4], Chap. X.

## Wall's domination problem.
Given a CW-complex, it is natural to ask whether it can be replaced by a simpler one having the same homotopy type. Questions of this kind were first considered by J.H.C. Whitehead, who posed in particular the question: when is a CW-complex homotopy equivalent to a finite-dimensional one? In [a38], C.T.C. Wall answered this by giving an algebraic characterization of finiteness. He also showed that a finite complex $X$ dominated by a finite $n$-complex $Y$ has the homotopy type of a finite $\operatorname{max}( 3 , n )$-complex if and only if a certain algebraic obstruction vanishes. ($X$ is dominated by $Y$ if the "homotopy of $X$ survives passing through $Y$", i.e. if there are mappings $f : X \rightarrow Y$, $g : Y \rightarrow X$ such that the composition $X \stackrel { f } { \rightarrow } Y \stackrel { g } { \rightarrow } X$ is homotopic to the identity.) Whether "$\operatorname{max}( 3 , n )$" can simply be replaced by "$n$" is still (1999) unanswered, due to difficulties when attempting to geometrically realize an algebraic $2$-complex. In order to explain this in more detail, assume $B$ is a chain complex of free ${\bf Z} G$-modules,

\begin{equation*} B _ { 2 } \stackrel { d _ { 2 } } { \rightarrow } B _ { 1 } \stackrel { d _ { 1 } } { \rightarrow } B _ { 0 } \rightarrow 0, \end{equation*}

where $B_0$ is freely generated by a single element $e_0$, $B _ { 1 }$ by $\{ e _ { 1 } ^ { i } \}$, $B _ { 2 }$ by $\{ e _ { 2 } ^ { j } \}$, $d _ { 1 } ( e _ { 1 } ^ { i } ) = g _ { i } e _ { 0 } - e _ { 0 }$ for some group element $g_i$, and $H _ { 1 } ( B ) = 0$, $H _ { 0 } ( B ) = \mathbf{Z}$. Wall asked if $B$ is necessarily the cellular chain complex of the universal covering $\widetilde { K }$ of a $2$-complex $K$ with fundamental group $G$. An affirmative answer would resolve the difficulties in dimension two mentioned above. This topological set-up can also be rephrased in terms of combinatorial group theory.
Let $F$ be the free group generated by $\{ x _ { i } \}$ and let $N$ be the kernel of the homomorphism from $F$ to $G$ sending $x_{i}$ to $g_i$. The image of the second boundary mapping $d _ { 2 }$ can be shown to be isomorphic to the relation ${\bf Z} G$-module $N / [ N , N ]$. Wall's question of geometric realizability now translates to asking whether the relation module generators $d _ { 2 } ( e _ { 2 } ^ { j } )$ lift to give a set of normal generators for $N$. This was answered negatively by M. Dunwoody (see [a39]).

## Relation gap question.

M. Dyer showed that a more serious failure of this lifting problem, the relation gap question, would actually show that there does exist a finite $3$-complex dominated by a finite $2$-complex, with vanishing obstruction, that is not homotopy equivalent to a finite $2$-complex. Here, a finite presentation $F / N$ of a group $G$ is said to have a relation gap if no normal generating set of $N$ gives a minimal generating set for the relation module $N / [ N , N ]$. There have been many attempts to construct a relation gap in finitely presented groups (see [a4], p. 50). The existence of an infinite relation gap for a certain finitely generated, infinitely related group was established in the influential paper of M. Bestvina and N. Brady [a1].

## Eilenberg–Ganea conjecture.

Another problem revolving around geometric realizability, connected to the relation gap problem and the Whitehead conjecture, is the Eilenberg–Ganea conjecture. A group $G$ is of cohomological dimension $n$ if there exists a projective resolution of length $n$,

\begin{equation*} 0 \rightarrow P _ { n } \rightarrow \ldots \rightarrow P _ { 0 } \rightarrow \mathbf{Z} \rightarrow 0, \end{equation*}

but no shorter one (see [a2] for a good reference on these matters). It was shown by S. Eilenberg, T. Ganea and J. Stallings ([a40], [a41]) that a group of cohomological dimension $n \neq 2$ admits an $n$-dimensional $K ( G , 1 )$ complex $K$.
In particular, there is a geometric resolution of length $n$ arising as the augmented cellular chain complex of the universal covering of $K$. The Eilenberg–Ganea conjecture states that this is true in dimension $2$ as well. This conjecture is widely believed to be wrong; promising potential counterexamples have been exhibited by Bestvina, and also by Bestvina and Brady [a1]. If the group in question does not have a relation gap, then J.A. Hillman showed that a weaker version of the conjecture is true, see [a3]. In particular, if the group $G$ does not have a relation gap and acts freely and co-compactly on an acyclic $2$-complex, then it also admits a co-compact free action on a contractible $2$-complex. A perhaps unsuspected connection between the Eilenberg–Ganea and the Whitehead conjecture was found by Bestvina and Brady in [a1]: at least one of the conjectures must be wrong!

#### References

[a1] M. Bestvina, N. Brady, "Morse theory and finiteness properties of groups", Invent. Math., 129 (1997) pp. 445–470

[a2] K. Brown, "Cohomology of groups", GTM 87, Springer (1982)

[a3] J.A. Hillman, "2-knots and their groups", Austral. Math. Soc. Lecture Notes 5, Cambridge Univ. Press (1989)

[a4] C. Hog-Angeloni, W. Metzler, A. Sieradski, "Two-dimensional homotopy and combinatorial group theory", London Math. Soc. Lecture Note Ser. 197, Cambridge Univ. Press (1993)

[a5] R. Kirby, "Problems in low-dimensional topology", in W.H. Kazez (ed.), Geometric Topology (1993 Georgia Internat. Topology Conf.), 2, Amer. Math. Soc. & Internat. Press (1993) pp. 35–473

[a6] D. Rolfsen, "Knots and links", Publish or Perish (1976)

[a7] C.P. Rourke, B.J. Sanderson, "Introduction to piecewise linear topology", Springer (1972)

[a8] J.J. Andrews, M.L. Curtis, "Free groups and handlebodies", Proc. Amer. Math. Soc., 16 (1965) pp. 192–195

[a9] H. Tietze, "Über die topologischen Invarianten mehrdimensionaler Mannigfaltigkeiten", Monatsh. Math. Phys., 19 (1908) pp. 1–118

[a10] R.H. Crowell, R.H. Fox, "Introduction to knot theory", Ginn (1963)

[a11] C.F. Miller, P.E. Schupp, Letter to M.M. Cohen, Oct. (1979)

[a12] S. Akbulut, R. Kirby, "A potential smooth counterexample in dimension 4 to the Poincaré conjecture, the Schoenflies conjecture and the Andrews–Curtis conjecture", Topology, 24 (1985) pp. 375–390

[a13] R.E. Gompf, "Killing the Akbulut–Kirby sphere with relevance to the Andrews–Curtis and Schoenflies problems", Topology, 30 (1991) pp. 97–115

[a14] C.T.C. Wall, "Formal deformations", Proc. London Math. Soc., 16 (1966) pp. 342–354

[a15] W. Metzler, "Äquivalenzklassen von Gruppenbeschreibungen, Identitäten und einfacher Homotopietyp in niederen Dimensionen", London Math. Soc. Lecture Notes 36, Cambridge Univ. Press (1979) pp. 291–326

[a16] C. Hog-Angeloni, W. Metzler, "Stabilization by free products giving rise to Andrews–Curtis equivalences", Note di Mat., 10: Suppl. 2 (1990) pp. 305–314

[a17] C. Hog-Angeloni, W. Metzler, "Andrews–Curtis-Operationen mit höheren Kommutatoren der Relatorengruppe", J. Pure Appl. Algebra, 75 (1991) pp. 37–45

[a18] R. Kreher, W. Metzler, "Simpliziale Transformationen von Polyedern und die Zeeman-Vermutung", Topology, 22 (1983) pp. 19–26

[a19] P. Dierker, "Notes on collapsing $K \times I$ where $K$ is a contractible polyhedron", Proc. Amer. Math. Soc., 19 (1968) pp. 425–428

[a20] W.B.R. Lickorish, "On collapsing $X ^ { 2 } \times I$", in Topology of Manifolds, Markham (1970) pp. 157–160

[a21] D. Gillman, "Bing's house and the Zeeman conjecture", Topology Appl., 24 (1986) pp. 147–151

[a22] P. Wright, "Group presentations and formal deformations", Trans. Amer. Math. Soc., 208 (1975) pp. 161–169

[a23] E.C. Zeeman, "On the dunce hat", Topology, 2 (1964) pp. 341–358

[a24] A. Zimmermann, "Eine spezielle Klasse kollabierbarer Komplexe $K ^ { 2 } \times I$", Thesis, Frankfurt am Main (1978)

[a25] M.M. Cohen, "Dimension estimates in collapsing $X \times I ^ { q }$", Topology, 14 (1975) pp. 253–256

[a26] M.M. Cohen, "Whitehead torsion, group extensions and Zeeman's conjecture in high dimensions", Topology, 16 (1977) pp. 79–88

[a27] D. Gillman, D. Rolfsen, "The Zeeman conjecture for standard spines is equivalent to the Poincaré conjecture", Topology, 22 (1983) pp. 315–323

[a28] S.V. Matveev, "Zeeman conjecture for unthickenable special polyhedra is equivalent to the Andrews–Curtis conjecture", Sib. Mat. Zh., 28:6 (1987) pp. 66–80 (in Russian)

[a29] O.S. Rothaus, "On the nontriviality of some group extensions given by generators and relations", Ann. of Math., 106 (1977) pp. 599–612

[a30] J.H.C. Whitehead, "On adding relations to homotopy groups", Ann. of Math., 42 (1941) pp. 409–428

[a31] W.H. Cockroft, "On two-dimensional aspherical complexes", Proc. London Math. Soc., 4 (1954) pp. 375–384

[a32] J.F. Adams, "A new proof of a theorem of W.H. Cockroft", J. London Math. Soc., 30 (1955) pp. 482–488

[a33] J. Howie, "Aspherical and acyclic $2$-complexes", J. London Math. Soc., 20 (1979) pp. 549–558

[a34] J. Howie, "Some remarks on a problem of J.H.C. Whitehead", Topology, 22 (1983) pp. 475–485

[a35] J. Howie, "On the asphericity of ribbon disc complements", Trans. Amer. Math. Soc., 289 (1985) pp. 419–430

[a36] C.D. Papakyriakopoulos, "On Dehn's lemma and the asphericity of knots", Ann. of Math., 66 (1957) pp. 1–26

[a37] G. Huck, S. Rosebrock, "Ein verallgemeinerter Gewichtstest mit Anwendungen auf Baumpräsentationen", Math. Z., 211 (1992) pp. 351–367

[a38] C.T.C. Wall, "Finiteness conditions for CW-complexes", Ann. of Math., 81 (1965) pp. 56–69

[a39] M.J. Dunwoody, "Relation modules", Bull. London Math. Soc., 4 (1972) pp. 151–155

[a40] S. Eilenberg, T. Ganea, "On the Lusternik–Schnirelmann category of abstract groups", Ann. of Math., 65 (1957) pp. 517–518

[a41] J.R. Stallings, "On torsion-free groups with infinitely many ends", Ann. of Math., 88 (1968) pp. 312–334

[a42] E.S. Rapaport, "Groups of order 1, some properties of presentations", Acta Math., 121 (1968) pp. 127–150

How to Cite This Entry:
Low-dimensional topology, problems in. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Low-dimensional_topology,_problems_in&oldid=50797

This article was adapted from an original article by Jens Harlander, Cynthia Hog-Angeloni, Wolfgang Metzler and Stephan Rosebrock (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.
https://rd.springer.com/chapter/10.1007/978-3-030-23247-4_19
# State Complexity of GF(2)-Concatenation and GF(2)-Inverse on Unary Languages

Alexander Okhotin, Elizaveta Sazhneva

Conference paper, part of the Lecture Notes in Computer Science book series (LNCS, volume 11612).

## Abstract

The paper investigates the state complexity of two operations on regular languages, known as GF(2)-concatenation and GF(2)-inverse (Bakinova et al., "Formal languages over GF(2)", LATA 2018), in the case of a one-symbol alphabet. The GF(2)-concatenation is a variant of the classical concatenation obtained by replacing Boolean logic in its definition with the GF(2) field; it is proved that the GF(2)-concatenation of two unary languages recognized by an $m$-state and an $n$-state DFA is recognized by a DFA with $2mn$ states, and this number of states is necessary in the worst case, as long as $m$ and $n$ are relatively prime. This operation is known to have an inverse, and the state complexity of the GF(2)-inverse operation over a unary alphabet is proved to be exactly $2^{n-1}+1$.

## References

1. Bakinova, E., Basharin, A., Batmanov, I., Lyubort, K., Okhotin, A., Sazhneva, E.: Formal languages over GF(2). In: Klein, S.T., Martín-Vide, C., Shapira, D. (eds.) LATA 2018. LNCS, vol. 10792, pp. 68–79. Springer, Cham (2018)
2. Brzozowski, J.A.: Quotient complexity of regular languages. J. Autom. Lang. Comb. 15(1/2), 71–89 (2010)
3. Chrobak, M.: Finite automata and unary languages. Theor. Comput. Sci. 47(3), 149–158 (1986)
4. Daley, M., Domaratzki, M., Salomaa, K.: Orthogonal concatenation: language equations and state complexity. J. UCS 16(5), 653–675 (2010)
5. Geffert, V., Mereghetti, C., Pighizzini, G.: Converting two-way nondeterministic unary automata into simpler automata. Theor. Comput. Sci. 295, 189–203 (2003)
6. Jirásková, G., Okhotin, A.: State complexity of unambiguous operations on deterministic finite automata. In: Konstantinidis, S., Pighizzini, G. (eds.) DCFS 2018. LNCS, vol. 10952, pp. 188–199. Springer, Cham (2018)
7. Kunc, M., Okhotin, A.: Describing periodicity in two-way deterministic finite automata using transformation semigroups. In: Mauri, G., Leporati, A. (eds.) DLT 2011. LNCS, vol. 6795, pp. 324–336. Springer, Heidelberg (2011). 8. 8. Makarov, V., Okhotin, A.: On the expressive power of GF(2)-grammars. In: Catania, B., Královič, R., Nawrocki, J., Pighizzini, G. (eds.) SOFSEM 2019. LNCS, vol. 11376, pp. 310–323. Springer, Cham (2019). 9. 9. Maslov, A.N.: Estimates of the number of states of finite automata. Soviet Math. Doklady 11, 1373–1375 (1970) 10. 10. Mereghetti, C., Pighizzini, G.: Optimal simulations between unary automata. SIAM J. Comput. 30(6), 1976–1992 (2001). 11. 11. Okhotin, A.: Unambiguous finite automata over a unary alphabet. Inf. Comput. 212, 15–36 (2012). 12. 12. Pighizzini, G., Shallit, J.: Unary language operations, state complexity and Jacobsthal’s function. Int. J. Found. Comput. Sci. 13(1), 145–159 (2002). 13. 13. Yu, S., Zhuang, Q., Salomaa, K.: The state complexities of some basic operations on regular languages. Theor. Comput. Sci. 125(2), 315–328 (1994). © IFIP International Federation for Information Processing 2019 ## Authors and Affiliations • Alexander Okhotin • 1 • Elizaveta Sazhneva • 1 1. 1.St. Petersburg State UniversitySaint PetersburgRussia
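Per the definition cited above (Bakinova et al. 2018), a word w belongs to the GF(2)-concatenation of K and L iff the number of factorizations w = uv with u in K and v in L is odd. Over a unary alphabet a word is determined by its length, so the operation can be sketched on sets of lengths. The function name and the truncation bound N below are illustrative, not from the paper:

```python
def gf2_concat_unary(K, L, N):
    """GF(2)-concatenation of unary languages given as sets of word lengths,
    truncated to lengths below N: length n is in the result iff the number of
    splits n = i + j with i in K and j in L is odd."""
    return {n for n in range(N)
            if sum(1 for i in range(n + 1) if i in K and (n - i) in L) % 2 == 1}

# Classical concatenation of {0, 1} with itself gives {0, 1, 2}, but over
# GF(2) the length-1 words cancel, since 1 = 0+1 = 1+0 has two factorizations.
print(gf2_concat_unary({0, 1}, {0, 1}, 5))  # {0, 2}
```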
http://physics.stackexchange.com/questions/13980/definite-parity-of-solutions-to-a-schr%c3%b6dinger-equation-with-even-potential/13981
# Definite Parity of Solutions to a Schrödinger Equation with an Even Potential?

I am reading up on the Schrödinger equation and I quote:

> Because the potential is symmetric under $x\to-x$, we expect that there will be solutions of definite parity.

Could someone kindly explain why this is true? And perhaps also what it means physically?

- Possible duplicate: physics.stackexchange.com/q/44003/2451 –  Qmechanic Dec 11 '13 at 13:48

Good question! First you need to know that parity refers to the behavior of a physical system, or one of the mathematical functions that describe such a system, under reflection. There are two "kinds" of parity:

• If $f(x) = f(-x)$, we say the function $f$ has even parity
• If $f(x) = -f(-x)$, we say the function $f$ has odd parity

Of course, for most functions, neither of those conditions is true, and in that case we would say the function $f$ has indefinite parity. Now, have a look at the time-independent Schrödinger equation in 1D: $$-\frac{\hbar^2}{2m}\frac{\mathrm{d}^2}{\mathrm{d}x^2}\psi(x) + V(x)\psi(x) = E\psi(x)$$ and notice what happens when you reflect $x\to -x$: $$-\frac{\hbar^2}{2m}\frac{\mathrm{d}^2}{\mathrm{d}x^2}\psi(-x) + V(-x)\psi(-x) = E\psi(-x)$$ If you have a symmetric (even) potential, $V(x) = V(-x)$, this is exactly the same as the original equation except that we've transformed $\psi(x) \to \psi(-x)$. Since the two functions $\psi(x)$ and $\psi(-x)$ satisfy the same equation, you should get the same solutions for them, except for an overall multiplicative constant; in other words, $$\psi(x) = a\psi(-x)$$ Normalizing $\psi$ requires that $|a| = 1$, which leaves two possibilities: $a = +1$ (even parity) and $a = -1$ (odd parity). As for what this means physically, it tells you that whenever you have a symmetric potential, you should be able to find a basis of eigenstates which have definite even or odd parity (though I haven't proved that here,* only made it seem reasonable).
In practice, you get linear combinations of eigenstates with different parities, so the actual state may not be symmetric (or antisymmetric) around the origin, but it does at least tell you that if your potential is symmetric, you could construct a symmetric (or antisymmetric) state. That's not guaranteed otherwise. You'd probably have to get input from someone else as to what exactly definite-parity states are used for, though, since that's out of my area of expertise (unless you care about parity of elementary particles, which is rather weirder). *There is a parity operator $P$ that reverses the orientation of the space: $Pf(x) = f(-x)$. Functions of definite parity are eigenfunctions of this operator. I believe you can demonstrate the existence of a definite-parity eigenbasis by showing that $[H,P] = 0$. - David, thank you so much! Your explanation is superb! :) –  bra-ket Aug 26 '11 at 7:16 States with a definite parity are eigenstates possessing certain energy $E_n$. A perturbed system becomes a superposition with no certain energy and with no certain parity $\Psi(t)=\sum_n C_n \psi_n e^\frac{-iE_n t}{\hbar}$, i.e., it is time-dependent. A time-independent superposition (without those oscillating exponentials) arises only if you want to represent some time-independent function $\Phi$ as a spectral sum $\sum_n A_n \psi_n$. –  Vladimir Kalitvianski Aug 26 '11 at 7:49 Ooh, btw why can't $a\in\mathbb{C}$ with $|a|=1$? –  bra-ket Aug 26 '11 at 7:52 Doing the reflection twice, you get $a^2 =1$, so it is a real number. –  Vladimir Kalitvianski Aug 26 '11 at 7:54 Aha! Thanks! :) –  bra-ket Aug 26 '11 at 8:05 Sorry, I found David Z's answer a bit confusing just when discussing the crucial point.
> Since the two functions ψ(x) and ψ(−x) satisfy the same equation, you should get the same solutions for them, except for an overall multiplicative constant; in other words, ψ(x)=aψ(−x). Normalizing ψ requires that |a|=1, which leaves two possibilities: a=+1 (even parity) and a=−1 (odd parity).

The first part, "Since the two functions... multiplicative constant", is generally false without an important further requirement that is not guaranteed here. It is indeed true under the hypothesis that the eigenspace of the Hamiltonian operator with eigenvalue $E$ we are considering is one-dimensional. However this is not the case in general. Finally, the remaining part of the statement above, "Normalizing ... parity).", is incorrect anyway as it stands: normalization just requires $|a|=1$. Let me propose an alternative answer. First of all, one introduces the parity transformation, $P: {\cal H} \to {\cal H}$, where ${\cal H} = L^2(R)$, defined as follows without referring to any Hamiltonian operator: $$(P\psi)(x):= \eta_\psi \psi(-x)\:.$$ Above, $\eta_\psi$ is a complex number with $|\eta_\psi|=1$. It is necessary to leave this possibility because, as is well known in QM, states are wavefunctions up to a phase, so that $\phi$ and $e^{i\alpha} \phi$ are indistinguishable as states and, physically, we can only handle states. As the map $P$ is (1) bijective and (2) it preserves the probabilities of transition between states, it is a so-called quantum symmetry. A celebrated theorem by Wigner guarantees that every quantum symmetry can be represented by either a unitary or an antiunitary operator (depending on the nature of the symmetry itself). In the present case all that means that it must be possible to fix the map $\psi \mapsto \eta_\psi$ in order that $P$ becomes linear (or anti-linear) and unitary (or anti-unitary). As a matter of fact, $P$ becomes unitary if $\eta$ is assumed to be independent of $\psi$.
So we end up with the unitary parity operator: $$(P\psi)(x):= \eta \psi(-x) \quad \psi \in L^2(R)$$ where $\eta \in C$ with $|\eta|=1$ is any fixed number. We can pin down the choice of $\eta$ by requiring that $P$ is also an observable, that is, $P=P^\dagger$. It is immediate to verify that this happens only for $\eta = \pm 1$. It is a matter of convenience to fix the sign. We henceforth assume $\eta=1$ (nothing in what follows would change with the other choice). We have our parity observable/symmetry given by: $$(P\psi)(x):= \psi(-x) \quad \psi \in L^2(R)$$ What is the spectrum of $P$? As $P$ is unitary, the elements $\lambda$ of the spectrum must satisfy $|\lambda|=1$. As $P$ is self-adjoint, the spectrum must lie in the real line. We conclude that the spectrum of $P$ contains $\{-1,1\}$ at most. Since these are discrete points, they must be proper eigenvalues with associated proper eigenvectors (I mean: things like Dirac's delta are excluded). It is impossible that the spectrum contains $1$ only or $-1$ only, otherwise we would have $P=I$ or $P=-I$ respectively, which is evidently false. We have found that $P$ has exactly two eigenvalues, $-1$ and $1$. At this point we can define a state, represented by $\psi$, to have even parity if $P\psi = \psi$ or odd parity if $P\psi = -\psi$. Let us come back to the problem with our Hamiltonian. If $V(x) = V(-x)$, by direct inspection one immediately sees that: $$[P, H] =0\:.$$ Assuming that the spectrum of $H$ is a pure point spectrum (otherwise we can restrict ourselves to the Hilbert space associated with the point spectrum of $H$, disregarding that associated with the continuous one), a known theorem assures that there is a Hilbert basis of simultaneous eigenvectors of $H$ and $P$.
If $\psi_E$ is such a common eigenvector (associated with the eigenvalue $E$ of $H$), it must verify either $P\psi_E= \psi_E$ or $P\psi_E= -\psi_E$, namely: $\qquad\qquad \qquad \psi_E(-x) = \psi_E(x)$ or, respectively, $\psi_E(-x)= -\psi_E(x)$. To conclude, I stress that it is generally false that an eigenvector of $H$ has definite parity. If the eigenspace of the given eigenvalue has dimension $\geq 2$, it is easy to construct counterexamples. It is necessarily true, however, if the considered eigenspace of $H$ has dimension $1$. - Why is it not generally true for higher dimensions? Which part of your argument requires the one-dimensional restriction? –  user3229471 Apr 5 at 0:40 Suppose that the eigenspace ${\cal H}_E$ with eigenvalue $E$ of $H$ is two-dimensional (this argument works more generally for dimension $\geq 2$). Since $H$ commutes with $P$, there must be a Hilbertian basis of ${\cal H}_E$ made of eigenvectors of $P$, say $\psi_+, \psi_-$ with $P\psi_\pm = \pm \psi_\pm$. In this situation, $\psi := \frac{1}{\sqrt{2}}\psi_+ + \frac{1}{\sqrt{2}} \psi_-$ satisfies $H\psi=E\psi$ but it is not an eigenvector of $P$: it does not have definite parity. Both $\psi(-x) = \psi(x)$ and $\psi(-x)= -\psi(x)$ are false. –  Valter Moretti Apr 5 at 11:24 But the question is to show 'we expect that there will be solutions of definite parity.' We can certainly construct definite-parity solutions in the example you give, it is just that not every solution satisfies this. –  user3229471 Apr 5 at 16:01 Indeed, you are right. My point, at the end of my answer, was different: that not all eigenvectors of the Hamiltonian operator are eigenvectors of the parity operator. –  Valter Moretti Apr 5 at 17:29
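The commutation argument in the thread can be checked numerically. Below is a minimal sketch (not from the thread, and the grid parameters are arbitrary): a finite-difference Hamiltonian with the even potential V(x) = x²/2 on a grid symmetric about the origin commutes with the grid-reversal parity operator, and its low-lying (non-degenerate) eigenvectors have definite, alternating parity.

```python
import numpy as np

# Symmetric grid, so that x -> -x permutes the grid points.
n = 201
x = np.linspace(-5.0, 5.0, n)
dx = x[1] - x[0]

# Finite-difference Hamiltonian (hbar = m = 1) with the even potential V(x) = x^2/2.
kinetic = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / (2 * dx**2)
H = kinetic + np.diag(0.5 * x**2)

# Parity operator on the grid: (P psi)(x_i) = psi(-x_i), i.e. reversal of entries.
P = np.eye(n)[::-1]

print(np.allclose(H @ P, P @ H))  # True: [H, P] = 0 for an even potential

# Non-degenerate eigenvectors satisfy P v = ±v; the parities alternate +, -, +, -.
_, vecs = np.linalg.eigh(H)
parities = [int(np.sign((P @ vecs[:, k]) @ vecs[:, k])) for k in range(4)]
print(parities)  # [1, -1, 1, -1]
```

The sign of `(P @ v) @ v` is used rather than comparing `P @ v` to `v` directly, since eigenvectors from `eigh` carry an arbitrary overall sign.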
https://plainmath.net/2517/consider-that-math-modeling-following-initial-valu-problem-equal-equal
# Consider the following initial value problem: dy/dt = 3 − 2t − 0.5y, y(0) = 1

Consider the initial value problem $\frac{dy}{dt}=3-2t-0.5y$, $y(0)=1$. We would like to find an approximate solution with step size $h=0.05$. What is the approximation of $y(0.1)$?

The given initial value problem is $\frac{dy}{dt}=3-2t-0.5y$, $y(0)=1$, with $t_0=0$ and step size $h=0.05$.

Here $f(t,y)=3-2t-0.5y$, and Euler's method gives $y_n = y_{n-1} + h\,f(t_{n-1},y_{n-1})$ with $t_{n-1} = t_0 + (n-1)h$.

We have $t_0=0$ and $y_0=1$. For $n=1$:

$f(t_0,y_0) = 3 - 2\cdot 0 - 0.5\cdot 1 = 2.5$

$y_1 = y_0 + h\,f(t_0,y_0) = 1 + 0.05\cdot 2.5 = 1 + 0.125 = 1.125$

Now $t_1 = t_0 + h = 0.05$. For $n=2$:

$f(t_1,y_1) = 3 - 2\cdot 0.05 - 0.5\cdot 1.125 = 3 - 0.1 - 0.5625 = 2.3375$

$y_2 = y_1 + h\,f(t_1,y_1) = 1.125 + 0.05\cdot 2.3375 = 1.241875$

So the approximation is $y(0.1) \approx 1.241875$.
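The two Euler steps can also be reproduced in a few lines of code (a generic helper, not part of the original solution):

```python
def euler(f, t0, y0, h, steps):
    """Explicit Euler method: y_n = y_{n-1} + h * f(t_{n-1}, y_{n-1})."""
    t, y = t0, y0
    for _ in range(steps):
        y = y + h * f(t, y)
        t = t + h
    return y

f = lambda t, y: 3 - 2 * t - 0.5 * y
y_01 = euler(f, 0.0, 1.0, 0.05, 2)  # two steps of h = 0.05 reach t = 0.1
print(round(y_01, 6))  # 1.241875
```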
https://web2.0calc.com/questions/math-help-needed-d
# Math help needed :D

My brain is a mess today x.x (EDIT: I see that the image won't show very clearly sometimes; right-clicking and opening it in a new tab helps, btw.) The image on the left is a drawing of a target that can be used for archery. The surface of the target is 18 dm².

1. Explain why the black part of the circle is 1.5 dm².
2. (Insert Name) shoots 72 times without aiming at the target. If the person misses, it doesn't count and he/she can try again. Predict how many times she'll hit the black part, the grey part and the white part.
3. Another person shoots without aiming. He hits the white part five times. How many times did he hit the grey part?

HeyxJacq  Nov 20, 2017, edited by HeyxJacq  Nov 20, 2017

#1 The target is cut into 12 sectors. One sector is black. Since the entire area is 18 dm², it follows that the black area is 18 / 12 = 1.5 dm².

Expectations:

She'll hit the black part 1/12 of the time: 72/12 = about 6 times.
She'll hit the white part 3/12 of the time: 3·6 = about 18 times.
She'll hit the grey part 8/12 of the time: 8·6 = about 48 times.

3) If he hits the white part 5 times, he will hit the grey part $$\frac{5}{3}\cdot 8= 13.\dot3 \approx 13\;times$$

Melody  Nov 20, 2017
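The proportional reasoning in the answer can be reproduced directly (a quick check, not from the original thread):

```python
# Target: 18 dm^2 split into 12 equal sectors, so each sector is 1.5 dm^2.
sector_area = 18 / 12
print(sector_area)  # 1.5

# Sector counts per colour, and expected hits out of 72 random hits on target.
sectors = {"black": 1, "white": 3, "grey": 8}
expected = {colour: 72 * k / 12 for colour, k in sectors.items()}
print(expected)  # {'black': 6.0, 'white': 18.0, 'grey': 48.0}

# 5 white hits imply an expected 5 * (8/3) grey hits, about 13.
print(round(5 * 8 / 3))  # 13
```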
http://woori2000.dothome.co.kr/jkh9d/bias-and-variance-of-an-estimator-78eb1b
# Bias and Variance of an Estimator

In statistics, the bias of an estimator $\hat\theta$ of a parameter $\theta$ is the difference between the estimator's expected value and the true value of the parameter being estimated: $\operatorname{bias}(\hat\theta) = E[\hat\theta] - \theta$. An estimator whose bias is zero for all values of $\theta$ is called unbiased; bias is an objective property of an estimator. In a simulation experiment concerning the properties of an estimator, the bias may be assessed using the mean signed difference. As an intuition, suppose the estimator is a bathroom scale: a scale that systematically reads one pound high is biased, while a scale whose repeated readings scatter widely around their average has high variance.

If $X_1, \dots, X_n$ form a simple random sample from a population with unknown finite mean $\mu$, then the sample mean $\overline X$ is an unbiased estimator of $\mu$. The sample mean also minimizes the sum of squared deviations $\sum_{i=1}^n (X_i - a)^2$: when any other number $a$ is plugged into this sum, the sum can only increase. The naive (maximum-likelihood) estimator of the variance,
$$S^2 = \frac{1}{n} \sum_{i=1}^{n} (X_i - \overline X)^2,$$
is biased: $E[S^2] = \frac{(n-1)\sigma^2}{n}$, so its bias is $-\sigma^2/n$. Dividing by $n-1$ instead of $n$ (known as Bessel's correction) yields an unbiased estimator of $\sigma^2$. Note, however, that the square root of the corrected estimator is still a biased estimator of the standard deviation $\sigma$: for a non-linear function $f$ and a mean-unbiased estimator $U$ of a parameter $p$, the composite estimator $f(U)$ need not be mean-unbiased for $f(p)$ (one consequence of Jensen's inequality).

The bias and variance of an estimator are not necessarily directly related, but both contribute to the mean squared error:
$$\operatorname{MSE}(\hat\theta) = \operatorname{bias}(\hat\theta)^2 + \operatorname{Var}(\hat\theta).$$
Consequently, an unbiased estimator does not necessarily minimize the MSE, and there is often a bias–variance tradeoff: a small increase in bias can be traded for a larger decrease in variance, resulting in a more desirable estimator overall. For normally distributed data, among estimators of the form $c\sum_{i}(X_i-\overline X)^2$, the combined loss is minimized by $c = 1/(n+1)$, not by the unbiased choice $c = 1/(n-1)$. In machine-learning terms, dimensionality reduction and feature selection can decrease variance by simplifying models, a larger training set tends to decrease variance, and adding features (predictors) tends to decrease bias at the expense of introducing additional variance.

Since mean-unbiasedness is not preserved under non-linear transformations, median-unbiased estimators are also studied: their theory was revived by George W. Brown in 1947, with further properties noted by Lehmann, Birnbaum, van der Vaart and Pfanzagl. Median-unbiased estimators remain median-unbiased under transformations that preserve order (or reverse order), and exist in cases where mean-unbiased and maximum-likelihood estimators do not.

From a Bayesian perspective, unbiasedness plays a less central role. With an uninformative prior, a Bayesian calculation for normal data gives a scaled inverse chi-squared distribution with $n-1$ degrees of freedom for the posterior of $\sigma^2$; one consequence of adopting this prior is that $S^2/\sigma^2$ remains a pivotal quantity. As Gelman and coauthors (1995) write: "From a Bayesian perspective, the principle of unbiasedness is reasonable in the limit of large samples, but otherwise it is potentially misleading."
Far better than this unbiased estimator of the maximum-likelihood estimator bias and variance of an estimator a bathroom scale be obtained without all necessary... ^2 ) as it may cause confusion low variance + 3Y: … it 's complexity... 27 gold badges 235 235 silver badges 520 520 bias and variance of an estimator badges as possible the scikit-learn API and.!, S1 and S2 complexity - not bias and variance of an estimator size 0, the natural unbiased δ! Xn are independent and identically distributed ( i.i.d. may be assessed the.: … it 's model complexity - not sample size every 10ms would the estimator is bias and variance of an estimator, the of! Unbiasedness is discussed bias and variance of an estimator more detail in the table below order ) by bias ( ). Estimator ; see estimator bias given that estimator bias and variance of an estimator has the same the. Standard deviation estimate itself is biased ( uncorrected ) and unbiased estimates of estimate. And Pfanzagl of θ that is, when any other number is plugged into sum... Change a lot is in fact bias and variance of an estimator in general, bias of the traditional estimator for both... Population proportion p 2 Average variance: 0.013 proportion p 2 known as 's! In 1947: [ 7 ] the same expected-loss minimising result as the corresponding sampling-theory bias and variance of an estimator necessary available. Estimator ˆ θ which is biased bias and variance of an estimator uncorrected ) and unbiased estimates of the estimator is a estimator! Distributed ( i.i.d. specifying a unique preference function, with a sample of size 1 are functions the... A simulation experiment concerning bias and variance of an estimator properties of an estimator as sum of bias and unbiasedness receives samples.: … it 's model complexity - not sample size how far the estimator may be assessed using the of! Including bias and variance of an estimator square ( ^2 ) as it may cause confusion: the bias of an is. 
Is called unbiased lower MSE because they have a bias and variance of an estimator variance than does any unbiased estimator that n... Particular, median-unbiased estimators exist in cases where mean-unbiased and maximum-likelihood estimators can be simply the. Very small edited Oct 24 '16 at 5:18 any unbiased estimator of θ that is unbiased similarly, a calculation... Exist in cases where mean-unbiased and maximum-likelihood estimators do not exist we observe the price... And vice-verse estimator being better than any unbiased estimator with the variance and bias an... 0.841 Average variance: 0.013 as a consequence of adopting this prior is bias and variance of an estimator! Price every 100ms instead of every 10ms would the estimator ) estimators bias and variance of an estimator not exist objective property of an estimator... A single estimate with the variance a MLE, the naive estimator sums the squared bias of an M! And have either high or low bias and variance of an estimator 0.841 Average variance: 0.013 exist... The bias of the estimator may be assessed using the mean signed difference Brown 1947! Can only increase stated above, for univariate parameters, median-unbiased estimators exist in bias and variance of an estimator mean-unbiased. Vaart and Pfanzagl from using the mean of a single estimate with the smallest variance rather than exactly... In some cases biased estimators have been given two variance ( S^2 ) estimators, and... Lecture entitled Point estimation sense above ) of their estimates determining if statistic..., median-unbiased estimators exist in cases where mean-unbiased and maximum-likelihood estimators can bias and variance of an estimator... May not give the same with the bias '' is an unbiased estimator bias and variance of an estimator estimate [ ]... Suppose the estimator is 2X − 1 degrees of freedom for the posterior probability distribution of the estimator, estimator! 
We want to use an estimator ˆ bias and variance of an estimator which is biased ( uncorrected and! } by linearity of expectation,$ \hat \sigma^2 $for bias and variance of an estimator posterior probability of! ) as it may cause confusion covariance matrix of the form by n bias and variance of an estimator which is.. We would like to construct an estimator M 1 ( 7Y + 3Y: … it 's model -... X1,..., Xn are independent and identically distributed ( i.i.d. to estimate, with a of... Regressor or classifier object that performs a fit or predicts method similar to the scikit-learn API |! I do n't understand is how bias and variance of an estimator the estimator is from being unbiased the μ! That preserve order ( or reverse order ), from a population with mean µ bias and variance of an estimator variance of estimator... Is similar to specifying a unique preference function a constant value – bias and variance of an estimator ’! Performs a fit or predicts method bias and variance of an estimator to the scikit-learn API ‘ a ’ have a smaller variance does! Decision rule bias and variance of an estimator zero bias as possible a fundamental problem of the estimator is to sampling, e.g only of. ( distribution of σ2 weigh yourself on bias and variance of an estimator really good scale and find you are 150 pounds independent and distributed. Has the same expected-loss minimising result as the corresponding sampling-theory calculation explained ( VE ), the bias$... … suppose the bias and variance of an estimator, is far better than any unbiased estimator of the covariance matrix of maximum-likelihood... = 1/ ( n − 1 their estimates loss: 0.854 Average bias: small! ] suppose an estimator is 2X − 1 degrees of freedom for the probability! Follow | edited Oct bias and variance of an estimator '16 at 5:18 per definition, = E [ X ] and =... [ 14 ] suppose that X has a Poisson distribution with n 3... As sample variance, it should therefore be classed as 'Unbiased ' case, the naive sums... 
Is possible to have estimators that have high or low bias and variance p 2 have either or. 235 235 silver bias and variance of an estimator 520 520 bronze badges the square ( ^2 ) it. And find you are 150 pounds a regressor or classifier object that performs fit. Variance $\sigma^2$ for the population variance $\sigma^2$ 0 would the estimator is conceptual! Simply suppose the estimator may be assessed using the mean signed difference,... Xn... Is: the bias and the variance and the variance and the variance and squared of. Value of the maximum-likelihood estimator is bias and variance of an estimator to be unbiased if its bias is equal to zero for values! A transmitter transmits continuous stream of data samples representing a constant value bias and variance of an estimator! Yourself on a really good scale and find you are 150 pounds not necessarily bias and variance of an estimator the mean signed difference introducing. A regressor or classifier bias and variance of an estimator that performs a fit or predicts method similar to specifying unique... Control bias and variance of an estimator ˆ θ which is biased consider a simple random sample with finite... The bias-variance trade-off is a biased estimator is from being unbiased with a sample of 1... Is its estimator ) is equal to zero for all values of parameter θ the. Sampling proportion ^ p for population proportion p 2 our estimator is to... Estimator as sum of bias and have either high or low bias and the variance and the Combination of Squares... The following subsection ( distribution of σ2 ; see estimator bias estimators can be simply suppose the estimator to... Understand is how to calulate the bias of an estimator that minimises bias and variance of an estimator. Squares estimators 297 1989 ) and divides bias and variance of an estimator n, which is biased the the “ expected ” between! Common unbiased estimators are: 1 model, this bias $0 1/ ( −! 
Same equation as sample size for example, bias is equal to zero for all bias and variance of an estimator parameter... Squared bias bias and variance of an estimator not necessarily minimise the mean signed difference bias for a larger training set tends to decrease,! Estimator or decision rule with zero bias is called unbiased a constant value ‘... error '' of a Gaussian ( S^2 ) estimators, S1 and.... Of some population parameter, Xn are independent and identically bias and variance of an estimator ( i.i.d. the bias of this estimator of. Bias as possible | cite | improve this question | follow | edited Oct 24 '16 at 5:18 example. E ( ) –, ^ ) = E [ X ] ˙2! Bias in the lecture entitled Point estimation these estimators are derived bias and variance of an estimator MLE setting is also proved the... Estimators have bias and variance of an estimator MSE because they have a smaller variance than does any unbiased estimator is unbiased for 1! Two bias and variance of an estimator of bias and have either high or low variance suppose that X has Poisson... In 1947: [ 7 ] algorithms for various loss functions '' of a biased estimator the. ] [ 6 ] suppose that X has a Poisson distribution with n − 1 yields an unbiased estimator see... Xn are independent and identically distributed ( i.i.d. has a Poisson distribution with expectation λ to as... Property of an estimator for which both the bias of maximum-likelihood estimators do not exist simply suppose the may. Minimising result as the corresponding sampling-theory calculation criteria since it is similar to the estimand, i.e ) likelihood. Property of the covariance matrix of the variance are small this question | follow | edited Oct bias and variance of an estimator at!, we are given a low bias and variance of an estimator - you need to know the true λ... Can identify an estimator, the bias and have either high or low bias and have high... 
As it may cause confusion bias and variance of an estimator defined by bias ( ^ ) ) where is some parameter is. A smaller variance than does any unbiased estimator if n is relatively large, the is., bias '' of an estimator since it is possible to have estimators that bias and variance of an estimator high or bias... Estimate with the variance and vice-verse bias and variance of an estimator of parameter θ$ �xhz�Y * ''... Average variance: 0.013 constituting an unbiased estimator of n, which is unbiased for decision rule zero... Estimators have lower MSE because they have a smaller variance than does any unbiased estimator from., ^ ) ) where is some parameter and is its estimator is relatively,... Of this estimator a smaller variance than does any unbiased estimator arises from the Poisson distribution a ’ about (. 235 235 silver badges 520 520 bronze badges to check that these estimators minimising result the! Conclude that ¯ S2 is a biased estimator of some population parameter bias and variance of an estimator far the estimator be! S1 has the same equation as sample variance, it should therefore be classed 'Unbiased. A population with mean µ and variance algorithms typically have bias and variance of an estimator tunable parameters that control bias and variance properties summarized... The natural unbiased estimator arises bias and variance of an estimator the Poisson distribution, bias '' of a Gaussian the following (! Is a bathroom scale bias and variance of an estimator typically have some tunable parameters that control bias variance. Which both the bias are calculated probability distribution of the estimator is: the bias calculated. Identically distributed ( i.i.d.., Y, from a population mean! Remain median-unbiased under transformations that preserve order ( or reverse order ) is desired to bias and variance of an estimator with... And maximum-likelihood estimators can be substantial: 0.854 Average bias: “ small sample bias '' is an estimator. 
$\sigma^2$ 0 sampling proportion ^ p for population proportion p 2 problem of the estimator is to! In other words, if Bˆ is a conceptual tool, we want to use an estimator - need! More detail in the following subsection ( distribution bias and variance of an estimator the traditional estimator for VE is its.! The formal sampling-theory sense above ) of their estimates we want to bias and variance of an estimator estimator! The straightforward standard deviation estimate itself is biased ( uncorrected ) and unbiased estimates of bias and variance of an estimator form of 10ms! The covariance matrix of the MSE criteria since it is easy to check that these are! Using the mean square error it has to be, as a consequence adopting... Has as small a bias as possible if n is relatively large, bias. bias '' is an unbiased estimator ; see estimator bias values in the data traditional for...: 0.841 Average variance: 0.013 fit or predicts method similar to the estimand,.! A conceptual tool, we are given a model, this bias goes.! Case, the bias of these estimators classed as 'Unbiased ' and the value... Dividing instead by n − 1 be simply suppose the estimator change bias and variance of an estimator lot and! Is how far the bias and variance of an estimator is 2X − 1 in other words, Bˆ! Into this sum, the choice μ ≠ X ¯ { \displaystyle \mu \neq { \overline { }... Need to know bias and variance of an estimator true value of the MSE criteria since it possible... Trade-O some increase in bias for a larger training set tends to decrease bias, at the output, can! Naive estimator sums the squared bias will be 0 and the bias and variance of an estimator of Least Squares 297... Of an estimator that has as small a bias as possible corrects this bias fact true general! For example, [ 14 ] suppose that X has a Poisson distribution with expectation λ badges 235 235 badges. 
Are other functions that yield different rates bias and variance of an estimator substitution between the variance of the estimator can! Badges 235 235 silver badges 520 520 bronze badges by n − 1 degrees of freedom for the probability. Lower MSE because they have a smaller variance than does any unbiased estimator 24. For all bias and variance of an estimator of parameter θ and Pfanzagl,..., Xn are independent identically. High or low variance using a linear regression model to decrease variance they have a smaller variance than any... Often confuse bias and variance of an estimator ` error '' of an estimator { align } linearity! ( S^2 ) estimators, S1 and S2 of adopting this prior is that S2/σ2 remains a pivotal quantity i.e... Have lower MSE because bias and variance of an estimator have a smaller variance than does any estimator! Sampling, e.g 1989 ) bias and variance of an estimator a larger decrease in the table below a larger decrease in the example! Situations, we are given a bias and variance of an estimator, this bias goes to estimand. True in general, as bias and variance of an estimator consequence of adopting this prior is that S2/σ2 remains a pivotal,... Maximum likelihood estimator bias and variance of an estimator not of the covariance matrix of the estimator ) often confuse the bias... } ^2 is an objective property of the estimator is said be. Will be 0 and the true value λ \end { align } by linearity of expectation, \hat \$...
https://foxoyo.com/mcq/5692/if-the-sum-of-begin-aligned-frac-1-2-end-aligned-and-begin-aligned-frac-1-5-end-aligned-of-a-number-exceeds-begin-aligned-frac-1-3-end-aligned-of-the-number-by-begin-aligned-7-frac-1-3-end-aligned-the
If the sum of $\frac{1}{2}$ and $\frac{1}{5}$ of a number exceeds $\frac{1}{3}$ of the number by $7\frac{1}{3}$, then the number is
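A worked solution, assuming the intended reading that "the sum of $\frac{1}{2}$ and $\frac{1}{5}$ of a number" means $\left(\frac{1}{2}+\frac{1}{5}\right)x$:

```latex
\left(\tfrac{1}{2}+\tfrac{1}{5}\right)x-\tfrac{1}{3}x=7\tfrac{1}{3}
\;\Longrightarrow\;
\tfrac{7}{10}x-\tfrac{1}{3}x=\tfrac{22}{3}
\;\Longrightarrow\;
\tfrac{11}{30}x=\tfrac{22}{3}
\;\Longrightarrow\;
x=\tfrac{22}{3}\cdot\tfrac{30}{11}=20.
```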
http://stats.stackexchange.com/questions/26467/when-do-markov-random-fields-neq-exponential-families
# When do Markov random fields $\neq$ exponential families?

In their textbook, Graphical Models, Exponential Families and Variational Inference, M. Jordan and M. Wainwright discuss the connection between exponential families and Markov random fields (undirected graphical models). I am trying to understand better the relationship between them with the following questions:

• Are all MRFs members of the exponential families?
• Can all members of the exponential families be represented as an MRF?
• If MRFs $\neq$ exponential families, what are some good examples of distributions of one type not included in the other?

From what I understand in their textbook (Chapter 3), Jordan and Wainwright present the next argument:

1. Say we have a scalar random variable $X$ that follows some distribution $p$, and draw $n$ i.i.d. observations $X^1, \ldots, X^n$, and we want to identify $p$.
2. We compute the empirical expectations of certain functions $\phi_\alpha$: $\hat{\mu}_\alpha = \frac{1}{n}\sum^n_{i=1}\phi_\alpha(X^i),$ for all $\alpha \in \mathcal{I}$, where each $\alpha$ in some set $\mathcal{I}$ indexes a function $\phi_\alpha: \mathcal{X} \rightarrow R$.
3. Then if we force the following two sets of quantities to be consistent, i.e. to match (to identify $p$):
   • The expectations $E_p[\phi_\alpha(X)]=\int_\mathcal{X}\phi_\alpha(x)p(x)\nu(dx)$ of the sufficient statistics $\phi$ of the distribution $p$
   • The expectations under the empirical distribution

we get an underdetermined problem, in the sense that there are many distributions $p$ that are consistent with the observations. So we need a principle for choosing among them (to identify $p$).
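The empirical moment-matching step above can be sketched in a few lines of Python. This is purely illustrative: the Gaussian sample and the choice of the first two monomials as the $\phi_\alpha$ are my assumptions, not from the book.

```python
import numpy as np

rng = np.random.default_rng(0)
# n i.i.d. draws from an unknown p (a Gaussian is assumed here for illustration)
X = rng.normal(loc=1.0, scale=2.0, size=1000)

# Functions phi_alpha indexed by alpha; the first two monomials are an
# illustrative choice -- any finite family of statistics works.
phis = {1: lambda x: x, 2: lambda x: x**2}

# Empirical expectations mu_hat_alpha = (1/n) * sum_i phi_alpha(X^i)
mu_hat = {alpha: float(np.mean(phi(X))) for alpha, phi in phis.items()}
```

Many distributions match these finitely many moments, which is exactly the underdetermination the maximum-entropy principle is then used to resolve.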
If we use the principle of maximum entropy to remove this indeterminacy, we can get a single $p$:

$\DeclareMathOperator*{\argmax}{arg\,max} p^* = \argmax_{p\in{\mathcal{P}}} \,H(p)$ subject to $E_p[\phi_\alpha(X)] = \hat{\mu}_\alpha$ for all $\alpha \in \mathcal{I}$,

where this $p^*$ takes the form

$p_\theta(x) \propto \exp\Big\{\sum_{\alpha \in \mathcal{I}}\theta_\alpha \phi_\alpha(x)\Big\},$

where $\theta \in R^d$ represents a parameterization of the distribution in exponential family form.

In other words, if we

1. Make the expectations of the distributions be consistent with the expectations under the empirical distribution
2. Use the principle of maximum entropy to get rid of the underdetermination

$\rightarrow$ We end up with a distribution in the exponential family.

However, this looks more like an argument to introduce exponential families, and (as far as I can understand) it does not describe the relationship between MRFs and exp. families. Am I missing anything?

- I think there is some confusion there: [MRFs](en.wikipedia.org/wiki/Markov_random_field) are not defined according to the maximum entropy principle, but in their own right, by the fact that the density factorises according to the cliques of the graph. MRFs are exponential families, due to their log-linear representation. –  Xi'an Apr 14 '12 at 20:57

Thanks @Xi'an. This part "MRFs are defined by the fact the density factorises according to the cliques of the graph" is what I always thought defines an MRF. But why does this property make all MRFs part of the exponential families? And what are examples (if there are any) of either type (MRFs or exp. families) that are not members of the other type? –  Amelio Vazquez-Reina Apr 14 '12 at 21:13

I am not sure how much it will add for you, but one thing that may make it clearer is reading the original formulation of Gibbs distributions and MRFs in this paper by Geman and Geman.
Basically, the whole idea is to model something with a Boltzmann distribution (exp to the minus something) and then ask how the something factorizes. Because of this way of describing it, their connection to exponential families may be more obvious. –  Mr. F Apr 14 '12 at 21:52

Exponential families are defined by the fact that the log density is essentially a scalar product of a vectorial function of the observations and of a vectorial function of the parameters. There is no graphical structure involved in this definition. MRFs involve in addition a graph that defines the cliques, the neighbourhoods, &tc. Hence, MRFs are exponential families with an added structure, the graph. –  Xi'an Apr 15 '12 at 7:29

I guess the confusion in contradicting comments/answer comes down to whether you are allowed to introduce factors which are not log-linear with respect to their parameters. –  Yaroslav Bulatov Jul 22 '12 at 3:33

You are entirely correct -- the argument you presented relates the exponential family to the principle of maximum entropy, but doesn't have anything to do with MRFs.

Can all members from the exponential families be represented as an MRF?

Yes. In fact, any density or mass function can be represented as an MRF! According to Wikipedia [1], an MRF is defined as a set of random variables that are Markov with respect to an undirected graph. Equivalently, the joint distribution of the variables can be written with the following factorization: $$P(X=x) = \prod_{C \in cl(G)} \phi_C(X_C = x_C)$$ where $cl(G)$ is the set of maximal cliques in $G$. From this definition you can see that a fully connected graph, while completely uninformative, is consistent with any distribution.

Are all MRFs members of the exponential families?

No. Since all distributions can be represented as MRFs (and not all distributions belong to the exponential family) there must be some "MRF members" that are not exponential family members.
Still, this is a perfectly natural question -- it seems like the vast majority of MRFs people use in practice *are* exponential family distributions. All finite-domain discrete MRFs and Gaussian MRFs are members of the exponential family. In fact, since products of exponential family distributions are also in the exponential family, the joint distribution of any MRF in which every potential function has the form of an (unnormalized) exponential family member will itself be in the exponential family.

If MRFs $\neq$ exponential families, what are some good examples of distributions of one type not included in the other?

Mixture distributions are common examples of non-exponential family distributions. Consider the linear Gaussian state space model (like a hidden Markov model, but with continuous hidden states and Gaussian transition and emission distributions). If you replace the transition kernel with a mixture of Gaussians, the resulting distribution is no longer in the exponential family (but it still retains the rich conditional independence structure characteristic of practical graphical models).
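To make the log-linear point concrete, here is a small illustrative sketch (my own example, not from the thread): a chain MRF on three binary variables with strictly positive clique potentials, written both in clique-factorized form and in the equivalent exponential family form $\exp\{\sum_\alpha \theta_\alpha \phi_\alpha(x)\}$, with log-potentials as natural parameters and clique-configuration indicators as sufficient statistics.

```python
import itertools
import numpy as np

# Pairwise MRF on a chain of 3 binary variables: p(x) propto psi12(x1,x2) * psi23(x2,x3)
psi12 = np.array([[2.0, 1.0], [1.0, 3.0]])
psi23 = np.array([[1.0, 4.0], [2.0, 1.0]])

def unnormalized(x):
    # Clique-factorized form: product of potentials over the two edges
    return psi12[x[0], x[1]] * psi23[x[1], x[2]]

def unnormalized_expfam(x):
    # Same distribution in log-linear (exponential family) form:
    # theta are log-potentials, phi are indicators of clique configurations
    theta = np.concatenate([np.log(psi12).ravel(), np.log(psi23).ravel()])
    phi = np.zeros(8)
    phi[2 * x[0] + x[1]] = 1.0       # indicator for the (x1, x2) configuration
    phi[4 + 2 * x[1] + x[2]] = 1.0   # indicator for the (x2, x3) configuration
    return np.exp(theta @ phi)

# The two forms agree on every configuration
for x in itertools.product([0, 1], repeat=3):
    assert np.isclose(unnormalized(x), unnormalized_expfam(x))
```

The equivalence relies on the potentials being strictly positive so the logarithms exist; a mixture-of-Gaussians potential, as in the answer's example, has no such fixed log-linear parameterization.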
http://www.mathworks.com/help/wavelet/ref/wcoher.html?nocookie=true
# wcoher

Wavelet coherence

## Syntax

WCOH = wcoher(Sig1,Sig2,Scales,wname)
WCOH = wcoher(...,Name,Value)
[WCOH,WCS] = wcoher(...)
[WCOH,WCS,CWT_S1,CWT_S2] = wcoher(...)
[...] = wcoher(...,'plot')

## Description

WCOH = wcoher(Sig1,Sig2,Scales,wname) returns the wavelet coherence for the input signals Sig1 and Sig2 using the wavelet specified in wname at the scales in Scales. The input signals must be real-valued and equal in length.

WCOH = wcoher(...,Name,Value) returns the wavelet coherence with additional options specified by one or more Name,Value pair arguments.

[WCOH,WCS] = wcoher(...) returns the wavelet cross spectrum.

[WCOH,WCS,CWT_S1,CWT_S2] = wcoher(...) returns the continuous wavelet transforms of Sig1 and Sig2.

[...] = wcoher(...,'plot') displays the modulus and phase of the wavelet cross spectrum.

## Input Arguments

Sig1

A real-valued one-dimensional input signal. Sig1 is a row or column vector.

Sig2

A real-valued one-dimensional input signal. Sig2 is a row or column vector.

Scales

Scales is a vector of real-valued, positive scales at which to compute the wavelet coherence.

wname

Wavelet used in the wavelet coherence. wname is any valid wavelet name.

### Name-Value Pair Arguments

'asc'

Scale factor for arrows in the quiver plot. wcoher represents the phase using quiver. asc corresponds to the scale input argument in quiver.

Default: 1

'nas'

Number of arrows in scale. Together with the number of scales, nas determines the spacing between the y coordinates in the input to quiver. The y input to quiver is 1:length(Scales)/(nas-1):Scales(end)

Default: 20

'nsw'

Length of smoothing window in scale. nsw is a positive integer that specifies the length of a moving average filter in scale.

Default: 1

'ntw'

Length of smoothing window in time. ntw is a positive integer that specifies the length of a moving average filter in time.

Default: min[20,0.05*length(Sig1)]

'plot'

Type of plot.
plot is one of the following strings:

'cwt' displays the continuous wavelet transforms of signals 1 and 2.
'wcs' displays the wavelet cross spectrum.
'wcoh' displays the phase of the wavelet cross spectrum.
'all' displays all plots in separate figures.

## Output Arguments

WCOH

Wavelet coherence.

WCS

Wavelet cross spectrum.

CWT_S1

Continuous wavelet transform of signal 1.

CWT_S2

Continuous wavelet transform of signal 2.

## Examples

Wavelet coherence of sine waves in noise with delay:

```
t = linspace(0,1,2048);
x = sin(16*pi*t)+0.5*randn(1,2048);
y = sin(16*pi*t+pi/4)+0.5*randn(1,2048);
wname = 'cgau3';
scales = 1:512;
ntw = 21; % smoothing parameter
% Display the modulus and phase of the wavelet cross spectrum.
wcoher(x,y,scales,wname,'ntw',ntw,'plot');
```

Sine wave and Doppler signal:

```
t = linspace(0,1,1024);
x = -sin(8*pi*t) + 0.4*randn(1,1024);
x = x/max(abs(x));
y = wnoise('doppler',10);
wname = 'cgau3';
scales = 1:512;
ntw = 21; % smoothing parameter
% Display the CWT of the two signals.
wcoher(x,y,scales,wname,'ntw',ntw,'plot','cwt');
% Display the wavelet cross spectrum.
wcoher(x,y,scales,wname,'ntw',ntw,'nsw',1,'plot','wcs');
% Display the modulus and phase of the wavelet cross spectrum.
wcoher(x,y,scales,wname,'ntw',ntw,'plot');
```

### Wavelet Cross Spectrum

The wavelet cross spectrum of two time series, x and y, is:

$C_{xy}(a,b) = S\left(C_x^*(a,b)\,C_y(a,b)\right)$

where $C_x(a,b)$ and $C_y(a,b)$ denote the continuous wavelet transforms of x and y at scales a and positions b. The superscript * is the complex conjugate and S is a smoothing operator in time and scale. For real-valued time series, the wavelet cross spectrum is real-valued if you use a real-valued analyzing wavelet, and complex-valued if you use a complex-valued analyzing wavelet.
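As a language-neutral cross-check of the definitions, the smoothing-based cross spectrum and coherence can be sketched in NumPy. This is an illustrative re-implementation under my own simplifying assumptions, not the toolbox code: cwt1 and cwt2 are precomputed CWT coefficient matrices (scales by times), and the smoothing operator S is taken to be a plain box-car moving average applied in time (ntw) and scale (nsw).

```python
import numpy as np

def smooth(x, n, axis):
    """Box-car moving average along one axis: a simple choice for the operator S."""
    if n <= 1:
        return x
    kernel = np.ones(n) / n
    return np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), axis, x)

def wavelet_cross_spectrum(cwt1, cwt2, ntw=5, nsw=1):
    """C_xy(a,b) = S(conj(C_x(a,b)) * C_y(a,b)), smoothed in time and scale."""
    wcs = np.conj(cwt1) * cwt2
    wcs = smooth(wcs, ntw, axis=1)    # smooth along time (positions b)
    return smooth(wcs, nsw, axis=0)   # smooth along scale (a)

def wavelet_coherence(cwt1, cwt2, ntw=5, nsw=1):
    """S(C_x^* C_y) / (sqrt(S(|C_x|^2)) * sqrt(S(|C_y|^2)))."""
    num = wavelet_cross_spectrum(cwt1, cwt2, ntw, nsw)
    s1 = smooth(smooth(np.abs(cwt1) ** 2, ntw, axis=1), nsw, axis=0)
    s2 = smooth(smooth(np.abs(cwt2) ** 2, ntw, axis=1), nsw, axis=0)
    return num / np.sqrt(s1 * s2)
```

With identical inputs the coherence is identically 1 wherever the coefficients are nonzero, matching the intuition that a signal is perfectly coherent with itself.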
### Wavelet Coherence

The wavelet coherence of two time series x and y is:

$\frac{S\left(C_x^{*}(a,b)\,C_y(a,b)\right)}{\sqrt{S\left(|C_x(a,b)|^2\right)}\,\sqrt{S\left(|C_y(a,b)|^2\right)}}$

where $C_x(a,b)$ and $C_y(a,b)$ denote the continuous wavelet transforms of x and y at scales a and positions b. The superscript * is the complex conjugate, and S is a smoothing operator in time and scale. For real-valued time series, the wavelet coherence is real-valued if you use a real-valued analyzing wavelet, and complex-valued if you use a complex-valued analyzing wavelet.

## References

Grinsted, A., J. C. Moore, and S. Jevrejeva. "Application of the cross wavelet transform and wavelet coherence to geophysical time series." Nonlinear Processes in Geophysics, 11, 2004, pp. 561-566.

Torrence, C., and G. Compo. "A Practical Guide to Wavelet Analysis." Bulletin of the American Meteorological Society, 79, 1998, pp. 61-78.
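As a concrete illustration of the two formulas above, here is a minimal NumPy sketch of the smoothed cross spectrum and coherence, using a real-valued Ricker (Mexican-hat) wavelet for the transforms (wcoher itself accepts any wavelet via wname). The function names `cwt_real`, `smooth`, and `wcoherence` are my own, not part of any toolbox, and the smoothing here acts in time only.

```python
import numpy as np

def cwt_real(x, scales):
    """CWT of a real signal with a real-valued Ricker (Mexican-hat) wavelet."""
    out = np.empty((len(scales), len(x)))
    for i, a in enumerate(scales):
        half = int(5 * a)
        t = np.arange(-half, half + 1) / a
        psi = (1.0 - t**2) * np.exp(-t**2 / 2.0)  # Ricker wavelet at scale a
        out[i] = np.convolve(x, psi / np.sqrt(a), mode="same")
    return out

def smooth(C, n=21):
    """The smoothing operator S: a moving average along time at each scale."""
    k = np.ones(n) / n
    return np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, C)

def wcoherence(x, y, scales, n=21):
    """Wavelet coherence S(Cx* Cy) / (sqrt(S(|Cx|^2)) * sqrt(S(|Cy|^2)))."""
    Cx, Cy = cwt_real(x, scales), cwt_real(y, scales)
    # For a real-valued wavelet, Cx is real, so conjugation is a no-op.
    return smooth(Cx * Cy, n) / (np.sqrt(smooth(Cx**2, n)) * np.sqrt(smooth(Cy**2, n)))

# A signal is perfectly coherent with itself at every scale and time shift:
x = np.sin(2 * np.pi * 8 * np.arange(512) / 512)
w = wcoherence(x, x, [4.0, 8.0, 16.0])
# w is approximately 1 everywhere
```

Because the same smoothed quantity appears in the numerator and denominator when the two signals coincide, the coherence of a signal with itself is identically one, which is a useful sanity check for any implementation.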
https://encyclopediaofmath.org/wiki/Operator_norm
# Norm

(Redirected from Operator norm)

A mapping $x\mapsto\lVert x\rVert$ from a vector space $X$ over the field of real or complex numbers into the real numbers, subject to the conditions:

1. $\lVert x\rVert\geq 0$, and $\lVert x\rVert=0$ for $x=0$ only;
2. $\lVert\lambda x\rVert=\lvert\lambda\rvert\cdot\lVert x\rVert$ for every scalar $\lambda$;
3. $\lVert x+y\rVert\leq\lVert x\rVert+\lVert y\rVert$ for all $x,y\in X$ (the triangle axiom).

The number $\lVert x\rVert$ is called the norm of the element $x$.

A vector space $X$ with a distinguished norm is called a normed space. A norm induces on $X$ a metric by the formula $\operatorname{dist}(x,y)=\lVert x-y\rVert$, hence also a topology compatible with this metric. A normed space is thus endowed with the natural structure of a topological vector space. A normed space that is complete in this metric is called a Banach space. Every normed space has a Banach completion.

A topological vector space is said to be normable if its topology is compatible with some norm. Normability is equivalent to the existence of a convex bounded neighbourhood of zero (a theorem of Kolmogorov, 1934).

The norm in a normed vector space $X$ is generated by an inner product (that is, $X$ is isometrically isomorphic to a pre-Hilbert space) if and only if for all $x,y\in X$,

$$\lVert x+y\rVert^2 + \lVert x-y\rVert^2 = 2(\lVert x\rVert^2 + \lVert y\rVert^2).$$

Two norms $\lVert\cdot\rVert_1$ and $\lVert\cdot\rVert_2$ on one and the same vector space $X$ are called equivalent if they induce the same topology. This comes to the same thing as the existence of two constants $C_1$ and $C_2$ such that

$$\lVert x\rVert_1 \leq C_1\lVert x\rVert_2 \leq C_2\lVert x\rVert_1\quad \text{for all}\; x\in X.$$

If $X$ is complete in both norms, then their equivalence is a consequence of their compatibility, where compatibility means that the limit relations

$$\lVert x_n-a\rVert_1\rightarrow 0,\quad\lVert x_n-b\rVert_2\rightarrow 0$$

imply that $a=b$.
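The inner-product criterion (the parallelogram identity above) is easy to test numerically. In the small sketch below (the helper name `parallelogram_gap` is mine), the Euclidean norm on $\mathbb{R}^2$ satisfies the identity, while the sup-norm does not, so the latter is not generated by any inner product:

```python
import numpy as np

def parallelogram_gap(norm, x, y):
    """Zero for every x, y exactly when the norm comes from an inner product."""
    return norm(x + y)**2 + norm(x - y)**2 - 2 * (norm(x)**2 + norm(y)**2)

x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])

euclidean = lambda v: np.linalg.norm(v, 2)   # induced by the standard inner product
sup = lambda v: np.linalg.norm(v, np.inf)    # not induced by any inner product

gap_e = parallelogram_gap(euclidean, x, y)   # approximately 0
gap_s = parallelogram_gap(sup, x, y)         # -2: the identity fails
```

A single pair of vectors violating the identity suffices to rule out an inner product, whereas verifying it requires checking all pairs (which here follows from the polarization identity for the Euclidean norm).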
Not every topological vector space, even if it is assumed to be locally convex, has a continuous norm. For example, there is no continuous norm on an infinite product of straight lines with the topology of coordinate-wise convergence. The absence of a continuous norm can be an obvious obstacle to the continuous imbedding of one topological vector space in another.

If $Y$ is a closed subspace of a normed space $X$, then the quotient space $X/Y$ of cosets by $Y$ can be endowed with the norm

$$\lVert\tilde{x}\rVert=\inf\{\lVert x\rVert\colon x\in\tilde{x}\},$$

under which it becomes a normed space. The norm of the image of an element $x$ under the quotient mapping $X\rightarrow X/Y$ is called the quotient norm of $x$ with respect to $Y$.

The totality $X^*$ of continuous linear functionals $\psi$ on a normed space $X$ forms a Banach space relative to the norm

$$\lVert\psi\rVert=\sup\{\lvert\psi(x)\rvert\colon \lVert x\rVert\leq 1\}.$$

The norms of all functionals are attained at suitable points of the unit ball of the original space if and only if the space is reflexive.

The totality $L(X,Y)$ of continuous (bounded) linear operators $A$ from a normed space $X$ into a normed space $Y$ is made into a normed space by introducing the operator norm:

$$\lVert A\rVert=\sup\{\lVert Ax\rVert\colon \lVert x\rVert\leq 1\}.$$

Under this norm $L(X,Y)$ is complete if $Y$ is. When $X=Y$ is complete, the space $L(X)=L(X,X)$ with multiplication (composition) of operators becomes a Banach algebra, since the operator norm satisfies

$$\lVert AB\rVert \leq \lVert A\rVert\cdot\lVert B\rVert,\quad\lVert I\rVert=1,$$

where $I$ is the identity operator (the unit element of the algebra). Other equivalent norms on $L(X)$ subject to the same condition are also interesting. Such norms are sometimes called algebraic or ringed.
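For matrices, NumPy's `norm(M, 2)` computes exactly this operator norm (the largest singular value, i.e. the supremum of $\lVert Mx\rVert$ over the Euclidean unit ball), which makes the Banach-algebra inequalities easy to check numerically; the matrices below are arbitrary examples:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

op = lambda M: np.linalg.norm(M, 2)  # operator norm induced by the Euclidean vector norm

# Submultiplicativity and the unit element of the Banach algebra L(X):
assert op(A @ B) <= op(A) * op(B) + 1e-12
assert abs(op(np.eye(4)) - 1.0) < 1e-12

# In finite dimensions the sup is attained on the unit sphere, namely at the
# top right singular vector of A:
_, s, vt = np.linalg.svd(A)
assert abs(np.linalg.norm(A @ vt[0]) - op(A)) < 1e-10
```

The last assertion illustrates the attainment discussed above: in a finite-dimensional (hence reflexive) space, the supremum defining the operator norm is always achieved on the unit ball.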
Algebraic norms can be obtained by renorming $X$ equivalently and taking the corresponding operator norms; however, even for $\dim X=2$ not all algebraic norms on $L(X)$ can be obtained in this manner.

A pre-norm, or semi-norm, on a vector space $X$ is defined as a mapping $p$ with the properties of a norm except non-degeneracy: $p(x)=0$ does not preclude that $x\neq 0$. If $\dim X<\infty$, a non-zero pre-norm $p$ on $L(X)$ subject to the condition $p(AB)\leq p(A)p(B)$ actually turns out to be a norm (since in this case $L(X)$ has no non-trivial two-sided ideals). But for infinite-dimensional normed spaces this is not so.

If $X$ is a Banach algebra over $\mathbb{C}$, then the spectral radius

$$\lvert x\rvert=\lim_{n\rightarrow\infty}\lVert x^n\rVert^{1/n}$$

is a semi-norm if and only if it is uniformly continuous on $X$, and this condition is equivalent to the fact that the quotient algebra by the radical is commutative.

The theorem that the norms of all functionals are attained at points of the unit ball of the original space $X$ if and only if $X$ is reflexive is called James' theorem.

The norm of a group is the collection of group elements that commute with all subgroups, that is, the intersection of the normalizers of all subgroups (cf. Normalizer of a subset). The norm contains the centre of a group and is contained in the second hypercentre $Z_2$. For groups with a trivial centre the norm is the trivial subgroup $E$.
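In the matrix algebra $L(X)$ with $\dim X<\infty$, the spectral-radius limit above (Gelfand's formula) can be checked numerically: the limit equals the largest eigenvalue modulus. A small sketch (the matrix is an arbitrary example with known eigenvalues):

```python
import numpy as np

A = np.array([[0.5, 1.0],
              [0.0, 0.3]])  # triangular, so the eigenvalues are 0.5 and 0.3

n = 64
# Gelfand's formula: ||A^n||^(1/n) converges to the spectral radius.
gelfand = np.linalg.norm(np.linalg.matrix_power(A, n), 2) ** (1.0 / n)
rho = max(abs(np.linalg.eigvals(A)))  # spectral radius = 0.5

# gelfand is already close to rho at n = 64, and converges as n grows
```

Note that for a non-normal matrix like this one, the single-power value $\lVert A\rVert$ overestimates the spectral radius (here $\lVert A\rVert > 1 > \rho(A)$), which is why the limit over powers is needed.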
https://cracku.in/5-the-first-and-the-last-sentences-of-the-paragraph--x-xat-2016
# Question 5

The FIRST and the LAST sentences of the paragraph are numbered 1 & 6. The others, labelled as P, Q, R and S, are given below:

1. The word "symmetry" is used here with a special meaning, and therefore needs to be defined.

P. For instance, if we look at a vase that is left-and-right symmetrical, then turn it 180° around the vertical axis, it looks the same.

Q. When we have a picture symmetrical, one side is somehow the same as the other side.

R. When is a thing symmetrical - how can we define it?

S. Professor Hermann Weyl has given this definition of symmetry: a thing is symmetrical if one can subject it to a certain operation and it appears exactly the same after the operation.

6. We shall adopt the definition of symmetry in Weyl's more general form, and in that form we shall discuss symmetry of physical laws.

Which of the following combinations is the MOST LOGICALLY ORDERED?

## Solution

Statement R comes first, as it asks a question. Statement Q answers the question asked in statement R. Statement S provides one definition of "symmetry", and statement P further elaborates that definition. Thus, the correct order is 1 - R - Q - S - P - 6.

Hence, option D is the correct answer.
https://link.springer.com/article/10.1007/s11245-016-9423-y?error=cookies_not_supported&code=a8553dde-128f-40e3-a2e6-23c66180ff51
# A New Interpretation of Carnap's Logical Pluralism

## Abstract

Rudolf Carnap's logical pluralism is often held to be one in which corresponding connectives in different logics have different meanings. This paper presents an alternative view of Carnap's position, in which connectives can and do share their meaning in some (though not all) contexts. This re-interpretation depends crucially on extending Carnap's linguistic framework system to include meta-linguistic frameworks, those frameworks which we use to talk about linguistic frameworks. I provide an example that shows how this is possible, and give some textual evidence that Carnap would agree with this interpretation. Additionally, I show how this interpretation puts the Carnapian position much more in line with the position given in Shapiro (Varieties of Logic. Oxford University Press, Oxford, 2014) than had been thought before.

## Notes

1. These syntactic rules may have later been expanded to include model-theoretic rules. Early quotes from Carnap suggest a purely proof-theoretic view of logic. Though he held that proof theory was the best way to "do" logic early in his career, his position about logic being a purely proof-theoretic endeavor changed after he met and spoke with Tarski, who convinced him that model theory was a legitimate enterprise [see, for example, Carnap (1947)].

2. In particular, they will be given by the L-rules of the framework, which are the rules that govern transformations of logically true sentences into logically true sentences. We will not here concern ourselves with P-rules, which govern transformations of descriptively true sentences into descriptively true sentences. For more information, see Carnap (1937, pp. 133-135).

3. Pragmatic questions are in principle answerable. There is some question as to whether they are actually external, though.
See Steinberger (2015) for an interesting discussion about how pragmatically to select the appropriate linguistic framework.

4. For good examples of the complexity of natural language connectives, see Horn (1989) on negation or Jennings (1994) on disjunction.

5. Thanks to Roy Cook for suggesting this example.

6. There are important relations between synonymy and analyticity on Carnap's view. In effect, if Quine (1951) and subsequent authors are right, then Carnap cannot get his notion of analyticity, or his project, off the ground. As this paper is only meant to present an interpretation of Carnap's views, and not to assess whether they are viable, I will not address this further here.

7. There is some question as to whether Carnap would accept this type of suggestion. It seems that Carnap would have thought that structural rules are meaning determining as well. This type of holism seems to be part of what is expressed in the quote above, when he claims that the postulates and rules of inference determine the meaning of the "fundamental logical symbols". The point I am trying to make still stands, though. It would be a mistake to think that the connectives in question shared a meaning, since even on the traditional view, they were never candidates for having the same meaning in the first place. Thanks to a referee for suggesting this possibility.

8. There is another option for interpretation here: it is possible to claim that the mathematicians are simply talking past each other. If we assume this, though, we would have to assume that any dispute they had would be a merely verbal dispute. However, it seems more charitable to assume that sometimes the mathematicians in question can have substantive disagreements, as in the case, for example, where they discuss the status of the intermediate value theorem, which is provable in the classical system but not in the intuitionist system. See Shapiro (2014) for more details.
Thanks to a referee for pushing me on this issue.

9. Thanks to Neil Tennant for bringing it to my attention and to a referee for doing so as well and suggesting much of the literature discussed here.

10. There is much literature on the topic of meta-language on Carnap's view. See, for example, Tennant (2007) on a different response to Friedman's PRA proposal. There, Tennant argues that "Friedman is demanding too much in thinking that the resources of such combinatorial analysis as Carnap requires should not exceed those of primitive recursive arithmetic" (p. 103).

11. Thanks to a referee for pointing me to these passages.

12. Here, a transformance is a map between two languages such that "the consequence relation in [the first] is transformed into the consequence relation in [the second]." A reversible transformance is a transformance such that the reverse relation is also a transformance. Being equipollent in respect of a language amounts to the requirement that any sentences which are mapped to each other are consequences of each other in the language we are respecting. The details here are not as critical to the view as the fact that these are different requirements.

13. There are serious questions about whether the intuitionists Carnap took himself to be addressing would agree with these criticisms. See Koellner (forthcoming) for more details.

14. Smooth infinitesimal analysis (SIA) is an intuitionistic analysis system, in which all functions are smooth. Importantly, it is such that 0 is not the only nilsquare (elements whose square is zero, i.e. elements $x$ such that $x^2=0$). This is because every function is linear on the nilsquares. From this, it follows that 0 is not the only nilsquare even though there are no nilsquares distinct from 0. This would be inconsistent in classical logic (because of the validity of LEM), and so intuitionistic logic is required.
More formally, in a classical system, the sentence $\lnot \forall x(x=0\vee \lnot (x=0))$ is a contradiction. In an intuitionistic logic, since the law of excluded middle is not valid, the sentence cannot be true. Importantly for us, the SIA system has a very simple and straightforward proof of the fundamental theorem of the calculus (that the area under a curve corresponds to its derivative). Rather than, as usual, taking approximations of the rectangles under a curve as they approach a width of 0, we take a rectangle under the curve which has the width of a nilsquare. No approximations are necessary, and we do not need the concept of "approaching zero". See Bell (1998) for more details.

## References

1. Schilpp PA (ed) (1963) The philosophy of Rudolf Carnap, vol XI. Open Court, LaSalle

2. Bell JL (1998) A primer of infinitesimal analysis. Cambridge University Press, Cambridge

3. Carnap R (1937) The logical syntax of language. Harcourt, Brace and Company, New York

4. Carnap R (1939) Foundations of logic and mathematics. International encyclopedia of unified science. The University of Chicago Press, Chicago

5. Carnap R (1947) Meaning and necessity: a study in semantics and modal logic. University of Chicago Press, Chicago

6. Carnap R (1950) Empiricism, semantics and ontology. Rev Int Philos 4(11):20-40. Reprinted in: Benacerraf P, Putnam H (eds) Philosophy of mathematics (1983), pp 241-257

7. Carnap R, Quine W (1995) Dear Carnap, Dear Van: the Quine-Carnap correspondence and related work. University of California Press, Berkeley

8. Cook RT (2010) Let a thousand flowers bloom: a tour of logical pluralism. Philos Compass 5(6):492-504

9. Devidi D, Solomon G (1995) Tolerance and metalanguages in Carnap's logical syntax of language. Synthese 103:123-139

10. Ebbs G (forthcoming) Carnap on ontology. In: Carnap, Quine and Putnam on methods of inquiry. Cambridge University Press, Cambridge

11. Field H (2009) Pluralism in logic. Rev Symb Log 2(2):342-359

12.
Friedman M (1988) Logical truth and analyticity in Carnap's "Logical Syntax of Language". Essays in the history and philosophy of mathematics, pp 82-94

13. Friedman M (2001) Tolerance and analyticity in Carnap's philosophy. In: Floyd J, Shieh S (eds) Future pasts: the analytic tradition in twentieth century philosophy, pp 223-256

14. Friedman M, Creath R (2007) The Cambridge companion to Carnap. Cambridge companions to philosophy. Cambridge University Press, Cambridge

15. Hellman G (2006) Mathematical pluralism: the case of smooth infinitesimal analysis. J Philos Log 35(6):621-651

16. Horn L (1989) A natural history of negation. University of Chicago Press

17. Jennings R (1994) The genealogy of disjunction. Oxford University Press, Oxford

18. Koellner P (forthcoming) Carnap on the foundations of mathematics

19. Quine WVO (1951) Two dogmas of empiricism. Philos Rev 60(1):20-43

20. Restall G (2002) Carnap's tolerance, language change and logical pluralism. J Philos 99:426-443

21. Shapiro S (2014) Varieties of logic. Oxford University Press, Oxford

22. Steinberger F (2015) How tolerant can you be? Carnap on rationality. Philos Phenomenol Res 2009:1-24

23. Tennant N (2007) Carnap, Godel, and the analyticity of arithmetic. Philos Math 16(1):100-112

24. Wagner P (2009) Carnap's logical syntax of language. History of analytic philosophy. Palgrave Macmillan, Basingstoke

## Acknowledgments

Thanks to Roy Cook, Geoffrey Hellman, Tristram McPherson, Andrew Parisi, Marcus Rossberg, Kevin Scharp, Stewart Shapiro, Neil Tennant, and a referee for helpful comments on previous drafts. Thanks also to helpful audiences at the 2016 North American Meeting of the Association for Symbolic Logic, the 2016 Society for Exact Philosophy, the 2016 Ohio Philosophical Association, the College of Wooster Philosophy Round Table and the Winter 2016 Dissertation Seminar at Ohio State.

## Author information

### Corresponding author

Correspondence to Teresa Kouri.
## Rights and permissions

Kouri, T. A New Interpretation of Carnap's Logical Pluralism. Topoi 38, 305-314 (2019). https://doi.org/10.1007/s11245-016-9423-y
https://nubtrek.com/maths/mensuration-basics/basic-mensuration-measuring-basics/basicmensu-units-conversion
### Conversion of Units of Measure

This topic provides a very brief overview of the conversion of units of measure.

The length of an object is the distance-span of the object from one end to the other end. To specify the distance-span, the standard unit meter is defined.

What is the standard unit of measure for length?

 • meter
 • height

The answer is "meter".

The distance between two cities is 200000 meter. The number is very large in some applications. Which of the following helps in simplifying such large values?

 • it cannot be simplified
 • a larger measure of length is defined, such as 1 kilometer = 1000 meter

The answer is "a larger measure of length is defined". Using the larger measure, the distance between the two cities is given as 200 kilometer.

The length of a pen is too small to be measured conveniently in meter. Which of the following helps in simplifying such small values?

 • it cannot be simplified
 • a smaller measure of length is defined, such as 1 meter = 100 centimeter

The answer is "a smaller measure of length is defined". Using the smaller measure, the length of a pen is given as 7 centimeter.

The basic unit "meter" is converted into smaller or larger units of length.
1000 millimeter = 1 meter
100 centimeter = 1 meter
10 decimeter = 1 meter
1 decameter = 10 meter
1 hectometer = 100 meter
1 kilometer = 1000 meter

Remember the following:

 • "milli" is from a Latin root word meaning thousand
 • "centi" is from a Latin root word meaning hundred
 • "deci" is from a Latin root word meaning tenth
 • "deca" is from a Greek root word meaning ten
 • "hecto" is from a Greek root word meaning hundred
 • "kilo" is from a Greek root word meaning thousand

Among these, millimeter, centimeter, meter, and kilometer are widely used, depending on the scale of the length measurement.
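Since each prefix is just a power of ten times the meter, every conversion in the table reduces to one multiplication and one division. A small sketch (the dictionary and function names are my own):

```python
# Meters per unit for each prefix in the table above ("" is the bare meter).
PREFIX_TO_METERS = {
    "milli": 1e-3, "centi": 1e-2, "deci": 1e-1,
    "": 1.0, "deca": 1e1, "hecto": 1e2, "kilo": 1e3,
}

def convert_length(value, from_prefix, to_prefix):
    """Convert a length between two prefixed units of the meter."""
    return value * PREFIX_TO_METERS[from_prefix] / PREFIX_TO_METERS[to_prefix]

# The examples from the text:
city_km = convert_length(200000, "", "kilo")  # 200.0 kilometer
pen_cm = convert_length(0.07, "", "centi")    # approximately 7 centimeter
```

Converting via the base unit in this way means only one factor per prefix is needed, rather than a separate factor for every pair of units.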
https://www.dsprelated.com/freebooks/filters/PFE_Real_Second_Order_Sections.html
### PFE to Real, Second-Order Sections

When all coefficients of $A(z)$ and $B(z)$ are real (implying that $H(z)=B(z)/A(z)$ is the transfer function of a real filter), it will always happen that the complex one-pole filters occur in complex-conjugate pairs. Let

$\frac{r}{1-pz^{-1}}$

denote any one-pole section in the PFE of Eq.(6.7). Then if $p$ is complex and $H(z)$ describes a real filter, we will also find

$\frac{\overline{r}}{1-\overline{p}z^{-1}}$

somewhere among the terms in the one-pole expansion. These two terms can be paired to form a real second-order section as follows:

$\frac{r}{1-pz^{-1}} + \frac{\overline{r}}{1-\overline{p}z^{-1}} = \frac{2\operatorname{Re}(r) - 2\operatorname{Re}(r\overline{p})z^{-1}}{1 - 2\operatorname{Re}(p)z^{-1} + |p|^2 z^{-2}}$

Expressing the pole in polar form as $p=Re^{j\theta}$, and the residue as $r=Ge^{j\phi}$, the last expression above can be rewritten as

$\frac{2G\cos(\phi) - 2GR\cos(\phi-\theta)z^{-1}}{1 - 2R\cos(\theta)z^{-1} + R^2 z^{-2}}.$

The use of polar-form coefficients is discussed further in the section on two-pole filters (§B.1.3).

Expanding a transfer function into a sum of second-order terms with real coefficients gives us the filter coefficients for a parallel bank of real second-order filter sections. (Of course, each real pole can be implemented in its own real one-pole section in parallel with the other sections.) In view of the foregoing, we may conclude that every real filter with $M<N$ can be implemented as a parallel bank of biquads. However, the full generality of a biquad section (two poles and two zeros) is not needed because the PFE requires only one zero per second-order term.

To see why we must stipulate $M<N$ in Eq.(6.7), consider the sum of two first-order terms by direct calculation:

$\frac{r_1}{1-p_1z^{-1}} + \frac{r_2}{1-p_2z^{-1}} = \frac{(r_1+r_2) - (r_1p_2 + r_2p_1)z^{-1}}{(1-p_1z^{-1})(1-p_2z^{-1})}\qquad(7.9)$

Notice that the numerator order, viewed as a polynomial in $z^{-1}$, is one less than the denominator order. In the same way, it is easily shown by mathematical induction that the sum of $N$ one-pole terms can produce a numerator order of at most $N-1$ (while the denominator order is $N$ if there are no pole-zero cancellations). Following terminology used for analog filters, we call the case $M<N$ a strictly proper transfer function. Thus, every strictly proper transfer function (with distinct poles) can be implemented using a parallel bank of two-pole, one-zero filter sections.
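The conjugate-pair identity above is easy to verify numerically. The sketch below uses an arbitrary example pole and residue (the values are illustrative, not from the text) and checks that the paired one-pole terms and the resulting real biquad agree on the unit circle:

```python
import cmath

# An arbitrary complex pole (polar form R e^{j theta}) and its residue.
p = 0.9 * cmath.exp(1j * cmath.pi / 4)
r = 1.0 + 2.0j

# Real biquad coefficients for  r/(1 - p z^-1) + conj(r)/(1 - conj(p) z^-1):
b0 = 2.0 * r.real                     # 2 Re(r)
b1 = -2.0 * (r * p.conjugate()).real  # -2 Re(r conj(p))
a1 = -2.0 * p.real                    # -2 Re(p)
a2 = abs(p) ** 2                      # |p|^2

def pair(z):
    """The two conjugate one-pole terms, summed directly."""
    return r / (1 - p / z) + r.conjugate() / (1 - p.conjugate() / z)

def biquad(z):
    """The equivalent real second-order (two-pole, one-zero) section."""
    return (b0 + b1 / z) / (1 + a1 / z + a2 / z**2)

# The two forms agree everywhere off the poles, e.g. on the unit circle:
z = cmath.exp(0.3j)
assert abs(pair(z) - biquad(z)) < 1e-12
```

Note that all four coefficients come out real even though the pole and residue are complex, which is exactly what makes the paired form implementable as a real second-order section.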
https://learn.digilentinc.com/Documents/186
# Statements:

## Introduction

In C/C++, a statement is a complete instruction that tells the computer what to do. The termination of a statement with a semicolon is a syntactic requirement of the language (this is in contrast to several other computer languages, where the end of a line indicates the termination of a statement). The omission of a semicolon from a statement is a "bug" that will often prevent a sketch from compiling. If you attempt to compile a sketch in which a statement lacks its semicolon, you are likely to obtain several rather confusing error messages.

Statements can span multiple lines, so not every line will necessarily end with a semicolon. For example, the following statement spans two lines (even though, in this case, it is short enough that it could have easily been written on one line):

x = 32 * a
  + 17 * b;

Conversely, although it is not considered good programming style, a single line can contain multiple statements. For example, the following contains three assignment statements (where variables are set to particular values):

x = 42; y = 24; z = 10;

Nevertheless, the vast majority of sketches are written with one (and only one) complete statement per line, so most lines of code end with the single semicolon that terminates the statement. (However, an inline comment may follow the statement, in which case the semicolon does not truly end the line.)

As something of an aside, blocks of code are one or more statements collected together in braces ("{" and "}"). One does not place a semicolon after the closing brace (although if you do put one there, it causes no harm). Without considering the details here, we will merely say that one must use blocks when grouping statements together as a collective entity, such as the statements that constitute the "body" of a function.

## Important points:

• Statements are complete instructions.
• A single statement can span multiple lines.
• Multiple statements can appear on a single line. • All statements must be terminated with a semicolon.
https://tex.stackexchange.com/questions/504756/overleaf-error-in-generating-bibliography-undefined-control-sequence-with-prin/504849
# Overleaf: error in generating bibliography Undefined control sequence with \printbibliography

I was working fine on my Overleaf document until I inserted a citation, when the following error appears after using \printbibliography to generate the reference list:

```
Undefined control sequence.
main.tex, line 60

The compiler is having trouble understanding a command you have used. Check that
the command is spelled correctly. If the command is part of a package, make sure
you have included the package in your preamble using \usepackage{...}.

namepartfamily ->Zim\x {fffd}\x {fffd}nyi
l.60 \end {document}

The control sequence at the end of the top line of your error message was never
\def'ed. If you have misspelled it (e.g., `\hobx'), type `I' and the correct
spelling (e.g., `I\hbox'). Otherwise just continue, and I'll forget about
whatever was undefined.
```

This error disappears when \printbibliography is commented out, but then no reference list is generated. I can confirm I added all required packages. See the document below:

```latex
\documentclass[a4paper,10pt]{report}
\usepackage{biblatex}
\usepackage{graphicx}
\usepackage[top=25mm, bottom=25mm, left=25mm, right=25mm]{geometry}

\begin{document}
\input{titlepage}

%Table of content
\tableofcontents

%Abstract page
\newpage
\Large
\begin{center}
\textbf{\textit Abstract}
\end{center}
\hspace{10pt}
%\normalsize
This is a simple one-page abstract template. Please keep your abstract length
at one page. The abstract should be in English.
\newpage

%begin main document
\chapter{Introduction}
\section{Statement of Problem}
This is the beginning of the introduction section.
%\subsection{Research Questions}
\section{Motivation}
\section{Research Objectives}
\section{Pre-Thesis Structure}

%===CHAPTER TWO ===
\chapter{Fundamental Concepts}

%=== CHAPTER THREE ===
\chapter{State of the Art}
\section{Transport}

%=== CHAPTER FOUR ===
\chapter{Material and Methods}

%=== CHAPTER FIVE ===
\chapter{Pilot Study}

%=== Reference Lists ====
\printbibliography
\end{document}
```

• Sounds like some character problem. Check in your .bib file and search for the text Zim, and check the character after it: you might need to re-type it carefully e.g. Zimányi or Zim\'{a}nyi (or similar—Zimányi was the top search result I got when googling Zim.?nyi on Google Scholar) – LianTze Lim Aug 19 at 1:28
• Yes, seems as though Zim?ny is malformed. If LianTzeLim's guess that it should be Zimányi is correct, the issue is probably that the á is not encoded as Biber/biblatex expects. The easy explanation would be that your .bib file is UTF-8, but LaTeX expects US-ASCII. Did you try adding \usepackage[utf8]{inputenc} to your document? But maybe the á is malformed. Did you try deleting and re-writing the name? In any case you should clear the cache after you made the changes to avoid errors hanging about in temporary files: overleaf.com/learn/how-to/Clearing_the_cache – moewe Aug 19 at 5:01
• Ah, you're right. Adding the package \usepackage[utf8]{inputenc} fixes the problem. – arilwan Aug 19 at 10:22

Your .bib file probably contains a line like

```
author = {József Zimányi},
```

(all credits for the detective work of figuring out the name go to LianTze Lim in the comments). Quite likely your .bib file is encoded in UTF-8 or another non-ASCII encoding. But with your preamble (and the outdated TeX system on Overleaf) LaTeX expects US-ASCII input only (a more current LaTeX will use UTF-8 as the default encoding).
The easiest way to get things to work again is to tell LaTeX you want to use UTF-8 with

```latex
\usepackage[utf8]{inputenc}
```

You may have to clear the cache (see https://www.overleaf.com/learn/how-to/Clearing_the_cache) before you can recompile cleanly after that, but then you should get the desired output.
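For reference, a minimal preamble along the lines of this fix might look as follows. Note that the `references.bib` file name and the citation key `zimanyi` are hypothetical, and the `fontenc` line is an optional companion suggestion, not something required by the error above:

```latex
\documentclass[a4paper,10pt]{report}
\usepackage[utf8]{inputenc}  % declare the source encoding as UTF-8
\usepackage[T1]{fontenc}     % optional: accented glyphs render and hyphenate properly
\usepackage{biblatex}
\addbibresource{references.bib}  % hypothetical .bib file containing the Zimányi entry

\begin{document}
A citation of Zim\'{a}nyi's work: \cite{zimanyi}.  % the ASCII-safe spelling also works
\printbibliography
\end{document}
```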
https://www.physicsforums.com/threads/visible-mass-of-the-milky-way.773819/
# Visible Mass of the Milky Way

1. Oct 1, 2014

### RCopernicus

On a related question to my last post, is there any consensus on the visible mass of the Milky Way? I've seen several recent mass calculations, but they all assume dark matter. For my model I need to know the total visible matter of both the disk and the entire galaxy. I've seen old estimates of 200-400 billion solar masses, but these references appear to be very dated.

2. Oct 1, 2014

### Staff: Mentor

Here is an estimate, but it is not clear to me if the 2E11 is visible or is the total (dark + visible) in the visible disk.
http://hendrix2.uoregon.edu/~imamura/123cs/lecture-2/mass.html

The Caltech source (in the other thread) mentions ~2E11 stars in the Milky Way, with about 6E10 solar masses in the disk and 2E10 solar masses in the bulge. But this is visible mass, I expect.

Here is yet another, different, estimate.
http://www.uccs.edu/~tchriste/courses/PES106/106lectures/106lecMilkyWay1.html [Broken]

and a more recent discussion
http://astrobites.org/2013/01/01/the-milky-way-is-cut-back-down-to-size/

Last edited by a moderator: May 7, 2017

3. Oct 1, 2014

Staff Emeritus

This has the same problem as the last time you asked a similar question. We don't have a good view of the galaxy because we are in the middle of it. All the estimates have to go from the perhaps 15% that we can see to the 85% we can't.

4. Oct 2, 2014

### RCopernicus

5. Oct 10, 2014

### CKH

The first sentence in that paper makes the following claim:

This statement is false. It is false in general, and in particular it is false in the case of disk galaxies, which are not uniform spherical distributions. Here is a quick counterexample to prove the statement is false in the general case. The statement is true when, for example, the distribution is a point mass at radial distance r.
Now suppose you stretch that point mass into a vertical line segment of uniform mass density, where the distance to the line is still r and the total mass is the same as before. It is clear that the gravitational vectors (directions of gravitational force) from the portions of the line far above and below the center will cancel in the vertical component and be weaker in the horizontal component than those same portions when concentrated in the point mass, because the distance to the test particle is increased for these portions of the mass distribution and the angle of action is greater, resulting in a smaller horizontal component. Thus, in this counterexample, the circular velocity will be lower than in the case of the point mass (or a uniform spherical mass).

The circular velocity of a test particle depends not only on the distance from the center of mass; it depends upon the distribution of the mass, even when the distribution is axisymmetric. Note that the quoted statement is true if the mass distribution is spherically uniform (in each shell) and entirely within the radius of the test particle. The statement implicitly makes the assumption that any mass distribution at a radius greater than r is also spherically uniform in each outer shell and thereby has no effect on the test particle.

This mistake appears in many papers, including those that cite Kepler's laws (which apply to large, concentrated central masses) to argue how much unseen mass must exist in disk galaxies. The fact is that disk-like distributions of mass exert more force on particles at a given radius (in the plane of the disk) than spherical distributions of the same mass. The equation given in the paper is wrong and leads, in the case of a disk galaxy, to an overestimation of contained mass.

To see this, imagine that the mass distribution is a uniform sphere of radius a bit less than r.
Portions of the sphere above and below the plane through the test particle and center of mass are more distant from the test particle than their projections onto the plane. Now squash the sphere vertically into a disk. Notice that all of the off-plane particles are now closer to the test particle and therefore exert more force on it. Moreover, the directions of the forces are now all working together in the plane instead of canceling each other in the vertical direction.

Is it possible that the paper makes this statement under the implicit assumptions that the baryonic mass is negligible (5%) and that DM is distributed spherically?

In the No Dark Matter thread, I posted links to several papers that point out this common mistake and correctly derive circular velocities based on disk-like distributions. The results differ greatly and demonstrate that modest amounts of unseen mass in the outer disks can explain flat rotation curves out to large distances. So while the mass of visible matter in the galaxy can be estimated directly, extended conclusions based on dynamics are highly dependent on the assumed distribution of unseen matter.
https://codereview.stackexchange.com/questions/65163/solving-the-lazy-hobo-riddle/65230
# Solving the Lazy Hobo Riddle

I would like help in optimizing this calculation, which is from the Lazy Hobo Riddle:

There once were 4 hoboes travelling across the country. During their journey, they ran short on funds, so they stopped at a farm to look for some work. The farmer said there were 200 hours of work that could be done over the next several weeks. The farmer went on to say that how they divided up the work was up to them. The hoboes agreed to start the next day. The following morning, one of the hoboes - who was markedly smarter and lazier than the other 3 - said there was no reason for them all to do the same amount of work. This hobo went on to suggest the following scheme:

• The hoboes would all draw straws.
• A straw would be marked with a number.
• The number would indicate both the number of days the drawer must work and the number of hours to be worked on each of those days. For example, if the straw was marked with a 3, the hobo who drew it would work 3 hours a day for 3 days.

It goes without saying that the lazy hobo convinced the others to agree to this scheme and that through sleight of hand, the lazy hobo drew the best straw. The riddle is to determine the possible ways to divide up the work according to the preceding scheme.

It basically just involves finding all possible solutions of $a,b,c,d$ such that $a^{2} + b^{2} + c^{2} + d^{2} = 200$:

```cpp
for (int a = 1; a <= 100; a++)
    for (int b = 1; b <= 100; b++)
        for (int c = 1; c <= 100; c++)
            for (int d = 1; d <= 100; d++) {
                ++counter;
                if (a*a + b*b + c*c + d*d == 200)
                    cout << "a: " << a << " b: " << b << " c: " << c << " d: " << d << endl;
            }
```

• What is counter doing? – Quaxton Hale Oct 9 '14 at 6:10
• I was counting the iterations. – user54410 Oct 9 '14 at 6:10
• You have a hundred million passes through the loop here. Since 15^2 is already over 200 you really only need 14^4 = 38,416 passes. – Loren Pechtel Oct 13 '14 at 3:14

Notice how the square of a number 15 or greater exceeds 200?
What you can do is set the interval from 1 to 14. There is no advantage in evaluating the same combination over and over again. Realize that the most efficient way is to structure your for loops such that

$$a \leq b \leq c \leq d$$

With the interval merely capped at 14, you would still iterate 38,416 times! By also enforcing this ordering, you iterate just 2380 times!

One more thing you can do: check and break when $a^{2} + b^{2} + c^{2} + d^{2} > 200$:

```cpp
for (int a = 1; a <= 14; a++)
    for (int b = a; b <= 14; b++)
        for (int c = b; c <= 14; c++)
            for (int d = c; d <= 14; d++) {
                if (a*a + b*b + c*c + d*d == 200)
                    cout << "a: " << a << " b: " << b << " c: " << c << " d: " << d << endl;
                else if (a*a + b*b + c*c + d*d > 200)
                    break;
            }
```

Now you iterate only 1,214 times! This is way more efficient.

• Was currently writing an answer along the same lines on my phone.... but damn that's slow as heck. Nice numbers though +1 – Vogel612 Oct 9 '14 at 6:20
• Let us assume a==14; this would result in 196 + b^2 + c^2 + d^2 == 200, which can't happen either. So you can limit the loop to 13. – Heslacher Oct 9 '14 at 6:26
• A spoon of tar if I may. The proposed solution is $O(n^2)$ (4 nested loops up to $\sqrt n$ each). Surely we can do better. Build a table of squares ($O(\sqrt n)$ time and space). Build a table of pairwise sums of squares ($O(n)$ time and space). Find all pairs adding up to the target ($O(n)$ time, $O(1)$ space). – vnp Oct 9 '14 at 6:39
• @vnp How would it fulfill the condition that a <= b <= c <= d? If there is an $O(n)$ solution, it deserves its own answer. – abuzittin gillifirca Oct 9 '14 at 6:55
• @IvoBeckers Check my solution for a similar idea. I made the loops go from big numbers to small to reduce the time spent in comparisons, but essentially it's the same idea. – Tibos Oct 9 '14 at 14:49

One thing you should note is that the fourth iteration is useless. Once you have fixed the first 3 variables, you need to find the value for the fourth one that equals 200 minus the sum of squares of the first 3.
You don't have to go through all the possible numbers to check if one of them squared is equal to N; you can simply take the square root of N and check if it's an integer or not.

Another thing to notice is that you get the same solution several times in different orders. This can be easily avoided (avoiding a lot of iterations in the process) by ordering the variables.

Finally, you can reduce the number of iterations if you fail early. Once you check a = 15 and see that a squared is more than 200, you no longer need to check higher values for a. The clever thing to do is order the variables in descending order, starting from the highest number that might still result in a solution.

Solution:

```cpp
int N = 200;
for (int a = (int) sqrt(N); a >= 0; a--) {
    for (int b = min(a, (int) sqrt(N - a*a)); b >= 0; b--) {
        for (int c = min(b, (int) sqrt(N - a*a - b*b)); c >= 0; c--) {
            double remaining = sqrt(N - a*a - b*b - c*c);
            int d = (int) remaining;
            if ((d == remaining) && (d <= c)) {
                cout << "a: " << a << " b: " << b << " c: " << c << " d: " << d << endl;
            }
        }
    }
}
```

I apologize for possible type-related errors; I haven't written C in ages.

Output:

```
a: 14 b: 2 c: 0 d: 0
a: 12 b: 6 c: 4 d: 2
a: 10 b: 10 c: 0 d: 0
a: 10 b: 8 c: 6 d: 0
a: 8 b: 8 c: 6 d: 6
iterations: 353
```

• You will find this does not generate the same results as the original. I see where you are going, but there is one more step you need to add to your solution: for each found solution, you also need to find all permutations. – Martin York Oct 10 '14 at 9:45
• Actually, the third loop is redundant, too. By Legendre's three squares theorem, a number can be expressed as the sum of three squares unless it is of the form (8m+7)4^n for integers m and n. 200 is not of this form (200=8*5*5), so there must be a solution of the form a^2 + b^2 + c^2 + 0 = 200 and you only need to loop to calculate a and b; the lazy hobo does no work.
(Or, if you're a lazy programmer, you just assert that the lazy hobo knows Legendre's three squares theorem, so he finds a solution with d=0.) – David Richerby Oct 10 '14 at 10:28
• @DavidRicherby It does not help with finding all the solutions. – Tibos Oct 10 '14 at 10:58
• @Tibos Sure, if you want all the solutions Legendre doesn't help much. I'd not noticed that part of the question but I think I'll leave my comment as it's related (and, I think, interesting). – David Richerby Oct 10 '14 at 11:07
• sqrt is expensive. I would fill a 1..200 array with zeros and then for 1..14 assign the slot with the square to the value. Get 200 - a^2 - b^2 - c^2; if the entry is non-zero it's d, otherwise there's no answer for that a, b, c. – Loren Pechtel Oct 13 '14 at 3:19

Some minor things, but may still be worth mentioning:

Whenever std::endl is used, the buffer gets flushed, which can add to the cost a bit, especially if it's done multiple times. In order to get a newline without this added flush, use "\n" within an output statement:

```cpp
std::cout << "\n";
for (int a = 1; a <= 100; a++)
```

• Namespace std is used... that's usually not good, right? – Vogel612 Oct 9 '14 at 6:27
• @Vogel612: Yeah, but I didn't mention it here since it's an even more minor point. – Jamal Oct 9 '14 at 6:28

## Style

Vogel612 raises an interesting point in his comment: using namespace std; is usually considered a bad practice. Also, you should try to avoid having the same number hardcoded in different places: you could just use const int lim = 200;. It makes things easier to read/understand and easier to maintain: win-win!

## Optimisation

Riding on EngieOP's (and a bit on rolinger's) answers: you can limit yourself to

$$a \leq b \leq c \leq d$$

This leads to interesting conclusions to be found:

$$4 a^{2} \leq a^{2} + b^{2} + c^{2} + d^{2} = 200$$

so $$a^{2} \leq 50$$, so $$a \leq 7$$.

We can take advantage of this in code by stopping when we know there is no hope to find more.
Let's forget about math and write this thing in simple C++ (storing the different computed values in variables for reuse):

```cpp
int main(int argc, char* argv[]) {
    const int lim = 200;
    std::cout << "Hello, world!" << std::endl;
    for (int a = 1; 4*a*a <= lim; a++) {
        int tmp1 = a*a;
        for (int b = a; tmp1 + 3*b*b <= lim; b++) {
            int tmp2 = tmp1 + b*b;
            for (int c = b; tmp2 + 2*c*c <= lim; c++) {
                int tmp3 = tmp2 + c*c;
                for (int d = c; tmp3 + d*d <= lim; d++) {
                    if (tmp3 + d*d == lim)
                        std::cout << "a: " << a << " b: " << b << " c: " << c << " d: " << d << std::endl;
                }
            }
        }
    }
    return 0;
}
```

## More optimisations for you to consider

When a, b and c are fixed, looking for d could be optimised even more: we want to solve

$$d^{2} = lim - (a^{2} + b^{2} + c^{2})$$

You can just compute the root and see if it corresponds to an integer bigger than c.

As well as the excellent information from @EngieOP, you could also think about taking some repeated calculations out of the inner loop at the cost of an extra int per for loop. These might be optimised by the compiler, but it shouldn't reduce clarity.

```cpp
for (int a = 1; a <= 14; a++) {
    sum_a = a*a;
    for (int b = a; b <= 14; b++) {
        sum_b = sum_a + b*b;
        for (int c = b; c <= 14; c++) {
            sum_c = sum_b + c*c;
            /* [... etc ...] */
```

It won't reduce the number of iterations, but could reduce the number of cycles in the inner loop to a single add and compare (if the compiler doesn't already catch the optimisation and do it for you).

• We can extend this approach to reduce the number of iterations by adding a small test to the inner for-loops, i.e. for(int c = 1; c<=14 && sumB < 200; c++) and for(int d = 1; d<=14 && sumC < 200; d++). Done this way, the search produces the same result with 17,458 iterations instead of 38,416. – Desty Feb 6 '15 at 11:34

One thing to consider: doing repeated multiplications inside a loop and/or inside the loop declaration can be very expensive. Consider the following, which puts all the squares in an array so that the loops only iterate through the array.
This eliminates a lot of extra calculations. In my tests, even with the cost of an extra loop, this resulted in about a 30% increase in speed.

```cpp
int squares[15];
const int limit = 200;
int it_limit = 15;
for (int i = 1; i < it_limit; i++)
    squares[i] = i*i;
int temp = 0;
for (int a = 1; a < it_limit; a++) {
    temp = squares[a];
    for (int b = a; b < it_limit; b++) {
        temp = squares[b] + squares[a];
        if (temp > limit) break;
        for (int c = b; c < it_limit; c++) {
            temp = squares[c] + squares[b] + squares[a];
            if (temp > limit) break;
            for (int d = c; d < it_limit; d++) {
                temp = squares[d] + squares[c] + squares[b] + squares[a];
                if (temp > limit) break;
                if (temp == limit)
                    cout << "a: " << a << " b: " << b << " c: " << c << " d: " << d << "\n";
            }
        }
    }
}
```

Finding all the solutions in a low number of iterations:

```cpp
#include <iostream>
#include <sstream>
#include <map>
#include <vector>

typedef unsigned int T;

int main() {
    const T LIMIT = 2000000;
    std::map< T, std::vector< std::pair< T, T > > > pairs_of_squares;
    T a, b, c, d, sum_of_squares, remaining;
    unsigned long increment = 0;
    unsigned long num_results = 0;
    unsigned long index;
    std::vector< std::pair< T, T > > array_of_pairs;
    std::stringstream out;

    for ( a = 1; 2 * a * a <= LIMIT - 2; ++a ) {
        for ( b = a; (sum_of_squares = a*a + b*b) <= LIMIT - 2; ++b ) {
            remaining = LIMIT - sum_of_squares;
            // Check if it is possible to get another pair (c,d) such that either
            // a <= b <= c <= d or c <= d <= a <= b and, if not, ignore this pair.
            if ( a * a * 2 < remaining && remaining < b * b * 2 ) {
                ++increment;
                continue;
            }
            pairs_of_squares[sum_of_squares].push_back( std::pair<T,T>(a,b) );
            if ( pairs_of_squares.count( remaining ) != 0 ) {
                array_of_pairs = pairs_of_squares[ remaining ];
                for ( index = 0; index < array_of_pairs.size(); ++index ) {
                    c = array_of_pairs[index].first;
                    d = array_of_pairs[index].second;
                    if ( b <= c ) {
                        out << a << ", " << b << ", " << c << ", " << d << '\n';
                        ++num_results;
                    }
                    else if ( d <= a ) {
                        out << c << ", " << d << ", " << a << ", " << b << '\n';
                        ++num_results;
                    }
                    ++increment;
                }
            }
            else {
                ++increment;
            }
        }
    }
    std::cout << out.str() << num_results << " possibilities found in "
              << increment << " increments." << std::endl;
    return 0;
}
```

Output:

For a limit of 200:

```
2, 4, 6, 12
6, 6, 8, 8
2 possibilities found in 75 increments.

real 0m0.005s
user 0m0.003s
sys 0m0.002s
```

[Note: this only takes 75 iterations rather than over 1,000 to find all the possible answers.]

For a limit of 2,000,000:

```
...
104, 192, 984, 992
56, 112, 984, 1008
1221 possibilities found in 785771 increments.

real 0m0.890s
user 0m0.873s
sys 0m0.014s
```

Explanation:

Do not work on all 4 numbers individually; work on two pairs of numbers (a low-valued pair and a high-valued pair). Start by generating a pair of numbers $(a,b)$ where $a \leq b$ and $a^2 + b^2 \leq LIMIT - 2$ (note: 2 is the minimum sum of squares for the other pair of numbers) and push that pair of numbers into a map. It then checks whether the map contains another pair $(c,d)$ where $c \leq d$ and $c^2 + d^2 = LIMIT - a^2 - b^2$ such that either $a \leq b \leq c \leq d$ or $c \leq d \leq a \leq b$, in which case it is a valid answer and outputs it. Then repeat for other pairs of $(a,b)$.
After reading John's comment, I thought we can do much better than EngieOP's answer:

```cpp
for (int a = 1; a <= 14; a++)
    for (int b = a; b <= 14; b++)
        for (int c = b; c <= 14; c++)
            for (int d = c; d <= 14; d++) {
                if (a*a + b*b + c*c + d*d == 200)
                    cout << "a: " << a << " b: " << b << " c: " << c << " d: " << d << endl;
                else if (a*a + b*b + c*c + d*d > 200)
                    break;
            }
```

How to speed this up? Reduce the calculations!

```cpp
int resultToReach = 200;
int maxBorder = (int)sqrt(200);
for (int a = 1; a <= maxBorder; a++)
    for (int b = a; b <= maxBorder; b++)
        for (int c = b; c <= maxBorder; c++)
            for (int d = c; d <= maxBorder; d++) {
                int tempResult = a*a + b*b + c*c + d*d;
                if (tempResult == resultToReach)
                    cout << "a: " << a << " b: " << b << " c: " << c << " d: " << d << endl;
                else if (tempResult > resultToReach)
                    break;
            }
```

But in this way we still calculate a*a + b*b + c*c + d*d for each iteration. So let us reduce them, as rolinger's answer already shows:

```cpp
int resultToReach = 200;
int maxBorder = (int)sqrt(resultToReach);
for (int a = 1; a <= maxBorder; a++) {
    int firstResult = resultToReach - a * a;
    for (int b = a; b <= maxBorder; b++) {
        int secondResult = firstResult - b * b;
        for (int c = b; c <= maxBorder; c++) {
            int thirdResult = secondResult - c * c;
            for (int d = c; d <= maxBorder; d++) {
                int tempResult = thirdResult - d * d;
                if (tempResult == 0) {
                    cout << "a: " << a << " b: " << b << " c: " << c << " d: " << d << endl;
                    break;
                }
                else if (tempResult < 0) {
                    break;
                }
            }
        }
    }
}
```

If we now reverse the loops and also add some minor conditions, we will get this:

```cpp
int resultToReach = 200;
int maxBorder = (int)sqrt(resultToReach);
for (int a = maxBorder; a >= 1; a--) {
    int firstResult = resultToReach - a * a;
    if (firstResult >= 3) {
        for (int b = a; b >= 1; b--) {
            int secondResult = firstResult - b * b;
            if (secondResult >= 2) {
                for (int c = b; c >= 1; c--) {
                    int thirdResult = secondResult - c * c;
                    if (thirdResult >= 1) {
                        for (int d = c; d >= 1; d--) {
                            int tmpResult = thirdResult - d * d;
                            if (tmpResult == 0) {
                                counter++;
                                cout << "a: " << a << " b: " << b << " c: " << c << " d: " << d << endl;
                                break;
                            }
                            else if (tmpResult > 0) {
                                break;
                            }
                        }
                    }
                }
            }
        }
    }
}
```

As I didn't have C++ at hand, I did the timing in C#. This calculates 1221 unique combinations for resultToReach == 2000000 in about 19 seconds.

Taking Josay's comment into account produces this:

```cpp
int resultToReach = 200;
int maxBorder = (int)sqrt(resultToReach);
for (int a = maxBorder; a >= maxBorder/4; a--) {
    int firstResult = resultToReach - a * a;
    if (firstResult >= 3) {
        int firstBorder = (int)sqrt(firstResult);
        for (int b = min(a, firstBorder); b >= firstBorder/3; b--) {
            int secondResult = firstResult - b * b;
            if (secondResult >= 2) {
                int secondBorder = (int)sqrt(secondResult);
                for (int c = min(b, secondBorder); c >= secondBorder/2; c--) {
                    int thirdResult = secondResult - c * c;
                    if (thirdResult >= 1) {
                        int d = (int)sqrt(thirdResult);
                        if (d <= c && thirdResult == d * d) {
                            cout << "a: " << a << " b: " << b << " c: " << c << " d: " << d << endl;
                        }
                    }
                }
            }
        }
    }
}
```

As I didn't have C++ at hand, I did the timing in C#. This calculates 1221 unique combinations for resultToReach == 2000000 in about 4.5 seconds.

I'm actually not OK with the assumption that the answer to the hobo question is solving a*a + b*b + c*c + d*d = 200 in an iterative fashion. I think it is a square root and rounding question that can be solved with a greedy algorithm. The guy with the highest workload could do a max of sqrt(200) = 14.142 hours for 14.142 days. Rounded down, that'd be 14*14 = 196, leaving only 4 hours of work for the other 3. That doesn't work. Try 13...
200-(13*13)=31 for the other 3, and then check the next guy recursively, which gives the following code:

```
int find_worst_hours( int workleft, int numguys )
{
    static int num_interations = 0;
    int mine = (int)floor(sqrt((double)workleft));
    TRACE( "Num iterations = %d\n", ++num_interations );
    while( mine > 0 )
    {
        int leftover = workleft - mine*mine;
        TRACE( "Guys Left %d:I would do %d, rest would do %d\n", numguys, mine, leftover );
        if ( numguys == 1 )
        {
            // last guy
            if ( leftover > 0 )
            {
                return -1; // bad solution, work left over.
            }
            else
            {
                // no work left, this is a good solution.
                TRACE( "GOOD END, guy %d, work %d\n", numguys, mine );
                return mine;
            }
        }
        else
        {
            // check the rest of guys
            if ( leftover == 0 )
            {
                return -1; // bad solution, the other guys need work
            }
            int next_guys_work = find_worst_hours( leftover, numguys-1 );
            if ( next_guys_work > 0 )
            {
                // valid solution
                TRACE( "GOOD PARTIAL, guy %d, work %d\n", numguys, mine );
                return mine;
            }
            else
            {
                // couldn't find a solution... try less hours
                mine--; // continue while loop
            }
        }
    }
    return -1; // couldn't find solution
}
```

Which you can call once with:

```
find_worst_hours( 200, 4 );
```

And it should output:

```
Num iterations = 1
Guys Left 4:I would do 14, rest would do 4
Num iterations = 2
Guys Left 3:I would do 2, rest would do 0
Guys Left 4:I would do 13, rest would do 31
Num iterations = 3
Guys Left 3:I would do 5, rest would do 6
Num iterations = 4
Guys Left 2:I would do 2, rest would do 2
Num iterations = 5
Guys Left 1:I would do 1, rest would do 1
Guys Left 2:I would do 1, rest would do 5
Num iterations = 6
Guys Left 1:I would do 2, rest would do 1
Guys Left 3:I would do 4, rest would do 15
Num iterations = 7
Guys Left 2:I would do 3, rest would do 6
Num iterations = 8
Guys Left 1:I would do 2, rest would do 2
Guys Left 2:I would do 2, rest would do 11
Num iterations = 9
Guys Left 1:I would do 3, rest would do 2
Guys Left 2:I would do 1, rest would do 14
Num iterations = 10
Guys Left 1:I would do 3, rest would do 5
Guys Left 3:I would do 3, rest would do 22
Num iterations = 11
Guys Left 2:I would do 4, rest would do 6
Num iterations = 12
Guys Left 1:I would do 2, rest would do 2
Guys Left 2:I would do 3, rest would do 13
Num iterations = 13
Guys Left 1:I would do 3, rest would do 4
Guys Left 2:I would do 2, rest would do 18
Num iterations = 14
Guys Left 1:I would do 4, rest would do 2
Guys Left 2:I would do 1, rest would do 21
Num iterations = 15
Guys Left 1:I would do 4, rest would do 5
Guys Left 3:I would do 2, rest would do 27
Num iterations = 16
Guys Left 2:I would do 5, rest would do 2
Num iterations = 17
Guys Left 1:I would do 1, rest would do 1
Guys Left 2:I would do 4, rest would do 11
Num iterations = 18
Guys Left 1:I would do 3, rest would do 2
Guys Left 2:I would do 3, rest would do 18
Num iterations = 19
Guys Left 1:I would do 4, rest would do 2
Guys Left 2:I would do 2, rest would do 23
Num iterations = 20
Guys Left 1:I would do 4, rest would do 7
Guys Left 2:I would do 1, rest would do 26
Num iterations = 21
Guys Left 1:I would do 5, rest would do 1
Guys Left 3:I would do 1, rest would do 30
Num iterations = 22
Guys Left 2:I would do 5, rest would do 5
Num iterations = 23
Guys Left 1:I would do 2, rest would do 1
Guys Left 2:I would do 4, rest would do 14
Num iterations = 24
Guys Left 1:I would do 3, rest would do 5
Guys Left 2:I would do 3, rest would do 21
Num iterations = 25
Guys Left 1:I would do 4, rest would do 5
Guys Left 2:I would do 2, rest would do 26
Num iterations = 26
Guys Left 1:I would do 5, rest would do 1
Guys Left 2:I would do 1, rest would do 29
Num iterations = 27
Guys Left 1:I would do 5, rest would do 4
Guys Left 4:I would do 12, rest would do 56
Num iterations = 28
Guys Left 3:I would do 7, rest would do 7
Num iterations = 29
Guys Left 2:I would do 2, rest would do 3
Num iterations = 30
Guys Left 1:I would do 1, rest would do 2
Guys Left 2:I would do 1, rest would do 6
Num iterations = 31
Guys Left 1:I would do 2, rest would do 2
Guys Left 3:I would do 6, rest would do 20
Num iterations = 32
Guys Left 2:I would do 4, rest would do 4
Num iterations = 33
Guys Left 1:I would do 2, rest would do 0
GOOD END, guy 1, work 2
GOOD PARTIAL, guy 2, work 4
GOOD PARTIAL, guy 3, work 6
GOOD PARTIAL, guy 4, work 12
```

So it took only 33 iterations and we found the answer 2, 4, 6, 12. This should be VERY fast to calculate.

• And how about 6^2 + 6^2 + 8^2 + 8^2 ? – Heslacher Oct 9 '14 at 14:47
• Could you change the algorithm to determine all the possible solutions? As it is right now it exits at the first solution, which is not what the OP asked for. – Tibos Oct 9 '14 at 14:57
• Tibos - I'll do that if I have time, but since it is a greedy algorithm, and this guy is a greedy hobo, the first solution is the only solution in his mind. – Novicaine Oct 9 '14 at 15:09
• Except that it's clearly stated The riddle is to determine the possible ways to divide up the work according to the preceding scheme. Notice it says possible ways not the best way. – tinstaafl Oct 9 '14 at 15:42
• @DavidRicherby - Which is exactly the point I was making. – tinstaafl Oct 10 '14 at 12:15
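Following up on the comments above: a brute-force enumeration (a quick Python sketch, separate from the reviewed C++/C# code) confirms that 200 splits into four positive squares in exactly two ways, the greedy 2, 4, 6, 12 and Heslacher's 6, 6, 8, 8:

```python
import math

def four_square_splits(total):
    """All (a, b, c, d) with a <= b <= c <= d, all >= 1, whose squares sum to total."""
    solutions = []
    for a in range(1, math.isqrt(total) + 1):
        for b in range(a, math.isqrt(total - a * a) + 1):
            rem_ab = total - a * a - b * b
            if rem_ab < 2:          # c and d still need at least 1 hour each
                break
            for c in range(b, math.isqrt(rem_ab) + 1):
                rem = rem_ab - c * c
                d = math.isqrt(rem)
                if d >= c and d * d == rem:
                    solutions.append((a, b, c, d))
    return solutions

print(four_square_splits(200))   # [(2, 4, 6, 12), (6, 6, 8, 8)]
```

So the greedy answer is one of exactly two valid splits for this instance.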
http://www.lifealgorithmic.com/content/linux/cis-90/pages/lesson06_commands.html
# Lesson 6 Commands

| Command | Action |
|---------|--------|
| groups | Shows what groups you are a member of. |
| id | Shows user ID (uid), primary group ID (gid), and membership in secondary groups. |
| chown | Changes the ownership of a file. (Only the superuser has this privilege.) |
| chgrp | Changes the group of a file. (Only to groups that you belong to.) |
| chmod | Changes the file mode "permission" bits of a file. |
| umask | Allows you to control the permissions new files and directories have when they are created. |

## File and Directory Permissions

Files and directories have permissions settings that control who can access them. The permissions in the UNIX file system are more primitive than in Windows, which uses an access control list. On a UNIX system there are three subjects that are important for controlling access.

| Subject | Description |
|---------|-------------|
| User | The owner of the file or directory |
| Group | The group the file or directory belongs to |
| Others | Anyone that's not the owner or in the group |

When you run the ls -l command you see three sets of permissions that contain a letter or a dash (-). The letters indicate what the subject is allowed to do. Possible accesses are:

| Access | Description |
|--------|-------------|
| Read (r) | For files this allows the subject to read the file. For directories this allows ls to read the contents of the directory. |
| Write (w) | For files this allows the subject to change the contents of the file (but not delete it). For directories this allows you to create and remove files in the directory. |
| Execute (x) | For files this allows them to be executed. For directories this allows cd to change into the directory. |
Here’s an example of ls -l:

```
simben90@opus3:~$ ls -l
total 76
drwxr-xr-x 2 simben90 simben90 4096 Mar  1 22:27 bin
-rw-r----- 1 simben90 simben90    0 Dec  3 22:12 butt
-rw-r--r-- 1 simben90 simben90   30 Dec  3 22:07 cis90.contribution
drwxrwxr-x 4 simben90 simben90 4096 Mar  1 21:58 class
-rw------- 1 simben90 simben90  373 Feb 12 08:07 dead.letter
drwxrwxr-x 2 simben90 simben90 4096 Mar  1 22:28 docs
```

Here’s how to understand the permissions on the cis90.contribution file:

| User (simben90) | Group (simben90) | Other |
|-----------------|------------------|-------|
| rw- | r-- | r-- |
| Read/Write | Read Only | Read Only |

## File Permissions and Binary

File permissions are expressed in binary. Counting in binary is easy once you get the hang of it. It’s just like counting in decimal but you use powers of two instead of powers of 10. Let’s start with powers of 10, considering the number 237. Each digit sits in a place that has a place value:

| Hundred's Place $10^2$ | Ten's Place $10^1$ | One's Place $10^0$ |
|---|---|---|
| 2 | 3 | 7 |

Add the value of all of the places together and you get the number. Binary works the same way, but instead of multiplying each place by 10 you multiply each place by 2. Here’s the number 237 in binary:

| 128's Place $2^7$ | 64's Place $2^6$ | 32's Place $2^5$ | 16's Place $2^4$ | 8's Place $2^3$ | 4's Place $2^2$ | 2's Place $2^1$ | 1's Place $2^0$ |
|---|---|---|---|---|---|---|---|
| 1 | 1 | 1 | 0 | 1 | 1 | 0 | 1 |

In each place where there is a 1 you add the place value. When you add all the places in the number above you get: $128 + 64 + 32 + 8 + 4 + 1 = 237$

Not clicking for you? This table will help you remember the binary-numbers-to-permissions mapping.

| Permission Decimal | Permission Binary | Permission Flags | Description |
|---|---|---|---|
| 0 | 000 | --- | No access |
| 1 | 001 | --x | Execute only. Not generally useful. |
| 2 | 010 | -w- | Write only. Not generally useful. |
| 3 | 011 | -wx | Write and execute. Not generally useful. |
| 4 | 100 | r-- | Read only. |
| 5 | 101 | r-x | Read and execute. Common for directories. Allows the use of directories. Files can be changed but cannot be created or deleted. |
| 6 | 110 | rw- | Read write. Common for files. |
| 7 | 111 | rwx | Read, write and execute. |
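As a cross-check of the mapping above, here is a small Python sketch (my own illustration, not part of the lesson) that expands an octal permission string into flag characters and compares the result against the standard library's `stat.filemode` (which expects the full mode word, so `S_IFREG` is OR'd in for a regular file):

```python
import stat

def octal_to_flags(digits):
    """Expand a three-digit octal permission like '640' into 'rw-r-----'."""
    flags = ""
    for d in digits:
        n = int(d, 8)
        flags += ("r" if n & 4 else "-") + ("w" if n & 2 else "-") + ("x" if n & 1 else "-")
    return flags

print(octal_to_flags("640"))                 # rw-r-----
print(octal_to_flags("755"))                 # rwxr-xr-x
# The standard library agrees (the leading '-' marks a regular file):
print(stat.filemode(stat.S_IFREG | 0o640))   # -rw-r-----
```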
For files this allows the execution of a file as a program. For directories it allows full access.

## Changing File Permissions

There are two ways to change file permissions. You can do it absolutely, using an octal number whose digits encode the permission bits. For example:

```
$ chmod 640 dead.letter
$ ls -l dead.letter
-rw-r----- 1 simben90 simben90 373 Feb 12 08:07 dead.letter
```

You can also change permissions relatively by adding or subtracting access. For example, to add the ability for the group to write the dead.letter file you would run the command:

```
$ chmod g+w dead.letter
```

To subtract the ability for the group and everyone else to read or write the dead.letter file you would run the command:

```
$ chmod go-rw dead.letter
```

## Controlling Default Permissions with umask

The umask command controls the permissions of files and directories when they are created. The “mask” in umask signifies that bits in the umask disable, or mask, the corresponding permission bits. That means the umask shows you the opposite of the permissions that will be created. Run umask without arguments on opus3 and you can see what the default umask is:

```
$ umask
0002
```

This says that new files and directories will not be writable by others.

This table will help you remember umask values.

| umask Decimal | umask Binary | Default Permission Flags |
|---|---|---|
| 0 | 000 | rwx |
| 1 | 001 | rw- |
| 2 | 010 | r-x |
| 3 | 011 | r-- |
| 4 | 100 | -wx |
| 5 | 101 | -w- |
| 6 | 110 | --x |
| 7 | 111 | --- |
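The rule behind the umask can be sketched in a few lines of Python (my own illustration, assuming the usual convention: new files start from base mode 666, new directories from 777, and the bits set in the umask are cleared):

```python
def default_mode(base, umask):
    """Permission bits a newly created file or directory gets: base with umask bits cleared."""
    return base & ~umask

# umask 0002: files come out 664 (rw-rw-r--), directories 775 (rwxrwxr-x)
print(oct(default_mode(0o666, 0o002)))   # 0o664
print(oct(default_mode(0o777, 0o002)))   # 0o775
# umask 0022, a common default on other systems: files 644, directories 755
print(oct(default_mode(0o666, 0o022)))   # 0o644
```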
https://projecteuclid.org/search_result?type=index&q.a.author=Salha%20Mamane&q.f.keyword=laplace%20transform&q.f.keyword=graphical%20model&q.f.keyword=analysis%20on%20cones&q.f.keyword=wishart%20distribution&q.f.keyword=power%20function&q.f.author=Ishi,%20Hideyuki&q.f.authority=euclid.pja&q.f.format=Journal%20article&q.f.author=Graczyk,%20Piotr
## Search results

Showing 1-1 of 1 result. Format: Journal article (1). Publication Title: Proceedings of the Japan Academy, Series A, Mathematical Sciences (1). Keywords: analysis on cones (1), graphical model (1), laplace transform (1), power function (1), wishart distribution (1).

### On the Letac-Massam Conjecture on cones $Q_{A_{n}}$

Graczyk, Piotr, Ishi, Hideyuki, Mamane, Salha, and Ochiai, Hiroyuki.
Proceedings of the Japan Academy, Series A, Mathematical Sciences, Volume 93, Number 3 (March 2017), 16-21. Journal article.
http://www.ni.com/documentation/en/labview-comms/1.0/2943r/overview/
The NI USRP-2943R is a software defined radio with 40 MHz bandwidth and a 1.2 GHz to 6.6 GHz frequency range. This product includes the following features: • A tunable RF transceiver • High-speed ADC and DAC for streaming baseband I and Q signals to a host PC over 1/10 gigabit Ethernet or PCI Express (using MXI-Express) • Programmable with the NI-USRP instrument driver in LabVIEW The NI USRP-2943R can be used for the following communications applications: • White space
http://arxiv-export-lb.library.cornell.edu/list/math.AP/pastweek?skip=28&show=25
# Analysis of PDEs

## Authors and titles for recent submissions, skipping first 28

### Fri, 14 Jan 2022 (continued, showing last 9 of 17 entries)

[29] Title: Local Well-Posedness of the Gravity-Capillary Water Waves System in the Presence of Geometry and Damping. Authors: Gary Moon. Subjects: Analysis of PDEs (math.AP)

[30] Title: Scattering for focusing supercritical wave equations in odd dimensions. Subjects: Analysis of PDEs (math.AP)

[31] Title: Analysis of the inverse Born series: an approach. Subjects: Analysis of PDEs (math.AP); Mathematical Physics (math-ph)

[32] Title: On the analyticity of the Dirichlet-Neumann operator and Stokes waves. Subjects: Analysis of PDEs (math.AP)

[33] Title: Decay estimates and blow up of solutions to a class of heat equations. Subjects: Analysis of PDEs (math.AP)

[34] Title: Quantitative bounds for critically bounded solutions to the three-dimensional Navier-Stokes equations in Lorentz spaces. Subjects: Analysis of PDEs (math.AP)

[35] arXiv:2201.04846 (cross-list from math.NA). Title: On the numerical solution of a hyperbolic inverse boundary value problem in bounded domains. Comments: 13 pages, 3 figures. arXiv admin note: text overlap with arXiv:1903.07412. Subjects: Numerical Analysis (math.NA); Analysis of PDEs (math.AP)

[36] arXiv:2201.04775 (cross-list from physics.flu-dyn). Title: Self-similarity in turbulence and its applications. Authors: Koji Ohkitani. Subjects: Fluid Dynamics (physics.flu-dyn); Analysis of PDEs (math.AP)

[37] arXiv:2201.04705 (cross-list from math.PR). Title: Analysis of the Anderson operator. Subjects: Probability (math.PR); Mathematical Physics (math-ph); Analysis of PDEs (math.AP)

### Thu, 13 Jan 2022 (showing first 16 of 22 entries)

[38] Title: Simultaneous recovery of piecewise analytic coefficients in a semilinear elliptic equation. Subjects: Analysis of PDEs (math.AP)

[39] Title: Rate of convergence for singular perturbations of Hamilton-Jacobi equations in unbounded spaces. Subjects: Analysis of PDEs (math.AP)

[40] Title: Uniqueness theorems for weighted harmonic functions in the upper half-plane. Subjects: Analysis of PDEs (math.AP)

[41] Title: Li-Yau and Harnack inequalities via curvature-dimension conditions for discrete long-range jump operators including the fractional discrete Laplacian. Subjects: Analysis of PDEs (math.AP); Probability (math.PR)

[42] Title: Long-time derivation at equilibrium of the fluctuating Boltzmann equation. Authors: Thierry Bodineau (CMAP), Isabelle Gallagher (DMA), Laure Saint-Raymond (IHES), Sergio Simonella (ENS Lyon). Subjects: Analysis of PDEs (math.AP); Mathematical Physics (math-ph); Probability (math.PR)

[43] Title: On mass-critical NLS with local and non-local nonlinearities. Comments: 39 pages. arXiv admin note: text overlap with arXiv:1001.1627, arXiv:1203.2476 by other authors. Subjects: Analysis of PDEs (math.AP)

[44] Title: Explicit solution for non-classical one-phase Stefan problem with variable thermal coefficients and two different heat source terms. Subjects: Analysis of PDEs (math.AP); Mathematical Physics (math-ph)

[45] Title: Degenerate operators on the half-line. Subjects: Analysis of PDEs (math.AP)

[46] Title: Sharp estimates for the spreading speeds of the Lotka-Volterra competition-diffusion system: the strong-weak type. Subjects: Analysis of PDEs (math.AP)

[47] Title: On the exponential time-decay for the one-dimensional wave equation with variable coefficients. Subjects: Analysis of PDEs (math.AP)

[48] Title: Rigidity of the ball for an isoperimetric problem with strong capacitary repulsion. Subjects: Analysis of PDEs (math.AP); Mathematical Physics (math-ph)

[49] Title: Serrin-type Overdetermined problems in $\mathbb H^n$. Subjects: Analysis of PDEs (math.AP)

[50] Title: Rational Decay of A Multilayered Structure-Fluid PDE System. Subjects: Analysis of PDEs (math.AP)

[51] Title: Existence and asymptotic behavior of positive solutions for a class of locally superlinear Schrödinger equation. Subjects: Analysis of PDEs (math.AP)

[52] Title: An energy formula for fully nonlinear degenerate parabolic equations in one spatial dimension
http://physicscatalyst.com/mech/elasticity_0.php
# Elasticity

### 3. Strain

• When a body is under a system of forces or couples in equilibrium, a change is produced in the dimensions of the body.
• This fractional change or deformation produced in the body is called strain.
• Strain is a dimensionless quantity.
• Strain is of three types:

(a) Longitudinal strain: defined as the ratio of the change in length to the original length. If $l$ is the original length and $\Delta l$ is the change in length, then $\text{Longitudinal strain} = \frac{\Delta l}{l}$

(b) Volume strain: defined as the ratio of the change in volume to the original volume, $\text{Volume strain} = \frac{\Delta V}{V}$

(c) Shearing strain: if the deforming forces produce a change in the shape of the body, the strain is called shear strain. Considering Figure 2, it can also be defined as the ratio of the displacement $x$ of corner b to the transverse dimension $l$. Thus $\text{Shear strain} = \frac{x}{l} = \tan\theta$

In practice, since $x$ is much smaller than $l$, $\tan\theta \approx \theta$ and the strain is simply the angle $\theta$ (measured in radians). Thus, shear strain is a pure number without units, as it is a ratio of two lengths.

What is the elastic limit? The elastic limit is the upper limit of deforming force up to which, if the deforming force is removed, the body regains its original form completely; beyond it, if the deforming force is increased, the body loses its property of elasticity and gets permanently deformed.

### 4. Hooke's Law

• Hooke's law is the fundamental law of elasticity and is stated as "for small deformations, stress is proportional to strain". Thus, stress ∝ strain, or $\frac{stress}{strain}=constant$. This constant is known as the modulus of elasticity of the given material, which depends upon the nature of the material of the body and the manner in which the body is deformed.
• Hooke's law is not valid for plastic materials.
• Units and dimensions of the modulus of elasticity are the same as those of stress.

This mobile-friendly simulation allows students to stretch and compress springs to explore relationships among force, spring constant, displacement, and potential energy in a spring.
You can use it to promote understanding of the predictable mathematical relationships that underlie Hooke's Law. Playing around with this simulation you can get an understanding of restoring forces.
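A worked number makes the modulus definition concrete. The following sketch uses illustrative values of my own (not data from the lesson): a stretched steel wire with Young's modulus $Y = 2 \times 10^{11}\ N/m^2$, and the relation stress/strain = modulus rearranged to find the strain and elongation:

```python
# Hooke's law as stated above: stress / strain = modulus of elasticity.
# Illustrative values: a steel wire with Young's modulus Y = 2e11 N/m^2.
Y = 2e11     # modulus of elasticity, N/m^2
F = 100.0    # stretching force, N
A = 1e-6     # cross-sectional area, m^2 (1 mm^2)
L = 2.0      # original length, m

stress = F / A           # 1e8 N/m^2
strain = stress / Y      # dimensionless, 5e-4
delta_L = strain * L     # elongation, about 1e-3 m, i.e. 1 mm

print(f"stress = {stress:.3g} N/m^2, strain = {strain:.3g}, elongation = {delta_L:.3g} m")
```

The tiny strain (0.05%) is typical of metals loaded well inside the elastic limit.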
http://mathhelpforum.com/advanced-algebra/63742-eigenvector-u-v.html
# Math Help - eigenvector u+v

1. ## eigenvector u+v

Let A be an nxn matrix and suppose that u and v are eigenvectors of A, both with corresponding eigenvalue y. Prove that if u+v is not equal to $0_n$, then u+v is also an eigenvector of A with corresponding eigenvalue y.

2. Originally Posted by math8553: let A be an nxn matrix and suppose that u and v are eigenvectors of A, both with corresponding eigenvalue y. Prove that if u+v is not equal to $0_n$, then u+v is also an eigenvector of A with corresponding eigenvalue y.

By definition $A\bold{u} = y\bold{u}$ and $A\bold{v} = y\bold{v}$. This means $A\bold{u} + A\bold{v} = y\bold{u}+y\bold{v}\implies A(\bold{u}+\bold{v}) = y(\bold{u}+\bold{v})$. Thus, if $\bold{u}+\bold{v}\not = 0$ then it is an eigenvector with eigenvalue $y$.
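The algebra can be sanity-checked numerically. Here is a plain-Python sketch (my own example matrix, chosen so that the eigenvalue 3 has a two-dimensional eigenspace, which is what lets two distinct eigenvectors share one eigenvalue):

```python
def matvec(A, x):
    """Multiply matrix A (list of rows) by vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[3, 0, 0],
     [0, 3, 0],
     [0, 0, 1]]     # eigenvalue 3 has multiplicity 2
u = [1, 0, 0]       # A u = 3 u
v = [0, 1, 0]       # A v = 3 v
y = 3

w = [ui + vi for ui, vi in zip(u, v)]   # w = u + v, nonzero
print(matvec(A, w))                     # [3, 3, 0], which equals y * w
```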
http://koreascience.or.kr/article/JAKO202014264110208.page?&lang=ko
# Review on Integration of Smart Grid into Smart City

• Sim, Min Kyu (Department of Industrial and Systems Engineering, Seoul National University of Science and Technology)
• Received: 2019.10.29
• Accepted: 2019.12.17
• Published: 2020.02.28

#### Abstract

The smart grid, which aims at efficient energy operation, will take its place as part of the smart city, which aims to improve the quality of human life. This paper argues that a mature smart grid is one that is naturally integrated into the smart city. It also sketches the smart grid as a core component of the smart city, and surveys the literature on the institutional, infrastructure, and technology elements this requires.

#### References

1. Atasoy, T., Akinc, H. E., and Ercin, O., "An analysis on smart grid applications and grid integration of renewable energy systems in smart cities," In 2015 International Conference on Renewable Energy Research and Applications (ICRERA) (pp. 547-550). IEEE, 2015.
2. Bonetto, R. and Rossi, M., "Smart grid for the smart city," In Designing, Developing, and Facilitating Smart Cities (pp. 241-263). Springer, Cham, 2017.
3. Bulkeley, H., McGuirk, P. M., and Dowling, R., "Making a smart city for the smart grid? The urban material politics of actualising smart electricity networks," Environment and Planning A: Economy and Space, Vol. 48, No. 9, pp. 1709-1726, 2016. https://doi.org/10.1177/0308518X16648152
4. Eremia, M., Toma, L., and Sanduleac, M., "The smart city concept in the 21st century," Procedia Engineering, Vol. 181, pp. 12-19, 2017. https://doi.org/10.1016/j.proeng.2017.02.357
5. Kim, H., Kim, H., and Ji, Y., "User requirement Elicitation for U-City residential environment: Concentrated on smart home service," The Journal of Society for e-Business Studies, Vol. 20, No. 1, pp. 167-182, 2015. https://doi.org/10.7838/jsebs.2015.20.1.167
6. Kim, S. G., Jung, J. Y., and Sim, M. K., "A two-step approach to solar power generation prediction based on weather data using machine learning," Sustainability, Vol. 11, No. 5, p. 1501, 2019. https://doi.org/10.3390/su11051501
7. Kumar, N., Vasilakos, A. V., and Rodrigues, J.
J., “A multi-tenant cloud-based DC nano grid for self-sustained smart buildings in smart cities,” IEEE Communications Magazine, Vol. 55, No. 3, pp. 14-21, 2017. https://doi.org/10.1109/MCOM.2017.1600228CM 8. Lazaroiu, G. C. and Roscia, M., "Model for smart appliances toward smart grid into smart city," In 2016 IEEE International Conference on Renewable Energy Research and Applications (ICRERA) (pp. 622-627), IEEE, 2016. 9. Li, B., Kisacikoglu, M. C., Liu, C., Singh, N., and Erol-Kantarci, M., “Big data analytics for electric vehicle integration in green smart cities,” IEEE Communications Magazine, Vol. 55, No. 11, pp. 19-25, 2016. https://doi.org/10.1109/MCOM.2017.1700133 10. Manville, C., Cochrane, G., Cave, J., Millard, J., Pederson, J. K., Thaarup, R. K., and Kotterink, B., Mapping smart cities in the EU, 2014. 11. Masera, M., Bompard, E. F., Profumo, F., and Hadjsaid, N., “Smart (electricity) grids for smart cities: Assessing roles and societal impacts,” Proceedings of the IEEE, Vol. 106, No. 4, pp. 613-625, 2018. https://doi.org/10.1109/JPROC.2018.2812212 12. Mohsenian-Rad, H. and Cortez, E., "Smart grid for smart city activities in the california city of riverside," In Smart City $360^{\circ}$ (pp. 314-325), Springer, Cham, 2016. 13. Morello, R., Mukhopadhyay, S. C., Liu, Z., Slomovitz, D., and Samantaray, S. R., “Advances on sensing technologies for smart cities and power grids: A review,” IEEE Sensors Journal, Vol. 17, No. 23, pp. 7596-7610, 2017. https://doi.org/10.1109/JSEN.2017.2735539 14. Pieroni, A., Scarpato, N., Di Nunzio, L., Fallucchi, F., and Raso, M., “Smarter city: smart energy grid based on blockchain technology,” Int. J. Adv. Sci. Eng. Inf. Technol, Vol. 8, No. 1, pp. 298-306, 2018. https://doi.org/10.18517/ijaseit.8.1.4954 15. Shuai, W., Maillé, P., and Pelov, A., “Charging electric vehicles in the smart city: A survey of economy-driven approaches,” IEEE Transactions on Intelligent Transportation Systems, Vol. 17, No. 8, pp. 2089-2106, 2016. 
https://doi.org/10.1109/TITS.2016.2519499 16. Smart sustainable cities: An analysis of definitions, ITU-T focus group on smart sustainable cities, 2014, Available: http://www.itu.int/en/ITU-T/focusgroups/ssc/Documents/website/web-fg-ssc-0100-r9-definitions_technical_report.docx. 17. Stone, P., Brooks, R., Brynjolfsson, E., Calo, R., Etzioni, O., Hager, G., and Leyton-Brown, K., "Artificial intelligence and life in 2030," One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel, Vol. 52, 2016. 18. Suryadevara, N. K. and Biswal, G. R., "Smart plugs: Paradigms and applications in the smart city-and-smart grid," Energies, Vol. 12, No. 10, p. 1957, 2019. https://doi.org/10.3390/en12101957 19. Wei, W., Mei, S., Wu, L., Shahidehpour, M., and Fang, Y., "Optimal traffic-power flow in urban electrified transportation networks," IEEE Transactions on Smart Grid, Vol. 8, No. 1, pp. 84-95, 2016. https://doi.org/10.1109/TSG.2016.2612239
https://socratic.org/questions/an-investment-is-losing-value-at-a-rate-of-2-per-year-your-original-investment-w
Algebra Topics

Explanation:

As the investment is losing value at the rate of 2% every year, the value of an original investment $I$ after $n$ years would be

$I {\left(1 - \frac{2}{100}\right)}^{n} = I \times {\left(0.98\right)}^{n}$

As the original investment was \$1250 three years ago, its present value is

$1250 \times {0.98}^{3} = 1250 \times 0.9412 = 1176.49$

that is, \$1176.49, or \$1176 to the nearest dollar.
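The same arithmetic in a quick Python sketch (mirroring the numbers in the answer above):

```python
original = 1250.0
rate = 0.02      # loses 2% of its value per year
years = 3

value = original * (1 - rate) ** years
print(round(value, 2))   # 1176.49
```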
https://www.physicsforums.com/threads/trig-limit.336450/
# Homework Help: Trig limit

1. Sep 11, 2009

### Zhalfirin88

I know this is probably pre-calc, but this was assigned to us in our calc class.

1. The problem statement, all variables and given/known data

Find the lim as h approaches zero of $$x * cos x$$

3. The attempt at a solution

$$\frac{(x+h)cos(x+h)-(x cos x)}{h}$$ $$\frac{x+h(cos (x) cos (h) + sin (x) sin (h)) -x *-cos (x))}{h}$$ $$\frac{h(cos (x) cos (h) + sin (x) sin (h)) -cos (x)}{h}$$

Don't know what to do next.

2. Sep 11, 2009

### Elucidus

Assuming you were seeking the derivative, your parentheses are off and there is no distribution in the second term of the numerator (it's a product, not a sum). Your second line should read: $$= \frac{(x+h)(cos (x) cos (h) + sin (x) sin (h)) -(x \cdot cos (x))}{h}$$ HOWEVER you state you are being asked to find the limit of xcos(x) as h approaches zero. I'm not sure you have gotten the question correct. xcos(x) does not contain an "h" and finding a limit is not equivalent to finding a derivative. Please state the question exactly as it appears so we can help you better. --Elucidus

3. Sep 11, 2009

### Zhalfirin88

Oh, sorry, we were supposed to use the definition of a derivative. $$\frac {f(x+h)-f(x)}{h}$$ Where $$f(x) = x * cos(x)$$

4. Sep 11, 2009

### VietDao29

When taking limits, or solving maths problems in general, one should first determine which terms are problematic, then isolate them, and finally think of a way to get rid of them, instead of just expanding everything out without any goal or reason and getting a huge mess. This step is bad, don't expand it early like that. So, our limit is: $$\lim_{h \rightarrow 0} \frac{(x + h) \cos (x + h) - x\cos x}{h}$$ $$= \lim_{h \rightarrow 0} \frac{x \cos (x + h) + h\cos(x + h) - x\cos x}{h}$$ Now, look at the expression closely: which terms will produce the Indeterminate Form 0/0?
$$= \lim_{h \rightarrow 0} \frac{\color{red}{x \cos (x + h)} \color{blue}{+ h\cos(x + h)} \color{red}{- x\cos x}}{h}$$

The red ones, when simplifying, will produce 0/0, right? And simplifying the blue term, by canceling 'h', will produce a normal term, right? So, your limit now becomes:

$$= \lim_{h \rightarrow 0} \frac{\color{red}{x \cos (x + h)} - \color{red}{x\cos x} \color{blue}{+ h\cos(x + h)}}{h}$$ $$=\lim_{h \rightarrow 0} \frac{\color{red}{x \cos (x + h)} - \color{red}{x\cos x}}{h} + \lim_{h \rightarrow 0} \frac{\color{blue}{h\cos(x + h)}}{h}$$ (isolating the problematic terms) $$=\lim_{h \rightarrow 0} \frac{\color{red}{x \cos (x + h)} - \color{red}{x\cos x}}{h} + \lim_{h \rightarrow 0} \color{blue}{\cos(x + h)}$$ $$=\lim_{h \rightarrow 0} \frac{\color{red}{x ( \cos (x + h)} - \color{red}{\cos x} )}{h} + \color{blue}{\cos(x)}$$

Let's see if you can continue from here. :)

-------------

And please review your algebraic manipulations, you make quite a lot of mistakes in your first post: missing parentheses, and you even change the * operator to +.. @.@ $$xy \neq x + y$$. The 2 operators are totally different!!!! And $$-(xy) \neq (-x) * (-y) \neq (-x) + (-y)$$ I think you really, really need to go over algebraic manipulations again.

Last edited: Sep 11, 2009

5. Sep 11, 2009

### Zhalfirin88

Would you expand it there? I haven't worked with trig functions that closely, so I'd assume that knowing the indeterminate form will come from practice.

Edit: from looking at it. Haha yeah, I've slept ~3 hours in the past 1.5 days because I worked 3rd shift last night

Last edited: Sep 11, 2009

6. Sep 11, 2009

### Staff: Mentor

I don't think it has been pointed out that you are using a trig identity incorrectly. cos(a + h) = cos a cos h - sin a sin h.

7. Sep 11, 2009

### Zhalfirin88

Is that why the identity said $$\mp$$ and not $$\pm$$ ? I was never really taught trig identities and functions in high school, so I pick up as I go through college.

8.
Sep 11, 2009

### Cyosis

I suggest taking the x in front of the limit and then taking a good look at the red part. It is the definition of...?

Last edited: Sep 11, 2009
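Following the hint above, the red part is x times the derivative of cos x, so the limit evaluates to the product-rule result cos x − x sin x. A quick numerical sanity check of that answer, using the same difference quotient as the thread (not part of the original discussion):

```python
import math

def f(x):
    return x * math.cos(x)

def derivative_estimate(x, h=1e-6):
    # The thread's difference quotient: (f(x+h) - f(x)) / h
    return (f(x + h) - f(x)) / h

x = 1.3
exact = math.cos(x) - x * math.sin(x)   # cos x - x sin x from the product rule
print(abs(derivative_estimate(x) - exact) < 1e-4)  # True
```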
https://rd.springer.com/article/10.1007%2Fs41115-018-0003-2
# Vlasov methods in space physics and astrophysics • Minna Palmroth • Urs Ganse • Yann Pfau-Kempf • Markus Battarbee • Lucile Turc • Thiago Brito • Maxime Grandin • Sanni Hoilijoki • Arto Sandroos • Sebastian von Alfthan Open Access Review Article ## Abstract This paper reviews Vlasov-based numerical methods used to model plasma in space physics and astrophysics. Plasma consists of collectively behaving charged particles that form the major part of baryonic matter in the Universe. Many concepts ranging from our own planetary environment to the Solar system and beyond can be understood in terms of kinetic plasma physics, represented by the Vlasov equation. We introduce the physical basis for the Vlasov system, and then outline the associated numerical methods that are typically used. A particular application of the Vlasov system is Vlasiator, the world’s first global hybrid-Vlasov simulation for the Earth’s magnetic domain, the magnetosphere. We introduce the design strategies for Vlasiator and outline its numerical concepts ranging from solvers to coupling schemes. We review Vlasiator’s parallelisation methods and introduce the used high-performance computing (HPC) techniques. A short review of verification, validation and physical results is included. The purpose of the paper is to present the Vlasov system and introduce an example implementation, and to illustrate that even with massive computational challenges, an accurate description of physics can be rewarding in itself and significantly advance our understanding. Upcoming supercomputing resources are making similar efforts feasible in other fields as well, making our design options relevant for others facing similar challenges. ## Keywords Plasma physics Computational physics Vlasov equation Astrophysics Space physics ## 1 Introduction While physical understanding is inherently based on empirical evidence, numerical simulation tools have become an integral part of the majority of fields within physics. 
When tested against observations, numerical models can strengthen or invalidate existing theories and quantify the degree to which the theories have to be improved. Simulation results can also complement observations by giving them a larger context. In space physics, spacecraft measurements concern only one point at one time in the vast volume of space, indicating that discerning spatial phenomena from temporal changes is difficult. This is a shortcoming that has also led to the use of spacecraft constellations, like the European Space Agency’s Cluster mission (Escoubet et al. 2001). However, simulations are considerably more cost-effective compared to spacecraft, and they can be adopted to address physical systems that cannot be reached by in situ experiments, like the distant galaxies. Finally, and most importantly, predictions of physical environments under varying conditions are always based on modelling. Predicting the near-Earth environment in particular has become increasingly important, not only because the near-Earth space hosts expensive assets used to monitor our planet. The space environmental conditions threatening space- or ground-based technology or human life are commonly termed as space weather. Space weather predictions include two types of modelling efforts; those targeting real-time modelling (similar to terrestrial weather models), and those which test and improve the current space physical understanding together with top-tier experiments. This paper concerns the latter approach. The physical conditions within the near-Earth space are mostly determined by physics of collisionless plasmas, where the dominant physical interactions are caused by electromagnetic forces over a collection of charged particles. There are three main approaches to model plasmas: (1) the fluid approach (e.g., magnetohydrodynamics, MHD), (2) the fully kinetic approach, and (3) hybrid approaches combining the first two. 
Present global models including the entire near-Earth space in three dimensions (3D) and resolving the couplings between different regions are largely based on MHD (e.g., Janhunen et al. 2012). However, single-fluid MHD models are basically scale-less in that they assume that plasmas have a single temperature approximated by a Maxwellian distribution. Therefore they provide a limited context to the newest space missions, which produce high-fidelity multi-point observations of spatially overlapping multi-temperature plasmas. The second approach uses a kinetic formulation as represented by the Vlasov theory (Vlasov 1961). In this approach, plasmas are treated as velocity distribution functions in a six-dimensional phase space consisting of three-dimensional ordinary space (3D) and a three-dimensional velocity space (3V). The majority of kinetic simulations model the Vlasov theory by a particle-in-cell (PIC) method (Lapenta 2012), where a large number of particles are propagated within the simulation, and the distribution function is constructed from particle statistics in space and time. The fully kinetic PIC approach means that both electrons and protons are treated as particles within the simulation. Such simulations in 3D are computationally extremely costly, and can only be carried out in local geometries (e.g., Daughton et al. 2011). A hybrid approach in the kinetic simulation regime means usually that electrons are treated with a fluid description, but protons and heavier ions are treated kinetically. Again, the vast majority of simulations use a hybrid-PIC approach, which have previously considered 2D spatial regimes due to computational challenges (e.g., Omidi et al. 2005; Karimabadi et al. 2014), but have recently been extended into 3D using a limited resolution (e.g., Lu et al. 2015; Lin et al. 2017). 
This paper does not discuss the details of the PIC approach, but instead concentrates on a hybrid-Vlasov method, where the ion velocity distribution is discretised and modelled with a 3D–3V grid. The difference to hybrid-PIC is that in hybrid-Vlasov the distribution functions are evolved in time as an entity, and not constructed from particle statistics. The main advantage is therefore that the distribution function becomes noiseless. This can be important for the problem at hand, because the distribution function is in many respects the core of plasma physics as the majority of the plasma parameters and processes can be derived from it. As will be described, hybrid-Vlasov methods have been used mostly in local geometries, because the 3D–3V requirement implies a large computational cost. A global approach, which in space physics means simulation box sizes exceeding thousands of ion inertial lengths or gyroradii per dimension, has not been possible, since the large spatial volume must carry a full velocity space at every point. The world's (so far) only global magnetospheric hybrid-Vlasov simulation, the massively parallel Vlasiator, is therefore the prime application in this article. This paper is organised as follows: Sect. 2 introduces the typical plasma systems and relevant processes one encounters in space. Sections 3 and 4 introduce the Vlasov theory and its numerical representations. Section 5 describes Vlasiator in detail and justifies the decisions made in the design of the code to aid those who would like to design their own (hybrid-)Vlasov system. At the time of writing, there are no standard verification cases for a (hybrid-)Vlasov system, but we describe the test cases used for Vlasiator. The physical findings are then illustrated briefly, showing that Vlasiator has made a paradigm change in space physics, emphasising the role of scale coupling in large-scale plasma systems.
While this paper concerns mostly the near-Earth environment, we hope it is useful for astrophysical applications as well. Astrophysical large-scale modelling is still mostly based on non-magnetised gas (Springel 2005; Bryan et al. 2014), while in reality astrophysical objects are in the plasma state. In the future, pending new supercomputer infrastructure, it may be possible to design astrophysical simulations based on MHD first, and later possibly on kinetic theories. If this becomes feasible, we hope that our design strategies, complemented and validated by in situ measurements, can be helpful. ## 2 Kinetic physics in astrophysical plasmas Thermal and non-thermal interactions between charged particles and electromagnetic fields follow the same basic rules throughout the universe, but the applicability of simplified theories and the relevant spatial, temporal, and virial scales vary greatly between different scopes of research. In this section, we present an overview of regions of interest and the phenomena found within them. ### 2.1 Astrophysical media and objects Prime examples of themes requiring modelling are e.g., the dynamics of hot, cold, and dark matter in an expanding universe with unknown boundaries. The birth of the universe connects the rapid expansion and cooling of baryonic matter with quantum fluctuation anisotropies that eventually lead to the formation of galactic superclusters. Astrophysical simulations of the universe should naturally account for expansion of space-time and associated effects of general relativity, and modelling of high-energy phenomena should correctly account for special relativity due to velocities approaching the speed of light. A recent forerunner in modelling the universe is EAGLE (Schaye et al. 2015), which utilises smoothed particle hydrodynamics, with subgrid modelling providing feedback of star formation, radiative cooling, stellar mass loss and feedback from stars and accreting black holes. 
These simulations operate on very much larger scales compared to the Vlasov equation for ions and electrons, yet they depend strongly on knowledge of processes at smaller length and time scales. Due to the majority of the universe consisting of the mostly empty interstellar and intergalactic media, the energy content of turbulent space plasmas must be understood. This has been investigated through the Vlasov equation (see, e.g., Weinstock 1969). Conversely, turbulent behaviour at large scales can act as a model for extending power laws to smaller scales (Maier et al. 2009). An alternative if less common approach for modelling galactic dynamics is to describe the distribution of stars as a Vlasov–Poisson system, to be explained below, with gravitational force terms instead of electromagnetic effects (Guo and Li 2008). This approach highlights the use of the Vlasov equation also on large spatial scales. ### 2.2 Solar system Plasma simulations of the solar system are mostly concerned with the modelling of solar activity and its influence on the heliosphere. Solar activity can be divided into two components: the solar wind, consisting of particles escaping continuously from the solar corona due to its thermal expansion, and carrying with them turbulent fields; and transient phenomena such as flares and coronal mass ejections, during which energy and plasma are released explosively from the Sun into the heliosphere. Topics of active research in solar physics include for example the acceleration and expansion of the solar wind (Yang et al. 2012; Verdini et al. 2010; Pinto and Rouillard 2017), coronal heating (De Moortel and Browning 2015; Cranmer et al. 2017) and flux emergence (Schmieder et al. 2014). The latter is particularly important for transient solar activity, as flares and coronal mass ejections are due to the destabilisation of coronal magnetic structures through magnetic reconnection. 
The typical length of these coronal structures ranges between $$10^{6}$$ and $$10^{8}$$ m. Of great interest is also the propagation of the solar wind and solar transients into the heliosphere, in particular for studying their interaction with Earth and other planetary environments. Because of the large scales of the systems considered, solar and heliospheric simulations are generally based on MHD, indicating that currently existing theories of the Sun and the solar eruption are mostly based on the MHD approximation. Applying the Vlasov approach to near-Earth physics, having important analogies to the solar plasmas, may therefore provide important feedback to existing solar theories as well. ### 2.3 Near-Earth space and other planetary environments Figure 1 illustrates the near-Earth space. The shock separating the terrestrial magnetic domain from the solar wind is called the bow shock (e.g., Omidi 1995), and the region of shocked plasma downstream is the magnetosheath (Balogh and Treumann 2013). The interplanetary magnetic field (IMF), which at 1 AU typically forms an angle of 45$$^{\circ }$$ relative to the plasma flow direction, intensifies at the shock, increasing the magnetic field strength to roughly four-fold compared to that in the solar wind (e.g., Spreiter and Stahara 1994). The bow shock–magnetosheath system hosts highly variable and turbulent environmental conditions, with the bow shock normal angle with respect to the IMF direction being one of the most important factors controlling the level of variability. At portions of the bow shock where the IMF is quasi-parallel with the bow shock normal (termed quasi-parallel shock), some particles reflect at the shock and propagate back upstream causing instabilities and waves in the foreshock upstream of the bow shock (e.g., Hoppe et al. 1981). 
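The quasi-parallel regime mentioned above is a purely geometric classification: it is set by the angle $$\theta_{Bn}$$ between the upstream magnetic field and the local shock normal, with 45$$^{\circ }$$ as the conventional dividing line between quasi-parallel and quasi-perpendicular. A minimal sketch (the function name and example vectors are our own illustration, not from the paper):

```python
import numpy as np

def shock_geometry(b_upstream, shock_normal):
    """Return (theta_Bn in degrees, regime) for an upstream field vector
    and a local shock-normal vector. 45 degrees is the usual dividing line."""
    b = np.asarray(b_upstream, dtype=float)
    n = np.asarray(shock_normal, dtype=float)
    # theta_Bn is taken in [0, 90] deg, hence the absolute value of the dot product.
    cos_theta = abs(b @ n) / (np.linalg.norm(b) * np.linalg.norm(n))
    theta_bn = np.degrees(np.arccos(np.clip(cos_theta, 0.0, 1.0)))
    regime = "quasi-parallel" if theta_bn < 45.0 else "quasi-perpendicular"
    return theta_bn, regime

# IMF nearly aligned with the shock normal (illustrative vectors):
theta, regime = shock_geometry([1.0, 0.2, 0.0], [1.0, 0.0, 0.0])
print(f"{theta:.0f} deg -> {regime}")   # 11 deg -> quasi-parallel
```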
On the quasi-perpendicular side of the shock, where the IMF direction is more perpendicular to the bow shock normal, the downstream magnetosheath is much smoother, but exhibits large-scale waves originating from anisotropies in the ion distribution function (e.g., Génot et al. 2011; Soucek et al. 2015; Hoilijoki et al. 2016). The foreshock–bow shock–magnetosheath coupled system is under active research, and since it is the magnetosheath plasma which ultimately determines the conditions within the near-Earth space, most important open questions include the processes which determine the plasma characteristics in space and time. The entire system has previously been modelled with MHD, which is usable to infer average properties of the dayside system (e.g., Palmroth et al. 2001; Chapman and Cairns 2003; Dimmock and Nykyri 2013; Mejnertsen et al. 2018), but unable to take into account particle reflection, kinetic waves, turbulence, and it neglects e.g., plasma asymmetries between the quasi-parallel and quasi-perpendicular sides of the shock that require a non-Maxwellian ion distribution function. The earthward boundary of the magnetosheath is called the magnetopause, a current layer exhibiting large gradients in the plasma parameter space. Energy and mass exchange between the upstream plasma and the magnetosphere occurs at the magnetopause (Palmroth et al. 2003, 2006c; Pulkkinen et al. 2006; Anekallu et al. 2011; Daughton et al. 2014; Nakamura et al. 2017), and therefore its processes are important in determining the amount of energy driving the space weather phenomena, which can endanger technological systems or human health (Watermann et al. 2009; Eastwood et al. 2017). Space weather phenomena are complicated and varied, and we give a non-exhaustive list just to name a few most important categories. 
Direct energetic particle flows from the Sun alter the communication conditions especially at high latitudes, affecting radio broadcasts, aircraft communication with air traffic control, and radar signals. Sudden changes in the magnetic field induce currents in the terrestrial long conductors, such as gas pipelines, railways, and power grids that can sometimes be disrupted (e.g., Wik et al. 2008). Increasing numbers of satellites are being launched, vulnerable to sudden events in the geospace, as it has been experienced that some spacecraft have stopped operation in response to space weather events (Green et al. 2017). Overall, some estimations show that in the worst case, an extreme space weather event could induce economic costs of the order of 1–2 trillion USD during the first year following its occurrence, and that it could take 4–10 years for the society to recover from its effects (National Research Council 2008). Understanding and predicting the geospace is ultimately done by modelling. While the previous global MHD models can be executed near real-time and they provide the average description of the system, they cannot capture the kinetic physics that is needed to explain the most severe space weather events. One additional factor in the accurate modelling of the geospace as a global system is that one needs to address the ionised upper atmosphere called the ionosphere within the simulation. The Earth’s ionosphere is a weakly ionised medium, divided into three regions—named D (60–90 km), E (90–150 km), and F (> 150 km)—corresponding to three peaks in the electron density profile (Hargreaves 1995). From the magnetospheric point of view, the ionosphere represents a conducting layer closing currents flowing between the ionosphere and magnetosphere (Merkin and Lyon 2010), reflecting waves (Wright and Russell 2014), and depositing precipitating particles (e.g., Rodger et al. 2013). 
Further, the ionosphere is a source of cold electrons (Cran-McGreehin and Wright 2005) and heavier ions (e.g., Peterson et al. 1981). These cold ions of ionospheric origin may affect local processes in the magnetosphere, such as magnetic reconnection at the magnetopause (André et al. 2010; Toledo-Redondo et al. 2016). The global MHD models typically use an electrostatic module for the ionosphere, coupled to the magnetosphere by currents, precipitation and electric potential (e.g., Janhunen et al. 2012; Palmroth et al. 2006a). The ionosphere itself is modelled either empirically or based on first principles: The International Reference Ionosphere (IRI) model describes the ionosphere empirically from 50 to 1500 km altitude (Bilitza and Reinisch 2008), while for instance, the Sodankylä Ion and Neutral Chemistry model solves the photochemistry of the D region, taking into account several hundred chemical reactions involving 63 ions and 13 neutral species (Verronen et al. 2005, and references therein). At higher altitudes, transport processes become important, and models such as TRANSCAR (Blelly et al. 2005, and references therein) or the IRAP Plasmasphere–Ionosphere Model (Marchaudon and Blelly 2015) couple a kinetic model for the transport of suprathermal electrons with a fluid approach to resolve the chemistry and transport of ions and thermal electrons in the convecting ionosphere. Neither empirical nor the first-principles based models are using the Vlasov equation, which at the ionosphere concerns much finer scales. In general, the interaction of the solar wind with the other magnetized planets in our solar system is essentially similar to that with Earth. The main differences stem from the scales of the systems, which depend on the strength of their intrinsic magnetic field and the solar wind parameters changing with heliospheric distance. 
While the modelling of the magnetospheres of the outer giants is only achievable to date using fluid approaches, the small size of Mercury’s magnetosphere has been targeted for global kinetic simulations (Richer et al. 2012). For the same reason, kinetic models are also a popular tool to investigate the plasma environment of non-magnetized bodies such as Mars, Venus, comets, and asteroids. In particular, Umeda and Ito (2014) and Umeda and Fukazawa (2015) have studied the interaction of a weakly magnetized body with the solar wind by means of full-Vlasov simulations. ### 2.4 Scales and processes The following processes are central in explaining plasma behaviour in the Solar–Terrestrial system and astrophysical domains: (1) magnetic reconnection enabling energy and mass transfer between different magnetic domains, (2) shocks forming due to supersonic relative flow speeds between plasma populations, (3) turbulence providing energy dissipation across scales, and (4) plasma instabilities transferring energy between the plasma and waves. All these processes contribute to particle acceleration, which is one of the most researched topics within Solar–Terrestrial and astrophysical domains, and notorious in requiring understanding of both local microphysics and global scales. Below, we introduce some examples of these processes within systems having scales that can be addressed with the Vlasov approach. Simulations of non-thermal space plasmas encompass a vast range of scales, from the smallest ones (electron scales, ion kinetic scales) to local and even global structures. Table 1 lists typical ranges of a handful of plasma parameters encountered in different branches of space sciences and astrophysics. Especially in a larger astrophysical context, simulations cannot directly encompass all relevant spatial and temporal scales. 
It is important to note, however, that scientific results of kinetic effects can be achieved even without directly resolving all the spatial scales that may at first glance appear to be a requirement (Pfau-Kempf et al. 2018). Reconnection is a process whereby oppositely oriented magnetic fields break and re-join, allowing a change in magnetic topology, plasma mixing, and energy transfer between different magnetic domains. Within the magnetosphere, reconnection occurs between the terrestrial northward oriented magnetic field and the magnetosheath magnetic field that mostly mimics the direction of the IMF, but can sometimes be significantly altered due to magnetosheath processes (Turc et al. 2017). Magnetospheric energy transfer is most efficient when the magnetosheath magnetic field is southward, while for northward IMF reconnection locations move to the nightside lobes (Palmroth et al. 2006c). Actively researched topics focus on understanding the nature and location of reconnection as a function of driving conditions (e.g., Hoilijoki et al. 2014; Fuselier et al. 2017). Energy transfer at the magnetopause sets the near-Earth space into a global circulation (Dungey 1961), leading to reconnection in the magnetospheric tail. The tail centre hosts a hot and dense plasma sheet, the home of perhaps most diligent scientific investigations within the domain of magnetospheric physics. Especially in focus have been explosive times when the magnetospheric tail disrupts and launches into space, accelerating particles and causing abrupt changes in the global geospace (e.g., Sergeev et al. 2012). Reconnection has been suggested as one of the main drivers of tail disruptions (e.g., Angelopoulos et al. 2008), while other theories related to plasma kinetic instabilities exist as well (e.g., Lui 1996). 
Tail disruptions have important analogues in solar eruptions (e.g., Birn and Hesse 2009), and investigating the tail disruptions with global simulations together with in situ measurements may shed light into other astrophysical systems as well.

**Table 1** Typical plasma parameters and scales for solar–terrestrial and astrophysical phenomena

| Physical system | Near-Earth space | Solar system | Astrophysics |
|---|---|---|---|
| $$\lambda _\mathrm{D}$$ (m) | $$10^{-3}$$–$$10^{2}$$ | $$10^{-4}$$–$$10^{1}$$ | $$10^{-4}$$–$$10^{5}$$ |
| $$c/\omega _\mathrm{pi}$$ (m) | < $$10^{5}$$ | $$10^{3}$$–$$10^{5}$$ | $$10^{-3}$$–$$10^{6}$$ |
| $$r_\mathrm{Li}$$ (m) | $$10^{3}$$–$$10^{6}$$ | $$10^{1}$$–$$10^{7}$$ | $$10^{-3}$$–$$10^{8}$$ |
| System size (m) | $$10^{9}$$ | $$10^{11}$$ | $$10^{15}$$–$$10^{25}$$ |
| Process time scales | 1 s–1 day | 1 s–1 month | 1 s–10 Gy |

$$\lambda _\mathrm{D}$$: the Debye length is the characteristic of a plasma related to its ability to shield out the electric potentials applied to it; $$c/\omega _\mathrm{pi}$$: the ion inertial length is the scale at which ions decouple from electrons; $$r_\mathrm{Li}$$: the ion Larmor radius is the radius at which the ion gyrates around the magnetic field.

Collisionless shocks form due to plasma populations flowing supersonically with respect to each other, redistributing flow energy into thermal energy and accelerating particles (e.g., Balogh and Treumann 2013; Marcowith et al. 2016). Shock fronts such as those found at supernova explosions are an efficient accelerator (Fermi 1949). Diffusive shock acceleration (e.g., Axford et al. 1977; Krymskii 1977; Blandford and Ostriker 1978; Bell 1978) is the primary source of solar energetic particles, and occurs from the non-relativistic (e.g., Lee 2005) to the hyper-relativistic (Aguilar et al. 2015) energy regimes. Shock–particle interactions including kinetic effects have been modelled using various analytical and semiempirical methods (see, e.g. Afanasiev et al. 2015, 2018; Hu et al. 2017; Kozarev and Schwadron 2016; Le Roux and Arthur 2017; Luhmann et al.
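The characteristic lengths in Table 1 follow from standard plasma formulas. A quick sketch computing them for solar-wind-like conditions (the input values n, T, B are our own illustrative choices, not from the paper):

```python
import math

# Physical constants (SI)
EPS0 = 8.8541878128e-12   # vacuum permittivity [F/m]
KB   = 1.380649e-23       # Boltzmann constant [J/K]
E    = 1.602176634e-19    # elementary charge [C]
MP   = 1.67262192e-27     # proton mass [kg]
C    = 2.99792458e8       # speed of light [m/s]

def debye_length(n, T):
    """lambda_D = sqrt(eps0 kB T / (n e^2))."""
    return math.sqrt(EPS0 * KB * T / (n * E**2))

def ion_inertial_length(n):
    """c / omega_pi with plasma frequency omega_pi = sqrt(n e^2 / (eps0 m_p))."""
    omega_pi = math.sqrt(n * E**2 / (EPS0 * MP))
    return C / omega_pi

def ion_larmor_radius(T, B):
    """m_p v_th / (e B) with thermal speed v_th = sqrt(kB T / m_p)."""
    v_th = math.sqrt(KB * T / MP)
    return MP * v_th / (E * B)

# Solar-wind-like values near 1 AU (assumed): density, temperature, field
n, T, B = 5e6, 1e5, 5e-9   # [1/m^3], [K], [T]
print(f"Debye length:        {debye_length(n, T):.1f} m")
print(f"Ion inertial length: {ion_inertial_length(n):.3g} m")
print(f"Ion Larmor radius:   {ion_larmor_radius(T, B):.3g} m")
```

The results (roughly 10 m, 10⁵ m, and 10⁴–10⁵ m respectively) fall inside the near-Earth ranges quoted in Table 1.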
2010; Ng and Reames 2008; Sokolov et al. 2009; Vainio et al. 2014), but drastic approximations are usually required in order to model the whole acceleration process, and Vlasov methods have not yet been utilised. The classic extension of hydrodynamic shocks into the MHD regime has been disproven by a number of hybrid models due to, e.g., shock reformation (Caprioli and Spitkovsky 2013; Hao et al. 2017) and anisotropic pressure and energy imbalances due to non-thermal particle populations (Chao et al. 1995; Génot 2009). Only a self-consistent treatment including kinetic effects is capable of describing diffusive shock acceleration accurately. Recent works coupling shocks and high-energy particle effects include, e.g., those by Guo and Giacalone (2013), Bykov et al. (2014), Bai et al. (2015) and van Marle et al. (2018). Challenges associated with simulating shocks include modelling gyrokinetic scales for ions whilst allowing the simulation to cover the large spatial volume involved in the particle trapping and energisation process. Radially expanding shock fronts within strong magnetic domains result in a requirement for high resolution both spatially and temporally. Modern numerical approaches usually make some sacrifices, e.g. performing 1D–2V self-consistent calculations or advancing 3D–1V semi-analytical models. The Vlasov approach is especially interesting in probing the physics of particle injection, trapping, acceleration and escape. In addition to shock acceleration, kinetic simulations of solar system plasmas are also applied to the study of solar wind turbulence. How energy cascades from large to small scales and is eventually dissipated is an outstanding question, which can be addressed using kinetic simulations (Bruno and Carbone 2013). Hybrid-Vlasov simulations (Valentini et al. 2010; Verscharen et al. 2012; Perrone et al. 
2013) have in particular been utilised to study the fluctuations around the ion inertial scale, which is of particular importance as it marks the transition between the fluid and kinetic scales. Plasma instabilities arise when a source of free energy in the plasma allows a wave mode to grow non-linearly. They are ubiquitous in our universe, and play an important role in both solar–terrestrial physics, where, for example, the Kelvin–Helmholtz instability transfers solar wind plasma into the Earth magnetosphere (e.g., Nakamura et al. 2017), and in astrophysical media, for instance in accretion disks, where the turbulence is driven by the magnetorotational instability. Vlasov models have been applied to the study of many instabilities, such as the Rayleigh–Taylor instability (Umeda and Wada 2016, 2017), Weibel-type instabilities (Inglebert et al. 2011; Ghizzo et al. 2017), and the Kelvin–Helmholtz instability (Umeda et al. 2010b). ## 3 Modelling with the Vlasov equation Simulating plasma provides numerous challenges from the modelling perspective. Constructing a perfect representation of the plasma, where every single charge carrier is factored in the equations, would require an immense amount of computational power. Spatial scales required to fully describe the plasma environment range from the microscale Debye length, up to the macroscale of the phenomena one is trying to simulate. The particle density needed is such that performing a fully kinetic simulation of even a low-density plasma self-consistently, using Maxwell’s equations for the electric and magnetic fields and Lorentz’ equation for the protons and electrons, is out of reach even to present day large supercomputers. Currently, only plasma phenomena that occur in relatively short spatial and temporal scales, such as magnetic reconnection and high frequency waves, are modelled using this approach. 
For this reason, adopting a continuous representation of the velocity space and using distribution functions as the main object to be simulated is a meritorious way of simulating plasmas. Plasmas can also be treated as a fluid, the standard way of doing so being the magnetohydrodynamic (MHD) approximation. This modelling approach is more suitable for large domain sizes where detailed kinetic information is not necessary; however, MHD does not offer information about small spatio-temporal scales. Statistical mechanics describes a neutral gas based on assumptions made at the atomic scale. The kinetic plasma approach treated in the following is based on this same principle: it describes plasmas using distribution functions in phase space, and uses Maxwell's equations and the Vlasov equation to advance the fields and the distribution functions, respectively.

### 3.1 The Vlasov equation

In plasmas, as in neutral gases, the dynamical state of every constituent particle can be described by its position ($$\mathbf {x}$$) and momentum ($$\mathbf {p}$$) (or velocity $$\mathbf {v}$$) at a given time t. It is also common to separate the different species s in a plasma (electrons, protons, helium ions, etc.). Accordingly, the dynamical state of a system of particles of species s at a given time can be described by a distribution function $$f_s(\mathbf {x},\mathbf {v},t)$$ in 6-dimensional space, also called phase space. The distribution function $$f_{s}(\mathbf {x},\mathbf {v},t)$$ represents the phase-space density of the species inside a phase-space volume element of size $$\mathrm {d}^3\mathbf {x} \, \mathrm {d}^3\mathbf {v}$$ centred on the point ($$\mathbf {x},\mathbf {v}$$) at time t.
Hence, in a system with N particles, integrating over the spatial volume $$\mathcal {V}_r$$ and the velocity volume $$\mathcal {V}_v$$ (i.e., the entire phase-space volume $$\mathcal {V}$$) one obtains \begin{aligned} N = \int _{\mathcal {V}_r}\int _{\mathcal {V}_v} f_{s}(\mathbf {x},\mathbf {v},t) \, \mathrm {d}^3\mathbf {x} \, \mathrm {d}^3\mathbf {v}. \end{aligned} (1) It is important to represent and describe the time evolution of the distribution functions under given external conditions. The Boltzmann equation, \begin{aligned} \frac{\partial f_{s}}{\partial t} + \mathbf {v} \cdot \frac{\partial f_{s}}{\partial \mathbf {x}} + \frac{\mathbf {F}}{m} \cdot \frac{\partial f_{s}}{\partial \mathbf {v}} = \left( \frac{\partial f_{s}}{\partial t} \right) _\mathrm{coll} \end{aligned} (2) uses the distribution function $$f_{s}$$ to describe the collective behaviour of a system of particles subject to collisions and external forces $$\mathbf {F}$$, where the term on the right-hand side represents the rate of change of $$f_{s}$$ due to collisions. Its derivation starts from the standard equation of motion and takes Liouville's theorem into account (see Sect. 3.3); it is therefore valid for any Hamiltonian system. In plasmas, the Lorentz force takes the role of the external force, and collisions between particles are often neglected. Under these two assumptions, one obtains the Vlasov equation (Vlasov 1961), often called "the collisionless Boltzmann equation": \begin{aligned} \frac{\partial f_s}{\partial t} + \mathbf {v} \cdot \frac{\partial f_s}{\partial \mathbf {x}} + \frac{q_s}{m_s} \left( \mathbf {E} + \mathbf {v} \times \mathbf {B} \right) \cdot \frac{\partial f_s}{\partial \mathbf {v}} = 0. \end{aligned} (3) If a significant part of the plasma acquires a high enough kinetic energy, relativistic effects start to become important.
It can be shown that the Vlasov equation (3) is Lorentz-invariant and therefore holds in such cases if v is simply considered to be the proper velocity (Thomas 2016). Only very few numerical applications related to space physics or astrophysics directly solve the relativistic Vlasov equation (as opposed to the particle-in-cell approach, which solves the same physical system through statistical sampling of particles and their propagation, compare Sect. 1), but doing so is more common in other contexts, such as laser–plasma interaction applications (Martins et al. 2010; Inglebert et al. 2011). By using a frame that propagates at relativistic speeds, a Lorentz-boosted frame, the smallest time and space scales to be resolved become larger and the plasma length shrinks due to Lorentz contraction, so that simulation execution times are reduced.

### 3.2 Closing the Vlasov equation system

In any simulation, it is necessary to couple the Vlasov equation with the field equations to form a closed set of equations. The Vlasov equation deals with the time evolution of the distribution function and uses the electromagnetic fields as input; the fields thus need to be evolved based on the updated distribution function. There are two main ways of closing the equation set: the electrostatic approach, which uses the Poisson equation to close the system, and the electromagnetic approach, which uses the Maxwell equations to that end. These are typically referred to as the Vlasov–Poisson and the Vlasov–Maxwell systems of equations. With appropriate approximations, the system can also be closed without solving the Vlasov equation for all species.

#### 3.2.1 The Vlasov–Poisson equations

The Vlasov–Poisson equations model plasma in the electrostatic limit without a magnetic field (corresponding to the assumption that $$v/c \rightarrow 0$$ for any relevant velocity v in the system). Thus Eq.
(3) takes the form \begin{aligned} \frac{\partial f_s}{\partial t} + \mathbf {v} \cdot \frac{\partial f_s}{\partial \mathbf {x}} + \frac{q_s}{m_s} \mathbf {E} \cdot \frac{\partial f_s}{\partial \mathbf {v}} = 0 \end{aligned} (4) for all species and the system is closed by the Poisson equation \begin{aligned} \nabla ^2\varPhi + \frac{\rho _q}{\epsilon _0} = 0, \end{aligned} (5) where $$\varPhi$$ is the electric potential and $$\epsilon _0$$ is the vacuum permittivity. Using Eq. (1), the total charge density $$\rho _q$$ is obtained by taking the zeroth moment of f for all species: \begin{aligned} \rho _q = \frac{1}{\mathcal {V}_r} \sum _s q_s N_s = \sum _s q_s \int _{\mathcal {V}_v} f_s(\mathbf {x},\mathbf {v},t) \, \mathrm {d}^3\mathbf {v}. \end{aligned} (6) #### 3.2.2 The Vlasov–Maxwell equations In the electromagnetic case, the Vlasov equation (3) is retained for all species and complemented by the Maxwell equations, namely the Ampère law \begin{aligned} \nabla \times \mathbf {B} = \mu _0\mathbf {j} + \frac{1}{c^2}\frac{\partial \mathbf {E}}{\partial t}, \end{aligned} (7) \begin{aligned} \nabla \times \mathbf {E} = -\frac{\partial \mathbf {B}}{\partial t}, \end{aligned} (8) and the Gauss laws \begin{aligned} \nabla \cdot \mathbf {B} = 0 \quad \mathrm {and}\quad \nabla \cdot \mathbf {E} = \frac{\rho _q}{\epsilon _0}. \end{aligned} (9) Usually in a numerical scheme only Eqs. (7) and (8) are discretised. If Eq. (9) is not satisfied by the numerical method used, numerical instabilities can occur because the underlying system needs to be divergence-free. #### 3.2.3 Hybrid-Vlasov systems The hybrid-Vlasov systems retain only the Vlasov equation for the ions, thus neglecting the electrons to a certain extent. This has the advantage that the model is not required to resolve the short temporal and spatial scales associated with electron dynamics. 
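The closures discussed in this section are built from velocity moments of the distribution function; on a grid, the low-order moments reduce to simple sums over the velocity dimension. A minimal 1D–1V sketch, assuming a drifting Maxwellian and purely illustrative parameter values:

```python
import numpy as np

# Sketch of taking velocity moments of a gridded distribution function,
# as used, e.g., to obtain the charge density in Eq. (6) and the fluid
# quantities entering the hybrid closure. Illustrative 1D-1V example
# with an assumed drifting Maxwellian (mass m = 1, arbitrary units).
nv = 400
v = np.linspace(-8.0, 8.0, nv)
dv = v[1] - v[0]
n_true, V_true, T = 2.0, 0.5, 1.0    # density, bulk speed, temperature

f = n_true / np.sqrt(2 * np.pi * T) * np.exp(-(v - V_true)**2 / (2 * T))

n = f.sum() * dv                     # zeroth moment: number density
V = (v * f).sum() * dv / n           # first moment: bulk velocity
P = ((v - V)**2 * f).sum() * dv      # second central moment: scalar pressure
print(n, V, P)                       # ~ (2.0, 0.5, 2.0), since P = n*T
```

Higher moments follow the same pattern, each one requiring the next power of v under the integral, which is the origin of the closure chain discussed below.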
Typically, the system is closed by taking moments of the Vlasov equation and making approximations pertinent to the simulation system at hand. Integrating (3) over the velocity space, one gets the continuity equation or the zeroth moment of the Vlasov equation: \begin{aligned} \frac{\partial n_s}{\partial t} + \nabla \cdot \left( n_s \mathbf {V}_s \right) = 0, \end{aligned} (10) where $$n_s$$ and $$\mathbf {V}_s$$ are the number density and the bulk velocity of species s, respectively. Multiplying (3) by the phase-space momentum $$m_s \mathbf {v}_s$$ and integrating over the velocity space, one obtains the equation of motion or the first moment of the Vlasov equation: \begin{aligned} n_s m_s \left( \frac{\partial \mathbf {V}_s}{\partial t} + (\mathbf {V}_s \cdot \nabla ) \mathbf {V}_s \right) - n_s q_s \left( \mathbf {E} + \mathbf {V}_s \times \mathbf {B} \right) + \nabla \cdot \mathcal {P}_s = 0, \end{aligned} (11) where $$\mathcal {P}_s$$ is the pressure tensor of species s, which can in turn be obtained as the second moment of $$f_s$$. This leads to a chain of equations where each step depends on the next moment of $$f_s$$. The most common closure of hybrid-Vlasov systems is taken at this level by summing the electron and ion equations of motion and neglecting terms based on the electron-to-ion mass ratio $$m_\mathrm{e}/m_\mathrm{i} \ll 1$$, leading to the generalised Ohm’s law \begin{aligned} \mathbf {E} + \mathbf {V}\times \mathbf {B} = \frac{\mathbf {j}}{\sigma } + \frac{\mathbf {j}\times \mathbf {B}}{n_e e} - \frac{\nabla \cdot \mathcal {P}_\mathrm{e}}{n_e e} + \frac{m_\mathrm{e}}{n_e e^2}\frac{\partial \mathbf {j}}{\partial t}, \end{aligned} (12) where $$\sigma$$ is the conductivity, e is the elementary charge, $$n_e$$ is the electron number density, and $$\mathbf {j}$$ is the total current density. In the limit of slow temporal variations, the rightmost electron inertia term is typically dropped from the equation. 
Further, assuming high conductivity, one is left with the Hall term $$\mathbf {j}\times \mathbf {B}/(n_e e)$$ and the electron pressure gradient term $$\nabla \cdot \mathcal {P}_\mathrm{e}/(n_e e)$$ on the right-hand side. The electron pressure term can be handled in a number of ways, such as using isobaric or isothermal assumptions. If the electrons are assumed to be completely cold, the equation can be written as the Hall MHD Ohm's law \begin{aligned} \mathbf {E} + \mathbf {V}\times \mathbf {B} = \frac{\mathbf {j}\times \mathbf {B}}{n_e e}. \end{aligned} (13) Thus the hybrid-Vlasov system of equations retains the Vlasov equation (3) for ions and the Maxwell–Ampère and Maxwell–Faraday equations (7) and (8), but replaces the Gauss equations by a generalised Ohm equation (12) with appropriate approximations. If rapid field fluctuations are excluded from the solution, the displacement current in Ampère's law can be omitted, resulting in the Darwin approximation and yielding \begin{aligned} \nabla \times \mathbf {B} = \mu _0\mathbf {j}. \end{aligned} (14) This makes the equation system more tractable. Note that, conversely, neglecting ion dynamics and retaining the electron Vlasov equation can be advantageous in certain contexts, and is formally equivalent to the above with the electron and ion variables switched.

### 3.3 Properties of the Vlasov equation

When solving the Vlasov equation, there are a number of useful properties that can be exploited in numerical solvers. In its fundamental structure, the Vlasov equation is a 6D advection equation, equivalent to a vanishing material derivative of the phase-space density. In the absence of any source terms, finding a solution at any given point in time requires determining the motion of the phase-space density. One particularly handy property follows from Liouville's theorem, from which the Vlasov equation is derived. It states that phase-space density is constant along the trajectories of the system.
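A minimal numerical illustration of this constancy, using a 1D–1V flow with analytically known characteristics (the harmonic flow is an illustrative assumption, not tied to any particular plasma configuration):

```python
import numpy as np

# Sketch of Liouville's theorem for the 1D-1V harmonic flow
# dx/dt = v, dv/dt = -x: phase-space points rotate about the origin,
# and the phase-space density carried by each point stays constant,
# so f(x(t), v(t), t) = f(x(0), v(0), 0). Illustrative setup.
rng = np.random.default_rng(1)
x0 = rng.normal(size=1000)           # sample points in phase space
v0 = rng.normal(size=1000)

def f_init(x, v):                    # initial phase-space density
    return np.exp(-((x - 1.0)**2 + v**2) / 2)

t = 0.7                              # exact characteristics of the flow
x_t = x0 * np.cos(t) + v0 * np.sin(t)
v_t = v0 * np.cos(t) - x0 * np.sin(t)

# The solution at time t, evaluated by tracing each characteristic
# backward to t = 0, equals the initial density carried forward:
f_t = f_init(x_t * np.cos(t) - v_t * np.sin(t),
             x_t * np.sin(t) + v_t * np.cos(t))
assert np.allclose(f_t, f_init(x0, v0))
```

This backward tracing of characteristics is exactly the operation that backward semi-Lagrangian solvers (Sect. 4.4) perform numerically.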
This means that a solution of the system at one point in time can be followed forward or backward in time arbitrarily, as long as the trajectories of phase-space elements, which characterise the Vlasov equation, are known. One consequence of Liouville's theorem is that initial density perturbations tend to form smaller and smaller structures as trajectories of phase-space regions with different densities converge over time. In physical reality, this so-called filamentation has a natural cutoff at scales where diffusive scattering effects become important; this cutoff, however, is not part of the pure mathematical description of the Vlasov equation. Therefore, numerical implementations need to address this issue either by explicit filtering steps or by innate numerical diffusivity. A fundamental consideration in any physical modelling is the conservation of certain quantities, such as mass, momentum, energy, and electric charge. This of course applies to Vlasov-based plasma modelling as well. Variational approaches have been used to derive the Vlasov–Maxwell and Vlasov–Poisson systems of equations as well as reduced forms (e.g., Marsden and Weinstein 1982; Ye and Morrison 1992; Brizard 2000; Brizard and Tronko 2011). When developing numerical solutions of the Vlasov equation, care has to be taken that the quantities relevant to the problem being solved are conserved adequately by the method (e.g., Filbet et al. 2001; Crouseilles et al. 2010; Cheng et al. 2013, 2014; Becerra-Sagredo et al. 2016; Einkemmer and Lubich 2018).

## 4 Numerical modelling and HPC aspects

Finding solutions to the Vlasov equation in order to model physical systems typically involves computer simulations. Therefore, the phase-space density $$f(\mathbf {x},\mathbf {v},t)$$ needs to be represented numerically, and this representation strongly influences the choice of solvers and the resulting simulation code. This section is structured around different numerical representations of f.
A problem common to all numerical approaches to solving the Vlasov equation is the curse of dimensionality: to fully reproduce all physical behaviour, the simulation domain must be 6-dimensional, with all 6 dimensions having sufficient resolution or fidelity for the physical system of interest. This considerably impacts the size of the phase space and hence the computational burden of the algorithm.

### 4.1 Eulerian approach

In a straightforward manner, the phase-space distribution function $$f(\mathbf {x},\mathbf {v},t)$$ can be discretised on a Eulerian grid, which can be operated on by different kinds of solvers (see Fig. 2a). The structure of the Vlasov equation allows both finite volume (FV) and semi-Lagrangian solvers to be employed, and all of these have been applied with some success (Arber and Vann 2002). Discretisation of velocity space to a finite grid size $$\varDelta v$$ also automatically imposes a lower limit on phase-space filamentation (compare Sect. 3.3), at which the grid will naturally smooth out the structure. In some cases this is a purely numerically diffusive process, whereas other approaches use explicit smoothing, filtering or subgrid modelling (e.g., Klimas 1987; Klimas and Farrell 1994). The 6-dimensional structure of the phase space, along with the physical scales and resolutions imposed by the underlying physical system (compare Sect. 2), makes a Eulerian representation on a Cartesian grid computationally impractical in the vast majority of cases concerning a large simulation volume. Let us consider as an example a simulation of the Earth's entire magnetosphere using a full 3D–3V, Eulerian hybrid-Vlasov model.
Let it extend up to the lunar orbit ($$x \sim \pm \,60 \, R_E$$ in every direction) resolving approximately the solar wind ion inertial length ($$\varDelta x \sim 100$$ km), and let the velocity space encompass typical solar wind velocities with some safety margin ($$v \sim \pm \,2000$$ km/s) while resolving the solar wind thermal speed ($$\varDelta v \sim 30$$ km/s). In this case the resulting phase space would contain a total of $$10^{18}$$ cells. If each of them were represented by a single-precision floating point value, a minimum of 4 EiB of memory would be required to represent it! Fortunately, there are many possibilities for reducing the size of the computational grid:

• Reduction of phase-space dimensionality, if the physical system under consideration allows it, is an easy and efficient way to reduce the computational load. Simulations of longitudinal wave instabilities (Jenab and Kourakis 2014; Shoucri 2008) and fundamental studies of filamentation have been performed in a 1D–1V setup, whereas laser wakefield and wave interaction simulations tend to be modelled in 2D–2V or 2D–3V setups (Besse et al. 2007; Sircombe et al. 2004; Thomas 2016). Another possibility is to globally reduce the number of grid points by introducing a grid that is uneven with respect to Cartesian coordinates while remaining static during runtime (Kormann and Sonnendrücker 2016; Guo and Cheng 2016); this is referred to as a sparse grid representation.

• Gyrokinetic simulations reduce the velocity space by dropping the azimuthal velocity dimensions perpendicular to the magnetic field, thus assuming complete gyrotropy of the distribution functions (e.g., Görler et al. 2011).

• Adaptively refined grids can be employed to reduce resolution, and thus computational expense, in areas of phase space that are considered less important for the physical problem at hand (Wettervik et al. 2017; Besse et al. 2008).
• In many physical scenarios, large parts of phase space contain an extremely low, if not zero, density, and contribute nothing to the overall dynamic development. Suitable pruning of the phase-space grid can thus be performed to obtain a data structure that dynamically removes grid elements during runtime and keeps them only in regions deemed relevant for the physical system dynamics. The computational speed-up gained through this reduction of phase-space volume can in some cases be a tradeoff against physical accuracy, and needs to be carefully considered. We have implemented this option in Vlasiator, and call it the dynamic sparse phase space representation, discussed further in Sect. 5.2. This method is not to be confused with the static sparse grid methods (Kormann and Sonnendrücker 2016; Guo and Cheng 2016), which are fundamentally dimension reduction techniques, similar to the low-rank approximations.

In plasmas, the magnetic field $$\mathbf {B}$$ makes the particles gyrate while the electric field $$\mathbf {E}$$ causes them to accelerate and drift. It can be advantageous to take the characteristics of acceleration due to Lorentz' force into consideration when choosing an appropriate grid for the numerical phase-space representation. Common ideas include:

• A polar velocity coordinate system aligned with the magnetic field and centred around the drift velocity, $$\mathbf {v} = ( v_\parallel ,\, v_r,\, v_\phi ),$$ in which the gyrophase coordinate $$v_\phi$$ has a much lower resolution than $$v_\parallel$$ and $$v_r$$. This can be employed in cases where the velocity distribution functions are known not to deviate strongly from gyrotropy, i.e., to exhibit cylindrical symmetry with respect to the magnetic field direction. However, the disadvantage of a polar velocity space over a Cartesian one is the more complex coordinate transformation required for transport into the spatial neighbours.
• A Cartesian representation of velocity space, in which the coordinate axes co-rotate with the local gyration in every given spatial cell. Such a setup has the advantage that no transformation of velocity space due to the magnetic field has to be performed at all, and no numerical diffusion due to the gyration motion occurs. It does, however, come at the cost of more complicated spatial updates, since neighbouring spatial domains no longer have identical velocity space axes. A suitable interpolation or reconstruction has to take place in the spatial transport, potentially negating the advantage in numerical diffusivity.

For the actual process of solving the Vlasov equation, a fundamental decision has to be made about the structure of the code: whether the solution steps are to be performed in a proper 6D manner (e.g., Vogman 2016), or whether a Strang-splitting approach is taken (Strang 1968; Cheng and Knorr 1976; Mangeney et al. 2002), in which the position and velocity space solution steps are performed independently of each other. Due to the large number of dimensions involved, and thus the computational cost of each solver step, the latter approach tends to have significant performance benefits, whilst still achieving convergence (Einkemmer and Ostermann 2014). Alternative time-splitting methods based on Hamiltonians have also been proposed (e.g., Crouseilles et al. 2015; Casas et al. 2017). If a Cartesian velocity grid is employed in the simulation, multiple families of solvers are available for it (Filbet and Sonnendrücker 2003b). In all cases, the diffusivity of the solver needs to be considered. Uncontrolled velocity space diffusion in particular manifests itself as numerical heating, as the distribution function tends to broaden over time. Higher-order solvers and reconstruction methods, as well as explicit formulations in which moments of the distribution function are conserved, are therefore advisable (Balsara 2017).
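As a concrete illustration of such a splitting, the following sketch advances a 1D–1V distribution with the Cheng–Knorr sequence (half spatial step, full acceleration step, half spatial step). Each shear is performed exactly via Fourier phase shifts, and a prescribed harmonic field E = -x is assumed so that the exact solution is a phase-space rotation with period 2π; the grid, field, and step sizes are illustrative, not those of any production code:

```python
import numpy as np

# Strang-split (time-split) Vlasov update: alternating exact shears in
# x and v on a periodic 1D-1V grid, with assumed field E = -x and unit
# charge-to-mass ratio, so a(x) = -x and blobs rotate with period 2*pi.
nx = nv = 64
x = np.linspace(-np.pi, np.pi, nx, endpoint=False)
v = np.linspace(-np.pi, np.pi, nv, endpoint=False)
kx = np.fft.fftfreq(nx, d=2 * np.pi / nx) * 2 * np.pi
kv = np.fft.fftfreq(nv, d=2 * np.pi / nv) * 2 * np.pi
X, V = np.meshgrid(x, v, indexing="ij")

def translate(f, dt):
    """f(x, v) <- f(x - v*dt, v), exact via a Fourier phase shift."""
    F = np.fft.fft(f, axis=0)
    return np.real(np.fft.ifft(F * np.exp(-1j * kx[:, None] * v[None, :] * dt), axis=0))

def accelerate(f, dt):
    """f(x, v) <- f(x, v - a(x)*dt) with the assumed a(x) = -x."""
    a = -x
    F = np.fft.fft(f, axis=1)
    return np.real(np.fft.ifft(F * np.exp(-1j * kv[None, :] * a[:, None] * dt), axis=1))

f0 = np.exp(-((X - 1.0)**2 + V**2) / (2 * 0.5**2))    # Gaussian blob
f = f0.copy()
steps, dt = 200, 2 * np.pi / 200
for _ in range(steps):                                # Strang: X/2, V, X/2
    f = translate(f, dt / 2)
    f = accelerate(f, dt)
    f = translate(f, dt / 2)
# After one rotation period the blob returns close to its initial state,
# and the total phase-space mass is conserved exactly by the shifts.
```

The second-order splitting error shows up only as a small residual displacement after the full period, while the k = 0 Fourier mode, and hence the mass, is untouched by every shear.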
The choice of a Eulerian representation of phase space brings certain implementation details for high performance computing (HPC) aspects with it. The relative ease of spatial decomposition into independent computational domains, which communicate through ghost cells, can be employed readily for Eulerian Vlasov simulations, providing a straightforward path to parallel implementations. On the other hand, the inherent limitations of Eulerian schemes (such as conditions for time steps, compare Sect. 5.3) limit their overall numerical efficiency, and the high-dimensional nature of phase space can lead to challenges in appropriately represented and resolved numerical grids. As so often in HPC, design decisions have to be based on the specific properties of the physical system under investigation. #### 4.1.1 Finite volume solvers As the Vlasov equation (3) is fundamentally a hyperbolic conservation law in 6D, it can be solved using the well-established methods of Finite Volumes (FV, LeVeque 2002). In this approach, the phase-space fluxes are calculated through each interface of a discrete simulation volume (or phase-space cell) by reconstructing the phase-space density distribution through an appropriate interpolation scheme. The characteristic velocities at both sides of this interface are determined and the Riemann problem (Toro 2014) at each of these interfaces is solved to update the material content in each cell. 
If Strang splitting is used to perform separate spatial and velocity-space updates, it is noteworthy that the state vector only contains a single scalar quantity (the phase-space density) and each cell interface update only needs to take a single characteristic velocity into consideration: for the update in a spatial direction, the characteristic is given by the corresponding cell's velocity space coordinates, whereas in the velocity space update step, the acceleration due to magnetic, electric and external field forces is homogeneous throughout each spatial cell. The Riemann problem for the Vlasov update therefore does not require the solution or approximation of an eigenvalue problem, which significantly simplifies its solution in comparison to hydrodynamic or MHD FV solvers. This property also enables the efficient use of semi-Lagrangian solvers (discussed further in Sect. 4.4). As will be shown in Sect. 5, versions of the Vlasiator code until 2014 employed a FV formulation of the phase space update (von Alfthan et al. 2014), and numerous other implementations exist (Banks and Hittinger 2010; Wettervik et al. 2017). A comprehensive introduction to the implementation, and a thorough analysis of the behaviour, of a fully 6D FV Vlasov solver is given by Vogman (2016).

#### 4.1.2 Finite difference solvers

While the Vlasov equation (3) could in principle be solved by directly employing finite difference methods, this approach does not seem to be favoured, and its applications in the literature appear to be limited to fundamental theory studies (e.g., Schaeffer 1998; Holloway 1995). The biggest issue with finite difference formulations is the lack of explicit conservation of the moments of the distribution function and related quantities. While high-order methods can still maintain suitable approximate conservation properties, their computational demands and/or diffusivity make them impractical.
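The single-characteristic property of Sect. 4.1.1 reduces the interface Riemann problem to simple upwinding. A minimal sketch of one split spatial update as a first-order, conservative donor-cell scheme (parameters illustrative; production codes use higher-order reconstructions to limit diffusion):

```python
import numpy as np

# First-order finite-volume (donor-cell) update for one split spatial
# direction of a Vlasov solve: a single known characteristic speed u
# per velocity cell, upwind interface fluxes, periodic 1D grid.
nx = 200
dx = 1.0 / nx
x = (np.arange(nx) + 0.5) * dx                # cell centres

def upwind_step(f, u, dt):
    """Conservative update f_i -= dt/dx * (F_{i+1/2} - F_{i-1/2})."""
    if u >= 0:
        flux = u * f                          # F_{i+1/2}: left (upwind) cell
    else:
        flux = u * np.roll(f, -1)             # ... right cell for u < 0
    return f - dt / dx * (flux - np.roll(flux, 1))

f = np.exp(-((x - 0.3) / 0.05)**2)            # initial profile
u = 1.0
dt = 0.5 * dx / u                             # CFL number 0.5
mass0 = f.sum() * dx
for _ in range(nx):                           # advect by half a domain length
    f = upwind_step(f, u, dt)
# The profile is transported to x ~ 0.8 (with first-order numerical
# diffusion broadening it); the total mass is conserved to roundoff.
```

The telescoping of interface fluxes makes the mass conservation exact by construction, which is precisely the property finite difference formulations lack.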
### 4.2 Spectral solvers Instead of a direct discretisation of the phase-space density $$f(\mathbf {x},\mathbf {v}, t)$$ in its $$\mathbf {x}$$ and $$\mathbf {v}$$ coordinates, a change of basis functions can be employed, each coming with benefits and limitations. The transformation of f into a different basis can be performed in the velocity coordinates only (cf. Fig. 2b), or in both spatial and velocity coordinates, depending on the physical application. If a splitting scheme is employed, where velocity and real space advection updates are treated separately, the advection in a Fourier-transformed coordinate can be completely performed in Fourier space, as the transform of any coordinate $$x \rightarrow k_x$$ results in the differential advection operator $$v_x \,\nabla _x$$ turning into a simple multiplication: \begin{aligned} v_x \, \nabla _x f\left( x\right) \xrightarrow {\mathrm {\small {Fourier}}} \mathrm {i}\, v_x k_x \, \tilde{f}\left( k_x\right) . \end{aligned} (15) However, for the acceleration update, this transformation brings in the additional complication that the acceleration $$\mathbf {a}$$ would have to be independent of $$\mathbf {v}$$, which is true for the electrostatic Vlasov–Poisson system, but incorrect in full Vlasov–Maxwell scenarios, due to the v-dependence of the Lorentz force. In order to accommodate velocity-dependent acceleration, solving a system in such a way typically requires multiple forward and backward Fourier transforms within one time step (Klimas and Farrell 1994). The limit of filamentation in a thus-represented velocity space becomes the question of which maximum velocity space frequency $$\mathbf {k}_{v,\text {max}}$$ is available, and the filamentation problem itself becomes a boundary problem at the maximum extents of velocity $$\mathbf {k}$$-space (Eliasson 2011). However, stability issues of this scheme remain under discussion (Figua et al. 2000; Klimas et al. 2017). 
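The spectral transport step of Eq. (15) amounts to a single multiplication per mode: after transforming x → k_x, one advection step over dt is applied as a phase factor, making the update exact for periodic data. A minimal 1D sketch with illustrative parameters:

```python
import numpy as np

# Spectral advection step of Eq. (15): in Fourier space the operator
# v_x * d/dx becomes multiplication by i*v_x*k_x, so advancing by dt
# multiplies each mode by exp(-i*v_x*k_x*dt). Periodic 1D example.
nx = 128
L = 2 * np.pi
x = np.linspace(0.0, L, nx, endpoint=False)
kxs = np.fft.fftfreq(nx, d=L / nx) * 2 * np.pi

def advect_spectral(f, vx, dt):
    return np.real(np.fft.ifft(np.fft.fft(f) * np.exp(-1j * vx * kxs * dt)))

f = np.exp(np.cos(x))                   # smooth periodic profile
vx, dt = 0.7, 0.5
g = advect_spectral(f, vx, dt)          # f(x - vx*dt) to spectral accuracy
```

For smooth periodic data the result matches the analytically shifted profile to machine precision, which is why such schemes are attractive wherever the boundary conditions permit them.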
Finally, a full Fourier-space representation of $$\tilde{f}\left( \mathbf {k}_r,\mathbf {k}_v, t\right)$$, in which the real space coordinate $$\mathbf {x} \rightarrow \mathbf {k}_r$$ is also transformed, is a possibility; however, it further complicates the treatment of configuration and velocity space boundaries (Eliasson 2001). When used with periodic spatial boundary conditions, such a setup can be quite efficient for the study of kinetic plasma wave interactions. Apart from the Fourier basis, other orthogonal function systems can be used as the basis for the description of phase-space densities. A popular choice is presented by Hermite functions (Delzanno 2015; Camporeale et al. 2016), whose $$L^2$$ convergence behaviour closely matches that of physical velocity distribution functions, and whose property of being eigenfunctions of the Fourier transform can be used to numerical advantage. Since the zeroth Hermite function $$H_0(\mathbf {v})$$ is simply a Maxwellian particle distribution, a hybrid-Vlasov code with Hermitian basis functions should replicate MHD behaviour in this limit. Adaptive inclusion of higher-order Hermite functions then allows an increasing amount of kinetic physics to be numerically represented. A common problem of any kind of spectral method, be it Fourier-based or using any other choice of nonlocalised basis functions, is the formulation of boundary conditions. While microphysical simulations of wave or scattering behaviour can usually get away with periodic boundary conditions, macroscopic systems require boundaries at which the interaction of plasma with solid or gaseous matter is to be modelled (such as planetary or stellar surfaces), as well as inflow and outflow boundaries. Due to the unavailability of suitable spectral formulations for these boundaries, spectral-domain solvers have not gained a foothold in modelling efforts of macroscopic astrophysical systems.
In any nonlocal choice of basis function for the phase-space representation, be it Fourier-, Hermite- or wavelet-based (Besse et al. 2008), extra thought has to be put into the scalability of parallel solvers. If a change of basis function (such as a switch from a real-space to a Fourier-space representation) is required as part of the simulation update step, this will typically not scale beyond hundreds of cores in supercomputing environments.

### 4.3 Tensor train

An entirely separate class of numerical representations for the phase-space density is provided by the tensor train formalism (Kormann 2015) illustrated in Fig. 2c. The idea behind this approach is inspired by Strang-splitting solvers, in which spatial and velocity dimensions are treated in individual and subsequent solver steps. The overall distribution function $$f(x_1,x_2,\ldots ,x_n)$$ is represented as a tensor product of component basis functions, \begin{aligned} f(x_1,x_2,\ldots ,x_n) = \prod _{k=1}^n f_k(x_k) \end{aligned} (16) in which each $$f_k(x_k)$$ is only dependent on a single coordinate $$x_k$$, and thus only affected by a single dimension's update step. The generalised formulation is called the Tensor Train of ranks $$r_1\ldots r_n$$ (compare Fig. 2c), \begin{aligned} f(x_1,x_2,\ldots ,x_n) = \sum _{\alpha _1=1}^{r_1} \cdots \sum _{\alpha _n=1}^{r_n} \prod _{k=1}^n Q_k(\alpha _{k-1}, x_k, \alpha _k), \end{aligned} (17) in which the distribution function is entirely formulated in terms of sums of products of the $$Q_k$$, which themselves only depend on a single coordinate $$x_k$$. Transport can be performed by individually updating each $$Q_k$$, followed by a rounding step to keep the tensor train compact. While this approach has so far only been employed in low-dimensional settings and for feasibility studies, and attempts at large numerical simulations using tensor train models have not yet been performed, efforts to integrate them into existing codebases are underway.
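The storage saving behind the product form of Eq. (16) can be illustrated in 1D–1V, where a spatially uniform drifting Maxwellian factorises exactly and nx·nv grid values collapse to nx + nv factor values (sizes and parameters are illustrative; Eq. (17) generalises this rank-1 case to higher ranks):

```python
import numpy as np

# Rank-1 (separable) representation of f(x, v) = g(x) * h(v), the
# simplest instance of the tensor-train idea: a spatially uniform
# drifting Maxwellian factorises exactly. Illustrative sizes.
nx, nv = 512, 512
x = np.linspace(0.0, 1.0, nx)
v = np.linspace(-6.0, 6.0, nv)

g = np.ones(nx)                                     # uniform spatial factor
h = np.exp(-(v - 1.0)**2 / 2) / np.sqrt(2 * np.pi)  # Maxwellian, drift 1.0

f_full = np.outer(g, h)              # dense phase-space grid: nx*nv values
stored = g.size + h.size             # factorised storage: nx+nv values
print(stored, f_full.size)           # 1024 vs 262144
```

Deviations from this separable form (e.g., space-dependent temperature) raise the required ranks, which is why the rounding step mentioned above is needed to keep the representation compact during transport.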
### 4.4 Semi-Lagrangian and fully Lagrangian solvers As a consequence of Liouville’s theorem (cf. Sect. 3.3), numerical solutions of the Vlasov equation can be elegantly formulated in Lagrangian and semi-Lagrangian ways, by following the characteristics in phase space. Since the spatial velocity of any point in phase space is simply given by its velocity space coordinates, and its acceleration due to Lorentz’ force is provided by the local electromagnetic field quantities, a unique characteristic for each point in phase space is easily obtained in a simulation (cf. Sect. 4.1.1). As the simulation progresses, the distribution of these sample points will shift, maintaining their initial phase-space density values, and the volumes in between them obtain phase-space density values through interpolation. If necessary, new samples can be created, or existing ones merged, where filamentation requires it. In essence, fully Lagrangian simulation codes (Kazeminezhad et al. 2003; Nunn 2005; Jenab and Kourakis 2014) track the motion of samples of density through phase space, stepping forward in time, resulting in an updated phase-space distribution. This is illustrated in Fig. 3a. Sometimes, these methods are referred to as Lagrangian particle methods, as each phase-space sample can be modelled as a macroparticle. Much more common than the fully Lagrangian formulation of Vlasov solvers is the family of semi-Lagrangian solvers (Sonnendrücker et al. 1999). In these, the phase-space samples to be propagated are obtained at every time step from a Eulerian description of phase space, their transport along the characteristics is calculated within the time step, and the resulting updated phase-space density is sampled back into a Eulerian grid (which can be either structured or unstructured, see Besse and Sonnendrücker 2003). This process can be performed either forwards in time (Crouseilles et al. 2009, see Fig. 
3b), in which the source grid points are scattered into the target locations, or backwards in time (Sonnendrücker et al. 1999; Pfau-Kempf 2016), where each target grid point performs a gather operation, spatially interpolating within the state of the previous time step (Fig. 3c). Backwards semi-Lagrangian methods are sometimes also referred to as Flux Balance Methods (see Filbet et al. 2001). Either way, an interpolation step is involved which may again lead to significant numerical diffusion, unless methods are used to minimise it. Some of the more common interpolation procedures used are cubic splines and Hermite reconstruction, because they produce smooth results with reasonable accuracy and are less dissipative than other methods using continuous interpolations (Filbet and Sonnendrücker 2003a). Lagrange interpolation methods produce more accurate results but require higher-order polynomials and large stencils to limit diffusion. The high-order discontinuous Galerkin method for spatial discretisation, along with a semi-Lagrangian time stepping method, has also been used in Vlasov–Poisson systems, providing an improvement in accuracy compared to previously used techniques (Rossmanith and Seal 2011). The flexibility of combining different approaches is also seen in a recent particle-based semi-Lagrangian method for solving the Vlasov–Poisson equation (Cottet 2018).

Cheng and Knorr (1976) were the first authors to employ semi-Lagrangian updates of a Vlasov–Poisson problem in a Strang-splitting setup, which they refer to as the time-splitting scheme, in which advection in space and acceleration in velocity space are treated independently. Mangeney et al. (2002) later formulated a Strang-splitting scheme for the Vlasov–Maxwell equation. As such a splitting scheme performs acceleration and translation steps separately, the phase-space trajectory of any simulation point approximates its physical behaviour in a staircase-like, dimension-by-dimension pattern.
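As an illustration of the backward (gather) variant, here is a minimal one-dimensional semi-Lagrangian advection step with linear interpolation on a periodic grid; this is a toy sketch under those assumptions, whereas production codes use the higher-order reconstructions discussed above:

```python
import numpy as np

# Minimal backward semi-Lagrangian step for 1-D advection
# df/dt + u df/dx = 0 on a periodic grid (linear interpolation only;
# cubic splines or Hermite reconstruction would reduce diffusion).
def semi_lagrangian_step(f, u, dt, dx):
    n = f.size
    # Trace each target grid point back along its characteristic ...
    x_dep = (np.arange(n) - u * dt / dx) % n
    i0 = np.floor(x_dep).astype(int)
    w = x_dep - i0
    # ... and gather by interpolating in the previous time step's state.
    return (1.0 - w) * f[i0 % n] + w * f[(i0 + 1) % n]

f = np.zeros(16); f[4] = 1.0
# Displacement of exactly one cell: the profile shifts without diffusion.
f_new = semi_lagrangian_step(f, u=1.0, dt=0.5, dx=0.5)
assert f_new[5] == 1.0
```

Note that, unlike Eulerian flux schemes, the time step here is not limited by a CFL condition on `u`; the displacement may span several cells.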
### 4.5 Field solvers

The Vlasov equation does not stand alone in describing the physical system in consideration, but requires a further prescription of the fields introducing the force terms. In the vast majority of cases in computational astrophysics, these will be electromagnetic forces self-consistently produced through the motion of charged particles in plasma, although there have been examples of Vlasov-gravity simulations (Guo and Li 2008), in which the Poisson equation was solved based on the simulation’s mass distribution. Also in a few cases, the fields affecting phase-space distributions are considered entirely an external simulation input, with no feedback from the phase-space density onto the fields (Palmroth et al. 2013), which can be called “test-Vlasov” simulations, in analogy to test-particle runs. These are particularly useful as test cases before the fully operational code can be launched.

A key requirement for any field solver is to preserve the solenoidality of the magnetic field $$\mathbf {B}$$ expressed by Eq. (9). There are two main avenues used to achieve this goal (e.g., Tóth 2000; Balsara and Kim 2004; Zhang and Feng 2016, and references therein). Either the field reconstruction is divergence-free by design, such as the one used in Vlasiator (see Sect. 5.3), or a procedure is needed to periodically clean up the divergence of $$\mathbf {B}$$ arising from numerical errors. In the following sections, different solvers for electromagnetic fields (and their simplifications) will be discussed in relation to astrophysical Vlasov simulations. These are fundamentally very similar in structure to the field solvers used in other simulation methods, such as PIC and MHD, and can in many cases be adapted directly from these with little change.
#### 4.5.1 Electrostatic solvers

If modelling a physical system in which the interaction of plasma with magnetic fields is of little importance (such as electrostatic wave instabilities, dusty plasmas, surface interactions (Chane-Yook et al. 2006) and other typically local phenomena), the magnetic force ($$q \mathbf {v} \times \mathbf {B}$$) part of the Vlasov equation can be neglected, and a purely electrostatic system remains. Neglecting the effects of $$\mathbf {B}$$ completely leads to the Vlasov–Poisson system of equations (4) and (5), for which the field solver needs to find a solution to the Poisson equation at every time step. Being an elliptic differential equation that is solved instantaneously in time, no time step limit arises from the field solver itself. Typically, solvers use approximate iterative approaches, multigrid methods or Fourier-space solutions (Birdsall and Langdon 2004). Another option, if an initial solution for the electric field has been found (or happens to be trivial), is to update it in time by using Ampère’s law in the absence of $$\mathbf {B}$$,

\begin{aligned} \frac{\partial \mathbf {E}}{\partial t} = -\, \frac{\mathbf {J}}{\epsilon _0} \end{aligned}

(18)

in either an explicit finite-difference manner, or using more advanced implicit formulations (Cheng et al. 2014). Special care should however be taken to prevent the violation of Gauss’ law [cf. Eq. (9)] by using appropriate numerical methods.

#### 4.5.2 Full electromagnetic solvers

If the full plasma microphysics of both electrons and ions is to be considered, and particularly if radio wave or synchrotron emissions are intended outcomes of the system, one must use the full set of Maxwell’s equations. A popular and well-established family of electromagnetic field solvers is the finite-difference time-domain (FDTD) approach, which has a longstanding history in electrical engineering applications.
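A minimal sketch of the Fourier-space solution of the electrostatic Poisson equation mentioned in Sect. 4.5.1, assuming a periodic one-dimensional domain and normalised units with $$\epsilon _0 = 1$$ (all parameter values are illustrative):

```python
import numpy as np

# Fourier-space solution of the periodic 1-D Poisson equation
# d^2 phi / dx^2 = -rho (normalised units, eps0 = 1), one of the
# standard approaches for Vlasov-Poisson field solvers.
def poisson_fft(rho, L):
    n = rho.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    rho_hat = np.fft.fft(rho)
    phi_hat = np.zeros_like(rho_hat)
    nz = k != 0                      # the k = 0 mode only fixes the gauge
    phi_hat[nz] = rho_hat[nz] / k[nz]**2
    return np.fft.ifft(phi_hat).real

x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
phi = poisson_fft(np.sin(x), L=2.0 * np.pi)
assert np.allclose(phi, np.sin(x), atol=1e-10)   # exact for a single mode
```

As noted above, such spectral solves carry no time step limit of their own, but require periodic boundaries and global communication in parallel settings.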
In formulating the finite differences for the $$\partial \mathbf {E} / \partial t \sim \nabla \times \mathbf {B}$$ and $$\partial \mathbf {B} / \partial t \sim \nabla \times \mathbf {E}$$ terms of Maxwell’s equations, it is often advantageous to use a staggered-grid approach, in which the electric field and magnetic field grids are offset from one another by half a grid spacing in every direction (Yee 1966). In this setup, every component of the electric field vector is surrounded by magnetic field components and vice versa, so that the finite difference evaluation of the curl can be performed without any need for interpolation.

Care should be taken when employing FDTD solvers for studies of wave propagation at high frequencies or wave numbers, as the numerical dispersion relations of waves deviate from their physical counterparts at high $$\mathbf {k}$$. This effect is anisotropic in nature and most strongly pronounced for diagonal propagation, due to the intrinsic differences in the manner by which grid-aligned and non-grid-aligned computations are handled. Kärkkäinen et al. (2006) and Vay et al. (2011) present a thorough analysis of this problem in the case of PIC simulations, and provide suggestions for mitigating its effects.

The largest disadvantage of FDTD solvers is their stringent requirement to resolve the propagation of fields at the speed of light, thus leading to extremely short time step lengths. In order to simulate anything at non-microscopic timescales, other methods need to be used. Fourier-space solvers of Maxwell’s equations are advantageous in this respect, as they do not come with fundamental time step limitations. This is offset by the fact that their parallelisation is more difficult, and the formulation of appropriate boundary conditions is not always possible (cf. Sect. 4.2).
#### 4.5.3 Hybrid solvers

If large-scale phenomena with timescales much larger than the local light crossing time are being investigated, FDTD Maxwell solvers quickly lose their appeal. If magnetic field phenomena are still to be considered self-consistently in the simulation, appropriate modifications of the electrodynamic behaviour have to be made, so that their simulation with longer time steps becomes feasible. One common way to remove the speed of light as a limiting factor is to eliminate the electromagnetic mode as a solution of Maxwell’s equations altogether, in a process called the Darwin approximation (see Sect. 3.2.3 and Schmitz and Grauer 2006; Bauer and Kunze 2005). In this process, the electric field is decomposed into its longitudinal and transverse components $$\mathbf {E} = \mathbf {E}_L + \mathbf {E}_T$$, with $$\nabla \times \mathbf {E}_L = 0$$ and $$\nabla \cdot \mathbf {E}_T = 0$$. Only $$\mathbf {E}_L$$ is allowed to participate in the temporal update of $$\mathbf {B}$$, so that the electromagnetic mode drops out of the simulated physical system. As a result, the fastest remaining wave mode in the system becomes the Alfvén wave, and the maximum time step rises significantly, by a factor of $$c/v_A$$.

Approximating the full set of Maxwell’s equations comes at the cost of no longer having a closed set of equations. As already shown in Sect. 3.2.3, the system is typically closed by providing a relation between the electric and magnetic field such as Eq. (12), called Ohm’s law. The level of complexity of Ohm’s law directly influences the simulation results, as it immediately affects the kinetic physics described by the model.

### 4.6 Coupling schemes

A reduction of the computational burden of a model can be achieved by coupling different schemes in order to focus the use of the costlier kinetic model on the region(s) of interest while solving other parts of the system with less intensive algorithms.
This is also a means of extending the simulation domain, where one system is taken as the boundary condition of the other. Various classes of coupled models exist, depending on the coupling interface chosen.

One strategy is to define a spatial region of interest in which the expensive kinetic model is applied, embedded in a wider domain covered by a significantly cheaper fluid model. While the method is under investigation and has been tested on classic small problems (Rieke et al. 2015), it has not been applied in the context of large-scale astrophysical simulations yet. However, this type of coupling is being used successfully in the case of fluid–PIC coupling (e.g., Tóth et al. 2016; Chen et al. 2017) and also in reconnection studies (e.g., Usami et al. 2013). The disadvantage of this strategy is that scale coupling cannot be addressed, as the kinetic effects do not spread into the fluid regime, and smaller-scale physics can only affect the solution in the region where the kinetic physics is in force.

Another strategy consists in defining the regions of interest in velocity space, that is, coupling a fluid scheme describing the large-scale behaviour of a system with a Vlasov model handling the suprathermal populations that introduce kinetic effects into the model. Again, this is a recent development for which a certain amount of theoretical work and testing on small cases has been done (e.g., Tronci et al. 2014) but not yet extended to larger-scale applications.

### 4.7 Computational burden of Vlasov simulations

Representing numerically the complete velocity phase space of a kinetic plasma system including all required physical processes is computationally intensive, and a large amount of data needs to be stored and processed. Different possible representations of the phase-space distribution and solution methods, and their expected scaling, shall be given in this section.
Computational requirements for equivalent PIC simulations are estimated in comparison, although due to their different tuneable parameters, a rigorous comparison is difficult and beyond the scope of this work.

As shown in Sect. 4.1, a blunt Eulerian discretisation of a magnetospheric simulation without any velocity space sparsity results in $$10^{18}$$ sample points or a minimum of 4 EiB memory requirement, which is unrealistic on current and next-generation architectures. A first approach is to reduce the dimensionality from a full 3D space to a 2D slice, which results in a reduction of the number of sample points of the order of $$10^4$$. Obviously a further reduction to 1D yields a similar gain. With a sparse velocity space strategy as used in Vlasiator (see Sect. 5.2 below) a further reduction of the number of sample points by a factor of $$10^2$$–$$10^3$$ can be achieved. Typically, modern large-scale kinetic simulations both with Vlasov-based methods (e.g., Palmroth et al. 2017) and PIC methods (e.g., Daughton et al. 2011) reach an order of magnitude of $$10^{11}$$–$$10^{12}$$ sample points.

Beyond the number of sample points to be treated, the length of the propagation time step relative to the total simulation time aimed for is a crucial component of the computational burden of a model. Certain classes of solvers are limited in that respect, as they cannot allow a signal to propagate more than one sampling interval or discretisation cell per time step (see Sect. 5.3). With respect to hybrid models using the Darwin approximation, the inclusion of electromagnetic (light) waves in the model description results in a reduction of the allowable time step by a factor of $$10^3$$ or more. Eulerian solvers typically have similar limitations, which can impact the time step by a factor of approximately $$10^2$$ due to the Larmor motion in velocity space in the presence of a magnetic field.
Subcycling strategies and the use of Lagrangian algorithms are common approaches to alleviate these issues, however at the potential cost of some physical detail. A choice of basis function for the representation of velocity space other than Eulerian grids (like spectral or Hermite bases) can in many cases be beneficial to limit the memory requirements for reasonable approximations of the velocity space morphology. Care must however be taken that non-local transformations from one basis to another, such as a Fourier transform, tend to have unfavourable scaling behaviour in massively parallel implementations. Tensor-train formulations appear to be a promising avenue for the representation of phase-space densities, with suitable computational properties, but large-scale space plasma applications have not been demonstrated yet.

Higher computational requirements are expected if the physics of multiple particle species (especially electrons) is essential for the system under investigation. The need to represent multiple separate distribution functions multiplies the memory and computation requirements. The relative mass ratios of these species also have an effect on the kinetic time and length scales that need to be resolved. Going from a purely proton-based hybrid-Vlasov to a “full-Vlasov” simulation, in which electrons are included as a kinetic species, shortens the Larmor times by a factor of $$m_p / m_e = 1836$$ and, depending on the employed solver, may require resolution of the plasma’s Debye length. The latter factor means that, with respect to the reference hybrid simulation considered above, which approximately resolves the ion kinetic scales, a spatial resolution increase of the rough order of $$10^5$$ would be required (see Table 1), amounting to a staggering $$10^{15}$$ more sampling points.
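The estimates above can be tallied in a few lines; the assumption of one 4-byte (single-precision) value per phase-space sample is ours, for illustration:

```python
# Back-of-envelope tally of the scaling estimates quoted in the text,
# assuming one 4-byte (single-precision) value per phase-space sample.
samples_3d3v = 10**18                  # blunt Eulerian 3D-3V discretisation
total_bytes = 4 * samples_3d3v
print(total_bytes / 2**60)             # ~3.5 EiB, i.e. of order 4 EiB

samples_2d3v = samples_3d3v // 10**4   # reduction: 3D space -> 2D slice
samples_sparse = samples_2d3v // 10**3 # reduction: sparse velocity space
print(samples_sparse)                  # 10**11, matching modern large runs

# Resolving the Debye length instead of ion scales: ~10**5 finer
# resolution in each of the three spatial dimensions.
print((10**5)**3)                      # 10**15 times more sampling points
```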
In order to reduce this considerable overhead, a common approach is to rescale physical quantities such as the electron-to-proton mass ratio and/or the speed of light (e.g., Hockney and Eastwood 1988), while hoping to maintain quantitatively correct kinetic physics behaviour.

Most of these scaling relations likewise apply in PIC. In these, however, the parameter most strongly affecting the computational burden of the phase space representation is the particle count. As a rule of thumb, a PIC simulation with a particle count similar to the sampling point count of an equivalent Vlasov simulation will have a similar overall computational cost. For many physical scenarios, this particle count can be chosen to be significantly lower (on the order of 100–1000 particles/cell), especially if noisy representations of the velocity spaces are acceptable. In simulations with high dynamic density contrasts, in which certain simulation regions deplete of particles, as well as setups in which a minimisation of sampling noise is essential (such as investigations of nonlinear wave phenomena), PIC and Vlasov simulations are expected to reach a break-even point.

### 4.8 Achievements in Vlasov-based modelling

The progress of available scientific computing capabilities towards and beyond petascale in the last decade has driven the interest in and applicability of Vlasov-based methods to multidimensional space and astrophysical plasma problems. Table 2 compiles existing research work using direct solutions of the Vlasov equation in plasma physics. Table 2 only includes works with a direct link to space physics and astrophysics, meaning that purely theoretical work as well as research from adjacent fields, in particular nuclear fusion and laser–plasma interaction, has been omitted from this list on purpose.
As of 2018, the Vlasov equation has thus been used in space plasma physics and plasma astrophysics to model magnetic reconnection, instabilities and turbulence, the interaction of the solar wind with the Earth and other bodies, radio emissions in near-Earth space and the charge distribution around spacecraft.

Table 2 Space and astrophysical applications of Vlasov-based plasma simulation methods

| Application | Model characteristics | References |
|---|---|---|
| Magnetic reconnection | SL 2D–3V full Vlasov | Umeda et al. (2009, 2010b), Zenitani and Umeda (2014) |
| | FD 2D–3V hybrid-Vlasov (H+) | Cerri and Califano (2017), Franci et al. (2017) |
| Kelvin–Helmholtz instability | SL 2D–3V full Vlasov | Umeda et al. (2010a, 2014) |
| Rayleigh–Taylor instability | SL 2D–2V full Vlasov | |
| Solar wind turbulence | FD 3D–3V hybrid-Vlasov (H+) | Cerri et al. (2017b), Servidio et al. (2015) |
| | FD 2D–3V hybrid-Vlasov (H+) | Cerri et al. (2016, 2017a), Leonardis et al. (2016), Pucci et al. (2016), Servidio et al. (2012, 2014), Valentini et al. (2010, 2011, 2014, 2016), Vásconez et al. (2014, 2015) |
| | FD 2D–3V hybrid-Vlasov (H+, He++) | Perrone et al. (2013, 2014a, b) |
| | L 1D–3V hybrid-Vlasov (e$$^-$$) | Harid et al. (2014), Nunn et al. (1997), Nunn (2005) |
| | SL 1D–3V hybrid-Vlasov (e$$^-$$) | Gibby et al. (2008) |
| Solar wind interaction with unmagnetised or weakly magnetised bodies | SL 2D–3V full Vlasov | Umeda et al. (2011, 2013), Umeda (2012), Umeda and Ito (2014), Umeda and Fukazawa (2015) |
| Solar wind interaction with the terrestrial magnetosphere | FV 3D–3V test-Vlasov (H+) | Palmroth et al. (2013) |
| | FV 2D–3V hybrid-Vlasov (H+), equatorial plane | Pokhotelov et al. (2013), von Alfthan et al. (2014), Kempf et al. (2015) |
| | SL 2D–3V hybrid-Vlasov (H+), polar and equatorial plane | Palmroth et al. (2015, 2017), Pfau-Kempf et al. (2016), Hoilijoki et al. (2016, 2017) |
| Charge and potential distribution around a spacecraft | 3D Vlasov–Poisson (iterative relaxation algorithm) and Vlasov–Laplace (Lagrangian) | Chane-Yook et al. (2006) |
| Relativistic Weibel instabilities | SL 1D–2V hybrid-Vlasov (e$$^-$$) | Inglebert et al. (2011) |
| | SL 2D–2V hybrid-Vlasov (e$$^-$$) | Ghizzo et al. (2017) |
| | SL 2D–2V and 2D–3V hybrid-Vlasov (e$$^-$$) | Sarrat et al. (2017) |

FD: finite difference; FV: finite volume; L: fully Lagrangian; SL: semi-Lagrangian; e$$^-$$, H+, He++: kinetic species (electrons, protons, helium ions) in a hybrid setup

## 5 Vlasiator

This section considers the choices and approaches made for the Vlasiator code, attempting to describe the near-Earth space at ion kinetic scales. Vlasiator simulates the global near-Earth plasma environment through a hybrid-Vlasov approach. The evolution of the phase-space density of kinetic ions is solved with Vlasov’s equation (Eq. 3), with the evolution of electromagnetic fields described through Faraday’s law (Eq. 8), Gauss’ law and the solenoidality condition (Eq. 9), and the Darwin approximation of Ampère’s law (Eq. 14). Electrons are modelled as a massless charge-neutralising fluid. Closure is provided via the generalised Ohm’s law (Eq. 12) under the assumptions of high conductivity, slow temporal variations, and cold electrons, i.e., the Hall MHD Ohm’s law (Eq. 13). The source code of Vlasiator is available at http://github.com/fmihpc/vlasiator according to the Rules of the Road mapped out at http://www.physics.helsinki.fi/vlasiator.

### 5.1 Background

Vlasiator has its roots in the discussions within the global MHD simulation community around 2005. It was becoming evident that while global MHD simulations are important, their capabilities, especially in the inner magnetosphere, are limited. The inner magnetosphere consists of spatially overlapping plasma populations of different temperatures (e.g., Baker 1995) and therefore the environment cannot be satisfactorily modelled with MHD to a degree allowing, e.g., environmental predictions for societally critical spacecraft, or as a context for upcoming missions, like the Van Allen Probes (e.g., Fox and Burch 2013).
To this end, two strategies emerged, including either coupling a global MHD simulation with an inner magnetospheric simulation (e.g., Huang et al. 2006), or going beyond MHD with the then newly introduced hybrid-PIC approach (e.g., Omidi and Sibeck 2007). Coupling different codes carries a risk that the effects of the coupling scheme dominate over the improved physics. On the other hand, while hybrid-PIC simulations had produced important breakthroughs (e.g., Omidi et al. 2005), the velocity distributions computed through binning are noisy due to the limited number of launched particles, which could compromise physical conclusions. Further, due to the limited number of particles, the hybrid-PIC simulations could not provide sharp gradients, which would become a problem especially in the magnetotail, where the lobes surrounding the dense plasma sheet are almost empty. As the tail physics is critical in the global description of the magnetosphere, the idea of a hybrid-Vlasov simulation emerged. The objective was simple: to go beyond MHD by introducing protons as a kinetic population modelled by a distribution function, thus eliminating the noise.

Several challenges were identified. First, if one neglects electrons as a kinetic population, one will, e.g., lose electron-scale instabilities that can be important in the tail physics (e.g., Pritchett 2005). Second, a global hybrid-Vlasov approach is still an extreme computational challenge even with a coarse ion-scale resolution, since it must be carried out in six dimensions. Further, doubts existed about whether grid resolutions achievable with current computational resources would resolve ion kinetic physics. However, with a new approach without historical heritage, one could utilize the latest high-performance computing methods and new computational architectures, provided that the code would always be portable to the latest technology.
The computational resources were still obeying Moore’s law, and petascale systems had just become operational (Kogge 2009). With these prospects in mind, Vlasiator was proposed to the newly established European Research Council in 2007, which solicited new ideas with a high risk–high gain vision.

### 5.2 Grid discretisation

The position space is discretised on a uniform Cartesian grid of cubic cells. Each cell holds the values of variables that are either being propagated or reconstructed (e.g., the electric and magnetic fields and the ion velocity distribution moments) as well as housekeeping variables. In addition, a three-dimensional uniform Cartesian velocity space grid is stored in each spatial cell. For position space Vlasiator uses the Distributed Cartesian Cell-Refinable Grid library (DCCRG; Honkonen et al. 2013), albeit without making use of the adaptive mesh refinement capabilities it offers. The library can distribute the grid over a large supercomputer using the domain decomposition approach (see Sect. 5.5 for details on the parallelisation strategies). The velocity space grid is purpose-built for that specific task.

A major performance gain in terms of memory and computation is achieved by storing and propagating the volume average of f in every cell at position $$\mathbf {x}$$ only in those velocity space cells where f exceeds a given density threshold $$f_\mathrm{min}$$. In order to accurately model propagation and acceleration, a buffer layer is maintained by also modelling cells that are adjacent in position or velocity space. The principle is illustrated in Fig. 4. This threshold can be constant or scaled linearly with the ion density. For each ion population, the maximal velocity space extents and the resolution are set by the user.
This so-called sparse velocity space strategy makes it possible to increase the resolution and track the distribution function in the regions where it is present, instead of wasting computational resources covering the full extents of reachable velocity space. It is however important to set the value of $$f_\mathrm{min}$$ carefully, in order to conserve the moments of f (density, pressure, etc.) to the desired accuracy and in order to include in a given simulation all expected features such as low-density high-energy populations. A detailed discussion of the effects of the grid discretisation parameters on the simulation of a collisionless shock was published by Pfau-Kempf et al. (2018).

### 5.3 Solvers and time-integration

The structure of the hybrid-Vlasov set of equations leads to the logical split into a solver for the Vlasov equation and a solver for the electric and magnetic field propagation.

#### 5.3.1 Vlasov solver

In advancing the Vlasov equation, Vlasiator utilises Strang splitting (Umeda et al. 2009, 2011, and references therein), where updates of the particle distribution functions are performed separately, using a spatial translation operator $$S_T = \left( \mathbf{v}\cdot \frac{\partial f_\mathrm{s}}{\partial \mathbf{x}}\right)$$ for advection, and an acceleration operator $$S_A = \left( \frac{q_\mathrm{s}}{m_\mathrm{s}}{} \mathbf{E}\cdot \frac{\partial f_\mathrm{s}}{\partial \mathbf{v}}\right)$$, including the rotation $$\left( \frac{q_\mathrm{s}}{m_\mathrm{s}}(\mathbf{v}\times \mathbf{B})\cdot \frac{\partial f_\mathrm{s}}{\partial \mathbf{v}}\right)$$, for each phase-space volume average. The splitting is performed using a standard leapfrog scheme, where

\begin{aligned} \widetilde{f}^{N+1} = S_A\left( \frac{\varDelta t}{2}\right) S_T(\varDelta t) S_A\left( \frac{\varDelta t}{2}\right) \widetilde{f}^{N}.
\end{aligned}

Acceleration over step length $$\varDelta t$$ is thus calculated based on the field values computed at the mid-point of each acceleration step, i.e., at each actual time step as used for translation. A global time step is defined, with time advancement calculated for distribution functions and fields in separate yet linked computations.

Earlier versions of Vlasiator have used finite volume (FV) Vlasov solvers. In the earliest versions of the code, a FV method based on the solver proposed by Kurganov and Tadmor (2000) was used (Palmroth et al. 2013). A Riemann-type FV solver (LeVeque 1997; Langseth and LeVeque 2000) was used in subsequent works (Kempf et al. 2013, 2015; Pokhotelov et al. 2013; Sandroos et al. 2013, 2015; von Alfthan et al. 2014). For these solvers the classical Courant–Friedrichs–Lewy (CFL) condition (Courant et al. 1928) for maximal allowable time steps when calculating fluxes from one phase-space cell to another is

\begin{aligned} \varDelta t < \mathrm {min} \left( \frac{\varDelta x_i}{\mathrm {max}(|v_i|)}, \frac{\varDelta v_i}{\mathrm {max}(|a_i|)} \right) , \end{aligned}

where i is indexed over the three dimensions. In previous versions the CFL condition was found to be very limiting. Vlasiator therefore utilizes a semi-Lagrangian scheme (SLICE-3D, Zerroukat and Allen 2012; https://github.com/fmihpc/lib-slice3d/), in which mass conservation is ensured by a conservative remapping from a Eulerian to a Lagrangian grid. Note however that the sparse velocity space strategy implemented in Vlasiator (see Sect. 5.2) breaks the mass conservation (see Pfau-Kempf et al. 2018, for a discussion of the effect of the phase-space density threshold on mass conservation). The distinguishing feature of the SLICE-3D scheme is that it splits the full 3D remapping into successive 1D remappings, which reduces the computational cost of the spatial translation and facilitates its parallel implementation.
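On phase-space characteristics, the split update $$S_A\left( \frac{\varDelta t}{2}\right) S_T(\varDelta t) S_A\left( \frac{\varDelta t}{2}\right)$$ acts like a kick-drift-kick (velocity Verlet) integrator. A toy check on a single characteristic with constant acceleration, for which the splitting happens to be exact (all parameter values are arbitrary):

```python
# Kick-drift-kick action of the Strang-split update on one phase-space
# characteristic: half acceleration, translation at mid-step velocity,
# half acceleration.
def strang_characteristic(x, v, a, dt):
    v += 0.5 * dt * a        # S_A(dt/2): half acceleration ("kick")
    x += dt * v              # S_T(dt):   translation ("drift")
    v += 0.5 * dt * a        # S_A(dt/2): half acceleration ("kick")
    return x, v

x, v, a, dt = 0.0, 1.0, 2.0, 0.1
for _ in range(10):
    x, v = strang_characteristic(x, v, a, dt)
# Exact solution at t = 1: x = v0*t + a*t**2/2 = 2, v = v0 + a*t = 3.
assert abs(x - 2.0) < 1e-12 and abs(v - 3.0) < 1e-12
```

For a general (time- or position-dependent) force the splitting is second-order accurate rather than exact, consistent with the leapfrog character of the scheme.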
The velocity space update due to acceleration $$S_A\left( \frac{\varDelta t}{2}\right)$$ will generally be described by an offset 3D rotation matrix (due to gyration around $$\mathbf {B}$$). As every offset rotation matrix can be decomposed into three shear matrices $$S = S_x S_y S_z$$, each performing an axis-parallel shear in one spatial dimension (Chen and Kaufman 2000), a numerically efficient semi-Lagrangian acceleration update using the SLICE-3D approach is possible: before each shear transformation, the velocity space is rearranged in memory into a single-cell column format parallel to the shear direction, so that each column requires only a one-dimensional remapping with a high reconstruction order (in Vlasiator, 5th-order reconstruction is typically employed for this step). These column updates are optimised to make full use of vector instructions. This update method comes with a maximum rotation angle limit due to the shear decomposition of about $$22^\circ$$, which imposes a further time step limit. For larger rotation angles per time step (caused by stronger magnetic fields), the acceleration can be subcycled.

The position space update $$S_T(\varDelta t)$$ will generally be described by a translation matrix with no rotation, and the same SLICE-3D approach lends itself to it in a similar vein as for velocity space. The main difference is the typical use of 3rd-order reconstruction in order to keep the stencil width at two. The use of a semi-Lagrangian scheme allows the implementation of a time step limit

\begin{aligned} \varDelta t < \mathrm {min} \left( \frac{\varDelta x_i}{\mathrm {max}(|v_i|)} \right) \end{aligned}

based on spatial translation only. This condition constrains the spatial translation of any volume averages to a maximum value of $$\varDelta x_i$$ in direction i, accounting for only those velocities within phase space which have populated active cells (see the sparse grid implementation, Sect. 5.2).
This is employed not due to stability requirements, but rather to decrease communication bandwidth by ensuring that a single ghost cell in each spatial direction is sufficient.

#### 5.3.2 Field solver

The field solver in Vlasiator (von Alfthan et al. 2014) is based on the upwind constrained transport algorithm by Londrillo and Del Zanna (2004) and uses divergence-free reconstruction of the magnetic fields (Balsara 2009). It utilizes a second-order Runge–Kutta algorithm, including the interpolation method demonstrated by Valentini et al. (2007) to obtain the intermediate moments of f needed to update the electric and magnetic fields (von Alfthan et al. 2014). The algorithm is subject to a CFL condition such that the fastest-propagating wave mode cannot travel more than half a spatial cell per time step. Since the field solver was extended to include the Hall term in Ohm’s law, the CFL limit severely impacts the time step of the whole propagation in regions of high magnetic field strength or low plasma density. If the imbalance between the time step limits from the Vlasov solver and from the field solver is too strong, the computation of the electric and magnetic fields is subcycled so as to retain an acceptable global time step length (Pfau-Kempf 2016).

#### 5.3.3 Time stepping

The leapfrog scheme of the propagation is initialised by half a time step of acceleration. If the time step needs to change during the simulation due to the dynamics of the system, f is accelerated backwards by half an old time step and forwards again by half a new time step, and the algorithm resumes with the new global time step. The complete sequence of the time propagation in Vlasiator is depicted in Fig. 5, including a synthetic version of the equations used in the different parts.
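The shear decomposition underlying the acceleration update of Sect. 5.3.1 can be illustrated in two dimensions, where a rotation factors exactly into three axis-parallel shears; the 3-D offset-rotation case of Chen and Kaufman (2000) is analogous. The 20° angle below is an arbitrary choice under the roughly 22° limit mentioned above:

```python
import numpy as np

# A 2-D rotation by theta written as three axis-parallel shears:
# R(theta) = Shear_x(-tan(theta/2)) . Shear_y(sin(theta)) . Shear_x(-tan(theta/2)).
# Each shear only remaps 1-D columns, which is what makes the
# column-based semi-Lagrangian update efficient.
theta = np.deg2rad(20.0)
t, s = np.tan(theta / 2.0), np.sin(theta)
shear_x = np.array([[1.0, -t], [0.0, 1.0]])   # shear parallel to the x axis
shear_y = np.array([[1.0, 0.0], [s, 1.0]])    # shear parallel to the y axis
rotation = shear_x @ shear_y @ shear_x
expected = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
assert np.allclose(rotation, expected)
```

Since each shear has unit determinant, the decomposition is volume-preserving, consistent with the conservative remapping used in the actual solver.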
### 5.4 Boundary and initial conditions The reference frame used in Vlasiator is defined as follows: the origin is located at the centre of the Earth, the x axis points towards the Sun, the z axis points northward perpendicular to both the x axis and the ecliptic plane, and the y axis completes the right-handed set. This coordinate system is equivalent to the Geocentric Solar Ecliptic (GSE) frame which is commonly used when studying the near-Earth space. The solar wind enters the simulation domain at the $$+\,x$$ boundary, while copy conditions (i.e., homogeneous Neumann boundary conditions obtained by copying the velocity distribution functions and magnetic fields from the boundary cells to their neighbouring ghost cells) are used for the $$-\,x$$ boundary and for the boundaries perpendicular to the flow. In the current 2D–3V runs, periodic conditions are applied in the out-of-plane direction (i.e., $$+\,z$$ and $$-\,z$$ for the ecliptic runs and $$+\,y$$ and $$-\,y$$ for the polar runs). Currently, three versions of the copy conditions are implemented, which can be adjusted in order to mitigate issues such as self-replicating phenomena at the boundaries. At the beginning of the run or at a restart, the outflow condition can be set to a classic copy condition, to a copy condition where the value of f is modified in order to avoid self-replication or inflowing features, or static conditions can be maintained at the boundary. The simulation also requires an inner boundary around the Earth, in order to screen the origin of the terrestrial dipole. In the inner magnetosphere, the magnetic field strength increases dramatically, resulting in very small time steps, which would significantly slow down the whole simulation. Also, close to the Earth, the ionospheric plasma can no longer be described as a collisionless and fully ionised medium, and another treatment would be required in order to properly simulate this region. 
The inner boundary is therefore located at 30,000 km (about $$4.7 \, \mathrm {R}_{\mathrm {E}}$$) from the Earth’s centre and is currently modelled as a perfect conductor. The distribution functions in the boundary cells retain their initial Maxwellian distributions throughout the run. The electric field is set to zero in the layer of boundary cells closest to the origin, and the magnetic field component tangential to the boundary is fixed to the value given by the Earth’s dipole field. Since the ionospheric boundary is given in the Cartesian coordinate system, it is not exactly spherical but staircase-like, introducing several computational problems (e.g., Cangellaris and Wright 1991). This has not been a large problem in Vlasiator to date, possibly because the computations are carried out in a 2D–3V setup. Once computations are carried out in 3D–3V, this may pose a larger problem, because in 3D the magnetic field will be stronger near the poles. In addition to defining boundary conditions, the phase-space cells within the simulation box must be initialised to some reasonable values, after which the magnetic and gas pressures and flow conditions cause the state of the simulation to change and to converge towards a valid description of the magnetospheric system as the box is flushed through. The usual method employed in Vlasiator is to initialise the velocity space in each cell within the simulation (excluding the region within the inner boundary) to match values picked from a Maxwellian distribution in agreement with the inflow boundary solar wind density, temperature, and bulk flow direction. The inner boundary is initialised with a constant proton temperature and number density with no bulk flow. The initial phase-space density sampling can be improved by averaging over multiple densities evaluated at equally spaced velocity vectors within a single velocity space cell.
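As an illustration of this initialisation, the sketch below evaluates a drifting Maxwellian averaged over equally spaced sample points within each velocity cell; the solar wind parameters and the number of sub-samples are arbitrary example values, not Vlasiator’s defaults:

```python
import numpy as np

KB = 1.380649e-23   # Boltzmann constant [J/K]
MP = 1.672622e-27   # proton mass [kg]

def maxwellian(v, n, T, u):
    """Phase-space density of a drifting proton Maxwellian [s^3 m^-6]."""
    norm = n * (MP / (2.0 * np.pi * KB * T)) ** 1.5
    return norm * np.exp(-MP * np.sum((v - u) ** 2, axis=-1) / (2.0 * KB * T))

def cell_average(v_center, dv, n, T, u, nsub=2):
    """Average f over nsub^3 equally spaced sample points in one velocity cell."""
    offsets = (np.arange(nsub) + 0.5) / nsub - 0.5   # fractions of dv
    pts = np.array([[ox, oy, oz] for ox in offsets
                    for oy in offsets for oz in offsets]) * dv
    return maxwellian(v_center + pts, n, T, u).mean()

# Recover the density by integrating over a uniform velocity grid (u = 0 here)
n, T = 1.0e6, 1.0e5                      # [m^-3], [K] -- illustrative values
vth = np.sqrt(KB * T / MP)
dv = vth                                 # one velocity cell per thermal speed
axis = (np.arange(-6, 6) + 0.5) * dv     # cell centres covering +/- 6 vth
vx, vy, vz = np.meshgrid(axis, axis, axis, indexing="ij")
centres = np.stack([vx, vy, vz], axis=-1).reshape(-1, 3)
total = sum(cell_average(c, dv, n, T, np.zeros(3)) for c in centres) * dv ** 3
assert abs(total / n - 1.0) < 1e-2       # zeroth moment reproduces n
```

The sub-sampling matters most for coarse velocity grids, where evaluating f only at the cell centre misestimates the cell-averaged density.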
The Earth’s magnetic field closely resembles that of a magnetic dipole, and within the scope of Vlasiator the dipole has been approximated as being aligned with the z axis. For ecliptic xy-plane simulations the dipole field can be used as-is, but for polar xz-plane simulations a 2D line dipole (which scales as $$r^{-2}$$ rather than $$r^{-3}$$) must be used instead, in order to prevent the occurrence of unphysical currents due to out-of-plane field curvature. When using this approach, the line dipole strength must be chosen to represent reality in some chosen sense; this is achieved by selecting the value that reproduces the magnetopause at its realistic $$\sim 10\,\mathrm {R}_\mathrm{E}$$ standoff distance (a treatment similar to that found in, e.g., Daldorff et al. 2014). Since the dipole magnetic field is not included at the inflow boundary, no boundary-perpendicular magnetic field component may exist there if the solenoidality condition $$\nabla \cdot \mathbf {B} = 0$$ is to be respected. For ecliptic runs, the dipole field is aligned with z and thus has no component perpendicular to the inflow boundary. For polar runs, the dipole field component perpendicular to the inflow boundary must be removed to prevent magnetic field divergence. This is achieved by placing a mirror dipole, identical to the Earth’s dipole model, at the position $$(2 \cdot (X_1 - \varDelta x),0,0)$$, i.e., at twice the distance from the origin to the edge of the final non-boundary simulation cell. For each simulation cell, the static background magnetic flux through each face is thus assigned as a combination of the flux calculated from the chosen dipole field model, the mirror dipole if present, and the solar wind IMF vector. This background field, which is curl-free and divergence-free, is left static, and any calculations involving magnetic fields instead operate on a separate field which acts as a perturbation from this initial field.
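The cancellation provided by the mirror dipole can be checked numerically: for a z-aligned dipole at the origin and an identical dipole at $$(2d,0,0)$$, the x components of their fields cancel exactly on the plane $$x = d$$ by symmetry. A sketch with an illustrative (roughly Earth-like) dipole moment and boundary distance:

```python
import numpy as np

MU0 = 4.0e-7 * np.pi  # vacuum permeability [H/m]

def dipole_B(r, m):
    """Magnetic field [T] of a point dipole with moment m, at position r."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0 / (4.0 * np.pi * rn ** 3) * (3.0 * rhat * np.dot(m, rhat) - m)

m = np.array([0.0, 0.0, -8.0e22])   # roughly Earth-like moment [A m^2]
d = 2.0e8                            # distance to the inflow boundary plane [m]

# On the plane x = d, the dipole at the origin and the identical mirror
# dipole at (2d, 0, 0) produce opposite B_x, so the normal component vanishes.
rng = np.random.default_rng(0)
for _ in range(100):
    p = np.array([d, *rng.uniform(-1e8, 1e8, 2)])
    Bx = dipole_B(p, m)[0] + dipole_B(p - np.array([2 * d, 0.0, 0.0]), m)[0]
    assert abs(Bx) < 1e-18
```

Only the perpendicular component cancels; the tangential components on the boundary are doubled by the mirror dipole, which is why this construction is applied to the static background field at the inflow boundary rather than everywhere.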
### 5.5 Parallelisation strategies

Given the curse of dimensionality in Vlasov simulations (cf. Sect. 4), the amounts of memory and computational steps required for global magnetospheric hybrid-Vlasov simulations are extreme. Therefore, the use of supercomputer resources and parallelisation techniques is essential. Vlasiator uses three levels of parallelisation, the first of which is the decomposition of the spatial simulation domain into subdomains handled by individual tasks using the Message-Passing Interface (MPI, MPI Forum 2004). The DCCRG grid library (Honkonen et al. 2013) provides most of the glue code for MPI communication and management of computational domain interfaces. Thanks to the sparse velocity space representation, large savings in memory usage and computational demand can be achieved. However, the sparse velocity space induces a further problem: the computational effort to solve the Vlasov equation is no longer constant for every spatial simulation cell, but varies in direct relation to the complexity of the velocity space at each given point. Due to the large variety of physical processes present in the magnetospheric domain, this leads to large load imbalances throughout the simulation box, making a simple Cartesian subdivision of space over computational tasks infeasible and necessitating dynamic rebalancing of the work distribution. To this end, Vlasiator relies on the Zoltan library (Devine et al. 2002; Boman et al. 2012), which creates an optimised space decomposition from continuously updated run-time metrics and provides a number of different algorithms to do so (production runs usually use recursive coordinate bisection).
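To illustrate the recursive coordinate bisection concept (this toy version is not Zoltan’s implementation or API), the following sketch recursively splits weighted cells along their widest coordinate extent so that each partition receives roughly equal work:

```python
import numpy as np

def rcb(coords, weights, levels):
    """Recursively bisect weighted points along the widest coordinate axis.

    Returns an array assigning each point to one of 2**levels partitions
    of roughly equal total weight (here a stand-in for velocity-space
    complexity per spatial cell).
    """
    part = np.zeros(len(coords), dtype=int)

    def split(idx, level, label):
        if level == 0:
            part[idx] = label
            return
        pts = coords[idx]
        axis = np.argmax(pts.max(axis=0) - pts.min(axis=0))  # widest extent
        order = idx[np.argsort(pts[:, axis])]
        cum = np.cumsum(weights[order])
        cut = np.searchsorted(cum, cum[-1] / 2.0)            # weight median
        split(order[:cut + 1], level - 1, 2 * label)
        split(order[cut + 1:], level - 1, 2 * label + 1)

    split(np.arange(len(coords)), levels, 0)
    return part

rng = np.random.default_rng(1)
coords = rng.uniform(0.0, 1.0, size=(4000, 2))
weights = 1.0 + 9.0 * (coords[:, 0] < 0.3)   # "magnetosphere" cells cost 10x
part = rcb(coords, weights, levels=3)        # 8 partitions
loads = np.bincount(part, weights=weights, minlength=8)
assert loads.max() / loads.min() < 1.3       # loads balanced despite hotspots
```

The weighted split is what distinguishes this from a plain Cartesian subdivision: expensive cells simply end up in geometrically smaller partitions, as visible in the decomposition of Fig. 6.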
Figure 6 shows an example of the resulting spatial decomposition in a 2D global magnetospheric simulation run in the Earth’s polar plane, in which the incoming solar wind plasma with low-complexity Maxwellian velocity distributions on the right-hand side is processed with visibly larger computational domain sizes than the more complex velocity space structures in the magnetosphere. The second level of parallelisation is carried out within the computational domain of each MPI task. Each MPI task is typically handled by a full supercomputer node (or a fraction of one) with multiple CPU cores and threads, on which a local parallelisation level based on OpenMP (OpenMP Architecture Review Board 2011) is employed. All computationally intensive solver steps have been designed to run thread-parallel over multiple spatial cells, or, in the case of the SLICE-3D position space update (see Sect. 5.3), over multiple parallel evaluations of the velocity space, to make optimum use of the shared-memory parallel computing architecture available within one node. As a third level of parallelisation, all data structures involved in computationally expensive solver steps have been designed to benefit from the vector processing of modern CPUs. Specifically, the velocity space representation in Vlasiator is based on $$4\times 4\times 4$$ cell blocks, which are always processed as a whole. This allows multiple velocity cells to be solved at the same time, using single-instruction-multiple-data techniques (Fog 2016). A further complication of parallel Vlasov simulations is the associated input/output requirements. Not only do they require a parallel input/output system that scales to the required number of nodes, but the sparse velocity space structure also requires an appropriate file format able to represent the sparsity without relying on fixed data offsets. For Vlasiator’s specific use case, the VLSV library and file format have been developed (http://github.com/fmihpc/vlsv).
Using parallel MPI-IO (MPI Forum 2004), it allows high-performance input/output even for simulation restart files which, given the large system size of Earth’s magnetosphere, tend to get up to multiple terabytes in size. A plugin for the popular scientific visualisation suite VisIt (Childs et al. 2012) is available, as is a python library that allows for quantitative analysis of the output files (http://github.com/fmihpc/analysator). Along with the industry’s trend towards architectures featuring large numbers of cores and/or GPUs as a primary computing element, an early version of Vlasiator was parallelised using the CUDA standard and run on small numbers of GPUs (Sandroos et al. 2013). This avenue was not pursued further because of the lack of suitably large systems, and a number of bottlenecks following from the structure of the Vlasov simulations on the one hand and the characteristics of GPUs on the other hand. ### 5.6 Verification and test problems As standard verification tests for a hybrid-Vlasov system do not exist, the first verification effort of Vlasiator was presented in Kempf et al. (2013). A simulation of low-$$\beta$$ plasma waves (where $$\beta$$ is the ratio of thermal and magnetic pressures) in a one-dimensional case with various angles of propagation with respect to the magnetic field was used to generate dispersion curves and surfaces. These were then compared to analytical solutions from the linearised plasma wave equations given by the Waves in Homogeneous, Anisotropic Multicomponent Plasmas (WHAMP) code (Rönnmark 1982). Excellent agreement between the results obtained from the two approaches was found in the case of parallel, perpendicular and oblique propagation, the only noticeable difference taking place for high frequencies and wave numbers, likely as a result of too coarse a representation of the Hall term in the Vlasiator simulations at that time. In the work presented by von Alfthan et al. 
(2014), the ion/ion right-hand resonant beam instability was studied in another effort to verify the hybrid-Vlasov model implemented in Vlasiator, this time against the analytic solution of the dispersion equation for that instability. The obtained instability growth rates were found to behave as predicted by theory in the cool beam regime, although with slightly lower values, which can be explained by the finite size of the simulation box used. This paper also compared results from the hybrid-Vlasov approach with those obtained with hybrid-PIC codes, underlining that the distribution functions are comparable, albeit smoother and better resolved with the former approach. More recently, Kilian et al. (2017) presented a set of validation tests based on kinetic plasma waves, and discussed what their expected behaviour should look like in fully kinetic PIC simulations as well as at different levels of simplification (Darwin approximation, EMHD, hybrid). By nature, waves and instabilities are a sensitive and valuable verification tool for plasma models, as they emerge from the collective behaviour of the plasma. As such they are an excellent verification test for a complete model, going well beyond unit tests of single solver components. The increasing computational performance of Vlasiator has allowed significant improvements in spatial resolution. It was still 850 km early on (von Alfthan et al. 2014; Kempf et al. 2015), but subsequent runs were performed at 300 km and even 227 km resolution (e.g., Palmroth et al. 2015; Hoilijoki et al. 2017). Nevertheless, even at these finer resolutions the typical kinetic scales are still not properly resolved in magnetospheric simulations. This can lead to the a priori concern that under-resolved hybrid-Vlasov simulations would not fare better than their considerably cheaper MHD forerunners and would similarly lack any kinetic plasma phenomena.
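For reference, the ion kinetic scales against which these grid resolutions are compared can be estimated from typical solar wind parameters; the parameter values below are illustrative assumptions:

```python
import numpy as np

QE   = 1.602177e-19   # elementary charge [C]
MP   = 1.672622e-27   # proton mass [kg]
KB   = 1.380649e-23   # Boltzmann constant [J/K]
EPS0 = 8.854188e-12   # vacuum permittivity [F/m]
C    = 2.997925e8     # speed of light [m/s]

def ion_inertial_length(n):
    """d_i = c / omega_pi for proton number density n [m^-3]."""
    omega_pi = np.sqrt(n * QE ** 2 / (EPS0 * MP))
    return C / omega_pi

def thermal_gyroradius(T, B):
    """r_L = m v_th / (q B), with v_th = sqrt(kB T / m)."""
    return MP * np.sqrt(KB * T / MP) / (QE * B)

# Illustrative solar wind: n = 1 cm^-3, T = 1e5 K, B = 5 nT
d_i = ion_inertial_length(1.0e6)          # ~230 km
r_L = thermal_gyroradius(1.0e5, 5.0e-9)   # ~60 km
print(f"d_i = {d_i / 1e3:.0f} km")
print(f"r_L = {r_L / 1e3:.0f} km")
```

With these numbers, the 227 km grid is of the order of the solar wind ion inertial length but still coarser than the thermal proton gyroradius, which is the situation the concern above refers to.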
A systematic study of the effects of the discretisation parameters of Vlasiator on the modelling of a collisionless shock alleviates this concern. Using a one-dimensional shock setup with conditions comparable to the terrestrial bow shock, Pfau-Kempf et al. (2018) show that even at spatial resolutions of 1000 km the results clearly depart from fluid theory and are consistent with a kinetic description. Of course an increased resolution of 200 km leads to a dramatic improvement in the physical detail accessible to the model, even though not yet fully resolving ion kinetic scales. This study also highlights the importance of choosing the velocity resolution and the phase-space density threshold $$f_\mathrm{min}$$ carefully as they affect the conservation properties of the model and as a consequence the physical processes it can describe. ### 5.7 Physics results Having completed verification tests, one can compare simulation results with experimental ground-based or spacecraft data, in other words proceed towards validation of the model. The first step for Vlasiator was to perform a global test-Vlasov simulation in 3D ordinary space (Palmroth et al. 2013). In this test f was propagated through the electromagnetic fields computed by the MHD model GUMICS-4 (Janhunen et al. 2012). This test showed that the early test-Vlasov version of Vlasiator already reproduced well the position of the Earth’s bow shock as well as magnetopause and magnetosheath plasma properties. Typical energy–latitude ion velocity profiles during northward IMF conditions were also successfully obtained with Vlasiator in that same study. Focusing on the ion velocity distributions in the foreshock, a study by Pokhotelov et al. (2013) demonstrated that the physics of ions in the vicinity of quasi-parallel MHD shocks is well reproduced by Vlasiator. 
The simulation presented in that paper is a global dayside magnetospheric run in 2D ordinary space (ecliptic plane) for which the IMF angle relative to the Sun–Earth axis is $$45^\circ$$. The foreshock was successfully reproduced by the model, and the reflected ion velocity distributions given by Vlasiator were found to be in agreement with spacecraft observations. In particular, deep in the ion foreshock, so-called cap-shaped ion distributions were reproduced by the model in association with 30 s sinusoidal waves which have been created as a result of ion/ion resonance interaction. Validation of Vlasiator using spacecraft data was presented by Kempf et al. (2015), where the various shapes of ion distributions in the foreshock were reviewed, localised relative to the foreshock boundaries, identified in Time History of Events and Macroscale Interactions during Substorms (THEMIS, Angelopoulos 2008) data and compared to model results. The agreement between Vlasiator-simulated distributions and those observed by THEMIS was found to be very good, giving additional credibility to the hybrid-Vlasov approach and its feasibility. While the papers discussed above essentially presented validations of the hybrid-Vlasov approach implemented in Vlasiator, the model has since 2015 been producing novel physical results. The first scientific investigations of the solar wind-magnetosphere interaction utilizing Vlasiator focus on dayside processes, from the foreshock to the magnetopause. Vlasiator offers in particular an unprecedented view of the suprathermal ion population in the foreshock. The moments of this population are direct outputs from the code, thus facilitating the analysis of parameters such as the suprathermal ion density or velocity throughout the foreshock (Kempf et al. 2015; Palmroth et al. 2015). 
In contrast, such parameters require careful data processing to be extracted from spacecraft measurements, and large statistics are needed in order to obtain global maps of the foreshock. Vlasiator makes it possible to investigate the properties and the structuring of the ultra-low-frequency (ULF, 1 mHz to $$\sim$$ 1 Hz) waves which pervade the foreshock, on both local and global scales. Direct comparison of a Vlasiator run with measurements from the THEMIS mission during similar solar wind conditions confirmed that Vlasiator reproduces well the characteristics of the waves at the spacecraft location (Palmroth et al. 2015). The typical features of the waves are in agreement with those reported in the literature. The observed oblique propagation of these foreshock waves relative to the ambient magnetic field has been a long-standing question, because theory predicts that they should be parallel-propagating. Based on Vlasiator results, Palmroth et al. (2015) proposed a new scenario to explain this phenomenon, which they attributed to the global variation of the suprathermal ion population properties across the foreshock. Vlasiator also offers unprecedented insight into the physics of the magnetosheath, which hosts mirror mode waves downstream of the quasi-perpendicular shock. Hoilijoki et al. (2016) found that the growth rate of the mirror mode waves was smaller than theoretical expectations, but in good agreement with spacecraft observations. As Hoilijoki et al. (2016) explain, this discrepancy has been ascribed to the fact that previous theoretical estimates did not take into account the local and global variations of the plasma parameters, nor the influence of other wave modes. Using Vlasiator’s capability to track the evolution of the plasma as it propagates from the bow shock into the magnetosheath, Hoilijoki et al.
(2016) demonstrated that mirror modes develop preferentially along magnetosheath streamlines whose origin at the bow shock lies in the vicinity of the foreshock ULF wave boundary. This is probably because perturbations in the foreshock are transmitted into the magnetosheath, leaving the plasma in this region more unstable to mirror modes. This result underlines the importance of the global approach, as a similar result would not be obtained with coupled codes, nor with codes that do not model both the foreshock and the magnetosheath simultaneously. Magnetic reconnection is a topic of intensive research, as it is the main process through which plasma and energy are transferred from the solar wind into the magnetosphere. Many questions remain unresolved on both the dayside and the nightside. On the dayside, active research topics include the position of the reconnection line and the bursty or continuous nature of reconnection, while on the nightside the most important topic is the global magnetospheric reconfiguration caused either by reconnection or by a tail current disruption. In order to tackle these questions, the simulation domain of Vlasiator, which so far corresponded to the Earth’s equatorial plane, was changed to cover the noon–midnight meridian plane (the xz plane in the reference frame defined in Sect. 5.4). To address the dayside–nightside coupling processes in reconnection, the simulation domain was extended to include the nightside reconnection site within the same simulation domain, stretching as far as $$-\,94 \, \mathrm {R}_{\mathrm {E}}$$ along the x direction. This run, carried out in 2016, remains at the time of writing the most computationally expensive Vlasiator run performed. Hoilijoki et al.
(2017) presented an investigation of reconnection and flux transfer event (FTE) processes at the dayside magnetopause, and showed that even under steady IMF conditions the location of the reconnection line varies with time, even allowing multiple reconnection lines to exist at a given time. Many FTEs are produced during the simulation, and occasionally magnetic islands have been observed to coalesce, which underlines the power of kinetic-based modelling in capturing highly dynamical and localised processes. Additionally, Hoilijoki et al. (2017) showed that the local reconnection rate measured at the locations of the reconnection lines correlates well with the analytical rate for asymmetric reconnection derived by Cassak and Shay (2007). This paves the way for using Vlasiator to investigate, e.g., the effects of dayside reconnection on the nightside. Vlasiator has proven to be a useful and powerful tool to reveal localised phenomena that had not been imagined before, and to narrow down the regions of the near-Earth environment in which to search for them in observational data sets. One example of this can be found in the work by Pfau-Kempf et al. (2016), in which transient, local ion foreshocks were discovered at the bow shock under steady solar wind and IMF conditions, as illustrated in Fig. 7. These transient foreshocks were found to be related to FTEs at the dayside magnetopause, produced by unsteady reconnection and creating fast mode waves that propagate upstream in the magnetosheath (Fig. 7a–c). These wave fronts can locally alter the shape of the bow shock, thus creating favourable conditions for local foreshocks to appear (Fig. 7d, e). Observational evidence supporting this scenario was found in a data set comprising Geotail observations near the bow shock and ground-based signatures of FTEs in SuperDARN radar and magnetometer data.
While the first set of publications essentially dealt with dayside processes, Vlasiator can also be applied to the study of nightside phenomena. The first investigation of magnetotail processes using Vlasiator was performed by Palmroth et al. (2017), showcasing multiple reconnection lines in the plasma sheet and the ejection of a plasmoid under steady IMF conditions (see Fig. 8). This study underlined that dayside reconnection may play a direct role in stabilising nightside reconnection, as flux tubes originating from dayside reconnection influenced the local conditions within the nightside plasma sheet. Again, this study illustrates how important it is to capture the whole system simultaneously using a kinetic approach.

### 5.8 Future avenues

Vlasiator is funded through several multi-annual grants, with which the code is improved and developed. Major building blocks in making Vlasiator possible were not only the increase in computational resources, but also several algorithmic innovations. Examples of these are the sparse grid for the distribution functions and the semi-Lagrangian solver discussed above. Further, the code has been continuously optimised to fit better on different parallel architectures. With these main steps, the efficiency of Vlasiator has been improved by effectively about eight orders of magnitude relative to its performance at the beginning of the project, allowing 2D–3V systems to be simulated at high resolution (Palmroth et al. 2017). Recently, a simulation run with a cylindrical ionosphere and a layer of grid cells in the third spatial dimension has been carried out, thus approaching the full 3D–3V representation. The development of Vlasiator is closely tied to the awarded grants. In terms of numerics, near-term plans are to include adaptive mesh refinement in both ordinary space and velocity space, as required for a full 3D–3V system.
These improvements would allow higher resolution to be placed in regions of interest, and consequently save in the number of time steps and in storage. The DCCRG grid already supports adaptive mesh refinement, so the task is mainly to add this support to the solvers and optimise the performance. In terms of physics, perhaps the most visible change in the recent past was the addition of heavier ions. In recent times, the role of heavier ions in, e.g., dayside reconnection has become evident (e.g., Fuselier et al. 2017), and thus the correct reproduction of the system at ion scales requires solving for heavier ions as well. While the addition requires more memory and storage capacity, in terms of coding it was relatively simple, as each additional ion species can be represented by its own sparse representation of the velocity space, adding little to the overall computational load. The first runs with additional ion populations were produced in 2018. The first set of runs considered helium flowing from the solar wind boundary, and the second set added oxygen flow from the ionospheric boundary. The analysis of these runs is ongoing. In the near term, the ionospheric boundary will also be improved. In the 2D–3V runs the ionosphere can be relatively simple, but in 3D–3V it needs to be updated as well. In a first approximation, it can be similar to the type of boundary used in global MHD simulations, which typically couple the field-aligned currents, electric potential and precipitation between the ionosphere and magnetosphere (e.g., Janhunen et al. 2012). Later, the ionosphere should be updated to take into account the more detailed information that the Vlasov-based magnetospheric domain can offer relative to MHD. The objective is to push the inner edge of the simulation domain earthwards from its current position (around $$5 \, \mathrm {R}_{\mathrm {E}}$$).
Other planned improvements include allowing the Earth’s dipole field to be tilted with respect to the z direction, and replacing the mirror dipole method of ensuring the solenoidality condition with an alternative method, for instance a radially vanishing vector potential description of the dipole field. Inclusion of such capabilities would allow investigations of inner magnetospheric physics in terms of solar wind driving, which would close the circle: the difficulty of reproducing inner magnetospheric physics with global MHD simulations was one of the main motivations for developing Vlasiator in the first place. Other possible future avenues would be to consider other environments that will be investigated with present and future space missions. An example is Mercury, targeted by the upcoming BepiColombo mission. Cometary environments and comet–solar wind interactions should also be interesting given the recently added heavy-ion support, in the context of Rosetta mission data analysis. Further, the upcoming Juice mission will visit the icy moons of Jupiter, suggesting that, e.g., the Ganymede–Jupiter interaction may also be a viable option for the future.

## 6 Conclusions and outlook

There are several main conclusions that can be drawn from the Vlasiator results so far. The first is related to the applicability of the hybrid-Vlasov system for ions within the global magnetospheric context. When Vlasiator was first proposed, concerns arose as to whether ions are the dominant species controlling the system dynamics, or whether electrons are needed as well. In particular, a physical representation of the reconnection process may require electrons, while the ion-scale Vlasiator would still model reconnection similarly to global MHD simulations, i.e., through numerical diffusion. However, even an MHD simulation, treating both ions and electrons as a fluid, is capable of modelling global magnetospheric dynamics (Palmroth et al.
2006a, b), indicating that the reconnection driving the global dynamics must be in the right ballpark. Since Vlasiator is also able to produce results that are in agreement with in situ measurements, kinetic ions seem to be a major contributor in reproducing the global dynamics. Whether the electrons play a larger role in the global dynamics remains to be determined in the future, if such simulations become possible. Another major conclusion based on Vlasiator concerns the role of grid resolution in global setups. Again, one of the largest concerns at the beginning of Vlasiator development was that the ion gyroscales could not be reached within a global simulation volume, raising fears that the outcomes would be MHD-like, even though early hybrid-PIC simulations were also carried out at ion inertial length scales (e.g., Omidi et al. 2005). In this context, the first runs included an element of surprise, as even rather coarse-resolution grids induce kinetic phenomena that are in agreement with in situ observations (Pokhotelov et al. 2013). The latest results have clearly indicated that kinetic physics emerges even at coarse spatial resolution (Pfau-Kempf et al. 2018). It should be emphasised that this result would not have been foreseeable without developing the simulation first. Further, it suggests that electron physics, too, could be trialled without resolving the actual electron scales. One can hence conclude that others attempting to develop a (hybrid-)Vlasov simulation may face fewer concerns about grid resolution, even in setups with major computational challenges, such as portions of the Sun. The most common physical conclusion based on Vlasiator simulations is that “everything affects everything”, indicating that scale coupling is important in global magnetospheric dynamics. One avenue of development for global MHD simulations in recent years has been code coupling, where, e.g., problem-specific codes have been coupled into the global context (Huang et al.
2006), or, e.g., PIC simulations have been embedded within the MHD domain (Tóth et al. 2016). While these approaches are interesting and advance physical understanding, they cannot fully address scale coupling, as the specific kinetic phenomena are only captured within their respective simulation volumes. A prime example of scale coupling is the emergence of transient foreshocks, driven by bow waves generated by dayside reconnection (Pfau-Kempf et al. 2016). Another example is the generation of oblique foreshock waves due to the global variability of backstreaming populations (Palmroth et al. 2015). These results could not have been achieved without a simulation that resolves both small and large scales simultaneously. Vlasov-based methods have not yet been widely adopted in the fields of astrophysics and space physics to model large-scale systems beyond the few examples cited in Table 2, mainly due to the truly astronomical computational cost such simulations can have. The experience with Vlasiator nevertheless demonstrates that Vlasov-based modelling is strongly complementary to other methods and provides unprecedented insight well worth the implementation effort. Based on the pioneering work realised in the Solar–Terrestrial physics community, it is hoped that Vlasov-based methods will gain in popularity and lead to breakthrough results in other fields of space physics and astrophysics as well. Finally, it should be emphasised that a critical success factor in the Vlasiator development has been the close involvement with technological advances in the field of high-performance computing. European research infrastructures for supercomputing have been developed almost hand-in-hand with Vlasiator, providing an opportunity to always target the newest platforms, which feeds directly into the code development.
Should similar computationally intensive codes be designed and implemented elsewhere, it is recommended to keep a keen eye on the technological development of supercomputing platforms.

## Notes

### Acknowledgements

We acknowledge the European Research Council for Starting Grant 200141-QuESpace, with which Vlasiator (http://helsinki.fi/vlasiator) was developed, and Consolidator Grant 682068-PRESTISSIMO, awarded to further develop Vlasiator and use it for scientific investigations. We also gratefully acknowledge the Academy of Finland (Grant Numbers 138599, 267144, and 309937). The Finnish Centre of Excellence in Research of Sustainable Space, funded through the Academy of Finland with Grant Number 312351, supports Vlasiator development and science as well. We acknowledge all computational grants we have received: PRACE/Tier-0 2012061111 on Hermit/HLRS, PRACE/Tier-1 on Abel/UiO-NOTUR, PRACE/Tier-0 2014112573 on HazelHen/HLRS, PRACE/Tier-0 2016153521 on Marconi/CINECA, CSC – IT Center for Science Grand Challenge grants in 2015 and 2016, and the pilot use in summer as well as the special Christmas present pilot use of sisu.csc.fi in 2014. LT is supported by Marie Sklodowska-Curie Grant Agreement No. 704681.

## References

1. Afanasiev A, Battarbee M, Vainio R (2015) Self-consistent Monte Carlo simulations of proton acceleration in coronal shocks: effect of anisotropic pitch-angle scattering of particles. Astron Astrophys 584:A81. arXiv:1603.08857 2. Afanasiev A, Vainio R, Rouillard AP, Battarbee M, Aran A, Zucca P (2018) Modelling of proton acceleration in application to a ground level enhancement. Astron Astrophys 614:A4. 3. Aguilar M et al (2015) Precision measurement of the proton flux in primary cosmic rays from rigidity 1 GV to 1.8 TV with the alpha magnetic spectrometer on the International Space Station. Phys Rev Lett 114:171103. 4. 
André M, Vaivads A, Khotyaintsev YV, Laitinen T, Nilsson H, Stenberg G, Fazakerley A, Trotignon JG (2010) Magnetic reconnection and cold plasma at the magnetopause. Geophys Res Lett 37:L22108. 5. Anekallu CR, Palmroth M, Pulkkinen TI, Haaland SE, Lucek E, Dandouras I (2011) Energy conversion at the Earth’s magnetopause using single and multispacecraft methods. J Geophys Res 116:A11204. 6. Angelopoulos V (2008) The THEMIS mission. Space Sci Rev 141:5–34. 7. Angelopoulos V, McFadden JP, Larson D, Carlson CW, Mende SB, Frey H, Phan T, Sibeck DG, Glassmeier KH, Auster U, Donovan E, Mann IR, Rae IJ, Russell CT, Runov A, Zhou XZ, Kepko L (2008) Tail reconnection triggering substorm onset. Science 321:931–935. 8. Arber TD, Vann RGL (2002) A critical comparison of Eulerian-grid-based Vlasov solvers. J Comput Phys 180:339–357. 9. Axford WI, Leer E, Skadron G (1977) The acceleration of cosmic rays by shock waves. In: International cosmic ray conference, vol 11, pp 132–137. 10. Bai XN, Caprioli D, Sironi L, Spitkovsky A (2015) Magnetohydrodynamic-particle-in-cell method for coupling cosmic rays with a thermal plasma: application to non-relativistic shocks. Astrophys J 809:55. 11. Baker DN (1995) The inner magnetosphere: a review. Surv Geophys 16:331–362. 12. Balogh A, Treumann RA (2013) Physics of collisionless shocks—space plasma shock waves. ISSI scientific report, vol 12. Springer, Heidelberg. 13. Balsara DS (2009) Divergence-free reconstruction of magnetic fields and WENO schemes for magnetohydrodynamics. J Comput Phys 228:5040–5056. 14. Balsara DS (2017) Higher-order accurate space-time schemes for computational astrophysics—Part I: Finite volume methods. Living Rev Comput Astrophys 3:2. 15. Balsara DS, Kim J (2004) A comparison between divergence-cleaning and staggered-mesh formulations for numerical magnetohydrodynamics. Astrophys J 602:1079. 16. Banks JW, Hittinger JAF (2010) A new class of nonlinear finite-volume methods for Vlasov simulation. 
IEEE Trans Plasma Sci 38:2198–2207. 17. Bauer S, Kunze M (2005) The Darwin approximation of the relativistic Vlasov–Maxwell system. Ann Henri Poincare 6:283–308. 18. Becerra-Sagredo J, Málaga C, Mandujano F (2016) Moments preserving and high-resolution semi-Lagrangian advection scheme. SIAM J Sci Comput 38:A2141–A2161. 19. Bell AR (1978) The acceleration of cosmic rays in shock fronts. I. MNRAS 182:147–156. 20. Besse N, Sonnendrücker E (2003) Semi-Lagrangian schemes for the Vlasov equation on an unstructured mesh of phase space. J Comput Phys 191:341–376. 21. Besse N, Mauser N, Sonnendrücker E (2007) Numerical approximation of self-consistent Vlasov models for low-frequency electromagnetic phenomena. Int J Appl Math Comput Sci 17:361–374. 22. Besse N, Latu G, Ghizzo A, Sonnendrücker E, Bertrand P (2008) A wavelet-MRA-based adaptive semi-Lagrangian method for the relativistic Vlasov–Maxwell system. J Comput Phys 227:7889–7916. 23. Bilitza D, Reinisch BW (2008) International Reference Ionosphere 2007: improvements and new parameters. Adv Space Res 42:599–609. 24. Birdsall CK, Langdon AB (2004) Plasma physics via computer simulation. CRC Press, Boca Raton 25. Birn J, Hesse M (2009) Reconnection in substorms and solar flares: analogies and differences. Ann Geophys 27:1067–1078. 26. Blandford RD, Ostriker JP (1978) Particle acceleration by astrophysical shocks. Astrophys J 221:L29–L32. 27. Blelly PL, Lathuillère C, Emery B, Lilensten J, Fontanari J, Alcaydé D (2005) An extended TRANSCAR model including ionospheric convection: simulation of EISCAT observations using inputs from AMIE. Ann Geophys 23:419–431. 28. Boman EG, Catalyurek UV, Chevalier C, Devine KD (2012) The Zoltan and Isorropia parallel toolkits for combinatorial scientific computing: partitioning, ordering, and coloring. Comput Sci Eng 20:129–150. 29. Brizard AJ (2000) New variational principle for the Vlasov–Maxwell equations. Phys Rev Lett 84:5768–5771. 30. 
Brizard AJ, Tronko N (2011) Exact momentum conservation laws for the gyrokinetic Vlasov–Poisson equations. Phys Plasmas 18:082307. 31. Bruno R, Carbone V (2013) The solar wind as a turbulence laboratory. Living Rev Sol Phys 10:2. 32. Bryan GL, Norman ML, O’Shea BW, Abel T, Wise JH, Turk MJ, Reynolds DR, Collins DC, Wang P, Skillman SW, Smith B, Harkness RP, Bordner J, Kim J, Kuhlen M, Xu H, Goldbaum N, Hummels C, Kritsuk AG, Tasker E, Skory S, Simpson CM, Hahn O, Oishi JS, So GC, Zhao F, Cen R, Li Y, The Enzo Collaboration (2014) ENZO: an adaptive mesh refinement code for astrophysics. Astrophys J Suppl Ser 211:19. 33. Bykov AM, Ellison DC, Osipov SM, Vladimirov AE (2014) Magnetic field amplification in nonlinear diffusive shock acceleration including resonant and non-resonant cosmic-ray driven instabilities. Astrophys J 789:137. 34. Camporeale E, Delzanno G, Bergen B, Moulton J (2016) On the velocity space discretization for the Vlasov–Poisson system: comparison between implicit Hermite spectral and particle-in-cell methods. Comput Phys Commun 198:47–58. 35. Cangellaris A, Wright D (1991) Analysis of the numerical error caused by the stair-stepped approximation of a conducting boundary in FDTD simulations of electromagnetic phenomena. IEEE Trans Ant Prop 39:1518–1525. 36. Caprioli D, Spitkovsky A (2013) Cosmic-ray-induced filamentation instability in collisionless shocks. Astrophys J 765:L20. 37. Casas F, Crouseilles N, Faou E, Mehrenberger M (2017) High-order Hamiltonian splitting for the Vlasov–Poisson equations. Numer Math 135:769–801. 38. Cassak PA, Shay MA (2007) Scaling of asymmetric magnetic reconnection: general theory and collisional simulations. Phys Plasmas 14:102114. 39. Cerri SS, Califano F (2017) Reconnection and small-scale fields in 2D–3V hybrid-kinetic driven turbulence simulations. New J Phys 19:025007. 40. Cerri SS, Califano F, Jenko F, Told D, Rincon F (2016) Subproton-scale cascades in solar wind turbulence: driven hybrid-kinetic simulations. 
Astrophys J Lett 822:L12. 41. Cerri SS, Franci L, Califano F, Landi S, Hellinger P (2017a) Plasma turbulence at ion scales: a comparison between particle in cell and Eulerian hybrid-kinetic approaches. J Plasma Phys 83:705830202. 42. Cerri SS, Servidio S, Califano F (2017b) Kinetic cascade in solar-wind turbulence: 3D3V hybrid-kinetic simulations with electron inertia. Astrophys J Lett 846:L18. 43. Chane-Yook M, Clerc S, Piperno S (2006) Space charge and potential distribution around a spacecraft in an isotropic plasma. J Geophys Res. 44. Chao JK, Zhang XX, Song P (1995) Derivation of temperature anisotropy from shock jump relations: theory and observations. Geophys Res Lett 22:2409–2412. 45. Chapman JF, Cairns IH (2003) Three-dimensional modeling of Earth’s bow shock: shock shape as a function of Alfvén Mach number. J Geophys Res 108:1174. 46. Chen B, Kaufman A (2000) 3D volume rotation using shear transformations. Graph Models 62:308–322. 47. Chen Y, Tóth G, Cassak P, Jia X, Gombosi TI, Slavin JA, Markidis S, Peng IB, Jordanova VK, Henderson MG (2017) Global three-dimensional simulation of Earth’s dayside reconnection using a two-way coupled magnetohydrodynamics with embedded particle-in-cell model: initial results. J Geophys Res 122:10318–10335. 48. Cheng CZ, Knorr G (1976) The integration of the Vlasov equation in configuration space. J Comput Phys 22:330–351. 49. Cheng Y, Gamba IM, Morrison PJ (2013) Study of conservation and recurrence of Runge–Kutta discontinuous Galerkin schemes for Vlasov–Poisson systems. J Sci Comput 56:319–349. 50. Cheng Y, Christlieb AJ, Zhong X (2014) Energy-conserving discontinuous Galerkin methods for the Vlasov–Ampère system. J Comput Phys 256:630–655. 51. 
Childs H, Brugger E, Whitlock B, Meredith J, Ahern S, Pugmire D, Biagas K, Miller M, Harrison C, Weber GH, Krishnan H, Fogal T, Sanderson A, Garth C, Bethel EW, Camp D, Rübel O, Durant M, Favre JM, Navrátil P (2012) VisIt: an end-user tool for visualizing and analyzing very large data. In: Wes Bethel E, Childs H, Hansen C (eds) High performance visualization—enabling extreme-scale scientific insight. CRC Press, Boca Raton, pp 357–372. 52. Cottet GH (2018) Semi-Lagrangian particle methods for high-dimensional Vlasov–Poisson systems. J Comput Phys 365:362–375. 53. Courant R, Friedrichs K, Lewy H (1928) Über die partiellen Differenzengleichungen der mathematischen Physik. Math Ann 100:32–74. 54. Cran-McGreehin AP, Wright AN (2005) Electron acceleration in downward auroral field-aligned currents. J Geophys Res 110:A10S15. 55. Cranmer SR, Gibson SE, Riley P (2017) Origins of the ambient solar wind: implications for space weather. Space Sci Rev 212:1345–1384. 56. Crouseilles N, Respaud T, Sonnendrücker E (2009) A forward semi-Lagrangian method for the numerical solution of the Vlasov equation. Comput Phys Commun 180:1730–1745. 57. Crouseilles N, Mehrenberger M, Sonnendrücker E (2010) Conservative semi-Lagrangian schemes for Vlasov equations. J Comput Phys 229:1927–1953. 58. Crouseilles N, Einkemmer L, Faou E (2015) Hamiltonian splitting for the Vlasov–Maxwell equations. J Comput Phys 283:224–240. 59. Daldorff LKS, Tóth G, Gombosi TI, Lapenta G, Amaya J, Markidis S, Brackbill JU (2014) Two-way coupling of a global Hall magnetohydrodynamics model with a local implicit particle-in-cell model. J Comput Phys 268:236–254. 60. Daughton W, Roytershteyn V, Karimabadi H, Yin L, Albright BJ, Bergen B, Bowers KJ (2011) Role of electron physics in the development of turbulent magnetic reconnection in collisionless plasmas. Nature Phys 7:539–542. 61. 
Daughton W, Nakamura TKM, Karimabadi H, Roytershteyn V, Loring B (2014) Computing the reconnection rate in turbulent kinetic layers by using electron mixing to identify topology. Phys Plasmas 21:052307. 62. De Moortel I, Browning P (2015) Recent advances in coronal heating. Philos Trans R Soc London, Ser A. 63. Delzanno G (2015) Multi-dimensional, fully-implicit, spectral method for the Vlasov–Maxwell equations with exact conservation laws in discrete form. J Comput Phys 301:338–356. 64. Devine K, Boman E, Heaphy R, Hendrickson B, Vaughan C (2002) Zoltan data management services for parallel dynamic applications. Comput Sci Eng 4:90–97. 65. Dimmock AP, Nykyri K (2013) The statistical mapping of magnetosheath plasma properties based on THEMIS measurements in the magnetosheath interplanetary medium reference frame. J Geophys Res 118:4963–4976. 66. Dungey JW (1961) Interplanetary magnetic field and the auroral zones. Phys Rev Lett 6:47–48. 67. Eastwood JP, Biffis E, Hapgood MA, Green L, Bisi MM, Bentley RD, Wicks R, McKinnell LA, Gibbs M, Burnett C (2017) The economic impact of space weather: where do we stand? Risk Anal 37:206–218. 68. Einkemmer L, Lubich C (2018) A low-rank projector-splitting integrator for the Vlasov–Poisson equation. ArXiv e-prints arXiv:1801.01103 69. Einkemmer L, Ostermann A (2014) Convergence analysis of Strang splitting for Vlasov-type equations. SIAM J Numer Anal 52:140–155. 70. Eliasson B (2001) Outflow boundary conditions for the Fourier transformed one-dimensional Vlasov–Poisson system. J Sci Comput 16:1–28. 71. Eliasson B (2011) Numerical simulations of the Fourier-transformed Vlasov–Maxwell system in higher dimensions—theory and applications. Transp Theor Stat Phys 39:387–465. 72. Escoubet CP, Fehringer M, Goldstein M (2001) The Cluster mission. Ann Geophys 19:1197–1200. 73. Fermi E (1949) On the origin of the cosmic radiation. Phys Rev 75:1169–1174. 74. 
Figua H, Bouchut F, Feix M, Fijalkow E (2000) Instability of the filtering method for Vlasov’s equation. J Comput Phys 159:440–447. 75. Filbet F, Sonnendrücker E (2003a) Comparison of Eulerian Vlasov solvers. Comput Phys Commun 150:247–266. 76. Filbet F, Sonnendrücker E (2003b) Numerical methods for the Vlasov equation. In: Brezzi F, Buffa A, Corsaro S, Murli A (eds) Numerical mathematics and advanced applications. Springer, Milan, pp 459–468. 77. Filbet F, Sonnendrücker E, Bertrand P (2001) Conservative numerical schemes for the Vlasov equation. J Comput Phys 172:166–187. 78. Fog A (2016) Agner Fog vector class library. http://www.agner.org/optimize/#vectorclass. Accessed 25 July 2018 79. Fox N, Burch JL (2013) The Van Allen Probes mission. Springer, New York. 80. Franci L, Cerri SS, Califano F, Landi S, Papini E, Verdini A, Matteini L, Jenko F, Hellinger P (2017) Magnetic reconnection as a driver for a sub-ion-scale cascade in plasma turbulence. Astrophys J Lett 850:L16. 81. Fuselier SA, Burch JL, Mukherjee J, Genestreti KJ, Vines SK, Gomez R, Goldstein J, Trattner KJ, Petrinec SM, Lavraud B, Strangeway RJ (2017) Magnetospheric ion influence at the dayside magnetopause. J Geophys Res 122:8617–8631. 82. Génot V (2009) Analytical solutions for anisotropic MHD shocks. Astrophys Space Sci Trans 5:31–34. 83. Génot V, Broussillou L, Budnik E, Hellinger P, Trávníček PM, Lucek E, Dandouras I (2011) Timing mirror structures observed by Cluster with a magnetosheath flow model. Ann Geophys 29:1849–1860. 84. Ghizzo A, Sarrat M, Del Sarto D (2017) Vlasov models for kinetic Weibel-type instabilities. J Plasma Phys 83:705830101. 85. Gibby AR, Inan US, Bell TF (2008) Saturation effects in the VLF-triggered emission process. J Geophys Res 113:A11215. 86. Görler T, Lapillonne X, Brunner S, Dannert T, Jenko F, Merz F, Told D (2011) The global version of the gyrokinetic turbulence code GENE. J Comput Phys 230:7053–7071. 87. 
Green JC, Likar J, Shprits Y (2017) Impact of space weather on the satellite industry. Space Weather 15:804–818. 88. Guo W, Cheng Y (2016) A sparse grid discontinuous Galerkin method for high-dimensional transport equations and its application to kinetic simulations. SIAM J Sci Comput 38:A3381–A3409. 89. Guo F, Giacalone J (2013) The acceleration of thermal protons at parallel collisionless shocks: three-dimensional hybrid simulations. Astrophys J 773:158. 90. Guo Y, Li Z (2008) Unstable and stable galaxy models. Commun Math Phys 279:789–813. 91. Hao Y, Gao X, Lu Q, Huang C, Wang R, Wang S (2017) Reformation of rippled quasi-parallel shocks: 2-D hybrid simulations. J Geophys Res. 92. Hargreaves JK (1995) The solar–terrestrial environment. Cambridge University Press, Cambridge. 93. Harid V, Gołkowski M, Bell T, Li JD, Inan US (2014) Finite difference modeling of coherent wave amplification in the Earth’s radiation belts. Geophys Res Lett 41:8193–8200. 94. Hockney RW, Eastwood JW (1988) Computer simulation using particles. Hilger, Bristol 95. Hoilijoki S, Souza VM, Walsh BM, Janhunen P, Palmroth M (2014) Magnetopause reconnection and energy conversion as influenced by the dipole tilt and the IMF $$B_{x}$$. J Geophys Res 119:4484–4494. 96. Hoilijoki S, Palmroth M, Walsh BM, Pfau-Kempf Y, von Alfthan S, Ganse U, Hannuksela O, Vainio R (2016) Mirror modes in the Earth’s magnetosheath: results from a global hybrid-Vlasov simulation. J Geophys Res 121:4191–4204. 97. Hoilijoki S, Ganse U, Pfau-Kempf Y, Cassak PA, Walsh BM, Hietala H, von Alfthan S, Palmroth M (2017) Reconnection rates and X line motion at the magnetopause: global 2D–3V hybrid-Vlasov simulation results. J Geophys Res 122:2877–2888. 98. Holloway JP (1995) A comparison of three velocity discretizations for the Vlasov equation. In: International conference on plasma science (papers in summary form only received), p 95. 99. 
Honkonen I, von Alfthan S, Sandroos A, Janhunen P, Palmroth M (2013) Parallel grid library for rapid and flexible simulation development. Comput Phys Commun 184:1297–1309. 100. Hoppe MM, Russell CT, Frank LA, Eastman TE, Greenstadt EW (1981) Upstream hydromagnetic waves and their association with backstreaming ion populations—ISEE 1 and 2 observations. J Geophys Res 86:4471–4492. 101. Hu J, Li G, Ao X, Zank GP, Verkhoglyadova O (2017) Modeling particle acceleration and transport at a 2-D CME-driven shock. J Geophys Res 122:10. 102. Huang CL, Spence HE, Lyon JG, Toffoletto FR, Singer HJ, Sazykin S (2006) Storm-time configuration of the inner magnetosphere: Lyon–Fedder–Mobarry MHD code, Tsyganenko model, and GOES observations. J Geophys Res 111:A11S16. 103. Inglebert A, Ghizzo A, Reveille T, Sarto DD, Bertrand P, Califano F (2011) A multi-stream Vlasov modeling unifying relativistic Weibel-type instabilities. Europhys Lett 95:45002. 104. Janhunen P, Palmroth M, Laitinen T, Honkonen I, Juusola L, Facskó G, Pulkkinen TI (2012) The GUMICS-4 global MHD magnetosphere–ionosphere coupling simulation. J Atmos Sol-Terr Phys 80:48–59. 105. Jenab SMH, Kourakis I (2014) Vlasov-kinetic computer simulations of electrostatic waves in dusty plasmas: an overview of recent results. Eur Phys J D 68:219. 106. Karimabadi H, Roytershteyn V, Vu HX, Omelchenko YA, Scudder J, Daughton W, Dimmock A, Nykyri K, Wan M, Sibeck D, Tatineni M, Majumdar A, Loring B, Geveci B (2014) The link between shocks, turbulence, and magnetic reconnection in collisionless plasmas. Phys Plasmas 21:062308. 107. Kärkkäinen M, Gjonaj E, Lau T, Weiland T (2006) Low-dispersion wake field calculation tools. In: Proceedings of ICAP 2006, Chamonix, France, vol 1, p 35. 108. Kazeminezhad F, Kuhn S, Tavakoli A (2003) Vlasov model using kinetic phase point trajectories. Phys Rev E 67:026704. 109. 
Kempf Y, Pokhotelov D, von Alfthan S, Vaivads A, Palmroth M, Koskinen HEJ (2013) Wave dispersion in the hybrid-Vlasov model: verification of Vlasiator. Phys Plasmas 20:112114. 110. Kempf Y, Pokhotelov D, Gutynska O, Wilson LB III, Walsh BM, von Alfthan S, Hannuksela O, Sibeck DG, Palmroth M (2015) Ion distributions in the Earth’s foreshock: hybrid-Vlasov simulation and THEMIS observations. J Geophys Res 120:3684–3701. 111. Kilian P, Muñoz PA, Schreiner C, Spanier F (2017) Plasma waves as a benchmark problem. J Plasma Phys. 112. Klimas AJ (1987) A method for overcoming the velocity space filamentation problem in collisionless plasma model solutions. J Comput Phys 68:202–226. 113. Klimas A, Farrell W (1994) A splitting algorithm for Vlasov simulation with filamentation filtration. J Comput Phys 110:150–163. 114. Klimas AJ, Viñas AF, Araneda JA (2017) Simulation study of Landau damping near the persisting to arrested transition. J Plasma Phys 83:905830405. 115. Kogge PM (2009) The challenges of petascale architectures. Comput Sci Eng 11:10–16. 116. Kormann K (2015) A semi-Lagrangian Vlasov solver in tensor train format. SIAM J Sci Comput 37:B613–B632. 117. Kormann K, Sonnendrücker E (2016) Sparse grids for the Vlasov–Poisson equation. In: Garcke J, Pflüger D (eds) Sparse grids and applications—Stuttgart 2014. Springer, Cham, pp 163–190. 118. Kozarev KA, Schwadron NA (2016) A data-driven analytic model for proton acceleration by large-scale solar coronal shocks. Astrophys J 831:120. arXiv:1608.00240 119. Krymskii G (1977) A regular mechanism for the acceleration of charged particles on the front of a shock wave. Dokl Akad Nauk SSSR 234:1306–1308. 120. Kurganov A, Tadmor E (2000) New high-resolution central schemes for nonlinear conservation laws and convection–diffusion equations. J Comput Phys 160:241–282. 121. Langseth JO, LeVeque RJ (2000) A wave propagation method for three-dimensional hyperbolic conservation laws. J Comput Phys 165:126–166. 122. 
Lapenta G (2012) Particle simulations of space weather. J Comput Phys 231:795–821. 123. Le Roux JA, Arthur AD (2017) Acceleration of solar energetic particles at a fast traveling shock in non-uniform coronal conditions. J Phys: Conf Ser 900:012013. 124. Lee MA (2005) Coupled hydromagnetic wave excitation and ion acceleration at an evolving coronal/interplanetary shock. Astrophys J Suppl Ser 158:38–67. 125. Leonardis E, Sorriso-Valvo L, Valentini F, Servidio S, Carbone F, Veltri P (2016) Multifractal scaling and intermittency in hybrid Vlasov–Maxwell simulations of plasma turbulence. Phys Plasmas 23:022307. 126. LeVeque RJ (1997) Wave propagation algorithms for multidimensional hyperbolic systems. J Comput Phys 131:327–353. 127. LeVeque RJ (2002) Finite volume methods for hyperbolic problems. Cambridge texts in applied mathematics. Cambridge University Press, Cambridge. 128. Lin Y, Wing S, Johnson JR, Wang XY, Perez JD, Cheng L (2017) Formation and transport of entropy structures in the magnetotail simulated with a 3-D global hybrid code. Geophys Res Lett 44:5892–5899. 129. Londrillo P, Del Zanna L (2004) On the divergence-free condition in Godunov-type schemes for ideal magnetohydrodynamics: the upwind constrained transport method. J Comput Phys 195:17–48. 130. Lu S, Lu Q, Lin Y, Wang X, Ge Y, Wang R, Zhou M, Fu H, Huang C, Wu M, Wang S (2015) Dipolarization fronts as earthward propagating flux ropes: a three-dimensional global hybrid simulation. J Geophys Res 120:6286–6300. 131. Luhmann JG, Ledvina SA, Odstrcil D, Owens MJ, Zhao XP, Liu Y, Riley P (2010) Cone model-based SEP event calculations for applications to multipoint observations. Adv Space Res 46:1–21. 132. Lui ATY (1996) Current disruption in the Earth’s magnetosphere: observations and models. J Geophys Res 101:13067–13088. 133. Maier A, Iapichino L, Schmidt W, Niemeyer JC (2009) Adaptively refined large eddy simulations of a galaxy cluster: turbulence modeling and the physics of the intracluster medium. 
Astrophys J 707:40. 134. Mangeney A, Califano F, Cavazzoni C, Trávníček P (2002) A numerical scheme for the integration of the Vlasov–Maxwell system of equations. J Comput Phys 179:495–538. 135. Marchaudon A, Blelly PL (2015) A new interhemispheric 16-moment model of the plasmasphere–ionosphere system: IPIM. J Geophys Res 120:5728–5745. 136. Marcowith A, Bret A, Bykov A, Dieckman ME, Drury LO, Lembège B, Lemoine M, Morlino G, Murphy G, Pelletier G, Plotnikov I, Reville B, Riquelme M, Sironi L, Stockem Novo A (2016) The microphysics of collisionless shock waves. Rep Prog Phys 79:046901. 137. Marsden JE, Weinstein A (1982) The Hamiltonian structure of the Maxwell–Vlasov equations. Physica D 4:394–406. 138. Martins SF, Fonseca RA, Lu W, Mori WB, Silva LO (2010) Exploring laser-wakefield-accelerator regimes for near-term lasers using particle-in-cell simulation in Lorentz-boosted frames. Nature Phys 6:311–316. 139. Mejnertsen L, Eastwood JP, Hietala H, Schwartz SJ, Chittenden JP (2018) Global MHD simulations of the Earth’s bow shock shape and motion under variable solar wind conditions. J Geophys Res 123:259–271. 140. Merkin VG, Lyon JG (2010) Effects of the low-latitude ionospheric boundary condition on the global magnetosphere. J Geophys Res 115:A10202. 141. MPI Forum (2015) MPI: a message-passing interface standard—version 3.1. http://www.mpi-forum.org/docs/mpi-3.1/mpi31-report.pdf. Accessed 25 July 2018 142. Nakamura TKM, Hasegawa H, Daughton W, Eriksson S, Li WY, Nakamura R (2017) Turbulent mass transfer caused by vortex induced reconnection in collisionless magnetospheric plasmas. Nature Commun 8:1582. 143. National Research Council (2008) Severe space weather events: understanding societal and economic impacts: a workshop report. The National Academies Press, Washington. 144. Ng CK, Reames DV (2008) Shock acceleration of solar energetic protons: the first 10 minutes. Astrophys J Lett 686:L123. 145. 
Nunn D (2005) Vlasov hybrid simulation—an efficient and stable algorithm for the numerical simulation of collision-free plasma. Transp Theor Stat Phys 34:151–171. 146. Nunn D, Omura Y, Matsumoto H, Nagano I, Yagitani S (1997) The numerical simulation of VLF chorus and discrete emissions observed on the Geotail satellite using a Vlasov code. J Geophys Res 102:27083–27097. 147. Omidi N (1995) How the bow shock does it. Rev Geophys 33:629–637. 148. Omidi N, Sibeck DG (2007) Flux transfer events in the cusp. Geophys Res Lett 34:L04106. 149. Omidi N, Blanco-Cano X, Russell CT (2005) Macrostructure of collisionless bow shocks: 1. Scale lengths. J Geophys Res 110:A12212. 150. OpenMP Architecture Review Board (2011) OpenMP application program interface—version 3.1. http://www.openmp.org/mp-documents/OpenMP3.1.pdf. Accessed 25 July 2018 151. Palmroth M, Janhunen P, Pulkkinen TI, Peterson WK (2001) Cusp and magnetopause locations in global MHD simulation. J Geophys Res 106:29435–29450. 152. Palmroth M, Pulkkinen TI, Janhunen P, Wu CC (2003) Stormtime energy transfer in global MHD simulation. J Geophys Res 108:1048. 153. Palmroth M, Janhunen P, Germany G, Lummerzheim D, Liou K, Baker DN, Barth C, Weatherwax AT, Watermann J (2006a) Precipitation and total power consumption in the ionosphere: global MHD simulation results compared with Polar and SNOE observations. Ann Geophys 24:861–872. 154. Palmroth M, Janhunen P, Pulkkinen TI (2006b) Hysteresis in solar wind power input to the magnetosphere. Geophys Res Lett 33:L03107. 155. Palmroth M, Laitinen TV, Pulkkinen TI (2006c) Magnetopause energy and mass transfer: results from a global MHD simulation. Ann Geophys 24:3467–3480. 156. Palmroth M, Honkonen I, Sandroos A, Kempf Y, von Alfthan S, Pokhotelov D (2013) Preliminary testing of global hybrid-Vlasov simulation: magnetosheath and cusps under northward interplanetary magnetic field. J Atmos Sol-Terr Phys 99:41–46. 157. 
Palmroth M, Archer M, Vainio R, Hietala H, Pfau-Kempf Y, Hoilijoki S, Hannuksela O, Ganse U, Sandroos A, von Alfthan S, Eastwood JP (2015) ULF foreshock under radial IMF: THEMIS observations and global kinetic simulation Vlasiator results compared. J Geophys Res 120:8782–8798. 158. Palmroth M, Hoilijoki S, Juusola L, Pulkkinen T, Hietala H, Pfau-Kempf Y, Ganse U, von Alfthan S, Vainio R, Hesse M (2017) Tail reconnection in the global magnetospheric context: Vlasiator first results. Ann Geophys 35:1269–1274. 159. Perrone D, Valentini F, Servidio S, Dalena S, Veltri P (2013) Vlasov simulations of multi-ion plasma turbulence in the solar wind. Astrophys J 762:99. 160. Perrone D, Bourouaine S, Valentini F, Marsch E, Veltri P (2014a) Generation of temperature anisotropy for alpha particle velocity distributions in solar wind at 0.3 AU: Vlasov simulations and Helios observations. J Geophys Res 119:2400–2410. 161. Perrone D, Valentini F, Servidio S, Dalena S, Veltri P (2014b) Analysis of intermittent heating in a multi-component turbulent plasma. Eur Phys J D 68:209. 162. Peterson WK, Sharp RD, Shelley EG, Johnson RG, Balsiger H (1981) Energetic ion composition of the plasma sheet. J Geophys Res 86:761–767. 163. Pfau-Kempf Y (2016) Vlasiator—from local to global magnetospheric hybrid-Vlasov simulations. PhD thesis, University of Helsinki. http://urn.fi/URN:ISBN:978-952-336-001-3. Accessed 25 July 2018 164. Pfau-Kempf Y, Hietala H, Milan SE, Juusola L, Hoilijoki S, Ganse U, von Alfthan S, Palmroth M (2016) Evidence for transient, local ion foreshocks caused by dayside magnetopause reconnection. Ann Geophys 34:943–959. 165. Pfau-Kempf Y, Battarbee M, Ganse U, Hoilijoki S, Turc L, von Alfthan S, Vainio R, Palmroth M (2018) On the importance of spatial and velocity resolution in the hybrid-Vlasov modeling of collisionless shocks. Front Phys 6:44. 166. Pinto RF, Rouillard AP (2017) A multiple flux-tube solar wind model. Astrophys J 838:89. 167. 
Pokhotelov D, von Alfthan S, Kempf Y, Vainio R, Koskinen HEJ, Palmroth M (2013) Ion distributions upstream and downstream of the Earth’s bow shock: first results from Vlasiator. Ann Geophys 31:2207–2212. 168. Pritchett PL (2005) Externally driven magnetic reconnection in the presence of a normal magnetic field. J Geophys Res 110:A05209. 169. Pucci F, Vásconez CL, Pezzi O, Servidio S, Valentini F, Matthaeus WH, Malara F (2016) From Alfvén waves to kinetic Alfvén waves in an inhomogeneous equilibrium structure. J Geophys Res 121:1024–1045. 170. Pulkkinen TI, Palmroth M, Tanskanen EI, Janhunen P, Koskinen HEJ, Laitinen TV (2006) New interpretation of magnetospheric energy circulation. Geophys Res Lett 33:L07101. 171. Richer E, Modolo R, Chanteur GM, Hess S, Leblanc F (2012) A global hybrid model for Mercury’s interaction with the solar wind: case study of the dipole representation. J Geophys Res 117:10228. 172. Rieke M, Trost T, Grauer R (2015) Coupled Vlasov and two-fluid codes on GPUs. J Comput Phys 283:436–452. 173. Rodger CJ, Kavanagh AJ, Clilverd MA, Marple SR (2013) Comparison between POES energetic electron precipitation observations and riometer absorptions: implications for determining true precipitation fluxes. J Geophys Res 118:7810–7821. 174. Rönnmark K (1982) WHAMP—waves in homogeneous, anisotropic multicomponent plasmas. Kiruna Geophysical Institute reports. 175. Rossmanith JA, Seal DC (2011) A positivity-preserving high-order semi-Lagrangian discontinuous Galerkin scheme for the Vlasov–Poisson equations. J Comput Phys 230:6203–6232. 176. Sandroos A, Honkonen I, von Alfthan S, Palmroth M (2013) Multi-GPU simulations of Vlasov’s equation using Vlasiator. Parallel Comput 39:306–318. 177. Sandroos A, von Alfthan S, Hoilijoki S, Honkonen I, Kempf Y, Pokhotelov D, Palmroth M (2015) Vlasiator: global kinetic magnetospheric modeling tool. 
In: Numerical modeling of space plasma flows, ASTRONUM-2014, Astronomical Society of the Pacific conference series, vol 498, p 222. 178. Sarrat M, Ghizzo A, Del Sarto D, Serrat L (2017) Parallel implementation of a relativistic semi-Lagrangian Vlasov–Maxwell solver. Eur Phys J D 71:271. 179. Schaeffer J (1998) Convergence of a difference scheme for the Vlasov–Poisson–Fokker–Planck system in one dimension. SIAM J Numer Anal 35:1149–1175. 180. Schaye J, Crain RA, Bower RG, Furlong M, Schaller M, Theuns T, Dalla Vecchia C, Frenk CS, McCarthy IG, Helly JC, Jenkins A, Rosas-Guevara YM, White SDM, Baes M, Booth CM, Camps P, Navarro JF, Qu Y, Rahmati A, Sawala T, Thomas PA, Trayford J (2015) The EAGLE project: simulating the evolution and assembly of galaxies and their environments. Mon Not R Astron Soc 446:521–554. 181. Schmieder B, Archontis V, Pariat E (2014) Magnetic flux emergence along the solar cycle. Space Sci Rev 186:227–250. 182. Schmitz H, Grauer R (2006) Darwin–Vlasov simulations of magnetised plasmas. J Comput Phys 214:738–756. 183. Sergeev VA, Angelopoulos V, Nakamura R (2012) Recent advances in understanding substorm dynamics. Geophys Res Lett 39:L05101. 184. Servidio S, Valentini F, Califano F, Veltri P (2012) Local kinetic effects in two-dimensional plasma turbulence. Phys Rev Lett 108:045001. 185. Servidio S, Osman KT, Valentini F, Perrone D, Califano F, Chapman S, Matthaeus WH, Veltri P (2014) Proton kinetic effects in Vlasov and solar wind turbulence. Astrophys J Lett 781:L27. 186. Servidio S, Valentini F, Perrone D, Greco A, Califano F, Matthaeus WH, Veltri P (2015) A kinetic model of plasma turbulence. J Plasma Phys 81:325810107. 187. Shoucri M (2008) Eulerian codes for the numerical solution of the Vlasov equation. Commun Nonlinear Sci Numer Simul 13:174–182. 188. Sircombe NJ, Arber TD, Dendy RO (2004) Accelerated electron populations formed by Langmuir wave–caviton interactions. Phys Plasmas 12:012303. 189. 
Sokolov IV, Roussev II, Skender M, Gombosi TI, Usmanov AV (2009) Transport equation for MHD turbulence: application to particle acceleration at interplanetary shocks. Astrophys J 696:261–267. 190. Sonnendrücker E, Roche J, Bertrand P, Ghizzo A (1999) The semi-Lagrangian method for the numerical resolution of the Vlasov equation. J Comput Phys 149:201–220. 191. Soucek J, Escoubet CP, Grison B (2015) Magnetosheath plasma stability and ULF wave occurrence as a function of location in the magnetosheath and upstream bow shock parameters. J Geophys Res 120:2838–2850. 192. Spreiter J, Stahara S (1994) Gasdynamic and magnetohydrodynamic modeling of the magnetosheath: a tutorial. Adv Space Res 14:5–19. 193. Springel V (2005) The cosmological simulation code gadget-2. Mon Not R Astron Soc 364:1105–1134. 194. Strang G (1968) On the construction and comparison of difference schemes. SIAM J Numer Anal 5:506–517. 195. Thomas AGR (2016) Vlasov simulations of thermal plasma waves with relativistic phase velocity in a Lorentz boosted frame. Phys Rev E 94:053204. 196. Toledo-Redondo S, André M, Vaivads A, Khotyaintsev YV, Lavraud B, Graham DB, Divin A, Aunai N (2016) Cold ion heating at the dayside magnetopause during magnetic reconnection. Geophys Res Lett 43:58–66. 197. Toro E (2014) Riemann solvers and numerical methods for fluid dynamics: a practical introduction. Springer, Berlin. 198. Tóth G (2000) The $$\nabla \cdot \mathbf{B}=0$$ constraint in shock-capturing magnetohydrodynamics codes. J Comput Phys 161:605–652. 199. Tóth G, Jia X, Markidis S, Peng IB, Chen Y, Daldorff LKS, Tenishev VM, Borovikov D, Haiducek JD, Gombosi TI, Glocer A, Dorelli JC (2016) Extended magnetohydrodynamics with embedded particle-in-cell simulation of Ganymede’s magnetosphere. J Geophys Res 121:1273–1293. 200. Tronci C, Tassi E, Camporeale E, Morrison PJ (2014) Hybrid Vlasov-MHD models: Hamiltonian vs. non-Hamiltonian. Plasma Phys Control Fusion 56:095008. 201. 
Turc L, Fontaine D, Escoubet CP, Kilpua EKJ, Dimmock AP (2017) Statistical study of the alteration of the magnetic structure of magnetic clouds in the Earth’s magnetosheath. J Geophys Res 122:2956–2972. 202. Umeda T (2012) Effect of ion cyclotron motion on the structure of wakes: a Vlasov simulation. Earth Planets Space 64:16. 203. Umeda T, Fukazawa K (2015) A high-resolution global Vlasov simulation of a small dielectric body with a weak intrinsic magnetic field on the K computer. Earth Planets Space 67:49. 204. Umeda T, Ito Y (2014) Entry of solar-wind ions into the wake of a small body with a magnetic anomaly: a global Vlasov simulation. Planet Space Sci 93:35–40. 205. Umeda T, Wada Y (2016) Secondary instabilities in the collisionless Rayleigh–Taylor instability: full kinetic simulation. Phys Plasmas 23:112117. 206. Umeda T, Wada Y (2017) Non-MHD effects in the nonlinear development of the MHD-scale Rayleigh–Taylor instability. Phys Plasmas 24:072307. 207. Umeda T, Togano K, Ogino T (2009) Two-dimensional full-electromagnetic Vlasov code with conservative scheme and its application to magnetic reconnection. Comput Phys Commun 180:365–374. 208. Umeda T, Miwa J, Matsumoto Y, Nakamura TKM, Togano K, Fukazawa K, Shinohara I (2010a) Full electromagnetic Vlasov code simulation of the Kelvin–Helmholtz instability. Phys Plasmas 17:052311. 209. Umeda T, Togano K, Ogino T (2010b) Structures of diffusion regions in collisionless magnetic reconnection. Phys Plasmas 17:052103. 210. Umeda T, Kimura T, Togano K, Fukazawa K, Matsumoto Y, Miyoshi T, Terada N, Nakamura TKM, Ogino T (2011) Vlasov simulation of the interaction between the solar wind and a dielectric body. Phys Plasmas 18:012908. 211. Umeda T, Ito Y, Fukazawa K (2013) Global Vlasov simulation on magnetospheres of astronomical objects. J Phys: Conf Ser 454:012005. 212. Umeda T, Ueno S, Nakamura TKM (2014) Ion kinetic effects on nonlinear processes of the Kelvin–Helmholtz instability. 
Plasma Phys Control Fusion 56:075006. 213. Usami S, Horiuchi R, Ohtani H, Den M (2013) Development of multi-hierarchy simulation model with non-uniform space grids for collisionless driven reconnection. Phys Plasmas 20:061208. 214. Vainio R, Pönni A, Battarbee M, Koskinen HEJ, Afanasiev A, Laitinen T (2014) A semi-analytical foreshock model for energetic storm particle events inside 1 AU. J Space Weather Space Clim 4:A08. 215. Valentini F, Trávníček P, Califano F, Hellinger P, Mangeney A (2007) A hybrid-Vlasov model based on the current advance method for the simulation of collisionless magnetized plasma. J Comput Phys 225:753–770. 216. Valentini F, Califano F, Veltri P (2010) Two-dimensional kinetic turbulence in the solar wind. Phys Rev Lett 104:205002. 217. Valentini F, Perrone D, Veltri P (2011) Short-wavelength electrostatic fluctuations in the solar wind. Astrophys J 739:54. 218. Valentini F, Servidio S, Perrone D, Califano F, Matthaeus WH, Veltri P (2014) Hybrid Vlasov–Maxwell simulations of two-dimensional turbulence in plasmas. Phys Plasmas 21:082307. 219. Valentini F, Perrone D, Stabile S, Pezzi O, Servidio S, De Marco R, Marcucci F, Bruno R, Lavraud B, De Keyser J, Consolini G, Brienza D, Sorriso-Valvo L, Retinò A, Vaivads A, Salatti M, Veltri P (2016) Differential kinetic dynamics and heating of ions in the turbulent solar wind. New J Phys 18:125001. 220. van Marle AJ, Casse F, Marcowith A (2018) On magnetic field amplification and particle acceleration near non-relativistic astrophysical shocks: particles in MHD cells simulations. Mon Not R Astron Soc 473:3394–3409. 221. Vásconez CL, Valentini F, Camporeale E, Veltri P (2014) Vlasov simulations of kinetic Alfvén waves at proton kinetic scales. Phys Plasmas 21:112107. 222. Vásconez CL, Pucci F, Valentini F, Servidio S, Matthaeus WH, Malara F (2015) Kinetic Alfvén wave generation by large-scale phase mixing. Astrophys J 815:7. 223. 
Vay JL, Geddes C, Cormier-Michel E, Grote D (2011) Numerical methods for instability mitigation in the modeling of laser wakefield accelerators in a Lorentz-boosted frame. J Comput Phys 230:5908–5929. 224. Verdini A, Velli M, Matthaeus WH, Oughton S, Dmitruk P (2010) A turbulence-driven model for heating and acceleration of the fast wind in coronal holes. Astrophys J Lett 708:L116. 225. Verronen PT, Seppälä A, Clilverd MA, Rodger CJ, Kyrölä E, Enell CF, Ulich T, Turunen E (2005) Diurnal variation of ozone depletion during the October–November 2003 solar proton events. J Geophys Res. 226. Verscharen D, Marsch E, Motschmann U, Müller J (2012) Kinetic cascade beyond magnetohydrodynamics of solar wind turbulence in two-dimensional hybrid simulations. Phys Plasmas 19:022305. 227. Vlasov AA (1961) Many-particle theory and its application to plasma. Gordon & Breach, New YorkGoogle Scholar 228. Vogman G (2016) Fourth-order conservative Vlasov–Maxwell solver for Cartesian and cylindrical phase space coordinates. PhD thesis, University of California in Berkeley. https://escholarship.org/uc/item/1c49t97t. Accessed 25 July 2018 229. von Alfthan S, Pokhotelov D, Kempf Y, Hoilijoki S, Honkonen I, Sandroos A, Palmroth M (2014) Vlasiator: first global hybrid-Vlasov simulations of Earth’s foreshock and magnetosheath. J Atmos Sol-Terr Phys 120:24–35. 230. Watermann J, Wintoft P, Sanahuja B, Saiz E, Poedts S, Palmroth M, Milillo A, Metallinou FA, Jacobs C, Ganushkina NY, Daglis IA, Cid C, Cerrato Y, Balasis G, Aylward AD, Aran A (2009) Models of solar wind structures and their interaction with the Earth’s space environment. Space Sci Rev 147:233–270. 231. Weinstock J (1969) Formulation of a statistical theory of strong plasma turbulence. Phys Fluids 12:1045–1058. 232. Wettervik BS, DuBois TC, Siminos E, Fülöp T (2017) Relativistic Vlasov–Maxwell modelling using finite volumes and adaptive mesh refinement. Eur Phys J D 71:157. 233. 
Wik M, Viljanen A, Pirjola R, Pulkkinen A, Wintoft P, Lundstedt H (2008) Calculation of geomagnetically induced currents in the 400 kV power grid in southern Sweden. Space Weather 6:07005. 234. Wright AN, Russell AJB (2014) Alfvén wave boundary condition for responsive magnetosphere–ionosphere coupling. J Geophys Res 119:3996–4009. 235. Yang LP, Feng XS, Xiang CQ, Liu Y, Zhao X, Wu ST (2012) Time-dependent MHD modeling of the global solar corona for year 2007: driven by daily-updated magnetic field synoptic data. J Geophys Res 117:A08110. 236. Ye H, Morrison PJ (1992) Action principles for the Vlasov equation. Phys Fluids B 4:771–777. 237. Yee K (1966) Numerical solution of intial boundary value problems involving Maxwell’s equations in isotropic media. IEEE Trans Ant Prop 14:302–307. 238. Zenitani S, Umeda T (2014) Some remarks on the diffusion regions in magnetic reconnection. Phys Plasmas 21:034503. 239. Zerroukat M, Allen T (2012) A three-dimensional monotone and conservative semi-Lagrangian scheme (SLICE-3D) for transport problems. Quart J R Meteorol Soc 138:1640–1651. 240. Zhang M, Feng X (2016) A comparative study of divergence cleaning methods of magnetic field in the solar coronal numerical simulation. FrASS 3:6.
https://doc.sagemath.org/html/en/reference/combinat/sage/combinat/crystals/induced_structure.html
# Induced Crystals

We construct a crystal structure on a set induced by a bijection $$\Phi$$.

AUTHORS:

• Travis Scrimshaw (2014-05-15): Initial implementation

class sage.combinat.crystals.induced_structure.InducedCrystal(X, phi, inverse)

A crystal induced from an injection. Let $$X$$ be a set, let $$C$$ be a crystal, and consider any injection $$\Phi : X \to C$$. We induce a crystal structure on $$X$$ by considering $$\Phi$$ to be a crystal morphism. Alternatively, we can induce a crystal structure on some (sub)set of $$X$$ by considering an injection $$\Phi : C \to X$$ regarded as a crystal morphism. This form is also useful when the set $$X$$ is not explicitly known.

INPUT:

• X – the base set
• phi – the map $$\Phi$$
• inverse – (optional) the inverse map $$\Phi^{-1}$$
• from_crystal – (default: False) if the induced structure is of the second type $$\Phi : C \to X$$

EXAMPLES:

We construct a crystal structure on Gelfand-Tsetlin patterns by going through their bijection with semistandard tableaux:

```
sage: D = crystals.Tableaux(['A',3], shapes=PartitionsInBox(4,3))
sage: G = GelfandTsetlinPatterns(4, 3)
sage: phi = lambda x: D(x.to_tableau())
sage: phi_inv = lambda x: G(x.to_tableau())
sage: I = crystals.Induced(G, phi, phi_inv)
sage: I.digraph().is_isomorphic(D.digraph(), edge_labels=True)
True
```

Now we construct the above example but inducing the structure going the other way (from tableaux to Gelfand-Tsetlin patterns). This can also give us more information coming from the crystal.
```
sage: D2 = crystals.Tableaux(['A',3], shapes=PartitionsInBox(4,1))
sage: G2 = GelfandTsetlinPatterns(4, 1)
sage: phi2 = lambda x: D2(x.to_tableau())
sage: phi2_inv = lambda x: G2(x.to_tableau())
sage: I2 = crystals.Induced(D2, phi2_inv, phi2, from_crystal=True)
sage: I2.module_generators
([[0, 0, 0, 0], [0, 0, 0], [0, 0], [0]],
 [[1, 0, 0, 0], [1, 0, 0], [1, 0], [1]],
 [[1, 1, 0, 0], [1, 1, 0], [1, 1], [1]],
 [[1, 1, 1, 0], [1, 1, 1], [1, 1], [1]],
 [[1, 1, 1, 1], [1, 1, 1], [1, 1], [1]])
```

We check an example when the codomain is larger than the domain (although here the crystal structure is trivial):

```
sage: P = Permutations(4)
sage: D = crystals.Tableaux(['A',3], shapes=Partitions(4))
sage: T = crystals.TensorProduct(D, D)
sage: phi = lambda p: T(D(RSK(p)[0]), D(RSK(p)[1]))
sage: phi_inv = lambda d: RSK_inverse(d[0].to_tableau(), d[1].to_tableau(), output='permutation')
sage: all(phi_inv(phi(p)) == p for p in P)  # Check it really is the inverse
True
sage: I = crystals.Induced(P, phi, phi_inv)
sage: I.digraph()
Multi-digraph on 24 vertices
```

We construct an example without a specified inverse map:

```
sage: X = Words(2,4)
sage: L = crystals.Letters(['A',1])
sage: T = crystals.TensorProduct(*[L]*4)
sage: Phi = lambda x : T(*[L(i) for i in x])
sage: I = crystals.Induced(X, Phi)
sage: I.digraph()
Digraph on 16 vertices
```

class Element

An element of an induced crystal.

e(i)

Return $$e_i$$ of self.

EXAMPLES:

```
sage: D = crystals.Tableaux(['A',3], shapes=PartitionsInBox(4,3))
sage: G = GelfandTsetlinPatterns(4, 3)
sage: phi = lambda x: D(x.to_tableau())
sage: phi_inv = lambda x: G(x.to_tableau())
sage: I = crystals.Induced(G, phi, phi_inv)
sage: elt = I([[1, 1, 0, 0], [1, 1, 0], [1, 0], [1]])
sage: elt.e(1)
sage: elt.e(2)
[[1, 1, 0, 0], [1, 1, 0], [1, 1], [1]]
sage: elt.e(3)
```

epsilon(i)

Return $$\varepsilon_i$$ of self.
EXAMPLES:

```
sage: D = crystals.Tableaux(['A',3], shapes=PartitionsInBox(4,3))
sage: G = GelfandTsetlinPatterns(4, 3)
sage: phi = lambda x: D(x.to_tableau())
sage: phi_inv = lambda x: G(x.to_tableau())
sage: I = crystals.Induced(G, phi, phi_inv)
sage: elt = I([[1, 1, 0, 0], [1, 1, 0], [1, 0], [1]])
sage: [elt.epsilon(i) for i in I.index_set()]
[0, 1, 0]
```

f(i)

Return $$f_i$$ of self.

EXAMPLES:

```
sage: D = crystals.Tableaux(['A',3], shapes=PartitionsInBox(4,3))
sage: G = GelfandTsetlinPatterns(4, 3)
sage: phi = lambda x: D(x.to_tableau())
sage: phi_inv = lambda x: G(x.to_tableau())
sage: I = crystals.Induced(G, phi, phi_inv)
sage: elt = I([[1, 1, 0, 0], [1, 1, 0], [1, 0], [1]])
sage: elt.f(1)
[[1, 1, 0, 0], [1, 1, 0], [1, 0], [0]]
sage: elt.f(2)
sage: elt.f(3)
[[1, 1, 0, 0], [1, 0, 0], [1, 0], [1]]
```

phi(i)

Return $$\varphi_i$$ of self.

EXAMPLES:

```
sage: D = crystals.Tableaux(['A',3], shapes=PartitionsInBox(4,3))
sage: G = GelfandTsetlinPatterns(4, 3)
sage: phi = lambda x: D(x.to_tableau())
sage: phi_inv = lambda x: G(x.to_tableau())
sage: I = crystals.Induced(G, phi, phi_inv)
sage: elt = I([[1, 1, 0, 0], [1, 1, 0], [1, 0], [1]])
sage: [elt.phi(i) for i in I.index_set()]
[1, 0, 1]
```

weight()

Return the weight of self.

EXAMPLES:

```
sage: D = crystals.Tableaux(['A',3], shapes=PartitionsInBox(4,3))
sage: G = GelfandTsetlinPatterns(4, 3)
sage: phi = lambda x: D(x.to_tableau())
sage: phi_inv = lambda x: G(x.to_tableau())
sage: I = crystals.Induced(G, phi, phi_inv)
sage: elt = I([[1, 1, 0, 0], [1, 1, 0], [1, 0], [1]])
sage: elt.weight()
(1, 0, 1, 0)
```

cardinality()

Return the cardinality of self.
EXAMPLES:

```
sage: P = Permutations(4)
sage: D = crystals.Tableaux(['A',3], shapes=Partitions(4))
sage: T = crystals.TensorProduct(D, D)
sage: phi = lambda p: T(D(RSK(p)[0]), D(RSK(p)[1]))
sage: phi_inv = lambda d: RSK_inverse(d[0].to_tableau(), d[1].to_tableau(), output='permutation')
sage: I = crystals.Induced(P, phi, phi_inv)
sage: I.cardinality() == factorial(4)
True
```

class sage.combinat.crystals.induced_structure.InducedFromCrystal(X, phi, inverse)

A crystal induced from an injection: we induce a crystal structure on some (sub)set of $$X$$ by considering an injection $$\Phi : C \to X$$ regarded as a crystal morphism.

See also: InducedCrystal

INPUT:

• X – the base set
• phi – the map $$\Phi$$
• inverse – (optional) the inverse map $$\Phi^{-1}$$

EXAMPLES:

We construct a crystal structure on generalized permutations with a fixed first row by using RSK:

```
sage: C = crystals.Tableaux(['A',3], shape=[2,1])
sage: def psi(x):
....:     ret = RSK_inverse(x.to_tableau(), Tableau([[1,1],[2]]))
....:     return (tuple(ret[0]), tuple(ret[1]))
sage: psi_inv = lambda x: C(RSK(*x)[0])
sage: I = crystals.Induced(C, psi, psi_inv, from_crystal=True)
```

class Element

An element of an induced crystal.

e(i)

Return $$e_i$$ of self.

EXAMPLES:

```
sage: D = crystals.Tableaux(['A',3], shapes=PartitionsInBox(4,1))
sage: G = GelfandTsetlinPatterns(4, 1)
sage: def phi(x): return G(x.to_tableau())
sage: def phi_inv(x): return D(G(x).to_tableau())
sage: I = crystals.Induced(D, phi, phi_inv, from_crystal=True)
sage: elt = I([[1, 1, 0, 0], [1, 1, 0], [1, 0], [1]])
sage: elt.e(1)
sage: elt.e(2)
[[1, 1, 0, 0], [1, 1, 0], [1, 1], [1]]
sage: elt.e(3)
```

epsilon(i)

Return $$\varepsilon_i$$ of self.
EXAMPLES:

```
sage: D = crystals.Tableaux(['A',3], shapes=PartitionsInBox(4,1))
sage: G = GelfandTsetlinPatterns(4, 1)
sage: def phi(x): return G(x.to_tableau())
sage: def phi_inv(x): return D(G(x).to_tableau())
sage: I = crystals.Induced(D, phi, phi_inv, from_crystal=True)
sage: elt = I([[1, 1, 0, 0], [1, 1, 0], [1, 0], [1]])
sage: [elt.epsilon(i) for i in I.index_set()]
[0, 1, 0]
```

f(i)

Return $$f_i$$ of self.

EXAMPLES:

```
sage: D = crystals.Tableaux(['A',3], shapes=PartitionsInBox(4,1))
sage: G = GelfandTsetlinPatterns(4, 1)
sage: def phi(x): return G(x.to_tableau())
sage: def phi_inv(x): return D(G(x).to_tableau())
sage: I = crystals.Induced(D, phi, phi_inv, from_crystal=True)
sage: elt = I([[1, 1, 0, 0], [1, 1, 0], [1, 0], [1]])
sage: elt.f(1)
[[1, 1, 0, 0], [1, 1, 0], [1, 0], [0]]
sage: elt.f(2)
sage: elt.f(3)
[[1, 1, 0, 0], [1, 0, 0], [1, 0], [1]]
```

phi(i)

Return $$\varphi_i$$ of self.

EXAMPLES:

```
sage: D = crystals.Tableaux(['A',3], shapes=PartitionsInBox(4,1))
sage: G = GelfandTsetlinPatterns(4, 1)
sage: def phi(x): return G(x.to_tableau())
sage: def phi_inv(x): return D(G(x).to_tableau())
sage: I = crystals.Induced(D, phi, phi_inv, from_crystal=True)
sage: elt = I([[1, 1, 0, 0], [1, 1, 0], [1, 0], [1]])
sage: [elt.epsilon(i) for i in I.index_set()]
[0, 1, 0]
```

weight()

Return the weight of self.

EXAMPLES:

```
sage: D = crystals.Tableaux(['A',3], shapes=PartitionsInBox(4,1))
sage: G = GelfandTsetlinPatterns(4, 1)
sage: def phi(x): return G(x.to_tableau())
sage: def phi_inv(x): return D(G(x).to_tableau())
sage: I = crystals.Induced(D, phi, phi_inv, from_crystal=True)
sage: elt = I([[1, 1, 0, 0], [1, 1, 0], [1, 0], [1]])
sage: elt.weight()
(1, 0, 1, 0)
```

cardinality()

Return the cardinality of self.
EXAMPLES:

```
sage: C = crystals.Tableaux(['A',3], shape=[2,1])
sage: def psi(x):
....:     ret = RSK_inverse(x.to_tableau(), Tableau([[1,1],[2]]))
....:     return (tuple(ret[0]), tuple(ret[1]))
sage: psi_inv = lambda x: C(RSK(*x)[0])
sage: I = crystals.Induced(C, psi, psi_inv, from_crystal=True)
sage: I.cardinality() == C.cardinality()
True
```
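Outside of Sage, the pattern these classes implement — transporting partial crystal operators through a bijection $$\Phi$$ as $$\Phi^{-1} \circ e_i \circ \Phi$$ — can be sketched in a few lines of plain Python. The toy set, operator, and names below are illustrative only and are not part of the Sage API:

```python
# Induce a partial operator on a set X through a bijection phi : X -> C,
# mirroring crystals.Induced: the induced operator is phi_inv ∘ op ∘ phi.
def induced(op, phi, phi_inv):
    def new_op(x):
        y = op(phi(x))
        return None if y is None else phi_inv(y)
    return new_op

# C: the integers 0..3 with a partial "raising" operator e
def e(n):
    return n + 1 if n < 3 else None

# X: two-bit strings, with phi reading them as binary numbers
phi = lambda s: int(s, 2)
phi_inv = lambda n: format(n, "02b")

e_X = induced(e, phi, phi_inv)
print(e_X("01"))  # 10
print(e_X("11"))  # None
```

Here `None` plays the role of the undefined result of a crystal operator, just as in the Sage doctests above where `elt.e(1)` prints nothing.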
https://agenda.infn.it/event/5106/
# Z' production in a modified MSSM

## by Dr Gennaro Corcella (INFN LNF)

Category: 5. Theoretical Physics (CSN4)
Timezone: Europe/Rome
Venue: Aula Seminari (LNF), Via Enrico Fermi, 40, 00044 Frascati (Roma)

Description: The talk will discuss Z' production at the LHC in U(1)' models and in the Sequential Standard Model. Z' decays into supersymmetric particles will be taken into account, paying special attention to the modes yielding final states with leptons and missing energy.
https://byjus.com/rd-sharma-solutions/class-12-maths-chapter-29-the-plane-exercise-29-11/
# RD Sharma Solutions Class 12 The Plane Exercise 29.11

#### Practise This Question

Varun travelled 1/8th of the journey, which is equal to 56 km, on a scooter. What is the remaining distance he has to cover?
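The practice question above is plain arithmetic and can be checked in two lines (a sketch; the fraction is read from the problem statement as one-eighth):

```python
fraction_covered = 1 / 8       # part of the journey done by scooter
distance_covered = 56          # km covered so far

total_distance = distance_covered / fraction_covered  # whole journey, in km
remaining = total_distance - distance_covered         # distance still to cover
print(total_distance, remaining)  # 448.0 392.0
```

So the remaining distance is 392 km.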
https://www.byteflying.com/archives/4062
# C# LeetCode Solutions: #389 – Find the Difference

Given two strings s and t consisting of only lowercase letters, where t is generated by randomly shuffling string s and then adding one more letter at a random position, find the letter that was added to t.

Input:
s = "abcd"
t = "abcde"

Output: e

Explanation: 'e' is the letter that was added.

```
public class Program
{
    public static void Main(string[] args)
    {
        var s = "ecb";
        var t = "beca";
        var res = FindTheDifference(s, t);
        Console.WriteLine(res);

        s = "loveleetcode";
        t = "loveleetxcode";
        res = FindTheDifference2(s, t);
        Console.WriteLine(res);
    }

    // Sort both strings and compare them position by position; the first
    // mismatch (or the last character of t) is the added letter.
    private static char FindTheDifference(string s, string t)
    {
        var cs = s.ToArray();
        Array.Sort(cs);
        var ct = t.ToArray();
        Array.Sort(ct);
        var i = 0;
        for (; i < cs.Length; i++)
        {
            if (cs[i] != ct[i]) return ct[i];
        }
        return ct[i];
    }

    // Count the characters of s in a dictionary, then subtract the
    // characters of t; the character whose count goes negative (or which
    // is missing from the dictionary) is the added letter.
    private static char FindTheDifference2(string s, string t)
    {
        var dic = new Dictionary<char, int>();
        foreach (var c in s)
        {
            if (dic.ContainsKey(c))
            {
                dic[c]++;
            }
            else
            {
                dic[c] = 1;
            }
        }
        foreach (var c in t)
        {
            if (dic.ContainsKey(c))
            {
                dic[c]--;
                if (dic[c] < 0) return c;
            }
            else
            {
                return c;
            }
        }
        return ' ';
    }
}
```

The program prints:

```
a
x
```

The time complexity of FindTheDifference depends on the sorting algorithm used; FindTheDifference2 runs in O(n) time.
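As an aside not in the original post, the same problem also has a constant-space solution: XOR-ing the character codes of both strings cancels every paired character and leaves only the added one. A sketch in Python:

```python
def find_the_difference(s: str, t: str) -> str:
    """XOR all character codes of s and t; paired characters cancel,
    leaving only the code of the letter that was added to t."""
    x = 0
    for ch in s + t:
        x ^= ord(ch)
    return chr(x)

print(find_the_difference("abcd", "abcde"))  # e
```

This runs in O(n) time like the dictionary version, but needs only a single integer of extra state.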
https://terrytao.wordpress.com/advice-on-writing-papers/motivate-the-paper/
A scrupulous writer, in every sentence that he writes, will ask himself at least four questions, thus:

1. What am I trying to say?
2. What words will express it?
3. What image or idiom will make it clearer?
4. Is this image fresh enough to have an effect?

(George Orwell, “Politics and the English Language”, 1946)

A paper should not just be a sequence of formulae or logical steps. It should also be organized and motivated in such a way that the reader is always aware what the near-term and long-term objectives of any portion of the paper are, how the current arguments are advancing towards these goals, how crucial they are to those goals, and why the claimed results at each step are at least plausible (or if they are surprising, to indicate exactly why and how they are surprising). Informal, heuristic, or motivational reasoning is therefore very welcome, but should be clearly indicated as such to distinguish it from formal, rigorous reasoning (for instance, these portions of the paper can be placed in remarks or footnotes).

At the start of each section, it is often a good idea to give a brief paragraph describing the purpose of that section. For instance, if a section is devoted to proving a key milestone in the paper, the milestone can be stated near the start of the section, next to a discussion as to why this milestone is important and perhaps a brief sketch as to how one is going to prove it in this section.

Before presenting your most general result, it can help to first discuss a less technical special case or “toy” result to give some flavour of the significance of the main result, and also of the strategy of proof. This can be worthwhile even if this toy result was already known in the literature.
For instance, it often happens that the key to generalising the proof of the toy result to the more general result was to re-interpret an existing proof of the former in a way that generalised to the latter, and discussing this re-interpretation near the beginning of the paper can be enormously clarifying to readers.
https://link.springer.com/referenceworkentry/10.1007%2F978-3-662-53605-6_266-2
# Encyclopedia of Continuum Mechanics, Living Edition | Editors: Holm Altenbach, Andreas Öchsner

# Lagrange Multipliers in Infinite Dimensional Spaces, Examples of Application

• A. Bersani
• F. dell’Isola
• P. Seppecher

Living reference work entry
DOI: https://doi.org/10.1007/978-3-662-53605-6_266-2

## Definitions

The Lagrange multipliers method is used in mathematical analysis, in mechanics, in economics, and in several other fields to deal with the search for the global maximum or minimum of a function in the presence of a constraint. The usual technique, applied to the case of finite-dimensional systems, transforms the constrained optimization problem into an unconstrained one by means of the introduction of one or more multipliers and of a suitable Lagrangian function, to be optimized. In mechanics, several optimization problems can be applied to infinite-dimensional systems. The Lagrange multipliers method can be applied also to these cases.

## Introduction

In this entry we show that the theorem of Lagrange multipliers in infinite-dimensional systems (dell’Isola and Di Cosmo, 2018) can be a very powerful tool for dealing with constrained problems also in infinite-dimensional spaces. This tool is powerful but must be used...

## References

1. Dautray R, Lions J-L (2012) Mathematical analysis and numerical methods for science and technology: volume 3, spectral theory and applications. Springer-Verlag, Berlin/Heidelberg
2. Della Corte A, dell’Isola F, Seppecher P (2015) The postulations à la D‘Alembert and à la Cauchy for higher gradient continuum theories are equivalent: a review of existing results. Proc R Soc A Math Phys Eng Sci 471(2183):20150415
3. dell’Isola F, Di Cosmo F (2018) Lagrange multipliers in infinite-dimensional systems, methods of. In: Altenbach H, Öchsner A (eds) Encyclopedia of continuum mechanics.
Springer, Berlin/Heidelberg
4. dell'Isola F, Madeo A, Seppecher P (2012) How contact interactions may depend on the shape of Cauchy cuts in Nth gradient continua: approach "à la D'Alembert". Z Angew Math Phys 63(6):1119–1141
5. dell'Isola F, Madeo A, Seppecher P (2016) Cauchy tetrahedron argument applied to higher contact interactions. Arch Ration Mech Anal 219(3):1305–1341
6. Forest S, Cordero NM, Busso EP (2011) First vs. second gradient of strain theory for capillarity effects in an elastic fluid at small length scales. Comput Mater Sci 50(4):1299–1304
7. Germain P (1973) La méthode des puissances virtuelles en mécanique des milieux continus. J Mécanique 12:236–274
8. Glüge R (2018) Continuum mechanics basics, introduction and notations. In: Altenbach H, Öchsner A (eds) Encyclopedia of continuum mechanics. Springer, Berlin/Heidelberg
9. Lagrange JL (1853) Mécanique analytique, vol 1. Mallet-Bachelier, Paris
10. Mindlin RD (1964) Micro-structure in linear elasticity. Arch Ration Mech Anal 16(1):51–78
11. Rudin W (1987) Real and complex analysis. McGraw-Hill, New York
12. Schwartz L (1957) Théorie des distributions, vol 2. Hermann, Paris
13. Schweizer B (2018) On Friedrichs inequality, Helmholtz decomposition, vector potentials, and the div-curl lemma. In: Rocca E, Stefanelli U, Truskinovsky L, Visintin A (eds) Trends in applications of mathematics to mechanics. Springer INdAM series, vol 21. Springer, Cham, pp 65–79

© Springer-Verlag GmbH Germany, part of Springer Nature 2020

## Authors and Affiliations

1. Department of Mechanical and Aerospace Engineering, Sapienza University, Rome, Italy
2. Dipartimento di Ingegneria Strutturale e Geotecnica, Università degli Studi di Roma "La Sapienza", Rome, Italy
3. Dipartimento di Ingegneria Civile, Edile-Architettura e Ambientale, Università degli Studi dell'Aquila, L'Aquila, Italy
4. International Research Center for the Mathematics and Mechanics of Complex Systems, L'Aquila, Italy
5. Institut de Mathématiques, Université de Toulon, Toulon, France

## Section editors and affiliations

• F. dell'Isola

1. Dipartimento di Ingegneria Strutturale e Geotecnica, Università degli Studi di Roma "La Sapienza", Rome, Italy
2. Dipartimento di Ingegneria Civile, Edile-Architettura e Ambientale, Università degli Studi dell'Aquila, L'Aquila, Italy
3. International Research Center for the Mathematics and Mechanics of Complex Systems, L'Aquila, Italy
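The finite-dimensional version of the method summarized in the Definitions section can be illustrated with a short self-contained sketch. The objective, constraint, and numbers below are our own hypothetical example, not taken from the entry:

```python
# Maximize f(x, y) = x*y subject to g(x, y) = x + y - 8 = 0.
# The Lagrangian L = f - lam*g yields the stationarity conditions
# grad f = lam * grad g together with the constraint g = 0:
#   y = lam, x = lam, x + y = 8  =>  x = y = lam = 4.
def grad_f(x, y):
    return (y, x)

def grad_g(x, y):
    return (1.0, 1.0)

x, y, lam = 4.0, 4.0, 4.0
gfx, gfy = grad_f(x, y)
ggx, ggy = grad_g(x, y)
assert (gfx, gfy) == (lam * ggx, lam * ggy)  # stationarity of the Lagrangian
assert x + y - 8.0 == 0.0                    # constraint is satisfied
print(x * y)  # constrained maximum of f
```

The same stationarity structure carries over to the infinite-dimensional case treated in the entry, with gradients replaced by suitable functional derivatives.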
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.804919958114624, "perplexity": 7855.87098888492}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670743.44/warc/CC-MAIN-20191121074016-20191121102016-00375.warc.gz"}
https://chemistry.stackexchange.com/questions/6337/clarifications-on-thermodynamic-cell-efficiency
# Clarifications on thermodynamic cell efficiency

The thermodynamic efficiency of any cell (especially fuel cells) is given as $$\frac{\Delta G}{\Delta H} \times 100$$ I understood this partly: since $\Delta G$ is the useful work obtained in the ideal case, the efficiency expression must have it in the numerator. But I fail to see the intuition (or maths) behind the $\Delta H$ term in the denominator. I have two questions regarding this:

1. On applying this equation to ordinary cells (not fuel cells), for a general reaction $$\ce{A^+ {(aq)} + B {(s)} -> A {(s)} + B^+ {(aq)}}$$ the $\Delta G$ term is given as $-nF E_{\text{cell}}$, where $E_{\text{cell}}$ depends on the concentration of the two ionic species as given by the Nernst equation $\left(E=E^\circ-\frac{RT}{nF}\ln\frac{[\ce B^+]}{[\ce A^+]}\right)$. On the other hand, is the $\Delta H$ term independent of the concentration? And if not, how would it (and consequently the efficiency) depend on the ionic concentrations?

2. For fuel cells, suppose the cell reaction is such that $\Delta S$ is positive. Since $\Delta G=\Delta H -T\Delta S$, we would have $|\Delta G|>|\Delta H|$, because the second term in the equation for $\Delta G$ becomes negative due to the positive entropy change. Would this predict an efficiency greater than 100%?

Answer:

1. ${\Delta G}$ and ${\Delta H}$ are normally expressed on a per-mole basis. Your assertion that ${\Delta G}$ is related to concentration is not precise. It is related to the ratio of concentrations, and as such the incongruity with ${\Delta H}$ is not troubling.
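A small numeric sketch may make the efficiency definition concrete. The standard-state values below are the well-known literature values for the hydrogen fuel-cell reaction H2 + ½O2 → H2O(l); the script itself is our illustration, not part of the original question, and the concentration ratio in the Nernst step is arbitrary:

```python
import math

# Well-known standard-state values for H2 + 1/2 O2 -> H2O(l) at 298 K
dG = -237.1e3  # J/mol, Gibbs free energy change (maximum electrical work)
dH = -285.8e3  # J/mol, enthalpy change

# Thermodynamic efficiency = dG / dH (both negative, so the ratio is positive)
efficiency = dG / dH
print(round(100 * efficiency, 1))  # the classic ~83% ideal fuel-cell efficiency

# Standard cell potential from dG = -n*F*E0 (n = 2 electrons here); the Nernst
# equation then shifts E with the concentration ratio, e.g. a ratio of 10:
R, T, F, n = 8.314, 298.15, 96485.0, 2
E0 = -dG / (n * F)
E = E0 - (R * T) / (n * F) * math.log(10.0)
print(round(E0, 3), round(E, 3))  # E < E0 for a ratio > 1
```

Note that the Nernst correction changes E (and hence ΔG) through the concentration ratio only, which is the point made in the answer above.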
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.989817202091217, "perplexity": 295.7943505718873}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358847.80/warc/CC-MAIN-20211129225145-20211130015145-00338.warc.gz"}
https://blender.stackexchange.com/questions/60583/how-to-create-a-sub-category-or-folder-in-t-panel-tab/60604
# How to create a sub-category or folder in T-Panel tab

I used to have a problem with the cluttered tabs in my T-Panel, but not anymore, since I discovered how to edit the code of a tab in the T-Panel (right-click on any button of the tab contents and select "Edit Source"), find the "bl_category" value in it, and change it to create a new category or to move the item to an existing one.

This works beautifully, but I have a problem; some addons have several components and I would like to group those in a "sub-category" or folder (I am not sure of the right term). For example, I have organized all the addons into a number of tabs (see screenshot), including a Misc tab, where I include all the new addons that don't fall into any other category. In Misc I have the "Mesh Align Plus" addon, which is made up of six quicktools entries; I would like to group those six entries into a single "Mesh Align Plus" category of its own within "Misc", creating a kind of folder. Is that possible at all?

Sept

• nested collapsible layouts are - as far as I am aware - at the moment unsupported. You can only segment in tabs and panels currently. – aliasguru Aug 9 '16 at 19:54

The sections you are referring to that can be collapsed and expanded are called panels. You may have noticed that when you alter the addon you find the bl_category a few lines under a class definition that has Panel in brackets at the end of the line.

We have some control over the placement of panels (not 100%). The small dots to the right of a panel allow us to drag it up and down to get a different order, and the order that we place the panels in gets saved with the rest of the interface settings. This means you need to enable the addons that you want, Save User Settings so that they are enabled on startup, then re-arrange the panels the way you want them, and then File->Save Startup File so that the layout settings are used every time you start Blender.
You may also want to ensure the Load UI option under the File preferences is disabled so that your startup layout is used instead of the layout saved with the file. To get more than that you will need to write your own addon. Yes you can copy the code of another addon, but the changes needed mean that you will need to re-structure the code again after any updates - so consider this option as your custom addon where someone else writes most of the code for you. Each panel is defined by a class that is a subclass of bpy.types.Panel. The draw method of the panel class defines what is displayed inside the panel, you can take the contents of an existing panel's draw method and combine them into one panel that you have defined. For some panels you may also need some of the surrounding code that the panel relies on. As shown in this answer you can add a custom property and use that to show a disclosure triangle that can contain the items from another panel to get the "sub-folder" layout that you are looking for. You can find an example addon here that provides the following panel. • Thank you for your answer. But my question is, more specifically, if I can create sub-category within the Misc category in which to contain the several entries of a single addon. Regards, Sept – Dunno Aug 9 '16 at 13:45 • @Dunno - I have expanded the answer to include an example of making your own addon. – sambler Aug 10 '16 at 10:34 • Thanks a million, sambler; that helped a lot and certainly answered my question; and my apologies for posting in the answer, rather than in the comments, section. Sept – Dunno Aug 10 '16 at 19:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40423107147216797, "perplexity": 964.2894843052926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250595787.7/warc/CC-MAIN-20200119234426-20200120022426-00047.warc.gz"}
http://faculty.engr.utexas.edu/torresverdin/publications/detection-and-quantification-3d-hydraulic-fractures-vertical-borehole
# Detection and quantification of 3D hydraulic fractures with vertical borehole induction resistivity measurements. ### Citation: K. Yang, Torres-Verdín, C., and Yilmaz, A. E., “Detection and quantification of 3D hydraulic fractures with vertical borehole induction resistivity measurements.,” Geophysics, vol. 81, no. 4, pp. E259-E264, 2016.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9392359256744385, "perplexity": 21149.736592738438}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794866511.32/warc/CC-MAIN-20180524151157-20180524171157-00365.warc.gz"}
https://pandapower.readthedocs.io/en/develop/shortcircuit/run.html
# Running a Short-Circuit Calculation

The short-circuit calculation is carried out with the calc_sc function:

    pandapower.shortcircuit.calc_sc(net, fault='3ph', case='max', lv_tol_percent=10, topology='auto', ip=False, ith=False, tk_s=1.0, kappa_method='C', r_fault_ohm=0.0, x_fault_ohm=0.0, branch_results=False, check_connectivity=True, return_all_currents=False)

Calculates minimal or maximal symmetrical short-circuit currents. The calculation is based on the method of the equivalent voltage source according to DIN/IEC EN 60909. The initial short-circuit alternating current ikss is the basis of the short-circuit calculation and is therefore always calculated. Other short-circuit currents can be calculated from ikss with the conversion factors defined in DIN/IEC EN 60909. The output is stored in the net.res_bus_sc table as a short-circuit current for each bus.

INPUT:

- net (pandapowerNet) - pandapower network
- fault (str, "3ph") - type of fault:
  - "3ph" for three-phase short-circuits
  - "2ph" for two-phase short-circuits
  - "1ph" for single-phase ground faults
- case (str, "max"):
  - "max" for maximal current calculation
  - "min" for minimal current calculation
- lv_tol_percent (int, 10) - voltage tolerance in low voltage grids:
  - 6 for 6% voltage tolerance
  - 10 for 10% voltage tolerance
- ip (bool, False) - if True, calculate the aperiodic short-circuit current ip
- ith (bool, False) - if True, calculate the equivalent thermal short-circuit current Ith
- topology (str, "auto") - meshing option (only relevant for ip and ith):
  - "meshed" - it is assumed all buses are supplied over multiple paths
  - "radial" - it is assumed all buses are supplied over exactly one path
  - "auto" - a topology check is performed for each bus to see if it is supplied over multiple paths
- tk_s (float, 1) - failure clearing time in seconds (only relevant for ith)
- r_fault_ohm (float, 0) - fault resistance in Ohm
- x_fault_ohm (float, 0) - fault reactance in Ohm
- branch_results (bool, False) - defines if short-circuit results should also be generated for branches
- return_all_currents (bool, False) - applies only if branch_results=True; if True, short-circuit currents for each (branch, bus) tuple are returned, otherwise only the max/min is returned

EXAMPLE:

    calc_sc(net)
    print(net.res_bus_sc)

    import pandapower.shortcircuit as sc
    import pandapower.networks as nw
    net = nw.mv_oberrhein()
    net.ext_grid["s_sc_min_mva"] = 100
    net.ext_grid["rx_min"] = 0.1
    net.line["endtemp_degree"] = 20
    sc.calc_sc(net, case="min")
    print(net.res_bus_sc)
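For orientation, the "equivalent voltage source" method of IEC 60909 referenced above reduces, for a single fault location, to ikss = c · Un / (√3 · Zk), where c is the voltage correction factor tied to the voltage-tolerance setting. The stand-alone sketch below illustrates that formula only; the function name and numbers are our own and are not part of the pandapower API:

```python
import math

# Minimal sketch of the initial symmetrical short-circuit current per IEC 60909:
# ikss = c * Un / (sqrt(3) * Zk). Names and numbers here are illustrative.
def ikss_ka(un_kv, zk_ohm, c=1.1):
    """Initial short-circuit current in kA for nominal voltage un_kv (kV)
    behind an equivalent short-circuit impedance zk_ohm (Ohm)."""
    return c * un_kv / (math.sqrt(3) * zk_ohm)

# Example: a 20 kV bus behind 2 Ohm, with c = 1.1 for the "max" case.
print(round(ikss_ka(20.0, 2.0), 3))  # kA
```

In calc_sc itself, c is chosen internally from case and lv_tol_percent, and the resulting currents land in net.res_bus_sc.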
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7344934940338135, "perplexity": 24905.51324330217}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400192783.34/warc/CC-MAIN-20200919173334-20200919203334-00086.warc.gz"}
https://gitter.im/sympy/sympy?at=5fe253a42084ee4b786839b4
Karl H
Ok... So, can cases of sympy.compatibility.integer_types be replaced with Python 3's built-in int class?

Aaron Meurer @asmeurer
Yes. Also FWIW sympy.core.compatibility is only designed for internal use in SymPy, so using it in an external library is not recommended. We don't make any backwards compatibility guarantees for that module.

Karl H
Ok, I'll do that then, and hopefully it works. Ok. I'm not the core maintainer, so I don't have any control over that. Oh, I assume the same holds true for sympy.compatibility.string_types and Python 3's str class, correct?

Aaron Meurer @asmeurer
Yes

Thomas Aarholt @thomasaarholt
Hiya, I'm about to submit a PR for cupy support on top of #20271. Just a quick question: the numeric computation docs mention Theano support, but I was unable to get that to work. Does it currently work?

Thomas Aarholt @thomasaarholt
Scratch that, a quick search among the GH repo issues shows that it should work fine. I wonder where I went wrong!
AbhishekS78 @AbhishekS78
Hello everyone, can anyone tell me if SymPy is participating in GSoC 2021 or not? If yes, can anyone show me the direction for GSoC 2021 for the SymPy organization? I am very much interested in contributing.

petschge @petschge
Any ideas why I run into the issue reported at sympy/sympy#20629?

kunalsingh2002 @kunalsingh2002
Hello mentors, I'm Kunal Singh, a 2nd year computer science undergraduate from IIT Kharagpur, India. I'm very much interested in contributing to SymPy but I'm unable to find a starting point. Can anyone help me with this?

Suryam Arnav Kalra @suryam35
Hi developers! I am Suryam Arnav Kalra, a second year undergraduate student in the Computer Science and Engineering department at IIT Kharagpur, India. I am a beginner in open source and I am interested in contributing to SymPy. It would be really great if you could help me get a starting point to begin my journey.

Hi @kunalsingh2002 @suryam35, you guys can start with https://github.com/sympy/sympy/wiki/Introduction-to-contributing for starters, and then you can refer to the docs https://docs.sympy.org/latest/index.html. For the later part, refer to https://github.com/sympy/sympy/wiki/Development-workflow.

Thomas Aarholt @thomasaarholt
Hiya. Given func = sp.lambdify(args, expr), is it possible to print the "string" that lambdify compiles, i.e. the expression that is eventually called by numpy (or mpmath)?

Aaron Meurer @asmeurer
@thomasaarholt yes, you can look at inspect.getsource(func), or just func?? in IPython.

Craig Russell @ctr26
"Does anyone know how to give sympy known identities to help it with tricky integrals? I'm doing an integral across all space for a Bessel function and I have a neat identity which I think would help sympy to solve it." The manual integrate module was suggested, but the identity I have is for a definite integral, and I don't think manual integrate works with definite integral rules?
Eva Tiwari @evatiwari
I have a doubt which is not exactly SymPy related, but it might help me out with an issue. What is the difference between the tensor product of two matrices and that of two bras/kets? Why is it that the dagger of one does not change the order, while that of the other does?

Eva Tiwari @evatiwari
@ctr26 could you open an issue for the same? There is a function called heurisch(), which deals with Bessel functions, but it is only for indefinite integrals and is based off of a slow algorithm. I did not find any option which is as good as manual integrate (which, you are right, does not work with rules for definite integrals).

Craig Russell @ctr26
Have done, thanks. What does it mean when heurisch never returns an output?

Eva Tiwari @evatiwari
@ctr26 it hangs; on interrupting the process it shows the traceback.

Craig Russell @ctr26
Lambda_0*((-r + x/M)*exp((-(-r + x/M)**2 - y**2/M**2)/(2*\sigma_g**2)) - (r + x/M)*exp((-(r + x/M)**2 - y**2/M**2)/(2*\sigma_g**2)))**2/(2*pi*M**2*\sigma_g**6*(exp((-(-r*sin(2*pi/m) + y/M)**2 - (-r*cos(2*pi/m) + x/M)**2)/(2*\sigma_g**2)) + exp((-(-r*sin(4*pi/m) + y/M)**2 - (-r*cos(4*pi/m) + x/M)**2)/(2*\sigma_g**2))))
and
16*Lambda_0*(-M*x_0_tau + x)**2*(M*lambda*besselj(1, 2*pi*n_a*sqrt(M**2*x_0_tau**2 + M**2*y_0_tau**2 - 2*M*x*x_0_tau - 2*M*y*y_0_tau + x**2 + y**2)/(M*lambda)) - pi*n_a*sqrt(M**2*x_0_tau**2 + M**2*y_0_tau**2 - 2*M*x*x_0_tau - 2*M*y*y_0_tau + x**2 + y**2)*besselj(0, 2*pi*n_a*sqrt(M**2*x_0_tau**2 + M**2*y_0_tau**2 - 2*M*x*x_0_tau - 2*M*y*y_0_tau + x**2 + y**2)/(M*lambda)))**2/(pi*lambda**2*(M**2*x_0_tau**2 + M**2*y_0_tau**2 - 2*M*x*x_0_tau - 2*M*y*y_0_tau + x**2 + y**2)**3)
Trying to integrate either of these across infinity in x:
sympy.Integral(integrand, (x, -sympy.oo, sympy.oo)).doit()
Stack traces, respectively: https://pastebin.com/FTafTjF9 https://pastebin.com/u9NTQPib
Neither converges." @evatiwari

Craig Russell @ctr26
These stack traces after running it for a bit longer: https://pastebin.com/SCHykTyR
https://pastebin.com/WRgXuTHu

Eva Tiwari @evatiwari
I'll try looking into this. If I find something I'll let you know.

Shreyas M S @Shreyas-MS
Hi, I'm trying to write a parser that converts expressions from my custom expression language to sympy-friendly expressions. Does anyone have any resources that would help me? My end goal is to convert expressions written in my language and translate them to something I can evaluate in sympy/numpy.

Nicolai Prebensen
Hi. I am having some trouble implementing inverse_phi (Euler's totient) with prime factorization for very large numbers. Does sympy have any inverse-totient functionality, or could someone possibly point me in the right direction for implementing this using SymPy? Given (p-1)(q-1) I want to find n, for phi(n) = (p-1)(q-1). I.e.: phi(n) = 24 --> n = pq = 35, except I need it to handle numbers as large as 4529255040439033800342855653030016000000000++

Hi, is there some way to access the function which is to be differentiated, i.e. the argument of the Derivative? E.g. in y = Derivative(x**2 + 2, x), how can I access x**2 + 2?

Kalevi Suominen @jksuom
In [13]: y.args[0]
Out[13]: x**2 + 2

Thanks a lot!! :)

Vishesh Mangla @Teut2711
I want to solve a BVP numerically with steps. This is my relation. 1 < i, j < 5. The number of equations is huge. How can I make them and then solve with the help of sympy?

shardul semwal @Shardul555
Hello all, I was adding a test case for an issue where I encountered an assertion error; I do not have much idea about it. Can someone guide me how it can be resolved?
Traceback (most recent call last):
  File "/home/shardul/sympy/sympy/series/tests/test_series.py", line 208, in test_issue_9173
    assert Q.series(y, n=3) == b_2*y**2 + b_1*y + b_0 + O(y**3)
AssertionError

Vishesh Mangla @Teut2711
Why don't you use the debugger @Shardul555?

shardul semwal @Shardul555
@XtremeGood okay, I was unaware of it, thank you.
JSS95 @JSS95
Hi, is there any way to convert an expression containing a power, like expr = (x^2 * y^3), to an unevaluated Mul like Mul(x, x, y, y, y, evaluate=False)?

Sayandip Halder @sayandip18
Can anyone tell me if it is possible to pull changes from an old PR? Two of my PRs were from a system which has broken down and now I'm unable to work on them.

Aitik Gupta @aitikgupta
@sayandip18 You can always fetch from someone's (in your case, your own) fork, and check out that branch.

Sayandip Halder @sayandip18
Thanks!

Sayandip Halder @sayandip18
Suppose I've defined a SymPy function of the type f(a-x) or f(x-a), for example Heaviside(4-x). Is there any way to access this a-x (here, 4-x) from outside?

shardul semwal @Shardul555
I ran the code for this expression in the terminal, having installed sympy version 1.7.1, but here the coefficients of y**2 and y are not simplified to the final answer. Can anyone guide me regarding this issue, whether it is done with some purpose or should be fixed? (The SymPy Live shell is giving the final simplified answer.)

>>> from sympy import *
>>> from sympy.abc import y
>>> var('p_0 p_1 p_2 p_3 b_0 b_1 b_2')
(p_0, p_1, p_2, p_3, b_0, b_1, b_2)
>>> Q = (p_0 + (p_1 + (p_2 + p_3/y)/y)/y)/(1 + ((p_3/(b_0*y) + (b_0*p_2 - b_1*p_3)/b_0**2)/y + (b_0**2*p_1 - b_0*b_1*p_2 - p_3*(b_0*b_2 - b_1**2))/b_0**3)/y)
>>> Q.series(y, n=3)
y*(b_0*p_2/p_3 + b_0*(-p_2/p_3 + b_1/b_0)) + y**2*(b_0*p_1/p_3 + b_0*p_2*(-p_2/p_3 + b_1/b_0)/p_3 + b_0*(-p_1/p_3 + (p_2/p_3 - b_1/b_0)**2 + b_1*p_2/(b_0*p_3) + b_2/b_0 - b_1**2/b_0**2)) + b_0 + O(y**3)
>>> simplify(y*(b_0*p_2/p_3 + b_0*(-p_2/p_3 + b_1/b_0)) + y**2*(b_0*p_1/p_3 + b_0*p_2*(-p_2/p_3 + b_1/b_0)/p_3 + b_0*(-p_1/p_3 + (p_2/p_3 - b_1/b_0)**2 + b_1*p_2/(b_0*p_3) + b_2/b_0 - b_1**2/b_0**2)) + b_0 + O(y**3))
b_2*y**2 + b_1*y + b_0 + O(y**3)

Sayandip Halder @sayandip18
solve(x**2 - y**2, [x,y]) returns [(-y, y), (y, y)]. Shouldn't the answer be [(-y,y), (-x,x)] instead?
Mohit Dilip Makwana @mohitdmak
Hey all! I am a CSE fresher from BITS Pilani, and new to open source. I like the features provided by SymPy and would want to know how I can contribute to it. I have a fair amount of experience with Python, C++, and the framework Django. If any mentor here can help me with how to start, it would be a great help!

Suryam Arnav Kalra @suryam35
Hi @mohitdmak, you can start with https://github.com/sympy/sympy/wiki/Introduction-to-contributing for starters, and then you can refer to the docs https://docs.sympy.org/latest/index.html. For the later part, refer to https://github.com/sympy/sympy/wiki/Development-workflow.

Sidharth Mundhra @0sidharth
Can anyone give me some hints on how I can go about debugging a RecursionError (for specifics, sympy/sympy#9449)? Any hints you can give would be great as I'm seeing this part of the codebase for the first time. Should I focus on the statements before the recursion happens or during (I have tried printing values during both but still wasn't able to figure anything out)?

Hey folks. Possibly n00b question. I'm interested in SymPy for its symbolic simplification features, using it to rearrange complicated expressions built by sometimes naive automation. I'm leaving everything fully symbolic, so none of the Symbols or Functions have assigned values or concrete implementations. I've had great luck using SymPy this way using parse_expr() with a bit of transformation logic to ensure the expressions are well-formatted in ways SymPy expects. I am stumped by one thing, though. Some of my expressions contain functions that are logical (boolean), so I get expressions like f(x) & g(y). But the default transformations turn these into Functions, which aren't considered Booleans, and thus cause TypeErrors when used as arguments to BooleanFunctions like And() or Not(). Is there an accepted way to declare a Function-like symbol that counts as a Boolean type for these purposes?
Is there some better way to achieve this?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5444499254226685, "perplexity": 3727.8927134108008}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662561747.42/warc/CC-MAIN-20220523194013-20220523224013-00594.warc.gz"}
http://www.gtagaming.com/gtagaming/news/archive.php?p=200915&viewpollresults=1
News Archive

GTAGaming.com - Website Updates | @ 07:18 AM CST | By Neil

As many will be aware, GTAGaming has been running an RSS feed of all latest news entries for some time. For many, this is the most convenient way to be notified of new updates on the GTA series. However, others prefer to use the social network/micro-blogging service Twitter to view the latest news from their favorite websites. To satisfy demand from users, GTAGaming is now also available on Twitter. You'll be able to read the latest headlines from the site there as soon as they are posted, and the full stories are just a click away.

GTA: Chinatown Wars - General News | @ 09:34 PM CST | By Slim Trashman

Patrick Brown is back with some incredible new fan artwork inspired by GTA: Chinatown Wars. You can check out more of Patrick's GTA artwork via his deviantART page.

GTA: Chinatown Wars - General News | @ 09:04 PM CST | By zombienm

For one of the Nintendo DS's highest-rated games ever, Grand Theft Auto: Chinatown Wars sold a startlingly low 90,000 units in its first month on store shelves. Last week there was controversy over whether the game sold 400,000+ or only 200,000 units, but it appears both of those numbers were greatly overestimated. This number pales in comparison to Grand Theft Auto IV, released almost a year ago, which sold over 6 million copies in its first week alone. However, history has shown that games on Nintendo platforms tend to start out slow yet go on to generate huge revenues over time.
Grand Theft Auto IV - Community News | @ 09:10 AM CST | By Ash_735

YouTube member weses2 posted a brilliant video back in January 2009 titled "The Feeling of Liberty" and, after receiving much praise for it, he then brought it to our attention on the forums. Personally, I think it deserves a spot on the front page, so here it is for those of you who haven't seen it:
http://math.stackexchange.com/questions/169464/markov-chain-supplementary-litterature
# Markov chain supplementary literature I'm studying Markov chains through Durrett and I'm finding it quite hard to read. Does anyone have a good idea for a supplementary book, preferably one at his level of generality and which studies some of the same concepts (I need to account for Durrett, so I can't just switch to some other literature). I've done measure theory and measure-theoretic probability up to things like the law of large numbers, central limit theorems, conditional expectations, martingales and optional sampling theorems - so I reckon the level of complexity fits to be approximately the next step, and hopefully it's just that he is hard to read. Hope someone is able to help; I've been struggling with this quite a bit. best regards, - Markov Chains by Norris –  Artem Jul 11 '12 at 13:38 I'll take a look at that, thanks! Any other suggestions? –  Henrik Jul 12 '12 at 9:50 There are a lot of other books. I personally prefer A First Course in Stochastic Processes by Karlin and Taylor, mainly because they 1) do not use any measure theory 2) still give a detailed and rigorous exposition. –  Artem Jul 12 '12 at 13:39 Actually, the courses I have followed and will follow are very measure-theory heavy, so that approach falls most naturally to me, I think. I'll check them out for intuition though. –  Henrik Jul 13 '12 at 0:23
http://wss.sd73.bc.ca/mod/book/view.php?id=8686&chapterid=608
## Writer's Notebook

### Inquire and Explore

#### Context

Read the following story:

When did you get here?
A minute ago.
What happened?
There? In the bedroom?
Yes.
Dead?
I think so.
Who?
You.
What?
No.
Yes.
How did you know?
Wife.
Mine.
Yes.

Most readers will be able to figure out what happened. But how? Readers think about the clues (pieces of evidence or information) and the context (the situation in which something happens).
https://bibbase.org/network/publication/ratliff-borghuis-kao-sterling-balasubramanian-retinaisstructuredtoprocessanexcessofdarknessinnaturalscenes-2010
Retina is structured to process an excess of darkness in natural scenes. Ratliff, C. P., Borghuis, B. G., Kao, Y.-H., Sterling, P., & Balasubramanian, V. Proceedings of the National Academy of Sciences, 107(40):17368-17373, 2010. Retinal ganglion cells that respond selectively to a dark spot on a brighter background (OFF cells) have smaller dendritic fields than their ON counterparts and are more numerous. OFF cells also branch more densely, and thus collect more synapses per visual angle. That the retina devotes more resources to processing dark contrasts predicts that natural images contain more dark information. We confirm this across a range of spatial scales and trace the origin of this phenomenon to the statistical structure of natural scenes. We show that the optimal mosaics for encoding natural images are also asymmetric, with OFF elements smaller and more numerous, matching retinal structure. Finally, the concentration of synapses within a dendritic field matches the information content, suggesting a simple principle to connect a concrete fact of neuroanatomy with the abstract concept of information: equal synapses for equal bits.
@article{Ratliff:2010aa,
  title   = {Retina is structured to process an excess of darkness in natural scenes},
  author  = {Ratliff, Charles P and Borghuis, Bart G and Kao, Yen-Hong and Sterling, Peter and Balasubramanian, Vijay},
  journal = {Proceedings of the National Academy of Sciences},
  year    = {2010},
  volume  = {107},
  number  = {40},
  pages   = {17368-17373},
  url     = {http://www.pnas.org/content/107/40/17368.abstract}
}
https://indico.ph.tum.de/event/6906/
Seminars/Colloquia

# Beyond the Cosmological Standard Model

## by Prof. Subir Sarkar (Oxford University)

Europe/Berlin Zoom

#### Zoom

https://mppmu.zoom.us/j/91649974442?pwd=SWhWQ2VPaVVKM09BekVMUGNUd1k1Zz09

Description

The ΛCDM model is based on the assumption that the Universe is isotropic and homogeneous on large scales. That the CMB exhibits a large dipole anisotropy is explained as due to our ‘peculiar’ (non-Hubble) motion because of local inhomogeneities in the matter distribution. There should then be a corresponding dipole in the skymap of high redshift sources. We find however that the observed dipole in the distribution of quasars does not match what is expected. This calls into question the standard practice of boosting to the ‘cosmic rest frame’ (in which the Universe is supposedly isotropic) to analyse cosmological data. In the heliocentric frame where observations are actually made, the acceleration of the Hubble expansion rate is also anisotropic. It cannot therefore be interpreted as due to a Cosmological Constant (vacuum energy) and is likely an artefact due to our being ‘non-Copernican’ observers.
https://www.math.ias.edu/seminars/abstract?event=37692
# Measuring Shape With Homology

WORKSHOP ON TOPOLOGY: IDENTIFYING ORDER IN COMPLEX SYSTEMS
Topic: Measuring Shape With Homology
Speaker: Robert MacPherson
Affiliation: School of Mathematics, Institute for Advanced Study
Date: Wednesday, April 7
Time/Room: 3:30pm - 4:30pm/S-101
Video Link: https://video.ias.edu/math-topologyrdm

The ordinary homology of a subset S of Euclidean space depends only on its topology. By systematically organizing homology of neighborhoods of S, we get quantities that measure the shape of S, rather than just its topology. These quantities can be used to define a new notion of fractional dimension of S. They can also be effectively calculated on a computer. We will illustrate this by presenting computations on sets S that are topologically trees (and therefore have trivial ordinary homology). Examples include branched polymers, diffusion limited aggregation, and self avoiding random walk.
http://physics.stackexchange.com/questions/14477/heat-exchange-depending-on-coolant-flow-direction
# Heat exchange depending on coolant flow direction Consider the simplest case of a heat exchanger - two parallel pipes of flowing liquids (say, hot and cold) that have physical contact along some part of their length. Hot water of a certain temperature goes from A to B. Cold water can go either from C to D or from D to C. Assume that heat exchange between the liquids occurs only where the pipes contact (at the XY part). What is the favorable direction (meaning "the most heat is transferred") of the coolant flow relative to the hot flow - in the same direction (C->D) or in the reverse direction (D->C)? How are the coolant and hot flows' temperatures distributed along the pipe contact? - You want to make sure that the heat flow is always between fluids at the nearest possible temperature, to minimize the entropy production from the flow of heat. The best method is to flow the cold coolant in the opposite direction to the hot fluid, so that as the cold coolant gets hotter, it is exchanging heat with correspondingly hotter parts of the hot stream. If you adjust the pipes right, and make them long, you can do the entire circuit with as close to zero entropy generation as you like. This is the principle by which ducks can send blood to their feet and do so without losing any significant body heat in the feet, although they are immersed in cold water. For the temperature profile, if you have the hot water be 100 degrees, and the cold water be 0 degrees, and you have a long linear pipe where they touch, then the profile can be exactly linear with a 1 degree difference in temperature at all points, assuming that the heat diffusion constant for the metal is constant over the range of temperature, and the specific heat of water is constant for the range of temperature, and both of these are close enough approximations for a practical heat exchanger.
If the pipe runs from 0 to L, the profile for the hot water temperature is: $T_H(x)= {100x\over L}$ The profile for the cold water is: $T_C(x) = {100x\over L} - 1$ so that the difference between them is always 1 degree. You adjust the flow rate so that the heat transfer moves C units of heat energy in a length $L/100$, where C is the specific heat of water, and then the exchanger works with this profile. You can adjust the temperatures to be as close to each other as you like, and the entropy gain from the heat flow is: $\Delta S = {Q\over T^2} \delta T \approx \Delta S_0 {1\over 230}$ where you use the absolute temperature T in the denominator, so that in this system you only make about 1 percent of the entropy you would if you let the hot and cold water transfer heat by direct contact without an exchanger. These profiles are universal attractors, so if you just set up the appropriate flow rate, you will approach the linear profile with time. - My idea was that to maximize the heat exchange, one must maximize the temperature difference along the contact profile. The maximal difference of coolant temperatures is exhibited on their entry into the exchanger, thus the input flows must be in the same direction. As for the temperature profile in this case, I expected the temperature curve to decay exponentially, as the energy flow is proportional to the temperature difference, and this implies the exponential solution. Why are our deductions so different? – mbaitoff Sep 10 '11 at 10:33 @mbaitoff: Your idea is exactly the opposite of the correct one. Your exchanger would be maximally inefficient (but have an exponential profile), and the exchanger described above is maximally efficient. Think about it this way: I can send water in at 100 degrees, have it cool down to 0 degrees, and come back to 99 degrees by doing one cycle around the long-linear exchanger. I lose next to none of the original heat by the cycling process. In your exchanger, you would lose everything.
– Ron Maimon Sep 10 '11 at 16:48 I think we have a misunderstanding about the exchanger schematics here. I assume two distinct pipes, and never assume a closed cycle of fluid. I can't imagine what you mean by "come back to 99 degrees". I'm going to post a descriptive picture of my problem. – mbaitoff Sep 10 '11 at 19:07 There, I updated the question with the picture. What's your idea now? – mbaitoff Sep 10 '11 at 19:24 @mbaitoff Suppose that you have (as in Ron's example) hot water at 100 degrees and cold water at 0 degrees. If the flow rates are equal, your idea would end up with the hot water being cooled to 50 degrees; Ron's idea would end up with the hot water being cooled to 1 degree, maximizing the heat transfer. – mmc Sep 10 '11 at 19:44
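mmc's closing numbers can be reproduced with the standard effectiveness-NTU relations for two streams of equal heat-capacity rate — a textbook result assumed here, not something derived in the thread; a minimal sketch:

```python
import math

def outlet_temps(t_hot_in, t_cold_in, ntu, counterflow):
    """Effectiveness-NTU model for two streams with equal heat-capacity rates (Cr = 1)."""
    if counterflow:
        eff = ntu / (1.0 + ntu)                    # counterflow: effectiveness -> 1 for long pipes
    else:
        eff = (1.0 - math.exp(-2.0 * ntu)) / 2.0   # parallel flow: effectiveness capped at 0.5
    q = eff * (t_hot_in - t_cold_in)               # heat moved, per unit capacity rate
    return t_hot_in - q, t_cold_in + q             # (hot outlet, cold outlet)

# A long exchanger (NTU = 99) fed 100-degree hot water against 0-degree cold water:
print(outlet_temps(100.0, 0.0, 99.0, counterflow=True))   # hot stream leaves near 1 degree
print(outlet_temps(100.0, 0.0, 99.0, counterflow=False))  # hot stream leaves near 50 degrees
```

With the same NTU, the counterflow arrangement cools the hot stream to about 1 degree while the parallel arrangement can never do better than 50 degrees — exactly the comparison made in the comments.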
https://artofproblemsolving.com/wiki/index.php?title=1983_IMO_Problems/Problem_1&diff=98022&oldid=98021
# 1983 IMO Problems/Problem 1

## Problem

Find all functions $f$ defined on the set of positive reals which take positive real values and satisfy the conditions:

(i) $f(xf(y))=yf(x)$ for all positive $x, y$;

(ii) $f(x)\to 0$ as $x\to\infty$.

## Solution 1

Let $x=y=1$ and we have $f(f(1))=f(1)$. Now, let $x=1$, $y=f(1)$ and we have $f(f(f(1)))=f(1)^2$; since $f(f(f(1)))=f(f(1))=f(1)$ we have $f(1)=f(1)^2$, so $f(1)=1$. Plug in $y=x$ and we have $f(xf(x))=xf(x)$, so $xf(x)$ is a fixed point of $f$. If $1$ is the only solution to $f(t)=t$ then we have $xf(x)=1$, i.e. $f(x)=\frac{1}{x}$. We prove that this is the only function by showing that there does not exist any other fixed point: Suppose there did exist such an $a\neq 1$ with $f(a)=a$. Then, letting $x=y=a$ in the functional equation yields $f(a^2)=a^2$. Then, letting $x=\frac{1}{a}$, $y=a$ yields $f(1)=af\left(\frac{1}{a}\right)$, so $f\left(\frac{1}{a}\right)=\frac{1}{a}$. Notice that since $a\neq 1$, one of $a, \frac{1}{a}$ is greater than $1$. Let $b$ equal the one that is greater than $1$. Then, we find similarly (since $f(b)=b$) that $f(b^2)=b^2$. Putting $x=y=b^2$ into the equation yields $f(b^4)=b^4$. Repeating this process we find that $f\left(b^{2^n}\right)=b^{2^n}$ for all natural $n$. But, since $b>1$, as $n\to\infty$, we have that $b^{2^n}\to\infty$, which contradicts the fact that $f(x)\to 0$ as $x\to\infty$.

## Solution 2

Let $x=1$ so $f(f(y))=yf(1)$. If $f(a)=f(b)$ then $af(1)=f(f(a))=f(f(b))=bf(1)$, so $a=b$ because $f$ goes to the positive reals, so $f(1)$ can't be $0$. Hence, $f$ is injective. Let $x=y=1$ so $f(f(1))=f(1)$, so by injectivity $f(1)=1$ and $1$ is a fixed point of $f$. Then, let $y=x$ so $f(xf(x))=xf(x)$, so $xf(x)$ is a fixed point of $f$. We claim $1$ is the only fixed point of $f$. Suppose for the sake of contradiction that $a\neq 1$ and $b$ be fixed points of $f$, so $f(a)=a$ and $f(b)=b$. Then, setting $x=a$, $y=b$ in (i) gives $f(ab)=ab$, so $ab$ is also a fixed point of $f$. Also, let $x=\frac{1}{a}$, $y=a$ so $f(1)=af\left(\frac{1}{a}\right)$, so $\frac{1}{a}$ is a fixed point of $f$. If $a>1$, then each $a^{2^n}$ is a fixed point of $f$, contradicting (ii). If $a<1$, then $\frac{1}{a}>1$, so each $\left(\frac{1}{a}\right)^{2^n}$ is a fixed point, contradicting (ii). Hence, the only fixed point is $1$, so $xf(x)=1$, so $f(x)=\frac{1}{x}$, and we can easily check that this solution works.
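The answer $f(x)=\frac{1}{x}$ can also be spot-checked numerically; a minimal sketch (a sanity check of conditions (i) and (ii), not a proof):

```python
# Sanity check (not a proof) that f(x) = 1/x satisfies both conditions
f = lambda t: 1.0 / t

# Condition (i): f(x f(y)) = y f(x), since f(x/y) = y/x, for all positive x, y
for x in (0.5, 1.0, 2.0, 7.3):
    for y in (0.25, 1.0, 3.0, 10.0):
        assert abs(f(x * f(y)) - y * f(x)) < 1e-12

# Condition (ii): f(x) -> 0 as x -> infinity
print(f(1e9))   # 1e-09
```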
http://www.maplesoft.com/support/help/Maple/view.aspx?path=ArrayTools/RemoveSingletonDimensions
ArrayTools - Maple Programming Help

ArrayTools

RemoveSingletonDimensions — remove singleton Array dimensions

Calling Sequence

RemoveSingletonDimensions(A)

Parameters

A - Matrix, Vector, or Array

Description

• The RemoveSingletonDimensions command removes the singleton dimensions from an Array, while preserving the order of the data. A new Array is created without any singleton dimensions, and the data is copied over.

• All 1-D Matrices will be squeezed into column Vectors, regardless of which dimension is the singleton.

• This function is part of the ArrayTools package, so it can be used in the short form RemoveSingletonDimensions(..) only after executing the command with(ArrayTools). However, it can always be accessed through the long form of the command by using ArrayTools[RemoveSingletonDimensions](..).

Examples

> with(ArrayTools):
> A := Array(1..1, 1..2, 1..1, 1..3);

    A := [ 1..1 x 1..2 x 1..1 x 1..3 Array
           Data Type: anything
           Storage: rectangular
           Order: Fortran_order ]                          (1)

> B := RemoveSingletonDimensions(A);

    B := [ 0  0  0 ]
         [ 0  0  0 ]                                       (2)

> rtable_dims(B);

    1..2, 1..3                                             (3)
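For readers more familiar with NumPy than Maple, `numpy.squeeze` performs the analogous operation; a minimal sketch of the same example (one caveat: unlike Maple's promise to return a column Vector for a 1-D Matrix, NumPy returns a plain 1-D array):

```python
import numpy as np

# Analogue of the Maple example: a 1 x 2 x 1 x 3 Array of zeros
A = np.zeros((1, 2, 1, 3))

# np.squeeze drops every axis of length 1 while preserving the data order,
# much like ArrayTools[RemoveSingletonDimensions]
B = np.squeeze(A)
print(B.shape)   # (2, 3)
```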
https://zbmath.org/?q=an:06463407
# zbMATH — the first resource for mathematics

Laws of mission-based programming. (English) Zbl 1331.68054

Summary: Safety-Critical Java (SCJ) is a recent technology that changes the execution and memory model of Java in such a way that applications can be statically analysed and certified for their real-time properties and safe use of memory. Our interest is in the development of comprehensive and sound techniques for the formal specification, refinement, design, and implementation of SCJ programs, using a correct-by-construction approach. As part of this work, we present here an account of laws and patterns that are of general use for the refinement of SCJ mission specifications into designs of parallel handlers, as they are used in the SCJ programming paradigm. Our refinement notation is a combination of languages from the Circus family, supporting state-rich reactive models with the addition of class objects and real-time properties. Starting from a sequential and centralised Circus specification, our laws permit refinement into Circus models of SCJ program designs. Automation and proof of the refinement laws is examined here, too. Our work is an important step towards eliciting laws of programming for SCJ and fits into a refinement strategy that we have developed previously to derive SCJ programs from specifications in a rigorous manner.

##### MSC:

68N30 Mathematical aspects of software engineering (specification, verification, metrics, requirements, etc.)
68N19 Other programming paradigms (object-oriented, sequential, concurrent, automatic, etc.)

##### Keywords:

SCJ; models; refinement; laws; patterns; automation; proof; Circus

##### Software:

ArcAngelC; Circus; CirCUs; Z; ZRC
http://www.cs.york.ac.uk/circus/publications/techreports/ [47] Zeyda, F; Lalkhumsanga, L; Cavalcanti, A; Wellings, A, circus models for safety-critical Java programs, Comput J, 57, 1046-1091, (2013) [48] Zeyda, F; Oliveira, M; Cavalcanti, A, Mechanised support for sound refinement tactics, Formal Aspects Comput, 24, 127-160, (2012) · Zbl 1242.68077 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
http://www.mathcasts.org/mtwiki/PmwikiMath/JsMath
# Using PmWiki to Display Mathematics - jsMath in PmWiki

Under construction (no kidding huh?)

1. You need to install the cookbook for jsMath for PmWiki. See here.
2. You will get an icon for jsMath on your shortcuts bar.

About the icon: the icon is position number 1000, so it usually appears at the end. The instruction for it is in the jsMath.php cookbook and not in config.php like other added icons. The image filename is math.gif and it is in pub/guiedit like all the other icons.

Samples and how to get them ...

Basic math: f(x)=\sqrt{x}+\frac{x}{3}

Code: {$f(x)=\sqrt{x}+\frac{x}{3}$}

{$ is the code that shows that jsMath code is starting. f(x)=\sqrt{x}+\frac{x}{3} is the jsMath code itself. A backslash is the symbol that a jsMath (LaTeX) command will start: \LaTeX_Command. Here there are two commands: \sqrt{} - the square root sign, and \frac{}{} - the fraction maker. Parameters of a command are surrounded by braces {}. The command \sqrt{} has one parameter, i.e. the text inside the square root. The command \frac{}{} has two parameters, i.e. the numerator and the denominator. Spacing in the jsMath code is ignored. To add spaces you must use: \,

$} is the code that shows that jsMath code has ended.

The quadratic formula: \bbox[border:2px green dotted,2pt]{x_{1,2} = {{ - b \,\pm\, \sqrt {b^2 \,-\, 4 \cdot a \cdot c} } \over {2 \cdot a}}}

Code: {$\bbox[border:2px green dotted,2pt]{x_{1,2} = {{ - b \,\pm\, \sqrt {b^2 \,-\, 4 \cdot a \cdot c} } \over {2 \cdot a}}}$}

(You can increase the size of the font in PmWiki so that the whole thing is bigger.)

Linear system: \left\{ \begin{array}{c} \,\,x + 2y = 3 \\ 3x - 2y = 1 \\ \end{array} \right.

Code: {$\left\{ \begin{array}{c} \,\,x + 2y = 3 \\ 3x - 2y = 1 \\ \end{array} \right.$}

Notice the "period" after \right - you must have the \right command, but the "period" makes it empty!
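A sum with limits follows the same pattern (this sample is my addition, not from the original page; it uses only standard LaTeX commands that jsMath supports): the underscore _ attaches a subscript or lower limit, the caret ^ a superscript or upper limit, and braces group anything longer than one character.

```latex
{$ s_n = \sum_{k=1}^{n} \frac{1}{k^2} $}
```

Here \sum_{k=1}^{n} produces the sigma sign with k=1 below and n above, and \frac{1}{k^2} is the same fraction maker as in the first example.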
Colors: \color{purple}{x\mathop = \limits^?1} and \color{teal}{y\mathop = \limits^?2}

Code: {$\color{purple}{x \mathop = \limits^? 1}$} and {$\color{teal}{y \mathop = \limits^? 2}$}

Visible colors: red, green, purple, blue, teal, darkgreen, darkgrey, darkorange, yellowgreen
http://physics.stackexchange.com/users/5374/nick-kidman?tab=activity&sort=all&page=3
# Nick Kidman

reputation 326 · member for 1 year, 8 months · seen 4 hours ago · profile views 853

Actress with an interest in the philosophy of science.

# 1,135 Actions

Mar25 comment How is gradient the maximum rate of change of a function? — You've just taken the English grammar out of the title.
Mar25 revised "as measured in a local Lorentz frame"? — added 53 characters in body
Mar25 answered "as measured in a local Lorentz frame"?
Mar24 answered Derivative of covariant EM tensor
Mar23 revised What is a dual / cotangent space? — added 532 characters in body
Mar23 revised What is a dual / cotangent space? — added 532 characters in body
Mar23 answered What is a dual / cotangent space?
Mar22 comment Classical scattering of two particles by a Yukawa potential — @MichaelBrown: I have no idea regarding the integral in the impact parameter link, and I don't know how to directly put up the equation of motion, since the particle $B$ will be accelerated too, giving a super nonlinear potential $\frac{\exp{(-|\Delta r(t)|/\lambda)}}{|\Delta r(t)|}$.
Mar21 revised Classical scattering of two particles by a Yukawa potential — deleted 22 characters in body
Mar21 asked Classical scattering of two particles by a Yukawa potential
Mar21 revised Integration by parts to derive relativistic kinetic energy — added 45 characters in body
Mar21 answered Integration by parts to derive relativistic kinetic energy
Mar21 comment Probability amplitude in Layman's Terms — Whatever is mathematically proven must be w.r.t. some postulates, and these are not stated. Also, there are the observables whose probabilities sum to 100% (namely the probability to be in any of a total set of eigenstates), and in this sense it's just probability theory with complex dynamics under the hood. I still don't think this is an inappropriate formulation.
Mar21 comment Probability amplitude in Layman's Terms — @user9886: The integrals involving position operators are layman's terms?
Mar21 comment Probability amplitude in Layman's Terms — I'm not too happy with the formulation of "mathematical systems that can yield a nontrivial formalism for probability." Firstly, because it sounds like you imply that there are only these two "systems", and secondly, because the quantum framework is still one where "each outcome has a probability, and those probabilities directly add up to 100%." It's just extra dynamics under the hood.
Mar21 answered Probability amplitude in Layman's Terms
Mar21 answered Where do the conservation laws come from?
Mar21 comment QFT in Quantum Computing and Control Theory? — I'd naively interpret the quantum mechanics of quantum computing to be that section of the formalism which doesn't deal with explicit spatial dependence. Even if the framework (Hilbert spaces, yada yada) is the same, the papers on computing/optics look like "$|010\rangle$" while the papers on high energy physics look like "$\int \Psi(x)|\Omega\rangle$"
Mar20 comment What does the Atomic Form Factor means? — Have you checked en.wikipedia.org/wiki/Atomic_form_factor ?
Mar20 comment A partial differential equation for kinetic energy — Making a product ansatz leads to the equation being solved for all of $K(m,v)=(\frac{m}{2a}+d)(a v^2+bv+c)$.
https://www.jiskha.com/questions/1755805/if-ln-a-2-ln-b-3-and-ln-c-5-evaluate-the-following-a-ln-a-2-b-4c-3-9524
Math

If ln a = 2, ln b = 3, and ln c = 5, evaluate the following:

(a) ln(a^−2 / (b^4 c^−3)) = −0.9524
(b) ln(√(b^−4) * c^1 * a^−1) = −1.739
(c) ln(a^2 b^4) / ln((bc)^−2) = 42.39
(d) ln(c^−1) * (ln(a/b^3))^−4 = −0.03507

I am getting these answers, but the problem gives me an error saying "at least ONE of my answers is incorrect". I'm not sure how, as I used the ln rules, broke the expressions apart several times, and even plugged them into my calculator, and I am getting the same results. Can someone help me or point out what I am doing wrong?

Rules:
log(ab) = log a + log b
log(a/b) = log a − log b
log a^b = b log a

1. (a) ln[c^3 / (a^2 b^4)] = (5 * 3) − [(2 * 2) + (3 * 4)] = −1
(b) ln[c / (a b^2)] = 5 − [2 + (3 * 2)] = −3

Not sure how you are getting your results... you shouldn't need a calculator... just follow the rules carefully.
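The helper's arithmetic is easy to confirm numerically. Here is a quick check (my addition, not part of the thread), taking the radical in (b) to cover only b^−4, as the helper did:

```python
import math

# given: ln a = 2, ln b = 3, ln c = 5
a, b, c = math.e**2, math.e**3, math.e**5

# (a) ln(a^-2 / (b^4 * c^-3)) = -2*2 - 4*3 + 3*5 = -1
assert abs(math.log(a**-2 / (b**4 * c**-3)) - (-1)) < 1e-9

# (b) ln(sqrt(b^-4) * c * a^-1) = -2*3 + 5 - 2 = -3
assert abs(math.log(math.sqrt(b**-4) * c * a**-1) - (-3)) < 1e-9
```

Both asserts pass, so the expressions really do reduce to integers by the log rules alone, with no calculator needed.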
http://mathhelpforum.com/calculus/23950-differentation-question.html
# Math Help - differentiation question

1. ## differentiation question

Hi everyone, I was just wondering if anyone could help with this differentiation question. It is:

Use differentiation to find the coordinates of the stationary points on the curve

y = x + 4/x

and determine whether each stationary point is a maximum point or a minimum point. Find the set of values of x for which y increases as x increases.

Thanks, Gracey. Even if you can help with a small part of the question I would be grateful.

2. I think this is what you mean: $y=x+\dfrac{4}{x}$ $\dfrac{dy}{dx}=1-\dfrac{4}{x^2}$ $0=1-\dfrac{4}{x^2}$ $x=\pm2$ Look at the sign of the derivative over $(-\infty, \infty)$: you will find that there is a local max at $x=-2$ and a local min at $x=2$. Therefore the function is increasing on $(-\infty,-2)$ and $(2,\infty)$, and decreasing on $(-2,0)$ and $(0,2)$.

3. Thank you so much, that is excellent.
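A quick numeric check of the answer (my addition, not part of the thread), using the second derivative to classify the stationary points:

```python
def f(x):
    # the curve y = x + 4/x
    return x + 4 / x

def fprime(x):
    # dy/dx = 1 - 4/x^2, which is zero at x = +-2
    return 1 - 4 / x**2

def fsecond(x):
    # d2y/dx2 = 8/x^3: negative -> local max, positive -> local min
    return 8 / x**3

assert fprime(2.0) == 0.0 and fprime(-2.0) == 0.0
assert fsecond(-2.0) < 0   # local maximum at (-2, f(-2)) = (-2, -4)
assert fsecond(2.0) > 0    # local minimum at (2, f(2)) = (2, 4)
```

So the stationary points are (−2, −4), a maximum, and (2, 4), a minimum, matching the sign analysis in the reply.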
https://www.ahmednasr.at/machine-learning-06/
# Machine Learning 06 – Binary Classification

In this post I will explain how we can use Machine Learning to implement a binary classification algorithm. In binary classification, which is also called logistic regression, our $y$ value can only be 0 or 1. To really get a feeling for what logistic regression means I found it very helpful to look at the definition of the word logistic (of or relating to the philosophical attempt to reduce mathematics to logic). In normal regression we could have any possible $y$ value; this time we „reduce it to logic" so our $y$ value can only be 0 or 1.

Because $y \in \{0,1\}$ it makes a lot of sense to restrict the output of our prediction function $h_\theta (x)$ to the range $[0,1]$. We will let $h_\theta (x)$ be a sigmoid function.

$$h_\theta (x)=g(\theta^Tx)$$

$$g(z)=\frac{1}{1+e^{-z}}$$

$$h_\theta (x)=\frac{1}{1+e^{-\theta^Tx}}$$

We will assume that $P(y=1|x; \theta)=h_\theta (x)$. This says that the probability of $y$ (conditioned on $x$ and parameterized by $\theta$) being $1$ is given by our „hypothesis" (= prediction function). Thus $P(y=0|x; \theta)=1 - h_\theta (x)$ (because $y \in \{0,1\}$). To generalize our assumption:

$$P(y|x; \theta)=h_\theta (x)^y(1 - h_\theta (x))^{1-y}$$

Now the question is how we fit the parameters $\theta$ so that the hypothesis correctly predicts the value 0 or 1 for a given $x$ value. To find the optimal parameters we will define a likelihood function $L(\theta)$:

$$L(\theta)=P(\vec{y}|X;\theta)$$

$$L(\theta)=\prod_{i=1}^m P(y^{(i)}|x^{(i)};\theta)$$

$$=\prod_{i=1}^m h_\theta (x^{(i)})^{y^{(i)}}(1 - h_\theta (x^{(i)}))^{1-y^{(i)}}$$

We will have to maximize the likelihood of our parameters, i.e. maximize $L(\theta)$. To achieve this we will need a derivative, so we define a logarithmic likelihood function. We do this because it's easier to work with logarithms, using the logarithm rules.
Finding the $\theta$ which maximizes the log-likelihood also maximizes the likelihood function $L(\theta)$, because the logarithm is monotonically increasing in its argument.

$$l(\theta)=\log L(\theta)=\log \prod_{i=1}^m h_\theta (x^{(i)})^{y^{(i)}}(1 - h_\theta (x^{(i)}))^{1-y^{(i)}}$$

$$=\sum_{i=1}^m \log \Big( h_\theta (x^{(i)})^{y^{(i)}}(1 - h_\theta (x^{(i)}))^{1-y^{(i)}} \Big)$$

$$=\sum_{i=1}^m \log\big(h_\theta (x^{(i)})^{y^{(i)}}\big) + \log \big(1 - h_\theta (x^{(i)})\big)^{1-y^{(i)}}$$

$$=\sum_{i=1}^m y^{(i)} \log\big(h_\theta (x^{(i)})\big) + (1-y^{(i)})\log \big(1 - h_\theta (x^{(i)})\big)$$

The update rule for our learning algorithm will look like the following:

$$\theta := \theta + \alpha \bigtriangledown_\theta l(\theta)$$

This is a gradient ascent algorithm, thus we add the product of the learning rate and the gradient. Using the chain rule and the sigmoid identity $g'(z)=g(z)(1-g(z))$:

$$\frac{\partial}{\partial\theta_j} l(\theta)=\sum_{i=1}^m \Big(\frac{y^{(i)}}{h_\theta(x^{(i)})} - \frac{1-y^{(i)}}{1-h_\theta(x^{(i)})}\Big)\, h_\theta(x^{(i)})\big(1-h_\theta(x^{(i)})\big)\, x_j^{(i)}$$

$$=\sum_{i=1}^m \big(y^{(i)}-h_\theta(x^{(i)})\big)\, x_j^{(i)}$$

$$\theta_j := \theta_j + \alpha\sum_{i=1}^m \big(y^{(i)}-h_\theta(x^{(i)})\big)\, x_j^{(i)}$$

### Example with a fictional Data Set

Python code to classify the above points:

import math

# h(x) = g(z)
# g(z) = 1/(1+e^(-z))

# training points: [x, y] pairs, y is the 0/1 class label
points = [
    [-2.02, 0], [-1.54, 0], [-1.53, 0], [-1, 1], [-1.28, 0],
    [-0.6, 1], [-0.7, 1], [-0.38, 1], [2.37, 0], [2.66, 0],
    [3, 0], [3.43, 0], [-0.28, 1], [-0.02, 1]
]

# features: [x0=1, x1, x1^2]
X = [[1, p[0], p[0]**2] for p in points]

# targets
y = [p[1] for p in points]

w = [0, 0, 0]  # weights w0, w1, w2 -- called Theta in the math above

learningRate = 0.01  # alpha
iterations = 5000

def g(z):
    # sigmoid function
    return 1 / (1 + math.e**(-z))

def h(x):
    # x = X[i], e.g. [1, 3, 9]
    # the sigmoid keeps the output between 0 and 1
    return g(w[0]*x[0] + w[1]*x[1] + w[2]*x[2])

def derivativeLogLikelihood(j):
    # partial derivative of l: sum over i of (y_i - h(x_i)) * x_ij
    # j is the feature index: j=0 refers to x0 (always 1), j=1 to x1, j=2 to x1^2
    m = len(X)  # number of examples in the training set
    total = 0
    for i in range(0, m):
        total += (y[i] - h(X[i])) * X[i][j]
    return total

def gradientAscent():
    # ascent, not descent, because we want to maximize the likelihood
    for iteration in range(0, iterations):
        # compute the full gradient before updating, so every component
        # is evaluated at the same (old) weights
        gradient = [derivativeLogLikelihood(j) for j in range(len(w))]
        for j in range(len(w)):
            w[j] = w[j] + learningRate * gradient[j]

gradientAscent()
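The derivative identity used above, $\frac{\partial}{\partial\theta_j} l(\theta)=\sum_i (y^{(i)}-h_\theta(x^{(i)}))x_j^{(i)}$, can also be checked against a numerical finite difference of $l(\theta)$. The sketch below is my addition; the data set and names are made up, not from the post:

```python
import math

def g(z):
    return 1.0 / (1.0 + math.exp(-z))

# tiny made-up data set: (feature vector x, label y)
data = [([1.0, 0.5], 1), ([1.0, -1.2], 0), ([1.0, 2.0], 1), ([1.0, -0.3], 0)]

def log_likelihood(w):
    s = 0.0
    for x, y in data:
        h = g(sum(wi * xi for wi, xi in zip(w, x)))
        s += y * math.log(h) + (1 - y) * math.log(1 - h)
    return s

def analytic_grad(w, j):
    # the closed form derived above: sum of (y - h(x)) * x_j
    return sum((y - g(sum(wi * xi for wi, xi in zip(w, x)))) * x[j]
               for x, y in data)

w = [0.3, -0.7]
eps = 1e-6
for j in range(len(w)):
    wp = [wi + (eps if k == j else 0.0) for k, wi in enumerate(w)]
    wm = [wi - (eps if k == j else 0.0) for k, wi in enumerate(w)]
    numeric = (log_likelihood(wp) - log_likelihood(wm)) / (2 * eps)
    # central difference agrees with the analytic gradient
    assert abs(numeric - analytic_grad(w, j)) < 1e-6
```

If the asserts pass, the algebra in the derivation is consistent: the messy chain-rule expression really does collapse to the simple sum of residuals times features.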
https://www.physicsforums.com/threads/work-energy-problem.199744/
# Homework Help: Work-Energy Problem

1. Nov 21, 2007

### octahedron

1. The problem statement, all variables and given/known data

A 5.0 kg box slides down a 5-m-high frictionless hill, starting from rest, across a 2-m-long horizontal surface, then hits a horizontal spring with spring constant 500 N/m. The ground under the spring is frictionless, but the 2.0-m-long horizontal surface is rough. The coefficient of kinetic friction of the box on this surface is 0.25. (a) What is the speed of the box just before reaching the rough surface? (b) What is the speed of the box just before hitting the spring? (c) How far is the spring compressed? (d) Including the first crossing, how many complete trips will the box make across the rough surface before coming to rest?

2. Relevant equations

W_nc = delta-E

3. The attempt at a solution

So I actually solved (a)-(c) using non-conservative work and delta energy. My solutions for (a)-(c), if that would help, were (a) 9.9 m/s (b) 9.39 m/s (c) 0.959 m. However, I honestly haven't a clue how to approach (d), besides the "obvious" long way of going through the velocities over and over again until they reach zero. Is there any elegant way here?

2. Nov 22, 2007

### learningphysics

Yes, use energy... the work done by friction is the same each time the object crosses the rough surface. Use the initial energy and the work done by friction per trip across the rough surface to get the number of trips before the energy of the box becomes zero.

3. Nov 22, 2007

### Hells_Kitchen

I think it's an even better idea to use the kinetic energy of the box just before hitting the rough surface, since you are including the first crossing too. Like learningphysics said:

- Find out how much work friction does each time the box crosses the rough surface (it's going to be the same every time): W = f*d = mu*N*d
- Use the kinetic energy of the box just before hitting the surface: Ek = mv^2/2

Then C*W = Ek, where C is the number of times the box crosses the rough surface.

4.
Nov 23, 2007

### Omega_sqrd

I was having trouble with part (d) of this problem as well. Thank you, Hells_Kitchen and learningphysics, for your help. However, I'm still confused: why don't we need to worry about the work the spring does, or the work that gravity does? Why is it that the relevant work is just the work done by friction? Thank you!

5. Nov 23, 2007

### octahedron

Notice that what appears here is not the total work $$W_{net}$$ but the work done by nonconservative forces $$W_{nc}$$, and since the spring and gravitational forces are conservative, they drop out of the calculation. It then follows that $$W_{nc} = \Delta E$$, and you use the energy dissipated by friction over the *total* distance traveled across the rough surface. Thanks everybody; I used learningphysics's method and got the correct answer, which is 10 trips.

Last edited: Nov 23, 2007

6. Nov 23, 2007

### Omega_sqrd

Thank you, Octahedron. It all makes sense now.
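The energy bookkeeping suggested in the thread can be sketched numerically (g = 9.8 m/s² is assumed here, which matches the posted answers for (a) and (b)):

```python
import math

m, height, g = 5.0, 5.0, 9.8     # box mass (kg), hill height (m), gravity (m/s^2)
mu, d = 0.25, 2.0                # kinetic friction coefficient, rough-patch length (m)

E0 = m * g * height              # kinetic energy at the bottom of the frictionless hill
W_per_crossing = mu * m * g * d  # work removed by friction on each crossing

# (a) speed just before the rough surface
v_a = math.sqrt(2 * g * height)

# (b) speed just before the spring, after one crossing of the rough patch
v_b = math.sqrt(2 * (E0 - W_per_crossing) / m)

# (d) count crossings until the box's energy is used up
# (here 245 J divides evenly into 24.5 J crossings, so the box stops
# exactly at the far edge and the last crossing counts as complete)
E, trips = E0, 0
while E > 1e-9:                  # tolerance guards against float round-off
    E -= W_per_crossing
    trips += 1
```

With these numbers the loop gives 10 complete crossings, matching the answer reported in the thread.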
http://mathhelpforum.com/calculus/97847-plane-tangent-paraboloid-print.html
# Plane Tangent to a Paraboloid.

• August 12th 2009, 01:55 PM

I need to find at what point on the paraboloid $y=x^2+z^2$ the tangent plane is parallel to the plane $x+2y+3z=1$. I can handle most of this problem myself, but I need to know if this is the correct equation to start with: $f(x,y,z)=x^2-y+z^2$

Then I would just apply: $f_x(x_o,y_o,z_o)(x-x_o) + f_y(x_o,y_o,z_o)(y-y_o) +f_z(x_o,y_o,z_o)(z-z_o)=0$

Is this correct?

• August 12th 2009, 05:24 PM

luobo

Quote:

I need to find at what point on the paraboloid $y=x^2+z^2$ the tangent plane is parallel to the plane $x+2y+3z=1$.

$f(x,y,z)=x^2-y+z^2$

$f_x(x_o,y_o,z_o)(x-x_o) + f_y(x_o,y_o,z_o)(y-y_o) +f_z(x_o,y_o,z_o)(z-z_o)=0$
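For what it's worth, the setup is correct, and carrying it through to the point (a sketch, since the thread stops short of the final answer): the tangent plane is parallel to $x+2y+3z=1$ exactly when $\nabla f=(2x,-1,2z)$ is a scalar multiple of the plane's normal $(1,2,3)$, and the $y$-component pins down the multiplier. Exact arithmetic with `fractions` keeps the check clean:

```python
from fractions import Fraction as F

n = (F(1), F(2), F(3))      # normal of the plane x + 2y + 3z = 1

# grad f = (2x, -1, 2z) must equal lam * n for some scalar lam.
# The middle component forces -1 = 2*lam, so:
lam = F(-1, 2)
x = lam * n[0] / 2          # from 2x = lam * 1
z = lam * n[2] / 2          # from 2z = lam * 3
y = x**2 + z**2             # the point must lie on the paraboloid

point = (x, y, z)           # (-1/4, 5/8, -3/4)
grad = (2 * x, F(-1), 2 * z)
```

The gradient at that point is exactly $-\tfrac12(1,2,3)$, so the tangent plane there is parallel to the given plane.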
https://experts.syr.edu/en/publications/cosmological-moduli-and-the-post-inflationary-universe-a-critical-2
# Cosmological moduli and the post-inflationary universe: A critical review

Gordon Kane, Kuver Sinha, Scott Watson

Research output: Contribution to journal › Review article › peer-review

108 Scopus citations

## Abstract

We critically review the role of cosmological moduli in determining the post-inflationary history of the universe. Moduli are ubiquitous in string and M-theory constructions of beyond the Standard Model physics, where they parametrize the geometry of the compactification manifold. For those with masses determined by supersymmetry (SUSY) breaking this leads to their eventual decay slightly before Big Bang nucleosynthesis (BBN) (without spoiling its predictions). This results in a matter dominated phase shortly after inflation ends, which can influence baryon and dark matter genesis, as well as observations of the cosmic microwave background (CMB) and the growth of large-scale structure. Given progress within fundamental theory, and guidance from dark matter and collider experiments, nonthermal histories have emerged as a robust and theoretically well-motivated alternative to a strictly thermal one. We review this approach to the early universe and discuss both the theoretical challenges and the observational implications.

Original language: English (US)
Article number: 1530022
Journal: International Journal of Modern Physics D
Volume: 24
Issue: 8
DOI: https://doi.org/10.1142/S0218271815300220
State: Published - Jul 14 2015

## Keywords

• Cosmology
• dark matter
• supersymmetry

## ASJC Scopus subject areas

• Mathematical Physics
• Astronomy and Astrophysics
• Space and Planetary Science
https://deslib.readthedocs.io/en/latest/news.html
# Release history

## Version 0.3

### Changes

• All techniques are now scikit-learn estimators and pass the check_estimator tests.
• All techniques can now be instantiated without a trained pool of classifiers.
• The pool of classifiers can now be fitted together with the ensemble technique. See the simple example.
• Added support for Faiss (Facebook AI Similarity Search) for fast region-of-competence estimation on GPU.
• Added the DES Multi-class Imbalance method deslib.des.des_mi.DESMI.
• Added a stacked classifier model, deslib.static.stacked.StackedClassifier, to the static ensemble module.
• Added a new Instance Hardness measure, utils.instance_hardness.kdn_score().
• Added Instance Hardness support when using DES-Clustering.
• Added a label encoder for the static module.
• Added a script utils.datasets with routines to generate synthetic datasets (e.g., the P2 and XOR datasets).
• Changed the names of the base classes (adding a Base prefix, following scikit-learn standards).
• Removed DFP_mask, neighbors and distances as class variables.
• Changed the signatures of estimate_competence, predict_with_ds and predict_proba_with_ds: they now require the neighbors and distances to be passed as input arguments.
• Added a random_state parameter to all methods in order to have reproducible results.
• New and updated examples.
• Added performance tests comparing the speed of Faiss vs. the sklearn KNN.

### Bug Fixes

• Fixed a bug with META-DES when checking whether the meta-classifier was already fitted.
• Fixed a bug with random state on DCS techniques.
• Fixed high memory consumption in the DES probabilistic methods.
• Fixed a bug in the heterogeneous-ensembles example and the notebook examples.
• Fixed a bug in deslib.des.probabilistic.MinimumDifference when only samples from a single class are provided.
• Fixed a problem with DS methods when the number of training examples was lower than the k value.
• Fixed division-by-zero problems in A Posteriori, A Priori and MLA when the distance is equal to zero.
• Fixed a bug in deslib.utils.prob_functions.exponential_func() when the support obtained for the correct class was equal to one.

## Version 0.2

### Changes

• Implemented label encoding: labels are no longer required to be integers starting from 0. Categorical (string) and non-sequential integer labels are supported (similarly to scikit-learn).
• Batch processing: vectorized implementation of predictions, giving a large speed-up in computation time (100x faster in some cases).
• Predict proba: only required (in the base estimators) if using methods that rely on probabilities (or if requesting probabilities from the ensemble).
• Improved documentation: included additional examples and a step-by-step tutorial on how to use the library.
• New integration tests: now covering predict_proba, IH and DFP.
• Bug fixes on 1) predict_proba and 2) KNOP with DFP.

## Version 0.1

### Implemented methods:

• DES techniques currently available:

1. META-DES
2. K-Nearest-Oracle-Eliminate (KNORA-E)
3. K-Nearest-Oracle-Union (KNORA-U)
4. Dynamic Ensemble Selection-Performance (DES-P)
5. K-Nearest-Output Profiles (KNOP)
6. Randomized Reference Classifier (DES-RRC)
7. DES Kullback-Leibler Divergence (DES-KL)
8. DES-Exponential
9. DES-Logarithmic
10. DES-Minimum Difference
11. DES-Clustering
12. DES-KNN

• DCS techniques:

1. Modified Classifier Rank (Rank)
2. Overall Local Accuracy (OLA)
3. Local Class Accuracy (LCA)
4. Modified Local Accuracy (MLA)
5. Multiple Classifier Behaviour (MCB)
6. A Priori Selection (A Priori)
7. A Posteriori Selection (A Posteriori)

• Baseline methods:

1. Oracle
2. Single Best
3. Static Selection

• Dynamic Frienemy Pruning (DFP)
• Diversity measures
• Aggregation functions
http://www.ask.com/question/how-many-millimeters-in-an-inch
How Many Millimeters in an Inch?

25.4 millimeters = 1 inch. One inch is equal to exactly 25.4 millimeters.

In one inch there are 2.54 centimeters, and in one centimeter there are 10 millimeters; therefore one inch equals 25.4 millimeters (2.54 x 10 = 25.4). Going the other way, 1 mm * (1 cm / 10 mm) * (1 in / 2.54 cm) = 0.03937007874 in, so one millimeter is about 0.039 inches.

One mile is equal to exactly 63,360 inches, or about 1,609,344 millimeters. Both inches and millimeters are used for measuring short lengths or distances; inches are part of the imperial system, while millimeters are part of the metric system.
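The conversions above reduce to a single exact constant (since 1959 the international inch has been defined as exactly 25.4 mm); a minimal sketch:

```python
MM_PER_INCH = 25.4        # exact by definition of the international inch
INCHES_PER_MILE = 63360   # exact: 5280 feet x 12 inches

def inches_to_mm(inches):
    return inches * MM_PER_INCH

def mm_to_inches(mm):
    return mm / MM_PER_INCH
```

For example, `inches_to_mm(1)` gives 25.4, and `inches_to_mm(INCHES_PER_MILE)` gives the millimeters in a mile.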
http://tex.stackexchange.com/questions/88623/how-can-i-calculate-within-style-values
# How can I calculate within style values?

Is it possible to calculate (addition etc.) in style values? I wanted to do the following:

```latex
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{positioning,fit,chains}

\newcommand*{\mypath}[3][0.0]{% looseness adjustment (optional), 1st node, 2nd node
  \draw (#2.south east) to [in=90, out=90, looseness=0.85+#1] (#3.south west);
}

\begin{document}
\begin{tikzpicture}[start chain]
  \node[on chain] (k11) {};
  \node[on chain] (k12) {};
  \mypath{k11}{k12}
\end{tikzpicture}
\end{document}
```

The problem is `looseness=0.85+#1`, which fails with *Missing number, treated as zero* and *Illegal unit of measure (pt inserted)*.

- A better approach would be writing your own /.styles instead of nesting macros to avoid such problems. – percusse Dec 30 '12 at 16:05
- @percusse I don't like it either, but I don't know yet how I should transform my path operations into a style, I'll have to read the manual a bit more. – neo Dec 30 '12 at 16:17

It is possible to parse the number into a macro and use the macro as the argument to the key. But the reason this has to be done is that the looseness keys do not appear to be parsed using pgfmath. So, for fans of hacking:

```latex
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{positioning,fit,chains}

\makeatletter
\def\tikz@to@set@in@looseness#1{%
  \pgfmathparse{#1}\let\tikz@to@in@looseness=\pgfmathresult%
  \let\tikz@to@end@compute=\tikz@to@end@compute@looseness%
  \tikz@to@switch@on%
}
\def\tikz@to@set@out@looseness#1{%
  \pgfmathparse{#1}\let\tikz@to@out@looseness=\pgfmathresult%
  \let\tikz@to@start@compute=\tikz@to@start@compute@looseness%
  \tikz@to@switch@on%
}
\makeatother

\newcommand*{\mypath}[3][0.0]{% looseness adjustment (optional), 1st node, 2nd node
  \draw (#2.south east) to [in=90, out=90, looseness=0.85+#1] (#3.south west);
}

\begin{document}
\begin{tikzpicture}[start chain]
  \node[on chain] (k11) {};
  \node[on chain] (k12) {};
  \mypath[-0.5]{k11}{k12}
\end{tikzpicture}
\end{document}
```

- Hm!
How nice, what about the other keys? Is there a list of which ones are parsed by pgfmath? – neo Dec 30 '12 at 16:01
- @neo All keys that require real or integer values should be parsed as mathematical expressions. That is not the case for looseness. I think this is a bug. – Paul Gaborit Dec 30 '12 at 16:31

Yet another way is to fill a macro with the expression (so there is no need to make the option math-parseable):

```latex
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{positioning,fit,chains}

\newcommand{\mypath}[3][0.0]{% looseness adjustment (optional), 1st node, 2nd node
  \pgfmathsetmacro{\mytemp}{.85+#1}
  \draw (#2.south east) to [looseness=\mytemp, in=90, out=90] (#3.south west);
}

\begin{document}
\begin{tikzpicture}[start chain]
  \node[on chain] (k11) {};
  \node[on chain] (k12) {};
  \mypath[1]{k11}{k12}
  \mypath[0]{k11}{k12}
\end{tikzpicture}
\end{document}
```

Another approach to the same fundamental issue (the need to do floating-point maths on the value), this time using LaTeX3's FPU:

```latex
\documentclass{article}
\usepackage{tikz}
\usepackage{expl3}

\ExplSyntaxOn
\cs_new_eq:NN \fpeval \fp_eval:n
\ExplSyntaxOff

\usetikzlibrary{positioning,fit,chains}

\newcommand*{\mypath}[3][0.0]{% looseness adjustment (optional), 1st node, 2nd node
  \draw (#2.south east) to [in=90, out=90, looseness=\fpeval{0.85+#1}] (#3.south west);
}

\begin{document}
\begin{tikzpicture}[start chain]
  \node[on chain] (k11) {};
  \node[on chain] (k12) {};
  \mypath{k11}{k12}
\end{tikzpicture}
\end{document}
```

(The LaTeX3 code is expandable, so it can simply be dropped in wherever the underlying TikZ code needs some form of number.)

Using the fp package works, as explained in another question.
```latex
\documentclass{article}
\usepackage[nomessages]{fp}
\usepackage{tikz}
\usetikzlibrary{positioning,fit,chains}

\newcommand*{\mypath}[3][0.0]{% looseness adjustment (optional), 1st node, 2nd node
  \FPeval\loose{0.85+(#1)}
  \draw (#2.south east) to [in=90, out=90, looseness=\loose] (#3.south west);
}

\begin{document}
\begin{tikzpicture}[start chain]
  \node[on chain] (k11) {};
  \node[on chain] (k12) {};
  \mypath[4.0]{k11}{k12}
\end{tikzpicture}
\end{document}
```
https://en.wikipedia.org/wiki/Backhouse%27s_constant
Backhouse's constant

Binary: 1.01110100110000010101001111101100…
Decimal: 1.45607494858268967139959535111654…
Hexadecimal: 1.74C153ECB002353B12A0E476D3ADD…
Continued fraction: $$1+{\cfrac {1}{2+{\cfrac {1}{5+{\cfrac {1}{5+{\cfrac {1}{4+\ddots }}}}}}}}$$

Backhouse's constant is a mathematical constant named after Nigel Backhouse. Its value is approximately 1.456 074 948. It is defined by using the power series such that the coefficients of successive terms are the prime numbers,

$$P(x)=1+\sum _{k=1}^{\infty }p_{k}x^{k}=1+2x+3x^{2}+5x^{3}+7x^{4}+\cdots$$

and its multiplicative inverse as a formal power series,

$$Q(x)={\frac {1}{P(x)}}=\sum _{k=0}^{\infty }q_{k}x^{k}.$$

Then:

$$\lim _{k\to \infty }\left|{\frac {q_{k+1}}{q_{k}}}\right\vert =1.45607\ldots$$

(sequence A072508 in the OEIS). This limit was conjectured to exist by Backhouse (1995), and the conjecture was later proven by Philippe Flajolet (1995).

References

• Backhouse, N. (1995), Formal reciprocal of a prime power series, unpublished note.
• Flajolet, Philippe (November 25, 1995), On the existence and the computation of Backhouse's constant, unpublished manuscript. Reproduced in Les cahiers de Philippe Flajolet, Hsien-Kuei Hwang, June 19, 2014, accessed 2014-12-06.
• Sloane, N. J. A. (ed.). "Sequence A030018". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
• Sloane, N. J. A. (ed.). "Sequence A074269". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
• Sloane, N. J. A. (ed.). "Sequence A088751". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
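The limit can be checked numerically from the relation $P(x)\,Q(x)=1$: since $p_0=q_0=1$, comparing coefficients gives the integer recurrence $q_k=-\sum_{j=1}^{k}p_j\,q_{k-j}$. The sketch below uses that recurrence with exact integer arithmetic; the number of terms and the tolerance used in checking are arbitrary choices:

```python
def first_primes(n):
    """First n primes by simple trial division (fine for small n)."""
    ps = []
    c = 2
    while len(ps) < n:
        if all(c % p for p in ps):
            ps.append(c)
        c += 1
    return ps

def backhouse_ratio(n):
    """|q_n / q_{n-1}| for Q = 1/P as a formal power series."""
    p = [1] + first_primes(n)      # p_0 = 1, then the primes
    q = [1]                        # q_0 = 1 / p_0 = 1
    for k in range(1, n + 1):
        # coefficient of x^k in P*Q must vanish: q_k = -sum_{j>=1} p_j q_{k-j}
        q.append(-sum(p[j] * q[k - j] for j in range(1, k + 1)))
    return abs(q[-1] / q[-2])      # exact integers, divided only at the end
```

For growing `n` the ratio approaches 1.45607…, consistent with the limit above.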