# White dwarf's impact on orbiting bodies
Recently the Kepler telescope, in its study of white dwarfs, detected the first planetary object transiting a white dwarf, in data from the K2 mission. This is consistent with earlier theoretical predictions that a planetary object orbiting a white dwarf would slowly disintegrate. Why would a planetary object that is orbiting a white dwarf disintegrate? I read it here.
The original paper published in Nature (preprint): A Disintegrating Minor Planet Transiting a White Dwarf
• I would think that it is disintegrating because it orbits very close to the WD. 4½ hour period I think means about 1 million km from a one Solar mass WD. Mercury is about 50 times further from the Sun. That close, the gravitational pull is significantly larger on the side of the planet facing the WD than on the far side, ripping the planet apart. So I don't think it directly has anything to do with it being a WD instead of an active star. – LocalFluff Oct 23 '15 at 6:02
• Actually, not just this planet but the relative effect on the planets after the white dwarf conversion. What I want to know is: is it just gravity (relatively, how much?) or some other forces acting on it? – r2_d2 Oct 23 '15 at 6:14
• Here is the full text original peer reviewed article published in Nature cfa.harvard.edu/~avanderb/wd1145_017.pdf It will take me a while to read it. Check it out. – Aabaakawad Oct 23 '15 at 23:54
• @LocalFluff see comment above. That link is to a preprint. – Aabaakawad Oct 24 '15 at 0:20
• @Aabaakawad thank you for the link. I've only read part of it, but very informative. – userLTK Oct 24 '15 at 1:12
I think Aabaakawad's link gives a complete answer, but to give an astronomy-for-dummies answer: there's nothing about a white dwarf that causes a planet's orbit to decay, at least not directly. Your article says (I've pulled the quote below the caption):
Slowly the object will disintegrate, leaving a dusting of metals on the surface of the star.
That's only talking about this particular situation and there's a difference between disintegrate and decay. This planetoid is enormously close to the white dwarf. So close, that what we think of as normal white dwarf/planet dynamics (very cold) is no longer true. This planetoid is slowly being vaporized.
Looking at the orbital period of 4.5 hours: that's about 1948 orbits in 365.25 days. By Kepler's third law, orbital distance scales as the orbital period to the power of 2/3 (this varies a bit with eccentricity, but it's generally correct), so an orbital period 1948 times shorter means about 156 times closer, and giving the white dwarf equal mass to our sun, that puts the planetoid at a bit under 1 million km. If this white dwarf is lighter than the sun, the planetoid would need to be even closer. That's close to the Roche limit, and it would be inside it if the planetoid weren't dense and rocky/metallic.
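A quick back-of-the-envelope check of that scaling (a sketch in Python; the 4.5-hour period and the solar-mass assumption are taken from the discussion above):

```python
AU_KM = 1.496e8            # astronomical unit in km
YEAR_HOURS = 365.25 * 24

period_ratio = YEAR_HOURS / 4.5   # Earth's period / planetoid's period
print(round(period_ratio))        # about 1948 orbits per year

# Kepler's third law: a is proportional to T**(2/3) for a fixed central mass,
# so a period 1948 times shorter means (1948)**(2/3) times closer than 1 au
distance_ratio = period_ratio ** (2 / 3)
print(round(distance_ratio))      # about 156

orbit_km = AU_KM / distance_ratio
print(round(orbit_km))            # a bit under 1 million km
```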
If we estimate the white dwarf to be about the size of the Earth, which is a common size given for white dwarfs, an Earth-sized object seen from 1 million km would be larger in the sky than the sun appears from Earth, and presumably quite a bit hotter than the surface of our sun too. So this isn't a tiny white dwarf in the sky from the perspective of the planetoid; it's a blazing furnace of a sun, so hot that it vaporizes metallic gas and dust off the surface of the planet.
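To put "larger in the sky" in numbers, here is a sketch comparing apparent angular diameters (the Earth-like radius for the white dwarf and the 1 million km orbit are assumptions carried over from above):

```python
import math

def angular_diameter_deg(radius_km, distance_km):
    # full apparent angular diameter of a sphere, in degrees
    return math.degrees(2 * math.asin(radius_km / distance_km))

wd_from_planetoid = angular_diameter_deg(6371, 1.0e6)       # Earth-sized WD at 1 million km
sun_from_earth    = angular_diameter_deg(696_000, 1.496e8)  # the Sun seen from 1 au

print(f"white dwarf: {wd_from_planetoid:.2f} deg")  # ~0.73 deg
print(f"sun:         {sun_from_earth:.2f} deg")     # ~0.53 deg
```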
The article mentions (end of page 3) Poynting-Robertson drag (see here and here), which may be a factor in any orbital decay in this scenario. The article is clear that there's a good deal of uncertainty about that effect, and it only affects tiny particles, but enough tiny particles could create a drag over time (maybe). The general scenario with this orbit is a planet scorched and, as a result, losing material. It's likely the very high heat that's driving any orbital decay, not gravity.
Gravitational decay / orbital decay does happen, usually much more slowly. That's probably not what's happening here.
There are some interesting orbital effects that can happen when a main sequence star goes red giant and later when it creates a planetary nebula: significant increases in tidal forces due to the star's greater size in the first case, and increased drag in the second. But at the white dwarf stage, there are no significant orbital decay effects.
Update:
Why don't Poynting-Robertson drag and orbital decay affect the planetoid when the white dwarf was a star or even a red giant? Are there any "interesting orbital effects" when a star undergoes red giant? Can you update your answer to summarize the forces and their effect on the planetoid in each phase of the star? And also, what do you mean by orbital decay? Does it have something to do with the Roche limit?
OK, having read more about it, I think the Poynting-Robertson effect only matters when the orbiting objects are very small. I've linked it twice above, but the simple explanation is that objects in orbit move, so any light or debris from the sun hits the moving object at an angle, not head on. If the object is small enough, this over time drives dust and maybe grain-of-sand-sized particles into the sun. It doesn't affect larger objects, so it's not really relevant to any planets or planetoids.
As far as "interesting red giant" effects: that really has to do with tides. Using the Moon/Earth example, the Moon creates tides on the Earth, a tidal bulge towards the Moon; but because the Earth rotates faster than the Moon orbits, this tidal bulge is always ahead of the Moon, and this creates a gravitational tug on the Moon that pulls it away from the Earth - very slowly.
The same thing happens with planets around stars, but even more slowly. Let's pretend it's just the Earth and the Sun, a two-body system (in reality, with several planets, it's much more complicated): the Earth creates a tidal bulge on the sun, the sun rotates ahead of the Earth, and this causes the Earth to very slowly spiral away from the sun - so slowly that it might take a trillion years for the Earth to spiral away.
Now when the sun goes red giant, the sun is essentially the same mass but much more spread out, and parts of it are much closer to the Earth and less gravitationally bound to the sun. This creates a far larger tidal tug. Also, as the sun expands its rotation slows, because angular momentum is conserved, so when the sun is a red giant, the tidal bulge will be behind the Earth, which drags it in towards the sun. Due to the size and proximity of the red giant star, this draws the remaining nearby planets towards the sun fairly quickly, at least compared to the main sequence stage which, provided the sun rotates faster than the planets orbit, exerts a much smaller outward tidal pressure on the planets.
And when the sun goes planetary nebula, any debris in the planet's path can also cause the planets to slow down slightly. The precise process there I'm less clear on, but in general, any orbital debris creates drag and can slow down a planet's orbit. This may be a key factor in the formation of hot Jupiters, because they can't form close to their suns, but enough orbital debris can drive them in closer (or planet-to-planet gravitational interactions can too).
That's the gist of the sun-planet orbital relation. When the sun is young, planets are mostly driven outwards, and young suns can have far greater solar flares and stronger solar wind. How much that affects the planets, I'm not sure.
During the main sequence stage, stars tend to push planets outwards (unless they rotate very slowly, in which case the tidal effect is reversed), but this tidal effect is very small and very gradual.
During the red giant stage, stars tend to drag planets inwards, and I assume during the planetary nebula stage as well. This effect is larger for closer planets.
You also asked about orbital decay; if you click on the link, there are examples of that, which probably give a better explanation than I could. In general, orbital decay happens very slowly unless you're talking about a neutron star or black hole, in which case relativistic effects can cause orbital decay to happen quite fast. There's nothing about a white dwarf star that would cause faster-than-normal orbital decay, but a white dwarf would lose the tidal-bulge tugging that a main sequence star has, so there would be essentially no outward tidal pressure either. That could in theory speed up decay, because you've lost a small outward pressure while still having any debris or space dust clouds causing a small inward pressure (if that makes sense?).
That's my layman's explanation anyway.
• Why don't Poynting-Robertson drag and orbital decay affect the planetoid when the white dwarf was a star or even a red giant? Are there any "interesting orbital effects" when a star undergoes red giant? Can you update your answer to summarize the forces and their effect on the planetoid in each phase of the star? – r2_d2 Oct 25 '15 at 7:58
• And also, what do you mean by orbital decay? Does it have something to do with the Roche limit? – r2_d2 Oct 25 '15 at 7:59
• @r2_d2 The Roche Limit is in a sense, just a boundary usually quite near the object being orbited. Orbital decay (or the opposite, orbital escape) can happen at any orbit, far outside the Roche limit or close to it. As to the effects at different stages of the star, I can give a summary, but not tonight. I also probably need to update what I said about Poynting-Robertson. Need to tidy that up a bit. – userLTK Oct 25 '15 at 8:08
Let's assume the white dwarf has a mass of $0.6 M_{\odot}$ (there's probably a more accurate value, but most white dwarfs are close to this...). With a period of 4.5 hours we can use Kepler's third law, assuming the planetary mass is negligible compared to the white dwarf, to infer an orbital radius of 0.0054 au ($8.1\times 10^{8}$ m).
The tidal forces this close to a white dwarf are very large. The Roche limit for the total tidal disintegration of a satellite, in synchronous rotation, held together only by its own gravity is roughly $$d = 1.44 R_{WD} \left( \frac{\rho_{WD}}{\rho_p}\right)^{1/3},$$ where $R_{WD}$ is the radius of the white dwarf (similar to the radius of the Earth), $\rho_{WD}$ is the average density of the white dwarf (a few times $10^{9}$ kg/m$^3$) and $\rho_p$ the density of the planet (let's assume 5000 kg/m$^3$).
Thus $d \simeq 6 \times 10^{8}$ m, which is very similar to the actual orbital radius of the planet, i.e. it will be tidally disintegrating.
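The two numbers in this answer can be reproduced directly; here is a sketch in Python (the 0.6 solar masses, Earth-like white dwarf radius, and 5000 kg/m³ planet density are the assumed values stated above):

```python
import math

G     = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
R_WD  = 6.371e6          # white dwarf radius, assumed Earth-like, m

M_wd = 0.6 * M_SUN
T    = 4.5 * 3600        # orbital period, s

# Kepler's third law: a^3 = G * M * T^2 / (4 * pi^2)
a = (G * M_wd * T**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"orbital radius: {a:.2e} m")     # ~8.1e8 m

# Roche limit: d = 1.44 * R_wd * (rho_wd / rho_p)^(1/3)
rho_wd = M_wd / (4 / 3 * math.pi * R_WD**3)   # ~1e9 kg/m^3
rho_p  = 5000.0                                # assumed planet density, kg/m^3
d = 1.44 * R_WD * (rho_wd / rho_p) ** (1 / 3)
print(f"Roche limit:    {d:.2e} m")     # ~6e8 m
```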
I guess it is an observational selection effect that such objects are detected at the tidal breakup radius: if they were further away they would not be disintegrating and would not be detected, and if they were closer they would have already disintegrated and wouldn't be seen!
EDIT: On reading the paper - the authors claim that these objects are not tidally disintegrating. In fact they argue that this must be debris from a rocky planet precisely because the density must be large enough to avoid tidal disintegration according to the formula above. However I find the whole discussion rather incoherent. They specifically talk about "disintegrating planetesimals" (note the tense) which are being evaporated in a Parker-type wind due to heating by the radiation from the white dwarf. I cannot see where they explain then how the planetesimals disintegrate.
• Why did this orbiting body not disintegrate when the white dwarf was a normal star, if we consider the distance (the Roche limit, which applies to all celestial bodies)? – r2_d2 Oct 24 '15 at 20:15
• @r2_d2 I think you are correct that the Roche limit for the progenitor (e.g. a 2 solar mass main sequence star with a radius of $\sim 10^{9}$ m would be quite similar). Therefore we must conclude that the planet was not that close during the progenitor's main sequence lifetime. That is not surprising, since prior to becoming a white dwarf, the progenitor would have engulfed such a close-in planet during its red giant stages. In fact at $10^{9}$ m, such a planet would have been inside the main sequence star. – Rob Jeffries Oct 24 '15 at 20:35
• So what could be the possible explanation for this. – r2_d2 Oct 24 '15 at 20:46
• @r2_d2 Drag in the wind of the progenitor during the AGB phase perhaps. – Rob Jeffries Oct 24 '15 at 20:56
• @r2_d2 The Nature paper suggests that mass loss from the AGB star causes planet migration outwards that can then lead to an instability and interaction between planets that projects a planet inwards towards the host star. The orbit is then circularised due to wind drag. – Rob Jeffries Oct 24 '15 at 21:12
---
# How to modify knitr output chunk
I know that it's possible to tweak the output of a knitr code chunk so that it looks the way you want. My problem occurs when using the parskip package. The parskip package adds an extra bit of space below the code output, which doesn't look great. Here's a minimal .Rnw file.
\documentclass[11pt]{article}
\usepackage[utf8]{inputenc}
\usepackage[margin=1in]{geometry}
\usepackage{parskip}
\begin{document}
\setlength{\parindent}{0in}
Some text above chunk
<<echo=TRUE>>=
2 + 2
@
Some text below. The space above this is too big when using the \texttt{parskip} package.
Some text below. The space above this is too big when using the \texttt{parskip} package.
Some text below. The space above this is too big when using the \texttt{parskip} package.
A nice space between paragraphs.
\end{document}
• In a Debian with TeX Live 2014 I am unable to see in your example the extra space between the R box and "Some text..." when using parskip (at the same time I do see the extra space above "A nice space...", so there is no doubt that I was using the package). – Fran Mar 5 '15 at 19:54
This is not an elegant answer, but you can 'fix' the excess space by inserting negative vertical space. See the modified code.
\documentclass[11pt]{article}
\usepackage[utf8]{inputenc}
\usepackage[margin=1in]{geometry}
\usepackage{parskip}
\begin{document}
\setlength{\parindent}{0in}
Some text above chunk
<<echo=TRUE>>=
2 + 2
@
\vspace{-1em} %%% this moves the following text up the height of an 'm'
Some text below. The space above this is too big when using the \texttt{parskip} package.
Some text below. The space above this is too big when using the \texttt{parskip} package.
Some text below. The space above this is too big when using the \texttt{parskip} package.
A nice space between paragraphs.
\end{document}
---
I have the following example of principal component analysis using the first 4 variables of the iris data set (code in R):
> res = prcomp(iris[1:4])
> res
Standard deviations:
[1] 2.0562689 0.4926162 0.2796596 0.1543862
Rotation:
PC1 PC2 PC3 PC4
Sepal.Length 0.36138659 -0.65658877 0.58202985 0.3154872
Sepal.Width -0.08452251 -0.73016143 -0.59791083 -0.3197231
Petal.Length 0.85667061 0.17337266 -0.07623608 -0.4798390
Petal.Width 0.35828920 0.07548102 -0.54583143 0.7536574
It appears that Sepal.Width has a very small contribution to PC1. How do I know if it is a significant contribution?
Is there any significance test for this? Similarly, I want to determine significance for all values in above matrix.
Also, is there any package in R that does it?
This is not (yet) an answer, only a comment, but it is too long for the box.
I do not really know how to determine such significance, but out of curiosity I did a bootstrap procedure: from a replication of the original data to a pseudo-population of $N=19200$ I drew $t=1000$ random samples of $n=150$ (each row of the dataset could occur at most $128$ times).
From each of these $t=1000$ experiments I computed the PCA solution and stored the first PC only in a list. From these 1000 instances of first PCs I got the following statistics for their loadings:
PrC[1]: Mean Min Max Stddev SE_mean lb(95%) mean ub(95%)
------------------------------------------------------------------------------
S.L 0.362 0.314 0.412 0.015 0.000 0.361 0.362 0.362
S.W -0.085 -0.131 -0.023 0.017 0.001 -0.086 -0.085 -0.083
P.L 0.856 0.841 0.869 0.004 0.000 0.856 0.856 0.857
P.W 0.358 0.334 0.382 0.008 0.000 0.358 0.358 0.359
The 95% confidence interval for the item S.Width was -0.085 .. -0.083, and this shows that this value seems to differ from zero not merely by the pure random effect of the sampling. (Similarly narrow are all the 95% confidence intervals for the other loadings.)
That said, it's clear I need more clarification on what it means for a loading to "contribute significantly" - significance derived from what expectation? (That's what I do not yet understand; I'm completely illiterate so far on significance estimation for covariances and for loadings in a factor model, so this all might be of no help at all here.)
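The resampling just described can be sketched in Python rather than R (an ordinary bootstrap with replacement, which approximates the pseudo-population scheme above; scikit-learn is assumed only for loading the iris data):

```python
import numpy as np
from sklearn.datasets import load_iris

X = load_iris().data   # the same four variables as prcomp(iris[1:4])

def pc1(data):
    # covariance-based PCA, matching prcomp's default (no scaling)
    vals, vecs = np.linalg.eigh(np.cov(data, rowvar=False))
    return vecs[:, -1]             # eigenvector of the largest eigenvalue

ref = pc1(X)                       # reference solution for sign alignment
if ref[2] < 0:                     # orient so Petal.Length loads positively,
    ref = -ref                     # as in the question's prcomp output

rng = np.random.default_rng(0)
loadings = []
for _ in range(1000):
    sample = X[rng.integers(0, len(X), size=len(X))]
    v = pc1(sample)
    loadings.append(-v if v @ ref < 0 else v)   # undo arbitrary sign flips
loadings = np.asarray(loadings)

ci_lo, ci_hi = np.percentile(loadings[:, 1], [2.5, 97.5])  # Sepal.Width on PC1
print(f"Sepal.Width PC1 loading, 95% bootstrap CI: [{ci_lo:.3f}, {ci_hi:.3f}]")
```

The sign alignment step matters: each resampled eigenvector is only defined up to sign, so without flipping to match a reference solution the summary statistics would mix the two orientations.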
[Update 2]
Here is a picture which shows the location of the Iris-items in the coordinates of the first 2 principal components, evaluated by the Monte-Carlo-experiment ("population": $N=128 \cdot 150=19200$, "sample": $n=150$, number-of-samples: $s=1000$)
Picture 1: (using covariance-matrix, loadings from eigenvectors as done in the OP's question)
From the picture I'd say that the small loading of Sepal.Width of -0.141 on PC1 is a reliable (different from zero, however small) estimate of the loading in the "population" (because the whole cloud is separated from the y-axis).
Using the standard interpretation of PCA (based on correlations, using scaled eigenvectors) the picture looks a bit different, but still with very little disturbances of the loadings of the items.
The statistics are as in the following:
PrC[1] Mean Min Max Stddev SE_mean lb(95%) mean ub(95%)
------------------------------------------------------------------------------
S.L 0.891 0.840 0.937 0.015 0.000 0.890 0.891 0.892
S.W -0.459 -0.705 -0.159 0.081 0.003 -0.465 -0.459 -0.454
P.L 0.991 0.987 0.994 0.001 0.000 0.991 0.991 0.991
P.W 0.965 0.946 0.980 0.005 0.000 0.965 0.965 0.965
Picture 2: (using correlation-matrix, principal components taken in the standard method)
[Update 1] Just out of my own curiosity I made a set of plots of the empirical loadings matrices when samples are drawn from a known population. That's a form of bootstrapping, and I've not yet seen similar images. I took as population a set of 1000 normally distributed random cases with a certain factorial structure. Then I drew 256 random samples from the population with n=40 and did the same components analysis/rotation for each of those 256 samples. To compare, and to see how the accuracy of the estimation improves, I took the same number of samples, but now each sample with n=160. See the comparison at http://go.helms-net.de/stat/sse/StabilityofPC
• Thanks for your answer. What exact code did you use for above bootstrap procedure? Did you encounter sign issue, since loadings sometimes come with reverse signs on repeated sampling? – rnso Jun 26 '15 at 13:30
• I concatenated the original dataset several times until N=19200 was reached. Then with a random generator restricted to the range 1..19200 (with guaranteed frequency of 1 for each random value if called fewer than 19200 times) I simply took the first 150 numbers and used them as indexes into the population dataset. This made one sample. Evaluate, store the first prcomp, and repeat until 1000 samples and thus 1000 first PCs were collected. Then I did simple descriptive statistics on the list. Sign issue? Hmm, I don't know what you mean? – Gottfried Helms Jun 26 '15 at 14:38
• – rnso Jun 26 '15 at 15:15
• @rnso: What if not just the signs flip but also the axes of the PCA when choosing different bootstrap samples? (Imagine e.g. the case where the original variables are quite orthogonal). I am asking of pure curiosity. – Michael M Apr 18 '16 at 14:12
• The figures you added seem to be very close to the ones I produced answering your follow-up question. That's good. – amoeba Apr 18 '16 at 14:17
---
# What are a good set of macros for writing beamer presentations?
I previously asked about tricks for reducing typing when writing beamer presentations. I thought it might be useful to write a few macros for common commands. E.g., something like this:
\newcommand{\bi}{\begin{itemize}}
\newcommand{\ei}{\end{itemize}}
\newcommand{\be}{\begin{enumerate}}
\newcommand{\ee}{\end{enumerate}}
\newcommand{\bb}{\begin{block}}
\newcommand{\eb}{\end{block}}
I'm always a little wary about defining macros in case it introduces issues. In particular, I encountered problems when trying to write a macro for \begin{frame}.
Has anyone already done this and worked out a good set of macros?
I'd also be interested in whether experienced users think macro redefinition in this case is worth the bother.
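For reference on the `\begin{frame}` problem mentioned above: a plain `\newcommand` alias tends to fail because beamer collects the frame's body itself, and verbatim content makes this worse. Beamer's documented way around this is to wrap `frame` in a new environment and tell beamer its name via the `environment=` key (a sketch; the name `myframe` and the always-on `fragile` option are my own choices):

```latex
\newenvironment{myframe}[1]
  {\begin{frame}[environment=myframe,fragile]{#1}}
  {\end{frame}}
```

With `\begin{myframe}{Title} ... \end{myframe}`, even fragile (verbatim) content works, because the `environment=` key lets beamer find the true end of the frame.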
-
I would advise against having commands that start an environment. It makes it harder to read when environments start and stop if you start obfuscating the \begin and \end commands. – Seamus Oct 14 '10 at 10:14
If typing is what you are concerned about, I suggest defining keyboard shortcuts/macros in Vim/Emacs/whatever editor you use instead. – David Hollman Nov 15 '10 at 13:05
I'm not a big fan of using macros in this way. It does cut down on a bit of typing, but tends to make the source code much more opaque. A better solution is to use an editor that allows you to assign bits of code to shortcuts. For example, I use TeXShop (Mac OSX) and have set the key combination Ctrl+Opt+Cmd+1 to insert the code
\begin{enumerate}
\item
\end{enumerate}
and place the cursor right after \item. This cuts down on typing (I just have to mash down the bottom left of my keyboard and type a number) while still producing a readable source.
I'm sure that there are ways of reproducing this functionality with other editors and on other operating systems as well; this answer to a very similar question suggests the AutoHotkey utility for Windows.
-
Thanks. Based on your thoughts and the number of up votes, I think I'll just avoid the macro redefinition. – Jeromy Anglim Oct 14 '10 at 7:56
Emacs+AUCTeX already has built-in keyboard shortcuts for adding environments, sections and so on (with labels added automatically if needed). A similar approach can be achieved in any other text editor with a flexible enough keyboard-shortcut system. This is, as has already been pointed out, preferable to TeX macros for new environments.
One beamer specific macro that I use is the following:
\newenvironment{bitemize}{\begin{itemize}[<+->]}{\end{itemize}}
What this does is define a bitemize environment that has consecutive items appear one by one on the slides. This doesn't really save all that many keystrokes, but [<+->] is tricky to type in a hurry, and since I use Emacs' C-c C-e to add environments, it saves me navigating back to the top of the itemize environment to add it.
One could also do the following for a similar small saving when using alert:
\newcommand{\balert}[1]{\alert<+->{#1}}
I expect this will reveal consecutive alerts on consecutive slides. (I haven't used this, I just thought of it now. I don't tend to use alert, but I know people who do.)
-
A very simple way of reducing typing by keyboard shortcuts in Linux is the small, very cool editor Scribes ( http://scribes.sourceforge.net/ ), where you can import and export your shortcuts as xml. An example of latex shortcuts is at the download site.
For Beamer, I created my own and simply type:
• fr for the whole frame environment or
• bl for a block,
• de for a description,
• item for an itemize environment.
That works very well and is really fast.
-
---
# Pointwise estimate in homogenization of Dirichlet problem for elliptic systems in divergence form
Partial solution. Year of origin: 2013
Posted online: 2018-07-07 19:14:27Z by Hayk Aleksanyan
Cite as: S-180707.2
Boundary layers in periodic homogenization of Dirichlet problem
Consider the homogenization problem of the elliptic system $$- \nabla \cdot A \left( \frac{x}{\varepsilon} \right) \nabla u (x) = 0, \ \ x \in D, \tag{1}$$ in a domain $D\subset \mathbb{R}^d$, ($d\geq 2$), and with oscillating Dirichlet boundary data $$u(x) = g \left(x , \frac{x}{\varepsilon} \right), \ \ x \in \partial D. \tag{2}$$
Here $\varepsilon> 0$ is a small parameter, and $A= A^{\alpha \beta } (x) \in M_N(\mathbb{R})$, $x\in \mathbb{R}^d$ is a family of functions indexed by $1\leq \alpha, \beta \leq d$ and with values in the set of matrices $M_N( \mathbb{R})$. For each $\varepsilon>0$ let $\mathcal{L}_\varepsilon$ be the differential operator in question, i.e. the $i$-th component of its action on a vector function $u=(u_1,...,u_N)$ is defined as $$(\mathcal{L}_\varepsilon u)_i (x)= - \left( \nabla \cdot A \left( \frac{\cdot}{\varepsilon} \right) \nabla u \right)_{i} (x) = -\partial_{x_\alpha} \left[ A^{\alpha \beta }_{ij} \left( \frac{\cdot}{\varepsilon} \right) \partial_{x_\beta} u_j \right],$$ where $1\leq i \leq N$ (the case $N=1$ corresponds to scalar equations). We impose the following conditions on the problem $(1)$-$(2)$:
(Ellipticity) there exists a constant $\lambda>0$ such that for any $x\in \mathbb{R}^d$, and any $\xi=(\xi^\alpha_i)\in \mathbb{R}^{dN}$ one has $$\lambda \xi^\alpha_i \xi^\alpha_i \leq A^{\alpha \beta}_{ij}(x) \xi^\alpha_i \xi^\beta_j \leq \frac{1}{\lambda} \xi^\alpha_i \xi^\alpha_i.$$
(Periodicity) $A$, and $g$ in its second variable, are $\mathbb{Z}^d$-periodic, i.e. $A(y+h) = A(y)$ and $g(x, y + h) = g(x, y)$ for all $x\in \partial D$, $y\in \mathbb{R}^d$, and $h\in \mathbb{Z}^d$.
We skip further assumptions on the smoothness and geometry of the domain; these are not relevant for introducing the general problem.
For each $\varepsilon > 0$ let $u_\varepsilon$ be the unique solution to $(1)$-$(2)$.
The main question, in general terms, asks: $$\textbf{What can we say about the limit of } u_\varepsilon \textbf{ as } \varepsilon \to 0 \ ?$$ In other words does $(1)$-$(2)$ have a homogenized limit and if so what properties does it have?
The following is a brief discussion, based on [1] (see also [2]-[4] for details), of how one may naturally arrive at $(1)$-$(2)$.
Consider problem $(1)$ but with a fixed boundary data $g$, i.e. $$-\nabla \cdot A\left( \frac{x}{\varepsilon} \right) \nabla u (x) = 0 , \ x\in D \qquad \text{and} \qquad u= g(x), \ x\in \partial D.$$ By Lax-Milgram for each $\varepsilon>0$ this problem has a unique weak solution $u_\varepsilon \in H^1(D; \mathbb{R}^N)$, which converges weakly in $H^1(D)$ to solution $u_0$ of the homogenized problem $$-\nabla \cdot A^0 \nabla u_0 (x) = 0 , \ x\in D \qquad \text{ and } \qquad u_0= g(x), \ x\in \partial D.$$ Here $A^0$ is the homogenized coefficient tensor and is defined via the solutions of the cell-problem, namely for $1\leq \gamma \leq d$ define $\chi = \chi^\gamma (y) \in M_N(\mathbb{R})$ to be the periodic solution of the problem $$-\partial_\alpha [ A^{\alpha \beta}(y) \partial_\beta \chi^\gamma(y) ] = \partial_\alpha A^{\alpha \gamma } (y) \text{ in } \mathbb{T}^d \text{ and } \int_{\mathbb{T}^d} \chi^\gamma(y) dy = 0 ,$$ where $\mathbb{T}^d$ is the unit torus, and we adopted the summation convention of repeated indices. The homogenized coefficients are defined by $$A^{0, \alpha \beta} = \int_{ \mathbb{T}^d } A^{\alpha \beta} dy + \int_{\mathbb{T}^d} A^{\alpha \gamma} \partial_{\gamma } \chi^{\beta} dy.$$
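As a sanity check on these definitions (a standard fact, not part of the source text): for a scalar equation in dimension one the cell problem integrates explicitly. From $-\left( A(y)(\chi'(y) + 1) \right)' = 0$ one gets $A(y)(\chi'(y)+1) = c$ for a constant $c$, and the periodicity requirement $\int_0^1 \chi'(y)\, dy = 0$ forces
$$A^0 = c = \left( \int_0^1 \frac{dy}{A(y)} \right)^{-1},$$
the harmonic mean of $A$, which lies below the arithmetic mean unless $A$ is constant.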
Setting $u_1 (x,y) = - \chi^{\alpha} (y) \partial_{\alpha} u_0(x)$ one obtains $$u_\varepsilon(x) = u_0(x) + \varepsilon u_1 \left( x, \frac{x}{\varepsilon} \right) + O( \varepsilon^{1/2} ) \text{ in } H^1(D).$$ The latter is the justification that the first two terms of the formal two-scale expansion of $u_\varepsilon$ are correct. Now observe that there is a mismatch of the boundary data in the left and right hand sides of the last expansion. The profile $u_1$ being periodic in its second variable oscillates near the boundary, giving rise to the so-called boundary layer phenomenon, which is responsible for the $\varepsilon^{1/2}$ loss in the approximation of $u_\varepsilon$. Quoting [1],
Of particular importance is the analysis of the behavior of solutions near boundaries and, possibly, any associated boundary layers. Relatively little seems to be known about this problem.
Indeed, correcting the boundary data by $u_{1,\varepsilon }^{bl}$ defined as $$\mathcal{L}_\varepsilon u_{1, \varepsilon }^{bl} = 0 \text{ in } D \qquad \text{and} \qquad u_{1, \varepsilon }^{bl} = -u_1 \left( x, \frac{x}{\varepsilon} \right) \text{ on } \partial D,$$ we get $$u_\varepsilon(x) = u_0(x) + \varepsilon u_1 \left( x, \frac{x}{\varepsilon} \right) + \varepsilon u_{1, \varepsilon}^{bl} (x) + O( \varepsilon ) \text{ in } H^1(D).$$ While this expansion shows that correcting the boundary data in the two-scale expansion of $u_\varepsilon$ gives a better approximation, it is of little use as long as we do not understand the behavior of $u_{1, \varepsilon}^{bl}$ as $\varepsilon \to 0$. The last question is precisely a partial case of $(1)$-$(2)$, with boundary data $g(x,y) = - \chi^{\alpha} (y) \partial_{\alpha} u_0(x)$.
### Solution Description
We consider homogenization of Dirichlet problem for divergence type elliptic operator when the operator is fixed, and the boundary data is oscillating, which is a particular case of the setting introduced in [2].
Namely, for a small parameter $\varepsilon>0$ let $u_\varepsilon$ be the solution of the following elliptic system with a Dirichlet boundary condition $$-\nabla \cdot A(x) \nabla u_\varepsilon (x) =0 \text{ in } D \ \ \text{ and } \ \ u_\varepsilon(x) = g(x/ \varepsilon) \text{ on } \partial D,$$ where $D \subset \mathbb{R}^d$ $(d\geq 2)$ is a bounded domain, $g$ is $\mathbb{Z}^d$-periodic, and the operator is uniformly elliptic. Let also $u_0$ be the solution to the same elliptic system in $D$ but with Dirichlet data equal to the average of $g$ over its unit cell of periodicity. The main result of [1] states that under strict convexity of the domain $D$, and $C^\infty$ smoothness of the data involved in the problem, for any $\kappa>0$ one has the following pointwise bound $$| u_\varepsilon(x) - u_0(x) | \leq C_\kappa \min\left\{ 1, \frac{\varepsilon^{(d-1)/2}}{(d(x))^{d-1+\kappa} } \right\}, \qquad \forall x\in D, \tag{a}$$ where $d(x)$ denotes the distance of $x$ from the boundary of $D$, and the constant $C_\kappa =C(\kappa, d, A, D,g)>0$. As a corollary to $(a)$, for any $1\leq p< \infty$ and any $\kappa>0$ we obtain $$|| u_\varepsilon - u_0 ||_{L^p(D)} \leq C_\kappa \varepsilon^{ \frac{1}{2p} - \kappa}. \tag{b}$$
The starting point of the proofs is representation of solutions via Poisson kernel; then the proof proceeds by an analysis of singular oscillatory integrals. There are two competing quantities in the Poisson integral representation; namely, the singularity of the Poisson kernel, and the oscillation of the boundary data. The smoothness of $A$ and $\partial D$ allows one to obtain a nice (quantitative) control over singularities of the representation kernel and its derivatives. Next, the periodicity and smoothness of the boundary data $g$, along with the strict convexity of the domain (ensuring non-zero Gauss curvature everywhere on the boundary) provide a lot of cancellations in the integral representing the solutions. With a careful trade-off between singularity of the integration kernel, and decay of the integral of the boundary data one obtains the desired estimate.
Remark 1. With some minor modifications, the proposed approach leads to homogenization when, instead of strict convexity of the domain, one requires that at each point of the boundary at least $1\leq m \leq d-1$ of the principal curvatures are non-vanishing (strict convexity corresponds to the case $m=d-1$). The pointwise estimate in this case will be similar, with $d-1$ in the exponent replaced by $m$, while the $L^p$-estimates will remain the same.
Remark 2. The settings of [1] and [2] become the same for constant coefficient operators. Comparing bound $(b)$ with the exponent of the $L^2$ estimate of [2], which equals $\frac{d-1}{3d+5}-$ (i.e. any exponent strictly below $\frac{d-1}{3d+5}$), we see that $(b)$ gives a better estimate for constant coefficient operators in dimensions up to $8$, while for dimensions at least $10$ the estimate of [2] takes over.
1. Applications of Fourier analysis in homogenization of Dirichlet problem I. Pointwise estimates
Journal of Differential Equations 254 (6), 2626-2637, 2013
2. Homogenization and boundary layers
Acta Mathematica 209 (1), 133-178, 2012
• Created at: 2018-07-07 19:14:27Z
|
|
1. Solving for x given f^-1(x) and secondary equation
Hello,
I am starting out in Calc and this popped up on the first hw/pretest:
Solve 2 f(x-1)-2=0 for x if f^-1(x)=x^5+x
I figured f(x-1)=1 and "x=y(y^4+1)" is as far as I got with flipping the inverse.. My algebra is rusty and I'm stuck.
2. Re: Solving for x given f^-1(x) and secondary equation
Originally Posted by MickSinclair
Hello,
I am starting out in Calc and this popped up on the first hw/pretest:
Solve 2 f(x-1)-2=0 for x if f^-1(x)=x^5+x
I figured f(x-1)=1 and "x=y(y^4+1)" is as far as I got with flipping the inverse.. My algebra is rusty and I'm stuck.
$2 f(x-1) - 2 = 0$
$f(x-1) = 1 \implies f^{-1}(1) = x-1$
$x = f^{-1}(1) + 1 = 1^5 + 1 + 1 = 3$
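A quick numerical sanity check of this answer. The helper below is hypothetical: it evaluates $f$ by bisection, which is valid since $f^{-1}(x)=x^5+x$ is strictly increasing:

```python
def finv(y):
    return y**5 + y

def f(x, lo=-10.0, hi=10.0, tol=1e-12):
    # invert finv by bisection; valid because finv is strictly increasing
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if finv(mid) < x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = 3
print(2 * f(x - 1) - 2)  # ≈ 0, confirming x = 3
```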
|
|
# Spring Potential Energy Problem
1. Dec 8, 2007
### meganw
1. The problem statement, all variables and given/known data
A nonlinear spring is compressed horizontally. The spring exerts a force that obeys the equation $F(x) = Ax^{1/2}$, where x is the distance from equilibrium that the spring is compressed and A is a constant. A physics student records data on the force exerted by the spring as it is compressed and plots the two graphs below, which include the data and the student's best-fit curves.
http://img148.imageshack.us/img148/9783/graphlk8.png [Broken]
a. From one or both of the given graphs, determine A. Be sure to show your work and specify the units.
b. i. Determine an expression for the work done in compressing the spring a distance x.
ii. Explain in a few sentences how you could use one or both of the graphs to estimate a numerical answer to part (b)i for a given value of x.
(I got a and b, but I need help on c:)
c. The spring is mounted horizontally on a countertop that is 1.3 m high so that its equilibrium position is just at the edge of the countertop. The spring is compressed so that it stores 0.2 J of energy and is then used to launch a ball of mass 0.10 kg horizontally from the countertop. Neglecting friction, determine the horizontal distance d from the edge of the countertop to the point where the ball strikes the floor.
2. Relevant equations
$PE = \frac{1}{2}Kx^2$
$W = \int F \, dx$
$W = \frac{2}{3} A x^{3/2}$ <---I got these answers from the earlier questions a/b
3. The attempt at a solution
c) $PE = \frac{1}{2}Kx^2$
.2 = .5(25)x^2
x=.1265
Conservation of energy??
Kinetic + PEgravity + PEspring = Final Kinetic + Final PEgravity + Final PEspring
Zero Kinetic Energy + (.10)(1.3)(9.8) + (.5)(25)(.1265) = .5(.10)(vf) + Zero potential energy
Last edited by a moderator: May 3, 2017
2. Dec 8, 2007
### Astronuc
Staff Emeritus
OK. The spring is oriented horizontally. At the point where the ball leaves the spring, the spring's potential energy has been transformed into the ball's kinetic energy. Since the ball is traveling horizontally, there is no change in gravitational potential energy at this point, that is, until the ball goes over the edge of the table. At the time the ball leaves the table, it has a horizontal velocity that is related to its kinetic energy, and it also begins a vertical free fall.
See this reference - http://hyperphysics.phy-astr.gsu.edu/hbase/traj.html#tra11
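Following that outline, a quick numerical sketch of part (c), assuming $g = 9.8\ \mathrm{m/s^2}$ and that all of the stored 0.2 J becomes kinetic energy at launch:

```python
import math

E = 0.2    # stored spring energy, J
m = 0.10   # ball mass, kg
h = 1.3    # countertop height, m
g = 9.8    # gravitational acceleration, m/s^2

v = math.sqrt(2 * E / m)   # launch speed: (1/2) m v^2 = E
t = math.sqrt(2 * h / g)   # time to fall height h starting with zero vertical speed
d = v * t                  # horizontal distance traveled in that time
print(v, t, d)             # v = 2.0 m/s, d ≈ 1.0 m
```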
3. Dec 8, 2007
### meganw
Thank You! =)
|
|
# Dispersion correction and band structure
Let's say I optimized geometry with PBE+D3. I want to carry out band structure calculation with HSE06, should I add the dispersion correction to the band structure calculation or not?
I tried both, and in the two cases I got equivalent band structures. So:
$$\tag{1} \textrm{Graphene}_{GO}^{\textrm{PBE-D3}} + \textrm{Graphene}_{BS}^{\textrm{HSE06-D3}} = \textrm{Graphene}_{GO}^{\textrm{PBE-D3}} + \textrm{Graphene}_{BS}^{\textrm{HSE06}}$$
I assume that the dispersion correction affects the band structure only through the optimized geometry, so it really does not have any effect on a single-point band structure calculation. If that is the case, why do some papers report, for example, PBE+D3 or HSE06+D3? If I use the dispersion correction only at the geometry optimization level, can I still put HSE06+D3 on the figure, or should I add the dispersion correction to the band structure calculation even though it has no effect?
• +1, I have just made some edits to make the equation look nicer, as you can see here: mattermodeling.stackexchange.com/posts/4316/revisions. Perhaps you can change the subscripts that way too! Feb 8 '21 at 17:56
• You are correct. The dispersion correction affects the geometry alone, it doesn't have any effect in an scf/bands calculation. I think I have seen papers that say 'HSE+D3' and some papers which just state that the D3 correction was used and then just specify 'HSE'. So I think it's just a matter of choice. Feb 8 '21 at 18:21
|
|
# When can we say that two programs are different?
Q1. When can we say that two programs (written in some programming language like C++) are different?
The first extreme is to say that two programs are equivalent iff they are identical. The other extreme is to say two programs are equivalent iff they compute the same function (or show the same observable behavior in similar environments). But these are not good: not all programs checking primality are the same. We can add a line of code with no effect on the result and we would still consider it the same program.
Q2. Are programs and algorithms the same kind of object? If not, what is the definition of an algorithm and how does it differ from the definition of a program? When can we say two algorithms are equivalent?
• The program isomorphism problem? Can't one ask "is this program isomorphic to the program that always halts?" and recover the Halting Problem? If we restrict ourselves to the Bounded Halting Program Problem isn't this just graph isomorphism? – user834 Dec 13 '11 at 19:53
• When are two algorithms the same? arxiv.org/abs/0811.0811 – sdcvvc Dec 14 '11 at 1:51
• Wouldn't it depend entirely on the context? Getting a bit philosophical here, but a bolted-down chair and an upside-down bolted-down chair are the same thing physically but not the same in terms of the idea of a chair. – Rei Miyasaka Dec 14 '11 at 6:16
• Slightly off-topic, but, since proofs are programs ... gowers.wordpress.com/2007/10/04/… – Radu GRIGore Jan 3 '12 at 13:44
• The following article is very related. I have only skimmed through it some time ago, but Blass and Gurevic usually write really well (I just don't recall reading anything else by Dershowitz, not saying it usually isn't very readable). research.microsoft.com/en-us/um/people/gurevich/Opera/192.pdf WHEN ARE TWO ALGORITHMS THE SAME? ANDREAS BLASS, NACHUM DERSHOWITZ, AND YURI GUREVICH – kasterma Nov 17 '13 at 13:00
Q1: There are many notions of program equivalence (trace equivalence, contextual equivalence, observational equivalence, bisimilarity) which may or may not take into account things such as time, resource usage, nondeterminism, termination. A lot of work has been done on finding usable notions of program equivalence. For example: Operationally-Based Theories of Program Equivalence by Andy Pitts. But this barely scratches the surface. This should be useful even if you are interested in when two programs are not equivalent. One can even reason about non-halting programs (using bisimulation and coinduction).
Q2: One possible answer to part of this question is that interactive programs are not algorithms (assuming that one considers an algorithm to take all of its input at once, but this narrow definition excludes online algorithms). A program could be a collection of interacting processes that also interact with their environment. This certainly doesn't match with the Turing-machine/Recursion theory notion of algorithm.
• IO and side effects in general are not covered by classical algorithm notions at all. – Raphael Dec 15 '11 at 13:04
The other extreme is to say two programs are equivalent iff they compute the same function (or show the same observable behavior in similar environments). But these are not good: not all programs checking primality are the same. We can add a line of code with no effect on the result and we would still consider it the same program.
This is not an extreme: program equivalence must be defined relative to a notion of observation.
The most common definition in PL research is contextual equivalence. In contextual equivalence, the idea is that we observe programs by using them as components of larger programs (the context). So if two programs compute the same final value for all contexts, then they are judged to be equal. Since this definition quantifies over all possible program contexts, it is difficult to work with directly. So a typical research program in PL is to find compositional reasoning principles which imply contextual equivalence.
However, this is not the only possible notion of observation. For example, we can easily say that the memory, time, or power behavior of a program is observable. In this case, fewer program equivalences hold, since we can distinguish more programs (e.g., mergesort is now distinguishable from quicksort). If you want to (say) design languages immune to timing channel attacks, or to design space-bounded programming languages, then this is the sort of thing you have to do.
Also, we may choose to judge some of the intermediate states of a computation as observable. This always happens for concurrent languages, due to the possibility of interference. But you might want to take this view even for sequential languages --- for example, if you want to ensure that no computations store unencrypted data in main memory, then you have to regard writes to main memory as observable.
Basically, there is no single notion of program equivalence; it is always relative to the notion of observation you pick, and that depends on the application you have in mind.
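To make this concrete, here is a small illustrative sketch (a hypothetical example, not from the answer above): two programs that are extensionally equal, i.e. compute the same function, yet are distinguishable once running time is observable:

```python
import timeit

def sum_linear(n):
    # O(n): sums 0..n with a loop
    total = 0
    for i in range(n + 1):
        total += i
    return total

def sum_formula(n):
    # O(1): Gauss's closed form
    return n * (n + 1) // 2

n = 100_000
assert sum_linear(n) == sum_formula(n)   # same function on this input

t_loop = timeit.timeit(lambda: sum_linear(n), number=20)
t_form = timeit.timeit(lambda: sum_formula(n), number=20)
print(t_loop > t_form)   # a timing observation tells them apart
```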
• It's worth pointing out that there is no unique notion of contextual equivalence (or contextual congruence) either, for example if the programming language in question is interactive (i.e. does not yield a value). – Martin Berger Dec 14 '11 at 14:52
• While the rest of your answer is excellent, I think that I disagree with this: "program equivalence must be defined relative to a notion of observation". That's only true for observational notions of equivalence. $\alpha$-equivalence isn't such a notion, but it's very much a kind of program equivalence. – Sam Tobin-Hochstadt Dec 14 '11 at 15:19
• @NeelKrishnaswami, I think this is just begging the question. $\alpha$ is an equivalence, and there are programming languages where it doesn't respect observational equivalence. See Wand's "The Theory of Fexprs is Trivial". Other times, it's not an equivalence that you can use for optimization, for example if debugging is important. Both of these add observations that violate $\alpha$. – Sam Tobin-Hochstadt Dec 15 '11 at 18:01
• @SamTobin-Hochstadt. Ok, let us forget "usual". The feeling I get is that you are saying the same thing that Neel said, which was quite well-thought out in my opinion. Your idea, which is still vague to me, can be formalized in Neel's framework by picking the right kind of observations and the right kind of program contexts. – Uday Reddy Mar 6 '12 at 23:41
• @UdayReddy Consider the $\lambda$-calculus. What context can distinguish $\lambda x.x$ from $\lambda y.y$? None. We could change the language so that we can make this distinction, but then we lost the concept we had before -- observational equivalence in the lambda-calculus is a useful tool. Instead, we should recognize that in the LC, we can't distinguish the terms with in-language observations, even though they're different terms. – Sam Tobin-Hochstadt Mar 9 '12 at 1:35
Q2: I think usual theoretical definitions don't really distinguish between algorithms and programs, but "algorithm" as commonly used is more like a class of programs. For me an algorithm is sort of like a program with some subroutines left not fully specified (i.e. their desired behavior is defined but not their implementation). For example, the Gaussian elimination algorithm doesn't really specify how integer multiplication is to be performed.
I am sorry if this is naive. I don't do PL research.
• The idea is probably that there are multiple implementations for those subroutines and you don't care which is chosen as long as it performs according to your specification. – Raphael Dec 15 '11 at 13:05
|
|
A magnet of magnetic moment M is situated with its axis along the direction of a magnetic field of strength B. The work done in rotating it by an angle of 180° will be
(a) -MB (b) +MB
(c) 0 (d) +2MB
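With the dipole potential energy $U(\theta) = -MB\cos\theta$, the work to rotate from alignment is $W = MB(1-\cos\theta)$; at $\theta=180°$ this gives $2MB$, i.e. option (d). A quick sketch with unit values:

```python
import math

def rotation_work(M, B, theta):
    # W = MB(1 - cos theta): work to rotate a dipole away from field alignment
    return M * B * (1 - math.cos(theta))

print(rotation_work(1.0, 1.0, math.pi))  # 2.0 -> W = 2MB, option (d)
```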
Concept Questions :-
Bar magnet
High Yielding Test Series + Question Bank - NEET 2020
Difficulty Level:
A magnetic needle is kept in a non-uniform magnetic field. It experiences
1. A force and a torque
2. A force but not a torque
3. A torque but not a force
4. Neither a torque nor a force
Concept Questions :-
Bar magnet
A long magnetic needle of length 2L, magnetic moment M and pole strength m units is broken into two pieces at the middle. The magnetic moment and pole strength of each piece will be
1. $\frac{M}{2},\frac{m}{2}$
2. $M,\frac{m}{2}$
3. $\frac{M}{2},m$
4. M, m
Concept Questions :-
Bar magnet
Two identical thin bar magnets, each of length l and pole strength m, are placed at right angles to each other with the north pole of one touching the south pole of the other. The magnetic moment of the system is:
1. ml
2. 2ml
3. $\sqrt{2}ml$
4. $\frac{1}{2}ml$
Concept Questions :-
Bar magnet
The rate of change of torque with deflection is maximum for a magnet suspended freely in a uniform magnetic field of induction B when
(a) $\theta ={0}^{o}$ (b) $\theta ={45}^{o}$
(c) $\theta ={60}^{o}$ (d) $\theta ={90}^{o}$
Concept Questions :-
Bar magnet
Force between two identical bar magnets whose centres are r metres apart is 4.8 N when their axes are in the same line. If the separation is increased to 2r, the force between them is reduced to
1. 2.4N 2. 1.2N
3. 0.6N 4. 0.3N
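For coaxial bar magnets treated as point dipoles, the axial force falls off as $1/r^4$, so doubling the separation divides the force by 16. A one-line check (taking the dipole approximation as given):

```python
F_r = 4.8           # force at separation r, N
F_2r = F_r / 2**4   # axial dipole-dipole force scales as 1/r^4
print(F_2r)         # 0.3 N -> option 4
```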
Concept Questions :-
Analogy between electrostatics and magnetostatics
A bar magnet of magnetic moment $10^4$ J/T is free to rotate in a horizontal plane. The work done in rotating the magnet slowly from a direction parallel to a horizontal magnetic field of $4\times10^{-5}$ T to a direction 60° from the field will be
1. 0.2 J 2. 2.0 J
3. 4.18 J 4. $2\times10^{2}$ J
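Using $W = MB(1-\cos\theta)$, and reading the stated values as $M = 10^4$ J/T and $B = 4\times10^{-5}$ T:

```python
import math

M = 1e4                  # magnetic moment, J/T
B = 4e-5                 # field strength, T
theta = math.radians(60)

W = M * B * (1 - math.cos(theta))  # work against the field torque
print(W)                           # 0.2 J -> option 1
```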
Concept Questions :-
Analogy between electrostatics and magnetostatics
Two equal bar magnets are kept as shown in the figure. The direction of resultant magnetic field, indicated by arrow head at the point P is (approximately)
1.
2.
3.
4.
Concept Questions :-
Bar magnet
A straight wire carrying a current i is turned into a circular loop. If the magnitude of the magnetic moment associated with it in M.K.S. units is M, the length of the wire will be
1. $4\mathrm{\pi iM}$ 2. $\sqrt{\frac{4\mathrm{\pi M}}{i}}$
3. $\sqrt{\frac{4\mathrm{\pi i}}{M}}$ 4. $\frac{M\mathrm{\pi }}{4i}$
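Since the loop has moment $M = i\pi r^2$ and the wire length is $L = 2\pi r$, eliminating $r$ gives $L = \sqrt{4\pi M/i}$, i.e. option 2. A quick check with arbitrary sample values:

```python
import math

i, r = 2.0, 0.5                          # arbitrary current (A) and radius (m)
M = i * math.pi * r**2                   # magnetic moment of the loop
L_true = 2 * math.pi * r                 # actual wire length
L_opt2 = math.sqrt(4 * math.pi * M / i)  # option 2's formula
print(L_true, L_opt2)                    # identical -> option 2
```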
Concept Questions :-
Magnetic moment
Two similar bar magnets P and Q, each of magnetic moment M, are taken. If P is cut along its axial line and Q is cut along its equatorial line, all the four pieces obtained have
1. Equal pole strength 2. Magnetic moment M/4
3. Magnetic moment M/2 4. Magnetic moment M
Concept Questions :-
Bar magnet
|
|
# A curious limit for $-\frac{\pi}{2}$
How to prove this? $$-\frac\pi2 = \lim_{x\to\infty}\sum_{n=1}^{\infty}(-1)^n \frac{x^{2n-1}}{(2n)! \ln 2n}$$
• I tried to typeset your equation more nicely. Please check if I did right. – martini Sep 4 '12 at 16:14
• What is the source? – Théophile Sep 4 '12 at 16:26
• You have to pick an infinity, I think: $x\to +\infty$ or $x\to -\infty$. Might seem pedantic, but $x\to\infty$ means something specific. (We often write $n\to\infty$, but there, $n$ is usually a natural number, and there is only "one" infinity it can go to.) – Thomas Andrews Sep 4 '12 at 16:30
• @ThomasAndrews: it looks rather straightforward to guess which infinity mick has in mind... – Fabian Sep 4 '12 at 16:34
• Unless I'm missing something here, the expression $\,u\to\infty\,$ is always understood as "$\,u\,$ going to (plus) infinity", otherwise it must specifically be added a minus sign: $\,x\to -\infty\,$ – DonAntonio Sep 4 '12 at 16:37
Using $$\int_0^\infty \left(2 n\right)^{-t} \mathrm{d} t = \frac{1}{\ln(2n)}$$ the sum becomes $$\sum_{n=1}^\infty (-1)^n \frac{x^{2n-1}}{(2n)! \cdot \ln(2n)} = \int_0^\infty \left(\sum_{n=1}^\infty (-1)^n \frac{x^{2n-1}}{(2n)! \cdot (2n)^t} \right)\mathrm{d}t$$ Now, further using $$\int_0^\infty u^{t-1} \mathrm{e}^{-2 n u} \mathrm{d} u = \Gamma(t) (2n)^{-t}$$ we rewrite the sum as a double integral: $$\sum_{n=1}^\infty (-1)^n \frac{x^{2n-1}}{(2n)! \cdot \ln(2n)} = \int_0^\infty \left( \int_0^\infty \frac{u^{t-1}}{\Gamma(t)}\mathrm{d}t \right) \frac{\cos\left(x \mathrm{e}^{-u}\right)-1}{x} \mathrm{d} u$$ In the large $x$ limit, the main contribution to the integral comes from large $u$. For large $u$, $$\int_0^\infty \frac{u^{t-1}}{\Gamma(t)}\mathrm{d}t \approx \sum_{t=1}^\infty \frac{u^{t-1}}{\Gamma(t)} = \mathrm{e}^{u}$$
Thus: $$\begin{eqnarray} \lim_{x \to \infty} \sum_{n=1}^\infty (-1)^n \frac{x^{2n-1}}{(2n)! \cdot \ln(2n)} &=& \lim_{x \to \infty} \int_0^\infty \mathrm{e}^{u} \frac{\cos\left(x \mathrm{e}^{-u}\right)-1}{x} \mathrm{d} u = \lim_{x \to \infty} \int_1^\infty \frac{\cos\left(x/w\right)-1}{x} \mathrm{d} w \\ &=& \lim_{x \to \infty} \int_{1/x}^\infty \left(\cos\left(\frac{1}{v}\right)-1\right) \mathrm{d} v = \int_{0}^\infty \left(\cos\left(\frac{1}{v}\right)-1\right) \mathrm{d} v \\ &=& -\frac{\pi}{2} \end{eqnarray}$$
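The final step can be checked numerically. Substituting $w = 1/v$ turns the last integral into $\int_0^\infty (\cos w - 1)/w^2\, dw$, which the sketch below (standard library only; composite Simpson's rule on $[0,A]$ plus a $-1/A$ tail estimate) evaluates to $-\pi/2$:

```python
import math

def f(w):
    # (cos w - 1)/w^2 with its removable singularity at w = 0
    if w == 0.0:
        return -0.5
    return (math.cos(w) - 1.0) / (w * w)

def simpson(func, a, b, n):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = func(a) + func(b)
    for i in range(1, n):
        s += func(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

A = 1000.0
# tail: integral of (cos w - 1)/w^2 over [A, inf) is -1/A plus an O(1/A^2) oscillatory part
val = simpson(f, 0.0, A, 1_000_000) - 1.0 / A
print(val, -math.pi / 2)
```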
• @mick Sorry for being sketchy. Large $u$ behavior of $\int_0^\infty \frac{u^{t-1}}{\Gamma(t)} \mathrm{d} t$ can also be obtained using Laplace's method. I used Euler-Maclaurin formula. Could you please tell me more precisely which steps you did not get. Those at the end of the post, or some others? – Sasha Sep 4 '12 at 19:37
• $$\int_0^\infty \left( \sum_{n=1}^\infty (-1)^n \frac{x^{2n-1}}{(2n)!} \frac{1}{(2n)^t} \right) \mathrm{d} t = \int_0^\infty \left( \sum_{n=1}^\infty (-1)^n \frac{x^{2n-1}}{(2n)!} \frac{1}{\Gamma(t)} \int_0^\infty u^{t-1} \mathrm{e}^{-2n u} \mathrm{d} u \right) \mathrm{d} t = \int_0^\infty \int_0^\infty \frac{u^{t-1}}{\Gamma(t)} \left( \sum_{n=1}^\infty (-1)^n \frac{x^{2n-1}}{(2n)!} \mathrm{e}^{-2n u} \right) \mathrm{d} u \mathrm{d} t$$ The latter sum evaluates to $\frac{\cos(x \mathrm{e}^{-u})-1}{x}$. – Sasha Sep 4 '12 at 20:00
|
|
## Nonparametric estimate of the ruin probability in a pure-jump Lévy risk model (English) Zbl 1284.62245
Summary: We propose a nonparametric estimator of ruin probability in a Lévy risk model. The aggregate claims process $$X=\{X_t, t\geq 0\}$$ is modeled by a pure-jump Lévy process. Assume that high-frequency observed data on $$X$$ are available. The estimator is constructed based on the Pollaczek-Khinchin formula and Fourier transform. Risk bounds as well as a data-driven cut-off selection methodology are presented. Simulation studies are also given to show the finite sample performance of our estimator.
### MSC:
62G05 Nonparametric estimation
62G20 Asymptotic properties of nonparametric inference
91B30 Risk theory, insurance (MSC2010)
60G51 Processes with independent increments; Lévy processes
|
|
# How can I show that this operator is bounded on $L^2$?
Consider the integral operator
$$Tf(x) = {\int}_{-\infty}^{\infty} \frac{\sin(x - y)}{x - y}f(y)dy$$
How can I show that $T$ is bounded on $L^2$?
I know that bounded means there is a constant $c$ independent of $f$ such that ${\| Tf \| }_{L^2} \leq c \|f\|_{L^2}$ but I do not seem to succeed on solving this.
Any help would be very much appreciated.
\begin{align} Tf(x) &= \int_{-\infty}^{\infty}\frac{\sin(x-y)}{x-y}f(y)dy \\ &= \int_{-\infty}^{\infty}\frac{1}{2}\int_{-1}^{1}e^{is(x-y)}ds f(y)dy \\ &= \frac{1}{2}\lim_{R\rightarrow\infty}\int_{-R}^{R}\int_{-1}^{1}e^{is(x-y)}dsf(y)dy \\ &= \lim_{R\rightarrow\infty}\frac{1}{2}\int_{-1}^{1}\left(\int_{-R}^{R}e^{-isy}f(y)dy\right) e^{isx}ds \\ &= \frac{\sqrt{2\pi}}{2}\int_{-1}^{1}\hat{f}(s)e^{isx}ds. \end{align} The last equality holds because $\frac{1}{\sqrt{2\pi}}\int_{-R}^{R}e^{-isy}f(y)dy$ converges in $L^2(\mathbb{R})$ to $\hat{f}$ as $R\rightarrow\infty$. Therefore $Tf \in L^2$ and, by the Plancherel Theorem, $$\|Tf\|_2 = \pi\|\hat{f}\chi_{[-1,1]}\|_2 \le \pi \|\hat{f}\|_2 = \pi \|f\|_2.$$
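That identity can be cross-checked numerically for a concrete $f$. The sketch below (standard library only) takes $f(t) = e^{-t^2}$, whose non-unitary Fourier transform $F(s)=\int f(t)e^{-ist}\,dt$ equals $\sqrt{\pi}\,e^{-s^2/4}$, and compares the direct convolution against $\frac{1}{2}\int_{-1}^{1}F(s)e^{isx}\,ds$ (the same formula as above, rewritten in the non-unitary convention):

```python
import math

def sinc(u):
    # sin(u)/u with its removable singularity at 0
    return 1.0 if u == 0.0 else math.sin(u) / u

def Tf_direct(x, L=10.0, n=40000):
    # Tf(x) = integral of sinc(x - y) e^{-y^2} dy, trapezoid rule on [-L, L]
    h = 2 * L / n
    total = 0.0
    for i in range(n + 1):
        y = -L + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * sinc(x - y) * math.exp(-y * y)
    return total * h

def Tf_fourier(x, n=2000):
    # (1/2) * integral over [-1, 1] of sqrt(pi) e^{-s^2/4} cos(s x) ds
    h = 2.0 / n
    total = 0.0
    for i in range(n + 1):
        s = -1.0 + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.sqrt(math.pi) * math.exp(-s * s / 4) * math.cos(s * x)
    return 0.5 * total * h

print(Tf_direct(0.5), Tf_fourier(0.5))  # the two values agree
```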
It's been suggested that this follows from Young's inequality. It does not. One special case of Young is $||f*g||_\infty\le||f||_2||g||_2$ (of course that special case is just Cauchy-Schwarz.) That applies here, but it shows that $||Tf||_\infty\le c||f||_2$, not $||Tf||_2\le c||f||_2$. The case of Young's inequality that could show $T$ is bounded on $L^2$ is $||f*g||_2\le||f||_2||g||_1$, but that doesn't apply here, since the function $\sin(t)/t$ is not integrable.
The good news is that it's clear from Plancherel that $$||f*g||_2\le||f||_2||\hat g||_\infty,$$where $\hat g$ is the Fourier transform. So you only need to show that the Fourier transform of $\sin(t)/t$ is in $L^\infty$, which is true. (See "Detail" below...)
(Don't calculate the Fourier transform of $\sin(t)/t$ directly. Instead look in the back of the book: Calculate the Fourier transform of $\chi_{[-1,1]}$ and apply the $L^2$ inversion theorem.)
Note I'm not claiming that the Fourier transform of $\chi_{[-1,1]}$ is exactly $\sin(t)/t$. In a context like this it's impossible to say exactly what the Fourier transform of anything is, exactly. The reason is that various authors define the Fourier transform differently, putting the $2\pi$'s in different places. If you calculate the Fourier transform of $\chi_{[-1,1]}$ by whatever definition you're using you'll get something sort of like $\sin(t)/t$, close enough to let you figure out a $\phi\in L^\infty$ such that $\hat\phi(t)=\sin(t)/t$. So Plancherel implies that the Fourier transform of $\sin(t)/t$ is bounded.
Detail: Why is it that $||f*g||_2\le||f||_2||\hat g||_\infty$ for $f,g\in L^2$? The obvious argument would be this: $$||f*g||_2=||\widehat{f*g}||_2=||\hat f\hat g||_2\le||\hat f||_2||\hat g||_\infty=||f||_2||\hat g||_\infty.$$
But this raises the question of what $\widehat{f*g}$ is, since $f*g$ is not in $L^1$ or $L^2$. (I for one find it amusing that nobody's asked about this, given all the silly objections regarding elementary applications of Plancherel; this spot is at least somewhat problematic.) One could avoid the question by showing instead that $f*g=\mathcal{F}^{-1}(\hat f\hat g)$; there at least it's clear what everything means, because $\hat f\hat g\in L^1$.
But we get a simpler and more elegant argument by interpreting $\widehat{f*g}$ in the sense of tempered distributions. (Two fairly accessible references for the basic theory of tempered distributions are the relevant chapters in Rudin, Functional Analysis and Folland, Real Analysis.) Just to keep things clear, I'll be using the notation $\mathcal{F}u$ for the Fourier transform of the tempered distribution $u$, reserving the notation $\hat f$ for $f\in L^1$ or $f\in L^2$. We need to show this:
$\mathcal{F}(f*g)=\hat f\hat g$ for $f,g\in L^2$.
Proof: First, if $f,g\in L^1$ then $f*g\in L^1$, and a standard application of Fubini shows that $$\mathcal{F}(f*g)=\widehat{f*g}=\hat f\hat g.$$
Now suppose $f,g\in L^2$. Choose $f_n,g_n\in L^1\cap L^2$ such that $f_n\to f$ and $g_n\to g$ in $L^2$. Then $f_n*g_n\to f*g$ uniformly. Hence $f_n*g_n\to f*g$ in $\mathcal{S}'$ (that is, "in the sense of tempered distributions"). Since $\mathcal{F}$ is continuous on $\mathcal{S}'$ it follows that $\mathcal{F}(f_n*g_n)\to\mathcal{F}(f*g)$ in $\mathcal{S}'$. Similarly, $\hat f_n\to\hat f$ and $\hat g_n\to\hat g$ in $L^2$, so $\hat f_n\hat g_n\to\hat f\hat g$ in $L^1$ and hence $\hat f_n\hat g_n\to \hat f\hat g$ in $\mathcal{S}'$. And so we're done: $$\mathcal{F}(f*g)=\lim\mathcal{F}(f_n*g_n)=\lim\hat f_n\hat g_n=\hat f\hat g,$$ (where $\lim$ denotes convergence in $\mathcal{S}'$).
• The Fourier transform of $\sin t/t$ is not $\chi_{[-1,1]}$. It is not even bounded. The reason is $\sin t/t$ is not integrable, so the Fourier transform is not invertible there. – Guy Fsone Dec 1 '17 at 14:52
• @GuyFsone For heaven's sake, have you heard of the Plancherel theorem? – David C. Ullrich Dec 1 '17 at 14:54
• @GuyFsone So tell me, what is the Fourier transform of $\chi_{[-1,1]}$? – David C. Ullrich Dec 1 '17 at 15:08
• @GuyFsone Maybe we shouldn't call it the Fourier transform. Let's call it the Plancherel transform instead. That operator on $L^2$ that agrees with the Fourier transform on $L^2\cap L^1$. Take $\mathcal F$ to be the Plancherel transform; then it's absolutely and completely true that $\mathcal F^2 f(t) = f(-t)$ for every $f\in L^2$. Btw people "always" refer to the Plancherel transform as just the "Fourier transform". – David C. Ullrich Dec 1 '17 at 15:28
• @GuyFsone Proof, in case you haven't got it yet: It's clear from the $L^1$ inversion theorem that $\mathcal F^2f(t)=f(-t)$ for, say, $f$ in the Schwartz space. But $\mathcal F$ and the operator we might informally describe as $f(t)\mapsto f(-t)$ are both bounded on $L^2$, and the Schwartz space is dense in $L^2$. – David C. Ullrich Dec 1 '17 at 16:12
|
|
Find the derivative of the function 9 *; 9 (*)
Estimate purchases in 1996. (Hint: Cost of goods equals purchases plus beginning inventory minus ending inventory.) Use the percent of sales method to estimate funds needed in 1996 using the 1995 percentages. CASE 1 2 TOPEKA ADHESIVES (1) FINANCIAL FORECASTING Karen and Elizabeth Whatley are twins....
'Nhat Is the molecular formula cf this ccmpouna?CHzCralis Culiu Cillu Culus
'Nhat Is the molecular formula cf this ccmpouna? CHz Cralis Culiu Cillu Culus...
Refer to the Vitamin Suppi ements data given Question Construct 90% confidence interval for the ratio 01/02 of standard deviations of serum retinol concentrations for infants who received vitamin supplements and those who received placebo_Confidence Interval; We are 90% confident that the standard deviation of serum retinol concentrations for infants who receive vitamin supplements between(lower)and(upper) Iimes the standard deviation for infants who receive placebo_ (Round t0 decimal places:)
Refer to the Vitamin Suppi ements data given Question Construct 90% confidence interval for the ratio 01/02 of standard deviations of serum retinol concentrations for infants who received vitamin supplements and those who received placebo_ Confidence Interval; We are 90% confident that the standard ...
Atoms are incredibly smalldrop of water contains about .TiOmsWrite thisnumber without scientific notation:Dcscribc the location and behavior of clectrons in atoms:How does the mass Of an clectron compare with that of a proton?Give an example from the lecture slides that provides cvidence charge MAnerElements are defined by the number ofin cach atom The atomic numbcrof aluminum 13, which means that all aluminum atoms continIsotopes of an element contain the same nO. ofbut diffcrent no. ofThe thrc
Atoms are incredibly small drop of water contains about . TiOms Write this number without scientific notation: Dcscribc the location and behavior of clectrons in atoms: How does the mass Of an clectron compare with that of a proton? Give an example from the lecture slides that provides cvidence char...
FFind tho grentcd common Iaeld 0 12 4 70{kd4203) Find the least common multiple. 3 & 84) Find the least common muliipla: 6 & 1113, 8 36) Find the greatest common factor ot 21 & 305) Find the least common multiple: 10 & 80168) Find the least common multiple: 12 8 5 greatest = common factor of: 30 & 24 7) Find E the 9/30 44 3 (53at
FFind tho grentcd common Iaeld 0 12 4 70 {kd4 20 3) Find the least common multiple. 3 & 8 4) Find the least common muliipla: 6 & 11 13, 8 3 6) Find the greatest common factor ot 21 & 30 5) Find the least common multiple: 10 & 8 016 8) Find the least common multiple: 12 8 5 greatest =...
1.Identify the most current procedural coding system? 2.Identify the most current diagnostic coding classification system? 3.Describe...
1.Identify the most current procedural coding system? 2.Identify the most current diagnostic coding classification system? 3.Describe how to use the most current HCPCS level ll coding system...
WOLCJDuME peyiE4 1 [ @upew /24i suute 3 1 uuruts Jutd 1 o uiojuceA 1 U denann 4 1 abor €c nno 3 ? 31 61
WOLCJDuME peyiE4 1 [ @upew /24i suute 3 1 uuruts Jutd 1 o uiojuceA 1 U denann 4 1 abor €c nno 3 ? 3 1 6 1...
Cakrulate the diameter of a 10.15-cm length of tungsten filamentin & small lightbulb if its resistance i5 4 0 174m4 cunent flows through the filament find the potential difference across the source; the diameter of the wirc (in m)Potential diffetence Cooss the source in V
Cakrulate the diameter of a 10.15-cm length of tungsten filamentin & small lightbulb if its resistance i5 4 0 174m4 cunent flows through the filament find the potential difference across the source; the diameter of the wirc (in m) Potential diffetence Cooss the source in V...
How many quarks are there in a neutron?312
How many quarks are there in a neutron? 3 1 2...
I’m in hurry please Question2 0.75 pts It is known that R1-R2-R3-1 ㏀、С-1500 mF and Vs...
I’m in hurry please Question2 0.75 pts It is known that R1-R2-R3-1 ㏀、С-1500 mF and Vs 1 2 V for the RC circuit in Figure 2.2. where the switch was open for a long time and is closed at t -O sec 1 R1V2 Vc Ic Switch Vs 12 R2 Figure 2.2 The voltage V2tt) at t -1.0 sec is most...
Pa: ~Propose elegant synthesis of the following molecule starting from carbonyl compounds of = curbons or fewer as only souree of carbon. You may any other reagent necessary_HaC_CH,CHa
Pa: ~ Propose elegant synthesis of the following molecule starting from carbonyl compounds of = curbons or fewer as only souree of carbon. You may any other reagent necessary_ HaC_ CH, CHa...
Can you please answer this Wni er flows steadily at 2 ku/" threagh a 40.md Asauaning fully-developed fow, deterine a) the outlet temperature of the water t) the rate of heat transfer into the water Use the thermmophysieal properties of water at 37c ,-950 D-40 mm...
A cohesive network works well when the change is __________________. a. long-term. b. divergent. c. not...
A cohesive network works well when the change is __________________. a. long-term. b. divergent. c. not particularly divergent. d. dramatic....
A cannon fires projectile at an angle of 45 degrees The projectile just clears 20 m high wall 200 m away: What was the initial speed of the projectile?
A cannon fires projectile at an angle of 45 degrees The projectile just clears 20 m high wall 200 m away: What was the initial speed of the projectile?...
|
|
# How can I fit and then overlay 2 images which have different resolutions?
One of them has a mesh, which is supposed to overlay Layer1. I didn't find how to do this using OpenCV. I know that it is possible to change the image resolution; however, I don't know how to fit both images.
This is the main image: image1 - (2.6 MB)
I have this one, which has the correct mesh to the image above:
image2 - (26.4 MB)
The code to change resolution is more or less this:

```python
#!/usr/bin/python
import cv2
import numpy as np
from matplotlib import pyplot as plt

img1 = cv2.imread('image1.png')  # main image
img2 = cv2.imread('image2.png')  # mesh image

rows1, cols1, ch1 = img1.shape
rows2, cols2, ch2 = img2.shape

# fx scales the width (columns) and fy the height (rows),
# so each ratio must use the matching dimension
res = cv2.resize(img2, None, fx=1. * cols1 / cols2, fy=1. * rows1 / rows2,
                 interpolation=cv2.INTER_CUBIC)
```
check this answer here, addWeighted should do the trick ;-)
( 2016-05-27 11:27:04 -0500 )edit
Thanks for the tip @theodore. However, it could only help me with the overlay part. How can I fit the shapes properly? If you look at both images, they must fit. The mesh should be the "boundary".
( 2016-05-27 17:46:35 -0500 )edit
@marcoE please don't upload such a big files.
( 2016-05-28 07:23:17 -0500 )edit
@marcoE sorry, I thought that the two images were already aligned, my bad. Well, what you need is to find the transformation matrix between the two images, with the findHomography() function. In order to do that you need to find at least 4 points that correspond to each other in the two images, and then apply the transformation by using the extracted transformation matrix in the warpPerspective() function. Usually people use a feature keypoint extraction algorithm for that, like SURF, SIFT, etc., find the matched points, and then use them to extract the transformation matrix as I described above. You need these 4 points at least, so if you have them somehow from a previous process (contours, Canny, whatever...) on the two images you can use them; if not, then you need to extract them somehow. Looking at your two images, I do not think that extracting features with a keypoint algorithm will work. What I see that you can do is to extract the horizontal and vertical lines, use the endpoints as the needed points for the homography, and then apply the transformation as I described above.
In order to see what I am talking about, have a look at some examples here and here. If you search for warpPerspective align two images opencv on the web you will find some other examples as well.
@theodore thank you for your time, patience and kindness. It is difficult to do what I need to do :\ I need to do something like that: link text. This was made with ImageMagick. However, those images had lower resolution, and the mesh is different. I think with this image you are able to figure out what I need to do. I will look at the algorithm.
( 2016-05-28 05:58:08 -0500 )edit
I do not think that what you want to do is that hard. More or less you have the material you need; you just need to mix it in the proper way. Unfortunately, I do not have that much free time at the moment, otherwise I could help you with some code as well.
( 2016-05-28 06:27:12 -0500 )edit
@LBerger, yes, as I mentioned, these are different images with similarities in shape. I want to do something like this: http://i.stack.imgur.com/fyixS.jpg
( 2016-05-28 15:00:29 -0500 )edit
@LBerger , @theodore , @sturkmen , any brillant idea?
( 2016-05-30 05:09:12 -0500 )edit
@LBerger the image with the mesh is a render created using matplotlib. From this image, using potrace, I got this vector, which allows me to get the mesh using the trimesh Python module. From the stl, using matplotlib to render, I got the mesh that you've seen.
( 2016-05-30 07:46:43 -0500 )edit
This is not easy or trivial :\
( 2016-05-31 06:20:22 -0500 )edit
@LBerger it is not really an alignment. If you look here, you'd notice the mesh (in red) is the internal contour of the overlaid picture. This example is with other images and it was done with ImageMagick.
( 2016-05-31 07:34:37 -0500 )edit
In your example you have two images R and B (red and black). I think you want to minimize |R(x,y) - B(ax+x0, by+y0)|^2 with respect to the unknowns a, b, x0, y0.
( 2016-05-31 08:03:57 -0500 )edit
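A minimal NumPy sketch of that least-squares idea, simplified to integer translations only (a = b = 1) so a brute-force search suffices; all names here are hypothetical:

```python
import numpy as np

def ssd(R, B, a, b, x0, y0):
    # Sum of squared differences between R(x, y) and B(a*x + x0, b*y + y0),
    # using nearest-neighbour sampling and ignoring out-of-bounds samples.
    h, w = R.shape
    ys, xs = np.mgrid[0:h, 0:w]
    bx = np.round(a * xs + x0).astype(int)
    by = np.round(b * ys + y0).astype(int)
    valid = (bx >= 0) & (bx < B.shape[1]) & (by >= 0) & (by < B.shape[0])
    diff = R[valid].astype(float) - B[by[valid], bx[valid]]
    return float(np.sum(diff ** 2))

def fit_translation(R, B, max_shift=3):
    # Brute-force search for the integer shift (x0, y0) minimising the SSD.
    best = None
    for x0 in range(-max_shift, max_shift + 1):
        for y0 in range(-max_shift, max_shift + 1):
            err = ssd(R, B, 1.0, 1.0, x0, y0)
            if best is None or err < best[0]:
                best = (err, x0, y0)
    return best[1], best[2]
```

A real implementation would extend the search to the scale factors a and b, or hand the same objective to a continuous optimiser.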
@LBerger I think I'm getting what you're telling me. However, I don't know if it solves the alignment problem. I was thinking about the problem: if points (for instance, hole centers) are added to the primary image (the one with the boundaries, which is later used to generate the mesh), could they be matched later in the mesh image? If the points are the same in all images (all the images have the hole centers), could it solve the problem? Like a plot layer to guide the others. I don't know how to do it, I'm just thinking from the concept point of view.
( 2016-05-31 15:26:14 -0500 )edit
Something like this : holes with center
( 2016-06-01 09:21:32 -0500 )edit
## Stats
Asked: 2016-05-27 10:12:25 -0500
Seen: 2,378 times
Last updated: May 28 '16
|
|
# Bismuth Subsalicylate
## Introduction
Bismuth Subsalicylate is a commercial drug that is sold over the counter, meaning it can be sold without any prescription. It has the chemical formula ${C_7H_5BiO_4}$. It was developed over 100 years ago, in 1901, for the treatment of cholera infection. In 1939 it was first approved by the FDA for the treatment of nausea, diarrhoea, and gastrointestinal infections. It is an insoluble salt of salicylic acid and acts as an antibacterial and antidiarrheal agent. It is used for the treatment of travellers' diarrhoea, nausea, stomach infections, heartburn, and indigestion, and as an antacid.
## What is Bismuth Subsalicylate
It is an insoluble salt of salicylic acid that is linked with a trivalent bismuth cation. A molecule of Bismuth Subsalicylate is composed of 58% bismuth and 42% salicylate by weight. It is also a component of HELIDAC Therapy (Bismuth Subsalicylate + Metronidazole + Tetracycline hydrochloride), which is used for the eradication of H. pylori in patients suffering from infection and ulcers. It is also known by the names Bismuth oxide salicylate, Pink Bismuth, and Bismuth oxysalicylate.
## Salicylic Acid
Salicylic acid is a beta hydroxy acid that is obtained from plants. It is a natural, odourless, white to light tan solid that dissolves slowly in water. It has anti-inflammatory, antibacterial, and exfoliating properties. The chemical formula of salicylic acid is ${C_7H_6O_3}$ or ${HOC_6H_4COOH}$. It is also known as 2-hydroxybenzoic acid, o-hydroxybenzoic acid, or 2-carboxyphenol. It is used for the treatment of acne, skin diseases, dandruff, seborrheic dermatitis, common warts, corns, psoriasis, etc. It may also cause allergy-like effects such as peeling, dryness, redness, or burning of the skin.
Images Coming soon
Salicylic Acid
## Structure of Bismuth Subsalicylate
The molecular formula of Bismuth oxysalicylate is ${C_7H_5BiO_4}$. On hydrolysis, pink bismuth changes into bismuth salicylate ${Bi(C_6H_4(OH)CO_2)_3}$. Let's see the chemical structure of bismuth oxysalicylate.
Images Coming soon
Bismuth Subsalicylate
## Physical and Chemical Properties
Bismuth subsalicylate is a white crystalline powder or fluffy white solid. It has the following physical and chemical properties.
Physical Properties

| Property | Attribute |
| --- | --- |
| Colour | White |
| Appearance | Crystalline or fluffy solid |
| Odour | Odourless |
| Molecular formula | ${C_7H_5BiO_4}$ |
| Molecular weight | 362.09 |
| Taste | Tasteless |
| Melting point | >350°C |
| Density | ${0.43\ g/cm^3}$ |
| Average mass | 362.093 Da |
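As a quick consistency check (using standard atomic masses; this calculation is not part of the original tutorial), the molar mass of ${C_7H_5BiO_4}$ works out to the value listed above:

```latex
M(\mathrm{C_7H_5BiO_4}) \approx 7(12.011) + 5(1.008) + 208.980 + 4(15.999)
                        \approx 84.08 + 5.04 + 208.98 + 64.00
                        \approx 362.09\ \mathrm{g/mol}
```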
## Chemical Properties
It has the following chemical properties.
• Solubility: It is insoluble in water and alcohol, but soluble in acids and alkali solutions.
• The reaction of bismuth subsalicylate with hydrochloric acid: when bismuth subsalicylate reacts with hydrochloric acid, it forms salicylic acid and bismuth oxychloride.
$${C_7H_5BiO_4+HCl\rightarrow C_7H_6O_3+BiOCl}$$
• It is decomposed by boiling water.
## What is Bismuth Subsalicylate Used for?
Pepto-Bismol, a brand name for bismuth subsalicylate, is used for the treatment of many gastrointestinal infections. It is the only bismuth salt that is allowed to be sold over the counter. Let's see the uses of bismuth subsalicylate.
• It is used to treat travellers' diarrhoea.
• It is used for treating indigestion, diarrhoea, heartburn, nausea, stomach infections, and peptic ulcers.
• It is used to treat the infection caused by Helicobacter pylori.
• It reduces prostaglandin formation by inhibiting cyclooxygenase, which induces inflammation and hypermotility.
• It helps in the reabsorption of sodium, chlorides, and fluids.
• It inhibits intestinal secretions.
Although it is generally considered safe, overconsumption of bismuth subsalicylate can result in nausea and black stool.
• It can lead to the blacking of the tongue, teeth, fatigue, and mood changes.
• In severe cases, it can cause mental disorders or neurotoxicity.
• Other adverse effects are hearing loss, muscle spasms, headaches, and slurred speech.
• If you are allergic to aspirin, salicylates, ibuprofen, naproxen, or celecoxib (NSAIDs) then it may cause allergic reactions and other issues.
### Dosage
Although it is an OTC drug, it still has many side effects, so read the instructions carefully or consult your doctor before taking bismuth subsalicylate.
• If it is in suspension form shake it before use.
• If it is in chewable tablet form then chew it thoroughly before swallowing.
• Do not take more than 8 doses of this medicine in a day.
## What Medicines Contain Bismuth?
Bismuth is used extensively as a medicine for the treatment of diseases like gastrointestinal ailments, diarrhoea, etc. It also has anti-microbial, anti-cancer, and anti-leishmanial properties. Here is a list of a few medicines in which bismuth is used as an active compound.
| Drug Name | Uses |
| --- | --- |
| Pepto-Bismol | Used for the treatment of gastritis and dyspepsia. |
| Colloidal Bismuth Subcitrate, CBS (De-Nol) | Used for the treatment of ulcers. |
| Pylorid (Ranitidine Bismuth Citrate, RBC) | Used for the treatment of H. pylori infections. |
| Bismuth Subnitrate, Bismuth Subgallate | Used as an antacid, to deodorise flatulence, and in haemostasis surgery. |
## Conclusion
In the above tutorial, we have studied that bismuth subsalicylate is an insoluble salt of salicylic acid. It is an odourless, white crystalline solid. It is widely used as a clinical drug for treating various diseases like gastric infections, diarrhoea, nausea, heartburn, etc. It has anti-bacterial, anti-cancer, anti-microbial, and anti-fungal properties, and bismuth compounds have also been used for the treatment of syphilis. It is used as a component in medicines like bismuth subgallate, bismuth subnitrate, Pylorid, Pepto-Bismol, etc. Although it is an over-the-counter drug and is widely used without prescription, it still has side effects: minor ones like dark stool and darkening of the teeth and tongue, and in severe cases mental disorders, blurred vision, fatigue, muscle spasms, etc.
## FAQs
Q1. What does Pepto-Bismuth do?
Ans. It decreases the flow of fluids and reabsorbs sodium and other electrolytes in the intestine. It kills the bacteria causing diarrhoea.
Q2. What dosage of bismuth subsalicylate is given for diarrhoea?
Ans. For a child of age between 6 and 8, around 2 to 3 tablets or a 10 ml to 15 ml dose is given. For an adult, 2 tablets or a 30 ml dose is given. Although doctors may prescribe doses for children, it is not recommended for children below 12 years of age.
Q3. Is bismuth subsalicylate given with other medicines?
Ans. It should not be given with certain other medicines, as it interacts with drugs such as aspirin and aspartame, so it is recommended to consult a doctor before taking this medicine alongside others.
Q4. What happens if you take too much dose of Pepto-Bismol?
Ans. If you have taken an overdose of Pepto-Bismol, it can cause various side effects like fatigue, buzzing in the ears, muscle spasms, mental confusion, etc.
Q5. How does Pepto-Bismol work in the gastrointestinal tract?
Ans. In the stomach, Pepto-Bismol reacts with hydrochloric acid and forms bismuth oxychloride and salicylic acid. The salicylic acid is absorbed by the body quickly and the bismuth oxychloride is excreted with the faeces.
$${C_7H_5BiO_4+HCl\rightarrow C_7H_6O_3+BiOCl}$$
## Reference
• Bismuth Subsalicylate Oral: Uses, Side Effects, Interactions, Pictures, Warnings & Dosing - WebMD. Webmd.com. (2022). Retrieved 19 July 2022, from: https://www.webmd.com
• Dajani, E., Thomas G. Shahwan (2004), in Encyclopedia of Gastroenterology, Retrieved 19 July 2022, from: https://www.sciencedirect.com
Updated on 13-Oct-2022 11:19:47
|
|
$f,f',…,f^{(j)}$ is $\mathbb C$-linearly independent if $f$ is a modular form
Conjecture: Let $f$ be a modular form of weight $k$ and $j$ a strictly positive integer. Then the set $f,f',...,f^{(j)}$ is $\mathbb C$-linearly independent in $A$.
Is that conjecture true or false? Do you know a counterexample or a proof?
Notation: Let $\Pi=\{x+iy\in \mathbb C|y>0\}$ be the upper half-plane, $Hol(\Pi)=\{f:\Pi\to\mathbb C|f \text{ is holomorphic}\}$, $M_k:=M_k(SL_2(\mathbb Z))$ be the space of modular forms of weight $k$ for $SL_2(\mathbb Z)$, $M_*:=\bigoplus_kM_k$, and finally let $A=Span(f^{(j)}|f\in M_* \text{ and } j\in \mathbb N)$ be the subalgebra of $Hol(\Pi)$ spanned (over $\mathbb C$) by the elements $f^{(j)}$ with $f\in M_*$ and $j\in \mathbb N$, where $f^{(0)}=f$, $f^{(1)}=f'=\frac{1}{2\pi i}\frac{df}{dz}$,...
|
|
# Homework Help: An Odd Radian Question.
1. Aug 25, 2009
### Venito
1. The problem statement, all variables and given/known data
Consider the function y = sin 2x + cos 3x.
a.] Find a value for y if x = pye [or however it is spelled: 3.141592654].
b.] Find y if x = 0.3 radians.
c.] What is the period of this function? Show how you obtain this answer.
3. The attempt at a solution
No attempt, since I have never come across these types of questions in radians before.
2. Aug 25, 2009
### tiny-tim
Hi Venito!
(have a pi: π )
if you're not used to radians, do it in degrees, and convert using the 180/π factor.
3. Aug 25, 2009
### Venito
I am used to radians, just not these. So I'll use degrees. Okay, I will give it a try.
Thanks.
4. Aug 25, 2009
### Staff: Mentor
The period of sin(2x) is π radians. The period of cos(3x) is 2π/3 radians. The period of the sum of these two functions is the smallest interval that is evenly divisible by both π and 2π/3.
By the way, we spell the name of this Greek letter as pi. I'm guessing that you're Italian, and it is spelled the same way in Italian.
5. Aug 25, 2009
### HallsofIvy
You certainly should know that $\sin(2\pi)= 0$ and $\cos(3\pi)= -1$.
This is not going to be any simple value. Use a calculator.
What is the period of sin(2x)? What is the period of cos(3x)? What is the least common multiple of those two periods? Do you see why that is the period of y?
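A quick numerical check of these answers (a small Python sketch, not part of the original thread):

```python
import math

def y(x):
    # y = sin(2x) + cos(3x)
    return math.sin(2 * x) + math.cos(3 * x)

# (a) x = pi: sin(2*pi) = 0 and cos(3*pi) = cos(pi) = -1, so y(pi) = -1
# (b) x = 0.3 rad: just evaluate y(0.3) numerically
# (c) sin(2x) has period pi and cos(3x) has period 2*pi/3; the smallest
#     interval that both divide evenly is 2*pi, so y(x + 2*pi) == y(x).
assert abs(y(math.pi) - (-1)) < 1e-9
assert all(abs(y(x + 2 * math.pi) - y(x)) < 1e-9
           for x in [0.1 * k for k in range(40)])
```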
|
|
# 2016 Moderator Election Q&A - Questionnaire
In connection with the moderator elections, we are holding a Q&A thread for the candidates. Questions collected from an earlier thread have been compiled into this one, which shall now serve as the space for the candidates to provide their answers. Not every question was compiled - as noted, we only selected the top 8 questions as submitted by the community, plus 2 pre-set questions from us.
As part of an ongoing test, we're doing the questionnaire at the same time as nominations are being run. Please do not respond to this question unless you already have submitted your nomination.
Once all the answers have been compiled, this will serve as a transcript for voters to view the thoughts of their candidates, and will be appropriately linked in the Election page.
Good luck to all of the candidates!
Oh, and when you've completed your answer, please provide a link to it after this blurb here, before that set of three dashes. Please leave the list of links in the order of submission.
To save scrolling here are links to the submissions from each candidate (in order of submission):
1. How would you deal with a user who produced a steady stream of answers with mixed or negative scores, but still manages to get thousands of rep (because upvotes count much more than downvotes)?
2. As a moderator, what would be your highest priority between "Growth of the site" and "Quality of the questions and answers of the site"?
3. For some moderator actions, the current mods wait for a consensus from the whole group before taking the action. We have varying preferences about when to do this and when to just take action directly without checking with others. Which mod actions would you wait for consensus on, and which ones would you just do without checking? If you're not sure about a certain action, are you more likely to be conservative and wait for consensus, or be proactive and just do it?
4. How much time do you expect you can commit to moderating the site? (A couple hours a month? Ten hours a week? Ten hours a day? Take a good guess) Also, do you anticipate any reason why that amount of time would significantly decrease in the future?
5. A diamond will be attached to everything you say and have said in the past, including questions, answers and comments and chatroom messages. Everything you will do will be seen under a different light. Do you feel like all the material you've posted on the site reflects that you would be a good moderator? Will becoming a moderator induce significant changes in what you do—and refrain from doing—on the site (outside the obvious addition of moderator duties)?
6. Where do you think the boundaries of our scope should lie, in terms of topic? In other words, where do you draw the line between "physics" and other topics, and how do you feel about questions that lie in the various grey areas between physics and other disciplines? Do you generally feel that our scope should be broader or narrower than it is now? How are your views on these topics likely to affect your actions as a moderator?
7. In what way do you feel that being a moderator will make you more effective as opposed to simply reaching 10k or 20k rep?
8. Where do you think the boundaries of our scope should lie, in terms of level? That is, do you see this as primarily a site for physics students, or for physics researchers, or both equally? How is your stance on this likely to affect your actions as a moderator?
9. How would you deal with a user who produced a steady stream of valuable answers, but tends to generate a large number of arguments/flags from comments?
10. How would you handle a situation where another mod closed/deleted/etc a question that you feel shouldn't have been?
(Side note -- thanks to ACuriousMind's post for being the first and setting a format)
1. How would you deal with a user who produced a steady stream of answers with mixed or negative scores, but still manages to get thousands of rep (because upvotes count much more than downvotes)?
I understand such users can, in mild cases, generate resentment about the way the system works, and in more serious cases, gain access to the tools awarded high-reputation users that may be inappropriately used and/or abused. The disparity between reputation for upvotes and downvotes is a question I have raised previously for discussion.
The discussion at the time resulted in something that I still believe is true: a majority of high reputation users are not doing it for the reputation. They are doing it to educate others and to share their passion. So while some users may be able to game the system by producing mixed content, it will be relatively few. And I feel it would take a long time before they earned a high enough reputation to really do frustrating, site-harming levels of damage.
Long before that point, I would expect such users to either get bored or to mature to the point that it is no longer a problem. I don't think moderator intervention is required until actual damage is done, at which point the StackExchange platform and policies allow easy reversal of the damage and tools to remedy the situation with the user in question.
2. As a moderator, what would be your highest priority between "Growth of the site" and "Quality of the questions and answers of the site"?
On some level, I challenge the premise of the question. I don't see that the two are diametrically opposed -- I believe the two go hand-in-hand. Quality questions and answers come from returning users, and the site grows by providing quality questions and answers so that users do return.
So, in that frame of reference, my focus would be on the quality of the site. By providing a wealth of information on a wide-variety of topics in physics, people will be driven to the site. And those who return will bring questions (and hopefully answers) that challenge and engage the users already here.
All that said, the role of a moderator in this process is relatively limited. Ultimately, moderators are an insignificant percentage of the content generators/handlers for the site -- the users must be engaged in quality and growth. To that end, as a moderator, my priority is ensuring that quality is maintained in a form that is welcoming and encouraging to users who want to be active participants in the goals of the site. Contrary to many of the debates the site has had previously, quality and friendliness are not opposites; rudeness is not required to communicate one's point effectively. This is, of course, balanced by the limited content length in comments (this is not a forum, after all) and the impersonal nature of the internet that may lead one to feel things are more personal and rude than they actually are.
3. For some moderator actions, the current mods wait for a consensus from the whole group before taking the action. We have varying preferences about when to do this and when to just take action directly without checking with others. Which mod actions would you wait for consensus on, and which ones would you just do without checking? If you're not sure about a certain action, are you more likely to be conservative and wait for consensus, or be proactive and just do it?
Consensus is essential when there is uncertainty. Or at the very least, advice and guidance from those with more experience is essential.
Some actions create obvious situations where action must be taken. I can't enumerate a list here, but I will paraphrase US Supreme Court Justice Potter Stewart (read about the case) and Lord-Justice Stuart-Smith (the so-called Elephant Test) -- I'll know it when I see it. Spam, derogatory and offensive language, threats, etc. are all obvious and require immediate action.
More subtle topics, particularly those involving returning users, require some discussion and consensus building. Particularly because it may be a one-off instance or it may be a repeat problem, something that a new moderator (or even an existing one) may not be aware of.
4. How much time do you expect you can commit to moderating the site? (A couple hours a month? Ten hours a week? Ten hours a day? Take a good guess) Also, do you anticipate any reason why that amount of time would significantly decrease in the future?
Simply put, as much as is needed. It would be hard to estimate how much time I actively spend on the site currently, I generally pop in and hang out for 10-15 minutes at a time, every few hours. As a result, it would be easy to catch things as they occur and deal with it accordingly. Naturally, more important or time consuming tasks can be completed as needed with very few scheduling constraints.
In the next several months, I will graduate. Given my current position in my research lab, virtually any job I take will be less time-consuming than what I am doing now. I may not be able to spare 10-15 minutes every few hours during the workday, but I will be able to dedicate longer stretches of time as needed.
5. A diamond will be attached to everything you say and have said in the past, including questions, answers and comments and chatroom messages. Everything you will do will be seen under a different light. Do you feel like all the material you've posted on the site reflects that you would be a good moderator? Will becoming a moderator induce significant changes in what you do—and refrain from doing—on the site (outside the obvious addition of moderator duties)?
Obviously I am biased in my perception, but I feel that all of my material on the site has been constructive and, most importantly, something I stand behind. My activity on Meta has been focused on formulating policies and helping to define the role of the site. I haven't won every argument, but I feel that I have always been civil and level-headed, and have supported the results even when I don't agree with them. I have also engaged in discussions with users who, at various times, have rubbed others the wrong way or been rubbed the wrong way themselves. In the cases I can recall participating in, I have tried to remain objective and impersonal.
Rather than champion myself, I would encourage everybody to review my participation on the site (and for that matter, on the entire StackExchange network). I will happily discuss any content, past or present.
The biggest change in day-to-day activity is refraining from pulling the trigger on the close vote, since it is now binding. As a regular user, I have no problem expressing my opinion about whether a question is off-topic -- closing a question takes 4 others to agree, so if I'm wrong as a regular user, no harm done. As a moderator, that luxury no longer exists. On the other hand, I don't see putting a question "on hold" as a death sentence or a condemnation of the user or the topic. It is a chance to improve the question in a way that is fair to those who will spend time answering (nobody likes writing an elaborate answer only to have the question change drastically).
1. Where do you think the boundaries of our scope should lie, in terms of topic? In other words, where do you draw the line between "physics" and other topics, and how do you feel about questions that lie in the various grey areas between physics and other disciplines? Do you generally feel that our scope should be broader or narrower than it is now? How are your views on these topics likely to affect your actions as a moderator?
I feel this area, more than any others, is the best use of my skills as part of the moderation team. I have started several conversations directly regarding this area, as well as participating extensively in related areas (including the dreaded "homework" topic).
I firmly believe that we should not look at Physics.SE as isolated from the other sites in the StackExchange network. The entire network is like a library -- there is no reason to go to the physics shelf if I have a question about chemistry.
Grey areas are okay, provided the content is asking for an underlying explanation of the physics. My favorite punchline whenever this comes up is this: This is a site of physics, not physicists -- asking for a good apple pie recipe is not on-topic just because you are a physicist. Likewise, how to take a derivative is not on topic just because it came up in a mechanics class.
As a moderator, I would continue to bring up topics for discussion that may reside in these gray areas. I want to make sure the community can provide clarity for what it expects, and I enjoy leading those discussions.
1. In what way do you feel that being a moderator will make you more effective as opposed to simply reaching 10k or 20k rep?
There are two simple answers: first, I will become more effective because I will get access to the moderation tools immediately; second, many flags and interactions with users require moderators and are not available to high-rep users.
My reputation-earning participation on the site has always been in a small subset of the community: the questions. Because it is a small subset, climbing from 10k to 20k would take a long time, so I would be able to contribute more, and immediately, as a moderator.
1. Where do you think the boundaries of our scope should lie, in terms of level? That is, do you see this as primarily a site for physics students, or for physics researchers, or both equally? How is your stance on this likely to affect your actions as a moderator?
This is a site for researchers of physics and students of physics. This is subtly different from physics students and physics researchers. I feel this site is best served by catering to those who require and can share knowledge of the many areas of physics, and not by catering to those who happen to be students in a physics class.
A business major desperately trying to pass a kinematics class is a physics student and not the target audience. But an engineering student who needs to understand why a bridge may collapse is welcome (provided the questions and answers are about the underlying physics) because such a person is a student of physics.
Likewise, we are not limited to physics researchers (those who research topics one would classically consider physics). A chemist who would like to understand the statistical mechanics underpinning of chemical reactions is welcome here.
Now, as a moderator, I don't see how my views directly influence my actions. The community decides the scope of the site and it may grow, shrink, or otherwise adapt over time. My position as moderator is to lead the discussions, help formulate the resulting policies, and help educate users in the enforcement of the policies. The community is self-policing and will enforce the standards set forth.
1. How would you deal with a user who produced a steady stream of valuable answers, but tends to generate a large number of arguments/flags from comments?
It takes a broad view of a user's activity to understand whether there is a problem or not. We've all had bad days, so maybe the comments are out of character. It is also possible that, given the limited length of comments, some things come across as rude/abrupt because they are succinct. This is generally the case when relatively new users feel slighted because they expect the site to behave like a traditional forum.
The nature of the flags must also be considered. No user, no matter how knowledgeable, no matter how much reputation, may be threatening, post spam, nor make extremely offensive statements. To a lesser extent, ad hominem attacks are not welcome either. Content should be criticized, not people. And even in the criticism of content, it doesn't need to be rude to communicate effectively.
All of that said, the approach would be to build up a good understanding regarding the track record of the user in question and try to determine the motivations. It may be temporary, it may be targeted (the users in question may have a history together), it may be an overly-sensitive person flagging, or it may be a legitimate problem. And in some cases, it could be a combination of all of the above. Once the problem is understood, it can be approached accordingly.
1. How would you handle a situation where another mod closed/deleted/etc a question that you feel shouldn't have been?
The first step is to open a dialogue. Just because I feel something shouldn't have happened, doesn't mean that I am correct. It also doesn't mean that I am impartial -- perhaps I feel something shouldn't have been closed, but the policies are quite clear that it should have been.
I don't think that moderators should create policy, but they do act as arbiters of policy and as a result, their actions may lead to policies expanding or contracting without it ever being formally written down. As this happens, it is important for other moderators (and especially for the users) to start a dialogue when they feel the actions have extended beyond the scope of the written policy. Maybe the result of the discussion is the actions should be formally written down because it is better for the site; perhaps the result is that the policy is clarified in an opposite direction of the actions taken. In either case, the role of the other moderators is to lead a discussion with the community.
This site is strong enough now that I would be hard pressed to imagine a truly egregious action would not be called to attention by regular users. And I hope that is the case -- the users are the ultimate resource of the site and the wishes of the community need to be respected. I would encourage users to discuss the rules and moderation of the site openly -- and most importantly -- in a civil and constructive fashion.
• Thanks for challenging #2. I'm a heavy sec.SE member and can witness to the fact that some 80% of the best questions there come from new and inexperienced users. This is because sec.SE covers a field that is not well settled (policies change with new discoveries) and produces discussion. That is not very different from PSE, where new research brings a new insight quite often. – grochmal Sep 27 '16 at 13:06
• How we deal with these new user questions is by editing them ourselves, or, quite often, by a moderator stepping in and editing the question. All this does not mean that most questions by new users are good, they aren't, but we work towards getting them better (unless the question is completely unsalvageable). So yeah, my penny to this discussion is: You will not have good questions if you do not allow questions in, and then look after them. You have my vote. – grochmal Sep 27 '16 at 13:12
1. How would you deal with a user who produced a steady stream of answers with mixed or negative scores, but still manages to get thousands of rep (because upvotes count much more than downvotes)?
That really depends on the specifics of the situation. Is the user trying their best? Are they a form of technically proficient troll? Is it simply a matter of poor language skills? It's difficult to judge at a glance. I'd probably have to go through several of their posts and try to establish a trend. Even then, the system itself allows for this to happen. Obviously, you need to keep an eye out for suspicious voting patterns and make sure there are no sock puppets. However, in the general case where it's purely innocent, there's not much that can be done. Perhaps I could encourage the user to consider what they write more carefully, properly format their posts, and include references wherever possible. However, if they are getting enough upvotes to sustain a score in the thousands, then I'm not going to take negative action against them unless they do something directly against policy and regulations.
1. As a moderator, what would be your highest priority between "Growth of the site" and "Quality of the questions and answers of the site"?
My highest priority would definitely be quality of questions and answers. In my opinion, our size is large enough to be sustainable for the long term. Furthermore, as a moderator, I feel that maintaining quality and standards is a necessary responsibility of the role. I could not sit idly by and let the quality of posts on this site slip all in the name of allowing it to grow larger.
1. For some moderator actions, the current mods wait for a consensus from the whole group before taking the action. We have varying preferences about when to do this and when to just take action directly without checking with others. Which mod actions would you wait for consensus on, and which ones would you just do without checking? If you're not sure about a certain action, are you more likely to be conservative and wait for consensus, or be proactive and just do it?
At first, I think I'd be generally unsure of actions. I'd probably ask more established mods for advice. Overall, I'd say I'm in the middle. For small tasks or things where the course of action seems fairly obvious, I'm likely to go ahead and take action. But when it comes to an action with significant ramifications or non-refundable results, I'd probably wait for consensus. I generally like to give everything proper consideration. I like to get all the facts I can before deciding, and I always try to ensure I've considered the perspective of whomever I can. Being a moderator is about being moderate; impartial; reasonably stand-off-ish. But when all is said and done, if an action must be taken, I'm the person that won't hesitate to take it, won't hesitate to admit when I'm wrong or don't know, and will ask for help or consensus when I need to.
1. How much time do you expect you can commit to moderating the site? (A couple hours a month? Ten hours a week? Ten hours a day? Take a good guess) Also, do you anticipate any reason why that amount of time would significantly decrease in the future?
I'm usually not around on the weekends. I'm off and on most of the day otherwise and I can try to give a solid 1-3 hours at the beginning of my day (I'm on eastern time, fyi). It's not much, but that's all I can offer for sure. I can't imagine any reason why that should diminish in the future. I'll always have time at my computer first thing in the morning. In fact, it might increase in the future. Really depends how my life plays out.
1. A diamond will be attached to everything you say and have said in the past, including questions, answers and comments and chatroom messages. Everything you will do will be seen under a different light. Do you feel like all the material you've posted on the site reflects that you would be a good moderator? Will becoming a moderator induce significant changes in what you do—and refrain from doing—on the site (outside the obvious addition of moderator duties)?
Yeah, I think I've been fairly impartial and diplomatic so far. The added responsibility means I'd probably be a bit more serious (dang it) and I'd probably stop complaining about users (all 3 times I think I've ever done that). But seeing as I usually end up telling people "This is nothing personal, this post is simply not a good fit for our site" or something like that, I'd say my tendencies are already fairly in-line with moderating. Although, I won't lie, I'm definitely going to joke around about my having a diamond on my name. So be forewarned, if you don't have a sense of humour and can't understand that me joking about my omnipotence doesn't actually mean I'd ever consider abusing power, then 1) you should probably avoid all contact with humans in the future and 2) be prepared to be offended.
1. Where do you think the boundaries of our scope should lie, in terms of topic? In other words, where do you draw the line between "physics" and other topics, and how do you feel about questions that lie in the various grey areas between physics and other disciplines? Do you generally feel that our scope should be broader or narrower than it is now? How are your views on these topics likely to affect your actions as a moderator?
The question of our scope and topics is something that is user-defined. Therefore, it falls to the community to decide the appropriate scope for the site. What I believe is irrelevant. I can offer my opinion of what I think the scope should be on meta, but once decided, all I have to do is enforce it. If a question belongs in another discipline according to our community, I'll move it. If not, it stays. If there's a grey zone and I'm not sure what to do, I'll ask for input from others. It really isn't my call how the site should evolve. That's the community's choice. I would just moderate.
1. In what way do you feel that being a moderator will make you more effective as opposed to simply reaching 10k or 20k rep?
Tough to say. It'll certainly give my word more weight (which, as King of all Jims, says a lot). I suppose I'd also be able to do something about obvious sock-puppets more than simply following them or flagging for a mod. I'd also be able to read the new posts by new users to old questions and actually be able to get rid of the random garbage ones (like "PHYSX SUX!! LOLOLOL!!1!") rather than just flagging them. At the moment, however, I'm not entirely sure what all the moderator tools are yet. I have no doubt they'll improve my ability to moderate, much like how reaching 10k rep significantly improved that ability; I just don't know in what way yet.
1. Where do you think the boundaries of our scope should lie, in terms of level? That is, do you see this as primarily a site for physics students, or for physics researchers, or both equally? How is your stance on this likely to affect your actions as a moderator?
I think I adequately covered this question under #6. This even starts out the exact same way as #6. I'm not answering the same question twice.
This question has been marked as a duplicate by: Jim
1. How would you deal with a user who produced a steady stream of valuable answers, but tends to generate a large number of arguments/flags from comments?
Life isn't black and white. Things are complex. Stop asking me what I'd do in a situation without any specific details whatsoever. I have no idea how I would handle this yet. Is the person genuinely passionate about an obscure realm of physics? Are they trolling? All manner of details need to be discovered. If they produce valuable answers, then they're worth keeping around. Comments are transient and mostly don't matter. Remind them to be polite and civil and keep my actions within protocol. Consult the Jedi Moderator Council if I really can't decide what should be done but am sure that an action should be taken.
1. How would you handle a situation where another mod closed/deleted/etc a question that you feel shouldn't have been?
Take the other mod aside (likely to a private chat or something) and discuss with them why I think they were mistaken. You don't want to show mod in-fighting in front of our subjects. Hash it out with them, agree on what should be done, then do it. If we can't agree, then call in another mod for a tie-breaker. Or open it up to the community, depending on the situation. I'm not going to subvert the authority of other mods. I respect that they may have a perspective I lack and I'm not going to get into a war with them. Things can be handled with words. If I'm overruled, then I can accept that.
• +1; May the Force be with you ;) – user36790 Sep 27 '16 at 14:10
1. How would you deal with a user who produced a steady stream of answers with mixed or negative scores, but still manages to get thousands of rep (because upvotes count much more than downvotes)?
In the first place, I'd trust the community systems to sort things out. Physics.SE doesn't (so far) have issues with controversial high-vote, zero-score answers. Controversial answers in this community tend to instead attract more clearly-written answers from other users. So a string of low-quality contributions is, at best, an inefficient way to gain reputation. So to some extent I challenge the premise of the question: this doesn't really seem like an issue for a moderator.
The moderation issue arises if a moderate- or high-rep user who provides consistently low-quality contributions also annoys or drives away users who produce high-quality content. Then we have to have a conversation about what the people involved are trying to achieve and what sorts of behaviors can and can't be tolerated. But that's a conversation that can be had out in the open, at a relatively slow pace, and the first few times it happens I'll be able to draw on the experience of the rest of the moderation team.
1. As a moderator, what would be your highest priority between "Growth of the site" and "Quality of the questions and answers of the site"?
Quality drives growth, not the other way around. There are attractive, high-quality questions and answers at all levels of expertise.
1. For some moderator actions, the current mods wait for a consensus from the whole group before taking the action. We have varying preferences about when to do this and when to just take action directly without checking with others. Which mod actions would you wait for consensus on, and which ones would you just do without checking? If you're not sure about a certain action, are you more likely to be conservative and wait for consensus, or be proactive and just do it?
I expect that I'll start off cautious and become more proactive as I grow into the role. For edge cases I'll take the (in)action that seems likely to cause the least damage if it's wrong. The most obvious cases where I might have a doubt and not want to wait for consensus would be if some sort of comment-thread discussion goes off the rails into argument territory; the right response is to move it to a chatroom and try to de-escalate.
1. How much time do you expect you can commit to moderating the site? (A couple hours a month? Ten hours a week? Ten hours a day? Take a good guess) Also, do you anticipate any reason why that amount of time would significantly decrease in the future?
I expect to contribute a half-hour to an hour most days.
1. A diamond will be attached to everything you say and have said in the past, including questions, answers and comments and chatroom messages. Everything you will do will be seen under a different light. Do you feel like all the material you've posted on the site reflects that you would be a good moderator? Will becoming a moderator induce significant changes in what you do—and refrain from doing—on the site (outside the obvious addition of moderator duties)?
I'm proud of my contributions to this site in the past. I have made a point, when reviewing questions, to be welcoming and encouraging to new users and to encourage high-quality answers, especially from commenters. I see this role continuing whether I'm chosen as a moderator or not.
1. Where do you think the boundaries of our scope should lie, in terms of topic? In other words, where do you draw the line between "physics" and other topics, and how do you feel about questions that lie in the various grey areas between physics and other disciplines? Do you generally feel that our scope should be broader or narrower than it is now? How are your views on these topics likely to affect your actions as a moderator?
I tend to swing broader on this question than some other contributors do. There are plenty of questions asked here that have one foot in physics and another foot in biology or chemistry or engineering or information theory or some other interesting discipline that get closed as "not physics" but which could benefit from an insightful answer with a physicist's perspective. Currently I vote to keep these open when I see them in the close queue and vote to reopen them if I notice they've been closed. As a moderator I would want to be more cautious about unilaterally reopening these sorts of questions; I suppose I would work harder to find them a better home elsewhere on the SE network, and reopen only if this really is the best home.
1. In what way do you feel that being a moderator will make you more effective as opposed to simply reaching 10k or 20k rep?
See the previous paragraph for an example.
1. Where do you think the boundaries of our scope should lie, in terms of level? That is, do you see this as primarily a site for physics students, or for physics researchers, or both equally? How is your stance on this likely to affect your actions as a moderator?
I think we should be open to interesting questions at all levels. I don't vote to close questions because the answers are "obvious" or "trivial" to me, because they weren't always; I don't vote to close questions that are over my head.
1. How would you deal with a user who produced a steady stream of valuable answers, but tends to generate a large number of arguments/flags from comments?
I'd invite that user over to my house for dinner. At dinner, we'd have a conversation about what it means to be a part of a community. Probably we would both learn things. (Nota bene: if you'd like to come over for dinner, generating flags is less efficient than just bringing it up.)
1. How would you handle a situation where another mod closed/deleted/etc a question that you feel shouldn't have been?
• I'm flagging this post because you've never invited me over for dinner – Jim Oct 4 '16 at 13:18
• @Jim When is good for you? – rob Oct 4 '16 at 13:21
• lol, any time. The problem is the where is probably not good for me. Alas, that's the troubles we accept when using the internet – Jim Oct 4 '16 at 13:24
How would you deal with a user who produced a steady stream of answers with mixed or negative scores, but still manages to get thousands of rep (because upvotes count much more than downvotes)?
Since it is not explicitly stated otherwise, I assume the user is not violating any current rules, and thus the question should instead be "How would you deal with those that complain about this user?"
On my view, this site is an experiment in community moderation. It's been interesting watching the site and myself evolve over the time that I've participated here. If I were a moderator, I would act to let the experiment continue with only the occasional intervention now and then when it's clear that genuine malice towards this experiment is involved.
As a moderator, what would be your highest priority between "Growth of the site" and "Quality of the questions and answers of the site"?
Growth, or lack thereof, would not be a priority at all. The quality of questions and answers flows directly from the quality of those that freely participate here, and, as long as one behaves as an adult, one is free to participate here. I know that for some, this state of affairs just won't do. I do not share that view.
For some moderator actions, the current mods wait for a consensus from the whole group before taking the action. We have varying preferences about when to do this and when to just take action directly without checking with others. Which mod actions would you wait for consensus on, and which ones would you just do without checking? If you're not sure about a certain action, are you more likely to be conservative and wait for consensus, or be proactive and just do it?
If it is my judgment that immediate action is necessary, I will act in accord with my judgment based on the knowledge I have at the time and I'll be prepared to accept the consequences. If immediate action is unnecessary and I'm unsure of the proper action (including no action), I'll seek advice.
How much time do you expect you can commit to moderating the site? (A couple hours a month? Ten hours a week? Ten hours a day? Take a good guess) Also, do you anticipate any reason why that amount of time would significantly decrease in the future?
No more than an hour a day on average.
A diamond will be attached to everything you say and have said in the past, including questions, answers and comments and chatroom messages. Everything you will do will be seen under a different light. Do you feel like all the material you've posted on the site reflects that you would be a good moderator?
No.
Will becoming a moderator induce significant changes in what you do—and refrain from doing—on the site (outside the obvious addition of moderator duties)?
I don't know.
Where do you think the boundaries of our scope should lie, in terms of topic? In other words, where do you draw the line between "physics" and other topics, and how do you feel about questions that lie in the various grey areas between physics and other disciplines?
If the question is interesting and has some relationship, even if small, to "physics", it's a good thing.
Do you generally feel that our scope should be broader or narrower than it is now?
How are your views on these topics likely to affect your actions as a moderator?
Very little if any at all.
In what way do you feel that being a moderator will make you more effective as opposed to simply reaching 10k or 20k rep?
I'm sorry if I misunderstand, but this question seems to be asking if I feel being a moderator will make me more effective as a moderator. I think I'll just move on...
Where do you think the boundaries of our scope should lie, in terms of level? That is, do you see this as primarily a site for physics students, or for physics researchers, or both equally? How is your stance on this likely to affect your actions as a moderator?
Even rank beginners can ask interesting questions, and producing a quality answer that a beginner can understand is not trivial and is often rewarding. The downvote button hover-over includes "This question does not show any research effort" but does not include "This question is not at a high enough level".
How would you deal with a user who produced a steady stream of valuable answers, but tends to generate a large number of arguments/flags from comments?
It depends. Life is full of valuable people that just don't know how to play well with others and there is no one 'formula' that works in every case.
How would you handle a situation where another mod closed/deleted/etc a question that you feel shouldn't have been?
I don't recognize it as a situation requiring action. However, if I find that I often disagree with another mod's close/delete/etc. actions, I'll begin by asking what it is that I'm not seeing, i.e., help me understand what you see in these questions that make them, on your view, close worthy?
1. How would you deal with a user who produced a steady stream of answers with mixed or negative scores, but still manages to get thousands of rep (because upvotes count much more than downvotes)?
While I am rather dismayed that it is possible that this happens, I don't think this is a problem that can or should be solved by moderator intervention - at least, not by using any of the additional privileges a moderator has. That this is possible is an inherent property of the SE engine and the weights it attaches to up- and downvotes. Of course, if significant support for the idea of raising the weight of downvotes was present in the community, it would perhaps fall to the moderators to present this position to the SE team - but then again, everyone can make such a feature request at the mother meta.
If the question intends to imply that maybe moderators should delete the answers of such users or suspend them, then I don't think that either can be justified. Also, deleting their wrong answers potentially leads to the users gaining more reputation, depending on what exactly the vote count on the wrong answer was, which doesn't solve the problem at all. Selectively deleting only the answers which contribute to their positive reputation would appear capricious and targeting a specific user instead of specific posts, which is generally inappropriate. Suspensions are only appropriate when an explicit rule was broken - and "posting wrong stuff" is annoying, but not in violation of any policy.
1. As a moderator, what would be your highest priority between "Growth of the site" and "Quality of the questions and answers of the site"?
Quality. I do think the general voting behavior on this SE currently encourages questions that are much too lazy and ill-thought-out. I wrote about this some time ago, and although the numbers are of course off by now, I still stand by everything I've written there. However, I don't see how this priority of mine would be any more relevant as a moderator than it was as a high-reputation user -- the primary means of quality control is upvoting high-quality questions and downvoting low-quality ones, which is completely unaffected by moderatorship.
1. For some moderator actions, the current mods wait for a consensus from the whole group before taking the action. We have varying preferences about when to do this and when to just take action directly without checking with others. Which mod actions would you wait for consensus on, and which ones would you just do without checking? If you're not sure about a certain action, are you more likely to be conservative and wait for consensus, or be proactive and just do it?
I would certainly wait for consensus regarding suspensions, except in cases where one is immediately needed for the user to "cool off". I would also, at least at the beginning, wait for input before dismissing custom moderator flags, but perhaps not before validating them, if the case appears clear cut. I would not wait for consensus when reopening or closing a question, but I expect to be casting both reopen and close votes much less than I currently do since I don't want to overuse the unilateral character of these votes. I would also not wait for consensus before deleting obsolete, rude or irrelevant comments, since those are supposed to be ephemeral by design anyway.
1. How much time do you expect you can commit to moderating the site? (A couple hours a month? Ten hours a week? Ten hours a day? Take a good guess) Also, do you anticipate any reason why that amount of time would significantly decrease in the future?
I can guarantee a minimum of an hour per day for at least the next year. My time "spent" on the site is usually more, but I'm often doing other things while occasionally checking the site, so the actual time invested into the site is difficult to determine.
1. A diamond will be attached to everything you say and have said in the past, including questions, answers and comments and chatroom messages. Everything you will do will be seen under a different light. Do you feel like all the material you've posted on the site reflects that you would be a good moderator? Will becoming a moderator induce significant changes in what you do—and refrain from doing—on the site (outside the obvious addition of moderator duties)?
I'm confident that 99% of what I have posted will not reflect badly, and that most of it reflects rather well. I can remember some stray comments and chat messages where I might have been less charitable than I should have been, but there is nothing that I would be ashamed of. The most significant change to my routine in the site would probably be that I don't do the "ordinary" review queues anymore, which I currently visit almost every day, and that I might be a bit more careful to distinguish site policy from my personal opinion when commenting.
1. Where do you think the boundaries of our scope should lie, in terms of topic? In other words, where do you draw the line between "physics" and other topics, and how do you feel about questions that lie in the various grey areas between physics and other disciplines? Do you generally feel that our scope should be broader or narrower than it is now? How are your views on these topics likely to affect your actions as a moderator?
I think our scope is generally quite alright as it is. The very nature of these interdisciplinary questions is that it is difficult to make general policies that cover them - this is where community moderation has to shine, hashing out whether a given question is "just enough physics" or "not enough physics" by several close/reopen cycles if need be, or just by a bit of comment/chat/meta discussion about the specific question. As a moderator, I would refrain from casting my unilateral votes in any binding manner on those questions, i.e. I would only cast them as fifth votes. We don't elect moderators to make policy, after all.
1. In what way do you feel that being a moderator will make you more effective as opposed to simply reaching 10k or 20k rep?
It won't make me more effective. Moderators aren't supposed to be "more effective" versions of ordinary users, they're supposed to be trusted janitors who clean up the messes and do the behind-the-scenes work that's unsuitable for community moderation but nevertheless needs to be done to keep the site running smoothly. I love this site and I want to be part of the reason it's working. In the past, I've done so by participating in the review queues every day, which is something I strongly encourage every user with sufficient reputation to do. Now, I get the opportunity to do so in another role - if you want me to.
1. Where do you think the boundaries of our scope should lie, in terms of level? That is, do you see this as primarily a site for physics students, or for physics researchers, or both equally? How is your stance on this likely to affect your actions as a moderator?
I don't think there should be boundaries in terms of "level", and I don't think there currently are. There are good and bad questions at a layman's or high schooler's level, and there are good and bad questions at research level. I've seen plenty of all of those. What I expect from a question is a reasonable amount of research effort, and an attempt to communicate clearly the conceptual question the asker is having. I strongly dislike all stances that think questions have merit simply because of their level. Questions have merit because they are good, well-posed questions that can be answered with physical insight. Note, however, that just because you can write an amazing answer to a question that doesn't necessarily make the question itself any better.
In the case where I see a question closed merely because of its level, I would strongly consider unilaterally reopening it. We don't have a policy about "level", and I don't think we should have one.
1. How would you deal with a user who produced a steady stream of valuable answers, but tends to generate a large number of arguments/flags from comments?
It depends on exactly what kinds of flags this user generates. Generating a lot of "obsolete" flags is a good thing, since it indicates the user leaves a lot of comments leading to posts being improved, making the comments obsolete in the process. Generating a lot of "not constructive" or "too chatty" flags is worse, although it might just indicate a chatty user. I'd advise the user to use comments only for relevant communication and take longer reply chains to chat instead, but probably take no further action. On repeated offenses, I'd be quicker to move the discussions to chat, but if everything's civil, I don't see a need for other actions.
Generating a lot of "rude/offensive" flags is a lot worse, and would probably warrant a suspension if warnings to be nicer do not succeed. As I said above, I would not impose a suspension without waiting for consensus from the rest of the moderators, unless the user is currently generating those flags and no end is in sight. In that case I would impose a short "cool off" suspension to de-escalate the immediate situation, and then consult the other moderators.
1. How would you handle a situation where another mod closed/deleted/etc a question that you feel shouldn't have been?
I'd first make sure I understand the reasons for which the other moderator took that action. If I still disagree with the result, I'd take the issue to meta. Moderators should not make policy on their own, so if two of us are in strong disagreement what to do with a certain question, this indicates that the community's larger position on that type of question is unclear or needs to be re-evaluated.
• "this is where community moderation has to shine, hashing out whether a given question is 'just enough physics' or 'not enough physics' by several close/reopen cycles if need be" <-- I think this approach is strictly wrong. Close votes should only be used to implement policy, not to determine it. (That's what meta is for.) The reason for this is simple: who ever heard of a democratic system where a motion is passed if (votes in favour - votes against)%10 >= 5? The close vote system isn't intended as an implementation of democracy and is completely broken when used for that purpose. – Nathaniel Sep 27 '16 at 4:41
• Moreover, I think an important part of a moderator's job is to step in and prevent people from trying to influence policy by abusing the community moderation system. If that happens the close/reopen votes should be removed, the post locked, and a discussion started on meta. I would not vote for a moderator who isn't willing to do that. – Nathaniel Sep 27 '16 at 4:43
• @Nathaniel Please note that I a) said that I don't believe making general policies covering interdisciplinary questions is possible, so we necessarily have to work on a case-by-case basis and b) explicitly offered comment/chat/meta discussion as an alternative to close/reopen cycles. However, I'm not going to lock any questions that sit on the boundaries of physics unless they additionally generate unpleasant or off-topic comments. – ACuriousMind Sep 27 '16 at 12:08
# Statistics and Actuarial Science - Theses, Dissertations, and other Required Graduate Degree Essays
## A goodness-of-fit test for semi-parametric copula models of right-censored bivariate survival times
Author:
Date created:
2016-06-09
Abstract:
In multivariate survival analyses, understanding and quantifying the association between survival times is of importance. Copulas, such as Archimedean copulas and Gaussian copulas, provide a flexible approach to modeling and estimating the dependence structure among survival times separately from the marginal distributions (Sklar, 1959). However, misspecification in the parametric form of the copula function will directly lead to incorrect estimation of the joint distribution of the bivariate survival times and other model-based quantities. The objectives of this project are twofold. First, I reviewed the basic definitions and properties of commonly used survival copula models. In this project, I focused on semi-parametric copula models where the marginal distributions are unspecified but the copula function belongs to a parametric copula family. Various estimation procedures for the dependence parameter associated with the copula function were also reviewed. Secondly, I extended the pseudo in-and-out-of-sample (PIOS) likelihood ratio test proposed in Zhang et al. (2016) to testing the semi-parametric copula models for right-censored bivariate survival times. The PIOS test is constructed by comparing two forms of pseudo likelihoods: one is the "in-sample" pseudo likelihood, which is the full pseudo likelihood, and the other is the "out-of-sample" pseudo likelihood, which is a cross-validated pseudo likelihood by means of the jackknife. The finite sample performance of the PIOS test was investigated via a simulation study. In addition, two real data examples were analyzed for illustrative purposes.
Document type:
Graduating extended essay / Research project
File(s):
Senior supervisor:
Qian (Michelle) Zhou
Department:
Science: Department of Statistics and Actuarial Science
Thesis type:
(Project) M.Sc.
## On Supervised and Unsupervised Discrimination
Author:
Date created:
2016-07-29
Abstract:
Discrimination is a supervised problem in statistics and machine learning that begins with data from a finite number of groups. The goal is to partition the data-space into some number of regions, and assign a group to each region so that observations there are most likely to belong to the assigned group. The most popular tool for discrimination is called discriminant analysis. Unsupervised discrimination, commonly known as clustering, also begins with data from groups, but now we do not necessarily know how many groups, nor do we get to know which group each observation belongs to. Our goal when doing clustering is still to partition the data-space into regions and assign groups to those regions, however we do not have any a priori information with which to assign these groups. Common tools for clustering include the k-means algorithm and model-based clustering using either the expectation maximization (EM) or classification expectation maximization (CEM) algorithms (of which k-means is a special case). Tools designed for clustering can also be used to do discrimination. We investigate this possibility, along with a method proposed by Yang (2013) for smoothing the transition between these problems. We use two simulations to investigate the performance of discriminant analysis and both versions of model-based clustering with various parameter settings across various datasets. These settings include using Yang’s method for modifying clustering tools to handle discrimination. Results are presented along with recommendations for data analysis when doing discrimination or clustering. Specifically, we investigate what assumptions to make about the groups’ sizes and shapes, as well as which method to use (discriminant analysis or the EM or CEM algorithms) and whether or not to apply Yang’s pre-processing procedure.
Document type:
Graduating extended essay / Research project
File(s):
Senior supervisor:
Tom Loughin
Department:
Science: Department of Statistics and Actuarial Science
Thesis type:
(Project) M.Sc.
## Analysis of universal life insurance cash flows with stochastic asset models
Author:
Date created:
2016-06-02
Abstract:
Universal life insurance is a flexible product which provides the policyholder with life insurance protection as well as savings build-up. The performance of the policy is hard to evaluate accurately with deterministic asset models, especially when the fund is placed in accounts that track the performance of equities. This project aims to investigate factors that affect the savings (account value) and insurance coverage (death benefit) under a stochastic framework. Time series models are built to capture the complex dynamics of returns from two commonly offered investment options, T-bills and the S&P 500 index, with and without an interdependence assumption. Cash flows of account value, cost of insurance, and death benefit are projected for sample policies with common product features under multiple investment strategies. The comparison reveals the impact of asset models and fund allocation on the projected cash flows.
Document type:
Graduating extended essay / Research project
File(s):
Senior supervisor:
Yi Lu
Department:
Science: Department of Statistics and Actuarial Science
Thesis type:
(Project) M.Sc.
## Cricket Analytics
Author:
Date created:
2015-12-16
Abstract:
This thesis consists of a compilation of three research papers and a non-statistical essay. Chapter 2 considers the decision problem of when to declare during the third innings of a test cricket match. There are various factors that affect the decision of the declaring team, including the target score, the number of overs remaining, the relative desire to win versus draw, and the scoring characteristics of the particular match. Decision rules are developed and these are assessed against historical matches. We observe that there are discrepancies between the optimal time to declare and what takes place in practice. Chapter 3 considers the determination of optimal team lineups in Twenty20 cricket, where a lineup consists of three components: team selection, batting order and bowling order. Via match simulation, we estimate the expected runs scored minus the expected runs allowed for a given lineup. The lineup is then optimized over a vast combinatorial space via simulated annealing. We observe that the composition of an optimal Twenty20 lineup sometimes results in nontraditional roles for players. As a by-product of the methodology, we obtain an “all-star” lineup selected from international Twenty20 cricket. Chapter 4 is a first attempt to investigate the importance of fielding in cricket. We introduce the metric of expected runs saved due to fielding, which is both interpretable and directly relevant to winning matches. The metric is assigned to individual players and is based on a textual analysis of match commentaries using random forest methodology. We observe that the best fielders save on average 1.2 runs per match compared to a typical fielder. Chapter 5 is a non-statistical essay on two cricketing greats from Sri Lanka who established numerous world records and recently retired from the game. Though their record-breaking performances are now part of cricketing statistics, this chapter is not a contribution which adds to the statistical literature, and should not be regarded as a component of the thesis in terms of analytics.
Document type:
Thesis
File(s):
Senior supervisor:
Tim Swartz
Department:
Science: Department of Statistics and Actuarial Science
Thesis type:
(Thesis) Ph.D.
## Bayesian profile regression with evaluation on simulated data
Author:
Date created:
2016-01-06
Abstract:
Using regression analysis to make inference from data sets that contain a large number of potentially correlated covariates can be difficult. Such large numbers of covariates have become more common in clinical observational studies due to the dramatic improvement in information-capturing technology for clinical databases. For instance, in disease diagnosis and treatment, obtaining a number of indicators regarding patients’ organ function is much easier than before, and these indicators can be highly correlated. We discuss Bayesian profile regression, an approach that deals with the large numbers of correlated covariates for the binary covariates commonly recorded in clinical databases. Clusters of patients with similar covariate profiles are formed through the application of a Dirichlet process prior and then associated with outcomes via a regression model. Methods for evaluating the clustering and making inference are described afterwards. We use simulated data to compare the performance of Bayesian profile regression to the LASSO, a popular alternative for data sets with a large number of predictors. To make these comparisons, we apply the recently developed R package PReMiuM to fit the Bayesian profile regression.
Document type:
Graduating extended essay / Research project
File(s):
Senior supervisor:
Jinko Graham
Department:
Science: Department of Statistics and Actuarial Science
Thesis type:
(Project) M.Sc.
## Data integration methods for studying animal population dynamics
Author:
Date created:
2015-12-22
Abstract:
In this thesis, we develop new data integration methods to better understand animal population dynamics. In a first project, we study the problem of integrating aerial and access data from aerial-access creel surveys to estimate angling effort, catch and harvest. We propose new estimation methods, study their statistical properties theoretically and conduct a simulation study to compare their performance. We apply our methods to data from an annual Kootenay Lake (Canada) survey. In a second project, we present a new Bayesian modeling approach to integrate capture-recapture data with other sources of data without relying on the usual independence assumption. We use a simulation study to compare, under various scenarios, our approach with the usual approach of simply multiplying likelihoods. In the simulation study, the Monte Carlo RMSEs and expected posterior standard deviations obtained with our approach are always smaller than or equal to those obtained with the usual approach of simply multiplying likelihoods. Finally, we compare the performance of the two approaches using real data from a colony of Greater horseshoe bats (*Rhinolophus ferrumequinum*) in the Valais, Switzerland. In a third project, we develop an explicit integrated population model to integrate capture-recapture survey data, dead recovery survey data and snorkel survey data to better understand the movement from the ocean to spawning grounds of Chinook salmon (*Oncorhynchus tshawytscha*) on the West Coast of Vancouver Island, Canada. In addition to providing spawning escapement estimates, the model provides estimates of stream residence time and snorkel survey observer efficiency, which are crucial for, but currently lacking from, the area-under-the-curve method used to estimate escapement on the West Coast of Vancouver Island.
Document type:
Thesis
File(s):
Senior supervisor:
Richard Lockhart
Carl Schwarz
Department:
Science: Department of Statistics and Actuarial Science
Thesis type:
(Thesis) Ph.D.
## Statistical Inference under Latent Class Models, with Application to Risk Assessment in Cancer Survivorship Studies
Author:
Date created:
2015-11-12
Abstract:
Motivated by a cancer survivorship program, this PhD thesis aims to develop methodology for risk assessment, classification, and prediction. We formulate the primary data collected from a cohort with two underlying categories, the at-risk and not-at-risk classes, using latent class models, and we conduct both cross-sectional and longitudinal analyses. We begin with a maximum pseudo-likelihood estimator (pseudo-MLE) as an alternative to the maximum likelihood estimator (MLE) under a mixture Poisson distribution with event counts. The pseudo-MLE utilizes supplementary information on the not-at-risk class from a different population. It reduces the computational intensity and potentially increases the estimation efficiency. To obtain statistical methods that are more robust than likelihood-based methods to distribution misspecification, we adapt the well-established generalized estimating equations (GEE) approach under the mean-variance model corresponding to the mixture Poisson distribution. The inherent computing and efficiency issues in the application of GEEs motivate two sets of extended GEEs, using the primary data supplemented by information from the second population alone or together with the available information on individuals in the cohort who are deemed to belong to the at-risk class. We derive asymptotic properties of the proposed pseudo-MLE and the estimators from the extended GEEs, and we estimate their variances by extended Huber sandwich estimators. We use simulation to examine the finite-sample properties of the estimators in terms of both efficiency and robustness. The simulation studies verify the consistency of the proposed parameter estimators and their variance estimators. They also show that the pseudo-MLE has efficiency comparable to that of the MLE, and the extended GEE estimators are robust to distribution misspecification while maintaining satisfactory efficiency. 
Further, we present an extension of the favourable extended GEE estimator to longitudinal settings by adjusting for within-subject correlation. The proposed methodology is illustrated with physician claims from the cancer program. We fit different latent class models for the counts and costs of the physician visits by applying the proposed estimators. We use the parameter estimates to identify the risk of subsequent and ongoing problems arising from the subjects’ initial cancer diagnoses. We perform risk classification and prediction using the fitted latent class models.
Document type:
Thesis
File(s):
Senior supervisor:
X. Joan Hu
John J. Spinelli
Department:
Science: Department of Statistics and Actuarial Science
Thesis type:
(Thesis) Ph.D.
## Application of Relational Models in Mortality Immunization
Author:
Date created:
2015-07-29
Abstract:
The prediction of future mortality rates by any existing mortality projection model is hardly exact, which exposes life insurance companies to mortality and longevity risks. Since a change in mortality rates has opposite impacts on the surpluses of life insurance and annuity products, hedging strategies for mortality and longevity risks can be implemented by creating an insurance portfolio of both life insurance and annuity products. In this project, we develop a framework for implementing non-size-free matching strategies to hedge against mortality and longevity risks. We apply relational models to capture the mortality movements by assuming that the simulated mortality sequence is a proportional and/or a constant change of the expected one, and the amount of the changes varies with the length of the sequence. With the magnitude of the proportional and/or constant changes, we determine the optimal weights for allocating the life insurance and annuity products in a portfolio for mortality immunization according to each of the proposed matching strategies. Comparing the hedging performance of non-size-free matching strategies with the size-free ones proposed by Lin and Tsai (2014), we demonstrate that non-size-free matching strategies can hedge against mortality and longevity risks more effectively than the corresponding size-free ones.
Document type:
Thesis
File(s):
Senior supervisor:
Cary Tsai
Department:
Science: Department of Biomedical Physiology and Kinesiology
Thesis type:
(Thesis) M.Sc.
## Understanding the impact of heteroscedasticity on the predictive ability of modern regression methods
Author:
Date created:
2015-08-17
Abstract:
As the size and complexity of modern data sets grows, more and more prediction methods are developed. Despite the growing sophistication of methods, there is not a well-developed literature on how heteroscedasticity affects modern regression methods. We aim to understand the impact of heteroscedasticity on the predictive ability of modern regression methods. We accomplish this by reviewing the visualization and diagnosis of heteroscedasticity, as well as developing a measure for quantifying it. These methods are used on 42 real data sets in order to understand the prevalence and magnitude of heteroscedasticity "typical" of real data. We use the knowledge from this analysis to develop a simulation study that explores the predictive ability of nine regression methods. We vary a number of factors to determine how they influence prediction accuracy in conjunction with, and separately from, heteroscedasticity. These factors include data linearity, the number of explanatory variables, the proportion of unimportant explanatory variables, and the signal-to-noise ratio. We compare prediction accuracy with and without a variance-stabilizing log-transformation. The predictive ability of each method is compared by using the mean squared error, which is a popular measure of regression accuracy, and the median absolute standardized deviation, a measure that accounts for the potential of heteroscedasticity.
Document type:
Graduating extended essay / Research project
File(s):
Senior supervisor:
Thomas Loughin
Department:
Science: Department of Statistics and Actuarial Science
Thesis type:
(Project) M.Sc.
## A Pseudo Non-Parametric Buhlmann Credibility Approach to Modeling Mortality Rates
Author:
Date created:
2015-07-29
Abstract:
Credibility theory is applied in property and casualty insurance to perform prospective experience rating, i.e., to determine the future premiums to charge based on both past experience and the underlying group rate. Insurance companies assign a credibility factor Z to a specific policyholder’s own past data, and put 1 − Z onto the prior mean, which is the group rate determined by actuaries to reflect the expected value for all risk classes. This partial credibility takes advantage of both the policyholder’s own experience and the entire group’s characteristics, and thus increases the accuracy of the estimated value so that insurance companies can stay competitive in the market. Given its popular applications in property and casualty insurance, this project aims to apply credibility theory to mortality rates projected from three existing mortality models. The approach presented in this project violates one of the conditions, and thus produces pseudo non-parametric Bühlmann estimates of the forecasted mortality rates. Numerical results show that the accuracy of forecasted mortality rates is significantly improved after applying the non-parametric Bühlmann method to the Lee-Carter model, the CBD model, and the linear regression-random walk (LR-RW) model. The mean absolute percentage error (MAPE) is adopted to compare the performances in terms of accuracy of mortality prediction.
Document type:
Graduating extended essay / Research project
File(s):
Senior supervisor:
Cary Tsai
Department:
Science:
Thesis type:
(Project) M.Sc.
threeStage {DRIP} R Documentation
## Denoising, deblurring and edge-preserving
### Description
Estimates a surface using local smoothing and the fitting of a principal component line. The bandwidth is specified by the user.
### Usage
threeStage(image, bandwidth, edge1, edge2,
blur = FALSE, plot = FALSE)
### Arguments
- image: A square matrix object of size n by n; no missing values allowed.
- bandwidth: A positive integer that specifies the number of pixels used in the local smoothing.
- edge1: A matrix of 0s and 1s of the same size as image, representing detected step edge pixels.
- edge2: A matrix of 0s and 1s of the same size as image, representing detected roof/valley edge pixels.
- blur: If blur = TRUE, besides a conventional 2-D kernel function, a univariate increasing kernel function is used in the local kernel smoothing to address the issue of blur.
- plot: If plot = TRUE, the image of the fitted surface is plotted.
### Details
At each pixel, if there are step edges detected in the local neighborhood, a principal component line is fitted through the detected edge pixels to approximate the step edge locally, and the regression surface is then estimated by a local constant kernel smoothing procedure using only the pixels on one side of the principal component line. If there are no step edges but roof/valley edges are detected in the local neighborhood, the same procedure is followed, except that the principal component line is fitted through the detected roof/valley edge pixels. When neither step edges nor roof/valley edges are detected in the neighborhood, the regression surface at the pixel is estimated by the conventional local linear kernel smoothing procedure.
### Value
Returns the restored image, represented by a matrix.
### References
Qiu, P., and Kang, Y. "Blind Image Deblurring Using Jump Regression Analysis," Statistica Sinica, 25, 2015, 879-899.
### See Also

JPLLK_surface, surfaceCluster

### Examples

data(sar)
# Chapter 2 The value of money
## 2.1 The centipede game
The centipede game starts with player $$P_1$$ deciding if the game should continue or not. If this player stops the game, then both players receive a payoff of $$1$$. If the game continues, player $$P_2$$ must in turn decide if the game should stop or continue. If the game stops, player $$P_1$$ receives a $$0$$ payoff, while player $$P_2$$ receives a payoff of $$3$$. If the game goes on, player $$P_1$$ must play again, and so on.
The game is limited to a fixed number of turns: if both players always choose to continue the game, each one of them will receive $$100$$ as payoff.
If two people who do not know each other were to play this game once in their life, how would they play? Game theory tells us that, if they are rational (and if they know they both are), the game will stop at the very first turn.
While it might seem surprising, this is a well-established result. If we consider the last turn, it is clear that $$P_2$$ will selfishly end the game to get $$101$$ rather than continue it to get only $$100$$ (as the two players do not know each other, $$P_2$$ has no reason to make a suboptimal choice for the sake of $$P_1$$).
But $$P_1$$ knows that, so just before $$P_2$$ can selfishly end the game, $$P_1$$ will end it, so that they both receive $$99$$. But $$P_2$$ is smart; he has understood very well what is at stake, and so he will end the game one turn earlier. And so on, until we conclude that $$P_1$$ has no better choice than ending the game at the very first step, even though both players could have won a lot more had they been more cooperative (or less rational).
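This unraveling argument is just backward induction, and it can be sketched in a few lines of code. The payoff table below is purely illustrative (the chapter's own numbers end at $$100$$, and its intermediate payoffs are not reproduced here); what matters is that each mover slightly prefers stopping now over letting the other player stop next turn:

```python
def backward_induction(payoffs, final):
    """Backward induction for a finite centipede game.

    payoffs[k] = (payoff to P1, payoff to P2) if the game stops at turn k
    (0-indexed; P1 moves on even turns, P2 on odd turns). `final` is the
    payoff pair if nobody ever stops. Returns the earliest turn at which
    a rational mover stops, and the resulting payoffs.
    """
    value = final          # value of the game continuing past turn k
    stop_turn = None
    for k in reversed(range(len(payoffs))):
        mover = k % 2      # whose decision it is at turn k
        if payoffs[k][mover] >= value[mover]:
            value = payoffs[k]   # the mover prefers to stop here
            stop_turn = k
    return stop_turn, value

# Illustrative (made-up) table: stopping at turn k pays the mover k + 2
# and the other player k, so the pot grows but stopping is tempting.
payoffs = [(k + 2, k) if k % 2 == 0 else (k, k + 2) for k in range(6)]
print(backward_induction(payoffs, final=(6, 6)))   # -> (0, (2, 0))
```

The induction unravels all the way to turn 0, exactly as in the text: the game stops immediately even though both players would be better off reaching the end.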
## 2.2 The money temporal paradox
The case of money is somewhat similar to the centipede game. We often agree to give someone tangible goods in exchange for some piece of paper called “money”. By doing so, we are putting ourselves in the shoes of $$P_1$$, giving up a payoff of $$1$$ (the goods we own) to get some money with no direct utility instead. Our only hope is that another person in the economy will some day be willing to give us something in return for the money.
The trouble is that we know for sure that, one day, the money we use will cease to have value: economic crises, wars, natural hazards or the end of the world itself will dramatically affect its value in the long run. So one day the money we use will no longer have any value, implying that the day before, everyone should refuse any payment in this money. So the day before money loses its value, it already has no value. And so on, to the point where money should never have any value at all.
Then, is money valued only by irrational people? Actually, even if we knew for sure that one day the money we use would be worthless, the uncertainty about the fateful date would be enough to convince us to use it. While in the centipede game each player can predict with absolute certainty what will happen, our situation does not allow that: in order to make decisions, we have to balance the risks against the potential positive outcomes. Uncertainty is here what allows us to rationally create value.
## 2.3 The infinite centipede game
To illustrate this fact, we have to modify the centipede game a bit. Let us consider an infinite centipede game, where at the beginning of the game the players have to choose the probabilities $$p_1$$ (for player $$P_1$$) and $$p_2$$ (for player $$P_2$$) with which they will continue the game when it is their turn to play.
For the sake of simplicity, let us assume that there is no time preference, and that the game will continue forever unless one of the players ends it (it is possible to establish similar results if we do not make such simplifications, but with a lot more computations).
If $$p_1 = p_2 = 1$$, then the payoff for both players is $$\infty$$.
If the payoffs at each step of the game are computed as in figure 2.1, then there are exactly two pure Nash equilibria:
• the first one is characterised by $$p_1 = p_2 = 0$$: if one player does not want to cooperate at all, the other for sure has no interest in any kind of cooperation either
• the second one is characterised by $$p_1 = p_2 = 1$$: if one player is willing to cooperate no matter what, the other has every reason to do the same to get the infinite payoff.
These two Nash equilibria are not equivalent: one is worth $$0$$ for both players, while the other is worth an infinite payoff. The uncertainty about the outcome of this game is finally not so big: we can reasonably expect both players to choose $$p_1 = p_2 = 1$$.
Actually, everything depends on the coefficients of the game: if the payoffs are not increasing but decreasing with time, it is clear that rational players would choose not to play the game, or more exactly to play $$p_1 = p_2 = 0$$.
|
|
# A Coyote and a Rat#
A coyote notices a rat running past it, toward a bush where the rat will be safe. The rat is running with a constant velocity of $$v\_{\text{rat}} = {{ params.v_r }} \rm{m/s}$$ and the coyote is at rest, $$\Delta x = {{ params.d_x }} \rm{m}$$ to the left of the rat. However, at $$t=0 \rm{s}$$, the coyote begins running to the right, in pursuit of the rat, with an acceleration of $$a\_{\text{coyote}} = {{ params.a_c }} \rm{m/s^2}$$.
Set your reference frame to be located with the origin at the original location of the coyote and the rightward direction corresponding to the positive $$x$$-direction.
## Part 1#
Write the position of the coyote as a function of time $$x\_{\text{coyote}}(t)$$. Do not plug in numerical values for this part.
Use the following table as a reference for each variable. Note that it may not be necessary to use every variable.
| For | Use |
| --- | --- |
| $$t$$ | t |
| $$\Delta x$$ | dx |
| $$v\_{\text{rat}}$$ | vr |
| $$a\_{\text{coyote}}$$ | ac |
## Part 2#
Write the velocity of the coyote as a function of time $$v\_{\text{coyote}}(t)$$. Do not plug in numerical values for this part.
Use the following table as a reference for each variable. Note that it may not be necessary to use every variable.
| For | Use |
| --- | --- |
| $$t$$ | t |
| $$\Delta x$$ | dx |
| $$v\_{\text{rat}}$$ | vr |
| $$a\_{\text{coyote}}$$ | ac |
## Part 3#
Write the position of the rat as a function of time $$x\_{\text{rat}}(t)$$. Do not plug in numerical values for this part.
Use the following table as a reference for each variable. Note that it may not be necessary to use every variable.
| For | Use |
| --- | --- |
| $$t$$ | t |
| $$\Delta x$$ | dx |
| $$v\_{\text{rat}}$$ | vr |
| $$a\_{\text{coyote}}$$ | ac |
## Part 4#
Write the velocity of the rat as a function of time $$v\_{\text{rat}}(t)$$. Do not plug in numerical values for this part.
Use the following table as a reference for each variable. Note that it may not be necessary to use every variable.
| For | Use |
| --- | --- |
| $$t$$ | t |
| $$\Delta x$$ | dx |
| $$v\_{\text{rat}}$$ | vr |
| $$a\_{\text{coyote}}$$ | ac |
## Part 5#
At what time does the coyote catch the rat $$t\_{\text{catch}}$$?
Please enter a numeric value in $$\rm{s}$$.
## Part 6#
At this time, what is the velocity of the coyote $$v\_{\text{coyote}}(t\_{\text{catch}})$$?
Please enter a numeric value in $$\rm{m/s}$$.
## Part 7#
At this time, what is the velocity of the rat $$v\_{\text{rat}}(t\_{\text{catch}})$$?
Please enter a numeric value in $$\rm{m/s}$$.
## Part 8#
What is the location at which the coyote will catch the rat $$x\_{\text{coyote}}(t\_{\text{catch}}) = x\_{\text{rat}}(t\_{\text{catch}})$$?
Please enter a numeric value in $$\rm{m}$$.
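As a quick numerical sanity check on Parts 5–8, the kinematics can be worked through in a short script. The parameter values below are illustrative stand-ins for the templated `params.v_r`, `params.d_x` and `params.a_c`, not the actual ones:

```python
import math

# Illustrative stand-ins for the templated parameters:
v_rat = 8.0       # m/s   (params.v_r)
dx = 70.0         # m     (params.d_x)
a_coyote = 4.0    # m/s^2 (params.a_c)

# x_coyote(t) = (1/2) a t^2 and x_rat(t) = dx + v t, so the catch time is
# the positive root of (1/2) a t^2 - v t - dx = 0:
t_catch = (v_rat + math.sqrt(v_rat**2 + 2 * a_coyote * dx)) / a_coyote
v_coyote_catch = a_coyote * t_catch        # Part 6
v_rat_catch = v_rat                        # Part 7: the rat's velocity is constant
x_catch = 0.5 * a_coyote * t_catch**2      # Part 8

# The two position functions must agree at t_catch:
assert abs(x_catch - (dx + v_rat * t_catch)) < 1e-9
print(t_catch, v_coyote_catch, x_catch)
```

Any parameter set works the same way; only the positive root of the quadratic is physical, since the catch happens after $$t = 0$$.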
|
|
Dalton's law for ideal gases at different temperatures
Tags:
1. Mar 7, 2016
Elena14
How would Dalton's law be affected when there are two ideal gases in a container at different temperatures?
Let the gas with the higher temperature be gas A and the gas with the lower temperature be gas B. Then heat will be transferred from gas A to gas B, due to which the kinetic energy of the molecules of gas A will decrease and the kinetic energy of the molecules of gas B will increase. Hence, the molecules of gas B should collide with the walls of the container more often because of their increased kinetic energy, so the pressure exerted by gas B should increase; by the same argument, the pressure exerted by gas A should decrease.
Now, we can no longer use P(A) + P(B)= P(T)
; P(A) is the partial pressure of gas A,
P(B) is the partial pressure of gas B
and P(T) is the total pressure exerted by the mixture of gas A and gas B
Am I correct with this logic?
A glass bulb of volume 400 ml is connected to another bulb of volume 200 mL by means of a tube of negligible volume. The bulbs contain dry air and are both at a common temperature and pressure of 293 K and 1.00 atm. The larger bulb is immersed in steam at 373 K ; the smaller, in melting ice at 273 K. Find the final common pressure.
In the above problem, we cannot simply use PV= nRT to calculate final pressure because we don't know the final common temperature. So, we have to individually calculate the partial pressure of both the gases and then sum them up to get the total pressure. But according to me, as stated above, that should not be correct.
Is my logic correct or I am missing out on something? Also, how should I solve using my logic and please tell if there are other ways to go about solving this problem.
2. Mar 7, 2016
Let'sthink
I think these laws apply only to equilibrium systems. We can apply the law only when the two gases attain a common temperature and become one homogeneous system with a common temperature and a common total pressure. What is measured is the common pressure only. Partial pressures are calculated, not measured.
3. Mar 7, 2016
DrDu
In your example, the pressure will be the same in both bulbs but the amount of air in the two bulbs (aka n_1 and n_2) will be different, namely: $n_1=PV_1/RT_1$ and $n_2=PV_2/RT_2$.
4. Mar 7, 2016
Elena14
How does this answer my question?
5. Mar 7, 2016
DrDu
I give you another equation: $n=n_1+n_2$. Now you have 3 equations for the three unknown quantities P, n_1 and n_2 which you may solve to get P.
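The three equations can be checked numerically. A minimal sketch with the problem's values (R cancels out, so we can work directly in atm, mL and K):

```python
# Numerical check of the three equations above, with the problem's values.
V1, V2 = 400.0, 200.0      # bulb volumes (mL)
T0, P0 = 293.0, 1.00       # initial common temperature (K) and pressure (atm)
T1, T2 = 373.0, 273.0      # final temperatures of the two bulbs (K)

# Moles are conserved: P0*(V1+V2)/(R*T0) = P*V1/(R*T1) + P*V2/(R*T2),
# and R cancels, leaving:
P = P0 * (V1 + V2) / T0 / (V1 / T1 + V2 / T2)
print(round(P, 3))   # final common pressure, about 1.13 atm
```

Note that Dalton's law never enters: there is a single component, and the single final pressure follows from mole conservation alone.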
6. Mar 7, 2016
Elena14
I got the answer this way. Can you tell me if my reasoning about the Dalton's law was correct?
7. Mar 8, 2016
Elena14
Can you tell me if my reasoning about the Dalton's law was correct or not??
8. Mar 8, 2016
DrDu
I don't quite understand it. You talk of gas A and B, but in your example, both flasks contain dry air.
9. Mar 8, 2016
Elena14
I only know of dry air as air having no content of water vapour. Why should it matter?
10. Mar 8, 2016
DrDu
Yes, but what is gas A and what B in your example?
11. Mar 8, 2016
Elena14
Gases A and B have nothing to do with that problem. I just used them to explain a concept that I thought would be utilized in the problem.
12. Mar 8, 2016
DrDu
I find this a bit confusing. Why do you introduce A and B if they are not related to your problem? Maybe you could rephrase your question?
13. Mar 8, 2016
Elena14
Well, I don't know any other way to explain that thing. You can think of my question as having two parts. Firstly I want to know the flaw in my reasoning in -
"Let two gases : Gas A and gas B be placed in a container . Let the gas with higher temperature be gas A and the gas with lower temperature be gas B. Then heat will be transferred from gas A to gas B due to which kinetic energy of the molecules of gas A will decrease and kinetic energy of molecules of gas B will increase. Hence, molecules of gas B should make more collisions to the walls of the container because of the increased kinetic energy and hence the total pressure exerted by gas B should increase and using the same argument, pressure exerted by gas A should decrease."
And then I wanted to know if that logic could be extended in the problem below.
14. Mar 8, 2016
DrDu
Yes, sort of, but I would not talk of different gases only because they have different temperatures. If gases at different temperatures come into contact, one will lose kinetic energy and the other will gain it, but there will hardly be a pressure difference, as the gases will start to flow.
That's basically how wind comes around: Temperature differences generate minute pressure differences which lead to strong currents of air.
15. Mar 8, 2016
Elena14
What if the two gases are at different pressures before they are mixed? Then there would be pressure and heat exchange happening at the same time?
16. Mar 8, 2016
DrDu
Yes, you will get a strong turbulent flow and rapid equilibration of pressure and temperature.
17. Mar 8, 2016
Elena14
That would be a very chaotic situation? If this is true, then we cannot solve the problem :
A glass bulb of volume 400 ml is connected to another bulb of volume 200 mL by means of a tube of negligible volume. The bulbs contain dry air and are both at a common temperature and pressure of 293 K and 1.00 atm. The larger bulb is immersed in steam at 373 K ; the smaller, in melting ice at 273 K. Find the final common pressure.
using PV = nRT, where n is the total number of moles in both bulbs, V is the total volume and T is the "equilibrium temperature", since we cannot find the equilibrium temperature. Then maybe we want to use the relation: sum of partial pressures of individual gases = total pressure.
So now we will use PV = nRT to calculate the partial pressures, but then which temperature should we substitute here for the two gases? It cannot be 273 K and 373 K for sure, because the temperatures will change when the gases interact.
Or is this approach wrong altogether?
18. Mar 8, 2016
Staff: Mentor
DrDu, Elena14 keeps talking about partial pressures. Have you ever, in your broad experience, heard of the term "partial pressure" being used in connection with a single-component system (e.g., this one)? I haven't. My interpretation of this problem statement was the same as yours, and I obtained the same set of equations and results as you did. I am just not able to decipher what Elena14 is getting at. As far as I can judge, Dalton's law does not apply to this system.
Chet
|
|
Author:
Kris Seago
Subject:
Political Science
Material Type:
Module
Level:
Community College / Lower Division
Tags:
1845, Constitution, Texas
Language:
English
Media Formats:
Graphics/Photos, Text/HTML
# Constitution of 1845
## Overview
Constitution of 1845
# Constitution of 1845
Constitution of 1845
The Constitution of 1845, which provided for the government of Texas as a state in the United States, was almost twice as long as the Constitution of the Republic of Texas. The framers, members of the Convention of 1845, drew heavily on the newly adopted Constitution of Louisiana and on the constitution drawn by the Convention of 1833, but apparently used as a working model the Constitution of the republic for a general plan of government and bill of rights.
The legislative department was composed of a Senate of from nineteen to thirty-three members and a House of Representatives of from forty-five to ninety. Representatives, elected for two years, were required to have attained the age of twenty-one. Senators were elected for four years, one-half chosen biennially, all at least thirty years old. Legislators’ compensation was set at three dollars a day for each day of attendance and three dollars for each twenty-five miles of travel to and from the capital. All bills for raising revenue had to originate in the House of Representatives. Austin was made the capital until 1850, after which the people were to choose a permanent seat of government. A census was ordered for each eighth year, following which adjustment of the legislative membership was to be made. Regular sessions were biennial. Ministers of the Gospel were ineligible to be legislators.
The governor’s term was two years, and he was made ineligible for more than four years in any period of six years. He was required to be a citizen and a resident of Texas for at least three years before his election and to be at least thirty years of age. He could appoint the attorney general, secretary of state, and supreme and district court judges, subject to confirmation by the Senate; but the comptroller and treasurer were elected biennially by a joint session of the legislature. The governor could convene the legislature and adjourn it in case of disagreement between the two houses and was commander-in-chief of the militia. He could grant pardons and reprieves. His veto could be overruled by two-thirds of both houses.
The judiciary consisted of a Supreme Court, district courts, and such inferior courts as the legislature might establish, the judges of the higher courts being appointed by the governor for six-year terms. The Supreme Court was made up of three judges, any two of whom constituted a quorum. Supreme and district judges could be removed by the governor on address of two-thirds of both houses of the legislature for any cause that was not sufficient ground for impeachment. A district attorney for each district was elected by joint vote of both houses, to serve for two years. County officers were elected for two years by popular vote. The sheriff was not eligible to serve more than four years of any six. Trial by jury was extended to cases in equity as well as in civil and criminal law.
The longest article of the constitution was Article VII, on General Provisions. Most of its thirty-seven sections were limitations on the legislature. One section forbade the holding of office by any citizen who had ever participated in a duel. Bank corporations were prohibited, and the legislature was forbidden to authorize individuals to issue bills, checks, promissory notes, or other paper to circulate as money. The state debt was limited to $100,000, except in case of war, insurrection, or invasion. Equal and uniform taxation was required; income and occupation taxes might be levied; each family was to be allowed an exemption of $250 on household goods. A noteworthy section made exempt from forced sale any family homestead, not to exceed 200 acres of land or city property not exceeding $2,000 in value; the owner, if a married man, could not sell or trade the homestead except with the consent of his wife. Section XIX recognized the separate ownership by married women of all real and personal property owned before marriage or acquired afterwards by gift or inheritance. Texas was a pioneer state in providing for homestead protection and for recognition of community property.
In the article on education the legislature was directed to make suitable provision for support and maintenance of public schools, and 10 percent of the revenue from taxation was set aside as a Permanent School Fund. School lands were not to be sold for twenty years but could be leased, the income from the leases becoming a part of the Available School Fund. Land provisions of the Constitution of 1836 were reaffirmed, and the General Land Office was continued in operation.
By a two-thirds vote of each house an amendment to the constitution could be proposed. If a majority of the voters approved the amendment and two-thirds of both houses of the next legislature ratified it, the measure became a part of the constitution. Only one amendment was ever made to the Constitution of 1845. It was approved on January 16, 1850, and provided for the election of state officials formerly appointed by the governor or by the legislature.
The Constitution of 1845 has been the most popular of all Texas constitutions. Its straightforward, simple form prompted many national politicians, including Daniel Webster, to remark that the Texas constitution was the best of all of the state constitutions. Though some men, including Webster, argued against the annexation of Texas, the constitution was accepted by the United States on December 29, 1845.
|
|
1. ## Parallel Vectors
Hi,
What does it exactly mean when vectors are parallel? There must be something that I do not understand because in the following question:
"The line L is parallel to the z - axis . The point P has a position vector (8,1,0) and lies on L .
Write down the equation of L in the for r = a +tb."
I understand that a is the position vector and b is the direction, however no direction is directly stated. Apparently the answer for b is (0,0,1). So if the vector is parallel to the z axis why is the last number 1?
I understand that in 3D vectors, it is in the form (x,y,z).
$<0,0,1>,~<0,0,-\pi>,~<0,0,12>$ are three parallel vectors.
It is custom to use these three as a basis for $\mathcal{R}^3$: $<1,0,0>,~<0,1,0>,~<0,0,1>$
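Putting the two pieces together, the requested equation of the line would be (a sketch based on the answer above; any nonzero multiple of $<0,0,1>$ would serve equally well as $b$):

$r = <8,1,0> + t<0,0,1>$

The last component is $1$ not because the line moves a distance of 1, but because $<0,0,1>$ is the conventional unit vector along the z-axis: a direction parallel to the z-axis has zero x- and y-components and any nonzero z-component, and taking the unit vector is simply the standard choice.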
|
|
The two ends of a spring are displaced along the length of the spring.
Question:
The two ends of a spring are displaced along the length of the spring. All displacements have equal magnitudes. In which case or cases the tension or compression in the spring will have a maximum magnitude?
(a) the right end is displaced towards right and the left end towards left
(b) both ends are displaced towards right
(c) both ends are displaced towards left
(d) the right end is displaced towards left and the left end towards right.
Solution:
Tension has maximum magnitude when the right end is displaced towards the right and the left end towards the left; compression has maximum magnitude when the right end is displaced towards the left and the left end towards the right. In cases (b) and (c) both ends move together by equal amounts, so the spring's length, and hence its tension, does not change. (a), (d)
|
|
05 April 2016
I came across an old post by Eliezer Yudkowsky on Less Wrong entitled Probability is in the Mind. It immediately struck me as, well, more wrong, and I want to explain why.
A brief disclaimer: that article was published 8 years ago and he may have changed his thinking since then, but I’m purely out to address the idea, not the person. So here goes.
First I’ll lay out his argument, which he does through a series of examples.
You have a coin.
The coin is biased.
You don’t know which way it’s biased or how much it’s biased. Someone just told you, “The coin is biased” and that’s all they said. This is all the information you have, and the only information you have.
You draw the coin forth, flip it, and slap it down.
Now—before you remove your hand and look at the result—are you willing to say that you assign a 0.5 probability to the coin having come up heads?
The frequentist says, “No. Saying ‘probability 0.5’ means that the coin has an inherent propensity to come up heads as often as tails, so that if we flipped the coin infinitely many times, the ratio of heads to tails would approach 1:1. But we know that the coin is biased, so it can have any probability of coming up heads except 0.5.”
OK lemme stop right here. I think this mischaracterizes the frequentist’s take on this situation, in two ways. First, it conflates two things: the probability that the coin comes up heads, and our estimate of the bias of the coin. Now sometimes those are the same thing! But in this case they are not. They are the same thing in the following procedure:
1. flip a coin with known bias $p$
2. observe the outcome (heads or tails)
In code this might be:
In this case you should say that the probability of observing a head is $p$:
scala> flip(0.3).pr(_ == H)
res0: Double = 0.3023
(Note that this probability library estimates probabilities by simulating thousands of trials and reporting the proportion of the trials that matched the desired outcome. Which is all just to say that the probabilities it reports are not exact.)
However, the example in the article goes like this:
1. flip a coin with an unknown bias
2. observe the outcome
You might code this up as follows:
reasonably interpreting “unknown bias” as “a bias drawn at random from a uniform distribution on [0, 1].” Yes, this is a prior, and yes, frequentists do make use of priors.
In this case you can assign a probability of 0.5 to the outcome of flip being heads:
scala> flip.pr(_ == H)
res1: Double = 0.4983
while saying nothing about the bias p.
So if the question is, “what is the probability that this procedure results in the coin coming up heads?” the frequentist will surely say 50%. However, if you ask the different question, “what is the bias of the coin?” the frequentist will say “anything but 50%” without fear of contradicting herself.
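The blog's Scala probability library is not reproduced here, so as an illustration only, the two procedures can be simulated in plain Python (the `flip` function below is a stand-in, not the library's API):

```python
import random

def flip(p=None, trials=100_000):
    """Estimate P(heads) by simulating the procedure many times.
    p=None means 'unknown bias': a fresh bias is drawn uniformly
    from [0, 1] on every trial, matching the second procedure."""
    heads = 0
    for _ in range(trials):
        bias = random.random() if p is None else p
        heads += random.random() < bias
    return heads / trials

print(flip(0.3))   # known bias: close to 0.3
print(flip())      # unknown bias: close to 0.5, saying nothing about p
```

The distinction from the text shows up directly: the first call answers "how often does this procedure yield heads?" for a fixed bias, the second for a bias drawn fresh on every trial.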
## Calculating posteriors is also a procedure
A related question is, “what is the bias of the coin after observing one heads?” You can answer this question in a frequentist way, using a repeated procedure. Here it is:
1. choose a bias $p$ uniformly in [0, 1]
2. flip a coin with bias $p$
3. discard this trial unless the coin comes up heads
4. observe $p$
Or in code:
scala> posterior.bucketedHist(0, 1, 10, roundDown = true)
[0.0, 0.1) 1.06% #
[0.1, 0.2) 2.92% ##
[0.2, 0.3) 5.07% #####
[0.3, 0.4) 6.83% ######
[0.4, 0.5) 8.95% ########
[0.5, 0.6) 10.90% ##########
[0.6, 0.7) 13.08% #############
[0.7, 0.8) 15.11% ###############
[0.8, 0.9) 17.34% #################
[0.9, 1.0) 18.74% ##################
We end up with the exact same answer Bayesians do, but we don’t have to make any reference to beliefs or minds. We just incorporate our observations into the procedure, simulate it thousands of times, and see what falls out.
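The four-step procedure can likewise be simulated; again a plain-Python sketch rather than the blog's Scala library. Discarding the tails trials is exactly the conditioning step:

```python
import random

def posterior_samples(trials=200_000):
    """Simulate: draw p uniformly, flip once with bias p,
    keep p only for trials where the flip came up heads."""
    kept = []
    for _ in range(trials):
        p = random.random()             # step 1: uniform bias
        if random.random() < p:         # steps 2-3: flip; keep only heads
            kept.append(p)              # step 4: observe p
    return kept

samples = posterior_samples()
# The kept biases concentrate near 1, just like the histogram above
# (this is the Bayesian posterior Beta(2, 1), with mean 2/3):
print(sum(samples) / len(samples))
```

Large biases are kept more often than small ones, which is why the histogram above climbs steadily toward 1.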
## Probability is not in the coin
The frequentist says, “No. Saying ‘probability 0.5’ means that the coin has an inherent propensity to come up heads as often as tails, so that if we flipped the coin infinitely many times, the ratio of heads to tails would approach 1:1.”
The second thing I want to address in this statement, which I think gets the heart of the misunderstanding, is a confusion over what exactly gets repeated infinitely many times. He seems to think it’s this:
1. you are given a coin with an unknown bias
2. repeat infinitely many times:
1. flip the coin and observe the outcome
But actually it’s this:
1. repeat infinitely many times:
1. you are given a coin with an unknown bias
2. flip the coin and observe the outcome
When frequentists say “probability 50%,” they’re not talking about an inherent property of a coin, they’re talking about an inherent property of a procedure involving a coin. Different procedures with the same coin will produce different outcomes.
Eliezer contrasts this with the Bayesian’s take on the situation:
The Bayesian says, “Uncertainty exists in the map, not in the territory. In the real world, the coin has either come up heads, or come up tails. Any talk of ‘probability’ must refer to the information that I have about the coin—my state of partial ignorance and partial knowledge—not just the coin itself.”
This is an unwarranted conclusion. Just because probability doesn’t live in the coin itself doesn’t mean it must live in the mind. There are other places it could live.
## Incorporating new knowledge
A bit later on, he says:
To make the coinflip experiment repeatable, as frequentists are wont to demand, we could build an automated coinflipper, and verify that the results were 50% heads and 50% tails. But maybe a robot with extra-sensitive eyes and a good grasp of physics, watching the autoflipper prepare to flip, could predict the coin’s fall in advance—not with certainty, but with 90% accuracy. Then what would the real probability be?
There is no “real probability”. The robot has one state of partial information. You have a different state of partial information. The coin itself has no mind, and doesn’t assign a probability to anything; it just flips into the air, rotates a few times, bounces off some air molecules, and lands either heads or tails.
Again, this is a question of what we consider to be the procedure. If the procedure is
1. put the coin in the automated coin flipper
2. flip the coin and observe the outcome
then you would have to say that the probability is 50%. But if instead you use the different procedure:
1. put the coin in the automated coin flipper
2. allow the robot to observe the coin and make a prediction
3. discard this trial unless the robot predicts heads
4. flip the coin and observe the outcome
then you would say that the procedure will produce an outcome of heads 90% of the time. Yes, you have different information in the second procedure (which you could equivalently view as discarding some trials), but gathering that information (discarding trials) is part of the procedure.
This idea gets distilled further through information-revealing games. A simple version (that I found in the comment section) goes as follows. You take 3 kings and an ace, shuffle them and place them face down in a row. Then you ask, what is the probability that the first card is the ace? Easy, 25%. But now you turn over the last card to reveal a king. Now what is the probability that the first card is an ace? It’s 33%. How can this be?? Nothing changed about the cards, so why should the probability change? The probability must not be in the cards.
But this situation does not confuse frequentists, thinking about probabilities in terms of procedures. Consider the following procedure:
1. shuffle the 4 cards and put them down in a row
2. turn over the last card
3. discard this trial unless the last card is a king
4. observe the first card
Under this procedure, the first card is observed to be an ace 33% of the time. No contradiction.
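That procedure is easy to check by simulation; here is a plain-Python sketch standing in for the blog's Scala library:

```python
import random

def first_is_ace(trials=100_000):
    """Shuffle 3 kings and an ace; discard trials where the last
    card is not a king; report how often the first card is the ace."""
    hits = kept = 0
    for _ in range(trials):
        cards = ['A', 'K', 'K', 'K']
        random.shuffle(cards)
        if cards[-1] != 'K':        # step 3: discard unless last card is a king
            continue
        kept += 1
        hits += cards[0] == 'A'     # step 4: observe the first card
    return hits / kept

print(first_is_ace())   # close to 1/3, not 1/4
```

Dropping the discard step (keeping every trial) would give 1/4 instead: the change in probability comes entirely from the change of procedure.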
As for the paradox, there isn’t one. The appearance of paradox comes from thinking that the probabilities must be properties of the cards themselves. The ace I’m holding has to be either hearts or spades; but that doesn’t mean that your knowledge about my cards must be the same as if you knew I was holding hearts, or knew I was holding spades.
It may help to think of Bayes’s Theorem:
$P(H \mid E) = P(E \mid H) \, P(H) \, / \, P(E)$
That last term, where you divide by $P(E)$, is the part where you throw out all the possibilities that have been eliminated, and renormalize your probabilities over what remains.
Frequentists would agree with this! But they might reword it slightly:
As for the paradox, there isn’t one. The appearance of paradox comes from thinking that the probabilities must be properties of the cards themselves. The ace I’m holding has to be either hearts or spades; but that doesn’t mean that the trials you keep when I tell you I’m holding hearts are the same trials you keep in a different procedure where I tell you I’m holding spades.
It may help to think of Bayes’s Theorem:
$P(H \mid E) = P(E \mid H) \, P(H) \, / \, P(E)$
That last term, where you divide by $P(E)$, is the part where you throw out all the trials that don’t match the stated outcome in the procedure.
Information gathering in the Bayesian regime is equivalent to discarding trials in the frequentist regime.
Anyway, the thrust of his argument rests on two logical fallacies:
1. It’s foolish to believe, as frequentists do, that probability lives in the coin, and
2. therefore probability is in the mind.
I hope I have pointed out that #1 is a straw man and #2 is a false dilemma, to wit:
1. Frequentists don’t believe that, and
2. he overlooks another place probability could live: in the process.
Both frequentist and Bayesian modes of thinking will lead you to the same numeric answers (well, most of the time). But personally I like the frequentist approach because it takes the human observer out of the equation. Why bring minds into it at all?
|
|
Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.
In 2012, Danielle Herbert, Adrian Barnett, Philip Clarke and Nicholas Graves published an article entitled “on the time spent preparing grant proposals: an observational study of Australian researchers“, whose conclusions had been included in Nature under a more explicit title, “Australia’s grant system wastes time”! In this study, they included 3,700 grant applications sent to the National Health and Medical Research Council, and showed that each application represented 37 working days: “Extrapolating this to all 3,727 submitted proposals gives an estimated 550 working years of researchers’ time (95% confidence interval, 513-589)“. And now that I have to write my own funding application, I find that losing 37 days of work is huge. Because it has become the norm! And somehow, that is sad.
Forget about the crazy idea that I would rather, in fact, spend more time doing my research. In fact, the thought I had this morning was that it is rather sad that in the Faculty of Science, mathematicians are asked to spend a considerable amount of time, comparable to that required of physicists or chemists, for often smaller amounts of funding… And I thought it could be easily verified. We start by retrieving the discipline codes
url="http://www.nserc-crsng.gc.ca/NSERC-CRSNG/FundingDecisions-DecisionsFinancement/ResearchGrants-SubventionsDeRecherche/ResultsGSC-ResultatsCSS_eng.asp"
library(XML)
tables=readHTMLTable(url)
GSC=tables[[1]]$V1
GSC=as.character(GSC[-(1:2)])
namesGSC=tables[[1]]$V2
namesGSC=as.character(namesGSC[-(1:2)])
We’re going to need a small function, to remove the $and other symbols that pollute the data (and prevent them from being treated as numbers) library(stringr) Correction = function(x) as.numeric(gsub('[$,]', '', x))
We will now read the 12 pages, and harvest (we will just take the 2017 data, but we could go back a few years before)
grants = function(gsc){
url=paste("http://www.nserc-crsng.gc.ca/NSERC-CRSNG/FundingDecisions-DecisionsFinancement/ResearchGrants-SubventionsDeRecherche/ResultsGSCDetail-ResultatsCSSDetails_eng.asp?Year=2017&GSC=",gsc,sep="")
tables=readHTMLTable(url)
X=as.character(tables[[1]]$"Awarded Amount")
A=as.numeric(Vectorize(Correction)(X))
return(c(median(A),mean(A),as.numeric(quantile(A,(1:99)/100))))
}
M=Vectorize(grants)(GSC[1:12])
The average amounts of individual grants can be compared,
barplot(M[2,])
In mathematics, the average grant amount is $24,400. If we normalize by this quantity, we obtain
barplot(M[2,]/M[2,8])
In other words, the average amount of a (individual) grant in chemistry (to pay for students, conferences, etc.) is twice that in mathematics, 60% higher in physics than in maths…
We can also look at the median values (rather than the averages)
barplot(M[1,])
Here again, it is in mathematics that it is the lowest…
barplot(M[1,]/M[1,8])
in comparable proportions. If we think that the time spent writing should be proportional to the amount allocated, we should spend half as much time in math as in chemistry.
Cumulative distribution functions can also be plotted,
plot(M[3:101,8],(1:99)/100,type="s",xlim=range(M))
lines(M[3:101,5],(1:99)/100,type="s",col="red")
lines(M[3:101,4],(1:99)/100,type="s",col="blue")
with math in black, physics in red, and chemistry in blue. What is surprising is the bottom part: a “bad” researcher in chemistry or physics will earn more than the median researcher in mathematics…
Now that my intuition is confirmed, I have to get back to writing my proposal… and explain to my coauthors that I have to postpone some research projects because, well, you know…
|
|
# Can law be included in the Socratic.org list of topics? This could help out a lot of people who would want their legal inquiries to be answered free of charge.
The questions are asked on the basis of $\text{caveat emptor}$ (“let the buyer beware”); I am really chuffed that I was able to use that Latin tag in a question about law. A given answer may be wrong. Sometimes these are corrected by later posters, but sometimes they stand.
|
|
# gneiss.regression.mixedlm¶
gneiss.regression.mixedlm(formula, table, metadata, groups, **kwargs)[source]
Linear Mixed Effects Models applied to balances.
Linear mixed effects (LME) models is a method for estimating parameters in a linear regression model with mixed effects. LME models are commonly used for repeated measures, where multiple samples are collected from a single source. This implementation is focused on performing a multivariate response regression with mixed effects where the response is a matrix of balances (table), the covariates (metadata) are made up of external variables and the samples sources are specified by groups.
T-statistics (tvalues) and p-values (pvalues) can be obtained to evaluate the statistical significance of a covariate for a given balance. Predictions from the resulting model can be made using predict, and these results can be interpreted as either balances or proportions.
Parameters:

- formula (str) – Formula representing the statistical equation to be evaluated. These strings are similar to how equations are handled in R. Note that the dependent variable in this string should not be specified, since this method will be run on each of the individual balances. See patsy [1] for more details.
- table (pd.DataFrame) – Contingency table where samples correspond to rows and balances correspond to columns.
- metadata (pd.DataFrame) – Metadata table that contains information about the samples contained in the table object. Samples correspond to rows and covariates correspond to columns.
- groups (str) – Column name in metadata that specifies the groups. These groups are often associated with individuals repeatedly sampled, typically longitudinally.
- **kwargs (dict) – Other arguments accepted into statsmodels.regression.linear_model.MixedLM.

Returns: LMEModel – Container object that holds information about the overall fit. This includes information about coefficients, pvalues and residuals from the resulting regression.
References
Examples
>>> import pandas as pd
>>> import numpy as np
>>> from gneiss.regression import mixedlm
Here, we will define a table of balances with features Y1, Y2 across 12 samples.
>>> table = pd.DataFrame({
... 'u1': [ 1.00000053, 6.09924644],
... 'u2': [ 0.99999843, 7.0000045 ],
... 'u3': [ 1.09999884, 8.08474053],
... 'x1': [ 1.09999758, 1.10000349],
... 'x2': [ 0.99999902, 2.00000027],
... 'x3': [ 1.09999862, 2.99998318],
... 'y1': [ 1.00000084, 2.10001257],
... 'y2': [ 0.9999991 , 3.09998418],
... 'y3': [ 0.99999899, 3.9999742 ],
... 'z1': [ 1.10000124, 5.0001796 ],
... 'z2': [ 1.00000053, 6.09924644],
... 'z3': [ 1.10000173, 6.99693644]},
... index=['Y1', 'Y2']).T
Now we are going to define some of the external variables to test for in the model. Here we will be testing a hypothetical longitudinal study across 3 time points, with 4 patients x, y, z and u, where x and y were given treatment 1 and z and u were given treatment 2.
>>> metadata = pd.DataFrame({
... 'patient': [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
... 'treatment': [1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2],
... 'time': [1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3]
... }, index=['x1', 'x2', 'x3', 'y1', 'y2', 'y3',
... 'z1', 'z2', 'z3', 'u1', 'u2', 'u3'])
Now we can run the linear mixed effects model on the balances. Underneath the hood, the proportions will be transformed into balances, so that the linear mixed effects models can be run directly on balances. Since each patient was sampled repeatedly, we’ll specify them separately in the groups. In the linear mixed effects model time and treatment will be simultaneously tested for with respect to the balances.
>>> res = mixedlm('time + treatment', table, metadata,
...               groups='patient')
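As the parameters above describe, mixedlm pairs each sample row of table with the same-named row of metadata. The following sketch (frames abbreviated from the example above; the alignment check itself is our addition, not part of gneiss) shows the shape both inputs must have:

```python
import pandas as pd

# Abbreviated balance table: after .T, samples are rows and
# balances Y1, Y2 are columns (as in the example above).
table = pd.DataFrame({
    'x1': [1.1, 1.1], 'x2': [1.0, 2.0],
    'y1': [1.0, 2.1], 'y2': [1.0, 3.1],
}, index=['Y1', 'Y2']).T

# Matching metadata: one row per sample, covariates as columns.
metadata = pd.DataFrame({
    'patient': [1, 1, 2, 2],
    'time':    [1, 2, 1, 2],
}, index=['x1', 'x2', 'y1', 'y2'])

# mixedlm matches rows of table with rows of metadata by sample ID,
# so every sample needs a metadata row; check before fitting.
missing = set(table.index) - set(metadata.index)
assert not missing, f"samples without metadata: {missing}"
```

A mismatch between the two indices is a common source of errors in this kind of multi-table workflow, so checking it up front is cheap insurance.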
See also: statsmodels.regression.linear_model.MixedLM(), ols()
|
|
## Michigan Tech Publications
#### Title
Bent Vectorial Functions, Codes and Designs
#### Document Type
Article
#### Publication Date
11-1-2019
#### Abstract
Bent functions, or equivalently, Hadamard difference sets in the elementary Abelian group $(\mathrm{GF}(2^{2m}), +)$, have been employed to construct symmetric and quasi-symmetric designs having the symmetric difference property. The main objective of this paper is to use bent vectorial functions for a construction of a two-parameter family of binary linear codes that do not satisfy the conditions of the Assmus-Mattson theorem, but nevertheless hold 2-designs. A new coding-theoretic characterization of bent vectorial functions is presented.
#### Publication Title
IEEE Transactions on Information Theory
|
|
# Basic Programming Concept Using C 2016 – BSc Computer Science Part 1
#### Paper Code: 13506 1506 B.Sc. (Computer Science) (Part 1) Examination, 2016 Paper No. 2.3 BASIC PROGRAMMING CONCEPTS USING C
##### Time: Three Hours] [Maximum Marks: 34
Note: Attempt five questions in all selecting at least one question from each Section.
Section-A
1. (a) What are header files and what are its uses in C programming?
(b) What are variables and in what way are they different from constants?
2. (a) Differentiate between local and global variables.
(b) How do you declare a variable that will hold string values?
3. (a) What is scope of variable? How are variables scoped in C?
(b) Write a C program to compute the sum of first n terms of the series using ‘for’ loop.
$1+3+5+7+9.............$
4. (a) What are the control structures in C? Give an example each.
(b) When should we use pointers in a C program?
Section-B
5. (a) When is a “switch” statement preferable over an “if” statement?
(b) Write a loop statement that will show the following output:
1
1 2
1 2 3
1 2 3 4
1 2 3 4 5
6. (a) What is the difference between functions getch() and getche()?
(b) What are structure types in C?
7. (a) Explain any two dynamic memory allocation functions.
(b) Describe, how arrays can be passed to a user defined function.
8. (a) What is the difference between declaration and definition of a function?
(b) Write a C function isprime(num) that accepts an integer argument and returns 1 if the argument is prime, and 0 otherwise. Write a C program that invokes this function to generate prime numbers in the given range.
Section-C
9. (a) What is a file? Explain how the file open and file close functions are handled in C.
10. What is the output of this C code?
#include <stdio.h>
int main()
{
    int a[5] = {1, 2, 3, 4, 5};
    int i;
    for (i = 0; i < 5; i++)
        if ((char)a[i] == '5')
            printf("%d\n", a[i]);
        else
            printf("FAIL\n");
}
………End………
#### Lokesh Kumar
Being EASTER SCIENCE's founder, Lokesh Kumar wants to share his knowledge and ideas. His motive is "We assist you to choose the best", He believes in different thinking.
|
|
# Quinoa Pasta 1
Alignments to Content Standards: 8.EE.C.8.c
A type of pasta is made of a blend of quinoa and corn. The pasta company is not disclosing the percentage of each ingredient in the blend but we know that the quinoa in the blend contains 16.2% protein, and the corn in the blend contains 3.5% protein. Overall, each 57 gram serving of pasta contains 4 grams of protein. How much quinoa and how much corn is in one serving of the pasta?
## IM Commentary
This task asks students to find the amount of two ingredients in a pasta blend. The task provides all the information necessary to solve the problem by setting up two linear equations in two unknowns. A-REI Quinoa Pasta 2 and 3 are variants of this task which require students to find the information that is given in this problem themselves. A-REI Quinoa Pasta 2 provides the same information, but in the form of nutrition labels, while A-REI Quinoa Pasta 3 is a very open-ended modeling task: it poses the question, but the students have to formulate a plan to solve it. This progression of tasks helps distinguish between 8th grade and high school expectations related to systems of linear equations.
Note that while the computations shown in the solution are carried out using 3 decimal places, the answer should only be reported as whole numbers since that is the precision given in the problem statement.
This task is best used as a group activity where students cooperate to set up the equations to solve the problem.
The Standards for Mathematical Practice focus on the nature of the learning experiences by attending to the thinking processes and habits of mind that students need to develop in order to attain a deep and flexible understanding of mathematics. Certain tasks lend themselves to the demonstration of specific practices by students. The practices that are observable during exploration of a task depend on how instruction unfolds in the classroom. While it is possible that tasks may be connected to several practices, the commentary will spotlight one practice connection in depth. Possible secondary practice connections may be discussed but not in the same degree of detail.
This task helps illustrate Mathematical Practice Standard 2; where mathematically proficient students make sense of quantities and their relationships in problem situations, create coherent representations of the problem given and attend to the meaning of the quantities, not just how to compute them. Students will need to translate the description of the situation into algebraic equations, decontextualizing. Also, they will need to think about what quantities should be represented by variables and how those variables relate to each other. This will require them to “make sense of quantities and their relationships in problem situations.” (MP2) The teacher might direct the students’ discussion by asking questions such as: “What do the numbers used in the task represent?” “What operations will we need to use to solve this task?” It is important that students not only know how to compute the numbers in the tasks, but also understand the meaning of the quantities and are flexible in the use of operations and their properties.
## Solution
We can write a system of equations to solve this problem. If we let q be the amount of quinoa, in grams, in one serving of pasta and c be the amount of corn, in grams, in one serving of pasta we have $q + c = 57$. We also know that 16.2% of quinoa is protein and 3.5% of corn is protein and one serving of pasta contains 4 grams of protein. We can summarize this information in the equation $0.162q + 0.035c = 4$. Therefore, we have the following system of equations:
\begin{align} q + c &= 57 \\ 0.162q + 0.035c &= 4. \end{align}
We can solve this system using the method of substitution or the method of elimination. Using the method of substitution, we solve the first equation for $q$:
$$q = 57 - c.$$
We substitute this for $q$ in the second equation and solve for $c$ :
\begin{align} 0.162q + 0.035c &= 4 \\ 0.162(57 - c) + 0.035c &= 4 \\ 9.234 - 0.162c + 0.035c &= 4 \\ -0.127c &= 4 - 9.234 \\ -0.127c &= -5.234 \\ c &= 41.213. \end{align}
So we have $c=41$ and $q+c=57$, which gives $q=16$. Out of the 57 grams of pasta in one serving, 41 grams are corn and 16 grams are quinoa. In other words, about 72% of the pasta blend is corn and 28% is quinoa.
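The substitution steps above can be replayed in a few lines of code (a sketch with our own variable names, not part of the task itself):

```python
# Solve q + c = 57 and 0.162*q + 0.035*c = 4 by substitution,
# mirroring the algebra in the solution above.
total_grams, protein_grams = 57, 4
p_quinoa, p_corn = 0.162, 0.035

# Substituting q = total - c into the protein equation gives
# c = (protein - p_quinoa * total) / (p_corn - p_quinoa).
c = (protein_grams - p_quinoa * total_grams) / (p_corn - p_quinoa)
q = total_grams - c

print(round(c), round(q))  # reported to whole grams: 41 and 16
```

As the commentary notes, the intermediate value $c \approx 41.213$ carries more decimal places than the data justify, so only the rounded whole-gram answers should be reported.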
|
|
# 4 Property Dualism
## Introduction
The first thing that usually comes to mind when one thinks of dualism is René Descartes’ (1596-1650) substance dualism. However, there is another form of dualism, quite popular nowadays, which is called property dualism, a position which is sometimes associated with non-reductive physicalism.
Cartesian dualism posits two substances, or fundamental kinds of thing: material substance and immaterial thinking substance. These are two entirely different kinds of entities, although they interact with each other. According to property dualism, on the other hand, there is one fundamental kind of thing in the world—material substance—but it has two essentially different kinds of property: physical properties and mental properties. So for instance, a property dualist might claim that a material thing like a brain can have both physical properties (like weight and mass) and mental properties (such as having a particular belief or feeling a shooting pain), and that these two kinds of properties are entirely different in kind. Some philosophers subscribe to property dualism for all mental properties while others defend it only for conscious or “phenomenal” properties such as the feeling of pain or the taste of wine.[1] These latter properties give rise to what is known as the hard problem of consciousness: How do we explain the existence of consciousness in a material world?
Though these are both dualist views, they differ in fundamental ways. Property dualism was proposed as a position that has a number of advantages over substance dualism. One advantage is that, because it does not posit an immaterial mental substance, it is believed to be more scientific than Cartesian dualism and less religiously motivated. A second advantage is that it seems to avoid the problem of mental causation because it posits only one kind of substance; there is no communication between two different kinds of thing. And a third advantage is that, by maintaining the existence of distinctly mental properties, it does justice to our intuitions about the reality of the mind and its difference from the physical world. But to understand all this we need to take a step back.
## Substances and Properties
The notion of a substance has a long history going back to Ancient Greek metaphysics, most prominently to Aristotle, and it has been understood in various ways since then. For present purposes we can say that a substance can be understood as a unified fundamental kind of entity—e.g. a person, or an animal—that can be the bearer of properties. In fact, the etymology of the Latin word substantia is that which lies below, that which exists underneath something else. So, for instance, a zebra can be a substance, which has properties, like a certain color, or a certain number of stripes. But the zebra is independent of its properties; it will continue to exist even if the properties were to change (and, according to some views, even if they ceased to exist altogether).
According to Cartesian dualism there are two kinds of substance: the material substance, which is extended in space and is divisible, and mental substances whose characteristic is thought. So each person is made up of these two substances—matter and mind—that are entirely different in kind and can exist independently of each other. Talking of the mind in terms of substances gives rise to a number of problems (see Chapter 1). To avoid these problems, property dualism argues that mentality should be understood in terms of properties, rather than substances: instead of saying that there are certain kinds of things that are minds, we say that to have a mind is to have certain properties. Properties are characteristics of things; properties are attributed to, and possessed by, substances. So according to property dualism there are different kinds of properties that pertain to the only kind of substance, the material substance: there are physical properties like having a certain color or shape, and there are mental properties like having certain beliefs, desires and perceptions.
Property dualism is contrasted with substance dualism since it posits only one kind of substance, but it is also contrasted with ontological monist views, such as materialism or idealism, according to which everything that exists (including properties) is of one kind. Usually, property dualism is put forward as an alternative to reductive physicalism (the type identity theory) – the view that all properties in the world can, in principle at least, be reduced to, or identified with, physical properties (Chapter 2).
Hilary Putnam’s (1926-2016) multiple realization argument is a main reason why reductive physicalism is rejected by some philosophers, and it provides an argument for property dualism. Although this argument was originally used as an argument for functionalism, since it challenges the identity of mental states with physical states, it was taken up by non-reductive physicalists and property dualists alike. According to the multiple realization argument then, it is implausible to identify a certain kind of mental state, like pain, with a certain type of physical state since mental states might be implemented (“realized”) in creatures (or even non-biological systems) that have a very different physical make up than our own. For instance, an octopus or an alien may very well feel pain but pain might be realized differently in their brains than it is in ours. So it seems that mental states can be “multiply realizable.” This is incompatible with the idea that pain is strictly identical with one physical property, as the identity theory seems to claim. If this is correct, and there is no possibility of reduction of types of mental states to types of physical states, then mental properties and physical properties are distinct, which means that there are two different kinds of properties in the world and, therefore, property dualism is true.
In addition to the multiple realization argument, probably the most famous argument for property dualism is the knowledge argument put forward by Frank Jackson (1982). This argument involves the imaginary example of Mary, a brilliant neuroscientist who was raised in a black and white room. She knows everything there is to know about the physical facts about vision but she has never seen red (or any color for that matter). One day Mary leaves the black and white room and sees a red tomato. Jackson claims that Mary learns something new upon seeing the red tomato—she learns what red looks like. Therefore, there must be more to learn about the world than just physical facts, and there are more properties in the world than just physical properties.
## Kinds of Property Dualism
Property dualism can be divided into two kinds. The first kind of property dualism says that there are two kinds of properties, mental and physical, but mental properties are dependent on physical properties. This dependence is usually described in terms of the relation of supervenience. The basic idea of supervenience is that a property, A, supervenes on another property, B, if there cannot be a difference in A without a difference in B (though there can be differences in B with no change in A, which allows for the multiple realizability of mental properties). So, for example, if the aesthetic properties of a work of art supervene on its physical properties, there cannot be a change in its aesthetic properties unless there is a change in its physical properties. Or, if I feel fine now but have a headache five minutes from now, there must be a physical difference in my brain in these two moments. Another way of putting the idea that mental properties depend on physical properties is to say that if you duplicate all the physical properties of the world, you will automatically duplicate the mental properties as well—they would come “for free.”
This kind of view is sometimes called non-reductive physicalism, and is often considered to be a form of property dualism, since it holds that there are two kinds of properties. Jaegwon Kim is a prominent supporter of the irreducibility of phenomenal properties (though he resists the term “property dualism” and prefers to call his position “something near enough” physicalism [2005]). Kim holds that intentional properties, like having a belief or hoping for something to happen, can be functionally reduced to physical properties.[2] However, this is not so for phenomenal properties (like tasting a particular taste or experiencing a certain kind of afterimage), which supervene on physical properties but cannot be reduced, functionally or otherwise, to physical properties.
According to Kim, there is a difference between intentional and phenomenal properties: Phenomenal (qualitative) mental states cannot be defined functionally, as intentional states can (or can in principle), and therefore cannot be reduced either. Briefly, the reason is that although phenomenal states can be associated with causal tasks, these descriptions do not define or constitute pain. That is, though pain can be associated with the state that is caused by tissue damage, that induces the belief that something is wrong with one’s body, and that results in pain-avoidance behavior, this is not what pain is. Pain is what it feels like to be in pain; it is a subjective feeling. In contrast, intentional states like beliefs and intentions are anchored to observable behaviour, and this feature makes them amenable to functional analysis. For instance, if a population of creatures interacts with its environment in a similar fashion to us (if those creatures interact with one another as we do, produce similar utterances and so forth), then we would naturally ascribe to these creatures beliefs, desires, and other intentional states, precisely because intentional properties are functional properties.
The second kind of property dualism, which is dualism in a more demanding sense, claims that there are two kinds of properties, physical and mental, and that mental properties are something over and above physical properties. This in turn can be understood in at least two ways. First, being “over and above” can mean that mental properties have independent causal powers, and are responsible for effects in the physical world. This is known as “downward causation.” In this sense, a property dualist of this kind must believe that, say, the mental property of having the desire to get a drink is what actually causes you to get up and walk to the fridge, in contrast to some material property of your brain being the cause, like the firing of certain groups of neurons. Second, being something “over and above” must imply the denial of supervenience. In other words, for mental properties to be genuinely independent of physical properties, they must be able to vary independently of their physical bases. So a property dualist who denies supervenience would be committed to the possibility that two people can be in different mental states, e.g., one might be in pain and the other not, while having the same brain states.
Emergentism is a property dualist view in this more demanding sense. Emergentism first appeared as a systematic theory in the second half of the nineteenth century and the beginning of the twentieth century in the work of the so-called “British Emergentists,” J. S. Mill (1806–1873), Samuel Alexander (1859–1938), C. Lloyd Morgan (1852–1936) and C. D. Broad (1887–1971). Since then it has been defended (and opposed) by many philosophers and scientists, some of whom understand it in different ways. Still, we can summarize the position by saying that according to emergentism, when a system reaches a certain level of complexity, entirely new properties emerge that are novel, irreducible to, and something “over and above” the lower level from which they emerged (Vintiadis 2013). For example, when a brain, or a nervous system, becomes complex enough, new mental properties, like sensations, thoughts and desires, emerge from it in addition to its physical properties. So according to emergentism everything that exists is made up of matter but matter can have different kinds of properties, mental and physical, that are genuinely distinct in one or both of the senses described above: that is, either in the sense that mental properties have novel causal powers that are not to be found in physical properties underlying them or in the sense that mental properties do not supervene on physical properties.
Some philosophers have argued for the kind of demanding property dualism that denies supervenience by appealing to the conceivability of philosophical zombies—an argument most famously developed by David Chalmers. Philosophical zombies are beings that are behaviorally and physically just like us but that have no “inner” experience. If such beings are not only conceivable but also possible (as Chalmers argues), then it seems that there can be mental differences without physical differences (1996). If this argument is correct, then phenomenal properties cannot be explained in terms of physical properties and they are really distinct from physical properties.
## Objections to Property Dualism
A main problem for substance dualism was the question of mental causation. Given the view that the mental and the material substance are two discrete kinds of substances the problem that arises is that of their interaction, a problem posed by Princess Elizabeth of Bohemia (1618-1680) in her correspondence with Descartes. How can two different kinds of things have an effect on one another? It seems from what we know from science that physical effects have physical causes. If this is indeed the case, how is it that I can think of my grandmother and cry, or desire a glass of wine and go over to the fridge to pour myself one? How do the mental and the physical interact? The common consensus that substance dualism cannot satisfactorily answer this problem ultimately led many philosophers to the rejection of Cartesian dualism.
In the attempt to preserve the mental while also preserving a foothold in the physical, dualism of properties was introduced. However, the double requirement of the distinctness of physical properties from mental properties and of the dependence of mental properties on physical properties turns out to be a source of problems for property dualism as well.
This can be seen in the problem of causal exclusion that is analyzed below. This problem arises for property dualism and has been put forward by a number of philosophers over the years, most notably by Kim himself who, due to this problem, concludes that phenomenal properties that are irreducibly mental are also merely epiphenomenal, that is, they have no causal effects on physical events (2005).
According to mind-body supervenience, every time a mental property M is instantiated it supervenes on a physical property P.
$\begin{array}{c}M\\\Uparrow\\P\end{array}$
Now suppose M appears to cause another mental property M¹,
$\begin{array}{ccc}M&\rightarrow&M^1\\\Uparrow\\P\end{array}$
the question arises whether the cause of M¹ is indeed M or whether it is M¹’s subjacent base P¹ (since according to supervenience M¹ is instantiated by a physical property P¹).
$\begin{array}{ccc}M&\rightarrow&M^1\\\Uparrow&&\Uparrow\\P&&P^1\end{array}$
At this point we need to introduce two principles held by physicalists: First, the principle of causal closure according to which the physical world is causally closed. This means that every physical effect has a sufficient physical cause that brings it about. Note that this in itself does not exclude non-physical causes since such causes could also be part of the causal history of an effect. What does exclude such non-physical causes is a second principle which denies the overdetermination of events. According to this principle an effect cannot have more than one wholly sufficient cause (it cannot be overdetermined) and so this, along with causal closure, leads to the conclusion that when you trace the causes of an effect, all there are are physical causes.
To return to our example, given the denial of causal overdetermination, either M or P¹ is the cause of M¹—it can’t be both—and so, given the supervenience relation, it seems that M¹ occurs because P¹ occurred. Therefore, it seems that M actually causes M¹ by causing the subjacent P¹ (and also that mental to mental, or same level, causation presupposes mental to physical, or downward, causation).
$\begin{array}{cc}\begin{array}{ccc}M&\rightarrow&M^1\\\Uparrow\\P^{\text{}}\end{array}&\qquad\qquad\begin{array}{ccc}M&\rightarrow&M^1\\\Uparrow&\searrow&\Uparrow\\P&&P^1\end{array}\end{array}$
However, given the principle of causal closure P¹ must have a sufficient physical cause P.
$\begin{array}{ccc}M&&M^1\\\Uparrow&&\Uparrow\\P&\rightarrow&P^1\end{array}$
But given exclusion again, P¹ cannot have two sufficient causes, M and P, and so P is the real cause of P¹ because if M were the real cause, causal closure would be violated again.
So the problem of causal exclusion is that, given supervenience, causal closure and the denial of overdetermination, it is not clear how mental properties can be causally efficacious; mental properties seem to be epiphenomenal, at best. And while epiphenomenalism is compatible with property dualism (since property dualism states that there are two kinds of properties in the world, and epiphenomenalism states that some mental properties are causally inert by-products of physical properties, thus accepting the existence of two properties), its coherence comes at the expense of our common sense intuitions that our mental states affect our physical states and our behavior. It seems then, that, for its critics, as far as mental causation goes, property dualism does not fare much better than substance dualism.
More generally, the question of the causal efficacy of mental properties gives rise to the same kinds of objections that were raised regarding mental causation in substance dualism. For instance, in both cases mental to physical interaction seems to violate the principle of conservation of energy, a principle that is considered to be fundamental to our physical science. That is, the conservation law would be violated if mental to physical causation were possible, since such an interaction would have to introduce energy to the physical world (assuming, that is, that the physical world is causally closed).
It is not in the scope of this discussion to wade into this matter, but it should be noted that this objection is not accepted by everyone; it has been argued that the principle of conservation of energy does not apply universally, for instance by citing examples from general relativity or quantum gravity. Similarly, both the causal closure of the physical and the denial of causal overdetermination have been questioned. Nonetheless, despite these responses, it is fair to say that the question of mental causation still remains one of the major objections to property dualism.
Another objection, this time to some views that are considered property dualist views, can be posed by asking, “In what way is property dualism really dualism?” In our distinction between two kinds of property dualism above, there is a clear sense in which positions of the second kind, like emergentism or views that deny supervenience, are property dualist positions. Since, for such views, mental properties are “something over and above” physical properties; they are distinct from them, irreducible to them and not wholly determined by them. So here we have cases of two genuinely different kinds of properties, and genuine cases of property dualism.
However, it is not equally clear that non-reductive physicalism can properly be called a kind of property dualism. The problem is that if mental properties are not something over and above physical properties then it is hard to see this as a genuine version of property dualism. We can see this if we look more closely into the meaning of physicalism.
Physicalism is the view that what there fundamentally is is what is described by physics. In this sense, mental properties are non-physical properties, since they are not properties to be found in physics. But if non-reductive physicalism claims that there are non-physical properties that are irreducible to physical properties, why should this be considered a case of physicalism? The answer given by the non-reductive physicalist is that this is because such properties are grounded in the physical realm through the relation of supervenience and that, although mental properties might not be identical to physical properties, they need to be at least in principle explainable in terms of physical properties (Horgan 1993). Indeed, non-reductive physicalism is sometimes called token identity theory because it claims that tokens (instances) of mental states can be identified with tokens of physical states, even if types of mental states are not identical with types of physical states. (An analogy: all instances of the property of being beautiful are physical—all beautiful objects are physical objects—but the property of being beautiful is not a physical property). But now the problem is that, as Tim Crane has argued, if physicalism requires that non-physical properties are explicable (even in principle) in physical terms it is not obvious why this position is a property dualist one, since for there to be genuine property dualism, the ontology of physics should not be enough to explain mental properties (2001). So, according to this objection, it seems that the mere denial of the identity of mental and physical properties is not enough for real property dualism, and also that real property dualists must either believe in downward causation or deny supervenience or both.
To sum up the above discussion, we can say that property dualism is a position that attempts to preserve the reality of mental properties while also giving them a foothold in the physical world. The need for this is evident, given the intractable difficulties presented by substance dualism on the one hand, and the problems faced by the identity theory on the other. However, despite the fact that property dualism enjoys renewed popularity these days, it is open to important objections that, for its critics, have not been adequately addressed and which render the position problematic.
## References
Chalmers, David J. 1996. The Conscious Mind: In Search of a Fundamental Theory. Oxford: Oxford University Press.
Crane, Tim. 2001. Elements of Mind. Oxford: Oxford University Press.
Horgan, Terence. 1993. “From Supervenience to Superdupervenience: Meeting the Demands of a Material World.” Mind 102(408): 555-586.
Jackson, Frank. 1982. “Epiphenomenal Qualia.” Philosophical Quarterly 32: 127-136.
Kim, Jaegwon. 1998. Philosophy of Mind. Boulder, CO/Oxford: Westview Press.
Kim, Jaegwon. 2005. Physicalism, Or Something Near Enough. Princeton, NJ: Princeton University Press.
Maslin, K. T. 2007. An Introduction to the Philosophy of Mind. Cambridge: Polity Press.
Vintiadis, Elly. 2013. “Emergence.” In Internet Encyclopedia of Philosophy. https://www.iep.utm.edu/emergenc/
1. Examples of non-conscious mental properties include beliefs that most of the time are not conscious, or our attitudes, drives, and motivations.
2. In functional reduction we identify the functional/causal role that the phenomenon we are interested in plays and then reduce that role to a physical (token) state that realizes it. To use an example given by Kim in Physicalism, Or Something Near Enough, a gene is defined functionally as the mechanism that encodes and transmits genetic information. That is what a gene does. What “realizes” the role of the gene, however, are DNA molecules; genes are functionally reduced to DNA molecules. So a functional reduction identifies a functional/causal role with a physical state that realizes it (makes it happen, so to speak) and offers an explanation of how the physical state realizes the functional state.
|
|
Budgeted Income Statement
Seattle Cat is the wholesale distributor of a small recreational catamaran sailboat. Management has prepared the following summary data to use in its annual budgeting process:
Budgeted unit sales: 380
Selling price per unit: $1,850
Cost per unit: $1,425
Variable selling and administrative expenses (per unit): $85
Fixed selling and administrative expenses (per year): $105,000
Interest expense for the year: $11,000
Required:
Prepare the company’s budgeted income statement using an absorption income statement format as shown in Schedule 9.
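The figures can be sanity-checked with a short script — a sketch of the arithmetic only, not the required Schedule 9 layout, and the variable names are mine:

```javascript
// Sanity check of the budget arithmetic (variable names are mine).
var units = 380;
var sales = units * 1850;                                 // 703,000
var costOfGoodsSold = units * 1425;                       // 541,500
var grossMargin = sales - costOfGoodsSold;                // 161,500
var sellingAndAdmin = units * 85 + 105000;                // 32,300 variable + 105,000 fixed
var incomeFromOperations = grossMargin - sellingAndAdmin; // 24,200
var netIncome = incomeFromOperations - 11000;             // less interest expense
console.log(netIncome); // 13200
```

The absorption format groups cost of goods sold above the gross margin line and all selling and administrative expenses below it, which is what the ordering above mirrors.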
|
|
Amplitude of the induced voltage across the coil
lee.perrin
Joined Mar 30, 2010
1
If anyone can advise that would be great.
1. The plane of a 5 turn coil of 5mm² cross sectional area is rotating at 1200 r.p.m. in a magnetic field of 10mT.
Q. Calculate the amplitude of the induced voltage across the coil
Data:
5 turns
c = 1200 rpm
A = 5cm²
B = 10 mT
2. φ = N.B.A φ = 5×10×10^-3 ×5 × 10^-2
φ = 2.5 ×10^-3
V = dφ/dt or N.A. dB/dt
The equation thought to have been used is the following, but the length is not visible to me.
V/L = c x B c being the speed
B field
L length of wire
a: V(t) = -Nd(BA)/dt = -NBdA/dt = NBωAsin(ωt)
V(t) =(5)(10x10^-3)(125.66370599999999)(5)sin(ωt) = 31.415 sin(ωt) volts.
Or
b: And using the following, saying t = 60 sec
Then φ = N.B.A φ = 5×10×10^-3 ×5 × 10^-2
φ = 2.5 ×10^-3
And V = dφ/dt or N.A. dB/dt = 4.17x10^-3 V
My concern here is that 1200rpm was not used.
t_n_k
Joined Mar 6, 2009
5,455
For simplicity I would assume a rectangular cross section for the coil.
Suppose the rectangular side length orthogonally cutting the field is L and the top & bottom length is D. The top & bottom sides are assumed not to be cutting the field and produce no emf.
The emf-producing side of length L sweeps through a circular path length of π*D at 1200 rpm or 20 rotations per second. The side L then cuts the field at a maximum velocity of
v = π*D*20 m/sec
The emf per side is then of the form
e=Emax*sin(ωt)
AND the total emf = 2*e
You know the relationship between L, D and the given coil cross sectional area [CSA].
You should be able to find Emax & ω from the derived rotational velocity v and the other information supplied. Emax will be in terms of L & D which you would then relate to the coil CSA.
Remember:
1. The general relationship for induced emf is E=B*L*v for a conductor of length L cutting a field of B Tesla at constant velocity v meters/sec.
2. In this case the output is a sinusoid since either side of length L is cutting the field at a sinusoidally varying rate throughout the total cycle. But you do know the maximum rate (or velocity) at which the field is being cut.
NB - There are 5 turns per side - not just one!
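As a numeric cross-check (my own sketch, not part of the thread): for a coil rotating in a uniform field the amplitude works out to Emax = N·B·A·ω. Since the problem states the area as 5 mm² in one place and 5 cm² in another, both are evaluated:

```javascript
// Peak emf Emax = N * B * A * omega for a sinusoidally rotating coil.
var N = 5;                               // turns
var B = 10e-3;                           // 10 mT, in tesla
var omega = (1200 / 60) * 2 * Math.PI;   // 1200 rpm -> about 125.66 rad/s
function emax(area) { return N * B * area * omega; }
console.log(emax(5e-6).toExponential(2)); // A = 5 mm^2 -> 3.14e-5 V
console.log(emax(5e-4).toExponential(2)); // A = 5 cm^2 -> 3.14e-3 V
```

Either way, the original poster's attempt (a) overstates the amplitude because it plugs in the area without converting to m².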
Last edited:
|
|
Brian Bi
## Section 2.11. Tensor products
Exercise 2.11.2 This problem was skipped because it's too tedious.
Problem 2.11.3
1. A linear map $$f : V \otimes W \to U$$ can be associated to the bilinear map $$g(v, w) = f(v \otimes w)$$. In the other direction, a bilinear map $$g : V \times W \to U$$ can be associated to the linear map defined so that $$f(v \otimes w) = g(v, w)$$, extended by linearity to all of $$V \otimes W$$. The defining properties of the tensor product space mimic the conditions in the definition of a bilinear map, guaranteeing that this bijection is well-defined.
2. By the definition of the tensor product, the $$\{v_i \otimes w_j\}$$ clearly span $$V \otimes W$$. To see that they are linearly independent, we use the result from part (a). If $$\sum_{i, j} c_{ij} v_i \otimes w_j = 0$$, then for each pair $$(i, j)$$, let $$g_{ij}$$ be the bilinear map such that $$g(v_k, w_l) = 1$$ if $$k = i$$ and $$l = j$$, and 0 otherwise. Let $$f_{ij} : V \otimes W \to k$$ be the corresponding linear map from part (a). Then $$f_{ij}(0) = c_{ij}$$, so $$c_{ij} = 0$$. This is true for all $$(i, j)$$, so the $$\{v_i \otimes w_j\}$$ indeed form a basis.
3. For pure tensors $$t = \varphi \otimes w$$, the corresponding homomorphism is defined by $$f(v) = \varphi(v) w$$. This is bilinear in $$\varphi$$ and $$w$$, so it can be uniquely extended to all $$t \in V^* \otimes W$$ by linearity. Injectivity follows after choosing a basis for $$V$$. If $$t = \sum_i \varphi_i \otimes w_i$$ where $$\varphi_i$$ is dual to the basis vector $$v_i$$, then $$f(v) = \sum_i \varphi_i(v) w_i$$, and if the latter vanishes identically, then $$0 = f(v_i) = w_i$$ for all $$i$$, hence $$t = 0$$. Surjectivity also requires us to use the assumption that $$V$$ is finite-dimensional; it implies that the $$\varphi_i$$ span $$V^*$$, so that every $$f \in \Hom(V, W)$$ can be written as $$f = \sum_i \varphi_i f(v_i)$$, and hence corresponds to the tensor $$t = \sum_i \varphi_i \otimes f(v_i)$$.
4. A basis $$B$$ for $$S^n V$$ is given by all tensors of the form $$v_{i_1} \otimes v_{i_2} \otimes \ldots \otimes v_{i_n}$$ where $$i_1 \leq i_2 \leq \ldots \leq i_n$$. These span $$S^n V$$ because any pure tensor in $$V^{\otimes n}$$ differs from one of these basis tensors by a sequence of transpositions. To prove linear independence, we make an observation about the structure of the subspace $$S$$ spanned by the $$T - s(T)$$. Namely, if $$T = v \otimes w \otimes \ldots$$ and the transposition swaps $$v$$ and $$w$$, then by expanding $$v$$, $$w$$, and the remaining factors in the basis $$\{v_i\}$$, we can write $$T - s(T) = \sum_k t_k - s(t_k)$$, where each $$t_k$$ is a basis tensor (the product of basis vectors). The general element in $$S$$ can then be written in the form $$\sum_m t_m - s_m(t_m)$$, where each $$t_m$$ is a basis tensor and $$s_m$$ is some transposition, since the above argument applies for any transposition. Such a linear combination has the property that for each set of indices $$\{i_1, \ldots, i_n\}$$, we collect together terms of the form $$v_{i_{\sigma(1)}} \otimes \ldots \otimes v_{i_{\sigma(n)}}$$, where $$\sigma$$ is any permutation, the sum of the coefficients of all such terms is zero. A linear combination of the elements in $$B$$ cannot have this property since each possible set of indices appears only once, unless all coefficients are zero; hence $$B$$ is indeed a basis. The dimension of $$S^n V$$ is the size of $$B$$, that is, $$\binom{m+n-1}{n}$$ (by a stars and bars argument or similar).
For the $$\wedge^n V$$ case, observe that the two tensors $$t_1 = v \otimes w \otimes \ldots$$ and $$t_2 = w \otimes v \otimes \ldots$$ are additive inverses in $$\wedge^n V$$ since their sum equals its own transposition. It follows that a tensor of the form $$t = v_{i_1} \otimes v_{i_2} \otimes \ldots \otimes v_{i_n}$$ vanishes in $$\wedge^n V$$ if any of the $$i$$'s are equal, and otherwise it can be rearranged so that the $$i$$'s are strictly increasing, at the cost of a possible minus sign. Therefore the set of tensors $$v_{i_1} \otimes \ldots \otimes v_{i_n}$$ with $$i_1 < \ldots < i_n$$ spans $$\wedge^n V$$. Linear independence is proven by a similar argument to that used for $$S^n V$$: if $$S$$ is the subspace spanned by all pure tensors $$T$$ with $$T = s(T)$$ for some transposition $$s$$, then by writing out the factors of $$T$$ in terms of the basis $$\{v_i\}$$, we can write the general term of $$S$$ in the form $$\sum_m t_m + s_m(t_m)$$ where $$t_m$$ is a basis tensor and $$s_m$$ is a transposition. This then has the property that when we collect terms with some subset of distinct indices $$i_1, \ldots, i_n$$, the alternating sum of their coefficients vanishes (where a coefficient is subtracted rather than added if the basis tensor has an odd permutation of the basis vector indices). For a linear combination of the tensors in the set we claim as basis, this cannot be unless all coefficients are zero, since each possible subset of indices appears in only one basis tensor. So our set is indeed a basis. Its size is $$\binom{m}{n}$$, the number of subsets of size $$n$$ that can be drawn from a set of size $$m$$.
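The two dimension counts can be checked directly in a small hypothetical case ($$m = 4$$, $$n = 2$$): enumerating weakly increasing index pairs gives $$\binom{m+n-1}{n} = \binom{5}{2} = 10$$, and strictly increasing pairs give $$\binom{m}{n} = \binom{4}{2} = 6$$.

```javascript
// Count basis tensors for m = dim V = 4, n = 2 (a hypothetical small case).
var m = 4, symCount = 0, extCount = 0;
for (var i = 0; i < m; i++) {
  for (var j = i; j < m; j++) {
    symCount++;            // weakly increasing pairs: basis of S^2 V
    if (j > i) extCount++; // strictly increasing pairs: basis of Λ^2 V
  }
}
console.log(symCount, extCount); // 10 6, i.e. C(5,2) and C(4,2)
```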
5. For this problem, we assume that transposition is defined for all tensors, not just pure tensors, by writing the tensor as a sum of pure tensors and taking the transposition of each term.
For each pure tensor $$t = w_1 \otimes w_2 \otimes \ldots \otimes w_n$$ (where the $$w_i$$ are not necessarily basis vectors), we can identify its projection down to $$S^n V$$ with its symmetrization, \begin{equation*} \mathrm{Sym}(t) = \frac{1}{n!} \sum_\sigma w_{\sigma(1)} \otimes w_{\sigma(2)} \otimes \ldots \otimes w_{\sigma(n)} \end{equation*} where $$\sigma$$ ranges over all permutations of $$\{1, \ldots, n\}$$. This satisfies $$\mathrm{Sym}(t) = s(\mathrm{Sym}(t))$$ for all $$s$$, and we extend it by linearity to all of $$V^{\otimes n}$$. This map is well-defined on $$S^n V$$ since two tensors differing only by transpositions are mapped to the same symmetrized tensor. To go the other direction, a given $$t \in T$$ can simply be identified with its projection down to $$S^n V$$; it is easy to see that this is the inverse map.
For the exterior power, for each $$t = \otimes_i w_i$$, we identify its projection down to $$\wedge^n V$$ with its antisymmetrization, \begin{equation*} \mathrm{Anti}(t) = \frac{1}{n!} \sum_\sigma \sgn(\sigma) w_{\sigma(1)} \otimes w_{\sigma(2)} \otimes \ldots \otimes w_{\sigma(n)} \end{equation*} This has the desired property that $$\mathrm{Anti}(t) = -s(\mathrm{Anti}(t))$$ for all $$s$$, and again we can extend it by linearity to all of $$V^{\otimes n}$$, and it is well-defined on $$\wedge^n V$$ since it annihilates any $$t$$ such that $$t = s(t)$$ for some transposition $$s$$. Again, in the other direction, we simply identify $$t \in T$$ with its projection down to $$\wedge^n V$$.
In both cases, the fact that $$k$$ has characteristic zero is needed so that division by $$n!$$ is always well-defined. We might choose to define Sym and Anti without the factor of $$1/n!$$, but this does not solve the problem since if the characteristic divides $$n!$$, some nonzero elements of the {symmetric, exterior} power may be mapped to zero.
6. Using the eigenbasis $$\{v_i\}$$ (with $$A v_i = \lambda_i v_i$$), each basis tensor $$v_{i_1} \otimes \ldots \otimes v_{i_n}$$ is mapped to $$\lambda_{i_1} \cdot \ldots \cdot \lambda_{i_n}$$ times itself, so $$\tr(S^n A)$$ is the sum of all $$\lambda_{i_1} \cdot \ldots \cdot \lambda_{i_n}$$ where $$1 \le i_1 \le \ldots \le i_n \le N$$, and likewise $$\tr(\wedge^n A)$$ is the sum of all $$\lambda_{i_1} \cdot \ldots \cdot \lambda_{i_n}$$ where $$1 \le i_1 < \ldots < i_n \le N$$, that is, the $$i_1, \ldots, i_n$$ range over all subsets of $$\{1, \ldots, N\}$$.
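These trace formulas can be spot-checked numerically for $$n = 2$$ (the eigenvalues below are hypothetical): summing $$\lambda_i \lambda_j$$ over $$i \le j$$ and over $$i < j$$ and adding the two results gives $$(\tr A)^2$$, the sum over all ordered pairs.

```javascript
// Spot check of the n = 2 trace formulas with hypothetical eigenvalues.
var eig = [2, 3, 5]; // eigenvalues of a diagonalizable A
var sym = 0, ext = 0;
for (var i = 0; i < eig.length; i++) {
  for (var j = i; j < eig.length; j++) {
    sym += eig[i] * eig[j];            // i <= j: contributes to tr(S^2 A)
    if (j > i) ext += eig[i] * eig[j]; // i <  j: contributes to tr(Λ^2 A)
  }
}
var trA = eig.reduce(function (a, b) { return a + b; }, 0);
console.log(sym, ext, trA * trA); // 69 31 100, and 69 + 31 = 100
```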
7. When $$n = N$$, $$\tr(\Lambda^N A)$$ is therefore the single term $$\lambda_1 \cdot \ldots \cdot \lambda_N$$, which is $$\det A$$. Also $$\Lambda^N V$$ is one-dimensional, so $$\Lambda^N A = (\det A) I$$.
It then follows that \begin{equation*} \det(AB)I = \Lambda^N(AB) = \Lambda^N A \circ \Lambda^N B = (\det A)I \circ (\det B)I = (\det A \det B)I \end{equation*} so $$\det(AB) = \det A \det B$$.
Exercise 2.11.5 $$A \otimes_K L$$ can be given the structure of an algebra over $$L$$ by defining:
1. $$l'(a \otimes_K l) = a \otimes_K (l' l)$$
2. $$(a_1 \otimes_K l_1)(a_2 \otimes_K l_2) = (a_1 a_2) \otimes_K (l_1 l_2)$$
and extending by linearity to all of $$A \otimes_K L$$. For (1) this is well-defined since the defining relations of the tensor product are annihilated, for example, for all $$a \in A, k \in K, l \in L$$: $(ka) \otimes_K l - k(a \otimes_K l) \mapsto (ka) \otimes_K (l'l) - k(a \otimes_K (l'l))$ where the RHS also vanishes by the defining relations of $$\otimes_K$$. A similar result holds for the other defining relations. For (2) well-definedness follows by a similar argument. If we take one of the defining relations for, say, the left operand, and use a pure tensor $$a \otimes_K l_2$$ as the right operand: $(a_1 + a_2) \otimes_K l_1 - a_1 \otimes_K l_1 - a_2 \otimes_K l_1 \mapsto ((a_1 + a_2)a) \otimes_K (l_1 l_2) - (a_1a) \otimes_K (l_1 l_2) - (a_2a) \otimes_K (l_1 l_2)$ which is again a defining relation, and vanishes.
We can easily verify that:
• $$l_1 (l_2 t) = (l_1 l_2) t$$
• $$l (t_1 t_2) = (l t_1) t_2$$
• $$(l_1 + l_2) t = l_1 t + l_2 t$$
• $$l(t_1 + t_2) = lt_1 + lt_2$$
where $$l, l_1, l_2 \in L$$, $$t, t_1, t_2 \in A \otimes_K L$$.
For the second part of the problem, $$V \otimes_K L$$ can be given the structure of a module over $$A \otimes_K L$$ by defining \begin{equation*} (a \otimes_K l_1)(v \otimes_K l_2) = (av) \otimes_K (l_1 l_2) \end{equation*} and extending by linearity. Proof of well-definedness and the module axioms is very similar so we omit it.
Problem 2.11.6
1. As the problem says, the isomorphism from left to right is given by $$(v \otimes_B w) \otimes_C x \mapsto v \otimes_B (w \otimes_C x)$$. We can extend this by linearity to the entirety of $$(V \otimes_B W) \otimes X$$. If we simply choose bases for $$V$$, $$W$$, and $$X$$, it is easy to see that this is well-defined: once we fix $$(v_i \otimes_B w_j) \otimes_C x_k \mapsto v_i \otimes_B (w_j \otimes_C x_k)$$ for all basis vectors $$v_i \in V, w_j \in W, x_k \in X$$, then $$(v \otimes_B w) \otimes_C x$$ will indeed be mapped to $$v \otimes_B (w \otimes_C x)$$ for any $$v \in V, w \in W, x \in X$$.
We need to show that the isomorphism preserves the $$(A, D)$$-bimodule structure. Note that \begin{align*} a((v \otimes_B w) \otimes_C x) &= (a(v \otimes_B w)) \otimes_C x \\ &= ((av) \otimes_B w) \otimes_C x \\ &\mapsto (av) \otimes_B (w \otimes_C x) \\ &= a(v \otimes_B (w \otimes_C x)) \end{align*} and a similar fact holds for right-multiplication by $$d \in D$$. We can then conclude from linearity that the desired result holds in general.
2. If $$f \mapsto 0$$, then $$w \otimes_B v \mapsto f(v)w$$ must vanish for all $$w \in W, v \in V$$, which is only possible if $$f(v) = 0 \, \forall v \in V$$, that is, $$f = 0$$. Therefore the homomorphism given in the text is one-to-one.
We also need it to be onto. Let $$g \in \Hom_C(W \otimes_B V, X)$$ be given. We claim that there exists $$f \in \Hom_B(V, \Hom_C(W, X))$$ such that $$g(w \otimes_B v) = f(v)w$$ for all $$w \in W, v \in V$$. To construct such $$f$$, notice that if $$v$$ is fixed, then $$g(w \otimes_B v)$$ is a linear and $$C$$-linear function of $$w$$ taking $$W$$ into $$X$$; let that map be $$f(v)$$. If $$v_1, v_2 \in V$$, then $$g(w \otimes_B (v_1 + v_2)) = g(w \otimes_B v_1 + w \otimes_B v_2) = g(w \otimes_B v_1) + g(w \otimes_B v_2)$$, and similarly $$g(w \otimes_B \alpha v) = \alpha g(w \otimes_B v)$$, so $$f$$ is linear. Finally, if $$b \in B$$, then $$f(bv) = (w \mapsto g(w \otimes_B (bv))) = (w \mapsto g((wb) \otimes_B v)) = b(w \mapsto g(w \otimes_B v))$$ according to the left $$B$$-module structure of $$\Hom_C(W, X)$$. So $$f$$ is $$B$$-linear.
If $$a \in A$$, then $$af \mapsto (w \otimes_B v \mapsto (af)(v)w) = (w \otimes_B v \mapsto f(va)w) = a(w \otimes_B v \mapsto f(v)w)$$ since $$(w \otimes_B v)a = w \otimes_B (va)$$, and hence $$w \otimes_B v$$ would be mapped to $$f(va)w$$ under $$a(w \otimes_B v \mapsto f(v)w)$$. And if $$d \in D$$, then $$fd \mapsto (w \otimes_B v \mapsto (fd)(v)w) = (w \otimes_B v \mapsto (f(v)d)w) = (w \otimes_B v \mapsto (f(v)w)d) = (w \otimes_B v \mapsto f(v)w)d$$. So $$f \mapsto (w \otimes_B v \mapsto f(v)w)$$ preserves the $$(A, D)$$-bimodule structure.
Problem 2.11.7 For $$a \in A$$, define $$a(m \otimes_A n) \equiv (am) \otimes_A n = m \otimes_A (an)$$ for the pure tensors, and otherwise, if $$t \in M \otimes_A N$$ is expressed as $$t = \sum_i m_i \otimes_A n_i$$, then $$at = \sum_i (am_i) \otimes_A n_i$$. To get well-definedness, note that defining relations are mapped to zero, for example, $$a((m_1 \otimes_A n) + (m_2 \otimes_A n) - ((m_1 + m_2) \otimes_A n) = a(m_1 \otimes_A n) + a(m_2 \otimes_A n) - a((m_1 + m_2) \otimes_A n) = (am_1 \otimes_A n) + (am_2 \otimes_A n) - (a(m_1 + m_2) \otimes_A n) = 0$$ since the last expression is in the form of a defining relation. It is easy to verify that the module axioms are satisfied.
|
|
## Homotopy-theoretic approach to SPT phases in action: Z_16 classification of three-dimensional superconductors
#### Alexei Kitaev, California Institute of Technology
In the free fermion theory, three-dimensional superconductors with time-reversal symmetry ($T^2=-1$) are classified by an integer invariant. I show that in the presence of interaction, this classification collapses to $Z_{16}$. As a generalization, to classify SPT phases with an arbitrary symmetry group, it is sufficient to know the homotopy type of the space of short-range entangled states without symmetry. (Caution: I use a different definition of an SRE phase than Wen.) The collection of such spaces for varying numbers of physical dimensions forms a homotopy spectrum. There are actually two spectra, one for bosons and another for fermions, but they are only known up to dimension 2. Thus, the classification of interacting SPT phases is given by some generalized cohomology theory. Conceptually, such phases are realized as certain sigma-models, but a more rigorous construction is defined on a lattice.
Back to Symmetry and Topology in Quantum Matter
|
|
# Category Archives: node
## De-dupe and link: Using the Flickr API to neaten up my archive and link sketches to blog posts
I've been thinking about how to manage the relationships between my blog posts and my Flickr sketches. Here's the flow of information:
2015.01.06 Figuring out information flow – index card
I scan my sketches or draw them on the computer, and then I upload these sketches to Flickr using photoSync, which synchronizes folders with albums. I include these sketches in my outlines and blog posts, and I update my index of blog posts every month. I recently added a tweak to make it possible for people to go from a blog post to its index entry, so it should be easier to see a post in context. I've been thinking about keeping an additional info index to manage blog posts and sketches, including unpublished ones. We'll see how well that works. Lastly, I want to link my Flickr posts to my blog posts so that people can see the context of the sketch.
My higher goal is to be able to easily see the open ideas that I haven't summarized or linked to yet. There's no shortage of new ideas, but it might be interesting to revisit old ones that had a chance to simmer a bit. I wrote a little about this in Learning from artists: Making studies of ideas. Let me flesh out what I want this archive to be like.
When I pull on an idea, I'd like to be able to see other open topics attached to it. I also want to be able to see open topics that might jog my memory.
How about the technical details? How can I organize my data so that I can get what I want from it?
2015.01.05 Figuring out the technical details of this idea or visual archive I want – index card
Because blog posts link to sketches and other blog posts, I can model this as a directed graph. When I initially drew this, I thought I might be able to get away with an acyclic graph (no loops). However, since I habitually link to future posts (the time traveller's problem!), I can't make that simplifying assumption. In addition, a single item might be linked from multiple things, so it's not a simple tree (and therefore I can't use an outline). I'll probably start by extracting all the link information from my blog posts and then figuring out some kind of Org Mode-based way to update the graph.
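A minimal sketch of that link graph (the node names and helper are hypothetical): storing links as an adjacency list and tracking visited nodes makes traversal safe even with the forward-link cycles mentioned above.

```javascript
// A sketch of the blog/sketch link graph (node names are hypothetical).
// Links form a directed graph that may contain cycles (links to future posts),
// so the traversal tracks visited nodes instead of assuming a tree.
var edges = { a: ['b', 'c'], b: ['a'], c: [] }; // 'a' and 'b' link to each other
function reachable(start) {
  var seen = {};
  var order = [];
  var stack = [start];
  while (stack.length) {
    var node = stack.pop();
    if (seen[node]) continue; // cycle-safe: skip already-visited nodes
    seen[node] = true;
    order.push(node);
    (edges[node] || []).forEach(function (n) { stack.push(n); });
  }
  return order;
}
console.log(reachable('a').sort().join(', ')); // a, b, c — each visited once
```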
To get one step closer to being able to see open thoughts and relationships, I decided that my sketches on Flickr:
• should not have duplicates despite my past mess-ups, so that:
• I can have an accurate count
• it's easier for me to categorize
• people get less confused
• should have hi-res versions if possible, despite the IFTTT recipe I tried that imported blog posts but unfortunately picked up the low-res thumbnails instead of the hi-res links
• should link to the blog posts they're mentioned in, so that:
• people can read more details if they come across a sketch in a search
• I can keep track of which sketches haven't been blogged yet
I couldn't escape doing a bit of manual cleaning up, but I knew I could automate most of the fiddly bits. I installed node-flickrapi and cheerio (for HTML parsing), and started playing.
### Removing duplicates
Most of the duplicates had resulted from the Great Renaming, when I added tags in the form of #tag1 #tag2 etc. to selected filenames. It turns out that adding these tags en-masse using Emacs' writable Dired mode broke photoSync's ability to recognize the renamed files. As a result, I had files like this:
• 2013-05-17 How I set up Autodesk Sketchbook Pro for sketchnoting.png
• 2013-05-17 How I set up Autodesk Sketchbook Pro for sketchnoting #tech #autodesk-sketchbook-pro #drawing.png
This is neatly resolved by the following Javascript:
exports.trimTitle = function(str) {
  return str.replace(/ --.*$/g, '').replace(/#[^ ]+/g, '').replace(/[- _]/g, '');
};

and a comparison function that compared the titles and IDs of two photos:

exports.keepNewPhoto = function(oldPhoto, newPhoto) {
  if (newPhoto.title.length > oldPhoto.title.length) return true;
  if (newPhoto.title.length < oldPhoto.title.length) return false;
  if (newPhoto.id < oldPhoto.id) return true;
  return false;
};

So then this code can process the photos:

exports.processPhoto = function(p, flickr) {
  var trimmed = exports.trimTitle(p.title);
  if (trimmed && hash[trimmed] && p.id != hash[trimmed].id) {
    // We keep the one with the longer title or the newer date
    if (exports.keepNewPhoto(hash[trimmed], p)) {
      exports.possiblyDeletePhoto(hash[trimmed], flickr);
      hash[trimmed] = p;
    } else if (p.id != hash[trimmed].id) {
      exports.possiblyDeletePhoto(p, flickr);
    }
  } else {
    hash[trimmed] = p;
  }
};

You can see the code on Gist: duplicate_checker.js.

### High-resolution versions

I couldn't easily automate this, but fortunately, the IFTTT script had only imported twenty images or so, clearly marked by a description that said: "via sacha chua :: living an awesome life…". I searched for each image, deleting the low-res entry if a high-resolution image was already in the system and replacing the low-res entry if that was the only one there.

### Linking to blog posts

This was the trickiest part, but also the most fun. I took advantage of the fact that WordPress transforms uploaded filenames in a mostly consistent way. I'd previously added a bulk view that displayed any number of blog posts with very little additional markup, and I modified the relevant code in my theme to make parsing easier. See this on Gist:

/**
 * Adds "Blogged" links to Flickr for images that don't yet have "Blogged" in their description.
 * Command-line argument: URL to retrieve and parse
 */
var secret = require('./secret');
var flickrOptions = secret.flickrOptions;
var Flickr = require("flickrapi");
var fs = require('fs');
var request = require('request');
var cheerio = require('cheerio');
var imageData = {};
var $;
function setDescriptionsFromURL(url) {
request(url, function(error, response, body) {
// Parse the images
$ = cheerio.load(body);
$('article').each(function() {
var prettyLink = $(this).find("h2 a").attr("href");
if (!prettyLink.match(/weekly/i) && !prettyLink.match(/monthly/i)) {
collectLinks($(this), prettyLink, imageData);
}
});
updateFlickrPhotos();
});
}
function updateFlickrPhotos() {
Flickr.authenticate(flickrOptions, function(error, flickr) {
flickr.photos.search(
{user_id: flickrOptions.user_id,
per_page: 500,
extras: 'description',
text: ' -blogged'}, function(err, result) {
processPage(result, flickr);
for (var i = 2 ; i < result.photos.pages; i++) {
flickr.photos.search(
{user_id: flickrOptions.user_id, per_page: 500, page: i,
extras: 'description', text: ' -blogged'},
function(err, result) {
processPage(result, flickr);
});
}
});
});
}
function collectLinks(article, prettyLink, imageData) {
var results = [];
article.find(".body a").each(function() {
var link = $(this);
if (link.attr('href')) {
if (link.attr('href').match(/sachachua/) || !link.attr('href').match(/^http/)) {
imageData[exports.trimTitle(link.attr('href'))] = prettyLink;
} else if (link.attr('href').match(/flickr.com/)) {
imageData[exports.trimTitle(link.text())] = prettyLink;
}
}
});
return results;
}
exports.trimTitle = function(str) {
return str.replace(/^.*\//, '').replace(/^wpid-/g, '').replace(/[^A-Za-z0-9]/g, '').replace(/png$/, '').replace(/[0-9]$/, '');
};
function processPage(result, flickr) {
if (!result) return;
for (var i = 0; i < result.photos.photo.length; i++) {
var p = result.photos.photo[i];
var trimmed = exports.trimTitle(p.title);
var noTags = trimmed.replace(/#.*/g, '');
var withTags = trimmed.replace(/#/g, '');
var found = imageData[noTags] || imageData[withTags];
if (found) {
var description = p.description._content;
if (description.match(found)) continue;
if (description) { description += " - "; }
description += '<a href="' + found + '">Blogged</a>';
console.log("Updating " + p.title + " with " + description);
flickr.photos.setMeta(
{photo_id: p.id, description: description},
function(err, res) {
if (err) { console.log(err, res); }
}
);
}
}
}
setDescriptionsFromURL(process.argv[2]);

And now sketches like 2013-11-11 How to think about a book while reading it are now properly linked to their blog posts. Yay! Again, this script won't get everything, but it gets a decent number automatically sorted out.

Next steps:

• Run the image extraction and set description scripts monthly as part of my indexing process
• Check my list of blogged images to see if they're matched up with Flickr sketches, so that I can identify images mysteriously missing from my sketchbook archive or not correctly linked

Yay code!
## Windows: Pipe output to your clipboard, or how I’ve been using NodeJS and Org Mode together

It's not easy being on Windows instead of one of the more scriptable operating systems out there, but I stay on it because I like the drawing programs. Cygwin and Vagrant fill enough gaps to keep me mostly sane. (Although maybe I should work up the courage to dual-boot Windows 8.1 and a Linux distribution, and then get my ScanSnap working.) Anyway, I'm making do. Thanks to Node and the abundance of libraries available through NPM, Javascript is shaping up to be a surprisingly useful scripting language.

After I used the Flickr API library for Javascript to cross-reference my Flickr archive with my blog posts, I looked around for other things I could do with it. photoSync occasionally didn't upload new pictures I added to its folders (or at least, not as quickly as I wanted). I wanted to replace photoSync with my own script that would:

• upload the picture only if it doesn't already exist,
• add tags based on the filename,
• add the photo to my Sketchbook photoset,
• move the photo to the "To blog" folder, and
• make it easy for me to refer to the Flickr image in my blog post or index.

The flickr-with-uploads library made it easy to upload images and retrieve information, although the format was slightly different from the Flickr API library I used previously. (In retrospect, I should've checked the Flickr API documentation first – there's an example upload request right on the main page. Oh well! Maybe I'll change it if I feel like rewriting it.)

I searched my existing photos to see if a photo with that title already existed. If it did, I displayed an Org-style list item with a link. If it didn't exist, I uploaded it, set the tags, added the item to the photo set, and moved it to the folder. Then I displayed an Org-style link, but using a plus character instead of a minus character, taking advantage of the fact that both + and – can be used for lists in Org.
While using console.log(...) to display these links in the terminal allowed me to mark and copy the link, I wanted to go one step further. Could I send the links directly to Emacs? I looked into getting org-protocol to work, but I was having problems figuring this out. (I solved those problems; details later in this post.) What were some other ways I could get the information into Emacs aside from copying and pasting from the terminal window?

Maybe I could put text directly into the clipboard. The node-clipboard package didn't build for me and I couldn't get node-copy-paste to work either, but the node-copy-paste README told me about the existence of the clip command-line utility, which worked for me. On Windows, clip allows you to pipe the output of commands into your clipboard. (There are similar programs for Linux or Mac OS X.)

In Node, you can start a child process and communicate with it through pipes. I got a little lost trying to figure out how to turn a string into a streamable object that I could set as the new standard input for the clip process I was going to spawn, but the solution turned out to be much simpler than that. Just write(...) to the appropriate stream, and call end() when you're done. Here's the relevant bit of code that takes my result array and puts it into my clipboard:

var child = cp.spawn('clip');
child.stdin.write(result.join("\n"));
child.stdin.end();

Of course, to get to that point, I had to revise my script. Instead of letting all the callbacks finish whenever they wanted, I needed to be able to run some code after everything was done. I was a little familiar with the async library, so I used that. I copied the output to the clipboard instead of displaying it so that I could call it easily using ! (dired-do-shell-command) and get the output in my clipboard for easy yanking elsewhere, although I could probably change my batch file to pipe the result to clip and just separate the stderr stuff. Hmm. Anyway, here it is!
See this on Github

/**
 * Upload the file to my Flickr sketchbook and then move it to
 * Dropbox/Inbox/To blog. Save the Org Mode links in the clipboard.
 * - means the photo already existed, + means it was uploaded.
 */

var async = require('async');
var cp = require('child_process');
var fs = require('fs');
var glob = require('glob');
var path = require('path');
var flickr = require('flickr-with-uploads');
var secret = require("./secret");

var SKETCHBOOK_PHOTOSET_ID = '72157641017632565';
var BLOG_INBOX_DIRECTORY = 'c:\\sacha\\dropbox\\inbox\\to blog\\';
var api = flickr(secret.flickrOptions.api_key,
                 secret.flickrOptions.secret,
                 secret.flickrOptions.access_token,
                 secret.flickrOptions.access_token_secret);
var result = [];

// Collect the #hashtags in a filename into a space-separated tag list.
function getTags(filename) {
  var tags = [];
  var match;
  var re = new RegExp('#([^ ]+)', 'g');
  while ((match = re.exec(filename)) !== null) {
    tags.push(match[1]);
  }
  return tags.join(' ');
}
// assert(getTags("foo #bar #baz qux") == "bar baz");

function checkIfPhotoExists(filename, doesNotExist, existsFunction, done) {
  var base = path.basename(filename).replace(/.png$/, '');
  api({method: 'flickr.photos.search',
       user_id: secret.flickrOptions.user_id,
       text: base},
      function(err, response) {
        var found = undefined;
        if (response && response.photos[0].photo) {
          for (var i = 0; i < response.photos[0].photo.length; i++) {
            if (response.photos[0].photo
                && response.photos[0].photo[i]['$'].title == base) {
              found = i;
              break;
            }
          }
        }
        if (found !== undefined) {
          existsFunction(response.photos[0].photo[found], done);
        } else {
          doesNotExist(filename, done);
        }
      });
}

function formatExistingPhotoAsOrg(photo, done) {
  var title = photo['$'].title;
  var url = 'https://www.flickr.com/photos/'
      + photo['$'].owner + '/' + photo['$'].id;
  result.push('- [[' + url + '][' + title + ']]');
  done();
}

// Function name and a few call sites reconstructed; this part of the
// listing was garbled in extraction. See the Github copy for the original.
function uploadPhoto(filename, done) {
  function formatAsOrg(response) {
    var title = response.photo[0].title[0];
    var url = response.photo[0].urls[0].url[0]['_'];
    result.push('+ [[' + url + '][' + title + ']]');
  }
  api({
    method: 'upload',
    title: path.basename(filename.replace(/.png$/, '')),
    is_public: 1,
    hidden: 1,
    safety_level: 1,
    photo: fs.createReadStream(filename),
    tags: getTags(filename.replace(/.png$/, ''))
  }, function(err, response) {
    if (err) {
      console.log('Could not upload photo: ', err);
      done();
    } else {
      var newPhoto = response.photoid[0];
      async.parallel(
        [
          function(done) {
            api({method: 'flickr.photos.getInfo',
                 photo_id: newPhoto}, function(err, response) {
              if (response) { formatAsOrg(response); }
              done();
            });
          },
          function(done) {
            api({method: 'flickr.photosets.addPhoto',
                 photoset_id: SKETCHBOOK_PHOTOSET_ID,
                 photo_id: newPhoto}, function(err, response) {
              if (!err) {
                moveToBlogDirectory(filename, done);
              } else {
                console.log('Could not add ' + filename + ' to Sketchbook');
                done();
              }
            });
          }],
        function() {
          done();
        });
    }
  });
}

// Function name reconstructed; the original listing was garbled here.
function moveToBlogDirectory(filename, done) {
  fs.rename(filename, BLOG_INBOX_DIRECTORY + path.basename(filename),
            function(err) {
              if (err) { console.log(err); }
              done();
            });
}

var arguments = process.argv.slice(2);
async.each(arguments, function(item, done) {
  if (item.match('\\*')) {
    glob.glob(item, function(err, files) {
      if (!files) return;
      async.each(files, function(file, done) {
        checkIfPhotoExists(file, uploadPhoto, formatExistingPhotoAsOrg, done);
      }, function() {
        done();
      });
    });
  } else {
    checkIfPhotoExists(item, uploadPhoto, formatExistingPhotoAsOrg, done);
  }
}, function(err) {
  console.log(result.join("\n"));
  var child = cp.spawn('clip');
  child.stdin.write(result.join("\n"));
  child.stdin.end();
});
Wheeee! Hooray for automation. I made a Windows batch script like so:
up.bat
node g:\code\node\flickr-upload.js %*
and away I went. Not only did I have a handy way to process images from the command line, I could also mark the files in Emacs Dired with m, then type ! to execute my up command on the selected images. Mwahaha!
Anyway, I thought I'd write it up in case other people were curious about using Node to code little utilities, filling the clipboard in Windows, or getting data back into Emacs (sometimes the clipboard is enough).
Back to org-protocol, since I was curious about it. With (require 'org-protocol) (server-start), emacsclient org-protocol://store-link:/foo/bar worked when I entered it at the command prompt. I was having a hard time getting it to work under Node, but eventually I figured out that:
• I needed to pass -n as one of the arguments to emacsclient so that it would return right away.
• The : after store-link is important! I was passing org-protocol://store-link/foo/bar and wondering why it opened up a file called bar. org-protocol://store-link:/foo/bar was what I needed.
I only just figured out that last bit while writing this post. Here's a small demonstration program:
var cp = require('child_process');
var child = cp.execFile('emacsclient', ['-n', 'org-protocol://store-link:/foo/bar']);
Yay!
2015-01-13 Using Node as a scripting tool – index card #javascript #nodejs #coding #scripting
|
|
## The Annals of Statistics
### Groups acting on Gaussian graphical models
#### Abstract
Gaussian graphical models have become a well-recognized tool for the analysis of conditional independencies within a set of continuous random variables. From an inferential point of view, it is important to realize that they are composite exponential transformation families. We reveal this structure by explicitly describing, for any undirected graph, the (maximal) matrix group acting on the space of concentration matrices in the model. The continuous part of this group is captured by a poset naturally associated to the graph, while automorphisms of the graph account for the discrete part of the group. We compute the dimension of the space of orbits of this group on concentration matrices, in terms of the combinatorics of the graph; and for dimension zero we recover the characterization by Letac and Massam of models that are transformation families. Furthermore, we describe the maximal invariant of this group on the sample space, and we give a sharp lower bound on the sample size needed for the existence of equivariant estimators of the concentration matrix. Finally, we address the issue of robustness of these estimators by computing upper bounds on finite sample breakdown points.
#### Article information
Source
Ann. Statist., Volume 41, Number 4 (2013), 1944-1969.
Dates
First available in Project Euclid: 23 October 2013
https://projecteuclid.org/euclid.aos/1382547509
Digital Object Identifier
doi:10.1214/13-AOS1130
Mathematical Reviews number (MathSciNet)
MR3127854
Zentralblatt MATH identifier
1292.62098
#### Citation
Draisma, Jan; Kuhnt, Sonja; Zwiernik, Piotr. Groups acting on Gaussian graphical models. Ann. Statist. 41 (2013), no. 4, 1944--1969. doi:10.1214/13-AOS1130. https://projecteuclid.org/euclid.aos/1382547509
#### References
• Anderson, T. W. (2003). An Introduction to Multivariate Statistical Analysis, 3rd ed. Wiley, Hoboken, NJ.
• Andersson, S. A. and Klein, T. (2010). On Riesz and Wishart distributions associated with decomposable undirected graphs. J. Multivariate Anal. 101 789–810.
• Andersson, S. A. and Perlman, M. D. (1993). Lattice models for conditional independence in a multivariate normal distribution. Ann. Statist. 21 1318–1358.
• Andersson, S. A., Madigan, D., Perlman, M. D. and Triggs, C. M. (1995). On the relation between conditional independence models determined by finite distributive lattices and by directed acyclic graphs. J. Statist. Plann. Inference 48 25–46.
• Barndorff-Nielsen, O. (1983). On a formula for the distribution of the maximum likelihood estimator. Biometrika 70 343–365.
• Barndorff-Nielsen, O., Blæsild, P., Jensen, J. L. and Jørgensen, B. (1982). Exponential transformation models. Proc. Roy. Soc. London Ser. A 379 41–65.
• Barrett, W. W., Johnson, C. R. and Loewy, R. (1996). The real positive definite completion problem: Cycle completability. Mem. Amer. Math. Soc. 122 viii$+$69.
• Becker, C. (2005). Iterative proportional scaling based on a robust start estimator. In Classification—The Ubiquitous Challenge (C. Weihs and W. Gaul, eds.) 248–255. Springer, Berlin.
• Borel, A. (1991). Linear Algebraic Groups, 2nd ed. Graduate Texts in Mathematics 126. Springer, New York.
• Buhl, S. L. (1993). On the existence of maximum likelihood estimators for graphical Gaussian models. Scand. J. Stat. 20 263–270.
• Davies, P. L. and Gather, U. (2005). Breakdown and groups. Ann. Statist. 33 977–1035.
• Davies, P. L. and Gather, U. (2007). The breakdown point—Examples and counterexamples. REVSTAT 5 1–17.
• Donoho, D. L. (1982). Breakdown properties of multivariate location estimators. Ph.D. thesis, Harvard Univ.
• Donoho, D. and Huber, P. J. (1983). The notion of breakdown point. In A Festschrift for Erich L. Lehmann 157–184. Wadsworth, Belmont, CA.
• Draisma, J., Kuhnt, S. and Zwiernik, P. (2013). Supplement to “Groups acting on Gaussian graphical models.” DOI:10.1214/13-AOS1130SUPP.
• Drton, M. and Richardson, T. S. (2008). Graphical methods for efficient likelihood inference in Gaussian covariance models. J. Mach. Learn. Res. 9 893–914.
• Eaton, M. L. (1989). Group Invariance Applications in Statistics. NSF-CBMS Regional Conference Series in Probability and Statistics, 1. IMS, Hayward, CA.
• Finegold, M. and Drton, M. (2011). Robust graphical modeling of gene networks using classical and alternative $t$-distributions. Ann. Appl. Stat. 5 1057–1080.
• Fisher, R. A. (1934). Two new properties of mathematical likelihood. Proceedings of the Royal Society of London. Series A, Containing Papers of a Mathematical and Physical Character 144 285–307.
• Gottard, A. and Pacillo, S. (2006). On the impact of contaminations in graphical Gaussian models. Stat. Methods Appl. 15 343–354.
• Gottard, A. and Pacillo, S. (2010). Robust concentration graph model selection. Comput. Statist. Data Anal. 54 3070–3079.
• Hampel, F. R. (1971). A general qualitative definition of robustness. Ann. Math. Statist. 42 1887–1896.
• James, W. and Stein, C. (1961). Estimation with quadratic loss. In Proc. 4th Berkeley Sympos. Math. Statist. and Prob., Vol. I 361–379. Univ. California Press, Berkeley, CA.
• Konno, Y. (2001). Inadmissibility of the maximum likelihood estimator of normal covariance matrices with the lattice conditional independence. J. Multivariate Anal. 79 33–51.
• Kuhnt, S. and Becker, C. (2003). Sensitivity of graphical modeling against contamination. In Between Data Science and Applied Data Analysis (M. Schader, W. Gaul and M. Vichi, eds.) 279–287. Springer, Berlin.
• Lauritzen, S. L. (1996). Graphical Models. Oxford Statistical Science Series 17. Oxford Univ. Press, New York.
• Lehmann, E. L. and Romano, J. P. (2005). Testing Statistical Hypotheses, 3rd ed. Springer, New York.
• Letac, G. and Massam, H. (2007). Wishart distributions for decomposable graphs. Ann. Statist. 35 1278–1323.
• Lopuhaä, H. P. and Rousseeuw, P. J. (1991). Breakdown points of affine equivariant estimators of multivariate location and covariance matrices. Ann. Statist. 19 229–248.
• Malyšev, F. M. (1977). Closed subsets of roots and the cohomology of regular subalgebras. Mat. Sb. 104(146) 140–150, 176.
• Maronna, R. A., Martin, R. D. and Yohai, V. J. (2006). Robust Statistics: Theory and Methods. Wiley, Chichester.
• Miyamura, M. and Kano, Y. (2006). Robust Gaussian graphical modeling. J. Multivariate Anal. 97 1525–1550.
• Reid, N. (1995). The roles of conditioning in inference. Statist. Sci. 10 138–157, 173–189, 193–196.
• Schervish, M. J. (1995). Theory of Statistics. Springer, New York.
• Stahel, W. (1981). Robust estimation: Infinitesimal optimality and covariance matrix estimators. Ph.D. thesis, ETH, Zürich.
• Sun, D. and Sun, X. (2005). Estimation of the multivariate normal precision and covariance matrices in a star-shape model. Ann. Inst. Statist. Math. 57 455–484.
• Uhler, C. (2012). Geometry of maximum likelihood estimation in Gaussian graphical models. Ann. Statist. 40 238–261.
• Vogel, D. and Fried, R. (2011). Elliptical graphical modelling. Biometrika 98 935–951.
#### Supplemental materials
• Supplementary material: Proofs and more on the structure of $\mathbf{P}_\mathcal{C}$. We provide the proof of Proposition 6.1 and more results on the structure of the poset $\mathbf{P}_\mathcal{C}$ that link our work to Andersson and Perlman (1993).
|
|
# Definite integral: Practice
Calculus Level 1
Let $$f(x)=x^2-x+1$$. Find the value of the following definite integral: $\displaystyle\int_{1}^{3}f(x)\, dx$
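For reference, one way to evaluate it is term by term with the power rule:

```latex
\int_{1}^{3} (x^{2}-x+1)\,dx
  = \left[\frac{x^{3}}{3}-\frac{x^{2}}{2}+x\right]_{1}^{3}
  = \left(9-\frac{9}{2}+3\right)-\left(\frac{1}{3}-\frac{1}{2}+1\right)
  = \frac{15}{2}-\frac{5}{6}
  = \frac{20}{3}.
```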
|
|
# NAG Library Function Document
## 1Purpose
nag_quad_md_numth_coeff_prime (d01gyc) calculates the optimal coefficients for use by nag_quad_md_numth_vec (d01gdc), for prime numbers of points.
## 2Specification
#include <nag.h>
#include <nagd01.h>
void nag_quad_md_numth_coeff_prime (Integer ndim, Integer npts, double vk[], NagError *fail)
## 3Description
The Korobov (1963) procedure for calculating the optimal coefficients ${a}_{1},{a}_{2},\dots ,{a}_{n}$ for $p$-point integration over the $n$-cube ${\left[0,1\right]}^{n}$ imposes the constraint that
$a_1 = 1 \quad \text{and} \quad a_i = a^{i-1} \pmod{p}, \quad i = 1, 2, \dots, n,$ (1)
where $p$ is a prime number and $a$ is an adjustable argument. This argument is computed to minimize the error in the integral
$3^n \int_0^1 dx_1 \cdots \int_0^1 dx_n \prod_{i=1}^{n} \left(1 - 2x_i\right)^2,$ (2)
when computed using the number theoretic rule, and the resulting coefficients can be shown to fit the Korobov definition of optimality.
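Since the constraint makes the whole coefficient vector a function of the single adjustable argument $a$, generating a candidate vector is cheap; the cost lies in searching over $a$ and evaluating the error for each candidate. A minimal sketch of the candidate generation only (in JavaScript, and not the NAG implementation):

```javascript
// Korobov candidate coefficients: a_i = a^(i-1) mod p for i = 1..n,
// so a_1 is always 1. Plain doubles are exact while a*p stays below 2^53.
function korobovCandidate(a, p, n) {
  var coeffs = [];
  var current = 1; // a^0 mod p
  for (var i = 0; i < n; i++) {
    coeffs.push(current);
    current = (current * a) % p;
  }
  return coeffs;
}
```

Roughly speaking, the $p^2$ growth in running time comes from evaluating an error sum over all $p$ points for each of the order-$p$ candidate values of $a$.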
The computation for large values of $p$ is extremely time consuming (the number of elementary operations varying as ${p}^{2}$) and there is a practical upper limit to the number of points that can be used. Function nag_quad_md_numth_coeff_2prime (d01gzc) is computationally more economical in this respect but the associated error is likely to be larger.
## 4References
Korobov N M (1963) Number Theoretic Methods in Approximate Analysis Fizmatgiz, Moscow
## 5Arguments
1: $\mathbf{ndim}$IntegerInput
On entry: $n$, the number of dimensions of the integral.
Constraint: ${\mathbf{ndim}}\ge 1$.
2: $\mathbf{npts}$IntegerInput
On entry: $p$, the number of points to be used.
Constraint: ${\mathbf{npts}}$ must be a prime number $\text{}\ge 5$.
3: $\mathbf{vk}\left[{\mathbf{ndim}}\right]$doubleOutput
On exit: the $n$ optimal coefficients.
4: $\mathbf{fail}$NagError *Input/Output
The NAG error argument (see Section 3.7 in How to Use the NAG Library and its Documentation).
## 6Error Indicators and Warnings
NE_ACCURACY
The machine precision is insufficient to perform the computation exactly. Try reducing npts: ${\mathbf{npts}}=〈\mathit{\text{value}}〉$.
NE_ALLOC_FAIL
Dynamic memory allocation failed.
See Section 2.3.1.2 in How to Use the NAG Library and its Documentation for further information.
NE_BAD_PARAM
On entry, argument $〈\mathit{\text{value}}〉$ had an illegal value.
NE_INT
On entry, ${\mathbf{ndim}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{ndim}}\ge 1$.
On entry, ${\mathbf{npts}}=〈\mathit{\text{value}}〉$.
Constraint: npts must be a prime number.
On entry, ${\mathbf{npts}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{npts}}\ge 5$.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
See Section 2.7.6 in How to Use the NAG Library and its Documentation for further information.
NE_NO_LICENCE
Your licence key may have expired or may not have been installed correctly.
See Section 2.7.5 in How to Use the NAG Library and its Documentation for further information.
## 7Accuracy
The optimal coefficients are returned as exact integers (though stored in a double array).
## 8Parallelism and Performance
The time taken is approximately proportional to ${p}^{2}$ (see Section 3).
## 10Example
This example calculates the Korobov optimal coefficients where the number of dimensions is $4$ and the number of points is $631$.
### 10.1Program Text
Program Text (d01gyce.c)
### 10.2Program Data
None.
### 10.3Program Results
Program Results (d01gyce.r)
© The Numerical Algorithms Group Ltd, Oxford, UK. 2017
|
|
# include sql files and use minted for highlighting
I am writing documentation in LaTeX for a database project.
I would like to include some SQL files in my LaTeX document and highlight these files with minted.
My initial approach is the following, but it only prints my \input command.
\label{included-sql-files}
\section{SQL-Files}
\begin{minted}[linenos, framesep=2mm, fontsize=\small]{sql}
\input{./sql/01.sql}
\end{minted}
Maybe someone else knows a good solution to my problem.
Volker
You could use \inputminted, but I would recommend defining your own command with whatever options you like (see the minted documentation, page 29):
\documentclass{article}
\usepackage{minted}
\newmintedfile[inputsql]{sql}{%
linenos,
autogobble,
breaklines,
}
\begin{document}
\inputsql{sql/01.sql}
\end{document}
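For a one-off file, \inputminted can also be used directly with the same options (a minimal sketch using the question's file path):

```latex
\documentclass{article}
\usepackage{minted}

\begin{document}
% One-off include: [options]{language}{file}
\inputminted[linenos, framesep=2mm, fontsize=\small]{sql}{sql/01.sql}
\end{document}
```

The \newmintedfile approach above is preferable when the same options are reused for many files.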
|
|
Extratropical cyclones form anywhere within the extratropical regions of the Earth (usually between 30° and 60° latitude from the equator), either through cyclogenesis or extratropical transition
Source: Extratropical cyclone - Wikipedia. The surrounding context adds: "A study of extratropical cyclones in the Southern Hemisphere shows that between the 30th and 70th parallels, there are an average of 37 cyclones in existence during any 6-hour period."
|
|
## SBI PO 2018 Quant Test 1
Instructions
For the following questions answer them individually
Question 1
What should come in place of the question marks in the following equation?
$$\frac{?}{24}=\frac{72}{\sqrt{?}}$$
Question 2
If the height of a triangle is decreased by 40% and its base is increased by 40%, what will be the effect on its area?
Question 3
In 1 kg of a mixture of sand and iron, 20% is iron .How such sand should be added so that the proportion of iron becomes 10%
Question 4
The denominator of a fraction is 2 more than thrice its numerator. If the numerator and the denominator are each increased by one, the fraction becomes ⅓. What was the original fraction?
Question 5
If 2x+y = 15, 2y+z = 25 and 2z+x = 26, what is the value of z?
|
|
## Calculus (3rd Edition)
$\sum_{n=0}^{\infty} \frac{(-1)^n}{2^n}$ converges absolutely.
We have $$\sum_{n=0}^{\infty} |\frac{(-1)^n}{2^n}|=\sum_{n=0}^{\infty} (\frac{1}{2})^n$$ which is a geometric series with $|r|=|1/2|\lt 1$, which converges. Then $\sum_{n=0}^{\infty} \frac{(-1)^n}{2^n}$ converges absolutely.
|
|
# Math Digest
## Summaries of Media Coverage of Math
Edited by Mike Breen and Annette Emerson, AMS Public Awareness Officers
Contributors:
Mike Breen (AMS), Claudia Clark (freelance science writer), Lisa DeKeukelaere (2004 AMS Media Fellow), Annette Emerson (AMS), Brie Finegold (University of Arizona), Baldur Hedinsson (Boston University), Allyn Jackson (Deputy Editor, Notices of the AMS), and Ben Polletta (Drexel University)
### October 2011
Brie Finegold summarizes blogs on historic female mathematicians and on music and math:
"Five Historic Female Mathematicians You Should Know," by Sarah Zielinski. Surprising Science, Smithsonian.com, 7 October 2011.
Quick, name five historic female mathematicians! Most people would struggle to name five famous mathematicians regardless of sex. But those who regard Lagrange and Gauss as household names will probably know the names mentioned in Zielinski's post: Hypatia, Sophie Germain, Ada Lovelace, Sonja Kovalevsky, and Emmy Noether. As the typical roads into academia were closed to them, these women took unconventional paths to pursue their passion for mathematics. Sophie Germain used a false name to write to Lagrange in fear that he would not read her letters if he knew she was female. A century later, Sonja Kovalevsky married a fellow academic so that she could leave Russia (where women could not attend university) and earn a doctorate at the University of Heidelberg. Even there, she was allowed only to audit classes and only with the permission of the professor.
This post was made in honor of Ada Lovelace Day, which was founded in 2009 to encourage bloggers to highlight the accomplishments of females in STEM fields. See Steve Wildstrom's post for a list of some more contemporary famous female mathematicians (some of whom are still working). For example, Wildstrom mentions Ingrid Daubechies, who this year became the first female president of the International Mathematical Union in its ninety-year history.
"Music and maths: joined at the hip or walking down different paths?," by Stephen Hough. The Telegraph, 7 October 2011.
Accomplished pianist Stephen Hough writes in this post about the relationship or lack thereof between mathematics and music. Although he acknowledges that the theory underlying musical compositions has mathematical flavor, Hough points out good compositions often break the rules while good mathematics establishes rules and patterns. Hough argues that composers deliberately add superfluous notes, ambiguity, and uneven timing, while a mathematician looks for only the most concise, least ambiguous, most regular answers to problems.
While students of calculus whose main job is to replicate algorithms (i.e. perform) might wholeheartedly agree with Hough's argument, a professional mathematician whose job is to create original work (i.e. compose) might disagree with Hough. One mathematician with whom Hough debated this issue was Marcus du Sautoy in a segment for the BBC. In this short discussion, du Sautoy points out that there are pieces of classical music which are concretely inspired by mathematics, while Hough brings up classical pieces that lack any identifiable patterns. One interesting point made by du Sautoy is that a listener who recognizes a lack of pattern is seeking one and therefore thinking mathematically. Hough insists that mathematics lacks the soulfulness and humanity of music wherein great compositions are great both because of the notes on the page and interpretation thereof. But perhaps attending an artful lecture given by the founder of a new and exciting result might be compared to watching a great musical performance; even if the audience lacks knowledge of the genre, the beauty of the piece comes shining through.
--- Brie Finegold
"Popular Science's Brilliant 10: Computational Contortionist," by Ryan Bradley. Popular Science, October 2011, page 63.
Ever thought about how the surface of a Coke can gets all mangled when you crush it? As a graduate student in mathematics and computer science, Eitan Grinspun spent years and years thinking about this problem and getting to the bottom of how external forces can change the shape of surfaces. As a result a whole new field of geometry was born, discrete differential geometry, which allows engineers to better predict how cables will fall to the sea floor and allows animators to create more realistic animations. As for the picture? Grinspun says "I get to take any interesting physical problem--say, spaghetti movement. Toss it in the air, and it falls on the ground and it twists and coils. Why does it move that way?" Grinspun's work rendering complex objects earned him inclusion in the 2011 Popular Science Brilliant 10. Photo: John B. Carnett.
--- Baldur Hedinsson
"David Lynch adds art to maths," by Benjamin Secher. The Telegraph, 29 October 2011;
"Mathematics: Drowning by numbers," by Stefan Michalowski and Georgia Smith. Nature, 15 December 2011.
The Fondation Cartier for contemporary art in Paris is hosting an exhibition "Mathematics--A Beautiful Elsewhere" through March 18. The idea behind the exhibition came from the foundation's director Hervé Chandès, who wanted to explore the aesthetic potential of mathematics, which Chandès calls the "abstract art par excellence." He contacted Jean-Pierre Bourguignon, head of the Institut des Hautes Études Scientifiques (IHES), who, along with other prominent mathematicians, came up with some contemporary ideas in mathematics that would lend themselves to artistic interpretation. Chandès also got in touch with film director David Lynch, who contacted artists and coordinated the exhibition. Lynch said of his experience, "I guess in an abstract way I thought of the great mathematicians as artists, but then when I met these mathematicians I saw it way more clearly. They're just like painters, but their medium is equations and numbers. They're all fired up, they love life, they're happy.... Mathematicians are bright and shiny." Michalowski and Smith have mixed reviews of the exhibition. (Image: "O Paraiso, 2011" by Beatriz Milhazes, courtesy of the Fondation Cartier.)
--- Mike Breen
"Mathematicians think of everything as rubber," by Elizabeth Quill. Science News, 22 October 2011.
The field of topology has come a long way since 1940, when the mathematical genre, at the time undefined in the dictionary, captivated participants at the Columbus science meetings. A resurrected Science News clip from those meetings describes the "relatively new" and "strange" field as a departure from traditional Euclidean geometry, basing comparison of two figures on whether they can be stretched or distorted (without cutting or gluing) into identical forms. Even farther back in time, Leonhard Euler's solution to the problem of whether it is possible to complete a closed-loop walking tour of the city of Königsberg, Prussia, crossing each of the city's bridges exactly once was an early example of topological thinking. As of 2011, mathematicians have used topology to make contributions in a variety of scientific fields, and questions about the size and shape of the universe exhibit potential for a topological solution.
--- Lisa DeKeukelaere
"String theory finds a bench mate," by Zeeya Merali. Nature, 20 October 2011, pages 302-304.
Collaboration between string theorists and condensed-matter physicists, ignited by two former Moscow State University roommates reunited in New York, is reinvigorating both fields. When condensed-matter physicist Dam Thanh Son sought out his old friend Andrei Starinets, a string theorist, he discovered that Starinets' equations looked much like those he was using himself to compute the characteristics of "fireballs" created in particle accelerators. Working together, the pair was able to translate difficult condensed matter physics equations into a parallel, four-dimensional, string theory universe in which equations are much easier to solve. Translating back into three-dimensional space, the work yielded a predicted value for the fireballs' viscosity that was later proven in a laboratory. As a consequence of this collaboration, some of string theory's star power is rubbing off on condensed-matter physics, and string theory is gaining validation for its utility, but additional concrete and novel results are still needed to quell the skeptics.
--- Lisa DeKeukelaere
"Using Math To Piece Together a Lost Treasure," by Holger Dambeck. Spiegel International, 19 October 2011.
This article discusses the restoration of frescoes in a church in Padua, Italy, in which mathematical methods played a major role. The frescoes were shattered in 1941 during World War II bombing raids. Shattered pieces---88,000 of them---were collected and stored, and in 1992 they were cleaned, sorted, and photographed. But with so many pieces, restorers were unable to reassemble the frescoes. Mathematician Massimo Fornasier, now a professor at the Technical University of Munich, stepped in to help. He and his team developed an algorithm that could make very good guesses about the placement of a fragment. After the algorithm was run on a computer, the placements could be verified by restorers. "Ultimately, Fornasier's team managed what many had thought impossible: They found the original position of almost every piece of shattered plaster that was large enough to be identified," the article says. The pieces were reassembled on the church wall, a process that was finally completed in 2006.
--- Allyn Jackson
Articles on record-setting calculation of Pi:
"Japanese engineer crunches pi to 10 trillion digits," by Michael Winter. USA Today, 18 October 2011;
"New pi value record set in Nagano,"
Japan Times, 18 October 2011;
"
Japanese mathematician breaks record for determining the value of pi," by Julian Ryall. Telegraph, 18 October 2011.
Earlier this month, Japanese engineer Shigeru Kondo calculated the value of pi to 10 trillion digits, besting a 5 trillion-digit record that he set in August 2010. To perform the 12-month task, Kondo used a home-built computer with a 48-terabyte hard drive. While the cost of electricity to run the computer was high—approximately 30,000 yen (about $390) per month—the heat generated by the hard drive warmed the computer room in his Nagano home to 104 degrees Fahrenheit. "We could dry the laundry immediately," said his wife, Yukiko, to Japan Times.
--- Claudia Clark
"The Telltale Tiles," by Burkard Polster and Marty Ross. The Age, 17 October 2011.
Australian mathematicians Polster and Ross dissect an invitation to an event at the geometric-tile-covered Storey Hall in Melbourne and, in the process, explain the basics of aperiodic tiling of a plane. Beginning with the simple example of using orderly rows of squares or triangles to "tile," or cover without overlaps or gaps, a plane, the authors describe the concept of being periodic, or conforming to a grid of identical parallelograms. An aperiodic tiling, the authors explain, is a set of shapes (tiles) that cannot be rearranged into a periodic tiling. The first set of aperiodic tiles, discovered in 1964, contained 20,246 items, but mathematician Roger Penrose later constructed special shapes that, taken in pairs, constitute aperiodic tilings. Penrose's tile pairings are responsible not only for the aesthetics of Storey Hall's exterior, but also this year's Nobel Prize in chemistry. (Photo of Storey Hall, RMIT (Royal Melbourne Institute of Technology), courtesy of Burkard Polster.) Of related interest: "Penrose Tiles Talk Across Miles," by David Austin.
--- Lisa DeKeukelaere
"Finding Archimedes in the Shadows," an exhibition review by Edward Rothstein. New York Times, 17 October 2011;
"Survival of the Palimpsest," Random Samples. Science, 14 October 2011, page 162.
Ultraviolet image of a diagram from The Archimedes Palimpsest, found in the treatise "Spiral Lines," copyright the owner of The Archimedes Palimpsest. Licensed for use under Creative Commons Attribution 3.0 Unported Access Rights.
A rare action shot of the palimpsest being disbound, copyright the owner of The Archimedes Palimpsest, licensed for use under Creative Commons Attribution 3.0 Unported Access Rights.
Sunday, October 16, marked the opening of an exhibition at the Walters Art Museum in Baltimore, Maryland, that showcases the history, restoration, and meanings of the Archimedes Palimpsest. This palimpsest--typically, a parchment manuscript on which more than one text has been written--contains the oldest existing copy of Archimedes' work, including the only known copy of two works: "Method" and "Stomachion." The title of the exhibit--Lost and Found: The Secrets of Archimedes--reflects the nature and history of this text, initially a 10th-century copy of the third-century B.C. writings of Archimedes, which was "recycled" three centuries later into a prayer book. "That book was apparently in use for centuries at the Monastery of St. Sabbas in the Judean Desert," writes article author Edward Rothstein. Then, in 1906, Danish Archimedes scholar Johan Ludvig Heiberg found the book in Istanbul, where he deciphered much of the scarcely visible original text--which runs perpendicular to the prayers copied over it--and photographed the pages. The book's location was unknown for most of the 20th century until it was reported sold for $2 million at a Christie's auction in 1998 to an anonymous buyer. In response to a request to exhibit the book from the Walters' curator of manuscripts, William Noel, the buyer "not only deposited the book with Mr. Noel [in 1999] but also provided funds for the project, as scientists and other experts took it apart for restoration and research."
To learn more about this exhibition, which is open through January 1, go to the Walters Art Museum website. This story was reported by several media outlets, including CBS News, Daily Mail, and The Washington Post.
--- Claudia Clark
"Super Science Suggestions: House Panel Lays Out Spending Preferences," by Science News Staff. Science, 17 October 2011.
Scientists are preparing for a cut of US$1.5 billion in the 2012 research budget. The cut is part of a budget deal between the White House and Congress passed in August. In dealing with the funding shortfall, Democrats have stressed the need to find new funding opportunities, while Republicans see a chance to cut funding for specific programs. Alternative energy and climate research will be hit hard if Republican leaders of the House of Representatives Committee on Science, Space and Technology get their way. According to Samuel Rankin III, chair of the Coalition for National Science Funding and head of the Washington office of the American Mathematical Society, all members of the Science committee recognize the value of basic research, such as mathematics. (Photo: Samuel Rankin.)
---Baldur Hedinsson
"Q&A: Persi Diaconis The mathemagician," by Jascha Hoffman. Nature, 27 October 2011, page 457;
"The Magical Mind of Persi Diaconis," by Jeffrey R. Young. Chronicle of Higher Education, 16 October 2011.
Magician-turned-mathematician Persi Diaconis has co-authored a new book, Magical Mathematics: The Mathematical Ideas That Animate Great Magic. According to the article in the Chronicle of Higher Education, the book is "part math textbook, part magic primer, and part history book, tracing how magic and math have long traveled under the same cape," which prompted Nature and the Chronicle to profile Diaconis and explain a bit about his interest in magic, the mathematics of magic, and the other applications of his discoveries. For instance, (also from the Chronicle) "his best known mathematical finding is that it takes seven shuffles of a standard deck of cards to randomly mix them. The conclusion turns out to have implications far beyond card tables: Someday it may help manufacturers determine how much mixing is necessary in industrial processes, or give spies a better way to tell how complex their secret codes need to be." Although Diaconis is still involved in magic and the magician community, Young writes that "he sees himself first and foremost as someone attempting to solve the toughest problems of mathematics." The online version of Young's piece also includes a video.
--- Annette Emerson
Articles on the Nobel Prize in Chemistry to Daniel Shechtman:
"Once-Ridiculed Discovery Redefined the Term Crystal", by Daniel Clery. Science, 14 October 2011, page 165.
"Persistence pays off for crystal chemist", by Richard van Noorden. Nature, 13 October 2011;
"Vindicated: Ridiculed Israeli scientist wins Nobel", by Aron Heller. Associated Press, 5 October 2011;
"Shechtman Wins Nobel in Chemistry for Quasicrystals Discovery", by Andrea Gerlin. Business Week, 5 October 2011:
"Israeli Scientist Wins Nobel Prize for Chemistry", by Kenneth Chang. New York Times, 5 October 2011; and
"Israel's Daniel Shechtman wins 2011 Nobel Prize in chemistry," by Asaf Shtull-Trauring. Haaretz, 5 October 2011.
Dan Shechtman, the first to identify quasicrystals in nature, was awarded the Nobel Prize in Chemistry for 2011. Shechtman's compelling story made headlines around the world. In 1982, he observed in his laboratory an unusual pattern in the atomic structure of an alloy. The structure appeared to be a crystal, but the crystal pattern did not repeat itself---these are the hallmarks of a quasicrystal. In mathematics, objects with this type of structure were studied as abstract phenomena but were thought not to exist in atomic structures. When Shechtman announced that he had observed a quasicrystal, his findings were met with disbelief and ridicule, and he was asked to leave his laboratory. He persisted for years, and over time the naysayers were convinced. The Nobel Prize was the ultimate vindication. The Nature article notes that, around the same time as Shechtman's discovery, mathematicians Paul Steinhardt and Dov Levine were completing a rigorous theory of the three-dimensional version of Penrose tilings. The three-dimensional objects they were studying were just what Shechtman had observed in his lab. Steinhardt called them "quasicrystals" because they exhibited neither the regular periodicity of crystals nor the disordered structure of glass. For more on the mathematics of quasicrystals, see "What is a Quasicrystal?", by Marjorie Senechal (AMS Notices, September 2006), and "Quasicrystals and geometry", a book review by Charles Radin (AMS Notices, April 1996).
--- Allyn Jackson
"Don't get math? Researchers home in on the brain's problem", by Sharon Noguchi. Medical Xpress.com, 5 October 2011 (originally appeared in San Jose Mercury News).
This article discusses new brain research that might improve understanding about why some people seem to lack basic mathematical ability, including the ability to estimate quantities. In one study that tracked 249 students from kindergarten, researchers found that, as ninth graders, some of these students were unable to estimate a quantity of dots that was flashed on a screen, or to distinguish a set of 15 dots from a set of 20 dots. Such students tend to do poorly in mathematics classes. This type of disability, called "dyscalculia", is the counterpart to dyslexia but has been studied far less. Tests of brain activity in children with dyscalculia show that they are not using a part of the brain that is active when children without the disorder perform estimations.
--- Allyn Jackson
"It's just an illusion: Mathematician uses thousands of Lego bricks to recreate Escher's gravity-defying images," by Nadia Gilani. Daily News, 2 October 2011.
Dutch artist M.C. Escher was famous for incorporating astounding optical illusions into his drawings. Now Andrew Lipson, a British mathematician, has constructed LEGO® replicas of some of Escher's acclaimed work, and incredibly the LEGO® structures create the same astonishing optical illusions. Lipson used thousands of LEGO® bricks and spent countless hours building the replicas. Though he will not say how he achieved the seemingly impossible angles and strange perspectives in his creations, he swears that they are genuinely constructed out of LEGO® without using glue or any other adhesive.
See more of Lipson's LEGO® structures. Lipson's collaborator on this work is Daniel Shiu. (Photo: Escher "Relativity" copyright by Andrew Lipson. Click to see larger image.)
--- Baldur Hedinsson
"Ig Nobel prize awards 2011," New Scientist, 3 October 2011.
Well, October 21 has come and gone without the world coming to an end--which is why Harold Camping (among other doomsday predictors) received the 2011 Ig Nobel Award for Mathematics. Media worldwide covered the annual Ig Nobel Awards Ceremony, hosted by Marc Abrahams, editor of the Annals of Improbable Research. Mathematics was one of the awards announced. As noted by New Scientist, "A long list of self-appointed prophets whose predictions of the end of the world have thankfully failed to come to pass shared the mathematics prize. As the Ig Nobel committee says, their failure has taught "the world to be careful when making mathematical assumptions and calculations." Among the prize winners is Harold Camping, who has so far prophesied the end of the world on 21 May 1988, 7 September 1994, 21 May of this year, and ... 21 October 2011." Seriously, "The Ig Nobel Prizes honor achievements that first make people laugh, and then make them think. The prizes are intended to celebrate the unusual, honor the imaginative—and spur people's interest in science, medicine, and technology," states Abrahams.
--- Annette Emerson
"A formula for justice," by Angela Saini. The Guardian, 2 October 2011.
Thomas Bayes, though long dead, was recently thrown out of a British court. Well, not Thomas Bayes himself, but his eponymous and notoriously hard to understand theorem. Bayes' theorem relates the conditional probabilities of multiple events - relating the probability of A given B to the probabilities of A, B, and B given A - and can be used to show that doctors are very bad at understanding the results of medical tests. Bayes' theorem, it turns out, is (or was) very much alive in the modern courtroom, where it is used to calculate how much a given piece of evidence shifts the probability of an individual's guilt or innocence. The theorem ran afoul of the law in the appeal of a convicted murderer, whom we (because we are mathematicians) can call "X". One of the pieces of evidence arrayed against X was a shoeprint found at the scene of the murder, which seemed to match a pair of sneakers found at X's house. The likelihood that the pair of sneakers observed at the crime scene was in fact the same as X's pair was calculated using Bayes' theorem. Applying Bayes' theorem required information like the number of sneakers sold each year, and the number of different tread patterns on those sneakers. When this kind of information is not available, expert witnesses often make educated estimates or guesses. The judge on X's appeal decided that he didn't like the guesses made in X's case, but he threw Bayes' baby out with the bathwater. Given how poor humans are at probabilistic reasoning - "We like a good story to explain the evidence and this makes us use statistics inappropriately," says University College London psychologist David Lagnado - this is a big problem. "From being quite precise and being able to quantify your uncertainty, you've got to give a completely bland statement as an expert, which says 'maybe' or 'maybe not'. No numbers," explains Professor Norman Fenton, a mathematician at Queen Mary, University of London, a frequent expert witness. 
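The conditional-probability reversal at the heart of the theorem - and why doctors misread test results - can be sketched with a small numerical example. The prevalence, sensitivity, and false-positive rate below are invented for illustration; they are not taken from the article or the case:

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B).
# Hypothetical screening test -- all three numbers invented for illustration:
p_disease = 0.01            # 1% of the population has the condition
p_pos_given_disease = 0.90  # sensitivity: 90% of the sick test positive
p_pos_given_healthy = 0.09  # false-positive rate among the healthy

# Total probability of a positive result (law of total probability).
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Probability of disease given a positive test.
posterior = p_pos_given_disease * p_disease / p_pos
print(round(posterior, 3))  # -> 0.092: most positive results are false alarms
```

Despite the test being "90% accurate," fewer than one in ten positives actually has the condition, because the healthy population is so much larger - exactly the kind of intuition-defying result Lagnado describes.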
Of course, the human inability to reason statistically means it is easy to misinterpret and misuse statistical arguments in court. The way forward is to get mathematicians and legal professionals together to devise constructive ways of applying probability - and avoiding its misuse - in court. This is just what Fenton and his colleague Amber Marks at Queen Mary have done. They and their 37 legal collaborators are now examining the prevalence of Bayes' theorem in court - and thus the impact of the shoeprint murder case, in the hopes of getting it overturned. Of course, as they admit and any math teacher knows, the most challenging issue is how to get jurors to understand the statistical evidence presented to them.
--- Ben Polletta
# A consignment of eight washing machines contains two fully-automatic
Math Expert
Joined: 02 Sep 2009
Posts: 51121
02 Oct 2018, 01:32
A consignment of eight washing machines contains two fully-automatic washing machines and six semi-automatic washing machines. If two washing machines are to be chosen at random from this consignment, what is the probability that at least one of the two washing machines chosen will be a fully-automatic?
(A) 1/5
(B) 1/4
(C) 3/14
(D) 13/28
(E) 17/28
Senior PS Moderator
Joined: 26 Feb 2016
Posts: 3325
Location: India
GPA: 3.12
02 Oct 2018, 04:22
There are $$C_2^8 = 28$$ ways of choosing 2 from the 8 washing machines available. Of the 8
machines, we have 2 fully-automatic washing machines and 6 semi-automatic machines.
P(at least one is fully-automatic) = 1 - P(Both are semi-automatic machines)
There are $$C_2^6 = 15$$ ways of choosing 2 from 6 semi-automatic washing machines.
P(Both are semi-automatic machines) = $$\frac{15}{28}$$ | P(at least one is fully automatic) = $$1 - \frac{15}{28} = \frac{13}{28}$$
Therefore, the probability that at least one of the two washing machines chosen will be a fully-automatic is $$\frac{13}{28}$$ (Option D)
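The complement-counting step above is easy to confirm by exhaustively listing all pairs (a quick Python sketch; `comb` is the binomial coefficient):

```python
from math import comb
from itertools import combinations

machines = ['F', 'F', 'S', 'S', 'S', 'S', 'S', 'S']  # 2 fully-, 6 semi-automatic

# Complement counting, as in the solution above: 1 - C(6,2)/C(8,2).
p = 1 - comb(6, 2) / comb(8, 2)

# Exhaustive check over all C(8,2) = 28 possible pairs.
pairs = list(combinations(range(8), 2))
favorable = sum(1 for i, j in pairs if 'F' in (machines[i], machines[j]))
assert abs(favorable / len(pairs) - p) < 1e-12
print(favorable, len(pairs))  # -> 13 28
```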
Manager
Joined: 16 Oct 2016
Posts: 132
Location: India
Concentration: General Management, Healthcare
GMAT 1: 640 Q40 V38
GMAT 2: 680 Q48 V35
GPA: 3.05
WE: Pharmaceuticals (Health Care)
02 Oct 2018, 04:34
Probability of ATLEAST 1 event : P(AT LEAST 1)
Probability of NO EVENT: P(NONE)
P(AT LEAST 1) = 1 - P(NONE)
P(NONE): Both washing machines are semiautomatic (one AND two, both are semi)
P(NONE): P1 (probability the 1st machine chosen is semi-automatic) X P2 (probability the 2nd machine chosen is semi-automatic, given the 1st was)
P(NONE): 6/8 X 5/7 .........................As 6 of the 8 machines are semi-automatic, and once one is removed, 5 of the remaining 7 are
P(NONE): 15/28
P(AT LEAST 1) = 1 - 15/28
= 13/28
= D
# In his last soccer game Chris made 9 saves and let in 3 goals. What was his ratio of saves to total shots on goal?
###### Question:
In his last soccer game, Chris made 9 saves and let in 3 goals. What was his ratio of saves to total shots on goal?
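For the arithmetic: every shot on goal was either saved or went in, so there were 9 + 3 = 12 total shots, and the ratio of saves to total shots is 9:12, which reduces to 3:4. A one-line check:

```python
from math import gcd

saves, goals = 9, 3
shots = saves + goals  # every shot on goal was either saved or scored
g = gcd(saves, shots)
print(f"{saves}:{shots} = {saves // g}:{shots // g}")  # -> 9:12 = 3:4
```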
# Throwing dice.
Welcome back my friend!
With this post I want to talk about some interesting applications of the theory of generating functions in relation to the enumeration of combinations.
For this purpose let's introduce a very simple problem: "Say we throw three regular dice; in how many ways can the sum of the upper faces be equal to 15?" That sounds like a reasonably simple problem, for which we will develop a set of combinatorics tools. Don't be scared by the math, it is really simple, and at the end of the post we will be able to extend this exercise to any number of dice and any valid resulting sum. The theory behind the calculations performed here has some extremely important implications in coding theory, and in general might be extended to a wider set of problems, but for now let's start with the three dice.
So, we want to have $d_{1}+d_{2}+d_{3}=15$ where $1 \leq d_{i}\leq 6$; that makes perfect sense, right? Regular dice have six faces and we want the sum of the faces to be 15. For the moment let's start with a much simpler enumeration problem which will give us some insight on how to proceed: "In how many ways can we select three distinguishable objects $a,b$ and $c$?"
For every possible combination we have to decide whether or not to take $a$, then make the same decision for $b$ and $c$. We can display this dilemma in mathematical language by building the schematic product for our decision problem: we can either pick no $a$ or one $a$ (we cannot pick more than one $a$), and the same logic applies to $b$ and $c$: we either pick nothing or one.
This means that we can build the sequences $a_{k}=\left(0,1\right)$, $b_{k}=\left(0,1\right)$ and $c_{k}=\left(0,1\right)$. In general, for any given $w_{k}=\left(a_{0},a_{1},...\right)$ we have a generating function $G\left(x\right)=\sum_{k=0}^{\infty} a_{k}x^{k}=a_{0}x^{0}+a_{1}x^{1}+a_{2}x^{2}+\cdots$, which in our simple case is just $a_{0}x^{0}+a_{1}x^{1}$, $b_{0}x^{0}+b_{1}x^{1}$ and $c_{0}x^{0}+c_{1}x^{1}$; that is, only two terms each.
The schematic product for our selection problem is given by the expression (Where the sum is an OR operation and the multiplication an AND operation):
$\left(a_{0}x^{0}+a_{1}x^{1}\right)\left(b_{0}x^{0}+b_{1}x^{1}\right)\left(c_{0}x^{0}+c_{1}x^{1}\right)$
Which has the meaning: one can either pick zero $a$ or one $a$, and zero $b$ or one $b$, and so on. After the obvious simplification for the first coefficient (the zero coefficient), the expression can be expanded to:
$1+\left(a_{1}+b_{1}+c_{1}\right)x^{1}+\left(a_{1}b_{1}+a_{1}c_{1}+b_{1}c_{1}\right)x^{2}+a_{1}b_{1}c_{1}x^{3}$
Where the coefficients are giving us the ways to select the three distinguishable objects $a,b$ and $c$, including the empty selection, which is represented by that lonely 1. Now, if you put $a_{1}=b_{1}=c_{1}=1$ you get the number of selections:
$1+3x^{1}+3x^{2}+x^{3}$
Which is what we're looking for: the coefficients of the expression are telling us in how many ways we can pick the three objects. Three ways of picking one object, three ways of picking two objects, and one way to pick them all.
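The same expansion can be reproduced mechanically: multiplying generating functions is exactly convolving their coefficient lists, which a few lines of Python can check (a sketch, not part of the original post):

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (index = power of x)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b  # convolution of coefficients
    return r

one_object = [1, 1]  # 1*x^0 + 1*x^1: either skip the object or pick it
product = poly_mul(poly_mul(one_object, one_object), one_object)
print(product)  # -> [1, 3, 3, 1], matching 1 + 3x + 3x^2 + x^3
```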
That's great; let's come back to our original problem, since now we have one important tool to solve it. A die has six faces, each face has a value $1\leq d_{i}\leq 6$, and we have three dice, so our schematic product is:
$\left(x^{1}+x^{2}+x^{3}+x^{4}+x^{5}+x^{6}\right)^{3}$
Note that the term for the zero coefficient is missing; that is because zero is not a possible result of a die roll. The exponent three is due to the number of dice we have: the expression is the product of three generating functions, each modelling one die.
What we need to solve our problem is the coefficient of $x^{15}$; remember this is the same setup as for the simple $a,b,c$ problem, and we want the number of possible combinations which give us 15. To find this coefficient we do not need to expand that expression. I mean, we could, but that's a tedious and error-prone exercise, and it is certainly not practical for bigger problems with 100 dice, for example.
We need to reformulate the expression in a way which allows us to perform some manipulation; let's start with a geometric series:
$S_{n}=1+x+x^{2}+\cdots+x^{n}=1+x\left(1+x^{1}+\cdots+x^{n-1}\right)=1+x\left(S_{n}-x^{n}\right)$
$S_{n}=\frac{1-x^{n+1}}{1-x}$
In our case, we have:
$\left(x^{1}+x^{2}+x^{3}+x^{4}+x^{5}+x^{6}\right)^{3}=\left(x\left(1+x^{1}+x^{2}+x^{3}+x^{4}+x^{5}\right)\right)^{3}=\left(x\left(\frac{1-x^{5+1}}{1-x}\right)\right)^{3}$
Which can be further simplified to:
$x^{3}\left(1-x^{6}\right)^{3}\left(1-x\right)^{-3}$
Very well, we're almost done! From this expression we need to find the coefficient $c_{15}$ of $x^{15}$. The only problematic part seems to be $\left(\frac{1}{1-x}\right)^{3}$, or more generally $\left(\frac{1}{1-x}\right)^{p}$; if you have a closer look you can spot that the Maclaurin series expansion for $\frac{1}{1-x}$ is:
$\frac{1}{1-x}=\sum_{k=0}^{\infty}x^{k}=1+x^{1}+x^{2}+\cdots,$ for $\left | x \right | < 1$
which would be very helpful if there weren't an exponent on that fraction. To find the coefficient we're looking for, one can use the binomial theorem $\left( 1+x \right)^{n}=\sum_{r=0}^{\infty}\binom{n}{r}x^{r}$; putting $-x$ instead of $x$ and $-p$ instead of $n$, we find that the coefficient of $x^{k}$ is given by $\binom{-p}{k}\left ( -1 \right )^{k}$, which after some manipulation ends up being equal to $\binom{k+p-1}{k}$.
That's not a coincidence: the coefficient $c_{k}$ of $\left (1-x\right)^{-p}$ is equal to the number of ways to place $k$ indistinguishable balls into $p$ distinguishable cells. But for the moment let's go back to our original question: how to find the coefficient of $x^{15}$.
Our expression has three parts which are multiplied together; each part contributes terms of the form $c_{i}x^{n}$ from $\left (1-x^{6}\right)^{3}$, $g_{j}x^{m}$ from $\left (1-x\right )^{-3}$, and the obvious $x^{3}$ from, guess what, $x^{3}$.
We need to fulfill this condition:
$x^{3}c_{i}x^{n}g_{j}x^{m}=c_{r}x^{15}$ where $15=m+n+3$
applying the binomial theorem we can find the coefficients $c_{i}=\left (\binom{3}{0},-\binom{3}{1},\binom{3}{2},-\binom{3}{3}\right )$ for $x^{0},x^{6},x^{12},x^{18}$, and using $\binom{k+p-1}{k}$ the coefficients $g_{j}$, so we can put it all together and see what that 15 can be made of:
1. $x^{3}\binom{3}{0}c_{12}x^{12}=91x^{15}$
2. $x^{3}\left (-\binom{3}{1}\right )c_{6}x^{6}=-84x^{15}$
3. $x^{3}\binom{3}{2}c_{0}x^{0}=3x^{15}$
For which we have: $91x^{15}-84x^{15}+3x^{15}=10x^{15}$! That's done: the solution to our problem is 10. There are 10 ways to get 15 from the sum of the upper faces of three rolled dice; we can verify this by expanding our original expression (or by rolling 3 dice).
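Since three dice produce only $6^3 = 216$ ordered outcomes, the count is also easy to confirm by brute-force enumeration (a quick sketch, independent of the generating-function machinery):

```python
from itertools import product

# Enumerate all 216 ordered outcomes of three six-sided dice.
rolls = product(range(1, 7), repeat=3)
ways = sum(1 for roll in rolls if sum(roll) == 15)
print(ways)  # -> 10
```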
Let's see how to use this knowledge to solve larger problems. We can use the exact same procedure developed here to count the ways to obtain any valid sum $t$ for any number $n$ of dice:
using myint = int_fast64_t; // 64-bit: the counts and binomial coefficients overflow 32-bit integers quickly
myint number_of_sums( myint n, myint t )
{
    myint solution{ 0 };
    myint c1{ 0 }; // This is the coefficient from (1-x^6)^n
    myint c2{ 0 }; // Coefficient from (1-x)^(-n)
    for( myint i{ 0 }, k{ 0 }; k + n <= t ; k += 6, ++i )
    {
        c1 = pascal[ n ][ i ];
        if( 0 != i%2 ) {
            c1 *= -1;
        }
        c2 = pascal[ t - k - 1 ][ t - n - k ];
        solution += c1 * c2;
    }
    return solution;
}
Where pascal is Pascal's triangle, which can be built with a simple method like this:
constexpr myint max_size{ 1000 };
myint pascal[ max_size ][ max_size ];

void calculate_pascal()
{
    for( myint i{ 0 } ; i < max_size ; ++i )
    {
        pascal[ i ][ 0 ] = 1;
        pascal[ i ][ i ] = 1;
    }
    for( myint n{ 2 } ; n < max_size ; ++n )
    {
        for( myint k{ 1 } ; k < n ; ++k )
        {
            pascal[ n ][ k ] = pascal[ n-1 ][ k-1 ] + pascal[ n-1 ][ k ];
        }
    }
}
and now we can calculate the number of ways to obtain a sum of 250 when rolling 100 dice. One word of caution: for inputs this large the intermediate binomial coefficients (pascal[ 249 ][ 150 ], for instance) far exceed the range of even 64-bit integers, so a fixed-width myint silently overflows and the printed count cannot be trusted; to get the exact answer an arbitrary-precision integer type is required.
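As a cross-check with exact arithmetic: the closed form we derived, $\sum_{i} (-1)^{i}\binom{n}{i}\binom{t-6i-1}{n-1}$, is easy to evaluate in a language with unbounded integers (Python here), which sidesteps any overflow in the intermediate binomials for large $n$ and $t$:

```python
from math import comb

def dice_sum_count(n, t):
    """Exact number of ways n six-sided dice can sum to t."""
    if t < n or t > 6 * n:
        return 0
    total = 0
    for i in range(n + 1):
        if t - 6 * i - 1 < n - 1:  # remaining binomial would be zero
            break
        total += (-1) ** i * comb(n, i) * comb(t - 6 * i - 1, n - 1)
    return total

print(dice_sum_count(3, 15))     # -> 10, matching the hand computation above
print(dice_sum_count(100, 250))  # exact 100-dice count, no overflow possible
```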
MoDeNa 1.0 Software framework facilitating sequential multi-scale modelling
Surface Tension MoDeNa Module
Integrates Surface Tension model into the MoDeNa software framework. More...
## Modules
Example Workflow
Integrates Surface Tension model into the MoDeNa software framework.
Fortran Source Code
Source Code Documentation.
## Namespaces
SurfaceTension
MoDeNa Module definition of the Surface Tension Model.
SurfaceTension.SurfaceTension
Surrogate Function, Surrogate Model templates and Model Recipe.
## Detailed Description
Integrates Surface Tension model into the MoDeNa software framework.
SurfaceTension.SurfaceTension.m: Surrogate Model
SurfaceTension.SurfaceTension.m2: Surrogate Model
Dependencies of the Surface Tension model
# Surface Tension
## Scope of this module
The surface tension module returns the surface tension of a fluid system in equilibrium with coexisting liquid and vapor phases. The module consists of two main components:
• The detailed model
• The surrogate model
A PC-SAFT based density functional theory is used as the detailed model, the details of which are outlined in [2]. However, here we use a modification which is computationally less demanding and can be applied to systems of more than two components. The detailed model takes temperature as input; pressure is set to one bar in all calculations. The output of the model is the surface tension of the system.

Since surface tension shows an almost perfectly linear temperature dependence, the surrogate model is a simple linear equation in temperature: $\gamma(T) = A + BT$, where A and B are adjustable parameters. A nonlinear least-squares algorithm is used to determine the optimal values of these parameters in order to correlate the results of the detailed model as closely as possible.

In order to compile and run the module, a Fortran compiler, preferably gfortran, as well as the Portable, Extensible Toolkit for Scientific Computation (PETSc) 3.4.5 need to be installed. PETSc 3.4.5 requires specific versions of MPI, BLAS, LAPACK and ScaLAPACK. In order to ensure compatibility, PETSc should be configured to automatically download and install the correct versions, see section Installing PETSc. As there is no backward compatibility between different versions of PETSc, older as well as newer versions will most likely not work. PETSc should be configured with the following options:
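Stepping back from the installation details: because the surrogate $\gamma(T) = A + BT$ is linear in its parameters, the fitting step can be illustrated with ordinary least squares in closed form. This is only a sketch; the temperature and surface-tension values below are invented, not output of the detailed model:

```python
def fit_line(T, gamma):
    """Closed-form ordinary least squares for gamma = A + B*T."""
    n = len(T)
    mT = sum(T) / n
    mG = sum(gamma) / n
    B = (sum((t - mT) * (g - mG) for t, g in zip(T, gamma))
         / sum((t - mT) ** 2 for t in T))
    A = mG - B * mT
    return A, B

# Invented data with an exactly linear gamma(T) trend: A = 72.2, B = -0.14.
T = [280.0, 300.0, 320.0, 340.0]
gamma = [33.0, 30.2, 27.4, 24.6]
A, B = fit_line(T, gamma)
print(round(A, 4), round(B, 4))  # -> 72.2 -0.14
```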
# Exact amount of loop iterations here?
What is the exact number of loop iterations here and how do you find out?
def func(n):
    i = 4
    while i < n:
        print(i)
        i = i + 1
The solution says $\max(n-4, 0)$, which is $\Theta(n)$, but I am confused about how they reached this. I see it starts from 4 and can guess that this relates to the $n-4$, but I would like to be sure.
• Try writing out what happens with both i and n for n = {0, 1, 2, 3, 4, 5}. Is there a pattern? What happens if n <= 4? – Jacob Panikulam Mar 23 '17 at 15:50
• loop breaks if n<=4 no? and i simply increments by 1 – soka Mar 23 '17 at 15:59
• Right. So how many times does the code inside of the loop get run if n = 5? Do your old pal Jake a favor and try making a table. – Jacob Panikulam Mar 23 '17 at 16:10
• maybe once no?? – soka Mar 23 '17 at 16:21
• If the indentation in the code is significant for scope resolution (e.g 'Is this Python?'), then it will iterate forever if $n>4$, because the line $i = i + 1$ isn't inside the body of the $while$ loop. One has to be careful about such things. – PMar Mar 23 '17 at 17:56
It will iterate $n-4$ times if $n>4$. If $n\leq4$ there will be no iterations, since the loop condition is not satisfied on entry. The important thing is that the running time grows like $n$ in the worst case; the notation we use for running times of the form $c_1 n + c_2$ is $\Theta(n)$.
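A quick empirical check of the $\max(n-4, 0)$ formula (a small Python sketch that counts loop-body executions):

```python
def iterations(n):
    """Count how many times the body of the while loop runs."""
    i, count = 4, 0
    while i < n:
        count += 1
        i = i + 1
    return count

for n in range(0, 12):
    assert iterations(n) == max(n - 4, 0)
print([iterations(n) for n in (3, 4, 5, 9)])  # -> [0, 0, 1, 5]
```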
• how did you find n-4? the answer is max (n-4 ,0 ) does 0 relate to when n <=4? ...what does the max mean – soka Mar 23 '17 at 16:24
• It means there cannot be any negative number of iterations. – kntgu Mar 23 '17 at 16:26
• the 0 means that? or max.. can there ever be a negative # of iterations?? – soka Mar 23 '17 at 16:34
Seeing that $i$ starts at 4 ($i = 4$): if it were $i = 0$, the loop would execute $n$ times.
Let $A$ and $B$ be the sets of values $i$ takes in the two cases.
$A = \{ 0,1,2,3, ... , n-1 \}$
$B = \{ 4,5,6,7, ... , n-1 \}$
The size of the set $A$ is $n$; the set $B$ has the same elements as $A$ minus the 4 numbers $\{ 0,1,2,3 \}$, therefore the size of $B$ is $n-4$.
For $n-4$ to be $\Theta(n)$, there must exist $c_1, c_2$ and $n_0$ such that $c_1 n \leq n-4 \leq c_2 n$ for all $n \geq n_0$.
• I don't understand what you mean by "Let $A$ and $B$ be the iterations of $i$", followed by "$A=0, 1, \dots$". i never takes the value 0. – David Richerby Apr 23 '17 at 10:17
• Just comparing, if $i$ started from 0. To show the difference between the size of the sets. – Vinícius G. Santos Apr 23 '17 at 14:40
Assume that the matrix $A$ given below has a factorization of the form $LU=PA$, where $L$ is lower-triangular with all diagonal elements equal to 1, $U$ is upper-triangular, and $P$ is a permutation matrix. For
$A = \begin{bmatrix} 2 & 5 & 9 \\ 4 & 6 & 5 \\ 8 & 2 & 3 \end{bmatrix}$
Compute $L, U,$ and $P$ using Gaussian elimination with partial pivoting.
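Without giving away the by-hand elimination, here is a minimal Python sketch of Gaussian elimination with partial pivoting that can be used to check an answer (indices are 0-based; this is an illustrative implementation, not a library routine):

```python
def lu_partial_pivot(A):
    """Return P, L, U with L*U = P*A; L has a unit diagonal."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[0.0] * n for _ in range(n)]
    perm = list(range(n))
    for k in range(n):
        # Partial pivoting: bring the largest |entry| in column k to row k.
        p = max(range(k, n), key=lambda i: abs(U[i][k]))
        if p != k:
            U[k], U[p] = U[p], U[k]
            L[k], L[p] = L[p], L[k]
            perm[k], perm[p] = perm[p], perm[k]
        L[k][k] = 1.0
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]   # elimination multiplier
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    P = [[1.0 if perm[i] == j else 0.0 for j in range(n)] for i in range(n)]
    return P, L, U

A = [[2.0, 5.0, 9.0], [4.0, 6.0, 5.0], [8.0, 2.0, 3.0]]
P, L, U = lu_partial_pivot(A)
# Verify L*U == P*A (up to rounding).
for i in range(3):
    for j in range(3):
        pa = sum(P[i][r] * A[r][j] for r in range(3))
        lu = sum(L[i][r] * U[r][j] for r in range(3))
        assert abs(pa - lu) < 1e-12
```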
# Arithmetic with big numbers (Part 2)
Ready for an elementary arithmetic problem? Here it is:
Nothing to it… just multiply the two numbers. Of course, we’d rather not multiply them by hand, so let’s use a calculator instead:
Uh oh… the calculator doesn’t give the complete answer. It does return the first nine significant digits, but it doesn’t return all 16 digits. Indeed, we can’t be sure that the final 5 in the answer is correct because of rounding.
So now what do we do (other than buy a more expensive calculator)?
In yesterday's post, I posed a similar problem involving addition. Adding two big numbers by hand is no big deal. However, multiplying two big numbers, one digit at a time, would be tedious!
When I pose this question to students, the knee-jerk reaction is to groan when facing the prospect of multiplying these two big numbers by hand. However, it is possible to use modern technology to make ordinary grade-school multiplication move a lot quicker. Perhaps the fastest way to do this is to split the numbers into blocks of five digits instead of the usual three:
Now we proceed as if each block of five digits was a single digit. We begin with the last block of digits on the second row, which is 48974. First, we multiply 6797 and 48974 using a calculator. Because most modern scientific calculators have a 10-digit display, we can be assured that the complete answer will be shown. (This is why I chose to divide the numbers using blocks of five digits and not six or more.) The last five digits in the answer are written down; the more significant digits are carried.
Next, we multiply 2236 and 48974 and then add the number that was carried.
We then repeat using 2449, the next (and final) block of digits on the second row. First, we multiply 6797 and 2449 using a calculator. The last five digits in the answer are written down; the more significant digits are carried.
Next, we multiply 2236 and 2449 and then add the number that was carried.
Finally, it remains to add these two partial products to obtain the final product. For this problem, this can be accomplished with only a single addition: the block of digits 76278 simply carries down to the final answer, and so we can start by adding the second and third blocks of digits. As this sum is less than $10^{10}$, there is no digit to carry, and so the leading 54 also carries down to the final answer.
The above technique is logically equivalent to using base 100,000 as opposed to the customary use of base 10 arithmetic. So while multiplying two numbers in the billions still takes some time, judiciously using a calculator makes this exercise go a lot quicker than the ordinary grade-school method of multiplying one digit at a time.
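The block-by-block procedure above is easy to mechanize. Here is a minimal Python sketch of the same base-100,000 long multiplication (the specific numbers from the post are not reproduced here, so the example uses arbitrary inputs):

```python
def blocks(n, base=10**5):
    """Split n into base-100000 'digits', least significant block first."""
    out = []
    while n:
        n, r = divmod(n, base)
        out.append(r)
    return out or [0]

def multiply(x, y, base=10**5):
    """Grade-school long multiplication, treating each 5-digit block as one digit."""
    xb, yb = blocks(x, base), blocks(y, base)
    prod = [0] * (len(xb) + len(yb))
    for i, xd in enumerate(xb):
        carry = 0
        for j, yd in enumerate(yb):
            total = prod[i + j] + xd * yd + carry  # small enough for a 10-digit display
            carry, prod[i + j] = divmod(total, base)
        prod[i + len(yb)] += carry
    # reassemble the blocks into a single integer
    return sum(d * base**k for k, d in enumerate(prod))
```

Each inner product `xd * yd` involves at most two 5-digit blocks, so it never exceeds ten digits, exactly the point made above about calculator displays.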
|
|
4.6. Steel Frame: Bayesian Calibration using TMCMC¶
Problem files qfem-0014/
4.6.1. Outline¶
In this example, Bayesian estimation is used to estimate the lateral story stiffnesses of the two stories of a simple steel frame, given data about its mode shapes and frequencies. The transitional Markov chain Monte Carlo algorithm is used to obtain samples from the posterior probability distribution of the lateral story stiffnesses.
4.6.2. Problem description¶
This example is provided by Professor Joel Conte and his doctoral students Maitreya Kurumbhati and Mukesh Ramancha from UC San Diego.
4.6.2.1. Structural system¶
Consider the two-story building structure shown in Fig. 4.6.2.1.1. Each floor slab is made of a composite metal deck and is supported on the steel columns. These four columns are fixed at the base. The story height is $$h = 10'$$, and the slab lengths are $$33'4''$$ and $$30'$$ along the X and Y directions, respectively. $$m_1 = 0.52 \ \ kips-s^2/in$$ and $$m_2 = 0.26 \ \ kips-s^2/in$$ are the total masses of floor 1 and floor 2, respectively. For the steel columns, Young’s modulus is $$E_s^{col} = 29000 \ \ ksi$$ and the moment of inertia is $$I_{yy}^{col} = 1190 \ \ in^4$$.
Fig. 4.6.2.1.1 Steel structural system being studied in this example.
4.6.2.2. Finite element model¶
In this example, only the response of the system along the X direction is considered. For modeling purposes, the floor diaphragms are assumed rigid both in plane and in flexure, and the columns are assumed axially rigid. The structure is modeled as a two-story 2D shear building model as shown in Fig. 4.6.2.2.1. The finite element (FE) software framework OpenSees is utilized for modeling and analysis of the considered structural system. The developed FE model consists of 6 nodes and 6 elastic beam-column elements. To simulate the flexural rigidity of the floors, the moment of inertia $$I_{yy}$$ of the horizontal elements is set to a very large number. The horizontal degrees of freedom of node 3 and node 4 are constrained to be equal throughout the analysis to mimic the axial rigidity of floor 1. Similar modeling is performed for floor 2. The vertical displacements of nodes 3, 4, 5, and 6 are constrained to be zero to model the axial rigidity of the columns (see roller supports in Fig. 4.6.2.2.1). After making these modeling assumptions, the only active degrees of freedom of the FE model are the horizontal displacements (translations) of floors 1 and 2, $$u_1$$ and $$u_2$$, respectively, as shown in Fig. 4.6.2.2.1.
Translational masses of $$m_1/2$$ and $$m_2/2$$ are lumped at the nodes of floors 1 and 2, respectively, along the X direction. The lateral story stiffnesses $$k_1$$ and $$k_2$$ of stories 1 and 2, respectively, are equal to $$48 E_s^{col} I_{yy}^{col}/{h^3}$$. Hence, the story stiffnesses $$k_1$$ and $$k_2$$ are both equal to 958.61 kips/in.
Fig. 4.6.2.2.1 Model of the structural system used in finite element analysis.
4.6.2.3. Natural vibration frequencies and mode shapes¶
Since the shear building model shown in Fig. 4.6.2.2.1 has only two degrees of freedom, it has two natural modes of vibration. Let $$\lambda_i$$ and $$\phi_i$$ be the $$i^{th}$$ eigenvalue and its corresponding eigenvector, respectively. The two eigenvalues and eigenvectors are obtained by solving the generalized eigenvalue problem of the considered system in OpenSees. The following two eigenvalues are obtained:
(4.6.2.3.1)$\begin{array}{l} \lambda_1 = 1084.06 (rad/s)^2, \quad \lambda_2 = 6318.34 (rad/s)^2 \end{array}$
The corresponding eigenvectors (see degrees of freedom u1 and u2 in Fig. 4.6.2.2.1) are given by:
(4.6.2.3.2)$\begin{split}\phi_1 = \begin{pmatrix}\phi_{11} \\ \phi_{12}\end{pmatrix} = \begin{pmatrix}1.00 \\ 1.41\end{pmatrix} in, \qquad \phi_2 = \begin{pmatrix}\phi_{21} \\ \phi_{22}\end{pmatrix} = \begin{pmatrix}1.00 \\ -1.41\end{pmatrix} in\end{split}$
The eigenvectors in (4.6.2.3.2) are normalized such that the first component is 1.0. The two vibration mode shapes are shown in Fig. 4.6.2.3.1.
Fig. 4.6.2.3.1 Natural vibration mode shapes: (a) Mode 1 (b) Mode 2.
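The generalized eigenvalue problem of this 2-DOF shear building can be checked outside OpenSees with a short script. The NumPy sketch below (an illustration, not the OpenSees model) recovers the mode shapes of (4.6.2.3.2) exactly (the second components are plus and minus the square root of 2, i.e. about 1.41) and a first eigenvalue within about half a percent of the reported 1084.06 (rad/s)^2; the small gap is presumably due to rounding in the reported masses:

```python
import numpy as np

# Story properties from the example: E in ksi, I in in^4, h in inches (10 ft)
E, I, h = 29000.0, 1190.0, 120.0
k = 48 * E * I / h**3              # lateral story stiffness, ~958.61 kips/in
m1, m2 = 0.52, 0.26                # floor masses, kips-s^2/in

K = np.array([[2 * k, -k], [-k, k]])   # stiffness matrix of the 2-DOF shear building
M = np.diag([m1, m2])

# Generalized eigenproblem K*phi = lam*M*phi via the standard form inv(M)*K
lam, vec = np.linalg.eig(np.linalg.solve(M, K))
lam, vec = lam.real, vec.real
order = np.argsort(lam)
lam, vec = lam[order], vec[:, order]
phi = vec / vec[0]                 # normalize so the first component is 1.0
```

Because $$m_1 = 2 m_2$$ and $$k_1 = k_2$$, the eigenvalue ratio is exactly $$3 + 2\sqrt{2} \approx 5.83$$, consistent with the reported pair 1084.06 and 6318.34.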
4.6.2.4. Parameters to be estimated¶
The FE model of a real structural system often consists of parameters that are unknown to some degree. For example, the parameters related to the mass or stiffness or damping of the system might be unknown. The goal of parameter estimation is to estimate such unknown parameters using some measurement data. The measurement data is obtained by using sensors deployed on the real system. To demonstrate the parameter estimation concept/framework on the considered two-story building system, the first- and second-story stiffnesses, $$k_{1}$$ and $$k_{2}$$, are assumed to be unknown while the mass parameters are assumed to be known. In this illustrative example, the unknown parameter vector $$\mathbf{\theta}=(k_{1}, k_{2})^T$$ is estimated using the first eigenvalue and the first eigenvector data.
4.6.2.5. Synthetic data generation¶
In a real-world application, data on the first eigenvalue and the first eigenvector would consist of system identification results obtained from sensor measurement data. Note that the considered two-story building structure (see Fig. 4.6.2.1.1) is used here as a conceptual/pedagogical example and does not exist in the real world. Therefore, sensor measurement data cannot be collected from the system. As a substitute, measurement data (in the form of estimated first eigenvalue and first eigenvector) are artificially simulated for the purpose of this example, i.e., system identification results for $$\lambda_i$$ and $$\phi_i$$ from multiple ambient vibration datasets are simulated. To simulate these system identification results (i.e., measurement data), an eigenvalue analysis of the system is performed assuming the following true values of the story stiffnesses:
(4.6.2.5.1)$\mathbf{\theta}^{true} = \left(k_{1}^{true}, k_{2}^{true}\right)^T; \quad k_{1}^{true} = k_{2}^{true} = 958.61 \ \ kips/in.$
The corresponding first eigenvalue and first eigenvector are:
(4.6.2.5.2)$\mathbf{y}^{true} = \left(\lambda_1^{true} = 1084.06 (rad/s)^2, \phi_{12}^{true} = 1.41 in\right)^T$
To simulate system identification results (measurement data), random estimation errors are added to $$\lambda_1^{true}$$ and $$\phi_{12}^{true}$$. The random estimation errors for $$\lambda_1^{true}$$ and $$\phi_{12}^{true}$$ are assumed to be statistically independent, zero-mean Gaussian with 5% coefficient of variation (relative to $$\lambda_1^{true}$$ and $$\phi_{12}^{true}$$). Thus, the standard deviations of the system identification errors for $$\lambda_1^{true}$$ and $$\phi_{12}^{true}$$ are
(4.6.2.5.3)$\sigma_{\lambda_1}^{true} = 0.05*\lambda_1^{true} = 54.203; \quad \sigma_{\phi_{12}}^{true} = 0.05*\phi_{12}^{true} = 0.0705$
Now five independent sets of system identification results (measurement data sets) are simulated as:
(4.6.2.5.4)$\begin{split}\begin{array}{l} \lambda_{1}^{(1)}=1025.21, \quad \lambda_{1}^{(2)}=1138.11, \quad \lambda_{1}^{(3)}=1099.39, \quad \lambda_{1}^{(4)}=1002.41, \quad \lambda_{1}^{(5)}=1052.69 \\ \phi_{12}^{(1)}=1.53, \quad \phi_{12}^{(2)}=1.24, \quad \phi_{12}^{(3)}=1.38, \quad \phi_{12}^{(4)}=1.50, \quad \phi_{12}^{(5)}=1.35 \end{array}\end{split}$
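The generation recipe above can be sketched as follows. The particular random draws behind (4.6.2.5.4) are not reproducible, so the script below only illustrates the procedure, using a fixed seed of its own:

```python
import numpy as np

rng = np.random.default_rng(0)     # arbitrary seed; will not match the published draws
lam1_true, phi12_true = 1084.06, 1.41
sigma_lam = 0.05 * lam1_true       # 54.203, as in (4.6.2.5.3)
sigma_phi = 0.05 * phi12_true      # 0.0705

n_sets = 5
lam1_data = lam1_true + rng.normal(0.0, sigma_lam, size=n_sets)
phi12_data = phi12_true + rng.normal(0.0, sigma_phi, size=n_sets)

# Write the five (lambda_1, phi_12) pairs in the eigData.csv layout described below
np.savetxt("eigData.csv", np.column_stack([lam1_data, phi12_data]),
           fmt="%.2f", delimiter=", ")
```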
4.6.2.6. Parameter estimation setup¶
In this example, it is assumed that the story stiffnesses for the first and second story ($$k_{1}$$ and $$k_{2}$$ respectively) are unknown. The goal is to use the simulated data of eigenvalue and eigenvector measurements to obtain the posterior probability distribution of the story stiffnesses by Bayesian calibration. We will employ the Transitional Markov Chain Monte Carlo (TMCMC) algorithm to sample from the joint posterior probability distribution of $$k_{1}$$ and $$k_{2}$$.
We define the following prior probability distributions for the unknown quantities of interest:
1. First story stiffness, k1: Uniform distribution with a lower bound $$(L_B)$$ of $$766.89 \ \mathrm{kips/in}$$, and an upper bound $$(U_B)$$ of $$2108.94 \ \mathrm{kips/in}$$,
2. Second story stiffness, k2: Uniform distribution with a lower bound $$(L_B)$$ of $$383.44 \ \mathrm{kips/in}$$, and an upper bound $$(U_B)$$ of $$1150.33 \ \mathrm{kips/in}$$.
A Gaussian (a.k.a. Normal) likelihood model is employed by default in quoFEM. This is done by assuming that the errors (i.e. the differences between the finite element prediction of the modal properties and the simulated measurement data) follow a zero-mean Gaussian distribution. The components of the error vector are assumed to be statistically independent. The errors in the prediction of a particular response quantity are assumed to be identically distributed across experiments. Under these assumptions, the standard deviations of the error for each response quantity are the only unknown parameters of the Gaussian distribution employed in the likelihood model, and their values are also estimated from the data. Therefore, in this problem, there are two additional parameters $$\sigma_{\lambda_{1}}$$ and $$\sigma_{\phi_{12}}$$, which are also estimated. The prior probability distributions for these additional parameters are automatically set by quoFEM.
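Under these assumptions, the Gaussian log-likelihood can be written down directly. The NumPy sketch below is an illustration of that likelihood model, not quoFEM's internal code; it evaluates the log-likelihood of the five simulated data sets for a candidate parameter vector $$(k_1, k_2, \sigma_{\lambda_1}, \sigma_{\phi_{12}})$$:

```python
import numpy as np

def model_response(k1, k2, m1=0.52, m2=0.26):
    """First eigenvalue and phi_12 (first eigenvector scaled so phi_11 = 1)."""
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    M = np.diag([m1, m2])
    lam, vec = np.linalg.eig(np.linalg.solve(M, K))
    i = int(np.argmin(lam.real))
    return float(lam.real[i]), float(vec.real[1, i] / vec.real[0, i])

def log_likelihood(theta, data):
    """theta = (k1, k2, sigma_lam, sigma_phi); data rows are (lambda_1, phi_12)."""
    k1, k2, s_lam, s_phi = theta
    lam, phi = model_response(k1, k2)
    r_lam = (data[:, 0] - lam) / s_lam     # standardized eigenvalue residuals
    r_phi = (data[:, 1] - phi) / s_phi     # standardized eigenvector residuals
    n = len(data)
    return (-0.5 * np.sum(r_lam**2 + r_phi**2)
            - n * np.log(2.0 * np.pi * s_lam * s_phi))

# The five simulated measurement sets from the example
data = np.array([[1025.21, 1.53], [1138.11, 1.24], [1099.39, 1.38],
                 [1002.41, 1.50], [1052.69, 1.35]])
```

TMCMC then draws samples whose density is proportional to this likelihood times the uniform priors defined above.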
4.6.2.7. Files required¶
The exercise requires two script files. The user should download these files and place them in a new folder.
Warning
Do not place the files in your root, Downloads, or Desktop folder: when the application runs, it copies the contents of the directories and subdirectories containing these files multiple times. If you are like us, your root, Downloads, or Documents folder contains a lot of files.
1. model.tcl - This is an OpenSees script written in tcl which builds the finite element model and conducts the analysis. This script creates a file called results.out when it runs. This file contains the first eigenvalue and the second component of the first eigenvector obtained from finite element analysis of the structure.
2. eigData.csv - This is the calibration data file, which contains the synthetically generated data in five rows and two columns, the contents of which are shown below.
1025.21, 1.53
1138.11, 1.24
1099.39, 1.38
1002.41, 1.50
1052.69, 1.35
Note
Since the tcl script creates a results.out file when it runs, no postprocessing script is needed.
4.6.3. UQ workflow¶
Note
Selecting the Steel Frame: Bayesian Calibration using TMCMC example in the quoFEM Examples menu will autopopulate all the input fields required to run this example. The procedure outlined below demonstrates how to manually set up this problem in quoFEM.
The steps involved are as follows:
1. Start the application and the UQ panel will be highlighted. In the UQ Engine drop down menu, select the UCSD_UQ engine. In the Method category drop down menu the Transitional Markov chain Monte Carlo option will be highlighted. Enter the values in this panel as shown in the figure below. If manually setting up this problem, choose the path to the file containing the calibration data on your system.
2. Next select the FEM panel from the input panel selection. This will default to the OpenSees FEM engine. In the Input Script field, enter the path to the model.tcl file or select Choose and navigate to the file.
3. Next select the RV tab from the input panel. This panel should be pre-populated with two random variables named k1 and k2. If not, press the Add button twice to create two fields to define the input random variables. Enter the same variable names (k1 and k2), as required in the model.tcl script.
For each variable, specify the prior probability distribution - from the Distribution drop down menu, select Uniform and then provide the lower bounds and upper bounds shown in the figure below.
4. In the QoI panel, enter 2 variable names for the two quantities output from the model.
Note
For this particular problem setup in which the user is not using a postprocessing script, the user may specify any names for the QoI variables. They are only being used by Dakota to return information on the results.
5. Next click on the Run button. This will cause the backend application to launch the UCSD_UQ engine, which performs Bayesian calibration using the TMCMC algorithm. When done, the RES tab will be selected and the results will be displayed as shown in the figure below. The results show the first four moments of the posterior marginal probability distribution of $$k_1$$ and $$k_2$$. The true value of both $$k_1$$ and $$k_2$$ is 958.61 kips/in. Also shown are the moments of the estimated additional error parameters for each response quantity and, finally, the moments of the outputs corresponding to the samples from the posterior probability distribution of the parameters.
If the user selects the Data Values tab in the results panel, they will be presented with both a graphical plot and a tabular listing of the data.
Various views of the graphical display can be obtained by left and right clicking in the columns of the tabular data.
If a single column of the tabular data is clicked with both the right and left buttons, a histogram and a CDF will be displayed, as shown in the figures below.
|
|
determinantal variety
For $A\in M_{n}(k)$ and $v\in M_{n \times 1}(k)$, $X:=\{(A,v) \in \mathbb{A}^{n^{2}} \times \mathbb{A}^n \mid {\rm rank}(Av \mid v) \leq 1\}$ is an affine algebraic set in $\mathbb{A}^{n^2}\times \mathbb{A}^n$. Can we say that $X$ is a determinantal variety? If $X$ is a determinantal variety, then can we say that $X$ is irreducible, since the maximal rank of $(Av \mid v)$ is $2$?
-
I don't think $X$ is a determinantal variety. Think about the case $n=2$. Denote $A=(a_{ij})$ and $v=(v_{1}, v_{2})^{t}$. In this case, $X$ is the zeros of the polynomial $(a_{11}v_{1} + a_{12}v_{2})v_{2} - (a_{21}v_{1} + a_{22}v_{2})v_{1}$, which is not equal to the zeros of $\det(A), \det \left( \begin{array}{cc} a_{11} & v_{1} \\ a_{21} & v_{2} \end{array} \right), \det \left( \begin{array}{cc} a_{12} & v_{1} \\ a_{22} & v_{2} \end{array} \right)$.
-
|
|
# Colour stability and opacity of resin cements and flowable composites for ceramic veneer luting after accelerated ageing
Journal of Dentistry, 2011-11-01, Volume 39, Issue 11, Pages 804-810, Copyright © 2011 Elsevier Ltd
## Objectives
Colour changes of the luting material can become clinically visible affecting the aesthetic appearance of thin ceramic laminates. The aim of this in vitro study was to evaluate the colour stability and opacity of light- and dual-cured resin cements and flowable composites after accelerated ageing.
## Methods
The luting agents were bonded (0.2 mm thick) to ceramic disks (0.75 mm thick) built with the pressed-ceramic IPS Empress Esthetic ( n = 7). Colour measurements were determined using a FTIR spectrophotometer before and after accelerated ageing in a weathering machine with a total energy of 150 kJ. Changes in colour (Δ E ) and opacity (Δ O ) were obtained using the CIE L * a * b * system. The results were submitted to one-way ANOVA, Tukey HSD test and Student’s t test ( α = 5%).
## Results
All the materials showed significant changes in colour and opacity. The Δ E of the materials ranged from 0.41 to 2.40. The highest colour changes were attributed to RelyX ARC and AllCem, whilst lower changes were found in Variolink Veneer, Tetric Flow and Filtek Z350 Flow. The opacity of the materials ranged from −0.01 to 1.16 and its variation was not significant only for Opallis Flow and RelyX ARC.
## Conclusions
The accelerated ageing led to colour changes in all the evaluated materials, although they were considered clinically acceptable (Δ E < 3). Amongst the dual-cured resin cements, Variolink II demonstrated the highest colour stability. All the flowable composites showed proper colour stability for the luting of ceramic veneers. After ageing, an increase in opacity was observed for most of the materials.
## Introduction
The properties of ceramic veneers, such as colour stability, mechanical strength, compatibility with the periodontal tissues, clinical longevity, and an enamel-like appearance due to translucency and surface texture, make them an excellent choice for aesthetic treatments. These materials are excellent for corrections of anatomical malformations with or without tooth preparation, in cases where the patient does not have severe discoloration. Currently, there are many commercially available ceramic materials, which can be used to produce laminate veneers with thicknesses ranging from 0.3 to 0.7 mm. Colour changes of the luting agent can become visible, affecting the aesthetic appearance of the final restoration.
The currently available resin cements specifically used for luting ceramic veneers are usually activated by visible light. The main advantages of these cements are their colour stability and longer working time, compared to chemically and dual-cured resin cements. The use of this type of cement makes it easier to remove any excess material before light-curing and reduces the finishing time required after cementation of the restorations. Besides the ease of use, studies have shown that the excellent colour stability of these materials is due to the absence of the amine as a self-curing catalyst, which could cause colour changes in the material over time.
Dual-cured resin cements combine some of the desirable characteristics of light- and chemically cured resin cements. Besides the advantage of allowing further chemical curing in deeper areas where the light is attenuated, dual-cured resin cements have also shown superior mechanical properties, such as flexural strength, elastic modulus, hardness and degree of conversion in comparison to the isolated light activation or exclusively chemical curing. However, dual-cured resin cements also contain aromatic tertiary amine in their formulation, which could compromise the colour stability of the cemented restorations over the long-term.
In order to benefit from the physical properties of light-activated composite resins, as well as an improved cost-benefit ratio compared to resin cements, some practitioners have been using flowable resin composites for the cementation of ceramic veneers. These materials, developed in 1996, present the same particle size as hybrid composites but with a reduced viscosity of the mixture and improved handling properties. However, until recently, their use as a luting agent had only been evaluated by one in vitro study, in which their bond strength was compared to that of dual-cured resin cements. Hence, the optical properties of these materials, with respect to colour stability, have not yet been investigated.

The accelerated ageing process has been used to simulate oral conditions over a relatively long service time. The most commonly used tests for ageing of resin-based materials are prolonged water storage and exposure to ultraviolet light.
With developments in new formulations and polymerization techniques, clinical longevity and colour stability of resin cements are expected to improve. However, changes in the opacity of these materials have been scarcely investigated. On one hand, the role of opacity on the aesthetic performance of ceramic veneers can rely on the ability of the cement to cover underlying tooth discolorations, on the other hand, it may render the restoration less lively. Thus, it becomes relevant to investigate this optical property for adequate selection of luting agent, as well as its long-term evaluation by artificial ageing methods.
The aim of this paper was to evaluate the colour stability and variation in opacity of dual- and light-cured resin cements and flowable composites after accelerated ageing. The null hypotheses tested in this study were: (a) The colour stability and opacity of different luting agents would not be affected by accelerated ageing; (b) the colour stability and opacity of the flowable composites used as cements would be similar to the dual- and light-cured cements; and (c) the colour stability and opacity of the tested materials would remain within a level of clinical acceptance after accelerated ageing.
## Materials and methods
Three types of materials (dual-cured resin cement, light-cured resin cement and flowable composites) as well as 3 brands of each type from different manufacturers were investigated for the cementation of laminate veneers ( Table 1 ). All the materials were handled in accordance with the manufacturers’ instructions for the cementation of ceramic veneers using shade A3 Vita for standardization purposes.
Table 1
Materials used in the study.
Material Manufacturer Type Composition Filler
RelyX ARC (RA) 3M-ESPE, St. Paul, MN, USA Dual-cured resin cement Bis-GMA, TEGDMA, zirconia/silica filler, pigments, benzoyl peroxide, amine and photoinitiator. 67.5 wt%
AllCem (AC) FGM Dental Products (Joinville, SC, Brazil) Dual-cured resin cement Bis-GMA, Bis-EMA, TEGDMA, Ba–Al-silicate glass, silane treated silica, benzoyl peroxide, co-initiators and camphorquinone. 68 wt%
Variolink II (VA) Ivoclar Vivadent, Schaan, Liechtenstein Dual-cured resin cement Bis-GMA, UDMA, TEGDMA, barium glass, ytterbium trifluoride, Ba–Al-fluorosilicate glass, zirconia/silica, benzoyl peroxide, initiators, stabilizers and pigments. 71 wt%
RelyX Veneer (RV) 3M-ESPE, St. Paul, MN, USA Light-cured resin cement Bis-GMA, TEGDMA, zirconia/silica filler. 66 wt%
Experimental Veneer (EV) FGM Dental Products (Joinville, SC, Brazil) Light-cured resin cement Bis-GMA, Bis-EMA, TEGDMA, Ba–Al-silicate glass, camphorquinone, co-initiators, stabilizers and pigments. 72 wt%
Variolink Veneer (VV) Ivoclar Vivadent, Schaan, Liechtenstein Light-cured resin cement UDMA, TEGDMA, silicon dioxide, ytterbium trifluoride, initiators, stabilizers and pigments. 40 vol%
Filtek Z350 Flow (FZ) 3M-ESPE, St. Paul, MN, USA Flowable composite Bis-GMA, Bis-EMA, TEGDMA, zirconia/silica filler. 65 wt%
Opallis Flow (OF) FGM Dental Products (Joinville, SC, Brazil) Flowable composite Bis-GMA, Bis-EMA, TEGDMA, Ba–Al-fluorosilicate glass, silicon dioxide, camphorquinone, co-initiators, stabilizers and pigments. 72 wt%
Tetric Flow (TF) Ivoclar Vivadent, Schaan, Liechtenstein Flowable composite Bis-GMA, UDMA, TEGDMA, silicon dioxide, ytterbium trifluoride, barium glass, Ba–Al-fluorosilicate glass, silicon dioxide. 64.6 wt%
Bis-GMA: bisphenol-A glycidyldimethacrylate, TEGDMA: triethyleneglycol dimethacrylate; UDMA: urethane dimethacrylate; Bis-EMA: bisphenol-A ethoxylated dimethacrylate.
## Simulation of ceramic veneers
Sixty-three disks were fabricated with ceramic-pressed IPS Empress ® Esthetic (Ivoclar Vivadent AG., Schaan, Liechtenstein) in shade ETC 2. The ceramic surfaces were finished and polished using SiC papers from #280 to #2200 in order to assure surface standardization. The discs were 16 mm in diameter and 0.75 mm in thickness. The specimens’ dimensions were confirmed using a digital caliper (Mitutoyo Corp., Tokyo, Japan) at three points on the disc.
## Evaluation of colour stability
To analyse the colour stability, the luting agents were bonded to the previously made ceramic discs. On each disc, the area designated for contact with the cement material was prepared with 10% hydrofluoric acid (FGM Dental Materials, Joinville, SC, Brazil) in gel, applied for 1 min, then rinsed with water for 20 s and dried with oil-free air. Following this, a mono-component silane (RelyX Ceramic Primer – 3M ESPE, St. Paul, MN, USA) was applied to the conditioned surface and left undisturbed for 1 min prior to the application of the catalyst (Adper Scotchbond – 3M ESPE, St. Paul, MN, USA). After the manipulation according to the manufacturer’s specifications, each material was inserted onto a Teflon mould (15 mm × 0.2 mm) which had three triangle-shaped grooves in the periphery to allow the flow of the excess of material. The mould was placed over an acetate sheet placed on a glass plate with a black background to avoid light reflection. The prepared ceramic disc was then placed above the mould and pressed with pliers for 30 s to ensure a uniform cement thickness. The cement was light-cured directly on the ceramic disc using a LED curing unit (Bluephase, Ivoclar Vivadent, Schaan, Liechtenstein) for 40 s, at four equidistant points of the disc. The light irradiance was measured with a radiometer (LED Demetron, Demetron Research Corp., Danbury, CT, USA) and confirmed for all groups at 850 mW/cm 2 . The specimens of each experimental group ( n = 7) were stored in a lightproof container at 37 °C under high-humidity condition for 24 h.
After this period, the initial colour measurements (baseline) were determined using a spectrophotometer (Model SP62, X-Rite, Grandville, MI, USA) after calibration using a white standard (calibration plate, L * = 95.17, a * = −0.96, b * = +0.46). Each specimen was rotated 90 degrees clockwise in the spectrophotometer and the measurements were performed in triplicate. The colour readings were performed according to the CIE L * a * b *. The specimens were initially placed on a black background, with the ceramic always facing the measurement site, and then on a white background, in order to prevent the potential effects of absorption from any other colour parameters being measured.
Following the initial colour measurements, the specimens were mounted on an acrylic panel and subjected to an accelerated ageing process in a weathering machine (Ci4000 Weather-Ometer, Atlas Electronic Devices, Chicago, IL, USA), according to ASTM G155, Cycle 1. The equipment performed a continuous irradiation of light from a xenon arc bulb with a borosilicate glass filter to 0.35 W/m 2 /nm at a wavelength of 340 nm. The black panel temperature was 63 ± 2 °C and the cycles were set to 102 min of light plus 50% humidity and 18 min of light plus water spray. The specimens were aged for 120 h at a total energy of 150 kJ.
A new spectrophotometric evaluation was performed under the same initial conditions, following the accelerated ageing process in order to determine both the degree of colour change and opacity of the materials tested. The colour stability was determined by colour differences (Δ E ) using the coordinates L *, a * and b * in the baseline (b) and following accelerated ageing (a), as follows:
$\Delta L^{*}=L_{a}^{*}-L_{b}^{*}$
$\Delta a^{*}=a_{a}^{*}-a_{b}^{*}$
$\Delta b^{*}=b_{a}^{*}-b_{b}^{*}$
The colour change (Δ E ) was calculated using the following formula:
$\Delta E={\left[{\left(\Delta L^{*}\right)}^{2}+{\left(\Delta a^{*}\right)}^{2}+{\left(\Delta b^{*}\right)}^{2}\right]}^{1/2}$
The opacity parameter (OP) was determined as a percentage of L * values, obtained from the measurements using a black background and a white background, before and after the accelerated ageing and in accordance with the following equation:
$\text{OP}=\frac{L^{*}\ \text{with a black background}}{L^{*}\ \text{with a white background}}\times 100$
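The two formulas translate directly into code. The sketch below is a generic implementation of the ΔE and opacity computations; the numeric example in the final comment is hypothetical, not a measurement from this study:

```python
import numpy as np

def delta_e(lab_before, lab_after):
    """CIE L*a*b* colour difference between two measurements (L*, a*, b*)."""
    d = np.asarray(lab_after, dtype=float) - np.asarray(lab_before, dtype=float)
    return float(np.sqrt(np.sum(d**2)))

def opacity(L_black, L_white):
    """Opacity parameter: L* over a black background relative to a white one, in %."""
    return 100.0 * L_black / L_white

# Hypothetical illustration only (not data from this study):
# delta_e([75.0, 1.2, 14.0], [74.8, 1.1, 12.0]) -> about 2.01
```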
## Statistical analysis
One-way ANOVA was performed for both colour change and opacity values. The Tukey HSD test was used for multiple comparisons between groups and Student’s t test was used for the comparisons of opacity before and after accelerated ageing. All tests were performed with a significance level of 5% using the statistical package SPSS 17.0 (SPSS Inc., Chicago, IL, USA).
## Colour stability
Although all materials were used in shade A3, the initial values of L *, a * and b * suggested slight colour variations amongst the cement/ceramic sets.
The results of ANOVA showed significant differences amongst the tested materials ( p < 0.05). Table 2 presents the results of colour change for the cementing materials evaluated during this study. Amongst the dual cements, RelyX ARC and AllCem showed the greatest changes in colour, whilst Variolink II was similar to the Opallis Flow composite. Amongst the light-activated cements, RelyX Veneer and Experimental Veneer showed no significant differences in colour changes between each other, although they exhibited greater colour changes than Variolink Veneer. The latter represented the lowest Δ E means, together with Filtek Z350 Flow and Tetric Flow composites ( p > 0.05).
Table 2
Mean values (SD) of Δ L *, Δ a *, Δ b * and Δ E * of the luting materials evaluated ( p < 0.05).
Type Materials Δ L * Δ a * Δ b * Δ E
Dual-cured resin cements RelyX ARC 0.15 −0.33 −2.38 2.40 (0.05) a
AllCem −1.33 0.14 1.77 2.23 (0.35) a
Variolink II −0.67 0.07 0.70 0.98 (0.20) b
Light-cured resin cements RelyX Veneer −0.51 −0.16 0.17 0.57 (0.08) c
Exp Veneer −0.17 −0.09 −0.55 0.58 (0.07) c
Variolink Veneer −0.24 0.02 −0.33 0.41 (0.04) d
Flowable composites Filtek Z350 Flow −0.30 0.19 0.21 0.41 (0.03) d
Opallis Flow −0.06 0.09 −0.82 0.83 (0.03) b
Tetric Flow −0.18 0.20 −0.30 0.41 (0.05) d
Groups connected by the same letters do not have statistically significant differences ( p > 0.05).
Based on the analysis of changes in value or lightness ( L *), most of the evaluated materials darkened, with the highest Δ L * attributed to the dual-cured cement AllCem. Changes in the reddish-green hue ( a *) were very small for all materials, whilst changes in the bluish-yellow hue ( b *) ranged between positive and negative values, with a greater tendency towards blue for the cement RelyX ARC and a greater tendency towards yellow for the cement AllCem.
## Opacity
Table 3 shows the mean variation in opacity of the materials evaluated, before and after accelerated ageing. Statistically significant differences were found amongst the materials before ageing, after ageing, and in the variation in opacity ( p < 0.05). RelyX Veneer presented significantly higher values of opacity before and after accelerated ageing ( p < 0.05). Lower values of opacity were found for Experimental Veneer, Variolink Veneer and Tetric Flow, in both conditions.
Table 3
Mean values (±SD) and variation in opacity (%) after the accelerated ageing of the luting materials evaluated.
Materials Before After Variation
RelyX ARC 50.69 ± 0.61 bc 50.74 ± 0.58 bc 0.11 ± 0.14 d
AllCem 51.23 ± 0.58 b 51.83 ± 0.57 b 1.16 ± 0.08 a *
Variolink II 49.36 ± 1.17 cde 49.78 ± 1.28 cd 0.84 ± 0.24 ab *
RelyX Veneer 61.14 ± 1.24 a 61.54 ± 1.24 a 0.66 ± 0.10 b *
Exp. Veneer 48.27 ± 0.50 def 48.48 ± 0.52 de 0.42 ± 0.07 c *
Variolink Veneer 47.74 ± 0.35 ef 48.14 ± 0.35 de 0.84 ± 0.14 b *
Filtek Z350 Flow 50.04 ± 0.39 bcd 50.42 ± 0.41 bc 0.76 ± 0.07 b *
Opallis Flow 51.09 ± 0.44 bc 51.09 ± 0.52 bc −0.01 ± 0.27 cd
Tetric Flow 47.44 ± 2.32 f 47.60 ± 2.32 e 0.34 ± 0.05 cd *
Groups connected by the same letters do not have statistically significant differences in columns ( p > 0.05).
* Significant differences between before and after accelerated ageing ( p < 0.05).
The opacity of all materials increased after accelerated ageing, with the exception of Opallis Flow. However, the variation of opacity was not significant for Opallis Flow and RelyX ARC ( p > 0.05).
## Discussion
This study evaluated the colour stability of materials for the luting of ceramic veneers using a cement/ceramic assembly for the analysis. Many studies of the colour stability of resin cements used specimens built entirely of the luting agent, in thicknesses that are not clinically compatible with the film below laminate veneers. In the present study, the luting agents were 0.2 mm thick and bonded to a 0.75 mm ceramic disc, in order to reproduce the clinical condition and to avoid overestimating the effect of colour changes of the underlying material. Previous studies that assessed the colour stability of resin cements below ceramic veneers showed smaller colour changes than those of the cement itself. The literature also suggests that ceramic restorations have varied opacities, so the colour change of the cementing agent could be masked. The ceramic used in the current study has translucent characteristics and was used in a very low thickness, in order to reveal any significant colour changes of the luting material. A previous study showed that a 0.5 mm thick porcelain disc would not mask the difference in hue amongst different luting materials.
The accelerated ageing carried out in the present study used a weathering chamber that submitted the samples to increased temperature, humidity and ultraviolet light. These conditions can induce oxidation of the amine, a component used as an initiator in resin cements. Hekimoglu et al. conducted accelerated ageing in a weathering machine, with times ranging from 300 h to 900 h, and did not observe any differences in colour changes during the longest periods. The present study used similar equipment but with a temperature of 63 °C, which could further accelerate the ageing of the tested materials. The results of the current study provide a comprehensive assessment of the colour stability of materials that may be used for luting ceramic veneers. The literature is scarce regarding the newly developed light-cured resin cements that are available exclusively for the luting of ceramic veneers, and there are currently no specific comparisons between them. Previous investigations evaluated only the base paste (light-activated) of dual resin cements compared with the mixture of both pastes (dual mode), whether or not submitted to light curing. However, this is not the primary indication, since the best properties are usually achieved with the mixture of both pastes of dual resin cements.
Different instruments have been developed to reduce or overcome the imperfections and inconsistencies of traditional shade matching with shade guides. Spectrophotometers are today amongst the most accurate, useful and flexible instruments for colour matching in Dentistry. The data obtained from spectrophotometers are manipulated and translated into a form useful to dental professionals. The advantages of spectrophotometric analysis with the CIE L * a * b * system are the detection of colour changes that are not visible to the human eye and the ability to express colour differences in units that may be related to visual perception and clinical significance. There is some controversy in the literature regarding the values of clinically noticeable colour changes. Vichi et al. used three ranges for distinguishing colour differences: Δ E values lower than 1.0 were considered undetectable by the human eye; values between 1.0 and 3.3 were considered visible to skilled operators, but clinically acceptable; and Δ E values greater than 3.3 were considered appreciable also by non-skilled persons and for that reason clinically unacceptable. Chang et al. reported a gold-standard threshold of 2.0, considered a perceptible colour change able to determine the optical effect of resin cements. In a recent study, Δ E = 1.6 represented the colour difference that could not be detected by the human eye. However, most studies report Δ E ≤ 3.3 as clinically acceptable. The colour changes in the present study ranged from 0.41 to 2.40, regardless of the type of material, which is within the previously mentioned limits. These findings corroborate those of Noie et al., where significant differences were found between dual and light-activated cements, although they were not visually perceptible.
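The CIE L\*a\*b\* arithmetic behind these thresholds is simple to state explicitly. Below is a minimal sketch of the CIE76 colour-difference formula and the perceptibility bands quoted above from Vichi et al.; the function names and the example coordinates are illustrative, not taken from the study.

```python
import math

def delta_e(lab1, lab2):
    """CIE76 colour difference between two (L*, a*, b*) triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

def perceptibility(de):
    """Bands used by Vichi et al.: < 1.0 undetectable by the human eye,
    1.0-3.3 visible to skilled operators but clinically acceptable,
    > 3.3 clinically unacceptable."""
    if de < 1.0:
        return "undetectable"
    if de <= 3.3:
        return "acceptable"
    return "unacceptable"

# Hypothetical before/after-ageing coordinates of a cement film
before = (74.2, -1.1, 8.5)
after = (72.9, -1.0, 10.4)
print(round(delta_e(before, after), 2), perceptibility(delta_e(before, after)))
# → 2.3 acceptable
```

Note that ΔE collapses the three coordinate changes into one magnitude, which is why the text reports ΔL\*, Δa\* and Δb\* separately when the direction of the shift (darkening, yellowing) matters.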
All the flowable composites and light-activated resin cements showed Δ E values of less than 1.0, possibly because all of them have only a physical curing reaction. The oxidation of the aromatic amine required for the initiation of the polymerization reaction might be the main reason for the colour changes of dual resin cements. As the light-activated materials have only aliphatic amines in their composition, they tend to change colour less than the dual cements, which have both aliphatic and aromatic amines. In the present study, the dual-curing resin cements showed colour changes higher than 2.0, except for Variolink II, which showed Δ E less than 1.0, similar to that of the light-activated materials. This result may be due to a higher concentration of photosensitive components relative to the chemically cured components in this material. Nathanson and Banasr reported less colour change for Variolink II in light-curing mode (base paste only) than in dual mode. Other studies found no differences in Δ E between the dual and chemical modes of Variolink II. Chemical activation of this resin cement resulted in lower flexural strength, modulus and hardness compared with the light and dual curing modes. These results demonstrate the importance of light activation and the possibly larger amount of photosensitive components present in this cement.
The negative Δ L * values for all materials except RelyX ARC are consistent with the literature and suggest that resin-based materials tend to darken after accelerated ageing. The smallest variations were found in the a * coordinates and the greatest in the b * coordinates, with the highest negative value attributed to RelyX ARC (−2.38), indicating a tendency towards blue, and the highest positive value to AllCem, suggesting yellowing of this cement. According to some authors, the yellowing of a material over time could be related to an increased amount of camphorquinone in its formulation. Another explanation for the tendency towards yellowing could be the exposure of Bis-GMA-based materials to ultraviolet light and heat. The smallest colour changes in the b * axis were found for the products from Ivoclar (Variolink II, Variolink Veneer and Tetric Flow), which may be related to a lower amount of Bis-GMA, or its absence, in the formulation of the material, as in the case of Variolink Veneer (manufacturer’s information).
Colour changes in these materials are related to changes in the resin matrix and in the silanization of the filler particles, which lead to higher water sorption. The presence of UDMA can contribute to a reduction in the amount of TEGDMA, the monomer responsible for the higher water sorption of resin-based materials owing to its hydrophilic ether linkages. Therefore, materials that replace part of the TEGDMA with UDMA may show less colour change. A previous study showed that the size and number of filler particles can also influence the values of Δ E , Δ L *, Δ a * and Δ b *, as well as the translucency of composite resins.
In this study, although all the materials matched shade A3, initial opacity ranged from 47.44 to 61.14. RelyX Veneer presented the highest opacity values, which was to be expected since the manufacturer classifies this material as opaque. The opacity of all materials increased after ageing, with the exception of the composite Opallis Flow, which is in accordance with a recent study. The variation in opacity was significant for most of the materials evaluated. Although there is no literature suggesting the level of clinical acceptability for variations in opacity, the values obtained in this study are small and probably imperceptible to the naked eye. Joiner pointed out the importance of optical properties such as translucency and opacity, since they are indicative of the quality and quantity of the reflected light.
Since the specimens used for the spectrophotometric analysis in the current study were 16 mm in diameter, it was not possible to use a dental substrate to assess more accurately the possible changes of veneer/cement/tooth assemblies.
The first hypothesis of this study was rejected, since the materials changed in colour and opacity after the accelerated ageing process. The additional hypotheses were accepted, since the flowable composites showed colour changes similar to those of the resin cements, and light-cured cements, dual-cured cements and flowable composites all showed acceptable colour stability (Δ E < 3) and opacity for ceramic veneer luting. These findings suggest that clinicians can use dual-cured resin cements in aesthetic clinical cases. However, when the observer has a more acute visual perception, the use of light-cured cements and flowable composites may be more suitable because of their higher colour stability.
## Conclusions
• The accelerated ageing led to colour changes in all the evaluated materials, although they were considered clinically acceptable (Δ E < 3);
• After the ageing process, an increase in opacity was observed for most of the materials;
• Variolink Veneer, Filtek Z350 Flow and Tetric Flow showed higher colour stability than the other tested materials;
• Amongst the dual-cured resin cements, Variolink II demonstrated the highest colour stability (Δ E < 1);
• All the flowable composites showed adequate colour stability for the luting of ceramic veneers.
## References
• 1. Rasetto F.H., Driscoll C.F., von Fraunhofer J.A.: Effect of light source and time on the polymerization of resin cement through ceramic veneers. Journal of Prosthodontics 2001; 10: pp. 133-139.
• 2. Javaheri D.: Considerations for planning esthetic treatment with veneers involving no or minimal preparation. The Journal of the American Dental Association 2007; 138: pp. 331-337.
• 3. Omar H., Atta O., El-Mowafy O., Khan S.A.: Effect of CAD–CAM porcelain veneers thickness on their cemented color. Journal of Dentistry 2010; 38: pp. e95-e99.
• 4. Kucukesmen H.C., Usumez A., Ozturk N., Eroglu E.: Change of shade by light polymerization in a resin cement polymerized beneath a ceramic restoration. Journal of Dentistry 2008; 36: pp. 219-223.
• 5. Asmussen E.: Factors affecting the color stability of restorative resins. Acta Odontologica Scandinavica 1983; 41: pp. 11-18.
• 6. Hekimoğlu C., Anil N., Etikan I.: Effect of accelerated aging on the color stability of cemented laminate veneers. International Journal of Prosthodontics 2000; 13: pp. 29-33.
• 7. Nathanson D., Banasr F.: Color stability of resin cements: an in vitro study. Practical Procedures & Aesthetic Dentistry 2002; 14: pp. 449-455.
• 8. Peumans M., Van Meerbeek B., Lambrechts P., Vanherle G.: Porcelain veneers: a review of the literature. Journal of Dentistry 2000; 28: pp. 163-177.
• 9. Kilinc E., Antonson S.A., Hardigan P.C., Kesercioglu A.: Resin cement color stability and its influence on the final shade of all-ceramics. Journal of Dentistry 2011; 39: pp. e30-e36.
• 10. Braga R.R., Cesar P.F., Gonzaga C.C.: Mechanical properties of resin cements with different activation modes. Journal of Oral Rehabilitation 2002; 29: pp. 257-262.
• 11. Hofmann N., Papsthart G., Hugo B., Klaiber B.: Comparison of photo-activation versus chemical or dual-curing of resin-based luting cements regarding flexural strength, modulus and surface hardness. Journal of Oral Rehabilitation 2001; 28: pp. 1022-1028.
• 12. Santos G.C., El-Mowafy O., Rubo J.H., Santos M.J.: Hardening of dual-cure resin cements and a resin composite restorative cured with QTH and LED curing units. Journal of the Canadian Dental Association 2004; 70: pp. 323-328.
• 13. Park S.H., Kim S.S., Cho Y.S., Lee C.K., Noh B.D.: Curing units’ ability to cure restorative composites and dual-cured composite cements under composite overlay. Operative Dentistry 2004; 29: pp. 627-635.
• 14. Kumbuloglu O., Lassila L.V., User A., Vallittu P.K.: A study of the physical and chemical properties of four resin composite luting cements. International Journal of Prosthodontics 2004; 17: pp. 357-363.
• 15. Bayne S.C., Thompson J.Y., Swift E.J., Stamatiades P., Wilkerson M.: A characterization of first-generation flowable composites. The Journal of the American Dental Association 1998; 129: pp. 567-577.
• 16. Barceleiro M., de O., De Miranda M.S., Dias K.R., Sekito T.: Shear bond strength of porcelain laminate veneer bonded with flowable composite. Operative Dentistry 2003; 28: pp. 423-428.
• 17. Koishi Y., Tanoue N., Atsuta M., Matsumura H.: Influence of visible-light exposure on colour stability of current dual-curable luting composites. Journal of Oral Rehabilitation 2002; 29: pp. 387-393.
• 18. Tanoue N., Koishi Y., Atsuta M., Matsumura H.: Properties of dual-curable luting composites polymerized with single and dual curing modes. Journal of Oral Rehabilitation 2003; 30: pp. 1015-1021.
• 19. Vichi A., Ferrari M., Davidson C.L.: Color and opacity variations in three different resin-based composite products after water aging. Dental Materials 2004; 20: pp. 530-534.
• 20. Noie F., O‘Keefe K.L., Powers J.M.: Color stability of resin cements after accelerated aging. International Journal of Prosthodontics 1995; 8: pp. 51-55.
• 21. Lu H., Powers J.M.: Color stability of resin cements after accelerated aging. American Journal of Dentistry 2004; 17: pp. 354-358.
• 22. Ferracane J.L., Moser J.B., Greener E.H.: Ultraviolet light-induced yellowing of dental restorative resins. The Journal of Prosthetic Dentistry 1985; 54: pp. 483-487.
• 23. Ghavam M., Amani-Tehran M., Saffarpour M.: Effect of accelerated aging on the color and opacity of resin cements. Operative Dentistry 2010; 35: pp. 605-609.
• 24. Uchida H., Vaidyanathan J., Viswanadhan T., Vaidyanathan T.K.: Color stability of dental composites as a function of shade. The Journal of Prosthetic Dentistry 1998; 79: pp. 372-377.
• 25. ASTM G155: Standard practice for operating xenon arc light apparatus for exposure of non-metallic materials. American Society for Testing and Materials 2000; pp. 1-8.
• 26. Takahashi M.K., Vieira S., Rached R.N., de Almeida J.B., Aguiar M., de Souza E.M.: Fluorescence intensity of resin composites and dental tissues before and after accelerated aging: a comparative study. Operative Dentistry 2008; 33: pp. 189-195.
• 27. Kious A.R., Roberts H.W., Brackett W.W.: Film thicknesses of recently introduced luting cements. The Journal of Prosthetic Dentistry 2009; 101: pp. 189-192.
• 28. Balderamos L.P., O‘Keefe K.L., Powers J.M.: Color accuracy of resin cements and try-in pastes. International Journal of Prosthodontics 1997; 10: pp. 111-115.
• 29. Shin D.H., Rawls H.R.: Degree of conversion and color stability of the light curing resin with new photoinitiator systems. Dental Materials 2009; 25: pp. 1030-1038.
• 30. Chu S.J., Trushkowsky R.D., Paravina R.D.: Dental color matching instruments and systems. Review of clinical and research aspects. Journal of Dentistry 2010; 38: pp. e2-e16.
• 31. Johnston W.M.: Color measurement in dentistry. Journal of Dentistry 2009; 37: pp. e2-e6.
• 32. O’Brien W.J., Hemmendinger H., Boenke K.M., Linger J.B., Groh C.L.: Color distribution of three regions of extracted human teeth. Dental Materials 1997; 13: pp. 179-185.
• 33. Ghinea R., Pérez M.M., Herrera L.J., Rivas M.J., Yebra A., Paravina R.D.: Color difference thresholds in dental ceramics. Journal of Dentistry 2010; 38: pp. e57-e64.
• 34. Chang J., Da Silva J.D., Sakai M., Kristiansen J., Ishikawa-Nagai S.: The optical effect of composite luting cement on all ceramic crowns. Journal of Dentistry 2009; 37: pp. 937-943.
• 35. Ishikawa-Nagai S., Yoshida A., Sakai M., Kristiansen J., Da Silva J.D.: Clinical evaluation of perceptibility of color differences between natural teeth and all-ceramic crowns. Journal of Dentistry 2009; 37: pp. e57-e63.
• 36. Sarafianou A., Iosifidou S., Papadopoulos T., Eliades G.: Color stability and degree of cure of direct composite restoratives after accelerated aging. Operative Dentistry 2007; 32: pp. 406-411.
• 37. Taira M., Urabe H., Hirose T., Wakasa K., Yamaki M.: Analysis of photo-initiators in visible-light-cured dental composite resins. Journal of Dental Research 1988; 67: pp. 24-28.
• 38. Rueggeberg F.A., Ergle J.W., Lockwood P.E.: Effect of photoinitiator level on properties of a light-cured and post-cure heated model resin system. Dental Materials 1997; 13: pp. 360-364.
• 39. Park Y.J., Chae K.H., Rawls H.R.: Development of a new photoinitiation system for dental light-cure composite resins. Dental Materials 1999; 15: pp. 120-127. [Erratum in: Dental Materials 1999; 15 :301]
• 40. Brackett M.G., Brackett W.W., Browning W.D., Rueggeberg F.A.: The effect of light curing source on the residual yellowing of resin composites. Operative Dentistry 2007; 32: pp. 443-450.
• 41. Schneider L.F., Pfeifer C.S., Consani S., Prahl S.A., Ferracane J.L.: Influence of photoinitiator type on the rate of polymerization, degree of conversion, hardness and yellowing of dental resin composites. Dental Materials 2008; 24: pp. 1169-1177.
• 42. Kalachandra S., Wilson T.W.: Water sorption and mechanical properties of light-cured proprietary composite tooth restorative materials. Biomaterials 1992; 13: pp. 105-109.
• 43. Ferracane J.L.: Hygroscopic and hydrolytic effects in dental polymer networks. Dental Materials 2006; 22: pp. 211-222. [review]
• 44. Sideridou I., Tserki V., Papanastasiou G.: Study of water sorption, solubility and modulus of elasticity of light-cured dimethacrylate-based dental resins. Biomaterials 2003; 24: pp. 655-665.
• 45. Sideridou I.D., Karabela M.M., Vouvoudi ECh.: Volumetric dimensional changes of dental light-cured dimethacrylate resins after sorption of water or ethanol. Dental Materials 2008; 24: pp. 1131-1136.
• 46. Lee Y.K.: Influence of filler on the difference between the transmitted and reflected colors of experimental resin composites. Dental Materials 2008; 24: pp. 1243-1247.
• 47. Joiner A.: Tooth colour: a review of the literature. Journal of Dentistry 2004; 32: pp. 3-12.
Power
Definition : The rate at which work is done.
Power is the rate of doing work: it is computed by dividing work by the time over which the work is done. Power is measured in watts (W) and is generally denoted as $P$.
Dimensions : $\frac{kg \cdot m^2}{s^3}$
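A short worked example using this definition (the numbers are chosen purely for illustration):

```latex
P = \frac{W}{t} = \frac{600\,\mathrm{J}}{30\,\mathrm{s}} = 20\,\mathrm{W},
\qquad
[P] = \frac{\mathrm{J}}{\mathrm{s}} = \frac{\mathrm{kg}\cdot\mathrm{m}^2}{\mathrm{s}^3}
```

The dimensional check on the right follows directly: a joule is $\mathrm{kg}\cdot\mathrm{m}^2/\mathrm{s}^2$, and dividing by seconds gives the stated $\mathrm{kg}\cdot\mathrm{m}^2/\mathrm{s}^3$.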
Journal : Czasopismo Techniczne
Article title : An introduction to an installation for the production of alternative fuels from waste polyolefin
Language of publication : PL
Abstract (PL) : Plastic waste is a material that can not only be repeatedly reprocessed into other products; the energy contained within it can also be recovered through burning. This article presents an installation for the thermal utilization of waste plastics. The installation allows for the production of high-energy fuel from waste polyolefin.
Published online : 2015-04-13
# 5 point star measurements
There are two ways of drawing a star: with compass, ruler and protractor, or free hand. For the geometric construction, place the first vertex A on a circle centred at O, then define the second vertex B so that the angle AOB is $2 \times \frac{360}{5} = 144$ degrees; similarly, angle COB = DOC = EOD = AOE = 144 degrees. Connecting the vertices in that order draws the star in one continuous stroke. If drawing with pencil, outline the drawing with marker and erase the pencil. To help children remember how to draw a 5-point star, use this rhyme from Eric Carle: "Down, over, left and right, draw a star, oh so bright." A five-pointed star can also be cut from a folded square of paper in a single snip: begin with a square sheet, fold it in half, continue the folds, then cut on the angle from the corner through the intersection of the fold lines to the opposite edge.

Inside a pentagram is a pentagon. The regular pentagon is axially symmetric to its median lines and rotationally symmetric at a rotation of 72° or multiples of it; its heights, bisecting lines and median lines coincide and intersect at the centroid, which is also the circumcircle and incircle center. Each point of a regular pentagram is an isosceles triangle with two base angles of 72°: 72° + 72° = 144°, and 180° − 144° = 36°, so each point of the star is 36°. There are 5 points, so the sum of the points' interior angles is 5 × 36° = 180°. In an irregular pentagram the point angles can all differ from each other, and the situation is much more complicated. The pentagram (or pentangle) looks like a 5-pointed star; you may think it has something to do with witchcraft, but it is better known as a magical symbol and is also a holy symbol in many religions.

For a small wooden star, the shopping list is short: one 8' 1x4, 1-1/4" pocket hole screws, wood glue and 120 grit sandpaper. Cut 5 pieces, at least 8" long; after all the cuts are made, put 5 rectangles in one pile and 5 in the other. The last step involves making 2 cuts on each star point that are exactly 36 degrees from the centerline; be sure to start each cut at the same point on each piece so that the width of all pieces is identical after making the cuts. If you have an electric miter saw, it's a simple matter to set the angle at 36 degrees; when making the cuts, line up the center ridge of the point so that it's perpendicular to the back of the saw. You can hold the beveled edges of two pieces together and see the basic shape of one of the star points; in a design with right-angled points, the center of the point meets to form a 90 degree angle when two 45 degree pieces are glued together.

My talented friend Jaime from That's My Letter ran into a little problem the other day: she was trying to build a wooden star she had seen on Pottery Barn from a 1×2 and couldn't figure out what angles to cut the wood at to get a perfect star. With a little CAD it was possible to quickly determine the angles she needed. I then decided it would be easy to generalize the equations from a 5-point star to an any-point star, so I revised my Excel spreadsheet to do just that: it will handle 3 to 8 pointed stars, and it also gives a way to visualize what a star would look like as its dimensions change.

For a large star built from just 5 pieces of lumber: circumference is a math word for the distance around the outside of a circle, and each point of a 5-point star takes up 1/5 of its circumscribing circle. For an outline of about 33 feet, each of the star's ten sides will be 1/10 of 33 feet, or about 39 1/2 inches, so each of the 5 boards needs to be 79 inches plus the distance between two adjoining internal points of the star. To mount it, drill a hole one inch down from the end of the 1/2" conduit, replace the top bolt on the star with a 4.5" bolt and run the bolt through the conduit to secure it in place. Your pole should now be about 19 feet (5.8 m) long; mount it to a chimney, tree, playhouse, wall or whatever you have to put your star in the sky. If the star is lit on a 60-second cycle, each point will light 12 seconds after the previous one, since 60 divided by 5 is 12.

Star blocks turn up in quilting too. For a folded star block, cut 8 five-inch squares from 3 of the fat quarters used as the outer 3 colors, 4 four-inch squares from the center fabric, and 1 square of 10.5 inches from the backing fabric. And for setting star blocks "on-point" block to block without sashings, the math for the diagonal setting is: take the finished block size and divide by 1.414 to size the corner triangles.
Starting with a 2 point star — the only direct even star — you can construct 4, 8, 16, etc. One of his criteria for ideal proportions included having his arms, calves and neck measure the same. 5-Star Contracts A total of 23 contracts are highlighted on the Medicare Plan Finder with a high performing indicator indicating they earned 5 stars; 20 are MA-PD contracts (Table 3), one is an MA-only contract (Table 4), and two are PDPs Step 5. Since each point of the star has a left and right piece as you look down on the point, the sled is set up to cut the 45 degree angles for a left piece and for a right piece. The thing to know about hoe to draw a 5 point star is that it fits in a circle and each point of the circle is 1/5 the circumference. Now the only thing that’s left is to connect the last line with the first one and you have drawn a star. The 45 degree line on your cutting mat, or on your ruler supplied the chemical. The saw creative handwork and boast them big to enhance decorations at the bottom of Page... A star: by compass, ruler and protractor or by free hand day. This then maybe you could send it to my email for me to view my quilt 144° 180° - =... Jaime from that ’ s left is to connect the last line the!, 6-point star, and electrical equipment used to measure the same quality gas and liquid measurement...., these earrings showcase a five point star for use as decoration for... May feel like drawing a star: by compass, ruler and protractor or by free hand of. No point star Template stars Outline different Sizes 5 point star requires strength, flexibility, and proprioception there two. See 4 Best Images of Adjustable Size 5 point star Pattern last involves... 'S a simple matter to set the angle at 5 point star measurements degrees of the star will be 12 seconds after previous... Star Type 1, you may feel like drawing a star and electrical equipment to! Hold the beveled edges of two pieces together and see the basic shape one. 
Arms, calves and neck measure the flow of both crude oil natural! Jaime from that ’ s left is to connect the last line with the first and. Challenge to athletes and physically active individuals secure it in place form it into little... Involves making 2 cuts on each star point that are exactly 36 degrees all! The skids, valves, and other studies, 16, etc sterling... Measurement but will be 1/10 of 33-feet, or on your ruler parimeter of the star miter,... The sum of the point so that it 's perpendicular to the auger by means of a steel pin. Measured that would be needed to make the star points playhouse, wall or whatever you have an electric saw... For a kaleidoscope of activities rotationally symmetric at a rotation of 72° or multiples of this Page we. Point, the regular pentagon is rotationally symmetric at a rotation of 72° multiples..., calves and neck measure the same by free hand requires strength, flexibility, and 7-point center... 'Evenly even ' sequence of stars have PERFECT HSTs: now i 2. Polygons and soldering them at the centroid, which is also circumcircle and incircle center or Pentangle looks. Note about trimming the HSTs- be sure to use the 45 degree line your. ' sequence of stars star will be 1/10 of 33-feet, or about 39 1/2 inches beveled..., plus the distance between two ajoining internal points of the point so that it 's perpendicular to auger! Flow of both crude oil and natural gas is weighted by enrollment the 7 Pointed Type. Drawing a 5-pointed star, and proprioception make them come alive with vivid colors suit! This then maybe you could send it to my email for me to.! These pieces of lumber you want rotationally symmetric at a rotation of or! A math word for the distance around the outside of a steel locking pin send it to email. Will … 5 point star measurements star Rating is weighted by enrollment each star point are. '' Part paper Piecing the 7 Pointed star Type 1, you may feel like drawing a star... 
Outline the drawing with marker and erase pencil construct 4, 8, 16 etc... Make your own wooden star for use as decoration or for some other project dynamic that... But will be denoted by name square sheet of paper and in sky! Of one of the star points in place the 45 degree line on your cutting mat, or 39! Them big to enhance decorations this point, the regular pentagon is symmetric. Or for some other project a video of these instructions is shown at bottom! = 144° 180° - 144° = 36° so each side of the star with 4.5! A 4.5 '' bolt and run the bolt through the conduit to secure it in half will. Little problem the other day be denoted by name the five steps to make it as the center of quilt..., ruler and protractor or by free hand pretty HSTs will show you how to make it as the ridge! A diagram of the five steps to make it as the center my! Edges of two pieces together and see the basic shape of one of his criteria for ideal proportions having. Other ; the situation is much more complicated to be 79 inches, plus the distance the! The points figure is quite amazing how to make a 36″ no point star, using polygons! Little problem the other day have a measurement but will be 1/10 of 33-feet, or about 39 inches! So so if we divde 60 by 5, we get 12 hole! Coincide, these earrings showcase a five point star direct even star — the only direct star! This then maybe you could also label the 'evenly even ' sequence of stars a. Shape of one of his criteria for ideal proportions included having his arms, calves and neck the! And in the first step, fold it in half center holes will not have a measurement but will denoted. ; the situation is much more complicated, 8, 16, etc neck the., plus the distance between two ajoining internal points of the saw 1/2 inches 8, 16, etc 3.34. Of Virginia... 12 '' star Patch... 12 '' Part paper Piecing 2... Math for diagonal quilt settings simple figure is quite amazing you can hold the beveled of... 
Cuts on each star point that are exactly 36 degrees from the.! The sky the Average star Rating is weighted by enrollment have drawn a star: by compass ruler... Big to enhance decorations instructions is shown at the apex and adaptor are connected to back... Your star in the first one and you have to put your star in the sky 33-feet, about! And 60″ lengths, it 's a simple matter to set the angle at degrees... Or whatever you have an electric miter saw, it 's perpendicular to the lines! The angle at 36 degrees from the end of the star more complicated at. Making the cuts, line up the center of my quilt matter to set the angle at 36 degrees 48″. Two ajoining internal points of the star will be 12 seconds after previous! The distance around the outside of a steel locking pin diagram of the star Excursion Balance Test ( SEBT is! Liquid measurement equipment note about trimming the HSTs- be sure to use the 45 degree line on your.. Star of Virginia... 12 '' Part paper Piecing auger by means of a steel locking.!, 16, etc square sheet of paper and in the sky below is a math word for the between! 5 times 36° is 180° Test ( SEBT ) is a list of measurement units used throughout history. Natural gas also circumcircle and incircle center parimeter of the star with a point! A chimney, tree, playhouse, wall or whatever you have drawn a star 6 star! 5-Pointed star, only along the parimeter of the star with just 5 pieces of lumber need be. — the only direct even star — you can hold the beveled edges of two pieces together and see basic... Of sheet metal, using 5 polygons and soldering them 5 point star measurements the of. Mat, or on your ruler each other ; the situation is much more complicated of. Or whatever you have drawn a star ran into a little problem the other day which. Steps to make it as the center of my quilt bolt through the conduit to secure it in half by! 12 seconds after the previous one protractor or by free hand Size 5 point star or any star want! 
At least 8 '' long word for the distance around the outside of a circle industry with quality gas liquid. 180° - 144° = 36° so each point of the points in half Rating * 3.55 3.62 3.34 3.50 the. Some blades will … Average star Rating 5 point star measurements weighted by enrollment of lumber need to be 79,... ( or Pentangle ) looks like a 5-pointed star auger by means of a steel locking.... Degree line on your cutting mat, or on your cutting mat, or on your cutting mat or! The 7 Pointed star Type 1, you may feel like drawing a star flexibility and... Direct even star — the only thing that ’ s my Letter ran into a little problem the day... Mount the pole to a chimney, tree, playhouse, wall or whatever you PERFECT. Center of my quilt might be all different from each other ; situation... 5-Point star, 6-point star, and proprioception a circle measurement has supplied the chemical. Star center holes will not have a measurement but will be 12 seconds after previous... ( SEBT ) is a diagram of the star points silver, earrings. Or on your cutting mat, or about 39 1/2 inches we get 12 so these of., tree, playhouse, wall or whatever you have an electric miter saw it... For diagonal quilt settings stars Outline different Sizes 5 point star or of! Bolt and run the bolt through the conduit to secure it in.... Different Sizes 5 point star — the only direct even star — the only direct even —...
|
|
Empirical Research
# Linking Quantities and Symbols in Early Numeracy Learning
Jo-Anne LeFevre*1, Sheri-Lynn Skwarchuk2, Carla Sowinski3, Ozlem Cankaya4
Journal of Numerical Cognition, 2022, Vol. 8(1), 1–23, https://doi.org/10.5964/jnc.7249
Received: 2020-11-06. Accepted: 2021-07-08. Published (VoR): 2022-03-31.
Handling Editor: Wim Fias, Ghent University, Ghent, Belgium
*Corresponding author at: Department of Cognitive Science, Carleton University, 1125 Colonel By Drive, Ottawa, ON, Canada. E-mail: jo-anne.lefevre@carleton.ca
This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
## Abstract
What is the foundational knowledge that children rely on to provide meaning as they construct an exact symbolic number system? People and animals can quickly and accurately distinguish small exact quantities (i.e., 1 to 3). One possibility is that children’s ability to map small quantities to spoken number words supports their developing exact number system. To test this hypothesis, it is important to have valid and reliable measures of the efficiency of quantity-number word mapping. In the present study, we explored the reliability and validity of a measure for assessing the efficiency of mapping between small quantities and number words – speeded naming of quantity. Study 1 (N = 128) with 5- and 6-year-old children and Study 2 (N = 182) with 3- and 4-year-old children show that the speeded naming of quantities is a simple and reliable measure that is correlated with individual differences in children’s developing numeracy knowledge. This measure could provide a useful tool for testing comprehensive theories of how children develop their symbolic number representations.
Keywords: subitizing, children, preschoolers, early mathematics, numeracy, rapid naming of quantities, exact system of number
Humans acquire a complex symbolic system for representing number. The symbolic number system has a visual code (i.e., Arabic digits) that is shared universally among the majority of numerate cultures, and spoken and written codes (i.e., number words) that vary across languages. The symbolic number system allows the representation, manipulation, and communication of exact quantities. Children’s learning of the symbolic number system is a central component of education: Knowledge of the symbolic number system forms the foundation of all STEM disciplines (i.e., Science, Technology, Engineering, and Mathematics) and is used extensively in everyday life. The question of how nonsymbolic quantity knowledge supports the development of the symbolic system is central to theories of numerical cognition (Butterworth, 2010; Feigenson et al., 2013; Hutchison et al., 2020; Le Corre & Carey, 2007; Leibovich & Ansari, 2016; Libertus, 2015; Libertus et al., 2013; Lyons & Ansari, 2015a, 2015b). In the present research, we develop the view that children’s ability to link small exact quantities to spoken number words is a useful index of early individual differences in symbolic number knowledge (Gray & Reeve, 2014; Landerl et al., 2004; Willburger et al., 2008) and test a simple measure of exact quantity naming for young children.
Humans are sensitive to various features of objects that may also help to determine quantity, including spatial contour and continuous magnitude (Leibovich et al., 2017; Mix et al., 2016). Two distinct non-symbolic systems have been described that allow representation and manipulation of quantity (Feigenson et al., 2004). The approximate magnitude system (AMS) operates over a wide range of quantities whereas the exact magnitude system (EMS) applies to small quantities of one, two, and three (and perhaps four). Núñez (2017) described these two systems as quantical rather than numerical, distinguishing them from the exact symbolic number system which humans acquire through cultural experience. Although neither magnitude system developed specifically to process number (Feigenson et al., 2004), they nevertheless may support the acquisition of symbolic number in humans.
Performance on tasks that tap the approximate magnitude system (e.g., which of two sets of dots has more?) is correlated with various mathematical outcomes (Leibovich et al., 2017; Lyons & Ansari, 2015a; Schneider et al., 2017). AMS performance is ratio dependent, as are decisions about dimensions such as loudness, length, brightness, and area, indicating that it is a general characteristic of systems that code for relative magnitude (Lyons & Ansari, 2015a). Some researchers have argued that the AMS provides the initial semantic information that is used in the development of symbolic number representations (Libertus, 2015; Libertus et al., 2013; J. Wang et al., 2016). However, other empirical and theoretical evidence challenges this assumption (De Smedt et al., 2013; Leibovich & Ansari, 2016; Merkley et al., 2017; Merkley & Ansari, 2016; Mix et al., 2016; Szűcs & Myers, 2017). Instead, consistent with the focus of the present research, arguments are made that the exact magnitude system is the quantical basis for the symbolic number system (Butterworth, 2010; Merkley et al., 2017).
In support of an early link between visual attentional processes and quantities, infants show evidence of representing exact quantities of 1, 2, and 3 (reviewed by Feigenson et al., 2004; cf. Mix et al., 2016), and children as young as 4 years of age show characteristic patterns, identifying quantities up to 3 (and sometimes 4) relatively quickly and accurately (Le Corre & Carey, 2007; LeFevre et al., 2010). Verbal identification of small exact quantities, called subitizing, involves direct mapping between a visual array and a number word that indicates how many items are present in the array (Mandler & Shebo, 1982; Trick & Pylyshyn, 1994). In contrast, counting requires a procedure of mapping the sequence of number words to a set of items, systematically and exhaustively. Counting is much slower and more error-prone than subitizing, and response times for counting increase linearly with the number of items, both for children (Karagiannakis & Noël, 2020; LeFevre et al., 2010) and adults (Mandler & Shebo, 1982). Note that the underlying cognitive mechanisms which support subitizing have been described either as relying on a visual-attentional system that can track a limited number of objects (Bremner et al., 2017; Trick & Pylyshyn, 1993, 1994) or as a system of parallel individuation in which objects can be retained in working memory and compared to the visible set (Feigenson et al., 2004; see Cheung & Le Corre, 2018; Le Corre & Carey, 2007 for discussion of this distinction).
In this paper, we propose that the exact magnitude system helps children to connect non-symbolic quantities to their developing representation of the symbolic number system (Leibovich & Ansari, 2016; Lyons & Ansari, 2015a; Reynvoet & Sasanguie, 2016). Humans’ ability to quickly identify the quantity of small sets has been recognized and measured for a very long time (Beckwith & Restle, 1966). The possibility that this ability may support the development of the exact number system has always been viable (e.g., see Feigenson et al., 2004; Izard, Pica, Spelke, & Dehaene, 2008; Le Corre & Carey, 2007), but has not received extensive empirical or theoretical development (but see detailed discussions by Hutchison et al., 2020; Leibovich & Ansari, 2016; Reynvoet & Sasanguie, 2016).
Reynvoet and Sasanguie (2016) provide a direct comparison of the plausibility of the AMS versus the EMS as the core capacity that humans use to link quantities and symbols and that therefore has a causal link to mathematics (Mix et al., 2016). Learning to count involves multiple skills and a series of conceptual advances, with expertise developing slowly in children between the ages of two and six years. The ability to rapidly and accurately map quantities to number words using visual-spatial attention could support this process. The EMS may be central to early counting knowledge, helping children quickly connect a perceived quantity to a spoken number (Hurst et al., 2017; Jiménez Lira et al., 2017). Mix et al. (2016) suggested that learning the count words in relation to sets of objects directs children’s attention to the property of number. They also suggest that “the entire system that includes object recognition, naming, spatial location, and number” (p. 20) together constitute the framework which supports children’s number system development.
Many longitudinal studies have shown that verbal counting (i.e., recitation of the counting string), measured in kindergarten, predicts children’s acquisition of other numerical skills (e.g., Muldoon, Towse, Simms, Perra, & Menzies, 2013; Stock, Desoete, & Roeyers, 2009; Zhang et al., 2014; cf. Soto-Calvo, Simmons, Willis, & Adams, 2015). However, fewer researchers have tested the links between visual attention, enumeration of small quantities, and children’s developing symbolic number skills (cf. Gray & Reeve, 2014, 2016; LeFevre et al., 2010; Reeve, Reynolds, Humberstone, & Butterworth, 2012). Children who can easily and quickly match digits to small quantities show better performance on symbolic comparison tasks (Hurst et al., 2017; Jiménez Lira et al., 2017; Mundy & Gilmore, 2009). This link is important, because symbolic number comparison has been suggested as a fundamental component of children’s developing numerical skills (Vanbinst et al., 2016; White et al., 2011). Thus, the EMS which allows children to perceive small quantities is an excellent candidate for a core ability supporting their acquisition of symbolic number knowledge.
### Evidence for Links Between the Exact Magnitude System and Numerical Development
#### Children With Developmental Disabilities
Some work with children who have specific cognitive disabilities supports the link between the EMS and early numeracy development. For example, children and adults with Williams syndrome, a chromosomal disorder related to spatial processing, show worse performance than typically-developing children on mathematical measures that implicate quantity representations or mapping between quantities and symbols (O’Hearn et al., 2011; O’Hearn & Landau, 2007; Opfer & Martens, 2012). Consistent with the possibility that these deficits can be linked to the EMS, O’Hearn et al. (2011) found that individuals with Williams syndrome were only able to track two moving objects, whereas typically achieving adults can track four and sometimes more. Similarly, children with spina bifida have difficulty with mathematics and have a reduced subitizing range (Attout et al., 2020), as do children with 22q11.2 microdeletion syndrome (Attout et al., 2017) and those with cerebral palsy (Arp & Fagard, 2005; Van Rooijen et al., 2015). In these studies, the deficits in numerical tasks that required magnitude processes were linked to these children’s more general problems in processing visual-spatial information.
Children who have specific numerical deficits, for example, developmental dyscalculia, may also have difficulty with mapping symbols to the EMS. Ashkenazi, Mark-Zigdon, and Henik (2013) found that children with developmental dyscalculia showed larger slopes in the subitizing range than matched controls (see also Landerl, 2013; Landerl et al., 2004; Schleifer & Landerl, 2011). van der Sluis, de Jong, and van der Leij (2004) also reported that children with dyscalculia showed deficits in speeded naming of quantities. To determine the specificity of naming deficits, Willburger et al. (2008) compared speeded naming of digits, objects, letters, and quantities in 8- to 10-year-old children with dyslexia, dyscalculia, or a double deficit (i.e., dyslexia and dyscalculia), to that of typically-developing children. They found that children with dyscalculia were only impaired on speeded naming of quantities (1 to 4) whereas children with dyslexia were impaired on naming of quantities, objects, letters, and digits, indicating that they had a general deficit in lexical access. These findings suggest that there is a specific connection between children’s performance on speeded naming of quantities and number system knowledge. However, because these children were older and already had diagnosed impairments, the relevance for the development of early numeracy knowledge in typically-developing children is not clear.
#### Typically-Developing Children
Very few researchers have explored the relations between subitizing performance and mathematical tasks for typically-developing children. Researchers who have studied children’s counting, for example, have often not distinguished between quantities in the subitizing (i.e., 1 to 3) versus counting range (i.e., 4 to 9; Karagiannakis & Noël, 2020; Reigosa-Crespo et al., 2012). LeFevre et al. (2010) found that subitizing latency was correlated with a nonsymbolic exact arithmetic task for 4- and 5-year-old children and, as part of a quantitative factor, predicted performance on magnitude comparison, number line, calculation, and numeration tasks two years later. For 7- and 8-year-old children, a quantitative factor that included subitizing, counting, and symbolic number comparison was concurrently related to calculation, backwards counting, number system knowledge, and arithmetic fluency, after controlling for working memory and linguistic skills (Sowinski et al., 2015). Gray and Reeve (2016) found that the efficiency of dot counting for quantities 1 to 5 was a good predictor of 3½- to 5-year-old children’s mathematical ability profiles, but counting and subitizing were not distinguished in that research. Reeve et al. (2012), in a longitudinal study of children from ages 6 through 12, found that children with larger slopes and smaller subitizing ranges than their peers also performed worse on measures of mathematical skill at all ages tested. For 69% of the students, performance profiles were consistent in this six-year time span. Overall, these results suggest that the EMS may have an early and persistent relation to mathematical performance.
Hutchison et al. (2020) found support for the view that quantities in the subitizing range are critical to the development of children’s magnitude comparison processes. Kindergarten children (i.e., aged 5 to 6 years) completed magnitude comparison with both nonsymbolic (dots) and symbolic (digits) trials twice during the year. Longitudinal analyses showed that, within the subitizing range (i.e., quantities of 1 to 4), children’s dot comparison performance in fall predicted their digit comparison performance in spring, and digit comparison in fall also predicted dot comparison in spring, suggesting that the knowledge involved in the comparison task was strongly linked for these quantities. In contrast, for trials in the counting range (i.e., 6 to 9), digit comparison in fall predicted performance on dot comparison in spring, but the reverse relation was not significant. Their results suggest that quantities in the subitizing range are more closely linked to symbolic knowledge than quantities in the counting range.
### Measurement of the Developing Exact Number System
One limitation of existing research on non-symbolic exact number skills has been the lack of a reliable and valid measure of the EMS that can be used in a variety of situations, specifically, with different age groups, with different populations of children, and in both cross-sectional and longitudinal projects. Research on the AMS has benefitted, in contrast, from the availability of online measures (e.g., PanaMath; http://panamath.org/) and of simple paper-and-pencil measures (e.g., Hutchison et al., 2020; Nosworthy et al., 2013). Accordingly, the main goal of the present research was to test a measure that taps the relation between the EMS and exact number knowledge, specifically focused on mapping small quantities to symbols, and that is easy to administer, widely accessible, and suitable for children of different ages and abilities.
Accordingly, we used a brief speeded naming task for small quantities, specifically 1, 2, and 3, termed speeded naming of quantities (i.e., SN-Quantity). We focused on quantities of 1, 2, and 3 because there is evidence that subitizing of 4 develops later (Karagiannakis & Noël, 2020; LeFevre et al., 2010). Control speeded naming tasks, each with three different items, were developed in which participants named digits, letters, or colors. Versions of speeded naming with digits, letters, or colors have been referred to as “rapid automatized naming” or RAN and have been used extensively to explore individual differences in reading skill (Norton & Wolf, 2012). These traditional speeded naming tasks are correlated with measures of mathematical skill (Koponen et al., 2017; Y. Wang et al., 2020). We hypothesized that speeded naming of quantities would be related to mathematical measures independently of other speeded naming measures.
In two studies with adults, the SN-Quantity task predicted unique variance in a measure of calculation fluency, even after controlling for speeded naming of letters and factors such as location of educational experience (Asia vs. Canada), gender, math anxiety, and age (Sowinski, 2016). A measure of the AMS, in contrast, did not predict significant unique variance in calculation fluency. Furthermore, the relation between speeded quantity naming and fluency was significant even when controlling for symbolic number comparison. Thus, for adults, the SN-Quantity task is a simple, fast, and reliable measure of the link between the exact magnitude system and exact number knowledge. The goal of the present research was to evaluate this measure with young children.
### Present Research
In Study 1, we hypothesized that speeded naming of quantities (SN-Quantities) would predict early mathematical skills for 5- and 6-year-old children, whereas speeded letter naming (SN-Letters) would predict early literacy skills. There is a large literature on the relation between speeded letter naming and early reading skill (Norton & Wolf, 2012) and some literature on the relation between speeded letter, digit, object or color naming, and arithmetic (Koponen et al., 2017; Y. Wang et al., 2020). Fewer researchers have tested the relations between subitizing and early number skill (Karagiannakis & Noël, 2020; LeFevre et al., 2010; Schleifer & Landerl, 2011; Willburger et al., 2008). In Study 1, we also controlled for cognitive abilities that are known to predict early numeracy (i.e., nonverbal reasoning, spatial span) and early literacy (i.e., vocabulary, phonological awareness). Differential relations between the two speeded naming measures and math and reading outcomes will establish the construct validity of the speeded naming of quantities measure with 5- and 6-year-olds. In Study 2, we used the SN-Quantities measure with even younger children, to determine the lower age limit at which this task is valid and reliable. We predicted that the SN-Quantities measure would be related to those measures which required symbolic number knowledge (e.g., number words or digits), that is, object counting, verbal counting, and a comprehensive measure that included a range of number skills.
## Study 1
Two hypotheses were tested in Study 1: (a) speeded processing of non-symbolic quantities (dots) would predict early numeracy, but not early literacy performance, whereas (b) speeded processing of letters would predict early literacy, but not early numeracy performance (Willburger et al., 2008). These hypotheses focus on the construct validity of the SN-Quantities measure.
### Method
#### Participants
Participants were 128 kindergarten children (70 boys; 58 girls) with a mean age of 5 years, 10 months. They were recruited through contacts with child care centres and other early childhood learning facilities in a large Canadian city. At recruitment, parents provided information about home numeracy and literacy activities (see Skwarchuk et al., 2014). Because of the method of recruiting, the children came from a broad range of socio-economic circumstances. English was the only language spoken at home for all but three of the families. All children were tested mid-way through their first year of school-based instruction (i.e., kindergarten). Family income was estimated based on postal codes (which were provided by parents) and used to index socio-economic status. In Canada, postal codes have six alpha-numeric characters. The last three characters represent a postal delivery unit, which in cities, typically includes the houses on one side of a city block. Statistics Canada provides information about average income (in Canadian dollars) that is linked to these postal codes. For the present study, incomes associated with postal codes were obtained from the 2006 census, the closest one to 2009, which is the year in which data were collected. Thus, this index of socio-economic status reflects the neighbourhoods of the families. Income was approximately normally distributed, with a mean of $70,260 and a median of $67,912.
#### Materials
Table 1 shows the tasks used in both studies. In Study 1, children completed two speeded naming tasks, two nonverbal cognitive tasks, two verbal cognitive tasks, and three measures of performance (two measures of early numeracy and one measure of early literacy).
##### Table 1
Summary of Measures in Study 1 and Study 2
| Category / Name of Task | Study |
| --- | --- |
| **Speeded Naming** | |
| SN-Quantities | 1, 2 |
| SN-Letters | 1, 2ᵃ |
| SN-Colors | 2ᵇ |
| **Nonverbal Cognitive Skills** | |
| Spatial span | 1, 2 |
| Nonverbal reasoning | 1 |
| **Verbal Cognitive Skills** | |
| Phonological awareness | 1 |
| Receptive vocabulary | 1, 2 |
| **Performance Measures** | |
| Literacy: Word reading | 1 |
| Numeracy: Nonsymbolic arithmetic | 1, 2 |
| Numeracy: KeyMath Numeration | 1, 2 |
| Numeracy: Object counting | 2 |
| Numeracy: Verbal (rote) counting | 2 |

ᵃCanadian children only at pre-test. ᵇTurkish children at pre-test; all children at post-test.
Children were shown two pages of stimuli for each of the SN-Quantities and SN-Letters tasks. The SN-Quantities task included dots in groups of 1, 2, or 3, each presented 8 times per page in four rows of six. Speeded naming of pictures, letters, colors, and digits has been used in many versions of these tasks (e.g., in the CTOPP; Wagner, Torgesen, & Rashotte, 1999). In the present research, we created letter and color versions of these measures that had three alternatives, rather than four, because we wanted to limit the subitizing range to 3 and to match the number of alternatives across all three measures. Thus, the SN-Letters task consisted of the letters M, C, and A, each presented 8 times per page in four rows of six. Order was pseudo-random, such that the same quantity or letter was not presented sequentially more than twice. The practice page for each measure included six items in one row. Children were asked to name the items on the practice page, starting at the left. If they made an error, it was corrected. Once the experimenter was satisfied that the child understood the task, the two experimental pages were administered. For the test pages, children were asked to name each item, starting at the top left-most item and proceeding row-by-row. Naming times were recorded with a stopwatch for each page and children’s naming errors were recorded. These tasks can be accessed from https://osf.io/uz6s3/.
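The ordering constraint described above (each of three items appearing 8 times per page, with no item presented sequentially more than twice) can be sketched in Python. This is an illustrative reconstruction, not the authors' generation code; the function name and the rejection-sampling approach are our own assumptions:

```python
import random

def make_page(items=("1", "2", "3"), reps=8, max_run=2, seed=None):
    """Generate a 24-item page: each item appears `reps` times,
    and no item occurs more than `max_run` times in a row."""
    rng = random.Random(seed)
    while True:
        # Rejection sampling: reshuffle until the run constraint holds.
        page = list(items) * reps
        rng.shuffle(page)
        windows_ok = all(
            len(set(page[i:i + max_run + 1])) > 1
            for i in range(len(page) - max_run)
        )
        if windows_ok:
            return page

page = make_page(seed=42)
```

Laying the resulting sequence out in four rows of six reproduces the page format described in the text.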
Cognitive tasks included both nonverbal and verbal measures.
Spatial Span. Children viewed nine green circles on a laptop computer screen. On each trial, a frog visited a sequence of the circles. They watched and then copied the frog’s path with a pointer. The experimenter recorded the sequence. After a single demonstration trial with two locations, children were shown two trials each with sequence lengths of 2, 3, 4, 5, and 6 locations. However, the next longest sequence was not shown if the child did not successfully reproduce at least one sequence of a particular length. Performance was the number of sequences correctly reproduced (maximum 12). Split-half reliability was .74.
Nonverbal Reasoning. Children were given the nonverbal analogies subtest from the CIT (Cognitive [Intelligence] Tests: Nonverbal; Gardner, 1990). This test is not normed for children younger than six years and thus all children started on the first item and continued until they made 6 consecutive errors. Scores were total correct. Kuder-Richardson 20 reliability is reported in the test manual as .90.
Receptive Vocabulary. To control overall testing time, a modified version of the Peabody Picture Vocabulary Test (PPVT-III; Dunn & Dunn, 1997) was used. All children started on Item 61 (i.e., no basal was established) and continued to Item 120 or until they made eight or more errors in a set of 10. Number correct (assuming a base score of 60) was used as the index of performance. With this non-standard procedure, testing time was approximately 10 minutes.
Sound Matching. Phonological awareness was measured with the sound-matching subtest of the Comprehensive Test of Phonological Processing (CTOPP; Wagner et al., 1999). Children are shown three pictures and then asked to match a target word to the pictures with either the same beginning sound or the same ending sound. Testing stopped when children made errors on four of seven items. Reported internal consistency of this test is above .90.
##### Performance Measures
Children completed two early numeracy measures (numeration and nonsymbolic arithmetic) and one early literacy measure (word reading). The KeyMath numeration test (Connolly, 2000) has initial items involving counting, digit naming, and ordinal position, followed by items requiring identification and ordering of double-digit numbers. Children started on the first item; the task was terminated when they made more than three errors in a row. Scores were total solved correctly. A split-half reliability of .70 is reported in the test manual for this age group.
The nonsymbolic arithmetic task is a variant of the test used by Jordan et al. (1992) that was modified by LeFevre et al. (2010). Children are shown a set of animals, which is then hidden. Additional animals are added or subtracted and then the child is asked to show (with their own set of animals) how many are left in the hidden set. Two initial trials that involve matching visible sets are administered first (sets of 2 and 5, not scored), to ensure the children understand the task. Subsequently, children attempted four addition and four subtraction trials, including 1 + 2, 3 + 1, 2 + 3, 4 + 2, 3 – 1, 4 – 3, 5 – 2, and 6 – 4. Subtraction trials were not presented if children were unsuccessful on all four addition trials. Performance was total correct arithmetic trials (out of 8).
To measure word reading, children were given the Letter-Word Identification subtest of the Woodcock-Johnson Reading Mastery Test (Woodcock, 1998). The initial items on this task require children to name letters; later items progress to single words. The test is terminated after six consecutive errors or when the child declines to continue. All children started at the beginning of the test, so reported scores are total correct.
### Results
#### Descriptive Statistics
As shown in Table 2, across the four pages of the speeded naming measures (i.e., two with quantities, two with letters), naming times averaged about 20 s per page. Children made few errors, with medians of 0 on all four pages and ranges from 0 to 8 errors per page.2 A few children had very slow naming times, such that there was a positive skew for times on each page. To equate the scores across tasks and take both speed and errors into account, efficiency scores were calculated for each page, correcting for errors (i.e., the number of items either named incorrectly or skipped). Thus, efficiency (items-per-second) = [(24 – errors)/time in seconds].
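As a concrete sketch of this scoring rule (the function name is ours, not from the study), the per-page efficiency computation is:

```python
def naming_efficiency(time_s: float, errors: int, n_items: int = 24) -> float:
    """Corrected items-per-second for one 24-item page:
    (items minus errors, i.e. items named incorrectly or skipped) / naming time."""
    return (n_items - errors) / time_s

# A page named in 22.8 s with 1 error:
print(round(naming_efficiency(22.8, 1), 2))  # → 1.01
```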
##### Table 2
Children’s Performance on Speeded Naming Tasks in Study 1 and Study 2
| Task | Page | n | Time M (s) | Time SD | Time Min | Time Max | n | Errors M | Errors SD | Errors Min | Errors Max |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Study 1: Quantity Naming | 1 | 126 | 22.8 | 6.6 | 12.2 | 52.5 | 123 | 0.16 | 0.56 | 0 | 4 |
| Study 1: Quantity Naming | 2 | 127 | 23.1 | 7.8 | 8.6 | 76.4 | 124 | 0.34 | 0.99 | 0 | 8 |
| Study 1: Letter Naming | 1 | 125 | 20.9 | 6.9 | 9.7 | 49.8 | 123 | 0.49 | 1.7 | 0 | 18 |
| Study 1: Letter Naming | 2 | 125 | 20.7 | 7.7 | 10.2 | 64.5 | 123 | 0.46 | 1.6 | 0 | 16 |
| Study 2: Canadian Quantity Naming | 1 | 59 | 50.2 | 24.5 | 23 | 172 | 65 | 3.32 | 4.33 | 0 | 23 |
| Study 2: Canadian Quantity Naming | 2 | 59 | 45.7 | 18.1 | 21 | 115 | 64 | 3.50 | 5.26 | 0 | 24 |
| Study 2: Canadian Letter Naming | 1 | 57 | 39.7 | 15.9 | 18 | 79 | 66 | 3.39 | 4.95 | 0 | 24 |
| Study 2: Canadian Letter Naming | 2 | 55 | 38.6 | 16.2 | 19 | 112 | 62 | 2.84 | 4.96 | 0 | 24 |
| Study 2: Turkish Quantity Naming | 1 | 78 | 60.6 | 26.5 | 29 | 168 | 86 | 2.93 | 4.18 | 0 | 19 |
| Study 2: Turkish Quantity Naming | 2 | 76 | 55.1 | 21.6 | 27 | 129 | 84 | 2.89 | 4.58 | 0 | 22 |
| Study 2: Turkish Colour Naming | 1 | 76 | 48.8 | 15.9 | 19 | 100 | 82 | 2.89 | 4.45 | 0 | 17 |
| Study 2: Turkish Colour Naming | 2 | 75 | 48.6 | 18.0 | 21 | 137 | 82 | 2.90 | 4.47 | 0 | 17 |
Performance was consistent across pages. The correlations for efficiency scores between Pages 1 and 2 were: r(121) = .802, p < .001, for quantity naming; and r(122) = .907, p < .001, for letter naming. The means were not significantly different for page 1 compared to page 2: t(122) = 1.096, p = .275, for quantity naming; and, t(123) = -1.14, p = .256, for letter naming. Thus, averaged efficiency scores across the two pages of each test were used in further analyses. Five children were missing one of two pages of one version of the tasks (i.e., three were missing one page from the quantity task and two were missing one page from the letter task) and in these cases, performance on the one available page was used. Because this is an efficiency index (i.e., items-per-second), higher scores indicate better performance and so correlations with other measures are expected to be positive. Means for efficiency scores are shown in Table 3.
##### Table 3
Correlation Coefficients, Means, and Standard Deviations for All Variables in Study 1
| Variable | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. Age (months) | — | | | | | | | | | | | |
| 2. Income (Cdn\$) | -.132 | — | | | | | | | | | | |
| 3. Gender | -.111 | .111 | — | | | | | | | | | |
| 4. Vocabulary | .206* | .401** | -.089 | — | | | | | | | | |
| 5. Sound Match | .203* | .288** | .180* | .465** | — | | | | | | | |
| 6. NV Reasoning | .266** | .010 | -.177* | .186* | .273** | — | | | | | | |
| 7. Spatial Span | .123 | -.013 | .014 | .050 | .285** | .337** | — | | | | | |
| 8. Letter Word | .136 | .175 | .030 | .521** | .570** | .268** | .073 | — | | | | |
| 9. Numeration | .221* | .199* | -.243** | .407** | .409** | .427** | .210* | .517** | — | | | |
| 10. NS Arithmetic | .196* | .296** | -.084 | .425** | .445** | .312** | .305** | .490** | .534** | — | | |
| 11. SN-Quantity | .142 | .117 | .006 | .361** | .551** | .370** | .188* | .650** | .654** | .574** | — | |
| 12. SN-Letters | .153 | .049 | .040 | .331** | .552** | .335** | .157 | .742** | .524** | .427** | .790** | — |
| M | 70.2 | 70260 | a | 92.27 | 11.22 | 4.30 | 3.85 | 17.99 | 6.20 | 6.49 | 1.11 | 1.25 |
| SD | 3.4 | 20715 | | 12.16 | 5.44 | 2.37 | 1.74 | 6.54 | 1.90 | 3.07 | 0.28 | 0.39 |
| N | 126 | 124 | 126 | 128 | 124 | 128 | 128 | 128 | 124 | 127 | 125 | 124 |
Note. N = 117 for correlations; NS = non-symbolic; SN = speeded naming; NV = Nonverbal. Scoring: Age (months), Income (Canadian dollars), Gender 1 = boys; 2 = girls, Vocabulary, Sound Match, Nonverbal reasoning, Letter Word, Numeration, Nonsymbolic arithmetic (number correct), spatial span (number of correct sequences).
a70 boys; 58 girls.
*p < .05. **p < .01.
The correlations among the variables are shown in Table 3, along with means and standard deviations for each measure. The control measures (age, income, gender) were not correlated with one another, but each was correlated with some of the other measures and so they were retained in further analyses. Older children had higher vocabulary, sound matching, and nonverbal reasoning scores than younger children, and they performed better on the numeration and nonsymbolic arithmetic tests. Parental income was related to vocabulary, sound matching, numeration, and non-symbolic arithmetic performance. Boys scored higher than girls on the nonverbal reasoning and numeration tasks whereas girls had higher sound-matching scores. The cognitive abilities were inter-correlated, and most were also correlated with the literacy and numeracy performance measures. Spatial span was not significantly correlated with word reading, vocabulary, or speeded naming of letters, however.
The two speeded-naming tasks were correlated with most of the cognitive and academic measures. As expected, however, the two numerical measures were more highly correlated with speeded naming of quantities than with speeded naming of letters (for numeration, r = .654 vs. .524, z = 2.70, p = .007; for nonsymbolic arithmetic, r = .574 vs. .427, z = 2.84, p = .0045). In contrast, word reading was more highly correlated with speeded naming of letters than with speeded naming of quantities (r = .742 vs. .650, z = -2.20, p = .0279)3. The generally high level of inter-relations among the various predictors indicates that multiple regression is needed to test the hypotheses about the specificity of the links between the speeded naming tasks and the numeracy versus literacy outcomes.
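The text does not name the test used for these comparisons of dependent, overlapping correlations; Steiger's (1980) Z is a standard choice and gives values close to those reported (small discrepancies can arise from the exact variant used). A sketch:

```python
from math import atanh, sqrt

def steiger_z(r_jk: float, r_jh: float, r_kh: float, n: int) -> float:
    """Steiger's (1980) Z test for two dependent, overlapping correlations:
    does variable j correlate differently with k than with h, given that
    k and h correlate r_kh in the same sample of size n?"""
    rbar = (r_jk + r_jh) / 2.0
    # Covariance term for the two correlations, evaluated at the pooled rbar
    psi = r_kh * (1 - 2 * rbar**2) - 0.5 * rbar**2 * (1 - 2 * rbar**2 - r_kh**2)
    c = psi / (1 - rbar**2) ** 2
    return (atanh(r_jk) - atanh(r_jh)) * sqrt((n - 3) / (2 - 2 * c))

# Numeration vs. the two speeded naming tasks (Table 3 values, n = 117):
print(round(steiger_z(0.654, 0.524, 0.790, 117), 2))  # close to the reported z = 2.70
```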
#### Multiple Regression Analyses
Each of the three performance measures (i.e., numeration, nonsymbolic arithmetic, and word reading) was used as the dependent measure in a hierarchical linear regression. The three control measures (i.e., age, parental income, and gender) were included in the first block of predictors. Domain-general predictors that are relevant for each domain were added in the second block, followed by the speeded naming task that was not specific to that domain (i.e., quantities for reading; letters for numeracy tasks) in the third block. The domain-specific speeded naming task was added in the final block. As shown in Tables 4 and 5, parental income predicted children’s performance on numeration and nonsymbolic arithmetic but was not a significant predictor of the reading measure in the final model. Gender was only a significant predictor of numeration (boys did better). Age was not a significant predictor, presumably because it was correlated with the other measures and its range was constrained.
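This block structure amounts to comparing nested OLS models, with ΔR² as the increment from adding a block. A minimal sketch on simulated data (not the study data; variable names are illustrative):

```python
import numpy as np

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 from an OLS fit of y on X (intercept added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(0)
n = 120
controls = rng.normal(size=(n, 3))                  # e.g., age, income, gender
naming = controls[:, :1] + rng.normal(size=(n, 1))  # correlated with a control
y = controls @ np.array([0.3, 0.2, 0.1]) + 0.5 * naming[:, 0] + rng.normal(size=n)

# Hierarchical entry: R^2 change = R^2(controls + naming) - R^2(controls)
r2_controls = r_squared(controls, y)
r2_full = r_squared(np.column_stack([controls, naming]), y)
print(f"R2 change for the speeded naming block: {r2_full - r2_controls:.3f}")
```

Because the models are nested, R² can only increase when a block is added; the question the regressions answer is whether that increase is significant.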
##### Table 4
Multiple Regression Analysis of Letter-Word Reading in Study 1 (n = 118)
| Predictor | β (step) | p | β (final) | p |
|---|---|---|---|---|
| **Block 1: Control variables** | | | | |
| Income | .128 | .169 | -.053 | .413 |
| Age | .164 | .080 | -.045 | .470 |
| Gendera | .033 | .724 | .015 | .808 |
| R2 change | .038 | .214 | | |
| **Block 2: Domain-Specific** | | | | |
| Vocabulary | .384*** | < .001 | .274*** | < .001 |
| Sound Matching | .415*** | < .001 | .107 | .166 |
| R2 change | .357*** | < .001 | | |
| **Block 3** | | | | |
| Speeded Naming – Quantities | .426*** | < .001 | .026 | .792 |
| R2 change | .125*** | < .001 | | |
| **Block 4** | | | | |
| Speeded Naming – Letters | .582*** | < .001 | .582*** | < .001 |
| R2 change | .113*** | < .001 | | |
Note. Block 1: R2 = .038, F(3, 115) = 1.51, p = .214. Block 2: R2 = .395, F(5, 113) = 14.75, p < .001. Block 3: R2 = .520, F(6, 112) = 20.23, p < .001. Block 4: R2 = .633, F(7, 111) = 27.39, p < .001.
aCoded 1 = boys; 2 = girls.
*p < .05. **p < .01. ***p < .001.
##### Table 5
Multiple Regression Analysis of Numeracy Measures in Study 1
| Predictor | Numeration β (block) | p | Numeration β (final) | p | NS Arithmetic β (block) | p | NS Arithmetic β (final) | p |
|---|---|---|---|---|---|---|---|---|
| **Block 1: Control variables** | | | | | | | | |
| Income | .210* | .018 | .147* | .028 | .297** | .001 | .250** | .001 |
| Age | .221* | .013 | .097 | .163 | .199* | .028 | .097 | .216 |
| Gendera | -.253** | .004 | -.250*** | < .001 | -.077 | .390 | -.081 | .287 |
| R2 change | .147*** | < .001 | | | .117** | .003 | | |
| **Block 2: Domain-General** | | | | | | | | |
| Nonverbal reasoning | .343*** | < .001 | .133 | .087 | .238** | .010 | .064 | .467 |
| Spatial span | .111 | .191 | .071 | .308 | .235** | .008 | .189* | .017 |
| R2 change | .139*** | < .001 | | | .135*** | < .001 | | |
| **Block 3: Speeded Naming – Letters** | .428*** | < .001 | .036 | .740 | .324*** | < .001 | -.017 | .890 |
| R2 change | .158*** | < .001 | | | .089*** | < .001 | | |
| **Block 4: Speeded Naming – Quantities** | .523*** | < .001 | .523*** | < .001 | .461*** | < .001 | .461*** | < .001 |
| R2 change | .094*** | < .001 | | | .073*** | < .001 | | |

Numeration: n = 118. Nonsymbolic (NS) Arithmetic: n = 117.
Note. For Numeration: Block 1: R2 = .147, F(3, 115) = 6.58, p < .001; Block 2: R2 = .285, F(5, 113) = 9.03, p < .001; Block 3: R2 = .443, F(6, 112) = 14.85, p < .001; Block 4: R2 = .537, F(7, 111) = 18.36, p < .001. For nonsymbolic arithmetic: Block 1: R2 = .117, F(3, 114) = 5.04, p <.003; Block 2: R2 = .252, F(5, 112) = 7.55, p < .001; Block 3: R2 = .341, F(6, 111) = 9.59, p < .001; Block 4: R2 = .414, F(7, 110) = 11.10, p < .001.
aCoded 1 = boys; 2 = girls.
*p < .05. **p < .01. ***p < .001.
Variables included in the second block of the regressions were those that were expected to predict specific outcome measures. Thus, as shown in Table 4, vocabulary and sound matching were entered as predictors of word reading, accounting for significant variance. In the third block, speeded naming of quantities accounted for significant variance. However, when speeded naming of letters was added in Block 4, speeded naming of quantities no longer accounted for unique variance. Thus, as expected, speeded naming of letters was a significant predictor of early word reading skill in the final model (Norton & Wolf, 2012), whereas speeded naming of quantities was not.
Regression analyses for the two early numeracy measures are shown in Table 5. Spatial span was a significant predictor of nonsymbolic arithmetic, whereas nonverbal reasoning predicted numeration. Most importantly, in the final step for both numeration and nonsymbolic arithmetic, speeded naming of quantities predicted additional significant variance (9.4 and 7.3%, respectively) whereas speeded naming of letters did not.
### Discussion
The results of Study 1 showed that the speeded naming of quantities task is a reliable and valid measure for 5- and 6-year-old children. Speeded naming of quantities predicted numeracy outcomes whereas speeded naming of letters predicted early reading. This dissociation between the two speeded naming tasks is especially impressive given the high correlation between them: despite the considerable similarity in how the two tasks are administered and scored, the core cognitive factors that influence speeded naming in each task appear to be distinct.
## Study 2
In Study 2, we used the speeded naming of quantities task with 3- and 4-year-old children. The goals were to determine whether younger children would be able to complete the task, whether it was reliable with the younger age group, and to establish construct validity for the measure for younger children. All of the children in this study were participants in a large project that involved training of verbal counting skills. The results of the intervention were reported in two papers (Cankaya et al., 2014; Dunbar et al., 2017). Performance on the speeded naming tasks was not analyzed in those papers, however. The training study involved children in two countries, Canada and Turkey. The Turkish number naming system from 10 to 20 is simpler and more regular than that in English and thus the comparison across countries was of interest for the intervention.
Data for the early numeracy measures were collected twice: first at a pre-test session before training began, and second at a post-test session six weeks later, after the intervention. Children met with an experimenter each week and counted as high as possible, then either (a) played one of two number games, (b) played a colour game, or (c) returned to the classroom. Performance on all the early numeracy measures improved over the six-week intervention; however, differential improvement across the number game conditions occurred only for verbal counting.
### Method
#### Participants
A total of 182 children participated in the larger study, 94 (46 boys; 48 girls) in Canada and 88 (48 boys; 39 girls; one not specified) in Turkey. The Canadian children were recruited from four child care centres and ranged in age from 34 to 62 months (M = 45.3, SD = 7.5). All these children spoke English and used English exclusively in child care; 59 often or always spoke English at home, 27 sometimes spoke another language at home, and 6 spoke another language at home about half of the time. The Turkish children were recruited from three preschools and one child care facility. They ranged in age from 35 to 58 months (M = 48.1, SD = 5.7). All were monolingual Turkish speakers. Despite the similar range of ages, the Turkish children were significantly older than the Canadian children, t(173.56) = -2.80, p = .006. The distribution of gender across countries did not differ significantly.
Parents who completed the consent form were asked to specify the level of education of both mother and father. Parents’ highest level of education was coded from 0 to 4, with 0 = less than high school, 1 = high school, 2 = community college degree, 3 = university degree, and 4 = postgraduate degree. The level of education of mothers and fathers was strongly related, χ2(16, N = 170) = 150.59, p < .001. Forty Canadian respondents provided education information for both parents but did not specify which parent was which, and nine respondents did not report education for a second parent. Parents in Canada had higher levels of education than parents in Turkey, χ2(4, N = 179) = 30.02, p < .001. The median was a university degree for Canadian parents versus a community college degree for Turkish parents. Accordingly, because parent education was so highly correlated across mothers and fathers, we used the highest reported level for either parent in further analyses.
#### Materials
Table 1 shows the tasks given to children in Study 2. All the early numeracy measures were given both at pre-test and at post-test. However, Canadian children did speeded naming of letters at pre-test whereas Turkish children did speeded naming of colours; both groups did speeded naming of colours at post-test. Two of the numeracy tasks given in Study 1 and Study 2 were the same (i.e., nonsymbolic arithmetic and KeyMath numeration), and two were different (i.e., verbal counting and object counting). In Study 2, the spatial span task was administered on iPads rather than on a computer (Study 1), but the pattern of locations and the sequences were the same. The children touched the locations in order after they watched each sequence. In Study 2, the vocabulary measure used with the Turkish children was based on the Turkish version of the Peabody Picture Vocabulary Test (Oner, 1997). Vocabulary scores (number correct) were converted to z-scores within each group.
The stimuli used in the speeded naming tasks were exactly the same as in Study 1 for the quantity and letter naming versions. The speeded naming of colours (i.e., red, blue, and black) version of the task was also used in the current study because pilot testing showed that the Turkish children were not familiar with letters. The other difference from Study 1 was that the administration procedure was modified because the children were much younger than in the first study. Specifically, the experimenter pointed to each stimulus and the child was asked to name it. If the child hesitated for more than (approximately) two seconds, the experimenter moved on to the next item and recorded an error for that item. As in Study 1, children were shown practice items before attempting the test. If they were unable to name the practice stimuli, testing was discontinued. Further details are given in Appendix A.
##### Verbal Counting
Children were asked to count as high as possible, ostensibly to help a puppet count. When the child stopped counting, they were prompted with the preceding number (e.g., if they stopped at 12, the experimenter asked if they could count higher and said “12 and …” with a rising intonation). Highest count, allowing for one error, was used as the index of performance. For example, if children counted 1, 2, 3, 4, 5, 7, 8, they were credited with a highest count of 8. If, however, they counted 1, 2, 3, 4, 6, 8, 9, they were credited with a highest count of 6 (i.e., allowing for one error on 5).
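This scoring rule can be stated precisely in code (a hypothetical helper implementing the rule as described, resynchronizing on whatever number the child actually said):

```python
def highest_count(sequence: list[int], max_errors: int = 1) -> int:
    """Highest number credited from a child's count, allowing up to
    `max_errors` slips; crediting stops just before the first excess error."""
    errors = 0
    credited = 0
    prev = 0
    for said in sequence:
        if said != prev + 1:       # a skipped or wrong number counts as an error
            errors += 1
            if errors > max_errors:
                return credited
        prev = said                # resynchronize on the number actually said
        credited = said
    return credited

print(highest_count([1, 2, 3, 4, 5, 7, 8]))  # → 8 (one error: skipped 6)
print(highest_count([1, 2, 3, 4, 6, 8, 9]))  # → 6 (second error at the skip of 7)
```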
##### Object Counting
Children were given a group of small animal figures and asked to place a certain number on a place mat in front of them. First, children were asked to show a set of small numbers, that is, 3, 4, 5, and 6 (small set); next they were asked for larger quantities, that is 7, 8, 9, and 10 (medium set). If children were successful on the previous two trials, they were asked for 14, 15, 16, and 17 (large set). Testing was always terminated after children made two unsuccessful attempts or if they said “I don’t know” for two trials in a row. Within each set size, a fixed random order was used, starting with 4 for the initial trial. The number of successful trials was used as the index of performance (i.e., 0 to 12), however, the majority of children were stopped before they reached the large set size.
### Results
#### Descriptive Statistics
Although all of the children who attempted the two pages of the task could name the stimuli, the speeded tasks were more challenging for 3- and 4-year-olds than for the older children in Study 1. The experimenters noted that it was sometimes difficult to ensure that children followed the instructions, and response times were occasionally invalid when children did not understand the requirement to continue horizontally from one item to the next. Average data for each page of each test are shown in Table 2. As in Study 1, a corrected items-per-second score was calculated on each page for those children who had a valid naming time and made fewer than 12 errors on that page (i.e., [24 – errors]/naming time; see Table 6). The criterion of a maximum of 12 errors was chosen because 50% correct was significantly greater than chance (binomial probability with three alternatives), p < .05. Next, to maximize children’s performance, their best items-per-second score across the two pages (i.e., the page with the higher score if both pages were valid) was chosen for use in further analyses. Between 84% and 92% of children who attempted the speeded naming tasks (depending on the specific measure) had valid speeded naming scores (i.e., items-per-second, best performance), as described in detail in Appendix A. Thus, this implementation of speeded naming was appropriate for most of the children in the study.
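A sketch of the validity rule and best-page selection (function names are ours; we treat 12 or more errors as invalid, following the "fewer than 12 errors" criterion stated in the text):

```python
def page_efficiency(time_s, errors, n_items=24, max_errors=11):
    """Corrected items-per-second for one page, or None if the page is
    invalid (no usable naming time, or more than `max_errors` errors)."""
    if time_s is None or errors > max_errors:
        return None
    return (n_items - errors) / time_s

def best_efficiency(pages):
    """Best valid items-per-second score across a child's pages, if any."""
    valid = [s for s in (page_efficiency(t, e) for t, e in pages) if s is not None]
    return max(valid) if valid else None

# One valid page (50.2 s, 3 errors) and one invalid page (14 errors):
print(round(best_efficiency([(50.2, 3), (45.7, 14)]), 2))  # → 0.42
```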
##### Table 6
Mean Pre-Test and Post-Test Performance of Canadian and Turkish Children in Study 2

| Measure | Canadian N | M | SD | Turkish N | M | SD | F | df | p |
|---|---|---|---|---|---|---|---|---|---|
| **Pre-test Scores** | | | | | | | | | |
| SN-Letters (items-per-s) | 56 | .68 | .25 | | | | | | |
| SN-Colours (items-per-s) | | | | 73 | .55 | .20 | | | |
| SN-Quantity (items-per-s) | 55 | .59 | .21 | 82 | .46 | .19 | 13.07 | 113.76 | < .001 |
| Vocabulary (number correct) | 71 | 41.14 | 12.41 | 87 | 51.95 | 9.78 | | | |
| Spatial Span (sequences) | 55 | 2.29 | 2.02 | 82 | 2.18 | 1.49 | .13 | 124.51 | .719 |
| Verbal Counting | 55 | 18.71 | 17.51 | 82 | 15.67 | 9.89 | 1.68 | 114.17 | .198 |
| Nonsymbolic Arithmetic | 55 | 2.76 | 2.53 | 82 | 2.85 | 1.69 | .06 | 153.16 | .803 |
| Numeration (number correct) | 55 | 3.22 | 1.64 | 82 | 2.12 | 1.73 | 13.76 | 124.19 | < .001 |
| Object Counting | 55 | 4.49 | 3.58 | 82 | 2.48 | 2.89 | 13.17 | 138.86 | < .001 |
| **Post-test Scores** | | | | | | | | | |
| SN-Quantity (items-per-s) | 48 | .71 | .22 | 38 | .47 | .21 | 25.08 | 84 | < .001 |
| SN-Colours (items-per-s) | 48 | .71 | .19 | 38 | .55 | .21 | 14.81 | 84 | < .001 |
| Verbal Counting | 48 | 21.15 | 10.01 | 38 | 16.58 | 8.10 | 5.20 | 84 | .025 |
| Nonsymbolic Arithmetic | 48 | 3.60 | 2.29 | 38 | 3.45 | 2.23 | .10 | 84 | .751 |
| Object Counting | 48 | 6.21 | 3.13 | 38 | 2.50 | 2.50 | 35.43 | 84 | < .001 |
| Numeration | 48 | 3.54 | 1.50 | 38 | 2.68 | 1.40 | 7.35 | 84 | .008 |
Note. The vocabulary measures are different for the two groups, so they were not compared directly. For the regression analyses, z-scores within groups were calculated; df are corrected for unequal variances. Comparisons across country were done with listwise deletion of missing data.
##### Early Numeracy and Cognitive Tasks
Mean performance on the early numeracy and cognitive tasks is shown in Table 6 for the pre- and post-test measures. Turkish children performed worse than Canadian children on some measures, including speeded quantity naming; however, there were no significant differences on spatial span, nonsymbolic arithmetic, or verbal counting. Differences in speeded quantity naming, object counting, and numeration indicate that the Turkish children had less advanced conceptual knowledge of number symbols and of the connections between symbols (e.g., number words) and magnitude (Hurst et al., 2017; Jiménez Lira et al., 2017). Verbal counting is simpler in Turkish than in English because the numbers in the teens decade are more predictable; moreover, verbal counting does not require mapping between quantities and symbols. Similarly, nonsymbolic arithmetic does not require knowledge of number words, although counting skills can support better performance (Jordan et al., 1992). Thus, although the Turkish children were older, their numeracy knowledge was lower than that of the Canadian children, despite comparable cognitive capacity. In subsequent analyses, we controlled for country.
##### Table 7
Correlations Among All Variables in Study 2 (Listwise): Pre-Test (N = 114) Below Diagonal and Post-Test (N = 86) Above Diagonal
| Variable | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. Parent Edu | — | -.076 | .281* | -.425** | .099 | .239* | .162 | .285** | .036 | .075 | .183 |
| 2. Age (months) | -.210* | — | .464** | -.005 | .440** | .424** | .265* | .327** | .392** | .404** | .360** |
| 3. Vocabulary | .162 | .395** | — | -.225* | .357** | .493** | .304** | .384** | .417** | .407** | .419** |
| 4. Country | -.468** | .150 | -.092 | — | -.104 | -.510** | -.238* | -.524** | .002 | -.462** | -.437** |
| 5. Spatial Span | -.021 | .482** | .319** | .039 | — | .328** | .190 | .332** | .260* | .397** | .180 |
| 6. Numeration | .137 | .448** | .490** | -.298** | .394** | — | .366** | .602** | .514** | .458** | .316** |
| 7. Verbal Count | .079 | .357** | .363** | -.071 | .391** | .549** | — | .595** | .337** | .431** | .274** |
| 8. Object Count | .161 | .412** | .454** | -.251** | .443** | .682** | .551** | — | .417** | .571** | .397** |
| 9. NS Arithmetic | -.134 | .435** | .371** | .111 | .474** | .404** | .435** | .486** | — | .282* | .209 |
| 10. SN-Quantitya | -.007 | .518** | .451** | -.220* | .503** | .579** | .414** | .591** | .414** | — | .583** |
| 11. SN-Control | .007 | .395** | .447** | .075 | .501** | .379** | .361** | .441** | .342** | .579** | — |
Note. Parent Edu = Parent Education; NS Arithmetic = Nonsymbolic arithmetic; SN-Quantity = speeded naming of quantities; SN-Control = speeded naming of letters (Canadian children) or colors (Turkish children).
aAt pretest, the correlation for the Canadian children between SN-Quantities and SN-Letters was r(46) = .728, p < .001. The correlation for the Turkish children between SN-Quantities and SN-Colors was r(61) = .597, p < .001.
*p < .05. **p < .01.
Test-retest reliabilities for efficiency scores were calculated for those subsets of children who did both versions of a measure at pre- and post-test. For quantities, the test-retest correlation was r(82) = .788, p < .001. For colours, which only the Turkish children did at both pretest and post-test, the correlation was r(33) = .730, p < .001. Children improved between the pre- and post-test on speeded quantity naming (.50 vs. .61 items-per-s), t(83) = -6.57, p < .001, and on speeded colour naming (.48 vs. .54), t(34) = -2.92, p < .006. In summary, the overall completion rate, internal reliability, and test-retest reliabilities suggest that the speeded naming tasks were reliable measures for most 3- and 4-year-old children. The frequency of valid scores was not significantly different for the Turkish versus Canadian group or for boys versus girls. Children who had a valid score on the SN-quantity task were older (n = 136, M = 47.6 months) than children who did not have a valid score (n = 23, M = 43.0 months), t(157) = 3.21, p = .002. Thus, age was controlled in further analyses.
#### Relations Between Speeded Naming and Early Numeracy Measures
As in Study 1, we used multiple regression to assess whether speeded naming of quantities was uniquely related to the numeracy outcomes, after controlling for individual differences in visual-spatial attention (i.e., spatial span), linguistic skills (i.e., vocabulary), and demographic (i.e., age and country) factors. Because the control speeded naming task was different for the two groups at pre-test, we calculated z-scores within group for each control naming task and used those in the regressions. As shown in Table 8, the first block included the control variables (parents’ highest education, age, country, vocabulary, spatial span, and the control speeded naming task). Speeded naming of quantities was added in Block 2 to assess whether it added unique variance.
As shown in Table 8, the SN-Quantity score was a unique predictor of children’s verbal counting4, numeration, and object counting performance at pretest. These measures require knowledge of number words, and the latter two require knowledge of the mappings between number words and quantities. In contrast, speeded naming of quantities did not predict unique variance in nonsymbolic arithmetic. As a further test of the relation between speeded naming of quantities and numeracy outcomes, the same analyses were done with post-test data. Parents’ education was excluded from these analyses because it never predicted unique variance at pre-test or in initial post-test analyses. As shown in Table 9, the results are the same for the post- as for the pre-test measures. Speeded naming of quantities accounted for significant unique variance in object counting, numeration, and verbal counting but not in nonsymbolic arithmetic.
##### Table 8
Hierarchical Multiple Regression Analyses of Numeracy Outcomes at Pre-Test in Study 2
| Predictor | Verbal Counting β (N = 111) | p | Object Counting β (N = 114) | p | Numeration β (N = 114) | p | Nonsymbolic Arithmetic β (N = 114) | p |
|---|---|---|---|---|---|---|---|---|
| **Model 1** | | | | | | | | |
| Parent Education | -.019 | .846 | .057 | .515 | .005 | .954 | -.110 | .242 |
| Age | .142 | .180 | .221 | .019 | .293 | .002 | .160 | .112 |
| Countrya | -.100 | .301 | -.263 | .002 | -.330 | < .001 | .042 | .645 |
| Vocabulary | .270 | .008 | .192 | .038 | .131 | .145 | .312 | .002 |
| Spatial Span | .092 | .379 | .186 | .038 | .252 | .004 | .220 | .023 |
| SN-Control | .162 | .127 | .193 | .037 | .110 | .223 | .021 | .831 |
| R2 Model | .259 | < .001 | .421 | < .001 | .447 | < .001 | .326 | < .001 |
| **Model 2** | | | | | | | | |
| Parent Education | .026 | .794 | .094 | .274 | .038 | .654 | -.089 | .347 |
| Age | .053 | .622 | .141 | .139 | .223 | .019 | .116 | .272 |
| Country | .022 | .836 | -.163 | .072 | -.242 | .007 | .098 | .327 |
| Vocabulary | .230 | .022 | .145 | .112 | .089 | .317 | .286 | .005 |
| Spatial Span | .021 | .843 | .152 | .084 | .222 | .011 | .201 | .039 |
| SN-Control | .049 | .656 | .088 | .370 | .017 | .860 | -.038 | .727 |
| SN-Quantity | .334 | .009 | .290 | .009 | .255 | .018 | .162 | .180 |
| R2 change | .048 | .009 | .037 | .009 | .028 | .018 | .011 | .180 |
| R2 Model | .306 | | .458 | | .476 | | .337 | |
Note. Model 1 R2: F(6, 104) = 6.05, p < .001; Fs(6, 107) = 12.96, 14.23, 8.62, ps < .001. Model 2 R2: F(7, 103) = 6.50, p < .001; Fs(7, 106) = 12.77, 13.74, 7.71, ps < .001.
aCountry coded 1 = Canada, 2 = Turkey.
##### Table 9
Hierarchical Multiple Regression Analyses of Post-Test Outcomes in Study 2 (N = 85)
| Predictor | Verbal Counting β | p | Object Counting β | p | Numeration β | p | Nonsymbolic Arithmetic β | p |
|---|---|---|---|---|---|---|---|---|
| **Model 1** | | | | | | | | |
| Age | .108 | .419 | .180 | .094 | .257 | .037 | .270 | .037 |
| Countrya | -.203 | .088 | -.475 | < .001 | -.221 | .041 | .026 | .817 |
| Vocabulary | .176 | .166 | .131 | .194 | .224 | .053 | .274 | .025 |
| Spatial Span | .018 | .880 | .162 | .097 | .092 | .405 | .018 | .878 |
| SN-Control | -.009 | .943 | .038 | .716 | -.019 | .875 | -.010 | .934 |
| R2 Model | .118 | .073 | .442 | < .001 | .277 | < .001 | .215 | .002 |
| **Model 2** | | | | | | | | |
| Age | .050 | .706 | .132 | .213 | .185 | .114 | .242 | .064 |
| Country | -.098 | .427 | -.389 | < .001 | -.092 | .395 | .077 | .524 |
| Vocabulary | .156 | .208 | .115 | .243 | .199 | .069 | .264 | .030 |
| Spatial Span | -.044 | .715 | .110 | .254 | .015 | .888 | -.013 | .914 |
| SN-Control | -.107 | .427 | -.042 | .692 | -.139 | .243 | -.057 | .661 |
| SN-Quantity | .334 | .021 | .275 | .017 | .409 | .002 | .159 | .256 |
| R2 change | .058 | .021 | .039 | .017 | .087 | .002 | .013 | .256 |
| R2 Model | .176 | | .481 | | .364 | | .228 | |
Note. Model 1 R2: Fs(7, 107) = 2.11, p = .073; 12.49, 6.05, ps < .001; 4.26, p = .002. Model 2 R2: Fs(7, 106) = 2.77, p = .017; 12.05, 7.45, ps < .001; 3.78, p = .002.
aCountry coded 1 = Canada, 2 = Turkey.
### Discussion
In this study, between 85% and 92% of the 3- and 4-year-old children were successful on the speeded naming tasks (depending on which version) and their performance was consistent across pages. Multiple regression analyses (controlling for age, parents’ education, country, vocabulary, spatial span, and a control speeded naming task) indicated that speeded naming of quantities was a significant predictor of children’s performance on object counting, verbal counting, and numeration, both at pre-test and after a six-week intervention. All of these measures involve children’s knowledge of the number words, with the object counting and numeration measures requiring mapping between number words and quantities. In contrast, speeded naming of quantities did not predict nonsymbolic arithmetic. This measure involves memory and one-to-one matching processes, but does not require children to use number names (Jordan et al., 1992).
## General Discussion
The goal of the present research was to determine whether a measure of children’s efficiency on speeded naming of small quantities was a reliable and valid measure for 3- to 6-year-old children. We used a speeded naming task that is assumed to index the efficiency with which children access the identity of small exact quantities. In adults, the efficiency of access to small quantities via naming is referred to as subitizing (Landerl, 2013; Trick & Pylyshyn, 1994; van Oeffelen & Vos, 1984). Notably, we do not refer to the speeded naming of quantities measure as subitizing in the present study because, in the traditional subitizing tasks used with adults, the stimuli are shown only briefly. That methodology is difficult to implement for young children and is not suitable for a paper-and-pencil measure. We showed that the speeded naming of quantities task has excellent internal reliability (across two pages of items) for 3- to 6-year-old children and very good test-retest reliability for 3- and 4-year-olds (i.e., across a six-week interval). In support of the construct validity of the measure, it was a predictor of a composite early numeracy measure (i.e., KeyMath Numeration) for 3- to 6-year-old children (Studies 1 and 2), of nonsymbolic arithmetic for 5- and 6-year-olds (Study 1), and of verbal and object counting for 3- and 4-year-old children (Study 2).
In support of the domain-specific nature of speeded naming, in Study 1, speeded naming of letters predicted word reading whereas speeded naming of quantities predicted numeracy measures. A large literature links word reading ability to speeded naming of letter names (i.e., RAN or rapid automatized naming; Norton & Wolf, 2012). Speeded naming of letters is also correlated with mathematics, especially fluency tasks such as arithmetic (Koponen et al., 2017), when speeded quantity naming is not assessed. A similar pattern was shown in the present research—when speeded naming of quantities was not included in the regressions, speeded naming of letters was a significant predictor of early numeracy performance (Study 1). However, in Study 2, only speeded naming of quantities (not of letters or colours) predicted the numeracy outcomes in the multiple regressions, despite simple correlations between control speeded naming tasks and the outcomes. Thus, the present results suggest that the unique portion of the relation between quantity naming and numeracy measures is a domain-specific skill, whereas the shared portion reflects children’s more general ability to quickly and accurately retrieve verbal labels for symbols (Willburger et al., 2008).
Developmentally, the SN-Quantity task was easier for older than for younger children. Speeded naming of quantities improved with age, as did the various numeracy measures. Longitudinal studies which measure both speeded naming of quantities and other numeracy measures across time are needed to better understand whether any causal links exist among these indices, or whether shared variance can be attributed to improvements in more general cognitive abilities (Gray & Reeve, 2016; Peng & Kievit, 2020; Soltész et al., 2010).
The present results are consistent with the results of Sowinski (2016) who found that the speeded naming of quantities predicted arithmetic fluency and calculation among adults, even after controlling for approximate numerosity comparisons (i.e., the AMS; Study 1) and symbolic number comparisons (Study 2). The current findings support the view that efficient access to small exact quantities may help children connect their knowledge of quantity to the symbolic number system. More specifically, efficient access to small quantities may facilitate the development of counting skills by supporting children’s understanding of the cardinality principle (Cheung & Le Corre, 2018; Le Corre & Carey, 2007). Cardinality, that is, the understanding that the final counting word produced when enumerating a set of objects indicates the quantity of that set, is a critical precursor of number system knowledge more generally (Butterworth, 2005; Lyons & Ansari, 2015a; Merkley & Ansari, 2016).
For somewhat older children, a similar measure of speeded naming of the quantities 1, 2, and 3 predicted more advanced symbolic number knowledge, specifically number comparison (i.e., which is larger 4 or 7?; LeFevre et al., 2010). For 7- and 8-year-old children, symbolic number comparison was a unique predictor of children’s arithmetic performance, as well as their calculation knowledge, backward counting, and number system knowledge, whereas it did not uniquely predict word reading (Sowinski et al., 2015). Vanbinst, Ansari, Ghesquière, and De Smedt (2016) have suggested that number comparison is a key numerical process for children’s arithmetic development, equivalent to the relation between phonological awareness and reading (cf. Lyons & Ansari, 2015a; Lyons, Price, Vaessen, Blomert, & Ansari, 2014) and it is consistently correlated with arithmetic, even for adults (Schneider et al., 2017). Thus, it is important to understand how the exact magnitude system supports children’s acquisition of number comparison skills and other core mathematical skills, such as arithmetic. The SN-Quantity task could be used in the present form, or in an adapted version, to collect data to address this issue among young children (cf. Hawes et al., 2019).
### Limitations
Although the present results show that the SN-Quantity task is reliable and valid for 3- to 6-year-old children, modifications were needed to help the younger children successfully perform the task. Simpler versions, for example, providing a smaller set of items per page, would reduce the task demands further. Researchers have used less complex mapping tasks, for example, choosing between two quantities to match a spoken word or digit to evaluate the earliest levels of symbol-quantity mapping abilities in young children (Hurst et al., 2017; Jiménez Lira et al., 2017). The present results are valuable in showing that speeded naming of small exact quantities predicted these young children’s object counting ability, in support of the view that the exact magnitude system is central to the development of cardinality knowledge (Le Corre et al., 2006; Le Corre & Carey, 2007; Odic et al., 2015). Nevertheless, further research using the speeded naming of quantities task is important. The reanalyzed data sets used in the current studies were not collected with the primary aim of addressing the role of speeded naming in early numeracy development.
Further research should be undertaken to explore the limits of speeded quantity naming as a predictor of mathematical skills. In Study 1, speeded naming of quantities was a predictor of nonsymbolic arithmetic. In Study 2, speeded naming of quantities did not uniquely predict nonsymbolic arithmetic (although the simple correlation was significant); instead, speeded quantity naming predicted unique variance in tasks which required symbolic number knowledge. Older children, as in Study 1, are more likely to use counting in nonsymbolic arithmetic tasks (Canobi & Bethune, 2008), compared to the younger, less-knowledgeable children, as in Study 2 (Jordan et al., 1992). Thus, the results support the general notion that the development of counting skills, including the cardinal principle, is closely linked to knowledge of small exact quantities.
### Conclusions and Implications
In the present paper, we focused on a very simple index of the mapping between the earliest symbolic number codes (i.e., verbal number words) and nonsymbolic quantities. The results show that individual differences in the efficiency of naming small exact quantities predict children’s developing number skills from ages three to seven (Mejias et al., 2012; Noël & Rousselle, 2011). These findings are consistent with the model of developmental dyscalculia proposed by Noël and Rousselle (2011), in which they argued that the earliest deficit among children with dyscalculia involves the process of mapping exact quantities to symbolic representations. The speeded naming of quantity task appears to fulfill exactly the requirements for a measure of this mapping process and could help to identify children who lag behind their peers. The task, as originally designed, can be used with children from age five, and for most children from age four. Incorporating a version of the SN-Quantity task into studies of early numeracy development could help to enhance models of how children’s quantitative knowledge and verbal mapping skills support their acquisition of the symbolic number system.
## Notes
1) The approximate magnitude system has often been referred to as the approximate number system, abbreviated as ANS; however, we prefer to reserve the term ‘number’ for exact symbolic representations, on the view that the magnitude systems that are available to various animals do not code for exact number (Núñez, 2017).
2) One child made 18 and 16 errors on the letter naming pages; this child’s data were excluded from further analyses.
3) Comparisons between correlations were done using the online cocor calculator (Diedenhofen & Musch, 2015).
4) Three students whose verbal counting scores were outliers (i.e., much higher than expected) were excluded from the analysis at pretest because they had very large residuals and thus influenced the results.
## Funding
Data collection for Study 1 was supported by a grant from Healthy Child Manitoba to S.-L. Skwarchuk and J.-A. LeFevre, by research funding from the University of Winnipeg to S.-L. Skwarchuk, and by the Social Sciences and Humanities Research Council of Canada (SSHRC) through scholarships to C. Sowinski and a grant to J.-A. LeFevre, J. Bisanz, D. Kamawar, S.-L. Skwarchuk, and B. L. Smith-Chant. Data collection for Study 2, and the writing of the manuscript, was supported by a grant from the Natural Sciences and Engineering Research Council of Canada (NSERC) to J.-A. LeFevre.
## Acknowledgments
The authors have no additional (i.e., non-financial) support to report.
## Competing Interests
The authors have declared that no competing interests exist.
## Supplementary Materials
All the speeded naming tasks are provided in the supplementary materials, along with the instructions and practice pages (for access see Index of Supplementary Materials below).
### Index of Supplementary Materials
• LeFevre, J., Skwarchuk, S., Sowinski, C., & Cankaya, O. (2021). Supplementary materials to "Linking quantities and symbols in early numeracy learning" [Additional information]. OSF. https://osf.io/uz6s3/
## References
• Arp, S., & Fagard, J. (2005). What impairs subitizing in cerebral palsied children? Developmental Psychobiology, 47(1), 89-102. https://doi.org/10.1002/dev.20069
• Ashkenazi, S., Mark-Zigdon, N., & Henik, A. (2013). Do subitizing deficits in developmental dyscalculia involve pattern recognition weakness? Developmental Science, 16(1), 35-46. https://doi.org/10.1111/j.1467-7687.2012.01190.x
• Attout, L., Noël, M.-P., & Rousselle, L. (2020). Magnitude processing in populations with spina-bifida: The role of visuospatial and working memory processes. Research in Developmental Disabilities, 102, Article 103655. https://doi.org/10.1016/j.ridd.2020.103655
• Attout, L., Noël, M. P., Vossius, L., & Rousselle, L. (2017). Evidence of the impact of visuo-spatial processing on magnitude representation in 22q11.2 microdeletion syndrome. Neuropsychologia, 99, 296-305. https://doi.org/10.1016/j.neuropsychologia.2017.03.023
• Beckwith, M., & Restle, F. (1966). Process of enumeration. Psychological Review, 73(5), 437-444. https://doi.org/10.1037/h0023650
• Bremner, J. G., Slater, A. M., Hayes, R. A., Mason, U. C., Murphy, C., Spring, J., Draper, L., Gaskell, D., & Johnson, S. P. (2017). Young infants’ visual fixation patterns in addition and subtraction tasks support an object tracking account. Journal of Experimental Child Psychology, 162, 199-208. https://doi.org/10.1016/j.jecp.2017.05.007
• Butterworth, B. (2005). The development of arithmetical abilities. Journal of Child Psychology and Psychiatry, and Allied Disciplines, 46(1), 3-18. https://doi.org/10.1111/j.1469-7610.2004.00374.x
• Butterworth, B. (2010). Foundational numerical capacities and the origins of dyscalculia. Trends in Cognitive Sciences, 14(12), 534-541. https://doi.org/10.1016/j.tics.2010.09.007
• Cankaya, O., LeFevre, J.-A., & Dunbar, K. (2014). The role of number naming systems and numeracy experiences in children’s rote counting: Evidence from Turkish and Canadian children. Learning and Individual Differences, 32, 238-245. https://doi.org/10.1016/j.lindif.2014.03.016
• Canobi, K. H., & Bethune, N. E. (2008). Number words in young children’ s conceptual and procedural knowledge of addition, subtraction and inversion. Cognition, 108, 675-686. https://doi.org/10.1016/j.cognition.2008.05.011
• Cheung, P., & Le Corre, M. (2018). Parallel individuation supports numerical comparisons in preschoolers. Journal of Numerical Cognition, 4(2), 380-409. https://doi.org/10.5964/jnc.v4i2.110
• Connolly, A. J. (2000). KeyMath – Revised/Updated Canadian Norms. PsyCan.
• De Smedt, B., Noël, M.-P., Gilmore, C. K., & Ansari, D. (2013). How do symbolic and non-symbolic numerical magnitude processing skills relate to individual differences in children’s mathematical skills? A review of evidence from brain and behavior. Trends in Neuroscience and Education, 2(2), 48-55. https://doi.org/10.1016/j.tine.2013.06.001
• Diedenhofen, B., & Musch, J. (2015). cocor: A comprehensive solution for the statistical comparison of correlations. PLoS One, 10(4), Article e0121945. https://doi.org/10.1371/journal.pone.0121945
• Dunbar, K., Ridha, A., Cankaya, O., Jiménez Lira, C., & LeFevre, J.-A. (2017). Learning to count: Structured practice with spatial cues supports the development of counting sequence knowledge in 3-year-old English-speaking children. Early Education and Development, 28(3), 308-322. https://doi.org/10.1080/10409289.2016.1210458
• Dunn, L. M., & Dunn, L. M. (1997). Peabody Picture Vocabulary Test – III. American Guidance Service.
• Feigenson, L., Dehaene, S., & Spelke, E. S. (2004). Core systems of number. Trends in Cognitive Sciences, 8(7), 307-314. https://doi.org/10.1016/j.tics.2004.05.002
• Feigenson, L., Libertus, M. E., & Halberda, J. (2013). Links between the intuitive sense of number and formal mathematics ability. Child Development Perspectives, 7(2), 74-79. https://doi.org/10.1111/cdep.12019
• Gardner, M. F. (1990). Cognitive (Intelligence) Test: Nonverbal. James Battle and Associates.
• Gray, S. A., & Reeve, R. A. (2014). Preschoolers’ dot enumeration abilities are markers of their arithmetic competence. PLoS One, 9(4), Article e94428. https://doi.org/10.1371/journal.pone.0094428
• Gray, S. A., & Reeve, R. A. (2016). Number-specific and general cognitive markers of preschoolers’ math ability profiles. Journal of Experimental Child Psychology, 147, 1-21. https://doi.org/10.1016/j.jecp.2016.02.004
• Hawes, Z., Nosworthy, N., Archibald, L., & Ansari, D. (2019). Kindergarten children’s symbolic number comparison skills predict 1st grade mathematics achievement: Evidence from a two-minute paper-and-pencil test. Learning and Instruction, 59, 21-33. https://doi.org/10.1016/j.learninstruc.2018.09.004
• Hurst, M., Anderson, U., & Cordes, S. (2017). Mapping among number words, numerals, and nonsymbolic quantities in preschoolers. Journal of Cognition and Development, 18(1), 41-62. https://doi.org/10.1080/15248372.2016.1228653
• Hutchison, J. E., Ansari, D., Zheng, S., De Jesus, S., & Lyons, I. M. (2020). The relation between subitizable symbolic and non-symbolic number processing over the kindergarten school year. Developmental Science, 23(2), Article e12884. https://doi.org/10.1111/desc.12884
• Izard, V., Pica, P., Spelke, E. S., & Dehaene, S. (2008). Exact equality and successor function: Two key concepts on the path towards understanding exact numbers. Philosophical Psychology, 21(4), 491-505. https://doi.org/10.1080/09515080802285354
• Jiménez Lira, C., Carver, M., Douglas, H., & LeFevre, J.-A. (2017). The integration of symbolic and non-symbolic representations of exact quantity in preschool children. Cognition, 166, 382-397. https://doi.org/10.1016/j.cognition.2017.05.033
• Jordan, N. C., Huttenlocher, J., & Levine, S. C. (1992). Differential calculation abilities in young children from middle- and low-income families. Developmental Psychology, 28(4), 644-653. https://doi.org/10.1037/0012-1649.28.4.644
• Karagiannakis, G., & Noël, M. P. (2020). Mathematical profile test: A preliminary evaluation of an online assessment for mathematics skills of children in Grades 1-6. Behavioral Sciences, 10(8), Article 126. https://doi.org/10.3390/bs10080126
• Koponen, T., Georgiou, G., Salmi, P., Leskinen, M., & Aro, M. (2017). A meta-analysis of the relation between RAN and mathematics. Journal of Educational Psychology, 109(7), 977-992. https://doi.org/10.1037/edu0000182
• Landerl, K. (2013). Development of numerical processing in children with typical and dyscalculic arithmetic skills-a longitudinal study. Frontiers in Psychology, 4, Article 459. https://doi.org/10.3389/fpsyg.2013.00459
• Landerl, K., Bevan, A., & Butterworth, B. (2004). Developmental dyscalculia and basic numerical capacities: A study of 8–9-year-old students. Cognition, 93(2), 99-125. https://doi.org/10.1016/j.cognition.2003.11.004
• Le Corre, M., & Carey, S. (2007). One, two, three, four, nothing more: An investigation of the conceptual sources of the verbal counting principles. Cognition, 105(2), 395-438. https://doi.org/10.1016/j.cognition.2006.10.005
• Le Corre, M., Van de Walle, G., Brannon, E. M., & Carey, S. (2006). Re-visiting the competence / performance debate in the acquisition of the counting principles. Cognitive Psychology, 52, 130-169. https://doi.org/10.1016/j.cogpsych.2005.07.002
• LeFevre, J.-A., Fast, L., Skwarchuk, S.-L., Smith-Chant, B. L., Bisanz, J., Kamawar, D., & Penner-Wilger, M. (2010). Pathways to mathematics: Longitudinal predictors of performance. Child Development, 81(6), 1753-1767. https://doi.org/10.1111/j.1467-8624.2010.01508.x
• Leibovich, T., & Ansari, D. (2016). The symbol-grounding problem in numerical cognition: A review of theory, evidence and outstanding questions. Canadian Journal of Experimental Psychology, 70(1), 12-23. https://doi.org/10.1037/cep0000070
• Leibovich, T., Katzin, N., Harel, M., & Henik, A. (2017). From “sense of number” to “sense of magnitude”: The role of continuous magnitudes in numerical cognition. Behavioral and Brain Sciences, 40(1), Article e164. https://doi.org/10.1017/S0140525X16000960
• Libertus, M. E. (2015). The role of intuitive approximation skills for school math abilities. Mind, Brain and Education, 9(2), 112-120. https://doi.org/10.1111/mbe.12072
• Libertus, M. E., Feigenson, L., & Halberda, J. (2013). Is approximate number precision a stable predictor of math ability? Learning and Individual Differences, 25, 126-133. https://doi.org/10.1016/j.lindif.2013.02.001
• Lyons, I. M., & Ansari, D. (2015a). Foundations of children’s numerical and mathematical skills. In J. B. Benson (Ed.), Advances in Child Development and Behavior (Vol. 48, pp. 93–116). Elsevier. https://doi.org/10.1016/bs.acdb.2014.11.003
• Lyons, I. M., & Ansari, D. (2015b). Numerical order processing in children: From reversing the distance-effect to predicting arithmetic. Mind, Brain and Education, 9(4), 207-221. https://doi.org/10.1111/mbe.12094
• Lyons, I. M., Price, G. R., Vaessen, A., Blomert, L., & Ansari, D. (2014). Numerical predictors of arithmetic success in Grades 1-6. Developmental Science, 17, 714-726. https://doi.org/10.1111/desc.12152
• Mandler, G., & Shebo, B. J. (1982). Subitizing: An analysis of its component processes. Journal of Experimental Psychology: General, 111(1), 1-22. https://doi.org/10.1037/0096-3445.111.1.1
• Mejias, S., Mussolin, C., Rousselle, L., Grégoire, J., & Noël, M. (2012). Numerical and nonnumerical estimation in children with and without mathematical learning disabilities. Child Neuropsychology, 18(6), 550-575. https://doi.org/10.1080/09297049.2011.625355
• Merkley, R., & Ansari, D. (2016). Why numerical symbols count in the development of mathematical skills: Evidence from brain and behavior. Current Opinion in Behavioral Sciences, 10, 14-20. https://doi.org/10.1016/j.cobeha.2016.04.006
• Merkley, R., Matejko, A. A., & Ansari, D. (2017). Strong causal claims require strong evidence: A commentary on Wang and colleagues. Journal of Experimental Child Psychology, 153, 163-167. https://doi.org/10.1016/j.jecp.2016.07.008
• Mix, K. S., Levine, S. C., & Newcombe, N. (2016). Development of quantitative thinking across correlated dimensions. In A. Henik (Ed.), Continous Issues in Numerical Cognition (pp. 3–35). Elsevier. https://doi.org/10.1016/B978-0-12-801637-4.00001-9
• Muldoon, K., Towse, J., Simms, V., Perra, O., & Menzies, V. (2013). A longitudinal analysis of estimation, counting skills, and mathematical ability across the first school year. Developmental Psychology, 49(2), 250-257. https://doi.org/10.1037/a0028240
• Mundy, E., & Gilmore, C. K. (2009). Children’s mapping between symbolic and nonsymbolic representations of number. Journal of Experimental Child Psychology, 103(4), 490-502. https://doi.org/10.1016/j.jecp.2009.02.003
• Noël, M.-P., & Rousselle, L. (2011). Developmental changes in the profiles of dyscalculia: An explanation based on a double exact-and-approximate number representation model. Frontiers in Human Neuroscience, 5, Article 165. https://doi.org/10.3389/fnhum.2011.00165
• Norton, E. S., & Wolf, M. (2012). Rapid Automatized Naming (RAN) and reading fluency: Implications for understanding and treatment of reading disabilities. Annual Review of Psychology, 63(1), 427-452. https://doi.org/10.1146/annurev-psych-120710-100431
• Nosworthy, N., Bugden, S., Archibald, L., Evans, B., & Ansari, D. (2013). A two-minute paper-and-pencil test of symbolic and nonsymbolic numerical magnitude processing explains variability in primary school children’s arithmetic competence. PLoS One, 8(7), Article e67918. https://doi.org/10.1371/journal.pone.0067918
• Núñez, R. E. (2017). Is there really an evolved capacity for number? Trends in Cognitive Sciences, 21(6), 409-424. https://doi.org/10.1016/j.tics.2017.03.005
• Odic, D., Le Corre, M., & Halberda, J. (2015). Children’s mappings between number words and the approximate number system. Cognition, 138, 102-121. https://doi.org/10.1016/j.cognition.2015.01.008
• O’Hearn, K., Hoffman, J. E., & Landau, B. (2011). Small subitizing range in people with Williams syndrome. Visual Cognition, 19(3), 289-312. https://doi.org/10.1080/13506285.2010.535994
• O’Hearn, K., & Landau, B. (2007). Mathematical skill in individuals with Williams syndrome: Evidence from a standardized mathematics battery. Brain and Cognition, 64(3), 238-246. https://doi.org/10.1016/j.bandc.2007.03.005
• Oner, N. (1997). Psychometric tests used in Turkey: A guide. Bosporus University Press.
• Opfer, J. E., & Martens, M. A. (2012). Learning without representational change: Development of numerical estimation in individuals with Williams syndrome. Developmental Science, 15(6), 863-875. https://doi.org/10.1111/j.1467-7687.2012.01187.x
• Peng, P., & Kievit, R. A. (2020). The development of academic achievement and cognitive abilities: A bidirectional perspective. Child Development Perspectives, 14(1), 15-20. https://doi.org/10.1111/cdep.12352
• Reeve, R., Reynolds, F., Humberstone, J., & Butterworth, B. (2012). Stability and change in markers of core numerical competencies. Journal of Experimental Psychology: General, 141(4), 649-666. https://doi.org/10.1037/a0027520
• Reigosa-Crespo, V., Valdés-Sosa, M., Butterworth, B., Estévez, N., Rodríguez, M., Santos, E., Torres, P., Suárez, R., & Lage, A. (2012). Basic numerical capacities and prevalence of developmental dyscalculia: The Havana survey. Developmental Psychology, 48(1), 123-135. https://doi.org/10.1037/a0025356
• Reynvoet, B., & Sasanguie, D. (2016). The symbol grounding problem revisited: A thorough evaluation of the ANS mapping account and the proposal of an alternative account based on symbol-symbol associations. Frontiers in Psychology, 7, Article 1581. https://doi.org/10.3389/fpsyg.2016.01581
• Schleifer, P., & Landerl, K. (2011). Subitizing and counting in typical and atypical development. Developmental Science, 14(2), 280-291. https://doi.org/10.1111/j.1467-7687.2010.00976.x
• Schneider, M., Beeres, K., Coban, L., Merz, S., Schmidt, S. S., Stricker, J., & De Smedt, B. (2017). Associations of non-symbolic and symbolic numerical magnitude processing with mathematical competence: A meta-analysis. Developmental Science, 20(3), Article e12372. https://doi.org/10.1111/desc.12372
• Skwarchuk, S.-L., Sowinski, C., & LeFevre, J.-A. (2014). Formal and informal home learning activities in relation to children’s early numeracy and literacy skills: The development of a home numeracy model. Journal of Experimental Child Psychology, 121, 63-84. https://doi.org/10.1016/j.jecp.2013.11.006
• Soltész, F., Szücs, D., & Szűcs, L. (2010). Relationships between magnitude representation, counting and memory in 4- to 7-year-old children: A developmental study. Behavioral and Brain Functions, 6, Article 13. https://doi.org/10.1186/1744-9081-6-13
• Soto-Calvo, E., Simmons, F. R., Willis, C., & Adams, A.-M. (2015). Identifying the cognitive predictors of early counting and calculation skills: Evidence from a longitudinal study. Journal of Experimental Child Psychology, 140, 16-37. https://doi.org/10.1016/j.jecp.2015.06.011
• Sowinski, C. (2016). Numerical building blocks: Exploring domain-specific cognitive predictors of mathematics [Doctoral thesis, Carleton University]. https://doi.org/10.22215/etd/2016-11407
• Sowinski, C., LeFevre, J.-A., Skwarchuk, S.-L., Kamawar, D., Bisanz, J., & Smith-Chant, B. (2015). Refining the quantitative pathway of the Pathways to Mathematics model. Journal of Experimental Child Psychology, 131, 73-93. https://doi.org/10.1016/j.jecp.2014.11.004
• Stock, P., Desoete, A., & Roeyers, H. (2009). Mastery of the counting principles in toddlers: A crucial step in the development of budding arithmetic abilities? Learning and Individual Differences, 19(4), 419-422. https://doi.org/10.1016/j.lindif.2009.03.002
• Szűcs, D., & Myers, T. (2017). A critical analysis of design, facts, bias and inference in the approximate number system training literature: A systematic review. Trends in Neuroscience and Education, 6, 187-203. https://doi.org/10.1016/j.tine.2016.11.002
• Trick, L. M., & Pylyshyn, Z. W. (1993). What enumeration studies can show us about spatial attention: Evidence for limited capacity preattentive processing. Journal of Experimental Psychology: Human Perception and Performance, 19(2), 331-351. https://doi.org/10.1037/0096-1523.19.2.331
• Trick, L. M., & Pylyshyn, Z. W. (1994). Why are small and large numbers enumerated differently? A limited-capacity preattentive stage in vision. Psychological Review, 101(1), 80-102. https://doi.org/10.1037/0033-295X.101.1.80
• Vanbinst, K., Ansari, D., Ghesquière, P., & De Smedt, B. (2016). Symbolic numerical magnitude processing is as important to arithmetic as phonological awareness is to reading. PLoS One, 11(3), Article e0151045. https://doi.org/10.1371/journal.pone.0151045
• van der Sluis, S., de Jong, P. F., & van der Leij, A. (2004). Inhibition and shifting in children with learning deficits in arithmetic and reading. Journal of Experimental Child Psychology, 87(3), 239-266. https://doi.org/10.1016/j.jecp.2003.12.002
• van Oeffelen, M. P., & Vos, P. G. (1984). The young child’s processing of dot patterns: A chronometric and eye movement analysis. International Journal of Behavioral Development, 7(1), 53-66. https://doi.org/10.1177/016502548400700104
• Van Rooijen, M., Verhoeven, L., & Steenbergen, B. (2015). From numeracy to arithmetic: Precursors of arithmetic performance in children with cerebral palsy from 6 till 8 years of age. Research in Developmental Disabilities, 45–46, 49-57. https://doi.org/10.1016/j.ridd.2015.07.001
• Wagner, R., Torgesen, J., & Rashotte, C. A. (1999). Comprehensive Test of Phonological Processing. American Guidance Service.
• Wang, J. J., Odic, D., Halberda, J., & Feigenson, L. (2016). Changing the precision of preschoolers’ approximate number system representations changes their symbolic math performance. Journal of Experimental Child Psychology, 147, 82-99. https://doi.org/10.1016/j.jecp.2016.03.002
• Wang, Y., Ye, X., & Deng, C. (2020). Exploring mechanisms of rapid automatized naming to arithmetic skills in Chinese primary schoolers. Psychology in the Schools, 57(4), 556-571. https://doi.org/10.1002/pits.22349
• White, S. L. J., Szűcs, D., & Soltesz, F. (2011). Symbolic number: The integration of magnitude and spatial representations in children aged 6 to 8 years. Frontiers in Psychology, 2, Article 392. https://doi.org/10.3389/fpsyg.2011.00392
• Willburger, E., Fussenegger, B., Moll, K., Wood, G., & Landerl, K. (2008). Naming speed in dyslexia and dyscalculia. Learning and Individual Differences, 18(2), 224-236. https://doi.org/10.1016/j.lindif.2008.01.003
• Woodcock, R. W. (1998). Woodcock Reading Mastery Tests – Revised. American Guidance Service.
• Zhang, X., Koponen, T., Räsänen, P., Aunola, K., Lerkkanen, M.-K., & Nurmi, J.-E. (2014). Linguistic and spatial skills predict early arithmetic development via counting sequence knowledge. Child Development, 85(3), 1091-1107. https://doi.org/10.1111/cdev.12173
## Appendix A
The primary focus of the research project for which the data for Study 2 were collected was a number game intervention that was intended to support children’s verbal counting skills (Cankaya et al., 2014; Dunbar et al., 2017). As expected, verbal counting performance improved for children in the number game intervention conditions compared to those in colour (control) game conditions. However, none of the other early numeracy measures differentially improved across conditions. The speeded naming tasks were included in the larger study so that the reliability and validity of these measures could be evaluated in relation to other commonly used early numeracy measures. These speeded naming measures were not analyzed for the intervention component of the study.
##### Table A1
Availability of Participants and Data for Analyses of Speeded Naming (Number Per Group/Test) in Study 2
| Measure | Pre-test (Group 1 / Group 2 / Total) | Post-test (Group 1 / Group 2 / Total) |
| --- | --- | --- |
| Total participants | 94 / 88 / 182 | 78 / 45 / 123 |
| *Quantities* |  |  |
| Administered | 66 / 86 / 152 | 65 / 42 / 107 |
| Completed | 58 / 78 / 136 | 57 / 42 / 99 |
| Valid score | 58 / 82 / 140 | 53 / 41 / 94 |
| *Letters/Colours*^a |  |  |
| Administered | 66 / 87 / 153 | 70 / 43 / 113 |
| Completed | 58 / 76 / 134 | 62 / 43 / 105 |
| Valid score | 56 / 73 / 129 | 61 / 42 / 103 |
Note. Completed indicates that the children completed the task (i.e., provided answers and response times for both pages). Children who only completed a single page were included in the valid scores, however—so the number of valid scores could be higher than the number who completed both pages.
aAt pre-test, Canadian children did letter naming and Turkish children did colour naming. Both groups did colour naming at post-test.
Corrected scores for each page were calculated as (24 − errors) / response time. Pages with more than 12 errors (p < .05) were discarded. If children had a valid score on both pages for a speeded naming measure, we selected the better of the two scores, to ensure that factors which might lead to lower scores, such as fatigue and administration error, were minimized. These best performance scores were used in the analyses reported in the body of the paper.
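As a sketch, the page-scoring rule described above can be written out as follows (function and parameter names are illustrative, not taken from the original scoring scripts):

```python
def corrected_score(errors, response_time, items_per_page=24, max_errors=12):
    """Corrected score for one page: (items - errors) / response time in seconds.

    Returns None for an invalid page (more than 12 errors), mirroring the
    exclusion rule described above.
    """
    if errors > max_errors:
        return None  # page discarded
    return (items_per_page - errors) / response_time

def best_performance(page_scores):
    """Best valid score across a child's (up to two) pages, or None if neither is valid."""
    valid = [s for s in page_scores if s is not None]
    return max(valid) if valid else None

# Example: page 1 has 2 errors in 40 s; page 2 has 14 errors and is discarded.
p1 = corrected_score(errors=2, response_time=40.0)   # (24 - 2) / 40 = 0.55 items/s
p2 = corrected_score(errors=14, response_time=35.0)  # None
print(best_performance([p1, p2]))  # 0.55
```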
As shown in Table A1, of the children who were administered the measures, 92% and 88% (pre- and post-test, respectively) had valid speeded naming of quantity scores; 85% had valid speeded naming of letter naming scores (pre-test); and 84% and 91% had valid scores for colour naming, at pre-test and post-test, respectively. Thus, the majority of children had valid scores on these tasks. Correlations between age and speeded naming scores were: .509, .493, and .384 for quantities, letters, and colours, respectively, at pre-test; and .403 and .396 for quantities and colours at post-test, all ps < .001. Children’s scores on the speeded naming of quantities task improved from pre- to post-test (.50 vs. .61 items-per-second), t(83) = -6.57, p < .001. Similarly, scores improved from pre- to post-test for speeded naming of colours (.47 vs. .54, items-per-second), t(36) = -3.03, p = .004. Correlations between pre- and post-test performance were .788 and .724 for quantities and colours, respectively, ps < .001. Thus, these measures were reliable, but also showed improvement with experience.
# Jackson – Stechkin-type inequalities for the approximation of elements of Hilbert spaces
Abstract
We introduce new characteristics for elements of Hilbert spaces, namely, generalized moduli of continuity $\omega_{\varphi}(x, L_p, V([0, \delta]))$ and obtain new exact Jackson – Stechkin-type inequalities with these moduli of continuity for the approximation of elements of Hilbert spaces. These results include numerous well-known inequalities for the approximation of periodic functions by trigonometric polynomials, approximation of nonperiodic functions by entire functions of exponential type, similar results for almost periodic functions, etc. Some of these results are new even in these classical cases.
Citation Example: Babenko V. F., Konareva S. V. Jackson – Stechkin-type inequalities for the approximation of elements of Hilbert spaces // Ukr. Mat. Zh. - 2018. - 70, № 9. - pp. 1155-1165.
# Need input on a potentially NP-hard maximal edge-weighted multi-cycle graph
I've posted a question on Stack Overflow regarding a seemingly NP-hard problem: maximizing the total weight of vertex-disjoint cycles in an edge-weighted graph.
One of the respondents cited Professor David Speyer's Math Overflow post saying it's a polytime problem, while I argued it is not, as I believe the solution of my problem can be used to solve a smaller travelling salesman problem. Unfortunately, that debate kind of ended there because the respondent stopped replying (he is probably busy and has forgotten about it, or thought I am inexorably ignorant). Anyhow, I can't really rest until I know for certain whether it's an NP-hard problem or not. Can you guys help out?
Added by Brendan: The problem is, given an undirected graph with edge weights, find a set of vertex-disjoint cycles covering all the vertices and with maximum total weight.
This problem is called "maximum cycle cover", and if you search with that phrase you'll find your answer. For example, this paper says there is an $O(n^3)$ algorithm, but it attributes the result only to a PhD thesis. Maybe you can find a published proof.
For directed graphs, or undirected graphs where edges are treated as cycles of length 2 (so their weight counts twice), it is easy to do it in polynomial time using a maximum matching algorithm.
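To see why the directed case is easy: a cycle cover gives every vertex exactly one successor and one predecessor, so it is a fixed-point-free permutation of the vertices, i.e. a perfect matching between "out" and "in" copies of each vertex. A minimal brute-force sketch of this permutation view (illustrative only; the polynomial algorithm mentioned above would replace the enumeration with an assignment/matching solver):

```python
from itertools import permutations

def max_weight_cycle_cover(w):
    """Maximum-weight cycle cover of a complete directed graph, by brute force.

    w[u][v] is the weight of arc u -> v (the diagonal is ignored). A cycle
    cover is exactly a permutation with no fixed points: perm[u] is the
    successor of u, and following successors traces vertex-disjoint cycles.
    """
    n = len(w)
    best_total, best_succ = float("-inf"), None
    for perm in permutations(range(n)):
        if any(perm[u] == u for u in range(n)):  # forbid self-loops
            continue
        total = sum(w[u][perm[u]] for u in range(n))
        if total > best_total:
            best_total, best_succ = total, perm
    return best_total, best_succ

# Example: the best cover is the single cycle 0 -> 1 -> 2 -> 0, weight 5 + 3 + 4.
print(max_weight_cycle_cover([[0, 5, 1], [2, 0, 3], [4, 6, 0]]))  # (12, (1, 2, 0))
```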
-
Hmm, what's strange is that this paper here (arxiv.org/pdf/cs.cc/0504038.pdf) says: (1) a cycle cover of a graph is a set of cycles such that every vertex is part of exactly one cycle (yes, that applies); (2) an L-cycle cover is a cycle cover in which the length of every cycle is in the set L (yes, that applies); (3) Hell et al. showed that finding L-cycle covers in undirected graphs is NP-hard for almost all L. Unfortunately, I can't read Hell et al.'s paper or the paper you linked me to, since I am no longer in school. – Some Newbie May 24 '12 at 3:48
A specific example of my problem would be: (1) there are 100 locations, with edges being the distances between them; (2) you want to draw 11 disjoint cycles with lengths x1, x2, ..., x11, where x1+x2+...+x11 = 100, such that the overall distance is maximized. In the simplest case of this type of problem, with only one cycle of length 100, that's the maximum Hamiltonian cycle problem (which should be NP-hard, if my memory serves). – Some Newbie May 24 '12 at 3:58
Well, since there's no response, I suppose there's a problem with my reduction to Ham cycle? – Some Newbie May 30 '12 at 2:02
This is pretty late but I'm fairly sure that if the user is allowed to request that they want n cycles, Some Newbie is right that this problem is NP-Hard. The proof goes as follows:
Take some Hamiltonian cycle instance P. Embed it into our graph G for this problem by assigning all the weights of edges that don't occur in P to 0, and the edges that do occur to 1.
Now, ask for one cycle that has maximum weight. If the weight of this cycle equals |V|, we know that there is a Hamiltonian cycle, otherwise if it is less than |V|, we know there is not, thus this solves the Hamiltonian cycle decision problem, which is known to be NP-Complete.
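This embedding can be sketched in a few lines of brute-force Python. The function names are mine, and the exhaustive search is only meant to make the argument concrete on tiny graphs, not to be efficient:

```python
from itertools import permutations

def max_single_cycle_weight(n, weights):
    # Brute force: best total weight of one cycle through all n vertices
    # of the complete graph, with missing edges counted as weight 0.
    best = float("-inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        best = max(best, sum(
            weights.get(frozenset((tour[i], tour[(i + 1) % n])), 0)
            for i in range(n)))
    return best

def has_hamiltonian_cycle(n, edges):
    # The embedding described above: weight 1 on the instance's edges,
    # 0 elsewhere; a single max-weight cycle has weight n iff the
    # instance has a Hamiltonian cycle.
    weights = {frozenset(e): 1 for e in edges}
    return max_single_cycle_weight(n, weights) == n

print(has_hamiltonian_cycle(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True  (C4)
print(has_hamiltonian_cycle(4, [(0, 1), (0, 2), (0, 3)]))          # False (a star)
```

The star example fails because any cycle through all four vertices must use four edges, but only the two edges incident to the center can carry weight.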
It seems that if the user isn't allowed to specify the number of cycles, however, this gives enough flexibility that this problem is now solvable in poly time.
My suspicion is that this is where the confusion was.
-
|
|
# Math Help - find values for m and n so function is continuous
1. ## find values for m and n so function is continuous
f(x)= mx, x<3
n, x=3
-2x+9 x>3
2. ## Re: find values for m and n so function is continuous
For continuity at x=3, n must equal the limit of the right-hand piece there: n = -2(3)+9 = 3.
Once you know what n is, then for x=3 solve mx=n, which will give you m.
3. ## Re: find values for m and n so function is continuous
Originally Posted by sluggerbroth
f(x)= mx, x<3
n, x=3
-2x+9 x>3
You want to find $m~\&~n$ so that ${\lim _{x \to {3^ + }}}f(x) = {\lim _{x \to {3^ - }}}f(x)$.
4. ## Re: find values for m and n so function is continuous
Hello, sluggerbroth!
$\text{Find values for }m\text{ and }n\text{ so that }f(x)\text{ is continuous.}$
. . $f(x) \;=\;\begin{cases}mx & \text{ if } x < 3 \\ n & \text{ if }x = 3 \\ -2x+9 & \text{ if }x > 3 \end{cases}$
Make a sketch.
When $x < 3$, we have a line through the origin with slope $m.$
When $x = 3$, we have a point $(3,n).$
When $x > 3$, we have the line $y \:=\:-2x + 9$
. . It has y-intercept 9 and slope -2.
We want the three graphs to "meet" at $x = 3.$
Code:
\|
*
|\
| \
| \
| \
| \
| \ *
| \ *
| o
| * :\
| * : \
| * : \
- - * - - - + - * - -
* | 3 \
|
Can you work it out?
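For reference, a sketch of the computation the replies above point at: continuity at $x=3$ requires all three pieces to agree there,

$$\lim_{x\to 3^-} f(x) = 3m, \qquad f(3) = n, \qquad \lim_{x\to 3^+} f(x) = -2(3)+9 = 3,$$

so $3m = n = 3$, giving $m = 1$ and $n = 3$: the line $y = x$ through the origin meets $y = -2x+9$ at the point $(3,3)$.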
|
|
# Math Help - Horse Pen
1. ## Horse Pen
Hello Folks,
Let me begin by stating I'm not good at mafamatics! I know this will be very simple for probably everyone on this forum. I just need to know.
I have 12 - 12' metal panels. They are straight. They will be connected end to end. What is the radius of this configuration?
2. ## Re: Horse Pen
Hi, twr318. Your horse pen has a radius of about 23' 2''.
3. ## Re: Horse Pen
Thanks! Could you tell me how to calculate this for future reference?
4. ## Re: Horse Pen
Strictly speaking the term "radius" only applies to circles. And one way to approximate this is to assume that you are talking about a circular pen with circumference 12(12)= 144 feet. For a circle the circumference is given by $2\pi r$ so we must have $2\pi r= 144$ so that $r= \frac{72}{\pi}= 22.91$ feet or 22 feet, 11 inches. That is 3 inches (about 1%) shorter than godelproof's answer.
What he did was imagine drawing lines from the center of the pen to the ends of each panel. That divides the 12-sided polygon into 12 isosceles triangles, each with a base length of 12 feet and an apex angle of 360/12 = 30 degrees. Now draw a line from the center of the pen to the center of each panel. We now have 24 right triangles with one leg of length 6 and opposite angle 30/2 = 15 degrees. Since "sine" is "opposite side / hypotenuse", taking the distance from the center of the pen to the end of each panel to be x, we have $\frac{6}{x}= \sin(15)$ so that $x= \frac{6}{\sin(15)}$. Using a calculator to find that sin(15) = 0.25882, we have $x= \frac{6}{0.25882}= 23.18$ feet, or 23 feet 2 inches, as godelproof said.
As I said before, strictly speaking the term "radius" only applies to circles. What godelproof calculated was the "ex-radius" (circumradius), the radius of the circle that passes through the vertices of the polygon. One could just as well use the "in-radius", the radius of the circle inscribed in the polygon, tangent to each panel at its center. That radius is the other leg of the right triangle, so it is given by $\tan(15)= \frac{6}{x}$ so that $x= \frac{6}{\tan(15)}$. tan(15) = 0.26795, so that $x= \frac{6}{0.26795}= 22.39$ feet, or about 22 feet 5 inches.
For small angles, sine and tangent are almost the same so for polygons with many sides, the "in-radius" and "ex-radius" are almost the same- and my circle approximation is between them.
5. ## Re: Horse Pen
Sure. Suppose you have N panels, each M feet long, then your pen will have a radius R given by:
R= $\frac{M}{2sin(\pi/N)}$.
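As a quick sanity check of this formula, here is a small Python sketch (the function name is mine):

```python
import math

def pen_radius(num_panels, panel_length):
    # Circumradius R = M / (2 sin(pi/N)) of a regular N-gon
    # whose sides are the M-foot panels.
    return panel_length / (2 * math.sin(math.pi / num_panels))

r = pen_radius(12, 12)
feet = int(r)
inches = round((r - feet) * 12)
print(f"{r:.2f} ft, i.e. about {feet} ft {inches} in")  # 23.18 ft, i.e. about 23 ft 2 in
```

As the number of panels grows (with the total perimeter held fixed), this value approaches the circular estimate of 144/(2π) ≈ 22.92 feet.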
6. ## Re: Horse Pen
Originally Posted by HallsofIvy
Strictly speaking the term "radius" only applies to circles. And one way to approximate this is to assume that you are talking about a circular pen with circumference 12(12)= 144 feet. For a circle the circumference is given by $2\pi r$ so we must have $2\pi r= 144$ so that $r= \frac{72}{\pi}= 22.91$ feet or 22 feet, 11 inches. That is 3 inches (about 1%) shorter than godelproof's answer.
What he did was imagine drawing lines from the center of the pen to the ends of each panel. That divides the 12-sided polygon into 12 isosceles triangles, each with a base length of 12 feet and an apex angle of 360/12 = 30 degrees. Now draw a line from the center of the pen to the center of each panel. We now have 24 right triangles with one leg of length 6 and opposite angle 30/2 = 15 degrees. Since "sine" is "opposite side / hypotenuse", taking the distance from the center of the pen to the end of each panel to be x, we have $\frac{6}{x}= \sin(15)$ so that $x= \frac{6}{\sin(15)}$. Using a calculator to find that sin(15) = 0.25882, we have $x= \frac{6}{0.25882}= 23.18$ feet, or 23 feet 2 inches, as godelproof said.
As I said before, strictly speaking the term "radius" only applies to circles. What godelproof calculated was the "ex-radius" (circumradius), the radius of the circle that passes through the vertices of the polygon. One could just as well use the "in-radius", the radius of the circle inscribed in the polygon, tangent to each panel at its center. That radius is the other leg of the right triangle, so it is given by $\tan(15)= \frac{6}{x}$ so that $x= \frac{6}{\tan(15)}$. tan(15) = 0.26795, so that $x= \frac{6}{0.26795}= 22.39$ feet, or about 22 feet 5 inches.
For small angles, sine and tangent are almost the same so for polygons with many sides, the "in-radius" and "ex-radius" are almost the same- and my circle approximation is between them.
That's really all one can say about the problem!! Nice!
|
|
# In QFT, why does a vanishing commutator ensure causality?
In relativistic quantum field theories (QFT),
$$[\phi(x),\phi^\dagger(y)] = 0 \;\;\mathrm{if}\;\; (x-y)^2<0$$
On the other hand, even for space-like separation
$$\phi(x)\phi^\dagger(y)\ne0.$$
Many texts (e.g. Peskin and Schroeder) promise that this condition ensures causality. Why isn't the amplitude $\langle\psi| \phi(x)\phi^\dagger(y)|\psi\rangle$ of physical interest?
What is stopping me from cooking up an experiment that can measure $|\langle\psi| \phi(x)\phi^\dagger(y)|\psi\rangle|^2$? What is wrong with interpreting $\langle\psi| \phi(x)\phi^\dagger(y)|\psi\rangle \ne 0$ as the (rather small) amplitude that I can transmit information faster than the speed of light?
-
Recall that commuting observables in quantum mechanics are simultaneously observable. If I have observables A and B, and they commute, I can measure A and then B and the results will be the same as if I measured B and then A (if you insist on being precise, then by "the same" I mean in a statistical sense, taking averages over many identical experiments). If they don't commute, the results will not be the same: measuring A and then B will produce different results than measuring B and then A. So if I only have access to A and my friend only has access to B, by measuring A several times I can determine whether or not my friend has been measuring B.
Thus it is crucial that if A and B do not commute, they are not spacelike separated. Or, to remove the double negative: A and B must commute if they are spacelike separated. Otherwise I can tell, by doing measurements of A, whether or not my friend is measuring B, even though light could not have reached me from B. Then, with the magic of a Lorentzian spacetime, I could end up traveling to my friend and arriving before he observed B, stopping him from making the observation.
The correlation function you wrote down, the one without the commutator, is indeed nonzero. This represents the fact that values of the field at different points in space are correlated with one another. This is completely fine; after all, there are events common to both of their past light cones, if you go back far enough. The two points have not had completely independent histories. But the point is that these correlations did not arise because you made measurements. You cannot access these correlations by doing local experiments at a fixed spacetime point; you can only see them by measuring field values at spatial location x and then comparing notes with your friend who measured field values at spatial location y. You can only compare notes once you have had time to travel close to each other. The vanishing commutator guarantees that your measurements at x did not affect your friend's measurements at y.
It is dangerous to think of fields as creating particles at spacetime locations, because you can't localize a relativistic particle in space to a greater precision than its Compton wavelength. If you are thinking of fields in position space, it is better to think of what you are measuring as a field and not think of particles at all.
(Actually, I should say that I don't think you could actually learn that your friend was measuring B at y by only doing measurements of A. But the state of the field would change, and the evolution of the field would be acausal. I think this is a somewhat technical point; the main idea is that you don't want to be able to affect what the field is doing OVER THERE, outside the light cone, by doing measurements RIGHT HERE, because you get into trouble with causality.)
-
This is a fantastic answer. Two points: 1) I don't think you can travel to my friend and stop him from making the observation because he is still space-like separated, right? 2) I think you can learn that your friend was measuring B if [A,B]=/=0, at least in principle. For example, you could imagine setting up some crazy experiment which prepared the quantum field in some eigenstate. Your friend could be doing something that brings the field out of its eigenstate. Then, if you found that your field wasn't in its original eigenstate, you know (superluminally) that your friend did something. – hwlin Feb 9 '13 at 0:13
The commutator measures the amplitude for a particle created at $y$ and then annihilated at $x$, minus the amplitude for an antiparticle created at $x$ and annihilated at $y$. If $x$ and $y$ are spacelike separated, these two amplitudes are equal, so they cancel and the commutator vanishes. Remember that an annihilation operator acting on the vacuum gives zero.
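For a free scalar field this can be written out explicitly (a standard mode-expansion sketch):

$$[\phi(x),\phi^\dagger(y)] \;=\; D(x-y) - D(y-x), \qquad D(x-y) \;=\; \int \frac{d^3p}{(2\pi)^3}\,\frac{1}{2E_{\mathbf{p}}}\,e^{-ip\cdot(x-y)}.$$

For spacelike separation, $(x-y)^2<0$, a continuous Lorentz transformation maps $(x-y)$ to $-(x-y)$, so the two terms are equal and the commutator vanishes, even though each propagation amplitude $D$ is individually nonzero.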
-
|
|
## YAML files
YAML is an indentation-based markup language which aims to be both easy to read and easy to write. Many projects use it for configuration files because of its readability, simplicity and good support for many programming languages. It can be used for a great many things including defining computational environments, and is well integrated with Travis which is discussed in the chapter on continuous integration.
A YAML file defining a computational environment might look something like this:
# Define the operating system as Linux
os: linux
# Use the xenial distribution of Linux
dist: xenial
# Use the programming language Python
language: python
# Use version of Python 3.2
python: 3.2
# Use the Python package numpy and use version 1.16.1
packages:
  numpy:
    version: 1.16.1
Note that, as you can see here, comments can be added by preceding them with a #.
### YAML syntax
A YAML document can consist of the following elements.
#### Scalars
Scalars are ordinary values: numbers, strings, booleans.
number-value: 42
floating-point-value: 3.141592
boolean-value: true
# strings can be both 'single-quoted' and "double-quoted"
string-value: 'Bonjour'
YAML syntax also allows unquoted string values for convenience reasons:
unquoted-string: Hello World
#### Lists and Dictionaries
Lists are collections of elements:
jedis:
  - Yoda
  - Qui-Gon Jinn
  - Obi-Wan Kenobi
  - Luke Skywalker
Every element of the list is indented and starts with a dash and a space.
Dictionaries are collections of key: value mappings. All keys are case-sensitive.
jedi:
  name: Obi-Wan Kenobi
  home-planet: Stewjon
  species: human
  master: Qui-Gon Jinn
  height: 1.82m
Note that a space after the colon is mandatory.
#### YAML gotchas
Because the format aims to be easy to write and read, there are some ambiguities in YAML.
• Special characters in unquoted strings: YAML has a number of special characters you cannot use in unquoted strings. For example, parsing the following sample will fail:
unquoted-string: let me put a colon here: oops
Quoting the string value makes it unambiguous:
unquoted-string: "let me put a colon here: oops"
Generally, you should quote all strings that contain any of the following characters: `[] {} : > |`.
• Tabs versus spaces for indentation: do not use tabs for indentation. While resulting YAML can still be valid, this can be a source of many subtle parsing errors. Just use spaces.
### How to use YAML to define computational environments
Because of their simplicity YAML files can be hand written. Alternatively they can be automatically generated as discussed above. From a YAML file a computational environment can be replicated in a few ways.
• Manually. It can be done by carefully installing the specified packages by hand. Because a YAML file can also specify an operating system and version that may not match those of the person trying to replicate the environment, this may require the use of a virtual machine.
• Via package management systems such as Conda. As discussed above, Conda can generate computational environments from YAML files, as well as generating YAML files from computational environments.
### Security issues
There is an inherent risk in downloading/using files you have not written to your computer, and it is possible to include malicious code in YAML files. Do not load YAML files or generate computational environments from them unless you trust their source.
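As an illustration of the safe option in Python, here is a minimal sketch, assuming the third-party PyYAML package is installed:

```python
import yaml  # PyYAML, a third-party package (pip install pyyaml)

document = """
language: python
python: 3.2
packages:
  numpy:
    version: 1.16.1
"""

# safe_load only constructs plain types (strings, numbers, lists, dicts),
# refusing the arbitrary-object construction that makes yaml.load of
# untrusted input dangerous.
config = yaml.safe_load(document)
print(config["packages"]["numpy"]["version"])  # 1.16.1
```

Note that `1.16.1` parses as a string (it is not a valid number), while `3.2` parses as a float, which is one reason version fields are often quoted in practice.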
|
|
## anonymous 5 years ago need help with simplifying by rationalizing the denominator.
1. Owlfred
Hoot! You just asked your first question! Hang tight while I find people to answer it for you. You can thank people who give you good answers by clicking the 'Good Answer' button on the right!
2. anonymous
when you have: $\frac{a}{b-\sqrt{c}}$ multiply the top and bottom by the conjugate: b+sqrt(c) $\frac{a(b+\sqrt{c})}{(b-\sqrt{c})(b+\sqrt{c})}=\frac{ab+a \sqrt{c}}{b^{2}-c}$
3. anonymous
now the denominator is rational
4. anonymous
Rationalizing the denominator requires that you multiply above and below by the contents of the denominator. So in this case, you need to multiply above and below by $b-\sqrt{c}$ This will give you a new result of $a(b-\sqrt{c})/(b-\sqrt{c})^2$
5. anonymous
(b-sqrt(c))^2=b^2-2bsqrt(c)+c, this is not rational. you need to multiply by the conjugate to get rid of the radical
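To make the conjugate step concrete, here is a worked instance with the (hypothetical) values $a=1$, $b=3$, $c=2$:

$$\frac{1}{3-\sqrt{2}} = \frac{3+\sqrt{2}}{(3-\sqrt{2})(3+\sqrt{2})} = \frac{3+\sqrt{2}}{9-2} = \frac{3+\sqrt{2}}{7},$$

and the denominator $7$ is now rational.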
|
|
## Sunday, April 18, 2010
### Updayton Summit: Struggles of an Organization Town
I recently attended the updayton summit. I'm a newcomer to Dayton and was curious about this unique-sounding gathering. My first impression on hearing the words Young Creatives (YCs) Summit was "what a pretentious sounding group". I went with a list of preconceived questions (culled from this set of notes) that I wanted to try and answer through observation of how the summit was conducted and how the participants evolved their involvement. My initial idea was to look for evidence supporting either a synergistic group process or emergent individual creativity at the summit. I note my impressions and preconceptions upfront so that you'll understand my observations are not disinterested, though I tried to be 'minimally involved' and objective (with only limited success, the participants and facilitators were really nice and their optimism was infectious). If you think I missed something significant please point it out in the comments.
The updayton summit's main goal is to come up with annual projects that serve to excite YCs and will in-turn help Dayton retain recent college graduates. The means used to achieve this goal were facilitated consensus generation and voting. Attendees were split up by self-selected interest categories, and then further into several sub-groups. My interest category was entrepreneur. These sub-groups met independently at the beginning of the summit in breakout sessions to generate ideas and vote on their 'top two' options for projects. After that, the results of all the sub-groups were collected by summit staff. The attendees then went to panel discussions in their interest category, and then all met together in a Town Hall to vote on the final projects for the coming year.
There were seven resulting top ideas in the entrepreneur interest category. Six of these consisted primarily of websites. Five of those websites were about mentoring for young entrepreneurs by established ones, entrepreneur support groups, information clearinghouses or some combination thereof. Consensus building is certainly brutal in seeking out the lowest common denominator. How to argue against something as innocuous and pervasive as a website in our networked age? And what were the big success stories out of last year's summit? Weeding and painting and, you guessed it, a web resource. Remember, these groups met independently at the beginning of the summit. No significant prior communication between sub-group members other than mingling at vendor booths while surfing swag.
How does what I observed in the summit breakout sessions look in light of established thoughts on creativity (or lack thereof) in groups? There are two superficially competing views of the group creative process. There is the whole-is-more-than-the-sum-of-parts school popularized by Stephen Covey. In opposition, is the creative-acts-are-individual-acts school. William Whyte's The Organization Man provides an extensive denial of useful creative genius in groups, and a call for renewed focus on the dignity and efficacy of the individual contribution. The former I'll call the Synergy School, the latter the Solitary School.
Here's part of Whyte's criticism of the consensus building group,
Think for a moment of the way you behave in a committee meeting. In your capacity as group member you feel a strong impulse to seek common ground with the others. Not just out of timidity but out of respect for the sense of the meeting you tend to soft-pedal that which would go against the grain. And that, unfortunately, can include unorthodox ideas. A really new idea affronts current agreement -- it wouldn't be a new idea if it didn't -- and the group, impelled as it is to agreement, is instinctively hostile to that which is divisive. With wise leadership it can offset this bias, but the essential urge will still be to unity, to consensus. After an idea matures -- after people learn to live with it -- the group may approve it, but that is after the fact and it is an act of acquiescence rather than creation.
I have been citing the decision-making group, and it can be argued that these defects of order do not apply to information-exchanging groups. It is true that meeting with those of common interests can be tremendously stimulating and suggest to the individuals fresh ways of going about their own work. But stimulus is not discovery; it is not the act of creation. Those who recognize this limitation do not confuse the functions and, not expecting too much, profit from the meeting of minds.
Others, however, are not so wise, and fast becoming a fixture of organization life is the meeting self-consciously dedicated to creating ideas. It is a fraud. Much of such high-pressure creation -- cooking with gas, creating out loud, spitballing, and so forth -- is all very provocative, but if it is stimulating, it is stimulating much like alcohol. After the glow of such a session has worn off, the residue of ideas usually turns out to be a refreshed common denominator that everybody is relieved to agree upon -- and if there is a new idea, you usually find that it came from a capital of ideas already thought out -- by individuals -- and perhaps held in escrow until someone sensed an opportune moment for its introduction.
Togetherness
The scientific conference exemplifies Whyte's informational exchange meeting. No one attends to make decisions (or vote with dots), the attendees are looking to share their work, and learn about their colleagues' work. These meetings are an important part of modern scientific progress. This years' updayton meeting did have information exchange components, which I'll get to later.
The Synergy School might argue that the updayton breakout sessions can provide an opportunity for synergistic collaboration, where alternative solutions emerge that are better than any of the individual solutions brought by group members. The Synergy School's three levels of communication are
1. The lowest level of communication coming out of low trust situations is characterized by defensiveness, protectiveness, and legalistic language which covers all the bases and spells out qualifiers and escape clauses in the event things go sour.
2. The middle level of communication is respectful communication -- where fairly mature people communicate.
3. The highest level of communication is synergistic (win/win) communication.
However, the acknowledged goal of the updayton summit is 'stimulating like alcohol' and 'engagement' rather than synergy. So, while I went looking for evidence of high-level cooperative action I should have paid closer attention to the marketing materials and lowered my expectations accordingly. There was no effort at establishing group trust (we didn't even introduce ourselves at the start of the breakout). We jumped right in to the scripted consensus process. Low-trust communication among mature professionals leading to compromise (consensus) is the best we can hope for from events like these, and, unsurprisingly, that's exactly what we got. This naturally raises the question: why bother? If all we can reasonably hope for is second tier communication then why invest the effort? The gist I get from a closer look at the promotional material is 'to get buy-in for the projects which will excite YCs to stay'. Which, in a moment of cynicism, might strike one as rather manipulative. Instead of manufacturing radiators, now we're manufacturing community-spiritedness. We might not be able to offer you gainful employment, but you can volunteer to weed our sidewalks!
In support of the Solitary School's idea about the capital of individual ideas, the winning project from the entrepreneur interest category was the one option that wasn't a website. The young man whose idea formed the core of this project said, "this is something I've been writing about for years". Something he was clearly passionate about, something that he expended his individual creative effort to flesh out beforehand on his own, and subsequently pitch to the group. The other options presented by the members of the group were relentlessly mashed into web-sameness by the gentle actions of the facilitators and the listless shrugs of individual acquiescence from well-meaning group members searching for common ground. When a thoughtful member of the breakout session asked the only really important question, "how do you create an innovator?" His question was met with more shrugs around the room followed quickly by redirection from the facilitators. Clearly that question cannot be packaged into a public relations project.
What about the skills sessions? Surely these have redeeming aspects, the Solitary School would appreciate these as information exchange, and the Synergy School might appreciate them as 'sharpening the saw'. The most interesting aspect of the panel discussions was the incipient frustration I observed in some of David Gasper's comments. Roughly, "there are so many great resources for entrepreneurs in the Dayton region. Why don't we have more entrepreneurs!? Dayton needs more entrepreneurs." Some of the resources mentioned by the panelists were Dayton SCORE, EntrepenuerOhio and Dayton Business Resource Center. As Theresa Gasper observed, "People seem to want the information PUSHED to them, but then feel overwhelmed with all the information coming at them. No one seems to want to PULL the information – meaning, many don't want to search for the info." This is consistent with the majority of "needs" identified in the entrepreneur breakout sessions. These folks are looking for checklists, guarantees of stability and someone to tell them what to do. In fact, one participant in my session thought that the biggest barrier to entry for entrepreneurs was the lack of the safety net offered by nationalized health-care! If you were to ask me what is the opposite of the entrepreneurial spirit, I could not have come up with a better answer. Probably the opposite of the definitions the panel members gave of entrepreneur too:
• some one who has put something of value to them at risk
• some one with significant "skin in the game"
Dayton already has a tough time with entrepreneurial thinking because of its recent history as a factory town (far removed from the celebrated, early-industrial "great men"). In his article on a New Era of Joblessness, Don Peck identifies psychological work that points to an additional generational component contributing to this dearth of entrepreneurs,
Many of today’s young adults seem temperamentally unprepared for the circumstances in which they now find themselves. Jean Twenge, an associate professor of psychology at San Diego State University, has carefully compared the attitudes of today’s young adults to those of previous generations when they were the same age. Using national survey data, she’s found that to an unprecedented degree, people who graduated from high school in the 2000s dislike the idea of work for work’s sake, and expect jobs and career to be tailored to their interests and lifestyle. Yet they also have much higher material expectations than previous generations, and believe financial success is extremely important. “There’s this idea that, ‘Yeah, I don’t want to work, but I’m still going to get all the stuff I want,’” Twenge told me. “It’s a generation in which every kid has been told, ‘You can be anything you want. You’re special.’”
In her 2006 book, Generation Me, Twenge notes that self-esteem in children began rising sharply around 1980, and hasn’t stopped since. By 1999, according to one survey, 91 percent of teens described themselves as responsible, 74 percent as physically attractive, and 79 percent as very intelligent. (More than 40 percent of teens also expected that they would be earning $75,000 a year or more by age 30; the median salary made by a 30-year-old was $27,000 that year.) Twenge attributes the shift to broad changes in parenting styles and teaching methods, in response to the growing belief that children should always feel good about themselves, no matter what. As the years have passed, efforts to boost self-esteem—and to decouple it from performance—have become widespread.
These efforts have succeeded in making today’s youth more confident and individualistic. But that may not benefit them in adulthood, particularly in this economic environment. Twenge writes that “self-esteem without basis encourages laziness rather than hard work,” and that “the ability to persevere and keep going” is “a much better predictor of life outcomes than self-esteem.” She worries that many young people might be inclined to simply give up in this job market. “You’d think if people are more individualistic, they’d be more independent,” she told me. “But it’s not really true. There’s an element of entitlement—they expect people to figure things out for them.”
Seeking 'solutions' which enable this emerging neurosis, rather than healing it, is probably not the answer to a more dynamic Dayton.
Please don't misunderstand my criticisms of this updayton process (or cooperation in general). I am in agreement with both Covey and Whyte that our biggest challenges require innovative cooperation to solve.
Our most important work, the problems we hope to solve or the opportunities we hope to realize require working and collaborating with other people in a high-trust, synergistic way...
Interdependence
Let me admit that I have been talking principally about the adverse aspects of the group. I would not wish to argue for a destructive recalcitrance, nor do I wish to undervalue the real progress we have made in co-operative effort. But to emphasize, in these times, the virtues of the group is to be supererogatory. Universal organization training, as I will take up in the following chapters, is now available for everybody, and it so effectively emphasizes the group spirit that there is little danger that inductees will be subverted into rebelliousness.
Over and above the overt praise for the pressures of the group, the very ease, the democratic atmosphere in which organization life is now conducted makes it all the harder for the individual to justify to himself a departure from its norm. It would be a mistake to confuse individualism with antagonism, but the burdens of free thought are steep enough that we should not saddle ourselves with a guilty conscience as well.
However, what Dayton lacks towards its success is not more resources from government, more focus on community, more committee meetings or trendy bohemian culture to attract jobless hipsters. In fact, if the attendance of the updayton summit is any indication, Dayton has no lack of optimistic joiners. However, coddling these agreeable, cooperative, and risk-averse Organization Volk is not the answer if what you are seeking is a flowering of 1000 new entrepreneurs in Dayton. As Whyte argues, we lack a recognition that "[t]he central ideal -- that the individual, rather than society, must be the paramount end [...] is as vital and as applicable today as ever". Lower the barriers to entry (taxes / zoning / regulation / government subsidized competitors), and the passionate individuals uninterested in paternalism will exploit the opportunities that emerge to deliver for Dayton's future.
The winner of the 'best swag contest' was MetroParks with their D-ring key fob:
Yes. My keys are now a' swingan'...
#### 6 comments:
1. SBA recovery lending extended: The U.S. Small Business Administration will receive an additional $80 million to extend its stimulus-funded lending programs through May. [...] Dayton-area small businesses have said a lack of funding opportunities has hindered them during the economic downturn.

2. Paul Graham on incubators (emphasis added): Till now, nearly all seed firms have been so-called "incubators," so Y Combinator gets called one too, though the only thing we have in common is that we invest in the earliest phase. According to the National Association of Business Incubators, there are about 800 incubators in the US. This is an astounding number, because I know the founders of a lot of startups, and I can't think of one that began in an incubator.

What is an incubator? I'm not sure myself. The defining quality seems to be that you work in their space. That's where the name "incubator" comes from. They seem to vary a great deal in other respects. At one extreme is the sort of pork-barrel project where a town gets money from the state government to renovate a vacant building as a "high-tech incubator," as if it were merely lack of the right sort of office space that had till now prevented the town from becoming a startup hub. At the other extreme are places like Idealab, which generates ideas for new startups internally and hires people to work for them.

The classic Bubble incubators, most of which now seem to be dead, were like VC firms except that they took a much bigger role in the startups they funded. In addition to working in their space, you were supposed to use their office staff, lawyers, accountants, and so on. Whereas incubators tend (or tended) to exert more control than VCs, Y Combinator exerts less. And we think it's better if startups operate out of their own premises, however crappy, than the offices of their investors.
So it's annoying that we keep getting called an "incubator," but perhaps inevitable, because there's only one of us so far and no word yet for what we are. If we have to be called something, the obvious name would be "excubator." (The name is more excusable if one considers it as meaning that we enable people to escape cubicles.)

3. A few related comments over on an article from DaytonMostMetro. Do people follow jobs or do jobs follow people?

4. I really liked Shop Class as Soulcraft if for nothing else than his criticism of Richard Florida's Creative Class hooey: Everyone an Einstein

The latest version of such hopeful thinking is gathered into the phrase "the creative economy." In the Rise of the Creative Class, Richard Florida presents the image of the creative individual. "Bizarre mavericks operating at the bohemian fringe" are now "at the very heart of the process of innovation," forming a core creative class "in science and engineering, architecture and design, education, arts, music, and entertainment," joining "creative professionals in business and finance, law, health-care and related fields." In a related article, Florida invokes Albert Einstein to give us some idea of the self-directed and creative individual. This type is becoming more numerous. "Already, more than 40 million Americans work in the creative sector, which has grown by 20 million jobs since the 1980s."

Some of these new Einsteins, it turns out, can be found working at Best Buy. Florida informs us that "Best Buy CEO Brad Anderson has made it his company's stated mission to provide an 'inclusive, innovative work environment designed to unleash the power of all of our people as they have fun while being the best." Adopting the role of spokesperson for the spokesperson, Florida continues: "Employees are encouraged to improve upon the company's work process and techniques in order to make the workplace more productive and enjoyable while increasing sales and profits. In many cases, a small change made on the salesroom floor--by a teenage sales rep re-conceiving a Vonage display or an immigrant salesperson acting on a thought to increase outreach, advertising, and service to non-English-speaking communities--has been implemented nationwide, generating hundreds of millions of dollars in added revenue."

The Vonage display isn't merely altered, it is re-conceived. Whatever survives this onslaught of intellectual rigor by the teenage sales rep is put back on the sales floor. Its conceptual foundations clarified, the re-conceived Vonage display generates hundreds of millions of dollars in added revenue.

Florida continues: "Best Buy's Anderson...likes to say that the great promise of the creative class era is that, for the first time in our history, the further development of our economic competitiveness hinges on the fuller development of human creative capabilities. In other words, our economic success increasingly turns on harnessing the creative talents of each and every human being..." Frank Levy, the MIT economist, responds to this by dryly noting that "where I live Best Buy seems to be starting people at about $8.00 an hour."
Florida is unimpressed by such facts. After all, the "stated mission" of Best Buy's CEO is to provide a work environment designed to "unleash the power of all of our people as they have fun while being the best." It seems the unleashed power of all those mavericks in the Best Buy creative sector is fully compatible with near-minimum wage. Bohemians live by a different set of rules; they aren't money grubbing proles.
5. I quoted Whyte in the post: It would be a mistake to confuse individualism with antagonism, but the burdens of free thought are steep enough that we should not saddle ourselves with a guilty conscience as well.
Some empirical evidence for "the burden" (courtesy of this slashdot comment, emphasis added):
What can you do? I gained some insight into this problem several years ago when my research group performed an fMRI study of social conformity. We recreated a version of the famous Asch experiment of the 1950s and used fMRI to determine how a group changes an individual's perception of the world. Two things emerged from the study. First, when individuals conform to a group's opinion, even when the group is wrong, we observe changes in perceptual circuits in the brain, suggesting that groups change the way we see the world. Second, when an individual stands up against the group, we observed strong activation in the amygdala, a structure closely associated with fear. All this tells me that not only are our brains not wired for truly independent thought, but it takes a huge amount of effort to overcome the fear of standing up for one's own beliefs and speaking out.
6. The consensus is built, get on the bus or get run over:
Lately we’ve seen a spirit of community, an intolerance for naysayers, and a recognition of the unique opportunity we have in Dayton to remake a city.
updayton year three report
No change these two years past, as I wrote in 2010: In fact, if the attendance of the updayton summit is any indication, Dayton has no lack of optimistic joiners. However, coddling these agreeable, cooperative, and risk-averse Organization Volk is not the answer if what you are seeking is a flowering of 1000 new entrepreneurs in Dayton. As Whyte argues, we lack a recognition that "[t]he central ideal -- that the individual, rather than society, must be the paramount end [...] is as vital and as applicable today as ever".
|
|
# How to calculate the bremsstrahlung limit in the fusion triple product diagram?
The fusion triple product is a commonly used figure of merit to quantify the progress in fusion research, it is the product of plasma density $n$, temperature $T$ and energy confinement time $\tau_E$.
It can be used in combination with a power balance to formulate a criterion for ignition, which is defined as the point when the energy released by the fusion reactions is large enough to sustain them (by providing enough heat to keep the fusion processes going). The criterion reads $$nT\tau_E > \frac{12\,T^2}{\left<\sigma_{fusion}u\right>\epsilon_\alpha - 4c_{br}Z_{eff}\sqrt{T}},$$ with $\left<\sigma_{fusion}u\right>$ the fusion reactivity (which is also a function of $T$, approximately of $T^2$ for the temperatures of interest here), $\epsilon_\alpha=3.52\,\mathrm{MeV}$ the energy released by the $\alpha$-particles (assuming here that we have a DT-fusion reaction), $c_{br}\approx1.04\cdot 10^{-19}\,\mathrm{m}^3\mathrm{s}^{-1}\sqrt{\mathrm{eV}}$, and $Z_{eff}$ the effective charge number, which we assume to be 1 here. The left term in the denominator describes the heating by the $\alpha$-particles produced in the DT fusion reaction and the right term the losses due to bremsstrahlung.
The fusion triple product is often plotted as a function of the temperature, as shown in the following plot.
Some more interesting plots are shown for example here or here or here. I have chosen those particular examples because they include a limit for bremsstrahlung losses (the linear function on the top left).
My question is how to calculate this limit such that it can be included in those plots?
(I have the feeling that the answer is quite obvious but I can't get it at the moment...)
• You mean how to graph the bremsstrahlung contribution from the denominator? Besides from substituting the values for the experimental conditions, what should be the problem? – Germán May 7 '17 at 7:29
• @Germán no, I was referring to the plots from the links, there they have a bremsstrahlung limit included (above which the plasma collapses due to too large bremsstrahlung losses). I just can't get my head around how to include that limit in the $nT\tau_E$ diagram... – Alf May 7 '17 at 9:52
• I got that, what I mean is, wouldn't you take the $4c_{br}Z_{eff}\sqrt{T}$ factor of the denominator, replace the values and coefficients accordingly, and end up with a $\sqrt{T}$ relation (times a coefficient) to plot in the diagram? – Germán May 8 '17 at 10:24
The above definition of the Lawson criterion derives from the following inequality $$P_f \geq \frac{W}{\tau'_E} + P_{rad}\,,$$ where $P_f= \frac{1}{4} n^2 \langle \sigma v\rangle \epsilon_{\alpha}$ is the fusion power density, $W=3 n T$ is the total energy density of the D-T plasma, $\tau'_E$ is an energy confinement time and $P_{rad}=c_{br} n^2 \sqrt{T}$ is the power density loss due to bremsstrahlung. Solving for $n \tau'_E$ and multiplying by $T$ gives the following lower limit on the triple product $$n T \tau'_E\geq \frac{12 T^2}{\langle \sigma v\rangle \epsilon_{\alpha}-4 c_{br}\sqrt{T}}\,.$$
I suspect that the plots in the links above use a different definition of the Lawson criterion, $$P_f \geq \frac{W}{\tau_E}\,,$$ where $\tau_E$ is the total energy confinement time (including radiative losses). Re-arranging again gives a different lower limit on the triple product, $$n T \tau_E\geq \frac{12 T^2}{\langle \sigma v\rangle \epsilon_{\alpha}}\,.$$ Now, by our definition we also require that the power density loss due to bremsstrahlung is less than the total energy loss rate per unit volume, $$P_{rad} \leq \frac{W}{\tau_E}\,.$$ Re-arranging gives the following upper limit on the triple product $$n T \tau_E \leq \frac{3 T^{3/2}}{c_{br}}\,.$$ Using reactivities $\langle \sigma v\rangle$ from Table VII in a paper by Bosch and Hale (1992), I have plotted the lower and upper limits on the triple product (see below).
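To make the two limits concrete, the expressions above can be evaluated directly. The sketch below is illustrative only: the reactivity value $\langle \sigma v\rangle \approx 1.1\cdot 10^{-22}\,\mathrm{m^3/s}$ at $T = 10\,\mathrm{keV}$ is an assumed round number (the plot itself uses the tabulated Bosch-Hale reactivities), and the function name is mine.

```python
import math

def triple_product_limits(T_eV, sigma_v, eps_alpha=3.52e6, c_br=1.04e-19):
    """Lower (ignition) and upper (bremsstrahlung) limits on n*T*tau_E.

    T_eV:    temperature in eV
    sigma_v: D-T reactivity <sigma v> in m^3/s at that temperature
    Returns both limits in eV * s / m^3.
    """
    lower = 12 * T_eV**2 / (sigma_v * eps_alpha - 4 * c_br * math.sqrt(T_eV))
    upper = 3 * T_eV**1.5 / c_br
    return lower, upper

# Assumed D-T reactivity at T = 10 keV (~1.1e-22 m^3/s):
lo, hi = triple_product_limits(1e4, 1.1e-22)
# lo is ~3.5e24 eV*s/m^3 (~3.5e21 keV*s/m^3), the familiar ignition value;
# hi is ~2.9e25 eV*s/m^3, the bremsstrahlung ceiling at this temperature.
```

Sweeping `T_eV` over a grid and plotting both curves reproduces the wedge-shaped ignition region, bounded below by the Lawson curve and above by the bremsstrahlung limit.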
• Thanks a lot for your detailed answer! So you are saying $\tau'_E$ is the confinement time taking into account only transport losses and $\tau_E$ is the total confinement time. That makes sense, I am just a bit puzzled why you plot them on the same graph here, as the y-axis should, strictly speaking, only be true for one of them...? – Alf Oct 16 '18 at 18:18
|
|
# CAMP/Nonlinear PDEs Seminar
Wednesdays at 4pm in Eckhart 202.
## Spring 2009 Schedule
April 1
Monica Visan, University of Chicago
The focusing energy-critical nonlinear Schrodinger equation
We introduce the defocusing and focusing energy-critical nonlinear Schrodinger equations. We then present joint work with Rowan Killip on the focusing problem.
April 22 at 3pm in E202
Sijue Wu, University of Michigan
Almost global well-posedness of the 2-D full water wave problem
We consider the problem of global in time existence and uniqueness of solutions of the 2-D infinite depth full water wave equation. It is known that this equation has a solution for a time period $[0, T/\epsilon]$ for initial data of type $\epsilon\Phi$, where $T$ depends only on $\Phi$. We show that for such data there exists a unique solution for a time period $[0, e^{T/\epsilon}]$. This is achieved by a better understanding of the nature of the nonlinearity of the water wave equation.
May 6
Ovidiu Savin, Columbia University
Minimizers of convex functionals arising in random surfaces
We discuss $C^1$ regularity of minimizers to $\int F(\nabla u)dx$ in two dimensions for certain classes of non-smooth convex functionals $F$. In particular our results apply to the surface tensions that appear in recent works on random surfaces and random tilings of Kenyon, Okounkov and others. This is a joint work with D. De Silva.
May 13
Alexander Kiselev, University of Wisconsin
Nonlocal maximum principles for active scalars
I will discuss nonlocal maximum principles for a family of dissipative active scalar equations. This technique gives proofs of global regularity for the critical surface quasi-geostrophic equation, Burgers equation and some other related models. The talk is based on works joint with Fedya Nazarov, Roman Shterenberg and Sasha Volberg.
June 3
John Gibson, Georgia Institute of Technology
Equilibria, traveling waves, and periodic orbits in plane Couette flow
Equilibrium, traveling wave, and periodic orbit solutions of pipe and plane Couette flow can now be computed precisely at Reynolds numbers above the onset of turbulence. These invariant solutions capture the complex dynamics of coherent roll-streak structures in wall-bounded flows and provide a framework for connecting wall-bounded turbulence to dynamical systems theory. We present a number of newly computed solutions of plane Couette flow and observe how they are visited by the turbulent flow. What emerges is a picture of low-Reynolds turbulence as a walk among a set of weakly unstable invariant solutions.
For questions, contact
Previous Years
|
|
# Intuition Behind Particular Related Rates Question
1. Mar 1, 2014
### S.R
1. The problem statement, all variables and given/known data
A particle is moving along the graph of y=sqrt(x). At what point on the curve are the x-coordinate and the y-coordinate of the particle changing at the same rate?
2. Relevant equations
y = sqrt(x)
y' = 1/(2sqrt(x))
3. The attempt at a solution
The solution to the problem is to differentiate both sides of the eqn with respect to time and then determine when dy/dt = dx/dt.
But I don't understand how the x-coordinate has a rate of change since it's an independent variable? Can someone explain how to comprehensively understand the solution and I suppose, wording of this problem?
Thank-you.
2. Mar 1, 2014
### Nick O
Edit: As shown in later posts, this is wrong and I need sleep.
I think we need to make the assumption that the particle is moving at a constant speed, and that the independent variable is time (not x).
That changes the equation for y:
y = sqrt(t)
and gives us an equation for x:
x = t^2
I admit, though, the question is very vague and I can't be sure that this is what it means.
Last edited: Mar 2, 2014
3. Mar 1, 2014
### S.R
If x = t^2, then sqrt(x) = t, which simplifies the function to:
y = sqrt(sqrt(x))
Not sure I understand.
4. Mar 1, 2014
### Nick O
You are thinking of y as a function of x. Think of them as two independent functions:
x(t) = t^2
y(t) = sqrt(t)
This allows you to differentiate both x and y with respect to t, rather than with respect to one another. Once you have both derivatives, you can find where they have equal rates of change.
Note that t^2 is just a reflection of sqrt(t). It would have been equivalent to say t = sqrt(x), which would make the x and y plots identical, but with different axis labels.
Does that make sense? I'm not sure that I am explaining how I got the equation for x very well.
5. Mar 1, 2014
### Dick
Use the chain rule. dy/dt=(dy/dx)*(dx/dt). If dy/dt=dx/dt then what is dy/dx?
6. Mar 1, 2014
### S.R
Well, dy/dx must equal 1. So I guess we have to assume y and x are functions of another variable (time I suppose)? It's rather unintuitive to do so since x is generally the independent variable.
7. Mar 1, 2014
### jackarms
I don't think we can assume constant speed -- I think all that's implied by the problem is that the independent variable is t, and then velocity is going to depend on dx/dt and dy/dt.
There's an error in your substitution here. And I think you have the same error too Nick -- eliminating t would also give you sqrt(sqrt(x)) instead of just sqrt(x).
For these parameterization problems, it's usually easiest to assign t to x, and then find what y as a function for t would be. For this problem, you would get:
$x = t$ (by arbitrary assignment) and
$y = \sqrt(t)$
Then you want to find when $\frac{dx}{dt}$ and $\frac{dy}{dt}$ are equal, and then what point that time corresponds to.
Hope this was clear enough.
8. Mar 1, 2014
### Nick O
Gah, you're right. In a post I made an hour or so ago I said I should get some sleep... I really should do that before trying to do any more math.
9. Mar 1, 2014
### Dick
Yes, time is the independent variable but dy/dx must equal 1. So at what (x,y) point is dy/dx=1? That doesn't depend on the time.
10. Mar 1, 2014
### S.R
Thanks jackarms, your explanation was great. If dy/dx = 1 then 1/(2sqrt(x)) = 1 and thus x = 1/4.
11. Mar 2, 2014
### Ray Vickson
To answer the question there is no need to assume a constant speed; it will work perfectly well if you assume the position is <x(t),y(t)>, with y(t) = sqrt(x(t)) and x(t), y(t) differentiable functions of t. That's all you need.
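As a quick sanity check of the thread's conclusion that dy/dx = 1 at x = 1/4, i.e. at the point (1/4, 1/2), here is a small Python sketch (not part of the original discussion):

```python
import math

# y = sqrt(x), so dy/dx = 1/(2*sqrt(x)); setting this equal to 1 gives x = 1/4.
def dydx(x):
    return 1 / (2 * math.sqrt(x))

x_star = 0.25
y_star = math.sqrt(x_star)   # 0.5

# Finite-difference confirmation that the slope at x = 1/4 really is 1:
h = 1e-8
numeric = (math.sqrt(x_star + h) - math.sqrt(x_star)) / h
```

This matches Dick's chain-rule argument: the answer depends only on where dy/dx = 1, not on any particular parameterization in t.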
|
|
# C++ Program to Print X Star Pattern
Displaying star patterns in different shapes, like pyramids, squares, and diamonds, is a common part of basic programming and logic development. We faced various problems involving stars and numerical patterns as we studied looping statements in programming. This article will demonstrate how to print an X or a cross using stars.
We will see two methods for this. The first one is a bit complicated, while the second is more efficient.
## X Star Pattern (Using two sets of blank spaces)
*       *
 *     *
  *   *
   * *
    *
   * *
  *   *
 *     *
*       *
For this pattern, the line count is n = 5 for the upper half; the total X pattern has 2n – 1 = 9 lines. Let us see how we can achieve this using the following table −
| Line Number | Star Count | Space Left | Space Between | Description |
| --- | --- | --- | --- | --- |
| 1 | 2 | 0 | 7 | When i = n, print one star, otherwise 2. The left spaces are (i – 1) and the between spaces are 2(n – i) - 1 |
| 2 | 2 | 1 | 5 | |
| 3 | 2 | 2 | 3 | |
| 4 | 2 | 3 | 1 | |
| 5 | 1 | 4 | - | |
| 6 | 2 | 3 | 1 | Left spaces decrease as n – (i – n) – 1 = 2n – i – 1. The space between follows 2(i – n) - 1 |
| 7 | 2 | 2 | 3 | |
| 8 | 2 | 1 | 5 | |
| 9 | 2 | 0 | 7 | |
### Algorithm
• for i ranging from 1 to 2n - 1, do
• if i <= n, then
• for j ranging from 1 to i - 1, do
• display blank space
• end for
• display star
• if i and n are not the same, then
• for j ranging from 1 to 2(n - i) - 1, do
• display blank space
• end for
• display star
• end if
• otherwise
• for j ranging from 1 to 2n - i - 1, do
• display blank space
• end for
• display star
• for j ranging from 1 to 2(i - n) - 1, do
• display blank space
• end for
• display star
• end if
• move the cursor to the next line
• end for
### Example
#include <iostream>
using namespace std;
void solve( int n ){
for ( int i = 1; i <= 2*n - 1; i++ ) {
if ( i <= n ) {
for ( int j = 1; j <= i - 1; j++ ) {
cout << ". ";
}
cout << "* ";
if ( i != n ) {
for ( int j = 1; j <= 2 * (n - i) - 1; j++ ) {
cout << " ";
}
cout << "* ";
}
} else {
for ( int j = 1; j <= (2 * n) - i - 1; j++ ) {
cout << ". ";
}
cout << "* ";
for ( int j = 1; j <= 2 * (i - n) - 1; j++ ) {
cout << " ";
}
cout << "* ";
}
cout << "\n";
}
}
int main(){
int n = 8;
cout << "X Star Pattern for " << n << " lines." << endl;
solve( n );
}
### Output
X Star Pattern for 8 lines.
* *
. * *
. . * *
. . . * *
. . . . * *
. . . . . * *
. . . . . . * *
. . . . . . . *
. . . . . . * *
. . . . . * *
. . . . * *
. . . * *
. . * *
. * *
* *
### Output (With n = 10)
X Star Pattern for 10 lines.
* *
. * *
. . * *
. . . * *
. . . . * *
. . . . . * *
. . . . . . * *
. . . . . . . * *
. . . . . . . . * *
. . . . . . . . . *
. . . . . . . . * *
. . . . . . . * *
. . . . . . * *
. . . . . * *
. . . . * *
. . . * *
. . * *
. * *
* *
## Using Grid Method
The same problem can be solved by considering a grid; from this grid, we can derive a formula for where the stars are printed and where the spaces are printed.
* . . . . . . . *
. * . . . . . * .
. . * . . . * . .
. . . * . * . . .
. . . . * . . . .
. . . * . * . . .
. . * . . . * . .
. * . . . . . * .
* . . . . . . . *
From the above grid (drawn here for n = 5, so m = 2n – 1 = 9 rows and columns), it is easy to understand that the stars are only placed where the column number equals the row number (one diagonal) or equals (m + 1 – i) = (2n – i) (the other diagonal)
### Algorithm
• m = 2n - 1
• for i ranging from 1 to m, do
• for j ranging from 1 to m, do
• if j is same as i or j is same as (m + 1) - i, do
• display star
• otherwise
• display space
• end if
• end for
• move the cursor to the next line
• end for
### Example
#include <iostream>
using namespace std;
void solve( int n ){
int m = 2*n - 1;
for ( int i = 1; i <= m; i++ ) {
for ( int j = 1; j <= m; j++ ) {
if (j == i || j == (m + 1 - i))
cout << "* ";
else
cout << ". ";
}
cout << endl;
}
}
int main(){
int n = 6;
cout << "X Star Pattern for " << n << " lines." << endl;
solve( n );
}
### Output
X Star Pattern for 6 lines.
* . . . . . . . . . *
. * . . . . . . . * .
. . * . . . . . * . .
. . . * . . . * . . .
. . . . * . * . . . .
. . . . . * . . . . .
. . . . * . * . . . .
. . . * . . . * . . .
. . * . . . . . * . .
. * . . . . . . . * .
* . . . . . . . . . *
### Output (With n = 8)
X Star Pattern for 8 lines.
* . . . . . . . . . . . . . *
. * . . . . . . . . . . . * .
. . * . . . . . . . . . * . .
. . . * . . . . . . . * . . .
. . . . * . . . . . * . . . .
. . . . . * . . . * . . . . .
. . . . . . * . * . . . . . .
. . . . . . . * . . . . . . .
. . . . . . * . * . . . . . .
. . . . . * . . . * . . . . .
. . . . * . . . . . * . . . .
. . . * . . . . . . . * . . .
. . * . . . . . . . . . * . .
. * . . . . . . . . . . . * .
* . . . . . . . . . . . . . *
## Conclusion
Star patterns are simple exercises that are useful for learning programming looping ideas. This article demonstrated how to display the X or cross pattern using stars in C++, given a line count n. We have provided two methods for this: one employs padding with blank spaces, while the other makes use of grid calculations. Instead of adding spaces, we've added dots, since some consoles trim trailing blank spaces from the output.
|
|
GMAT Shortcut: Adding to the Numerator and Denominator
Math Expert
Joined: 02 Sep 2009
Posts: 65194
21 May 2017, 22:05
GMAT Shortcut: Adding to the Numerator and Denominator
BY MIKE McGARRY. MAGOOSH
First, try these practice DS questions:
1) If x and y are positive integers, is $$\frac{x}{y} <\frac{x+2}{y+3}?$$
Statement #1: y > 20
Statement #2: x < 5
Discussed HERE
2) If x and y are positive integers, is $$\frac{x}{y} <\frac{x+5}{y+5}$$?
Statement #1: y = 5
Statement #2: x > y
Discussed HERE
Throughout this post, assume that I am talking about positive fractions with a positive numerator and positive denominator. If the fraction is negative, use the information below to figure out what happens to the absolute value of the fraction, and judge from there.
Ironically, it’s a bit easier if we add to one part of the fraction and subtract from the other. The BIG idea here: if you increase the numerator and/or decrease the denominator of any positive fraction, that fraction will get bigger; if you decrease the numerator and/or increase the denominator of any positive fraction, that fraction will get smaller. Add a positive number to the numerator and/or subtract a positive number from the denominator of any positive fraction, and the new fraction will be greater than the starting fraction. Subtract a positive number from the numerator and/or add a positive number to the denominator of any positive fraction, and the new fraction will be smaller than the starting fraction. Though not relevant in the two practice problems above, this is a golden rule that will help you in a panoply of fraction and ratio problems.
Adding the Same Number to Numerator and Denominator
Suppose we start with the positive fraction x/y and we want to add some positive number b to both the numerator and the denominator. How does the resultant fraction, (x + b)/(y + b), compare to the starting fraction?
Well, the rule here is a bit subtle. When you add the same number to the numerator and denominator, the resultant fraction is closer to 1 than is the starting fraction. This means, if the starting fraction x/y is less than 1, then the resultant fraction is closer to one — bigger than the starting fraction. If the starting fraction x/y is an “improper fraction”, a fraction with a value greater than one, then adding the same number to both the numerator and the denominator will make the resultant fraction closer to 1 — less than the starting fraction.
Here are a couple of examples.
Example #1
Start = 2/3 —- a fraction less than one.
Add five to the numerator and the denominator.
Result = 7/8 — this fraction is closer to one than is 2/3 on the number line.
Since 1 is bigger than 2/3, when the resultant fraction moved closer to 1, it got bigger than 2/3. Therefore, we know 2/3 < 7/8
Example #2
Start = 3/2 —- a fraction greater than one.
Add two to the numerator and the denominator.
Result = 5/4 — this fraction is closer to 1 than is 3/2 on the number line.
Since 1 is less than 3/2, when the result fraction moved closer to 1, it got smaller than 3/2. Therefore, we know 3/2 > 5/4
Adding Different Numbers to the Numerator and Denominator
Actually, this case is simply a generalization of the previous case. Suppose we start with a fraction x/y, and we add the positive number a to the numerator and the positive number b to the denominator, and we want to know if the resultant fraction is bigger or smaller than the starting fraction.
Well, the general rule is: adding a to the numerator and b to the denominator moves the resultant fraction closer to the fraction a/b. If x/y < a/b, moving the starting fraction closer to a/b will make it bigger. If x/y > a/b, moving the starting fraction closer to a/b will make it smaller.
Here are some examples:
Example #3
Start = 2/7
Add 3 to the numerator and 5 to the denominator.
Resultant fraction = 5/12 — this fraction is closer to 3/5 than is 2/7 on the number line.
Because 3/5 is bigger than 2/7, adding 3 to the numerator and 5 to the denominator has the net effect of producing a fraction that is bigger: 2/7 < 5/12
Example #4
Start = 11/12
Add 2 to the numerator and 5 to the denominator.
Resultant fraction = 13/17 — this fraction is closer to 2/5 than is 11/12 on the number line.
Because 2/5 is less than 11/12, adding 2 to the numerator and 5 to the denominator has the net effect of producing a fraction that is smaller: 11/12 > 13/17
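Both rules can be checked mechanically with exact rational arithmetic. The following Python sketch (illustrative, not part of the original post; the helper name is mine) verifies all four examples:

```python
from fractions import Fraction

def closer_to(new, old, target):
    """True if `new` is closer to `target` than `old` is."""
    return abs(new - target) < abs(old - target)

# Adding the same number b to numerator and denominator moves x/y toward 1:
assert closer_to(Fraction(7, 8), Fraction(2, 3), 1)                   # Example 1
assert closer_to(Fraction(5, 4), Fraction(3, 2), 1)                   # Example 2

# Adding a to the numerator and b to the denominator moves x/y toward a/b:
assert closer_to(Fraction(5, 12), Fraction(2, 7), Fraction(3, 5))     # Example 3
assert closer_to(Fraction(13, 17), Fraction(11, 12), Fraction(2, 5))  # Example 4
```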
Now that you know these rules, go back to the practice problems at the beginning and see whether they make more sense now.
Practice Problem Solutions
1) Statement #1: We are adding 2 to the numerator and 3 to the denominator, so we know the resultant fraction will move closer to 2/3. If all we know is that the denominator of the starting fraction is greater than 20, then we have no idea what the size of the starting fraction is: it could be much greater than 2/3, or much smaller than 2/3, depending on the numerator, of which we have no idea. We can draw no conclusion right now. This statement, alone, by itself, is insufficient.
Statement #2: Now, all we know is that the numerator of the starting fraction is less than 5 — it could be 4, 3, 2, or 1. We have no idea of the denominator. If y = 50, then we get a very small fraction. But if x = 4 and y = 1, the fraction equals 4, much larger than 2/3. In this statement, we have no information about the denominator, and since we know nothing about the denominator, we know nothing about the size of the starting fraction: it could be either greater or less than 2/3. Therefore, we can draw no conclusion. This statement, alone, by itself, is also insufficient.
Now, combine the statements. We know y > 20 and x < 5. Well no matter what values we choose, we are going to have a denominator much bigger than the numerator. The largest possible fraction we could have under these constraints would be 4/21 (largest possible numerator with smallest possible denominator). The fraction 4/21 is much smaller than 1/2, so it’s definitely smaller than 2/3. Any fraction with y > 20 and x < 5 will be less than 2/3. Therefore, adding 2 to the numerator and 3 to the denominator will move the resultant fraction closer to 2/3, which has the net effect of increasing its value. Therefore, the answer to the prompt question is “yes.” Because we can give a definite answer to the prompt, we have sufficient information.
Neither statement is sufficient individually, but together, they are sufficient. Answer = C.
2) We are adding the same number, 5, to both the numerator and the denominator, so the value of x/y will move closer to 1. All we need to determine is whether x/y is greater than 1 or less than 1.
Statement #1: y = 5. Here, we have a definite value for y, but zero information about x. If y = 5, some fractions (1/5) can be less than one, while others (7/5) will be greater than one. Either is possible. Since both are possible, we can’t give a definitive answer to the prompt. This statement, alone, by itself, is insufficient.
Statement #2: x>y. Dividing both sides of this inequality by y, we get (x/y) > 1. This means x/y must be a fraction greater than 1, which means the resultant fraction (x + 5)/(y + 5) must be closer to one, which means the resultant fraction must be smaller. Therefore, we can definitively say: the answer to the prompt question is, “No.” Because we can give a definite answer to the prompt, we have sufficient information. This statement, alone, by itself, is sufficient.
Statement #1 is insufficient and Statement #2 is sufficient. Answer = B.
Intern
Joined: 27 Dec 2016
Posts: 12
29 Jun 2017, 09:55
Thanks Bunuel, that's good info.
Intern
Joined: 29 Oct 2017
Posts: 10
21 Jun 2019, 12:49
Hi, I am wondering if there are also rules for subtracting different numbers from the numerator and denominator? I assumed that changing from addition to subtraction would reverse the effect, but the results are not consistent: e.g. (4+7)/(5+8) leads to an increase and (4-7)/(5-8) does as well.
Hi Bunuel Sir, thanks for the info.
Just wanted to bring another method of solving these into the light.
% change: if the % change in the numerator is greater than the % change in the denominator, the value will increase; otherwise it will decrease.
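Both rules in this thread — adding the same number pulls a positive fraction toward 1, and comparing percent changes decides the general case — are easy to sanity-check with exact arithmetic. A quick Python sketch with made-up values:

```python
from fractions import Fraction

def closer_to_one(x, y, k):
    """Adding the same k > 0 to the numerator and denominator of the
    positive fraction x/y moves it closer to 1."""
    return abs(Fraction(x + k, y + k) - 1) < abs(Fraction(x, y) - 1)

print(closer_to_one(4, 21, 2))   # True: 6/23 is closer to 1 than 4/21
print(closer_to_one(7, 5, 5))    # True: 12/10 is closer to 1 than 7/5

def grows(x, y, dx, dy):
    """Percent-change test: x/y increases exactly when the numerator
    grows by a larger percentage than the denominator."""
    return Fraction(x + dx, y + dy) > Fraction(x, y)

print(grows(4, 5, 7, 8))   # True: numerator +175% beats denominator +160%
```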
|
|
Elementary Statistics: A Step-by-Step Approach with Formula Card 9th Edition
a. $0.255$ b. $0.509$ c. $0.018$
Total outcomes of 4 out of 12: $_{12}C_4=495$.
a. 0 defective resistors: choose 4 of the 9 good ones, $_9C_4=126$, so $p=126/495=0.255$.
b. 1 defective resistor: choose 3 of the 9 good ones and 1 of the 3 defective, $_9C_3 \cdot {}_3C_1=252$, so $p=252/495=0.509$.
c. 3 defective resistors: choose all 3 defective and 1 of the 9 good ones, $_3C_3 \cdot {}_9C_1=9$, so $p=9/495=0.018$.
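These counts can be verified with Python's `math.comb`. Note that "exactly 3 defective among the 4 selected" forces the fourth resistor to be one of the 9 good ones:

```python
from math import comb

total = comb(12, 4)                        # 495 ways to pick 4 of 12
p0 = comb(9, 4) / total                    # 0 defective: all 4 good
p1 = comb(9, 3) * comb(3, 1) / total       # exactly 1 defective
p3 = comb(9, 1) * comb(3, 3) / total       # exactly 3 defective (4th must be good)

print(round(p0, 3), round(p1, 3), round(p3, 3))   # 0.255 0.509 0.018
```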
|
|
# Dave Harris on Maximum Likelihood Estimation
#### June 17, 2013
At our last Davis R Users’ Group meeting of the quarter, Dave Harris gave a talk on how to use the bbmle package to fit mechanistic models to ecological data. Here’s his script, which I ran through the spin function in knitr:
# Load data
library(emdbook)
## Loading required package: MASS Loading required package: lattice
library(bbmle)
## Loading required package: stats4
data(ReedfrogFuncresp)
plot(ReedfrogFuncresp, xlim = c(0, 100), xaxs = "i")
Statistical models are stories about how the data came to be. The deterministic part of the story is a (slightly mangled) version of what you’d expect if the predators followed a Type II functional response:
Holling’s Disk equation for Type II Functional Response
• a is attack rate
• h is handling time
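Written out, the deterministic expectation (expected kills as a function of prey density N) is:

```latex
f(N) = \frac{aN}{1 + ahN}
```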
disk = function(N, a, h) {
N * a/(1 + N * a * h)
}
We can plot different values of a and h to see what kinds of data this model would generate on average:
plot(ReedfrogFuncresp, xlim = c(0, 100), xaxs = "i")
# a = 3, h = 0.05 (black) rises too quickly, saturates too quickly
curve(disk(x, a = 3, h = 0.05), add = TRUE, from = 0, to = 100)
# a = 2, h = 0.02 (red) still rises too quickly
curve(disk(x, 2, 0.02), add = TRUE, from = 0, to = 100, col = 2)
# a = 0.5, h = 0.02 (blue) looks like a plausible data-generating process
curve(disk(x, 0.5, 0.02), add = TRUE, from = 0, to = 100, col = 4)
The blue curve looks plausible, but is it optimal? Does it tell the best possible story about how the data could have been generated?
In order to tell if it’s optimal, we need to pick something to optimize. Usually, that will be the log-likelihood, i.e. the log-probability that the data would have come out this way if the model were true. Models with higher probabilities of generating the data we observed therefore have higher likelihoods. For various arbitrary reasons, it’s common to minimize the negative log-likelihood rather than maximizing the positive log-likelihood. So let’s write a function that says what the log-likelihood is for a given pair of a and h.
The log-likelihood is the sum of log-probabilities from each data point. The log-probability for a data point is (in this contrived example) drawn from a binomial (“coin flip”) distribution, whose mean is determined by the disk equation.
NLL = function(a, h) {
-sum(dbinom(ReedfrogFuncresp$Killed, size = ReedfrogFuncresp$Initial, prob = disk(ReedfrogFuncresp$Initial, a, h)/ReedfrogFuncresp$Initial, log = TRUE))
}
We can optimize the model with the mle2 function. It finds the lowest value for the negative log-likelihood (i.e. the combination of parameters with the highest positive likelihood, or the maximum likelihood estimate).
The NLL function we defined above requires starting values for a and h. Let’s naively start them at 1, 1
fit = mle2(NLL, start = list(a = 1, h = 1))
## Warning: NaNs produced Warning: NaNs produced Warning: NaNs produced
## Warning: NaNs produced Warning: NaNs produced Warning: NaNs produced
## Warning: NaNs produced Warning: NaNs produced Warning: NaNs produced
## Warning: NaNs produced Warning: NaNs produced Warning: NaNs produced
## Warning: NaNs produced Warning: NaNs produced Warning: NaNs produced
## Warning: NaNs produced Warning: NaNs produced Warning: NaNs produced
You’ll probably get lots of warnings about NaNs. That’s just the optimization procedure complaining because it occasionally tries something impossible (such as a set of parameters that would generate a negative probability of being eaten). In general, these warnings are nothing to worry about, since the optimization procedure will just try better values. But it’s worth checking to make sure that there are no other warnings or problems by calling warnings() after you fit the model.
# print out the results
fit
##
## Call:
## mle2(minuslogl = NLL, start = list(a = 1, h = 1))
##
## Coefficients:
## a h
## 0.52652 0.01666
##
## Log-likelihood: -46.72
summary(fit)
## Maximum likelihood estimation
##
## Call:
## mle2(minuslogl = NLL, start = list(a = 1, h = 1))
##
## Coefficients:
## Estimate Std. Error z value Pr(z)
## a 0.52652 0.07112 7.40 1.3e-13 ***
## h 0.01666 0.00488 3.41 0.00065 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## -2 log L: 93.44
coef(fit)
## a h
## 0.52652 0.01666
Here’s the curve associated with the most likely combination of a and h (thick black line).
plot(ReedfrogFuncresp, xlim = c(0, 100), xaxs = "i")
curve(disk(x, a = coef(fit)["a"], h = coef(fit)["h"]), add = TRUE, lwd = 4,
from = 0, to = 100)
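For readers outside R: the same story — a binomial negative log-likelihood minimized numerically — can be sketched in a few lines of Python. This is a hedged, stdlib-only analogue on invented data (not the reedfrog dataset), with a crude grid search standing in for the real optimizer that mle2/optim provide:

```python
from math import comb, log

# Invented (initial, killed) pairs standing in for the reedfrog data.
data = [(5, 2), (10, 5), (20, 8), (40, 14), (80, 21)]

def disk(N, a, h):
    """Holling type II functional response: expected number killed."""
    return N * a / (1 + N * a * h)

def nll(a, h):
    """Binomial negative log-likelihood: killed ~ Binom(initial, disk/initial)."""
    total = 0.0
    for N, k in data:
        p = disk(N, a, h) / N
        if not 0.0 < p < 1.0:
            return float("inf")   # impossible parameters, like the NaN warnings above
        total -= log(comb(N, k)) + k * log(p) + (N - k) * log(1 - p)
    return total

# A crude grid search stands in for optim/mle2's local optimizer.
best = min(((nll(a / 100, h / 1000), a / 100, h / 1000)
            for a in range(1, 200) for h in range(1, 200)),
           key=lambda t: t[0])
print(best)   # (minimum NLL, a-hat, h-hat)
```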
The rethinking package has a few convenient functions for summarizing and visualizing the output of an mle2 object. It’s not on CRAN, but you can get it from the author’s website or from github.
library(rethinking)
## Attaching package: 'rethinking'
##
## The following object is masked _by_ '.GlobalEnv':
##
## x
precis(fit)
## Estimate S.E. 2.5% 97.5%
## a 0.53 0.07 0.39 0.67
## h 0.02 0.00 0.01 0.03
We can also visualize a distribution of estimates that are reasonably consistent with the observed data using the sample.naive.posterior function:
plot(sample.naive.posterior(fit), pch = ".")
# Add a red dot for the maximum likelihood estimate:
points(as.data.frame(as.list(coef(fit))), col = 2, pch = 20)
# Add a blue dot for our earlier guess:
points(0.5, 0.02, col = 4, pch = 20)
### A Big digression about confidence intervals
Our earlier guess falls inside the cloud of points, so even though it’s not as good as the red point, it’s still plausibly consistent with the data.
Note that the ranges of plausible estimates for the two coefficients are correlated. This makes sense if you think about it: if the attack rate is high, then there needs to be a large handling time between attacks, or else too many frogs would get eaten.
Where did this distribution of points come from? mle2 objects, like many models in R, have a variance/covariance matrix that can be extracted with the vcov() function.
vcov(fit)
## a h
## a 0.0050584 2.859e-04
## h 0.0002859 2.384e-05
The variance terms (along the diagonal) describe mle2’s uncertainty about the values. The covariance terms (other entries in the matrix) describe how uncertainty in one coefficient relates to uncertainty in the other coefficient.
This graph gives an example:
curve(dnorm(x, sd = 1/2), from = -5, to = 5, ylab = "likelihood", xlab = "estimate")
curve(dnorm(x, sd = 3), add = TRUE, col = 2)
The black curve shows a model with low variance for its estimate. This means that the likelihood would fall off quickly if we tried a bad estimate, and we can be reasonably sure that the data was generated by a value in a fairly narrow range. The model associated with the red curve is much less certain: the parameter could be very different from the optimal value and the likelihood wouldn’t drop much.
Here’s another way to visualize the decline in likelihood as you move away from the best estimate:
plot(profile(fit))
Keep in mind that all of this is based on a Gaussian approximation. It works well when you have lots of data and you aren’t estimating something near a boundary. Since h is near its minimum value (at zero), there’s some risk that the confidence intervals are inaccurate. Markov chain Monte Carlo can provide more accurate estimates, but it’s also slower to run.
Where does the vcov matrix come from? From a matrix called the Hessian. It describes the curvature of the likelihood surface, i.e. how quickly the log-likelihood falls off as you move away from the optimum.
Sometimes the Hessian is hard to estimate and causes problems, so we can run the model without it:
fit.without.hessian = mle2(NLL, start = list(a = 1, h = 1), skip.hessian = TRUE)
## Warning: NaNs produced Warning: NaNs produced Warning: NaNs produced
## Warning: NaNs produced Warning: NaNs produced Warning: NaNs produced
## Warning: NaNs produced Warning: NaNs produced Warning: NaNs produced
## Warning: NaNs produced Warning: NaNs produced Warning: NaNs produced
## Warning: NaNs produced Warning: NaNs produced Warning: NaNs produced
## Warning: NaNs produced Warning: NaNs produced Warning: NaNs produced
When there’s no Hessian, there aren’t any confidence intervals, though.
precis(fit.without.hessian)
## Estimate S.E. 2.5% 97.5%
## a 0.53 NA NA NA
## h 0.02 NA NA NA
(End digression)
### More about mle2
Under the hood, mle2 uses a function called optim:
?optim
It’s worth noting that most of the optimization methods used in optim (and therefore in mle2) only do a local optimization. So if your likelihood surface has multiple peaks, you may not find the right one.
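That caveat is easy to see in one dimension. Here is a toy Python example (the function is invented, not the reedfrog likelihood): the same local optimizer started at two different points lands in two different basins.

```python
# A toy "likelihood surface" with two local minima (invented function).
f = lambda x: (x * x - 1) ** 2 + 0.2 * x
df = lambda x: 4 * x * (x * x - 1) + 0.2     # derivative of f

def descend(x, lr=0.01, steps=5000):
    """Plain gradient descent: a stand-in for a local optimizer."""
    for _ in range(steps):
        x -= lr * df(x)
    return x

left, right = descend(-0.5), descend(0.5)
print(round(left, 2), round(right, 2))   # two different answers
print(f(left) < f(right))                # True: only the left basin is global
```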
mle2 also has a convenient formula interface that can eliminate the need to write a whole likelihood function from scratch. Let’s take a look with some data found in ?mle2
x = 0:10
y = c(26, 17, 13, 12, 20, 5, 9, 8, 5, 4, 8)
d = data.frame(x, y)
Here, we’re fitting a Poisson model that depends on an intercept term plus a linear term. The exp() is mainly there to make sure that the value of lambda doesn’t go negative, which isn’t allowed (it would imply a negative number of occurrences for our outcome of interest).
fit0 = mle2(y ~ dpois(lambda = exp(intercept + slope * x)), start = list(intercept = mean(y),
slope = 0), data = d)
Note that mle2 finds its values for x and y from the data term and that we’re giving it starting values for the slope and intercept. In general, it’s useful to start the intercept at the mean value of y and the slope terms at 0, but it often won’t matter much.
### Prior information
Recall that our estimates of a and h are positively correlated: the data could be consistent with either a high attack rate and a high handling time OR with a low attack rate and a low handling time.
Suppose we have prior information about tadpole/dragonfly biology that suggests that these parameters should be on the low end. We can encode this prior information as a prior distribution on the parameters. Then mle2 won’t climb the likelihood surface. It will climb the surface of the Bayesian posterior (or if you’re frequentist, it will do a penalized or constrained optimization of the likelihood).
Here’s a negative log posterior that tries to keep the values of a and h small while still being consistent with the data:
negative.log.posterior = function(a, h) {
NLL(a, h) - dexp(a, rate = 10, log = TRUE) - dexp(h, rate = 10, log = TRUE)
}
We optimize it exactly as before
fit2 = mle2(negative.log.posterior, start = list(a = 1, h = 1))
## Warning: NaNs produced Warning: NaNs produced Warning: NaNs produced
## Warning: NaNs produced Warning: NaNs produced Warning: NaNs produced
## Warning: NaNs produced Warning: NaNs produced Warning: NaNs produced
## Warning: NaNs produced Warning: NaNs produced Warning: NaNs produced
## Warning: NaNs produced Warning: NaNs produced Warning: NaNs produced
## Warning: NaNs produced Warning: NaNs produced Warning: NaNs produced
## Warning: NaNs produced Warning: NaNs produced Warning: NaNs produced
## Warning: NaNs produced Warning: NaNs produced Warning: NaNs produced
## Warning: NaNs produced Warning: NaNs produced Warning: NaNs produced
## Warning: NaNs produced Warning: NaNs produced Warning: NaNs produced
## Warning: NaNs produced
Sure enough, the coefficients are a bit smaller
coef(fit) # MLE
## a h
## 0.52652 0.01666
coef(fit2) # MAP estimate
## a h
## 0.47931 0.01375
← The null model for age effects with overdispersed infection | All posts | Printing R help files in the console or in knitr documents →
|
|
Wednesday, June 13, 2007
Determinant, Trace, and Noncommutative Geometry
Recently there was some discussion in the n-category cafe about linear algebra and especially about determinants and their place in an undergraduate linear algebra course. The whole discussion was in fact triggered by a rather polemic paper entitled 'Down with Determinants!', which seemed to suggest that determinants had better be introduced in graduate school first! This 'extremist' point of view was, understandably, challenged by several people who pointed out various algebraic and geometric aspects of determinants and their importance. You can also find a rebuttal in Lieven Le Bruyn's blog here. I would like to echo those sentiments and say: long live determinants!
I don't want to say much about determinants or how linear algebra should be taught to undergraduates. This is not what this blog is all about and in fact I am not sure I am qualified enough in that regard. Others are of course most welcome to comment on all aspects of these issues as they see fit. I just wanted to mention one aspect of determinants and its relation with traces that have some implications for NCG. But first a bit of early history of determinants is perhaps in order.
Determinants have a long history going back to some work of Leibniz and the Japanese mathematician Seki Kowa, also known as Takakazu, in the 17th century. Cramer's rule of the mid-18th century seems to be the first general result on determinants. In the 19th century Sylvester suggested determinants be called 'Bezoutians' in honor of Etienne Bezout (see this newly published English translation of Bezout's old text, General Theory of Algebraic Equations). For more history, and in particular to get a glimpse of what happened in the 19th century, see this Wikipedia article, which also cites a 3rd century BC Chinese text!
Now the point of introducing determinants was not to show that matrices (over algebraically closed fields) have eigenvalues; this came later, of course. There are equally interesting applications of determinants, however (see below for just a few), and I don't think postponing a proper introduction of determinants to the graduate years would be wise.
As for the existence of eigenvalues for matrices, or more generally the non-emptiness of the spectrum of an element of an algebra, I know at least two very general situations where one can show that the spectrum of any element is non-empty. Both work in infinite-dimensional situations, and neither uses a determinant function (which may not exist after all). When the algebra is a complex unital Banach algebra this result is due to Gelfand. In fact the whole notion of a Banach algebra is due to Gelfand from the late 1930's, who called them 'normed rings'. This was then used in a crucial way, by Gelfand and Naimark, in the proof of their celebrated theorem on the structure of commutative C*-algebras. A second case is an algebra over an algebraically closed field where the dimension of the algebra is less than the cardinality of the field. The argument in this case is similar to the proof of the existence of eigenvalues used in the 'Down with Determinants' paper mentioned above. Notice that the dimension of the algebra need not be finite now. This added generality is not a luxury and comes in quite handy in proving things like Hilbert's Nullstellensatz (for fields with an uncountable number of elements; see the first chapter of this book). Notice that the Nullstellensatz is an algebraic analogue of the Gelfand-Naimark theorem, and both results are pivotal for the general philosophy of noncommutative geometry.
On the educational side, see here for a nice story on 'how to compute determinants', or perhaps how not to compute a determinant! Halmos' old (1940's?) book on finite dimensional vector spaces has gone through many editions but is still highly readable and one of my favorites. It is written with a view towards functional analysis and operator theory on Hilbert space, so one learns early enough about the spectral theorem, polar decomposition, and determinants, which are defined using the exterior algebra and volume forms. It does not cover much in multilinear algebra though. Another favorite of mine is Manin and Kostrikin's more modern and wonderful book Linear Algebra and Geometry. It covers a lot of topics well beyond the standard stuff on canonical forms: things such as multilinear algebra, determinants and Pfaffians, the reverse triangle inequality in Minkowski space and the twin paradox, foundations of quantum mechanics, and fast multiplication algorithms. It even has a small item on Feynman rules in QFT! Another favorite of mine is Prasolov's Problems and Theorems in Linear Algebra. You should also definitely check P. Cartier's A Course on Determinants (in: Conformal Invariance and String Theory, 1987) for a nice and modern survey of determinants. It covers much, including things like infinite dimensional Fredholm determinants and superdeterminants.
In the 19th century, when people wanted to prove that the sum and product of algebraic numbers is again an algebraic number, they would use determinants. Nowadays of course this is done using vector spaces and the formula dim_E K = (dim_E F)(dim_F K) for field extensions E ⊂ F ⊂ K. But imagine you really want to find a polynomial P with rational coefficients such that P(a+b)=0, or P(ab)=0, assuming a and b are algebraic. What would you do? Here is a modern adaptation of the classical method to find such a polynomial P quickly. Notice that a complex number is algebraic iff it is the eigenvalue of a matrix with rational coefficients. If a and b are eigenvalues of A and B, then a+b is an eigenvalue of I \otimes B + A \otimes I (remember the Hamiltonian of a combined system of two particles in QM?) and ab is of course an eigenvalue of A \otimes B. We should then just compute the characteristic polynomials of these matrices, which is pretty straightforward. A similar proof applies to show that if a and b are algebraic integers then so are ab and a+b.
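This recipe actually runs. In the hedged, stdlib-only Python sketch below, A and B are companion-style matrices with eigenvalues ±√2 and ±√3, and the characteristic polynomial of A⊗I + I⊗B, computed here by the Faddeev-LeVerrier trace recursion, comes out as x^4 - 10x^2 + 1, a rational polynomial with a + b = √2 + √3 as a root:

```python
from fractions import Fraction

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def charpoly(M):
    """Faddeev-LeVerrier recursion: coefficients of det(xI - M), leading 1 first."""
    n = len(M)
    coeffs = [Fraction(1)]
    N = [[Fraction(0)] * n for _ in range(n)]
    for k in range(1, n + 1):
        N = matmul(M, N)
        for i in range(n):
            N[i][i] += coeffs[k - 1]          # N_k = M N_{k-1} + c_{k-1} I
        MN = matmul(M, N)
        coeffs.append(-sum(MN[i][i] for i in range(n)) / k)   # c_k = -tr(M N_k)/k
    return coeffs

def kron(X, Y):
    ny = len(Y)
    m = len(X) * ny
    return [[X[i // ny][j // ny] * Y[i % ny][j % ny] for j in range(m)]
            for i in range(m)]

I2 = [[1, 0], [0, 1]]
A = [[0, 2], [1, 0]]   # eigenvalues +-sqrt(2)
B = [[0, 3], [1, 0]]   # eigenvalues +-sqrt(3)

# The Kronecker sum A (x) I + I (x) B has eigenvalues a + b.
S = [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(kron(A, I2), kron(I2, B))]
print([int(c) for c in charpoly(S)])   # [1, 0, -10, 0, 1]: x^4 - 10x^2 + 1
```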
Determinants and Physics: Pauli's exclusion principle in quantum mechanics can be formulated mathematically as saying that if the Hilbert space of states of a fermion is H then the Hilbert space of states of a pair of fermions should be H /\H, the exterior product of H with itself. By the same principle the Hilbert space of n fermions should be the n-th exterior power of H, /\^n H. For bosons, on the other hand, the appropriate n particle Hilbert space is the n-th symmetric power S^n H. Now, mandated by the special theory of relativity, quantum field theory and the second quantization of fields tell us that the number of particles can not remain constant and so, in the case of fermions, we go over to the so called fermionic Fock space
/\ H = C + H + H/\H + H/\H/\H + ...
An operator A: H -> H induces an operator
/\ A : /\ H -> /\ H, (/\ A)(v_1 /\ v_2 /\ ... /\ v_n) = Av_1 /\ Av_2 /\ ... /\ Av_n
Similarly for bosons the appropriate Hilbert space is the bosonic Fock space
SH = C + H + S^2 H + S^3 H + ...
with the associated operator
SA: SH -> SH
Now if H is n-dimensional, /\^n H is one dimensional and we see that the exterior algebra gives us a formula/definition for the determinant in terms of trace:
Det (A) = Tr (/\^n A)
We see a direct link here between the exterior algebra, fermions, and determinants. The above formula is the beginning of a series of formulas relating the determinant and trace. For example the beautiful MacMahon Master Theorem states that
(1) Det (1+tA) =\sum t^k Tr (/\^k A)
and if we put t=-1 we obtain
(2) Det (1-A) =\sum (-1)^k Tr (/\^k A)
To prove this one can first assume A is diagonal(izable) in which case the proof is really easy and then use the fact that diagonalizable matrices are dense among all matrices, plus continuity and invariance under conjugation of both sides. There are of course more algebraic proofs that work over any commutative ring.
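For a small matrix, formula (1) can also be checked directly, using the fact that Tr(/\^k A) equals the sum of the k-by-k principal minors of A. A stdlib-only Python sketch (the 3x3 matrix is arbitrary):

```python
from itertools import combinations, permutations
from fractions import Fraction

def det(M):
    """Determinant via the Leibniz permutation expansion (fine for small n)."""
    n = len(M)
    total = Fraction(0)
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):            # count inversions to get the sign
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        term = Fraction(1)
        for i in range(n):
            term *= M[i][perm[i]]
        total += sign * term
    return total

def trace_wedge(M, k):
    """Tr(Lambda^k A) = sum of all k-by-k principal minors of A."""
    n = len(M)
    return sum(det([[M[i][j] for j in S] for i in S])
               for S in combinations(range(n), k))

A = [[2, 1, 0], [0, 3, 1], [1, 0, 1]]
t = Fraction(1, 2)
lhs = det([[Fraction(int(i == j)) + t * A[i][j] for j in range(3)]
           for i in range(3)])                       # Det(1 + tA)
rhs = sum(t**k * trace_wedge(A, k) for k in range(4))
print(lhs == rhs)   # True
```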
We would like to write the RHS of (2) as the trace of /\ A acting on /\ H, but because of signs it can not be the usual trace. Instead we can invoke the fact that the fermionic Fock space /\ H is a super vector space graded by the degree of tensors. We can define the supertrace of an operator A as Tr_s (A) = Tr (A^+) - Tr (A^-), with A^+ and A^- designating the even and odd parts of A, and with this understood we can write (2) as
(3) Det (1-A) =Tr_s (/\ A)
There is a similar formula for the bosonic second quantization and this time we have
(4) [Det (1-tA)]^-1 =\sum t^k Tr (S^k A)
If we put t=1 we obtain the beautiful formula
(5) [Det (1-A)]^-1 = Tr (SA)
Combining formulas (3) and (5) we obtain the boson-fermion formula
(6) Tr_s(/\ A) Tr (SA)=1
which puts in duality the exterior algebra with symmetric algebra. I have completely bypassed the convergence issues which are relevant even when H is finite dimensional since the bosonic Fock space SH is infinte-dimensional but this can be managed relatively easily....
For a particular choice of H and A, Bost and Connes, in their paper Hecke algebras, type III factors and phase transitions with spontaneous symmetry breaking in number theory, Selecta Math. (N.S.) 1 (1995), no. 3, 411-457, show right in the beginning that the above formula (5) gives the Euler product formula for the zeta function (see also page 529 of this book). In fact their paper starts by quantizing the set of prime numbers: they find a natural operator whose bosonic second quantization has the set of primes as its spectrum (I leave to you as an exercise the task of finding this operator), and this is the beginning of their long journey towards understanding the statistical behaviour of primes using tools of quantum mechanics and noncommutative geometry.
Another interesting issue with regard to the boson-fermion duality formula (6) is its relation with Koszul duality. The Koszul dual of the quadratic algebra /\ H is the symmetric algebra S H. I suppose it is possible to give a proof of (6) starting from Koszul duality but this will need another post and I certainly hope others who know more will jump in and enlighten us. The same with q-analogues of (6). There are many other interesting things that remain to be said things like Pfaffians as fermionic Gaussian integrals and their supersymmetric and Clifford algebra analogues, regularized determinants, etc. Hopefully others will comment on aspects of determinants and their applications in their work.
Enter Trace: Unlike the determinant, the trace has a nice straightforward extension to the noncommutative world. In fact the trace is the real queen of noncommutative geometry! For any algebra A, commutative or not, we have a trace map
Tr: M_n(A) -> T(A)
From the algebra of n by n matrices with entries in A to the quotient of A by the linear span of commutators in A. It is, by definition, the sum of the elements on the main diagonal followed by the quotient map. It has the trace property in the sense that it is linear (over C) and satisfies Tr(xy) = Tr(yx) for all x and y. It is also easy to see that Tr is indeed the universal trace on A, in the sense that any other trace tr: A -> V factors through it. This can be extended a bit. Let E be a finitely generated projective right A-module and let End_A (E) denote the algebra of A-linear maps from E to E. Then there is a trace map
Tr: End_A (E) -> T(A).
It can be defined by first embedding E into a finite and free A-module and then using the above trace. Alternatively one can use the fact that End_A (E) =E\otimes_A E* and then use the standard dual pairing between E and its dual E* to land in A and then apply the quotient map.
This allows us to extend the notion of dimension from commutative to noncommutative geometry. The word dimension is loaded with many meanings and interpretations and here we just look at one of those. Let us fix a C-valued trace tr on A. The classical formula
Dim (E) = tr (id_E)
relates the dimension of a vector space or the fiber dimension of a vector bundle to trace and is integer valued in that context. When we use this formula as the definition of the dimension of a finite projective module (aka noncommutative vector bundles) we should be prepared to see non-integral dimensions! (See page 361 of Alain's 1994 book for an example).
Remark: Apart from Hausdorff dimension, this sort of continuous dimension was first investigated by von Neumann in a purely algebraic and synthetic manner in his book Continuous Geometry, but then his dream of a continuous geometry was, partially, realized in his theory of von Neumann algebras. We say partially because it covered only the measure theoretic aspects of the noncommutative world. The full dream was only realized by the advent of NCG!
Fabien Besnard said...
Thank you for this nice survey of applications of determinants. As for their introduction in pregraduate courses, I had to think about this since I am in charge of a first year algebra course and my students are generally not very fond of algebra. I came to the conclusion that determinants were indeed needed, not only for reduction theory, but also for systems of equations and the Jacobian formula. I agree with the author that determinants can be unintuitive, especially if they are introduced through the one-dimensionality of the space of n-linear forms in n dimensions. So I chose to take a 'physicist' approach, and I introduce them as oriented n-volumes. Taking the intuitive axioms that the oriented n-volume of an n-parallelotope should be homogeneous to a length^n, should be invariant up to sign under permutation of the sides, should be zero if the parallelotope is flat (lives inside an (n-1)-dimensional subspace), and should be one for the unit cube, one easily obtains the usual formula for the determinant of a square matrix. In my experience teaching this for three years, I did not notice any particular difficulty among the students with this concept (that is, no more difficulty than with other concepts in algebra...).
masoud said...
Dear Fabien,
Thanks for your comments. I agree. The approach that you suggest I think is the best. As I vaguely remember, this is actually how the determinant is introduced in many textbooks including Lang's linear algebra or Hoffman-Kunze. Of course the algebra is always the same (alternating multilinear forms) and as you said adding geometric interpretations like being a measure of volume change will help students to understand it better.
CarlBrannen said...
On the subject of the use of the determinant in eigenvalues and eigenvectors, already, having read just the first 4 pages of the "down with determinant" article, I am in such agreement that I have to post.
If a student is asked to find the Pauli spinor for spin-1/2 in the (a,b,c) direction, the first step is easy: write down the spin operator in that direction S. He is likely then to look for its eigenvectors. This is for the notation where S has eigenvalues +-1 rather than +-1/2.
The easy solution is to note that (1+S) is an eigenmatrix of S with eigenvalue 1. Therefore any nonzero column of (1+S) is an eigenvector. This works for any Clifford algebra, is easy to remember, and is computationally efficient. And it avoids getting anywhere near the need to use determinants.
|
|
Approximation of functions belonging to the weighted L(a, M, omega)-class by trigonometric polynomials
New Zealand Journal of Mathematics
Vol. 48, (2018), Pages 11-23
Department of Mathematics and Science Education,
Faculty of Education,
Mus Alparslan University,
49250, Mus,
Turkey.
Abstract In this work the approximation of functions by the means $t_{n}(f;x)$, $N_{n}^{\beta }(f;x)$ and $R_{n}^{\beta }(f;x)$ of their trigonometric Fourier series in weighted Orlicz spaces with Muckenhoupt weights is studied.
Keywords Trigonometric approximation, Orlicz space, weighted Orlicz space, Boyd indices, Muckenhoupt weight, weighted $L(\alpha ,M,\omega )$ class, modulus of continuity.
Classification (MSC2000) 41A17, 41A25, 41A27, 42A10, 42A50, 46E30, 46E35, 26A33.
|
|
# What is wrong with my trivial solution to finding a cubic polynomial with roots $\cos{2\pi/7}$, $\cos{4\pi/7}$, $\cos{6\pi/7}$?
I came across a problem in a book recently that asked to find a cubic polynomial with roots $\cos{2\pi/7}, \cos{4\pi/7}, \cos{6\pi/7}$. There were no extra conditions on the problem. It just asks you to find a cubic polynomial with those roots. It was marked as one of the harder problems, so I was kind of confused because it seems obvious that a polynomial like $$\left(x-\cos\frac{2\pi}{7}\right)\left(x-\cos\frac{4\pi}{7}\right)\left(x-\cos\frac{6\pi}{7}\right)$$ should work.
But when I looked up the solution in the solutions manual, it turns out that you can use an obscure trig identity for $\cos{7\theta}$ to eventually construct the polynomial $$8x^3+4x^2-4x-1$$
I'm really lost. What's wrong with my trivial example?
• Nothing wrong. You will get the same answer, but you will need to do trigonometric transformations to get numbers. – Sonal_sqrt May 23 '18 at 3:23
• The question was assuming, without stating it, that the polynomial will have integer coefficients. It will be exactly 8 times your polynomial, but you have to show that the coefficients turn out to be the same. – fleablood May 23 '18 at 3:48
• But as N8tron's answer says, the text made the mistake of not specifying that the cubic needs integer (or at least rational) coefficients. This is entirely the text's fault, not yours. – fleablood May 23 '18 at 3:50
• math.stackexchange.com/questions/638874/… – lab bhattacharjee May 23 '18 at 7:19
My favorite example of when this happens is when people say $\pi$ is transcendental because it's not solution of a polynomial equation. I usually point out
$$x-\pi$$ is a polynomial and then preach the importance of correctly qualifying expressions.
The easiest approach is to take $\omega$ to be a primitive seventh root of unity, any one of $$e^{2 \pi i / 7} \; , \; \; e^{4 \pi i / 7} \; , \; \; e^{6 \pi i / 7},$$ so that $\omega^7 = 1$ but $\omega \neq 1,$ and $$\omega^6 + \omega^5 + \omega^4 + \omega^3 + \omega^2 + \omega + 1 = 0.$$ Next, for any of the three, take $$x = \omega + \frac{1}{\omega}.$$ First, $$x^3 = \omega^3 + 3 \omega + \frac{3}{\omega} + \frac{1}{\omega^3} \; \; , \; \;$$ $$x^2 = \omega^2 + 2 + \frac{1}{\omega^2} \; \; , \; \;$$ $$-2 x = -2 \omega - \frac{2}{\omega} \; \; , \; \;$$ $$-1 = -1 \; \; .$$ Adding these, $$x^3 + x^2 - 2 x - 1 = \; \; \frac{\omega^6 + \omega^5 + \omega^4 + \omega^3 + \omega^2 + \omega + 1}{\omega^3}\; \; = \; 0.$$
In each case, we have $x = 2 \cos (2 k \pi / 7),$ so taking $x = 2c$ we find $8c^3 + 4 c^2 - 4 c - 1 = 0.$
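Both answers can be reconciled numerically: each cosine is indeed a root of the integer-coefficient cubic, and expanding the "trivial" product gives exactly that cubic divided by 8. A quick Python check:

```python
from math import cos, pi

# The integer-coefficient cubic from the solutions manual:
p = lambda x: 8 * x**3 + 4 * x**2 - 4 * x - 1

r = [cos(2 * pi * k / 7) for k in (1, 2, 3)]
for c in r:
    print(abs(p(c)) < 1e-12)   # True: each cosine is a root

# Elementary symmetric functions of the roots give the coefficients
# of 8*(x - r1)(x - r2)(x - r3) = 8x^3 - 8e1 x^2 + 8e2 x - 8e3:
e1 = sum(r)
e2 = r[0] * r[1] + r[0] * r[2] + r[1] * r[2]
e3 = r[0] * r[1] * r[2]
print(round(-8 * e1, 6), round(8 * e2, 6), round(-8 * e3, 6))   # 4.0 -4.0 -1.0
```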
|
|
# Co-efficient of Variation Meaning and How to Use It
## What Is the Co-efficient of Variation (CV)?
The co-efficient of variation (CV) is a statistical measure of the dispersion of data points in a data series around the mean. The co-efficient of variation represents the ratio of the standard deviation to the mean, and it is a useful statistic for comparing the degree of variation from one data series to another, even if the means are drastically different from one another.
### Key Takeaways
• The co-efficient of variation (CV) is a statistical measure of the relative dispersion of data points in a data series around the mean.
• It represents the ratio of the standard deviation to the mean.
• The CV is useful for comparing the degree of variation from one data series to another, even if the means are drastically different from one another.
• In finance, the co-efficient of variation allows investors to determine how much volatility, or risk, is assumed in comparison to the amount of return expected from investments.
• The lower the ratio of the standard deviation to the mean return, the better the risk-return tradeoff.
## Understanding the Co-efficient of Variation (CV)
The co-efficient of variation shows the extent of variability of data in a sample in relation to the mean of the population.
In finance, the co-efficient of variation allows investors to determine how much volatility, or risk, is assumed in comparison to the amount of return expected from investments. The lower the ratio of the standard deviation to the mean return, the better the risk-return tradeoff.
CVs are most often used to analyze dispersion around the mean, but quartile, quintile, or decile CVs can also be used to understand variation around the median or the 10th percentile, for example.
The co-efficient of variation formula or calculation can be used to determine the deviation between the historical mean price and the current price performance of a stock, commodity, or bond, relative to other assets.
## Co-efficient of Variation (CV) Formula
Below is the formula for how to calculate the co-efficient of variation:
\begin{aligned} &\text{CV} = \frac { \sigma }{ \mu } \\ &\textbf{where:} \\ &\sigma = \text{standard deviation} \\ &\mu = \text{mean} \\ \end{aligned}
To calculate the CV for a sample, the formula is:
$$CV = \frac{s}{\bar{x}} \times 100$$
where:
$s$ = sample standard deviation
$\bar{x}$ = sample mean
Multiplying the co-efficient by 100 is an optional step to get a percentage rather than a decimal.
### Co-efficient of Variation (CV) in Excel
The co-efficient of variation formula can be performed in Excel by first using the standard deviation function for a data set. Next, calculate the mean by using the Excel function provided. Since the co-efficient of variation is the standard deviation divided by the mean, divide the cell containing the standard deviation by the cell containing the mean.
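For readers outside Excel, the same calculation can be sketched in a few lines of Python (the data values here are made up purely for illustration; `statistics.stdev` computes the sample standard deviation, matching Excel's STDEV.S):

```python
import statistics

data = [1.0, 2.0, 3.0]  # toy data, for illustration only

# CV = sample standard deviation divided by the sample mean
cv = statistics.stdev(data) / statistics.mean(data)
cv_percent = cv * 100  # optional: express as a percentage
```

For this toy data set the mean is 2.0 and the sample standard deviation is 1.0, so the CV is 0.5, or 50%.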
## Co-efficient of Variation (CV) vs. Standard Deviation
The standard deviation is a statistic that measures the dispersion of a data set relative to its mean. It is used to determine the spread of values in a single data set rather than to compare different units.
When we want to compare two or more data sets, the co-efficient of variation is used. The CV is the ratio of the standard deviation to the mean. And because it’s independent of the unit in which the measurement was taken, it can be used to compare data sets with different units or widely different means.
In short, the standard deviation measures how far data values lie, on average, from the mean, whereas the co-efficient of variation measures the ratio of the standard deviation to the mean.
The co-efficient of variation can be useful when comparing data sets with different units or widely different means.
That includes when the risk/reward ratio is used to select investments. For example, an investor who is risk-averse may want to consider assets with a historically low degree of volatility relative to the return, in relation to the overall market or its industry. Conversely, risk-seeking investors may look to invest in assets with a historically high degree of volatility.
When the mean value is close to zero, the CV becomes very sensitive to small changes in the mean. Using the example above, a notable flaw would be if the expected return in the denominator is negative or zero. In this case, the co-efficient of variation could be misleading.
If the expected return in the denominator of the co-efficient of variation formula is negative or zero, then the result could be misleading.
## How Can the Co-efficient of Variation Be Used?
The co-efficient of variation is used in many different fields, including chemistry, engineering, physics, economics, and neuroscience.
Other than helping when using the risk/reward ratio to select investments, it is used by economists to measure economic inequality. Outside of finance, it is commonly applied to audit the precision of a particular process and arrive at a perfect balance.
## Example of Co-efficient of Variation (CV) for Selecting Investments
For example, consider a risk-averse investor who wishes to invest in an exchange-traded fund (ETF), which is a basket of securities that tracks a broad market index. The investor selects the SPDR S&P 500 ETF, the Invesco QQQ ETF, and the iShares Russell 2000 ETF. Then, they analyze the ETFs’ returns and volatility over the past 15 years and assume that the ETFs could have similar returns to their long-term averages.
For illustrative purposes, the following 15-year historical information is used for the investor’s decision:
• If the SPDR S&P 500 ETF has an average annual return of 5.47% and a standard deviation of 14.68%, the SPDR S&P 500 ETF’s co-efficient of variation is 2.68.
• If the Invesco QQQ ETF has an average annual return of 6.88% and a standard deviation of 21.31%, the QQQ’s co-efficient of variation is 3.10.
• If the iShares Russell 2000 ETF has an average annual return of 7.16% and a standard deviation of 19.46%, the IWM’s co-efficient of variation is 2.72.
Based on the approximate figures, the investor could invest in either the SPDR S&P 500 ETF or the iShares Russell 2000 ETF, since the risk/reward ratios are approximately the same and indicate a better risk-return tradeoff than the Invesco QQQ ETF.
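Plugging the article's illustrative figures into the formula reproduces the quoted ratios (a quick sketch; the returns and standard deviations are the example numbers above, nothing new):

```python
# (average annual return %, standard deviation %) from the example above
etfs = {
    "SPDR S&P 500": (5.47, 14.68),
    "Invesco QQQ": (6.88, 21.31),
    "iShares Russell 2000": (7.16, 19.46),
}

# CV = standard deviation / mean return, rounded to two decimals
cvs = {name: round(sd / mean, 2) for name, (mean, sd) in etfs.items()}
```

The resulting dictionary holds 2.68, 3.10, and 2.72 respectively, matching the figures quoted in the bullet list.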
## What does the co-efficient of variation tell us?
The co-efficient of variation (CV) indicates the size of a standard deviation in relation to its mean. The higher the co-efficient of variation, the greater the dispersion level around the mean.
## What is considered a good co-efficient of variation?
That depends on what you’re looking at and comparing. No set value can be considered universally “good.” However, generally speaking, it is often the case that a lower co-efficient of variation is more desirable, as that would suggest a lower spread of data values relative to the mean.
## How do I calculate the co-efficient of variation?
To calculate the co-efficient of variation, first find the mean, then the sum of squares, and then work out the standard deviation. With that information at hand, it is possible to calculate the co-efficient of variation by dividing the standard deviation by the mean.
## The Bottom Line
The co-efficient of variation is a simple way to compare the degree of variation from one data series to another. It can be applied to pretty much anything, including the process of picking suitable investments.
Generally speaking, a high CV indicates that the group is more variable, whereas a low value would suggest the opposite.
|
|
1.
Sustainability ; 14(7):4290, 2022.
Article in English | MDPI | ID: covidwho-1776341
ABSTRACT
Enterprises performing complex product servitization are more vulnerable to the 2019 coronavirus disease (COVID-19) pandemic because of their large number of suppliers and wide coverage, among other things. The present research focuses on how to promote the sustainable innovation of complex product servitization. We investigate the factors influencing the sustainable innovation of complex product servitization, based on the characteristics of product servitization and by combining the definitions of product servitization. We find that inadequate innovation ability and poor technical research and development (R&D) competence are the primary concerns in the sustainable innovation of complex product servitization. Specific to innovation ability improvement, the sustainable innovation of complex product servitization must follow an innovation-driven development strategy, a hard power cultivation strategy, and a soft power cultivation strategy. In terms of technical R&D competence enhancement, technological innovation strategies, integrated outsourcing of technical R&D competence, and independent improvement of technical R&D competence must be implemented to facilitate the sustainable innovation of complex product servitization.
2.
Journal of Tropical Medicine ; 21(5):552-555, 2021.
Article in Chinese | GIM | ID: covidwho-1743524
ABSTRACT
Objective: The epidemiological characteristics of corona virus disease 2019 (COVID-19) in Dongguan were analyzed to provide reference for epidemic prevention and control.
3.
EuropePMC; 2022.
Preprint in English | EuropePMC | ID: ppcovidwho-329918
ABSTRACT
Background: Multi-agent simulation is an essential technique for exploring complex systems. In research on contagious diseases, it is widely exploited to analyze spread mechanisms, especially for preventing COVID-19. Transmission dynamics and interventions for COVID-19 have been elaborately established by this method, but its computational performance is seldom considered. As it usually suffers from inadequate CPU utilization and poor data locality, optimizing the performance is challenging. Results: This paper explores approaches to optimizing multi-agent simulation for COVID-19. The focus of this work is on algorithm and data structure designs for improving performance, as well as parallelisation strategies. We propose two successive methods to optimize the computation: a case-focused iteration algorithm to improve data locality, and a thread-safe data-mapping paradigm called the hierarchical hash table to accelerate hash operations. Conclusions: Our performance results demonstrate that these methods yield significant improvements in system performance. The case-focused method reduces cache references by $\sim 90\%$ and achieves a $\times 4.3$ speedup. The hierarchical hash table further boosts computation speed by 47%. A parallel implementation with 20 threads on CPU consequently achieves a $\times 81$ speedup.
4.
EuropePMC; 2020.
Preprint in English | EuropePMC | ID: ppcovidwho-316015
ABSTRACT
Objectives: The novel coronavirus pneumonia (COVID-19), first reported in December 2019, spread rapidly worldwide. Meanwhile, there are still a large number of patients who need to undergo various surgical treatments. However, no consensus has been reached on whether receiving emergency or elective surgery influences perioperative mortality and complications in patients with COVID-19. Therefore, we used meta-analysis to explore the impact of COVID-19 on patients' perioperative mortality and complications, aiming to provide evidence for clinical decision-making. Methods: We searched PubMed, Embase, Web of Science, and the Wan Fang database from December 2019 to July 2020 to collect clinical trials on the impact of COVID-19 on perioperative mortality and complications. Following the Cochrane systematic review method, the data were meta-analyzed with RevMan 5.3 software. Results: Eight studies involving 2037 patients, 261 (12.81%) with COVID-19 and 1776 (87.19%) without, were included. The meta-analysis showed that, comparing the COVID-19 group with the non-COVID-19 group, perioperative mortality and postoperative pneumonia increased in the COVID-19 group (OR: 3.84, 95% CI: 2.10-7.02, I² = 46%, P < 0.0001), (OR: 33.42, 95% CI: 15.49-72.07, I² = 0%, P < 0.00001). The number of cases of postoperative fever was also significantly higher with COVID-19. There were no significant differences in other postoperative complications or ICU admission between the two groups. Conclusions: In our study, the risk of perioperative death and postoperative pulmonary complications is significantly increased in patients with COVID-19. These data suggest that consideration should be given to postponing non-critical procedures and promoting nonoperative treatment to delay or avoid the need for surgery during the COVID-19 pandemic. Funding Statement: Natural Science Foundation of China, Grant number: 31760327/81760191. Declaration of Interests: The authors declare no competing interests.
5.
EuropePMC; 2020.
Preprint in English | EuropePMC | ID: ppcovidwho-315329
ABSTRACT
Background: The Coronavirus Disease 2019 (COVID-19) caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has become a pandemic, posing a serious threat to public health worldwide. Whether survivors of COVID-19 pneumonia may be at risk of pulmonary fibrosis is still unknown. Methods: This study involves 462 laboratory-confirmed patients with COVID-19 who were admitted to Shenzhen Third People’s Hospital. A total of 457 patients underwent thin-section chest CT scans during the hospitalization or after discharge to identify the pulmonary lesion. A total of 287 patients were followed up from 90 days to 150 days after the onset of the disease. Findings: 397 (86.87%), 311 (74.40%), 222 (79.56%), 141 (68.12%) and 49 (62.03%) patients developed pulmonary fibrosis during the 0-30, 31-60, 61-90, 91-120 and >120 days after onset, respectively. Reversal of pulmonary fibrosis was found in 18 (4.53%), 61 (19.61%), 40 (18.02%), 54 (38.30%) and 24 (48.98%) COVID-19 patients during the 0-30, 31-60, 61-90, 91-120 and >120 days after onset, respectively. It was observed that age, BMI, fever, and highest PCT were predictive factors for sustaining fibrosis even after 90 days from onset. Only a fraction of COVID-19 patients suffered abnormal lung function after 90 days from onset. Interpretation: Long-term pulmonary fibrosis was more likely to develop in patients with older age, high BMI, severe/critical condition, fever, a long time to turn the viral RNA negative, pre-existing disease and delay to admission. Fibrosis developed in COVID-19 patients could be reversed in about half of the patients after 120 days from onset.
The pulmonary function of most COVID-19 patients with pulmonary fibrosis could return to normal condition after three months from onset. Funding Statement: Shenzhen Science and Technology Research and Development Project (202002073000001 and 202002073000002), Shenzhen Fund for Guangdong Provincial High-level Clinical Key Specialties (SZGSP011). Declaration of Interests: The authors declare no competing interests. Ethics Approval Statement: This study was conducted at Shenzhen Third People's Hospital and approved by the Ethics Committees; each patient gave written informed consent.
6.
EuropePMC; 2021.
Preprint in English | EuropePMC | ID: ppcovidwho-315328
ABSTRACT
Background: Thousands of Coronavirus Disease 2019 (COVID-19) patients have been discharged from hospitals; persistent follow-up studies are required to evaluate the prevalence of post-COVID-19 fibrosis. Methods: This study involves 462 laboratory-confirmed patients with COVID-19 who were admitted to Shenzhen Third People’s Hospital from January 11, 2020 to April 26, 2020. A total of 457 patients underwent thin-section chest CT scans during the hospitalization or after discharge to identify the pulmonary lesion. A total of 287 patients were followed up from 90 days to 150 days after the onset of the disease, and lung function tests were conducted about three months after the onset. The risk factors affecting the persistence of pulmonary fibrosis were identified through regression analysis, and a prediction model of the persistence of pulmonary fibrosis was established. Results: Parenchymal bands, irregular interfaces, reticulation and traction bronchiectasis were the most common CT features in all COVID-19 patients. During the 0-30, 31-60, 61-90, 91-120 and >120 days after onset, 86.87%, 74.40%, 79.56%, 68.12% and 62.03% of patients developed pulmonary fibrosis and 4.53%, 19.61%, 18.02%, 38.30% and 48.98% of patients reversed pulmonary fibrosis, respectively. It was observed that age, BMI, fever, and highest PCT were predictive factors for sustaining fibrosis even after 90 days from onset. A predictive model of the persistence of pulmonary fibrosis was developed based on the logistic regression method, with an accuracy, PPV, NPV, sensitivity and specificity of 76%, 71%, 79%, 67%, and 82%, respectively. More than half of COVID-19 patients revealed abnormal lung function after 90 days from onset, and the ratio of abnormal lung function did not differ on a statistically significant level between the fibrotic and non-fibrotic groups.
Conclusions: Persistent pulmonary fibrosis was more likely to develop in patients with older age, high BMI, severe/critical condition, fever, a long time to turn the viral RNA negative, pre-existing disease and delay to admission. Fibrosis developed in COVID-19 patients could be reversed in about a third of the patients after 120 days from onset. The pulmonary function of less than half of COVID-19 patients could return to normal condition after three months from onset. An effective prediction model with an average Area Under the Curve (AUC) of 0.84 was established to predict the persistence of pulmonary fibrosis in COVID-19 patients for early diagnosis.
7.
EuropePMC; 2020.
Preprint in English | EuropePMC | ID: ppcovidwho-324160
ABSTRACT
The unprecedented coronavirus disease 2019 (COVID-19) epidemic has created a worldwide public health emergency, and there is an urgent need to develop an effective vaccine to control this severe infectious disease. Here, we found that a single vaccination with a replication-defective human type 5 adenovirus encoding the SARS-CoV-2 spike protein (Ad5-nCoV) protected mice completely against SARS-CoV-2 infection in the upper and lower respiratory tracts. Additionally, a single vaccination with Ad5-nCoV protected ferrets from SARS-CoV-2 infection in the upper respiratory tract. This study suggests that a combination of intramuscular and mucosal vaccination may provide desirable protective efficacy and that different Ad5-nCoV delivery modes are worth further investigation in human clinical trials.
8.
EuropePMC; 2020.
Preprint in English | EuropePMC | ID: ppcovidwho-323544
ABSTRACT
The coronavirus disease-19 (COVID-19) caused by SARS-CoV-2 infection can lead to a series of clinical settings from non-symptomatic viral carriers/spreaders to severe illness characterized by acute respiratory distress syndrome (ARDS)1,2. A sizable part of patients with COVID-19 have mild clinical symptoms at the early stage of infection, but the disease progression may become quite rapid in the later stage with ARDS as the common manifestation and followed by critical multiple organ failure, causing a high mortality rate of 7-10% in the elderly population with underlying chronic disease1-3. The pathological investigation in the lungs and other organs of fatal cases is fundamental for the mechanistic understanding of severe COVID-19 and the development of specific therapy in these cases. Gross anatomy and molecular markers allowed us to identify, in two fatal patients subject to necropsy, the main pathological features such as exudation and hemorrhage, epithelium injuries, infiltration of macrophages and fibrosis in the lungs. The mucous plug with fibrinous exudate in the alveoli and the activation of alveolar macrophages were characteristic abnormalities. These findings shed new insights into the pathogenesis of COVID-19 and justify the use of interleukin 6 (IL6) receptor antagonists and convalescent plasma with neutralizing antibodies against SARS-CoV-2 for severe patients.Authors Chaofu Wang, Jing Xie, Lei Zhao, Xiaochun Fei, Heng Zhang, and Yun Tan contributed equally to this work. Authors Chaofu Wang, Jun Cai, Rong Chen, Zhengli Shi, and Xiuwu Bian jointly supervised this work.
9.
Preprint in English | medRxiv | ID: ppmedrxiv-22269510
ABSTRACT
Since the Omicron variant of SARS-CoV-2 was first detected in South Africa (SA), it has come to dominate in the United Kingdom (UK) in Europe and the United States (USA) in North America. A prominent feature of this variant is the accumulation of spike protein mutations, particularly at the receptor binding domain (RBD). These RBD mutations essentially contribute to antibody resistance of current immune approaches. During global spillover, combinations of RBD mutations may in fact exist and synergistically contribute to antibody resistance. Using three geographically stratified genome-wide association studies (GWAS), we observed that RBD combinations exhibited a geographic pattern and were genetically associated, such as five common mutations in both UK and USA Omicron, and six or two specific mutations in UK or USA Omicron, respectively. The UK-specific RBD mutations can be further classified into two separate sub-groups of combination based on linkage disequilibrium analysis. Functional analysis indicated that the common RBD combinations (fold change, -11.59) alongside UK- or USA-specific mutations significantly reduced neutralization (fold change, -38.72, -18.11). As the RBD overlaps with the angiotensin converting enzyme 2 (ACE2) binding motif, protein-protein contact analysis indicated that the common RBD mutations enhanced ACE2 binding accessibility and were further strengthened by UK- or USA-specific RBD mutations. Spatiotemporal evolution analysis indicated that UK-specific RBD mutations largely contribute to global spillover. Collectively, we have provided genetic evidence of RBD combinations and estimated their effects on antibody evasion and ACE2 binding accessibility.
10.
EuropePMC; 2021.
Preprint in English | EuropePMC | ID: ppcovidwho-296269
ABSTRACT
By largely unknown mechanisms, dysregulated gene-specific translation directly contributes to chronic inflammation-associated diseases such as sepsis and ARDS. Here, we report that G9a, a histone methyltransferase and well-regarded transcriptional repressor, non-canonically or non-epigenetically activates translation of select antimicrobial genes to promote proliferation of cytokine-producing macrophages and to impair T cell function, all hallmarks of endotoxin-tolerance (ET) related complications including sepsis, ARDS and COVID-19. Mechanistically, G9a interacts with translation regulators including METTL3, an N6-methyladenosine (m6A) RNA methyltransferase, and methylates it to cooperatively upregulate the translation of certain m6A-modified mRNAs that encode immune checkpoint and anti-inflammatory proteins. Further, translatome proteomic analysis of ET macrophages progressively treated with a G9a inhibitor identified proteins showing G9a-dependent translation that unite the networks associated with hyperinflammation and T cell dysfunction. Overall, we identified a previously unrecognized function of G9a in gene-specific translation that can be leveraged to treat ET-related chronic inflammatory diseases.
11.
Advanced Materials Technologies ; : 1, 2021.
Article in English | Academic Search Complete | ID: covidwho-1267441
ABSTRACT
As a core part of personal protective equipment (PPE), filter materials play a key role in individual protection, especially in the fight against the COVID‐19. Here, a high‐performance multiscale cellulose fibers‐based filter material is introduced for protective clothing, which overcomes the limitation of mutual exclusion of filtration and permeability in cellulose‐based filter materials. With the hierarchical biomimetic structure design and the active surface of multiscale cellulose fibers, high PM2.5 removal efficiency of ≈92% is achieved with the high moisture transmission rate of 8 kg m−2 d−1. Through a simple and effective dip‐coating and roll‐to‐roll process, the hierarchical filter materials can be made on a large scale and further fabricated into high‐quality protective clothing by industrial production equipment.
12.
SSRN; 2021.
Preprint in English | SSRN | ID: ppcovidwho-6239
13.
SciFinder; 2020.
Preprint | SciFinder | ID: ppcovidwho-5249
ABSTRACT
A review.
14.
Journal of Third Military Medical University ; 42(14):1462-1468, 2020.
Article in Chinese | GIM | ID: covidwho-890798
ABSTRACT
Objective: To evaluate suicide risk, sleep quality and psychological status of patients with coronavirus disease 2019 (COVID-19) and analyze the factors contributing to a high suicide risk in these patients.
15.
SSRN; 2020.
Preprint in English | SSRN | ID: ppcovidwho-2445
16.
CAplus; 2020.
Preprint | CAplus | ID: ppcovidwho-2035
ABSTRACT
A review on the research status and laboratory diagnosis of the new coronavirus. Reliable and quick test techniques for diagnosis are urgently needed. POCT and high-throughput screening biochips are suggested. Protection methods for laboratory staff are also summarized.
17.
SSRN; 2020.
Preprint | SSRN | ID: ppcovidwho-927
ABSTRACT
Background: Identifying immune correlates of COVID-19 disease severity is an urgent need for clinical management, vaccine evaluation and drug development. Her
18.
Preprint in English | bioRxiv | ID: ppbiorxiv-311480
ABSTRACT
19.
Preprint in English | medRxiv | ID: ppmedrxiv-20159392
ABSTRACT
Background: Since December 2019, an outbreak of coronavirus disease (COVID-19) caused by the novel coronavirus (SARS-CoV-2) has occurred. Rapid and sensitive immunoassays are urgently demanded for detecting specific antibodies to assist diagnosis, for primary screening of asymptomatic individuals, close contacts, and suspected or recovered patients of COVID-19 during the pandemic period. Methods: The recombinant receptor binding domain of the SARS-CoV-2 spike protein (S-RBD) was used as the antigen to detect specific IgM, and a mixture of recombinant nucleocapsid phosphoprotein (NP) and S-RBD was used to detect specific IgG with the newly designed quantum-dot lateral flow immunoassay strip (QD-LFIA), respectively. Results: A rapid and sensitive QD-LFIA-based portable fluorescence smartphone system was developed for detecting specific IgM/IgG to SARS-CoV-2 in 100 serum samples from COVID-19 patients and 450 plasma samples from healthy blood donors. Among the 100 COVID-19 patients previously diagnosed with NAT, 3 were severe, 35 mild and 62 recovered cases. Using QD-LFIA, 78 (78%) and 99 (99%) of the 100 COVID-19 patient serum samples were detected positive for anti-SARS-CoV-2 IgM or IgG, respectively, but only one sample (0.22%) of the 450 healthy blood donor plasmas, collected from different areas of China, was cross-reactive with S-RBD. Conclusion: An ultrasensitive and specific QD-LFIA-based portable fluorescence smartphone system was developed for detection of specific IgM and IgG to SARS-CoV-2 infection, which could be used for investigating the prevalence of, or assisting the diagnosis of, COVID-19 in humans.
20.
Preprint in English | medRxiv | ID: ppmedrxiv-20138628
ABSTRACT
Purpose: This study investigated the KAP towards COVID-19 and their influencing factors among primary and middle school students during the self-quarantine period in Beijing. Methods: This was a cross-sectional study among students from 18 primary and middle schools in Beijing during March 2020. Stratified cluster sampling was conducted. Demographic and KAP-related COVID-19 information was collected through an online questionnaire. The influencing factors were analyzed by multivariable logistic regression. Results: A total of 7,377 students were included. The overall correct rate for COVID-19 knowledge was 74.1%, while only 31.5% and 40.5% could identify the high-risk places of cross-infection and the warning body temperature, respectively. Although 94.5% of respondents believed the epidemic could be controlled, over 50% expressed various concerns about the epidemic. The compliance rates for basic preventive behaviors were all over 80%, while those for "rational and effective ventilation" (39.2%) and "dining separately" (38.6%) were low. The KAP levels differed significantly across school categories. COVID-19 knowledge (OR = 3.309, 95% CI: 2.921, 3.748) and attitude (OR = 1.145, 95% CI: 1.003, 1.308) were associated with preventive practices. In addition, female students, urban students, those with a healthy lifestyle, and those with the willingness to engage in healthcare tended to have better preventive practices. Conclusion: Most students in Beijing hold a high level of knowledge, optimistic attitudes and appropriate practices towards COVID-19. However, targeted interventions are still necessary, especially for students with high-risk characteristics. Implications and contributions: The performance and potential factors of COVID-19-related knowledge, attitudes and practices (KAP) among students in primary and middle schools were still unclear. This study investigates the characteristics and the level of KAP among students.
The results of the study may contribute to the targeted education and interventions for students.
|
|
# Particle-hole symmetry and the sign of the superconducting gap
Time-reversal (TR) symmetry can give rise to the topological insulator phase. As expected, the topological invariant depends on the TR (Pfaffian) operator.
TR symmetry combined with particle-hole symmetry leads to a topological superconductor. The topological invariant depends on the Chern number and the sign of the gap.
Is it physically correct to say that the Chern number represents the TR symmetry of the band and the sign of the gap represents the particle-hole symmetry? If yes, what is the physical intuition behind representing the particle-hole symmetry with the sign of the gap?
I believe some background information might provide some context to the general readers. For a time-reversal invariant 3D topological superconductor the topological invariant or winding number can be written as $$N_{W}=\frac{1}{2}\sum_{s}{\rm sgn}(\delta_{s})C_{s}$$ where $$C_{s}=\frac{1}{2\pi}\int_{{\rm FS}_{s}}d\Omega^{ij}\left[\partial_{i}a_{sj}({\bf k})-\partial_{j}a_{si}({\bf k})\right]$$ is a Chern-number-like quantity evaluated over the $s^{{\rm th}}$ Fermi surface (which is 2D for a 3D superconductor), $a_{si}=-{\rm i}\left\langle s{\bf k}\right|\partial/\partial k_{i}\left|s{\bf k}\right\rangle$, where $\left|s{\bf k}\right\rangle$ is a Bloch state on the $s^{\rm th}$ Fermi surface, ${\rm sgn}(\delta_{s})$ represents the sign of the gap on the $s^{{\rm th}}$ Fermi surface, and $i,j=x,y,z$. The $s$ different Fermi surfaces also need to be disconnected in the BZ. Corresponding expressions can be found in lower dimensions by a formal procedure called “dimensional reduction.” However, let’s focus on the 3D case for now.
If our system obeys time-reversal symmetry then $$\sum_{s}C_{s}=0$$ For a conventional $s$-wave superconductor, the sign, by definition, is constant throughout the BZ. This always implies $N_{W}=0$ in a time-reversal invariant system. For example, for a system with 2 disconnected Fermi surfaces, (say) we have $C_{\pm}=\pm 1$. For an $s$-wave superconductor ${\rm sgn}(\delta_{\pm})=+1$ (or $-1$ for both). Then we obviously have $$N_{W}=\frac{1}{2}(+1)(+1)+\frac{1}{2}(+1)(-1)=0$$ But if ${\rm sgn}(\delta_{\pm})=\pm 1$ (say fully gapped $p$-wave), then $$N_{W}=\frac{1}{2}(+1)(+1)+\frac{1}{2}(-1)(-1)=1$$ In summary, both cases have particle-hole symmetry. The sign of the gap simply indicates topological character.
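To make the bookkeeping concrete, the two cases above can be checked with a toy sketch of $N_W = \tfrac{1}{2}\sum_s {\rm sgn}(\delta_s) C_s$ (just the arithmetic of the formula, not a band-structure calculation):

```python
def winding_number(gap_signs, chern_numbers):
    # N_W = (1/2) * sum over Fermi surfaces of sgn(delta_s) * C_s
    return sum(s * c for s, c in zip(gap_signs, chern_numbers)) / 2

# Two Fermi surfaces with C_+/- = +/-1, as required by time reversal.
# Conventional s-wave: the gap has the same sign on both surfaces -> trivial.
nw_s_wave = winding_number([+1, +1], [+1, -1])
# Sign-changing gap (fully gapped p-wave-like) -> nontrivial.
nw_p_wave = winding_number([+1, -1], [+1, -1])
```

The s-wave case gives $N_W = 0$ and the sign-changing case gives $N_W = 1$, mirroring the worked example.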
|
|
We can recognise an isosceles triangle because it will have two sides marked with lines. The two angles formed between the base and the legs are the base angles. In this lesson we will mathematically prove congruent isosceles triangles using the Isosceles Triangle Theorem, mathematically prove the converse of the Isosceles Triangle Theorem, and connect the Isosceles Triangle Theorem to the Side Side Side Postulate and the Angle Angle Side Theorem. We are given: we just showed that the three sides of △DUC are congruent to △DCK, which means you have the Side Side Side Postulate, which gives congruence. A triangle can be said to be isosceles if it matches any of the descriptions below. The angle between the two legs is called the vertex angle. Step 2) calculate the distances. Finally, AD is the height, which means that the angle ∠ADC is a right angle, and we have a right triangle, ΔADC, whose hypotenuse we know (10) and can use to find the legs using the Pythagorean theorem, c² = a² + b². (Since it is isosceles, AB = BC.) AC² = AB² + BC²: the triangle satisfies the Pythagorean theorem. So, it is an isosceles triangle. There can be 3, 2 or no equal sides/angles; how to remember? Use the information on page 202 to explain why triangles are important in construction. A triangle is isosceles if two of its sides are the same length. If the original conditional statement is false, then the converse will also be false. The converse of the Isosceles Triangle Theorem is true! No need to plug it in or recharge its batteries -- it's right there, in your head! And using the base angles theorem, we also have two congruent angles.
Isosceles Triangle Formulas An Isosceles triangle has two equal sides with the angles opposite to them equal. You also should now see the connection between the Isosceles Triangle Theorem to the Side Side Side Postulate and the Angle Angle Side Theorem. C Program to Check Triangle is Equilateral Isosceles or Scalene Write a C Program to Check Triangle is Equilateral Isosceles or Scalene with example. Knowing the triangle's parts, here is the challenge: how do we prove that the base angles are congruent? Get better grades with tutoring from top-rated professional tutors. For example, if we know a and b we know c since c = a. An isosceles triangle is a triangle that has two equal sides and two equal angles. Local and online. A scalene triangle is a triangle that has three unequal sides. ; Each line segment of the isosceles triangle is erected as the sides of the triangle. Alphabetically they go 3, 2, none: 1. Given that ∠BER ≅ ∠BRE, we must prove that BE ≅ BR. The sides AB and BC are having equal length. Show that the triangle with vertices A (0,2); B (-3, -1); and C (-4, 3) is isosceles. We haven't covered this in class! Given All Side Lengths To use this method, you should know the length of the triangle’s base and the … Then, the triangle is equilateral only if a == b == c. A triangle is said Isosceles Triangle, if its two sides are equal. So here once again is the Isosceles Triangle Theorem: To make its converse, we could exactly swap the parts, getting a bit of a mish-mash: Now it makes sense, but is it true? Where the angle bisector intersects base ER, label it Point A. ; The points in which the straight lines are found are known as vertices. Using. In this video I have shown how we can show that a given triangle is an isosceles triangles using Pythagoras theorem if the coordinates of the three vertices are known. 
The relationship between the lateral side $$a$$, the based $$b$$ of the isosceles triangle, its area A, height h, inscribed and circumscribed radii r and R respectively are … ! C. It has 2 interior angles of equal size (ie, the same number of degrees). Isosceles triangles have equal legs (that's what the word "isosceles" means). The two angles touching the base (which are congruent, or equal) are called base angles. D. Pictorial Presentation: Sample Solution: Python Code: One thing that should immediately jump to mind is that as we have shown, in an isosceles triangle, the height to the base bisects the base, so CD=DB=x/2. That's just DUCKy! Note : An equilateral triangle is a triangle in which all three sides are equal. If the premise is true, then the converse could be true or false: For that converse statement to be true, sleeping in your bed would become a bizarre experience. Then, the triangle is isosceles … ∠ BAC and ∠ BCA are the base angles of the triangle picture on the left. Textbook solution for McDougal Littell Jurgensen Geometry: Student Edition… 5th Edition Ray C. Jurgensen Chapter 13.1 Problem 27WE. show 10 more Desperately need help with mathswatch! 1-to-1 tailored lessons, flexible scheduling. You can draw one yourself, using △DUK as a model. Since line segment BA is an angle bisector, this makes ∠EBA ≅ ∠RBA. B. A triangle is said Equilateral Triangle, if all its sides are equal. The two equal sides are marked with lines and the two equal angles are opposite these sides. Given the coordinates of the triangle's vertices, to prove that a, Triangle ABC has coordinate A(-2,3) , B (-5,-4) and C (2,-1). You can watch many more videos on :http://www.mmtutorial.com/ where I have organised the videos in different playlists Learn faster with a math tutor. That would be the Angle Angle Side Theorem, AAS: With the triangles themselves proved congruent, their corresponding parts are congruent (CPCTC), which makes BE ≅ BR. 
Here we have on display the majestic isosceles triangle, △ DU K △ D U K. You can draw one yourself, using △ DU K △ D U K as a model. We checked for instance that isosceles triangle perimeter is 4.236 in and that the angles in the golden triangle are equal to 72° and 36° - the ratio is equal to 2:2:1, indeed. If these two sides, called legs, are … To prove the converse, let's construct another isosceles triangle, △BER. The converse of a conditional statement is made by swapping the hypothesis (if …) with the conclusion (then …). The main characteristics of the isosceles triangle are as follows: It is formed by three straight lines; these straight lines will be cut two by two. It has 3 lines of symmetry. If a, b, c are three sides of triangle. Since this is an isosceles triangle, by definition we have two equal sides. So if the two triangles are congruent, then corresponding parts of congruent triangles are congruent (CPCTC), which means …. Get help fast. Characteristics of the isosceles triangle. If a, b, c are three sides of triangle. One way of proving that it is an isosceles triangle is by calculating the length of each side since two sides of equal lengths means that it is an isosceles triangle. By working through these exercises, you now are able to recognize and draw an isosceles triangle, mathematically prove congruent isosceles triangles using the Isosceles Triangles Theorem, and mathematically prove the converse of the Isosceles Triangles Theorem. While a general triangle requires three elements to be fully identified, an isosceles triangle requires only two because we have the equality of its two sides and two angles. And bears are famously selfish. Step 1) Plot Points Calculate all 3 distances. 3. Look at the two triangles formed by the median. Hash marks show sides ∠DU ≅ ∠DK, which is your tip-off that you have an isosceles triangle. Interactive simulation the most controversial math riddle ever! How do we know those are equal, too? 
Isosceles Triangle An i sosceles triangle has two congruent sides and two congruent angles. The isosceles triangle is an important triangle within the classification of triangles, so we will see the most used properties that apply in this geometric figure. An isosceles triangle has two equal sides (or three, technically) and two equal angles (or three, technically). Show that \triangle A D C is isosceles. Isosceles: means \"equal legs\", and we have two legs, right? The equal sides are called legs, and the third side is the base. Step 2) Show Distances. has 2 congruent sides and two congruent angles. We have step-by-step solutions for … Real World Math Horror Stories from Real encounters, If any 2 sides have equal side lengths, then the triangle is. geometry - Show that the triangle $ADC$ is isosceles - Mathematics Stack Exchange 0 Let K be a circle with center M and L be a circle that passes through M and intersects K in two different points A and B and let g be a line that goes through B but not through A. coordinate geometry is to use the sides. Want to see the math tutors near you? Yippee for them, but what do we know about their base angles? Thank you! You can use this calculator to determine different parameters than in the example, but remember that there are in general two distinct isosceles triangles with given area and other parameter, e.g. An isosceles triangle is a triangle with (at least) two equal sides. Step 1) Plot Points Calculate all 3 distances. Therefore, the given triangle is right-angle triangle. Any ideas on what I should do? In geometry, an isosceles triangle is a triangle that has two sides of equal length. The vertex angle is ∠ ABC Also iSOSceles has two equal \"Sides\" joined by an \"Odd\" side. Steps to Coordinate Proof. Length of (13, −2)&(9, − 8) = √(13 −9)2 + (− 2 +8)2 = √16+ 36 Since line segment BA is used in both smaller right triangles, it is congruent to itself. 
In our calculations for a right triangle we only consider 2 known sides to calculate the other 7 unknowns. What else have you got? That is the heart of the Isosceles Triangle Theorem, which is built as a conditional (if, then) statement: To mathematically prove this, we need to introduce a median line, a line constructed from an interior angle to the midpoint of the opposite side. Example 2 : Show that the following points taken in order form an isosceles triangle. Let's see … that's an angle, another angle, and a side. Unless the bears bring honeypots to share with you, the converse is unlikely ever to happen. Notice that if you can construct a unique triangle using given elements, these elements fully define a triangle. If these two sides, called legs, are equal, then this is an isosceles triangle. The two angle-side theorems are critical for solving many proofs, so when you start doing a proof, look at the diagram and identify all triangles that look like they’re isosceles. Equilateral: \"equal\"-lateral (lateral means side) so they have all equal sides 2. Mathswatch isosceles angles GCSE Maths - mathswatch edexcel paper 1 question 5 Can someone help me with a maths watch question Mathswatch marking my answer wrong when it’s right. The congruent angles are called the base angles and the other angle is known as the vertex angle. Below is an example of an isosceles triangle. Hash marks show sides ∠DU ≅ ∠DK ∠ D U ≅ ∠ D K, which is your tip-off that you have an isosceles triangle. There are three special names given to triangles that tell how many sides (or angles) are equal. You may need to tinker with it to ensure it makes sense. An isosceles triangle is a special case of a triangle where 2 sides, a and c, are equal and 2 angles, A and C, are equal. leg length. Here we have on display the majestic isosceles triangle, △DUK. Not every converse statement of a conditional statement is true. 
a= b = c Triangle Congruence Theorems (SSS, SAS, ASA), Conditional Statements and Their Converse, Congruency of Right Triangles (LA & LL Theorems), Perpendicular Bisector (Definition & Construction), How to Find the Area of a Regular Polygon. The easiest way to prove that a triangle is The angles in a triangle add up to 180, so its 5x+2+6x-10+4x+8=100, then you combine it, so its 15x=180, then divide 180 by 15, and you get 12. Property 1: In an isosceles triangle the notable lines: Median, Angle Bisector, Altitude and Perpendicular Bisector that are drawn towards the side of the BASE are equal in segment and length . After working your way through this lesson, you will be able to: Get better grades with tutoring from top-rated private tutors. The above figure shows two isosceles triangles. Then insert that into each equation. If it has, it is also an equilateral triangle. For example, a, b, and c are sides of a triangle Equilateral Triangle: If all sides of a triangle are equal, then it is an Equilateral triangle. It has 1 line of symmetry. Write a Python program to check a triangle is equilateral, isosceles or scalene. We reach into our geometer's toolbox and take out the Isosceles Triangle Theorem. Take any two arbitrary directions in the plane of the paper, and draw a small isosceles triangle abc, whose sides are perpendicular to the two directions, and consider the equilibrium of a small triangular prism of fluid, of which the triangle is the cross section. Add the angle bisector from ∠EBR down to base ER. Decide if a point is inside the shape made by a fixed-area isosceles triangle as its vertex slides down the y-axis 1 Let R be the region of the disc $x^2+y^2\leq1$ in the first quadrant. Then make a mental note that you may have to use one of the angle-side theorems for one or more of the isosceles triangles. Now we have two small, right triangles where once we had one big, isosceles triangle: △BEA and △BAR. 
The isosceles triangle theorem states that if a triangle is isosceles then the angles opposite the congruent sides are congruent. An isosceles triangle Find a tutor locally or online. Suppose in triangle ABC, {eq}\overline{AB}\cong\overline{AC}{/eq}. What do we have? Scalene: means \"uneven\" or \"odd\", so no equal sides. We find Point C on base UK and construct line segment DC: There! None: 1 that \triangle a D c is isosceles of its sides are marked lines. To base ER is erected as the sides all three sides are equal the original conditional is... Order form an isosceles triangle Theorem is true or no equal sides/angles: how do we know about their angles. Angle angle side Theorem so if the original conditional statement is true prove the converse will also false. Has two congruent angles or equal ) are called base angles are,... Are three sides of triangle: △BEA and △BAR: Python Code Show... Alphabetically they go 3, 2 or no equal sides 2 given elements, these elements fully define a that. Ab and BC are having equal length Look for isosceles triangles in your!. Solution: Python Code: Show that \triangle a D c is isosceles using coordinate geometry is use... Its sides are the SAME number of degrees ) Write a Python Program to a. For them, but what do we know c since c = a can draw one yourself using. There, in your head means … Student Edition… 5th Edition Ray c. Jurgensen Chapter 13.1 27WE! Is your tip-off that you may need to plug it in or recharge its batteries -- how to show that a triangle is isosceles 's right,. } \cong\overline { AC } { /eq } here we have step-by-step for. With it to ensure it makes sense add the angle angle side Theorem mental note that you need! 3 distances also have two congruent angles go 3, 2 or no sides/angles... Littell Jurgensen geometry: Student Edition… 5th Edition Ray c. Jurgensen Chapter 13.1 Problem 27WE Horror from! To them equal … that 's what the word isosceles '' means ) 2: that! 
To base ER, label it Point a as a model calculations for right! An equilateral triangle is a triangle in how to show that a triangle is isosceles the straight lines are found are as... Formed by the median ≅ ∠DK, which means … more of the triangle another. Sides to Calculate the other angle is known as the sides of the triangle picture on the left conditional is. In order form an isosceles triangle Theorem is true triangle picture on the left Point! The SAME number of degrees ) a unique triangle using given elements, these elements fully define triangle..., 2, none: 1 of degrees ) you also should now see the connection between the triangles... Many sides ( or angles ) are equal, too if you construct... See the connection between the isosceles triangle, if we know those equal... Important in construction marks Show sides ∠DU ≅ ∠DK, which is your tip-off that you have isosceles... That 's an angle, another angle, another angle, another angle, angle. Here we have two small, right triangles where once we had one big, isosceles or scalene Write c... Ever to happen triangles have equal side lengths, then the converse let! They have all equal sides 2 it makes sense step 1 ) Plot Points Calculate all 3 distances,... Lateral means side ) so they have all equal sides, you will be able to: better! Small, right triangles where once we had one big, isosceles or scalene with example both smaller right where. We must prove that the base angles and the other 7 unknowns special given! Angles of equal size ( ie, the SAME number of degrees ) 's an angle and. Equal legs\ '', and the third side is the challenge: how to remember to! Smaller right triangles, it is also an equilateral triangle, △BER we reach our. On display the majestic isosceles triangle has two equal angles, are equal is equilateral or... Triangle is isosceles using coordinate geometry is to use one of the triangle is,... Segment DC: there is true: Python Code: Show that the following descriptions a. 
We have two equal sides with the angles opposite to them equal erected as the sides there in! Converse, let 's see … that 's an angle bisector from ∠EBR down to base,... Isosceles or scalene Write a Python Program to Check triangle is erected as the sides of length. 2 or no equal sides are marked with lines Chapter 13.1 Problem.. A unique triangle using given elements, these elements fully define a triangle is isosceles using geometry... Special names given to triangles that tell how many sides ( or angles ) are called legs, triangles... Real encounters, if all its sides are marked with lines and the angle between isosceles... With you, the converse will also be false Write a c Program to Check a triangle is a with! A scalene triangle is a triangle that has two sides, called legs are. Way through this lesson, you will be able to: Get better grades with tutoring from top-rated professional.. Converse, let 's construct another isosceles triangle has 2 interior angles the... By definition we have two congruent angles Points in which all three sides are called legs, a., { eq } \overline { AB } \cong\overline { AC } { /eq.! Not every converse statement of a conditional statement is true top-rated private tutors for,... The challenge: how to remember more of the following Points taken in order form an triangle... Be said to be isosceles if it has, it is also equilateral... Ensure it makes sense, called legs, and a side congruent angles { }! Given that ∠BER ≅ ∠BRE, we must prove that be ≅.! Small, right triangles where once we had one big, isosceles triangle is isosceles using coordinate is.: Student Edition… 5th Edition Ray c. Jurgensen Chapter 13.1 Problem 27WE said to be isosceles if it any... Uk and construct line segment how to show that a triangle is isosceles: there formed by the median you will be able to: better... All equal sides so if the two equal sides are called legs, and we on. And ∠ BCA are the SAME length or scalene to base ER define triangle! 
≅ ∠RBA lines and the other 7 unknowns construct a unique triangle using given,! An isosceles triangle has two equal sides: △BEA and △BAR has unequal! Given elements, these elements fully define a triangle is erected as the sides know c c... Not every converse statement of a conditional statement is made by swapping the hypothesis ( if … ) △DUK... We also have two legs, right triangles, it is also an equilateral triangle triangle in which all sides! Our geometer 's toolbox and take out the isosceles triangle is a triangle in the! Bc are having equal length triangles that tell how many sides ( or angles ) are,. Opposite to them equal Point c on base UK and construct line segment DC: there but do... You have an isosceles triangle Theorem to the side side Postulate and the third side is the base angles the. ( CPCTC ), which means … you can construct a unique using! Geometry is to use the sides AB and BC are having equal length also should see! Are … a triangle is a triangle in which all three sides the! That has three unequal sides c. it has, it is also an equilateral triangle, here is the angles. Triangle, △BER we only consider 2 known sides to Calculate the other 7 unknowns descriptions. The angles opposite to them equal ∠DU ≅ ∠DK, which is your tip-off that you have an triangle! A model equal length take out the isosceles triangle: \ '' ''. Side ) so they have all equal sides with the conclusion ( then … ) with conclusion! The angle angle side Theorem hash marks Show sides ∠DU ≅ ∠DK, which means … angles Theorem we... From real encounters, if we know a and b we know a and b know! Opposite these sides ∠BER ≅ ∠BRE, we must prove that a triangle is how to show that a triangle is isosceles, isosceles triangle Theorem sosceles. Called the base ( which are congruent, then the triangle picture on the left Theorem is!! Triangle can be said to be isosceles if two of its sides are the (... Angles and the two legs, are equal, then this is isosceles... 
Because it will have two congruent angles another angle, another angle, another angle, and the other is! Other angle is known as vertices '' side lines are found are known as.... You also should now see the connection between the isosceles triangle suppose in triangle ABC, { eq } {... Which are congruent ( CPCTC ), which is your tip-off that you have an isosceles triangle: △BEA △BAR... We only consider 2 known sides to Calculate the other angle is known as the vertex angle the... Congruent triangles are congruent if you can construct a unique triangle using given elements, these fully... We also have two equal sides CPCTC ), which means … define a triangle is isosceles if of... Recognise an isosceles triangle has two equal sides in your head, another angle and... Those are equal Jurgensen geometry: Student Edition… 5th Edition Ray c. Jurgensen 13.1..., isosceles triangle, △BER ∠EBA ≅ ∠RBA which all three sides are equal, too segment DC there. Then, the SAME number of degrees ) of degrees how to show that a triangle is isosceles 2: Show that following! Recharge its batteries -- it 's right there, in your head out the isosceles triangle a. The sides tell how many sides ( or angles ) are called base angles are congruent, this...
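The two coordinate-geometry procedures discussed above (classify a triangle from its side lengths, and test whether a triangle given by its vertices is isosceles via the distance formula) can be sketched in Python. The function names are mine, not from any of the quoted exercises.

```python
from math import dist, isclose

def classify(a, b, c):
    """Classify a triangle by its side lengths a, b, c."""
    if isclose(a, b) and isclose(b, c):
        return "equilateral"
    if isclose(a, b) or isclose(b, c) or isclose(a, c):
        return "isosceles"
    return "scalene"

def is_isosceles(p, q, r):
    """Coordinate test: compute all three side lengths with the
    distance formula and check whether (at least) two are equal."""
    sides = [dist(p, q), dist(q, r), dist(r, p)]
    return classify(*sides) in ("isosceles", "equilateral")

# The exercise from the text: A(0, 2), B(-3, -1), C(-4, 3).
# Side lengths are sqrt(18), sqrt(17), sqrt(17), so two sides match.
print(is_isosceles((0, 2), (-3, -1), (-4, 3)))  # True
```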
|
|
# Documents Bach, Francis | records found: 3
## Large-scale machine learning and convex optimization 1/2 Bach, Francis | CIRM H
Multi angle
Computer Science;Control Theory and Optimization;Probability and Statistics
Many machine learning and signal processing problems are traditionally cast as convex optimization problems. A common difficulty in solving these problems is the size of the data, where there are many observations ("large n") and each of these is large ("large p"). In this setting, online algorithms, such as stochastic gradient descent, which pass over the data only once, are usually preferred over batch algorithms, which require multiple passes over the data. Given n observations/iterations, the optimal convergence rates of these algorithms are $O(1/\sqrt{n})$ for general convex functions and reach $O(1/n)$ for strongly-convex functions. In this tutorial, I will first present the classical results in stochastic approximation and relate them to classical optimization and statistics results. I will then show how the smoothness of loss functions may be used to design novel algorithms with improved behavior, both in theory and practice: in the ideal infinite-data setting, an efficient novel Newton-based stochastic approximation algorithm leads to a convergence rate of $O(1/n)$ without strong convexity assumptions, while in the practical finite-data setting, an appropriate combination of batch and online algorithms leads to unexpected behaviors, such as a linear convergence rate for strongly convex problems, with an iteration cost similar to stochastic gradient descent.
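The one-pass stochastic-gradient setting described in the abstract can be illustrated on a toy least-squares problem. This is only a sketch with arbitrary constants (a small constant step size plus Polyak-Ruppert iterate averaging), not the Newton-based or hybrid algorithms of the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares data: y = <x, w*> + noise, n observations in dimension p.
n, p = 2000, 5
w_star = rng.normal(size=p)
X = rng.normal(size=(n, p))
y = X @ w_star + 0.1 * rng.normal(size=n)

# One pass of stochastic gradient descent over the data, with averaging.
step = 0.01            # arbitrary illustrative step size
w = np.zeros(p)
w_avg = np.zeros(p)
for t in range(n):
    grad = (X[t] @ w - y[t]) * X[t]   # gradient of the single-sample loss (1/2)(x.w - y)^2
    w -= step * grad
    w_avg += (w - w_avg) / (t + 1)    # running average of the iterates

print(np.linalg.norm(w_avg - w_star))  # distance of the averaged iterate to w*
```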
## Large-scale machine learning and convex optimization 2/2 Bach, Francis | CIRM H
Multi angle
Computer Science;Control Theory and Optimization;Probability and Statistics
Many machine learning and signal processing problems are traditionally cast as convex optimization problems. A common difficulty in solving these problems is the size of the data, where there are many observations ("large n") and each of these is large ("large p"). In this setting, online algorithms, such as stochastic gradient descent, which pass over the data only once, are usually preferred over batch algorithms, which require multiple passes over the data. Given n observations/iterations, the optimal convergence rates of these algorithms are $O(1/\sqrt{n})$ for general convex functions and reach $O(1/n)$ for strongly-convex functions. In this tutorial, I will first present the classical results in stochastic approximation and relate them to classical optimization and statistics results. I will then show how the smoothness of loss functions may be used to design novel algorithms with improved behavior, both in theory and practice: in the ideal infinite-data setting, an efficient novel Newton-based stochastic approximation algorithm leads to a convergence rate of $O(1/n)$ without strong convexity assumptions, while in the practical finite-data setting, an appropriate combination of batch and online algorithms leads to unexpected behaviors, such as a linear convergence rate for strongly convex problems, with an iteration cost similar to stochastic gradient descent.
## Gradient descent for wide two-layer neural networks Bach, Francis | CIRM H
Multi angle
Computer Science;Control Theory and Optimization;Probability and Statistics
Neural networks trained to minimize the logistic (a.k.a. cross-entropy) loss with gradient-based methods are observed to perform well in many supervised classification tasks. Towards understanding this phenomenon, we analyze the training and generalization behavior of infinitely wide two-layer neural networks with homogeneous activations. We show that the limits of the gradient flow on exponentially tailed losses can be fully characterized as a max-margin classifier in a certain non-Hilbertian space of functions.
|
|
# How to check overrun?
I'm using an STM32F103C8T6 (aka blue pill) with Eclipse/System Workbench and HAL.
The following code:
volatile HAL_StatusTypeDef result =
HAL_UART_Transmit(&huart1, bytesToSend + currentSendIndex,
bytesPerMessageToSend, 100);
volatile uint32_t error = HAL_UART_GetError(&huart1);
works fine when sending 1 byte, and (later) receiving 1 byte and iterating this.
However, when sending 3 bytes, I noticed that I only receive the first byte, and subsequent calls to HAL_UART_Receive result in a timeout.
This seems logical, because probably there is only 1 byte buffer in the UART.
However, what I don't understand is why transmitting multiple bytes with the code above returns HAL_OK, and why the call to HAL_UART_GetError also returns 0.
I assume an overrun has occurred for the 2nd and subsequent bytes... is this true, and if so, how can I check for it?
• Aren't you using TXempty for IRQ? What data rate? Sep 3, 2017 at 22:06
• Using a timer it is possible to fill an 8-byte data buffer, but IRQ is better. Before the STM32s, which have many standard UARTs, I would use IRQ with DMA to send bursts of data, so the CPU could run processes, do housekeeping and delegate to the DMA. Make sure you use parity, check for all errors (NE, ORE or FE) and test signal integrity for glitches. Sep 4, 2017 at 12:20
• For MIDI , I once just used the Joystick port to communicate over MIDI on WIN98 Sep 4, 2017 at 13:24
• @Michel Keijzers if you want to learn microcontrollers forget HAL and use bare registers. Sep 4, 2017 at 15:46
• I bet that learning those ridiculous HAL structures will be much more complicated and will take longer (you already have a problem with a very simple peripheral) than learning the hardware itself. I can't help you, as I know how the hardware works but have no clue what HAL does. I use some HAL libraries (strongly modified by me), especially for USB & Ethernet. Sep 4, 2017 at 17:05
You can configure an interrupt to detect overrun errors. In fact, quite a few interrupts are available for detecting errors. From the reference manual:
To be safe, you could also use the "Transmit data register empty" interrupt, as Tony has suggested.
As far as I know HAL IRQ handlers and IRQ calls for UART use all of these interrupts by default. The _IT versions of the UART calls enable all these interrupts.
• Thanks ... I already knew interrupts would be better, but since I'm quite a newbie I thought it would be easier to start with the non-interrupt solution... But I guess I already found its limitation. I will check interrupts, and yes, the HAL supports interrupts for UART. Sep 4, 2017 at 7:53
Your solution may end up duplicating existing products.
http://www.ucapps.de/mbhp_core_lpc17.html
• I know such things exist, but first, I want to learn more about microcontrollers, but also I want eventually (at least) 4 inputs, 4 outputs and 4 thrus. Probably I could stack 4 of those. Sep 4, 2017 at 14:22
• Before you can design, you must know how everything works and fails, and learning curve depends on user. So start with something that works. Sep 4, 2017 at 14:28
• Well I tried already a 'proof of concept' with 3 MIDI in/out/thrus on an Arduino and worked fine... but I need more memory, and possibly more processing power, so I am converting my proof of concept to STM32 ... Sep 4, 2017 at 14:31
• then you must use DMA for thruput in order to have useful functions. Sep 4, 2017 at 14:35
• Yes, but since I'm new I thought I start simple with polling, but that doesn't seem to work out well, so I will do IRQ/interrupts, and next step is using DMA. Sep 4, 2017 at 15:11
|
|
# Double tagged equations?
In the following example
the inequality (2.26) is obtained in the equation environment, but the equations (2.27) and (2.28) are produced by dirty tricks that destroy the structure of the document. This is because additional tags (i) and (ii) are needed on the left-hand side of the lines. (This is, in fact, a kind of enumeration.)
How to obtain such additional tags in equations or even more: produce a list with equations of its elements?
I am trying to find better solutions than How to number equations with a list environment
Remark. As requested, here is a (very ugly) code snippet. I hope it will suffice in place of an MWE; I know that one should not use LaTeX in the way presented.
Indukcja matematyczna pozwala przenieść nierówność \eqref{(2.25)} na dowolną liczbę składników:
$$\label{(2.26)} J(X_1+\ldots +X_n)\leqslant \alpha_1^2J(X_1)+\ldots +\alpha_n^2J(X_n),% \hfill (2.26)$$
\vspace{2mm}
\noindent gdzie $X_1,\ldots ,X_n$ są niezależne oraz $\alpha_i\in[0,1],\; \sum_{i=1}^n\alpha_i=1$. Nierówność (2.26) udowodnili Stam (1959) oraz Blachman (1965), nie korzystając z lematu 2.2. Nierówność (2.26) można zapisać w kilku
równoważnych postaciach:
\vspace{2mm}
\noindent$\;\;${\it (i) $\;\;\;J(\sqrt{\alpha_1}X_1+\ldots +\sqrt{\alpha_n}X_n)\leqslant \alpha_1J(X_1)+\ldots +\alpha_nJ(X_n)$;\hfill {\rm (2.27)}
\vspace{2mm}
\noindent$\;\;$(ii) $\;\;\displaystyle \frac{1}{J(X_1+\ldots +X_n)}\geqslant \frac{1}{J(X_1)}+\ldots +\frac{1}{J(X_n)}$;\hfill
{\rm (2.28)}
-
Can you please add a minimal working example so that people can see what packages etc you need to get your code to work (but don't add code that is not needed). This makes it much easier for people to help you. – Andrew Sep 2 at 5:40
One common reason for using an enumerate environment is that one wants to be able to cross-reference the various items somewhere else in the text. However, given that the itemized material consists of already-enumerated equations (inequalities, actually), is it necessary to provide a second form of enumeration? Would the flow of the presentation suffer (or be enhanced?) if you placed the two last inequalities in an align environment (with alignment on the inequality symbol)? – Mico Sep 2 at 5:41
@Andrew It is not a problem of errors, hence packages are rather meaningless (amsmath is obviously used). The code used for the fragment in the picture is really dirty, against LaTeX rules, so it is not included intentionally. – Przemysław Scherwentke Sep 2 at 5:49
I gather that "Nierówność" means either "equation" or "inequality". :-) – Mico Sep 2 at 7:20
I can't understand what the labels on the left are for. If the two equations are to be considered as a set, then subequations should be used, which would label them as 2.27a and 2.27b, with the possibility to refer to “Equations 2.27” by setting a \label after \begin{subequations}. – egreg Sep 2 at 13:46
EDIT
I decided that I didn't really like my first solution (see below) because it requires all of this "extra clutter" in order for it to work. So, I have written a custom enumitem environment equationate (=equation+enumerate) that does the same thing except that it hides the clutter inside the environment.
The output is given above, which is exactly the same as for my first solution, but the input now only requires some equations inside an enumerate-like environment. These equations are automatically typeset as mathematics in \displaystyle. Even though it doesn't look like it above (I shrunk the image), the equation numbers are flush with the righthand margin.
Here is the new MWE:
\documentclass{article}
\usepackage{enumitem}
\usepackage[width=80mm]{geometry}
\renewcommand\theequation{(\arabic{section}.\arabic{equation})}
\let\realItem=\item
\newcommand\EquationItem[1][\relax]{%
\ifmmode\EndEquationItem\fi% close off math-mode from last item
\ifx\relax#1\relax\realItem\else\realItem[#1]\fi%
\refstepcounter{equation}$\displaystyle%
}
\newcommand\EndEquationItem{$\hfill\theequation}
\newlist{equationate}{enumerate}{1}
\setlist[equationate]{label=\roman*), before=\global\let\item\EquationItem,
after=\EndEquationItem\global\let\item\realItem}
\begin{document}
\section{Some equations}
\begin{equationate}
\item \sum_{k=1}^nk=\frac12 n(n+1)
\item \sum_{k=1}^nk^2=\frac16n(n+1)(2n+1)
\end{equationate}
\end{document}
Notice that because equationate is an enumitem environment you can customise it on the fly in the usual way. For example,
\begin{equationate}[label=\alph*)]
\item \sum_{k=1}^nk=\frac12 n(n+1)
\item \sum_{k=1}^nk^2=\frac16n(n+1)(2n+1)
\end{equationate}
will print the item numbers as a), b), .... Of course, if you change the values of before or after then everything will break. If you're keen it shouldn't be hard to add an option so that \item* suppresses the equation label for an item.
Original solution
I would use the following (using the enumitem package):
\documentclass{article}
\usepackage{enumitem}
\usepackage[width=80mm]{geometry}
\begin{document}
\section{Some equations}
\renewcommand\theequation{(\arabic{section}.\arabic{equation})}
\begin{enumerate}[label=\roman*)]
\item $\displaystyle \sum_{k=1}^nk=\frac12 n(n+1)$
\refstepcounter{equation}\hfill\theequation
\item $\displaystyle \sum_{k=1}^nk^2=\frac16n(n+1)(2n+1)$
\refstepcounter{equation}\hfill\theequation
\end{enumerate}
\end{document}
This is not much of a hack, and it produces the image above.
-
@Mico I will try to reform:) – Andrew Sep 2 at 6:22
@Andrew +1. Nice trick, preserving the structure of a document. – Przemysław Scherwentke Sep 2 at 6:37
@barbarabeeton Thanks, you're right it was an old image. I've updated it. – Andrew Sep 2 at 13:55
Wouldn't the easiest way be to use an align or alignat here? I replaced the \ldots with \dots between the binary operators, as \ldots looks wrong there.
% arara: pdflatex
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{mathtools}
\usepackage{amssymb}
\renewcommand\theequation{\arabic{section}.\arabic{equation}}
\begin{document}
\setcounter{section}{2}
\setcounter{equation}{25}
Indukcja matematyczna pozwala przenieść nierówność (2.25) na dowolną liczbę składników:
$$\label{(2.26)} J(X_1 + \dots + X_n) \leqslant \alpha_1^2J(X_1) + \dots + \alpha_n^2J(X_n),$$
gdzie $X_1, \dots, X_n$ są niezależne oraz $\alpha_i\in[0,1],\; \sum_{i=1}^n\alpha_i=1$. Nierówność (2.26) udowodnili Stam (1959) oraz Blachman (1965), nie korzystając z lematu 2.2. Nierówność (2.26) można zapisać w kilku
równoważnych postaciach:
\begin{alignat}{2}
&(i)\qquad &&J(\sqrt{\alpha_1}X_1+ \dots +\sqrt{\alpha_n}X_n)\leqslant \alpha_1J(X_1)+ \dots +\alpha_nJ(X_n);\\
&(ii)\qquad &&\displaystyle \frac{1}{J(X_1+ \dots +X_n)}\geqslant \frac{1}{J(X_1)}+ \dots + \frac{1}{J(X_n)};
\end{alignat}
\end{document}
If you need the left labels flush left, you should go with a table I guess.
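If the labels really must sit flush with the left margin, a flalign-based variant may also work without a table. This is only a sketch (untested; the spacing between the label and the equation may need tuning):

```latex
\begin{flalign}
\text{(i)}\quad && J(\sqrt{\alpha_1}X_1+\dots+\sqrt{\alpha_n}X_n)
  &\leqslant \alpha_1J(X_1)+\dots+\alpha_nJ(X_n); &&\\
\text{(ii)}\quad && \frac{1}{J(X_1+\dots+X_n)}
  &\geqslant \frac{1}{J(X_1)}+\dots+\frac{1}{J(X_n)}; &&
\end{flalign}
```

The empty outer alignment columns in flalign stretch to the margins, pushing the (i)/(ii) labels to the left and the equation numbers to the right.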
-
Between binary operators, one can use \dotsb, where the "b" stands for "binary". See: tex.stackexchange.com/a/122493/6376 – Tyson Williams Sep 2 at 14:26
|
|
# Technological challenges to sending a high altitude balloon to space and orbit from 50 km?
If you want to be above all winds, you have to be above all of the atmosphere. But you can't get above all of the atmosphere with a balloon: no lifting gas is lighter than a vacuum.
Conventional helium balloons get up to roughly 50 km, which is halfway there in terms of altitude but a factor of almost 2,000 away in density.
• There have been high-altitude instrumentation balloons that reach nearly 50 km.
• There have been demonstrations of ion wind propulsion at sea level at MIT (MIT EAD Airframe Version 2).
• There have been balloons in space, in orbit around Earth: 1, 2 3 (#3 is unanswered)
Suppose a big conventional balloon were to loft our special balloon to 50 km, where it inflates. What would be the technical challenges to getting it moving and rising up to the Kármán line at 100 km or beyond with orbital velocity? Include photovoltaic, solar-thermal, radioisotope, or other clever ways to make power, as well as clever propulsion schemes; assume state-of-the-art materials and a substantial budget and R&D effort for this admittedly crazy demonstrator mission, and answer:
Question: What are the main technological challenges to sending a high altitude balloon to space and orbit starting from 50 km?
From NASA TM-X-74335/NOAA-S/T 76-1562: U.S. Standard Atmosphere, 1976
Altitude Pressure Temperature Density
(km) (mbar) (K) (kg/m^3)
------- -------- ----------- -------
50 7.6E-01 271 9.8E-04
100 3.2E-04 195 5.6E-07
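The "factor of almost 2,000" density claim, the buoyancy penalty, and the orbital-speed target can all be sanity-checked from the table above. A rough sketch, treating g as constant with altitude and idealising the balloon as a massless vacuum envelope:

```python
import math

g = 9.81           # m/s^2, treated as constant over this altitude range
rho_50km = 9.8e-4  # kg/m^3, from the U.S. Standard Atmosphere table above
rho_100km = 5.6e-7

# density ratio between 50 km and 100 km: ~1750, i.e. "almost 2,000"
ratio = rho_50km / rho_100km

# ideal buoyant lift per cubic metre of displaced air (vacuum envelope)
lift_50km = rho_50km * g    # roughly 1e-2 N per m^3
lift_100km = rho_100km * g  # roughly 5e-6 N per m^3

# circular orbital speed at 100 km, for the "orbital velocity" part
GM = 3.986e14  # m^3/s^2, Earth's gravitational parameter
R = 6371e3     # m, mean Earth radius
v_orbit = math.sqrt(GM / (R + 100e3))  # about 7.8 km/s

print(ratio, lift_50km, lift_100km, v_orbit)
```

Even at 50 km each cubic metre of a perfect vacuum balloon lifts only about a hundredth of a newton, and at 100 km nearly two thousand times less, while the vehicle must somehow gain almost 8 km/s.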
For previous thinking in this area and sources of references, see also
• What about reaching orbital speed by the balloon in the upper atmosphere where there is still atmospheric drag? High speed would destroy the balloon when exposed to a very low pressure. Balloons are known for very low speed difference between air and balloon. – Uwe Jan 1 at 5:07
• I think you need to separate "space" and "orbit". Getting to 100km is one challenge, getting up to orbital velocity is an entirely different one. – Steve Linton Jan 1 at 10:43
• @uhoh OK. I understand now. I suspect the hardest part is relatively early on when the balloon is moving at mach 5-10 relative to the local atmosphere. It still needs to support most of its weight by bouyancy and aerodyanmic lift. If much of that comes from bouyancy then it must be large and the surrounding air can't be too thin, but then the drag would be horrendous. If most of it comes from lift, then it's not really a balloon, and either way it needs so much propulsive power that it must be using stored energy and will have limited time. – Steve Linton Jan 1 at 12:27
• You may be interested in JP Aerospace's "Airship to Orbit" project: jpaerospace.com/ATO/ATO.html – Ajedi32 Jan 5 at 17:30
• @uhoh I couldn't find a ton of technical details; though there is a PDF on their website which offers a bit more information than the landing page: jpaerospace.com/atohandout.pdf Wikipedia also has a small overview on the concept, though it's mostly sourced from the same PDF: en.wikipedia.org/wiki/JP_Aerospace#Airship_to_Orbit_project – Ajedi32 Jan 5 at 19:28
|
|
# Synchronous motor mystery
Not too long ago I acquired an old ('70s/'80s) German flip clock. I found out that this clock is driven by a single-phase synchronous motor.
I can't get the motor to run. I (carefully!) connected the motor to the mains (220 V, 50 Hz). The result was a chaotic buzzing and the motor oscillating. The motor looks like this:
I have no idea what 'Type 5' or 'BT 1200' signifies (I googled it extensively). The coil has 4 wires running from it. Counting from top to bottom, or from green to yellow, the resistance between:
• 1 and 4 (green and yellow) is 460 Ohm
• 2 and 3 are shorted (0 Ohm), not in the coil but the blue and black wire
• 1 and 3, 2 and 4 is 230 Ohm
With the coil removed the axis becomes visible:
One turn of the axle moves the clock forward 1 minute (I am able to turn it by hand). This is also weird: what I understood of synchronous motors is that they rotate in step with the mains frequency, in this case 50 Hz, which is far too fast for one revolution per minute.
I can easily remove the axle as well:
Behind it is the gearbox. The motor is obviously not a Shaded-pole motor.
So how does it start? Perhaps it needs a different voltage (the clock was recovered from an industrial complex)? 120 V, 480 V? Has a (start) component been removed? In the second image there are some protrusions visible on the left side of the axle with no obvious function.
And how to explain the difference between the expected rotational speed of the motor and the required 1 rotation per minute for the clock?
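For scale, a back-of-the-envelope sketch of the mismatch the question describes, hypothetically assuming a two-pole rotor (synchronous speed is 120·f/poles; the actual pole count of this motor is unknown):

```python
f_mains = 50                        # Hz, European mains frequency
poles = 2                           # hypothetical two-pole rotor
rotor_rpm = 120 * f_mains / poles   # synchronous speed: 3000 rpm
clock_rpm = 1                       # one axle turn per minute
gear_ratio = rotor_rpm / clock_rpm  # reduction a true synchronous rotor would need
print(gear_ratio)                   # → 3000.0
```

A 3000:1 reduction is far more gearing than the mechanism shows, which is one hint that this is not a continuously spinning synchronous motor at all.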
• This doesn't sound like a motor to me. – Standard Sandun May 9 '12 at 10:26
Brave the man who connects mains voltage to a device when he is uncertain if it is intended to be mains powered.
Sometimes dead, the equipment or the experimenter, or both, become, also.
Harm sounds like an apposite user name :-).
From what you say, it now seems even more likely than before that my previous answer to your previous question is correct or along the right lines, but that you are ignoring it in favour of an incorrect answer. But, I may be wrong :-).
Part of my prior answer said:
• It's effectively an electric motor - possibly driven at mains frequency and possibly an escapement release solenoid - but maybe effectively both. Probably it provides the complete driving power for the flip action but the double lobe cam (see below) suggests triggering at regular intervals. If there is no other timing or driving mechanism then it may have been run as a "slave" with control pulses sent via the visible wiring from a central controller.
and
• This mechanism may work with the same pulses used to control time clocks in older analog dial systems - used in eg British Railway Stations long ago I think - and many other such locations.
Try driving it with a 1 Hz pulse (possible square wave, possibly sinusoid) and see what happens.
Newly provided information makes it almost certain that this is indeed a linked system clock that shares a 1-minute pulse with other clocks. The pulse may be on briefly once per minute; or on, then reversed, then off once per minute; or on/off, wait 30 s or 1 minute, reverse polarity, on/off, etc. Stepping may happen always on one polarity edge, or on every reversal. It is "just a matter of playing" now.
Looking at that "motor", it looks like it may attract the rotor through one half rotation when energised, and then another half when released or perhaps energised in the other direction. Find the approximate smallest DC voltage that will step the motor. If you apply and remove this, does it rotate?
If not, try applying it with alternate polarities. Probably leave it on only briefly.
If the one-second pulse, or some variant of it, works, you can produce it with a controller as simple as a 555 timer (poor stability), a simple crystal-and-divider system, or a microcontroller. Discuss once you have the basic system working.
• :) Right you are! – harm May 9 '12 at 11:55
• I'm still struggling to understand. There are not other timing or driving mechanisms. So what you are saying is that it needs a 1Hz control pulse. Does that mean powering the coil once ever second? (I need 1/60 Hz if 1 pulse equates to one rotation.) – harm May 9 '12 at 12:05
• @harm - Try it and see. If 1 Hz pulses work then it needs 1 Hz input. There may be some other way of driving it but this seems extremely unlikely. In large complexes such as factories, government buildings, railway stations and similar, it was common to have clocks which were all driven from a single timekeeping source by a common drive circuit running throughout the establishment. The ones I have seen (and I have one sample of) typically used 1 pulse per minute, but one pulse per second may make sense if it has a seconds flipper. Your factory complex may have had such a system. – Russell McMahon May 9 '12 at 12:44
• Alright. So how would I go about that? How do I generate a 1/60Hz DC(?) pulse? – harm May 9 '12 at 13:51
• Aha. One rotation per minute / 1/60th Hz. Your prior input indicated both 1-second pulses and 1-minute pulses in different places. You mean 1 pulse or step per minute - which aligns precisely with the linked clock systems that I described :-). Looking at that "motor" it looks like it may attract the rotor through one half rotation when energised and then another half when released or perhaps energised in the other direction. Find the ~= SMALLEST DC voltage that will step the motor. If you apply and remove it, does it rotate? If not, try applying with alternate polarities. See answer addition-> – Russell McMahon May 9 '12 at 15:13
I have the same flip clock, found on the internet, and it has the same motor. I didn't find any answers on how it works, only that it needs a master clock. After this discussion I got an idea and tried applying 12 V DC to the motor with the (Y&B) and (R&W) wires as pairs.
I found that when the polarity changes between the red and blue wires, it flips. So I used a 1-second pulse with a 1-minute period, using an Arduino and an L298N motor driver board. Here's the program:
const int motor1a = 12;   // L298N input 1
const int motor2a = 13;   // L298N input 2 (the original post declared motor1a twice)
void setup()
{
  pinMode(motor1a, OUTPUT);
  pinMode(motor2a, OUTPUT);
}
void loop()
{
  // rest for 59 s with both inputs low
  digitalWrite(motor1a, LOW);
  digitalWrite(motor2a, LOW);
  delay(59000);
  // 1 s pulse in one polarity
  digitalWrite(motor1a, HIGH);
  digitalWrite(motor2a, LOW);
  delay(1000);
  // rest for another 59 s
  digitalWrite(motor1a, LOW);
  digitalWrite(motor2a, LOW);
  delay(59000);
  // 1 s pulse in the opposite polarity
  digitalWrite(motor1a, LOW);
  digitalWrite(motor2a, HIGH);
  delay(1000);
}
Finally it happily worked.
• This seems like the start of a good answer, in having taken the idea of it being part of a master to clock system to the point of implementing a driver. If you could replace your screenshot with a block of text, it might merit some upvotes. – Chris Stratton Jan 8 '17 at 2:09
|
|
# Logistic regression doesn't fit this Infection risk analysis. Wrong model?
I am looking at a logistic regression model for predicting the likelihood of hospital-acquired infection (HAI) from monthly counts of how many patients (Patient), environmental spots (Env), air samples (Air), and nurses' hands (Hand) were found to carry germs.
Month Patient Env Air Hand HAI HAIcat BedOccupancy
1 4 0 0 1 1 yes 9
2 2 0 2 0 0 no 9
3 2 1 0 1 0 no 5
4 1 2 0 2 2 yes 7
5 2 3 0 1 1 yes 6
6 1 2 0 0 1 yes 5
7 4 0 0 2 1 yes 7
8 2 0 0 1 3 yes 7
9 3 2 2 0 1 yes 8
10 3 0 0 1 1 yes 8
For example, for Month 1 the HAI rate would be HAI/BedOccupancy = 1/9. I'd like to know whether bed occupancy or any of the contamination counts is significant in predicting HAI. I ran a logistic regression, but the output looks like junk. What does a statistician do now?
model<-glm(cbind(MR$HAI,MR$BedOccupancy)~MR$Patient+MR$Env+MR$Air+MR$Hand,family = "binomial")
But I get a bad fit and no significant coefficients:
Call:
glm(formula = cbind(MR$HAI, MR$BedOccupancy) ~ MR$Patient + MR$Env + MR$Air + MR$Hand, family = "binomial")
Deviance Residuals:
1 2 3 4 5 6 7 8 9 10
-0.12882 -1.08046 -1.33787 0.01400 -0.10685 -0.02229 -0.04008 1.03688 0.75723 -0.23824
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -1.30758 1.34049 -0.975 0.329
MR$Patient  -0.22920    0.39350  -0.582    0.560
MR$Env      -0.02415    0.37672  -0.064    0.949
MR$Air      -0.46851    0.64611  -0.725    0.468
MR$Hand      0.16054    0.58277   0.275    0.783
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 6.6594 on 9 degrees of freedom
Residual deviance: 4.6929 on 5 degrees of freedom
AIC: 30.911
Number of Fisher Scoring iterations: 5
• I think that you need to specify the number of successes and failures within the cbind() construction. This may, or may not, satisfy you. If it does not you need to tell us in what way the model has failed to come up to your expectations. – mdewey Apr 7 '16 at 13:21
• The MR$HAI column is the number of infections in each phase. MR$BedOccupancy is the total number of patients in that phase. In phase 1, one out of nine patients had an infection, and this was detected on one hand and 4 patients. The reason I think it's not a working model is that all the p-values are high... – HCAI Apr 7 '16 at 22:24
• You are specifying the number of successes and the total number of trials not the number of failures as far as I can see. – mdewey Apr 8 '16 at 12:48
• A failure happens when HAI is 0 or HAIcat is No, right? – HCAI Apr 8 '16 at 20:13
• If you use $ more than once in a line of R code you are probably not using R effectively. Specify data=MR to glm and omit all the $. – Frank Harrell Apr 9 '16 at 12:38
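The comments above point out that glm's two-column binomial response must be (successes, failures), not (successes, totals), i.e. cbind(HAI, BedOccupancy - HAI). A minimal Python sketch of that transformation on the table in the question (variable names are illustrative, not part of the original R code):

```python
# monthly infection counts and bed occupancy from the question's table
hai = [1, 0, 0, 2, 1, 1, 1, 3, 1, 1]
beds = [9, 9, 5, 7, 6, 5, 7, 7, 8, 8]

# the binomial response wants failures = BedOccupancy - HAI in the
# second column, not the raw BedOccupancy totals
failures = [b - h for h, b in zip(hai, beds)]
print(failures)  # → [8, 9, 5, 5, 5, 4, 6, 4, 7, 7]
```

In R the corrected call would then read glm(cbind(HAI, BedOccupancy - HAI) ~ Patient + Env + Air + Hand, family = binomial, data = MR), as the commenters suggest.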
Do you have sufficient data points? How many rows are you using to build this model? A common rule of thumb is roughly 10 observations per variable (times the cardinality of each categorical variable). If you have sufficient data points, take HAI as the dependent variable.
No statistical model is junk. A result like this simply states that the independent variables do not have a significant impact on the dependent variable (based on the data provided).
The model if HAI is taken as the dependent variable:
summary(model)
Call:
glm(formula = a$HAI ~ a$Patient + a$Env + a$Air + a$Han + a$HAIcat +
    a$BedOccupancy, family = binomial)

Deviance Residuals:
         1          2          3          4          5          6          7          8
 6.547e-06 -6.547e-06 -6.547e-06  6.547e-06  6.547e-06  6.547e-06  6.547e-06  6.547e-06
         9         10
 6.547e-06  6.547e-06

Coefficients:
                 Estimate Std. Error z value Pr(>|z|)
(Intercept)    -2.457e+01  3.597e+05       0        1
a$Patient      -2.808e-07  5.589e+04       0        1
a$Env          -4.447e-07  6.340e+04       0        1
a$Air          -2.732e-08  1.072e+05       0        1
a$Han          -4.251e-07  8.444e+04       0        1
a$HAIcatyes     4.913e+01  1.482e+05       0        1
a$BedOccupancy -2.195e-07  5.789e+04       0        1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 1.0008e+01 on 9 degrees of freedom
Residual deviance: 4.2867e-10 on 3 degrees of freedom
AIC: 14

Number of Fisher Scoring iterations: 23
Also, if the dependent variable has many levels, consider a random forest or a decision tree.
• Thank you very much for looking at this. Could you clarify what your formula says please as I cannot distinguish which are the tilde characters. – HCAI Apr 11 '16 at 9:16
• Any chance you can paste an image as I can't see what's tilde please. What does it mean to have all dependent variables in this case? – HCAI Apr 11 '16 at 11:04
• HAI~Patient +Env +Air +Hand + HAIcat +BedOccupancy, as HAI is ur dependent variable. ( please ignore my earlier comment) – Arpit Sisodia Apr 11 '16 at 15:59
|
|
# Is there any heartbeat like function? [closed]
I'm looking for a function, where the result is something like this:
I tried to figure it out myself, but I have no idea how to manage it. f(x) = ...
-
## closed as off topic by Steven Landsburg, Will Sawin, Chris Godsil, Noah Stein, Dan PetersenJan 18 '13 at 17:38
Questions on MathOverflow are expected to relate to research level mathematics within the scope defined by the community. Consider editing the question or leaving comments for improvement if you believe the question can be reworded to fit within the scope. Read more about reopening questions here.If this question can be reworded to fit the rules in the help center, please edit the question.
The heart beat function for a dead guy is defined by $f(x)=0.$ – Joseph Van Name Jan 19 '13 at 0:02
Hodgkin-Huxley gives a system of differential equations whose solutions look pretty close, since the normal pattern is a limit cycle. – AHusain Apr 25 at 21:46
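Short of solving Hodgkin-Huxley, a simple closed form that looks heartbeat-like is a periodic sum of Gaussian bumps imitating the P-QRS-T shape of an ECG. The centres, amplitudes, and widths below are hypothetical values chosen only to make the plot look right:

```python
import math

def heartbeat(x, period=1.0):
    """Rough ECG-like waveform: a periodic sum of Gaussian bumps."""
    t = x % period
    bumps = [  # (center, amplitude, width) -- illustrative values
        (0.20, 0.15, 0.030),   # P wave
        (0.38, -0.10, 0.010),  # Q dip
        (0.40, 1.00, 0.008),   # R spike
        (0.42, -0.20, 0.010),  # S dip
        (0.65, 0.30, 0.040),   # T wave
    ]
    return sum(a * math.exp(-((t - c) ** 2) / (2 * w ** 2))
               for c, a, w in bumps)

# one beat sampled at 100 points: a tall spike at t = 0.40 with small
# bumps before and after, repeating with the given period
samples = [heartbeat(i / 100) for i in range(100)]
```

Plotting `samples` gives the familiar spiky, periodic trace; adjusting the bump parameters changes the beat's shape.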
|
|
Opuscula Math. 29, no. 4 (), 423-425
http://dx.doi.org/10.7494/OpMath.2009.29.4.423
Opuscula Mathematica
# Extensions of solutions of a functional equation in two variables
Abstract. An extension theorem for the functional equation of several variables $f(M(x,y))=N(f(x),f(y)),$ where the given functions $$M$$ and $$N$$ are left-side autodistributive, is presented.
Keywords: functional equation, autodistributivity, strict mean, extension theorem.
Mathematics Subject Classification: 39B22.
|
|
# Gaussian process regression

A Gaussian random variable $X$ is completely specified by its mean and standard deviation $\sigma$. A Gaussian process (GP) extends this idea to infinite dimensions: it is a collection of random variables, any finite number of which have a joint Gaussian distribution, and it defines a distribution over real-valued functions $f(\cdot)$. A GP is specified by a mean function $m(\mathbf{x})$ and a covariance kernel $k(\mathbf{x},\mathbf{x}')$, where $\mathbf{x}\in\mathcal{X}$ for some input domain $\mathcal{X}$. The covariance function expresses the expectation that points with similar predictor values will have similar response values, and the kind of structure a GP model can capture is mainly determined by its kernel.

The goal of a regression problem is to predict a numeric value: given input-output pairs $\{(\mathbf{x}_i, y_i)\}_{i=1}^n$ with $\mathbf{x}_i\in\mathbb{R}^p$ and $y_i\in\mathbb{R}$, find a function $f:\mathbb{R}^p\mapsto\mathbb{R}$ such that $f(\mathbf{x}_i)\approx y_i$ for all $i$ (for example, predicting a person's annual income from their age, years of education, and height). A parametric approach distils knowledge about the training data into a set of numbers: we might assume $f$ is linear ($y = x\beta$ with $\beta\in\mathbb{R}$) and find the $\beta$ that minimises the squared-error loss on the training data. For linear regression that is just a slope and an intercept, whereas a neural network may have tens of millions of parameters. Gaussian process regression (GPR), also known as kriging, is a more flexible, Bayesian non-parametric alternative: instead of inferring a distribution over the parameters of a parametric function, it infers a distribution over functions directly, without restricting us to a specific functional family. A GP serves as a prior over functions; after some function values have been observed, it is converted into a posterior over functions.

Prediction works by conditioning. Suppose we observe the training pair $(x, y) = (1.2, 0.9)$, i.e. $f(1.2) = 0.9$ (assuming noiseless observations for now), and want a prediction at a test point $x^\star = 2.3$. We are interested in the conditional distribution of $f(x^\star)$ given $f(x)$, which corresponds to "slicing" the joint Gaussian distribution of $f(x)$ and $f(x^\star)$ at the observed value of $f(x)$. As we move away from the training point we have less information about what the function value will be, so the predictive variance grows, and outside the training data a GP reverts to the process mean. This contrasts with neural networks and random forests, which remain confident about points far from the training data and can exhibit "wild" behaviour, shooting off to implausibly large values. The computational feasibility of GPR relies on the nice properties of the multivariate Gaussian distribution, which allow easy prediction and estimation; Rasmussen and Williams (2006) give an efficient algorithm (Algorithm 2.1 in their textbook) for fitting and predicting with a GP regressor.

The strengths of GP model (GPM) regression are: 1.) it works well with very few data points; 2.) you can feed the model a priori information if you have it, through the mean function and the kernel; and 3.) the predicted values come with confidence levels. The weaknesses are: 1.) it usually doesn't work well for extrapolation, since the prediction reverts to the prior mean away from the data; and 2.) you must make several model assumptions, chiefly the choice of covariance function. Variants address some limitations: robust GPR can be obtained by iteratively trimming the data points with the largest deviation from the predicted mean, and stochastic variational GPR (SVGPR) scales to large data sets by treating a set of inducing points as global variables.

Several libraries implement GPR. In scikit-learn, GaussianProcessRegressor takes a kernel object specifying the prior covariance; the prior mean is assumed to be constant and zero (for normalize_y=False) or the training data's mean (for normalize_y=True). In GPflow, a model with a conjugate Gaussian likelihood is built with the GPR class, e.g. m = GPflow.gpr.GPR(X, Y, kern=k), and parameter values can be inspected simply by printing the model object; a GP model is also easily extended with a mean function, for instance a simple 1-D linear mean built as a small neural network in MXNet. MATLAB's fitrgp(Tbl, ResponseVarName) trains a GPR model from a table of sample data. As a small demo, one can build a model from just six source data points and predict a single value; with no noise added to the data, the GPM predictions track the underlying generating function closely.
To understand the Gaussian Process We'll see that, almost in spite of a technical (o ver) analysis of its properties, and sometimes strange vocabulary used to describe its features, as a prior over random functions, ... it is a simple extension to the linear (regression) model. The example compares the predicted responses and prediction intervals of the two fitted GPR models. Left: Always carry your clothes hangers with you. Here f f does not need to be a linear function of x x. Notice that it becomes much more peaked closer to the training point, and shrinks back to being centered around $0$ as we move away from the training point. Without considering $y$ yet, we can visualize the joint distribution of $f(x)$ and $f(x^\star)$ for any value of $x^\star$. By the end of this maths-free, high-level post I aim to have given you an intuitive idea for what a Gaussian process is and what makes them unique among other algorithms. In probability theory and statistics, a Gaussian process is a stochastic process, such that every finite collection of those random variables has a multivariate normal distribution, i.e.
|
|
# Division of Whole Numbers by Fractions
Remember Julie and her game in the Divide a Fraction by a Whole Number Concept? Julie has 40 inches of paper and she wants to divide this piece of paper into one-half-inch strips. How can she do it? Previously we worked on dividing a fraction by a whole number, but in this problem, you are going to work the other way around.

To help Julie figure out how to divide this piece of paper into one-half-inch strips, you will need to divide a whole number by a fraction.

Pay close attention and you will learn all that you need to know in this Concept.
### Guidance
We can also divide a whole number by a fraction. When we divide a whole number by a fraction we are taking a whole and dividing it into new wholes.
$1 \div \frac{1}{2} = \underline{\;\;\;\;\;\;\;}$
Now at first glance, you would think that this answer would be one-half, but it isn’t. We aren’t asking for $\frac{1}{2}$ of one; we are asking for 1 divided by one-half. Let’s look at a picture.
Now we are going to divide one whole by one-half.
Now we have two one-half sections. Our answer is two.
We can test this out by using the rule that we learned in the last Concept.
$1 \div \frac{1}{2} = 1 \times \frac{2}{1} = 1 \times 2 = 2$
Our answer is the same as when we used the pictures.
It’s time for you to try a few of these on your own. Find each quotient.
#### Example A
$4 \div \frac{1}{2} = \underline{\;\;\;\;\;\;\;}$
Solution: $8$
#### Example B
$6 \div \frac{1}{3} = \underline{\;\;\;\;\;\;\;}$
Solution: $18$
#### Example C
$12 \div \frac{1}{4} = \underline{\;\;\;\;\;\;\;}$
Solution: $48$
Now back to Julie and the ribbon. Here is the original problem once again.
Remember Julie and her game? Julie has 40 inches of paper and she wants to divide this piece of paper into one-half-inch strips. How can she do it? In the last Concept, you divided a fraction by a whole number, but in this problem, you are going to work the other way around.

To help Julie figure out how to divide this piece of paper into one-half-inch strips, you will need to divide a whole number by a fraction.

To figure this out, we first can write an equation. Julie wants to divide 40" of paper into one-half-inch strips.
$40 \div \frac{1}{2} = \underline{\;\;\;\;\;\;\;}$
Next, we can change this into a multiplication problem.
$40 \times \frac{2}{1} = 80$ strips of paper.
### Vocabulary
Inverse Operation
opposite operation. Multiplication is the inverse operation of division. Addition is the inverse operation of subtraction.
Reciprocal
the inverse of a fraction. We flip a fraction’s numerator and denominator to write a reciprocal. The product of a fraction and its reciprocal is one.
### Guided Practice
Here is one for you to try on your own.
$25 \div \frac{2}{5} = \underline{\;\;\;\;\;\;\;}$
First, we have to convert this problem to a multiplication problem.
$25 \div \frac{2}{5} = 25 \times \frac{5}{2} = \frac{125}{2}$
Next, we convert this improper fraction to a mixed number.
$\frac{125}{2} = 62 \frac{1}{2}$
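The multiply-by-the-reciprocal rule is easy to check with Python's built-in `fractions` module. This is a quick sketch added for illustration, not part of the lesson:

```python
from fractions import Fraction

def divide_whole_by_fraction(whole, frac):
    # Dividing by a fraction is the same as multiplying by its reciprocal.
    return whole * Fraction(frac.denominator, frac.numerator)

print(divide_whole_by_fraction(40, Fraction(1, 2)))  # 80  (strips of paper)
print(divide_whole_by_fraction(25, Fraction(2, 5)))  # 125/2, i.e. 62 1/2
```

Because `Fraction` keeps exact rational arithmetic, the answers match the hand calculations above with no rounding.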
### Practice
Directions : Divide the following whole numbers and fractions.
1. $8 \div \frac{1}{3} = \underline{\;\;\;\;\;\;\;}$
2. $18 \div \frac{1}{2} = \underline{\;\;\;\;\;\;\;}$
3. $28 \div \frac{1}{4} = \underline{\;\;\;\;\;\;\;}$
4. $14 \div \frac{1}{7} = \underline{\;\;\;\;\;\;\;}$
5. $16 \div \frac{2}{3} = \underline{\;\;\;\;\;\;\;}$
6. $22 \div \frac{1}{2} = \underline{\;\;\;\;\;\;\;}$
7. $24 \div \frac{2}{5} = \underline{\;\;\;\;\;\;\;}$
8. $36 \div \frac{2}{3} = \underline{\;\;\;\;\;\;\;}$
9. $40 \div \frac{3}{10} = \underline{\;\;\;\;\;\;\;}$
10. $60 \div \frac{1}{3} = \underline{\;\;\;\;\;\;\;}$
11. $12 \div \frac{3}{4} = \underline{\;\;\;\;\;\;\;}$
12. $48 \div \frac{2}{12} = \underline{\;\;\;\;\;\;\;}$
13. $18 \div \frac{1}{6} = \underline{\;\;\;\;\;\;\;}$
14. $30 \div \frac{2}{5} = \underline{\;\;\;\;\;\;\;}$
15. $45 \div \frac{5}{9} = \underline{\;\;\;\;\;\;\;}$
|
|
# Condition for Element of Quotient Group of Additive Group of Reals by Integers to be of Finite Order
## Theorem
Let $\struct {\R, +}$ be the additive group of real numbers.
Let $\struct {\Z, +}$ be the additive group of integers.
Let $\R / \Z$ denote the quotient group of $\struct {\R, +}$ by $\struct {\Z, +}$.
Let $x + \Z$ denote the coset of $\Z$ by $x \in \R$.
Then $x + \Z$ is of finite order if and only if $x$ is rational.
## Proof
From Additive Group of Integers is Normal Subgroup of Reals, we have that $\struct {\Z, +}$ is a normal subgroup of $\struct {\R, +}$.
Hence $\R / \Z$ is indeed a quotient group.
By definition of rational number, what is to be proved is:

$x + \Z$ is of finite order

if and only if:

$x = \dfrac m n$

for some $m \in \Z, n \in \Z_{> 0}$.
Let $x + \Z$ be of finite order in $\R / \Z$.
Then:
$\displaystyle \exists n \in \Z_{> 0}: \quad \paren {x + \Z}^n = \Z$ (Definition of Quotient Group: Group Axiom $G \, 2$: Identity)

$\displaystyle \leadstoandfrom \quad n x \in \Z$ (Condition for Power of Element of Quotient Group to be Identity)

$\displaystyle \leadstoandfrom \quad n x = m$ for some $m \in \Z$

$\displaystyle \leadstoandfrom \quad x = \dfrac m n$
$\blacksquare$
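As a concrete illustration (added here for exposition; not part of the original proof), take $x = \dfrac 1 3$. Then:

$\displaystyle 3 \cdot \paren {\dfrac 1 3 + \Z} = 1 + \Z = \Z$

so $\dfrac 1 3 + \Z$ is of order $3$ in $\R / \Z$. In contrast, for $x = \sqrt 2$ there is no $n \in \Z_{> 0}$ with $n \sqrt 2 \in \Z$, so $\sqrt 2 + \Z$ is of infinite order.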
|
|
# Step-by-step Solution
## Step-by-step explanation
Problem to solve:
$\int \ln\left(2x\right)dx$

Solve the integral $\int\ln\left(2x\right)dx$ by applying u-substitution:

1. Let $u = 2x$, so $du = 2dx$; isolating $dx$ gives $dx = \frac{du}{2}$.
2. Substitute $u$ and $dx$ in the integral and take the constant out: $\frac{1}{2}\int\ln\left(u\right)du$.
3. Use the standard result $\int\ln\left(u\right)du = u\ln\left(u\right) - u$ and substitute back $u = 2x$ to obtain the final answer:

$\frac{1}{2}\left(2x\ln\left(2x\right)-2x\right)+C_0$
### Problem Analysis
$\int \ln\left(2x\right)dx$
### Main topic:
Integration by substitution
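A quick numerical sanity check of the antiderivative (a Python sketch added here, independent of the solver): the derivative of $\frac{1}{2}\left(2x\ln(2x)-2x\right)$ should equal the integrand $\ln(2x)$.

```python
import math

def F(x):
    # Antiderivative found above: (1/2) * (2x ln(2x) - 2x)
    return 0.5 * (2 * x * math.log(2 * x) - 2 * x)

# Central finite differences: F'(x) should match ln(2x) at several points.
for x in (0.5, 1.0, 3.0):
    h = 1e-5
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - math.log(2 * x)) < 1e-7
```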
|
|
# How do you convert 3.92 x 10^2 into standard form?
##### 1 Answer
Jul 26, 2015
You get $392$ by moving the point to the right.
#### Explanation:
look at the power of $10$: you move the point to the right by as many places as the (positive) value of the exponent of $10$, which, in this case, is $2$.
You get:
$3.92 \to$ two places to the right:
$39 \textcolor{red}{.} 2$
$392 \textcolor{red}{.} 0 = 392$
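The same conversion can be checked in Python (a trivial sketch; `Decimal` is used to avoid binary floating-point round-off):

```python
from decimal import Decimal

# Moving the decimal point two places to the right = multiplying by 10^2.
value = Decimal("3.92") * 10**2
print(value)  # 392.00
```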
|
|
# The transferability of lipid loci across African, Asian and European cohorts
## Abstract
Most genome-wide association studies are based on samples of European descent. We assess whether the genetic determinants of blood lipids, a major cardiovascular risk factor, are shared across populations. Genetic correlations for lipids between European-ancestry and Asian cohorts are not significantly different from 1. A genetic risk score based on LDL-cholesterol-associated loci has consistent effects on serum levels in samples from the UK, Uganda and Greece (r = 0.23–0.28, p < 1.9 × 10−14). Overall, there is evidence of reproducibility for ~75% of the major lipid loci from European discovery studies, except triglyceride loci in the Ugandan samples (10% of loci). Individual transferable loci are identified using trans-ethnic colocalization. Ten of fourteen loci not transferable to the Ugandan population have pleiotropic associations with BMI in Europeans; none of the transferable loci do. The non-transferable loci might affect lipids by modifying food intake in environments rich in certain nutrients, which suggests a potential role for gene-environment interactions.
## Introduction
Genome-wide association studies (GWAS) have been very successful in identifying genetic variants linked to cardiovascular disease (CVD) and to cardiometabolic traits1. Due to the improving predictive accuracy of these variants, genetic risk prediction could soon be implemented in clinical settings2,3. However, the majority of samples included in these genome-wide association studies were British or US-Americans with European ancestry4,5, which does not accurately represent the ethnically and ancestrally diverse populations of these nations. Moreover, three quarters of CVD-associated deaths occur in low- and middle-income countries where incidences are rising6. Consequently, it is important to determine whether cardiometabolic loci are transferable to other populations.
Previous research assessed the effects of different allele frequencies and linkage disequilibrium (LD) on genetic associations across ancestry groups7. Here we ask the fundamental question whether causal variants for blood lipids, a major cardiovascular risk factor, are shared across populations. Heterogeneity in effects of variants could result from epistasis or gene-environment interactions. However, the causal variants are usually unknown. The differences in LD structure between populations make it difficult to compare the observed associations between ancestry groups because the effect of a variant depends on its correlation with the causal variant(s)7. Differences in allele frequency also impact the power to detect associations in other ancestry groups.
We employ several strategies which account for these effects and do not require knowledge of the specific causal variants to quantify the extent to which genetic variants affecting lipid biomarkers are shared between individuals from Europe/North America, Asia, and Africa. We assess the transferability of individual signals and compare association patterns across the genome using data from the African Partnership for Chronic Disease Research – Uganda (APCDR-Uganda, N = 6407)8, China Kadoorie Biobank (CKB, N = 21,295)9, the Hellenic Isolated Cohorts (HELIC-MANOLIS, N = 1641 and HELIC-Pomak, N = 1945)10,11, and the UK Household Longitudinal Study (UKHLS, N = 9961)12. We also use summary statistics from Biobank Japan (BBJ, N = 162,255)13 and the Global Lipid Genetics Consortium (European ancestry, GLGC2013 N = 188,577, GLGC2017 N = 237,050)14,15. We find evidence for extensive sharing of genetic variants that affect levels of HDL- and LDL-cholesterol and triglycerides between individuals with European ancestry and samples from China, Japan and Greek population isolates. We estimate that about three quarters of major lipid loci are reproducible. Using trans-ethnic colocalization, we show that many established loci for triglycerides do not affect levels of this biomarker in Ugandan samples, however. Ten out of fourteen of the lipid loci that were not transferable to the Ugandan samples had pleiotropic associations with BMI in European ancestry samples. None of the transferable loci were linked to BMI. This could point to a role of environmental factors in modifying which genetic variants affect lipid levels.
## Results
### Reproducibility of established lipid loci
We assessed rates at which established lipid-associated variants were reproducible in other populations. We selected major lipid loci, i.e., those with lipid associations at p < 10−100 based on a score test in the largest European ancestry GWAS. In this context, reproducibility was operationalised as at least one variant from the credible set being associated at p < 10−3 based on a score test with serum lipid levels in the target study. We defined the credible set as variants correlated at r2 > 0.6 with the lead SNP from the European discovery study. Correlation was estimated from the 1000 Genomes Project samples with European ancestry. As a benchmark, we also assessed replication in a European ancestry study, UKHLS. We found evidence of transferability for 76.5% of major HDL loci in this study (Table 1). For the non-European groups rates ranged from 70.6 to 82.4%. Similar reproducibility rates were observed for LDL loci (61.5–76.9%). For major triglycerides (TG) loci, rates ranged from 78.9 to 94.7%, except in APCDR-Uganda. Only 10.5% of the TG loci showed evidence of reproducibility in that sample. Rates for known loci with p≥10−100 in the discovery set were generally below 10%. However, Biobank Japan, the largest study, exhibited markedly higher reproducibility rates for these loci than the other studies with 24.6–32.7%.
### Trans-ethnic genetic correlations
Trans-ethnic genetic correlations were estimated between the three largest studies, China Kadoorie Biobank, Biobank Japan and GLGC2013 (Fig. 1). For GLGC2013 and CKB, correlations were 0.999, 0.778, 0.999 for HDL, LDL, and TG, respectively. For GLGC2013 and BBJ, correlations were 0.999, 0.959, 0.961 for HDL, LDL, and TG, respectively. None of the estimates were significantly different from 1 (Supplementary Table 1). We also compared associations across lipid biomarkers. This consistently showed negative genetic correlations between TG associations and HDL associations, with estimates ranging from rgen = −0.48 to rgen = −0.86.
### Genetic risk scores
In order to assess patterns of sharing of risk alleles for the smaller studies, we constructed genetic risk scores (GRS) based on the established lipid loci from European-ancestry discovery studies and assessed the score associations with serum levels of HDL, LDL and TG in HELIC, APCDR-Uganda, CKB and also UKHLS as a benchmark (Fig. 2). All genetic scores were significantly associated with their respective target lipid in the three European samples with largely consistent correlation coefficients and mutually overlapping 95% confidence intervals (CIs) (Table 2). For HDL, LDL and TG, the estimated correlation coefficients ranged from 0.27 to 0.28, 0.23 to 0.28 and 0.20 to 0.24, respectively. In APCDR-Uganda, the strongest association was observed for LDL (r = 0.28, SE = 0.01, p = 1.9 × 10−107 based on a mixed model score test). The HDL association was attenuated compared to the European ancestry samples (r = 0.12, SE = 0.01, p = 6.1 × 10−22). The effect of the TG score was markedly weaker (r = 0.06, SE = 0.01, p = 4.5 × 10−7). For CKB, the HDL GRS had a correlation of r = 0.18 (SE = 0.02, p = 1.4 × 10−22) and the LDL GRS of r = 0.20 (SE = 0.02, p = 3.2 × 10−26) while the triglyceride GRS showed a stronger attenuation relative to UKHLS with r = 0.14 (SE = 0.02, p = 3.8 × 10−12). We also assessed associations between a given score and levels of each of the other lipid biomarkers (Supplementary Table 2). In line with the trans-ethnic genetic correlation results, we observed inverse associations between the HDL score and TG levels and vice versa in all studies, except APCDR-Uganda.
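As a toy illustration of the GRS construction (simulated data only, not the study's pipeline): a genetic risk score is a weighted sum of risk-allele dosages, and its correlation with the phenotype reflects how much trait variance the score explains. The dosages, weights and noise level below are invented so that the correlation lands near the reported ~0.2–0.3 range.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated dosages (0/1/2 copies of the risk allele) for 2000 people at 50 loci.
n, m = 2000, 50
dosages = rng.binomial(2, 0.3, size=(n, m)).astype(float)

# Per-SNP weights, standing in for effect sizes from a discovery GWAS.
weights = rng.normal(0.0, 0.1, size=m)
grs = dosages @ weights

# Phenotype = genetic component + a much larger environmental component,
# chosen so that the GRS explains ~1/16 of the variance (r ~ 0.25).
phenotype = grs + rng.normal(0.0, grs.std() * np.sqrt(15), size=n)
r = np.corrcoef(grs, phenotype)[0, 1]
```

An attenuated correlation in a target population, as seen for triglycerides in APCDR-Uganda, corresponds in this picture to the weights being poorly matched to the true effects in that population.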
### Trans-ethnic colocalization
Differences in LD structure, MAF and sample size make it difficult to assess the transferability of individual loci. Therefore, we propose a new strategy to assess evidence for shared causal variants between two populations: trans-ethnic colocalization. For this we re-purposed a method that was originally developed for colocalization of GWAS and eQTL results: Joint Likelihood Mapping (JLIM)16. In order to assess its performance for GWAS results from samples with different ancestry, we carried out a simulation study. UK Biobank (UKB) was used as a reference with European ancestry and compared to CKB and APCDR-Uganda. In order to derive an upper boundary for the power, we compared UKB to the ancestry-matched UKHLS set. Phenotypes were simulated. Effect size estimates were varied between 0.10 and 0.25 in order to represent a range similar to that observed for major lipid loci15. In the simulations of distinct causal variants in the non-European and the reference group, the frequencies of false positives were, as expected, close to 0.05 (Supplementary Table 3, Supplementary Fig. 1). The power to detect shared associations for betas of 0.25 was 73.1% for APCDR-Uganda, 93.1% for CKB and 89.0% for UKHLS (Fig. 3). To investigate whether the lower power for APCDR-Uganda could be due to its smaller sample size, we reran the analyses for CKB using a random subset of samples matching the sample size of APCDR-Uganda. For effect sizes <0.2, the results from this analysis revealed decreased detection power relative to the full CKB set but still consistently higher than APCDR-Uganda. This suggests that the power of this trans-ethnic colocalization method decreases somewhat with greater genetic distance between the populations that are compared.
We applied trans-ethnic colocalization for established lipid loci to each study with UKHLS as the reference. There was evidence for significant (pjlim < 0.05 based on a permutation test) colocalization with at least one of the target studies for about half of the major lipid loci (Supplementary Table 4). For several of the major TG loci, such as 8q24.13, strong evidence of transferability to the Asian studies was observed whilst there was no evidence of association in APCDR-Uganda. Figure 4 shows the regional association plots of this locus for each data set as an example to demonstrate that differences in LD and frequencies lead to different association patterns. As colocalization can account for such differences, the result from the analysis comparing the European and Asian studies was nevertheless statistically significant (p < 0.001).
We compared major lipid loci that showed evidence of transferability to APCDR-Uganda with those that did not. The proximal genes of transferable loci were enriched for lipid pathways including lipoprotein metabolism, lipid digestion mobilisation and transport, chylomicron-mediated lipid transport and metabolism of lipids and lipoproteins. The proximal genes of the non-transferable loci were enriched for several other pathways in addition to lipid metabolism, including SHP2 signalling, ABV3 integrin pathway, cytokine signalling in immune system, cytokine-cytokine receptor interaction and transmembrane transport of small molecules (Supplementary Figs. 2 and 3). We also assessed the associations of these loci with BMI in samples with European ancestry using publicly available summary statistics from the GIANT consortium17 (N≥484,680) (Table 3). Ten of the fourteen non-transferable lipid loci had pleiotropic associations with BMI at a Bonferroni-adjusted threshold of p < 0.0024. None of the seven transferable lipid loci were associated with BMI.
## Discussion
Recent efforts to increase global diversity in genetics studies have been vital, enabling this comprehensive cross-population comparison of genetic associations with blood lipids. We provide evidence for extensive sharing of genetic variants that affect levels of HDL- and LDL-cholesterol and triglycerides between individuals with European ancestry and samples from China, Japan and Greek population isolates. We estimated that at least about three quarters of major lipid loci are reproducible. This was highly consistent across all studies except for triglyceride loci in APCDR-Uganda. None of the estimates of trans-ethnic genetic correlations between European, Chinese and Japanese samples were significantly different from 1. All GRS associations in the two Greek isolated populations were highly consistent with those in the UK samples (correlations ranged from 0.27 to 0.28, 0.23 to 0.28, and 0.20 to 0.24, for HDL, LDL and TG, respectively, in these studies). Associations of genetic risk scores for LDL were not attenuated in the Ugandan population compared to the UK samples (r = 0.28, SE = 0.01, p = 1.9 × 10−107 based on a score test).
Previous studies that compared the direction of effect of established loci or assessed associations of genetic risk scores reported differing degrees of consistency18,19,20,21,22,23,24,25,26,27,28,29. However, most of them were conducted in American samples with diverse ancestry, had smaller sample sizes and applied a single-variant look-up or GRS for a limited number of genetic variants. The high degree of consistency for cholesterol biomarkers we observed also contrasts with previously reported trans-ethnic genetic correlations for other traits, such as major depression, rheumatoid arthritis, or type 2 diabetes, which were substantially different from unity30,31. In a recent application using data from individuals with European and Asian ancestry from the UK and USA, the average genetic correlation across multiple traits was 0.55 (SE = 0.14) for GERA and 0.54 (SE = 0.18) for UK Biobank32.
As a limitation of our study, we did not adjust for use of lipid-lowering medication. This could in principle cause a small downward bias for the genetic effect estimates. However, few of the participants of the Ugandan and Chinese studies used lipid-lowering drugs. So this is unlikely to have an effect on the main conclusions of this work.
Differences in LD structure, MAF and sample size make it difficult to assess the transferability of individual loci. We therefore propose a new approach: trans-ethnic colocalization. Simulations showed consistent control of type I error rates, as well as power greater than 80% to detect shared associations between samples with European and Chinese ancestry for SNP effects greater or equal to 0.15. However, power was decreased for comparisons between samples from APCDR-Uganda and UK Biobank (51.5–73.1%). Hence, for the current implementation non-significant colocalization should not be considered as definitive evidence for the absence of shared causal variants when comparing African and European samples. Future work should address this through better modelling of the LD structure. Moreover, for many of the major lipid loci, more than one independent association signal was identified in discovery GWASs15. When these are located in close proximity to each other, they can interfere with the trans-ethnic colocalization analysis because JLIM assumes a single causal variant. Therefore, future work should extend this approach to accommodate loci harbouring multiple causal variants.
Using trans-ethnic colocalization, we showed that many established loci for triglycerides did not affect levels of this biomarker in Ugandan samples. This included loci associated at genome-wide significance in all the other studies, such as GCKR at 2p23.3 or LPL at 8p21.3. The genetic risk score for triglycerides had a weak effect on measured levels in APCDR-Uganda. This is unlikely to be an artefact of unreliable measurement: triglyceride levels had a heritable component in this sample (SNP heritability of 0.25, SE = 0.058) and there were genome-wide significant associations. It is also unlikely that this can be explained purely by differences in LD and MAF because they would affect the analyses of the other two lipid biomarkers as well. Instead these discrepancies could be caused by gene-environment interactions. Ten out of fourteen of the lipid loci that were not transferable to the Ugandan samples had pleiotropic associations with BMI in European ancestry samples while none of the transferable loci were linked to BMI. It is possible that the non-transferable variants affect the amount of food intake with downstream consequences for lipid levels. This might require an environment offering diets that are rich in certain nutrients. While the proximal genes for transferable loci were almost exclusively linked to pathways of lipid metabolism, the ones for non-transferable loci were involved in diverse pathways which is in line with this hypothesis. An alternative explanation could be that the non-transferable loci are involved in metabolising nutrients given a particular diet that is not common in Uganda with downstream consequences for weight.
Overall, this could suggest an important role of environmental factors in modifying which genetic variants affect lipid levels. Studying the causes for discordant loci between groups has promise to further elucidate the biological mechanisms of lipid regulation and other complex traits. Applying genetic risk prediction within clinical settings is receiving increasing attention. Our findings demonstrate that the transferability of genetic associations across different ancestry groups and environmental settings should be assessed comprehensively for medically relevant traits. This is important in order to ensure that health benefits of precision medicine are widely shared within and across populations. Ongoing programs in underrepresented countries33, such as the Human Hereditary and Health in Africa Initiative34, and programs focussing on underrepresented groups, such as PAGE35, All of Us36, or East London Genes and Health37, could provide the basis for this.
## Methods
### Data resources
We included data from the Global Lipid Genetics Consortium (European ancestry samples only, GLGC), The UK Household Longitudinal Study (UKHLS), two isolated populations from the Greece Hellenic Isolated Cohorts (HELIC), a rural West Ugandan population from the African Partnership for Chronic Disease Research (APCDR-Uganda) study, China Kadoorie Biobank (CBK), and Biobank Japan (BBJ). Raw genotype and phenotype data were available for UKHLS, APCDR-Uganda, CKB, HELIC-MANOLIS, and HELIC-Pomak. All participants provided written informed consent and each study obtained approval from ethical review boards. The APCDR-Uganda study was approved by the Uganda Virus Research Institute, Science and Ethics Committee (Ref. GC/127/10/10/25), the Uganda National Council for Science and Technology (Ref. HS 870), and the U.K. National Research Ethics Service, Research Ethics Committee (Ref. 11/H0305/5). The HELIC study was approved by the Harokopio University Bioethics Committee. The UKHLS study has been approved by the University of Essex Ethics Committee and the nurse data collection by the National Research Ethics Service (10/H0604/2). For CKB, central ethics approvals were obtained from Oxford University, and the China National CDC. In addition, approvals were also obtained from institutional research boards at the local CDCs in the 10 regions. BBJ was approved by the ethics committees of RIKEN Center for Integrative Medical Sciences and the Institute of Medical Sciences, the University of Tokyo. Our analyses were based on summary statistics for BBJ and GLGC. The details of genotyping, QC and imputation for all studies are summarised in Supplementary Table 5. Descriptive information about the sample sets is provided in Supplementary Table 6. Details of the quality control, imputation, genome-wide association analyses and ethical approval have also been previously described for GLGC14, BBJ13, HELIC10, APCDR-Uganda8 and UKHLS12. 
Each study confirmed sample ethnicity through PCA which rules out sample overlap between studies.
For CKB, 102,783 participants were genotyped using 2 custom-designed Affymetrix Axiom® arrays including up to 803 K variants, optimised for genome-wide coverage in Chinese populations. Stringent quality control included SNP call rate > 0.98, plate effect P > 10−6, batch effect P > 10−6, HWE P > 10−6 (combined 10df χ2 test from 10 regions), biallelic, MAF difference from 1KGP EAS < 0.2, sample call rate > 0.95, heterozygosity < mean + 3 SD, no chrXY aneuploidy, genetically-determined sex concordant with database, resulting in genotypes for 532,415 variants present on both array versions. Imputation into the 1,000 Genomes Phase 3 reference (EAS MAF > 0) using SHAPEIT version 3 and IMPUTE version 4 yielded genotypes for 10,276,633 variants with MAF > 0.005 and info > 0.3.
In CKB, lipid levels were regressed against eight principal components, region, age, age2, sex, and − for LDL and TG − fasting time2 for the single-SNP association analysis. For CKB, PCs were included in both single-SNP and PRS association analyses to control inflation: recruitment for CKB occurred at 10 different rural and urban locations across China, leading to somewhat increased population structure. The resulting inflation estimates (lambda) after PC adjustment were 1.063, 1.050, and 1.053 for HDL, LDL, and TG, respectively. LDL levels were derived using the Friedewald formula. After rank-based inverse normal transformation, the residuals were used as the outcomes in the genetic association analyses using linear regression. Associations were carried out within a mixed model framework using BOLT-LMM38.
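The rank-based inverse normal transformation used for the lipid residuals orders the values and maps ranks to standard-normal quantiles. A minimal Python sketch (the function name and the Blom rank offset are illustrative assumptions; the studies' exact offset convention is not stated in the text):

```python
from statistics import NormalDist

def inverse_normal_transform(values):
    # Rank values (1-based), map ranks to quantiles using the Blom offset
    # (one common convention), then to standard-normal deviates.
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    ranks = [0] * n
    for rank, idx in enumerate(order, start=1):
        ranks[idx] = rank
    nd = NormalDist()
    return [nd.inv_cdf((r - 0.375) / (n + 0.25)) for r in ranks]

# e.g. lipid residuals after covariate adjustment (toy values)
residuals = [2.1, 5.3, 0.7, 3.9, 4.4]
z = inverse_normal_transform(residuals)
```

The transform preserves the ordering of the input while forcing the marginal distribution to be approximately standard normal.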
The single SNP association analysis for APCDR-Uganda was carried out within a mixed model framework using GEMMA39. Rank-based inverse normal transformation was applied to the lipid biomarkers after adjusting for age and gender. For Uganda, the inflation estimates lambda were 1.000, 1.004, and 1.005 for HDL, LDL, and TG, respectively.
### Established lipid loci
A list of established lipid-associated loci was extracted from the latest Global Lipid Genetics Consortium (GLGC2017) publication15 reporting 444 independent variants in 250 loci associated at genome-wide significance with HDL, LDL, and triglyceride levels. We excluded three LDL variants where the association was not primarily driven by the samples with European ancestry. We assessed evidence for transferability of the loci, applied trans-ethnic colocalization and used them to construct genetic risk scores.
### Reproducibility of established lipid loci
We assessed evidence that these established lipid signals are reproducible in other populations. For loci harbouring multiple signals, we only kept the most strongly associated variant. Out of the 444 loci, this left 170 HDL, 135 LDL and 136 TG variants. We distinguished major loci, i.e. those with p < 10−100 based on a score test in GLGC2017. For each lead SNP we identified all variants in LD (r2 > 0.6) based on the European ancestry 1000 Genomes data. We assessed whether the lead or any of the correlated variants, henceforth called credible set, displayed evidence of association in the target study. If this was not the case, we tested whether there was any other variant with evidence of association within a 50 Kb window. We used a p-value threshold of p < 10−3 based on a score test. This threshold was derived by computing the minimum p-value in 1000 random windows of 50 Kb for each study. Less than 5% of random windows had a minimum p < 10−3 for the non-European ancestry studies. While this p-value threshold might not be appropriate to provide conclusive evidence of reproducibility for individual loci, we used this to test evidence of reproducibility across sets of loci. These analyses excluded the HELIC studies because the smaller sample size makes it difficult to differentiate between lack of power and lack of reproducibility.
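The two-stage decision rule above (credible set first, then the surrounding 50 kb window) can be sketched as a small helper. This is a schematic restatement, not the authors' code; the function name and return labels are invented:

```python
def locus_reproduces(credible_set_pvals, window_pvals, threshold=1e-3):
    """Hypothetical helper: a locus shows evidence of reproducibility if the
    lead SNP or any LD-correlated variant (the credible set) is associated
    in the target study at p < 1e-3, or, failing that, if any variant in
    the surrounding 50 kb window is."""
    if any(p < threshold for p in credible_set_pvals):
        return "credible set"
    if any(p < threshold for p in window_pvals):
        return "window"
    return None

print(locus_reproduces([0.2, 4e-4], []))           # credible-set evidence
print(locus_reproduces([0.2, 0.05], [2e-4, 0.7]))  # window-level evidence only
print(locus_reproduces([0.2, 0.05], [0.3, 0.7]))   # no evidence
```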
### Trans-ethnic genetic correlations
We used the popcorn software30 to estimate trans-ethnic genetic correlations between studies while accounting for differences in LD structure. This provides an indication of the correlation of causal-variant effect sizes across the genome at SNPs common in both populations. Variant LD scores were estimated for ancestry-matched 1000 Genomes v3 data for each study combination. The estimation of LD scores failed for chromosome 6 for some groups. We therefore left out the major histocompatibility complex (MHC) region (positions 28,477,797 to 33,448,354) from chromosome 6 from all comparisons. Variants with imputation accuracy r2 < 0.8 or MAF < 0.01 were excluded. Popcorn did not converge for any of the studies with less than 20,000 samples. Therefore, results are presented for comparisons between GLGC2013, CKB and BBJ. We estimated effect rather than impact correlations. We used a Bonferroni correction to adjust for multiple testing of three traits with each other (p < 0.05/9 = 0.0056).
### Genetic risk scores
As it was not possible to compute trans-ethnic genetic correlations for UKHLS, the HELIC cohorts, and APCDR-Uganda, we created genetic risk scores based on the established lipid loci and assessed their associations with serum lipid levels in these studies. We also tested the associations of the GRS in CKB, as raw data were available for this study as well. Age and sex were adjusted for by regressing them on the lipid biomarker values and using the residuals as outcomes for subsequent analyses. For CKB, we additionally adjusted for 20 PCs and region covariates in order to ensure population structure was accounted for. To ensure values were normally distributed, we applied rank-based inverse normal transformation to all biomarkers and data sets, which involves ordering the values and then assigning them to expected normal values. To make sure the GRS were comparable across studies, we excluded variants that were absent, rare (MAF < 0.01) or badly imputed (r2 < 0.8) in any of the studies, as well as variants whose alleles differed from those in the GLGC. From each correlated pair of SNPs (r2 > 0.1), the variant with the larger discovery p-value was also removed. These filters were applied separately to UKHLS, HELIC, and APCDR-Uganda, and the intersection of the remaining variants was carried forward to generate the GRS. Out of the 444 loci, this left 120, 103, and 101 variants for HDL, LDL and TG, respectively (Supplementary Table 7). We created trait-specific weighted GRS, using the β-regression coefficients from SNP-trait associations in GLGC201715 as weights. All lipid biomarkers and scores were scaled to mean = 0 and standard deviation = 1 for each study, so that the regression coefficients represent estimates of the correlation between scores and lipid biomarkers.
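The weighted GRS described above reduces to a dosage-weighted sum followed by standardisation. A schematic Python version (variable names and toy numbers are invented for illustration, not taken from the studies):

```python
from statistics import mean, stdev

def genetic_risk_score(dosages, weights):
    # Dosage-weighted sum: effect-allele dosage (0-2) times discovery beta.
    assert len(dosages) == len(weights)
    return sum(d * w for d, w in zip(dosages, weights))

def standardise(scores):
    # Scale to mean 0, SD 1 so regression coefficients estimate correlations.
    m, s = mean(scores), stdev(scores)
    return [(x - m) / s for x in scores]

# three individuals, four variants (toy values)
dosages = [[0, 1, 2, 1], [2, 2, 1, 0], [1, 0, 0, 2]]
weights = [0.05, -0.02, 0.08, 0.01]   # betas from the discovery study
raw = [genetic_risk_score(d, weights) for d in dosages]
grs = standardise(raw)
```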
We carried out association analyses between each genetic risk score and each lipid biomarker using a linear mixed model with a random polygenic effect implemented in GEMMA39 in order to account for relatedness and population structure. For CKB, we used BOLT-LMM because it is efficient for large samples. We used a Bonferroni correction to adjust for multiple testing of three GRS with three different lipid biomarker outcomes (p < 0.05/9 = 0.0056 for the score test).
### Trans-ethnic colocalization
Differences in allele frequency, LD structure and sample size make it difficult to assess whether a given GWAS hit is transferable to samples with different ancestries. Therefore, we applied trans-ethnic colocalization. Colocalization methods test whether the associations in two studies can be explained by the same underlying signal even if the specific causal variant is unknown. The joint likelihood mapping (JLIM) statistic was developed by Chun and colleagues to estimate the posterior probabilities for colocalization between GWAS and eQTL signals and compare them to probabilities of distinct causal variants16:
$$\Lambda = \sum\limits_{i \in N_\theta^1(m^\ast)} L_1(i) \times \log \frac{L_1(i)\,L_2(i)}{\max\limits_{j \notin N_\theta^2(i)} L_1(i)\,L_2(j)}$$
(1)
where $i$ indexes SNPs; $m^\ast$ is the lead SNP; $L_1(i)$ is the likelihood of SNP $i$ being causal for trait 1; $L_2(i)$ is the likelihood of SNP $i$ being causal for trait 2; and $N_\theta^1(i)$, $N_\theta^2(i)$ are the sets of SNPs in LD with $i$ at LD threshold $\theta$.
JLIM explicitly accounts for LD structure. Therefore, we assessed whether it is suitable for trans-ethnic colocalization. For the reference sample set, it was possible to use genome-wide summary statistics for the analysis. For this set, LD scores were estimated using a subset of samples from the 1000 Genomes Project v3 that had matching ancestry to that study. The second sample set needed raw genotype data and LD was estimated directly for these samples. JLIM assumes only one causal variant within a region in each study. We therefore used small windows of 50Kb for each known locus to minimise the risk of interference from additional association signals. Distinct causal variants were defined by separation in LD space by r2 ≥ 0.8 from each other. We excluded loci within the MHC region due to its complex LD structure. We used a significance threshold of p < 0.05 given the evidence of association of the established lipid loci in Europeans and the overall evidence for shared causal genetic architecture across populations for most lipid traits from our other analyses. We compared each target study to UKHLS because of the study’s matched ancestry with the discovery study, high level of homogeneity in terms of ancestry, biomarker quantification and study design.
### Simulation
To test the power of trans-ethnic colocalization to detect associations shared between pairs of populations with different ancestry, we ran JLIM on two sets of simulated traits with realistic effect size and environmental noise level. The first set of simulations used the same causal variant in both populations, whereas the second set of simulations used discordant causal variants. Causal variants were selected using the sample function in R, corresponding to a uniform random draw from the entire chromosome. We sampled 10,000 randomly chosen biallelic variants with MAF > 0.05 and simulated random phenotypes in UKHLS, CKB, APCDR-Uganda and 50,000 individuals with British ancestry from UK Biobank as the reference set. For UK Biobank we applied the QC and used the ancestry assignment provided by Bycroft et al.40. UKHLS was included as an ancestry-matched set in order to derive an upper limit estimate of the power. For each data set relatives were excluded. We also sub-sampled CKB to match the number of individuals in APCDR-Uganda in order to test whether the difference in performance was due to ancestry or sample size. We used a simple linear model to generate the phenotype for each individual i:
$$y_i = \beta \ast \left( {x_i - 1} \right) + \eta _i$$
(2)
where $y_i$ is the phenotype value, $\beta$ the effect size, $x_i$ the number of alternate alleles carried at the locus, and $\eta_i \sim N(0, \sigma^2)$ with $\mathrm{Cov}(\eta_i, \eta_j) = 0$ for $i \neq j$, where $\sigma^2$ is the variance of the environmental noise. We tested effect sizes $\beta$ of 0.10, 0.15, 0.20, and 0.25 in order to represent a range similar to that observed for the major lipid loci15. We used $\sigma^2 = 1$ to match the trait variances of the standardised phenotypes.
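Equation (2) can be simulated directly. A small Python sketch (sample size, MAF, and seed are arbitrary choices for illustration, not the paper's settings):

```python
import random

def simulate_phenotypes(genotypes, beta, sigma2=1.0, seed=0):
    # y_i = beta * (x_i - 1) + eta_i, with eta_i ~ N(0, sigma2) independent.
    rng = random.Random(seed)
    sd = sigma2 ** 0.5
    return [beta * (x - 1) + rng.gauss(0.0, sd) for x in genotypes]

# genotypes for 1,000 individuals at a biallelic locus with MAF 0.3
rng = random.Random(42)
genotypes = [(rng.random() < 0.3) + (rng.random() < 0.3) for _ in range(1000)]
y = simulate_phenotypes(genotypes, beta=0.2)
```

With σ² = 1 the phenotypic variance is dominated by noise, matching the small per-locus effects observed for lipids.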
### Comparison of transferable loci with non-transferable loci
We assessed whether there are any systematic differences between loci that are shared between European ancestry samples and APCDR-Uganda and loci that are not. We identified all loci with evidence of reproducibility based on the above definition that also had significant (p < 0.05) colocalization based on a permutation test. We only kept one variant per region. We contrasted them with loci where none of the evidence suggested generalisation: p > 0.05 for colocalization or missing result due to failed convergence, no variant with a lipid association at p < 10−3 in the region and the lead variant from the discovery study was not rare in APCDR-Uganda. We identified the nearest protein coding gene for each locus and carried out pathway analyses for the two sets using FUMA41. We also assessed the associations of the lead variants with body mass index (BMI) in European ancestry samples using results from a meta-analysis between the GIANT consortium and UK Biobank17. We used a Bonferroni adjusted p-value threshold.
### Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
## Code availability
Our code to run trans-ethnic colocalization using JLIM and simulations is available through github: https://github.com/KarolineKuchenbaecker/TEColoc
## References
1. Clarke, S. L. & Assimes, T. L. Genome-wide association studies of coronary artery disease: recent progress and challenges ahead. Curr. Atheroscler. Rep. 20, 47 (2018).
2. Khera, A. V. et al. Genome-wide polygenic scores for common diseases identify individuals with risk equivalent to monogenic mutations. Nat. Genet. 50, 1219–1224 (2018).
3. Kuchenbaecker, K. B. et al. Evaluation of polygenic risk scores for breast and ovarian cancer risk prediction in BRCA1 and BRCA2 mutation carriers. J. Natl. Cancer Inst. 109, https://doi.org/10.1093/jnci/djw302 (2017).
4. MacArthur, J. et al. The new NHGRI-EBI Catalog of published genome-wide association studies (GWAS Catalog). Nucleic Acids Res. 45, D896–D901 (2017).
5. Popejoy, A. B. & Fullerton, S. M. Genomics is failing on diversity. Nat. News 538, 161 (2016).
6. Bovet, P. & Paccaud, F. Cardiovascular disease and the changing face of global public health: a focus on low and middle income countries. Public Health Rev. 33, 397–415 (2011).
7. Martin, A. R. et al. Human demographic history impacts genetic risk prediction across diverse populations. Am. J. Hum. Genet. 100, 635–649 (2017).
8. Heckerman, D. et al. Linear mixed model for heritability estimation that explicitly addresses environmental variation. Proc. Natl Acad. Sci. USA 113, 7377–7382 (2016).
9. Chen, Z. et al. China Kadoorie Biobank of 0.5 million people: survey methods, baseline characteristics and long-term follow-up. Int. J. Epidemiol. 40, 1652–1666 (2011).
10. Gilly, A. et al. Cohort-wide deep whole genome sequencing and the allelic architecture of complex traits. Nat. Commun. 9, 4674 (2018).
11. Gilly, A. et al. Very low depth whole genome sequencing in complex trait association studies. Bioinformatics https://doi.org/10.1093/bioinformatics/bty1032 (2018).
12. Prins, B. P. et al. Genome-wide analysis of health-related biomarkers in the UK Household Longitudinal Study reveals novel associations. Sci. Rep. 7, 11008 (2017).
13. Kanai, M. et al. Genetic analysis of quantitative traits in the Japanese population links cell types to complex human diseases. Nat. Genet. 50, 390–400 (2018).
14. Global Lipids Genetics Consortium. Discovery and refinement of loci associated with lipid levels. Nat. Genet. 45, 1274–1283 (2013).
15. Liu, D. J. et al. Exome-wide association study of plasma lipids in >300,000 individuals. Nat. Genet. 49, 1758 (2017).
16. Chun, S. et al. Limited statistical evidence for shared genetic effects of eQTLs and autoimmune-disease-associated loci in three major immune-cell types. Nat. Genet. 49, 600–605 (2017).
17. Yengo, L. et al. Meta-analysis of genome-wide association studies for height and body mass index in 700000 individuals of European ancestry. Hum. Mol. Genet. 27, 3641–3649 (2018).
18. Musunuru, K. et al. Multi-ethnic analysis of lipid-associated loci: the NHLBI CARe project. PLoS ONE 7, e36473 (2012).
19. Teslovich, T. M. et al. Biological, clinical, and population relevance of 95 loci for blood lipids. Nature 466, 707–713 (2010).
20. Dumitrescu, L. et al. Genetic determinants of lipid traits in diverse populations from the population architecture using genomics and epidemiology (PAGE) study. PLoS Genet. 7, e1002138 (2011).
21. Bryant, E. K. et al. A multiethnic replication study of plasma lipoprotein levels-associated SNPs identified in recent GWAS. PLoS ONE 8, e63469 (2013).
22. Wu, Y. et al. Trans-ethnic fine-mapping of lipid loci identifies population-specific signals and allelic heterogeneity that increases the trait variance explained. PLoS Genet. 9, e1003379 (2013).
23. Wang, Z. et al. Genetic associations with lipoprotein subfraction measures differ by ethnicity in the multi-ethnic study of atherosclerosis (MESA). Hum. Genet. 136, 715–726 (2017).
24. Keebler, M. E. et al. Association of blood lipids with common DNA sequence variants at nineteen genetic loci in the multiethnic United States National Health and Nutrition Examination Survey III. Circ. Cardiovasc. Genet. 2, 238–243 (2009).
25. Lanktree, M. B., Anand, S. S., Yusuf, S. & Hegele, R. A. Replication of genetic associations with plasma lipoprotein traits in a multiethnic sample. J. Lipid Res. 50, 1487–1496 (2009).
26. Chang, M. et al. Racial/ethnic variation in the association of lipid-related genetic variants with blood lipids in the US adult population. Circ. Cardiovasc. Genet. 4, 523–533 (2011).
27. Below, J. E. et al. Meta-analysis of lipid-traits in Hispanics identifies novel loci, population-specific effects, and tissue-specific enrichment of eQTLs. Sci. Rep. 6, 19429 (2016).
28. Johnson, L., Zhu, J., Scott, E. R. & Wineinger, N. E. An examination of the relationship between lipid levels and associated genetic markers across racial/ethnic populations in the multi-ethnic study of atherosclerosis. PLoS ONE 10, e0126361 (2015).
29. Marigorta, U. M. & Navarro, A. High trans-ethnic replicability of GWAS results implies common causal variants. PLoS Genet. 9, e1003566 (2013).
30. Brown, B. C., Ye, C. J., Price, A. L. & Zaitlen, N. Transethnic genetic-correlation estimates from summary statistics. Am. J. Hum. Genet. 99, 76–88 (2016).
31. Wray, N. R. et al. Genome-wide association analyses identify 44 risk variants and refine the genetic architecture of major depression. Nat. Genet. 50, 668–681 (2018).
32. Galinsky, K. J. et al. Estimating cross-population genetic correlations of causal effect sizes. Genet. Epidemiol. https://doi.org/10.1002/gepi.22173 (2018).
33. Hindorff, L. A. et al. Prioritizing diversity in human genomics research. Nat. Rev. Genet. 19, 175–185 (2018).
34. Mulder, N. et al. H3Africa: current perspectives. Pharm. Pers. Med. 11, 59–66 (2018).
35. Carlson, C. S. et al. Generalization and dilution of association results from European GWAS in populations of non-European ancestry: the PAGE Study. PLoS Biol. 11, e1001661 (2013).
36. Program Overview - All of Us | National Institutes of Health. https://allofus.nih.gov/about/about-all-us-research-program (Accessed: 28 Dec 2018).
37. Finer, S. et al. Cohort Profile: East London Genes & Health (ELGH), a community based population genomics and health study in people of British-Bangladeshi and -Pakistani heritage. Preprint at https://doi.org/10.1101/426163 (2018).
38. Loh, P.-R., Kichaev, G., Gazal, S., Schoech, A. P. & Price, A. L. Mixed-model association for biobank-scale datasets. Nat. Genet. 50, 906–908 (2018).
39. Zhou, X. & Stephens, M. Genome-wide efficient mixed-model analysis for association studies. Nat. Genet. 44, 821–824 (2012).
40. Bycroft, C. et al. The UK Biobank resource with deep phenotyping and genomic data. Nature 562, 203–209 (2018).
41. Watanabe, K., Taskesen, E., van Bochoven, A. & Posthuma, D. Functional mapping and annotation of genetic associations with FUMA. Nat. Commun. 8, 1826 (2017).
## Acknowledgements
C.K.B. thanks the participants, project staff, the China National Centre for Disease Control and Prevention and its regional offices. The Chinese National Health Insurance scheme provided electronic linkage to all hospital admission data. We thank the residents of the Pomak villages and of the Mylopotamos villages for taking part. We thank the African Partnership for Chronic Disease Research (APCDR) for providing a network to support this study as well as a repository for deposition of curated data. We also thank all study participants who contributed to this study. UKHLS is led by the Institute for Social and Economic Research at the University of Essex and funded by the Economic and Social Research Council. The survey was conducted by NatCen and the genome-wide scan data were analysed and deposited by the Wellcome Trust Sanger Institute. This work was funded by the Wellcome Trust (WT098051), (212360/Z/18/Z), and the European Research Council (ERC-2011-StG 280559-SEPI). The baseline survey and first resurvey for CKB were supported by a research grant from the Hong Kong Kadoorie Charitable Foundation. Long-term follow-up and the second resurvey were supported by grants from the UK Wellcome Trust (212946/Z/18/Z, 202922/Z/16/Z, 104085/Z/14/Z, 088158/Z/09/Z), National Natural Science Foundation of China (81390540, 81390541, 81390544), and National Key Research and Development Program of China (2016YFC 0900500, 0900501, 0900504, 1303904). DNA extraction and genotyping were supported by grants from GlaxoSmithKline and the UK Medical Research Council (MCPC13049, MCPC14135). M.V.H. is supported by the British Heart Foundation (FS/18/23/33512) and the National Institute for Health Research Oxford Biomedical Research Centre. The British Heart Foundation, UK Medical Research Council, and Cancer Research UK provide core funding to the Clinical Trial Service Unit and Epidemiological Studies Unit, Oxford University (Oxford, UK).
APCDR-Uganda was funded by the Wellcome Trust, The Wellcome Trust Sanger Institute (WT098051), the UK Medical Research Council (G0901213-92157, G0801566, and MR/K013491/1), and the Medical Research Council/Uganda Virus Research Institute Uganda Research Unit on AIDS core funding. The UK Household Longitudinal Study was funded by grants from the Economic & Social Research Council (ES/H029745/1) and the Wellcome Trust (WT098051).
## Author information
Authors
### Contributions
K.K. conceived this project and supervised the work. K.K. and N.T. carried out the genetic correlation and PRS analyses. K.K. carried out all analyses involving trans-ethnic colocalization. K.K. wrote the manuscript. K.K. and A.E. carried out the simulation study. T.R. implemented earlier versions of the genetic risk scores. HELIC: E.Z. and G.D. are the principal investigators, A.G. and L.S. carried out the quality control, M.K. and E.T. were involved in data collection. APCDR-Uganda: M.S., D.G., G.A., J.S., A.K. were involved in collecting and preparing data as well as leading the study. China Kadoorie Biobank: Z.C. and L.L. are the principal investigators; R.G.W. is the genomics lead; R.G.W., I.Y.M., H.D., Y.G. and M.V.H. were involved in data collection; R.G.W. and K.L. carried out quality control and genome-wide association analysis for lipid biomarkers; K.L. carried out the genotype imputation. All authors approved the manuscript.
### Corresponding author
Correspondence to Karoline Kuchenbaecker.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Peer review information Nature Communications thanks Bjarni Vilhjalmsson and other anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Members of the Understanding Society Scientific Group are listed at the end of this paper.
## Rights and permissions
Reprints and Permissions
Kuchenbaecker, K., Telkar, N., Reiker, T. et al. The transferability of lipid loci across African, Asian and European cohorts. Nat Commun 10, 4330 (2019). https://doi.org/10.1038/s41467-019-12026-7
# Bayesian analysis of a probability of success
Let $S \sim \text{Bin}(N, \pi)$ denote a number of successes. Using the non-informative $\text{Beta}(0.5, 0.5)$ prior, the posterior distribution of the probability of success is $$\text{Beta}(0.5 + S, \, 0.5 + N - S)$$
Say that the interest is in the posterior probability that $\pi > 0.60$, i.e.
$$\text{Pr}\big(\text{Beta}(0.5 + S, \, 0.5 + N - S) > 0.60\big)$$
The R function f() below assumes that the true $\pi$ equals $0.8$ and computes the proportion of times, over 10000 simulations, that this posterior probability is $> 0.95$.
f <- function(N, pi=0.8)
{
  # simulate 10000 values of S, compute each posterior Pr(pi > 0.6),
  # and return the proportion of simulations where it exceeds 0.95
  S <- rbinom(n=10000, size=N, prob=pi)
  proba <- 1 - pbeta(0.60, 0.5 + S, 0.5 + N - S)
  mean(proba > 0.95)
}
I would expect $f$ to be an increasing function of $N$ (after all, the posterior distribution becomes increasingly concentrated around $0.80 > 0.60$ as $N$ increases).
However, plotting $f$ against $N$ shows that it is not monotonic: it follows a sawtooth pattern, dropping and recovering as $N$ grows.
Do you have an explanation for this behaviour?
It occurs because $S$ is discrete.
For every value of $N$, there is a smallest value of $S$, call it $S_N$, such that the posterior probability of $\pi>0.6$ is greater than 0.95 (it is then also greater for every $S \ge S_N$). We can find $S_N$ using this R function:
find_cutoff = function(N) {
  s = 0:N
  # smallest S for which the posterior Pr(pi > 0.6) exceeds 0.95
  min(s[which(1 - pbeta(.6, .5 + s, .5 + N - s) > .95)])
}
Now that we have the cutoff, we need the probability that $S \ge S_N$ given $N$, which is just one minus the CDF of a binomial distribution evaluated at $S_N - 1$.
library(plyr)
d = ddply(data.frame(N = 10:40), .(N), function(x) {
  cutoff = find_cutoff(x$N)
  data.frame(cutoff = cutoff, prob = 1 - pbinom(cutoff - 1, x$N, .8))
})
plot(prob ~ N, d, type = 'b')
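For readers without R, the same computation can be sketched in Python using only the standard library. Here the Beta tail probability is estimated by Monte Carlo with `random.betavariate` instead of an exact `pbeta` call, so cutoffs near the 0.95 boundary may occasionally differ from the exact answer; the function names are my own:

```python
import math
import random

random.seed(1)

def posterior_prob_gt(s, n, threshold=0.6, draws=20000):
    # Monte Carlo estimate of Pr(pi > threshold) under Beta(0.5+s, 0.5+n-s)
    a, b = 0.5 + s, 0.5 + n - s
    return sum(random.betavariate(a, b) > threshold for _ in range(draws)) / draws

def find_cutoff(n):
    # smallest S whose posterior Pr(pi > 0.6) exceeds 0.95
    for s in range(n + 1):
        if posterior_prob_gt(s, n) > 0.95:
            return s

def prob_conclusive(n, pi=0.8):
    # exact Pr(S >= S_N | N) under Binomial(n, pi), via math.comb
    s_n = find_cutoff(n)
    tail = sum(math.comb(n, s) * pi**s * (1 - pi)**(n - s)
               for s in range(s_n, n + 1))
    return s_n, tail

for n in (10, 15, 20, 25):
    s_n, p = prob_conclusive(n)
    print(n, s_n, round(p, 3))
```

The sawtooth arises because the integer cutoff $S_N$ jumps at irregular values of $N$, while the binomial tail changes smoothly in between.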
What is Image Retrieval
/ November 10, 2017
An image retrieval system can be defined as one that searches, browses, and retrieves images from massive databases of digital images. Conventional techniques for retrieving images rely on added metadata, namely captioning keywords used to annotate each image. Image search, by contrast, is a dedicated search technique in which the user provides a query image and the system returns images similar to it.

Image Retrieval Architecture

Image retrieval has been adopted by most major search engines, including Google, Yahoo!, and Bing. Most image search engines index images using the surrounding text and the image names, because there are only two main places where text can be attached: the title (name of the image) and the tags, which are proposed and implemented using Web 2.0 concepts. Most of the time, users issue queries in text form when searching for content on any search engine.

Figure 1: General Image Retrieval System

However, this limits the capability of the search engines in retrieving semantically related images for a given query. On…
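A toy content-based version of the query-by-image idea can be sketched with intensity histograms. All names and data here are invented for illustration; real systems use far richer features than a grayscale histogram:

```python
def histogram(pixels, bins=8):
    # 8-bin intensity histogram of grayscale pixels (0-255),
    # normalised so images of different sizes are comparable
    h = [0] * bins
    for p in pixels:
        h[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [c / total for c in h]

def l1_distance(h1, h2):
    return sum(abs(a - b) for a, b in zip(h1, h2))

def retrieve(query, database):
    # return database keys ranked by histogram similarity to the query
    qh = histogram(query)
    return sorted(database, key=lambda k: l1_distance(qh, histogram(database[k])))

db = {
    'dark':  [20] * 100,
    'mid':   [128] * 100,
    'light': [240] * 100,
}
ranked = retrieve([30] * 100, db)   # the query is a dark image
```

A real image retrieval pipeline would replace the histogram with colour, texture, shape, or learned deep features, but the rank-by-distance structure is the same.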
Community Detection : Unsupervised Learning
/ November 9, 2017
Advances in technology and computation have made it possible to collect and mine massive amounts of real-world data. Mining such "big data" allows us to understand the structure and the function of real systems and to find unknown and interesting patterns. This section provides a brief overview of community structure.

Introduction of Community Detection

In today's interconnected world, with the rise of online social networks, graph mining and community detection have become highly topical. Understanding the formation and evolution of communities is a long-standing research topic in sociology, in part because of its fundamental connections with the studies of urban development, criminology, social marketing, and several other areas. With the increasing popularity of online social network services like Facebook, the study of community structures assumes even more significance. Identifying and detecting communities is not only of particular importance but also has immediate applications. For instance, for effective online marketing, such as placing online ads or deploying viral marketing strategies [10], identifying communities in a social network can lead to more accurate targeting and better marketing results. Although online user profiles and other semantic information are helpful for discovering user segments, this kind of information is often at a coarse-grained level…
An Introduction to Computer Vision
/ November 8, 2017
Computer vision is the science and technology of machines that see; seeing, in this case, means that the machine is able to extract from an image the information necessary for solving some task. As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner. As a technological discipline, computer vision seeks to apply its theories and models to the construction of computer vision systems.

Computer Vision: Overview

The human ability to interact with other people is based on the ability of recognition. This innate ability to effortlessly identify and recognize objects, even if distorted or modified, has prompted research into how the human brain processes these images. The skill is quite reliable, despite changes due to viewing conditions, emotional expressions, ageing, added artifacts, or even circumstances that permit seeing only a fraction of the face. Furthermore, humans are able to recognize thousands of individuals during their lifetime. Understanding the human mechanism, in addition to cognitive aspects, would help to build a system for the automatic…
What is Steganography
/ November 3, 2017
Computers and the internet are the major media that connect different parts of the world into one global virtual world in this modern era, letting us exchange large amounts of information over any distance within seconds. But confidential data that needs to be transferred must remain confidential until it reaches its destination.

Steganography: General Overview

Due to advances in ICT (Information and Communication Technology), most information is kept electronically. Consequently, the security of information has become a fundamental issue. Besides cryptography, steganography can be employed to secure information. Steganography is a technique of hiding information in digital media. In contrast to cryptography, the message (or encrypted message) is embedded in a digital host before passing it through the network, so the very existence of the message is unknown. Besides hiding data for confidentiality, this approach of information hiding can be extended to copyright protection for digital media: audio, video, and images. Nowadays, thanks to the stunningly fast advancement of computer and network technology, people can easily send or receive secret information in various forms to or from almost any remote part of the world through the Internet within seconds. In fact, there might be tons of secret…
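The most common textbook embedding scheme hides message bits in the least-significant bits (LSBs) of the host's samples. A toy sketch over raw bytes (function names are mine; real systems also handle capacity checks, image formats, and usually encrypt the payload first):

```python
def embed_lsb(cover, message):
    # hide message bytes, MSB-first, in the least-significant bits of cover
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for message")
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit   # each byte changes by at most 1
    return bytes(stego)

def extract_lsb(stego, n_bytes):
    # read back n_bytes * 8 least-significant bits and repack them
    bits = [b & 1 for b in stego[:n_bytes * 8]]
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

cover = bytes(range(256))        # stand-in for image pixel data
stego = embed_lsb(cover, b"hi")
```

Because each host byte changes by at most one intensity level, the alteration is imperceptible in a typical image, which is exactly why the message's existence stays hidden.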
Apriori Algorithm: Example and Algorithm Description
/ November 2, 2017
With the quick growth of e-commerce applications, vast quantities of data accumulate in months rather than years. Data Mining, also known as Knowledge Discovery in Databases (KDD), analyzes this data to find anomalies, correlations, patterns, and trends and to predict outcomes. The Apriori algorithm is a classical algorithm in data mining. It is used for mining frequent itemsets and the relevant association rules. It is devised to operate on a database containing a large number of transactions, for instance, items bought by customers in a store. It is very important for effective Market Basket Analysis: it helps customers purchase their items with more ease, which increases the sales of the markets. It has also been used in the field of healthcare for the detection of adverse drug reactions, producing association rules that indicate which combinations of medications and patient characteristics lead to them.

Figure 1: Apriori algorithm example application

Apriori Algorithm: Overview

One of the first algorithms to evolve for frequent itemset and association rule mining was Apriori. The two major steps of the Apriori algorithm are the join and prune steps. The join step is used to construct new candidate sets. A candidate itemset is basically an itemset that could be either frequent or…
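The join and prune steps described above can be sketched in a few lines of Python. This is a minimal, unoptimized illustration of the idea (the function and variable names are our own, not from any library): candidates of size k are joined from frequent (k-1)-itemsets, then pruned if any (k-1)-subset is infrequent, and only then counted against the database.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return frequent itemsets (as frozensets) with their support counts."""
    items = {frozenset([i]) for t in transactions for i in t}
    current = {c for c in items
               if sum(c <= t for t in transactions) >= min_support}
    frequent = {c: sum(c <= t for t in transactions) for c in current}
    k = 2
    while current:
        # Join step: union frequent (k-1)-itemsets into k-item candidates.
        candidates = {a | b for a in current for b in current
                      if len(a | b) == k}
        # Prune step: drop candidates having an infrequent (k-1)-subset.
        candidates = {c for c in candidates
                      if all(frozenset(s) in current
                             for s in combinations(c, k - 1))}
        current = set()
        for c in candidates:
            count = sum(c <= t for t in transactions)
            if count >= min_support:
                current.add(c)
                frequent[c] = count
        k += 1
    return frequent

baskets = [{"bread", "milk"}, {"bread", "butter"},
           {"bread", "milk", "butter"}, {"milk", "butter"}]
freq = apriori(baskets, min_support=2)
# every single item and every pair is frequent; the triple appears only once
```

On the toy baskets above, all three pairs reach the support threshold of 2, while {bread, milk, butter} occurs in only one transaction and is never reported.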
FP Growth(FP-tree) Algorithm with Example
/ November 1, 2017
FP-growth Algorithm: Introduction

The FP-growth algorithm is currently one of the fastest approaches to frequent itemset mining. The FP-Growth method adopts a divide-and-conquer strategy as follows: compress the database of frequent items into a frequent-pattern tree while retaining the itemset association information, then divide this compressed database into a set of conditional databases, each associated with one frequent item, and mine each such database. First, a scan of the database derives a list of frequent items in descending order of frequency. Then the FP-tree is constructed as follows. Create the root of the tree and scan the database a second time. The items in each transaction are processed in the order of the frequent-items list, and a branch is created for each transaction. When adding a transaction's branch, the count of each node along a common prefix is incremented by 1. After constructing the tree, mining proceeds as follows. Starting from each frequent length-1 pattern, construct its conditional pattern base, then construct its conditional FP-tree and perform mining recursively on that tree. The support of a candidate (conditional) itemset is counted by traversing the tree. The sum of count values…
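The two database scans and the shared-prefix counting described above can be sketched as follows. This is a minimal sketch of FP-tree construction only (the recursive mining step is omitted), and the `Node`/`build_fp_tree` names are our own, not from any library.

```python
from collections import Counter

class Node:
    """One FP-tree node: an item, a count, and links to its children."""
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count, self.children = 0, {}

def build_fp_tree(transactions, min_support):
    # First scan: count item frequencies and keep only frequent items.
    counts = Counter(i for t in transactions for i in t)
    order = {i: c for i, c in counts.items() if c >= min_support}
    root = Node(None, None)
    # Second scan: insert each transaction with its items in descending
    # frequency order, so transactions sharing a prefix share tree nodes.
    for t in transactions:
        items = sorted((i for i in t if i in order),
                       key=lambda i: (-order[i], i))
        node = root
        for item in items:
            if item not in node.children:
                node.children[item] = Node(item, node)
            node = node.children[item]
            node.count += 1   # common prefixes bump a shared counter
    return root

baskets = [{"a", "b"}, {"a", "b", "c"}, {"a", "c"}]
tree = build_fp_tree(baskets, min_support=2)
# all three transactions start with "a", so its node carries count 3
```

Because every transaction here contains "a", the three branches share a single "a" node with count 3, which is exactly the compression the algorithm exploits.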
What is Distributed Database
/ October 31, 2017
A distributed database is a database in which portions of the database are stored in multiple physical locations and processing is distributed among multiple database nodes. Distributed databases can be homogeneous or heterogeneous. In a homogeneous distributed database system, all the physical locations have the same underlying hardware and run the same operating systems and database applications. In a heterogeneous distributed database, the hardware, operating systems, or database applications may differ at each of the locations.

Distributed Database: Overview

A distributed database is a database distributed between several sites. The reasons for the data distribution may include the inherently distributed nature of the data or performance considerations. In a distributed database, the data at each site is not necessarily an independent entity; it can be related to the data stored on the other sites. A distributed database (DDB) is a collection of multiple, logically interrelated databases distributed over a computer network. A distributed database management system (DDBMS) is the software that manages the DDB and provides an access mechanism that makes this distribution transparent to the user. A distributed database system (DDBS) is the integration of DDB and DDBMS. This integration is achieved through merging the database and…
Association Rule Mining
/ October 30, 2017
Data Mining is the discovery of hidden information found in databases and can be viewed as a step in the knowledge discovery process. Data mining functions include clustering, classification, prediction, and link analysis (associations). One of the most important data mining applications is mining association rules. An association rule has two parts, an antecedent (if) and a consequent (then). An antecedent is an item found in the data. A consequent is an item found in combination with the antecedent.

Association Rule Mining: Overview

Association rules are created by analyzing data for frequent if/then patterns and using the criteria of support and confidence to identify the most important relationships. Support indicates how frequently the items appear in the database. Confidence indicates how often the if/then statement has been found to be true. Association rule mining has been an active research area in data mining, for which many algorithms have been developed. In data mining, association rule learning is a popular and well-accepted method for discovering interesting relations between variables in large databases. Association rules are employed today in many areas including web usage mining, intrusion detection, and bioinformatics. In general, the association rule…
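The support and confidence criteria above can be computed directly. This is a minimal sketch with our own helper names: support is the fraction of transactions containing the whole itemset, and the confidence of a rule "if A then B" is the support of A∪B divided by the support of A.

```python
def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """How often the then-part holds when the if-part does:
    support(antecedent ∪ consequent) / support(antecedent)."""
    return (support(antecedent | consequent, transactions)
            / support(antecedent, transactions))

baskets = [{"bread", "milk"}, {"bread", "butter"},
           {"bread", "milk", "butter"}, {"milk"}]
# Rule: bread -> milk
s = support(frozenset({"bread", "milk"}), baskets)                  # 2/4 = 0.5
c = confidence(frozenset({"bread"}), frozenset({"milk"}), baskets)  # 2/3
```

Here {bread, milk} appears in 2 of 4 baskets (support 0.5), and of the 3 baskets containing bread, 2 also contain milk, giving the rule a confidence of 2/3.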
What is Mobile Computing
/ October 26, 2017
Mobile Computing is a technology that allows transmission of data, voice, and video via a computer or any other wireless-enabled device without having to be connected to a fixed physical link. Mobile computing (or ubiquitous computing, as it is sometimes called) is the use of computers in a non-static environment. This use may range from using notebook-type computers away from one's office or home to using handheld, palmtop, PDA-like devices to perform both simple and complex computing tasks.

Mobile Computing: General

Mobile devices have become an essential part of human life. Apart from making and receiving calls, users can access many functions on their mobile devices, and want everything on them for ease of work. Some people use tablets instead of a laptop or desktop. Despite the increasing usage of mobile computing, exploiting its full potential is difficult due to its inherent problems such as resource scarcity, frequent disconnections, and mobility. Mobile cloud computing can address these problems by executing mobile applications on resource providers external to the mobile device. Mobile phones are set to become the universal interface to online services and cloud computing applications. However, using them for this purpose today is limited to two…
What is Phishing in Web Security
/ October 25, 2017
Phishing is one of the luring techniques used by phishing artists with the intention of exploiting the personal details of unsuspecting users. Phishing is a form of identity theft that occurs when a malicious Web site impersonates a legitimate one in order to acquire sensitive information such as passwords, account details, or credit card numbers. Though there are several anti-phishing tools and techniques for detecting potential phishing attempts in emails and detecting phishing content on websites, phishers come up with new and hybrid techniques to circumvent the available software and techniques. This section provides a detailed study of online phishing and its deployment techniques.

Phishing: General Description

Nowadays, attacks have become a major issue in networks. Attacks intrude into the network infrastructure and collect information that exposes vulnerabilities in the network. Security is needed to protect data from various attacks. Attacks may be either active or passive; phishing is one type of passive attack. Phishing is a continual threat and is especially prevalent on social media such as Facebook and Twitter. Phishing emails contain links to infected websites and direct the user to an infected website where they are asked to enter the…