# 2-D kinematics
1. Oct 7, 2007
### jedjj
2-D kinematics [solved]
1. The problem statement, all variables and given/known data
A ball rolls horizontally off the edge of a tabletop that is 1.90 m high. It strikes the floor at a point 1.57 m horizontally away from the table edge. (Neglect air resistance.)
(a) How long was the ball in the air?
(b) What was its speed at the instant it left the table?
2. The attempt at a solution
I have been trying to get the answer to part a for a little while now. I have been trying to use the quadratic equation to find time knowing that $$0=(-\Delta y)+V_{0y}*t+(g)t^2$$
I have not used the quadratic equation in about 4 years. Am I setting this problem up correctly when I do
$$t=\frac{V_{0y}+\sqrt{V_{0y}^2-(4)(-\Delta y)(g)}}{2*(g)}$$
which should give me
$$t=\frac{{0}+\sqrt{0^2-(4)(-1.9)(g)}}{2*(g)}$$
$$t=\frac{\sqrt{-(4)(-1.9)(g)}}{2*(g)}$$
which comes to be
$$t=\frac{8.63}{19.6}$$
$$t=0.4403$$
I am not coming up with the correct answer according to what is online [edit: still]. What am I doing wrong? Thanks for all the help.
I just went ahead and used kinematic equations and found the answer, but I'm still confused why the quadratic equation didn't work.
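For completeness, a minimal numerical sketch (an illustration assuming g = 9.8 m/s²) of the kinematic route: with v₀y = 0, the drop equation is Δy = ½gt², so the coefficient of t² in the quadratic must be g/2 rather than g:

```python
import math

g = 9.8    # gravitational acceleration, m/s^2
h = 1.90   # table height, m
dx = 1.57  # horizontal distance to impact point, m

# (a) Solve 0 = -h + 0*t + (g/2)*t^2, i.e. t = sqrt(2h/g)
t = math.sqrt(2 * h / g)

# (b) Horizontal speed is constant, so v0 = dx / t
v0 = dx / t

print(round(t, 3), round(v0, 3))  # ≈ 0.623 s, 2.521 m/s
```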
Last edited: Oct 7, 2007
2. Oct 7, 2007
### jedjj
To start I just realized that quadratic equation is not set up properly. I will edit the equation and set it up in the way that I believe is proper.
3. Oct 7, 2007
### jedjj
Am I giving too much information, or not enough? Or is no one seeing the mistake I am making?
# Getting Started
### Emphasis and Bold
You can add emphasis to your content by applying italic or bold to the text. You can use it on text in various elements, including para, code, and parameter.
To italicize your content or make it bold, highlight the text and select the relevant toolbar button in the Edit menu.
Select the Bold or Italicize toolbar option.
Alternatively, you can use the keyboard shortcuts.
For bold:
• On Windows, Ctrl + B
• On Mac, Command ⌘ + B
For italic:
• On Windows, Ctrl + I
• On Mac, Command ⌘ + I
You can also use the Element Context Menu:
1. Highlight the text that you want to be bold or italic.
2. Use the shortcut to display the Element Context Menu:
• Windows: Alt + Enter
• Mac: Option ⌥ + Enter
3. Select the emphasis element.
The emphasis element makes the text appear italicized.
If you want the text to be bold, select the emphasis element (or highlight the text that uses it). In the element attributes section, add the role attribute and set its value to bold.
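For example, in the document markup this combination looks like the following (a minimal sketch; the element and attribute names are those described above):

```xml
<para>
  <emphasis>This text appears italicized.</emphasis>
  <emphasis role="bold">This text appears bold.</emphasis>
</para>
```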
### Important
You can use emphasis on text that already has an inline element, such as code. But it is important that the emphasis element is outside the inline element, for example: <para><emphasis><code>code snippet content goes here</code></emphasis></para>
I. One Mark Questions
Question 1.
The integral of $$\left(\sqrt{x}+\frac{1}{\sqrt{x}}\right)$$ equals _____
(c) $$\frac{2}{3} x^{\frac{3}{2}}+2 x^{\frac{1}{2}}+c$$
Question 2.
(a) $$x^{4}+\frac{1}{x^{3}}-\frac{129}{8}$$
Hint:
Question 3.
$$\int \frac{\sin ^{2} x-\cos ^{2} x}{\sin ^{2} x \cos ^{2} x} d x$$ is equal to _____
(a) tan x + cot x + c
(b) tan x + cosec x + c
(c) -tan x + cot x + c
(d) tan x – sec x + c
(a) tan x + cot x + c
Hint:
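The split that gives (a):
$$\frac{\sin ^{2} x-\cos ^{2} x}{\sin ^{2} x \cos ^{2} x}=\sec ^{2} x-\csc ^{2} x, \quad \int\left(\sec ^{2} x-\csc ^{2} x\right) d x=\tan x+\cot x+c$$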
Question 4.
(d) $$\frac{\pi}{12}$$
Hint:
Question 5.
The value of $$\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\left(x^{3}+x \cos x+\tan ^{5} x\right) d x$$ is ______
(a) 0
(b) 2
(c) π
(d) 1
(a) 0
Hint:
Let f(x) = x³ + x cos x + tan⁵x
f(-x) = -x³ – x cos x – tan⁵x = -f(x)
So f(x) is an odd function, and the integral over a symmetric interval is 0.
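More generally, for any odd function f,
$$\int_{-a}^{a} f(x) d x=\int_{0}^{a}[f(x)+f(-x)] d x=0$$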
Question 6.
Fill in the blanks.
(a) $$\int_{0}^{\frac{\pi}{2}} \cos ^{3} x d x$$ is equal to _____
(b) $$\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \sin ^{31} x d x$$ is equal to _________
(a) $$\frac{2}{3}$$
(b) 0
Hint:
Question 7.
Match the following.
(a) – (iii)
(b) – (iv)
(c) – (v)
(d) – (ii)
(e) – (i)
Question 8.
State True or False.
(a) True
(b) False
(c) True
(d) False
(e) False
(f) True
Question 9.
Which of the following is not equal to ∫ tan x sec²x dx?
(a) $$\frac{1}{2} \tan ^{2} x$$
(b) $$\frac{1}{2} \sec ^{2} x$$
(c) $$\frac{1}{2 \cos ^{2} x}$$
(d) None of these
(d) None of these
Hint:
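All three options differ only by additive constants, since
$$\frac{1}{2} \sec ^{2} x=\frac{1}{2 \cos ^{2} x}=\frac{1}{2} \tan ^{2} x+\frac{1}{2}$$
so each one is a valid antiderivative of tan x sec²x, and the answer is (d).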
Question 10.
∫e^x (cos x – sin x) dx is equal to ______
(a) e^x sin x + c
(b) e^x cos x + c
(c) -e^x cos x + c
(d) -e^x sin x + c
(b) e^x cos x + c
Hint:
Let f(x) = cos x
f'(x) = -sin x
∫e^x [f(x) + f'(x)] dx = e^x f(x) + c
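This rule follows from the product rule:
$$\frac{d}{d x}\left(e^{x} f(x)\right)=e^{x}\left[f(x)+f^{\prime}(x)\right]$$
Here cos x – sin x has the form f(x) + f'(x) with f(x) = cos x, giving e^x cos x + c.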
Question 11.
$$\int \frac{1-\cos 2 x}{1+\cos 2 x} d x$$ is _____
(a) tan x – x + c
(b) x + tan x + c
(c) x – tan x + c
(d) -x – cot x + c
(a) tan x – x + c
Hint:
Question 12.
$$\int_{0}^{\frac{\pi}{2}} \cos x e^{\sin x} d x$$ is equal to ______
(a) e – 1
(b) 1 – e
(c) $$e^{\frac{\pi}{2}}-1$$
(d) $$1-e^{\frac{\pi}{2}}$$
(a) e – 1
Hint:
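A quick numerical sketch (midpoint rule; the substitution u = sin x gives ∫₀¹ eᵘ du = e − 1):

```python
import math

# Midpoint rule for ∫_0^{π/2} cos(x) e^{sin(x)} dx
n = 200_000
a, b = 0.0, math.pi / 2
h = (b - a) / n
s = h * sum(math.cos(a + (i + 0.5) * h) * math.exp(math.sin(a + (i + 0.5) * h))
            for i in range(n))

print(round(s, 6), round(math.e - 1, 6))  # both ≈ 1.718282
```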
Question 13.
Which of the following is an even function?
(a) sin x
(b) e^x – e^-x
(c) x cos x
(d) cos x
(d) cos x
Question 14.
Which of the following is neither odd nor even function?
(a) x sin x
(b) x2
(c) e^-x
(d) x cos x
(c) e^-x
Question 15.
∫sec²(7 – 4x) dx is equal to ______
(a) tan (7 – 4x)
(b) -tan (7 – 4x)
(c) –$$\frac{1}{4}$$ tan (7 – 4x)
(d) $$\frac{1}{4}$$ tan (7 – 4x)
(c) – $$\frac{1}{4}$$ tan (7 – 4x)
II. 2 Mark Questions.
Question 1.
Solution:
Question 2.
Question 3.
$$\int\left(e^{x}+e^{-x}\right)^{2} d x$$
Solution:
Question 4.
Find $$\int x^{5} \sqrt{3+5 x^{6}} d x$$
Solution:
Question 5.
$$\int \frac{(x+1)(x+\log x)^{2}}{x} d x$$
Solution:
Question 6.
$$\int \frac{e^{2 x}-1}{e^{2 x}+1} d x$$
Solution:
Dividing the numerator and denominator by e^x, we get $$\int \frac{e^{x}-e^{-x}}{e^{x}+e^{-x}} d x$$
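The numerator is now the derivative of the denominator, so
$$\int \frac{e^{x}-e^{-x}}{e^{x}+e^{-x}} d x=\log \left(e^{x}+e^{-x}\right)+c$$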
Question 7.
$$\int \frac{1}{x+\sqrt{x}} d x$$
Solution:
III. 3 and 5 Mark Questions.
Question 1.
Find $$\int \frac{e^{x}(x+1)}{(x+3)^{3}} d x$$
Solution:
Question 2.
$$\int \frac{1}{2 x^{2}-x-1} d x$$
Solution:
Question 3.
$$\int \frac{1}{1-3 \sin ^{2} x} d x$$
Solution:
Divide the numerator and denominator by cos²x
Question 4.
$$\int \frac{d x}{e^{x}-e^{-x}}$$
Solution:
Question 5.
Evaluate $$\int_{-1}^{2}(7 x-5) d x$$ as the limit of a sum.
Solution:
Question 6.
Evaluate $$\int_{1}^{2}\left(x^{2}-1\right) d x$$ as the limit of a sum.
Solution:
Question 7.
Evaluate $$\int_{1}^{2} \frac{1}{x\left(x^{4}+1\right)} d x$$
Solution:
Question 8.
Evaluate $$\int_{\frac{\pi}{6}}^{\frac{\pi}{3}} \frac{d x}{1+\sqrt{\tan x}}$$
Solution:
Question 9.
Evaluate $$\int_{0}^{1} x(1-x)^{5} d x$$
Solution:
Question 10.
Evaluate $$\int \frac{2 x+3}{\sqrt{x^{2}+x+1}} d x$$
Solution:
Question 11.
∫(x² + 1) log x dx
Solution:
Question 12.
$$\int \sqrt{x^{2}+4 x-5} d x$$
Solution:
Question 13.
$$\int_{0}^{\frac{\pi}{4}}\left(2 \sec ^{2} x+x^{3}+2\right) d x$$
Solution:
Question 14.
Evaluate $$\int_{0}^{1} \frac{2 x}{\left(x^{2}+1\right)\left(x^{2}+2\right)} d x$$
Solution:
Let x2 = t, then 2x dx = dt
when x = 0, t = 0 and x = 1, t = 1
so integral becomes, $$\int_{0}^{1} \frac{d t}{(t+1)(t+2)}$$
We use partial fractions to proceed further
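The partial-fraction step can be completed as follows:
$$\frac{1}{(t+1)(t+2)}=\frac{1}{t+1}-\frac{1}{t+2}, \quad \int_{0}^{1} \frac{d t}{(t+1)(t+2)}=\left[\log \frac{t+1}{t+2}\right]_{0}^{1}=\log \frac{4}{3}$$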
Question 15.
$$\int_{0}^{\frac{\pi}{2}} \frac{\sin x-\cos x}{1+\sin x \cos x} d x$$
Solution:
# A non-linear spring obeys the modified Hooke's Law, F = -kxe^x, where F is the force in Newtons,...
## Question:
A non-linear spring obeys the modified Hooke's Law, {eq}F = -kxe^x {/eq}, where F is the force in Newtons, k is 2 N/m, and x is the horizontal displacement of the right end of the spring in meters, measured from the origin.
What is the work done in stretching this spring from the origin to x = 2?
## Hooke's Law
Hooke's law defines the relationship between the force applied to a spring and the resulting deformation through the spring constant. It is mathematically expressed as {eq}\text{F=kx} {/eq}
where F is the force in N
x is the deformation in m
Given Data:
• Stiffness of the spring is {eq}k=2\ \text{N/m} {/eq}
The work done in stretching the spring is expressed by the relation
{eq}\begin{align} W&=\int F\, dx\\[0.3 cm] &= \int_{0}^{2}-kxe^xdx\\ \end{align} {/eq}
We have to solve the integral using integration by parts.
Integration by parts formula {eq}\displaystyle \int u \cdot vdx=u\int vdx-\int u'\left ( \int vdx \right )dx {/eq}
According to the question {eq}u=x, \ v=e^{x} {/eq}
{eq}\begin{align} &=-k\int x e^x \ dx\\[0.3 cm] &=-k \left [x\int e^xdx-\int \left (\frac{\mathrm{d} }{\mathrm{d} x}(x)\int e^xdx \right )dx \right ] & \left [\displaystyle \int u \cdot vdx=u\int vdx-\int u'\left ( \int vdx \right )dx \right ]\\[0.3cm] &=-k\left [xe^x-\int (1)e^xdx \ \right ]& \left [ \frac{\mathrm{d} }{\mathrm{d} x}(x)=1, \int e^xdx=e^x \right ]\\[0.3cm] &=-k\left [xe^x-e^x \right ]_{0}^{2}&\left [\text{Spring is stretched from the origin (0) to 2 m} \right ]\\[0.3cm] &=-k\left [\left (2e^2-e^2 \right )-\left (0e^0-e^0 \right )\right ]\\[0.3 cm] &=-k\left [ e^2-(0-1) \right ]\\[0.3 cm] &=-2\left [ 8.4\right ]\\[0.3 cm] &=\boxed{\color{blue}{-16.8\ \text{N}\cdot\text{m}}} \end{align} {/eq}
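A minimal numerical sketch to cross-check the boxed value (using the closed form ∫ x eˣ dx = (x − 1)eˣ):

```python
import math

k = 2.0  # spring constant, N/m

# Closed form: ∫ x e^x dx = (x - 1) e^x, so W = -k[(x - 1)e^x] from 0 to 2
W = -k * (((2 - 1) * math.exp(2)) - ((0 - 1) * math.exp(0)))

# Midpoint-rule check of W = ∫_0^2 -k x e^x dx
n = 100_000
h = 2.0 / n
W_num = h * sum(-k * (i + 0.5) * h * math.exp((i + 0.5) * h) for i in range(n))

print(round(W, 2), round(W_num, 2))  # both ≈ -16.78
```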
One large number in table with decimals
I am using siunitx to align numbers in a column. The problem is that I intentionally have one row at the beginning with a large number, formatted with ',' as a thousands separator. See the following example.
What I would like to achieve:
1) Align the decimals to the decimal point '.'. This applies to rows 2, 3, and 4.
2) Center the big numbers, i.e. rows 1 and 5 in this example. When I tested different options, the large numbers were always either too far to the left or too far to the right. If possible, they should simply be centered; I believe that makes the most sense in terms of formatting.
Is it possible?
\documentclass{article}
\usepackage{siunitx}
\sisetup{%
input-ignore={,},
input-decimal-markers = {.},
table-format = 2.2,
table-number-alignment = center,
}%
\begin{document}
\begin{tabular}{SS}
row & alignToDec \\
row1: & 19,000,000.0 \\
row2: & 12.38 \\
row3: & 1.97 \\
row4: & 91.01 \\
row5: & 87,000,000.0 \\
\end{tabular}
\end{document}
• Try adding a pair of braces around the large numbers. – Bernard Mar 8 '19 at 21:51
• @Bernard what do you mean precisely? I could add braces (could it be anything else but braces?), but I am not sure of the exact consequences of doing that. If I added them only on the left, what would happen? Please elaborate a bit. – ghx Mar 8 '19 at 21:55
• Apart from your question: Do not use S type columns for columns that only contain text (first column in your example code) and as already suggested by Bernard, enclose text in S type columns (first row in your code) as well as entries that should be centered in a pair of {}. – leandriis Mar 8 '19 at 21:55
• Thanks that works! Does that just generally mean those numbers are ignored? – ghx Mar 8 '19 at 21:59
• Yes: cell content between braces in an S column is considered by siunitx as non-numeric content and centred. – Bernard Mar 8 '19 at 22:42
For some reason your code compiles, but:
1) You should not (at least you have no reason to) use an S column type for columns that don't contain numbers.
2) If the content of a cell inside an S column is text, place it inside curly braces. (This way the content will be centered by default, because siunitx treats it as text.) [See your "broken" alignToDec heading in the second column when it is outside braces, then place it inside braces to see the difference.]
3) Use the same method as above for your big numbers.
4) You could specify table-format as an optional argument to your S columns. (siunitx behaves better like this in general.)
\documentclass{article}
\usepackage{siunitx}
\sisetup{%
input-ignore={,},
input-decimal-markers = {.},
table-number-alignment = center,
}%
\begin{document}
\begin{tabular}{cS[table-format=2.2]}
row & {alignToDec} \\
row1: & {19,000,000.0} \\
row2: & 12.38 \\
row3: & 1.97 \\
row4: & 91.01 \\
row5: & {87,000,000.0} \\
\end{tabular}
\end{document}
PS: Remove the luatex tag.
• thanks a lot! My code always compiles ;) – ghx Mar 8 '19 at 22:02
• Welcome... For me, it rarely compiles on the first try when I use siunitx... (but usually my tables are somehow... somehow)... Anyway, I was trying to say that siunitx usually doesn't like text where it expects numbers and doesn't compile without errors. Happy TeXing – koleygr Mar 8 '19 at 22:05
• No offense taken. Have a great day and thx for your reply. – ghx Mar 8 '19 at 22:06
# Clear variables that appear in multiple contexts?
Posted 3 months ago
How do I rid myself of these annoying messages?

area::shdw: Symbol area appears in multiple contexts {IPOPTLink, Global}; definitions in context IPOPTLink may shadow or be shadowed by other definitions.

I have used Clear["Global`*"], but I still get these messages; if I run Clear["Global`*"] again, then the message will not appear. If Clear["Global`*"] works, why does it have to be used in every cell?

Sure, I could use just Clear or ClearAll, but that means I have to remember every variable I have used, and I just want to clear them all. Clear["Global`*"] does not work, so what would work?

(For some reason this format drops characters, so I originally wrote the command with spaces: Clear [ " Global ` * " ].)
Posted 3 months ago
You can try Remove, which removes the symbol from the global context. Clear only clears the values, if there are any, but the symbol itself lingers on and triggers messages.
Thank you Gianluca, that worked. But, like Clear["Global`*"], Remove["Global`*"] sometimes worked only after the second time I used the command. It was as if Mathematica did not believe me the first time, something like a safety feature: I had to re-evaluate the cell a second time before it worked, or enter Clear or Remove a second time in a new cell before it worked.
WIKISKY.ORG
# NGC 2950
### Related articles
**Response of the integrals in the Tremaine-Weinberg method to multiple pattern speeds: a counter-rotating inner bar in NGC 2950?**
When integrals in the standard Tremaine-Weinberg method are evaluated for the case of a realistic model of a doubly barred galaxy, their modifications introduced by the second rotating pattern are in accord with what can be derived from a simple extension of that method, based on separation of the tracer's density. This extension yields a qualitative argument that discriminates between prograde and retrograde inner bars. However, the estimate of the value of the inner bar's pattern speed requires further assumptions. When this extension of the Tremaine-Weinberg method is applied to the recent observation of the doubly barred galaxy NGC 2950, it indicates that the inner bar there is counter-rotating, possibly with a pattern speed of -140 ± 50 km s^-1 arcsec^-1. The occurrence of counter-rotating inner bars can constrain theories of galaxy formation.

**An Atlas of Hα and R Images and Radial Profiles of 29 Bright Isolated Spiral Galaxies**
Narrowband Hα+[N II] and broadband R images and surface photometry are presented for a sample of 29 bright (M_B < -18 mag) isolated S0-Scd galaxies within a distance of 48 Mpc. These galaxies are among the most isolated nearby spiral galaxies of their Hubble classifications as determined from the Nearby Galaxies Catalog.

**Stellar Populations in Nearby Lenticular Galaxies**
We have obtained two-dimensional spectral data for a sample of 58 nearby S0 galaxies with the Multi-Pupil Fiber/Field Spectrograph of the 6 m telescope of the Special Astrophysical Observatory of the Russian Academy of Sciences. The Lick indices Hβ, Mg b, and are calculated separately for the nuclei and for the bulges taken as the rings between R = 4" and 7", and the luminosity-weighted ages, metallicities, and Mg/Fe ratios of the stellar populations are estimated by comparing the data to single stellar population (SSP) models. Four types of galaxy environments are considered: clusters, centers of groups, other places in groups, and the field. The nuclei are found to be on average slightly younger than the bulges in any type of environment, and the bulges of S0 galaxies in sparse environments are younger than those in dense environments. The effect can be partly attributed to the well-known age correlation with the stellar velocity dispersion in early-type galaxies (in our sample the galaxies in sparse environments are on average less massive than those in dense environments), but for the most massive S0 galaxies, with σ* = 170-220 km s^-1, the age dependence on the environment is still significant at the confidence level of 1.5 σ. Based on observations collected with the 6 m telescope (BTA) at the Special Astrophysical Observatory (SAO) of the Russian Academy of Sciences (RAS).

**How large are the bars in barred galaxies?**
I present a study of the sizes (semimajor axes) of bars in disc galaxies, combining a detailed R-band study of 65 S0-Sb galaxies with the B-band measurements of 70 Sb-Sd galaxies from Martin (1995). As has been noted before with smaller samples, bars in early-type (S0-Sb) galaxies are clearly larger than bars in late-type (Sc-Sd) galaxies; this is true both for relative sizes (bar length as fraction of isophotal radius R25 or exponential disc scalelength h) and absolute sizes (kpc). S0-Sab bars extend to ~1-10 kpc (mean ~3.3 kpc), ~0.2-0.8 R25 (mean ~0.38 R25) and ~0.5-2.5h (mean ~1.4h). Late-type bars extend to only ~0.5-3.5 kpc, ~0.05-0.35 R25 and 0.2-1.5h; their mean sizes are ~1.5 kpc, ~0.14 R25 and ~0.6h. Sb galaxies resemble earlier-type galaxies in terms of bar size relative to h; their smaller R25-relative sizes may be a side effect of higher star formation, which increases R25 but not h. Sbc galaxies form a transition between the early- and late-type regimes. For S0-Sbc galaxies, bar size correlates well with disc size (both R25 and h); these correlations are stronger than the known correlation with M_B. All correlations appear to be weaker or absent for late-type galaxies; in particular, there seems to be no correlation between bar size and either h or M_B for Sc-Sd galaxies. Because bar size scales with disc size and galaxy magnitude for most Hubble types, studies of bar evolution with redshift should select samples with similar distributions of disc size or magnitude (extrapolated to present-day values); otherwise, bar frequencies and sizes could be mis-estimated. Because early-type galaxies tend to have larger bars, resolution-limited studies will preferentially find bars in early-type galaxies (assuming no significant differential evolution in bar sizes). I show that the bars detected in Hubble Space Telescope (HST) near-infrared (IR) images at z ~ 1 by Sheth et al. have absolute sizes consistent with those in bright, nearby S0-Sb galaxies. I also compare the sizes of real bars with those produced in simulations and discuss some possible implications for scenarios of secular evolution along the Hubble sequence.

**The stellar populations of low-luminosity active galactic nuclei - III. Spatially resolved spectral properties**
In a recently completed survey of the stellar population properties of low-ionization nuclear emission-line regions (LINERs) and LINER/HII transition objects (TOs), we have identified a numerous class of galactic nuclei which stand out because of their conspicuous 10^8-10^9 yr populations, traced by high-order Balmer absorption lines and other stellar indices. These objects are called 'young-TOs', because they all have TO-like emission-line ratios. In this paper we extend this previous work, which concentrated on the nuclear properties, by investigating the radial variations of spectral properties in low-luminosity active galactic nuclei (LLAGNs). Our analysis is based on high signal-to-noise ratio (S/N) long-slit spectra in the 3500-5500 Å interval for a sample of 47 galaxies. The data probe distances of typically up to 850 pc from the nucleus with a resolution of ~100 pc (~1 arcsec) and S/N ~ 30. Stellar population gradients are mapped by the radial profiles of absorption-line equivalent widths and continuum colours along the slit. These variations are further analysed by means of a decomposition of each spectrum in terms of template galaxies representative of very young (≤10^7 yr), intermediate age (10^8-10^9 yr) and old (10^10 yr) stellar populations. This study reveals that young-TOs also differ from old-TOs and old-LINERs in terms of the spatial distributions of their stellar populations and dust. Specifically, our main findings are as follows. (i) Significant stellar population gradients are found almost exclusively in young-TOs. (ii) The intermediate age population of young-TOs, although heavily concentrated in the nucleus, reaches distances of up to a few hundred pc from the nucleus. Nevertheless, the half width at half-maximum of its brightness profile is more typically 100 pc or less. (iii) Objects with predominantly old stellar populations present spatially homogeneous spectra, be they LINERs or TOs. (iv) Young-TOs have much more dust in their central regions than other LLAGNs. (v) The B-band luminosities of the central <~1 Gyr population in young-TOs are within an order of magnitude of M_B = -15, implying masses of the order of ~10^7-10^8 M_solar. This population was 10-100 times more luminous in its formation epoch, at which time young massive stars would have completely outshone any active nucleus, unless the AGN too was brighter in the past.

**Radio sources in low-luminosity active galactic nuclei. IV. Radio luminosity function, importance of jet power, and radio properties of the complete Palomar sample**
We present the completed results of a high resolution radio imaging survey of all (~200) low-luminosity active galactic nuclei (LLAGNs) and AGNs in the Palomar Spectroscopic Sample of all (~488) bright northern galaxies. The high incidences of pc-scale radio nuclei, with implied brightness temperatures ≳10^7 K, and sub-parsec jets argue for accreting black holes in ≳50% of all LINERs and low-luminosity Seyferts; there is no evidence against all LLAGNs being mini-AGNs. The detected parsec-scale radio nuclei are preferentially found in massive ellipticals and in type 1 nuclei (i.e. nuclei with broad Hα emission). The radio luminosity function (RLF) of Palomar Sample LLAGNs and AGNs extends three orders of magnitude below, and is continuous with, that of "classical" AGNs. We find marginal evidence for a low-luminosity turnover in the RLF; nevertheless LLAGNs are responsible for a significant fraction of present day mass accretion. Adopting a model of a relativistic jet from Falcke & Biermann, we show that the accretion power output in LLAGNs is dominated by the kinetic power in the observed jets rather than the radiated bolometric luminosity. The Palomar LLAGNs and AGNs follow the same scaling between jet kinetic power and narrow line region (NLR) luminosity as the parsec to kilo-parsec jets in powerful radio galaxies. Eddington ratios l_Edd (=L_Emitted/L_Eddington) of ≤10^-1-10^-5 are implied in jet models of the radio emission. We find evidence that, in analogy to Galactic black hole candidates, LINERs are in a "low/hard" state (gas poor nuclei, low Eddington ratio, ability to launch collimated jets) while low-luminosity Seyferts are in a "high" state (gas rich nuclei, higher Eddington ratio, less likely to launch collimated jets). In addition to dominating the radiated bolometric luminosity of the nucleus, the radio jets are energetically more significant than supernovae in the host galaxies, and are potentially able to deposit sufficient energy into the innermost parsecs to significantly slow the gas supply to the accretion disk.

**Photometric properties and origin of bulges in SB0 galaxies**
We have derived the photometric parameters for the structural components of a sample of fourteen SB0 galaxies by applying a parametric photometric decomposition to their observed I-band surface brightness distribution. We find that SB0 bulges are similar to bulges of the early-type unbarred spirals, i.e. they have nearly exponential surface brightness profiles (<n> = 1.48±0.16) and their effective radii are strongly coupled to the scale lengths of their surrounding discs (<r_e/h> = 0.20±0.01). The photometric analysis alone does not allow us to differentiate SB0 bulges from unbarred S0 ones. However, three sample bulges have disc properties typical of pseudobulges. The bulges of NGC 1308 and NGC 4340 rotate faster than bulges of unbarred galaxies and models of isotropic oblate spheroids with equal ellipticity. The bulge of IC 874 has a velocity dispersion lower than expected from the Faber-Jackson correlation and the fundamental plane of the elliptical galaxies and S0 bulges. The remaining sample bulges are classical bulges, and are kinematically similar to lower-luminosity ellipticals. In particular, they follow the Faber-Jackson correlation, lie on the fundamental plane, and those for which stellar kinematics are available rotate as fast as the bulges of unbarred galaxies.

**Fast bars in SB0 galaxies**
We measured the bar pattern speed in a sample of 7 SB0 galaxies using the Tremaine-Weinberg method. This represents the largest sample of galaxies for which the bar pattern speed has been measured this way. All the observed bars are as rapidly rotating as they can be. We compared this result with recent high-resolution N-body simulations of bars in cosmologically-motivated dark matter halos, and conclude that these bars are not located inside centrally concentrated halos.

**Secular Evolution and the Formation of Pseudobulges in Disk Galaxies**
The Universe is in transition. At early times, galactic evolution was dominated by hierarchical clustering and merging, processes that are violent and rapid. In the far future, evolution will mostly be secular: the slow rearrangement of energy and mass that results from interactions involving collective phenomena such as bars, oval disks, spiral structure, and triaxial dark halos. Both processes are important now. This review discusses internal secular evolution, concentrating on one important consequence, the buildup of dense central components in disk galaxies that look like classical, merger-built bulges but that were made slowly out of disk gas. We call these pseudobulges.

**The Stellar Populations of Low-Luminosity Active Galactic Nuclei. II. Space Telescope Imaging Spectrograph Observations**
We present a study of the stellar populations of low-luminosity active galactic nuclei (LLAGNs). Our goal is to search for spectroscopic signatures of young and intermediate-age stars and to investigate their relationship with the ionization mechanism in LLAGNs. The method used is based on the stellar population synthesis of the optical continuum of the innermost (20-100 pc) regions in these galaxies. For this purpose, we have collected high spatial resolution optical (2900-5700 Å) STIS spectra of 28 nearby LLAGNs that are available in the Hubble Space Telescope archive. The analysis of these data is compared with a similar analysis also presented here for 51 ground-based spectra of LLAGNs. Our main findings are as follows: (1) No features due to Wolf-Rayet stars were convincingly detected in the STIS spectra. (2) Young stars contribute very little to the optical continuum in the ground-based aperture. However, the fraction of light provided by these stars is higher than 10% in most of the weak-[O I] ([O I]/Hα ≤ 0.25) LLAGN STIS spectra. (3) Intermediate-age stars contribute significantly to the optical continuum of these nuclei. This population is more frequent in objects with weak than with strong [O I]. Weak-[O I] LLAGNs that have young stars stand out for their intermediate-age population. (4) Most of the strong-[O I] LLAGNs have predominantly old stellar population. A few of these objects also show a featureless continuum that contributes significantly to the optical continuum. These results suggest that young and intermediate-age stars do not play a significant role in the ionization of LLAGNs with strong [O I]. However, the ionization in weak-[O I] LLAGNs with young and/or intermediate-age populations could be due to stellar processes. A comparison of the properties of these objects with Seyfert 2 galaxies that harbor a nuclear starburst suggests that weak-[O I] LLAGNs are the lower luminosity counterparts of the Seyfert 2 composite nuclei. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. Based on observations made with the Nordic Optical Telescope (NOT), operated on the island of La Palma jointly by Denmark, Finland, Iceland, Norway, and Sweden, in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias.

**The Stellar Populations of Low-Luminosity Active Galactic Nuclei. I. Ground-based Observations**
We present a spectroscopic study of the stellar populations of low-luminosity active galactic nuclei (LLAGNs). Our main goal is to determine whether the stars that live in the innermost (100 pc scale) regions of these galaxies are in some way related to the emission-line properties, which would imply a link between the stellar population and the ionization mechanism. High signal-to-noise ratio, ground-based long-slit spectra in the 3500-5500 Å interval were collected for 60 galaxies: 51 LINERs and LINER/H II transition objects, two starburst galaxies, and seven nonactive galaxies. In this paper, the first of a series, we (1) describe the sample; (2) present the nuclear spectra; (3) characterize the stellar populations of LLAGNs by means of an empirical comparison with normal galaxies; (4) measure a set of spectral indices, including several absorption-line equivalent widths and colors indicative of stellar populations; and (5) correlate the stellar indices with emission-line ratios that may distinguish between possible excitation sources for the gas. Our main findings are as follows: (1) Few LLAGNs have a detectable young (<~10^7 yr) starburst component, indicating that very massive stars do not contribute significantly to the optical continuum. In particular, no features due to Wolf-Rayet stars were convincingly detected. (2) High-order Balmer absorption lines of H I (HOBLs), on the other hand, are detected in ~40% of LLAGNs. These features, which are strongest in 10^8-10^9 yr intermediate-age stellar populations, are accompanied by diluted metal absorption lines and bluer colors than other objects in the sample. (3) These intermediate-age populations are very common (~50%) in LLAGNs with relatively weak [O I] emission ([O I]/Hα ≤ 0.25) but rare (~10%) in LLAGNs with stronger [O I]. This is intriguing since LLAGNs with weak [O I] have been previously hypothesized to be "transition objects" in which both an AGN and young stars contribute to the emission-line excitation. Massive stars, if present, are completely outshone by intermediate-age and old stars in the optical. This happens in at least a couple of objects where independent UV spectroscopy detects young starbursts not seen in the optical. (4) Objects with predominantly old stars span the whole range of [O I]/Hα values, but (5) sources with significant young and/or intermediate-age populations are nearly all (~90%) weak-[O I] emitters. These new findings suggest a link between the stellar populations and the gas ionization mechanism. The strong-[O I] objects are most likely true LLAGNs, with stellar processes being insignificant. However, the weak-[O I] objects may comprise two populations, one where the ionization is dominated by stellar processes and another where it is governed by either an AGN or a more even mixture of stellar and AGN processes. Possible stellar sources for the ionization include weak starbursts, supernova remnants, and evolved poststarburst populations. These scenarios are examined and constrained by means of complementary observations and detailed modeling of the stellar populations in forthcoming communications. Based on observations made with the Nordic Optical Telescope, operated on the island of La Palma jointly by Denmark, Finland, Iceland, Norway, and Sweden, in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias.

**Evidence of a Misaligned Secondary Bar in the Large Magellanic Cloud**
Evidence of a misaligned secondary bar, within the primary bar of the Large Magellanic Cloud (LMC), is presented. The density distribution and the dereddened mean magnitudes (I_0) of the red clump stars in the bar obtained from the Optical Gravitational Lensing Experiment II data are used for this study. The bar region that predominantly showed a wavy pattern in the line of sight in a recent paper by Subramaniam was located. These points in the X-Z plane delineate an S-shaped pattern, clearly indicating a misaligned bar. This feature is statistically significant and does not depend on the considered value of I_0 for the LMC center. The rest of the bar region was not found to show the warp or the wavy pattern. The secondary bar is found to be considerably elongated in the Z-direction, with an inclination of 66.5° ± 0.9°, whereas the undisturbed part of the primary bar is found to have an inclination of 15.1° ± 2.7°, such that the eastern sides are closer to us with respect to the western sides of both the bars. The P.A.maj of the secondary bar is found to be 108.4° ± 7.3°. The streaming motions found in the H I velocity map close to the LMC center could be caused by the secondary bar. The recent star formation and the gas distribution in the LMC could be driven by the misaligned secondary bar.

**Molecular Gas in Candidate Double-Barred Galaxies. III. A Lack of Molecular Gas?**
Most models of double-barred galaxies suggest that a molecular gas component is crucial for maintaining long-lived nuclear bars. We have undertaken a CO survey in an attempt to determine the gas content of these systems and to locate double-barred galaxies with strong CO emission that could be candidates for high-resolution mapping. We observed 10 galaxies in CO J=2-1 and J=3-2 and did not detect any galaxies that had not already been detected in previous CO surveys. We preferentially detect emission from galaxies containing some form of nuclear activity. Simulations of these galaxies require that they contain 2%-10% gas by mass in order to maintain long-lived nuclear bars. The fluxes for the galaxies for which we have detections suggest that the gas mass fraction is in agreement with these models' requirements. The lack of emission in the other galaxies suggests that they contain as little as 7×10^6 M_solar of molecular material, which corresponds to <~0.1% gas by mass. This result combined with the wide variety of CO distributions observed in double-barred galaxies suggests the need for models of double-barred galaxies that do not require a large, well-ordered molecular gas component.

**Inner-truncated Disks in Galaxies**
We present an analysis of the disk brightness profiles of 218 spiral and lenticular galaxies. At least 28% of disk galaxies exhibit inner truncations in these profiles. There are no significant trends of truncation incidence with Hubble type, but the incidence among barred systems is 49%, more than 4 times that for nonbarred galaxies. However, not all barred systems have inner truncations, and not all inner-truncated systems are currently barred. Truncations represent a real dearth of disk stars in the inner regions and are not an artifact of our selection or fitting procedures nor the result of obscuration by dust. Disk surface brightness profiles in the outer regions are well represented by simple exponentials for both truncated and nontruncated disks. However, truncated and nontruncated systems have systematically different slopes and central surface brightness parameters for their disk brightness distributions. Truncation radii do not appear to correlate well with the sizes or brightnesses of the bulges. This suggests that the low angular momentum material apparently missing from the inner disk was not simply consumed in forming the bulge population. Disk parameters and the statistics of bar orientations in our sample indicate that the missing stars of the inner disk have not simply been redistributed azimuthally into bar structures. The sharpness of the brightness truncations and their locations with respect to other galactic structures suggest that resonances associated with disk kinematics, or tidal interactions with the mass of bulge stars, might be responsible for this phenomenon.
Structure and kinematics of candidatedouble-barred galaxiesResults of optical and NIR spectral and photometric observations of asample of candidate double-barred galaxies are presented. Velocityfields and velocity dispersion maps of stars and ionized gas, continuumand emission-line images were constructed from integral-fieldspectroscopy observations carried out at the 6 m telescope (BTA) of SAORAS, with the MPFS spectrograph and the scanning Fabry-PerotInterferometer. NGC 2681 was also observed with thelong-slit spectrograph of the BTA. Optical and NIR images were obtainedat the BTA and at the 2.1 m telescope (OAN, México).High-resolution images were retrieved from the HST data archive.Morphological and kinematic features of all 13 sample objects aredescribed in detail. Attention is focused on the interpretation ofobserved non-circular motions of gas and stars in circumnuclear (onekiloparsec-scale) regions. We have shown first of all that these motionsare caused by the gravitational potential of a large-scale bar.NGC 3368 and NGC 3786 have nuclearbars only, their isophotal twist at larger radii being connected withthe bright spiral arms. Three cases of inner polar disks in our sample(NGC 2681, NGC 3368 andNGC 5850) are considered. We found ionized-gascounter-rotation in the central kiloparsec of the lenticular galaxyNGC 3945. Seven galaxies (NGC 470,NGC 2273, NGC 2681, NGC3945, NGC 5566, NGC5905, and NGC 6951) have inner mini-disksnested in large-scale bars. Minispiral structures occur often in thesenuclear disks. It is interesting that the majority of the observed,morphological and kinematical, features in the sample galaxies can beexplained without the secondary bar hypothesis. 
Thus we suggest that adynamically independent secondary bar is a rarer phenomenon than followsfrom isophotal analysis of the images only.Based on observations carried out at the 6 m telescope of the SpecialAstrophysical Observatory of the Russian Academy of Sciences, operatedunder the financial support of the Science Department of Russia(registration number 01-43), at the 2.1 m telescope of the ObservatorioAstronónico Nacional, San Pedro Martir, México, and fromthe data archive of the NASA/ESA Hubble Space Telescope at the SpaceTelescope Science Institute. STScI is operated by the association ofUniversities for Research in Astronomy, Inc. under NASA contract NAS5-26555.Tables 1 to 6 and Figures 2-13 and 15-18 are only available inelectronic form at http://www.edpsciences.org Double-barred galaxies. I. A catalog of barred galaxies with stellar secondary bars and inner disksI present a catalog of 67 barred galaxies which contain distinct,elliptical stellar structures inside their bars. Fifty of these aredouble-barred galaxies: a small-scale, inner or secondary bar isembedded within a large-scale, outer or primary bar. I providehomogenized measurements of the sizes, ellipticities, and orientationsof both inner and outer bars, along with global parameters for thegalaxies. The other 17 are classified as inner-disk galaxies, where alarge-scale bar harbors an inner elliptical structure which is alignedwith the galaxy's outer disk. Four of the double-barred galaxies alsopossess inner disks, located in between the inner and outer bars. Whilethe inner-disk classification is ad-hoc - and undoubtedly includes someinner bars with chance alignments (five such probable cases areidentified) - there is good evidence that inner disks form astatistically distinct population, and that at least some are indeeddisks rather than bars. 
In addition, I list 36 galaxies which may bedouble-barred, but for which current observations are ambiguous orincomplete, and another 23 galaxies which have been previously suggestedas potentially being double-barred, but which are probably not. Falsedouble-bar identifications are usually due to features such as nuclearrings and spirals being misclassified as bars; I provide someillustrated examples of how this can happen.A detailed statistical analysis of the general population of double-barand inner-disk galaxies, as represented by this catalog, will bepresented in a companion paper.Tables \ref{tab:measured} and \ref{tab:deproj} are only available inelectronic form at http://www.edpsciences.org An Imaging Survey of Early-Type Barred GalaxiesThis paper presents the results of a high-resolution imaging survey,using both ground-based and Hubble Space Telescope images, of a completesample of nearby barred S0-Sa galaxies in the field, with a particularemphasis on identifying and measuring central structures within thebars: secondary bars, inner disks, nuclear rings and spirals, andoff-plane dust. A discussion of the frequency and statistical propertiesof the various types of inner structures has already been published.Here we present the data for the individual galaxies and measurements oftheir bars and inner structures. We set out the methods we use to findand measure these structures, and how we discriminate between them. Inparticular, we discuss some of the deficiencies of ellipse fitting ofthe isophotes, which by itself cannot always distinguish between bars,rings, spirals, and dust, and which can produce erroneous measurementsof bar sizes and orientations. Direct Confirmation of Two Pattern Speeds in the Double-barred Galaxy NGC 2950We present the surface photometry and stellar kinematics of NGC 2950,which is a nearby and undisturbed SB0 galaxy hosting two nested stellarbars. We use the Tremaine-Weinberg method to measure the pattern speedof the primary bar. 
This also permits us to establish directly and forthe first time that the two nested bars are rotating with differentpattern speeds and, in particular, that the rotation frequency of thesecondary bar is higher than that of the primary one.Based on observations made with the UK Jacobus Kapteyn Telescope and theItalian Telescopio Nazionale Galileo operated at the SpanishObservatorio del Roque de los Muchachos of the Instituto deAstrofísica de Canarias by the Isaac Newton Group and theIstituto Nazionale di Astrofisica, respectively. When Is a Bulge Not a Bulge? Inner Disks Masquerading as Bulges in NGC 2787 and NGC 3945We present a detailed morphological, photometric, and kinematic analysisof two barred S0 galaxies with large, luminous inner disks inside theirbars. We show that these structures, in addition to being geometricallydisklike, have exponential profiles (scale lengths ~300-500 pc) distinctfrom the central, nonexponential bulges. We also find them to bekinematically disklike. The inner disk in NGC 2787 has a luminosityroughly twice that of the bulge; but in NGC 3945, the inner disk isalmost 10 times more luminous than the bulge, which itself is extremelysmall (half-light radius ~100 pc, in a galaxy with an outer ring ofradius ~14 kpc) and has only ~5% of the total luminosity-a bulge/totalratio much more typical of an Sc galaxy. We estimate that at least 20%of (barred) S0 galaxies may have similar structures, which means thattheir bulge/disk ratios may be significantly overestimated. These innerdisks dominate the central light of their galaxies; they are at least anorder of magnitude larger than typical nuclear disks'' found inelliptical and early-type spiral galaxies. Consequently, they mustaffect the dynamics of the bars in which they reside. A Search for Dwarf'' Seyfert Nuclei. VI. 
Properties of Emission-Line Nuclei in Nearby GalaxiesWe use the database from Paper III to quantify the global and nuclearproperties of emission-line nuclei in the Palomar spectroscopic surveyof nearby galaxies. We show that the host galaxies of Seyferts, LINERs,and transition objects share remarkably similar large-scale propertiesand local environments. The distinguishing traits emerge on nuclearscales. Compared with LINERs, Seyfert nuclei are an order of magnitudemore luminous and exhibit higher electron densities and internalextinction. We suggest that Seyfert galaxies possess characteristicallymore gas-rich circumnuclear regions and hence a more abundant fuelreservoir and plausibly higher accretion rates. The differences betweenthe ionization states of the narrow emission-line regions of Seyfertsand LINERs can be partly explained by the differences in their nebularproperties. Transition-type objects are consistent with being composite(LINER/H II) systems. With very few exceptions, the stellar populationwithin the central few hundred parsecs of the host galaxies is uniformlyold, a finding that presents a serious challenge to starburst orpost-starburst models for these objects. Seyferts and LINERs havevirtually indistinguishable velocity fields as inferred from their linewidths and line asymmetries. Transition nuclei tend to have narrowerlines and more ambiguous evidence for line asymmetries. All threeclasses of objects obey a strong correlation between line width and lineluminosity. We argue that the angular momentum content of circumnucleargas may be an important factor in determining whether a nucleus becomesactive. Finally, we discuss some possible complications for theunification model of Seyfert galaxies posed by our observations. Star Formation Histories of Early-Type Galaxies. I. 
Higher Order Balmer Lines as Age IndicatorsWe have obtained blue integrated spectra of 175 nearby early-typegalaxies, covering a wide range in galaxy velocity dispersion andemphasizing those with σ<100 km s-1. Galaxies havebeen observed both in the Virgo Cluster and in lower densityenvironments. The main goals are the evaluation of higher order Balmerlines as age indicators and differences in stellar populations as afunction of mass, environment, and morphology. In this first paper, ouremphasis is on presenting the methods used to characterize the behaviorof the Balmer lines through evolutionary population synthesis models.Lower σ galaxies exhibit a substantially greater intrinsicscatter, in a variety of line-strength indicators, than do higherσ galaxies, with the large intrinsic scatter setting in below aσ of 100 km s-1. Moreover, a greater contrast inscatter is present in the Balmer lines than in the lines of metalfeatures. Evolutionary synthesis modeling of the observed spectralindexes indicates that the strong Balmer lines found primarily among thelow-σ galaxies are caused by young age, rather than by lowmetallicity. Thus we find a trend between the population age and thecentral velocity dispersion, such that low-σ galaxies have youngerluminosity-weighted mean ages. We have repeated this analysis usingseveral different Balmer lines and find consistent results from onespectral indicator to another. A new catalogue of ISM content of normal galaxiesWe have compiled a catalogue of the gas content for a sample of 1916galaxies, considered to be a fair representation of normality''. Thedefinition of a normal'' galaxy adopted in this work implies that wehave purposely excluded from the catalogue galaxies having distortedmorphology (such as interaction bridges, tails or lopsidedness) and/orany signature of peculiar kinematics (such as polar rings,counterrotating disks or other decoupled components). 
In contrast, wehave included systems hosting active galactic nuclei (AGN) in thecatalogue. This catalogue revises previous compendia on the ISM contentof galaxies published by \citet{bregman} and \citet{casoli}, andcompiles data available in the literature from several small samples ofgalaxies. Masses for warm dust, atomic and molecular gas, as well asX-ray luminosities have been converted to a uniform distance scale takenfrom the Catalogue of Principal Galaxies (PGC). We have used twodifferent normalization factors to explore the variation of the gascontent along the Hubble sequence: the blue luminosity (LB)and the square of linear diameter (D225). Ourcatalogue significantly improves the statistics of previous referencecatalogues and can be used in future studies to define a template ISMcontent for normal'' galaxies along the Hubble sequence. The cataloguecan be accessed on-line and is also available at the Centre desDonnées Stellaires (CDS).The catalogue is available in electronic form athttp://dipastro.pd.astro.it/galletta/ismcat and at the CDS via anonymousftp to\ cdsarc.u-strasbg.fr (130.79.128.5) or via\http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/405/5 Efficient multi-Gaussian expansion of galaxiesWe describe a simple, efficient, robust and fully automatic algorithmfor the determination of a multi-Gaussian expansion (MGE) fit to galaxyimages, to be used as a parametrization for the galaxy stellar surfacebrightness. In most cases the least-squares solution found by thismethod essentially corresponds to the minimax, constant relative error,MGE approximation of the galaxy surface brightness, with the chosennumber of Gaussians. The algorithm is well suited to be used withmultiple-resolution images (e.g. Hubble Space Telescope (HST) andground-based images). It works orders of magnitude faster and is moreaccurate than currently available methods. 
An alternative, morecomputing-intensive, fully linear algorithm that is guaranteed toconverge to the smallest χ2 solution is also discussed.Examples of MGE fits are presented for objects with HST or ground-basedphotometry, including galaxies with significant isophote twist. Two-Dimensional Spectroscopy of Double-Barred GalaxiesWe describe the results of our spectroscopy for a sample of barredgalaxies whose inner regions exhibit an isophotal twist commonly calleda secondary bar. The line-of-sight velocity fields of the ionized gasand stars and the light-of-sight velocity dispersion fields of the starswere constructed from two-dimensional spectroscopy with the 6m SpecialAstrophysical Observatory telescope. We detected various types ofnon-circular motions of ionized gas: radial flows within large-scalebars, counter-rotation of the gas and stars at the center of NGC 3945, apolar gaseous disk in NGC 5850, etc. Our analysis of the optical andnear-infrared images (both ground-based and those from the Hubble SpaceTelescope) revealed circumnuclear minispirals in five objects. Thepresence of an inner (secondary) bar in the galaxy images is shown tohave no effect on the circumnuclear kinematics of the gas and stars.Thus, contrary to popular belief, the secondary bar is not a dynamicallydecoupled galactic structure. We conclude that the so-calleddouble-barred galaxies are not a separate type of galaxies but are acombination of objects with distinctly different morphology of theircircumnuclear regions. Nuclear Cusps and Cores in Early-Type Galaxies as Relics of Binary Black Hole MergersWe present an analysis of the central cusp slopes and core parameters ofearly-type galaxies using a large database of surface brightnessprofiles obtained from Hubble Space Telescope observations. 
We examinethe relation between the central cusp slopes, core parameters, and blackhole masses in early-type galaxies, in light of two models that attemptto explain the formation of cores and density cusps via the dynamicalinfluence of black holes. Contrary to the expectations fromadiabatic-growth models, we find that the cusp slopes do not steepenwith increasing black hole mass fraction. Moreover, a comparison ofkinematic black hole mass measurements with the masses predicted by theadiabatic models shows that they overpredict the masses by a factor of~3. Simulations involving binary black hole mergers predict that boththe size of the core and the central mass deficit correlate with thefinal black hole mass. These relations are qualitatively supported bythe present data. Bar Galaxies and Their EnvironmentsThe prints of the Palomar Sky Survey, luminosity classifications, andradial velocities were used to assign all northern Shapley-Ames galaxiesto either (1) field, (2) group, or (3) cluster environments. Thisinformation for 930 galaxies shows no evidence for a dependence of barfrequency on galaxy environment. This suggests that the formation of abar in a disk galaxy is mainly determined by the properties of theparent galaxy, rather than by the characteristics of its environment. Double Bars, Inner Disks, and Nuclear Rings in Early-Type Disk GalaxiesWe present results from a survey of an unbiased sample of 38 early-type(S0-Sa), low-inclination, optically barred galaxies in the field, usingimages both from the ground and from space. Our goal was to find andcharacterize central stellar and gaseous structures: secondary bars,inner disks, and nuclear rings. We find that bars inside bars aresurprisingly common: at least one-quarter of the sample galaxies(possibly as many as 40%) are double barred, with no preference forHubble type or the strength of the primary bar. A typical secondary baris ~12% of the size of its primary bar and extends to 240-750 pc inradius. 
Secondary bars are not systematically either parallel orperpendicular to the primary; we see cases where they lead the primarybar in rotation and others where they trail, which supports thehypothesis that the two bars of a double-bar system rotateindependently. We see no significant effect of secondary bars on nuclearactivity: our double-barred galaxies are no more likely to harbor aSeyfert or LINER nucleus than our single-barred galaxies. We findkiloparsec-scale inner disks in at least 20% of our sample; they occuralmost exclusively in S0 galaxies. These disks are on average 20% thesize of their host bar and show a wider range of relative sizes than dosecondary bars. Nuclear rings are present in about a third of oursample. Most of these rings are dusty, sites of current or recent starformation, or both; such rings are preferentially found in Sa galaxies.Three S0 galaxies (8% of the sample, but 15% of the S0's) appear to havepurely stellar nuclear rings, with no evidence for dust or recent starformation. The fact that these central stellar structures are so commonindicates that the inner regions of early-type barred galaxies typicallycontain dynamically cool and disklike structures. This is especiallytrue for S0 galaxies, where secondary bars, inner disks, and/or stellarnuclear rings are present at least two-thirds of the time. If weinterpret nuclear rings, secondary bars, and (possibly) inner disks andnuclear spirals as signs of inner Lindblad resonances (ILRs), thenbetween one and two-thirds of barred S0-Sa galaxies show evidence forILRs. A synthesis of data from fundamental plane and surface brightness fluctuation surveysWe perform a series of comparisons between distance-independentphotometric and spectroscopic properties used in the surface brightnessfluctuation (SBF) and fundamental plane (FP) methods of early-typegalaxy distance estimation. 
The data are taken from two recent surveys:the SBF Survey of Galaxy Distances and the Streaming Motions of AbellClusters (SMAC) FP survey. We derive a relation between(V-I)0 colour and Mg2 index using nearly 200galaxies and discuss implications for Galactic extinction estimates andearly-type galaxy stellar populations. We find that the reddenings fromSchlegel et al. for galaxies with E(B-V)>~0.2mag appear to beoverestimated by 5-10 per cent, but we do not find significant evidencefor large-scale dipole errors in the extinction map. In comparison withstellar population models having solar elemental abundance ratios, thegalaxies in our sample are generally too blue at a given Mg2;we ascribe this to the well-known enhancement of the α-elements inluminous early-type galaxies. We confirm a tight relation betweenstellar velocity dispersion σ and the SBF fluctuation count'parameter N, which is a luminosity-weighted measure of the total numberof stars in a galaxy. The correlation between N and σ is eventighter than that between Mg2 and σ. Finally, we deriveFP photometric parameters for 280 galaxies from the SBF survey data set.Comparisons with external sources allow us to estimate the errors onthese parameters and derive the correction necessary to bring them on tothe SMAC system. The data are used in a forthcoming paper, whichcompares the distances derived from the FP and SBF methods. The SBF Survey of Galaxy Distances. IV. SBF Magnitudes, Colors, and DistancesWe report data for I-band surface brightness fluctuation (SBF)magnitudes, (V-I) colors, and distance moduli for 300 galaxies. Thesurvey contains E, S0, and early-type spiral galaxies in the proportionsof 49:42:9 and is essentially complete for E galaxies to Hubblevelocities of 2000 km s-1, with a substantial sampling of Egalaxies out to 4000 km s-1. The median error in distancemodulus is 0.22 mag. 
We also present two new results from the survey.(1) We compare the mean peculiar flow velocity (bulk flow) implied byour distances with predictions of typical cold dark matter transferfunctions as a function of scale, and we find very good agreement withcold, dark matter cosmologies if the transfer function scale parameterΓ and the power spectrum normalization σ8 arerelated by σ8Γ-0.5~2+/-0.5. Deriveddirectly from velocities, this result is independent of the distributionof galaxies or models for biasing. This modest bulk flow contradictsreports of large-scale, large-amplitude flows in the ~200 Mpc diametervolume surrounding our survey volume. (2) We present adistance-independent measure of absolute galaxy luminosity, N and showhow it correlates with galaxy properties such as color and velocitydispersion, demonstrating its utility for measuring galaxy distancesthrough large and unknown extinction. Observations in part from theMichigan-Dartmouth-MIT (MDM) Observatory. Young Populations in the Nuclei of Barred GalaxiesWe have conducted UBVRI and H_α CCD photometry of five barredgalaxies (NGC 2523, NGC 2950, NGC 3412, NGC 3945 and NGC 5383), alongwith SPH simulations, in order to understand the origin of young stellarpopulations in the nuclei of barred galaxies. The H_α emission,which is thought to be emitted by young stellar populations, is eitherabsent or strongly concentrated in the nuclei of early-type galaxies(NGC 2950, NGC 3412 and NGC 3945), while they are observed in the nucleiand circumnuclear regions of intermediate-type galaxies with strong bars(NGC 2523 and NGC 5383). SPH simulations of realistic mass models forthese galaxies show that some disc material can be driven into thenuclear region by a strong bar potential. This implies that the youngstellar populations in the circumnuclear regions of barred galaxies canbe formed out of such gas. 
The existence of nuclear dust lanes is anindication of an ongoing gas inflow and extremely young stellarpopulations in these galaxies, because nuclear dust lanes such as thosein NGC 5383 are not long-lasting features according to our simulations.
Eddy current losses in the transformer are
This question was previously asked in
UPPCL JE Previous Paper 8 (Held On: 27 November 2019 Shift 2 )
1. Directly proportional to the frequency squared
2. Inversely proportional to the thickness of the lamination
3. Inversely proportional to the field strength
4. Inversely proportional to the frequency
Option 1 : Directly proportional to the frequency squared
Detailed Solution
Eddy current losses: Eddy current loss in the transformer is I2R loss present in the core due to the production of eddy current.
$${W_e} = K{f^2}B_m^2{t^2}V$$
$${B_{max}} \propto \frac{E}{f}$$
Where,
K - eddy current coefficient; its value depends on the nature of the magnetic material (it varies inversely with the material's resistivity)
E - applied (supply) voltage
Bm - maximum value of flux density in Wb/m2
t - thickness of the laminations in metres
f - frequency of reversal of the magnetic field in Hz
V - volume of magnetic material in m3
Thus, eddy current loss is directly proportional to the squares of the frequency, the maximum flux density, and the lamination thickness. It is also inversely proportional to the resistivity of the material, which is why the core is built up from thin insulated plates called laminations.
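Plugging numbers into the loss formula above makes the frequency dependence concrete. A minimal sketch (the coefficient K and the operating values are made-up illustrative numbers, not taken from the question):

```python
def eddy_current_loss(K, f, B_max, t, volume):
    """Eddy current loss W_e = K * f^2 * B_max^2 * t^2 * V, in watts."""
    return K * f**2 * B_max**2 * t**2 * volume

# Doubling the frequency at the same flux density quadruples the loss:
base = eddy_current_loss(K=1.0, f=50, B_max=1.2, t=0.35e-3, volume=0.01)
doubled = eddy_current_loss(K=1.0, f=100, B_max=1.2, t=0.35e-3, volume=0.01)
print(doubled / base)  # → 4.0

# Halving the lamination thickness cuts the loss to a quarter:
thinner = eddy_current_loss(K=1.0, f=50, B_max=1.2, t=0.175e-3, volume=0.01)
print(thinner / base)  # → 0.25
```

This is exactly why option 1 is correct: at fixed B_max, W_e scales with f².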
Hysteresis losses: These are due to the repeated reversal of magnetization in the transformer core when it is subjected to an alternating magnetizing force.
$${W_h} = \eta B_{max}^{x}fv$$
$${B_{max}} \propto \frac{E}{f}$$
Where
η is the hysteresis coefficient, which depends on the core material
x is the Steinmetz constant
Bmax = maximum flux density
f = frequency of magnetization or supply frequency
v = volume of the core
E = applied (supply) voltage
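Because hysteresis loss grows linearly with f while eddy loss grows as f² (at constant Bmax), total core loss measured at two frequencies can be split into its two components. This two-frequency separation is a standard textbook technique rather than part of the quoted solution, and the numbers below are synthetic:

```python
def separate_core_losses(f1, P1, f2, P2):
    """Split total core loss P = a*f + b*f^2 (constant B_max) into a
    hysteresis term a*f and an eddy term b*f^2 from two measurements."""
    # P/f = a + b*f is linear in f, so two points determine a and b.
    y1, y2 = P1 / f1, P2 / f2
    b = (y2 - y1) / (f2 - f1)
    a = y1 - b * f1
    return a, b

# Synthetic check with a = 2 W/Hz (hysteresis) and b = 0.01 W/Hz^2 (eddy):
a, b = separate_core_losses(50, 2 * 50 + 0.01 * 50**2,
                            60, 2 * 60 + 0.01 * 60**2)
print(a, b)  # → 2.0 0.01
```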
Percolation and Connectivity in AB Random Geometric Graphs
* Corresponding author
2 TREC - Theory of networks and communications
DI-ENS - Département d'informatique de l'École normale supérieure, ENS Paris - École normale supérieure - Paris, Inria Paris-Rocquencourt
Abstract: Given two independent Poisson point processes Φ1, Φ2 in R^d, the AB Poisson Boolean model is the graph with the points of Φ1 as vertices and with edges between any pair of points for which the intersection of the balls of radius 2r centred at these points contains at least one point of Φ2. This is a generalization of the AB percolation model on discrete lattices. We show the existence of percolation for all d > 1 and derive bounds for a critical intensity. We also provide a characterization of this critical intensity when d = 2. To study the connectivity problem, we consider independent Poisson point processes of intensities n and cn in the unit cube. The AB random geometric graph is defined as above, but with balls of radius r. We derive a weak law result for the largest nearest-neighbour distance and almost sure asymptotic bounds for the connectivity threshold.
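The AB random geometric graph described in the abstract is straightforward to simulate. The sketch below is an illustration of the model definition, not the authors' code: two finite point sets stand in for the Poisson processes, an edge joins two Φ1 points whenever some Φ2 point lies within distance r of both (i.e., in the intersection of the two balls), and a union-find pass checks connectivity.

```python
import math

def ab_edges(phi1, phi2, r):
    """Edges of the AB random geometric graph: vertices are the Phi1
    points; i-j is an edge iff some Phi2 point is within r of both."""
    edges = set()
    for z in phi2:
        near = [i for i, x in enumerate(phi1) if math.dist(x, z) <= r]
        # every pair of Phi1 points sharing a witness z in Phi2 is joined
        for a in range(len(near)):
            for b in range(a + 1, len(near)):
                edges.add((near[a], near[b]))
    return edges

def is_connected(n, edges):
    """Union-find connectivity check on n vertices."""
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for a, b in edges:
        parent[find(a)] = find(b)
    return n == 0 or len({find(i) for i in range(n)}) == 1

phi1 = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
phi2 = [(0.5, 0.0), (1.5, 0.0)]
e = ab_edges(phi1, phi2, 0.6)
print(sorted(e), is_connected(len(phi1), e))  # → [(0, 1), (1, 2)] True
```

Replacing the hand-picked points with Poisson samples of intensities n and cn in the unit cube reproduces the setting of the connectivity results.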
Document type: Report
[Research Report] 2009
Cited literature [15 references]
https://hal.inria.fr/inria-00372331
Contributor: D. Yogeshwaran
Submitted on: Saturday, 23 January 2010 - 01:27:01
Last modified on: Thursday, 11 January 2018 - 06:20:06
Document(s) archived on: Thursday, 23 September 2010 - 17:49:52
Files
AB-RGG-revision1_final.pdf
Files produced by the author(s)
Identifiers
• HAL Id : inria-00372331, version 3
• ARXIV : 0904.0223
Citation
Srikanth K. Iyer, D. Yogeshwaran. Percolation and Connectivity in AB Random Geometric Graphs. [Research Report] 2009. 〈inria-00372331v3〉
## Wednesday, 5 November 2014
### Automated DEX Decompilation using Androguard
Hey guys, it's been a while since my last post and my blog is beginning to gather dust. So I thought I would drop a couple of posts about some new stuff I've been trying and learning. This post is about Androguard and how to write a simple Python script that dumps decompiled Dalvik bytecode from an Android APK.
Androguard is a framework, basically a collection of Python libraries, that gives hopeful Android bug hunters like me a programmatic interface for decompiling, analyzing, visualizing and parsing content from an APK; another feature lets you mount Androguard as a malware analysis engine. You can learn more about the epic features of this project from the wiki.
Why is this cool? Well basically it allows you to orchestrate:
• As close to completely automated vulnerability analysis and debugging of Android Applications as possible (if you're up to the challenge, which I think I am).
• Application Packaging and Wrapping (for Mobile Device Management solutions)
What I'm going to cover here, and probably in the next couple of posts, is the basic process of installing Androguard and how to get going with a few basic Python scripts that automate a couple of tedious tasks. So let's get going!
## Install Androguard
Before we start you will need to grab a copy of the Androguard project, which you can get over here: https://code.google.com/p/androguard/ . What you need to do then is dump the archive to a handy location and unzip it like so:
Unpackaged Androguard, ready to install. Please make sure you have python-setuptools installed.
After that you can go ahead and fire off the setup.py script with an argument of 'install'. And that's it!
## My First Androguard Script
Here's what this script looks like:
You can clone a copy of this little script from here https://gist.github.com/2f0c0330cc2cd0910040.git.
Running this on a sample application, the output should look a little something like this:
running dump_methods on aca.db,tool.apk
### Breakdown
So let's break down what this script does. Obviously I'm gonna skip the basic stuff and jump down into the Androguard-specific calls, the first of which appears on line 6. The call to APK drops down into the APK class, which is used to represent an APK file and all its interesting attributes; you can check out a copy of this class under androguard/core/bytecodes/apk.py.
Now one thing to note is that this whole class works by accepting an APK file, which as you should know is just a plain old zip file. Once the constructor method (not sure if that's the correct Python-specific term) triggers, it scrapes and pulls attributes from the file, primarily using a library that handles zip files (this collection of code is included in the apk.py source). What happens next is we pull out the dex file and pass it to the DalvikVMFormat constructor call.
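You can see the "an APK is just a zip" point for yourself with nothing but Python's standard library. Here's a stdlib-only sketch; the in-memory archive and its two entries are just stand-ins for a real APK, which you would open by path instead:

```python
import io
import zipfile

# An APK is just a zip archive; build a tiny stand-in one in memory.
# (A real APK also holds resources, certificates, etc.)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as apk:
    apk.writestr("AndroidManifest.xml", b"<manifest/>")
    apk.writestr("classes.dex", b"dex\n035\x00")  # starts with the dex magic

# Reading it back works exactly as it would for a real .apk on disk
with zipfile.ZipFile(buf) as apk:
    names = apk.namelist()
    dex_bytes = apk.read("classes.dex")

print(names)          # ['AndroidManifest.xml', 'classes.dex']
print(dex_bytes[:4])  # b'dex\n' -- the dex file magic
```

This is essentially the first thing the APK class does before handing classes.dex off to the dex parser.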
The DalvikVMFormat class is pretty cool; you can find it under androguard/core/bytecodes/dvm.py. The dvm.py file is where all the dirty dex file parsing happens. What's awesome about this is that it's a pure Python implementation of a dex file parser, so if you'd like to do your own dex file parsing in Python, that's where you should look for ideas (it's also pretty isolated and includes a lot of the classes that support its own operation, so it's pretty easy to pull this class out and plug it into whatever else you'd like to use it for that requires dex file parsing). What this file does is strip out the contents of the dex file and plug it into a class that represents the file and its attributes (bytecode offsets, type descriptors etc).
What we are left with when the dex file parsing finishes is a class with a full grasp of all the attributes of the dex file as well as where all the interesting stuff is stored. For instance, in this script we are interested in the classes and the methods. The get_classes() call which follows returns a list (or iterable, basically something you can stick in a for loop and traverse) of ClassDefItem objects, which are defined in dvm.py. Methods are associated with ClassDefItem objects, which is why you can call get_methods() to grab a list of EncodedMethod objects. Each ClassDefItem object includes information about its index in the dex file; this is because each method, string, class and other object is dereferenced in the dex file using indexes (commonly referred to as idx's in the androguard and Dalvik VM source).
By the way, it's great to talk about ClassDefItems and EncodedMethods like this because the API represents the structure of a dex file as it is defined and handled by the Dalvik VM itself :) So I might be talking about Python objects and whatnot, but you are actually learning the dex file format as well!
Anyway, back to the code. We now have a list of EncodedMethod objects; what we do next is print out the method name and its descriptor. What is a method descriptor, you may ask?
A descriptor is a string representing the type of a field or method. Descriptors are represented in the class file format using modified UTF-8 strings (§4.4.7) and thus may be drawn, where not further constrained, from the entire Unicode codespace. - http://docs.oracle.com/javase/specs/jvms/se7/html/jvms-4.html#jvms-4.3
Well, it's basically just a string that describes the basic attributes of a method or class, stuff like the types of the parameters and the return type. If you're just getting into reverse engineering Android apps I suggest reading about these, since they are used by the Dalvik VM to identify object and method types, and you're not going to know what the hell is going on in bytecode if you don't understand this little language.
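To make the descriptor grammar concrete, here's a little stand-alone parser. This is my own sketch for illustration, not an Androguard API: it splits a Dalvik/JVM-style method descriptor into its parameter types and return type.

```python
def parse_method_descriptor(desc):
    """Split e.g. '(ILjava/lang/String;)V' into (['I', 'Ljava/lang/String;'], 'V')."""
    assert desc.startswith("("), "method descriptors open with '('"
    params_part, ret = desc[1:].split(")", 1)
    params, i = [], 0
    while i < len(params_part):
        start = i
        while params_part[i] == "[":   # '[' prefixes mark array dimensions
            i += 1
        if params_part[i] == "L":      # object type: runs up to and including ';'
            i = params_part.index(";", i) + 1
        else:                          # primitive: a single char (I, J, Z, V, ...)
            i += 1
        params.append(params_part[start:i])
    return params, ret

params, ret = parse_method_descriptor("(I[Ljava/lang/String;J)V")
print(params)  # ['I', '[Ljava/lang/String;', 'J']
print(ret)     # V
```

So `(I[Ljava/lang/String;J)V` reads as: takes an int, a String array and a long, returns void, and that's exactly the kind of string get_descriptor() hands you.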
Moving on, finally we get to pulling code out of methods. The next interesting lines of code start at line 11. Here we make a call to the get_code() method, which simply returns a DalvikCode object. After that we pull out a DCode object by calling get_dc(). The DCode object is used to represent the actual bytecode and some of its attributes.
You may wonder why it's necessary to have DalvikCode objects in the first place. Well, I'm not entirely sure of this, but looking at the source of the Dalvik VM's dexlib as well as the source for androguard, it looks like the DalvikCode object is used to associate meta-data with the actual bytecode. This is stuff like the register sizes (the number of registers accessible to the associated bytecode), the number of try blocks, debug information and other cool stuff. As far as I know, the bytecode verifier uses these details to ensure the actual executable bytecode operates within its limits, is given access to its arguments and makes use of its registers correctly. It also allows some methods to be defined differently according to their DalvikCode objects (most probably for optimization purposes), i.e. one method could have bytecode that only requires 2 registers to complete its task while another may need 10.
Anyway, the DalvikCode object holds a reference to the DCode object, which is used to handle the actual bytecode. In Androguard this object serves as an interface to the part of the dex file that stores the raw dex opcodes. It includes support methods for parsing bytecode into a human-readable representation; one of these methods is get_instructions(). The script calls get_instructions() to get an iterable list of bytecode instructions, after which it pulls out parsed bytecode by invoking get_name(), which returns the opcode name (invoke-virtual, iget, etc.), and get_output(), which returns the operands for the associated instruction as well as the contents of the operands where applicable.
And that's it, folks! I hope you guys enjoyed this post and will be writing your own Androguard scripts in no time!
---
A company has a fiscal year-end of December 31: (1) on October 1, $22,000 was paid for a one-year fire insurance policy; (2) on June 30 the company lent its chief financial officer$20,000; principal and interest at 6% are due in one year; and (3) equipment costing $70,000 was purchased at the beginning of the year for cash. Depreciation on the equipment is$14,000 per year. Prepare the necessary adjusting entries at December 31 for each of the above items. (If no entry is required for a transaction/event, select "No journal entry required" in the first account field.)
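The dollar amounts follow from simple prorations. Here's a quick sketch of the arithmetic (my own check; the account names mentioned afterwards are the conventional ones, not given in the problem):

```python
# (1) Insurance: policy paid Oct 1, so 3 of 12 months have expired by Dec 31
insurance_expense = 22_000 * 3 / 12

# (2) Interest: $20,000 lent Jun 30 at 6% annual; Jun 30 -> Dec 31 is 6 months
interest_receivable = 20_000 * 0.06 * 6 / 12

# (3) Depreciation: a full year of the stated annual amount
depreciation_expense = 14_000

print(insurance_expense)     # 5500.0
print(interest_receivable)   # 600.0
print(depreciation_expense)  # 14000
```

The usual entries would then be: debit Insurance Expense 5,500 / credit Prepaid Insurance 5,500; debit Interest Receivable 600 / credit Interest Revenue 600; debit Depreciation Expense 14,000 / credit Accumulated Depreciation 14,000.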
---
# Is the convergence in “the topology of pointwise convergence” for sequences or nets?
The topology of pointwise convergence on $Y^X$, where $Y$ is a topological space and $X$ is a set, is defined to be the topology that topologizes the pointwise convergence of mappings from $X$ to $Y$.
In the definition, I was wondering if the pointwise convergence here is for all nets of mappings or all sequences of mappings? I am thinking it is the former, but in what I have seen, sequences are mentioned all the time in a non-definition context: that a sequence converges w.r.t. the topology of pointwise convergence iff the sequence converges pointwise.
Or when specifying the topology of pointwise convergence, one has to also specify whether the convergence is for nets or sequences? If nets or sequences are not specified, which one is the default?
Thanks and regards!
---
If you want to define a unique topology, then you cannot stop at sequences. You should define net convergence, and a net convergence can define a topology (if it satisfies the four Kelley conditions etc.).
If you just postulate that all sequences converge in $X^Y$ iff they do pointwise, then this does not define a unique topology.
To see this, let $Y$ be $\mathbb{R}$ with the co-countable topology, and let $X = \{0,1\}$ for concreteness. Then the topology of pointwise convergence (from nets, or per convention, as the initial topology) is just the product topology of 2 co-countable spaces. A sequence converges in it iff it is eventually constant (in both coordinates, and so overall). But the discrete topology on $Y \times Y$ (as sets) has the exact same behaviour with respect to convergence of sequences, and so do all topologies that lie in between them. So we have many topologies on $Y^X$ that have the behaviour that sequences converge iff they converge pointwise.
So the sequence variant cannot function as the definition. It is a nice property to have, but not enough to define the topology. If you really want to do it via that route, then you have to use nets, there is no escaping that.
---
Thanks! In the same spirit, when talking about the topology of uniform convergence and the topology of compact convergence, are the convergences in both cases also for nets? If the convergences are for sequences, they cannot uniquely determine their topologies? – Tim Feb 25 '13 at 18:53
I suspect so. But with uniformities one does not specify just convergence but uniform convergence. And I'm not aware (but this is just my ignorance!) of a Kelley-like theory for when a "uniform convergence space" uniquely defines a uniformity (and not just a topology; note that in general a topological space can have many compatible uniformities, just like a metrizable space and metrics) – Henno Brandsma Feb 25 '13 at 18:59
The topology $\mathcal{T}$ of pointwise convergence on $Y^X$ is defined as the initial topology with respect to the projections $(\pi_x)_{x \in X}$ where $$Y^X \ni f \mapsto \pi_x(f) := f(x) \qquad (x \in X)$$ Let $(f_{\iota})_{\iota \in I}$ be a net in $Y^X$ and $f \in Y^X$. Then $f_\iota \to f$ in $(Y^X,\mathcal{T})$ if and only if $$\forall x \in X: \underbrace{\pi_x(f_\iota)}_{f_\iota(x)} \to \underbrace{\pi_x(f)}_{f(x)}$$ i.e. if the net is pointwise convergent. This fact can be easily concluded from the following theorem.
Let $X$ be a non-empty set and $((X_\kappa,\mathcal{T}_\kappa))_{\kappa \in K}$ a family of topological spaces. Let $f_{\kappa}: X \to X_{\kappa}$ be a mapping ($\kappa \in K$) and denote by $\mathcal{T}$ the initial topology with respect to $(f_\kappa)_{\kappa \in K}$. Then the following statements are equivalent for a given net $(x_\iota)_{\iota \in I}$ in $X$ and $x \in X$:
1. $x_\iota \to x$ in $(X,\mathcal{T})$
2. $\forall \kappa \in K: f_\kappa(x_\iota) \to f_\kappa(x)$ in $(X_\kappa,\mathcal{T}_\kappa)$
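A proof sketch of the theorem, using the standard subbasis description of the initial topology:

```latex
\textbf{Proof sketch.}
$(1)\Rightarrow(2)$: each $f_\kappa$ is continuous with respect to the
initial topology, and continuous maps preserve net convergence.

$(2)\Rightarrow(1)$: let $V = f_{\kappa}^{-1}(U)$, with $U \in \mathcal{T}_\kappa$
and $f_\kappa(x) \in U$, be a subbasic neighbourhood of $x$. Since
$f_\kappa(x_\iota) \to f_\kappa(x)$, the net $(f_\kappa(x_\iota))$ is eventually
in $U$, hence $(x_\iota)$ is eventually in $V$. Finite intersections of such
sets form a neighbourhood base at $x$, so $x_\iota \to x$ in $(X,\mathcal{T})$. \qed
```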
---
Thanks! I understand you define the topology of ptwise cvg to be the product topology. My definition (in my first and second paragraph) is different, and is in terms of topologizing a given family of nets or sequences together their respective "limits". My question was if the given family is of nets or of sequences. Although your reply doesn't address my question directly, I guess the answer to my question is nets instead of sequences? – Tim Feb 25 '13 at 17:34
@Tim I'm not sure whether I understand you correctly. What do you mean by "topologizing a given family of nets or seq. together their respective "limits""? You wrote nothing about families of nets/seq. in your post above at all. Do you want to define a topology $\tau$ on $Y^X$ such that $f_n \to f$ in $(X,\tau)$ iff $f_n \to f$ pointwise ... and are asking whether this equivalence then also holds for nets? (By the way, it's not only "my definition" of topology of ptw. cvg. - it's probably the most popular one.) – saz Feb 25 '13 at 18:32
"Do you want to define a topology $τ$ on $Y^X$ such that $f_n→f$ in $(X,τ)$ iff $f_n→f$ pointwise" yes, that is what I meant, except that I am asking whether $f_n$'s are any net or sequence. "asking whether this equivalence then also holds for nets?" Not really, as I mentioned, my question of whether it is about nets or sequences comes before finding the topology on $Y^X$ not after finding one. – Tim Feb 25 '13 at 18:46
@Tim As Henno Brandsma already pointed out, it doesn't suffice to ask for pointwise convergence of sequences to define a unique topology. That's why one defines the topology using projections (which leads to pointwise convergence of nets). – saz Feb 25 '13 at 18:50
---
# When do triple integrals involve a fourth dimension?
Of course, heuristically, a single integral gives area under a curve, and a double integral of a function gives the volume under the integrand and above a two-dimensional domain. Now, I understand that a triple integral of the number 1 gives the volume of the three-dimensional shape described by the limits of integration, but my professor told us that triple integrals are just integrals over "a 3D domain."
I suppose my confusion is this: does the value represented by a triple integral depend on the specific context of the problem, or are there different types of triple integrals that correspond to different meanings?
• If you continue with mathematics and enter into studying non-Euclidean geometry or objects of various and sundry strange topologies, I guess the concerns you mention might become important. And even at the level where you are now, there might be times when hypothesizing a "fictional" fourth dimension makes some calculations more convenient somehow. But, no, at the level where you are right now, a triple integral is simply a triple integral, no further context needed. If you know the function and the shape of the volume, that is all there is to be said. – bob.sacamento Dec 10 '18 at 16:03
• Possible duplicate of What does a triple integral represent? – BigbearZzz Dec 10 '18 at 16:04
• A single integral can give the area under a curve, but often that's not what the integral really means. For example, a single integral can give the height of a ball after it has fallen for some time with an accelerating downward speed. We can make a graph of the speed over time and use "area under the curve" to help find the ball's height, but the height is not actually area under a curve and we don't really need to think about any areas in order to compute the height. – David K Dec 10 '18 at 22:02
In some instances, one can use a triple integral to measure the volume of a $3D$ region, but triple integrals can also be used to find 'volume' between the graph of a $4D$ function and a $3D$ region.
Given a $3D$ region $E$, the volume of $E$, which we'll denote as $V(E)$, is given by $$V(E)=\iiint\limits_{E}\mathrm dx\,\mathrm dy\,\mathrm dz$$ But if you have a function $f(x,y,z)$, then you know that its graph is going to be $4D$. But we can still find the 'volume' of $f$ over $E$: $$V(f,E)=\iiint\limits_{E}f(x,y,z)\,\mathrm dx\,\mathrm dy\,\mathrm dz$$
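To illustrate the two formulas numerically, here's a stdlib-only midpoint-rule sketch over the unit cube; the integrand $f(x,y,z)=x+y+z$ and the grid size are arbitrary choices (the exact values are $1$ and $3/2$):

```python
def triple_integral(f, n=40):
    """Midpoint-rule approximation of the integral of f over the unit cube [0,1]^3."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            y = (j + 0.5) * h
            for k in range(n):
                z = (k + 0.5) * h
                total += f(x, y, z)
    return total * h ** 3

# Integrand 1 recovers the volume of the region E itself
print(triple_integral(lambda x, y, z: 1.0))        # ~1.0
# General f gives the 'volume' under the 4D graph of f over E
print(triple_integral(lambda x, y, z: x + y + z))  # ~1.5
```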
---
# Top arXiv papers
• In this work, we demonstrate a new way to perform classical multiparty computing amongst parties with limited computational resources. Our method harnesses quantum resources to increase the computational power of the individual parties. We show how a set of clients restricted to linear classical processing are able to jointly compute a non-linear multivariable function that lies beyond their individual capabilities. The clients are only allowed to perform classical XOR gates and single-qubit gates on quantum states. We also examine the type of security that can be achieved in this limited setting. Finally, we provide a proof-of-concept implementation using photonic qubits, that allows four clients to compute a specific example of a multiparty function, the pairwise AND.
• Aug 22 2017 quant-ph cs.DS arXiv:1708.06002v1
We consider the problem of quantum state certification, where one is given $n$ copies of an unknown $d$-dimensional quantum mixed state $\rho$, and one wants to test whether $\rho$ is equal to some known mixed state $\sigma$ or else is $\epsilon$-far from $\sigma$. The goal is to use notably fewer copies than the $\Omega(d^2)$ needed for full tomography on $\rho$ (i.e., density estimation). We give two robust state certification algorithms: one with respect to fidelity using $n = O(d/\epsilon)$ copies, and one with respect to trace distance using $n = O(d/\epsilon^2)$ copies. The latter algorithm also applies when $\sigma$ is unknown as well. These copy complexities are optimal up to constant factors.
• We consider the inverse eigenvalue problem for entanglement witnesses, which asks for a characterization of their possible spectra (or equivalently, of the possible spectra resulting from positive linear maps of matrices). We completely solve this problem in the two-qubit case and we derive a large family of new necessary conditions on the spectra in arbitrary dimensions. We also establish a natural duality relationship with the set of absolutely separable states, and we completely characterize witnesses (i.e., separating hyperplanes) of that set when one of the local dimensions is 2.
• We give an adaptive algorithm which tests whether an unknown Boolean function $f\colon \{0, 1\}^n \to\{0, 1\}$ is unate, i.e. every variable of $f$ is either non-decreasing or non-increasing, or $\epsilon$-far from unate with one-sided error using $\widetilde{O}(n^{3/4}/\epsilon^2)$ queries. This improves on the best adaptive $O(n/\epsilon)$-query algorithm from Baleshzar, Chakrabarty, Pallavoor, Raskhodnikova and Seshadhri when $1/\epsilon \ll n^{1/4}$. Combined with the $\widetilde{\Omega}(n)$-query lower bound for non-adaptive algorithms with one-sided error of [CWX17, BCPRS17], we conclude that adaptivity helps for the testing of unateness with one-sided error. A crucial component of our algorithm is a new subroutine for finding bi-chromatic edges in the Boolean hypercube called adaptive edge search.
• Zero-field nuclear magnetic resonance (NMR) provides complementary analysis modalities to those of high-field NMR and allows for ultra-high-resolution spectroscopy and measurement of untruncated spin-spin interactions. Unlike for the high-field case, however, universal quantum control -- the ability to perform arbitrary unitary operations -- has not been experimentally demonstrated in zero-field NMR. This is because the Larmor frequency for all spins is identically zero at zero field, making it challenging to individually address different spin species. We realize a composite-pulse technique for arbitrary independent rotations of $^1$H and $^{13}$C spins in a two-spin system. Quantum-information-inspired randomized benchmarking and state tomography are used to evaluate the quality of the control. We experimentally demonstrate single-spin control for $^{13}$C with an average gate fidelity of $0.9960(2)$ and two-spin control via a controlled-not (CNOT) gate with an estimated fidelity of $0.99$. The combination of arbitrary single-spin gates and a CNOT gate is sufficient for universal quantum control of the nuclear spin system. The realization of complete spin control in zero-field NMR is an essential step towards applications to quantum simulation, entangled-state-assisted quantum metrology, and zero-field NMR spectroscopy.
• A pure multipartite quantum state is called absolutely maximally entangled (AME), if all reductions obtained by tracing out at least half of its parties are maximally mixed. However, the existence of such states is in many cases unclear. With the help of the weight enumerator machinery known from quantum error correcting codes and the generalized shadow inequalities, we obtain new bounds on the existence of AME states in higher dimensions. To complete the treatment on the weight enumerator machinery, the quantum MacWilliams identity is derived in the Bloch representation.
• We study the behavior of non-Markovianity with respect to the localization of the initial environmental state. The "amount" of non-Markovianity is measured using divisibility and distinguishability as indicators, employing several schemes to construct the measures. The system used is a qubit coupled to an environment modeled by an Ising spin chain kicked by ultra-short pulses of a magnetic field. In the integrable regime, non-Markovianity and localization do not have a simple relation, but as the chaotic regime is approached, simple relations emerge, which we explore in detail. We also study the non-Markovianity measures in the space of the parameters of the spin coherent states and point out that the pattern that appears is robust under the choice of the interaction Hamiltonian but does not have a KAM-like phase-space structure.
• In recent years, Unmanned Aerial Vehicle (UAV) technology has been introduced into the mining industry to conduct terrain surveying. This work investigates the application of UAVs with artificial lighting for measurement of rock fragmentation under poor lighting conditions, representing night shifts in surface mines or working conditions in underground mines. The study relies on indoor and outdoor experiments for rock fragmentation analysis using a quadrotor UAV. Comparison of the rock size distributions in both cases show that adequate artificial lighting enables similar accuracy to ideal lighting conditions.
• Aug 22 2017 hep-th arXiv:1708.06342v1
We present the universal form of $\eta$-symbols that can be applied to an arbitrary $E_{d(d)}$ exceptional field theory (EFT) up to $d=7$. We then express the $Y$-tensor, which governs the gauge algebra of EFT, as a quadratic form of the $\eta$-symbols. The usual definition of the $Y$-tensor strongly depends on the dimension of the compactification torus while it is not the case for our $Y$-tensor. Furthermore, using the $\eta$-symbols, we propose a universal form of the linear section equation. In particular, in the SL(5) EFT, we explicitly show the equivalence to the known linear section equation.
• Consider a simple complex Lie group $G$ acting diagonally on a triple flag variety $G/P_1\times G/P_2\times G/P_3$, where $P_i$ is parabolic subgroup of $G$. We provide an algorithm for systematically checking when this action has finitely many orbits. We then use this method to give a complete classification for when $G$ is of type $F_4$. The $E_6, E_7,$ and $E_8$ cases will be treated in a subsequent paper.
• This is the first paper in the sequence devoted to derived category of moduli spaces of curves of genus $0$ with marked points. We develop several approaches to describe it equivariantly with respect to the action of the symmetric group. We construct an equivariant full exceptional collection on the Losev-Manin space which categorifies derangements. Combining our results with the method of windows in derived categories, we construct an equivariant full exceptional collection on the GIT quotient (or its Kirwan resolution) birational contraction of the Losev-Manin space.
• In this paper we consider Witten diagrams at one loop in AdS space for scalar $\phi^3+\phi^4$ theory. After using Schwinger parametrization to trivialize the space-time loop integration, we extract the Mellin-Barnes representation for the one-loop corrections to the four-particle scattering up to an integration over the Schwinger parameters corresponding to the propagators of the internal particles running into the loop. We then discuss an approach to deal with those integrals.
• Joint models of longitudinal and survival data have become an important tool for modeling associations between longitudinal biomarkers and event processes. This association, which is the effect of the marker on the log-hazard, is assumed to be linear in existing shared random effects models with this assumption usually remaining unchecked. We present an extended framework of flexible additive joint models that allows the estimation of nonlinear, covariate specific associations by making use of Bayesian P-splines. The ability to capture truly linear and nonlinear associations is assessed in simulations and illustrated on the widely studied biomedical data on the rare fatal liver disease primary biliary cirrhosis. Our joint models are estimated in a Bayesian framework using structured additive predictors allowing for great flexibility in the specification of smooth nonlinear, time-varying and random effects terms. The model is implemented in the R package bamlss to facilitate the application of this flexible joint model.
In a recent work devoted to the magnetism of Li$_{2}$CuO$_{2}$, Shu et al. [New J. Phys. 19 (2017) 023026] have proposed a "simplified" unfrustrated microscopic model that differs considerably from the models that have been refined through decades of prior work. We show that the proposed model is at odds with known experimental data, including the reported magnetic susceptibility $\chi(T)$ data up to 550 K. Using a high-temperature expansion to the eighth order for $\chi(T)$, we show that the experimental data for Li$_{2}$CuO$_{2}$ are consistent with the prior model derived from inelastic neutron scattering (INS) studies. We also establish the $T$-range of validity for a Curie-Weiss law for the real frustrated magnetic system. We argue that the knowledge of the long-range ordered magnetic structure for $T<T_N$ and of $\chi(T)$ in a restricted $T$-range provides insufficient information to extract all the relevant couplings in frustrated magnets; the saturation field and INS data must also be used to determine several exchange couplings, including the weak but decisive frustrating antiferromagnetic (AFM) interchain couplings.
We consider the statistical inverse problem of recovering a function $f: M \to \mathbb R$, where $M$ is a smooth compact Riemannian manifold with boundary, from measurements of general $X$-ray transforms $I_a(f)$ of $f$, corrupted by additive Gaussian noise. For $M$ equal to the unit disk with 'flat' geometry and $a=0$ this reduces to the standard Radon transform, but our general setting allows for anisotropic media $M$ and can further model local 'attenuation' effects -- both highly relevant in practical imaging problems such as SPECT tomography. We propose a nonparametric Bayesian inference approach based on standard Gaussian process priors for $f$. The posterior reconstruction of $f$ corresponds to a Tikhonov regulariser with a reproducing kernel Hilbert space norm penalty that does not require the calculation of the singular value decomposition of the forward operator $I_a$. We prove Bernstein-von Mises theorems that entail that posterior-based inferences such as credible sets are valid and optimal from a frequentist point of view for a large family of semi-parametric aspects of $f$. In particular we derive the asymptotic distribution of smooth linear functionals of the Tikhonov regulariser, which is shown to attain the semi-parametric Cramér-Rao information bound. The proofs rely on an invertibility result for the 'Fisher information' operator $I_a^*I_a$ between suitable function spaces, a result of independent interest that relies on techniques from microlocal analysis. We illustrate the performance of the proposed method via simulations in various settings.
• Voltage control effects provide an energy-efficient means of tailoring material properties, especially in highly integrated nanoscale devices. However, only insulating and semiconducting systems can be controlled so far. In metallic systems, there is no electric field due to electron screening effects and thus no such control effect exists. Here we demonstrate that metallic systems can also be controlled electrically through ionic not electronic effects. In a Pt/Co structure, the control of the metallic Pt/Co interface can lead to unprecedented control effects on the magnetic properties of the entire structure. Consequently, the magnetization and perpendicular magnetic anisotropy of the Co layer can be independently manipulated to any desired state, the efficient spin toques can be enhanced about 3.5 times, and the switching current can be reduced about one order of magnitude. This ability to control a metallic system may be extended to control other physical phenomena.
• We discuss the standard ab initio calculation of the refractive index by means of the scalar dielectric function and show its inherent limitations. To overcome these, we start from the general, microscopic wave equation in materials in terms of the frequency- and wavevector-dependent dielectric tensor, and we investigate under which conditions the standard treatment can be justified. We then provide a more general method of calculating the frequency- and direction-dependent refractive indices by means of a $(2 \times 2)$ complex-valued "optical tensor", which can be calculated from a purely frequency-dependent conductivity tensor. Finally, we illustrate the meaning of this optical tensor for the prediction of optical material properties such as birefringence and optical activity.
• We introduce a notion of quasilinear parabolic equations over metric measure spaces. Under sharp structural conditions, we prove that local weak solutions are locally bounded and satisfy the parabolic Harnack inequality. Applications include the parabolic maximum principle and pointwise estimates for weak solutions.
• We consider a class of non-equilibrium pure states, which are generally present in an isolated quantum statistical system. These are states of the form $|\Psi\rangle=e^{-{\beta H \over 2}} U e^{{\beta H \over 2}} |\Psi_0\rangle$, where $U$ is a unitary made out of simple operators and $|\Psi_0\rangle$ is a typical equilibrium pure state with sharply peaked energy. We argue that in a system with a holographic dual these states have a natural interpretation as an AdS black hole with transient excitations behind the horizon. We explore the interpretation of these states as pure states undergoing a time-dependent spontaneous fluctuation out of equilibrium. While these states are atypical and the microscopic phases of the wavefunction are correlated with the matrix elements of simple operators, the states are partly disguised as equilibrium states due to cancellations between contributions from different coarse-grained energy bins. These cancellations are guaranteed by the KMS condition of the underlying equilibrium state $|\Psi_0\rangle$. However, in correlators which include the Hamiltonian $H$ these cancellations are spoiled and the non-equilibrium nature of the state $|\Psi\rangle$ can be detected. We discuss connections with the proposal that local observables behind the horizon are realized as state-dependent operators. The states studied in this paper may be useful for implementing an analogue of the "traversable wormhole" protocol for a 1-sided black hole, which could potentially allow us to extract the excitation from behind the horizon. We include some pedagogical background material.
• We obtain exact analytical solutions for a class of SO($l$) Higgs field theories in a non-dynamic background $n$-dimensional anti de Sitter space. These finite transverse energy solutions are maximally symmetric $p$-dimensional topological defects where $n=(p+1)+l$. The radius of curvature of anti de Sitter space provides an extra length scale that allows us to study the equations of motion in a limit where the masses of the Higgs field and the massive vector bosons are both vanishing. We call this the double BPS limit. In anti de Sitter space, the equations of motion depend on both $p$ and $l$. The exact analytical solutions are expressed in terms of standard special functions. The known exact analytical solutions are for kink-like defects ($p=0,1,2,\dotsc;\, l=1$), vortex-like defects ($p=1,2,3;\, l=2$), and the 'tHooft-Polyakov monopole ($p=0;\, l=3$). In certain cases where we did not find an analytic solution, we present numerical solutions to the equations of motion. The asymptotically exponentially increasing volume with distance of anti de Sitter space imposes different constraints than those found in the study of defects in Minkowski space.
• We review the recent highlights of theoretical flavour physics, based on the theory summary talk given at FPCP2017. Over the past years, a number of intriguing anomalies have emerged in flavour violating $K$ and $B$ meson decays, constituting some of the most promising hints for the presence of physics beyond the Standard Model. We discuss the theory status of these anomalies and outline possible future directions to test the underlying New Physics.
• We first give an alternative proof, based on a simple geometric argument, of a result of Marian, Oprea and Pandharipande on top Segre classes of the tautological bundles on Hilbert schemes of $K3$ surfaces equipped with a line bundle. We then turn to the blow-up of $K3$ surface at one point and establish vanishing results for the corresponding top Segre classes in a certain range. This determines, at least theoretically, all top Segre classes of tautological bundles for any pair $(\Sigma,H),\,H\in {\rm Pic}\,\Sigma$.
• For any quasi-triangular Hopf algebra, there exists the universal R-matrix, which satisfies the Yang-Baxter equation. It is known that the adjoint action of the universal R-matrix on the elements of the tensor square of the algebra constitutes a quantum Yang-Baxter map, which satisfies the set-theoretic Yang-Baxter equation. The map has a zero curvature representation among L-operators defined as images of the universal R-matrix. We find that the zero curvature representation can be solved by the Gauss decomposition of a product of L-operators. We thereby obtain a quasi-determinant expression of the quantum Yang-Baxter map associated with the quantum algebra $U_{q}(gl(n))$. Moreover, the map is identified with products of quasi-Plücker coordinates over a matrix composed of the L-operators. We also consider the quasi-classical limit, where the underlying quantum algebra reduces to a Poisson algebra. The quasi-determinant expression of the quantum Yang-Baxter map reduces to ratios of determinants, which give a new expression of a classical Yang-Baxter map.
• In order to prove numerically the global existence and uniqueness of smooth solutions of a fourth order, nonlinear PDE, we derive rigorous a-posteriori upper bounds on the supremum of the numerical range of the linearized operator. These bounds also have to be easily computable in order to be applicable to our rigorous a-posteriori methods, as we use them in each time-step of the numerical discretization. The final goal is to establish global bounds on smooth local solutions, which then establish global uniqueness.
• We present a symmetry-based approach for shape coexistence in nuclei, founded on the concept of partial dynamical symmetry (PDS). The latter corresponds to a situation when only selected states (or bands of states) of the coexisting configurations preserve the symmetry while other states are mixed. We construct explicitly critical-point Hamiltonians with two or three PDSs of the type U(5), SU(3), ${\overline{\rm SU(3)}}$ and SO(6), appropriate to double or triple coexistence of spherical, prolate, oblate and $\gamma$-soft deformed shapes, respectively. In each case, we analyze the topology of the energy surface with multiple minima and corresponding normal modes. Characteristic features and symmetry attributes of the quantum spectra and wave functions are discussed. Analytic expressions for quadrupole moments and $E2$ rates involving the remaining solvable states are derived and isomeric states are identified by means of selection rules.
• We propose a simple, yet powerful regularization technique that can be used to significantly improve both the pairwise and triplet losses in learning local feature descriptors. The idea is that in order to fully utilize the expressive power of the descriptor space, good local feature descriptors should be sufficiently "spread-out" over the space. In this work, we propose a regularization term that maximizes the spread of the feature descriptors, inspired by the property of the uniform distribution. We show that the proposed regularization with triplet loss outperforms existing Euclidean distance based descriptor learning techniques by a large margin. As an extension, the proposed regularization technique can also be used to improve image-level deep feature embedding.
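As an illustration of the "spread-out" idea, here is a minimal numpy sketch; the exact form of the penalty below (mean squared cosine similarity over all distinct descriptor pairs) is an assumption for illustration, not the loss used in the paper:

```python
import numpy as np

def spread_regularizer(desc):
    """Penalty encouraging unit-normalized descriptors to spread out:
    mean squared cosine similarity over all distinct pairs of rows."""
    d = desc / np.linalg.norm(desc, axis=1, keepdims=True)  # unit-normalize rows
    g = d @ d.T                                             # pairwise cosine similarities
    n = len(d)
    off = g[~np.eye(n, dtype=bool)]                         # drop self-similarities
    return np.mean(off ** 2)

rng = np.random.default_rng(0)
# Random high-dimensional descriptors are already nearly orthogonal -> small penalty
random_desc = rng.normal(size=(64, 128))
# Collapsed descriptors (all identical) -> maximal penalty of 1
collapsed = np.tile(rng.normal(size=(1, 128)), (64, 1))
print(spread_regularizer(random_desc) < spread_regularizer(collapsed))  # True
```

Minimizing such a term pushes descriptors toward the near-orthogonality that random directions on a high-dimensional sphere exhibit.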
• Aug 22 2017 math.RT math.DG math.RA arXiv:1708.06318v1
We examine the N-Koszul calculus for the N-symmetric algebras. The case N=2 corresponds to the Elie Cartan calculus. We conjecture that, as in the case N=2, the N-Cartan calculus extends to manifolds when N>2, which would provide a new type of noncommutative differential geometry.
• Sensitive, real-time optical magnetometry with nitrogen-vacancy centers in diamond relies on accurate imaging of small ($\ll 10^{-2}$) fractional fluorescence changes across the diamond sample. We discuss the limitations on magnetic-field sensitivity resulting from the limited number of photoelectrons that a camera can record in a given time. Several types of camera sensors are analyzed and the smallest measurable magnetic-field change is estimated for each type. We show that most common sensors are of limited use in such applications, while certain highly specialized cameras allow one to achieve nanotesla-level sensitivity in $1$ s of combined exposure. Finally, we demonstrate results obtained with a lock-in camera that pave the way for real-time, wide-field magnetometry at the nanotesla level and with micrometer resolution.
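For a rough sense of the photoelectron-count limit discussed above, here is a back-of-envelope shot-noise estimate; all numbers below are illustrative assumptions, not values from the paper:

```python
import math

# Photon-shot-noise limit for ODMR magnetometry (illustrative numbers only)
gamma = 28e9          # NV gyromagnetic ratio, Hz per tesla
linewidth = 1e6       # ODMR linewidth, Hz (assumed)
contrast = 0.02       # fractional ODMR contrast (assumed)
n_e = 1e12            # photoelectrons collected in the combined exposure (assumed)

frac_noise = 1 / math.sqrt(n_e)                   # smallest resolvable fractional change
delta_b = linewidth / (gamma * contrast) * frac_noise
print(f"smallest detectable field ~ {delta_b*1e9:.2f} nT")  # ~1.79 nT under these assumptions
```

The point of the estimate is that nanotesla-level fields require on the order of $10^{12}$ recorded photoelectrons, which is exactly where camera full-well capacity and frame rate become the bottleneck.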
• Let $(X,J,\omega,g)$ be a complete $n$-dimensional Kähler manifold. A theorem of Gromov \cite{G} states that if the Kähler form is $d$-bounded, then the space of harmonic $L_2$ forms of degree $k$ is trivial, unless $k=\frac{n}{2}$. Starting with a contact manifold $(M,\alpha)$, we show that the same conclusion does not hold in the category of almost Kähler manifolds. Let $(X,J,g)$ be a complete almost Hermitian manifold of dimension four. We prove that the reduced $L_2$ $2^{nd}$-cohomology group decomposes as a direct sum of the closures of the invariant and anti-invariant $L_2$-cohomology. This generalizes a decomposition theorem by Drǎghici, Li and Zhang \cite{DLZ} for $4$-dimensional closed almost complex manifolds to the $L_2$-setting.
• The numerical renormalization group (NRG) is tailored to describe interacting impurity models in equilibrium, but faces limitations for steady-state nonequilibrium, arising, e.g., due to an applied bias voltage. We show that these limitations can be overcome by describing the thermal leads using a thermofield approach, integrating out high energy modes using NRG, and then treating the nonequilibrium dynamics at low energies using a quench protocol, implemented using the time-dependent density matrix renormalization group (tDMRG). This approach yields quantitatively reliable results down to the exponentially small energy scales characteristic of impurity models. We present results of benchmark quality for the temperature and magnetic field dependence of the zero-bias conductance peak for the single-impurity Anderson model.
• We have conducted experimental measurements and numerical simulations of a precession driven flow in a cylindrical cavity. The study is dedicated to the precession dynamo experiment currently under construction at Helmholtz-Zentrum Dresden-Rossendorf (HZDR) and aims at the evaluation of the hydrodynamic flow with respect to its ability to drive a dynamo. We focus on the strongly non-linear regime in which the flow is essentially composed of the directly forced primary Kelvin mode and higher modes in terms of standing inertial waves arising from non-linear self-interactions. We obtain an excellent agreement between experiment and simulation with regard to both flow amplitudes and flow geometry. A peculiarity is the resonance-like emergence of an axisymmetric mode that represents a double roll structure in the meridional plane. Kinematic simulations of the magnetic field evolution induced by the time-averaged flow yield dynamo action at critical magnetic Reynolds numbers around ${\rm{Rm}}^{\rm{c}}\approx 430$, which is well within the range of the planned liquid sodium experiment.
• Optical communication systems represent the backbone of modern communication networks. Since their deployment, different fiber technologies have been used to deal with optical fiber impairments such as dispersion-shifted fibers and dispersion-compensation fibers. In recent years, thanks to the introduction of coherent detection based systems, fiber impairments can be mitigated using digital signal processing (DSP) algorithms. Coherent systems are used in the current 100 Gbps wavelength-division multiplexing (WDM) standard technology. They allow the increase of spectral efficiency by using multi-level modulation formats, and are combined with DSP techniques to combat the linear fiber distortions. In addition to linear impairments, the next generation 400 Gbps/1 Tbps WDM systems are also more affected by the fiber nonlinearity due to the Kerr effect. At high input power, the fiber nonlinear effects become more important and their compensation is required to improve the transmission performance. Several approaches have been proposed to deal with the fiber nonlinearity. In this paper, after a brief description of the Kerr-induced nonlinear effects, a survey on the fiber nonlinearity compensation (NLC) techniques is provided. We focus on the well-known NLC techniques and discuss their performance, as well as their implementation and complexity. An extension of the inter-subcarrier nonlinear interference canceler approach is also proposed. A performance evaluation of the well-known NLC techniques and the proposed approach is provided in the context of Nyquist and super-Nyquist superchannel systems.
• In this paper we present a translation from the quantum programming language Quipper to the QPMC model checker, with the main aim of verifying Quipper programs. Quipper is an embedded functional programming language for quantum computation. It is above all a circuit description language; for this reason it uses the vector state formalism, and its main purpose is to make circuit implementation easy by providing high-level operations for circuit manipulation. Quipper provides both a high-level circuit building interface and a simulator. QPMC is a model checker for quantum protocols based on the density matrix formalism. QPMC extends the probabilistic model checker IscasMC, allowing one to formally verify properties specified in the temporal logic QCTL on Quantum Markov Chains. We implemented and tested our translation on several quantum algorithms, including Grover's quantum search.
• A tabletop low-noise differential amplifier with a bandwidth of 100 kHz is presented. Low voltage drifts of the order of 100 nV/day are reached by thermally stabilizing relevant amplifier components. The input leakage current is below 100 fA. Input-stage errors are reduced by extensive circuitry. Voltage noise, current noise, input capacitance and input current are extraordinarily low. The input resistance is larger than 1 TOhm. The amplifiers were tested with and deployed for electrical transport measurements of quantum devices at cryogenic temperatures.
• We study a generalized nonlocal theory of gravity which, in specific limits, can become either the curvature non-local or teleparallel non-local theory. Using the Noether Symmetry Approach, we find that the coupling functions coming from the non-local terms are constrained to be either exponential or linear in form. It is well known that in some non-local theories, a certain kind of exponential non-local coupling is needed in order to achieve a renormalizable theory. In this paper, we explicitly show that this kind of coupling does not need to be introduced by hand; instead, it appears naturally from the symmetries of the Lagrangian in flat Friedmann-Robertson-Walker cosmology. Finally, we find de Sitter and power-law cosmological solutions for different nonlocal theories. The symmetries of the generalized non-local theory are also found, and some cosmological solutions are obtained under the full theory.
• Manual annotations are a prerequisite for many applications of machine learning. However, weaknesses in the annotation process itself are easy to overlook. In particular, scholars often choose what information to give to annotators without examining these decisions empirically. For subjective tasks such as sentiment analysis, sarcasm, and stance detection, such choices can impact results. Here, for the task of political stance detection on Twitter, we show that providing too little context can result in noisy and uncertain annotations, whereas providing too strong a context may cause it to outweigh other signals. To characterize and reduce these biases, we develop ConStance, a general model for reasoning about annotations across information conditions. Given conflicting labels produced by multiple annotators seeing the same instances with different contexts, ConStance simultaneously estimates gold standard labels and also learns a classifier for new instances. We show that the classifier learned by ConStance outperforms a variety of baselines at predicting political stance, while the model's interpretable parameters shed light on the effects of each context.
• Mobile crowdsensing allows a large number of mobile devices to measure phenomena of common interests and form a body of knowledge about natural and social environments. In order to get location annotations for indoor mobile crowdsensing, reference tags are usually deployed which are susceptible to tampering and compromises by attackers. In this work, we consider three types of location-related attacks including tag forgery, tag misplacement, and tag removal. Different detection algorithms are proposed to deal with these attacks. First, we introduce location-dependent fingerprints as supplementary information for better location identification. A truth discovery algorithm is then proposed to detect falsified data. Moreover, visiting patterns are utilized for the detection of tag misplacement and removal. Experiments on both crowdsensed and emulated dataset show that the proposed algorithms can detect all three types of attacks with high accuracy.
• In this short note we provide a quantitative version of the classical Runge approximation property for second order elliptic operators. This relies on quantitative unique continuation results and duality arguments. We show that these estimates are essentially optimal. As a model application we provide a new proof of the result from \cite{F07}, \cite{AK12} on stability for the Calderón problem with local data.
• This report considers linear multistep methods through time filtering. The approach has several advantages. It is modular and requires the addition of only one line of additional code. Error estimation and variable timesteps are straightforward, and the individual effect of each step is conceptually clear. We present its development for the backward Euler method and a curvature-reducing time filter, leading to a 2-step, strongly A-stable, second order linear multistep method.
• Bacteria populations rely on mechanisms such as quorum sensing to coordinate complex tasks that cannot be achieved by a single bacterium. Quorum sensing is used to measure the local bacteria population density, and it controls cooperation by ensuring that a bacterium only commits the resources for cooperation when it expects its neighbors to reciprocate. This paper proposes a simple model for sharing a resource in a bacterial environment, where knowledge of the population influences each bacterium's behavior. Game theory is used to model the behavioral dynamics, where the net payoff (i.e., utility) for each bacterium is a function of its current behavior and that of the other bacteria. The game is first evaluated with perfect knowledge of the population. Then, the unreliability of diffusion introduces uncertainty in the local population estimate and changes the perceived payoffs. The results demonstrate the sensitivity to the system parameters and how population uncertainty can overcome a lack of explicit coordination.
• General dark solitons and mixed solutions consisting of dark solitons and breathers for the third-type Davey-Stewartson (DS-III) equation are derived by employing the bilinear method. By introducing the two differential operators, semi-rational solutions consisting of rogue waves, breathers and solitons are generated. These semi-rational solutions are given in terms of determinants whose matrix elements have simple algebraic expressions. Under suitable parametric conditions, we derive general rogue wave solutions expressed in terms of rational functions. It is shown that the fundamental (simplest) rogue waves are line rogue waves. It is also shown that the multi-rogue waves describe interactions of several fundamental rogue waves, which would generate interesting curvy wave patterns. The higher order rogue waves originate from a localized lump and retreat back to it. Several types of hybrid solutions composed of rogue waves, breathers and solitons have also been illustrated. Specifically, these semi-rational solutions have a new phenomenon: lumps form on dark solitons and gradual separation from the dark solitons is observed.
• Networks are models representing relationships between entities. Often these relationships are explicitly given, or we must learn a representation which generalizes and predicts observed behavior in underlying individual data (e.g. attributes or labels). Whether given or inferred, choosing the best representation affects subsequent tasks and questions on the network. This work focuses on model selection to evaluate network representations from data, focusing on fundamental predictive tasks on networks. We present a modular methodology using general, interpretable network models, task neighborhood functions found across domains, and several criteria for robust model selection. We demonstrate our methodology on three online user activity datasets and show that network model selection for the appropriate network task vs. an alternate task increases performance by an order of magnitude in our experiments.
• Gaussian processes (GPs) are commonly used as models for functions, time series, and spatial fields, but they are computationally infeasible for large datasets. Focusing on the typical setting of modeling observations as a GP plus an additive nugget or noise term, we propose a generalization of the Vecchia (1988) approach as a framework for GP approximations. We show that our general Vecchia approach contains many popular existing GP approximations as special cases, allowing for comparisons among the different methods within a unified framework. Representing the models by directed acyclic graphs, we determine the sparsity of the matrices necessary for inference, which leads to new insights regarding the computational properties. Based on these results, we propose a novel sparse general Vecchia approximation, which ensures computational feasibility for large datasets but can lead to tremendous improvements in approximation accuracy over Vecchia's original approach. We provide several theoretical results, and conduct numerical comparisons. We conclude with guidelines for the use of Vecchia approximations.
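A toy numpy illustration of the Vecchia idea, sketched under simplified assumptions (previous-$m$ conditioning sets in a fixed ordering and a 1-D exponential kernel; the paper's general framework is far broader): by the chain rule, conditioning each observation on *all* previous points recovers the exact Gaussian log-likelihood, while smaller conditioning sets give cheap approximations.

```python
import numpy as np

def vecchia_loglik(y, cov, m):
    """Vecchia approximation: condition each y_i on (up to) the m
    immediately preceding observations in the given ordering."""
    ll = 0.0
    for i in range(len(y)):
        nb = list(range(max(0, i - m), i))              # conditioning set
        if nb:
            S = cov[np.ix_(nb, nb)]
            c = cov[np.ix_(nb, [i])].ravel()
            w = np.linalg.solve(S, c)
            mu, var = w @ y[nb], cov[i, i] - w @ c      # Gaussian conditional
        else:
            mu, var = 0.0, cov[i, i]
        ll -= 0.5 * (np.log(2 * np.pi * var) + (y[i] - mu) ** 2 / var)
    return ll

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 8)
cov = np.exp(-np.abs(x[:, None] - x[None, :])) + 1e-8 * np.eye(8)   # exponential kernel
y = rng.multivariate_normal(np.zeros(8), cov)

# Exact zero-mean Gaussian log-density for comparison
sign, logdet = np.linalg.slogdet(cov)
exact = -0.5 * (len(y) * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(cov, y))
print(np.isclose(vecchia_loglik(y, cov, m=7), exact))   # conditioning on all past -> exact
```

The computational win comes from each conditional involving only an $m \times m$ solve instead of the full $n \times n$ covariance.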
• Depth estimation from stereo images remains a challenge even though it has been studied for decades. The KITTI benchmark shows that the state-of-the-art solutions offer accurate depth estimation, but are still computationally complex and often require a GPU or FPGA implementation. In this paper we aim at increasing the accuracy of depth map estimation and reducing the computational complexity by using information from previous frames. We propose to transform the disparity map of the previous frame into the current frame, relying on the estimated ego-motion, and use this map as the prediction for the Kalman filter in the disparity space. Then, we update the predicted disparity map using the newly matched one. This way we reduce the disparity search space and flickering between consecutive frames, thus increasing the computational efficiency of the algorithm. In the end, we validate the proposed approach on real-world data from the KITTI benchmark suite and show that the proposed algorithm yields more accurate results, while at the same time reducing the disparity search space.
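The predict/update step described above can be sketched per pixel as a scalar Kalman filter. This is a hedged toy model: the variable names and noise values are illustrative assumptions, and the actual method operates on whole disparity maps with ego-motion warping.

```python
def kalman_disparity_update(d_pred, p_pred, d_meas, r_meas, q_process=0.1):
    """One scalar Kalman step per pixel: predict adds process noise,
    update fuses the warped previous disparity with the new match."""
    p = p_pred + q_process          # predicted variance after ego-motion warp
    k = p / (p + r_meas)            # Kalman gain
    d = d_pred + k * (d_meas - d_pred)
    return d, (1 - k) * p

# Toy example: confident prediction vs a noisy new measurement
d, p = kalman_disparity_update(d_pred=10.0, p_pred=0.1, d_meas=12.0, r_meas=2.0)
print(round(d, 3), round(p, 3))  # 10.182 0.182
```

Because the fused estimate stays close to the prediction, the disparity search in the next frame can be restricted to a narrow band around it, which is where the computational savings come from.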
• In this note we analyse \emph{quantitative} approximation properties of a certain class of \emph{nonlocal} equations: Viewing the fractional heat equation as a model problem, which involves both \emph{local} and \emph{nonlocal} pseudodifferential operators, we study quantitative approximation properties of solutions to it. First, relying on Runge type arguments, we give an alternative proof of certain \emph{qualitative} approximation results from \cite{DSV16}. Using propagation of smallness arguments, we then provide bounds on the \emph{cost} of approximate controllability and thus quantify the approximation properties of solutions to the fractional heat equation. Finally, we discuss generalizations of these results to a larger class of operators involving both local and nonlocal contributions.
• To efficiently establish training databases for machine learning methods, collaborative and crowdsourcing platforms have been investigated to collectively tackle the annotation effort. However, when this concept is ported to the medical imaging domain, reading expertise will have a direct impact on the annotation accuracy. In this study, we examine the impact of expertise and the amount of available annotations on the accuracy outcome of a liver segmentation problem in an abdominal computed tomography (CT) image database. In controlled experiments, we study this impact for different types of weak annotations. To address the decrease in accuracy associated with lower expertise, we propose a method for outlier correction making use of a weakly labelled atlas. Using this approach, we demonstrate that weak annotations subject to high error rates can achieve a similarly high accuracy as state-of-the-art multi-atlas segmentation approaches relying on a large amount of expert manual segmentations. Annotations of this nature can realistically be obtained from a non-expert crowd and can potentially enable crowdsourcing of weak annotation tasks for medical image analysis.
• In this paper, we study the local asymptotics of the eigenvalues and eigenvectors for a general class of sample covariance matrices, where the spectrum of the population covariance matrices can have a finite number of spikes and bulk components. Our paper is a unified framework combining the spiked model and covariance matrices without outliers. Examples and statistical applications are considered to illustrate our results.
• In Part I of this paper, we presented a Hilbert-style system $\Sigma_D$ axiomatizing the stit logic of justification announcements (JA-STIT) interpreted over models with a discrete time structure. In this part, we prove three frame definability results for $\Sigma_D$ using three different definitions of a frame, plus yet another version of the completeness result.
• The Calderón problem for the fractional Schrödinger equation was introduced in the work \cite{GSU}, which gave a global uniqueness result also in the partial data case. This article improves this result in two ways. First, we prove a quantitative uniqueness result showing that this inverse problem enjoys logarithmic stability under suitable a priori bounds. Second, we show that the results are valid for potentials in scale-invariant $L^p$ or negative order Sobolev spaces. A key point is a quantitative approximation property for solutions of fractional equations, obtained by combining a careful propagation of smallness analysis for the Caffarelli-Silvestre extension and a duality argument.
• For well-generated complex reflection groups, Chapuy and Stump gave a simple product for a generating function counting reflection factorizations of a Coxeter element by their length. This is refined here to record the number of reflections used from each orbit of hyperplanes. The proof is case-by-case via the classification of well-generated groups. It implies a new expression for the Coxeter number, expressed via data coming from a hyperplane orbit; a case-free proof of this due to J. Michel is included.
Māris Ozols Aug 03 2017 09:34 UTC
If I'm not mistaken, what you describe here is equivalent to the [QR decomposition][1]. The matrices $R_{ij}$ that act non-trivially only in a two-dimensional subspace are known as [Givens rotations][2]. The fact that any $n \times n$ unitary can be decomposed as a sequence of Givens rotations is ex
...(continued)
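For concreteness, here is a small numpy sketch of the decomposition described in the comment above, in the real-orthogonal case (the unitary case replaces transposes by conjugate transposes):

```python
import numpy as np

def givens(a, b):
    """2x2 rotation G with G @ [a, b] = [r, 0]."""
    r = np.hypot(a, b)
    return np.array([[a / r, b / r], [-b / r, a / r]])

def qr_by_givens(A):
    """Reduce A to upper-triangular R by Givens rotations acting on
    row pairs; accumulating their transposes gives Q with A = Q @ R."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):          # zero entries below the diagonal, bottom-up
            if R[i, j] == 0.0:
                continue
            G = givens(R[i - 1, j], R[i, j])
            R[[i - 1, i], :] = G @ R[[i - 1, i], :]
            Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T
    return Q, R

A = np.array([[4.0, 1.0], [2.0, 3.0], [0.0, 5.0]])
Q, R = qr_by_givens(A)
print(np.allclose(Q @ R, A))  # True
```

Each rotation touches only two rows, which is exactly the "non-trivial only in a two-dimensional subspace" structure of the $R_{ij}$ matrices mentioned above.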
gae Jul 26 2017 21:19 UTC
For those interested in the literature on teleportation simulation of quantum channels, a detailed and *comprehensive* review is provided in Supplementary Note 8 of https://images.nature.com/original/nature-assets/ncomms/2017/170426/ncomms15043/extref/ncomms15043-s1.pdf
The note describes well the t
...(continued)
Maciej Malinowski Jul 26 2017 15:56 UTC
In what sense is the ground state for large detuning ordered and antiferromagnetic? I understand that there is symmetry breaking, but other than that, what is the fundamental difference between ground states for large negative and large positive detunings? It seems to be they both exhibit some order
...(continued)
Stefano Pirandola Jul 26 2017 15:28 UTC
The performance of the memory assisted MDI-QKD with "quasi-EPR" sources is remarkable. It improves the key rate by 5 orders of magnitude over the PLOB bound at about 600 km (take a look at Figure 4).
Māris Ozols Jul 26 2017 11:07 UTC
Conway's list still has four other $1000 problems left:
https://oeis.org/A248380/a248380.pdf
SHUAI ZHANG Jul 26 2017 00:20 UTC
I am still working on improving this survey. If you have any suggestions, questions or find any mistakes, please do not hesitate to contact me: shuai.zhang@student.unsw.edu.au.
Alvaro M. Alhambra Jul 24 2017 16:10 UTC
This paper has just been updated and we thought it would be a good
idea to advertise it here. It was originally submitted a year ago, and
it has now been essentially rewritten, with two new authors added.
We have fixed some of the original results and now we:
-Show how some fundamental theorem
...(continued)
Steve Flammia Jul 21 2017 13:43 UTC
Actually, there is even earlier work that shows this result. In [arXiv:1109.6887][1], Magesan, Gambetta, and Emerson showed that for any Pauli channel the diamond distance to the identity is equal to the trace distance between the associated Choi states. They prefer to phrase their results in terms
...(continued)
Stefano Pirandola Jul 21 2017 09:43 UTC
This is very interesting. In my reading list!
flow-er-1.0.3: More directional operators
Control.Flower.Applicative.Lazy
Description
Synopsis
# Documentation
ap :: Applicative f => f (a -> b) -> f a -> f b Source #
A simple alias for <*>
>>> ap (Just (+1)) (Just 4)
Just 5
lift2 :: Applicative f => (a -> b -> c) -> f a -> f b -> f c Source #
An alias for liftA2, adopting the unified "lift" naming
>>> lift2 (+) (Just 4) (Just 1)
Just 5
lift3 :: Applicative f => (a -> b -> c -> d) -> f a -> f b -> f c -> f d Source #
An alias for liftA3, adopting the unified "lift" naming
>>> lift3 (\x y z -> x * y * z) (Just 4) (Just 3) (Just 2)
Just 24
(<*) :: Applicative f => f (a -> b) -> f a -> f b infixr 4 Source #
Right-associative, left-flowing applicative operator
>>> Just (+1) <* Just 4
Just 5
(*>) :: Applicative f => f a -> f (a -> b) -> f b infixl 4 Source #
Left-associative, right-flowing applicative operator
>>> Just 4 *> Just (+1)
Just 5
(<$*) :: Applicative f => (a -> b -> c) -> f a -> f b -> f c infixr 4 Source #
Right-associative, left-flowing lift2 operator
>>> (+) <$* Just 4 |< Just 1
Just 5
(*$>) :: Applicative f => f a -> (a -> b -> c) -> f b -> f c infixl 4 Source #
Left-associative, right-flowing lift2 operator
>>> Just 4 >| Just 1 *$> (+)
Just 5
(<$**) :: Applicative f => (a -> b -> c -> d) -> f a -> f b -> f c -> f d infixr 4 Source #
Right-associative, left-flowing lift3 operator
>>> (\x y z -> x * y * z) <$** Just 4 |< Just 3 |< Just 2
Just 24
(**$>) :: Applicative f => f a -> (a -> b -> c -> d) -> f b -> f c -> f d infixl 4 Source #
Left-associative, right-flowing lift3 operator
>>> Just 2 >| Just 3 >| Just 4 **$> \x y z -> x * y * z
Just 24
# CS 5220: Applications of Parallel Computers

## Instruction-level parallelism

## 01 Sep 2015
## Example 1: Laundry

- Three stages to laundry: wash, dry, fold
- Three loads: darks, lights, underwear
- How long will this take?
## How long will it take?
Three loads of laundry to wash, dry, fold. One hour per stage. What is the total time?

A: 9 hours =: You spend too much time on laundry
A: 5 hours =: That's what I had in mind!
A: 3 hours =: Maybe at a laundromat; what if only one washer/drier?
## Setup

- Three *functional units*
  - Washer
  - Drier
  - Folding table
- Different cases
  - One load at a time
  - Three loads with one washer/drier
  - Three loads with friends at the laundromat
## Serial execution (9 hours)
| Hour   | 1    | 2   | 3    | 4    | 5   | 6    | 7    | 8   | 9    |
|--------|------|-----|------|------|-----|------|------|-----|------|
| Load 1 | Wash | Dry | Fold |      |     |      |      |     |      |
| Load 2 |      |     |      | Wash | Dry | Fold |      |     |      |
| Load 3 |      |     |      |      |     |      | Wash | Dry | Fold |
## Pipelined execution (5 hours)
| Hour   | 1    | 2    | 3    | 4    | 5    |
|--------|------|------|------|------|------|
| Load 1 | Wash | Dry  | Fold |      |      |
| Load 2 |      | Wash | Dry  | Fold |      |
| Load 3 |      |      | Wash | Dry  | Fold |
## Parallel units (3 hours)
| Hour   | 1    | 2   | 3    |
|--------|------|-----|------|
| Load 1 | Wash | Dry | Fold |
| Load 2 | Wash | Dry | Fold |
| Load 3 | Wash | Dry | Fold |
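The three laundry schedules above reduce to simple formulas (one hour per stage):

```python
def serial(loads, stages):
    return loads * stages            # one load must finish before the next starts

def pipelined(loads, stages):
    return stages + (loads - 1)      # fill the pipe, then one load completes per hour

def parallel(loads, stages):
    return stages                    # every load has its own washer/drier/table

print(serial(3, 3), pipelined(3, 3), parallel(3, 3))  # 9 5 3
```

Note that pipelining improves the completion rate (one load per hour once the pipe is full) without changing the three-hour latency of any individual load.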
## Example 2: Arithmetic

$$2 \times 2 + 3 \times 3$$

> A child of five would understand this. Send someone to fetch a child
> of five.
> -- [Groucho Marx](http://www.goodreads.com/quotes/98966-a-child-of-five-could-understand-this-send-someone-to)
## How long will it take?
Suppose all children can do one add or multiply per second. How long would it take to compute $2 \times 2 + 3 \times 3$?

A: 3 seconds =: OK, three ops at one op/s; what if there are multiple kids?
A: 2 seconds =: OK, if two kids do the multiplies in parallel
A: 1 second =: Not without finding faster kids!
## One child
1. $2 \times 2 = 4$
2. $3 \times 3 = 9$
3. $4 + 9 = 13$
Total time is 3 seconds
## Two children
- Child 1: $2 \times 2 = 4$, then $4 + 9 = 13$
- Child 2: $3 \times 3 = 9$ (in parallel)
Total time is 2 seconds
## Many children
- Child 1: $2 \times 2 = 4$, then $4 + 9 = 13$
- Child 2: $3 \times 3 = 9$ (in parallel)
Total time remains 2 seconds = sum of latencies for two stages with a data dependency between them.
## Pipelining

- Improves *bandwidth*, but not *latency*
- Potential speedup = number of stages
- What if there's a branch?
- Different pipelines for different functional units
  - Front-end has a pipeline
  - Functional units (FP adder, multiplier) pipelined
  - Divider often not pipelined
## SIMD

- Single Instruction Multiple Data
- Old idea with resurgence in 90s (for graphics)
- Now short vectors are ubiquitous
  - 256 bit wide AVX on CPU
  - 512 bit wide on Xeon Phi!
- Alignment matters
## Example: [My laptop](http://www.everymac.com/systems/apple/macbook_pro/specs/macbook-pro-core-i5-2.6-13-late-2013-retina-display-specs.html)

MacBook Pro (Retina, 13 in, Late 2013)

- [Intel Core i5-4288U (Haswell arch)](http://ark.intel.com/products/75991/Intel-Core-i5-4288U-Processor-3M-Cache-up-to-3_10-GHz)
- Two cores / four HW threads
- Variable clock: 2.6 GHz / 3.1 GHz TurboBoost
- Four wide front end (fetch+decode 4 ops/cycle/core)
- Operations internally broken down into "micro-ops"
  - Cache micro-ops -- like hardware JIT?!
## My laptop: floating point

- Two fully-pipelined multiply or FMA per cycle
- FMA = Fused Multiply Add: one op, one rounding error
- [256 bit SIMD (AVX)](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions)
- [Two fully pipelined FP units](http://www.realworldtech.com/haswell-cpu/4/)
  - Two multiply or Fused Multiply-Add (FMA) per cycle
  - Only one regular add per cycle
## Peak flop rate

- Result (double precision) $\approx 100$ GFlop/s
  - 2 flops/FMA
  - $\times 4$ FMA/vector FMA = 8 flops/vector FMA
  - $\times 2$ vector FMAs/cycle = 16 flops/cycle
  - $\times 2$ cores = 32 flops/cycle
  - $\times 3.1 \times 10^9$ cycles/s $\approx 100$ GFlop/s
- Single precision $\approx 200$ GFlop/s
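The same arithmetic as a quick sanity check in Python (numbers taken from the slide above):

```python
flops_per_fma = 2            # multiply + add, one rounding error
lanes = 4                    # 256-bit AVX holds four 64-bit doubles
fma_units = 2                # two fully pipelined FMA ports
cores = 2
clock_ghz = 3.1              # TurboBoost clock

gflops = flops_per_fma * lanes * fma_units * cores * clock_ghz
print(gflops)  # 99.2 -- i.e. "about 100 GFlop/s"
```

Single precision doubles the lane count to 8, giving the $\approx 200$ GFlop/s figure.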
## Reaching peak flop

- Need lots of *independent* vector work
- FMA latency = 5 cycles on Haswell
- Need $8 \times 5 = 40$ *independent* FMA to reach peak
- Great for matrix multiply -- hard in general
- Still haven't [talked about memory!](/slides/2015-09-01-memory.html)
## Punchline

- Special features: SIMD, FMA
- Compiler understands how to use these *in principle*
  - Rearranges instructions to get good mix
  - Tries to use FMAs, vector instructions
- *In practice*, the compiler needs your help
  - Set optimization flags, pragmas, etc
  - Rearrange code to make obvious and predictable
  - Use special intrinsics or library routines
  - Choose data layouts + algorithms to suit machine
- Goal: You handle high-level, compiler handles low-level
### Number Theory
This course intends to facilitate understanding of number theoretic concepts and properties as well as enhance skills in employing different proving techniques which are useful in most areas in mathematics. Generally, it entails exploration, seeking of patterns, generating and proving conjectures as students engage in mathematical investigations. Topics include divisibility, prime numbers, linear diophantine equations, linear congruences and multiplicative number theoretic functions.
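As a taste of the divisibility, Diophantine-equation, and congruence topics listed above, here is a short Python sketch of the extended Euclidean algorithm and a linear-congruence solver built on it (illustrative code, not part of the course materials):

```python
def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def solve_congruence(a, b, m):
    """Smallest non-negative x with a*x ≡ b (mod m), or None if unsolvable."""
    g, x, _ = egcd(a, m)
    if b % g != 0:          # solvable iff gcd(a, m) divides b
        return None
    return (x * (b // g)) % (m // g)

print(solve_congruence(14, 30, 100))  # 45, since 14*45 = 630 ≡ 30 (mod 100)
```

The same `egcd` routine also solves the linear Diophantine equation $ax + by = c$ whenever $\gcd(a,b) \mid c$, by scaling the returned Bézout coefficients.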
---
We consider the Fix-Caginalp equation with the Neumann boundary condition in ${\bf R}^n$ with $n=1,2,3$. We obtain a global solution by the existence of the Lyapunov function. Afterwards, we construct a dynamical system corresponding to the equation. By the existence of the Lyapunov function, the $\omega$-limit set is included in the set of its stationary solutions. We treat its dynamical properties, such as a global attractor, an absorbing set, and an exponential attractor. It is important to obtain estimates independent of the initial value. Finally, we construct an exponential attractor.
---
**On Riemann Surfaces** (posted Fri Nov 13, 2015)
• Suppose that $X$ is a connected and compact Riemann surface and let $\displaystyle f : X \longrightarrow \mathbb{C}$ be a holomorphic function. Show that $\displaystyle f$ is constant.
• Let $\displaystyle f : \mathbb{C} \longrightarrow \mathbb{C}$ be a holomorphic bounded function. Show that there is a unique (holomorphic) extension $\displaystyle \hat{f} : \hat{ \mathbb{C} } \longrightarrow \hat{ \mathbb{C} }$ and then conclude Liouville's theorem (regarding $\mathbb{C}$).
Note that $\displaystyle \hat{ \mathbb{C} } = \mathbb{C} \cup \{ \infty \}$.
**Re: On Riemann Surfaces** (posted Sun Mar 13, 2016, 2:51 pm)
(1) Suppose that $f$ is not constant. As a non-constant holomorphic map between Riemann Surfaces, $f$ is open, so $f(X)$ is an open subset of $\mathbb{C}$. Since $X$ is compact and $f$ is continuous, $f(X)$ is also compact and thus a closed subset of $\mathbb{C}$. But $\mathbb{C}$ is connected, so $f(X) = \mathbb{C}$ as $f$ is not constant. This means that $\mathbb{C}$ is compact! Contradiction.
**Re: On Riemann Surfaces** (posted Sun Mar 13, 2016, 3:01 pm)
(2) Consider the function $g$ given by $g(z) = f(\frac{1}{z})$ on $\mathbb{C} \smallsetminus \{0\}$. Clearly, $g$ is holomorphic on $\mathbb{C} \smallsetminus \{0\}$ and bounded on a neighborhood of $0$, due to the boundedness assumption on $f$. By Riemann's Removable Singularities Theorem, $0$ is a removable singularity of $g$, which means that $g$ can be defined at $0$ in such a way that it becomes holomorphic on all of $\mathbb{C}$. It follows that $$\hat{f} \ \colon \hat{\mathbb{C}} \longrightarrow \mathbb{C} \ , \ \hat{f}(z) = \begin{cases} f(z) \ , \ z \in \mathbb{C} \\ g(0) \ , \ z = \infty \end{cases}$$ is a holomorphic extension of $f$ to $\hat{\mathbb{C}}$. By (1), $\hat{f}$ is constant and consequently $f$ itself is constant.
---
# How to Create a Friday the 13th Themed Icon Pack in Adobe Illustrator
Difficulty: Intermediate | Length: Long
Today’s tutorial is part of the Horror Movie Week special, in which I and everybody else at Team Tuts+ got the chance to recreate a part of their favorite horror flick. Since I’m a big fan of Jason, it was an easy choice for me, and boy we’ll have fun with this one since we’ll get to recreate some cool props using Adobe Illustrator’s most basic shapes and tools.
Now, without wasting any more time, let’s get our hands dirty and bring these icons to life!
Oh, and don't forget you can always expand the set by heading over to Envato Market, where you can find tons of beautifully crafted icon packs just waiting to be taken.
## 1. Set Up a New Document
Since I’m sure that you already have Illustrator up and running in the background, bring it up and let’s set up a New Document (File > New or Control-N) using the following settings:
• Number of Artboards: 1
• Width: 800 px
• Height: 600 px
• Units: Pixels
• Color Mode: RGB
• Raster Effects: Screen (72ppi)
• Align New Objects to Pixel Grid: checked
## 2. Set Up a Custom Grid
As you probably already know, Illustrator lets you take advantage of its powerful Grid System by setting it up using the lowest possible values, so that in the end you’ll have full control over your shapes since you can make sure they’re perfectly snapped to the underlying Pixel Grid.
### Step 1
The settings that we’re interested in can be found under the Edit > Preferences > Guides & Grid submenu, and should be adjusted as follows:
• Gridline every: 1 px
• Subdivisions: 1
### Step 2
Once we’ve set up our custom grid, all we need to do in order to make sure our shapes look crisp is enable the Snap to Grid option found under the View menu, which will transform into Snap to Pixel each time you enter Pixel Preview mode.
Now, since we’re aiming to create the icons using a pixel-perfect workflow, I strongly recommend you go through my how to create pixel-perfect artwork tutorial, which will help you widen your technical skills in no time.
## 3. Set Up the Layers
With the New Document created, it would be a good idea to layer our project, since this way we can maintain a steady workflow by focusing on one icon at a time.
That being said, bring up the Layers panel, and create a total of four layers, which we will rename as follows:
• layer 1 > reference grids
• layer 2 > bear trap
• layer 3 > hockey mask
• layer 4 > gear
## 4. Create the Reference Grids
The Reference Grids (or Base Grids) are a set of precisely delimited reference surfaces, which allow us to build our icons by focusing on size and consistency.
Usually, the size of the grids determines the size of the actual icons, and they should always be the first decision you make when you start a new project, since you'll always want to start from the smallest possible size and build on that.
Now, in our case, we’re going to be creating the icon pack using just one size, more exactly 128 x 128 px, which is a fairly large one.
### Step 1
Start by locking all but the reference grid layer, and then grab the Rectangle Tool (M) and create a 128 x 128 px orange (#F15A24) square, which will help define the overall size of our icons.
### Step 2
Add another smaller 120 x 120 px one (#FFFFFF) which will act as our active drawing area, thus giving us an all-around 4 px padding.
### Step 3
Group the two squares composing the reference grid using the Control-G keyboard shortcut, and then create two copies at a distance of 40 px from one another, making sure to align them to the center of the Artboard.
Once you’re done, lock the current layer and move on to the next one where we’ll start working on our first icon.
## 5. Create the Bear Trap Icon
First, make sure you’re on the right layer—that would be the second one—and then zoom in on the first reference grid so that you can have a better view of what you’re going to be creating.
### Step 1
Using the Rounded Rectangle Tool, create a 48 x 104 px shape with an 8 px Corner Radius, which we will color using #A06C60 and then align to the center of our active drawing area.
### Step 2
Select the shape that we’ve just created, and then go to Object > Path > Offset Path and create a 4 px Offset from which we will then subtract the smaller shape using Pathfinder’s Minus Front Shape Mode in order to get the trap’s jaw.
### Step 3
Give the resulting shape an outline using the Offset Path method, by applying a 4 px Offset, which we will then color using #3A322F in order to make it stand out.
### Step 4
Draw another 52 x 108 px rounded rectangle (#FFFFFF) with a 10 px Corner Radius, positioning it over the shapes that we’ve just created, and then give it a 2 px Offset, subtracting the smaller shape from it afterwards.
### Step 5
Since we’re going to be using the white shape to create the highlight and shadow for the trap’s jaw, we will have to adjust it by subtracting a 64 x 8 px rectangle (highlighted with orange) from it.
### Step 6
Ungroup the resulting shapes (right click > Ungroup) and then select the top half and turn it into a highlight by setting its Blending Mode to Overlay and lowering its Opacity to 30%.
### Step 7
Select the bottom half and turn it into a shadow by setting its color to black (#000000) and then lowering its Opacity to 20%.
### Step 8
Fill in the empty spaces from between the two detail shapes, by adding two 4 x 8 px rectangles (#000000) which we will adjust by lowering their Opacity to 20%.
### Step 9
Add two smaller 4 x 4 px rectangles over the shadows that we’ve just created, coloring them using #3A322F.
### Step 10
Use the Rectangle Tool (M) to create a 2 x 2 px rectangle (#FFFFFF) followed by a larger 4 x 2 px one (#FFFFFF). Position the two 2 px from one another, and then turn them into highlights by setting their Blending Mode to Overlay and lowering their Opacity to 30%, grouping (Control-G) and positioning a copy on the top side of the jaw, and another one at the bottom.
### Step 11
Start working on the trap’s metal teeth by creating a 4 x 4 px circle (#3A322F) which we will adjust by using the Anchor Point Tool on its left and right anchor points to make them slightly pointy. Then, create three copies of the adjusted shape, at a distance of 4 px from one another, grouping (Control-G) and positioning a set on the jaw’s top side and another one on its bottom one.
Quick tip: also, at this point it would be a good idea to select all of the shapes that we have so far and group them together (Control-G).
### Step 12
Grab the Rectangle Tool (M) and create the stock bar by adding a 68 x 8 px shape (#70625E) towards the center of the trap’s jaw, making sure to position it under (right click > Arrange > Send to Back), giving it a 4 px thick outline (#3A322F) using the Offset Path method.
### Step 13
Add two 4 x 8 px rectangles (#000000) towards the sides of the inner section of the bar, turning them into shadows by lowering their Opacity to 20%.
### Step 14
Next, add a couple of 2 px tall highlights towards the top side of the stock bar. Use white (#FFFFFF) for the color, Soft Light for the Blending Mode, and 60% for the Opacity.
### Step 15
Using the Ellipse Tool (L) add two 4 x 4 px circles (#3A322F) to the sides of the stock bar’s inner section, leaving a gap of 1 px between them and the jaw’s outline.
Once you’ve added them, select all the stock bar’s composing shapes and group them using the Control-G keyboard shortcut.
### Step 16
Start working on the bridge holding the plate, by creating a 4 x 68 px rectangle (#5E524E) which we will adjust by setting the Corner Radius of its bottom Anchor Points to 2 px. Give the resulting shape a 4 px thick outline (#3A322F) and then position the two towards the upper section of the jaw, so that their outlines end up overlapping.
### Step 17
Add a 4 x 4 px rectangle (#000000) towards the upper section of the bridge, turning it into a shadow by lowering its Opacity to 20%.
### Step 18
Add another 4 x 4 px rectangle (#3A322F) about 22 px from the shadow that we’ve just created, and a 2 x 2 px circle (#3A322F) towards the tip of the bridge, selecting and grouping (Control-G) all of its composing shapes afterwards.
### Step 19
Create the trap’s plate by drawing a 20 x 20 px circle (#A06C60), which we will then position over the stock bar, giving it the same 4 px thick outline (#3A322F) afterwards.
### Step 20
Grab a copy (Control-C > Control-F) of the plate’s fill shape (the 20 x 20 px circle) and then subtract a smaller 16 x 16 px one from it, setting the color of the resulting object to white (#FFFFFF), its Blending Mode to Overlay and its Opacity to just 30%.
### Step 21
Finish off the plate by adding the little screws, using four 2 x 2 px circles (#3A322F), one for each of its “corners”.
### Step 22
Cast an outer shadow onto the stock bar by selecting the plate’s outline and applying a 2 px offset, which we will color using black (#000000), lowering its Opacity to 20%.
### Step 23
Since we want the shadow to remain constrained to the stock bar’s surface, we will need to grab a copy of its fill shape and use it as a Clipping Mask (both shapes selected > right click > Make Clipping Mask).
### Step 24
Position the masked shadow underneath the plate (right click > Arrange > Send Backward), and then select all of the latter’s composing shapes and group them together (Control-G).
### Step 25
With the plate in place, start working on the stock bar’s side sections by creating a 4 x 4 px rectangle (#5E524E) with a 4 px outline (#3A322F), which we will position towards its left side.
### Step 26
Add a 4 x 2 px highlight (color: white; Blending Mode: Overlay; Opacity: 30%) towards its upper section, and a 2 x 2 px circle (#3A322F) on top of all the other shapes, grouping (Control-G) the side section's composing elements afterwards.
### Step 27
Create the stock bar’s right section by grabbing and positioning a copy (Control-C > Control-F) of the one that we’ve just created towards its right side.
### Step 28
Start working on the chain by creating the first link using a 10 x 16 px rounded rectangle (#A38989) with a 5 px Corner Radius, from which we will subtract a smaller 6 x 12 px rounded rectangle with a 3 px Corner Radius (#A38989). Give the resulting shape a 2 px outline (#3A322F) and then position the two shapes underneath the stock bar’s left side section.
### Step 29
Give the link an all-around highlight (color: white; Blending Mode: Overlay; Opacity: 30%), and then group (Control-G) all of its composing shapes, and create the second link using a copy of it (Control-C > Control-F).
### Step 30
Add the connecting chain link by creating a 2 x 14 px rounded rectangle (#BCAAAA) with a 1 px Corner Radius and a 2 px outline (#3A322F) which we will position between the two links that we’ve already created.
### Step 31
Add a couple of highlights (color: white; Blending Mode: Overlay; Opacity: 30%) and a bottom shadow (color: black; Opacity: 20%), and then select and group (Control-G) all of the connecting chain link’s composing shapes.
### Step 32
Finish off the current icon by adding a 2 x 11 px rounded rectangle (#3A322F) with a 1 px Corner Radius over the stock bar’s left side section, aligning it to the upper section of the little screw.
Once you’re done, select all the icon’s composing shapes and group them together using the Control-G keyboard shortcut.
## 6. Create the Hockey Mask Icon
Assuming you’ve already locked the previous layer and moved on up to the next one, zoom in on the second reference grid, and let’s start working on the iconic mask.
### Step 1
Using the Ellipse Tool (L), create the main shape of the mask by drawing an 80 x 108 px ellipse (#F7ECD2), which we will position in the center of the underlying active drawing area, 4 px from its bottom edge.
### Step 2
Adjust the ellipse by selecting its side anchor points using the Direct Selection Tool (A) and pushing them upwards by 6 px. You can do this either manually with the help of the directional arrow keys, or by using the Move Tool (right click > Transform > Move > Vertical > -6 px).
### Step 3
Give the resulting shape a 4 px thick outline (#3A322F) using the Offset Path method, removing any extra Anchor Points created in the process, and adjusting the shape as needed.
### Step 4
Create a copy of the smaller fill shape, give it a -2 px Offset, and then subtract the offset shape from the copy using Pathfinder's Minus Front Shape Mode, coloring the resulting shape using black (#000000) and lowering its Opacity to just 20%.
### Step 5
Adjust the shadow that we’ve just created, by selecting its inner bottom Anchor Point and moving it upwards by 4 px.
### Step 6
Add the eye cutouts by creating two 16 x 16 px rounded rectangles (#3A322F) with a 6 px Corner Radius at a distance of 16 px from one another, and then position them over the mask, exactly 40 px from its outline.
### Step 7
Adjust the cutouts by setting the Corner Radius of their bottom corners to 8 px from within the Transform panel.
### Step 8
Start working on the little circular cutouts by grabbing the Ellipse Tool (L) and drawing a stack of three 4 x 4 px circles (#3A322F) vertically distributed 4 px from one another, which we will position about 8 px above the left eye.
Quick tip: I recommend you turn on Pixel Preview mode (Alt-Control-Y) since it will be far easier to position your shapes this way.
### Step 9
Add another 4 x 4 px circle (#3A322F) on the right side of the third cutout, at a distance of 4 px from it, making sure to position it slightly towards the bottom by moving it 2 px downwards.
### Step 10
Select and group (Control-G) the little cutouts that we’ve added so far, and create and position a copy above the right eye cutout, making sure to flip it vertically (right click > Transform > Reflect > Vertical).
### Step 11
Take your time, and add the rest of the cutouts using the same 4 x 4 px circle (#3A322F), positioning them as seen in the reference image.
Once you’re done, select and group them together (Control-G) so that they won’t get separated by accident.
### Step 12
Next, we’ll have to grab the Pen Tool (P) and draw the three colored decals using #F7734B as our Fill color. So again, take your time, and use the reference image to add them in.
### Step 13
With the decals in place, add the top section of the strap by creating an 8 x 16 px rectangle (#5E524E) with a 4 px outline (#3A322F) which we will align to the top edge of the active drawing area.
### Step 14
Create two 8 x 1 px rectangles (#3A322F) at a distance of 1 px from one another, positioning them towards the upper section of the strap, leaving a 1 px gap.
Then, add a thicker 8 x 2 px rectangle (#3A322F) underneath them, at a distance of 2 px.
### Step 15
Next, grab the Ellipse Tool (L) and draw a 2 x 2 px circle (#BCAAAA) which will act as the bolt holding the strap, and give it a 2 px outline (#3A322F), grouping (Control-G) and positioning the two shapes in the center of the space created by the thicker strap line that we’ve just created.
### Step 16
Use the Rectangle Tool (M) to add a couple of highlights between the strap’s detail lines using white (#FFFFFF) as your fill color, Soft Light as your Blending Mode, and 60% for the Opacity.
### Step 17
Finish off the top strap by adding a 16 x 2 px rectangle (#000000) underneath its outline, which we will turn into a shadow by lowering its Opacity to 20%. Then, select all of its composing shapes and group them together (Control-G).
### Step 18
Start working on the side straps, by creating a 2 x 8 px rectangle (#5E524E) with a 4 px outline (#3A322F) which we will position on the left side of the mask, Horizontally Center Aligning it to the eye cutout.
### Step 19
Add a 1 x 4 px rectangle (#3A322F) in the center-right of the fill shape, and a top and side highlight (color: white; Blending Mode: Soft Light; Opacity: 60%), selecting and grouping (Control-G) all of the strap’s composing shapes afterwards.
### Step 20
Finish off the icon by creating the right strap, using a copy (Control-C > Control-F) of the one that we’ve just created, and positioning it onto the right side of the mask, flipping it vertically (right click > Transform > Reflect > Vertical).
Then, simply select all the icon’s composing shapes and group them using the Control-G keyboard shortcut.
## 7. Create the Gear Icon
We are now down to our third and last icon, which is probably Jason’s favorite thing in the whole world. Yup, we’re going to create some of the gear used by the character in the movies, so make sure you’re on the right layer, and then zoom in on our last reference grid, and let’s get started.
### Step 1
Let’s start working on the deadly kitchen knife by creating an 8 x 24 px rectangle (#A06C60), which we will adjust by setting the Radius of its bottom corners to 4 px. Give the resulting shape a 4 px thick outline (#3A322F) using the Offset Path method, and then position both shapes towards the bottom edge of the active drawing area, at a distance of 26 px from its left side.
### Step 2
Add an 8 x 2 px rectangle (#FFFFFF) at the top of the handle’s fill shape, which we will turn into a highlight by setting its Blending Mode to Overlay and lowering its Opacity to 30%.
### Step 3
Grab the Pen Tool (P) and, using a 1 px thick Stroke (#3A322F) with the Cap set to Round, draw some little wood lines. Take your time, and once you’re done select and group them using the Control-G keyboard shortcut.
### Step 4
Add two 4 x 4 px circles (#3A322F) on top of the handle, one towards the top and the other one towards the bottom, and then select and group (Control-G) all of its composing shapes together.
### Step 5
Create the knife’s blade by drawing a 16 x 68 px ellipse (#BCAAAA) which we will adjust by removing its bottom Anchor Point. Give the resulting shape the usual 4 px thick outline (#3A322F) and then align the two shapes to the right edge of the handle, making sure that the outlines overlap.
### Step 6
Create a copy (Control-C > Control-F) of the blade’s fill shape, and then give it a -2 px offset which we will subtract from it, in order to create the shape for the highlight. Adjust the resulting shape by changing its color to white (#FFFFFF) and setting its Blending Mode to Overlay while lowering its Opacity to 30%.
### Step 7
Finish off the kitchen knife by adding a 16 x 2 px rectangle (#3A322F) to the lower section of its blade, grouping all its shapes together afterwards (Control-G).
### Step 8
Start working on the little hacksaw by creating its blade using a 12 x 72 px rectangle (#BCAAAA) with a 4 px outline (#3A322F), aligning both shapes to the top edge of the active drawing area, at a distance of 6 px from the knife.
### Step 9
Using the Rectangle Tool (M), create a 2 x 74 px shape which we will color using #997E7E and then position over the left side of the blade.
### Step 10
Add a 12 x 4 px rectangle (#FFFFFF) at the top of the blade, and turn it into a highlight by setting its Blending Mode to Overlay while lowering its Opacity to 30%.
### Step 11
Add another 2 x 70 px vertical highlight (color: white; Blending Mode: Overlay; Opacity: 30%), and position it towards the right side of the blade, just below the one that we created in the previous step.
### Step 12
Using the Rectangle Tool (M) add a 2 x 74 px vertical divider (#3A322F) towards the left side of the blade, leaving a 2 px empty gap from the larger outline.
### Step 13
Add a 2 x 2 px circle (#3A322F) in the upper-right corner of the blade, leaving a 1 px gap around it. Once you’ve added the little bolt, select and group (Control-G) all of the blade’s elements together.
### Step 14
Next, let’s create the little section from underneath the blade that holds it to the handle.
First, draw a 4 x 4 px square (#A38989) with a 4 px outline (#3A322F) which we will position just below the blade.
Then, add a 4 x 2 px shadow (color: black; Opacity: 20%), a 2 x 2 px circle (#3A322F) for a bolt, and finally select and group (Control-G) all its shapes.
### Step 15
Create a 22 x 2 px rectangle which we will color using #A38989, and then give it a 4 px outline (#3A322F), positioning the shapes underneath the blade’s connector that we created in the previous step, aligning them to its left side.
### Step 16
Add a 22 x 1 px top highlight (color: white; Blending Mode: Overlay; Opacity: 30%) and a 4 x 2 px rectangle (#3A322F) towards the bottom of the outline, and then select and group all of the connector’s composing shapes using the Control-G keyboard shortcut.
### Step 17
Start working on the actual handle by creating a 16 x 30 px rectangle (#BCAAAA) (1) which we will adjust by setting the Radius of its top Anchor Points to 2 px (2). Then, subtract a 12 x 20 px rectangle from its lower section (3), giving the resulting shape a 4 px thick outline (#3A322F) (4). Add a couple of highlights (color: white; Blending Mode: Overlay; Opacity: 30%) here and there (5), and a 4 x 4 px circle (#3A322F) in its upper-left corner (6).
### Step 18
Continue adding details to the handle by creating a 32 x 8 px rectangle (#BCAAAA) (1), which we will adjust by setting the Radius of its left corners to 4 px (2). Then, give the resulting shape a 4 px outline (#3A322F), making sure to send it to the back (right click > Arrange > Send to Back) of the handle’s fill shape that we created in the previous step (3). Add a bunch of highlights (color: white; Blending Mode: Overlay; Opacity: 30%) (4 and 5), followed by six 2 x 8 px vertical rectangles (#3A322F) positioned 2 px from one another (5).
Finish off the handle by adding a 2 x 2 px circle (#3A322F) to its rounded side (6) and then selecting and grouping (Control-G) all of its composing shapes.
### Step 19
Once you have the handle, position it in the lower section of the active drawing area, left aligning it to the blade’s outline.
### Step 20
Add the upper section of the hacksaw holding the blade, by creating a 36 x 90 px rounded rectangle (#3A322F) with a 12 px Corner Radius, from which we will subtract a smaller 28 x 82 px one with an 8 px Corner Radius. Remove the left and bottom sections that overlap the blade and handle, and then give the resulting shape the usual 4 px thick outline (#3A322F).
### Step 21
Finish off the icon by adding two 4 x 4 px shadows (color: black; Opacity: 20%) to the shape that we’ve just created, and then select and group (Control-G) the hacksaw’s shapes, before grouping the rest of the icon’s elements.
## It’s a Wrap!
There you have it! A super-comprehensive tutorial on how to create an icon pack for one of the goriest horror flicks ever made. I hope that you’ve found the steps easy to follow, and most importantly learned a new trick along the way.
---
I would expect this question to have already been answered, but I could not find an answer on point, so I am looking for thoughts and suggestions. The problem is as follows:
I have a document which I would like to share with a number of individuals for them to write in their comments. I would like to shrink the width of the page or text line, and then add an area on the right. An example would be:
Phasellus tempor! Scelerisque ________________________
platea mattis sit, lorem sed. ________________________
Pid, cursus sit platea quis eu, ________________________
In ac, habitasse, vel lundium. ________________________
Porttitor? Tincidunt sociis, ut ________________________
dapibus ultricies a. Mid duis cum ________________________
in pid! Lacus. Parturient cum ________________________
Elementum tincidunt! Dolor urna, ________________________
sed eu, nascetur ut! Ultrices auctor, ________________________
cras pellentesque parturient placerat ________________________
rhoncus ac turpis hac, placerat vut. ________________________
My first inclination is to use marginpar, but I wanted to see if there were other, preferable options.
As seen in the example, my preference would be to have comment lines corresponding to paragraphs, i.e. where there is no text on the left, there needn't be space for comments on the right.
I would also like to be able to turn off this comment area at the flip of a switch, so to speak, i.e. return the margin widths etc. to normal and not add the lines. I do not expect this would be too difficult.
Incidentally, I am using the todonotes and lineno packages with memoir, so I would like a solution that does not break their functionality.
I am not sure the "comments" tag means in this question what it generally means on this forum. :) – Brian M. Hunt Mar 15 '12 at 18:14
Since you use the lineno package anyway, I made a solution based on this package. You have to have the line numbers on and placed on the left to make it work, as in the following example. We introduce a command \PrintCommentLine which prints the line, and we add this macro to the macro \makeLineNumberLeft that takes care of line-number printing. You can also easily switch it off.
```latex
\documentclass{article}
\usepackage{lineno}
\usepackage[latin]{babel}
\usepackage{lipsum}

% CODE STARTS HERE
\iftrue % change to \iffalse to switch it off
\setlength{\textwidth}{0.5\textwidth}
\def\PrintCommentLine{\kern1.1\textwidth\rule{0.9\textwidth}{1pt}}
\def\makeLineNumberLeft{%
  \hss\linenumberfont\LineNumber\hskip\linenumbersep%
  \hbox to 0pt{\PrintCommentLine\hss}}
\linenumbers
\leftlinenumbers
\fi
% CODE ENDS HERE

\begin{document}
\noindent
\lipsum
\end{document}
```
This is an excellent solution. – Yiannis Lazarides Mar 15 '12 at 22:17
@YiannisLazarides Thanks. Such solutions are just about recognizing who has done the snippet you need ;) – yo' Mar 15 '12 at 22:25
Sure and a lot of reading:) – Yiannis Lazarides Mar 15 '12 at 22:26
yes, and a lot of shell commands like locate lineno.sty | xargs gedit ;) – yo' Mar 15 '12 at 22:40
This is elegant. Thank you. – Brian M. Hunt Mar 15 '12 at 22:45
---
# Hausdorff's Maximal Principle/Formulation 1
## Theorem
Let $\struct {\PP, \preceq}$ be a non-empty partially ordered set.
Then there exists a maximal chain in $\PP$.
## Also known as
Hausdorff's Maximal Principle is also known as the Hausdorff Maximal Principle.
Some sources call it the Hausdorff Maximality Principle or the Hausdorff Maximality Theorem.
## Also see
• Results about Hausdorff's maximal principle can be found here.
## Source of Name
This entry was named for Felix Hausdorff.
---
# Understanding Bayes' Theorem
I worked through some examples of Bayes' Theorem and now was reading the proof.
Bayes' Theorem states the following:
Suppose that the sample space S is partitioned into disjoint subsets $B_1, B_2,...,B_n$. That is, $S = B_1 \cup B_2 \cup \cdots \cup B_n$, $\Pr(B_i) > 0$ $\forall i=1,2,...,n$ and $B_i \cap B_j = \varnothing$ $\forall i\ne j$. Then for an event A,
$\Pr(B_j \mid A)=\cfrac{\Pr(B_j \cap A)}{\Pr(A)}=\cfrac{\Pr(B_j) \cdot \Pr(A \mid B_j)}{\sum\limits_{i=1}^{n}\Pr(B_i) \cdot \Pr(A \mid B_i)}\tag{1}$
The numerator is just from definition of conditional probability in multiplicative form.
For the denominator, I read the following:
$A= A \cap S= A \cap (B_1 \cup B_2 \cup \cdots \cup B_n)=(A \cap B_1) \cup (A\cap B_2) \cup \cdots \cup(A \cap B_n)\tag{2}$
Now this is what I don't understand:
The sets $A \cup B_i$ are disjoint because the sets $B_1, B_2, ..., B_n$ form a partition.$\tag{$\clubsuit$}$
I don't see how that is inferred or why that is the case. What does B forming a partition have anything to do with it being disjoint with A. Can someone please explain this conceptually or via an example?
I worked one example where you had 3 coolers and in each cooler you had either root beer or soda. So the first node would be which cooler you would choose and the second nodes would be whether you choose root beer or soda. But I don't see why these would be disjoint. If anything, I would say they weren't disjoint because each cooler contains both types of drinks.
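The partition structure is exactly what makes the denominator computable. To see it numerically, here is a small sketch of the cooler example; all the probabilities are made-up illustrative numbers, not taken from the original problem:

```python
# B_i = "cooler i is chosen", A = "a root beer is drawn".
# The B_i partition the sample space, so the events (A and B_i) are
# pairwise disjoint and their probabilities sum to Pr(A).
p_B = [1/3, 1/3, 1/3]           # Pr(B_i): each cooler equally likely (assumed)
p_A_given_B = [0.5, 0.25, 0.8]  # Pr(A | B_i): root-beer fraction per cooler (assumed)

# Law of total probability: Pr(A) = sum_i Pr(B_i) * Pr(A | B_i)
p_A = sum(pb * pa for pb, pa in zip(p_B, p_A_given_B))

# Bayes' theorem: Pr(B_1 | A) = Pr(B_1) * Pr(A | B_1) / Pr(A)
p_B1_given_A = p_B[0] * p_A_given_B[0] / p_A

print(p_A)           # about 0.517
print(p_B1_given_A)  # about 0.323
```

Note that the coolers themselves overlap in contents (each holds both drinks), but the *events* "cooler i was chosen" cannot co-occur, which is all the disjointness argument needs.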
The sets $A \cap B_i$ are disjoint, because the sets $B_i$ are. However, the sets $A \cup B_i$ are not disjoint, since they all contain $A$. I think the statement concerns the sets $A \cap B_i$. – Librecoin May 24 '13 at 18:05
@Tharsis: Your comment cannot be correct. A and $B_i$ MUST be mutually exclusive because this is what will allow you to implement the 3rd Axiom of Probability and convert unions into a sum of probabilities when calculating the probability of A. – user1527227 May 24 '13 at 18:11
Notice the difference between $\cup$ and $\cap$. This statement is true only for the sets $A \cap B_i$. – Librecoin May 24 '13 at 18:18
@user1527227 I think Tharsis is pointing out that, e.g., $(A \cap B_1)$ is disjoint from $(A \cap B_2)$, but $A \cup B_1$ is not disjoint from $A \cup B_2$. Pay attention to the $\cap, \cup$: – amWhy May 24 '13 at 18:19
@amWhy: OH THAT WOULD MAKE A LOT MORE SENSE. Thanks for clearing that up. – user1527227 May 24 '13 at 18:20
As Tharsis pointed out, and was clarified in the comments, it is all of the sets given by $\;(A \cap B_i),\; 1 \leq i \leq n\;$ that are pairwise disjoint.
$$\;(A \cap B_i)\cap (A \cap B_j) = \varnothing,\;\;\;\forall i, j,\;\;\text{s.t.}\;\;1 \leq i, j\leq n\;\;\text{and}\;\;i\neq j$$
e.g., $(A\cap B_1)$ is disjoint from $(A\cap B_2)$, but $(A\cup B_1)$ is certainly not disjoint from $(A\cup B_2)$, etc.
Pay attention to the distinction between $∩,∪.$
-
always sound advice! +1 – Amzoti May 25 '13 at 0:50
It is the sets $A\cap B_i$ that are pairwise disjoint, and that is precisely what you need to calculate the probability in the denominator.
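To make this concrete, here is a short numerical sketch (with hypothetical probabilities, in the spirit of the cooler example): summing $\Pr(B_i)\Pr(A \mid B_i)$ over the partition recovers $\Pr(A)$, which is exactly the denominator in formula (1).

```python
# Hypothetical three-cooler example: B_i = "cooler i chosen", A = "root beer drawn".
# Because the B_i partition the sample space, the events (A and B_i) are pairwise
# disjoint, so their probabilities add up to Pr(A) (law of total probability).
p_B = [0.5, 0.3, 0.2]          # Pr(B_i); must sum to 1
p_A_given_B = [0.4, 0.7, 0.1]  # Pr(A | B_i), assumed values

# Denominator of Bayes' theorem: Pr(A) = sum_i Pr(B_i) * Pr(A | B_i)
p_A = sum(pb * pa for pb, pa in zip(p_B, p_A_given_B))

# Posterior Pr(B_j | A) for each j
posterior = [p_B[j] * p_A_given_B[j] / p_A for j in range(3)]

print(p_A)             # 0.5*0.4 + 0.3*0.7 + 0.2*0.1 = 0.43
print(sum(posterior))  # posteriors over a partition sum to 1
```

Note that the posteriors sum to 1 precisely because the disjoint pieces $(A \cap B_i)$ exhaust $A$.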
-
You can easily show this using a set-theoretic argument:
$A \cap S= A \cap (B_1 \cup B_2 \cup \cdots \cup B_n)=(A \cap B_1) \cup (A\cap B_2) \cup \cdots \cup(A \cap B_n)$
-
Here is a write-up that describes Bayes' theorem in detail along with a bunch of examples on how to use it (different write-ups).
-
|
{}
|
# 1.06 The properties of operations with integers
Lesson
When we manipulate expressions with numbers, we follow a set of properties that apply to all numbers. Knowing these properties helps us to be able to rewrite expressions in a variety of different ways, and might make evaluating them easier.
The box below summarizes some of the properties of real numbers and gives an example of each.
Properties of real numbers
| Property | Symbols | Example |
| --- | --- | --- |
| Commutative property of addition | $a+b=b+a$ | $3+6=6+3$ |
| Commutative property of multiplication | $a\times b=b\times a$ | $6\times3=3\times6$ |
| Associative property of addition | $a+(b+c)=(a+b)+c$ | $6+(3+2)=(6+3)+2$ |
| Associative property of multiplication | $a(bc)=(ab)c$ | $6\times(3\times2)=(6\times3)\times2$ |
| Distributive property | $a(b+c)=ab+ac$ or $a(b-c)=ab-ac$ | $4\left(3+5\right)=4\times3+4\times5$ or $4\left(3-5\right)=4\times3-4\times5$ |
| Identity property of addition | $a+0=a$ | $3+0=3$ |
| Identity property of multiplication | $a\times1=a$ | $3\times1=3$ |
| Inverse property of addition | $a+\left(-a\right)=0$ | $3+\left(-3\right)=0$ |
| Inverse property of multiplication | $a\times\frac{1}{a}=1$ | $3\times\frac{1}{3}=1$ |
The commutative property is the reason that we can add numbers in any order or multiply numbers in any order. While it applies to multiplication and addition, it does not apply to expressions that are written as subtraction or division.
If we rotate the array in the applet below, we can see that the rectangles are the same size when the length and width are switched. This demonstrates the commutative property for multiplication.
The associative property is the reason that we can group sums of numbers differently and the result remains the same. The same is true for products of numbers. However, it's not true for subtraction and division. That's why we say they are not associative.
The distributive property shows us how the product of a number and a sum or difference is applied to each term in the sum or difference. We can see it demonstrated in the applet below when we split the rectangle into two smaller rectangles, but the area remains the same.
We know that adding zero to a number gives us the same number. That's the identity property for addition. In a similar way, multiplying a number by one doesn't change the number. That's the identity property for multiplication.
We also know that opposites add to zero. That's the inverse property of addition. When multiplying fractions, reciprocals multiply to give us one. That's the inverse property of multiplication.
All of these properties can be applied to help us evaluate expressions more easily. Let's walk through a few examples.
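As a quick sanity check, each property in the box above can be verified numerically. The sketch below uses Python's `fractions` module so that the multiplicative-inverse check is exact rather than floating-point:

```python
from fractions import Fraction

a, b, c = Fraction(6), Fraction(3), Fraction(2)

assert a + b == b + a                # commutative property of addition
assert a * b == b * a                # commutative property of multiplication
assert a + (b + c) == (a + b) + c    # associative property of addition
assert a * (b * c) == (a * b) * c    # associative property of multiplication
assert a * (b + c) == a * b + a * c  # distributive property
assert a + 0 == a                    # identity property of addition
assert a * 1 == a                    # identity property of multiplication
assert a + (-a) == 0                 # inverse property of addition
assert a * (1 / a) == 1              # inverse property of multiplication
print("all nine properties hold")
```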
#### Worked examples
##### Question 1
Evaluate $6+\left(-5\right)+5$.

Think: Since we can add numbers in any order, it might be easier to evaluate the sum of $-5$ and $5$ first. They are opposites, so they combine to make a zero pair.

Do:

$6+\left(-5\right)+5 = -5+5+6$ (the commutative property of addition)
$= 0+6$ ($-5+5=0$ by the inverse property of addition)
$= 6$ ($0+6=6$ by the identity property of addition)

So $6+\left(-5\right)+5=6$.

Reflect: Could we have applied different properties to get the same result? Yes! Here's another example.

$6+\left(-5\right)+5 = 6+0$ (since $-5+5=0$, we can substitute $0$ where we had $-5+5$)
$= 6$ (by the identity property)

Either way, we have that $6+\left(-5\right)+5=6$.
##### Question 2
Evaluate $-7\times105$ without a calculator.

Think: We can substitute the expression $100+5$ for $105$. Then we can apply the distributive property to find the product. Multiplying by $100$ and by $5$ seems much easier!

Do:

$-7\times105 = -7\left(100+5\right)$ (since $100+5=105$, we can substitute $100+5$ where we had $105$)
$= -7\times100+\left(-7\times5\right)$ (apply the distributive property)
$= -700+\left(-35\right)$
$= -735$ (simplify)
#### Practice questions
##### Question 3
Use the commutative property of addition to fill in the missing number.
1. $19+15=15+\editable{}$
##### Question 4
Consider $11\left(7-3\right)$.
1. Using the distributive law, complete the gap so that $11\left(7-3\right)$ is rewritten as the difference of two integers.
$11\left(7-3\right)=77-\editable{}$
##### Question 5
Which property is demonstrated by the following statement?
$4\times\left(9\times5\right)=\left(4\times9\right)\times5$
1. Commutative property of multiplication
A
Associative property of multiplication
B
Distributive property
C
D
E
### Outcomes
#### 7.NS.1
Apply and extend previous understandings of addition and subtraction to add and subtract rational numbers; represent addition and subtraction on a horizontal or vertical number line diagram.
#### 7.NS.1.d
Apply properties of operations as strategies to add and subtract rational numbers.
#### 7.NS.2
Apply and extend previous understandings of multiplication and division and of fractions to multiply and divide rational numbers.
#### 7.NS.2.c
Apply properties of operations as strategies to multiply and divide rational numbers.
|
{}
|
# A plane electro magnetic wave propagating along x - direction can have following pair of $\overrightarrow{E}$ and $\overrightarrow{B}$
$\begin{array}{l} E_x, B_y \\ E_y, B_z \\ B_x, E_y \\ E_z, B_y \end{array}$
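For a plane electromagnetic wave, $\overrightarrow{E}$, $\overrightarrow{B}$ and the propagation direction are mutually perpendicular, and the wave propagates along $\overrightarrow{E} \times \overrightarrow{B}$. A quick plain-Python check of each listed pair against propagation along $+x$ (the option labels are mine):

```python
# Cross product of two 3-vectors, written out by hand
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

x, y, z = (1, 0, 0), (0, 1, 0), (0, 0, 1)

# (E direction, B direction) for each option in the question
options = {"Ex,By": (x, y), "Ey,Bz": (y, z), "Bx,Ey": (y, x), "Ez,By": (z, y)}

# A valid pair must have E x B along +x, the propagation direction
results = {name: cross(E, B) == x for name, (E, B) in options.items()}
print(results)  # only "Ey,Bz" is True: y-hat cross z-hat = +x-hat
```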
|
{}
|
Index: The Book of Statistical ProofsStatistical Models ▷ Categorical data ▷ Logistic regression ▷ Definition
Definition: A logistic regression model is given by a set of binary observations $y_i \in \left\lbrace 0, 1 \right\rbrace, i = 1,\ldots,n$, a set of predictors $x_j \in \mathbb{R}^n, j = 1,\ldots,p$, a base $b$ and the assumption that the log-odds are a linear combination of the predictors:
$\label{eq:logreg} l_i = x_i \beta + \varepsilon_i, \; i = 1,\ldots,n$
where $l_i$ are the log-odds that $y_i = 1$
$\label{eq:logodds} l_i = \log_b \frac{\mathrm{Pr}(y_i = 1)}{\mathrm{Pr}(y_i = 0)}$
and $x_i$ is the $i$-th row of the $n \times p$ matrix
$\label{eq:X} X = \left[ x_1, \ldots, x_p \right] \; .$
Within this model,
• $y$ are called “categorical observations” or “dependent variable”;
• $X$ is called “design matrix” or “set of independent variables”;
• $\beta$ are called “regression coefficients” or “weights”;
• $\varepsilon_i$ is called “noise” or “error term”;
• $n$ is the number of observations;
• $p$ is the number of predictors.
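The definition above can be sketched in a few lines of plain Python (hypothetical numbers; taking the base $b$ to be $e$ and omitting the noise term): given weights $\beta$, the log-odds are the linear predictor, and solving the log-odds equation for $\mathrm{Pr}(y_i = 1)$ gives the logistic sigmoid.

```python
import math

# Hypothetical design matrix X (n = 3 observations, p = 2 predictors) and weights
X = [[1.0, 2.0],
     [1.0, -1.0],
     [1.0, 0.5]]
beta = [0.3, 1.2]

def log_odds(x_i, beta):
    # l_i = x_i . beta  (noise-free linear predictor)
    return sum(xj * bj for xj, bj in zip(x_i, beta))

def prob_y1(l_i):
    # Solving l = log(p / (1 - p)) for p gives the logistic sigmoid
    return 1.0 / (1.0 + math.exp(-l_i))

for x_i in X:
    l = log_odds(x_i, beta)
    p = prob_y1(l)
    # round trip: recovering the log-odds from the probability
    assert abs(math.log(p / (1.0 - p)) - l) < 1e-9
    print(l, p)
```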
Sources:
Metadata: ID: D76 | shortcut: logreg | author: JoramSoch | date: 2020-06-28, 20:51.
|
{}
|
# Installment Questions for RRB Group-D PDF
Download the Top-10 RRB Group-D Installment Questions PDF. These RRB Group-D installment questions are based on questions asked in previous exam papers and are very important for the Railway Group-D exam.
Question 1: Mahesh invests a certain sum in a bank. At the end of 2 years, the amount becomes 8000 and at the end of 4.5 years it becomes 10500. If the bank offers simple interest then find the rate of interest which the bank offers.
a) 10 %
b) 16.67 %
c) 12.5 %
d) 20 %
Question 2: A bank offers an interest of 10% per annum, compounded half-yearly. If Arjun invests 10000, what is the total amount he will have after 2 years (approximately)?
a) 1200
b) 1617
c) 12155
d) 13441
Question 3: A bank offers a simple interest of 8 % per annum on fixed deposits. Mohit deposits a certain sum in the bank. At the end of 5 years, he withdraws the entire amount from the bank and deposits 50 % of this amount in another bank which offers a simple interest of 10 % per annum. After another 2 years, Mohit gets 16800 rupees from the second bank. What is the amount that he invested in the first bank?
a) 22500
b) 20000
c) 25000
d) 22000
Question 4: Mukesh has 30000 rupees with him. He deposits this amount in two different banks. The first bank offers a simple interest of 12 % per annum and the second bank offers a compound interest of 10 % per annum. How much money (approx) should he invest in the second bank so that at the end of second year he gets same amount from both the banks?
a) 17703.67
b) 16607.14
c) 15983.25
d) 17004.33
Question 5: Amit invests Rs. 20,000 in two banks. He invested half of the sum in a bank that pays compound interest and half in a bank that pays simple interest. The interest rate in both the banks is 10% p.a. How much interest will he make at the end of 2 years?
a) Rs. 2,000
b) Rs. 2,100
c) Rs. 4,000
d) Rs. 4,100
Question 6: A sum of money was invested at a certain rate for 2 years. Had it been invested at 3% higher rate of interest, it would have fetched Rs. 450 more. The sum invested was:
a) Rs. 7500
b) Rs. 600
c) Rs. 5000
d) Rs. 4500
Question 7: A invested Rs 10,000 for 9 months and B invested Rs 18,000 for some time in a business. If the profits of A and B are equal then the period of time for which B’s capital was invested is
a) 6 months
b) 5 months
c) 4 months
d) 3 months
Question 8: Amit invests Rs 1000 for a period of 3 years at the rate of 10% per annum. What would be the difference if the interest is accrued using simple interest versus compound interest compounded annually.
a) Simple Interest would be more by Rs 31
b) Simple Interest would be less by Rs 31
c) Simple Interest would be more by Rs 1031
d) Can’t be determined
Question 9: Amit invest half his money at simple interest rate of 10.5% and compound interest rate of x% compounded annually for 2 years. If he gets the same interest from both investments, find x.
a) 10%
b) 11%
c) 12.5%
d) Can’t be determined
Question 10: A sum of money was invested at a certain rate for 2 years. Had it been invested at 3% higher rate of interest, it would have fetched Rs. 450 more. The sum invested was—
a) Rs. 7500
b) Rs. 600
c) Rs. 5000
d) Rs. 4500
### Answers & Solutions

**Solution 1 (b):** Simple interest is the same for every year. The amount grows by 2500 between the end of year 2 and the end of year 4.5, i.e. he earned 2500 in 2.5 years.
Hence, he must be earning 1000 in interest every year. Thus, the initial amount invested by him must be 8000 - 2*1000 = 6000.
Thus, the rate of interest must be 1000*100/6000 = 16.67 %.
**Solution 2 (c):** Since the interest is compounded half-yearly, there are 4 time periods with a rate of 5% each.
Amount = $10000*[1.05]^4 = 12155.06$ ~ 12155
Hence, option C is the correct answer.
**Solution 3 (b):** He gets 16800 from the second bank, which offers a simple interest of 10 percent per annum. Let us assume that he deposited x with the second bank.
Hence
x*1.2 = 16800
=> x = 14000
Hence, he must have got 14000*2 = 28000 from the first bank.
Let P be the original sum that he deposited in the bank. Hence, we have
P*0.08*5 + P = 28000
=> 1.4P = 28000
=> P = 28000/1.4 = 20000
**Solution 4 (b):** Let us assume that Mukesh deposited x in the second bank. So at the end of 2 years it will become 1.21x in bank 2.
Now he must have invested 30000 – x in bank 1.
The interest earned will be (30000 – x)*.12*2 = .24*30000 – .24x = 7200 – .24x
Hence, the amount will be 7200 – .24x + 30000 – x = 37200 – 1.24x
We have been given that 1.21x = 37200 – 1.24x
=> 2.24x = 37200
=> x = 16607.14
Hence, option B is the correct answer.
**Solution 5 (d):** The principal in the bank with simple interest is Rs. 20,000/2 = Rs. 10,000
Interest earned = 10,000 * 2 * 10% = Rs. 2,000
The principal in the bank with compound interest is Rs. 20,000/2 = Rs. 10,000
Interest earned = $10,000 * 1.1^2$ – 10,000 = Rs. 2,100
Total interest earned = 2,000 + 2,100 = Rs. 4,100
**Solution 6 (a):** Let $x$ be the sum of money invested at simple interest and $r$ the rate of interest (% per annum).
The interest at the end of 2 years is $x \times 2 \times r/100 = 2xr/100$ -(1)
At a rate 3% higher, the interest at the end of 2 years is $2x(r+3)/100$ -(2)
Given (2)-(1)=450:
=>$6x/100=450$
∴ $x=7500$
**Solution 7 (b):** Each partner's share is proportional to the product of the investment and the time period. Since the shares are equal,
A’s investment x time = B’s investment x time
ie 10000 x 9 = 18000 x time
So, time = 5 months
**Solution 8 (b):** Simple Interest = 1000*3*10/100 = Rs 300
Compound Interest = $1000(1+10/100)^3 -1000$ = 1331 – 1000 = Rs 331
Hence, Simple Interest would be lesser by Rs 31.
**Solution 9 (a):** Let the principal be P.
Hence, his interest from first investment is P*10.5*2/100 = 21P/100.
From the second investment, his interest = $P(1+x/100)^2 – P$
As these two are equal, 21P/100 =$P(1+x/100)^2 – P$
Cancelling principal on both sides,
$(1+x/100)^2$ = 121/100
Taking square root
1+x/100 = 11/10
x=10%.
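Several of the answers above can be double-checked by brute force; a short Python sketch (generic helper names of my own):

```python
def simple_interest(p, r, t):
    # interest only; rate r in % per annum, time t in years
    return p * r * t / 100.0

def compound_amount(p, r, t, periods_per_year=1):
    # amount after t years, compounded periods_per_year times a year
    n = int(t * periods_per_year)
    return p * (1 + r / (100.0 * periods_per_year)) ** n

# Question 2: 10000 at 10% p.a. compounded half-yearly for 2 years
print(round(compound_amount(10000, 10, 2, periods_per_year=2), 2))  # 12155.06

# Question 5: 10000 at 10% simple interest plus 10000 at 10% compound, 2 years
total = simple_interest(10000, 10, 2) + (compound_amount(10000, 10, 2) - 10000)
print(total)  # 4100.0

# Question 8: simple vs. compound interest on 1000 at 10% for 3 years
diff = compound_amount(1000, 10, 3) - 1000 - simple_interest(1000, 10, 3)
print(round(diff))  # compound exceeds simple by 31
```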
|
{}
|
## How do you calculate rotational energy levels?
Rotational energy levels – diatomic molecules: $E_J = B\,J(J+1)$. In this equation, J is the quantum number for total rotational angular momentum, and B is the rotational constant, which is related to the moment of inertia, $I = \mu r^2$ ($\mu$ is the reduced mass and $r$ the bond length) of the molecule.
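For a diatomic rigid rotor the levels are $E_J = B\,J(J+1)$; a quick sketch (with $B$ set to 1 in arbitrary units) shows the characteristic level spacing of $2B(J+1)$:

```python
# Rigid-rotor rotational levels: E_J = B * J * (J + 1)
B = 1.0  # rotational constant, arbitrary energy units for illustration

def E(J):
    return B * J * (J + 1)

levels = [E(J) for J in range(5)]                 # 0, 2B, 6B, 12B, 20B
spacings = [E(J + 1) - E(J) for J in range(4)]    # spacing grows as 2B(J+1)
print(levels)    # [0.0, 2.0, 6.0, 12.0, 20.0]
print(spacings)  # [2.0, 4.0, 6.0, 8.0]
```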
What is the rotational equivalent of velocity?
Angular velocity ($\omega$), also known as the angular frequency vector, is a vector measure of rotation rate: it describes how fast an object rotates or revolves relative to another point, i.e. how fast the angular position or orientation of an object changes with time.
How do you calculate velocity in physics?
Velocity (v) is a vector quantity that measures displacement (or change in position, Δs) over the change in time (Δt), represented by the equation v = Δs/Δt. Speed (or rate, r) is a scalar quantity that measures the distance traveled (d) over the change in time (Δt), represented by the equation r = d/Δt.
### What are rotational energy levels?
For a nonlinear molecule, the rotational energy levels are a function of three principal moments of inertia $I_A$, $I_B$ and $I_C$. These are moments of inertia around three mutually orthogonal axes that have their origin (or intersection) at the center of mass of the molecule.
What is peripheral velocity?
“Peripheral velocity” is the speed at which a point on the circumference moves per second. “Maximum operating speed” is the highest peripheral velocity for ensuring safe operation; in no circumstances should that speed be exceeded.
How do you calculate change in velocity?
Acceleration
1. Acceleration is the rate of change of velocity. It is the amount that velocity changes per unit time.
2. The change in velocity can be calculated using the equation: change in velocity = final velocity - initial velocity.
3. The average acceleration of an object can be calculated using the equation: average acceleration = change in velocity ÷ time taken.
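As a tiny worked sketch of the change-in-velocity and average-acceleration relations above (hypothetical values):

```python
# change in velocity = final velocity - initial velocity
v_initial = 4.0   # m/s
v_final = 10.0    # m/s
delta_v = v_final - v_initial

# average acceleration = change in velocity / time taken
t = 3.0           # s
a_avg = delta_v / t
print(delta_v, a_avg)  # 6.0 m/s change, 2.0 m/s^2 average acceleration
```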
## What is vibrational and rotational energy levels?
There are three types of energy levels in a diatomic molecule: electronic, vibrational, and rotational. If the vibrational quantum number (n) changes by one unit, then the rotational quantum number (l) changes by one unit. The discrete peaks indicate a quantization of the angular momentum of the molecule.
What is the formula for rotational kinetic energy?
In the rotational kinetic energy formula $K_R = \frac{1}{2} I \omega^2$:
1. $K_R$ is the rotational kinetic energy
2. $I$ is the moment of inertia
3. $\omega$ is the angular velocity
What is rotational energy?
The mechanical work required during rotation is the torque multiplied by the rotation angle. The axis of rotation for unattached objects is mostly through the centre of mass. Rotational kinetic energy: $K_R = \frac{1}{2} \times$ [moment of inertia $\times$ (angular velocity)$^2$].
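A minimal sketch of the two formulas just mentioned, $W = \tau\theta$ and $K_R = \frac{1}{2}I\omega^2$, with hypothetical values:

```python
import math

# Work done by a constant torque over a rotation angle: W = tau * theta
tau = 5.0             # N*m
theta = 2 * math.pi   # one full revolution, in radians
W = tau * theta

# Rotational kinetic energy: K_R = 0.5 * I * omega^2
I = 2.0               # kg*m^2, moment of inertia
omega = 3.0           # rad/s, angular velocity
K_R = 0.5 * I * omega ** 2

print(W)    # about 31.42 J
print(K_R)  # 9.0 J
```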
|
{}
|
Question
Large amounts of ${}^{65}\textrm{Zn}$ are produced in copper exposed to accelerator beams. While machining contaminated copper, a physicist ingests $50 \textrm{ }\mu\textrm{Ci}$ of ${}^{65}\textrm{Zn}$. Each ${}^{65}\textrm{Zn}$ decay emits an average $\gamma$-ray energy of $0.550 \textrm{ MeV}$, 40.0% of which is absorbed in the scientist's 75.0 kg body. What dose in mSv is caused by this in one day?
$0.0751 \textrm{ mSv/d}$
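The quoted answer can be reproduced step by step; a sketch using standard conversion factors:

```python
# Ingested activity: 50 microcuries, converted to becquerels (decays/s)
activity_Bq = 50e-6 * 3.70e10            # 1.85e6 decays per second

# Energy absorbed per decay: 40% of the 0.550 MeV gamma energy
MeV_to_J = 1.602e-13
energy_per_decay_J = 0.550 * 0.40 * MeV_to_J

# Total energy deposited in one day, then dose in gray (J/kg)
seconds_per_day = 86400
dose_Gy = activity_Bq * seconds_per_day * energy_per_decay_J / 75.0

# Gamma rays have a relative biological effectiveness of 1, so 1 Gy = 1 Sv here
dose_mSv = dose_Gy * 1000
print(round(dose_mSv, 4))  # 0.0751 mSv in one day
```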
|
{}
|
Investigating Dilation and Similarity
Topic:
Dilation
Dilations
In the applet below, triangle A1A2A3 is fixed. Move the slider, n, and observe the behavior of triangle B1B2B3. Then answer the questions that follow.
Corresponding Sides
Which side of triangle B1B2B3 corresponds to side a3 of triangle A1A2A3?
Corresponding Sides
Which side of triangle B1B2B3 corresponds to side a2 of triangle A1A2A3?
Which side of triangle B1B2B3 corresponds to side a1 of triangle A1A2A3?
Ratios of corresponding sides.
Describe the ratios of the corresponding sides of the two triangles as you move the slider, n.
|
{}
|
###### Question:
Badgersize Company has the following information for its Forming Department for the month of August:
Work in Process Inventory, August 1: 20,000 units
• Direct materials (100% complete): $80,000
• Conversion (20% complete): $24,000
• Balance in work in process, August 1: $104,000
Units started during August: 50,000
Units completed and transferred in August: 60,000
Work in process (70% complete), August 31: ?
Costs charged to Work in Process in August:
• Direct materials: $150,000
• Conversion costs: direct labor $120,000
• Total conversion: $252,000

Assume materials are added at the start of processing. Complete a production cost report for the Badgersize Company Forming Department for the month of August. (Round your costs per equivalent unit to 2 decimal places. Leave no cells blank - be certain to enter "0" wherever required. Omit the "$" sign in your response.)
Production Cost Report
Forming Department
For the month of August
Part I. Physical Flow Total Units
Inputs:
• Beginning WIP ?
• Started ?
Units to account for: ?
Outputs:
• Units completed ?
• Ending WIP ?
Units accounted for: ?
Part II. Equivalent Units Consumed Direct Materials Conversion Costs
Based on monthly input:
• Finish BWIP ? ?
• Start new units ? ?
Equivalent units of input ? ?
Based on monthly output: ? ?
• Transferred units ? ?
• EWIP units ? ?
? ?
Equivalent units of output
Part III. Cost Per Equivalent Unit Total Unit Cost Direct Materials Conversion Costs
Input costs in August $?$ ?
Divide by equivalent units, August ? ?
Costs per equivalent unit, Aug. $? $? $?

Part IV. Total Cost Assignment Total Costs Direct Materials Conversion Costs
Costs to account for:
• Cost of beginning WIP $?
• Cost added during the period ?
Total cost to account for $?
Costs accounted for:
• Cost of goods transferred:
BWIP, August 1 $? $? $?
To finish BWIP ? ? ?
Units started and completed ? ? ?
Total cost transferred $? $? $?
• Add ending WIP ? ? ?
Total cost accounted for $? $? $?
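Under the weighted-average method (one common way to fill in a report like this; the "based on monthly input" rows suggest the problem may intend FIFO instead, so treat these numbers as an illustrative sketch only), the equivalent-unit arithmetic looks like:

```python
# Physical flow
beginning_wip = 20000
started = 50000
completed = 60000
ending_wip = beginning_wip + started - completed   # 10,000 units

# Equivalent units, weighted-average: completed units count fully;
# ending WIP counts at its completion percentage (materials added at start)
eu_materials = completed + ending_wip * 1.00       # 70,000
eu_conversion = completed + ending_wip * 0.70      # 67,000

# Cost per equivalent unit = (beginning WIP cost + cost added) / equivalent units
cost_materials = 80000 + 150000                    # 230,000
cost_conversion = 24000 + 252000                   # 276,000
per_eu_materials = cost_materials / eu_materials
per_eu_conversion = cost_conversion / eu_conversion

print(ending_wip, eu_materials, eu_conversion)
print(round(per_eu_materials, 2), round(per_eu_conversion, 2))  # 3.29 4.12
```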
{}
|
## Bernoulli
• Bernoulli
• Volume 13, Number 2 (2007), 346-364.
### Limiting distributions of the non-central $t$-statistic and their applications to the power of $t$-tests under non-normality
#### Abstract
Let $X_1, X_2, \ldots$ be a sequence of independent and identically distributed random variables. Let $X$ be an independent copy of $X_1$. Define $\mathbb{T}_{n}=\sqrt{n}\bar{X}/S$, where $\bar{X}$ and $S^2$ are the sample mean and the sample variance, respectively. We refer to $\mathbb{T}_{n}$ as the central or non-central (Student’s) $t$-statistic, depending on whether $\mathrm{E}X=0$ or $\mathrm{E}X \neq 0$, respectively. The non-central $t$-statistic arises naturally in the calculation of powers for $t$-tests. The central $t$-statistic has been well studied, while there is a very limited literature on the non-central $t$-statistic. In this paper, we attempt to narrow this gap by studying the limiting behaviour of the non-central $t$-statistic, which turns out to be quite complicated. For instance, it is well known that, under finite second-moment conditions, the limiting distributions for the central $t$-statistic are normal while those for the non-central $t$-statistic can be non-normal and can critically depend on whether or not $\mathrm{E}X=\infty$. As an application, we study the effect of non-normality on the performance of the $t$-test.
#### Article information
Source
Bernoulli, Volume 13, Number 2 (2007), 346-364.
Dates
First available in Project Euclid: 18 May 2007
Permanent link to this document
https://projecteuclid.org/euclid.bj/1179498752
Digital Object Identifier
doi:10.3150/07-BEJ5073
Mathematical Reviews number (MathSciNet)
MR2331255
Zentralblatt MATH identifier
1129.60021
#### Citation
Bentkus, Vidmantas; Jing, Bing-Yi; Shao, Qi-Man; Zhou, Wang. Limiting distributions of the non-central $t$-statistic and their applications to the power of $t$-tests under non-normality. Bernoulli 13 (2007), no. 2, 346--364. doi:10.3150/07-BEJ5073. https://projecteuclid.org/euclid.bj/1179498752
|
{}
|
# Vitamin C therapy for patients with sepsis or septic shock: a protocol for a systematic review and a network meta-analysis
Fujii, Tomoko; Belletti, Alessandro; Carr, Anitra; Furukawa, Toshi A; Luethi, Nora; Putzu, Alessandro; Sartini, Chiara; Salanti, Georgia; Tsujimoto, Yasushi; Udy, Andrew A; Young, Paul J; Bellomo, Rinaldo (2019). Vitamin C therapy for patients with sepsis or septic shock: a protocol for a systematic review and a network meta-analysis. BMJ Open, 9(11):e033458.
## Abstract
INTRODUCTION
Vasoplegia is common and associated with a poor prognosis in patients with sepsis and septic shock. Vitamin C therapy in combination with vitamin B$_{1}$ and glucocorticoid, as well as monotherapy in various doses, has been investigated as a treatment for the vasoplegic state in sepsis, through targeting the inflammatory cascade. However, the combination effect and the relative contribution of each drug have not been well evaluated. Furthermore, the best combination between the three agents is currently unknown. We are planning a systematic review (SR) with network meta-analysis (NMA) to compare the different treatments and identify the combination with the most favourable effect on survival.
METHODS AND ANALYSIS
We will include all randomised controlled trials comparing any intervention using intravenous vitamin C, vitamin B$_{1}$ and/or glucocorticoid with another or with placebo in the treatment of sepsis. We are interested in comparing the following active interventions. Very high-dose vitamin C (≥12 g/day), high-dose vitamin C (≥6 g/day), vitamin C (<6 g/day); low-dose glucocorticoid (<400 mg/day of hydrocortisone (or equivalent)), vitamin B$_{1}$ and combinations of the drugs above. The primary outcome will be all-cause mortality at the longest follow-up within 1 year but 90 days or longer postrandomisation. All relevant studies will be sought through database searches and trial registries. All reference selection and data extraction will be conducted by two independent reviewers. We will conduct a random-effects NMA to synthesise all evidence for each outcome and obtain a comprehensive ranking of all treatments. We will use the surface under the cumulative ranking curve and the mean ranks to rank the various interventions. To differentiate between the effect of combination therapies and the effect of a component, we will employ a component NMA.
ETHICS AND DISSEMINATION
This SR does not require ethical approval. We will publish findings from this systematic review in a peer-reviewed scientific journal and present these at scientific conferences.
PROSPERO REGISTRATION NUMBER
CRD42018103860.
|
{}
|
# linearmodels.panel.results.RandomEffectsResults.corr_squared_between¶
property RandomEffectsResults.corr_squared_between: float
Between Coefficient of determination using squared correlation
Returns:
float
Between coefficient of determination
Notes
The between rsquared measures the fit of the time-averaged dependent variable on the time-averaged independent variables.
This measure is based on the squared correlation between the entity-wise averaged dependent variables and their average predictions.
$Corr[\bar{y}_i, \bar{x}_i\hat{\beta}]$
This measure does not account for weights.
Return type:
float
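The squared correlation above can be sketched in a few lines of plain Python. This is a hypothetical helper for illustration, not the linearmodels implementation: `y_bar` holds the entity-wise means of the dependent variable and `yhat_bar` the corresponding entity-wise averaged predictions.

```python
def between_rsquared(y_bar, yhat_bar):
    """Squared Pearson correlation between entity means and their predictions."""
    n = len(y_bar)
    my = sum(y_bar) / n
    mh = sum(yhat_bar) / n
    cov = sum((a - my) * (b - mh) for a, b in zip(y_bar, yhat_bar))
    var_y = sum((a - my) ** 2 for a in y_bar)
    var_h = sum((b - mh) ** 2 for b in yhat_bar)
    return cov ** 2 / (var_y * var_h)

# A near-linear fit yields a value close to 1
print(round(between_rsquared([1.0, 2.0, 3.0], [2.1, 3.9, 6.0]), 3))  # 0.998
```

Because it is a squared correlation, the measure is invariant to rescaling of the predictions, which is why it can differ from an R² computed from residuals.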
|
{}
|
# How to make arcpy to iterate through folders & sub-folders to access gdb's? [closed]
With reference to Check whether any feature classes in multiple gdb's has feature class representation, the code I mentioned in the other thread works great for a single folder alone. We have hundreds of gdb's in different folders & sub-folders.
Since I am very new to arcpy, I don't know how to do this. I searched and found that arcpy.da.walk can be used to iterate through folders. I've tried that (code below). The code runs to completion every time, but it produces no results and does not throw any errors.
Edit:
Based on the comments and the code given in this link, I've changed the code:
import os
import arcpy

def FindField(fc, myField):
    fieldList = arcpy.ListFields(fc, myField)
    for field in fieldList:
        if field.name.lower() == myField.lower():
            print(gdb_path + ": " + fc + " contains fieldname: " + myField)

myField = "RuleID"
top_folder = r"path\to\top\folder"

for path, dirs, files in os.walk(top_folder):
    for d in dirs:
        if not d.endswith(".gdb"):
            continue
        gdb_path = os.path.join(path, d)
        arcpy.env.workspace = gdb_path
        for fc in arcpy.ListFeatureClasses():
            FindField(fc, myField)
        for fds in arcpy.ListDatasets('', 'feature'):
            for fc in arcpy.ListFeatureClasses('', '', fds):
                FindField(fc, myField)
• You will find it easier to implement new tools if you use the minimum amount of code to test and understand new functionality. This code does not need functions. If you strip it down to print statements, you'll have a better idea of what is happening. May 5 '16 at 13:17
I'd recommend the following because it's very simple, and it's also very worthwhile to get familiar with the os module.
import os
import arcpy

myField = "RuleID"
top_folder = r"path\to\top\folder"

for path, dirs, files in os.walk(top_folder):
    for d in dirs:
        if not d.endswith(".gdb"):
            continue
        gdb_path = os.path.join(path, d)
        print(gdb_path)
        arcpy.env.workspace = gdb_path

        # Collect feature classes at the gdb root and inside feature datasets
        all_fcs = arcpy.ListFeatureClasses()
        for fds in arcpy.ListDatasets('', 'feature'):
            for fc in arcpy.ListFeatureClasses('', '', fds):
                all_fcs.append(fc)

        for fc in all_fcs:
            fieldnames = [f.name.lower() for f in arcpy.ListFields(fc)]
            if myField.lower() in fieldnames:
                print(fc)
Things I'm not 100% on (it's been a little while since I needed to use arcpy):
Matching field names by forcing to lower() may allow something to slip through the cracks that will cause trouble later on.
The whole feature class dataset iteration thing. You'll want to print everything out all the time to make sure you aren't hitting feature classes twice.
Ultimately, @Vince was right in the beginning: too much complexity for a simple task.
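Stripped of the arcpy calls, the directory walk that does the real work can be isolated into a small helper and tested anywhere, even on a machine without ArcGIS (the function name is hypothetical):

```python
import os

def find_gdbs(top_folder):
    """Collect the full path of every *.gdb directory under top_folder."""
    gdbs = []
    for path, dirs, files in os.walk(top_folder):
        for d in dirs:
            # file geodatabases are ordinary directories whose name ends in .gdb
            if d.endswith(".gdb"):
                gdbs.append(os.path.join(path, d))
    return gdbs
```

Each returned path can then be assigned to `arcpy.env.workspace` exactly as in the answer above.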
• I've used this code given here gis.stackexchange.com/questions/26892/… Passed gdb_path for arcpy.env.workspace. But the list is not getting iterated May 5 '16 at 14:49
• first, when gdb_path is printed, is it correct? second, which list are you talking about. I'll add a couple more lines to my answer that should get you close. May 5 '16 at 14:58
• I've edited the question and added the code. May 5 '16 at 15:16
• ok, at this point you're well beyond the scope of your original question, but I added a little bit more to my answer to accomplish what you need. The problem may have been that you were passing a wildcard to the listfields function (more specific) but later only testing after forcing the case (less specific). May 5 '16 at 15:33
• OK. I've just called the findfield function after your list.featureclass and list.fields. Now it's working perfectly. A big thanks. Very helpful May 5 '16 at 15:40
|
{}
|
# Angular speed of bullet
Hey guys,
I have a bullet with a diameter of 0.5 inches and a speed of 400 ft/sec.
The rifling makes one complete rotation every 120 mm of travel, and the barrel is 1.5 meters long.
How can I calculate the rotational and angular speed?
- For the rotational speed I used R = 2π·n, where n is the number of rotations: n = 1.5 m / 120 mm = 12.5.
- For the angular speed I used ω = v/r, with v = 400 ft/sec and r = 0.5 inch.
Are my formulas and calculations correct?
Wow, 1.5 meters is quite a long gun, haha. Anyway, if I am correct, 120 mm is the distance the bullet travels in one turn, correct? I would find out how many turns it needs to make before it exits the barrel and use the speed at which it is traveling. As for the diameter, I have no idea what it is for.
Yes, you're correct: 120 mm is the distance the bullet travels in one turn.
I guess the number of turns is 1500/120 = 12.5.
The given diameter may be for the angular velocity, ω = v/r,
but I am not sure if I can use this diameter for the angular velocity.
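The spin rate follows from the twist alone: the bullet turns once per 120 mm of travel, so at the muzzle it spins at v divided by the pitch. A quick sketch of that arithmetic with the thread's numbers (an illustrative calculation under those assumptions, not an authoritative solution):

```python
import math

# Thread's numbers: one turn of rifling per 120 mm, muzzle speed 400 ft/s,
# barrel length 1.5 m.
v = 400 * 0.3048      # muzzle speed in m/s
pitch = 0.120         # metres advanced per full turn of rifling
barrel = 1.5          # barrel length in m

turns_in_barrel = barrel / pitch        # turns completed before exit
spin_rev_per_s = v / pitch              # spin rate at the muzzle, rev/s
omega = 2 * math.pi * spin_rev_per_s    # angular speed about the bullet axis, rad/s

print(round(turns_in_barrel, 3))  # 12.5
print(round(omega))               # 6384
```

Note that the 0.5-inch diameter is not needed for the spin itself; it would only matter for derived quantities such as the surface speed r·ω at the bullet's edge.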
|
{}
|
# zbMATH — the first resource for mathematics
## Nier, Francis
Author ID: nier.francis Published as: Nier, F.; Nier, Francis
Documents Indexed: 62 Publications since 1990, including 2 Books
#### Co-Authors
24 single-authored 6 Ammari, Zied 5 Nataf, Frédéric 4 Helffer, Bernard 4 Mantile, Andrea 3 Bonnaillie-Noël, Virginie 3 Chniti, Chokri 3 Faraj, Ali 3 Patel, Yassine 2 Aftalion, Amandine 2 Arnold, Anton 2 Degond, Pierre 2 Gérard, Christian 2 Lelièvre, Tony 2 Mustieles, Francisco-José 1 Blanc, Xavier 1 Breteaux, Sébastien 1 Delaurens, Frédérique 1 Gallagher, Isabelle 1 Gallay, Thierry 1 Guyot-Delaurens, F. 1 Hérau, Frédéric 1 Hmidi, Toufik 1 Klein, Markus 1 Le Peutrec, Dorian 1 Liard, Quentin 1 Patel, Mainak 1 Pavliotis, Grigorios A. 1 Rouffort, C. 1 Said, Mona Ben 1 Soffer, Avraham 1 Viola, Joe 1 Viterbo, Claude
#### Serials
6 Séminaire Équations aux Dérivées Partielles 3 Journal of Functional Analysis 2 Asymptotic Analysis 2 IMRN. International Mathematics Research Notices 2 Communications in Partial Differential Equations 2 Annales Henri Poincaré 1 Archive for Rational Mechanics and Analysis 1 Journal of Computational Physics 1 Journal of Mathematical Physics 1 Journal of Statistical Physics 1 Mathematical Methods in the Applied Sciences 1 Nonlinearity 1 Transport Theory and Statistical Physics 1 Mathematics of Computation 1 Reviews in Mathematical Physics 1 Annales Scientifiques de l’École Normale Supérieure. Quatrième Série 1 Calcolo 1 Journal of Mathematics of Kyoto University 1 Journal of the Mathematical Society of Japan 1 Memoirs of the American Mathematical Society 1 Numerische Mathematik 1 Annales de l’Institut Henri Poincaré. Analyse Non Linéaire 1 COMPEL 1 Journal of Scientific Computing 1 Forum Mathematicum 1 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 1 Journal de Mathématiques Pures et Appliquées. Neuvième Série 1 Proceedings of the Royal Society of Edinburgh. Section A. Mathematics 1 Mémoires de la Société Mathématique de France. Nouvelle Série 1 Comptes Rendus de l’Académie des Sciences. Série I 1 Annales de la Faculté des Sciences de Toulouse. Mathématiques. Série VI 1 Serdica Mathematical Journal 1 Matemática Contemporânea 1 Mathematical Physics, Analysis and Geometry 1 RIMS Kokyuroku 1 Comptes Rendus. Mathématique. Académie des Sciences, Paris 1 Annali della Scuola Normale Superiore di Pisa. Classe di Scienze. Serie V 1 Lecture Notes in Mathematics 1 Journal of Physics A: Mathematical and Theoretical 1 Analysis & PDE 1 Tunisian Journal of Mathematics
#### Fields
41 Partial differential equations (35-XX) 28 Quantum theory (81-XX) 15 Statistical mechanics, structure of matter (82-XX) 14 Operator theory (47-XX) 12 Numerical analysis (65-XX) 9 Global analysis, analysis on manifolds (58-XX) 6 Ordinary differential equations (34-XX) 5 Probability theory and stochastic processes (60-XX) 4 Dynamical systems and ergodic theory (37-XX) 2 Potential theory (31-XX) 2 Special functions (33-XX) 2 Integral equations (45-XX) 2 Functional analysis (46-XX) 2 Fluid mechanics (76-XX) 2 Optics, electromagnetic theory (78-XX) 1 History and biography (01-XX) 1 Measure and integration (28-XX) 1 Mechanics of particles and systems (70-XX)
#### Citations contained in zbMATH Open
49 Publications have been cited 721 times in 478 Documents Cited by Year
Hypoelliptic estimates and spectral theory for Fokker-Planck operators and Witten Laplacians. Zbl 1072.35006
Helffer, Bernard; Nier, Francis
2005
Isotropic hypoellipticity and trend to equilibrium for the Fokker-Planck equation with a high-degree potential. Zbl 1139.82323
Hérau, Frédéric; Nier, Francis
2004
Mean field limit for bosons and infinite dimensional phase-space analysis. Zbl 1171.81014
Ammari, Zied; Nier, Francis
2008
The Mourre theory for analytically fibered operators. Zbl 0939.47019
Gérard, Christian; Nier, Francis
1998
Lowest Landau level functional and Bargmann spaces for Bose-Einstein condensates. Zbl 1118.82004
Aftalion, A.; Blanc, X.; Nier, F.
2006
Quantitative analysis of metastability in reversible diffusion processes via a Witten complex approach. Zbl 1079.58025
Helffer, Bernard; Klein, Markus; Nier, Francis
2004
Mean field propagation of Wigner measures and BBGKY hierarchies for general bosonic states. Zbl 1251.81062
Ammari, Z.; Nier, F.
2011
Optimal non-reversible linear drift for the convergence to equilibrium of a diffusion. Zbl 1276.82042
Lelièvre, T.; Nier, F.; Pavliotis, G. A.
2013
A variational formulation of Schrödinger–Poisson systems in dimension $$d\leq3$$. Zbl 0785.35086
Nier, Francis
1993
A stationary Schrödinger-Poisson system arising from the modelling of electronic devices. Zbl 0716.34098
Nier, Francis
1990
Mean field limit for bosons and propagation of Wigner measures. Zbl 1214.81089
Ammari, Z.; Nier, F.
2009
Convergence rate of some domain decomposition methods for overlapping and nonoverlapping subdomains. Zbl 0873.65108
Nataf, Frédéric; Nier, F.
1997
Quantitative analysis of metastability in reversible diffusion processes via a Witten complex approach: the case with boundary. Zbl 1108.58018
Nier, F.; Helffer, B.
2006
Scattering theory for the perturbations of periodic Schrödinger operators. Zbl 0934.35111
Gérard, Christian; Nier, Francis
1998
Spectral asymptotics for large skew-symmetric perturbations of the harmonic oscillator. Zbl 1180.35383
Gallagher, Isabelle; Gallay, Thierry; Nier, Francis
2009
The dynamics of some quantum open systems with short-range nonlinearities. Zbl 0909.34052
Nier, Francis
1998
Mean field propagation of infinite-dimensional Wigner measures with a singular two-body interaction potential. Zbl 1341.81037
Ammari, Zied; Nier, Francis
2015
The two-dimensional Wigner-Poisson problem for an electron gas in the charge neutral case. Zbl 0742.35078
Arnold, Anton; Nier, Francis
1991
A semi-classical picture of quantum scattering. Zbl 0858.35106
Nier, Francis
1996
Schrödinger-Poisson systems in dimension $$d\leqq 3$$: The whole-space case. Zbl 0807.35119
Nier, F.
1993
Precise Arrhenius law for $$p$$-forms: the Witten Laplacian and Morse-Barannikov complex. Zbl 1275.58018
Le Peutrec, Dorian; Nier, Francis; Viterbo, Claude
2013
Low temperature asymptotics for quasistationary distributions in a bounded domain. Zbl 1320.58021
Lelièvre, Tony; Nier, Francis
2015
Asymptotic analysis of a scaled Wigner equation and quantum scattering. Zbl 0870.45003
Nier, Francis
1995
Hypoellipticity for Fokker-Planck operators and Witten Laplacians. Zbl 1286.35283
Nier, Francis
2012
Far from equilibrium steady states of 1D-Schrödinger-Poisson systems with quantum wells II. Zbl 1157.82046
Bonnaillie-Noël, Virginie; Nier, Francis; Patel, Yassine
2009
About theta functions and the Abrikosov lattice. Zbl 1171.33012
Nier, Francis
2007
Dispersion and Strichartz estimates for some finite rank perturbations of the Laplace operator. Zbl 1034.35017
Nier, F.; Soffer, A.
2003
Quantum mean-field asymptotics and multiscale analysis. Zbl 1407.81117
Ammari, Zied; Breteaux, Sébastien; Nier, Francis
2019
Adiabatic evolution of 1D shape resonances: an artificial interface conditions approach. Zbl 1223.35125
Faraj, Ali; Mantile, Andrea; Nier, Francis
2011
Far from equilibrium steady states of 1D-Schrödinger-Poisson systems with quantum wells. I. Zbl 1149.82349
Bonnaillie-Noël, V.; Nier, F.; Patel, Y.
2008
Bose-Einstein condensates in the lowest Landau level: Hamiltonian dynamics. Zbl 1129.82023
Nier, F.
2007
Computing the steady states for an asymptotic model of quantum transport in resonant heterostructures. Zbl 1189.82129
Bonnaillie-Noël, Virginie; Nier, Francis; Patel, Yassine
2006
Boundary conditions and subelliptic estimates for geometric Kramers-Fokker-Planck operators on manifolds with boundaries. Zbl 1419.35001
Nier, Francis
2018
An explicit model for the adiabatic evolution of quantum observables driven by 1D shape resonances. Zbl 1204.81071
Faraj, A.; Mantile, A.; Nier, F.
2010
Improved interface conditions for 2D domain decomposition with corners: A theoretical determination. Zbl 1173.65364
Chniti, Chokri; Nataf, Frédéric; Nier, Francis
2008
Improved interface conditions for a non-overlapping domain decomposition of a nonconvex polygonal domain. Zbl 1096.65124
Chniti, Chokri; Nataf, Frédéric; Nier, Francis
2006
Criteria to the Poincaré inequality associated with Dirichlet forms in $$\mathbb{R}^d,\,d\geq 2$$. Zbl 1098.31005
Helffer, Bernard; Nier, Francis
2003
Convergence of domain decomposition methods via semi-classical calculus. Zbl 0909.35007
Nataf, Frédéric; Nier, Francis
1998
Particle simulation of bidimensional electron transport parallel to a heterojunction interface. Zbl 0724.65133
Degond, P.; Delaurens, F.; Mustieles, F. J.; Nier, F.
1990
Improved interface conditions for $$2 D$$ domain decomposition with corners: numerical applications. Zbl 1203.65274
Chniti, Chokri; Nataf, Frédéric; Nier, Francis
2009
Nonlinear asymptotics for quantum out-of-equilibrium 1D systems: reduced models and algorithms. Zbl 1322.82003
Nier, F.; Patel, M.
2004
Quantitative analysis of metastability in reversible diffusion processes via a Witten complex approach. Zbl 1067.35057
Nier, Francis
2004
Remarks on domain decomposition algorithms. Zbl 1058.65514
Nier, F.
1999
Variational formulation of Schrödinger-Poisson systems in dimension $$d\leq 3$$. Zbl 0823.35154
Nier, Francis
1992
Semiconductor modelling via the Boltzmann equation. Zbl 0732.35098
Degond, P.; Guyot-Delaurens, F.; Mustieles, F. J.; Nier, F.
1991
On the relationship between non-linear Schrödinger dynamics, Gross-Pitaevskii hierarchy and Liouville’s equation. Zbl 1369.37072
Ammari, Zied; Nier, F.; Liard, Q.; Rouffort, C.
2017
Time-dependent delta-interactions for 1D Schrödinger Hamiltonians. Zbl 1190.37013
Hmidi, Toufik; Mantile, Andrea; Nier, Francis
2010
Accurate WKB approximation for a 1D problem with low regularity. Zbl 1199.81023
Nier, F.
2008
Numerical analysis of the deterministic particle method applied to the Wigner equation. Zbl 0763.65092
Arnold, Anton; Nier, Francis
1992
Quantum mean-field asymptotics and multiscale analysis. Zbl 1407.81117
Ammari, Zied; Breteaux, Sébastien; Nier, Francis
2019
Boundary conditions and subelliptic estimates for geometric Kramers-Fokker-Planck operators on manifolds with boundaries. Zbl 1419.35001
Nier, Francis
2018
On the relationship between non-linear Schrödinger dynamics, Gross-Pitaevskii hierarchy and Liouville’s equation. Zbl 1369.37072
Ammari, Zied; Nier, F.; Liard, Q.; Rouffort, C.
2017
Mean field propagation of infinite-dimensional Wigner measures with a singular two-body interaction potential. Zbl 1341.81037
Ammari, Zied; Nier, Francis
2015
Low temperature asymptotics for quasistationary distributions in a bounded domain. Zbl 1320.58021
Lelièvre, Tony; Nier, Francis
2015
Optimal non-reversible linear drift for the convergence to equilibrium of a diffusion. Zbl 1276.82042
Lelièvre, T.; Nier, F.; Pavliotis, G. A.
2013
Precise Arrhenius law for $$p$$-forms: the Witten Laplacian and Morse-Barannikov complex. Zbl 1275.58018
Le Peutrec, Dorian; Nier, Francis; Viterbo, Claude
2013
Hypoellipticity for Fokker-Planck operators and Witten Laplacians. Zbl 1286.35283
Nier, Francis
2012
Mean field propagation of Wigner measures and BBGKY hierarchies for general bosonic states. Zbl 1251.81062
Ammari, Z.; Nier, F.
2011
Adiabatic evolution of 1D shape resonances: an artificial interface conditions approach. Zbl 1223.35125
Faraj, Ali; Mantile, Andrea; Nier, Francis
2011
An explicit model for the adiabatic evolution of quantum observables driven by 1D shape resonances. Zbl 1204.81071
Faraj, A.; Mantile, A.; Nier, F.
2010
Time-dependent delta-interactions for 1D Schrödinger Hamiltonians. Zbl 1190.37013
Hmidi, Toufik; Mantile, Andrea; Nier, Francis
2010
Mean field limit for bosons and propagation of Wigner measures. Zbl 1214.81089
Ammari, Z.; Nier, F.
2009
Spectral asymptotics for large skew-symmetric perturbations of the harmonic oscillator. Zbl 1180.35383
Gallagher, Isabelle; Gallay, Thierry; Nier, Francis
2009
Far from equilibrium steady states of 1D-Schrödinger-Poisson systems with quantum wells II. Zbl 1157.82046
Bonnaillie-Noël, Virginie; Nier, Francis; Patel, Yassine
2009
Improved interface conditions for $$2 D$$ domain decomposition with corners: numerical applications. Zbl 1203.65274
Chniti, Chokri; Nataf, Frédéric; Nier, Francis
2009
Mean field limit for bosons and infinite dimensional phase-space analysis. Zbl 1171.81014
Ammari, Zied; Nier, Francis
2008
Far from equilibrium steady states of 1D-Schrödinger-Poisson systems with quantum wells. I. Zbl 1149.82349
Bonnaillie-Noël, V.; Nier, F.; Patel, Y.
2008
Improved interface conditions for 2D domain decomposition with corners: A theoretical determination. Zbl 1173.65364
Chniti, Chokri; Nataf, Frédéric; Nier, Francis
2008
Accurate WKB approximation for a 1D problem with low regularity. Zbl 1199.81023
Nier, F.
2008
About theta functions and the Abrikosov lattice. Zbl 1171.33012
Nier, Francis
2007
Bose-Einstein condensates in the lowest Landau level: Hamiltonian dynamics. Zbl 1129.82023
Nier, F.
2007
Lowest Landau level functional and Bargmann spaces for Bose-Einstein condensates. Zbl 1118.82004
Aftalion, A.; Blanc, X.; Nier, F.
2006
Quantitative analysis of metastability in reversible diffusion processes via a Witten complex approach: the case with boundary. Zbl 1108.58018
Nier, F.; Helffer, B.
2006
Computing the steady states for an asymptotic model of quantum transport in resonant heterostructures. Zbl 1189.82129
Bonnaillie-Noël, Virginie; Nier, Francis; Patel, Yassine
2006
Improved interface conditions for a non-overlapping domain decomposition of a nonconvex polygonal domain. Zbl 1096.65124
Chniti, Chokri; Nataf, Frédéric; Nier, Francis
2006
Hypoelliptic estimates and spectral theory for Fokker-Planck operators and Witten Laplacians. Zbl 1072.35006
Helffer, Bernard; Nier, Francis
2005
Isotropic hypoellipticity and trend to equilibrium for the Fokker-Planck equation with a high-degree potential. Zbl 1139.82323
Hérau, Frédéric; Nier, Francis
2004
Quantitative analysis of metastability in reversible diffusion processes via a Witten complex approach. Zbl 1079.58025
Helffer, Bernard; Klein, Markus; Nier, Francis
2004
Nonlinear asymptotics for quantum out-of-equilibrium 1D systems: reduced models and algorithms. Zbl 1322.82003
Nier, F.; Patel, M.
2004
Quantitative analysis of metastability in reversible diffusion processes via a Witten complex approach. Zbl 1067.35057
Nier, Francis
2004
Dispersion and Strichartz estimates for some finite rank perturbations of the Laplace operator. Zbl 1034.35017
Nier, F.; Soffer, A.
2003
Criteria to the Poincaré inequality associated with Dirichlet forms in $$\mathbb{R}^d,\,d\geq 2$$. Zbl 1098.31005
Helffer, Bernard; Nier, Francis
2003
Remarks on domain decomposition algorithms. Zbl 1058.65514
Nier, F.
1999
The Mourre theory for analytically fibered operators. Zbl 0939.47019
Gérard, Christian; Nier, Francis
1998
Scattering theory for the perturbations of periodic Schrödinger operators. Zbl 0934.35111
Gérard, Christian; Nier, Francis
1998
The dynamics of some quantum open systems with short-range nonlinearities. Zbl 0909.34052
Nier, Francis
1998
Convergence of domain decomposition methods via semi-classical calculus. Zbl 0909.35007
Nataf, Frédéric; Nier, Francis
1998
Convergence rate of some domain decomposition methods for overlapping and nonoverlapping subdomains. Zbl 0873.65108
Nataf, Frédéric; Nier, F.
1997
A semi-classical picture of quantum scattering. Zbl 0858.35106
Nier, Francis
1996
Asymptotic analysis of a scaled Wigner equation and quantum scattering. Zbl 0870.45003
Nier, Francis
1995
A variational formulation of Schrödinger–Poisson systems in dimension $$d\leq3$$. Zbl 0785.35086
Nier, Francis
1993
Schrödinger-Poisson systems in dimension $$d\leqq 3$$: The whole-space case. Zbl 0807.35119
Nier, F.
1993
Variational formulation of Schrödinger-Poisson systems in dimension $$d\leq 3$$. Zbl 0823.35154
Nier, Francis
1992
Numerical analysis of the deterministic particle method applied to the Wigner equation. Zbl 0763.65092
Arnold, Anton; Nier, Francis
1992
The two-dimensional Wigner-Poisson problem for an electron gas in the charge neutral case. Zbl 0742.35078
Arnold, Anton; Nier, Francis
1991
Semiconductor modelling via the Boltzmann equation. Zbl 0732.35098
Degond, P.; Guyot-Delaurens, F.; Mustieles, F. J.; Nier, F.
1991
A stationary Schrödinger-Poisson system arising from the modelling of electronic devices. Zbl 0716.34098
Nier, Francis
1990
Particle simulation of bidimensional electron transport parallel to a heterojunction interface. Zbl 0724.65133
Degond, P.; Delaurens, F.; Mustieles, F. J.; Nier, F.
1990
#### Cited by 596 Authors
20 Nier, Francis 12 Rougerie, Nicolas 11 Lewin, Mathieu 10 Ben Abdallah, Naoufel 10 Hérau, Frédéric 10 Li, Weixi 10 Méhats, Florian 10 Pavliotis, Grigorios A. 10 Pravda-Starov, Karel 9 Phan Thành Nam 8 Bétermin, Laurent 7 Arnold, Anton 7 Lelièvre, Tony 6 Aftalion, Amandine 6 Ammari, Zied 6 Falconi, Marco 6 Helffer, Bernard 6 Hitrik, Michael 6 Klein, Markus 6 Le Peutrec, Dorian 6 Mouhot, Clément 6 Nataf, Frédéric 6 Rosenberger, Elke 6 Schlein, Benjamin 6 Sohinger, Vedran 5 Bedrossian, Jacob 5 Chen, Hua 5 Chniti, Chokri 5 Correggi, Michele 5 Dolbeault, Jean 5 Guillin, Arnaud 5 Michel, Laurent 5 Niikuni, Hiroaki 5 Pinaud, Olivier 5 Popoff, Nicolas 5 Rehberg, Joachim 5 Stoltz, Gabriel 5 Yngvason, Jakob 4 Blanc, Xavier 4 Breteaux, Sébastien 4 Carles, Rémi 4 Di Gesù, Giacomo 4 Germain, Pierre 4 Häfner, Dietrich 4 Hairer, Martin 4 Hwang, Hyung Ju 4 Kaiser, Hans-Christoph 4 Kuchment, Peter A. 4 Lange, Horst 4 Mantile, Andrea 4 Mischler, Stéphane 4 Nectoux, Boris 4 Neidhardt, Hagen 4 Serfaty, Sylvia 4 Sigal, Israel Michael 4 Tentarelli, Lorenzo 4 Thomann, Laurent 4 Xu, Chao-Jiang 3 Aida, Shigeki 3 Amour, Laurent 3 Bierkens, Joris 3 Bony, Jean-François 3 Castella, François 3 Cattiaux, Patrick 3 Chen, Thomas M. 3 Chen, Xuwen 3 Derridj, Makhlouf 3 Duncan, Andrew B. 3 Evans, Josephine 3 Gander, Martin Jakob 3 Geuzaine, Christophe A. 3 Grothaus, Martin 3 Herda, Maxime 3 Holmer, Justin 3 Isozaki, Hiroshi 3 Jager, Lisette 3 Jang, Juhi 3 Lerner, Nicolas 3 Liard, Quentin 3 López, José Luis 3 Macià, Fabricio 3 Markowich, Peter Alexander 3 Menozzi, Stéphane 3 Morioka, Hisashi 3 Motta, Santo 3 Nourrigat, Jean Francois 3 Ottobre, Michela 3 Pavlović, Nataša 3 Petrache, Mircea 3 Schlichting, André 3 Schmeiser, Christian 3 Seiringer, Robert 3 Sjöstrand, Johannes 3 Soler, Juan S. 3 Sparber, Christof 3 Tzaneteas, Tim 3 Viola, Joe 3 Wei, Dongyi 3 Zhang, Zhifei 2 Abdulle, Assyr ...and 496 more Authors
#### Cited in 136 Serials
31 Journal of Functional Analysis 26 Journal of Statistical Physics 23 Journal of Mathematical Physics 19 Communications in Mathematical Physics 18 Annales Henri Poincaré 15 Archive for Rational Mechanics and Analysis 15 Communications in Partial Differential Equations 15 SIAM Journal on Mathematical Analysis 14 Journal de Mathématiques Pures et Appliquées. Neuvième Série 11 Reviews in Mathematical Physics 11 Journal of Differential Equations 11 Annales de l’Institut Henri Poincaré. Analyse Non Linéaire 11 Comptes Rendus. Mathématique. Académie des Sciences, Paris 9 Journal of Computational Physics 8 Journal of Mathematical Analysis and Applications 8 Transactions of the American Mathematical Society 8 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 8 Kinetic and Related Models 7 Annales de l’Institut Fourier 6 Inventiones Mathematicae 5 Letters in Mathematical Physics 5 Mathematics of Computation 5 Journal of Nonlinear Science 5 Journal of Pseudo-Differential Operators and Applications 4 Communications on Pure and Applied Mathematics 4 Integral Equations and Operator Theory 4 Journal of Computational and Applied Mathematics 4 Communications in Contemporary Mathematics 4 Multiscale Modeling & Simulation 3 Nonlinearity 3 Advances in Mathematics 3 Proceedings of the American Mathematical Society 3 Quarterly of Applied Mathematics 3 Tokyo Journal of Mathematics 3 Journal of Scientific Computing 3 The Annals of Applied Probability 3 St. Petersburg Mathematical Journal 3 Journal of the Institute of Mathematics of Jussieu 3 European Series in Applied and Industrial Mathematics (ESAIM): Mathematical Modelling and Numerical Analysis 3 Analysis & PDE 3 Science China. Mathematics 3 SIAM/ASA Journal on Uncertainty Quantification 3 Annals of PDE 2 Reports on Mathematical Physics 2 Transport Theory and Statistical Physics 2 The Annals of Probability 2 Applied Mathematics and Computation 2 Mathematische Nachrichten 2 Memoirs of the American Mathematical Society 2 Monatshefte für Mathematik 2 Numerische Mathematik 2 Annals of Global Analysis and Geometry 2 Applied Numerical Mathematics 2 Applied Mathematics Letters 2 Bulletin of the American Mathematical Society. New Series 2 Annales de la Faculté des Sciences de Toulouse. Mathématiques. Série VI 2 Bernoulli 2 European Series in Applied and Industrial Mathematics (ESAIM): Proceedings 2 SIAM Journal on Applied Dynamical Systems 2 Acta Numerica 2 Analysis and Mathematical Physics 2 Statistics and Computing 2 Journal de l’École Polytechnique – Mathématiques 2 Séminaire Laurent Schwartz. EDP et Applications 2 Pure and Applied Analysis 1 Advances in Applied Probability 1 Computers & Mathematics with Applications 1 Computer Methods in Applied Mechanics and Engineering 1 Computer Physics Communications 1 Israel Journal of Mathematics 1 Mathematical Methods in the Applied Sciences 1 Mathematical Notes 1 Physica A 1 ZAMP. Zeitschrift für angewandte Mathematik und Physik 1 Theory of Probability and its Applications 1 Annales Scientifiques de l’École Normale Supérieure. Quatrième Série 1 Automatica 1 Calcolo 1 Geometriae Dedicata 1 Journal of the Mathematical Society of Japan 1 Mathematische Annalen 1 Mathematische Zeitschrift 1 SIAM Journal on Control and Optimization 1 SIAM Journal on Numerical Analysis 1 European Journal of Combinatorics 1 Physica D 1 RAIRO. Modélisation Mathématique et Analyse Numérique 1 Constructive Approximation 1 Revista Matemática Iberoamericana 1 COMPEL 1 Numerical Methods for Partial Differential Equations 1 Journal of Theoretical Probability 1 Mathematical and Computer Modelling 1 Asymptotic Analysis 1 Atti della Accademia Nazionale dei Lincei. Classe di Scienze Fisiche, Matematiche e Naturali. Serie IX. Rendiconti Lincei. Matematica e Applicazioni 1 The Journal of Geometric Analysis 1 Numerical Algorithms 1 Proceedings of the Royal Society of Edinburgh. Section A. Mathematics 1 SIAM Journal on Applied Mathematics 1 SIAM Review ...and 36 more Serials
#### Cited in 46 Fields
317 Partial differential equations (35-XX) 168 Statistical mechanics, structure of matter (82-XX) 145 Quantum theory (81-XX) 78 Operator theory (47-XX) 67 Numerical analysis (65-XX) 64 Probability theory and stochastic processes (60-XX) 41 Global analysis, analysis on manifolds (58-XX) 32 Fluid mechanics (76-XX) 30 Dynamical systems and ergodic theory (37-XX) 18 Ordinary differential equations (34-XX) 15 Optics, electromagnetic theory (78-XX) 13 Functional analysis (46-XX) 11 Differential geometry (53-XX) 10 Integral equations (45-XX) 9 Calculus of variations and optimal control; optimization (49-XX) 7 Combinatorics (05-XX) 6 Mechanics of deformable solids (74-XX) 5 Algebraic geometry (14-XX) 5 Mechanics of particles and systems (70-XX) 5 Biology and other natural sciences (92-XX) 5 Systems theory; control (93-XX) 4 Real functions (26-XX) 4 Measure and integration (28-XX) 4 Several complex variables and analytic spaces (32-XX) 4 Manifolds and cell complexes (57-XX) 4 Relativity and gravitational theory (83-XX) 4 Information and communication theory, circuits (94-XX) 3 Difference and functional equations (39-XX) 3 Statistics (62-XX) 2 Number theory (11-XX) 2 Linear and multilinear algebra; matrix theory (15-XX) 2 Topological groups, Lie groups (22-XX) 2 Functions of a complex variable (30-XX) 2 Potential theory (31-XX) 2 Convex and discrete geometry (52-XX) 2 Algebraic topology (55-XX) 2 Astronomy and astrophysics (85-XX) 1 General and overarching topics; collections (00-XX) 1 Mathematical logic and foundations (03-XX) 1 Commutative algebra (13-XX) 1 Associative rings and algebras (16-XX) 1 Group theory and generalizations (20-XX) 1 Special functions (33-XX) 1 Harmonic analysis on Euclidean spaces (42-XX) 1 Geometry (51-XX) 1 Classical thermodynamics, heat transfer (80-XX)
|
{}
|
13 Hearts but Nothing else
What has 13 hearts, but no other organs?
There is a very simple answer which you will be kicking yourself over after you see it. There is only one possible answer...
• Someone told me this riddle two weeks ago (to which I guessed the answer correctly). But the point is, that means you might have not come up with the riddle yourself. If that is the case, please include where or how you found the riddle. – Mr Pie Aug 5 '18 at 9:14
• Since @user477343's answer was considered the correct one (at least among some religious community), and since F1Krazy's answer seems definitely valid, there is apparently not "one possible answer". – xhienne Aug 5 '18 at 10:32
• A third possible answer is "this shirt I'm currently wearing" ;) – jafe Aug 5 '18 at 11:26
• @jafe Hahahah :D – Mr Pie Aug 5 '18 at 22:52
• Another alternative answer: My freezer. Please don't call the cops. – Ontamu Aug 6 '18 at 6:18
This is
a deck of playing cards.
Explanation:
There are 13 Hearts cards in a standard deck: King, Queen, Jack, Ace, and 2-10.
You might have said there can only be one answer... but the answer could also be, from a religious perspective,
1 Corinthians 13 in the New Testament of the Christian Bible.
What has 13 hearts, but no other organs?
1 Corinthians 13 was written by St. Paul and has 13 verses, each talking about love. Of course, a heart symbolises love. If you wish to read it, you can over here.
It is:
A deck of cards.
Reason:
A deck of cards contains 52 cards, of which 13 belong to the heart suit.
• Look at @F1Krazy's answer. – Mr Pie Aug 6 '18 at 4:05
• No problem. Somebody down-voted this answer, but they have to take into consideration that you are new to this site, wherefore I strongly suggest you visit the Help Center. You can get all the information there, but the sections explaining asking and answering are most important, so check them out. I hope you enjoy this site, and that you make good contributions to this community in the future :D – Mr Pie Aug 6 '18 at 4:17
|
{}
|
# Quadratic Function with Imaginary Coefficients
Algebra Level 2
Find the zeroes of the quadratic function above. To see the solution for this problem, find a post named "Solutions for Quadratic Function with Imaginary Coefficients".
|
{}
|
# Serialization¶
In order to dump a Quantity to disk, store it in a database or transmit it over the wire you need to be able to serialize and then deserialize the object.
The easiest way to do this is by converting the quantity to a string:
>>> import pint
>>> ureg = pint.UnitRegistry()
>>> duration = 24.2 * ureg.years
>>> duration
<Quantity(24.2, 'year')>
>>> serialized = str(duration)
>>> print(serialized)
24.2 year
Remember that you can easily control the number of digits in the representation as shown in String formatting.
You can then dump, store, or transmit the content of serialized ('24.2 year'). To recover it in another process or on another machine, just:
>>> import pint
>>> ureg = pint.UnitRegistry()
>>> duration = ureg('24.2 year')
>>> print(duration)
24.2 year
Notice that the serialized quantity is likely to be parsed in another registry, as in the example above. Pint Quantities do not exist on their own; they are always related to a UnitRegistry. Everything will work as expected if both registries are compatible (e.g. they were created using the same definition file). However, things can go wrong if the registries are incompatible. For example, year might not be defined in the target registry. Or, even worse, it might be defined in a different way. Always keep in mind that the interpretation and conversion of Quantities are UnitRegistry dependent.
In certain cases you want a binary representation of the data. Python's standard serialization mechanism is called Pickle. Pint quantities implement the magic __reduce__ method and can therefore be pickled and unpickled. However, bear in mind that the DEFAULT_REGISTRY is used for unpickling, and this might differ from the one that was used during pickling. If you want to have control over the deserialization, the best way is to create a tuple with the magnitude and the units:
>>> to_serialize = duration.to_tuple()
>>> print(to_serialize)
(24.2, (('year', 1.0),))
And then you can just pickle that:
>>> import pickle
>>> serialized = pickle.dumps(to_serialize, -1)
To unpickle and rebuild the quantity, just:
>>> loaded = pickle.loads(serialized)
>>> ureg.Quantity.from_tuple(loaded)
<Quantity(24.2, 'year')>
(To pickle to and from a file, just use the dump and load functions as described in the Pickle documentation.)
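As a minimal sketch of the file-based variant: the tuple below just mimics the output of Quantity.to_tuple(), so pint itself is not imported here.

```python
import os
import pickle
import tempfile

# Stand-in for duration.to_tuple(): (magnitude, ((unit, exponent), ...))
to_serialize = (24.2, (("year", 1.0),))

# Dump to a file and load it back, as with any picklable object.
with tempfile.NamedTemporaryFile(suffix=".pkl", delete=False) as f:
    pickle.dump(to_serialize, f)
    path = f.name

with open(path, "rb") as f:
    loaded = pickle.load(f)
os.remove(path)

print(loaded)  # (24.2, (('year', 1.0),))
```

The loaded tuple can then be turned back into a quantity with ureg.Quantity.from_tuple, as shown above.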
You can use the same mechanism with any serialization protocol, not only binary ones (in fact, version 0 of the Pickle protocol is ASCII). Other common serialization protocols/packages are json, yaml, shelve, hdf5 (or via PyTables) and dill. Notice that not all of these packages will properly serialize the magnitude (which can be any numerical type such as numpy.ndarray).
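For instance, the tuple form also survives a text-based protocol such as json. A caveat shown in this sketch (which again does not import pint and uses a hand-written stand-in for the to_tuple() output) is that JSON turns tuples into lists, so they must be rebuilt before handing the result to Quantity.from_tuple:

```python
import json

# Stand-in for duration.to_tuple(): (magnitude, ((unit, exponent), ...))
to_serialize = (24.2, (("year", 1.0),))

serialized = json.dumps(to_serialize)
raw = json.loads(serialized)  # tuples come back as lists

# Rebuild the nested tuple structure expected by Quantity.from_tuple.
restored = (raw[0], tuple((unit, exponent) for unit, exponent in raw[1]))

print(restored == to_serialize)  # True
```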
Using the serialize package you can load and read from multiple formats:
>>> from serialize import dump, load, register_class
>>> register_class(ureg.Quantity, ureg.Quantity.to_tuple, ureg.Quantity.from_tuple)
>>> dump(duration, 'output.yaml')
|
{}
|
Topics of the workshop are:
• Structured matrix analysis including (but not limited to) Toeplitz, Hankel, Vandermonde, banded, semiseparable, Cauchy, Hessenberg, mosaic, block, multilevel matrices and the theoretical and applicative problems from which they are originated (structured problems);
• Applications involving structured matrices including (but not limited to) interpolation, integral and differential equations, least squares and regularization, polynomial computations, matrix equations, control theory, queueing theory and Markov chains, image and signal processing;
• Design and analysis of algorithms for the solution of structured problems.
The aims of the meeting are:
• presenting recent results on theory, algorithms and applications concerning structured problems in numerical linear algebra and matrix theory;
• reviewing and discussing methodologies and the related algorithmic analysis;
• improving collaborations between theoretical and applied research;
• tracing the state-of-the-art together with the main directions of future research;
• fostering contacts between PhD students, postdocs and young scholars and the most advanced research groups;
• increasing the collaboration between European research groups.
Papers with the results presented at the meeting will be published in a volume of the "Springer INdAM Series", indexed in Scopus. (Deadline: January 31, 2018).
The workshop will be held in Cortona, Italy, at Il Palazzone, a sixteenth-century monumental palace also known as Villa Passerini, within walking distance of the center (see map below).
Cortona can be reached:
• by plane. Land at the airports of Rome (ROM), Florence (FLR) or Perugia (PEG) and continue the journey by train.
• by train. Stop at one of the two railway stations: Terontola-Cortona (about 15 km from the village) or Camucia-Cortona (about 5 km). Duration of the trip: from Rome, 2h30'; from Florence, 1h30'; from Perugia, 0h35'; from Milan, 4h30'. Check timetables.
The transportation from the railway station to Cortona will be organized locally.
• by car. Get directions from Rome Fiumicino, from Firenze, from Perugia.
Lidia Aceto, Università di Pisa, Italy
Alessandra Aimi, Università di Parma, Italy
Francesca Arrigo, University of Strathclyde, Glasgow, UK
Giovanni Barbarino, Scuola Normale Superiore, Pisa, Italy
Michele Benzi, Emory University, Atlanta, USA
Luca Bergamaschi, Università di Padova, Italy
Daniele Bertaccini, Università di Roma Tor Vergata, Italy
Davide Bianchi, Università dell'Insubria, Como, Italy
Dario A. Bini, Università di Pisa, Italy
Paola Boito, Université de Limoges and LIP-ENS de Lyon, France
Matthias Bolten, University of Wuppertal, Germany
Claude Brezinski, Université de Lille, France
Raymond Chan, The Chinese University of Hong Kong, Hong Kong
Antonio Cicone, Università dell'Aquila, Italy
Stefano Cipolla, Università di Roma Tor Vergata, Italy
Anna Concas, Università di Cagliari, Italy
Fernando De Terán, Universidad Carlos III de Madrid, Spain
Gianna M. Del Corso, Università di Pisa, Italy
Pietro Dell'Acqua, Università dell'Aquila, Italy
Fabio Di Benedetto, Università di Genova, Italy
Carmine Di Fiore, Università di Roma Tor Vergata, Italy
Marco Donatelli, Università dell'Insubria, Como, Italy
Froilán M. Dopico, Universidad Carlos III de Madrid, Spain.
Fabio Durastante, Università dell'Insubria, Como, Italy
Yuli Eidelman, Tel Aviv University, Israel
Claudio Estatico, Università di Genova, Italy
Massimiliano Fasi, The University of Manchester, UK
Dario Fasino, Università di Udine, Italy
Caterina Fenu, Università di Cagliari, Italy
Carlo Garoni, Università della Svizzera italiana, Lugano, Switzerland
Luca Gemignani, Università di Pisa, Italy
Nicola Guglielmi, Università dell'Aquila, Italy
Thomas Huckle, Technische Universität München, Germany
Bruno Iannazzo, Università di Perugia, Italy
Carlo Janna, Università di Padova, Italy
Thomas Kailath, Stanford University, USA
Daniel Kressner, EPFL, Lausanne, Switzerland
Carla Manni, Università di Roma Tor Vergata, Italy
Stefano Massei, EPFL, Lausanne, Switzerland
Nicola Mastronardi, CNR Bari, Italy
Mariarosa Mazza, Max Planck Institute, München, Germany
Beatrice Meini, Università di Pisa, Italy
Marilena Mitrouli, National and Kapodistrian University of Athens, Greece
Vanni Noferini, University of Essex, UK
Silvia Noschese, Università di Roma La Sapienza, Italy
Dimitrios Noutsos, University of Ioannina, Greece
Davide Palitta, Università di Bologna, Italy
Victor Y. Pan, City University of New York, USA
Bor Plestenjak, University of Ljubljana, Slovenia
Federico Poloni, Università di Pisa, Italy
Daniel Potts, University of Chemnitz, Germany
Stefano Pozza, Charles University, Prague, Czech Republic
Michela Redivo Zaglia, Università di Padova, Italy
Lothar Reichel, Kent State University, Kent, USA
Leonardo Robol, ISTI-CNR, Pisa, Italy
Paraskevi Roupa, University of Athens Panepistimiopolis, Greece
Stefano Serra Capizzano, Università dell'Insubria, Como, Italy
Debora Sesana, Università dell'Insubria, Como, Italy
Valeria Simoncini, Università di Bologna, Italy
Hendrik Speleers, Università di Roma Tor Vergata, Italy
Daniel Szyld, Temple University, Philadelphia, USA
Francoise Tisseur, University of Manchester, UK
Francesco Tudisco, University of Strathclyde, Glasgow, UK
Eugene Tyrtyshnikov, Russian Academy of Sciences, Russia
Marc Van Barel, KU Leuven, Leuven, Belgium
Cornelis Van Der Mee, Università di Cagliari, Italy
Raf Vandebril, KU Leuven, Leuven, Belgium
Paris Vassalos, Athens University of Economics and Business, Greece
Joab Winkler, University of Sheffield, UK
The book of abstracts is available here
Monday Tuesday Wednesday Thursday Friday 08.45-09.00 Opening 09.00-09.30 Chan A Nuclear-norm Model for Multi-Frame Super-resolution Reconstruction In this talk, we give a new variational approach to obtain super-resolution images from multiple low-resolution image frames extracted from video clips. First the displacement between the low-resolution frames and the reference frame are computed by an optical flow algorithm. The displacement matrix is then decomposed into product of two matrices corresponding to the integer and fractional displacement matrices respectively. The integer displacement matrices give rise to a non-convex low-rank prior which is then convexified to give the nuclear-norm regularization term. By adding a standard 2-norm data fidelity term to it, we obtain our proposed nuclear-norm model. Alternating direction method of multipliers can then be used to solve the model. Comparison of our method with other models on synthetic and real video clips shows that our resulting images are more accurate with less artifacts. It also provides much finer and discernable details. Benzi Iterative Methods for Linear Systems with Double Saddle Point Structure We consider several iterative methods for solving a class of linear systems with double saddle point structure. Both Uzawa-type stationary methods and block preconditioned Krylov subspace methods are discussed. We present convergence results and eigenvalue bounds together with illustrative numerical experiments using test problems from two different applications: a mixed-hybrid discretization of the potential fluid flow problem, and finite element modeling of liquid crystal directors. Reichel Some structured matrix problems in Gauss-type quadrature It is well-known that Gauss quadrature rules can be computed by evaluating the integrand at a symmetric tridiagonal matrix. 
This talk describes several generalizations that can be applied to approximate certain matrix functions and estimate the error in the approximation so determined. These generalizations involve symmetric and nonsymmetric tridiagonal and block tridiagonal matrices. Tisseur The Structured Condition Number of a Differentiable Map Between Matrix Manifolds We study the structured condition number of differentiable maps between smooth matrix manifolds and present algorithms to compute it. We analyze automorphism groups, and Lie and Jordan algebras associated with a scalar product as a special case of smooth matrix manifolds (these include orthogonal matrices, symplectic matrices, Hamiltonian matrices, ...). For such manifolds, we derive a lower bound on the structured condition number that is cheaper to compute than the structured condition number. We provide numerical comparisons between the structured and unstructured condition numbers for the principal matrix logarithm and principal matrix square root of matrices in automorphism groups as well as for maps between matrices in automorphism groups and their structured polar decomposition and structured matrix sign decomposition. We show that our lower bound can be used as a good estimate for the structured condition number when the matrix argument is well-conditioned. We show that the structured and unstructured condition numbers can differ by several orders of magnitude, thus motivating the development of algorithms preserving structure. This is joint work with Bahar Arslan and Vanni Noferini. Tyrtyshnikov Multidimensional matrices, optimization and quadratic kernels We discuss the links between multidimensional matrices and some optimization problems, especially the so called singular problems where the classical Newton method cannot be applied. 
In particular, we show how some algorithms with quadratic convergence can be constructed in the singular case and how the constructions are related with the triviality of the quadratic kernel of some matrix subspaces. 09.30-10.00 Estatico Numerical linear algebra and regularization in Banach spaces We consider the functional linear equation $Af=g$ arising in image restoration, characterized by a structured and ill-posed linear operator $A:X \longrightarrow Y$ between two Banach spaces $X$ and $Y$. In this talk, the discretized linear system is solved in the framework of the regularization theory in Banach spaces, where the variational approach based on non-linear iterative minimization of the residual in the dual spaces $X^*$ and $Y^*$ is used. Within this framework, some relationships between classical algorithms of numerical linear algebra and the more recent non-linear iterative regularization algorithms in Banach spaces are analyzed. In particular, the well-known class of iterative projection algorithms (including Cimmino, SIRT and DROP methods), as well as the class of (conjugate) gradient methods will be investigated within the context of regularization in Banach spaces. Tudisco Modularity matrices and community detection under the degree-corrected stochastic block model A community in a network $G$ is roughly defined as a set of nodes being highly interconnected and poorly linked with the rest of $G$. Conversely, an anti-community is a node set being loosely connected internally but having many external connections. We propose a spectral method based on the extremal eigenspaces of generalized modularity matrices to simultaneously look for communities and anti-communities within a given graph or network [2]. We provide theoretical and experimental evidence in support of the proposed method. Moreover, we discuss the behavior of the proposed strategy under the degree-corrected stochastic block model (DC-SBM) [1]. 
The latter is a random graph model which generates graphs with prescribed clustering structure and heterogeneous degree distributions. We present a complete spectral analysis of the expected modularity matrix $M$ of a DC-SBM random graph. In particular, we show that $M$ can be written as the Khatri-Rao product of a small matrix associated to the clustering structure of the model and a rank-one, block structured matrix related to the degree distribution. [1] K. Chaudhuri, F. Chung, A. Tsiatas. Spectral clustering of graphs with general degrees in the extended planted partition model. COLT 2012, Journal of Machine Learning Research, 23 (2012), 35.1-35.23, [2] D. Fasino, F. Tudisco. A modularity based spectral method for simultaneous community and anti-community detection. Linear Algebra Appl., 2017, to appear. Guglielmi On some matrix nearness problems in matrix theory Matrix nearness problems arise naturally in matrix theory when measuring robustness of some property of a given matrix or when aiming to correct a matrix which does not present a desired property. We discuss ODE-based methods to deal with this kind of problems with application to stability, controllability, common divisibility of polynomials and other properties. In all cases we are particularly interested to the structure preservation of the matrix, which is usually a relevant issue. Numerical examples will be shown in order to illustrate the behavior of the proposed methodology. The talk is inspired by joint works with Christian Lubich (Universitaet Tuebingen). Noferini Eigenvalue avoidance for structured matrices depending on a real parameter Let $H(t)$ be a Hermitian matrix that depends continuously on the real parameter $t$. If the eigenvalues of $H(t)$ are plotted against $t$, it can typically be observed that the trajectories of the eigenvalue functions appear to collide; however, they undergo a last-minute repulsive effect, thus avoiding intersections. 
Although it is of course possible to construct examples that do not follow this paradigm, this phenomenon happens with probability 1 in the set of $n\times n$ Hermitian matrices whose entries are continuous functions. Eigenvalue avoidance of self-adjoint operators was first observed experimentally by quantum physicists in the early XX century. In 1929, J. von Neumann and E. P. Wigner were the first to explain eigenvalue avoidance rigorously, proving via geometric arguments that it is generic. I plan to first briefly expose the von Neumann-Wigner approach. Then, I will present a number of new results, extending the theory to various classes of structured matrices. I would like to discuss for which classes eigenvalue avoidance occurs, for which classes it does not, and for which classes the question is still open (to the date in which this abstract is being written). Vandebril Eigenvalues and eigenvectors of matrix polynomials In the last decade matrix polynomials have been investigated with the primary focus on adequate linearizations and good scaling techniques for computing their eigenvalues and eigenvectors. We propose a new method for computing a factored Schur form of the associated companion pencil. The algorithm has a quadratic cost in the degree of the polynomial and a cubic one in the size of the coefficient matrices. Also the eigenvectors can be computed at the same cost. The algorithm is a variant of Francis's implicitly shifted QR algorithm applied on the companion pencil. A preprocessing unitary equivalence is executed on the matrix polynomial to simultaneously bring the leading matrix coefficient and the constant matrix term to triangular form before forming the companion pencil. The resulting structure allows us to stably factor each matrix of the pencil as a product of $k$ matrices of unitary-plus-rank-one form, admitting cheap and numerically reliable storage. The problem is then solved as a product core chasing eigenvalue problem. 
The algorithm is normwise backward stable. Computing the eigenvectors via reordering the Schur form is discussed as well. 10.00-10.30 Bianchi Nonstationary preconditioned iterative methods with structure preserving property for image deblurring Image deblurring is the process of reconstructing an approximation of an image from blurred and noisy measurements. By assuming that the point spread function (PSF) is spatially invariant, the observed image $g(x,y)$ is related to the true image $f(x,y)$ via the integral equation $g(x,y)=\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} h(x-x',y-y')f(x',y')dx'dy'+\eta(x,y)\qquad\qquad\qquad (1)$ for every $(x,y)\in\Omega\subset \mathbb R^2$ and where $\eta(x,y)$ is the noise. By collocation of the previous integral equation on a uniform grid, we obtain the grayscale images of the observed image, of the true image, and of the PSF, denoted by $G$, $F$, and $H$, respectively. Since collected images are available only in a finite region, the field of view (FOV), the measured intensities near the boundary are affected by data outside the FOV. Given an $n \times n$ observed image $G$ (for the sake of simplicity we assume square images), and a $p \times p$ PSF with $p \le n$, then $F$ is $m \times m$ with $m = n + p - 1$. Denoting by $g$ and $f$ the stack ordered vectors corresponding to $G$ and $F$, the discretization of (1) leads to the under-determined linear system $\mathbf{g}=A\mathbf{f}+\boldsymbol{\eta}, \qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad (2)$ where the matrix $A$ is of size $n^2 \times m^2$. When imposing proper Boundary Conditions (BCs), the matrix $A$ becomes square $n^2 \times n^2$ and in some cases, depending on the BCs and the symmetry of the PSF, it can be diagonalized by discrete trigonometric transforms. For example, the matrix $A$ is block circulant with circulant blocks (BCCB), and it is diagonalizable by the Discrete Fourier Transform (DFT), when periodic BCs are imposed.
Due to the ill-posedness of (1), A is severely ill-conditioned and may be singular. In such cases, linear systems of equations (2) are commonly referred to as linear discrete ill-posed problems. Therefore a good approximation of f cannot be obtained from the algebraic solution (e.g., the least-squares solution) of (2), but regularization methods are required. The basic idea of regularization is to replace the original ill-conditioned problem with a nearby well-conditioned problem, whose solution approximates the true solution. Thresholding iterative methods have recently been successfully applied to image deblurring problems. We investigate the modified linearized Bregman algorithm (MLBA) used in image deblurring problems, with a proper treatment of the boundary artifacts. The fast convergence of the MLBA depends on a regularizing preconditioner that could be computationally expensive and hence is usually chosen as a block circulant with circulant blocks (BCCB) matrix, diagonalized by the discrete Fourier transform. We show that the standard approach based on the BCCB preconditioner may provide low-quality restored images. Indeed, in order to get an effective preconditioner it is crucial to preserve the structure of the coefficient matrix, which depends on the BCs, in image deblurring problems. Motivated by a recent nonstationary preconditioned iteration [4], we propose a new algorithm that combines such a method with the MLBA and is structure preserving. References. [1] Y. Cai, M. Donatelli, D. Bianchi and T.Z. Huang, Regularization preconditioners for frame-based image deblurring with reduced boundary artifacts, SIAM J. Sci. Comput., 38(1) (2016) 164--189. [2] J. F. Cai, S. Osher, and Z. Shen, Linearized Bregman iterations for frame-based image deblurring, SIAM J. Imaging Sci., 2--1 (2009), pp. 226--252. [3] P. Dell'Acqua, M. Donatelli, C. Estatico and M. Mazza, Structure preserving preconditioners for image deblurring, J. Sci. Comput., 72(1) (2017) 147--171. [4] M.
Donatelli and M. Hanke, Fast nonstationary preconditioned iterative methods for ill-posed problems, with application to image deblurring, Inverse Problems, 29 (2013) 095008. Arrigo Non-backtracking walk centrality measures for networks The task of community detection and node centrality measurement can both be addressed by quantifying traversals around a network: either random or deterministic. In particular, the concept of a walk, which allows nodes and edges to be revisited, forms the basis of the Katz centrality and the total communicability, whose limiting behaviour corresponds to eigenvector centrality. In this talk we describe how to extend certain walk-based centrality measures to their non-backtracking analogue, where walks that include at least one back-and-forth flip between a pair of nodes are not allowed. We further show that these new measures may be described in terms of standard walks taking place in certain multilayer networks, and can thus be computed by working on 3x3 or 2x2 block matrices. Bergamaschi Spectral Low-rank Preconditioners for Large Linear Systems and Eigenvalue Problems Fast solution of large and sparse SPD linear systems by Krylov subspace methods is usually prevented by the presence of eigenvalues near zero of the coefficient matrix $A$. This is particularly true when computing the smallest or interior eigenvalues where, using Lanczos' or the Jacobi-Davidson approach, a system like $(A - \sigma I)x = r$ has to be repeatedly solved, with $\sigma$ close to the wanted eigenvalue. We propose and discuss how cost-effective spectral information on the coefficient matrix can be used to construct a spectral preconditioner, i.e., a low-rank modification of a given approximate inverse of $A$, $P_0 \approx A^{-1}$. The spectral preconditioner usually moves away from zero the smallest eigenvalues of the preconditioned matrices with a consequent, sometimes dramatic, reducing of the condition number and speeding up of the iterative process. 
Given a set of very roughly approximated smallest eigenvalues of $A:$ $\lambda_1 , \ldots , \lambda_p$ and corresponding approximate eigenvectors $v_1 , \ldots , v_p$, and defining the matrices $\Lambda_p =\hbox{diag}(\lambda_1 , \ldots, \lambda_p)$ and $V_p = [v_1 , . . . , v_p ]$ we investigate the properties of two classes of preconditioners ([3, 4]), among the others, defined as $\begin{split} &P= P_0 + V_p \Lambda_p^{-1}V_p^T \\ &P= V_p\Lambda_p^{-1}V_p^T + (I - V_p V_p^T )P_0 (I - V_p V_p T ) \end{split}$ We will provide an extensive testing of such preconditioners which will be shown to provide an important acceleration of iterative eigensolvers (see e.g. [2]) as well as of very ill-conditioned sequences of linear systems [1]. References. [1] L. Bergamaschi, E. Facca, A. Martínez, and M. Putti, Spectral preconditioners for the efficient numerical solution of a continuous branched transport model, Proc. Cedya + CMA Conf. Cartagena, Spain, 2017. [2] L. Bergamaschi and A. Martínez, Two-stage spectral preconditioners for iterative eigensolvers, Numer. Lin. Alg. Appl., 24 (2017), pp. 1--14. [3] B. Carpentieri, I. S. Duff, and L. Giraud, A class of spectral two-level preconditioners, SIAM J. Sci. Comput., 25 (2003), pp. 749--765 (electronic). [4] A. Martínez, Tuned preconditioners for iterative SPD eigensolvers, Numer. Lin. Alg. Appl., 23 (2016), 427--443. Noschese Computing Structured Pseudospectrum Approximations In many applications it is important to know location and sensitivity to perturbations of eigenvalues of matrices or polynomial matrices. The sensitivity commonly is described with the aid of condition numbers or by computing pseudospectra; see, e.g., [1, 2]. However, the computation of pseudospectra is very demanding computationally. We propose a new approach to computing approximations of pseudospectra of both matrices [3] and matrix polynomials [4] by using rank-one or projected rank-one perturbations that respect the given structure, if any. 
These perturbations are inspired by Wilkinson's analysis of eigenvalue sensitivity [5]. Numerical examples show this approach to perform much better than methods based on random rank-one perturbations both for the approximation of structured and unstructured (polynomial) pseudospectra. References. [1] L. N. Trefethen and M. Embree, Spectra and Pseudospectra. Princeton University Press, Princeton, 2005. [2] N. J. Higham and F. Tisseur, More on pseudospectra for polynomial eigenvalue problems and applications in control theory. Linear Algebra Appl., 351 (2002), pp. 435-453. [3] S. Noschese and L. Reichel, Approximated structured pseudospectra. Numer. Linear Algebra Appl., 24 (2017) e2082. [4] S. Noschese and L. Reichel, Computing Unstructured and Structured Polynomial Pseudospectrum Approximations. Submitted. [5] J. H. Wilkinson, The Algebraic Eigenvalue Problem. Oxford University Press, 1965. Robol Backward error analysis for core-chasing algorithms Fast methods that exploit the structure of the underlying problem are typically attractive because of their performances and, sometimes, because of the reduced storage cost. However, the improved performance can come at the cost of stability, and it is often challenging to prove satisfying backward stability results. In this talk, we focus on core-chasing algorithms, that allow the efficient development of methods that exploit unitary plus rank $1$ or unitary plus rank $k$ structure. In particular, we analyze the case of polynomial rootfinding, that relies on the computation of the eigenvalues of a matrix with unitary plus rank $1$ structure (or a similarly structured pencil), and we will show that this approach can be backward stable on the polynomial coefficients. We discuss the differences between QZ and QR, and we prove that -- using structured core-chasing methods -- the backward error is also structured. 
This leads to interesting results, and shows that the QR iteration is backward stable on the polynomial coefficients in unexpected situations. 10.30-11.00 Coffee Break Coffee BreakPosters Coffee BreakPosters Coffee BreakPosters Coffee Break 11.00-11.30 Aimi On the Energetic Galerkin BEM and its algebraic reformulation The Energetic Galerkin Boundary Element Method (BEM) is a discretization technique for the numerical solution of wave propagation problems, introduced in [1] and applied in the last decade to scalar wave propagation inside bounded domains or outside bounded obstacles, in 1D, 2D and 3D space dimension. The differential initial-boundary value problem at hand is converted into a space-time Boundary Integral Equation (BIE), which is then written in weak form through energy considerations and discretized by a Galerkin approach. The talk will focus on the extension of this method in the context of 2D soft and hard scattering of damped waves, taking into account both viscous and material damping. Details will be given on the algebraic reformulation of Energetic Galerkin BEM, i.e., on the so-called time-marching procedure that gives rise to a linear system whose matrix has a Toeplitz lower triangular block structure. Numerical results confirm accuracy and stability of the proposed technique, already proved for the numerical treatment of undamped wave propagation problems in several space dimensions [2-3] and for the 1D damped case [4-5]. References. [1] Aimi, A. and Diligenti, M., A new space-time energetic formulation for wave propagation analysis in layered media by BEMs, Int. J. Numer. Meth. Engrg. 75, 1102--1132 (2008). [2] Aimi, A., Diligenti M. and Panizzi S., Energetic Galerkin BEM for wave propagation Neumann exterior problems, Computer Model. Engrg. Sciences 1(1), 1--33 (2009). [3] Aimi, A., Diligenti, M., Frangi, A. and Guardasoni, C., Neumann exterior wave propagation problems: Computational aspects of 3D energetic Galerkin BEM, Comput. Mech. 
51(4), 475--493 (2013). [4] Aimi, A. and Panizzi, S., BEM-FEM coupling for the 1D Klein-Gordon equation, Numer. Methods Partial Diff. Equations 30(6), 2042--2082 (2014). [5] Aimi, A., Diligenti, M. and Guardasoni, C., Energetic BEM-FEM coupling for the numerical solution of the damped wave equation, Adv. Comput. Math. 43, 627-651 (2017). Van Barel Matrices in polynomial system solving This talk is about the role of matrix computations in the problem of solving systems of polynomial equations. Let $k$ be an algebraically closed field and let $p_1 = p_2 = ... = p_n = 0$ define such a system in $k^n$: $p_i \in k[x_1,...,x_n]$. Let $I$ be the ideal generated by these polynomials. We are interested in the case where the system has finitely many isolated solutions in $k^n$. It is a well-known fact that this happens if and only if the quotient ring $k[x_1,..., x_n]/I$ is finite-dimensional as a $k$-vector space. The multiplication endomorphisms of the quotient algebra provide a natural linear algebra formulation of the root finding problem. Namely, the eigenstructure of the multiplication matrices reveals the solutions of the system. These multiplication matrices can be calculated from the coefficients of the $p_i$, for example by using Groebner bases. The computations make an implicit choice of basis for $k[x_1,...,x_n]/I$, which from a numerical point of view is not a very good choice. Significant improvement can be made by using elementary numerical linear algebra techniques on a Macaulay-type matrix. In this talk we will present this technique and show how the resulting method can handle challenging systems. Szyld Singular values of certain almost block Toeplitz matrices We analyze a certain class of "almost" block Toeplitz matrices arising in the analysis of Optimized Schwarz methods for the numerical solution of certain PDEs. The aim is to show that these matrices are contracting.
Mastronardi Revisiting the perfect shift strategy in the Implicitly Shifted QR algorithm In this talk we revisit the Implicit-Q Theorem and analyze the problem of performing a QR-step on an unreduced Hessenberg matrix $H$ when we know an "exact" eigenvalue $\lambda_0$ of $H$. Under exact arithmetic, this eigenvalue will appear on the diagonal of the transformed Hessenberg matrix $H_1$ and will be decoupled from the remaining part of the Hessenberg matrix, thus resulting in a deflation. But it is well known that in finite precision arithmetic the so-called perfect shift could get blurred and the eigenvalue $\lambda_0$ cannot be deflated and/or is perturbed significantly. In this talk we develop a new strategy for computing such a QR step so that the deflation is indeed successful. The method is based on the preliminary computation of the corresponding eigenvector $x$ such that the residual $(H-\lambda_0 I)x$ is sufficiently small. The eigenvector is then transformed to a unit vector by a sequence of Givens transformations, which are also performed on the Hessenberg matrix. Such a QR step is the basic ingredient of the QR method to compute the Schur form, and hence the eigenvalues of an arbitrary matrix. But it also is a crucial step in the reduction of a general matrix $A$ to its Weyr form. It is in fact this last problem that led to the development of this new technique. Brezinski Shanks sequence transformations and their links to linear algebra methods In this talk we present a general framework for Shanks transformations of sequences of elements in a vector space.
It is shown that the Minimal Polynomial Extrapolation (MPE), the Modified Minimal Polynomial Extrapolation (MMPE), the Reduced Rank Extrapolation (RRE), the Vector Epsilon Algorithm (VEA), the Topological Epsilon Algorithm (TEA), and Anderson Acceleration (AA), which are standard general techniques designed for accelerating arbitrary sequences and/or solving systems of linear and nonlinear equations, all fall into this framework. Their properties and their connections with quasi-Newton and Broyden methods are studied. Then, we exploit this framework to compare these methods. In the linear case, it is known that AA and GMRES are 'essentially' equivalent in a certain sense while GMRES and RRE are mathematically equivalent. The talk also discusses the connection between AA, the RRE, the MPE, and other methods in the nonlinear case. 11.30-12.00 Mazza Spectral analysis and spectral symbol for pure and stabilized 2D curl-curl operator with applications to the related iterative solutions In this work, we focus on large and highly ill-conditioned linear systems of equations arising from various formulations of the Maxwell equations appearing, e.g., in Time Harmonic Maxwell as well as in MagnetoHydroDynamics. First, we consider a compatible B-Spline discretization based on a discrete De Rham sequence of the 2D curl-curl operator stabilized with a zero-order term, and we show that the sequence of the coefficient matrices belongs to the Generalized Locally Toeplitz class. Moreover, looking at the entries of the coefficient matrix, we compute the symbol describing its asymptotic eigenvalue distribution, as the matrix size diverges. Thanks to this spectral information we show that the coefficient matrix is affected by three severe sources of ill-conditioning related to the relevant parameters: the matrix size, the spline degree and the stabilization parameter.
As a consequence, when used for solving the associated linear systems, Conjugate Gradient type methods are extremely slow and their convergence rate is not robust with respect to the parameters. On this basis, we replace the zero-order stabilization with a divergence-type one and we spectrally analyze the corresponding B-spline discretization matrix-sequence. The retrieved spectral information is then used to design a 2D vector extension of a multi-iterative approach already used in the literature for the scalar Laplacian operator. Finally, a variety of numerical tests and some open problems are discussed. Plestenjak Minimal determinantal representations of bivariate polynomials It is known since Dixon's 1902 paper that every bivariate polynomial $p$ of degree $n$ admits a determinantal representation $p(x,y)=\det(A+xB+yC)$ with $n\times n$ symmetric matrices. However, the construction of such matrices is far from trivial and up to now there have been no efficient numerical algorithms, even if we do not insist on matrices being symmetric. We present the first numerical construction that returns a determinantal representation with $n\times n$ matrices for a square-free bivariate polynomial of degree $n$, which, with the exception of the symmetry, agrees with Dixon's result. For a non square-free polynomial one can combine it with a square-free factorization to obtain a representation of order $n$. Our motivation is a numerical method for solving systems of bivariate polynomials as two-parameter eigenvalue problems. Symmetry is not important for this particular application. The resulting numerical method for the roots of a system of two bivariate polynomials is competitive with some existing methods for polynomials of small degree. Huckle Preconditioning for sparse and structured matrices We consider linear systems of equations related to sparse matrices $A$. Incomplete LU decomposition usually leads to a good preconditioner. 
Here, we will present iterative methods for computing ILU and MILU that are also easy to parallelize. The convergence of these iterative methods depends strongly on the condition number of $A$. Furthermore, the condition numbers of $L$ and $U$ may be large even for an efficient preconditioner. So fast-converging iterative ILU and MILU methods are necessary in this case. For solving the resulting sparse triangular systems we use the Jacobi method with an incomplete sparse approximate inverse preconditioner. Furthermore, the Jacobi iteration can be accelerated by using the Euler expansion. In view of the triangular structure of $L$ and $U$, the accelerated Jacobi iteration is guaranteed to converge fast. Overall, this results in efficient, parallel methods for solving sparse linear systems. As a special case we discuss the presented methods for structured matrices via the symbol. Noutsos Extensions of M-matrices and their properties concerning the Schur Complement It is well known that $M-$matrices are the square matrices of the form $A=sI-B$, where $B$ is entry-wise nonnegative and $s \geq \rho(B)$. This class of matrices has many applications in Ordinary and Partial Differential Equations, Integral equations, Economics, Linear complementarity problems, Population Dynamics, Markov chains, Theory of Games and Control Theory. Some extended classes of $M-$matrices, with many applications in the same areas of Science, have been proposed in the last decades. The class of ${M_v}-$matrices contains the matrices of the form $A=sI-B$, where $B$ is eventually nonnegative and $s \geq \rho(B)$. The class of Generalized $M-$matrices ($GM-$matrices) is defined following the same reasoning except that the matrix $B$ has the Perron-Frobenius property. Very useful properties concerning $M-$matrices, Inverse $M-$matrices and their Schur complements are known.
In this work we discuss extensions of such properties concerning ${M_v}-$ and $GM-$matrices, Inverse ${M_v}-$ and Inverse $GM-$matrices with respect to their Schur complements. Numerical examples are shown to confirm the obtained theoretical results. Redivo-Zaglia Shanks sequence transformations and the $\varepsilon$-algorithms. Theory and applications. Let $(\mathbf S_n)$ be a sequence of elements of a vector space $E$ over a field $\mathbb K$ ($\mathbb R$ or $\mathbb C$) which converges to a limit $\mathbf S$. If the convergence is slow, it can be transformed, by a {\it sequence transformation}, into a new sequence or a set of new sequences which, under some assumptions, converges faster to the same limit. When $E$ is $\mathbb R$ or $\mathbb C$, a well-known such transformation is due to Shanks, and it can be implemented by the scalar $\varepsilon$--algorithm of Wynn. This transformation was generalized in several different ways to sequences of elements of a vector space $E$. When $E$ is $\mathbb R^p$ or $\mathbb C^p$, this generalization leads to the {\it Reduced Rank Extrapolation} (RRE) and to the {\it Minimal Polynomial Extrapolation} (MPE). For a general vector space $E$, the {\it Modified Minimal Polynomial Extrapolation} (MMPE) and the {\it topological Shanks transformations} are obtained. The interest of these last two generalizations is that they can treat sequences of matrices or even of tensors, and that they can be recursively implemented, the first one by the $S\beta$ algorithm of Jbilou and the second ones by the topological $\varepsilon$--algorithms of Brezinski [1]. However, the topological $\varepsilon$--algorithms are quite complicated since they possess two rules, they require the storage of many elements of $E$, and the duality product with an element $\mathbf y$ is recursively used in their rules. Recently, simplified versions of these algorithms were obtained and called the {\it simplified topological $\varepsilon$--algorithms} [2].
They have only one recursive rule instead of two, they require less storage than the initial algorithms, elements of the dual vector space $E^*$ of $E$ no longer have to be used in the recursive rules but only in their initializations, the numerical stability is improved, and it was possible to prove theoretical results on them. In this talk, we present the {\it simplified topological $\varepsilon$--algorithms} and the Matlab package {\tt EPSfun} [3], available in the public domain library {\it netlib}, for implementing and using them. Then, we give applications to the solution of linear and nonlinear systems of vector and matrix equations, to the computation of matrix functions, to the solution of nonlinear Fredholm integral equations of the second kind, and to the computation of tensor $l^p$-eigenvalues and eigenvectors. References. [1] C. Brezinski, Généralisation de la transformation de Shanks, de la table de Padé et de l'$\varepsilon$--algorithme, Calcolo, 12 (1975) 317--360. [2] C. Brezinski, M. Redivo--Zaglia, The simplified topological $\varepsilon$--algorithms for accelerating sequences in a vector space, SIAM J. Sci. Comput., 36 (2014) A2227--A2247. [3] C. Brezinski, M. Redivo--Zaglia, The simplified topological $\varepsilon$-algorithms: software and applications, Numer. Algorithms 74 (2017), 1237--1260. 12.00-12.30 Vassalos A general tool for determining asymptotic spectral distribution of Hermitian matrix sequences The approximation theory for sequences of matrices with increasing dimension is a topic having both theoretical and practical interest. In this talk, we consider sequences of Hermitian matrices with increasing dimension, and we provide a general tool for determining the asymptotic spectral distribution of a 'difficult' sequence $\{A_n\}_n$ from the one of 'simpler' sequences $\{B_{n,m}\}_n$ that approximate $\{A_n\}_n$ when $m \rightarrow \infty$.
The tool is based on the notion of an approximating class of sequences (a.c.s.), and it is applied in a more general setting. As an application we illustrate how it can be used in order to derive the famous Szegő theorem on the spectral distribution of Toeplitz matrices. This is joint work with C. Garoni and S. Serra Capizzano. De Terán Polynomial root-finding using companion matrices The use of companion matrices to compute the roots of scalar polynomials is a standard approach (it is, for instance, the one followed by the command roots in MATLAB). It consists in computing the roots of a scalar polynomial as the eigenvalues of a companion matrix. In this talk, I will review several numerical and theoretical issues on this topic. I will pay special attention to the backward stability of solving the polynomial root-finding problem using companion matrices. More precisely, to this question: Even if the computed eigenvalues of the companion matrix are the eigenvalues of a nearby matrix, does this guarantee that they are the roots of a nearby polynomial? Usually, the companion matrix approach focuses on monic polynomials, since one can always divide by the leading coefficient, if necessary. But, is it enough for the backward stability issue to focus on monic polynomials? I will also pay attention to some other (more theoretical) questions like: How many companion matrices are there and what do they look like? Potts High dimensional approximation with trigonometric polynomials In this talk, we present algorithms for the approximation of multivariate functions by trigonometric polynomials. The approximation is based on sampling of multivariate functions on rank-1 lattices. To this end, we study the approximation of functions in periodic Sobolev spaces of dominating mixed smoothness.
The proposed algorithm is based mainly on a one-dimensional fast Fourier transform, and the arithmetic complexity of the algorithm depends only on the cardinality of the support of the trigonometric polynomial in the frequency domain. Therefore, we investigate trigonometric polynomials with frequencies supported on hyperbolic crosses and energy-based hyperbolic crosses in more detail. Furthermore, we present algorithms where the support of the trigonometric polynomial is unknown. Fasi Computing the action of the weighted geometric mean of two large-scale matrices on a vector We consider two classes of methods for computing the product of the weighted geometric mean of two large-scale positive definite matrices and a vector. We derive algorithms based on quadrature formulae for the matrix $p$th root and on the Krylov subspace, and compare these approaches in terms of convergence speed and execution time. By exploiting an algebraic relation between the weighted geometric mean and its inverse, we show how these methods can be used to efficiently solve large and sparse linear systems whose coefficient matrix is a weighted geometric mean. Del Corso An implicit QR method for unitary plus low rank matrices In this talk we present an implicit QR method for the approximation of the eigenvalues of unitary plus low rank matrices. Block companion linearizations of matrix polynomials are, perhaps, the most relevant cases of matrices of this form. The problem of computing the roots of a scalar polynomial has received a lot of attention in the last decade [2, 3, 4, 5] and recently the block companion case has been considered by Aurentz and others [1]. The authors gave an implicit QZ method whose cost is $O(d^2 k^3)$, $d$ being the degree of the matrix polynomial, and $k$ the size of the matrices of the polynomial. This bound is claimed by the authors to be the best one can achieve using the QR or QZ approach.
In this talk we present a new implicit QR method for dealing with the eigenvalue problem in the more general case of unitary plus low rank matrices. The algorithm is based on the representation of the matrix as follows: $A = V(H + EZ^T)W^T$, where $V$ and $W$ are the product of $k$ upper Hessenberg matrices, $H$ is a Hessenberg matrix, $E = [I_k, O]^T$ and $Z$ is a full-rank $n \times k$ matrix. A single QR step can be implemented using Givens transformations, with a cost of $O(nk)$ per iteration, giving a total cost of $O(n^2 k)$. In the case of the block companion linearization we obtain the same asymptotic cost of the method proposed in [1], since $n = kd$. Numerical experiments show that the method is backward stable and that, when the polynomial is not too badly ill conditioned, we are able to recover the eigenvalues of the matrix with high accuracy. References. [1] J. Aurentz, T. Mach, L. Robol, R. Vandebril, and D. S. Watkins. Fast and backward stable computation of the eigenvalues and eigenvectors of matrix polynomials. {\em ArXiv e-prints}, November 2016. [2] Jared L. Aurentz, Raf Vandebril, and David S. Watkins. Fast computation of roots of Companion, Comrade, and related matrices. {\em BIT}, 54(1):85--111, 2014. [3] R. Bevilacqua, G. M. Del Corso, and L. Gemignani. Compression of unitary rank--structured matrices to CMV-like shape with an application to polynomial rootfinding. {\em J. Comput. Appl. Math.}, 278:326--335, 2015. [4] D. A. Bini, P. Boito, Y. Eidelman, L. Gemignani, and I. Gohberg. A fast implicit QR eigenvalue algorithm for companion matrices. {\em Linear Algebra Appl.}, 432(8):2006--2031, 2010. [5] S. Chandrasekaran, M. Gu, J. Xia, and J. Zhu. A fast QR algorithm for companion matrices. In {\em Recent advances in matrix and operator theory}, volume 179 of {\em Oper. Theory Adv. Appl.}, pages 111--143. Birkh\"auser, Basel, 2008.
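The shifted QR steps at the heart of Mastronardi's and Del Corso's abstracts above can be illustrated densely in a few lines. With a "perfect" shift $\lambda_0$ (an exact eigenvalue), one step $H_1 = RQ + \lambda_0 I$ deflates $\lambda_0$ in exact arithmetic; the toy matrix below is symmetric tridiagonal and well conditioned, so the deflation also succeeds numerically, whereas the blurring phenomenon discussed in the talks shows up on harder inputs. (A generic sketch, not the algorithm of either talk.)

```python
import numpy as np

# Symmetric tridiagonal (hence Hessenberg) test matrix: 2 on the diagonal,
# -1 on the sub/superdiagonals; eigenvalues are 2 - 2cos(k*pi/(n+1)).
n = 6
H = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lam = np.min(np.linalg.eigvalsh(H))      # an "exact" (to rounding) eigenvalue

# One QR step with the perfect shift: H1 = R Q + lam*I.
Q, R = np.linalg.qr(H - lam * np.eye(n))
H1 = R @ Q + lam * np.eye(n)

# The shift deflates: the last subdiagonal entry essentially vanishes and
# lam appears in the bottom-right corner.
assert abs(H1[-1, -2]) < 1e-8
assert abs(H1[-1, -1] - lam) < 1e-8
```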
12.30-14.30 Lunch Break 14.30-15.00 Barbarino Equivalence between measurable functions and GLT sequences The theory of Generalized Locally Toeplitz (GLT) sequences established a bridge between the space of matrix sequences and the space of measurable functions. Recently, it has been proved that the algebra of GLT sequences is actually isomorphic to a space of measurable functions, and that the notion of Approximating Class of Sequences (acs) is induced by a pseudometric linked to the convergence in measure. The latter is metrizable, and for a wide class of distances inducing this convergence it is possible to build a class of corresponding pseudometrics that provides the GLT algebra with a structure isometrically isomorphic to the measurable functions. These results are used to prove the completeness of the space of matrix sequences, and to provide new tools to test the convergence of matrix sequences through the notion of acs. Dopico Matrix polynomials with bounded rank and degree: generic eigenstructures and explicit descriptions Low rank perturbations of matrices, matrix pencils, and matrix polynomials appear naturally in many applications where just a few degrees of freedom of a complicated system are modified. As a consequence many papers have been published in the last 15 years on this type of problem for matrices and pencils, but just a few for matrix polynomials. A possible reason for this lack of references on low rank perturbations of matrix polynomials is that the set of matrix polynomials with bounded (low) fixed rank and fixed degree is not easy to describe when the rank is larger than one. The purpose of this talk is to describe such a set both in terms of its generic eigenstructures and in terms of products of two factors.
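The bridge between matrix sequences and measurable functions described in Barbarino's abstract above is already visible in the simplest Toeplitz case: for the symbol $f(\theta) = 2 - 2\cos\theta$, the eigenvalues of the $n \times n$ tridiagonal Toeplitz matrix $T_n(f)$ are exactly the samples $f(j\pi/(n+1))$, $j = 1, \dots, n$. A small NumPy check of this classical fact:

```python
import numpy as np

n = 100
# Tridiagonal Toeplitz matrix T_n(f) generated by the symbol f(theta) = 2 - 2cos(theta)
T = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
eigs = np.sort(np.linalg.eigvalsh(T))

# The eigenvalues are exactly the symbol sampled at theta_j = j*pi/(n+1)
theta = np.arange(1, n + 1) * np.pi / (n + 1)
samples = np.sort(2.0 - 2.0 * np.cos(theta))
assert np.max(np.abs(eigs - samples)) < 1e-10
```

For general (and in particular variable-coefficient) discretizations the sampling is only asymptotic, which is precisely what the GLT machinery formalizes.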
Meini Solving quadratic matrix equations with infinite quasi-Toeplitz coefficients Quadratic matrix equations of the kind $A_1X^2+A_0X+A_{-1}=X$, where the coefficients are nonnegative semi-infinite quasi-Toeplitz matrices, are encountered in certain Quasi-Birth-Death (QBD) stochastic processes, such as the tandem Jackson queue or the reflecting random walk in the quarter plane. We provide a numerical framework for approximating the minimal nonnegative solution of these equations which relies on semi-infinite quasi-Toeplitz matrix arithmetic. This arithmetic does not perform any finite size truncation, instead it approximates the infinite matrices through the sum of a banded Toeplitz matrix and a compact correction. In particular, we show that the algorithm of Cyclic Reduction can be effectively applied and can approximate the infinite dimensional solutions with quadratic convergence at a cost which is comparable to that of the finite case. This way, we may compute a finite approximation of the sought solution, as well as of the invariant probability measure of the associated QBD process, within a given accuracy. Numerical experiments, performed on a collection of benchmarks, confirm the theoretical analysis. Donatelli Spectral analysis and multigrid preconditioners for space-fractional diffusion equations Fractional partial diffusion equations (FDEs) are a generalization of classical partial differential equations, used to model anomalous diffusion phenomena. Several discretization schemes (finite differences, finite volumes, etc.) combined with (semi)-implicit methods lead to a Toeplitz-like matrix-sequence. In the case of constant diffusion coefficients such a matrix-sequence reduces to a Toeplitz one; then, exploiting well-known results on Toeplitz sequences, we are able to describe its asymptotic eigenvalue distribution.
In the case of nonconstant diffusion coefficients, we show that the resulting matrix-sequence is a generalized locally Toeplitz (GLT) sequence, and then we use the GLT machinery to study its singular value/eigenvalue distribution as the matrix size diverges (see [3]). The new spectral information is employed for analyzing preconditioned Krylov and multigrid methods recently appeared in the literature, with both positive and negative results. Moreover, such spectral analysis guides the design of new preconditioning and multigrid strategies. We propose new structure preserving preconditioners with minimal bandwidth (and so with efficient computational cost) and multigrid methods for 1D and 2D problems (see [1] for the 1D case and [2] for the 2D case). Some numerical results confirm the theoretical analysis and the effectiveness of the new proposals. References. [1] M. Donatelli, M. Mazza, S. Serra-Capizzano: "Spectral analysis and structure preserving preconditioners for fractional diffusion equations", J. Comput. Phys., Vol. 307, pp. 262--279, 2016. [2] H. Moghaderi, M. Dehghan, M. Donatelli, M. Mazza: "Spectral analysis and multigrid preconditioners for two-dimensional space-fractional diffusion equations", arXiv, 1926487. [3] S. Serra-Capizzano: "The GLT class as a generalized Fourier Analysis and applications", Linear Algebra Appl. Vol. 419, pp. 180--233, 2006. 15.00-15.30 Speleers Design of fast multigrid solvers for isogeometric analysis: a symbol approach In this talk we focus on the numerical solution of elliptic problems by means of isogeometric analysis. By exploiting specific spectral properties compactly described by a symbol, we are able to design efficient multigrid methods for the fast solution of the related linear systems. Despite the theoretical optimality, the convergence rate of multigrid methods with classical stationary smoothers worsens exponentially as the involved spline degrees increase.
With the aid of the symbol, we can give a theoretical interpretation of this exponential worsening. Moreover, a proper factorization of the symbol allows us to develop a preconditioned conjugate gradient smoother, in the spirit of the multi-iterative strategy, that results in a good multigrid convergence rate, independent of both matrix size and spline degree. Numerical experiments confirm the effectiveness of our proposal and the numerical optimality. This is joint work with Marco Donatelli, Carlo Garoni, Carla Manni and Stefano Serra-Capizzano. Boito An algorithm for efficient solution of shifted quasiseparable linear systems and applications Solving sets of shifted linear systems $(A + \sigma_k I) x_k = b_k$ is a well-studied problem in numerical linear algebra. In many cases, the matrix $A$ is structured. This work focuses on the case where $A$ is quasiseparable, that is, the off-diagonal blocks of $A$ have low rank. This is often the case, for instance, when discretizing differential operators. We present a fast algorithm that relies on structured QR factorization and exploits both the quasiseparable and the shifted structure of the problem. The main application we consider is the computation of the product of matrix functions times a vector via rational approximations or contour integrals. The same approach can also be used for the solution of linear matrix equations. Palitta Numerical methods for Lyapunov matrix equations with banded symmetric data We are interested in the numerical solution of the large-scale Lyapunov equation $AX + XA^T = C$, where $A, C \in \mathbb{R}^{n\times n}$ are both large and banded matrices. We suppose that $A$ is symmetric and positive definite and $C$ is symmetric and positive semidefinite. While the case of low-rank $C$ has been successfully addressed in the literature, the more general banded setting has not received much attention, in spite of its possible occurrence in applications. In this talk we aim to fill this gap.
It has been recently shown that if $A$ is well conditioned, the entries of the solution matrix $X$ decay in absolute value as their indices move away from the sparsity pattern of $C$. This property can be used in a memory-saving matrix-oriented Conjugate Gradient method to obtain a banded approximate solution. For $A$ not well conditioned, the entries of $X$ do not sufficiently decay to derive a good banded approximation. Nonetheless, we show that it is possible to split $X$ as $X = Z_b + Z_r$, where $Z_b$ is banded and $Z_r$ is numerically low rank. We thus propose a novel strategy that efficiently approximates both $Z_b$ and $Z_r$ with acceptable memory requirements. Numerical experiments are reported to illustrate the potential of the discussed method. Bolten Analysis of parallel time integrators using structured matrices With the advent of large-scale high-performance computers with 100,000s of cores, scalability of numerical algorithms becomes one of the most important aspects. In many applications, problems do not only depend on spatial degrees of freedom, but they also evolve in time. Usually, the speedup of a pure spatial distribution of these problems saturates at some point, requiring a rethinking of the parallelization strategy. One option is parallel time integrators that decompose the domain in all dimensions. A variety of parallel time integration schemes exists, e.g., the well-known parareal method by Lions, Maday, and Turinici, PFASST by Emmett and Minion, or space-time multigrid that has been considered already by Hackbusch. The analysis of these methods often leads to structured matrices. In many cases a bidiagonal block Toeplitz matrix has to be studied in order to assess the properties of the numerical method. Parallel-in-time integrators share many similarities with multigrid methods, so the theory and methodology developed in the context of multigrid methods for structured matrices can be applied straightforwardly in this case.
We will present the analysis of some time-parallel methods using structured matrices. Using these techniques we are able to analyze the behavior of the studied methods. The results provide a better understanding of parallel time integrators and allow the design of more efficient methods. 15.30-16.00 Poster blitz Eidelman Rational approximations and fast quasiseparable algorithms for matrix and operator functions and differential equations We discuss methods for computations of matrix and operator functions via rational approximations. Using quasiseparable representations of matrices we obtain fast algorithms to solve these problems. The model examples we use are various problems for differential equations. Massei Solving large scale Sylvester and Lyapunov equations with quasiseparable coefficients In this talk we address the problem of efficiently solving $AX+XB=C$ in the case where the coefficients $A,B$ and $C$ are $m\times m$ matrices with rank-structured off-diagonal blocks. We prove that under reasonable assumptions the structure is present in the solution. We design and test different strategies that --- combined with the hierarchical matrices technology --- are able to solve the equation with linear-polylogarithmic complexity. Extension to the treatment of certain generalized Sylvester equations is discussed. Sesana Multigrid methods for block-circulant linear systems Many problems in physics, engineering and applied sciences are governed by functional equations (in differential or integral form) that do not admit a closed-form solution and, therefore, require numerical discretization techniques which often involve the solution of linear systems of large size.
The main novelty contained in the literature treating structured matrices is the use of the \textit{symbol}, that is a function $f$ that provides a compact description of the asymptotic global behavior of the spectrum of the sequence of the coefficient matrices $\{A_n\}$, obtained from the discretization of a differential or integral problem when the fineness parameter tends to zero. Among the iterative methods available to solve such systems, in the last 25 years Multigrid methods have gained a remarkable reputation as fast solvers for structured matrices associated with shift-invariant operators, where the size $n$ of the problem is large. The convergence analysis of two particular Multigrid methods, the Two-grid and the V-cycle, has been handled in a compact and elegant manner by studying a few analytical properties of the symbol $f$ associated with the sequence of coefficient matrices. In the cases taken into account, especially concerning Toeplitz matrices, this symbol is a (multivariate) scalar-valued function $f$, while much remains to be done in the case of matrix-valued symbols, which are obtained for example in the discretization of systems of equations. In this talk our aim is to fill this gap, generalizing some of the existing proofs for the Two-grid and the V-cycle method for systems with matrices in an algebra, such as circulant, Hartley and $\tau$, to the case where the latter have a matrix-valued symbol. The next step is the extension to matrices not in an algebra, such as Toeplitz matrices. 16.00-16.30 Coffee Break / Poster session 16.30-17.00 Poster session Kressner Low-rank updates of matrix functions This talk is concerned with the development and analysis of fast algorithms for updating a matrix function $f(A)$ if $A$ undergoes a low-rank change $A+L$. For example, when $A$ is the Laplacian of an undirected graph then removing one edge or vertex of the graph corresponds to a rank-one change of $A$.
The evaluation and update of such matrix functions (or parts thereof) is of relevance in the analysis of the community structure of networks and in graph signal processing. Our algorithms are based on the tensorization of polynomial or rational Krylov subspaces involving $A$ and $A^T$. The choice of a suitable element from such a tensorized subspace for approximating $f(A+L)-f(A)$ is straightforward in the symmetric case but turns out to be more intricate in the nonsymmetric case. We show that the usual convergence results for Krylov subspace methods for matrix functions can be extended to our algorithms. If time permits, we will also touch upon the closely related problem of applying the Fréchet derivative of a matrix function to a low-rank matrix. This is joint work with Bernhard Beckermann and Marcel Schweitzer. Mitrouli Estimating bilinear forms for some large scale computation problems A spectrum of applications arising from Statistics, Machine Learning and Network Analysis requires the computation of bilinear forms $x^T f(A)y$, where $A$ is a diagonalizable matrix and $x$, $y$ are given vectors. In this work we are interested in efficiently computing bilinear forms primarily due to their importance in several contexts. For large scale computation problems it is preferable to achieve approximations of bilinear forms without exploiting the whole matrix function. For this purpose an extrapolation procedure has been developed, attaining the approximation of bilinear forms with one-, two- or three-term estimates at quadratic complexity. Furthermore, a prediction approach based on Aitken's acceleration scheme has also been developed, computing alternative estimates again at quadratic complexity. Both schemes are characterized by easily applicable formulae of low complexity that can be implemented in vectorized form.
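Bilinear forms $x^T f(A)y$ as in Mitrouli's abstract are classically approximated by Gauss quadrature via the Lanczos process (Golub and Meurant); the extrapolation estimates of the talk are an alternative to this. The sketch below (a generic illustration, not the talk's method) estimates $x^T A^{-1} x$ from the Lanczos tridiagonal matrix; with $k = n$ steps it is exact up to rounding for this small SPD matrix, while in practice one would take $k \ll n$:

```python
import numpy as np

def lanczos_tridiag(A, v, k):
    """k steps of Lanczos on symmetric A with start vector v; returns T_k."""
    n = len(v)
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        w = A @ Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

rng = np.random.default_rng(0)
n = 8
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # well-conditioned SPD test matrix
x = rng.standard_normal(n)

T = lanczos_tridiag(A, x, k=n)       # full Lanczos: T = Q^T A Q
e1 = np.zeros(n); e1[0] = 1.0
estimate = (x @ x) * (e1 @ np.linalg.solve(T, e1))   # approximates x^T A^{-1} x
exact = x @ np.linalg.solve(A, x)
assert abs(estimate - exact) / abs(exact) < 1e-8
```

The same skeleton covers other $f$ by replacing $T^{-1}$ with $f(T)$, which is the standard route when forming $f(A)$ itself is too expensive.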
17.00-17.30 Cicone Spectral and convergence analysis of the discrete Adaptive Local Iterative Filtering method by means of Generalized Locally Toeplitz sequences In this talk we introduce a newly developed algorithm, the Adaptive Local Iterative Filtering (ALIF). This is an alternative technique to the well-known Empirical Mode Decomposition method for the decomposition of nonstationary real-life signals. This last algorithm has been successfully applied in many fields of research in the last two decades. The mathematical convergence analysis of all the aforementioned algorithms is still lacking. For this reason, focusing the attention on the discrete version of ALIF, we show how recent results about sampling matrices and, in particular, the theory of Generalized Locally Toeplitz sequences allow us to perform a spectral analysis of the matrices involved in the iterations. In particular we are able to study the eigenvalue clustering and the eigenvalue distribution for such matrices; we provide a necessary condition for the convergence of the Discrete ALIF method and we derive a simple criterion to construct filters, needed in the ALIF algorithm, that guarantee the fulfillment of the aforementioned necessary condition. We show some numerical examples and discuss important open problems that remain to be tackled. Pan Low Rank Approximation: Failing but Accurate Superfast Algorithms, Pre-processing and Extensions Low rank approximation (hereafter LRA) of a large matrix has routinely been computed by using much fewer memory units and arithmetic operations than the input matrix has entries, but so far no proof supports the consistently observed accuracy of such superfast algorithms. We first explain why such a proof has been missing -- we specify a small family of matrices (we call them $\pm \delta$-{\em matrices}) for most of which LRA computed by any superfast algorithm is no better than the trivial approximation by the matrix filled with zeros.
Then we analyze superfast LRA algorithms of three kinds and prove that for each of them the class of such hard inputs is narrow: these algorithms output close LRAs to the average matrix allowing LRA, to any matrix allowing LRA unless it degenerates like $\pm \delta$-matrices, and with high probability even to such a degenerate matrix if it is pre-processed with a random Gaussian, SRHT or SRFT multiplier; moreover, the empirical efficiency of these multipliers is matched by our much sparser multipliers. We also provide some recipes and formal support for enhancing the accuracy of such superfast LRA and discuss further research directions. Our progress relies on new insights and techniques, should encourage the application of simplified heuristics, average case analysis and randomized multiplicative pre-processing in matrix computations, and can be immediately extended to various important computational areas, e.g., tensor decomposition; we also show a novel, surprising extension to a dramatic acceleration of Conjugate Gradient algorithms.

Dell'Acqua, Taylor boundary conditions for accurate image restoration

In recent years, several efforts were made to introduce boundary conditions for deblurring problems that allow accurate reconstructions. This resulted in the birth of Reflective, Anti-Reflective and Mean boundary conditions, which are all based on the idea of guaranteeing the continuity of the signal/image outside the boundary. Here we propose new boundary conditions that are obtained by suitably combining Taylor series and finite difference approximations. Moreover, we show that Anti-Reflective and Mean boundary conditions can also be attributed to the same framework. Numerical results show that, in case of low levels of noise and blurs able to perform a suitable smoothing effect on the original image (e.g.
Gaussian blur), the proposed boundary conditions lead to a significant improvement of the restoration accuracy with respect to those available in the literature.

17.30-18.00

Garoni, Symbol Analysis of Differential Eigenproblems

Isogeometric Analysis (IgA) and Finite Element Analysis (FEA) are two distinguished numerical methods for the numerical solution of differential problems. While FEA is a very popular technique, which dates back to the 1950s, IgA has been introduced only recently, between 2005 and 2009. However, due to its capability to enhance the connection between numerical simulation and Computer-Aided Design (CAD) systems, IgA is gaining more and more attention over time. In this presentation, we focus on the numerical solution of a simple eigenvalue problem by means of IgA and FEA. We show that IgA is superior to FEA in the spectral approximation because, while the numerical eigenvalues computed by IgA reproduce (approximately) all the exact eigenvalues, only a small portion of the numerical eigenvalues computed by FEA can be considered approximations of the exact eigenvalues. Our analysis is based on the notion of "symbol", which will be introduced in the talk along with IgA and FEA.

Gemignani, On the Design of Fast Eigensolvers for Diagonal plus Low-Rank Matrices

Most research on rank structured matrix technology for structured eigenproblems stems from the design of efficient eigensolvers for diagonal plus low-rank matrices. In this talk we present two fast algorithms for Hessenberg reduction of a structured matrix $A = D + UV^H$ where $D$ is a real or unitary $n \times n$ diagonal matrix and $U, V \in\mathbb{C}^{n \times k}$. The proposed algorithm for the real case exploits a two-stage approach by first reducing the matrix to a generalized Hessenberg form and then completing the reduction by annihilation of the unwanted sub-diagonals.
It is shown that the novel method requires $O(n^2k)$ ops and is significantly faster than other reduction algorithms for rank structured matrices. The method is then extended to the unitary plus low rank case by using a block analogue of the CMV form of unitary matrices. It is shown that a block Lanczos-type procedure for the block tridiagonalization of $\Re(D)$ induces a structured reduction on $A$ in a block staircase CMV-type shape. Then, we present a numerically stable method for performing this reduction using unitary transformations and we show how to generalize the sub-diagonal elimination to this shape, while still being able to provide a condensed representation for the reduced matrix at a cost of $O(n^2k)$ ops.

Winkler, A structure-preserving matrix method for the computation of multiple roots of a Bernstein basis polynomial

The Bernstein basis is used for the representation of curves and surfaces in geometric modelling systems because of its elegant geometric properties and enhanced numerical stability with respect to the power basis. One of the most important computations in these systems is the calculation of the points of intersection of curves and surfaces, which reduces to the calculation of the roots of a polynomial. Particular interest is focussed on multiple roots because they define conditions of tangency, and therefore smooth intersections, which are required for the reduction of stress-concentration factors at sharp corners, and for ease of handling -- it is easier to handle an object with rounded corners than an object with sharp corners. This paper considers the application of a structure-preserving matrix method for the computation of multiple roots of a Bernstein polynomial $f(y)$. Extensive use is made of the Sylvester matrix and its subresultant matrices for the calculation of the multiplicities of the distinct roots of $f(y)$, which involves many greatest common divisor computations.
It is shown that the development of this polynomial root solver is significantly more complicated than its equivalent for the power basis form of $f(y)$ because of the combinatorial terms in the Bernstein basis functions. In particular, even if the coefficients of $f(y)$ are of the same order of magnitude, the presence of the combinatorial factors implies that, even for polynomials of modest degree, the entries in the matrices that arise in the computations may span several orders of magnitude, which can cause numerical problems that do not arise when the power basis form of $f(y)$ is considered. Furthermore, the presence of these combinatorial factors in the Sylvester matrix destroys its concatenated Toeplitz matrix structure, which increases the computational cost of the method. It is shown that the adverse effects of the combinatorial factors can be mitigated by multiplying the Sylvester matrix by a diagonal matrix, and that this operation improves the accuracy of the computed roots. Computational results obtained from this root solver, and from root solvers that exploit the geometric properties of the Bernstein basis, will be presented, and it will be shown that the root solver described above yields considerably better results.

Poster: Aceto, Rational approximations to the fractional Laplacian

Fractional-order in space mathematical models, in which an integer-order differential operator is replaced by a corresponding fractional one, are becoming increasingly used since they provide an adequate description of many processes that exhibit anomalous diffusion. In this talk, in particular, we focus on the numerical solution of fractional-in-space reaction-diffusion equations on bounded domains under homogeneous Dirichlet boundary conditions. Using the matrix transfer technique, the fractional Laplacian operator is replaced by a matrix which, in general, is dense [3, 4].
The approach proposed is based on the approximation of this matrix by the product of two suitable banded matrices [1]. Since these two matrices will be involved in the linear algebra tasks required by the differential solver, it is fundamental that their condition numbers are not responsible for inaccuracy, otherwise there would be an a-priori barrier for the choice of the bandwidth. In order to face this problem, we use a simple but reliable strategy that allows us to keep the conditioning under control [2]. Work in collaboration with Paolo Novati, Department of Mathematics and Geosciences, University of Trieste.

References.
[1] L. Aceto, P. Novati, Rational approximation to the fractional Laplacian operator in reaction-diffusion problems. SIAM J. Sci. Comput., 39 (2017), no. 1, A214--A228.
[2] L. Aceto, P. Novati, Efficient implementation of rational approximations to fractional differential operators, submitted.
[3] M. Ilic, F. Liu, I. Turner, V. Anh, Numerical approximation of a fractional-in-space diffusion equation I, Fract. Calc. Appl. Anal., 8 (2005) 323--341.
[4] M. Ilic, F. Liu, I. Turner, V. Anh, Numerical approximation of a fractional-in-space diffusion equation (II)--with nonhomogeneous boundary conditions, Fract. Calc. Appl. Anal., 9 (2006) 333--349.

Bertaccini, Fast Solution of Fractional Differential Equations

In our talk we propose an innovative algorithm for the large (full) linear systems arising from time-dependent partial fractional differential equations discretized in time with linear multistep formulas, both in classical [1] and in boundary value form [2]. We use the short-memory principle to ensure the decay of the entries of sparse approximations of the discretized operator and its inverse. FGMRES with preconditioners, themselves based on the short-memory principle, is then used to solve the underlying sequence of linear systems. The ideas above are implemented on GPU devices by the techniques for sparse approximate inverse preconditioners proposed in [3].
References.
[1] Bertaccini, D., Durastante, F. (2017). Solving mixed classical and fractional partial differential equations using short memory principle and approximate inverses. Numer. Algorithms, 74(4), 1061--1082.
[2] Bertaccini, D., Durastante, F. (2017). Limited memory block preconditioners for fast solution of fractional PDEs. (submitted)
[3] Bertaccini, D., Filippone, S. (2016). Sparse approximate inverse preconditioners on high performance GPU platforms. Comput. Math. Appl., 71(3), 693--711.

Cipolla, Adaptive matrix algebras in unconstrained minimization

In this communication we will introduce some recent techniques which involve structured matrix spaces in the reduction of the time and space complexity of BFGS-type minimization algorithms [1], [2]. Some general results for the global convergence of algorithms for unconstrained optimization based on a BFGS-type Hessian approximation scheme are introduced, and it is shown how the construction of convergent algorithms suitable for large scale problems can be tackled using projections onto low complexity matrix algebras.

References.
[1] C. Di Fiore, S. Fanelli, F. Lepore, P. Zellini, Matrix algebras in Quasi-Newton methods for unconstrained minimization, Numerische Mathematik, 94, (2003).
[2] S. Cipolla, C. Di Fiore, F. Tudisco, P. Zellini, Adaptive matrix algebras in unconstrained minimization, Linear Algebra and its Applications, 471, (2015).

Concas, Matlab implementation of a spectral algorithm for the seriation problem

The seriation problem is an important ordering issue related to finding all the possible ways to sequence the elements of a certain set. It is frequently used in archeology and has applications in many fields such as genetics, bioinformatics and graph theory. We will present a Matlab implementation of a spectral method, which was presented in [1], to solve the seriation problem.
This algorithm is based on the use of the Fiedler vector of the Laplacian matrix associated to the problem, and it describes the results in terms of a particular data structure called a PQ-tree, used for storing the admissible orderings. We will discuss the case of the presence of a multiple Fiedler value and some numerical examples of graphs for which it is not possible to find a precise solution [2].

References.
[1] Jonathan E. Atkins, Erik G. Boman, and Bruce Hendrickson. A spectral algorithm for seriation and the consecutive ones problem. SIAM Journal on Computing, 28(1):297--310, 1998.
[2] Anna Concas, Caterina Fenu, and Giuseppe Rodriguez. PQ-ser: a Matlab package for spectral seriation. In preparation.

Durastante, Fractional PDEs Constrained Optimization: An optimize-then-discretize approach with L-BFGS and Approximate Inverse Preconditioning

In [2] we considered the numerical solution of the problem: $\left\lbrace\begin{array}{l} \displaystyle \min J(y,u) = \frac{1}{2}\|y - z_d\|_2^2 + \frac{\lambda}{2}\|u\|_2^2, \\ \;\;\textrm{subject to } e(y,u) = 0, \end{array}\right.$ where $J$ and $e$ are two continuously Fréchet differentiable functionals, $J\colon Y \times U \to \mathbb{R}$, $e\colon Y \times U \to W$, with $Y$, $U$ and $W$ reflexive Banach spaces, $z_d \in U$ is given and $\lambda \in \mathbb{R}$ is a fixed positive regularization parameter. The constraint, namely $e(y,u)=0$, is chosen not to be an ordinary elliptic PDE as in the classic case, but a Fractional Partial Differential Equation: either the Fractional Advection Dispersion Equation or the two-dimensional Riesz Space Fractional Diffusion equation. We focus on extending the existing strategies for classic PDE constrained optimization to the fractional case. We will present both a theoretical and an experimental analysis of the problem in an algorithmic framework based on the L-BFGS method coupled with a Krylov subspace solver. A suitable preconditioning strategy by approximate inverses is taken into account as in [1].
Numerical experiments are performed with benchmarked software/libraries, thus enforcing the reproducibility of the results.

References.
[1] Bertaccini, D., Durastante, F. (2017). Solving mixed classical and fractional partial differential equations using short-memory principle and approximate inverses. Numer. Algorithms, 74(4), 1061--1082.
[2] Cipolla S., Durastante F. (2017). Fractional PDEs Constrained Optimization: An optimize-then-discretize approach with L-BFGS and Approximate Inverse Preconditioning.

Fasino, Error Analysis of TT-format Tensor Algorithms

The {\em tensor train decomposition} is a representation technique which allows compact storage and efficient computations with arbitrary tensors [2]. Basically, a tensor train (TT) decomposition of a $d$-dimensional tensor ${\bf A}$ with size $n_1\times n_2 \times \cdots \times n_d$ is a sequence $G_1,\ldots,G_d$ of 3D tensors (the {\em carriages}) such that the size of $G_i$ is $r_{i-1}\times n_i \times r_i$ with $r_0 = r_d = 1$ (that is, $G_1$ and $G_d$ are ordinary matrices) and ${\bf A}(i_1,i_2,\ldots,i_d) = \sum_{\alpha_1,\ldots,\alpha_{d-1}} G_1(i_1,\alpha_1)G_2(\alpha_1,i_2,\alpha_2)\cdots G_d(\alpha_{d-1},i_d).$ The index $\alpha_i$ runs from $1$ to $r_i$, for $i=1,\ldots,d-1$, and the numbers $r_1,\ldots,r_{d-1}$ are the {\em TT-ranks} of ${\bf A}$. An alternative viewpoint on this decomposition is ${\bf A}(i_1,i_2,\ldots,i_d) = G_1'(i_1)G_2'(i_2)\cdots G_d'(i_d),$ where now $G_k'(i_k)$ is an $r_{k-1}\times r_k$ matrix depending on the integer parameter $i_k$. We will use the notation ${\bf A} = \mathrm{TT}(G_1,\ldots,G_d)$. We present a backward error analysis of two algorithms found in [3] which perform computations with tensors in TT-format. The first one produces an exact or approximate TT-decomposition $G_1,\ldots,G_d$ of a tensor $\bf A$ given in functional form, depending on a tolerance $\varepsilon$.
If $\varepsilon = 0$ then the output of the algorithm is an exact TT-decomposition, that is, ${\bf A} = \mathrm{TT}(G_1,\ldots,G_d)$. If $\varepsilon > 0$ then $\mathrm{TT}(G_1,\ldots,G_d)$ is an $\mathcal{O}(\varepsilon)$-approximation of $\bf A$ which can realize significant savings in memory space. The computational core of the algorithm is a suitable (approximate) matrix factorization that, in the original paper, relies on SVD computations. We prove that analogous performances can be obtained by means of QR factorizations. Moreover, we obtain a stability estimate that, in the exact arithmetic case, reduces to the error estimate given in [3] for the SVD-based decomposition and, in the floating point arithmetic case, separates the effects of the tolerance $\varepsilon$, the lack of orthogonality of certain intermediate matrices, and the stability properties of the basic factorization on the backward error of the computed TT-decomposition. The second algorithm computes the multilinear form (also called {\em contraction}) of a given $d$-dimensional tensor in TT-format and vectors $v_1,\ldots,v_d$, $a = \sum_{i_1=1}^{n_1} \cdots \sum_{i_d=1}^{n_d} \sum_{\alpha_1,\ldots,\alpha_{d-1}} G_1(i_1,\alpha_1)G_2(\alpha_1,i_2,\alpha_2) \cdots G_d(\alpha_{d-1},i_d) v_1(i_1) \cdots v_d(i_d).$ By means of known error bounds for inner products in floating point arithmetic [1], we prove backward stability of the proposed algorithm under very general hypotheses on the evaluation order of the innermost summations. More precisely, if ${\bf A} = \mathrm{TT}(G_1,\ldots,G_d)$ and no underflows or overflows are encountered, then the output $\widehat a$ computed by the algorithm in floating point arithmetic is the exact contraction of ${\bf \widehat A} = \mathrm{TT} (G_1 + \Delta G_1,\ldots,G_d + \Delta G_d)$ and $v_1,\ldots,v_d$, where $|\Delta G_i| \leq (n_i+r_{i-1})u|G_i| + \mathcal{O}(u^2)$ and $u$ is the machine precision.

References.
[1] C.-P.~Jeannerod, S.~M.~Rump.
Improved error bounds for inner products in floating-point arithmetic. {\em SIMAX} 34 (2013), 338--344.
[2] I.~Oseledets. Tensor-train decomposition. {\em SIAM J. Sci. Comput.} 33 (2011), 2295--2317.
[3] I.~Oseledets, E.~Tyrtyshnikov. TT-cross approximation for multidimensional arrays. {\em Lin.~Alg.~Appl.} 432 (2010).

Fenu, On the computation of the GCV function for Tikhonov Regularization

Tikhonov regularization is commonly used for the solution of linear discrete ill-posed problems with error-contaminated data. A regularization parameter, which determines the quality of the computed solution, has to be chosen. One of the most popular approaches to choosing this parameter is to minimize the Generalized Cross Validation (GCV) function. The minimum can be determined quite inexpensively when the matrix $A$ that defines the linear discrete ill-posed problem is small enough to rapidly compute its singular value decomposition (SVD). We are interested in the solution of linear discrete ill-posed problems with a matrix $A$ that is too large to make the computation of its complete SVD feasible. We will present two fairly inexpensive ways to determine upper and lower bounds for the numerator and denominator of the GCV function for large matrices $A$ [1, 2]. The first one is based on Gauss-type quadrature and the second one on a low-rank approximation of the matrix $A$. These bounds are used to determine a suitable value of the regularization parameter. Computed examples illustrate the performance of the proposed methods.

References.
[1] C. Fenu, L. Reichel, and G. Rodriguez. GCV for Tikhonov regularization via global Golub-Kahan decomposition. Numer. Linear Algebra Appl., 23:467--484, 2016.
[2] C. Fenu, L. Reichel, G. Rodriguez and H. Sadok. GCV for Tikhonov regularization by partial singular value decomposition. BIT Numer. Math., DOI 10.1007/s10543-017-0662-0, 2017.
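For small problems, where the complete SVD is affordable, the GCV function itself can be evaluated directly from the filter factors. The sketch below uses the standard textbook formulation (as in Hansen's treatment of discrete ill-posed problems), not the large-scale bounds of the abstract; the test matrix and all names are illustrative.

```python
import numpy as np

def gcv(A, b, lam):
    """GCV function for Tikhonov regularization
    min ||A x - b||^2 + lam^2 ||x||^2, evaluated via the SVD of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam**2)                    # Tikhonov filter factors
    beta = U.T @ b
    m = A.shape[0]
    # squared residual norm, including any component of b outside range(A)
    resid2 = np.sum(((1.0 - f) * beta) ** 2) + max(b @ b - beta @ beta, 0.0)
    return resid2 / (m - np.sum(f)) ** 2

# small synthetic ill-posed problem: a Hilbert-type matrix
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
b = A @ np.ones(n)
lam = 1e-2

# cross-check: the filter-factor formula matches a direct residual computation
U, s, Vt = np.linalg.svd(A)
f = s**2 / (s**2 + lam**2)
x_lam = Vt.T @ ((s / (s**2 + lam**2)) * (U.T @ b))   # Tikhonov solution
direct = np.linalg.norm(A @ x_lam - b) ** 2 / (n - np.sum(f)) ** 2
val = gcv(A, b, lam)
```

In practice one evaluates `gcv` on a grid of `lam` values and takes the minimizer as the regularization parameter.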
Iannazzo, Nonnegative factorization of tensor grids

The Nonnegative Matrix Factorization (NMF) approximately decomposes a (large) matrix with nonnegative entries into the product of two matrices with nonnegative entries and with smaller dimensions. A similar decomposition is desired when dealing with tensor grids, matrices whose entries are positive definite matrices and not just numbers. In this case, NMF is not suited because of the presence of possibly negative entries. We present two generalizations of the NMF preserving the positive definite structure of tensor grids. They are based on the use of non-Euclidean geometries on the set of positive definite matrices.

Poloni, Counting Fiedler pencils with diagrams

Fiedler pencils (with repetitions) are a family of matrix pencils which generalizes the well-known companion form: they are linearizations of matrix polynomials, i.e., they provide a template to construct, given a matrix polynomial, a linear eigenvalue problem with the same eigenvalues and multiplicities. Fiedler pencils are constructed as products of special block matrices that act nontrivially only on two contiguous blocks. They have a rich structure that gives rise to many combinatorial properties. We introduce a notation that associates to each pencil a diagram that depicts its action on the blocks. Using it, we can obtain visual proofs of several results in the theory, and we can solve several counting problems (such as "how many distinct Fiedler pencils with repetitions of a given dimension exist"). Among them, in particular, we are interested in counting Fiedler pencils associated to symmetric and palindromic matrix polynomials which preserve the same structure.
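The companion form that the Fiedler family generalizes can be sanity-checked in a few lines in the scalar (degree-one-block) case: the eigenvalues of the companion matrix of a monic polynomial are exactly its roots. A minimal sketch (illustrative only; the diagram machinery of the abstract is not reproduced here):

```python
import numpy as np

def companion(coeffs):
    """Companion matrix of the monic polynomial
    x^n + c_{n-1} x^{n-1} + ... + c_0, with coeffs = [c_0, ..., c_{n-1}]."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)          # subdiagonal of ones
    C[:, -1] = -np.asarray(coeffs)      # last column carries the coefficients
    return C

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
C = companion([-6.0, 11.0, -6.0])
ev = np.sort(np.linalg.eigvals(C).real)
# ev ≈ [1, 2, 3]
```

For a matrix polynomial the same picture holds blockwise, and the Fiedler pencils of the talk arise by reordering the elementary block factors of this construction.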
Pozza, Lanczos algorithm and the Gauss quadrature for linear functionals

Gauss quadrature can be naturally generalized to approximate quasi-definite linear functionals [1], where the interconnections with formal orthogonal polynomials, Padé approximants, complex Jacobi matrices and the Lanczos algorithm are analogous to those in the positive definite case. As presented in [2], the existence of the $n$-weight complex Gauss quadrature corresponds to performing successfully the first $n$ steps of the non-Hermitian Lanczos algorithm. Some further results on the relation between the non-definite case and the look-ahead Lanczos algorithm will be shown.

References.
[1] S. Pozza, M. Pranić, Z. Strakoš, Gauss quadrature for quasi-definite linear functionals, to appear in: IMA Journal of Numerical Analysis.
[2] S. Pozza, M. Pranić, Z. Strakoš, Lanczos algorithm and the complex Gauss quadrature, submitted, May 2017.

Roupa, Approximation of matrix-vector products and applications

The estimation of matrix-vector products such as $A^{1/2}b$, $\exp(A)b$, $A^{-1}b$, where $A\in\mathbb R^{p\times p}$ is a given diagonalizable matrix and $b\in\mathbb R^p$, appears in many applications arising from the fields of mathematics, statistics, mechanics, networks, machine learning and physics. In particular, the matrix-vector product $A^{1/2}b$ is used in sampling from a Gaussian process distribution. In network analysis the vector form $\exp(A)b$ determines the total communicability as a global measure of how well the nodes in a graph can exchange information [1]. In this work, methods for approximating matrix-vector products based on extrapolation [3], Krylov subspaces [4] and polynomial approximations [2] are presented. The above methods for estimating vector forms are compared concerning accuracy, computational complexity and execution time. Several numerical examples will be presented to illustrate the effectiveness of these methods.

References.
[1] F. Arrigo, M.
Benzi, Updating and downdating techniques for optimizing network communicability, SIAM J. Sci. Comput., 38 (2016), pp. B25--B49.
[2] J. Chen, M. Anitescu, Y. Saad, Computing f(A)b via least squares polynomial approximations, SIAM J. Sci. Comput., 33 (2011), pp. 195--222.
[3] P. Fika, M. Mitrouli, P. Roupa, Estimates for the bilinear form $x^TA^{-1}y$ with applications to linear algebra problems, Elec. Trans. Numer. Anal., 43 (2014), pp. 70--89.
[4] N. J. Higham, Functions of Matrices: Theory and Computation, SIAM, Philadelphia, PA, USA, 2008.
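The key point of all these methods is that $f(A)b$ is computed from matrix-vector products with $A$, without ever forming $f(A)$. The simplest instance of this idea, a term-by-term Taylor summation for $\exp(A)b$, can be sketched as follows (illustrative only; production codes prefer scaling-and-squaring or Krylov methods for large $\|A\|$):

```python
import numpy as np

def expm_apply(A, b, tol=1e-14, max_terms=300):
    """Approximate exp(A) b by summing the Taylor series term by term.
    Only matrix-vector products with A are needed; exp(A) is never formed.
    Reliable only for modest ||A||."""
    term = np.asarray(b, dtype=float).copy()
    y = term.copy()
    for k in range(1, max_terms):
        term = A @ term / k          # next Taylor term A^k b / k!
        y = y + term
        if np.linalg.norm(term) <= tol * np.linalg.norm(y):
            break
    return y

rng = np.random.default_rng(1)
S = rng.standard_normal((30, 30))
A = (S + S.T) / 2
A *= 2.0 / np.linalg.norm(A, 2)      # symmetric, spectral norm 2
b = rng.standard_normal(30)

# reference via eigendecomposition (valid since A is symmetric)
w, V = np.linalg.eigh(A)
exact = V @ (np.exp(w) * (V.T @ b))
approx = expm_apply(A, b)
```

For example, the total communicability of a graph mentioned in the abstract is exactly `expm_apply(adjacency, ones)` in this notation.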
On Wednesday afternoon, there will be an excursion to the Isola Polvese, the biggest island of the Trasimeno Lake, and then there will be the social dinner at the restaurant "Il Faliero".
Pictures
A selection of pictures of the meeting.
Click on the pictures below to open a gallery (a password is required).
Proceedings
Papers with the results presented at the meeting will be published in a volume of the "Springer INdAM Series", indexed in Scopus.
-- Editors of the volume are D. Bini, F. Di Benedetto, E. Tyrtyshnikov, M. Van Barel.
-- Papers can be submitted to any of the editors.
-- Papers should be about 15 pages, written in LaTeX with the Springer macros.
-- Guidelines and macros can be found here.
-- Editors and authors can access the electronic version of the volume, and have a 40% discount for a Springer book.
-- Editors and corresponding authors will receive a copy of the volume.
-- Springer does not allow uploading published manuscripts to Arxiv / Research Gate / personal Web pages, but manuscripts can be sent in electronic version to other scholars.
-- The volume will appear about 5 months after the end of the refereeing process.
This is the planned schedule:
-- Deadline for submissions: January 31, 2018.
-- Communication of the final decision: June 30, 2018.
-- Delivery of the volume: November 30, 2018.
# Properties
Label: 5184.2.a.bw
Level: $5184$
Weight: $2$
Character orbit: 5184.a
Self dual: yes
Analytic conductor: $41.394$
Analytic rank: $0$
Dimension: $2$
CM discriminant: $-4$
Inner twists: $2$
# Related objects
## Newspace parameters
Level: $$N = 5184 = 2^{6} \cdot 3^{4}$$
Weight: $$k = 2$$
Character orbit: $$[\chi] =$$ 5184.a (trivial)
## Newform invariants
Self dual: yes
Analytic conductor: $$41.3944484078$$
Analytic rank: $$0$$
Dimension: $$2$$
Coefficient field: $$\Q(\sqrt{3})$$
Defining polynomial: $$x^{2} - 3$$
Coefficient ring: $$\Z[a_1, \ldots, a_{5}]$$
Coefficient ring index: $$2$$
Twist minimal: no (minimal twist has level 2592)
Fricke sign: $$-1$$
Sato-Tate group: $N(\mathrm{U}(1))$
## $q$-expansion
Coefficients of the $$q$$-expansion are expressed in terms of $$\beta = 2\sqrt{3}$$. We also show the integral $$q$$-expansion of the trace form.
$$f(q) = q + (\beta + 1) q^{5} + O(q^{10})$$

$$f(q) = q + (\beta + 1) q^{5} + ( - \beta + 3) q^{13} + (2 \beta + 1) q^{17} + (2 \beta + 8) q^{25} + (\beta + 5) q^{29} + ( - 3 \beta - 1) q^{37} - 10 q^{41} - 7 q^{49} + 14 q^{53} + (3 \beta - 5) q^{61} + (2 \beta - 9) q^{65} + ( - 4 \beta + 3) q^{73} + (3 \beta + 25) q^{85} + ( - 4 \beta + 5) q^{89} + 18 q^{97} + O(q^{100})$$

$$\operatorname{Tr}(f)(q) = 2 q + 2 q^{5} + O(q^{10})$$

$$\operatorname{Tr}(f)(q) = 2 q + 2 q^{5} + 6 q^{13} + 2 q^{17} + 16 q^{25} + 10 q^{29} - 2 q^{37} - 20 q^{41} - 14 q^{49} + 28 q^{53} - 10 q^{61} - 18 q^{65} + 6 q^{73} + 50 q^{85} + 10 q^{89} + 36 q^{97} + O(q^{100})$$
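As a quick sanity check (not part of the LMFDB data itself), the listed coefficients satisfy the standard Hecke relations for a weight-2 newform with trivial character: $a_{mn} = a_m a_n$ for coprime $m, n$, and $a_{p^2} = a_p^2 - p$ for primes $p$ not dividing the level. Numerically, with $\beta = 2\sqrt{3}$:

```python
import math

beta = 2 * math.sqrt(3)                      # as in the q-expansion above
a5, a13, a17 = beta + 1, 3 - beta, 2 * beta + 1

# multiplicativity a_{mn} = a_m * a_n for coprime m, n:
assert math.isclose(a5 * a13, 2 * beta - 9)   # a_65 = a_5 * a_13
assert math.isclose(a5 * a17, 3 * beta + 25)  # a_85 = a_5 * a_17
# a_{p^2} = a_p^2 - p for p = 5:
assert math.isclose(a5 ** 2 - 5, 2 * beta + 8)  # a_25
```

Each right-hand side is the coefficient printed in the expansion above, so the table is internally consistent.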
## Embeddings
For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.
For more information on an embedded modular form you can click on its label.
| Label | $$\iota_m(\nu)$$ | $$a_{2}$$ | $$a_{3}$$ | $$a_{4}$$ | $$a_{5}$$ | $$a_{6}$$ | $$a_{7}$$ | $$a_{8}$$ | $$a_{9}$$ | $$a_{10}$$ |
|---|---|---|---|---|---|---|---|---|---|---|
| 1.1 | −1.73205 | 0 | 0 | 0 | −2.46410 | 0 | 0 | 0 | 0 | 0 |
| 1.2 | 1.73205 | 0 | 0 | 0 | 4.46410 | 0 | 0 | 0 | 0 | 0 |
## Atkin-Lehner signs
| $$p$$ | Sign |
|---|---|
| $$2$$ | $$-1$$ |
| $$3$$ | $$1$$ |
## Inner twists
| Char | Parity | Ord | Mult | Type |
|---|---|---|---|---|
| 1.a | even | 1 | 1 | trivial |
| 4.b | odd | 2 | 1 | CM by $$\Q(\sqrt{-1})$$ |
## Twists
By twisting character orbit
| Char | Parity | Ord | Mult | Type | Twist | Min | Dim |
|---|---|---|---|---|---|---|---|
| 1.a | even | 1 | 1 | trivial | 5184.2.a.bw | | 2 |
| 3.b | odd | 2 | 1 | | 5184.2.a.bm | | 2 |
| 4.b | odd | 2 | 1 | CM | 5184.2.a.bw | | 2 |
| 8.b | even | 2 | 1 | | 2592.2.a.k | | 2 |
| 8.d | odd | 2 | 1 | | 2592.2.a.k | | 2 |
| 12.b | even | 2 | 1 | | 5184.2.a.bm | | 2 |
| 24.f | even | 2 | 1 | | 2592.2.a.q | yes | 2 |
| 24.h | odd | 2 | 1 | | 2592.2.a.q | yes | 2 |
| 72.j | odd | 6 | 2 | | 2592.2.i.ba | | 4 |
| 72.l | even | 6 | 2 | | 2592.2.i.ba | | 4 |
| 72.n | even | 6 | 2 | | 2592.2.i.be | | 4 |
| 72.p | odd | 6 | 2 | | 2592.2.i.be | | 4 |
By twisted newform orbit
| Twist | Min | Dim | Char | Parity | Ord | Mult | Type |
|---|---|---|---|---|---|---|---|
| 2592.2.a.k | | 2 | 8.b | even | 2 | 1 | |
| 2592.2.a.k | | 2 | 8.d | odd | 2 | 1 | |
| 2592.2.a.q | yes | 2 | 24.f | even | 2 | 1 | |
| 2592.2.a.q | yes | 2 | 24.h | odd | 2 | 1 | |
| 2592.2.i.ba | | 4 | 72.j | odd | 6 | 2 | |
| 2592.2.i.ba | | 4 | 72.l | even | 6 | 2 | |
| 2592.2.i.be | | 4 | 72.n | even | 6 | 2 | |
| 2592.2.i.be | | 4 | 72.p | odd | 6 | 2 | |
| 5184.2.a.bm | | 2 | 3.b | odd | 2 | 1 | |
| 5184.2.a.bm | | 2 | 12.b | even | 2 | 1 | |
| 5184.2.a.bw | | 2 | 1.a | even | 1 | 1 | trivial |
| 5184.2.a.bw | | 2 | 4.b | odd | 2 | 1 | CM |
## Hecke kernels
This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{2}^{\mathrm{new}}(\Gamma_0(5184))$$:
$$T_{5}^{2} - 2T_{5} - 11$$, $$T_{7}$$, $$T_{11}$$
## Hecke characteristic polynomials
$p$ $F_p(T)$
$2$ $$T^{2}$$
$3$ $$T^{2}$$
$5$ $$T^{2} - 2T - 11$$
$7$ $$T^{2}$$
$11$ $$T^{2}$$
$13$ $$T^{2} - 6T - 3$$
$17$ $$T^{2} - 2T - 47$$
$19$ $$T^{2}$$
$23$ $$T^{2}$$
$29$ $$T^{2} - 10T + 13$$
$31$ $$T^{2}$$
$37$ $$T^{2} + 2T - 107$$
$41$ $$(T + 10)^{2}$$
$43$ $$T^{2}$$
$47$ $$T^{2}$$
$53$ $$(T - 14)^{2}$$
$59$ $$T^{2}$$
$61$ $$T^{2} + 10T - 83$$
$67$ $$T^{2}$$
$71$ $$T^{2}$$
$73$ $$T^{2} - 6T - 183$$
$79$ $$T^{2}$$
$83$ $$T^{2}$$
$89$ $$T^{2} - 10T - 167$$
$97$ $$(T - 18)^{2}$$
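The table above can be cross-checked against the $q$-expansion: each eigenvalue $a_p$ (with $\beta = 2\sqrt{3}$) must be a root of the corresponding characteristic polynomial $F_p(T)$. A quick numerical verification (not part of the LMFDB page):

```python
import math

beta = 2 * math.sqrt(3)

def is_root(a, c1, c0, tol=1e-9):
    """Check that a satisfies T^2 + c1*T + c0 = 0."""
    return abs(a * a + c1 * a + c0) < tol

assert is_root(beta + 1, -2, -11)       # p = 5:  T^2 - 2T - 11
assert is_root(3 - beta, -6, -3)        # p = 13: T^2 - 6T - 3
assert is_root(2 * beta + 1, -2, -47)   # p = 17: T^2 - 2T - 47
assert is_root(beta + 5, -10, 13)       # p = 29: T^2 - 10T + 13
```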
# THEMATIC PROGRAMS
May 2, 2016
THE FIELDS INSTITUTE FOR RESEARCH IN MATHEMATICAL SCIENCES
20th ANNIVERSARY YEAR
July-December 2012 Thematic Program on Forcing and its Applications
## November 12-16, 2012 Workshop on Iterated Forcing and Large Cardinals
Organizing Committee:
Michal Hrusak, Paul Larson, Saharon Shelah, W. Hugh Woodin
### Abstracts
Joerg Brendle (KOBE University)
Methods in iterated forcing
We present some techniques for iterating forcing constructions. For example, we discuss Shelah's method of iterating by repeatedly taking ultrapowers of a forcing notion. We will also give a brief outline of Shelah's technique of iterating along templates. While we shall mention some applications, the focus will be on illustrating the basic ideas underlying these techniques.
Moti Gitik (Tel-Aviv University)
A weak generalization of SPFA to higher cardinals.
We apply a form of the Neeman iteration to finite structures with pistes. This allows us to formulate a certain weak analog of SPFA for higher cardinals.
Martin Goldstern (Technische Universität Wien)
Cichon's diagram and large continuum
I will sketch a forcing construction of a model in which several well-known cardinal characteristics of the continuum (in particular: the continuum itself, the cofinality of null, the uniformity of null, the uniformity of meager, and the covering of meager) all have different values.
Joint work with Arthur Fischer, Kellner, Shelah. (Work in progress.)
John Krueger
Forcing with Models as Side Conditions
I describe a comparison of elementary substructures which allows for a uniform method of forcing with models as side conditions on $\omega_2$.
Heike Mildenberger (Albert-Ludwigs-Universität Freiburg)
Forcings with block sequences
I will discuss some new preservation theorems for forcings with block sequences.
A study of iterating semiproper forcing
I would like to introduce a way to iterate semiproper forcing. Suppose we have an initial segment, of limit length, of an iterated forcing. We consider the set of conditions that have sort of traceable countable stages. It turns out that this set of conditions forms a limit which sits between the direct and full limits. If we keep iterating semiproper p.o. sets under this limit, then every tail of the iteration is semiproper in the intermediate stage. In particular, the iteration itself is semiproper. This is a generalization of an iteration lemma on proper forcing under countable support.
Itay Neeman (University of California, Los Angeles)
Higher analogs of the proper forcing axiom
I will present a higher analogue of the proper forcing axiom, and discuss some of its applications. The higher analogue is an axiom that allows meeting collections of $\aleph_2$ maximal antichains, in specific classes of posets that preserve both $\aleph_1$ and $\aleph_2$.
This talk will include more details and proofs than my talk in the workshop on Forcing Axioms and their Applications. I will quickly survey the previous talk for audience members who were not present in the previous workshop.
Ralf Schindler (WWU Münster)
An axiom.
We propose and discuss a new strong axiom for set theory.
Xianghui Shi (Beijing Normal University)
Some consequences of I0 in Higher Degree Theory
We present some consequences of Axiom I0 in higher degree theory. These results indicate a connection between large cardinals and general degree structures. We shall also discuss more evidence along this direction and raise some open questions. This is joint work with W. Hugh Woodin.
Matteo Viale (University of Torino)
Absoluteness of theory of $MM^{++}$
Assume $\delta$ is a limit ordinal.
The category forcing $\mathbb{U}^\mathsf{SSP}_\delta$ has as objects the stationary set preserving partial orders in $V_\delta$ and as arrows the complete embeddings of its elements with a stationary set preserving quotient.
We show that if $\delta$ is a supercompact limit of supercompact cardinals and $\mathsf{MM}^{++}$ holds, then
$\mathbb{U}^\mathsf{SSP}_\delta$ completely embeds into a presaturated tower of height $\delta$.
We use this result to conclude that the theory of $\mathsf{MM}^{++}$ is invariant with respect to stationary set preserving posets that preserve this axiom.
|
{}
|
### Home > CALC > Chapter 4 > Lesson 4.2.4 > Problem4-86s
4-86s.
Brianna is babysitting for her calculus teacher because she broke her phone and needs to buy a new one. She will be paid a flat rate of $\$3.50$ per hour, per child. Natalie is away at a friend’s house for the first two hours while Brianna babysits Morgan and Lydia. Natalie returns, and Brianna continues to babysit for three additional hours.
1. Sketch a graph and write a piecewise function to represent Brianna’s pay rate.
2. How much will Brianna get paid for the five hours of work?
3. Represent Brianna’s total pay using definite integrals.
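As a quick numerical check of parts 2 and 3, here is a hedged sketch in Java. The interval boundaries and child counts are read off the problem statement; treat this as scratch work rather than the expected piecewise/integral write-up.

```java
public class BabysittingPay {
    // Piecewise pay rate in dollars per hour: $3.50 per child,
    // 2 children for the first two hours, then 3 children.
    static double rate(double t) {
        return (t < 2) ? 2 * 3.50 : 3 * 3.50;
    }

    public static void main(String[] args) {
        // The definite integrals reduce to rate * width on each constant piece:
        // integral from 0 to 2 of 7 dt, plus integral from 2 to 5 of 10.5 dt
        double total = rate(1) * 2 + rate(3) * 3;
        System.out.println(total); // 45.5
    }
}
```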
|
{}
|
# Problem B. 4681. (January 2015)
B. 4681. What is the area of the pentagon in exercise C. 1240.?
(4 points)
Deadline expired on 10 February 2015.
### Statistics:
86 students sent a solution. 4 points: 63 students. 3 points: 10 students. 2 points: 2 students. 1 point: 1 student. 0 point: 10 students.
|
{}
|
# Counting number of groups
Find the number of ways of forming a group of $2k$ people from $n$ couples, where $n, k \in \mathbb{N}$ with $2k \le n$, in each of the following cases: (i) there are $k$ couples in such a group; (ii) no couples are included in such a group; (iii) at least one couple is included in such a group; (iv) exactly two couples are included in such a group.
• Hint for (ii): first choose the $2k$ couples. Then choose one member of each selected couple. – lulu Apr 21 '17 at 13:26
• For case (ii), you can first select $2k$ couples and then select one person out of each couple. As such, the answer is ${{n}\choose{2k}} 2^{2k}$. For case (iv), first select 2 couples of which you will select both partners, then select $2k-4$ couples of which you will select one person. The total number of combinations is then ${{n}\choose{2}} {{n-2}\choose{2k-4}} 2^{2k-4}$. – jvdhooft May 3 '17 at 11:13
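The closed forms above can be sanity-checked by brute force. Here is a small sketch in Java; the values n = 6, k = 2 are an arbitrary choice for the check.

```java
public class CoupleGroups {
    // binomial coefficient n choose r
    static long comb(int n, int r) {
        if (r < 0 || r > n) return 0;
        long res = 1;
        for (int i = 1; i <= r; i++) res = res * (n - r + i) / i;
        return res;
    }

    public static void main(String[] args) {
        int n = 6, k = 2; // 6 couples, group size 2k = 4
        // (ii): choose 2k couples, then one member of each
        long noCouples = comb(n, 2 * k) << (2 * k);
        // (iv): choose 2 full couples, then 2k-4 couples giving one member each
        long exactlyTwo = comb(n, 2) * comb(n - 2, 2 * k - 4) << (2 * k - 4);

        // brute force: bits 2c and 2c+1 of the mask are the members of couple c
        long zero = 0, two = 0;
        for (int mask = 0; mask < (1 << (2 * n)); mask++) {
            if (Integer.bitCount(mask) != 2 * k) continue;
            int couples = 0;
            for (int c = 0; c < n; c++)
                if ((mask >> (2 * c) & 3) == 3) couples++;
            if (couples == 0) zero++;
            if (couples == 2) two++;
        }
        System.out.println(noCouples == zero);   // true
        System.out.println(exactlyTwo == two);   // true
    }
}
```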
|
{}
|
E. Xor Permutations
time limit per test
1 second
memory limit per test
256 megabytes
input
standard input
output
standard output
Toad Mikhail has an array of $2^k$ integers $a_1, a_2, \ldots, a_{2^k}$.
Find two permutations $p$ and $q$ of integers $0, 1, \ldots, 2^k-1$, such that $a_i$ is equal to $p_i \oplus q_i$ for all possible $i$, or determine there are no such permutations. Here $\oplus$ denotes the bitwise XOR operation.
Input
The first line contains one integer $k$ ($2 \leq k \leq 12$), denoting that the size of the array is $2^k$.
The next line contains $2^k$ space-separated integers $a_1, a_2, \ldots, a_{2^k}$ ($0 \leq a_i < 2^k$) — the elements of the given array.
Output
If the given array can't be represented as element-wise XOR of two permutations of integers $0, 1, \ldots, 2^k-1$, print "Fou".
Otherwise, print "Shi" in the first line.
The next two lines should contain the description of two suitable permutations. The first of these lines should contain $2^k$ space-separated distinct integers $p_{1}, p_{2}, \ldots, p_{2^k}$, and the second line should contain $2^k$ space-separated distinct integers $q_{1}, q_{2}, \ldots, q_{2^k}$.
All elements of $p$ and $q$ should be between $0$ and $2^k - 1$, inclusive; $p_i \oplus q_i$ should be equal to $a_i$ for all $i$ such that $1 \leq i \leq 2^k$. If there are several possible solutions, you can print any.
Examples
Input
2
0 1 2 3
Output
Shi
2 0 1 3
2 1 3 0
Input
2
0 0 0 0
Output
Shi
0 1 2 3
0 1 2 3
Input
2
0 1 2 2
Output
Fou
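A quick observation consistent with the samples (my note, not part of the statement): the XOR of all elements of a permutation of $0, 1, \ldots, 2^k-1$ is $0$ for $k \ge 2$, so a necessary condition for "Shi" is that the $a_i$ XOR to zero; the third sample fails exactly this test. A hedged sketch of that check (a full solution still needs to construct $p$ and $q$):

```java
public class XorCheck {
    // Necessary (not claimed sufficient) condition for an answer to exist:
    // p and q are each permutations of 0..2^k-1, whose total XOR is 0 when
    // k >= 2, so XOR of all a_i = (XOR of p) ^ (XOR of q) must be 0.
    static boolean maybeSolvable(int[] a) {
        int x = 0;
        for (int v : a) x ^= v;
        return x == 0;
    }

    public static void main(String[] args) {
        System.out.println(maybeSolvable(new int[] {0, 1, 2, 3})); // true
        System.out.println(maybeSolvable(new int[] {0, 1, 2, 2})); // false, so "Fou"
    }
}
```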
|
{}
|
### How to create your own simple 3D render engine in pure Java
3D render engines of the kind used nowadays in games and multimedia production are breathtaking in the complexity of the mathematics and programming involved. The results they produce are correspondingly stunning.
Many developers may think that building even the simplest 3D application from scratch requires inhuman knowledge and effort, but thankfully that isn't always the case. Here I'd like to share with you how you can build your very own 3D render engine, fully capable of producing nice-looking 3D images.
Why would you want to build a 3D engine? At the very least, it will really help you understand how real modern engines do their black magic. It is also sometimes useful to add 3D rendering capabilities to your application without pulling in huge external dependencies. In the case of Java, that means you can build a 3D viewer app with zero dependencies (apart from the Java APIs) that will run almost anywhere - and fit into 50kb!
Of course, if you want to build big 3D applications with fluid graphics, you'll be much better off using OpenGL/WebGL. Still, once you have a basic understanding of 3D engine internals, more complex engines will seem much more approachable.
In this post, I will be covering basic 3d rendering with orthographic projection, simple triangle rasterization, z-buffering and flat shading. I will not be focusing on heavy performance optimizations and more complex topics like textures or different lighting setups - if you need that, consider using better suited tools for that, like OpenGL (there are lots of libraries that allow you to work with OpenGL even from Java).
Code examples will be in Java, but the ideas explained here can be applied to any language of your choice. For your convenience, I will be following along with small interactive JavaScript demos right here in the post.
Enough talk - let's begin!
#### GUI wrapper
First of all, we want to put at least something on screen. For that I will use a very simple application with our rendered image and two sliders to adjust the rotation.
import javax.swing.*;
import java.awt.*;
public class DemoViewer {
public static void main(String[] args) {
JFrame frame = new JFrame();
Container pane = frame.getContentPane();
pane.setLayout(new BorderLayout());
// slider to control horizontal rotation
JSlider headingSlider = new JSlider(0, 360, 180);
pane.add(headingSlider, BorderLayout.SOUTH);
// slider to control vertical rotation
JSlider pitchSlider = new JSlider(SwingConstants.VERTICAL, -90, 90, 0);
pane.add(pitchSlider, BorderLayout.EAST);
// panel to display render results
JPanel renderPanel = new JPanel() {
public void paintComponent(Graphics g) {
Graphics2D g2 = (Graphics2D) g;
g2.setColor(Color.BLACK);
g2.fillRect(0, 0, getWidth(), getHeight());
// rendering magic will happen here
}
};
pane.add(renderPanel, BorderLayout.CENTER);
frame.setSize(400, 400);
frame.setVisible(true);
}
}
The resulting window should resemble this:
Now let's add some essential model classes - vertices and triangles. A Vertex is simply a structure storing our three coordinates (X, Y and Z), and a Triangle binds together three vertices and stores its color.
class Vertex {
double x;
double y;
double z;
Vertex(double x, double y, double z) {
this.x = x;
this.y = y;
this.z = z;
}
}
class Triangle {
Vertex v1;
Vertex v2;
Vertex v3;
Color color;
Triangle(Vertex v1, Vertex v2, Vertex v3, Color color) {
this.v1 = v1;
this.v2 = v2;
this.v3 = v3;
this.color = color;
}
}
For this post, I'll assume that X coordinate means movement in left-right direction, Y means movement up-down on screen, and Z will be depth (so Z axis is perpendicular to your screen). Positive Z will mean "towards the observer".
As our example object, I selected a tetrahedron, as it's the easiest 3D shape I could think of - only 4 triangles are needed to describe it. Here's the visualization:
The code is very simple - we just create 4 triangles and add them to a list:
List<Triangle> tris = new ArrayList<>();
tris.add(new Triangle(new Vertex(100, 100, 100),
new Vertex(-100, -100, 100),
new Vertex(-100, 100, -100),
Color.WHITE));
tris.add(new Triangle(new Vertex(100, 100, 100),
new Vertex(-100, -100, 100),
new Vertex(100, -100, -100),
Color.RED));
tris.add(new Triangle(new Vertex(-100, 100, -100),
new Vertex(100, -100, -100),
new Vertex(100, 100, 100),
Color.GREEN));
tris.add(new Triangle(new Vertex(-100, 100, -100),
new Vertex(100, -100, -100),
new Vertex(-100, -100, 100),
Color.BLUE));
Resulting shape is centered at origin (0, 0, 0), which is quite convenient since we will be doing rotation around that point later.
Now let's put that on screen. For now, we'll ignore the rotation and will just show the wireframe. Since we are using orthographic projection, it's quite simple - just discard the Z coordinate and draw the resulting triangles.
g2.translate(getWidth() / 2, getHeight() / 2);
g2.setColor(Color.WHITE);
for (Triangle t : tris) {
Path2D path = new Path2D.Double();
path.moveTo(t.v1.x, t.v1.y);
path.lineTo(t.v2.x, t.v2.y);
path.lineTo(t.v3.x, t.v3.y);
path.closePath();
g2.draw(path);
}
Note how I applied a translation before drawing all the triangles. That is done to put the origin (0, 0, 0) at the center of our drawing area - initially, the 2D origin is located in the top left corner of the screen. The result should look like this:
You may not believe it yet, but that's our tetrahedron in orthographic projection, I promise!
Now we need to add rotation. To do that, I'll need to digress a little and talk about using matrices to achieve transformations on 3D points.
There are many possible ways to manipulate 3D points, but the most flexible is to use matrix multiplication. The idea is to represent your points as 1x3 row vectors; a transformation is then simply multiplication by a 3x3 matrix.
You take your input vector A:
$$A = \begin{bmatrix} a_x & a_y & a_z \end{bmatrix}$$
and multiply it with transformation matrix T to get output vector B:
$$AT = \begin{bmatrix} a_x & a_y & a_z \end{bmatrix} \begin{bmatrix} t_{xx} & t_{xy} & t_{xz} \\ t_{yx} & t_{yy} & t_{yz} \\ t_{zx} & t_{zy} & t_{zz} \end{bmatrix} = \begin{bmatrix} a_x t_{xx} + a_y t_{yx} + a_z t_{zx} & a_x t_{xy} + a_y t_{yy} + a_z t_{zy} & a_x t_{xz} + a_y t_{yz} + a_z t_{zz} \end{bmatrix} = \begin{bmatrix} b_x & b_y & b_z \end{bmatrix}$$
For example, here's how you would scale a point by 2:
$$\begin{bmatrix} 1 & 2 & 3 \end{bmatrix} \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix} = \begin{bmatrix} 1 \times 2 & 2 \times 2 & 3 \times 2 \end{bmatrix} = \begin{bmatrix} 2 & 4 & 6 \end{bmatrix}$$
You can't describe all possible transformations using 3x3 matrices - for example, translation is off-limits. You can achieve it with 4x4 matrices, effectively doing skew in 4D space, but that is beyond the scope of this tutorial.
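To make the 4x4 idea concrete anyway, here is a hedged sketch of a homogeneous translation. The class and method names are my own, and the row-vector layout is an assumption kept consistent with the convention used in this post; this is not part of the engine we are building.

```java
public class Matrix4Demo {
    // Build a translation matrix for the row-vector convention:
    // the offsets live in the last row of the 4x4 row-major matrix.
    static double[] translation(double tx, double ty, double tz) {
        return new double[] {
            1, 0, 0, 0,
            0, 1, 0, 0,
            0, 0, 1, 0,
            tx, ty, tz, 1
        };
    }

    // Multiply the homogeneous row vector (x, y, z, 1) by the matrix;
    // points carry w = 1, which is what picks up the last row.
    static double[] transform(double[] m, double x, double y, double z) {
        double[] out = new double[3];
        for (int col = 0; col < 3; col++)
            out[col] = x * m[col] + y * m[4 + col] + z * m[8 + col] + m[12 + col];
        return out;
    }

    public static void main(String[] args) {
        double[] moved = transform(translation(10, 20, 30), 1, 2, 3);
        System.out.println(moved[0] + " " + moved[1] + " " + moved[2]); // 11.0 22.0 33.0
    }
}
```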
The most useful transformations for this tutorial are scaling and rotation.
Any rotation in 3D space can be expressed as combination of 3 primitive rotations: rotation in XY plane, rotation in YZ plane and rotation in XZ plane. We can write out transformation matrices for each of those rotations as follows:
XY rotation matrix:
$$\begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
YZ rotation matrix:
$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & \sin\theta \\ 0 & -\sin\theta & \cos\theta \end{bmatrix}$$
XZ rotation matrix:
$$\begin{bmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{bmatrix}$$
Here comes the magic: if you need to first rotate a point in the XY plane using transformation matrix $T_1$, and then rotate it in the YZ plane using transformation matrix $T_2$, you can simply multiply $T_1$ by $T_2$ and get a single matrix describing the whole rotation:
$$(AT_1)T_2 = A(T_1T_2)$$
This is a very useful optimization - instead of recomputing multiple rotations on each point, you precompute the matrix once and then use it in your pipeline.
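Here is a small self-contained check of that identity, using raw row-major double[9] arrays (the helper names are mine; the multiplication logic mirrors the Matrix3 class we build next):

```java
public class RotationCombineDemo {
    // 3x3 row-major matrix product m1 * m2
    static double[] multiply(double[] m1, double[] m2) {
        double[] r = new double[9];
        for (int row = 0; row < 3; row++)
            for (int col = 0; col < 3; col++)
                for (int i = 0; i < 3; i++)
                    r[row * 3 + col] += m1[row * 3 + i] * m2[i * 3 + col];
        return r;
    }

    // row vector v times matrix m, matching the A*T convention above
    static double[] transform(double[] m, double[] v) {
        return new double[] {
            v[0] * m[0] + v[1] * m[3] + v[2] * m[6],
            v[0] * m[1] + v[1] * m[4] + v[2] * m[7],
            v[0] * m[2] + v[1] * m[5] + v[2] * m[8]
        };
    }

    public static void main(String[] args) {
        double a = Math.toRadians(30), b = Math.toRadians(45);
        double[] xy = { Math.cos(a), -Math.sin(a), 0,
                        Math.sin(a),  Math.cos(a), 0,
                        0, 0, 1 };
        double[] yz = { 1, 0, 0,
                        0, Math.cos(b), Math.sin(b),
                        0, -Math.sin(b), Math.cos(b) };
        double[] p = { 1, 2, 3 };
        // (A * T1) * T2 versus A * (T1 * T2)
        double[] stepwise = transform(yz, transform(xy, p));
        double[] combined = transform(multiply(xy, yz), p);
        for (int i = 0; i < 3; i++)
            System.out.println(Math.abs(stepwise[i] - combined[i]) < 1e-9); // true
    }
}
```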
Enough of the scary math stuff, let's get back to code. We will create utility class Matrix3 that will handle matrix-matrix and vector-matrix multiplication:
class Matrix3 {
double[] values;
Matrix3(double[] values) {
this.values = values;
}
Matrix3 multiply(Matrix3 other) {
double[] result = new double[9];
for (int row = 0; row < 3; row++) {
for (int col = 0; col < 3; col++) {
for (int i = 0; i < 3; i++) {
result[row * 3 + col] +=
this.values[row * 3 + i] * other.values[i * 3 + col];
}
}
}
return new Matrix3(result);
}
Vertex transform(Vertex in) {
return new Vertex(
in.x * values[0] + in.y * values[3] + in.z * values[6],
in.x * values[1] + in.y * values[4] + in.z * values[7],
in.x * values[2] + in.y * values[5] + in.z * values[8]
);
}
}
Now we can bring our rotation sliders to life. The horizontal slider will control "heading" - in our case, rotation in the XZ plane (left-right), and the vertical slider will control "pitch" - rotation in the YZ plane (up-down).
Let's create our rotation matrix and add it into our pipeline:
double heading = Math.toRadians(headingSlider.getValue());
Matrix3 transform = new Matrix3(new double[] {
Math.cos(heading), 0, -Math.sin(heading),
0, 1, 0,
Math.sin(heading), 0, Math.cos(heading)
});
g2.translate(getWidth() / 2, getHeight() / 2);
g2.setColor(Color.WHITE);
for (Triangle t : tris) {
Vertex v1 = transform.transform(t.v1);
Vertex v2 = transform.transform(t.v2);
Vertex v3 = transform.transform(t.v3);
Path2D path = new Path2D.Double();
path.moveTo(v1.x, v1.y);
path.lineTo(v2.x, v2.y);
path.lineTo(v3.x, v3.y);
path.closePath();
g2.draw(path);
}
You'll also need to add listeners on the heading and pitch sliders to force a redraw when you drag the handles:
headingSlider.addChangeListener(e -> renderPanel.repaint());
pitchSlider.addChangeListener(e -> renderPanel.repaint());
Here's what you should get working (this example is interactive - try dragging the handles!):
As you may have noticed, up-down rotation doesn't work yet. Let's add the next transform:
double heading = Math.toRadians(headingSlider.getValue());
double pitch = Math.toRadians(pitchSlider.getValue());
Matrix3 headingTransform = new Matrix3(new double[] {
Math.cos(heading), 0, -Math.sin(heading),
0, 1, 0,
Math.sin(heading), 0, Math.cos(heading)
});
Matrix3 pitchTransform = new Matrix3(new double[] {
1, 0, 0,
0, Math.cos(pitch), Math.sin(pitch),
0, -Math.sin(pitch), Math.cos(pitch)
});
Matrix3 transform = headingTransform.multiply(pitchTransform);
Observe that both rotations now work and combine together nicely:
Up to this point, we were only drawing the wireframe of our shape. Now we need to start filling those triangles with some substance. To do this, we first need to "rasterize" the triangle - convert it to a list of pixels on screen that it occupies.
I'll use a relatively simple, but inefficient method - rasterization via barycentric coordinates. Real 3D engines use hardware rasterization, which is very fast and efficient, but we can't use the graphics card here, so we will be doing it manually in our code.
The idea is to compute barycentric coordinates for each pixel that could possibly lie inside the triangle and discard those that fall outside. The following snippet implements the algorithm. Note how we start using direct access to image pixels.
BufferedImage img =
new BufferedImage(getWidth(), getHeight(), BufferedImage.TYPE_INT_ARGB);
for (Triangle t : tris) {
Vertex v1 = transform.transform(t.v1);
Vertex v2 = transform.transform(t.v2);
Vertex v3 = transform.transform(t.v3);
// since we are not using Graphics2D anymore,
// we have to do translation manually
v1.x += getWidth() / 2;
v1.y += getHeight() / 2;
v2.x += getWidth() / 2;
v2.y += getHeight() / 2;
v3.x += getWidth() / 2;
v3.y += getHeight() / 2;
// compute rectangular bounds for triangle
int minX = (int) Math.max(0, Math.ceil(Math.min(v1.x, Math.min(v2.x, v3.x))));
int maxX = (int) Math.min(img.getWidth() - 1,
Math.floor(Math.max(v1.x, Math.max(v2.x, v3.x))));
int minY = (int) Math.max(0, Math.ceil(Math.min(v1.y, Math.min(v2.y, v3.y))));
int maxY = (int) Math.min(img.getHeight() - 1,
Math.floor(Math.max(v1.y, Math.max(v2.y, v3.y))));
double triangleArea =
(v1.y - v3.y) * (v2.x - v3.x) + (v2.y - v3.y) * (v3.x - v1.x);
for (int y = minY; y <= maxY; y++) {
for (int x = minX; x <= maxX; x++) {
double b1 =
((y - v3.y) * (v2.x - v3.x) + (v2.y - v3.y) * (v3.x - x)) / triangleArea;
double b2 =
((y - v1.y) * (v3.x - v1.x) + (v3.y - v1.y) * (v1.x - x)) / triangleArea;
double b3 =
((y - v2.y) * (v1.x - v2.x) + (v1.y - v2.y) * (v2.x - x)) / triangleArea;
if (b1 >= 0 && b1 <= 1 && b2 >= 0 && b2 <= 1 && b3 >= 0 && b3 <= 1) {
img.setRGB(x, y, t.color.getRGB());
}
}
}
}
g2.drawImage(img, 0, 0, null);
Quite a lot of code, but now we have a colored tetrahedron on our displays:
If you play around with the demo, you'll notice that not all is well - for example, the blue triangle is always above the others. It happens because we are currently painting the triangles one after another, and the blue triangle is painted last - thus it is drawn over all the others.
To fix this I will introduce the concept of a z-buffer (or depth buffer). The idea is to build an intermediate array during rasterization that stores the depth of the last seen element at any given pixel. When rasterizing triangles, we will compare each pixel's depth with the stored value, and only color the pixel if it is closer to the viewer (has a greater Z, since positive Z points towards the observer).
double[] zBuffer = new double[img.getWidth() * img.getHeight()];
// initialize array with extremely far away depths
for (int q = 0; q < zBuffer.length; q++) {
zBuffer[q] = Double.NEGATIVE_INFINITY;
}
for (Triangle t : tris) {
// handle rasterization...
// for each rasterized pixel:
double depth = b1 * v1.z + b2 * v2.z + b3 * v3.z;
int zIndex = y * img.getWidth() + x;
if (zBuffer[zIndex] < depth) {
img.setRGB(x, y, t.color.getRGB());
zBuffer[zIndex] = depth;
}
}
Now you can see that our tetrahedron actually has one white side:
We now have a functioning rendering pipeline!
But we are not finished here. In real life, the perceived color of a surface varies with light source positions - if only a small amount of light is incident on the surface, we perceive that surface as being darker.
In computer graphics, we can achieve similar effect by using so-called "shading" - altering the color of the surface based on its angle and distance to lights.
The simplest form of shading is flat shading. It takes into account only the angle between the surface normal and the direction of the light source. You just need to find the cosine of the angle between those two vectors and multiply the color by the resulting value. Such an approach is very simple and cheap, so it is often used for high-speed rendering when more advanced shading technologies are too computationally expensive.
First, we need to compute normal vector for our triangle. If we have triangle ABC, we can compute its normal vector by calculating cross product of vectors AB and AC and then dividing resulting vector by its length.
Cross product is a binary operation on two vectors that is defined in 3d space as follows:
$$u \times v = \begin{bmatrix} u_x & u_y & u_z \end{bmatrix} \times \begin{bmatrix} v_x & v_y & v_z \end{bmatrix} = \begin{bmatrix} u_y \times v_z - u_z \times v_y & u_z \times v_x - u_x \times v_z & u_x \times v_y - u_y \times v_x \end{bmatrix}$$
Here's the visual explanation of what cross product does:
for (Triangle t : tris) {
// transform vertices (v1, v2, v3) as before, then take the
// cross product of edge vectors AB and AC
Vertex ab = new Vertex(v2.x - v1.x, v2.y - v1.y, v2.z - v1.z);
Vertex ac = new Vertex(v3.x - v1.x, v3.y - v1.y, v3.z - v1.z);
Vertex norm = new Vertex(
ab.y * ac.z - ab.z * ac.y,
ab.z * ac.x - ab.x * ac.z,
ab.x * ac.y - ab.y * ac.x
);
double normalLength =
Math.sqrt(norm.x * norm.x + norm.y * norm.y + norm.z * norm.z);
norm.x /= normalLength;
norm.y /= normalLength;
norm.z /= normalLength;
}
Now we need to calculate cosine between triangle normal and light direction. For simplicity, we will assume that our light is positioned directly behind the camera at some infinite distance (such configuration is called "directional light") - so our light source direction will be $\begin{bmatrix} 0 & 0 & 1 \end{bmatrix}$.
Cosine of angle between vectors can be calculated using this formula:
$$\cos\theta = \frac{A \cdot B}{||A|| \times ||B||}$$
where $||A||$ is length of a vector, and $A \cdot B$ is dot product of vectors:
$$A \cdot B = \begin{bmatrix} a_x & a_y & a_z \end{bmatrix} \cdot \begin{bmatrix} b_x & b_y & b_z \end{bmatrix} = a_x \times b_x + a_y \times b_y + a_z \times b_z$$
Notice that the length of our light direction vector ($\begin{bmatrix} 0 & 0 & 1 \end{bmatrix}$) is 1, as is the length of the triangle normal (we have already normalized it). Thus the formula simply becomes:
$$\cos\theta = A \cdot B = \begin{bmatrix} a_x & a_y & a_z \end{bmatrix} \cdot \begin{bmatrix} b_x & b_y & b_z \end{bmatrix}$$
Also observe that only Z component of light direction vector is non-zero, so we can simplify further:
$$\cos\theta = A \cdot B = \begin{bmatrix} a_x & a_y & a_z \end{bmatrix} \cdot \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} = a_z$$
The code is now trivial:
double angleCos = Math.abs(norm.z);
We drop the sign from the result because for our simple purposes we don't care which side of the triangle is facing the camera. In a real application, you would need to keep track of that and apply shading accordingly.
Now that we have our shade coefficient, we can apply it to triangle color. Naive version may look as follows:
public static Color getShade(Color color, double shade) {
int red = (int) (color.getRed() * shade);
int green = (int) (color.getGreen() * shade);
int blue = (int) (color.getBlue() * shade);
return new Color(red, green, blue);
}
While this will give us some shading effect, it will have a much quicker falloff than we need. That happens because Java uses the sRGB color space, which is already scaled to match our logarithmic color perception.
So we need to convert each color from scaled to linear format, apply shade, and then convert back to scaled format. Real conversion from sRGB to linear RGB is quite involved, so I won't implement the full spec here - just the basic approximation.
public static Color getShade(Color color, double shade) {
double redLinear = Math.pow(color.getRed(), 2.4) * shade;
double greenLinear = Math.pow(color.getGreen(), 2.4) * shade;
double blueLinear = Math.pow(color.getBlue(), 2.4) * shade;
int red = (int) Math.pow(redLinear, 1/2.4);
int green = (int) Math.pow(greenLinear, 1/2.4);
int blue = (int) Math.pow(blueLinear, 1/2.4);
return new Color(red, green, blue);
}
Observe how our tetrahedron comes to life:
Now we have a working 3d render engine, with colors, lighting and shading, and it took us about 200 lines of code - not bad!
Here's one bonus for you - we can quickly create a sphere approximation from this tetrahedron. It can be done by repeatedly subdividing each triangle into four smaller ones and "inflating":
public static List<Triangle> inflate(List<Triangle> tris) {
List<Triangle> result = new ArrayList<>();
for (Triangle t : tris) {
Vertex m1 =
new Vertex((t.v1.x + t.v2.x)/2, (t.v1.y + t.v2.y)/2, (t.v1.z + t.v2.z)/2);
Vertex m2 =
new Vertex((t.v2.x + t.v3.x)/2, (t.v2.y + t.v3.y)/2, (t.v2.z + t.v3.z)/2);
Vertex m3 =
new Vertex((t.v1.x + t.v3.x)/2, (t.v1.y + t.v3.y)/2, (t.v1.z + t.v3.z)/2);
result.add(new Triangle(t.v1, m1, m3, t.color));
result.add(new Triangle(t.v2, m1, m2, t.color));
result.add(new Triangle(t.v3, m2, m3, t.color));
result.add(new Triangle(m1, m2, m3, t.color));
}
for (Triangle t : result) {
for (Vertex v : new Vertex[] { t.v1, t.v2, t.v3 }) {
double l = Math.sqrt(v.x * v.x + v.y * v.y + v.z * v.z) / Math.sqrt(30000);
v.x /= l;
v.y /= l;
v.z /= l;
}
}
return result;
}
Here's what you will see:
You can find full source code for this app here. It's only 220 lines and has no dependencies - you can just compile and start it!
I will finish this article by recommending one awesome book: 3D Math Primer for Graphics and Game Development. It explains all the details of rendering pipelines and math involved - definitely a worthy read if you are interested in rendering engines.
1. Thanks for a huge article!!!
Maybe a bit off-topic, but still: if I put this sample on an Android device, will it outperform a similar app that uses OpenGL? Basically, will it benefit from using the video chip on board? Thanks again
1. You're welcome!
No, OpenGL will be much faster - it uses a lot of clever optimizations and also benefits from the graphics card. This sample is purely software, so it is at a disadvantage.
2. Could you have skipped the manual flat-shading by using g2.fill(path)?
1. Only until z-buffer came into play - after that, it will be impossible to determine z-coordinate from g2.fill.
2. Is there any clever way to reorganize the shapes in the render array, so they come out in order? Could I sort polygons by their maximum Z value?
3. I don't think so. Consider the case of two intersecting triangles - in some pixels triangle 1 will be above, in other triangle 2 will be above. So there will be no strictly defined order.
4. Assuming no two polygons intersect, would it work?
5. Again, not for all cases - imagine configuration with 3 shapes (A,B,C), where A partially overlays B and is partially overlaid by C, B in turn overlays C, and lastly C is partially obscured by B and is above A. Again, no strict ordering.
(something like that famous Escher's work: http://files.harrowakker.webnode.nl/200000058-28fec29f90/EscherOmhoogOmlaag.jpg)
6. Oh, ok. Thought I could get away with using g2.fill and clever ordering. Welp, time to rewrite my Renderable interface
3. Could I implement this using polygons that take any number of inputs, as opposed to just triangles?
1. Yes. You just need to create rasterization method for your polygons. But as far as I know, this will involve splitting polygon into several triangles and then rasterizing those - so you're back at square one. That's the basic reasoning behind the fact that video cards only work with triangles - all polygons can be viewed as a group of adjacent triangles, so it's much simpler to unify all interfaces and view the whole world as lots of triangles.
4. can you provide a download link for the full project? some of this is not very clear.
1. Here it is: https://gist.github.com/Rogach/f3dfd457d7ddb5fcfd99
5. Why do you need to use barycentric coordinates when determining if a pixel lies inside the triangle's area? Isn't it possible to just use the pixel coordinates and paint the triangle accordingly?
1. It is the simplest method, easier to understand and implement - so I decided to use it in this tutorial. There are several others, but they require vertex sorting and complex logic with many corner cases. Here's the overview: http://www.sunshine2k.de/coding/java/TriangleRasterization/TriangleRasterization.html
2. Is it possible to use the barycentric coordinate system in this tutorial to get texture coordinates on an image?
3. Yes, I suppose. You will need to assign texture coordinates to vertices, and then interpolate using barycentric coordinates to get texture coordinates inside the triangle.
4. I have been able to understand and create my own 3D rendering engine using the great tutorial you have provided and a lot of other documents that explain all of the mathematics beind it. With all of this said, I still have a few questions. My major one right now is if it is possible to use the zbuffer with only two baricentric coordinates. I understand the use of 3, but if you use the statement:
if (b1 >= 0 && b2 >= 0 && b1 + b2 <= 1) {
.....
}
you can slightly speed up the performance of the engine but I have found that only calculating and using 2 baricentric coordinates doesn't work when applied to the zbuffer. Any insight on a possible solution would be very helpful.
5. Unless you check the third coordinate as well, you may get points outside triangle area (b3 may be negative,for example). If you want to improve the performance, it would be much better to remove barycentric computations completely and use better rasterization algorithms.
6. I figured that it is possible to substitute the third baricentric coordinate by subtracting the sum of the first and second baricentric coordinate from 1 (Not too long after I posted the question actually...). This way you can successfully calculate the correct distance for the zbuffer. It is even possible to use the baricentric coordinates directly as texture coordinates as well.
Now that you mention it, do you know any better algorithms for rasterizing triangles I might be able to look into?
7. Yes, you can look at "standard" algorithm or Bresenham algorithm, described here: http://www.sunshine2k.de/coding/java/TriangleRasterization/TriangleRasterization.html
8. How would you texture an object using this algorithm?
9. That's harder. If you need to go that way, you will probably still need some form of barycentric coordinates. Here's a good explanation, with optimized rasterization: http://www.scratchapixel.com/lessons/3d-basic-rendering/rasterization-practical-implementation/perspective-correct-interpolation-vertex-attributes?url=3d-basic-rendering/rasterization-practical-implementation/perspective-correct-interpolation-vertex-attributes
10. Do you know any good sources for learning how to use OpenCL or LWJGL? I am curious to see how fast my modified 3D rendering program would run using the GPU to render the objects.
11. No, never tried going that route.
6. Is there a way to implement a camera position into this program or is it purely a fixed view system?
1. Of course. In fact, rotation examples in the article do exactly that - you can think of rotating object in front of fixed camera as of rotating camera around a fixed object.
As far as I know, in real 3D-engines camera positions are also implemented this way - camera is always positioned at (0,0,0) and rendered scene is transformed into that "camera space".
2. Instead of working in pixel coordinates, you could use homogenous coordinates (x and y axis is between -1 and 1). From there, you can use a projection, view and model matrix to control the vertex positions on the screen.
7. How do you add an XY rotation?
1. The first rotation matrix in the article achieves just that. Or maybe you are looking for something else?
2. I mean, I can only rotate it in 2 ways. How can I rotate it in the 3rd way?
3. Current examples only show heading and pitch transformations. You need to append roll transformation - I've done a quick tweak of the code for you: http://pastebin.com/7r222Z6r (lines 68-74 are relevant).
8. I work with BlueJ and when I try to compile the triangle-class (from the 2nd code example) it says, that it cannot find the class Color. Could you help me out with this?
1. That's an easy fix - you probably placed that into a separate file, so it can't find the required imports. Add "import java.awt.*" at the top of the file.
You can also look at the full code here: https://gist.github.com/Rogach/f3dfd457d7ddb5fcfd99
2. Thank you :)
9. I have more or less created my own 3D engine in Java and I'm using scan line rasterisation and refreshing at 60Hz but the problem I am encountering is when painting with the graphics object it cannot paint enough between frames and gives me a semi complete surface with artifacting near the bottom. And when I try drawing the surface on a bufferedimage and render that with a graphics object I get a refresh rate of 60Hz. Any advice on what I should do? Change the rasterisation method etc. Thank you.
1. Edit: I have also overridden the paint method to try and reduce latency without much success.
2. You are looking for double-buffering. Just call .setDoubleBuffered(true) on your top-level component.
Essentially, it is almost the same as your solution with buffered image - all drawing commands are first output to temporary image, and only after the drawing is complete that image is drawn on actual screen.
10. Hi,
I noticed that the program gave error at line 128 & 129 "->"
DemoViewer.java:128: error: illegal start of expression
DemoViewer.java:129: error: illegal start of expression
1. Hi! Which java version are you using? Seems it fails on lambda expressions, which were introduced in Java 8.
For older Java versions, you can rewrite those lines as follows: headingSlider.addChangeListener(new ChangeListener() { @Override public void stateChanged(ChangeEvent e) { renderPanel.repaint(); } });
11. Hello Rogach - I am really impressed with how simple this demo is. However, it only shows an affine projection. How difficult would it be to use a full 4x4 matrix for perspective projection? I am trying to build a simple cube viewer where I can control the FoV, but there is not much point unless it is fully perspective. Can you help?
1. Hi! You probably don't need a 4x4 matrix for perspective projection - you can just divide by the Z coordinate (but be careful with negative Z values).
But camera control will feel weird in that case, since in the current implementation the camera is fixed at (0,0,0) and there is no way to handle translations in a 3x3 matrix. Expanding to a 4x4 matrix should not be hard - just add a W coordinate to Vertex, replace the Matrix3 class with Matrix4 (with appropriate changes), add a [0,0,0,1] row and column to the heading, roll and pitch transforms, and add a pan transform somewhere.
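For reference, the "divide by Z" idea can be sketched like this. This is a standalone illustration, not code from the article: the `FOCAL` constant and the `project` helper are my own assumptions, with the camera at the origin looking down +z.

```java
// Perspective-divide sketch. The camera is assumed at the origin looking
// down +z; FOCAL is a made-up focal length (larger = narrower view).
public final class Projector {
    static final double FOCAL = 256.0;

    // Projects a camera-space point to screen space by dividing by z.
    static double[] project(double x, double y, double z) {
        // Clamp z to a small positive epsilon so points at or behind
        // the camera do not flip sides or blow up the projection.
        double zc = Math.max(z, 1e-6);
        return new double[] { x * FOCAL / zc, y * FOCAL / zc };
    }

    public static void main(String[] args) {
        double[] near = project(1, 1, 2); // closer -> larger on screen
        double[] far  = project(1, 1, 8); // farther -> smaller
        System.out.println(near[0] + " " + far[0]); // prints "128.0 32.0"
    }
}
```

The clamp is one simple way to sidestep the negative-Z problem mentioned above; a real renderer would clip such triangles instead.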
2. Here is some code that is able to convert the original affine screen coordinates to perspective projection coordinates. Don't worry about the extra array lists, those are just for my own organization purposes.
double r = Math.pow(objects.get(o).faces.get(i).v.get(ii).zDisplay, 2) + Math.pow(objects.get(o).faces.get(i).v.get(ii).x, 2) + Math.pow(objects.get(o).faces.get(i).v.get(ii).y, 2);
r = Math.sqrt(r);
r = ((r * Math.PI) / (360.0d / FOVslider.getValue()));
r = (r / frame.getHeight());
objects.get(o).faces.get(i).v.get(ii).xDisplay = objects.get(o).faces.get(i).v.get(ii).xDisplay / r;
objects.get(o).faces.get(i).v.get(ii).yDisplay = objects.get(o).faces.get(i).v.get(ii).yDisplay / r;
I use xdisplay and ydisplay as separate values for displaying each vertex on the screen so that I can modify them without worrying about accidentally tampering with other variable values.
3. This looks more like fish-eye projection, not perspective projection.
For example, consider several objects with equal Z coordinate. Under this projection, object close to the center will get one value of R, but for object far away from the center (but still at the same Z) R will be greater (2x, for example). Thus objects away from the center will be smaller (since you divide by R).
4. Yes, it does fish-eye the image, but technically it is mathematically correct perspective projection. For it to look like proper perspective projections in computer graphics, all you have to do is divide by the Z value, not the radial distance to the camera.
5. Hi! Saw your comment and was wondering if you were able to do this with a positionable camera
12. Ok - I'll give it a go. I am pretty new to this stuff. What I like about your implementation is that it is almost entirely raw Java - you are not using the Java3D API, which already has its own camera class and so on. The way you have done it means you need to understand every aspect to get it to work. If you already have an example with a 4x4 matrix, that would be useful...
1. Wow - that was surprisingly easy. However, I do not really have a perspective view (just distorted isometric). I still need to do some maths on the w value (i.e. scale z or w?). Any ideas? I changed your tetrahedron to a cube. Code is here: http://wyeldsoft.com/temp/DemoViewerPersp.java
2. I don't think you can achieve perspective projection using only a matrix - basically, you need to divide X and Y by the Z coordinate, and that's not possible via matrix multiplication on the vector. For example, OpenGL's perspective projection matrix is only needed for clipping - the actual perspective division happens manually after all the matrices are applied.
I took your code and added the necessary tweaks for it to work with a perspective transform. The actual magic happens in lines 132-133 (fov angle to scaling computation) and lines 169-174 (division by Z).
3. You'll probably want to rewrite the GUI to see the effects better - you now need 6 sliders: 3 for camera XYZ position and 3 for camera rotation.
4. Any ideas how to adjust the distance from the nominal camera position while adjusting the FoV? What I am trying to do is create a slider which adjusts the FoV between 0 and 180 degrees (which is done). The problem, of course, is that as it approaches 180 degrees the cube is a long way from the camera, and as it approaches 0 degrees it is too close. If there were a way to maintain relative size or proportion during the transform, you could see the cube go from obtuse to acute perspective - a bit like going from a wide-angle lens to a telephoto lens while the object in view remains roughly the same size.
Here is the section of code I am working with, from Rogach:
double fov = (1.0/Math.tan(fovAngle))*180;
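One common answer to the question above is a "dolly zoom": move the camera so its distance scales with 1/tan(fov/2), which keeps an object of a given size at a roughly constant apparent size as the FoV changes. This is a hedged sketch of that idea, not code from the article; `distanceFor` and `worldSize` are my own names.

```java
// Dolly-zoom sketch: keeps an object of a given world size at a roughly
// constant apparent size while the field of view changes.
public final class DollyZoom {
    // Distance the camera must sit from the object so that an object
    // `worldSize` units across spans a fixed fraction of the view.
    static double distanceFor(double fovDeg, double worldSize) {
        double halfFov = Math.toRadians(fovDeg) / 2.0;
        return (worldSize / 2.0) / Math.tan(halfFov);
    }

    public static void main(String[] args) {
        // Wide lens -> camera moves close; narrow lens -> camera backs off.
        System.out.println(distanceFor(120, 2.0) < distanceFor(30, 2.0)); // prints "true"
    }
}
```

Driving the camera's Z translation from this value as the FoV slider moves should give the wide-angle-to-telephoto effect described above.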
13. Thanks for the very interesting article - I was able to get all the examples working!
14. Hello Rogach. Can this approach be used to create a simple 3D view for a pipe bending machine?
What I mean is this: when you bend a pipe you basically have 3 'parts' of bending: straight, curve (or bend) and rotation (of the pipe).
For example, to bend a pipe into a U shape you need:
1. Straight: 500 mm
2. Bend: 90 degrees
3. Straight: 100 mm
4. Bend: 90 degrees
What I would like is to draw a 'pipe' which follows these steps, so that at the end you have a complete U-shaped bent pipe on the screen.
Can you help me with this?
1. You could simply specify a cylinder instead of a tetrahedron. To do this, it would be easier to include a parser for an external *.obj or other 3D model format, instead of writing out all the vertex locations for a cylinder by hand. The cylinder would need enough segments for bending. The bend operations would be performed on the model, which is then simply displayed in Rogach's 3D viewer.
15. A link to the complete source code would be great. Also, it is not very clear where to insert the lists, the doubles, and the other code.
1. Sorry for the late reply, comments were broken on the article. I've included the link to the complete source code at the end of the article, here it is just in case: https://gist.github.com/Rogach/f3dfd457d7ddb5fcfd99/4f2aaf20a468867dc195cdc08a02e5705c2cc95c
16. This comment has been removed by the author.
17. Any tips on importers/translators for object import?
18. How do I move the object along the z coordinate?
19. My program has the world z axis going towards the camera and the x axis going left, instead of the standard z-forward, y-up, x-right. How can I change this?
1. Sorry for the late reply, comments were broken on the article.
You can either preprocess the coordinates before performing the drawing (e.g. simply copy the object and replace X, Z with their negatives), or you can tweak the rendering code itself - but that would be a bit more difficult since the underlying medium (BufferedImage) expects X axis to increase to the right.
20. Any chance you would be able to do this with camera motion as well? Like in a game engine. If you would be willing to do this, that would be immensely helpful for something I'm trying to do.
21. This comment has been removed by the author.
22. I have used this to make a fairly basic (and not very efficient) 3D render engine. How can I cull the "backside" of the triangles, so that only one side of each triangle renders, like in most render engines?
1. We compute a normal vector for each triangle (line 87 in the full code), so you can use the sign of the Z coordinate of this vector to determine whether the triangle faces the camera (e.g. skip drawing if norm.z is negative).
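The culling test above can be sketched as follows. This is a standalone illustration rather than the article's code: only the Z component of the edge cross product is needed for the facing test, and the winding convention is an assumption.

```java
// Back-face culling sketch: the cross product of two triangle edges
// gives the face normal; if its z component points away from the
// camera, the triangle is skipped.
public final class BackfaceCull {
    // Returns true if the triangle (3 vertices in camera space,
    // counter-clockwise winding when front-facing) faces the camera.
    static boolean facesCamera(double[] a, double[] b, double[] c) {
        double abx = b[0] - a[0], aby = b[1] - a[1];
        double acx = c[0] - a[0], acy = c[1] - a[1];
        // Only the z component of the cross product matters here.
        double normZ = abx * acy - aby * acx;
        return normZ > 0; // flip the sign if your winding is clockwise
    }

    public static void main(String[] args) {
        double[] a = {0, 0, 0}, b = {1, 0, 0}, c = {0, 1, 0};
        System.out.println(facesCamera(a, b, c)); // CCW triangle: "true"
    }
}
```

In the render loop you would call this before rasterising each triangle and simply `continue` when it returns false.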
23. Is this strictly a third-person perspective, or can you somehow change the rotation to first person? I can't think how I could make this happen.
1. Yes, you can rotate the camera without changing the position. I described the basic idea in the comment under the source code: https://gist.github.com/Rogach/f3dfd457d7ddb5fcfd99/4f2aaf20a468867dc195cdc08a02e5705c2cc95c#gistcomment-3195590
24. How can I change the camera position?
1. ...and the camera rotation?
2. Here's the code that is responsible for transformation from world space to camera space: https://gist.github.com/Rogach/f3dfd457d7ddb5fcfd99/4f2aaf20a468867dc195cdc08a02e5705c2cc95c#file-demoviewer-java-L52
You'll need to tweak it according to your requirements.
25. This was an awesome summary! 🥰
Parts of it I didn't understand, but by looking up those concepts on YouTube, I eventually got it all.
It has been a dream of mine for 30+ years to actually understand basic 3D rendering at a low level - and that finally happened today 😄
Thanks a bundle! 👍👍🏆
# Browse Dissertations and Theses - Electrical and Computer Engineering by Contributor "Tucker, John R."
• (2012-06-27)
The simultaneous explosion of portable microelectronics devices and the rapid shrinking of microprocessor size have provided a tremendous motivation to scientists and engineers to continue the down-scaling of these devices. ...
• (1997)
Fabrication technology and device sizes have reached the point where fluctuations on the atomic level may affect device performance. The need for a tool to characterize these structures has been satisfied by cross-sectional ...
• (2001)
Long channel transistors using both platinum and erbium silicide are fabricated and their performance compared. Processing issues, including erbium's reactivity with oxide and tendency to creep, and how they affect the ...
• (1991)
Scanning tunneling microscopy (STM) has been used to study the atomic and electronic structures of quasi-one-dimensional charge-density wave (CDW) materials. The two materials chosen for this study, NbSe3 and o-TaS3, ...
• (2005)
Electrical and magnetotransport measurements are carried out at low temperature on various device geometries. Two-dimensional unpatterned delta-doped samples yield ohmic conduction and sharp positive magnetoconductance: a ...
• (1998)
The devices are fabricated with lightly doped silicon substrates, 19-Å to 34-Å gate oxide, ~0.05-µm gate lithography, 100-Å sidewall oxides, self-aligned PtSi, and no intentional doping. The thin gate and sidewall ...
• (1989)
Transport studies examining the dynamics of one-dimensional charge-density wave (CDW) condensates are reported. Results using rf and dc, linear and nonlinear electrical transport techniques have been obtained at temperatures ...
An asynchronous circuit (clockless or self-timed circuit)[1]: Lecture 12 [note 1][2]: 157–186 is a sequential digital logic circuit that does not use a global clock circuit or signal generator to synchronize its components.[1][3]: 3–5 Instead, the components are driven by a handshaking circuit which indicates the completion of a set of instructions. Handshaking works by simple data transfer protocols.[3]: 115 Many synchronous circuits were developed in the early 1950s as part of bigger asynchronous systems (e.g. ORDVAC). Asynchronous circuits and the theory surrounding them are part of several steps in integrated circuit design, a field of digital electronics engineering.
Asynchronous circuits are contrasted with synchronous circuits, in which changes to the signal values in the circuit are triggered by repetitive pulses called a clock signal. Most digital devices today use synchronous circuits. However, asynchronous circuits have the potential to be much faster, with lower power consumption, less electromagnetic interference, and better modularity in large systems. Asynchronous circuits are an active area of research in digital logic design.[4][5]
It was not until the 1990s that the viability of asynchronous circuits was demonstrated by real-life commercial products.[3]: 4
## Overview
All digital logic circuits can be divided into combinational logic, in which the output signals depend only on the current input signals, and sequential logic, in which the output depends both on current input and on past inputs. In other words, sequential logic is combinational logic with memory. Virtually all practical digital devices require sequential logic. Sequential logic can be divided into two types, synchronous logic and asynchronous logic.
### Synchronous circuits
In synchronous logic circuits, an electronic oscillator generates a repetitive series of equally spaced pulses called the clock signal. The clock signal is supplied to all the components of the IC. For example, a flip-flop only changes state when triggered by the edge of the clock pulse, so changes to the logic signals throughout the circuit all begin at the same time and at regular intervals. The output of all the memory elements in a circuit is called the state of the circuit. The state of a synchronous circuit changes only on the clock pulse. The changes in signal require a certain amount of time to propagate through the combinational logic gates of the circuit. This time is called a propagation delay.
As of 2021, the timing of modern synchronous ICs takes significant engineering effort and sophisticated design automation tools.[6] Designers have to ensure that the clock arrives reliably everywhere. With the ever-growing size and complexity of ICs (e.g. ASICs) this is a challenging task.[6] In huge circuits, signals sent over the clock distribution network often arrive at different parts of the circuit at different times.[6] This problem is widely known as "clock skew".[6][7]: xiv
The maximum possible clock rate is capped by the logic path with the longest propagation delay, called the critical path. Because of this, paths that could operate quickly are idle most of the time. A widely distributed clock network dissipates a lot of useful power and must run whether the circuit is receiving inputs or not.[6] Because of this level of complexity, testing and debugging of synchronous circuits can take over half of the development time.[6]
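The critical-path limit amounts to f_max = 1 / t_critical. A trivial numeric illustration (the delay values below are made up, not from any real design):

```java
// The maximum clock rate of a synchronous circuit is capped by its
// longest (critical) combinational path: f_max = 1 / t_critical.
public final class CriticalPath {
    // Returns the maximum clock frequency in MHz for the given path
    // delays in nanoseconds.
    static double maxClockMHz(double[] pathDelaysNs) {
        if (pathDelaysNs.length == 0)
            throw new IllegalArgumentException("no paths given");
        double worst = 0;
        for (double d : pathDelaysNs) worst = Math.max(worst, d);
        return 1000.0 / worst; // a 1 ns critical path allows 1000 MHz
    }

    public static void main(String[] args) {
        // Paths of 1.2, 2.5 and 4.0 ns: the 4.0 ns path caps the clock,
        // even though the faster paths sit idle most of each cycle.
        System.out.println(maxClockMHz(new double[] {1.2, 2.5, 4.0})); // prints "250.0"
    }
}
```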
### Asynchronous circuits
Asynchronous circuits do not need a global clock, and the state of the circuit changes as soon as the inputs change. Local functional blocks may still be employed, but the clock skew problem can be tolerated.[7]: xiv [3]: 4
Since asynchronous circuits do not have to wait for a clock pulse to begin processing inputs, they can operate faster. Their speed is theoretically limited only by the propagation delays of the logic gates and other elements.[7]: xiv
However, asynchronous circuits are more difficult to design and subject to problems not found in synchronous circuits. This is because the resulting state of an asynchronous circuit can be sensitive to the relative arrival times of inputs at gates. If transitions on two inputs arrive at almost the same time, the circuit can go into the wrong state depending on slight differences in the propagation delays of the gates.
This is called a race condition. In synchronous circuits this problem is less severe because race conditions can only occur due to inputs from outside the synchronous system, called asynchronous inputs.
Although some fully asynchronous digital systems have been built (see below), today asynchronous circuits are typically used in a few critical parts of otherwise synchronous systems where speed is at a premium, such as signal processing circuits.
## Theoretical foundation
The original theory of asynchronous circuits was created by David E. Muller in the mid-1950s.[8] This theory was presented later in the well-known book "Switching Theory" by Raymond Miller.[9]
The term "asynchronous logic" is used to describe a variety of design styles, which use different assumptions about circuit properties.[10] These vary from the bundled delay model – which uses "conventional" data processing elements with completion indicated by a locally generated delay model – to delay-insensitive design – where arbitrary delays through circuit elements can be accommodated. The latter style tends to yield circuits which are larger than bundled data implementations, but which are insensitive to layout and parametric variations and are thus "correct by design".
### Asynchronous logic
Asynchronous logic is the logic required for the design of asynchronous digital systems. These function without a clock signal and so individual logic elements cannot be relied upon to have a discrete true/false state at any given time. Boolean (two valued) logic is inadequate for this and so extensions are required. Karl Fant developed a theoretical treatment of this in his work Logically determined design in 2005 which used four-valued logic with null and intermediate being the additional values. This architecture is important because it is quasi-delay-insensitive.[11] Scott Smith and Jia Di developed an ultra-low-power variation of Fant's Null Convention Logic that incorporates multi-threshold CMOS.[12] This variation is termed Multi-threshold Null Convention Logic (MTNCL), or alternatively Sleep Convention Logic (SCL).[13] Vadim Vasyukevich developed a different approach based upon a new logical operation which he called venjunction. This takes into account not only the current value of an element, but also its history.[14]
### Petri nets
Petri nets are an attractive and powerful model for reasoning about asynchronous circuits (see Subsequent models of concurrency). A particularly useful type of interpreted Petri nets, called Signal Transition Graphs (STGs), was proposed independently in 1985 by Leonid Rosenblum and Alex Yakovlev[15] and Tam-Anh Chu.[16] Since then, STGs have been studied extensively in theory and practice,[17][18] which has led to the development of popular software tools for analysis and synthesis of asynchronous control circuits, such as Petrify[19] and Workcraft.[20]
Subsequent to Petri nets other models of concurrency have been developed that can model asynchronous circuits including the Actor model and process calculi.
## Benefits
A variety of advantages have been demonstrated by asynchronous circuits. Both quasi-delay-insensitive (QDI) circuits (generally agreed to be the most "pure" form of asynchronous logic that retains computational universality)[citation needed] and less pure forms of asynchronous circuitry which use timing constraints for higher performance and lower area and power present several advantages.
• Robust and cheap handling of metastability of arbiters.
• Average-case performance: the average-case time (delay) of an operation is not limited to the worst-case completion time of a component (gate, wire, block, etc.) as it is in synchronous circuits.[7]: xiv [3]: 3 This results in better latency and throughput.[21]: 9 [3]: 3 Examples include speculative completion,[22][23] which has been applied to design parallel prefix adders faster than synchronous ones, and a high-performance double-precision floating-point adder[24] which outperforms leading synchronous designs.
• Early completion: the output may be generated ahead of time, when result of input processing is predictable or irrelevant.
• Inherent elasticity: a variable number of data items may appear in the pipeline inputs at any time (a pipeline here meaning a cascade of linked functional blocks). This contributes to high performance while gracefully handling variable input and output rates, thanks to the unclocked delays of the pipeline stages (functional blocks); congestion may still be possible, however, and input-output gate delays should also be taken into account.[25]: 194 [21]
• No need for timing-matching between functional blocks either - though, given different delay models (predictions of gate/wire delay times), this depends on the actual approach to asynchronous circuit implementation.[25]: 194
• Freedom from the ever-worsening difficulties of distributing a high-fan-out, timing-sensitive clock signal.
• Circuit speed adapts to changing temperature and voltage conditions rather than being locked at the speed mandated by worst-case assumptions.[citation needed][vague][3]: 3
• Lower, on-demand power consumption;[7]: xiv [21]: 9 [3]: 3 zero standby power consumption.[3]: 3 In 2005 Epson reported 70% lower power consumption compared to a synchronous design.[26] Also, clock drivers can be removed, which can significantly reduce power consumption. However, when using certain encodings, asynchronous circuits may require more area, adding a similar power overhead if the underlying process has poor leakage properties (for example, the deep submicrometer processes used prior to the introduction of high-κ dielectrics).
• No need for power-matching between local asynchronous functional domains of circuitry. Synchronous circuits tend to draw a large amount of current right at the clock edge and shortly thereafter. The number of nodes switching (and hence, the amount of current drawn) drops off rapidly after the clock edge, reaching zero just before the next clock edge. In an asynchronous circuit, the switching times of the nodes are not correlated in this manner, so the current draw tends to be more uniform and less bursty.
• Robustness toward transistor-to-transistor variability in the manufacturing process (one of the most serious problems facing the semiconductor industry as dies shrink), variations in supply voltage, temperature, and fabrication process parameters.[3]: 3
• Less severe electromagnetic interference (EMI).[3]: 3 Synchronous circuits create a great deal of EMI in the frequency band at (or very near) their clock frequency and its harmonics; asynchronous circuits generate EMI patterns which are much more evenly spread across the spectrum.[3]: 3
• Design modularity (reuse), improved noise immunity and electromagnetic compatibility. Asynchronous circuits are more tolerant to process variations and external voltage fluctuations.[3]: 4
## Disadvantages

• Area overhead caused by the additional logic implementing handshaking.[3]: 4 In some cases an asynchronous design may require up to double the resources (area, circuit speed, power consumption) of a synchronous design, due to the addition of completion detection and design-for-test circuits.[27][3]: 4
• Compared to synchronous design, as of the 1990s and early 2000s not many people were trained or experienced in the design of asynchronous circuits.[27]
• Synchronous designs are inherently easier to test and debug than asynchronous designs.[28] However, this position is disputed by Fant, who claims that the apparent simplicity of synchronous logic is an artifact of the mathematical models used by the common design approaches.[29]
• Clock gating in more conventional synchronous designs is an approximation of the asynchronous ideal, and in some cases, its simplicity may outweigh the advantages of a fully asynchronous design.
• Performance (speed) of asynchronous circuits may be reduced in architectures that require input-completeness (more complex data path).[30]
• Lack of dedicated, asynchronous design-focused commercial EDA tools.[30] As of 2006 the situation was slowly improving, however.[3]: x
## Communication
There are several ways to create asynchronous communication channels that can be classified by their protocol and data encoding.
### Protocols
There are two widely used protocol families which differ in the way communications are encoded:
• two-phase handshake (a.k.a. two-phase protocol, Non-Return-to-Zero (NRZ) encoding, or transition signalling): Communications are represented by any wire transition; transitions from 0 to 1 and from 1 to 0 both count as communications.
• four-phase handshake (a.k.a. four-phase protocol, or Return-to-Zero (RZ) encoding): Communications are represented by a wire transition followed by a reset; a transition from 0 to 1 and back to 0 counts as a single communication.
Illustration of two and four-phase handshakes. Top: A sender and a receiver are communicating with simple request and acknowledge signals. The sender drives the request line, and the receiver drives the acknowledge line. Middle: Timing diagram of two, two-phase communications. Bottom: Timing diagram of one, four-phase communication.
Despite involving more transitions per communication, circuits implementing four-phase protocols are usually faster and simpler than those implementing two-phase protocols, because the signal lines return to their original state by the end of each communication. In two-phase protocols, the circuit implementation would have to store the state of the signal line internally.
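The four-phase sequence can be modeled as a toy state trace. This is a software illustration of the protocol only, not a hardware description; the class and trace strings are my own invention:

```java
// Toy model of a four-phase (return-to-zero) handshake: the sender
// raises req, the receiver answers with ack, then both lines return
// to zero before the next communication can begin.
import java.util.ArrayList;
import java.util.List;

public final class FourPhase {
    boolean req, ack;
    final List<String> trace = new ArrayList<>();

    void communicate(int datum) {
        req = true;  trace.add("req=1 (data " + datum + " valid)");
        ack = true;  trace.add("ack=1 (receiver latched data)");
        req = false; trace.add("req=0 (sender releases)");
        ack = false; trace.add("ack=0 (back to idle)");
    }

    public static void main(String[] args) {
        FourPhase ch = new FourPhase();
        ch.communicate(42);
        // Four transitions per communication, ending in the idle state.
        System.out.println(ch.trace.size()); // prints "4"
    }
}
```

A two-phase channel would need only the first two transitions, but as noted above it would then have to remember the current line polarity between communications.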
Note that these basic distinctions do not account for the wide variety of protocols. These protocols may encode only requests and acknowledgements or also encode the data, which leads to the popular multi-wire data encoding. Many other, less common protocols have been proposed including using a single wire for request and acknowledgment, using several significant voltages, using only pulses or balancing timings in order to remove the latches.
### Data encoding
There are two widely used data encodings in asynchronous circuits: bundled-data encoding and multi-rail encoding. Bundled-data encoding pairs ordinary data wires with an explicit request signal, while multi-rail encoding uses multiple wires to encode a single digit: the value is determined by the wire on which the event occurs. The multi-rail approach avoids some of the delay assumptions necessary with bundled-data encoding, since the request and the data are no longer separated.
#### Bundled-data encoding
Bundled-data encoding uses one wire per bit of data with a request and an acknowledge signal; this is the same encoding used in synchronous circuits without the restriction that transitions occur on a clock edge. The request and the acknowledge are sent on separate wires with one of the above protocols. These circuits usually assume a bounded delay model with the completion signals delayed long enough for the calculations to take place.
In operation, the sender signals the availability and validity of data with a request. The receiver then indicates completion with an acknowledgement, indicating that it is able to process new requests. That is, the request is bundled with the data, hence the name "bundled-data".
Bundled-data circuits are often referred to as micropipelines, whether they use a two-phase or four-phase protocol, even though the term was initially introduced for two-phase bundled data.
A 4-phase, bundled-data communication. Top: A sender and receiver are connected by data lines, a request line, and an acknowledge line. Bottom: Timing diagram of a bundled data communication. When the request line is low, the data is to be considered invalid and liable to change at any time.
#### Multi-rail encoding
Multi-rail encoding uses multiple wires without a one-to-one relationship between bits and wires, plus a separate acknowledge signal. Data availability is indicated by the transitions on one or more of the data wires (depending on the type of multi-rail encoding) instead of by a request signal as in bundled-data encoding. This provides the advantage that the data communication is delay-insensitive. Two common multi-rail encodings are one-hot and dual-rail. The one-hot (a.k.a. 1-of-n) encoding represents a number in base n with a communication on one of the n wires. The dual-rail encoding uses a pair of wires to represent each bit of the data, hence the name "dual-rail"; one wire in the pair represents the bit value 0 and the other represents the bit value 1. For example, a dual-rail encoded two-bit number will be represented with two pairs of wires, four wires in total. During a data communication, a communication occurs on one wire of each pair to indicate the data's bits. In the general case, an m×n encoding represents data as m words of base n.
Diagram of dual rail and 1-of-4 communications. Top: A sender and receiver are connected by data lines and an acknowledge line. Middle: Timing diagram of the sender communicating the values 0, 1, 2, and then 3 to the receiver with the 1-of-4 encoding. Bottom: Timing diagram of the sender communicating the same values to the receiver with the dual-rail encoding. For this particular data size, the dual rail encoding is the same as a 2x1-of-2 encoding.
#### Dual-rail encoding
Dual-rail encoding with a four-phase protocol is the most common and is also called three-state encoding, since it has two valid states (10 and 01, after a transition) and a reset state (00). Another common encoding, which leads to a simpler implementation than one-hot two-phase dual-rail, is four-state encoding, or level-encoded dual-rail, which uses a data bit and a parity bit to achieve a two-phase protocol.
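The three-state scheme above can be sketched in software. This is an illustration of the encoding only (which rail carries "true" is one common convention, not a standard):

```java
// Dual-rail, four-phase encoding sketch: each data bit uses two wires;
// (1,0) encodes 0, (0,1) encodes 1, and (0,0) is the reset "spacer".
public final class DualRail {
    // Encode one bit onto its wire pair: index 0 is the "false" rail,
    // index 1 the "true" rail.
    static int[] encode(boolean bit) {
        return bit ? new int[] {0, 1} : new int[] {1, 0};
    }

    // Data is valid only when exactly one rail of the pair is high;
    // (0,0) means "no data yet" and (1,1) is illegal.
    static boolean valid(int[] pair) {
        return pair[0] + pair[1] == 1;
    }

    static boolean decode(int[] pair) {
        if (!valid(pair)) throw new IllegalStateException("no valid data");
        return pair[1] == 1;
    }

    public static void main(String[] args) {
        System.out.println(decode(encode(true)));    // prints "true"
        System.out.println(valid(new int[] {0, 0})); // spacer: "false"
    }
}
```

The `valid` check is what makes the encoding delay-insensitive: the receiver simply waits until each pair leaves the (0,0) spacer state, with no timing assumption about when that happens.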
## Asynchronous CPU
Asynchronous CPUs are one of several ideas for radically changing CPU design.
Unlike a conventional processor, a clockless processor (asynchronous CPU) has no central clock to coordinate the progress of data through the pipeline. Instead, stages of the CPU are coordinated using logic devices called "pipeline controls" or "FIFO sequencers." Basically, the pipeline controller clocks the next stage of logic when the existing stage is complete. In this way, a central clock is unnecessary. It may actually be even easier to implement high performance devices in asynchronous, as opposed to clocked, logic:
• components can run at different speeds on an asynchronous CPU; all major components of a clocked CPU must remain synchronized with the central clock;
• a traditional CPU cannot "go faster" than the expected worst-case performance of the slowest stage/instruction/component. When an asynchronous CPU completes an operation more quickly than anticipated, the next stage can immediately begin processing the results, rather than waiting for synchronization with a central clock. An operation might finish faster than normal because of attributes of the data being processed (e.g., multiplication can be very fast when multiplying by 0 or 1, even when running code produced by a naive compiler), or because of the presence of a higher voltage or bus speed setting, or a lower ambient temperature, than 'normal' or expected.
Asynchronous logic proponents believe these capabilities would have these benefits:
• lower power dissipation for a given performance level, and
• highest possible execution speeds.
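The "pipeline control" idea described above — each stage fires as soon as its predecessor's result is ready, with no global clock — can be illustrated in software with a rendezvous channel standing in for the request/acknowledge handshake. A toy model, not a hardware description:

```java
// Toy model of a self-timed pipeline: each stage starts as soon as its
// input token arrives. Java's SynchronousQueue plays the role of the
// request/acknowledge rendezvous between stages.
import java.util.concurrent.SynchronousQueue;

public final class SelfTimedPipeline {
    static int run() {
        SynchronousQueue<Integer> aToB = new SynchronousQueue<>();
        SynchronousQueue<Integer> bToOut = new SynchronousQueue<>();

        // Stage A: hands a value to stage B as soon as B is ready for it.
        Thread stageA = new Thread(() -> {
            try { aToB.put(21); } catch (InterruptedException ignored) {}
        });
        // Stage B: doubles the value; completing the put is its "ack".
        Thread stageB = new Thread(() -> {
            try { bToOut.put(aToB.take() * 2); }
            catch (InterruptedException ignored) {}
        });
        stageA.start();
        stageB.start();
        try {
            return bToOut.take();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints "42"
    }
}
```

Because `put` blocks until the consumer's `take`, a fast stage simply waits for its neighbour — the software analogue of the elasticity and average-case timing discussed above.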
The biggest disadvantage of the clockless CPU is that most CPU design tools assume a clocked CPU (i.e., a synchronous circuit). Many tools "enforce synchronous design practices".[31] Making a clockless CPU (designing an asynchronous circuit) involves modifying the design tools to handle clockless logic and doing extra testing to ensure the design avoids metastable problems. The group that designed the AMULET, for example, developed a tool called LARD[32] to cope with the complex design of AMULET3.
### Examples
Despite all the difficulties numerous asynchronous CPUs have been built.
The ORDVAC of 1951 was a successor to the ENIAC and the first asynchronous computer ever built.[33][34]
The ILLIAC II was the first completely asynchronous, speed independent processor design ever built; it was the most powerful computer at the time.[33]
DEC PDP-16 Register Transfer Modules (ca. 1973) allowed the experimenter to construct asynchronous, 16-bit processing elements. Delays for each module were fixed and based on the module's worst-case timing.
### Caltech
Since the mid-1980s, Caltech has designed four non-commercial CPUs in an attempt to evaluate the performance and energy efficiency of asynchronous circuits.[35][36]
Caltech Asynchronous Microprocessor (CAM)
In 1988 the Caltech Asynchronous Microprocessor (CAM) was the first asynchronous, quasi delay-insensitive (QDI) microprocessor made by Caltech.[35][37] The processor had 16-bit wide RISC ISA and separate instruction and data memories.[35] It was manufactured by MOSIS and funded by DARPA. The project was supervised by the Office of Naval Research, the Army Research Office, and the Air Force Office of Scientific Research.[35]: 12
During demonstrations, the researchers loaded a simple program which ran in a tight loop, pulsing one of the output lines after each instruction. This output line was connected to an oscilloscope. When a cup of hot coffee was placed on the chip, the pulse rate (the effective "clock rate") naturally slowed down to adapt to the worsening performance of the heated transistors. When liquid nitrogen was poured on the chip, the instruction rate shot up with no additional intervention. Additionally, at lower temperatures, the voltage supplied to the chip could be safely increased, which also improved the instruction rate – again, with no additional configuration.[citation needed]
When implemented in gallium arsenide (HGaAs3) it was claimed to achieve 100 MIPS.[35]: 5 Overall, the research paper interpreted the resultant performance of CAM as superior compared to commercial alternatives available at the time.[35]: 5
MiniMIPS
In 1998 the MiniMIPS, an experimental, asynchronous MIPS I-based microcontroller, was made. Even though its SPICE-predicted performance was around 280 MIPS at 3.3 V, the implementation suffered from several mistakes in layout (human error) and the results turned out to be lower by about 40% (see table).[35]: 5
The Lutonium 8051
Made in 2003, it was a quasi delay-insensitive asynchronous microcontroller designed for energy efficiency.[36][35]: 9 The microcontroller's implementation followed the Harvard architecture.[36]
Performance comparison of the Caltech CPUs (in MIPS).[note 2]

| Name | Year | Word size (bits) | Transistors (thousands) | Size (mm) | Node size (µm) | 1.5 V | 2 V | 3.3 V | 5 V | 10 V |
|------|------|------------------|-------------------------|-----------|----------------|-------|-----|-------|-----|------|
| CAM (SCMOS) | 1988 | 16 | 20 | N/A | 1.6 | N/A | 5 | N/A | 18 | 26 |
| MiniMIPS (CMOS) | 1998 | 32 | 2000 | 8×14 | 0.6 | 60 | 100 | 180 | N/A | N/A |
| Lutonium 8051 (CMOS) | 2003 | 8 | N/A | N/A | 0.18 | 200 | N/A | N/A | N/A | 4 |
### Epson
In 2004, Epson manufactured the world's first bendable microprocessor called ACT11, an 8-bit asynchronous chip.[38][39][40][41][42] Synchronous flexible processors are slower, since bending the material on which a chip is fabricated causes wild and unpredictable variations in the delays of various transistors, for which worst-case scenarios must be assumed everywhere and everything must be clocked at worst-case speed. The processor is intended for use in smart cards, whose chips are currently limited in size to those small enough that they can remain perfectly rigid.
### IBM
In 2014, IBM announced a SyNAPSE-developed chip that runs in an asynchronous manner, with one of the highest transistor counts of any chip ever produced. IBM's chip consumes orders of magnitude less power than traditional computing systems on pattern recognition benchmarks.[43]
### Timeline
• ORDVAC and the (identical) ILLIAC I (1951)[33][34]
• Johnniac (1953)[44]
• WEIZAC (1955)
• Kiev (1958), a Soviet machine whose programming language used pointers long before they appeared in the PL/I language[45]
• ILLIAC II (1962)[33]
• Victoria University of Manchester built Atlas (1964)
• ICL 1906A and 1906S mainframe computers, part of the 1900 series and sold from 1964 for over a decade by ICL[46]
• Polish computers KAR-65 and K-202 (1965 and 1970 respectively)
• Honeywell CPUs 6180 (1972)[47] and Series 60 Level 68 (1981)[48][49] upon which Multics ran asynchronously
• Soviet bit-slice microprocessor modules (late 1970s)[50][51] produced as К587,[52] К588[53] and К1883 (U83x in East Germany)[54]
• Caltech Asynchronous Microprocessor, the world's first asynchronous microprocessor (1988)[35][37]
• ARM-implementing AMULET (1993 and 2000)
• Asynchronous implementation of MIPS R3000, dubbed MiniMIPS (1998)
• Several versions of the XAP processor experimented with different asynchronous design styles: a bundled data XAP, a 1-of-4 XAP, and a 1-of-2 (dual-rail) XAP (2003?)[55]
• ARM-compatible processor (2003?) designed by Z. C. Yu, S. B. Furber, and L. A. Plana; "designed specifically to explore the benefits of asynchronous design for security sensitive applications"[55]
• "Network-based Asynchronous Architecture" processor (2005) that executes a subset of the MIPS architecture instruction set[55]
• ARM996HS processor (2006) from Handshake Solutions
• HT80C51 processor (2007?) from Handshake Solutions.[56]
• Vortex, a superscalar general-purpose CPU with a load/store architecture from Intel (2007);[57] it was developed as Fulcrum Microsystems' Test Chip 2 and was not commercialized, except for some of its components; the chip included DDR SDRAM and a 10 Gb Ethernet interface linked to the CPU via the Nexus system-on-chip network[57][58]
• SEAforth multi-core processor (2008) from Charles H. Moore[59]
• GA144[60] multi-core processor (2010) from Charles H. Moore
• TAM16: 16-bit asynchronous microcontroller IP core (Tiempo)[61]
• Aspida asynchronous DLX core;[62] the asynchronous open-source DLX processor (ASPIDA) has been successfully implemented both in ASIC and FPGA versions[63]
## Notes
1. ^ Globally asynchronous locally synchronous circuits are possible.
2. ^ Dhrystone was also used.[35]: 4,8
## References
1. ^ a b Horowitz, Mark (2007). "Advanced VLSI Circuit Design Lecture". Stanford University, Computer Systems Laboratory. Archived from the original on April 21, 2016.
2. ^ Staunstrup, Jørgen (1994). A Formal Approach to Hardware Design. Boston, MA: Springer US. ISBN 978-1-4615-2764-0. OCLC 852790160.
3. Sparsø, Jens (April 2006). "Asynchronous Circuit Design A Tutorial" (PDF). Technical University of Denmark.
4. ^ Nowick, S. M.; Singh, M. (May–June 2015). "Asynchronous Design — Part 1: Overview and Recent Advances" (PDF). IEEE Design and Test. 32 (3): 5–18. doi:10.1109/MDAT.2015.2413759. S2CID 14644656. Archived from the original (PDF) on December 21, 2018. Retrieved August 27, 2019.
5. ^ Nowick, S. M.; Singh, M. (May–June 2015). "Asynchronous Design — Part 2: Systems and Methodologies" (PDF). IEEE Design and Test. 32 (3): 19–28. doi:10.1109/MDAT.2015.2413757. S2CID 16732793. Archived from the original (PDF) on December 21, 2018. Retrieved August 27, 2019.
6. "Why Asynchronous Design?". Galois, Inc. July 15, 2021. Retrieved December 4, 2021.
7. Myers, Chris J. (2001). Asynchronous circuit design. New York: J. Wiley & Sons. ISBN 0-471-46412-0. OCLC 53227301.
8. ^ Muller, D.E. (1955). Theory of asynchronous circuits, Report no. 66. Digital Computer Laboratory, University of Illinois at Urbana-Champaign.
9. ^ Miller, Raymond E. (1965). Switching Theory, Vol. II. Wiley.
10. ^ van Berkel, C. H. and M. B. Josephs and S. M. Nowick (February 1999), "Applications of Asynchronous Circuits" (PDF), Proceedings of the IEEE, 87 (2): 234–242, doi:10.1109/5.740016, archived from the original (PDF) on April 3, 2018, retrieved August 27, 2019
11. ^ Karl M. Fant (2005), Logically determined design: clockless system design with NULL convention logic (NCL), John Wiley and Sons, ISBN 978-0-471-68478-7
12. ^ Smith, Scott and Di, Jia (2009). Designing Asynchronous Circuits using NULL Conventional Logic (NCL). Morgan & Claypool Publishers. ISBN 978-1-59829-981-6.
13. ^ Scott, Smith and Di, Jia. "U.S. 7,977,972 Ultra-Low Power Multi-threshold Asychronous Circuit Design". Retrieved December 12, 2011.
14. ^ Vasyukevich, V. O. (April 2007), "Decoding asynchronous sequences", Automatic Control and Computer Sciences, Allerton Press, 41 (2): 93–99, doi:10.3103/S0146411607020058, ISSN 1558-108X, S2CID 21204394
15. ^ Rosenblum, L. Ya. and Yakovlev, A. V. (1985). "Signal Graphs: from Self-timed to Timed ones. Proceedings of International Workshop on Timed Petri Nets, Torino, Italy, July 1985, IEEE CS Press, pp. 199-207" (PDF). Archived (PDF) from the original on October 23, 2003.((cite web)): CS1 maint: multiple names: authors list (link)
16. ^ Chu, T.-A. (June 1, 1986). "On the models for designing VLSI asynchronous digital systems". Integration. 4 (2): 99–113. doi:10.1016/S0167-9260(86)80002-5. ISSN 0167-9260.
17. ^ Yakovlev, Alexandre; Lavagno, Luciano; Sangiovanni-Vincentelli, Alberto (November 1, 1996). "A unified signal transition graph model for asynchronous control circuit synthesis". Formal Methods in System Design. 9 (3): 139–188. doi:10.1007/BF00122081. ISSN 1572-8102. S2CID 26970846.
18. ^ Cortadella, J.; Kishinevsky, M.; Kondratyev, A.; Lavagno, L.; Yakovlev, A. (2002). Logic Synthesis for Asynchronous Controllers and Interfaces. Springer Series in Advanced Microelectronics. Vol. 8. Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-55989-1. ISBN 978-3-642-62776-7.
19. ^ "Petrify: Related publications". www.cs.upc.edu. Retrieved July 28, 2021.
20. ^ "start - Workcraft". workcraft.org. Retrieved July 28, 2021.
21. ^ a b c Nowick, S. M. and M. Singh (September–October 2011), "High-Performance Asynchronous Pipelines: an Overview" (PDF), IEEE Design & Test of Computers, 28 (5): 8–22, doi:10.1109/mdt.2011.71, S2CID 6515750, archived from the original (PDF) on April 21, 2021, retrieved August 27, 2019
22. ^ Nowick, S. M. and K. Y. Yun and P. A. Beerel and A. E. Dooply (March 1997), "Speculative Completion for the Design of High-Performance Asynchronous Dynamic Adders" (PDF), Proceedings of the IEEE International Symposium on Advanced Research in Asynchronous Circuits and Systems ('Async'): 210–223, doi:10.1109/ASYNC.1997.587176, ISBN 0-8186-7922-0, S2CID 1098994, archived from the original (PDF) on April 21, 2021, retrieved August 27, 2019
23. ^ Nowick, S. M. (September 1996), "Design of a Low-Latency Asynchronous Adder Using Speculative Completion" (PDF), IEE Proceedings - Computers and Digital Techniques, 143 (5): 301–307, doi:10.1049/ip-cdt:19960704, archived from the original (PDF) on April 22, 2021, retrieved August 27, 2019
24. ^ Sheikh, B. and R. Manohar (May 2010), "An Operand-Optimized Asynchronous IEEE 754 Double-Precision Floating-Point Adder" (PDF), Proceedings of the IEEE International Symposium on Asynchronous Circuits and Systems ('Async'): 151–162, archived from the original (PDF) on April 21, 2021, retrieved August 27, 2019
25. ^ a b Sasao, Tsutomu (1993). Logic Synthesis and Optimization. Boston, MA: Springer US. ISBN 978-1-4615-3154-8. OCLC 852788081.
26. ^
27. ^ a b Furber, Steve. "Principles of Asynchronous Circuit Design" (PDF). Pg. 232. Archived from the original (PDF) on April 26, 2012. Retrieved December 13, 2011.
28. ^ "Keep It Strictly Synchronous: KISS those asynchronous-logic problems good-bye". Personal Engineering and Instrumentation News, November 1997, pages 53–55. http://www.fpga-site.com/kiss.html
29. ^ Karl M. Fant (2007), Computer Science Reconsidered: The Invocation Model of Process Expression, John Wiley and Sons, ISBN 978-0471798149
30. ^ a b van Leeuwen, T. M. (2010). Implementation and automatic generation of asynchronous scheduled dataflow graph. Delft.
31. ^ Kruger, Robert (March 15, 2005). "Reality TV for FPGA design engineers!". eetimes.com. Retrieved November 11, 2020.
32. ^ LARD Archived March 6, 2005, at the Wayback Machine
33. ^ a b c d "In the 1950 and 1960s, asynchronous design was used in many early mainframe computers, including the ILLIAC I and ILLIAC II ... ." Brief History of asynchronous circuit design
34. ^ a b "The Illiac is a binary parallel asynchronous computer in which negative numbers are represented as two's complements." – final summary of "Illiac Design Techniques" 1955.
35. Martin, A.J.; Nystrom, M.; Wong, C.G. (November 2003). "Three generations of asynchronous microprocessors". IEEE Design & Test of Computers. 20 (6): 9–17. doi:10.1109/MDT.2003.1246159. ISSN 0740-7475. S2CID 15164301.
36. ^ a b c Martin, A.J.; Nystrom, M.; Papadantonakis, K.; Penzes, P.I.; Prakash, P.; Wong, C.G.; Chang, J.; Ko, K.S.; Lee, B.; Ou, E.; Pugh, J. (2003). "The Lutonium: a sub-nanojoule asynchronous 8051 microcontroller". Ninth International Symposium on Asynchronous Circuits and Systems, 2003. Proceedings. Vancouver, BC, Canada: IEEE Comput. Soc: 14–23. doi:10.1109/ASYNC.2003.1199162. ISBN 978-0-7695-1898-5. S2CID 13866418.
37. ^ a b Martin, Alain J. (February 6, 2014). "25 Years Ago: The First Asynchronous Microprocessor". Computer Science Technical Reports. doi:10.7907/Z9QR4V3H. ((cite journal)): Cite journal requires |journal= (help)
38. ^ "Seiko Epson tips flexible processor via TFT technology" Archived 2010-02-01 at the Wayback Machine by Mark LaPedus 2005
39. ^ "A flexible 8b asynchronous microprocessor based on low-temperature poly-silicon TFT technology" by Karaki et al. 2005. Abstract: "A flexible 8b asynchronous microprocessor ACTII ... The power level is 30% of the synchronous counterpart."
40. ^ "Introduction of TFT R&D Activities in Seiko Epson Corporation" by Tatsuya Shimoda (2005?) has picture of "A flexible 8-bit asynchronous microprocessor, ACT11"
41. ^ "Epson Develops the World's First Flexible 8-Bit Asynchronous Microprocessor"
42. ^ "Seiko Epson details flexible microprocessor: A4 sheets of e-paper in the pipeline by Paul Kallender 2005
43. ^ "SyNAPSE program develops advanced brain-inspired chip" Archived 2014-08-10 at the Wayback Machine. August 07, 2014.
44. ^ Johnniac history written in 1968
45. ^ V. M Glushkov and E. L. Yushchenko. Mathematical description of computer "Kiev". UkrSSR, 1962 (in Russian)
46. ^
47. ^ "Entirely asynchronous, its hundred-odd boards would send out requests, earmark the results for somebody else, swipe somebody else's signals or data, and backstab each other in all sorts of amusing ways which occasionally failed (the "op not complete" timer would go off and cause a fault). ... [There] was no hint of an organized synchronization strategy: various "it's ready now", "ok, go", "take a cycle" pulses merely surged through the vast backpanel ANDed with appropriate state and goosed the next guy down. Not without its charms, this seemingly ad-hoc technology facilitated a substantial degree of overlap ... as well as the [segmentation and paging] of the Multics address mechanism to the extant 6000 architecture in an ingenious, modular, and surprising way ... . Modification and debugging of the processor, though, were no fun." "Multics Glossary: ... 6180"
48. ^ "10/81 ... DPS 8/70M CPUs" Multics Chronology
49. ^ "The Series 60, Level 68 was just a repackaging of the 6180." Multics Hardware features: Series 60, Level 68
50. ^ A. A. Vasenkov, V. L. Dshkhunian, P. R. Mashevich, P. V. Nesterov, V. V. Telenkov, Ju. E. Chicherin, D. I. Juditsky, "Microprocessor computing system," Patent US4124890, Nov. 7, 1978
51. ^ Chapter 4.5.3 in the biography of D. I. Juditsky (in Russian)
52. ^ "Серия 587 - Collection ex-USSR Chip's". Archived from the original on July 17, 2015. Retrieved July 16, 2015.
53. ^ "Серия 588 - Collection ex-USSR Chip's". Archived from the original on July 17, 2015. Retrieved July 16, 2015.
54. ^ "Серия 1883/U830 - Collection ex-USSR Chip's". Archived from the original on July 22, 2015. Retrieved July 19, 2015.
55. ^ a b c "A Network-based Asynchronous Architecture for Cryptographic Devices" by Ljiljana Spadavecchia 2005 in section "4.10.2 Side-channel analysis of dual-rail asynchronous architectures" and section "5.5.5.1 Instruction set"
56. ^ "Handshake Solutions HT80C51" "The Handshake Solutions HT80C51 is a Low power, asynchronous 80C51 implementation using handshake technology, compatible with the standard 8051 instruction set."
57. ^ a b Lines, Andrew (March 2007). "The Vortex: A Superscalar Asynchronous Processor". 13th IEEE International Symposium on Asynchronous Circuits and Systems (ASYNC'07): 39–48. doi:10.1109/ASYNC.2007.28. ISBN 978-0-7695-2771-0. S2CID 33189213.
58. ^ Lines, A. (2003). "Nexus: an asynchronous crossbar interconnect for synchronous system-on-chip designs". 11th Symposium on High Performance Interconnects, 2003. Proceedings. Stanford, CA, USA: IEEE Comput. Soc: 2–9. doi:10.1109/CONECT.2003.1231470. ISBN 978-0-7695-2012-4. S2CID 1799204.
59. ^ SEAforth Overview Archived 2008-02-02 at the Wayback Machine "... asynchronous circuit design throughout the chip. There is no central clock with billions of dumb nodes dissipating useless power. ... the processor cores are internally asynchronous themselves."
60. ^ "GreenArrayChips" "Ultra-low-powered multi-computer chips with integrated peripherals."
61. ^ Tiempo: Asynchronous TAM16 Core IP
62. ^ "ASPIDA sync/async DLX Core". OpenCores.org. Retrieved September 5, 2014.
63. ^
|
{}
|
# A gentleman has 6 friends to invite.
Question:
A gentleman has 6 friends to invite. In how many ways can he send invitation cards to them, if he has 3 servants to carry the cards?
Solution:
Given: A gentleman has 6 friends to invite. He has 3 servants to carry the cards.
Each friend's card can be carried by any one of the 3 servants.
So the number of ways of inviting 6 friends using 3 servants $=3 \times 3 \times 3 \times 3 \times 3 \times 3=3^{6}=729$
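The multiplication principle used here can also be checked by brute force (an added sketch, not part of the original solution):

```python
# Brute-force check of the counting argument: each of the 6 friends can be
# handed their card by any of the 3 servants, independently, giving 3^6 ways.
from itertools import product

friends, servants = 6, 3
assignments = list(product(range(servants), repeat=friends))
print(len(assignments))     # 729
print(servants ** friends)  # 729
```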
|
{}
|
# Parabola - find the equation
• February 22nd 2010, 10:27 PM
lance
Parabola - find the equation
Find the equation of the line tangent to y^2=-16x and parallel to x+y=1.
Please help me with this or just give me an idea on how to answer this problem... Thanks!
• February 22nd 2010, 10:44 PM
Prove It
Quote:
Originally Posted by lance
Find the equation of the line tangent to y^2=-16x and parallel to x+y=1.
Please help me with this or just give me an idea on how to answer this problem... Thanks!
If the line is parallel to $x + y = 1$, i.e. $y = 1 - x$, then its gradient is $-1$. (Why?)
You want the equation of the tangent that has the same gradient.
So $y^2 = -16x$
$\frac{d}{dx}(y^2) = \frac{d}{dx}(-16x)$
$2y\,\frac{dy}{dx} = -16$
$\frac{dy}{dx} = -\frac{8}{y}$.
So when the gradient is $-1$...
$-\frac{8}{y} = -1$
$y = 8$.
When $y = 8$ we have $8^2 = -16x$
$64 = -16x$
$x = -4$.
So we have the gradient $= -1$ and a coordinate $(-4, 8)$ that lies on the tangent.
Put this information into $y = mx + c$, solve for the unknown $c$, and then you will have the equation of the tangent.
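A numeric sanity check of the tangency condition (an added sketch, not part of the thread): substituting a line into the parabola's equation must give a quadratic with zero discriminant, and a line parallel to $x + y = 1$ has gradient $-1$.

```python
# Substituting y = m*x + c into y^2 = -16x gives
#   m^2*x^2 + (2*m*c + 16)*x + c^2 = 0,
# which has a double root (tangency) exactly when its discriminant is zero.
# A line parallel to x + y = 1 has gradient m = -1; c = 4 makes it tangent.
m, c = -1, 4
a, b, d = m * m, 2 * m * c + 16, c * c
discriminant = b * b - 4 * a * d
print(discriminant)   # 0 -> the line meets the parabola at exactly one point
x = -b / (2 * a)      # x-coordinate of the point of tangency
print(x, m * x + c)   # -4.0 8.0
```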
|
{}
|
The exponent used for cubes is 3, which is also denoted by the superscript ³. Cubing a number means multiplying it by itself twice: n³ = n × n × n, e.g. 4³ = 4 × 4 × 4 = 64 and 8³ = 8 × 8 × 8 = 512. Perfect cubes (or cube numbers) are the integers obtained this way from other integers: 8, 27, 64, 125, 216, 343, …

The cube root is the converse of the cube: the cube root of a number x is the number a such that a³ = x, written ∛x or, equivalently, x^(1/3) (x raised to the power ⅓). The radical symbol is the same as for the square root, with a small 3 attached. For example, ∛8 = ∛(2 × 2 × 2) = 2, ∛1000 = 10, and ∛1000000 = 100, since 100 × 100 × 100 = 1000000; 1000000 is therefore a perfect cube (the nearest previous perfect cube is 970299 and the nearest next is 1030301). If x is positive, a will be positive; if x is negative, a will be negative.

Unlike square roots, cube roots of negative numbers are defined over the reals and are themselves negative, because when three negative factors are multiplied, two of the minus signs cancel but one remains: (−x)^(1/3) = −(x^(1/3)). Thus ∛−8 = −2, ∛−27 = −3, ∛−64 = −4, and the real cube root of −1000 is −10 (not 10i, as is sometimes claimed). For example, 7³ = 343 and (−7)³ = −343, so 343 and −343 are both perfect cubes. Besides its real (principal) cube root, every nonzero number also has a pair of complex-conjugate cube roots; the other cube roots of 8 are −1 + √3i and −1 − √3i. A cube root calculator normally outputs only the principal root.

When the radicand is not a perfect cube the root is irrational: ∛10 ≈ 2.154435 and ∛100000 ≈ 46.415888. Such radicals can still be simplified by extracting perfect-cube factors: 100000 has the cube factor 1000, so ∛100000 = ∛1000 × ∛100 = 10∛100; likewise ∛1000 = 10∛1 = 10, which is already fully simplified.

The cube root of a perfect cube can be found by prime factorisation: group the prime factors into triplets of equal factors and take one factor from each triplet. For example, 216 = 2 × 2 × 2 × 3 × 3 × 3, so ∛216 = 2 × 3 = 6. Without a calculator you can also estimate: to find ∛24, note that 3³ = 27 is close, then check 2.8³ = 21.952 and 2.9³ = 24.389, so ∛24 is a bit more than 2.8 but less than 2.9.

In programming, C provides the cbrt() function in the math.h header, and JavaScript provides Math.cbrt(), which takes a number as a parameter and returns its cube root, e.g. Math.cbrt(64) returns 4. Raising to the power 1/3 also works (0.3 is only a rough approximation of ⅓ and gives inaccurate results). Note that the cube root operation is not distributive over addition or subtraction.

Outside mathematics, the cube root rule (or cube root law) is an observation in political science, devised by Rein Taagepera in his 1972 paper "The size of national assemblies", that the number of members of a unicameral legislature, or of the lower house of a bicameral legislature, is about the cube root of the population being represented.

A cube root chart from 1 to 100 (pdf download link given at the end of the post) lists the perfect cube roots and is useful preparation for the math section of competitive exams.
|
{}
|
# Observation of $B^0_s \to K^{*\pm}K^\mp$ and evidence for $B^0_s \to K^{*-}\pi^+$ decays
Abstract : Measurements of the branching fractions of $B^0_{(s)} \to K^{*\pm}K^\mp$ and $B^0_{(s)} \to K^{*\pm}\pi^\mp$ decays are performed using a data sample corresponding to $1.0 \ {\rm fb}^{-1}$ of proton-proton collision data collected with the LHCb detector at a centre-of-mass energy of $7\mathrm{\,TeV}$, where the $K^{*\pm}$ mesons are reconstructed in the $K^0_{\rm S}\pi^\pm$ final state. The first observation of the $B^0_s \to K^{*\pm}K^\mp$ decay and the first evidence for the $B^0_s \to K^{*-}\pi^+$ decay are reported with branching fractions \begin{eqnarray} {\cal B}\left(B^0_s \to K^{*\pm}K^\mp\right) & = & \left( 12.7\pm1.9\pm1.9 \right) \times 10^{-6} \, , \\ {\cal B}\left(B^0_s \to K^{*-}\pi^+\right) & = & ~\left( 3.3\pm1.1\pm0.5 \right) \times 10^{-6} \, , \end{eqnarray} where the first uncertainties are statistical and the second are systematic. In addition, an upper limit of ${\cal B}\left(B^0 \to K^{*\pm}K^\mp\right) < 0.4 \ (0.5) \times 10^{-6}$ is set at $90\,\% \ (95\,\%)$ confidence level.
Document type:
Journal article
New Journal of Physics, Institute of Physics: Open Access Journals, 2014, 16, pp.123001. 〈10.1088/1367-2630/16/12/123001〉
Cited literature [11 references]
http://hal.in2p3.fr/in2p3-01053213
Contributor: Sabine Starita <>
Submitted on: Thursday, January 22, 2015 - 14:27:59
Last modified on: Sunday, April 22, 2018 - 01:20:05
Long-term archiving on: Saturday, September 12, 2015 - 06:34:06
### File
1367-2630_16_12_123001.pdf
Publisher files authorized on an open archive
### Citation
R. Aaij, B. Adeva, M. Adinolfi, A. Affolder, Ziad Ajaltouni, et al.. Observation of $B^0_s \to K^{*\pm}K^\mp$ and evidence for $B^0_s \to K^{*-}\pi^+$ decays. New Journal of Physics, Institute of Physics: Open Access Journals, 2014, 16, pp.123001. 〈10.1088/1367-2630/16/12/123001〉. 〈in2p3-01053213〉
|
{}
|
# Difference between revisions of "2006 AMC 10B Problems/Problem 1"
## Problem
What is $(-1)^{1} + (-1)^{2} + ... + (-1)^{2006}$ ?
$\textbf{(A)} -2006\qquad \textbf{(B)} -1\qquad \textbf{(C) } 0\qquad \textbf{(D) } 1\qquad \textbf{(E) } 2006$
## Solution
Since $-1$ raised to an odd integer exponent is $-1$ and $-1$ raised to an even integer exponent is $1$:
$(-1)^{1} + (-1)^{2} + ... + (-1)^{2006} = (-1) + (1) + ... + (-1)+(1) = \boxed{\textbf{(C) }0}.$
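The pairwise cancellation is easy to check numerically; here is a one-line Python verification (illustrative, not part of the original wiki page):

```python
# Sum (-1)^k for k = 1, ..., 2006: each (-1, +1) pair cancels.
total = sum((-1) ** k for k in range(1, 2007))
print(total)  # 0
```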
## See Also
2006 AMC 10B (Problems • Answer Key • Resources) Preceded byFirst Problem Followed byProblem 2 1 • 2 • 3 • 4 • 5 • 6 • 7 • 8 • 9 • 10 • 11 • 12 • 13 • 14 • 15 • 16 • 17 • 18 • 19 • 20 • 21 • 22 • 23 • 24 • 25 All AMC 10 Problems and Solutions
The problems on this page are copyrighted by the Mathematical Association of America's American Mathematics Competitions.
Various issues with siunitx's option exponent-to-prefix
I am using this to collect some of my thoughts on the use of exponent-to-prefix because I know the developer is around here. My question could be titled "Will those three things be fixed in a future version of siunitx?" I could submit this as a bug report, but I imagine not all of this may be fixed, so the answer might include why. I am also looking for a short-term workaround to the final example.
\documentclass{article}
\usepackage{siunitx}
\begin{document}
% infinite loop:
%\SIlist[exponent-to-prefix=true]{1e3}{m}
% should maybe print a warning, not an error:
% \SIlist[scientific-notation=true,exponent-to-prefix=true]{10000; 20000}{\m}
% this works as expected:
\SIlist[scientific-notation=engineering,exponent-to-prefix=true]{10000; 20000}{\m}
% does not work as expected. **Edit** I would love to see "10.000 and 20.000 km" here:
\SIlist[scientific-notation=engineering,exponent-to-prefix=true,list-units=single]{10000; 20000}{\m}
% this is closer, but still does not work:
\SIlist[scientific-notation=fixed,fixed-exponent=3,exponent-to-prefix=true,list-units=single]{10000; 20000}{\m}
\end{document}
The first line causes an infinite loop that should not happen so easily, I would think.
The second line throws an error, but it should really be a warning in my opinion because the manual clearly states
When the exponent-to-prefix option is set true, the package will attempt to convert any exponents in quantities into unit prefixes, and will attach these to the first unit given. This process is only possible if the exponent is one for which a prefix is available, and retains the number of significant figures in the input.
So in my eyes, a warning would be enough.
Finally, I wonder how (with a patch or a future version), exponent-to-prefix can be combined with list-units=single successfully.
• You can ask Joseph Wright in TeX.SX chat. He is there 'sometimes' ;-) – user31729 Mar 29 '16 at 18:54
• All the examples for exponent-to-prefix use a “macro form” for the unit. – egreg Sep 6 '19 at 10:07
Partial response for the first point only: using \metre instead of m in the unit argument seems to fix the infinite loop (but I cannot explain why).
\SIlist[exponent-to-prefix=true]{1e3;2e3}{\metre}
#### VI STD Model Question
6th Standard
Reg.No. :
Maths
Time : 02:30:00 Hrs
Total Marks : 60
I. Choose the best answer
5 x 1 = 5
1. If ${6\over7}={A\over49}$, then the value of A is
(a) 42 (b) 36 (c) 25 (d) 48
2. The number which determines marking the position of any number to its opposite on a number line is
(a) −1 (b) 0 (c) 1 (d) 10
3. If two identical rectangles of perimeter 30 cm are joined together, then the perimeter of the new shape will be
(a) equal to 60 cm (b) less than 60 cm (c) greater than 60 cm (d) equal to 45 cm
4. Which word has a vertical line of symmetry?
(a)
(b) NUN
(c) MAM
(d) EVE
5. The difference between the 6th term and the 5th term in the Fibonacci sequence is
(a) 6 (b) 8 (c) 5 (d) 3
II. Fill in the blanks
5 x 1 = 5
7. $7{3\over4}+6{1\over2}=$ _________ (Answer: $14{1\over4}$)
8. The number which has its own reciprocal is ________ (Answer: 1)
9. There are ______ integers from −5 to +5 (both inclusive). (Answer: 11)
10. ______ is an integer which is neither positive nor negative. (Answer: 0)
11. ___________ symmetry occurs when an object slides to a new position. (Answer: Translation)
III. True or False
5 x 1 = 5
13. $3{1\over2}$ can be written as $3+{1\over2}$. (a) True (b) False
14. Each of the integers −18, 6, −12, 0 is greater than −20. (a) True (b) False
15. All negative integers are greater than zero. (a) True (b) False
16. A shape has reflection symmetry if it has a line of symmetry. (a) True (b) False
17. The reflection of the name RANI is (a) True (b) False
IV. Answer any 10 of the following:
10 x 2 = 20
19. The length of a staircase is $5{1\over2}$ m. If one step is set at ${1\over4}$ m, then how many steps will there be in the staircase?
20. If 15 km east of a place is denoted as +15 km, what is the integer that represents 15 km west of it?
21. The area of a rectangular shaped photo is 820 sq. cm. and its width is 20 cm. What is its length? Also find its perimeter.
22. Complete the other half of the following figures such that the dotted line is the line of symmetry
23. Fill in the following information
24. The teacher had given the same situation mentioned above and asked two students Ravi and Arun to solve it. They came out with the answers for ${1\over2}+{1\over 4}$
25. In the above situation, find the quantity of milk left over. So subtract $3{1\over4}$ from $5{1\over2}$.
26. Arrange the following integers in descending order.
i) 14, 27, 15, −14, −9, 0, 11, −17
ii) −99, −120, 65, −46, 78, 400, −600
iii) 111, −222, 333, −444, 555, −666, 7777, −888
27. Find the perimeter of a triangle whose sides are 3 cm, 4 cm and 5 cm
28. Fill in the blanks.
i) 2 cm² = _____ mm²
ii) 18 m² = _____ cm²
iii) 5 km² = _____ m²
29. Draw the reflection image of the following figures about the given line.
30. Observe the calendar showing the month of January 2019.
JANUARY 2019
S M T W T F S
1 2 3 4 5
6 7 8 9 10 11 12
13 14 15 16 17 18 19
20 21 22 23 24 25 26
27 28 29 30 31
Answer the following questions
i) Sort out the prime and composite numbers from the calendar.
ii) Sort out the odd and even numbers.
iii) Sort out the multiples of 6; multiples of 4; the common multiples of 4 and 6 and LCM of the two numbers.
iv) Sort out the dates which fall on Monday.
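For part (iii) of the question above, the multiples and their LCM can be checked with a short Python sketch (illustrative, not part of the question paper):

```python
# Multiples of 4 and of 6 among the 31 days of January 2019,
# their common multiples, and the LCM of 4 and 6.
days = range(1, 32)
m4 = [d for d in days if d % 4 == 0]
m6 = [d for d in days if d % 6 == 0]
common = sorted(set(m4) & set(m6))
print(m4)      # [4, 8, 12, 16, 20, 24, 28]
print(m6)      # [6, 12, 18, 24, 30]
print(common)  # [12, 24]; the smallest, 12, is the LCM of 4 and 6
```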
V. Answer any 5 of the following questions:
5 x 5 = 25
32. Divide the following:
i) ${3\over7}\div4$
ii) ${4\over3}\div{5\over9}$
iii) $4{1\over5}\div3{3\over 4}$
iv) $9{2\over3}\div1{2\over3}$
33. Complete the table using the following hints:
C1: the first non-negative integer.
C3: the opposite to the second negative integer.
C5: the additive identity in whole numbers.
C6: the successor of the integer in C2.
C8: the predecessor of the integer in C7.
C9: the opposite to the integer in C5
34. A rectangle has length 40 cm and breadth 20 cm. How many squares with side 10 cm can be formed from it?
35. Find the approximate area of the flower in the given square grid.
36. Find the line of symmetry and the order of rotational symmetry of the given regular polygons and complete the following table and answer the questions given below
| Shape | Number of lines of symmetry | Order of rotational symmetry |
|---|---|---|
| Equilateral triangle | | |
| Square | | |
| Regular pentagon | | |
| Regular hexagon | | |
| Regular octagon | | |
i) A regular polygon of 10 sides will have ______ lines of symmetry.
ii) If a regular polygon has 10 lines of symmetry, then its order of rotational symmetry is _______
iii) A regular polygon of 'n' sides has _______ lines of symmetry and the order of rotational symmetry is _______.
37. Prepare a daily time schedule for evening study at home
James W. Cannon
James W. Cannon
Born: January 30, 1943 (age 77)
Nationality: American
Citizenship: United States
Alma mater: Ph.D. (1969), University of Utah
Known for: work in low-dimensional topology, geometric group theory
Awards: Fellow of the American Mathematical Society; Sloan Fellowship
Scientific career
Fields: Mathematics
Institutions: Brigham Young University
James W. Cannon (born January 30, 1943) is an American mathematician working in the areas of low-dimensional topology and geometric group theory. He was an Orson Pratt Professor of Mathematics at Brigham Young University.
## Biographical data
James W. Cannon was born on January 30, 1943, in Bellefonte, Pennsylvania.[1] Cannon received a Ph.D. in Mathematics from the University of Utah in 1969, under the direction of C. Edmund Burgess.
He was a Professor at the University of Wisconsin, Madison from 1977 to 1985.[1] In 1986 Cannon was appointed an Orson Pratt Professor of Mathematics at Brigham Young University.[2] He held this position until his retirement in September 2012.[3]
Cannon gave an AMS Invited address at the meeting of the American Mathematical Society in Seattle in August 1977, an invited address at the International Congress of Mathematicians in Helsinki 1978, and delivered the 1982 Mathematical Association of America Hedrick Lectures in Toronto, Canada.[1][4]
Cannon was elected to the American Mathematical Society Council in 2003 with the term of service February 1, 2004, to January 31, 2007.[2][5] In 2012 he became a fellow of the American Mathematical Society.[6]
In 1993 Cannon delivered the 30th annual Karl G. Maeser Distinguished Faculty Lecture at Brigham Young University.[7]
James Cannon is a devout member of The Church of Jesus Christ of Latter-day Saints.[8]
## Mathematical contributions
### Early work
Cannon's early work concerned topological aspects of embedded surfaces in R3 and understanding the difference between "tame" and "wild" surfaces.
His first famous result came in the late 1970s, when Cannon gave a complete solution to a long-standing "double suspension" problem posed by John Milnor. Cannon proved that the double suspension of a homology sphere is a topological sphere.[9][10] R. D. Edwards had previously proven this in many cases.
The results of Cannon's paper[10] were used by Cannon, Bryant and Lacher to prove (1979)[11] an important case of the so-called characterization conjecture for topological manifolds. The conjecture says that a generalized n-manifold $M$, where $n\geq 5$, which satisfies the "disjoint disk property" is a topological manifold. Cannon, Bryant and Lacher established[11] that the conjecture holds under the assumption that $M$ be a manifold except possibly at a set of dimension $(n-2)/2$. Later Frank Quinn[12] completed the proof that the characterization conjecture holds if there is even a single manifold point. In general, the conjecture is false, as was proved by John Bryant, Steven Ferry, Washington Mio and Shmuel Weinberger.[13]
### 1980s: Hyperbolic geometry, 3-manifolds and geometric group theory
In the 1980s the focus of Cannon's work shifted to the study of 3-manifolds, hyperbolic geometry and Kleinian groups, and he is considered one of the key figures in the birth of geometric group theory as a distinct subject in the late 1980s and early 1990s. Cannon's 1984 paper "The combinatorial structure of cocompact discrete hyperbolic groups"[14] was one of the forerunners in the development of the theory of word-hyperbolic groups, a notion that was introduced and developed three years later in a seminal 1987 monograph of Mikhail Gromov.[15] Cannon's paper explored combinatorial and algorithmic aspects of the Cayley graphs of Kleinian groups and related them to the geometric features of the actions of these groups on hyperbolic space. In particular, Cannon proved that convex-cocompact Kleinian groups admit finite presentations where Dehn's algorithm solves the word problem. The latter condition later turned out to give one of the equivalent characterizations of being word-hyperbolic and, moreover, Cannon's original proof essentially went through without change to show that the word problem in word-hyperbolic groups is solvable by Dehn's algorithm.[16] Cannon's 1984 paper[14] also introduced an important notion, the cone type of an element of a finitely generated group (roughly, the set of all geodesic extensions of an element). Cannon proved that a convex-cocompact Kleinian group has only finitely many cone types (with respect to a fixed finite generating set of that group) and showed how to use this fact to conclude that the growth series of the group is a rational function. These arguments also turned out to generalize to the word-hyperbolic group context.[15] Now-standard proofs[17] of the fact that the set of geodesic words in a word-hyperbolic group is a regular language also use finiteness of the number of cone types.
Cannon's work also introduced an important notion of almost convexity for Cayley graphs of finitely generated groups,[18] a notion that led to substantial further study and generalizations.[19][20][21]
An influential paper of Cannon and William Thurston, "Group invariant Peano curves",[22] which first circulated in preprint form in the mid-1980s,[23] introduced the notion of what is now called the Cannon-Thurston map. They considered the case of a closed hyperbolic 3-manifold M that fibers over the circle with the fiber being a closed hyperbolic surface S. In this case the universal cover of S, which is identified with the hyperbolic plane, admits an embedding into the universal cover of M, which is the hyperbolic 3-space. Cannon and Thurston proved that this embedding extends to a continuous $\pi_1(S)$-equivariant surjective map (now called the Cannon-Thurston map) from the ideal boundary of the hyperbolic plane (the circle) to the ideal boundary of the hyperbolic 3-space (the 2-sphere). Although the paper of Cannon and Thurston was finally published only in 2007, in the meantime it generated considerable further research and a number of significant generalizations (both in the contexts of Kleinian groups and of word-hyperbolic groups), including the work of Mahan Mitra,[24][25] Erica Klarreich,[26] Brian Bowditch[27] and others.
### 1990s and 2000s: Automatic groups, discrete conformal geometry and Cannon's conjecture
Cannon was one of the co-authors of the 1992 book Word Processing in Groups,[17] which introduced, formalized and developed the theory of automatic groups. The theory of automatic groups brought new computational ideas from computer science to geometric group theory and played an important role in the development of the subject in the 1990s.
A 1994 paper of Cannon gave a proof of the "combinatorial Riemann mapping theorem"[28] that was motivated by the classic Riemann mapping theorem in complex analysis. The goal was to understand when an action of a group by homeomorphisms on a 2-sphere is (up to a topological conjugation) an action on the standard Riemann sphere by Möbius transformations. The "combinatorial Riemann mapping theorem" of Cannon gave a set of sufficient conditions when a sequence of finer and finer combinatorial subdivisions of a topological surface determine, in the appropriate sense and after passing to the limit, an actual conformal structure on that surface. This paper of Cannon led to an important conjecture, first explicitly formulated by Cannon and Swenson in 1998[29] (but also suggested in implicit form in Section 8 of Cannon's 1994 paper) and now known as Cannon's conjecture, regarding characterizing word-hyperbolic groups with the 2-sphere as the boundary. The conjecture (Conjecture 5.1 in [29]) states that if the ideal boundary of a word-hyperbolic group G is homeomorphic to the 2-sphere, then G admits a properly discontinuous cocompact isometric action on the hyperbolic 3-space (so that G is essentially a 3-dimensional Kleinian group). In analytic terms Cannon's conjecture is equivalent to saying that if the ideal boundary of a word-hyperbolic group G is homeomorphic to the 2-sphere then this boundary, with the visual metric coming from the Cayley graph of G, is quasisymmetric to the standard 2-sphere.
The 1998 paper of Cannon and Swenson[29] gave an initial approach to this conjecture by proving that the conjecture holds under an extra assumption that the family of standard "disks" in the boundary of the group satisfies a combinatorial "conformal" property. The main result of Cannon's 1994 paper[28] played a key role in the proof. This approach to Cannon's conjecture and related problems was pushed further later in the joint work of Cannon, Floyd and Parry.[30][31][32]
Cannon's conjecture motivated much of subsequent work by other mathematicians and to a substantial degree informed subsequent interaction between geometric group theory and the theory of analysis on metric spaces.[33][34][35][36][37][38] Cannon's conjecture was motivated (see [29]) by Thurston's Geometrization Conjecture and by trying to understand why in dimension three variable negative curvature can be promoted to constant negative curvature. Although the Geometrization conjecture was recently settled by Perelman, Cannon's conjecture remains wide open and is considered one of the key outstanding open problems in geometric group theory and geometric topology.
### Applications to biology
The ideas of combinatorial conformal geometry that underlie Cannon's proof of the "combinatorial Riemann mapping theorem"[28] were applied by Cannon, Floyd and Parry (2000) to the study of large-scale growth patterns of biological organisms.[39] Cannon, Floyd and Parry produced a mathematical growth model which demonstrated that some systems determined by simple finite subdivision rules can result in objects (in their example, a tree trunk) whose large-scale form oscillates wildly over time even though the local subdivision laws remain the same.[39] Cannon, Floyd and Parry also applied their model to the analysis of the growth patterns of rat tissue.[39] They suggested that the "negatively curved" (or non-euclidean) nature of microscopic growth patterns of biological organisms is one of the key reasons why large-scale organisms do not look like crystals or polyhedral shapes but in fact in many cases resemble self-similar fractals.[39] In particular they suggested (see section 3.4 of [39]) that such "negatively curved" local structure is manifested in the highly folded and highly connected nature of the brain and the lung tissue.
## Selected publications
• Cannon, James W. (1979), "Shrinking cell-like decompositions of manifolds. Codimension three.", Annals of Mathematics, Second Series, 110 (1): 83-112, doi:10.2307/1971245, JSTOR 1971245, MR 0541330
• Cannon, James W. (1984), "The combinatorial structure of cocompact discrete hyperbolic groups.", Geometriae Dedicata, 16 (2): 123-148, doi:10.1007/BF00146825, MR 0758901
• Cannon, James W. (1987), "Almost convex groups.", Geometriae Dedicata, 22 (2): 197-210, doi:10.1007/BF00181266, MR 0877210
• Epstein, David B. A.; Cannon, James W.; Holt, Derek F.; Levy, Silvio V.; Paterson, Michael S.; Thurston, William P. (1992), Word processing in groups., Boston, MA: Jones and Bartlett Publishers, ISBN 978-0-86720-244-1
• Cannon, James W. (1994), "The combinatorial Riemann mapping theorem.", Acta Mathematica, 173 (2): 155-234, doi:10.1007/BF02398434, MR 1301392
• Cannon, James W.; Thurston, William P. (2007), "Group invariant Peano curves.", Geometry & Topology, 11 (3): 1315-1355, doi:10.2140/gt.2007.11.1315, MR 2326947
## References
1. ^ a b c Biographies of Candidates 2003. Notices of the American Mathematical Society, vol. 50 (2003), no. 8, pp. 973-986.
2. ^ a b "Newsletter of the College of Physical and Mathematical Sciences" (PDF). Brigham Young University. February 2004. Archived from the original (PDF) on February 15, 2009. Retrieved 2008.
3. ^ 44 Years of Mathematics. Brigham Young University. Accessed July 25, 2013.
4. ^ The Mathematical Association of America's Earle Raymond Hedrick Lecturers. Mathematical Association of America. Accessed September 20, 2008.
5. ^ 2003 Election Results. Notices of the American Mathematical Society vol 51 (2004), no. 2, p. 269.
6. ^ List of Fellows of the American Mathematical Society, retrieved 2012-11-10.
7. ^ MATH PROFESSOR TO GIVE LECTURE WEDNESDAY AT Y. Deseret News. February 18, 1993.
8. ^ Susan Easton Black.Expressions of Faith: Testimonies of Latter-Day Saint Scholars. Foundation for Ancient Research and Mormon Studies, 1996. ISBN 978-1-57345-091-1.
9. ^ J. W. Cannon, The recognition problem: what is a topological manifold? Bulletin of the American Mathematical Society, vol. 84 (1978), no. 5, pp. 832-866.
10. ^ a b J. W. Cannon, Shrinking cell-like decompositions of manifolds. Codimension three. Annals of Mathematics (2), 110 (1979), no. 1, 83-112.
11. ^ a b J. W. Cannon, J. L. Bryant and R. C. Lacher, The structure of generalized manifolds having nonmanifold set of trivial dimension. Geometric topology (Proc. Georgia Topology Conf., Athens, Ga., 1977), pp. 261-300, Academic Press, New York-London, 1979. ISBN 0-12-158860-2.
12. ^ Frank Quinn. Resolutions of homology manifolds, and the topological characterization of manifolds. Inventiones Mathematicae, vol. 72 (1983), no. 2, pp. 267-284.
13. ^ John Bryant, Steven Ferry, Washington Mio and Shmuel Weinberger, Topology of homology manifolds, Annals of Mathematics 143 (1996), pp. 435-467; MR1394965
14. ^ a b J. W. Cannon, The combinatorial structure of cocompact discrete hyperbolic groups. Geometriae Dedicata, vol. 16 (1984), no. 2, pp. 123-148.
15. ^ a b M. Gromov, Hyperbolic Groups, in: "Essays in Group Theory" (G. M. Gersten, ed.), MSRI Publ. 8, 1987, pp. 75-263.
16. ^ R. B. Sher, R. J. Daverman. Handbook of Geometric Topology. Elsevier, 2001. ISBN 978-0-444-82432-5; p. 299.
17. ^ a b David B. A. Epstein, James W. Cannon, Derek F. Holt, Silvio V. Levy, Michael S. Paterson, William P. Thurston. Word processing in groups. Jones and Bartlett Publishers, Boston, MA, 1992. ISBN 0-86720-244-0. Reviews: B. N. Apanasov, Zbl 0764.20017; Gilbert Baumslag, Bull. AMS, doi:10.1090/S0273-0979-1994-00481-1; D. E. Cohen, Bull LMS, doi:10.1112/blms/25.6.614; Richard M. Thomas, MR1161694
18. ^ James W. Cannon. Almost convex groups. Geometriae Dedicata, vol. 22 (1987), no. 2, pp. 197-210.
19. ^ S. Hermiller and J. Meier, Measuring the tameness of almost convex groups. Transactions of the American Mathematical Society vol. 353 (2001), no. 3, pp. 943-962.
20. ^ S. Cleary and J. Taback, Thompson's group F is not almost convex. Journal of Algebra, vol. 270 (2003), no. 1, pp. 133-149.
21. ^ M. Elder and S. Hermiller, Minimal almost convexity. Journal of Group Theory, vol. 8 (2005), no. 2, pp. 239-266.
22. ^ J. W. Cannon and W. P. Thurston. Group invariant Peano curves. Archived 2008-04-05 at the Wayback Machine Geometry & Topology, vol. 11 (2007), pp. 1315-1355.
23. ^ Darryl McCullough, MR2326947 (a review of: Cannon, James W.; Thurston, William P. "Group invariant Peano curves". Geom. Topol. 11 (2007), 1315-1355), MathSciNet. Quote: "This influential paper dates from the mid-1980s. Indeed, preprint versions are referenced in more than 30 published articles, going back as early as 1990."
24. ^ Mahan Mitra. Cannon-Thurston maps for hyperbolic group extensions. Topology, vol. 37 (1998), no. 3, pp. 527-538.
25. ^ Mahan Mitra. Cannon-Thurston maps for trees of hyperbolic metric spaces. Journal of Differential Geometry, vol. 48 (1998), no. 1, pp. 135-164.
26. ^ Erica Klarreich, Semiconjugacies between Kleinian group actions on the Riemann sphere. American Journal of Mathematics, vol. 121 (1999), no. 5, 1031-1078.
27. ^ Brian Bowditch. The Cannon-Thurston map for punctured-surface groups. Mathematische Zeitschrift, vol. 255 (2007), no. 1, pp. 35-76.
28. ^ a b c James W. Cannon. The combinatorial Riemann mapping theorem. Acta Mathematica 173 (1994), no. 2, pp. 155-234.
29. ^ a b c d J. W. Cannon and E. L. Swenson, Recognizing constant curvature discrete groups in dimension 3. Transactions of the American Mathematical Society 350 (1998), no. 2, pp. 809-849.
30. ^ J. W. Cannon, W. J. Floyd, W. R. Parry. Sufficiently rich families of planar rings. Annales Academiæ Scientiarium Fennicæ. Mathematica. vol. 24 (1999), no. 2, pp. 265-304.
31. ^ J. W. Cannon, W. J. Floyd, W. R. Parry. Finite subdivision rules. Conformal Geometry and Dynamics, vol. 5 (2001), pp. 153-196.
32. ^ J. W. Cannon, W. J. Floyd, W. R. Parry. Expansion complexes for finite subdivision rules. I. Conformal Geometry and Dynamics, vol. 10 (2006), pp. 63-99.
33. ^ M. Bourdon and H. Pajot, Quasi-conformal geometry and hyperbolic geometry. In: Rigidity in dynamics and geometry (Cambridge, 2000), pp. 1-17, Springer, Berlin, 2002; ISBN 3-540-43243-4.
34. ^ Mario Bonk and Bruce Kleiner, Conformal dimension and Gromov hyperbolic groups with 2-sphere boundary. Geometry & Topology, vol. 9 (2005), pp. 219-246.
35. ^ Mario Bonk, Quasiconformal geometry of fractals. International Congress of Mathematicians. Vol. II, pp. 1349-1373, Eur. Math. Soc., Zürich, 2006; ISBN 978-3-03719-022-7.
36. ^ S. Keith, T. Laakso, Conformal Assouad dimension and modulus. Geometric and Functional Analysis, vol 14 (2004), no. 6, pp. 1278-1321.
37. ^ I. Mineyev, Metric conformal structures and hyperbolic dimension. Conformal Geometry and Dynamics, vol. 11 (2007), pp. 137-163.
38. ^ Bruce Kleiner, The asymptotic geometry of negatively curved spaces: uniformization, geometrization and rigidity. International Congress of Mathematicians. Vol. II, pp. 743-768, Eur. Math. Soc., Zürich, 2006. ISBN 978-3-03719-022-7.
39. J. W. Cannon, W. Floyd and W. Parry. Crystal growth, biological cell growth and geometry. Pattern Formation in Biology, Vision and Dynamics, pp. 65-82. World Scientific, 2000. ISBN 981-02-3792-8, ISBN 978-981-02-3792-9.
# LC model of an atom?
1. Dec 26, 2006
### waht
I know quantum mechanics describes a hydrogen atom in great detail. I'm wondering if there exists another model using the concepts of inductance and capacitance.
Obviously there could exist a capacitance between a proton and an electron, and an inductance of the electron in the electron cloud.
So the frequencies emitted by the atom are simply different resonances of LC:
$$\omega = \frac{1}{\sqrt{LC}}$$
I know this is unnecessary or even inadequate, but since everything from radio waves to microwaves is modeled by LC, it was just a thought.
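For reference, the LC resonance formula above is easy to evaluate numerically. A quick Python sketch (the component values are arbitrary circuit illustrations, not atomic parameters):

```python
import math

def lc_resonance(L, C):
    """Angular resonance frequency of an ideal LC circuit: omega = 1/sqrt(L*C)."""
    return 1.0 / math.sqrt(L * C)

# Example: a 1 uH inductor with a 1 pF capacitor.
omega = lc_resonance(1e-6, 1e-12)
f = omega / (2 * math.pi)  # convert angular frequency to ordinary frequency in Hz
print(f"omega = {omega:.3e} rad/s, f = {f:.3e} Hz")
```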
2. Dec 26, 2006
### marcusl
The failure of classical models, especially simple models, to explain atomic behavior such as spectral lines, heat capacity, chemical bonds and the periodic table were a powerful impetus to develop quantum theory. Unfortunately, not "everything" can be modeled by L's and C's...
3. Dec 26, 2006
### cesiumfrog
If you're going to do this today, the obvious approach is to first take the equations describing the thing you're interested in (derived using mainstream techniques), break them down into their mathematical constituents (sine functions, square roots, pi, other constants, etc.), then choose your favourite set of concepts to apply (maybe relate every sinusoid to a pendulum, or an LC circuit) and try to draw what the equation describes in terms of your choice of concepts. Finally, come up with a more creative rationale for drawing that picture as your starting point, and exclaim that it produces the exact same result as previous theories! But on the other hand, if it sounds kind of complex and arbitrary, and doesn't give any insight into other problems... what was the point of this again?
4. Dec 29, 2006
### lightarrow
If the model worked well for different atoms just by changing a few parameters, it could be used to predict complicated behaviours of (heavy?) atoms, which are still too difficult to solve with QM; or, starting from that model, one could also explore the way two atoms interact...
I think it wouldn't be such a meaningless idea, if the model weren't too complicated.
# nLab Quantization of Gauge Systems
This entry is about the textbook Quantization of Gauge Systems,
on the BRST-BV formalism for describing gauge theories.
# Contents
## 20) Complementary material
category: reference
Last revised on January 6, 2018 at 18:41:17. See the history of this page for a list of all contributions to it.
MercuryDPM Beta
RNG Class Reference
This is a class that generates random numbers i.e. named the Random Number Generator (RNG). More...
#include <RNG.h>
## Public Member Functions
RNG ()
default constructor More...
void setRandomSeed (unsigned long int new_seed)
This is the seed for the random number generator. (Note: the call to seed_LFG is really only required if using that type of generator, but the other one is always required.) More...
Mdouble getRandomNumber (Mdouble min, Mdouble max)
This is a random generating routine can be used for initial positions. More...
Mdouble test ()
This function tests the quality of random numbers, based on the chi-squared test. More...
void setLinearCongruentialGeneratorParmeters (unsigned const int a, unsigned const int c, unsigned const int m)
This function sets the parameters for the LCG random number generator, in the order multiplier, addition, modulus. More...
void randomise ()
sets the random variables such that they differ for each run More...
void setLaggedFibonacciGeneratorParameters (const unsigned int p, const unsigned int q)
This function sets the parameters for the LFG random number generator. More...
void setRandomNumberGenerator (RNGType type)
Allows the user to set which random number generator is used. More...
## Private Member Functions
Mdouble getRandomNumberFromLinearCongruentialGenerator (Mdouble min, Mdouble max)
This is a basic Linear Congruential Generator Random. More...
Mdouble getRandomNumberFromLaggedFibonacciGenerator (Mdouble min, Mdouble max)
This is a Lagged Fibonacci Generator. More...
void seedLaggedFibonacciGenerator ()
This seed the LFG. More...
## Private Attributes
unsigned long int randomSeedLinearCongruentialGenerator_
This is the initial seed of the RNG. More...
std::vector< Mdouble > randomSeedLaggedFibonacciGenerator_
These are the seeds required for the LFG. More...
unsigned long int a_
These are the two parameters that control the LCG random generator. More...
unsigned long int c_
unsigned long int m_
unsigned long int p_
These are the parameters that control the LFG random generator. More...
unsigned long int q_
RNGType type_
This is the type of random number generator. More...
## Detailed Description
This is a class that generates random numbers i.e. named the Random Number Generator (RNG).
This is a stand-alone class, but it is encapsulated (used) by the MD class. To make it architecture-safe, both the LCG and the LFG are hard-coded, i.e. the class does not use the internal C++ generator.
Todo:
(AT) implement new C++-standard RNG instead of this one (Kudos on the hard work done here though ;). NB: maybe something for Mercury 2?
Definition at line 52 of file RNG.h.
## Constructor & Destructor Documentation
RNG::RNG ( )
default constructor
This is a random number generator and returns a Mdouble within the range specified.
Todo:
{Thomas: This code does sth. when min>max; I would prefer to throw an error.}
{the random seed should be stored in restart}
Definition at line 33 of file RNG.cc.
34 {
36 a_ = 1103515245;
37 c_ = 12345;
38 m_ = 1024 * 1024 * 1024;
40 p_ = 607;
41 q_ = 273;
44 }
## Member Function Documentation
Mdouble RNG::getRandomNumber ( Mdouble min, Mdouble max )
This is a random generating routine can be used for initial positions.
Definition at line 69 of file RNG.cc.
70 {
71 switch (type_)
72 {
77 }
78 }
Mdouble RNG::getRandomNumberFromLaggedFibonacciGenerator ( Mdouble min, Mdouble max )
private
This is a Lagged Fibonacci Generator.
This is a basic Lagged Fibonacci Generator, described by two lag parameters, p and q.
Definition at line 115 of file RNG.cc.
References p_, q_, and randomSeedLaggedFibonacciGenerator_.
Referenced by getRandomNumber().
116 {
117 Mdouble new_seed = fmod(randomSeedLaggedFibonacciGenerator_[0] + randomSeedLaggedFibonacciGenerator_[p_ - q_], static_cast<Mdouble>(1.0));
118 //Update the random seed
119 for (unsigned int i = 0; i < p_ - 1; i++)
120 {
122 }
123 randomSeedLaggedFibonacciGenerator_[p_ - 1] = new_seed;
124
125 //Generate a random number in the required range
126
127 Mdouble random_num;
128
129 Mdouble range = max - min;
130 random_num = min + range * new_seed;
131
132 return random_num;
133 }
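For illustration, the lagged update above translates directly into a short sketch; this Python version mirrors the C++ logic (the golden-ratio seeding below is made up for the example and is not the class's actual seeding):

```python
import math

def lagged_fibonacci(seeds, p, q):
    """One step of the additive lagged Fibonacci update shown above.

    seeds: list of p floats in [0, 1); shifted in place, as in the C++ loop.
    Returns the new seed, a float in [0, 1).
    """
    new_seed = math.fmod(seeds[0] + seeds[p - q], 1.0)
    # Shift the seed history left and append the new value, mirroring
    # the for-loop in getRandomNumberFromLaggedFibonacciGenerator.
    for i in range(p - 1):
        seeds[i] = seeds[i + 1]
    seeds[p - 1] = new_seed
    return new_seed

def random_number(seeds, p, q, lo, hi):
    """Map the new seed into the half-open range [lo, hi)."""
    return lo + (hi - lo) * lagged_fibonacci(seeds, p, q)
```

With the default lags p = 607 and q = 273 quoted in the constructor, each call costs a fixed shift of the 607-entry seed history.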
Mdouble RNG::getRandomNumberFromLinearCongruentialGenerator ( Mdouble min, Mdouble max )
private
This is a basic Linear Congruential Generator.
This is a basic Linear Congruential Generator, described by three parameters: the multiplier a, the increment c and the modulus m.
Definition at line 83 of file RNG.cc.
References a_, c_, m_, and randomSeedLinearCongruentialGenerator_.
Referenced by getRandomNumber(), and seedLaggedFibonacciGenerator().
84 {
85 //Update the random seed
87
88 //Generate a random number in the required range
89
90 Mdouble random_num;
91
92 Mdouble range = max - min;
93 random_num = min + range * randomSeedLinearCongruentialGenerator_ / (static_cast<Mdouble>(m_) + 1.0);
94
95 return random_num;
96 }
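The seed-update line (source line 86) is elided in the listing above; given the member names, it presumably implements the standard LCG recurrence seed = (a * seed + c) mod m. A Python sketch of that logic, using the default constants from the constructor (this is an illustration, not the class's actual interface):

```python
class LCG:
    """Minimal linear congruential generator mirroring the C++ logic above.

    The constants are the defaults quoted in the RNG constructor
    (a = 1103515245, c = 12345, m = 2^30).
    """
    def __init__(self, seed=0, a=1103515245, c=12345, m=1024 * 1024 * 1024):
        self.seed, self.a, self.c, self.m = seed, a, c, m

    def random_number(self, lo, hi):
        # seed_{n+1} = (a * seed_n + c) mod m
        self.seed = (self.a * self.seed + self.c) % self.m
        # Map the integer seed into [lo, hi); dividing by m + 1 keeps the
        # result strictly below hi, as the C++ code does.
        return lo + (hi - lo) * self.seed / (self.m + 1.0)
```

Two generators built with the same seed produce the same sequence, which is why the class stores the seed as a member rather than using a global source.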
void RNG::randomise ( )
sets the random variables such that they differ for each run
Definition at line 59 of file RNG.cc.
References setRandomSeed().
60 {
61 setRandomSeed(static_cast<unsigned long int>(time(nullptr)));
62 }
void RNG::seedLaggedFibonacciGenerator ( )
private
This seeds the LFG.
Definition at line 103 of file RNG.cc.
Referenced by RNG(), setLaggedFibonacciGeneratorParameters(), and setRandomSeed().
104 {
105 for (unsigned int i = 0; i < p_; i++)
106 {
108 }
109 }
void RNG::setLaggedFibonacciGeneratorParameters ( const unsigned int p, const unsigned int q )
This function sets the parameters for the LFG random number generator.
Definition at line 188 of file RNG.cc.
References p_, q_, randomSeedLaggedFibonacciGenerator_, and seedLaggedFibonacciGenerator().
189 {
190 //p must be greater than q, so make sure this is true. Not sure what happens if you set p=q in the LFG algorithm.
191 if (p > q)
192 {
193 p_ = p;
194 q_ = q;
195 }
196 else
197 {
198 p_ = q;
199 q_ = p;
200 }
201
204
205 }
void RNG::setLinearCongruentialGeneratorParmeters ( unsigned const int a, unsigned const int c, unsigned const int m )
This function sets the parameters for the LCG random number generator. The order is multiplier, increment, modulus.
Definition at line 52 of file RNG.cc.
References a_, c_, and m_.
53 {
54 a_ = a;
55 c_ = c;
56 m_ = m;
57 }
void RNG::setRandomNumberGenerator ( RNGType type )
Allows the user to set which random number generator is used.
Definition at line 64 of file RNG.cc.
References type_.
65 {
66 type_ = type;
67 }
void RNG::setRandomSeed ( unsigned long int new_seed )
This is the seed for the random number generator. (Note: the call to seedLaggedFibonacciGenerator is only really required when using that type of generator, but the LCG seed is always required.)
Definition at line 46 of file RNG.cc.
Referenced by DPMBase::constructor(), and randomise().
47 {
50 }
Mdouble RNG::test ( )
This function tests the quality of random numbers, based on the chi-squared test.
It reports the probability that the random numbers being generated come from a uniform distribution. If this number is less than 0.95, it is strongly advised that you change the parameters being used.
Definition at line 140 of file RNG.cc.
References mathsFunc::chi_squared_prob(), and getRandomNumber().
141 {
142 //These are the fixed parameters that define the test
143 static unsigned int num_of_tests = 100000;
144 static Mdouble max_num = 100.0;
145 static unsigned int num_of_bins = 10;
146
147 //This is the generated random_number
148 Mdouble rn;
149 //This is the bin the random number will lie in
150 unsigned int bin = 0;
151 //This is a vector of bins
152 std::vector<int> count;
153 count.resize(num_of_bins);
154
155 //Initialisation of the bins
156 for (unsigned int i = 0; i < num_of_bins; i++)
157 {
158 count[i] = 0;
159 }
160
161 //Loop over a number of tests
162 for (unsigned int i = 0; i < num_of_tests; i++)
163 {
164 rn = getRandomNumber(0.0, max_num);
165 bin = static_cast<unsigned int>(std::floor(rn * num_of_bins / max_num));
166
167 //Add one to the bin count
168 count[bin]++;
169
170 }
171
172 //Finally, post-process the result and report on the random numbers
173 Mdouble chi_cum = 0.0;
174 Mdouble expected = num_of_tests / num_of_bins;
175
176 for (unsigned int i = 0; i < num_of_bins; i++)
177 {
178 chi_cum = chi_cum + (count[i] - expected) * (count[i] - expected) / expected;
179 std::cout << i << " : " << count[i] << " : " << (count[i] - expected) * (count[i] - expected) / expected << std::endl;
180 }
181 //end for loop over computing the chi-squared value.
182 std::cout << chi_cum << std::endl;
183
184 return mathsFunc::chi_squared_prob(chi_cum, num_of_bins);
185 }
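The loop above accumulates the standard Pearson chi-squared statistic over equally sized bins. As a sketch, the same computation in Python, using Python's own random module in place of the class's generators and a fixed seed so the run is reproducible:

```python
import random

def chi_squared_statistic(samples, num_of_bins, max_num):
    """Pearson chi-squared statistic for uniformity, as in RNG::test."""
    count = [0] * num_of_bins
    for rn in samples:
        # Bin index, exactly as computed in the C++ loop.
        count[int(rn * num_of_bins / max_num)] += 1
    expected = len(samples) / num_of_bins
    return sum((c - expected) ** 2 / expected for c in count)

random.seed(0)  # fixed seed: the sketch is deterministic
samples = [random.random() * 100.0 for _ in range(100000)]
stat = chi_squared_statistic(samples, 10, 100.0)
```

For a good generator the statistic should be small relative to the chi-squared distribution for the given number of bins; the class then converts it to a probability via mathsFunc::chi_squared_prob.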
## Member Data Documentation
unsigned long int RNG::a_
private
These are the parameters that control the LCG random generator.
Definition at line 126 of file RNG.h.
unsigned long int RNG::c_
private
Definition at line 126 of file RNG.h.
unsigned long int RNG::m_
private
Definition at line 126 of file RNG.h.
unsigned long int RNG::p_
private
These are the parameters that control the LFG random generator.
Definition at line 131 of file RNG.h.
unsigned long int RNG::q_
private
Definition at line 131 of file RNG.h.
std::vector<Mdouble> RNG::randomSeedLaggedFibonacciGenerator_
private
These are the seeds required for the LFG.
Definition at line 121 of file RNG.h.
unsigned long int RNG::randomSeedLinearCongruentialGenerator_
private
This is the initial seed of the RNG.
Definition at line 116 of file RNG.h.
Referenced by getRandomNumberFromLinearCongruentialGenerator(), RNG(), and setRandomSeed().
RNGType RNG::type_
private
This is the type of random number generator.
Definition at line 136 of file RNG.h.
Referenced by getRandomNumber(), RNG(), and setRandomNumberGenerator().
The documentation for this class was generated from the following files:
Question
# Write ionic and net ionic equations for the reaction below: $\mathrm{AgNO_3}(aq) + \mathrm{Na_2SO_4}(aq) \longrightarrow$
Solution
Verified
Answered 1 year ago
Step 1
1 of 5
In this task we have to write an ionic and a net ionic equation for the reaction of $\mathrm{AgNO_3}$ and $\mathrm{Na_2SO_4}$.
# Deep Learning - Cross Entropy and Softmax
### Cross Entropy
In this post we will talk about cross entropy and the softmax function. They are used in most neural network classification problems. The standard setup is to use the output layer to represent the distribution of label classes. For example, if we have M distinct labels, then the size of the output layer of the neural network is M and the value of each neuron represents the probability of the corresponding label.
Now we need to answer the following two questions
• How to represent the label in the training data?
• How to measure the difference between a predicted value and a target value?
#### How to represent the label in the training data?
As mentioned earlier, the output layer of the neural network represents a probability distribution so it's natural to convert the label in training data to a distribution representation as well. Suppose the $$i^{th}$$ training data point is in class $$c$$, i.e. $$y_i = c$$. This is a scalar value and we need to convert it into a distribution. We further assume there are $$M$$ different classes so the probability distribution can be represented by a vector of length $$M$$.
Let $$p_{c_j}^{i}$$ denote the probability of being class $$c_j$$ for the $$i^{th}$$ data point in the training data. We then have
$$p_{c_j}^{i}(y_i) = \begin{cases} 1 & \;\;\; \text{if} \; c_j = c \\ 0 & \;\;\; \text{otherwise} \end{cases}$$
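As a quick sketch, this one-hot conversion of a class index into a length-M distribution vector can be written as:

```python
def one_hot(c, num_classes):
    """Return the length-M distribution vector for class index c."""
    p = [0.0] * num_classes
    p[c] = 1.0  # all probability mass on the true class
    return p
```

For example, class 2 out of 5 classes becomes the vector (0, 0, 1, 0, 0).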
#### How to measure the difference between a predicted value and a target value?
It's clear that we need something to quantify the difference between two distributions. We have a few options here.
• Kullback-Leibler Divergence: $$KL(P||Q) = -\sum_x P(x)log \frac{Q(x)}{P(x)}$$
• Cross Entropy: $$H(P, Q) = - \sum_x P(x)log Q(x)$$
• Jensen-Shannon Divergence: $$JSD(P||Q) = \frac{1}{2} (KL(P||M) + KL(Q||M))$$ where $$M = \frac{1}{2} (P + Q)$$
Some notes about these distance measure:
• KL divergence and cross entropy are not symmetric and usually the first argument (e.g. P) is the true distribution.
• $$H(P, Q) = H(P) + KL(P||Q)$$ where $$H(P)$$ is the entropy of $$P$$
• As $$P$$ is the true distribution, its entropy $$H(P)$$ is a constant which makes KL divergence and cross entropy equivalent. In deep learning training, the cross entropy is used as the loss function because it's easier to calculate the cross entropy than the KL divergence.
• Jensen-Shannon divergence is symmetric and the square root of it forms a metric. The proof can be found in the paper A New Metric for Probability Distributions.
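The identity $$H(P, Q) = H(P) + KL(P||Q)$$ from the second note is easy to verify numerically; a small sketch with made-up distributions:

```python
import math

def entropy(P):
    return -sum(p * math.log(p) for p in P if p > 0)

def cross_entropy(P, Q):
    return -sum(p * math.log(q) for p, q in zip(P, Q) if p > 0)

def kl_divergence(P, Q):
    return sum(p * math.log(p / q) for p, q in zip(P, Q) if p > 0)

P = [0.7, 0.2, 0.1]  # "true" distribution (made-up values)
Q = [0.5, 0.3, 0.2]  # predicted distribution
# Check: H(P, Q) == H(P) + KL(P || Q)
gap = cross_entropy(P, Q) - (entropy(P) + kl_divergence(P, Q))
```

Since H(P) does not depend on the prediction Q, minimizing cross entropy and minimizing KL divergence pick out the same Q.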
#### Special Case: Binary Cross Entropy
One special case is binary cross entropy. In this particular case, there are only two classes. Suppose we have class A and class B and if a data point belongs to class A, we set $$y_i$$ to 1, otherwise $$y_i$$ is set to 0, then the cross entropy can be calculated by the following formula:
$$-[y_i \log(p(y_i)) + (1 - y_i) \log(1 - p(y_i))]$$
Note that the class representation is different as well. For binary classification, we only need one node to represent two classes (instead of using two nodes).
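A minimal per-sample implementation of this formula (the eps clipping is a common practical addition to avoid log(0), not part of the formula itself):

```python
import math

def binary_cross_entropy(y, p, eps=1e-12):
    """Per-sample binary cross entropy; y is the 0/1 label, p = P(y = 1)."""
    p = min(max(p, eps), 1.0 - eps)  # clip away from 0 and 1
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))
```

The loss shrinks as the predicted probability moves toward the true label and blows up as it moves toward the opposite one.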
### Softmax
Softmax is only one of many ways to convert a vector of scores to a probability distribution. Suppose we have a vector of scores $$(z_1, z_2, ..., z_n)$$ and we want to convert it into a probability distribution vector $$(y_1, y_2, ..., y_n)$$. The softmax formula is the following
$$y_i = \frac{e^{z_i}}{\sum\limits_{k=1}^{n}e^{z_k}}$$
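A direct implementation of this formula; the max-subtraction below is the usual numerical-stability trick, and it does not change the result because the factor $$e^{-\max(z)}$$ cancels between numerator and denominator:

```python
import math

def softmax(z):
    """Convert a score vector into a probability distribution."""
    m = max(z)  # subtract the max before exponentiating to avoid overflow
    exps = [math.exp(zi - m) for zi in z]
    s = sum(exps)
    return [e / s for e in exps]
```

Without the subtraction, inputs like z = (1000, 1000) would overflow math.exp; with it, they cleanly produce (0.5, 0.5).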
### Hierarchical Softmax
Technically speaking, hierarchical softmax is an optimization of the probability calculation. This method is proposed by Morin and Bengio in the paper Hierarchical Probabilistic Neural Network Language Model. The discussion of the paper is in a NLP context and the method aims to speed up the calculation of $$P(\omega_t|\omega_{t-1}, ..., \omega_{t-l+1})$$, where $$\omega_t$$ is a word in a document. The $$n$$ in the above softmax expression is the size of vocabulary, denoted by $$|V|$$.
There are two important observations mentioned in the paper:
1. The time complexity of softmax is O(n) (or O(|V|)) due to the calculation in the denominator.
2. By classifying word and using conditional probability, the time complexity can be reduced.
The idea is to use a binary tree to represent words, which is similar to an encoding process, and the probability of having a word is given by
$$P(v|\omega_{t-1}, ..., \omega_{t-l+1}) = \prod \limits_{j=1}^{m} P(b_j(v) | b_1(v), ..., b_{j-1}(v), \omega_{t-1}, .., \omega_{t-l+1})$$
where $$m$$ is the length of the path to reach the word in the binary tree.
The probability of having a word becomes the probability of following the correspondent path in the binary tree. In the figure above, there are three probabilities to be calculated and the time complexity of the calculation of each item is O(1) because we only need to iterate 2 classes (e.g. left/right or 0/1 in the encoding). The overall time complexity is the height of the binary tree. If the binary tree is balanced then the height is $$O(log(|V|))$$.
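A tiny sketch of this path-product idea, assuming each internal node stores the probability of taking its left branch (the node probabilities below are made-up values, not learned ones):

```python
def word_probability(path, left_probs):
    """Probability of a word = product of branch probabilities on its path.

    path: sequence of bits from root to leaf, 0 = left, 1 = right.
    left_probs: P(left) at each internal node along that path.
    """
    prob = 1.0
    for bit, p_left in zip(path, left_probs):
        prob *= p_left if bit == 0 else (1.0 - p_left)
    return prob

# Depth-2 tree over four "words"; keys are node positions (paths from root).
left = {(): 0.6, (0,): 0.7, (1,): 0.4}
paths = [(0, 0), (0, 1), (1, 0), (1, 1)]
probs = [word_probability(p, [left[()], left[p[:1]]]) for p in paths]
```

Because the branch probabilities at every node sum to 1, the leaf probabilities automatically sum to 1 with only O(depth) work per word.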
----- END -----
## Limit of a Ratio of Two Functions
In this post, I show that $\lim_{x\rightarrow a}\frac{f(x)}{g(x)}=\frac{\lim_{x\rightarrow a}f(x)}{\lim_{x \rightarrow a}g(x)}$ given that $\lim_{x\rightarrow a}f(x)=A$, $\lim_{x \rightarrow a} g(x) = B$, $B \ne 0$ and $g(x) \ne 0$. To do this, I approximately…
Read more
## A Limit Involving the Cosine Function
Now that several limit properties have been proven, it is possible for me to evaluate $\lim_{\alpha \rightarrow 0} \frac{1 - \cos \alpha}{\alpha}$. To do this, I follow the steps in Reference…
Read more
## Limit of the Product of Two Functions
In this post, I show that $\lim_{x\rightarrow a}[f(x)g(x)] = \lim_{x\rightarrow a} f(x) \lim_{x\rightarrow a} g(x)$ given that $\lim_{x\rightarrow a} f(x) = A$ and $\lim_{x\rightarrow a} g(x) = B$. To do this, I approximately follow…
Read more
## Limit of a Difference of Two Functions
In this post, I show that $\lim_{x\rightarrow a}[f(x) - g(x)] = \lim_{x \rightarrow a} f(x) - \lim_{x \rightarrow a} g(x)$ given that $\lim_{x \rightarrow a} f(x) = A$ and $\lim_{x \rightarrow a}…
Read more
# A preliminary study on the inorganic carbon sink function of mineral weathering during sediment transport in the Yangtze River mainstream
### CaO, MgO, calcite and dolomite contents of suspended sediments in the Yangtze River and its main tributaries
Figure 2 shows a declining tendency of the CaO + MgO and calcite + dolomite of suspended sediments in the mainstream of the Yangtze River from upstream to downstream. The total CaO + MgO contents along the Yangtze River were as follows: Tuotuo River, 16.33%; Yibin, 11.43%; Cuntan, 10.35%; Yichang (below the Three Gorges Dam), 6.17%; Wuhan Industrial Port, 6.61%; Datong, 4.80%; and Wusongkou, 4.60%. The total calcite + dolomite contents also decreased along the river as follows: Tuotuo River, 16.8%; Yibin, 9.1%; Cuntan, 6.2%; Yichang, 4.1%; Wuhan Industrial Port, 7.4%; Datong, 4.2%; and Wusongkou, 1.5%. The three stations closest to the estuary were dominated by dolomite and free of calcite.
Due to the dramatic terrain changes in the upper reaches of the Yangtze River, strong gravity erosion and physical weathering, the CaO + MgO and calcite + dolomite contents were high in the mainstream section of the Jinsha River above Yibin and decreased from the headwater to the estuary, which clearly illustrated the dissolution of calcium-magnesium silicate and carbonate minerals during the process of sediment transport in the river. The declining rates of CaO + MgO and calcite + dolomite contents in the upper reaches of the Yangtze River above Yichang were 0.12% / 100 km and 0.29% / 100 km, respectively, while the rates below Yichang were 0.09% / 100 km and 0.15 % / 100 km, respectively. The declining rate of calcite and dolomite contents was higher than that of CaO + MgO contents, which indicated that carbonate minerals were more likely to be dissolved than calcium-magnesium silicate minerals during sediment transport. Because the upstream reaches had greater slope gradients, faster flow velocities and consequently higher mineral dissolution rates, the CaO + MgO and calcite + dolomite contents of suspended sediments in the upper reaches had a higher declining rate than those in the middle and lower reaches.
The CaO + MgO and calcite + dolomite contents of the suspended sediment in the main tributaries of the Yangtze River were as follows: Minjiang River: 9.75% and 6.3%; Jialing River: 5.40% and 5.5%; Wujiang River: 10.87%; Xiangjiang River: 4.75%; Hanjiang River: 4.41%; and Ganjiang River: 2.87% (only CaO + MgO contents and no calcite + dolomite content data for the last four tributaries). Except for the Wujiang River, the CaO + MgO contents and calcite + dolomite contents in the Minjiang River were higher than those in other tributaries and close to the Jinsha River (Yibin site) because the Minjiang River basin had similar environments for erosion and sediment transport to the Jinsha River Basin. The CaO + MgO contents of suspended sediments in the Wujiang River were quite different between July 2003 and July 2007, i.e., 6.67% and 15.06%, respectively. The relatively high contents might be due to its widespread distribution of carbonate rock.
### Carbon sink capacity during suspended sediment transport in the Yangtze River mainstream
According to Eqs. (1) – (6), the calculated TCS capacity (C1) decreased gradually from the headwater to the estuary (Fig. 3) in the following order: Tuotuo River, 0.271 t / t; Cuntan, 0.151 t / t; Yichang, 0.117 t / t; Wuhan Industrial Port, 0.127 t / t; Datong, 0.092 t / t; and Wusongkou, 0.091 t / t (Table 2). As CaO + MgO contents decreased, so did the TCS capacities. This result verified that CO2 was consumed by the dissolution of Ca – Mg minerals during sediment transport from upstream to downstream and that a carbon sink function existed. The TCS capacities at Cuntan and Wusongkou were 0.151 t / t and 0.091 t / t, respectively. A total of 0.060 tons of CO2 per ton of suspended sediment was dissolved during transport from Cuntan to the sea.
The SCS capacities (C3) of silicate minerals in the sediment were in the range of 0.027–0.047 t / t and had little variation, except for the Tuotuo River (0.061 t / t). The NSCS capacity (C2) was consistent with the variation in the TCS capacity, and both had a gradual decreasing tendency from the headwater to the estuary (Fig. 3), as follows: Tuotuo River, 0.210 t / t; Cuntan, 0.104 t / t; Yichang, 0.078 t / t; Wuhan Industrial Port, 0.097 t / t; Datong, 0.065 t / t; and Wusongkou, 0.051 t / t. The silicate carbon sink capacity (SCS) was smaller and showed a smaller reduction than the NSCS along the Yangtze River, mainly due to the slower dissolution rate of silicate minerals or greater contribution of silicate rock clastics from the watersheds in the middle and lower reaches, which also limited CO2 consumption in comparison with carbonate minerals. Obviously, the decrease in TCS capacity from upstream to downstream was due to the intense dissolution of carbonate minerals in the Yangtze River.
An “ideal mainstream segment” refers to a segment where there was no sediment input from the tributaries or the amount of sediment supply from the tributaries was equal to the amount of sedimentation in the segment (meaning that the suspended sediment yields at the inlet and outlet were similar), and the calcium and magnesium mineral contents of suspended sediments in the tributaries and mainstream were similar. The carbon sinks via CO2 consumption by dissolution of calcium and magnesium minerals in the sediment transport process can be expressed as follows:
$$\text{Wt}_{h1\text{-}h2} = \text{Ws}_{h1\text{-}h2} \times \left( \text{C}_{h1} - \text{C}_{h2} \right)$$
(7)
where Wt_h1-h2 is the CO2 consumed via the dissolution of calcium and magnesium minerals in the segment (h1–h2) (10^4 t/yr); Ws_h1-h2 is the mean suspended sediment transported in the segment (h1–h2) (10^4 t/yr); C_h1 is the carbon sink capacity of the suspended sediment at the inlet of the segment (t/t); and C_h2 is the carbon sink capacity of the suspended sediment at the outlet of the segment (t/t).
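As an illustration, Eq. (7) applied to the Cuntan–Datong segment with the TCS capacities reported above (0.151 t/t at Cuntan, 0.092 t/t at Datong) and an illustrative mean sediment flux of 4.36 × 10^8 t/yr reproduces the order of magnitude of the reported net sink (the published figure of 26.45 × 10^6 tons of CO2 per year uses multi-campaign averages):

```python
def segment_carbon_sink(ws, c_in, c_out):
    """Eq. (7): CO2 consumed along a river segment.

    ws    : mean suspended sediment flux through the segment (t/yr)
    c_in  : carbon sink capacity at the segment inlet (t CO2 / t sediment)
    c_out : carbon sink capacity at the segment outlet (t CO2 / t sediment)
    Returns the consumed CO2 in t/yr.
    """
    return ws * (c_in - c_out)

# Illustrative values for the Cuntan -> Datong segment
wt = segment_carbon_sink(4.36e8, 0.151, 0.092)  # ~2.6e7 t CO2/yr
```

The sink scales linearly with both the sediment flux and the drop in capacity, which is why reservoir-induced reductions in sediment yield translate directly into carbon sink losses.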
The Yangtze River has a drainage area of 1.785 × 10^6 km^2, in which the upper reaches above Yichang have an area of 1.05 × 10^6 km^2. The average sediment yield from 1956 to 2000 was 5.01 × 10^8 t/yr at Yichang, while it was 4.33 × 10^8 t/yr at Datong, with a drainage area of 1.705 × 10^6 km^2. In addition, it was 4.39 × 10^8 t/yr at Cuntan [13], with a drainage area of 0.867 × 10^6 km^2. Although the drainage area of the river segment between Cuntan and Datong is large (0.838 × 10^6 km^2), the sediment yields at Cuntan and Datong were very similar, namely 4.39 × 10^8 t/yr and 4.33 × 10^8 t/yr, respectively, because sediment deposition in the channels of the segment offset the sediment supply from the tributaries. The contents of CaO and MgO in the suspended sediment in the tributaries (Hanjiang River, Ganjiang River and Xiangjiang River) in the middle and lower reaches below the TGR dam were nearly the same as the values in the mainstream of the Yangtze River (Hankou station) [12]. Thus, we regarded the mainstream segment between Cuntan and Datong as an ideal mainstream segment, with Cuntan being upstream of the TGR dam, which made the evaluation of the damming effects possible. The CaO + MgO contents of suspended sediment samples from the three campaigns (July 2003, July 2005 and July 2007) and the calcite + dolomite contents of the sediment (July 2005) at the two sites (Table 2) were used to calculate differences in TCS, NSCS and SCS capacities for the segment. Taking the mean sediment yield of the two sites during the period from 1956 to 2000 as reference, the annual net TCS, NSCS and SCS between the two sites were 26.45 × 10^6 tons of CO2, 17.51 × 10^6 tons of CO2 and 8.94 × 10^6 tons of CO2, respectively. After 2001, due to hydropower exploitation (especially the TGR project), ecological mitigation and soil conservation, the sediment yields at the two sites largely decreased and have stabilized since 2006.
By comparison to the period before 2000, the sediment yields at Cuntan and Datong decreased by 72.4% and 71.6%, respectively, during the period 2006–2019 [14]. Due to the reduction in sediment yields, the annual net TCS, NSCS and SCS in the segment decreased by 18.52 × 10^6 tons of CO2, 12.24 × 10^6 tons of CO2 and 6.28 × 10^6 tons of CO2, respectively (Table 3).
### Carbon sink capacities of the global rivers and their implications
The amount of deposited sediments in the Three Gorges Reservoir (TGR) from June 2003 to December 2017 was 1.669 × 10^9 tons, and the average sedimentation rate was 114.5 × 10^6 t/yr [15]. According to the differences in the TCS, NSCS and SCS capacities of the suspended sediments between Cuntan and Datong, the losses of annual TCS, NSCS and SCS by sedimentation in the TGR were estimated to be approximately 6.756 × 10^6 tons of CO2, 4.466 × 10^6 tons of CO2 and 2.290 × 10^6 tons of CO2, respectively. The power generation of the Three Gorges Hydropower Station exceeded 100 × 10^9 kWh in 2018, equivalent to saving 31.9 × 10^6 tons of standard coal and reducing CO2 emissions by 85.80 × 10^6 tons [16]. The reduction in inorganic carbon sinks from the sedimentation in the TGR was equivalent to only a limited fraction (7.9% for TCS and 2.7% for SCS) of the CO2 emissions reduced by the Three Gorges Hydropower Station. Moreover, the sediment deposited in the TGR could bury and store vast quantities of organic carbon [6], and particulate organic carbon (POC) contents in the TGR in recent reports [17,18,19] varied from 0.26 to 9.2%. An average POC content of 1.5% in buried sediment was used to estimate the annually buried organic carbon, and the relatively permanent sedimentation of organic carbon was equivalent to 6.30 × 10^6 tons of CO2 sequestration. The losses of annual TCS and SCS via silicate weathering caused by the TGR project could also offset 107.28% and 36.36% of its annual CO2 sequestration (6.30 × 10^6 tons) via permanent sedimentation, respectively.
From a global perspective, 4462 rivers with basin areas of more than 100 km^2 showed that the current annual sediment flux of global rivers into the sea is 12.61 × 10^9 tons [20]. Taking the pre-impoundment TCS of 0.060 t/t for the stream segment between Cuntan and Wusongkou (the mouth of the Yangtze River) into consideration, a total inorganic carbon sink of 7.57 × 10^8 tons of CO2 was derived for global rivers, which is equivalent to 71.6% of the total inorganic carbon sink of global rock weathering (1.056 × 10^9 tons of CO2), with weathered silicate accounting for more than 26% of the total weathered rocks [4,6,10]. The enhanced silicate rock weathering (ERW) strategy proposed by Beerling et al. [1] would create a higher annual SCS, reaching 2 × 10^9 tons of sequestered CO2. To achieve this goal, there is no doubt that the inorganic carbon sink contributed by the dissolution of calcium and magnesium minerals during river sediment transport accounts for a great portion of the carbon sink via global rock weathering. The collision and abrasion of river sediments, combined with stirring and mixing, could promote the dissolution of minerals during sediment transport (off-site weathering of the rock). Therefore, it is suggested that the dissolution rate of off-site rock weathering is higher than that of in situ weathering. In comparison to periods with limited anthropogenic influences, the global sediment flux to the sea has decreased by approximately 10% [20], and the corresponding total carbon sink loss in a year was estimated to be 0.757 × 10^8 tons of CO2, which is still less than the amount of CO2 emission reduction contributed by hydropower exploitation and the associated buried organic carbon per year.
pipeline:window:sp_cryolo_train
Differences
This shows you the differences between two versions of the page.
— pipeline:window:sp_cryolo_train [2019/04/02 11:40] (current) lusnig created

~~NOTOC~~

===== sp_cryolo_train =====
crYOLO - training: Training of crYOLO, a deep learning high accuracy particle picking procedure.

===== Usage =====

Usage in command line

sp_cryolo_train.py particle_diameter training_dir annot_dir --cryolo_train_path=CRYOLO_PATH --architecture=architecture --input_size=input_size --num_patches=num_patches --overlap_patches=overlap_patches --train_times=train_times --pretrained_weights_name=PRETRAINED_NAME --saved_weights_name=SAVE_WEIGHTS_NAME --batch_size=batch_size --learning_rate=learning_rate --np_epoch=np_epoch --object_scale=object_scale --no_object_scale=no_object_scale --coord_scale=coord_scale --valid_image_dir=valid_image_dir --valid_annot_dir=valid_annot_dir --warmup=warmup --gpu=gpu --fine_tune --gpu_fraction=GPU_FRACTION --num_cpu=NUM_CPU

===== Typical usage =====

To train crYOLO for a specific dataset, one has to specify the path to the training data in the config file. The training then typically happens in two steps:

__1. Warmup__:

sp_cryolo_train.py particle_diameter training_dir annot_dir --architecture="YOLO" --warmup=5

__2. Actual training__:

sp_cryolo_train.py --conf=config_path --warmup=0 --gpu=0

===== Input =====
=== Main Parameters ===
; %%--%%cryolo_train_path : crYOLO train executable : Path to the crYOLO executable (default none)
; particle_diameter : Particle diameter [Pixel] : Particle diameter in pixels. This size will be used as the box size for picking. Should be as small as possible. (default required int)
; training_dir : Training image directory : Folder which contains all images. (default required string)
; annot_dir : Annotation directory : Box or star files used for training. They should have the same names as the images.
(default required string)

=== Advanced Parameters ===
; %%--%%architecture : Network architecture : Type of network that is trained. (default PhosaurusNet)
; %%--%%input_size : Input image dimension [Pixel] : Dimension of the image used as input to the network. (default 1024)
; %%--%%num_patches : Number of patches : The number of patches (e.g. 2x2) the image is divided into and classified separately. (default 1)
; %%--%%overlap_patches : Patch overlap [Pixel] : The amount by which the patches will overlap. (default 0)
; %%--%%train_times : Repeat images : How often an image is augmented and repeated in one epoch. (default 10)
; %%--%%pretrained_weights_name : Pretrained weights name : Name of the pretrained model. (default cryolo_model.h5)
; %%--%%saved_weights_name : Saved weights name : Name of the model to save. (default cryolo_model.h5)
; %%--%%batch_size : Batch size : How many patches are processed in parallel. (default 5)
; %%--%%fine_tune : Fine tune mode : Set it to true if you only want to use the fine tune mode. (default False)
; %%--%%learning_rate : Learning rate : Learning rate used during training. (default 0.0001)
; %%--%%np_epoch : Number of epochs : Maximum number of epochs. (default 100)
; %%--%%object_scale : Object loss scale : Loss scale for objects. (default 5.0)
; %%--%%no_object_scale : Background loss scale : Loss scale for background. (default 1.0)
; %%--%%coord_scale : Coordinates loss scale : Loss scale for coordinates. (default 1.0)
; %%--%%valid_image_dir : Path to validation images : Images used for validation. (default none)
; %%--%%valid_annot_dir : Path to validation annotations : Path to the validation box files. (default none)
; %%--%%warmup : Warm up epochs : Number of warmup epochs. (default 5)
; %%--%%gpu : GPUs : List of GPUs to use. (default 0)
; %%--%%gpu_fraction : GPU memory fraction : Specify the fraction of memory per GPU used by crYOLO during training. Only values between 0.0 and 1.0 are allowed.
(default 1.0)
; %%--%%num_cpu : Number of CPUs : Number of CPUs used during training. By default it will use half of the available CPUs. (default -1)

===== Output =====
It will write a .h5 file (default cryolo_model.h5) into your project directory.

===== Description =====
The training is divided into two parts.
1. Warmup: It prepares the network with a few epochs of training without actually estimating the size of the particle.
2. Actual training: The training will stop when the loss on the validation data stops improving.

==== Method ====
See the reference below.

==== Time and Memory ====
Training needs a GPU with ~8 GB memory. Training on 20 micrographs typically needs ~20 minutes.

==== Developer Notes ====
=== 2019/09/19 Thorsten Wagner ===
* Initial creation of the document

==== Reference ====
https://doi.org/10.1101/356584

==== Author / Maintainer ====
Thorsten Wagner

==== Keywords ====
Category 1:: APPLICATIONS

==== Files ====
sparx/bin/sp_cryolo_train.py

==== See also ====
[[pipeline:window:cryolo|crYOLO]]

==== Maturity ====
Stable

==== Bugs ====
None right now.
• pipeline/window/sp_cryolo_train.txt
|
{}
|
MathSciNet bibliographic data: MR946423 55P35 (55S12 55T20). Hunter, Thomas J. On $H_*(\Omega^{n+2}S^{n+1};\mathbf{F}_2)$. Trans. Amer. Math. Soc. 314 (1989), no. 1, 405–420.
|
{}
|
# A trigonometric inequality involving sine
Let $0<a<\pi/2,0<b<\pi/2$, $0<\lambda<1, \mu=1-\lambda$. Does anyone see a good proof of the inequality:
$$\sin(\lambda a)\sin(\lambda b)+\sin(\lambda a)\sin(\mu b)\cos(b)+\sin(\lambda b)\sin(\mu a)\cos(a)+\sin(\mu a)\sin(\mu b)>\sin(a)\sin(b).$$
The LHS equals the RHS for $a=b=\pi/4,\lambda=\mu=1/2$. – mathlove Dec 19 '15 at 21:00
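Not a proof, but a quick numerical scan lends some confidence, and it also suggests the inequality is sharp (equality, not strict inequality) along the curve $\lambda=1/2$, $a=b$ noted in the comment above. A small Python sketch; the grid resolution is arbitrary:

```python
import math

def gap(a, b, lam):
    """LHS minus RHS of the conjectured inequality."""
    mu = 1.0 - lam
    lhs = (math.sin(lam * a) * math.sin(lam * b)
           + math.sin(lam * a) * math.sin(mu * b) * math.cos(b)
           + math.sin(lam * b) * math.sin(mu * a) * math.cos(a)
           + math.sin(mu * a) * math.sin(mu * b))
    return lhs - math.sin(a) * math.sin(b)

# Coarse scan of the open box 0 < a, b < pi/2, 0 < lam < 1.
grid_ab = [0.01 + 0.05 * i for i in range(31)]
grid_lam = [0.05 * j for j in range(1, 20)]
worst = min(gap(a, b, lam) for a in grid_ab for b in grid_ab for lam in grid_lam)
print("smallest gap found:", worst)
```

No counterexample appears on this grid; the smallest gaps occur near $\lambda=1/2$, $a=b$, where the two sides coincide exactly.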
|
{}
|
# Bringing the inequality issue to the debate table
## Features
• Author: Carlos Grajales
• Date: 15 Jun 2018
• Copyright: Image appears courtesy of Getty Images
In January, just in time for the World Economic Forum in Davos, Oxfam released some data pertaining to the current situation of the world's inequality [1], which translated into news headlines such as "World's richest 1% get 82% of the wealth" [2] or "The World's Richest 1% Took Home 82% of Wealth Last Year, Oxfam Says" [3]. According to Oxfam, the world's inequality numbers have taken a turn for the worse, as much of the world's recent wealth has been acquired by the very few. Still, one indicator seemed to be looking better: in 2017 the 42 richest people in the world had as much wealth as the poorest half of the planet, which seemed a clear improvement on the 8 billionaires who were as rich as the world's poorest half in 2016 [4]… until Oxfam revised the 2016 number to 61 [2].
With such a dramatic shift in the estimates (and the hardly coincidental timing), it is not a surprise that this year’s report has spurred such controversy. Well, actually every year the report does spur controversy: some people say the work is nothing more than an attempt to undermine capitalism, while others describe it as a media attention grabber that does nothing to help fix the problem of global inequality [5]. More interestingly, some claim the report is mired with data problems, which to me is the perfect excuse to take a look at Oxfam’s methodology and compare it with some other measures of inequality. Even more, these numbers give us a great incentive to discuss what inequality is and why it is so hard to measure.
Economic inequality is a broad concept, which involves the differences found in various measures of economic well-being among individuals in a group, among groups in a population, or among countries [6]. Still, when we read about high “inequality”, what the media usually talks about is Wealth Inequality, which refers specifically to the differences found in the distribution of ownership of the assets within a society [7]. As you can imagine, measuring such gaps is not a trivial matter. Simply defining wealth is a tricky task in itself. A commonly used definition, which is in fact the one used by Oxfam in their research, defines wealth as the difference between the total amount of assets an individual has and his/her liabilities [7] [8]. This definition is one of the first complaints researchers have on Oxfam’s methodology [4]. Assets minus liabilities means that young people are disproportionately poor, just consider the case of a recently graduated student, who does not have any assets (lands, houses or property) but who most certainly has some important debts (college fees). Most of us would agree that a recently graduated young woman has a bright future with important earning potential, but this definition of wealth will likely rank her among the poorest.
There is a reason why Oxfam uses this definition: they have no choice. They have no choice because they do not use their own data for any of the reported results; they rely completely on data published by the Credit Suisse Research Institute. The institute publishes the Global Wealth Report [8] every year, a publication in which wealth information from many countries around the globe is gathered and analyzed, an exercise repeated since 2000.
According to the Credit Suisse data, if we summed up everything you own, everything your neighbour owns and all the assets the whole planet possesses, including everything I own, which admittedly is not much, the world had an overall estimated wealth of 280 trillion US dollars in 2017. To reach this number, the Global Wealth Report estimates the wealth holdings of households from all around the world. Researchers first estimate the average level of wealth for each country. This is done on a country-by-country basis, using whatever information is available, with emphasis on "whatever". For 48 countries, the report relies on household balance sheet (HBS) data, although for 25 of these the information covers only financial assets [8]. For another 4 countries, which include China and India, estimates are based on household survey data, a less than ideal method for estimating income or wealth, as reported by official surveyors. For instance, in Mexico, the National Institute of Statistics and Geography reported consistent problems while using a standardized survey to estimate income in the country [9]. Still, these are the countries where the Global Wealth Report actually has some data. For most of the world, the Credit Suisse Research Institute has no reliable source of information, so estimates of average wealth levels are obtained with modeling techniques, specifically with three regression models aimed at predicting non-financial assets, financial assets and liabilities [8]. Two of these models are estimated as Seemingly Unrelated Regressions (SUR), which implies that their errors are assumed to be correlated; this is not done for the remaining model, as the authors claim the sample size is insufficient. Finally, for about 44 countries there is absolutely no information available, and their wealth is estimated as the regional average.
After the wealth of each country has been estimated, a second step involves constructing the distribution of wealth holdings within each nation [8]. For 31 countries, survey data is used. For the rest of the world, the authors exploit the relationship between wealth distribution and income distribution, fitting models that generate very rough estimates of wealth distribution for countries that have information available on income distribution. For countries where information on the model predictors is not available, the regional average is again used.
These procedures generate a wealth distribution curve for each country. This information is used with a modified version of the Shorrocks-Wan ungrouping program [8], which constructs a simulated sample conforming exactly to any set of values from a Lorenz curve, which is the traditional graphical representation of income or wealth distribution. This produces samples for each country in the world, each with its corresponding sampling weight to match population totals and consistent with that country’s estimated wealth distribution. Samples for all countries of the planet are obtained and then merged in a single, global dataset of about 1.2 million records, which is the one Oxfam works with. Well, almost. A final adjustment is made, based on Forbes’ list of billionaires [8]. They use the number of billionaires reported by Forbes to fit a Pareto distribution to the upper tail of 56 countries and replace the estimated wealth with the new values.
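Summary measures such as the Gini coefficient are computed from exactly this kind of weighted, Lorenz-curve-based sample. A minimal sketch (the numbers are hypothetical, nothing here comes from the Credit Suisse data) of a weighted Gini obtained via the trapezoidal area under the Lorenz curve:

```python
def gini(wealth, weights=None):
    """Gini coefficient of an (optionally weighted) sample.
    0 = perfect equality; values near 1 = extreme concentration."""
    if weights is None:
        weights = [1.0] * len(wealth)
    pairs = sorted(zip(wealth, weights))
    total_w = sum(w for _, w in pairs)
    total_xw = sum(x * w for x, w in pairs)
    area = 0.0                      # area under the Lorenz curve
    cum_w = cum_xw = 0.0
    prev_pop = prev_share = 0.0
    for x, w in pairs:
        cum_w += w
        cum_xw += x * w
        pop, share = cum_w / total_w, cum_xw / total_xw
        area += (pop - prev_pop) * (share + prev_share) / 2.0
        prev_pop, prev_share = pop, share
    return 1.0 - 2.0 * area

# Hypothetical five-person economy where one person holds 80% of the wealth:
print(round(gini([1, 1, 1, 1, 16]), 3))  # prints 0.6
```

In real wealth data, negative net wealth (debts exceeding assets) complicates this picture, which is one reason the definition of wealth discussed above matters so much.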
In brief, to estimate global wealth the Credit Suisse Research Institute uses official government data, household surveys, mean imputation, modeling techniques and even Forbes' list of billionaires. Obviously, mixing all those sources of information drastically reduces the accuracy of the estimation. In fact, just using survey data from different countries in the same context is less than ideal if the survey methodology is not standardized; now consider merging all those different data sources. The authors themselves conclude that the estimation is far from perfect and that revisions are "inevitable" [8].
With this dataset, Oxfam calculates all statistics for their report, at least for the wealth distribution. They use other sources of information for specific parts of the report, such as their estimates of tax evasion, which are based on work by Gabriel Zucman, or the wealth accumulated by billionaires, which is entirely based on Forbes’ list [10].
The Oxfam team is well aware of the liabilities in the Credit Suisse data they use, particularly the wealth estimates for certain countries, which tend to be way off. The reason is the heterogeneous data sources the Credit Suisse report uses, which produce not-so-accurate results for particular regions. This is why Oxfam made such abrupt shifts in its estimates of wealth for the bottom 50% of the world from 2016, which led to the increase in the estimated number of billionaires who accumulated enough wealth to match the fortunes of the bottom half of the planet. Initially, in their 2016 report, Oxfam published that it required only the 8 richest persons in the world to match the fortune of half of the planet [4]. But this number assumed that the total accumulated wealth of the bottom 50 percent was $409 billion. This year's Credit Suisse report uncovered about $8 trillion of global wealth in 2016 that was previously not counted, and more than $1 trillion of it belonged to the poorest half of the planet, meaning that their accumulated wealth actually more than tripled [5]. Most of the "discovered" assets came from India, China and Russia, places where household survey data did not accurately measure wealth. After Oxfam considered this additional money, they revised the number of rich people required to match the wealth of the bottom half of mankind to 61. This actually means that the world was not in such a bad condition as we thought in 2016: average net wealth for the bottom 50 percent is no longer $110 per person, as Oxfam first estimated, but $427 per person [5]. Whether this year's numbers also failed to consider trillions of dollars of fortunes is another matter entirely, and we probably won't find out until next year's numbers are published and this year's revised.
With this in consideration, it is no surprise that Oxfam’s yearly report is viewed with skepticism by a fair share of the scientific community. But considering the immense challenge that represents gathering income and wealth data for all countries on earth, it might not be appropriate to immediately discredit Oxfam’s effort. The Credit Suisse data has many opportunities to improve but it is still the only resource available to offer global estimates of wealth. Besides, Oxfam data is often in line with other research done on wealth inequality, at least within countries. Pew Research Center published in 2017 that wealth gap between upper-income families and lower- and middle-income families reached the highest levels recorded, based on data from the Federal Reserve Board’s Survey of Consumer Finances [11]. As I mentioned, measuring wealth with surveys is incredibly hard, as people tend to under-report their assets or, worse, tend to not participate at all. Usually upper-income households are harder to reach with surveys, which is another issue. But even with all those caveats, survey data in the U.S. agrees with Oxfam’s conclusions. Stanford’s Center on Poverty & Inequality also provides yearly analyses of trends in income and wealth inequality for the U.S., this time based on annual updates to tax data [12], with similar results. Even more, a closely related indicator, income inequality, has shown signs of going upward in many parts of the world, at least according to data from OECD countries [13]. Their estimates reflect that the average income of the richest 10% of the population is about nine times that of the poorest 10% across all OECD countries, the largest gap for the past half century.
And to be fair, it is certainly true that Oxfam's greatest interest lies not in providing the most accurate estimates of wealth distribution, but in bringing the issue of global wealth inequality to light and motivating a broader discussion of the economic issues and policies that could help drive inequality down. No one actually knows how to do that, as the causes of wealth inequality are a topic of intense debate even today. Education has been identified as an important factor that drives inequality: people with lower education levels have their opportunities restrained, and their wealth is affected in consequence [14]. Low-skilled jobs tend to be associated with lower positions in the wealth distribution as well, as competition for these positions has decreased, thus exacerbating the effect of education on the wealth gap. Environmental factors, such as the political order, tax policy or stagnant wages, are also major drivers of inequality. But even though most economists agree on the influence of these factors, there is ongoing debate on the actual mechanisms that explain the relationships between these variables and inequality itself.
As such, Oxfam’s report is far from being a perfect statistical exercise, but it is the best available resource we have for a global comparison. And even as imperfect as it is, it has certainly been successful in its other goal: bringing the inequality issue to the debate table.
REFERENCES:
[1] Reward Work, Not Wealth. Oxfam International (Jan 2018)
https://www.oxfam.org/sites/www.oxfam.org/files/file_attachments/bp-reward-work-not-wealth-220118-en.pdf?cid=aff_affwd_donate_id78888&awc=5991_1516715345_0a84322c20ef396277dc8ed070020d3e
[2] Hope, Katie. 'World's richest 1% get 82% of the wealth', says Oxfam. BBC News (Jan 2018)
[3] The World's Richest 1% Took Home 82% of Wealth Last Year, Oxfam Says. Fortune (Jan 2018)
http://fortune.com/2018/01/21/oxfam-report-global-inequality-billionaires/
[4] Hope, Katie. Eight billionaires 'as rich as world's poorest half'. BBC News (Jan 2017)
[5] Murphy, Tom. Why The Internet Loves And Hates Oxfam's Global Inequality Report. NPR (Jan 2018)
https://www.npr.org/sections/goatsandsoda/2018/01/24/580087309/why-the-internet-loves-and-hates-oxfam-s-global-inequality-report
[6] Economic Inequality. Wikipedia, The Free Encyclopedia (last edited March 2018)
https://en.wikipedia.org/wiki/Economic_inequality
[7] Distribution of Wealth. Wikipedia, The Free Encyclopedia (last edited March 2018)
https://en.wikipedia.org/wiki/Distribution_of_wealth
[8] Global Wealth Report 2017. The Credit Suisse Research Institute (Nov 2017)
https://www.credit-suisse.com/corporate/en/research/research-institute/global-wealth-report.html
[9] Barragán, Daniela & Olvera, Dulce. El 63.3% del ingreso en México se concentra sólo en el 30% de los hogares, dice encuesta del INEGI. (In Spanish) SinEmbargo (Aug 2017)
http://www.sinembargo.mx/28-08-2017/3294496
[10] Reward Work, Not Wealth: Methodology Note. Oxfam International (Jan 2018)
https://oxfamilibrary.openrepository.com/oxfam/bitstream/10546/620396/42/tb-reward-work-not-wealth-methodology-note-220118-en.pdf
[11] Kochhar, Rakesh et al. How wealth inequality has changed in the U.S. since the Great Recession, by race, ethnicity and income. Pew Research Center (Nov 2017)
http://www.pewresearch.org/fact-tank/2017/11/01/how-wealth-inequality-has-changed-in-the-u-s-since-the-great-recession-by-race-ethnicity-and-income/
[12] Research Projects. Stanford Center on Poverty & Inequality (Stanford University, 2017)
https://inequality.stanford.edu/research/projects#income-wealth
[13] Inequality. OECD Centre for Opportunity and Equality, OECD (2018)
http://www.oecd.org/social/inequality.htm
[14] Income Inequality. Investopedia (2018)
https://www.investopedia.com/terms/i/income-inequality.asp
|
{}
|
# Difference in sample variance formulas
So I have a few formulas, and I am not really sure when each formula is applicable. The first formula is:
$$\newcommand{\Var}{{\rm Var}} \Var[\bar Y] = \sigma^2_\bar Y = \frac {(N-n)\sigma^2}{(N-1)n}$$
My understanding, although it may be wrong, is that this is the variance of the population mean?
My next is $\Var[\bar Y_n]\approx (1-f)\frac {\sigma^2}{n}$, where $f=\frac {n}{N}$. My understanding is that this is the sample variance?
And finally,
$$s^2_\bar y = (1-f)\frac {s^2}{n}$$
Which I understand to be the variance of the sample mean.
I just want to know when to apply each variance and I have been searching online but I don't really know how to phrase it to get the answers I'm looking for.
• where did you get these formulas from? – Lucas Farias May 26 '17 at 23:36
• @lucasfariaslf My university notes, why do you ask? – user162934 May 26 '17 at 23:39
• Because I can't remember seeing it before, but maybe this could help you situate yourself: talkstats.com/showthread.php/… – Lucas Farias May 27 '17 at 0:14
• You will need to provide some context for these. What precedes these in your notes? What are the topics of the sections they show up in? Etc. – gung May 27 '17 at 1:17
Your notes are from a survey sampling class. Survey sampling is a branch of statistics that deals with FINITE populations. Its theory is different from that of general statistics.

$\sigma^2$ is the variance of $Y$ in the population. (The population mean is a constant, so its variance is 0; strictly speaking, it has no variance.)

$\operatorname{Var}[\bar Y]$ is the variance of the sample mean. $\operatorname{Var}[\bar Y_n]$ is the same as $\operatorname{Var}[\bar Y]$; the writer simply did not make the notation consistent.

The last one, $s_{\bar y}^2$, is the estimate of $\operatorname{Var}[\bar Y_n]$.
• I'd say there may be a 'problem' if the student doesn't know what class he is taking. – wolfies May 27 '17 at 18:45
Your formulas are appropriate when the sample is a non-negligible fraction of a fixed population.
The variance of $\bar Y_n$ (from a frequentist viewpoint) is the variance over hypothetical replications of drawing a sample of size $n$. If the samples are drawn from the same population of size $N$ there will be some overlap between samples. On average, each observation in a repeated sample will have a probability $n/N$ of having been in the original sample, so only the $(1-n/N)$ fraction of new observations contribute to the variance. At the extreme, if $n\approx N$, all the samples will be nearly identical.
If the samples are drawn from a very large population you have $(N-n)/(N-1)\approx 1$ and $1-f\approx 1$. If the samples are drawn from a data-generating process that doesn't correspond to a finite population (e.g. $Y_i\sim N(0,1)$), $N$ isn't a thing, but the formulas still work if you think of $N$ as infinite, so that $(N-n)/(N-1)=1$ and $f=0$. In practice, even samples from a finite population are often analysed this way; the survey-sampling jargon for it is "sampling with replacement".
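The finite population correction is easy to verify by simulation: draw repeated samples without replacement from a fixed population and compare the empirical variance of $\bar Y_n$ with $(1-f)S^2/n$. A sketch with made-up data (here `S2` is the $N-1$ divisor variance, matching the second formula above):

```python
import random
import statistics

random.seed(1)
population = [random.gauss(50, 10) for _ in range(200)]  # a fixed finite population
N, n = len(population), 50
f = n / N

# Theory: Var[Ybar] = (1 - f) * S^2 / n under simple random sampling
# without replacement, where S^2 uses the N-1 divisor.
S2 = statistics.variance(population)
theory = (1 - f) * S2 / n

# Empirical check over many repeated samples drawn without replacement.
means = [statistics.fmean(random.sample(population, n)) for _ in range(20000)]
empirical = statistics.pvariance(means)
print("theory:", theory, "empirical:", empirical)
```

The two numbers agree closely, and both are visibly smaller than the with-replacement value $S^2/n$, which is the whole effect of the $(1-f)$ factor.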
|
{}
|
Mathematics » "Manifolds - Current Research Areas", book edited by Paul Bracken, ISBN 978-953-51-2872-4, Print ISBN 978-953-51-2871-7, Published: January 18, 2017 under CC BY 3.0 license. © The Author(s).
# Spectral Theory of Operators on Manifolds
By Paul Bracken
DOI: 10.5772/67095
## Abstract
Differential operators that are defined on a differentiable manifold can be used to study various properties of manifolds. The spectrum and eigenfunctions play a very significant role in this process. The objective of this chapter is to develop the heat equation method and to describe how it can be used to prove the Hodge Theorem. The Minakshisundaram‐Pleijel parametrix and asymptotic expansion are then derived. The heat equation asymptotics can be used to give a development of the Gauss‐Bonnet theorem for two‐dimensional manifolds.
Keywords: manifold, operator, differential form, Hodge theory, eigenvalue, partial differential operator, Gauss‐Bonnet
## 1. Introduction
Topological and geometric properties of a manifold can be characterized and further studied by means of differential operators, which can be introduced on the manifold. The only natural differential operator on a manifold is the exterior derivative operator, which takes $k$-forms to $(k+1)$-forms. This operation is defined purely in terms of the smooth structure of the manifold and is used to define the de Rham cohomology groups. These groups can be related to other topological quantities such as the Euler characteristic. When a Riemannian metric is defined on the manifold, a set of differential operators can be introduced. The Laplacian on $k$-forms is perhaps the most well known, as well as other elliptic operators.
On a compact manifold, the spectrum of the Laplacian on $k$-forms contains topological as well as geometric information about the manifold. The Hodge theorem relates the dimension of the kernel of the Laplacian to the $k$-th Betti number, requiring them to be equal. The Laplacian determines the Euler characteristic of the manifold. A sophisticated approach to obtaining information related to the manifold is to consider the heat equation on $k$-forms, with its solution given by the heat semigroup [13].
The heat kernel is one of the more important objects in such diverse areas as global analysis, spectral geometry, differential geometry, as well as in mathematical physics in general. As an example from physics, the main objects that are investigated in quantum field theory are described by Green functions of self‐adjoint, elliptic partial differential operators on manifolds as well as their spectral invariants, such as functional determinants. In spectral geometry, there is interest in the relation of the spectrum of natural elliptic partial differential operators with respect to the geometry of the manifold [46].
Currently, there is great interest in the study of nontrivial links between the spectral invariants and nonlinear, completely integrable evolutionary systems, such as the Korteweg‐de Vries hierarchy. In many interesting situations, these systems are actually infinite‐dimensional Hamiltonian systems. The spectral invariants of a linear elliptic partial differential operator are nothing but the integrals of motion of the system. There are many other applications to physics such as to gauge theories and gravity [7].
In general, the existence of nonisometric isospectral manifolds implies that the spectrum alone does not determine the geometry entirely. It is also important to study more general invariants of partial differential operators that are not spectral invariants. This means that they depend not only on the eigenvalues but also on the eigenfunctions of the operator. Therefore, they contain much more information with respect to the underlying geometry of the manifold.
The spectrum of a differential operator is not only studied directly; the related spectral functions, such as the spectral traces of functions of the operator, the zeta function and the heat trace, are relevant as well [8, 9]. Often the spectrum is not known exactly, which is why different asymptotic regimes are investigated [10, 11]. The small-parameter asymptotic expansion of the heat trace yields information concerning the asymptotic properties of the spectrum. The trace of the heat semigroup as the parameter approaches zero is controlled by an infinite sequence of geometric quantities, such as the volume of the manifold and the integral of its scalar curvature. The large-parameter behavior of the traces of the heat kernels is parameter independent and in fact equals the Euler characteristic of the manifold. The small-parameter behavior is given by an integral of a complicated curvature-dependent expression. It is quite remarkable that when the dimension of the manifold equals two, the equality of the short- and long-term behaviors of the heat flow implies the classic Gauss-Bonnet theorem. The main objectives of the chapter are to develop the heat equation approach with a Schrödinger operator on a vector bundle and to outline how it leads to the Hodge theorem [12, 13]. The heat equation asymptotics will be developed [14, 15], and it is seen that the Gauss-Bonnet theorem can be proved for a two-dimensional manifold on this basis. Moreover, this kind of approach suggests that there is a generalization of the Gauss-Bonnet theorem in dimensions greater than two [16, 17].
## 2. Geometrical preliminaries
For an $n$-dimensional Riemannian manifold $M$, an orthonormal moving frame $\{e_1,\dots,e_n\}$ can be chosen, with $\{\omega^1,\dots,\omega^n\}$ the accompanying dual coframe, which satisfy

$$\omega^i(e_j)=\delta_{ij},\qquad i,j=1,\dots,n \tag{1}$$

It is then possible to define a system of one-forms $\omega_{ij}$ and two-forms $\Omega_{ij}$ by solving the equations

$$\nabla_X e_i=\sum_j \omega_{ji}(X)\,e_j,\qquad R(X,Y)e_i=\sum_j \Omega_{ji}(X,Y)\,e_j \tag{2}$$

It then follows that the Christoffel coefficients and the components of the Riemann tensor of $M$ are

$$\omega_{ij}(e_k)=\sum_a \langle \omega_{aj}(e_k)\,e_a,\,e_i\rangle_g=\langle \nabla_{e_k}e_j,\,e_i\rangle_g=\Gamma_{kji} \tag{3}$$

$$\Omega_{ij}(e_k,e_s)=\sum_a \langle \Omega_{aj}(e_k,e_s)\,e_a,\,e_i\rangle_g=\langle R(e_k,e_s)e_j,\,e_i\rangle_g=R_{ksji} \tag{4}$$

The inner product induced by the Riemannian metric on $M$ is denoted here by $\langle\cdot,\cdot\rangle_g:\Gamma(TM)\times\Gamma(TM)\to \mathcal F(M)$, and it induces a metric on $\Lambda^k(M)$ as well. Using the Riemannian metric and the measure on $M$, an inner product $\langle\langle\cdot,\cdot\rangle\rangle:\Lambda^k(M)\times\Lambda^k(M)\to\mathbb R$ can be defined on $\Lambda^k(M)$ so that for $\alpha,\beta\in\Lambda^k(M)$,

$$\langle\langle\alpha,\beta\rangle\rangle=\int_M \langle\alpha,\beta\rangle_g\,dv_M \tag{5}$$

where, if $(x_1,\dots,x_m)$ is a system of local coordinates,

$$dv_M=\sqrt{\det(g_{ij})}\;dx_1\wedge\cdots\wedge dx_m$$

is the Riemannian measure on $M$. Clearly, $\langle\langle\alpha,\beta\rangle\rangle$ is linear in $\alpha$ and $\beta$, and $\langle\langle\alpha,\alpha\rangle\rangle\ge 0$ with equality if and only if $\alpha=0$. Hodge introduced a star homomorphism $*:\Lambda^k(M)\to\Lambda^{n-k}(M)$, which is defined next.
Definition 2.1. (i) For $\omega=\sum_{i_1<\cdots<i_k} f_{i_1\cdots i_k}\,\omega^{i_1}\wedge\cdots\wedge\omega^{i_k}$, define

$$*\omega=\sum_{i_1<\cdots<i_k}\ \sum_{j_1<\cdots<j_{n-k}} f_{i_1\cdots i_k}\,\epsilon(i_1,\dots,i_k,j_1,\dots,j_{n-k})\,\omega^{j_1}\wedge\cdots\wedge\omega^{j_{n-k}},$$

where $\epsilon$ is $1$, $-1$ or $0$ according to whether $(i_1,\dots,i_k,j_1,\dots,j_{n-k})$ is an even permutation of $(1,\dots,n)$, an odd permutation, or not a permutation at all.

(ii) If $M$ is an oriented Riemannian manifold of dimension $n$, define the operator

$$\delta=(-1)^{nk+n+1}*d*:\Lambda^k(M)\to\Lambda^{k-1}(M) \tag{6}$$
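On flat $\mathbb R^n$ with the standard orthonormal coframe, the star of a basis $k$-form is just the complementary basis form weighted by the permutation sign $\epsilon$ from Definition 2.1(i). A small Python sketch (the function names are illustrative, not from the text) that computes this sign directly:

```python
def perm_sign(p):
    """Sign of the permutation i -> p[i] (0-based), via cycle decomposition."""
    sign, seen = 1, set()
    for start in range(len(p)):
        if start in seen:
            continue
        length, j = 0, start
        while j not in seen:
            seen.add(j)
            j = p[j]
            length += 1
        if length % 2 == 0:   # even-length cycle flips the sign
            sign = -sign
    return sign

def hodge_star_basis(indices, n):
    """Hodge star of the basis k-form w^{i1} ^ ... ^ w^{ik} on flat R^n:
    returns (sign, complementary indices), both 1-based and increasing."""
    comp = tuple(sorted(set(range(1, n + 1)) - set(indices)))
    seq = tuple(i - 1 for i in indices + comp)  # a permutation of 0..n-1
    return perm_sign(seq), comp

# Familiar identities on R^3 (w^1, w^2, w^3 an orthonormal coframe):
print(hodge_star_basis((1,), 3))     # (1, (2, 3))   *w^1 =  w^2 ^ w^3
print(hodge_star_basis((2,), 3))     # (-1, (1, 3))  *w^2 = -w^1 ^ w^3
print(hodge_star_basis((1, 3), 3))   # (-1, (2,))    *(w^1 ^ w^3) = -w^2
```

These three cases recover the familiar identifications $*dx=dy\wedge dz$, $*dy=-\,dx\wedge dz$ and $*(dx\wedge dz)=-dy$ on $\mathbb R^3$.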
In terms of the two operators $d$ and $\delta$, the Laplacian acting on $k$-forms can be defined on the two subspaces

$$\Lambda^{even}(M)=\bigoplus_{k\ even}\Lambda^k(M),\qquad \Lambda^{odd}(M)=\bigoplus_{k\ odd}\Lambda^k(M) \tag{7}$$

The operator $d+\delta$ can be regarded as a pair of operators on these subspaces,

$$D_0=d+\delta:\Lambda^{even}(M)\to\Lambda^{odd}(M),\qquad D_1=d+\delta:\Lambda^{odd}(M)\to\Lambda^{even}(M) \tag{8}$$

Definition 2.2. Let $M$ be a Riemannian manifold; then the operator

$$D_0=d+\delta:\Lambda^{even}(M)\to\Lambda^{odd}(M) \tag{9}$$

is called the Hodge-de Rham operator. It is self-conjugate in the sense that $D_0^*=D_1$ and $D_1^*=D_0$. In studying the Laplacian it is useful to have a formula for the operator $\Delta=(d+\delta)^2$, and hence for $D_0^*D_0$ and $D_1^*D_1$ as well.
Let $\{e_1,\dots,e_n\}$ be an orthonormal moving frame defined on an open set $U$. Define as well the pair of operators

$$E_j^+=\omega^j\wedge\cdot+i(e_j):\Lambda^*(U)\to\Lambda^*(U),\qquad E_j^-=\omega^j\wedge\cdot-i(e_j):\Lambda^*(U)\to\Lambda^*(U) \tag{10}$$

Lemma 2.1. The operators $E_j^{\pm}$ satisfy the following relations:

$$E_i^+E_j^++E_j^+E_i^+=2\delta_{ij},\qquad E_i^+E_j^-+E_j^-E_i^+=0,\qquad E_i^-E_j^-+E_j^-E_i^-=-2\delta_{ij} \tag{11}$$
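The relations of Lemma 2.1 are purely algebraic, so they can be checked mechanically in a finite model. The sketch below (an illustrative implementation, not from the chapter) represents forms on $\mathbb R^3$ as dictionaries from index sets to coefficients, implements wedge and interior product with the usual sign conventions, and verifies all three anticommutation relations on every basis form:

```python
from itertools import combinations

N = 3  # work in the exterior algebra of R^3

def wedge(j, form):
    """Left wedge by w^j; a form is {frozenset(indices): coeff}."""
    out = {}
    for S, c in form.items():
        if j in S:
            continue
        sign = (-1) ** sum(1 for i in S if i < j)
        T = frozenset(S | {j})
        out[T] = out.get(T, 0) + sign * c
    return out

def interior(j, form):
    """Interior product i(e_j) with respect to an orthonormal frame."""
    out = {}
    for S, c in form.items():
        if j not in S:
            continue
        sign = (-1) ** sorted(S).index(j)
        T = frozenset(S - {j})
        out[T] = out.get(T, 0) + sign * c
    return out

def E(j, s, form):
    """E_j^{+} for s = +1, E_j^{-} for s = -1."""
    w, it = wedge(j, form), interior(j, form)
    return {S: w.get(S, 0) + s * it.get(S, 0) for S in set(w) | set(it)}

def add(f, g):
    return {S: f.get(S, 0) + g.get(S, 0) for S in set(f) | set(g)}

basis = [frozenset(c) for k in range(N + 1)
         for c in combinations(range(1, N + 1), k)]

def check(i, j, si, sj, scalar):
    """Assert E_i^{si} E_j^{sj} + E_j^{sj} E_i^{si} = scalar * Id on all basis forms."""
    for S in basis:
        form = {S: 1}
        anti = add(E(i, si, E(j, sj, form)), E(j, sj, E(i, si, form)))
        target = {S: scalar}
        for T in set(anti) | set(target):
            assert anti.get(T, 0) == target.get(T, 0)

for i in range(1, N + 1):
    for j in range(1, N + 1):
        d = 2 if i == j else 0
        check(i, j, +1, +1, d)     # E+ E+ anticommutator
        check(i, j, +1, -1, 0)     # mixed anticommutator
        check(i, j, -1, -1, -d)    # E- E- anticommutator
print("Lemma 2.1 relations verified on all basis forms of Lambda*(R^3)")
```

This is exactly the Clifford-algebra structure that makes the Weitzenböck formula below work.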
If $M$ is a Riemannian manifold and $\nabla:\Gamma(TM)\times\Gamma(TM)\to\Gamma(TM)$ is the Levi-Civita connection, then a connection on the space $\Lambda^*(M)$, namely $(X,\omega)\mapsto\nabla_X\omega$, can also be defined such that

$$(\nabla_X\omega)(Y)=X(\omega(Y))-\omega(\nabla_X Y),\qquad Y\in\Gamma(TM)$$

The connection may be regarded as a first-order derivative operator; from it a second-order derivative operator $(X,Y,\omega)\mapsto D(X,Y)\omega$ is constructed.

Definition 2.3. The second-order derivative operator $(X,Y,\omega)\mapsto D(X,Y)\omega$ is defined to be

$$D(X,Y)\omega=\nabla_X\nabla_Y\omega-\nabla_{\nabla_X Y}\,\omega \tag{12}$$

In terms of the operator (12), define a second-order differential operator $\Delta_0:\Lambda^*(M)\to\Lambda^*(M)$ by

$$\Delta_0=\sum_i D(e_i,e_i), \tag{13}$$

where $\{e_i\}_1^n$ is an orthonormal moving frame. The operator $\Delta_0$ in Eq. (13) is referred to as the Laplace-Beltrami operator.

Theorem 2.1 (Weitzenböck). Let $M$ be a Riemannian manifold with an associated orthonormal moving frame $\{e_i\}_1^n$. The Laplace operator can be expressed as

$$\Delta=(d+\delta)^2=-\Delta_0-\frac18\sum_{i,j,k,s}R_{ijks}\,E_i^+E_j^+E_k^-E_s^-+\frac14 R \tag{14}$$

In Eq. (14), $R$ is the scalar curvature, $R=\sum_{i,j}R_{ijij}$, and $\Delta_0$ is the Laplace-Beltrami operator (13).

The operator defined by Eq. (14) contains no first-order covariant derivatives and is of a type called a Schrödinger operator. Thus, the Weitzenböck formula (14) shows that the Laplacian can be expressed in the form $\Delta=-\Delta_0+F$ with $F$ of order zero, and that it is an elliptic operator. The Schrödinger operator (14) can be used to define an operator that plays an important role in mathematical physics. The heat operator is defined to be

$$H=\frac{\partial}{\partial t}+\Delta \tag{15}$$
The crucial point for the theory of the heat operator is the existence of a fundamental solution. In fact, the Hodge theorem can be proved by making use of the fundamental solution.
Definition 2.4. Let $M$ be a Riemannian manifold and $\pi:E\to M$ a vector bundle with connection. Let $\Delta_0:\Gamma(E)\to\Gamma(E)$ be the Laplace-Beltrami operator defined by means of the Levi-Civita connection on $M$ and the connection on the vector bundle $E$, and let $F:\Gamma(E)\to\Gamma(E)$ be an $\mathcal F(M)$-linear map. Then $\Delta=-\Delta_0+F$ is a Schrödinger operator. If a family of $\mathbb R$-linear maps

$$G(t,q,p):E_p\to E_q$$

with parameter $t>0$ and $q,p\in M$ satisfies the following three conditions, the family is called a fundamental solution of the heat operator (15); here $E_p=\pi^{-1}(p)$. First, $G(t,q,p):E_p\to E_q$ is an $\mathbb R$-linear map of vector spaces and is continuous in all variables $t,q,p$. Second, for a fixed $w\in E_p$, let $\theta(t,q)=G(t,q,p)w$ for all $t>0$; then $\theta$ has continuous first and second derivatives in $t$ and $q$, respectively, and satisfies the heat equation $H\theta(t,q)=0$ for $t>0$, which can be written as

$$\left(\frac{\partial}{\partial t}+\Delta_q\right)G(t,q,p)=0 \tag{16}$$

where $\Delta_q$ acts on the variable $q$. Finally, if $\varphi$ is a continuous section of the vector bundle $E$, then

$$\lim_{t\to 0^+}\int_M G(t,q,p)\varphi(p)\,dv_p=\varphi(q)$$

for all $\varphi$, where $dv_p$ is the volume measure in the variable $p$ given in terms of the Riemannian metric.

Definition 2.5. Suppose an approximate kernel $G_0(t,q,p)$ is given. The following procedure, taking $G_0(t,q,p)$ to $G(t,q,p)$, is called the Levi algorithm:

$$K_0(t,q,p)=\left(\frac{\partial}{\partial t}+\Delta_q\right)G_0(t,q,p),\qquad K_{m+1}(t,q,p)=\int_0^t d\tau\int_M K_0(t-\tau,q,z)\,K_m(\tau,z,p)\,dv_z$$

$$\bar K(t,q,p)=\sum_{m=0}^{\infty}(-1)^{m+1}K_m(t,q,p),\qquad G(t,q,p)=G_0(t,q,p)+\int_0^t d\tau\int_M G_0(t-\tau,q,z)\,\bar K(\tau,z,p)\,dv_z \tag{17}$$
The Cauchy problem can be formulated for the heat equation such that existence, regularity and uniqueness of solution can be established. The Hilbert‐Schmidt theorem can be invoked to develop a Fourier expansion theorem applicable to this Schrödinger operator.
Suppose $\Delta:\Gamma(E)\to\Gamma(E)$ is a self-adjoint nonnegative Schrödinger operator; then there exists a set of $C^{\infty}$ sections $\{\psi_i\}\subset\Gamma(E)$ such that

$$\langle\langle\psi_i,\psi_j\rangle\rangle=\int_M \langle\psi_i(x),\psi_j(x)\rangle\,dv_x=\delta_{ij}$$

Moreover, denoting the completion of the inner product space $\Gamma(E)$ by $\overline{\Gamma(E)}$, the set $\{\psi_i\}$ is a complete set in $\overline{\Gamma(E)}$, so for any $\psi\in\overline{\Gamma(E)}$,

$$\psi=\sum_{i=1}^{\infty}\langle\langle\psi,\psi_i\rangle\rangle\,\psi_i$$

Finally, the set $\{\psi_i\}$ satisfies the equations

$$\Delta\psi_i=\lambda_i\psi_i,\qquad T_t\psi_i=e^{-t\lambda_i}\psi_i$$

where the $\lambda_i$ are the eigenvalues of $\Delta$ and form an increasing sequence $0\le\lambda_1\le\lambda_2\le\cdots$ with $\lim_{k\to\infty}\lambda_k=\infty$.
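The eigenfunction expansion and the relation $T_t\psi_i=e^{-\lambda_i t}\psi_i$ can be seen concretely in a toy discrete model (this finite-difference setting is illustrative only, not part of the chapter): the combinatorial Laplacian on functions on $\mathbb Z/n\mathbb Z$ has discrete cosines as eigenfunctions, and an explicit Euler heat flow started at an eigenfunction decays by exactly the predicted exponential factor.

```python
import math

n = 16                      # grid points on the circle Z/nZ
T, steps = 0.5, 20000
dt = T / steps

def lap(f):
    """Combinatorial Laplacian: (L f)(j) = 2 f(j) - f(j-1) - f(j+1)."""
    return [2 * f[j] - f[(j - 1) % n] - f[(j + 1) % n] for j in range(n)]

# Discrete eigenfunction psi_k(j) = cos(2 pi k j / n),
# with eigenvalue lam_k = 2 - 2 cos(2 pi k / n).
k = 3
psi = [math.cos(2 * math.pi * k * j / n) for j in range(n)]
lam = 2 - 2 * math.cos(2 * math.pi * k / n)

# 1) the eigenrelation L psi = lam psi
eig_err = max(abs(lp - lam * p) for lp, p in zip(lap(psi), psi))

# 2) heat flow u' = -L u by explicit Euler, started at psi
u = psi[:]
for _ in range(steps):
    lu = lap(u)
    u = [ui - dt * li for ui, li in zip(u, lu)]

# The semigroup predicts u(T) = exp(-lam T) psi, i.e. T_t psi = e^{-lam t} psi.
flow_err = max(abs(a - math.exp(-lam * T) * p) for a, p in zip(u, psi))
print(eig_err, flow_err)
```

Both errors are tiny: the first is floating-point noise, the second is the Euler discretization error, which shrinks as `steps` grows.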
Here $U(t,q)$ denotes $(T_t\psi)(q)$ with $U(0,q)=\psi(q)$; the family $T_t$ satisfies the semigroup property, and each $T_t$ is a self-adjoint, compact operator.
Theorem 2.2. Let $G(t,q,p)$ be the fundamental solution of the heat operator (15); then

$$G(t,q,p)w=\sum_{i=1}^{\infty} e^{-\lambda_i t}\,\langle\psi_i(p),w\rangle\,\psi_i(q) \tag{18}$$

with $w\in E_p$, the series converging in $\overline{\Gamma(E)}$.

Proof: For fixed $t>0$ and $w\in E_p$, expand $G(t,q,p)w$ in terms of the eigenfunctions $\psi_i(q)$:

$$G(t,q,p)w=\sum_{i=1}^{\infty}\sigma_i(t,p,w)\,\psi_i(q),\qquad \sigma_i(t,p,w)=\int_M\langle\psi_i(q),G(t,q,p)w\rangle\,dv_q$$

Differentiating with respect to $t$ and using $\Delta\psi_i=\lambda_i\psi_i$ together with the heat equation (16), we get

$$\frac{\partial}{\partial t}\sigma_i(t,p,w)=\int_M\Big\langle\psi_i(q),\frac{\partial}{\partial t}G(t,q,p)w\Big\rangle\,dv_q=-\int_M\langle\psi_i(q),\Delta_q G(t,q,p)w\rangle\,dv_q=-\int_M\langle\Delta_q\psi_i(q),G(t,q,p)w\rangle\,dv_q=-\lambda_i\int_M\langle\psi_i(q),G(t,q,p)w\rangle\,dv_q=-\lambda_i\,\sigma_i(t,p,w)$$

It follows from this that

$$\sigma_i(t,p,w)=c_i(p,w)\,e^{-\lambda_i t}$$

and since $\sigma_i$ depends linearly on $w$, we have $c_i(p,w)=c_i(p)w$, where $c_i(p):E_p\to\mathbb R$ is a linear function. Hence there exists $\tilde c_i(p)$ independent of $w$ such that $c_i(p)w=\langle\tilde c_i(p),w\rangle$, so that

$$G(t,q,p)w=\sum_{i=1}^{\infty}e^{-\lambda_i t}\,\psi_i(q)\,\langle\tilde c_i(p),w\rangle$$

Consequently, for any $\beta\in\Gamma(E)$, we have

$$\beta(q)=\lim_{t\to 0}\int_M G(t,q,p)\beta(p)\,dv_p=\sum_{k=1}^{\infty}\psi_k(q)\int_M\langle\tilde c_k(p),\beta(p)\rangle\,dv_p$$

Moreover, $\beta(q)$ can also be expanded in terms of the basis set $\{\psi_k\}$:

$$\beta(q)=\sum_{k=1}^{\infty}\psi_k(q)\int_M\langle\psi_k(p),\beta(p)\rangle\,dv_p$$

Upon comparing these last two expressions, it is clear that $\tilde c_k(p)=\psi_k(p)$ for all $k$, and we are done.
One application of the heat equation method developed so far is to develop and give a proof of the Hodge theorem.
Theorem 2.3. Let M,E,Δ be defined as done already, then
1. $H=\{\varphi\in\Gamma(E)\,|\,\Delta\varphi=0\}$ is a finite-dimensional vector space.
2. For any $\psi\in\Gamma(E)$, there is a unique decomposition of $\psi$ as $\psi=\psi_1+\psi_2$, where $\psi_1\in H$ and $\psi_2\in\Delta(\Gamma(E))$.
The first part is a direct consequence of the expansion theorem, and since $H\perp\Delta(\Gamma(E))$, the decomposition is unique.
The Hodge theorem has many applications, but one in particular fits here. It is used in conjunction with the de Rham cohomology group HdR*(M) . Define
Zk(M)=ker{d:Λk(M)→Λk+1(M)}≡{α∈Λk(M)| dα=0} (19)
Bk(M)= Im {d:Λk−1(M)→Λk(M)}≡d(Λk−1(M)) (20)
Since $d^2=0$, it follows that $B^k(M)\subset Z^k(M)$, and the $k$-th de Rham cohomology group of $M$ is defined to be
HdRk(M)=Zk(M)/Bk(M) (21)
From Eq. (21), construct
HdR*(M)=⊕k HdRk(M) (22)
In 1935, Hodge announced a theorem stating that every element of $H_{dR}^k(M)$ can be represented by a unique harmonic form $\alpha$, one which satisfies both $d\alpha=0$ and $\delta\alpha=0$. Denote the set of harmonic $k$-forms by $H^k(M)$.
Theorem 2.4. Let M be a Riemannian manifold of dimension n , then
Hk(M)=ker {d+δ:Λk(M)→Λ*(M)}=ker {Δ:Λk(M)→Λk(M)} (23)
where Δ=(d+δ)2 .
Proof: Since $\Delta=d\delta+\delta d$, it follows that $\Delta(\Lambda^k(M))\subset\Lambda^k(M)$, and it is clear that
$$H^k(M)=\ker\{d+\delta:\Lambda^k(M)\to\Lambda^*(M)\}\subset\ker\{\Delta:\Lambda^k(M)\to\Lambda^*(M)\}=\ker\{\Delta:\Lambda^k(M)\to\Lambda^k(M)\}.$$
To finish the proof, it suffices to show that $\ker\{\Delta:\Lambda^k(M)\to\Lambda^k(M)\}\subset H^k(M)$. If $\alpha\in\ker\{\Delta:\Lambda^k(M)\to\Lambda^k(M)\}$, that is $\Delta\alpha=0$, then
$$\langle\langle\Delta\alpha,\alpha\rangle\rangle=\langle\langle(d+\delta)^2\alpha,\alpha\rangle\rangle=\langle\langle(d+\delta)\alpha,(d+\delta)\alpha\rangle\rangle=\langle\langle d\alpha,d\alpha\rangle\rangle+\langle\langle\delta\alpha,\delta\alpha\rangle\rangle+2\langle\langle d\alpha,\delta\alpha\rangle\rangle=\langle\langle d\alpha,d\alpha\rangle\rangle+\langle\langle\delta\alpha,\delta\alpha\rangle\rangle=0$$
This implies that $d\alpha=0$ and $\delta\alpha=0$, hence $\alpha\in H^k(M)$.
Theorem 2.5. Let M be a Riemannian manifold of dimension n , then
1. $H^k(M)$ is a finite-dimensional vector space for $k=0,1,2,\dots,n$.
2. There is an orthogonal decomposition of Λk(M) as
Λk(M)=Hk(M)+d(Λk−1(M))+δ(Λk+1(M)) (24)
Proof: By Theorem 2.1, $\Delta:\Lambda^k(M)\to\Lambda^k(M)$ is a Schrödinger operator, so the Hodge theorem applies. Thus $H^k(M)$ is of finite dimension, so the first part holds. The second part of the Hodge theorem is $\Lambda^k(M)=H^k(M)+\Delta(\Lambda^k(M))$. Since $\Delta(\Lambda^k(M))\subset d(\Lambda^{k-1}(M))+\delta(\Lambda^{k+1}(M))$, we have $\Lambda^k(M)=H^k(M)+d(\Lambda^{k-1}(M))+\delta(\Lambda^{k+1}(M))$. The three spaces in this decomposition are mutually orthogonal, so the second part holds as well.
Theorem 2.6. (Duality theorem) For an oriented Riemannian manifold $M$ of dimension $n$, the star isomorphism $*:H^k(M)\to H^{n-k}(M)$ induces an isomorphism
HdRk(M)≃HdRn−k(M) (25)
The $k$-th Betti number, defined as $b_k(M)=\dim H^k(M,\mathbb{R})$, also satisfies $b_k(M)=b_{n-k}(M)$ for $0\le k\le n$.
## 3. The Minakshisundaram-Pleijel parametrix
Let $M$ be a Riemannian manifold of dimension $n$ and $E$ a vector bundle over $M$ with an inner product and a metric connection. Here, the following formal power series is considered, with the transcendental multiplier $e^{-\rho^2/4t}$ and parameters $(t,p,q)\in(0,\infty)\times M\times M$:
H∞(t,q,p)=1(4πt)n/2e−ρ2/4t ∑k=0∞ tk uk(p,q):Ep→Eq (26)
In Eq. (26), the function $\rho=\rho(p,q)$ is the metric distance between $p$ and $q$ in $M$, $E_p=\pi^{-1}(p)$ is the fiber of $E$ over $p$, and the $u_k(p,q):E_p\to E_q$ are $\mathbb{R}$-linear maps.
The objective is to find conditions under which Eq. (26) satisfies the heat equation, that is, the following equality:
(∂∂t+Δq)H∞(t,q,p)w=0 (27)
To carry this out, a normal coordinate system denoted by $\{x_1,\dots,x_n\}$ is chosen in a neighborhood of the point $p$ and is centered at $p$. This means that if $q$ is in this neighborhood about $p$ and has coordinates $(x_1,\dots,x_n)$, then the function $\rho(p,q)$ is
$$\rho(p,q)=\sqrt{x_1^2+\cdots+x_n^2} \qquad (28)$$
In terms of these coordinates, we calculate the components of g ,
gij=〈∂∂xi,∂∂xj〉, G=det(gij) (29)
and define the differential operator
$$\hat\partial=\sum_{k=1}^n x_k\frac{\partial}{\partial x_k}$$
The action of the heat operator (15) on Eq. (26) is worked out one term at a time. First, the derivative with respect to $t$ is calculated
∂∂tH∞(t,p,q)w=1(4πt)n/2e−ρ2/4t{(ρ24t2−n2t)∑k=0∞ tk uk(p,q)w+∑k=0∞ ktk−1uk(p,q)w}=1(4πt)n/2 e−ρ2/4t∑k=0∞ {ρ24t2−n2t+kt} tkuk(p,q)w (30)
It is very convenient to abbreviate the function appearing in front of the sum in Eq. (30) as follows:
Φ(ρ)=e−ρ2/4t(4πt)n/2 (31)
Let $\{e_1,\dots,e_n\}$ be a frame that is parallel along geodesics passing through $p$ and satisfies
$$e_i(p)=\frac{\partial}{\partial x_i}\Big|_p$$
In terms of the function in Eq. (31), the operator Δ0 acting on Eq. (26) is given as
Δ0H∞(t,p,q)w=(Δ0 Φ)⋅(∑k=0∞ tkuk(p,q)w)+2∑a=1n (eaΦ)⋅∇ea(∑k=0∞ tkuk(p,q)w)+Φ⋅Δ0(∑k=0∞ tkuk(p,q)w) (32)
The individual components of (32) can be calculated as follows. Since $\Phi$ is a function, $\nabla_{e_a}\Phi=e_a\Phi$, and so
eaΦ(ρ)=Φ′(ρ)ea(ρ),Δ0Φ=∑a{ea eaΦ(ρ)−(∇eaea)Φ(ρ)}=Φ″(ρ)⋅∑a(eaρ)2+Φ′(ρ)⋅Δ0ρ,Φ′(ρ)=−ρ2tΦ(ρ),Φ″(ρ)=(ρ24t2−12t)Φ(ρ) (33)
Consequently,
$$e_a\rho=\frac{x_a}{\rho},\qquad \sum_a(e_a\rho)^2=1,\qquad \Delta_0\rho=\frac{n-1}{\rho}+\frac{1}{\rho}\,\hat\partial\log\sqrt{G}$$
and the Laplace‐Beltrami operator on the function Φ is given by
$$\Delta_0\Phi=\Phi(\rho)\Big(\Big(\frac{\rho^2}{4t^2}-\frac{1}{2t}\Big)-\frac{1}{2t}\big(n-1+\hat\partial\log\sqrt{G}\big)\Big) \qquad (34)$$
Expression (34) goes into the first term on the right side of Eq. (32). The second term on the right‐hand side of (32) takes the form,
2∑a=1n (eaΦ)⋅∇ea (∑k=0∞ tkuk(p,q)w)=2Φ′(ρ) ∑a=1n xaρ⋅∇ea(∑k=0∞ tkuk(p,q)w)=−ρtΦ(ρ)∇∂^/ρ(∑k=0∞ tkuk(p,q)w) (35)
Substituting these results into (32), it follows that
Δ0 H∞(t,q,p)=Φ(ρ) [ρ24t2−12t−12t(n−1−∂^logG)−ρt∇∂^/ρ+Δ0]∑m=0∞ tmum(p,q)w (36)
Combining Eq. (36) with the derivative of $H_\infty$ with respect to $t$ in Eq. (30), the following version of the heat equation results:
$$\Big(\frac{\partial}{\partial t}-\Delta_0-F\Big)H_\infty(t,q,p)w=\Phi\Big[\Big(\nabla_{\hat\partial}+\frac{1}{4G}\hat\partial G\Big)\frac{1}{t}\,u_0(p,q)w+\sum_{k=1}^\infty\Big[\Big(\nabla_{\hat\partial}+k+\frac{1}{4G}\hat\partial G\Big)u_k(p,q)w-(\Delta_0+F)u_{k-1}(p,q)w\Big]t^{k-1}\Big] \qquad (37)$$
This is summarized in the following Lemma.
Lemma 3.1. Heat equation (27) for $H_\infty(t,p,q)$ is equivalent to
$$\Big(\nabla_{\hat\partial}+k+\frac{1}{4G}\hat\partial G\Big)u_k(p,q)w=(\Delta_0+F)\,u_{k-1}(p,q)w \qquad (38)$$
for all $k=0,1,2,\dots$, where Eq. (38) is initialized with $u_{-1}(p,q)=0$.
In fact, for fixed $p\in M$ and $w\in E_p$, there always exists a unique solution to problem (38) over a small coordinate neighborhood about $p$.
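For $k=0$ the recursion can in fact be solved in closed form along each radial geodesic; the following standard computation (only sketched here, not spelled out in the text) also explains the normalization $u_0(q,q)=\mathrm{id}$ used later in Eq. (43):

```latex
% For k = 0, Eq. (38) reads (\nabla_{\hat\partial} + \tfrac{1}{4G}\hat\partial G)\,u_0 = 0,
% an ODE along the radial geodesic from p to q. Since
% \hat\partial G^{1/4} = \tfrac{1}{4}G^{-3/4}\hat\partial G, the combination
% G^{1/4}u_0 is parallel along the geodesic, and therefore
u_0(p,q) \;=\; G(q)^{-1/4}\,\tau_{p,q},
% where \tau_{p,q}: E_p \to E_q is parallel transport along the geodesic
% and G(p) = 1 in normal coordinates centered at p.
% In particular u_0(p,p) = \mathrm{id}.
```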
Definition 3.1. Denote the solution of Eq. (38) by $u_m(p,q)w$, which depends linearly on $w$. Then $u_m(p,q):E_p\to E_q$, and the Minakshisundaram-Pleijel parametrix for the heat operator (15) is defined by
H∞(t,p,q)=1(4πt)n/2e−ρ2/4t ∑m=0∞ tmum(p,q):Ep→Eq (39)
Based on Eq. (39), the $N$-truncated parametrix is defined to be
HN(t,q,p)=1(4πt)n/2e−ρ2/4t ∑m=0N tmum(p,q):Ep→Eq (40)
Theorem 3.1. Choose a smooth function $\varphi:M\times M\to\mathbb{R}$ and let $G_0(t,q,p)=\varphi(q,p)H_N(t,q,p)$. Then $G_0(t,q,p)$ is a $k$-th initial solution of the heat operator (15), where $k=\lfloor N/2-n/4\rfloor$ and $\lfloor z\rfloor$ is the greatest integer less than or equal to $z$.
Proof: Clearly, $G_0$ is a linear map of vector spaces and is continuous and $C^\infty$ in all parameters. From the previous calculation, it holds that
$$\Big(\frac{\partial}{\partial t}-\Delta_0-F\Big)H_N(t,q,p)w=-\frac{e^{-\rho^2/4t}}{(4\pi)^{n/2}}\,t^{N-n/2}\,(\Delta_0+F)\,u_N(p,q)w \qquad (41)$$
and $u_N(p,q)$ is $C^\infty$ with respect to $p$ and $q$. Since $t^{N-n/2}e^{-\rho^2/4t}$ is $C^k([0,\infty)\times M\times M)$, the heat operator applied to $\varphi(q,p)H_N(t,q,p)$ lies in $C^k([0,\infty)\times M\times M)$. Consider integrating $G_0$ against $\psi(s,\beta)$,
$$\int_M G_0(t,q,s)\,\psi(s,\beta)\,dv_s=\sum_{m=0}^N t^m\int_M\frac{1}{(4\pi t)^{n/2}}e^{-\rho^2/4t}\,\varphi(q,s)\,u_m(s,q)\,\psi(s,\beta)\,dv_s \qquad (42)$$
The integral in Eq. (42) over $M$ can be broken up into an integral over $Q_q(\epsilon/2)=\{s\in M\,|\,\rho(q,s)<\epsilon/2\}$ and a second integral over the set $M\setminus Q_q(\epsilon/2)$. On the latter set, $\rho\ge\epsilon/2$, so the limit
$$\lim_{t\to0}\frac{e^{-\rho^2/4t}}{(4\pi t)^{n/2}}=0$$
holds uniformly.
To estimate the remaining integral, choose a normal coordinate system at q and denote the integration coordinates as (s1,,sn) , then the integrand of Eq. (42) is given as
$$\frac{1}{(4\pi t)^{n/2}}\,e^{-|s|^2/4t}\,\varphi(q,s)\,u_m(s,q)\,\psi(s,\beta)\,\sqrt{\det\langle\partial_{s_i},\partial_{s_j}\rangle}\;ds_1\cdots ds_n$$
Therefore, in the limit, using Definition 2.4,
$$\lim_{t\to0}\int_{Q_q(\epsilon/2)}\frac{1}{(4\pi t)^{n/2}}e^{-\rho^2/4t}\,\varphi(q,s)\,u_m(s,q)\,\psi(s,\beta)\,dv_s=u_m(q,q)\,\psi(q,\beta)$$
This result implies that
limt→0 ∫M G0(t,q,s)ψ(s,β) dvs=∑m=0N limt→0 tmum(q,q)ψ(q,β)=ψ(q,β)u0(q,q)=ψ(q,β) (43)
The convergence here is uniform.
There exists an asymptotic expansion for the heat kernel which is extremely useful and has several applications. It is one of the main intentions here to present this. An application of its use appears later.
Theorem 3.2. (Asymptotic expansion) Let $M$ be a Riemannian manifold of dimension $n$ and $E$ a vector bundle over $M$ with inner product and metric Riemannian connection. Let $G(t,q,p)$ be the heat kernel, or fundamental solution, of the heat operator (15), and let $H_\infty$ be the Minakshisundaram-Pleijel parametrix (39). Then as $t\to0$, $G(t,p,p)$ has the asymptotic expansion $G(t,p,p)\sim H_\infty(t,p,p)$; that is, for any $N>0$, it is the case that
G(t,p,p)−1(4πt)n/2 ∑m=0N tmum(p,p)=O(tN−n2) (44)
and the symbol on the right-hand side of Eq. (44) signifies a quantity $\xi$ with the property that
$$\lim_{t\to0}\frac{\xi}{t^{N-n/2}}=0$$
Proof: It suffices to prove the theorem for arbitrarily large $N$. Let $G_0(t,q,p)=\varphi(q,p)H_N(t,q,p)$ as in Theorem 3.1. The conclusion of the theorem is equivalent to the statement
$$G(t,p,p)-G_0(t,p,p)=O(t^{N-n/2})$$
From the previous theorem and the existence and regularity of the fundamental solution, the result $G$ of the Levi iteration initialized by $G_0$ is exactly the fundamental solution. Equality (41) means that there exists a constant $A$ such that for any $t\in(0,T)$,
$$|K_0(t,q,p)|=\Big|\Big(\frac{\partial}{\partial t}+\Delta\Big)G_0(t,q,p)\Big|\le A\,t^{N-n/2}$$
Let $v(M)$ be the volume of the manifold $M$. Using this result, the following upper bound is obtained:
$$|K_1(t,q,p)|\le\int_0^t d\tau\int_M|K_0(t-\tau,q,s)|\,|K_0(\tau,s,p)|\,dv_s\le\int_0^t A^2(t-\tau)^{N-n/2}\tau^{N-n/2}v(M)\,d\tau\le\int_0^t A^2\,T^{N-n/2}\tau^{N-n/2}v(M)\,d\tau\le AB\,\frac{t^{N-n/2+1}}{N-\frac n2+1}$$
We have set $B=A\,T^{N-n/2}v(M)$. Exactly the same procedure applies to $|K_2(t,q,p)|$. Based on the pattern established in this way, induction implies the following bound:
$$|K_m(t,q,p)|\le\frac{A\,B^m\,t^{N-n/2+m}}{(N-\frac n2+1)(N-\frac n2+2)\cdots(N-\frac n2+m)}\le\frac{A\,B^m\,t^m}{m!}\,t^{N-n/2}$$
The formula for the Levi iteration yields, upon summing this over $m$, the following upper bound:
$$|\bar K(t,q,p)|\le\sum_{m=0}^\infty|K_m(t,q,p)|\le A\,e^{Bt}\,t^{N-n/2}$$
Using this bound, the required estimate is obtained,
$$|G(t,q,p)-G_0(t,q,p)|\le\Big|\int_0^t d\tau\int_M G_0(t-\tau,q,z)\,\bar K(\tau,z,p)\,dv_z\Big|\le\int_0^t d\tau\int_M\frac{e^{-\rho^2/4(t-\tau)}}{(4\pi(t-\tau))^{n/2}}\,A\,e^{B\tau}\,\tau^{N-n/2}\,dv_s\le M_n\,A\,e^{Bt}\int_0^t\tau^{N-n/2}\,d\tau\,v(M)=\frac{M_n\,A\,e^{Bt}\,v(M)}{N-\frac n2+1}\,t^{N-n/2+1}$$
This finishes the proof.
Now the Hodge theorem can be used to obtain formal expressions for the index. Suppose $D:\Gamma(E)\to\Gamma(F)$ is an operator such that $D^*D$ and $DD^*$ are Schrödinger operators, where $D^*$ is the adjoint of $D$. The operators $D^*D:\Gamma(E)\to\Gamma(E)$ and $DD^*:\Gamma(F)\to\Gamma(F)$ are self-adjoint and have nonnegative real eigenvalues. Then the spaces $\Gamma_\mu(E)$ and $\Gamma_\mu(F)$ can be defined this way
Γμ(E)={φ∈Γ(E)|D*Dφ=μφ}, Γμ(F)={φ∈Γ(F)|DD*φ=μφ} (45)
For any $\mu>0$, the dimensions of the spaces in Eq. (45) are finite, and moreover,
$$\Gamma_0(E)=\ker\{D:\Gamma(E)\to\Gamma(F)\},\qquad\Gamma_0(F)=\ker\{D^*:\Gamma(F)\to\Gamma(E)\}$$
Consequently, an expression for the index $\mathrm{Ind}(D)$ can be obtained from Eq. (45) as follows:
$$\mathrm{Ind}\,D=\dim\ker D-\dim\ker D^*=\dim\Gamma_0(E)-\dim\Gamma_0(F)$$
Definition 3.2. For the Schrödinger operator Δ , let etΔ:Γ(E)Γ(E) , for t>0 be defined as
(e−tΔφ)(q)=∫M G(t,q,p)φ(p)dvp (46)
where G(t,q,p) is the fundamental solution of heat operator (Eq. (15)).
Let $0\le\lambda_1\le\lambda_2\le\cdots$ be the eigenvalues of the operator $\Delta$ and $\{\psi_1,\psi_2,\dots\}$ the corresponding eigenfunctions. Intuitively, the trace of $e^{-t\Delta}$ is defined as
tr e−tΔ=∑k=1∞ 〈〈e−tΔ ψk,ψk〉〉 (47)
This is clearly $\sum_k e^{-\lambda_k t}$, or $\sum_\mu e^{-t\mu}\dim\Gamma_\mu(E)$, so the trace (47) is well-defined if and only if
∑k e−λkt<∞ (48)
Theorem 3.3. For any p,qM , let {e1(p),,eN(p)} and {f1(q),,fN(q)} be orthonormal bases on Ep and Eq , respectively, then the following two results hold for t>0 ,
$$\text{(a)}\quad\int_M\int_M\sum_{a,b=1}^N\langle G(t,q,p)e_a(p),f_b(q)\rangle^2\,dv_q\,dv_p<\infty,\qquad\text{(b)}\quad\sum_{k=1}^\infty e^{-2\lambda_k t}\le\int_M\int_M\sum_{a,b=1}^N\langle G(t,q,p)e_a(p),f_b(q)\rangle^2\,dv_q\,dv_p<\infty \qquad (49)$$
Proof: When $t>0$, $G(t,q,p)$ is continuous and hence satisfies (a). For fixed $p\in M$ and $w\in E_p$, Theorem 2.2 yields an expansion of $G(t,q,p)w$ in $\overline{\Gamma(E)}$, hence the Parseval equality yields
$$\int_M|G(t,q,p)w|^2\,dv_q=\sum_{k=1}^\infty e^{-2\lambda_k t}\langle\psi_k(p),w\rangle^2$$
Replacing w by the basis element ea(p) , this implies that
$$\sum_{a=1}^N\int_M|G(t,q,p)e_a(p)|^2\,dv_q=\sum_{a=1}^N\sum_{k=1}^\infty e^{-2\lambda_k t}\langle\psi_k(p),e_a(p)\rangle^2=\sum_{k=1}^\infty\sum_{a=1}^N e^{-2\lambda_k t}\langle\psi_k(p),e_a(p)\rangle^2=\sum_{k=1}^\infty e^{-2\lambda_k t}\langle\psi_k(p),\psi_k(p)\rangle$$
Then for any m , it follows that
$$\sum_{k=1}^m e^{-2\lambda_k t}=\sum_{k=1}^m\int_M e^{-2\lambda_k t}\langle\psi_k(p),\psi_k(p)\rangle\,dv_p\le\int_M\sum_{k=1}^\infty e^{-2\lambda_k t}\langle\psi_k(p),\psi_k(p)\rangle\,dv_p=\int_M dv_p\sum_{a=1}^N\int_M|G(t,q,p)e_a(p)|^2\,dv_q=\int_M\int_M\sum_{a,b=1}^N\langle G(t,q,p)e_a(p),f_b(q)\rangle^2\,dv_q\,dv_p<\infty$$
Theorem 3.4. For any t>0 ,
tr (e−tΔ)=∫Mtr G(t,p,p) dvp (50)
Proof: From Theorem 2.2, it follows that
$$\mathrm{tr}\,G(t,p,p)=\sum_{a=1}^N\langle G(t,p,p)e_a(p),e_a(p)\rangle=\sum_{a=1}^N\sum_{k=1}^\infty e^{-t\lambda_k}\langle\psi_k(p),e_a(p)\rangle\langle\psi_k(p),e_a(p)\rangle=\sum_{a=1}^N\sum_{k=1}^\infty e^{-t\lambda_k}\langle\psi_k(p),e_a(p)\rangle^2=\sum_{k=1}^\infty e^{-t\lambda_k}\langle\psi_k(p),\psi_k(p)\rangle$$
Integrating this on both sides, it is found that
$$\int_M\mathrm{tr}\,G(t,p,p)\,dv_p=\int_M\sum_{k=1}^\infty e^{-t\lambda_k}\langle\psi_k(p),\psi_k(p)\rangle\,dv_p=\sum_{k=1}^\infty e^{-t\lambda_k}=\mathrm{tr}(e^{-t\Delta})$$
Note that Eq. (48) is a series with positive terms which converges uniformly as $t\to\infty$. Therefore,
limt→∞ tr e−tΔ=∑k=1∞ limt→∞ e−tλk=dim Γ0(E) (51)
In fact, as $t\to0$, the equality
$$G(t,p,p)=\frac{1}{(4\pi t)^{n/2}}+O\Big(\frac{1}{t^{n/2-1}}\Big)$$
and the previous theorem imply that $\lim_{t\to0}\mathrm{tr}\,e^{-t\Delta}=\infty$.
## 4. An application of the expansions: the Gauss Bonnet theorem
As far as Ind(D) is concerned, it is the case for all t>0 that,
$$\mathrm{Ind}(D)=\mathrm{tr}\,e^{-tD^*D}-\mathrm{tr}\,e^{-tDD^*}=\int_M\mathrm{tr}\,G^+(t,p,p)\,dv_p-\int_M\mathrm{tr}\,G^-(t,p,p)\,dv_p$$
by Theorem 3.4, where $G^\pm(t,p,p)$ are the fundamental solutions of $\partial_t+D^*D$ and $\partial_t+DD^*$. As $t\to0$, Theorem 3.2 assumes the form
$$G^\pm(t,p,p)\sim H^\pm_\infty(t,p,p)=\frac{1}{(4\pi t)^{n/2}}\sum_{m=0}^\infty t^m\,u^\pm_m(p,p)$$
Lemma 4.1. Let {λi} be the spectrum of the Laplacian on zero‐forms, or functions, on M . Then,
$$\sum_k e^{-\lambda_k t}\sim\frac{1}{(4\pi t)^{n/2}}\sum_{k=0}^\infty\Big(\int_M u_k(x,x)\,dv_x\Big)t^k \qquad (52)$$
Proof:
$$\sum_k e^{-\lambda_k t}=\int_M\mathrm{tr}\,G(t,x,x)\,dv_x\sim\frac{1}{(4\pi t)^{n/2}}\sum_k\Big(\int_M u_k(x,x)\,dv_x\Big)t^k$$
The spectrum of the Laplacian on functions encodes a great deal of interesting geometric information. Note that Eq. (52) can be written as
$$\sum_i e^{-\lambda_i t}\sim\frac{1}{(4\pi t)^{n/2}}\sum_{k=0}^\infty a_k t^k,\qquad a_k=\int_M u_k(x,x)\,dv_x$$
and the trace does not appear in the case of functions. The superscript on the Laplacian $\Delta^p$ denotes the form degree acted upon, and similarly for other objects throughout this section.
Two Riemannian manifolds are said to be isospectral if the eigenvalues of their Laplacians on functions counted with multiplicities coincide.
Corollary 4.1. Let M and N be compact isospectral Riemannian manifolds. Then M and N have the same dimension and the same volume.
Proof: Let $\{\lambda_i\}$ denote the spectrum of both $M$ and $N$, with $\dim M=m$ and $\dim N=n$. Then it follows that
$$\frac{1}{(4\pi t)^{m/2}}\sum_{k=0}^\infty\Big(\int_M u_k^M(p,p)\,dv_p\Big)t^k=\sum_{i=0}^\infty e^{-\lambda_i t}=\frac{1}{(4\pi t)^{n/2}}\sum_{k=0}^\infty\Big(\int_N u_k^N(q,q)\,dv_q\Big)t^k$$
This implies that $m=n$, which in turn implies that
$$\frac{1}{(4\pi t)^{m/2}}\Big[\int_M u_0^M(p,p)\,dv_p-\int_N u_0^N(q,q)\,dv_q\Big]=-\frac{1}{(4\pi t)^{m/2}}\sum_{k=1}^\infty\Big(\int_M u_k^M(p,p)\,dv_p-\int_N u_k^N(q,q)\,dv_q\Big)t^k$$
Since the right‐hand side of the equation depends on t , but the left‐hand side does not, this result implies that
∫M u0M(p,p) dvp=∫N u0N(q,q) dvq (53)
Iterating this argument leads to the set of equations
∫MukM(p,p) dvp=∫N ukN(q,q) dvq (54)
for all k>0 . In particular, since u0=1 , Eq. (53) leads to the conclusion vol(M)=vol(N) .
The proof illustrates that there exists an infinite sequence of obstructions to two manifolds being isospectral, namely the set of integrals $\int_M u_k\,dv_p$. The first integral contains basic geometric information. It is then natural to investigate the other integrals in sequence as well. Recall that $\nabla R_p,\nabla^2R_p,\dots$ denote the covariant derivatives of the curvature tensor at $p$. A polynomial $P$ in the curvature and its covariant derivatives is called universal if its coefficients depend only on the dimension of $M$. The notation $P(R_p,\nabla R_p,\dots,\nabla^kR_p)$ is used to denote a polynomial in the components of the curvature tensor and its covariant derivatives calculated in a normal Riemannian coordinate chart at $p$. The following theorem will not be proved, but it will be used shortly.
Theorem 4.2. On a manifold of dimension n ,
u1(p,p)=P1n(Rp), uk(p,p)=Pkn(Rp,∇Rp,…,∇2k−2Rp), k≥2 (55)
for some universal polynomials Pkn .
Thus, $P_1^n$ is a linear function with no constant term, and $u_1(p,p)$ is a linear function of the components of the curvature tensor at $p$, with no covariant derivative terms. The only linear combination of curvature components that produces a well-defined function $u_1(p,p)$ on a manifold is the scalar curvature $R(p)=R_{ijij}$, and so there exists a constant $C$ such that $u_1(p,p)=CR(p)$.
Theorem 4.3.
u1(p,p)=16R(p) (56)
Proof: The proof amounts to noticing that $P_1^n$ is a universal polynomial, so it suffices to compute $C$ on one kind of manifold. A good choice is to integrate over $S^n$ with the standard metric and work it out explicitly in normal coordinates. It is found that $u_1(p,p)=n(n-1)/6$, and it is known that $R(p)=n(n-1)$ for all $p\in S^n$; this implies Eq. (56).
The large t or long‐time behavior of the heat operator for the Laplacian on differential forms is then controlled by the topology of the manifold through the means of the de Rham cohomology. The small t or short‐time behavior is controlled by the geometry of the asymptotic expansion. The combination of topological information has a geometric interpretation. This is made explicit by means of the Chern‐Gauss‐Bonnet theorem. The two‐dimensional version of this theorem will be developed here.
These results can be summarized by the elegant formula
$$\sum_{k=0}^\infty e^{-\lambda_k t}=\frac{1}{(4\pi t)^{n/2}}\Big\{v(M)+\frac16\int_M R(x)\,dv_x\,t+O(t^2)\Big\}$$
where v(M) is the volume of M .
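As a quick sanity check (my own illustration, not from the text), this formula can be tested numerically on the flat circle $S^1$ of circumference $2\pi$: the spectrum is $\{k^2\}$ with multiplicity two for $k\ge1$ plus the zero eigenvalue, the curvature term vanishes, and $v(M)=2\pi$.

```python
import math

def heat_trace_circle(t, kmax=2000):
    # Heat trace over the spectrum of the Laplacian on S^1 (circumference 2*pi):
    # eigenvalue 0 once, and k^2 with multiplicity 2 for each k >= 1.
    return 1.0 + 2.0 * sum(math.exp(-k * k * t) for k in range(1, kmax + 1))

t = 0.01
lhs = heat_trace_circle(t)
rhs = 2.0 * math.pi / math.sqrt(4.0 * math.pi * t)  # leading term v(M)/(4*pi*t)^{1/2}
print(lhs, rhs)  # the two agree closely, since R = 0 on the flat circle
```

The agreement is extremely good at small $t$ because all higher coefficients $a_k$, $k\ge1$, vanish in the flat case.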
Suppose that $\lambda$ is positive, and let $E_\lambda^p$ denote the (possibly trivial) eigenspace of $\Delta$ on $p$-forms. If $\omega\in E_\lambda^p$, then it follows that $\Delta^{p+1}d\omega=d\Delta^p\omega=\lambda\,d\omega$, hence $d\omega\in E_\lambda^{p+1}$. Thus, a well-defined sequential ordering of the spaces can be established. If $\omega\in E_\lambda^p$ has the property that $d\omega=0$, then $\lambda\omega=\Delta^p\omega=(\delta d+d\delta)\omega=d\delta\omega$. Therefore, since $\lambda\ne0$, it is found that $\omega=d(\frac1\lambda\delta\omega)$. Thus, the sequence $0\to E_\lambda^0\xrightarrow{d}\cdots\xrightarrow{d}E_\lambda^n\to0$ is exact. Since the operator $d+\delta$ is an isomorphism from $\oplus_k E_\lambda^{2k}$ to $\oplus_k E_\lambda^{2k+1}$, it follows that
$$\sum_s(-1)^s\dim E_\lambda^s=0 \qquad (57)$$
Theorem 4.4. Let $\{\lambda_i^s\}$ be the spectrum of the operator $\Delta^s$, then
$$\sum_s(-1)^s\sum_i e^{-\lambda_i^s t}=\sum_s(-1)^s\dim\ker\Delta^s. \qquad (58)$$
Proof: By (57),
$$\sum_s(-1)^s\sum_k e^{-\lambda_k^s t}=\sum_s(-1)^s\sum_{\{i\,:\,\lambda_i^s=0\}}e^{-\lambda_i^s t}$$
The sum on the right is only over eigenvalues such that $\lambda_i^s=0$, and so
$$\sum_{\{i\,:\,\lambda_i^p=0\}}e^{-\lambda_i^p t}=\dim\ker\Delta^p.$$
This has the consequence that
$$\sum_p(-1)^p\,\mathrm{tr}\,e^{-t\Delta^p}=\sum_p(-1)^p\sum_k e^{-\lambda_k^p t} \qquad (59)$$
is independent of the parameter t . This means that its large or long t behavior is the same as its short or small t behavior. To put it another way, the long‐time behavior of tretΔ is given by the de Rham cohomology, while the short‐time behavior is dictated by the geometry of the manifold. Using the definition of the Euler characteristic, it follows that
$$\chi(M)=\sum_p(-1)^p\dim H_{dR}^p(M)=\sum_p(-1)^p\dim\ker\Delta^p=\sum_p(-1)^p\,\mathrm{tr}\,e^{-t\Delta^p}=\sum_p(-1)^p\int_M\mathrm{tr}\,G^p(t,x,x)\,dv_x \qquad (60)$$
From the asymptotic expansion theorem, the following expression for χ(M) results
$$\chi(M)=\frac{1}{(4\pi t)^{n/2}}\sum_{k=0}^\infty\Big(\int_M\sum_{s=0}^n(-1)^s\,\mathrm{tr}\,u_k^s(x,x)\,dv_x\Big)t^k \qquad (61)$$
The $u_k^s$ in Eq. (61) are the coefficients in the asymptotic expansion of $\mathrm{tr}(e^{-t\Delta^s})$. Since $\chi(M)$ is independent of $t$, only the constant, or $t$-independent, term on the right-hand side of Eq. (61) can be nonzero. This implies the following important theorem.
Theorem 4.5. If the dimension of M is even, then
1(4π)n/2 ∫M ∑s=0n (−1)s tr uks(x,x) dvx={0,k≠n2;χ(M),k=n2. (62)
Theorem 4.6. (Gauss‐Bonnet) Let M be a closed oriented manifold with Gaussian curvature K and area measure daM , then
χ(M)=12π∫M K daM (63)
Proof: By the last theorem and the fact that $\mathrm{tr}\,u_k^p(x,x)=\mathrm{tr}\,u_k^{n-p}(x,x)$, it follows that
$$\chi(M)=\frac{1}{4\pi}\int_M\sum_{p=0}^2(-1)^p\,\mathrm{tr}\,u_1^p\,da_M=\frac{1}{4\pi}\int_M(\mathrm{tr}\,u_1^0-\mathrm{tr}\,u_1^1+\mathrm{tr}\,u_1^2)\,da_M=\frac{1}{4\pi}\int_M(2\,\mathrm{tr}\,u_1^0-\mathrm{tr}\,u_1^1)\,da_M=\frac{1}{4\pi}\int_M\Big(\frac23K-\mathrm{tr}\,u_1^1\Big)\,da_M \qquad (64)$$
since the scalar curvature is two times the Gaussian curvature. Now it must be that $\mathrm{tr}\,u_1^1(x,x)=CR(x)=2CK(x)$ for some constant $C$. The standard sphere $S^2$ has Gaussian curvature one, and so $C$ can be calculated from Eq. (64):
$$2=\frac{1}{2\pi}\int_{S^2}\Big(\frac13-C\Big)\,da_M=\frac{1}{2\pi}\Big(\frac13-C\Big)(4\pi)$$
Therefore, $C=-2/3$, and putting all of these results into Eq. (64), Eq. (63) results.
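As a consistency check of Eq. (63) (my own remark, not part of the text), the formula reproduces the known Euler characteristics of the simplest closed surfaces:

```latex
% Sphere: K \equiv 1 and area 4\pi, so
\chi(S^2)=\frac{1}{2\pi}\int_{S^2}K\,da_M=\frac{1}{2\pi}(1)(4\pi)=2.
% Flat torus: K \equiv 0, so
\chi(T^2)=\frac{1}{2\pi}\int_{T^2}0\,da_M=0.
```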
As an application of this theorem, note that the calculation of u1 gives another topological obstruction to manifolds having the same spectrum.
Theorem 4.7. Let (M,g) and (N,h) be compact isospectral surfaces, then M and N are diffeomorphic.
Proof: As noted in Corollary 4.1,
$$\int_M u_1^M(x,x)\,dv_x=\int_N u_1^N(y,y)\,dv_y$$
On a surface, the scalar curvature is twice the Gaussian curvature, so by the Gauss‐Bonnet theorem,
$$\frac{2\pi}{3}\chi(M)=\int_M u_1^M(x,x)\,dv_x=\int_N u_1^N(y,y)\,dv_y=\frac{2\pi}{3}\chi(N) \qquad (65)$$
However, compact oriented surfaces with the same Euler characteristic are diffeomorphic.
## 5. Summary and outlook
The heat equation approach has been seen to be quite deep, leading both to the Hodge theorem and also to a proof of the Gauss-Bonnet theorem. Moreover, it is clear from the asymptotic development that there is a generalization of this theorem to higher dimensions. The four-dimensional Chern-Gauss-Bonnet integrand is given by the invariant $\frac{1}{32\pi^2}\{K^2-4|\rho_r|^2+|R|^2\}$, where $K$ is the scalar curvature, $|\rho_r|^2$ is the norm of the Ricci tensor, $|R|^2$ is the norm of the total curvature tensor, and the signature is Riemannian. This comes up in physics, especially in the study of Einstein-Gauss-Bonnet gravity, where this invariant is used to obtain the associated Euler-Lagrange equations.
Let Rijkl be the components of the Riemann curvature tensor relative to an arbitrary local frame field {ei} for the tangent bundle TM and adopt the Einstein summation convention. Let m=2s be even, then the Pfaffian Em(g) is defined to be
Em(g)=1(8π)ss! Ri1i2j2j1⋯Ri2s−1i2sj2sj2s−1 g(ei1∧⋯∧ei2s,ej1∧⋯∧ej2s) (66)
The Euler characteristic χ(M) of any compact manifold of odd dimension without boundary vanishes. Only the even dimensional case is of interest.
Theorem 5.1. Let $(M,g)$ be a compact Riemannian manifold without boundary of even dimension $m$. Then
χ(M)=∫M Em(g) dvM (67)
This was proved first by Chern [16], but of greater significance here, it can also be deduced from the heat equation approach that has been introduced above. There is a proof by Patodi [18], but there is no room for it here. It is to be hoped that more interesting results will emerge in this area in the future.
## References
1 - Jost J. Riemannian Geometry and Geometric Analysis, Springer‐Verlag, Berlin‐Heidelberg; 2011.
2 - Yu Y. The Index Theorem and the Heat Equation Method, World‐Scientific Publishing, Singapore; 2001.
3 - Berline N, Getzler E, Vergne M. Heat Kernel and Dirac Operators, Springer‐Verlag, Berlin‐Heidelberg, 1992.
4 - Rosenberg S. The Laplacian on a Riemannian Manifold, London Mathematical Society, 31, Cambridge University Press, New York, NY, USA; 1997.
5 - Gilkey P B. The Index Theorem and Heat Equation, Mathematics Lecture Series, No. 4, Publish or Perish Inc, Boston, MA; 1974.
6 - Goldberg S I. Curvature and Homology, Dover, New York, 1970.
7 - Cavicchidi A, Hegenbarth F. On the effective Yang‐Mills Lagrangian and its equation of motion, J. Geom. Phys. 1998; 25, 69–90.
8 - McKean, H Singer I. Curvature and eigenvalues of the Laplacian, J. Diff. Geom. 1967; 1, 43–69.
9 - Atiyah M F, Patodi V K, Singer I. Spectral asymmetry and Riemannian geometry I, Math. Proc. Camb. Phil. Soc. 1975; 77, 43–69.
10 - Gilkey P B. Curvature and eigenvalues of Laplacian for elliptic complexes. Adv. Math. 1973; 11, 311–325.
11 - Bracken P. Some eigenvalue bounds for the Laplacian on Riemannian manifolds. Int. J. Math Sciences. 2013; 8, 221–226.
12 - Bracken P. The Hodge‐de Rham decomposition theorem and an application to a partial differential equation. Acta Mathematica Hungarica, 2011; 133, 332–341.
13 - Bracken P. A result concerning the Laplacian of the shape operator on a Riemannian manifold and an application. Tensor N S. 2013; 74, 43–47.
14 - Minakshisundaram S, Pleijel A. Some properties of the eigenfunctions of the Laplace operator on Riemannian manifolds, Can J. Math. 1949; 1, 242–256.
15 - Bracken P. A note on the fundamental solution of the heat operator on forms, Missouri J. Math Sciences. 2013; 25, 186–194.
16 - Chern S S. A simple proof of the Gauss‐Bonnet formula for closed Riemannian manifolds. Ann. Math. 1944; 45, 747–752.
17 - Gilkey P B, Park J H. Analytic continuation, the Chern‐Gauss‐Bonnet theorem andthe Euler‐Lagrange equations in Lovelock theory for indefinite signature metrics, J. Geom. Phys. 2015; 88, 88–93.
18 - Patodi V K. Curvature and the eigenforms of the Laplace operator. J. Differen. Geom. 1971; 5, 233–249.
### Unsupervised Learning
Deep neural networks have enjoyed a fair bit of success in speech recognition and computer vision. The same basic approach was used for both problems: use supervised learning with a large number of labelled examples to train a big, deep network to solve the problem. This approach just works whenever the solution to the problem that we seek to solve can be represented with a deep neural network, which is often the case for reasons that I will not explain here.
But there is another way. Unsupervised learning is the idea that we can understand how the world “works” by simply passively observing it and building a rich internal mental model of it. This way, when we have a supervised learning task we wish to solve, we can consult the mental model that was learned during the unsupervised learning stage, and solve the problem much more rapidly, without using as many labels. Unsupervised learning is very appealing, because it doesn’t require any labels. Imagine: you simply get a system to observe the world for a while, and afterwards we could use it to answer difficult questions about the signal — such as determining what objects are in an image, what their 3D shapes are, and what their poses are.
But so far there are no state-of-the-art results that make use of unsupervised learning. Why?
1: We don’t have the right unsupervised learning algorithm: Supervised learning algorithms work well because they directly optimize the performance measure that we care about and come with a guarantee: if the neural network is big enough and is trained on enough labelled data, the problem will be completely and utterly solved. In contrast, unsupervised learning algorithms do not have such a guarantee. Even if we used the best current unsupervised learning algorithm, and trained it on a huge neural net with a huge amount of data, we could not be confident that it would help us build high-performing systems.
2: Whatever the unsupervised learning algorithm will do, it will necessarily be different from the objective that we care about. As the unsupervised learning algorithm doesn’t know what the desired objective is, the only thing it can do is try to understand as much of the signal as possible, in order to, hopefully, capture the bits relevant to the problem we care about. Thus, if it takes a big network to do really well on the purely supervised problem (as is the case: the high-performing supervised nets have 50M parameters, and they’re still growing), where all the network’s resources are dedicated to the task, it will take a much larger network to really benefit from unsupervised learning, since it attempts to solve a much harder problem that includes our supervised objective as a special case.
Thus, at this point, it is simply not clear how and why will the future unsupervised learning algorithms work.
### Universal Approximation and Depth
Many years ago Hornik et al. proved that a neural network with a single hidden layer can approximate any continuous function from a compact domain to the reals to arbitrary precision. The paper is highly cited (search for “Multilayer feedforward networks are universal approximators” on google scholar; this paper has almost 1/3 of the citations of the original backpropagation paper!), and convinced many people that neural networks will work for their applications, as they can learn any function.
Sadly this result was very misleading. The result claimed that a single hidden layer neural network can approximate any function, so a one hidden layer neural network should be good for any application. However it is not an efficient approximator for the functions we care about (this claim is true but hard to defend, since it’s not so easy to describe the functions that we care about). Indeed, the universal approximation construction works by allocating a neuron to every small volume of the input space, and learning the correct answer for each such volume. The problem is that the number of such small volumes grows exponentially in the dimensionality of the input space, so Hornik’s construction is exponentially inefficient and is thus not useful. (It is worth noting that deep neural networks are not universal approximators unless they are also exponentially large, because there are many more different functions than there are small neural networks.)
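A back-of-the-envelope count (my own illustration, not from Hornik’s paper) makes the inefficiency concrete: covering the unit cube $[0,1]^d$ at resolution $\epsilon$ requires $(1/\epsilon)^d$ small volumes, and the construction spends at least one neuron per volume.

```python
def neurons_needed(dim, eps=0.1):
    # One neuron per grid cell of side eps in the unit cube [0,1]^dim.
    return round((1.0 / eps) ** dim)

for d in (1, 2, 3, 10):
    print(d, neurons_needed(d))
# 1 -> 10, 2 -> 100, 3 -> 1000, 10 -> 10000000000: exponential in the
# dimension, which is why the one-hidden-layer construction is useless
# for high-dimensional inputs like images.
```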
This caused researchers to miss out on the best feature of the neural networks: depth. By being deep, the neural network can represent functions that are computed with several steps of computation. Deep neural networks are best thought of as constant-depth threshold circuits, and these are known to be able to compute a lot of interesting functions. For example, a small 3-hidden layer threshold network can sort N N-bit numbers, add N such numbers, compute their product, their max, compute any analytic function to high precision. And it is this ability of deep neural networks to perform such interesting computations makes them useful for speech recognition and machine translation.
There is another simple reason why large but not infeasibly huge deep networks must be capable of doing well on vision and speech. The argument is simple: human beings can recognize an object in 100 milliseconds, which gives their neurons the opportunity to fire only 10 times during the recognition. So there exists a parallel procedure that can recognize an object in 10 parallel steps, which means that a big 10-layer net should be good at vision and speech — and it turns out to be the case. But if this argument is actually valid, it means that we should be able to train neural networks for any task that humans can solve quickly. Reading emotions, recognizing faces, reading body language and vocal intonation, and some aspects of motor control come to mind. On all these tasks, high performance is truly achievable if we have the right dataset and fast implementation of a big supervised network.
In addition, there is a well-known intuition for why deep convolutional neural networks work well for vision, and explain why shallow neural networks do not. Many believe that to recognize an object, many steps of computation should be performed. In the first step, the edges should be extracted from the image. In the second step, small parts (or edges of edges) should be computed from the edges, such as corners. In the third step, combinations of small parts should be computed from the small parts. They could be a small circle, a t-junction, or some other visual entity. The idea is to extract progressively more abstract and specific units at each step. If you found this description difficult to follow, here are some images of various object recognition systems, all of which work roughly on the principle of extracting larger parts from smaller ones.
(from http://www.kip.uni-heidelberg.de/cms/vision/projects/recent_projects/hardware_perceptron_systems/image_recognition_with_hardware_neural_networks/)
(from http://www.sciencedirect.com/science/article/pii/S0031320308004603)
(from http://journal.mercubuana.ac.id/data/Hierarchical-models-of-object-recognition-in-cortex.pdf)
By being deep, the convolutional neural network can implement this multistep process. The depth of the convolutional neural network allows each of its layers to compute larger and more elaborate object parts, so that the deepest layers compute specific objects. And its large number of parameters and units allows it to do so robustly, provided that we manage to find the appropriate network parameters.
Something similar must be going on with speech recognition, where deep networks make a very big difference compared to shallow ones, so it is likely that speech recognition consists of breaking speech up into small “parts”, and increasing their complexity at each layer, which cannot be done with a shallow network.
### Calling Python from Matlab
It is sometimes useful to call Python from Matlab. There may be a robust implementation of an optimizer that hasn’t been well-ported to Python yet. What to do in this situation?
There is a simple strategy that should do: first, figure out how to call python from stand-alone C functions, and then use that code within a mex function. While tedious, at least this is straightforward. By using the not terribly complicated PyObject stuff we can create python objects in C, send them to python functions, and unpack whatever the python functions give us back.
However, everything goes bad if we try to import numpy in our python code. We’ll get an error that looks like this:
……../pylib/numpy/core/multiarray.so:
undefined symbol: _Py_ZeroStruct
even though all the required symbols are defined in libpython2.x.
This problem was asked several times on stackoverflow, with no satisfactory answer. But luckily, after much searching, I stumbled upon https://github.com/pv/pythoncall which discovered a way to solve this problem.
Basically, matlab imports dynamic libraries in a peculiar way that messes up the symbols somehow. But if we execute the code
#include <dlfcn.h>

static int dlopen_hacked = 0;

/* Re-open libpython with RTLD_GLOBAL so that its symbols become
visible to extension modules (such as numpy) loaded afterwards. */
int dlopen_python_hack(){
    if (!dlopen_hacked){
        dlopen(LIBPYTHON_PATH, RTLD_NOW|RTLD_GLOBAL);
        dlopen_hacked = 1;
    }
    return 0;
}
where LIBPYTHON_PATH points to libpython2.x.so, then suddenly all the messed-up symbols will fix themselves, and we won’t have undefined symbol problems anymore.
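For completeness, a similar workaround can be applied from the Python side with ctypes before importing numpy (a sketch of mine, assuming the interpreter was built with a shared libpython; the helper name is hypothetical):

```python
import ctypes
import sysconfig

def preload_libpython():
    """Re-open libpython with RTLD_GLOBAL, mirroring the C dlopen hack,
    so that extension modules such as numpy can resolve its symbols.
    Returns the library soname on success, None otherwise."""
    libname = sysconfig.get_config_var("INSTSONAME")
    if not libname:
        return None  # e.g. a statically linked interpreter
    try:
        ctypes.CDLL(libname, mode=ctypes.RTLD_GLOBAL)
    except OSError:
        return None
    return libname

print(preload_libpython())
```

This only helps when the embedding host (here, Matlab) runs the Python code at all; the C-level hack remains the robust fix for the mex setting.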
### The useless beauty of Reinforce
Reinforce is one of my favorite algorithms in machine learning. It’s useful for reinforcement learning.
The formal goal of reinforce is to maximize an expectation, $\sum_x p_\theta(x)r(x)$, where $r(x)$ is the reward function and $p_\theta(x)$ is a distribution. To apply reinforce, all we need is to be able to sample from $p(x)$ and to evaluate the reward $r(x)$, which is really nothing.
This is the case because of the following simple bit of math:
$\nabla_\theta \sum_x p_\theta(x) r(x) = \sum_x p_\theta(x)\nabla_\theta \log p_\theta(x) r(x)$
which clearly shows that to estimate the gradient wrt the expected reward, we merely need to sample from $p(x)$ and weigh the gradient $\nabla_\theta \log p_\theta(x)$ by the reward $r(x)$.
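As a concrete sketch (a toy problem of my own choosing, not from the text): for a Bernoulli distribution parameterized by a logit $\theta$, we can compare the reinforce estimate of the gradient with the exact gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def reinforce_grad(theta, n_samples=200000):
    """Monte Carlo estimate of d/dtheta E_x[r(x)] for x ~ Bernoulli(sigmoid(theta))."""
    p = sigmoid(theta)
    x = (rng.random(n_samples) < p).astype(float)  # sample from p_theta
    r = 3.0 * x - 1.0                              # toy reward r(x)
    grad_log_p = x - p                             # d/dtheta log p_theta(x), Bernoulli logit
    return float(np.mean(grad_log_p * r))          # E[ grad log p * r ]

theta = 0.5
p = sigmoid(theta)
exact = 3.0 * p * (1.0 - p)   # d/dtheta [ 3*sigmoid(theta) - 1 ]
est = reinforce_grad(theta)
print(exact, est)             # the two agree up to Monte Carlo noise
```

Note that the estimator only ever touches samples of $x$ and their rewards; it never needs the reward function in closed form.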
Reinforce is so tantalizing because sampling from a distribution is very easy. For example, the distribution $p(x)$ could be the combination of a parametrized control policy and the actual responses of the real world to our actions: $x$ could be the sequence of states and actions chosen by our policy and the environment, so
$p(x)=\prod_t p_{\textrm{world}}(x_t|x_{t-1},a_t)\,p_\theta(a_t|x_{t-1})$,
and only part of $p(x)$ is parameterized by $\theta$.
Similarly, $r(x)$ is obviously easily computable from the environment.
So reinforce is dead easy to apply: for example, it could be applied to a robot’s policy. To sample from our distribution, we’d run the policy, get the robot to interact with our world, and collect our reward. And we’ll get an unbiased estimate of the gradient, and presto: we’d be doing stochastic gradient descent on the policy’s parameters.
Unfortunately, this simple approach is not so easy to apply. The problem lies in the huge variance of the estimator $\nabla_\theta \log p_\theta(x)\, r(x)$. It is easy to see, intuitively, where this variance comes from. Reinforce obtains its learning signal from the noise in its policy distribution $p_\theta$. In effect, reinforce makes a large number of random choices through the randomness in its policy distribution, and if they do better than average, then we’ll change our parameters so that these choices are more likely in the future. Similarly, if the random choices end up doing worse than average, the model will try to avoid choosing this specific configuration of actions.
(This is the baseline trick: $\sum_x p_\theta(x)\nabla_\theta \log p_\theta(x)\,r(x) = \sum_x p_\theta(x)\nabla_\theta \log p_\theta(x)\,(r(x)-r_\textrm{avg})$, because $\sum_x p_\theta(x)\nabla_\theta \log p_\theta(x)=0$. There is a simple formula for choosing the optimal $r_\textrm{avg}$.)
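A toy numerical illustration of this point (my own construction, not from the text): shifting the reward by a constant baseline leaves the estimator’s expectation unchanged but can shrink its variance by orders of magnitude when rewards are large and nearly constant.

```python
import numpy as np

rng = np.random.default_rng(1)

p = 0.6                                  # sigmoid(theta) for some fixed logit theta
n = 100000
x = (rng.random(n) < p).astype(float)    # samples from the Bernoulli policy
r = 100.0 + x                            # large, nearly constant reward
grad_log_p = x - p                       # score of a Bernoulli wrt its logit

plain = grad_log_p * r                   # naive reinforce samples
centered = grad_log_p * (r - r.mean())   # with an average-reward baseline

# Both are unbiased estimates of the same gradient, p*(1-p) = 0.24 here.
print(plain.mean(), centered.mean())
print(plain.var(), centered.var())       # centered variance is vastly smaller
```

The subtraction changes nothing in expectation, yet the naive estimator’s variance is dominated by the constant part of the reward, which carries no learning signal at all.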
To paraphrase, reinforce adds a small perturbation to the choices and sees if the total reward has improved. If we’re trying to do something akin to supervised learning with reinforce with a small label set, then reinforce won’t do so terribly: each action would be a classification, and we’d have a decent chance to guess the correct answer. So we’ll be guessing the correct answer quite often, which will supply our neural net with training signal, and learning will succeed.
However, it’s completely hopeless to train a system that makes a large number of decisions with a tiny reward in the end.
On a more optimistic note, large companies that deploy millions of robots could refine their robots’ policies with large-scale reinforce. During the day, the robots will collect the data for the policy, and during the night, the policy will be updated.
### Undirected models are better at sampling
The best directed models should always be worse at generating samples than the best undirected models, even if their log likelihoods are similar, for a simple reason.
If we have an undirected model, then it defines a probability distribution by the equation
$\displaystyle p(x;\theta)=\frac{\exp(G(x;\theta))}{\sum_y \exp(G(y;\theta))}$
As always, the standard objective of unsupervised learning is to find a distribution $p(x;\theta)$ so that the average log probability of the data distribution $E_{x\sim D(x)} [ \log p(x;\theta) ]$ is as large as possible.
In theory, if we learn successfully, we should reach a local maximum of the average log probability. Taking the derivative and setting it to zero yields
$E_{x\sim D(x)}[\nabla_\theta G(x;\theta^*)] = E_{x\sim p(x;\theta)}[\nabla_\theta G(x;\theta^*)]$
(here $\theta^*$ are the maximum likelihood parameters). Notice that this equation is a statement about the samples produced by the distribution $p(x;\theta^*)$: the gradient of the goodness $\nabla_\theta G(x;\theta^*)$ averaged over the data distribution $D(x)$ is equal to the same gradient averaged over the model’s distribution $p(x;\theta^*)$. Therefore, the samples from $p(x;\theta^*)$ must somehow be related to the samples from the data distribution $D(x)$. This is a “promise” made to us by the learning objective of unsupervised learning.
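For completeness, the differentiation step behind this condition (same symbols as above): since $\log p(x;\theta)=G(x;\theta)-\log \sum_y \exp(G(y;\theta))$, differentiating gives

$\nabla_\theta \log p(x;\theta) = \nabla_\theta G(x;\theta) - E_{y\sim p(y;\theta)}[\nabla_\theta G(y;\theta)]$

and averaging over $x\sim D(x)$ and setting the result to zero yields exactly the stated equality between the data average and the model average of $\nabla_\theta G$.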
However, directed models do not offer such a guarantee; instead, they promise that the conditional distributions of the data distribution will be similar to the conditional distributions of the model’s distribution, when the conditioned-on data is sampled from the data distribution. This is the critical point.
More formally, a directed model defines a distribution $p(x;\theta)=\prod_j p(x_j|x_{<j};\theta)$, where $x_{<j}$ denotes $(x_1,\ldots,x_{j-1})$. Plugging it into the objective of maximizing the average log likelihood of the data distribution $D(x)$, we get the following:
$\sum_j E_{D(x)}[\log p(x_j|x_{<j};\theta)]$,
which is a sum of independent problems.
If the $p(x_j|x_{<j};\theta)$’s don’t share parameters for different $j$’s, then the problems are truly independent and could be solved completely separately. So let’s say we found a $\theta^*$ that makes all these objectives happy. Then each $E_{D(x)}[\log p(x_j|x_{<j};\theta^*)]$ will be happy, which means that $p(x_j|x_{<j};\theta^*)$ is similar, more or less, to $D(x_j|x_{<j})$ for $x_{<j}$ being sampled from $D(x_{<j})$ — which is the critical implied assumption made by the maximum likelihood objective applied to directed models. Why is it a problem when generating samples? It’s bad because this objective makes no “promises” about the behaviour of $p(x_j|x_{<j};\theta^*)$ when $x_{<j}$ is not sampled from $D(x_{<j})$. It is easy to imagine that $p(x_1;\theta^*)$ will be somewhat different from $D(x_1)$, and say that $x_1$ was sampled from $p(x_1;\theta^*)$. Then $p(x_2|x_1;\theta^*)$ will freak out, having never seen anything like $x_1$, which will make the sample $(x_1,x_2)$ look even less like a sample from $D(x_1,x_2)$. Etc. This “chain reaction” will likely cause the directed model to produce worse-looking samples than an undirected model with a similar log probability.
But something should seem odd: after all, any undirected model (or any distribution, for that matter) can be decomposed with the chain rule, $p(x_1,\ldots,x_n)=\prod_j p(x_j|x_{<j})$. Why won’t the above argument apply to an undirected model, which I claim to be superior at sampling? An answer can be given, but it involves lots of handwaving.
If an undirected model is expressed as a directed model using the chain rule, then the conditional probabilities will involve massive marginalizations. What’s more, all the conditional distributions $p(x_j|x_{<j};\theta)$ will share parameters in a very complicated way for different values of $j$. In all likelihood (and that’s the weak part of the argument), the parameterization is so complex that it’s not possible to make all the objectives $E_{D(x)}[\log p(x_j|x_{<j};\theta)]$ happy for all $j$ simultaneously; that is, the undirected model will not necessarily make $p(x_j|x_{<j};\theta)$ similar to $D(x_j|x_{<j})$ when $x_{<j}$ is sampled from $D(x_{<j})$. This is why I assumed that the little conditionals don’t share parameters.
So to summarize, directed models are worse at sampling because of the sequential nature of their sampling procedure. By sampling in sequence, the directed model is “fed” data which is unlike the training distribution, causing it to freak out. In contrast, sampling from undirected models requires an expensive Markov chain, which ensures the “self-consistency” of the sample. And intuitively, since we invest more work into obtaining the sample, it must be better.
### The Miracle of the Boltzmann Machine
The Boltzmann Machine, invented by my adviser, is fairly well-known in machine learning. But the Boltzmann Machine (BM) is also the only model that actually uses dreams for a specific computational purpose, as opposed to all other models of sleep, which use it in a more ad-hoc way (e.g., to “remember the day’s events better”, or to “forget unwanted thoughts”, and other claims of this kind). In addition, the BM also forgets its dreams. Just like humans! The BM forgets its dreams as a consequence of differentiating a simple equation that apparently has nothing to do with sleep. It is so remarkable that I believe the Boltzmann Machine to be “right” in a very essential way.
Unfortunately the Boltzmann Machine can only be understood using math. So you’ll like it if you know math too.
The Boltzmann Machine defines a probability distribution over the set of possible visible binary vectors $V$ and hidden binary vectors $H$. The intended analogy is that $V$ is an observation, say the pixels on the retina, and $H$ is the joint activity of all the neurons inside the brain. We’ll also denote the concatenation of $V$ and $H$ by $X$, so $X=(V,H)$. The Boltzmann Machine defines a probability distribution over the configurations $X=(V, H)$ by the equation
$P(X)=\displaystyle \frac{\exp(X^\top W X/2)}{\sum_{X'} \exp(X'^\top W X'/2)}$
So different choices of the matrix $W$ yield different distributions $P(X)$.
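For a small network this distribution can be computed exactly by brute force; the following sketch (toy sizes of my own choosing, not from the text) enumerates all binary configurations:

```python
import itertools
import numpy as np

def bm_distribution(W):
    """Enumerate all binary configurations X and return (configs, P(X))."""
    n = W.shape[0]
    X = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
    goodness = np.einsum('ki,ij,kj->k', X, W, X) / 2.0  # X^T W X / 2, per row
    unnorm = np.exp(goodness)
    return X, unnorm / unnorm.sum()                     # normalize over all X

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
W = (W + W.T) / 2.0          # symmetric connection weights
configs, P = bm_distribution(W)
print(P.sum())               # probabilities sum to 1
```

Of course, this enumeration has cost $2^n$, which is exactly why real BMs resort to sampling instead.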
The BM makes observations about the world, which are summarized by the world distribution over the visible vectors $D(V)$. For example, $D(V)$ could be the distribution of all the images we’ve seen during the day (so $V$ is a binary image).
The following point is a little hard to justify. We define the goal of learning by the objective function
$L(W)=\displaystyle E_{D(V)} [\log P(V)]$,
where $P(V)=\sum_{H} P(V,H)$ and $E_{D(V)} [f(V)] = \sum_V D(V)f(V)$. In other words, the goal of learning is to find a BM that assigns high log probability to the kind of data we typically observe in the world. This learning rule makes some intuitive sense, because negative log probability can be interpreted as a measure of surprise. So if our BM isn’t surprised by the real world, then it must be doing something sensible. It is hard to fully justify this learning objective, because a BM that’s not surprised by the world isn’t obviously useful for other tasks. But we’ll just accept this assumption and see where it leads us.
So we want to find the parameter setting of $W$ that maximizes the objective $L(W)$, which is something we can approach with gradient ascent: we’d iteratively compute the gradient $\partial L/\partial W$, and change $W$ slightly in the direction of the gradient. In other words, if we change our weights by
$\Delta W_{ij} = \varepsilon \partial L/\partial W_{ij}$,
then we’re guaranteed to increase the value of the objective slightly. Do it enough times and our objective $L$ will be in a good place. And finally, here is the promised math:
$\partial L/\partial W_{ij} = E_{D(V)P(H|V)} [X_i X_j] - E_{P(V,H)}[X_i X_j]$.
If you’re into differentiation you could verify the above yourself (remember that $E_{P(V,H)}[X_i X_j] = \sum_{(V,H)} P(V,H) X_i X_j$, and similarly for $D(V)P(H|V)$).
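If you’d rather trust a computer than differentiate, the identity can be checked numerically on a tiny BM (2 visible + 2 hidden units and an arbitrary data distribution; all sizes here are my own toy choices):

```python
import itertools
import numpy as np

nv, nh = 2, 2
n = nv + nh
rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(n, n))
W = (W + W.T) / 2.0                        # symmetric weights

V_all = np.array(list(itertools.product([0, 1], repeat=nv)), dtype=float)
X_all = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
D = np.array([0.1, 0.4, 0.2, 0.3])         # arbitrary data distribution D(V)

def joint(W):
    """P(V, H) over all 2^n configurations."""
    g = np.einsum('ki,ij,kj->k', X_all, W, X_all) / 2.0
    p = np.exp(g)
    return p / p.sum()

def L(W):
    """Objective E_{D(V)}[log P(V)], with P(V) = sum_H P(V, H)."""
    p = joint(W)
    logPV = [np.log(p[np.all(X_all[:, :nv] == v, axis=1)].sum()) for v in V_all]
    return float(np.dot(D, logPV))

# Analytic gradient: clamped statistics minus free statistics.
p = joint(W)
free = np.einsum('k,ki,kj->ij', p, X_all, X_all)
clamped = np.zeros((n, n))
for v, dv in zip(V_all, D):
    mask = np.all(X_all[:, :nv] == v, axis=1)
    pHgV = p[mask] / p[mask].sum()         # P(H | V = v)
    clamped += dv * np.einsum('k,ki,kj->ij', pHgV, X_all[mask], X_all[mask])
analytic = clamped - free

# Numerical gradient for one weight (perturb W_ij and W_ji together,
# matching the symmetric parameterization above).
i, j, eps = 0, 2, 1e-5
Wp = W.copy(); Wp[i, j] += eps; Wp[j, i] += eps
Wm = W.copy(); Wm[i, j] -= eps; Wm[j, i] -= eps
numeric = (L(Wp) - L(Wm)) / (2.0 * eps)
print(numeric, analytic[i, j])             # these match
```

The “clamped” term plays the role of the daytime statistics, and the “free” term plays the role of the dream statistics.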
We’re finally ready for the magical interpretation of the above equation. The equation states that the weight $W_{ij}$ should change according to the difference of two averages: the average of the products $X_i X_j$ according to $D(V)P(H|V)$ and according to $P(V,H)$.
But first, notice that $X_i X_j$ is the product of the two neurons at the ends of the connection $(i,j)$. So the connection will have little trouble detecting when the product is equal to 1 from local information (remember, our units take values in $\{0,1\}$).
More significantly, we can compute the expectation $E_{D(V)P(H|V)} [X_i X_j]$ by taking the observed data from $D(V)$, “clamping” it onto the visible units $V$, and “running” the BM’s neurons until their states converge to equilibrium. All these terms can be made precise in a technical sense. But the important analogy here is that during the day, the world sets the visible vectors $V$ of the BM, and it does the rest, running its hidden units $H$ until they essentially converge. Then the BM can compute the expectation by simply averaging the products $X_i X_j$ that it observes. This part of the learning rule attempts to make the day’s patterns more likely.
Now $E_{P(V,H)} [X_i X_j]$ is computed by disconnecting the visible vector $V$ from the world and running the BM freely, until the states converge. This is very much like dreaming; we ask the BM to produce the patterns it truly believes in. Then the connection $(i,j)$ computes its expectation by observing the products $X_i X_j$, and subtracts the resulting average from the connections. In other words, the learning rule says, “make whatever patterns we observe during sleep less likely”. As a result, the BM will not be able to easily reproduce the patterns it observed during the sleep, because it unlearned them. To paraphrase: the BM forgets its dreams.
Consequently, the BM keeps on changing its weights as long as the day’s patterns are different from the sleep’s patterns, and stops learning once the two become equal. This means that the goal of learning is to make the dreams as similar as possible to the actual patterns that are observed in reality.
It should be surprising that both dreams and the fact that they are hard to remember are a simple consequence of a simple equation.
### The futility of gigantic training sets with simple models
It is believed that a simple, usually linear, model with an extra-huge training set and a gigantic feature representation is superior to a more powerful model with less data, a claim often made by Google. And indeed, large companies are able to get better results by using ever larger training sets with simple models: each order of magnitude increase in the training set results in a reasonable increase in performance.
This approach is sensible in the sense that increasing the size of the training data is essentially guaranteed to improve performance. And if I have a fast learning algorithm, thousands of cores, and lots of data, then it is conceptually trivial to use more training data whenever the learning algorithm can be parallelized without too much engineering effort. And if I needed better performance very soon, I’d do precisely that.
However, the problem with this approach is that it runs out of steam in the sense that it will not reach human level performance. The following figure illustrates the point:
In this figure, the simpler model at first outperforms the more sophisticated one, mainly because it is easy and relatively cheap to give it more data and make it larger. However, simple models will necessarily fail to reach human-level performance, and the more powerful models will eventually and certainly outperform them. It must be so, hence QED. More seriously, a model could not successfully solve tasks that involve any kind of text comprehension, for example, without first extracting a really good representation of the text’s meaning with miraculous-looking properties. And that’s something simple models don’t even try to do. By not using a good representation, the simple model falls back on its more primitive feature representation, which does not explicitly describe the higher-level concepts that are ultimately needed to solve our task.
Nonetheless, simple (i.e., linear) models have serious advantages over complex models. They are faster to train and are easier to extend and understand, and their behaviour and performance are more predictable. However, it is finally becoming recognized that neural networks have the potential to be vastly more expressive than linear models without using too many parameters. And now that we are becoming better at training deep neural networks, we will see the proliferation of the more powerful multilayered perceptrons. Of course, naive multilayered perceptrons will also probably run out of steam, in which case we’ll have to design more exotic and ambitious architectures. But for now they are the simplest and the most powerful model class.
### Validation Error Shape
Most learning algorithms make many small changes to the model, ensuring that each little change improves the model’s fit of the training data. But when the model starts getting too good at the training data, its test and validation error get worse. That’s the point where we stop learning, because additional learning will improve our training error at the expense of the validation and the test error.
Many machine learning courses depict a cartoon of the validation error:
Note that both the training and the validation errors are convex functions of time (if we ignore the bit in the beginning). However, if we train the model longer, we discover the following picture:
This is the real shape of the validation error. The error is almost always bounded from above, so the validation error must eventually inflect and converge. So the training error curve is convex, but the validation isn’t!
Nat. Hazards Earth Syst. Sci., 18, 2825–2840, 2018
https://doi.org/10.5194/nhess-18-2825-2018
Research article | 02 Nov 2018
# Stochastic downscaling of precipitation in complex orography: a simple method to reproduce a realistic fine-scale climatology
Silvia Terzago, Elisa Palazzi, and Jost von Hardenberg
• Institute of Atmospheric Sciences and Climate, National Research Council of Italy, Corso Fiume 4, Turin, Italy
Correspondence: Silvia Terzago (s.terzago@isac.cnr.it)
Abstract
Stochastic rainfall downscaling methods usually do not take into account orographic effects or local precipitation features at spatial scales finer than those resolved by the large-scale input field. For this reason they may be less reliable in areas with complex topography or with sub-grid surface heterogeneities. Here we test a simple method to introduce realistic fine-scale precipitation patterns into the downscaled fields, with the objective of producing downscaled data more suitable for climatological and hydrological applications as well as for extreme event studies. The proposed method relies on the availability of a reference fine-scale precipitation climatology from which corrective weights for the downscaled fields are derived. We demonstrate the method by applying it to the Rainfall Filtered Autoregressive Model (RainFARM) stochastic rainfall downscaling algorithm.
The modified RainFARM method is tested focusing on an area of complex topography encompassing the Swiss Alps, first, in a “perfect-model experiment” in which high-resolution (4 km) simulations performed with the Weather Research and Forecasting (WRF) regional model are aggregated to a coarser resolution (64 km) and then downscaled back to 4 km and compared with the original data. Second, the modified RainFARM is applied to the E-OBS gridded precipitation data (0.25° spatial resolution) over Switzerland, where high-quality gridded precipitation climatologies and accurate in situ observations are available for comparison with the downscaled data for the period 1981–2010.
The results of the perfect-model experiment confirm a clear improvement in the description of the precipitation distribution when the RainFARM stochastic downscaling is applied, either with or without the implemented orographic adjustment. When we separately analyze grid points with precipitation climatology higher or lower than the median calculated over the neighboring grid points, we find that the probability density function (PDF) of the real precipitation is better reproduced using the modified RainFARM rather than the standard RainFARM method. In fact, the modified method successfully assigns more precipitation to areas where precipitation is on average more abundant according to a reference long-term climatology.
The results of the E-OBS downscaling show that the modified RainFARM introduces improvements in the representation of precipitation amplitudes. While for low-precipitation areas the downscaled and the observed PDFs are in good agreement, for high-precipitation areas residual differences persist, mainly related to known E-OBS deficiencies in properly representing the correct range of precipitation values in the Alpine region. The downscaling method discussed is not intended to correct the bias which may be present in the coarse-scale data, so possible biases should be adjusted before applying the downscaling procedure.
1 Introduction
Assessing the impacts of climate change on extreme precipitation events and hydrometeorological hazards requires reliable precipitation data at fine spatial and temporal resolution. A wide range of downscaling methods have been developed to obtain fine-scale precipitation fields from coarse-scale data (see Maraun et al.2010, for a review): in addition to physically based dynamical downscaling approaches, in which high-resolution regional climate models are nested in global datasets, an effective approach is provided by statistical and stochastic downscaling. While statistical downscaling is based on mapping large-scale predictors for precipitation at small scales to produce the expected small-scale rainfall field (e.g., Maraun et al.2010; Chiew et al.2010), stochastic rainfall downscaling, a type of weather generator, uses information directly from the large-scale precipitation to generate an ensemble of possible stochastic realizations of precipitation fields with a realistic spatial and temporal correlation structure and preserving the large-scale properties of the original field (see e.g., Ferraris et al.2003).
Stochastic approaches are attractive owing to their computational efficiency, which allows us to perform ensemble simulations to evaluate the uncertainties in the small-scale precipitation fields, and owing to their flexibility, as most of them can be applied to a range of temporal scales . Several stochastic downscaling methods have been proposed for precipitation, some of them devised for generating time series at single stations or at a set of stations . Here we focus on full-field generators, which have the characteristic of simulating fields of precipitation, thus providing continuous spatial information usable as input to distributed hydrological models. Detailed discussions on these models are offered by and . Full-field weather generators are generally based upon simple autoregressive models, so-called meta-Gaussian models , or point process models simulating individual rain cells , or spatiotemporal implementation of multifractal cascade models .
Among the meta-Gaussian models, an example is provided by the Rainfall Filtered Autoregressive Model (RainFARM) procedure, a stochastic rainfall downscaling method based on the extrapolation of the coarse-scale Fourier power spectrum to small scales. This method was originally developed for spatiotemporal downscaling of rainfall predictions on meteorological timescales and then extended to climatic timescales . An advantage of stochastic downscaling methods like RainFARM is that they have few free parameters, they do not require further fields in addition to the original precipitation to downscale, and the small-scale correlation structure is estimated from the large-scale field. This makes such methods also directly applicable to model outputs in areas where further fine-scale information is not available. Still, the main limitation of most stochastic downscaling methods is that they do not take into account orographic effects at scales smaller than those resolved by the original precipitation field to downscale. Orographic precipitation mechanisms, such as orographic lifting, play an important role in determining patterns of small-scale precipitation in areas with complex orography (Roe2005; Smith2006). Thus, when the fine-scale distribution of precipitation in the downscaled fields is not conditioned on orography, the long-term climatology at individual grid points may differ significantly from observations. This may make such downscaling methods not suitable for applications in which the small-scale hydrological balance is of importance, such as studies involving changes in snow cover or water resources in small mountain basins.
The addition of an orographic component to rainfall downscaling models over land has been investigated, among others, by , , , , and . In particular and were among the first studies using a cascade-based approach to analyze the multiscale statistical properties of orographic rainfall. studied the scaling behavior of orographic rainfall using a high-temporal-resolution rain gauge network in Sardinia, Italy, and developed a modified cascade-based rainfall downscaling model conditioned on local average precipitation and on terrain elevation. These methods require detailed calibration for each study area and the availability of an extensive dataset of local measurements at high temporal frequencies, detailed data which may not be readily available for several regions. Nonetheless, for many areas, information on the spatial distribution of precipitation, at least as a long-term climatological average, may be available from different sources, such as gridded reconstructions based on rain gauge observations (e.g., the EURO4M dataset for the greater Alpine region, http://www.euro4m.eu/datasets.html, last access: 1 October 2018) or from dynamical downscaling simulations with regional climate models (RCMs). In particular non-hydrostatic RCMs, when applied at very fine scales (1 to 5 km resolution), can capture the main physical mechanisms for orographic precipitation and may lead to a realistic spatial distribution of precipitation amounts on average, also over complex topography, albeit often with significant biases in amplitude .
In this paper we present a very simple approach, described and tested for the specific case of the RainFARM method, which allows the integration into a stochastic downscaling method of information on the fine-scale spatial distribution of precipitation, available from high-resolution gridded observations or from dynamical downscaling. This information is used to locally modulate the distribution of precipitation inside each large-scale grid element of the field to downscale or in the neighborhood of each point. The precipitation amplitudes on the fine grid are first determined by the stochastic downscaling procedure, then the downscaled precipitation is modulated using the realistic pattern derived from a fine-scale reference precipitation climatology. This last step, consisting in the application of correction factors (or weights), allows us to take into account the heterogeneity at the fine scales, including topographic effects. Finally, the overall precipitation amounts at the resolution of the precipitation fields to downscale are adjusted to ensure the conservation of the total precipitation at the large scale, a requirement already present in the standard RainFARM procedure.
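The weighting-and-conservation step just described can be sketched in a few lines. This is a minimal simplification of my own, not the authors' code, and it assumes the simplest choice of weights (the fine-scale climatology divided by its coarse-cell mean, constant per cell):

```python
import numpy as np

def apply_climatology_weights(fine, clim, K):
    """Modulate a downscaled field by a fine-scale climatology, conserving
    the total precipitation of each K x K coarse cell.
    fine: stochastically downscaled field, shape (Nc*K, Nc*K)
    clim: reference fine-scale climatology on the same grid (positive)
    K:    number of fine pixels per coarse-cell side"""
    Nc = fine.shape[0] // K
    coarse_mean = clim.reshape(Nc, K, Nc, K).mean(axis=(1, 3))
    weights = clim / np.kron(coarse_mean, np.ones((K, K)))   # mean 1 per cell
    weighted = fine * weights
    # rescale each coarse cell so its total matches the input field's total
    tot_in = fine.reshape(Nc, K, Nc, K).sum(axis=(1, 3))
    tot_out = weighted.reshape(Nc, K, Nc, K).sum(axis=(1, 3))
    return weighted * np.kron(tot_in / tot_out, np.ones((K, K)))

rng = np.random.default_rng(0)
K, Nc = 4, 3
fine = rng.gamma(2.0, size=(Nc * K, Nc * K))
clim = 1.0 + rng.random((Nc * K, Nc * K))
out = apply_climatology_weights(fine, clim, K)
# coarse-cell totals are conserved:
print(np.allclose(out.reshape(Nc, K, Nc, K).sum(axis=(1, 3)),
                  fine.reshape(Nc, K, Nc, K).sum(axis=(1, 3))))  # True
```

The final rescaling is what enforces the large-scale conservation requirement mentioned above, regardless of how the weights were derived.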
We demonstrate the application of the method in two cases. First, in a perfect-model experiment, in which a high-resolution precipitation dataset is first upscaled by aggregating it to a coarser resolution and then downscaled with RainFARM to its initial resolution to check the agreement with the original high-resolution field. To this end, a 30-year-long simulation with the Weather Research and Forecasting (WRF) model over Europe is used, providing the reference fine-scale (4 km) precipitation dataset, which is first aggregated to derive the coarse-scale field to be downscaled and then used for validating the downscaled field. Second, we demonstrate the method in a more realistic setup, by applying it to the E-OBS observational dataset (version 17; Haylock et al. 2008) at about 25 km resolution and by validating the statistics of the downscaled precipitation fields against surface observations from a dense network of rain gauge stations in Switzerland. Since a high-quality precipitation climatology to be used for calculating correction factors is not always available for many regions of the world, we test the impact of different reference climatologies on the downscaled fields, considering three different datasets with different degrees of accuracy.
The paper is organized as follows: Sect. 2 describes the datasets used in this study; Sect. 3 presents the modifications included in RainFARM to better describe the precipitation at fine scales; Sect. 4 shows the application and the evaluation of the method, first in a “perfect-model experiment” and then in a more realistic case in which E-OBS precipitation is downscaled and the results are compared to surface station measurements; Sects. 5 and 6 provide a discussion of the results and the main conclusions of the paper.
2 Datasets
In order to present and validate the method, we employ different precipitation datasets, described briefly in the following.
## 2.1 WRF simulation outputs
The perfect-model experiment, described further on in Sect. 4.1, is performed using precipitation data from a very high-resolution climate simulation with the regional climate Weather Research and Forecasting (WRF v3.4.1) model, described in . WRF was forced in the period 1979–2008 with boundary conditions from the ERA-Interim reanalysis and run over the European domain with a double nesting, with a resolution for the inner domain of about 0.037° (∼4 km in the meridional direction). This dataset has been validated through comparison with a range of observation-based and reanalysis datasets . In agreement with the general behavior of several regional (as well as global) climate models, which are known to exhibit wet biases over mountainous areas especially in winter (e.g., Kotlarski et al. 2014; Palazzi et al. 2015), this WRF simulation also overestimates precipitation and localized precipitation extremes over the Alps .
To perform a perfect-model experiment we aggregate WRF precipitation data originally available at ∼4 km resolution to a coarser resolution of 64 km by box averaging. The upscaled field is then downscaled back to 4 km with RainFARM and finally its statistics are compared with those of the original 4 km WRF precipitation. By construction, the results obtained with this approach are not affected by possible biases in the considered datasets; i.e., the total average precipitation flux is the same in the large-scale fields and in the validation dataset.
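The aggregation step is a plain box average; a sketch (with small illustrative sizes of my own choosing; the paper aggregates 4 km fields to 64 km, i.e. a factor of 16):

```python
import numpy as np

def box_average(field, factor):
    """Aggregate a 2-D field to a coarser grid by averaging factor x factor boxes."""
    ny, nx = field.shape
    return field.reshape(ny // factor, factor,
                         nx // factor, factor).mean(axis=(1, 3))

fine = np.arange(64.0).reshape(8, 8)   # toy 8 x 8 "fine" field
coarse = box_average(fine, 4)          # aggregate by a factor of 4
print(coarse.shape)                    # (2, 2)
```

By construction the coarse field preserves the mean precipitation flux of the fine field, which is what makes the round-trip comparison in the perfect-model experiment meaningful.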
## 2.2 E-OBS
A more realistic application is provided by a comparison between precipitation downscaled from the European daily observation-based gridded dataset E-OBS (version 17; Haylock et al. 2008) and station data. E-OBS provides daily precipitation over land areas from 25 to 75° N in latitude and 40° W to 75° E in longitude, based on the interpolation of in situ station data. For the present study we analyze E-OBS precipitation data at 0.25° lat–long resolution corresponding to about 25 km grid size in the meridional direction. Being based on the interpolation of in situ stations, E-OBS has potential inaccuracies coming from the interpolation algorithms that are employed and from sampling error related to the capability of estimating reliable grid point values from the nearest few available stations. This type of uncertainty is largest in areas with sparse and uneven station coverage, in particular in high-elevation regions where the station distribution is biased towards the lower elevations. It is also worth stressing that, in general, rain gauges tend to underestimate total precipitation in mountain areas since they do not properly account for snowfall, which represents an important contribution in high-elevation regions especially in the cold season.
We use the E-OBS gridded dataset as a sample large-scale precipitation field to be downscaled with RainFARM. The E-OBS data downscaled at 1 km resolution are then compared with MeteoSwiss station data (described in the following) to check the performance of RainFARM and for validation purposes. It is worth noting that in this, as well as in other, “real-case” experiments, average precipitation in the fields to downscale (E-OBS in this case) is generally expected to differ from that of the validation dataset, and this is a bias that our downscaling method does not address. This source of uncertainty should be considered when evaluating the downscaling performance.
## 2.3 WorldClim
WorldClim 1.4 is used here as an example of a globally available precipitation climatology that could also be used in regions where no high-quality local gridded data are available. WorldClim consists of a set of global gridded climatologies based on observations, with a nominal spatial resolution of about 1 km × 1 km. It is a popular choice for ecosystem studies. WorldClim provides 30-year monthly averages of the minimum, mean, and maximum temperature and of precipitation, as well as of other bioclimatic variables, for a reference historical period (1960–1990, labeled as current climate) and for a future period (2050–2080) for four Representative Concentration Pathways (RCPs). Monthly climatologies were obtained from various data sources through spline interpolation methods, which use the latitude, longitude, and elevation as independent variables. Assessments of uncertainties in the gridded products highlighted that the most uncertain estimates correspond to mountainous and other poorly sampled areas. In fact, a comparison of WorldClim data with two high-resolution datasets in the US found significant differences, particularly in high-elevation regions.
## 2.4 MeteoSwiss station and gridded data
To validate the RainFARM downscaling algorithm we consider precipitation observations recorded by 160 automatic stations of the MeteoSwiss network. These data are preprocessed by MeteoSwiss, which performs temporal aggregation, gap filling, and quality control to correct wrong or implausible measurement values according to agreed protocols (https://www.meteoswiss.admin.ch/home/measurement-and-forecasting-systems/datenmanagement/data-preparation.html, last access: 1 October 2018). We focus our analysis on the period 1981–2010 and retain only the stations providing at least 80 % of daily data over this period, leading to a reduced set of 59 stations, shown in Fig. 1.
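The 80 % availability criterion can be expressed compactly. The sketch below assumes a hypothetical stations × days array with NaN marking missing values; it is an illustration, not the MeteoSwiss preprocessing.

```python
import numpy as np

def select_stations(data, min_fraction=0.8):
    """Return indices of stations (rows) whose fraction of valid
    (non-NaN) daily values is at least `min_fraction`."""
    valid_fraction = np.isfinite(data).mean(axis=1)
    return np.where(valid_fraction >= min_fraction)[0]

# Hypothetical 5 stations x 100 days; station 0 misses 30 % of days
rng = np.random.default_rng(0)
data = rng.random((5, 100))
data[0, :30] = np.nan
kept = select_stations(data)
```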
We also employ the MeteoSwiss climatology RnormM to calculate the corrective weights for the RainFARM downscaling. RnormM provides the average monthly accumulated precipitation over the standard period 1981–2010, calculated from the data of all automatic and manual stations in Switzerland, achieving high accuracy and detailed spatial resolution. RnormM provides precipitation with a nominal spatial resolution of 2.2 km in WGS-84 long–lat coordinates, while the effective resolution, i.e., the average distance between individual weather stations, is 15–20 km. The accuracy of the RnormM analysis depends on the accuracy of the underlying measuring stations and on the skill of the interpolation method employed.
# 3 The RainFARM stochastic downscaling method and its modification
## 3.1 RainFARM
The RainFARM procedure is described in detail in previous works; in the present paper we refer to the spatial-only downscaling method described in the most recent of these. The RainFARM method downscales a large-scale spatiotemporal precipitation field P(X, Y, t), which is considered reliable at scales larger than a reliability scale Lo (which may often coincide with the spatial resolution of the field). Here and in the following we use uppercase coordinates (X, Y) and lowercase coordinates (x, y) to indicate that a field is defined on a coarse or fine grid, respectively.
From the large-scale field to downscale, the method generates a fine-scale field $\tilde{r}(x, y, t)$ at the desired fine-scale resolution by extrapolating its large-scale power spectrum to the unresolved smaller scales: the spectral slope of the large-scale field in a log–log plot is preserved, random Fourier phases are chosen at the small scales, and an inverse Fourier transform returns the field to physical space. Since this procedure by itself would create intermediate fields g(x, y, t) with an unrealistic, almost Gaussian amplitude distribution, a final nonlinear (exponential) transformation is applied to the resulting field in physical space: $\tilde{r}(x, y, t)=\exp(\gamma g)$. The parameter γ represents an additional free parameter of the procedure, but γ=1 is commonly used when there is no adequate information to tune it.
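The synthesis step just described can be sketched as follows: power-law Fourier amplitudes, random phases, inverse transform, and the exponential nonlinearity. This is a minimal illustration assuming a square grid; it keeps only the real part of the inverse transform instead of enforcing the Hermitian symmetry a production implementation would use.

```python
import numpy as np

def gaussian_field(n, slope, rng):
    """Synthesize an n x n Gaussian field whose power spectrum follows
    k**(-slope): power-law Fourier amplitudes with random phases,
    keeping only the real part of the inverse transform."""
    kx = np.fft.fftfreq(n) * n
    k = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)
    amplitude = np.where(k > 0, k, 1.0) ** (-slope / 2)
    amplitude[0, 0] = 0.0                      # remove the mean component
    phases = np.exp(2j * np.pi * rng.random((n, n)))
    g = np.real(np.fft.ifft2(amplitude * phases))
    return (g - g.mean()) / g.std()            # standardize to zero mean, unit variance

rng = np.random.default_rng(42)
g = gaussian_field(128, slope=3.0, rng=rng)
r_tilde = np.exp(1.0 * g)                      # exponential transform, gamma = 1
```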
In the final step of the procedure, $\tilde{r}(x, y, t)$ is further adjusted to guarantee that, when upscaled (aggregated) at the large reliability scale, it reproduces exactly the original field to downscale P(X, Y, t):
$\text{(1)}\quad r(x,y,t)=\frac{\tilde{r}(x,y,t)\,\langle P(x,y,t)\rangle_{L_{\mathrm{o}}}}{\langle \tilde{r}(x,y,t)\rangle_{L_{\mathrm{o}}}},$
where the operator $\langle\cdot\rangle_{L_{\mathrm{o}}}$ indicates aggregation (averaging) at scale Lo, using simple averaging over boxes of side Lo followed by nearest-neighbor interpolation back to the fine-scale grid (x, y).
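In code, the aggregation operator and the adjustment of Eq. (1) might look like the sketch below, assuming the fine-field dimensions are multiples of the aggregation factor. Here `target` stands in for the fine-grid, nearest-neighbour representation of the field whose box averages must be matched.

```python
import numpy as np

def aggregate_interpolate(field, factor):
    """The <.>_Lo operator: box-average at scale Lo, then map back to
    the fine grid by nearest neighbour (each coarse value repeated)."""
    ny, nx = field.shape
    coarse = field.reshape(ny // factor, factor,
                           nx // factor, factor).mean(axis=(1, 3))
    return np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)

def adjust(r_tilde, target, factor):
    """Eq. (1): rescale the stochastic field so that its box averages
    at scale Lo match those of the target field."""
    return r_tilde * aggregate_interpolate(target, factor) \
                   / aggregate_interpolate(r_tilde, factor)

rng = np.random.default_rng(1)
target = rng.random((32, 32)) + 0.1     # stands in for the field to match
r_tilde = rng.random((32, 32)) + 0.1    # raw stochastic field
r = adjust(r_tilde, target, 8)
```

After the adjustment, every Lo-sized box of `r` averages exactly to the corresponding box average of `target`, which is the conservation property the text describes.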
Figure 1(a) Orography of the study area including the Swiss Alps, according to a 500 m resolution digital elevation model. Automatic meteorological stations of the MeteoSwiss network providing at least 80 % of valid daily total precipitation data over the period 1981–2010 are labeled with gray diamonds. (b) Distribution of the elevations of the MeteoSwiss stations grouped in 500 m elevation bins.
In this work we additionally apply a smoothing operator S to both the numerator and the denominator of Eq. (1), averaging the box-averaged fields a(x, y) over a moving Gaussian window of size $\sigma=L_{\mathrm{o}}/2$:
$\text{(2)}\quad S[a(x,y)]_{L_{\mathrm{o}}}=\int_{\mathrm{\Omega}}K(x-x',y-y')\,a(x',y')\,\mathrm{d}x'\,\mathrm{d}y',$
where Ω is the entire domain of interest and $K(x-x',y-y')=\exp\{-[(x-x')^{2}+(y-y')^{2}]/\sigma^{2}\}$ is a kernel representing an isotropic distribution of Gaussian weights. The resulting transformation in this case is
$\text{(3)}\quad r(x,y,t)=\frac{\tilde{r}(x,y,t)\,S[\langle P(x,y,t)\rangle_{L_{\mathrm{o}}}]_{L_{\mathrm{o}}}}{S[\langle \tilde{r}(x,y,t)\rangle_{L_{\mathrm{o}}}]_{L_{\mathrm{o}}}}.$
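The smoothing operator S can be sketched as a convolution with the Gaussian kernel of Eq. (2). The illustration below uses an FFT convolution with periodic boundaries, a simplification of the integral over the full domain Ω.

```python
import numpy as np

def gaussian_smooth(field, sigma):
    """Apply the operator S of Eq. (2): convolution with the kernel
    exp(-d^2 / sigma^2), here via FFT with periodic boundaries."""
    n = field.shape[0]
    d = np.minimum(np.arange(n), n - np.arange(n))   # periodic distance
    d2 = d[:, None] ** 2 + d[None, :] ** 2
    kernel = np.exp(-d2 / sigma ** 2)
    kernel /= kernel.sum()                           # conserve the mean
    return np.real(np.fft.ifft2(np.fft.fft2(field) * np.fft.fft2(kernel)))

field = np.zeros((64, 64))
field[32, 32] = 1.0                          # a point impulse
smooth = gaussian_smooth(field, sigma=8.0)   # sigma = Lo/2 in grid units
```

Because the kernel is normalized, the smoothing conserves the spatial mean while spreading the impulse isotropically, which is what removes the box-shaped artifacts mentioned below.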
This improved approach conserves average precipitation at scale Lo while avoiding box-shaped artifacts in the resulting fields. We verified that all qualitative results discussed in this paper are unchanged whether or not the final smoothing step is applied.
## 3.2 Reproducing fine-scale precipitation climatology in RainFARM
We assume that a reference precipitation climatology c(x, y) at fine spatial scales is available. This could be obtained from long-term time averages of gridded observational precipitation datasets, from radar or satellite observations, or from numerical simulations with high-resolution models. The reference climatology is used only to derive local weights that modify the spatial distribution of precipitation; the absolute value of precipitation itself is not taken into account, so that possible large-scale biases in the reference climatology are not introduced into the downscaling chain and do not affect the results.
The spatial pattern of precipitation is translated into a map of weights that is used to correct the spatial pattern of the downscaled precipitation fields as follows:
$\text{(4)}\quad w(x,y)=c(x,y)/S[c(x,y)]_{L_{\mathrm{o}}}.$
That is, we divide each value of c(x, y) by its local smooth average at scale Lo. When the spatial average of c(x, y) is 0 (as may happen in arid areas), the weights are all set to 1. The resulting weight field reflects the distribution in space, inside each cell of size Lo, of the climatological precipitation in the reference dataset. Notice that this provides a map of weights with values both above and below 1 and that, on average, precipitation at scale Lo is conserved using this approach.
In general, if the climatology needs to be reproduced at a monthly timescale, this method can be applied separately for each month, computing monthly weights wi(x, y) from Eq. (4), where ci(x, y) is the long-term monthly average of the reference precipitation dataset for month i, with $i=1,\,\dots,\,12$.
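Month by month, the weight computation of Eq. (4) could be sketched as follows. The sketch uses a simple box-average stand-in for the smoothing operator and sets the weights to 1 where the smoothed climatology vanishes; the 12-month array is hypothetical.

```python
import numpy as np

def smooth_box(field, factor):
    """Simple stand-in for the smoothing operator S: box-average at
    scale Lo, then repeat each value back onto the fine grid."""
    ny, nx = field.shape
    coarse = field.reshape(ny // factor, factor,
                           nx // factor, factor).mean(axis=(1, 3))
    return np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)

def monthly_weights(clim_months, factor):
    """Eq. (4) month by month: w_i = c_i / S[c_i]; weights are set to 1
    where the smoothed climatology vanishes (e.g. arid areas)."""
    weights = []
    for c in clim_months:
        s = smooth_box(c, factor)
        w = np.ones_like(c)
        np.divide(c, s, out=w, where=s > 0)
        weights.append(w)
    return np.stack(weights)

rng = np.random.default_rng(2)
clim = rng.random((12, 32, 32))     # 12 hypothetical monthly climatologies
w = monthly_weights(clim, 8)
```

Within every Lo-sized box the weights average to 1, so applying them redistributes precipitation inside the box without changing its mean.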
The weights are then applied to the fine-scale field produced by the RainFARM procedure:
$\text{(5)}\quad \tilde{r}(x,y,t)\rightarrow \tilde{r}(x,y,t)\cdot w(x,y),$
generating a new field in which precipitation is reduced or intensified according to the weights obtained from the long-term climatology. As a last step, the final amplitude adjustment conserving average precipitation at scale Lo, i.e., Eq. (3), is again applied to $\tilde{r}(x, y, t)$.
The resulting fine-scale field r(x, y, t) still coincides exactly with the large-scale field P(X, Y, t) when both are aggregated at the reliability scale Lo, but its long-term time-averaged climatology will reflect the small-scale spatial distribution of the reference dataset c(x, y). Notice that the weights in Eq. (4) only use the local distribution of precipitation, so they are not sensitive to possible large-scale biases in the precipitation climatology.
# 4 Results
## 4.1 Application of RainFARM in a perfect-model experiment
We demonstrate the method using daily precipitation data from long-term simulations (1980–2008) performed with the WRF model over the European domain, at 4 km spatial resolution, forced with ERA-Interim reanalysis data . We focus on the Alpine region, choosing an area encompassing northwestern Italy and Switzerland (see Fig. 2a). The area comprises 128×128 grid elements of the WRF precipitation field, p(x, y, t).
The dataset p(x, y, t) is used in the perfect-model experiment:

- i. to create the coarse-scale field P(X, Y, t) to be downscaled, obtained by aggregating the fine-scale field at scale Lo=64 km (corresponding to 16×16 fine-scale grid points) using box averaging;
- ii. to calculate the reference fine-scale climatology, needed by the modified RainFARM algorithm to estimate the weights;
- iii. to validate the fine-scale fields produced by the downscaling method.
In our example we consider monthly climatologies separately, in order to illustrate a general case in which the variability in precipitation at seasonal and subseasonal scales is non-negligible.
The coarse-scale field, P(X, Y, t), resulting from the aggregation of p(x, y, t), has 8×8 spatial grid elements. After applying RainFARM to P(X, Y, t), the fine-scale output r(x, y, t) should reproduce, as closely as possible, the statistical properties of the original field p(x, y, t). To this end, we tune the value of the parameter γ, described in Sect. 3.1, so that the amplitude distributions of the downscaled fields r(x, y, t) encompass that of the original field p(x, y, t) (shown in Fig. 3a). A suitable value is found to be γ=0.75. The spatial spectral slope for the RainFARM procedure is estimated separately for each month of the year from the original coarse precipitation data P(X, Y, t), starting at wave number k=2, which corresponds to a change in slope of the spatial power spectra of precipitation in the WRF dataset.
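The slope estimation can be sketched by azimuthally averaging the power spectrum into integer wavenumber bins and fitting a line in log–log space starting at k=2. The binning and fitting details below are illustrative, not the exact procedure of the paper; the synthetic test field has a known spectral slope of 3.

```python
import numpy as np

def spectral_slope(field, kmin=2):
    """Estimate the log-log slope of the isotropic power spectrum,
    fitting only wavenumbers k >= kmin (k = 2 in the text)."""
    n = field.shape[0]
    power = np.abs(np.fft.fft2(field)) ** 2
    kx = np.fft.fftfreq(n) * n
    kbin = np.rint(np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)).astype(int)
    ks = np.arange(kmin, n // 2)
    spectrum = np.array([power[kbin == i].mean() for i in ks])
    coeffs = np.polyfit(np.log(ks), np.log(spectrum), 1)
    return -coeffs[0]

# Synthetic field with power-spectrum slope 3 (amplitude slope 1.5)
rng = np.random.default_rng(4)
n = 64
kx = np.fft.fftfreq(n) * n
kk = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)
amp = np.where(kk > 0, kk, 1.0) ** -1.5
amp[0, 0] = 0.0
field = np.real(np.fft.ifft2(amp * np.exp(2j * np.pi * rng.random((n, n)))))
estimated = spectral_slope(field)
```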
Figure 2a shows the long-term time average (1980–2008) of the WRF 4 km resolution precipitation dataset p(x, y, t). The field presents small-scale details that reflect the underlying topography, and several features can be easily identified, such as the Apennine Mountains in the Italian regions of Liguria and Tuscany (lower corner) or features connected to individual river basins in the Alps. From the small-scale field p(x, y, t) we calculate the monthly climatologies, used as a reference to derive the corresponding fine-scale maps of weights, according to Eq. (4).
Figure 2b shows the long-term time average of the coarse 64 km spatial resolution dataset to be downscaled, P(X, Y, t), obtained by box-averaging aggregation of the fine-scale WRF data p(x, y, t). The RainFARM downscaling procedure estimates the large-scale spectral slopes, month by month, directly from the large-scale field P(X, Y, t).
Figure 2c shows the long-term average of the fine-scale fields obtained using the standard RainFARM procedure with the smoothing operator based on a moving Gaussian window of size $\sigma=L_{\mathrm{o}}/2$ (Eq. 2). Since RainFARM alone does not take into account orography at scales smaller than the reliability scale Lo, the climatology of the downscaled field matches the reference climatology only when coarse-scale averages of the downscaled climatology are considered. Inside each grid element of size Lo, the RainFARM field presents a distribution that has no correspondence with the actual reference precipitation climatology. In fact, the fine-scale distribution introduced by RainFARM in each large-scale grid element of size Lo is statistically almost homogeneous, as reflected in the smooth distributions found in the long-term average (Fig. 2c). Indeed, if a box-averaging operator had been used (Eq. 1 instead of Eq. 3), the resulting climatology of the RainFARM downscaled field would be very similar to Fig. 2b.
Figure 2. The “perfect-model” experiment. Average daily precipitation climatology (1980–2008) derived from (a) high-resolution WRF daily fields at 4 km, here considered to be the “truth”; (b) WRF daily fields aggregated at 64 km, i.e., the fields, P(X, Y, t), to be downscaled; (c) P(X, Y, t) downscaled at 4 km using the standard RainFARM method; (d) P(X, Y, t) downscaled at 4 km using the modified RainFARM method; (e) an example of a map of weights obtained in this case for the month of June; (f, g) anomalies of the downscaled climatologies in (c) and (d) with respect to the reference precipitation field (a).
Figure 2d shows the same as Fig. 2c but using the modified RainFARM algorithm, i.e., applying the weights wi(x, y) computed from the WRF monthly climatologies. An example of a weight map is provided in Fig. 2e, which refers to the month of June. In this case the weights correspond to correction factors ranging between 0.4 and 2.3, and similar ranges are found for the other months. Compared to Fig. 2c, in Fig. 2d individual orographic features are now clearly recognizable and there is a significantly improved correspondence with the reference climatology. The improvement gained by the modified RainFARM procedure is highlighted in Fig. 2f and g, which report the anomalies of the climatologies obtained with the two downscaling procedures (Fig. 2c and d) with respect to the reference climatology (Fig. 2a). These panels show that the modified RainFARM algorithm markedly reduces the bias with respect to the reference climatology in both the valleys and the mountain ridges. Furthermore, when we compare the climatologies of the standard RainFARM downscaled fields with the reference, we find a pattern correlation of 0.79 and a root-mean-square error (RMSE) of 0.86 mm day−1, while the modified method improves these to a correlation of 0.98 and an RMSE of 0.27 mm day−1.
Figure 3. The “perfect-model” experiment: (a) probability density function (PDF) of WRF daily precipitation at coarse resolution (64 km, black), of the downscaled WRF precipitation (from 64 to 4 km) obtained using the standard (gray) and modified (light blue) RainFARM methods, and of the original fine-resolution (4 km) WRF precipitation (blue). Gray and light-blue bands and lines represent the spread of the ensemble and the fifth and 95th percentile ranges, respectively, calculated from 80 realizations of the stochastic downscaled field; (b) same as (a) but separating high- and low-precipitation grid points as specified in the text; (c) ratio between the PDF of WRF downscaled precipitation and the PDF of the reference, for low- and high-precipitation grid points, with the standard and the modified RainFARM methods.
We proceed by investigating the extent to which the two RainFARM methods correct the amplitude distributions of the coarse-scale daily precipitation with respect to the reference fine-scale data. Figure 3 shows the PDFs of the WRF daily precipitation before and after application of the downscaling methods. The PDFs are calculated including all the grid points of the previously described precipitation datasets, i.e., $8\times 8\times N$ grid points in the coarse-scale dataset and $128\times 128\times N$ grid points in both the downscaled and the validation datasets, N being the number of (daily) time steps in the period 1980–2008. For each downscaling method (standard and modified RainFARM) we generate an ensemble of 80 stochastic realizations of the downscaled rainfall fields in order to provide an estimate of the uncertainty associated with the small-scale precipitation. The different realizations are characterized by the same spectral slope and different sets of random Fourier phases. In Fig. 3a we report as light-blue and gray bands the spread of the PDF ensembles, and as thick light-blue and gray lines the fifth and 95th percentiles of the PDF range obtained with the modified and the standard RainFARM, respectively.
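The ensemble PDF bands can be computed by histogramming each realization on common bins and taking percentiles across the ensemble. The sketch below uses a hypothetical exponential sample standing in for downscaled precipitation.

```python
import numpy as np

def pdf_percentile_band(ensemble, bins, q_low=5, q_high=95):
    """Histogram each ensemble member on common bins and return the
    q_low-th and q_high-th percentiles of the PDF across members."""
    pdfs = np.array([np.histogram(member, bins=bins, density=True)[0]
                     for member in ensemble])
    return (np.percentile(pdfs, q_low, axis=0),
            np.percentile(pdfs, q_high, axis=0))

# Hypothetical 80-member ensemble of daily precipitation samples
rng = np.random.default_rng(5)
ensemble = rng.exponential(10.0, size=(80, 5000))
bins = np.linspace(0.0, 100.0, 21)
lo, hi = pdf_percentile_band(ensemble, bins)
```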
The coarse-scale precipitation fields provide precipitation values mainly below 100–120 mm day−1 and, in any case, never exceeding 150 mm day−1. However, the range of precipitation values simulated at high resolution (∼4 km) by WRF extends up to 400 mm day−1. The aggregation has clearly smoothed out the precipitation extremes, which appear insufficiently represented in the coarse-scale dataset. Both RainFARM downscaling methods reintroduce high-precipitation values in the range of 150–400 mm day−1 with a probability of occurrence comparable to that seen in the original WRF dataset. The PDF of the real reference precipitation is included in the range of PDFs obtained via stochastic downscaling: this result confirms the strength of the RainFARM method as a way to effectively represent the upper tails of the precipitation distribution, even when the new modified procedure described in this work is used. In fact, with the modified RainFARM procedure we obtain a PDF similar to that of the standard RainFARM, just slightly shifted towards higher precipitation values. The modified procedure, while better reproducing the long-term climatology of precipitation at each point, does not affect the overall capability of the RainFARM method to reproduce extreme precipitation values.
In Fig. 3a all grid points in the study area have been considered together. Since the modified RainFARM procedure leads to an overall better representation of the downscaled precipitation climatology (Fig. 2d and g), it is interesting to analyze in more detail the effects of the downscaling procedure when we separately consider grid points whose long-term average precipitation is higher or lower than the median over the neighboring grid points. To this end, in each large-scale box of size Lo we separate grid points into two groups of equal size, using the local median of the long-term climatology (Fig. 2a) as a threshold. Grid points with average precipitation climatology above or equal to the local median are classified as “high-precipitation” grid points, while those with average precipitation below the threshold are labeled as “low-precipitation” grid points. For each of the two groups we calculate the PDF of the downscaled daily precipitation and we compare it to the PDF of the original 4 km WRF data in the same group. This exercise is performed using both the standard and the modified RainFARM outputs.
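The local-median classification can be sketched as follows, assuming the field dimensions are divisible by the box size.

```python
import numpy as np

def high_precip_mask(clim, factor):
    """Flag grid points whose climatology is at or above the local
    median computed within each Lo-sized box."""
    ny, nx = clim.shape
    boxes = clim.reshape(ny // factor, factor, nx // factor, factor)
    local_median = np.median(boxes, axis=(1, 3))
    median_fine = np.repeat(np.repeat(local_median, factor, axis=0),
                            factor, axis=1)
    return clim >= median_fine

rng = np.random.default_rng(6)
clim = rng.random((32, 32))          # hypothetical fine-scale climatology
high = high_precip_mask(clim, 8)
```

For continuous data each box is split exactly in half, which is what makes the two groups have equal size.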
Figure 3b shows the results when the standard (left) and the modified (right) RainFARM methods are applied. When using the standard RainFARM the PDFs of the high- and low-precipitation grid points are not clearly separated from each other, and for given precipitation ranges the reference PDF lies outside the range of variability in the PDFs of the downscaled data. Instead, the modified RainFARM is able to capture the reference rainfall PDF, better separating the high- from the low-precipitation grid points, and the reference PDFs are included in the range of PDFs of the downscaled datasets.
In order to better compare the performance of the modified versus the standard RainFARM, we show the ratio of the PDFs of the downscaled datasets to the PDF of the reference data: the closer the ratio is to 1, the better the model performance. The results are reported in Fig. 3c, in which low- and high-precipitation grid points are shown in the left and right panels, respectively. Also in this case we use the full 80-member ensemble, and the bands in the plot represent the range of variability in the ensemble. The standard RainFARM shows good skill in representing very rare events with precipitation above 200 mm day−1 in low-precipitation grid points. Apart from this, the standard RainFARM overestimates the frequency of precipitation below 200 mm day−1 in low-precipitation grid points and underestimates the frequency of precipitation below 300 mm day−1 in high-precipitation grid points. These results show that the standard RainFARM method is, by construction, not sensitive to the differences between low- and high-precipitation grid points at the fine scale, so precipitation is generally overestimated in low-precipitation grid points and underestimated in high-precipitation grid points (Fig. 3b).
This problem is corrected when the modified RainFARM is used. The modified RainFARM provides precipitation distributions that are closer to the real one, for almost the full range of precipitation values, for both low- and high-precipitation grid points. The only exception is for very rare events with daily precipitation above 300 mm day−1, occurring in high-precipitation grid points only, where the standard RainFARM already showed a good agreement: the frequency of these events is now overestimated with respect to the reference. Apart from this feature, the modified RainFARM outperforms the standard method and allows redistribution of coarse-scale precipitation among the corresponding small-scale grid points in a more realistic way based only on their average climatology.
## 4.2 A more realistic test case
In this section we demonstrate an application in which the large-scale precipitation field, the reference climatology, and the verification data are not derived from the same high-resolution dataset but from different sources. We downscale the E-OBS dataset, one of the most extensively used gridded observational precipitation datasets over Europe, and we compare downscaled data directly with daily station measurements from MeteoSwiss. The domain of study is again the Swiss Alps, for which a high-quality dataset from surface stations is available (Fig. 1). The reference precipitation climatology used to derive the corrective weights is the MeteoSwiss RnormM monthly climatology (see Sect. 2.4), possibly the best available gridded product for the study region. Notice that the MeteoSwiss station data used for verification are included among the stations used to construct this gridded climatology, so the climatology is not independent, but we discuss in Sect. 4.3 the impact of different precipitation climatologies.
Also, in this case we estimate spectral slopes at a monthly scale from the coarse E-OBS fields, starting from wave number k=2. A comparison between the downscaled E-OBS precipitation and MeteoSwiss observations suggested choosing γ=1.35 in this case.
We analyze the PDFs of the E-OBS original dataset at 25 km, including only grid points containing at least one surface station. If one grid point includes more than one station, the E-OBS time series is repeated so that there is the same number of time series from the surface stations and from E-OBS. As a second step, we separate points characterized by “high” and “low” long-term average precipitation. To this end we use the station data and calculate the long-term average daily precipitation climatology at each station. The stations are then split into two groups based on the median of the distribution of the precipitation climatologies, so that low-precipitation stations have below-median long-term daily precipitation climatologies and high-precipitation stations have above-median ones. The corresponding E-OBS grid points containing the stations are grouped based on the classification of the stations they contain. Notice that this separation is different from that carried out in the previous section for the perfect-model case, in which each grid point was compared with its immediate neighbors: since here our reference dataset is an ensemble of sparse stations instead of a continuous gridded field, we select stations with high or low precipitation compared to all available stations rather than only to their neighbors.
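Pairing stations with grid cells and duplicating the cell series can be sketched with simple indexing; the cell indices below are hypothetical.

```python
import numpy as np

def match_grid_to_stations(grid_series, station_cell):
    """Repeat each grid-cell time series once per station it contains,
    so station and gridded samples have the same size."""
    return grid_series[station_cell]

# Hypothetical example: 4 grid cells x 3 days; two stations fall in cell 0
grid_series = np.arange(12, dtype=float).reshape(4, 3)
station_cell = np.array([0, 0, 2, 3])    # cell index of each station
matched = match_grid_to_stations(grid_series, station_cell)
```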
Figure 4a shows the PDFs of E-OBS at its original resolution, of E-OBS downscaled with the standard and the modified RainFARM methods (80 realizations for each experiment) and the PDF of the observations from the 59 stations. The displayed results refer to the case with no distinction between low- and high-precipitation grid points, so all the grid points are considered part of the same sample. In this case the two RainFARM methods provide very similar results, with almost no difference in the fifth and the 95th percentiles of the two PDF distributions. Both downscaling methods introduce variability at small spatial scales, increasing the probability of precipitation events above 100 mm day−1 with respect to the original coarse-scale data. The PDF of the original E-OBS data lies around the lower fifth percentile of the PDF distribution of the downscaled data. The downscaling clearly improves the agreement with observations and allows us to fully capture the observed PDF.
To further investigate this result we separate high- and low-precipitation grid points as previously explained and we evaluate (i) the ratio between the PDF of E-OBS data and that of observations in order to better characterize the E-OBS dataset and (ii) the ratio between the PDF of each downscaling realization (80 realizations for each of the two ensembles) and the observed PDF in order to characterize the performances of the two downscaling methods (Fig. 4b and c). The closer the PDF ratio to 1, the better the agreement with the observations. Please note that the displayed precipitation range corresponds to the full observed precipitation range by construction.
When considering low-precipitation grid points (Fig. 4b), E-OBS at the original spatial resolution shows a clear tendency to overestimate the frequency of precipitation events from a few millimeters per day up to about 80 mm day−1. Above this precipitation threshold the events become rare, with no events above 150 mm day−1. The small-scale fields obtained with the standard RainFARM downscaling method inherit the overestimation errors in the range between a few millimeters per day and about 80 mm day−1. The standard method acts mainly on the tails of the distribution by amplifying the frequency of heavy precipitation events. Their frequency becomes remarkably higher with respect to observations above 100 mm day−1. If the modified RainFARM algorithm is applied, the PDF ratios get closer to 1 throughout the range, showing a clear improvement in the representation of the precipitation distribution with respect to the standard RainFARM method.
When considering high-precipitation grid points (Fig. 4c), E-OBS at the original spatial resolution shows important deficiencies and limited capability to reproduce the observed PDF. The agreement between the E-OBS PDF and the observed PDF drops for values higher than about 20 mm day−1 and gets close to zero in the range between 100 and 300 mm day−1. Such inadequacy found for the E-OBS dataset is expected to also be reflected to some extent in the downscaled data. In fact, we find that, with respect to E-OBS at its original spatial resolution, both downscaling methods correctly increase the frequency of high-precipitation events and they contribute to reducing the discrepancy with respect to the reference dataset. In short, both downscaling methods improve the description of the tail of the precipitation distribution but the discrepancy between the original coarse-scale dataset and the observations is too large to be entirely canceled out by the downscaling method only.
Figure 4. (a) The “real-case” experiment. Probability density function (PDF) of E-OBS daily precipitation at the original resolution of 0.25° (black), E-OBS precipitation downscaled using the standard (gray) and the modified (light blue) RainFARM with the MeteoSwiss weights, and the observations from 59 stations in Switzerland (dark blue). Gray and light-blue bands and lines represent the spread of the ensemble and the fifth and 95th percentile ranges, respectively, calculated from 80 realizations of the stochastic downscaled field. (b, c) The ratio between the PDFs of (i) E-OBS at its original resolution (black squares), (ii) 80 E-OBS downscaling realizations with the standard (gray) and the modified (colors) RainFARM, and the PDF of the observations, for low-precipitation (b) and high-precipitation (c) grid points. (d–g) Same as (b) but for different seasons (DJF, MAM, JJA, SON) and for low-precipitation grid points.
To further investigate and better characterize the performances of the downscaling methods, we compare the skills of the standard and of the modified RainFARM in different seasons. Figure 4d–g report the results for low-precipitation stations. The overestimation of the observed PDF from E-OBS at its original resolution (0.25°) at low-precipitation stations (see Fig. 4b) occurs mainly in winter and, to a lesser extent, in spring and autumn. When considering the downscaled E-OBS precipitation, a clear difference emerges between the standard and the modified RainFARM methods. The standard RainFARM reproduces and amplifies the E-OBS overestimation, showing large discrepancies with respect to the observed PDF, especially for precipitation above 50 mm day−1 (winter and spring). The modified RainFARM, instead, reduces the original E-OBS overestimation, leading to a closer agreement with observations. In spring and autumn, in particular, the observed PDFs are reproduced very well. In summer, E-OBS at its original resolution is able to correctly reproduce the observed PDF up to about 70 mm day−1. In this case, a clear improvement of the modified RainFARM with respect to the standard RainFARM is registered only for precipitation events above 50 mm day−1. Below this threshold the modified RainFARM still gives PDF ratios close to 1.
For high-precipitation stations (not shown), the performances of the standard and the modified RainFARM methods are similar to each other and they show little variability across different seasons. In fact, for all seasons, the performances of both downscaling methods reflect the behavior found at an annual timescale (Fig. 4c).
## 4.3 Sensitivity of the method to the reference precipitation climatology
The results described in the previous section have been obtained employing a high-quality reference precipitation climatology for the calculation of the corrective weights. The availability of such a high-quality dataset is rare in mountain regions; it is possible here because the Alps are among the most instrumented mountain regions of the world, while elsewhere such data may simply be unavailable. For this reason we explore the sensitivity of the modified RainFARM algorithm to the accuracy of the reference precipitation climatology and we show possible alternatives if a high-quality gridded precipitation climatology is not available. In these cases, one possibility is to use, for example, a high-resolution global precipitation gridded climatology such as WorldClim, already described in Sect. 2.3, which nominally provides monthly climatologies at 1 km resolution, based on more than 47 000 stations distributed around the globe. Clearly, the distribution of the stations is uneven and reflects the level of economic development and the population density of a country, as well as the national data access policies, so the uncertainty in areas with low station density can be remarkable. Even in a station-dense area such as the Swiss Alps, the station database used for WorldClim v1.4 counts only 22 stations, so it is sparser than that used to compile the regional-scale MeteoSwiss RnormM climatology, and consequently likely characterized by lower accuracy. Apart from WorldClim, a second possible option in the absence of a trusted, high-resolution gridded precipitation climatology over the domain of interest could be to use a reference climatology derived from very high-resolution regional climate model simulations. To address this possibility in the following test, we use the WRF climatology at 4 km resolution already exploited in the perfect-model experiment, this time applied only to derive the weights for the correction in Eq. (5).
Figure 5 Sensitivity of the modified RainFARM downscaling method to different weights, derived from MeteoSwiss (light green), WorldClim (dark green), and WRF (cyan) climatologies for low-precipitation grid points. The performances of the standard RainFARM method (gray) are shown for comparison. The bands show the range of variability in the PDF ratios between each of the 80 downscaling realizations and the PDF of observations, while black squares represent the ratio between the PDF of E-OBS at its original spatial resolution and the PDF of observations.
Figure 5 compares the results of the downscaling performed using weights derived from three different climatologies, i.e., MeteoSwiss, WorldClim, and WRF 4 km resolution climate simulations. For each of the three experiments we use ensembles of 80 realizations. Keeping in mind the issues of E-OBS (see Sect. 4.2) regarding its limited capability of describing the observed precipitation range for high-precipitation grid points, and the consequent difficulty of disentangling the limitations of E-OBS from those of the downscaling method, here we focus the analysis on the low-precipitation grid points, which are not affected by these problems. While, not surprisingly, the MeteoSwiss climatology provides the best results, the WorldClim and the WRF climatologies also improve the agreement with the observed PDF, leading to significantly better results than the original RainFARM method. Up to about 60 mm day−1 all three methods provide PDFs that stay very close to the observed PDF, while at higher values the ensembles of PDFs of the downscaled fields tend to overestimate the verification PDF. It is important to note that at very high precipitation levels (around 150 mm day−1) all three ensembles obtained with the different climatologies tend to again contain the verification PDF.
Both the WRF and the WorldClim climatologies tend to slightly overestimate the observed PDF compared to the use of the MeteoSwiss climatology, particularly at higher precipitation levels, with the WRF climatology performing slightly better than the WorldClim climatology. The limitations of the WRF climatology might be due to well-known difficulties of regional models in accurately reproducing precipitation over topography, with significant biases that also depend on the specific parameterizations used. The performance of the WorldClim climatology is probably affected by its sparser station density compared to the MeteoSwiss dataset.
## 5 Discussion
A simple modification of stochastic precipitation downscaling methods, which takes into account precipitation variability at scales of the order of 1 km, has been proposed, applied to RainFARM, and tested in the Swiss Alps in two different cases. First, in the perfect-model framework, high-resolution WRF simulations (0.037°, ∼4 km) have been upscaled to 64 km resolution in such a way that the amount of precipitation in a grid point at the coarse scale is, on average, the same as the precipitation fallen in the corresponding 16×16 grid points in the original fine-scale field. The downscaling procedure applied to this coarse-scale dataset shows a very good agreement with the true precipitation data in terms of their amplitude distribution. When separately analyzing grid points with low- and high-precipitation climatology (low and high with respect to the median of the fine-scale daily average precipitation climatology in each coarse-scale grid point), the added value of the new RainFARM version over the standard RainFARM is evident. The new version reproduces the distribution of precipitation in low- and high-precipitation grid points with very high accuracy, remarkably better than the standard RainFARM.
Second, we have considered a more realistic application in which E-OBS gridded precipitation data are downscaled over Switzerland at about 1 km spatial resolution and then compared to in situ observations from the MeteoSwiss network. In this case a preliminary evaluation of the E-OBS dataset has revealed important discrepancies compared to the observations, especially in high-precipitation grid points. In fact, in high-precipitation grid points the frequency of precipitation events is increasingly underestimated from a few millimeters per day up to 100 mm day−1, and events with precipitation above about 140 mm day−1 are simply not represented. In this context, both downscaling methods remarkably improve the agreement with the observations, also reproducing extreme precipitation values, so that the downscaled precipitation covers the full observed range, even though the downscaling does not fully compensate for the original E-OBS underestimation in high-precipitation grid points. In low-precipitation grid points, in particular, only the modified downscaling method allows the reconstruction of the observed PDF, and the modified RainFARM outperforms the standard RainFARM by leading to a better agreement of the amplitude distributions with the observations.
The two experiments discussed in this paper, in a perfect-model and a realistic-case framework, provide complementary information regarding the skills of the presented downscaling method, and they clearly show what we can expect (or not) when it is used in practical applications. In the perfect-model experiment there is exact conservation of the water flux between the coarse-scale dataset to downscale and the validation dataset, as the former has been derived by aggregation of the latter. This implies that the error associated with the validation dataset is zero and the degree of agreement between the downscaled and the validation data is an exact measure of the skills of the downscaling method. This experiment shows the very good performance of the modified RainFARM in adjusting the PDF of the downscaled data in such a way that they are not distinguishable from the reference PDF. Conversely, in the real-case experiment, flux conservation between the coarse-scale and validation datasets is not to be expected owing to their very different nature and characteristics. In fact, E-OBS (version 17) is a 25 km resolution dataset generated by interpolating measurements from a subset of all the Swiss surface stations, precisely 36 stations active in the considered period 1981–2010, with elevation ranging between 200 and 2500 m a.s.l. (source E-OBS documentation, http://www.ecad.eu/, last access: 28 August 2018). On the contrary, the validation dataset consists of 59 stations representative of elevations up to 3302 m a.s.l., 50 % of these stations lying above 600 m, with 25 % in the range of 600–1600 m, 10 % in the range of 1600–2000 m, 10 % in the range of 2000–2500 m, and 5 % above 2500 m. Clearly, the reduced number of underlying stations in E-OBS, together with their bias toward low elevations, contributes to uncertainties and discrepancies with respect to the station data, especially in areas prone to high precipitation (Fig. 4c).
Among the other sources of errors affecting the downscaled fields in the real-case experiments, the fine-scale climatology used to derive the correction factors (weights) has to be considered. In our case the MeteoSwiss RnormM climatology is probably affected by sources of uncertainty similar to the E-OBS dataset, but to a smaller extent owing to the higher density of stations included and their better altitude representativeness. In detail, RnormM is also derived by interpolation of data from surface stations whose average distance is 15–20 km. The interpolation tends to smooth out peaks and troughs in a surface, so we can expect that the interpolation product provides lower precipitation extremes than the original point measurements. As a consequence, the resulting RnormM climatology, as well as the maps of weights, are smoothed out with respect to the single surface station climatology, with evident impacts on the agreement between the downscaled fields and the validation (surface station) reference.
Despite the fact that experiments in real cases are generally characterized by errors and/or biases in the coarse-scale datasets to downscale, in the climatology used to derive the weights, and in the validation datasets, we show that the RainFARM downscaling method is still effective in improving the agreement between the amplitude distributions of the observed precipitation and of the downscaled fields.
The modified RainFARM algorithm has been shown to also provide robust results in the absence of an accurate regional fine-scale precipitation climatology tailored to the area of study. In fact, a fine-scale global monthly precipitation product such as WorldClim (at nominally 1 km spatial resolution, but obtained from a limited number of measurement stations) provides sufficient information for the weight calculation, so that the outputs of the downscaling are close to those derived in the optimal case using the regional and more accurate MeteoSwiss climatology. Alternatively, we have shown that deriving the weights from the climatology of a high-resolution regional climate model simulation also provides good results. This suggests that the modified RainFARM method could also be applied in regions of the globe where only limited climatological information is available, such as that provided by the WorldClim dataset or by a regional climate model simulation. Since RainFARM does not require providing or tuning additional parameters, it can then be applied directly to the coarse-scale dataset to be downscaled.
In the simple method that we presented, downscaled precipitation data in each grid point and for each month have been corrected by a constant factor each time precipitation occurs. This is of course an approximation that only modifies the amplitude of precipitation events at that point and not their frequency. The same climatological average precipitation at a point could be obtained by modifying either the event frequencies or their intensities. Nonetheless, we have seen that the RainFARM method allows the reconstruction of plausible fine-scale precipitation values with a frequency of occurrence in agreement with observations, when statistics over 30 years are considered. This makes the new RainFARM approach suitable for downscaling climate model data when we need to describe the statistics of precipitation over long (climatic) timescales and we are not interested in temporal correlations with the observed fields at finer temporal scales. Further work should be carried out to assess whether the new RainFARM algorithm provides added value when the temporal correlation between the downscaled data and the observations is of importance.
The proposed method has been developed and tested in a mountain environment but it could also be used, in principle, in other areas of the globe. In such an extended framework, the more general added value brought by the new RainFARM algorithm is the reproduction of an observed precipitation pattern, no matter if it originates from topography or other surface heterogeneities. When the reference fine-scale precipitation climatology is almost constant over a portion of the domain, or zero as for example in very arid or desert areas in dry months, the resulting weights are all 1 and the modified RainFARM method provides results identical to those of the standard RainFARM method.
## 6 Conclusions
Stochastic precipitation downscaling methods generally do not take into account local precipitation patterns at spatial scales below those explicitly resolved in the coarse-scale dataset. We propose a simple technique that can be applied to stochastic precipitation downscaling methods to improve the representation of the fine-scale daily precipitation in complex and spatially heterogeneous regions, such as mountain areas. The application of this method requires exclusively fine-scale information on the precipitation climatology from an external reference dataset.
This technique, here applied to the RainFARM stochastic downscaling algorithm, adjusts the fine-scale daily precipitation values calculated by the standard RainFARM method by using monthly sub-grid weights derived from a reference monthly fine-scale climatology, before imposing the conservation of the average water flux at the coarse grid scale between the coarse-scale dataset and the downscaled dataset. In our perfect-model experiment, compared to the standard RainFARM, the modified RainFARM reduced the root-mean-square error on the long-term precipitation climatology from 0.86 to 0.27 mm day−1, thus introducing clear improvements in the downscaling performance. The modified RainFARM has been shown to assign precipitation values to the small-scale grid in such a way that, when grid points with low or high average climatological precipitation are considered separately, the distributions (PDFs) of the downscaled precipitation are closer to the PDFs of the corresponding reference dataset. Given its ability to reconstruct the overall precipitation distribution, the modified RainFARM downscaling method can be employed in a number of applications, including the analysis of extreme events and their statistics and of hydrometeorological hazards.
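The weight-and-renormalize step described above can be sketched in a few lines. The following NumPy version is an illustrative reconstruction: function and variable names are ours, not the RainFARM.jl implementation, and the exact form of the published weights in Eq. (5) may differ.

```python
import numpy as np

def apply_weights(fine, weights, factor):
    """Multiply a downscaled field by climatological weights, then rescale
    each coarse-scale cell so that its mean precipitation flux is conserved.

    fine    : (Nf, Nf) stochastically downscaled daily precipitation field
    weights : (Nf, Nf) monthly corrective weights from a fine-scale climatology
    factor  : number of fine-scale cells per coarse-scale cell side (Nf // Nc)
    """
    nc = fine.shape[0] // factor
    # mean over each factor x factor block (coarse-cell average)
    block_mean = lambda f: f.reshape(nc, factor, nc, factor).mean(axis=(1, 3))

    corrected = fine * weights
    target = block_mean(fine)          # coarse-cell means to be conserved
    current = block_mean(corrected)    # coarse-cell means after weighting
    safe = np.where(current > 0, current, 1.0)
    ratio = np.where(current > 0, target / safe, 1.0)
    # broadcast each coarse-cell ratio back to the fine grid
    ratio_fine = np.repeat(np.repeat(ratio, factor, axis=0), factor, axis=1)
    return corrected * ratio_fine
```

Because the ratio is constant within each coarse cell, the coarse-scale mean flux of the output equals that of the input field by construction, while the sub-grid pattern follows the climatological weights.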
Like the standard RainFARM, the new RainFARM downscaling method is not intended to correct the biases affecting the coarse-scale dataset. Prior to applying the downscaling, it is recommended to evaluate the degree of agreement between the coarse-scale and possible verification datasets. If the coarse-scale dataset presents clear deficiencies or its long-term climatology is substantially different from the observed climatology, bias adjustment of the coarse-scale dataset could be applied before downscaling.
The proposed method can be useful in particular for downscaling climate model data and for any application for which a correlation over fine temporal scales between the downscaled and the observed data is not required. Further work should investigate whether this method, employing fixed correction factors, also improves the downscaling performance when the spatial structures of precipitation have to be reproduced at fine (daily or sub-daily) temporal scales, as in downscaling applications at weather timescales.
In the absence of a high-quality fine-scale observed precipitation climatology at regional scales, global datasets such as WorldClim, or a high-resolution regional climate simulation, can also be employed successfully; in our study area they provided good performance.
In conclusion, in spite of its simplicity, the proposed method is found to introduce realistic small-scale variability in the downscaled precipitation fields using only a fine-scale monthly precipitation climatology, and it could be applied in different regions of the world, not only mountain areas, to provide a more realistic representation of the distribution and the climatology of precipitation.
## Code and data availability
RainFARM is distributed as an open-source library and a command line interface, written in the Julia language, freely available at https://github.com/jhardenberg/RainFARM.jl (RainFARM2018). All the datasets used in this study are publicly accessible and were downloaded from the following websites: WRF: http://nextdataproject.hpc.cineca.it/thredds/catalog/NextData/eurocdx/h1e4/catalog.html (WRF2015); E-OBS: http://www.ecad.eu (E-OBS2018); RnormM monthly precipitation climatology: https://www.meteoswiss.admin.ch/home/climate/swiss-climate-in-detail/raeumliche-klimaanalysen.html (RnormM2016); MeteoSwiss daily precipitation from the surface stations, accessible upon registration: https://gate.meteoswiss.ch/idaweb/login.do; WorldClim monthly precipitation climatology: http://www.worldclim.org.
## Author contributions
ST wrote the paper with support from EP and JvH. All authors planned and analyzed the experiments. ST performed the simulations and prepared all figures. JvH wrote the numerical code. JvH conceived the original method and later developed it with support from ST and EP.
## Competing interests
The authors declare that they have no conflict of interest.
## Acknowledgements
This work received funding from the European Union's Horizon 2020 research and innovation program under grant agreement no. 641762 (ECOPOTENTIAL) and from the Italian Project of Interest NextData of the Italian Ministry for Education, University and Research. We acknowledge the contribution of the C3S 34a Lot 2 Copernicus Climate Change Service project (C3S-MAGIC), funded by the European Union, to the development of software tools used in this work. Part of this work was performed in the framework of the MEDSCOPE (MEDiterranean Services Chain based On climate PrEdictions) ERA4CS project (grant agreement no. 690462) funded by the European Union. We acknowledge the E-OBS dataset from the EU-FP6 project ENSEMBLES and the data providers in the ECA&D project (http://www.ecad.eu, last access: 1 June 2018).
Edited by: Luca Ferraris
Reviewed by: Mario Rohrer and one anonymous referee
## References
Badas, M. G., Deidda, R., and Piga, E.: Orographic influences in rainfall downscaling, Adv. Geosci., 2, 285–292, https://doi.org/10.5194/adgeo-2-285-2005, 2005. a, b
Badas, M. G., Deidda, R., and Piga, E.: Modulation of homogeneous space-time rainfall cascades to account for orographic influences, Nat. Hazards Earth Syst. Sci., 6, 427–437, https://doi.org/10.5194/nhess-6-427-2006, 2006. a, b
Bedia, J., Herrera, S., and Gutiérrez, J. M.: Dangers of using global bioclimatic datasets for ecological niche modeling. Limitations for future climate projections, Global Planet. Change, 107, 1–12, 2013. a
Begert, M., Frei, C., and Abbt, M.: Einführung der Normperiode 1981–2010, Tech. Rep., Fachbericht MeteoSchweiz, 245, MeteoSchweiz, Zurich, 50 pp., 2013. a, b
Bordoy, R. and Burlando, P.: Stochastic downscaling of precipitation to high-resolution scenarios in orographically complex regions: 1. Model evaluation, Water Resour. Res., 50, 540–561, 2014. a, b
Charles, S. P., Bates, B. C., and Hughes, J. P.: A spatiotemporal model for downscaling precipitation occurrence and amounts, J. Geophys. Res.-Atmos., 104, 31657–31669, 1999. a
Chiew, F., Kirono, D., Kent, D., Frost, A. J., Charles, S., Timbal, B., Nguyen, K., and Fu, G.: Comparison of runoff modelled using rainfall from different downscaling methods for historical and future climates, J. Hydrol., 387, 10–23, https://doi.org/10.1016/j.jhydrol.2010.03.025, 2010. a
Dee, D. P., Uppala, S. M., Simmons, A. J., Berrisford, P., Poli, P., Kobayashi, S., Andrae, U., Balmaseda, M. A., Balsamo, G., Bauer, P., Bechtold, P., Beljaars, A. C. M., van de Berg, L., Bidlot, J., Bormann, N., Delsol, C., Dragani, R., Fuentes, M., Geer, A. J., Haimberger, L., Healy, S. B., Hersbach, H., Hólm, E. V., Isaksen, L., Kållberg, P., Köhler, M., Matricardi, M., McNally, A. P., Monge-Sanz, B. M., Morcrette, J.-J., Park, B.-K., Peubey, C., de Rosnay, P., Tavolato, C., Thépaut, J.-N., and Vitart, F.: The ERA-Interim reanalysis: configuration and performance of the data assimilation system, Q. J. Roy. Meteorol. Soc., 137, 553–597, https://doi.org/10.1002/qj.828, 2011. a, b
Deidda, R.: Multifractal analysis and simulation of rainfall fields in space, Phys. Chem. Earth Pt. B, 24, 73–78, 1999. a
Deidda, R.: Rainfall downscaling in a space-time multifractal framework, Water Resour. Res., 36, 1779–1794, 2000. a
D'Onofrio, D., Palazzi, E., von Hardenberg, J., Provenzale, A., and Calmanti, S.: Stochastic rainfall downscaling of climate models, J. Hydrometeorol., 15, 830–843, 2014. a, b, c
Ferraris, L., Gabellani, S., Rebora, N., and Provenzale, A.: A comparison of stochastic models for spatial rainfall downscaling, Water Resour. Res., 39, 1368, https://doi.org/10.1029/2003WR002504, 2003. a, b
Harris, D., Menabde, M., Seed, A., and Austin, G.: Multifractal characterization of rain fields with a strong orographic influence, J. Geophys. Res., 101, 26405–26414, 1996. a, b
Haylock, M., Hofstra, N., Klein Tank, A., Klok, E., Jones, P., and New, M.: A European daily high-resolution gridded data set of surface temperature and precipitation for 1950–2006, J. Geophys. Res.-Atmos., 113, D20119, https://doi.org/10.1029/2008JD010201, 2008. a, b
Hijmans, R. J., Cameron, S. E., Parra, J. L., Jones, P. G., and Jarvis, A.: Very high resolution interpolated climate surfaces for global land areas, Int. J. Climatol., 25, 1965–1978, 2005. a, b, c
Hijmans, R. J., Cameron, S. E., Parra, J. L., Jones, P. G., and Jarvis, A.: WorldClim: Global weather stations, https://databasin.org/datasets/15a31dec689b4c958ee491ff30fcce75 (last access: 27 June 2016), 2010. a
Jothityangkoon, C., Sivapalan, M., and Viney, N. R.: Tests of a space-time model of daily rainfall in southwest Australia based on nonhomogeneous random cascades, Water Resour. Res., 36, 267–284, 2000. a
Kotlarski, S., Keuler, K., Christensen, O. B., Colette, A., Déqué, M., Gobiet, A., Goergen, K., Jacob, D., Lüthi, D., van Meijgaard, E., Nikulin, G., Schär, C., Teichmann, C., Vautard, R., Warrach-Sagi, K., and Wulfmeyer, V.: Regional climate modeling on European scales: a joint standard evaluation of the EURO-CORDEX RCM ensemble, Geosci. Model Dev., 7, 1297–1333, https://doi.org/10.5194/gmd-7-1297-2014, 2014. a, b
Lovejoy, S. and Mandelbrot, B.: Fractal properties of rain and a fractal model, Tellus A, 37, 209–232, 1985. a
Lovejoy, S. and Schertzer, D.: Multifractals, cloud radiances and rain, J. Hydrol., 322, 59–88, 2006. a
Maraun, D., Wetterhall, F., Ireson, A., Chandler, R., Kendon, E., Widmann, M., Brienen, S., Rust, H., Sauter, T., Themeßl, M., and Others: Precipitation downscaling under climate change. Recent developments to bridge the gap between dynamical models and the end user, Rev. Geophys., 48, RG3003, https://doi.org/10.1029/2009RG000314, 2010. a, b
Mehrotra, R. and Sharma, A.: A nonparametric stochastic downscaling framework for daily rainfall at multiple locations, J. Geophys. Res.-Atmos., 111, D15101, https://doi.org/10.1029/2005JD006637, 2006. a
MeteoSwiss: Dataset: Daily precipitation from surface stations, available at: https://gate.meteoswiss.ch/idaweb/login.do, last access: 26 August 2016. a
Palazzi, E., von Hardenberg, J., Terzago, S., and Provenzale, A.: Precipitation in the Karakoram-Himalaya: a CMIP5 view, Clim. Dynam., 45, 21–45, 2015. a
Pathirana, A. and Herath, S.: Multifractal modelling and simulation of rain fields exhibiting spatial heterogeneity, Hydrol. Earth Syst. Sci., 6, 695–708, https://doi.org/10.5194/hess-6-695-2002, 2002. a
Peterson, A. and Nakazawa, Y.: Environmental data sets matter in ecological niche modelling: an example with Solenopsis invicta and Solenopsis richteri, Global Ecol. Biogeogr., 17, 135–144, 2008. a
Pieri, A. B., von Hardenberg, J., Parodi, A., and Provenzale, A.: Sensitivity of Precipitation Statistics to Resolution, Microphysics, and Convective Parameterization: A Case Study with the High-Resolution WRF Climate Model over Europe, J. Hydrometeorol., 16, 1857–1872, 2015. a, b, c, d, e
Purdy, J. C., Harris, D., Austin, G. L., Seed, A. W., and Gray, W.: A case study of orographic rainfall processes incorporating multiscaling characterization techniques, J. Geophys. Res., 106, 7837–7845, 2001. a, b
RainFARM: Code, available at: https://github.com/jhardenberg/RainFARM.jl, https://doi.org/10.5281/zenodo.1240477, 2018. a
Rebora, N., Ferraris, L., von Hardenberg, J., and Provenzale, A.: The RainFARM: Rainfall Downscaling by a Filtered AutoRegressive Model, J. Hydrometeorol., 7, 724–738, 2006. a, b, c
RnormM: Dataset: Gridded data of precipitation normals, available at: https://www.meteoswiss.admin.ch/home/climate/swiss-climate-in-detail/raeumliche-klimaanalysen.html, last access: 6 September 2016. a
Rodriguez-Iturbe, I., Cox, D. R., and Isham, V.: Some Models for Rainfall Based on Stochastic Point Processes, P. Roy. Soc. Lond. A, 410, 269–288, https://doi.org/10.1098/rspa.1987.0039, 1987. a
Rodriguez-Iturbe, I., Cox, D., and Isham, V.: A point process model for rainfall: further developments, P. Roy. Soc. Lond. A, 417, 283–298, 1988. a
Roe, G. H.: Orographic precipitation, Ann. Rev. Earth Planet. Sci., 33, 645–671, 2005. a
Smith, R. B.: Progress on the theory of orographic precipitation, in: Tectonics, Climate and Landscape Evolution, chap. 1, Special paper 398, edited by: Willet, S. D., Hovius, N., Brandon, M. T., and Fisher, D. M., Geological Society of America, Boulder, Colorado, 1–16, 2006. a
Townsend Peterson, A., Papeş, M., and Eaton, M.: Transferability and model evaluation in ecological niche modeling: a comparison of GARP and Maxent, Ecography, 30, 550–560, 2007. a
Viterbo, F., von Hardenberg, J., Provenzale, A., Molini, L., Parodi, A., Sy, O. O., and Tanelli, S.: High-Resolution Simulations of the 2010 Pakistan Flood Event: Sensitivity to Parameterizations and Initialization Time, J. Hydrometeorol., 17, 1147–1167, https://doi.org/10.1175/JHM-D-15-0098.1, 2016. a
Vrac, M. and Naveau, P.: Stochastic downscaling of precipitation: From dry events to heavy rainfalls, Water Resour. Res., 43, W07402, https://doi.org/10.1029/2006WR005308, 2007. a
Waltari, E., Hijmans, R. J., Peterson, A. T., Nyári, Á. S., Perkins, S. L., and Guralnick, R. P.: Locating Pleistocene refugia: comparing phylogeographic and ecological niche model predictions, PLoS one, 2, e563, https://doi.org/10.1371/journal.pone.0000563, 2007. a
Warren, D. L. and Seifert, S. N.: Ecological niche modeling in Maxent: the importance of model complexity and the performance of model selection criteria, Ecol. Appl., 21, 335–342, 2011. a
Wilks, D. S.: Multisite generalization of a daily stochastic precipitation generation model, J. Hydrol., 210, 178–191, 1998. a
Wilks, D. S.: Multisite downscaling of daily precipitation with a stochastic weather generator, Clim. Res., 11, 125–136, 1999. a
WorldClim: Dataset: WorldClim 1.4: Current conditions (∼1960–1990), available at: http://www.worldclim.org/ (last access: 31 August 2016), 2015. a
WRF: Dataset: available at: http://nextdataproject.hpc.cineca.it/thredds/catalog/NextData/eurocdx/h1e4/catalog.html (last access: 31 January 2017), 2015. a
# Network policies
## Introduction
• Namespaces help us to organize resources
• Namespaces do not provide isolation
• By default, every pod can contact every other pod
• By default, every service accepts traffic from anyone
• If we want this to be different, we need network policies
## What's a network policy?
A network policy is defined by the following things.
• A pod selector indicating which pods it applies to
e.g.: "all pods in namespace blue with the label zone=internal"
• A list of ingress rules indicating which inbound traffic is allowed
e.g.: "TCP connections to ports 8000 and 8080 coming from pods with label zone=dmz, and from the external subnet 4.42.6.0/24, except 4.42.6.5"
• A list of egress rules indicating which outbound traffic is allowed
A network policy can provide ingress rules, egress rules, or both.
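Putting the three elements above together, a policy combining ingress and egress rules could look like the following sketch (names and labels are illustrative, echoing the examples above, and not taken from a real cluster):

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: internal-zone-policy       # illustrative name
  namespace: blue
spec:
  podSelector:
    matchLabels:
      zone: internal               # which pods this policy applies to
  ingress:
  - from:
    - podSelector:
        matchLabels:
          zone: dmz                # inbound allowed from pods labeled zone=dmz ...
    - ipBlock:
        cidr: 4.42.6.0/24          # ... and from this external subnet ...
        except:
        - 4.42.6.5/32              # ... minus this one address
    ports:
    - protocol: TCP
      port: 8000
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - podSelector: {}              # outbound allowed to any pod in the same namespace
```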
## How do network policies apply?
• A pod can be "selected" by any number of network policies
• If a pod isn't selected by any network policy, then its traffic is unrestricted
(In other words: in the absence of network policies, all traffic is allowed)
• If a pod is selected by at least one network policy, then all traffic is blocked ...
... unless it is explicitly allowed by one of these network policies
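As a consequence of this logic, a common baseline (shown here as a generic sketch, not one of this section's exercise files) is a "default deny" policy: an empty pod selector selects every pod in the namespace, and since no ingress rules are listed, all inbound traffic to those pods is blocked:

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}      # empty selector: applies to all pods in the namespace
  policyTypes:
  - Ingress            # no ingress rules given, so all inbound traffic is denied
```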
## Traffic filtering is flow-oriented
• Network policies deal with connections, not individual packets
• Example: to allow HTTP (80/tcp) connections to pod A, you only need an ingress rule
(You do not need a matching egress rule to allow response traffic to go through)
• This also applies for UDP traffic
(Allowing DNS traffic can be done with a single rule)
• Network policy implementations use stateful connection tracking
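For example, thanks to stateful connection tracking, a single egress rule is enough to let selected pods query DNS; the UDP replies are allowed back automatically. This is a generic sketch, assuming the network plugin supports egress rules:

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-dns-egress
spec:
  podSelector: {}      # applies to all pods in the namespace
  policyTypes:
  - Egress
  egress:
  - ports:
    - protocol: UDP
      port: 53         # DNS queries; responses are tracked, no ingress rule needed
```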
## Pod-to-pod traffic
• Connections from pod A to pod B have to be allowed by both pods:
• pod A has to be unrestricted, or allow the connection as an egress rule
• pod B has to be unrestricted, or allow the connection as an ingress rule
• As a consequence: if a network policy restricts traffic going from/to a pod,
the restriction cannot be overridden by a network policy selecting another pod
• This prevents an entity managing network policies in namespace A (but without permission to do so in namespace B) from adding network policies giving them access to namespace B
## The rationale for network policies
• In network security, it is generally considered better to "deny all, then allow selectively"
(The other approach, "allow all, then block selectively" makes it too easy to leave holes)
• As soon as one network policy selects a pod, the pod enters this "deny all" logic
• Further network policies can open additional access
• Good network policies should be scoped as precisely as possible
• In particular: make sure that the selector is not too broad
(Otherwise, you end up affecting pods that were otherwise well secured)
## Our first network policy
This is our game plan:
• run a web server in a pod
• create a network policy to block all access to the web server
• create another network policy to allow access only from specific pods
## Running our test web server
### Exercise
• Let's use the nginx image:
kubectl create deployment testweb --image=nginx
• Find out the IP address of the pod with one of these two commands:
kubectl get pods -o wide -l app=testweb
IP=$(kubectl get pods -l app=testweb -o json | jq -r .items[0].status.podIP)

• Check that we can connect to the server:

curl $IP
The curl command should show us the "Welcome to nginx!" page.
## Adding a very restrictive network policy
• The policy will select pods with the label app=testweb
• It will specify an empty list of ingress rules (matching nothing)
### Exercise
• Apply the policy in this YAML file:
kubectl apply -f ~/container.training/k8s/netpol-deny-all-for-testweb.yaml
• Check if we can still access the server:
curl $IP
The curl command should now time out.
## Looking at the network policy
This is the file that we applied:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all-for-testweb
spec:
  podSelector:
    matchLabels:
      app: testweb
  ingress: []
## Allowing connections only from specific pods
• We want to allow traffic from pods with the label run=testcurl
• Reminder: this label is automatically applied when we do kubectl run testcurl ...
### Exercise
• Apply another policy:
kubectl apply -f ~/container.training/k8s/netpol-allow-testcurl-for-testweb.yaml
## Looking at the network policy
This is the second file that we applied:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-testcurl-for-testweb
spec:
  podSelector:
    matchLabels:
      app: testweb
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: testcurl
## Testing the network policy
• Let's create pods with, and without, the required label
### Exercise
• Try to connect to testweb from a pod with the run=testcurl label:
kubectl run testcurl --rm -i --image=centos -- curl -m3 $IP
• Try to connect to testweb with a different label:
kubectl run testkurl --rm -i --image=centos -- curl -m3 $IP
The first command will work (and show the "Welcome to nginx!" page).
The second command will fail and time out after 3 seconds.
(The timeout is obtained with the -m3 option.)
## An important warning
• Some network plugins only have partial support for network policies
• For instance, Weave added support for egress rules in version 2.4 (released in July 2018)
• But only recently added support for ipBlock in version 2.5 (released in Nov 2018)
• Unsupported features might be silently ignored
(Making you believe that you are secure, when you're not)
## Network policies, pods, and services
• Network policies apply to pods
• A service can select multiple pods
(And load balance traffic across them)
• It is possible that we can connect to some pods, but not some others
(Because of how network policies have been defined for these pods)
• In that case, connections to the service will randomly pass or fail
(Depending on whether the connection was sent to a pod that we have access to or not)
## Network policies and namespaces
• A good strategy is to isolate a namespace, so that:
• all the pods in the namespace can communicate together
• other namespaces cannot access the pods
• external access has to be enabled explicitly
• Let's see what this would look like for the DockerCoins app!
## Network policies for DockerCoins
• We are going to apply two policies
• The first policy will prevent traffic from other namespaces
• The second policy will allow traffic to the webui pods
• That's all we need for that app!
## Blocking traffic from other namespaces
This policy selects all pods in the current namespace.
It allows traffic only from pods in the current namespace.
(An empty podSelector means "all pods".)
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-from-other-namespaces
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
## Allowing traffic to webui pods
This policy selects all pods with label app=webui.
It allows traffic from any source.
(An empty from field means "all sources".)
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-webui
spec:
  podSelector:
    matchLabels:
      app: webui
  ingress:
  - from: []
## Applying both network policies
• Both network policies are declared in the file k8s/netpol-dockercoins.yaml
### Exercise
• Apply the network policies:
kubectl apply -f ~/container.training/k8s/netpol-dockercoins.yaml
• Check that we can still access the web UI from outside
(and that the app is still working correctly!)
• Check that we can't connect anymore to rng or hasher through their ClusterIP
Note: using kubectl proxy or kubectl port-forward allows us to connect regardless of existing network policies. This allows us to debug and troubleshoot easily, without having to poke holes in our firewall.
## Cleaning up our network policies
• The network policies that we have installed block all traffic to the default namespace
• We should remove them, otherwise further exercises will fail!
### Exercise
• Remove all network policies:
kubectl delete networkpolicies --all
## Protecting the control plane
(etcd, API server, etc.)
• At first, it seems like a good idea ...
• But it shouldn't be necessary:
• not all network plugins support network policies
• the control plane is secured by other methods (mutual TLS, mostly)
• the code running in our pods can reasonably expect to contact the API
(and it can do so safely thanks to the API permission model)
• If we block access to the control plane, we might disrupt legitimate code
• ... Without necessarily improving security
My work is in the Public Domain for all to share freely.
Understand how math naturally expresses the main concepts of my philosophy. Express my philosophy's concepts in terms of mathematics, and thus better define my concepts and understand which mathematical concepts are most central.
Challenges
• How can contradiction be modeled?
• Express God's Ten Commandments as four geometries and six transformations between them.
• Express the four geometries in terms of symmetric functions.
• How do the four classical Lie groups/algebras distinguish between no perspective, perspective, perspective on perspective, and perspective on perspective on perspective, and thus define the four geometries?
• Show how the gradation of six methods of proof organizes the prayer "Our Father" and the language of argumentation.
• How are the various kinds of opposite important in driving God's dance?
• What do the various kinds of duality yield, applied to God? For example, a dual of God (the Center) is the Everything (the Totality).
• Representations are criteria, filters. Can open sets (closed sets) be thought of as filters? A topology of filters?
Some practical projects:
• Learn about the combinatorics and geometry of finite fields and interpret {$F_{1^n}$}.
• Understand infinity. Understand how finite fields and the field with one element express infinity.
• Understand how zero and infinity get differentiated, how their symmetry gets variously broken.
• Express infinity in terms of geometries.
Concepts to express God
• Equation of life
• Perspectives
• Divisions of everything into perspectives
• The nullsome
• The onesome, twosome, threesome, foursome, ...
• The eight-cycle of divisions and the three operations.
• Representations
• Topologies
• Three languages: argumentation, verbalization, narration
• The eightfold way
• Walks on trees
• Relations between perspectives: dualities
• Chains of perspectives
• Relations between chains of perspectives: Ten (6+4) commandments
• Trinities of perspectives: God's trinity and the three-cycle
God
I am ever trying to imagine everything from God's point of view. God can be understood as contradiction.
God is the Center of polytopes (such as the simplex). Everything is the dual of God, the Totality consisting of all of the vertices and all of the simplexes.
God may be given by the trivial tensor T00. It is a zero-dimensional array, thus a scalar. An array is a perspective, and so it is having no perspective. A scalar is "spirit". Thus it is spirit with no perspective.
God's trinity
The field with one element has one element, which can be understood as 0, ∞ and 1. One is a lens that relates zero and infinity: 0 makes way for ∞, and 1 is their point of balance.
Try to use universal hyperbolic geometry to model going beyond oneself into oneself (where the self is the circle). Relate God's dance to {0, 1, ∞}, the anharmonic group, and Mobius transformations. Note that the anharmonic group is based on composition of functions.
Four combinations of God and Everything generate four infinite families of polytopes and associated geometries and metalogics. I think these are the four representations of God (true, direct, constant, significant):
• The simplexes An have a Center and a Totality. They are the basis for affine geometry, where paths are preserved.
• The cross-polytopes Cn have a Center but no Totality. They are the basis for projective geometry, where lines are preserved.
• The cubes Bn have no Center but have a Totality. They are the basis for conformal geometry, where angles are preserved.
• The coordinate systems Dn have no Center and no Totality. They are the basis for symplectic geometry, where areas are preserved.
Equation of life
The family Dn seems to model the equation of eternal life, namely, that God doesn't have to be good, life doesn't have to be fair.
Spirit and structure are related by duality, the operation +2. A set is the essence of the spirit, the free monoid, that it generates.
Ideas
• God goes beyond himself: 3 dimensions -> 2 dimensions -> (flip to dual) 1 dimension -> 0 dimensions (point: good heart).
Perspective
Perspectives are defined structurally by algebra and dynamically by analysis, and they come together in the four geometries.
• Scalars of fields (or division rings) define perspectives, their freedom. The complex numbers offer a dual perspective as opposed to the real numbers' single perspective.
• Category theory defines perspectives and their composition.
• Perspectives may be logical quantifiers.
Divisions of everything
Divisions of everything into N perspectives are given by finite exact sequences with N nonzero terms. String diagrams portray such exact sequences with divisions of the plane by way of objects.
• Twosome: objects and morphisms.
• Threesome: the Jacobi identity. The three-cycle of the quaternions.
• Foursome: four levels of knowledge (whether, what, how, why). The Yoneda lemma. If we think of a functor F as going from a category C of our mental notions and associations between them to a category D of linguistic expressions and continuations between them, then this particular application may also serve as a universally relevant interpretation and general foundation of category theory. It may indeed be meaningful to speak in category theory of a duality between paradigmatic application and universally relevant interpretation. Understand the jump hierarchy and the Yates Index Set theorem (the triple jump).
• Four geometries: affine (no perspective), projective (perspective), conformal (perspective on perspective), symplectic (perspective on perspective on perspective). Four classical families of Lie groups/algebras that define the four geometries.
• Sixsome: see "An Introduction to the K-theory of Banach Algebras".
The eight-cycle of divisions
The eight divisions of everything, and the three operations +1, +2, +3, which act on them cyclically, should be expressible in terms of Bott periodicity and the clock shift of Clifford algebras.
Consider a function from one algebraic variety or scheme to another. Then we can define accordingly four functors from one category of sheaves to another such category. These functors are defined to make sense across a family of bases, that is, across base changes. Upper and lower star functors are like everything and nothing. Upper and lower shriek functors are like "fibers" within everything, thus: anything and something. The fiber may be identified with the category.
Tensor and Hom are defined within the category of sheaves (thus within the input and also within the output). Tensor can be thought of as decreasing slack by filling it out. Hom can be thought of as increasing slack by creating a multiplicity of functions. The four functors relate Hom and Tensor in the input category and in the output category.
The six operations can be thought of as naturally defined within a higher order category of correspondences. The six operations can also be thought of as a generalization which grounds Poincare duality and its generalization, Serre duality. Note that these two seem related to the snake lemma.
Scopes
Do scopes express regularity of choice? Nothing. A point is nothing (as regards choice - its fiber is nothing).
Topologies
Systems of constraints that may be thought of as defining worlds. Topology is the study of topologies.
Languages of argumentation, verbalization and narration
The gradation of six methods of proof should organize the prayer "Our Father" and the language of argumentation.
Eightfold Way
The Eightfold Way relates a left exact sequence and a right exact sequence.
Homology and cohomology
The snake lemma resembles the eightfold way.
However, the exact sequence that it defines goes 4->5->6->1->2->3, thus counter to the expected order. Try to understand Lawvere's eight-fold Hegelian taco.
Walks on trees
• Julia sets.
• The tree of choices (the regularity of choice) given by the three operations +1, +2, +3.
• Walks from A to B in category theory are morphisms, and they get mapped to the morphisms from A to B. Relate this to walks on trees.
• Parentheses establish a tree structure. What are walks on these trees? How do they relate to associativity and to walks in categories from one object to another?
Walks on trees are perhaps important as they combine both unification, as the tree has a root, and completion, as given by the walk. In college, I asked God what kind of mathematics might be relevant to knowing everything, and I understood him to say that walks on trees where the trees are made of the elements of the threesome.
Relations between perspectives
The kinds of duality express the ways that two perspectives can be related, thus the operations on perspectives. The kinds of duality express the kinds of opposites. Automata theory defines a hierarchy of equations.
Chains of perspectives
Four geometries:
• Affine geometry models no perspective.
• Projective geometry models perspective.
• Conformal geometry models perspective on perspective.
• Symplectic geometry models perspective on perspective on perspective.
Relations between chains of perspectives
Six pairs of four levels:
• six specifications between the four geometries
• six ways of thinking about variables
• six ways of thinking about multiplication
• six visualizations (restructurings in terms of sequences, hierarchies and networks)
• six qualities of signs
• six set theory axioms
• six bases of symmetric functions
• six ways of relating two mental sheets, a logic and a metalogic
The Zermelo-Fraenkel axioms of set theory are structured by 4+6.
Restructuring
The calculus world is the "exponential" of the discrete world.
One of the reasons that Lie groups and Lie algebras are important is because they link together the "calculus world" (Lie groups are "differentiable manifolds") and the "discrete world" (Lie algebras are based on "root systems" that are geometric reflections). The structure of a set is reminiscent of a tree in that there can be sets of sets. It's important that there not be cycles, irregularity.
This page was last modified on August 03, 2021, at 03:44 PM
# Decibels
#### Jimit Kavathia
Joined Nov 30, 2014
9
Why exactly do we use 600 ohms or 60 ohms as explained in the article? I mean, how do we decide the load and the power across it?
Practically speaking, how is the audio that we listen to calibrated?
#### kubeek
Joined Sep 20, 2005
5,724
Which article?
#### JamesBond007
Joined Oct 25, 2015
24
Generally, audio speakers are 2 ohm, 4 ohm or 8 ohm.
With PA - Public Address - systems, transformers are used on a high-voltage line, typically 70V, to run each speaker. Think show day.
These are usually 600 ohm.
That is, impedance matching a 600 ohm line to an 8 ohm speaker.
If you are doing a UNI course, you are expected to figure this out.
Luck.
#### dl324
Joined Mar 30, 2015
10,751
Why exactly do we use 600 ohms or 60 ohms as explained in the article? I mean, how do we decide the load and the power across it?
Practically speaking, how is the audio that we listen to calibrated?
Audio typically uses dBm to describe power, which is usually referenced to 600Ω.
Your article reference didn't make it...
#### Jimit Kavathia
Joined Nov 30, 2014
9
By the term 'Article' , I meant this website's page explaining decibels.
#### Papabravo
Joined Feb 24, 2006
13,732
Decibels is a general purpose technique for measuring practically anything. For example if you want to know the relationship of the US national debt to the median family income, you proceed as follows:
1. Form the ratio of the US national debt (≈ $18,418,630,000,000) to the median family income for 2014 (≈$53,657)
2. Take the logarithm to the base 10
3. Multiply by 20
I get approximately 171 dB
Your mileage may differ and it is growing by the second.
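Papabravo's three steps can be sketched in a few lines of Python (a small illustration of the arithmetic, using the figures from the post above):

```python
import math

def db(value, ref_value, factor=20):
    """Return the decibel ratio of value to ref_value.

    factor=20 for amplitude-like quantities, factor=10 for power-like ones.
    """
    return factor * math.log10(value / ref_value)

# US national debt vs. 2014 median family income:
print(round(db(18_418_630_000_000, 53_657)))  # → 171
```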
#### JamesBond007
Joined Oct 25, 2015
24
dB is loudness. But it is the ratio of input to output. So 1.0 dB can be soft, or very #$%^ loud. It does not have units.
#### joeyd999
Joined Jun 6, 2011
4,394
Decibels is a general purpose technique for measuring practically anything. For example if you want to know the relationship of the US national debt to the median family income, you proceed as follows:
1. Form the ratio of the US national debt (≈ $18,418,630,000,000) to the median family income for 2014 (≈ $53,657)
2. Take the logarithm to the base 10
3. Multiply by 20
I get approximately 171 dB
Your mileage may differ and it is growing by the second.
Explain your rational for using a factor of 20 instead of 10 in this example. In other words, where is the square?
#### blocco a spirale
Joined Jun 18, 2008
1,546
Generally, audio has 2 ohms, 4 ohms and 8 ohms for speakers.
With PA - Public Announcement - transformers are used on a high voltage line, typically 70V, to run each speaker. Think Show Day.
These are usually 600 ohm.
That is, impedance matching 600 ohm line to 8 ohm speaker.
If you are doing a UNI course, you are expected to figure this out.
Luck.
The 600Ω standard has nothing to do with 70V loudspeaker lines.
#### Jimit Kavathia
Joined Nov 30, 2014
9
Thanks Bertus and Papabravo!
#### JamesBond007
Joined Oct 25, 2015
24
It has with PA
#### blocco a spirale
Joined Jun 18, 2008
1,546
It has with PA
No, it has not.
The 600Ω standard applies to low level signal impedance matching where 0dBm = 1mW or 0.7746V across 600Ω.
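The 0.7746V figure follows directly from P = V²/R; a quick check (illustrative only):

```python
import math

# 0 dBm is defined as 1 mW; across a 600 ohm load, V = sqrt(P * R):
P = 0.001   # watts (1 mW)
R = 600.0   # ohms
V = math.sqrt(P * R)
print(round(V, 4))  # → 0.7746
```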
#### Papabravo
Joined Feb 24, 2006
13,732
Explain your rational(e)[Sic] for using a factor of 20 instead of 10 in this example. In other words, where is the square?
Voltage and current ratios use a factor of 20, whilst power ratios use a factor of 10 because power is proportional to voltage squared or current squared. If you want to use a factor of 10 for dollar ratios I don't think I could present a cogent argument against it, except, by analogy to power, why are dollars proportional to something squared.
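Because power goes as the square of voltage, the two formulas report the same change for the same physical situation; a small demonstration (not from the thread, just arithmetic):

```python
import math

# Doubling the voltage across a fixed resistor quadruples the power.
# Both dB formulas then report the same change (about 6.02 dB):
v_ratio = 2.0
p_ratio = v_ratio ** 2          # P = V^2 / R, with R fixed

db_from_voltage = 20 * math.log10(v_ratio)
db_from_power = 10 * math.log10(p_ratio)

assert abs(db_from_voltage - db_from_power) < 1e-12
print(round(db_from_voltage, 2))  # → 6.02
```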
#### AnalogKid
Joined Aug 1, 2013
8,464
While applying dB to something other than electrical concepts as a way to express ratios and quantitative relationships is not unheard of, it gets into trouble because of the lack of a defined reference plane. Still, I think that for scalar quantities like volts, amps, dollars, marbles, length, etc., I'd go with the 20log calc. For things that are based on two or more other things, like area (as opposed to length), the 10 log would be the better analog. Still, the lack of any universally agreed upon reference means that the calc would have to be stated explicitly. Even within the EE community, the reference can not be assumed. A conversation among a telecom guy, a radio guy, and a TV guy will get into trouble because they have three different definitions of what 0 dB on a VU meter means.
ak
#### Veracohr
Joined Jan 3, 2011
715
It's used because the dBm reference, based on a 600 ohm load, was defined when that was a common load. dBu, the most common audio voltage reference, is based on dBm so also based on a 600 ohm load. For voltage transmission between equipment 600 ohms may no longer be a normal load impedance, but since the reference was already there people decided to keep using it.
# trying to perfect a pixel shader effect
## Recommended Posts
I'm messing around with shaders, pretty much for the first time. I have a visual effect in my head, but I don't know if how possible it is to implement, and I've reached the limit of my experimentation, documentation reading and google-fu for the night, so I thought I'd post to see if other people had any ideas.
Observe this torus:
See that nice bright shine/glow oriented toward the camera?
Now observe it from an angle:
The brightness only works when the torus surface is orthogonal to the view. Which is what I expect with the approach I've taken, but it's not what I want. I want something akin to a neon tube effect, so that the color gradient remains the same regardless of the view angle of the torus. (I hope I've explained what I'd like to achieve well enough--it's past 3am local ;) )
Ultimately I'd like to apply this effect to pipes with arbitrary bends, which I'm generating from a line strip fed through a geometry shader. I'm using HLSL/Direct3D 10, but I figure this is more of a theory question.
##### Share on other sites
The fact that the first picture looks somewhat like a neon glow is just a byproduct of the lighting approximation function.
For "real" glow, you could try some post-processing bloom. First, render a thinner torus with bright colors to a render target texture. Then, blur it and combine it additively with the non-blurred image.
The following image is a mockup of the process made with Photoshop, but it is relatively easy to get the same results by using a pixel shader made for the purpose.
It is possible to tweak the results greatly by, for example, setting a brightness threshold for the blurred image or scaling the blur's result values to simulate more scattered light transfer and/or a brighter source light. This all depends on the artistic effect that you want to achieve.
##### Share on other sites
Quote:
I want something akin to a neon tube effect, so that the color gradient remains the same regardless of the view angle of the torus.
Try a inverse fresnel-like term and add a fake light:
float fresnel_like = 1 - dot(normal, view);
fresnel_like *= fresnel_like;
fresnel_like *= fresnel_like;
vec3 fake_light = light_color * (1.0 - fresnel_like);
color = ... + fake_light;
##### Share on other sites
Quote:
Original post by Nik02The fact that the first picture looks somewhat like a neon glow is just a byproduct of the lighting approximation function.
I understand that. But it's not the neon glow I'm looking for--it's the neon tube. I'm currently experimenting with volumetric lines, but I'm unsure how well it will look at the bends.
Quote:
Original post by Ashaman73Try a inverse fresnel-like term and add a fake light
I've seen the term "fresnel term", but I didn't realize what it meant. I'm doing just that in the above screenshots, basing the color on dot(normal, view). But, as you see, it only works when the pipe is orthogonal to the view vector.
Thanks for the replies.
##### Share on other sites
Quote:
Original post by yckxso that the color gradient remains the same regardless of the view angle of the torus.
Can you just replace the view-angle variable with a constant?
##### Share on other sites
It looks like this method of volumetric lines is what I've been looking for. Specifically, method B, with a suitable gradient texture. I still need to modify the technique so that line strips won't additively blend at the interstitial vertices, but hopefully that won't be too difficult.
##### Share on other sites
So you want the dp3 lighting, particularly the specular, to stay constant:
Maybe it's not what you want because it is really simple, but why not take your normal in object space and consider that your view vector is constantly looking downward at your torus?
i.e. V(0, -1, 0) in object space, hardcoded in the shader.
##### Share on other sites
One possible approach to this that is similar in a way to the link a couple posts above is to take your torus and render a depth map from the cameras point of view with the cull-mode reversed so you're rendering the back faces. Then you can render it normally, getting the depth of the front face, subtracting it from the depth of the backface and using that thickness to adjust the glow or alpha value of the output.
I did something similar for a water shader recently and it worked quite well. As soon as the object starts overlapping itself though it can lead to some trouble when using it for the alpha value as you won't be able to get the depth of the piece of the object behind the piece in front. If you just use it for the glow value then that's not a problem though.
##### Share on other sites
Hi guys, although i'm not massively knowledgeable about shaders i think i know what you want.
The neon glow in a tube, possibly with some neon glow leaking out as well.
What about having 2 meshes, 1 big and one smaller that fits inside the bigger one.
you do the neon glow like that shown above in a previous post for the smaller mesh.
Then with the bigger mesh have a texture with an alpha channel so it looks like a neon light in a tube.
I presume that you would have to render the inner ring before the outer, but i think it should give you the desired effect.
Hope that helped.
PureBlackSin
##### Share on other sites
Thanks for all the responses. I'd have taken a good look at them today, but I spent all afternoon at the medical center to find out I have an arthritic sternum(!), which ate up all my free time tonight. I'll read the responses tomorrow and possibly provide an update.
Now I'm off to bed.
690 views
How to show that this weak scheme is a cubature scheme?
Weak schemes, such as Ninomiya-Victoir or Ninomiya-Ninomiya, are typically used for discretization of stochastic volatility models such as the Heston Model. Can anyone familiar with Cubature on ...
777 views
4k views
Why use a column database for tick/bar data?
I often hear that column-oriented databases are the best choice method for storing time series data in finance applications. Especially by people selling expensive column-oriented databases. Yet, at ...
164 views
Real world application of stochastic portfolio theory
There is a branch of stochastic portfolio theory (see also this question). Fernholz and Karatzas have published research in this field (e.g. "Diversity and relative arbitrage in equity markets") and ...
I have the following system of SDE's $dA_t = \kappa_A(\bar{A}-A_t)dt + \sigma_A \sqrt{B_t}dW^A_t \\ dB_t = \kappa_B(\bar{B} - B_t)dt + \sigma_B \sqrt{B_t}dW^B_t$ If $\sigma_B > \sigma_A$ I ...
In the given arrangement of ca...
Updated On: 27-06-2022
Text Solution
3 μC, 1 μC, 2 μC, 6 μC
Solution: a. The voltage is common to all three capacitors, so the charges are proportional to the capacitances:
q1/1 = q2/2 = q3/3 = K
q1 + q2 + q3 = 6
Solving, we get K = 1, so the charge on the 3 μF capacitor is q3 = 3 μC.
Transcript
Hello! The question is: in the given arrangement of capacitors, a 6 microcoulomb charge is added at point A. Find the charge on the upper (3 microfarad) capacitor.
In the diagram, three capacitors of 1 microfarad, 2 microfarad and 3 microfarad each have one plate connected to point A and the other plate grounded. Since all the capacitors are grounded, the voltage V across each capacitor is the same. From Q = CV:
Q1 = 1·V, Q2 = 2·V, Q3 = 3·V, so Q1/1 = Q2/2 = Q3/3 = K for some constant K.
By conservation of charge, Q1 + Q2 + Q3 = 6 microcoulomb, so K + 2K + 3K = 6, which gives K = 1 microcoulomb.
Therefore the charge on the upper capacitor is Q3 = 3K = 3 microcoulomb, which is the correct option.
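The proportional-division argument above can be checked numerically (a small illustration, not part of the original answer):

```python
# Charge divides in proportion to capacitance, since the voltage is common.
caps = [1.0, 2.0, 3.0]   # capacitances in microfarads
total_charge = 6.0       # microcoulombs added at point A

V = total_charge / sum(caps)        # common voltage across each capacitor
charges = [c * V for c in caps]     # Q = C * V for each capacitor

print(charges)  # → [1.0, 2.0, 3.0]  (the 3 uF capacitor holds 3 uC)
```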
1. ## TOUGH binomial theorem/problem
when (1 + ax + bx^2 + cx^3)^10 is expanded in ascending powers of x, the coefficients of x, x^2 , x^3 are 20, 200, 1000 respectively. Find the values of a, b , c and the coefficient of x^4.
Racked my brain and the internet looking for answers. Its for college algebra :S! any and all help appreciated thanks.
2. Are you SURE this is a binomial theorem problem? If so, I suppose you could find a factorization that would make your life easier.
For my money, it seemed quite a bit easier to create the Maclaurin expansion and solve sequentially for a, b, and c. It's only a couple of derivatives.
3. yeah, supposedly we have to use the binomial theorem. My best guess is that you just eliminate the a, b, c and find the coefficients for x, x squared and x cubed. I tried that, but I'm not sure what I got was right, since I only did 10C2 = 45, and since 20 is the final coefficient you just do 20/45 = a. However, since you have so many terms in the ^10, I doubt that's what it is, since it would work for any (a+b)^10 but not a ((1+ax)+(bx^2+cx^3))
4. To expand $\left( {1 + ax + bx^2 + cx^3 + dx^4 } \right)^{10}$ we would have ten factors. The only way to have a term containing $x$ is to use the $ax$ in exactly one of the factors and 1’s all the other factors. That will give us $10ax$, so $a=2$. To have a term containing $x^2$ there are two ways to construct it. First, we could take $ax$ from exactly two of the factors and 1’s else. That can be done in ${{10} \choose {2}}=45$ ways. Or we can take $bx^2$ from exactly one of the factors and 1’s else. Again that can be done in 10 ways. So we end up with $45a^2x^2 + 10bx^2$. So we know $180+10b=200$, solve for b.
Now you must continue with the $x^3$ case. How can it be made up?
Then proceed to the $x^4$ case. How can it be made up?
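Plato's counting can be checked by brute-force expansion. With a = 2 and b = 2 as derived above, the x^3 condition 960 + 360 + 10c = 1000 forces c = -32 (assuming my arithmetic), and the x^4 coefficient then falls out of the same expansion:

```python
# Represent 1 + a x + b x^2 + c x^3 as a coefficient list and raise it
# to the 10th power by repeated convolution.
def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def poly_pow(p, n):
    result = [1]
    for _ in range(n):
        result = poly_mul(result, p)
    return result

a, b, c = 2, 2, -32
expanded = poly_pow([1, a, b, c], 10)
print(expanded[1:5])  # → [20, 200, 1000, 660]
```

The coefficients of x, x^2, x^3 match the given 20, 200, 1000, and the coefficient of x^4 comes out as 660.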
5. I'm not quite sure how using a five-element base constitutes using the Binomial Theorem. Anyway, since you're going only up to x^4, you won't be using the original x^3 term very much. In other words, it shouldn't be hard to track down the few terms you'll need.
# How to upgrade R on windows – another strategy (and the R code to do it)
April 23, 2010
By
(This article was first published on R-statistics blog » R, and kindly contributed to R-bloggers)
Update: In the end of the post I added simple step by step instruction on how to move to the new system. I STRONGLY suggest using the code only after you read the entire post.
### Background
If you didn’t hear it by now – R 2.11.0 is out with a bunch of new features.
After Andrew Gelman recently lamented the lack of an easy upgrade process for R, a Stackoverflow thread (by JD Long) invited R users to share their strategies for easily upgrading R.
### Strategy
In that thread, Dirk Eddelbuettel suggested another idea for upgrading R. His idea is to use a folder for R's packages which is outside the standard directory tree of the installation (a different strategy than the one offered in the R FAQ).
The idea of this upgrading strategy is to save us steps in upgrading. So when you wish to upgrade R, instead of doing the following three steps:
1) Install the new version of R
2) Copy the “library” content from the old R to the new R
3) Upgrade all of the packages (in the library folder) to the new version of R.
You could instead just have steps 1 and 3, and skip step 2.
For example, under windows, you might have R installed on:
C:\Program Files\R\R-2.11.0
But (in this alternative model for upgrading) you will have your packages library on a “global library folder” (global in the sense of independent of a specific R version):
C:\Program Files\R\library
So in order to use this strategy, you will need to do the following steps -
1. In the OLD R installation (in the first time you move to the new system of managing the upgrade):
1. Create a new global library folder (if it doesn’t exist)
2. Copy to the new “global library folder” all of your packages from the old R installation
3. After you move to this system – the steps 1 and 2 would not need to be repeated. (hence the advantage)
2. In the NEW R installation:
1. Create a new global library folder (if it doesn’t exist – in case this is your first R installation)
2. Permanently point to the global library folder whenever R starts
3. Delete from the “Global library folder” all the packages that already exist in the local library folder of the new R install (no need to have doubles)
4. Update all packages. (Make sure you picked a mirror where the packages are up to date; you sometimes need to choose another mirror.)
Thanks to help from Dirk, David Winsemius and Uwe Ligges, I was able to write the following R code to perform all the tasks I described above.
So first you will need to run the following code:
Old.R.RunMe <- function(global.library.folder = "C:/Program Files/R/library", quit.R = NULL) {
  # It will:
  # 1. Create a new global library folder (if it doesn't exist)
  # 2. Copy to the new "global library folder" all of your packages
  #    from the old R installation

  # Check that the global lib folder exists - and if not -> create it.
  if (!file.exists(global.library.folder)) {
    dir.create(global.library.folder)
    print(paste("The path:", global.library.folder, "didn't exist - and was now created."))
  } else {
    print(paste("The path:", global.library.folder, "already exists. (no need to create it)"))
  }

  print("-----------------------")
  print("I am now copying packages from the old library folder to:")
  print(global.library.folder)
  print("-----------------------")
  flush.console()  # refresh the console so that the user will see the message

  # Copy packages from the current lib folder to the global lib folder
  list.of.dirs.in.lib <- paste(paste(R.home(), "\\library\\", sep = ""),
                               list.files(paste(R.home(), "\\library\\", sep = "")),
                               sep = "")
  folders.copied <- file.copy(from = list.of.dirs.in.lib,  # copy folders
                              to = global.library.folder,
                              overwrite = TRUE, recursive = TRUE)
  print("Success.")
  print(paste("We finished copying all of your packages (", sum(folders.copied),
              "packages ) to the new library folder at:"))
  print(global.library.folder)
  print("-----------------------")

  # To quit R?
  if (is.null(quit.R)) {
    print("Can I close R? y(es)/n(o) (WARNING: your environment will *NOT* be saved)")
    answer <- readLines(n = 1)
  } else {
    answer <- quit.R
  }
  if (tolower(answer)[1] == "y") quit(save = "no")
}

New.R.RunMe <- function(global.library.folder = "C:/Program Files/R/library",
                        quit.R = FALSE,
                        del.packages.that.exist.in.home.lib = TRUE,
                        update.all.packages = TRUE) {
  # It will:
  # 1. Create a new global library folder (if it doesn't exist)
  # 2. Permanently point to the global library folder
  # 3. Make sure that in the current session R points to the "global library folder"
  # 4. Delete from the "global library folder" all the packages that already
  #    exist in the local library folder of the new R install
  # 5. Update all packages.

  # Check that the global lib folder exists - and if not -> create it.
  if (!file.exists(global.library.folder)) {
    dir.create(global.library.folder)
    print(paste("The path to the global library (", global.library.folder,
                ") didn't exist - and was now created."))
  } else {
    print(paste("The path to the global library (", global.library.folder,
                ") already exists. (NO need to create it)"))
  }
  flush.console()  # refresh the console so that the user will see the message

  # Based on: help(Startup)
  # Check if "Renviron.site" exists - and if not -> create it.
  Renviron.site.loc <- paste(R.home(), "\\etc\\Renviron.site", sep = "")
  if (!file.exists(Renviron.site.loc)) {
    # "Renviron.site" doesn't exist (as expected) - create it and add the global lib line.
    cat(paste("R_LIBS=", global.library.folder, sep = ""), file = Renviron.site.loc)
    print(paste("The file:", Renviron.site.loc,
                "didn't exist - we created it and added your 'global library link' (",
                global.library.folder, ") to it."))
  } else {
    print(paste("The file:", Renviron.site.loc,
                "existed! Make sure you add the following line by yourself:"))
    print(paste("R_LIBS=", global.library.folder, sep = ""))
    print(paste("To the file:", Renviron.site.loc))
  }

  # Set the global lib for this session too, so the new global lib settings
  # take effect without restarting R. This line could also have been added to
  # /etc/Rprofile.site, with the same effect as adding "Renviron.site" had.
  .libPaths(global.library.folder)
  print("Your library paths are: ")
  print(.libPaths())
  flush.console()

  if (del.packages.that.exist.in.home.lib) {
    print("We will now delete packages from your global library folder that already exist in the local-install library folder")
    flush.console()
    package.to.del.from.global.lib <-
      paste(paste(global.library.folder, "/", sep = ""),
            list.files(paste(R.home(), "\\library\\", sep = "")),
            sep = "")
    number.of.packages.we.will.delete <-
      sum(list.files(paste(global.library.folder, "/", sep = "")) %in%
            list.files(paste(R.home(), "\\library\\", sep = "")))
    # Delete the duplicated packages from the global library folder
    # (no need for double folders).
    deleted.packages <- unlink(package.to.del.from.global.lib, recursive = TRUE)
    print(paste(number.of.packages.we.will.delete, "packages were deleted."))
  }

  if (update.all.packages) {
    # Based on:
    # http://cran.r-project.org/bin/windows/base/rw-FAQ.html#What_0027s-the-best-way-to-upgrade_003f
    print("We will now update all your packages")
    flush.console()
    update.packages(checkBuilt = TRUE, ask = FALSE)
  }

  # To quit R?
  if (quit.R) quit(save = "no")
}
Then you will want to run, on your old R installation, this:
Old.R.RunMe()
And on your new R installation, this:
New.R.RunMe()
### Update – simple two line code to run when upgrading R
(Please do not try the following code before reading this post and understanding what it does)
In order to move your R upgrade to the new (simpler) system, do the following:
1) Install the new version of R
2) Open your old R and run:
source("http://www.r-statistics.com/wp-content/uploads/2010/04/upgrading-R-on-windows.r.txt")
Old.R.RunMe()
(wait until it finishes)
3) Open your new R and run
source("http://www.r-statistics.com/wp-content/uploads/2010/04/upgrading-R-on-windows.r.txt")
New.R.RunMe()
(wait until it finishes)
Once you do this, then from now on, whenever you upgrade to a new R, all you will need to do are the following TWO (instead of three) steps:
1) Install the new version of R
2) Open your new R and run:
source("http://www.r-statistics.com/wp-content/uploads/2010/04/upgrading-R-on-windows.r.txt")
New.R.RunMe()
(wait until it finishes)
And that’s it.
If you have any more suggestions on how to make this code better – please do share.
(After this code has received some measure of review, I will upload it to a file for easy running through “source(…)”.)
## Demonstrate that a given set of matrices is multiplicatively closed
Solution to Abstract Algebra by Dummit & Foote 3rd edition Chapter 0.1 Exercise 0.1.3
Denote by $\mathcal{A}$ the set of all $2\times 2$ matrices with real number entries. Let $M = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$ and let $$\mathcal{B} = \{ X \in \mathcal{A} \ |\ MX = XM \}.$$ Prove that if $P,Q \in \mathcal{B}$, then $PQ \in \mathcal{B}$, where juxtaposition denotes the usual matrix product.
Solution: Recall that matrix multiplication is associative. Since $P,Q \in \mathcal{B}$, we have $PM=MP$ and $QM=MQ$. Then we have $$(PQ)M = P(QM) = P(MQ) = (PM)Q = (MP)Q = M(PQ),$$ and thus $PQ \in \mathcal{B}$.
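The closure argument is easy to sanity-check numerically. Here is a small NumPy sketch (my own addition; the particular matrices $P$ and $Q$ are just example elements of $\mathcal{B}$):

```python
import numpy as np

M = np.array([[1, 1],
              [0, 1]])

# Two matrices that commute with M (verified directly below).
P = np.array([[2, 5],
              [0, 2]])
Q = np.array([[3, -1],
              [0, 3]])

assert np.array_equal(P @ M, M @ P)  # P is in B
assert np.array_equal(Q @ M, M @ Q)  # Q is in B
# As the associativity argument shows, the product stays in B:
assert np.array_equal((P @ Q) @ M, M @ (P @ Q))
print("PQ commutes with M")
```

Of course, a numerical check on a few examples is no substitute for the proof; it only illustrates the statement.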
# A trivially copyable ticket for a unique_ptr
In Chandler Carruth’s CppCon 2019 talk “There Are No Zero-Cost Abstractions,” he talked a lot (not exclusively, but a lot) about the hidden performance cost of std::unique_ptr. See, unique_ptr may be the same size as a native pointer, but because it has a user-defined destructor, it is “non-trivial for purposes of ABI,” and that means it has to get passed on the stack.
For more on “trivial for purposes of ABI,” see my previous blog post “[[trivial_abi]] 101” (May 2018).
It occurred to me afterward that even if you didn’t want to write your own custom [[trivial_abi]]-enabled MyUniquePtr (maybe because you valued GCC compatibility — I think GCC still hasn’t implemented that attribute), you might be able to hack around it by simply creating a trivially copyable “ticket for a unique_ptr,” in the same sense that I describe weak_ptr as “a ticket for a shared_ptr.” The important thing about a ticket is that you can’t use it directly; it doesn’t afford the user any operation except “redeem this ticket for a unique_ptr.”
It would look something like this (Godbolt):
template<class T>
struct ticket {
ticket() = default;
explicit ticket(T *p) : p_(p) {}
explicit ticket(std::unique_ptr<T>&& p) : p_(p.release()) {}
std::unique_ptr<T> redeem() && { return std::unique_ptr<T>(std::exchange(p_, nullptr)); }
private:
T *p_ = nullptr;
};
And then Chandler’s test harness, which originally looked something like this and produced 27 lines of assembly —
void bar(int*);
void baz(std::unique_ptr<int>);
void foo(std::unique_ptr<int> p) {
bar(p.get());
baz(std::move(p));
}
— would instead look something like this and produce 19 lines of assembly —
void bar(int*);
void baz(ticket<int>);
void foo(ticket<int> t) {
std::unique_ptr<int> p = std::move(t).redeem();
bar(p.get());
baz(ticket<int>(std::move(p)));
}
So what’s the catch? Well, it’s a big one. The trivially copyable ticket object by definition has a trivial destructor. So it doesn’t consider itself to “own” the heap-allocated object. If an exception is thrown during the time the heap allocation is managed only by the ticket, then the allocation will be leaked!
void use(ticket<int> t, int u);
int thrower() { throw "oops"; }
void test() {
auto p = std::make_unique<int>(42);
use(
ticket<int>(std::move(p)), // lose ownership...
thrower() // ...and leak the allocation!
);
}
However, as Chandler himself pointed out, this is only a problem if your codebase uses exceptions at all! If you don’t use exceptions, then you don’t have this issue, and maybe the idea of a “trivially copyable ticket for a unique_ptr” might be interesting to you.
What would make this pattern actually usable, I think, would be if the language had some way to say “ABI-wise, I take a parameter of type X; but the first and only thing I’m ever going to do with that parameter is to convert it to type Y.” Something like this fantasy syntax:
void baz(ticket_view<int>);
void foo(ticket_view<int> -> std::unique_ptr<int> p) {
bar(p.get());
baz(std::move(p));
}
(Here I’ve renamed ticket to ticket_view, and given it an implicit constructor from unique_ptr, and given it an explicit conversion to unique_ptr instead of a named method redeem(). This emphasizes its similarity to string_view as a parameter-only type.)
Posted 2019-09-21
# Lesson 06
### 01.01.06 Do you know the continents?
- Continent outline maps - http://www.eduplace.com/ss/maps/
- Continent borders - http://img509.imageshack.us/img509/1630/sevenconao0.jpg
Download a map of the world from the "Outline Maps" web site. Color the seven continents, each continent a different color. Be careful to put the borders of the continents in the correct place, making sure that you also include the major oceans that are in or around each continent. Make sure you have a compass rose, the equator, the prime meridian, the Tropic of Cancer, and the Tropic of Capricorn on the map. Put in some of the major countries also. A big HINT: the map above is not exactly correct as to borders and continents.
You will be graded on the completeness of the assignment. Make sure that the continent borders are correct (hint: Europe and Asia, No. America and So. America, etc.). Make sure to be neat and legible. Use a key if it will help make the map more understandable.
***70% or higher is required to pass any assignment***
World's Continents: The borders are not correct in this image.
### 01.06 More Interactions: Design an Experiment (Earth Systems)
Introduction: You have just completed an experiment that explored the interaction between an abiotic factor (fertilizer) and a biotic factor (algae). Now your task is to design and conduct an experiment that explores the interaction between an abiotic and a biotic factor of your choice. WOW! How exciting is that! You get to choose, and the world is your laboratory. The options are limitless. All you have to do is choose an abiotic factor and a biotic factor and design an experiment that tests how they interact.

I will give you an example of an experiment that explores the relationship between the abiotic factor sunlight and biotic plants. THIS IS ONLY AN EXAMPLE. You may NOT use this experiment for this assignment. [Most of you did this experiment in second grade!]

The question might be "What is the effect of sunlight on pansies?" The hypothesis might be "If I put pansies in a dark closet then they will not grow as well as pansies put on a lighted windowsill." The experimental plan could be:

1. Get six pansies (a type of flowering plant) that are the same species and as close to the same size as possible.
2. Put three pansies in the closet and three pansies on the windowsill in the sunlight.
3. Water all six plants the same.
4. Visually inspect the plants once a day for 14 days.
5. Daily measure the height of each plant and count the number of healthy leaves. Record your observations on a data table.

See how easy it is? Remember, you cannot do an experiment with sunlight and plants for two reasons. One, you already know what the result will be. You did this already in second grade! Two, I have already outlined the experiment. Part of what you need to learn in this class is how to design your own experiment. You cannot learn that by doing an experiment that I have already designed! So, off you go. Be creative. Design your own experiment. But wait! Be sure to read the directions before you do anything!!!!!
### 01.06 Solving Multi-Step Inequalities (Math Level 1)
Solve multi-step inequalities and justify the steps involved.
Something to Ponder
What are some things you need to consider as you write expressions or equations to model real-life situations and problems?
Mathematics Vocabulary
Inequality: an expression with an inequality sign (like < , ≤ , > or ≥) instead of an equals sign
Solve linear inequalities: perform the same operation on both sides of the inequality.
Note: When $\fn_phv {\color{Red}multiplying }$ or $\fn_phv {\color{Red}dividing }$ both sides of an inequality by a $\fn_phv {\color{Red}negative }$ number, $\fn_phv {\color{Red}reverse }$ the $\fn_phv {\color{Red}inequality }$ $\fn_phv {\color{Red}symbol }$.
An inequality remains unchanged if:
• the same number is added to both sides of the inequatily
• the same number is subtracted from both sides of the inequality
• both sides of the inequality are multiplied or divided by a positive number
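To see the sign-reversal rule in action, here is a short Python check (my own example; the inequality -3x + 5 < 11 is not from the lesson):

```python
# Solve -3x + 5 < 11 step by step:
#   subtract 5 from both sides:                    -3x < 6
#   divide both sides by -3 (a negative number!)
#   and REVERSE the inequality symbol:               x > -2
for x in [-5, -3, -2.5, -2, -1, 0, 4]:
    original = (-3 * x + 5 < 11)
    solved = (x > -2)
    assert original == solved  # the solved form agrees at every test value
print("x > -2 is equivalent to -3x + 5 < 11")
```

Forgetting to reverse the symbol would give x < -2, which the loop above would immediately flag as disagreeing with the original inequality.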
Learning these concepts
Click each mathematician image OR click the link below to launch the video to help you better understand this "mathematical language."
$\fn_phv {\color{Red}SCROLL }$ $\fn_phv {\color{Red}DOWN }$ $\fn_phv {\color{Red}TO }$ $\fn_phv {\color{Red}THE }$ $\fn_phv {\color{Red}GUIDED }$ $\fn_phv {\color{Red}PRACTICE }$ $\fn_phv {\color{Red}SECTION }$ $\fn_phv {\color{Red}AND }$ $\fn_phv {\color{Red}WORK }$ $\fn_phv {\color{Red}THROUGH }$ $\fn_phv {\color{Red}THE }$ $\fn_phv {\color{Red}EXAMPLES }$ $\fn_phv {\color{Red}BEFORE }$ $\fn_phv {\color{Red}SUBMITTING }$ $\fn_phv {\color{Red}THE }$ $\fn_phv {\color{Red}ASSIGNMENT!!! }$
### 01.06 The Greek Gods and the Trojan War (English 9)
Many of the assignments in the first semester will refer to a story first told in ancient Greece: The Odyssey, by Homer. You will understand it better if you know a little about the Greek gods, especially Zeus, Poseidon, Athena, Hera, Ares, Hephaestus, Hermes and Aphrodite. If you are already familiar with the Greek gods, you might want to skim this and go on.
The Greek gods were pretty much like regular people, except that they were immortal and had supernatural powers. They were not necessarily ethical, just, merciful or kind, and could be selfish and capricious. They sometimes had temper tantrums or did things on a whim. They definitely played favorites.
Zeus (WMC, CC, from Väsk image): Zeus was the king of the gods, and the most powerful. He could use lightning bolts to strike people he didn't like. He also liked to sleep around, and had dozens, maybe even hundreds, of children from mortal women. (The other male gods also occasionally had children with mortal women.) Zeus' wife, Hera, was jealous (with good cause) and did not like these children of Zeus and his mortal lovers.

Poseidon with his trident and horses, fountain sculpture (WMC, CC, Pacogq image): Poseidon was Zeus' brother, and god of the ocean. As well as storms at sea, he could cause earthquakes by striking the earth with the trident he carried.

Hephaestus, another brother, was god of the forge, volcanoes and fire. He usually minded his own business and left mortals alone. He was married to Aphrodite, goddess of beauty and love, who was a flirt and an airhead.

A third brother, Ares, was god of war. He was short-tempered, cruel, impulsive and a bit of a coward and bully.

Hades, also brother to Zeus, was god of the underworld - where people went after death. He had a three-headed dog named Cerberus.

Athena (WMC, CC, G.dallorto image): Athena was Zeus' daughter, and goddess of wisdom, useful crafts and war. She was supposed to have sprung full-grown from Zeus' head. Of all the gods, Athena was the one most likely to be reasonable and just. The city of Athens was named in her honor. In The Odyssey, Athena helps Odysseus because she likes his intelligence and courage.

Hermes was the messenger god, and could fly because he had winged sandals. He could be mischievous, but was usually good-humored.

Other important gods included Artemis (goddess of the hunt), Apollo (god of the sun), Dionysus (god of wine), Demeter (goddess of the earth & harvest), and Hestia (goddess of the hearth & home).
The Trojan War
Helen was the most beautiful woman in the world, and when she reached marriageable age, young men came from all around in hopes of becoming her husband.
One man who came courting Helen was Odysseus, who was already known for his intelligence and common sense. Odysseus could see that any man who married Helen would probably be attacked and killed by others who wanted to steal her, so he convinced all the men there to make a treaty of sorts - they all agreed that they would help defend whichever man Helen married.
Helen married Menelaus of Sparta, all the others went home, and everything went well for several years. Odysseus went home to Ithaca and fell in love with Penelope. They married and had a son, Telemachus.
Problems started with the gods. Athena, Hera and Aphrodite were tricked into an argument about who was the most beautiful, and Zeus named Paris, a good-looking young man, to be the judge.
Each of the goddesses tried to bribe him, and Aphrodite, who promised him the most beautiful woman in the world as wife, won the contest. She helped Paris to steal Helen from Menelaus.
Paris took Helen to Troy, and Menelaus called upon all the men who had once promised to help defend him.
These Greeks laid siege to Troy for ten years. Many famous battles and heroes had a part in the war, but in the end the Greeks won by a trick planned by Odysseus:
The Trojan horse: (WMC, public domain)
The Greeks pretended to leave, but left a large, hollow wooden horse behind. Odysseus and a few soldiers hid inside the wooden horse, which the Trojans dragged into the city during their premature celebration. That night, the men snuck out of the horse and opened the city gates to let in the whole Greek army, who had come back under cover of darkness. The Greeks sacked Troy, Helen (no longer under Aphrodite's spell) went home with Menelaus, and all the Greeks started home.
The story of The Odyssey tells Odysseus' adventures on his journey home, which takes years.
### 01.06 Aging, Death and Dying (Health II)
As a young person, you probably have not yet had any reason to think about your own old age or death. You might not even have experienced the death of a close loved one. If your grandparents are alive and nearby, you probably have witnessed at least a few of the problems of aging. In any case, you know that all of us will die eventually, and most of us will grow old. In our culture, we try to remain young as long as possible, and tend to avoid thinking about aging or death. Although staying active and healthy is certainly a good thing, sometimes avoiding a subject just makes it even more of a scary bogeyman in the dark closet.
More than likely, you will someday have to cope with issues relating to your parents' aging, and then your own. Some important questions about aging and death are best addressed before the issues arise. Will there be enough money to support you when you can no longer work? If you were badly injured or had a serious heart attack or stroke, would you want to be kept on life support indefinitely even if there seemed to be no hope of recovery? How long would you want to be kept on life support? Would you want (for yourself or an elderly loved one) a 'do not resuscitate' order? Would you want your organs, or a loved one's organs, donated? Should people who are terminally ill have the right to doctor-assisted suicide? When our pets are suffering or very infirm, and we can't 'cure' the problem, we often consider it kind to euthanize them. Should the same be true for people? Should a person dying or in extreme pain be hospitalized, or should 'hospice' care be available at home? If a terminally ill patient is in such great pain that only dangerously high doses of painkillers can keep them comfortable, should we risk giving such high doses of drugs?
- Required: Top Ten Aging Challenges - http://longevity.about.com/od/liveto100/tp/top-aging-challen...
- Required: Organ donation - http://www.nlm.nih.gov/medlineplus/organdonation.html
- Required: Living wills - http://www.mayoclinic.com/health/living-wills/HA00014
- Supplemental: Assisted Suicide - http://www.wrtl.org/assistedsuicide/painmanagement.aspx
- Supplemental: FAQ about organ donation - http://www.mayoclinic.com/health/organ-donation/FL00077
- Supplemental: Euthanasia and Physician-Assisted Suicide - http://www.religioustolerance.org/euthanas.htm
- Supplemental: Hospice Care - http://www.nhpco.org/sites/default/files/public/Statistics_R...
### 01.06 Basic Vocabulary for Parts of Speech & Usage review (English 9)
Demonstrate command of the conventions of standard English grammar and usage when writing or speaking. Use various types of phrases (noun, verb, adjectival, adverbial, participial, prepositional, absolute) and clauses (independent, dependent; noun, relative, adverbial) to convey specific meanings and add variety and interest to writing or presentations.
Louis Sergent, determined that he will finish high school and not work in the coal mines, does his homework, 1946, Kentucky.: Russell Lee image, NARA, public domain
Earlier in your school career, you have probably learned about parts of speech (nouns, verbs, adjectives, adverbs, etc), clauses, and phrases. If you are not 100% sure about any of these, use the links below and/or the attachments to review; you will need to understand them to be able to work on some of the more advanced writing we will work on in this class. Mastering the use of various phrases and clauses will help you express your ideas as clearly as possible. As you read more complex works, you will sometimes find it easier to follow long, complicated sentences if you are able to break them down into their constituent parts.
- Read me first: Basic vocabulary for parts of speech and usage - https://share.ehs.uen.org/node/17726
- Verbs and Verbals video - http://pp1.ehs.uen.org/groups/english09/weblog/b344f/027_Par...
- SAS Parts of speech interactive (do Prepare, Identify, Practice, Apply); username: farm9the, QL#: 942 - http://www.sascurriculumpathways.com/login
- Quiz yourself on finding nouns in sentences - http://grammar.ccc.commnet.edu/grammar/quizzes/nouns_quiz1.h...
- Quiz yourself on finding adjectives in sentences - http://grammar.ccc.commnet.edu/grammar/quizzes/adjectives_qu...
- Quiz yourself on finding verbs in sentences - http://grammar.ccc.commnet.edu/grammar/quizzes/verbmaster.ht...
- Click here to see a list of nouns (in blue), along with some fairly unusual adjectives, on the web - http://en.wiktionary.org/wiki/Appendix:English_irregular_adj...
For more help, do the Parts of Speech interactive lesson (from SAS). Then, try the other links.
To access the SAS lessons:
English: Word Classes, 942
To open this resource in SAS® Curriculum Pathways®:
1. Go to the SAS Curriculum Pathways login page (linked above)
2. Enter the student user name: farm9the
3. In the Quick Launch box, enter: 942
### 01.06 Composition and Variety Essay (ArtFouii)
teacher-scored 30 points possible 90 minutes
Directions: Refer to the reading material from earlier lessons. Open up a word processing document and create an essay using the information that answers the rubric below. Make sure you proofread your writing, save it as a PDF document and upload it in the next section for grading. If that is not possible, copy and paste your essay in the submission area for grading.
| Structure | Content | Points possible |
|---|---|---|
| Introduction (one paragraph) | In your own words, define variety and composition as they are used in a general sense (not in art). Give examples from your daily life. | 3 |
| Background (one paragraph) | When you studied composition or variety before (it could be from another class, such as art, English, or a music class), explain what you learned. | 3 |
| Narrowing the topic (two paragraphs) | Explain how variety and composition can be used in art. Include the four tips for composition. | 6 |
| Examples (two paragraphs) | Use art museum websites on the internet to choose two images NOT used in the class lessons to illustrate variety and at least one of the composition tips. Do a "save image as" and insert the images into your paper (after re-sizing them to fit); if you can't do that, list the url for each image in your paper so the teacher will be able to see them. Explain how each image uses variety and the composition tip. | 7 |
| Process (one paragraph) | Explain how you plan to apply the composition tips and the design principle of variety in your still-life drawing for Unit One. | 3 |
| Conclusion (one paragraph) | How and why are composition and variety important in visual art? Explain how you can use these concepts in the future. | 3 |
| Editing | Proofread and edit your writing for mistakes in spelling, punctuation and other conventions. | Up to five off if not done |
### 01.06 Death & dying research paper (Health II)
teacher-scored 35 points possible 120 minutes
You will write a researched essay about an issue related to death, aging and dying chosen from the topics listed below.
Overview of assignment
TOPIC CHOICES:
Hospice Care, Euthanasia, Living Wills, Doctor Assisted Suicide, or Organ Donations
Search Pioneer Library, SIRS Knowledge Source, for information on the topic you chose from the list above. Use at least one of the websites listed for the previous lesson (01.6) and at least one source from Pioneer Library. Length: at least 550 words, plus a list of your sources.
Write a two-page paper (12 point font, double or single-spaced, 1" margins, at least 550 words) clearly stating your position on the chosen topic, with research to support your claims (more sources than just Wikipedia are required, though it may be used minimally if you’d like). More quality sources are encouraged. BE SURE TO CITE YOUR SOURCES OF INFORMATION FROM YOUR RESEARCH throughout the paper, as well as a works cited section included at the end. REMEMBER TO WRITE IN YOUR OWN WORDS, CITE YOUR SOURCES ON ALL RESEARCHED OR QUOTED WORK (this means within the paper, as well as a works cited section at the end)!! and make sure to PROOFREAD and EDIT your report before submitting it.
| Content by paragraph (you may have MORE than one paragraph on each of these) | Structure |
|---|---|
| 1. Begin by stating your position on the issue. Explain any historical background or context for the issue. Then explain why it is important and list at least two reasons for your position. | Introduction: write at least three complete sentences. If you use information or quotes from your research, remember to include the author's last name or the title in parenthesis at the end of the sentence. |
| 2. Give one reason for your position on the issue. Use information or quotes from your research. Use logic, an analogy or give an example from experiences of family, friends, or news stories. | Topic sentence, then three or more sentences with supporting details and examples; include the author's last name or the title in parenthesis at the end of a sentence/section where you have used information or quotes from your research. |
| 3. Give another reason for your position on the issue. Use information or quotes from your research. Use logic, an analogy or give an example from experiences of family, friends, or news stories. | Topic sentence, then three or more sentences with supporting details and examples; include the author's last name or the title in parenthesis at the end of a sentence/section where you have used information or quotes from your research. |
| 4. Give another reason for your position. Use information or quotes from your research. Use logic, an analogy or give an example from experiences of family, friends, or news stories. OR You might explain opposing positions and tell why those positions are wrong or less important. | Topic sentence, then three or more sentences with supporting details and examples; include the author's last name or the title in parenthesis at the end of a sentence/section where you have used information or quotes from your research. |
| 5. Sum up the long-term effects on individuals and on society of this issue. You might include something from your research, but be sure to make your own generalizations. End with another definite statement of the position you took, but not in exactly the same words as in your introduction. | Conclusion: write at least four complete sentences; include the author's last name or the title in parenthesis at the end of a sentence or section if you have used information or quotes from your research. |
| 6. List your sources (authors; book, magazine, or article titles; exact url for internet sources). | See a writer's guide for the correct format in which to list sources. |
Ideas and content 15 points:
Clearly state your opinion on the chosen topic, and cover all requested information
Documented research and citations 15 points:
Support your paper with documented research, including a source from Pioneer Library. Cite your sources within the body of the text so that it is clear where you obtained all of your information (worth 5 points). Include a works cited section at the end of your paper (worth 5 points). Introduce in your own words, put in quotation marks, cite, and comment on (again, in your own words) any researched material used in your paper (worth 5 points).
Conventions 5 points:
Proof, spell check and edit your work before sending it (worth 5 points).
***DO NOT copy and paste material directly from a website, DO NOT leave any links to other websites in your paper, and DO NOT plagiarize or cheat in any way. Papers suspect to plagiarism or cheating will result in an automatic ZERO with no chance of corrections.
Pacing: complete this by the end of Week 3 of your enrollment date for this class.
### 01.06 DOL Refresher Course
teacher-scored 7 points possible 45 minutes
Do You Remember DOLs?
Before you start revising your own writing, let's practice some of the skills of revision by completing the following "Daily Oral Language" exercise.
This will get you in the mindset of finding mistakes that can make writing confusing. You will, in turn, need to employ these same skills when you are reading, re-reading, and revising your own writing.
Copy and paste the practice below between the rows of asterisks into a word document.
Make a list of the needed corrections and explain why those corrections are needed, then place your work into the assignment submission area after you have saved it to your hard drive. (DOL obtained from W.O.W. and D.O.L. - Wolfe County Schools wolfe.k12.ky.us)
*******************************************************************
DOL Practice *Correct seven errors in the following paragraph:
My Brother and me sing in a chorus, and every December we sing the Messiah by George F Handel. This piece was first performed in Dublin, Ireland in 1742. My brother Ray doesn’t sing, but he do play the trumpet. This year he will have the pleaseure of performing the famous trumpet solo.
List the mistakes and their needed corrections (explain) 1. 2. 3. 4. 5. 6. 7.
*******************************************************************
Grading Criteria: 1. Find all seven mistakes in the paragraph above and explain the grammar rule behind the corrections. Submit this assignment now. SAVE ALL OF YOUR WORK FROM THIS QUARTER
Pacing: complete this by the end of Week 1 of your enrollment date for this class.
### 01.06 Language "Work Out" 2
teacher-scored 12 points possible 30 minutes
Gold laurel wreath: Andreas Praefcke, public domain via Wikimedia Commons
Workout 2:
Gold
Copy and paste the material between the asterisk lines into a word processing document.
Complete the requirements for the assignment and copy and paste it back into the submission box.
************************************************************************************
Gold has been valued threw out human history. Unlike silver it doesn't tarnish and it has a bright, yellow color and shine making it atractive. Because it is nonreactive it is often found in pure form as nuggets or veins within rock. Because it is very malleable it can be beaten into gold leaf or worked into jewlry or coins. Kings and Queens of many cultures wore crowns of gold. it has been associated with marriage for centuries first as a bridle crown and now as the traditional material for wedding rings. The top prize at the Olympic games is the gold metal. Much early knowledge of chemistry came from alchemists attempts to turn lead into gold. What they did not realize? Was that gold is an element. There attempts to create it from another element were doomed to failure.
************************************************************************************
15 Corrections Possible; you need to find at least 10 for full credit
Scoring Rubric
10 Points= At least 10 mistakes have been found (1 point each)
2 Points= Changes have been "highlighted" and are easy to find in the submission
Pacing: complete this by the end of Week 1 of your enrollment date for this class.
### 01.06 Lesson 1F: Using ¿Cómo? and ¿Qué? (Spanish 1)
Lesson 1F
Using ¿Cómo? and ¿Qué?
[A copy of this lesson is available in a PDF file!! If you prefer to use this type of document, just click on the following link to complete this lesson: SpI_Lesson1F]
Lesson 1C was all about what to say when you first meet someone. After saying “Hello”, the next thing you want to do is ask questions!! In organizing the curriculum for this Spanish course, we felt that being able to form questions and understanding “interrogative words” was essential for being able to communicate in Spanish!! What we are finding in other textbooks and materials on the internet is that specific “question words” are rarely taught individually; most resources group all of these words together in one presentation. So we asked ourselves, “Selves … maybe we shouldn’t change our plans??” But over time, we have decided that focusing on forming questions in Spanish really helps us to feel comfortable in speaking and initiating conversations. It is essential!! So we are continuing with our original plan, and finding that “question words” lessons can be a great method of learning new words, reviewing words already taught, and introducing words for future lessons!! We’ll be utilizing many different web links and video clips in our “question words” lessons that will include words we have learned, are learning, and will be learning later in the course!! Exciting!! So let’s begin...
Probably the most commonly used “interrogative words” in English and Spanish are “¿cómo? – what/how?” and “¿qué? – what/how?”. Notice that both of these words appear to have the same English translation, but they are not interchangeable!! Depending on the situation and the information needed, only one of these question words is commonly used! Actually, you have already seen both of these words in asking some basic questions in the dialogues from lesson 1C!!
Question word “¿Cómo?”: Usually the question word “¿Cómo?" is translated into English as “How?”. It is used to ask about something/someone and also to question the degree or intensity of something.
Question word “¿Qué?”: Usually the question word “¿Qué?" is translated into English as “What?”. It is used to ask for a definition or an explanation.
Here are a few basic rules for asking questions in Spanish:
The following links are so easy to use and have lots of great information on proper greetings and introductory questions. I couldn’t seem to get the audio to work but maybe you can!! The information is a great review!!
This is a great site that looks at Spanish words used to translate the English word “What?” This web page focuses on ¿Cómo? and ¿Qué? ... which is wonderful for helping to learn this lesson, but also introduces the question word ¿Cuál? ... getting us excited about words we will learn in later lessons!!
Once again, our wonderful “Professor Jason”!! This is a great discussion of “question words”. His presentation includes not only the interrogative words in this lesson, ¿cómo? and ¿qué?, but covers many of the other important question words!! The first few minutes should be familiar to you as he discusses the concepts of this lesson, and a fabulous introduction of many of the other question words we will be learning!! So just sit back and enjoy your own private tutor session with “Professor Jason”!!
Summary of Lesson: The two interrogative/question words, ¿cómo? and ¿qué?, are two of the most commonly used Spanish words in forming sentences that ask for information. Now there are all kinds of things you should be able to learn about your new Spanish-speaking friend!!
Practice Exercises: There are quite a few activities with question words, but once again, they include lots of words that we haven’t learned yet!! These activities include practicing question words along with introductory greetings!! When finished with each activity, close the window by clicking on the “X” button at the top, right corner of your internet browser to return to this page and do another activity!
Exercises on Question Words! http://www.babelnation.com/spanish/courses/01_01_exerca.html
More Exercises! http://www.babelnation.com/spanish/courses/01_01_3.html
Great Practice on Question Words! http://www.babelnation.com/spanish/courses/01_01_4.html
### 01.06 Mathematical modeling (Calculus)
HC: Relationships among variables http://www.hippocampus.org/course_locator?course=Algebra%20I...
HC: The scatterplot http://www.hippocampus.org/course_locator?course=Statistics%...
HC: Linear functions review http://www.hippocampus.org/course_locator?course=Statistics%...
HC: The regression line http://www.hippocampus.org/course_locator?course=Statistics%...
### 01.06 More Interactions: Design an Experiment (Earth Systems)
teacher-scored 10 points possible 180 minutes
If you need to review the basics about how to design an experiment, click on the "Experiment Guidelines" document above.
Materials:
· Whatever you decide
Assignment:
Click on the "Rubric Experiment" document above to find out how your experiment will be scored.
1. Determine what you want to find out when you do your experiment. Write down the QUESTION that you are trying to answer. If you are having trouble coming up with a question, look outside. Make a list of 10 biotic factors and 10 abiotic factors. Then think of ways that the biotic things interact with the abiotic things. Think of experiments you could do to test how the factors interact.
2. Predict what you think the outcome of your experiment will be. (Hypothesis)
3. Design an experiment to test your prediction. Remember to include a control. Be very specific. Tell me exactly what you plan to do. Tell me how much of everything you plan on using. Tell me how long you plan on running the experiment and how often you will check it. Tell me how you will record your data. I want details!!!
4. STOP!!! Submit your experimental design to me via email before going any further. I promise to give you feedback on your design within three days. If the design is scientifically sound, you may go ahead and conduct your experiment. If it has flaws, we will work together until you have designed a valid, reliable experiment---then you may go ahead and conduct your experiment.
5. AFTER you have received my go-ahead, conduct your experiment. Be sure to keep detailed lab notes. Your lab notes should contain a record of everything you did as well as all the data you collected. Each entry on your lab notes should be dated. (Month/day/year)
6. Follow the directions below to submit your assignment.
ANALYSIS
2. Send me your lab notes. I want to see the observations that you recorded. Do not simply send me a summary of your results. I want to see a record of your observations.
3. Based on your observations, write a conclusion. What does your data tell you? What did you learn from your experimental results?
4. What kind of relationships did you find between biotic and abiotic factors?
5. Do your findings support your hypothesis? Why or why not?
6. If you were to do this again, what would you change? Why?
7. What additional experiments could be performed?
Please send me the information requested in the analysis questions above.
GOOD LUCK AND HAVE FUN!
Pacing: complete this by the end of Week 5 of your enrollment date for this class.
### 01.06 Natural Selection (Biology)
TO DO
Read: Chapter 6 Natural Selection in the EHS Biology Quarter 1 - Biological Diversity textbook.
Explore: The URLs found under the heading 01.06 Darwin's Finches (Biology).
Complete: Once you have read the chapter and explored the URLs, complete the following activities:
01.06.01 Darwin's Finches - Assignment
01.06.02 Virtual Peppered Moth - Lab
01.06.03 Natural Selection - Quiz
01.06.04 Lesson Check
### 01.06 Natural Selection (Biology)
Click on this link and listen to information about the scientists now studying on the Galapagos Islands and how the finches have continued to evolve. You must have iTunes and Quicktime installed on your computer to access this.
### 01.06 Nonverbal Communication: Differences in Males/Females Behavior
Women's and Men's Nonverbal Behavior

Listeners know that nonverbal use of space, posture, movement, touch, eye contact and facial expression can also communicate power or powerlessness, dominance or submission. As we recall the elements of nonverbal communication, we can consider the ways women and men use each, and which uses are the result of culture.

Space can indicate power. In our culture, women and people of lower status take up less space than men and people of higher status. Women are taught to keep the knees together, cross the legs at the ankle or knee, keep the elbows near the body and hold belongings on the lap. Men, on the other hand, habitually take up space by sprawling and spreading out their belongings.

Height can also show power. The person who stands over the one who is seated communicates power. Men are generally taller than women, and thus appear to have more power. (Tall women, however, have not learned to use their height in a powerful way.)

Smiling can show happiness, appeasement or submission. Dominant members of a hierarchy smile less than submissive members. Perceptive listeners are aware that women smile more frequently and for longer duration than men, but this does not signal happiness--it is a communication style.

Eye contact is also used more often and for longer duration by women when they listen. They avert their eyes more often when looked at. Listeners must note the context in which eye contact is experienced. While eye contact may signal listening or giving respect, prolonged eye contact can deliver a threat or a sexual invitation.

Nodding (and sometimes saying "yes" while nodding) is often employed by women listeners to mean, "I am listening." Men often presume this means agreement, because to them, silence means, "I am listening and agreeing."

As interesting as it is to speculate on what women and men signal by certain nonverbal cues, we must take care in interpreting them.
"There is much potential for much misunderstanding in cross-sex communication exchanges. Both women and men listeners need to be able to identify very precisely those behaviors which seem intrusive or inappropriate."
### 01.06 Nonverbal Communication: Differences in Males/Females Behavior- Gender Differences
teacher-scored 15 points possible 30 minutes
Write a short but concise summary of how men and women may be different in how and what they communicate nonverbally, based on the information you have read.
Give two examples of how these differences may lead to misunderstandings in the workplace.
### 01.06 Parts of Speech review quiz (English 9)
computer-scored 15 points possible 10 minutes
You may need to review the information about parts of speech (see links above) before you take this quiz. If you are not confident with this material, you will have a very difficult time learning about phrases and clauses.
Go to Module 3 on your main class page to take this quiz. You may take the quiz multiple times, but you must score at least 67% to pass.
Pacing: complete this by the end of Week 1 of your enrollment date for this class.
### 01.06 Self-Employment & Entrepreneurs (Financial Literacy)
Identify the risks and rewards of entrepreneurship and self-employment.
Many artisans and artists are self-employed: Image from Wikimedia Commons, Joe Mabel, Attribution-Share Alike 3.0 Unported BACKGROUND
Another way to earn money besides working for others is to be an “entrepreneur”--to be your own boss and own your own business. This can be full-time or part-time. Entrepreneurs cannot set any wage they want. They must make sufficient profit to pay their own wages and others as well. Nonetheless, many entrepreneurs earn far more than if they worked for someone else. But some make NO profit and incur great debt. So owning a business has risks AND rewards.

Anyone can be an entrepreneur, including students, parents, Tiff or Cameron, or someone that works full-time in a regular job. Many young people have a business on the side to bring in extra cash. It often begins when a person identifies some skill they have that fills other people's needs. Entrepreneurs have special skills, interests, or experience in providing a service or product others are willing to pay for.
VISIT URL #1 shown below to read the introductory paragraph and take the 7-question “Entrepreneur quiz.” Then click “submit” to read the evaluation of your answers. Then exit the web page and proceed to the URL #2 activity.
VISIT URL #2 to read the article, and then take the short online questionnaire to compare your qualities with successful entrepreneurs. Then exit the web page to complete the assignment quiz. This lesson has no assignment, only a quiz. Please proceed to the “Assignments, Quizzes, and Tests” section for 1.06.
### 01.06 Self-Employment & Entrepreneurs links (Financial Literacy)
URL #1: entrepreneur quiz http://www.bankrate.com/brm/news/biz/soho/20010710a.asp
URL #2: character traits questionnaire http://www.entrepreneur.com/personalityquiz
### 01.06 Self-Employment & Entrepreneurs Quiz (Financial Literacy)
computer-scored 10 points possible 20 minutes
You will now complete a 10-question online quiz. You MUST score at least 8 to receive credit for the assignment but don't worry if you don't get at least 8 the first time since you can retake the quiz as many times as you need. Only your last quiz attempt counts. You may continue to increase your score if you want (7 or less = try again, 8 = B, 9 or 10 = A).
You may now take the quiz. Afterward, simply proceed to the next assignment.
Pacing: complete this by the end of Week 2 of your enrollment date for this class.
### 01.06 Solving Equations, part 4 (Math I)
Common Core Standards: N.Q.1, N.Q.2, A.SSE.1a, A.CED.1, A.REI.1, A.REI.3; Standards for Math Practice: 1-8
In this lesson we learn to solve an arbitrary linear equation.
Please read the lessons below. All sections that start with the number 01.06 are part of this lesson. If you would like to print out the material, the same information is contained in the PDF attached above (Unit01Lesson6.pdf).
### 01.06 Solving Multi-step Inequalities - Extra Video (Math Level 1)
NROC Developmental Math Multi-Step Inequalities http://www.montereyinstitute.org/courses/DevelopmentalMath/U...
I highly recommend that you click on the link above before continuing.
You can watch the video that is under the PRESENTATION tab or work through the entire lesson.
Guided Practice
After watching the video try these problems. The worked solutions follow.
Solve the following and graph each solution:
Example 1: 5 + 4b < 21
Example 2: 3x + 4(6 - x) < 2
Example 3: 8z - 6 < 3z + 12
Example 4: 5(-3 + d) ≤ 3(3d - 2)
Solve the following and graph each solution:
Example 1: 5 + 4b < 21
Step 1: Subtract 5 from both sides:
4b < 16
Step 2: Divide both sides by 4:
b < 4
Example 2: 3x + 4(6 - x) < 2
Step 1: Clear the parentheses:
3x + 24 - 4x < 2
Step 2: Collect like terms:
-x + 24 < 2
Subtract 24 from both sides:
-x < -22
Step 3: Multiply each side by -1 and reverse the inequality sign:

x > 22
Example 3: 8z - 6 < 3z + 12
Step 1: Add 6 to both sides:
8z < 3z + 18
Step 2: Subtract 3z from both sides:
5z < 18
Step 3: Divide both sides by 5:
z < 18/5
Example 4: 5(-3 + d) ≤ 3(3d - 2)
Step 1: Clear the parentheses:
-15 + 5d ≤ 9d - 6
Step 2: Add 15 to both sides:
5d ≤ 9d + 9
Step 3: Subtract 9d from both sides:
-4d ≤ 9
Step 4: Divide both sides by -4:
d ≥ -9/4
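The worked examples above can be double-checked numerically: test a value just past each boundary and confirm it satisfies the original inequality. This is a small Python sketch of my own, not part of the NROC lesson:

```python
# Check each solution by sampling a point just past the boundary value.
def check(inequality, boundary, solution_holds_below):
    """Return True if the inequality holds at a point just below (or above)
    the boundary, on the side where the solution is claimed to hold."""
    eps = 0.01
    point = boundary - eps if solution_holds_below else boundary + eps
    return inequality(point)

# Example 1: 5 + 4b < 21        =>  b < 4
assert check(lambda b: 5 + 4*b < 21, 4, solution_holds_below=True)

# Example 2: 3x + 4(6 - x) < 2  =>  x > 22 (sign flips when multiplying by -1)
assert check(lambda x: 3*x + 4*(6 - x) < 2, 22, solution_holds_below=False)

# Example 3: 8z - 6 < 3z + 12   =>  z < 18/5
assert check(lambda z: 8*z - 6 < 3*z + 12, 18/5, solution_holds_below=True)

# Example 4: 5(-3 + d) <= 3(3d - 2)  =>  d >= -9/4 (sign flips dividing by -4)
assert check(lambda d: 5*(-3 + d) <= 3*(3*d - 2), -9/4, solution_holds_below=False)

print("all four solutions verified")
```

Sampling near the boundary (rather than far from it) is what catches a forgotten sign flip: if Example 2's answer were written as x < 22, the test point 21.99 would fail the original inequality.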
### 01.06 The Road to the American Revolution. (US History)
World Book: The Movement for Independence http://www.worldbookonline.com/student/article?id=ar576000&s...
PBS: Liberty and the Road to Revolution http://www.pbs.org/ktca/liberty/road.html
World Book briefly goes over the development of the constitution.
When using the World Book site, please pay attention to the following instructions:
1) Go to pioneer.uen.org in a new window or tab in your internet application.
3) Right click the World Book link and ask it to open in a new window or tab.
Once you have done this you should have access to the World Book Encyclopedia US History information.
The PBS site is a fun resource that can help you to test your knowledge as well as introduce you to important people and events leading to the American Revolutionary War.
### 01.06 The Road to the American Revolutionary War (U.S. History)
Investigate the events leading to the Revolutionary War.
In this section you will:
President George Washington, first President of the United States: Gilbert Stuart, 1797, Public domain, via Wikimedia Commons
• Identify the causes of tension between the colonists and the British Government that led to the American Revolutionary War.
Lesson
In history there often seems to be a series of unfortunate events that lead to a huge, explosive event. The Revolutionary War is no different. Let’s take a look at the unfortunate events:
1. Colonization: this is really a more indirect event. As many of the colonies were settled, there wasn’t really a long-term plan for how they were going to interact with the British government. This lack of planning is the unfortunate event. The colonists had years to grow, and as they expanded they became more independent (the British government basically let them do their thing as long as money was coming in). This independence led to a desire for more rights. You’ve read it before, but these years of independence are known in history as salutary neglect.
2. Enlightenment: In the 1700s Europe was being enlightened. Although the Enlightenment had its roots in France, the ideas spread like a worldwide wildfire. The scientific discoveries of Copernicus, Galileo, and Newton made people begin to look at the world around them for change. Then the ideas of men like John Locke, Montesquieu, Rousseau, Thomas Hobbes, and even Benjamin Franklin and Thomas Jefferson inspired the colonists, as well as many others throughout the world, to look around them and question their situation and government. You’ll read more about their ideas later, as we discuss the Declaration of Independence and the Constitution.
3. French and Indian War: Also known (outside of the US) as the Seven Years’ War. This war became a huge expense for the British as they battled for land both in Europe and in the Americas. The French didn’t settle Canada as rapidly as the British did the American Colonies, but they did treat the Natives more hospitably than the British colonists did. The major area of dispute in the colonies was the Ohio River valley. When the French built Fort Duquesne, the Virginians, who had promised that land to others, sent a militia to kick the French out. The French defeated the colonists at the first battle, led by then 22-year-old George Washington. The British sent Edward Braddock to assist the colonists; he and Washington tried again but were ambushed by French soldiers and Native American combatants, and the British fled. (An interesting side note: Washington, startled by the weakness of the British, attempted to rally the troops and defeat the French. In the process, two horses were shot from beneath him and four bullets blew through his coat. He obviously survived.) King George II sent William Pitt to lead the armies. Pitt began to win some battles, and because of that the British were able to gain some Native American allies, the Iroquois. After a series of victories, including the capture of Quebec, the British were victorious, and at the Treaty of Paris in 1763 they gained most of Canada along with all the land east of the Mississippi. This costly war made the British look at the role the colonies played and decide that enough people were benefitting from the protection and wealth of Britain that the colonies needed to pay their dues.
4. Proclamation of 1763: A document signed by the British representatives to the American Colonies, with Pontiac, the Native American leader. This document said that the colonists would not cross the Appalachian Mountains. This angered the colonists because they wanted to expand and had planned to do so across the mountains. It said to them that there was a disconnect between them and the British government.
5. George Grenville: King George III’s prime minister, who, in his attempt to repay Britain’s debts, began to crack down on trade, first in Massachusetts and then in all of the colonies. His ideas and actions, which included searching ships suspected to contain smuggled goods, angered the colonists.
6. Sugar Act: 1764, prompted by Grenville. The act cut the duty (tax) on molasses made outside of the colonies, hoping the colonists would pay the tax instead of smuggling. (They were smuggling because they weren’t allowed to buy goods from other countries unless the goods had gone through Britain.) The act also added duties on items that had not previously been taxed. Lastly, those who violated the act would be tried in a British court instead of a colonial court. It is after this act was passed that the term “No taxation without representation” is first heard.
7. Stamp Act: 1765. The tax was placed on many printed documents; the verification that the tax was paid came in the form of a stamp on the document, hence “Stamp Act”. The Sons of Liberty was formed in reaction and protest to this law. Many merchants boycotted until the act was repealed; they were successful in 1766.
8. Townshend Acts: 1767, taxed goods imported from Britain, including lead, glass, paper, paint and tea. Again the colonists boycotted, hoping to get the acts repealed. The British were just looking to make money, while the colonists were looking to have a say in what was going on.
9. Boston Massacre: Tensions between the colonists and the British soldiers grew and came to a head outside the Boston Customs House. Colonists upset by the Townshend Acts began to taunt the soldiers and the soldiers opened fire and killed five colonists.
10. Boston Tea Party: The event that is both famous and infamous. It was a reaction to the Tea Act, which was a result of the repeal of the Townshend Acts and an attempt to save the British East India Company. The act made it so that only the East India Company could sell tea to the colonists, but at a cheaper price. On December 16, 1773, the “Indians” of the tea party dumped 18,000 pounds of tea into the harbor.
11. Intolerable Acts: The King was furious and asked Parliament to do something. These acts shut down Boston Harbor, required the quartering of soldiers, and appointed a general as the governor of Massachusetts. General Thomas Gage put Boston under martial law.
12. First Continental Congress: In reaction to the Intolerable Acts, 56 delegates met in Philadelphia and drew up a declaration that said they had the right to run their own affairs, and that if the British used force against them they would fight back.
Our series of unfortunate events exploded with the first battle of the Revolution, Lexington and Concord, which marks the beginning of the Revolutionary War.
### 01.06 Two-Variable Statistics - Links (PreCalc)
Scatter Plot Tutorial, look through concept, interpretations, example, and do-it-yourself. http://mtsu32.mtsu.edu:11308/regression/level1/scatplot/inde...
Simple Linear Regression, look through concept and example. http://mtsu32.mtsu.edu:11308/regression/level2/simplinreg/in...
Practice Graphing Data http://math.hws.edu/javamath/ryan/Regression.html
Estimating a Regression Line http://www.ruf.rice.edu/~lane/stat_sim/reg_by_eye/index.html
Find an Equation http://www.purplemath.com/modules/strtlneq.htm
Measuring Error http://www.nctm.org/standards/content.aspx?id=26787
This site can be used to help you plot data http://nlvm.usu.edu/en/nav/frames_asid_144_g_3_t_5.html
The bottom 5 web-sites are online interactives that allow you to practice the concepts.
• In the "Practice Graphing Data" website, practice graphing data on a scatter plot by typing values into columns a and b. Notice the regression line is drawn for you. The slope and y-intercept are given to you, so you can use the slope-intercept formula to make an equation for the regression line. Please note that the standard error is not a correlation coefficient.
• In the "Estimating a Regression Line" web-site, practice estimating a Regression Line by drawing a line through the data. Click the Draw regression line box to see how well you did. You can also try to guess the correlation coefficient r and click on the "Show r" button to see the answer.
• You can use the "Find an Equation" web-site if you don’t remember how to find an equation from two points. You can use this tutorial to review.
• Use the "Measuring Error" web-site to better understand how error is measured. This interactive allows you to move five points and a best fit line. Then you can see how the error is measured with three different methods. (If you can’t see the final answer, you could use a calculator to find out what it is.)
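The interactives above let you fit a regression line by eye; the same line can be computed directly from the least-squares formulas. This is a short Python sketch of my own (not from the linked sites) that finds the slope and y-intercept and measures the sum of squared errors, one of the error measures the "Measuring Error" interactive demonstrates:

```python
# Least-squares regression: slope m and intercept b of the line y = m*x + b
# that minimizes the sum of squared vertical errors.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - m * mean_x
    return m, b

def sse(xs, ys, m, b):
    """Sum of squared vertical errors between the data and the line."""
    return sum((y - (m * x + b)) ** 2 for x, y in zip(xs, ys))

# Made-up sample data for illustration.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
m, b = fit_line(xs, ys)
print(f"y = {m:.2f}x + {b:.2f}, SSE = {sse(xs, ys, m, b):.3f}")
```

Any other line through the same data will have a larger SSE; that minimization property is what makes this line "the" regression line.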
### 01.06 Unit 01 Test (Participation Skills and Techniques)
computer-scored 10 points possible 30 minutes
Remember you may RETAKE the test as many times as you like, but you must score at least 60% (60 out of 100).
This assignment should be completed by WEEK 3 of this class
Pacing: complete this by the end of Week 3 of your enrollment date for this class.
### 01.06 Unit 1 Review (Participation Skills and Techniques)
After you have read the preceding lessons and links, take the test. You must score at least 60%, but you may take the test as many times as necessary to get a good score.
### 01.06.00 Quarter One Review Quiz (Algebra II - New)
Before taking the review quiz, make certain that you can state each of the following without hesitation:

- I can define, compare, and recognize relations and functions.
- I can represent relations and functions with graphs, tables, and sets of ordered pairs.
- I can define domain and range.
- I can identify the domain and range for relations described with words, symbols, tables, sets of ordered pairs, and graphs.
- I can find the absolute value of numbers and expressions.
- I can represent absolute values with numerical statements and on number lines.
- I can find all possible solutions for absolute value equations involving variables and variable terms.
- I can solve absolute value inequalities in one variable using the Properties of Inequality.
- I can represent absolute value inequalities on a number line.
- I can identify the domain and range of absolute value functions.
- I can graph absolute value functions and the transformations of their parent functions.
- I can solve a system of linear equations by graphing.
- I can determine whether a system of linear equations is consistent or inconsistent.
- I can determine whether a system of linear equations is dependent or independent.
- I can determine whether an ordered pair is a solution of a system of equations.
- I can solve application problems by graphing a system of equations.
- I can solve a system of linear inequalities by graphing.
- I can determine whether an ordered pair is a solution of a system of inequalities.
- I can solve application problems by graphing a system of inequalities.
- I can solve a system of equations using the substitution method.
- I can recognize systems of equations that have no solution or an infinite number of solutions.
- I can solve application problems using the substitution method.
- I can solve a system of equations using the elimination method when no multiplication is necessary to eliminate a variable.
- I can solve a system of equations using the elimination method when multiplication is necessary to eliminate a variable.
- I can solve application problems using the elimination method.
- I can graph the equation y = x² by plotting points.
- I can define the terms "parabola", "vertex", and "axis of symmetry."
- I can use the symmetry of parabolas to answer questions about points on the parabolas.
- I recognize y = ax² + bx + c as the standard form of the equation of a parabola.
- I understand how a, b, and c affect the parabola.
- I can find factors and x-intercepts of parabolas.
- I can change a parabolic equation from factored form to standard form and vice versa.
- I know the vertex form of a parabolic equation, f(x) = a(x – h)² + k.
- I can change a parabolic equation from vertex form to standard form and factored form.
- I can identify the domain and range of quadratic functions.
- I can graph transformations of the parent quadratic function.
- I can solve equations of the form ax² - k = 0 and other variations.
- I can solve equations of the form a(x + h)² = k and other variations.
- I can solve equations of the form ax² + bx = 0 and other variations.
- I can solve equations of the form ax² + bx + c = 0 and other variations.
- I can find the vertex and x-intercepts of an equation of the form y = ax² + bx + c.
- I can factor perfect squares and solve equations.
- I can complete the square for a quadratic where the coefficient of the leading term is 1.
- I can complete the square for a quadratic where the coefficient of the leading term is not 1.
- I can solve quadratic equations by completing the square.
- I can convert from standard form to vertex form by completing the square.
- I understand the derivation of the quadratic formula.
- I know the difference between exact and approximate solutions.
- I can rewrite an equation in order to use the quadratic formula to solve it.
- I can find the discriminant and determine the number of solutions.
- I can find the discriminant and relate it to the graph of a quadratic equation.
- I can find the discriminant and determine the number of x-intercepts.
- I can use the discriminant to determine whether a quadratic equation can be written in factored form.
- I can use quadratic formulas to describe projectile motion.
- I can solve for the maximum height and length of a path.
- Given the perimeter and area of rectangles and triangles, I can find the length of sides.
- I can use quadratic equations to find the maximum profit or minimum cost.

Otherwise, go back and review as needed!
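Several of the quadratic-equation skills listed above (computing the discriminant, counting real solutions, and applying the quadratic formula) can be sketched in a few lines of Python; the helper name `solve_quadratic` is my own, not part of the course materials:

```python
import math

def solve_quadratic(a, b, c):
    """Solve ax^2 + bx + c = 0 over the reals using the quadratic formula."""
    d = b * b - 4 * a * c          # the discriminant
    if d < 0:
        return []                  # negative discriminant: no real solutions
    if d == 0:
        return [-b / (2 * a)]      # zero discriminant: one repeated solution
    root = math.sqrt(d)            # positive discriminant: two real solutions
    return [(-b - root) / (2 * a), (-b + root) / (2 * a)]

# x^2 - 5x + 6 = 0 factors as (x - 2)(x - 3), so the roots are 2 and 3.
print(solve_quadratic(1, -5, 6))   # → [2.0, 3.0]
```

The sign of the discriminant b² - 4ac is exactly what the self-check items about the "number of solutions" and "number of x-intercepts" refer to.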
### 01.06.01
teacher-scored 16 points possible 30 minutes
Assessment Rubric:
- Content: Main idea supported with appropriate detail and use of dialogue. (4 points)
- Support: Supporting paragraphs include detail which is specific and directly supports the thesis. (4 points)
- Clarity: Writing is clear, focused and well organized. (4 points)
- Conventions: No significant errors in grammar, usage, punctuation or spelling. (4 points)
### 01.06.01 Solving Multi-Step Inequalities - Worksheet (Math Level 1)
teacher-scored 54 points possible 30 minutes
Activity for this lesson.
Pacing: complete this by the end of Week 4 of your enrollment date for this class.
### 01.06.01 Darwin's Finches (Biology)
teacher-scored 10 points possible 120 minutes
Summary:
The theory of evolution consists of the following four major points:
1. Variation exists within the genes of every species (the result of random mutation).
2. In a particular environment, some individuals of a species are better suited for survival, so leave more offspring (natural selection).
3. Over time, change within species leads to the replacement of old species by new species as less successful species become extinct.
4. There is clear evidence from fossils and many other sources that the species now on Earth have evolved (descended) from ancestral forms that are extinct (evolution).
Instructional Procedures:
Copy all information below between the lines of asterisks, including the lesson number, revision date and all questions into a word document.
***************************************************************
ASSIGNMENT 01.06.01 - REVISION DATE: 7/29/14 (Copy everything between the asterisks.)
1. Why is diversity (differences) among the finches important for their survival?
2. How did the different islands play a role in the diversity of the finches?
3. Why do you think scientists believe that the finches came from one species?
4. When would different beaks be important for the finches' survival?
5. What is your opinion about the variation in the sizes of the birds? How could one species be so much smaller or larger than another if they all came from one original species?
6. After reading the information below, write a paragraph explaining how diversity helps us to understand evolution.
The theory of evolution consists of the following four major points:
1. Variation exists within the genes of every species (the result of random mutation).
2. In a particular environment, some individuals of a species are better suited for survival, so leave more offspring (natural selection).
3. Over time, change within species leads to the replacement of old species by new species as less successful species become extinct.
4. There is clear evidence from fossils and many other sources that the species now on Earth have evolved (descended) from ancestral forms that are extinct (evolution).
***************************************************************
Pacing: complete this by the end of Week 6 of your enrollment date for this class.
### 01.06.01 Five Themes/Maps/Continents Quiz
computer-scored 20 points possible 30 minutes
Please pay attention to the questions because they will help you on the final test. This quiz is open notes, etc., if you need the help.
### 01.06.01 Lesson 1F: Using ¿Cómo? and ¿Qué? (Spanish I)
computer-scored 15 points possible 20 minutes
**Assignment 01.06.01: Using ¿Cómo? and ¿Qué?**: You know the drill: find the button and click on it!
Businesses and organizations will often analyze data to find cause-and-effect relationships. For example, a business collecting sales data may analyze buying trends (candy sales increase during the month of October); this information can be used to adjust their supply orders or advertised sales. Likewise, an organization studying wild animal populations will collect data to look for cause-and-effect relationships.

A scatter plot can be used to see if a pattern exists for a given set of data. Data does not always show a pattern: if you were to gather data from people leaving the grocery store, their height (in inches) would not correlate with how much money they spent. When points are randomly scattered about, there is no correlation between the items being graphed. Sometimes data will have a perfect correlation: all points that solve a linear equation have a perfect correlation. If a company has a daily cost equation of C = 20X + 55, the cost of producing X items is exactly C.

When a correlation pattern exists, the correlation can be described as strong (if the values are fairly tight in following the pattern) or weak (if the values are more widely scattered). A correlation can also be described as positive (if the values increase from left to right) or negative (if the values decrease from left to right). The stronger the correlation, the better it will predict results.

I. Linear regression is a method for finding the best-fit line for a given set of data. While formulas exist for creating a best-fit line, you can simply estimate where to draw a line with a slope that would best fit the middle of all the points. While a correlation describes how tightly a set of points follows a pattern, a correlation coefficient measures how closely the points follow the regression line. This coefficient is usually denoted by the letter r. If r = 0 there is no correlation; r = 1 shows a perfect positive correlation; r = -1 shows a perfect negative correlation. The closer the coefficient is to 1 or -1, the stronger the correlation.

II. The process of using linear regression to analyze data:

1. Collect the data: (a) identify the data to be analyzed, (b) determine how to collect the data, (c) create a table to record the data in.
2. Make a scatter plot from the data: (a) identify the dependent and independent variables, (b) determine how to mark the X and Y axes, (c) graph your pairs of points.
3. Look for a correlation pattern: (a) determine if the points follow a general pattern, (b) determine if the values generally increase or decrease from left to right, (c) determine if the values fit tightly around a line or are widely scattered.
4. Fit the data with a regression line: (a) estimate where a best-fit line could be drawn to go through the middle of all the points, (b) see if this line can be drawn through two points for which the X and Y values can easily be determined.
5. Find the equation for the line: (a) use two points from the best-fit line to find the slope of the line, (b) substitute the slope and one point from the line into the point-slope formula, (c) simplify the equation by solving for y.
6. Calculate the correlation coefficient: use a graphing calculator, or graph your data and get a best-fit line at http://illuminations.nctm.org/ActivityDetail.aspx?ID=146
7. Use the equation to predict outcomes: (a) select a value for which you would like to make a prediction, (b) substitute the value into the equation, (c) solve for the unknown.

III. The following is a demonstration of how to use these steps.

GAS PRICE: Use the average price of gas from years past to estimate the price of gas in the future.

Collect the data: the data to be analyzed is the average price of a gallon of gas from the years 1973 to 2003, collected online from the internet and recorded in a table. (Table: The Average Price of a Gallon of Gas from 1973 to 2003.)

Make a scatter plot from the data: the independent X variable is the year; the dependent Y variable is the average gas price. The X values go from 1973 to 2003; the Y values go from 0.39 to 1.59. The X axis will go from 1970 to 2005 and increment by 7; the Y axis will go from 0 to 2.0 and increment by 0.5. Graph your pairs of points.

Look for a correlation pattern: Do the points follow a general pattern? Do the points increase or decrease from left to right? Are the points fairly tight in their grouping or are they scattered?

Fit the data with a regression line: estimate where a best-fit line could be drawn to go through the middle of all the points, and see if this line can be drawn through two points for which the X and Y values can easily be determined.

Find the equation for the line, using the two points (1991, 1.14) and (1971, 0.36) from the best-fit line: find the slope using the two points, use the slope and one of the points in the point-slope equation, and simplify. This gives approximately Y = 0.04X – 78.5.

Calculate the correlation coefficient: this data was graphed and calculated at http://illuminations.nctm.org/ActivityDetail.aspx?ID=146

Use the equation to predict outcomes: What should the average gas price be for 2008? Y = 0.04(2008) – 78.5 = 80.32 – 78.5 = 1.82.

IV. Check the logic of the answer. You should always think about the answers you get from a calculation to see if they make sense. The answer is Y = 1.82, but the average gas price for 2008 is not $1.82.
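The slope-and-prediction arithmetic above can be reproduced in a few lines of Python (the helper name `line_through` is my own, not part of the lesson):

```python
def line_through(p1, p2):
    """Return slope m and intercept b of the line through two (x, y) points."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)
    b = y1 - m * x1
    return m, b

# Two points read off the best-fit line in the example above.
m, b = line_through((1971, 0.36), (1991, 1.14))
print(round(m, 3))             # slope ≈ 0.039 (the lesson rounds this to 0.04)
print(round(m * 2008 + b, 2))  # predicted 2008 price ≈ 1.80
```

Using the unrounded slope gives a 2008 prediction of about $1.80 rather than $1.82; the small difference comes only from rounding the slope to 0.04.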
What went wrong? The sample was not a good representation of reality. In this case, we didn't use enough data. As gas prices have risen dramatically lately, our sample was not large enough; we should include gas prices through 2007. If we estimate the average gas price of 2005 to be $3.00 and the average gas price of 2007 to be $3.50, the data actually curves, and we will have to fit the data with a different regression. To use an equation to predict new outcomes, the regression sample needs to be an accurate representation of the actual data set!

### 01.06.01 Planning a Birthday Party, Revisited (Math I)

Do you remember Ashley? I hope so. She wanted to spend her birthday at the water park and needed to determine how much it would cost. We noted that it is unlikely that Ashley's mother will pay for an unlimited number of friends. It is more likely that she will be willing to pay a certain amount, and Ashley will then need to determine how many people she can invite with that limit. In lesson 3 we used a data table and a graph to solve her problem. These worked well enough, but you may not wish to graph every possible data point to find the specific solution you need. It would be great if we could just solve the problem by manipulating the terms in the same way as we did with the equations in lessons 4 and 5. Guess what? We can!

When solving a problem that involves both addition/subtraction and multiplication/division, we follow the same rules as we do for solving a problem that only involves one or the other. We just need to follow both sets of rules. We also must apply the order of operations, and other properties of arithmetic, as we do when evaluating or simplifying an expression. Consider Ashley's problem. We had previously determined that it could be described by the inequality

c ≥ ($17 per person)(p – 5 people) + $100, (eq. 1)

where c is the total cost Ashley's mother is willing to pay, and p is the total number of people.
We had left it in this form, rather than simplifying. When written in this form, the parts of the equation more clearly represented the original problem. However, to solve this, the first thing we want to do is simplify the inequality. Start by distributing the factor ($17 per person) through the factor (p – 5 people),

c ≥ ($17 per person) p – ($17 per person)(5 people) + $100 (eq. 2)
c ≥ ($17 per person) p – $85 + $100. (eq. 2a)

Next add like terms, in particular the second and third terms,

c ≥ ($17 per person) p + $15. (eq. 3)

That is simplified enough. Now we can begin solving for p. In this equation, p is multiplied by the factor ($17 per person). Also, the term that includes p is added to the term ($15). To remove the factor ($17 per person), we need to multiply by the multiplicative inverse. The multiplicative inverse is the reciprocal of ($17 per person). To remove the term ($15) we need to add the additive inverse. The additive inverse is -$15. The other thing we need to remember is that we need to do to the right-hand side of the equation whatever we do to the left-hand side. Now, which do you want to do first? It seems less complicated to start by adding the additive inverse. In other words, add -$15 to both sides,

c + (-$15) ≥ ($17 per person) p + $15 + (-$15) (eq. 4)
c – $15 ≥ ($17 per person) p + $15 – $15 (eq. 4a)
c – $15 ≥ ($17 per person) p. (eq. 4b)

Next, multiply by the multiplicative inverse. This means we want to multiply by 1/($17 per person),

(c – $15) × 1/($17 per person) ≥ ($17 per person) p × 1/($17 per person) (eq. 5)
(c – $15)/($17 per person) ≥ p. (eq. 5a)

And finally, reverse the order of the equation,

p ≤ (c – $15)/($17 per person). (eq. 5b)

We could separate this into 2 terms, but since 17 is not a “nice” denominator, that doesn't really improve the “look” of the equation, so let's just keep equation 5b. Notice the direction of the inequality reversed when we reordered the equation. When solving an inequality, it is important to understand the meaning of the inequality. The expression including the number of people is less than or equal to the expression involving the cost.
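A quick numeric sanity check of the solved inequality p ≤ (c – $15)/($17 per person) can be done in Python; the function name `max_guests` is invented for this sketch, not part of the lesson:

```python
import math

def max_guests(cost_limit):
    """Largest whole number of people p with 17 * p + 15 <= cost_limit,
    i.e. the solved inequality p <= (cost_limit - 15) / 17, rounded down."""
    return math.floor((cost_limit - 15) / 17)

p = max_guests(200)
print(p)  # → 10 for a $200 limit

# Cross-check against the original, unsimplified inequality (eq. 1):
assert 17 * (p - 5) + 100 <= 200 < 17 * (p + 1 - 5) + 100
```

The assertion confirms that p people fit within the limit while p + 1 people would exceed it, which is exactly what "round down" accomplishes.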
If we had multiplied by a negative number, we would also have needed to reverse the inequality. Okay, back to the inequality. First, check the units. c has units of dollars. 15 also has units of dollars. These terms are added, so they should have the same units – check. The term (c – $15)/($17 per person) has units of dollars divided by dollars per person. Remember that when we divide this is the same as multiplying by the reciprocal? This is true for units as well as for numbers and symbols. Therefore, the units are dollars over dollars, times people. This simplifies to people. The units for p are people. These expressions must also have the same units – check.

Now that we have solved for p, we can enter whatever limit Ashley's mom set. If she set a limit of $200 we would find

p ≤ (c – $15)/($17 per person) (eq. 5b)
p ≤ ($200 – $15)/($17 per person) (eq. 6)
p ≤ $185/($17 per person) (eq. 6a)
p ≤ 10.8823... people. (eq. 6b)

Of course, we cannot have 0.88 of a person, so we will round down. Therefore, Ashley can have up to 10 people at her birthday party.

### 01.06.01 Quarter One Final (Algebra II - New)

teacher-scored 10 points possible 60 minutes

### 01.06.02 - Virtual Peppered Moth - LAB (Biology)

URL 1 – Peppered Moths and Melanism: http://darwin200.christs.cam.ac.uk/pages/index.php?page_id=g...
URL 2 – Peppered Moths, Black and White (This URL is a .swf and will not work on an iPad.): http://peppermoths.weebly.com/

### 01.06.02 - Virtual Peppered Moth - Pre Lab (Biology)

teacher-scored 10 points possible 30 minutes

Summary:

Charles Darwin gathered a large collection of facts to support the theory of evolution by natural selection.
One problem that he ran into was the inability to persuade people of his theory. All of the specimens he studied had a long life cycle and evolved over hundreds of years. He needed a specimen that had a short life cycle. A great example of natural selection can be seen in the peppered moth, Biston betularia. This moth has a short life cycle, making it easier to study natural selection. Unfortunately, Darwin didn’t realize natural selection was happening among smaller specimens, like the peppered moth, which were found in his home country of England.

Instructional Procedures:

Copy all information below between the lines of asterisks, including the lesson number, revision date and all questions into a word document. Open URL 1 – Peppered Moths and Melanism and read the information found at URL 1. Use this information to answer PART I of the lab. Open and explore URL 2 – Peppered Moths, Black and White. Use the information from URL 2 to complete PART II of the pre lab.

***************************************************************

ASSIGNMENT 01.06.02 - REVISION DATE: 8/06/14 (Copy everything between the asterisks.)

The Industrial Revolution brought on great changes. There were no environmental regulations, and the factories produced tons of ash and soot. It didn't take long until there was a blanket of soot darkening homes, trees, rocks, and anything else it could land on. It was under these conditions that the first dark colored moth was discovered. Today, in some areas, 90% or more of the peppered moths are dark in color.

Instructions Part I: Open and read URL 1 – Peppered Moths and Melanism. Use the information found in the site to answer the questions.

1. What are the characteristics of the peppered moth? Why are the characteristics important when studying natural selection?

Instructions Part II: Open and explore URL 2 – Peppered Moths, Black and White. Use the sections of the URL titled “Life Cycle”, “Impact of Pollution” and “Kettlewell's Experiments”.
Answer the following questions.

1. What are the natural predators of the peppered moth?
2. What is a lichen? Why is the lichen important to the survival of the peppered moth? You may have to do additional independent research to answer this question.
3. How are the larvae of the peppered moth protected?
4. How long does a peppered moth live?
5. Who is R.S. Edleston? Why was his discovery significant?
6. What is natural selection?
7. Who is J.W. Tutt? What was his theory?
8. In 150 words or more, explain who Bernard Kettlewell is and describe his work as it deals with natural selection. You may have to do additional independent research on Kettlewell.

***************************************************************

Pacing: complete this by the end of Week 6 of your enrollment date for this class.

### 01.06.02 A Different Process (Math I)

In the previous example, we solved the addition part first, followed by the multiplication part. That is fine. However, it would also have been okay to solve the multiplication part first, followed by the addition part. Consider the following. The original simplified inequality was

c ≥ ($17 per person) p + $15. (eq. 3)

We wish to solve this inequality for p. Instead of adding -$15 as the first step, we could multiply by the multiplicative inverse, 1/($17 per person),

c × 1/($17 per person) ≥ (($17 per person) p + $15) × 1/($17 per person) (eq. 7)

Distribute the coefficient through the right-hand side,

c/($17 per person) ≥ ($17 per person) p/($17 per person) + $15/($17 per person) (eq. 7a)

Then simplify,

c/($17 per person) ≥ p + $15/($17 per person). (eq. 7b)

Now we can isolate p by adding -$15/($17 per person) to both sides,

c/($17 per person) – $15/($17 per person) ≥ p. (eq. 8)

Combine the terms on the left over the common denominator,

(c – $15)/($17 per person) ≥ p. (eq. 5a)

Reverse the order of the equation, and we again have equation 5b,

p ≤ (c – $15)/($17 per person). (eq. 5b)

Two different techniques, same answer. Hmm, didn't we say something about this before.... So, there you go. Personally, I think this was a more complicated method, and therefore more likely to result in algebra and arithmetic errors--but it is not wrong.

### 01.06.02 Quiz Check Point (Math Level 1)

computer-scored 20 points possible 20 minutes

Quiz Check Point

You are given 3 attempts at this check point quiz. You must earn at least 16 points in order to pass the quiz. You may use all of your attempts to earn the score you are happy with.

Pacing: complete this by the end of Week 4 of your enrollment date for this class.

### 01.06.02 Thoughts & feelings on death assignment (Health II)

teacher-scored 30 points possible 40 minutes

You will write your imagined obituary following the guidelines below, and then complete the questions in part II. First write the obituary in a word processing document on your computer. Copy and paste the section between the lines of asterisks below the obituary, in the same document. Complete your work, and save a copy for yourself. Then submit your work using the assignment submission window for this assignment.

Overview of assignment

Purpose: to create an 'obituary' for your future self, and consider how you feel about death

Audience: your future friends and family

TOPIC: Write your own obituary.
Focus on the things that you want to accomplish in life, as opposed to looking at the negatives of writing out your own death. You may choose all the details that people actually have no control over. For example, you could die at age 305, in space while defending the universe, if you choose. However, you must include the following information in your obituary.

1. Age and way you die
2. Accomplishments
3. Survived by (who would still be living after you die?)
4. Preceded in death by (who in your family or close associates would have died before you?)
5. Funeral arrangements

(WRITE THESE ITEMS OUT IN PARAGRAPH FORM, AS THEY WOULD APPEAR IN A LOCAL PAPER. DON'T JUST FILL OUT THE FACTS ABOVE.)

Length: at least 200 words.

*************************************************************

PART II: (10 points possible, 1 point per answer) Complete the following statements.

1. Death is
2. I want to die at
3. I don’t want to live past
4. I would like to have at my bedside when I die
5. When I die, I will be proud that when I was living I
6. My greatest fear about death is
7. When I die, I will be glad that when I was living I didn’t
8. If I were to die today, my biggest regret would be
9. When I die, I will be glad to get away from
10. When I die, I want people to say

***************************************************************

Overview of grading: Part I: Write your own obituary including the five requirements listed above (2 points for each of numbers 1-5, and 10 points for spell-checking, editing and composing your obituary in paragraph form). 20 points. Part II: Complete the statements (1 point per completed statement). 10 points.

Pacing: complete this by the end of Week 3 of your enrollment date for this class.
### 01.06.03 - Virtual Peppered Moth - LAB (Biology) teacher-scored 20 points possible 120 minutes Instructional Procedures: Copy all information below between the lines of asterisks, including the lesson number, revision date and all questions into a word document. Open and explore URL 2 – Peppered Moths, Black and White. Use the information from URL 2 to complete the lab. *************************************************************** ASSIGNMENT 01.06.03 - REVISION DATE: 8/06/14 (Copy everything between the asterisks.) Instructions: Open and explore URL 2 – Peppered Moths White and Black. Using the section of the URL titled “Birds Eye View” run the simulation for a dark and light forest, fill in the table and draw a conclusion. PERCENT OF DARK MOTHS PERCENT OF LIGHT MOTHS DARK FOREST LIGHT FOREST 1. Draw a conclusion. Your conclusion needs to contain at least 150 words. *************************************************************** Pacing: complete this by the end of Week 6 of your enrollment date for this class. ### 01.06.03 Yet Another Process (Math I) Now, what if you didn't want to take the time to simplify equation 1 before solving for p? That is fine, too. It will require you to take more steps in solving, but the overall process is the same. Recall equation 1, c ≥ ($17 per person)(p – 5 people) + $100. (eq. 1) When working with an unsimplified problem, you are still allowed to perform the steps in many different orders. You must remember to multiply/divide the entire expression on each side, and add or subtract terms from the entire expression. However, sometimes this results in an equation that is not very clean initially. People (students, teachers, mathematicians) are more likely to make algebra errors if the equation is complicated. Therefore, for a non-simplified expression, there really is a best order to solve it. The best order to solve a non-simplified equation is to work from the outside in. 
This means that we do things in the reverse order from the order of operations we would use to evaluate the expression. So, for the sake of discussion, if you knew p, how would you evaluate the expression on the right-hand side of equation 1? The first thing you would do is subtract 5 people from p. Second, you would multiply that value by$17 per person. Finally, you would add $100. Okay? So to solve for p, we work backwards. This means we start by removing the term$100. To do this, we need to add -$100 to both sides. This is the same as subtracting$100 from both sides. c – $100 ≥ ($17 per person)(p – 5 people) + $100 –$100. (eq. 9) c – $100 ≥ ($17 per person)(p – 5 people) (eq. 9a) The second step is to remove the factor $17 per person. To do this, we need to multiply both sides by This is the same as dividing both sides by$17 per person. (eq. 10) (eq. 10a) The final step is to remove the term -5 people. To do this we need to add 5 people to both sides. (eq. 11) (eq. 11a) And as before, we can reverse the order of the inequality, (eq. 11b) At this point, you may notice that equation 11b does not look exactly like equation 5b. Yet I claim it is the same. What's the deal? We need to combine the terms on the right-hand side. In order to do this, we need to find a common denominator. The denominator of the first term is $17 per person. The denominator of the second term is 1. The least common multiple for these numbers is$17 per person. That is already the denominator of the first term, so we do not need to change the second term. If we multiply 1 by $17 per person, we will get$17 per person. Therefore, we need to multiply the denominator of 5 people by $17 per person. We cannot do that and still have the same number. What we can do is multiply 5 people by any version of 1 we choose. How about That is a valid version of 1. (eq. 12) Now simplify this, (eq. 12a) (eq. 12b) Finally, we can add the terms together, (eq. 13) (eq. 
5b) Now is it clear that these are the same equation? Awesome :) Here is a problem for you to think about. Which technique did you prefer? Simplifying the expression initially, then solving for the desired quantity? Or solving for the desired quantity without simplifying the expression? Be prepared to justify your answer.

### 01.06.04 Babysitting Income, Revisited (Math I)

You should also recall Jeremy. He has a job babysitting for his neighbors 15 hours a month. If he is unavailable, his sister will fill in for him, but he has to pay her. Jeremy also has a couple of bills he needs to pay each month. Jeremy's problem was rather serious. If he asked his sister to babysit for him too often, he would be losing money each month. 
He needs to make sure that he doesn't have his sister fill in for him more often than he can afford. Jeremy had found the following equation to describe his monthly income: i = $65 – ($7 per hour)(h), (eq. 14) where i was his net income, and h was the number of hours his sister worked for him. Jeremy needs to make sure that he doesn't end up with a negative income. Therefore, he needs to solve this equation for the number of hours his sister works. The term that includes h is negative. Therefore, we may wish to start this problem by adding ($7 per hour)(h) to both sides of the equation. i + ($7 per hour)(h) = $65 – ($7 per hour)(h) + ($7 per hour)(h), (eq. 15) i + ($7 per hour)(h) = $65. (eq. 15a) Using the method of working from the outside in, the next step would be to add -i to both sides of the equation, aka subtract i, i – i + ($7 per hour)(h) = $65 – i, (eq. 16) ($7 per hour)(h) = $65 – i. (eq. 16a) Next, we need to remove the coefficient ($7 per hour). We do this by multiplying by 1/($7 per hour), or dividing by $7 per hour, ($7 per hour)(h)/($7 per hour) = ($65 – i)/($7 per hour), (eq. 17) h = ($65 – i)/($7 per hour). (eq. 17a) Again, we should check the units. i has units of dollars. $65 also has units of dollars. These terms are added. Therefore these units should be the same. They are. h has units of hours. ($65 – i)/($7 per hour) has units of dollars per (dollars per hour). This simplifies to (dollars)(hours/dollar), which equals hours. This should have the same units as h, which it does, so we are good as far as units go. Now that Jeremy has solved this problem for h in general, he can use this equation to determine the maximum number of hours his sister can work for him, for whichever income he wants. However, since the original concern was to not have a negative income, we may as well solve this for i = 0. h = ($65 – i)/($7 per hour), (eq. 17a) h = ($65 – $0)/($7 per hour), (eq. 18) h = $65/($7 per hour), (eq. 18a) h = 9.2857.... hours (eq. 18b) h ≈ 9.3 hours (eq. 18c) Therefore, if Jeremy doesn't want to go into debt, he can't have his sister fill in for him for more than about 9.3 hours each month. 
By the way, this is more or less the answer we estimated from our graph in lesson 3. What if Jeremy needs $50 to buy concert tickets? What is the maximum number of hours that his sister can work for him? We already have this solved for h, so we just need to enter the numbers. This is the advantage of solving the problem with variables initially. We don't need to go back and re-solve it every time we want to change the numbers. h = ($65 – i)/($7 per hour), (eq. 17a) h = ($65 – $50)/($7 per hour), (eq. 19) h = $15/($7 per hour), (eq. 19a) h = 2.1428.... hours (eq. 19b) h ≈ 2 hours (eq. 19c) This means that if Jeremy wants to buy those tickets, he needs to make sure that he doesn't double book for more than 2 hours this month. All right, do you think you can do this on your own? 
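As an optional sanity check (not part of the lesson), the solved formulas can be verified with a few lines of Python. The function names here are ours, and eq. 5b refers to the simplified form p ≤ (c – $15)/($17 per person):

```python
# Check that the solved formulas invert the originals (names are ours).

def cost(p):
    """Eq. 1 as an equality: c = ($17 per person)(p - 5 people) + $100."""
    return 17 * (p - 5) + 100

def max_people(c):
    """Eq. 5b solved as an equality: p = (c - $15) / ($17 per person)."""
    return (c - 15) / 17

def max_hours(i):
    """Eq. 17a: h = ($65 - i) / ($7 per hour)."""
    return (65 - i) / 7

# Round-tripping eq. 1 through eq. 5b recovers the head count.
print(max_people(cost(12)))      # 12.0

# Jeremy's break-even (eq. 18b) and concert-ticket (eq. 19b) answers.
print(round(max_hours(0), 4))    # 9.2857
print(round(max_hours(50), 4))   # 2.1429
```

Plugging a number back through the inverse like this is a quick way to catch an algebra slip before relying on a solved formula.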
### 01.06.04 Linear Equations -- Supplemental Links (Math I)

Read the lesson "Solving Fractional...," and do the practice problems "Practice Solving Fractional....": http://www.regentsprep.org/Regents/math/ALGEBRA/AV5/indexAV5...

Read the lesson "Solving Equations," and do the practice problems "Practice with Solving..." and "Practice Translating....": http://www.regentsprep.org/Regents/math/ALGEBRA/AE2/indexAE2...

Read the lesson "Literal Equations," and do the practice problems "Practice with Literal....": http://www.regentsprep.org/Regents/math/ALGEBRA/AE4/indexAE4...

These websites are intended for supplemental instruction, and the opportunity to practice some of the skills. They are optional links. Use them if you need additional help, or just another point of view.

### 01.06.04 Natural Selection (Biology)

computer-scored 5 points possible 20 minutes
You need to score at least 80% on this quiz before you can take the final. You can take it as many times as you would like, in order to earn the score you desire.
Pacing: complete this by the end of Week 6 of your enrollment date for this class. 
### 01.06.04.01 Linear Equations -- Assignment 3 (Math I)

teacher-scored 75 points possible 90 minutes
Complete Unit 01 -- Linear Equations -- Assignment 3, Solving Equations. Print out the attached assignment and complete the applicable questions in the space provided. You may use additional paper if needed. Once you have completed the assignment, scan it into the computer and convert it to an image file such as .pdf or .jpg. You may need to practice scanning pencil drawings so that you produce a clear, easily readable image. Finally, upload the image using the assignment submission window for this assignment. Alternately, you may type the essay questions and answers into a word processing document, then save the file as an .rtf, and upload that. Questions 1 and 2 will be worth 35 points each. Question 3 will be worth 5 points, for a total of 75 points for the assignment. You should complete this assignment after reading Lesson 6. 
### 01.06.04.02 Linear Equations -- Assignment 4 (Math I)

teacher-scored 80 points possible 240 minutes
Complete Unit 01 -- Linear Equations -- Assignment 4, Linear Equations Project. The instructions for this assignment are in the attached file. This assignment should be completed using a word processing program with an equation editor. The formulas and images should be embedded in the text. The final product must be converted to a .pdf file type. This could also be done as a slide show presentation, again with the final product converted to .pdf format. Finally, upload the .pdf file using the assignment submission window for this assignment. You should complete this assignment after studying Lesson 6.

### 01.07 The Design Brief (Pre-Engineering)

teacher-scored 100 points possible 90 minutes
The Design Brief
A design brief is a written explanation given by the client to the designer at the outset of a project. As the client, you are spelling out your objectives and expectations and defining a scope of work when you issue one. You're also committing to a concrete expression that can be revisited as a project moves forward. It's an honest way to keep everyone honest. If the brief raises questions, all the better. Questions early are better than questions late. Why provide a design brief? 
The purpose of the brief is to get everyone started with a common understanding of what's to be accomplished. It gives direction and serves as a benchmark against which to test concepts and execution as you move through a project. Some designers provide clients with their own set of questions. Even so, the ultimate responsibility for defining goals and objectives and identifying audience and context lies with the client. Another benefit of the design brief is the clarity it provides you as the client about why you're embarking on a project. If you don't know why, you can't possibly hope to achieve anything worthwhile. Nor are you likely to get your company behind your project. A brief can be as valuable internally as it is externally. If you present it to the people within the company most directly affected by whatever is being produced, you not only elicit valuable input, but also pave the way for their buy-in. When you think about it, the last thing you want is for your project to be a test of the designer's skills. Your responsibility is to help the design firm do the best work it can. That's why you hired the firm. And why you give it a brief. Budgeting and managing the process If the briefing effort is thorough, budgeting and managing a project is easier. It takes two to budget and manage a design project: the client and the designer. The most successful collaborations are always the ones where all the information is on the table and expectations are in the open from the outset. Design costs money. As one very seasoned and gifted designer says, "There is always a budget," whether it is revealed to the design team or not. Clients often are hesitant to announce how much they have to spend for fear that if they do, the designer will design to that number when a different solution for less money might otherwise have been reached. This is a reasonable concern and yet, it's as risky to design in a budgetary vacuum as it is to design without a goal. 
If your utility vehicle budget stops at four cylinders, four gears and a radio, there's no point in looking at Range Rovers. If you have$100,000 to spend and you'd really like to dedicate $15,000 of it to something else, giving the design team that knowledge helps everyone. Then you won't get something that costs$110,000 that you want but cannot pay for. The trust factor is the 800-pound gorilla in the budgeting phase. Without trust, there isn't a basis for working together.
Assignment for the Design Brief
You are an engineer working for a design firm that develops Mouse Trap Cars. Creating the design brief does not mean building the project; it means addressing the issues and design possibilities that will come later.
Use a Google search for "Mouse Trap Cars".
The presentation is for the upper management in your company. Use a multimedia program (like PowerPoint) to develop a concise, 10-minute presentation of your research to present your idea. Include pictures and facts, and explain why you are recommending it. Make sure you list the benefits and the budget to build the Mouse Trap Cars.
Submit the PowerPoint.
A ten-minute presentation would usually contain around 15 to 30 slides.
Design Brief example http://www.technologystudent.com/designpro/problem1.htm
Take the Assessment for the Design Brief.
### 02.01.06 Climate Maps- Continents (World Geography)
teacher-scored 70 points possible 60 minutes
The seven continents of the world have many different climates. In this assignment you will put, on a map of the world (or a map of each continent), the climates that each continent has. It will be important for you to pay attention to the different climates that are located on each continent. Climate is the average weather of a certain area over a long period of time. Each continent has many different climates because of latitude, elevation, landforms, and closeness to an ocean or large body of water. For example: the climate of the Wasatch Front (Ogden to Provo) is cold winters and hot summers. Most of the precipitation arrives in the winter and spring. The summers and falls are generally dry. If you take someplace that has a similar latitude to the Wasatch Front, you will find that the climate can change because of the water and elevation factors. For example, San Francisco is at about the same latitude as Salt Lake City. There is only 3 degrees' difference in latitude, but because San Francisco is by the ocean and at sea level, it has a completely different climate. It has moderate winters and cool summers. Refer to the charts below for comparison.
San Francisco

| | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Avg. High | 55° | 58° | 60° | 64° | 66° | 70° | 71° | 72° | 74° | 70° | 62° | 56° |
| Avg. Low | 41° | 45° | 45° | 47° | 48° | 52° | 54° | 55° | 55° | 51° | 47° | 42° |
| Mean | 48° | 52° | 54° | 56° | 58° | 62° | 64° | 64° | 65° | 61° | 55° | 48° |
Salt Lake City

| | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Avg. High | 36° | 44° | 52° | 61° | 71° | 82° | 92° | 88° | 78° | 66° | 50° | 37° |
| Avg. Low | 18° | 24° | 31° | 37° | 45° | 55° | 64° | 61° | 51° | 40° | 30° | 21° |
| Mean | 28° | 34° | 42° | 50° | 58° | 68° | 78° | 76° | 65° | 54° | 41° | 30° |
Here you can see that the climate in San Francisco is milder than the climate in Salt Lake City.
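As a side illustration (not part of the assignment), a few lines of Python can summarize the two tables and make the "milder climate" point concrete:

```python
# Monthly mean temperatures (°F) copied from the tables above.
sf  = [48, 52, 54, 56, 58, 62, 64, 64, 65, 61, 55, 48]
slc = [28, 34, 42, 50, 58, 68, 78, 76, 65, 54, 41, 30]

for name, means in [("San Francisco", sf), ("Salt Lake City", slc)]:
    annual = sum(means) / len(means)   # average over the whole year
    swing = max(means) - min(means)    # hottest month minus coldest month
    print(f"{name}: annual mean {annual:.1f} deg F, seasonal swing {swing} deg F")
```

San Francisco's seasonal swing is only 17°F, versus 50°F for Salt Lake City, even though the two cities sit at nearly the same latitude; that is the ocean and elevation effect described above.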
Copy the outline maps and then click on the link for the climates of the world. Color each continent according to the climates that are there. Make sure you put a KEY on your map(s).
Outline maps: http://www.eduplace.com/ss/maps/ Click on the map you are going to do. Either the world or on each continent.
Climate map of the world: http://www.allcountries.org/maps/world_climate_maps.html
NOTE: you may have trouble attaching all of the maps. One student who was successful at doing so told me the following "You asked how I could fit so many maps into my submission. First, I scanned them at a low resolution and then I zipped them using WinZip into one file and uploaded that one file." Hope that helps. If not, send the maps that won't fit to me through e-mail.
You may also find this information from a fellow student who did an awesome job on this assignment helpful:
"You asked how I made my climate maps for assignment 2.6. Once I opened up a map on the site you gave us, I hit the screen print button at the top of my keyboard. I then opened up the Microsoft Paint application and hit Edit then Paste. The screen print came up and I was then able to crop the image down so I was just viewing the map. (To crop, I outlined the area I wanted to keep in the image with the flashing dotted lines used for cropping, then hit Image then Crop). Then I just painted in the colors to match the colored climate map on the website. To insert the keys, I made separate images on Paint with just the colored blocks I used in each map. The nice thing about using powerpoint is that I am able to insert multiple images into each slide. So I just matched the keys with the corresponding map and then inserted text boxes for all of the text
### 02.04 Online skills Introduction to the Moodle platform - Study Skills
We’ll review some basic study skills that you may be using already in the traditional classroom. However, due to the unique environment of learning online, you will definitely want to make sure you use ALL these strategies.
In a traditional classroom, there are many activities that help you learn: assignments, reading, class discussions, etc. Online, you are somewhat left with just reading and assignments. READING becomes really important to be able to learn and succeed in an online class. If you only opened your textbook when you had to…or skimmed through it looking for an answer, not really ‘reading’…then you’ll need to make a change in your habits for online courses. There are ways to read the material, whether online or in a text that can help you remember and learn. One thing that is a MUST is to think about what you are reading as you go. It kind of takes the place of class discussions in a regular classroom except that you are coming up with all the comments by yourself in your head.
• Relate the new information you just read about to something that you already learned. For instance, have you ever seen the movie Stand and Deliver? At one point, a teacher who really wants a student to succeed in a chemistry class explains that the way a cell works is like being a member of the 'hood', something that the student can relate to.
• Put the information you’ve read into your own words, in several different ways so it will have more meaning to you
• Try to think of ways this new information is relevant or meaningful to your life or the world in general.
• Ask questions. Look at the information from different points of view. Be the ‘antagonist’…is it possible the information could be wrong? Etc.
• Who is saying it and why? It may surprise you to realize that there is a LOT of false information on the Internet because anybody can post anything they want. That is true even for Wikipedia…one of the most popular information websites in use. So you’ll want to check and see who the author is and if other websites agree with the information being presented.
• What perspective is the information from? Just imagine the different perspectives you could get about the Ku Klux Klan by reading the perspective of a KKK member versus that of an African American person who had a relative killed in a KKK raid. You know the story of Little Red Riding Hood is from Red's perspective, right? Have you ever read the wolf's perspective? Just for fun, you can read it on one of the links below.
Another thing that is a MUST is to take notes as you read. It’s like taking notes while a teacher is teaching in class, you can even read out loud if it helps you to learn and remember the material. Reading from the computer screen is generally hard on your eyes and results in blurring, eyestrain and visual fatigue. So limit how much time you read on screen or print out what you read if you need to. Be careful when scrolling down the screen to make sure that you don’t miss passages. When you take notes as you read, you will create your own study guide for the quizzes and proctored final…isn’t that a great idea?
To learn more about effective reading and note taking strategies, you will be visiting several websites. Take notes as you go, because there is a quiz for this section.
Dizzy Daizy Doesn't
Note taking strategies:
To review 5 different kinds of note taking strategies: http://www.sas.calpoly.edu/asc/ssl/notetaking.systems.html
More information on note taking: http://www.lifehack.org/articles/productivity/advice-for-stu...
### 03.06 Assess the growth and development of labor unions and their key leaders. Quiz 2 (US History)
teacher-scored 15 points possible 40 minutes
Coal miner, 1946: Wikimedia Commons, NARA, public domain
After you have reviewed the course materials covering unions and socialism and reviewed the terms in the vocabulary folder, you are ready to take the quiz for this section.
You must complete the quiz once you have started. You must score 80 percent on the test, and you will have 40 minutes to complete it. If you score below 80 percent, you will need to wait 24 hours before you can take the test again.
Pacing: complete this by the end of Week 2 of your enrollment date for this class.
### 03.06 Checking Account Intro (Financial Literacy)
teacher-scored 10 points possible 30 minutes
****************************************
ASSIGNMENT 03.06 (13E) (Copy everything between the asterisks.)
1) Q: After studying the difference between a Debit card and Credit card, answer the following:
a) Why does a DEBIT card prevent you from going in debt? > ANSWER:
b) If you pay off ALL charges in full before payment is due on a CREDIT card each month, it is like a free loan. How much would you pay in interest (answer not on website)? > ANSWER:
2) Q: Why should you always know the balance of available funds on your DEBIT card? > ANSWER:
3) Q: After reading the web page, why should you NOT use a “debit card” to purchase items online? > ANSWER:
4) Pick one of the checking accounts in URL #3 OR pick a different bank or credit union YOU choose to answer the following questions:
a) Q: Which did you pick? > ANSWER:
5) Q: (3.6): Write your first and last name and today's date.> ANSWER:
****************************************
Pacing: complete this by the end of Week 7 of your enrollment date for this class.
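A small sketch of the idea behind question 1b above: interest only accrues on a balance you carry past the due date, which is why paying a credit card in full each month works like a free loan. The 18% APR here is an example rate we chose, not one from the lesson.

```python
def monthly_interest(carried_balance, apr=0.18):
    """Interest charged for one month on the balance carried past the due date."""
    return carried_balance * (apr / 12)

print(monthly_interest(0))     # paid in full: 0.0 -- the "free loan"
print(monthly_interest(500))   # carrying $500 at 18% APR: 7.5
```

Carrying the same $500 every month at this example rate would cost $90 in interest over a year, while the pay-in-full habit costs nothing.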
### 03.06 Checking Account Intro links (Financial Literacy)
Managing a checking account: http://web.archive.org/web/20130319072539/http://www.youngbi...
Kinds of checking accounts: https://www.wellsfargo.com/checking/
Debit Cards vs. Credit Cards: http://banking.about.com/od/checkingaccounts/a/debitvscredit...
### 03.06 Computer Basics Test--CBT (Computer Technology)
computer-scored 35 points possible 20 minutes
Computer Basics Post-test (CBT)
After completing all the activities for this unit, go to the test (CBT) to take this post-test. This test will cover the terminology and concepts from the Computer Basics Unit. You can use your notes, assignments and handouts. There is a one hour time limit.
### 03.06 Critical Movie Review (English 11)
teacher-scored 13 points possible 40 minutes
Critical Viewing Movie Review
Choose a movie (or live play or musical) that fits with the quarter theme (love and relationships) and write a one page response to that viewing.
Suggested movies for this quarter: Ever After, Tangled, Twilight, Love Story, Westside Story, The Great Gatsby, Jane Eyre, Pride and Prejudice, Romeo and Juliet, Much Ado About Nothing
*Note: You are not restricted to any of these titles. Any movie that fits within the theme of the quarter will work.
If you are not sure if a movie fits with the theme, just send me a message. Honestly, I don't watch many movies, so I am not really sure what is out there--use your best judgement.
You will then present your response in one of two ways: use Google Voice (801-317-8401; in your message, include your name, quarter, assignment #, and the assignment, and also submit a comment through Canvas with the date and time you left the message) to record your response, OR make a short video of your response and submit it through Canvas. It is best to have your response written out, and to practice it, before making the phone call or final video.
Respond to the following questions in your review:
1895 illustration for Pride and Prejudice: C. E. Brock, public domain via Wikimedia Commons
1. How does this movie tie into the quarter theme of Love and Relationships?
2. What is the message or theme of the movie?
3. What obstacles did the protagonist have to overcome?
4. What rating (out of five stars) would you give this movie? Why?
If you use Google Voice, you can leave up to a three-minute message. Use the questions to guide your response and write out your review in detail before completing the assignment. In the assignment submission box, simply make a note of the day and time you recorded your message.
If you make a video, you can upload it there as well. The video does not have to be anything super elaborate; it can be just you talking to the camera and answering the questions about the movie.
Scoring Rubric
8 Points - Student responds to each question clearly and adequately.
5 Points - Student is well-spoken during the presentation; it is clear that he or she has practiced.
Pacing: complete this by the end of Week 4 of your enrollment date for this class.
### 03.06 Functions and Function Notation
If after completing this lesson you can state without hesitation that...
Objectives:
1. I can determine whether a relation is a function given a graph or table of values
2. I understand function notation
3. I can evaluate a function for different input values
4. I can define the terms "domain" and "range"
5. I can find the domain and range from a graph or table of values
6. I can find the domain given an equation of a function
…you are ready for the quiz. Otherwise, go back and review the material before moving on!
• Algebra Structure and Method, Book 1 (McDougal Littell) - Chapters 8, 12
Lesson 30 (Course Material)http://www.montereyinstitute.org/courses/Algebra%20IB/course...
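A brief sketch of objectives 1, 4, and 5 above (the points used here are made-up examples, not from the course): a table of values is a function exactly when no input x is paired with two different outputs y.

```python
def is_function(points):
    """Return True if no x value maps to two different y values."""
    seen = {}
    for x, y in points:
        if x in seen and seen[x] != y:
            return False
        seen[x] = y
    return True

table = [(1, 2), (2, 4), (3, 6)]
print(is_function(table))               # True: each input has one output
print(is_function([(1, 2), (1, 3)]))    # False: x = 1 has two outputs

# Domain is the set of inputs; range is the set of outputs (objective 4).
print(sorted({x for x, _ in table}))    # [1, 2, 3]
print(sorted({y for _, y in table}))    # [2, 4, 6]
```

This is the same test as the vertical line test on a graph: a vertical line crossing the graph twice means one x has two y values.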
### 03.06 History of the English Language (English 9)
SAS Word origins interactive lesson - username: farm9the, QL# 941: http://www.sascurriculumpathways.com/login
Brief history of the English language: http://www.wordorigins.org/index.php/site/comments/a_very_br...
Five events that shaped the history of English: http://www.askoxford.com/worldofwords/history/
Etymologic - online quiz game about word origins & meanings (just for fun): http://www.etymologic.com/index.cgi
English: Word Origins, 941
To open this resource in SAS® Curriculum Pathways®:
Enter the student user name: farm9the
In the Quick Launch (QL#) box, enter: 941
Do "Prepare" and "Identify" (follow the on-screen directions to see all the pages and play the video).
### 03.06 Introduction to Computers
Use the website in the link below to explore the various terms listed in the vocabulary list. Go through each of the seven lessons by viewing the video in each section and reading the information for each section. Complete the various activities that are included within the lessons.
Introduction to Computers http://educate.intel.com/en/TheJourneyInside/ExploreTheCurri...
teacher-scored 20 points possible 30 minutes
Assignment:(CB3)
Complete the Intel Assignment worksheet. Please type your answers in bold. This worksheet can be used as you take the Computer Basics test. After completing the worksheet, upload the file to the (CB3) assignment link.
Pacing: complete this by the end of Week 3 of your enrollment date for this class.
### 03.06 Introduction to Computers--Videos (CompTech07)
Introduction to Computers (CB3)
Use the website in section 03.06.1 to explore the various terms listed in the vocabulary list. Go through each of the 7 lessons by viewing the video in each section and reading the information for each section. Complete the various activities that are included within the lessons.
Assignment: Complete the Intel Assignment worksheet. Please type your answers in bold. This worksheet can be used as you take the Computer Basics test. After completing the worksheet, attach it to the (CB3) assignment link.
Introduction to Computers http://educate.intel.com/en/TheJourneyInside/ExploreTheCurri...
### 03.06 Lesson 3F: Simple Yes/No Questions (Spanish I)
Lesson 3F
Simple Yes/No Questions
[A copy of this lesson is available in a PDF file!! If you prefer to use this type of document, just click on the following link to complete this lesson: SpI_Lesson3F]
In the first two units, you were introduced to some of the most common words used to ask questions in Spanish. Do you remember these words: “¿Cómo? – How? and ¿Qué? – What?”(Lesson 1F), “¿Quién? – Who?”(Lesson 1J), “¿Cuándo? – When? and ¿Cuántos(as) – How many?”(Lesson 2E), and “¿Dónde? – Where?”(Lesson 2I)? If not, please review these lessons before continuing this lesson.
(¡¡Ojo!! – I really like the way the Spanish language has the same rule for exclamation and question marks as they have for quotes and parenthesis … using a mark to begin the exclamation or question as well as at the end. It makes reading Spanish much easier, with fewer surprises!!)
Yes or No questions: Most information questions are formed using question words, but there are times when asking a question is as simple as raising the pitch in your voice at the end of a sentence. The rising and falling of the pitch of your voice as you speak is called intonation. When a sentence, in either English or Spanish, is spoken with a rising intonation at the end of the sentence, it changes the sentence from a statement to a question. This simple question can only be answered with a “yes” or “no” response. Let’s look at a few examples:
To make a question negative: Making a question negative follows the same form as making a statement negative, just put the word “no” before the verb!!
Put the verb first: Switching the subject and verb is also an easy way to make a statement a question. You still will need to raise the pitch of your voice as you end the sentence to make sure it is understood to be a question, but putting the verb before the subject clearly makes it a question.
A Great Web Site to Use: This is a great link to use in explaining a bit more about how these questions are formed in Spanish and their similarity to the English language.
Confirmation or “Tag” questions: This is a very common way to ask confirmation questions in Spanish-speaking countries … just add one word at the end of a statement!! Once again, you must raise the pitch of your voice as you say this confirming word:
Another brief explanation of “tag” questions: http://spanish.about.com/od/sentencestructure/g/question_tag...
Summary of Lesson: Hopefully, you have reviewed and reinforced your knowledge of different ways of asking questions in Spanish!! In order to have a conversation with your native Spanish friend, you first need to be able to ask questions about the information that you would like to know. Gradually, (and I promise that this will happen!!) little by little, you will be able to understand when he/she responds!! Just remember to use “más despacio, por favor – more slowly, please”!!
Practice Exercises: Use the link below to review your knowledge of making simple yes/no questions!
Click on the “Review” menu item on the left side of the page. Then click on “Primera parte” and then on “Grammar”. Do the following exercises:
Rev 2-4a: Formation of yes/no questions and negations: http://wps.prenhall.com/ca_ph_aitken_arriba_1/28/7265/185993...
Rev 2-4b: Formation of yes/no questions and negations: http://wps.prenhall.com/ca_ph_aitken_arriba_1/28/7265/185993...
### 03.06 Proctored Final (Participation Skills and Techniques)
computer-scored 160 points possible 75 minutes
Yes! - you are ready to take the proctored final test.
This should be completed by WEEK 9 of this class
*PE QUARTER 1 FINAL INSTRUCTIONS:
• You CANNOT have any "0's" on an assignment.
• The 1st quarter test has 40 questions that cover the entire quarter. You must score at least 60 percent on the proctored final to pass 1st quarter.
• YOU HAVE TWO WEEKS TO TAKE THE PROCTORED FINAL BEFORE EHS WILL DROP YOU FROM MY COURSE.
• When you COMPLETE your final, EMAIL ME your name and quarter, and let me know you completed the FINAL. THEN I WILL SUBMIT YOUR GRADE. You can EMAIL me at: kami.elison@ehs.uen.org
• There is no provision to improve your grade, so please do your best. THIS MEANS IF YOU WANT TO RETAKE ANY QUIZZES or REDO any of your assignments for a better grade, YOU MUST DO SO BEFORE YOU TAKE THE PROCTORED EXAM.
• Your credit will be emailed to the school listed in YOUR PROFILE within a few days; it could take a couple of weeks.
• Check with your counselor to make sure it arrives.
THIS IS ESSENTIAL TO YOUR SUCCESS: Please allow 24 hours from your teacher's approval before completing the following steps.
Good luck on your final, you will need to perform the following steps in order to take the
FINAL:
• STEP ONE: Click on the link "EHS Certified Proctors List." This will bring up a list of proctors and you will be able to see those available in your area. (NOTE: Some schools have certified proctors whose names are not on the list. Your school counselor will know if your school has an unlisted certified proctor. Then proceed to step two.)
• STEP TWO: Contact your preferred proctor to arrange the date and time for the test. It's best to contact the proctor in person to arrange a date and time. Note: based on local district policy, some proctors charge for their proctoring services. For best results, please schedule yourself a few days leeway before you take the proctored final test to allow time for the password to be sent to your proctor. Then proceed to step three.
• STEP THREE: *You must wait 24 hours after your teacher has approved your Ready assignment to complete this step.* Once you have arranged a date and time with the proctor fill in the Proctor Manager Notification Form. Supply your EHS username on that form and your personal Proctor Manager Notification Form will appear.
• When you have successfully passed the proctored final, I will award your credit once you have sent me an email of completion and our registrar will send the record of the credit to the school listed in your EHS account.
Pacing: complete this by the end of Week 9 of your enrollment date for this class.
EHS Certified Proctors List: http://media.ehs.uen.org:8080/A/FMPro?-DB=A.fp5&-Lay=form&-f...
### 03.06 Related Rates (Calculus)
HC: Example problems: http://www.hippocampus.org/course_locator?course=AP%20Calcul...
MIT OpenCourseWare Lecture Part 1 (Watch from 45:00 to end): http://ocw.mit.edu/courses/mathematics/18-01-single-variable...
MIT OpenCourseWare Lecture Part 2 (Watch from beginning to 40:45): http://ocw.mit.edu/courses/mathematics/18-01-single-variable...
### 03.06 Review Greatest Common Factor and Factor Polynomials with Four Terms (Math Level 2)
Find the greatest common factor (GCF) of monomials, factor polynomials by factoring out the greatest common factor (GCF) and factor expressions with four terms by grouping.
Factors are the building blocks of multiplication. They are the numbers that you can multiply together to produce another number: 2 and 10 are factors of 20, as are 4 and 5 and 1 and 20. To factor a number is to rewrite it as a product. 20 = 4 • 5.
Likewise to factor a polynomial, you rewrite it as a product. Just as any integer can be written as the product of factors, so too can any monomial or polynomial be expressed as a product of factors. Factoring is very helpful in simplifying and solving equations using polynomials.
A prime factor is similar to a prime number—it has only itself and 1 as factors. The process of breaking a number down into its prime factors is called prime factorization.
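The prime factorization process described above can be sketched in a few lines of Python (an illustrative helper, not part of the course materials):

```python
def prime_factors(n):
    """Break a whole number down into its prime factors, e.g. 20 -> [2, 2, 5]."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out each prime completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)   # whatever remains is itself prime
    return factors

print(prime_factors(20))  # [2, 2, 5], so 20 = 2 • 2 • 5
```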
A whole number, monomial, or polynomial can be expressed as a product of factors. You can use some of the same logic that you apply to factoring integers to factoring polynomials. To factor a polynomial, first identify the greatest common factor of the terms, and then apply the distributive property to rewrite the expression. Once a polynomial in ab + ac form has been rewritten as a(b + c), where a is the GCF, the polynomial is in factored form.
When factoring a four-term polynomial using grouping, find the common factor of pairs of terms rather than the whole polynomial. Use the distributive property to rewrite the grouped terms as the common factor times a binomial. Finally, pull any common binomials out of the factored groups. The fully factored polynomial will be the product of two binomials.
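As an illustration of the grouping steps just described (this worked example is not from the course worksheet):

```latex
\begin{aligned}
x^3 + 3x^2 + 2x + 6
  &= (x^3 + 3x^2) + (2x + 6)   && \text{group pairs of terms} \\
  &= x^2(x + 3) + 2(x + 3)     && \text{factor the GCF out of each pair} \\
  &= (x + 3)(x^2 + 2)          && \text{pull out the common binomial}
\end{aligned}
```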
If after completing this topic you can state without hesitation that...
• I can find the greatest common factor (GCF) of monomials.
• I can factor polynomials by factoring out the greatest common factor (GCF).
• I can factor expressions with four terms by grouping.
…you are ready for the assignment! Otherwise, go back and review the material before moving on.
Click the link below to begin viewing the lesson videos and to try the practice problems.
NROC Developmental Math: Greatest Common Factor - http://www.montereyinstitute.org/courses/DevelopmentalMath/U...
Each topic is divided into sections that include the following:
Warm Up - questions to answer to see if you are ready for the lesson.
Presentation - high quality video with excellent illustrations that teaches the topic.
Worked Examples - examples that are worked out step-by-step with narration.
Practice - quiz problems on the topic covered.
Review - practice test to check your knowledge before moving on.
You are not required to complete every section. However, REMEMBER the goal is to MASTER the material!!
I highly recommend viewing the following video examples before moving on to the assignment.
### 03.06 Review Greatest Common Factor and Factor Polynomials with Four Terms - video (Math Level 2)
OHSU Teacher - Factoring by Grouping: https://vimeo.com/12652675
CK-12 0913 Factoring by Grouping: https://vimeo.com/46324811
Khan Academy 06 Factoring by grouping 02 Example Basic grouping: https://www.youtube.com/watch?v=2-bcAAncRQs
CK-12 0913S Factoring by Grouping: https://vimeo.com/47220938
### 03.06 Review Greatest Common Factor and Factor Polynomials with Four Terms – Assignment (Math Level 2)
teacher-scored 72 points possible 40 minutes
Activity for this lesson
Complete the attached worksheet.
1. Print the worksheet and complete the assignment in the space provided. You may use additional paper if needed. Work all the problems showing ALL your steps.
2. Once you have completed the assignment, digitize it (scan or take a digital photo, up close and clear), save it to your computer, and convert it to a file format such as .pdf or .jpg.
Pacing: complete this by the end of Week 4 of your enrollment date for this class.
### 03.06 Roundabouts and Continuous Flow Intersections (DriverEd)
Residential Roundabout: By Richard Drdul (Traffic Calming Flickr Photoset), CC-BY-SA-2.0, via Wikimedia Commons

ROUNDABOUTS

Roundabouts were created in an effort to reduce the number of points where conflict can occur between two vehicles or a vehicle and a pedestrian. A roundabout has 12 potential points of conflict, compared to 56 potential points of conflict at a regular “four-leg” intersection. A typical roundabout has a mountable curb around the outside of the center island to accommodate big trucks and semis as necessary.

There are four points to remember when using a roundabout:

• Always yield to the traffic that is already in the roundabout.
• Roundabouts run counterclockwise; always enter the roundabout to your right.
• Always yield to pedestrians.
• Always signal going in and out of a roundabout.

The roundabout is a free-flowing traffic lane; therefore, it is not regulated by traffic lights. It is extremely important for the driver to be aware of pedestrians that might be crossing the traffic lanes of a roundabout.

CONTINUOUS FLOW INTERSECTIONS (CFI)

New to Utah is the Continuous-Flow Intersection (CFI), a new approach to intersection design. The first one is located at 3500 South and Bangerter Highway in West Valley City. Compared to a traditional intersection, a CFI reduces the steps in the light cycle and places left turns along a safer path. Proceed as you normally would, but watch for another light just past the intersection. It is possible to encounter a red light here, which allows left-turning cars to cross in front of you. Be sure to yield to traffic, cyclists, and pedestrians. Make your turn, merge with traffic, and keep going. (Select the link "Continuous Flow Intersections" for more information.)
### 03.06 Roundabouts and Continuous Flow Intersections (DriverEd)
Continuous Flow Intersections: http://www.udot.utah.gov/cfi
### 03.06 Slope-Intercept Form (Math Level 1)
Use the slope and y-intercepts to graph and write functions.
Something to Ponder
How would you explain the parts of the Slope-Intercept form and how they relate to the graph of an equation?
Mathematics Vocabulary
Slope-intercept form: y = mx + b, where m is the slope and b is the y-intercept of the line
Learning these concepts
Click each mathematician image OR click the link below to launch the video to help you better understand this "mathematical language."
$\fn_phv {\color{Red}SCROLL }$ $\fn_phv {\color{Red}DOWN }$ $\fn_phv {\color{Red}TO }$ $\fn_phv {\color{Red}THE }$ $\fn_phv {\color{Red}GUIDED }$ $\fn_phv {\color{Red}PRACTICE }$ $\fn_phv {\color{Red}SECTION }$ $\fn_phv {\color{Red}AND }$ $\fn_phv {\color{Red}WORK }$ $\fn_phv {\color{Red}THROUGH }$ $\fn_phv {\color{Red}THE }$ $\fn_phv {\color{Red}EXAMPLES }$ $\fn_phv {\color{Red}BEFORE }$ $\fn_phv {\color{Red}SUBMITTING }$ $\fn_phv {\color{Red}THE }$ $\fn_phv {\color{Red}ASSIGNMENT!!! }$
### 03.06 Slope-Intercept Form - Extra Video (Math Level 1)
NROC Algebra 1: Intercepts of Linear Equations - http://www.montereyinstitute.org/courses/Algebra1/U04L1T2_RE...
NROC Algebra 1: Graphing Equations in Slope-Intercept Form - http://www.montereyinstitute.org/courses/Algebra1/U04L1T3_RE...
NROC Developmental Math: Writing the Equation of a Line - http://www.montereyinstitute.org/courses/DevelopmentalMath/U...
Khan Academy: Graphing a line in slope-intercept form - http://tinyurl.com/nh8tsye
I highly recommend that you click on the links above.
NROC links: You can just watch the videos by clicking on PRESENTATION or work through each section.
Guided Practice
After watching the video try these problems. The worked solutions follow.
Example 1:
What are the slope (m) and y-intercept (b) of the following:
a. y = 2x - 3
b. y=-$\fn_phv \frac{2}{3}$x + 6
c. 4x - 2y = -8
Example 2:
Write an equation in slope-intercept form with slope $\fn_phv \frac{2}{5}$ and y-intercept 4.
Example 3:
Graph the following equations:
a. y = $\fn_phv \frac{1}{3}$x - 2
b. y = -2x + 1
c. y=-$\fn_phv \frac{3}{2}$x
Example 1: What are the slope and y-intercept (b) of the following?
a) $\fn_phv y={\color{Red} 2}x{\color{Blue} -3}$
$\fn_phv {\color{Red} m = 2}$ and $\fn_phv {\color{Blue} b = -3}$
b) $\fn_phv y={\color{Red} -\frac{2}{3}}x {\color{Blue} + 6}$

$\fn_phv {\color{Red} m=-\frac{2}{3}}$
$\fn_phv {\color{Blue} b=6}$
c) 4x - 2y = -8
Step 1: Write the equation in slope-intercept form.
4x - 2y = -8
Subtract 4x from both sides:
-2y = -4x - 8
Divide all three terms by -2:
$\fn_phv y={\color{Red}2 }x{\color{Blue}+4 }$
Step 2: Identify m and b.
$\fn_phv {\color{Red} m=2}$ and $\fn_phv {\color{Blue} b=4}$
Example 2: Write an equation in slope-intercept form with slope $\fn_phv {\color{Red} \frac{2}{5}}$ and y-intercept $\fn_phv {\color{Blue} 4}$.
$\fn_phv y={\color{Red} m}x {\color{Blue} + b}={\color{Red} \frac{2}{5}}x {\color{Blue} +4}$
Example 3: Graph the following equations:
a) $\fn_phv y={\color{Red} \frac{1}{3}}x {\color{Blue} -2}$
b) $\fn_phv y=\color{Red}-2x {\color{Blue} +1}$
c) $\fn_phv y=-{\color{Red} \frac{3}{2}}x$
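The conversion in Example 1c, from standard form Ax + By = C to slope-intercept form, can also be sketched as a small Python helper (a hypothetical function, for illustration only):

```python
def slope_intercept(a, b, c):
    """Convert ax + by = c to y = mx + b form; returns (slope, y-intercept).

    Solving for y: by = -ax + c, so the slope is -a/b and the
    y-intercept is c/b. Assumes b != 0 (a non-vertical line).
    """
    return -a / b, c / b

# Example 1c: 4x - 2y = -8  ->  y = 2x + 4
m, b = slope_intercept(4, -2, -8)
print(m, b)  # 2.0 4.0
```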
### 03.06 Slope-Intercept Form - Worksheet (Math Level 1)
teacher-scored 20 points possible 60 minutes
Activity for this lesson
1. Print the worksheet. Work all the problems showing ALL your steps.
2. Digitize (scan or take digital photo) and upload your worksheet activity.
Pacing: complete this by the end of Week 4 of your enrollment date for this class.
### 03.06 Speech Patterns
Stewart examines speech patterns of Cache Valley, Utah
By George Stewart, guest writer

An article on Cache Valley, Utah speech style by George Stewart, which first appeared in The Statesman.

This article was prompted more by interest than expertise. I have always been interested in language and how particular sounds and symbols convey ideas, experiences, and feelings. The lists of words, phrases, and pronunciation keys that follow were first collected for my own children.

The children were taught that English usage is not a moral issue, although we may murder the language at times! They learned that individuals were not "good, bad, or better" because of their command of language. Language usage is very much the product of opportunity and experience. To understand and be understood provides each of us with a wider range of experiences and an enriched enjoyment of our interactions with others. Our personal use of the language can restrict or provide opportunities for us.

Another lesson that my children learned from this word and usage study was the value of a sense of humor. We all have struggled with the language at some time in our lives--I still do! English "mis-usage" can be funny! We should not, however, be devastated by our own grammatical stumbles, nor should we disparage others when they miss the mark. The challenge for all of us is to grow and to improve.

Not all of the following entries are necessarily "Utah"; however, many are in common use.
Pronunciations

• n.ciation (pro.nun.ci.a.tion)
• tore (tour)
• edge.ju.cation (ed.u.ca.tion)
• varly (barely)
• stastics (statistics)
• or.i.en.tate (or.i.ent)
• pacific (specific)
• door.ing (during)
• li.berry (library)
• nu.cu.lar (nu.cle.ar)
• excetera (etcetera)
• air.a.gation (irrigation)
• exspecially (especially)
• crick (creek)
• pit.cher (pic.ture)
• crell (corral)
• pam.plit (pam.phlet)
• am.ble.ance (ambulance)
• "in" (ing)--fishin', eatin'
• maa.nayze (may.o.naise)
• 'post (suppose)
• drownded (drown)
• 'nother (another)
• cold slaw (coleslaw)
• pome (po.em)
• pertnear (nearly)
• for (far)
• far (for)
• ig.nernt (ig.no.rant)
• play.sure (pleasure)
• clean (clear)
• diff.ernt (dif.fer.ent)
• man'r (manure)
• pardner (partner)
• matore (mature)

Weak or Missing Long Vowel Sounds

• tell (ta-il, ta-le)
• jell (ja-il)
• dell, dill (de-al)
• rilly (re-ally)
• for sal (for sa-le, often spelled "for sell" in ads)
• melk (milk)
• bell (ba-il)
• mel (me-al, ma-le, ma-il) ("mel" very useful; word refers to food, sex, and letters!)
• hell (ha-il)

Phrases and Words

• we was (usually pronounced "wuz", meaning "we were")
• I promise. (Used to mean "I am telling the truth!", rather than, "I will do something or will carry through.")
• have came (have come)
• me and ("me" used as subject instead of object)
• has done good (has done well - good is an adjective, well is an adverb; good usually used to refer to a moral act, well often used to refer to a skill)
• there is (frequently misused when speaking of more than one event, person, place, or thing, where "there are" is appropriate usage)
• bath the baby (ba-the the baby)
• we won them (we beat them, we won the game, we won)
• This is her/him. (This is he/she. Usually used when answering the telephone)
• gonna, gotta, hav'ta (going to, have to)
• Oh, for rude! (You are or that was rude.)
• tend (babysitting - to tend a baby is not incorrect, but not too common outside of Utah)
• elastic (not necessarily wrong, but not as commonly understood elsewhere as rubber band would be)
• unthaw (thaw)
• What do you times it by? (What do you multiply it by?)
• boughten (purchased)
• those, them (These words are often used interchangeably and with syntactical absurdity. "Them cars are neat" or "I would like one of those ones")
• learn me (teach)
• borrow me (loan me)
• Can I go with? (May I go?)
• take that for granite (take that for granted)
• in head of (ahead of)
• What was your name again? (What is your name?)
• Do you got. . .? (Do you have. . .?)
• irregardless (there is no such word, and it would be redundant anyway)
• Double negatives - a rather common grammatical error in Utah. (An example would be, "I don't want to hear no more noise." "We don't have no..." is also too commonly heard.)

Dominant Subcultural Pronunciations, Words, and Phrases

• choice/special (What or who isn't choice or special in Utah?)
• con.fernce (con.fer.ence)
• gen.e.ol.ogy (gen.e.al.ogy)
• hal.a.lu.le.ah (hall.e.lu.jah)
• inactive/active (used as a personal noun to denote one's level of church attendance or dedication to precepts)
### 03.06 Speech Patterns Assignment
teacher-scored 10 points possible 30 minutes
1. What is meant by the fact that language is a matter of opportunity and experience?
2. Because of the unique use of so many terms here in Utah and other places, we often sound unique. What phrases from the list above are used by people near/around you?
3. Compare some of the “Phrases” in the list above and select two that would be pronounced differently in another city or state.
4. What are two phrases NOT included in the article's list that you could add?
### 03.06 The Metro (FrenchII)
The link in the URL will take you to the lesson.
### 03.06 Wormology (Earth Systems)
teacher-scored 10 points possible 120 minutes
Assignment:
You will design and conduct an experiment that answers the following question:
How does a specific atmospheric condition affect worms (life)? In designing and conducting your experiment, please keep the following guidelines in mind:
• Worms should not be harmed maliciously.
• Be careful when handling the acid rain solution, should you decide to use it.
Materials, facilities, and resources:
Materials can be added or subtracted as needed.
The following are a few ideas of things you may or may not want to use. You will certainly not need to use all of them. You may want to use some items that are not listed. You should be able to borrow these materials from your local high school teacher or purchase them for a minimal amount at local stores.
• Heat source (warm water, hot plate etc.)
• Acid rain mixture (20 ml of 0.1 M HCl to 2 liters of water or 20 ml lemon juice to 2 liters water)
• Basic glassware (graduated cylinders, beakers, canning jars)
• Worms (Planarias, Earthworms, and/or Mealworms, available in a local pet store, local fishing store, or in your yard.)
• Small paint brushes
• Droppers (Available at a local pharmacy if you do not have a spare medicine dropper hanging around.)
• Straws
• String
• Colored light sources
• Thermometers
• Dissecting trays or something similar (like a pie tin)
• UV light source
• Wind source (fan)
• Ice
• Scales
• Alka Seltzer tablets (source of carbon dioxide)
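As a quick sanity check on the acid rain recipe above (20 ml of 0.1 M HCl into 2 liters of water), the standard dilution relationship M₁V₁ = M₂V₂ gives the final concentration. A short Python sketch, assuming the volumes simply add:

```python
def diluted_molarity(stock_molarity, stock_ml, water_ml):
    """Final molarity after mixing stock_ml of stock solution into water_ml of water."""
    total_ml = stock_ml + water_ml
    return stock_molarity * stock_ml / total_ml

# 20 ml of 0.1 M HCl into 2 liters (2000 ml) of water:
print(diluted_molarity(0.1, 20, 2000))  # about 0.001 M -- very dilute, but still handle with care
```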
The experiment rubric (see the attachment) outlines what is expected in your experiment. It also shows how your experiment will be scored.
Click on the "Experiment Rubric" link and review what is expected.
1. Determine exactly which changed atmospheric condition you would like to simulate in your experiment. For example, you may want to test the effect of acid rain on worms or global warming or increased carbon dioxide levels, etc. Decide on the specific question that you would like to test and write it down.
2. Predict what you think the outcome of your experiment will be. This is your HYPOTHESIS. Use the following format for writing your prediction: IF I ________________(write what you will do in your experiment), THEN___________(Fill in the blank with what you think will happen.) For example, IF I increase the temperature 10 degrees Celsius, THEN the worms will become sluggish and move more slowly.
3. Write a set of procedures that will test your hypothesis. Be sure to include a timeline. Your experiment may last as little as 60 minutes or as long as a week. You do not need to plan an experiment that lasts longer than a week. Remember to include a control. Be very specific. Tell me exactly what you plan to do. Tell me how much of everything you plan on using. Tell me how long you plan on running the experiment, and how often you will check it. Tell me how you will measure and record your data. I want details!
4. STOP! Submit your experimental design to me via email before going any further. Send me your question, hypothesis, and your procedures. I promise to give you feedback on your design within three days. If the design is scientifically sound, you may go ahead and conduct your experiment. If it has flaws, we will work together until you have designed a valid, reliable experiment---then you may go ahead and conduct your experiment. If you fail to submit your plan before you do your experiment you may not receive credit for the assignment.
5. AFTER you have received my go-ahead, conduct your experiment. Be sure to keep detailed lab notes. Your lab notes should contain a record of everything you did as well as all the data you collected. Every entry on your lab notes should be dated (month/day/year)
ANALYSIS
2. Send me your lab notes. I want to see the observations that you recorded. Do not simply send me a summary of your results. I want to see a record of everything you did as well as the data you collected.
3. Based on your observations, write a conclusion. What does your data tell you? What did you learn from your experimental results?
4. What kind of relationships did you find between worms and your selected atmospheric condition?
6. If you were to do this again, what would you change? Why?
7. What additional experiments could be performed?
GOOD LUCK AND HAVE FUN!!!
Pacing: complete this by the end of Week 4 of your enrollment date for this class.
### 03.06 You are what you eat (Biology)
FDA food labels: http://www.fda.gov/food/labelingnutrition/default.htm
Understanding Food Labels: http://www.fda.gov/Food/LabelingNutrition/ConsumerInformatio...
### 03.06.00 - ALLUSION
Authors often use allusions as well as symbols to convey connotative meaning. This exercise will focus on allusion. For this activity, you will read the poem “Demeter’s Prayer to Hades.” This poem has an allusion to the mythological story of Demeter and her daughter Persephone. For an allusion to be effective, you have to know the story, so if you are not familiar with it, read a short version given at one of the links below.
### 03.06.01
“Demeter’s Prayer to Hades”: http://www.oocities.org/talita.geo/3demeters.html
"Demeter's Prayer to Hades": http://www.bigfoot.k12.wi.us/~staff/brower/11-lit%20poetry/D...
Story of Demeter and Persephone: http://www.bigfoot.k12.wi.us/~staff/brower/11-lit%20poetry/D...
Story of Demeter and Persephone: http://www.greeka.com/greece-myths/persephone.htm
Literary Devices: http://www.slideshare.net/guest05f10c/poetic-conventions-dis...
Literary Devices: http://www.kareyperkins.com/classes/420/litelements.pdf
### 03.06.01 Lesson 3F: Simple Yes/No Questions (Spanish I)
computer-scored 20 points possible 20 minutes
**Assignment 03.06.01: Simple Yes/No Questions**: You know the drill: find the quiz button and click on it!
### 03.06.01 Metro Trip to the Musee d'Orsay (FrenchII)
teacher-scored 16 points possible 40 minutes
The link in the URL will take you to this assignment.
### 03.06.01 Metro Trip to the Musee d'Orsay links (FrenchII)
Unit 3 reading and writing assignment: http://www.mrcharon.net/EHSFRENCH/LEVEL2/fr2u3rtassignment.h...
### 03.06.01 Quiz 30
computer-scored 10 points possible 40 minutes
If you are certain you have mastered the material, you are ready for the quiz. Click on the Quiz 30 link.
### 03.06.01 Related rates exploration links (Calculus)
Ballooning related rates: http://education.ti.com/educationportal/activityexchange/Act...
It is an exploration on related rates using a balloon. You will need a balloon, a fabric measuring tape (one that's not metal) and your calculator.
In the exploration, there is an example that will guide you through the work and steps. However, you'll need to show all your work in a separate write-up (either TI Interactive or by pencil) and email it to the instructor.
### 03.06.01 Unit 03 Review Quiz (English 11)
computer-scored 20 points possible 30 minutes
Complete the review quiz on unit 3 after you have completed reading the novel. You will need to understand and apply the literary terms as they relate to the book you read.
Pacing: complete this by the end of Week 4 of your enrollment date for this class.
### 03.06.01 You are what you eat (Biology)
teacher-scored 10 points possible 90 minutes
Summary:
Now that you've learned about cells, cell chemistry, properties of elements in cells, and much more, let's put a little of that to use in our everyday world. Sure, you understand that you and other organisms are made up of cells and that cells have an important function in our bodies. You have researched the macromolecules (proteins, carbohydrates, lipids, nucleic acids), and you have looked at some of the major elements that play a role in the makeup of human beings and all other organisms. Don't forget that inorganic materials (those not containing carbon - nonliving factors in our environment) are also present in our everyday lives.
You've heard the phrase "you are what you eat." Let's take a little stroll to your kitchen and see what exactly it is that you take into your body daily, and what these items may do for your body.
Instructional Procedures:
This assignment contains two parts. Copy all information below between the lines of asterisks, including the lesson number, revision date, and all questions, into a word processing document.
******************************************************
ASSIGNMENT 03.06.01 - REVISION DATE: 8/28/14 (Copy everything between the asterisks.)
PART 1
• Obtain the original label from any type of non-perishable food item (get the can or the box).
• List all of the ingredients. There should be at least 7 different ingredients.
• Research and explain each of the ingredients. Make sure you include the purpose of the ingredient in the food and how it helps or hinders your body. (Note: Some items have no nutritive value. You should explain what their purpose is in the product. Why put them in?)
• Give references for your information.
PART 2
Being properly hydrated is important for you to stay healthy. There are several ways to determine how much water you need to drink each day. Use the following formula to calculate how much water you need to drink each day.
current weight (in pounds) ÷ 2 = how many ounces of water you need to drink each day
1 cup = 8 ounces
For example a person who weighs 130 pounds should drink 65 ounces of water each day. That's equivalent to 8.125 cups of water each day.
130/2 = 65 ounces
65 ounces ÷ 8 ounces = 8.125 cups
1. How many ounces of water do you need to drink each day?
2. How many cups of water do you need to drink each day?
******************************************************
Pacing: complete this by the end of Week 6 of your enrollment date for this class.
### 03.06.02
teacher-scored 12 points possible 30 minutes
Once you have read the myth and the poem, answer these questions (make sure that you answer in complete sentences):
1. How does knowing the story change the meaning of the word “prayer” and of the tone of the poem?
2. What particular lines did you find most effective and why?
Assessment Rubric:
Content: Shows the required response and understanding of the poem. /4
Clarity: Writing is clear, focused and well organized. /4
Conventions: No significant errors in grammar, usage, punctuation or spelling. /4
teacher-scored 16 points possible 45 minutes
Assessment Rubric:
Content: Each question answered completely and in a way that shows knowledge and understanding of the selection. /4
Support: Each answer includes the specific language from the poem to illustrate analysis. /4
Clarity: Writing is clear, focused & well organized—didn’t give me any “huh?” moments. /4
Conventions: No significant errors in grammar, usage, punctuation or spelling. /4
Pacing: complete this by the end of Week 6 of your enrollment date for this class.
### 03.06.02 Greek and Latin word parts (English 9)
More Greek and Latin roots: https://www.msu.edu/~defores1/gre/roots/gre_rts_afx2.htm?ifr...
### 03.06.02 You are what you eat (Biology)
teacher-scored 10 points possible 25 minutes
You need to score at least 80% on this quiz before you can take the final. You can take it as many times as you would like, in order to earn the score you desire.
Pacing: complete this by the end of Week 6 of your enrollment date for this class.
### 03.06.04 Root word webs (English 9)
Gliffy (graphic organizer creator): http://www.gliffy.com/
Freemind (graphic organizer creator): http://freemind.sourceforge.net/wiki/index.php/Main_Page
### 03.08 Wie gehts? Dialog #3(German1)
Wie gehts? Dialog #3: https://media.ehs.uen.org/GermanEXEQ2/wiegehtesihnen.exe
Click on the link and save the file to your hard drive as an .exe file. Open it to study this subject. Be patient during the download; it takes a few minutes. Memorize each part. Practice it until it is second nature.
### 03.09 Dialogue #2,3 Comprehension Questions(German1)
teacher-scored 20 points possible 60 minutes
Go to the link below and take the online quiz. You may take it multiple times.
### 03.10 Wie Geht es Ihnen Written Assgt(German1)
teacher-scored 20 points possible 60 minutes
Go to the link below and complete the online writing assignment.
Here is a preview of this assignment:
Wie geht es Ihnen?
I. Was fehlt?
Fill in the blanks from the dialogue:
Guten _____, Herr Eckebrecht!
Beate!
Wie geht es _______?
Mir geht es______ ________ ______. __________ geht es _____________ ?
Ich _____________ einen Test in Englisch.
Viel ___________ !
Danke__________!
Wiedersehen!
II. Antworten Sie die Fragen mit Sätzen, bitte!
1. Wie heisst das Mädchen?
2. Wie heisst der Mann?
3. Was hat Beate heute in Englisch?
4. Wie geht es dem Mann? Es geht ihm ...
5. Wie geht es dem Mädchen? Es geht ihm ...
III. Welche sind positive Ausdrücke und welche sind negative Ausdrücke?
(Which are positive responses to "Wie geht es Ihnen?" and which are negative? Put a + by the positives and a - by the negatives.)
### 04.03.01 Chapter 4 Assignment 2 - Extra Credit Program (C++)
teacher-scored 20 points possible 45 minutes
Do this assignment and submit it under Topic 3.
Miss Johnson, a math teacher, adds extra credit points to her students' total points if they score well on her final examination. Points are awarded as follows:
score < 60 no extra credit
score >= 60 and score <= 80 extra Credit = score * 2 / 11.0;
score > 80 extra Credit = score * 3 / 11.0;
Write a C++ program to allow the user to enter the final examination score. The program should then calculate and display the student's final score including any extra credit.
Example:
Miss Johnson's Extra Credit Program
Enter the student's score on the final 85
The Students final score is 108.182
### 04.05 Unit 03-04 Review Quiz
computer-scored 10 points possible 10 minutes
Assessment 04.05 Review Quiz 03-04
Complete
03-04 Review Quiz
Ancient Middle East and Ancient India and China
This assignment is found under Review Quiz 03-04 on the course website. The assignment is computer graded, which should make it easier to do and will provide immediate feedback.
Complete this assignment after you finish reading lessons 3 and 4.
Pacing: complete this by the end of Week 4 of your enrollment date for this class.
### 04.06 Space Text book (Earth Systems)
Found in Space!

Introduction: You started out fourth quarter describing energy that is “Lost in Space”. (Remember assignment 4.3?) Now, at the end of fourth quarter, you’ll describe information that is “Found in Space”. Lost and found--sounds like someplace you’d look for your missing yo-yo! Well, I know where my favorite yo-yo is, and hopefully you do too. But do you know how scientists learned about the universe? Or how scientists believe the universe began? What about how stars are born and die? Or how Earth compares to other planets? You will find this, and other information, in the assignment that follows. Hang on and enjoy your intellectual journey to see what you can find in (or about) space.

To this point, we have studied Earth’s various sub-systems (I’m sure that by now you can recite them in your sleep: water system, geologic system, atmospheric system, biologic system). Now we will examine the Earth as a part of a larger system. Earth is a part of the solar system. The solar system is a component of the system we call the Milky Way Galaxy. Our galaxy is part of the universe as we know it, and who knows what part the universe plays in an even greater whole? As a member of these systems, the Earth both affects and is affected by its celestial neighbors. In fact, the Earth itself exists thanks to interactions that occurred millions of years ago in our “corner” of the universe. It is your job to document these interactions.
### 04.06 Skimming
Now, it's time to show what you can do using the skimming strategy.
### 04.06 Skimming Assignment
A Message for Garcia: http://www.foundationsmag.com/garcia.html
### 04.06 Space Text book (Earth Systems)
A Brief History of Cosmology: http://www-gap.dcs.st-and.ac.uk/~history/HistTopics/Cosmolog...
Anchors of the Universe: http://map.gsfc.nasa.gov/m_uni/uni_101bbtest1.html
Four Pillars of Cosmology: http://www.damtp.cam.ac.uk/user/gr/public/bb_pillars.html
Big Bang theory: http://www.umich.edu/~gs265/bigbang.htm
Life and death of stars: http://map.gsfc.nasa.gov/m_uni/uni_101stars.html
Planets: http://nineplanets.org/
NASA planets: http://pds.jpl.nasa.gov/planets/
Remember: Do NOT copy the text from these Internet sites word for word. I have read the information on the sites. I will recognize it if you cut and paste it into your text. You will FAIL the assignment if you cut and paste the information from the sites into your textbook. You MUST write it in your own words.
### 04.06 Translating Functions (Math Level 1)
Graph parent functions and transformations of exponential functions.
Something to Ponder
How would you explain the meaning of vertical shift and how it is used to graph exponential functions?
Mathematics Vocabulary
Parent Function: the most basic equation of a function, before any shifts or alterations
Translation: a transformation that shifts a function horizontally, vertically or both
Vertical Shift: a transformation up or down
Learning these concepts
Click each mathematician image OR click the link below to launch the video to help you better understand this "mathematical language."
SCROLL DOWN TO THE GUIDED PRACTICE SECTION AND WORK THROUGH THE EXAMPLES BEFORE SUBMITTING THE ASSIGNMENT!
### 04.06 Applying your Skills: Video 4 (Participation Skills and Techniques)
teacher-scored 25 points possible 60 minutes
This assignment should be completed by WEEK 3 of this class
Standard Four Video INSTRUCTIONS: You will need to submit an instructional video demonstrating an exercise of your choice from the following list. If you are unable to make a video of YOURSELF, you can use PowerPoint to create a presentation USING pictures of YOU demonstrating the basic skills for the exercise you have chosen. (As in Quarter 1, there need to be as many pictures as there are CRITICAL CUES for each exercise.)
****Here is the list of acceptable activities you can choose from
*** You must choose an activity from THIS LIST
sit-ups, pull-ups, crunches, push-ups, biceps curl, shoulder press, squats, rows, leg curls, chest fly.
SAFETY IS PARAMOUNT. Take all reasonable safety precautions for the chosen activity. You may not receive credit if your teacher finds that you have not taken safety into consideration. *You will need to demonstrate the correct way to do whatever exercise you choose. You must research the correct form and sequence for your activity (use books or the internet, or ask your coach if you have one for the activity).
You will also need to discuss the importance of the FITT principle as it relates to your activity of choice.
INSTRUCTIONS FOR THE VIDEO: *Your video needs to be at least one minute long and NO longer than two minutes. If you would rather take pictures and use a video editor program to make a video/PowerPoint, you can, but you must have a minimum of 4-6 pictures.
*You MUST be the star of your video, the video needs to be of YOU teaching and demonstrating how to perform the skill or exercise, so you might need to get a friend to do the filming. You will also be the narrator of the video.
*Assume that you are making this video for another student who has never tried your particular activity/sport.
*You CANNOT use the same video for any other PE Skills and Techniques assignment, including both quarters. Example: If the activity you chose was tennis, you would need to show the basic skills of tennis, such as:
How to grip the tennis racket. How to serve a tennis ball. What a backhand swing looks like and how to accomplish it. Etc.
****************************************
When you have completed this process, submit your video in the submission box by pasting your URL in the text box. Make sure you include the following observation questions:
1. From the list of activities - sit-ups, pull-ups, crunches, push-ups, biceps curl, shoulder press, squats, rows, leg curls, chest fly, which activity did you choose to demonstrate and teach?
2. What source(s) did you use to research the correct technique?
3. What did you learn from watching yourself in the video about how you might improve your technique in your activity (i.e., critique yourself)?
4. How can you implement the FITT principle in your activity to improve yourself in the activity of your choice?
****************************************
INSTRUCTIONS ON HOW TO SUBMIT YOUR VIDEO: You will need a computer with a connection to the Internet and your digital video content (under two minutes, please).
You can choose between
"Photobucket" - image hosting and video hosting website
WHICHEVER resource you decide to use to submit your video, you need to submit the LINK to your video with your questions in the assignment submission, so the questions and the video are together to be graded.
****DO NOT send the video to my EMAIL!!! *****
In YouTube, you need to make your video "available to the world." When you go to the "My Videos" section of your YouTube account, play the video you want to submit. At this point, click on "Share this Video," then copy the URL address and paste it in the text entry with your questions or the comment box, and submit it WITH YOUR ASSIGNMENT.
Photobucket is very similar. If you don't have a video camera, you can use still images in a slide show with narration that is converted to digital video.
IF you decide to NOT use PHOTOBUCKET or YOUTUBE... You can upload your video or power point to your GOOGLE drive and share it and then copy the link, and paste it into the text entry submission page with your Questions and submit it.
Please make sure your video is in one of the following formats or you will have to redo the video.
Pacing: complete this by the end of Week 3 of your enrollment date for this class.
### 04.06 Circuit training workout (Fitness for Life)
(Image credit: MC3 Donald White Jr./U.S. Navy, Official Navy Page from United States of America, public domain, via Wikimedia Commons)
Circuit training is a method that combines muscular strength, muscular endurance, and aerobic conditioning. Once you perform this type of workout, you can decide how to incorporate circuit training into your overall workout program.
### 04.06 Circuit training workout assignment (Fitness for Life)
teacher-scored 50 points possible 60 minutes
Introduction: The purpose of this assignment is to introduce you to circuit training.
Task: Jog or walk for 5 to 10 minutes to warm up.
Conduct the Circuit Training Workout. This workout asks you to complete a circuit training workout for 30 minutes. You may use the one found on pages 72 – 74 of the text, or go to the link below this assignment and complete the total body workout suggested there by “Sports Fitness Advisor”. Or you may substitute a different circuit training workout of your choice, but you MUST justify your alternative workout for full points. (You might want to pre-approve your alternative workout with your teacher.)
• If you use the textbook workout, use the text for photos of the proper way to do activities.
• If you use the on-line “Sports Fitness Advisor” workout, you will need some minor equipment. You may substitute household items where appropriate. Follow the directions given for proper technique.
Describe which workout you chose, and list the exercises completed.
• You might consider checking out the holiday circuit workout video link below for additional pointers.
• Studio C's "P90 X" link demonstrates circuit training like no other. See below.
Fill in the exercises you will perform BEFORE you begin the workout. That way you will only need to fill in the number of reps performed.
• Have a visible clock, with a second hand, that you can easily see WHILE you exercise.
• Participate in each exercise for 1 minute and 50 seconds. Allow 10 seconds between exercises.
• When you check heart rate, check for 10 seconds. Multiply by 6 AFTER the entire workout is completed.
• Continue the workout, rotating through the exercises, for a total of at least 30 minutes.
Copy and paste the section between the lines of asterisks into a word processing document on your computer. Print a copy to take with you to your workout. Complete your work, and save a copy for yourself. Don't forget to highlight your answers. Then submit your work using the assignment submission window for this assignment.
***************************************************************************
Name:________________ Date:________________ Workout: _______________________
Exercise / Reps
1. _______________________ ____________
2. _______________________ ____________
3. _______________________ ____________
4. _______________________ ____________
5. _______________________ ____________
Heart rate after 10 minutes ______________
6. _______________________ ____________
7. _______________________ ____________
8. _______________________ ____________
9. _______________________ ____________
10. ______________________ ____________
Heart rate after 20 minutes ______________
11. ______________________ ____________
12. ______________________ ____________
13. ______________________ ____________
14. ______________________ ____________
15. ______________________ ____________
Heart rate after 30 minutes ______________
(30 pts. for completion.)
9. (5 pts.) Did you enjoy alternating muscular endurance/strength activities with aerobic activities as is done in a circuit? • Why or why not?
10. (5 pts.) Now that you have completed workouts to improve muscular endurance, muscular strength, and both (circuit training), which type of workout did you like the best and why?
11. (10 pts.) Given your experience with this workout, create and describe a program that you would use to incorporate circuit training into your future workouts. List at least 5 different exercises and provide the number of repetitions or duration for each.
• General Description: • Exercises:
1 #reps: 2 #reps: 3 #reps: 4 #reps: 5 #reps:
*****************************************************************************
Pacing: complete this by the end of Week 6 of your enrollment date for this class.
Sports Fitness Advisor circuit training workout: http://www.sport-fitness-advisor.com/circuit-training-exerci...
Got an extra minute and twelve seconds? My holiday circuit workout video will give you an idea how circuit training is done. (See if you can catch my typo): https://www.youtube.com/watch?v=9kJAcxo7xBw
Just for Fun from Studio C - P90 X: https://www.youtube.com/watch?v=pSr7HPikHog
### 04.06 Domain, Range, Graphs and Transformations of Absolute Value Functions (Math Level 2)
Identify the domain and range of absolute value functions and graph absolute value functions and the transformation of their parent functions.
Recall that the absolute value is defined as the distance a number is from zero. Since it is a distance, the absolute value will always be positive, or zero. Absolute value is represented by two parallel vertical lines on either side of a term. For example, $\left | 3 \right |$.
An absolute value function is a function whose rule contains an absolute value expression. The parent graph for absolute values is f(x)=|x| and it looks like this:
The graph is a V shape. It has a vertex and is symmetrical about the vertical line running through the vertex.
The general form of an absolute value function is $y=a\left | x-h \right |+k$. The absolute value function has many similarities with the quadratic function.
• The vertex of the absolute value function is (h,k). Note that the value of h is the opposite of what is in the equation.
• It is symmetric about the line x = h.
• If a > 0, (a is positive), the graph opens up. If a < 0, (a is negative), the graph opens down.
• If $\left | a \right |$ < 1, the graph will be wider. If $\left | a \right |$ > 1, the graph will be narrower.
You will recall that the domain is the set of all possible inputs (all the x values) of a function which allow the function to work. The range is the set of all possible outputs (all the y values) of a function.
Let’s apply this to absolute value functions.
Look at the graph of f(x) = |x|:
Absolute value function graphs always look like the letter V. All such functions will have a domain that includes all real numbers. The range will vary depending on the value of the vertex. Can you see that the range of this function is all real numbers ≥ 0?
Now, let’s graph the function f(x) = |x + 2|:
• How does (x+2) change the domain?
• How does (x+2) change the range?
The domain is still all real numbers and the range is still all real numbers ≥ 0. The (x+2) moved the vertex (0,0) to (-2,0).
What about the graph of f(x) = |x| + 2?
• How does +2 change the domain?
• How does +2 change the range?
The domain is still all real numbers and the range is all real numbers ≥ 2. The vertex is (0,2).
Here is the graph of f(x) = |x – 2|.
• How does (x – 2) change the domain?
• How does (x – 2) change the range?
The domain is still all real numbers and the range is all real numbers ≥ 0.
Finally, look at the graph of f(x) = |x| – 2?
• How does –2 change the domain?
• How does –2 change the range?
As must be obvious to you by now, the domain is still all real numbers but the range is all real numbers ≥ –2. The vertex is (0, –2).
Have you figured out a pattern?
• f(x) = |x + a| translates the graph to the left and does not alter the domain or the range.
• f(x) = |x – a| translates the graph to the right and does not alter the domain or the range.
• f(x) = |x| + a translates the graph up and does not alter the domain. However, the range is moved to ≥ a.
• f(x) = |x| – a translates the graph down and does not alter the domain. However, the range is moved to ≥ –a.
In this “family” of graphs, the original f(x) = |x| is called the parent function. The other graphs in the “family” are referred to as transformations of the parent function. Once you have graphed the parent function it is easy to graph the transformations.
Let's make things a bit more interesting. Look at the graph of f(x) = 2|x|:
The domain and range don’t change but the “V” is narrower.
What about the graph of f(x) = –2|x|?
Hmmm. The “V” is upside down! It is a reflection of f(x) = 2|x|.
The domain doesn’t change but the range is now all real numbers ≤ 0.
Of course, we can make things even more interesting.
How about the graph of f(x) = –3|x + 2| + 1?
Can you figure out the domain and range?
The domain is still all real numbers but the range is all real numbers ≤ 1. The vertex is (–2,1).
We can now add to our list from above:
• f(x) = a|x| compresses the graph but does not alter the domain or the range.
• f(x) = –a|x| compresses the graph, then reflects it across the x-axis and does not alter the domain. However, the range is changed to all real numbers ≤ 0.
• f(x) = a|x + h| + k translates the graph left or right depending on the value of h. It also translates the graph up or down depending on the value of k.
• Finally, it compresses the graph depending on the value of a.
Take time to practice with these types of graphs at the website listed below.
FooPlot: Absolute Value of x Functionhttp://fooplot.com/#W3sidHlwZSI6MCwiZXEiOiJhYnMoeCkiLCJjb2xv...
I highly recommend that you click on the links below and watch the videos before continuing:
mathispower4u: Ex 1: Graph a Transformation of an Absolute Value Function Using a Table: http://youtu.be/X_gqB9bVOVE
mathispower4u: Ex 2: Graph a Transformation of an Absolute Value Function Using a Table: http://youtu.be/zmlay7PlM-E
mathispower4u: Ex: Graph an Absolute Value Function Using a Table of Values: http://www.youtube.com/watch?v=PJt5dSj7PN4
mathispower4u: Ex 1: Find the Equation of a Transformed Absolute Value Function From a Graph: http://youtu.be/yYHhUYPYRl0
mathispower4u: Ex 2: Find the Equation of a Transformed Absolute Value Function From a Graph: http://youtu.be/TF0vbtrG-VQ
mathispower4u: Ex 3: Find the Equation of a Transformed Absolute Value Function From a Graph: http://youtu.be/HaTrbPTvGBQ
mathispower4u: Ex 4: Find the Equation of a Transformed Absolute Value Function From a Graph: http://youtu.be/AFOU73_LFKA
If, after completing this topic, you can state without hesitation that...
• I can identify the domain and range of absolute value functions.
• I can graph absolute value functions and the transformation of their parent functions.
…you are ready for the assignment! Otherwise, go back and review the material before moving on.
### 04.06 Domain, Range, Graphs and Transformations of Absolute Value Functions – Assignment (Math Level 2)
teacher-scored 66 points possible 40 minutes
Activity for this lesson
Complete the attached worksheet.
1. Print the worksheet and complete the assignment in the space provided. You may use additional paper if needed. Work all the problems showing ALL your steps.
2. Once you have completed the assignment, digitize (scan or take digital photo, up close and clear) and save it to the computer and convert it to an image file such as .pdf or .jpg.
Pacing: complete this by the end of Week 7 of your enrollment date for this class.
### 04.06 Following Instructions
Open and complete the Drawing Pictures Activity.
### 04.06 Lesson 4F “¿Cuál(es)? - Which?” (Spanish I)
Lesson 4F “¿Cuál(es)? - Which?”

[A copy of this lesson is available in a PDF file! If you prefer to use this type of document, just click on the following link to complete this lesson: SpI_Lesson4F]

In this lesson, we’re going to look at another important question word, “¿Cuál(es)? – Which?”. This question word is similar to the question word “¿Quién(es)? – Who/Whom?” (Spanish I – Quarter 1: Lesson 1J). These question words change depending on whether the thing/person being asked about in the question is singular or plural. (¡¡Remember!! – In Spanish, exclamation and question marks follow the same rule as quotes and parentheses: a mark begins the exclamation or question as well as ends it. It makes reading Spanish much easier, with fewer surprises!)

Looking at the examples above, the Spanish word “¿cuál(es)?” is used just like the word “which?” in English, but “¿cuál(es)?” can also be used in Spanish the same way that English speakers use the word “what?”. This can cause many English speakers to confuse the two Spanish words “¿Cuál(es)?” and “¿Qué?”, so it is important to understand the subtle difference!

“¿Qué?” – this Spanish question word is used when asking for an identification of an object, a definition, or an explanation.

“¿Cuál(es)?” – this Spanish question word is used when asking for a choice from few or many (even billions of) possibilities.

Now let’s look at the following examples to see when to use each word. So the question word “¿Cuál(es)?” is the correct Spanish word to use when asking for specific information about a person or people. Once you learn this question word, you can ask Spanish speakers many basic questions about their lives using the list of words below. (The YouTube video clip below shows exactly how this is done!)

Summary of Lesson: Hopefully, you have learned another important way of asking questions in Spanish!
In order to have a conversation with your native Spanish-speaking friend, you first need to be able to ask questions about the information that you would like to know. Gradually (and we promise that this will happen!), little by little, you will be able to understand when he/she responds! Just remember to use “más despacio, por favor – more slowly, please”!

Practice Exercises: Make sure you completely understand the concepts taught in this lesson by reviewing the following article!
### 04.06 More Car Insurance (Financial Literacy)
Identify more things to consider in purchasing car insurance and understand related terminology.
This isn't the kind of car wash you need for your ride. (Image credit: Andrew Smith, CC-BY-SA-2.0, via Wikimedia Commons)
BACKGROUND
Now that you know a few more things about insurance, we will consider insurance rates and insurance requirements for cars in Utah. The government does not require that you be insured against all risks. They DO require car insurance. Car insurance is also required by those who loan the money to buy a car, because they want to minimize their risks. They want to make sure they get their money back. Let's consider a few things about auto insurance.
Visit URL #1. Skip the “Compare Quotes Instantly” section and scroll down to “Utah Car Insurance Laws” to find out what car insurance is required in Utah. You may stop reading when you reach the section entitled “Where to get Utah auto insurance quotes.” Again, ignore the “sidebar” insurance advertisements and other price quote links. Then exit the web page and go to the next URL.
Visit URL #2 to learn more about "no-fault" insurance and "minimum liability rates."
Visit URL #3 to learn how to lower car insurance rates. Read the suggestions for lowering a young driver's car insurance rates.
### 04.06 More Car Insurance (Financial Literacy)
computer-scored 10 points possible 30 minutes
Take the Setting Priorities quiz. You must score 8 or higher on this quiz to continue. If your score is lower, simply re-take the quiz as many times as you need after reviewing the material.
Pacing: complete this by the end of Week 2 of your enrollment date for this class.
### 04.06 More Car Insurance links (Financial Literacy)
Utah auto insurance: http://www.autoinsuranceindepth.com/auto-insurance-Utah.html
Utah auto insurance #2: http://www.dmv.org/insurance/no-fault-states.php
How to lower your rates: http://www.today.com/id/19838230/ns/today-money/t/how-reduce...
### 04.06 Narrative Writing
Students will write a 600-800 word narrative on a topic related to "Love and Relationships"
The theme for English 11 Quarter 1 is “Love and Relationships.”
Definition of narrative: a story or account of events, experiences or the like, whether true or fictitious. A narrative is not a report or essay, but a story.
Summary of the assignments for this section: (complete instructions will be with each assignment)
The first assignment for this section will be to brainstorm a list of 25 things that come to mind about “love and relationships.” Make a list of things you could write about, then narrow down your topic. (5 Points)
Once you have decided on a topic, please write approximately two pages (600-800 words) in narrative form. (15 points for the submission)
After the paper has been turned in, the teacher will give you a set of revision instructions that you must complete to receive the final grade for this paper. (Final paper: 40 points) Attached is the scoring rubric for the final paper.
### 04.06 Numbers Oral Quiz(German1)
04.06 Numbers Oral Quiz (German1): http://media.ehs.uen.org/GermanEXEQ2/Numbersoralquizexecutal...
### 04.06 Online Resources Test--ORT (Computer Technology)
computer-scored 55 points possible 30 minutes
Review
Review all the reading materials and activities to study for the posttest.
Take the Online Resources Posttest (ORT)
This test covers the concepts from the activities, reading, and handouts for the Online Resources Unit. You can use your notes and handouts for this test. There is a one hour time limit.
### 04.06 Prenatal Development(Childdev1)
Lesson 4.6: Prenatal Development

Open the file above, "Stages of Prenatal Development", in the format (Microsoft Word or WordPerfect) that you prefer and go through the materials presented.

STAGES OF PRENATAL DEVELOPMENT

TIMELINE OF PRENATAL DEVELOPMENT
Day 1 - conception takes place.
7 days - tiny human implants in mother’s uterus.
10 days - mother’s menses stop.
18 days - heart begins to beat.
21 days - pumps own blood through separate closed circulatory system with own blood type.
28 days - eye, ear and respiratory system begin to form.
42 days - brain waves recorded, skeleton complete, reflexes present.
7 weeks - thumbsucking (photo).
8 weeks - all body systems present.
9 weeks - squints, swallows, moves tongue, makes fist.
11 weeks - spontaneous breathing movements, has fingernails, all body systems working.
12 weeks - weighs one ounce.
16 weeks - genital organs clearly differentiated, grasps with hands, swims, kicks, turns, somersaults (still not felt by the mother).
18 weeks - vocal cords work – can cry.
20 weeks - has hair on head, weighs one pound, 12 inches long.
23 weeks - 15% chance of viability outside of womb if birth premature.*
24 weeks - 56% of babies survive premature birth.*
25 weeks - 79% of babies survive premature birth.*
(*Source: M. Allen et al., "The Limits of Viability." New England Journal of Medicine. 11/25/93: Vol. 329, No. 22, p. 1597.)

Introduction/Summary: The duration of pregnancy is divided into three equal segments called trimesters. The first trimester (months 1-3) is essential to the proper development of the infant and encompasses both the ovum and embryonic periods of prenatal development. This is when all organs, nerve cells and brain cells develop. This is when most spontaneous abortions (miscarriages) occur. They generally are caused by abnormal development of the fetus and are nature’s way of eliminating a chromosomal abnormality.
It is vital that all necessary nutrients be available to the fetus in order for it to develop properly. The second trimester (months 4-6) is often referred to as the “golden trimester”. This is when the mother generally feels the best: morning sickness and nausea have generally disappeared and the mother is quite comfortable. The third trimester comprises months 7-9. These are important months for the baby as its organs and body systems mature and prepare to function on their own. The fat accumulated during this time will give the baby a “head start” on life.

Prenatal development is sometimes separated into three developmental periods. The first is referred to as the period of the zygote. This stage begins at conception and lasts until the zygote is implanted in the mother’s uterus, about 10-14 days. The zygote grows to be about the size of a pinhead. Roots grow from the zygote into the wall of the uterus, where they can receive nutrients from the mother’s blood.

The period of the embryo lasts from about 2 weeks to 8 weeks after conception. The embryo is attached to the mother by the umbilical cord (20 inches long), which reaches from the embryo’s stomach to the wall of the uterus. The umbilical cord contains arteries which carry the embryo’s waste products away to the mother’s blood system to be purified. It also brings oxygenated and nutrient-rich blood back to the embryo to keep it alive. The umbilical cord is connected to the placenta, an organ which serves as a medium for the exchange of nutrients and waste between the mother and the fetus. Throughout this period, the embryo is inside the amniotic sac (a bag filled with a watery substance called amniotic fluid). The fluid protects the developing baby against bumps, bruises and temperature changes. During this period all of the organs that will be present at birth are formed.

The third developmental period is called the period of the fetus. 
This period extends from the end of the second month of pregnancy until birth. During this stage, the developing baby is referred to as a fetus. The body parts, organs and systems which were formed during the embryo period become much more developed and begin to function. The fetus begins to resemble a human being and its features increase in clarity. During the fetal period the baby may increase in length as much as twelve inches.

MONTHLY DEVELOPMENT

Month 2: The embryo increases in length to about 1 ½ inches. Bones and muscles begin to form. The head grows rapidly at first, accounting for about half of the embryo’s size. The face and neck begin to take on human form. The brain develops very rapidly. Leg and arm buds form and grow, and the eyes begin converging toward the center of the face. The mouth and nose form. Major organs of the digestive system become differentiated. The heart has been beating for about a month now.

Month 3: The fetus measures about 3 inches from head to buttocks and weighs about ½ ounce. The fetus has all of its major systems and they are functioning. However, it is still unable to survive independently. No new organs will need to be formed, but the ones that are present will need time to develop and mature. The digestive system is active. The liver and kidneys are functioning. The fetus practices swallowing and breathing amniotic fluid, and its vocal cords are developing. The roof of its mouth comes together and fuses. Taste buds appear, sex organs continue to develop, buds for all temporary teeth are formed and bone formation begins. During this month, arms, legs and fingers begin to make spontaneous movements. The eyelids close and are sealed shut at this time. They will reopen at about 6 months.

Month 4: The fetus grows to almost 6 inches in length and 4 ounces in weight. The skin is thin, loose and wrinkled and appears red because of underlying blood vessels. The face acquires a human appearance. 
The body outgrows the head at this time. Hands and feet become well formed and finger closure is possible. The fetal reflexes become more brisk as it begins to stir and move the arms and legs. In males, the testes are in position for later descent into the scrotum, and in females, the uterus and vagina are recognizable.

Month 5: The fetus is now about 12 inches long and weighs about 8 ounces. During this month the mother will probably feel the baby’s movement, called quickening. It is suspended in a quart of amniotic fluid. Although development seems advanced, the skin and digestive organs are not prepared to exist on their own, and there is no provision for regulating body temperature. The fetus grows a fine dark body hair called lanugo and collects vernix, a waxy coating that covers and protects the skin. The nose and ears begin ossification, the skeleton hardens, and the heartbeat can now be heard. Fingernails and toenails begin to appear and the baby will wake and sleep. Sweat glands are formed and functioning.

Month 6: The fetus increases in weight and is now between 1 ½ - 2 pounds. The eyelids, which have been fused shut, are now open and completely formed. The eyes look up, down and sideways. Eyebrows and eyelashes are well defined, and taste buds appear on the tongue and in the mouth.

Month 7: The fetus is now about 15 inches long and weighs between 2 ½ - 3 pounds. It can cry weakly and can suck its thumb. The fetus can make a variety of reflex movements: startle, grasp, and swimming movements. The cerebral hemispheres cover almost the entire brain.

Month 8: The fetus will gain 2-3 pounds during this month, which it will need to stay warm following birth. The fingernails reach beyond the fingertips and much of the lanugo is shed. By the end of this month, the fetus will most likely settle into the head-down position. However, the baby is capable of changing positions.

Month 9: The fetus reaches full growth. 
It measures 14-15 inches from head to buttocks and weighs 6-8 pounds. During this last month, the baby acquires antibodies from its mother which will give it temporary immunity against some diseases. The eyes are normally blue at birth because pigmentation is not normally formed until after a few weeks of exposure to light. Vernix is present over the entire body. The fetus alternates between periods of activity and periods of quiet. The organs increase their activity, and the fetal heart rate increases to a rapid rate. Birth usually occurs approximately 280 days after the first day of the mother’s last menstrual period.

TRIMESTERS

FIRST TRIMESTER

The Mother: There are many signs and symptoms that help determine pregnancy. The first and most obvious change is missing a menstrual period. Usually with this symptom a woman will suspect pregnancy, although some women may miss two periods (if their cycle is not regular) before suspecting pregnancy. A simple urine test from the doctor will show whether or not a woman is pregnant. Home pregnancy tests are available for $10-$15 and are quite accurate, but are no substitute for a doctor’s test or visit. (Most doctors will give their own test anyway!) Other changes that take place in the woman are as follows:

Morning sickness/nausea: This probably occurs due to the change in hormones or a drop in blood sugar level. Morning sickness does not just take place in the morning. Many women say it is associated with smells or foods they eat. Not much can be done to cure morning sickness. (Drugs or over-the-counter stomach remedies should not be taken.) Watching the diet can help relieve some of the symptoms. Your doctor may recommend eating several small meals throughout the day and/or eating something before getting out of bed, such as crackers. Also, there is a vitamin B6 shot the doctor can give that seems to help many women.

Frequent urination: Because the uterus lies next to the bladder, the changes in the uterus cause crowding. 
Therefore, the need for urination is increased.

Cravings: Unusual food cravings are also common during pregnancy. Giving in to them once in a while is all right. If you crave non-food items, consult your doctor.

Breasts: Swollen, tender breasts are common in pregnancy. This may occur before the menstrual period is missed. The breasts will enlarge a lot during the first few months. Although nothing will prevent stretch marks, lotions can relieve the tightness and itching associated with pregnancy.

Fatigue and dizziness: These are two common symptoms of early pregnancy. To alleviate dizzy spells, get up slowly. To help with fatigue, get plenty of rest and eliminate unnecessary physical exertion. However, maintaining a regular pre-pregnancy exercise program can be most beneficial as long as it has your doctor’s approval.

The Baby: During the first trimester many changes take place for the baby. At four weeks the embryo is approximately ¼ inch long and its heart has started to beat. By six weeks after fertilization the embryo is about 5/8 inch long and has developed most of its vital organs. Its bones are still soft but the skeleton is well formed. The arms and legs are forming. At eight weeks the embryo officially becomes a fetus. In two months the mother has missed two menstrual cycles and her body has created a completely new individual. By the ninth week the fetus floats in the amniotic fluid and is nourished from the placenta through the umbilical cord. At twelve weeks the fetus is 2 ¾ inches long. Most of its organs are working, including the kidneys. Its arms, legs, hands, fingers, etc. are fully developed. The nails on its fingers and toes are starting to develop.

The Mother: At the end of three months the baby is essentially complete. From now on the mother’s uterus is busy helping the growth and perfecting of the baby. 
The doctor should be called immediately if any of these symptoms occur:

- Vaginal bleeding
- Sharp abdominal pain or cramping
- Loss of fluid from the vagina
- Severe or prolonged nausea or vomiting
- Frequent dizzy spells
- Painful urination
- High fever over 100°F
- Vaginal discharge that is irritating

Some other things to consider:

- Do not take any medication unless approved by your doctor. This includes over-the-counter drugs.
- No drugs or alcohol. These have a tremendous effect on the baby.
- No X-rays. Radiation can interfere with cell division and organ development.
- No saunas and hot tubs. The high and prolonged temperatures can be harmful to the fetus.
- Vaccinations. Because some vaccinations contain live viruses, they should not be taken during pregnancy. However, do vaccinate the children in your home to protect them against these deadly diseases.
- Cats. A parasite found in cats, cattle, sheep, and pigs can cause a disease in humans called toxoplasmosis. This can cause severe damage to an unborn child. Because of this risk, you should avoid undercooked meat and changing cat litter boxes.

SECOND TRIMESTER

The Mother: The woman’s body has many changes taking place:

Skin: Each woman’s body reacts differently to pregnancy. Skin may become oily, dry, scaly, etc. The skin must stretch over the growing uterus; therefore, stretch marks often appear. Facial skin may darken. This is called chloasma or the mask of pregnancy. Staying out of the sun can help, but usually there is nothing that can be done to prevent it. It usually disappears after pregnancy. Another area that darkens is a line from the navel to the pubic hair. This is called linea nigra. This line disappears after pregnancy. Many women have this line – some darker than others.

Emotions: Because of the hormonal changes within the woman’s body, she may experience mood swings, depression, and even bad dreams. She simply must adjust and realize that the moods will pass. She may need a few extra breaks or time to relax. 
She should not blame herself but realize that this is normal with all of the changes taking place in her life.

The Baby: By the fourth month the fetus is about two inches long. The first outlines of the face are showing. The muscles have developed and the baby is beginning to move. The baby weighs about ¾ of an ounce (the weight of an ordinary letter). The umbilical cord and placenta are now the source of nourishment from the mother.

By the fifth month the fetus is six inches long and is completely formed. The baby’s movements are noticeable to the mother and she will feel them regularly. The skull bones are the most important bones being developed at this time. These will not complete development until after the baby is born.

The sixth month is just past the half-way mark. The eyes are now fully developed. The ears are complete. There is a lot of evidence to show that the baby can hear the outside world. The sounds are probably muffled, maybe like sounds under water. It is also believed the baby can hear the mother’s voice and heartbeat and, of course, the rumbling of her stomach. Fingerprints are formed.

THIRD TRIMESTER

The Mother: The most obvious change that takes place in the third trimester is in the woman’s body. The abdomen enlarges and fatigue is common. The baby moves a lot now. The mother should feel it move every couple of hours. If she does not, she should call her doctor. Generally expectant fathers take more interest during this last trimester. This is because they can feel the baby move, and the reality of the impending birth makes them anxious and excited. A lot of women become more interested in how their bodies function during pregnancy, especially with a first pregnancy. They read everything they can to learn about this process.

There are some common discomforts many women experience during the third trimester:

Heartburn is caused by the large size of the baby and the stomach being pushed up. 
Usually cutting down on the size of meals will help with this problem. Eating several small meals is suggested. Another help is cutting out greasy and spicy foods. Again, the caution: do not take any over-the-counter medicines without your doctor’s approval.

Shortness of breath is due to the size and activity of the baby. Taking deep breaths is a difficult task. Before delivery the baby “drops,” making breathing easier.

Some women experience heart palpitations. The body volume has increased and sometimes the heart has to work overtime. However, the heart can stand the strain.

Leg cramps are common, especially late in the pregnancy. These are often called “charley horses.” The woman must walk them off or relax until they subside. Providing the body with plenty of calcium is important. Sometimes taking extra calcium is helpful.

Round ligament pains: Because of all the pressure on the ligaments in the lower abdomen, a mild to moderate pain sometimes occurs. There is a product called a sling available in women’s personal care departments. This helps support the abdomen and back, relieving pain and discomfort.

The Baby: The last trimester is mostly a time for the baby to grow by developing a layer of fat. The organs develop and get ready for the baby to be born. The lungs develop in preparation for breathing and the baby is now head-down. It does not have room to roll around as in previous months. It still moves, but is not as active in the last few weeks because of limited space. By the end of 38-40 weeks, the baby “drops” – giving the mother a little breathing space.

DANGER SIGNALS

As in the other trimesters, there are danger signals to watch for:

- Vaginal bleeding
- Sharp abdominal pain/cramping
- Loss of fluid
- Frequent dizzy spells
- Visual disturbances
- Nausea or vomiting
- Sudden and excessive swelling of face, hands, and feet
- Headache
- Burning, painful urination
- Fever
- Vaginal discharge

Call your doctor if any of these problems occur. 
The recommended weight gain for an average woman during pregnancy is 25 to 30 pounds. This weight is distributed as follows:

- Baby – 7 ½ pounds
- Placenta – 1 ½ pounds
- Uterus – 2 pounds
- Amniotic fluid – 1 ½ pounds
- Extra blood volume and water retention – 4 ½ pounds
- Breast tissue – 3 pounds
- Maternal stores of protein – 4 pounds
If after completing this lesson you can state without hesitation that...
Objectives:
1. I can solve radical equations
2. I can use the Pythagorean Theorem and distance formula to solve problems
3. I can solve radical equations
4. I can isolate the radical on one side of the equation
5. I understand the Pythagorean Theorem
6. I can solve for the unknown length of a side of a triangle
7. I understand the distance formula
8. I can solve for the distance between two points on a graph
…you are ready for the quiz. Otherwise, go back and review the material before moving on!
• Algebra Structure and Method, Book 1 (McDougal Littell) - Chapter 11
Lesson 36 (Course Material)http://www.montereyinstitute.org/courses/Algebra%20IB/course...
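As a quick self-check on the Pythagorean Theorem and distance-formula objectives above, here is a small Python sketch. This is my own illustration, not part of the course materials, and the helper name `distance` is hypothetical:

```python
import math

def distance(p, q):
    """Distance between two points (x, y), from the Pythagorean Theorem:
    the segment between them is the hypotenuse of a right triangle
    whose legs are the horizontal and vertical differences."""
    dx = q[0] - p[0]
    dy = q[1] - p[1]
    return math.sqrt(dx ** 2 + dy ** 2)

# A 3-4-5 right triangle: legs of 3 and 4 give a hypotenuse of 5.
print(distance((0, 0), (3, 4)))  # 5.0
```

The same function doubles as a Pythagorean Theorem solver: the distance between the two endpoints of a hypotenuse is exactly the side length the theorem predicts.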
### 04.06 Search Mechanisms
Using Search Mechanisms

As mentioned previously, internet searching is a critical skill to learn and master. Attached above is a helpful chart called "Search Mechanisms" to guide you on which search tool to use in various situations. Review the chart and become familiar with the information.

There is an interesting search activity on the internet called Googlewhacking. A Googlewhack is a type of contest for finding a Google search query consisting of exactly two words, without quotation marks, that returns exactly one hit. A Googlewhack must consist of two actual words found in a dictionary. A Googlewhack is considered legitimate if both of the searched-for words appear in the result page. Published Googlewhacks are short-lived, since when published to a website, the new number of hits will become at least two: one for the original hit found, and one for the publishing site. If you want to, go ahead and try it (not required). It's very difficult to come up with one. Double check with your parents before you try, and make sure your browser "safe search" is on. Do not follow any questionable links.

Also attached above is a document named "Search Engine Info Packet." Review the information on search engines carefully. Also attached is the document "Internet Searches Worksheet." Use the information from the Search Engine Info Packet to fill out the worksheet completely. You will upload this worksheet as part of the assignment submission.
### 04.06 Search Mechanisms (CompTech2007)
Using Search Mechanisms This is a helpful chart to guide you on what search tool to use in various situations.
### 04.06 Search Mechanisms Activity
teacher-scored 10 points possible 45 minutes
Assignment (OR9)
In the assignment section of the course you will UPLOAD the completed "Internet Searches Worksheet" that you filled out from the lesson materials above.
Pacing: complete this by the end of Week 5 of your enrollment date for this class.
### 04.06 Section 4.3: Riemann Sums and Definite Integrals (Calculus)
Section 4.3, Concept 1: Riemann Sums

Read Concept 1, Example 1. Remember you are adding (summing) many rectangles' areas. The area of a rectangle is height times width. In Fig. 4.18, the height is f(c) (determined by the height of the curve) and the width is Δx. Your summation is simply summing height times width for every rectangle. If n = 3, you've got 3 rectangles; if n = 4, you sum up the area of 4 rectangles. But, if we have unequal rectangles, that width may not apply when i = 4. So, we need a general form for the distance, or width, of every rectangle: you subtract the (i-1)st interval endpoint from the ith. So, they substituted i-1 in for i in the formula for x sub i, then simplified. Then they substituted that into the limit, with f(c) also substituted, and the limit is found. That limit is the area under that curve!

Section 4.3, Concept 2: Definite Integrals

Instead of writing limits, we are now going to use integral notation. And, we are going to specify limits of integration. These limits of integration are values a and b that we want to find the area under a curve between. Read Example 2. What's wrong with the answer? Is that the correct area? It isn't. So, simply computing the definite integral does not necessarily give the area under the curve, because area below the x-axis is negative!

Now complete problems 7, 9, 13-23 odd, 27, and 31.

Assignment 6, Section 4.3: 7, 9, 13-23 odd, 27, 31, 41, 43, 45, 46, 47, 51, 53, 69

Section 4.3, Concept 3: Properties of Definite Integrals

These properties are critical to your being able to manipulate integrals and solve them with ease. You may want to keep a note paper of them as you read about them so you have them handy to refer to!

Complete TB Assignment 6.
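The "sum height times width for every rectangle" idea can be sketched in a few lines of Python. This is an illustrative helper of my own (not from the textbook), using equal-width rectangles and left endpoints for c:

```python
def riemann_sum(f, a, b, n):
    """Left Riemann sum: add up height f(c) times width dx
    for n equal-width rectangles between a and b."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(n))

# As n grows, the sum approaches the definite integral of x^2 on [0, 1],
# which is exactly 1/3.
print(riemann_sum(lambda x: x**2, 0, 1, 1000))

# Area below the x-axis counts as negative: the integral of x on [-1, 0]
# comes out close to -1/2, not +1/2.
print(riemann_sum(lambda x: x, -1, 0, 1000))
```

Try increasing n and watch the first value creep toward 0.3333...; the second example is the same caution as Example 2 — the definite integral is signed area, not geometric area.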
### 04.06 Section 4.3: Riemann Sums and Definite Integrals (Calculus)
MIT OpenCourseWare Lecturehttp://ocw.mit.edu/courses/mathematics/18-01-single-variable...
### 04.06 Skimming Assignment
teacher-scored 20 points possible 30 minutes
Following Directions: Click on the URL for “A Message for Garcia”.
a. Before reading the link, know that you will SKIM this text. In other words, do not read it word for word, simply skim it for the main ideas, key terms, read the first and last paragraph or the first sentence of each paragraph. Don’t hesitate to make notes on a separate sheet as you read.
b. Then, in 80 words or less, tell me what this reading is about (based on your skimming) and how it can be used to support the idea of following directions.
### 04.06 Space Text book (Earth Systems)
teacher-scored 50 points possible 700 minutes
Assignment:
This is a BIG project. You are to write a textbook that documents many of the components of the space systems of which Earth is a part. Your textbook will contain four chapters, as outlined below.
· Be written in your own words (that means it should be easily understood by the typical high school freshman)
· Cover ALL of the information required
· Include illustrations and diagrams.
· Be scientifically accurate
· Be referenced. You MUST cite all information sources you use, including the addresses of all Internet sites.
· Include THREE multiple-choice questions for each chapter. The multiple-choice questions must have answers with them. Each chapter is worth 10 points. The entire project is worth 50 points. (10 points for the information included in each of the four chapters and an additional 10 points for having illustrations and multiple choice questions.)
Suggested Internet sites will be given for each chapter (see the URL's). You may use these sites, but you do not have to. You may use any sources of information available to you, provided that you reference them. Each chapter will probably be about three pages long.
You should include enough information to adequately cover the subject but I certainly do not expect a major research paper on each topic.
I STRONGLY suggest that you complete the first chapter and send it to me before working on the remaining three chapters. I will review your work and will tell you whether what you have done is satisfactory or if you need to invest more time and effort into your work. You may give your FOUR chapters any title of your choosing, but the contents of your chapters must be as follows:
Chapter One: The History of Cosmology
Cosmology is the scientific study of the large-scale properties of the Universe as a whole. Over the centuries, mankind’s ideas about the nature of the universe have changed significantly. In this chapter you will describe how the accepted ideas regarding the nature of the universe have changed in science throughout history.
In your chapter, include explanations of the ideas of the following individuals:
· The ancient Greeks
· Ptolemy
· Copernicus
· Galileo
· Kepler
· Newton
· Hubble
· Einstein

ALSO, identify at least two examples of how technology has helped scientists investigate the universe. (HINT: The telescope is an example of technology.)
Do NOT copy the text from your sources word for word. I have read the information on the sites. I will recognize it if you cut and paste it into your text. You will FAIL the assignment if you cut and paste the information from your sources into your textbook. You MUST write it in your own words.
STOP! Send me your first chapter for review before you continue.
Chapter Two: Origin of the Universe
· Describe the Big Bang Theory.
1. What does the theory state?
2. When did the Big Bang occur?
3. What happened during the Bang?
4. What happened in the microseconds, seconds, minutes, and years after the Big Bang?
· Describe at least three pieces of evidence that support the Big Bang Theory, including:
1. Red shift evidence. What is a red shift? What does it tell us about the relative motion of a star or of the universe? Does the red shift indicate the universe is expanding or contracting?
2. Cosmic microwave background energy. Where does it come from? How does its presence support the Big Bang Theory?
3. The numbers and kinds of atoms found in the universe. There are many more hydrogen and helium atoms in the universe that any other kind of atoms. Where did they come from? How did they form? How does their relative abundance support the Big Bang Theory?
REMEMBER to include your three multiple-choice questions as well as your references. Also, write the text in your own words and include illustrations or diagrams.
Chapter Three: Star Life Cycles
· Describe the life cycle of a typical star.
· Compare life cycle of the sun to the life cycle of other stars.
· Hydrogen and helium (light elements) were formed during the Big Bang. Describe how the heavier elements were formed. (HINT: The answer has to do with the end of a star’s life!)
· While you are at it, explain the origin of heavy elements on Earth. Where did most of the matter on Earth come from? (HINT: The answer is the same as the answer to the preceding question!)
REMEMBER to include your three multiple-choice questions with answers. Also, document your sources and write the text in your own words. Have you included diagrams or illustrations?
Chapter Four: Life on Earth
· Describe the unique physical features of Earth’s environment that make life on Earth possible. Consider things like the atmosphere (including ozone layer and greenhouse gases), solar energy, and water.
· Choose two planets in our solar system and compare them to Earth.
Consider their:
1. average temperatures, low and high
2. location in the solar system
3. satellites
4. atmosphere
5. gravity
6. common elements
7. geology
8. other interesting facts
REMEMBER to include your three multiple-choice questions with answers. Also, write the text in your own words and identify your sources of information. Don’t forget your diagrams and/or illustrations!
Send your completed text to me.
GOOD LUCK AND HAVE FUN!!!!!!!!!
Pacing: complete this by the end of Week 7 of your enrollment date for this class.
### 04.06 Thesis My Thesis . . . - English 10
teacher-scored 50 points possible 60 minutes
What question are you answering?

Tips for Writing a Thesis Statement: Now that you have investigated and written about your topic in depth, it is time to finalize your thesis statement. You have discovered what truly interests you about your subject through your research and writing. You now need to review your writing and your previous thesis statement to ensure that it states what you have presented in your paper. This statement serves as the main idea for your paper. It should express what you believe your research proves. An effective thesis statement tells readers specifically what you plan to tell them in your paper. It serves as a guide to keep your ideas on track as you present your research.

Make sure your thesis does the following:

- makes a statement of importance, takes a stand of some sort, or expresses a specific perspective or feature of the subject being researched.
- briefly presents the most important point(s) of the paper in an effort to set a specific approach/direction and/or purpose for your writing.

The following formula and examples could be used to form your finalized thesis statement:
A specific subject (THE GREAT WALL OF CHINA) + a particular stand, feeling, or feature (WAS SPECIFICALLY BUILT AS A DEFENSE SYSTEM) = an effective thesis statement of purpose

Sample Thesis Statements:

1. What is the Great Wall of China and why was it built? OR
2. Certain political forces were at work when the Great Wall of China was being developed as a defense system.
Refine your thesis statement according to the following checklist
Thesis Checklist Make sure your thesis statement… _____ identifies a limited, specific subject _____ focuses on a particular stand, feature, or feeling about the subject _____ is stated in a clear, direct sentence (or sentences) _____ can be supported with the convincing facts and details you have researched and included in your drafts _____ meets the requirements of the assignments and explains what you are wanting to present or prove to your readers
You can use your original thesis phraseology, if it seems to meet the stated requirements, but make sure you have included the main idea that you have formulated throughout the research process of your topic.
Final Submission Inclusions:

a. copy of the original thesis, labeled as thesis 1
b. copy of the new thesis, pasted below the old one and labeled as thesis 2
c. the body of the research paper, attached below the two thesis statements

Grading Criteria:
Category: The student chooses a topic for research and formulates a thesis statement.
Standards: The student chooses a topic for research and makes modifications to the topic as information is gathered, then formulates a thesis statement that can be proven by the research. The THESIS STATEMENT is clear, obvious, and outlines the (at least) three topics discussed in the BODY of the paper.
SAVE ALL OF YOUR WORK FROM THIS QUARTER
Pacing: complete this by the end of Week 6 of your enrollment date for this class.
### 04.06 Translating Functions - Extra Link (Math Level 1)
Purplemath: Graphing Exponential Functions: Examples (page 3 only)http://www.purplemath.com/modules/graphexp3.htm
I highly recommend that you click on the link above and work through the material before continuing.
Guided Practice:
After watching the video try these problems. The worked solutions follow.
Example 1:
Write the parent function and the vertical shift for each of the following functions:
a) $f(x)=3^{x}+2$
b) $f(x)=(\frac{1}{2})^{x}-6$
Example 2:
Write the parent function and the vertical shift for each of the following functions:
a)
b)
Example 1:
Write the parent function and the vertical shift for each of the following functions:
a) f(x) = 3^x + 2
Parent function: 3^x
Vertical shift: +2
b) $f(x)=(\frac{1}{2})^{x}-6$
Parent function: $f(x)=(\frac{1}{2})^{x}$
Vertical shift: – 6
Example 2:
a)
Parent function: 2^x
Vertical shift: 3 – 1 = 2 (see the blue line)
b)
Parent function: $f(x)=(\frac{1}{2})^{x}$
Vertical shift: -1 – 1 = – 2
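If you want a concrete check of the vertical-shift idea from the guided practice, here is a small Python sketch. It is my own illustration (not part of the lesson files) using Example 1a's function:

```python
def parent(x):
    return 3 ** x          # parent function: f(x) = 3^x

def shifted(x):
    return 3 ** x + 2      # Example 1a: f(x) = 3^x + 2

# Adding a constant moves every point of the graph straight up:
# the shifted graph sits exactly 2 units above the parent graph at every x.
for x in [-2, -1, 0, 1, 2]:
    print(x, parent(x), shifted(x))
```

The constant you add (or subtract) outside the exponential is the vertical shift, which is exactly how the worked solutions above read it off.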
### 04.06 Unit 4 Quiz(Psychology)
computer-scored 32 points possible 40 minutes
Be sure that you have completed all your assignments and have copies of them in front of you. Also have all your notes from the objectives handy. Review them once more before attempting this quiz. Remember, it's open note and book!!! You will need to score at least 80% on this quiz (that's 26 out of 32 points!) If you score BELOW this, then you will need to study your notes from the objectives and re-take the quiz. The test is taken from a pool of questions, so you never know exactly what questions will be asked.
### 04.06 Window treatments(IntDes2)
WINDOW TREATMENTS

Overview: A window treatment is not considered the focal point of a room. However, the wall space or furniture surrounding the window may be considered a focal point. It is important to use appropriate treatments made of suitable materials to maximize the utility of the window. A window treatment should add to the beauty of the room in both color and style. Other considerations when making a selection are privacy, light control, durability of materials, and the ability of the material to conserve energy and block noise. Window treatments can be created to enhance or obscure an outside view.

WINDOW TREATMENTS:

- BOTTOM-UP SHADE: A type of roller shade which pulls up instead of down
- CASEMENT FABRIC: Fabric that is heavy enough to be drawn at night for privacy and serve as side draperies during the day
- CORNICE: A rigid horizontal heading of wood or metal
- CURTAIN: Stationary treatment made of sheer or lightweight fabric
- DRAPERY: Heavier fabrics – may be stationary or drawn
- CASCADE: Softly draped fabric hung at each side of swags
- JABOTS: A ruffle or folded fabric placed vertically to separate swags
- LAMBREQUIN: A cornice with a highly curved bottom edge
- SEMI-SHEERS: Drapery usually used for privacy
- SHUTTERS: Made of wood or metal horizontal slats, or of panels of fabric, which fit inside the window frame
- SIDE DRAPERIES: Stationary or hanging draperies that may be left straight or tied back only at the sides of the window
- SOFT TREATMENT: Window treatments such as draperies or curtains
- STRUCTURAL OR ARCHITECTURAL TREATMENTS: Window treatments such as shutters, blinds, screens, panels, shades, wood valances
- SWAGS: A valance made of fabric, which is pleated or draped across the top of a window
- TIE-BACKS: A cord or narrow strip of fabric to hold a drapery back
- VALANCE: A fabric heading
- VENETIAN BLINDS: Wooden, metal, or plastic strips controlled by turning a rod rather than by pulling a cord (may be vertical or horizontal, wide or narrow)
### 04.06.00 Composition of Functions and Inverses (PreCalc)
A composition is a function of a function. You can make interesting designs by making compositions.

Spirographs

Did you ever draw a spirograph? I loved doing this when I was a kid. Who doesn't! I always wanted to make my own designs, and tried to, using cardboard. (They never worked very well, but I tried.) What do spirographs have to do with math? A spirograph is a pattern which gets repeated in a different pattern. It is a pattern within a pattern. A composition. Spirographs are not actually functions in the Cartesian coordinate system (later we will use polar systems, and spirographs are functions in that system). But you can imagine taking a function of a function. This will create a new function that has different properties from the original functions. You can download the attached files, or read the same content below.
### 04.06.00 WRITING AN OUTLINE
You should be finished with your reading assignment novel by this point. For this assignment, you will be required to draft an outline for your essay which includes the theme you’ve chosen to write about and specific examples from the book that illustrate that theme. Here is an example:

Theme: In coming from innocence to maturity, one has to confront evil and inequality in the world.

Support:
a) The symbol of the killing of a mockingbird represents those who would attempt to harm or destroy innocence. Miss Maudie’s explanation that it is a sin to kill a mockingbird shows the moral imperative to stand up for those who would be harmed by evil forces in a society.
b) The children begin to confront their fear of Boo Radley. He begins as a superstition; gradually Lee begins to 'flesh' him out, he becomes more human, and the children begin to change their opinions of him.
c) Scout is able to hold on to faith and move forward, but Jem is more damaged by the revelation of the evil that racism is in his community. Scout’s name foreshadows her ability to forge the path from childhood to maturity while maintaining faith in humanity.

Before you begin your outline, view the video at the link below. Make sure that you understand the distinction between "topic," "theme," and "thesis." If that isn't clear, let me know.
### 04.06.01
teacher-scored 8 points possible 30 minutes
To help me evaluate your outline, make sure that you include the title of the novel.
Assessment Rubric:
Content: Outline shows clear theme. /4
Support: Includes three supporting details or examples from the novel. /4
### 04.06.01 Translating Functions - Worksheet (Math Level 1)
teacher-scored 20 points possible 40 minutes
Activity for this lesson
1. Print the worksheet. Work all the problems showing ALL your steps.
2. Digitize (scan or take digital photo) and upload your worksheet activity
Pacing: complete this by the end of Week 8 of your enrollment date for this class.
### 04.06.01 Activity log week 6 (Fitness for Life)
teacher-scored 65 points possible 120 minutes
Submit your activity log. To submit your work, scan or take a photo of your log and the form. Save as a .jpg, .pdf or .gif and go to Topic 3 on the main class page to upload the files. IMPORTANT: Please do not email logs; emailed work is not recorded in the instructor's grade book and delays grading. As a last resort, you may mail copies to your teacher's physical address.
Pacing: complete this by the end of Week 7 of your enrollment date for this class.
### 04.06.01 Compositions (PreCalc)
Taking the function of a function is called composition. You have actually done this before; you just probably didn't think of it like that. Consider

h(x) = (3x²)⁵

You solved a similar problem in an earlier problem set. To simplify this, you raise the terms in the parentheses to the 5th power. So

h(x) = (3x²)⁵ = (3)⁵(x²)⁵ = 243x¹⁰

Easy enough. But if we look at that closely, we can break h(x) into 2 functions:

ƒ(y) = y⁵
g(x) = 3x²

Then we can combine these to make h(x). This is an important “function” of functions, so it has its own name, composition, and it has a special notation:

h(x) = (ƒ ° g)(x) = ƒ(g(x))

You read this as “h of x equals ƒ of g of x”. Is (ƒ ° g)(x) the same as (g ° ƒ)(x)? Well, let's check our 2 functions.

(ƒ ° g)(x) = (3x²)⁵ = 243x¹⁰
(g ° ƒ)(x) = 3(x⁵)² = 3x¹⁰

These are not equal. So (ƒ ° g)(x) ≠ (g ° ƒ)(x).

Let's do another example. A function can often be “decomposed” in several different ways. We could decompose one with g(x) = x³ as the inner function, and the outer function ƒ(y) might itself be decomposed further. So, you can take compositions of compositions. Cool.

One last note about compositions. When you take a composition of 2 functions, the domain of the new function is limited by the domains of the original 2 functions. For example, consider the functions

ƒ(x) = x²
g(x) = √(9 − x²)

The function ƒ(x) = x² has a domain of −∞ < x < ∞, but the function g(x) = √(9 − x²) has the limited domain of −3 ≤ x ≤ 3. If we take the composition (ƒ ° g)(x) we find

(ƒ ° g)(x) = (√(9 − x²))² = 9 − x²

which has a limited range, but should have an infinite domain. But it doesn't really. Try graphing it with your graphing calculator. Do you get the same thing as when you graph y = 9 − x²? You shouldn't: the limited domain of the original function continues to limit the domain of the new function. This will be important later.

Your turn: Rewrite the given function as a composition of 2 functions.
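The composition above can be checked numerically. The sketch below (a Python illustration, not part of the lesson) builds h(x) = (3x²)⁵ as ƒ(g(x)) with ƒ(y) = y⁵ and g(x) = 3x², and shows that (ƒ ° g)(x) and (g ° ƒ)(x) differ:

```python
# f(y) = y^5 and g(x) = 3x^2, the two pieces of h(x) = (3x^2)^5.
def f(y):
    return y ** 5

def g(x):
    return 3 * x ** 2

def compose(outer, inner):
    """Return the composition outer ∘ inner."""
    return lambda x: outer(inner(x))

h = compose(f, g)   # (f ∘ g)(x) = (3x^2)^5 = 243 x^10
k = compose(g, f)   # (g ∘ f)(x) = 3(x^5)^2 = 3 x^10

print(h(2))   # 243 * 2**10 = 248832
print(k(2))   # 3 * 2**10   = 3072
```

Evaluating both at the same point makes it concrete that composition is not commutative: h(2) = 248832 while k(2) = 3072.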
### 04.06.01 Drawing Pictures Activity
teacher-scored 10 points possible 20 minutes
Upload a picture of the picture you drew during the Drawing Pictures Activity.
Pacing: complete this by the end of Week 5 of your enrollment date for this class.
### 04.06.01 Lesson 4F: “¿Cuál(es)? - Which? (Spanish I)
computer-scored 15 points possible 20 minutes
**Assignment 04.06.01: “¿Cuál(es)? - Which?**: You got this: find the button and click on it!
### 04.06.01 Narrative Brainstorming
teacher-scored 5 points possible 8 minutes
The first assignment for this section is to brainstorm a list of 25 things that come to mind about “love and relationships.”
Make a list of things you could write about and then narrow down your topic.
Your brainstorm can be in any form--freewriting, mindwebs, lists, etc.
Narrowing down a topic can be the hardest part of writing. The key to a good narrative is a good idea, and one that is manageable. Don't try to tell about your entire first date, relating every step of the day. Instead, try focusing on a single moment with all the details and showing your experience. Don't overwhelm the reader with too much 'fluff' -- focus on the details and be creative.
Narrative Brainstorm Scoring Rubric
4 points= includes a list of possible topics
1 point= chosen topic is somehow highlighted in the "brainstorm"
5 Points Total
Pacing: complete this by the end of Week 5 of your enrollment date for this class.
### 04.06.01 Quiz 36
computer-scored 10 points possible 45 minutes
If you are certain you have mastered the material, you are ready for the quiz. Click on the Quiz 36 link.
### 04.06.01 Stages of Development(Childdev1)
teacher-scored 40 points possible 60 minutes
Assignment 4.6 - Stages of Development
After going through the material for this lesson, describe the fetal development and the effects on the mother for each of the nine months of a pregnancy.
You may want to copy and paste the information below into a word processor (i.e. Microsoft Word or WordPerfect) and complete your work. Then you MUST copy and paste the information into the submission area of the assignment that corresponds to this lesson which is found in Topic 3.
You may use the following format to submit your work: (Copy and paste everything between the asterisks.)
**********************************************************************
First Month
a. Fetal Development:
b. Effects on Mother:
Second Month
a. Fetal Development:
b. Effects on Mother:
Third Month
a. Fetal Development:
b. Effects on Mother:
Fourth Month
a. Fetal Development:
b. Effects on Mother:
Fifth Month
a. Fetal Development:
b. Effects on Mother:
Sixth Month
a. Fetal Development:
b. Effects on Mother:
Seventh Month
a. Fetal Development:
b. Effects on Mother:
Eighth Month
a. Fetal Development:
b. Effects on Mother:
Ninth Month
a. Fetal Development:
b. Effects on Mother:
**************************************************************
### 04.06.02
teacher-scored 44 points possible 60 minutes
Assessment Rubric:
Content: Interesting introduction; clear thesis statement; shows understanding of novel and the theme. /4
Support: Supporting paragraphs include detail which is specific and directly supports your analysis. /4
Clarity: Writing is clear, focused and well organized. /4
Organization: Essay is organized so that the ideas follow logically from one to the next. /4
Conventions: No significant errors in grammar, usage, punctuation or spelling. /4
Extra Points: Rough draft is included. /4
Extra Points: Reading guide is complete and shows thought & effort. /20
Pacing: complete this by the end of Week 8 of your enrollment date for this class.
### 04.06.02 Drawing Pictures Comparison
teacher-scored 10 points possible 30 minutes
Take the exact same directions to a friend or family member and ask them to complete the activity. (Do not show them your drawing until they are finished with theirs.) Compare your drawing with your friend/family member’s drawing. Talk to them about the differences in the two drawings and explain to each other why you drew it the way you did.
Journal Entry
Using complete sentences, answer the following:
1. What differences did you notice in the two drawings?
2. Why were there so many differences?
3. What does following directions have to do with computers?
Pacing: complete this by the end of Week 5 of your enrollment date for this class.
### 04.06.02 Identity (PreCalc)
Remember that an identity is something that doesn't change whatever you started with. So in numbers, we have 2 identities: the additive identity, 0, and the multiplicative identity, 1. Specifically, for any number a, a + 0 = a and a · 1 = a.

In matrices we also had an additive identity matrix and a multiplicative identity matrix. The 2x2 additive identity is the zero matrix (all entries 0), and the 2x2 multiplicative identity, I, has 1s on the diagonal and 0s elsewhere. So given any 2x2 matrix A, we have A + 0 = A and A·I = A.

So, these are examples of some identities. Previously we stated that the function ƒ(x) = x is the identity function. This is because when we take a composition of any function, g(x), with the identity, or a composition of the identity with any function, g(x), the result is the original function, g(x).

Consider the function g(x) and the identity function ƒ(y).

g(x) = x² + 4x
ƒ(y) = y

Now find (ƒ ° g)(x) and (g ° ƒ)(x). To find (ƒ ° g)(x) we insert the function g(x) every time there is a y in ƒ(y). Likewise, to find (g ° ƒ)(x) we insert ƒ(x) every time there is an x in g(x). This gives us

(ƒ ° g)(x) = (x² + 4x) = g(x)
(g ° ƒ)(x) = (x)² + 4(x) = g(x)

And that is how the identity function is defined.

Your turn. 3. Given ƒ(x) = x, g(y) = 7y⁴, and a function h(z), show that (ƒ ° g)(x) = g(x), (g ° ƒ)(x) = g(x), (ƒ ° h)(x) = h(x), and (h ° ƒ)(x) = h(x).
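A quick numeric check of the identity property, using the lesson's g(x) = x² + 4x (the Python below is just an illustration):

```python
# The identity function leaves any function unchanged under composition.
identity = lambda x: x
g = lambda x: x ** 2 + 4 * x

f_of_g = lambda x: identity(g(x))   # (f ∘ g)(x)
g_of_f = lambda x: g(identity(x))   # (g ∘ f)(x)

# Both compositions agree with g(x) at every sample point.
for x in [-3, 0, 2.5, 10]:
    assert f_of_g(x) == g(x) == g_of_f(x)
print("f(x) = x is the identity for composition")
```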
### 04.06.02 Unit 04 Review Quiz (Math Level 1)
teacher-scored 53 points possible 45 minutes
Unit Review Quiz
1. Print the quiz. Work all the problems showing ALL your steps.
Pacing: complete this by the end of Week 8 of your enrollment date for this class.
### 04.06.03 Inverses (PreCalc)
An inverse will return the identity when the operation is performed. For example, consider a number a. We need to find a number that, when added to a, will give us the additive identity, 0. Perhaps you already know that −a is that number. So −a is the additive inverse of a, and a + (−a) = 0. In a similar manner, the multiplicative inverse of a can be found by looking for a number that will return 1 when it is multiplied by a. You may realize that the multiplicative inverse of a is 1/a, and a · (1/a) = 1. In quarter 1, we found the multiplicative inverses of 2x2 matrices, and verified that these were inverses.

To find the inverse function for an arbitrary function, g(x), we look for a function that will return the identity function when composed with g(x). We denote the inverse of g(x) as g⁻¹(x). (This notation is similar to the inverse of matrices.) This means that

(g ° g⁻¹)(x) = (g⁻¹ ° g)(x) = x

Also similar to matrices, not all functions have an inverse. For an inverse to exist, the original function must be one-to-one. This means that ƒ(x) = x³ has an inverse, but ƒ(x) = x² does not. It is possible to find a partial inverse to a function that is not one-to-one by limiting the domain.

Below are the steps of finding an inverse:
1. Write the equation.
2. Exchange the x and y variables. So if initially you had y = ƒ(x), you should now have x = ƒ(y).
3. Solve the equation for y. The result will be the inverse function of ƒ(x).

Example: Consider the function g(x) = 2x − 5.
Write this as an equation: y = 2x − 5
Exchange the x and y variables: x = 2y − 5
Solve for y: x + 5 = 2y − 5 + 5, so 2y = x + 5 and y = (x + 5)/2
This is the inverse of g(x): g⁻¹(x) = (x + 5)/2

You should be able to follow these steps to find the inverse of any one-to-one function. So, let's do that.
4. Given ƒ(x) = 3x − 2, find ƒ⁻¹(x).
5. Show that (ƒ ° ƒ⁻¹)(x) = (ƒ⁻¹ ° ƒ)(x) = x.
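The worked example g(x) = 2x − 5 and its inverse can be verified numerically; this Python sketch just performs the composition check in both orders:

```python
# The lesson's example: g(x) = 2x - 5 with inverse g⁻¹(x) = (x + 5)/2.
# Composing them in either order should return x (the identity).
def g(x):
    return 2 * x - 5

def g_inv(x):
    return (x + 5) / 2

# Check (g⁻¹ ∘ g)(x) = (g ∘ g⁻¹)(x) = x at several points.
for x in [-7, 0, 3.5, 100]:
    assert g_inv(g(x)) == x
    assert g(g_inv(x)) == x
print("g and g_inv are inverses")
```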
### 04.06.04 Inverses of Power Functions (PreCalc)
It is fairly easy to find the inverse of the power functions. You remember that the roots are defined as the inverses of the powers. So the inverse of ƒ(x) = xⁿ is the nth root, ƒ⁻¹(x) = x^(1/n). In a similar manner, the inverse of ƒ(x) = x^(1/n) is ƒ⁻¹(x) = xⁿ. Since only one-to-one functions have inverses, only powers raised to fractions with both odd numerators and odd denominators have inverses. (This includes the integers, as an integer can be treated as a fraction with 1 as the denominator.) In general, the power inverses are found by the following: given ƒ(x) = x^(m/n), then ƒ⁻¹(x) = x^(n/m), where m and n are odd integers.

Let's check this. Find (h ° h⁻¹)(x) and (h⁻¹ ° h)(x) for h(x) = x^(m/n) and h⁻¹(x) = x^(n/m). Using the rules of exponents:

(h ° h⁻¹)(x) = (x^(n/m))^(m/n) = x
(h⁻¹ ° h)(x) = (x^(m/n))^(n/m) = x

Both compositions have a result of x. This means that x^(n/m) is indeed the inverse of x^(m/n).

We also can find the inverses of functions with even numbers in either the numerator or denominator by limiting the domain. Remember, ƒ(x) = x² was not a one-to-one function, but g(x) = √x was. Of course, g(x) is only defined for positive numbers. By limiting the domain of ƒ(x) to only the positive numbers, we are able to find inverses for both ƒ(x) and g(x). This means that given ƒ(x) = x², ƒ⁻¹(x) does not exist; but if we only allow x ≥ 0, then ƒ⁻¹(x) = √x. Likewise, given g(x) = √x, g⁻¹(x) = x², x ≥ 0.

You should be able to find the inverse of all power functions in the same way. Your turn.
6. Find ƒ⁻¹(x) and g⁻¹(x) for the given functions.
7. Show that ƒ⁻¹(x) and g⁻¹(x) are inverses of ƒ(x) and g(x).
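The lesson's specific worked example was lost in formatting, so as an illustration take a hypothetical power function with odd m = 3 and n = 5: h(x) = x^(3/5), with inverse h⁻¹(x) = x^(5/3). This Python sketch checks the round trip (approximately, since floating-point powers are not exact):

```python
# Hypothetical example with odd numerator and denominator:
# h(x) = x^(3/5), whose inverse is h⁻¹(x) = x^(5/3).
def h(x):
    return x ** (3 / 5)

def h_inv(x):
    return x ** (5 / 3)

# Check both composition orders on positive inputs (negative bases
# with float exponents are not handled by ** in real arithmetic).
for x in [0.5, 1, 8, 32]:
    assert abs(h_inv(h(x)) - x) < 1e-9
    assert abs(h(h_inv(x)) - x) < 1e-9
print("h and h_inv round-trip correctly")
```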
### 04.06.05 Graphing Inverses (PreCalc)
The graph of any inverse is found by reflecting the function across the line y = x. If the function is not one-to-one, the resulting reflection will not actually be a function. Consider the following graphs. In the first image, ƒ(x) = x³ and ƒ⁻¹(x) = ∛x are clearly inverse functions. In the second image, while the graph of ƒ(x) = x² is a function, the reflected graph is not. This means that the function ƒ(x) = x² does not have an inverse. Of course, we can limit the domain of ƒ(x) = x² to positive numbers. If we do this, the resulting function does have an inverse: ƒ(x) = x², x ≥ 0, has the inverse ƒ⁻¹(x) = √x.

This is valid for all the power functions. If there is an even number in either the numerator or denominator of the exponent, the function does not have an inverse, except over the limited domain of x ≥ 0 (or x > 0 if the exponent is negative). If the function has odd numbers in both the numerator and denominator, the function has an inverse over its entire domain, (−∞,∞) for positive powers and (−∞,0)∪(0,∞) for negative powers. For negative powers with an even number in the numerator or denominator, the function has an inverse only if you limit the domain to x > 0. A negative power with odd numbers in both the numerator and denominator has an inverse for its entire domain (which is limited, but it is the same domain), and vice-versa.

Now it is your turn. Given the following graphs, find the graph of their inverse functions. Indicate if the inverse exists for the entire domain, or only for a limited domain.

Answers:
(ƒ ° g)(x) = ƒ(g(x)) = 7x⁴ = g(x)
(g ° ƒ)(x) = g(ƒ(x)) = 7x⁴ = g(x)
ƒ⁻¹(x) exists for the full domain of ƒ(x).
ƒ⁻¹(x) has a domain of x > 0, therefore the inverse of ƒ(x) does not exist for the full domain of ƒ(x).
ƒ⁻¹(x) has a limited domain of x ≥ 0, which corresponds to the full domain of ƒ(x).
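The reflection rule can be checked point by point: if (a, b) is on the graph of ƒ, then (b, a) is on the graph of ƒ⁻¹. A small Python sketch using ƒ(x) = x³ (an illustration, not part of the lesson):

```python
# Reflection across y = x: swapping the coordinates of points on f
# should land them on the graph of f⁻¹.
f = lambda x: x ** 3
f_inv = lambda x: x ** (1 / 3)   # cube root, valid here for x > 0

# Sample points (a, f(a)) on the graph of f.
points = [(a, f(a)) for a in [0.5, 1, 2, 3]]

# Swap coordinates and confirm each lies on y = f⁻¹(x).
for (b, a) in [(b, a) for (a, b) in points]:
    assert abs(f_inv(b) - a) < 1e-9
print("reflected points lie on the inverse")
```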
### 04.06.06 Compositions of Functions and Inverses, Review (PreCalc)
In this lesson we learned about compositions, identities, and inverses.

Composition of a function: Given ƒ(y), g(x), then h(x) = (ƒ ° g)(x) = ƒ(g(x)), read "ƒ of g of x", is found by replacing y with g(x) in ƒ(y). It is important to remember that (ƒ ° g)(x) ≠ (g ° ƒ)(x).

Identity: The identity function is the function ƒ(x) = x. This function has the property that when a composition of the identity is taken with an arbitrary function g(x), or vice-versa, it returns the original function g(x). That is, (ƒ ° g)(x) = (g ° ƒ)(x) = g(x).

Inverse: The inverse function, ƒ⁻¹(x), is a function that, when a composition of ƒ(x) is taken with its inverse, or vice-versa, returns the identity. That is, (ƒ ° ƒ⁻¹)(x) = (ƒ⁻¹ ° ƒ)(x) = x. Only one-to-one functions have inverses, although a partial inverse can be found by limiting the domain (either of the function or of the inverse). To find the inverse of a function, follow these steps:
1. Write the equation.
2. Exchange the x and y variables. So if initially you had y = ƒ(x), you should now have x = ƒ(y).
3. Solve the equation for y. The result will be the inverse function of ƒ(x).

Inverse of Power Functions: Given ƒ(x) = x^(m/n), then ƒ⁻¹(x) = x^(n/m), where m and n are odd integers. If m and/or n are even integers, then a partial inverse can be found by limiting the domain (either of the function or of the inverse).

Graphing Inverse Functions: The graph of an inverse function is found by reflecting the function across the line y = x, as with ƒ(x) = x³ and its inverse, ƒ⁻¹(x) = ∛x.
### 04.06.07 Compositions of Functions and Inverses - Links (PreCalc)
Properties of Inverse Functions: http://www.analyzemath.com/inversefunction/properties_invers...
### 04.06.08 Introduction to Functions -- Assignment 7 (PreCalc)
teacher-scored 80 points possible 120 minutes
Complete
Unit 04 -- Introduction to Functions -- Assignment 7
Composition of Functions and Inverses
In this assignment you will need prove you can work with compositions and inverses.
Print out the attached assignment and complete the assignment in the space provided and complete the graphs on graph paper or on the computer with an appropriate graphing program. You may use additional paper if needed. Once you have completed the assignment, scan it into the computer and convert it to an image file such as .pdf or .jpg. You may need to practice scanning pencil drawings so that you produce a clear, easily readable image. Finally, upload the image using the assignment submission window for this assignment. Alternately, you may wish to type the answers into a word processing document, convert this to an .rtf file, and upload this. If you do this, be sure to include the questions as well as the answers.
This assignment is worth 80 points.
Complete this assignment after reading Lesson 6.
### 04.08 Quarter Review (Biology)
This is a review for you to prepare for the final test.
TO DO
Complete:
• 04.08.01 Quarter Review
Proctored Final: Once you earn at least 60% on each assignment and 80% on each quiz module 4 will open. Once Module 4 is open you will need to:
• Right under the Ready Assignment, there is a link with a list of proctors. The proctors are listed by county. Select a proctor from the list.
• Contact the proctor (via e-mail or phone) and make arrangements to take the final.
• When you show up to take the final the proctor will type in the password and you will be ready to go.
• In order to earn 0.25 biology credit, you need to get at least 60% on the final.
• To prepare for the final I recommend you understand the concepts from assignment 04.08.01 and the quizzes.
### 04.08 Units 1 through 4 Online Test
computer-scored 102 points possible 20 minutes
Go to the link below and take the online test. You will only be allowed one attempt at this test and it has a 20 minute time limit.
### 04.08.01 Quarter Review (Biology)
teacher-scored 30 points possible 90 minutes
Your answers must be VERY DETAILED. Copy all information below between the lines of asterisks, including the lesson number, revision date and all questions into a word document.
***************************************************************
ASSIGNMENT 04.08.01 - REVISION DATE: 10/11/13 (Copy everything between the asterisks.)
4th quarter Biology final study guide
Make sure you understand all of your assignments and this study guide.
1. Where in a mammal would mitosis take place?
2. Where in a mammal would meiosis take place?
3. Where in a plant would mitosis take place?
4. Where in a plant would meiosis take place?
5. Explain the difference between identical and fraternal twins.
6. Which nucleotide does guanine bond with?
7. Which nucleotide does adenine bond with?
8. What is genetic engineering?
9. When diagramming a segment of DNA what does:
• G stand for?
• T stand for?
• A stand for?
• C stand for?
• How do the nucleotides pair up?
10. What is asexual reproduction? Give an example of an organism that reproduces asexually. How does asexual reproduction help the survival of a species?
11. Draw and label the structural shape of DNA.
12. Explain mRNA and how the different nucleotides combine.
13. What is contained in the nucleus of a cell? Do all cells have a nucleus?
Pacing: complete this by the end of Week 8 of your enrollment date for this class.
### 05.01.05 Panama Canal Quiz (World Geography)
teacher-scored 10 points possible 30 minutes
Panama Canal-
First, check out these two videos about the Panama Canal:
http://www.metacafe.com/watch/1063460/through_panama_canal_in_75_seconds/
and
http://www.5min.com/Video/Discover-the-Panama-Canal-228514752
http://en.wikipedia.org/wiki/Panama_Canal
Use the information you gain to answer the following questions:
1. What limits the size of ships that can use the Panama Canal?
2. Why is the Panama Canal important?
3. Who operates the Panama Canal?
4. What were the three major engineering jobs necessary to dig the canal?
5. What was the greatest obstacle to building the canal?
6. How did the Spanish-American War affect the canal?
7. How long does it take a ship to pass through the Panama Canal?
8. Summarize the history of the Panama Canal in a five-sentence paragraph.
9. What did you find surprising or interesting about the Canal when you viewed the two videos? List three things here:
a.
b.
c.
***70% or higher is required to pass any assignment***
### 05.01.06 Rainforest Pop-up Book/ Interactive Power Point (World Geography)
teacher-scored 100 points possible 120 minutes
You will create a children's lift-a-flap book illustrating the parts of a rainforest, animals that live in a rainforest, and the interaction of man with the rainforest. You may also use PowerPoint for this project, but make sure it is INTERACTIVE.
TASKS: Create a story line that will teach a child about the rainforest.
* These topics must be found in your story
o structure of the rainforest
o animals found in the forest
o indigenous people of the forest
o effects of deforestation
* Create illustrations for your books and place windows in them. Mount the illustrations on top of a second sheet and place the text material behind the windows. MAKE SURE THAT COLOR IS ADDED TO THE BOOK!!!
Refer to the web sites to help you investigate the rainforest. Cite the sources at the back of the book or power point.
Search the web for at least two other sites on the rainforest for information and put those sites' URLs (addresses) on the last page of your book.
The book can be sent in through surface mail, or digital pictures of the book can be taken and sent in electronically. If you choose to send the digital pictures, be sure to take pictures with your flaps both closed and open. The Power Point can be sent via the submissions in the assignment or via e-mail.
The following criteria will be used to evaluate your book:
1. Is the major idea clear and easily identified?
2. Are the pages and illustrations well organized and attractively displayed? What is the overall appearance of the book?
3. Is the text information effective and tied into the main idea?
4. Is the major idea backed up by the supporting information and is all information accurate?
Here are some more links that give some information about the rainforest.
***70% or higher is required to pass any assignment***
### 05.02 Sentences, clauses, and phrases (Writing lab)
When we write or talk, we use groups of words. For the sake of simplicity, we have names for different kinds of groups of words, so that we can talk or write about our talking or writing.

The key unit of written communication is the sentence. A sentence begins with a capital letter and ends with a period, question mark or exclamation point. It is the smallest group of words that can stand alone (often it is said to "express a complete thought"). A sentence always has a subject and a predicate (though sometimes the subject may be understood rather than expressed, as in "Get out of here!", where the subject is understood to be "You").

A clause is a group of words that has a subject and a predicate. A clause may be an independent clause (in which case it may be a whole sentence by itself, or a part of a complex sentence), or a dependent clause, also called a subordinate clause (in which case it must be part of a complex or compound sentence). A dependent clause is not missing any parts; the thing that makes it dependent is an extra word (or words), usually a conjunction, which is meant to link or relate it to an independent clause. For example: I stayed up late so that I could study. "I stayed up late" is an independent clause, which could stand alone as a sentence. "So that I could study" is a dependent clause which can't stand alone as a sentence, but ONLY because the words "so that" link it to the first clause. If you left out "so that", "I could study" could be a complete sentence on its own.

A phrase is a group of words working together to function as a single part of speech. It may have a subject, or a predicate, but doesn't necessarily have either, and never has both. Here are some examples of phrases:
going to Georgia
caused various problems
up the stairs
the strongest man in the county

A simple sentence has only one clause (though it may have many phrases). Examples of simple sentences:
Marian slept.
The cat and dog chased each other around the house.
Tom ordered a milkshake.
The house has a spiral staircase, two stained glass windows, and a balcony.
We stopped for groceries, talked to friends, and drove home.
Before starting chores, they changed clothes.

A compound sentence has two independent clauses connected by a coordinating conjunction (commonly and, or, but, & so). Examples of compound sentences:
We won the first game, so we had to stay for the finals.
I finished running errands, but I was late.
Margaret can go to the carnival, or Jared can go to the movie.
He was happy, and he was pleased with his son.

Another, much less common kind of compound sentence has two independent clauses connected by a semicolon and a conjunctive adverb (such as: then, thus, however, also, nevertheless, similarly, for example, in addition). Examples:
We won the first three rounds; however, we lost in the finals.
She told her mother; furthermore, she left a note on the refrigerator.

A complex sentence has one independent clause and one dependent clause. Either clause can come first, and the dependent (sometimes called subordinate) clause begins with a subordinating conjunction (such as: because, since, that, before, after, in spite of, although, if, while, until, when, where, unless), or a relative pronoun (which, who, that). Examples of complex sentences:
After we started home, we quickly got lost.
We got lost after we started home.
Because she was wearing white, she didn't want to fall in the mud.
Lois wouldn't have minded getting muddy if she had been wearing jeans.
Unless George is going to drive, Tyler can't go.
Desrie likes Robert in spite of the fact that he wears twelve earrings.
We went to see Paul, who is my favorite brother. [who, which, that & what can act both as the subordinating conjunction and as the subject of the dependent clause]

As you might guess, a compound-complex sentence has both two independent clauses and at least one dependent clause, in any order. Examples of compound-complex sentences:
Marty was hungry, so after they finished cleaning up, they bought a pizza.
Before the test started, Jeremy got sick, and Brent passed out.

More explanation & examples about simple, compound, and complex sentences:

A simple sentence can be stripped down to its single subject & verb. Here is a long simple sentence: Hoping to meet her brother on the evening before their departure, she carefully planned a schedule to help with her preparations. Why is this a simple sentence? Because if you get rid of all the extra modifiers, it strips down to "she planned a schedule". It has only one clause. You might think of a clause as a basic subject/verb unit. If you think of it that way, you can see that this is still a single clause even though the subject & verb are compound: George and Terry stopped and waited. It is one unit of meaning.

Compound sentences: On the other hand, if we say "George stopped, and Terry waited," now we have two clauses, two units of meaning. Each unit has its own subject doing a unique thing. We could also say "George stopped, but Terry waited," or "George stopped, so Terry waited." Any of those three are compound sentences because they consist of two independent clauses, connected by a coordinating conjunction (and, but, so, or).

Complex sentences: The other way of connecting clauses involves subordinating conjunctions, which suggest how one of the units of meaning depends on the other. If we say "Because George stopped, Terry waited," we have a complex sentence: one independent clause (Terry waited) with at least one dependent, or subordinate, clause (because George stopped). In many cases we can reverse the order of the clauses ("Terry waited because George stopped."). A complex sentence can have clauses nested inside of clauses: Because Terry was waiting ~ while George was stopped, ~ the whole convoy came to a halt ~ until the tow truck arrived ~ so that George's car could be hauled back home ~ where a mechanic was waiting to fix it ~ so that he could continue on with everyone else. That has seven clauses! Only one (the whole convoy came to a halt) is independent.

Compound-complex sentences: More often, complicated sentences use both ways of connecting clauses, so they have two or more independent clauses AND one or more dependent (subordinate) clauses: George stopped ~ and Terry waited ~ while the rest of the group caught up. (Two independent clauses and one dependent clause)
### 05.02.01 Chapter 5 Assignment 1 - Random Symbols (while) (C++)
teacher-scored 20 points possible 60 minutes
Do this assignment and submit it under Topic 3.
Write a C++ program using a while loop to display 20 random symbols using the characters '!' through '@' (ASCII codes 33 to 64).
Example:
Random Symbols
#&(@!^9&$)-78_%$&!.*
### 05.06 Cults and extreme movements (Sociology)
teacher-scored 25 points possible 45 minutes
Perhaps you have heard of situations in which powerful leaders influenced group members to take their own lives or perform other acts that seem hard for a rational person to believe. In this lesson you will examine some of these occurrences, and see what extreme movements have in common.
To begin this lesson, write a one-page report on various cult suicides. Be sure to talk about Jonestown, as well as The People's Temple and Heaven's Gate. Use the video clip and website in the URLs and the article below for information.
__________________
SAN FRANCISCO EXAMINER
(San Francisco, CA)
Nov. 8, 1998, pp.
Reprinted with permission from the San Francisco Examiner. (c) 1998 San Francisco Examiner. This article accessed from the SIRS database, available free to all Utah students through http://www.pioneer.uen.org.
DAYS OF DARKNESS: NOVEMBER 1978
Utopian Nightmare--Jonestown: What Did We Learn?
by Larry D. Hatfield
… James Warren Jones was born May 13, 1931, in a tattered town called Crete in Indiana. He was different from the beginning--a Holy Roller preacher as a child, selling spider monkeys on the streets of Indianapolis to buy food as a young student and modeling himself after Father Divine, whose Peace Mission drew a cult following at the time. In a credo that would refine and warp itself until the end, he developed a set of professed beliefs, sometimes called "apostolic socialism," that ignored God while deifying social justice and worshipping the salvific power of socialism.
He became a student minister at an Indiana Methodist church in 1952. He left the Methodists because the church wouldn't desegregate, and in the 1950s founded the movement that would take various names as it evolved into Peoples Temple. In 1965, Jones and his wife, Marceline, brought the nascent temple and a handful of the faithful to California, the promised land for alternative religions.
A CHURCH IN REDWOOD VALLEY
The Joneses and their rainbow family of adopted children and about 70 followers set up in the sylvan Redwood Valley near Ukiah where relations between them and the laidback locals were uneasy but generally peaceful.
Jones quickly relocated his church to the bigger and more lucrative markets of San Francisco and Los Angeles. He moved Peoples Temple into an abandoned synagogue in The City's Fillmore district.
While attracting a growing congregation of urban blacks, he used his considerable intellectual and acting skills to bring onto his bandwagon many of San Francisco's leading politicians.
Current Mayor Brown was one of Jones' biggest cheerleaders, hailing him as a blend of Martin Luther King, Einstein and Mao. Then-Lt. Gov. Mervyn Dymally was another. Mayor George Moscone was still another, naming Jones to head his Housing Commission and, some said, owing his razor-thin election to the few hundred illegal voters Jones allegedly … and other seekers were hearing from Jones' pulpit.
…But the politics of the day also fed the paranoia in Jones. He believed the government was conspiring to continue the war in Vietnam and spy on such groups as the Black Panthers. The government was not to be trusted.
The government also was working covertly against Jones and the temple, seeking to short-circuit the drive for a better society. Jones believed all this and so, to an almost universal extent, did the Temple congregation.
So Jones moved the temple and 1,000 or so followers to Guyana's outback, winning a 25-year lease on 3,852 acres in the Orinoco River basin near the disputed border with Venezuela.
A small group of temple pioneers moved in 1974 to what was to become Jonestown, far from the hostile attacks of media, government and others. The lease from the Guyana government required the Peoples Temple Agricultural Project to "cultivate and beneficially occupy" at least one-fifth of the land by 1976.
Temple members raised livestock and grew pineapple, cassava, eddoes and other tropical fruits and vegetables. Charles Garry, Peoples Temple's San Francisco lawyer, called it a paradise.
But there was trouble in paradise. The camp never became agriculturally self-sustaining and the swift tropical diseases of the jungle ran rampant. At the end, only a third of the residents were able-bodied. Worse, critics said, it was a prison camp--people were not allowed to leave. Abusive practices such as beatings and forced sex, long-rumored but little-reported at the San Francisco church, escalated. Jones' deteriorating mental stability made him increasingly erratic and he demanded utter, unquestioning loyalty.
Agencies from the State Department and the CIA to the Internal Revenue Service and the Postal Service were investigating or taking legal action against Peoples Temple for mail fraud, wresting Social Security checks from members, arms and narcotics trafficking, and other crimes, real or imagined.
Aiding and abetting the conspiracy in the eyes of Jones and his followers were the few temple defectors and families of members who were getting attention from the hostile North American media.
Against this background, Jones, who was white, and his predominantly white inner circle had faithful followers practice mass suicide in a series of "white nights." Such a drastic end was the only possible response to the vast and growing outside threat, Jones preached. The people believed him.
CONGRESSMAN'S VISIT
The final act began to unfold when Ryan, a respected Democratic congressman from San Mateo County with a taste for headlines, led a delegation of newsmen and a group of family members called the Concerned Relatives to Jonestown to investigate allegations of thought control, imprisonment, drug- and gun-running.
Ryan was knifed--superficially--during the visit. When he was ready to leave, several Jonestown residents asked to leave with him, an action that apparently triggered the subsequent massacre of Ryan, Robinson and the others at Port Kaituma.
The incredible scene of mass death quickly followed at Jonestown.
"We died because you would not let us live," wrote temple insider Annie Moore in her suicide note.
Among those who survived was Jim Jones' son, Stephan, who was away with the camp's basketball team. It was left to the younger Jones, who had broken with his father and since has denounced him repeatedly, to declare Jim Jones' most durable epitaph. In a recent ABC television visit to Jonestown, Stephan Jones said simply, "My father was a fraud."
While no one denies that Jones was a madman, some feel the people of Jonestown have been unfairly characterized as mindless zombies.
"The people who died in Jonestown were sweet, altruistic people. One the of the tragedies of Jonestown is that people haven't paid attention to that," says Timothy Stoen, whose son, John Victor, died at Jonestown. Stoen played a pivotal role in a custody battle with Jones that helped precipitate the murder-suicides. Among Ryan's reasons for going to Guyana was the custody fight.
"No single theory could possibly explain the many complex and related issues that led the members of the Peoples Temple to leave family, friends and church communities and take residence in the jungle of Jonestown," says Archie Smith Jr., professor of pastoral psychology at the Pacific School of Religion and Graduate Theological Union in Berkeley. "Labeled by many as sick, dysfunctional or disadvantaged, (they) were seeking an alternative to the status quo, new and just ways of living and being in the world."
"Amidst all the attention that was focused on Jim Jones, the people were accorded scarcely any respect as thinking, feeling, caring human beings....The dehumanizing of the victims of Jonestown by the journalistic community was tantamount to the withholding of permission to grieve."
TRAGIC CHANGE IN DIRECTION
Maaga says the lessons of Jonestown are to be learned by trying to figure out why the committed veered so tragically from their vision of a better world.
"I think things went right earlier in the movement," Maaga says, pointing to the temple's successful social programs in San Francisco.
"What we should be talking about is what went right and when did it go wrong," she said, suggesting the answers might prevent similar tragedies. "I don't think Peoples Temple was a singular example of a self-righteous religion gone horribly awry.
"What was important about it was what they (the members) were--as people--not just how they died."
Rebecca Moore, a University of North Dakota professor of religion and philosophy, lost her sisters Carolyn Layton and Annie Moore at Jonestown, as well as her nephew, Jim-Jon Prokes.
She says some of the still-unexplored questions are whether people in San Francisco could have prevented the tragedy by reining in Jones before he was all-powerful and crazy; what responsibility Willie Brown and others had before and after the catastrophe; and the role of the black church.
"At times I despair that we've learned nothing," Moore says. "By that I mean that people continue to demonize (Jones) and forget the others.
"As a society we fail to take seriously the very strong and powerful desire, or hunger, for community, a community of people working for social change. The people in Jonestown were trying to create a Utopian society, racial justice and social equality. Granted, there were internal contradictions, but they were one of the few groups intentionally addressing the problem of racism and trying concretely to do something about it."
The media, Moore says, almost always dwells on the dead bodies and on Jones himself; the government has swept under the rug all understanding of Jonestown and of its own role. No official inquiry was held in the United States and the Guyanan investigation was almost laughably superficial.
Jonestown, Berkeley's Smith warns, "was not an anomaly. Rather, it was a product of the evolving ethos of our time, an ethos that tends to repress and trivialize the essentially religious impulse. The social and historical forces that gave rise to Jonestown 20 years ago operate today."
A CENTRAL LESSON UNLEARNED
"Perhaps the biggest heartache of all," Sawyer says, "is that our country and our churches still have not sought an answer to the question of why the people joined this movement, and so still have not discerned the central lesson of Jonestown. Why, indeed, were white idealists and black Christians drawn to a movement that promised them sanctuary from America's failure to honor her promises of equality and justice?
"Churches, like the public, were so horrified they psychologically needed to distance themselves...to handle the reality of it. Most people know what Jonestown is but if you say Peoples Temple, people will have a blank look. They don't know there was a longstanding social change movement.
"And they're no more informed today than they were 20 years ago. I hope this anniversary will not be just a replay of Jim Jones as a crazy man but that there is some conscientious effort to make America understand. It takes two or three decades before people start asking the right questions....There hasn't been that kind of soul-searching.
"It's important to do this. We have to have sense of the wounds that are still there."
"What should we be talking about?" asks Jackie Speier, who was elected to the California Assembly after the tragedy and on Tuesday was elected to the state Senate.
"We should be talking about the fact that the menace of cults still lingers. Cults are still around. They're all around us. Many continue to operate under the guise of being religious, under the guise of religiosity and the First Amendment, violating state and federal laws (while) the government again looks the other way."
Dr. Hardat Sukhdeo, a Guyana-born psychiatrist who treated Jonestown survivors, says the conditions that created Peoples Temple could create a similar movement.
People joined the temple because "they needed to have this type of togetherness and leadership and somebody to tell them what to do and when to do it," said Sukhdeo, of Rutgers University.
Sukhdeo sees the majority of temple members, mostly black, as largely alienated and at sea. The Peoples Temple was more of a place to improve their own lives than to achieve the societal do-goodism espoused by young whites who joined.
"They wanted something like the cult to say, 'Yes, we know the truth. Yes, we have the answers.' They were lost, they were angry and they didn't feel they belonged to society."
Nothing has changed, he says: Groups such as Branch Davidian and Heaven's Gate prove that the attraction to cults remains.
A DISTRUST OF RELIGION
The Rev. Robert Warren Cromey, rector of San Francisco's Trinity Episcopal Church, says that a "sad part of Jonestown is it has sent a message of distrust of religion, particularly the small sect religions....(There is) a sense of 'if you're religious, this is what might happen to you.'
"But on the positive side, people are looking more critically before getting involved in religious programs, especially the more fringe type of things. Look at the kind of scrutiny now given to Hare Krishna, Scientology and the like. That's as it should be."
Timothy Stoen also warns of the dangers of cults.
"There are a lot of Jim Jones wannabes out there," said Stoen, who recently moved to Colorado Springs from Mendocino. "...You can't wish them away...(but) one hopefully can learn from (Peoples Temple) that these things can be lethal and there comes a point early on when you can do something about it."
The psychological isolation that leads people to seek groups like Peoples Temple still exists, he says, urging families of cultists to ask questions, challenge the groups and otherwise get involved.
"People just have to take an even-handed look that this is kind of a sign of our times--people need structure, community, etc. If mainline churches don't provide it, there'll be cults. Those cults will be there serving some sort of need."
__________________
What makes people follow extreme leaders and do such tragic and often uncharacteristic things?
In my research I have found that many extreme movements have the following things in common:
Members are taken away from their homes and familiar surroundings
Members are encouraged to break ties with outside family and friends
Material possessions are taken away
People are often given new names
The group fosters a feeling of “us” vs. “them”, paranoia
Very structured way of life, strict rules
Often required to dress the same
Look back at your research. Add a paragraph to your paper describing the extent to which the factors I mentioned either do or do not apply.
### 05.06 Emergency Driving (DriverEd)
Skill Builder 18 - Driving in Hazardous Conditions: http://www.youtube.com/watch?v=2_4C8hGoOYw
The links in this activity were developed by the state of California to help drivers develop proper driving habits. They are all "YouTube" videos, so if your school blocks access to Youtube.com, then you will have to watch the videos at another location. (library, home, GoogleTV, AppleTV, etc). They are short and contain very good information on the driving task.
Please watch the videos carefully. It may be necessary to watch them a couple of times to understand the information before you start your behind-the-wheel driving experience. After watching the videos in this lesson, you will take a quiz on the content of the video in the next lesson.
### 05.06 Emergency Driving Rules of Road Videos Quiz (DriverEd)
computer-scored 10 points possible 10 minutes
After completing the readings and viewing activities for the lessons in this unit, please take the Unit 05 Rules of Road Videos Quiz.
### 05.06 Here is how it is going to work! - English 10
Interpret words and phrases as they are used in a text.
Analyze the structure of a text, including how specific portions of the text relate to each other and the whole.
Assess how point of view or purpose shapes the content and style of a text.
Use the reading of a text to help determine style and approach to narrative writing.
Read narrative writing to determine an appropriate approach to the writing of narratives.
Include stylistic elements, from the study of a narrative text, in narrative writing.
Fitting it all together. (Image: Wikimedia, Creative Commons, Jigsaw Puzzle)
*All of the activities from this lesson need to be completed and saved in a folder on your hard drive for future use, reference, and grading.
The Plan
You will now be working in tandem between your reading assignments and your narrative writing assignments. The reading will help you to find expressive writing techniques. Many writers practice by copying the text of some of their favorite authors as a way to develop their own style of written expression. The object of these next few assignments is to help you find your voice and write it for others to enjoy.
Keep your reading assignments together in a separate document from your writing assignments. Everything you save in these files will be handed in and graded.
SAVE ALL OF YOUR WORK FROM THIS QUARTER
There are many resources and much information found there that could help you with your study of this book (and possibly other books you may be reading). Feel free to browse it at your leisure.*
### 05.06 Investigate the emerging civil rights movements for women and African-Americans in the early 20th century. (U.S. History)
Investigate the emerging civil rights movements for women and African-Americans in the early 20th century.
Lesson Notes:
Women
Ratification of the 19th Amendment in 1920 giving women the right to vote in national elections was only the opening of a door dealing with women's rights. American women pursued new lifestyles and assumed new jobs and different roles in society during the 1920s. Workplace opportunities and trends in family life are still major issues for women today.
By the 1920s, the experiences of World War I, the draw of cities and changing attitudes had opened up a new world for many young Americans. In the rebellious, pleasure-loving atmosphere of the Twenties, many women began to assert their independence, reject the values of their parents, and demand the same freedoms as men. During the 1920s, a new ideal emerged for some women: the flapper. She was an emancipated young woman who embraced the new fashions and urban attitudes of the day. Close-fitting felt hats, bright waistless dresses an inch above the knees, silk stockings and strings of beads replaced the dark and prim ankle-length dresses, corsets, and petticoats of the late-19th century Victorian days. Many young women cut their long hair into boyish bobs.
Many young women became more assertive. In their bid for equal status with men, some began smoking cigarettes, drinking in public, and talking openly about sex--actions that would have ruined their reputations not too many years before. They danced to new, suggestive dances like the tango, Charleston, the fox trot and the shimmy. Along with these changes, the attitudes toward marriage changed as well. Many middle-class men and women began to view marriage as more of an equal partnership, although housework and child-rearing remained a woman's job.
The fast-changing world of the "Roaring Twenties" produced new roles for women in the workplace and new trends in family life. The booming economy opened new work opportunities for women in offices, factories, stores, and professions. That same economy produced time-saving appliances and products also reshaped the roles of housewives and mothers.
Although women had worked successfully during the war, afterwards employers turned to the returning soldiers to fill positions in industry. Children, who had been thrown in with adults in factory work, farm labor, and apprenticeships, spent most of their days at school and in organized activities with others their own age. Factory owners supported the education of children because an educated child became a more versatile and productive adult.
African-American Voices
During the 1920s, African Americans set new goals for themselves as they moved north to the nation's largest cities. Between 1910 and 1920, in a movement known as the Great Migration, hundreds of thousands of African Americans had uprooted themselves from their homes in the South and moved north and to California in search of jobs. Tensions between the races escalated in the years prior to 1920, culminating, in the summer of 1919, in some 25 urban race riots. Leading the way was James Weldon Johnson, executive secretary of the NAACP (National Association for the Advancement of Colored People), an organization founded in 1909 to protest racial violence and fight for legislation to protect African-American rights. The NAACP made antilynching laws one of its main priorities.
Another approach was taken by Marcus Garvey, an immigrant from Jamaica, who believed that African Americans should build a separate society. His more radical message of black pride aroused the hopes of many. Garvey lured followers with programs to promote African-American businesses. He also encouraged his followers to "return to Africa" and help build a mighty nation. Despite the appeal of Garvey's movement, support for it declined in the mid-1920s when he was convicted of mail fraud and jailed.
But the most powerful and long-lasting movement of black pride is known as the Harlem Renaissance. It was a creative literary and musical movement that spoke for all African-Americans throughout America. Many Southern blacks who migrated north moved to Harlem, a neighborhood on the northern end of Manhattan Island. In the 1920s Harlem became the world's largest black urban community with residents from the South, the West Indies, Cuba, Puerto Rico, and Haiti.
Conclusion
The liberated women and Harlem Renaissance represented two parts of the great social and cultural changes that swept America in the 1920s. The period was characterized by economic prosperity, new ideas, changing values, and personal freedom. The Twenties also brought the nation important new developments in art, literature, and music. Most of these changes were lasting. The economic boom, however, was not.
### 05.06 Lesson 5F: Stem Changing Verbs: O – UE (Poder/Dormir) (Spanish I)
Lesson 5F Stem Changing Verbs: O – UE (Poder/Dormir)

[A copy of this lesson is available in a PDF file!! If you prefer to use this type of document, just click on the following link to complete this lesson: SpI_Lesson5F]

In lesson 4H, we learned about another type of irregular verbs, “stem changing verbs”, and specifically, the verb stems that change from “e” to “ie” (Querer/Preferir). You may want to go back and review that lesson because this lesson follows the same format, but in this lesson, we’ll talk about verb stems that change from “o” to “ue”. (Remember: a verb “stem” is the beginning part of a verb, in front of the verb endings of “-ar”, “-er”, or “-ir”. These verbs are appropriately called “stem changing verbs” because they have all the “regular” verb endings, but it is the stem of the verb that changes!)

As examples of this stem changing verb, we will be using two very common Spanish verbs, “poder – to be able to/can” and “dormir – to sleep”. The stems for these verbs are “poder – pod...” (taking off the “-er” ending) and “dormir – dorm…” (taking off the “-ir” ending). The endings are the same as regular “-er” and “-ir” verbs, but in the stems, “ue” replaces the “o”. The stems become “poder – pued...” and “dormir – duerm…” for all forms, except in the nosotros(as) and vosotros(as) forms!! (¡¡Ojo!!: In the nosotros(as) and vosotros(as) forms, the verb uses the “unchanged” stems, “poder – pod…” and “dormir – dorm…” See the chart below.)

Other “o” to “ue” stem changing verbs: Learn these other verbs below as they are conjugated exactly as the verbs above.

This is a fabulous presentation of this type of stem changing verbs (o to ue). Click on the following link, enter your name, and click on level A. On the left side, click on “Unidad 4”, “Lección 2”, “Gramática”, and then “Presentación: gramática 1”. A window will open with two cute animated marks (save this fun part for the end!!).
Click on the tab “English Grammar Connection” and read the page, then click on the tab “Gramática” and read the page. Now click on the fun red and blue tab, “Pablo and Pili”, and click the arrow at the bottom of the page “>” to hear their fun lesson. When finished, click on the “x” in the upper right corner to close this window. On the left side of the screen, click on “Práctica: gramática 1”. Do the first page and when finished, click on the “Answer” button at the bottom of the page. Click on the blue “2” circle to bring up the next activity. After finishing these 5 activities, click on the “Level B” circle in the upper right of the screen. Now you will have 5 more activities to practice. Then go on to “Level C” for another 5 practice pages!! Wasn’t this a great site with lots of good practice??!!

Summary of Lesson: You should now know how to conjugate stem changing (o to ue) verbs in the present tense. Once again, Professor Jason has done a great video clip explaining “stem changing verbs”. These verbs can also be called “boot verbs” … do you see why??

Practice Exercises: Remember ... Do as many of these exercises as YOU need, as many times as YOU need to do each one, so that you can successfully complete the assignment for this lesson!!
### 05.06 Letter Samples IMPORTANT
Letter Samples IMPORTANT Use this handout to learn the difference between block and modified block style letters as well as additional letter information. PLEASE REFER TO THIS HANDOUT OFTEN AS YOU DO THE LETTER ASSIGNMENTS. THE LETTER ASSIGNMENTS WILL NEED TO BE FORMATTED CORRECTLY! NOTE: Remember to set the top margin to 2 inches before starting the letter!
### 05.06 Nervous System Links (Human Biology)
E-themes links for the Nervous System: http://www.emints.org/ethemes/resources/S00000666.shtml
These sites have illustrations, images, charts, and research material about the human nervous system and its functions. Also includes interactive human body tours, quizzes, and games.
### 05.06 Point of Interest Assignment (DigitalPhoto2)
teacher-scored 100 points possible 20 minutes
Point of interest.
Submit three different images of three different scenes that demonstrate your understanding and ability to follow the rules of good composition. These images should show a point of interest in the photograph.
Do not send any pictures of your car. A car parked in your driveway is boring.
Take pictures that demonstrate good composition. These images should show a point of interest in the photograph. It could be a still life or something out of the ordinary that captures your attention. Perhaps your picture tells a story.
01.2.6 Point of Interest (DigitalPhoto2)
### 05.06 Reports--WP5A-B (Computer Technology)
teacher-scored 20 points possible 90 minutes
Report Guidelines
Study the attached MLA_Report_Instructions.pdf to learn how to type an MLA style report. Study the Formatting_Reports.pdf instructions to learn how to format a report in Word. Use these guides as you complete your Hero Report that you started in assignment OR1 in the Online Resources unit.
Citing Sources in Reports Presentation
View the attached Citing_Sources.pps presentation to learn rules about citing sources to avoid plagiarism.
Works Cited Activity (WP5A)
This activity will help you see how easy it is to use the Citation Machine to create a Works Cited page. Follow the instructions in Works Cited Activity (WP5A).doc. Send as attachment to WP5A assignment.
MLA Report (WP5B)
Study the Report Guidelines above before completing this assignment. Type your Hero Report using MLA format and include a Works Cited page. Be sure to use the Landmarks Citation Machine website or the EasyBib website in section 05.6.1 to format the sources correctly. Be sure to check your work using the Report Grading Sheet.doc before you send it as an attachment to WP5B.
### 05.06 Section 6.4 (Calculus)
HC: Volume of a solid of known cross-section: http://www.hippocampus.org/course_locator?course=AP%20Calcul...
HippoCampus Lesson 43 Homework (link in bottom left): http://www.montereyinstitute.org/courses/AP%20Calculus%20AB%...
### 05.06 Solve Quadratics with Complex Solutions (Math Level 2)
Solve quadratics that have complex roots.
Lesson material is from Beginning Algebra http://2012books.lardbucket.org that has a Creative Commons by-nc-sa 3.0 license.
We complete our study of solutions to quadratic equations by including solutions that involve imaginary roots.
Let's review a little bit before moving on. We know that $\sqrt{-9}$ is not a real number.
$\sqrt{-9}={\color{Blue} ?}$ or $\left ( {\color{Blue} ?} \right )^{2}=-9$
There is no real number that when squared results in a negative number. We begin to resolve this issue by defining the imaginary unit, $i$, as the square root of −1.
$i=\sqrt{-1}$ and $i^{2}=-1$
To express a square root of a negative number in terms of the imaginary unit $i$, we use the following property where a represents any non-negative real number:
$\sqrt{-a}=\sqrt{-1\cdot a}=\sqrt{-1}\cdot \sqrt{a}=i\sqrt{a}$
With this we can write
$\sqrt{-9}=\sqrt{-1\cdot 9}=\sqrt{-1}\cdot \sqrt{9}=i\cdot 3=3i$
If $\sqrt{-9}=3i$, then we would expect that $3i$ squared will equal −9:
$(3i)^{2}=9i^{2}=9\cdot -1=-9$
In this way any square root of a negative real number can be written in terms of the imaginary unit. Such a number is often called an imaginary number.
http://www.onemathematicalcat.org
Example 1
Rewrite in terms of the imaginary unit $i$.
a. $\sqrt{-7}$
b. $\sqrt{-25}$
c. $\sqrt{-72}$
Solution:
a. $\sqrt{-7}=\sqrt{-1\cdot 7}=\sqrt{-1}\cdot \sqrt{7}=i\sqrt{7}$
b. $\sqrt{-25}=\sqrt{-1\cdot 25}=\sqrt{-1}\cdot \sqrt{25}=i\cdot 5=5i$
c. $\sqrt{-72}=\sqrt{-1\cdot36\cdot 2}=\sqrt{-1}\cdot \sqrt{36}\cdot \sqrt{2}=i\cdot6\cdot \sqrt{2} =6i\sqrt{2}$
Notation Note: When an imaginary number involves a radical, we place $i$ in front of the radical. Consider the following:
$6i\sqrt{2}=6\sqrt{2}i$
Since multiplication is commutative, these numbers are equivalent. However, in the form $6\sqrt{2}i$ the imaginary unit $i$ is often misinterpreted to be part of the radicand. To avoid this confusion, it is a best practice to place $i$ in front of the radical and use $6i\sqrt{2}$.
Now let's apply this to solving quadratic equations whose solutions involve imaginary roots.
Example 1: Solve using the quadratic formula: $x^{2}-2x+5=0$
Solution: Begin by identifying a, b, and c: a = 1, b = -2, and c = 5
Substitute these values into the quadratic formula and then simplify.
$\dpi{100} \fn_phv x=\frac{-{\color{Red} b}\pm\sqrt{{\color{Red} b}^{2}-4{\color{Blue} a}{\color{Green} c}} }{2{\color{Blue} a}}$
$\dpi{100} \fn_phv x=\frac{-({\color{Red} -2})\pm\sqrt{({\color{Red} -2})^{2}-4({\color{Blue} 1})({\color{Green} 5})} }{2({\color{Blue} 1})}$
$\dpi{100} \fn_phv x=\frac{2\pm \sqrt{4-20}}{2}$
$\dpi{100} \fn_phv x=\frac{2\pm \sqrt{-16}}{2}$
$\dpi{100} \fn_phv x=\frac{2\pm 4i}{2}$
$\dpi{100} \fn_phv x=\frac{2}{2}\pm \frac{4i}{2}$
$\dpi{100} \fn_phv x=1\pm 2i$
Check these solutions by substituting them into the original equation.
Answer: The solutions are 1−2i and 1 + 2i.
The equation may not be given in standard form. The general steps for solving using the quadratic formula are outlined in the following example.
Example 2: Solve: (2x + 1)(x − 3) = x − 8.
Solution:
Step 1: Write the quadratic equation in standard form:
$\dpi{100} \fn_phv (2x+1)(x-3) = x-8$
$\dpi{100} \fn_phv 2x^{2}-6x+x-3 = x-8$
$\dpi{100} \fn_phv 2x^{2}-5x-3 = x-8$
$\dpi{100} \fn_phv 2x^{2}-6x+5=0$
Step 2: Identify a, b, and c for use in the quadratic formula: a = 2, b = - 6, c = 5
Step 3: Substitute the appropriate values into the quadratic formula and then simplify.
$\dpi{100} \fn_phv x=\frac{-{\color{Red} b}\pm\sqrt{{\color{Red} b}^{2}-4{\color{Blue} a}{\color{Green} c}} }{2{\color{Blue} a}}$
$\dpi{100} \fn_phv x=\frac{-{\color{Red} (-6)}\pm\sqrt{{\color{Red} (-6)}^{2}-4{\color{Blue} (2)}{\color{Green} (5)}} }{2{\color{Blue} (2)}}$
$\dpi{100} \fn_phv x=\frac{6\pm \sqrt{36-40}}{4}$
$\dpi{100} \fn_phv x=\frac{6\pm \sqrt{-4}}{4}$
$\dpi{100} \fn_phv x=\frac{6\pm 2i}{4}$
$\dpi{100} \fn_phv x=\frac{6}{4}\pm \frac{2i}{4}$

$\dpi{100} \fn_phv x=\frac{3}{2}\pm \frac{1}{2}i$
Answer: The solution is $\dpi{100} \fn_phv \frac{3}{2} \pm \frac{1}{2} i$. The check is optional.
Example 3: Solve: x(x + 2) = - 19
$\dpi{100} \fn_phv x(x+2)=-19$
$\dpi{100} \fn_phv x^{2}+2x=-19$
$\dpi{100} \fn_phv x^{2}+2x+19 = 0$
Here a = 1, b = 2, and c = 19. Substitute these values into the quadratic formula:
$\dpi{100} \fn_phv x=\frac{-{\color{Red} b}\pm\sqrt{{\color{Red} b}^{2}-4{\color{Blue} a}{\color{Green} c}} }{2{\color{Blue} a}}$
$\dpi{100} \fn_phv x=\frac{-{\color{Red} (2)}\pm\sqrt{{\color{Red} 2}^{2}-4{\color{Blue} (1)}{\color{Green} (19)}} }{2{\color{Blue} (1)}}$
$\dpi{100} \fn_phv x=\frac{-2\pm \sqrt{4-76}}{2}$
$\dpi{100} \fn_phv x=\frac{-2\pm \sqrt{-72}}{2}$
$\dpi{100} \fn_phv x=\frac{-2\pm \sqrt{-1\cdot 36\cdot 2}}{2}$
$\dpi{100} \fn_phv x=\frac{-2\pm 6i\sqrt{ 2}}{2}$
$\dpi{100} \fn_phv x=\frac{-2}{2}\pm \frac{6i\sqrt{2}}{2}$
$\dpi{100} \fn_phv x=-1\pm 3i\sqrt{2}$
Answer: The solutions are $\dpi{100} \fn_phv x=-1\pm 3i\sqrt{2}$
Notation Note
Consider the following: $\dpi{100} \fn_phv -1+3i\sqrt{2}= -1+3\sqrt{2}i$
Both numbers are equivalent and $\dpi{100} \fn_phv -1+3\sqrt{2}i$ is in standard form, where the real part is $\dpi{100} \fn_phv -1$ and the imaginary part is $\dpi{100} \fn_phv 3\sqrt{2}$. However, this number is often expressed as $\dpi{100} \fn_phv -1+3i\sqrt{2}$ to avoid the possibility of misinterpreting the imaginary unit as part of the radicand.
Try this! Solve: $\dpi{100} \fn_phv (2x+3)(x+5) = 5x+4.$
Answer: $\dpi{100} \fn_phv \frac{-4\pm i\sqrt{6}}{2} = -2\pm \frac{\sqrt{6}}{2}i$
Click on the link below for a video solution to this problem.
### 05.06 Solve Quadratics with Complex Solutions - Video Solution (Math Level 2)
I highly recommend that you click on the links below and watch the videos before continuing:
### 05.06 Solve Quadratics with Complex Solutions - Videos (Math Level 2)
mathispower4u: Ex: Quadratic Formula - Complex Solutions http://youtu.be/11EwTcRMPn8
mathispower4u: Ex: Solve a Quadratic Equation Using the Quadratic Formula with Complex Solutions (Decimal Approx.) http://youtu.be/wz1_dJYBZHU
mathispower4u: Ex: Solve a Quadratic Equation Using the Quadratic Formula with Complex Solutions (Exact Value) http://youtu.be/UofR4XEKpYE
If after completing this topic you can state without hesitation that...
• I can solve quadratics that have complex roots.
…you are ready for the assignment! Otherwise, go back and review the material before moving on.
### 05.06 Solve Quadratics with Complex Solutions - Worksheet (Math Level 2)
teacher-scored 80 points possible 40 minutes
Activity for this lesson
Complete the attached worksheet.
1. Print the worksheet and complete the assignment in the space provided. You may use additional paper if needed. Work all the problems showing ALL your steps.
2. Once you have completed the assignment, digitize (scan or take digital photo, up close and clear) and save it to the computer and convert it to an image file such as .pdf or .jpg.
Pacing: complete this by the end of Week 3 of your enrollment date for this class.
### 05.06 Spelling commonly misspelled words (English 9)
Spell correctly.
Why is English spelling so inconsistent and tricky?
It ties back into the history of the English language.
"In spelling, the [English] language was assimilating the consequences of having a civil service of French scribes, who paid little attention to the traditions of English spelling that had developed in Anglo-Saxon times. Not only did French qu arrive, replacing Old English cw (as in queen), but ch replaced c (in words such as church--Old English cirice), sh and sch replaced sc (as in ship--Old English scip), and much more. Vowels were written in a great number of ways. Much of the irregularity of modern English spelling derives from the forcing together of Old English and French systems of spelling in the Middle Ages. People struggled to find the best way of writing English throughout the period. ... Even Caxton didn't help, at times. Some of his typesetters were Dutch, and they introduced some of their own spelling conventions into their work. That is where the gh in such words as ghost comes from.
"Any desire to standardize would also have been hindered by the ... Great English Vowel Shift, [which] took place in the early 1400s. Before the shift, a word like loud would have been pronounced 'lood'; name as 'nahm'; leaf as 'layf'; mice as 'mees'. ...
"The renewed interest in classical languages and cultures, which formed part of the ethos of the Renaissance, had introduced a new perspective into spelling: etymology. Etymology is the study of the history of words, and there was a widespread view that words should show their history in the way they were spelled. These weren't classicists showing off. There was a genuine belief that it would help people if they could 'see' the original Latin in a Latin-derived English word. So someone added a b to the word typically spelled det, dett, or dette in Middle English, because the source in Latin was debitum, and it became debt, and caught on. Similarly, an o was added to peple, because it came from populum: we find both poeple and people, before the latter became the norm. An s was added to ile and iland, because of Latin insula, so we now have island. There are many more such cases. Some people nowadays find it hard to understand why there are so many 'silent letters' of this kind in English. It is because other people thought they were helping."
David Crystal, The Fight for English: How language pundits ate, shot, and left, Oxford, 2006, pp. 26-9.
"Loose Talk" poster: NARA, ca. 1944, public domain
Yes, spelling can be a nuisance, but yes, it is important. Here are some general guidelines to help you become a better speller:
1) Learn some of the basic rules that work MOST of the time, for instance:
i before e except after c, or when sounded as "ay" as in neighbor or weigh
When you add a suffix to a word which ends with a single vowel followed by a single consonant, double the consonant (fit becomes fitted or fitting); when you add a suffix that begins with a vowel to a word that ends in a silent e, drop the e and add the suffix (fade becomes fading or faded).
For words that end in a consonant & y, change the y to i UNLESS the suffix begins with i (fry becomes frying or fries or fried).
2) Use the spell-check on your computer, but
3) Don't depend on ONLY the computer spell-check. Your computer doesn't know what word you meant to spell, only whether the words you used match a long list of possible words.
4) Ask a friend or family member who is a good speller to check your work.
5) Make a list of words you have trouble with, and work on learning one of them each day or week.
Common Words that Cause Spelling Errors
1) there refers to a place (here and there).
they're is the contraction for "they are" (They're coming over later.)
their is the possessive of they, which is why the e comes before the i. (Don't touch their fancy car.)
2) Two is the number 2. (I want two cookies.)
too may mean "also" (I want cake, too.) or may add emphasis (It was too hot.)
to is a preposition or part of the infinitive (I want to go to the store.)
3) Conscious means you know what is going on around you (you haven't been knocked out). (He was conscious of them all staring at him.)
Conscience is the part of your mind that tells you something is morally right or wrong. (Her conscience was bothering her because she had lied to her best friend.)
4) which is a pronoun referring to something. (I couldn't decide which one to buy.)
witch is a woman with magical (often evil) powers. (The witch cast a spell to turn the prince into a frog.)
5) were is the past tense of "are". (They were sad.)
where refers to a place. (Where are we going?)
6) loose is the opposite of tight. (The screw had worked loose and fallen off.)
lose is the verb related to lost. (I didn't want to lose my earring.)
7) all right is the only correct way to write this expression (there is no such word as "alright").
8) desert is a dry place. (Cacti grow in the desert.)
dessert is the treat you eat after dinner. (We're having apple pie for dessert.)
Note that the above two words break the usual pronunciation rule that in two syllable words, vowels followed by a single consonant are long, and vowels followed by a double consonant are short.
Desert (pronounced like "dessert" with a long E in the first syllable) can also be a verb meaning to leave (I can't imagine how a mother could desert her children.)
9) everyday is always an adjective. (I wore my everyday clothes except to church.)
If you mean something that happens day after day, you write every day. (I get up before noon every day.)
10) definitely means certainly (I definitely want to graduate from high school.)
defiantly means with defiance (The demonstrators shouted defiantly at the police.)
11) Pay attention to the difference between into and in to especially if preceded by "turned":
Correct - The sorceress turned them into frogs.
Possibly confusing - I turned into a parking lot. (Did you turn so as to arrive in a parking lot, or were you transformed from a human into flat asphalt?)
Correct - He turned them in to the police.
Incorrect: He turned them into the police.
Correct - I went in to see what was going on.
Correct - I was accepted into the club.
### 05.06 Spelling quiz (English 9)
computer-scored 15 points possible 10 minutes
Take the spelling quiz. Go to the class main page, and to Topic 3, to take this quiz. You may take it multiple times, but you must score at least 73%.
Pacing: complete this by the end of Week 2 of your enrollment date for this class.
### 05.06 Unit 5 quiz (Java)
teacher-scored 1 points possible 30 minutes
Read This - Review Notes - Study these for the exam. This exam is a compilation of questions from the previous quizzes. You should do very well on this quiz, as you have seen all the questions before. I have also included a review of some key ideas for you. You should also read the sample quizzes and answers from CHAPTER 1 and CHAPTER 2 of the book.
Here is some review information that may be covered on the quiz. You should know this information very well; that is why this is basically a review quiz. It also includes some information on the GUI and JOptionPane objects. Save it for reference later; it contains some information on additional functions (methods) we will use.
Review this information and then take the unit 5 quiz.
IN NO PARTICULAR ORDER
(the information on GUI and Swing will be covered later; there will be no questions about it on this quiz.)
* An application is a program that executes using the Java interpreter after it has been compiled to bytecode
* Blank lines, spaces, carriage returns, and tab characters are known as white space and help in making your source code more readable.
* The first line of the main method must be defined as follows
public static void main(String args[])
* A left brace is read as “begin” and right brace is read as “end”
* Java is case sensitive, upper and lower case letters are considered differently
* Method System.out.println displays or prints a line of information in the command window. When println finishes, it outputs a carriage return and places the cursor on the next line; System.out.print leaves the cursor on the same line (no carriage return).
* Every statement must end with a semicolon. Not every line, but every statement.
* Identifiers must begin with a letter
* Identifiers must only contain letters, digits, and the underscore _
* Keywords are reserved words, reserved for Java
* Keywords must all appear in lowercase letters.
* Standard practice and good style states that all Java class names begin with a capital letter; if the class name contains more than one word, the first letter of each word should be capitalized. This is called camel case.
* Keyword class introduces a class definition and immediately followed by a class name.
* Methods are able to perform tasks and return information (if necessary) when they are finished.
* Java applications all begin at method main
* A static method is called by following its class name by a dot and the name of the method.
* The System.exit (0) method terminates an application that uses a GUI. Not necessary in the console window.
* A variable is a location in computer memory where data is stored. The book refers to this as a "box"
* Variable names must be a valid identifier
* Variables all have a NAME, a TYPE , and a VALUE
* All variables have to be declared, it is helpful to read the declaration from right to left and substitute the words “of Type” for the space. The declaration String myname; is read “variable myname is of type string”
* Declarations end with semi-colons;
* There are 8 primitive data types -- int, short, byte, long, char, boolean, float, double.
* int, short, byte, long are all integers (whole number)
* float, double are real numbers, they contain a decimal, double have the greater precision.
* char is any single character that could be typed from the keyboard.
* boolean is either true or false
* import adds java packages (classes, methods, and commands) to your program, you “check them out” of the java class library.
* The core java classes are called “java”
* The extension java classes are called “javax”
* import javax.swing.* means use ALL the swing methods and classes
* swing is the java package for familiar windows GUI interface classes and methods
* Source code is what you the human create
* Object code is what the java compiler creates
* Java has its own type of object code called BYTECODE
* Syntax errors are errors that must be corrected before the compiler can finish creating the bytecode.
* Semantic errors are errors in logic that can generate incorrect results.
Class/Methods/Functions used so far:
JOptionPane - a class to provide the user a simple GUI interface
Integer.parseInt() - a method to convert a string to a number: "123" is converted to one hundred twenty-three. The argument is any string; the value returned is an integer.
System.out.println()
System.out.print()
System.exit(0)
JOptionPane.showConfirmDialog() =Asks a confirming question, like yes/no/cancel.
JOptionPane.showInputDialog() = Prompt for some input.
JOptionPane.showMessageDialog() = Tell the user about something that’s happened.
JOptionPane.showOptionDialog() = The Grand Unification of the above three
Key in the following program to see what each of these methods does:
1. In jcreator start a new project, delete the template code, and key in the following. Compile it and run… Change some of the string values to see what they do.
import javax.swing.JOptionPane;
public class Example2 {
    public static void main(String[] args) {
        JOptionPane.showMessageDialog(null, "Welcome to Java Programming");
        JOptionPane.showInputDialog(null, "enter a number");
        JOptionPane.showConfirmDialog(null, "Would you like to exit");
        System.exit(0);
    }
}
### 05.06 Unit 5 Test (Financial Literacy)
Review your assignments from Unit 5 to prepare for the Unit 5 Test.
### 05.06 Unit 5 test (Financial Literacy)
computer-scored 100 points possible 45 minutes
Take the unit 5 test.
Pacing: complete this by the end of Week 7 of your enrollment date for this class.
### 05.06 Word - What You Need to Know (CompTech07)
teacher-scored 5 points possible 20 minutes
Word--What You Need to Know (WP1C)
Open up Word--What You Need to Know and fill in the worksheet by following the instructions given. (The tabs and ribbons will not be active when you open the worksheet. Print the worksheet. Go to a new Word document. Write in the answers as you go through the features and commands in Word. Type the answers into the worksheet.)
Assignment: Go into Assignment (WP1C) and attach the file.
### 05.06. Investigate the emerging civil rights movements for women and African-Americans in the early 20th century. (U.S. History)
Higher Education for Women http://www.ed.gov/offices/OERI/PLLI/webreprt.html
Women's Suffrage http://www.spartacus.schoolnet.co.uk/USAsuffrage.htm
NAWSA and Women's Suffrage http://www.brynmawr.edu/library/exhibits/suffrage/nawsa.html
Booker T. Washington http://memory.loc.gov/ammem/aaohtml/exhibit/aopart6.html
Women Organize in Labor Unions http://www.historylink.org/index.cfm?DisplayPage=output.cfm&...
Labor activist Mother Jones, early 1900's: Wikimedia Commons, Library of Congress, public domain
Women Organize in Labor Unions
Although women were barred from many unions, they united behind powerful leaders to demand better working conditions, equal pay for equal work and an end to child labor. The most prominent leader in the women's labor movement was Mother Jones. Please review the information that we have seen about her to find out what she was fighting for in this movement.
Higher Education for Women
Please go to this short report to find out why opportunities in higher education for women were increasing in the late 1800's. What did receiving more education mean for women?
Women's Suffrage
Find out about the movement that gained women the right to vote, or the women's suffrage movement. Why was voting important to these women, and who were their major leaders?
NAWSA and Women's Suffrage
Find out much more about the women's suffrage movement and their main organization, the NAWSA. How did the suffrage movement end--did they meet with success?
NACW
Find out about the women who started this organization and what their goals were for their sex and race.
Booker T. Washington
Find out about this prominent African American leader and the Tuskegee Institute. Also, read about his famous speech, the Atlanta Compromise. What did he use as a symbol of how Americans of different races could successfully work together?
W.E.B. Du Bois
W.E.B. Du Bois publicly fought for the rights of African Americans throughout his life. Find out what he believed and wanted for his people. Was it the same as Booker T. Washington's vision? Also, learn about the founding of the NAACP and what Du Bois called the "Talented Tenth."
### 05.06.00 - THE HERO WITHIN
Perhaps the archetype of the Hero's Journey has transcended time and place because it touches on the very essence of human experience. People everywhere throughout time have experienced crises and calls to adventure. To accept such calls is to embrace growth, change and transformation. An awareness of the Hero's Journey, consequently, provides us a map to understand and interpret our lives. As we learn to view life as a journey, we become better equipped to give life-enhancing meaning to our experiences. Not all of life's calls and challenges are of epic proportion. Indeed, virtually anything that requires us to move out of our comfort zone--a challenging class, a new job, even writing an essay--may serve as a catalyst for personal development.

For this next activity, I'd like you to recreate a time in your life when you elected (or were perhaps compelled) to stretch and to grow. You have just completed step 1 of this essay in the previous activity. Now for step 2, write the rough draft. As part of your narrative, explain which elements of the Hero's Journey were present in your ordeal. (For example: What kind of preparation did you have? Were there any helpers or mentor figures? How did the trial test you? Did you encounter disappointing setbacks or periods of self-doubt? How did you grow or develop as a result of the ordeal? With whom did you share your ultimate success? How did they benefit as a result?)

Now, I'd like you to practice step 3 in writing drafts by sharpening the sentences and polishing just the introductory paragraph. For this assignment, turn in both steps: the rough draft and the revised introductory paragraph.
### 05.06.00 Finding Solutions of Polynomials Using Numerical Methods (PreCalc)
Most polynomial equations do not have rational solutions. Numerical methods can be used to find solutions to these polynomials. You can download the attached file or read the same content below.
### 05.06.01
teacher-scored 12 points possible 60 minutes
Make sure you turn in both the rough draft and the revision of your introductory paragraph.
Assessment Rubric:
Content: Includes completed rough draft, and revision of introduction shows a strong thesis statement as part of an interesting introduction. /4
Support: Your rough draft shows a clear thesis and at least three body paragraphs with detail which is specific and directly supports the thesis. /4
Clarity: While not a polished draft, the ideas are clear. /4
### 05.06.01 Bisection Method (PreCalc)
The final way to find these elusive non-rational zeros of polynomials (and let's be honest, most polynomials are going to fall in this category) is by using a more sophisticated method of guess and check.

Useful Fact: If ƒ(x) is a polynomial function and a and b are numbers such that ƒ(a) > 0 and ƒ(b) < 0, then the function ƒ(x) has at least one zero between a and b. (This is also called the locator theorem.)

So what? Well, we can use this information to find irrational zeros, to whatever precision you wish. (Precision basically means "how many decimal places.") There are several techniques to narrow down these zeros, but since this is a pre-calculus class, we won't get too sophisticated. Any numerical method of solving a problem will start with a guess: you calculate the value, then make another guess. You keep doing this, moving closer to the "real" answer, until you are as precise as you wish. It is simpler to show you how to do this than to explain it, so you should follow along on your calculator.

Consider the polynomial ƒ(x) = 6x^4 – 3x^2 + 4x – 1. We already looked for rational roots of this polynomial, and determined there are none. We did this by trying all the possible rational roots. While none of those values of ƒ(x) are zero, some get closer than others. In particular, ƒ(1/3) is close to zero, and it is positive, so 1/3 will be our a from the previous theorem. We also need a value where ƒ is less than zero, and starting as close to zero as possible will help us find the "real" zero faster; ƒ(1/6) is negative, so 1/6 will be our b from the previous theorem.

We know that the actual zero is between 1/3 and 1/6, so for our next guess, select the number halfway between these. (I worked through this problem already, so I know the values of ƒ(x). Therefore, I label each new guess with an a if ƒ(x) is positive, and a b if ƒ(x) is negative. I think this will help make it more clear, but it also means the labels will appear somewhat random.
When you do this yourself, you can label things however you want, but remember: you are always making your next guess between a positive ƒ(x) and a negative ƒ(x).)

Now, find ƒ(b_i), where b_i = 1/4 is that halfway point. This is not zero, but it is closer than ƒ(b). It is also negative. This means that the actual zero is between 1/3 and 1/4. So, we guess again. This time we want to guess the number that is halfway between 1/3 and 1/4, or 7/24. This value is even closer to zero, and still negative. So our next guess will be halfway between 1/3 and 7/24, which is 5/16. This answer is positive, so we are now looking for a number between 5/16 and 7/24. Guess halfway between those.

As you can see, we are getting closer. We are at zero to one decimal point, and the number is also the same number to one decimal point. So, if that is all the closer you want to get, you could report that the zero is at x = 0.3. However, I am going to ask you to get even closer. Again, choose a number halfway between a_i and b_iii. That is zero to 2 decimal points (the 6 rounds up), so let's keep going. And repeat: we now want to find a number between a_ii and b_iv. We are still only zero to 2 decimal places, so keep going. Notice that both ƒ(a_ii) and ƒ(a_iii) are positive, while ƒ(b_iv) is negative. So we are looking for a number between a_iii and b_iv. And we are still only at 2 decimal places.

Each time we repeat this process is an iteration. We will stop when we reach convergence. To reach convergence, first we decide (in advance) how close we want to get; then we stop when we get there. We can base the convergence on many different criteria. For example, we could say that we want ƒ(x) to equal zero to 4 decimal places. Or we could say that we want a and b to equal each other to 3 decimal places. Or we could say that we want ƒ(b) to be within 10% of ƒ(a). Etc. These are all valid "end point" criteria. Since we have already determined that there are no rational zeros, the zero will be an approximation anyway.
You just need to figure out how close you want the answer. (Real problems are solved this way all the time.) Also, since this is such a repetitive process, it is an ideal computer problem, and people write programs all the time to solve problems like this. I have been doing this by using the "Entry" button on my calculator (this button will redisplay your previous entry, which you can then modify). You could also program a spreadsheet to do the iterations. In general, you do not want to do this by hand over and over.

You should notice that we are not getting to zero very fast. This is a problem with all numerical methods: your first few steps get you close to zero quickly, but each step after that is smaller and smaller. If you really wanted to find the complete irrational number that is the zero for this function, it would take you forever. So, with that in mind, I am going to stop when a equals b to 3 decimal places.

We currently have a_iv = 0.30794... and b_iv = 0.30729... Since that 9 rounds up but the 2 rounds down, I only have 2 decimal places. My next 2 iterations get me a_v = 0.307617..., ƒ(a_v) = 0.00010877..., and b_v = 0.307454, ƒ(b_v) = -0.00015345... We now have ƒ(x) = 0 to 3 decimal places, and a = b to 2 decimal places (that 6 rounds up, but the 4 rounds down). So keep going: a_vi = 0.3075358..., ƒ(a_vi) = 0.0000787..., b_vi = 0.307495..., ƒ(b_vi) = -0.0000373... Both of these values round to 0.3075, so they are the same number to 4 decimal places, and we have convergence. (Notice that we actually skipped our 3 decimal place convergence. That happens when the next digit is 5; it is okay.) Now, we can report that a zero for this function is x = 0.3075.

Are we done? Well, not really. This is a polynomial of even power with a positive leading coefficient. This means that the graph goes up at both the right end and the left end. So, if it crossed the x-axis once, it has to cross it again. (For a similar reason, an odd-degree polynomial must have at least 1 zero.)
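Since the halving procedure is so repetitive, it is easy to automate. This is a hypothetical Python sketch of the bisection idea described above (the lesson itself uses a calculator; the function names are my own). It keeps a where ƒ is positive and b where ƒ is negative, and stops when the two agree to within the chosen tolerance:

```python
def f(x):
    # The lesson's polynomial: f(x) = 6x^4 - 3x^2 + 4x - 1
    return 6 * x**4 - 3 * x**2 + 4 * x - 1

def bisect(f, a, b, tol=1e-6):
    """Assumes f(a) > 0 and f(b) < 0; halves the interval until |a - b| < tol."""
    while abs(a - b) > tol:
        mid = (a + b) / 2
        if f(mid) > 0:
            a = mid  # positive guess becomes the next a
        else:
            b = mid  # negative (or zero) guess becomes the next b
    return (a + b) / 2

root = bisect(f, 1/3, 1/6)  # starts from the lesson's a = 1/3, b = 1/6
```

Run this way, the result rounds to 0.3075, matching the hand iteration in the text.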
At this point, I could use Descartes' rule of signs to try to figure out how many more zeros I am looking for, and what their signs are. I am not going to ask you to use the rule, but you may want to familiarize yourself with it. It has only limited usefulness, since it doesn't tell you exactly how many zeros you have, but it does put some constraints on these zeros.

Descartes' rule of signs tells you to consider how many times the coefficients of ƒ(x) change signs. In our case, ƒ(x) = 6x^4 – 3x^2 + 4x – 1. The signs of the coefficients are + – + –, so they change signs 3 times. This means that there are either 3 or 1 positive zeros (3 – 2 = 1). We already found one positive zero, so we may need to look for 2 more, or we might have all the positive zeros of this function. The graph is increasing around this zero (remember, ƒ(0.3074) is negative and ƒ(0.3075) is positive). So, it is possible that we have all the positive zeros.

Next, Descartes' rule of signs tells you to consider how many times the coefficients of ƒ(-x) change signs. In our case, ƒ(-x) = 6x^4 – 3x^2 – 4x – 1. The signs of the coefficients are + – – –, so they change signs once. This means there is exactly one negative zero. (The subtracting-2 rule doesn't apply here, since 1 – 2 = -1, and we can't have a negative number of zeros.) So we should look for that zero.
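The sign-change count behind Descartes' rule can also be sketched in a few lines. This hypothetical Python helper (my own, not from the lesson) counts sign changes in a coefficient list, skipping zero coefficients as the rule requires:

```python
def sign_changes(coeffs):
    """Count sign changes in a list of coefficients, ignoring zeros."""
    nonzero = [c for c in coeffs if c != 0]
    return sum(1 for p, q in zip(nonzero, nonzero[1:]) if p * q < 0)

# f(x)  = 6x^4 + 0x^3 - 3x^2 + 4x - 1
print(sign_changes([6, 0, -3, 4, -1]))   # 3 changes -> 3 or 1 positive zeros
# f(-x) = 6x^4 + 0x^3 - 3x^2 - 4x - 1
print(sign_changes([6, 0, -3, -4, -1]))  # 1 change -> exactly one negative zero
```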
### 05.06.01 Lesson 5F: Stem Changing Verbs: O – UE (Poder/Dormir) (Spanish I)
computer-scored 20 points possible 30 minutes
**Assignment 05.06.01: Stem Changing Verbs: O – UE (Poder/Dormir)**: You know the drill: find the button and click on it!
### 05.06.01 Reports WP5 Links (Computer Technology)
Landmarks Citation Machine http://www.citationmachine.net
Using Modern Language Association (MLA) Report Format http://owl.english.purdue.edu/owl/resource/747/01/
A Guide for Writing Research Papers http://www.ccc.commnet.edu/mla/index.shtml
EasyBib http://www.easybib.com/
Landmarks Citation Machine or EasyBib
These are great websites to help you create works cited. You type in the information for the specific type of resource, and it creates the citation for you. You can then copy and paste it into your document. I would definitely choose at least one of these websites to bookmark.
### 05.06.01 Romeo and Juliet comparison essay (English 9)
Audio introduction to 'Romeo and Juliet comparison essay' assignment https://pp1.ehs.uen.org:8170/podcastproducer/attachments/CD6...
Interactive guide to writing comparison contrast essays http://www.readwritethink.org/classroom-resources/student-in...
To listen to the introduction, use the link above and then click the microphone icon and the play button.
### 05.06.01 STORY BREAK #5 - English 10
"Everybody would come to the house to see my mama . . ." http://storycorps.org/listen/stories/penelope-simmons-and-su...
STORY BREAK COURTESY OF "STORYCORPS"
Listen to the following story . . .
“Everybody would come to the house to see my mama...”
Write down how you can and can't relate to these people. Save your response in its own document/file to be submitted separate from the rest of the assignments.
*All of the above information/work should be saved in a folder on your hard drive for future use, reference and grading.
SAVE ALL OF YOUR WORK FROM THIS QUARTER
### Signatures of Mottness and Hundness in archetypal correlated metals (1708.05752)
Jan. 15, 2019 cond-mat.str-el
Physical properties of multi-orbital materials depend not only on the strength of the effective interactions among the valence electrons but also on their type. Strong correlations are caused by either Mott physics that captures the Coulomb repulsion among charges, or Hund physics that aligns the spins in different orbitals. We identify four energy scales marking the onset and the completion of screening in orbital and spin channels. The differences in these scales, which are manifest in the temperature dependence of the local spectrum and of the charge, spin and orbital susceptibilities, provide clear signatures distinguishing Mott and Hund physics. We illustrate these concepts with realistic studies of two archetypal strongly correlated materials, and corroborate the generality of our conclusions with a model Hamiltonian study.
### Uncovering the Origin of Divergence in the CsM(CrO$_4$)$_2$ (M = La, Pr, Nd, Sm, Eu; Am) Family through Examination of the Chemical Bonding in a Molecular Cluster and by Band Structure Analysis (1803.07370)
March 20, 2018 cond-mat.str-el
A series of f-block chromates, CsM(CrO$_4$)$_2$ (M = La, Pr, Nd, Sm, Eu; Am), were prepared revealing notable differences between the AmIII derivatives and their lanthanide analogs. While all compounds form similar layered structures, the americium compound exhibits polymorphism and adopts both a structure isomorphous with the early lanthanides as well as one that possesses lower symmetry. Both polymorphs are dark red and possess band gaps that are smaller than the LnIII compounds. In order to probe the origin of these differences, the electronic structure of $\alpha$-CsSm(CrO$_4$)$_2$, $\alpha$-CsEu(CrO$_4$)$_2$ and $\alpha$-CsAm(CrO$_4$)$_2$ were studied using both a molecular cluster approach featuring hybrid density functional theory and QTAIM analysis, and by the periodic LDA+GA and LDA+DMFT methods. Notably, the covalent contributions to bonding by the f orbitals was found to be more than twice as large in the AmIII chromate than in the SmIII and EuIII compounds, and even larger in magnitude than the Am-5f spin-orbit splitting in this system. Our analysis indicates also that the Am-O covalency in $\alpha$-CsAm(CrO$_4$)$_2$ is driven by the degeneracy of the 5f and 2p orbitals, and not by orbital overlap.
### Magneto-transport properties of the "hydrogen atom" nodal-line semimetal candidates CaTX (T=Ag, Cd, X=As, Ge) (1703.01341)
Topological semimetals are characterized by protected crossings between conduction and valence bands. These materials have recently attracted significant interest because of the deep connections to high-energy physics, the novel topological surface states, and the unusual transport phenomena. While Dirac and Weyl semimetals have been extensively studied, the nodal-line semimetal remains largely unexplored due to the lack of an ideal material platform. In this paper, we report the magneto-transport properties of two nodal-line semimetal candidates CaAgAs and CaCdGe. First, our single crystalline CaAgAs supports the first "hydrogen atom" nodal-line semimetal, where only the topological nodal-line is present at the Fermi level. Second, our CaCdGe sample provides an ideal platform to perform comparative studies because it features the same topological nodal line but has a more complicated Fermiology with irrelevant Fermi pockets. As a result, the magnetoresistance of our CaCdGe sample is more than 100 times larger than that of CaAgAs. Through our systematic magneto-transport and first-principles band structure calculations, we show that our CaTX compounds can be used to study, isolate, and control the novel topological nodal-line physics in real materials.
• ### Slave Boson Theory of Orbital Differentiation with Crystal Field Effects: Application to UO$_2$(1606.09614)
Feb. 12, 2017 cond-mat.str-el
We derive an exact operatorial reformulation of the rotationally invariant slave boson method and apply it to describe the orbital differentiation in strongly correlated electron systems starting from first principles. The approach enables us to treat strong electron correlations, spin-orbit coupling and crystal field splittings on the same footing by exploiting the gauge invariance of the mean-field equations. We apply our theory to the archetypal nuclear fuel UO$_2$, and show that the ground state of this system displays a pronounced orbital differentiation within the $5f$ manifold, with Mott-localized $\Gamma_8$ and extended $\Gamma_7$ electrons.
• ### Superconducting Order from Disorder in 2H-TaSe$_{2-x}$S$_{x}$ (0$\leq$x$\leq$2)(1608.06275)
We report on the emergence of robust superconducting order in single-crystal alloys of 2H-TaSe$_{2-x}$S$_{x}$ (0$\leq$x$\leq$2). The critical temperature of the alloy is surprisingly higher than that of the two end compounds TaSe$_{2}$ and TaS$_{2}$. The evolution of the superconducting critical temperature T$_{c}(x)$ correlates with the full width at half maximum of the Bragg peaks and with the linear term of the high-temperature resistivity. The conductivity of the crystals near the middle of the alloy series is higher than, or similar to, that of either of the end members 2H-TaSe$_{2}$ and 2H-TaS$_{2}$. It is known that in these materials superconductivity (SC) is in close competition with charge density wave (CDW) order. We interpret our experimental findings in a picture where disorder tilts this balance in favor of superconductivity by destroying the CDW order.
• ### TRIQS/DFTTools: A TRIQS application for ab initio calculations of correlated materials(1511.01302)
We present the TRIQS/DFTTools package, an application based on the TRIQS library that connects this toolbox to realistic materials calculations based on density functional theory (DFT). In particular, TRIQS/DFTTools together with TRIQS allows an efficient implementation of DFT plus dynamical mean-field theory (DMFT) calculations. It supplies tools and methods to construct Wannier functions and to perform the DMFT self-consistency cycle in this basis set. Post-processing tools, such as band-structure plotting or the calculation of transport properties, are also implemented. The package comes with a fully charge-self-consistent interface to the Wien2k band structure code, as well as a generic interface that allows TRIQS/DFTTools to be used together with a large variety of DFT codes. It is distributed under the GNU General Public License (GPLv3).
• ### Fermi surface topology and negative longitudinal magnetoresistance observed in centrosymmetric NbAs2 semimetal(1602.01795)
March 14, 2016 cond-mat.str-el
We report transverse and longitudinal magneto-transport properties of NbAs2 single crystals. Owing to electron-hole compensation, the non-saturating large transverse magnetoresistance reaches up to 8000 at 9 T and 1.8 K, with mobilities around 1 to 2 m^2 V^-1 s^-1. We present a thorough study of the angular-dependent Shubnikov-de Haas (SdH) quantum oscillations of NbAs2. Three distinct oscillation frequencies are identified. First-principles calculations reveal four types of Fermi pockets: an electron alpha pocket, a hole beta pocket, a hole gamma pocket and a small electron delta pocket. Although the angular dependences of alpha, beta and delta agree well with the SdH data, it is unclear why the gamma pocket is missing in SdH. Negative longitudinal magnetoresistance is observed, which may be linked to novel topological states in this material, although a systematic study is necessary to ascertain its origin.
• ### Gutzwiller Renormalization Group(1509.05441)
Sept. 17, 2015 cond-mat.str-el
We develop a variational scheme called "Gutzwiller renormalization group" (GRG), which enables us to calculate the ground state of Anderson impurity models (AIM) with arbitrary numerical precision. Our method can exploit the low-entanglement property of the ground state in combination with the framework of the Gutzwiller wavefunction, and suggests that the ground state of the AIM has a very simple structure, which can be represented very accurately in terms of a surprisingly small number of variational parameters. We perform benchmark calculations of the single-band AIM that validate our theory and indicate that the GRG might enable us to study complex systems beyond the reach of the other methods presently available and pave the way to interesting generalizations, e.g., to nonequilibrium transport in nanostructures.
• ### Finite-Temperature Gutzwiller Approximation from Time-Dependent Variational Principle(1505.01951)
May 8, 2015 cond-mat.str-el
We develop an extension of the Gutzwiller approximation to finite temperatures based on the Dirac-Frenkel variational principle. Our method does not rely on any entropy inequality, and is substantially more accurate than the approaches proposed in previous works. We apply our theory to the single-band Hubbard model at different fillings, and show that our results compare quantitatively well with dynamical mean field theory in the metallic phase. We discuss potential applications of our technique within the framework of first principle calculations.
• ### Transport properties of Metallic Ruthenates: a DFT+DMFT investigation(1504.00292)
April 1, 2015 cond-mat.str-el
We present a systematic theoretical study of the transport properties of an archetypal family of Hund's metals, Sr$_2$RuO$_4$, Sr$_3$Ru$_2$O$_7$, SrRuO$_3$ and CaRuO$_3$, within the combination of first-principles density functional theory and dynamical mean-field theory. The agreement between theory and experiment for the optical conductivity and resistivity is good, which indicates that electron-electron scattering dominates the transport of ruthenates. We demonstrate that in the single-site dynamical mean-field approach the transport properties of Hund's metals fall into the scenario of "resilient quasiparticles". We explain why the single-layered compound Sr$_2$RuO$_4$ has relatively weak correlations with respect to its siblings, which corroborates its good metallicity.
• ### Shining light on transition metal oxides: unveiling the hidden Fermi Liquid(1404.6480)
April 25, 2014 cond-mat.str-el
We use low-energy optical spectroscopy and first-principles LDA+DMFT calculations to test the hypothesis that the anomalous transport properties of strongly correlated metals originate in the strong temperature dependence of their underlying resilient quasiparticles. We express the resistivity in terms of an effective plasma frequency $\omega_p^*$ and an effective scattering rate $1/\tau^*_{tr}$. We show that in the archetypal correlated material V2O3, $\omega_p^*$ increases with increasing temperature, while the plasma frequency from the partial sum rule exhibits the opposite trend. $1/\tau^*_{tr}$ has a more pronounced temperature dependence than the scattering rate obtained from the extended Drude analysis. The theoretical calculations of these quantities are in quantitative agreement with experiment. We conjecture that these are robust properties of all strongly correlated metals, and test this by carrying out a similar analysis on a NdNiO3 thin film on a LaAlO3 substrate.
• ### Plutonium hexaboride is a correlated topological insulator(1308.2245)
Aug. 19, 2013 cond-mat.str-el
We predict that plutonium hexaboride (PuB6) is a strongly correlated topological insulator, with Pu in an intermediate valence state of Pu^2.7+. Within the combination of dynamical mean field theory and density functional theory, we show that PuB6 is an insulator in the bulk, with non-trivial Z2 topological invariants. Its metallic surface states have a large Fermi pocket at X and Dirac cones inside the bulk-derived electronic states, causing a large surface thermal conductivity. PuB6 also has a very high melting temperature; it therefore has ideal solid-state properties for a nuclear fuel material.
• ### Intermediate-pressure phases of cerium studied by an LDA + Gutzwiller method(1104.0156)
April 15, 2013 cond-mat.str-el
The thermodynamically stable phase of cerium metal in the intermediate pressure regime (5.0--13.0 GPa) is studied in detail by the newly developed local-density approximation (LDA) + Gutzwiller method, which can properly include the strong correlation effect among the 4$f$ electrons in cerium metal. Our numerical results show that the $\alpha''$ phase, which has the distorted body-centered-tetragonal structure, is the thermodynamically stable phase in the intermediate pressure regime, and that all the other phases, including the $\alpha'$ phase ($\alpha$-U structure), the $\alpha$ phase (fcc structure), and the bct phase, are either metastable or unstable. Our results are quite consistent with the most recent experimental data.
• ### Temperature dependent electronic structures and the negative thermal expansion of $\delta$-Pu(1303.3322)
We introduce a temperature-dependent parameterization in the modified embedded-atom method and combine it with molecular dynamics to simulate the diverse physical properties of the $\delta$- and $\epsilon$-phases of elemental plutonium. The aim of this temperature-dependent parameterization is to mimic the different magnitudes of the correlation strength of the Pu 5f electrons at different temperatures. Compared to the previous temperature-independent parameterization, our approach captures the negative thermal expansion and the temperature dependence of the bulk moduli in the $\delta$-phase. We trace this improvement to a strong softening of phonons near the zone boundary and an increase of the f-like partial density and anharmonic effects induced by the temperature-dependent parameterization upon increasing temperature. Our study suggests it is important to include temperature-dependent parameterizations in classical force-field methods to simulate complex materials such as Pu.
• ### Non-Drude universal scaling laws for the optical response of local Fermi liquids(1212.6174)
March 7, 2013 cond-mat.str-el
We investigate the frequency and temperature dependence of the low-energy electron dynamics in a Landau Fermi liquid with a local self-energy. We show that the frequency and temperature dependences of the optical conductivity obey universal scaling forms, for which explicit analytical expressions are obtained. For the optical conductivity and the associated memory function, we obtain a number of surprising features that differ qualitatively from the Drude model and are universal characteristics of a Fermi liquid. Different physical regimes of scaling are identified, with marked non-Drude features in the regime where $\hbar\omega \sim k_B T$. These analytical results for the optical conductivity are compared to numerical calculations for the doped Hubbard model within dynamical mean-field theory. For the "universal" low-energy electrodynamics, we obtain perfect agreement between numerical calculations and analytical scaling laws. Both results show that the optical conductivity displays a non-Drude "foot", which could easily be mistaken for a signature of the breakdown of the Fermi liquid, while it actually is a striking signature of its applicability. The aforementioned scaling laws provide a quantitative tool for the experimental identification and analysis of the Fermi-liquid state using optical spectroscopy, and a powerful method for the identification of alternative states of matter, when applicable.
• ### How bad metals turn good: spectroscopic signatures of resilient quasiparticles(1210.1769)
Feb. 22, 2013 cond-mat.str-el
We investigate transport in strongly correlated metals. Within dynamical mean-field theory, we calculate the resistivity, thermopower, optical conductivity and thermodynamic properties of a hole-doped Mott insulator. Two well-separated temperature scales are identified: T_FL, below which Landau Fermi liquid behavior applies, and T_MIR, above which the resistivity exceeds the Mott-Ioffe-Regel value and 'bad-metal' behavior is found. We show that quasiparticle excitations remain well-defined above T_FL and dominate transport throughout the intermediate regime T_FL < T < T_MIR. The lifetime of these 'resilient quasiparticles' is longer for electron-like excitations, and this pronounced particle-hole asymmetry has important consequences for the thermopower. The crossover into the bad-metal regime corresponds to the disappearance of these excitations, and has clear signatures in optical spectroscopy.
• ### Hallmark of strong electronic correlations in LaNiO$_3$: photoemission kink and broadening of fully occupied bands(1107.5920)
July 29, 2011 cond-mat.str-el
Recent angle-resolved photoemission experiments on LaNiO$_3$ reported a renormalization of the Fermi velocity of $e_g$ quasiparticles, a kink in their dispersion at $-0.2$ eV, and a large broadening and weakened dispersion of the occupied $t_{2g}$ states. We show here that all these features result from electronic correlations and are quantitatively reproduced by calculations combining density-functional theory and dynamical mean-field theory. The importance and general relevance of correlation effects in filled bands coupled by inter-orbital interactions to a partially filled band are pointed out.
• ### LDA+Gutzwiller Method for Correlated Electron Systems: Formalism and Its Applications(0811.3454)
Dec. 13, 2008 cond-mat.str-el
We introduce in detail our newly developed \textit{ab initio} LDA+Gutzwiller method, in which the Gutzwiller variational approach is naturally incorporated into density functional theory (DFT) through the "Gutzwiller density functional theory" (GDFT), a generalization of the original Kohn-Sham formalism. This method can be used for the ground-state determination of electron systems ranging from weakly correlated metals to strongly correlated insulators with long-range ordering. We will show that its quality for the ground state is as high as that of dynamical mean-field theory (DMFT), yet it is computationally much cheaper. In addition, the method is fully variational, charge-density self-consistency can be achieved naturally, and quantities such as the total energy and linear response can be obtained accurately, similarly to LDA-type calculations. Applications to several typical systems are presented, and the characteristic aspects of this new method are clarified. The results obtained using LDA+Gutzwiller are in better agreement with existing experiments, suggesting significant improvements over LDA or LDA+U.
• ### LDA+Gutzwiller Method for Correlated Electron Systems(0707.4606)
Combining density functional theory (DFT) and the Gutzwiller variational approach, an LDA+Gutzwiller method is developed to treat correlated electron systems from first principles. All variational parameters are determined self-consistently from total-energy minimization. The method is computationally cheap, yet the quasi-particle spectrum is well described through kinetic-energy renormalization. It can be applied equally to systems ranging from weakly correlated metals to strongly correlated insulators. The calculated results for SrVO$_3$, Fe, Ni and NiO show dramatic improvement over LDA and LDA+U.
• Unit 5: The Integral
While previous units dealt with differential calculus, this unit starts the study of integral calculus. As you may recall, differential calculus began with developing the intuition behind the notion of a tangent line. Integral calculus begins with understanding the intuition behind the idea of an area. We will be able to extend the notion of area and apply these more general areas to various problems. This will allow us to unify differential and integral calculus through the Fundamental Theorem of Calculus. Historically, this theorem marked the beginning of modern mathematics, and it is extremely important in all applications.
Completing this unit should take you approximately 10 hours.
• 5.1: Introduction to Integration
In this section, you will learn about the concept of integration. Integral calculus applies to a wide range of applied problems, such as computing areas.
• 5.2: Sigma Notation and Riemann Sums
Earlier, you learned about the concept of area. In this section, you will learn how to use sigma notation to add a large set of values.
• 5.3: The Definite Integral
In this section, you will explore Riemann sums further, which will lead us to the concept of the definite integral. You will then learn about its applications.
• 5.4: Properties of the Definite Integral
In this section, you will continue to explore the definite integral by learning about the properties of integrals that sometimes define other functions.
• 5.5: Areas, Integrals, and Antiderivatives
In this section, you will explore the connection between areas, integrals, and antiderivatives, and how those connections can be used.
• 5.6: The Fundamental Theorem of Calculus
The Fundamental Theorem of Calculus is one that you will use very often in calculating integrals.
• 5.7: Finding Antiderivatives
In this section, you will apply previous general knowledge of antiderivatives to find antiderivatives of complicated functions.
• 5.8: First Application of Definite Integral
Using your knowledge of definite integrals, we will now take a look at how to solve applied problems.
• 5.9: Using Tables to Find Antiderivatives
In this section, you will learn how to use the tables of integrals. You will find the table of integrals useful in learning calculus. Think about the table of integrals as a guide to help you solve problems.
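As a concrete illustration of the Riemann-sum idea from sections 5.2 and 5.3, the Python sketch below (not part of the course materials) approximates the area under f(x) = x² on [0, 1] with left-endpoint rectangles; the Fundamental Theorem of Calculus gives the exact value 1/3.

```python
def left_riemann_sum(f, a, b, n):
    """Approximate the integral of f over [a, b] using n left-endpoint rectangles."""
    dx = (b - a) / n
    return sum(f(a + k * dx) for k in range(n)) * dx

# Area under f(x) = x^2 on [0, 1]; the exact value is 1/3.
approx = left_riemann_sum(lambda x: x * x, 0.0, 1.0, 1000)
# approx is a slight underestimate of 1/3, since x^2 is increasing and
# left endpoints give the smallest rectangle height on each subinterval.
```

Increasing n shrinks the gap to the exact value, which is exactly the limiting process that defines the definite integral.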
# Is the metric $\frac{d(x,y)}{1+d(x,y)}$ complete, where $d$ is the usual Euclidean metric on $\mathbb R^{2}$?
Let $d(x,y)$ be the usual Euclidean metric on $\mathbb R^{2}$; $\mathbb R^{2}$ is complete under $d(x,y)$. I am given the subspace $[0,1]\times [0,\infty)$ of $\mathbb R^{2}$. I thought this is also complete under $d$, for I could not think of a Cauchy sequence in this space that fails to converge in it. Correct me if I am wrong. Now consider the metric $$d'(x,y)=\frac{d(x,y)}{1+d(x,y)}$$ on the subspace $[0,1]\times [0,\infty)$. Is this complete?
I was thinking that if I could prove $d'(x,y)$ and $d(x,y)$ are equivalent, then completeness would follow readily. Am I thinking along the right lines? I need help carrying the proof further.
Thanks for any help.
• You have $d'=\dfrac1{1+\frac1d}=1-\dfrac1{1+d}$, not sure if that helps. – Akiva Weinberger Sep 4 '15 at 15:47
## 1 Answer
Note that $d'(x,y)\leq d(x,y)$ for any $x,y$ in your space, and $d(x,y)\leq 2d'(x,y)$ if $d(x,y)\leq 1$, so a sequence is Cauchy with respect to one metric if and only if it is Cauchy in the other metric.
• Thanks. Another thing here. For the space $[0,1]\times [0,\infty )$ I did not find counter example but how to prove for sure that this space is complete under $d$. – user118494 Sep 4 '15 at 16:02
• Well, as you said $\mathbb R^2$ is complete under $d$, and any Cauchy sequence in the subspace $[0,1]\times [0,\infty)$ (under $d$) is also Cauchy as a sequence in $\mathbb R^2$, so has a limit in $\mathbb R^2$. The last thing you need is that $[0,1]\times [0,\infty)$ is a closed subspace of $\mathbb R^2$, so any limit point of a sequence of points in the subspace lies in the subspace. – Sean Clark Sep 4 '15 at 16:06
• Equivalence of metrics means that they generate the same topology. Completeness of a metric does not automatically imply completeness of an equivalent one. For example on the open interval $(0,\pi /2)$ let $d(x,y)= |x-y|$ and $e(x,y)=| \tan x - \tan y|$. – DanielWainfleet Sep 4 '15 at 17:26
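The inequalities in the answer ($d' \le d$ everywhere, and $d \le 2d'$ whenever $d \le 1$) can be checked numerically; the Python sketch below only illustrates the argument, it is not a proof.

```python
import math
import random

def d(p, q):
    """Usual Euclidean metric on R^2."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d_prime(p, q):
    """The bounded metric d/(1 + d)."""
    dist = d(p, q)
    return dist / (1.0 + dist)

random.seed(0)
for _ in range(10_000):
    # Sample pairs of points from the subspace [0, 1] x [0, inf) (truncated).
    p = (random.uniform(0, 1), random.uniform(0, 100))
    q = (random.uniform(0, 1), random.uniform(0, 100))
    dd, dp = d(p, q), d_prime(p, q)
    assert dp <= dd + 1e-12            # d' <= d always
    if dd <= 1.0:
        assert dd <= 2.0 * dp + 1e-12  # d <= 2 d' for small distances
```

Since being Cauchy only constrains small distances, these two inequalities show that the metrics $d$ and $d'$ have exactly the same Cauchy sequences, which is the heart of the completeness argument.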
# SIR Model R0
dI/dt = βSI – γI. An examination of the local stability of the model’s equilibria reveals that there is a critical vaccination proportion PC&J- Ro’ (6). SIR Model Specifics. Author: suwannee. This is exemplified for the dynamics of two competing virus strains. Read also: What is Herd Immunity? The numbers game with COVID-19. re/COVIDmodel). GitHub Gist: instantly share code, notes, and snippets. The P365 series has surprised me. You can also use decimal number as MOV R3,#10d. The SIR model can be applied to viral diseases, such as measles, chicken pox, and influenza. In the present paper we examined the bifurcation of a mathematical model for the spread of an infectious disease. Initial parameters of the model will be: N = 12000000 I0, R0 = 100, 0 S0 = N -I0 -R0 beta, gamma = 0. The system is given as: dS/dt = P - B S Z - d S dZ/dt = B S Z + G R - A S Z dR/dt = d S + A S Z - G R. This seems to be very similar to the numbers in the OP, and may be the same model. ferential equations governing the SIR system are then given as dS dt "!bSI, dI dt "bSI!gI, (1) dR dt "gI, where S, I and R are the proportions of suscep-tible, infectious and recovered individuals, b is the contact rate and 1/g is the mean infectious period (Anderson & May, 1979, 1992). The model found similar results for both Israel and California, with California reaching herd immunity around July 15th, with slightly more than 10% of their population (4. The model (at sdl. The Tom Little Collection: Model Railway 00 Gauge: Hornby R055 LMS Class 4P Loco 2-6-4 Tank x 1, Hornby R066 LMS 4-6-2 Duchess Loco x 1, Hornby R154 SR Loco Sir Dinadan x 1, Hornby R322 LNER Class A3. We assume that all death is natural. Despite the model changes, we continue to see a dramatic and prolonged predicted increase in cases. The transmission rate, β, controls the rate of spread which represents the probability of transmitting disease between a susceptible and an infectious individual. Got it? See the picture below. 
Average number of individuals effectively contacted per time step This is equal to the R0/Average duration of infectiousness. Where: S is the density of susceptible hosts. We will use simulation to verify some analytical results. General Epidemic: The Basic SIR Model A population is comprised of three compartments: Susceptible Segment not yet infected, disease-free (S) Infected Segment infected and infectious (I) Removed Recovered (usually) with lifelong immunity (R) Model Assumptions: 1. Constant rates (e. 0 (standard deviation 0. These must be solved numerically. STANFORD, Calif. But it is worthy of relative changes given social distancing. When they encounter someone infected with a virus, there is a certain probability that they will become infected. • This is illustrated by the SIR (Susceptible, Infected, Recovered) model for which some technical background will be included in the handout notes, but it is not necessary to understand the key takeaway. We use a simple 3-compartment SIR numeric model, with Susceptible, Infected and Recovered sub-populations (e. The SIR model. The proposed algorithm, described in a Bayesian framework, starts with a non-informative prior on the distribution of the reproduction number R. As stated earlier, another approach to the doubling time formula that could be used with this example would be to calculate the annual percentage yield, or effective annual rate, and use it as r. A time-scaled genealogy with known times of sampling is a necessary input for most functions in rcolgem. In our model, we assume that every member of the population is either susceptible or infectious, giving us the equation S(t)¯I(t) ˘N. Example: If R0 holds the value 20H, and we have a data 2F H stored at the address 20H, then the value 2FH will get transferred to accumulator after executing this instruction. 
Threshold theorems involving the basic reproduction number R0, the contact number σ, and the replacement number R are presented for these models and their extensions such as SEIR and MSEIRS. My question Am I making any mistakes or is there just not enough data yet? Or is the SIR model too simple? I would appreciate suggestions on how to change the code so that I get some sensible numbers out of it. (2014), I just reproduce the algorithm for easily understanding and create the following function RK4SIR. Introduction. • If we do exactly same thing for SEIR model (straightforward but more involved), we get "So, in comparison with SIR model, invasion speed in SEIR model scales with √R₀ "This seems pretty unwieldy. R¢HtL aIHtL, (3) with initial conditions SH0L S0, IH0L I0, RH0L R0. I am definitely not an epidemiologist but I did want to learn the basics of the popular SIR (Susceptible, Infected, Recovered) models. As seen in Figure 1, the best fitted curve (red color) corresponds to the SIR+P+T model, which is a modification of the SIR model in which the transmission rate is adjusted by using temperature and precipitation. All individuals in the population are assumed to be in one of these four states. Considering a steady decrease in reported mortality rates since then, the basic reproduction number under the current social distancing restrictions was 1. Constant (closed) population size 2. These built-in models are parameterized using $$R_0$$ and the infectious period ($$1/\gamma$$), since these may be more intuitive for new students than the slightly abstract transmission rate. The model used is an SIR (Susceptible, Infected, Recovered) compartmental epidemic model based on the following three Ordinary Differential Equations (ODEs): Fig. When they encounter someone infected with a virus, there is a certain probability that they will become infected. In particular, the antenna system used seven Seasat engineering model panels. 
R0 is the reproduction number that contains the ‘potential’ for the outbreak and how bad it might get. Mesa SIR provides the basic building blocks for an Agent Based Susceptible-Infected-Recovered (SIR) Epidemic model. 1 # Recovery rate gamma = 0. Also the principle of competitive exclusion holds no longer true. Yang to solve the SIR model. 86 (95% CI: 2. Model fit based on a two-component epidemic model: earsC: Surveillance for a count data time series using the EARS C1, C2 or C3 method. It has since been identified as a zoonotic coronavirus, similar to SARS coronavirus and MERS coronavirus and named COVID-19. Infectious disease surveillance systems are powerful tools for monitoring and understanding infectious disease dynamics; however, underreporting (due to both unreported and asymptomatic infections) and observation errors in these systems create challenges for delineating a complete picture of infectious disease epidemiology. " "" Sir Paul Salvador, this does not mean that I have wrong you. MODEL SETUP AND CALCULATIONS The model is an SIRD model - which is a derivative of the classic SIR mode l Within the model there are several variables derived from out initial inputs. Hence, the R0 derived from the SIR model closely reflected the observed R0. The SIR model and its variants are widely used to predict the progress of COVID-19 worldwide, despite their rather simplistic nature. (2014), I just reproduce the algorithm for easily understanding and create the following function RK4SIR. This model depends on two parameters: β is the contact rate, and we assume that in a unit time each infected individual will come into contact with βN people. 36) of Kiss, Miller, & Simon. Let’s see what happens if we assume γ=σ I SEIR ⇡ I (0) · e 1 2 (+)+ p 4(R0 1)+(+)2 I SEIR ⇡ I (0) ⇥ e(p R0 1)t. For the SIR epidemic we define a naive R0 as such: R0 = cB/ δ = λ/δ. An example is shown in Figure 2. 
For simplicity and practical reasons, I opted to use the most conventional, albeit simplest, model known: the SIR model. To run this model, you need to know the following:. Song (Montclair State) Compute R0 June 20, 2016 1 / 1. Based on the propagateParSIR function from the code shipping with Yang et al. (-R0)*R0]/R0. It is a mathematical model, based on 4 separate patterns :-Chinese official number-The commonly accepted R0 of 2. Introduction to the SIR model and R0. Directly transmitted microparasite SIR model. ‘people exposed’ which might be important factor in very large geographical area especially if stringent containment measures are implemented. 8 4 Phase-Plane for SIR endemic model when: (a) R0 = 0. The Behavioral SIR Model COVID Regressions BSIR Dynamics Herd Immunity Swine Flu, 2009 The SIR Model (1927) The model takes place in continuous time t ∈ [0,∞) Population is the continuum [0,1] (no aggregate randomness) State transition process of people in the SIR model Mass σ(t) of individuals are susceptible to a disease. SIRS Model This model has been formulated for diarrheal infections caused by the bacteria Shigella. Based on lecture notes of two summer schools with a mixed audience from mathematical sciences, epidemiology and public health, this volume offers a comprehensive introduction to basic ideas and techniques in modeling infectious diseases, for the comparison of strategies to plan for an anticipated epidemic or pandemic, and to deal with a disease outbreak in real time. Friday 6th July 2018 19:13 GMT MOV r0,r0 Lock Story 1: Bank Holiday Locksmith Elderly neighbour locked herself out, distressed at the cost of a bank holiday locksmith (but not quite distressed enough for the police to break in for her) she mentioned there were keys inside in the lock of the other door. An example is shown in Figure 2. Their basic reproduction number, R0, was under current restrictions of 1. 
This is the most spectacular part, since in order to train the model we will need to: Find the machine with the powerful and, what is very important, supported (read: NVIDIA) by the TensorFlow video card. K, Molineaux. The basic reproduction number (R 0) is a central quantity in epidemiology as it measures the transmission potential of infectious diseases. SIR (chapter 2, Fig 2. Tiwari School of Studies in Mathematics, Vikram University, Ujjain (M. Related: Going viral: 6 new findings about viruses A. Modify the original “translate” script that is used to train a model for the translation of the Eng/Fre. A time-scaled genealogy with known times of sampling is a necessary input for most functions in rcolgem. Jones 2008), in Spotfire. If you are interested in learning more on this model, there is an online module. While R0 and the serial interval can tell us a lot about how the virus spreads, they don’t tell us everything we need to know about how large an outbreak might be and how difficult it could be to control. In this model once someone recovers they are immune and can’t be infected again. The basic reproductive number R0 determines the existence of the equilibrium. in uenza, in a closed population. In the present paper we examined the bifurcation of a mathematical model for the spread of an infectious disease. [link to bedford. Assumptions Population size does not change - good in developed countries. In the classical SIR model of disease transmission, the attack rate (AR : the percentage of the population eventually infected) is linked to the basic reproduction number , by R 0 = − log 1 − AR S 0 AR − 1 − S 0 where S 0 is the initial percentage of susceptible population. We develop a multi-risk SIR model (MR-SIR) where infection, hospitalization and fatality rates vary between groups — in particular between the “young,” the “middle-aged” and the “old. 
The model equations are in the file 'SIR_ODEs'. Worked examples are provided for inferring transmission rates and R0 for a simple susceptible-infected-recovered (SIR) model and an HIV epidemic model. Updated: SIR compartment model (3 minute read). Introduction: that's the message I think is key. First of all, for any τ, we show that the disease-free equilibrium is globally asymptotically stable when R0 < 1; the disease will then die out. The site, designed by young people who actually used the model for their communities, explains basic epidemic principles and how to use the model. The required assumptions are homogeneous mixing and a closed population. R0, as I recently learned and everyone now knows, is the number of people who would catch a pathogen from one infected person if no one had any resistance. Although the number of new patients in mainland China is restrained, other countries are still struggling with an increasing number of new cases. We assume that all death is natural. The deterministic model: as seen in Figure 1, the best-fitted curve (red) corresponds to the SIR+P+T model, a modification of the SIR model in which the transmission rate is adjusted using temperature and precipitation. In late June, New York State was close to reaching herd immunity according to the SIR model, that is, a disease reproduction number of less than one.
Not only diseases that attack physically but also harmful habits can be analyzed with the SIR model. By April 20, the projected number of deaths could range from 362 to 4,989. The infective period T for Covid-19 must also be estimated. The optimal R0 value was 1. These are the SIR model curves regularly seen, and they are what is driving our efforts to "flatten the curve." Figure 1: scheme of the basic SIR model. We follow Yang to solve the SIR model. The severity of symptoms caused by this disease will be another key factor that determines whether the outbreak can be contained. (Daron Acemoglu, Victor Chernozhukov, Iván Werning, Michael D.) To see the effect on R₀, click on the small R₀ button in the lower left corner of the "Infection and Immunity Status" panel. The initial parameters of the model will be: N = 12000000; I0, R0 = 100, 0; S0 = N - I0 - R0; together with the rates beta and gamma. The standard SIR model supposes that people don't change their behavior on their own. SIR model for epidemics (compartmental model): S → I with infection rate β and I → R with recovery rate γ, where N is the number of individuals in the population, S the number of susceptible individuals, I the number of infective individuals, and R the number of removed (recovered or dead) individuals; homogeneous mixing is assumed, and s = S/N is the density of susceptibles. See Jones' Notes on R0.
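The compartmental scheme just described can be integrated numerically. A minimal sketch (the parameter values here are illustrative choices of mine, not estimates from the source):

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma, N):
    """Right-hand side of the coupled S(t), I(t), R(t) equations."""
    S, I, R = y
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

N, I0 = 10_000, 1
beta, gamma = 0.4, 0.2          # illustrative rates, so R0 = beta/gamma = 2
sol = solve_ivp(sir, (0, 200), [N - I0, I0, 0], args=(beta, gamma, N))
S_end, I_end, R_end = sol.y[:, -1]
```

Because the three derivatives sum to zero, S + I + R stays equal to N throughout the integration, which is a useful sanity check on any implementation.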
This is the basic reproduction number: the average number of people that will catch the disease from an infected person. With births and deaths at rate m, the basic reproduction number is now given by R0 = β/(γ + m). In this paper, we consider an SIR epidemic model with the non-monotonic incidence rate proposed by [4], with the initial conditions S(0) > 0, I(0) > 0, R(0) ≥ 0. The duration of infectivity is as long as the duration of the clinical disease. I feel several points should be updated given recent developments. (See Fig. 2.36 of Kiss, Miller, & Simon.) The SIR model is the basis for other similar models. Really, it will be tough to find accurate R0 values for enough countries to create a correlation. Fitting the worst-scenario SIRD model, the R0 has been estimated as 2. The formula is in the snapshot of the Excel file 'Sir'. There are also other compartmental models: the SIS model, where all infected people return to the susceptible population (valid for the common cold), or the SEIR and SEIS models, which take into account the latent or exposed period. [Bokil (OSU-Math), Mathematical Epidemiology, MTH 323, S-2017.] Examining the above equation illustrates that R0 = 1 is the threshold separating monotonic extinction of the disease from epidemic spread. In a lighthearted example, a system of ODEs can be used to model a "zombie invasion", using the equations specified in Munz et al. The model uses coupled equations analyzing the number of susceptible people S(t), the number of people infected I(t), and the number of people who have recovered R(t). This means that the expected duration of infection is simply the inverse of the recovery rate. If R0 is larger than one, an epidemic will most likely happen. Particular attention is paid to the key concepts of the basic reproduction number (R0) and the infectiousness function, and to the measures used to judge the effectiveness of various public health interventions. This is an SEIR model and may be written in the following form: R0 = 1 + K(τE + τI) + K²·τE·τI, where K is the initial exponential growth rate.
At the lower end of the estimates for COVID-19, the reproduction number is about 1. Panels a-b: an SIR model describing the transmission of infection in a population (S: susceptible, I: infectious, R: immune, V: vaccinated). The model consists of three compartments, with S the number of susceptible individuals. (KRON) A lab at Stanford's Department of Biology developed a web model of the spread of COVID-19 to evaluate possible outcomes of non-pharmaceutical interventions. The new system is described by a set of fractional differential equations (FDE). "On Using SIR Models to Model Disease Scenarios." This is based on the propagateParSIR function from the code shipping with Yang et al. Figure 1a shows a version of the standard model (model A) where vaccinated individuals (V) are fully protected. We set the recovery period to five days. Instead of crowding all the functions into one script, it makes sense to just import them. Therefore, the Susceptible (S), Infected (I) and Recovered (R) model, generally called the SIR model and first developed by Kermack and McKendrick (1927), is introduced in this study to model the spread of the Ebola Virus Disease (EVD) mathematically. The only variability in the overall formula is that when the day t hits 22, some random factor changes. Consider that the disease, after recovery, confers immunity (which includes the deaths, so as to keep a constant population). And this is straightforward: it is just the product of beta and 1 over gamma. Calculation: with the mathematical model above, we obtain the following.
Looking at the IHME model again: on April 13, the model projected that there would be 1,648 deaths from COVID-19 in the U.S. The authors also used the SIRD model to estimate COVID-19's R0 value, an estimate of contagiousness which reflects the average number of people who may catch an infection from one contagious person. The model consists of a system of three coupled non-linear ordinary differential equations which does not possess an explicit formula solution. The Kermack-McKendrick model is used to explain the rapid rise and fall in the number of infectives. This is good enough provided immunity, once acquired, is not lost over the period modeled. The outbreak of the novel coronavirus disease (Covid-19) brought considerable turmoil all around the world. In a classical SIR model, the fraction of people who must be immune for herd immunity can be obtained from the formula 1 - 1/R0. If you are interested in learning more on this model, there is an online module. Differentiating both sides with respect to t, and remembering that N is a constant, we get dS/dt + dI/dt + dR/dt = 0. A mathematical model for endemic malaria with variable human and mosquito populations has also been studied. On 4 February 2020, data science blogger Learning Machines posted this analysis of the COVID-19 outbreak, in which he fitted the classic SIR (Susceptible-Infectious-Recovered) model. Comments: those in state R have been infected and either recovered or died. Develop a cause of death model. In R, the model can be written as a function of time, state and parameters: model <- function(t, x, params) { # SIR model equations }.
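The 1 - 1/R0 threshold mentioned above is a one-liner; a sketch (the function name and the guard for R0 ≤ 1 are my own choices):

```python
def herd_immunity_threshold(R0: float) -> float:
    """Fraction of the population that must be immune so that an
    epidemic cannot grow: 1 - 1/R0 for R0 > 1, else 0."""
    if R0 <= 1.0:
        return 0.0   # an epidemic cannot take off in the first place
    return 1.0 - 1.0 / R0

hit = herd_immunity_threshold(2.5)
```

For example, an R0 of 2.5 implies that roughly 60% of the population must be immune before incidence starts declining on its own.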
This is a generalization of the simple SIR framework to include asymptomatic, non-infective Exposed people and the Deceased: the parameters are such that the disease takes about a week to incubate and about a week to resolve. The formula is in the snapshot of the Excel file 'Sir'. Model setup and calculations: the model is an SIRD model, a derivative of the classic SIR model; within the model there are several variables derived from our initial inputs. βI/N is the average number of contacts with infectives per unit time of one susceptible. Each iteration of this loop will run a simulation of the SIR model, then measure the Euclidean distance between the observed and simulated time series, and finally, if this distance is lower than a threshold, keep the parameter set: for run in [1:number_samples] simulated_timeseries = sir(param_N, i0_prior[run], beta_prior[run], ...). The virus with the higher R0 can be eradicated from the population. There is a stable infection-free steady state for R0 < 1; for R0 > 1 the infection-free state is unstable and there is an endemic steady state i* > 0. These equations must be solved numerically. For R0 < 1, the epidemic dies out with minimal infection of the susceptible population; but for points such that R0 > 1, infection spreads throughout the population. The pattern still stands for China.
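The rejection loop described above can be sketched in full. Everything in this toy version (the priors, the crude fixed-step simulator, the distance threshold, and the variable names) is illustrative, not code from the source:

```python
import random

def simulate_sir(beta, gamma, N=1000, I0=10, days=30):
    """Crude fixed-step SIR simulator returning the daily infected series."""
    S, I, R, series = N - I0, I0, 0, []
    for _ in range(days):
        new_inf = beta * S * I / N
        new_rec = gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        series.append(I)
    return series

def distance(a, b):
    """Euclidean distance between two equal-length time series."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

random.seed(1)
observed = simulate_sir(0.5, 0.2)        # pretend data with known parameters
accepted = []
for _ in range(2000):
    beta = random.uniform(0.1, 1.0)      # prior draw for beta
    gamma = random.uniform(0.05, 0.5)    # prior draw for gamma
    if distance(simulate_sir(beta, gamma), observed) < 50:
        accepted.append((beta, gamma))   # keep parameter sets close to the data
```

The accepted pairs approximate the posterior over (beta, gamma); tightening the threshold sharpens the approximation at the cost of fewer acceptances.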
Parameters: per capita birth and death rate μ, contact rate β. Immunity matters much more than the 'naive SIR' model thinks. Use parameters approximately relevant for this pandemic: a mean recovery time of about 10 days and a basic reproduction number R0 = 3. [Figure: people respond to the current death rate; number of people versus days.] The SIR model was applied to the early spread of SARS-CoV-2 in Italy • The SIR model fits the reported COVID-19 cases in Italy well • We assessed the basic reproduction number R0 • We compared our results with previous literature findings and found that the basic reproduction number associated with the Italian outbreak may range upward of 2. Behavioral SIR models, a warning: log(β_t) = log β_0 - α_D·ΔD_t/N. I changed the model from a pure SIR model. Confused about parameters of a discrete SIR infectious disease model. This is a restriction of the SIR model, which models R0 = β/γ, where 1/γ is how long somebody is sick (the time from Infected to Recovered); but that need not be the time that somebody is infectious. The model has been built and simulated in Vensim software.
SIR model: epidemiology is the study of the pattern of disease in time, place and population. The basic reproductive number (R0): a new swine-origin influenza A (H1N1) virus, initially identified in Mexico, has now caused outbreaks. Three basic models (SIS endemic, SIR epidemic, and SIR endemic) for the spread of infectious diseases in populations are analyzed mathematically and applied to specific diseases. Differential equations model many natural phenomena as well as applications in engineering and the physical sciences. # Both csv files must be sorted from lowest to highest FIPS number. The most popular model for epidemics is the so-called SIR, or Kermack-McKendrick, model. We compute the factor R0(t) by using the SIR model in Italy and some of its regions. The important point is that social distancing can still have a dramatic impact; it's not too late. The function takes parameter values, initial values of the variables, and a vector of time points as inputs, runs the SIR model, and returns a data frame of time series as output, as below. The model that we use is the standard susceptible-infected-recovered (SIR) model (see, for example, [1, 8]). This simulator allows you to model a simplified epidemic. The model assumes a constant population size and no incubation period.
Their basic reproduction number, R0, was, under current restrictions, about 1. Boxes represent compartments, and arrows indicate flux between the compartments. Quite often R0 does not actually appear in SIR (Susceptible-Infected-Resistant) models; instead one sees the product of contact rate, infection rate, and disease period (or case duration). We introduce and analyze a basic transmission model for a directly transmitted infectious disease. This is an SEIR model and may be written in the following form: R0 = 1 + K(τE + τI) + K²·τE·τI. Implementation of the BM method centers on the SIR algorithm, which is used to determine the posterior distributions for all the model's components. The SIR mathematical model is a system of differential equations: dS/dt = -βSI, dI/dt = βSI - αI, dR/dt = αI. The fit is performed with an optimization algorithm that reduces the gap between the real data and the corresponding simulated data. The model explores the effect of two strategies: (a) suppression, by which interventions are instituted to bring Re below 1, and (b) mitigation, by which strategies are instituted to reduce the impact of the epidemic but not interrupt viral transmission completely, thus reducing Re, but not necessarily below 1. The duration of infectivity is as long as the duration of the clinical disease. A more useful form of the logistic equation uses the variables defined as follows: P0 = population at time t = 0. We can divide the population into three classes: S, the susceptibles, who can get the disease; and I, the infected, who have the disease and can transmit it. In order to eradicate the disease, the basic reproduction number must be lowered below a threshold.
In order to discuss our eight methods to estimate R0 in the SIR model, we first need to define the SIR model. In order to have R0 appear directly in our model, we use R0/(spreading-days) as the principal propagator coefficient. β is the contact rate (the average number of contacts per unit time). Introduction, the basic epidemic model: the classical model for epidemics is described in [1] and [Chapter 10 of 2]. The standard SIR model: as background, here is a simulation of the standard SIR model with these numbers and a constant $$\beta=1$$, meaning $$R_0=5$$. For the duration of this report, the compartmental model that forms the basis of our discussion and analysis is the SIR model with its initial conditions. We study a delayed SIR epidemic model and get the threshold value which determines the global dynamics and outcome of the disease. SIR model: in the absence of intervention measures in Tokyo, with an estimated R0 of about 2.97. Immunity matters much more than the 'naive SIR' model thinks. Logically, if the R0 is < 1, a disease outbreak should wane over time, and if it is > 1, cases should continue to increase. Considering a steady decrease in reported mortality rates since then, the basic reproduction number under the current social distancing restrictions was 1. A succinct description of the steps involved in the algorithm follows: (1) draw a k-sized sample from each parameter's prior distribution. Run cb() 300 times to plot 300 runs of the SIR chain binomial. The estimated value is significantly larger than 1. R′(t) = a·I(t), (3) with initial conditions S(0) = S0, I(0) = I0, R(0) = R0. It has since been identified as a zoonotic coronavirus, similar to the SARS and MERS coronaviruses, and named COVID-19. These "mean-field" or homogeneous equations assume homogeneous mixing. [Figure: infectives in the SIR model for R0 = 0.75; fraction of infectives versus time t (days).] What values of the parameters determine the behavior of the model?
SIR model for epidemics (compartmental model): S → I with infection rate β and I → R with recovery rate γ; N is the number of individuals in the population, S the number of susceptible individuals, I the number of infective individuals, and R the number of removed (recovered or dead) individuals; homogeneous mixing is assumed, and s = S/N is the density of susceptibles. The goal of glucose control is supposed to be achieved if the system has a solution; otherwise the goal cannot be achieved. Writing a simulator. The threshold parameter R0(τ) is obtained, which determines whether the disease goes extinct or not. Bifurcation analysis for the SIR endemic model. For the SIR epidemic we define a naive R0 as R0 = cB/δ = λ/δ. With R0 ≈ 2.97, medical services are predicted to collapse on Apr 26 and total deaths will be ~500,000 by the end of the projection period. To define the initial value problem, it is assumed that there is no immunity initially and that a small number of infectious cases are introduced into the population. R0 is the reproduction number that contains the 'potential' for the outbreak and how bad it might get. More sophisticated models allow re-infections. The SIR Model for Spread of Disease - Background: Hong Kong Flu; The SIR Model for Spread of Disease - The Differential Equation Model; The SIR Model for Spread of Disease - Euler's Method for Systems; The SIR Model for Spread of Disease - Relating Model Parameters to Data; The SIR Model for Spread of Disease - The Contact Number.
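Euler's method for systems, referenced in the module list above, is the simplest way to step the SIR equations forward. A sketch (step size and parameter values are illustrative choices of mine):

```python
def euler_sir(beta, gamma, S0, I0, R_init, h=0.1, steps=2000):
    """Integrate the SIR system with fixed-step Euler updates."""
    S, I, R = S0, I0, R_init
    N = S0 + I0 + R_init
    for _ in range(steps):
        dS = -beta * S * I / N
        dI = beta * S * I / N - gamma * I
        dR = gamma * I
        S, I, R = S + h * dS, I + h * dI, R + h * dR
    return S, I, R

# Illustrative run: R0 = beta/gamma = 2, a population of 1000, one seed case.
S, I, R = euler_sir(beta=0.4, gamma=0.2, S0=999, I0=1, R_init=0)
```

Euler's method conserves S + I + R exactly (the three increments cancel), so any drift in the total signals a coding error rather than discretization error.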
The relative sizes of these sub-populations change over time and are affected by factors such as the rate and duration of contact between individuals, mobility, and the natural rate of recovery from the disease. All individuals in the population are assumed to be in one of these four states. The basic reproduction number can be estimated by examining detailed transmission chains or through genomic sequencing. The model found similar results for both Israel and California, with California reaching herd immunity around July 15th with slightly more than 10% of its population (4.05 million) being infected. The code is based on the propagateParSIR function shipping with Yang et al. Fitting the worst-scenario SIRD model, the R0 has been estimated as 2. Parameters: population size N = 10000; initial infections IInit = 1; SInit = N - IInit; RInit = 0; and a transmission rate beta. Infectious disease surveillance systems are powerful tools for monitoring and understanding infectious disease dynamics; however, underreporting (due to both unreported and asymptomatic infections) and observation errors in these systems create challenges for delineating a complete picture of infectious disease epidemiology. Use parameters approximately relevant for this pandemic: a mean recovery time of about 10 days and a basic reproduction number R0 = 3. Since the emergence of the new coronavirus (COVID-19) in December 2019, we have adopted a policy of immediately sharing research findings on the developing outbreak. Initially a few infected people are added to the population, and the entire population mixes homogeneously (meaning that the people an individual contacts each day are completely random).
An SIR model is an epidemiological model that computes the theoretical number of people infected with a contagious illness in a closed population over time. The SIR model is very basic but practically useless on its own. S – the proportion of susceptible individuals in the total population. In epidemiology, the basic reproduction number, or basic reproductive number (sometimes called basic reproduction ratio or basic reproductive rate), denoted R0 (pronounced R nought or R zero), of an infection can be thought of as the expected number of cases directly generated by one case in a population where all individuals are susceptible to infection. SIR with birth and death. The outbreak of the novel coronavirus disease (Covid-19) brought considerable turmoil all around the world. In the column S(t), the series starts at S(0) = 6,810,005, and I(t) starts with I(0). The proposed algorithm, described in a Bayesian framework, starts with a non-informative prior on the distribution of the reproduction number R. The data from the absentee survey were reliable, with no missing values.
This is a steady-state model with no one dying or being born, so the total number of people does not change. dS/dt = -βSI. Thus Diekmann and Heesterbeek [9, page 56] modify model (1.1) into the following SIR model, where S denotes susceptible, I infected and R recovered individuals, μ is the per capita death rate due to causes other than the disease, and γ is the expected number of contacts. This model expands the SI model you studied yesterday to include a class of "recovered" individuals, which are assumed to be immune. The SIR model. That's about double an earlier R0 estimate of 2. Sulayman, Unraveling the Myths of R0 in Controlling the Dynamics of COVID-19 Outbreak: a Modelling Perspective (submitted to a journal). First, I consider a scenario predicted by a simple SIR model (black dashed) when the movement control order (MCO) was implemented in Malaysia. Quite often R0 does not appear directly in SIR models, but rather the product of contact rate, infection rate, and disease period. And there are so many more factors that could influence the spread of diseases and the R0 value. Depending on the epidemiology of the disease, modellers would need to construct the best models based on the most plausible assumptions.
In an SIR model that takes the growing immune population into account, the spread of the virus stops when 1 - 1/R0 of the population has been infected or recovered. % SIR model: this file sets up the parameters and runs the model. When a susceptible and an infectious individual come into "infectious contact", the susceptible individual contracts the disease. As a modification to the SIR model, we introduce birth and death. β is the contact rate. The SIR model is very widely used to analyze the spread of diseases in the human environment, such as the Ebola virus [1], Zika [2], malaria [3], and diabetes [4]. The model simply keeps track of how many individuals are in each class: individuals that leave one class must enter another class. Boxes represent compartments, and arrows indicate flux between the compartments. Up to three microbial strains with different virulence and transmission parameters can be modeled and the results graphed. Let's start with 100 infected people on day 0 and assume a constant contact rate. "The Imperial model tries to deal with many things at once," he says. "Other models might focus on one specific thing, or one particular area; all of them help provide an overarching picture of what's going on." So far so simple: this is just the construction of R0 for the simple example of an SIR model.
But how do we know that this quantity defines the epidemic threshold of a particular infection? To understand this, we need to formulate an epidemic model. Susceptible, Infected, Recovered (SIR) model for epidemics, by Bill Levinson, Levinson Productivity Systems PC. Disclaimer: this does not constitute engineering advice or detailed predictive capability. The SIR+P+T-based estimation. For more complex models, you would go through a similar sort of reasoning, remembering that R0 is the average number of secondary infections per index case. The SIR compartmental model of disease spread: an example is shown in Figure 2. The model reduces to an SIR model in which the infectious individuals are removed at a higher rate than the inverse of their mean infectious period γ, with a transmission rate given by the basic reproductive rate of the system, γe·R0·(S/N). The model found similar results for both Israel and California. Originally designed to explore the coevolution of myxoma and rabbits, the model is easily adapted. Report the maximum number of infected people and compare it to the case where \( \beta(t) = 0 \). The SIR model describes the change in the population of each of these compartments in terms of two parameters, β and γ. SIR model: consider that the disease, after recovery, confers immunity (which includes the deaths, so as to keep a constant population). The TSIR (time-series SIR, Susceptible-Infected-Recovered) model has a dual role.
It is for educational and illustrative applications only, to demonstrate and understand the effects of interventions. This seems to be very similar to the numbers in the OP, and may be the same model. Let's take R0 = 2. COVID-19 R0: the R0 for COVID-19 has a median of 5 in some estimates. In the early stages of an epidemic, growth is exponential, with a logarithmic growth rate. R0 = beta / gamma. The variable m is used to represent a constant rate of birth and death. The model found similar results for both Israel and California, with California reaching herd immunity around July 15th with slightly more than 10% of its population (4.05 million) being infected. The equations that define an SIR or SIRS model are shown in Equations <3>, where now P = S + I + R, with α as the immunity-loss rate and the birth rate equal to the death rate. Ensure that the bounds are given for each model instance. So in a model like the one shown here, a certain proportion of cases are less infectious. The SIR epidemic model: numerical simulations. Although it seems that the overly rapid progression could be corrected for by keeping basically the same movie but just relabeling the days, there are other aspects to consider. Model specification: an example is shown in Figure 2. We use a simple 3-compartment SIR numeric model with Susceptible, Infected and Recovered sub-populations. One piece of good news in this projection appears if you enter an R0 value of 2.
More sophisticated models allow re-infections. Then he made a new bottle from your model. Other parameter values are p, = 0. \beta describes the effective contact rate of the disease: an infected individual comes into contact with \beta N other individuals per unit time (of which the fraction that are susceptible to contracting the. Behavioral SIR models— a warning log(β t) = logβ 0 −α D ΔD t /N. As seen in Figure 1, the best fitted curve (red color) corresponds to the SIR+P+T model, which is a modification of the SIR model in which the transmission rate is adjusted by using temperature and precipitation. One piece of good news in this projection is that if you enter an R0 value of 2. compare this with the damped oscillations observed in a spring). The SIR Model for Spread of Disease. Considering a steady decrease in reported mortality rates since then, the basic reproduction number under the current social distancing restrictions was 1. 8 4 Phase-Plane for SIR endemic model when: (a) R0 = 0. Beta is the infection rate of the pathogen, and gamma is the recovery rate. The model uses coupled equations analyzing the number of susceptible people S(t), number of people infected I(t), and number of people who have recovered R(t). Model fit based on a two-component epidemic model: earsC: Surveillance for a count data time series using the EARS C1, C2 or C3 method. ## ## Set up an empty plot with pre-labelled axes, just like before: # Add the R0 value used to the plot: ## Call plot. Mesa SIR provides the basic building blocks for an Agent Based Susceptible-Infected-Recovered (SIR) Epidemic model. sir model for spread of ebola In our study, we utilize SIR modeling to construct a system of equations with plausible parameters to represent the spread of Ebola. • If we do exactly same thing for SEIR model (straightforward but more involved), we get "So, in comparison with SIR model, invasion speed in SEIR model scales with √R₀ "This seems pretty unwieldy. 
We numerically simulate the SIR model on various temporal networks. The default parameter arguments for the SIR model are: parm0 = c(R0 = 3, Ip = 7) parm_names = c("R0", "Infectious period") parm_min = c(R0 = 0, Ip = 1) parm_max = c(R0 = 20, Ip = 21) These can also be viewed by calling get_params(model = "SIR"). The transfer diagram of our model is depicted in Figure 1: Figure 1: The basic model compartments and flow The dynamics of the model are governed by the following system of differential equations: () () I 1 1U 2 1 2 2T 12 dS SS dt dI S dI dt dU 1p I dU T dt dT pI U dT dt dR U TR dt Λλ µ λ µδ δ µγ α γ δ γ µγ α αα µ. Department of Mathematical Sciences Montclair State University June 20, 2016 [email protected] If you are interested in learning more on this model, there is an online module. Differential Equations for the SIR Model. GitHub Gist: instantly share code, notes, and snippets. In model calibration for estimating transmission rate, it is necessary to discount the total number of infectious people by the case-infection-ratio to determine the reproduction number (R0), the. Differential equations model many natural phenomena as well as applications in engineering and physical sciences. The scheme can also be translated into a set of di erential equations: dS dt = SI dI dt = SI rI (1) dR dt = rI Using this model, we will consider a mild, short-lived epidemic, e. The SEIR model is an extension of the classical SIR (Susceptibles, Infected, Recovered) model, where a fourth compartment is added that contains exposed persons which are infected but are not yet infectious. Let's see what happens if we assume γ=σ I SEIR ⇡ I (0) · e 1 2 (+)+ p 4(R0 1)+(+)2 I SEIR ⇡ I (0) ⇥ e(p R0 1)t. SIR Model SEIR Model 2017-05-08 13. usceptible. 5, which means that any infectious person will have an opportunity to infect 2. 
This model has two control parameters—the probability of disease transmission (upon a contact between an infectious and susceptible individual), denoted by λ, and the duration of the infectious stage, denoted by δ. COVID-19 dynamics with SIR model 11 Mar 2020. The goal of glucose control is supposed to be achieved if the system has a solution, otherwise the goal cannot be achieved. 1) with the initial conditions (3. The SIR compartmental model of disease spread. newborn population. SIR model curves regularly seen and is what is driving our efforts to “flatten the curve. states, please visit the Global Epidemic and Mobility Modeling (GLEAM) project site. The model explores the effect of two strategies (a) suppression, by which interventions are instituted to bring R e to below 1, and (b) mitigation, by which strategies are instituted to reduce the impact of the epidemic, but not interrupt viral transmission completely, thus reducing R e, but not necessarily below 1. 5, means that 58% of the population would become infected. 6) across all seasons and locations. during the severe flu season of 2017-2018 amounted to 0. A number of common models are supplied with the package, including the SIR, SIRS, and SIS models. 0 (standard deviation 0. An examination of the local stability of the model’s equilibria reveals that there is a critical vaccination proportion PC&J- Ro’ (6). In SIR model that takes into account the growing immune population the spread of virus stops when 1 − 1/R0 of the population has been infected and or recovered. SIR Epidemic Model. Assuming that register bank #0 is selected. Information is provided 'as is' and solely for informational purposes, not for trading purposes or advice. Calls SIR_super_compact_pairwise after calculating R0, SS0, SI0 from the graph G and initial fraction infected rho SIS_effective_degree (Ssi0, Isi0, tau, gamma) Encodes system (5. The SIR model is analyzed on the numerical data obtained from IGMC, Shimla, H. 
The SIR model tracks the numbers of susceptible, infected and recovered individuals during an epidemic with the help of ordinary differential equations (ODE). • This is illustrated by the SIR (Susceptible, Infected, Recovered) model for which some technical background will be included in the handout notes, but it is not necessary to understand the key takeaway. A succinct description of the steps involved in the algorithm follows: (1) draw a k-sized sample from each of the parameter's prior distribution. R0, the basic reproduction number, is an important parameter in epidemiology. 98 months, or 11. SIR model is very widely used to analyze the spread of diseases in the human environment such as the ebola virus [1], zika [2], malaria [3], diabetes [4]. 23 (95% CI 1. compare this with the damped oscillations observed in a spring). The SIR model is one of the simplest compartmental models, and many models are derivatives of this basic form. This is not an SIR/SEIR-model and will behave independently from those This model provides a different perspective from the other types of models (R0) 2. Therefore, although initially this model shows large epidemics occurring at regular intervals, eventually the level of the disease reaches a constant value. As stated earlier, another approach to the doubling time formula that could be used with this example would be to calculate the annual percentage yield, or effective annual rate, and use it as r. states, please visit the Global Epidemic and Mobility Modeling (GLEAM) project site. First, it is a model that can be scaled by population size to produce endemic and episodic dynamics (see Bartlett 1956). To model seasonal effects on the spread of the disease, a sinus function with period 365 and the above amplitude is added to the basic reproduction number. β is the transmission rate of the parasite. 
We used a stochastic individual-based SEIR model for transmission of influenza in the LTCFs combined with a deterministic SIR model for transmission of influenza in the community. ; γ is the recovery rate, and the number 1/γ defines the. 5 30% The effective reproduction rate decreases when people take precautions. The basic reproduction number is now given by R0 = +m. SIR modelとは Susceptible(感受性保持者)がInfected(感染者)と接触したとき、感染する確率をEffective contact rate \beta [1/min]と定義します。 \gamma [1/min]はInfectedからRecovered(回復者)に移行する確率です 1 2 。. Often they even take the number of cases with positive tests to be the number of infections, and use that to predict forward or train their model. Model specification. Neither Reed nor Frost through their model worthy of publication, so the model is described by another author (Abbey) as follows:. } This estimation method has been applied to COVID-19 and SARS. 2, and recovery period 30 days. β is the transmission rate of the parasite. Jones 2008), in Spotfire. Bulletin of the World Health Organization, 50 Google Scholar; Ngwa. The SIR model is an epidemiological model that computes the theoretical number of people infected with a contagious illness in a closed population over time. It is assumed that newly infected. Beta is the infection rate of the pathogen, and gamma is the recovery rate. The basic SIR model - as described in Jones' Notes - considers three factors that make up the reproduction number: \tau = the. The disease can fade out after an outburst. ” For our purposes the R0 helps inform scenarios offered in the Sg2 calculator and influences the calculations behind the scenes that produce the curves you see as output. Quite often R0 does not actually appear in SIR (Susceptible-Infected-Resistant) models, but instead the product of contact rate, infection rate, and disease period (or case duration). model <- function (t, x, params) { #SIR model equations. Fitting an SIR model to the Hubei province data. 
It is a mathematical model, based on 4 separate patterns :-Chinese official number-The commonly accepted R0 of 2. A model that is well suited to estimating the spread of disease by inhalable respiratory droplets is the susceptible-infected-recovered (SIR) epidemic model. We use a simple 3-compartment SIR numeric model, with Susceptible, Infected and Recovered sub-populations (e. STANFORD, Calif. In this case, model (3. Bending the Curve — the SIR Model. Consider a population of size N, and assume that S is the number of. New born are with passive immunity and hence susceptible. For the simple SIR model, if t is small, S ≈ N, and the equation for I is approximately I0 = (βN −α)I = (R0 −1)αI, and solutions grow exponentially with growth rate (R0 −1)α. states, please visit the Global Epidemic and Mobility Modeling (GLEAM) project site. 05 million) being infected. 5) reduces to a SIR model in which the infectious individuals are removed at a higher rate than the inverse of their mean infectious period γ, with a transmission rate given by the basic reproductive rate of the system, γ e R 0 (S/N). The standard SIR model supposes that people don’t change their behavior on their own. Originally designed to explore coevolution of myxoma and rabbits, the model is easily. Import Model System into a Display Script Sometimes you will want to try out several model systems on the same data set. Finally, the outcomes of this model are applied using a classical mathematical method for calculating R 0 in a heterogeneous mixing population. By assumption all rates are constant. The transfer diagram of our model is depicted in Figure 1: Figure 1: The basic model compartments and flow The dynamics of the model are governed by the following system of differential equations: () () I 1 1U 2 1 2 2T 12 dS SS dt dI S dI dt dU 1p I dU T dt dT pI U dT dt dR U TR dt Λλ µ λ µδ δ µγ α γ δ γ µγ α αα µ. 
Although the number of new patients in the mainland Child is restrained, the other countries are still struggling with the increasing number of new cases. The SIR model is analyzed on the numerical data obtained from IGMC, Shimla, H. Yang function. The Kermack-McKendrick Model is used to explain the rapid rise and fall in the number of infective. Fitting the worst scenario SIRD model, the R0 has been estimated as 2. " Their purpose in developing this model was to sensitize medical students to the variability of the epidemic process. Measuring Temperature From PT100 Using Arduino: The PT100 is a resistance temperature detector(RTD) which changes its resistance depending on its surrounding temperature, it's used widely for industrial processes with slow dynamics and relatively wide temperature ranges. 1: The Model Diagram 2 12 2 12 1 ( ) ( ) 1 ( ) ( ) dS IS a dS R dt II dI IS d m I T I dt II dR mI d R T I dt O. SIR model with demography. 9 5 Bifurcation diagram. " "" Sir Paul Salvador, this does not mean that I have wrong you. } This estimation method has been applied to COVID-19 and SARS. The model consists of three compartments: S: The number of susceptible individuals. First, we observe that in this case Ω is not uniquely. I: the infected, who have the disease and can transmit it. Let’s take R0=2. • If we do exactly same thing for SEIR model (straightforward but more involved), we get "So, in comparison with SIR model, invasion speed in SEIR model scales with √R₀ "This seems pretty unwieldy. The outbreak of the novel coronavirus disease (Covid-19) brought considerable turmoil all around the world. Beta is the infection rate of the pathogen, and gamma is the recovery rate. We assume that all death is natural. Over time people develop resistence so Rt R0. 
Infectious Disease Epidemiology and Transmission Dynamics Ann Burchell Invited lecture EPIB 695 McGill University April 3, 2007 Objectives To understand the major differences between infectious and non-infectious disease epidemiology To learn about the nature of transmission dynamics and their relevance in infectious disease epidemiology Using sexually transmitted infections as an example, to. An example is the SIR model; it is an epidemiological model that computes the theoretical number of people infected with a contagious illness in a closed population over time. 25 (new) persons/ (infected) persons/ day. When a susceptible and an infectious individual come into "infectious contact", the susceptible individual contracts the disease and transitions to the infectious compartment. SIR EPIDEMIC MODEL Fig. Model # MTR 5 R0. The model (at sdl. All the given constants have epidemiological signi cance, and perhaps the most epidemiologically signi cant term is the basic reproductive number of disease i, Ri 0, which for our diseases is de ned as R 1 0 = 1 + 1 and R2 0 = ˝ + 2. The new equa-tions with the consideration of birth and death are: Figure 4. ferential equations governing the SIR system are then given as dS dt "!bSI, dI dt "bSI!gI, (1) dR dt "gI, where S, I and R are the proportions of suscep-tible, infectious and recovered individuals, b is the contact rate and 1/g is the mean infectious period (Anderson & May, 1979, 1992). • If we do exactly same thing for SEIR model (straightforward but more involved), we get "So, in comparison with SIR model, invasion speed in SEIR model scales with √R₀ "This seems pretty unwieldy. SIRS Model This model has been formulated for diarrheal infections caused by the bacteria Shigella. In this case, vaccinations are applied to the entire susceptible population every T years. We set the recovery period to five days. I first explain where the model comes from. 
The above model is too simple for discussing H1N1 (for starters, we can't have fractional populations). R itself is a surprisingly subtle concept (especially in changing systems): for instance, rt. These built-in models are parameterized using \(R_0$$ and the infectious period ($$1/\gamma$$), since these may be more intuitive for new students than the slightly abstract transmission rate. The optimal R0 value was 1. SOLUTION OF SIR MODEL We now introduce fractional order into the model. ‘people exposed’ which might be important factor in very large geographical area especially if stringent containment measures are implemented. ; γ is the recovery rate, and the number 1/γ defines the. Looking at the IHME model again, on April 13, the model projected that there would be a 1,648 deaths from COVID-19 in the U. Let’s see what happens if we assume γ=σ I SEIR ⇡ I (0) · e 1 2 (+)+ p 4(R0 1)+(+)2 I SEIR ⇡ I (0) ⇥ e(p R0 1)t. 005; R0 = 0. After solving, the doubling time formula shows that Jacques would double his money within 138. a Standard SIR model where a fraction v is vaccinated at birth and immediately becomes immune. “On Using SIR Models to Model Disease Scenarios for. Introduction to the SIR model and R0. Basic question is whether net growth rate R0 is = 1 no spread < 1 disease spreads < 1 disease disappears. a blog about, strangeness in all it's forms. In the simplest model, the basic reproductive rate is referred to as R0 It's hard to give too much credit to a SIR model for any of these successes, though. (31) use a modified Susceptible, Infected and Recovered/removed (SIR) model and propose a set of parameters for a COVID-19 Global epidemic and Mobility Model (GLEaM). β is the contact rate (average number. Kuniya, Numerical approximation of the basic reproduction number for an age-structured SIR epidemic model, Shanxi University, May 2017. "Their needs are different and varied than other. 
Model matematika yang dibentuk merupakan sebuah sistem persamaan diferensial yang dapat dilihat pada Sistem (1). In model calibration for estimating transmission rate, it is necessary to discount the total number of infectious people by the case-infection-ratio to determine the reproduction number (R0), the. Originally designed to explore coevolution of myxoma and rabbits, the model is easily. Let’s take R0=2. R0 is the reproduction number that contains the ‘potential’ for the outbreak and how bad it might get. A malaria model tested in the African savannah. “The Imperial model tries to deal with many things at once,” he says “Other models might focus on one specific thing, or one particular area; all of them help provide an overarching picture of what’s going on. be modeled by the SIR model. Use some of the above code to write a sir_1() function that takes. The Kermack-McKendrick Model is used to explain the rapid rise and fall in the number of infective. Values of R0 and σ are. First, it is a model that can be scaled by population size to produce endemic and episodic dynamics (see Bartlett 1956).
homepage: Dr. Carol JVF Burns
# CAROL'S BLOG
Blog is short for weblog (weBLOG).
Check here to see the latest information about my site (and, occasionally, about my life in general).
It isn't a conventional blog—it just lets you know what I'm currently working on!
Tuesday, December 30, 2014
I finished the web exercises for Fundamental Trigonometric Identities.
I'm still sick—just can't seem to shake this nasty bug.
Thursday, December 25, 2014
Merry Christmas, everyone!
I've been sick for three days—started with a very sore throat, and progressed into lots of congestion and a hacky cough. I didn't go to the family Christmas celebration, so as not to spread germs—am having a quiet, snuggly day with Julia and Tony's three kitties instead! (However, Mr. Nels and Don Paquito seem to be very confused over the ownership of my Christmas poptart!)
I finished the concept discussion for Fundamental Trigonometric Identities.
Wednesday, December 24, 2014
Merry Christmas Eve, everyone!
I've made more progress on Fundamental Trigonometric Identities.
Getting close to being done with the concept discussion.
Tuesday, December 23, 2014
I got a good start to the concept discussion for Fundamental Trigonometric Identities.
There's a lot going on during the holidays, so my progress has been pretty slow.
Thursday, December 18, 2014
I added a new Family Home Evening lesson on developing a Personal Mission Statement.
Monday, December 15, 2014
I finally finished the web exercises for Trigonometric Values of Special Angles. This took me a long time, to get the generality that I wanted. What a nightmare!
Saturday, December 13, 2014
Today is: 12/13/14
What a cool date!!!
Saturday, December 6, 2014
I finished the concept discussion for Trigonometric Values of Special Angles.
Today is Marybelle's birthday celebration! She's $\,18\,$ today. Her birthday dinner request: lasagne, salads (with sprouts), eggnog, and tres leche cake for dessert. (I also made bread, and had clementines for color on the plates. Bethany decorated! We put both leaves in our new table, since Joshua joined us for dinner and we had Marybelle's ‘apartment-starter’ gift on the table.)
Friday, December 5, 2014
I'm almost done with the concept discussion for Trigonometric Values of Special Angles.
My two paper-pieced quilt projects are coming along nicely: a sampler quilt, and my log-cabin blocks for Christmas gifts.
Thursday, December 4, 2014
A user asked for textboxes on worksheets, so answers could be typed in at the computer. Then, users can get a printed hardcopy (or print-to-pdf) with their typed answers in place. I've put a prototype on my most recent exercise: Signs of All the Trigonometric Functions. Scroll down to the worksheet, check the appropriate box, and then create the worksheet. Check it out!
Wednesday, December 3, 2014
I went to start a new section today, and realized I hadn't yet finished the web exercises on Signs of All the Trigonometric Functions! They're now done. Things got so busy with two Thanksgiving celebrations that I lost track of where I was!
Tuesday, December 2, 2014
I just sent a (slightly modified) version of my TeX Commands Available in MathJax document to Peter Krautzberger for use in a github respository. I used a very non-restrictive license that will allow it to be used in the MathJax documentation. (It can also be used commercially.) People just need to retain the link back to my site.
I just chanced across this MathJax Hangout on air—Q&A with Davide Cervone, Peter Jipsen, and David Lippman. It's so great to ‘see’ the people who I've been communicating with (on and off) for years! I love Peter Jipsen's vision—extremely similar to my own—at about minute $\,48\,$. (I like to call this ‘active reading’.)
Here are other MathJax Hangouts I've now found.
This is also the first I've heard about XyJax, an extension for MathJax that enables you to draw various graphs and diagrams. (I currently use JSXGraph for all my drawing needs.)
A couple other things I hadn't heard about and want to keep track of here:

Monday, December 1, 2014
I updated my monthly stats and website income for November.
Wednesday, November 26, 2014
Our new table arrived yesterday! Palettes by Winesburg, rustic cherry, with two (self-storing) leaves. It's beautiful! My little Flagstaff table served us well for almost three months while we awaited our new table's arrival.
I finished the concept discussion on Signs of All the Trigonometric Functions. I'll likely get started on the web exercises today.
I was just contacted by Dr. Peter Krautzberger, MathJax Manager. He asked for my permission to replicate TeX Commands Available in MathJax in a GitHub repository. He indicated that ‘this would allow others to contribute to your excellent guide and possibly make other forms of delivery (epub3 etc) possible.’ I gave him permission, but indicated that the resource must remain free; that no one may profit from this document without my written permission.
Monday, November 24, 2014
I compiled some convenient links to alto/tenor part practice for Handel's Messiah since we'll be singing it soon.
Saturday, November 22, 2014
I finished the web exercises for the Trigonometric Functions.
This section has some randomly-generated JSXGraph problem types.
Thursday, November 20, 2014
I finished the concept discussion for the Trigonometric Functions.
I'm making a sampler paper-pieced quilt—there are about $\,100\,$ different blocks in the book I have, and I'll use most of them at least once to make a queen-size quilt. There are some I won't use (like the cars and trucks) and there are some I'll double up on (like the log cabin and star designs). I'm trying to do one block per day. They started out easy and are getting harder—but I'm also getting better at the method. It seems to be taking me about one hour per day.
Wednesday, November 19, 2014
I spent a long time deciding how to present the motivations for the names ‘tangent’ and ‘secant’, and creating the graphics. Now, I'm close to being done with the concept discussion for the Trigonometric Functions.
Tuesday, November 18, 2014
I got started on the concept discussion for the Trigonometric Functions—then Ray came home in the middle of the day and we ended up practicing rumba, mambo, and cha-cha for a long time!
A link I don't want to lose: Pentatonix (I had never heard of them before) singing ‘Mary Did You Know’. I cried!! There's a moment when the soprano gets this brief look on her face—the ‘it's so beautiful I can't keep my joy from sneaking onto my face’ look. I know the look. I got it last night while practicing the Messiah with Ray in a tiny church room! Alto and tenor can sound wonderful together!!
Monday, November 17, 2014
What an exciting day for me! This morning, I got a \$50 donation, which is double my biggest donation ever! (It's also more than I've made in many of the months for the past year.) Then, this afternoon, I got another donation, for \$100!! The only thing that has changed is that my biography as a featured speaker for CAMT 2015 went ‘live’. Could it be that news about my site is finally getting out to people who can appreciate its value—or was today just an exceptionally lucky day for me? Whatever—it was a fantastic day for me!
Friday, November 14, 2014
I've finished all the exercises for Relatively Prime Numbers and Related Concepts. After an extended break to improve my Algebra I course, I will now return to trigonometry!
By the way, this sounds great for Christmas gifts: Crazy Quilt Block Pot Holder.
Tuesday, November 11, 2014
I've finished the concept discussion and the timed exercise for a new (optional) Algebra I section, Relatively Prime Numbers and Related Concepts.
Monday, November 10, 2014
The speakers for CAMT 2015 (the Conference for the Advancement of Mathematics Teaching) just went live! (I'm a featured speaker, near the bottom—scroll down.) They didn't end up using the bio I sent (it was likely waaayyyy too long). I spent many hours writing it, though, so I'll include it here:
Since 1999, Carol has put more than 10,000 hours into creating about 350 free online math lessons ranging from arithmetic to calculus, each offering unlimited, randomly-generated exercises and worksheets for both online and offline practice. Carol has a Doctor of Arts in Mathematics (a doctoral-level degree that emphasizes effective teaching), and has taught for about 30 years at both the college and high school levels.
For her entire adult life, Carol has been passionate about the language of mathematics—teaching foundational concepts that empower people to teach themselves mathematics. These skills are woven throughout her sequenced curriculum at http://www.onemathematicalcat.org and her online book: One Mathematical Cat, Please!
From humble beginnings with custom-made IBM selectric typewriter elements, to state-of-the-art dynamic web mathematics made possible by MathJax, Carol's love of mathematics has been complemented by her desire to present it beautifully to the world.
Vita: Dr. Carol JVF Burns
Carol's teaching philosophy
Slide show: history and philosophy of Carol's site
Fun facts about Carol (like: What does the ‘JVF’ stand for?)
Testimonials and More Testimonials
Review of One Mathematical Cat, Please!
The easiest way to get to Carol's site is to type three words—‘math cat burns’—into any search engine!
Friday, November 7, 2014
As per my ‘bonus daughter's’ request, I added a ‘no variables’ checkbox to
(‘Bonus daughter’ sounds a lot better than ‘stepdaughter’!)
I'll be starting my first paper-pieced quilt soon. I found these two fantastic videos:

Thursday, November 6, 2014
I added a new Family Home Evening lesson: Remembering Names.
Tuesday, November 4, 2014
I finished the web exercises for The Prime Factorization Theorem.
Saturday, November 1, 2014
I updated my monthly stats and website income for October.
Also, I downloaded this SkinnyTip JavaScript Tooltip Library for use in a new project of mine.
Friday, October 31, 2014
Happy Halloween, everyone!
I finished the concept discussion for The Prime Factorization Theorem,
a new addition to my Algebra I curriculum.
Thursday, October 30, 2014
I put my ASL (American Sign Language) practice online.
I added a section on Prime Numbers to my Algebra I curriculum.
It has both timed practice (more Algebra Pinball!) and concept questions.
Wednesday, October 29, 2014
Several weeks ago I purchased an annual WolframAlpha PRO subscription (\$65.88), so I could start exploring graph interactivity, which requires their CDF player. Thus began weeks of technical support via email, since the CDF player did not work for me. I have MAC OS X, version 10.6.8, which is stated as a supported system. As of today, it works! There were several steps in the process:

• When I downloaded the product from Wolfram Alpha's online link, the application was not recognized by my system, and would not install.

• Technical support sent me a download link in an email, which allowed me to successfully install the CDF player. However, it still did not work on the WolframAlpha site to enable interactivity. I kept getting a big red star that says ‘Refresh Page or Restart Browser’ (neither of which worked).

• Next, technical support informed me that the CDF browser plugin will not work in a 64 bit browser. I was instructed to try Firefox in 32-bit mode: in the Applications directory, right-click on Firefox.app. Select Get Info. Check the Open in 32 bit mode box. However, it still didn't work.

• However, something new was happening: I got an activate Wolfram Mathematica grey screen before the (now familiar) red star error message. Technical support indicated that this grey screen comes from the browser trying to block the plugin.

• To solve this final issue, they had me do the following: in Firefox, I selected, from the menu, Tools--Add-ons--Plugins (on left). I then navigated to the Wolfram Mathematica plugin. I changed its setting from ‘Ask to Activate’ to ‘Always Activate’.

It finally worked! I am very thankful to WolframAlpha Technical Support for working with me on this issue. As my users know, I use WolframAlpha a lot, and I want to be able to share what PRO has to offer!
On a different note, my incredible husband Ray came up with a beautiful derivation of the sum formulas for sine and cosine, which I have his permission to use in my future trig sections. He's amazing!!

Friday, October 23, 2014

I'm giving three talks at the 2015 CAMT conference and have been spending lots of time getting my biography, talk titles, and talk descriptions done. In the process, I'm cleaning up a lot of my online resources, like finally replacing the text math with MathJax in Algebra Pinball!

Tuesday, October 20, 2014

I (finally!) finished the web exercises for Radian Measure: Associating Real Numbers with Points on the Unit Circle.

Friday, October 17, 2014

The last couple weeks have been crazy, so I've gotten very little web work done. Hopefully things will settle soon. I've put a couple more Family Home Evening lessons online: Table Manners, and A Brief Introduction to WolframAlpha.

Thursday, October 9, 2014

Based on a user email, I added more info about efficient methods of finding the least common multiple. In particular, I extended an efficient method for finding the greatest common factor to one for finding the least common multiple.

Wednesday, October 8, 2014

I finished the concept discussion for Radian Measure: Associating Real Numbers with Points on the Unit Circle. This one took a long time to write!

Friday, October 3, 2014

I finished the exercises for Reference Angles. This section took me quite a while, but I'm now pleased with it.

Thursday, October 2, 2014

I added a section to the Algebra I curriculum that discusses an unfortunate order of operations mistake: Taking PEMDAS Too Literally: Don't Make This Mistake!

Wednesday, October 1, 2014

I updated my monthly stats and website income for September.

Tuesday, September 30, 2014

I added a discussion of size/sign to Reference Angles.
I also added a collapsing paragraph that explains why an illustrated technique (to adjust the size of the angle to a size between $\,-180^\circ\,$ and $\,180^\circ\,$) always works. This has ended up being a pretty long section. I'll need to make a decision about breaking it into two pieces, or keeping it as is.

Monday, September 29, 2014

I edited/finished the concept discussion for Reference Angles. For a newbie, this can be confusing, since three angles make an appearance in every problem: the original angle for which you want to find trig values (like $\,1747^\circ\,$); the angle with extra rotations removed, to make it easier to work with (giving $\,1747^\circ - 5\cdot 360^\circ = -53^\circ\,$); and the reference angle (which is $\,53^\circ\,$).

Saturday, September 27, 2014

I'm not quite done with the concept discussion for Reference Angles, but I'm close!

Thursday, September 25, 2014

I put a fun little saying in Special Triangles and Common Trigonometric Values. It's something I've said quite a few times over my many years of teaching!!

Wednesday, September 24, 2014

I finished the exercises for Special Triangles and Common Trigonometric Values.

Tuesday, September 23, 2014

Today is my third wedding anniversary to Ray Burns! We went for a beautiful hike in the Catalina mountains. While on this hike (of course, without any paper or pencil), Ray (my brilliant husband) came up with a great mathematical model for my ‘years with partners’:

$$f(x) = \left|\frac{30x - 70}{x - 3}\right|$$

Note that:

$$f(1) = \left|\frac{30(1)-70}{1-3}\right| = \left|\frac{-40}{-2}\right| = 20$$

My first marriage lasted $\,20\,$ years. Also,

$$f(2) = \left|\frac{30(2)-70}{2-3}\right| = \left|\frac{-10}{-1}\right| = 10$$

My second ‘marriage’ (not a legal marriage) lasted $\,10\,$ years. So, prediction for my third? Of course, $\,f\,$ is not defined at $\,3\,$, but

$$\lim_{x\rightarrow 3} f(x) = \infty$$

I like it!!
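Ray's model is easy to spot-check numerically. Here's a tiny sketch (my own addition, just for fun, not part of the original post):

```python
# Ray's 'years with partners' model: f(x) = |(30x - 70)/(x - 3)|
def f(x):
    return abs((30 * x - 70) / (x - 3))

print(f(1))      # 20.0 -- first marriage: 20 years
print(f(2))      # 10.0 -- second partnership: 10 years
print(f(2.999))  # very large -- f(x) grows without bound as x approaches 3
```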
Monday, September 22, 2014

I finished the concept discussion for Special Triangles and Common Trigonometric Values.

Friday, September 19, 2014

I finished the exercises that go with the new example in the Unit Circle Approach to Trigonometry.

Wednesday, September 17, 2014

I added a second example to the Unit Circle Approach to Trigonometry, and will be adding web exercise(s) corresponding to this example.

Tuesday, September 16, 2014

I finished the web exercises for Compatibility of the Right Triangle and Unit Circle Approaches.

Monday, September 15, 2014

I finished the web exercises for the Unit Circle Approach to Trigonometry.

As for my health, I haven't taken a pain pill since 11:15AM on Sunday—1.5 days. I haven't been able to do this since mid-March. This is real progress for me!

Tuesday, September 9, 2014

I finished the web exercises for the Right Triangle Approach to Trigonometry. Whenever I try to use JSXGraph in the randomly-generated worksheets, it takes a long time. This was no exception! It just fights me every step of the way!

Wednesday, September 3, 2014

I updated my monthly stats and website income for August.

Wednesday, August 27, 2014

I completed the web exercises for Introduction to Trigonometry.

After 8 months of living in our two tiny trailers (Dew Drop and Morning Mist; a total combined floor space of about 40 square feet), we've decided we need to break down and rent a house. We've made a lot of progress in the past 8 months (mostly cleaning), but estimate that we have at least a year before we'll be able to live in the Rock House. It was a good challenge passing through one Arizona summer and monsoon season in our trailers, but I'm not up for a repeat! I will miss the serenity and gorgeous stars and all the critters, but I will so appreciate a refrigerator inside the house and a real kitchen to prepare meals for my family! The place we're hoping to get has a large tiled room where Ray and I will be able to dance (as much as my leg will permit).
So, house-finding has pre-occupied me this week; the actual move will likely postpone serious progress on trigonometry for yet longer. Also, I'm getting another IVIG treatment for my CIDP the next couple days, since the numbness has started to return to my upper back, and I want to ‘catch’ it before it gets as bad as it was several months ago.

Friday, August 22, 2014

I did end up breaking my one looonnnnggg section into four different sections. Now, to write the exercises for all of them!

Thursday, August 21, 2014

I got a lot more done on the concept discussion for Introduction to Trigonometry. I'll probably end up breaking this single section into three sections: an introduction, the right triangle approach, and the unit circle approach. It's way too long already, and I'm not done!

Wednesday, August 20, 2014

I got a good start to the concept discussion for Introduction to Trigonometry. Introductions are always hard. You can't say everything first. And, you shouldn't say everything right away, because it's too overwhelming. I'm fairly pleased with what I have so far.

Monday, August 18, 2014

I've finished the exercises for Doubling Time, Half-Life. Finally, after all these years, on to trigonometry!!

Saturday, August 16, 2014

I've finished the doubling time and half-life discussion for Doubling Time, Half-Life. Next, I want to include a short section on a general situation.

Friday, August 15, 2014

I've started the concept discussion for Doubling Time, Half-Life.

Thursday, August 14, 2014

I had missed a large donation (\$25.00; received \$23.97 after PayPal fees) when reporting my July income. I've corrected it. I believe this is my largest donation ever!!

I finished the web exercises for Solving Exponential Growth and Decay Problems, and also expanded the concept discussion to include (equal change, constant multiplier) pairs.

Tuesday, August 12, 2014

I finished the concept discussion for Solving Exponential Growth and Decay Problems.
I also had my first ‘dry needling’ session, to see if this will help the pain in my right leg.

Saturday, August 9, 2014

I finished the web exercises in Solving Logarithmic Equations. These took a long time to code!

Thursday, August 7, 2014

Ray and I thought that it might be helpful to have two differently-styled hyperlinks: one style for intra-page links (in the same page); a different style for links that go to a different page. When we re-do the web site, this is an idea that we might incorporate.

Wednesday, August 6, 2014

I started creating my own Amazon store.

I finished the concept discussion in Solving Logarithmic Equations.

Saturday, August 2, 2014

I updated my monthly stats and website income for July.

Friday, August 1, 2014

How to harvest our sunflower seeds! In a nutshell, I quote from http://www.gardenersnet.com/vegetable/sunflowr.htm:

Harvest sunflower seeds after the flower begins to die back, and most if not all, of the petals have fallen off. Pull out a seed and open it to see if it is full. Cut off the head, leaving a few inches of stalk. Hang the stalks to dry in a well ventilated area. Do not stack them in a box, as mold can develop during the drying process. As soon as the flowers have dried, extract the seeds by rubbing two flower heads together. They should come off of the flower head fairly easily.

Tuesday, July 29, 2014

I finished the exercises in Solving Exponential Equations.

Thursday, July 24, 2014

I finished the concept discussion in Solving Exponential Equations. Has it really been more than a week since posting anything new? Sigh!

Tuesday, July 15, 2014

I finished the exercises in Exponential Growth and Decay: Relative Growth Rate.

Monday, July 14, 2014

I finished the concept discussion in Exponential Growth and Decay: Relative Growth Rate.

Friday, July 11, 2014

Now, you can click on a cell in the ‘problem type’ table and it will give you that problem. Check it out in Logarithm Summary: Properties, Formulas, Laws.
(I changed the color scheme, too.) So, teachers can say (for example) ‘practice problems 3, 7, and 11’.

Thursday, July 10, 2014

I've finished the exercises in Exponential Growth and Decay Problems—Introduction. Upon suggestion by my brilliant husband, I also created a visual indicator of problem types ‘in progress’, ‘mastered’, and ‘available’. Again, I'm really pleased! I'm hoping this will be useful for my users. I also went back and updated Logarithm Summary: Properties, Formulas, Laws to this graphical style.

Tuesday, July 8, 2014

I've finished the exercises in Logarithm Summary: Properties, Formulas, Laws. Forty of them! This section is a great thorough review of logarithms (yes, I'm biased). Plus, I added a button so you can remove a problem type you've already mastered. Also, you can see just how many problem types you still have to master. I'm very pleased with this section!

Monday, July 7, 2014

In the past few days, we've found a bark scorpion, a western diamondback rattlesnake, a black widow spider, and a tarantula near our trailers. I guess the rains must be bringing them all out. A few weeks ago, we saw a gila monster (pronounced HEE-la). I've already had lots of ‘training’ in the dangerous Arizona vegetation, and now I'm getting my training in the dangerous Arizona wildlife!

Following a user's suggestion, I created a couple sketches to help explain why you must change the direction of the inequality symbol when multiplying/dividing by a negative number. Search for this text (it's about halfway down the page): Here are a couple sketches to further help you understand this concept.

Friday, July 4, 2014

I've got a good start to the exercises in Logarithm Summary: Properties, Formulas, Laws. I've finished the exercises through the Change of Base Formula. More to come!

Tuesday, July 1, 2014

I updated my monthly stats and website income for June.
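As an aside, the logarithm laws drilled in that section are easy to spot-check numerically. Here's a small sketch of my own (the values x, y, b are arbitrary illustrative choices):

```python
import math

# Numeric spot-check of the standard logarithm laws.
# x, y, b are arbitrary sample values chosen for illustration.
x, y, b = 8.0, 32.0, 2.0

assert math.isclose(math.log(x * y, b), math.log(x, b) + math.log(y, b))  # product law
assert math.isclose(math.log(x / y, b), math.log(x, b) - math.log(y, b))  # quotient law
assert math.isclose(math.log(x ** 3, b), 3 * math.log(x, b))              # power law
assert math.isclose(math.log(x, b), math.log(x) / math.log(b))            # change of base
print("all logarithm laws check out")
```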
Saturday, June 28, 2014

Since we have (no joke) $\,200\,$ wheels and about $\,140\,$ tires on the land we're cleaning, I'm collecting some ideas. I want to make sure that water isn't going to get inside and breed mosquitoes. Also, I don't want pack rats to think they've found a new nesting place! There are also concerns about toxicity (zinc, etc.), and you don't want any chance that they'd burn.

• alligator tires
• a climbing wall for kids
• no-slip stairs (possibly use for steps down into the mine)
• a living retaining wall
• planters and raised beds
• make a tire sculpture
• make a palm-tree-tire forest
• flower planters
• a tire mountain for the kids
• roofing a house
• a tire obstacle course
• mosaic planters

Friday, June 27, 2014

I finished the exposition for Exponential Growth and Decay Problems—Introduction. I just realized that I haven't yet written the exercises for the previous lesson, so I'll back up and get that done.

Saturday, June 21, 2014

I finished the exposition on Logarithm Summary: Properties, Formulas, Laws.

Friday, June 20, 2014

I've finished the web exercises for Logarithmic Functions: Review and Additional Properties. I'm really pleased with the exercises, particularly solving logarithmic sentences.

My husband Ray and Bethany have gone on a DDD (a Daddy-Daughter-Date) to Patagonia Lake in southern Arizona, fishing!

My bookmarks list has gotten out of control! Here are some links I want to keep track of (but move off the list):

• bunkbed designs: great inspiration for Bethany's room (and more) in our future Rock House
• hand-pollinating: I hand-pollinated two pumpkin plants (successfully) last week. Seems we just don't have enough bees around. (Ironic—a couple months ago we had to hire an exterminator to remove a nest of killer bees who took up residence in the insulation beneath the storage trailer near our garden coop.)
• This ‘newspaper tree’
• I want to make these hand trees for our Rock House!
• I think these jar meals would make really good Christmas gifts.
Tuesday, June 17, 2014

I've finished the exposition for Logarithmic Functions: Review and Additional Properties. Also, I'm on ‘kitty duty’ this week, taking care of three cats!

Thursday, June 11, 2014

I've finished the web exercises for Continuous Compounding. On to logarithmic functions!

Tuesday, June 10, 2014

I've finished the ‘Rock House Story’ (our emerging country home) up to present time.

I've finished the exposition on Continuous Compounding.

Thursday, June 5, 2014

My homemade grow boxes did not work out well. They turned into sewage-smelling, ant-attracting, soggy messes. The plastic containers that I bought from Walmart were brittle and cracking after only a couple months in the Arizona sun. The plants did not do well in them, and I ended up transplanting directly into the soil in our new coop. The genuine Earth Box that I purchased is doing fine—we're getting lots of delicious cherry tomatoes. Well, it was fun to try!

Wednesday, June 4, 2014

[update: June 28, 2014] I will never do this again. To me, it's worth the money to have it done by a professional. I didn't have a cool, protected place to work, so contending with wind and sun was very difficult. I found it near impossible to position the film correctly, and to get out air bubbles and wrinkles. Ah well!

I want to tint the windows on our car, Blossom. The car gets really hot in the Arizona summers. You're expected to abide by the laws in the states where you're driving, even if you're just visiting. As of 1994, here are Arizona's requirements:

• Windshield: AS-1. On the windshield, there is a mark that says ‘AS-1’ toward the top; the tint cannot go below this marking.
• Front side windows: 33% Visible Light Transmission (VLT)
• Back side windows: any VLT (as dark as desired)
• Rear window: any VLT (as dark as desired)
• Reflectivity: 35% (I couldn't find this info on my purchased car film)

Here's a helpful video for applying the window tint on your car windows.
Here are the basic steps for applying window tint:

• CHECK STATE TINT LAWS: This is a lot of work. You don't want to have to rip it all off soon after you're done!

• COOL, CLEAN WORK AREA: Do the application in a shaded, well-lit, dust-free area. I had a big sheet of styrofoam that I propped against the car to provide shade while working.

• HINT: In hot Arizona, the sprayed liquid dried very quickly, so the film would start to fall. If it hits the ground and gets dirty, then that piece is ruined. I recommend doing this in the very early morning, and also having a second person to help, who can hold the film if it starts to fall.

• Apply when the outside temperatures are expected to be less than 98°F for three consecutive days. I don't have this luxury in Arizona in the summer, so I kept the styrofoam board leaned against the car to provide shade.

• CLEAN OUTSIDE WINDOW: Clean outside car window thoroughly. I washed first with water and paper towel. Then, I used the special cleaning solution in the Gila Window Tint Application Kit, and dried with the lint-free cloth. The outside of the window provides the pattern for cutting the film.

• CLEAN INSIDE WINDOW, as in previous step.

• CLEAN INSIDE EDGES OF INSIDE WINDOW: Wrap lint-free cloth around hard edge of squeegee. Wet cloth with spray. Clean entire perimeter of inside window inside gaskets.

• IDENTIFY STICKY/LINER SIDES OF FILM: There are two sides of the film: a smooth side (the liner) and a sticky side (the actual film). The ‘sticky side’ doesn't actually feel sticky to the human touch, but it does feel sticky when rubbed on itself (as in the video). Usually (and this was true for me) the liner side is on the outside of the roll (the exposed side when it is rolled up).

• SPRAY OUTSIDE OF WINDOW with the special cleaning solution. This allows the film (next step) to stick to the window.

• UNROLL FILM FROM ROLL, liner side OUT (away from window). It will stick to the wet window.
Cut off a piece big enough for the entire window. Put the remaining roll in a safe place for future use. Don't let it fall on the ground!

• ADJUST FILM ON WINDOW: For roll-down windows, adjust the position of the film so that the straight bottom is about 1/4" to 1/2" below the bottom of the window. This bottom edge does not get cut with the razor blade.

• WET/SQUEEGEE TO HOLD IN PLACE: Spray the adjusted film and squeegee to get out wrinkles and hold in place for trimming.

• TRIM VERTICAL EDGES WITH RAZOR BLADE: I cut top to bottom. Tear any excess film away from the window area.

• GET WINDOW ROLLED DOWN A BIT BEFORE TRIMMING TOP: Open door. Lift bottom edge (only!) of film from glass. Roll down window about two inches.

• TRIM TOP OF FILM: Rest razor blade on top of window and cut across. Again, tear excess film away from window area, as needed. I was working in high heat, so things dried out quickly. I had to re-apply spray solution between the window and film to keep it from falling off.

• REMOVE CLEAR LINER: As in the video, use two pieces of scotch tape in upper left corner to separate the sticky film from the liner. Peel slowly, spraying the sticky section as you go. Discard the liner. Remove the piece of tape on the sticky section.

• THOROUGHLY SPRAY INSIDE OF WINDOW.

• REMOVE FILM FROM OUTSIDE OF WINDOW: Standing on the inside of the door, carefully lift the film up and over the top of the door. Immediately stick it, somewhat centered, on the inside window. Move slowly and carefully—you don't want it to stick to itself, and you don't want any part of the film to touch anything dirty!

• FOLD UP BOTTOM OF FILM before correctly positioning the top, so that it won't touch any dirty area below the window.

• CORRECTLY POSITION FILM ON WINDOW: Before positioning, spray beneath film again, as needed. For roll-up windows, leave a gap of about 1/8" to 1/4" at top. The bottom is still folded up at this point.

• SPRAY/SQUEEGEE TOP OF FILM.
Squeegee out all wrinkles and liquid from the top (not-rolled-up) part of the film.

• ROLL UP THE WINDOW, because now you'll finish the bottom (rolled-up) part of the film.

• RE-CLEAN BOTTOM OF WINDOW: spray, squeegee, spray again.

• UNFOLD BOTTOM OF FILM ONTO CLEAN/WET WINDOW.

• SLIP BOTTOM OF FILM BEHIND BOTTOM GASKET: Use the squeegee tool to press out all wrinkles and liquid, and to push film firmly to edges and inside gaskets.

• FINISHING: REMOVE ANY EXCESS LIQUID TRAPPED BENEATH FILM: Wrap the low-lint cloth around the hard edge of the squeegee. Firmly squeegee entire window, from center to edges.

• CONGRATULATE YOURSELF! You've saved money and developed a worthwhile skill. You've exercised patience and perseverance. Hooray! All future windows will be easier to do, now that you've done it once!

Tuesday, June 3, 2014

I updated my monthly stats and website income for May. I learned that Google did a Panda 4.0 update on about May 19, and this is precisely the date that my hits started increasing dramatically. This update is “designed to help boost great-quality content sites while pushing down thin or low-quality content sites in the search results”. Finally, there seems to be recognition that I have great content! Woo hoo! I'm hopeful that my AdSense income will start to improve with the additional hits.

Ray (my husband) and I have been working hard at cleaning out the Rock House!! It's very satisfying to have progress that can actually be seen. (Ray has been doing lots of structural engineering and architectural work, but this is all in his head, on paper, and on his computer.) We do a lot of work very early in the morning (5 AM and even earlier) because by 7 AM it's already very hot here in Arizona.

Also, Ray has finished our beautiful chicken-coop-turned-garden-coop. It's lovely! I painted the boards, put a latch on the door, and provided lots of cheerleading—Ray did everything else (and Bethany helped a bit too).
We have (some are from seed and have not yet sprouted): lettuce, morning glories, cherry tomatoes, garlic, peas, potatoes, pumpkins, squash, cantaloupe, watermelon, wild flowers, sunflowers, and carrots. My daughter Julia gave me a lovely hanging fabric planter, which I'm using for an herb garden; it is hung on one wall of the coop. Hopefully, I'll have fresh basil, oregano, parsley, chives, tarragon, rosemary and thyme!

Friday, May 23, 2014

Here's a handy video for getting rid of fruit flies. In a nutshell: put some apple cider vinegar (about an inch) in a mug, cover with plastic wrap, and punch many tiny holes in the plastic wrap with (say) a toothpick. The fruit flies will fly in, get trapped inside, and die in the vinegar. It may take several days to completely eliminate the fruit flies.

[update several days later]: Didn't work at all for me! Maybe Arizona fruit flies don't like apple cider vinegar.

Usually, at this time of the year, my hits go down dramatically as school ends. However, I've had 1279, 1229 and 1120 page views in the past three days, which is almost double what I've been getting recently. This is good news—I don't know why, but I'll take it!

Thursday, May 22, 2014

I've finished the web exercises on my newest Precalculus section, Simple versus Compound Interest.

I'm trying an organic spray to keep insects from munching on my plants. I thoroughly blend all the ingredients below in my Magic Bullet Blender, then pour into a spray bottle to spray on the plants:

• 1 teaspoon garlic (insects don't like the smell) (I use crushed garlic in a jar.)
• 1 tablespoon dish soap with no bleach content (helps the spray stick to leaves)
• 2 tablespoons vegetable oil (to smother/suffocate soft-bodied insects)
• 2 cups water

I'm also going to plant these flowers around the outside of my gardening coop. They'll look beautiful, and are known to have insect-repelling power:

Tuesday, May 20, 2014

We're back in Arizona!
It's great to be home, and I'm returning in better health than when I left.

I just wrote a testimonial for Even Par Auto Sales. This is a new web site for Mike and Michelle Waldbillig. We bought Blossom from them, and it was a fantastic car-buying experience.

Tuesday, May 13, 2014

It has now been just under two weeks since completing my IVIG treatment. I believe my general health has been on an upward trend with oscillations, something like this: (Time is along the horizontal axis; my general well-being is along the vertical axis.)

I have good days, and I have bad days. I have days when I can forget, for short periods, that I've been so sick. I have days when I can barely walk. However, the upper back pain and headache have been entirely gone for about a week and a half now.

I've had physical therapy for the leg pain. The therapist feels my pain is consistent with a herniated disc in the lumbar region. The stretching exercises seem to help a bit. There is no discernible improvement in the pins-and-needles in my hands, feet, and upper back—my greatest fear is that as each day goes by with this numbness, it increases the likelihood of permanent nerve damage. I'm averaging 1-2 percocet per day; one of these is usually to get some sleep at night.

Ray arrives very late tomorrow night, and we'll leave together for Tucson on Monday, May 19th. Overall, I feel it has been a successful trip back here. I'm not going back in perfect health, but I'm definitely not continuing to get worse, and I have more functionality and less pain than when I left Tucson on April 17th.

Monday, May 12, 2014

I just received an email from a friend who mentioned plasmapheresis. Whereas IVIG adds good antibodies, plasmapheresis cleanses the body of bad antibodies.

Friday, May 9, 2014

I bought a KB sock loom. It was originally \$25.00 at Joann's in Pittsfield, MA, but I had a 40% off coupon. The DVD doesn't work on a Mac (sigh). I needed to use 54 pegs for my size foot.
• Sock Loom #1 (introduction)
• How to Make a Slip Knot (there's a video at the bottom of the page, after the written instructions)
• loom orientation: toggle bolt in upper right (across from you)
• Casting on the Sock Loom
Note that you start on the lower right, using the working yarn.
The first cast-on goes around the back of the first peg to the left.
Not too tight, or it will be difficult to pull the bottom loop over the top in subsequent steps!
• After casting on both rows, pull the loose end of yarn (on your starting peg) to the inside of the loom, between pegs #1 (the starting peg) and #2 (just to the left of the starting peg). This step isn't shown in the video.
• Stitches on Sock Loom
It did not work for me! I was unable to make stitches loose enough. After many hours of casting on, ripping out, and searching the web for tips, I ended up giving the loom to my sister! Evidently she knows an 80+-year-old woman who makes lots of socks on a loom, and perhaps this woman can give her tips that will allow her to be successful. As for me, I found a wonderful pattern for crocheted socks, and am almost done with my first one. So, I at least get to use the yarn that I bought!
Tuesday, May 6, 2014
In mid-February, 2014, Youngho Cho contacted me from Seoul, Korea. He was in the process of developing www.easydesk.co.kr for an item bank, online testing, and publishing system. He came across my TeX Commands Available in MathJax and asked my permission to use it, together with a Korean translation created under his direction, in a commercial setting. In their commercial mode, I will be credited when my material (English and/or a translation) is offered in help mode, and a link will be provided to the original TeX Commands Available in MathJax document. It was also part of the agreement that I could make translation(s) available, linked from my original document, since this will provide a wonderful service to the online mathematics community.
Here is the Korean translation of ‘TeX Commands Available in MathJax’. A Japanese translation may be coming, too!
Monday, May 5, 2014
I updated my monthly stats and website income for April.
Thursday/Friday, May 1/2, 2014
Ray arrived about 2AM Wednesday morning. It is so wonderful to have him here!
My two final IVIG infusions, without steroids, were Thursday and Friday at 8AM at BMC. Unfortunately, at this point in my treatment, my health is regressing, not progressing. I have greatly increased pain in my right leg; increased numbness/pins-and-needles; extreme fatigue and weakness; headache, nausea, and back/neck pain. Unable to get any sleep, I ended up taking some percocet both Thursday and Friday nights. Sometimes things get worse before they get better, and perhaps that's what's going on here.
Wednesday, April 30, 2014
Here are a couple cute acronyms/sayings that I've picked up in the hospital:
• HUSH: Help Us Start Healing
• the Solution to Pollution is Dilution (in other words: drink lots of water)
I'm going home today! I'll have my third steroid/IVIG treatment, and will be released from the hospital thereafter. I'll have the remaining two IVIG treatments done as an out-patient (returning to the hospital just for the treatment). I'll be back at my Mom's house when my husband Ray arrives tonight! Hooray!
My right leg is dramatically better: no spasms, no severe pain, very little weakness. I still have minor upper back pain, minor neck pain, minor headache, mild nausea, but not severe enough to require pain meds. When the neurologist ‘tickled’ the bottom of my feet this morning, I felt it for the first time, so I may be recovering some of my peripheral senses. Hooray! Overall, I'm feeling better than I have in two months. This speaks well to a correct diagnosis and treatment.
I'm glad this is my last (third) day of the steroid treatment. The pain at my IV site, during the injection, has gotten worse and worse each day. Today it is quite painful. Usually, the pain disappears a few minutes after they switch to the IVIG.
I like this description of using the pain scale properly. According to this description, I would say that my worst episodes of pain have been between 8 and 9.
Tuesday, April 29, 2014
The abnormality in my CSF (cerebrospinal fluid) from the spinal tap was albuminocytologic dissociation. Here's the definition, and I quote: “increased protein in the cerebrospinal fluid without increase in cell count, characteristic of the Guillain-Barré syndrome; it is also associated with spinal block and with intracranial neoplasia, and is seen in the last phases of poliomyelitis.”
I have cells in my body that are creating ‘bad antibodies’; these ‘bad antibodies’ are breaking down the myelin sheaths on my nerves. This abnormal behavior of my cells could have been caused by exposure to some toxic substances (for example, things that I might have encountered in cleaning the land and filling three 40-cubic-yard dumpsters).
The IVIG is getting rid of the bad antibodies. It is not getting rid of the cells that create the bad antibodies. Steroids, on the other hand, actually kill the cells that are creating the bad antibodies. However, killing the cells compromises your immune system (which is bad), because then the cells are gone and can't do their job in creating the ‘good’ antibodies to fight off other infections.
I am currently on both IVIG and steroids. Here's an analogy—the IVIG is getting rid of the current attackers, and the steroids are getting rid of the intelligence that is sending in new attackers! Since I'm on steroids, it's important that I not expose myself to sickness (get me out of the hospital, right!?). So I ask, please, that if you're sick, don't visit me! Emails, texts, and phone calls are safe!
The doctors hope that this was indeed brought on by (say) toxic exposure, and that, with the toxicity removed, and having ‘cleaned out’ my body of the ‘bad antibodies’, that I will fully recover. He believes that I can recover full ability, including getting rid of the pins-and-needles in both my hands and feet—he doesn't think this has gone on long enough that the damage is permanent. I'm good with this diagnosis!
Here are some possible side effects of IVIG. Today has been a good day for me; I haven't taken any pain medication in over 24 hours as I write this. However, I have had a mild headache, mild upper back and neck pain, mild leg pain, normal numbness (hands/feet/back), some minor chills and some nausea—the headache and nausea are not as normal for me, and are listed as possible IVIG side effects. It's possible that my post-spinal-tap headache is morphing into a post-IVIG-headache! It's also possible that my Motrin-nausea from yesterday is morphing into some IVIG-nausea!
Here are some possible side effects of steroids.
Monday, April 28, 2014
In the continuing saga of my health (i.e., lack of), they think that I may have the very rare CIDP (Chronic Inflammatory Demyelinating Polyneuropathy), which is ‘an acquired immune-mediated inflammatory disorder of the peripheral nervous system’.
I think they may be getting close, because these quotes from the Wikipedia article describe my symptoms perfectly:
‘Patients usually present with a history of weakness, numbness, tingling, pain and difficulty in walking.
...
Some patients may have sudden onset of back pain or neck pain radiating down the extremities, usually diagnosed as radicular pain. These symptoms are usually progressive and may be intermittent.’
Evidently the spinal tap fluid did show abnormalities consistent with this diagnosis. Evidently everything else was normal: MRIs, EKG, chest x-ray, bloodwork.
I am going to stop taking the percocet; I do not want to start dealing with either addiction or building tolerance, on top of my other woes! Today, for the first time, I took a motrin (600 mg); I went from excruciating, level-10, charley-horse cramping pain in my right leg to tolerable pain in about 20 minutes. It has, however, given me some nausea, whereas I didn't experience any side effects with the percocet.
They are putting me on IVIG (intravenous immunoglobulin). I will have five days of this IVIG treatment. However, they suspect that it will take a few days for this to relieve symptoms. For the first three days, before the IVIG treatment (back-to-back), they are giving me a 1000 mg dose of methylprednisolone, a steroid. This should give more immediate relief of pain, and also remediate some of the possible side effects of the ensuing IVIG.
The steroid and first dose of IVIG went well! Here is a statement, from the Office of Rare Diseases Research, about the long-term outlook for people diagnosed with CIDP. I like the part that I've emphasized, and that's going to be ME!
The course of CIDP varies widely among individuals. Some may have a bout of CIDP followed by spontaneous recovery, while others may have many bouts with partial recovery in between relapses. The disease is a treatable cause of acquired neuropathy and initiation of early treatment to prevent loss of nerve axons is recommended. However, some individuals are left with some residual numbness or weakness.
Saturday, April 26, 2014
Dr. Marina Zaretskaya-Fuchs (a neurologist in Lenox, Massachusetts) came highly recommended by two very close friend/family members. The first test she did—having read my recent medical history—was a neurological conductivity test, which yielded highly abnormal results. She wrote: ‘There is electrical evidence to suggest the presence of acute, not length dependent, asymmetric, mixed motor and sensory, primary demyelinating polyneuropathy.’ Finally, after two months, a test result that showed something wrong—I was ecstatic! She wanted me admitted to the hospital right away, to gather more data on what is causing this compromising of the myelin sheath around my nerves.
Since being admitted, I've had a spinal tap (lumbar puncture), blood drawn, a chest x-ray, an EKG, and just under two hours in the MRI (cervical and thoracic, with and without contrast). I've talked to many doctors and nurses, relating my recent medical history. They are wonderful here at BMC (Berkshire Medical Center) in Pittsfield, Massachusetts. I feel that I'm in very good hands, with a team that is determined to figure out what is causing my symptoms.
The MRI was extremely difficult for me—I was fighting panic attacks throughout. When first pushed in, I made the mistake of keeping my eyes open, and had to be removed immediately, to collect myself. Second time in, I was sure to keep my eyes closed, and I cycled through three strategies for avoiding panic: praying, singing to myself, and envisioning Ray's arms wrapping around me.
Friday, April 25, 2014
I'm now in Massachusetts, and have a neurologist appointment scheduled for tomorrow (Saturday). Hopefully this will shed some light on my health condition.
I'm having a wonderful visit with my Mom. Even at age 55, I gain great comfort from my Mom when I'm in such pain.
I've finished the exposition for Simple Versus Compound Interest. Exercises hopefully coming soon!
Wednesday, April 16, 2014
To all my users: I'm sorry I've produced so little new material on my website for months now. I've been very sick.
I'm headed to Massachusetts for about a month to see my Mom, give my sister a much-deserved break for Mom's care, and see some doctors experienced with late-stage Lyme disease, which I suspect I may have.
I had the Lyme disease AB, total, W/RFX to WB test performed in Arizona.
The Lyme AB screen maxed out: >= 1.10 is considered positive, and my result is >5.00.
I was reactive on the 18 kD (IgG) band.
I was reactive on the 41 kD (IgM) band.
I was non-reactive on all other bands.
This resulted in a ‘negative for Lyme disease’ diagnosis.
This page has a good discussion of the bands on the Western blot.
There is considerable controversy and discussion over testing for and treatment of lyme disease: see, e.g., discussions at wikipedia.
In particular, from lymedisease.org (talking about the western blot) I quote (emphasis is mine):
Different laboratories use different methods and criteria, so you can have a positive test result from one lab and a negative test result from another. Lyme disease is known to inhibit the immune system and twenty to thirty percent of patients have falsely negative antibody tests.
I grew up in Massachusetts, and lived there again from 1999 to April 2011, when I moved to Arizona. I was very active outdoors: hiking, gardening, walking. I know I've had tick bites, because I can remember removing ticks. (In particular, in my health log I have documented, 6/1/2008, ‘tick, right butt, Karl removed’). However, I don't ever recall seeing the ‘bulls-eye’.
In November 2009 I was diagnosed with shingles. I mention this only because I've read that a rash caused by lyme disease is sometimes mis-diagnosed as shingles.
Here are my current symptoms (from oldest to most recent):
• January 2014: weeks of dizziness upon changing elevation or rolling over in bed; the room spins. I self-diagnosed it as BPPV, Benign Paroxysmal Positional Vertigo. I had whacked my head pretty hard a few times on the low mini-dome out at our country land (before I made cute braid ‘pigtails’ to remind me of the low ceiling). The diagnosis seemed to fit. This may or may not be related to the subsequent symptoms beginning in late February.
• late February 2014: Numb, pins-and-needles hands and feet.
Initially, I was numb all the way up to my knees, and all the way to my elbows, but I gained some sensations back. This numbness has been constant; it never goes away.
• early March, 2014: I tried to do a water-only fast. It lasted 54 hours; I had to stop due to headaches, excruciating upper back pain, and the worsening numbness in arms/legs. I also had severe neck pain for several days, but this subsided, leaving only the upper back pain.
• early March to present: extreme weakness and fatigue. It is difficult to walk; stairs are particularly challenging. Since early April, my entire right leg has had severe muscle pain, making it even more difficult to walk.
Here are a couple notable examples of my weakness:
• Under the influence of pain meds I was doing some gardening, forgot for a moment that I was sick, and went to hop over a small trench (which normally would be a trivial move for me). I ended up at the bottom of the trench—no hopping for me these days! (My mind said ‘hop’; my legs said ‘nope’.)
• Again, gardening, I was attacked by a bee. It was swarming around my face. I was trying to get quickly to my trailer, but no going quickly for me—I fell on the ground about four times as I was frantically trying to get away from the bee. Luckily, I was only stung once on the neck.
• March 12, 2014: I ended up in the ER with a half-paralyzed face. I was eating oatmeal, and my tongue wasn't working right; I couldn't press down on the spoon. It quickly spread to the right-side of my face; I couldn't blink my right eye; when I smiled, it only went up on the left. They whisked me in (stroke concern), and quickly ruled out: heart attack, stroke, brain tumor, severe anemia, electrolyte imbalances, thyroid issues. They diagnosed Bell's Palsy, which seems to be a catch-all for facial paralysis that can't be pinned on any particular cause. They prescribed prednisone, 60 mg/day, to heal the facial nerve. By March 18, I could blink my right eye (which saved me many hours a day of putting drops in that eye).
• Since my ER visit, I have been prescribed percocet for pain. It allows me to sleep at night; for weeks, I had not been sleeping much due to the excruciating upper back pain. My poor husband! I would pace and moan, and try to distract myself with movement. I am trying very hard to not just pop-a-pill every six hours, but instead to wait until I just can't stand the pain any more, since I don't want to develop a tolerance to the drug.
• [addition, April 22, 2014] My ankles and lower legs had serious fluid retention on the bus trip from Tucson to Massachusetts (about 72 hours). The swelling has now mostly subsided (about two days after my arrival). My right knee has been troublesome. My entire right leg is much, much worse than it was in Arizona. Pain/numbness extends all the way into my left buttocks. My headache has been much worse than in Arizona.
• [addition, April 22, 2014] Pain in stomach/chest area. When I laugh (watching a movie with my Mom!) my entire chest area hurts.
I am determined to figure out what is going on with me, and start on the road to recovery! If I have lyme disease, then I'm definitely in the ‘late persistent’ (neurologic) stage. I'm determined to get back to serious work on my Precalculus curriculum!
Friday, April 11, 2014
I've written a couple exercises for Diluting a Toxic Liquid.
Even though there are only two cases, there is quite a lot of variability built in.
Thursday, April 10, 2014
This is a very powerful movie about lyme disease: Under Our Skin
Wednesday, April 9, 2014
I've added an optional section to Precalculus,
discussing a real-life problem where the irrational number $\,\text{e}\,$ makes a surprising appearance!
Diluting a Toxic Liquid
I haven't written any exercises yet, but the concept discussion is complete.
Friday, April 4, 2014
My daughter received an honorable mention for Outstanding Interdisciplinary Development by a Graduate/Professional Student (money even!) at the University of Arizona. I'm so proud of all that she does!
We've finished the logo and main page graphic for my web site re-design. They are so beautiful! Now, I'm thinking that, instead of verbiage on the landing page, I'll have a slide show of graphics illustrating key features of my site. I've got Olga working on the first one, over 300 high-quality math lessons.
Tuesday, April 1, 2014
I updated my monthly stats and website income for March.
Wednesday, March 26, 2014
Really beautiful inspirational posters.
I need a greenhouse for wind, harsh sun, and pest protection. They're expensive! I found these DIY plans and youtube video and am tempted to build one myself.
Saturday, March 22, 2014
In my continuing quest for the ‘perfect garden system for me’ I like this Square Foot Gardening.
By the way, my sprouts and composter are doing great, and a bunch of seeds are already up (and a few have already been eaten by critters, so I clearly have to solve that problem). Also, birds have discovered our feeder and waterer, which is great.
My face has almost completely recovered from Bell's Palsy (that was the diagnosis at the Emergency Room on Sunday), but my weakness and upper back pain are still debilitating. So, I'm continuing to try and distract myself with gardening projects.
Tuesday, March 17, 2014
This is a great video showing how to use my Victorio 4-Tray Seed Sprouter.
Also, I've got my first batch going in my new NatureMill Electric Composter!
Saturday, March 15, 2014
We just made one, and it came out great!
Tuesday, March 11, 2014
I put about 15 new cases in Solving for a Particular Variable,
after some feedback from my niece, Sarah Morley. Thanks, Sarah!
I've been experiencing debilitating symptoms for the past week and a half.
After extensive research, I think I might have the (very rare) Guillain-Barré syndrome.
The good news is that it usually goes away.
Here are some good articles:
To distract myself from my pain, I've been a planting fiend! Cherry tomatoes, potatoes, cantaloupe, beans, peas, wildflowers and bachelor buttons, green peppers, cucumbers, carrots, lettuce, and more! I've constructed two self-made self-watering grow boxes; I ended up buying a ‘hot knife’ to help me cut the plastic, which works okay (but is very slow).
We've also hung a bird feeder and a hummingbird feeder, and I hope the birds discover these soon. Here's a neat map that tracks the hummingbird migration.
Wednesday, March 5, 2014
I've been fasting, so have been reading about it. Here are some interesting articles:
Water Fasting for Health
What Science says about Intermittent Fasting
Water Fasting: Is it Safe?
Common Physical Reactions to Fasting
I think I'm going to try this Real Food Menu when I break my fast.
Monday, March 3, 2014
I updated my monthly stats and website income for February.
I'm studying the code for the two templates I purchased, and am coming across some good resources:
Friday, February 28, 2014
I'm so excited!! I've ordered my first illustration from Olga Dabrowska (of a cat on a swing hanging from a branch of a mathematical tree).
This will be my first business with microlancer.com.
I absolutely love her work, and I hope this is the beginning of a long, fruitful relationship!
Thursday, February 27, 2014
Unfortunately, things didn't work out with Godat Design. They decided that my needs exceeded my budget, so they withdrew their estimate. I am suspicious that I scared them away with too many questions. Sigh. I feel bad about this. I was disappointed, because I believe they would have done a good job for me.
Perhaps it's a bit of a blessing in disguise, because in ‘going back to the drawing board’ I came across envato.com, with their web design templates and freelancers.
I've contacted Olga Dabrowska for a possible mathematical tree.
I've contacted dabaman (Habibur Rahman) to see if he does custom web design, because I really like his Jumper and Pen-and-Paper templates.
So, I spent many hours today going through all thirty pages of freelancers showing off their portfolios. Everything just blurs together after a while. I did single out a few that I liked. How does someone actually find a service that's right for them?
Wednesday, February 26, 2014
I finished the web exercises for A Special Property of the Natural Exponential Function.
The rest of this entry is going to read like a commercial, but I just have to share!
For full disclosure—if you order this product through the link I have here, then you support my work. Thanks!
Since we moved out to the country, I've been plagued by this horrible skunk smell at the end of a nearby storage trailer. For weeks, I kept telling myself ‘it will air out on its own’. After about four weeks and absolutely no lessening of the smell, I decided to take some action. I started doing web research on getting rid of skunk smells. I tried the spray-with-a-bleach-solution remedy, which did nothing. In the process of researching, I came across this product, ‘Odors Away’. I was extremely skeptical. But I was placing an Amazon order anyways, and it wasn't too expensive, so I decided it wouldn't hurt to try.
Oh my goodness, it works! The skunk smell is completely gone after two applications, two days in a row. I put a few drops on a piece of glass under the end of the trailer yesterday, and that helped, but didn't completely eliminate the odor. This morning, I sprinkled about 6 more drops around the area. In all, I probably used only 10 drops. I am so happy to be rid of that smell!! After about 2 months of living with skunk smell that made me nauseous (I have to go there, since it's by our outside water faucet), there was an unbelievably easy solution.
Here's another use. We're currently living in an RV while we build our eventual home across the street. Those of you with RV experience probably know that the toilets smell. There's really not much you can do, because you're basically on top of where the sewage is stored. I am super clean, and the toilet is clean clean clean, but you flush, and it smells. My husband has very bad chemical sensitivities, so I can't use any ‘traditional’ cleaning or deodorizing products, either. Well, I have now velcroed that tiny bottle to the wall by the toilet with instructions ‘put one drop in toilet after use’. What a difference. Hooray!
‘Odors Away’ does have a citrus-type odor, but it seems to dissipate pretty quickly. Also, you have to work pretty hard to get only one drop (which is all you need) because a typical squeeze produces two or three drops. But, with some practice, you can indeed get only one drop, which will then really give you your money's worth from this tiny bottle!
UPDATE on MARCH 11, 2014:
The skunk smell has returned on various days, and I've had to re-apply some drops.
However, it is dramatically lessened from the initial smell.
UPDATE ON APRIL 5, 2014:
I've had to give this to my daughter, since my husband (who has severe chemical sensitivities) is reacting to the citrus-like fragrance.
Changing the subject—I'm determined and inspired to not have any boring bookshelves in our new home!
Finally, this excellent comparison video for flashing-light animal deterrent technology was very influential in helping me to choose two Predator Guard units to protect my Grow Boxes (see prior day's entry) from critters. I've now spent days researching grow boxes, composting and composters, and critter repellents. I think I'm set to go!
Monday, February 24, 2014
I'm so excited about EarthBoxes (and similar self-watering ideas)!
I bought one real one for comparison purposes, and I'm enjoying going through their videos.
I intend to make lots of my own (DIY = Do It Yourself).
Here are a few sites which show a variety of DIY techniques:
Cultivating Conscience
How to build an Earthbox for $20
Container Gardening: Making your own Earthbox
Self-Watering Grow Bag
Global Buckets
This one is fantastic if I can find someone giving away food-grade 55-gallon containers!
How to Build the Ultimate Earth Box, Part I
How to Build the Ultimate Earth Box, Part II
The height saves lots of bending; the large reservoir saves lots of filling time.
Friday, February 21, 2014
I've finished the exposition for A Special Property of the Natural Exponential Function.
Tuesday, February 18, 2014
Ray and I met with Godat Design to explore giving a professional look to my web site (sorely needed). They showed us this good site explorer and backlink checker.
This is a nice summary of reasons to switch to HTML5. Here's a graphical summary of where major browsers are in HTML5 support. I wanted to know if JSXGraph works with HTML5: it does. MathJax also works with HTML5.
Here's a good graphic that discusses SEO Friendly Domain Migration (click for bigger version).
This technique worked well to get a list of my pages that are indexed. A bunch of old ones (that have been removed from my server) are still indexed. Ah hah. This is why lots of my input tests aren't doing anything.
Monday, February 17, 2014
I've finished the web exercises for Introduction to Instantaneous Rate of Change and Tangent Lines. I also archived this blog for 2013.
Tuesday, February 11, 2014
I've finished the concept discussion for Introduction to Instantaneous Rate of Change and Tangent Lines. Web exercises coming soon!
Monday, February 10, 2014
This is a good article on pack rats: How to Get Rid of Pack Rats in Arizona
And another: Pack Rats in Vehicles
Thursday, February 6, 2014
We got our second 40 cubic yard dumpster on Monday, and it's just about full already. Yesterday, two relatives helped remove about 20 old rusty stakes (and broken-down fences) using a technique similar to this: How to Remove Metal Fence Posts or Tree Stakes.
However, we used a thick chain with a hook on the end instead of the C-clamp, and we used a solid steel pipe (about 10 feet long) instead of the two boards. Also, we put wood blocks as close to the stake as possible, between the stake and the far end of the pipe, and pushed DOWN instead of pulling up. It worked incredibly well! Some of these stakes I had tried to remove (with banging and digging) for hours. I like these two videos: three classes of levers and Physics class levers.
I'm also working on the next Precalculus web exercise, Introduction to Instantaneous Rate of Change and Tangent Lines, but don't quite have it to the point of uploading.
Saturday, February 1, 2014
I updated my monthly stats and website income for January.
Tuesday, January 28, 2014
Google has made available a new AdSense Direct campaign, where a publisher (like myself) can contract with a single advertiser. I'd like to get a reputable, professional math advertiser for my site, so I could always count on the quality and appropriateness of the ad that appears. I submitted feedback to Google to see if they are planning a system where publishers can ‘make themselves available’ for an AdSense Direct campaign, and/or find out which advertisers are seeking an AdSense Direct campaign.
Friday, January 24, 2014
I decided to change the style on my homepage. The design that I viewed as artistic, others viewed as chaotic and unprofessional.
Wednesday, January 22, 2014
I finished the web exercises for Introduction to Average Rate of Change.
Thursday, January 16, 2014
This is a great video for the Speedy Stitcher Sewing Awl that I got for Christmas.
My computer wouldn't turn on yesterday, so I made a trip to the Genius Bar. It is comforting to know that I always have a patient, skilled, professional human being that I can talk to! I needed a new Apple MagSafe 85W Power Adapter, so it was an easy fix (but pretty expensive; just over $80).
I started the web exercises for Introduction to Average Rate of Change.
Tuesday, January 14, 2014
My daughter posted this on Facebook: an inspiring clip of a young man with progeria sharing his philosophy of life. It is beautiful. Try hard to listen to the entire thing (almost 13 minutes).
We're getting settled in our two tiny trailers in the country, so that we can spend the next year (or so) building our home across the street. It has consumed us for weeks now, which is why I haven't worked on Precalculus lessons. But, I'm finally back to it!
I finished the exposition for the Introduction to Average Rate of Change web exercise.
Thursday, January 2, 2014
My goodness, I haven't had an entry since December 10, 2013. This is definitely a sign of how busy things have been here—we've started serious work on our country land. I now appreciate what 40 cubic yards of junk looks like!
Happy New Year, everyone!
I updated my monthly stats and website income for December, and also for 2013. It's the first time that my stats have declined—this shows the power of search engines. Whatever ‘tweak’ Google made at the end of 2012 hit me really hard. I basically lost two years of progress.
Since I'm making so little income these days, I'm discontinuing my guestbook. People who want to leave me a message can use Facebook instead. I took ‘snapshots’ of all my guestbook entries so I wouldn't lose them, and I've posted them here.
Blog Archive, 2013
Blog Archive, 2012
Blog Archive, 2011
Blog Archive, 2010
Blog Archive, 2009
Blog Archive, 2008
Blog Archive, 2007
Blog Archive, 2006
Blog Archive, 2005
# K-theory tags (versus algebraic-k-theory and topological-k-theory)
A user raised the question (I'm trans-posting here so it gets higher visibility) whether we really have the need for three separate tags (k-theory), (algebraic-k-theory), and (topological-k-theory).
I do not know enough about K-theory myself to be comfortable making the call. Please discuss. If we reach a consensus, I (or Qiaochu) will implement and document the change on the tag-merger thread.
One of the issues is that there is not a lot of traffic in these tags (I imagine). In general I think having more tags is best; this may or may not be one of those cases. One possible reason for the plain K-theory tag is that there is an arXiv tag that is better described by K-theory than by the other two. In the early days, I think people able to make tags were trying to mimic the MO policy of having one tag for each arXiv area.
I think that there are conceivable questions about K-theory that it might be misleading to tag as just algebraic or topological K-theory. But I strongly doubt those questions would be asked here.
I personally use tags to learn about the views of the asker, and what type of people they want to see the question.
Some of the above are just general comments.
• @Sean: can you take a look at the 4 questions currently tagged as k-theory and say whether any of them is more reasonably tagged k-theory instead of algebraic- or topological- ? – Willie Wong Apr 23 '11 at 2:59
• @Willie Well, 3 of these questions are clearly about topological K-theory but 1 is about K-theory of $C^*$-algebras -- and for this one the tag (just) "k-theory" looks natural. – Grigory M Apr 23 '11 at 5:36
• @Grigory: in this case we'll leave the three tags alone. Thanks. – Willie Wong Apr 23 '11 at 11:42
• The reason I asked was the very observation that Grigory M made, namely that it appeared that the k-theory was being used instead of topological-k-theory, but I guess that with such low traffic, it won't cause a big problem. – Raeder Apr 24 '11 at 19:23
• @Grigory: But those questions are about topological K-theory of C*-algebras as opposed to algebraic. – Rasmus Apr 24 '11 at 23:19
• @Rasmus: in this case you are welcome to retag those as topological K-theory. Then the unused k-theory tag will disappear itself after a day or so. Thanks! – Willie Wong Apr 25 '11 at 1:16
• @Rasmus: I don't know if I believe that. There are nice results, Swan's theorem and the Gelfand-Naimark theorem, that essentially say they coincide in the situation of a $C^*$-algebra. It is $C^*$-algebra K-theory, which corresponds to topological K-theory via the above results and has $K_0$ and $K_1$ the same as algebraic K-theory. Topological K-theory would have to mean bundles over the topological space underlying the $C^*$-algebra, which doesn't seem to be what the OP is asking about. – Sean Tilson Apr 25 '11 at 2:56
• @Rasmus I'd say, there is topological K-theory (i.e. K-theory of spaces), algebraic K-theory (i.e. K-theory of rings/schemes) and some other variants -- including K-theory of C*-algebras. But I'm not an expert. – Grigory M Apr 25 '11 at 6:44
• The real point is, I'm interested in questions about topological K-theory -- but I won't be able to answer any question about K-theory of C*-algebras. So it would be convenient (for me, at least) to separate them. – Grigory M Apr 25 '11 at 6:46
• @Sean: Are you sure that, for C*-algebras, algebraic K-theory coincides with (topological/normal/C*-algebra) K-theory? This article seems to indicate that this is not true in general. – Rasmus Apr 25 '11 at 7:06
• The terminology topological K-theory seems to be used for C*-algebras when "normal" K-theory needs to be distinguished from algebraic K-theory. – Rasmus Apr 25 '11 at 7:06
• Given the ongoing discussion, it seems I've marked acceptance too early. So I've undone the acceptance. – Willie Wong Apr 25 '11 at 12:23
• @Rasmus: I guess this is an issue of terminology. We have a $C^*$-algebra person here as well as someone who does operator algebra stuff, and they don't call it algebraic or topological K-theory. But this is just terminology, so I guess it doesn't matter. $K_0$ and $K_1$ of a $C^*$-algebra are almost exactly the algebraic K-theory of the ring. Maybe the above article is hinting that our projections should be continuous? – Sean Tilson Apr 25 '11 at 17:05
Maybe (algebraic-K-theory), (topological-K-theory) and (operator-K-theory)?
But probably, we should just wait.
• I'm iffy on operator-K-theory. Like I said, I'm no expert on it, but during grad school I have seen talks on the former two, but not explicitly something titled the last. So I suspect the term "operator K theory" maybe too specialised for this site. – Willie Wong Apr 25 '11 at 12:24
• Probably you're right. – Grigory M Apr 25 '11 at 13:12
• I think maybe we should wait until someone asks the question. – Sean Tilson Apr 25 '11 at 16:56
# 2.3. Create gridded sections¶
Once the section longitude and latitude endpoints have been defined, and the param.py file has been properly set, the next step is to process the model grid. This is achieved in three steps:
1. Coordinates and scale factors are extracted over the entire domain and stored in a pypago.coords.Coords object.
2. Coordinates and scale factors are extracted over the region of interest, and the variables are zonally flipped if needed. The subdomain variables are stored in a pypago.grid.Grid object.
3. Finally, the section endpoints are converted into gridded sections on the pypago.grid.Grid object, and sections out of the domain are discarded.
These steps can be performed either by using Python scripts or by using the pypago.guis.gui_grid_model Python program.
Warning
These steps require a NetCDF file containing the variables defined in the dictvname variable (either default values or values defined in the param.py file).
Both methods are described below.
## 2.3.1. Using Python scripts¶
### 2.3.1.1. Extracting coord objects¶
Extraction of coord objects is achieved by using the pypago.coords.create_coord() function as follows:
"""
Extraction of model coordinates,
and saving into a Pygo file.
"""
import pypago.coords
import pypago.pyio
modelname = 'NEMO'
filename = 'data/mesh_mask.nc'
# loading coords
coord = pypago.coords.create_coord(modelname, filename)
pypago.pyio.save(coord, 'data/nemo_coords.pygo')
In [1]: import os
In [2]: cwd = os.getcwd()
In [3]: print(cwd)
/home/nbarrier/Python/pago/pago/trunk/doc_pypago
In [4]: fpath = "examples/define_coords.py"
In [5]: with open(fpath) as f:
   ...:     code = compile(f.read(), fpath, 'exec')
   ...:     exec(code)
   ...:
Reading longitude: variable glamt
Reading latitude: variable gphit
No bathymetry is read
Reading T-grid mask: variable tmask
Reading T-grid zonal width: variable e1t
Reading T-grid meridional width: variable e2t
Reading V-grid eastern meridional width: variable e2u
Reading U-grid northern meridional width: variable e1v
Reading T-grid height: variable e3t
Reading 1D deptha array: variable gdept_0
Reading mbathy: variable mbathy
Reconstruction of bathy from mbathy and depth
Dzt is 3D. Model grid is in partial step
Reading U-grid eastern height: variable e3u
Reading V-grid northern height: variable e3u
Reading U-grid mask: variable umask
Reading V-grid mask: variable vmask
Reconstruction of U-grid western height from U-grid eastern height
In [6]: print(coord)
Coords object:
-filename: data/mesh_mask.nc
-modelname: NEMO
-bathy: (149, 182)
-dxn: (149, 182)
-dxt: (149, 182)
-dye: (149, 182)
-dyt: (149, 182)
-dyw: None
-dze: (31, 149, 182)
-dzn: (31, 149, 182)
-dzt: (31, 149, 182)
-dzw: (31, 149, 182)
-latt: (149, 182)
-lont: (149, 182)
-mask: (149, 182)
Note that the user may also use the executable program pypago.bin.make_coords as follows:
make_coords.py NEMO data/mesh_mask.nc coords.pygo
The arguments are the model name, the path of the meshfile and the path of the output file.
Note
The param.py file still needs to be in the working directory
### 2.3.1.2. Extracting grid objects¶
The grid objects are obtained by using the pypago.grid.create_grid() function.
Its arguments are the i (imin and imax) and j (jmin and jmax) indexes of the subdomain to extract. When one of these arguments is None, the highest (or lowest) possible index is taken. Note that if imax<imin, zonal periodicity is assumed.
Warning
For regional studies, imax should always be greater than imin
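The wrapping convention implied by imax<imin can be sketched with a small stand-alone helper (hypothetical, not part of PyPAGO) that expands the (imin, imax) pair into the list of zonal indices:

```python
def wrap_zonal_indices(imin, imax, nlon):
    """Expand (imin, imax) into the list of zonal (i) indices.

    When imax < imin, the range wraps around the nlon-point
    periodic axis, mirroring the zonal-periodicity convention
    described above.
    """
    if imax >= imin:
        return list(range(imin, imax + 1))
    # periodic case: imin..nlon-1, then 0..imax
    return list(range(imin, nlon)) + list(range(imax + 1))

# The Indian Ocean example below (imin=140, imax=40 on a
# 182-point NEMO grid) wraps around and yields 83 zonal points.
print(len(wrap_zonal_indices(140, 40, 182)))  # 83
```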
An example of a subdomain extraction with zonal periodicity is shown below.
"""
Extraction of a sub-domain grid. Three examples
are provided: indian grid, North-Atlantic Grid
or Indian ocean (with a longitude flip).
"""
import pylab as plt
import pypago.grid
import pypago.pyio
# indian example
jmin = 10
jmax = 100
imin = 140
imax = 40
coord = pypago.pyio.load('data/nemo_coords.pygo')
# if you do not have a coord object saved in a file, use:
# coord = pypago.coords.create_coord("NEMO", 'data/mesh_mask.nc')
# creation of the grid
grid = pypago.grid.create_grid(coord,
                               jmin=jmin, jmax=jmax,
                               imin=imin, imax=imax)
# saving of the grid
pypago.pyio.save(grid, 'data/indian_grid.pygo')
# drawing the domain on top of the
# coord mask
plt.figure()
coord.plot_mask()
grid.plot_dom()
plt.title('global domain')
plt.savefig('figs/indian_domain')
# drawing the domain mask
plt.figure()
grid.plot_mask()
plt.title('indian grid')
plt.savefig('figs/indian_grid')
In [7]: import os
In [8]: cwd = os.getcwd()
In [9]: print(cwd)
/home/nbarrier/Python/pago/pago/trunk/doc_pypago
In [10]: fpath = "examples/define_grid_indian.py"
In [11]: with open(fpath) as f:
    ....:     code = compile(f.read(), fpath, 'exec')
    ....:     exec(code)
    ....:
Extraction of lont on the domain
Extraction of latt on the domain
Extraction of bathy on the domain
Extraction of mask on the domain
Extraction of dxt on the domain
Extraction of dyt on the domain
Extraction of dxn on the domain
Reconstruction of the dyw variable from dye
In [12]: print(grid)
Model grid for the NEMO model:
- mesh file: data/mesh_mask.nc
- jmin: 10
- jmax: 100
- imin: 140
- imax: 40
- nlat: 91
- nlon: 83
- nz: 31
Fig. 2.5 Extracting the Indian Ocean from a global grid
Fig. 2.6 Indian domain that has been extracted
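The nlat and nlon values printed above follow directly from the chosen index bounds; here is a quick stand-alone check (plain Python, no PyPAGO required):

```python
jmin, jmax = 10, 100     # meridional bounds (non-periodic)
imin, imax = 140, 40     # zonal bounds; imax < imin, so the domain wraps
nlon_global = 182        # zonal size of the global NEMO grid

nlat = jmax - jmin + 1                    # 91 meridional points
nlon = (nlon_global - imin) + (imax + 1)  # 42 + 41 = 83 zonal points
print(nlat, nlon)  # 91 83
```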
Note that the user may also use the executable program pypago.bin.make_grid as follows:
make_grid.py NEMO data/mesh_mask.nc grid.pygo
This interactive script allows the user to choose the values of the imin, imax, jmax and jmin variables. The arguments are the model name, the model mesh file and the name of the output file.
### 2.3.1.3. Extracting gridded sections
The extraction of gridded section is achieved by using the pypago.sections.extract_grid_sections() function.
The function takes as arguments the pypago.grid.Grid object and the list of section endpoints (pypago.sections.Section objects). It returns a list of pypago.sections.GridSection objects (gridsec in the example below) and the indexes of the discarded sections (badsec in the example below).
import pypago.sections
import pypago.pyio
import pylab as plt
grid = pypago.pyio.load('data/indian_grid.pygo')
sect = pypago.pyio.load('data/endpoints_indian.pygo')
gridsec, badsec = pypago.sections.extract_grid_sections(grid, sect)
pypago.pyio.save(gridsec, 'data/indian_gridsec.pygo')
plt.figure()
grid.plot_mask()
for s in gridsec:
s.plotsecfaces()
plt.savefig('figs/indian_gridsec.png')
In [13]: import os
In [14]: cwd = os.getcwd()
In [15]: print(cwd)
/home/nbarrier/Python/pago/pago/trunk/doc_pypago
In [16]: fpath = "examples/define_gridsec_indian.py"
In [17]: with open(fpath) as f:
....: code = compile(f.read(), fpath, 'exec')
....: exec(code)
....:
In [18]: print(badsec)
[]
In [19]: for sec in gridsec:
....: print(sec)
....:
Gridded section, NEMO model:
-areavect: (31, 70)
-depthvect: (31, 70)
-dire: (1,)
-faces: (70,)
-i: (2,)
-imax: 40
-imin: 140
-j: (2,)
-jmax: 100
-jmin: 10
-lengthvect: (70,)
-lvect: (70,)
-modelname: NEMO
-name: section4
-nlon: 182
-orient: (70,)
-veci: (70,)
-vecj: (70,)
Gridded section, NEMO model:
-areavect: (31, 51)
-depthvect: (31, 51)
-dire: (1,)
-faces: (51,)
-i: (2,)
-imax: 40
-imin: 140
-j: (2,)
-jmax: 100
-jmin: 10
-lengthvect: (51,)
-lvect: (29,)
-modelname: NEMO
-name: section5
-nlon: 182
-orient: (51,)
-veci: (51,)
-vecj: (51,)
Fig. 2.7 Indian Ocean gridded sections
The user is strongly invited to use the executable program pypago.bin.make_gridsec, which is run as follows:
make_gridsec.py grid.pygo endpoints.pygo gridsec.pygo
This interactive program allows the user to check that the sections are well defined (see Warning below), and gives the possibility to correct badly defined sections. It takes as arguments the names of the PAGO grid and endpoints files, and the name of the output file.
Warning
It is essential to verify that the orientations of the segments are consistent, i.e. that the dots are on the same side of the line, as in Fig. 2.13. If this is not the case, the section orientation must be corrected.
Furthermore, to perform budgets within a basin, it is essential to ensure that the basin is closed (i.e. that there is no leakage). If not, it might be necessary to add another section or to displace points from sea to land.
## 2.3.2. Using graphical user interface
The extraction of gridded sections may also be achieved through the use of the pypago.guis.gui_grid_model Python program. This opens the GUI that is shown in Fig. 2.8.
### 2.3.2.2. Opening a meshfile
As a first step, the user must define the name of the model that is going to be processed. This is done by setting the Model ComboBox. Then, the user must load the NetCDF meshfile of the model by using the Open menu item. When this is done, the mask of the model is plotted, as shown in Fig. 2.9, and the default domain is plotted as a black rectangle. The next step is to edit the domain, for instance by reducing the size of the domain according to the section positions.
### 2.3.2.3. Domain edition
The domain edition is handled by the top-left widgets. The min_i, max_i, min_j and max_j widgets control the domain left, right i-indices and bottom, top j-indices, respectively. Default values are set to the biggest possible domain.
The domain can be changed by “click and drag” on the corner points (but not on the lines) or by a change in the TextControl widgets. In the latter case, the changes are validated when the ENTER key is pressed. Such a change is shown in Fig. 2.10.
With this specific grid, the user interested in the Indian Ocean might be a little disappointed, since the box domain does not cross it. In order to overcome this issue, the user must set, in the TextControl widgets, a value for min_i that is greater than the value of max_i. This switches the previous box into two boxes, as shown in Fig. 2.11. With this layout, the user can define a domain that encompasses the Indian Ocean.
### 2.3.2.4. Loading a section file
When the meshfile is loaded and the subdomain selected, the user must now load the section endpoints that have been generated using pypago.pypago_guis.gui_sections_edition. The program then computes the model indices that are associated with the section endpoints (these indices are model dependent) and draws the sections as “stairs”, as shown in Fig. 2.12. When this is done, the top-left RadioBox activates and switches to Check sections.
### 2.3.2.5. Checking and editing the sections
When these “stairs” are plotted, the user must verify that they are well defined. The points that appear on the figure and which define the direction where the transport is counted positive must all be on the same side of the line, as in Fig. 2.13. If it is the case, the user can save the model indices that are associated with the sections into a file, by using the Save/Save As menu items.
If they are not, the user must change the direction of the bad segments. This is done by switching the RadioBox to Edit Sections. This edition mode is similar to the one described previously, except that the section edition can only be achieved by modifying the point positions. Furthermore, if the user plans to perform budgets within closed domains, he needs to check that the sections indeed define a closed domain.
If the domain is too large compared to the section positions, the user may also be interested in reducing the size of the domain. This is done by switching the RadioBox to Edit Sections. Note that the Save/Save As menu item is only activated when this RadioBox is set to Check Sections, in order to force the user to check that the sections are well defined.
When the user is done, he can save the outputs of the program into .pygo files. If the chosen saving path is /output/path/file.pygo, the Grid object will be saved in the /output/path/file_grid.pygo file, while the GridSec objects will be saved in the /output/path/file_gridsec.pygo file.
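The naming rule above can be sketched as a small Python helper (a hypothetical illustration of the convention — PyPAGO derives these names internally):

```python
import os

def derive_output_paths(path):
    # Hypothetical helper mirroring the saving convention described above:
    # '/output/path/file.pygo' -> ('.../file_grid.pygo', '.../file_gridsec.pygo')
    root, ext = os.path.splitext(path)
    return root + "_grid" + ext, root + "_gridsec" + ext

print(derive_output_paths("/output/path/file.pygo"))
# ('/output/path/file_grid.pygo', '/output/path/file_gridsec.pygo')
```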
|
{}
|
# Calculating proper time using the Schwarzschild metric
1. Mar 3, 2012
### demonelite123
I am using the Schwarzschild metric given as $ds^2 = (1 - \frac{2M}{r})dt^2 - (1 - \frac{2M}{r})^{-1} dr^2$, where I assume the angular coordinates are constant for simplicity.
So if a beam of light travels from radius r0 to smaller radius r1, hits a mirror, and travels back to r0, I am trying to find how much proper time has passed for an observer fixed at r0. So far, I have that this path can be parametrized by r = r0 and t = x, where x is just my parameter. Therefore, r' = 0 and t' = 1. Using the formula for arc length, I have that the proper time is given by $\int \sqrt{1 - \frac{2M}{r_0}} dx$.
This is where I am stuck, as I am having trouble determining the limits of my integral. Can someone give me a hint or two in the right direction? Thanks.
2. Mar 3, 2012
### bcrowell
Staff Emeritus
Your integrand doesn't have any variable in it. The $r_0$ shouldn't be inside the integral; it should relate to a limit of integration.
You could try setting $ds^2=0$ and then separating variables and integrating to get a relation between r and t for the light beam.
3. Mar 3, 2012
### Staff: Mentor
First of all, you don't really need the extra parameter x; as far as the observer fixed at r0 is concerned, he's just traveling from t0, the time when he emits the light beam, to t1, the time when it returns to him. So you could just write the integral as:
$$\tau = \int_{t_{0}}^{t_{1}} \sqrt{1 - \frac{2M}{r_{0}}} dt$$
But the integrand doesn't depend on t, so you can just factor it out, and that makes the integral trivial:
$$\tau = \sqrt{1 - \frac{2M}{r_{0}}} \left( t_{1} - t_{0} \right)$$
Which, of course, should make you realize that the real focus of the problem is determining the coordinate time interval t1 - t0. The way to do that is to focus, not on the worldline of the observer fixed at r0, but on the worldline of the light beam. There are two segments to it (the one from r0 inward to r1, and the one from r1 back outward to r0), but they are mirror images, so to speak, so they should take equal coordinate time to traverse. So figuring out the coordinate time for one is sufficient. That's where I would recommend focusing your efforts. The key fact you need, in addition to what you've already posted, is that the light beam's worldline is null; that is, the interval ds^2 along the light beam's worldline is zero.
4. Mar 4, 2012
### pervect
Staff Emeritus
Well, the way I'd approach it is this:
Integrating along the path that the light takes won't give us the right answer - we want to integrate along the path that the clock takes between transmission and reception. Which is a simple path, of constant r = r0.
So we need to draw a space-time diagram with the ingoing light beam, and the outgoing lightbeam. How do we do this?
Given the line element
$$ds^2 = (1 - \frac{2M}{r})dt^2 - (1 - \frac{2M}{r})^{-1} dr^2$$
we know that for a light beam, ds = 0. This immediately gives us the ratio dr/dt for the light beam - which will be a function of r.
So we'll have f(r) dr = dt, where I'm too lazy to write out f(r).
Integrating this we'll get $\Delta t=F(r)$. We'll have the same $\Delta t$ on the ingoing and outgoing null geodesic - so we double it.
This will give us the coordinate time that elapses between emission and reception. To get the proper time, we integrate along the worldline at r=r0 between the emission and reception events. dr=0 for this intergal, so we get a simple time dilation factor
$$ds = \int \sqrt{1 - \frac{2M}{r}} \, dt = \sqrt{1 - \frac{2M}{r}} \Delta t$$
Last edited: Mar 4, 2012
5. Mar 5, 2012
### demonelite123
OK, so setting $ds^2 = 0$, I get $(1 - \frac{2M}{r})dt^2 = (1 - \frac{2M}{r})^{-1} dr^2$ or $dt^2 = (1 - \frac{2M}{r})^{-2} dr^2$. Then taking the square root of both sides, I get $dt = (1 - \frac{2M}{r})^{-1} dr$.
now i can integrate both sides and i get $t_1 - t_0 = \int_{r_1}^{r_0} \frac{r}{r - 2M}dr = (r_0 - r_1) + 2Mln(r_0 - 2M) - 2Mln(r_1 - 2M)$.
Would this be correct? Thanks for your replies.
6. Mar 5, 2012
### Staff: Mentor
As noted before, to get the final answer you need to multiply the result by 2 because the integral you have given gives the "one-way" time, and you need the "round trip" time. The integral itself looks OK to me.
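Putting the pieces together numerically (a sketch in geometric units G = c = 1; the values M = 1, r0 = 10 and r1 = 5 are illustrative choices, not part of the original problem):

```python
import math

M, r0, r1 = 1.0, 10.0, 5.0  # illustrative values, geometric units (G = c = 1)

# One-way coordinate time from the null condition dt = (1 - 2M/r)^(-1) dr:
dt_one_way = (r0 - r1) + 2 * M * math.log((r0 - 2 * M) / (r1 - 2 * M))

# The round trip takes twice that, and the fixed observer's proper time
# carries the time-dilation factor evaluated at r0:
tau = math.sqrt(1 - 2 * M / r0) * 2 * dt_one_way

print(f"coordinate round-trip time: {2 * dt_one_way:.4f}")
print(f"proper time at r0:          {tau:.4f}")
```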
|
{}
|
# Louis's preferences over pizza (x) and other goods (y)

Question

Louis's preferences over pizza (x) and other goods (y) are given by U(X,Y) = XY^2, with associated marginal utilities. His income is $240.

a) Calculate his optimal basket when Px = 8 and Py = 1.
b) Calculate the income and substitution effects of a decrease in the price of pizza to $6.
c) Calculate the compensating variation of the price change.
d) Calculate the equivalent variation of the price change.
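For part (a), a hedged sketch of the standard approach: with U = XY², the marginal utilities are MUx = Y² and MUy = 2XY, so the tangency condition MUx/MUy = Px/Py reduces to the usual Cobb–Douglas income-share rule (one third of income on x, two thirds on y). The helper below is my own illustration, not part of the question:

```python
def optimal_basket(income, px, py):
    # For U = X*Y^2 the tangency condition Y/(2X) = px/py plus the budget
    # line px*X + py*Y = income gives the 1/3 - 2/3 income-share rule.
    x = (income / 3) / px
    y = (2 * income / 3) / py
    return x, y

print(optimal_basket(240, 8, 1))  # (10.0, 160.0) -- part (a)
print(optimal_basket(240, 6, 1))  # basket after the price falls to $6
```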
|
{}
|
# Revision history [back]
First, I'll mention that I prefer LazyPowerSeriesRing to PowerSeriesRing. Here is an example of using it.
sage: P.<t> = LazyPowerSeriesRing(QQ) # Creates the power series ring QQ[[t]]
sage: f = (t + (1/2)*t^2).exponential # In latex, $f = e^{t + \frac{t^2}{2}}$
sage: f[20]*factorial(20) # f[20] is the coefficient of t^20
23758664096
It's an unfortunate fact that the interface here is a little bit flaky. For example, using
sage: f = (t + t^2/2).exponential()
gives an error. Even worse, I only know how to define f=1/(1-t) with the following sequence of commands.
sage: f = P() # Prepare to define f by a functional equation
sage: f.define(1 + t*f) # This defines f by the functional equation f = 1 + t*f
sage: f[20] # Coefficient of t^20
1
However, it does have a facility for dealing with infinite products. For example, the generating function for partitions is the infinite product $\prod_{i \ge 1} 1/(1-t^i)$. We can do this in Sage as follows. First we define a function that will return any given factor.
sage: def factor(i):
....:     f = P()
....:     f.define(1 + t^i*f)  # f = 1/(1-t^i)
....:     return f
Now we define a generator that represents the infinite product without needing to compute it.
sage: def gen():
....:     i = 1                # product starts here
....:     while True:          # product continues forever
....:         yield factor(i)  # return the i^th factor
....:         i += 1
Now we can define what we want.
sage: g = P.product_generator(gen())
sage: g.compute_coefficients(8) # Compute the first 8 coefficients
sage: g
1 + t + 2*t^2 + 3*t^3 + 5*t^4 + 7*t^5 + 11*t^6 + 15*t^7 + 22*t^8 + O(x^9)
sage: g[20]
627
sage: number_of_partitions(20)
627
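The same coefficients can be cross-checked in plain Python, independently of Sage, by multiplying out truncations of the factors 1/(1-t^i) (a sketch; the function name is mine):

```python
def partition_coefficients(n):
    # Coefficients of prod_{i>=1} 1/(1-t^i), truncated at t^n.
    coeffs = [1] + [0] * n           # start from the constant series 1
    for i in range(1, n + 1):        # multiply by 1/(1 - t^i), i.e. ...
        for k in range(i, n + 1):    # ... c[k] += c[k - i] in increasing k
            coeffs[k] += coeffs[k - i]
    return coeffs

print(partition_coefficients(8))       # [1, 1, 2, 3, 5, 7, 11, 15, 22]
print(partition_coefficients(20)[20])  # 627
```

The first nine coefficients match the Sage output above, and the coefficient of t^20 agrees with number_of_partitions(20).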
As I said, there are definitely problems with the interface here, but the structure is quite powerful. Please ask for more specific help if this answer isn't enough for you to solve your problem.
|
{}
|
# Peano axioms and first-order logic with $\exists^{\infty}$
All Peano axioms except the induction axiom are statements in first-order logic. The induction axiom is written as $$\forall X\,\bigl(\bigl(0 \in X \land \forall n\,(n \in \mathbb{N} \rightarrow (n \in X \rightarrow n' \in X))\bigr) \rightarrow \mathbb{N} \subseteq X\bigr),$$ where $$n'$$ is the successor of $$n$$.
Now I want to look at an extension of first-order logic. I also allow $$\exists^{\infty}$$ [exists infinity many]. My notes states that the induction axiom can be rewritten only using this extension of first-order logic.
My question is: How?
Since I look at first-order logic, I am not allowed to have a set $$X$$. Here I am already stuck, because I think I cannot just say $$\exists{^\infty} x$$ to have something like a set ... I think the solution requires two steps: first get rid of the set, and then use $$\exists^{\infty}$$ to make it well-defined in my case.
• It's kinda weird talking about Peano and first-order while quantifying over subsets (second-order) and referring to $\Bbb N$ in the formula. – Asaf Karagila Mar 18 at 11:15
• @AsafKaragila Well he did say all the axioms "expect" the induction axiom - if you assume "expect" was a typo for "except" it makes sense.... – David C. Ullrich Mar 18 at 14:45
Note that induction is equivalent to well-ordering (more generally to well-foundedness). Namely, removing the induction axiom, a model of $$\sf PA$$ is well-ordered if and only if it satisfies the (second-order) induction axiom.
But well-ordering is equivalent to "there is no infinite decreasing chain". Finally, since in $$\sf PA$$ every non-zero element has a predecessor, this means that well-ordering is equivalent to stating that no element has infinitely many elements smaller than itself.
And this should be fairly straightforward to state using $$\exists^\infty$$.
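Spelling the last step out (my own formulation — the notes may phrase it differently), the second-order induction axiom can be replaced by the well-foundedness statement that no element has infinitely many predecessors:

```latex
\neg \exists x \, \exists^{\infty} y \; (y < x)
% equivalently: \forall x \, \neg \exists^{\infty} y \; (y < x)
```

Together with the remaining first-order Peano axioms, this single $\exists^{\infty}$-sentence does the work of the induction axiom.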
|
{}
|
# Constant acceleration of velocity
1. Sep 28, 2005
### TickleMeElma
A falling stone takes 0.28s to travel past a window 2.2m tall. From what height above the top of the window did the stone fall?
So we have time and distance, but not the acceleration, which confuses me terribly. Do I need to find velocity? Or do I need to look for the acceleration? SO confused. Any help will be greatly appreciated.
Thanks!!
2. Sep 28, 2005
### hotvette
It always helps to draw a picture and free body diagram, then write the equation of motion, and solve for what you want. In this case, you have a rock of mass m, acted upon by gravity g, falling from rest, starting from a height x above the window. What you are given is the time it takes to cover a certain distance during its fall to the ground. Hope this helps get you thinking in the right direction.
3. Sep 28, 2005
### TickleMeElma
I appreciate your advice, but I am still confused. I am using the equation of motion, yes. But it asks for time, and the time from the moment of fall is not the same as the time it took the rock to fall past the window, which means that t is the variable that cannot be used, but is needed in the equation to find the distance covered. And, yes, drawing a diagram is the first thing I do... :)
4. Sep 28, 2005
### hotvette
Let me try this way. You have an equation of motion that relates distance fallen (from an unknown point above the window) as a function of time. Seems to me a fruitful approach is to express that equation of motion for $x_1, t_1$ and $x_2, t_2$, where $x_1$ represents the top of the window and $x_2$ represents the bottom of the window. Subtracting the 2 equations gives you $x_1 - x_2$, which you know.
Last edited: Sep 29, 2005
5. Sep 29, 2005
### TickleMeElma
Ok, what if we don't even use time, except to find the velocity? Having that information, we can find the position, assuming that the acceleration is -9.8m/s^2?
6. Sep 29, 2005
### hotvette
Had to do a little head scratching. I believe the following will work:
Using the equation of motion of the rock you should be able to get an equation for $x_1-x_2$ in terms of $g$, $t_1^2$ and $t_2^2$. You know $x_1-x_2$ and $g$, but not $t_1$ or $t_2$. But you do know $t_1-t_2$. Thus, you have two equations in two unknowns and should be able to solve for $t_1$, back-substitute into the equation of motion, and find $x_1$.
Last edited: Sep 29, 2005
7. Sep 30, 2005
### hotvette
Were you able to solve it? I got $x_1$ equal to a little over 2m using the above methodology.
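As a quick numerical check of this method (a sketch assuming g = 9.8 m/s² and the standard constant-acceleration equations):

```python
g = 9.8        # m/s^2
window = 2.2   # m, height of the window
dt = 0.28      # s, time to pass the window

# Across the window: window = v_top*dt + 0.5*g*dt^2,
# where v_top is the speed at the top of the window.
v_top = window / dt - 0.5 * g * dt

# Falling from rest, the stone reaches v_top after dropping h = v_top^2 / (2g).
h = v_top**2 / (2 * g)

print(f"speed at top of window: {v_top:.3f} m/s")  # about 6.485 m/s
print(f"height above window:    {h:.3f} m")        # about 2.146 m
```

which is indeed "a little over 2 m".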
|
{}
|
# When can we write the square of a matrix as the product of the matrix and its transpose?
I often see something like $(A - B)^2$ being written as $(A - B)(A - B)^T$. Here $A$ and $B$ are two matrices. I can see that this is possible when $A$ and $B$ are scalars (i.e. single-element matrices). In what other cases does this equality hold?
-
Say $A-B$ is symmetric. And maybe your observation contains misunderstanding. $(A-B)^2$ and $(A-B)(A-B)^T$ has totally different meanings. It's not about writing style. – Shuchang Mar 1 '14 at 2:59
Makes Sense. Thanks Shuchang. Yes, I understand that it has nothing to do with writing style. I just wanted to know when we could jump between the two. I see this being done a lot in textbooks having derivations. – Bob Mar 1 '14 at 3:04
It's very rarely correct to say $(A-B)^2=(A-B)(A-B)^T$. Essentially, it is only true if $A-B$ is symmetric. Saying you "often" see something makes it very hard to figure out whether the argument is wrong, or the context allows this due to symmetry - please be specific with examples. – Thomas Andrews Mar 1 '14 at 3:13
There is a case like this, where $A,B$ are row vectors, then $$\left|A-B\right|^2 = (A-B)(A-B)^T$$ – Thomas Andrews Mar 1 '14 at 3:15
@Bob It might not be. But you are referring to derivatives. It's highly possible the textbooks read $||A-B||^2$, in which case they are equivalent. – Shuchang Mar 1 '14 at 3:33
It seems to me there are two related questions operative here; the first is the question as posed in the title, which I take to apply to any square matrix $C$: when can we write $C^2 = CC^T$? that is, what special properties must $C$ possess which would imply, or be implied by, $C^2 = CC^T$? The second question, which is really only a minor variation of the first, asks what happens when $C$ is represented in the special form $C = A - B$, so that we have $(A - B)(A - B) = (A - B)(A^T - B^T)$. But the equivalence of these two views is easily seen: since $A$ and $B$ are arbitrary, any $C$ may be written in the form $A - B$ by simply taking $A = C$, $B = 0$; furthermore, by fixing $C$ and taking $A = B + C$, and letting $B$ range over the set of all square matrices, we see that all pairs of matrices $A$, $B$ with $C = A - B$ may be so obtained. And since nothing further has been specified concerning $A$ and $B$, e.g. there is no special relationship between them which might give $C = A - B$ some additional structure, we might as well enjoy at least the notational convenience of working with $C$. Results can always be translated back to the $A - B$ form if so desired.
So when does $C^2 = CC^T$ hold? Well, as we used to say out west at old Caltech, it is "trivially obvious to even the most casual observer" that $C$ symmetric, that is, $C =C^T$, implies $C^2 = CC^T$. But wait! There's more! Suppose we write the equation $C^2 = CC^T$ in the form
$C(C - C^T) = 0; \tag{1}$
from (1) we see that in the event that $C$ is invertible, i.e. $\exists C^{-1}$, then upon left multiplication by $C^{-1}$ we obtain
$C - C^T = 0 \Rightarrow C = C^T \tag{2}$
i.e. $C$ must be symmetric in this case. If, on the other hand, we assume $C - C^T$ is invertible, we immediately run into trouble: if $\exists (C - C^T)^{-1}$, then right multiplication of (1) by $(C - C^T)^{-1}$ yields
$C = 0; \tag{3}$
but (3) forces
$C - C^T = 0 - 0 = 0, \tag{4}$
and this contradiction shows that $C - C^T$ cannot have an inverse if
$C^2 = CC^T. \tag{5}$
Closer scrutiny of the matrix $C - C^T$, the right-hand factor in (1), reveals that it is in fact skew-symmetric, that is
$(C - C^T)^T = C^T - (C^T)^T = C^T - C = -(C - C^T); \tag{6}$
indeed, $C$, like any other matrix, may be decomposed into unique symmetric and skew-symmetric parts, $C_+ = C_+^T$ and $C_- = -C_-^T$:
$C = C_+ + C_-, \tag{7}$
and if we take the transpose of (7) we see that
$C^T = C_+^T + C_-^T = C_+ - C_-; \tag{8}$
(7) and (8) may be solved for $C_+$ and $C_-$ by adding and subtracting them, yielding
$C_+ = \dfrac{1}{2}(C + C^T); \; C_- = \dfrac{1}{2}(C - C^T), \tag{9}$
and if the equations (9) are inserted into (1) we see that
$2(C_+ + C_-)C_- = 0 \Leftrightarrow (C_+ + C_-)C_- = 0; \tag{10}$
from (10) we have
$(C_+ + C_-)C_- = C_+C_- + C_-^2 = 0, \tag{11}$
or
$C_+C_- = -C_-^2. \tag{12}$
We next observe that, though $C_-$ is skew-symmetric, $C_-^2$ is in fact symmetric:
$(C_-^2)^T = (C_-C_-)^T = C_-^TC_-^T = (-C_-)(-C_-) = C_-^2, \tag{13}$
and so $-C_-^2$ is also symmetric; by transposing (12) we see that
$-C_-C_+ = C_-^TC_+^T = (C_+C_-)^T = (-C_-^2)^T = -C_-^2 = C_+C_-. \tag{14}$
(12)-(14) show that, for (5) to hold, the symmetric and skew-symmetric parts of $C$ must satisfy
$C_+C_- = -C_-^2 = -C_-C_+. \tag{15}$
Having sifted through various implications of (5) we have finally found one, (12), which is both necessary and sufficient. The necessity of (12), given (5), has just been shown. That (5) follows from (12) is also easy to see; just take a walk backwards from (12) to (5); the essential steps are all logically bidirectional.
(15) in fact shows that $C_+$ and $C_-$ must in fact anticommute, i.e.
$C_+C_- + C_-C_+ = 0; \tag{16}$
we also see that $C_+C_- + C_-C_+$ is skew-symmetric, whether it happens to vanish or not:
$(C_+C_- + C_-C_+)^T = -C_-C_+ - C_+C_- = -(C_+C_- + C_-C_+); \tag{17}$
it is in fact in general the skew-symmetric part of $C^2$, as may be seen from
$C^2 = (C_+ + C_-)^2 = (C_+^2 + C_-^2) + (C_+C_- + C_-C_+); \tag{18}$
Note that $C_+^2 + C_-^2$ is symmetric, as we have seen; just as we have seen that $C_+C_- + C_-C_+$ is skew; the decomposition into symmetric and skew-symmetric parts being unique, the formula (18) expresses the only such decomposition of $C^2$; in the specific case at hand, $C^2 = CC^T$, we see that this reduces to
$C^2 = (C_+ + C_-)^2 = C_+^2 + C_-^2 \tag{19}$
by virtue of (16).
Well, this shows the kinds of structural properties $C$ must have for $C^2 = CC^T$ to apply. I think that by digging further along these veins more could be unearthed, but that's as far as I'm taking it for the moment.
Hope this helps. Cheerio,
and as always,
Fiat Lux!!!
-
In the answer of Robert, his condition (12) $C_+C_-=-{C_-}^2$ is exactly the same as the hypothesis $C^2=CC^T$. More seriously, we prove the following:
Proposition: Let $C\in\mathcal{M}_n(\mathbb{R})$ s.t. $CC^T=C^2$. Then $C$ is symmetric.
Proof: $C^2$ is a symmetric real matrix s.t. $C^2\geq 0$. We may assume $C^2=diag(0_k,U)$ where $U>0$. $C$ and $C^2$ commute; then $C=diag(V,W)$ where $V,W$ are real matrices s.t. $VV^T=V^2=0_k$ and $WW^T=W^2=U$. Clearly $W$ is invertible and consequently symmetric. Now $VV^T=0_k$ implies that the singular values of $V$ are zero. Thus $V=0_k$ and we are done.
Remark: The problem is more difficult over $\mathcal{M}_n(\mathbb{C})$ except if we replace $C^T$ with $C^*$.
-
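A quick numeric illustration of the proposition (a sketch with plain 2×2 matrices, not tied to either answer's notation): for a symmetric C the identity C² = CCᵀ holds, while a simple nilpotent counterexample shows how it fails for a non-symmetric C.

```python
def matmul(A, B):
    # Multiply two 2x2 matrices given as nested lists.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

# Symmetric C: C^2 == C C^T, as expected.
C = [[2, 1], [1, 3]]
assert matmul(C, C) == matmul(C, transpose(C))

# Non-symmetric (nilpotent) C: the identity fails.
C = [[0, 1], [0, 0]]
print(matmul(C, C))             # [[0, 0], [0, 0]]
print(matmul(C, transpose(C)))  # [[1, 0], [0, 0]]
```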
|
{}
|
# I-70
I-70 is shared under a CC BY-NC-SA license and was authored, remixed, and/or curated by Paul D'Alessandris.
|
{}
|
# Fractal Simulation
## Koch Curve
Koch curve is a kind of fractal curve. It appeared in a 1904 paper titled “On a continuous curve without tangents, constructible from elementary geometry” by the Swedish mathematician…
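One way to sketch the construction in Python (my own illustration, not the simulation's code): each segment is replaced by four segments one third as long, so depth d produces 4^d segments.

```python
import math

def koch_segments(p, q, depth):
    # Replace segment (p, q) by four segments, recursively.
    if depth == 0:
        return [(p, q)]
    (x0, y0), (x1, y1) = p, q
    dx, dy = (x1 - x0) / 3, (y1 - y0) / 3
    a = (x0 + dx, y0 + dy)            # one-third point
    b = (x0 + 2 * dx, y0 + 2 * dy)    # two-thirds point
    s60 = math.sin(math.pi / 3)       # rotate the middle third by +60 degrees
    c = (a[0] + 0.5 * dx - s60 * dy, a[1] + 0.5 * dy + s60 * dx)
    points = [p, a, c, b, q]
    segments = []
    for start, end in zip(points, points[1:]):
        segments += koch_segments(start, end, depth - 1)
    return segments

print(len(koch_segments((0.0, 0.0), (1.0, 0.0), 3)))  # 4**3 = 64
```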
## Pythagoras Tree
The Pythagoras tree is a plane fractal constructed from squares. Invented by the Dutch mathematics teacher Albert E. Bosman in 1942, it is named after the ancient Greek mathematician Pythagoras.
## Pascal’s Triangle
Pascal’s triangle is a triangular array of the binomial coefficients. Each number is the sum of the two numbers directly above it. Pascal’s triangle has many properties and contains many patterns of…
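The "sum of the two numbers directly above" rule translates directly into code (a minimal sketch):

```python
def pascal_row(n):
    # Row n of Pascal's triangle (row 0 is [1]); each new entry is the
    # sum of the two entries above it.
    row = [1]
    for _ in range(n):
        row = [1] + [a + b for a, b in zip(row, row[1:])] + [1]
    return row

print(pascal_row(5))  # [1, 5, 10, 10, 5, 1]
```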
## Sierpinski Triangle
Sierpinski triangle is a fractal and attractive fixed set with the overall shape of an equilateral triangle. In this simulation, create a Sierpinski triangle by endlessly drawing circles.
## C Curve
The C curve is a kind of fractal curve. Bend each side 90 degrees. Bend this side again 90 degrees. Repeating this operation infinitely yields a C curve.
## Dragon Curve
A dragon curve is a piece of paper that has been folded several times in the same direction as the picture, and then bent vertically. This curve does not intersect even…
## Hilbert Curve
A Hilbert curve is a continuous fractal space-filling curve first described by the German mathematician David Hilbert in 1891.
## Sierpinski Curve
Sierpiński curves are a recursively defined sequence of continuous closed plane fractal curves discovered by Wacław Sierpiński.
## Mandelbrot Set
The Mandelbrot set is a famous example of a fractal in mathematics. The Mandelbrot set is important for the chaos theory. Images of the Mandelbrot…
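The standard membership test iterates z ← z² + c and checks whether |z| stays bounded; a minimal escape-time sketch (my own illustration):

```python
def in_mandelbrot(c, max_iter=100):
    # Escape-time test: c is (probably) in the set if |z| never exceeds 2.
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(0j))      # True  (0 is in the set)
print(in_mandelbrot(1 + 0j))  # False (0, 1, 2, 5, 26, ... escapes)
```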
|
{}
|
# Autoregressive model
#### Fernando Revilla
##### Well-known member
MHB Math Helper
I quote an unsolved problem from another forum (Algebra) posted on January 16th, 2013.
Got the following problem.
In a country you can live in three different cities, A, B and C, and the population is constant.
Each year;
70% of the residents in city A stay, 20% move to city B and 10% move to city C
90% of the residents in city B stay, 5% move to city A and 5% move to city C
50% of the residents in city C stay, 45% move to city A and 5% move to city B
I am suppose to explain this as an autoregressive process.
Through some data mining I found that the process is an AR(3) process, with coefficients
2.1 -1.3725 0.2725
My question is, is it possible to solve this analytically, without Least squares trial and error?
I provide an algebraic approach to predict the behaviour in the future.
Denote $$P_{n}=(a_n,b_n,c_n)^t$$, where $$a_n,b_n,c_n$$ are the populations of $$A,B,C$$ respectively in the year $$n$$. According to the hypothesis:
$$a_n=0.7a_{n-1}+0.05b_{n-1}+0.45c_{n-1}\\b_n=0.2a_{n-1}+0.9b_{n-1}+0.05c_{n-1}\\c_n=0.1a_{n-1}+0.05b_{n-1}+0.5c_{n-1}$$
Equivalently
$$P_n=\begin{bmatrix}{0.7}&{0.05}&{0.45}\\{0.2}&{0.9}&{0.05}\\{0.1}&{0.05}&{0.5}\end{bmatrix}\;P_{n-1}=\dfrac{1}{20}\begin{bmatrix}{70}&{5}&{45}\\{20}&{90}&{5}\\{10}&{5}&{50}\end{bmatrix}\;P_{n-1} =MP_{n-1}$$
Then, $$P_n=MP_{n-1}=M^2P_{n-2}=\ldots=M^nP_0$$
As $$M$$ is a Markov matrix, it has the eigenvalue $$\lambda_1=1$$ and we can easily find the rest: $$\lambda_2=(11+2\sqrt{3})/20$$ and $$\lambda_3=(11-2\sqrt{3})/20$$. These eigenvalues are all simple, so $$M$$ is diagonalizable in $$\mathbb{R}$$. If $$Q\in\mathbb{R}^{3\times 3}$$ satisfies $$Q^{-1}MQ=D=\mbox{diag }(\lambda_1,\lambda_2,\lambda_3)$$, then $$P_n=QD^nQ^{-1}P_0$$. Taking limits on both sides and considering that $$|\lambda_2|<1$$ and $$|\lambda_3|<1$$:
$$P_{\infty}:=\displaystyle\lim_{n \to \infty} P_n=Q\;(\displaystyle\lim_{n \to \infty}D^n)\;Q^{-1}P_0=Q\;\begin{bmatrix}{1}&{0}&{0}\\{0}&{0}&{0}\\{0}&{0}&{0}\end{bmatrix}\;Q^{-1}P_0$$
Note that for computing $$P_{\infty}$$ we only need the first column (an eigenvector $$v_1$$ associated with $$\lambda_1$$) of $$Q$$ and the first row $$w_{1}$$ of $$Q^{-1}$$. We get $$v_1=(19,42,8)^t$$ and $$w_{1}=(1/69)(1,1,1)$$. So,
$P_{\infty}=\begin{bmatrix}a_{\infty} \\ b_{\infty}\\ c_{\infty}\end{bmatrix}=$ $\dfrac{1}{69}\begin{bmatrix}{19}&{*}&{*} \\ {42}&{*}&{*} \\ {8}&{*}&{*}\end{bmatrix}\;\begin{bmatrix}{1}&{0}&{0} \\ {0}&{0}&{0} \\ {0}&{0}&{0}\end{bmatrix}\;$ $\begin{bmatrix}{1}&{1}&{1}\\{*}&{*}&{*}\\{*}&{*}&{*}\end{bmatrix}\begin{bmatrix}a_{0}\\b_{0}\\c_{0}\end{bmatrix}=$ $\dfrac{1}{69}\begin{bmatrix}{19(a_0+b_0+c_0)}\\{42(a_0+b_0+c_0)}\\{8(a_0+b_0+c_0)}\end{bmatrix}$
which represents the tendency of the populations of $$A,B$$ and $$C$$ as $$n\to \infty$$.
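A quick numerical check of this limit (a sketch in plain Python, using the transition matrix from the post): iterating P_n = M P_{n-1} from an arbitrary start drives the population shares to 19/69, 42/69 and 8/69.

```python
# Column-stochastic transition matrix M from the post.
M = [[0.70, 0.05, 0.45],
     [0.20, 0.90, 0.05],
     [0.10, 0.05, 0.50]]

P = [100.0, 100.0, 100.0]  # arbitrary initial populations (total 300)
for _ in range(200):       # iterate P_n = M P_{n-1}
    P = [sum(M[i][j] * P[j] for j in range(3)) for i in range(3)]

total = sum(P)             # the total population is conserved
shares = [p / total for p in P]
print([round(s, 4) for s in shares])  # approx [19/69, 42/69, 8/69]
```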
#### Klaas van Aarsen
##### MHB Seeker
Staff member
Hmm, since it is given that the population remains constant, doesn't it suffice that:
$P_1 = P_0$
That is,
$MP_0 = P_0$
So the population ratios correspond to the eigenvector belonging to eigenvalue 1?
#### Fernando Revilla
##### Well-known member
MHB Math Helper
Hmm, since it is given that the population remains constant, doesn't it suffice that: $P_1 = P_0$
Remains constant means $a_n+b_n+c_n=a_{n-1}+b_{n-1}+c_{n-1}$ for all $n\geq 1$, which is different from $(a_n,b_n,c_n)=(a_{n-1},b_{n-1},c_{n-1})$.
|
{}
|