Importance Sampling in Reinforcement Learning
Introduction
In Monte Carlo off-policy learning, we use one policy to generate data, and then use that data to optimize another policy. How can this fantasy be realized? Is it even possible?
In this post, I will show you how to realize it with minimal math and a little Python knowledge.
How to understand importance sampling?
To optimize a policy from the data generated by another policy, we need to use importance sampling.
Problem definition
We have a target policy $\pi$ and a known soft behavior policy $b$ (like a Gaussian distribution). We want to collect data by following policy $b$, and use that data to perform GPI (generalized policy iteration) to optimize $\pi$.
Math proof
In mathematical terms, $\pi$ and $b$ induce distributions over episodes. Let $z$ denote an episode, let $f(z)$ be a function of it (e.g. its return), and let $p(z)$ be the episode distribution induced by $\pi$. Then: $\mathbb{E}[f]=\int f(z) p(z) d z \tag{1}$ Throughout, we write $p(z)$ for the distribution induced by $\pi$ and $q(z)$ for the one induced by $b$, and define the importance weight $w(z) = \frac{p(z)}{q(z)}$.
General importance sampling
When $q(z)>0$ whenever $p(z)>0$, for all $z\in Z$,
\begin{aligned} \mathbb{E}[f] &=\int f(z) p(z) d z \\ &=\int f(z) \frac{p(z)}{q(z)} q(z) d z \\ & \approx \frac{1}{N} \sum_{n} \frac{p\left(z^{n}\right)}{q\left(z^{n}\right)} f\left(z^{n}\right), z^{n} \sim q(z) \end{aligned} where $w^{n}=p\left(z^{n}\right) / q\left(z^{n}\right)$ are the importance weights. We can use this formula to estimate the expected value. However, this estimator can have high variance, which makes it unstable. So we turn to another method.
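As a quick sanity check outside RL, here is a minimal sketch of general importance sampling in Python; the distributions $p$, $q$ and the function $f$ are made-up stand-ins chosen so the true answer is known:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target p = N(0, 1) and proposal q = N(0, 2^2); f(z) = z^2, so E_p[f] = 1.
p = lambda z: np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)
q = lambda z: np.exp(-z**2 / 8) / np.sqrt(8 * np.pi)
f = lambda z: z**2

z = rng.normal(0, 2, size=200_000)   # z^n ~ q(z)
w = p(z) / q(z)                      # importance weights w^n = p(z^n)/q(z^n)
estimate = np.mean(w * f(z))         # (1/N) sum_n w^n f(z^n)
print(estimate)                      # close to the true value 1
```

With a proposal far from the target the weights $w^n$ become extremely uneven and the estimate gets noisy, which is exactly the variance problem that motivates the weighted variant.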
Weighted importance sampling
\begin{aligned} (1) & =\int f(z) \frac{p(z)}{q(z)} q(z) d z \\ &=\frac{\mathcal{Z}_{q}}{\mathcal{Z}_{p}} \int f(z) \frac{\tilde{p}(z)}{\tilde{q}(z)} q(z) d z \\ & \approx \frac{\mathcal{Z}_{q}}{\mathcal{Z}_{p}} \frac{1}{N} \sum_{n} \frac{\tilde{p}\left(z^{n}\right)}{\tilde{q}\left(z^{n}\right)} f\left(z^{n}\right) \\ &=\frac{\mathcal{Z}_{q}}{\mathcal{Z}_{p}} \frac{1}{N} \sum_{n} w^{n} f\left(z^{n}\right) \tag{2} \end{aligned} Here, $p(z)=\frac{\tilde{p}(z)}{\mathcal{Z}_{p}}$, where $\tilde{p}(z)$ is the unnormalized version of $p(z)$ and $\mathcal{Z}_{p}=\int \tilde{p}(z) d z$ is its normalizing constant (similarly, $q(z)=\tilde{q}(z)/\mathcal{Z}_{q}$). Note that from here on $w^{n}=\tilde{p}\left(z^{n}\right) / \tilde{q}\left(z^{n}\right)$ uses the unnormalized densities.
Here, what does the constant $\frac{\mathcal{Z}_{q}}{\mathcal{Z}_{p}}$ mean? Its reciprocal can be estimated from the same samples: \begin{aligned} \frac{\mathcal{Z}_{p}}{\mathcal{Z}_{q}} &=\frac{1}{\mathcal{Z}_{q}} \int \tilde{p}(z) d z \\ &=\int \frac{\tilde{p}(z)}{\tilde{q}(z)} q(z) d z \\ &\approx \frac{1}{N} \sum_{n} \frac{\tilde{p}\left(z^{n}\right)}{\tilde{q}\left(z^{n}\right)} \\ &=\frac{1}{N} \sum_{n} w^{n} \tag{3} \end{aligned}
We also note that when $p$ and $q$ are both normalized, $\mathbb{E}_{q}[w] = 1$, since \begin{aligned} E_{q}[w] &=\int_{D} w(z) q(z) d z \\ &=\int_{D} \frac{p(z)}{q(z)} q(z) d z \\ &=\int_{D} p(z) d z=1 \end{aligned}
From $(2),(3)$, we can know that $\mathbb{E}[f] \approx \sum_{n=1}^{N} \frac{w^{n}}{\sum_{m=1}^{N} w^{m}} f\left(z^{n}\right), \quad z^{n} \sim q(z) \tag{4}$
which is also called weighted importance sampling.
Connections to reinforcement learning
For probability of the rest of the episode, after $S_{t}$, under policy $\pi$, we have $\begin{array}{l}{\operatorname{Pr}\left\{A_{t}, S_{t+1}, A_{t+1}, \ldots, S_{T} | S_{t}, A_{t : T-1} \sim \pi\right\}} \\ {\quad=\pi\left(A_{t} | S_{t}\right) p\left(S_{t+1} | S_{t}, A_{t}\right) \pi\left(A_{t+1} | S_{t+1}\right) \cdots p\left(S_{T} | S_{T-1}, A_{T-1}\right)} \\ {\quad=\prod_{k=t}^{T-1} \pi\left(A_{k} | S_{k}\right) p\left(S_{k+1} | S_{k}, A_{k}\right)}\end{array}$
Thus, the relative probability of the episode under the target and behavior policies (the importance-sampling ratio) is $\rho_{t : T-1} \doteq \frac{\prod_{k=t}^{T-1} \pi\left(A_{k} | S_{k}\right) p\left(S_{k+1} | S_{k}, A_{k}\right)}{\prod_{k=t}^{T-1} b\left(A_{k} | S_{k}\right) p\left(S_{k+1} | S_{k}, A_{k}\right)}=\prod_{k=t}^{T-1} \frac{\pi\left(A_{k} | S_{k}\right)}{b\left(A_{k} | S_{k}\right)}$
Although the trajectory probabilities depend on the MDP’s transition probabilities, which are generally unknown, they appear identically in both the numerator and denominator, and thus cancel. The importance sampling ratio ends up depending only on the two policies and the sequence, not on the MDP!
What a beautiful ratio! It means we only need to multiply the sampled return $G_{t}$ by this ratio to get an expected value as if the episode had been sampled from our target policy $\pi$. That is, $\mathbb{E}\left[\rho_{t : T-1} G_{t} | S_{t}=s\right]=v_{\pi}(s)$
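In code, the ratio is just a running product of per-step probability ratios. A sketch, where the policy probabilities and the return are hypothetical numbers:

```python
# pi(A_k|S_k) and b(A_k|S_k) along one episode (hypothetical values).
pi_probs = [0.9, 0.8, 0.7]
b_probs = [0.5, 0.5, 0.5]

rho = 1.0
for pi_a, b_a in zip(pi_probs, b_probs):
    rho *= pi_a / b_a          # rho_{t:T-1} = prod_k pi(A_k|S_k) / b(A_k|S_k)

G = 2.0                        # return observed while following b (hypothetical)
print(rho, rho * G)            # rho * G is an unbiased sample of v_pi(s)
```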
Then we can simply scale the returns by the ratios and average the results: $V(s) \doteq \frac{\sum_{t \in \mathcal{T}(s)} \rho_{t : T(t)-1} G_{t}}{|\mathcal{T}(s)|}$
Here $\mathcal{T}(s)$ denotes the set of all time steps at which state $s$ is visited, and $T(t)$ is the termination time of the episode containing step $t$.
An important alternative is weighted importance sampling (from $(4)$), which uses a weighted average, defined as $V(s) \doteq \frac{\sum_{t \in \mathcal{T}(s)} \rho_{t : T(t)-1} G_{t}}{\sum_{t \in \mathcal{T}(s)} \rho_{t : T(t)-1}}$
If we define $W_{i}=\rho_{t_{i} : T\left(t_{i}\right)-1}$, then $V_{n} \doteq \frac{\sum_{k=1}^{n-1} W_{k} G_{k}}{\sum_{k=1}^{n-1} W_{k}} \tag{5}$ We can use $(5)$ to update $V$!
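Update (5) can be maintained incrementally with a cumulative weight, as in Sutton and Barto's off-policy MC prediction. A sketch on hypothetical $(W_k, G_k)$ pairs:

```python
# Hypothetical per-episode importance weights W_k and returns G_k for one state s.
episodes = [(1.5, 10.0), (0.2, 4.0), (2.0, 8.0)]

V, C = 0.0, 0.0                 # running estimate and cumulative weight
for W, G in episodes:
    C += W
    V += (W / C) * (G - V)      # incremental form of eq. (5)

# Matches the batch weighted average sum(W_k G_k) / sum(W_k).
batch = sum(W * G for W, G in episodes) / sum(W for W, _ in episodes)
print(V, batch)
```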
Intuition
From the above, we can see that the importance-sampling ratio makes the data generated by policy $b$ usable for evaluating policy $\pi$. But how can we understand it intuitively?
In fact, $W_{i}$ captures how differently $\pi$ and $b$ weight the same trajectory and its reward. For example, suppose yyxsb and a robot are playing a game. yyxsb plays badly, but he collects a lot of data for the robot to learn from. The returns $G$ from episodes generated by yyxsb follow a different distribution than those under the robot's policy, and $W_{i}$ is the bridge between them. That's it.
Talk is cheap, show me the code
I will show through Python code that importance sampling really works.
Let $P(x)=3 e^{-\frac{x^{2}}{2}}+e^{-\frac{(x-4)^{2}}{2}}$ be the (unnormalized) distribution we want to sample from; its normalizing constant is $Z \approx 10.0261955464$. Let the functions we want to approximate be $f(x) = x$ and $g(x) = \sin(x)$; the reported "true" values below are the unnormalized integrals $\int f(x) P(x) d x$. The code draws samples from a uniform proposal on $[-4, 8]$, self-normalizes the importance weights as in $(4)$, and scales the result by $Z$.
import numpy as np

# Unnormalized target distribution and its normalizing constant.
P = lambda x: 3 * np.exp(-x*x/2) + np.exp(-(x - 4)**2/2)
Z = 10.0261955464

f_x = lambda x: x
g_x = lambda x: np.sin(x)

true_expected_fx = 10.02686647165
true_expected_gx = -1.15088010640

# Uniform proposal q(x) on [a, b].
a, b = -4, 8
uniform_prob = 1. / (b - a)

expected_f_x = 0.
expected_g_x = 0.
n_samples = 100000
den = 0.
for i in range(n_samples):
    sample = np.random.uniform(a, b)
    importance = P(sample) / uniform_prob   # unnormalized weight P(x) / q(x)
    den += importance
    expected_f_x += importance * f_x(sample)
    expected_g_x += importance * g_x(sample)

# Self-normalize by the weight sum, then scale by Z.
expected_f_x /= den
expected_g_x /= den
expected_f_x *= Z
expected_g_x *= Z

print('E[f(x)] = %.5f, Error = %.5f' % (expected_f_x, abs(expected_f_x - true_expected_fx)))
print('E[g(x)] = %.5f, Error = %.5f' % (expected_g_x, abs(expected_g_x - true_expected_gx)))
Result:
E[f(x)] = 10.02532, Error = 0.00155 and E[g(x)] = -1.17328, Error = 0.02240
The sampling works!
Summary
In this post, I’ve talked about why and how importance sampling works. I hope you can understand it now.
### Home > APCALC > Chapter 12 > Lesson 12.3.2 > Problem 12-106
12-106.
Consider the infinite series below. For each series, decide if it converges conditionally, converges absolutely, or diverges and justify your conclusion. State the tests you used.
1. $\displaystyle \sum _ { n = 1 } ^ { \infty } \frac { 1 + \operatorname { cos } ( n ) } { n ^ { 2 } }$
Compare this series to $\displaystyle \sum_{n=1}^\infty \frac{2}{n^2}.$
2. $\displaystyle\sum _ { j = 1 } ^ { \infty } \frac { j ! } { ( 2 j + 1 ) ! }$
$\lim\limits_{j\to\infty}\Bigg|\frac{\frac{(j+1)!}{(2j+3)!}}{\frac{j!}{(2j+1)!}}\Bigg|$
3. $\displaystyle\sum _ { n = 1 } ^ { \infty } n e ^ { - n ^ { 2 } }$
$\lim\limits_{n\to\infty}\Bigg|\frac{\frac{n+1}{e^{(n+1)^2}}}{\frac{n}{e^{n^2}}}\Bigg|$
4. $\displaystyle\sum _ { k = 1 } ^ { \infty } \frac { 2 - k } { k \cdot 2 ^ { k } }$
$\lim\limits_{k\to\infty}\Bigg|\frac{\frac{1-k}{(k+1)\cdot 2^{k+1}}}{\frac{2-k}{k\cdot 2^k}}\Bigg|$
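The three ratio-test limits can be checked symbolically (a sketch, assuming `sympy` is available; each ratio is simplified by hand before taking the limit):

```python
import sympy as sp

j, n, k = sp.symbols('j n k', positive=True)

# (b): |a_{j+1}/a_j| = [(j+1)!/(2j+3)!] / [j!/(2j+1)!] = (j+1)/((2j+2)(2j+3))
r_b = sp.limit((j + 1) / ((2*j + 2) * (2*j + 3)), j, sp.oo)

# (c): |a_{n+1}/a_n| = ((n+1)/n) * e^{n^2 - (n+1)^2} = ((n+1)/n) * e^{-(2n+1)}
r_c = sp.limit((n + 1) / n * sp.exp(-(2*n + 1)), n, sp.oo)

# (d): a_{k+1}/a_k = (1-k)k / (2(k+1)(2-k)), which is eventually positive
r_d = sp.limit((1 - k) * k / (2 * (k + 1) * (2 - k)), k, sp.oo)

print(r_b, r_c, r_d)   # 0, 0, 1/2 -- each limit is < 1, so (b)-(d) converge absolutely
```

Part (a) uses the comparison test instead, since $0 \le 1+\cos(n) \le 2$.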
Question: Evaluate $\int_{0}^{2 \pi} \frac{d \theta}{3 + 2\cos \theta}$
Subject: Applied Mathematics 4
Topic: Complex Integration
Difficulty: Medium
Put $z = e^{i \theta} \\ \cos\theta = \frac{z^2 + 1}{2z} \\ d\theta = \frac{dz}{iz}$
The equation can be written as,
$\int_c \frac{1}{3 + 2\left(\frac{z^2+1}{2z}\right)} \frac{dz}{iz} \\ = \int_c \frac{z}{z^2+3z+1} \frac{dz}{iz} \\ = \int_c \frac{dz}{i(z^2+3z+1)} \\ = \frac{1}{i}\int_c \frac{dz}{z^2+3z+1}$
Let, $f(z) = \frac{1}{z^2+3z+1}$
For poles $z^2+3z+1 = 0$
$z = \frac{-3 \pm \sqrt{9-4}}{2} \\ = \frac{-3 \pm \sqrt{5}}{2} \\ \therefore z = \frac{-3 + \sqrt{5}}{2} \hspace{0.20cm} \& \hspace{0.20cm} \frac{-3 - \sqrt{5}}{2}$
Let, $\alpha = \frac{-3 + \sqrt{5}}{2} \\ \beta = \frac{-3 - \sqrt{5}}{2}$
$z = \alpha$ is the only pole which lies inside the circle
Residue of f(z) at $z = \alpha$ is
$\lim_{z \to \alpha} (z-\alpha)[\frac{1}{(z-\alpha)(z-\beta)}] \\ = \frac{1}{\alpha - \beta} \\ \alpha - \beta = (\frac{-3 + \sqrt{5}}{2}) + (\frac{3 + \sqrt{5}}{2}) = \sqrt{5}$
$\therefore \int_c \frac{1}{z^2+3z+1} dz = 2 \pi i [\frac{1}{\sqrt{5}}] = \frac{2 \pi i}{\sqrt{5}} \\ \therefore \int_c \frac{1}{i}\frac{1}{z^2+3z+1} dz = \frac{1}{i}[\frac{2 \pi i}{\sqrt{5}}] = \frac{2 \pi}{\sqrt{5}}$
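A quick numerical check of the result (a sketch using a midpoint Riemann sum, which converges very fast for smooth periodic integrands):

```python
import math

# Midpoint rule for the original real integral over [0, 2*pi].
N = 200_000
total = 0.0
for i in range(N):
    theta = (i + 0.5) * 2 * math.pi / N
    total += 1.0 / (3 + 2 * math.cos(theta))
numeric = total * 2 * math.pi / N

exact = 2 * math.pi / math.sqrt(5)   # the residue-theorem answer above
print(numeric, exact)                # both approximately 2.8099
```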
# Introduction: Francesc Muñoz
in Chat
Hi all, thanks for accepting me in this course. I’ve been reading some of the introductions and I’m really impressed. I don’t have any ‘official’ background, only an interest in learning. In fact, I'm not even really fluent in English. Well, maybe that’s not too bad for a great initiative like this: if a simple mortal like me is able to understand something here, that would mean that the initiative is extremely well designed. My original interest in the topic is related to functional programming languages. But as I'm reading here and there, I'm realising that category theory is much deeper and broader. And interesting on its own. I know bits about logic, graph theory, Petri net theory, and some knowledge representation. I promise not to bother you with too many stupid questions. cesc
Comment Source: Welcome, Francesc! Please bother us with "stupid" questions - those are the questions where people learn the most, at least if they ponder the answers. It sounds like you know about many topics in math that category theory touches on in interesting ways. For example, right now I'm trying to finish off a paper with my student Jade Master, about a category where the morphisms are "open" Petri nets - that is, Petri nets with some places designated as inputs, and some designated as outputs.
# How do you find the vertical, horizontal or slant asymptotes for f(x) = 3(1/x)?
Mar 5, 2016
Asymptotes: y = 0 and x = 0.
#### Explanation:
$xy = c^2$ represents a rectangular hyperbola.
Its axis is y = x, and the asymptotes are the coordinate axes.
The two branches are in the 1st and 3rd quadrants.
Here, $c = \sqrt{3}$.
Vertices are $(\sqrt{3}, \sqrt{3})$ and $(-\sqrt{3}, -\sqrt{3})$.
The eccentricity of a rectangular hyperbola is $\sqrt{2}$.
The foci are $(\sqrt{6}, \sqrt{6})$ and $(-\sqrt{6}, -\sqrt{6})$.
# How to design a control bit to turn unsigned binary into two's complement?
I am designing a 4-bit comparator using only basic logic gates (and, or, nand, xor, etc.) In its current form, the comparator takes two 4-bit unsigned binary numbers (A and B) and compares each bit value from most to least significant and determines an output: A > B, A < B, or A = B. Pretty simple.
However, I am trying to add a control bit to signify whether the input is unsigned binary or two's complement. The default state, 0, should signify that the input is in unsigned binary, while 1 signifies that the input numbers are in two's complement.
I'm having difficulty understanding how or where in the circuit to implement the control bit. I understand that one method is to implement an adder, but I'm not sure if it would have to exist outside of the circuitry I have already designed, with the control bit switching between two circuits that exist almost independently of each other. I'm sure there has to be a more elegant, integrated solution. If you could just point me in the right direction, I would greatly appreciate it!
• don't write "and, or, nand, xor, etc". State explicitly what gates you mean – "basic" is a really relative word, and depending on what technology you design for, a 6bit lookup table might be the basic logic element. May 6, 2017 at 16:03
• This is obviously an assignment and you're not the first to post it, so you can just have a clue: see what your current circuit does when both numbers are positive and when both are negative, then consider how to deal with the other cases. May 6, 2017 at 16:05
• Is stack exchange not used for getting help with assignments? Genuinely curious, this is my first post here, and I don't know the rules/etiquette.
– Marg
May 6, 2017 at 21:59
## 1 Answer
You're correct in your intuition that there's a better solution than a multiplexor selecting between two entirely independent circuits.
Recall the meaning of a positional number system, for an unsigned input $a_3 a_2 a_1 a_0$ the value is $$8 a_3 + 4 a_2 + 2 a_1 + a_0$$
For two's complement, the only change is that the sign bit takes on a negative place value: $$-8 a_3 + 4 a_2 + 2 a_1 + a_0$$
Now, the condition for comparison of two signed two's complement numbers is
$$-8 a_3 + 4 a_2 + 2 a_1 + a_0 < -8 b_3 + 4 b_2 + 2 b_1 + b_0$$
Add the sign-bit terms to both sides, to get
$$8 b_3 + 4 a_2 + 2 a_1 + a_0 < 8 a_3 + 4 b_2 + 2 b_1 + b_0$$
which is the unsigned comparison between $b_3 a_2 a_1 a_0$ and $a_3 b_2 b_1 b_0$
That is, you can use your control input to swap the sign bit, then feed the normal comparison logic.
If a multiplexer isn't one of your fundamental gates, then this "swap" adds non-trivially to the complexity. So let's look at that inequality again:
$$-8 a_3 + 4 a_2 + 2 a_1 + a_0 < -8 b_3 + 4 b_2 + 2 b_1 + b_0$$
Add $8$ to both sides and group:
$$8 (1 - a_3) + 4 a_2 + 2 a_1 + a_0 < 8 (1 - b_3) + 4 b_2 + 2 b_1 + b_0$$
Note that $1-x$ is just the NOT operator.
$$8 \overline{a_3} + 4 a_2 + 2 a_1 + a_0 < 8 \overline{b_3} + 4 b_2 + 2 b_1 + b_0$$
and this is again the unsigned comparison logic applied to the two inputs $\overline{a_3} a_2 a_1 a_0$ and $\overline{b_3} b_2 b_1 b_0$
Now your control input only needs to select between $a_3$ and $\overline{a_3}$ (and likewise for $b_3$), and this is just the XOR function.
Finally, your dual-mode comparator is made by simply taking your working unsigned comparison circuit, and feeding its inputs with
$$(a_3 \oplus S) a_2 a_1 a_0$$ $$(b_3 \oplus S) b_2 b_1 b_0$$
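Before wiring it up, the whole construction is easy to verify exhaustively in software. A sketch (`value` and `comparator` are hypothetical helper names, not part of the question):

```python
# Exhaustive check: XOR-ing each MSB with the mode bit S turns an unsigned
# 4-bit "less than" comparator into a two's-complement one.

def value(bits, signed):
    a3, a2, a1, a0 = bits
    return (-8 if signed else 8) * a3 + 4 * a2 + 2 * a1 + a0

def comparator(a_bits, b_bits, S):
    # Feed the plain unsigned comparison with the (possibly inverted) sign bits.
    a = (a_bits[0] ^ S,) + a_bits[1:]
    b = (b_bits[0] ^ S,) + b_bits[1:]
    return value(a, signed=False) < value(b, signed=False)

for x in range(16):
    for y in range(16):
        a = tuple((x >> i) & 1 for i in (3, 2, 1, 0))
        b = tuple((y >> i) & 1 for i in (3, 2, 1, 0))
        assert comparator(a, b, 0) == (value(a, False) < value(b, False))
        assert comparator(a, b, 1) == (value(a, True) < value(b, True))
print("all 512 signed/unsigned cases match")
```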
# There are two examination rooms A and B. If 10 candidates are sent from A to B, the number of students in each room is the same. If 20 candidates are sent from B to A, the number of students in A is double the number of students in B. Find the number of students in each room.
Last updated date: 14th Mar 2023
Hint: Assume the number of students in each room to be a different variable, form equations from the two conditions, and solve for them.
Let, the number of students in room A and B are x and y respectively.
Then it is given if 10 candidates are sent from A to B, the number of students in each room is the same.
Thus $x - 10 = y + 10$
$\Rightarrow x - y = 20$…………………. (1)
Now it is also given that if 20 candidates are sent from B to A, the number of students in A is double the number of students in B
$x + 20 = 2\left( {y - 20} \right)$
$\Rightarrow x - 2y = - 60$………………………….. (2)
Now let’s solve equation (1) and equation (2)
$\Rightarrow x - y = 20$……………………… (1)
$\Rightarrow x - 2y = - 60$…………………. (2)
Now in equation (1) multiply by 2 on both side, we get
$\Rightarrow 2x - 2y = 40$………………. (3)
Subtract equation (3) and equation (2)
$2x - 2y - x + 2y = 40 + 60$
$\Rightarrow x = 100$
Now substitute the value of x in equation (1) we get
$100 - y = 20 \\ \Rightarrow y = 80 \\$
The number of students in room A is 100 and the number of students in room B is 80.
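The elimination above can be double-checked in a few lines of Python (a sketch mirroring the same steps):

```python
# (1): x - y = 20,  (2): x - 2y = -60
# Multiply (1) by 2 to get (3): 2x - 2y = 40, then subtract (2): x = 40 + 60.
x = 40 + 60            # from (3) - (2)
y = x - 20             # back-substitute into (1)
print(x, y)            # 100 80

# Check both original conditions.
assert x - 10 == y + 10          # rooms equal after sending 10 from A to B
assert x + 20 == 2 * (y - 20)    # A is double B after sending 20 from B to A
```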
Note- In such types of questions, just focus on how many numbers (as in this question there are given numbers of students) are shifted where, according to them, make equations and solve them to obtain the variables. This will give the correct answer.
# Delta-like symbol in LaTeX
I'd like to write a majuscule delta-like symbol in LaTeX but I can't find its syntax anywhere. You can see the symbol in equation (12) of the following paper:
"Two-Frame Motion Estimation Based on Polynomial Expansion".
-
## migrated from stackoverflow.com, May 23 '11 at 16:44
This question came from our site for professional and enthusiast programmers.
Note that the document uses Springer's LNCS style. In this style, all Greek letters are in italics, and vectors are denoted by boldface.
Most likely the bold italic Delta is produced in this particular case by something similar to this:
\documentclass{llncs}
\begin{document}
$\vec{\Delta}$
\end{document}
The result is:
Note that if you used the article class, the same code would produce a normal Delta with an arrow:
-
If one really wants a bold italic Delta, the way to go is
\usepackage{bm}
\newcommand{\bfitDelta}{\bm{\mathit{\Delta}}}
Of course, one could write every time \bm{\mathit{\Delta}}.
-
That is just $\Delta$, which is different from $\delta$. LaTeX symbols are case-sensitive. See any of the LaTeX cheat sheets, e.g. this one at U Colorado.
-
Thanks for the cheat sheet, but it is not a regular delta. It's as if it were in italic. Please check the paper to see what I'm talking about. – Renan May 23 '11 at 2:37
Same symbol, different font. – Dirk Eddelbuettel May 23 '11 at 2:50
@Renan: What if you typeset it in italics, e.g. \mathit{\Delta}? – Torbjørn T. May 23 '11 at 17:01
The cheat sheet link is dead. Redirects to the homepage. – HSchmale Sep 21 '15 at 17:41
This looks very much like \Updelta (\usepackage{upgreek})
As you can see here, when compared with the standard Delta, the Updelta has an italic look to it.
-
Why \Updelta? It is pretty obvious that the Delta in the question is italic. – Henri Menke Jun 16 at 8:35
@HenriMenke I think that someone visiting this question may be interested in other alternatives, and as you can see from the above comparison the Updelta symbol certainly looks more like the italic delta than the standard delta symbol. – JStrahl Jun 21 at 11:56
Even though it might be useful, it does not answer the question and according to site policy an answer has to address the question. Also we do not want to clutter the answer section with might-be-useful posts. Neither does the symbol reproduce the one in the question, nor is it italic (see this picture). – Henri Menke Jun 21 at 13:29
# Why is the value function obtained from a greedy policy different from its original value function (i.e. $V_k \neq V_{\pi_k }$)?
Consider a vector of values $$V_k$$ and the related value function $$V_{\pi_k}$$ of the policy $$\pi_k$$ obtained by acting greedily with respect to $$V_k$$, i.e.
$$\pi_k(i) := \arg \min_{a \in A} \{ R(i,a) + \alpha \sum_{j \in S} P_{i,j}(a ) V_k(j) \}$$
where $$P_{ij}( a ) = Pr[j \mid i, a]$$
my question is why is:
$$V_k \neq V_{\pi_k }$$
my intuition (which is obviously wrong) says that we are acting according to the value function $$V_k$$ so we should get the same value function out. Why is this reasoning incorrect?
Note that I am sure we could cook up a boring example that shows I'm wrong. I don't want or need that: it wouldn't provide insight into WHY I am wrong, only that I am, which I already know.
Your intuition is off because the policy $$\pi$$ used to form $$V_k$$ (or what I will denote as $$V^{\pi}$$) will actually be different from the policy $$\pi_k$$ if $$\pi$$ is not optimal. We can show they will be different in the event $$\pi$$ is not optimal as follows. We first state the following definitions
\begin{align} V^{\pi}(s) &= R(s, \pi(s)) + \gamma \mathbb{E}_{s' \sim P(s,\pi(s))} \left[V^{\pi}(s')\right] \\ Q^{\pi}(s,a) &= R(s,a) + \gamma \mathbb{E}_{s' \sim P(s,a)} \left[V^{\pi}(s')\right] \end{align}
If we make note of the above definitions, it is clear that $$V^{\pi}(s) = Q^{\pi}(s, \pi(s))$$. We can also use these definitions to restate $$\pi_k$$ in the following manner
\begin{align} \pi_k(s) &= \arg \min_{a \in A(s)} \left\lbrace R(s,a) + \gamma \mathbb{E}_{s' \sim P(s,a)} \left[V^{\pi}(s')\right] \right\rbrace \\ &= \arg \min_{a \in A(s)} Q^{\pi}(s,a) \\ &= \arg \min_{a \in A(s)} \left\lbrace Q^{\pi}(s,a) - Q^{\pi}(s, \pi(s))\right\rbrace \end{align}
If we look at the final expression for $$\pi_{k}(s)$$, it is clear that for a given state $$s$$, $$\pi_k(s)$$ will be a better action than $$\pi(s)$$ unless $$\pi(s)$$ is already an optimal action for the given state $$s$$. This implies that $$V^{\pi_k}(s) \leq V^{\pi}(s)$$ for all $$s \in S$$ (recall that here we are minimizing costs). The way one can view it is that the $$\arg \min$$ step used to construct $$\pi_k(s)$$ is effectively saying "For each state $$s$$, choose the action $$a$$ that most improves on $$\pi(s)$$, and make it the action choice of $$\pi_k(s)$$".
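A concrete two-state example (with made-up costs and deterministic transitions) makes the gap visible: evaluating a suboptimal $$\pi$$, acting greedily on $$V^{\pi}$$, and re-evaluating gives a different (and better) value function:

```python
import numpy as np

# Two states, two actions, deterministic transitions, cost minimization.
# R[s][a] is the immediate cost and T[s][a] the next state (hypothetical numbers).
R = [[1.0, 0.0], [2.0, 0.5]]
T = [[0, 1], [0, 1]]
gamma = 0.9

def evaluate(policy):
    # Solve V(s) = R(s, policy(s)) + gamma * V(T(s, policy(s))) exactly.
    A = np.eye(2)
    b = np.zeros(2)
    for s in (0, 1):
        a = policy[s]
        A[s, T[s][a]] -= gamma
        b[s] = R[s][a]
    return np.linalg.solve(A, b)

pi = [0, 0]                  # a deliberately suboptimal policy
V_pi = evaluate(pi)          # [10., 11.]

# Greedy step: pi_k(s) = argmin_a { R(s,a) + gamma * V_pi(next state) }
pi_k = [min((0, 1), key=lambda a: R[s][a] + gamma * V_pi[T[s][a]]) for s in (0, 1)]
V_pi_k = evaluate(pi_k)      # [4.5, 5.] -- strictly lower cost, and != V_pi
print(V_pi, V_pi_k)
```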
# 0.3 Lab 3 - frequency analysis (Page 3/4)
$x\left(t\right)=\sum _{\substack{k=1 \\ k\ \text{odd}}}^{13}\frac{4}{k\pi }\sin\left(2\pi kt\right)\phantom{\rule{4pt}{0ex}}.$
These are the first 8 terms in the Fourier series of the periodic square wave shown in [link] .
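The same partial sum is easy to reproduce outside Simulink; a sketch in Python/NumPy:

```python
import numpy as np

t = np.linspace(0, 2, 2000, endpoint=False)   # two periods of the 1 Hz square wave

# Partial Fourier sum with the odd harmonics k = 1, 3, ..., 13.
x = sum(4 / (k * np.pi) * np.sin(2 * np.pi * k * t) for k in range(1, 14, 2))

# The sum approximates a unit square wave (+1 on (0, 0.5), -1 on (0.5, 1));
# the Gibbs ripple puts x(0.25) near 1.045 rather than exactly 1.
print(x[250], x[750])   # values at t = 0.25 and t = 0.75
```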
Run the model by selecting Start under the Simulation menu. A graph will pop up that shows the synthesized square wave signal and its spectrum. This is the output of the Spectrum Analyzer. After the simulation runs for a while, the Spectrum Analyzer element will update the plot of the spectral energy and the incoming waveform. Notice that the energy is concentrated in peaks corresponding to the individual sine waves. Print the output of the Spectrum Analyzer.
You may have a closer look at the synthesized signal by double clicking on the Scope1 icon. You can also see a plot of all the individual sine wavesby double clicking on the Scope2 icon.
Synthesize the two periodic waveforms defined in the "Synthesis of Periodic Signals" section of the background exercises. Do this by setting the frequency, amplitude, and phaseof each sinewave generator to the proper values. For each case, print the output of the Spectrum Analyzer .
Hand in plots of the Spectrum Analyzer output for each of the three synthesized waveforms.For each case, comment on how the synthesized waveform differs from the desired signal, and on the structureof the spectral density.
## Modulation property
Double click the icon labeled Modulator to bring up a system as shown in [link] . This system modulates a triangular pulse signal with a sine wave.You can control the duration and duty cycle of the triangular envelope and the frequency of the modulating sine wave.The system also contains a spectrum analyzer which plots the modulated signal and its spectrum.
Generate the following signals by adjusting the Time values and Output values of the Repeating Sequence block and the Frequency of the Sine Wave. The Time values vector contains entries spanning one period of the repeating signal. The Output values vector contains the values of the repeating signal at the times specified in the Time values vector. Note that the Repeating Sequence block does NOT create a discrete time signal. It creates a continuous time signal by connecting the output values with line segments. Print the output of the Spectrum Analyzer for each signal.
1. Triangular pulse duration of 1 sec; period of 2 sec; modulating frequency of 10 Hz (initial settings of the experiment).
2. Triangular pulse duration of 1 sec; period of 2 sec; modulating frequency of 15 Hz.
3. Triangular pulse duration of 1 sec; period of 3 sec; modulating frequency of 10 Hz.
4. Triangular pulse duration of 1 sec; period of 6 sec; modulating frequency of 10 Hz.
Notice that the spectrum of the modulated signal consists of a comb of impulses in the frequency domain, arranged around a center frequency.
Hand in plots of the output of the Spectrum Analyzer for each signal. Answer the following questions:
1. What effect does changing the modulating frequency have on the spectral density?
2. Why does the spectrum have a comb structure, and what is the spectral distance between impulses? Why?
3. What would happen to the spectral density if the period of the triangle pulse were to increase toward infinity (in the limit)?
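If you want to preview the comb structure outside Simulink, the experiment's initial settings can be approximated numerically. A sketch; the sample rate and window length are arbitrary choices that fit an integer number of envelope periods:

```python
import numpy as np

fs, T_total = 1000, 8.0                  # 8 s window = 4 envelope periods
t = np.arange(0, T_total, 1 / fs)

# Triangular pulse of duration 1 s within a 2 s period, modulated at 10 Hz.
phase = t % 2.0
env = np.where(phase < 1.0, 1 - np.abs(2 * phase - 1), 0.0)
modulated = env * np.sin(2 * np.pi * 10 * t)

spec = np.abs(np.fft.rfft(modulated))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peak = freqs[np.argmax(spec)]
print(peak)   # strongest line at the 10 Hz carrier; comb lines every 1/2 Hz around it
```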
# Lenses question
1. Aug 3, 2008
### physicsdawg
1. The problem statement, all variables and given/known data
An object is 18cm in front of a diverging lens that has a focal length of -12cm. How far in front of the lens should the object be placed so that the size of its image is reduced by a factor of 2.0?
2. Relevant equations
1/do + 1/di = 1/f
m= -di/do
3. The attempt at a solution
I keep solving and getting 36 using those equations, but the answer says it's 48... I don't know how to get 48
1/18 + 1/di = 1/-12
di=7.2
m= -7.2/18
m=-.4
1/2m=-0.2
-.2= -7.2/do
do=36
Last edited: Aug 3, 2008
2. Aug 3, 2008
### G01
Can you show your calculations, please? I can't find your mistake if I can't see your work.
3. Aug 3, 2008
### physicsdawg
k i edited it
4. Aug 3, 2008
### G01
Hmmm. I am also getting 36 as my answer. I know this is a stupid question, but are you sure your numbers are correct? It doesn't hurt to check. Also where are you getting the answer from?
5. Aug 3, 2008
### physicsdawg
physics textbook
6. Aug 4, 2008
### tiny-tim
Hi physicsdawg!
Try using 8cm instead of 18cm.
7. Aug 4, 2008
### alphysicist
Hi physicsdawg,
I believe this answer is incorrect. The numerical value is right, but it needs to be a negative number.
With di=-7.2, all of these magnifications will be positive numbers (which checks out, since a single diverging lens creates upright images).
The second image will not be in the same place as the first image, and so the di for the second case is an unknown quantity. This equation should therefore be:
$$0.2 = - di/do$$
and you need to find do. Do you see how to find it?
8. Aug 4, 2008
### G01
Ahh, of course. The second image is not in the same place as the first. I made that mistake as well. ( For some reason, I thought that was a condition in the problem. Guess I read into it too much.) Nice catch alphysicist.
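For completeness, alphysicist's correction can be checked numerically; a sketch (with $d_i = -m\,d_o$ substituted into the thin-lens equation):

```python
f = -12.0                      # focal length of the diverging lens (cm)

# Original situation: object at 18 cm.
do1 = 18.0
di1 = 1 / (1 / f - 1 / do1)    # thin lens: 1/do + 1/di = 1/f  ->  di = -7.2 cm
m1 = -di1 / do1                # magnification +0.4 (upright virtual image)

# Desired situation: image half the size, m2 = 0.2.
# Substituting di = -m2*do into the lens equation gives (1 - 1/m2)/do = 1/f.
m2 = m1 / 2
do2 = f * (1 - 1 / m2)
print(m1, do2)                 # 0.4 48.0
```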
### Faster point compression for elliptic curves of $j$-invariant $0$
Dmitrii Koshelev
##### Abstract
The article provides a new double point compression method (to $2\lceil \log_2(q) \rceil + 4$ bits) for an elliptic $\mathbb{F}_{\!q}$-curve $E_b\!: y^2 = x^3 + b$ of $j$-invariant $0$ over a finite field $\mathbb{F}_{\!q}$ such that $q \equiv 1 \ (\mathrm{mod} \ 3)$. More precisely, we obtain explicit simple formulas transforming the coordinates $x_0,y_0,x_1,y_1$ of two points $P_0, P_1 \in E(\mathbb{F}_{\!q})$ to some two elements of $\mathbb{F}_{\!q}$ with four auxiliary bits. In order to recover (in the decompression stage) the points $P_0, P_1$ it is proposed to extract a sixth root $\sqrt[6]{Z} \in \mathbb{F}_{\!q}$ of some element $Z \in \mathbb{F}_{\!q}$. It is known that for $q \equiv 3 \ (\mathrm{mod} \ 4)$, $q \not\equiv 1 \ (\mathrm{mod} \ 27)$ this can be implemented by means of just one exponentiation in $\mathbb{F}_{\!q}$. Therefore the new compression method seems to be much faster than the classical one with the coordinates $x_0, x_1$, whose decompression stage requires two exponentiations in $\mathbb{F}_{\!q}$. We also successfully adapt the new approach for compressing one $\mathbb{F}_{\!q^2}$-point on a curve $E_b$ with $b \in \mathbb{F}_{\!q^2}^*$.
Category
Implementation
Publication info
Preprint. MINOR revision.
Keywords
finite fields, pairing-based cryptography, elliptic curves of $j$-invariant $0$, point compression
Contact author(s)
dimitri koshelev @ gmail com
History
2021-09-11: last of 5 revisions
Short URL
https://ia.cr/2020/010
CC BY
BibTeX
@misc{cryptoeprint:2020/010,
author = {Dmitrii Koshelev},
title = {Faster point compression for elliptic curves of $j$-invariant $0$},
howpublished = {Cryptology ePrint Archive, Paper 2020/010},
year = {2020},
note = {\url{https://eprint.iacr.org/2020/010}},
url = {https://eprint.iacr.org/2020/010}
}
|
{}
|
• Corresponding author: * E-mail: peipeilee@163.com
• About the first author: CHEN Wen-ju, male, born in 1997, is a master's student whose research focuses on farmland soil microbial ecology. E-mail: cwj15236995434@163.com
• Funding:
the National Key Research and Development Program of China (2018YFD0200606, 2017YFD0301103) and the Henan Province Philosophy and Social Science Planning Project (2020BJJ038)
### Characteristics of acidification and the distribution of available phosphorus along soil depths in heavy clay soils in southern Henan Province, China
CHEN Wen-ju, LI Pei-pei*, WEN Qian, HUANG Ke-ming, WANG Meng-yu, XU Heng, HUA Dang-ling, HAN Yan-lai
1. College of Resource and Environmental Sciences, Henan Agricultural University, Zhengzhou 450002, China
• Received:2021-04-23 Accepted:2021-11-04 Online:2022-01-15 Published:2022-07-15
Abstract: The acidification of agricultural soil in the southern part of the North China Plain has become more obvious, which is particularly true for the heavy clay soil types, such as yellow-cinnamon and lime concretion black soils. To understand the spatial variability of the pH value and nutrients on the vertical agricultural soil profile of heavy clay soils in this area, we measured pH values and available phosphorus (AP) in 63 farmland sample points from Xiping County in the southern Henan Province. Geostatistical methods and ArcGIS technology were used to map soil pH values along three soil depths (0-10, 10-20, and 20-30 cm) and the spatial distribution of soil AP in the tillage layer (0-20 cm). Furthermore, the correlation between pH and AP was analyzed. The results showed that mean pH values of typical yellow-cinnamon and typical fluvo-aquic soils from three soil layers were 4.98, 4.93, 5.31, and 5.46, 5.81, 6.26, respectively, which gradually increased with soil depths. However, there was no significant difference among the three soil layers. Mean pH values of typical lime concretion black soil from the three soil layers were 5.23, 5.43 and 6.03, respectively, and that of the 20-30 cm soil layer was significantly higher than that of the 0-10 cm (by 0.8-1 pH unit) and the 10-20 cm layers. The pH of the 20-30 cm soil layer of the calcareous lime concretion black and moist soils were also significantly higher than that of the 0-10 and 10-20 cm soil layers. The AP contents of the typical yellow-cinnamon, typical lime concretion black, moist, typical fluvo-aquic and calcareous lime concretion black soils in 0-20 cm soil layer were 8.85-54.75, 4.27-37.49, 8.22-51.80, 6.07-34.82, and 13.22-22.85 mg·kg-1, respectively. The results of the map indicated that the areas with low AP were distributed in the middle of the study area in blocks, and the areas with high AP were distributed around the study area in dots and flakes. 
The pH values of the typical yellow-cinnamon, typical lime concretion black, and moist soils positively correlated with the content of AP in the 0-20 cm soil layer. In conclusion, the heavy clay soils in southern Henan Province showed stratified acidification, which weakened with soil depth. Soil AP contents in the tillage layer were distributed unevenly across the study area and were affected by soil type and soil pH. These results should be useful for ameliorating the acidification of heavy clay soils in the southern part of the North China Plain.
|
{}
|
# The connectedness property of representables
1. Prove that representables have the following connectedness property: given a locally small category $$\mathscr A$$ and $$A\in\mathscr A$$, if $$X,Y\in[\mathscr A^{op},\mathbf {Set}]$$ with $$H_A\simeq X+Y$$, then either $$X$$ or $$Y$$ is the constant functor.
2. Deduce that the sum of two representables is never representable.
1. Any hints about this? I've been trying to apply various results haphazardly, but I don't see a plan of the proof that I should follow. What I've tried: since colimits in functor category can be computed pointwise, we have $$H_A(B)\simeq X(B)\times Y(B)$$ naturally in $$B$$. Writing out the condition on the commutativity of the square didn't give anything. Also the presence of $$X(B)$$ alludes to the Yoneda lemma: $$X(B)\simeq [\mathscr A^{op},\textbf {Set}](H_B, X)$$, but I don't see how to apply this either.
2. Suppose $$X$$ and $$Y$$ are representable and suppose their sum is representable. Then by 1, $$X$$ or $$Y$$ is the constant functor. Does this contradict the fact that $$X$$ and $$Y$$ are representable? (I.e., do constant representable functors not exist?) I don't think so: $$\mathscr A$$ may be a discrete category with 1 element, and then $$H_A$$ will be a constant functor. But then I don't know how to find a contradiction to 1.
• I suppose you mean $H_A (B) \cong X (B) + Y (B)$. Take $B = A$ and see what happens to $\mathrm{id}_A$. – Zhen Lin Jul 4 '20 at 22:34
You might use that a representation of the presheaf $$X+Y$$ corresponds, through the natural bijection of Yoneda Lemma, to a universal element of $$X+Y$$. I mean that, if $$a\in (X+Y)A=XA \sqcup YA$$ is the image of the isomorphism $$H_A \cong X + Y$$ through Yoneda Lemma's bijection: $$Set^{\mathcal{A}^{op}}(H_A,X + Y)\cong (X+Y)A,$$ then the pair $$(A,a)$$ is initial in the category of elements of $$X+Y$$. This means that, whenever $$B$$ is an object of $$\mathcal{A}$$ and $$b \in (X+Y)B$$, then there is a unique arrow $$B \xrightarrow{f}A$$ of $$\mathcal{A}$$ such that the function $$(X+Y)f=Xf \sqcup Yf$$ sends $$a$$ to $$b$$ (this is Corollary 4.3.2 of your link).
If -without loss of generality- we assume that $$a \in XA$$ then, if such a $$b \in (X+ Y)B=XB\sqcup YB$$ exists, it needs to belong to $$XB$$, for the map $$Xf \sqcup Yf$$ sends elements of $$XA$$ to elements of $$XB$$ and elements of $$YA$$ to elements of $$YB$$. This implies that $$YB$$ is empty (and $$B$$ was arbitrary), hence $$Y$$ constantly equals the empty set. If we assumed that $$a \in YA$$ then it would be the case that $$X$$ constantly equals the empty set.
Now 2. is easier, knowing that one between $$X$$ and $$Y$$ is not just a constant presheaf but also the constantly empty one.
|
{}
|
# Mathematical writing : using an "out-of-date" notation
When I wrote my master's thesis, a professor who read it said that I should not use the phrase "A function of class $k$." but instead "A function of class $C^k$". I am not an expert about mathematical history of notations, but I read that in Geometric Measure Theory, H. Federer actually uses the first one, and it seems logical for me: I think that $C^k$ is the abbreviation for "of class $k$". Therefore, employing "class $C^k$" seems like a repetition. Or maybe the other notation is just not used any more and should simply be prohibited?
• A google-books search using the word "function" and the phrase "of class $k$" will show you "class $k$" is used in a variety of settings. Notation conventions tend to come and go, but I'm willing to bet that "of class $k$" will be a lot less meaningful 50 years from now than "of class $C^{k}$". Sep 15 '15 at 18:25
• I always thought of the $C$ as standing for "continuously-differentiable" Sep 15 '15 at 18:25
• Federer could use a simplified notation in his book, if the term occurs very frequently. The standard notation is $C^k$. "Class $k$" will not be recognizable by most mathematicians. Sep 15 '15 at 18:37
• Thank you Prof. Eremenko. The notation does not appear very often in Federer's book, but your second argument convinces me to use $C^k$. Sep 15 '15 at 18:47
• If "a function of class $C^k$" bothers you, you can always say "a $C^k$ function". Sep 15 '15 at 20:49
• He's not the only one. Karl Menger invented a new notation and made consistent use of it and wrote much about it. Parts of it entered ordinary notation, parts didn't. Dirac vector covector notation is another example that however became popular, but whether a mathematician uses it depends on whether they feel like it, there are several standard options. Use the standard notation, unless your own notation is much better, in which case full speed ahead with your own and don't worry if somebody dislikes it. (If it's good others will like it too later.) Here AE is correct, $C^k$ is better. Sep 16 '15 at 18:33
|
{}
|
# Why eigenvectors arise as the solution of PCA
I have very limited knowledge of linear algebra and therefore I don't have a geometrical intuition behind PCA.
Why are the eigenvectors (which are simply defined as vectors whose direction doesn't change after a linear transformation) also the directions that maximize variance? I have seen the PCA derivation using Lagrange multipliers, but I would like to have the geometrical intuition behind it.
Thanks
You're asking for intuition around the following fact: "If $$A$$ is the covariance matrix, and you want to maximize (or minimize) $$f(x)=x^TAx$$ under the constraint that $$\|x\|^2=1$$, then any solution $$x_0$$ will be such that $$x_0$$ and $$Ax_0$$ are collinear."
To see this geometrically, consider the level sets of $$x^TAx$$. Those are the set of point where $$x^TAx$$ assumes the same value. You can verify that those level sets are ellipsoids whose axes are aligned along $$A$$'s eigenvectors.
Now let's go back to the maximization of $$x^TAx$$ under the constraint that $$\|x\|^2=1$$. To build an intuition, assume we take a gradient descent approach. The way it works: You start somewhere, then start following the gradient of the function $$x^TAx$$, while continuing to satisfy the constraint $$\|x\|^2=1$$, and repeat until you can no longer move.
Now, note the following
• The gradient of $$x^TAx$$ is equal to $$Ax$$.
• If you are at a point $$x$$ satisfying the constraint, and you want to move away while still satisfying that constraint, then you must move in a direction that's orthogonal to the direction of the gradient of that constraint. In the case of the $$\|x\|^2=1$$ constraint, that means you must move in a direction orthogonal to $$x$$.
In the picture below, we represent the level sets of the function $$x^TAx$$ to optimize. Suppose you are at the point where the red and green arrows meet. The red arrow is $$Ax$$, and the green arrow is $$x$$. Then you can easily see that by moving in a direction orthogonal to the green arrow, you can reach another level set, one with a higher value of $$x^TAx$$ (as indicated by the red arrow). So as long as you're in such a configuration where $$Ax$$ and $$x$$ are not collinear, you can always move to another point where the constraint is satisfied and you increase the function $$x^TAx$$.
Contrast this with the situation where both $$Ax$$ (pink arrow) and $$x$$ (blue arrow) are collinear. Now, if you move in a direction orthogonal to $$x$$ (to satisfy the constraint), you'll also move in a direction that's orthogonal to $$Ax$$, achieving no variation of the function to optimize. In that case, you're already at an extremum.
This proves that extrema of a function under a constraint are achieved when both gradients of the function and constraint are collinear.
Finally, note that none of this depends on the particular form for the function as $$f(x)=x^TAx$$ or the constraint as $$\|x\|^2=1$$. The reasoning I presented is a geometric proof of Lagrange multipliers for general functions and constraints.
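The claim can also be verified numerically. This is a sketch with a random positive semi-definite matrix standing in for the covariance matrix: the top eigenvector maximizes $x^TAx$ over unit vectors, and at that maximizer $Ax$ and $x$ are collinear.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(3, 3))
A = B @ B.T                            # symmetric positive semi-definite

eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in ascending order
top = eigvecs[:, -1]                   # unit eigenvector, largest eigenvalue

best = top @ A @ top                   # equals the largest eigenvalue
for _ in range(1000):
    x = rng.normal(size=3)
    x /= np.linalg.norm(x)             # random unit vector
    assert x @ A @ x <= best + 1e-9    # never beats the top eigenvector

# collinearity at the extremum: A x = lambda x
assert np.allclose(A @ top, eigvals[-1] * top)
```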
|
{}
|
I am trying to simulate the actual response of a camera given some object that is reflecting light. I've written a ray tracer, and have a BRDF that I need to use, and I have a camera sensitivity in terms of signal/Watts. But I am confused by one (rather important) detail:
Each ray coming out of the camera has some solid angle associated with it (this part I've already figured out). So each ray should then have a "Radiance" value associated with it (as radiance has units of W/(sr*m^2)). That way each ray would just multiply its solid angle by its radiance value, and you'd get an "Irradiance" value of W/m^2. However, I am unsure how to actually calculate this initial radiance value for each ray. The reason I am confused is because this seems to be backwards from what the BRDF is giving me.
A BRDF gives me the radiance leaving the surface in the direction of the camera, meaning the vertex of the solid angle is at the point of intersection. The solid angle for the ray however is defined flipped, with the vertex of the solid angle at the camera itself.
How do I bridge this gap? My idea is that if I can actually calculate the radiance for each ray, then I can simply multiply the radiance of each ray by its corresponding solid angle to get an irradiance value, then apply the inverse square law, and finally add up the irradiance for each ray per pixel and divide by the area of the pixel to get the wattage it receives.
But I am very lost as to how to calculate the radiance for each ray given that the only calculations I'm familiar with (the BRDF) returns a radiance value for a solid angle in the "wrong direction".
Am I misunderstanding what is going on? Am I approaching this incorrectly? Any help would be really appreciated!
I didn't get this part:
That way each ray would just multiply its solid angle by its radiance value, and you'd get an "Irradiance" value of W/m^2.
I'll try to explain how the whole thing works though. You have a light source, it emits light rays. Those light rays bounce around the scene and eventually end up at your camera film. So at the end of the day you have multiple rays hitting the film surface and the radiance of those gets multiplied with the sensitivity function. For efficiency reasons you usually start backwards - from the camera. But if you work with symmetric BRDFs this should not matter. So what arrives at your film is radiance, you multiply that radiance with the sensitivity function and integrate within a "film pixel" to get the result in terms of your screen pixel.
Edit: To clarify what I meant by integrated out:
If you have radiance which is a measure of W/(sr * m^2), you can integrate out sr or m^2 like so:
$$E(x) = \int_{\Omega}{L(x,\omega)\cos\theta\,d\omega}$$
From where you get irradiance at a point $$x$$, which gives you the energy arriving from all directions at point $$x$$ (W/m^2). Now you can go further and find out the energy that arrives at some surface with area A, by integrating over all points on the surface:
$$\Phi = \int_{A}{E(x)\,d\mu(x)} = \int_{A}\int_{\Omega}{L(x,\omega)\cos\theta\,d\omega\,d\mu(x)}$$
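The first integral above can be checked with a quick Monte Carlo sketch. Assuming (purely for illustration) a radiance L that is constant over the hemisphere, the closed form is $E = \pi L$; directions are sampled uniformly over the hemisphere, where the pdf per steradian is $1/(2\pi)$ and $\cos\theta$ is uniform on [0, 1].

```python
import math
import random

random.seed(0)
L = 2.0                      # W/(sr*m^2), assumed constant radiance
N = 200_000
total = 0.0
for _ in range(N):
    cos_theta = random.random()          # uniform hemisphere: cosθ ~ U[0,1]
    # estimator: integrand / pdf = (L * cosθ) * 2π
    total += L * cos_theta * 2.0 * math.pi
E = total / N

assert abs(E - math.pi * L) < 0.05       # converges to π·L ≈ 6.283
```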
• What I meant by the comment you copied is that the sensitivity function I have to work with is a function of Watts. Radiance has units of Wm^-2*sr^-1. So if I multiply the radiance value by the solid angle represented by the ray, then I would get an irradiance value, with units Wm^-2. I then multiply that by the surface area of the pixel that the ray originated from to get the Watts received, which I can then use in the sensitivity calculation. Jul 18 '19 at 17:42
• But the part that is confusing me is that the solid angles of the BRDF and of the ray appear (at least to me) to be opposite one another. The BRDF deals with solid angles whose vertex is at the point of intersection on the surface. The rays on the other hand have a solid angle with its vertex at the camera (more specifically, somewhere on the pixel that it was sent from. If no super sampling is done, then it originates just from the pixel center). So at the intersection point, the ray forms a cone in one direction, while the BRDF forms a cone in the exact opposite direction Jul 18 '19 at 17:45
• You're not multiplying radiance by a solid angle; to get a different measure you're integrating out the steradian part. Granted, you're integrating over the solid angle, but multiplying the radiance by the solid angle doesn't make much sense. The BRDF is not something related to the camera, it's simply a function that describes the scattering properties of a surface at some point. It just tells you how a ray coming from direction A scatters (and vice versa). Jul 18 '19 at 17:49
• With graphics: [Here is a simple diagram of a ray] (upload.wikimedia.org/wikipedia/commons/b/b2/…). It is diverging away from the camera, so the solid angle of incoming light to the sensor represented by the ray has its vertex at the camera origin. This diagram of a BRDF shows that the solid angle diverging off the surface. Which is opposite how the ray looks at that point Jul 18 '19 at 17:50
• I understand it has nothing to do with the camera, but the rays reflected off the object are diverging away from the point they reflected off of. The camera rays however, are diverging from the camera, not the point they intersect. Jul 18 '19 at 17:52
|
{}
|
# Are we estimating the effects of health care expenditure correctly?
It is a contentious issue in philosophy whether an omission can be the cause of an event. At the very least it seems we should consider causation by omission differently from ‘ordinary’ causation. Consider Sarah McGrath’s example. Billy promised Alice to water the plant while she was away, but he did not water it. Billy not watering the plant caused its death. But there are good reasons to suppose that Billy did not cause its death: if Billy’s lack of watering caused the death of the plant, it may well be reasonable to assume that Vladimir Putin and indeed anyone else who did not water the plant were also a cause. McGrath argues that there is a normative consideration here: Billy ought to have watered the plant and that’s why we judge his omission as a cause and not anyone else’s. Similarly, consider the example from L.A. Paul and Ned Hall’s excellent book Causation: A User’s Guide. Billy and Suzy are playing soccer on rival teams. One of Suzy’s teammates scores a goal. Both Billy and Suzy were nearby and could have easily prevented the goal. But our judgement is that the goal should only be credited to Billy’s failure to block it, as Suzy had no responsibility to do so.
These arguments may appear far removed from the world of health economics. But, they have practical implications. Consider the estimation of the effect that increasing health care expenditure has on public health outcomes. The government, or relevant health authority, makes a decision about how the budget is allocated. It is often the case that there are allocative inefficiencies: greater gains could be had by reallocating the budget to more effective programs of care. In this case there would seem to be a relevant omission; the budget has not been spent where it could have provided benefits. These omissions are often seen as causes of a loss of health. Karl Claxton wrote of the Cancer Drugs Fund, a pool of money diverted from the National Health Service to provide cancer drugs otherwise considered cost-ineffective, that it was associated with
a net loss of at least 14,400 quality adjusted life years in 2013/14.
Similarly, an analysis of the lack of spending on effective HIV treatment and prevention by the Mbeki administration in South Africa wrote that
More than 330,000 lives or approximately 2.2 million person-years were lost because a feasible and timely ARV treatment program was not implemented in South Africa.
But our analyses of the effects of health care expenditure typically do not take these omissions into account.
Causal inference methods are founded on a counterfactual theory of causation. The aim of a causal inference method is to estimate the potential outcomes that would have been observed under different treatment regimes. In our case this would be what would have happened under different levels of expenditure. This is typically estimated by examining the relationship between population health and levels of expenditure, perhaps using some exogenous determinant of expenditure to identify the causal effects of interest. But this only identifies those changes caused by expenditure and not those changes caused by not spending.
Consider the following toy example. There are two causes of death in the population $a$ and $b$ with associated programs of care and prevention $A$ and $B$. The total health care expenditure is $x$, of which a proportion $p: p\in P \subseteq [0,1]$ is spent on $A$ and $1-p$ on $B$. The deaths due to each cause are $y_a$ and $y_b$ and so the total deaths are $y = y_a + y_b$. Finally, the effects of a unit increase in expenditure in each program are $\beta_a$ and $\beta_b$. The question is to determine the causal effect of expenditure. If $Y_x$ is the potential outcome for level of expenditure $x$ then the average treatment effect is given by $E(\frac{\partial Y_x}{\partial x})$.
The country has chosen an allocation between the programmes of care of $p_0$. If causation by omission is not a concern then, given linear, additive models (and that all the model assumptions are met), $y_a = \alpha_a + \beta_a p x + f_a(t) + u_a$ and $y_b = \alpha_b + \beta_b (1-p) x + f_b(t) + u_b$, the causal effect is $E(\frac{\partial Y_x}{\partial x}) = \beta = \beta_a p_0 + \beta_b (1-p_0)$. But if causation by omission is relevant, then the net effect of expenditure is the lives gained $\beta_a p_0 + \beta_b (1-p_0)$ less the lives lost. The lives lost are those under all possible things we did not do, so the estimator of the causal effect is $\beta' = \beta_a p_0 + \beta_b (1-p_0) - \int_{P/p_0} [ \beta_ap + \beta_b(1-p) ] dG(p)$. Now, clearly $\beta \neq \beta'$ unless $P/p_0$ is the empty set, i.e. there was no other option. Indeed, the choice of possible alternatives involves a normative judgement as we’ve suggested. For an omission to count as a cause, there needs to be a judgement about what ought to have been done. For health care expenditure this may mean that the only viable alternative is the allocatively efficient distribution, in which case all allocations will result in a net loss of life unless they are allocatively efficient, which some may argue is reasonable. An alternative view is simply that the government simply has to not do worse than in the past and perhaps it is also reasonable for the government not to make significant changes to the allocation, for whatever reason. In that case we might say that $P \in [p_0,1]$ and $g(p)$ might be a distribution truncated below $p_0$ with most mass around $p_0$ and small variance.
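A numeric version of this toy model makes the gap between $\beta$ and $\beta'$ concrete. All the values below are made up for illustration, and the distribution $g(p)$ is hypothetically taken as uniform on $[p_0, 1]$, one of the choices discussed above.

```python
import numpy as np

beta_a, beta_b = 2.0, 1.0     # assumed per-unit effects of programmes A, B
p0 = 0.3                      # the allocation actually chosen

# ordinary causal effect at the chosen allocation
beta = beta_a * p0 + beta_b * (1 - p0)

# expected forgone effect over the alternative allocations p ~ g(p),
# here approximated by Monte Carlo draws from a uniform g on [p0, 1]
rng = np.random.default_rng(1)
p = rng.uniform(p0, 1.0, size=100_000)
forgone = np.mean(beta_a * p + beta_b * (1 - p))

# net effect accounting for causation by omission
beta_net = beta - forgone

# beta != beta' whenever the set of feasible alternatives is non-empty
assert beta_net != beta
```

With these numbers $\beta = 1.3$ while $\beta'$ is negative, illustrating how an allocation can show a positive ordinary effect yet a net loss once omissions are counted.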
The problem is that we generally do not observe the effect of expenditure in each program of care nor do we know the distribution of possible budget allocations. The normative judgements are also a contentious issue. Claxton clearly believes the government ought not to have initiated the Cancer Drugs Fund, but he does not go so far as to say any allocative inefficiency results in a net loss of life. Some working out of the underlying normative principles is warranted. But if it’s not possible to estimate these net causal effects, why discuss it? Perhaps it’s due to the lack of consistency. We estimate the ‘ordinary’ causal effect in our empirical work, but we often discuss opportunity costs and losses due to inefficiencies as being due to or caused by the spending decisions that are made. As the examples at the beginning illustrate, the normative question of responsibility seeps into our judgments about whether an omission is the cause of an outcome. For health care expenditure the government or other health care body does have a relevant responsibility. I would argue then that causation by omission is important and perhaps we need to reconsider the inferences that we make.
Credits
## Author
• Health economics, statistics, and health services research at the University of Warwick. Also like rock climbing and making noise on the guitar.
## 2 thoughts on “Are we estimating the effects of health care expenditure correctly?”
1. A very interesting discussion indeed!
I wonder if the issue at play here is an inconsistency with regard to the moral relevance of the status quo and deviations from it.
Responsibility is clearly very important in policymaking generally and health spending specifically. But it seems to me that it usually is not deemed relevant in an economic evaluation. As you seem to suggest, responsibility may relate to the status quo.
You mention “increasing health care expenditure” as a context in which to consider causation by omission. In fact, we rarely ever consider this decision problem. Generally, we assume the budget to be fixed and concern ourselves only with deciding how to spend the money. Maybe that’s precisely because we can’t deal with the moral challenge of questioning the spending status quo. The extent to which the CDF resulted in money being “diverted from the National Health Service” is also questionable. Certainly, it was an inefficiency, but we only really think about it as an allocative inefficiency given the budget.
Not spending the budget is never an option, probably due to responsibility (we expect the government to spend our taxes, and wisely). ‘Do nothing’ is never truly the counterfactual in cost-effectiveness analyses because we evaluate cost-effectiveness on the basis that if funds are not spent on an intervention they will be spent on another intervention elsewhere that we would expect to be cost-effective at the margin.
Your exposition seems to suggest, or at least raises the possibility, that, in an A vs B scenario, we should be considering a second incremental effect aside from A vs B: that is, A vs B vs spend nothing. This would ‘endogenise’ the budget in a way that my mind cannot grasp right now. But when it comes to questions about what the budget ought to be we necessarily have to look beyond the context of healthcare to considering all public spending decisions.
So it may be that policymaking gives undue weight to the status quo and deviations from it (any ‘omission’ is morally irrelevant). It may also be that economists don’t pay enough attention to the status quo and its moral relevance. I wouldn’t dare to speak for Karl Claxton, but I’d hazard a guess that his view would be closer to the former. I’ll let you know where I stand if I ever figure it out.
1. Sam Watson says:
I ended up spending way too much time thinking about this, it’s probably worth actually trying to do this properly now! I think the point I was getting at is not that we’re doing things wrong, in fact, but that the normative isn’t so separate from the positive when it comes to empirical analyses and their interpretation. You might say that empirically we are just estimating the effect of A versus not-A. We don’t take a position on whether B should have been done instead (assuming it was possible to have been done). Well, I think this is where we end up confusing ourselves, as we are not distinguishing the action A from the event that it occurs or the omission of an action of type A occurring. We estimate the effects of the event that A occurs compared to something of the type of A not occurring, which we often take to mean doing nothing. But if instead B could have been done then an event of the type B does not occur, and these omissions may well have causal effects depending on our normative judgement.
So to interpret causal effects in analyses there appear to be normative judgements. Either that what was done was what ought to be done, that there was no other choice than to do what was done in some deterministic sense (in which case the counterfactual is impossible), or that the estimated causal effect is just a component of the total causal effect that can be used in analyses like Karl Claxton’s of the Cancer Drugs Fund. For things like clinical trials the moral context is that neither treatment nor control ought to be done so there is no relevant causation by omission: if treatment or control appears to be harmful then this changes and the trial stops. If it is not stopped then we would say that the trial has been a cause of harm.
I think I agree that many would think that omissions are morally or causally irrelevant but then they would have to say that there was no causal effect of certain omissions, like the clinical trial above or the Cancer Drugs Fund, or the Mbeki administration’s HIV policy, or Billy not watering the plant, or Billy not blocking the goal, and many other discussions in philosophy. If they don’t take this position then I think one would have to accept there is a normative judgement involved in interpreting causal effects.
This site uses Akismet to reduce spam. Learn how your comment data is processed.
|
{}
|
# Zoeppritz equation
The Zoeppritz equations describe seismic wave energy partitioning at an interface, for example the boundary between two different rocks. The equations relate the amplitude of incident P-waves to reflected and refracted P- and S-waves at a plane interface for a given angle of incidence.
## P-wave incident on a planar interface
Mode conversion.
A planar P-wave hitting the boundary between two layers will produce both P and SV reflected and transmitted waves. This is called mode conversion. The angles of the incident, reflected and transmitted rays are related by Snell's law as follows:
${\displaystyle p={\frac {\sin \theta _{1}}{V_{\mathrm {P1} }}}={\frac {\sin \theta _{2}}{V_{\mathrm {P2} }}}={\frac {\sin \phi _{1}}{V_{\mathrm {S1} }}}={\frac {\sin \phi _{2}}{V_{\mathrm {S2} }}}}$,
where p is called the ray parameter.
Zoeppritz (1919) derived the particle motion amplitudes of the reflected and transmitted waves using the conservation of stress and displacement across the interface, which yields four equations with four unknowns:
${\displaystyle {\begin{bmatrix}R_{\mathrm {P} }\\R_{\mathrm {S} }\\T_{\mathrm {P} }\\T_{\mathrm {S} }\\\end{bmatrix}}={\begin{bmatrix}-\sin \theta _{1}&-\cos \phi _{1}&\sin \theta _{2}&\cos \phi _{2}\\\cos \theta _{1}&-\sin \phi _{1}&\cos \theta _{2}&-\sin \phi _{2}\\\sin 2\theta _{1}&{\frac {V_{\mathrm {P1} }}{V_{\mathrm {S1} }}}\cos 2\phi _{1}&{\frac {\rho _{2}V_{\mathrm {S2} }^{2}V_{\mathrm {P1} }}{\rho _{1}V_{\mathrm {S1} }^{2}V_{\mathrm {P2} }}}\sin 2\theta _{2}&{\frac {\rho _{2}V_{\mathrm {S2} }V_{\mathrm {P1} }}{\rho _{1}V_{\mathrm {S1} }^{2}}}\cos 2\phi _{2}\\-\cos 2\phi _{1}&{\frac {V_{\mathrm {S1} }}{V_{\mathrm {P1} }}}\sin 2\phi _{1}&{\frac {\rho _{2}V_{\mathrm {P2} }}{\rho _{1}V_{\mathrm {P1} }}}\cos 2\phi _{2}&-{\frac {\rho _{2}V_{\mathrm {S2} }}{\rho _{1}V_{\mathrm {P1} }}}\sin 2\phi _{2}\end{bmatrix}}^{-1}{\begin{bmatrix}\sin \theta _{1}\\\cos \theta _{1}\\\sin 2\theta _{1}\\\cos 2\phi _{1}\\\end{bmatrix}}}$
RP, RS, TP, and TS are the reflected P, reflected S, transmitted P, and transmitted S-wave amplitude coefficients, respectively. Inverting the matrix form of the Zoeppritz equations gives the coefficients as a function of angle.
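As a sketch, the system can be assembled and solved numerically. The layer properties below are made-up illustrative values, and the matrix follows the standard Aki & Richards sign conventions for displacement amplitudes; as a sanity check, at normal incidence RP reduces to the familiar acoustic impedance contrast.

```python
import numpy as np

# hypothetical two-layer model (P velocity, S velocity, density)
vp1, vs1, rho1 = 3000.0, 1500.0, 2400.0
vp2, vs2, rho2 = 4000.0, 2000.0, 2600.0

def zoeppritz(theta1):
    """Solve for [RP, RS, TP, TS] at incidence angle theta1 (radians)."""
    p = np.sin(theta1) / vp1                       # ray parameter (Snell)
    theta2 = np.arcsin(p * vp2)
    phi1, phi2 = np.arcsin(p * vs1), np.arcsin(p * vs2)
    M = np.array([
        [-np.sin(theta1), -np.cos(phi1), np.sin(theta2), np.cos(phi2)],
        [np.cos(theta1), -np.sin(phi1), np.cos(theta2), -np.sin(phi2)],
        [np.sin(2*theta1), (vp1/vs1)*np.cos(2*phi1),
         (rho2*vs2**2*vp1)/(rho1*vs1**2*vp2)*np.sin(2*theta2),
         (rho2*vs2*vp1)/(rho1*vs1**2)*np.cos(2*phi2)],
        [-np.cos(2*phi1), (vs1/vp1)*np.sin(2*phi1),
         (rho2*vp2)/(rho1*vp1)*np.cos(2*phi2),
         -(rho2*vs2)/(rho1*vp1)*np.sin(2*phi2)],
    ])
    rhs = np.array([np.sin(theta1), np.cos(theta1),
                    np.sin(2*theta1), np.cos(2*phi1)])
    return np.linalg.solve(M, rhs)

# normal-incidence check: RP = (Z2 - Z1)/(Z2 + Z1) with Z = rho * Vp
rp = zoeppritz(0.0)[0]
z1, z2 = rho1 * vp1, rho2 * vp2
assert abs(rp - (z2 - z1) / (z2 + z1)) < 1e-9
```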
## AVO and linear approximations
Although the Zoeppritz equations are exact, they do not lead to an intuitive understanding of the AVO process. Modeling can be routinely done with the Zoeppritz equations, but most AVO methods for analyzing real seismic data are based on linearized approximations to the Zoeppritz equations (e.g. Bortfeld, 1961, Richards and Frasier, 1976, and Aki and Richards, 1980), which include:
|
{}
|
### Tianqi Tang
• PhD, University of Technology Sydney
tangtianqi09@gmail.com
Tianqi Tang is currently a second-year Ph.D. student at AAII, University of Technology, Sydney, under the supervision of Prof. Yi Yang.
|
{}
|
International Association for Cryptologic Research
# IACR News Central
Get an update on changes of the IACR web-page here. For questions, contact newsletter (at) iacr.org. You can also receive updates via:
You can also access the full news archive.
Further sources to find out about changes are CryptoDB, ePrint RSS, ePrint Web, Event calendar (iCal).
2014-01-22
19:17 [Pub][ePrint]
In TPM2.0, a single signature primitive is proposed to support various signature schemes including Direct Anonymous Attestation (DAA), U-Prove and Schnorr signatures. This signature primitive is implemented by several APIs. In this paper, we show these DAA-related APIs can be used as a static Diffie-Hellman oracle, and thus the security strength of these signature schemes can be weakened by 14 bits. We propose a novel property of DAA called forward anonymity and show how to utilize these DAA-related APIs to break forward anonymity. Then we propose new APIs which not only remove the static Diffie-Hellman oracle but also support forward anonymity, thus significantly improving the security of DAA and the other signature schemes supported by TPM2.0. We prove the security of our new APIs under the discrete logarithm assumption in the random oracle model. We prove that DAA satisfies forward anonymity using the new APIs under the Decisional Diffie-Hellman assumption. Our new APIs are almost as efficient as the original APIs in the TPM2.0 specification and can support LRSW-DAA and SDH-DAA together with U-Prove as the original APIs do.
16:17 [Pub][ePrint]
The Fibonacci-to-Galois transformation is useful for reducing the propagation delay of feedback shift register-based stream ciphers and hash functions. In this paper, we extend it to handle the Galois-to-Galois case as well as feedforward connections. This makes it possible to transform the Trivium stream cipher and increase its keystream data rate by 27% without any penalty in area. The presented transformation might open new possibilities for cryptanalysis of Trivium, since it induces a class of stream ciphers which generate the same set of keystreams as Trivium, but have a different structure.
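The Fibonacci and Galois register styles contrasted here can be illustrated with a toy 4-bit LFSR (an illustrative sketch, not from the paper; the polynomial x⁴ + x + 1 is chosen arbitrarily). Both forms are maximal-length, so each cycles through all 15 nonzero states:

```python
def fibonacci_lfsr_period(seed=0b0001):
    """Fibonacci LFSR for x^4 + x + 1: feedback is the XOR of tap bits 0 and 1,
    shifted in at the most significant end."""
    state, steps = seed, 0
    while True:
        fb = (state ^ (state >> 1)) & 1      # XOR of the two tap bits
        state = (state >> 1) | (fb << 3)     # shift right, insert feedback at MSB
        steps += 1
        if state == seed:
            return steps

def galois_lfsr_period(seed=0b0001, mask=0b1001):
    """Galois LFSR for the same polynomial: shift right, and if a 1 falls off
    the end, XOR the toggle mask into the state."""
    state, steps = seed, 0
    while True:
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= mask
        steps += 1
        if state == seed:
            return steps

print(fibonacci_lfsr_period(), galois_lfsr_period())  # → 15 15
```

Both registers generate the same maximal-length sequence family, but the Galois form computes each feedback bit with a single XOR per tap into independent positions, which is what makes such transformations attractive for reducing propagation delay.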
2014-01-21
16:17 [Pub][ePrint]
In this paper we study the problem of when a Boolean function can be represented as the sum of two bent functions. This problem was recently presented by N. Tokareva in studying the number of bent functions. Firstly, many functions, such as quadratic Boolean functions, Maiorana-MacFarland bent functions, partial spread functions, etc., are proved to be representable as the sum of two bent functions. Methods to construct such functions from low-dimension ones are also introduced. N. Tokareva's main hypothesis is proved for $n\leq 6$. Moreover, two hypotheses which are equivalent to N. Tokareva's main hypothesis are presented. These hypotheses may lead to new ideas or methods to solve this problem. At last, necessary and sufficient conditions on the problem of when the sum of several bent functions is again a bent function are given.
16:17 [Pub][ePrint]
Technological advancements in cloud computing due to increased connectivity and exponentially proliferating data have resulted in migration towards cloud architecture. Cloud computing is a technology where users can use high-end services in the form of software that resides on different servers and access data from all over the world. Cloud storage enables users to access and store their data anywhere. It also ensures optimal usage of the available resources. There is no need for the user to maintain the overhead of hardware and software costs. With a promising technology like this, it certainly abdicates users' privacy, putting new security threats on the certitude of data in the cloud. The user relies entirely on the cloud providers for his data protection, making them solely responsible for safeguarding it. Security threats such as maintenance of data integrity, data hiding and data safety dominate our concerns when the issue of cloud security comes up. The voluminous data and time-consuming encryption calculations related to applying any encryption method have proved to be a hindrance in this field.

In this research paper, we have contemplated a design for cloud architecture which ensures the secured movement of data at the client and server ends. We have used the non-breakability of elliptic curve cryptography for data encryption and the Diffie-Hellman key exchange mechanism for connection establishment. The proposed encryption mechanism uses a combination of linear and elliptical cryptography methods. It has three security checkpoints: authentication, key generation and encryption of data.
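The Diffie-Hellman exchange used for connection establishment can be sketched with toy parameters. This is an illustration only: the modulus below is made up for demonstration, and real deployments use large safe primes or elliptic-curve groups, not 32-bit numbers:

```python
import secrets

# Toy finite-field Diffie-Hellman (parameters far too small for real use).
p = 0xFFFFFFFB  # the largest 32-bit prime, chosen only for illustration
g = 5

a = secrets.randbelow(p - 2) + 1        # client's secret exponent
b = secrets.randbelow(p - 2) + 1        # server's secret exponent
A = pow(g, a, p)                        # client sends A to server
B = pow(g, b, p)                        # server sends B to client

shared_client = pow(B, a, p)            # client computes g^(ab) mod p
shared_server = pow(A, b, p)            # server computes g^(ab) mod p
assert shared_client == shared_server   # both ends derive the same key material
```

The shared value would then feed a key-derivation function to produce the symmetric key used for the encrypted data movement described above.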
16:17 [Pub][ePrint]
Menezes--Qu--Vanstone key agreement (MQV) is intended to provide implicit key authentication (IKA) and several other security objectives. MQV is approved and specified in five standards.
This report focuses on the IKA of two-pass MQV, without key confirmation. Arguably, implicit key authentication is the most essential security objective in authenticated key agreement. The report examines various necessary or sufficient formal conditions under which MQV may provide IKA.
Incidentally, this report defines, relies on, and inter-relates various conditions on the key derivation function and Diffie--Hellman groups. While it should be expected that most such definitions and results are already well-known, a reader interested in these topics may be interested in this report as a kind of review, even if they have no interest in MQV whatsoever.
09:48 [Event][New]
Submission: 16 March 2014
Notification: 23 May 2014
From September 25 to September 26
Location: Aveiro, Portugal
2014-01-20
10:17 [Pub][ePrint]
Random number generators have direct applications in information security, online gaming, gambling, and computer science in general. True random number generators need an entropy source, which is a physical source with inherent uncertainty, to ensure unpredictability of the output. In this paper we propose a new indirect approach to collecting entropy using human errors in the game play of a user against a computer. We argue that these errors are due to a large set of factors and provide a good source of randomness. To show the viability of this proposal, we design and implement a game, conduct a user study in which we collect user input in the game, and extract randomness from it. We measure the rate and the quality of the resulting randomness, which clearly show the effectiveness of the approach. Our work opens a new direction for the construction of entropy sources that can be incorporated into a large class of video games.
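The abstract does not specify which extractor is used, but a classic way to turn a biased bit stream (such as digitized human errors) into unbiased bits is von Neumann debiasing, sketched here purely as an illustration:

```python
def von_neumann_extract(bits):
    """Von Neumann debiasing: for each non-overlapping pair of input bits,
    emit the first bit if the pair is '01' or '10', and discard '00'/'11'.
    If the source bits are independent with a fixed bias, the output is unbiased."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)   # '10' -> 1, '01' -> 0
    return out

print(von_neumann_extract([0, 1, 1, 0, 0, 1, 1, 0, 1, 0]))  # → [0, 1, 0, 1, 1]
```

The cost is throughput: at best one output bit per four input bits, which is why practical designs usually follow the raw source with a cryptographic extractor instead.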
10:17 [Pub][ePrint]
Recently, Fan et al. proposed a user-efficient recoverable off-line e-cash scheme with fast anonymity revoking. They claimed that their scheme could achieve the security requirements of an e-cash system such as anonymity, unlinkability, double-spending checking, anonymity control, and rapid anonymity revoking on double spending. They further formally proved the unlinkability and un-forgeability security features. However, after cryptanalysis, we found that the scheme cannot attain two of the claimed security features, anonymity and unlinkability. We therefore modify it to provide the two desired requirements, which are very important in an e-cash system.
10:17 [Pub][ePrint]
The paper is about methodology to detect and demonstrate impossible differentials in a block cipher. We were inspired by the shrinking technique proposed by Biham et al. in 1999, which recovered properties of scalable block cipher structures from numerical search on scaled-down variants. An attempt to bind all concepts and techniques of impossible differentials together reveals a view of the search for impossible differentials that can benefit from the computational power of a computer. We demonstrate on generalized Feistel networks with internal permutations an additional clustering layer on top of shrinking, which lets us merge numerical data into relevant human-readable information to be used in an actual proof. After that, we show how initial analysis of scaled-down TEA-like schemes leaks the relevant part of the design and the length and ends of the impossible differentials. We use that initial profiling to numerically discover four 15-round impossible differentials (beating the current 13-round record) and thousands of shorter ones.
2014-01-17
13:17 [Pub][ePrint]
Even as data and analytics driven applications are becoming increasingly popular, retrieving data from shared databases poses a threat to the privacy of their users. For example, investors/patients retrieving records about interested stocks/diseases from a stock/medical database leak sensitive information to the database server. PIR (Private Information Retrieval) is a promising security primitive to protect the privacy of users' interests. PIR allows the retrieval of a data record from a database without letting the database server know which record is being retrieved. The privacy guarantees could either be information-theoretic or computational. Ever since the first PIR schemes were proposed, a lot of work has been done to reduce the communication cost in the information-theoretic setting, particularly the question communication cost, i.e., the traffic from the user to the database server. The answer communication cost (the traffic from the database server to the user) has however barely been improved. When the question communication cost is much lower than the record length, reducing it has marginal benefit on lowering overall communication cost. In contrast, reducing the answer cost becomes very important. In this paper we propose ramp secret sharing based mechanisms that reduce the answer communication cost in information-theoretic PIR. We have designed four information-theoretic PIR schemes, using three ramp secret sharing approaches, achieving answer communication cost close to the cost of non-private information retrieval. Evaluation shows that our PIR schemes can achieve lower communication cost and the same level of privacy compared with existing schemes. Our PIR schemes' usage is demonstrated for realistic settings of outsourced data sharing and P2P content delivery scenarios. Thus, our approach makes PIR a viable communication-efficient technique to protect user interest privacy.
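As a concrete illustration of information-theoretic PIR, here is the textbook two-server XOR scheme (not the ramp-secret-sharing construction of this paper; the helper name is our own). The user hides the index i by sending each of two non-colluding servers a query vector that looks uniformly random on its own:

```python
import secrets

def xor_pir_query(db, i):
    """Two-server PIR over a bit database: s1 is uniformly random, and s2
    differs from s1 only at index i, so neither server alone learns i."""
    n = len(db)
    s1 = [secrets.randbelow(2) for _ in range(n)]
    s2 = s1[:]
    s2[i] ^= 1
    # Each server returns the XOR of the records its query vector selects.
    a1 = 0
    for bit, sel in zip(db, s1):
        a1 ^= bit & sel
    a2 = 0
    for bit, sel in zip(db, s2):
        a2 ^= bit & sel
    # All selections except position i cancel, leaving db[i].
    return a1 ^ a2

db = [1, 0, 1, 1, 0, 0, 1, 0]
assert all(xor_pir_query(db, i) == db[i] for i in range(len(db)))
```

Note the asymmetry the abstract discusses: here the question cost is n bits per server while each answer is a single bit, which is why later work (including this paper) focuses on the question/answer cost trade-off for long records.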
10:17 [Pub][ePrint]
In this paper, we propose a new algorithm for solving the approximate common divisors problem, which is based on the LLL reduction algorithm over a certain special lattice and a linear equation solving algorithm over the integers. Through both theoretical argument and experimental data, we show that our new algorithm is a polynomial time algorithm under reasonable assumptions on the parameters. We use our algorithm to solve concrete problems that no other algorithm could solve before. Furthermore, we show that our algorithm can break the fully homomorphic encryption schemes which are based on the approximate common divisors problem, in polynomial time in terms of the system parameter $\lambda$.
YT Affiliate Formula Youtube Video Ranking By Ivana Bosnjak Best Package Software, Video Training, And Full Case Study To Helps Your Rank Videos On The
Robert then promises to make you the next wealthy binary options trader. He will assign to you a real-life startup specialist who will guide you on your way to $10,000 a month. If this isn’t enough, he’s also going to give you his bonus binary app called the Binary Interceptor. Allegedly, this app is capable of “intercepting financial data” from the biggest financial cities around the world. The app then processes this data and trades for you automatically. These are some pretty big claims/promises. Will he be able to deliver? We highly doubt it. Even the Copy Buffet auto trader, which was capable of giving us a 4-day 100% win streak, never made such bold claims or guarantees. If the Binary Interceptor software were able to guarantee you $58,000 a week, you would have first heard about it from the newspapers and not some phony email or internet advertisement. The deception doesn’t stop there. We found fake badges and SSL certificates sprinkled all over their web page. This is just another attempt to deceive users into thinking they are reputable or trusted. It’s a very common tactic used in schemes that aim to steal money off unsuspecting visitors.
Thank you for taking the time to read our Binary Interceptor Scam Review. We sincerely hope you found it useful. If you have any inquiries or feedback, feel free to leave us a comment below. You can also contact us here. We do our best to respond to all inquiries and feedback within 24 hours.
Although we understand budget concerns and want to give tips on how to improve performance on lower-end computers, this guide is mostly to get the perfect settings for competitive performance. That’s why we’ve analyzed the pros. They are not making compromises.
Overwatch is an incredibly fast paced and hectic game. So it comes as no surprise that most pros use video settings that let you play at the highest FPS possible. With slight adjustments to your video settings you can get the most out of your gaming rig as well.
Overwatch on max graphics is a gorgeous game to look at, that’s for sure, but if you want to gain a competitive advantage you’ll want as many FPS as possible while also eliminating unnecessary eye candy from your screen. We did some research and some in game testing and have come up with an answer that maximizes your FPS and minimizes the amount of clutter on your screen while still making sure that the game doesn’t look horrendous.
You can check out the background of an investment professional using Investor.gov. It’s a great first step toward protecting your money. Learn about an investment professional’s background, registration status, and more.
Visit Investor.gov, the SEC's website for individual investors.
Binary Options: These All-Or-Nothing Options Are All-Too-Often Fraudulent
A binary option is a type of options contract in which the payout will depend entirely on the outcome of a yes/no (binary) proposition. When the binary option expires, the option holder will receive either a pre-determined amount of cash or nothing at all. Given the all-or-nothing payout structure, binary options are sometimes referred to as “all-or-nothing options” or “fixed-return options.”
SEC Enforcement Actions Involving Binary Options. The SEC’s Division of Enforcement has brought charges against companies for failure to register the securities and failure to register with the SEC as a broker before offering and selling binary options to U.S. investors, as required. In SEC v. Banc de Binary, the binary options seller allegedly solicited U.S. investors through methods including YouTube videos, spam emails, and advertising on the Internet, and also communicated with U.S. investors by phone, email, and instant messenger. In In the Matter of EZTD Inc., another binary options seller allegedly misrepresented the risk of investing in binary options sold on its trading platforms, including by stating on its websites that investing in the binary options that it offered and sold is profitable when, in fact, less than 3% of its customers in the U.S. earned a profit trading binary options sold by the respondent.
“Great way to spoil your car. The coolest washes in town.”
To ensure that our customers are satisfied, we are constantly providing new services and updating offerings. We are also committed to washing cars without polluting the environment.
“Best car wash I have ever been to. I have a monthly membership so I go quite often and I am never disappointed.”
12570 Warwick Blvd. Newport News, VA 23606 Phone: 757.269.0200 Open 24/7
At Cool Wave Car Wash we offer a full range of car wash and auto care services in the Hampton Roads area. Our affordable car wash prices reflect our commitment to providing the finest auto services to our customers, whether it is an express car wash or full auto detailing, at the best prices.
Many believe that the October 1987 crash was caused by an overabundance of selling by program traders. At the time, the idea of portfolio insurance — attempting to hedge a portfolio of equities against the risk of decline by short-selling stock index futures — was very much in vogue, and the new technology of the time made it easy to make large trades automatically on a scale that had not been possible in the past.
Even investors unlucky enough to have put money into the market right before the 1987 crash would still have done well over the long term. They would have earned an annual five-year return of 9% and a 10-year return of 14.7%. The key is to be able to stick with your investments despite short-term setbacks. Read more: David Rosenberg on how to protect your money from a shaky stock market.
MarketWatch revisits the 1987 stock market crash all this week. What do you think? Do you expect another crash like 1987’s? Make yourself heard: Click here to take our poll.
And therein lies the buying opportunity.
Try also Binary Option Robot if you are looking for automated binary trading.
The options listed on Banc de Binary can also be traded on several different expiration times such as 60 seconds, 15 minutes, 30 minutes, 1 hour and 24 hours.
Banc de Binary conforms completely with all industry standards and protocols to protect your privacy and personal information. This includes operating with an EV SSL certificate. It complies with PCI standards when processing data, and it has a partnership with MaxMind to help it verify deposits and prevent fraud.
As Banc de Binary is licensed and regulated by CySEC, you can be sure it is not a scam. Scam binary options brokers have no chance of making it through CySEC’s vetting procedures.
The minimum withdrawal is \$50 and the fast withdrawal time of around 2 days really impressed us.
There are a variety of options available when it comes to placing trades on Banc de Binary. A trader can select from high/low options, one touch options, option builders, Meta chart options and the potentially very profitable but somewhat more complicated ladder options.
The more you use binary options trading platforms, the more you will recognise common scam signals. You will find none of those at Banc de Binary. You will, however, find the following positive points:
Commission Autopilot Automatic Money Making Software - Youtube
There is no need for risk management, leverage, or stop loss orders. No commissions or fees either. The limit of your exposure, in this case, is 90% of your bet. You never have to worry about margin calls, or fretting over when to close a position, a particularly difficult decision for beginners. You also have the potential of making a 75% return in a matter of hours or minutes. Try finding another investment arena where that kind of return can happen in so short a time period.
Binary options replace complexity with simplicity. You only have three decisions to make – Pick an asset; enter an amount that you are willing to wager; and, lastly, decide if the asset value will go up or down over the option period. Simple as that – just hit the execute button and wait. You are told up front what your payoff ratios will be, typically a 75% premium if you guess right or a 10% rebate on your principal if you guess wrong, i.e., you could lose 90%.
As we said, the House still wins. To be successful in traditional forex trading, you have to do better than a “55/45” win/loss ratio on a total dollar basis. If you find a binary broker that pays out a 75% reward AND a 10% rebate on losses, then you are up against the same odds. To be successful, you still must use technical analysis, pattern recognition, and known levels of support and resistance to gain an edge over time, assuming you stick to a well-thought-out strategy. The bigger issue, however, is to choose a reputable broker that you can trust. Binary options require a totally different back office of staffing skills. It is on par with parimutuel betting or a casino operation. For this reason, binary option brokers tend to locate in exotic locations across the globe in tax havens or islands where casino betting is legalized. Choosing the best broker, especially one located in a foreign jurisdiction, is a difficult task. Rely on experts that have studied the market and can advise you appropriately. Be skeptical, and remember that you are your first and last line of defense against fraud!
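The “55/45” figure above can be checked directly. With a 75% payout on wins and a 10% rebate (i.e., a 90% loss) on losses, the breakeven win rate works out to about 54.5% (a small sketch for illustration; the function name is our own):

```python
def expected_return(p_win, payout=0.75, loss=0.90):
    """Expected profit per unit staked on one binary option trade."""
    return p_win * payout - (1 - p_win) * loss

# Breakeven: p * 0.75 = (1 - p) * 0.90  =>  p = 0.90 / 1.65
breakeven = 0.90 / (0.75 + 0.90)
print(round(breakeven, 4))                    # → 0.5455
assert abs(expected_return(breakeven)) < 1e-12
```

Anything below that win rate loses money on average, which is the "house edge" the passage describes.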
In mathematics, and specifically partial differential equations (PDEs), d'Alembert's formula is the general solution to the one-dimensional wave equation $u_{tt} = c^2 u_{xx}$ for $-\infty < x < \infty$, $t > 0$. It is named after the mathematician Jean le Rond d'Alembert.

The characteristics of the PDE are $x \pm ct = \mathrm{const}$, so we can use the change of variables $\mu = x + ct$, $\eta = x - ct$ to transform the PDE to $u_{\mu\eta} = 0$. The general solution of this PDE is $u(\mu, \eta) = F(\mu) + G(\eta)$, where $F$ and $G$ are $C^1$ functions. Back in $x, t$ coordinates, $u(x, t) = F(x + ct) + G(x - ct)$.

This solution $u$ can be interpreted as two waves with constant velocity $c$ moving in opposite directions along the x-axis.

Now let us consider this solution with the Cauchy data $u(x, 0) = g(x)$, $u_t(x, 0) = h(x)$.

Using $u(x, 0) = g(x)$ we get $F(x) + G(x) = g(x)$. Using $u_t(x, 0) = h(x)$ we get $cF'(x) - cG'(x) = h(x)$, and we can integrate the last equation to get $F(x) - G(x) = \frac{1}{c} \int_{-\infty}^{x} h(\xi)\, d\xi + C$.

Now we can solve this system of equations to get d'Alembert's formula:

$u(x, t) = \frac{1}{2}\left[g(x + ct) + g(x - ct)\right] + \frac{1}{2c} \int_{x - ct}^{x + ct} h(\xi)\, d\xi.$
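As a quick numerical sanity check (illustrative only, with arbitrarily chosen data $g(x) = \sin x$ and $h(x) = \cos x$, for which the integral in d'Alembert's formula is closed-form), we can verify that the formula satisfies the wave equation via centered finite differences:

```python
import math

c = 2.0

def dalembert(x, t):
    """d'Alembert's solution for g(x) = sin(x), h(x) = cos(x);
    the antiderivative of cos is sin, so the integral term is closed-form."""
    a, b = x - c * t, x + c * t
    return 0.5 * (math.sin(a) + math.sin(b)) + (math.sin(b) - math.sin(a)) / (2 * c)

# Verify u_tt = c^2 u_xx at a sample point with centered second differences.
x0, t0, e = 0.7, 0.4, 1e-4
u_tt = (dalembert(x0, t0 + e) - 2 * dalembert(x0, t0) + dalembert(x0, t0 - e)) / e**2
u_xx = (dalembert(x0 + e, t0) - 2 * dalembert(x0, t0) + dalembert(x0 - e, t0)) / e**2
assert abs(u_tt - c**2 * u_xx) < 1e-4

# Initial condition: u(x, 0) = g(x) exactly.
assert abs(dalembert(x0, 0.0) - math.sin(x0)) < 1e-12
```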
IBG is a great place to work! Fast-paced, progressive & friendly environment. The work is challenging & never boring, and there’s a particularly strong focus on technology and increased efficiency through automation. Another wonderful aspect of the company is its lack of a bureaucratic and formalistic workplace environment.

- Treat all employees the same. It is not fair to employees who work long hours and are under immense pressure to deliver projects but are not treated on par with the ones who work less or have less pressure to deliver.
- Sometimes tasks must be delivered in a few days. This gives programmers less time to think about writing good code because the focus is delivery, not quality.
- The impact of big projects needs to be thought through end-to-end. If it is a small change in one system, that does not mean it is a small change in other systems as well. Involve all major players that are part of the end-to-end system.

Invest resources in employee training programs, which would result in greater employee efficiency and effectiveness. Increase the number of vacation days.
In recent years a segment of the ImageJ developer community has repeatedly inquired as to ImageJ's future. The program has been successful enough that it would greatly benefit from modern open source software best practices: a publicly accessible source code repository, a suite of unit tests with a continuous build integration system, a central repository of extensions, clear guidelines on how external developers can contribute to both those extensions and to the core program when warranted, and a development roadmap addressing feature requests and tasks from the community.
ImageJ2 is funded from a variety of sources. See the Funding page for details.
ImageJ2 is also a collection of reusable software libraries built on SciJava, using a powerful plugin framework to facilitate rapid development and painless user customization.
For the moment, we suggest using "The ImageJ ecosystem" paper for citations. But we recommend both of the above for learning about ImageJ2 in depth.
As strange as it might sound, there aren’t that many complaints regarding GOptions. I’m saying that this is strange because pretty much every broker has some complaints even though the broker in question is completely legit.
Other brokers also require traders to verify their identity; however, they require it only when traders make their first withdrawal. This is the reason why at most other brokers it can take up to 1-2 weeks (and in some cases even more) to get a withdrawal processed.
I’ll be upfront and will tell you right in the beginning that I do not think GOptions is a scam.
At the suggestion of my assistant Coach Jon England, I watched a Friday evening primetime game between Nevada and California. I was aware of the Pistol but gave it little consideration. I felt the short gun snap and deep tailback gave the offense minimal advantage. However, that night I came away extremely impressed with the Zone Read, QB Runs and the paralyzing effect the misdirection had on the defense. While it was easy to see that QB Colin Kaepernick was a special player, he also played in a system perfectly suited to his skill set. Chris Ault’s system combined Wing-T, Veer Option and Spread principles to create a powerful run game. Needless to say, Nevada games soon began to fill the Schmitt family DVR.
Kyle Schmitt, Head Football Coach, Atholton High School (MD)
Over the past three seasons we have been fortunate to have talented Quarterbacks who are able to run the football. Moving the QB out from under center has opened up the QB run game. Each week we include QB Zone, Power, Counter, Power Read and Zone Read plays for our Quarterback out of Pistol. We found that running the QB was going to be a crucial element of our game plan versus top defenses. Some of our biggest wins and most successful offensive performances have featured our QB’s rushing upwards of 15-20 times on designed runs.
The zone read has become the emphasis of our offense and the first concept we teach our offensive line. Instead of teaching a variety of plays we pride ourselves in blocking this scheme to a variety of fronts and pressures.
Zone Read 3/4i is best against under/solid defenses. If the BSG is uncovered or has an A-gap defender over him, we will transition the play to Zone Read LB and read the linebacker.
CMC Markets UK plc (173730) and CMC Spreadbet plc (170627) are authorised and regulated by the Financial Conduct Authority in the United Kingdom, except for the provision of Countdowns for which CMC Markets is licensed and regulated by the Gambling Commission, reference number 42013. A copy of the licence can be found here. CMC Markets supports responsible gambling, for information and advice please visit
We have won 50 awards worldwide in the last two years – a recognition of the quality of our service, and dedication to delivering innovation and technology, through our web-based trading platform and native mobile apps.
^Awarded 'Best Customer Service', Investment Trends 2016 UK Leveraged Trading Report, based on highest user satisfaction among CFD traders; 'Best Forex Customer Service', UK Forex Awards 2015.
## anonymous 5 years ago Assume a jar has 7 red marbles and 5 black marbles. Draw out 3 marbles with and without replacement. Find the requested probabilities. (c) P(one red and two black marbles) With replacement , without replacement . (d) P(red on the first draw and black on the second draw and black on the third draw) With replacement , without replacement .
1. anonymous
What does with replacement mean?
2. anonymous
putting back in the jar
3. anonymous
Oh. I dunno about that, but without replacement is pretty easy.
4. anonymous
i understand more of the with replacement but if you can help with the without replacement that would be good
5. anonymous
For the first question of one red and two black we just want to know how many ways we can choose one red, and two black divided by how many ways we can choose 3 marbles from 12. So: $\frac{{7 \choose 1} * {5 \choose 2}}{12 \choose 3}$
6. anonymous
so 12/3?
7. anonymous
No, 7 choose 1 = 7 5 choose 2 = 10 12 choose 3 = 220 So it is 70/220. Are you familiar with the choose function (and it's notation)?
8. anonymous
kind of
9. anonymous
was that without replacement?
10. anonymous
Yes
11. anonymous
do you know how to do d?
12. anonymous
For the other one it's just (Probability of choosing a red)*(Probability of choosing a black)*(Probability of choosing a black) So (7/12) * (5/11) * (4/10)
13. anonymous
140/1320....can be reduced though right?
14. anonymous
14/132 = 7/66
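The arithmetic in this thread checks out; here is a short script (added for illustration, not part of the original exchange) verifying both without-replacement answers with exact fractions, plus the with-replacement version of (d), where the draws are independent:

```python
from fractions import Fraction
from math import comb

# (c) one red and two black, without replacement:
p_c = Fraction(comb(7, 1) * comb(5, 2), comb(12, 3))
assert p_c == Fraction(70, 220) == Fraction(7, 22)

# (d) red, then black, then black, without replacement:
p_d = Fraction(7, 12) * Fraction(5, 11) * Fraction(4, 10)
assert p_d == Fraction(140, 1320) == Fraction(7, 66)

# (d) with replacement, the marble goes back in the jar each time:
p_d_repl = Fraction(7, 12) * Fraction(5, 12) * Fraction(5, 12)

print(p_c, p_d, p_d_repl)  # → 7/22 7/66 175/1728
```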
Hello, @Nasser! I may have an idea of what is going on, and I think you found a bug! (Although I am not 100% sure.) Here is what I can deduce from reading lots of line of Sage code.
If you want to do some integration using Giac, what really happens at a low level is the following:
```
ex = (x+1)._giac_()
result = ex.integrate(x._giac_())
result._sage_()
```
The result is obviously x^2/2+x. The first line converts x+1 from the Sage representation to the Giac representation, and stores it in ex. The second line calls the Giac integrate method (since the expression is now converted), which asks to integrate with respect to x; but, once again, you have to do it by converting x to the Giac representation (that's the x._giac_()). Finally, the third line converts the result back to the Sage representation, so you can work with it within Sage itself.
Now, let's go to your example. The same process is performed:
```
ex = ((1-2*x^(1/3))^(3/4)/x)._giac_()
result = ex.integrate(x._giac_())
```
If you could print result at this stage, you would see the answer:

```
Evaluation time: 1.28
12*(1/4*ln(abs((-2*x^(1/3)+1)^(1/4)-1))-1/4*ln((-2*x^(1/3)+1)^(1/4)+1)+1/2*atan((-2*x^(1/3)+1)^(1/4))+1/3*((-2*x^(1/3)+1)^(1/4))^3)
```
The difference with the previous example is that there is this Evaluation time: 1.28, which Giac seems to add as part of the result when the computation takes a little longer than usual (like 1.28 seconds). That is when Sage fails, because the line
```
result._sage_()
```
is executed, but Sage is expecting a function, not the new string of evaluation time.
My suggestion: use Giac to integrate simple functions until the bug is fixed (I will report it right now). But, if you really want to use it to integrate a function like this, execute the two previous steps (without result._sage_()), then redefine x with x = var('x'), and copy what result shows on your screen, without the "Evaluation time" part. You have to be careful to replace every ln with log, which is one of the things that the _sage_() method should do automatically.
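Until the bug is fixed, that manual cleanup can be automated with a small helper. This is an illustrative workaround of our own, not part of Sage; it assumes the Giac output is a plain string shaped like the one above (the sample string below is shortened and made up):

```python
import re

def clean_giac_output(s):
    """Strip a leading 'Evaluation time: ...' line and rewrite ln -> log,
    so the remaining string can be parsed back into a symbolic expression."""
    s = re.sub(r"^Evaluation time: [0-9.]+\s*", "", s)
    return re.sub(r"\bln\b", "log", s)

raw = "Evaluation time: 1.28\n12*(1/4*ln(abs(x-1))-1/4*ln(x+1))"
print(clean_giac_output(raw))  # → 12*(1/4*log(abs(x-1))-1/4*log(x+1))
```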
I hope this helps!
# 10.1 Electromotive force (Page 4/11)
The current through the load resistor is $I=\frac{\epsilon }{r+R}$ . We see from this expression that the smaller the internal resistance r , the greater the current the voltage source supplies to its load R . As batteries are depleted, r increases. If r becomes a significant fraction of the load resistance, then the current is significantly reduced, as the following example illustrates.
## Analyzing a circuit with a battery and a load
A given battery has a 12.00-V emf and an internal resistance of $0.100\phantom{\rule{0.2em}{0ex}}\text{Ω}$ . (a) Calculate its terminal voltage when connected to a $10.00\text{-}\text{Ω}$ load. (b) What is the terminal voltage when connected to a $0.500\text{-}\text{Ω}$ load? (c) What power does the $0.500\text{-}\text{Ω}$ load dissipate? (d) If the internal resistance grows to $0.500\phantom{\rule{0.2em}{0ex}}\text{Ω}$ , find the current, terminal voltage, and power dissipated by a $0.500\text{-}\text{Ω}$ load.
## Strategy
The analysis above gave an expression for current when internal resistance is taken into account. Once the current is found, the terminal voltage can be calculated by using the equation ${V}_{\text{terminal}}=\epsilon -Ir$ . Once current is found, we can also find the power dissipated by the resistor.
## Solution
1. Entering the given values for the emf, load resistance, and internal resistance into the expression above yields
$I=\frac{\epsilon }{R+r}=\frac{12.00\phantom{\rule{0.2em}{0ex}}\text{V}}{10.10\phantom{\rule{0.2em}{0ex}}\text{Ω}}=1.188\phantom{\rule{0.2em}{0ex}}\text{A}.$
Enter the known values into the equation ${V}_{\text{terminal}}=\epsilon -Ir$ to get the terminal voltage:
${V}_{\text{terminal}}=\epsilon -Ir=12.00\phantom{\rule{0.2em}{0ex}}\text{V}\phantom{\rule{0.2em}{0ex}}-\left(1.188\phantom{\rule{0.2em}{0ex}}\text{A}\right)\left(0.100\phantom{\rule{0.2em}{0ex}}\text{Ω}\right)=11.88\phantom{\rule{0.2em}{0ex}}\text{V}.$
The terminal voltage here is only slightly lower than the emf, implying that the current drawn by this light load is not significant.
2. Similarly, with ${R}_{\text{load}}=0.500\phantom{\rule{0.2em}{0ex}}\text{Ω}$ , the current is
$I=\frac{\epsilon }{R+r}=\frac{12.00\phantom{\rule{0.2em}{0ex}}\text{V}}{0.600\phantom{\rule{0.2em}{0ex}}\text{Ω}}=20.00\phantom{\rule{0.2em}{0ex}}\text{A}.$
The terminal voltage is now
${V}_{\text{terminal}}=\epsilon -Ir=12.00\phantom{\rule{0.2em}{0ex}}\text{V}-\left(20.00\phantom{\rule{0.2em}{0ex}}\text{A}\right)\left(0.100\phantom{\rule{0.2em}{0ex}}\text{Ω}\right)=10.00\phantom{\rule{0.2em}{0ex}}\text{V}.$
The terminal voltage exhibits a more significant reduction compared with emf, implying $0.500\phantom{\rule{0.2em}{0ex}}\text{Ω}$ is a heavy load for this battery. A “heavy load” signifies a larger draw of current from the source but not a larger resistance.
3. The power dissipated by the $0.500\text{-}\text{Ω}$ load can be found using the formula $P={I}^{2}R$ . Entering the known values gives
$P={I}^{2}R={\left(20.0\phantom{\rule{0.2em}{0ex}}\text{A}\right)}^{2}\left(0.500\phantom{\rule{0.2em}{0ex}}\text{Ω}\right)=2.00\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{2}\phantom{\rule{0.2em}{0ex}}\text{W.}$
Note that this power can also be obtained using the expression $\frac{{V}^{2}}{R}\phantom{\rule{0.2em}{0ex}}\text{or}\phantom{\rule{0.2em}{0ex}}IV$ , where V is the terminal voltage (10.0 V in this case).
4. Here, the internal resistance has increased, perhaps due to the depletion of the battery, to the point where it is as great as the load resistance. As before, we first find the current by entering the known values into the expression, yielding
$I=\frac{\epsilon }{R+r}=\frac{12.00\phantom{\rule{0.2em}{0ex}}\text{V}}{1.00\phantom{\rule{0.2em}{0ex}}\text{Ω}}=12.00\phantom{\rule{0.2em}{0ex}}\text{A}.$
Now the terminal voltage is
${V}_{\text{terminal}}=\epsilon -Ir=12.00\phantom{\rule{0.2em}{0ex}}\text{V}-\left(12.00\phantom{\rule{0.2em}{0ex}}\text{A}\right)\left(0.500\phantom{\rule{0.2em}{0ex}}\text{Ω}\right)=6.00\phantom{\rule{0.2em}{0ex}}\text{V},$
and the power dissipated by the load is
$P={I}^{2}R={\left(12.00\phantom{\rule{0.2em}{0ex}}\text{A}\right)}^{2}\left(0.500\phantom{\rule{0.2em}{0ex}}\text{Ω}\right)=72.00\phantom{\rule{0.2em}{0ex}}\text{W.}$
We see that the increased internal resistance has significantly decreased the terminal voltage, current, and power delivered to a load.
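All four parts above follow the same small computation pattern, sketched here in Python (the function name is mine, not from the text):

```python
def battery_to_load(emf, r_internal, r_load):
    """Current, terminal voltage, and load power for a battery with the
    given emf and internal resistance driving a resistive load."""
    current = emf / (r_internal + r_load)     # I = emf / (R + r)
    v_terminal = emf - current * r_internal   # V_terminal = emf - I*r
    power = current ** 2 * r_load             # P = I^2 * R
    return current, v_terminal, power

# Parts (b)/(c): 12.00-V emf, 0.100-ohm internal resistance, 0.500-ohm load
# give I = 20.00 A, V_terminal = 10.00 V, P = 200 W, as in the solution.
i, v, p = battery_to_load(12.00, 0.100, 0.500)
```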
define electric image.obtain expression for electric intensity at any point on earthed conducting infinite plane due to a point charge Q placed at a distance D from it.
explain the lack of symmetry in the field of the parallel capacitor
does your app come with video lessons?
What is vector
Vector is a quantity having a direction as well as magnitude
Damilare
tell me about charging and discharging of capacitors
a big and a small metal spheres are connected by a wire, which of this has the maximum electric potential on the surface.
3 capacitors 2nf,3nf,4nf are connected in parallel... what is the equivalent capacitance...and what is the potential difference across each capacitor if the EMF is 500v
equivalent capacitance is 9nf nd pd across each capacitor is 500v
santanu
four effect of heat on substances
why we can find a electric mirror image only in a infinite conducting....why not in finite conducting plate..?
because you can't fit the boundary conditions.
Jorge
what are the dimensions of viscosity (U)?
Branda
what is thermodynamics
the study of heat an other form of energy.
John
heat is internal kinetic energy of a body but it doesnt mean heat is energy contained in a body because heat means transfer of energy due to difference in temperature...and in thermo-dynamics we study cause, effect, application, laws, hypothesis and so on about above mentioned phenomenon in detail.
ing
It is abranch of physical chemistry which deals with the interconversion of all form of energy
Vishal
what is Coulomb's law?
it is a law describing the force between 2 charges: F = k·q·q′/r²
Mostafa
what is the formula of del in cylindrical, polar media
prove that the formula for the unknown resistor is Rx=R2 x R3 divided by R3,when Ig=0.
what is flux
Total number of field lines crossing the surface area
Kamru
Basically flux in general is amount of anything...In Electricity and Magnetism it is the total no..of electric field lines or Magnetic field lines passing normally through the suface
prince
what is temperature change
Celine
a bottle of soft drink was removed from refrigerator and after some time, it was observed that its temperature has increased by 15 degree Celsius, what is the temperature change in degree Fahrenheit and degree Celsius
Celine
process whereby the degree of hotness of a body (or medium) changes
Salim
Q=mcΔT
Salim
where The letter "Q" is the heat transferred in an exchange in calories, "m" is the mass of the substance being heated in grams, "c" is its specific heat capacity and the static value, and "ΔT" is its change in temperature in degrees Celsius to reflect the change in temperature.
Salim
what was the temperature of the soft drink when it was removed ?
Salim
15 degree Celsius
Celine
15 degree
Celine
ok I think is just conversion
Salim
15 degree Celsius to Fahrenheit
Salim
0 degree Celsius = 32 Fahrenheit
Salim
15 degree Celsius = (15×1.8)+32 =59 Fahrenheit
Salim
I dont understand
Celine
the question said you should convert 15 degree Celsius to Fahrenheit
Salim
To convert temperatures in degrees Celsius to Fahrenheit, multiply by 1.8 (or 9/5) and add 32.
Salim
what is d final ans for Fahrenheit and Celsius
Celine
it said what is temperature change in Fahrenheit and Celsius
Celine
the 15 is already in Celsius
Salim
So the final answer for Fahrenheit is 59
Salim
what is d final ans for Fahrenheit and Celsius
Celine
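One point worth separating in the exchange above: converting a temperature reading uses F = 1.8·C + 32, but converting a temperature *change* uses the factor 1.8 alone, because the +32 offset cancels when two readings are subtracted. A quick check in Python (helper names are mine):

```python
def c_to_f(temp_c):
    """Convert a Celsius temperature reading to Fahrenheit."""
    return temp_c * 1.8 + 32

def c_change_to_f_change(delta_c):
    """Convert a Celsius temperature *difference*; the +32 offset cancels."""
    return delta_c * 1.8

# A reading of 15 degrees C is 59 degrees F,
# but a rise of 15 Celsius degrees is a rise of 27 Fahrenheit degrees.
```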
what are the effects of placing a dielectric between the plates of a capacitor
increase the capacitance.
Jorge
besides increasing the capacitance, is there any?
Bundi
mechanical stiffness and small size
Jorge
so as to increase the capacitance of a capacitor
Rahma
also to avoid diffusion of charges between the two plate since they are positive and negative.
Prince
# Complex Dynamics : Twenty-five Years After the Appearance of the Mandelbrot Set
## Description
Chaotic behavior of (even the simplest) iterations of polynomial maps of the complex plane was known for almost one hundred years due to the pioneering work of Fatou, Julia, and their contemporaries. However, it was only twenty-five years ago that the first computer generated images illustrating properties of iterations of quadratic maps appeared. These images of the so-called Mandelbrot and Julia sets immediately resulted in a strong resurgence of interest in complex dynamics. The present volume, based on the talks at the conference commemorating the twenty-fifth anniversary of the appearance of the Mandelbrot set, provides a panorama of current research in this truly fascinating area of mathematics.
## Product details
• Paperback | 206 pages
• 177.8 x 254 x 6.35mm | 408.23g
• Providence, United States
• English
• UK ed.
• 0821836250
• 9780821836255
## Contents
• Indecomposable continua and the Julia sets of rational maps by D. K. Childers, J. C. Mayer, H. M. Tuncali, and E. D. Tymchatyn
• The Henon family: The complex horseshoe locus and real parameter space by E. Bedford and J. Smillie
• Baby Mandelbrot sets adorned with halos in families of rational maps by R. L. Devaney
• Blowup points and baby Mandelbrot sets for singularly perturbed rational maps by R. L. Devaney, M. Holzer, and D. Uminsky
• Some remarks on the connectivity of Julia sets for 2-dimensional diffeomorphisms by R. Dujardin
• Rigorous numerical studies of the dynamics of polynomial skew products of $\mathbb{C}^2$ by S. L. Hruska
• Accumulation points of iterated function systems by L. Keen and N. Lakic
• Parabolic perturbations of the family $\lambda\tan{z}$ by L. Keen and S. Yuan
• Polynomial vector fields, dessins d'enfants, and circle packings by K. M. Pilgrim
• Siegel disks whose boundaries have only two complementary domains by J. T. Rogers, Jr.
• Non-uniform porosity for a subset of some Julia sets by K. A. Roth
• The existence of conformal measures for some transcendental meromorphic functions by B. Skorulski
• Open problems by L. Keen.
uniform probability distribution examples and solutions
What is the probability that the duration of games for a team for the 2011 season is between 480 and 500 hours? State the values of a and b.

In statistics and probability theory, a discrete uniform distribution is a statistical distribution where every outcome is equally likely and the set of values is finite. Suppose the time it takes a student to finish a quiz is uniformly distributed between six and 15 minutes, inclusive. Monte Carlo simulation is a statistical method for modeling the probability of different outcomes in a problem that cannot be solved directly because of the interference of a random variable. The data follow a uniform distribution where all values between and including zero and 14 are equally likely. Find the probability that a random eight-week-old baby smiles more than 12 seconds. However, there is an infinite number of points that can exist. The percentage of the probability is 1 divided by the total number of outcomes (the number of passersby). Therefore, each time the 6-sided die is thrown, each side has a chance of 1/6.

Let X = the time, in minutes, it takes a student to finish a quiz. The probability P(c < X < d) may be found by computing the area under f(x) between c and d. Since the corresponding area is a rectangle, it may be found simply by multiplying the width by the height. Solve the problem two different ways (see Example 3).

Uniform distributions can be grouped into two categories based on the type of possible outcomes. A good example of a discrete uniform distribution is the possible outcomes of rolling a 6-sided die.
The number of values is finite. Suppose the time it takes a nine-year-old to eat a donut is between 0.5 and 4 minutes, inclusive. A good example of a continuous uniform distribution is an idealized random number generator.

Let a = smallest X and b = largest X. Then:

• Mean: $\mu = \frac{a+b}{2}$
• Standard deviation: $\sigma = \sqrt{\frac{(b-a)^2}{12}}$
• Probability density function: $f(x) = \frac{1}{b-a}$ for $a \le X \le b$
• Area to the left of x: $P(X < x) = (x-a)\cdot\frac{1}{b-a}$
• Area to the right of x: $P(X > x) = (b-x)\cdot\frac{1}{b-a}$
• Area between c and d: $P(c < x < d) = (\text{base})(\text{height}) = (d-c)\cdot\frac{1}{b-a}$

For example, $P(x < k) = (\text{base})(\text{height}) = (12.5-0)\left(\frac{1}{15}\right) = 0.8333$, and $P(x > 2 \mid x > 1.5) = (\text{base})(\text{new height}) = (4-2)\left(\frac{2}{5}\right) = \frac{4}{5}$.

Sources: http://cnx.org/contents/30189442-6998-4686-ac05-ed152b91b9de@17.41:36/Introductory_Statistics, http://cnx.org/contents/30189442-6998-4686-ac05-ed152b91b9de@17.44

Ninety percent of the smiling times fall below the 90th percentile. For the first way, use the fact that this is a; for the second way, use the conditional formula (shown below) with the original distribution. The histogram that could be constructed from the sample is an empirical distribution that closely matches the theoretical uniform distribution. Find the 90th percentile. Refer to Example 1: what is the probability that a randomly chosen eight-week-old baby smiles between two and 18 seconds? The data in the table below are 55 smiling times, in seconds, of an eight-week-old baby. Write the distribution in proper notation, and calculate the theoretical mean and standard deviation.
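The formulas above translate directly into code. This sketch (helper names are mine) reproduces the P(x < 12.5) example with a = 0 and b = 15, and the mean and standard deviation formulas:

```python
def uniform_stats(a, b):
    """Mean and standard deviation of the continuous uniform on [a, b]."""
    mean = (a + b) / 2
    std = ((b - a) ** 2 / 12) ** 0.5
    return mean, std

def uniform_cdf(x, a, b):
    """P(X < x) for X ~ Uniform(a, b): the rectangle area (x-a)*1/(b-a)."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

# P(x < 12.5) with a = 0, b = 15, as in the example above: about 0.8333.
p = uniform_cdf(12.5, 0, 15)
```

The conditional example works the same way: restricting to the sample space x > 1.5 just shrinks the interval, so P(x > 2 | x > 1.5) on [0.5, 4] is (4 − 2)/(4 − 1.5) = 4/5.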
# Dense subset of Cantor set homeomorphic to the Baire space
Does anyone know a proof that the Cantor set, $\{0,1\}^{\mathbb{N}}$, has a dense subset homeomorphic to the Baire space, $\mathbb{N}^{\mathbb{N}}$? Thank you.
I am fairly certain that I posted an answer about that before. I am using my iPhone so it's hard to locate now. – Asaf Karagila Dec 9 '12 at 8:15
Hint: If $x = ( x_n )_{n \in \mathbb{N}}$ is an element of the Baire space $\mathcal{N} = \mathbb{N}^\mathbb{N}$, map it to $$0^{x_0} 1 0^{x_1} 1 \cdots$$ where by $0^k$ we mean the length $k$ sequence consisting of only zeroes. You then show that this is a homeomorphic embedding of $\mathcal{N}$ into $2^{\mathbb{N}}$. There is an nice description of the range of this function that makes its denseness in $2^{\mathbb{N}}$ quite easy to demonstrate.
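For finite prefixes, the embedding in the hint can be played with directly. The sketch below (my own helper, not from the answer) maps an initial segment of $x$ to the corresponding binary word $0^{x_0}\,1\,0^{x_1}\,1\cdots$:

```python
def embed_prefix(xs):
    """Map a finite prefix (x_0, ..., x_m) of a point of Baire space to the
    binary word 0^{x_0} 1 0^{x_1} 1 ... 0^{x_m} 1 from the hint."""
    return "".join("0" * x + "1" for x in xs)

# The full map sends x to a 0/1 sequence with infinitely many 1s; density in
# 2^N follows because every finite 0/1 word extends to such a sequence.
```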
Thank you for your hint. It was really helpful. – Frank Zermelo Dec 11 '12 at 1:51
If you happen to have it handy, there’s a pile-driver that takes care of the problem in short order: a characterization of the irrationals due originally to Aleksandrov, if I’m not mistaken. The space of irrationals is (up to homeomorphism) the unique zero-dimensional, separable, Čech-complete metrizable space that is nowhere locally compact. A Tikhonov space is Čech-complete iff it’s a $G_\delta$ in some (and in fact in any) compactification. Let $$X=\left\{x\in\{0,1\}^{\Bbb N}:x\text{ is not eventually constant}\right\}\;.$$ ($X$ corresponds to the points of the middle-thirds Cantor set that are not endpoints of removed intervals.)
• $X$ and $\{0,1\}^{\Bbb N}\setminus X$ are both dense in $\{0,1\}^{\Bbb N}$, which is therefore a compactification of $X$.
• $\{0,1\}^{\Bbb N}\setminus X$ is countable, so $X$ is a $G_\delta$ in its compactification $\{0,1\}^{\Bbb N}$ and is therefore Čech-complete.
• $\{0,1\}^{\Bbb N}$ is a zero-dimensional, separable metrizable space, so $X$ is as well.
• That $X$ is nowhere locally compact follows easily from the fact that its complement is dense in $\{0,1\}^{\Bbb N}$.
I must admit it seemed a bit obfuscated to me until I realised that "separable + Čech-complete + metrizable" = Polish. – Arthur Fischer Dec 9 '12 at 15:22
# Distinguished category of groups, is it abelian?
Let $\mathcal{A}$ denote the following category. The objects of $\mathcal{A}$ are the pairs $(H,G)$ where $G$ is an abelian group and $H$ is a subgroup of $G$. The set of morphisms between $(H,G)$ and $(H',G')$ is $\{f \in \operatorname{Hom}_{\mathbb{Z}}(G,G'): f(H) \subseteq H'\}$. Is this an abelian category?
## 1 Answer
No. Let $G$ be any non-zero abelian group. Then the obvious morphism $(0,G)\to (G,G)$ is a monomorphism and epimorphism, but not an isomorphism, which can't happen in an abelian category.
## Friday, July 25, 2014
### Direct sum of finite cyclic groups
The purpose of this post is to show how a finite direct sum of finite cyclic groups
$\Large \Bbb Z_{m_1} \oplus \Bbb Z_{m_2} \oplus \dots \oplus \Bbb Z_{m_n}$
can be rearranged so that each order divides the next: $m_1 \mid m_2 \mid \dots \mid m_n$.
We use the fact that if $p, q$ are coprime, then $\large \Bbb Z_p \oplus \Bbb Z_q = \Bbb Z_{pq}$.
(We'll use equality $=$ for isomorphism $\cong$ of groups.)
Let $p_1, p_2, \dots p_k$ be the list of prime numbers in the prime factorizations of all the integers $m_1, \dots, m_n$.
Write each $m_j$ in its prime power factorization $\large m_j = p_1^{a_{j1}}p_2^{a_{j2}} \dots p_k^{a_{jk}}$. Therefore
$\Large \Bbb Z_{m_j} = \Bbb Z_{p_1^{a_{j1}}} \oplus \Bbb Z_{p_2^{a_{j2}}} \oplus \dots \oplus \Bbb Z_{p_k^{a_{jk}}}$
and so the above direct sum $\large \Bbb Z_{m_1} \oplus \Bbb Z_{m_2} \oplus \dots \oplus \Bbb Z_{m_n}$ can be written out in matrix/row form as the direct sum of the following rows:
$\Large\Bbb Z_{p_1^{a_{11}}} \oplus \Bbb Z_{p_2^{a_{12}}} \oplus \dots \oplus \Bbb Z_{p_k^{a_{1k}}}$
$\Large\Bbb Z_{p_1^{a_{21}}} \oplus \Bbb Z_{p_2^{a_{22}}} \oplus \dots \oplus \Bbb Z_{p_k^{a_{2k}}}$
$\Large \vdots$
$\Large\Bbb Z_{p_1^{a_{n1}}} \oplus \Bbb Z_{p_2^{a_{n2}}} \oplus \dots \oplus \Bbb Z_{p_k^{a_{nk}}}$
Here, look at the powers of $p_1$ in the first column. They can be permuted / arranged so that their powers are in increasing order. The same with the powers of $p_2$ and the other $p_j$, arrange their groups so that the powers are increasing order. So we get the above direct sum isomorphic to
$\Large\Bbb Z_{p_1^{b_{11}}} \oplus \Bbb Z_{p_2^{b_{12}}} \oplus \dots \oplus \Bbb Z_{p_k^{b_{1k}}}$
$\Large\Bbb Z_{p_1^{b_{21}}} \oplus \Bbb Z_{p_2^{b_{22}}} \oplus \dots \oplus \Bbb Z_{p_k^{b_{2k}}}$
$\Large \vdots$
$\Large\Bbb Z_{p_1^{b_{n1}}} \oplus \Bbb Z_{p_2^{b_{n2}}} \oplus \dots \oplus \Bbb Z_{p_k^{b_{nk}}}$
where, for example, the exponents $b_{11} \le b_{21} \le \dots \le b_{n1}$ are a rearrangement of the numbers $a_{11}, a_{21}, \dots, a_{n1}$ (in the first column) in increasing order. Do the same for the other columns.
Now put together each of these rows into cyclic groups by multiplying their orders, thus
$\Large\ \ \Bbb Z_{N_1}$
$\Large \oplus \Bbb Z_{N_2}$
$\Large \vdots$
$\Large \oplus \Bbb Z_{N_n}$
where
$\large N_1 = p_1^{b_{11}} p_2^{b_{12}} \dots p_k^{b_{1k}}$,
$\large N_2 = p_1^{b_{21}} p_2^{b_{22}} \dots p_k^{b_{2k}}$,
$\large \vdots$
$\large N_n = p_1^{b_{n1}} p_2^{b_{n2}} \dots p_k^{b_{nk}}$.
In view of the fact that the $b_{1j} \le b_{2j} \le \dots \le b_{nj}$ is increasing for each $j$, we see that $N_1 | N_2 | \dots | N_n$, as required. $\blacksquare$
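The column-sorting argument above is easy to mechanize. A sketch in Python (function names are mine; it relies on the CRT fact quoted at the start, and drops any trivial $\Bbb Z_1$ summands):

```python
def invariant_factor_form(ms):
    """Given orders m_1, ..., m_n of cyclic summands, return N_1 | N_2 | ...
    with Z_{m_1} + ... + Z_{m_n} isomorphic to Z_{N_1} + ... + Z_{N_k},
    following the column-sorting argument in the post."""
    def factorize(m):
        # Prime-power factorization by trial division.
        f, d = {}, 2
        while d * d <= m:
            while m % d == 0:
                f[d] = f.get(d, 0) + 1
                m //= d
            d += 1
        if m > 1:
            f[m] = f.get(m, 0) + 1
        return f

    n = len(ms)
    facts = [factorize(m) for m in ms]
    primes = sorted(set(p for f in facts for p in f))
    # For each prime, sort the column of exponents (zeros included)
    # into increasing order, as in the matrix rearrangement above.
    cols = {p: sorted(f.get(p, 0) for f in facts) for p in primes}
    # Multiply across each row to get N_1, ..., N_n.
    ns = []
    for i in range(n):
        N = 1
        for p in primes:
            N *= p ** cols[p][i]
        ns.append(N)
    return [N for N in ns if N > 1] or [1]
```

For example, $\Bbb Z_4 \oplus \Bbb Z_6 \cong \Bbb Z_2 \oplus \Bbb Z_{12}$, and the function returns [2, 12].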
A very large nonconducting plate lying in the xy-plane carries a charge per unit area of 7?. A second such plate located at z = 4.00 cm and oriented parallel to the xy-plane carries a charge per unit area of -5?. Find the electric field for the following.
(a) z < 0
(b) 0 < z < 4.00 cm
(c) z > 4.00 cm
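The question marks in the problem statement have swallowed the units of the two charge densities, so the values 7 and −5 are used here only for illustration, with C/m² assumed. Each infinite sheet contributes a field of magnitude σ/(2ε₀) directed away from it (toward it for σ < 0), and the three regions follow by superposition (the function name is mine):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def two_sheet_field(sigma1, sigma2, region):
    """z-component of E for an infinite sheet at z = 0 (density sigma1, C/m^2)
    and a parallel sheet at z = d > 0 (density sigma2), by superposition."""
    e1 = sigma1 / (2 * EPS0)  # signed field contribution of sheet 1
    e2 = sigma2 / (2 * EPS0)
    if region == "below":     # z < 0: both sheets point their fields toward -z
        return -e1 - e2
    if region == "between":   # 0 < z < d: sheet 1 pushes +z, sheet 2 pushes -z
        return e1 - e2
    return e1 + e2            # z > d: both sheets point toward +z
```

With opposite-sign densities the two contributions add between the plates and partly cancel outside, which is the qualitative answer the three parts are after.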
# Near-term grantmaking
[Added August 27, 2014: GiveWell Labs is now known as the Open Philanthropy Project.]
As stated previously, we expect that it will take quite a long time for us to reach the point of issuing major recommendations based on our GiveWell Labs work. That said, there have been – and will be – situations in which making a grant is appropriate and helpful. Since we are working closely with Good Ventures on Labs, our default approach has been – and will be – to jointly assess situations in which a grant may be called for, with the final call (and any grant) being made by Good Ventures. (If we encounter a point of disagreement, in which we feel it is important to make a grant and Good Ventures does not, we may approach other donors.) This post lays out the basic principles by which we (GiveWell and Good Ventures) decide when to make a grant.
Note that these grants are importantly different from our official recommendations. There is much less emphasis on thorough investigation and maximizing good accomplished per dollar (though the latter is a consideration), and much more weight placed on practical value to our agenda (particularly learning opportunities).
1. Giving to learn
We’ve written before about the concept of “giving to learn,” stating that “gaining information from an organization … is much easier to obtain as a ‘supporter’ (someone who has helped get funding to an organization in the past) than simply as an evaluator (someone who might help get funding to an organization in the future).”
To elaborate a bit on this idea, there are multiple forms that “giving to learn” can take:
• A grant can improve our access to an organization that we want to learn more about, or an organization whose personnel are good sources of information. The work we’ve done on co-funding generally goes in this category.
• A grant may directly pay for work that generates useful information, or may help us influence the direction that such work takes. Potential examples include any grants from our history of philanthropy project, including the recent $50,000 grant to the Millions Saved project. • In some cases a grant can be viewed as an “experiment” – a way to test a theory that a particular project will have a particular result, or will more generally be a worthwhile investment.In general, we believe that “betting on one’s beliefs, and seeing what happens” is a good way to learn about the world, though we also think that this approach has major and unusual limitations when it comes to philanthropy. In our experience, understanding the outcomes/results of a given philanthropic project is usually a major undertaking, and it’s easy to learn nothing from a grant if one does not commit to such an undertaking. Therefore, we try to pick “learning grants” of this type carefully. The giving that fits best into this category so far is the money we’ve moved to our top charities, which we believe to be excellent giving opportunities that we can follow and adjust our views of over time. 2. Strong giving opportunities Because we believe that good accomplished compounds over time, we want to take advantage of unusually strong giving opportunities when we come across them. Doing so will sometimes have the added benefit of providing further “experiments” to learn from in line with the previous section. We believe that it is usually difficult to assess the quality of a giving opportunity without having strong cause-level knowledge. As such, we expect to make fairly few grants in this category in the near future, though as we expand the set of causes we understand well, we expect to make more over time. 3. Good citizenship We are just getting started in exploring many relevant areas; our reputation and relationships are important. 
Therefore, we think it is important to generally behave as “good citizens” when it comes to grantmaking. The idea of being a “good citizen” is a vague one that we’re still fleshing out, but it includes things like:

• Being direct and open with potential funding partners and grantees, and not withholding information for the sake of saving money.
• Not behaving in ways that “reward” potential funding partners/grantees for being less than direct and open with us, or “punish” potential funding partners/grantees for being direct and open with us.

Imagine that both we and another funder are considering making the same grant, and we have the feeling that the other funder might make the grant if we did not. In such a case, we could hold back and disguise our interest for the purpose of saving money, but we feel such an action would fail the “good citizenship” test. Rather, we intend to err on the side of making grants that we would have been willing to make under slightly different circumstances (concerning funding partners’ and potential grantees’ plans and preferences). If we value an organization’s help enough that we would be willing to make a “learning grant” to gain better access to it, we will err on the side of making such a grant even if we happen to believe that we could gain such access without a grant. If we are interested enough in a project that we would be willing to fund it if a potential partner weren’t, we will err on the side of contributing to funding even if we feel that the potential partner doesn’t need our help.

Weighing factors and making decisions

We plan to make grants when some combination of the above factors calls for doing so. For any given grant, we will need to determine the appropriate level of investigation, as well as the appropriate level of followup and public discussion. In all cases, we will announce grants and give at least a basic characterization of the thinking behind them.
But we also will be trying to make the level of investigation, followup and public discussion conceptually “proportional” to the size of the grant. The $50,000 grant to Millions Saved is simply too small – in the scope of the amount of funding we hope eventually to direct – to justify the sort of intensive investigation and followup we’ve done of our top charities. On the flip side, if we were contemplating a very large grant (in the millions of dollars), we would generally plan on serious investigation, and accordingly we would have a much higher bar that the grant would have to clear regarding the above criteria. We wouldn’t undertake a major investigation and major grant unless we felt an opportunity was highly outstanding (and/or in line with our learning agenda).
Over the coming months, GiveWell and Good Ventures expect to announce a reasonable number of grants. Such grants will not always be accompanied by exhaustive research or explicit cost-effectiveness analysis, but they will be carefully selected to fulfill the above criteria and further our mission of finding and funding the most outstanding giving opportunities possible.
Pardon for being a bit of a newbie to true investing outside of a 401k. What about those of us who have 1) Just been laid off, and unable to find work due to lack of a degree (apparently 17 years in the industry with 5 certifications is just simply not enough – which is okay. It gave me the kick in the arse to get back to school finally) 2)Have three children to support (age 11 and under), and 3) Oh yeah – cannot find work. What do you recommend when the only source of positive revenue has ceased to come in and you now have less time than ever – due to responsibilities (i.e. doing well in university = academic scholarships means investment in time, plus spending 20 min breaks with kiddos) – to create positive sources of income ? I truly am wondering from an investor’s point of view how you would handle the pivot point of life if ever you had been faced with it. I realize this may be only imaginary, but at this point, I welcome your “what ifs” scenario on this one. You’ve truly done amazing work and I thank you for being so transparent.
The Lake Tahoe property continues to be 100% managed by a property-management company. It feels amazing not to have to do anything. I can't wait to bring up my boy this coming winter to play in the snow! I could go up this winter, but I want him to be able to walk and run comfortably before he goes. I've been dreaming of this moment for over 10 years now. The income from the property is highly dependent on how much it snows. Summer income is always very strong.
Part of providing value is building trust. Don’t link to things that aren’t of good quality or people won’t trust your recommendations. The other part of making an audience is consistency. It matters less how often you post than how consistently. If you only have time to do one post a month, that post should come out on the same date and time each month.
Passive income is attractive because it frees up your time so you can focus on the things you actually enjoy. A highly successful doctor, lawyer, or publicist, for instance, cannot “inventory” their profits. If they want to earn the same amount of money and enjoy the same lifestyle year after year, they must continue to work the same number of hours at the same pay rate—or more, to keep up with inflation. Although such a career can provide a very comfortable lifestyle, it requires far too much sacrifice unless you truly enjoy the daily grind of your chosen profession.
You may think of a savings account as just that, savings. But it’s actually another form of income, as the money in the account draws interest. And while this interest may be small, it’s still better than $0. Eventually, you can invest this money whenever an opportunity presents itself in order to gain other income streams. Look into Tax Free Savings Accounts if you are going this route.

I like the way you have listed the ways to earn extra income and was quite surprised that you did not mention network marketing, which is a way to make extra income without quitting your regular job. Most people view MLM as a pyramid scheme, but the real pyramid scheme is a regular 9 to 5, because you can only have one president of a company at any given time. A network marketing business promoting a product that can actually be used is really cheap to join and can offer a substantial extra income – or what do you think?

Since the early 1960s, successive governments have implemented various schemes to alleviate poverty, under central planning, that have met with partial success.[342] In 2005, the government enacted the Mahatma Gandhi National Rural Employment Guarantee Act (MGNREGA), guaranteeing 100 days of minimum wage employment to every rural household in all the districts of India.[343] In 2011, it was widely criticised and beset with controversy for corrupt officials, deficit financing as the source of funds, poor quality of infrastructure built under the programme, and unintended destructive effects.[344][345][346] Other studies suggest that the programme has helped reduce rural poverty in some cases.[347][348] Yet other studies report that India's economic growth has been the driver of sustainable employment and poverty reduction, though a sizeable population remains in poverty.[349][350]

This equation implies two things. First, buying one more unit of good x implies buying $\frac{P_x}{P_y}$ fewer units of good y.
So $\frac{P_x}{P_y}$ is the relative price of a unit of x in terms of the number of units of y given up. Second, if the price of x falls for a fixed $Y$, then its relative price falls. The usual hypothesis is that the quantity demanded of x would increase at the lower price, the law of demand. The generalization to more than two goods consists of modelling y as a composite good.

The average population of counties with per capita incomes above the state's was twice as high (921,098) as those with a per capita income below the state average (546,543). Even this difference is minuscule when population density is considered: Counties with a per capita income above that of the state were eight times as dense on average (1,540.2 persons per square mile) than those with per capita income below that of the state (192.1 persons per square mile).

The development of Indian security markets began with the launch of the Bombay Stock Exchange (BSE) in July 1875 and Ahmedabad Stock exchange in 1894. Since then, 22 other exchanges have traded in Indian cities. In 2014, India's stock exchange market became the 10th largest in the world by market capitalisation, just above those of South Korea and Australia.[402] India's two major stock exchanges, BSE and National Stock Exchange of India, had a market capitalisation of US$1.71 trillion and US$1.68 trillion as of February 2015, according to World Federation of Exchanges.[403][404]

I have to agree. Our Duplex cost us 200k initially in 1998. Over time, and completely refurbishing the property with historically appropriate sensitivity, we invested another 200k or so. We just had a realtor advise us we could ask 700k for it today. It nets us 30k annually after taxes, insurance and maintenance. We still have a loan on it, which I have not taken into account, that will be paid off within 5 years if we keep it.
My mental drama now is: while I am quite giddy over the prospect of earning a tidy sum of profit if I sell, what then would I do to equal the ROI and monthly income this thing generates? Rents are low; they should be $4k a month and will only go up. I'm tempted to keep it and not sell. And while I do have some stocks, I basically suck at them. I am much better at doing properties.

Remember, a successful business solves people's problems. At first, you're going to have to do the legwork and put in the time. But it's about building something now so you can reap the benefits later, with the help of software, tools, automation, and people you hire. In this way, you can then turn this business that solves people's problems into something that generates passive income for you!

However, this comes back to the old discussion of pain versus pleasure. We will always do more to avoid pain than we will to gain pleasure. When our backs are against the wall, we act. When they're not, we relax. The truth is that the pain-versus-pleasure paradigm only operates in the short term. We'll only avoid pain in the here and now. Often not in the long term.

A lot of people these days are moving towards the two-job concept. Amongst the people I know who have applied this in their life, the primary reason is that the 9 to 5 job pays their bills and keeps the fire burning in the kitchen, and the second job is where their passion lies. This is the passion which might have been forgotten while growing up, or is not a viable primary income source.

George Forster (1798), A journey from Bengal to England: through the northern part of India, Kashmire, Afghanistan, and Persia, and into Russia, by the Caspian-Sea, R. Faulder: "... A society of Moultan Hindoos, which has long been established in Baku, contributes largely to the circulation of its commerce; and with the Armenians they may be accounted the principal merchants of Shirwan ..."
If you need cash flow, and the dividend doesn't meet your needs, sell a little appreciated stock (or keep a CD ladder rolling and leave your stock alone). At the risk of repeating myself, whether you take cash out of your portfolio in the form of "rent", dividend, interest, cap gain, laddered CD, etc., the arithmetic doesn't change. You are still taking cash out of your portfolio. I'm just pointing out that we shouldn't let the tail wag the dog. IOW, the primary goal is to grow the long-term value of your portfolio, after tax. Period. All other goals are secondary.

Hi there. I am new here; I live in Norway, and I am working my way to FI. I am 43 years old now and started way too late. It just came to my mind for real 2.5 years ago after having read Mr. Money Mustache's blog. Fortunately I have been good with money before also, so my starting point has been good. I was smart enough to buy a rental apartment 18 years ago, with only $12,000 in my pocket to invest, which was 1/10 of the price of the property. I actually just sold it, as the ROI (I think that's the right word for it) was coming down to nothing really. If I took the rent, subtracted the monthly costs, subtracted what a loan would cost me, and after that subtracted tax, the following numbers appeared: the sales value of the apartment after tax was around $300,000, and the sum I would have left every year on the rent was $3,750. OK, it was paid down, so the real numbers were higher, but that is an incredibly low return. It was located in Oslo, the capital of Norway, so the price rise has been tremendous over the last 18 years. I am all for stocks now. I know they are also priced high at the moment, which my 53% return since December 2016 also shows. The only reason this apartment was the right decision 18 years ago was the big leverage and the tremendous price growth. It was right then, but it does not have to be right now to do the same.
For the stocks I run a very easy in/out-of-the-market rule, which would give you better sleep and also historically better rates of return, but more importantly lower volatility in your portfolio. Try it out for yourself: sell the S&P 500 when it is performing under its 365-day average, and buy when it crosses over. I do not use the S&P 500 but the OBX index in Norway. Even if you factor in the cost of selling and buying, including the spread of the product I am using, the results are amazing. I have run through all the data thoroughly since 1983, and the result was that the index alone returned 44x the investment, while following the rule returned 77x, in this timeframe. The most important finding, though, is what it means when you start withdrawing principal, as you will not experience all the big dips and therefore do not destroy your principal by withdrawing through those dips. I have all the graphs and statistics for it and it really works. The "drawback" is that during good times, like from 2009 till today, you will fall a little short of the index because of some "false" out indications, but who cares when your portfolio return in 2008 was 0% instead of -55%? Giving up a little during good times costs so little in comparison to the return you get in the bad times. All of this is of course done from an account where you do not get taxed for selling and buying as long as you don't withdraw anything.
1. I started with doing tuitions, even after I picked up my first job. Being in IT, I always had a 5-days-a-week schedule, but tuition/coaching is a time-tested way to earn clean money. I was teaching Mathematics to class X students. And if your pupils do well, it is extremely rewarding: like what happened when I was teaching this lady (in 1995) whose parents had given up on her. She was in a plush school and I don't know what worked, but she got such good marks that they hunted me down for a big pack of sweets after her class X board exam. You can start from your home, do evening classes, then move to a rented place and so on. It is very tiring, but as I said, no one would short-change a teacher.
Perhaps a coworker purposefully tries to make your life miserable because they resent your success. Maybe you get passed over for a promotion and a raise because you weren’t vocal enough about your abilities, and mistakenly thought you worked in a meritocracy. Or maybe you have a new boss who decides to clean house and hire her own people. Whatever the case may be, you will eventually tire.
Increase in income Income per capita has been increasing steadily in almost every country.[5] Many factors contribute to people having a higher income such as education,[6] globalisation and favorable political circumstances such as economic freedom and peace. Increase in income also tends to lead to people choosing to work less hours. Developed countries (defined as countries with a "developed economy") have higher incomes as opposed to developing countries tending to have lower incomes.
I have not. While I am intrigued with the possibility of making online income, it seems to be less passive then how I want to spend my time. Regarding your blog / site, you have done quite well for yourself. However, you have to keep pumping out content or your site would eventually go out of business. That sounds like more of a commitment then I would want. Regarding your book sales, it is probably relatively passive now, but certainly was not when you were writing the book. Now if you love it, great. Just not for me.
One thing I’ve realized is this: It’s FAR easier to work for an employer than it is to develop durable passive income streams for the average person. Why? Because working for an employer in a place that “needs” you means that it’s possible to show up and give a 50% effort. You can show up, put in your time, go home, have a beer, watch TV, and rinse and repeat all without REALLY having to put in the effort.
Creating multiple streams of income does not mean get a second job to supplement your current income. A second job does not provide you with the flexibility and freedom to increase your income. In fact, it can hurt you when you think about it. You are trading time for money and in the long run, you lose. Instead, create something that will allow you to give yourself a pay raise when you need and want it.
`
In the runup to the Second World War, the United States had suffered through the Great Depression following the Wall Street Crash of 1929. Roosevelt's election at the end of 1932 was based on a commitment to reform the economy and society through a "New Deal" program. The first indication of a commitment to government guarantees of social and economic rights came in an address to the Commonwealth Club on September 23, 1932 during his campaign. The speech was written with Adolf A. Berle, a professor of corporate law at Columbia University. A key passage read:
I knew I didn't want to work 70 hours a week in finance forever. My body was breaking down, and I was constantly stressed. As a result, I started saving every other paycheck and 100% of my bonus since my first year out of college in 1999. By the time 2012 rolled around, I was earning enough passive income (about $78,000) to negotiate a severance and be free.

What spurred this blog post was an idea put forth by my friend at ESI Money in which he talks about how the first million is the hardest. ESI shares how his net worth growth has accelerated. The first million took 19 years of work (the clock starts when he started working, not at birth!) but the 2nd million took just 4 years and 9 months. J Money took this same idea and started at $100k, which took him 7 years and 11 months. Each of the next $100k milestones took close to 18 months to reach.
|
{}
|
# knifediy
The error message "The visual studio remote debugger does not support this edition of windows" appears because the remote debugger tries to use Windows Authentication by default, and this is only supported in the "Pro" versions of Windows, and up.
However, the remote debugger does work with the "Home" versions of Windows, you just have to tell it not to use authentication via the command line.
(Why it doesn't let you do this after launching it without any arguments, why the error message is so misleading (and contradicts the official list of supported OS), and why there is so little info about this on the web, I don't know. :))
To launch it, run this:
msvsmon.exe /noauth /nosecuritywarn
Of course, this launches it in the lowest security mode, so you'd only want to do this on a secure network. (But that's usually the mode one ends up using msvcmon in anyway, as the other mode is an even bigger PITA to set up than it is normally. Very useful tool, but really could use some streamlining.)
posted on 2015-09-22 19:51 by knifediy
|
{}
|
# Who standardized the Roman measurements?
Was there a single ruler that standardized Roman measurements, like Qinshihuangdi for China? I remember in history class we talked about how the Romans had standard weight units in markets and axle widths for roads. Were these imposed by the government? Or did a common measurement just end up slowly pervading the entire empire? (e.g. some popular chariot-maker used axles 1.4m wide, which created 1.4m wide ruts, which led more people to build 1.4m wide chariots, which created deeper 1.4m wide ruts, cycling ad infinitum)
• They would have started as the municipal standards for the city of Rome; with the expansion of the Republic, the municipal standards became current over an extended area. Note that local municipal standards were still in use. – Peter Diehr Sep 11 '16 at 18:07
• Wiki article Ancient Roman units of measurement says they were built upon the Hellenic system, but doesn't back it up with any source. – Brasidas Sep 11 '16 at 18:36
• Britannica also mentions Egyptian and Babylonian standard measures that got adapted by Greeks, and then by Romans. – Brasidas Sep 11 '16 at 18:48
• were built upon the Hellenic system Many Roman measures were changed to match Hellenic ones. The same way they "changed" Jupiter and Venus to match Zeus and Aphrodite. Still the question about "who and when" is valid. – Matt Sep 12 '16 at 3:46
I found the following in the Oxford Encyclopedia of Ancient Greece and Rome. (I accessed this through my university so I can't provide a link unfortunately)
|
{}
|
# Asymptotic estimates for unimodular Fourier multipliers on modulation spaces
• Recently, it has been shown that the unimodular Fourier multipliers $e^{it|\Delta |^{\frac{\alpha }{2}}}$ are bounded on all modulation spaces. In this paper, using the almost orthogonality of projections and some techniques on oscillating integrals, we obtain asymptotic estimates for the unimodular Fourier multipliers $e^{it|\Delta |^{\frac{\alpha }{2}}}$ on the modulation spaces. As applications, we give the growth rates of the solutions to the Cauchy problems for the free Schrödinger equation, the wave equation and the Airy equation with initial data in a modulation space. We also obtain a quantitative form of the solution to the Cauchy problem of the nonlinear dispersive equations.
Mathematics Subject Classification: Primary: 42B15, 42B35; Secondary: 42C15.
|
{}
|
# Creating immersive videos
• Virtual Reality
### What is VR?
Virtual Reality (VR), which can be referred to as immersive multimedia or computer-simulated life, replicates an environment that simulates physical presence in places in the real world or imagined worlds.
Most up to date virtual reality environments are displayed either on a computer screen or with special stereoscopic displays, and some simulations include additional sensory information and focus on real sound through speakers or headphones targeted towards VR users.
The simulated environment can be similar to the real world in order to create a lifelike experience – for example, in simulations for pilot or combat training – or it can differ significantly from reality, such as in VR games. In practice, it is currently very difficult to create a high-fidelity virtual reality experience, because of technical limitations on processing power, image resolution, and communication bandwidth. However, VR’s proponents hope that virtual reality’s enabling technologies will become more powerful and cost-effective over time.
### What is immersive video and how does it relate to VR?
An immersive video is basically a video recording of a real world scene, where the view in every direction is recorded at the same time. During playback the viewer has control of the viewing direction, up, down and sideways. Generally the only area that can’t be viewed is the view toward the camera support. The material is recorded as data which when played back through a software player allows the user control of the viewing direction and playback speed.
The player control is typically via a mouse or other sensing device and the playback view is typically a window on a computer display or projection screen or other presentation device such as a head mounted display (HMD).
Because of the use of HMDs you could say immersive videos are another form of virtual reality.
### How do you represent immersive video?
There are many formats for storing an immersive video. The most commonly used is the equirectangular projection, which is a spherical projection. We used the spherical projection because it is the easiest way to view a full environment.
A spherical projection is based on a spherical model,
$x= \lambda \cos \varphi_{1}$
$y= \varphi$
where
$\lambda$ is the longitude
$\varphi$ is the latitude
$\varphi_{1}$ are the standard parallels (north and south of the equator) where the scale of the projection is true
$x$ is the horizontal position along the map
$y$ is the vertical position along the map.
The point (0,0) is at the center of the resulting projection.
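As a small sketch (the function name and image-size conventions here are our own, not from the original article), the formulas above can be turned into a mapping from longitude/latitude to pixel coordinates on an equirectangular image:

```python
import math

def equirect_project(lon, lat, width, height, phi1=0.0):
    """Map longitude/latitude (radians) to pixel coordinates on a
    width x height equirectangular image.  x = lon * cos(phi1), y = lat,
    shifted and scaled so that (lon=0, lat=0) lands at the image center."""
    x = lon * math.cos(phi1)   # horizontal position, in [-pi, pi]
    y = lat                    # vertical position, in [-pi/2, pi/2]
    px = (x + math.pi) / (2 * math.pi) * (width - 1)
    py = (math.pi / 2 - y) / math.pi * (height - 1)  # north pole at top row
    return px, py
```

With the standard parallel at the equator (phi1 = 0) this is the plate carrée projection: equal steps in longitude and latitude map to equal steps in pixels.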
Other projection types include:
Cylindrical projection
Mercator Projection
Little Planet Projection
### How do you record immersive videos?
This area is still under active development; people are still trying to figure out the best way. However, here are some commonly used methods.
### The process of stitching what you record
After you get the videos you first need some meta information, specifically the position of each shot relative to the rig, in degrees. For example, video file mov1235.avi corresponds to horizontal offset of 45 degrees and vertical offset of 90 degrees, roll (camera is rotated on its lens axis) 90 degrees. For our algorithm (that supports any number of cameras) this was essential in placing the shots in space.
Next we will give a general overview of our algorithm (for a single frame):
First, we get a frame from each video, and their associated meta information. Next, we process the frames by undistorting and rotating (the roll parameter) them. Afterwards, in sequence, we virtually project each frame on the interior of a sphere, compositing each one over the other, compensating for exposure difference.
Note that the edges of the frame have an opacity falloff (feather). If any parts of the sphere are left unprojected (projection is transparent), we refine the result by recompositing the current equirectangular projection over a blurred version of itself, creating a smoother background, and a nicer transition to black.
Now we will go in depth for each step.
Undistortion:
Before we explain what undistortion is, we must first understand what distortion is. Because lenses are not perfect objects, they introduce many artifacts, one of them being distortion.
There are 3 main types of distortion: barrel, pincushion and mustache (a combination of the previous 2).
Barrel distortion
Pincushion distortion
Mustache distortion
Therefore, undistortion is the process of compensating for the lens distortion artifact. In our particular case, we compensated only for barrel distortion.
Positioning on the sphere:
We calculate the equirectangular projection of the frame basing its position on the coordinates calculated from the meta information (horizontal, vertical offsets), after being converted to a spherical coordinate system. This is a quite complex mathematical operation which we will not go into.
Assuming that some content exists in the resulting equirectangular map, we now do the following: create an intersection map (a 1-bit image); based on this map and the previous alpha maps of each component of the intersection, we calculate a ‘hole’ map and blur it. We then stencil through with our intersection map.
We now calculate a LUT (lookup table) to apply to the current frame to compensate for exposure difference.
We then merge over this result with the alpha of the previously created projection and we composite it with the existing equirectangular map. We merge by calculating the result of the following equation for each pixel in the region of interest:
$A \times aA + B \times (1 {-} aA)$
where
$A$ is the foreground pixel
$B$ is the background pixel
$a$ is the alpha (opacity) of the foreground pixel.
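The per-pixel merge above is the standard "over" compositing operator. As a minimal sketch (the function name is ours), for a single channel:

```python
def over(A, B, aA):
    """Composite foreground pixel A over background pixel B,
    where aA is the foreground alpha (opacity) in [0, 1]:
    result = A * aA + B * (1 - aA)."""
    return A * aA + B * (1.0 - aA)
```

A fully opaque foreground (aA = 1) replaces the background, a fully transparent one (aA = 0) leaves it untouched, and the feathered frame edges blend smoothly in between.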
### Challenges, issues and others
The main challenge is performance, because multiple HD video files need to be processed, which is very time-consuming. Also, the hardware requirements for viewing immersive videos in good quality are very steep. This is because when you are viewing through your viewport, you’re seeing just a portion of the video that is basically scaled up. So, for a 1080p viewport, you actually need a 16k x 8k source video for actual maximum quality.
Another challenge is synchronizing the videos so you don’t get a wobbly feel to the video, and get the same people in different places (due to the time offset).
References:
Image sources:
Authors:
Ștefan-Gabriel Gavrilaș
Ionuț Popovici
|
{}
|
$$c_1 = m_1 \oplus y_1 \text{ and } c_2 = m_2 \oplus y_2$$
will it be possible to obtain
$$z_1= (m_1 \lor m_2) \oplus y_1 \oplus y_2 \text{ and } z_2= (m_1 \land m_2) \oplus y_1 \oplus y_2$$
from $c_1$ and $c_2$ without revealing $m_1$, $m_2$, $y_1$ and $y_2$?
No, because then you could calculate $z_1 \oplus z_2 = (m_1 \land m_2) \oplus (m_1 \lor m_2) = m_1 \oplus m_2$.
In practice, you can only find $m_1 \oplus m_2$ if both $m_1$ and $m_2$ are encrypted with the same OTP (i.e., $(m_1 \oplus y_1) \oplus (m_2 \oplus y_1) = m_1 \oplus m_2$).
So without any knowledge of $m_1$, $m_2$, $y_1$ or $y_2$, there is no way of magically eliminating both of the OTP key streams $y_1$ and $y_2$.
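The leak can be checked directly: $(m_1 \lor m_2) \oplus (m_1 \land m_2) = m_1 \oplus m_2$ holds bitwise, so the hypothetical $z_1 \oplus z_2$ cancels both key streams. A small demonstration (the concrete plaintext values are arbitrary):

```python
import random

random.seed(1)
m1, m2 = 0b1011, 0b0110                                # two 4-bit plaintexts
y1, y2 = random.getrandbits(4), random.getrandbits(4)  # independent OTP keys

c1, c2 = m1 ^ y1, m2 ^ y2   # the two ciphertexts

# the hypothetical outputs the question asks for:
z1 = (m1 | m2) ^ y1 ^ y2
z2 = (m1 & m2) ^ y1 ^ y2

# both key streams cancel, leaking m1 XOR m2:
assert z1 ^ z2 == m1 ^ m2
```

The identity behind the cancellation holds for every pair of values, independent of the keys, which is exactly why the answer above is "no".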
|
{}
|
## Kirkman’s Schoolgirl Problem
Fifteen young ladies in a school walk out three abreast for seven days in succession: it is required to arrange them daily so that no two shall walk twice abreast.
A closely related problem is famously known as the "social golfer problem": Twenty golfers wish to play in foursomes for 5 days. Is it possible for each golfer to play no more than once with any other golfer? ( http://mathworld.wolfram.com/SocialGolferProblem.html ).
This is naive and straightforward solution:
from z3 import *
import itertools

PERSONS, DAYS, GROUPS = 15, 7, 5
#PERSONS, DAYS, GROUPS = 20, 5, 5

# each element - group for each person and each day:
tbl=[[Int('%d_%d' % (person, day)) for day in range(DAYS)] for person in range(PERSONS)]

s=Solver()

# limit each variable to the range 0..GROUPS-1:
for person in range(PERSONS):
    for day in range(DAYS):
        s.add(And(tbl[person][day]>=0, tbl[person][day]<GROUPS))

# one in pair must be equal, all others must differ:
def only_one_in_pair_can_be_equal(l1, l2):
    assert len(l1)==len(l2)
    expr=[]
    for pair_eq in range(len(l1)):
        tmp=[]
        for i in range(len(l1)):
            if pair_eq==i:
                tmp.append(l1[i]==l2[i])
            else:
                tmp.append(l1[i]!=l2[i])
        expr.append(And(*tmp))
    # at this point, an expression like this is constructed:
    # Or(
    #    And(l1[0]==l2[0], l1[1]!=l2[1], l1[2]!=l2[2]),
    #    And(l1[0]!=l2[0], l1[1]==l2[1], l1[2]!=l2[2]),
    #    And(l1[0]!=l2[0], l1[1]!=l2[1], l1[2]==l2[2])
    # )
    s.add(Or(*expr))

# enumerate all possible pairs.
for pair in itertools.combinations(range(PERSONS), r=2):
    only_one_in_pair_can_be_equal (tbl[pair[0]], tbl[pair[1]])

print s.check()
m=s.model()

print "group for each person:"
print "person:"+"".join([chr(ord('A')+i)+" " for i in range(PERSONS)])
for day in range(DAYS):
    print "day=%d:" % day,
    for person in range(PERSONS):
        print m[tbl[person][day]].as_long(),
    print ""

def persons_in_group(day, group):
    rt=""
    for person in range(PERSONS):
        if m[tbl[person][day]].as_long()==group:
            rt=rt+chr(ord('A')+person)
    return rt

print ""
print "persons grouped:"
for day in range(DAYS):
    print "day=%d:" % day,
    for group in range(GROUPS):
        print persons_in_group(day, group)+" ",
    print ""
sat
group for each person:
person:A B C D E F G H I J K L M N O
day=0: 2 2 3 1 0 0 3 4 0 1 1 3 4 4 2
day=1: 2 1 2 4 3 1 4 4 2 1 0 3 0 3 0
day=2: 4 3 1 0 4 0 4 2 2 1 2 3 3 0 1
day=3: 4 3 1 4 2 1 3 1 0 2 3 4 2 0 0
day=4: 3 0 0 1 1 2 4 3 4 3 2 2 4 0 1
day=5: 2 4 1 1 4 3 3 4 0 0 2 0 1 2 3
day=6: 0 2 4 2 4 0 1 3 2 1 4 3 0 1 3
persons grouped:
day=0: EFI DJK ABO CGL HMN
day=1: KMO BFJ ACI ELN DGH
day=2: DFN CJO HIK BLM AEG
day=3: INO CFH EJM BGK ADL
day=4: BCN DEO FKL AHJ GIM
day=5: IJL CDM AKN FGO BEH
day=6: AFM GJN BDI HLO CEK
It took ~48s on my old Intel Xeon E3-1220 3.10GHz.
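As a sanity check (our own addition, not part of the original post), the printed schedule can be verified directly: every girl walks once per day, and every pair of girls walks together exactly once over the 7 days.

```python
from itertools import combinations
from collections import Counter

# the schedule printed above, day by day:
schedule = [
    ["EFI", "DJK", "ABO", "CGL", "HMN"],
    ["KMO", "BFJ", "ACI", "ELN", "DGH"],
    ["DFN", "CJO", "HIK", "BLM", "AEG"],
    ["INO", "CFH", "EJM", "BGK", "ADL"],
    ["BCN", "DEO", "FKL", "AHJ", "GIM"],
    ["IJL", "CDM", "AKN", "FGO", "BEH"],
    ["AFM", "GJN", "BDI", "HLO", "CEK"],
]

pairs = Counter()
for day in schedule:
    # every girl appears exactly once per day:
    assert sorted("".join(day)) == list("ABCDEFGHIJKLMNO")
    for group in day:
        for p in combinations(sorted(group), 2):
            pairs[p] += 1

# each of the C(15,2) = 105 pairs walks together exactly once:
assert len(pairs) == 105 and all(c == 1 for c in pairs.values())
print("schedule is a valid Kirkman system")
```

Since 7 days × 5 groups × 3 pairs per group = 105 = C(15,2), "no pair twice" already forces "every pair exactly once".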
As in my previous example, I've tried to represent each number (group in which schoolgirl/golfer is) as a single bit:
from z3 import *
import itertools, math

PERSONS, DAYS, GROUPS = 15, 7, 5 # OK
#PERSONS, DAYS, GROUPS = 20, 5, 5 # no answer
#PERSONS, DAYS, GROUPS = 21, 10, 7 # no answer

# each element - group for each person and each day:
tbl=[[BitVec('%d_%d' % (person, day), GROUPS) for day in range(DAYS)] for person in range(PERSONS)]

s=Solver()

# each variable must be in one-hot form, i.e., have exactly one bit set:
for person in range(PERSONS):
    for day in range(DAYS):
        s.add(Or(*[tbl[person][day]==(1<<group) for group in range(GROUPS)]))

# enumerate all pairs of variables.
# we add Or(pair1!=0, pair2!=0) constraint, so two zero variables couldn't be present,
# but both non-zero variables in pair is OK, one non-zero and one zero variable is also OK:
def only_one_must_be_zero(lst):
    for pair in itertools.combinations(lst, r=2):
        s.add(Or(pair[0]!=0, pair[1]!=0))
    # at least one variable must be zero:
    s.add(Or(*[v==0 for v in lst]))

# get two arrays of variables XORed. one element of this new array must be zero:
def only_one_in_pair_can_be_equal(l1, l2):
    assert len(l1)==len(l2)
    only_one_must_be_zero([l1[i]^l2[i] for i in range(len(l1))])

# enumerate all possible pairs:
for pair in itertools.combinations(range(PERSONS), r=2):
    only_one_in_pair_can_be_equal (tbl[pair[0]], tbl[pair[1]])

print s.check()
m=s.model()

print "group for each person:"
print "person:"+"".join([chr(ord('A')+i)+" " for i in range(PERSONS)])
for day in range(DAYS):
    print "day=%d:" % day,
    for person in range(PERSONS):
        print int(math.log(m[tbl[person][day]].as_long(),2)),
    print ""

def persons_in_group(day, group):
    rt=""
    for person in range(PERSONS):
        if int(math.log(m[tbl[person][day]].as_long(),2))==group:
            rt=rt+chr(ord('A')+person)
    return rt

print ""
print "persons grouped:"
for day in range(DAYS):
    print "day=%d:" % day,
    for group in range(GROUPS):
        print persons_in_group(day, group)+" ",
    print ""
This is way faster, ~1.5 seconds on the same CPU.
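A small aside (our own illustration): the one-hot trick encodes group g as the value 1<<g, and decoding recovers g via the base-2 logarithm; Python's bit_length gives the same answer without floating point:

```python
import math

GROUPS = 5
for g in range(GROUPS):
    v = 1 << g                          # one-hot encoding of group g
    assert int(math.log(v, 2)) == g     # decoding as in the script above
    assert v.bit_length() - 1 == g      # integer-only equivalent
```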
Unfortunately, larger SGP (social golfer problem) instances are still out of reach. Yet? http://www.mathpuzzle.com/MAA/54-Golf%20Tournaments/mathgames_08_14_07.html.
|
{}
|
## Chemistry and Chemical Reactivity (9th Edition)
(a) $pH = 4.95$ (b) $pH = 5.05$
1. Calculate the molar mass of $NaOH$: $22.99 + 16.00 + 1.01 = 40\ g/mol$
2. Calculate the number of moles of $NaOH$: $n(moles) = \frac{mass(g)}{mm(g/mol)} = \frac{0.082}{40} = 2.1\times 10^{-3}$
3. Find the concentration of $NaOH$ in mol/L: $C(mol/L) = \frac{n(moles)}{volume(L)} = \frac{2.1\times 10^{-3}}{0.1} = 0.021$
4. Calculate the molar mass of $NaCH_3CO_2$: $22.99 + 12.01 + 3(1.01) + 12.01 + 2(16.00) = 82.04\ g/mol$
5. Calculate the number of moles of $NaCH_3CO_2$: $n(moles) = \frac{4.95}{82.04} = 0.060$
6. Find the concentration of $NaCH_3CO_2$ in mol/L: $C(mol/L) = \frac{0.060}{0.25} = 0.24$
7. Drawing the ICE table, we get these concentrations at equilibrium: $CH_3CO_2H(aq) + H_2O(l) \rightleftharpoons CH_3C{O_2}^-(aq) + H_3O^+(aq)$. Remember: reactants at equilibrium = initial concentration $-\ x$, and products = initial concentration $+\ x$: $[CH_3CO_2H] = 0.15\ M - x$; $[CH_3C{O_2}^-] = 0.24\ M + x$; $[H_3O^+] = 0 + x$
8. Calculate $x$ using the $K_a$ expression: $1.8\times 10^{-5} = \frac{[CH_3C{O_2}^-][H_3O^+]}{[CH_3CO_2H]} = \frac{(0.24 + x)\, x}{0.15 - x}$. Considering that $x$ has a very small value: $1.8\times 10^{-5} = \frac{0.24\, x}{0.15} = 1.6x$, so $x = \frac{1.8\times 10^{-5}}{1.6} = 1.13\times 10^{-5}$. Percent dissociation: $\frac{1.13\times 10^{-5}}{0.15} \times 100\% = 7.5\times 10^{-3}\%$, so the approximation is valid. Since $x = [H_3O^+]$: $[CH_3CO_2H] = 0.15\ M - 1.13\times 10^{-5}\ M \approx 0.15\ M$; $[CH_3C{O_2}^-] = 0.24\ M + 1.13\times 10^{-5}\ M \approx 0.24\ M$; $[H_3O^+] = 1.13\times 10^{-5}\ M$
9. Calculate the pH value: $pH = -\log[H_3O^+] = -\log(1.13\times 10^{-5}) = 4.95$
10. Since we are adding a strong base, this reaction will occur: $CH_3CO_2H(aq) + OH^-(aq) \rightarrow CH_3C{O_2}^-(aq) + H_2O(l)$. Since $NaOH$ is a strong base, it reacts completely: $y = [NaOH] = 0.021\ M$, so after this reaction $[CH_3CO_2H] = 0.15 - 0.021 = 0.13\ M$ and $[CH_3C{O_2}^-] = 0.24 + 0.021 = 0.26\ M$
11. Now calculate the hydronium ion concentration after the addition of the $NaOH$: $[H_3O^+] = K_a \times \frac{[CH_3CO_2H]}{[CH_3C{O_2}^-]} = 1.8\times 10^{-5} \times \frac{0.129}{0.261} = 1.8\times 10^{-5} \times 0.49 = 8.9\times 10^{-6}$
12. Calculate the pH value: $pH = -\log[H_3O^+] = -\log(8.9\times 10^{-6}) = 5.05$
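Both pH values can be cross-checked with the Henderson–Hasselbalch equation, $pH = pK_a + \log_{10}\frac{[base]}{[acid]}$, which is algebraically equivalent to the $K_a$ rearrangement used above. A short sketch (function name is ours) using the concentrations derived in the steps:

```python
import math

def buffer_ph(ka, acid, base):
    """Henderson-Hasselbalch: pH = pKa + log10([base]/[acid])."""
    return -math.log10(ka) + math.log10(base / acid)

KA = 1.8e-5  # Ka of acetic acid

# (a) before adding NaOH: 0.15 M CH3CO2H, 0.24 M CH3CO2^-
ph_a = buffer_ph(KA, 0.15, 0.24)

# (b) 0.021 M of strong base converts that much acid into conjugate base
ph_b = buffer_ph(KA, 0.15 - 0.021, 0.24 + 0.021)

print(round(ph_a, 2), round(ph_b, 2))  # -> 4.95 5.05
```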
|
{}
|
# Simple Plotting in Python with matplotlib¶
If, while reading this, you have any questions about how certain words are defined, see this computer programming dictionary forum.
## Plotting with matplotlib¶
matplotlib is a 2D plotting library that is relatively easy to use to produce publication-quality plots in Python. It provides an interface that is easy to get started with as a beginner, but it also allows you to customize almost every part of a plot. matplotlib's gallery provides a good overview of the wide array of graphics matplotlib is capable of creating. We'll just scratch the surface of matplotlib's capabilities here by looking at making some line plots.
This first line tells the Jupyter Notebook interface to set up plots to be displayed inline (as opposed to opening plots in a separate window). This is only needed for the notebook.
In [1]:
%matplotlib inline
The first step is to import the NumPy library, which we will import as np to give us less to type. This library provides an array object we can use to perform mathematical operations, as well as easy ways to make such arrays. We use the linspace function to create an array of 10 values in x, spanning between 0 and 5. We then set y equal to x * x.
In [2]:
import numpy as np
x = np.linspace(0, 5, 10)
y = x * x
Now we want to make a quick plot of these x and y values; for this we'll use matplotlib. First, we import the matplotlib.pyplot module, which provides a simple plotting interface; we import this as plt, again to save typing.
matplotlib has two main top-level plotting objects: Figure and Axes. A Figure represents a single figure for plotting (a single image or figure window), which contains one or more Axes objects. An Axes groups together an x and y axis, and contains all of the various plotting methods that one would want to use.
Below, we use the subplots() function, with no parameters, to quickly create a Figure, fig, and an Axes, ax, for us to plot on. We then use ax.plot(x, y) to create a line plot on the Axes we created; this command uses pairs of values from the x and y arrays to create points defining the line.
In [3]:
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.plot(x, y)
Out[3]:
[<matplotlib.lines.Line2D at 0x10a6823c8>]
Matplotlib provides a wide array of ways to control the appearance of the plot. Below we adjust the line so that it is a thicker, red dashed line. By specifying the marker argument, we tell matplotlib to add a marker at each point; in this case, that marker is a square (s). For more information on linestyles, markers, etc., type help(ax.plot) in a cell, or see the matplotlib plot docs.
In [4]:
fig, ax = plt.subplots()
ax.plot(x, y, color='red', linestyle='--', linewidth=2, marker='s')
Out[4]:
[<matplotlib.lines.Line2D at 0x10a6e10b8>]
## Controlling Other Plot Aspects¶
In addition to controlling the look of the line, matplotlib provides many other features for customizing the look of the plot. In the plot below, we:
• Set labels for the x and y axes
• Add a title to the plot
In [5]:
fig, ax = plt.subplots()
ax.plot(x, y, color='red')
ax.grid()
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_title('f(x) = x * x')
Out[5]:
<matplotlib.text.Text at 0x10bd9c7b8>
matplotlib also has support for LaTeX-like typesetting of mathematical expressions, called mathtext. This is enabled by surrounding math expressions by $ symbols. Below, we replace the x * x in the title with the more expressive $x^2$.
In [6]:
fig, ax = plt.subplots()
ax.plot(x, y, color='red')
ax.grid()
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_title('$f(x) = x^2$')
Out[6]:
<matplotlib.text.Text at 0x10be93f98>
## Multiple Plots¶
Often, what we really want is to make multiple plots. This can be accomplished in two ways:
• Plot multiple lines on a single Axes
• Combine multiple Axes in a single Figure
First, let's look at plotting multiple lines. This is really simple--just call plot on the Axes you want multiple times:
In [7]:
fig, ax = plt.subplots()
ax.plot(x, x, color='green')
ax.plot(x, x * x, color='red')
ax.plot(x, x**3, color='blue')
Out[7]:
[<matplotlib.lines.Line2D at 0x10f64d630>]
Of course, in this plot it isn't clear what each line represents. We can add a legend to clarify the picture; to make it easy for matplotlib to create the legend for us, we can label each plot as we make it:
In [8]:
fig, ax = plt.subplots()
ax.plot(x, x, color='green', label='$x$')
ax.plot(x, x * x, color='red', label='$x^2$')
ax.plot(x, x**3, color='blue', label='$x^3$')
ax.legend(loc='upper left')
Out[8]:
<matplotlib.legend.Legend at 0x10f682668>
Another option for looking at multiple plots is to use multiple Axes; this is accomplished by passing our desired layout to subplots(). The simplest way is to just give it the number of rows and columns; in this case the axes are returned as a two dimensional array of Axes instances with shape (rows, columns).
In [9]:
# Sharex tells subplots that all the plots should share the same x-limit, ticks, etc.
# It also eliminates the redundant labelling
fig, axes = plt.subplots(nrows=2, ncols=2, sharex=True)
axes[0, 0].plot(x, x)
axes[0, 0].set_title('Linear')
axes[0, 1].plot(x, x * x)
axes[0, 1].set_title('Squared')
axes[1, 0].plot(x, x ** 3)
axes[1, 0].set_title('Cubic')
axes[1, 1].plot(x, x ** 4)
axes[1, 1].set_title('Quartic')
Out[9]:
<matplotlib.text.Text at 0x10ff62358>
Of course, that's a little verbose for my liking, not to mention tedious to update if we want to add more labels. So we can also use a loop to plot:
In [10]:
fig, axes = plt.subplots(nrows=2, ncols=2, sharex=True)
titles = ['Linear', 'Squared', 'Cubic', 'Quartic']
y_vals = [x, x * x, x**3, x**4]
# axes.flat returns the set of axes as a flat (1D) array instead
# of the two-dimensional version we used earlier
for ax, title, y in zip(axes.flat, titles, y_vals):
ax.plot(x, y)
ax.set_title(title)
ax.grid(True)
This makes it easy to tweak all of the plots with a consistent style without repeating ourselves. It's also easier to add or remove plots and reshape the layout. If you're not familiar with the zip() function used above, it's Python's way of iterating (looping) over multiple lists of things together; each time through the loop the first, second, etc. items from each of the lists are returned. It's one of the built-in parts of Python that makes it so easy to use.
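If zip() is new to you, here is a tiny standalone illustration, unrelated to plotting:

```python
# zip() pairs up corresponding items from multiple lists; each pass
# through the loop yields one item from each list, in order.
titles = ['Linear', 'Squared', 'Cubic']
y_exprs = ['x', 'x * x', 'x**3']

for title, expr in zip(titles, y_exprs):
    print(title, '->', expr)
# Linear -> x
# Squared -> x * x
# Cubic -> x**3
```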
|
{}
|
### Three Squares
What is the greatest number of squares you can make by overlapping three squares?
### Biscuit Decorations
Andrew decorated 20 biscuits to take to a party. He lined them up and put icing on every second biscuit and different decorations on other biscuits. How many biscuits weren't decorated?
### Writing Digits
Lee was writing all the counting numbers from 1 to 20. She stopped for a rest after writing seventeen digits. What was the last number she wrote?
# Two Dice
##### Age 5 to 7, Challenge Level
Here are two dice.
If you add up the dots on the top you'll get $7$.
Find two dice to roll yourself. Add the numbers that are on the top.
What other totals could you get if you roll the dice again?
|
{}
|
School of Technology and Computer Science Seminars
# Maximum Matching in the Semi-Streaming Model in Constant Number of Passes
## by Sumedh Vinod Tirodkar (School of Technology and Computer Science, TIFR)
Friday, October 27, 2017 (Asia/Kolkata)
at A-201 (STCS Seminar Room)
Description: We consider the maximum matching problem in the semi-streaming model formalized by Feigenbaum et al., inspired by the giant graphs of today. The input is a stream of edges, and an algorithm may use O(n polylog n) memory to output a maximum matching. If an algorithm goes over the stream k times, it is called a k-pass algorithm. The maximal matching algorithm is a one-pass 2-approximation, and a big open problem is to find an algorithm that does better in one pass.

We begin by giving a two-pass (1/2 + 1/16)-approximation algorithm for triangle-free graphs and a two-pass (1/2 + 1/32)-approximation algorithm for general graphs; these improve the approximation ratios of 1/2 + 1/52 for bipartite graphs and 1/2 + 1/140 for general graphs by Konrad, Magniez, and Mathieu. In three passes, we achieve approximation ratios of 1/2 + 1/10 for triangle-free graphs and 1/2 + 1/19.753 for general graphs.

We also give a multi-pass algorithm where we bound the number of passes precisely: a (2/3 - ε)-approximation algorithm that uses 2/(3ε) passes for triangle-free graphs and 4/(3ε) passes for general graphs. Our algorithms are simple and combinatorial, use O(n log n) space, and have O(1) update time per edge. We extend the multi-pass algorithm to give a (3/4 - ε)-approximation algorithm that uses O(log(1/ε)/ε) passes. This algorithm suggests ideas for a (1 - ε)-approximation deterministic algorithm. Note that there is no known deterministic algorithm for general graphs that achieves a (1 - ε)-approximation in a constant number of passes.
|
{}
|
# I. Jet propelled aircraft II. Rocket propulsion III. The recoil of a gun IV. A pe...
### Question
I. Jet propelled aircraft
II. Rocket propulsion
III. The recoil of a gun
IV. A person walking
Which of the above is based on Newton's third law of motion?
### Options
A) I and II only
B) I, II, III, and IV only
C) I and III only
D) I, II and III only
The correct answer is D.
|
{}
|
# OPEC could ditch dollars for euros
1. Feb 9, 2008
### EnumaElish
http://news.yahoo.com/s/afp/20080209/bs_afp/opeccommoditiesoilcurrency
If the dollar fell x% since Friday then the probability that OPEC will switch to euros can be calculated from:
If the dollar fell x% since Friday, then the probability that OPEC will switch to euros can be calculated from:

current value of $ = ($'s value at, say, noon on Friday) × Prob{OPEC will not abandon $} + ($'s value if OPEC abandons it) × (1 − Prob{OPEC will not abandon $}),

provided "$'s value if OPEC abandons it" can be assigned a quantity. E.g. if "$'s value if OPEC abandons it" = 0 (for demonstrative purposes) then:

Prob{OPEC will not abandon $} = (current value of $) / ($'s value at noon on Friday) = 1 − x.
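A sketch of that back-of-the-envelope model (the function name and numbers are illustrative only, assuming the zero-value scenario from the post):

```python
def implied_prob_no_switch(value_now, value_friday, value_if_abandoned=0.0):
    """Back out Prob{OPEC will not abandon $} from the pricing identity:
    value_now = value_friday * p + value_if_abandoned * (1 - p)."""
    return (value_now - value_if_abandoned) / (value_friday - value_if_abandoned)

# If the dollar fell 2% since Friday and abandonment would zero its value:
p = implied_prob_no_switch(0.98, 1.00)  # 0.98, i.e. 1 - x
```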
Last edited: Feb 9, 2008
2. Feb 9, 2008
### mgb_phys
If OPEC switches to euros then the US has to pay for oil in real money instead of printing it so the probability can be calculated from :
1 - (number of US troops in your country propping up your regime + number of US troops next door ready to invade your country if you piss them off ).
3. Feb 9, 2008
### Poop-Loops
One of the perceived reasons for the Iraqi war starting is because Saddam wanted to switch from Dollars to Euros. We couldn't let that happen, now could we?
4. Feb 9, 2008
### mgb_phys
So probably not looking good for Iran's Euro based oil bourse then?
5. Feb 9, 2008
### binzing
Not a big surprise.
|
{}
|
# What is the Minimum Ozone levels to sustain human civilization on an Earth-like planet?
Suppose I have a planet similar to Earth, and I want to decrease the $O_3$ levels to the minimum amount (world average, in Dobson units) that can still sustain human civilization (bare survival, supposing the current humans on Earth moved to this planet). What would that level be?
Assume this planet is also present in a Solar System-like environment.
• You ask this like if Ozone fell below a certain level, humans will shrivel and die. Does bare survival include humanity having access to shirts and hats to protect against the sun? – Twelfth Jan 26 '17 at 17:59
• Every current technology available in the Earth can be used. – Siddharth Venu Jan 26 '17 at 18:02
I believe the answer here is zero. Ozone's key protective role is blocking ultraviolet (UV) radiation, and life on Earth has evolved under this protection. Remove the protection and we are unable to handle the increased UV without protective measures of our own.
That being said, life that evolved on a planet without this ozone UV shield would have to find new ways to adapt to the environment and the increased presence of UV radiation. this might include self-repairing DNA to adapt to the UV damage.
I don't believe an ozone layer is essential to life, and its absence wouldn't prevent a human-like species from arising. It is essential to our life as it is (or at least we would have to adapt our behavior rapidly), since we evolved with its presence.
• I am sorry but I am not talking about formation of humal like life, but rather what if the humans currently on Earth went to that planet. I'll edit the question so as to show this clearly. – Siddharth Venu Jan 26 '17 at 17:40
• Answer is still 0. They would need access to clothes and shelter to protect them from the sun. – Twelfth Jan 26 '17 at 18:00
• The normal $O_3$ levels are approximately 200-300 Dobson units. So as $O_3$ levels become lower and lower, the increased UV will cause mutations and eventually kill them. So I am asking for a point after which survival isn't possible at all (absolutely no chance of survival). – Siddharth Venu Jan 26 '17 at 18:05
• @SiddharthVenu - there is none. UV radiation (UVB) damages skin...it doesn't harm organs or internals...it's surface radiation. As long as this human colony can put something between their skin and the sun, there is no minimum ozone required. If these humans lacked clothing and roofs, this might be an issue. It'd be a constant part of life (UV protection), but there is no minimum. – Twelfth Jan 26 '17 at 18:08
• And what about dangerous diseases like skin cancer? – Siddharth Venu Jan 26 '17 at 18:11
The Ozone layer protects us from UVB radiation, which is the point of this question. The big difficulty in answering this question is summed up by a 2007 study by the University of Ottawa for the US Department of Health and Human Services: there's not enough information to determine a safe level of sun exposure that imposes minimal risk of skin cancer. (Abstract)
Additionally, there's no conclusive evidence that UVB is actually carcinogenic; we label the whole spectrum as "carcinogenic." We're relatively certain that they are all carcinogens, but we don't know for sure (Paper)
This makes it hard to put a number to the amount of Ozone needed.
Human skin color correlates well with the UV radiation available to a group of people over long time scales: peoples around the equator tend to have higher pigmentation levels, in the rainforest they have middle-range pigmentation, and near the poles there is a lack of significant pigmentation. For "human survival" it would depend on the length of time available for the colony to become adapted to the UV, perhaps over the course of many, many generations.
One thing to remember is that UV is not ionizing radiation. It doesn't chew through the length of the body, knocking bits around. UV causes damage to surfaces through chemical interactions; the skin and eyes would be vulnerable.
Another missing detail in the question: are human like species expected to evolve on this planet, or are we talking about a human colony landing?
If it's a human colony landing, the answer is "zero ozone." Simple fact of the matter is, because UV only damages surfaces, you can have UV-filtering goggles and clothing that reflects UV covering all skin surfaces, and you are fine. This assumes humans evolved elsewhere and arrived on the planet with no ozone, but it's survivable. This would also allow the humans to use construction materials to build domes and filters for food crops which would be damaged by high levels of UV.
If you want life to evolve on this planet into human like critters, that's a vastly different story. UV isn't ionizing, so it doesn't plow through structures; but it's powerful enough to punch right through single celled organisms, and we kinda need those for life to evolve. I've failed to find information about the UV absorption of sea water, but life finds a way, right? So it'd probably start at the deep sea vents, scaled critters could come up to higher levels in the ocean because the scales protect their skin, which would allow them to leave the oceans for the land, and evolve into feathered and hairy beings.
But then we get to a problem; it's believed that humans evolved hairlessness because our environment and new bipedal posture made wearing fur cause problems with cooling the body; we shed the fur so we could sweat more efficiently. On a high UV planet, however, it's likely that human type critters wouldn't be able to lose their fur, or would have to replace it with something else.
But this whole line of thought, while fun to follow, is moot. High-powered mobile animals like us require oxygen as part of our respiration cycle. The ozone layer is maintained by the incoming UV itself: UV radiation reacting with the oxygen in the atmosphere creates new O3; that O3 gets knocked apart by UV, which forms more O2 or more O3. It's a self-regenerating process. So any planet with enough oxygen to support animals will likely have an ozone layer adequate to control the majority of harmful UV radiation.
The concern with the ozone hole was the fear that we humans were emitting chemistry in the air which destroyed O3 faster than UV radiation could create it, disrupting the balance and allowing through more radiation. This is unlikely to be a problem on a pre-industrial world, and post-industrial societies have plenty of methods available to reduce one's exposure to high UV radiation, and to protect their pets, livestock, and crops.
TL;DR: the answer is "Zero" To have humans who look like us, we need an Ozone layer similar to the one on Earth; the lack of ozone wouldn't destroy the ability for life to evolve, but it would cause it to follow a completely different path, and therefore would not be a "human society." Humans landing on the planet will be able to shield themselves from the increased UV simply; the only technology required is the ability to make cloth and UV filtering glass, which any space-based society should be able to produce.
• Could you add a point on what if the human landing party does not have any UV-proof clothing and equipment? What will be the min ozone levels for survival then? – Siddharth Venu Jan 26 '17 at 18:07
• Like a critical point below which there is no chance of survival without any UV-proof clothes – Siddharth Venu Jan 26 '17 at 18:08
• Much more detailed answer than mine +1. I believe UVB is pretty heavily absorbed by water and doesn't make it near as far as visible light does. @SiddharthVenu - UV Proof clothes includes cloth, silk, straw, or a newspaper held above your head...if it casts a shadow, its UV proof – Twelfth Jan 26 '17 at 18:14
• @SiddharthVenu, unless your landing party is naked, any clothing provides UV protection. Remember, UV radiation is NOT ionizing radiation, it's right there with light and infrared as "heat" spectrum non-penetrating radiation. As such, any planet you land on where you can't protect yourself with your clothing will be a planet with more problems you'll need to overcome, like extreme heat or blindingly bright light. The least of your problems will be sunburn. – Zoey Boles Jan 26 '17 at 18:23
|
{}
|
# Factor multiplied matrix with vector
Say I have a matrix multiplication of the form
$$B = A \cdot x$$
or
\begin{align} \begin{pmatrix} a_{11} x_1 + a_{12} x_2 + a_{13} x_3 \\ a_{21} x_1 + a_{22} x_2 + a_{23} x_3 \\ a_{31} x_1 + a_{32} x_2 + a_{33} x_3 \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{pmatrix} \cdot \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} \end{align}
Is there a way in Mathematica to factor $$B$$ in a way where I give it $$x$$ and it returns $$A$$?
• @garej I noticed you tried to edit this question to give some Mathematica code. Although I understand that this was done in the best of intentions, I think it would be better to make a request to the OP directly from the comments section. Jan 2, 2016 at 15:30
• You could use LinearSolve but the system is underdetermined, hence not uniquely solved. Jan 2, 2016 at 17:31
I would simply take the derivative. It's the shortest way to get the matrix from a linear expression, assuming the vector X consists of symbols as written in the question.
First define the matrices and vectors:
X = {x1, x2, x3};
A = Array[a, {3, 3}];
B = A.X
(*
==> {x1 a[1, 1] + x2 a[1, 2] + x3 a[1, 3],
x1 a[2, 1] + x2 a[2, 2] + x3 a[2, 3],
x1 a[3, 1] + x2 a[3, 2] + x3 a[3, 3]}
*)
Given these definitions, this is the only thing you need to do:
D[B, {X}]
(*
==> {{a[1, 1], a[1, 2], a[1, 3]}, {a[2, 1], a[2, 2],
a[2, 3]}, {a[3, 1], a[3, 2], a[3, 3]}}
*)
• tensor derivative - that is very elegant! Jan 4, 2016 at 7:46
I found the solution. It's simply:
Table[Coefficient[B[[i]],#]&/@X,{i,Length[B]}]
This will go through each element of B, and check the coefficient of each element of X for that B element, creating a two dimensional array, which is basically A.
A short form of this that works for me is:
Coefficient[B,#]&/@X
Mathematica is smart enough to recognize that it has to apply Coefficient on each element of B. For non-symmetric matrices, Transpose is needed:
mat = {{-1, 2, 3}, {0, 2, 4}, {1, -1, 2}}
(Coefficient[mat.X, #] & /@ X) // Transpose
{{-1, 2, 3}, {0, 2, 4}, {1, -1, 2}}
• @You forgot to Transpose the result, didn't you? Jan 2, 2016 at 13:52
• @garej Actually I'm not sure. I'm dealing with symmetric/Hermitian matrices so it doesn't really matter for me. If you're sure, please feel free to edit my answer. Jan 2, 2016 at 14:00
• compare Array[a[##] &, {3, 3}] and Coefficient[Array[a[##] &, {3, 3}].{x1, x2, x3}, #] & /@ {x1, x2, x3}. I would add a sample a, say {{1, 3, -1}, {2, 4, 3}, {3, 5, 2}} in the question ;)) Jan 2, 2016 at 14:12
Using the CoefficientArrays[] function works as well.
Input:
mat = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
X = {x1, x2, x3};
Normal@CoefficientArrays[mat.X, X][[2]]
Output:
{{1, 2, 3}, {4, 5, 6}, {7, 8, 9}}
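As an aside, the same recovery can be done numerically outside of Mathematica (a NumPy sketch, reusing the sample matrix from the answer above): for a linear map, evaluating it at the standard basis vectors returns the columns of its matrix.

```python
import numpy as np

A = np.array([[-1, 2, 3],
              [0, 2, 4],
              [1, -1, 2]])
f = lambda v: A @ v  # the linear map whose matrix we pretend not to know

# Column j of A is f(e_j), where e_j is the j-th standard basis vector;
# the rows of the identity matrix are exactly those basis vectors.
recovered = np.column_stack([f(e) for e in np.eye(3)])
```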
|
{}
|
How was math today?
By Murray Bourne, 29 Apr 2006
I'm interested to know how your math lesson was today. I'm not necessarily talking about the math lesson you had in some classroom - you can also learn mathematics in:
• an informal discussion with a friend
• while doing a Web-based tutorial
• while searching the internet for answers to specific questions
• while using a math learning object (something like the math interactives on Interactive Mathematics)
• while solving some problem at work
• while teaching someone else
• whatever
In other words, what works for you and what is a waste of time?
3 Comments on “How was math today?”
1. vandana says:
maths is really fun
2. will says:
Dear Murray: I'm studying for the ham radio extra license and many of the questions use logarithms and math symbols I'm not familiar with. Radio is a hobby with me and I did not study trig and logs in high school. I need a clear path to follow. By the way, I am 92 years young, play the violin, and lots of other things to make life interesting. Please respond! Will
3. Murray says:
Hi Will
Thanks for sharing. Good on you, at 92 years young, for continuing your learning journey!
You can always start on this page: Exponents and Radicals and then work through Exponential and Logarithmic Equations.
Feel free to ask questions if you get stuck.
Comment Preview
HTML: You can use simple tags like <b>, <a href="...">, etc.
To enter math, you can either:
1. Use simple calculator-like input in the following format (surround your math in backticks, or qq on tablet or phone):
a^2 = sqrt(b^2 + c^2)
(See more on ASCIIMath syntax); or
2. Use simple LaTeX in the following format. Surround your math with $$ and $$.
$$\int g dx = \sqrt{\frac{a}{b}}$$
(This is standard simple LaTeX.)
NOTE: You can't mix both types of math entry in your comment.
|
{}
|
# 04. Bayesian Inversion¶
This tutorial focuses on Bayesian inversion, a special type of inverse problem that aims at incorporating prior information in terms of model and data probabilities in the inversion process.
In this case we will be dealing with the same problem that we discussed in 03. Solvers, but instead of defining ad-hoc regularization or preconditioning terms we parametrize and model our input signal in the frequency domain in a probabilistic fashion: the central frequency, amplitude and phase of the three sinusoids have gaussian distributions as follows:
$X(f) = \sum_{i=1}^3 a_i e^{j \phi_i} \delta(f - f_i)$
where $$f_i \sim N(f_{0,i}, \sigma_{f,i})$$, $$a_i \sim N(a_{0,i}, \sigma_{a,i})$$, and $$\phi_i \sim N(\phi_{0,i}, \sigma_{\phi,i})$$.
Based on the above definition, we construct some prior models in the frequency domain, convert each of them to the time domain and use such an ensemble to estimate the prior mean $$\mu_\mathbf{x}$$ and model covariance $$\mathbf{C_x}$$.
We then create our data by sampling the true signal at certain locations and solve the reconstruction problem within a Bayesian framework. Since we are assuming gaussianity in our priors, the equation to obtain the posterior mean can be derived analytically:
$\mathbf{x} = \mathbf{x_0} + \mathbf{C}_x \mathbf{R}^T (\mathbf{R} \mathbf{C}_x \mathbf{R}^T + \mathbf{C}_y)^{-1} (\mathbf{y} - \mathbf{R} \mathbf{x_0})$
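Before switching to the operator-based implementation below, here is a small dense-matrix sanity check of the posterior-mean formula. All sizes and covariances are toy values, not the ones used in this tutorial:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 4                                  # toy model / data sizes
R = np.eye(n)[rng.permutation(n)[:m]]        # restriction (sampling) matrix
Cx = 0.5 * np.eye(n)                         # prior model covariance (toy)
Cy = 0.01 * np.eye(m)                        # data covariance (toy)
x0 = np.zeros(n)                             # prior mean
y = R @ rng.standard_normal(n)               # some observed data

# x = x0 + Cx R^T (R Cx R^T + Cy)^{-1} (y - R x0)
xpost = x0 + Cx @ R.T @ np.linalg.solve(R @ Cx @ R.T + Cy, y - R @ x0)
```

With a small data covariance the posterior mean honors the observations closely at the sampled locations, while unsampled entries stay at the prior mean (here zero, since the toy prior covariance is diagonal).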
import matplotlib.pyplot as plt
# sphinx_gallery_thumbnail_number = 2
import numpy as np
from scipy.sparse.linalg import lsqr
import pylops
plt.close("all")
np.random.seed(10)
Let’s start by creating our true model and prior realizations
def prior_realization(f0, a0, phi0, sigmaf, sigmaa, sigmaphi, dt, nt, nfft):
"""Create realization from prior mean and std for amplitude, frequency and
phase
"""
f = np.fft.rfftfreq(nfft, dt)
df = f[1] - f[0]
ifreqs = [int(np.random.normal(f, sigma) / df) for f, sigma in zip(f0, sigmaf)]
amps = [np.random.normal(a, sigma) for a, sigma in zip(a0, sigmaa)]
phis = [np.random.normal(phi, sigma) for phi, sigma in zip(phi0, sigmaphi)]
# input signal in frequency domain
X = np.zeros(nfft // 2 + 1, dtype="complex128")
    X[ifreqs] = (
        np.array(amps) * np.exp(1j * np.deg2rad(np.array(phis)))  # a_i e^{j phi_i}
    )
# input signal in time domain
FFTop = pylops.signalprocessing.FFT(nt, nfft=nfft, real=True)
x = FFTop.H * X
return x
# Priors
nreals = 100
f0 = [5, 3, 8]
sigmaf = [0.5, 1.0, 0.6]
a0 = [1.0, 1.0, 1.0]
sigmaa = [0.1, 0.5, 0.6]
phi0 = [-90.0, 0.0, 0.0]
sigmaphi = [0.1, 0.2, 0.4]
# Prior models
nt = 200
nfft = 2 ** 11
dt = 0.004
t = np.arange(nt) * dt
xs = np.array(
[
prior_realization(f0, a0, phi0, sigmaf, sigmaa, sigmaphi, dt, nt, nfft)
for _ in range(nreals)
]
)
# True model (taken as one possible realization)
x = prior_realization(f0, a0, phi0, [0, 0, 0], [0, 0, 0], [0, 0, 0], dt, nt, nfft)
We have now a set of prior models in time domain. We can easily use sample statistics to estimate the prior mean and covariance. For the covariance, we perform a second step where we average values around the main diagonal for each row and find a smooth, compact filter that we use to define a convolution linear operator that mimics the action of the covariance matrix on a vector
x0 = np.average(xs, axis=0)
Cm = ((xs - x0).T @ (xs - x0)) / nreals
N = 30 # length of decorrelation
diags = np.array([Cm[i, i - N : i + N + 1] for i in range(N, nt - N)])
diag_ave = np.average(diags, axis=0)
# add a taper at the end to avoid edge effects
diag_ave *= np.hamming(2 * N + 1)
fig, ax = plt.subplots(1, 1, figsize=(12, 4))
ax.plot(t, xs.T, "r", lw=1)
ax.plot(t, x0, "g", lw=4)
ax.plot(t, x, "k", lw=4)
ax.set_title("Prior realizations and mean")
ax.set_xlim(0, 0.8)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
im = ax1.imshow(
Cm, interpolation="nearest", cmap="seismic", extent=(t[0], t[-1], t[-1], t[0])
)
ax1.set_title(r"$\mathbf{C}_m^{prior}$")
ax1.axis("tight")
ax2.plot(np.arange(-N, N + 1) * dt, diags.T, "--r", lw=1)
ax2.plot(np.arange(-N, N + 1) * dt, diag_ave, "k", lw=4)
ax2.set_title("Averaged covariance 'filter'")
Out:
Text(0.5, 1.0, "Averaged covariance 'filter'")
Let’s define now the sampling operator as well as create our covariance matrices in terms of linear operators. This may not be strictly necessary here but shows how even Bayesian-type of inversion can very easily scale to large model and data spaces.
# Sampling operator
perc_subsampling = 0.2
ntsub = int(np.round(nt * perc_subsampling))
iava = np.sort(np.random.permutation(np.arange(nt))[:ntsub])
iava[-1] = nt - 1 # assume we have the last sample to avoid instability
Rop = pylops.Restriction(nt, iava, dtype="float64")
# Covariance operators
Cm_op = pylops.signalprocessing.Convolve1D(nt, diag_ave, offset=N)
sigmad = 0.2  # noise standard deviation (assumed value, needed for the data covariance)
Cd_op = sigmad ** 2 * pylops.Identity(ntsub)
We model now our data and add noise that respects our prior definition
n = np.random.normal(0, sigmad, nt)
y = Rop * x
yn = Rop * (x + n)
# masked versions over the full time axis, used for plotting below
ymask = Rop.mask(x)
ynmask = Rop.mask(x + n)
First we apply the Bayesian inversion equation
xbayes = x0 + Cm_op * Rop.H * (
lsqr(Rop * Cm_op * Rop.H + Cd_op, yn - Rop * x0, iter_lim=400)[0]
)
# Visualize
fig, ax = plt.subplots(1, 1, figsize=(12, 5))
ax.plot(t, x, "k", lw=6, label="true")
ax.plot(t, ymask, ".k", ms=25, label="available samples")
ax.plot(t, ynmask, ".r", ms=25, label="available noisy samples")
ax.plot(t, xbayes, "r", lw=3, label="bayesian inverse")
ax.legend()
ax.set_title("Signal")
ax.set_xlim(0, 0.8)
Out:
(0.0, 0.8)
So far we have been able to estimate our posterior mean. What about its uncertainties (i.e., the posterior covariance)?
In real-life applications it is very difficult (if not impossible) to directly compute the posterior covariance matrix. It is much more useful to create a set of models that sample the posterior probability. We can do that by solving our problem several times using different prior realizations as starting guesses:
xpost = [
x0
+ Cm_op
* Rop.H
* (lsqr(Rop * Cm_op * Rop.H + Cd_op, yn - Rop * x0, iter_lim=400)[0])
for x0 in xs[:30]
]
xpost = np.array(xpost)
x0post = np.average(xpost, axis=0)
Cm_post = ((xpost - x0post).T @ (xpost - x0post)) / nreals
# Visualize
fig, ax = plt.subplots(1, 1, figsize=(12, 5))
ax.plot(t, x, "k", lw=6, label="true")
ax.plot(t, xpost.T, "--r", lw=1)
ax.plot(t, x0post, "r", lw=3, label="bayesian inverse")
ax.plot(t, ymask, ".k", ms=25, label="available samples")
ax.plot(t, ynmask, ".r", ms=25, label="available noisy samples")
ax.legend()
ax.set_title("Signal")
ax.set_xlim(0, 0.8)
fig, ax = plt.subplots(1, 1, figsize=(5, 4))
im = ax.imshow(
Cm_post, interpolation="nearest", cmap="seismic", extent=(t[0], t[-1], t[-1], t[0])
)
ax.set_title(r"$\mathbf{C}_m^{posterior}$")
ax.axis("tight")
Out:
(0.0, 0.796, 0.796, 0.0)
Note that here we have been able to compute a sample posterior covariance from its estimated samples. By displaying it we can see how both the overall variances and the correlations between different parameters have become narrower compared to their prior counterparts.
Total running time of the script: ( 0 minutes 1.737 seconds)
Gallery generated by Sphinx-Gallery
|
{}
|
My younger son could use some algebra practice. There are only so many problems in his book and making up new ones that have simple integer answers is harder and more time-consuming than you’d think. So, as with elementary math, I made up an HTML/JavaScript page that generates a new set of problems every time it’s (re)loaded. I print some out, have him do one or two, and go over them with him.
The sheets look like this:
I use a pretty big font and provide plenty of space below each problem to work out the solution. This same sheet can be used to practice factoring, completing the square, and using the quadratic formula. There’s a Boolean flag you can set if you just want quadratic expressions, not equations.
Here’s the source code:
xml:
1: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
2: "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
3: <html>
4: <!--
5: A math practice sheet for factoring quadratic expressions and
7: Released under the Creative Commons Attribution-Share Alike 3.0
9: by Dr. Drang (http://www.leancrew.com).
10: -->
12: <title>Math Practice</title>
13: <style type="text/css">
14: h1 {
15: text-align: center;
16: font-family: Sans-Serif;
17: font-weight: bold;
18: font-size: 36px;
19: margin: 20px 0 30px 0;
21: }
22: table {
23: width: 100%;
24: margin-left: auto;
25: margin-right: auto;
26: font-family: Sans-Serif;
27: font-size: 28px;
28: }
29: td {
30: height: 9.5em;
31: width: 15em;
32: vertical-align: top;
33: text-align: center;
34: }
35: </style>
36: <script>
37: function single_problem() {
38: // Set to true for equations, false for expressions.
39: var eqn = true;
40:
41: // Construct the parts of the binomial (a1*x + c1)(a2*x + c2)
42: // Coefficients. Small numbers, usually 1.
43: var coeffs = [1, 1, 1, 2, 3, 4]
44: var a1 = coeffs[Math.floor(Math.random()*6)];
45: var a2 = coeffs[Math.floor(Math.random()*6)];
46:
47: // Constants
48: var c1 = Math.floor(Math.random()*9 + 1);
49: var c2 = Math.floor(Math.random()*9 + 1);
50:
51: // Change the signs of the constants at random.
52: if (Math.random() < .5) {
53: c1 = -c1;
54: }
55: if (Math.random() < .5) {
56: c2 = -c2;
57: }
58:
59: // Put in standard form
60: var A = a1*a2;
61: var B = a1*c2 + a2*c1;
62: var C = c1*c2;
63: var opB = '+', opC = '+';
64: var quad = 'x<sup>2</sup> ';
65: var lin = 'x ';
66:
67: // Determine the operators.
68: if (C < 0) {
69: opC = '−';
70: C = -C;
71: }
72:
73: if (B < 0) {
74: opB = '−';
75: B = -B;
76: }
77: else if (B == 0) {
78: opB = '';
79: B = '';
80: lin = '';
81: }
82:
83: // Don't show coefficients that are 1.
84: if (A == 1) A = '';
85: if (B == 1) B = '';
86:
87: var term = A + quad + opB + ' ' + B + lin + opC + ' ' + C;
88:
89: if (eqn) return term + ' = 0';
90: else return term;
91:
92: }
93: </script>
95: <body>
96: <h1>Math Practice</h1>
97: <table id="whole">
98: <script>
99: for (i=0; i<3; i++){
100: document.write("<tr>");
101: for (j=0; j<2; j++) {
102: document.write('<td>' + single_problem() + '</td>');
103: }
104: document.write('</tr>');
105: }
106: </script>
107: </table>
108:
109: </body>
110: </html>
The logic is fairly straightforward. I start by using a random number generator to set the constants in the expression
and then multiply the two binomials to get the quadratic expression
That’s all pretty much handled in Lines 41–62. The rest of the code is concerned with making the problems look the way a teacher or textbook would present them: with real minus signs instead of hyphens, suppressing the coefficient when it’s one, eliminating a term entirely if its coefficient is zero. A couple of things you may want to change:
1. As I said earlier, there’s a Boolean flag that determines whether the output is a set of expressions or a set of equations. That’s set on Line 39.
2. The $a_1$ and $a_2$ coefficients are generated on Lines 43-45. By choosing the coefficients randomly from the array [1, 1, 1, 2, 3, 4], I’m keeping the leading $A$ coefficient small, with one being the most common value. I did it this way because it makes the equations look more like those you see in books and on tests, but you can change to whatever you like by fiddling with the array.
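The same generation logic ports to a few lines of Python if you would rather print problems from a script. This is a rough sketch of the JavaScript above, not part of the original page; the HTML presentation niceties (suppressed coefficients of 1, dropped zero terms) are omitted.

```python
import random

def quadratic_problem(eqn=True):
    """Build one factorable problem from (a1*x + c1)(a2*x + c2)."""
    coeffs = [1, 1, 1, 2, 3, 4]              # small leading coefficients; 1 is most common
    a1 = random.choice(coeffs)
    a2 = random.choice(coeffs)
    c1 = random.randint(1, 9) * random.choice([-1, 1])
    c2 = random.randint(1, 9) * random.choice([-1, 1])
    # expand to standard form A*x^2 + B*x + C
    A = a1 * a2
    B = a1 * c2 + a2 * c1
    C = c1 * c2
    text = f"{A}x^2 {'+' if B >= 0 else '-'} {abs(B)}x {'+' if C >= 0 else '-'} {abs(C)}"
    return (A, B, C), (text + " = 0") if eqn else text

(A, B, C), problem = quadratic_problem()
print(problem)
```

Because each problem is built from its factors, the discriminant B² - 4AC equals (a1·c2 - a2·c1)², a perfect square, so every generated equation factors over the integers.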
This will print on a single sheet of paper if you open it in Safari or Chrome on a Mac. You’ll have to turn off the printing of headers and footers, and you’ll probably have to set the margins to Minimal in Chrome to keep it from bleeding over onto another page. I think I could get it to print on a single page in Firefox, but I don’t know (and really don’t want to know) how to suppress headers and footers there. If you can’t get it to fit on one page, adjust the top and bottom header margins in Line 19 and/or the height of the <td> cells in Line 30.
I think you’re best off with your own local copy of the page, but you can also just point your browser to the copy on my server. Every time you reload, you’ll get a new set of problems.
If, by the way, you’re interested in similar practice sheets for more elementary math, I have a page with links to several others.
|
|
Lemma 43.7.2. Let $f : X \to Y$ and $g : Y \to Z$ be flat morphisms of varieties. Then $g \circ f$ is flat and $f^* \circ g^* = (g \circ f)^*$ as maps $Z_ k(Z) \to Z_{k + \dim (X) - \dim (Z)}(X)$.
Proof. Special case of Chow Homology, Lemma 42.14.3. $\square$
|
{}
|
# How to Convert a Time Series to a Supervised Learning Problem in Python
https://machinelearningmastery.com/convert-time-series-supervised-learning-problem-python/
Machine learning methods like deep learning can be used for time series forecasting.
Before machine learning can be used, time series forecasting problems must be re-framed as supervised learning problems. From a sequence to pairs of input and output sequences.
In this tutorial, you will discover how to transform univariate and multivariate time series forecasting problems into supervised learning problems for use with machine learning algorithms.
After completing this tutorial, you will know:
• How to develop a function to transform a time series dataset into a supervised learning dataset.
• How to transform univariate time series data for machine learning.
• How to transform multivariate time series data for machine learning.
Let’s get started.
## Time Series vs Supervised Learning
Before we get started, let’s take a moment to better understand the form of time series and supervised learning data.
A time series is a sequence of numbers that are ordered by a time index. This can be thought of as a list or column of ordered values.
For example:
A supervised learning problem is comprised of input patterns (X) and output patterns (y), such that an algorithm can learn how to predict the output patterns from the input patterns.
For example:
For more on this topic, see the post:
## Pandas shift() Function
A key function to help transform time series data into a supervised learning problem is the Pandas shift() function.
Given a DataFrame, the shift() function can be used to create copies of columns that are pushed forward (rows of NaN values added to the front) or pulled back (rows of NaN values added to the end).
This is the behavior required to create columns of lag observations as well as columns of forecast observations for a time series dataset in a supervised learning format.
Let’s look at some examples of the shift function in action.
We can define a mock time series dataset as a sequence of 10 numbers, in this case a single column in a DataFrame as follows:
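For instance, a single-column DataFrame holding the integers 0 through 9 (one way to write it):

```python
from pandas import DataFrame

# mock time series: 10 ordered observations in one column
df = DataFrame()
df['t'] = [x for x in range(10)]
print(df)
```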
Running the example prints the time series data with the row indices for each observation.
We can shift all the observations down by one time step by inserting one new row at the top. Because the new row has no data, we can use NaN to represent “no data”.
The shift function can do this for us and we can insert this shifted column next to our original series.
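A sketch of that, with the shifted copy placed alongside the original column:

```python
from pandas import DataFrame

df = DataFrame()
df['t'] = [x for x in range(10)]
# shift(1) pushes all observations down one step; NaN appears at the top
df['t-1'] = df['t'].shift(1)
print(df)
```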
Running the example gives us two columns in the dataset. The first with the original observations and a new shifted column.
We can see that shifting the series forward one time step gives us a primitive supervised learning problem, although with X and y in the wrong order. Ignore the column of row labels. The first row would have to be discarded because of the NaN value. The second row shows the input value of 0.0 in the second column (input or X) and the value of 1 in the first column (output or y).
We can see that if we can repeat this process with shifts of 2, 3, and more, how we could create long input sequences (X) that can be used to forecast an output value (y).
The shift operator can also accept a negative integer value. This has the effect of pulling the observations up by inserting new rows at the end. Below is an example:
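A sketch of the negative shift:

```python
from pandas import DataFrame

df = DataFrame()
df['t'] = [x for x in range(10)]
# shift(-1) pulls all observations up one step; NaN appears at the end
df['t+1'] = df['t'].shift(-1)
print(df)
```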
Running the example shows a new column with a NaN value as the last value.
We can see that the forecast column can be taken as an input (X) and the second as an output value (y). That is the input value of 0 can be used to forecast the output value of 1.
Technically, in time series forecasting terminology the current time (t) and future times (t+1, t+n) are forecast times and past observations (t-1, t-n) are used to make forecasts.
We can see how positive and negative shifts can be used to create a new DataFrame from a time series with sequences of input and output patterns for a supervised learning problem.
This permits not only classical X -> y prediction, but also X -> Y where both input and output can be sequences.
Further, the shift function also works on so-called multivariate time series problems. That is where instead of having one set of observations for a time series, we have multiple (e.g. temperature and pressure). All variates in the time series can be shifted forward or backward to create multivariate input and output sequences. We will explore this more later in the tutorial.
## The series_to_supervised() Function
We can use the shift() function in Pandas to automatically create new framings of time series problems given the desired length of input and output sequences.
This would be a useful tool as it would allow us to explore different framings of a time series problem with machine learning algorithms to see which might result in better performing models.
In this section, we will define a new Python function named series_to_supervised() that takes a univariate or multivariate time series and frames it as a supervised learning dataset.
The function takes four arguments:
• data: Sequence of observations as a list or 2D NumPy array. Required.
• n_in: Number of lag observations as input (X). Values may be between [1..len(data)]. Optional. Defaults to 1.
• n_out: Number of observations as output (y). Values may be between [0..len(data)-1]. Optional. Defaults to 1.
• dropnan: Boolean whether or not to drop rows with NaN values. Optional. Defaults to True.
The function returns a single value:
• return: Pandas DataFrame of series framed for supervised learning.
The new dataset is constructed as a DataFrame, with each column suitably named both by variable number and time step. This allows you to design a variety of different time step sequence type forecasting problems from a given univariate or multivariate time series.
Once the DataFrame is returned, you can decide how to split the rows of the returned DataFrame into X and y components for supervised learning any way you wish.
The function is defined with default parameters so that if you call it with just your data, it will construct a DataFrame with t-1 as X and t as y.
The function is confirmed to be compatible with Python 2 and Python 3.
The complete function is listed below, including function comments.
Can you see obvious ways to make the function more robust or more readable?
Now that we have the whole function, we can explore how it may be used.
## One-Step Univariate Forecasting
It is standard practice in time series forecasting to use lagged observations (e.g. t-1) as input variables to forecast the current time step (t).
This is called one-step forecasting.
The example below demonstrates a one lag time step (t-1) to predict the current time step (t).
Running the example prints the output of the reframed time series.
We can see that the observations are named “var1” and that the input observation is suitably named (t-1) and the output time step is named (t).
We can also see that rows with NaN values have been automatically removed from the DataFrame.
We can repeat this example with an arbitrary number length input sequence, such as 3. This can be done by specifying the length of the input sequence as an argument; for example:
The complete example is listed below.
Again, running the example prints the reframed series. We can see that the input sequence is in the correct left-to-right order with the output variable to be predicted on the far right.
## Multi-Step or Sequence Forecasting
A different type of forecasting problem is using past observations to forecast a sequence of future observations.
This may be called sequence forecasting or multi-step forecasting.
We can frame a time series for sequence forecasting by specifying another argument. For example, we could frame a forecast problem with an input sequence of 2 past observations to forecast 2 future observations as follows:
The complete example is listed below:
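Written out with raw shift() calls so the sketch stands alone, the (n_in=2, n_out=2) framing looks like this; it mirrors what series_to_supervised(values, 2, 2) produces:

```python
from pandas import DataFrame, concat

values = DataFrame([x for x in range(10)])
# two lag columns in, two forecast columns out: (t-2, t-1) -> (t, t+1)
agg = concat([values.shift(2), values.shift(1), values, values.shift(-1)], axis=1)
agg.columns = ['var1(t-2)', 'var1(t-1)', 'var1(t)', 'var1(t+1)']
agg = agg.dropna()
print(agg)
```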
Running the example shows the differentiation of input (t-n) and output (t+n) variables with the current observation (t) considered an output.
## Multivariate Forecasting
Another important type of time series is called multivariate time series.
This is where we may have observations of multiple different measures and an interest in forecasting one or more of them.
For example, we may have two sets of time series observations obs1 and obs2 and we wish to forecast one or both of these.
We can call series_to_supervised() in exactly the same way.
For example:
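The ob1 and ob2 values below are made up for illustration; the frame is again built with raw shift() calls and uses the same column names series_to_supervised() would assign:

```python
from pandas import DataFrame, concat

raw = DataFrame()
raw['ob1'] = [x for x in range(10)]
raw['ob2'] = [x for x in range(50, 60)]

# one lag of both variables as input, current step of both as output
agg = concat([raw.shift(1), raw], axis=1)
agg.columns = ['var1(t-1)', 'var2(t-1)', 'var1(t)', 'var2(t)']
agg = agg.dropna()
print(agg)
```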
Running the example prints the new framing of the data, showing an input pattern with one time step for both variables and an output pattern of one time step for both variables.
Again, depending on the specifics of the problem, the division of columns into X and Y components can be chosen arbitrarily, such as if the current observation of var1 was also provided as input and only var2 was to be predicted.
You can see how this may be easily used for sequence forecasting with multivariate time series by specifying the length of the input and output sequences as above.
For example, below is an example of a reframing with 1 time step as input and 2 time steps as forecast sequence.
Running the example shows the large reframed DataFrame.
Experiment with your own dataset and try multiple different framings to see what works best.
## Summary
In this tutorial, you discovered how to reframe time series datasets as supervised learning problems with Python.
Specifically, you learned:
• About the Pandas shift() function and how it can be used to automatically define supervised learning datasets from time series data.
• How to reframe a univariate time series into one-step and multi-step supervised learning problems.
• How to reframe multivariate time series into one-step and multi-step supervised learning problems.
Do you have any questions?
# Mueen-Keogh (MK) Algorithm
Speeded up Brute Force Motif Discovery:
Github:https://github.com/saifuddin778/mkalgo
Generalization to multiple reference points:
https://github.com/nicholasg3/motif-mining/tree/95bbb05ac5d0f9e90134a67a789ea7e607f22cea
for j = 1 to m-offset instead of for j = 1 to R
Time Series Clustering with Dynamic Time Warping (DTW)
https://github.com/goodmattg/wikipedia_kaggle
|
|
# Would it be a viable option to make the engine generate lift?
I was thinking that if you made the engines generate lift it might help a tiny bit with the amount of time to takeoff, and thus lowering runway lengths.
• if you mean helicopters, they are already invented ;-) Oct 27 '17 at 21:38
• For the complexity it will need to help more than "a tiny bit". There are also safety issues... what happens if there's an engine failure? Can the plane still climb safely?
– fooot
Oct 27 '17 at 21:40
• that's called "short take-off", as in STOVL. google that if you don't know what that means Oct 27 '17 at 22:33
• short answer is that wings are really good at generating lift. Look up "lift to drag ratio". It can be 10 - 20. Which means for every 1 pound of thrust generated by the engine, you get 10 - 20 pounds of lift. So you let the wings do what they are good at (generating lift) and you let the engines do what they are good at (generating thrust). That is the most efficient design. Oct 28 '17 at 2:00
Good thought, and it does happen. Tilt the jet exhaust or propeller downwards at a shallow angle $\phi$, and lift of $T \sin \phi$ is created while horizontal thrust is reduced to $T \cos \phi$. If we take a small angle, say 3 degrees:
• $\Delta L$ = T $\cdot$ sin(3°) = 0.052 T
• $T_{horiz}$ = T $\cdot$ cos(3°) = 0.9986 T
So 5.2% of engine thrust is converted into lift, for a loss of only 0.14% of horizontal thrust. Free lift! Slight angles like this are found in aircraft installations, for instance in tail-mounted jet engines which are angled horizontally to reduce the yawing angle with a failed engine.
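The numbers above are easy to reproduce (a quick check, with thrust normalized to T = 1):

```python
import math

T = 1.0                           # engine thrust (normalized)
phi = math.radians(3)             # downward tilt of the thrust line

lift_gain = T * math.sin(phi)     # component converted into lift
thrust_kept = T * math.cos(phi)   # component remaining as horizontal thrust

print(f"lift: {lift_gain:.4f} T, horizontal thrust: {thrust_kept:.4f} T")
```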
With engines mounted underneath the wing, the downwards-pointed thrust would help a tiny bit with lift and reduce the take-off length. As @mins points out, with tail-mounted engines the nose-down pitching moment may counteract the lift benefits.
• That happens too, and quite clearly, in present-day, light pusher autogyros, where, in slow flight, the angle between the horizontal and the propeller axis can reach 20º and sometimes even more... Oct 27 '17 at 23:07
• Ah yes and then the propeller helps with the CG management. Oct 28 '17 at 3:27
• You haven't told whether this vertical lift at the tail has an impact on required takeoff length and takeoff time. It seems this vertical lift is acting against elevators during takeoff, and maybe the takeoff is longer.
– mins
Oct 28 '17 at 8:18
• @mins The moment arm of the horizontal tail is longer, net thrust is still up :) Oct 30 '17 at 3:34
|
|
# When the integer n is divided by 6, the remainder is 3, Which of the f
Retired Moderator
Joined: 22 Jun 2014
Posts: 1102
Location: India
Concentration: General Management, Technology
GMAT 1: 540 Q45 V20
GPA: 2.49
WE: Information Technology (Computer Software)
When the integer n is divided by 6, the remainder is 3, Which of the f [#permalink]
### Show Tags
24 Dec 2018, 03:38
When the integer n is divided by 6, the remainder is 3, Which of the following is NOT a multiple of 6?
(A) n – 3
(B) n + 3
(C) 2n
(D) 3n
(E) 4n
Project PS Butler : Question #97
_________________
Intern
Joined: 28 Oct 2017
Posts: 8
Re: When the integer n is divided by 6, the remainder is 3, Which of the f [#permalink]
24 Dec 2018, 04:12
Since n = 6q + 3, where q is the quotient, n - 3 = 6q is a multiple of 6.
Now, n + 3 = 6q + 6, so n + 3 is also a multiple.
2n = 2(6q + 3) = 12q + 6 is also a multiple, and so is 4n = 2(2n).
We can see that only 3n is not a multiple.
Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 9446
Location: Pune, India
Re: When the integer n is divided by 6, the remainder is 3, Which of the f [#permalink]
24 Dec 2018, 04:37
HKD1710 wrote:
When the integer n is divided by 6, the remainder is 3, Which of the following is NOT a multiple of 6?
(A) n – 3
(B) n + 3
(C) 2n
(D) 3n
(E) 4n
Project PS Butler : Question #97
Check out divisibility concept here:
https://www.veritasprep.com/blog/2011/0 ... unraveled/
When n is split into groups of 6, 3 is left over.
If you remove 3 from n, there will be only groups of 6 leftover. So (n - 3) will be divisible by 6.
If you add 3 to n, the remaining will also form a complete group of 6 and hence there will be only groups of 6. So (n + 3) will be divisible by 6.
When you double n, 3 left over will become 6 and so we will have only groups of 6. Hence 2n will be divisible by 6.
If 2n is divisible by 6, 4n must be divisible by 6 too.
When you triple n, 3 will still be left over. So 3n will not be divisible by 6.
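For anyone who wants to double-check this reasoning numerically, a quick brute-force pass over every qualifying n below 100 (just a verification script):

```python
# n % 6 == 3 means n = 6q + 3; check each answer choice for all such n < 100
for n in range(3, 100, 6):
    assert n % 6 == 3
    assert (n - 3) % 6 == 0      # (A) multiple of 6
    assert (n + 3) % 6 == 0      # (B) multiple of 6
    assert (2 * n) % 6 == 0      # (C) multiple of 6
    assert (4 * n) % 6 == 0      # (E) multiple of 6
    assert (3 * n) % 6 == 3      # (D) never a multiple of 6
print("only 3n fails -> answer (D)")
```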
_________________
Karishma
Veritas Prep GMAT Instructor
VP
Joined: 09 Mar 2016
Posts: 1273
Re: When the integer n is divided by 6, the remainder is 3, Which of the f [#permalink]
24 Dec 2018, 13:24
HKD1710 wrote:
When the integer n is divided by 6, the remainder is 3, Which of the following is NOT a multiple of 6?
(A) n – 3
(B) n + 3
(C) 2n
(D) 3n
(E) 4n
Project PS Butler : Question #97
The integer n divided by 6 leaves remainder 3.
This means that $$n = 6q+3$$
Since 6q is even and 3 is odd, n must be an odd number.
To check this, just plug in any value for q:
n = 9 = 6(1)+3
Now test 9 in the answer choices:
(A) 9 – 3 = 6 (MULTIPLE OF 6)
(B) 9 + 3 = 12 (MULTIPLE OF 6)
(C) 2(9) = 18 (MULTIPLE OF 6)
(D) 3(9) = 27 (27 IS NOT A MULTIPLE OF 6)
(E) 4(9) = 36 (MULTIPLE OF 6)
IMO: D
VP
Joined: 07 Dec 2014
Posts: 1206
When the integer n is divided by 6, the remainder is 3, Which of the f [#permalink]
24 Dec 2018, 18:32
Quote:
When the integer n is divided by 6, the remainder is 3, Which of the following is NOT a multiple of 6?
(A) n – 3
(B) n + 3
(C) 2n
(D) 3n
(E) 4n
n=6q+3→
3n=18q+9
18q is a multiple of 6
9 is not a multiple of 6
so 18q+9 is not a multiple of 6
3n
D
|
|
Thread: Is there a closed form solution to this infinite series?
1. Is there a closed form solution to this infinite series?
Is there a closed form solution to this infinite series?
x is a number between 0 and 1.
n can take any value between 1 and infinity.
1/n + x/(n+2) + x^2/(n+4) + x^3/(n+6) + ...................
Thanks!
2. Originally Posted by Titian
Is there a closed form solution to this infinite series?
x is a number between 0 and 1.
n can take any value between 1 and infinity.
1/n + x/(n+2) + x^2/(n+4) + x^3/(n+6) + ...................
Thanks!
Wolfram Alpha
Lerch Transcendent -- from Wolfram MathWorld
CB
3. Originally Posted by Titian
Is there a closed form solution to this infinite series?
x is a number between 0 and 1.
n can take any value between 1 and infinity.
1/n + x/(n+2) + x^2/(n+4) + x^3/(n+6) + ...................
Thanks!
Let's suppose that n is a 'natural number' so that we have a family of functions defined as...
$\displaystyle \varphi_{n} (x) = \sum_{k=0}^{\infty} \frac{x^{k}}{n+2 k}$ (1)
First we set $\xi= \sqrt{x}$ and then we start with n=1…
$\displaystyle \varphi_{1} (\xi)= 1 + \frac{\xi^{2}}{3} + \frac{\xi^{4}}{5} + \frac{\xi^{6}}{7} + ... = \frac{1}{\xi}\ (\xi + \frac{\xi^{3}}{3} + \frac{\xi^{5}}{5} + \frac{\xi^{7}}{7} + ...) =$
$\displaystyle = \frac{1}{\xi}\ \frac{\ln (1+\xi)- \ln (1-\xi)}{2} = \frac{1}{2 \xi}\ \ln \frac{1+\xi}{1-\xi}$ (1)
Now for n=2...
$\displaystyle \varphi_{2} (\xi)= \frac{1}{2} + \frac{\xi^{2}}{4} + \frac{\xi^{4}}{6} + \frac{\xi^{6}}{8} +...= \frac{1}{\xi^{2}}\ (\frac{\xi^{2}}{2} + \frac{\xi^{4}}{4} + \frac{\xi^{6}}{6} + \frac{\xi^{8}}{8}+ ...)=$
$\displaystyle = \frac{1}{\xi^{2}}\ \frac{- \ln (1+\xi) - \ln (1-\xi)}{2} = \frac{1}{2 \xi^{2}} \ \ln \frac{1}{(1+\xi)\ (1-\xi)}$ (2)
Now for n=3...
$\displaystyle \varphi_{3} (\xi)= \frac{1}{3} + \frac{\xi^{2}}{5} + \frac{\xi^{4}}{7} + \frac{\xi^{6}}{9} +...= \frac{1}{\xi^{3}}\ (\frac{\xi^{3}}{3} + \frac{\xi^{5}}{5} + \frac{\xi^{7}}{7} + \frac{\xi^{9}}{9}+ ...)=$
$\displaystyle = \frac{1}{\xi^{3}}\ \{\frac{\ln (1+\xi) - \ln (1-\xi)}{2} -\xi\} = \frac{1}{2 \xi^{3}}\ \ln \frac{1+\xi}{1-\xi} - \frac{1}{\xi^{2}}$ (3)
And now for n=4...
$\displaystyle \varphi_{4} (\xi)= \frac{1}{4} + \frac{\xi^{2}}{6} + \frac{\xi^{4}}{8} + \frac{\xi^{6}}{10} +...= \frac{1}{\xi^{4}}\ (\frac{\xi^{4}}{4} + \frac{\xi^{6}}{6} + \frac{\xi^{8}}{8} + \frac{\xi^{10}}{10}+ ...)=$
$\displaystyle = \frac{1}{\xi^{4}}\ \{\frac{- \ln (1+\xi) - \ln (1-\xi)}{2} -\frac{\xi^{2}}{2} \} = \frac{1}{2 \xi^{4}}\ \{\ln \frac{1}{(1+\xi)\ (1-\xi)}- \frac{1}{\xi^{2}}\}$ (4)
Observing (1), (2), (3) and (4) it seems that the general explicit expression for $\varphi_{n} (x)$ is...
$\displaystyle \varphi_{n}(x)=\left\{\begin{array}{ll} x^{-\frac{n}{2}}\ \{\frac{1}{2}\ \ln \frac{1}{(1+x^{\frac{1}{2}})\ (1-x^{\frac{1}{2}})} - \sum_{k=1}^{\frac{n}{2}-1} \frac{x^{k}}{2 k} \} ,\,\, n\ even \\{}\\x^{-\frac{n}{2}}\ \{\frac{1}{2}\ \ln \frac{1+ x^{\frac{1}{2}}}{1-x^{\frac{1}{2}}} - \sum_{k=1}^{\frac{n-1}{2}} \frac{x^{k -\frac{1}{2}}}{2 k-1} \} ,\,\, n\ odd\end{array}\right.$ (5)
Kind regards
$\chi$ $\sigma$
P.S. : also the expressions like $\displaystyle \sum_{k} \frac{x^{k}}{2 k}$ can be written as functions of x...
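The closed forms can be sanity-checked numerically against the defining series; for example, for n = 1 and n = 2 at x = 0.25 (a quick Python check, not part of the derivation):

```python
import math

def phi_series(n, x, terms=200):
    # partial sum of the defining series: sum_{k>=0} x^k / (n + 2k)
    return sum(x ** k / (n + 2 * k) for k in range(terms))

x = 0.25
xi = math.sqrt(x)

# closed forms from the post for n = 1 and n = 2
phi1_closed = math.log((1 + xi) / (1 - xi)) / (2 * xi)
phi2_closed = math.log(1 / ((1 + xi) * (1 - xi))) / (2 * xi ** 2)

assert abs(phi_series(1, x) - phi1_closed) < 1e-12
assert abs(phi_series(2, x) - phi2_closed) < 1e-12
print(phi1_closed, phi2_closed)
```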
4. Originally Posted by chisigma
Let's suppose that n is a 'natural number' so that we have a family of functions defined as...
$\displaystyle \varphi_{n} (x) = \sum_{k=0}^{\infty} \frac{x^{k}}{n+2 k}$ (1)
OK, I suppose the use of n indicates that the OP wants it to be a natural number.
CB
|
|
Algebra
# Rational Equations
Solve the following for $x$:
$\frac{ x-14 } { x - 7 } = 1 + \frac{ 14 } { x - 28}.$
Solve for $x$:
$\frac{5}{x } + \frac{ 3x + 8 }{ x^2 - 8 x } = \frac{7 x + 8 }{ x^2- 8 x} .$
Solve the following for $x:$
$\frac{ 28 } { x^2 - 4x } = 1 + \frac{ 7 } { x - 4}.$
How many solutions are there for
$\frac { 18 }{ x^2 + 18x } + \frac{ 18 }{ x^2 + 54x + 648 } = - \frac{ 1}{ 9 } ?$
Solve the following for $x:$
$\frac{ 6 } { x - 6 } = \frac{ x } { 8 } - 1.$
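As a quick sanity check on the first problem: one can verify that $x = 14$ satisfies both sides (using exact rational arithmetic to avoid floating-point issues; this checks a candidate value, it is not a solution method):

```python
from fractions import Fraction

# candidate solution for (x - 14)/(x - 7) = 1 + 14/(x - 28)
x = Fraction(14)

lhs = (x - 14) / (x - 7)
rhs = 1 + Fraction(14) / (x - 28)

assert x != 7 and x != 28   # not an excluded value of the domain
assert lhs == rhs           # both sides agree (both are 0 here)
print(lhs, rhs)
```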
|
|
We are given the graphs of $f(x)$ and $g(x)$ on a coordinate plane, and want to describe the transformation from the graph of $f(x)$ to the graph of $g(x).$ Let's find some corresponding points in the graphs and measure the distance between them.
We see that the graph of $g(x)$ is a horizontal translation $2$ units left of the graph of $f(x).$ We can also see this using the function rules. The function $g(x)$ can be written in the form $y=f(x-h).$ $g(x)=f(x+2) \quad \Leftrightarrow \quad g(x)=f(x-(\text{-} 2))$ In our function, $h=\text{-} 2,$ which again means the graph of $g(x)$ is translated $2$ units to the left of the graph of $f(x).$
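The relationship $g(x)=f(x+2)$ can be checked with any sample function; $f(x)=x^2+1$ below is an arbitrary stand-in, not the function from the exercise:

```python
def f(x):
    # sample parent function; any f works for checking the shift
    return x ** 2 + 1

def g(x):
    # g(x) = f(x + 2) = f(x - (-2)), i.e. h = -2: a shift 2 units LEFT
    return f(x + 2)

# every point (a, f(a)) on the graph of f appears at (a - 2, f(a)) on g
for a in range(-5, 6):
    assert g(a - 2) == f(a)
```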
|
|
# Checking the type of a function at compile time
## Recommended Posts
I stumbled on a bit of code today that I'm thinking there may be a solution to. I need to export a virtual function from a class and be able to call it as though it were virtual. I know that this isn't possible and the only way around it is to create an additional function that passes the parameters to the virtual function, ie
class Obj
{
public:
virtual void Func(int);
__declspec(dllexport) void VFunc(int a) {
Func(a);
}
};
I happen to be doing this an awful lot (for reasons I won't go into) and there are other people working on the project, so ideally I'd like to make this look a lot nicer with some sort of macro (just to make sure mistakes aren't made), ie:
#define VIRTUAL1(FUNC,ARG1) \
__declspec(dllexport) void VIRTUAL_##FUNC(ARG1 a) { \
FUNC(a); \
}
class Obj
{
public:
virtual void Func(int);
VIRTUAL1(Func,int);
};
I'm not too bothered if things stay like that, but it would be nice to make sure that no mistakes are ever made when specifying the return and/or argument types in the macros. Ideally I'd like to add to the macro something that runs a compile-time check on the function pointer type to compare the params with the macro-created one. I know this is a bit pedantic, but I kinda got thinking about whether this was possible or not. Rob p.s. I use VC6
##### Share on other sites
AFAIK, this isn't possible. There are two reasonable possibilities... Firstly, declare both the virtual and the exported functions with the same macro. Then you only pass the argument list once, thus there can be no mismatch problem.
If you can't declare them at the same time... Can #define expansions contain other #define commands? If yes, I'd suggest, tentatively, something like:
#define DEF_CLASS(name) \
    #ifdef __CLASSNAME \
    #undef __CLASSNAME \
    #endif \
    #define __CLASSNAME name \
    class name

#define DEF_VIRTUAL(name, args) \
    #define __CLASSNAME_ ## name ## _ARGS args \
    void virtual name args

#define DEF_EXPORT(name) \
    __declspec(dllexport) void V ## name __CLASSNAME_ ## name ## _ARGS

DEF_CLASS(Obj)
{
    DEF_VIRTUAL(Func, (int));
    DEF_EXPORT(Func);
}
I'd use the __CLASSNAME so we didn't get Foo::wombat and Baz::wombat mixed up. However, I don't know if the preprocessor works like this, and I don't think it's a good idea anyway: what about polymorphic functions? If the same function name is associated with several different actual functions, there's no way to make it work without repeating the parameter list each time.
I'd opt for defining both methods with one macro.
Hail Eris! All fnord hail Discordia!
No, pre-processor stuff cannot exist within macros. I think I've almost got a solution though, although this only works for global functions at the moment....

template<class T> struct FuncCheck { FuncCheck(T a, T b) {} };

#define VIRTUAL1(FUNC,ARG0) \
    __declspec(dllexport) void VIRTUAL_##FUNC(ARG0 a) { \
        typedef void (*func)(ARG0); \
        FuncCheck<func>(FUNC, VIRTUAL_##FUNC); \
        FUNC(a); \
    }

void funca(int i) {}

VIRTUAL1(funca,int);   // compiles fine
VIRTUAL1(funca,float); // chucks up a compile error
Now I've got to get it working within a class definition....
I've now got this; not perfect, because the error message is pretty cacky and it involves pointless use of a function pointer, but it kinda works. Doubt I'll use it though.....

#ifdef _DEBUG
#define VVIRTUAL1(CLASS,FUNC,ARG0) \
    __declspec(dllexport) void VIRTUAL_##FUNC(ARG0 a) { \
        typedef void (CLASS::*func##FUNC)(ARG0); \
        func##FUNC pF = FUNC; \
        FUNC(a); \
    }
#else
#define VVIRTUAL1(CLASS,FUNC,ARG0) \
    __declspec(dllexport) void VIRTUAL_##FUNC(ARG0 a) { \
        FUNC(a); \
    }
#endif
Sorted
template<class T> struct ERROR_Function_Defined_With_VIRTUAL_Macro_Does_Not_Match_Real_Function
{
    ERROR_Function_Defined_With_VIRTUAL_Macro_Does_Not_Match_Real_Function(T a, T b) {};
};

#define VVIRTUAL1(CLASS,FUNC,ARG0) \
    __declspec(dllexport) void VIRTUAL_##FUNC(ARG0 a) { \
        typedef void (CLASS::*func##FUNC)(ARG0); \
        ERROR_Function_Defined_With_VIRTUAL_Macro_Does_Not_Match_Real_Function(FUNC, VIRTUAL_##FUNC); \
        FUNC(a); \
    }
[edited by - RobTheBloke on July 29, 2002 7:43:57 AM]
Even though you've figured it out, you might find this approach to be of interest:
http://www.accu-usa.org/Listings/2000-05-Listing01.html
|
{}
|
## Sunday, June 26, 2011 ... /////
### Does the LHC see trivial Higgs at 750 GeV?
I predict that in 2015, in a far future, lots of people will look for blog posts about $750\GeV$ Higgs bosons and they will land on this page. A message for those travelers in the future: this blog post discusses hints of a boson whose width is huge, in hundreds of ${\rm GeV}$, so it has nothing to do with the December 15th, 2015 ATLAS+CMS hints of a new boson whose width is between $25$ and $50\GeV$. Check newer, future blog posts for remarks on the new signals.
I don't discuss preprints that are disconnected from (or in contradiction with) the body of the research and findings about the physics beyond the Standard Model too often, especially not those that have 0 citations from other people. But this one is kind of fun.
Leonardo Cosmai and Paolo Cea have just claimed (1106.4178) that the ATLAS Collaboration has already found some evidence for their "trivial Higgs" model. They're excited about certain three events allegedly supporting their model although they don't seem capable of calculating any confidence levels.
An intermezzo: science and Miss USA
Misses of all the 50 (+1) states of the U.S. democratically vote that evolution should either not be taught at school, or it should be taught along with creationism. The only clean and enthusiastic pro-science viewpoint was presented by Alyssa Campanella of California (1:49); girls from MA, VT, and NM were also pro-science but not thrillingly so. Given the fact that she even believes the Big Bang Theory, it's hard to understand how such an extraterrestrial alien, a Sheldon Cooper in G-strings, and a self-described huge science geek could become Miss USA a week ago. ;-) Miss World will clearly be the best string theorist among the contestants who will beat the 99% of the Shmoit-Shwolin apologists who will be the competitors. Via Sean Carroll
Back to the trivial Higgs
In fact, it's even better. The ATLAS Collaboration hasn't found just some evidence. The authors claim in the very title that it has found "evidences" so their certainty converges to that of the creationists who have the monopoly over the term "evidences". ;-)
At the end of their new preprint, they discuss the rejection of their paper in a journal. To fix this problem, they propose to shoot the referee and make other changes to the system that will guarantee that their papers are always accepted.
Their model (0911.5220) of the trivial Higgs claims that the Higgs may have no quartic self-interactions and, building upon some computer simulations, it's still possible to have a nonzero vev and a stable Higgs potential because of some non-perturbative surprises.
They end up claiming that it's universally true (at any RG scale) in a non-perturbative regime of a self-interacting scalar with the interaction going to zero that
(Higgs mass) = pi * (Higgs vev)
where the (approximate or accurate?) factor of "pi" was found numerically, if I understand their statements correctly. That would mean that the Higgs mass is 750 ± 20 (stat) ± 20 (syst) GeV. (It's an impressive achievement to list even the "statistical error" in a theoretical paper that doesn't measure anything.) The decay is dominated by ZZ and WW and the width is 340 GeV, i.e. huge.
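As a back-of-the-envelope check of that relation (my own, using the standard electroweak vev of about 246 GeV, a number the post itself doesn't state):

```python
import math

# (Higgs mass) = pi * (Higgs vev), with the standard vev ~246 GeV assumed here.
v = 246.0                # electroweak Higgs vev in GeV
m_higgs = math.pi * v
print(round(m_higgs))    # ~773 GeV, compatible with 750 +/- 20 (stat) +/- 20 (syst)
```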
Well, I would bet it's not possible for a nominally non-interacting theory of a scalar to have this huge vev - especially because the perturbative conclusions should become more valid, not less valid, when the quartic coupling is sent to zero - but I like the "minimalistic" and "bold" character of their solution.
Lots of progress has been recently made in our understanding of various new phenomena and dualities in strongly coupled gauge theories - but we may be missing some analogous insights about the scalar theories and our opinions that at strong coupling, they can only hit a Landau pole and become inconsistent, could be wrong in a subtle way.
Don't get me wrong. I would still reject their paper as well because it contradicts many apparently known things and doesn't offer enough detailed evidence to support that their new answers could be right. But among the papers that end up in the non-serious category, theirs is kind of creative. It may be viewed as a good solution to the problem "how would you hide the Higgs sector so that the LHC will see as little new physics as possible?".
|
{}
|
# The difference between a basis and a subbasis for a topology
• December 6th 2011, 10:58 PM
Jame
The difference between a basis and a subbasis for a topology
The two definitions seem very similar to me, and this is confusing.
To express it simply
a basis generates a topology T and a subbasis generates a basis for a topology?
Is there an example that could clarify this difference?
• December 6th 2011, 11:06 PM
Drexel28
Re: The difference between a basis and a subbasis for a topology
Quote:
Originally Posted by Jame
The two definitions seem very similar to me, and this is confusing.
To express it simply
a basis generates a topology T and a subbasis generates a basis for a topology?
Is there an example that could clarify this difference?
Roughly, bases are collections of subsets from which all open sets can be obtained via unions of members of the family, and subbases are collections of open subsets from which any open subset can be obtained by unions of INTERSECTIONS of elements of the collection. For example, the set of all infinite rectangles $\{\mathbb{R}\times (a,b):a,b\in\mathbb{R}\}\cup\{(a,b)\times\mathbb{R}:a,b\in\mathbb{R}\}$ forms a subbasis for the usual topology on $\mathbb{R}^2$ but not a basis. Explain to me why.
Oh I see. There are open sets in $\mathbb{R}^2$ which we cannot get by unioning these infinite rectangles together, but they can be made by unioning intersections of these rectangles.
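For a concrete finite analogue (my own illustration, not from the thread), the two-step generation (finite intersections first, then arbitrary unions) can be carried out explicitly in Python. The subbasis {{1,2},{2,3}} on X = {1,2,3} is not a basis, because {2} is not a union of its members, yet {2} becomes available once intersections are taken first:

```python
from itertools import combinations

def topology_from_subbasis(X, subbasis):
    """Topology generated by a subbasis on a finite set X:
    close under finite intersections (giving a basis),
    then close under unions."""
    # Finite intersections of subbasis members; the empty
    # intersection is X itself by convention.
    basis = {frozenset(X)}
    for r in range(1, len(subbasis) + 1):
        for combo in combinations(subbasis, r):
            basis.add(frozenset(X).intersection(*combo))
    # All unions of basis members; the empty union is the empty set.
    topology = {frozenset()}
    members = list(basis)
    for r in range(1, len(members) + 1):
        for combo in combinations(members, r):
            topology.add(frozenset().union(*combo))
    return topology

X = {1, 2, 3}
S = [frozenset({1, 2}), frozenset({2, 3})]   # a subbasis, but not a basis
T = topology_from_subbasis(X, S)

# {2} = {1,2} & {2,3} is open, yet it is not a union of subbasis members,
# so S fails to be a basis even though it generates the topology.
print(sorted(sorted(s) for s in T))
```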
|
{}
|
# rank
Find rank of symbolic matrix
## Description
rank(A) returns the rank of symbolic matrix A.
## Examples
syms a b c d
A = [a b; c d];
rank(A)
ans =
2
### Rank of Symbolic Matrices Is Exact
Symbolic calculations return the exact rank of a matrix while numeric calculations can suffer from round-off errors. This exact calculation is useful for ill-conditioned matrices, such as the Hilbert matrix. The rank of a Hilbert matrix of order n is n.
Find the rank of the Hilbert matrix of order 15 numerically. Then convert the numeric matrix to a symbolic matrix using sym and find the rank symbolically.
H = hilb(15);
rank(H)
rank(sym(H))
ans =
12
ans =
15
The symbolic calculation returns the correct rank of 15. The numeric calculation returns an incorrect rank of 12 due to round-off errors.
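The same round-off effect can be reproduced outside MATLAB. The sketch below is my own illustration in standard-library Python (a naive Gaussian elimination, not MATLAB's algorithm): it computes the rank of the order-15 Hilbert matrix once with exact rational arithmetic and once in floating point.

```python
from fractions import Fraction

def hilbert(n, exact=True):
    """n-by-n Hilbert matrix, H[i][j] = 1/(i + j + 1)."""
    num = Fraction if exact else float
    return [[num(1) / (i + j + 1) for j in range(n)] for i in range(n)]

def rank(A, tol=0):
    """Rank via Gaussian elimination with partial pivoting.
    With exact Fraction entries, tol=0 gives the exact rank; with
    floats, a tolerance is needed to decide when a pivot is 'zero'."""
    A = [row[:] for row in A]
    m, n = len(A), len(A[0])
    r = 0
    for col in range(n):
        if r == m:
            break
        piv = max(range(r, m), key=lambda i: abs(A[i][col]))
        if abs(A[piv][col]) <= tol:
            continue            # no usable pivot in this column
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, m):
            f = A[i][col] / A[r][col]
            A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

print(rank(hilbert(15)))                      # exact rationals: 15
print(rank(hilbert(15, exact=False), 1e-10))  # floats: fewer than 15
```

The exact computation returns the full rank of 15; the floating-point one loses rank because the later pivots of the ill-conditioned Hilbert matrix fall below round-off level, mirroring the MATLAB behavior above.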
### Rank Function Does Not Simplify Symbolic Calculations
Consider this matrix
$A=\begin{bmatrix}1-\sin^{2}(x) & \cos^{2}(x)\\ 1 & 1\end{bmatrix}.$
After simplification of 1-sin(x)^2 to cos(x)^2, the matrix has a rank of 1. However, rank returns an incorrect rank of 2 because it does not take into account identities satisfied by special functions occurring in the matrix elements. Demonstrate the incorrect result.
syms x
A = [1-sin(x)^2 cos(x)^2; 1 1];
rank(A)
ans =
2
rank returns an incorrect result because the outputs of intermediate steps are not simplified. While there is no fail-safe workaround, you can simplify symbolic expressions by using numeric substitution and evaluating the substitution using vpa.
Find the correct rank by substituting x with a number and evaluating the result using vpa.
rank(vpa(subs(A,x,1)))
ans =
1
However, even after numeric substitution, rank can return incorrect results due to round-off errors.
## Input Arguments
Input, specified as a number, vector, or matrix, or as a symbolic number, vector, or matrix.
|
{}
|
# Construct a 2 x 2 matrix, $A=[a_{ij}]$, whose elements are given by: $(ii)\ a_{ij}=\frac{i}{j}\qquad$
Note: This is a 3 part question, split as 3 separate questions here
Toolbox:
• In general, a $2\times 2$ matrix is given by $\begin{bmatrix}a_{11} & a_{12}\\a_{21} & a_{22}\end{bmatrix}$
• Elements are given by $a_{ij}=\frac{i}{j}$, where (i, j) can be (1,1), (1,2), (2,1), or (2,2)
Given, $a_{ij}=\frac{i}{j} \Rightarrow$
$a_{11}=\frac{1}{1}=1.$
$a_{12}=\frac{1}{2}.$
$a_{21}=\frac{2}{1}=2.$
$a_{22}=\frac{2}{2}=1.$
Hence the required matrix is given by $A=\begin{bmatrix}1 & \frac{1}{2}\\2 & 1\end{bmatrix}$
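As a cross-check, the same matrix can be built programmatically. This small Python sketch (not part of the original solution) uses exact fractions so that $a_{12}$ stays $\frac{1}{2}$ rather than 0.5:

```python
from fractions import Fraction

# Build A = [a_ij] with a_ij = i/j for a 2 x 2 matrix (1-based i and j).
n = 2
A = [[Fraction(i, j) for j in range(1, n + 1)] for i in range(1, n + 1)]

for row in A:
    print(row)   # [1, 1/2] then [2, 1], as Fractions
```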
|
{}
|
# Differences between programming model and programming paradigm?
1. What is the relation and difference between a programming model and a programming paradigm? (especially when talking about the programming model and the programming paradigm for a programming language.)
2. Wikipedia tries to answer my question in 1:
Programming paradigms can also be compared with programming models that are abstractions of computer systems. For example, the "von Neumann model" is a programming model used in traditional sequential computers. For parallel computing, there are many possible models typically reflecting different ways processors can be interconnected. The most common are based on shared memory, distributed memory with message passing, or a hybrid of the two.
But I don't understand it:
• Is it incorrect that the quote in Wikipedia says "the 'von Neumann model' is a programming model", because I understand that the Von Neumann model is an architectural model from https://en.wikipedia.org/wiki/Von_Neumann_architecture?
• Are the parallel programming models "typically reflecting different ways processors can be interconnected"? Or are parallel architectural models "reflecting different ways processors can be interconnected" instead?
3. In order to answer the question in 1, could you clarify what a programming model is?
Is it correct that a programming model is provided/implemented by a programming language or API library, and that such an implementation isn't unique?
From Rauber's Parallel Programming book, a "programming model" is an abstraction above the "model of computation (i.e. computational model)", which is in turn above the "architectural model". I guess that a programming model isn't just used in parallel computing, but also applies to a programming language or API library.
A programming model is implied by the system architecture. If your system architecture is a register machine, your programming model will consist of machine code operations on registers. If your architecture is a stack machine, your programming model will consist of stack operations. A von Neumann architecture and a Harvard architecture will have other programming models. Self-modifying code, for example, will be possible in a von Neumann architecture but not in a Harvard architecture.
A programming paradigm is more high-level: it is the way a problem is modelled (imperative or declarative, object-oriented, functional, logic, ...). A single-paradigm language supports one of these. Multi-paradigm languages are more a sort of Swiss army knife which takes elements from several paradigms.
Every architecture (and corresponding model) will have its own set of machine code instructions. This machine code language itself will follow the imperative paradigm (do this, do that, read register A, add the value to register B, ... or put a value on top of the stack, put another value on top of the stack, add the two values on top, etc.)
(At least I never saw a non-imperative hardware processor)
A high level language (of whatever paradigm) will be compiled or interpreted to this machine code.
About parallelism: if we consider interconnected processors, it will be clear that the way they interconnect will be part of the programming model. An old INMOS transputer, for example, connects with four other transputers. The machine code will have instructions to communicate with the neighbouring transputers.
But also on recent systems, the way to provide mutual exclusion will have to be resolved at a low level. On a one-processor system we will have to turn interrupts off when entering and back on when leaving a critical section. On a multiprocessor system we will need an atomic 'test-and-set' instruction. This is part of the programming model.
Parallel computing paradigms are high level models to use parallelism. Think on languages who have threaded objects, or use semaphores and monitors as language elements.
When we program on different operating systems, different APIs will be used (or even if we program on the same system but use another library, e.g. a graphics library). This will change our programming model. The low-level code will be different, but if there is a good abstraction (a sort of "code once, compile anywhere") this will be invisible in the high-level language. If not, you will have to make small changes in your code. But since you will use the same high-level language, there will be no change of paradigm.
There is no exact answer to your question. The terms "programming model" and "programming paradigm" are not exact technical terms that have fixed definitions. Depending on a context, some authors might define "programming model" in some specific way, but that will usually turn out to cover only some aspects of what people understand under "programming model".
As a good rule of thumb, you should run away from anyone using the word "paradigm". (I was a graduate student when the word was severely overused.)
Nevertheless, it is still very useful to use these phrases, even though they are not very precise. They are helpful in explaining and organizing various aspects of computation. But keep in mind that there are no accepted mathematical definitions that entirely cover the usage. Therefore, if you ask very precise questions, you will not get very precise answers, but rather opinions and helpful explanations.
In particular, you asked: "What is the relation and difference between a programming model and a programming paradigm?" Well, that depends on who you ask and what decade you live in. Here is what my dictionary says:
paradigm – a worldview underlying the theories and methodology of a particular scientific subject
model – a simplified description, especially a mathematical one, of a system or process, to assist calculations and predictions
So, a paradigm is a broader concept than a model. In any case, I advise you to not think about your question as one of science because it is not.
Programming models bridge the gap between the underlying hardware architecture and the supporting layers of software available to applications.
Programming models are different from both programming languages and application programming interfaces (APIs). Specifically, a programming model is an abstraction of the underlying computer system that allows for the expression of both algorithms and data structures.
In comparison, languages and APIs provide implementations of these abstractions and allow the algorithms and data structures to be put into practice – a programming model exists independently of the choice of both the programming language and the supporting APIs.
Programming models are typically focused on achieving increased developer productivity, performance, and portability to other system designs. The rapidly changing nature of processor architectures and the complexity of designing an exascale platform provide significant challenges for these goals. Several other factors are likely to impact the design of future programming models.
Source https://asc.llnl.gov/content/assets/docs/exascale-pmWG.pdf, "Advanced Simulation and Computing" (nuclear weapons)
More on Twitter: #ProgrammingModel
Imperative Programming with an explicit sequence of commands that update state.
Declarative Programming by specifying the result you want, not how to get it.
Structured Programming with clean, goto-free, nested control structures.
Procedural Imperative programming with procedure calls.
Functional (Applicative) Programming with function calls that avoid any global state.
Event-Driven Programming with emitters and listeners of asynchronous actions.
Flow-Driven Programming with processes communicating with each other over predefined channels.
Logic (Rule-based) Programming by specifying a set of facts and rules. An engine infers the answers to questions.
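To make a few of these entries concrete, here is a small Python illustration of mine (not the answerer's): the same task written in an imperative style and then in a declarative/functional style.

```python
from functools import reduce

# Task: sum the squares of 1..5, in three of the styles listed above.

# Imperative / procedural: an explicit sequence of commands updating state.
total = 0
for n in range(1, 6):
    total += n * n

# Declarative / functional: specify the result, not the step-by-step state changes.
total_functional = sum(n * n for n in range(1, 6))

# Functional with explicit function calls and no mutation.
total_reduce = reduce(lambda acc, n: acc + n * n, range(1, 6), 0)

print(total, total_functional, total_reduce)  # all three are 55
```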
Thinking through these definitions helps give answers to the question.
One way to reason would be to say that programming models are closer to the design of a system or application and can be made of one or more programming paradigms, which are closer to a programming language.
This can be seen very clearly in data science applications where a number of technologies are used, each with its own programming paradigm, but overall the architect should keep a cohesive programming model.
More on this way of reasoning : https://twitter.com/semanticbeeng/status/1103027054302425089?s=20
|
{}
|
# Science at the triple point between mathematics, mechanics and materials science
## Publication 68
### Regularity of Solutions to Fully Nonlinear Elliptic and Parabolic Free Boundary Problems
##### Authors:
E. Indrei
Center for Nonlinear Analysis
Carnegie Mellon University
Pittsburgh PA 15213-3890 USA
Andreas Minne
Department of Mathematics
KTH, Royal Institute of Technology
100 44 Stockholm, Sweden
##### Abstract:
We consider fully nonlinear obstacle-type problems of the form \begin{equation*} \begin{cases} F(D^{2}u,x)=f(x) & \text{a.e. in }B_{1}\cap\Omega,\\ |D^{2}u|\le K & \text{a.e. in }B_{1}\backslash\Omega, \end{cases} \end{equation*} where $\Omega$ is an unknown open set and $K>0$. In particular, structural conditions on $F$ are presented which ensure that $W^{2,n}(B_1)$ solutions achieve the optimal $C^{1,1}(B_{1/2})$ regularity when $f$ is Hölder continuous. Moreover, if $f$ is positive on $\overline B_1$, Lipschitz continuous, and $\{u\neq 0\} \subset \Omega$, then we obtain local $C^1$ regularity of the free boundary under a uniform thickness assumption on $\{u=0\}$. Lastly, we extend these results to the parabolic setting.
##### Get the paper in its entirety
14-CNA-008.pdf
|
{}
|
# Standard Error of the Mean and of the Regression

The standard error (SE) of a statistic is the standard deviation of its sampling distribution. For the sample mean, it is the population standard deviation divided by the square root of the sample size, so larger sample sizes give smaller standard errors. In the runners example, the population of 9,732 runners has a mean age of 33.88 years and a standard deviation of 9.27 years; samples of size n = 16 therefore have a standard error of 9.27/sqrt(16) = 2.32. Any single sample mean will usually be less than or greater than the population mean, but repeating the sampling procedure and taking 20,000 samples of size n = 16 gives a distribution of sample means centered on the population mean. For the age-at-first-marriage population (mean 23.44, standard deviation 4.72 years, about half that of the runners), the standard deviation of the 20,000 sample means is 1.18, close to the standard error estimated from a single sample. Because the standard error shrinks only as 1/sqrt(n), reducing it by a factor of ten requires a hundred times as many observations.

When the population standard deviation is unknown and must be estimated from the sample, the resulting estimated distribution follows the Student t-distribution, which approaches the Gaussian once the sample size is over 100. The sample standard deviation underestimates the population value for small samples; Gurland and Tripathi (1971) provide a correction and equation for this effect, which matters mainly for n < 20 (for n = 6 the underestimate is only 5%). The finite population correction (FPC) applies when the sample is a substantial fraction of the population; its effect is that the error becomes zero when the entire population is sampled.

The relative standard error expresses the standard error as a percentage of the estimate. As an example, consider two surveys that each produce an estimate of $50,000: with standard errors of $10,000 and $5,000, the relative standard errors are 20% and 10% respectively, and the survey with the lower relative standard error can be said to give a more precise measurement, since it has proportionately less sampling variation around the mean.

In regression, the standard error of the regression (also called the standard error of the estimate) is an overall measure of the accuracy of predictions, alongside R-squared, which in a simple regression is the square of the correlation between Y and X. S becomes smaller when the data points lie closer to the fitted line, and the standard error of the forecast gets smaller as the sample size increases. The standard error of a coefficient measures how unlikely it would be to exceed the observed t-value by chance if the true coefficient were zero. You should not try to compare R-squared between models that do and do not include the same variables, and an outlier or two may or may not have a dramatic effect on the fit.
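The standard-error arithmetic used in the runners example (9.27/sqrt(16) = 2.32) is easy to verify; a minimal Python check:

```python
import math

# Standard error of the mean: SE = s / sqrt(n).
# Values from the example: s = 9.27 years, n = 16 runners.
s, n = 9.27, 16
se = s / math.sqrt(n)
print(f"SE = {se:.2f}")          # SE = 2.32

# SE shrinks only as 1/sqrt(n): cutting it by a factor of ten
# requires a hundred times as many observations.
print(s / math.sqrt(100 * n))    # one tenth of the SE above
```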
|
{}
|
It seems Hohenheim is a man of many ages. So you've already got this awesome old hag Pinako to tap, what's up with you goin around messin with the children of the town?!
written by astrobunny \\ dancing, drinking, edward, elric, fma, hawt, hohenheim, hueg, jailbait, loli, pedo, pinako, superloli, trisha, underage
After seeing Lust regenerate several times already, Roy and Havoc still stood around where Lust obviously didn't move from. Now, I wonder why. The only word floating around my head during these 3 minutes was "stupid". After Havoc got stabbed by Lust's fingernails, I had that "I told you so" moment and continued to wonder if this was really just a satire of the military in real life.
written by astrobunny \\ alchemist, fullmetal, havoc, homunculus, jean, lust, mustang, regeneration, roy, stupidity
SEE, I knew it! Roy's not the kind of non-homunculus person to kill out of rage.
written by astrobunny \\ body, burning, burnt, dummy, fake, maria, ross, roy, trick
Personally, knowing Roy Mustang, I think he put up a bluff so he can extract information from Maria. I don't see him being that irrational. Though I could be wrong.
written by astrobunny \\ alchemist, body, burn, fried, full, maria, metal, mustang, ross, roy
Why? Well for one thing, she scares the life and soul out of her students. I don't mean awesome by what I usually mean it. Usually I'd say MOE or WIN or EPIC WIN or DIES FROM MOE OVERLOAD and stuff. But Izumi is different. Hers is a whole new kind of awesome. The deathly, make-you-wish-you-were-dead kind of awesome, however.
written by astrobunny \\ fma, izumi, sensei
Gotta love the antics of the Elrics. Sheska time after the jump.
written by astrobunny \\ alchemist, comic, elric, fma, full, metal, sheska
Square Enix and Aniplex have recently released Fullmetal Alchemist Brotherhood on TV and to be honest I really wondered what they would show, since the previous Fullmetal Alchemist series was already so kickass and went through the whole story already. It turns out, however, that this is more of a rewrite. Frankly, I felt bored halfway through the first episode of this show and suddenly wished it would end sooner, because I was getting irritated at how helpless Edward and Alphonse were against the Ice Alchemist. WHERE DID THE KICKASS ELRIC GO? His pwning streak probably left him along with the art goodness.
But! Instead of brooding over what the current series is, let's take a look at the biggest observable difference this show has from its predecessor: The Art. It's worth noting that Bones, the group that animated the previous series, is also animating this one. They were probably trying to make it look more like the art from the manga.
|
{}
|
## Stream: triage
### Topic: PR #4069: added statements for trailing_degree, nat_trail...
#### Random Issue Bot (Feb 15 2021 at 14:20):
Today I chose PR 4069 for discussion!
added statements for trailing_degree, nat_trailing_degree, trailing_coeff,... Many proofs are missing!
Created by @damiano (@adomani) on 2020-09-08
Labels: help wanted, incomplete, needs-documentation
Is this PR still relevant? Any recent updates? Anyone making progress?
#### Damiano Testa (Feb 15 2021 at 14:35):
The lemmas in this PR are adaptations of the lemmas that existed at the time for the leading_coefficient of a polynomial, but not for the trailing_coefficient. Now, with the definition of reflect, most of these lemmas can, in theory, be proved by reducing statements about the trailing_degree of f to statements about the leading_coefficient of reflect f. The missing ingredient is a proof that leading and trailing coefficients are exchanged by reflect. An earlier proof of this fact did not use dual_order and did not make it into mathlib. I was not able to make the version with order_dual work, so I am stuck and stopped thinking about this.
The lemma that I feel should be in mathlib is the statement that the trailing degree of a product is the sum of the trailing degrees (with some assumptions). There have been many refactors since I last checked: I doubt that this result is in mathlib, but if it is, then closing this PR would be fine for me!
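For intuition, the intended statement (over an integral domain, the trailing degree of a product is the sum of the trailing degrees) can be sanity-checked numerically. A small plain-Python sketch of mine, with polynomials as coefficient lists where index i holds the coefficient of x^i:

```python
# Over the integers (an integral domain), trailing terms cannot cancel,
# so trailing_degree(p * q) == trailing_degree(p) + trailing_degree(q).

def trailing_degree(coeffs):
    # Index of the lowest-degree nonzero coefficient.
    return next(i for i, c in enumerate(coeffs) if c != 0)

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

p = [0, 0, 3, 1]   # 3x^2 + x^3, trailing degree 2
q = [0, 5]         # 5x, trailing degree 1
pq = poly_mul(p, q)
assert trailing_degree(pq) == trailing_degree(p) + trailing_degree(q)  # 3
```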
Last updated: May 09 2021 at 16:20 UTC
|
{}
|
# 1.5: Viscous Deformation
The flow of liquids, like water, honey, oil, and magma, is described as viscous flow or viscous deformation. As with elastic deformation, we can ask how the deformation is related to the stress during viscous deformation. In this case, the stress is related to the strain-rate $$\dot{\epsilon} = \frac{d\epsilon}{dt}$$, rather than the strain $$\epsilon$$, through the viscosity $$\eta$$ (the Greek letter eta):
$\sigma = 2 \eta \dot{\epsilon}$
This equation states that the stress in a viscous material is proportional to how fast it is deforming (the strain-rate). The factor of 2 is there for mathematical convenience in full 3-D descriptions of viscous deformation. The stress has units of Pascals and the strain-rate has units of 1/s, so the viscosity has units of Pa*s. The viscosity can be thought of as the resistance to flow. Higher viscosity indicates more resistance to flow. This can be seen more readily, by rewriting the equation above as:
$\dot{\epsilon} = \frac{\sigma}{2\eta}$
For a given applied stress $$\sigma$$ a higher viscosity will result in a lower strain-rate $$\dot{\epsilon}$$. Viscosity is typically dependent on temperature and composition. The viscosity of some familiar liquids are given in Table 1.5.1. Notice that the viscosity ranges over five orders of magnitude (0.001 to 100 Pa s) for these familiar liquids. For example the difference in viscosity of water and oil is a factor of 10, whereas the difference in viscosity between water and honey is 10,000. Materials with lower viscosity deform (flow) faster than materials with a higher viscosity.
Table 1.5.1: Viscosity of some common liquids.
Material Viscosity (at 20 C) (Pa s)
water 0.0010
milk 0.0020
olive oil 0.010
honey 10.0
toothpaste 70.0
tar 100.00
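As a numerical illustration of $$\dot{\epsilon} = \frac{\sigma}{2\eta}$$ using the viscosities in Table 1.5.1 (the 100 Pa applied stress below is an arbitrary value chosen for the sketch, not a value from the text):

```python
# Strain rates from eps_dot = sigma / (2 * eta), viscosities from Table 1.5.1.
viscosities = {            # Pa s, at 20 C
    "water":     0.0010,
    "olive oil": 0.010,
    "honey":     10.0,
    "tar":       100.0,
}
sigma = 100.0              # applied stress in Pa (illustrative value)

for name, eta in viscosities.items():
    eps_dot = sigma / (2 * eta)
    print(f"{name:9s}  strain rate = {eps_dot:.3g} 1/s")

# Water's viscosity is 10,000x lower than honey's, so under the same
# stress it deforms 10,000x faster.
```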
### Viscous Deformation in Solid Rocks or Ice
While we normally think of rocks as hard, elastic materials that break, the rheology of rocks (and ice) is in fact both elastic and viscous, but these two different responses occur at different time-scales. The short-term deformation of rocks is best described as elastic with brittle failure and applies to processes like earthquakes (timescale of 1-1000 years between events) and propagation of seismic waves (timescale < seconds). This short-term response dominates at the cold temperatures and low pressures typically found in the crust. However, even at the high pressures and temperatures of the deep mantle, the elastic behavior of the rock allows for the transmission of seismic waves due to the very short timescale of this process. The long-term deformation of the lithosphere and the mantle is best described as viscous flow. At the higher pressures and temperatures of the deep lithosphere and mantle, deformation of grains by viscous mechanisms occurs more readily than elastic or brittle deformation. However, it is a very slow process, with mantle flow occurring at velocities of 1 to 20 cm/yr, at time-scales of a million years or more. Viscosity in the mantle is strongly dependent on temperature, as well as stress and rock composition. Typical values of viscosity in the mantle are $$10^{18}$$ to $$10^{24}$$ Pa s. These are large numbers, but as with the more common materials discussed above, the variation of rock viscosity in the mantle is about 6 orders of magnitude. Ice is also a viscous and elastic material. The viscous behavior of ice can be seen in ice streams that flow downhill, carrying ice away from large glaciers. The viscosity of ice is typically in the range from about $$10^{13}$$ Pa s at $$0^\circ C$$ to $$10^{14}$$ Pa s at $$-10^\circ C$$.
The viscous deformation of rock (and ice) occurs by crystal-scale deformation (called creep). In this solid-state creep the grains themselves deform as individual atoms (a lot of them) or planes of atoms (called dislocations) move within the grains. These processes of solid-state creep are ultimately controlled by how slowly atoms move through the crystal (a diffusive process), which is why the time-scale is so long and the viscosity is so high. It's important to remember that at high pressure the rocks are flowing, but they are still solid. High temperatures (around 1200-1400 C) and relatively lower pressures are needed to cause the rock to melt and form magma.
### Applications of Viscous Flow in/on the Earth
In this section we will consider both the deformation of solid rock and ice behaving as a viscous fluid (termed solid-state creep) and magma behaving as a viscous fluid. We will consider viscous flow in three settings.
#### Viscous Drag on a Tectonic Plate
First, we will consider the viscous flow that occurs in the mantle beneath a moving plate (called the asthenosphere). In this case, the tectonic plate (lithosphere) is being moved by two forces. The first is the slab-pull force caused by the sinking tectonic plate (also called a slab) in a subduction zone. The second is the ridge-push force caused by the positive buoyancy of less dense material under a mid-ocean spreading ridge. Figure $$\PageIndex{2}$$ shows a profile sketch of a tectonic plate with these forces acting to move the plate across the underlying mantle. The moving plate drags the viscous asthenosphere, and the resistance to this motion is an important force acting on the tectonic plate.
#### Flow of Magma Down a Volcano
When a volcano erupts, magma flows from the vent at the top of the volcano down its sides. Because the magma cools faster at its edges than in its interior, the flowing magma tends to form its own flow channel: cooled magma forms rock levees on the sides of the channel. When the eruption rate is high, these channels of magma can carry liquid magma tens of miles from the vent before the magma cools sufficiently to solidify. We will examine the flow-rate of magma in a simplified magma channel in which we ignore how the magma drags against the sides of the channel and just focus on how the middle of a broad magma channel flows. The flow rate will depend on the viscosity of the magma, but the viscosity of the magma depends on its temperature. As the magma cools, it crystallizes, and the network of crystals in the magma strongly increases its viscosity, causing it to flow more slowly.
#### Flow of a Glacial Ice Stream
A glacial ice stream is a region of flowing ice that drains ice away from large glaciers in places like Greenland and Antarctica. These rivers of ice move at rates of about 1 to 100 meters per year, depending on the viscosity (temperature) of the ice, as well as the slope of the land the ice stream flows down. As with flowing magma, we can gain some intuition about this flow by simplifying the geometry and considering only the flow at the center of the ice stream and the effects of resistance at the base of the flowing ice.
ADD SKETCH (MOVED FROM ABOVE AND ADD TOPOGRAPHY) AND PICTURE OF AN ICE STREAM (FROM WIKIPEDIA).
## Flow Through a Pipe
To gain a more intuitive understanding of the parameters and conditions that affect viscous flow, let's begin with a simple model of viscous flow through a pipe, like a water pipe in your house or a hose in your backyard.
Water flows through the pipe in response to a pressure gradient along the pipe. The pressure at the open end of the pipe is $$P_{o}=0$$, while the water is pushed through the pipe by the higher pressure $$P_{pipe}$$ maintained in the street pipes by the utility company. The pressure difference is given by
$\Delta P=P_{pipe}-P_{o}$
Fluid always flows from high pressure to low pressure. It is the pressure difference along the pipe that drives the flow.
Now let's consider for a moment what the fluid flow looks like inside the pipe. The wall of the pipe is not moving, which means that the fluid directly adjacent to the pipe must also have a velocity of zero. However, a tiny distance away from the wall the fluid is moving. This creates a gradient in the fluid velocity. The magnitude of the fluid velocity in the x direction, $$u_x$$, changes in the y direction, so the gradient of the velocity is non-zero
$\frac{d u_x}{d y} \neq 0$
Let's pause for a moment to consider the units associated with this equation. Velocity has units of meters/second (m/s) while length has units of meters (m)
$\frac{\frac{m}{s}}{m} = \frac{1}{s}$
This has units of strain-rate (1/s)
$\dot{\epsilon}=\frac{1}{2} \frac{d u_x}{d y}$
Recall that strain is given by the gradient in displacement
$\epsilon = \frac{\Delta x}{y}$
Likewise, velocity is the rate of displacement, therefore the gradient in the rate of displacement results in a strain rate.
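This relationship between a velocity gradient and strain-rate can be checked numerically; the linear velocity profile below is a made-up example.

```python
# Strain-rate from a sampled velocity profile u_x(y), using
# epsilon-dot = (1/2) * du/dy. Hypothetical linear profile:
# u_x rises from 0 at the wall to 0.02 m/s over 0.01 m.
y = [0.0, 0.0025, 0.005, 0.0075, 0.01]   # m, distance from the wall
u = [0.0, 0.005, 0.01, 0.015, 0.02]      # m/s, velocity at each y

du_dy = (u[-1] - u[0]) / (y[-1] - y[0])  # (m/s)/m = 1/s
strain_rate = 0.5 * du_dy                # 1/s
print(strain_rate)
```

The velocity gradient has units of 1/s, and half of it gives the strain-rate, exactly as in the formula above.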
#### Estimating Strain Rate
It is useful to pause now to get an order of magnitude estimate of strain-rate. To do this, let's look at the plot below comparing strain as a function of time.
The slope is $$\frac{\Delta \epsilon}{\Delta t}$$
Let's assume we have a strain of 0.3 that has accrued over a time period of 1 my:
\begin{align*}\dot{\epsilon}&=\dfrac{0.3}{1\, my}\cdot \frac{1\, my}{1\times10^6\, yr} \cdot \frac{1\, yr}{3.15\times10^7\, s} \\[4pt] &= \frac{0.3}{3.15\times10^{13}\, s}\approx 1\times10^{-14}\, s^{-1} \end{align*}
Strain-rates along plate boundaries and near subducting plates are typically about $$1\times10^{-15}$$ to $$1\times10^{-12} s^{-1}$$. The rates depend both on the magnitude of the stresses and on the magnitude of the viscosity, which is strongly temperature dependent. The viscosity of the asthenosphere is about $$1\times10^{18}$$ to $$1\times10^{21}$$ Pa s (Pascal seconds), while the viscosity inside a sinking tectonic plate may be as high as $$1\times10^{25}$$ Pa s. Strain rates are very slow in the solid earth because the viscosity is large. Other materials can deform much faster: an ice flow deforms at about 1 inch per year, and flowing magma moves even faster, at about 1 inch per minute. Viscosity values for ice range from $$10^{11}-10^{13}$$ Pa s and for lava from $$10^{2}-10^{6}$$ Pa s.
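The unit conversion in the worked example above can be scripted; the numbers follow the 0.3-strain-over-1-my example.

```python
# Convert a geologic strain accrued over millions of years into a
# strain-rate in 1/s, following the worked example above.
strain = 0.3
myr = 1.0                       # elapsed time in millions of years
seconds_per_year = 3.15e7
elapsed_s = myr * 1.0e6 * seconds_per_year   # 3.15e13 s

strain_rate = strain / elapsed_s             # ~1e-14 1/s
print(f"{strain_rate:.1e} 1/s")
```

The result, about $$1\times10^{-14}\ s^{-1}$$, sits inside the typical plate-boundary range quoted above.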
We often think about viscous flow only in terms of order of magnitude because of the large range in viscosity values. Viscosity depends on many properties of the rock and on the physical conditions, including composition, grain-size, water in the minerals, and the presence of melt and of crystal grains in the melt.
## Comparing Elastic and Viscous Deformation
Viscous deformation differs from elastic deformation in several ways, but one of the primary differences is that for elastic deformation the strain is linearly proportional to the stress ($$\sigma = E \epsilon$$), while for viscous deformation it is the strain-rate that is proportional to the stress ($$\sigma = \eta \dot{\epsilon}$$). To better understand what this difference means, let's first revisit elastic deformation.
The figure below shows the stress resulting from an applied strain in an elastic material. Different materials will have different values of the Young's modulus, and thus the same amount of strain will result in different amounts of stress in the material.
The next figure shows how the strain in a rock deforming elastically is not permanent, but rather it is recoverable. The stress applied to the rock is ramped up from time 1 to time 2 and then held constant from time 2 to time 3. Then the stress is released from time 3 to time 4. The figure below illustrates that the strain in the rock is proportional to the stress and that once the stress returns to zero, the strain also returns to zero.
Returning to viscous deformation, we know that viscosity and stress have a time dependent relationship:
$\sigma=2\eta\frac{d\epsilon}{dt}$
When pulling a stick out of a very viscous fluid like honey or tar, the slower you pull the less stress is required, and the faster you pull the more stress is needed. For a viscous material, if the stress is increased instantaneously, the response of the fluid is to start flowing at a proportional strain-rate. As time goes on, the amount of strain in the fluid increases linearly. However, when the stress is returned to zero, the fluid stops moving (the strain-rate is zero); the fluid cannot flow back to its original position, so there is permanent (non-recoverable) deformation. The fact that the flow is not recovered does not mean it is not reversible: if the stress is exactly reversed, the fluid can be returned to its original position.
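To make the contrast concrete, here is a minimal sketch comparing the two responses to a stress pulse that is held and then released; all parameter values are illustrative assumptions, not values from the text.

```python
# Contrast the elastic and viscous responses to a stress pulse held for
# t_on seconds and then released. All parameter values are illustrative.
E = 1.0e11       # Pa, Young's modulus
eta = 1.0e19     # Pa s, viscosity
sigma0 = 1.0e6   # Pa, applied stress
t_on = 3.15e13   # s (~1 million years), duration of the pulse

# Elastic: strain follows the stress and fully recovers on unloading.
elastic_during = sigma0 / E
elastic_after = 0.0

# Viscous: strain accumulates at rate sigma0 / (2 * eta) and is permanent.
viscous_after = (sigma0 / (2.0 * eta)) * t_on

print(elastic_during, elastic_after, viscous_after)
```

Even a modest stress sustained for a million years produces order-one permanent viscous strain, while the elastic strain vanishes as soon as the stress is removed.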
#### Derivation of Viscous Flow Equations
The basic equation describing viscous flow $$\sigma = \eta \dot{\epsilon}$$ is found by considering conservation of momentum in a fluid. Conservation of momentum is similar to considering a force balance on an element of fluid. Consider an infinitesimal 3D volume of fluid with sides, $$\delta_{x}$$, $$\delta_{y}$$, $$\delta_{z}$$. On each side of the box in the x direction there is a traction acting on the fluid.
From the figure, we can see that $$A=\delta_{z} \cdot \delta_{y}$$. Therefore, the force acting on each side is $$F=\sigma A$$.
$F=\sigma \delta_{z} \delta_{y}$
Dividing by $$\delta_z$$ we get the force per unit length in the z direction.
$\frac{F}{\delta_z} =\sigma \delta_{y}$
Using the force per unit length allows us to consider just a 2D cross section of the flow.
Next, let's consider all the forces acting on the sides of the element of fluid (in 2D). First, on the top and bottom of the fluid element there are shear stresses acting tangential to the top and bottom surfaces. These shear stresses exist in the fluid because the fluid is held back at the walls of the pipe, while it is free to move in the interior of the pipe.
Finally, we need to balance all the forces acting on the fluid element due to both the normal stresses acting on the sides of the box and the shear stresses acting on the top and bottom of the box. All the forces are directed in the x direction, so the forces should balance (sum to zero).
The normal stresses on the sides are the pressure in the fluid. Recall that there is a pressure gradient along the pipe in the x direction. Therefore the pressure on the upstream side, $$P_1$$, is larger than the pressure on the downstream side, $$P_o$$. The force (per unit length) on these sides is given by the pressure (stress) times the length $$\delta_y$$. Similarly, the forces on the top and bottom are given by the shear stress times the length $$\delta_x$$. The sum of the forces on all four sides is then
$P_1 \delta_y - P_o \delta_y + \tau(y) \delta_x - \tau(y+\delta_y) \delta_x = 0$
Combining terms we get
$(P_1 - P_o) \delta_y - (\tau(y+\delta_y) - \tau(y)) \delta_x = 0$
Next divide through by both $$\delta_x$$ and $$\delta_y$$
$\frac{P_1-P_o}{\delta_x} - \frac{\tau(y+\delta_y) - \tau(y)}{\delta_y} = 0$
Note the first term is the pressure gradient in the pipe in the x direction, and the second term is the gradient in the shear stress across the pipe
$\frac{dP}{dx} = \frac{d\tau}{dy}$
This equation describes how the pressure gradient drives the fluid flow, which is resisted by shear stress in the fluid. This general relationship holds regardless of the geometry or boundary conditions on the fluid, and it is a useful concept for thinking about how a fluid will deform.
Figure $$\PageIndex{7}$$: Conservation of Momentum (MIB: change l to \delta_x)
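A quick numeric sanity check of the element force balance derived above; the gradient G and the element dimensions are arbitrary illustrative values.

```python
# Numeric check of the force balance on a small fluid element:
# (P1 - Po) * dy - (tau(y + dy) - tau(y)) * dx = 0
# whenever the pressure drop per unit length equals the shear-stress
# gradient across the pipe.
G = 4.0                  # Pa/m: (P1 - Po)/dx and also dtau/dy
dx, dy = 0.01, 0.02      # element dimensions, m

P1 = 100.0               # Pa, pressure on the upstream face
Po = P1 - G * dx         # Pa, pressure on the downstream face

def tau(y):
    # Shear stress varying linearly across the pipe with gradient G.
    return 5.0 + G * y

y = 0.1                  # m, location of the element's bottom face
net_force = (P1 - Po) * dy - (tau(y + dy) - tau(y)) * dx
print(net_force)  # ~0: the pressure drive balances the shear resistance
```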
Finally let's relate shear stress to strain-rate in order to determine the exact form of the fluid flow (velocity) in the pipe. Recall from above that
$\tau = 2 \eta \dot{\epsilon} = \eta \frac{du}{dy}$
Substituting in for the shear stress
$\eta \frac{d^2 u}{dy^2}=\frac{dp}{dx}$
This is a differential equation, which says that the viscosity times the curvature of the velocity profile in the y direction is equal to the pressure gradient in the x direction. To determine the shape of the velocity profile, we need to integrate twice with respect to y.
$u(y)=\frac{1}{2\eta} \frac{dp}{dx} y^2 +c_1 y+c_2$
Although we derived this by thinking about flow in a pipe, it is the general solution for flow in one direction driven by a pressure gradient. You can see that there are two unknowns in the equation, $$c_1$$ and $$c_2$$, which are the result of integrating twice to get the velocity. To solve for these constants, we must use information about how the flow behaves at the boundaries (this information is referred to as the boundary conditions). Different boundary conditions will lead to different shapes for the velocity profiles. Below we discuss two possible solutions, referred to as Couette flow and flow down an inclined plane.
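As a sanity check, the general solution can be differentiated numerically to confirm that it satisfies $$\eta \frac{d^2 u}{dy^2}=\frac{dp}{dx}$$; the constants below are arbitrary test values.

```python
# Numerical check that u(y) = (1/(2*eta)) * dpdx * y**2 + c1*y + c2
# satisfies eta * u'' = dp/dx, via a second-order finite difference.
eta, dpdx, c1, c2 = 2.0, -6.0, 1.5, 0.25

def u(y):
    return (1.0 / (2.0 * eta)) * dpdx * y**2 + c1 * y + c2

h = 1.0e-4    # finite-difference step
y0 = 0.3      # any interior point
u_second = (u(y0 + h) - 2.0 * u(y0) + u(y0 - h)) / h**2  # d2u/dy2

print(eta * u_second, dpdx)  # the two values agree
```

Note that the constants $$c_1$$ and $$c_2$$ drop out of the second derivative, which is why only the boundary conditions can pin them down.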
#### Couette Flow
Couette flow is defined by a flow in which there is no applied pressure gradient, but one of the boundaries is moving at a fixed velocity.
$$\frac{dp}{dx}=0$$ and $$u_o \neq 0$$
Therefore, in addition to knowing that the term $$\frac{dp}{dx} =0$$ we know the velocity at both the top and bottom boundaries:
$u(y=h)=0 \nonumber$

$u(y=0)=u_o \nonumber$
By examining the equation for $$u(y)$$ and letting $$y=0$$, we see that the constant $$c_2 = u_o$$.
Similarly, plugging in $$u = 0$$ at $$y = h$$, we find that
$u=u_o (1-\frac{y}{h}) \nonumber$
This equation gives the shape of the velocity profile, between stationary and moving boundaries, and that profile is linear, as shown in the figure above.
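A small sketch evaluating the Couette profile and confirming that it honors both boundary conditions; the values of $$u_o$$ and h are arbitrary.

```python
# Evaluate the Couette profile u(y) = u0 * (1 - y/h) and confirm the
# boundary conditions. u0 and h are arbitrary illustrative values.
u0 = 2.0    # velocity of the moving boundary at y = 0
h = 1.0     # distance between the two boundaries

def u(y):
    return u0 * (1.0 - y / h)

# u0 at the moving wall, 0 at the fixed wall, linear in between:
print(u(0.0), u(h), u(h / 2.0))
```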
Next, we can find the strain rate in the fluid
$\dot{\epsilon}=\frac{1}{2} \frac{du}{dy} =-\frac{u_o}{2h} \nonumber$
and the shear stress
$\sigma=2\eta\dot{\epsilon} = \eta \frac{du}{dy} = -\frac{\eta u_o}{h} \nonumber$
The negative sign indicates that the shear stress acts against the direction of flow, in the -x direction.
Couette flow can be used to approximate the flow in the mantle being dragged by a moving tectonic plate at the surface above. From the figure below, we estimate that the thickness of the shearing layer beneath the plate is $$h \approx 100-150$$ km and the tectonic plate is moving with a velocity of $$v_p \approx 5 \frac{cm}{yr}$$. The Couette flow solution can then be used to estimate the strain-rate and the shear stress in the asthenosphere below the plate.
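Plugging in the numbers gives an order-of-magnitude estimate; the asthenosphere viscosity of $$10^{19}$$ Pa s is one illustrative choice from the range quoted earlier in this section.

```python
# Order-of-magnitude Couette estimate for the asthenosphere sheared by
# a moving plate. Plate velocity and layer thickness follow the text;
# the viscosity (1e19 Pa s) is an assumed value from the quoted range.
seconds_per_year = 3.15e7
u0 = 5.0 * 0.01 / seconds_per_year   # 5 cm/yr converted to m/s
h = 100.0e3                          # m, thickness of the shearing layer
eta = 1.0e19                         # Pa s (assumed)

strain_rate = u0 / (2.0 * h)         # |epsilon-dot| = u0 / (2h)
shear_stress = eta * u0 / h          # |tau| = eta * u0 / h

print(f"strain rate  ~ {strain_rate:.1e} 1/s")
print(f"shear stress ~ {shear_stress:.1e} Pa")
```

This gives a strain-rate of order $$10^{-14}\ s^{-1}$$ and a shear stress of order 0.1 MPa, consistent with the plate-boundary strain-rates quoted earlier.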
#### Flow Down an Inclined Plane
Now let's consider another geologic application: fluid flow down an inclined plane. This geometry can be used to examine flow in a glacial ice stream and flow of magma within channels on the slopes of a volcano.
In this case we consider a layer of flowing fluid with thickness h flowing down a plane inclined at an angle $$\alpha$$. To solve the problem using the equations we already derived, we orient the x axis parallel to the inclined plane and the y axis perpendicular to it, with $$y = 0$$ at the top of the flowing fluid. The boundary conditions are that the shear stress is zero at the top of the fluid (there is nothing but air above the fluid) and the velocity is zero at the bottom of the fluid, at $$y = h$$.
Now, let's think about why the fluid flows down the plane. The answer is gravity. While gravity is directed downward, there is a component of gravity oriented parallel to the inclined surface. This component pulls the fluid down the incline due to the weight of the fluid. From the geometry shown below,
$\sin(\alpha)=\frac{g_x}{g} \nonumber$
so
$g_x=g\sin(\alpha). \nonumber$
The third panel in the figure below shows how the pressure changes in the fluid. At some position x, the pressure is $$P_o$$. At a position $$\Delta x$$ down the plane, the pressure is $$P_o +\rho g\sin(\alpha)\Delta x$$. Therefore, the pressure difference in the fluid is
$\Delta P=P_o - (P_o +\rho g\sin(\alpha)\Delta x) \nonumber$
or rearranging to get the pressure gradient
$\frac{\Delta P}{\Delta x}=-\rho g\sin(\alpha) \nonumber$
Finally, we get that
$\frac{dp}{dx}=-\rho g\sin(\alpha) \nonumber$
Substituting $$\frac{dp}{dx}=-\rho g\sin(\alpha)$$ into the general solution for $$u(y)$$ derived above, we get
$u=\frac{1}{2\eta} (-\rho g\sin(\alpha))y^2 +c_1 y+c_2 . \nonumber$
We still need our boundary conditions to solve the equation for the integration constants.
Using the condition that at $$y=0$$, $$\tau=\eta \frac{du}{dy}=0$$, we find that $$c_1=0$$.

Using the condition that at $$y=h$$, $$u=0$$, we find that $$c_2=\frac{\rho g\sin(\alpha)}{2\eta}h^2$$, so that
$u=\frac{\rho g\sin(\alpha)}{2\eta}(h^2 -y^2) \nonumber$
Thus, the velocity profile of the flow down the inclined plane is a parabola.
As we can see from the figure, the maximum velocity occurs at the top of the flow, at $$y = 0$$.
However, if we use the velocity to determine the strain-rate, we find that
$\dot{\epsilon}=\frac{1}{2} \frac{du}{dy}=-\frac{\rho g\sin(\alpha)}{2\eta} y \nonumber$
Therefore the maximum strain-rate (in terms of magnitude; the sign is not important here) occurs at the bottom of the flow, at $$y=h$$.
This illustrates the important point that fast velocity does not necessarily mean high strain-rate. Instead high strain rate occurs where there is a strong spatial gradient in fluid velocity.
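The inclined-plane solution can be applied to a hypothetical glacial ice stream; every parameter value below is an illustrative assumption, chosen to be consistent with the ranges quoted earlier in this section.

```python
import math

# Inclined-plane flow applied to a hypothetical glacial ice stream:
# u(y) = (rho * g * sin(alpha) / (2 * eta)) * (h**2 - y**2),
# with y = 0 at the surface and y = h at the bed.
rho = 917.0                  # kg/m^3, density of ice
g = 9.8                      # m/s^2
alpha = math.radians(1.0)    # slope of the bed (assumed)
h = 500.0                    # m, ice thickness (assumed)
eta = 1.0e13                 # Pa s, ice viscosity (assumed)

coeff = rho * g * math.sin(alpha) / (2.0 * eta)
u_surface = coeff * h**2       # maximum velocity, at the surface (y = 0)
u_bed = coeff * (h**2 - h**2)  # zero at the bed (y = h)

seconds_per_year = 3.15e7
print(f"surface velocity  ~ {u_surface * seconds_per_year:.0f} m/yr")

# The strain-rate magnitude |epsilon-dot| = coeff * y is largest at the bed:
basal_strain_rate = coeff * h
print(f"basal strain rate ~ {basal_strain_rate:.1e} 1/s")
```

With these assumed values the surface velocity comes out near 60 m/yr, inside the 1-100 m/yr range quoted above, while the strain-rate is highest at the bed even though the velocity there is zero.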
Total number of outcomes = 6*6 = 36, Each die can take a number from 1 to 6 i. A standard deck of cards has 12 face. From the remaining cards of the pack, two cards are drawn and are found to be diamonds. Get an answer for 'When two dice are thrown what is the probability that the sum is 8. The probability of rolling an even number on 1 die is 3/6. The probability of two dice not being a that number is 5/6 x 5/6 = 25/36. Let B be the event - The sum of the top faces of the 3 dice >= 5. from Rosemount. Find maximum subset sum formed by partitioning any subset of array into 2 partitions with equal sum. The ways to get a 4 are: 1+3, 2+2, 3+1 = 3 ways. There are 6 6 possible outcomes. This is the aptitude questions and answers section on "Probability" with explanation for various interview, competitive examination and entrance test. Get a free answer to a quick problem. [3 Marks) 0. Let X1 and X2 be the outcomes, and let S2 = X1 + X2 be the sum of these outcomes. In a tabular format it should show the frequency of rolling 2,3,4,5,6,7,8,9,10,11,12 and also the percentage. Throwing a 10 yields 0. With dice there is: 1 way to get a sum of 2. Roll two dice. What is the probability that the sum of the two dice will not be a 6? 31/36. When rolling one die, the probability of getting a 4 is 1 in 6, or 0. Probability (statistics) What is the probability of getting a sum of 8 in rolling two dice? Update Cancel. Probability - Quantitative Aptitude objective type questions with answers & explanation (MCQs) for job Now a shirt is picked from second box. For each of the possible outcomes add the numbers on the two dice and count how many times this sum is 7. Of these, 6 have a sum less than five, 1+1, 1+2, 1+3, 2+1, 2+2, and 3+1. Find the Mean of the Roll z column; OK; Repeat process except find the Standard Deviation of the Roll z column; By hand (with a calculator) square the standard deviation to get the variance. 
If a pair of dice are rolled 5 times, what is the probability of getting a sum of 5 every time?. Isn’t that kind of cool?. Each dice has six combinations which are independent. (6,1),(6,2),(6,3),(6,4),(6,5),(6,6) since each 6 numbers. the black die resulted in a 5. (i) Prime numbers = 2, 3 and 5 Favourable number of events = 3 Probability that it will be a prime. We have to find what is the probability that the sum of numbers rolled is either 5 or 12 We know that, probability of an event = Now, total outcomes for two dices = 6 for 1st dice x 6 for 2nd dice = 6 x 6 = 36. Draw a card. a sum that is divisible by 4 e. Let's write events!! Let A be the event - The sum of the top faces of 3 dice > 8. That intuition is wrong. When two dice are thrown, find the probability of getting a number always greater than 4 on the second die. With dice there is: 1 way to get a sum of 2. Find the probability that a 5 will occur first. 1 die, 2 dice. When two dice are rolled, total no. The probability of getting a sum of 5 when rolling two dice is 4/36 = 1/9 because there are 4 ways to get a five and there are 36 ways to roll the dice (Fundamental Counting Principle - 6 ways to roll the. Find the joint probability mass function of X and Y when (a) X is the largest value obtained on any die and Y is the sum of the values; In this first case, we have p X,Y ( x,y ) 2 3 4 5 6 d Suppose you dont know the probability p of getting a ticket but you got 5. Rolling Dice. Find the Mean of the Roll z column; OK; Repeat process except find the Standard Deviation of the Roll z column; By hand (with a calculator) square the standard deviation to get the variance. For example, the event "the sum of the faces showing on the two dice equals six" consists of the five The probability of an event is defined to be the ratio of the number of cases favourable to the. Four fair, 6-sided dice are rolled. 
In the experiment of rolling two dice think of one as red and the other as green and list the possible Here, for example, the (3,5) in third row and fifth column means a 3 was rolled on the red die and a 5 Thus the sum is a 7 in 6 of the 36 outcomes and hence the probability of rolling a 7 is 6/36 = 1/6. This probability of both dice rolling a 2 or 3 or 4 or 5 or 6 is also 1/36. To find the probability. A Recursion Formula for the Probability Distribution of the Sum of k Dice In this section we derive a recursion formula for the probability distribution ofthe sum of j dice, using the probability distribution ofthe sum of 7 -1 dice. 333%) probability of NOT rolling a 5 2 rolls: (5/6) x (5/6) (69. With every new roll the probability the next four rolls will be all double sixes is (1/36)4 = 1 in 1679616. Event Die 1 Die 2 Sum 1 2 3 4 5 6 7 8 9. The other die roll could be a 1, 2, 3, 4, 5, or 6. 2) a sum of 6 or 7 or 8 b) doubles or a sum of 4 or 6 c) a sum greater than 9 or less than 4, Please help me. Rolling Dice. The total of points is 21 and the actual corresponding dice roll (we have to sum 1 pre-assigned point to each die) would be {2,7,1,5,1,1,4,2,7,1}, with sum 31 but with two outlaw dice. (f) A sum less than 13. A pair of dice, two different colors (for example, red and blue) A piece of paper; Some M&M’s or another little treat; What You Do: Tell your child that he's going to learn all about probability using nothing but 2 dice. Two 6-sided dice are rolled. a sum less than 13. Dependent Event - An event whose probability of occurring is influenced by (i. two dice are rolled find the probability of getting a 5 on either dice or the sum of both dice is 5. Is this solution Helpfull? Yes (28) | No (6). The probability distribution of a discrete random variable X is a listing of each possible value x taken by X. What is the. A single die is rolled twice. I recently got asked how to find the probability of rolling a sum of 12 with two dice. 
Therefore, in this example, we could write: p1 = p2 = p3 = p4 = p5 = p6 = where p1 ≡ probability of rolling a 1, p2 ≡ probability of rolling a 2, etc. 16&comma. Find the probability of correctly answering the first 2 questions on a multiple choice test if random guesses are made and each question has 5 possible answers. What is the. Throwing a 6,5,4,3,2 or 1 deducts 0. sum that takes two arguments: n. Find the joint probability mass function of X and Y when (a) X is the largest value obtained on any die and Y is the sum of the values; (b) X is the value on the first die and Y is the larger of the two values; (c) X is the smallest and Y is the largest value obtained on the dice. > Consider this matrix for two dice roll game [math]\textrm{Total outcomes with the sum} $3 = 2$ $\textrm{Total outcomes with the sum}[/math. The probability of not rolling a sum of six with two fair dice is 1 minus the probability of rolling a sum of six. 4d10 are enough to sample uniformly from between 1 and 10,000), but it becomes increasingly tedious to generate larger numbers. the probability of the sum being: 2 is 1/36 3 is 2/36 4 is 3/36 5 is 4/36 6 is 5/36 7 is 6/36 8 is 5/36 9 is 4/36 10 is 3/36 11 is 2/36 12 is 1/36 It then asks: P(the sum of the two dice equals 2) P(the sum of the two. From the remaining cards of the pack, two cards are drawn and are found to be diamonds. The logic is there are six sides to each die, so for each number on one You did the math for the probability of rolling a dice twice and getting a multiple of 3 on both rolls. We want sum to be greater than 16, So, sum could be either 17 or 18. Online binomial probability calculator using the Binomial Probability Function and the Binomial Entering 0. If I roll two dice, does my probability of rolling a six on one of them increase, or does it stay at 1/6? Mike R. the probability that the sum is 6 given that at least one of the numbers is less than 3. 
Let B be the event - The sum of the top faces of the 3 dice >= 5. 7) F Two dice are rolled. There are 36 different combinations that can be rolled using 2 die. > Consider this matrix for two dice roll game [math]\textrm{Total outcomes with the sum}$ $3 = 2$ $\textrm{Total outcomes with the sum}[/math. A pair of dice is rolled until either the two numbers on the dice agree or the difference of the two numbers on the dice is 1 (such as a 4 and a 5, or a 2 and a 1). The probability of either of the incidents happening is 5/12. two dice are rolled find the probability of getting a 5 on either dice or the sum of both dice is 5. 8: 5/369: 4/3610: 3/3611: 2/3612: 1/36. Question 4: Two dice are rolled, find the probability that the sum is a) equal to 1 b) equal to 4 c) less than 13 Solution to Question 4: a) The sample space S of two dice is shown below. A standard deck of cards has 12 face. Probability for rolling two dice with the six sided dots such as 1, 2, 3, 4, 5 and 6 dots in each die. Hamilton sat just 1. Dice and Dice Games. Number of outcomes of the experiment that are favorable to the event that a sum of two events is 6. Major changes in Python environment : . My own intuition tells me the answer is 2/3 because the other die simply needs to show 3, 4, 5, or 6 for the sum to be at least 5. Two fair dice are rolled and the sum of the points is noted. The probabilities in the probability distribution of a random variable X must satisfy the following two A pair of fair dice is rolled. Since there are 6 \times 6 = 36 total dice rolls and 1/3 of those are a multiple of three, the number which are divisible by three is (1/3)(36) = \boxed{12}. Find probability nobody gets own hat. equals to prime number when we add two rolled dice from 1 to 6 So no. Find the probability of getting a sum of 6. Probability (statistics) What is the probability of getting a sum of 8 in rolling two dice? Update Cancel. Memorizing the making of the above picture makes the. 
If two dice are rolled one time, find the probability of getting these results. A sum less than or equal to 4. You ask for P(A|B). There are 6 6 possible outcomes. Is this unusual? On average, it will occur about 1 in 12 times. Let B be the event - The sum of the top faces of the 3 dice >= 5. Rolling more dice. dice tells how many dice we roll. To support your homeschooling, we're including unlimited answers with your free account for the time being. Assuming that the dice are unbiased or not " loaded". The probability of the two dice totaling an even number is 1/2. It is assume each die is fair and 6-sided. Now, favourable outcomes = sum. In this skilltest, we tested our. 4d6, drop lowest, reroll if max < 14 or reroll if the sum of the modifiers is < 1. When rolling one die, the probability of getting a 4 is 1 in 6, or 0. 7) F Two dice are rolled. 6 outcomes on one die X 6 outcomes on other die = 36 outcomes. So we just need to work out the probability of rolling a 7, then take half of what's left. Let B be the event - The sum of the top faces of the 3 dice >= 5. Find the probability of getting a multiple of 2 on one dice and multiple of 3 on the other dice. (i) Prime numbers = 2, 3 and 5 Favourable number of events = 3 Probability that it will be a prime. Find the probability that a 5 will occur first. Explanation of the fundamental concepts of probability distributions. For example: 1 roll: 5/6 (83. Find the probability of getting a sum of 6 when rolling a pair of dice. (i) To get the sum of numbers 4 or 5 favourable outcomes are: (1, 3) ,(3, 1) , (2,2). A Collection of Dice Problems Matthew M. So the probability of a sum of at least 5 is 30 out of 36, which gives us the fraction which reduces to. When two balanced dice are rolled, 36 equally likely outcomes. Then P(A) = 4/36 and P(B) = 6/36. The probability that it is a double with a sum of 11 is zero (0) When Two Balanced Dice Are Rolled, There Are 36 Possible. Is that unusual enough? 
We have to be careful when we characterize an event as unusual. 1/18 5/36 1/6 1/9. Speech recognition, image recognition, finding patterns in a dataset, object classification in photographs, character text generation, self-driving cars and many more are just a few examples. What is the probability of getting a 5 after rolling a single 6-sided die? 1/6 or 16. What is the probability of getting a number other than 6?. You ask for P(A|B). Probability (statistics) What is the probability of getting a sum of 8 in rolling two dice? Update Cancel. What is the probability of getting a flush in a. On a mission to transform learning through computational thinking, Shodor is dedicated to the reform and improvement of mathematics and science education through student enrichment. ECEN 303 - Fall 2011. Suppose we have 3 unbiased coins and we have to find the probability of getting at least 2 heads, so there are 23 = 8 ways to toss these coins, i. hi Dakotah :) A number cube is rolled 20 times and lands on 1 two times and on 5 four times. Independent probabilities are calculated using: Probability of both = Probability of outcome one × Probability of outcome two So to get two 6s when rolling two dice, probability = 1/6 × 1/6 = 1/36 = 1 ÷ 36 = 0. Find the probability of getting a sum of 7. Rolling Dice. Obviously with two dice you can't get less than 2 or more than 12, so the only squares are 4 and 9. No, other sum is possible because three dice being rolled give maximum sum of (6+6+6) i. Example 8: A die is rolled, find the probability of getting an even number. A Recursion Formula for the Probability Distribution of the Sum of k Dice In this section we derive a recursion formula for the probability distribution ofthe sum of j dice, using the probability distribution ofthe sum of 7 -1 dice. What Is The Probability That The Sum Of 8 Does Not Occur?. Dice roll probability: 6 Sided Dice Example. 
We start with writing a table to Discrete = This means that if I pick any two consecutive outcomes. The sum is 2 /9. If we assume the die is perfectly balanced, the probability of any particular outcome (say, rolling a ‘3’) is 1 out of 6. asked by Jacqueline on August 28, 2015; math. Solution Two Different Dice Are Thrown at the Same Time. Calculate the is the conditional probability that the Finding P (E): The probability of getting 4 atleast once is. [3 Marks) 1 13 5 Question 2 Find The Z Score That Corresponds To The Given Area. Two rolls are independent and identically distributed, with probability of rolling a particular number being 1/6. Is this unusual? On average, it will occur about 1 in 12 times. Let's write events!! Let A be the event - The sum of the top faces of 3 dice > 8. The probability of appearance of any of two incompatible events is equal to the sum of the The conditional probability of an event B with the condition that an event A has already happened is. a sum that is divisible by 4 e. Two tetrahedral dice (four-sided dice) are thrown. A dice is thrown, cases 1,2,3,4,5,6 form an exhaustive set of events. The probability of rolling a specific number twice in a row is indeed 1/36, because you have a 1/6 chance of getting that number on each of two rolls (1/6 x 1/6). Experimental Probability: Experiment with probability using a fixed size section spinner, a variable section spinner, two regular 6-sided dice or customized dice. It's somehow different than previously because only a part of the whole set has to match the conditions. It is assume each die is fair and 6-sided. Roll two dice. Memorizing the making of the above picture makes the. Of these, five sum to six, 1+5, 2+4, 3+3, 4+2, and 5+1. Two different coins are tossed randomly. Is that unusual enough? We have to be careful when we characterize an event as unusual. Of these, five sum to six, 1+5, 2+4, 3+3, 4+2, and 5+1. 
Find the probability of getting two numbers whose sum is greater than 10. Two rolls are independent and identically distributed, with probability of rolling a particular number being 1/6. 5 or 1/2 in the calculator and 100 for the number of trials and 50 for "Number of events" we get Example 2: Dice rolling. > Consider this matrix for two dice roll game [math]\textrm{Total outcomes with the sum}$ $3 = 2$ [math]\textrm{Total outcomes with the sum}[/math. (e) A sum of 14. [3 Marks) 0. Compute the total probability of getting a 4 or a 9: 3/36 + 4/36 = (3 + 4)/36. The probability of rolling a six on a single roll of a die is 1/6 because there is only 1 way to roll a six out of 6 ways it could be rolled. The pair can be any one of 6 numbers. The probability of either of the incidents happening is 5/12. Texas A&M University. Calulate the probability of getting a double or treble on a dartboard. Find the expected number of times one needs to roll a dice before getting 4 sixes. Therefore, x can be any number from. It compiles alright but I am not getting the output. That takes care of the winning or losing probabilities for the naturals (7,11) and the craps (2,3,12) outcomes. Is this unusual? On average, it will occur about 1 in 12 times. a sum of 14 f. Roll each attribute in order - do not assign numbers to stats as you see fit. Find the probability of the lost card being a diamond. "If you roll a dice three times, what is the probability of rolling a 6 at least once?" The correct answer is 91/216. A glass jar contains 6 red, 5 green, 8 blue and 3 yellow marbles. The proability of getting neither is equal to the probability of getting anything other than 7 or 8. Thus, the probability of two odd numbers (no even numbers) is (1/2)*(1/2) = 1/4. Let A be event of rolling a 5 and B of rolling a 7. Total possible outcomes = 36. If one of the dice shows 1 to 4, the sum will not be greater than 10. 2 ways to get a sum of 3. You may get a side with 1, 2, 3, 4, 5, or 6 dots. 
16667, to turn up when rolled, if the die (D) is unbiased. If I roll two dice, does my probability of rolling a six on one of them increase, or does it stay at 1/6? Mike R. the probability that the sum is 6 given that at least one of the numbers is less than 3. The ways to get a 9 are: 3+6, 4+5, 5+4, 6+3 = 4 ways. Let E denote the event that the number landing uppermost on the first die is a 3, and let F denote the event that the sum of Which pair has equally likely outcomes? Check the two choices below which have equal probabilities of success. 33 Question 3 Let A And B Be Two Independent Event, Such That P (A) = 0. The number of possible outcomes in E is 1 and the number of possible outcomes in S is 6. The combinations for rolling a sum of seven are much greater (1 and 6, 2 and 5, 3 and 4, and so on). The sum of the two dice you rolled is. Of these, 6 have a sum less than five, 1+1, 1+2, 1+3, 2+1, 2+2, and 3+1. The probability of rolling any number twice in a row is 1/6, because there are six ways to roll a specific number twice in a row (6 x 1/36). Independent probabilities are calculated using: Probability of both = Probability of outcome one × Probability of outcome two So to get two 6s when rolling two dice, probability = 1/6 × 1/6 = 1/36 = 1 ÷ 36 = 0. 7) F Two dice are rolled. There are 36 permutations of rolling two dice. The odds of rolling two dice and the sum being greater than 9 are 6 to 30. Find the probability of getting two numbers whose sum is greater than 10. When two six-sided dice are tossed, there are 36 possible outcomes as shown. The game is designed as such that you throw a pair of dice and get money back or loose money. In this way, the difference value for any roll of the two dice will always be positive or 0. Two different dice are thrown together. So the probability of not getting 7 or 11 is 7/9. Sum of dices when three dices are rolled together If 1 appears on the first dice, 1 on the second dice and 1 on the third dice. 
When two dice are rolled, we get 36 possible outcome like (1,1),(1,2),(1,3),(1,4),(1,5),(1,6) …………. Throwing Dice More Than Once. If a pair of dice are rolled 5 times, what is the probability of getting a sum of 5 every time?. Example: Roll two 6-sided dice 0. That takes care of the winning or losing probabilities for the naturals (7,11) and the craps (2,3,12) outcomes. 78% If you need to get the probability of acquiring two different numbers when you roll a pair of dice, the calculation becomes a bit different. On a mission to transform learning through computational thinking, Shodor is dedicated to the reform and improvement of mathematics and science education through student enrichment. Assuming that the dice are unbiased or not " loaded". From the remaining cards of the pack, two cards are drawn and are found to be diamonds. To find the probability that the sum of the two dice is three, we can divide the event frequency (2) by the size of the sample space (36), resulting in a probability of 1/18. Here are a few examples that show off Troll's dice roll language: Roll 3 6-sided dice and sum them: sum 3d6. The probability that the first die rolls 3 and the second die rolls 1 is also 1/36. Roll each attribute in order - do not assign numbers to stats as you see fit. when two dice are rolled, find the probability of getting: a. The total of points is 21 and the actual corresponding dice roll (we have to sum 1 pre-assigned point to each die) would be {2,7,1,5,1,1,4,2,7,1}, with sum 31 but with two outlaw dice. Suppose that the first die we roll comes up as a 1. Find the probability of getting a sum of 6 when rolling a pair of dice. What is the probability of rolling a 6 with a pair of standard dice? There are five ways to roll a 6: (1,5)(2,4)(3,3)(4,2), and (5,1). That takes care of the winning or losing probabilities for the naturals (7,11) and the craps (2,3,12) outcomes. 
It is a relatively standard problem to calculate the probability of the sum obtained by rolling two dice. Throwing a 6,5,4,3,2 or 1 deducts 0. What is the. As the chart shows the closer the total is to 7 the greater is the probability of it being thrown. Rolling two dice. Here, the sample space is given when two dice are rolled. What is the probabilities of getting at least a 1 OR a 5 with 1 die, 2 dice, 3 dice, etc. a sum less than 4 or greater than 9 d. Calulate the probability of getting a double or treble on a dartboard. What is the probability that exactly two of the dice show a 1 and exactly two of the dice show a 2? Express your answer as a common fraction. 6 outcomes on one die X 6 outcomes on other die = 36 outcomes. From the remaining cards of the pack, two cards are drawn and are found to be diamonds. 5 ways to get a sum of 6. A black and a red dice are rolled Let us take first numbers to have been appeared on the black die and the second numbers on rolled. 4d10 are enough to sample uniformly from between 1 and 10,000), but it becomes increasingly tedious to generate larger numbers. The probability of getting less than 8 is the sum of the probabilities of 2-7:. What Is The Probability That The Sum Of 8 Does Not Occur?. Two fair dice are rolled. , in short (H, H) or (H, T) or (T, T) respectively; where H is denoted for head and 1. Question: Question 1 If Two Dice Are Rolled One Time, Find The Probability Of Getting A Sum Greater Than 6 And Less Than 12. This resulted in the first professional. Roll two dice. Rolling more dice. Find the probability that a 5 occurs first. Here, the sample space is given when two dice are rolled. Let B be the event - The sum of the top faces of the 3 dice >= 5. Question 1033885: Three dice are tossed. Since there are $6 \times 6 = 36$ total dice rolls and $1/3$ of those are a multiple of three, the number which are divisible by three is $(1/3)(36) = \boxed{12}$. 
We’ll look at two approaches to finding the likely outcomes in kdb/q: Method 1 – Enumeration of all possibilities. the probability that the sum is 6 given that at least one of the numbers is less than 3. A card from a pack of 5 2 cards is lost. Two fair dice are rolled and the sum of the points is noted. There are 6*3 = 18 ways to get two numbers of the same parity (the first can be any of the 6 numbers, and the second has to be 3 of the possible 6 which have the same parity), giving a total of 18 ways to get an even sum out of a possible of 6*6 = 36 outcomes (we don't have to consider if the first number is even or odd since there are an equal. If a fair dice is thrown 10 times, what is the probability of throwing at. Remind him that there are 6 options on both sides. Sum of dices when three dices are rolled together If 1 appears on the first dice, 1 on the second dice and 1 on the third dice. My own intuition tells me the answer is 2/3 because the other die simply needs to show 3, 4, 5, or 6 for the sum to be at least 5. When two dice are thrown together total possible outcomes = 6 X 6 = 36 Favourable outcomes when both dice have number more than 3 are (4, 4), (4, 5),(4, 6), (5, 4), (5, 5). When two dice are rolled, the probability of getting an even number on at least one die is 3/4. The probability of one dice not being a particular number is 5/6. (a) Find the conditional probability of obtaining a sum greater than 9, given that Given that the two numbers appearing on throwing two dice are different. (6,1),(6,2),(6,3),(6,4),(6,5),(6,6) since each 6 numbers. When two six-sided dice are tossed, there are 36 possible outcomes as shown. If a pair of dice are rolled 5 times, what is the probability of getting a sum of 5 every time?. 3) Drawing a card from a regular deck of 52 playing cards has 52 possible outcomes. The probability of throwing any given total is the number of ways to throw that total divided by the total number of combinations (36). 
When two fair six-sided dice are rolled there are 6 × 6 = 36 equally likely outcomes. Ordered pairs must be counted separately: (2,5) and (5,2) are distinct outcomes, because the dice act independently of each other. The probability of any single face on one die is 1/6, so the probability that both dice show a 1 is 1/6 × 1/6 = 1/36. The closer a total is to 7, the greater the probability of throwing it: a sum of 2 has probability 1/36, sums of 5 and 9 each have probability 4/36 = 1/9, and 7 is the most likely sum at 6/36 = 1/6. For three six-sided dice there are 216 outcomes in total; the most common sums are 10 and 11, each with probability 1/8, and the least common are 3 and 18, each with probability 1/216. A convenient way to work out such probabilities is to generate the possible outcomes for each die and tabulate the sums.
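These counts are easy to verify by enumerating the 36 ordered outcomes; the following is an illustrative sketch (the variable names are ours):

```python
from collections import Counter
from fractions import Fraction

# Enumerate all 36 ordered outcomes of rolling two fair dice
# and count how many ways each total can occur.
counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))
probs = {total: Fraction(n, 36) for total, n in counts.items()}

print(probs[7])  # -> 1/6, the most likely sum
```

The same enumeration confirms that sums of 5 and 9 each occur in 4 of the 36 outcomes, i.e. with probability 1/9.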
|
{}
|
# Problem compiling wine tests with msvc in standalone mode.
Mikołaj Zalewski mikolaj at zalewski.pl
Sat Jul 19 06:51:50 CDT 2008
Where does the ../../../include\rpc.h file come from? If you are
mixing Wine and Windows headers then conflicts are quite possible. I
copy only wine/test.h from the Wine headers. Often, I also need a
#include <windows.h> at the top of the file for it to compile with MS
headers, but after this, it works.
BTW, as a reminder, you shouldn't use VS for msvcrt tests as you will
|
{}
|
NOAO > Observing Info > Approved Programs > 2010B-0362
# Proposal Information for 2010B-0362
PI: Arlin Crotts, Columbia University, arlin@astro.columbia.edu
Address: Department of Astronomy, 550 West 120th Street, New York, NY 10027, U.S.A.
CoI: Steve Lawrence, Hofstra University
CoI: Steve Heathcote, SOAR
Title: Evolution of Supernova Remnant 1987A
Abstract: The collision between the ejecta of SN 1987A and its circumstellar ring is underway. Now and in the next few years, we are watching radical changes in the circumstellar nebula as it is overrun by ejecta expanding at a substantial fraction of c, giving birth to a supernova remnant. We have already discovered (and published) previously, by virtue of this observational program, new interactions between the nebula and ejecta, in the form of "hot spots" appearing at the rate of 3 to 5 per year, and we now see the whole inner surface beginning to interact. The collision is predicted (and has been observed) to produce intense IR and optical emission, in new and previously observed lines. Depending on whether they arise in the ejecta or nebula, and whether they are shock or EUV excited, these lines have widths from ~10 to 15,000 km s^-1. Frequent moderate-dispersion spectra are needed to monitor these features. This phenomenon is now entering a phase of collective evolution, in which many finer features are being washed out, and ionizing radiation is beginning to flood the entire structure. This means we should start to transition to an epoch when more observations are made from the ground than with the finer spatial resolution of HST. MIKE and the RC Spec are ideally suited to take over this task, treating velocity scales, wavelengths and time intervals not covered by HST, and allowing us to study for the first time ever the creation of a nearby supernova remnant.
National Optical Astronomy Observatory, 950 North Cherry Avenue, P.O. Box 26732, Tucson, Arizona 85726, Phone: (520) 318-8000, Fax: (520) 318-8360
|
{}
|
#### 题目列表
Insect predators usually keep the number of aphids in crop fields low. However, sometimes the aphid population explodes in size, causing major damage. Such explosions happen when unusually cold weather keeps the number of aphids low in the spring. One possible explanation is that, with fewer aphids to feed on, the predator population also drops, and in summer, when the aphid population starts to grow, there are not enough predators to keep it in check.
Which of the following, if true, would most strengthen the explanation given for aphid population explosions?
Although traditionally artists have rightly been seen as the most _____ audience for the work of their colleagues, today taste is also created by critics and curators and occasionally by collectors.
#### Quantity A
$(-87)^{8}$
#### Quantity B
$(\frac{1}{87})^{-8}$
Which of the following could be the value of x such that $x^{3}-x$ is divisible by 10?
Indicate all such values.
The lengths of two sides of a triangle are 1 and $\sqrt{2}$, respectively. What is the range of possible lengths of the third side?
If $a=(-\frac{1}{37})^{12}$, which of the following equals $37^{-12}$?
#### Quantity A
$\frac{111}{1,111}$
#### Quantity B
$\frac{1,111}{11,111}$
Three coins (two 10-cent coins and one 5-cent coin) are to be flipped simultaneously. For each of the three coins, the probability that the coin will land heads up is $\frac{1}{2}$. What is the probability that the total value of the coins that will land heads up is 15 cents?
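A brute-force check of the three-coin question above (an illustrative sketch; the variable names are ours, not part of the problem):

```python
from itertools import product
from fractions import Fraction

coins = (10, 10, 5)  # two dimes and one nickel, in cents
# 1 = heads, 0 = tails; all 2**3 = 8 outcomes are equally likely.
outcomes = list(product((0, 1), repeat=3))
hits = [o for o in outcomes if sum(c for c, h in zip(coins, o) if h) == 15]
probability = Fraction(len(hits), len(outcomes))  # 2 of 8 outcomes -> 1/4
```

Only {first dime, nickel} and {second dime, nickel} total 15 cents, giving 2/8 = 1/4.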
Which of the following could be a factor of $\frac{9!}{(6!*3!)}$?
Indicate all such numbers.
N= $32^{19}$ - 32
What is the units digit of N?
If a, b, and c are positive integers such that $\frac{a}{c}$=0.075, and $\frac{b}{c}$=0.09, What is the least possible value of c?
The speed of light is 3*$10^{8}$ meters per second, rounded to the nearest $10^{8}$ meters per second. A "light-hour" is the distance that light travels in an hour.
#### Quantity A
The number of kilometers in a light-hour
#### Quantity B
$10^{10}$
Of the following, which graph best represents a shaded region in which every point (x, y) satisfies the inequality y ≤ |x|?
The table shows the means and ranges of two data sets, X and Y, each containing the same number of measurements.
#### Quantity A
The standard deviation of data set X
#### Quantity B
The standard deviation of data set Y
#### Quantity A
$\frac{(5!+6!)}{(6!+7!)}$
#### Quantity B
$\frac{1}{6}$
Three numbers are to be selected at random and without replacement from the five numbers 4, 5, 7, 8 and 11. What is the probability that the three numbers selected could be the lengths of the sides of a triangle?
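The triangle-selection question above can also be checked by enumeration; a sketch (our own names, not part of the problem):

```python
from itertools import combinations
from fractions import Fraction

numbers = [4, 5, 7, 8, 11]
triples = list(combinations(numbers, 3))  # all C(5,3) = 10 selections
# Three lengths form a triangle iff the two smaller ones sum to more
# than the largest (strict triangle inequality); since the input list is
# sorted, every tuple from combinations() is already in ascending order.
valid = [t for t in triples if t[0] + t[1] > t[2]]
probability = Fraction(len(valid), len(triples))  # 8 of 10 -> 4/5
```

Only (4, 5, 11) and (4, 7, 11) fail the inequality, so 8 of the 10 selections work.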
#### Quantity A
x
#### Quantity B
112
$l_{1}$∥$l_{2}$
#### Quantity A
m
#### Quantity B
130
Let x and y be positive integers such that when y is divided by x, the remainder is 4, and when y+10 is divided by x, the remainder is 2. Which of the following must be an integer?
|
{}
|
ISSN 0439-755X
CN 11-1911/B
Institute of Psychology, Chinese Academy of Sciences
• Article •
### An Improved Method for Differential Item Functioning Detection in Cognitive Diagnosis Models: A Wald Statistic Based on the Observed Information Matrix
1. (1 Institute of Developmental Psychology, Beijing Normal University, Beijing 100875, China) (2 National Innovation Center for Assessment of Basic Education Quality, Beijing 100875, China) (3 School of Teacher Education, Taishan University, Taian 271000, Shandong, China)
• Received: 2015-09-17 Online: 2016-05-25 Published: 2016-05-25
• Corresponding author: XIN Tao, E-mail: xintao@bnu.edu.cn
• Funding:
National Natural Science Foundation of China (31371047); the Fundamental Research Funds for the Central Universities (SKZZX2013028).
### An improved method for differential item functioning detection in cognitive diagnosis models: An application of Wald statistic based on observed information matrix
LIU Yanlou1; XIN Tao1,2; LI Lingqing3; TIAN Wei2; LIU Xiaoxiao1
1. (1 Institute of Developmental Psychology, Beijing Normal University, Beijing 100875, China) (2 National Innovation Center for Assessment of Basic Education Quality, Beijing 100875, China) (3 School of Teacher Education, Taishan University, Taian 271000, China)
• Received:2015-09-17 Online:2016-05-25 Published:2016-05-25
• Contact: XIN Tao, E-mail: xintao@bnu.edu.cn
Hou, de la Torre, and Nandakumar (2014) proposed that the Wald statistic can be used to test for DIF, but the resulting Type I error rates are severely inflated. In this study we propose an improved Wald statistic computed from the observed information matrix. The results show that: (1) the improved Wald statistic computed with the observed information matrix has good Type I error control in DIF detection, especially when items are highly discriminating, resolving the Type I error inflation reported in previous studies; (2) as the sample size and the amount of DIF increase, the statistical power of the Wald statistic computed with the observed information matrix also increases.
Abstract:
In cognitive diagnostic models (CDMs), differential item functioning (DIF) refers to the probabilities of success of an item being different for examinees with the same attribute mastery pattern in the groups. The detection of DIF is an important step to ensure the fairness and validity of results from CDMs for all groups. Hou et al. (2014) proposed that the Wald statistic can be used to detect DIF in CDMs. Unfortunately, their results revealed that the Wald statistic based on the information matrix estimation method developed by de la Torre (2009, 2011) yielded inflated Type I error rates. However, Li and Wang (2015) found that the Type I error rates of the Wald statistic in which MCMC algorithms were implemented were slightly inflated in their study under the same conditions. In this study, we proposed an improved Wald statistic based on the observed information matrix for DIF assessment. As a general demonstration, we took the log-linear cognitive diagnosis model (LCDM; Henson et al., 2009) as an example. In this simulation study, in order to compare the results with previous studies (e.g., Hou et al.,2014; Li & Wang, 2015), we followed the simulation design used by Hou et al. (2014), except that we implemented the observed or cross-product (XPD) information matrix in the Wald statistic computation. Parameters set in the studies were: the test length at 30, the number of attributes at 5, and the maximum number of required attributes for an item at 3. Binary item response data were generated from the DINA model. Three sets of true item parameter values were considered for the reference group. Two DIF sizes: .05 and .10, and two types of DIF: uniform and nonuniform, were manipulated. Two sample sizes were considered, 500 and 1,000. Each condition was replicated 1000 times, and the estimation code was written in R (R Core Team, 2015). 
The simulation results showed that: (1) for relatively discriminating items, the Wald statistic had accurate Type I error control when the observed information matrix was used in its computation; however, when the slip and guessing parameters were large, the Type I error control was slightly conservative. (2) When the XPD information matrix was used for the computation of the Wald statistic, the Type I error control was conservative; that is, the performance of the observed information matrix was better than that of the XPD information matrix. (3) The number of attributes required for success on the item did not have a notable impact on the Type I error control of the Wald statistic, irrespective of whether the observed or the XPD information matrix was used. (4) The power of the Wald statistic for detecting DIF increased as the sample size increased. We conclude that the improved Wald statistic follows asymptotically a chi-square distribution with two degrees of freedom for the DINA model, and that it is a useful and powerful tool for DIF detection in CDMs.
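As a rough illustration of the mechanics (not the authors' code), a Wald DIF statistic contrasts the reference- and focal-group estimates of an item's parameters, weighted by their estimated variances. The sketch below is our own simplification: it assumes independent groups with a diagonal covariance approximation (e.g. taken from an observed information matrix) and uses the closed-form chi-square survival function for 2 degrees of freedom:

```python
import math

def wald_dif_statistic(params_ref, params_focal, var_ref, var_focal):
    """Wald statistic for H0: the item parameters are equal across groups.

    Simplified illustration: independent groups, diagonal variance
    estimates for each parameter (e.g. guessing and slip in DINA).
    """
    stat = sum((r - f) ** 2 / (vr + vf)
               for r, f, vr, vf in zip(params_ref, params_focal,
                                       var_ref, var_focal))
    # Survival function of a chi-square with 2 degrees of freedom.
    p_value = math.exp(-stat / 2.0)
    return stat, p_value

# Hypothetical guessing and slip estimates for one item in two groups.
stat, p = wald_dif_statistic([0.10, 0.90], [0.20, 0.80],
                             [0.005, 0.005], [0.005, 0.005])
```

With these made-up numbers the statistic is 2.0, well below the 5% critical value of a chi-square with 2 df (about 5.99), so no DIF would be flagged.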
|
{}
|
### Session J5: Atomic Structure and Spectroscopy
10:30 AM–12:30 PM, Wednesday, June 15, 2011
Room: A705
Chair: Greg Brown, Lawrence Livermore National Laboratory
Abstract ID: BAPS.2011.DAMOP.J5.8
### Abstract: J5.00008 : Direct Observation of the $6S_{1/2}$ to $5D_{3/2}$ Electric Quadrupole Transition in Barium-138
11:54 AM–12:06 PM
#### Authors:
Matt Hoffman
(University of Washington)
Eric Magnuson
(University of Washington)
Boris Blinov
(University of Washington)
Norval Fortson
(University of Washington)
The $6S_{1/2}$ to $5D_{3/2}$ electric quadrupole transition at 2051 nm in Ba+ plays an important role in a number of proposed experiments.\footnote{K. Beloy, et. al. arXiv:0804.4317v1 [physics.atom-ph] 2008}$^,$\footnote{J. Sherman, et. al. arXiv:physics/0504013v2 [physics.atom-ph] 2005}$^,$\footnote{E. N. Fortson. Phys. Rev. Lett., 70(16):2383-2386, Apr 1993} We present the results of the first narrow laser spectroscopy performed on this transition. 2051 nm light is generated by a diode pumped solid state Tm,Ho:YLF laser. The laser is frequency stabilized to a high finesse cavity made from ultra-low expansion glass. In order to take advantage of higher performing optics and detectors available at shorter wavelengths, the 2051 nm light is frequency doubled using a periodically poled lithium niobate crystal inside a bow-tie enhancement cavity before being sent to the reference cavity. Using this laser system we observed Rabi oscillations on the $6S_{1/2}$ to $5D_{3/2}$ transition and demonstrated a laser-ion coherence time of 3 ms.
To cite this abstract, use the following reference: http://meetings.aps.org/link/BAPS.2011.DAMOP.J5.8
|
{}
|
### Neutralino and chargino production in association with gluinos and squarks
D Dzialo-Karatas, Howard W Baer & Xerxes Tata
M Veltman
F Beck
### TIM (TTC Interface Module) for ATLAS SCT & PIXEL Read Out Electronics
D Hayes, M Postranecky, J Lane, M Warren & J Butterworth
### Benchmark of ACCSIM-ORBIT codes for space charge and electron-lens compensation
Aiba Masamitsu
Numerical simulation is a possible approach to evaluate and to understand space charge effects in the CERN injector chain for the LHC. Several codes to simulate space charge effects have been developed, and we performed a benchmark of ACCSIM [1] and ORBIT [2] in this study. The study is highly motivated since beam losses and/or deteriorations in beam quality due to space charge effects are not negligible or sometimes considerable in the complex, especially in...
### Benchmarking electron-cloud simulations and pressure measurements at the LHC
F Zimmermann & O Dominguez
During the beam commissioning of the Large Hadron Collider (LHC) with 150, 75, 50 and 25-ns bunch spacing, important electron-cloud effects, like pressure rise, cryogenic heat load, beam instabilities or emittance growth, were observed. A method has been developed to infer different key beam-pipe surface parameters by benchmarking simulations and pressure rise observed in the machine. This method allows us to monitor the scrubbing process (i.e. the reduction of the secondary emission yield as a...
### ATLAS Data Challenge Production on Grid3
E May, G Gieraltowski, M Sosebee, N Ozturk, R Baker, K De, X Zhao, Y Smirnov, W Deng, P McGuigan, A Vaniachine, M Mambelli, H Severini, R Gardner & P Nevski
M Bloch
### n-XYTER: A CMOS read-out ASIC for a new generation of high rate multichannel counting mode neutron detectors
B Gebauer, K Solvag, H K Soltveit, T Fiutowski, Ulrich Trunk, W Dabrowski, S Buzzetti, A S Brogna, R Szczygiel, M Klein, C J Schmidt & P Wiacek
For a new generation of 2-D neutron detectors developed in the framework of the EU NMI3 project DETNI [1], the 128-channel frontend chip n-XYTER has been designed. To facilitate the reconstruction of single neutron incidence points, the chip has to provide a spatial coordinate (represented by the channel number), as well as time stamp and amplitude information to match the data of x- and y-coordinates. While the random nature of the input signals calls for...
### The role of cloud cover variations on the solar illumination signal recorded by $\delta^{13}C$ of a shallow-water Ionian Sea core (1147-1975 AD)
G Cini-Castagnoli, C Taricco, D Cane & G Bonino
### "FASTBUS" - a description, a status report, and a summary of ongoing projects
E J Barsotti
FASTBUS is a modular data and control bus and mechanical packaging standard currently under development. It is being designed to meet the high-speed data acquisition and parallel and distributed processing requirements of the next generation of large-scale physics experiments. It is a multiprocessor system with multiple bus segments which operate independently but link together for passing data. It operates asynchronously to accommodate very high and very low speed devices over long and short paths, using...
### Introduction to Transverse Beam Dynamics
B J Holzer
In this chapter we give an introduction to the transverse dynamics of the particles in a synchrotron or storage ring. The emphasis is more on qualitative understanding rather than on mathematical correctness, and a number of simulations are used to demonstrate the physical behaviour of the particles. Starting from the basic principles of how to design the geometry of the ring, we review the transverse motion of the particles, motivate the equation of motion, and...
### Cryogenics (high-energy physics applications)
M Firth
The paper reviews some physical properties of materials at low temperatures and the general techniques which are used to produce and maintain temperatures in the liquid-hydrogen and liquid-helium region. Applications of low-temperature technology encountered in high-energy physics are described, with particular reference to refrigeration, hydrogen and polarized targets, liquid hydrogen bubble chambers, superconducting devices and condensation cryopumping. (0 refs).
### Alibaba: A heterogeneous grid-based job submission system used by the BaBar experiment
R Barlow, A Forti, A I McNab & M Jones
A K Jain
### The Trigger Menu Handler of the ATLAS Level-1 Central Trigger Processor
G A Schuler, Philippe Farthouat, Nick Ellis & R Spiwoks
The role of the Central Trigger Processor (CTP) in the ATLAS Level-1 trigger is to combine information from the calorimeter and muon trigger processors, as well as from other sources, e.g. calibration triggers, and to make the final Level-1 decision. The information sent to the CTP consists of multiplicity values for a variety of pT thresholds, and of flags for ET thresholds. The algorithm used by the CTP to combine the different trigger inputs allows...
John G Pett
### The Manufacture of the CMS Tracker Front-End Driver
G Hall, M Noy, J Salisbury, G Iles, J A Coughlan, E J Freeman, I Church, I Reid, C Foudas, R N J Halsall, J Leaver, W J F Gannon, O Zorba, C P Day, M Raymond, J Fulcher, R J Bainbridge, E Corrin, M R Pearson, S Taghavi, I R Tomalin, D H Ballard & G Rogers
### Optically Based Charge Injection System for Ionization Detectors
H Chen, V Radeka, M Citterio, H Takai, M A L Leite, F Lanni & S Rescia
### Analysis of Wake Fields on TWRR Accelerator Structure in PNC
H Takahashi & S Tôyama
### Beam-beam interactions
Werner Herr
One of the most severe limitations in high intensity particle colliders is the beam-beam interaction, i.e. the perturbation of the beams as they cross the opposing beam. This introduction to beam-beam effects concentrates on a description of the phenomena that are present in modern colliding beams facilities.
### An intelligent resource selection system based on neural network for optimal application performance in a grid environment
M Castellano, G Piscitelli & T Coviello
W E Slater
R B Palmer
### Installing and Operating a Grid Infrastructure at DESY.
A Campbell, M De Riese, V Gulzow, M Ernst, M Vorobiev, B Lewendel, K Wrona, F Brasolin, J Ferrando, A Gellrich, C Wissing, R Mankel, U Ensslin, S Padhi & P Fuhrmann
|
{}
|
# [OS X TeX] time
Juergen Fenn juergen.fenn at GMX.DE
Wed Aug 5 16:42:24 CEST 2009
[sorry for just sending out an incomplete answer ... ]
Alain Schremmer schrieb:
>> Is there a standard formula that converts \time to a normal form, like
>> hours, minutes?
>
> Probably, but probably not that simple. In any case, you may want to try
>
> \usepackage{datetime}
Or try scrtime from the KOMA-Script bundle. See
http://texcatalogue.sarovar.org/bytopic.html#calendar
Jürgen.
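For completeness, the conversion can also be done with plain TeX arithmetic, since \time holds the number of minutes since midnight; a rough sketch (the register names are chosen here, not taken from any package):

```latex
\newcount\hours
\newcount\minutes
\hours=\time
\divide\hours by 60        % integer division: whole hours
\minutes=\hours
\multiply\minutes by -60
\advance\minutes by \time  % minutes = \time - 60*hours
The time is \the\hours:\the\minutes.
```

(Minutes below 10 print as a single digit; packages such as datetime or scrtime handle the zero-padding for you.)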
|
{}
|
# Paragraph indentation
I am trying to get TeX to indent every new paragraph, but have not been successful after trying many different preambles.
Currently, this is my preamble:
\documentclass[12pt]{article}
\usepackage[left=1.5in, right=1in, top=1in, bottom=1.0in]{geometry}
%\usepackage[indentafter]{titlesec}
\usepackage{sectsty}
\usepackage{indentfirst}
%\title{Sections and Chapters}
%\date{ }
%\setlength{\parindent}{10ex}
\allsectionsfont{\sffamily}
\sectionfont{\fontsize{12}{12}\sffamily}
\subsectionfont{\fontsize{12}{12}\sffamily}
\raggedright
\makeatletter
\renewcommand\section{
\@startsection {section}{1}{\z@}%
{23pt}%
{23pt}%
{\normalsize\sffamily}}
\renewcommand\subsection{
\@startsection {subsection}{1}{\z@}%
{23pt}%
{23pt}%
{\normalsize\sffamily}}
\renewcommand\subsubsection{
\@startsection {subsubsection}{1}{\z@}%
{23pt}%
{23pt}%
{\normalfont\normalsize\sffamily}}
\makeatother
\makeatletter
\renewcommand\thesection {\@Roman\c@section .}
\renewcommand\thesubsection {{\hspace{2em}}\@Alph\c@subsection .}
\renewcommand\thesubsubsection {{\hspace{2em}}\@arabic\c@subsubsection .}
\makeatother
\begin{document}
Does any one have any suggestions?
• Apparently, the \raggedright invocation gets rid of the default indentation. As unxnut says in the answer, you can explicitly add in \parindent. Otherwise, you can ditch the \raggedright. – Steven B. Segletes Feb 25 '15 at 2:03
• apart from dropping \raggedright note that the % in your section definitions have no effect on the output, but you are missing a % in each case after the { which could affect the output. – David Carlisle Feb 25 '15 at 2:16
You issue \raggedright, which is defined as (in latex.ltx):
\def\raggedright{%
\let\\\@centercr\@rightskip\@flushglue \rightskip\@rightskip
\leftskip\z@skip
\parindent\z@}
The last line of \raggedright sets \parindent to 0pt. As such, you need to reset it to suit your needs. For example, you could use
\documentclass{article}
\usepackage{lipsum}
\makeatletter
\setlength{\@tempdima}{\parindent}% Save \parindent
\raggedright
\setlength{\parindent}{\@tempdima}% Restore \parindent
\makeatother
\begin{document}
\lipsum[1]
\end{document}
You could use ragged2e which enhances ragged-right typesetting by allowing hyphenation and improving line-breaking, and which also makes the paragraph indentation easily configurable.
\documentclass{article}
\usepackage[document]{ragged2e}
\usepackage{kantlipsum}
\setlength{\RaggedRightParindent}{\parindent}
\begin{document}
\kant[1-10]
\end{document}
The directive \raggedright, if inserted in the preamble, applies only to material in the main "body" of the document -- but not to footnotes, minipage environments, and p-type columns in tabular and array environments. I suppose you could remedy this by inserting the instruction \raggedright at the start of every footnote and minipage as well. However, that's tedious and error-prone, isn't it? Moreover, hyphenation is disabled by \raggedright, which can cause extremely ragged-looking output.
Instead of using \raggedright, then, I suggest you load the ragged2e package with the option document. Doing so will apply ragged-right typesetting to all parts of the documents, and it will also (re)enable hyphenation.
To set the indentation of the first lines to a nonzero value, use \setlength\RaggedRightParindent{<some length value>}.
A full MWE:
\documentclass{article}
\usepackage[document]{ragged2e}
\setlength\RaggedRightParindent{0.75cm} % indentation of first line of a paragraph
\usepackage{lipsum} % for filler text
\setlength\textheight{11.5cm} % just for this example
\begin{document}
\lipsum*[1]\footnote{\lipsum*[2]}
\bigskip\noindent
\begin{minipage}{0.75\textwidth}
\lipsum*[4]
\end{minipage}
\end{document}
If you comment out (or delete) the instructions \usepackage[document]{ragged2e} and \setlength\RaggedRightParindent{0.75cm} and insert \raggedright instead, you'll notice that the minipage and footnote materials are both typeset fully-justified -- probably not what you want.
Have you tried the \parindent directive? You could add
\setlength\parindent{0.2in}
to indent by .2 inches. You could use the units of your choice (cm, pt).
• The standard indentation of LaTeX is \parindent=20.0pt. – Henri Menke Nov 1 '15 at 22:15
|
{}
|
## Wilcoxon signed rank test
The Wilcoxon signed rank test uses the sum of the signed ranks as the test statistic $$W$$ : $W=\sum _{{i=1}}^{{N}}[\operatorname{sgn}(x_{{2,i}}-x_{{1,i}})\cdot R_{i}]$ Here, the $$i$$ -th of $$N$$ measurement pairs is indicated by $$x_i = (x_{1,i}, x_{2,i})$$ and $$R_{i}$$ denotes the rank of the pair.
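A minimal Python sketch of this statistic (our own illustration; zero differences are dropped and tied absolute differences receive their average rank, as in the standard procedure):

```python
def wilcoxon_w(x1, x2):
    """Sum of signed ranks W for paired samples x1, x2 (a sketch)."""
    diffs = [b - a for a, b in zip(x1, x2) if b != a]  # drop zero differences
    abs_sorted = sorted(abs(d) for d in diffs)

    def rank(value):
        # Average rank over tied absolute differences.
        positions = [i + 1 for i, a in enumerate(abs_sorted) if a == value]
        return sum(positions) / len(positions)

    return sum((1 if d > 0 else -1) * rank(abs(d)) for d in diffs)

# Differences 2, -1, 2 get ranks 2.5, 1, 2.5, so W = 2.5 - 1 + 2.5 = 4.
w = wilcoxon_w([1, 2, 3, 4], [3, 1, 5, 4])
```

In practice one would use a library routine (e.g. wilcox.test in R), which also supplies the null distribution of W.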
The test is the non-parametric counterpart of the paired-sample t-test: it compares two related samples, matched samples, or repeated measurements on a single group to assess whether their population mean ranks differ. Its main advantage is that it depends neither on the form of the parent distribution nor on its parameters. The one-sample signed-rank version tests whether the median of a distribution equals some hypothesized value. In R the test is performed with wilcox.test(x, y, paired = TRUE) (the same function without paired = TRUE gives the two-sample Mann-Whitney test); in Stata it is available as the signrank command.
|
{}
|
# Prove that $\frac{\partial x}{\partial y} \frac{\partial y}{\partial z} \frac{\partial z}{\partial x} = -1$ and verify ideal gas law
Ok guys, continuing my passage through edwards... here is the question... thanks for hints/solutions in advance:
Suppose $f(x,y,z)=0$ can be solved for each of the three variables $x,y,z$ as a differentiable function of the other two. Then prove that
$\displaystyle \frac{\partial x}{\partial y} \frac{\partial y}{\partial z} \frac{\partial z}{\partial x} = -1$
Verify that this is the case for the ideal gas equation $pv = RT$ (where $p, v, T$ are the three variables and $R$ is a constant).
Define $f(x(y,z),y,z)=f(x,y(x,z),z)=f(x,y,z(x,y))=0$. Then by the chain rule, \begin{eqnarray*} \frac{\partial f}{\partial y}+\frac{\partial f}{\partial x}\frac{\partial x}{\partial y} & = & 0\\ \frac{\partial f}{\partial z}+\frac{\partial f}{\partial y}\frac{\partial y}{\partial z} & = & 0\\ \frac{\partial f}{\partial x}+\frac{\partial f}{\partial z}\frac{\partial z}{\partial x} & = & 0 \end{eqnarray*} So that \begin{eqnarray*} \frac{\partial x}{\partial y} & = & -\frac{\frac{\partial f}{\partial y}}{\frac{\partial f}{\partial x}}\\ \frac{\partial y}{\partial z} & = & -\frac{\frac{\partial f}{\partial z}}{\frac{\partial f}{\partial y}}\\ \frac{\partial z}{\partial x} & = & -\frac{\frac{\partial f}{\partial x}}{\frac{\partial f}{\partial z}} \end{eqnarray*} Hence, $$\frac{\partial x}{\partial y}\frac{\partial y}{\partial z}\frac{\partial z}{\partial x}=-\frac{\frac{\partial f}{\partial y}}{\frac{\partial f}{\partial x}}\frac{\frac{\partial f}{\partial z}}{\frac{\partial f}{\partial y}}\frac{\frac{\partial f}{\partial x}}{\frac{\partial f}{\partial z}}=-1.$$
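For the ideal-gas part, solve $pv = RT$ for each variable in turn, so that $p = RT/v$, $v = RT/p$, $T = pv/R$, and multiply the three partial derivatives (a direct check):
$$\frac{\partial p}{\partial v}\,\frac{\partial v}{\partial T}\,\frac{\partial T}{\partial p}=\left(-\frac{RT}{v^{2}}\right)\left(\frac{R}{p}\right)\left(\frac{v}{R}\right)=-\frac{RT}{pv}=-1,$$
since $pv = RT$.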
|
{}
|
# 2.4: The “Role” of Variables- Predictors and Outcomes
Okay, I’ve got one last piece of terminology that I need to explain to you before moving away from variables. Normally, when we do some research we end up with lots of different variables. Then, when we analyse our data we usually try to explain some of the variables in terms of some of the other variables. It’s important to keep the two roles “thing doing the explaining” and “thing being explained” distinct. So let’s be clear about this now. Firstly, we might as well get used to the idea of using mathematical symbols to describe variables, since it’s going to happen over and over again. Let’s denote the “to be explained” variable Y , and denote the variables “doing the explaining” as X1, X2, etc.
Now, when we do an analysis, we have different names for X and Y , since they play different roles in the analysis. The classical names for these roles are independent variable (IV) and dependent variable (DV). The IV is the variable that you use to do the explaining (i.e., X) and the DV is the variable being explained (i.e., Y ). The logic behind these names goes like this: if there really is a relationship between X and Y then we can say that Y depends on X, and if we have designed our study “properly” then X isn’t dependent on anything else. However, I personally find those names horrible: they’re hard to remember and they’re highly misleading, because (a) the IV is never actually “independent of everything else” and (b) if there’s no relationship, then the DV doesn’t actually depend on the IV. And in fact, because I’m not the only person who thinks that IV and DV are just awful names, there are a number of alternatives that I find more appealing. The terms that I’ll use in these notes are predictors and outcomes. The idea here is that what you’re trying to do is use X (the predictors) to make guesses about Y (the outcomes).4 This is summarised in Table 2.2.
4Annoyingly, though, there’s a lot of different names used out there. I won’t list all of them – there would be no point in doing that – other than to note that R often uses “response variable” where I’ve used “outcome”, and a traditionalist would use “dependent variable”. Sigh. This sort of terminological confusion is very common, I’m afraid.
This page titled 2.4: The “Role” of Variables- Predictors and Outcomes is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Danielle Navarro via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
# Family that had previously had a newborn boy die of a metabolic disease has just given birth to another boy; small
###### Question:
A family that had previously had a newborn boy die of a metabolic disease has just given birth to another boy, small for gestational age and with low Apgar scores. The child displayed spasms hours after birth. Blood analysis indicated extremely high levels of lactic acid. Analysis of cerebrospinal fluid showed elevated lactate and pyruvate. Hyperalaninemia was also observed. The child died within 5 days of birth. The biochemical defect in this child is most likely which of the following? (1 pt)
- The E1 subunit of pyruvate dehydrogenase
- The E2 subunit of pyruvate dehydrogenase
- The E3 subunit of pyruvate dehydrogenase
- Citrate synthase
- Malate dehydrogenase

Explanation for your choice (4 pts)
When trying to install packages in R that require compiling on one of my servers (that I did not have admin access to) I ran into the following error:
ERROR: 'configure' exists but is not executable -- see the 'R Installation and Administration Manual'
The problem here is that the temporary directory where R puts the files to be compiled does not have execute permissions set. To fix this problem, I needed to do ALL of the following (it doesn’t work without all of them):
1. Create a folder somewhere that you do have power to write/execute, etc.
mkdir /path/to/folder
chmod 777 /path/to/folder
You can undo these permissions later if it makes you anxious.
2. Set the TMPDIR variable in your bash to this folder:
export TMPDIR=/path/to/folder
3. Start R, and install the library “unixtools” that will give you power to set the temporary directory:
install.packages('unixtools')
Note that you can see the currently set temporary directory with tempdir(). Before changing it, it will look something like this:
[1] "/tmp/RtmpQrgNII"
4. Use unixtools to set this to a new directory:
library('unixtools')
set.tempdir('/path/to/folder')
tempdir()
[1] "/path/to/folder"
Now you should be able to install packages that require compilation. At least, it worked for me!
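For reference, steps 1 and 2 can be combined into a few shell lines before launching R (`$HOME/rtmp` here is just an example path, not the folder from the error above):

```shell
# create a writable/executable scratch folder and point TMPDIR at it
mkdir -p "$HOME/rtmp"
chmod 777 "$HOME/rtmp"   # you can undo these permissions later
export TMPDIR="$HOME/rtmp"
```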
# High Performance Computers¶
## Relevant Machines¶
This page includes instructions and guidelines when deploying Dask on high performance supercomputers commonly found in scientific and industry research labs. These systems commonly have the following attributes:
1. Some mechanism to launch MPI applications or use job schedulers like SLURM, SGE, TORQUE, LSF, DRMAA, PBS, or others
2. A shared network file system visible to all machines in the cluster
3. A high performance network interconnect, such as Infiniband
4. Little or no node-local storage
## Where to start¶
Most of this page documents best practices for using Dask on an HPC cluster. This is technical and aimed both at users with some experience deploying Dask and at system administrators. To get a cluster running quickly, two projects provide convenient deployment interfaces:
1. dask-jobqueue for use with PBS, SLURM, and SGE resource managers
2. dask-drmaa for use with any DRMAA compliant resource manager
They provide interfaces that look like the following:
from dask_jobqueue import PBSCluster
from dask.distributed import Client

cluster = PBSCluster(cores=36,
                     memory='100GB',
                     project='P48500028',
                     walltime='02:00:00')
cluster.start_workers(100)  # Start 100 jobs that match the description above
client = Client(cluster)    # Connect to that cluster
We recommend reading the dask-jobqueue documentation first to get a basic system running, and then returning to this documentation for fine-tuning.
## Using a Shared Network File System and a Job Scheduler¶
Note
this section is not necessary if you use a tool like dask-jobqueue
Some clusters benefit from a shared network file system (NFS) and can use this to communicate the scheduler location to the workers:
dask-scheduler --scheduler-file /path/to/scheduler.json # writes address to file
>>> client = Client(scheduler_file='/path/to/scheduler.json')
This can be particularly useful when deploying dask-scheduler and dask-worker processes using a job scheduler like SGE/SLURM/Torque/etc.. Here is an example using SGE’s qsub command:
# Start a dask-scheduler somewhere and write connection information to file
qsub -b y /path/to/dask-scheduler --scheduler-file /home/$USER/scheduler.json

# Start 100 dask-worker processes in an array job pointing to the same file
qsub -b y -t 1-100 /path/to/dask-worker --scheduler-file /home/$USER/scheduler.json
Note, the --scheduler-file option is only valuable if your scheduler and workers share a network file system.
## Using MPI¶
Note
this section is not necessary if you use a tool like dask-jobqueue
You can launch a Dask network using mpirun or mpiexec and the dask-mpi command line executable.
mpirun --np 4 dask-mpi --scheduler-file /home/$USER/scheduler.json

from dask.distributed import Client
client = Client(scheduler_file='/path/to/scheduler.json')

This depends on the mpi4py library. It only uses MPI to start the Dask cluster, and not for inter-node communication. MPI implementations differ: the use of mpirun --np 4 is specific to the mpich MPI implementation installed through conda and linked to mpi4py.

conda install mpi4py

It is not necessary to use exactly this implementation, but you may want to verify that your mpi4py Python library is linked against the proper mpirun/mpiexec executable and that the flags used (like --np 4) are correct for your system. The system administrator of your cluster should be very familiar with these concerns and able to help.

Run dask-mpi --help to see more options for the dask-mpi command.

## High Performance Network¶

Many HPC systems have both standard Ethernet networks as well as high-performance networks capable of increased bandwidth. You can instruct Dask to use the high-performance network interface by using the --interface keyword to the dask-worker, dask-scheduler, or dask-mpi commands, or the interface= keyword to the dask-jobqueue Cluster objects.

mpirun --np 4 dask-mpi --scheduler-file /home/$USER/scheduler.json --interface ib0
In the code example above we have assumed that your cluster has an Infiniband network interface called ib0. You can check this by asking your system administrator or by inspecting the output of ifconfig
\$ ifconfig
lo Link encap:Local Loopback # Localhost
...
ib0 Link encap:Infiniband # Fast InfiniBand
## No Local Storage¶
Users often exceed memory limits available to a specific Dask deployment. In normal operation Dask spills excess data to disk. However, in HPC systems the individual compute nodes often lack locally attached storage, preferring instead to store data in a robust high performance network storage solution. As a result when a Dask cluster starts to exceed memory limits its workers can start making many small writes to the remote network file system. This is both inefficient (small writes to a network file system are much slower than local storage for this use case) and potentially dangerous to the file system itself.
See this page for more information on Dask’s memory policies. Consider changing the following values in your ~/.config/dask/distributed.yaml file:
distributed:
worker:
memory:
target: false # don't spill to disk
spill: false # don't spill to disk
pause: 0.80 # pause execution at 80% memory use
terminate: 0.95 # restart the worker at 95% use
This stops Dask workers from spilling to disk, and instead relies entirely on mechanisms to stop them from processing when they reach memory limits.
As a reminder, you can set the memory limit for a worker using the --memory-limit keyword:
dask-mpi ... --memory-limit 10GB
Alternatively if you do have local storage mounted on your compute nodes you can point Dask workers to use a particular location in your filesystem using the --local-directory keyword:
dask-mpi ... --local-directory /scratch
## Launch Many Small Jobs¶
HPC job schedulers are optimized for large monolithic jobs with many nodes that all need to run as a group at the same time. Dask jobs can be quite a bit more flexible: workers can come and go without strongly affecting the job. So if we separate our job into many smaller jobs, we can often get through the job scheduling queue much more quickly than a typical job. This is particularly valuable when we want to get started right away and interact with a Jupyter notebook session rather than waiting for hours for a suitable allocation block to become free.
So, to get a large cluster quickly we recommend allocating a dask-scheduler process on one node with a modest wall time (the intended time of your session) and then allocating many small single-node dask-worker jobs with shorter wall times (perhaps 30 minutes) that can easily squeeze into extra space in the job scheduler. As you need more computation you can add more of these single-node jobs or let them expire.
## Use Dask to co-launch a Jupyter server¶
Dask can help you by launching other services alongside it. For example you can run a Jupyter notebook server on the machine running the dask-scheduler process with the following commands
from dask.distributed import Client
client = Client(scheduler_file='scheduler.json')
import socket
host = client.run_on_scheduler(socket.gethostname)
import subprocess
proc = subprocess.Popen(['/path/to/jupyter', 'lab', '--ip', host, '--no-browser'])
# Grassmannian
For other uses, see Grassmannian (disambiguation).
In mathematics, the Grassmannian Gr(r, V) is a space which parameterizes all linear subspaces of a vector space V of given dimension r. For example, the Grassmannian Gr(1, V) is the space of lines through the origin in V, so it is the same as the projective space of one dimension lower than V.
When V is a real or complex vector space, Grassmannians are compact smooth manifolds.[1] In general they have the structure of a smooth algebraic variety.
The earliest work on a non-trivial Grassmannian is due to Julius Plücker, who studied the set of lines in projective 3-space and parameterized them by what are now called Plücker coordinates. Grassmannians are named after Hermann Grassmann, who introduced the concept in general.
Notations vary between authors, with Gr(V, r) being equivalent to Gr(r, V), and with some authors using Gr(r, n) or Gr(n, r) to denote the Grassmannian of r-dimensional subspaces of an unspecified n-dimensional vector space.
## Motivation
By giving a collection of subspaces of some vector space a topological structure, it is possible to talk about a continuous choice of subspace or open and closed collections of subspaces; by giving them the structure of a differential manifold one can talk about smooth choices of subspace.
A natural example comes from tangent bundles of smooth manifolds embedded in Euclidean space. Suppose we have a manifold M of dimension r embedded in Rn. At each point x in M, the tangent space to M can be considered as a subspace of the tangent space of Rn, which is just Rn. The map assigning to x its tangent space defines a map from M to Gr(r, n). (In order to do this, we have to translate the geometrical tangent space to M so that it passes through the origin rather than x, and hence defines an r-dimensional vector subspace. This idea is very similar to the Gauss map for surfaces in a 3-dimensional space.)
This idea can with some effort be extended to all vector bundles over a manifold M, so that every vector bundle generates a continuous map from M to a suitably generalised Grassmannian—although various embedding theorems must be proved to show this. We then find that the properties of our vector bundles are related to the properties of the corresponding maps viewed as continuous maps. In particular we find that vector bundles inducing homotopic maps to the Grassmannian are isomorphic. But the definition of homotopic relies on a notion of continuity, and hence a topology.
## Low dimensions
For r = 1, the Grassmannian Gr(1, n) is the space of lines through the origin in n-space, so it is the same as the projective space of n−1 dimensions.
For r = 2, the Grassmannian is the space of all planes through the origin. In Euclidean 3-space, a plane containing the origin is completely characterized by the one and only line through the origin perpendicular to that plane (and vice versa); hence Gr(2, 3) ≅ Gr(1, 3) ≅ P2, the projective plane.
The simplest Grassmannian that is not a projective space is Gr(2, 4), which may be parameterized via Plücker coordinates.
## The Grassmannian as a set
Let V be a finite-dimensional vector space over a field k. The Grassmannian Gr(r, V) is the set of all r-dimensional linear subspaces of V. If V has dimension n, then the Grassmannian is also denoted Gr(r, n).
Vector subspaces of V are equivalent to linear subspaces of the projective space P(V), so it is equivalent to think of the Grassmannian as the set of all linear subspaces of P(V). When the Grassmannian is thought of this way, it is often written as Gr(r − 1, P(V)) or Gr(r − 1, n − 1).
## The Grassmannian as a homogeneous space
The quickest way of giving the Grassmannian a geometric structure is to express it as a homogeneous space. First, recall that the general linear group GL(V) acts transitively on the r-dimensional subspaces of V. Therefore, if H is the stabilizer of any of the subspaces under this action, we have
Gr(r, V) = GL(V)/H.
If the underlying field is R or C and GL(V) is considered as a Lie group, then this construction makes the Grassmannian into a smooth manifold. It also becomes possible to use other groups to make this construction. To do this, fix an inner product on V. Over R, one replaces GL(V) by the orthogonal group O(V), and by restricting to orthonormal frames, one gets the identity
Gr(r, n) = O(n)/(O(r) × O(n − r)).
In particular, the dimension of the Grassmannian is r(n − r).
Over C, one replaces GL(V) by the unitary group U(V). This shows that the Grassmannian is compact. These constructions also make the Grassmannian into a metric space: For a subspace W of V, let PW be the projection of V onto W. Then
${\displaystyle d(W,W')=\lVert P_{W}-P_{W'}\rVert ,}$
where ||⋅|| denotes the operator norm, is a metric on Gr(r, V). The exact inner product used does not matter, because a different inner product will give an equivalent norm on V, and so give an equivalent metric.
If the ground field k is arbitrary and GL(V) is considered as an algebraic group, then this construction shows that the Grassmannian is a non-singular algebraic variety. It follows from the existence of the Plücker embedding that the Grassmannian is complete as an algebraic variety. In particular, H is a parabolic subgroup of GL(V).
## The Grassmannian as a scheme
In the realm of algebraic geometry, the Grassmannian can be constructed as a scheme by expressing it as a representable functor.[2]
### Representable functor
Let ${\displaystyle {\mathcal {E}}}$ be a quasi-coherent sheaf on a scheme S. Fix a positive integer r. Then to each S-scheme T, the Grassmannian functor associates the set of quotient modules of
${\displaystyle {\mathcal {E}}_{T}:={\mathcal {E}}\otimes _{O_{S}}O_{T}}$
locally free of rank r on T. We denote this set by ${\displaystyle \mathbf {Gr} (r,{\mathcal {E}}_{T})}$.
This functor is representable by a separated S-scheme ${\displaystyle \mathbf {Gr} (r,{\mathcal {E}})}$. The latter is projective if ${\displaystyle {\mathcal {E}}}$ is finitely generated. When S is the spectrum of a field k, then the sheaf ${\displaystyle {\mathcal {E}}}$ is given by a vector space V and we recover the usual Grassmannian variety of the dual space of V, namely: Gr(r, V).
By construction, the Grassmannian scheme is compatible with base changes: for any S-scheme S′, we have a canonical isomorphism
${\displaystyle \mathbf {Gr} (r,{\mathcal {E}})\times _{S}S'\simeq \mathbf {Gr} (r,{\mathcal {E}}_{S'})}$
In particular, for any point s of S, the canonical morphism {s} = Spec(k(s)) → S, induces an isomorphism from the fiber ${\displaystyle \mathbf {Gr} (r,{\mathcal {E}})_{s}}$ to the usual Grassmannian ${\displaystyle {Gr}(r,{\mathcal {E}}\otimes _{O_{S}}k(s))}$ over the residue field k(s).
### Universal family
Since the Grassmannian scheme represents a functor, it comes with a universal object, ${\displaystyle {\mathcal {G}}}$, which is an object of
${\displaystyle \mathbf {Gr} \left(r,{\mathcal {E}}_{\mathbf {Gr} (r,{\mathcal {E}})}\right),}$
and therefore a quotient module ${\displaystyle {\mathcal {G}}}$ of ${\displaystyle {\mathcal {E}}_{\mathbf {Gr} (r,{\mathcal {E}})}}$, locally free of rank r over ${\displaystyle \mathbf {Gr} (r,{\mathcal {E}})}$. The quotient homomorphism induces a closed immersion from the projective bundle ${\displaystyle \mathbf {P} ({\mathcal {G}})}$:
${\displaystyle \mathbf {P} ({\mathcal {G}})\to \mathbf {P} \left({\mathcal {E}}_{\mathbf {Gr} (r,{\mathcal {E}})}\right)=\mathbf {P} ({\mathcal {E}})\times _{S}\mathbf {Gr} (r,{\mathcal {E}}).}$
For any morphism of S-schemes:
${\displaystyle T\to \mathbf {Gr} (r,{\mathcal {E}}),}$
this closed immersion induces a closed immersion
${\displaystyle \mathbf {P} ({\mathcal {G}}_{T})\to \mathbf {P} ({\mathcal {E}})\times _{S}T.}$
Conversely, any such closed immersion comes from a surjective homomorphism of OT-modules from ${\displaystyle {\mathcal {E}}_{T}}$ to a locally free module of rank r.[3] Therefore, the elements of ${\displaystyle \mathbf {Gr} (r,{\mathcal {E}})(T)}$ are exactly the projective subbundles of rank r in
${\displaystyle \mathbf {P} ({\mathcal {E}})\times _{S}T.}$
Under this identification, when T = S is the spectrum of a field k and ${\displaystyle {\mathcal {E}}}$ is given by a vector space V, the set of rational points ${\displaystyle \mathbf {Gr} (r,{\mathcal {E}})(k)}$ correspond to the projective linear subspaces of dimension r − 1 in P(V), and the image of ${\displaystyle \mathbf {P} ({\mathcal {G}})(k)}$ in
${\displaystyle \mathbf {P} (V)\times _{k}\mathbf {Gr} (r,{\mathcal {E}})}$
is the set
${\displaystyle \{(x,v)\in \mathbf {P} (V)(k)\times \mathbf {Gr} (r,{\mathcal {E}})(k)\mid x\in v\}.}$
## The Plücker embedding
Main article: Plücker embedding
The Plücker embedding is a natural embedding of a Grassmannian into a projective space:
${\displaystyle \psi :\mathbf {Gr} (r,V)\to \mathbf {P} \left(\wedge ^{r}V\right).}$
Suppose that W is an r-dimensional subspace of V. To define ψ(W), choose a basis {w1, ..., wr}, of W, and let ψ(W) be the wedge product of these basis elements:
${\displaystyle \psi (W)=w_{1}\wedge \cdots \wedge w_{r}.}$
A different basis for W will give a different wedge product, but the two products will differ only by a non-zero scalar (the determinant of the change of basis matrix). Since the right-hand side takes values in a projective space, ψ is well-defined. To see that ψ is an embedding, notice that it is possible to recover W from ψ(W) as the set of all vectors w such that wψ(W) = 0.
The image of the Grassmannian under the Plücker embedding satisfies some very simple quadratic polynomial relations called the Plücker relations. These show that the Grassmannian embeds as an algebraic subvariety of P(∧rV) and give another method of constructing the Grassmannian. In terms of the Plücker coordinates $p_{i_{1}\cdots i_{r}}$ — the coefficients of ψ(W) with respect to the basis of ∧rV induced by a basis of V, understood to be antisymmetric under permutation of their indices — the relations state that for any two index sequences $1\leq i_{1}<\cdots <i_{r-1}\leq n$ and $1\leq j_{0}<j_{1}<\cdots <j_{r}\leq n$, the following equation holds in the homogeneous coordinate ring of P(∧rV):
${\displaystyle \sum _{l=0}^{r}(-1)^{l}\,p_{i_{1},\dots ,i_{r-1},j_{l}}\,p_{j_{0},\dots ,{\hat {j}}_{l},\dots ,j_{r}}=0,}$
where ${\hat {j}}_{l}$ indicates that the index $j_{l}$ is omitted.
When dim(V) = 4, and r = 2, the simplest Grassmannian which is not a projective space, the above reduces to a single equation. Denoting the coordinates of P(∧rV) by X1,2, X1,3, X1,4, X2,3, X2,4, X3,4, we have that Gr(2, V) is defined by the equation
X1,2X3,4X1,3X2,4 + X2,3X1,4 = 0.
In general, however, many more equations are needed to define the Plücker embedding of a Grassmannian in projective space.
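The single Gr(2, 4) relation above is easy to verify symbolically: every decomposable 2-vector w1 ∧ w2 satisfies it identically. A sketch with sympy (the basis vectors are arbitrary symbols):

```python
import sympy as sp

# hypothetical basis vectors spanning a 2-plane W in R^4
a = sp.symbols('a1:5')
b = sp.symbols('b1:5')

# Plücker coordinates X[i, j] of the wedge product of the two basis vectors
X = {(i, j): a[i]*b[j] - a[j]*b[i] for i in range(4) for j in range(4)}

# the single Gr(2, 4) Plücker relation from the text
rel = X[0, 1]*X[2, 3] - X[0, 2]*X[1, 3] + X[1, 2]*X[0, 3]
assert sp.expand(rel) == 0
```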
## The Grassmannian as a real affine algebraic variety
Let Gr(r, Rn) denote the Grassmannian of r-dimensional subspaces of Rn. Let M(n, R) denote the space of real n × n matrices. Consider the set of matrices A(r, n) ⊂ M(n, R) defined by XA(r, n) if and only if the three conditions are satisfied:
• X is a projection operator: X2 = X.
• X is symmetric: Xt = X.
• X has trace r: tr(X) = r.
A(r, n) and Gr(r, Rn) are homeomorphic, with a correspondence established by sending XA(r, n) to the column space of X.
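A concrete instance of this correspondence can be checked with sympy (the particular 2-plane in R³ below is an arbitrary choice): the orthogonal projection onto the plane satisfies all three defining conditions.

```python
import sympy as sp

# hypothetical 2-plane in R^3, spanned by the columns of A
A = sp.Matrix([[1, 0],
               [1, 1],
               [0, 1]])
X = A * (A.T * A).inv() * A.T  # orthogonal projection onto the column space

assert sp.simplify(X*X - X) == sp.zeros(3, 3)  # X is a projection operator
assert X.T == X                                # X is symmetric
assert X.trace() == 2                          # trace equals the dimension r = 2
```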
## Duality
Every r-dimensional subspace W of V determines an (n − r)-dimensional quotient space V/W of V. This gives the natural short exact sequence:
0 → W → V → V/W → 0.
Taking the dual of each of these three spaces and linear transformations yields an inclusion of (V/W)∗ in V∗ with quotient W∗:
0 → (V/W)∗ → V∗ → W∗ → 0.
Using the natural isomorphism of a finite-dimensional vector space with its double dual shows that taking the dual again recovers the original short exact sequence. Consequently there is a one-to-one correspondence between r-dimensional subspaces of V and (n − r)-dimensional subspaces of V∗. In terms of the Grassmannian, this is a canonical isomorphism
Gr(r, V) ≅ Gr(n − r, V∗).
Choosing an isomorphism of V with V∗ therefore determines a (non-canonical) isomorphism of Gr(r, V) and Gr(n − r, V). An isomorphism of V with V∗ is equivalent to a choice of an inner product, and with respect to the chosen inner product, this isomorphism of Grassmannians sends an r-dimensional subspace into its (n − r)-dimensional orthogonal complement.
## Schubert cells
The detailed study of the Grassmannians uses a decomposition into subsets called Schubert cells, which were first applied in enumerative geometry. The Schubert cells for Gr(r, n) are defined in terms of an auxiliary flag: take subspaces V1, V2, ..., Vr, with ViVi + 1. Then we consider the corresponding subset of Gr(r, n), consisting of the W having intersection with Vi of dimension at least i, for i = 1, ..., r. The manipulation of Schubert cells is Schubert calculus.
Here is an example of the technique. Consider the problem of determining the Euler characteristic of the Grassmannian of r-dimensional subspaces of Rn. Fix a 1-dimensional subspace RRn and consider the partition of Gr(r, n) into those r-dimensional subspaces of Rn that contain R and those that do not. The former is Gr(r − 1, n − 1) and the latter is an r-dimensional vector bundle over Gr(r, n − 1). This gives recursive formulas:
${\displaystyle \chi _{r,n}=\chi _{r-1,n-1}+(-1)^{r}\chi _{r,n-1},\qquad \chi _{0,n}=\chi _{n,n}=1.}$
If one solves this recurrence relation, one gets the formula: χr, n = 0 if and only if n is even and r is odd. Otherwise:
${\displaystyle \chi _{r,n}={\lfloor {\frac {n}{2}}\rfloor \choose \lfloor {\frac {r}{2}}\rfloor }.}$
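The recursion and the closed form can be cross-checked numerically (a Python sketch; the function names are ours):

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def chi(r, n):
    # Euler characteristic of Gr(r, n) via the recursion above
    if r == 0 or r == n:
        return 1
    return chi(r - 1, n - 1) + (-1)**r * chi(r, n - 1)

def chi_closed(r, n):
    # closed form: 0 when n is even and r is odd, else C(floor(n/2), floor(r/2))
    if n % 2 == 0 and r % 2 == 1:
        return 0
    return comb(n // 2, r // 2)

assert all(chi(r, n) == chi_closed(r, n)
           for n in range(0, 12) for r in range(0, n + 1))
```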
### Cohomology ring of the complex Grassmannian
Every point in the complex Grassmannian manifold Gr(r, n) defines an r-plane in n-space. Fibering these planes over the Grassmannian one arrives at the vector bundle E which generalizes the tautological bundle of a projective space. Similarly the (n − r)-dimensional orthogonal complements of these planes yield an orthogonal vector bundle F. The integral cohomology of the Grassmannians is generated, as a ring, by the Chern classes of E. In particular, all of the integral cohomology is at even degree as in the case of a projective space.
These generators are subject to a set of relations, which defines the ring. The defining relations are easy to express for a larger set of generators, which consists of the Chern classes of E and F. Then the relations merely state that the direct sum of the bundles E and F is trivial. Functoriality of the total Chern classes allows one to write this relation as
${\displaystyle c(E)c(F)=1.}$
The quantum cohomology ring was calculated by Edward Witten in The Verlinde Algebra And The Cohomology Of The Grassmannian. The generators are identical to those of the classical cohomology ring, but the top relation is changed to
${\displaystyle c_{k}(E)c_{n-k}(F)=(-1)^{n-r}}$
reflecting the existence in the corresponding quantum field theory of an instanton with 2n fermionic zero-modes which violates the degree of the cohomology corresponding to a state by 2n units.
## Associated measure
When V is n-dimensional Euclidean space, one may define a uniform measure on Gr(r, n) in the following way. Let θn be the unit Haar measure on the orthogonal group O(n) and fix V in Gr(r, n). Then for a set AGr(r, n), define
${\displaystyle \gamma _{r,n}(A)=\theta _{n}\{g\in \operatorname {O} (n):gV\in A\}.}$
This measure is invariant under actions from the group O(n), that is, γr, n(gA) = γr, n(A) for all g in O(n). Since θn(O(n)) = 1, we have γr, n(Gr(r, n)) = 1. Moreover, γr, n is a Radon measure with respect to the metric space topology and is uniform in the sense that every ball of the same radius (with respect to this metric) is of the same measure.
## Oriented Grassmannian
This is the manifold consisting of all oriented r-dimensional subspaces of Rn. It is a double cover of Gr(r, n) and is denoted by:
${\displaystyle {\tilde {\mathbf {Gr} }}(r,n).}$
As a homogeneous space, it can be expressed as:
${\displaystyle \operatorname {SO} (n)/(\operatorname {SO} (r)\times \operatorname {SO} (n-r)).}$
## Applications
Grassmann manifolds have found application in computer vision tasks of video-based face recognition and shape recognition.[4] They are also used in the data-visualization technique known as the grand tour.
Grassmannians allow the scattering amplitudes of subatomic particles to be calculated via a positive Grassmannian construct called the amplituhedron.[5]
# Building a 64 × 64 particle accelerator frame in Minecraft with a computercraft turtle
I don't think this will work anywhere besides in Minecraft with ComputerCraft, but it's all correct syntax. I just feel like it has some lines of code I could eliminate somehow to make it a cleaner script. I appreciate any constructive criticism to help me further understand what I'm misunderstanding.
function placef()
turtle.select(1)
x = 1
if turtle.getItemCount(x) == 0 then
repeat turtle.select(x+1)
x = x + 1
if x == 17 then
x = 1
y = 2
end
if y == 2 then
os.reboot()
end
until turtle.getItemCount(x) > 0
end
turtle.place()
end
function placeup()
turtle.select(9)
x = 9
if turtle.getItemCount(x) == 0 then
repeat turtle.select(x+1)
x = x + 1
if x == 17 then
x = 9
y = 2
end
if y == 2 then
os.reboot()
end
until turtle.getItemCount(x) > 0
end
turtle.placeUp()
end
function place()
turtle.select(9)
x = 9
if turtle.getItemCount(x) == 0 then
repeat turtle.select(x+1)
x = x + 1
if x == 17 then
x = 9
y = 2
end
if y == 16 then
os.reboot()
end
until turtle.getItemCount(x) > 0
end
turtle.placeDown()
end
function repairOT()
turtle.select(1)
x = 1
if turtle.getItemCount(x) == 0 then
repeat turtle.select(x+1)
x = x + 1
if x == 17 then
x = 1
y = 2
end
if y == 2 then
os.reboot()
end
until turtle.getItemCount(x) > 0
end
if turtle.compareDown() == false then
place()
turtle.turnRight()
else
turtle.turnRight()
end
if turtle.compare() == false then
placef()
turtle.turnLeft()
else
turtle.turnLeft()
end
turtle.forward()
end
function repairBI()
turtle.select(1)
x = 1
if turtle.getItemCount(x) == 0 then
repeat turtle.select(x+1)
x = x + 1
if x == 17 then
x = 1
y = 2
end
if y == 16 then
os.reboot()
end
until turtle.getItemCount(x) > 0
end
if turtle.compareUp() == false then
placeup()
turtle.turnLeft()
else
turtle.turnLeft()
end
if turtle.compare() == false then
placef()
turtle.turnRight()
else
turtle.turnRight()
end
turtle.forward()
end
These functions are where I think I could do away with a lot of lines of code, but I'm not sure how. Maybe an anonymous function and a class? I don't really quite understand using those very well yet.
https://pastebin.com/JDZSibmn There is the full script just in case anyone wants to see it. The rest is just loops.
• The title is better, yes, but I still don't have the foggiest what problem you're solving. Just when I think I got an idea I see code like os.reboot() in a Minecraft plug-in and I'm all lost again. – Mast Sep 17 '18 at 12:30
• Yea it has a few Lua apis of Computercrafts. os.reboot() is rebooting the turtle if it can't find an inventory slot with something in it. This is about 2 years old so I didn't have means of it knowing what kind of item it was holding. I think i just need to go rewrite this in the newest version of computercraft. Or just go start learning python or c – Donnie Sep 17 '18 at 16:14
First of all, use more local. It will save you a lot of headache in the future.
Instead of the three place functions, you could just do
local function selectAnyBlock()
    for i = 1, 16 do -- 4 x 4 inventory
        if turtle.getItemCount(i) > 0 then
            turtle.select(i) -- actually select the non-empty slot
            return true
        end
    end
    os.reboot() -- No idea why you'd want to reboot here though
end
and then call turtle.place[up|down]() after that. That saves you two functions and some code.
As for the last two functions, I have no idea what they are supposed to do. They seem to place a block up or down, then turn left and place a block in front?
They also make use of that construct that selects a block in the inventory, so you could replace that with the above selectAnyBlock() function. There are also some places where you have the same instruction (turtle.turnLeft()) in both code paths of a condition. Just place it after the if and you'll only have to write it once.
You don't have to compare booleans to true or false; you can just check for them directly in a condition:
if turtle.compareUp() == false then
placeup()
turtle.turnLeft()
else
turtle.turnLeft()
end
turns into
if not turtle.compareUp() then
placeup()
end
turtle.turnLeft()
I don't really see a need for object orientation here. Maybe it makes sense in the program overall, but your example works well with just functions. Same goes for anonymous functions; useful as they are, sometimes you just don't need them.
Overall, try indenting your code like everybody else does (each new scope has its own indentation level) and add some comments to clarify your intentions. This not only helps others reading your code, but also yourself in the future :)
• Yea I wasn't positive anything could be done. It just seemed like the best place to get advice and I was having trouble continuing to learn anything with Lua. Since I asked this I've started learning Python and now I'm asking myself why I wasted so much time on Lua. Thanks for the feedback. I'm sure I'll have a lot better questions to ask before long. Python is so much easier. – Donnie Sep 18 '18 at 12:16
# zbMATH — the first resource for mathematics
Validation of linear regression models. (English) Zbl 0930.62041
Summary: A new test is proposed in order to verify that a regression function, say $$g$$ has a prescribed (linear) parametric form. This procedure is based on the large sample behavior of an empirical $$L^2$$-distance between $$g$$ and the subspace $$U$$ spanned by the regression functions to be verified. The asymptotic distribution of the test statistic is shown to be normal with parameters depending only on the variance of the observations and the $$L^2$$-distance between the regression function $$g$$ and the model space $$U$$.
Based on this result, a test is proposed for the hypothesis that “$$g$$ is not in a preassigned $$L^2$$-neighborhood of $$U$$,” which allows the “verification” of the model $$U$$ at a controlled type I error rate. The suggested procedure is very easy to apply because of its asymptotic normal law and the simple form of the test statistic. In particular, it does not require nonparametric estimators of the regression function and hence, the test does not depend on the subjective choice of smoothing parameters.
##### MSC:
62G08 Nonparametric regression and quantile regression
62G10 Nonparametric hypothesis testing
62G20 Asymptotic properties of nonparametric inference
62E20 Asymptotic distribution theory in statistics
# Math Help - Complex Number
1. ## Complex Number
Let $z_1, z_2$ be nonzero complex numbers such that
$z_1^2 + z_2^2 = \sqrt{2}z_1 z_2$.
Show that $| z_1 | = | z_2 |$ and find $Arg{(\frac{z_1}{z_2})}$.
I've tried substituting $z = x+yi$ and $z = re^{i\theta}$ to prove this but it didn't work out. I assume there is a geometric or inequality approach to this question?
2. Originally Posted by shinn
Let $z_1, z_2$ be nonzero complex numbers such that
$z_1^2 + z_2^2 = \sqrt{2}z_1 z_2$.
Show that $| z_1 | = | z_2 |$ and find $Arg{(\frac{z_1}{z_2})}$.
I've tried substituting $z = x+yi$ and $z = re^{i\theta}$ to prove this but it didn't work out. I assume there is a geometric or inequality approach to this question?
In the equation $z_1^2 + z_2^2 = \sqrt{2}z_1 z_2$, divide through by $z_2^{\,2}$, and let $w = z_1/z_2$. You will get a quadratic equation for w.
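For completeness, carrying the hint through (my own continuation, not part of the original thread):

```latex
w^2 + 1 = \sqrt{2}\,w
\;\Longrightarrow\;
w = \frac{\sqrt{2} \pm \sqrt{2-4}}{2} = \frac{1 \pm i}{\sqrt{2}},
\qquad
|w| = 1 \;\Rightarrow\; |z_1| = |z_2|,
\qquad
\operatorname{Arg}\!\left(\frac{z_1}{z_2}\right) = \pm\frac{\pi}{4}.
```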
programming / C++ / sailing / nerd stuff
Algorithm: Shell Sort
## Description
Shell sort sorts subsequences whose elements lie h positions apart, using insertion sort, for a decreasing sequence of gap values h that ends at 1. Because of the gaps, elements get moved over greater distances in a single step than in plain insertion sort. Typical gap sequence: $h(k+1) = 3 \cdot h(k) + 1$
• not stable
## Algorithm
int h = 1;
while (h <= array.count()) {
    h = 3*h + 1;
}
while (h > 0) {
    h = h / 3;
    for (int i = h - 1; i < array.count(); ++i) {
        int j = i;
        while ((j > h - 1) && (array[j] < array[j - h - 1])) {
            array.swap(j - h - 1, j);
            j = j - h - 1;
        }
    }
}
## Analysis
Asymptotic complexity depends on h:
For $h(k+1) = 3 \cdot h(k) + 1$:
$M_{avg} = 1.29 \cdot n^{1.28}$
$C_{avg} = O(n^{3/2})$
For $h(k+1) = 2 \cdot h(k)$:
$M_{avg} = O(n^{3/2})$
$C_{avg} = O(n^{3/2})$
For $h = 2^p \cdot 3^q$, i.e. $\{\dots, 16, 12, 9, 8, 6, 4, 3, 2, 1\}$:
$M_{avg} = O(n \cdot \log^2(n))$
For $h = \lfloor n\alpha \rfloor, \lfloor \lfloor n\alpha \rfloor \alpha \rfloor, \dots, 1$:
$M_{avg} = 1.75 \cdot n^{1.19}$
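A runnable sketch of the algorithm above in Python, using the $h(k+1) = 3 \cdot h(k) + 1$ gap sequence (indexing details differ slightly from the C++-style listing, which is written in terms of a shifted gap):

```python
def shell_sort(a):
    """Shell sort with the h(k+1) = 3*h(k) + 1 (Knuth) gap sequence."""
    n = len(a)
    h = 1
    while h < n // 3:          # largest gap in the sequence below n
        h = 3 * h + 1
    while h >= 1:
        # Gapped insertion sort: elements h apart form sorted subsequences.
        for i in range(h, n):
            j = i
            while j >= h and a[j] < a[j - h]:
                a[j], a[j - h] = a[j - h], a[j]
                j -= h
        h //= 3                # descend through the gap sequence
    return a

print(shell_sort([5, 2, 9, 1, 7, 3, 8, 6, 4]))  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```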
# Math Help - Maclaurin expansion
1. ## Maclaurin expansion
Find the Maclaurin expansion for $sin^4x$ using the double angle identity for cos2x and cos4x.
So...
This is how i worked out so far, yet i'm not sure whether it is correct or not
Identity of cos4x = cos(2x + 2x) = cos2x.cos2x + sin2x.sin2x
using the identity of $cos2x = (1 - 2sin^2x)(1-2sin^2x) + sin2x.sin2x$
which equals to
$
cos4x = 1 - 4sin^2x + 4sin^4x + sin2x.sin2x
$
hence
$
sin^4x = \frac{cos4x + 4sin^2x - sin2x.sin2x - 1}{4} = sin^4x
$
If it is correct i'm not sure whether to continue with the problem or not, this question has 8 marks...and i'm puzzled.. if anyone could help it would be greatly appreciated!
kleyzam
2. Hello, kleyzam!
Find the Maclaurin expansion for $\sin^4\!x$
using the double-angle identity for $\cos2x$ and $\cos4x.$
I'd approach it like this . . .
$(\sin^2\!x)^2 \;=\;\left(\frac{1-\cos2x}{2}\right)^2 \;=\;\tfrac{1}{4}\left(1 - 2\cos2x + \cos^2\!2x\right)$
. . $= \;\tfrac{1}{4}\left(1 - 2\cos2x + \frac{1+\cos4x}{2}\right) \;=\;\tfrac{1}{8}\left(\cos4x - 4\cos2x + 3\right)$
Then: . $\sin^4\!x \;=\; \tfrac{1}{8}\left[\left(1 - \frac{(4x)^2}{2!} + \frac{(4x)^4}{4!} - \frac{(4x)^6}{6!} + \cdots\right) - 4\left(1 - \frac{(2x)^2}{2!} + \frac{(2x)^4}{4!} - \frac{(2x)^6}{6!} + \cdots\right) + 3\right]$
. . . . etc.
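Carrying out the cancellation (the constant and $x^2$ terms drop out; this continuation is mine, not the original poster's), the first few terms are:

```latex
\sin^4 x \;=\; x^4 \;-\; \tfrac{2}{3}\,x^6 \;+\; \tfrac{1}{5}\,x^8 \;-\; \cdots
```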
help pllllz
Find the indicated power using De Moivre's Theorem. (Express your fully simplified answer in the form
a + bi.)
(1-i)^18
Apr 20, 2020
Find the indicated power using De Moivre's Theorem. (Express your fully simplified answer in the form
$$a + bi$$.)
$$\left(1-i\right)^{18}$$
$$\begin{array}{|rcll|} \hline 1-i &=& r\Big(\cos(x) + i \sin(x) \Big) \\ && \boxed{ r = \sqrt{1^2+(-1)^2} \\ r=\sqrt{2} } \\ && \boxed{ x = \arctan\left(\dfrac{-1}{1}\right) \\x = \arctan(-1)\\x = -45^\circ } \\ 1-i &=& \sqrt{2}\Big(\cos(-45^\circ) + i \sin(-45^\circ) \Big) \\ 1-i &=& \sqrt{2}\Big(\cos(45^\circ) - i \sin(45^\circ) \Big) \\ \left(1-i\right)^{18}&=& \left(\sqrt{2}\right)^{18}\Big(\cos(18*45^\circ) - i \sin(18*45^\circ) \Big) \quad | \quad \text{De Moivre's Theorem} \\ \left(1-i\right)^{18}&=& 2^{\frac{18}{2}}\Big(\cos(810^\circ) - i \sin(810^\circ) \Big) \\ \left(1-i\right)^{18}&=& 2^{9}\Big(\cos(90^\circ) - i \sin(90^\circ) \Big) \\ \left(1-i\right)^{18}&=& 512\Big(0 - i*1 \Big) \\ \mathbf{\left(1-i\right)^{18}} &=& \mathbf{-512i} \\ \hline \end{array}$$
Apr 20, 2020
edited by heureka Apr 21, 2020
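A quick numerical cross-check of the result (using Python's built-in complex arithmetic rather than De Moivre's theorem):

```python
# Verify that (1 - i)^18 = -512i numerically.
z = (1 - 1j) ** 18
print(abs(z - (-512j)) < 1e-9)  # True
```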
# Photon absorption by free electron during Inverse bremsstrahlung: Why is a heavy particle needed?
I read about the process of inverse bremsstrahlung where a free electron can gain kinetic energy by absorbing a photon.
However, I'm having some trouble to understand why exactly a heavy particle must take part in this process, c.f.
The momentum conservation law requires this process can proceed only in the presence of an ion, which carries the extra momentum...
Which extra momentum is the author talking about? The photon's?
btw: sorry for the useless tags, I'm not a physicist..
EDIT: I kind of get it, I guess:
before absorption: electron energy : $$0.5 m_e v_1^2$$
electron momentum : $$m_e v_1$$
photon energy:$$\hbar\omega$$
photon momentum: $$\hbar\omega/c$$
total momentum: $$\hbar\omega/c + m_e v_1$$
After absorption:
electron energy: $$1/2 m_e v_2^2$$ where $$v_2^2=v_1^2 + 2 \hbar\omega/m_e$$
electron momentum: $$m_e v_2=m_e \sqrt{v_1^2 + 2 \hbar\omega/m_e}$$
which is probably more than $$\hbar\omega/c + m_e v_1$$ ? or isn't it?
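Putting illustrative numbers into the expressions above settles the closing question (the values are my own choice: a 1 eV photon absorbed by an electron moving at 10^6 m/s, treated nonrelativistically):

```python
m_e  = 9.109e-31   # electron mass [kg]
c    = 2.998e8     # speed of light [m/s]
E_ph = 1.602e-19   # photon energy, 1 eV [J]
v1   = 1.0e6       # electron speed before absorption [m/s]

p_before = E_ph / c + m_e * v1        # total momentum: photon + electron
v2 = (v1**2 + 2 * E_ph / m_e) ** 0.5  # speed if all of E_ph became kinetic energy
p_after = m_e * v2                    # electron momentum after absorption

# The photon's momentum E_ph/c is far too small to supply the electron's
# momentum gain, so a third body (the ion) must absorb the difference.
print(p_after > p_before)  # True
```

So yes: the electron's final momentum exceeds the available initial momentum, because the photon carries a lot of energy but very little momentum.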
Let's consider the inverse bremsstrahlung in vacuum: this process can be written out as
$$\gamma + e^- \to e^-$$
A process such as this does not conserve 4-momentum. One can easily see this by going in the center of mass frame: the photon and the electron hit head to head and the outgoing particle goes somewhere at an angle with respect to the collision axis. In the center of mass frame the total momentum is zero and, by conservation, the total momentum of the outgoing particles should be zero. But if the particle is only one it is clearly impossible.
The statement
The momentum conservation law requires this process can proceed only in the presence of an ion, which carries the extra momentum.
implies exactly this: the ion takes away the momentum in such a way that the total momentum after the collision is again zero, in the center of mass frame. The right process is then
$$\gamma + X \to e^-+X^+$$
in this way the colliding particles are the photon and the atom and the outgoing one are an electron and the ion. Obviously the atom will recoil very little since it is very massive, while the electron will take most of the energy.
• got it. thank you very much Davide :) Mar 18, 2020 at 21:56
There are two main conservation laws to obey here:
1. Energy
Electrons and photons are QM entities, they obey QM laws. If the electron is accelerated then the laws can be obeyed through interaction with a third party, that might be the external EM field (through virtual particles) or in your case, a nucleus (ion). A free electron does have a rest mass and does not have excited states it could go to, so it cannot take up the energy of the photon.
Total absorption would mean an incoming photon + electron, and outgoing only an electron. This cannot happen because the electron has a fixed mass and does not have excited states to absorb all the energy of the photon. What can happen is that most of the energy of the photon becomes kinetic energy of the electron, in any inertial frame, and correspondingly the photon can have very small energy, tending to zero but never zero. If the outgoing (or incoming) photon becomes virtual, connecting with an electric or magnetic field, then the kinematics has to include the originator of the field in energy-momentum considerations, and the electron can absorb all the energy of the incoming photon, with the energy/momentum balance in its rest-mass system taken up by the generator of the field that gave the virtual photon.
Can an accelerated "free" electron absorb a photon?
2. Momentum
The real reason this is prohibited with a free electron is that all the four momenta of the electron before, photon before and electron after cannot lie on their mass shell simultaneously.
The four-momentum of any real particle satisfies the relationship $p^\mu p_\mu = -m^2$. This defines a 3-D surface in the 4-D space of all possible four-momenta; this surface is called the mass shell for the particle. A virtual particle, on the other hand, can have any four-momentum vector that you want; a virtual particle is usually "off-shell", because its four-momentum doesn't lie on the mass shell.
Can a free electron absorb a virtual photon even though it cannot absorb an ordinary photon?
The answer to your question is that the nucleus (ion) will recoil (take the momentum of the photon).
• thank you for your time Arpad :) Mar 18, 2020 at 22:03
# Limitations of Simpson's diversity index
Biological communities vary in the number of species they contain (richness) and in the relative abundance of these species (evenness). Species richness, as a measure on its own, does not take into account the number of individuals of each species present: it gives equal weight to a species with few individuals and to a species with many, so a single yellow birch has as much influence on the richness of an area as 100 sugar maple trees. Evenness is a measure of the relative abundance of the different species making up the richness of an area.

Diversity can also be considered at different scales. Alpha (α) diversity is local diversity, the diversity of a forest stand, a grassland, or a stream. At the other extreme is gamma (γ) diversity, the total regional diversity of a large area that contains several communities, such as the eastern deciduous forests of the USA.

A diversity index is a quantitative measure that reflects the number of different species and how evenly the individuals are distributed among those species. Typically, the value of a diversity index increases when the number of types increases and the evenness increases: communities with a large number of species that are evenly distributed are the most diverse, and communities with few species that are dominated by one species are the least diverse.

Simpson (1949) developed an index of diversity that is computed as:

$$D = \sum^R_{i=1} \frac{n_i(n_i-1)}{N(N-1)}$$

where $n_i$ is the number of individuals (or biomass) in species $i$ and $N$ is the total number of individuals for all species. Simpson's index is a weighted arithmetic mean of proportional abundance and measures the probability that two individuals randomly selected from a sample will belong to the same species. Since the mean of the proportional abundance of the species increases with decreasing number of species and increasing abundance of the most abundant species, $D$ obtains small values in data sets of high diversity and large values in data sets of low diversity; $D$ is therefore a measure of dominance.

For this reason, Simpson's index is usually expressed as its inverse ($1/D$) or its compliment ($1-D$), which is also known as the Gini-Simpson index. The compliment represents the probability that two individuals randomly selected from a sample will belong to different species; its value always falls between 0 and 1, where 1 represents complete diversity and 0 represents complete uniformity. The higher the value of the inverse index, the greater the diversity.

Using the compliment:

For Location A: $1 - \frac{608}{43 \times 42} = 1 - \frac{608}{1806} = 1 - 0.337 = 0.663$

For Location B: $1 - \frac{520}{47 \times 46} = 1 - \frac{520}{2162} = 1 - 0.241 = 0.759$

Example: for a sample containing three species with 35, 19, and 11 individuals ($N = 65$),

$$D = \frac{35(34)}{65(64)} + \frac{19(18)}{65(64)} + \frac{11(10)}{65(64)} = 0.3947$$

and the compliment is $1 - 0.3947 = 0.6053$. This version of the index has values ranging from 0 to 1, but now the greater the value, the greater the diversity of the sample.

The Shannon-Weiner index (Barnes et al. 1998) was developed from information theory and is based on measuring uncertainty:

$$H' = -\sum^R_{i=1} p_i \ln p_i$$

where $p_i$ is the proportion of individuals that belong to species $i$ and $R$ is the number of species in the sample. An equivalent and computationally easier formula is

$$H' = \frac{N \ln N - \sum (n_i \ln n_i)}{N}$$

The degree of uncertainty of predicting the species of a random sample is related to the diversity of a community. If a community has low diversity (dominated by one species), the uncertainty of prediction is low: a randomly sampled individual is most likely to belong to the dominant species. If diversity is high, uncertainty is high. When all species in the data set are equally common, all $p_i$ values equal $1/R$ and the Shannon-Weiner index equals $\ln(R)$. The more unequal the abundance of species, the smaller the index; if abundance is primarily concentrated into one species, the index will be close to zero. The Shannon-Weiner index is most sensitive to the number of species in a sample, so it is usually considered to be biased toward measuring species richness.

Note that richness alone can be misleading: two samples may have the same richness (say, 3 species) and the same total number of individuals (say, 446), yet the sample whose individuals are more evenly distributed among the species is the more diverse.

As forest and natural resource managers, we must be aware of how our timber management practices impact the biological communities in which they occur. The key component of habitat for most wildlife is vegetation, which provides food and structural cover, so a silvicultural prescription influences not only the timber we are growing but also the plant and wildlife communities that inhabit these stands. Landowners, both public and private, often require management of non-timber components, such as wildlife, along with meeting the financial objectives achieved through timber management. To develop a plan that encompasses multiple land-use objectives, we need information on the habitat required by the wildlife species of interest, and we need to be aware of how timber harvesting and subsequent regeneration will affect the vegetative characteristics of the system. We are going to examine several common measures of species diversity.

(Source: LibreTexts, "10.1: Introduction, Simpson's Index and Shannon-Weiner Index," SUNY College of Environmental Science and Forestry; unless otherwise noted, LibreTexts content is licensed CC BY-NC-SA 3.0.)
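Both indices are easy to compute directly; here is a minimal sketch (the species counts 35, 19, and 11 reproduce the worked Simpson example above):

```python
import math

def simpson_D(counts):
    """Simpson's index: probability two random individuals share a species."""
    N = sum(counts)
    return sum(n * (n - 1) for n in counts) / (N * (N - 1))

def shannon_H(counts):
    """Shannon-Weiner index H' = -sum(p_i * ln(p_i))."""
    N = sum(counts)
    return -sum((n / N) * math.log(n / N) for n in counts if n > 0)

counts = [35, 19, 11]                # N = 65
D = simpson_D(counts)
print(round(D, 4), round(1 - D, 4))  # 0.3947 0.6053
```

When all R species are equally common, `shannon_H` returns ln(R), matching the statement above.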
# The Bermuda Triangle
Time Limit: Java: 10000 ms / Others: 10000 ms
Memory Limit: Java: 32768 KB / Others: 32768 KB
## Description
People in the hidden region of the Bermuda Triangle make everything they need in triangular shapes. One day, someone decided to break the rule and bake a hexagonally shaped cake. But as usual, he has to serve the cake in triangular pieces. The pieces are equilateral triangles but in different sizes for different people. He can use as many triangles as needed to cut the cake into pieces, such that nothing remains from the cake. For example, the following figure shows one way that a hexagon with side 9 can be cut into triangles with side 2 and 3. (The cake is cut along the thick lines, thin lines are drawn to show the sizes).
Input is a hexagon and triangle types (specified by the length of their sides) and the goal is to decide if the hexagon can be completely divided by the given triangle types.
## Input
The first line of the input contains a single integer t (1 <= t <= 10), the number of test cases, followed by the input data for each test case. Each test case consists of a single line, containing s (1 <= s <= 25), the length of the hexagon's side, followed by n, the number of triangle types (1 <= n <= 10), followed by n integers representing the length of each triangle type's side (between 1 and 25, inclusive).
## Output
There should be one output line per test case containing either YES or NO depending on whether the hexagon can be completely divided by the given triangle types.
## Sample Input
3
5 2 2 3
7 2 3 2
13 2 2 3
## Sample Output
NO
NO
YES
## Source
Asia 2001, Tehran (Iran)
## Thursday, March 16, 2017 ... /////
### LHCb discovers five $css$ bound states at once
The LHCb detector is way smaller and cheaper than its fat ATLAS and CMS siblings. But it doesn't mean that it can't discover cool things – and many things. The letter $b$ refers to the bottom quark. It's often said that the bottom quark is the best path towards the research of CP-violation and similar things.
But for some reasons, the LHCb managed to discover five new particles without any bottom quark – at once:
The collaboration proudly tweeted about the new discovery and linked to their new paper,
Observation of five new narrow $\Omega^0_c$ states decaying to $\Xi^+_c K^−$
You may count the new peaks on the graph above. If you haven't forgotten some rather rudimentary number theory, you know that the counting goes as follows: One, two, three, four, five. TRF contains new stuff to learn for everybody, including those who would consider any mathematics exam unconstitutional and inhuman. ;-)
They identify the bound states of the Omega baryon according to the decay products that they can label reliably enough. These new charmed neutral Omega baryons (the quark content is $css$, like the cascading style sheets) decay to a positive charmed Xi baryon (whose quark content is $usc$ or $ucs$, if you agree that the acronym shouldn't be reserved by a corrupt Union of Concerned Scientists and Anthony Watts' dog) and the negative kaon $K^-$ (quark content: $\bar u s$, thanks, Bill).
Well, the positive charmed Xi baryon decays to $p K^- \pi^+$ and those are really well-known everyday animals for the LHCb scientists.
The new $css$ bound states are narrow resonances – which means that the decay rate is slow (width is small) enough. You may consider them excited states of the same particle or different particles. Which of those is better is a little bit of a subjective issue. The excited states of a hydrogen atom are clearly "the same particle" because the transitions between them are the most common ones and involve a truly neutral, peaceful photon (which is "almost nothing", especially when it comes to charges).
But these excited states of the $css$ quarks are strongly interacting and it's rather easy for these beasts to create quark-antiquark pairs, in this case an up-antiup pair, and divide all the quarks differently. These processes are actually more frequent than a simple emission of a photon. So the excited states don't change to each other so automatically and they may be considered distinct entities although they're really built from the same ingredients, just like different excited states of a hydrogen atom.
You can imagine how people had to be thrilled in the 1960s when such new particles were discovered frequently and the innocent physicists actually believed that those were elementary particles. However, in the late 1960s, quarks were proposed, and in the early 1970s, QCD was written down. Before QCD, physicists were willing to believe that they lived in a paradise with hundreds of exotic elementary particle species, or that these numerous particles were proving that Nature was lifting Herself by Her own bootstraps.
At some moment, physicists devoured the QCD apple and their feeling of mystery and submission faded away. Those are just some additional boring bound states of six quark flavors and their antiquarks, aren't they? Why so much ado? And that's where we are. Lots of the childish excitement is gone, our previous emotions look a bit silly and scientifically naive, and when we want to look for the truly deep signs of Nature's mysteries, we know that we must dig deeper than to discover five new baryons (at once).
Off-topic. Dr Sheldon Cooper, the boy (Iain Armitage, a theater critic), interviewed another Sheldon a year ago. The spin-off of TBBT could be fun.
And if you asked me, I find this whole elaborate scheme with Greek letters labeling the QCD ground states to be an anachronism. I would replace symbols like $\Omega_c^0$ by $css$ – note that both require three characters – and perhaps add some extra labels when needed. For example, these five excitations may be labeled $3000,3050,3066,3090,3119$, which are their masses in units of $1\,{\rm MeV}$. With this modernized notation, we could reserve the precious Greek letters for something more mysterious, for something that still sounds Greek to us. And I am not talking about Greek economic and immigration policies, which should be represented by characters such as f%&*^*g s#&*t.
But I may be wrong and those baryons may be fundamentally important. And even if they're not, it's important that physicists don't forget the craft that their predecessors were so good at half a century ago. It's like not forgetting how to make and listen to classical music or anything of the sort that suddenly faced lots of competition attempting to steal a big part of the people's attention.
# Force, Position, Torque
Question
I need an explanation of what is happening at each step, and why.
First, to determine the position vector (from the origin) of a point, you just take the projections of the vector (or of its end point) on the x, y, and z axes, multiply them by the unit vectors i, j, k, and add them.
For example, for segment AC, A is the origin; the projection of C on the x axis is 4 m, on the y axis 3 m, and on the z axis 0 m.
R(AC) = 4*i + 3*j +0*k
For segment AB you either take the projections of B on the axes as above, or simply sum:
R(AB) = R(AC) + R(CB) = 4*i + 3*j - 2*k, where R(CB) = -2*k (it has a projection only on the z axis)
The unit vector along AC is simply R(AC) divided by its magnitude |R(AC)|:
u(AC) = R(AC)/|R(AC)| = (4*i + 3*j)/sqrt(4^2 + 3^2) = (4*i + 3*j)/5
Now the moment of F about point C is, by definition, M(C) = R(CB) x F (vector product),
and the moment of F about the AC axis is the projection of that moment onto the AC axis:
M(AC) = u(AC)*M(C) (scalar product)
M(AC) = u(AC)*(R(CB) x F)
Described differently (but with the same result), the moment of F about the AC axis is the projection onto that axis of the moment of F about point A:
M(A) = R(AB) x F
M(AC) = u(AC)*M(A) = u(AC)*(R(AB) x F)
Now, to obtain the vector product of two vectors, you compute the following determinant:
R x F = | i   j   k  |
        | x   y   z  |
        | Fx  Fy  Fz |
      = i*(y*Fz - z*Fy) + j*(z*Fx - x*Fz) + k*(x*Fy - y*Fx)
Then, to compute the scalar product of two vectors:
u*M = ux*Mx + uy*My + uz*Mz
Combining the two gives the following rule (the scalar triple product):
u*(R x F) = | ux  uy  uz |
            | x   y   z  |
            | Fx  Fy  Fz |
          = ux*(y*Fz - z*Fy) + uy*(z*Fx - x*Fz) + uz*(x*Fy - y*Fx)
The last thing done on the paper is to express the vector M(AC), whose magnitude you computed (14.4 kN*m), by projecting it onto the x and y directions. For this you just multiply the magnitude of M(AC) by the components of the unit vector u(AC):
M(AC) = module(M(AC)) * u(AC) = 14.4*u(AC)
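As a cross-check of the whole recipe, here is a short Python sketch. Note that the force components F below are made up for illustration (the actual F from the problem isn't reproduced in this post), so the resulting number won't be the 14.4 kN*m above; the point is only that both routes – via the moment about C and via the moment about A – give the same projection on the AC axis.

```python
import math

def cross(a, b):
    # vector product of two 3-vectors
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    # scalar product of two 3-vectors
    return sum(x*y for x, y in zip(a, b))

# Geometry from the text: A at the origin, C at (4, 3, 0) m, B at (4, 3, -2) m
R_AC = (4.0, 3.0, 0.0)   # position vector A -> C
R_CB = (0.0, 0.0, -2.0)  # position vector C -> B
R_AB = tuple(p + q for p, q in zip(R_AC, R_CB))

mag = math.sqrt(dot(R_AC, R_AC))          # |R(AC)| = 5
u_AC = tuple(c / mag for c in R_AC)       # unit vector along AC

F = (2.0, -3.0, 6.0)  # hypothetical force at B, in kN (not the problem's F)

# Route 1: moment about point C, projected on the AC axis
M_C = cross(R_CB, F)
M_AC = dot(u_AC, M_C)

# Route 2: moment about point A, projected on the AC axis
M_A = cross(R_AB, F)
assert abs(dot(u_AC, M_A) - M_AC) < 1e-12  # same result either way

print(M_AC)
```

The two routes agree because R(AB) and R(CB) differ by R(AC), which is parallel to u(AC), so its cross product with F contributes nothing to the projection.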
## Precalculus (6th Edition) Blitzer
$f(x)=\log_{\frac{1}{2}}x$: The red graph $g(x)=-2\log_{\frac{1}{2}}x$: The blue graph
$x=0$ is the vertical asymptote for both functions $f$ and $g$. The domains of the functions are $$D_f=(0, \infty ), \qquad D_g = (0, \infty ).$$ The ranges of the functions are $$R_f=(- \infty, \infty ), \qquad R_g=(- \infty, \infty ).$$
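These claims are easy to verify numerically. A small Python sketch, using the change-of-base identity $\log_{\frac{1}{2}}x = \ln x/\ln\frac{1}{2}$:

```python
import math

# log base 1/2 via change of base: log_{1/2}(x) = ln(x) / ln(1/2)
def f(x):
    return math.log(x) / math.log(0.5)

def g(x):
    return -2 * f(x)

# g is f stretched vertically by a factor of 2 and reflected across the x-axis
assert g(8) == -2 * f(8)

# Both functions accept only x > 0 (domain (0, inf)) and blow up near the
# vertical asymptote x = 0, in opposite directions:
print(f(1e-9), g(1e-9))  # large positive, large negative
```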
1. ## Differentiate an equation
I have to differentiate this equation for a thermo problem, but I have forgotten how to do it.
T(y)=(T_s-T_f)exp[-22,000y(m)]+T_f
Initially I was thinking you treat y(m) as a constant and you end up getting 1 for your final answer; however, that clearly does not work. T_s and T_f are both T values, just different. Any help is appreciated.
2. ## Re: Differentiate an equation
Originally Posted by Juggalomike
I have to differentiate this equation for a thermo problem, but I have forgotten how to do it.
T(y)=(T_s-T_f)exp[-22,000y(m)]+T_f
Initially I was thinking you treat y(m) as a constant and you end up getting 1 for your final answer; however, that clearly does not work. T_s and T_f are both T values, just different. Any help is appreciated.
I need some help with your notation... are the subscripted T's constants? What is meant by y(m)? y times m? y as a function of m? What is m?
3. ## Re: Differentiate an equation
The subscripted T's are variables; however, we are given the values for each and they are constant throughout the problem, so I'd assume they can be treated as constants. Also, y(m) means y as a function of m, similar to the initial T(y).
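4. ## Re: Differentiate an equation

For what it's worth, once the subscripted T's are treated as constants, the chain rule gives T'(y) = -22,000 (T_s - T_f) exp(-22,000 y). A quick Python sketch to sanity-check that against a finite difference (the numeric values of T_s and T_f here are made up):

```python
import math

# Treating the subscripted T's as constants, the chain rule gives
#   T'(y) = -22,000 * (T_s - T_f) * exp(-22,000 * y).
# The constant values below are made up just to check that formula.
T_s, T_f = 500.0, 300.0   # hypothetical constant temperatures
K = 22_000.0              # the 22,000 factor in the exponent

def T(y):
    return (T_s - T_f) * math.exp(-K * y) + T_f

def dT_dy(y):
    # d/dy exp(-K*y) = -K*exp(-K*y); the additive T_f differentiates to 0
    return -K * (T_s - T_f) * math.exp(-K * y)

# Compare against a central finite difference at a sample point
y0, h = 1e-4, 1e-9
numeric = (T(y0 + h) - T(y0 - h)) / (2 * h)
rel_err = abs(numeric - dT_dy(y0)) / abs(dT_dy(y0))
print(rel_err)  # should be very small
```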
# Dual systems of sequents and tableaux for many-valued logics
Workshop on Tableau-based Deduction, Marseille, 1993. Bulletin of the EATCS 51 (1993) 192–197 (with Matthias Baaz and Christian G. Fermüller)
The aim of this paper is to emphasize the fact that for all finitely-many-valued logics there is a completely systematic relation between sequent calculi and tableau systems. More importantly, it is shown that for both of these systems there are always two dual proof systems (not just two ways to interpret the calculi). This phenomenon may easily escape one’s attention since in the classical (two-valued) case the two systems coincide. (In two-valued logic the assignment of a truth value and the exclusion of the opposite truth value describe the same situation.)
Preprint
# Solve two equations of motion?
GROUPS:
Hi guys,

Having a bit of an issue with one of my inputs in Mathematica: I have two equations of motion for a complex system and am trying to get a solution out of them. I keep getting the error message:

NDSolve::ndnum: Encountered non-numerical value for a derivative at t == 0.

But, as far as I'm concerned, I have assigned a value to every variable except the two I'm solving for. My input is below:

eq1 := θ''[t] - A w^2 l Cos[θ[t]] Sin[w t] - l g Sin[θ[t]] + (l ϕ'[t]^2 Sin[θ[t] - ϕ[t]] + l ϕ''[t] Cos[θ[t] - ϕ[t]])/α == 0

eq2 := ϕ''[t] - (g Sin[ϕ[t]] + A w^2 Cos[ϕ[t]] Sin[w t] + l θ'[t] Sin[θ[t] - ϕ[t]] - l θ''[t] Cos[θ[t] - ϕ[t]])/L == 0

With[{g = 9.81, α = 20, l = 1, L = 1, A = 1, w = 50},
 sol = NDSolve[{eq1, eq2, θ'[0.1] == 5, ϕ'[0.1] == 1, θ[0] == 0, ϕ[0] == 0}, {θ[t], ϕ[t]}, {t, 10}]]

NDSolve::ndnum: Encountered non-numerical value for a derivative at t == 0.

Thanks.