| url | text | date | metadata |
|---|---|---|---|
http://mathonline.wikidot.com/jensen-s-inequality
|
# Jensen's Inequality
Theorem 1 (Jensen's Inequality): Let $f$ be a convex function defined on an interval $I$. Let $x_1, x_2, ..., x_n \in I$ and let $t_1, t_2, ..., t_n$ be nonnegative real numbers such that $t_1 + t_2 + ... + t_n = 1$. Then $f(t_1x_1 + t_2x_2 + ... + t_nx_n) \leq t_1f(x_1) + t_2f(x_2) + ... + t_nf(x_n)$.
• Proof: We prove Jensen's inequality by mathematical induction.
• For the case when $n = 1$ we must have that $t_1 = 1$. Clearly $f(1x_1) = 1f(x_1)$.
• For the case when $n = 2$, since $f$ is convex, by definition, for every $x_1, x_2 \in I$ and for every $s \in [0, 1]$ we have that:
(1)
\begin{align} \quad f(sx_1 + (1 - s)x_2) \leq sf(x_1) + (1-s)f(x_2) \end{align}
• Let $t_1, t_2$ be nonnegative real numbers such that $t_1 + t_2 = 1$. Then $t_2 = 1 - t_1$ and from above we see that:
(2)
\begin{align} \quad f(t_1x_1 + t_2x_2) \leq t_1f(x_1) + t_2f(x_2) \end{align}
• Assume that Jensen's inequality holds for some $k$, that is, assume that for every $x_1, x_2, ..., x_k \in I$ and for every set of nonnegative real numbers $t_1, t_2, ..., t_k$ such that $t_1 + t_2 + ... + t_k = 1$ we have that:
(3)
\begin{align} \quad f(t_1x_1 + t_2x_2 + ... + t_kx_k) \leq t_1f(x_1) + t_2f(x_2) + ... + t_kf(x_k) \end{align}
• Let $x_{k+1} \in I$ and let $t_1, t_2, ..., t_k, t_{k+1}$ be nonnegative real numbers such that $t_1 + t_2 + ... + t_k + t_{k+1} = 1$ and $t_{k+1} < 1$ (if $t_{k+1} = 1$ then $t_1 = ... = t_k = 0$ and the inequality is immediate). Then:
(4)
\begin{align} \quad f(t_1x_1 + t_2x_2 + ... + t_kx_k + t_{k+1}x_{k+1}) &= f \left ( \sum_{m=1}^{k} t_mx_m + t_{k+1}x_{k+1} \right ) \\ &= f \left ( (1 - t_{k+1}) \frac{1}{1 - t_{k+1}}\sum_{m=1}^{k} t_mx_m + t_{k+1}x_{k+1} \right ) \\ &\leq (1 - t_{k+1}) f \left ( \frac{1}{1 - t_{k+1}} \sum_{m=1}^{k} t_mx_m \right ) + t_{k+1} f(x_{k+1}) \\ &= (1 - t_{k+1}) f \left ( \sum_{m=1}^{k} \frac{t_m}{1 - t_{k+1}}x_m \right ) + t_{k+1} f(x_{k+1}) \end{align}
• Observe that since $t_1 + t_2 + ... + t_k + t_{k+1} = 1$ we have that $t_1 + t_2 + ... + t_k = 1 - t_{k+1}$ and so:
(5)
\begin{align} \quad \frac{t_1}{1 - t_{k+1}} + \frac{t_2}{1 - t_{k+1}} + ... + \frac{t_k}{1 - t_{k+1}} = 1 \end{align}
• So by the induction hypothesis we have that:
(6)
\begin{align} \quad f(t_1x_1 + t_2x_2 + ... + t_kx_k + t_{k+1}x_{k+1}) & \leq (1 - t_{k+1}) \sum_{m=1}^{k} \frac{t_m}{1 - t_{k+1}} f(x_m) + t_{k+1} f(x_{k+1}) \\ & = t_1f(x_1) + t_2f(x_2) + ... + t_kf(x_k) + t_{k+1}f(x_{k+1}) \end{align}
• So by the principle of mathematical induction, Jensen's inequality holds. $\blacksquare$
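The inequality is easy to spot-check numerically. This sketch (my own, with the convex function $f(x) = x^2$ chosen arbitrarily) tests random convex combinations:

```python
import random

# Numerically spot-check Jensen's inequality for the convex function f(x) = x^2.
def f(x):
    return x * x

random.seed(0)
for _ in range(1000):
    n = random.randint(1, 6)
    xs = [random.uniform(-10, 10) for _ in range(n)]
    ws = [random.random() for _ in range(n)]
    total = sum(ws)
    ts = [w / total for w in ws]          # nonnegative weights summing to 1
    lhs = f(sum(t * x for t, x in zip(ts, xs)))
    rhs = sum(t * f(x) for t, x in zip(ts, xs))
    assert lhs <= rhs + 1e-9              # Jensen: f(sum t_i x_i) <= sum t_i f(x_i)
```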
|
2019-03-24 04:38:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 6, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000077486038208, "perplexity": 1153.0086278543934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203326.34/warc/CC-MAIN-20190324043400-20190324065400-00008.warc.gz"}
|
http://mathhelpforum.com/calculus/92272-higher-order-partial-derivative.html
|
# Thread: Higher Order Partial Derivative
1. ## Higher Order Partial Derivative
I've been struggling with this one for a while. I can't seem to get it right. I have to find $f_{xxyy}$ of $e^{x^2+y^3}$.
I cannot arrive at the correct answer. A little guidance would be greatly appreciated.
2. When we find a partial derivative with respect to a variable, we differentiate normally with respect to that variable while treating all other independent variables as constants. For example, if $f(x,\,y)=x^2\sin y$, then
\begin{aligned}
f_x&=2x\sin y\\
f_y&=x^2\cos y.
\end{aligned}
In our case, we can use the Chain Rule and the Product Rule to find $f_{xx}$ when $f(x,\,y)=e^{x^2\,+\,y^3}$. After this, $f_{xxyy}$ can be found similarly.
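Carrying the hint through by hand: $f_x = 2xe^{x^2+y^3}$, so $f_{xx} = (2 + 4x^2)e^{x^2+y^3}$, and differentiating twice in $y$ gives $f_{xxyy} = (2 + 4x^2)(6y + 9y^4)e^{x^2+y^3}$. A small self-contained sketch cross-checks that closed form against nested central differences (the step size and test point are arbitrary choices):

```python
import math

def f(x, y):
    return math.exp(x**2 + y**3)

def f_xxyy_closed(x, y):
    # Hand-derived: two x-derivatives give (2 + 4x^2) e^{x^2+y^3};
    # two y-derivatives of that give the (6y + 9y^4) factor.
    return (2 + 4 * x**2) * (6 * y + 9 * y**4) * math.exp(x**2 + y**3)

def d2(g, t, h=1e-2):
    # Second central difference in one variable.
    return (g(t + h) - 2 * g(t) + g(t - h)) / h**2

def f_xxyy_numeric(x, y, h=1e-2):
    fxx = lambda yy: d2(lambda xx: f(xx, yy), x, h)  # f_xx at (x, yy)
    return d2(fxx, y, h)                             # then twice in y

x0, y0 = 0.5, 0.5
closed = f_xxyy_closed(x0, y0)
numeric = f_xxyy_numeric(x0, y0)
assert abs(closed - numeric) / abs(closed) < 1e-2
```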
|
2016-12-03 11:33:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9640249013900757, "perplexity": 189.78116799399393}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698540928.63/warc/CC-MAIN-20161202170900-00082-ip-10-31-129-80.ec2.internal.warc.gz"}
|
http://www.chegg.com/homework-help/questions-and-answers/i-having-problems-simplifing-laplace-transform-y-s-f-s-y-s-ms-2-bs-k-2s-2-2f-s-q992769
|
## Algebraic manipulation of a Laplace transform
I'm having problems simplifying this Laplace transform to Y(s)/F(s)=............
Y(s)(Ms^2+Bs+K)-2s-2=2F(s)
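The rearrangement goes through directly. A sketch: the $2s + 2$ terms look like initial-condition terms, so a clean $Y(s)/F(s)$ ratio only exists under the assumption of zero initial conditions.
\begin{align} Y(s)(Ms^2 + Bs + K) = 2F(s) + 2s + 2 \quad \Rightarrow \quad Y(s) = \frac{2F(s) + 2s + 2}{Ms^2 + Bs + K} \end{align}
With zero initial conditions the $2s + 2$ terms drop out, leaving
\begin{align} \frac{Y(s)}{F(s)} = \frac{2}{Ms^2 + Bs + K} \end{align}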
|
2013-06-18 06:47:15
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8710251450538635, "perplexity": 9099.567308684218}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707184996/warc/CC-MAIN-20130516122624-00079-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://cs.stackexchange.com/questions/85272/what-is-the-time-complexity-of-sum-i-0-fracnk-p-i-cdot-2ik-if-p
|
# What is the time complexity of $\sum_{i=0}^{\frac{n}{k}} p_i\cdot 2^{ik}$ if $p$ is a binary number?
Let $p$ be a number represented in binary of length $n$. We'll divide $p$ into $\frac{n}{k}$ parts, each of length $k$ (let each part be $p_i$, $0 \le i\le \frac{n}{k}$) where $k=\lg n$. Now we want to represent $p$ as a sum: $$\sum_{i=0}^{\frac{n}{k}} p_i\cdot 2^{ik} \qquad(*)$$
I know that multiplying a number in binary by $2^{ik}$ is the same as shifting $ik$ bits to the left, which should be a $\Theta(1)$ operation in real life, but theoretically this could be a $\Theta(n)$ operation because we may have to shift up to $n$ bits to the left. So the sum above becomes $\Theta(n^2)$.
Is there a faster way to perform the sum operation $(*)$?
• @gnasher729 unfortunately these are the conditions for the problem. If $n \mod k \neq 0$ then we can just add zeroes I suppose. Why isn't the sum equal to $p$? – Yos Dec 10 '17 at 14:31
• I recommend you figure out exactly what your problem is. I would bet that something has been misunderstood. If it is homework, and you understood the homework correctly, then I would bet that your teacher misunderstood something. – gnasher729 Dec 10 '17 at 14:34
• And large numbers will be stored in multiple words, so shifting a small number by i bits will be done in constant time. – gnasher729 Dec 10 '17 at 14:35
The running time to compute the sum is $O(n)$ bit operations. You just concatenate $p_1,p_2,\dots,p_{n/k}$ and you've got your answer.
• Of course! I should've thought about this. Just for the sake of interest, if I were to shift by $ik$ bits would the running time be $\Theta(n^2)$? – Yos Dec 10 '17 at 18:56
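The accepted answer can be illustrated with a small sketch: splitting a binary number into $k$-bit chunks and recombining them with shifted addition (which is just concatenation of the chunks) recovers the original number.

```python
def split_into_chunks(p: int, n: int, k: int):
    # p_i is the i-th k-bit chunk of p, least significant first.
    return [(p >> (i * k)) & ((1 << k) - 1) for i in range((n + k - 1) // k)]

def recombine(chunks, k: int) -> int:
    # The sum of p_i * 2^{ik}; since the shifted chunks never overlap,
    # this is exactly concatenation of the chunks.
    return sum(p_i << (i * k) for i, p_i in enumerate(chunks))

p = 0b1101_0110_1011
n = p.bit_length()
k = 4
chunks = split_into_chunks(p, n, k)
assert recombine(chunks, k) == p   # the sum (*) just reassembles p
```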
|
2020-01-24 18:32:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8015543818473816, "perplexity": 204.1725431089261}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250624328.55/warc/CC-MAIN-20200124161014-20200124190014-00241.warc.gz"}
|
https://www.physicsforums.com/threads/low-voltage-op-amp-to-control-high-voltage-series-pass-transistor.709874/
|
# Low voltage op amp to control high voltage series pass transistor
1. Sep 11, 2013
### d.arbitman
I am in the middle of designing a low drift and low ripple power supply for myself. To control the output voltage, which ranges from 0 to 20V, I am using an op amp whose output is fed into a Darlington pair of power bjts. The output of the Darlington pair is both fed back to the op amp's inverting input (i.e. the op amp is configured as a voltage follower) as well as to the load. The desired output voltage is present on the non-inverting input to the op amp as a reference. I have a transformer that produces +/- 40V rails so I am using those to power the op amp. I have a high voltage op amp but recently, the op amp was fried even though it can be powered from +/- 45V. As a result, I have decided to use a low voltage op amp to control the +/- 40V rails. The problem is, I have no idea how to go about doing this. I have tried using single transistor amplifiers to translate 0 -> 5V to 0 -> 20V but have had no luck. So the question, how do I use a low voltage op amp that is powered from +/- 5V to control +/- 40V rails so that my output ranges from 0 to 20V?
2. Sep 11, 2013
### pullmanwa
3. Sep 11, 2013
### d.arbitman
Uploaded. As you can see, the op amp is powered from +/- 40V but I don't want to use a high voltage op amp like I'm doing at the moment.
(Attachment: Schematic.PNG, 2.8 KB)
4. Sep 11, 2013
### pullmanwa
I have a question if you don't mind.
1. where is the output voltage taken across from? I'm asking where is your ground reference for the output voltage?
** The + input to the opamp is missing --- that will cause a problem
5. Sep 11, 2013
### meBigGuy
That's a pretty primitive schematic, but I see what you are trying to do.
You need an amplifier with gain since you want to go from say a 1V swing to a 40V swing.
Here is a circuit that does what you want and does it well.
http://circuits.linear.com/High_Speed_Amplifiers--all-439
Once you understand it, you can simplify it.
The simplest approach is to just drive into a single stage common emitter amp. But that doesn't have much drive. Next simplest is to follow that with an emitter follower. Don't forget to deal with the common-emitter inversion and divide down the emitter follower output for the feedback.
Last edited: Sep 11, 2013
6. Sep 11, 2013
### d.arbitman
I know. That's the input voltage and assumed to be in the range of 0 to 20V. The ground reference is labeled on the schematic (for lack of a better term...the triangle connected to the power supplies is ground).
I have configured the op amp to behave as a voltage follower.
7. Sep 11, 2013
### d.arbitman
That op amp is supplied from +/- 125V, which is what I want to stay away from. I want to use a low voltage op amp to do what I want. I not only want the input to be scaled down but also the rails, to say +/- 5V, but I still want it to be able to drive a series pass transistor whose rail is at +40V.
I have attached a complete schematic of what I have.
R1 is the load resistance. U1 and its associated resistors mimic a 50V/V current sense amplifier.
(Attachment: Schematic.jpg, 54.3 KB)
8. Sep 11, 2013
### meBigGuy
No it isn't. Notice the zener diodes that drop the voltage for the op amp. Or supply the opamp from different supplies.
9. Sep 11, 2013
### d.arbitman
Oh you're absolutely right.
10. Sep 11, 2013
### meBigGuy
OK --- You just need to drive the opamp into a common emitter stage with a pullup to 40V. Follow the common emitter stage with your darlington. Don't forget the inversion of the common emitter stage, which changes things a bit (notice the feedback to the + input in the 125V circuit). You can make U2 inverting, or whatever.
11. Sep 11, 2013
### d.arbitman
I have no idea what's going on in that circuit. I figure the 2N3440 and 2N5415 furthest to the right are the series pass transistors and they are the ones that supply the current. What's the point of the 27R resistors and the 1k that are connected to them? What's the point of the 2N2222 and 2N2907 transistors? Why are the 1N4148 diodes connected across the base of the 2N3440 series pass transistor to the output?
How do I go about understanding what that circuit does?
Last edited: Sep 11, 2013
12. Sep 12, 2013
### meBigGuy
The circuit is a bit fancy for what you want to do. It provides high power symmetrical drive with current limiting.
The 27R resistors are for current limiting sense. When the voltage across them reaches 0.7V the 2907 and 2222 transistors turn on, reducing the drive to the output transistor.
See if you can decipher what I said in post 10.
13. Sep 12, 2013
### meBigGuy
Describing the circuit a bit more for you: think of it as two common emitter amps followed by two emitter followers.
If you look at the circuit as a big X it is easier to conceptualize what it basically does. The bottom left 3440 is a common emitter amplifier, and the top right 3440 is sort of its emitter follower. And the 2 5415's operate the same way. In combination they are very symmetrical, which you really don't need.
Look at T1 and T5 in this design. It has a double inversion so maybe that will make things easier. Just drive into T1. Read the Wikipedia page on common emitter amplifiers to get it biased right.
http://www.bogobit.de/bogobox/
14. Sep 12, 2013
### d.arbitman
What do the 1k resistors do? What is the purpose of the first transistor stage whose base is connected to 1M resistors? How does that first stage work? I figure the diodes prevent crossover distortion, but what about everything else?
As far as post 10 is concerned I don't know which circuit you are referring to when you say drive the op amp into a common emitter stage.
What do you mean by pullup to 40V?
In a common emitter configuration, the output is taken at the collector and the output is inverted. As a result I should make U2 in my schematic inverting? It seems like I would be introducing more parts than I already have to do the same thing.
15. Sep 12, 2013
### meBigGuy
Explaining basic transistor circuits is a bit beyond what can be done here. I can point you to a topology that can do what you want and you have to take it from there. I suggested a common emitter followed by a darlington, but two common emitters is a better idea.
Again, look at T1 and T5 in http://www.bogobit.de/bogobox/ That's a basic topology for what you want to do.
Don't forget that you need a voltage divider in your feedback loop to keep high voltage from frying the opamp.
I'm not sure whether you will run into stability problems with the additional gain. You may want to rolloff U2.
16. Sep 12, 2013
### jim hardy
17. Sep 12, 2013
### meBigGuy
That's what I posted. But for his circuit the T1/T5 combo in http://www.bogobit.de/bogobox/ is probably adequate. What do you think? I need to get a schematic tool. Too hard to describe circuits.
18. Sep 12, 2013
### d.arbitman
I rebuilt the circuit in that same way...op amp drives an NPN which sinks the base current from a PNP via its collector. I ran some simulations and boy that PNP will dissipate a ton of power.
Thank you for helping by the way.
I have attached my latest schematic.
P.S. I tried loading the circuit that you showed me from LT, the +/- 120V output, with a 10$\Omega$ load and it failed miserably.
(Attachment: Schematic.jpg, 46.2 KB)
Last edited: Sep 12, 2013
19. Sep 12, 2013
### jim hardy
Well so it is exact same link ! Sorry - I only saw bogobox....
in his schematic
he needs a current source to turn on his Q1, and a pull-down transistor controlled by the opamp to turn Q1 off.
Establish some base drive to Q1, enough to make rated current
and steal that base drive away to control output voltage.
As you said he should figure out why that Linear Technology circuit works.
Also as you said, it is difficult to paint a good mental picture with just words.
old jim
20. Sep 12, 2013
### jim hardy
Well
120 volts across ten ohms is twelve amps.
Did you notice that the 27 ohm resistor and 1N4148 diode form a ~25 milliamp current limiter on the output stage?
How's it do below 1/4 volt on that ten ohm load?
When you can describe in words how a circuit works you are coming to understand it.
21. Sep 12, 2013
### d.arbitman
Wait a minute. If the 1n4148 AND THE 27ohm form the current limiter, then what do the 2n2222 and the 2n2907 do?
22. Sep 12, 2013
### jim hardy
You got me good !
They make a better one.
23. Sep 12, 2013
### d.arbitman
What do the 510 and 330 ohm resistors do, why are they chosen so small?
Is the smaller of the two values connected to the emitter of the PNP rather than the larger in order to forward bias the transistor?
24. Sep 12, 2013
### jim hardy
They're to make the transistor mimic the current flowing through the 50K ohm resistor. That transistor is a very simple (if admittedly imprecise) controlled current source.
I did arithmetic assuming opamp is at zero out and get two milliamps through that transistor.
That is the base drive current available to the final transistors, but some of it is diverted by your current limiters..
The basis of this thing is a balancing act with two controlled current sources driven by opamp through those 50K resistors.
See how cleverly they've separated the high voltage from opamp?
50K resistors and high voltage transistors.
The output is "felt" through the 100K resistor and balanced against input via its 10K
Also observe the driver stages invert polarity, that's why the LT 1055's + and - inputs are backward from your normal inverting opamp configuration.
I'm still trying to figure out the biasing at full output.
Perhaps you or me-big-guy will beat me to it ...
Here's as far as I got;
---------------------------------------------------------------------------------------------------
http://circuits.linear.com/img/439_circuit_1.jpg
The objective is to control ~ 120 volts with only a ~12 volt "handle".
We have to do that because
The opamp is powered by those zener sources, look at their datasheet they're 15 volts.
So the LT1055 opamp can only move his pin 6 up and down about 12 volts.
We'll call that point in the circuit 'our 12 volt handle'.....
Let us look at top half of circuit when handle is at zero volts.
The majority of the voltage drop between power supply and our "handle" is across the 50K resistors.
Current through the 50K resistor is around ~(125 ) / 50.5K = 2.475ma.
Voltage across the 510 = 2.475ma X 510 ohms = 1.262 volts
If Veb for the 2N5415 is 0.6V, then there must be (1.262 - 0.6) = 0.662V across the 330 ohm resistor. That'd be 2 milliamps through it.
That 2 ma multiplied by 2N3440's probable hfe of ~ 30, would cause 60 ma to flow out of its emitter but your current limiters limit it to ~25 ma.
By symmetry, the bottom half of the circuit is doing the exact same thing, except of course current is in other direction - leaving the output terminal instead of entering it...
So the two 25ma currents cancel and output is zero.
Now let us push our 'handle' down to -12 volts.
Current through the 50K resistor is now ~(137) / 50.5K = 2.712 ma
Voltage across the 510 = 2.712ma X 510 ohms = 1.383 volt
voltage across 330 = (1.383-0.6) = 0.783 v
that'd be 2.374 ma through it
So - lowering our 'handle ' twelve volts increased the base drive current for top 2N3440 by 374microamps.
By symmetry, bottom output transistor should see close to same decrease in base drive.
---------------------------------------------------------------------------------------------------
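The bias arithmetic between the dividers can be reproduced in a few lines; the 0.6 V emitter-base drop and the ~50.5K total path resistance are the assumptions stated in the post:

```python
# Reproduce the bias-point arithmetic for the top half of the LT circuit.
V_EB = 0.6        # assumed emitter-base drop for the 2N5415, volts
R_PATH = 50.5e3   # 50K plus 510 ohm path resistance, ohms
R_510, R_330 = 510.0, 330.0

def base_drive_ma(v_rail_to_handle):
    i_path = v_rail_to_handle / R_PATH        # current through the 50K path, amps
    v_510 = i_path * R_510                    # drop across the 510 ohm resistor
    return (v_510 - V_EB) / R_330 * 1e3       # mA through the 330 ohm resistor

i_handle_0 = base_drive_ma(125.0)       # handle at 0 V: ~2.0 mA
i_handle_neg12 = base_drive_ma(137.0)   # handle at -12 V: ~2.37 mA
delta_ua = (i_handle_neg12 - i_handle_0) * 1e3   # ~370 uA of extra base drive
```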
I've not yet considered the 1 meg resistors because I think they're negative feedback,
it's late and i'm pooped. I hope you guys have it explained in the morning !
old jim
Last edited: Sep 13, 2013
25. Sep 13, 2013
### d.arbitman
Last edited: Sep 13, 2013
|
2018-07-16 03:56:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3815828561782837, "perplexity": 2359.903649399957}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589172.41/warc/CC-MAIN-20180716021858-20180716041858-00364.warc.gz"}
|
https://socratic.org/questions/58bae9fa7c014974295dd2a2
|
# Question dd2a2
Mar 28, 2017
Let us first of all understand what is meant by the loudness of sound.
Loudness of sound is a subjective term which describes the strength of the ear's perception of a sound signal. In other words, it is the psycho-physiological correlate of the amplitude of the sound as judged by the ear-brain combination.
Though loudness is related to sound intensity, it cannot be considered identical to the intensity of the sound being heard, as the relationship is not linear. The magnitude of the loudness sensation is known to be proportional to the logarithm of the physical stimulus which produced it.
The unit of loudness of sound is the decibel, dB. A decibel expresses the relative intensity of sound on a scale from zero (the average least perceptible sound) to around 100 dB (the level most people find uncomfortably loud). Normal speech is around 50 to 60 dB.
Mathematically, loudness or acoustic intensity level $L$ is given as
$L = 10 \log \left( \frac{I}{I_0} \right) \text{ dB}$
where the reference intensity ${I_0}$ is taken as $1 \text{ pW m}^{-2}$.
It can also be stated in terms of the sound pressure level $p$ as
$L = 20 \log \left( \frac{p}{p_0} \right) \text{ dB}$
where ${p_0}$ in air is taken as $2 \times 10^{-5} \text{ Pa} = 20\ \mu\text{Pa}$, the threshold of hearing.
Given: atmospheric pressure $= 10^{5} \text{ Pa}$, density of air $= 1.3 \text{ kg m}^{-3}$, and speed of sound $= 340 \text{ m s}^{-1}$.
We know that
Intensity of sound is $I = 2 \pi^2 n^2 A^2 \rho v$
where $n$ is the frequency of the sound, $A$ is the amplitude of the sound wave, $v$ is the velocity of sound, and $\rho$ is the density of the medium in which the sound is traveling.
In the absence of either sound pressure generated or Amplitude of the sound wave, Loudness can not be calculated.
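The two level formulas are easy to exercise numerically; this is an illustrative sketch (the 0.02 Pa example pressure is my own, not from the question):

```python
import math

P0 = 2e-5   # reference sound pressure in air, Pa (threshold of hearing)
I0 = 1e-12  # reference intensity, W/m^2 (1 pW/m^2)

def spl_db(p):
    # Sound pressure level: L = 20 log10(p / p0) dB
    return 20 * math.log10(p / P0)

def il_db(i):
    # Intensity level: L = 10 log10(I / I0) dB
    return 10 * math.log10(i / I0)

# A 0.02 Pa sound (roughly conversational speech) is 60 dB SPL,
# and the threshold of hearing sits at 0 dB by construction.
assert round(spl_db(0.02)) == 60
assert round(spl_db(P0)) == 0
```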
|
2019-03-22 03:51:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 16, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8681328296661377, "perplexity": 837.8043567390318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202628.42/warc/CC-MAIN-20190322034516-20190322060516-00120.warc.gz"}
|
https://www.oreilly.com/library/view/nonlinear-h-infinity-control/9781439854853/xhtml/11_Chapter1a.xhtml
|
1.3 Notations and Preliminaries
In this section, we introduce notations and some important definitions that will be used frequently in the book.
1.3.1 Notation
The notation will be standard most of the time except where stated otherwise. Moreover, N will denote the set of natural numbers, while Z will denote the set of integers. Similarly, ℜ and ℜⁿ will denote, respectively, the real line and the n-dimensional real vector space, and t ∈ ℜ will denote the time parameter.
M, N, … will denote differentiable manifolds of dimension n which are locally Euclidean, and TM and T*M will denote respectively the tangent and cotangent bundles of M, with dimensions 2n. Moreover, π and π* will denote the natural projections TM → M and T*M → M.
|
2019-09-18 15:40:13
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 2, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9321131110191345, "perplexity": 1781.0200038516828}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573309.22/warc/CC-MAIN-20190918151927-20190918173927-00094.warc.gz"}
|
https://zbmath.org/?q=an:1090.32016
|
# zbMATH — the first resource for mathematics
Bi-Lipschitz trivialization of the distance function to a stratum of a stratification. (English) Zbl 1090.32016
Summary: Given a Lipschitz stratification $${\mathcal X}$$ that additionally satisfies condition $$(\delta)$$ of Bekka-Trotman (for instance any Lipschitz stratification of a subanalytic set), we show that for every stratum $$N$$ of $${\mathcal X}$$ the distance function to $$N$$ is locally bi-Lipschitz trivial along $$N$$. The trivialization is obtained by integration of a Lipschitz vector field.
##### MSC:
32S15 Equisingularity (topological and analytic)
32S60 Stratifications; constructible sheaves; intersection cohomology (complex-analytic aspects)
32B20 Semi-analytic sets, subanalytic sets, and generalizations
|
2021-09-19 02:00:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8486661911010742, "perplexity": 3943.0779971288794}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056656.6/warc/CC-MAIN-20210919005057-20210919035057-00217.warc.gz"}
|
http://pysal.org/notebooks/explore/pointpats/window.html
|
# Point Pattern Windows
Author: Serge Rey sjsrey@gmail.com
## Introduction
Windows play several important roles in the analysis of planar point patterns. As we saw in the introductory notebook, the area of the window can be used to develop estimates of the intensity of the point pattern. A window also defines the domain for the point pattern and can support corrections for so-called edge effects in the statistical analysis of point patterns. However, there are different ways to define a window for a point pattern.
This notebook provides an overview of how to work with windows and covers the following: creating a window, window attributes, window methods, and multi-part windows.
## Creating a Window
We will first continue on with an example from the introductory notebook. Recall this uses 200 randomly distributed points within the counties of Virginia. Coordinates are for UTM zone 17 N.
import pysal.lib as ps
import numpy as np
from pysal.explore.pointpats import PointPattern
f = ps.examples.get_path('vautm17n_points.shp')
fo = ps.io.open(f)
pp_va = PointPattern(np.asarray([pnt for pnt in fo]))
fo.close()
pp_va.summary()
Point Pattern
200 points
Bounding rectangle [(273959.664381352,4049220.903414295), (972595.9895779632,4359604.85977962)]
Area of window: 216845506675.0557
Intensity estimate for window: 9.223156295311261e-10
x y
0 865322.486181 4.150317e+06
1 774479.213103 4.258993e+06
2 308048.692232 4.054700e+06
3 670711.529980 4.258864e+06
4 666254.475614 4.256514e+06
From the summary method we see that the Bounding Rectangle is reported along with the Area of the window for the point pattern. Two things to note here.
First, the only argument we passed in to the PointPatterns constructor was the array of coordinates for the 200 points. In this case PySAL finds the minimum bounding box for the point pattern and uses this as the window.
The second thing to note is that the area of the window in this case is simply the area of the bounding rectangle. Because we are using projected coordinates (UTM) the unit of measure for the area is in square meters.
## Window Attributes
The window is an attribute of the PointPattern. It is also an object with its own attributes:
pp_va.window.area
216845506675.0557
pp_va.window.bbox
[273959.664381352, 4049220.903414295, 972595.9895779632, 4359604.85977962]
The bounding box is given in left, bottom, right, top ordering.
pp_va.window.centroid
(623277.8269796579, 4204412.881596957)
pp_va.window.parts
[[(273959.664381352, 4049220.903414295),
(273959.664381352, 4359604.85977962),
(972595.9895779632, 4359604.85977962),
(972595.9895779632, 4049220.903414295),
(273959.664381352, 4049220.903414295)]]
The parts attribute for the window is a list of polygons. In this case the window has only a single part and it is a rectangular polygon with vertices listed clockwise in closed cartographic form.
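Both closure and orientation can be verified directly on that part. A small sketch (the shoelace sign test for clockwise ordering is our illustration, not a PySAL call):

```python
# The single rectangular part of the bounding-box window, as listed above.
part = [(273959.664381352, 4049220.903414295),
        (273959.664381352, 4359604.85977962),
        (972595.9895779632, 4359604.85977962),
        (972595.9895779632, 4049220.903414295),
        (273959.664381352, 4049220.903414295)]

# Closed cartographic form: the first and last vertices coincide.
is_closed = part[0] == part[-1]

# Shoelace sum over consecutive vertex pairs; a negative sign indicates
# clockwise vertex order with y pointing up.
signed2 = sum(x0 * y1 - x1 * y0 for (x0, y0), (x1, y1) in zip(part, part[1:]))

print(is_closed)      # True
print(signed2 < 0)    # True: vertices listed clockwise
```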
## Window Methods
A window has several basic geometric operations that are heavily used in other modules of the pointpats package. Most of this happens under the hood, so the user typically doesn’t see it. However, there are times when direct access to these methods can be handy. Let’s explore.
The window supports basic point containment checks:
pp_va.window.contains_point((623277.82697965798, 4204412.8815969583))
True
This also applies to sequences of points:
pnts = ((-623277.82697965798, 4204412.8815969583),
(623277.82697965798, 4204412.8815969583),
(1000.01, 200.9))
pnts_in = pp_va.window.filter_contained(pnts)
pnts_in
[array([ 623277.82697966, 4204412.88159696])]
## Multi-part Windows
Thus far our window was a simple bounding box. There are many instances in which the relevant containing geometry for a point pattern is more complex. Examples include multi-part polygons and polygons with holes.
Here we construct such a window, one with two parts and one hole.
parts = [[(0.0, 0.0), (0.0, 10.0), (10.0, 10.0), (10.0, 0.0)],
[(11.,11.), (11.,20.), (20.,20.), (20.,11.)]]
holes = [[(3.0,3.0), (6.0, 3.0), (6.0, 6.0), (3.0, 6.0)]]
We will plot this using matplotlib to get a better understanding of the challenges that this type of window presents for statistical analysis of the associated point pattern.
%matplotlib inline
import matplotlib.pyplot as plt
p0 = np.asarray(parts[0])
plt.plot(p0[:,0], p0[:,1])
plt.xlim(-10,20)
t = plt.ylim(-10,20) # silence the output of ylim
Not quite what we wanted: the first part of our multi-part polygon is a ring, but it was not encoded in closed cartographic form:
p0
array([[ 0., 0.],
[ 0., 10.],
[10., 10.],
[10., 0.]])
We can fix this with a helper function from the window module:
from pysal.explore.pointpats.window import to_ccf
print(parts[0])
print(to_ccf(parts[0])) #get closed ring
[(0.0, 0.0), (0.0, 10.0), (10.0, 10.0), (10.0, 0.0)]
[(0.0, 0.0), (0.0, 10.0), (10.0, 10.0), (10.0, 0.0), (0.0, 0.0)]
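The essence of what `to_ccf` does, appending the first vertex when a ring is not already closed, can be sketched in a few lines (the helper name `close_ring` is ours, not part of PySAL):

```python
def close_ring(ring):
    """Return the ring in closed cartographic form (first vertex == last)."""
    ring = list(ring)
    if ring and ring[0] != ring[-1]:
        ring.append(ring[0])
    return ring

open_ring = [(0.0, 0.0), (0.0, 10.0), (10.0, 10.0), (10.0, 0.0)]
print(close_ring(open_ring))
# Already-closed rings pass through unchanged (the operation is idempotent).
print(close_ring(close_ring(open_ring)))
```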
from pysal.explore.pointpats.window import to_ccf
p0 = np.asarray(to_ccf(parts[0]))
plt.plot(p0[:,0], p0[:,1])
plt.xlim(-10,20)
t=plt.ylim(-10,20)
Now we can plot all the rings composing our window: two exterior rings and one hole:
for part in parts:
part = np.asarray(to_ccf(part))
plt.plot(part[:,0], part[:,1], 'b')
for hole in holes:
hole = np.asarray(to_ccf(hole))
plt.plot(hole[:,0], hole[:,1], 'r')
plt.xlim(-10,30)
t = plt.ylim(-10,30)
The red hole is associated with the first exterior ring.
With this visual representation, consider the problem of testing whether or not this multi-part window contains one or more points in a sequence:
pnts = [(12,12), (4,4), (2,2), (25,1), (5,20)]
for pnt in pnts:
plt.plot(pnt[0], pnt[1], 'g.')
for part in parts:
part = np.asarray(to_ccf(part))
plt.plot(part[:,0], part[:,1], 'b')
for hole in holes:
hole = np.asarray(to_ccf(hole))
plt.plot(hole[:,0], hole[:,1], 'r')
plt.xlim(-10,30)
t = plt.ylim(-10,30)
Of the five points two are clearly outside of both of the exterior rings. The three remaining points are each contained in one of the bounding boxes for an exterior ring. However, one of these points is also contained in the hole ring, and thus is not contained in the exterior ring associated with that hole.
We can create a Window object from the parts and holes to demonstrate how to evaluate these containment checks.
from pysal.explore.pointpats import Window
window = Window(parts, holes)
window.parts
[[(0.0, 0.0), (0.0, 10.0), (10.0, 10.0), (10.0, 0.0), (0.0, 0.0)],
[(11.0, 11.0), (11.0, 20.0), (20.0, 20.0), (20.0, 11.0), (11.0, 11.0)]]
window.holes
[[(3.0, 3.0), (3.0, 6.0), (6.0, 6.0), (6.0, 3.0), (3.0, 3.0)]]
window.bbox
[0.0, 0.0, 20.0, 20.0]
window.area
172.0
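The reported area is consistent with the geometry: the first part is a 10 × 10 square carrying a 3 × 3 hole, and the second part is a 9 × 9 square:

```python
# Areas of the window's components, from the vertex coordinates above.
part1 = 10.0 * 10.0   # first exterior ring, (0, 0) to (10, 10)
part2 = 9.0 * 9.0     # second exterior ring, (11, 11) to (20, 20)
hole = 3.0 * 3.0      # hole, (3, 3) to (6, 6)

# Window area = sum of exterior parts minus the hole.
area = part1 + part2 - hole
print(area)  # 172.0
```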
pnts = [(12,12), (4,4), (2,2), (25,1), (5,20)]
for pnt in pnts:
plt.plot(pnt[0], pnt[1], 'g.') #plot the five points in green
for part in parts:
part = np.asarray(to_ccf(part))
plt.plot(part[:,0], part[:,1], 'b') #plot "parts" in blue
for hole in holes:
hole = np.asarray(to_ccf(hole))
plt.plot(hole[:,0], hole[:,1], 'r') #plot "hole" in red
from pysal.explore.pointpats.window import poly_from_bbox
poly = np.asarray(poly_from_bbox(window.bbox).vertices)
plt.plot(poly[:,0], poly[:,1], 'm-.') #plot the minimum bounding box in magenta
plt.xlim(-10,30)
t = plt.ylim(-10,30)
Here we have extended the figure to include the bounding box for the multi-part window (in magenta). Now we can call the filter_contained method of the window on the point sequence:
pin = window.filter_contained(pnts)
pin
[array([12, 12]), array([2, 2])]
This was a lot of code just to illustrate that the methods of a window can be used to identify topological relationships between points and the window’s constituent parts. Let’s turn to a less contrived example to see this in action.
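Under the hood, containment against a ring reduces to a point-in-polygon test. A plain even-odd (ray-casting) sketch shows why (4, 4) is excluded once the hole is accounted for; this is our illustration, not PySAL's actual implementation:

```python
def in_ring(pt, ring):
    """Even-odd ray-casting test: is pt strictly inside the closed ring?"""
    x, y = pt
    inside = False
    for (x0, y0), (x1, y1) in zip(ring, ring[1:]):
        # Count crossings of a horizontal ray extending rightward from pt.
        if (y0 > y) != (y1 > y):
            x_cross = x0 + (y - y0) * (x1 - x0) / (y1 - y0)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0.0, 0.0), (0.0, 10.0), (10.0, 10.0), (10.0, 0.0), (0.0, 0.0)]
hole = [(3.0, 3.0), (3.0, 6.0), (6.0, 6.0), (6.0, 3.0), (3.0, 3.0)]

# (4, 4) is inside the square but also inside the hole, so not in the part.
print(in_ring((4, 4), square) and not in_ring((4, 4), hole))   # False
# (2, 2) is inside the square and outside the hole, so it is contained.
print(in_ring((2, 2), square) and not in_ring((2, 2), hole))   # True
```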
Here we will make use of PySAL’s shapely extension to create a multi-part window from the county shapefile for Virginia.
from pysal.lib.cg import shapely_ext
import numpy as np
from pysal.explore.pointpats.window import poly_from_bbox, as_window, Window
import pysal.lib as ps
%matplotlib inline
import matplotlib.pyplot as plt
va = ps.io.open(ps.examples.get_path("vautm17n.shp")) #open "vautm17n" polygon shapefile
polys = [shp for shp in va]
vapnts = ps.io.open(ps.examples.get_path("vautm17n_points.shp")) #open "vautm17n_points" point shapefile
points = [shp for shp in vapnts]
print(len(polys))
136
The county shapefile vautm17n.shp has 136 shapes of the polygon type. Some of these are composed of multiple rings and holes, reflecting the interesting history of political boundaries in that state. Fortunately, with our window class we can handle these. We will come back to this shortly.
First we are going to build up a realistic window for our point pattern based on a cascaded union made possible via Shapely through the PySAL shapely extension.
cu = shapely_ext.cascaded_union(polys)
This creates a PySAL Polygon:
type(cu)
pysal.lib.cg.shapes.Polygon
We can construct a Window from this polygon instance using the helper function as_window:
w = as_window(cu)
w.holes
[[]]
len(w.parts)
3
The window has three parts consisting of the union of mainland counties and two “island” parts associated with Accomack and Northampton counties and has no holes.
Since this a window, we can access its properties:
w.bbox
[260694.99205079858, 4044845.4484747574, 1005496.0048517315, 4370839.043748417]
w.centroid
(689097.7340935213, 4155195.0497352206)
w.contains_point(w.centroid)
True
So the centroid for our new window is contained by the window. Such a result is not guaranteed as the geometry of the window could be complex such that the centroid falls outside of the window.
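To see how a centroid can land outside its own polygon, consider a C-shaped ring: its centroid falls in the notch. A self-contained sketch using the standard shoelace centroid formula (our illustration, not PySAL code):

```python
# A "C"-shaped ring in closed cartographic form, listed counterclockwise.
c_shape = [(0, 0), (5, 0), (5, 1), (1, 1), (1, 4), (5, 4), (5, 5), (0, 5), (0, 0)]

# Shoelace centroid of a simple polygon.
a2 = cx = cy = 0.0
for (x0, y0), (x1, y1) in zip(c_shape, c_shape[1:]):
    cross = x0 * y1 - x1 * y0
    a2 += cross                 # accumulates twice the signed area
    cx += (x0 + x1) * cross
    cy += (y0 + y1) * cross
cx /= 3 * a2
cy /= 3 * a2

# The notch of the C spans 1 < x < 5 and 1 < y < 4; the centroid lands
# inside the notch, i.e. outside the polygon itself.
print((cx, cy))
print(1 < cx < 5 and 1 < cy < 4)  # True
```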
Let’s continue on with a more interesting query. Since we know the window centroid is contained in the Window, we can find which individual county contains the centroid.
Our strategy is a simple one to illustrate the useful nature of the Window. We will create a sequence of Windows, one for each county and use them to carry out a containment test.
#create a window for each of the individual counties in the state
windows = [as_window(county) for county in polys]
#check each county for containment of the window's centroid
cent_poly = [ (i, county) for i,county in enumerate(windows) if county.contains_point(w.centroid)]
cent_poly
[(67, <pointpats.window.Window at 0x11e37c9e8>)]
i, cent_poly = cent_poly[0]
cent_poly.bbox
[674997.5183093206, 4119217.2472937624, 713300.2226730094, 4159075.43995212]
What we did here was create a window for each of the individual counties in the state. With these in hand we checked each one for containment of the window’s centroid. The result is that the window (county) with index 67 is the only one that contains the centroid point.
The point of this exercise is not to use an inefficient brute force exhaustive search to find this county. There are more efficient spatial indices in PySAL that we could use for such a query. Rather, we wanted to explicitly check each window to ensure that only one contained the centroid.
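A cheap middle ground between brute force and a full spatial index is to reject candidates on their bounding boxes first, since a point outside a window's bbox cannot be contained by the window. A sketch with plain lists (the helper name `in_bbox` is ours):

```python
def in_bbox(pt, bbox):
    """bbox uses the [left, bottom, right, top] ordering of Window.bbox."""
    x, y = pt
    left, bottom, right, top = bbox
    return left <= x <= right and bottom <= y <= top

# Hypothetical bounding boxes for three candidate windows.
bboxes = [[0, 0, 10, 10], [11, 11, 20, 20], [30, 0, 40, 10]]
pt = (12, 12)

# Only candidates passing the cheap bbox test need the full containment check.
candidates = [i for i, b in enumerate(bboxes) if in_bbox(pt, b)]
print(candidates)  # [1]
```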
As we will see in elsewhere in this series of notebooks, this type of decomposition can support highly flexible types of spatial analysis.
## Windows and point pattern intensity revisited
Returning to the central use of Windows, we saw in the introductory notebook that the area of the Window is used to form the estimate of intensity for the point pattern:
f = ps.examples.get_path('vautm17n_points.shp') #open "vautm17n_points" point shapefile
fo = ps.io.open(f)
pnts = np.asarray([pnt for pnt in fo])
fo.close()
pp_va = PointPattern(pnts)
pp_va.summary()
Point Pattern
200 points
Bounding rectangle [(273959.664381352,4049220.903414295), (972595.9895779632,4359604.85977962)]
Area of window: 216845506675.0557
Intensity estimate for window: 9.223156295311261e-10
x y
0 865322.486181 4.150317e+06
1 774479.213103 4.258993e+06
2 308048.692232 4.054700e+06
3 670711.529980 4.258864e+06
4 666254.475614 4.256514e+06
Here the default is to form the minimum bounding rectangle and use that as the window for the point pattern and, in turn, to implement the intensity estimation.
We can override the default by passing a window object in to the constructor for the point pattern. Here we use the window that was formed from the county cascaded union above:
pp_va_union = PointPattern(pnts, window=w)
pp_va_union.summary()
Point Pattern
200 points
Bounding rectangle [(273959.664381352,4049220.903414295), (972595.9895779632,4359604.85977962)]
Area of window: 103195696155.68987
Intensity estimate for window: 1.9380653210407425e-09
x y
0 865322.486181 4.150317e+06
1 774479.213103 4.258993e+06
2 308048.692232 4.054700e+06
3 670711.529980 4.258864e+06
4 666254.475614 4.256514e+06
Here, the window is redefined, so the window-related attributes Area of window and Intensity estimate for window change. However, the Bounding rectangle remains unchanged, since it is derived from the points themselves rather than from the window.
Close examination of the summary reports reveals that while the bounding rectangles for the two point pattern instances are identical (as they should be), the areas of the windows are substantially different:
pp_va.window.area / pp_va_union.window.area
2.1013037825521717
as are the intensity estimates:
pp_va.lambda_window / pp_va_union.lambda_window
0.47589501732368955
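Because both patterns hold the same 200 points, the intensity ratio is simply the inverse of the area ratio (λ = n / area). A quick check with the reported areas:

```python
n = 200
area_bbox = 216845506675.0557    # minimum bounding rectangle window
area_union = 103195696155.68987  # cascaded-union window

# Intensity is points per unit window area.
lam_bbox = n / area_bbox
lam_union = n / area_union

area_ratio = area_bbox / area_union
lam_ratio = lam_bbox / lam_union

print(area_ratio)  # ~2.1013
print(lam_ratio)   # ~0.4759, the reciprocal of the area ratio
```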
Mw5`(Ϟ[|`ȊJ`P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@UB?i AxEq_Nԭ0^iȅ ȿ(f2?i؇칪jSZ-Ʒ&e~I"ΪXڱiQ6ˀ_pq`ht ހ&KgJ-=e Jx\$ Qi"xf5|<_2}/@@ w(Duς"~vp1@im p6Up3ͱ!"i\$b~f%:PTPw6"XqA>dJF6 \$ƿ5X.r\)M\SoնͥOc7閎smy\$`^YZ*-53xޯ0sj5-Z;_lƒ&3dᕩC(Vľj?|9(E Vހ%[z;@c@Ԕ]S+^4` ,k I\oEg.ޠYzuZEHΓ(ک0#v@2S=B½haJMyUb3u:Q[_v劻M=I=*O][#\ V8dq&"vy.AelK/«BTVvݻ\$}O\$@[Z^>Y 13Hp/ CT/.1sGUQY2ڞ*wr?9ExlKM=t둀-*r@>ģMd_J_/fpWdYw;d{@A2i\$NrM\$ DP@ڀ h^Sr3Iu{/(W !x(%A'g`r0H0`~e|7[>Լ1r~ :ɘqq\$MX"S{,9*7}]9<W 'O.8O-MFʍFC) @P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@xWk?-Tj(iͩ湖MtٮeI\$jذ K͐>-m~%߈NF~0eY?xƾY_!o⏌_kuRoB6b^F-eNgX\ P 09:\{q@-JP-H-gk{Alb@Ry'{Pygm/@ %h|Q-(͠ ]t-ƣB9egiZ=&ъ:YN;O4[*( dr( z4^};d@MF+X4K+.p OZ e)FԷORAUt|JKzOsۡ9@*W_q5ǟ.qΖi68CJ|.mΏ+K?}I'Ik&٣]F@#!K ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( (\}%ʀP oWϊtKSRl\¤ʡ8!0_ ?VMl%iuoG,Oݠd1NX6CƏ|zl~*-E<[QhCG\$r*m2vP@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P^ wt}[Sx& F?-P2Jc9_8WI%y~&j;&tإߟt7s ʉvˆo!@ǟߴ ~&-KI34{( RጝX樯t`h\q@Y%h3wΠΠΠ|<<<<z_ʑ\$Z6<1ti4z1}f~kx#vK Jr}O%N-9h2 q 4Mj^'RYߑpe.*ܥ>f7 >ҮҀ,Gv\$0êPOP>hA&>hQGրZbMj=&_POMhwAAc84j.\$HemЍ! hh?7n-4/[zgf]ڲ 8Xdb#=A(;pib)rwRV~FcLXGckaK#ֻZ^rPeג^WFwÂ+XNHa72 oeoUNܯ"(Ob]9^JzOsִ=Bup]ޯam2(`)'WyGPrTf[dL?&vO?D. 3h_+V-ޤl1l5YE 9i,WeNT dCW_'9 _a}tØ:@o"On+Y|IsNҵ=/B?3 >Wi9pIiÏ@ ?CKO-jU`}ٗQ̱90KKca{>dLJn?:}*N#]:i,lbh22A8=4 i:x;.WwZoƍ =]ɑJ!71(4e% U#[xu=wX"]X>1ހ5;\k2M42+>61@o<~G6mTg! 0y2@I#h@tkFy9-Ϯ;PxSÇAA=GR8u{3 _.i:fB\$c8Pj ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( (>/~ =[I| jAGI,qGhycf/E kFPc]GgSG뱋2[XbS.h}O%NM=i~Mkݚ%PjIi%Nו8 o_؇5XO%\+'U)fP TVʟ`0H@;_v7Dɿ}U7N0˃K?4I ZOu[+K]\$R_] dqT̀pK/o sNJ>_4|>!A"ċVX_!-Ɖ Υ3Kp9\H \$vZ ?m_'Cǟ 4/AⷻҖhbeԟ:mqh@PFF""6霮P&JN.v#BGCX*z˰932.8]`Xc EV%RM)# @/E`|gMᴚdb o1p{iF'|\$G)@P@P@P@Wե ?^SnoV\$֫2.7**TĿ\$?i<)S+3Pee[H]Qv *7xo\>fR[eQ/2jC(61Id!q! 
|aj4GZ&ho|vr8%nh]\$(>]ꉧ\]H6!FN?*}[f%-1*(YU#;W|_諑7y5{lx2bilIR0+lK54O; x\$A`@O {g\\g|Hpq塱NX>`fF<Ҁr0Q2wDgfaP2|e@~lg3@:#h?f1H?v?*h ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( i6:{jvV{ ]YIJGYWa4\2H`T_ހ;(xZ:7IdF!S \ZƳc+eTp\$%Yzr?݀%?3ow#Lv88Ve 2ȱ3%4wc2):cqBGsme.M}#֧_ۣڍkWE_ٚ~|@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@WgT|kz}%.5tMVax9<?'fs|\$?4M@VB; k&!9'A4R (no-> OG4{,[[\H8\$UUgeVe/~SÞOj{&|Aq[Na58!<{UP@P@P@P@៵7OګV:gRM3ŶoY ԷX7Hf؍'~Ξ3^<Ю4i"Ԗ76Rd%]FHxWU`Tyz9PBB6/:8VP u6aip;zKhn,*@McmƷݳ͝cj"rec8wu4WvTgn=zM`[G ?9%#Ҁ)B~ǧB_rw;ch9 }~TlPs!:1V·^NA ㄍ8PDo!\è!lvbϐrZ+|7 ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( (*jM^隝Vwq,Wс A ?gnſtC? ۉm{VDf ;} wy@>.W[A4dj77:u3,kYrݎ2q4( 2H?]EՑI<^=Ԯ8a0^1rl;v?ր4t9a(Eg?WJ윺v?uҀ9B>Q4&X|źiN~gߌǒCMqq28H\L=@'\^0("S2Np'ŚO-(qzw| *xSH4fBL1D[x0}+@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@Ǐ%OOsZ-mrHdU yLhG" L6s]h\$Ѵh[~ѧjso6pUʬ28*Q@}|" A[#IJfDb *R1P~/Qm!!f AS3<oEݸ1#a"PĎ8hw@}}#')~Ҁ:OC!@mu#G ѝoh ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ߲2]jrڬw2G堒I]B`lg+*ῊEX_KgXc x&㛻tSda|%|'Xx~d3h8"Լ N3G"Up]U1 |rUFO=:X0@tEqAӧ(I.\$&J\$*~od)"n]2HziдA11v( kRڍn`(s "*|'\~!x˺>(|'٭h<+2 ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( ( (<մ?톡4]q\$b{fw;Q;`@>/_.j Yxk]in-5K5TQfCJ3*+edc?s!>.Du\$= R ጝX5Uq@^Jt1KyPLB~o+) YOO&pP,XԿ% }j|W7Pѵhdfogt\0A sCɳL@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@yٯwǫi#\<1 BhQt5;ae,1)ʗ>ҧx_ouf) {RdAv[q#v3.6T[<^o~1| |Nүn*lX#bY>b+@9A&|i}*d_֦m\wCj"UW 
\${Ē#0>a(}i@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@P@ endstream endobj 26 0 obj << /Filter /FlateDecode /Length 13587 >> stream xKd;rj+s~l=i'@0NfD feuWal=:'+oOorϷP};Bώ QGxRZ;[.U B !ȄjaBDezāKrKrHuL*>z8!.!@& s2;k s2 QereFǭp !bqqxɃ8)8A^B^AfR )ܖ )\BV)zV)ۢRXRXRX)\BBB B PF*5k*5k*5ke#.FȣZwFBCF[Fu#!Rli艩l,Fun)FefI'Rʘ7%C'8힠Fb@ǀ>겟mHZNIfWe,)bubfk: Άv@mmI2Kf< -:ڝ%RG{DՁ h'cڑՏ n^h'x̪) h;) Δ@/Xylc[s%vqrA{8Аdd 8&X '6.Hkor`j![8P عM(oۄj![ll)ZFhFhcʥJK&o_-d>Zp&]PLqqi*6p^]F.Fz iF~Q(m7*ՍF Fq#ե1Fu#Յ%t#F%EōK9@ЅFЅ&]hX@Kf.|7R]8u8]JHuᔨ%slu.oӁgղy#Q֙cûqh5A{ YFu-o8kZDOĨʺbTFYGj܈u!j.u#E[j bF bFi#)!F ҥuKm7]=n6Z&Qۨm7 ōFyQݨm\$GrkWI"9Q(n6FmnoML'7JF%6ꖂ{B7 ō;)䍊s)ԍF܄v%oj|a#R6ʦIoOeJѶZIzN%%&qpR2mm^r~mR5b\$ڤnzo8_S JF}R6}FOѓTj1B/_C= IQ4^ %h&F) `ympls(opsHugFKBu4cNQoͤWmT7*Fqh%l%Lҗhdn 
K-J3]Jr*ubwpIf2TicȬm4`c|qvbzM;D(gZKဦ+%IɂіfX"AxK1#;`Р 65 4o!HAnKuu9֤nvD^c}JhW䌂V)āZiOkٵدE(MNS}K9⯉qI牲jy&T8i%NuU&oL9S4 TwFyrV{6L1#Ϭ?PxMM(MiZ7^kMENՒ;h/s)hB4z_?+wG'4Iһ\MSQNQh8>IZ>ԮM}uWkэ5 'ʗҦ UͻULZs^)ZUSnBMnH2Y23yi!MWӴʀf~e+#GSTuk2IL;`.sʉ.ݩq9ҟv} =')i%,gN,J"k"KiQ٨n6hŐhӥm4%FeQHtBm7 .6.T7j.vmi3iCK6tiVfL6dКn,{ͯ~{}&mV/iM XDe ` U hicY ӈiVY [pZزZ(dae-8[hb![Hujp=擥|._\$Xj'1PEKi(mf!S>Q%XXNWoamN f,Fgqк JX|s7U"߲]OLII~faV"E19.3 9ZV<%,dWgS##}zHTVE*ĸ^Je/WHp@*/^`u(r)[KR#/X\䠜4Y[8tEct*HJg| `ҲtCw0bjWuܢs>R Xy|T.Kk(vFRda-H]fI2kڮ&ӯf2e9Q%w \#YP GdֈW[(a]?C4.IsoH&Eʙ(^%Je)2i4e| r ȔIgOȤ6ģnL4?}@u^ʋyAcZ Z~o,sᡴ=gęӳ,\1\tL?ygQgXG9))ľDԭjƙ)oURcфPX)עxT݂f iANu,!B JD*K}hR]8nJӳ|z+.|qȄ/:3`GH\$&1C5k ؗTK@F@-pᶇ .uj@4ۍ,yg!gtε a/ Tz6P rFQ&b/^2zKDf@0Ez!<=O*0 R@}u8y<j([P^q0u8NrݐƉid\$i(+'V\${ -Ugi0K@c N ` cKO j@vd>`E`-3d@f ng /Y@(ҲLqn`8<+ D#cT {̛7uD\$NZ,ngfk1ͨn~S[1o*J3g>ɼIF}[Gΰ}`*[F )RrOw ĽO|38Kb (ܒbJ ,R1.\kK'3-Lv. sS.uqwq y#f ue&^5Z6 &RhD1ȉΗBd/],ܩ@`hXCP'V o `BݼuL}c^ҠS/IB1@I냝/G*UyL%ꍖ[ c!U}NUS#g@Iנ`}:7}sR/ jIy7%Sgʺ~J]i-ZP7-!PҚ5߄W1jipxյ"Em<( \$'XphI٩l>Pg)|W?e-A),Nbi.dYAþ1Ǥ/SІqi6<&(v-i}-tmu[sٶ+yV6zGM`LikZΦX)VAԫѪ_@le'3JeqJ"P,tZk)bp2!0!L!hP"3Nκ)f|?4> ̓늳1R?TT*(ٲl ӁP-U^CAծ^RTH1 [8K `&Q f@<t:ɐ 5ahS]0d(2VppP@aPK6{1(5,ipl %T@qݫȤjғ&FPi33lZ9ϻ ï@5lҵ@Fu#1mA푍)z0:%깟;zTtM]b-r6;%`6}\$LhZ䩮@ !eFU8c۶ށ{pqkNdU2dQA;o-l s4hr)C\g a!#@J\r O7 aT!#)U+&p{ ta0aaj@s!h3h3h3h3hfffff⢸,fa!aPm4AJVix;4 Ox0cPC%3&bț4'h7C0W\@ƈt2"븟IRǽqrId|%hybV%=zlJ{@pd4r^ؤ2cVTq4jlEb%;#9%!0<)bUTdͥeYS()=\$ʭN\ =2%+h[|s"/ brzCYH#EK-K7Pd s&KmrG@FTyΉ". ( y#@LsZPJYPQ[u @Kj2Ќ(xCZA5 YV]Xvč.^pHY5IeL7Yt:HuWW9hH, . 
9+>/R_Ua=vьB!4&[1TXIJ"-ZVf/@?,]]b6E|i _GH+/S% ym t mR2S@K[G˶u9uIWud @3Sn7>rF.f+y\n<f+9FR2:fVSRc3q&:94/5%ϜUIRk&`)5Ks%[2Q /D쥮٤uO-咬Nlw1̜|=y#wdz\@q7R 2S%ׁ1YRiMD\$&h3 g"&L.KWC 6 P0hCiC0WC 6 P8m(F6r]YuN,fwPLfC C58 lzCFkRy}(8';2mUV4orcRj\$)=>MDrȽ[2 lwk"&Ah !UQ"ݡ[w 6M7I,M6)jA]Ѿzᣋ[@>>p15hqc& <2FpiA(@r;`//\j"R̆Ijj,lIl<bkL+;ڂmAD[P-(h J%ڂmAD[P-(h J%ڂbv8rZ%n%n%n%n%n%ڂmAD[P-(3/VbLו0X%0HhZ z\dwCDjRa[= hK|2PV4ﻁf@NݒLWSsIy,@ ld6e':* (έ' ;Ltϗ^^ &pQ/_-&(de{t6jHRL:o\$N.">P GQ9b0D#w_jUr+pRIB9]Nr1bI]1J2\\$ŐRkp>D^כ1-+BXUKlE ߤ+V;]&n!g@0I~yW@`@J 4u@34\gD2!ѳ t@SO. f \$ =\} 0`)hS~0ePP5ePR55eP#T`2I+C5@\ZOY .1lvW}ʲUxj k ue51p8kq욁m8 36 d D'Fr\$sp։snRg888З:Y9s8Lu\1og3yM2Y.K>:+gϵY ~:T2gN=W9:8qm;~ :qV:NK:Y9g]vɗu}]@;J̅B֕vFuCȣP.ohݟLY)e.bRL!QUZ &!}f d\K6PJD&+~ h!71/uZ2WHz{[ERt>MqGj g.,s@Q88# xUp>*P[1`G&姝wȱ]̱i-/kn.u/L8+ !ޞ;(-*UJnJcK7~.N%oyC3'dL9_ĵ3OtY:)?ϴa}V~} zY%%om߆mǟq6~G1%F~}1T8=p_/0(ק(>/eLS"=Z^脮IX/tiK]eɩU:W5R*!׳*Om љoÖ^h?a\$VTG\$Eq %PKYE3>ê(-%(D"dDт+{8 D)*HND yKl+b͒oY/4I(?)~W~J,CjYwת!\$E02(cNh Rxh15\(D^< fĐ&` \$ { tWM]7DXQ# ]{eZnt@7:grЇ#9hg=ɡDG2\>"uۡgYU#F0qdQ3ÇD";Mep\$DɉMű:p"FH"7CI=tm'?PGc~JLx'>/lR~d K;frn"ߝ_>| pE7?ZF`/*lHM198*_7]`\~⾺TzZ_ LG/ΤYS}\G3]eVY .RεgٻFxLƤQ˫20/FyNLdFL9_.\$GDY\irYR e(K98+/מ\n} qVx'=þ4ƢThkrgsb 8KEf]]&y@C{^dqZO|t1*r3bVR82s,R <]ӼoTmI twIs'~Gp~E&xʤ8 Ferm21:J? -Q F7Q>ұytcGtL̴qAVot;PV7CGq9+_VjEN.52F} /ݹN1\$3w1%GB*~IŇR뻓O&"'E-%}qw>y?ws?dź>Xv7 F7|-Vd}D [#0}ԯnQ¾zKxozx+ <ynrW!8[Y \$w 7zX>h}'UX_3c\$dS3&EdgZ\|eߗFiw~y+4%ķ1uj%>Lp%YA} ?iaL9J۞sE>FKrYři{3J!i3QRHpV&Ag#^L 2yP\$+f3Z|2Yȵ=DEҲ_L 4 `>[gENљ|it{CVXS̢\V)TtL^2 Og:psVj莕/+Wp%QlĻє[ѶGW(ϗy \/~OJcT1d+ӢmסHW6zTx782+c H͈DD>-92-wC&/!{}+'螪f?WeϵޝV&#;OJ}ŎL4Sҹ`Uz{|x78Hw}IT'}I|Vf^|kyky_2u|e-'X[w9#X7fWC݇|^V`S %_Oި/ڈ|\_Ŗh*Z Mz~VU*0][NaE;`GOӟf|zȟ"z {ȼ({._"D|rwxeW:~1>5?6܍flZgE&5KY3 ƺnOر|nɒH"= 0ay*i9 38Ou?! 
;=xv r= lO u (|&Q1G.3%['Vk^r{u: */FkLuf r?SLƾo).뒯*m`go>k`ggO l|N-wHq(b;->w!fJ,×}0= I}mDrP"#t(mʑ+r bҪ%b֗ٱ#sB[mI&V'ra|MYlW8 L3هt#rxvï8;8-T,W}r^KsA+zE\$kʣum k=2rlnk.v4'tjv]5o\1&7eUjKvx9fL^.#&dNۂ朾UdS&^Vwz4Zq%چIfy 7*DU(wKl_ؒZC|\C)BNF!h[c84%ȦxRHΤ``KeM֚5W|u6fnNJb_zOZkcS1k_?^ÓMNJkvqvǝm,*?FF5\$Nj\$M}v f x[7ȹiV<@{rY7NΫd8bF!巯ae`d%motDCRxX{>6m_꺗Y֖UX/DӮͱ *y~ny }!aj};J5Yř\g`gShΊf`E 0b g0?~'&Gg9|}8;kgyTenMNH~݇!%gzO3o ĻŲS`SĻ-ZJXkDdbHKZd܋g^B@,D[š\: E)Bg:@#wlL%`k7lOmUʇ}%n^nƝT [THCŢFW0s5uAFͅʓل58^ bҹ~5cG, ֗h2>@xUӪD8pVQhPC嫛g@YNhiI#Ж NS(x`(f"Z_n ҪI& "\$o+g%(HIטKT)4UR4 JJotOfzԨE<ގa_6NH7≬O[!@Q\ hK\$fPDUrb);s W9m54r.Et_;dRt&)BZ\$%>csFl2uRNMB讛лg[MhֽK롴[1WlǗm Rit+LjX]J;+#1^IַTo#F奧Ƈ2ߚ;~d.p7xF7>_Ծ>Q[1 I]\΅@3 P3Qy4*,ʟ"62/ dKft"J1Z}&SW~Mr }Ւo~W{-Ɠ~(m>8nL?N?Ͽ[Me endstream endobj 27 0 obj << /Contents 26 0 R /MediaBox [0 0 841.889771 595.275574] /Parent 1 0 R /Resources <> /Shading <<>> /Pattern <<>> /XObject <> /ColorSpace <> /ExtGState <> >> /Type /Page >> endobj 1 0 obj << /Count 1 /Kids [27 0 R ] /Type /Pages >> endobj 31 0 obj << /Filter /FlateDecode /Length 12 >> stream x endstream endobj 28 0 obj << /Ascent 859 /CIDSet 31 0 R /CapHeight 500 /Descent -140 /Flags 32 /FontBBox [-7 -144 1000 859] /FontFile2 10 0 R /FontName /FPFOSQ+SimSun /ItalicAngle 0 /StemV 80 /Type /FontDescriptor >> endobj 8 0 obj << /BaseFont /FPFOSQ+SimSun /DescendantFonts [30 0 R] /Encoding /Identity-H /Subtype /Type0 /ToUnicode 29 0 R /Type /Font >> endobj 30 0 obj << /BaseFont /FPFOSQ+SimSun /CIDSystemInfo <> /CIDToGIDMap /Identity /FontDescriptor 28 0 R /Subtype /CIDFontType2 /Type /Font /W [0 0 1000 1 19 500 20 20 1000 21 27 500 ] >> endobj 29 0 obj << /Filter /FlateDecode /Length 359 >> stream x]Mo0 C Z !u8Ch0(~!vj9=7q/n'?L8AߌB u&j ??|A?ǡRh*}EȮ]2&Q?!(P:a15㚒#Ɣ1sm72n vGV>0VG'TQ1RE_\$ĘEފ*"AILIRb-xb˞Ŭ-\$\\$μktwu3ΉP7vX}~p:ê endstream endobj 10 0 obj << /Filter /FlateDecode /Length 5783 >> stream xY{tyw3;ٙٗ+ɲ-C`Cb l6BmŲllpnNMpr NHO0Ʋ*ആ&)>qȱcHm*wgW &=h;B!\$}F{6y=ԜD| !g-Aزmt(jP93tҾ "P^@{_# =mݻn[C7Q(_TϮ%Q/nڼ-zBI2^o'@0DD =fl 
c+ԸZlnidoj||F" FZRO C%(;ڧr-M0^'+9zD>UZVϥi6Ql1_H??}ߪ8~*Sȏ(NM`y0tCW*S /Y> rN2r,fDZOoDCAtΣğG8b '.g3piba,V!2Rb81CId:9-#6qU jۦ6ô[q0AMX'G %4>O>-迣r lH'U R5nE%9keer.N eQ*dc~KԃhEև@+)|k LU [[#x} @/rb'bqw GJD~?/J`&Ȇ쳉x3c8Ʊ>9ɩ >5>U6[ρD\<ռ`aG@ R7niwBa3p~nhOEfz%#G4Nhg!Boז? V <"= 6;Tݛ`HPT fN)9[8qbv>~zl* j\زJU\$)saA8Að|!#/ߗj)o3,3T _:OS?O S_K=>bDܐpY>ǚQ-ϲ]KGu:LX쒋 u!Bx}{FߍhBt7\@<ҎzZ#.a;gՙ<7vdz& i(7BSqL3tc\$:,Er8LgY<ɒ .]Myeiբ7=9˜`9sz_Jcy/fb`1,U[Z8Jr.qӺ4Z_,LE6<5[31lSy<ϛ d\$*MCwN EʢP]A;";QPIdR-"ǰ`˙ɐvxҳc;ο_f'S ca^.:pQA2HC*16E>7~>]dJL籙W~U3t=hɚTuut*Ke=A O@)1tpRd܆\$U1_aQ6X|ÆZ0v] MƥD̻\]jsV6nu5QFr|H^MJ\~ v=|*7bb_J2yꢘ=VPQC,aƏq pryt .>]WKiii;Y86u8+DpO:IO`.+P1{_Ec8/+/(040UM`\$G42h(3,MUb]\$Yd\${}џcO/_|ַI߀o̅|sALH< &vdONM,xsPugZlN\$I\$S,ed\&SJL鿎t!cWcJnyΞ~r'vUUР4_hF9C^˚BsͬU/znAMC[\KڡwkM`2If-#-x#VD*: B!hx?⊉`\'Iؘb(̅R=\$HD)\$otg{)"1{\$UYQ4OkԐk%E/h,c8=I4biNT\O)>4| gˉ4 ̀Wh MS&0 |)<9Xa7JﭺLե*f[3CP#=K Pe9]8FqeVm.HװP AϠ+[_S_gѿC}x8#ァ9rS/|Y3[\t%CRTWܥFUaL/ﶧC{YޗOpLxE8#\qFwO0o=l&%Oa{*1}\$`Ez8a,z 0t> >]F.] W [bBL0uYsmGX\$TN}Sz-D_ʧ>?BS#37<^T%jo_" ;N=cq,A\d.c ;E\P 1ˁ-1TW*Mqg*fCaL8t^'p4=7 hAo/ L!UX4`|^#M5 b WP=I= Ps8FxiW,6l0˱QShINjk*PAIKc%BC1 (~KfѨ9qtrqo}hfG-Mv%Y7%TFڅ7P k#u?Խ+^,4 .&p y`ݡu.Աurky"-ZSA>@&a٬&re͜e&[ * ů*BiBtD>e0BݐB*>> C-f5"N綗*.Yy<-(ܼzբ<hDc|@hz^=3fNhn^=֡_hz (*K&_f2XẈWZ@Kyy]2%Z&C{F7oMlXٹ}=#{>ј5߶f6odGDs}iHwo߶M3AmbY_goFOZhn#mF[&Nh=ZIꁶzFCx>ֳ{'ojuØ[4v@;@0.ZFкѦjZ3ú~'ԒY@|:kYFAxzC~Ɉnh|ȓKgOiL_4 zzmdd5ꀷ[2jvwvNo=:2\$棒 #<* 'Ͻ{KI>=IpYq0k<Nz y2/ 5wzz,CZ3Xz*^C;vWI˳PodEsGWIdz?d63 7{< {vbVtmkzlXcu;o;mnu|4G@皏#Y} Y}W]ss)YΎ9|;9o{77 1BCWUgXFCǶCZ1>NGͼn]/Z6mpǽh;gHHJAo\$ixnY:ziX2XcrXAA*be8(z&=d endstream endobj 35 0 obj << /Filter /FlateDecode /Length 9 >> stream x endstream endobj 32 0 obj << /Ascent 859 /CIDSet 35 0 R /CapHeight 500 /Descent -140 /Flags 32 /FontBBox [-7 -144 1000 859] /FontFile2 13 0 R /FontName /FPFOSQ+SimSun /ItalicAngle 0 /StemV 80 /Type /FontDescriptor >> endobj 11 0 obj << /BaseFont /FPFOSQ+SimSun /DescendantFonts [34 0 R] /Encoding /Identity-H /Subtype /Type0 /ToUnicode 33 0 R /Type /Font >> endobj 34 0 
obj << /BaseFont /FPFOSQ+SimSun /CIDSystemInfo <> /CIDToGIDMap /Identity /FontDescriptor 32 0 R /Subtype /CIDFontType2 /Type /Font /W [0 0 1000 1 5 500 ] >> endobj 33 0 obj << /Filter /FlateDecode /Length 250 >> stream x]n0 <ݡ Pv@V8!1,H"i>\$9c큽zQ]H"8jYQ/9 n=N,4 ge{|F IB_C[)0䜃EW1!=ǯKsueJS*HY7 QX* n.Kg~ endstream endobj 13 0 obj << /Filter /FlateDecode /Length 2524 >> stream xX{l[}^_:q|Ď]줉m^ @<&k(Tq7qĎ/xw@T4`<&B-@llӺ H{heM}IM۽|};\(0 ,ćvyh\$ZyUu ">fhqoF >,Q:Jǧp #}#RMԏ&淮}߇w,9~ꯥ"Ϡs^;3~;h(ow H pp;0qx *p^|NWc'Wėdi=T~ٲJ@KZkc l7[䭁!iX+ZkhӚƦA_w_#g¥м6%zmԾ(:_6DI͔%-y9X"ceKaIb+-X_gy^WY[-Fژ0 MƔuO)DjBDq_GDKD\$9^Т0[0QkuYASz,fU<яZqYXkpbt\$Hd6qv0>М==،'zzyu^Xܛe4oQ31Q7Ob\$]~ _;r\.J1<1q :r~TFcy`#?k'76` . XO{t*ɝ=V[0V??zB%U2ѡ KZQ-=Րh\B [#qrCf<<HbqL=w61>1iT״DU-s-_Bs s[5MaX+ík1"_b@jvp8\$b1b~U+cr.I6OB3<oFZUۘC QDO59Lx0=Ξ;UՃ K{R檦]崉v"bEZҽ\$TUD\uQV^!) \$xwɂyѝn v{M1V8f ~ 2fG-ljjxt7/7HJmjjQRlhcF-;ڊm~@ﶀ^j%͈?nc J?~aBMY5{FwBBGҜp;8Ih(JJ('"Ψ:#sL&}=gʎ|zĽ0F\\ZRR4]Wl7]jwԹ]uUSUUR.E .[tNtItPGS\YZ]Izv wl|:b+exi' g;kr h;0 +,|beK0Ө1[cw44nYLYeaX[Xd]\~6OOZS@ YX4 H^yHƖL<_~U0-<档e| ZÍ [_` feZ:]27+(@aȻ>\ Cz=IR8V@1LU4د(ޫ`NfF\$܄T&pj{7"7 C8:jƱ7gІj.|6O"Z݇\$Nc_2Z)C&?oΗj\$qtdIRތmaڠި9[8< df|C8#ck9LSz43:dřY5| 4;* և*kE4j(UqXF4r,E_iҴ_u9{sf}V%\$a*m92D*o^sSU^C%(Y56s9s#8Gz͌J0(SĈhfyQL [om ';}01=|v/S}F_h?R#73}bƾJL,cT|xboqCZWƓ*8j𬮃zEΧĿ&,^G{q+q{,l91~/z1@Ls o^hjBWhĎof P߀Bۑ74Y0 L7l+a6uӆ""DQY"Jd(RF@I_GZA endstream endobj 39 0 obj << /Filter /FlateDecode /Length 10 >> stream x endstream endobj 36 0 obj << /Ascent 859 /CIDSet 39 0 R /CapHeight 500 /Descent -140 /Flags 32 /FontBBox [-7 -144 1000 859] /FontFile2 16 0 R /FontName /FPFOSQ+SimSun /ItalicAngle 0 /StemV 80 /Type /FontDescriptor >> endobj 14 0 obj << /BaseFont /FPFOSQ+SimSun /DescendantFonts [38 0 R] /Encoding /Identity-H /Subtype /Type0 /ToUnicode 37 0 R /Type /Font >> endobj 38 0 obj << /BaseFont /FPFOSQ+SimSun /CIDSystemInfo <> /CIDToGIDMap /Identity /FontDescriptor 36 0 R /Subtype /CIDFontType2 /Type /Font /W [0 0 1000 1 10 500 ] >> endobj 37 0 obj << /Filter /FlateDecode 
/Length 274 >> stream x]n y CIvHQI9쏖8C~H5g?rmZ%wyY,Gq !0<|b22;Z5h*ᓳ {|fZFصnfN5f/̼ ~]?%A(b Cq-p6ejD*O ճ?5dFe?of" JUbT\$> stream xWolwoowoo>{v]8 uKHB"vcHDƜ/.*Uj P_* E\$@Xд %T *j7{kD~k<3yy3T-d@:ݟ}Kʶ R bU?4vr /!0WזW\$?zo#Wc9;-:pPc@x>31f0U`^?~l`Y8ޛgsM/jO2Ix p*VsEڢ쳺̳̳Y+INx S]PSQaEzI:cfΜ6gLތ+@jj2AҐn͛lRYBUM-[6o Ei;UDʏljn'jkgMp*x;9R-P~~ 3UnhR&Ƶp9.'qa9ܺE rN)A,,PF<᠂xSJ@F"hPr8>pfF#Z<}eXJB;BY1mk_Y,J FBh1U|P J-' 2n^M5-+S81QR.Ll-C?* uT*('\$eALGDGG?N u R1;q3Oc| R!2gLg`{]ʪO / 3 !\$Vij8G;, ңԯE~ 5n'VhY7ȔY(%"M|^ـF]P8 t "6fqW JU>b=i>o.KO+G[k!^ (^ KRBس :҉C,~§߅e{eٲ ,\$a,!SXzAGW(i(V ݲ*pP#BU\.+<,rh8Xj 4e=)ݷR?26_4ek.bܳsg*DtPx8g)9"a{mKնd1Ʒ*9jSEex]KZS*R@wpv;0[)#4L\p`ߋQ\}Y/؋%-YP<M44d{j&fSU-ucg|(!4F h608`f@H!e >/"@||{y3 Ҏ%F3m5&Rp*Q+Vbi"W2de9S_,d^9<eY:<ƈWWdIfJgM0E1I<uW븡{r\8KJX61f/Ǐ !.+X6iDRf\$FS\Xmk`\D־~aRBuY*PH56je}){tQx],]MT8~0lHQJe"&}nDaJc;SuUϥ:p/w,J8̰d*ٙ|&'Th Tl:ĢQƻ@5B JĎEXAti|teqXZZ2VcқIoB(V^nki.XM#5עiU4ɆOc">,SH(GM2@Op;{'Y`_GȌ; ͘{=](zG}H B9\N`?mC K` =ЋCθo;h@jXqnjvb;q ȇ`s~6إ02ɺ|)`QcNuFWtvKg0c}9_/81~goӱ\mܱh-Ag :T-Öw7=`38{̱,EY!X__"Q\$,cz9Ǧ=^ q"矼u3TzSn'\/#qėPsި[0qX4D <БtCDa1&QL{A9z+ 'B_xܿ_o9c#L;9Yfsю#]Ý/X7:&9ape×2 ެL|@S㈯V9ƗC?x F})hx]wxW81yghqZtچo8gwi,݈wFDLWKtU[kȇ 3{k;,Y{ٌ gݳDӏUYQP{!EYfe(R,KYeInR_:lM endstream endobj 43 0 obj << /Filter /FlateDecode /Length 9 >> stream x endstream endobj 40 0 obj << /Ascent 859 /CIDSet 43 0 R /CapHeight 500 /Descent -140 /Flags 32 /FontBBox [-7 -144 1000 859] /FontFile2 19 0 R /FontName /FPFOSQ+SimSun /ItalicAngle 0 /StemV 80 /Type /FontDescriptor >> endobj 17 0 obj << /BaseFont /FPFOSQ+SimSun /DescendantFonts [42 0 R] /Encoding /Identity-H /Subtype /Type0 /ToUnicode 41 0 R /Type /Font >> endobj 42 0 obj << /BaseFont /FPFOSQ+SimSun /CIDSystemInfo <> /CIDToGIDMap /Identity /FontDescriptor 40 0 R /Subtype /CIDFontType2 /Type /Font /W [0 0 1000 1 5 500 ] >> endobj 41 0 obj << /Filter /FlateDecode /Length 247 >> stream x]Pn0{L47T%!iU0B,ŹƆl>FscMAN7VNn&`,dE ڨQze 86wPUⓋSv/u46p1u {W9"oCZG~ endstream endobj 19 0 obj << /Filter /FlateDecode /Length 1567 >> stream 
xW]lSU{oo&Q268c@5nsC`)]۵~k7Sc11`H|10!1D >a⃣]sm9>%FDfʐԹ5|G._e",xA0~HGW\$Hx@W@0{ӏ|r* ǼmxƏ } h{bf3IvWS@dW]@?}\$T"4]b@ʜq-(rx.v+_.j1u*bs)nYmbTfY ̲എX%cǴoujMLLP+6:ڻ:;:;m~KyZVZjUW5U_I뮝F& .Uz@,5=0I/Y78)oWM&iFfRY8qܦ"l2Ki&o Kb b&q]"yzm_3]2+bJSJve 22Wir\$e[kZ s"ԖAM ̻Edž\ԩcofQX u,ѱ/`hIjH鑎KlP5̱O;rM rx p1Vd9ӉP 8<E4Gv:@1S) :LXC8n.%WlNNh 4F 6GA%f k;uIt\ 3{4 @hQ?.ƘBl?\$ {I ^ O"_nԇ1.%m:⢶Bƭ&G lZ9<qQ{EY5hEQX3-5@#I=}GAϼ뤘Y^enWlTZ?[F\-I/.#j3+*ê>WU@Eo諼k=* bFb'L5e:IT+ wLPcc7jle|%hLu~x*6 8F;S:غȣE~6Cω>H;g:?/|[Чh2VqfP)>"7˲Vʡb-'č~>=^GuD9qwl"ެD ͯw }x \X)<5R|7`꣙A%:m]qa-z3Lu.2##E ) NJyR⤁' 88Id)H7 endstream endobj 47 0 obj << /Filter /FlateDecode /Length 10 >> stream x endstream endobj 44 0 obj << /Ascent 859 /CIDSet 47 0 R /CapHeight 500 /Descent -140 /Flags 32 /FontBBox [-7 -144 1000 859] /FontFile2 22 0 R /FontName /FPFOSQ+SimSun /ItalicAngle 0 /StemV 80 /Type /FontDescriptor >> endobj 20 0 obj << /BaseFont /FPFOSQ+SimSun /DescendantFonts [46 0 R] /Encoding /Identity-H /Subtype /Type0 /ToUnicode 45 0 R /Type /Font >> endobj 46 0 obj << /BaseFont /FPFOSQ+SimSun /CIDSystemInfo <> /CIDToGIDMap /Identity /FontDescriptor 44 0 R /Subtype /CIDFontType2 /Type /Font /W [0 0 1000 1 13 500 ] >> endobj 45 0 obj << /Filter /FlateDecode /Length 291 >> stream x]n0<CdEBH).*=Pb,c}(c}f!EUV|P@>DkT IERC> stream xW}p\u?}ݷҮdvmI60 d IZI4xJ0!@@0-1Ԯ*g͘tL=i@A3L7oW&L_v{98 ,G,(ၡp?ͣ(9~tbQBr[h!lڋinU&)5","!YLnG&?`R}bixi;8g@ 88 /\$MfbABZ\$Rx\$_N)*-\PK9R0Vb\$[7m5l uɮ^%OVT̓\$t&" ߀/[,B{ݯ5' ;4_<"d&ɋ\$<(>+.ZR?uD{vGy=GEzYe&X\$}kd K]*ѷ%י]uvRBGD넎M6ЕVcI۟?]DtK0nי ځx1ղQle8mǍTOoݗJwQ9W?>3Sܳ+n3ܧO ץ2bEcE=hr` =cu|/tt<ǬSTV7mRwbW@*}@sh3'4l\${TSZ\$`P.1d3UXPԲ Md !鬢a% D/9QxO`I6dF~l6[.b6:jYIQ(},V0oR' yM|-{c13Dj#mVqBtV&^00PD2r U^߫fcA3;a}CSS 'ZR,_Ehfz~rbgsoρ5Zg{b[hXyٶehf +ϷYshF00,@?GThCчK%':hRaى ˛5O\$'h8T~U591,U3MX "+H3E5DIX IKa]2~w#>:jwd{Gj\$Sbk%Qz]y]P<:~vF8?̠?SeY5RogӣX!ԓړW?~zS{+A_zCj7>xS %uM< B[C_\$bOEbSp:\/I @ܶrVXEKHC\$DGjj\${R\2ܔux5h6Ba_#nMWU&f/Ŭm,IR \$]~BώigJ.x%_\_MS;.=yJ"DY/c'쿍\_|MJ+lzle厸 0>-m]waF|[WN BsD@ýL;gL)*-vI0m\$dh~ݲmѠ5a<5MZP%Y?uK CrYېQ 
a̅E#7W"G"Gz̧yh]57;&s\$!bĭ3.Ҽ hJE<8[{S.lnr@ åI%404<1P˞: c_` 0gaY퇛a7Mz5Al+aI9Xق&o#|scAjp+ymC;A+E a.l¯fv`| \$λs(Zc=sc=!mTˬ##/::us:f8okitt㎵tԪvjAg1}m*<;99kI\$qJcF)wVWU8zAzYky:ZFԓ/|EG!x>o:Swū =\yێ)Z|z=76_ 3N>ОtE#Ep^\ߺ889?veweFŹvԝGG#7>?|:g6*GtF.|ïq]F^ׅ'_>7C?4P0W?CDcu}]~Sǖ#9Cg2hQNos[H]ݾ &:PShċ;]F=UދU?c,MؾE"+?)QwHQYcʲe(RZ@YI6TR endstream endobj 51 0 obj << /Filter /FlateDecode /Length 10 >> stream x endstream endobj 48 0 obj << /Ascent 859 /CIDSet 51 0 R /CapHeight 500 /Descent -140 /Flags 32 /FontBBox [-7 -144 1000 859] /FontFile2 25 0 R /FontName /FPFOSQ+SimSun /ItalicAngle 0 /StemV 80 /Type /FontDescriptor >> endobj 23 0 obj << /BaseFont /FPFOSQ+SimSun /DescendantFonts [50 0 R] /Encoding /Identity-H /Subtype /Type0 /ToUnicode 49 0 R /Type /Font >> endobj 50 0 obj << /BaseFont /FPFOSQ+SimSun /CIDSystemInfo <> /CIDToGIDMap /Identity /FontDescriptor 48 0 R /Subtype /CIDFontType2 /Type /Font /W [0 0 1000 1 14 500 ] >> endobj 49 0 obj << /Filter /FlateDecode /Length 287 >> stream x]n Oqt^RP8Zqۗ) NX5u#& ZXIp)8 Ĵ&ƹE~Ux&1¡(wnRl QYK'_!٣?S>wPGLb Uv U'FL+x6?#(SM"\$&gJcFs4q<c{bxL|F;VijcM=+^fRqmY r6>> endstream endobj 25 0 obj << /Filter /FlateDecode /Length 3805 >> stream x7mp[Օ'}Izҳd}8%Ŷ,q11%01 MplqEa&i [X0v4t\$^dS(7bYZ[v̺s_Ozss9 <0,>Zp%R.!==}4A(z=T?myw؏Agx禟_|gK4tcoM0;W `BKş 0g88ϡF1/fi`4Qibծ#gbcXAVa"/]ڜL"hj\jN֑ƀl`}K_((vrT'DQG8 چ:hμvCޛWL:`"sLTEq\N Bŷ/9Br&87[\1#FY9FTݒ xIV] ĸ\$\$\$Axy _y<q5ޚ@1{9_\$&V^?!?#O/\$Bd@G]af>L.I?{؝,Sdeϰ,-/̶],,8ZI4GnF*UY 'a6l}FhjVT>k=`IdORoWSRU48\$؞W5@(rl7伮 ,=ضlko[l3 ]BvA++80y}Q_xL_e~c:^R s5 &nJ:lCb,x.abql&p`RDKÉ|]վa!/oOee3#^\$E X^LZȕ>#%vc S@c#C5`"%E[QeJW}ir^{k_!?d~~u|bQNNR4(O+zf}fz_IkES7h) mNC Sgs<:F-ϩ3Lg +1ق i@4Pv̷LM ̈BD5x ` D%G#kS tL
}M]feSEzzp =SYoۅ܀qX@]T\6pkY n"3,ˣU4jBuA~X.B!ðdTp1%ujnS4&ir0V/sc7+;5CUޮܣ|Ey"meZyEjZd_HDTRZ~ױ'vRE+#QVi,i%Whڇ:̝Λ|%3,%4 -e1aEA]-)8)>! +Q;e\$E32Él \$%c8ebĬݜ7+n;spb+I<#_ߑ8&FnzV 5%f^"Ib\$-@+Vp5v5ϫ}`}>r?\$4QkWDi ~DZ Y. /rR@3w]1Sk5M/V0u ھtas~wKo"uξ1P@sKsPM`w6Zh6a 4ZmgO`5Ƅ}[4́ivH7lO-' Ro 0_ d| J6 ѻl,E (YY*,V&aq`v[hfVE<'8p2b[aȁ ȁUtncnu`a]W0ϒ6(,Qw`C^Yav[+Qm?e>* 9PtC=c/ҹs۷;{J#ouwl)_?8:oi5^)퍏882!Q<A aҾ@'&܂xI?b_ G^gf:F0~q6?;+aM4hώC Aa{^ź'A=B7a |\w;ǑJvv D8kP.c6m}6hKg`oE.命Μ~9m݆qԘGgyCtֲrV-Ľr:q:W-g'-Ag PǯŖ~#y/t{7qяlR-^q= W}6#IY1{mY>5yBvޝQ[֪ #Vq[./#p;(?Eh x#a3Q lR v__7[3}[3{]|讉urS9fO#]} OY}ʽ+<9.?|Ǿ[b<.> endobj 53 0 obj << /Author () /CreationDate (D:20180222230057+08'00') /Creator () /Keywords () /ModDate (D:20180222230057+08'00') /Producer (yfPDFbSSpg: rHg, 7.2.0.0424) /Subject () /Title () >> endobj xref 0 54 0000000000 65536 f 0000122235 00000 n 0000000018 00000 n 0000000054 00000 n 0000002741 00000 n 0000002801 00000 n 0000027381 00000 n 0000056889 00000 n 0000122632 00000 n 0000000000 00000 f 0000123494 00000 n 0000129686 00000 n 0000000000 00000 f 0000130418 00000 n 0000133353 00000 n 0000000000 00000 f 0000134110 00000 n 0000137499 00000 n 0000000000 00000 f 0000138228 00000 n 0000140206 00000 n 0000000000 00000 f 0000140980 00000 n 0000144782 00000 n 0000000000 00000 f 0000145552 00000 n 0000108219 00000 n 0000121892 00000 n 0000122399 00000 n 0000123051 00000 n 0000122788 00000 n 0000122304 00000 n 0000129453 00000 n 0000130084 00000 n 0000129843 00000 n 0000129362 00000 n 0000133120 00000 n 0000133752 00000 n 0000133510 00000 n 0000133027 00000 n 0000137266 00000 n 0000137897 00000 n 0000137656 00000 n 0000137175 00000 n 0000139973 00000 n 0000140605 00000 n 0000140363 00000 n 0000139880 00000 n 0000144549 00000 n 0000145181 00000 n 0000144939 00000 n 0000144456 00000 n 0000149442 00000 n 0000149500 00000 n trailer <> startxref 149744 %%EOF
|
2021-06-19 00:09:36
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.905036211013794, "perplexity": 211.2598147025306}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487643354.47/warc/CC-MAIN-20210618230338-20210619020338-00514.warc.gz"}
|
https://brilliant.org/discussions/thread/kickstarter-on-brilliant/
|
# Kickstarter on Brilliant!
Almost every week, we can see some user-suggested feature on the site, and most of the time we find these very useful. I think Brilliant should implement a feature like Kickstarter on their site. Essentially it's a voting system whereby users have 3 choices to choose from: "Strongly agree", "I'm indifferent", "Strongly disagree". And maybe add a (separate) discussion for each proposal. Examples for such topics include:
On the other hand, I have some more suggestions of my own (here are my previous suggestions):
Create a News Feed for the Challenge Masters to post whenever they have changed the user interface or made any notable announcement (basically like a changelog), so that users will be notified of such changes. The entries should be listed like this: StarCraft II version history or Firefox Releases. Examples include
Furthermore, when adding a discussion, users should label their topic as either of the following (examples linked):
Thoughts?
Note by Pi Han Goh
4 years, 7 months ago
Totally agreed. We should also have live chat on Brilliant with the challenge masters.
- 4 years, 7 months ago
By the way, we may want to keep some of our queries private, instead of keeping them public. So, through the live chat feature, we can achieve that. But instead of only live chatting with the challenge masters, how about extending it to live chat with other users?
- 4 years, 7 months ago
Yea, maybe we could have a live chat bar on the side with different rooms that we can join. Rooms could be Public, or according to Brilliant Levels? , Math, Physics. We may also create our own rooms (chat groups) with other Brilliant users. :) That will be cool
- 4 years, 7 months ago
I said that very long ago and I got a vote down... I agree with the idea.
Every team must have some observer... to report if someone asks the Brilliant questions.
- 4 years, 7 months ago
Nope... what if I ask someone to answer my question?
- 4 years, 7 months ago
Thanks for the detailed feature request. To clarify a few things/add additional details that I am inferring from your requests:
• Kickstarter idea: You would like a list of feature requests that is not governed by the regular discussions sorting algorithm. Meaning it just ranks all historical requests by total votes. Additionally, people should have the ability to down-vote or express indifference. Prior to that we would have to build a more prominent place to host it, the "feature requests" section of brilliant does not always get a good representative sample of our population. We will strongly take that idea into consideration as we evaluate ways to design a better mechanism for feedback on Brilliant.
• Staff Announcement Log/More browse-able categories. These two requests are related in my opinion in that they both could be sort of met by making the discussions more easily browse-able by topic, so that you can efficiently dig up relevant historical conversations. Something like this will be coming soon, meaning the current system of tagging will get redone and there will be more obvious places where you can browse posts by tag, which will likely encourage people to tag their posts more.
Staff - 4 years, 7 months ago
The voting part convinced me, I agree on that! You just spoke my mind, Pi Han!
- 4 years, 7 months ago
https://www.albert.io/ie/ap-physics-1-and-2/standing-waves-length-of-rope
# Standing Waves: Length of Rope
APPH12-YZX4RO
Consider a vibrating rope attached at both ends. The rope is vibrating at the fifth harmonic with a frequency of $125$ $Hz$. If the wave travels at a velocity of $65$ $m/s$, what is the length of the rope?
A
$0.52$ $m$
B
$1.92$ $m$
C
$4.81$ $m$
D
$1.3$ $m$
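For reference, the relation behind this item can be checked with a short script (a worked sketch added here, not part of the original question): a rope fixed at both ends vibrating at its $n$th harmonic satisfies $L = n\lambda/2$ with $\lambda = v/f$.

```python
def rope_length(n, v, f):
    """Length of a rope fixed at both ends vibrating at its nth harmonic.

    L = n * wavelength / 2, where wavelength = v / f.
    """
    wavelength = v / f          # meters
    return n * wavelength / 2   # meters

# Fifth harmonic, v = 65 m/s, f = 125 Hz:
L = rope_length(5, 65, 125)   # -> 1.3 m, matching choice D
```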
https://www.weigao.cc/algorithm/leetcode/algorithm.html
# Algorithm Analysis and Design
## Loops and Recursion
### 2. O-notation
• O-notation (Big-O): when we say “the running time is $O(n^2)$” we mean that the worst-case running time is $O(n^2)$ – the best case might be better. (asymptotic upper bound)
• Ω-notation: when we say “the running time is $\Omega(n^2)$” we mean that the best-case running time is $\Omega(n^2)$ – the worst case might be worse. (asymptotic lower bound)
### 3. Recurrences
• Substitution method
• Recursion-tree method
• Master method
Simplified Master Theorem:
Let $a \geq 1$ and $b > 1$ be constants and let $T(n)$ be the recurrence $T(n) = aT(\frac{n}{b}) + cn^k$, defined for $n \geq 0$.
1. If $a > b^k$, then $T(n) = \Theta(n^{\log_b a})$.
2. If $a = b^k$, then $T(n) = \Theta(n^k \log n)$.
3. If $a < b^k$, then $T(n) = \Theta(n^k)$.
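The three cases can be applied mechanically; here is a small helper (a sketch added to these notes) that classifies a recurrence of the form $T(n) = aT(n/b) + cn^k$:

```python
import math

def master_theorem(a, b, k):
    """Classify T(n) = a*T(n/b) + c*n^k via the simplified master theorem."""
    if a > b ** k:                      # leaf work dominates
        exp = math.log(a, b)            # exponent is log_b(a)
        return f"Theta(n^{exp:.3f})"
    elif a == b ** k:                   # work is balanced across levels
        return f"Theta(n^{k} * log n)"
    else:                               # root work dominates
        return f"Theta(n^{k})"

# Merge sort, T(n) = 2T(n/2) + cn: a = b^k, giving Theta(n^1 * log n).
# Naive matrix multiply recursion, T(n) = 8T(n/2) + cn^2: a > b^k, giving Theta(n^3).
```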
## Divide-and-Conquer
### 1. Merge Sort
$T(n) = O(n \log_2 n)$
Other examples:
• Counting Inversions
• Matrix Multiplication:
• Brute force: $O(n^3)$ arithmetic operations
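As a sketch of the counting-inversions application listed above (an example added here, using the standard merge-sort modification):

```python
def count_inversions(arr):
    """Return (sorted copy, number of pairs i < j with arr[i] > arr[j]).

    Runs in O(n log n) by piggybacking on merge sort.
    """
    if len(arr) <= 1:
        return list(arr), 0
    mid = len(arr) // 2
    left, inv_left = count_inversions(arr[:mid])
    right, inv_right = count_inversions(arr[mid:])
    merged, inversions = [], inv_left + inv_right
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            # Every remaining element of `left` exceeds right[j],
            # so each one forms an inversion with it.
            merged.append(right[j]); j += 1
            inversions += len(left) - i
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, inversions

# count_inversions([2, 4, 1, 3, 5]) -> ([1, 2, 3, 4, 5], 3)
```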
### 2. Quick Sort
• Worst-case running time $\Theta(n^2)$:
• input sorted or reverse sorted, partition around min or max element.
• one side of partition has no elements.
• $T(n) = T(0) + T(n-1) + cn$
• Expected running time $O(n \log n)$
• If we are really lucky, partition splits the array evenly into halves of size $n/2$: $T(n) = 2T(n/2) + \Theta(n) = \Theta(n \log n)$
Divide and conquer: partition, pivot
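The partition/pivot step can be sketched as follows (an illustration using the Lomuto scheme; the notes do not specify which partitioning scheme was taught):

```python
def partition(a, lo, hi):
    """Partition a[lo..hi] around pivot a[hi]; return the pivot's final index."""
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = partition(a, lo, hi)
        # On sorted or reverse-sorted input one side is empty at every
        # level, giving the Theta(n^2) worst case noted above.
        quicksort(a, lo, p - 1)
        quicksort(a, p + 1, hi)
    return a
```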
## Heapsort
Action of build max-heap:
1. Find the parent node of the last node
Priority Queues
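The build-max-heap step above can be sketched as follows (an example added here; it starts, as the notes say, from the parent of the last node, at index n//2 - 1):

```python
def sift_down(a, i, n):
    """Restore the max-heap property for the subtree rooted at index i."""
    while True:
        left, right, largest = 2 * i + 1, 2 * i + 2, i
        if left < n and a[left] > a[largest]:
            largest = left
        if right < n and a[right] > a[largest]:
            largest = right
        if largest == i:
            return
        a[i], a[largest] = a[largest], a[i]
        i = largest

def build_max_heap(a):
    """Build a max-heap in place, sifting down from the last parent to the root."""
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # parent of the last node first
        sift_down(a, i, n)
    return a
```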
## Elements of DP Algorithms
• Optimal substructure
• Overlapping subproblems
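Both properties show up in the textbook Fibonacci example; a minimal memoization sketch (added here for illustration):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Overlapping subproblems: fib(n-1) and fib(n-2) would recompute the
    # same values; caching turns exponential time into linear time.
    # Optimal substructure: fib(n) is built directly from subproblem answers.
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```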
Last Updated: 1/24/2019, 8:00:28 AM
https://forums.virtualbox.org/viewtopic.php?f=2&t=94250&p=454939
## Auto restoring virtualbox machine
Discussions about using Windows guests in VirtualBox.
### Auto restoring virtualbox machine
Hi Guys,
How can I go about restoring VirtualBox to a certain state, let's say at a frequency of 3 hrs?
What I am trying to do:
I am currently running training using VirtualBox, and every morning I have to restore snapshots manually. I would like to automate this process.
Zuni
Posts: 3
Joined: 10. Aug 2019, 23:17
### Re: Auto restoring virtualbox machine
Take a look at the manual, section 8.17, the 'VBoxManage snapshot' command
Human government is like that crazy uncle who hides a quarter in his fist behind his back, then asks you to guess which fist the quarter is in...
No matter which side you choose, Left or Right, both Sides are empty.
scottgus1
Posts: 4262
Joined: 30. Dec 2009, 20:14
Primary OS: MS Windows 10
VBox Version: PUEL
Guest OSses: Win7
### Re: Auto restoring virtualbox machine
Hi Scot,
In the manual I do not see anything on a timed restore, but I am thinking a script that runs the command at an interval should work.
Thank you let me try that
Zuni
Posts: 3
Joined: 10. Aug 2019, 23:17
### Re: Auto restoring virtualbox machine
Yes, Virtualbox does not have a function to time a vboxmanage command, but you can use Windows Task Scheduler or a cron job (? I'm not a Linux guy) to run a batch file with the vboxmanage command in it.
If you have a Windows host, and you wish to have the batch file window not show in the screen while it is running, you can point your Task Scheduler task at this vbscript file (save as a text file with .vbs extension):
set shell = WScript.CreateObject("WScript.Shell")
shell.Run "driveletter:\path\to\batchfile.cmd", 7 ' 7: minimized; 0: hidden
The 7 makes the batch file window minimized to the taskbar, 0 hides the window completely, no icon in the taskbar.
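For the batch file itself, something along these lines should work (a sketch only; "TrainingVM" is a placeholder VM name, and it assumes you want the most recent snapshot via `restorecurrent` — see section 8.17 of the manual for the exact syntax):

```batch
@echo off
rem Restoring a snapshot requires the VM to be powered off first.
VBoxManage controlvm "TrainingVM" poweroff
rem Restore the current (most recent) snapshot, then start the VM again.
VBoxManage snapshot "TrainingVM" restorecurrent
VBoxManage startvm "TrainingVM" --type headless
```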
scottgus1
Posts: 4262
Joined: 30. Dec 2009, 20:14
Primary OS: MS Windows 10
VBox Version: PUEL
Guest OSses: Win7
### Re: Auto restoring virtualbox machine
Thank you mate,
will definitely work on it
Zuni
Posts: 3
Joined: 10. Aug 2019, 23:17
https://www.expii.com/t/testing-series-convergence-using-the-limit-definition-357
# Testing Series Convergence Using the Limit Definition - Expii
How do you determine whether an infinite series converges or diverges? In the simplest cases, you can stick to the basics and just directly use the limit definition of infinite series. Sometimes, like for infinite geometric series, you might be able to evaluate the series exactly (if it converges), not just prove convergence or divergence!
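For instance (an illustration added here, not part of the original page), the partial sums of a convergent geometric series visibly approach the closed-form limit $a/(1-r)$:

```python
def partial_sum(a, r, n):
    """Partial sum s_n = a + a*r + ... + a*r^n of a geometric series."""
    return sum(a * r**k for k in range(n + 1))

# With a = 1 and r = 1/2 the limit is a / (1 - r) = 2; by n = 50 the
# partial sum agrees with that limit to well under 1e-9.
s = partial_sum(1, 0.5, 50)
```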
https://blueollie.wordpress.com/category/personal-issues/
# blueollie
## Unrequited Love
Over the past year, I’ve had an energetic back-and-forth with someone who I’ve grown fond of. A year ago, I didn’t even know what she looked like, though we were FB friends for a couple of years (I wasn’t even sure if “she” were even a “she”…that is, was this an “alternate account”?). So our FB relationship has grown and evolved; right now I play the part of a clueless, infatuated, socially awkward male who constantly gets rebuffed…but still doesn’t learn. This has generated a series of memes, and here is one of the latest ones:
I admit that I like this one as I am a fan of “over the years” photos; I’ll post another one as a “PS” to this post.
But behind all of the joking is the idea of “unrequited love”, which the Main Ingredient sang about so well:
And this has been a big part of my life, albeit NOT in terms of human relationships. Oh sure, many (most?) of those I had a crush on did NOT have crush on me. But hey, that is life for most of us. I am thinking more along the lines of my life and professional aspirations.
Yes, for most of my young life, I really, really, really wanted to be a professional athlete. And I did all of the “right” things: I ran wind sprints, lifted weights, practiced my football drills. And it paid off: I started two years of JV and one year of varsity football in high school.
But, well, college competition was a different story, and, in all honesty, I didn’t have even enough ability to play Division III ball. There just isn’t a market for those who take 5.8-5.9 to run a 40 yard dash (to put this into perspective, linemen usually run 4.8-5.0, backs 4.3 to 4.5).
So now, yes, I do hear from professional sports teams….when they have ticket specials. 🙂
But this has a happy ending. You know all of that running and weight lifting I did? Well, I am still doing it, albeit at a more age appropriate level. I grew to love working out, and I still do.
So, sometimes unrequited love really does have a happy ending. In my case: no broken body (though I have a few aches and pains…that is normal for someone in their late 50’s), no concussions. No athletic performance either, but I can still run a 5K at 8:30-8:40 mpm (yeah, that used to be 6:20…but never mind) and I can still do sets of 10-15 pull ups.
February 18, 2017
## Bucky: Rest in Peace
You will be sorely missed.
## First Class Eve
Ok, time to get to bed as classes start tomorrow.
I’ve had some interesting political discussion today and…ok, watched a lot of short videos about some of the “100 greatest NFL players” and focused on the linemen, especially the offensive linemen.
Workout notes: 5.1 mile Corntalk course; jogged to the start of the dogpark hill, and then ran the uphills hard; walked, then jogged to the next uphill. I did that workout on December 31’st. Felt good.
Time to get at it... lots of admin stuff to do.
So for a bit of “mathematical physics” humor:
1.5 hour department meeting, plus an extra 20 minutes for us older folks…
Workout notes: run on the treadmill (to break in new shoes): 5 minute segments for the first 50 minutes (0.5 incline)
5.2-5.3-5.4-7.0-5.3-7.0-5.3-7.0-5.3-7.00, then walk jog for 4 minutes, 6.7 for 3, 6.8 for 2, 6.9, 7.0 for 1 minute each; 6 miles in 1:00:04
3 mile walk (I think) and it felt good…until I tried to get up after sitting down. It WAS work but work that left me feeling refreshed.
Time: about 52 minutes (by the time of day), or 15:0x ish minutes per mile, which felt like the pace I was walking.
August 23, 2016
## Olympics and useful BS (nonsense)…
Yes, I’ve watched some Olympic action (in particular, boxing, swimming, gymnastics, court volleyball). I remember watching these when I was a teenager…and watching NFL games and thinking: “wow, with some work, that COULD BE ME.” I still remember using my Exergym rope exerciser while watching NFL games on the black and white television in my bedroom.
This was on the weekend; I used my high school universal gym during the week, as well as running the steps, running the track, doing agility drills on my own, etc.
I remember when I was on the JV: our games were on Wednesday night. On Sunday evening, after the NFL game, I’d go to the track and do 2 miles, reasonably hard (13:30-14:00 was my time, as a 220 lb. offensive tackle).
No one was going to outwork me! My motto was “you can do anything if you want it badly enough and are willing to work hard enough.”
That, of course, is complete and total bullshit.
The reality is: “if you see someone on TV because they are good enough to warrant TV coverage, that will NEVER be you…unless you are one of those “1 in 1000” outliers.
I eventually found what I was best at (mathematics) and even that, I was nowhere good enough to be, say, an MIT professor. Getting the Ph.D. and getting a few new results published was about my talent level; it is a bit like the 2:25 (male) marathon runner who dreams of sneaking into the Olympic Trials, though he knows that he has essentially zero chance of making the team, or the baseball player who peaks out at playing, say, A or AA ball. It is still damned good, but not "TV good".
So, would I have been happier (and better off) had I known that early in life? It is hard to say.
On one hand, I would have been more relaxed. On the other hand:
1. I grew to love lifting weights and running because I did these things to get ready for football. I enjoy these activities to this day; my first weight room workout was in 1972!
2. I can sympathize with the student who, say, is enrolled in engineering but doesn’t have the talent for it. I can explain that there are other rewarding paths to a college degree and that humans tend to have different talents.
And yes, I am getting ready to go lift, work on my math paper, watch some more Olympics, and see yet another class A baseball game this weekend.
August 12, 2016
## Rant: recognizing the limits of what one knows
I’ll admit that I am an expert in a very narrow slice of mathematics. But I am at least an AU from being an international or even a national caliber expert in that narrow field of mathematics.
And yes, I often read about topics that are not in my area; I enjoy popular books and articles on topics from the various branches of science, economics and the like.
Nevertheless, I also realize that when I read such a book or article, or when I attend a public lecture, I am getting a watered down, simplified treatment of the subject. I lack the context and the prerequisite knowledge to appreciate a presentation aimed at the experts.
And there lies one of my biggest frustrations when it comes to talking to people, either on the internet or in person. There are so many who really can’t detect the difference between expert knowledge and what they read (and perhaps half-digested …if that much) from a popular book. It is THAT level of “lack of humility” that makes some unpleasant conversation companions; I am ok with ignorance. After all, I am ignorant of the vast majority of human knowledge. I think that all of us are.
And, sadly, I see this lack of intellectual humility in political or social issues discussion, especially from the “losing side”. It appears to me that being on the losing side of an election (and I’ve been there, many, many times) brings out the worst in people in several ways.
Example: I had someone try to tell me that Hillary Clinton’s popular vote is “within the margin of error”, when one factors in the caucus states.
Of course, that is a dumb statement for a number of reasons.
1. There is a difference between a vote count and a poll count, even though both have a margin of error (remember Florida in the 2000 general election). The margin of errors in vote count is much smaller than it is for a poll.
2. The margin of error for a poll is $1.96 \cdot \frac{0.5}{\sqrt{n}}$ (assuming a 95 percent confidence interval and a relatively close election; this comes from the normal approximation to the proportion distribution). So as $n$ increases, the confidence interval, and therefore the margin of error, decreases. Note: for more on polls, read this wonderful little article written by a physics professor.
3. Hillary Clinton leads by about 3 million votes, even when one counts the caucus votes. The latter doesn’t add much as there are fewer caucus states, and these tend to be smaller states. Anyhow, she leads about 57-43.
4. The person making the claim appeared to not understand that winning a small state by a very large percentage didn’t make up for winning a bigger state by a smaller margin.
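The formula in point 2 is easy to check numerically (my sketch):

```python
import math

def margin_of_error(n, z=1.96):
    """95% margin of error for a proportion near 0.5, sample size n."""
    return z * 0.5 / math.sqrt(n)

# A typical n = 1000 poll: roughly +/- 3.1 percentage points.
# A vote count with n in the millions shrinks this to a tiny fraction
# of a percent -- one reason polls and vote counts are not comparable.
moe = margin_of_error(1000)   # about 0.031
```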
Yes, by knowing that Sanders won a lot of caucus states and that there IS such a thing as margin of error puts this individual into the “above average” category. But this person was clearly ignorant of their own ignorance.
There is another factor in play: I really think that desperation makes one dumber. When one really likes a candidate or a person, or even a sports team, it is tough to accept an unpleasant reality. I’ve become acquainted with the latter as an Illinois football fan (“yeah, we have a shot at being Wisconsin!” Sure.)
Desperation can lead to an abandonment of one’s values. Check out the Republican Chairman’s take on Donald Trump
Oh sure, few would be surprised at Donald Trump’s behavior, and I doubt that a certain type of Republican really cares that much (“hey, what do you expect with Trump anyway?”)
May 16, 2016
## My attempt at “touchy-feely” memes..
I’ve seen so many “please understand me because I have condition X” memes on the social media. I’ve seen only a few that exhort the reader to treat others better. here is one:
Am I there? Uh…no; I have a ways to go. But it is a worthy goal.
Here are a couple of my attempts; they need work.
## Not the smartest thing I’ve done…
TMI Post
Two weekends ago, it took me 3:01:30. Today, the same course took me 3:04:25 (12:05 pace, not a misprint). I was 1:32 at 7.6 miles. 3:00:21 at 14.95 miles. The return was slightly faster and when I finished, I had a deep cough and drainage all over the place. It was disgusting.
What was fun is that I saw some other runners along the way and passed a couple of pretty bespandexed women on the way up from Affina to the Tower section, and then I ran into T and her crew at the tower. Sadly, T had the “dreaded coat around the waist”…that made me sad.
And when I got home…it wasn’t quiet like it was in recent weekends but instead there was the drone of Barbara babbling on the phone and waiting for kids/grandkids, etc.
March 5, 2016 Posted by | Personal Issues, running | | 1 Comment
## Online IRS scam
Just beware: I got a mail notice claiming to be from the IRS saying someone had tried to set up an online account.
But when I called the listed number: they first wanted stuff on the letter…then wanted my full SSN at which point I hung up.
I went online to see the official IRS numbers; this wasn’t one of them.
March 1, 2016
## wind out of my sails…
Oh, I got some official good news today, and I still have a lot to do. Nevertheless….no more home basketball or other things for a while…
Perhaps I can do a medium long run this weekend. Still have a very sporadic, light cough.
Well, the spring races aren’t that far away, and neither is Chiefs baseball. And I’ve got stuff to work on.
March 1, 2016
https://cob.silverchair.com/jeb/article/207/24/4325/2685/The-biodynamics-of-arboreal-locomotion-the-effects?searchresult=1
Effects of substrate diameter on locomotor biodynamics were studied in the gray short-tailed opossum (Monodelphis domestica). Two horizontal substrates were used: a flat 'terrestrial' trackway with a force platform integrated into the surface and a cylindrical 'arboreal' trackway (20.3 mm diameter) with a force-transducer instrumented region. On both terrestrial and arboreal substrates, fore limbs exhibited higher vertical impulse and peak vertical force than hind limbs. Although vertical limb impulses were lower on the terrestrial substrate than on the arboreal support, this was probably due to speed effects because the opossums refused to move as quickly on the arboreal trackway. Vertical impulse decreased significantly faster with speed on the arboreal substrate because most of these trials were relatively slow, and stance duration decreased with speed more rapidly at these lower speeds. While braking and propulsive roles were more segregated between limbs on the terrestrial trackway, fore limbs were dominant both in braking and in propulsion on the arboreal trackway. Both fore and hind limbs exerted equivalently strong, medially directed limb forces on the arboreal trackway and laterally directed limb forces on the terrestrial trackway. We propose that the modifications in substrate reaction force on the arboreal trackway are due to the differential placement of the limbs about the dorsolateral aspect of the branch. Specifically, the pes typically made contact with the branch lower and more laterally than the manus, which may explain the significantly lower required coefficient of friction in the fore limbs relative to the hind limbs.
Substrate reaction forces (SRFs) are often used to summarize limb function during terrestrial locomotion, and a single pattern characterizes most quadrupeds (summarized in Demes et al.,1994). Body-weight support is reflected by the vertical component of the SRF, and, because the center of mass of most mammals is cranially displaced, the vertical SRF is most commonly greater in the fore limbs than in the hind limbs. The craniocaudal force has two active parts: a braking component followed by a propulsive component. During terrestrial locomotion,braking impulse (area under the force–time curve) is typically greater than propulsive impulse in the fore limb; by contrast, the hind limb tends to be net propulsive. Mediolateral force and impulse are considered negligible for cursorial animals moving along a straight path(Biewener, 1990). However,sprawling tetrapods commonly generate a more substantial medially directed SRF(laterally directed limb force) so that their mediolateral impulse is comparable in magnitude to craniocaudal impulse(Christian, 1995; Willey et al., 2004).
Quadrupeds adapted to arboreal locomotion display an altered pattern of SRF(Kimura, 1985; Ishida et al., 1990; Demes et al., 1994; Schmitt, 1994, 1999; Schmitt and Lemelin, 2002):peak vertical forces tend to be reduced, hind limbs commonly take on a greater role in body-weight support, and the limbs exert strong laterally directed SRFs (medially directed limb forces). Differences between the terrestrial and arboreal SRF patterns have been related to differences in substrates. For example, lowered peak vertical forces observed in primates moving on horizontal, narrow supports may help reduce branch oscillations(Demes et al., 1990; Schmitt, 1999).
To date, studies on arboreal locomotor kinetics have concentrated almost exclusively on primates. Yet, virtually any small mammal must negotiate heterogeneous terrain that includes some non-terrestrial substrates. For example, many species of rodents(Montgomery, 1980) and the didelphid marsupial Didelphis virginiana(Ladine and Kissell, 1994)utilize fallen logs and branches on the forest floor as arboreal runways. Terrestrial mammals navigating an arboreal substrate are likely to adapt their locomotor behavior in an attempt to enhance stability on this curved substrate, and some of these strategies may result in observable differences in limb function and, thus, SRFs. Such strategies might include adjustments in speed, limb placement and gait. Terrestrial mammals may choose to move more slowly on arboreal supports; decreased speed is generally associated with lower peak vertical forces (Demes et al.,1994; Schmitt and Lemelin,2002). Limb placement about a curved substrate will affect the potential for slipping off of the sides of a branch. When limb contacts occur on the top of the branch (or anywhere on a flat substrate), then the shear force is the vector sum of the mediolateral and craniocaudal forces, while the normal force is equivalent to the vertical force. But vertical and mediolateral forces will each contribute shear and normal components when contact occurs on any other part of the branch(Fig. 1A,B). Therefore, the relative proportions of the three-dimensional SRFs may be altered to avoid excessive shear forces. It is also possible that the limb force could be reoriented towards the centroid of the branch, which would increase the normal reaction force. Finally, it is possible that gait (defined by Hildebrand, 1976, as timing and duration of foot contacts relative to stride duration) shifts may occur between terrestrial and arboreal locomotor bouts. 
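The geometric point about limb placement can be made concrete (a sketch with an assumed sign convention, not the authors' analysis code): at contact angle θ from the top of the branch, the vertical and mediolateral SRF components each project onto the local normal and tangential (shear) directions, while the craniocaudal component remains pure shear.

```python
import math

def resolve_srf(f_vert, f_ml, theta):
    """Resolve vertical and mediolateral SRF components at a contact point
    theta radians around from the top of a cylindrical branch into normal
    and shear parts (illustrative sign convention)."""
    f_normal = f_vert * math.cos(theta) + f_ml * math.sin(theta)
    f_shear = -f_vert * math.sin(theta) + f_ml * math.cos(theta)
    return f_normal, f_shear

# At theta = 0 (top of the branch, or any flat substrate) the vertical
# force is purely normal and the mediolateral force is purely shear,
# exactly as the text describes.
resolve_srf(1.0, 0.2, 0.0)   # -> (1.0, 0.2)
```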
A gait that is dynamically stable (where stability is provided by motions through conditions that are statically unstable) on a terrestrial substrate may be inadequate on arboreal substrates, particularly if speeds are reduced. Animals may switch to more statically stable gait (e.g. towards a single-foot gait)(Hildebrand, 1976).
Fig. 1.
Arboreal locomotion in Monodelphis domestica. (A) Resolution of substrate reaction forces (SRFs) into normal and shear components(Fnormal,ML and Fshear,ML, respectively)as illustrated for a fore limb and its mediolateral SRF(FML). XFL, YFL and ZFL are coordinates of the estimated center of fore limb pressure. (B) Resolution of vertical SRFs into shear and normal components(Fnormal,V and Fshear,V, respectively).(C) Cropped representative image of M. domestica on the arboreal trackway illustrating the limb landmarks: (1) distal tip of the third manual digit; (2) lateral aspect of the wrist joint; (3) distal tip of the fifth pedal digit; (4) lateral aspect of the metatarsophalangeal joint. Note that the heel (see arrow pointing to the ankle marker) was typically not in contact with the substrate during arboreal and terrestrial trials. Scale bar (4 cm)denotes the length and location of the arboreal force transducer.
The aim of this study was to determine whether and how limb function, as reflected by SRFs, differ in terrestrial and arboreal locomotion in a non-arboreal specialist. We used Monodelphis domestica(Wagner, 1842), the gray short-tailed opossum, as our model. M. domestica is a small terrestrial marsupial (Cartmill,1972; Nowak, 1999)that is readily capable of moving on narrow substrates(Lammers, 2001). Although specialization for aboreal locomotion evolved several times within the family Didelphidae, terrestrial habitation is probably primitive(Fig. 2). Furthermore, Monodelphis is considered the most terrestrial genus within the family (Nowak, 1999). In this paper, we address the mechanics of arboreal locomotion through two primary questions. Firstly, do terrestrial mammals necessarily adopt SRF patterns observed in arboreal specialists? Arboreal specialists, such as Caluromys philander have morphological as well as behavioral adaptations for arboreal habitation and locomotion(Schmitt and Lemelin, 2002),whereas the terrestrial M. domestica presumably must rely much more on behavioral modifications to move on arboreal substrates. Thus, it is likely that M. domestica will move along a branch differently than would an arboreal specialist. Secondly, how does limb placement about a curved substrate affect stability on a branch?
Fig. 2.
Phylogeny of some American marsupials, based on Palma and Spotorno(1999) and Nowak(1999). Although scansorial and arboreal locomotor adaptations evolved more than once in the family Didelphidae, it is likely that the common ancestor was a terrestrial form. Furthermore, Nowak (1999) and Cartmill (1972) suggest that the terrestrial Monodelphis genus retains the primitive condition to the greatest degree.
### Animals
We used six adult male Monodelphis domestica(Wagner, 1842; gray short-tailed opossums) for all experiments (body mass: 0.105–0.149 kg),and all procedures were approved by the Ohio University Animal Care and Use Committee. Animals were anaesthetized prior to each experiment by placing them and approximately 0.3–0.4 ml of isoflurane (Abbott Laboratories, North Chicago, IL, USA) into a plastic container (∼2 min). The fur covering the lateral aspect of the left fore and hind limb was shaved and white 1.3 ×1.7 mm beads were glued to the skin overlying bony landmarks. Landmarks used in this study include: distal tip of the third manual digit, the lateral aspect of the wrist, distal tip of the fifth pedal digit and fifth metatarsophalangeal joint (Fig. 1C). The animals typically awoke within 2 min and appeared to suffer no ill effects.
### Kinetic data
Force transducers for recording SRFs were constructed based on the spring-blade design described in Biewener and Full (1992) and Bertram et al. (1997). The terrestrial trackway was 160 cm long, with a 48 × 11 cm force platform integrated in the middle, and was covered with 60-grit sandpaper for traction. This force platform was initially developed to evaluate whole-body mechanics, so its length necessitated capturing individual fore and hind limb SRFs in separate trials. Fore limb data were obtained as the first footfall on the platform, whereas hind limb data represent the last limb off the platform. The arboreal trackway was constructed from 2.03 cm diameter aluminum tubing (including a 60-grit sandpaper covering); the trackway therefore corresponded to approximately one-half body width. This trackway was 151 cm long, with a 4 cm force-transducer-instrumented section. Because the force transducer in the arboreal trackway was short, sequential fore and hind limb SRFs were obtained in each trial. Animals were encouraged to run towards a wooden box placed at the end of each trackway. The force transducer calibration protocol followed Bertram et al. (1997). Briefly, the vertical transducers were calibrated by placing known weights on the platform or hanging weights from the pole; the craniocaudal and mediolateral directions were calibrated by hanging weights through a pulley apparatus.
SRF data were collected at 1200 Hz for 3–6 s. Analog outputs from the force transducers were amplified (SCXI-1000 and 1121; National Instruments, Austin, TX, USA), converted to a digital format (National Instruments; NB-M10-16L), and recorded as voltage changes with a LabVIEW 5.1 (National Instruments) virtual instrument data-acquisition program. Voltage changes were then converted into forces (in N) using calibration scaling factors. All force traces were filtered with Butterworth notch filters at 60 Hz, 48–58 Hz and 82–92 Hz for the terrestrial trials, and at 60 Hz, 115–125 Hz and 295–305 Hz for the arboreal trials.
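The band-stop filtering step can be sketched as follows. This is an illustrative Python/SciPy reconstruction, not the authors' LabVIEW pipeline; the 48–58 Hz terrestrial band is used as an example, and the 53 Hz interference tone is invented for demonstration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def notch_filter(trace, stop_band, fs=1200.0, order=2):
    """Zero-phase Butterworth band-stop filter for a force trace sampled
    at fs Hz; stop_band is (low, high) in Hz, e.g. (48, 58)."""
    nyquist = fs / 2.0
    b, a = butter(order, [stop_band[0] / nyquist, stop_band[1] / nyquist],
                  btype='bandstop')
    return filtfilt(b, a, trace)  # forward-backward pass -> no phase distortion

fs = 1200.0
t = np.arange(0, 1.0, 1.0 / fs)
force = np.sin(2 * np.pi * 5 * t)                  # 5 Hz "limb force" component
noisy = force + 0.5 * np.sin(2 * np.pi * 53 * t)   # hypothetical 53 Hz interference
clean = notch_filter(noisy, (48.0, 58.0), fs)
```

Zero-phase filtering (`filtfilt`) matters here because the timing of force events (e.g. time to peak vertical force) is itself a measured variable.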
Only trials that approximated steady speed over the force transducers were analyzed. This was determined in the arboreal trials by comparing the total braking impulse of both fore and hind limbs to the total craniocaudal impulses [(braking impulse)/(craniocaudal impulse) × 100%]; see below for a description of the impulse calculations. If this percentage fell between 45 and 55%, then the trial was considered to be steady speed. A different criterion for steady speed was developed for the terrestrial trials. Whole-body SRFs were obtained as the animals crossed the force platform. The craniocaudal SRFs were divided by mass and then integrated to obtain craniocaudal velocity profiles; the integration constant was set as the mean speed determined videographically over three 12 cm intervals. Terrestrial trials were accepted as steady speed when the braking and propulsive components of the whole-body velocity were balanced. We made every effort to obtain steady-speed trials at a large range of speeds on each substrate, but despite our persistence only one slow terrestrial trial (0.724 m s–1) was acceptable.
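The arboreal steady-speed criterion can be expressed compactly. The sketch below is a hypothetical Python implementation (the function names and toy traces are ours, not the authors'): a trial passes when the braking share of the combined fore- and hind-limb craniocaudal impulse falls between 45 and 55%.

```python
import numpy as np

def braking_fraction(traces, fs=1200.0):
    """Total braking impulse of the given craniocaudal force traces as a
    percentage of their total unsigned craniocaudal impulse."""
    dt = 1.0 / fs
    braking = sum(-np.sum(np.minimum(np.asarray(f, float), 0.0)) * dt
                  for f in traces)
    total = sum(np.sum(np.abs(np.asarray(f, float))) * dt for f in traces)
    return 100.0 * braking / total

def is_steady_arboreal(fore_cc, hind_cc, fs=1200.0):
    """Steady-speed test for arboreal trials: braking must account for
    45-55% of the combined craniocaudal impulse."""
    pct = braking_fraction([fore_cc, hind_cc], fs)
    return 45.0 <= pct <= 55.0, pct

# toy stance: braking in the first half, equal propulsion in the second
t = np.linspace(0.0, 0.05, 61)
trace = -np.sin(2 * np.pi * t / 0.05)
ok, pct = is_steady_arboreal(trace, 0.5 * trace)
```

A perfectly balanced trace yields exactly 50%, the midpoint of the acceptance window.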
Kinetic data include peak vertical force, time to peak vertical force, vertical impulse, braking impulse, propulsive impulse and net mediolateral impulse for fore and hind limbs. A fourth LabVIEW virtual instrument was used to calculate impulse by integrating the force/time curve separately for each limb and each orthogonal direction (vertical, craniocaudal, mediolateral). In this study, 'impulse' refers to the impulse generated by individual limbs (contact impulse) rather than the change in momentum of the whole body (Bertram et al., 1997). Substrate reaction forces were divided by the animal's body weight to account for the 0.105–0.149 kg range in mass; forces and impulses were therefore analyzed in units of body weight (BW) and BW s, respectively.
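As an illustration of the contact-impulse calculation, the following Python sketch integrates the braking and propulsive portions of a single-limb craniocaudal trace and normalizes by body weight. The trace and the body weight are invented for demonstration, and simple rectangular integration stands in for the LabVIEW integration the authors used.

```python
import numpy as np

def limb_impulses(f_cc, body_weight, fs=1200.0):
    """Braking and propulsive contact impulses (BW s) from one limb's
    craniocaudal force trace (N), using rectangular integration."""
    dt = 1.0 / fs
    braking = -np.sum(np.minimum(f_cc, 0.0)) * dt / body_weight
    propulsive = np.sum(np.maximum(f_cc, 0.0)) * dt / body_weight
    return braking, propulsive  # both reported as positive magnitudes

# invented 50 ms stance: braking first, then an equal propulsive push
t = np.linspace(0.0, 0.05, 61)
f_cc = -0.2 * np.sin(2 * np.pi * t / 0.05)          # N
brake, prop = limb_impulses(f_cc, body_weight=1.2)  # 1.2 N ~ a 0.122 kg animal
```

Dividing by body weight in newtons is what yields the BW s units used throughout the Results.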
### Kinematic data
The trackways were illuminated with three 233.3 Hz strobe lights (Monarch-Nova, Amherst, NH, USA) as two high-speed 120 Hz digital cameras with a 1/250 s shutter speed (JVC GR DVL 9800; Yokohama, Japan) captured footfall patterns and limb movement. The first camera obtained a lateral view of the left side of the animal and the second obtained a dorsolateral view. These videos were uploaded to a computer using Ulead Video Studio 4.0 (Ulead, Taipei, Taiwan), and then the APAS motion analysis system (Ariel Dynamics, San Diego, CA, USA) was used to synchronize the kinematic events from the two camera views, digitize the landmarks and convert each two-dimensional set of digitized data into three-dimensional coordinates for each landmark.
The center of pressure for each foot was estimated using the landmarks. Because the fore limb assumed a fully plantigrade posture on both arboreal and terrestrial substrates, the center of pressure of the manus ('hand', composed of the structures distal to the wrist joint) was estimated as the geometric midpoint between the wrist and third manual digit landmarks. Because the heel did not contact either substrate, the center of pressure of the pes ('foot', composed of the structures distal to the ankle joint) was set as the geometric midpoint between the metatarsophalangeal and fifth pedal digit landmarks. Given that the distance between the manual and pedal landmarks was short (15.7 and 6.8 mm, respectively), placing the center of pressure at the midpoint between proximal and distal contacts was not unreasonable. This estimate also assumes that the manus and pes contact the substrate without gripping, which is reasonable for the fore limb because the manus in M. domestica is short and lacks opposable digits. Although the pes is longer than the manus and has an opposable hallux, the diameter of the substrate is considerably greater than the span of the grip of the pes and the grit of the sandpaper did not offer much claw penetration. Furthermore, because the heel did not touch the substrate, only a small part of the pes was used to contact the branch.
Timing variables (speed, stance duration, stride frequency, stride length) were also measured from the videos. Gaits were identified from footfall patterns using limb phase, which is the proportion of stride duration by which the left fore limb contact follows the left hind limb contact (Hildebrand, 1976). Hildebrand (1976) divided limb phase into octiles of equal size. A limb phase close to 50% (between 43.75 and 56.25%) indicates a trot; limb phases greater than 56.25% are different lateral-sequence gaits (for further details see Reilly and Biknevicius, 2003). [We acknowledge that 'trot' has been applied differently in kinematic (Hildebrand, 1976) and whole-body mechanics (Cavagna et al., 1977) studies, the former as a footfall pattern, the latter as bouncing mechanics or in-phase fluctuations of kinetic and gravitational potential energies. In the present study, 'trot' is used in its traditional, kinematic sense, namely, diagonal-couplet footfalls (Newcastle, 1657). Whole-body mechanics was not assessed in the arboreal trials.] The duty factor of the hind limb (ratio of stance duration to stride duration) was also calculated. Differences between arboreal and terrestrial duty factor and limb phase were determined by Student's t-test.
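Hildebrand's limb phase and the duty factor are simple ratios of footfall timings; a minimal Python sketch (helper names are our own) is:

```python
def limb_phase(lh_touchdown, lf_touchdown, stride_duration):
    """Limb phase (%): fraction of the stride by which the left fore limb
    touchdown follows the left hind limb touchdown (Hildebrand, 1976)."""
    lag = (lf_touchdown - lh_touchdown) % stride_duration
    return 100.0 * lag / stride_duration

def duty_factor(stance_duration, stride_duration):
    """Duty factor (%): stance duration as a percentage of stride duration."""
    return 100.0 * stance_duration / stride_duration

def classify(phase):
    """Gait bins used in the text: trot (43.75-56.25%) and
    lateral-sequence diagonal-couplet (31.25-43.75%)."""
    if 43.75 <= phase <= 56.25:
        return 'trot'
    if 31.25 <= phase < 43.75:
        return 'lateral-sequence diagonal-couplet'
    return 'other'

# fore limb lands half a stride after the hind limb -> limb phase 50% (trot)
phase = limb_phase(lh_touchdown=0.00, lf_touchdown=0.07, stride_duration=0.14)
df = duty_factor(stance_duration=0.062, stride_duration=0.14)
```

The modulo in `limb_phase` handles fore limb touchdowns digitized from the following stride cycle.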
### Calculating required coefficient of friction
The required coefficient of friction (μreq), the ratio of shear force to normal force, is one way of estimating the ability of an animal to generate friction with its limbs. If the limb does not slip when it makes contact with the substrate, then the true coefficient of friction is greater than the required coefficient of friction. On the flat terrestrial substrate, the shear force is the vector sum of the craniocaudal and mediolateral forces, and the normal force is the vertical force. On the arboreal substrate, the animal's limbs contacted the pole between its lateral aspect and its dorsal-most surface. Thus, while craniocaudal forces continue to contribute exclusively to shear force on the arboreal trackway, vertical and mediolateral forces each contribute to both shear and normal forces (Fig. 1A,B).
To calculate μreq on the arboreal substrate, the components of the vertical, craniocaudal and mediolateral SRFs contributing to shear and normal forces were computed as:
$$\mathrm{Shear\ force\ component} = \left[ (\mathbf{F}_{\mathrm{ML}}\sin\theta - \mathbf{F}_{\mathrm{V}}\cos\theta)^2 + \mathbf{F}_{\mathrm{CC}}^2 \right]^{0.5}$$

$$\mathrm{Normal\ force\ component} = \mathbf{F}_{\mathrm{ML}}\cos\theta + \mathbf{F}_{\mathrm{V}}\sin\theta ,$$
where $\mathbf{F}_{\mathrm{V}}$, $\mathbf{F}_{\mathrm{CC}}$ and $\mathbf{F}_{\mathrm{ML}}$ are the vertical, craniocaudal and mediolateral forces, respectively, and θ is the angle formed by the coordinates of the limb contact, the center of the pole, and the horizontal (Fig. 1A,B). $\mathbf{F}_{\mathrm{V}}$ was always in the same direction; when $\mathbf{F}_{\mathrm{ML}}$ was occasionally medially directed, this component of the SRF was given a negative sign so that the same calculations could be used throughout.
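The equations above translate directly into code. The sketch below is our own Python rendering, with a sanity check that a contact on top of the pole (θ = 90°) reduces to the flat-substrate case.

```python
import numpy as np

def required_friction(f_v, f_cc, f_ml, theta):
    """Required coefficient of friction (shear/normal) on a cylindrical
    support.  theta is the angle (radians) between the horizontal and the
    line from the pole's centre to the contact point; a medially directed
    F_ML carries a negative sign, as in the text."""
    shear = np.hypot(f_ml * np.sin(theta) - f_v * np.cos(theta), f_cc)
    normal = f_ml * np.cos(theta) + f_v * np.sin(theta)
    return shear / normal

# on top of the pole (theta = 90 deg) this reduces to the flat-substrate
# case: mu_req = sqrt(F_ML^2 + F_CC^2) / F_V
mu_top = required_friction(f_v=1.0, f_cc=0.3, f_ml=0.4, theta=np.pi / 2)
# same forces applied lower on the pole's side (theta = 60 deg)
mu_side = required_friction(f_v=1.0, f_cc=0.3, f_ml=0.4, theta=np.pi / 3)
```

The force values here are arbitrary illustrative numbers in BW units, not measurements from the study.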
### Statistical analyses
Data from all individuals were pooled, and Systat 9.0 (Point Richmond, CA, USA) was used for all analyses. Least squares regression was used to determine the correlation of forces and impulses with speed for each substrate and limb-pair grouping. Because most of the regressions of vertical impulse versus speed were significant, a two-way analysis of covariance (ANCOVA) with speed as the covariate was used to determine differences among groups with respect to vertical impulse. However, because peak vertical force and the remaining impulses were typically not significantly correlated with speed, a two-way analysis of variance (ANOVA) was used to determine significant differences between substrates and limbs. We considered P ≤ 0.05 to be the cut-off for statistical significance, and data are reported as means ± standard error of the mean (s.e.m.) unless otherwise indicated.
### Gait characteristics
Locomotor speed was significantly lower on the arboreal trackway (arboreal, 1.00±0.03 m s–1; terrestrial, 1.51±0.05 m s–1; P<0.00001, N=76; Table 1). Because we used similar methods to encourage the animals to move quickly across the trackways, it is likely that the speeds we obtained on the arboreal trackway approached the animals' maximal efforts. Attempts to obtain slower trials on the terrestrial trackway yielded unacceptable acceleration or deceleration within trials.
Table 1.

General kinematics

| | N | Arboreal | Terrestrial |
|---|---|---|---|
| Speed (m s–1) | 76 | 1.00±0.02 (0.74, 1.31)* | 1.51±0.05 (0.72, 2.18) |
| Limb phase (%) | 38 | 46.0±0.6 (34.7, 52.8)* | 50.1±0.7 (41.3, 57.1) |
| Duty factor (%) | 38 | 44.6±0.8 (35.2, 58.2)* | 33.9±1.0 (24.0, 51.7) |

Values are means ± s.e.m. (minimum, maximum).

*Significant difference between substrates (P≤0.0003).
The animals predominantly used trotting (diagonal couplet) gaits on terrestrial and arboreal substrates (limb phase range: 34.7–57.1%; Fig. 3A). However, arboreal trials had a significantly lower limb phase than terrestrial trials (t-test, N=38, P=0.0003), and 22.7% of the arboreal trials were classified as a lateral-sequence diagonal-couplet gait (a trot-like gait with limb phase between 31.25 and 43.75%).
Fig. 3.
(A) Symmetrical gait plot for M. domestica during terrestrial and arboreal locomotion following Hildebrand (1976). Terrestrial and arboreal trials lie mostly within trots, although arboreal trials extend into smaller limb phases (lateral-sequence diagonal-couplet gait). (B) Relationship between stance duration and speed.
Duty factor was significantly larger in arboreal trials (arboreal, 42.4±0.8%; terrestrial, 30.2±1.0%; t-test, N=38, P<0.00001; Fig. 3A). Stance duration decreased with speed in a concave-up manner (Fig. 3B). The slope of stance duration versus speed was significantly steeper in the arboreal trials than in the terrestrial trials (one-way analysis of covariance, ANCOVA, N=76, P=0.00621). Stride frequency increased linearly with speed, and there was no significant difference in slope with respect to limb pair or substrate (two-way ANCOVA, N=76, P=0.8). However, the arboreal trials had a significantly higher stride frequency than the terrestrial trials (least squares means: arboreal, 6.92±1.01 Hz; terrestrial, 6.02±1.26 Hz; N=76, P=0.00012).
### Substrate reaction forces
Sample force traces from the arboreal and terrestrial trackways are shown in Fig. 4. Two patterns were observed in the terrestrial trials. Vertical force profiles for both fore and hind limbs on the arboreal trackway always yielded single peaks (Fig. 4A,B), as did most terrestrial trials (Fig. 4C,D). However, at the slowest speeds on the terrestrial substrate (below 1.5 m s–1 for the fore limbs and below 1.25 m s–1 for the hind limbs; Fig. 4E,F) vertical force profiles displayed a double peak.
Fig. 4.
Representative substrate reaction force (SRF) profiles from the terrestrial and arboreal trackways (speed indicated on each plot). (A,B) Fore limb and hind limb arboreal trials. (C,D) Typical terrestrial trials for the fore limb and hind limb. (E,F) Slow terrestrial fore limb and hind limb trials with double-peaked vertical force traces. Negative craniocaudal forces indicate a braking effort and positive indicates propulsion. Negative mediolateral force designates a medially directed SRF (laterally directed limb force) and positive designates a medially directed limb force. For clarity, craniocaudal force is shown in gray.
Peak vertical force was not correlated with speed for any substrate–limb pair except for the terrestrial hind limbs. Fore limbs had significantly higher peak vertical forces than hind limbs on each substrate (two-way ANCOVA, N=75, P<0.00001; Fig. 5A; Table 2). Peak vertical forces of fore and hind limbs were higher in the terrestrial trials than in the arboreal trials (N=75, P<0.00001). The interaction was also significant (N=75, P=0.01006), so that the substrate effect on peak vertical force was significantly more pronounced in the fore limbs than in the hind limbs. Relative to percent stance duration, peak vertical force occurred significantly earlier in hind limbs than in fore limbs, regardless of substrate (two-way analysis of variance, ANOVA, N=75, P<0.00001). Furthermore, this peak occurred significantly earlier in arboreal trials than in terrestrial trials (N=75, P=0.00753; Fig. 4E,F). The ratio of fore limb to hind limb peak vertical force was higher on the terrestrial substrate (1.702) than on the arboreal substrate (1.617; ratios were calculated using mean peak vertical forces for each limb and substrate).
Fig. 5.
Relationship of kinetic variables versus speed. (A) Peak vertical force. (B) Vertical impulse. (C) Braking impulse. (D) Propulsive impulse. The sample ellipses emphasize substrate and limb groups. The dimensions of the ellipses were determined from the standard deviations of the y and x variables; the sample covariance between y and x determines the orientation of the ellipse.
Table 2.

Peak vertical force (BW units), and vertical, fore–aft and mediolateral impulses (BW s)

| | Arboreal fore limb | Arboreal hind limb | Terrestrial fore limb | Terrestrial hind limb |
|---|---|---|---|---|
| Peak vertical force | 1.010±0.0285 (0.821, 1.309) | 0.625±0.0290 (0.383, 0.897) | 1.528±0.0724 (0.901, 2.075) | 0.898±0.0565 (0.577, 1.241) |
| Vertical impulse | 0.0423±0.00186 (0.0378, 0.0663) | 0.0239±0.00186 (0.0180, 0.0420) | 0.0519±0.00157 (0.0333, 0.0645) | 0.0216±0.00292 (0.0126, 0.0321) |
| Braking impulse | 0.00362±0.00031 (0.0014, 0.0065) | 0.00163±0.00029 (0.0003, 0.0053) | 0.00322±0.00040 (0.0008, 0.0069) | 0.00092±0.00017 (0.0002, 0.0029) |
| Propulsive impulse | 0.00368±0.00041 (0.0003, 0.0080) | 0.00164±0.00028 (0.00000, 0.0042) | 0.00245±0.00028 (0.0005, 0.0043) | 0.00312±0.00058 (0.0008, 0.0079) |
| Net fore–aft impulse | 0.00006±0.00058 (–0.0047, 0.0066) | 0.00000±0.00052 (–0.0053, 0.0039) | –0.00077±0.00054 (–0.0064, 0.0017) | 0.00221±0.00066 (–0.0007, 0.0074) |
| Net mediolateral impulse | 0.00444±0.00053 (–0.0021, 0.0096) | 0.00450±0.00044 (0.0008, 0.0107) | –0.00496±0.00159 (–0.0158, 0.0085) | –0.00310±0.00107 (–0.0121, 0.0050) |

Values are means ± s.e.m. (minimum, maximum), N=75.
Vertical impulse decreased significantly with speed in all substrate–limb groups except for the terrestrial hind limb group (Fig. 5B; Table 3). Slopes were significantly different from each other (two-way ANCOVA, N=75, P=0.00003), and in both fore and hind limbs the slope of vertical impulse versus speed was steeper on the arboreal substrate than on the terrestrial. On each substrate, the vertical impulse of the fore limbs had a significantly higher y-intercept than that of the hind limbs (least squares linear regression, 95% confidence intervals to determine slope and y-intercept differences, N=38, P<0.00001). Arboreal fore limb and hind limb slopes were not significantly different. On the terrestrial substrate, fore limb vertical impulse was negatively correlated with speed (N=16, P<0.00001), while terrestrial hind limb vertical impulse was not correlated with speed. The ratio of fore limb to hind limb vertical impulse was 2.047 on the terrestrial substrate and 1.727 on the arboreal.
Table 3.

Least squares regression results for vertical impulse (BW s) versus speed (m s–1)

| | Slope | 95% confidence intervals | R² | N | P-value |
|---|---|---|---|---|---|
| Arboreal fore limbs | –0.0472 | –0.0581, –0.0362 | 0.791 | 22 | <0.00001 |
| Arboreal hind limbs | –0.0303 | –0.0471, –0.0134 | 0.383 | 22 | 0.00126 |
| Terrestrial fore limbs | –0.0179 | –0.0258, –0.0010 | 0.599 | 16 | 0.00026 |
| Terrestrial hind limbs | – | – | – | 16 | 0.60063 |
Regardless of substrate, craniocaudal force traces were characterized by a braking phase followed by a propulsive phase (Fig. 4). On the terrestrial substrate, the fore limbs usually exerted a net braking impulse and the hind limbs a net propulsive impulse. However, when propulsive impulse was considered alone, there was no significant difference between fore and hind limbs on the terrestrial substrate. On the arboreal substrate, fore limbs exerted braking and propulsive impulses that were both strong and not significantly different from each other (Fig. 5C,D). Hind limbs similarly generated braking and propulsive impulses that were equal, but these impulse magnitudes were significantly lower than those produced by the fore limbs (N=75, P=0.00017). The net fore–aft impulse of fore and hind limbs on the arboreal substrate was nearly zero.
Within each substrate, there were no significant differences between limb pairs with respect to net mediolateral impulse (Fig. 6). On the terrestrial substrate, both limb pairs produced strong medially directed SRFs. Among the arboreal trials, the limbs generated strong medially directed limb forces (laterally directed SRFs). Differences between substrates were highly significant (N=75, P<0.00001).
Fig. 6.
Box-and-whisker plots of net mediolateral impulse for each substrate and extremity group. The line in the middle of each box plot represents the median; each box and each whisker corresponds to a fourth of the data; asterisks designate outliers; circles denote extreme outliers. Positive values indicate a medially directed limb force [laterally directed substrate reaction force (SRF)], and negative values indicate a laterally directed limb force. Substrates were significantly different (N=75, P<0.00001), but there were no differences between limbs within substrate groups.
### Limb placement and required coefficient of friction
On the arboreal trackway, the pes was usually placed considerably lower on the branch than the manus (Fig. 7A). The required coefficient of friction (μreq) at foot touchdown for all trials on both substrates was initially high, but quickly dropped for most of the stance phase, only to rise again at the end of the step (Fig. 7B). The highest values were typically found at touchdown. Because we used the filtered data for these calculations, it is unlikely that these high values were the result of impact noise. The median μreq was significantly higher in arboreal trials than in terrestrial trials (N=74, P<0.00001; Fig. 7C). In arboreal trials, hind limbs had a significantly higher median μreq than fore limbs (N=74, P=0.0008). No significant difference in μreq was found between limb pairs in the terrestrial trials (t-test, N=32, P=0.172).
Fig. 7.
(A) Manus and pes placement about the arboreal trackway. The location of the center of pressure of each foot is drawn to scale relative to branch cross-sectional shape. (B) Representative required coefficient of friction data from M. domestica on the arboreal trackway (1.10 m s–1). High values occur at foot touchdown and again at the end of the step. The broken line indicates the median value for this record (0.528). (C) Median required coefficient of friction for fore limbs and hind limbs on horizontal terrestrial and arboreal substrates. Ellipses are used to make each group more visible, and are calculated as in Fig. 5.
In this study Monodelphis domestica predominantly trotted on terrestrial and arboreal substrates, with occasional lateral-sequence diagonal-couplet trials observed on the arboreal trackway. This largely conforms to the gaits (footfall patterns) reported previously for this species (Pridmore, 1992; Lemelin et al., 2003; Parchman et al., 2003), although the present study analyzed fewer lateral-sequence walks on the terrestrial trackway simply because slower trials often failed to meet the steady-speed criterion. While more arboreally adapted opossums (brush-tailed opossum, Trichosurus vulpecula; monito del monte, Dromiciops australis; woolly opossum, Caluromys philander) also trot, they shift to diagonal-sequence gaits at slower locomotor speeds (White, 1990; Pridmore, 1994; Lemelin et al., 2003). This observation led Pridmore (1994) to conjecture that diagonal-sequence gaits are an arboreal adaptation in marsupials, a suggestion that parallels the arboreal, 'fine-branch' explanation for diagonal-sequence gaits in primates (e.g. Cartmill, 1972). That M. domestica did not resort to a diagonal-sequence gait when moving along arboreal substrates (Lemelin et al., 2003; present study) supports the contention that terrestrial animals may not have the same locomotor response to curved and narrower substrates as have arboreal specialists.
Substrate diameter does appear to have some effect on locomotor behavior in M. domestica. Narrow substrates (<12.5 mm) clearly challenge the species' stability, as individuals were frequently observed to falter and fall (Pridmore, 1994). Once habituated to the 20 mm arboreal trackway, M. domestica in the present study appeared quite capable of freely traversing the 1.5 m trackway, but we were unable to entice animals to travel at steady speeds higher than 1.32 m s–1. Thus, it appears that moderating speed is an important behavioral adjustment when moving on a more treacherous substrate.
M. domestica relies more heavily on the fore limbs than on the hind limbs to support its body weight on both terrestrial and arboreal trackways. The vertical component of the SRF reflects limb function in body-weight support. Peak vertical forces in terrestrial trials of M. domestica conform to the pattern of typical terrestrial mammals, namely, fore limb values exceed hind limb values (Demes et al., 1994; Schmitt and Lemelin, 2002; present study). The most likely explanation for this finding is that the center of mass in M. domestica lies closer to the fore limbs than to the hind limbs (about 40% of the distance between the shoulder and hip joints; A.R.L., unpublished data). Fore limbs continue to dominate in body-mass support when M. domestica moves along the arboreal trackway, but the ratio of fore limb to hind limb peak vertical force drops. This occurs largely because the hind limbs display somewhat higher than expected peak vertical forces relative to speed (as displayed by an extrapolation of the terrestrial hind limb slope into the arboreal speed range; Fig. 5A). This shift in body-weight support between fore and hind limbs is relatively small in comparison to the pattern exhibited by the arboreal C. philander (Schmitt and Lemelin, 2002): whereas peak vertical forces on arboreal substrates for the hind limbs are comparable in the two species (0.5–0.9 BW units in M. domestica; 0.6–1.0 in C. philander), C. philander's fore limb forces (0.5–0.8 BW units) fall below the range observed in M. domestica (0.8–1.3 BW units).
Comparisons of peak vertical force beyond the marsupials fail to uphold a strict terrestrial–arboreal dichotomy. Although most primates are hind limb dominant in body-weight support (Demes et al., 1994), the highly arboreal slow loris (Nycticebus coucang) and common marmoset (Callithrix jacchus) display higher fore limb peak vertical forces when moving pronograde (over the branch) on an arboreal trackway (Ishida et al., 1990; Schmitt, 2003a). Furthermore, the more terrestrial chipmunk and the more arboreal squirrel are both fore limb dominant in body-mass support when moving over a terrestrial trackway (Biewener, 1983).
The effect of substrate curvature on peak vertical force does, however, appear to be consistent across arboreal specialists and more terrestrial species. Primates and marsupials alike typically apply lower peak vertical forces when switching from a terrestrial trackway to an arboreal one (Schmitt, 1994, 1999, 2003b; Schmitt and Lemelin, 2002; this study). Furthermore, there is a significant reduction in peak vertical force as primates move on progressively smaller arboreal substrates (Schmitt, 2003b). A benefit of reducing vertical forces on arboreal substrates might be a concomitant reduction in branch oscillation (Demes et al., 1994; Schmitt, 1999). Therefore, while hind limb dominance in body-weight support is not a prerequisite for moving along an arboreal support, a reduction in vertical force application relative to terrestrial values does appear to be an inescapable consequence of arboreal locomotion, especially if arboreal speeds are slow. To support body weight, however, these lower forces must then be distributed over a longer interval. This could be accomplished with greater stance duration and/or stride frequency on the arboreal substrate (as was the case in this study).
Our data do suggest, however, that a small degree of posterior weight shift occurred on the arboreal substrate. First, the fore limb to hind limb ratios of peak vertical force (BW) and vertical impulse (BW s) were higher on the terrestrial substrate than on the arboreal substrate. Also, the time (relative to stance duration) at which the peak vertical force occurred was significantly delayed in both limb pairs on the arboreal substrate. Because the time at which peak vertical force occurs is closely associated with the time that a limb is supporting the greatest amount of body weight, if the center of mass is effectively moved posteriorly relative to the base of support, then both fore and hind limbs will support the greatest weight at a later portion of the stance phase. Posterior weight shift has been found for most primate species, whether on arboreal or terrestrial substrates (Schmitt and Lemelin, 2002); furthermore, this posterior weight shift tends to be exaggerated when arboreal specialists move on arboreal substrates.
Limb differences in vertical impulse largely parallel those of peak vertical force in M. domestica, except that vertical impulse tends to decrease with speed, as is common in mammals moving with symmetrical gaits. The decrease in vertical impulse with speed is driven primarily by a speed-dependent reduction in stance duration, more so than by any increase in peak vertical force. A concave-up negative relationship between support duration and speed is typical of terrestrial locomotion (e.g. Demes et al., 1990; Abourachid, 2001), a pattern that may reflect the need to move more cautiously to remain stable at slower speeds. The particularly long stance durations in the slower arboreal trials of M. domestica may indicate an increased perception of hazard by the animals when moving on an arboreal substrate. Because vertical impulse, which is responsible for body-weight support, decreases with speed faster on the arboreal trackway, the higher stride frequency on the arboreal trackway may be a way of compensating so that body-weight support is adequately maintained.
Our final note on vertical forces concerns force profile shape in the terrestrial trials: double-peaked at lower speeds and single-peaked at higher speeds. The same pattern has been reported in humans (Enoka, 2002), sheep and dogs (Jayes and Alexander, 1978), and horses (Biewener et al., 1983). A double-peaked vertical force is normally indicative of a vaulting mechanic (a 'mechanical walk'), that is, the animal is exchanging kinetic and gravitational potential energy via an inverted pendulum mechanism (Cavagna et al., 1977). Because our terrestrial trials were obtained with a force platform system that also captures whole-body forces, we evaluated the fluctuations of the external mechanical energies of the center of mass in the slowest trials. In spite of the double-peaked configuration of the trials, the whole-body mechanics indicated in-phase fluctuations in the kinetic and gravitational potential energies (phase shift <45°) and low recovery of mechanical energy via pendulum-like mechanisms (<20%). This is consistent with the findings of Parchman et al. (2003), who reported only trot and trot-like gaits, and only bouncing mechanics, in M. domestica. Parchman et al. (2003) suggested that some of the slower trials may represent a high-compliance locomotor behavior ('Groucho' running).
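The percent-recovery figure quoted above follows the positive-work bookkeeping of Cavagna et al. (1977): sum the positive increments of the kinetic and potential energy curves separately, then compare them with the positive increments of their sum. A minimal sketch with hypothetical center-of-mass energy series (not the study's actual data):

```python
def percent_recovery(ke, pe):
    """Percent of mechanical energy recovered by pendulum-like exchange.

    ke, pe: kinetic and gravitational potential energy of the center of
    mass sampled over a stride (equal-length sequences, in joules).
    """
    def positive_work(series):
        # Sum of the positive increments, i.e. the positive external work.
        return sum(max(b - a, 0.0) for a, b in zip(series, series[1:]))

    w_ke = positive_work(ke)
    w_pe = positive_work(pe)
    w_tot = positive_work([k + p for k, p in zip(ke, pe)])
    return 100.0 * (w_ke + w_pe - w_tot) / (w_ke + w_pe)
```

Perfectly out-of-phase fluctuations (vaulting) give a recovery near 100%; the in-phase fluctuations reported here give values near zero.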
Craniocaudal forces control forward impulsion, and all mammals moving at steady speed on a terrestrial substrate rely on the hind limbs to provide most of the propulsive force (Demes et al., 1994). Although craniocaudal forces fluctuate from an initial braking action to a final propulsive action in both fore and hind limbs, hind limbs generate greater propulsive impulses than do fore limbs. Previous studies on arboreal specialists report similar functions for locomotion on arboreal trackways (Ishida et al., 1990; Schmitt, 1994). Shifting between terrestrial and arboreal substrates resulted in either no significant changes in craniocaudal force (Schmitt, 1994) or smaller propulsive forces on arboreal substrates (fore limbs only were evaluated; Schmitt, 1999). By contrast, the results reported here suggest that terrestrial mammals may shift a greater role in forward propulsion to the fore limbs when moving on an arboreal support.
Most terrestrial mammals generate small and erratic mediolateral forces (e.g. Hodson et al., 2001), yet mediolateral forces in M. domestica are often substantial, with magnitudes that rival the craniocaudal forces (Fig. 4). The net direction of the mediolateral SRF is medial (reflective of a laterally directed limb force). This is consistent with SRF data on terrestrial animals that use a more sprawled and semi-erect posture, such as lizards and alligators (Christian, 1995; Willey et al., 2004). A similar orientation (but lesser magnitude) was also reported for higher primates (Schmitt, 2003c). The polarity of the mediolateral forces switched to reflect medially directed limb forces when M. domestica moved along the arboreal trackway. Not surprisingly, this is also the primary orientation for most other mammals when moving on arboreal substrates (Schmitt, 2003c). Thus, on the terrestrial substrate the mediolateral SRFs are 'tipping' (i.e. oriented in such a way as to provide stability against rolling), whereas on the arboreal substrate they are 'gripping'.
Thus, compared with more arboreally adapted mammals, M. domestica appears to retain fore limb dominance in body-weight support and to shift a greater role in forward impulsion to the fore limbs when moving on an arboreal substrate. We believe that the explanation of the dominance of the fore limb during arboreal locomotion lies in the differences in limb placement about the curved substrate. This is best illustrated by a consideration of friction. Kinoshita et al. (1997) estimated that the coefficient of static friction (μs) between 220-grit sandpaper and human skin is 1.67±0.24 (index finger) and 1.54±0.27 (thumb); Cartmill (1979) estimated values of μs in excess of five between the volar skin of primates and a plastic surface. It is likely that the true μs in our study was higher than the values reported by Kinoshita et al. (1997) because: (1) we used 60-grit sandpaper, which is rougher than 220-grit; (2) the claws and the palmar tubercles on the manus and pes of the opossums may improve the degree of interlocking between foot and substrate (as per Cartmill, 1974); and (3) the limbs did not demonstrably slip (implying that the true coefficient of static friction is higher than the mean μreq).
The median μreq, and thus the potential for slipping, was significantly higher in both fore and hind limbs in the arboreal trials than in the terrestrial trials, which verifies the more precarious nature of arboreal locomotion. The reason for this may be twofold. First, vertical force was significantly lower on the arboreal substrate than on the terrestrial substrate (in both limbs), so that there simply was less vertical force to contribute to the generation of normal force (although see the section above). The normal force is the stabilizing force for maintaining the position of the manus and pes on the substrate. Second, some proportion of vertical force results in a shear force across the surface of arboreal substrates because of the placement of the manus and pes laterally off the top of the branch. Consequently, a smaller proportion of vertical force is available to contribute to the normal force during arboreal locomotion.
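The effect of lateral limb placement on μreq can be made concrete with a simple model of a limb contacting a cylindrical branch; the contact angle θ and the restriction to the vertical force component are our illustrative assumptions, not measured values from this study:

```python
import math

def required_friction(f_vertical, theta_deg):
    """Minimum coefficient of static friction needed to resist slipping for
    a limb placed theta_deg around a horizontal branch (0 = top of branch;
    valid for 0 <= theta_deg < 90).

    Simplification: only the vertical force is considered, so the normal
    component is Fv*cos(theta) and the shear component is Fv*sin(theta).
    """
    theta = math.radians(theta_deg)
    normal = f_vertical * math.cos(theta)
    shear = f_vertical * math.sin(theta)
    return shear / normal  # reduces to tan(theta)
```

In this model a manus placed near the top of the branch (say θ = 10°) needs μ ≈ 0.18, while a pes placed more laterally (θ = 40°) needs μ ≈ 0.84, illustrating why the lower, more lateral hind limb placement raises μreq regardless of force magnitude.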
Similarly, the positioning of manus and pes can explain the significantly greater μreq of hind limbs on the arboreal trackway. Hind limbs were nearly always placed lower and more laterally on the branch than fore limbs, and they supported significantly less body weight than the fore limbs. The difference in μreq and foot placement between fore and hind limbs on the arboreal trackway may also serve to explain why the fore limbs were apparently so dominant in body-weight support, braking and propulsion. By placing the manus closer to the top of the branch, the fore limbs were more stable than the hind limbs and so they were recruited to assume a greater role in propulsion than is normally found during terrestrial locomotion. The hind limbs, with their more lateral placement on the branch and their smaller role in body-weight support, were perhaps less able to exert significantly higher propulsive forces without slipping.
### Behavioral adaptations for arboreal locomotion
The results of this paper suggest that there are three important factors that animals may regulate in order to maintain stability during locomotion: speed, gait and limb placement. We propose that all three of these factors should be analyzed when conducting locomotor analyses, especially if different substrates are used.
This study examines arboreal locomotion in a terrestrial mammal with a primitive, generalized morphology and behavior (Lee and Cockburn, 1985), in the context of comparing terrestrial generalists and arboreal specialists. Although some animals move within arboreal habitats with impressive skill and speed (e.g. squirrels and many primates), many arboreal specialists apparently use speed reduction to maintain stability on branches and to reduce detection by predators (e.g. slow loris, woolly opossum, chameleon). Thus, speed reduction may serve as a common behavioral adjustment to arboreal locomotion.
On the terrestrial and arboreal substrates, M. domestica almost always kinematically trotted, although this species tended somewhat to dissociate the diagonal couplets and list towards the lateral sequence trot-like gait on arboreal trackways (Hildebrand, 1976). That this gait shift may be reflective of a need to increase stability is supported by data from Lammers (2001) that indicate that opossums use lateral sequence trot-like and single-foot gaits at slow speeds and/or on narrow (a quarter body diameter) supports. By contrast, most primates and the woolly opossum (Lemelin et al., 2003) use a diagonal sequence trot-like gait on both arboreal and terrestrial substrates. It appears that divergent gait (footfall) patterns exist between arboreal specialists and terrestrial generalists.
When arboreal specialists move on branches that are narrower than their body diameter, but too wide to grasp with opposable digits, do they place manus and pes on branches in different locations than terrestrial generalists? Data and/or tracings of images indicate that, like M. domestica, the lesser mouse lemur (Microcebus murinus), fat-tailed dwarf lemur (Cheirogaleus medius), slow loris (Nycticebus coucang) and the brown lemur (Eulemur fulvus) may place their manus relatively dorsally on the branch and the pes more laterally (Cartmill, 1974; Jouffroy and Petter, 1990; Larson et al., 2001). However, illustrations of chameleon (Chameleo spp.) locomotion suggest that the manus and pes contact the branch in approximately the same location around a large arboreal support (manus: Peterson, 1984; pes: Higham and Jayne, 2004). The common opossum (Didelphis marsupialis) places its manus slightly laterally to the pes on narrow supports (Cartmill, 1974). Finally, the aye-aye (Daubentonia madagascariensis) contacts branches in a wide variety of locations (Krakauer et al., 2002). It is not yet possible to determine whether the kinetic and kinematic patterns observed in the present study represent a general behavioral adaptation to the challenges of arboreal locomotion by terrestrial mammals or simply a solution specific to this species.
We thank Jennifer Hancock, Ron Heinrich, Steve Reilly, Daniel Schmitt, Nancy Stevens, Nancy Tatarek, Larry Witmer and two anonymous reviewers for their critiques on the manuscript. John Bertram, Kay Earls, David Lee and Steve Reilly designed and constructed the force transducers and the LabVIEW virtual instruments. We are grateful to Emily Bevis, Josh Hill, Andy Parchman and ChiChi Peng for their long hours collecting data. We also thank Don Miles for his statistical expertise, Randy Mulford for machining the transducer parts and Eric Lindner for animal care. Supported by NSF IBN 9727212 and IBN 0080158 to A.R.B.
### References

Abourachid, A. (2001). Kinematic parameters of terrestrial locomotion in cursorial (ratites), swimming (ducks), and striding birds (quail and guinea fowl). Comp. Biochem. Physiol. A 131, 113-119.

Bertram, J. E. A., Lee, D. V., Todhunter, R. J., Foels, W. S., Williams, A. J. and Lust, G. (1997). Multiple force platform analysis of the canine trot: a new approach to assessing basic characteristics of locomotion. Vet. Comp. Orthop. Traumatol. 10, 160-169.

Biewener, A. A. (1983). Locomotory stresses in the limb bones of two small mammals: the ground squirrel and chipmunk. J. Exp. Biol. 103, 131-154.

Biewener, A. A. (1990). Biomechanics of mammalian terrestrial locomotion. Science 250, 1097-1103.

Biewener, A. A. and Full, R. J. (1992). Force platform and kinematic analysis. In Biomechanics – Structures and Systems: A Practical Approach (ed. A. A. Biewener), pp. 45-73. Oxford: IRL Press at Oxford University Press.

Biewener, A. A., Thomason, J., Goodship, A. and Lanyon, L. E. (1983). Bone stress in the horse fore limb during locomotion at two different gaits: a comparison of two experimental methods. J. Biomech. 16, 565-576.

Cartmill, M. (1972). Arboreal adaptations and the origin of the order Primates. In The Functional and Evolutionary Biology of Primates (ed. R. Tuttle), pp. 97-112. Chicago: Aldine–Atherton Inc.

Cartmill, M. (1974). Pads and claws in arboreal locomotion. In Primate Locomotion (ed. F. A. Jenkins, Jr), pp. 45-.

Cartmill, M. (1979). The volar skin of primates: its frictional characteristics and their functional significance. Am. J. Phys. Anthrop. 50, 497-510.

Cavagna, G. A., Heglund, N. C. and Taylor, C. R. (1977). Mechanical work in terrestrial locomotion: two basic mechanisms for minimizing energy expenditure. Am. J. Physiol. 233, R243-R261.

Christian, A. (1995). Zur Biomechanik der Lokomotion vierfüßiger Reptilien (besonders der Squamata). Cour. Forsch.-Inst. Senckenberg 180, 1-58.

Demes, B., Jungers, W. L. and Nieschalk, C. (1990). Size- and speed-related aspects of quadrupedal walking in slender and slow lorises. In Gravity, Posture and Locomotion in Primates (ed. F. K. Jouffroy, M. H. Stack and C. Niemitz), pp. 175-197. Florence: Editrice Il Sedicesimo.

Demes, B., Larson, S. G., Stern, J. T., Jr, Jungers, W. L., Biknevicius, A. R. and Schmitt, D. (1994). The kinetics of primate quadrupedalism: 'hind limb drive' reconsidered. J. Hum. Evol. 26, 353-374.

Enoka, R. M. (2002). Neuromechanics of Human Movement. Third edition. Champaign, IL: Human Kinetics.

Higham, T. E. and Jayne, B. C. (2004). Locomotion of lizards on inclines and perches: hind limb kinematics of an arboreal specialist and a terrestrial generalist. J. Exp. Biol. 207, 233-248.

Hildebrand, M. (1976). Analysis of tetrapod gaits: general considerations and symmetrical gaits. In Neural Control of Locomotion, vol. 18 (ed. R. M. Herman, S. Grillner, P. Stein and D. G. Stuart), pp. 203-206. New York: Plenum.

Hodson, E., Clayton, H. M. and Lanovaz (2001). The hind limb in walking horses: 1. Kinematics and ground reaction forces. Equine Vet. J. 33, 38-43.

Ishida, H., Jouffroy, F. K. and Nakano, Y. (1990). Comparative dynamics of pronograde and upside down horizontal quadrupedalism in the slow loris (Nycticebus coucang). In Gravity, Posture and Locomotion in Primates (ed. F. K. Jouffroy, M. H. Stack and C. Niemitz), pp. 209-220. Florence: Editrice Il Sedicesimo.

Jayes, A. S. and Alexander, R. McN. (1978). Mechanics of locomotion of dogs (Canis familiaris) and sheep (Ovis aries). J. Zool. Lond. 185, 289-308.

Jouffroy, F. K. and Petter, A. (1990). Gravity-related kinematic changes in lorisine horizontal locomotion in relation to position of the body. In Gravity, Posture and Locomotion in Primates (ed. F. K. Jouffroy, M. H. Stack and C. Niemitz), pp. 199-208. Florence: Editrice Il Sedicesimo.

Kimura, T. (1985). Bipedal and quadrupedal walking of primates: comparative dynamics. In Primate Morphophysiology, Locomotor Analyses and Human Bipedalism (ed. S. Kondo), pp. 81-104. Tokyo: University of Tokyo Press.

Kinoshita, H., Bäckström, L., Flanagan, J. R. and Johansson, R. S. (1997). Tangential torque effects on the control of grip forces when holding objects with a precision grip. J. Neurophysiol. 78, 1619-1630.

Krakauer, E., Lemelin, P. and Schmitt, D. (2002). Hand and body position during locomotor behavior in the aye-aye (Daubentonia madagascariensis). Am. J. Primatol. 57, 105-118.

Ladine, T. A. and Kissell, R. R., Jr (1994). Escape behavior of Virginia opossums. Am. Midl. Nat. 132, 234-238.

Lammers, A. R. (2001). The effects of incline and branch diameter on the kinematics of arboreal locomotion. Am. Zool. 41, 1500.

Larson, S. G., Schmitt, D., Lemelin, P. and Hamrick, M. (2001). Limb excursion during quadrupedal walking: how do primates compare to other mammals? J. Zool. Lond. 255, 353-365.

Lee, A. K. and Cockburn, A. (1985). Evolutionary Ecology of Marsupials. Cambridge, UK: Cambridge University Press.

Lemelin, P., Schmitt, D. and Cartmill, M. (2003). Footfall patterns and interlimb co-ordination in opossums (Family Didelphidae): evidence for the evolution of diagonal sequence walking gaits in primates. J. Zool. Lond. 260, 423-429.

Montgomery, W. I. (1980). The use of arboreal runways by the woodland rodents, Apodemus sylvaticus (L.), A. flavicollis (Melchior) and Clethrionomys glareolus (Schreber). Mammal. Rev. 10, 189-195.

Newcastle, P. G. (1657). La methode et l'evention nouvelle de dresser le chevaux. Anvers.

Nowak, R. M. (1999). Walker's Mammals of the World, vol. 1. Sixth edition. Baltimore: Johns Hopkins University Press.

Palma, R. E. and Spotorno, A. E. (1999). Molecular systematics of marsupials based on the rRNA 12S mitochondrial gene: the phylogeny of Didelphimorphia and of the living fossil microbiotheriid Dromiciops gliroides Thomas. Mol. Phyl. Evol. 13, 525-535.

Parchman, A. J., Reilly, S. M. and Biknevicius, A. R. (2003). Whole-body mechanics and gaits in the gray short-tailed opossum Monodelphis domestica: integrating patterns of locomotion in a semi-erect mammal. J. Exp. Biol. 206, 1379-1388.

Peterson, J. A. (1984). The locomotion of Chamaeleo (Reptilia: Sauria) with particular reference to the fore limb. J. Zool. Lond. 202, 1-42.

Pridmore, P. A. (1992). Trunk movements during locomotion in the marsupial Monodelphis domestica (Didelphidae). J. Morphol. 211, 137-146.

Pridmore, P. A. (1994). Locomotion in Dromiciops australis (Marsupialia: Microbiotheriidae). Austral. J. Zool. 42, 679-699.

Reilly, S. M. and Biknevicius, A. R. (2003). Integrating kinetic and kinematic approaches to the analysis of terrestrial locomotion. In Vertebrate Biomechanics and Evolution (ed. V. L. Bels, J.-P. Gasc and A. Casinos), pp. 243-265. Oxford: BIOS Scientific Publishers Ltd.

Schmitt, D. (1994). Fore limb mechanics as a function of substrate type during quadrupedalism in two anthropoid primates. J. Hum. Evol. 26, 441-457.

Schmitt, D. (1999). Compliant walking in primates. J. Zool. Lond. 248, 149-160.

Schmitt, D. (2003a). Evolutionary implications of the unusual walking mechanics of the common marmoset (C. jacchus). Am. J. Phys. Anthrop. 122, 28-37.

Schmitt, D. (2003b). Substrate size and primate fore limb mechanics: implications for understanding the evolution of primate locomotion. Int. J. Primatol. 24, 1023-1036.

Schmitt, D. (2003c). Mediolateral reaction forces and fore limb anatomy in quadrupedal primates: implications for interpreting locomotor behavior in fossil primates. J. Hum. Evol. 44, 47-58.

Schmitt, D. and Lemelin, P. (2002). Origins of primate locomotion: gait mechanics of the woolly opossum. Am. J. Phys. Anthrop. 118, 231-238.

Wagner, J. A. (1842). Diagnosen neuer Arten brasilischer Säugethiere. Arch. Naturgesch. 8, 356-362.

White, T. D. (1990). Gait selection in the brush-tail possum (Trichosurus vulpecula), the northern quoll (Dasyurus hallucatus), and the Virginia opossum (Didelphis virginiana). J. Mammal. 71, 79-84.

Willey, J. W., Biknevicius, A. R., Reilly, S. M. and Earls, K. D. (2004). The tale of the tail: limb function and locomotor mechanics in Alligator mississippiensis. J. Exp. Biol. 207, 553-563.
https://solarenergyengineering.asmedigitalcollection.asme.org/energyresources/article-abstract/130/4/043201/724641/Quantification-of-Uncertainty-in-Reserve
Decline curve analysis is the most commonly used technique to estimate reserves from historical production data for the evaluation of unconventional resources. Quantifying the uncertainty of reserve estimates is an important issue in decline curve analysis, particularly for unconventional resources since forecasting future performance is particularly difficult in the analysis of unconventional oil or gas wells. Probabilistic approaches are sometimes used to provide a distribution of reserve estimates with three confidence levels ($P10$, $P50$, and $P90$) and a corresponding 80% confidence interval to quantify uncertainties. Our investigation indicates that uncertainty is commonly underestimated in practice when using traditional statistical analyses. The challenge in probabilistic reserve estimation is not only how to appropriately characterize probabilistic properties of complex production data sets, but also how to determine and then improve the reliability of the uncertainty quantifications. In this paper, we present an advanced technique for the probabilistic quantification of reserve estimates using decline curve analysis. We examine the reliability of the uncertainty quantification of reserve estimates by analyzing actual oil and gas wells that have produced to near-abandonment conditions, and also show how uncertainty in reserve estimates changes with time as more data become available. We demonstrate that our method provides a more reliable probabilistic reserve estimation than other methods proposed in the literature. These results have important impacts on economic risk analysis and on reservoir management.
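One common way to attach P10/P50/P90 levels to a decline-curve forecast is to fit a decline model and bootstrap its residuals. The sketch below is a generic illustration, not the method advanced in this paper: it assumes an Arps exponential decline q(t) = q0·e^(−Dt), a log-linear least-squares fit, and simple residual resampling, and it labels the bootstrap percentiles directly (petroleum exceedance conventions may swap the P10 and P90 labels).

```python
import math
import random

random.seed(1)  # reproducible resampling

def fit_exponential_decline(t, q):
    # Log-linear least squares for ln q = ln q0 - D*t (Arps exponential).
    n = len(t)
    y = [math.log(v) for v in q]
    tbar, ybar = sum(t) / n, sum(y) / n
    slope = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
             / sum((ti - tbar) ** 2 for ti in t))
    return math.exp(ybar - slope * tbar), -slope  # q0, D

def reserves(q0, D, horizon):
    # Cumulative production to `horizon`: integral of q0*exp(-D*t).
    return q0 * (1.0 - math.exp(-D * horizon)) / D

def bootstrap_reserves(t, q, horizon, n_boot=500):
    q0, D = fit_exponential_decline(t, q)
    resid = [math.log(v) - (math.log(q0) - D * ti) for ti, v in zip(t, q)]
    estimates = []
    for _ in range(n_boot):
        # Resample residuals onto the fitted trend, refit, re-estimate.
        qb = [math.exp(math.log(q0) - D * ti + random.choice(resid)) for ti in t]
        b_q0, b_D = fit_exponential_decline(t, qb)
        estimates.append(reserves(b_q0, b_D, horizon))
    estimates.sort()
    return (estimates[int(0.10 * n_boot)],   # low percentile
            estimates[int(0.50 * n_boot)],   # median
            estimates[int(0.90 * n_boot)])   # high percentile
```

As the paper argues, intervals produced this simply tend to be too narrow; the spread reflects only fitting noise, not model error.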
https://www.researchgate.net/publication/283579663_Nilakantha's_accelerated_series_for_pi
# Nilakantha's accelerated series for pi
## Abstract
We show how the idea behind a formula for π discovered by the Indian mathematician and astronomer Nilakantha (1445–1545) can be developed into a general series acceleration technique which, when applied to the Gregory-Leibniz series, gives the formula [...] with convergence as 13.5^{–n}, in much the same way as the Euler transformation gives [...] with convergence as 2^{–n}. Similar transformations lead to other accelerated series for π, including three "BBP-like" formulas, all of which are collected in an appendix. Optimal convergence is achieved using Chebyshev polynomials.
ACTA ARITHMETICA
* (201*)
Nilakantha’s accelerated series for π
by
David Brink (København)
1. Introduction. Unbeknownst to its European discoverers, Gregory (1638–1675) and Leibniz (1646–1716), the formula

(1) $\frac{\pi}{4} = \sum_{n=0}^{\infty} \frac{(-1)^n}{2n+1}$

had been found in India already in the fourteenth or fifteenth century. It first appeared in Sanskrit verse in the book Tantrasangraha from about 1500 by the Indian mathematician, astronomer and universal genius Nilakantha (1445–1545). Unlike Gregory and Leibniz, Nilakantha also gave approximations of the tail sums and found a more rapidly converging series,

(2) $\frac{\pi}{4} = \frac{3}{4} + \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+3)^3 - (2n+3)}.$

The reader is referred to Roy [14] for more details on this fascinating story.

We show here that (2) is the first step of a certain series transformation that eventually leads to the accelerated series

(3) $\pi = \sum_{n=0}^{\infty} \frac{(5n+3)\,n!\,(2n)!}{2^{n-1}\,(3n+2)!},$

in much the same way as the difference operator leads to the Euler transform

(4) $\pi = \sum_{n=0}^{\infty} \frac{2^{n+1}\,n!\,n!}{(2n+1)!}.$

We call (3) the Nilakantha transform of (1) and note that it converges roughly as $13.5^{-n}$, whereas the Euler transform converges as $2^{-n}$. Applying the Nilakantha transformation to the Newton–Euler formula [8]

(5) $\frac{\pi}{2\sqrt{2}} = \sum_{n=0}^{\infty} \frac{(-1)^{n(n-1)/2}}{2n+1}$

gives the accelerated series (A.5) (see the Appendix), also with convergence $13.5^{-n}$. Similar transformations of these and other formulas lead to other accelerated series for $\pi$ which are collected in the Appendix in a standardized form, with (3) corresponding to (A.1).

2010 Mathematics Subject Classification: Primary 65B10; Secondary 40A25.
Key words and phrases: series for π, convergence acceleration.
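As a quick numerical illustration (our own check, not part of the paper), the terms of (3) and (4) can be generated directly and their partial sums compared with π; the term ratio of (3) tends to 2/27 = 1/13.5, that of (4) to 1/2:

```python
import math

def nilakantha_terms(N):
    # Terms of series (3): (5n+3) n! (2n)! / (2^(n-1) (3n+2)!).
    return [(5 * n + 3) * math.factorial(n) * math.factorial(2 * n)
            / (2 ** (n - 1) * math.factorial(3 * n + 2)) for n in range(N)]

def euler_terms(N):
    # Terms of series (4): 2^(n+1) (n!)^2 / (2n+1)!.
    return [2 ** (n + 1) * math.factorial(n) ** 2 / math.factorial(2 * n + 1)
            for n in range(N)]
```

Twenty-five terms of (3) already give π to full double precision, whereas (4) needs roughly fifty.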
Several of the formulas in the Appendix are well known from the literature. A series equivalent to (A.1) is attributed to Gosper in [2, eq. (16.81)]. The series (A.2) is due to Adamchik and Wagon [1] and is one of the simplest of all "BBP-like" formulas [2, Chapter 10]. The even simpler BBP formula (A.14) appears in [3, eq. (18)] with two terms at a time, attributed to Knuth. I note that the formulas in the Appendix emerged quite naturally as accelerations of simple series, and all were derived by hand.
As we shall see, this acceleration method can also be used to transform divergent series into convergent ones, in a process more properly called series deceleration. For example, decelerating the divergent series

(6) $\frac{\pi}{3\sqrt{3}} = \sum_{n=0}^{\infty} \frac{(-3)^n}{2n+1}$

gives the convergent, fractional BBP formula (A.18). This argument, of course, is no proof, but I also give an alternative, rigorous demonstration.
The general principle behind these formulas is an acceleration scheme that allows one to approximate an alternating, "sporadic" sum

$S = a_0 - a_k + a_{2k} - a_{3k} + \cdots$

from a finite number of terms $a_0, a_1, \ldots, a_n$ of the complete sequence. The general theory, in particular the idea of letting the $a_i$ be the moments of a measure, writing $S$ as an integral with respect to that measure, and approximating the integrand by means of Chebyshev polynomials, is strongly indebted to that in [7], which it generalizes from $k = 1$ to arbitrary $k$.
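As a concrete instance (our own illustration, anticipating the moments $a_i = 1/(i+1)$ of Lebesgue measure used in Section 2), the sporadic sum with $k = 2$ skips every other moment and reproduces the Gregory–Leibniz value $\pi/4$:

```python
import math

def sporadic_sum(k, terms):
    # S = a_0 - a_k + a_2k - ... with a_i = 1/(i+1) (Lebesgue moments),
    # i.e. a partial sum of integral_0^1 dx/(1+x^k).
    return sum((-1) ** i / (i * k + 1) for i in range(terms))
```

With $k = 1$ this is the alternating harmonic series for $\ln 2$; with $k = 2$ it is the Gregory–Leibniz series (1). Both converge painfully slowly, which is precisely what the acceleration scheme addresses.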
As another example of this method, we show how to compute numerically the constant

(7) $K = \sum_{n=0}^{\infty} \frac{(-1)^n\,\zeta(n+1/2)}{2n+1} = -2.1577829966\ldots,$

making the most of a small number of given integer and half-integer zeta-values, $\zeta(n)$ and $\zeta(n+1/2)$. The constant $K$ is known as the Schneckenkonstante and arises in connection with the Spiral of Theodorus [6].
2. Alternating, sporadic series. Let $\mu$ be a finite, signed measure on $[0,1]$ with moments

(8) $a_i = \int_0^1 x^i \, d\mu, \quad i \geq 0,$

converging to zero for $i \to \infty$, and consider the alternating, sporadic series

(9) $S = \int_0^1 \frac{d\mu}{1+x^k} = \sum_{i=0}^{\infty} (-1)^i a_{ik}$

for some integer $k \geq 1$. Let there be given a polynomial

(10) $P(x) = \sum_{i=0}^{m+k-1} b_i x^i$

with $P(u) = 1$ for $u^k = -1$, and write

(11) $Q(x) = \frac{1 - P(x)}{1 + x^k} = \sum_{i=0}^{m-1} c_i x^i.$

Define a new measure $\mu'$ with density $P(x)$ with respect to $\mu$, i.e., $d\mu' = P(x)\,d\mu$, and moments

$a_i' = \int_0^1 x^i \, d\mu' = \sum_{j=0}^{m+k-1} b_j a_{i+j},$

and consider the transformed series

$S' = \int_0^1 \frac{d\mu'}{1+x^k} = \sum_{i=0}^{\infty} (-1)^i a_{ik}'.$

Write the difference between the old and the new series as

$\Delta S = S - S' = \int_0^1 Q(x)\,d\mu = \sum_{i=0}^{m-1} c_i a_i.$
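A minimal working instance of this transformation step, iterated (our own illustration, not an example from the paper): take $k = 1$ and $\mu$ = Lebesgue measure on $[0,1]$, so that $a_i = 1/(i+1)$ and $S = \int_0^1 dx/(1+x) = \ln 2$, and choose the degree-one polynomial $P(x) = (1-x)/2$, which satisfies $P(-1) = 1$. Then $Q(x) = 1/2$, the transformed moments are $a_i' = (a_i - a_{i+1})/2$, each step contributes $\Delta S = a_0/2$, and the iteration reproduces the classical Euler transformation with error shrinking like $2^{-n}$:

```python
import math

def accelerated_log2(steps, extra=2):
    # Moments of Lebesgue measure on [0,1]: a_i = 1/(i+1), so that
    # S = integral_0^1 dx/(1+x) = ln 2 = a_0 - a_1 + a_2 - ...  (k = 1).
    a = [1.0 / (i + 1) for i in range(steps + extra)]
    total = 0.0
    for _ in range(steps):
        total += a[0] / 2.0            # Delta S = c_0 * a_0 with c_0 = 1/2
        a = [(a[i] - a[i + 1]) / 2.0   # new moments from dmu' = P(x) dmu
             for i in range(len(a) - 1)]
    return total
```

Each list update is one application of the transformation; the leftover tail $S^{(n)}$ is bounded by $2^{-n} \ln 2$, matching the bound (15) below with $M = 1/2$.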
Repeating this process gives a sequence of measures $\mu^{(n)}$ with densities $d\mu^{(n)} = P(x)^n\,d\mu$ and moments

(12) $a_i^{(n)} = \int_0^1 x^i \, d\mu^{(n)}$

as well as a sequence of transformed series

(13) $S^{(n)} = \int_0^1 \frac{d\mu^{(n)}}{1+x^k} = \sum_{i=0}^{\infty} (-1)^i a_{ik}^{(n)}$
with differences

(14) $\Delta S^{(n)} = S^{(n)} - S^{(n+1)} = \int_0^1 Q(x)\,d\mu^{(n)} = \sum_{i=0}^{m-1} c_i a_i^{(n)}.$

After $n$ steps one has

$S = \sum_{i=0}^{n-1} \Delta S^{(i)} + S^{(n)}.$

If $M$ is the maximum of $|P(x)|$ on the interval $[0,1]$, then (13) gives the bound

(15) $|S^{(n)}| \leq M^n \int_0^1 \frac{d|\mu|}{1+x^k},$

cf. Remark 1 below. So if $M < 1$, one gets the accelerated series

(16) $S = \sum_{n=0}^{\infty} \Delta S^{(n)}$

with convergence $M^n$, i.e., $\Delta S^{(n)} = O(M^n)$.
Remark 1. A signed measure may take negative as well as positive values. Even if $\mu$ were required to be positive, the transformed measure $\mu'$ would still take negative values if the density function $P(x)$ did. By Jordan's Decomposition Theorem [4], one can write $\mu = \mu^+ - \mu^-$ with unique positive measures $\mu^+$ and $\mu^-$ with disjoint supports. The theory of integration with respect to a signed measure can thus be reduced to that of a usual, positive measure. Absolute integrals such as the one appearing in (15) are defined by means of the total variation $|\mu| = \mu^+ + \mu^-$.

The assumption that the moments (8) converge to zero is equivalent to $\mu(\{1\}) = 0$ and guarantees that

$\frac{1}{1+x^k} = 1 - x^k + x^{2k} - \cdots$

can be integrated termwise, say, by Lebesgue's Dominated Convergence Theorem.
It is a result of Hausdorff [12, Satz I] that a sequence of real numbers a_0, a_1, a_2, … is the sequence of moments of a finite, positive measure on [0, 1] if and only if it is totally monotonic, i.e.,

(17)   \nabla^n a_i \ge 0 \quad \text{for all } i, n \ge 0.

As above, \nabla a_i = a_i - a_{i+1} denotes the negated forward difference operator. Similarly, also by Hausdorff [12, Satz II], the a_i are the moments of a finite, signed measure on [0, 1] if and only if

(18)   \sup_{n \ge 0} \sum_{i=0}^{n} \binom{n}{i} \left| \nabla^{n-i} a_i \right| < \infty.

The latter condition thus implies that (a_i) is the difference between two totally monotonic sequences. It is seen directly from the identity

\sum_{i=0}^{n} \binom{n}{i} \nabla^{n-i} a_i = a_0

that (17) implies (18).
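Condition (17) is easy to probe by machine. The following Python sketch (an editorial illustration, not part of the paper, which computes in Pari) checks total monotonicity of the Lebesgue moments a_i = 1/(i+1) with exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def nabla_n(a, n, i):
    # Negated forward differences: ∇^n a_i = Σ_j (-1)^j C(n,j) a_{i+j}
    return sum((-1)**j * comb(n, j) * a(i + j) for j in range(n + 1))

a = lambda i: Fraction(1, i + 1)  # moments of Lebesgue measure on [0,1]

# Condition (17): ∇^n a_i ≥ 0 for all i, n ≥ 0 (checked here for i, n < 8)
totally_monotonic = all(nabla_n(a, n, i) >= 0
                        for n in range(8) for i in range(8))
print(totally_monotonic)  # → True
```

By (19) below, ∇^n a_i = n! i!/(n+i+1)! > 0, so the check succeeds for every range, not just the one sampled here.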
Note that the moments (8) need not be of the same sign, not even eventually, so that in reality the series (9) is not necessarily alternating.

For later use we also note that a_i = 1/(i+1) and a^*_i = 1/(2i+1) are the moments of the usual Lebesgue measure μ and the measure μ* with density dμ* = dμ/(2\sqrt{x}), respectively.
Remark 2. In order to compute S numerically, we approximate it by the first difference \Delta S. To minimize the error S', we have to choose P(x) as a polynomial of high degree that approximates zero uniformly on [0, 1]. In light of (11), this suggests taking

P(x) = 1 - (1 + x^k)\,Q(x),

where the polynomial Q(x) is a Chebyshev approximation to 1/(1 + x^k). This will be carried out in more detail in Sections 4 and 6.

Remark 3. On the other hand, if we wish to transform S into an exact, accelerated series (16), we have to choose P(x) as a simple polynomial of low degree, so that the terms (14) can be computed explicitly. If, for example, P(x) is of the form x^j (1-x)^n, then the terms of the transformed series S' are a'_i = \nabla^n a_{i+j}.
The binomial sum

\sum_{j=0}^{n} \binom{n}{j} \frac{(-1)^j}{x+j} = \frac{n!}{x(x+1)\cdots(x+n)}

is well known. It holds as an identity in \mathbb{Q}(x) and can be easily shown as a partial fraction decomposition. The substitution x = i + 1 thus gives

(19)   \nabla^n a_i = \frac{n!\, i!}{(n+i+1)!}

for the sequence a_i = 1/(i+1). Similarly, letting x = i + 1/2 gives

(20)   \nabla^n a^*_i = \frac{4^n\, n!\, (2i)!\, (n+i)!}{i!\, (2n+2i+1)!}

for a^*_i = 1/(2i+1). This will be of use later.
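The closed forms (19) and (20) can be machine-checked for small parameters. This Python snippet (illustrative, not from the paper) compares the negated differences with the stated formulas using exact rationals:

```python
from fractions import Fraction
from math import comb, factorial

def nabla_n(a, n, i):
    # ∇^n a_i = Σ_j (-1)^j C(n,j) a_{i+j}
    return sum((-1)**j * comb(n, j) * a(i + j) for j in range(n + 1))

a     = lambda i: Fraction(1, i + 1)      # moments of dμ = dx
astar = lambda i: Fraction(1, 2 * i + 1)  # moments of dμ* = dx/(2√x)

# Identity (19): ∇^n a_i = n! i! / (n+i+1)!
ok19 = all(nabla_n(a, n, i) == Fraction(factorial(n) * factorial(i),
                                        factorial(n + i + 1))
           for n in range(7) for i in range(7))
# Identity (20): ∇^n a*_i = 4^n n! (2i)! (n+i)! / (i! (2n+2i+1)!)
ok20 = all(nabla_n(astar, n, i) ==
           Fraction(4**n * factorial(n) * factorial(2*i) * factorial(n + i),
                    factorial(i) * factorial(2*n + 2*i + 1))
           for n in range(7) for i in range(7))
print(ok19 and ok20)  # → True
```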
3. The Nilakantha transform. Let k = 2 throughout this section. Then P(x) must satisfy P(±i) = 1. We can take for P(x) any product of

-x^2, \quad \frac{x(1-x)^2}{2}, \quad -\frac{(1-x)^4}{4}.

Let

(21)   P(x) = \frac{x(1-x)^2}{2},

so that

Q(x) = 1 - \frac{x}{2}.

The transformed series S' has terms

a'_i = \frac{\nabla^2 a_{i+1}}{2} = \frac{a_{i+1} - 2a_{i+2} + a_{i+3}}{2},

hence the first step of the transformation becomes

(22)   S = a_0 - \frac{a_1}{2} + \sum_{i=0}^{\infty} (-1)^i \, \frac{a_{2i+1} - 2a_{2i+2} + a_{2i+3}}{2}.

The n times transformed series S^{(n)} has terms

a^{(n)}_i = \frac{1}{2^n} \nabla^{2n} a_{i+n},

so that we get the Nilakantha transform

(23)   S = \sum_{n=0}^{\infty} \frac{1}{2^n} \nabla^{2n} \left( a_n - \frac{a_{n+1}}{2} \right)

with convergence 13.5^{-n}, since

(24)   M = P(1/3) = 2/27.
Example 4. To accelerate the Gregory–Leibniz series (1), we let a_i = 1/(i+1). The first step (22) of the transformation is precisely Nilakantha's series (2). To compute the fully accelerated series (23), we use the identity (19) and get (A.1).

Example 5. Letting P(x) be one of

(25)   -\frac{(1-x)^4}{4}, \quad -\frac{x^3(1-x)^2}{2}, \quad -\frac{x^4(1-x)^4}{4}

gives the three series (A.2), (A.3), (A.4) with M = 1/4, 54/3125, 1/1024, respectively.
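As a numerical sanity check of (A.1), added here for illustration, its partial sums can be compared against 3π/2; forty terms already exhaust double precision, in line with the 13.5^{-n} convergence:

```python
import math
from math import comb

# Partial sums of (A.1): 3π/2 = Σ_n 2^(-n) C(3n,n)^(-1) (4/(3n+1) + 1/(3n+2))
s = sum((4 / (3*n + 1) + 1 / (3*n + 2)) / (2**n * comb(3*n, n))
        for n in range(40))
err = abs(s - 3 * math.pi / 2)
print(err < 1e-13)  # → True
```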
Example 6. The Newton–Euler series (5) can be rewritten as

\frac{\pi}{2\sqrt{2}} = \sum_{n=0}^{\infty} \frac{(-1)^n}{4n+1} + \sum_{n=0}^{\infty} \frac{(-1)^n}{4n+3}

with two alternating, sporadic series corresponding to a^*_i = 1/(2i+1) and a^{**}_i = 1/(2i+3). Accelerating each series separately using

P(x) = \frac{x(1-x)^2}{2}, \quad -\frac{(1-x)^4}{4}, \quad -\frac{x^3(1-x)^2}{2}

and adding the results gives (A.5), (A.6), (A.7), respectively.
Remark 7. It follows in advance from (24) that (A.1) and (A.5) converge as 13.5^{-n}, but this is also evident from the expressions themselves, by Stirling's Formula. A similar remark applies to all other formulas in the Appendix.
Remark 8. The factors

\frac{(2n)!\,(2n)!\,(3n)!}{n!\,(6n)!}, \qquad \frac{(2n)!\,(5n)!\,(6n)!}{(3n)!\,(10n)!}

appearing in (A.5) and (A.7) happen to be reciprocal integers by a criterion of Landau (1900), anticipated by Chebyshev (1852) and Catalan (1874). Such expressions are not too common and have been completely classified (in a suitable sense) [5].
Remark 9. I stress that there is no evidence that Nilakantha derived (2) the way we have done here, much less that he knew (3). Of course, the transformation (22) is straightforward to verify directly, making (1) and (2) essentially equivalent.
4. Numerical approximations. Let k ≥ 1 be given. The Chebyshev polynomials of the first kind, T_m(x), are given recursively by T_0(x) = 1, T_1(x) = x and

T_m(x) = 2x\,T_{m-1}(x) - T_{m-2}(x).

The zeros of T_m(x) are

\eta_i = \cos\frac{(2i-1)\pi}{2m}, \quad i = 1, \ldots, m.

Let Q(x) be the Chebyshev approximation of order m of 1/(1+x^k) on the interval [0, 1], i.e., Q(x) is the polynomial of degree less than m agreeing with 1/(1+x^k) at the m points (1+\eta_i)/2. Since (1+\eta_i)/2 are the zeros of T_m(1-2x), Q(x) satisfies

Q(x) \equiv \frac{1}{1+x^k} \quad \text{modulo } T_m(1-2x)

and can be computed from this congruence by the Euclidean Algorithm. Thus, P(x) will be the polynomial of degree less than m + k with zeros (1+\eta_i)/2 and P(u) = 1 for u^k = -1. Lagrange interpolation gives the explicit expression

P(x) = T_m(1-2x) \sum_{u^k=-1} \frac{u}{\beta_m k} \cdot \frac{1+x^k}{u-x}

with \beta_m = T_m(1-2u).
In order to evaluate the maximum M of |P(x)| for 0 ≤ x ≤ 1 as m → ∞, first note |T_m(1-2x)| ≤ 1. For a fixed u, the numbers \beta_m satisfy \beta_0 = 1, \beta_1 = 1-2u and the recursion \beta_m = (2-4u)\beta_{m-1} - \beta_{m-2}. Hence, \beta_m = (\lambda_1^m + \lambda_2^m)/2 with the roots \lambda_i of the characteristic polynomial \lambda^2 - (2-4u)\lambda + 1. We may suppose |\lambda_1| > |\lambda_2|, and conclude that |\beta_m| \sim |\lambda_1|^m/2. Finally, let \lambda be the minimum of |\lambda_1| as u runs through the roots of unity with u^k = -1. Then M = O(\lambda^{-m}) as m → ∞.

Some values of \lambda are given in Table 1. Note that the value \lambda = 5.828 for k = 1 was found in [7].

Table 1. Values of \lambda

  k   1      2      3      4      5      6      7      8      9      10
  λ   5.828  4.612  3.732  3.220  2.890  2.659  2.488  2.356  2.250  2.164
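Table 1 (and, with θ < 1, Table 2 below) can be reproduced directly from the characteristic polynomial. The helper below is an illustrative Python sketch, with the parametrization u = e^{iπ(2j+1)/k} of the roots of u^k = −1 as the only added assumption:

```python
import cmath

def lam(k, theta=1.0):
    # Minimum over u^k = -1 of the larger-modulus root of
    # λ² − (2 − 4uθ^(−1/k)) λ + 1 = 0  (cf. the recursion for β_m)
    best = float("inf")
    for j in range(k):
        u = cmath.exp(1j * cmath.pi * (2 * j + 1) / k)  # u^k = -1
        b = 2 - 4 * u / theta ** (1 / k)
        d = cmath.sqrt(b * b - 4)
        l1 = max((b + d) / 2, (b - d) / 2, key=abs)
        best = min(best, abs(l1))
    return best

print([round(lam(k), 3) for k in range(1, 6)])  # cf. Table 1
assert abs(lam(1) - (3 + 2 * 2**0.5)) < 1e-12   # 5.828... for k = 1
assert abs(lam(2) - 4.612) < 5e-4               # Table 1, k = 2
assert abs(lam(1, 0.5) - 9.899) < 5e-4          # Table 2, k = 1, θ = 1/2
```

For k = 1 the minimum is attained at u = −1, where the polynomial is λ² − 6λ + 1 with larger root 3 + 2√2 ≈ 5.828, the value from [7].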
Example 10. Suppose we want to compute numerically the alternating, sporadic series

S = \int_0^1 \frac{d\mu}{1+x^2} = \sum_{i=0}^{\infty} (-1)^i a_{2i},

and that we have at our disposal the terms a_0, a_1, a_2, \ldots, a_{99}. Letting k = 1 and m = 50 and using only every second term, a_0, a_2, a_4, \ldots, a_{98}, we expect a relative error of 5.828^{-50}, or 38 correct, significant digits of S. Letting k = 2 and m = 100, using all 100 available terms, we expect an error of 4.612^{-100}, or 66 correct digits.

Consider the constant (7), and rewrite it as

K = \zeta\!\left(\frac{1}{2}\right) - \sum_{i=0}^{\infty} \frac{(-1)^i \zeta(i+3/2)}{2i+3}

in order to bypass the singularity at z = 1. Suppose that the zeta-values ζ(i/2 + 3/2) are available for i = 0, 1, \ldots, 99. Using the first method gives 42 digits of S, while the second method gives 70 digits, in agreement with our expectations. The second method can be carried out in Pari as follows:
T=subst(poltchebi(100),x,1-2*x)
Q=lift(1/Mod(1+x^2,T))
c(i)=polcoeff(Q,i)
a(i)=zeta(i/2+3/2)/(i+3)
K=zeta(1/2)-sum(i=0,99,c(i)*a(i))
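The Pari program above works with exact polynomial arithmetic modulo T_100(1−2x). As an illustrative Python analogue (an editorial sketch, not from the paper), the same approximation ΔS = Σ c_i a_i can be obtained by interpolating 1/(1+x) at the Chebyshev points in floating point; here it is demonstrated on the Gregory–Leibniz series, where the moments are a^*_i = 1/(2i+1) and S = π/4. A zeta-based computation of K would proceed identically, given the values ζ(i/2+3/2).

```python
import math

def cheb_accel(moments, m, f=lambda x: 1 / (1 + x)):
    """Approximate S = ∫ dμ/(1+x) by ΔS = Σ c_i a_i, where the c_i are the
    monomial coefficients of the degree-(m-1) interpolant of f at the
    Chebyshev points (1+η_j)/2 on [0,1].  (The paper instead computes the
    same Q(x) exactly, modulo T_m(1-2x).)"""
    xs = [(1 + math.cos((2*j - 1) * math.pi / (2*m))) / 2
          for j in range(1, m + 1)]
    dd = [f(x) for x in xs]
    for lvl in range(1, m):                      # divided differences
        for j in range(m - 1, lvl - 1, -1):
            dd[j] = (dd[j] - dd[j - 1]) / (xs[j] - xs[j - lvl])
    coef = [dd[m - 1]]                           # expand Newton form
    for j in range(m - 2, -1, -1):
        new = [0.0] * (len(coef) + 1)
        for t, ct in enumerate(coef):
            new[t] -= ct * xs[j]                 # coef(x)·(x − xs[j]) + dd[j]
            new[t + 1] += ct
        new[0] += dd[j]
        coef = new
    return sum(ct * moments(t) for t, ct in enumerate(coef))

astar = lambda i: 1 / (2*i + 1)                  # moments of dμ* = dx/(2√x)
err10 = abs(cheb_accel(astar, 10) - math.pi / 4)
err14 = abs(cheb_accel(astar, 14) - math.pi / 4)
print(err10, err14)
assert err10 < 1e-5 and err14 < err10 / 10       # error shrinks like 5.828^(-m)
```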
Remark 11. In the above example, it was assumed that the terms a_0, a_1, \ldots, a_{99} were simply given in advance. In practice, one might obviously have to compute them first. Using k = 2 has the advantage that the integer zeta-values ζ(n) are much faster to compute than the half-integer values ζ(n + 1/2). On the other hand, these values then need to be computed to a precision of up to 10 extra digits due to the numerically larger coefficients c_i. Also, the c_i can be computed particularly efficiently for k = 1 (cf. [7]).

Remark 12. For k = 2, we can compare the (optimal) value \lambda = 4.612 obtained from Chebyshev polynomials with the values \lambda = M^{-1/\deg P} from the polynomials P(x) given in Section 3. Of these, the Nilakantha transformation (21) has the best convergence, i.e., \lambda = 2.381. The other three transformations (25) have \lambda = 1.414, 2.252, 2.378, respectively.
5. Geometrically converging series. Let μ be a finite, signed measure on [0, 1] with arbitrary moments (8), and consider the alternating, geometrically converging series

(26)   S = \int_0^1 \frac{d\mu}{1+\theta x^k} = \sum_{i=0}^{\infty} (-\theta)^i a_{ki}

for k ≥ 1 and 0 < θ < 1. Let P(x) be given as in (10) with P(u/\theta^{1/k}) = 1 for u^k = -1, and write

Q(x) = \frac{1 - P(x)}{1 + \theta x^k}.

As before, we define a sequence of measures μ^(n) with dμ^(n) = P(x)^n dμ and moments (12), and we get a sequence of transformed series

S^{(n)} = \int_0^1 \frac{d\mu^{(n)}}{1+\theta x^k}

with differences (14) as well as an accelerated series (16) with \Delta S^{(n)} = O(M^n), where M is the maximum of |P(x)| on [0, 1].
Example 13. The arcus tangent series

(27)   \frac{\arctan\sqrt{\theta}}{\sqrt{\theta}} = \sum_{i=0}^{\infty} \frac{(-\theta)^i}{2i+1}

has the form (26) with k = 1 and a_i = 1/(2i+1). To accelerate it, P(x) must satisfy P(-1/\theta) = 1, and we can take any product of

-\theta x, \quad \frac{\theta(1-x)}{\theta+1}.

Letting

(28)   P(x) = \frac{\theta(1-x)}{\theta+1}

and using (20) gives Euler's accelerated series

(29)   \frac{\arctan\sqrt{\theta}}{\sqrt{\theta}} = \frac{1}{\theta+1} \sum_{n=0}^{\infty} \left( \frac{4\theta}{\theta+1} \right)^n \frac{n!\,n!}{(2n+1)!}

with M = \theta/(\theta+1).
Note that the original series (27) converges for |θ| < 1, whereas the accelerated series (29) converges for |θ| < |θ+1|, or Re(θ) > -1/2. The preceding discussion shows that the two series agree for 0 < θ < 1. The Identity Theorem for holomorphic functions and the fact that uniform convergence preserves holomorphicity show that (29) holds for Re(θ) > -1/2.

Inserting θ = 1, 1/3, 3 gives three classical formulas such as (4). Also note that (4) is the Euler transform of the Gregory–Leibniz series, i.e., the acceleration corresponding to the negated difference operator \nabla, or

P(x) = \frac{1-x}{2}.
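Euler's accelerated series (29) is easily tested numerically. The sketch below (not from the paper) sums it via the term ratio and compares with arctan√θ/√θ, including the point θ = 3, which lies outside the unit disc where (27) converges but inside Re θ > −1/2:

```python
import math

def euler_arctan(theta, terms=100):
    # (29): arctan(√θ)/√θ = 1/(θ+1) · Σ_n (4θ/(θ+1))^n · n!n!/(2n+1)!
    s, term = 0.0, 1.0 / (theta + 1)          # n = 0 term
    for n in range(terms):
        s += term
        # ratio of consecutive terms: (4θ/(θ+1)) · (n+1)² / ((2n+2)(2n+3))
        term *= 4 * theta / (theta + 1) * (n + 1)**2 / ((2*n + 2) * (2*n + 3))
    return s

for theta in (1.0, 1/3, 3.0):
    target = math.atan(math.sqrt(theta)) / math.sqrt(theta)
    assert abs(euler_arctan(theta) - target) < 1e-9
print("ok")
```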
Historical note 14. Euler develops the Euler transformation and derives (4) and (29) from (1) and (27) in his Institutiones Calculi Differentialis [9, Part II, Chapter 1] from 1755. Much earlier, he had given a series for arcsin²x essentially equivalent to (29) in a letter to Johann Bernoulli dated 10 December 1737 [15]. Euler proves (29) again (twice) as well as the Machin-like formula π = 20 arctan 1/7 + 8 arctan 3/79, and computes the two terms with 13 and 17 correct decimals, respectively, but without adding them, in 1779 [10]. He extends this calculation and computes 21 correct decimals of π in [11]. Several sources on the chronology of π state that Euler did this calculation in 1755 and/or in less than an hour. It seems from the above, though, that the calculation could not have been carried out before 1779. Regarding the duration, the relevant passage reads: "totusque hic calculus laborem unius circiter horae consum[p]sit" (and the entire calculation took about an hour).
Example 15. Taking

P(x) = -\frac{\theta^2 x(1-x)}{\theta+1}

rather than (28) gives the accelerated series

\frac{\arctan\sqrt{\theta}}{\sqrt{\theta}} = \frac{1}{4(\theta+1)} \sum_{n=0}^{\infty} \left( \frac{-4\theta^2}{\theta+1} \right)^n \binom{4n}{2n}^{-1} \left( \frac{3\theta+4}{4n+1} - \frac{\theta}{4n+3} \right)

with M = \theta^2/4(\theta+1).
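The series of Example 15 can be verified the same way; the following Python sketch (illustrative) checks it for θ = 1, 1/3, 3, i.e., for the cases giving (A.8), (A.9), (A.10):

```python
import math
from math import comb

def ex15(theta, terms=80):
    # arctan(√θ)/√θ = 1/(4(θ+1)) Σ_n (−4θ²/(θ+1))^n C(4n,2n)^(-1)
    #                 [ (3θ+4)/(4n+1) − θ/(4n+3) ]
    s = 0.0
    for n in range(terms):
        s += ((-4 * theta**2 / (theta + 1)) ** n / comb(4*n, 2*n)
              * ((3*theta + 4) / (4*n + 1) - theta / (4*n + 3)))
    return s / (4 * (theta + 1))

for theta in (1.0, 1/3, 3.0):
    target = math.atan(math.sqrt(theta)) / math.sqrt(theta)
    assert abs(ex15(theta) - target) < 1e-10
print("ok")
```

Note that θ = 3 converges here because M = θ²/4(θ+1) = 9/16 < 1, even though (27) itself diverges there.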
By the same argument as before, this formula holds on a complex domain bounded by the curve |θ|² = 4|θ+1|, or

(x^2+y^2)^2 = 16((x+1)^2 + y^2)

in real variables. This quartic, algebraic curve is a limaçon of Pascal, named after Étienne Pascal, the father of Blaise Pascal, and first studied in 1525 by Albrecht Dürer [13] (¹).

Fig. 1. Limaçon of Pascal

Inserting θ = 1, 1/3, 3 gives (A.8), (A.9), (A.10).

Note that the small loop around −1 is not included in the domain of convergence, corresponding nicely to the fact that arctan has a singularity at ±i.
Example 16. Letting

P(x) = -\frac{\theta^3 x(1-x)^2}{(\theta+1)^2}

gives the formidable expression

\frac{\arctan\sqrt{\theta}}{\sqrt{\theta}} = \frac{1}{9(\theta+1)^2} \sum_{n=0}^{\infty} \left( \frac{-16\theta^3}{(\theta+1)^2} \right)^n \frac{(2n)!\,(2n)!\,(3n)!}{n!\,(6n)!} \left( \frac{5\theta^2+15\theta+9}{6n+1} - \frac{\theta^2}{6n+5} \right)

with M = 4\theta^3/27(\theta+1)^2.

The domain of convergence is bounded by the sextic, limaçon-like curve

16(x^2+y^2)^3 = 729((x+1)^2+y^2)^2.

Inserting θ = 1, 1/3, 3 gives (A.11), (A.12), (A.13).
(¹) I am grateful to my friend Kasper K. S. Andersen for identifying this curve.

Formulas (A.8) and (A.11) are examples of van Wijngaarden's transformation [7], i.e., they are the accelerations of the Gregory–Leibniz series corresponding to the polynomials

P(x) = -\frac{x(1-x)}{2}, \quad -\frac{x(1-x)^2}{4}.
Example 17. For k = 2 and a_i = 1/(i+1), the general arctan series (27) cannot be accelerated as in the previous examples. It may, however, for specific choices of θ. Let θ = 1/3. Then we must have P(±i\sqrt{3}) = 1 and can take any product of

-\frac{x^2}{3}, \quad -\frac{(1-x)^3}{8}.

Letting

P(x) = -\frac{(1-x)^3}{8}, \quad \frac{x^2(1-x)^3}{24}

gives the series (A.14), (A.15) with M = 1/8, 9/6250, respectively.
Example 18. The convergence of the accelerated series (16), and its identity with (26), was proved under Hausdorff's condition (18) and 0 < θ < 1. It is a common phenomenon, however, that acceleration techniques work in more general settings and even for divergent series [7, Remark 6]. Consider the divergent series (6), obtained by inserting θ = 3 into (27). Let k = 2 and θ = 3. Then P(x) must satisfy

P\!\left( \pm\frac{i}{\sqrt{3}} \right) = 1.

We can take for P(x) any product of

-3x^2, \quad \frac{9x(1-x)^3}{8}, \quad -\frac{27(1-x)^6}{64}.

Letting P(x) be

\frac{9x(1-x)^3}{8}, \quad -\frac{27x^3(1-x)^3}{8}, \quad -\frac{27(1-x)^6}{64}

gives the three series (A.16), (A.17), (A.18) with M = 243/2048, 27/512, 27/64, respectively.

These formulas can be checked numerically to many digits, but of course the above argument is no proof (although I like to think that Euler would have appreciated it).
Remark 19. A quick, rigorous proof of (A.18) could go as follows. Write Mercator's Formula with six terms at a time,

-\log(1-z) = \sum_{n=0}^{\infty} z^{6n} \left( \frac{z}{6n+1} + \cdots + \frac{z^6}{6n+6} \right).

Insert z = e^{i\pi/6}\sqrt{3}/2 and take imaginary parts to get (A.18), q.e.d.

Similar proofs of (A.2) and (A.14) are possible: Insert z = (1+i)/2 and z = e^{i\pi/3}/2 into Mercator's Formula with four and three terms at a time, respectively.
6. Numerical approximations again. To approximate the geometrically converging series (26) numerically, let k ≥ 1 be given, but now let Q(x) agree with 1/(1+\theta x^k) at the points (1+\eta_i)/2, i.e.,

Q(x) \equiv \frac{1}{1+\theta x^k} \quad \text{modulo } T_m(1-2x).

Then

P(x) = T_m(1-2x) \sum_{u^k=-1} \frac{u}{\beta_m k} \cdot \frac{1+\theta x^k}{u - \theta^{1/k} x}

with \beta_m = T_m(1 - 2u\theta^{-1/k}). Again, |\beta_m| \sim |\lambda_1|^m/2 with \lambda_1 the numerically greater root of the characteristic polynomial \lambda^2 - (2 - 4u\theta^{-1/k})\lambda + 1. We conclude that M = O(\lambda^{-m}) as m → ∞ with

\lambda = \min\{ |\lambda_1| : u^k = -1 \}.

Table 2 gives \lambda = \lambda_\theta for various values of k and θ.

Table 2. Values of \lambda_\theta

  k    λ_{1/2}  λ_{1/3}  λ_{1/4}  λ_{1/5}  λ_{1/6}
  1    9.899    13.93    17.94    21.95    25.96
  2    6.129    7.328    8.352    9.263    10.09
  3    4.607    5.254    5.782    6.236    6.636
  4    3.829    4.264    4.612    4.905    5.160
  5    3.357    3.685    3.942    4.157    4.343
  6    3.040    3.303    3.508    3.678    3.823
  7    2.811    3.031    3.202    3.342    3.462
  8    2.636    2.827    2.973    3.094    3.196
  9    2.499    2.667    2.796    2.902    2.991
  10   2.388    2.539    2.654    2.748    2.828
Example 20. Write

K = \frac{\pi}{4} - 1 + \zeta\!\left(\frac{1}{2}\right) - \sum_{i=0}^{\infty} \frac{(-1)^i \left( \zeta(i+3/2) - 1 \right)}{2i+3}

to get a geometrically converging series, with θ = 1/2. Suppose again the zeta-values ζ(i/2 + 3/2) are given for i = 0, 1, \ldots, 99. Using only the terms a_0, a_2, a_4, \ldots, a_{98}, we expect an error of 9.899^{-50}, or 50 correct digits (cf. Table 2). Using all 100 available terms, we expect an error of 6.129^{-100}, or 79 correct digits. In practice, the two methods give 53 and 82 digits, respectively, confirming the theory. The second method can be carried out in Pari as follows:
T=subst(poltchebi(100),x,1-2*x)
Q=lift(1/Mod(1+x^2/2,T))
c(i)=polcoeff(Q,i)
a(i)=2^(i/2)*(zeta(i/2+3/2)-1)/(i+3)
K=Pi/4-1+zeta(1/2)-sum(i=0,99,c(i)*a(i))
Appendix. Series for π

(A.1)   \frac{3\pi}{2} = \sum_{n=0}^{\infty} \frac{1}{2^n} \binom{3n}{n}^{-1} \left( \frac{4}{3n+1} + \frac{1}{3n+2} \right),

(A.2)   \pi = \sum_{n=0}^{\infty} \left( -\frac{1}{4} \right)^n \left( \frac{2}{4n+1} + \frac{2}{4n+2} + \frac{1}{4n+3} \right),

(A.3)   \frac{125\pi}{2} = \sum_{n=0}^{\infty} \left( -\frac{1}{2} \right)^n \binom{5n}{2n}^{-1} \left( \frac{208}{5n+1} - \frac{22}{5n+2} + \frac{8}{5n+3} - \frac{7}{5n+4} \right),

(A.4)   1024\pi = \sum_{n=0}^{\infty} \left( -\frac{1}{4} \right)^n \binom{8n}{4n}^{-1} \left( \frac{3183}{8n+1} + \frac{117}{8n+3} - \frac{15}{8n+5} - \frac{5}{8n+7} \right),

(A.5)   \frac{9\pi}{\sqrt{2}} = \sum_{n=0}^{\infty} 8^n \, \frac{(2n)!\,(2n)!\,(3n)!}{n!\,(6n)!} \left( \frac{19}{6n+1} + \frac{1}{6n+5} \right),

(A.6)   16\sqrt{2}\,\pi = \sum_{n=0}^{\infty} (-64)^n \binom{8n}{4n}^{-1} \left( \frac{75}{8n+1} + \frac{13}{8n+3} - \frac{3}{8n+5} - \frac{5}{8n+7} \right),

(A.7)   \frac{625\pi}{\sqrt{2}} = \sum_{n=0}^{\infty} (-8)^n \, \frac{(2n)!\,(5n)!\,(6n)!}{(3n)!\,(10n)!} \left( \frac{1339}{10n+1} + \frac{184}{10n+3} - \frac{16}{10n+7} - \frac{11}{10n+9} \right),
(A.8)   2\pi = \sum_{n=0}^{\infty} (-2)^n \binom{4n}{2n}^{-1} \left( \frac{7}{4n+1} - \frac{1}{4n+3} \right),

(A.9)   \frac{8\pi}{\sqrt{3}} = \sum_{n=0}^{\infty} \left( -\frac{1}{3} \right)^n \binom{4n}{2n}^{-1} \left( \frac{15}{4n+1} - \frac{1}{4n+3} \right),

(A.10)   \frac{16\pi}{3\sqrt{3}} = \sum_{n=0}^{\infty} (-9)^n \binom{4n}{2n}^{-1} \left( \frac{13}{4n+1} - \frac{3}{4n+3} \right),

(A.11)   9\pi = \sum_{n=0}^{\infty} (-4)^n \frac{(2n)!\,(2n)!\,(3n)!}{n!\,(6n)!} \left( \frac{29}{6n+1} - \frac{1}{6n+5} \right),

(A.12)   24\sqrt{3}\,\pi = \sum_{n=0}^{\infty} \left( -\frac{1}{3} \right)^n \frac{(2n)!\,(2n)!\,(3n)!}{n!\,(6n)!} \left( \frac{131}{6n+1} - \frac{1}{6n+5} \right),

(A.13)   \frac{16\pi}{3\sqrt{3}} = \sum_{n=0}^{\infty} (-27)^n \frac{(2n)!\,(2n)!\,(3n)!}{n!\,(6n)!} \left( \frac{11}{6n+1} - \frac{1}{6n+5} \right),

(A.14)   \frac{4\pi}{3\sqrt{3}} = \sum_{n=0}^{\infty} \left( -\frac{1}{8} \right)^n \left( \frac{2}{3n+1} + \frac{1}{3n+2} \right),

(A.15)   \frac{500\pi}{\sqrt{3}} = \sum_{n=0}^{\infty} \frac{1}{24^n} \binom{5n}{2n}^{-1} \left( \frac{872}{5n+1} + \frac{57}{5n+2} + \frac{12}{5n+3} + \frac{7}{5n+4} \right),

(A.16)   \frac{256\pi}{3\sqrt{3}} = \sum_{n=0}^{\infty} \left( \frac{9}{8} \right)^n \binom{4n}{n}^{-1} \left( \frac{103}{4n+1} + \frac{72}{4n+2} + \frac{15}{4n+3} \right),

(A.17)   \frac{1024\pi}{3\sqrt{3}} = \sum_{n=0}^{\infty} \left( -\frac{27}{8} \right)^n \binom{6n}{3n}^{-1} \left( \frac{637}{6n+1} + \frac{6}{6n+3} - \frac{27}{6n+5} \right),

(A.18)   \frac{64\pi}{3\sqrt{3}} = \sum_{n=0}^{\infty} \left( -\frac{27}{64} \right)^n \left( \frac{16}{6n+1} + \frac{24}{6n+2} + \frac{24}{6n+3} + \frac{18}{6n+4} + \frac{9}{6n+5} \right).
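As a floating-point spot-check of the Appendix (added for illustration, not part of the paper), four of the series are summed below and compared with their closed forms:

```python
import math
from math import comb, factorial

def F(n):  # the factorial ratio (2n)!(2n)!(3n)! / (n!(6n)!) from Remark 8
    return factorial(2*n)**2 * factorial(3*n) / (factorial(n) * factorial(6*n))

A2  = sum((-1/4)**n * (2/(4*n+1) + 2/(4*n+2) + 1/(4*n+3)) for n in range(60))
A5  = sum(8**n * F(n) * (19/(6*n+1) + 1/(6*n+5)) for n in range(40))
A14 = sum((-1/8)**n * (2/(3*n+1) + 1/(3*n+2)) for n in range(40))
A16 = sum((9/8)**n / comb(4*n, n) * (103/(4*n+1) + 72/(4*n+2) + 15/(4*n+3))
          for n in range(40))

assert abs(A2 - math.pi) < 1e-12
assert abs(A5 - 9 * math.pi / math.sqrt(2)) < 1e-11
assert abs(A14 - 4 * math.pi / (3 * math.sqrt(3))) < 1e-12
assert abs(A16 - 256 * math.pi / (3 * math.sqrt(3))) < 1e-10
print("Appendix formulas (A.2), (A.5), (A.14), (A.16) verified")
```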
References

[1] V. Adamchik and S. Wagon, A simple formula for π, Amer. Math. Monthly 104 (1997), 852–855.
[2] J. Arndt and C. Haenel, Pi—Unleashed, 2nd ed., Springer, Berlin, 2001.
[3] D. H. Bailey, A compendium of BBP-type formulas for mathematical constants, 2013; www.davidhbailey.com/dhbpapers/bbp-formulas.pdf.
[4] P. Billingsley, Probability and Measure, 3rd ed., Wiley, New York, 1995.
[5] J. W. Bober, Factorial ratios, hypergeometric series, and a family of step functions, J. London Math. Soc. 79 (2009), 422–444.
[6] D. Brink, The spiral of Theodorus and sums of zeta-values at the half-integers, Amer. Math. Monthly 119 (2012), 779–786.
[7] H. Cohen, F. Rodriguez Villegas and D. Zagier, Convergence acceleration of alternating series, Experiment. Math. 9 (2000), 3–12.
[8] L. Euler, De summis serierum reciprocarum, Comment. Acad. Sci. Petropol. 7 (1740), 123–134; online: eulerarchive.maa.org, Eneström index E41.
[9] L. Euler, Institutiones Calculi Differentialis…, St. Petersburg, 1755; [E212].
[10] L. Euler, Investigatio quarundam serierum, quae ad rationem peripheriae circuli ad diametrum vero proxime definiendam maxime sunt accommodatae, Nova Acta Acad. Sci. Imp. Petropol. 11 (1798), 133–149; [E705].
[11] L. Euler, Series maxime idoneae pro circuli quadratura proxime invenienda, in: Opera Postuma I, St. Petersburg, 1862, 288–298; [E809].
[12] F. Hausdorff, Momentprobleme für ein endliches Intervall, Math. Z. 16 (1923), 220–248.
[13] J. D. Lawrence, A Catalog of Special Plane Curves, Dover, New York, 1972.
[14] R. Roy, The discovery of the series formula for π by Leibniz, Gregory and Nilakantha, Math. Mag. 63 (1990), 291–306.
[15] P. Stäckel, Eine vergessene Abhandlung Leonhard Eulers über die Summe der reziproken Quadrate der natürlichen Zahlen, Bibliotheca Math. 8 (1908), 37–60.
David Brink
Akamai Technologies
Larslejsstræde 6
1451 København K, Denmark
E-mail: dbrink@akamai.com
and in revised form on 22.8.2015 (7975)
https://discourse.mcneel.com/t/data-matching-tree/59746
|
# Data Matching Tree
Hi,
For confirmation and generalisation.
In this simple situation, the line component is matching points placed into 2 trees A and B.
A possesses the following branches:
{0;0;1}
{0;0;3}
B possesses the following branches:
{0;0;0}
{0;0;1}
The line component matches the first branch from tree A with the first branch from tree B. And the second branch from tree A with second branch from B regarless of the fact that the first branch of A has the same path as the second branch from B.
Can this be generalized ?
In other words, that matching at the branch level depends ONLY on the order in which the branches appears in the list of tree paths.
Thanks !
Jean
This is a great resource.
grasshopper primer
In other words, that matching at the branch level depends ONLY on the order in which the branches appears in the list of tree paths.
If the parameter accesses are not tree type, you can expect the branches to be related in order and not by path. It depends on the implementation, but I think the usual thing is not to worry about paths if the accesses are not of the tree type.
Just for clarification.
By that I understand that, if the components performing the matching are not of the type found in Set->Tree (or the merging of 2 outputs into a volatile parameter) then the path is ignored and it is the order in which the branches appears in the list of tree paths that determines the matching.
Thanks Dani !
This is correct. Data is matched item-by-item and branch-by-branch in order, unless this default behaviour is specifically overridden by the component. If you have a component operating on two inputs, both of which provide four branches (\{0;0\},\{0;1\},\{1;0\},\{1;1\}) and (\{0;0\},\{1;1\},\{2;2\},\{3;3\}) then they will be matched like so:
\{0;0\} \leftrightarrow \{0;0\}
\{0;1\} \leftrightarrow \{1;1\}
\{1;0\} \leftrightarrow \{2;2\}
\{1;1\} \leftrightarrow \{3;3\}
the fact that \{1;1\} occurs in both trees is irrelevant.
3 Likes
|
2020-09-18 18:50:38
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8192225098609924, "perplexity": 1147.1905581085887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400188049.8/warc/CC-MAIN-20200918155203-20200918185203-00319.warc.gz"}
|
http://survey.arkansaslasik.com/ntskq/1a4233-power-rule-with-fractions
|
X p /X q = X p-q. Binomial theorem, fraction rules, exponent rules, radical rules, square root rules, logarithm rules, sum/difference identities this page updated 19-jul-17 Mathwords: Terms and Formulas from Algebra I … credit transfer. Tes Global Ltd is registered in England (Company No 02017289) with its registered office … ˚˝ ˛ C. ˜ ! Power rule Calculator online with solution and steps. This is the rule we use when we’re dividing one exponential expression by … Therefore, the base number in the bottom of any fraction cannot equal zero. Khan Academy is a 501(c)(3) nonprofit organization. Using power rule with a negative exponent. The method of power substitution assumes that you are familiar with the method of ordinary u-substitution and the use of differential notation. Sophia’s self-paced online courses are a great way to save time and money as you earn credits eligible for transfer to many different colleges and universities.*. Order of Operations Factors & Primes Fractions Long Arithmetic Decimals Exponents & Radicals Ratios & Proportions Percent Modulo Mean, Median & Mode Scientific Notation Arithmetics Algebra Equations Inequalities System of Equations System of Inequalities Basic Operations Algebraic Properties Partial Fractions Polynomials Rational Expressions Sequences Power Sums Induction Logical Sets Hence, the constant 10 just tags along'' during the differentiation process. Here we have a base ???3??? In this non-linear system, users are free to take whatever path through the material best serves their needs. The power rule dictates that an exponent raised to another exponent means that the two exponents are multiplied: Any negative exponents can be converted to positive exponents in the denominator of a fraction: The like terms can be simplified by subtracting the power of the denominator from the power … In this non-linear system, users are free to take whatever path through the material best serves their needs. 
So the square of 9 is 81, (x 8) 2 can be simplified to x 16 and (y 4) 2 = y 8. Algebra. These are automatic, one-step antiderivatives with the exception of the reverse power rule, which is only slightly harder. An expression that represents repeated multiplication of the same factor is called a power. X 0 = 1. The only time that the Quotient of Powers Rule is not valid is if the denominator of the fraction is zero. The general rule is: (x m) n = x mn So basically all you need to do is multiply the powers. Active 2 years, 11 months ago. Power Rule, or Power Law, is a property of exponents that is defined by the following general formula: (a x) y = a x ⋅ y (a^x)^y=a^{x \cdot y} (a x) y = a x ⋅ y. Preview and details Files included (1) ppt, 227 KB. Writing all the letters down is the key to understanding the Laws So, when in doubt, just remember to write down all the letters (as many as the exponent tells you to) and see if you can make sense of it. It is obvious that powers may be added, like other quantities, by uniting them one after another with their signs. PowerPoint display exploring the methods for the four rules with fractions. Therefore, our power rule can now safely be applied to any rational exponents. Use the power rule to differentiate functions of the form xⁿ where n is a negative integer or a fraction. These unique features make Virtual Nerd a viable alternative to private tutoring. ˘ C. ˇ ˇ 3. Then, for y = x n, This is exactly what we would get if we assume the same power rule holds for fractional exponents as does for integral exponents. For example, if , then . Check out all of our online calculators here! In calculus, the quotient rule is a method of finding the derivative of a function that is the ratio of two differentiable functions. In words, the above expression basically states that for any value to an exponent, which is then all raised to another exponent, you can simply combine the exponents into one by just multiplying them. 
The laws of exponents tell us how to handle exponents when we multiply, divide, and raise powers to powers; the rules for multiplication, division, powers of powers, and fractional exponents are collected below, followed by the power rule for derivatives.

POWER RULE (power to a power): (a^x)^y = a^(x·y). For any value raised to an exponent, which is then all raised to another exponent, you can combine the exponents into one by multiplying them; think of this as the "power to a power" rule. Example: (2^3)^2 = 2^(3·2) = 2^6 = 2·2·2·2·2·2 = 64. The rule works when either exponent is negative, e.g. (3^2)^(-2) = 3^(-4) = 1/81, and term by term on products, e.g. (9x^8y^4)^2 = 9^2·(x^8)^2·(y^4)^2 = 81x^16y^8 — the same answer you would get by multiplying 9x^8y^4 by itself.

PRODUCT RULE: to multiply two powers with the same base, keep the base and add the exponents: a^n · a^m = a^(n+m).

QUOTIENT RULE: to divide two powers with the same base, keep the base and subtract the exponents: a^n / a^m = a^(n−m). Example: 2^5 / 2^3 = 2^(5−3) = 2^2 = 4. This is similar to reducing fractions: put the difference in the numerator or the denominator depending on where the higher power was located.

SAME EXPONENT, DIFFERENT BASES: a^n / b^n = (a/b)^n. Example: 4^3 / 2^3 = (4/2)^3 = 2^3 = 2·2·2 = 8.

ZERO EXPONENT RULE: any base (except 0) raised to the zero power is equal to one: a^0 = 1.

NEGATIVE EXPONENT RULE: a^(−p) = 1/a^p. Since division by zero is undefined, the base in the bottom of such a fraction cannot equal zero.

FRACTIONAL EXPONENTS: a fractional exponent is an alternate notation for expressing powers and roots together. When a number is raised to the power 1/n, this is interpreted as the nth root: a^(1/2) is a square root of a, a^(1/3) is the cube root of a, a^(1/4) is a fourth root of a. Examples: 3^(1/2) = √3, 27^(1/3) = 3. More generally, for integers m and n (n ≠ 0), x^(m/n) means: take the nth root of x and then raise the result to the mth power; equivalently, first raise to the mth power and then take the nth root. We write the power m in the numerator and the index of the root n in the denominator, so the denominator of the fractional exponent becomes the index of the radical and the numerator becomes the power inside the radical. The power rule still applies with fractional exponents — (x^m)^n = x^(mn), so you again just multiply the exponents — and, consistent with all of these rules, any nonzero number raised to the power 0 equals one.

POWER RULE FOR DERIVATIVES: in calculus, for any real number n, the derivative of f(x) = x^n is f'(x) = n·x^(n−1). The power can be a positive integer, a negative integer, or a fraction. Example: the derivative of x^2 is 2x^(2−1) = 2x. Further examples: f(x) = x^5 gives f'(x) = 5x^4; y = x^100 gives y' = 100x^99; y = t^6 gives y' = 6t^5. When the power is a negative integer, the function has a simple power of x in the denominator, like f(x) = 2/x^7; when the power is a fraction, the function has x under a root, like f(x) = √x. Combined with the facts that the derivative of a constant is zero and that sums and constant multiples differentiate term by term, the power rule handles any polynomial, e.g. d/dx(2x^4 − 5x^2 + 7) = 2(4x^3) − 5(2x^1) + 0 = 8x^3 − 10x. Notice that once we applied the derivative rule, the prime went away; correct notation keeps the prime until you apply a derivative rule. The definition of the derivative may also be used, and computing directly from the definition for n = 1/2 and n = 2/3 gives results consistent with the power rule, but the direct use of the definition is usually much more cumbersome. There is, however, a real problem when it comes to defining power functions with irrational exponents; those require more care.

HISTORY AND ANTIDERIVATIVES: the power rule for integrals was first demonstrated in a geometric form by the Italian mathematician Bonaventura Cavalieri in the early 17th century for all positive integer powers, and during the mid 17th century for all rational powers by Pierre de Fermat, Evangelista Torricelli, Gilles de Roberval, John Wallis, and Blaise Pascal, each working independently. The power rule for differentiation was derived by Isaac Newton and Gottfried Wilhelm Leibniz, each independently, for rational power functions in the mid 17th century; both then used it to derive the power rule for integrals as the inverse operation. The easiest antiderivative rules are the ones that reverse derivative rules you already know; the reverse power rule is only slightly harder than these automatic, one-step antiderivatives. (The related method of power substitution for integrals involving roots assumes familiarity with ordinary u-substitution and differential notation.)
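The exponent laws and the derivative power rule above can be spot-checked numerically. The snippet below is an illustrative sketch: the specific bases, exponents, and the helper `numeric_derivative` are arbitrary choices of mine, not from any particular textbook.

```python
# Spot-check the exponent laws with concrete (arbitrary) numbers.
assert (2**3)**2 == 2**(3*2) == 64        # power to a power
assert 2**5 * 2**3 == 2**(5+3)            # product rule, same base
assert 2**5 / 2**3 == 2**(5-3) == 4.0     # quotient rule, same base
assert 4**3 / 2**3 == (4/2)**3 == 8.0     # same exponent, different bases
assert 7**0 == 1                          # zero exponent rule
assert 2**-3 == 1 / 2**3                  # negative exponent rule
assert abs(27**(1/3) - 3) < 1e-9          # fractional exponent = nth root

# Check d/dx x**n = n*x**(n-1) against a central-difference approximation,
# including a fractional power n = 2/3 and a negative power n = -7.
def numeric_derivative(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

for n in (2, 5, -7, 2/3):
    x0 = 3.0
    exact = n * x0**(n - 1)
    assert abs(numeric_derivative(lambda x: x**n, x0) - exact) < 1e-5
```

Central differences are accurate to roughly h², which is why a 1e-5 tolerance comfortably covers all four exponents here.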
https://www.varsitytutors.com/act_math-help/how-to-find-negative-cosine
# ACT Math : How to find negative cosine
## Example Questions
### Example Question #1 : How To Find Negative Cosine
If and , what is the value of ?
Explanation:
Based on this data, we can make a little triangle that looks like:
This is because .
Now, this means that must equal . (Recall that the cosine function is negative in the second quadrant.) Now, we are looking for:
or . This is the cosine of a reference angle of:
Looking at our little triangle above, we can see that the cosine of is .
### Example Question #2 : How To Find Negative Cosine
What is the cosine of the angle formed between the origin and the point if that angle is formed with one side of the angle beginning on the -axis and then rotating counter-clockwise to ? Round to the nearest hundredth.
Explanation:
Recall that when you calculate a trigonometric function for an obtuse angle like this, you always use the x-axis as your reference point for your angle. (Hence, it is called the "reference angle.")
Now, it is easiest to think of this like you are drawing a little triangle in the third quadrant of the Cartesian plane. It would look like:
So, you first need to calculate the hypotenuse. You can do this by using the Pythagorean Theorem, a^2 + b^2 = c^2, where a and b are the lengths of the legs of the triangle and c is the length of the hypotenuse. Rearranging the equation to solve for c, you get c = sqrt(a^2 + b^2):
Substituting in the given values:
So, the cosine of the angle is the adjacent side over the hypotenuse:
This is approximately . Rounding, this is . However, since is in the third quadrant your value must be negative: .
### Example Question #3 : How To Find Negative Cosine
What is the cosine of the angle formed between the origin and the point if that angle is formed with one side of the angle beginning on the -axis and then rotating counter-clockwise to ? Round to the nearest hundredth.
Explanation:
Recall that when you calculate a trigonometric function for an obtuse angle like this, you always use the x-axis as your reference point for your angle. (Hence, it is called the "reference angle.") Now, it is easiest to think of this like you are drawing a little triangle in the second quadrant of the Cartesian plane. It would look like:
So, you first need to calculate the hypotenuse:
So, the cosine of the angle is the adjacent side over the hypotenuse:
This is approximately . Rounding, this is . However, since is in the second quadrant your value must be negative: .
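The quadrant-sign reasoning in these solutions can be sketched in code. The point values below are my own examples, not the (image-elided) values from the graded questions; using the signed x-coordinate over the hypotenuse produces the correct sign in every quadrant automatically.

```python
import math

# Cosine of the angle measured counter-clockwise from the positive x-axis
# to the ray through the point (x, y).
def cosine_to_point(x, y):
    return x / math.hypot(x, y)    # cos = adjacent / hypotenuse

# Third-quadrant point: x < 0, so the cosine is negative.
print(round(cosine_to_point(-3, -4), 2))   # -0.6
# Second-quadrant point: x < 0 again, so the cosine is still negative.
print(round(cosine_to_point(-5, 12), 2))   # -0.38
```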
### Example Question #1 : How To Find Negative Cosine
To the nearest , what is the cosine of the angle formed between the origin and ? Assume a counterclockwise rotation.
Explanation:
If the point to be reached is , then we may envision a right triangle with sides and , and hypotenuse . The Pythagorean Theorem tells us that , so we plug in and find that:
Thus,
Now, SOHCAHTOA tells us that cosine is adjacent over hypotenuse, so we know that:
Thus, our cosine is approximately . However, as we are in the third quadrant, cosine must be negative! Therefore, our true cosine is .
### Example Question #5 : How To Find Negative Cosine
On a grid, what is the cosine of the angle formed between a line from the origin to and the x-axis?
https://www.gradesaver.com/textbooks/science/chemistry/chemistry-4th-edition/chapter-15-section-15-5-factors-that-affect-chemical-equilibrium-checkpoint-page-701/15-5-6
## Chemistry (4th Edition)
1. Write the equilibrium constant expression; the exponent of each concentration equals that species' coefficient in the balanced equation: $$K_c = \frac{[Products]}{[Reactants]} = \frac{[ AB ]^{ 2 }}{[ A_2 ][ B_2 ]}$$ 2. At $T_1$: $$K_c = \frac{(5)^2}{(1)(2)} = 12.5$$ 3. At $T_2$: $$K_c = \frac{(3)^2}{(2)(3)} = 1.5$$ When the temperature increased, $K_c$ decreased, so the equilibrium shifted to the left: the reaction is exothermic.
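The two K_c computations can be reproduced with exact rational arithmetic; this is a small sketch using the concentrations quoted in the solution (the function name `kc` is my own).

```python
from fractions import Fraction

# Kc = [AB]**2 / ([A2] * [B2]); the exponents come from the balanced
# coefficients of the reaction A2 + B2 <-> 2 AB.
def kc(ab, a2, b2):
    return Fraction(ab)**2 / (Fraction(a2) * Fraction(b2))

kc_t1 = kc(5, 1, 2)   # 25/2
kc_t2 = kc(3, 2, 3)   # 9/6 = 3/2
print(float(kc_t1), float(kc_t2))  # 12.5 1.5

# Kc fell as the temperature rose, so the forward reaction releases heat.
assert kc_t1 > kc_t2
```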
https://socratic.org/algebra/expressions-equations-and-functions/expressions-with-one-or-more-variables
# Expressions with One or More Variables
## Key Questions
• There are many methods that you can use for solving equations with two variables. Here are two simple methods that I find easy for solving linear equations with two variables.
Remember, you always need at least two equations to solve equations with two variables
1. Substitution
Here, you basically write one equation in terms of one of its variables, and then substitute that value in the second equation.
Example:
Solving two equations:
$3 x + y = 5$
$2 x + 3 y = 4$
Now, I'll write the first equation in terms of $y$. (I can also do it in terms of $x$.)
$y = 5 - 3 x$
Now, I can substitute this value of $y$ in the second equation as follows:
$2 x + 3 \left(5 - 3 x\right) = 4$
$2 x + 15 - 9 x = 4$
$- 7 x = - 11$
$x = \frac{-11}{-7}$
$x = \frac{11}{7}$
Thus,
$y = 5 - 3 \left(\frac{11}{7}\right) = 5 - \frac{33}{7} = \frac{2}{7}$
2. Elimination
Here, you:
a. First, multiply one or both equations so that either variable have the same coefficients.
b. Then, add or subtract one equation to/from another so that one of the variable terms is completely eliminated.
c. Substitute the value you find out in any other equation to find the value of the other variable.
Example:
Solving two equations:
$3 x + y = 5$
$2 x + 3 y = 4$
Looking at the two equations, I can make out that by multiplying the first equation by $3$, both equations will have the term $3 y .$
The first equation becomes:
$\left(3 x + y = 5\right) \cdot 3$
$9 x + 3 y = 15$
When I subtract the second equation from the above equation, the terms with $y$ will be eliminated.
$9 x + 3 y = 15$
$2 x + 3 y = 4$
$- - - - -$
$7 x + 0 y = 11$
That is,
$7 x = 11$
$x = \frac{11}{7}$
Substituting the value of $x$ in the first equation,
$3 \left(\frac{11}{7}\right) + y = 5$
$y = 5 - \frac{33}{7}$
$y = \frac{2}{7}$
Similarly, you can solve systems with more variables, provided you have a correspondingly larger number of equations.
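The elimination steps worked above can be mirrored in code with exact fractions. This sketch reruns the same 2×2 example (the variable names are mine):

```python
from fractions import Fraction

# The system from the example:  3x + y = 5  and  2x + 3y = 4.
a1, b1, c1 = 3, 1, 5
a2, b2, c2 = 2, 3, 4

# Multiply the first equation by 3 so both contain the term 3y, then
# subtract the second: the y-terms cancel and (9 - 2)x = 15 - 4.
m = 3
x = Fraction(m * c1 - c2, m * a1 - a2)   # 11/7
y = c1 - a1 * x                          # back-substitute: 5 - 3x = 2/7
print(x, y)  # 11/7 2/7
```

Exact `Fraction` arithmetic avoids the rounding you would get from floats and matches the hand-worked answer x = 11/7, y = 2/7.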
something like 2x+3y
#### Explanation:
A term is a number, a letter, or a combination of the two.
If you put terms together, like $2x+3y$, you get an expression.
If you add an equals sign, it becomes an equation, like $2x+3y=7$.
If the LHS equals the RHS for every value of the variable, it is an identity, like $2(x+3)=2x+6$.
$x + y$
https://www.physicsforums.com/threads/help-solving-non-linear-ode-analytically.884691/
# A Help solving non-linear ODE analytically
1. Sep 8, 2016
### joshmccraney
Hi PF!
Anyone have any ideas for a solution to this $$0 = F F''+2(F')^2+ xF' + F$$ where primes denote derivatives with respect to $x$.
So far I have tried this $$0=(F F')'+(xF)'+(F')^2$$
Which obviously failed. I also thought of multiplying through by $F$: $$0 = F^2 F''+2F(F')^2+ xFF' + F^2 = (F'F^2)' + (xF^2)' - xFF'$$
which also fails. Any ideas? I know an analytic solution exists, but how to derive it?
2. Sep 9, 2016
### Stephen Tashi
F = -x is a particular solution. Look up how to do the "reduction of order" of a differential equation. (I say look it up, because I'd have to look it up myself before attempting to explain it.)
3. Sep 9, 2016
### joshmccraney
That's an idea, which is all I'm asking for. But $-x$ doesn't solve this. Comes close though.
4. Sep 9, 2016
### pasmith
If $F(x) = kx^2$ then every term in the equation is a constant times $x^2$. You can then choose $k$ so that those constants sum to zero.
5. Sep 9, 2016
### joshmccraney
Thanks! I do know the exact solution is $3(x^{1/3}-x^2)/10$ but was wondering how to find this solution apart from guessing.
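For what it's worth, the quoted solution can be checked directly. A small plain-Python sketch (derivatives coded by hand; names are mine) evaluates the residual $FF'' + 2(F')^2 + xF' + F$ at a few points, and also shows how pasmith's $kx^2$ ansatz pins down $k=-3/10$:

```python
def F(x):    # candidate solution from the thread: 3*(x**(1/3) - x**2)/10
    return 0.3 * (x ** (1.0 / 3.0) - x * x)

def Fp(x):   # F'(x)
    return 0.3 * (x ** (-2.0 / 3.0) / 3.0 - 2.0 * x)

def Fpp(x):  # F''(x)
    return 0.3 * (-2.0 / 9.0 * x ** (-5.0 / 3.0) - 2.0)

def residual(x):
    return F(x) * Fpp(x) + 2.0 * Fp(x) ** 2 + x * Fp(x) + F(x)

print([residual(x) for x in (0.5, 1.0, 2.0)])  # all ~0 up to rounding

# The ansatz F = k*x**2 makes the residual (10*k**2 + 3*k)*x**2,
# whose nonzero root is k = -3/10, matching the -3*x**2/10 part above:
k = -3.0 / 10.0
print(10 * k * k + 3 * k)  # ~0
```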
https://stats.stackexchange.com/questions/23144/how-much-sub-questions-impact-one-ordinal-question-in-a-survey
# How much do sub-questions impact one ordinal question in a survey?
I have a variable Net Promoter Score (NPS) which is an item on a survey, the answer format is ordinal, 0 to 10.
Then, there are sub-questions on the same survey, some are ordinal, some are categorical.
I would like to determine which sub-question or which group of sub-questions are the best predictors of NPS. The goal is to be able to prioritize which area(s) to focus on in order to improve NPS over time.
What would be some appropriate methods to identify and verify such a relationship?
I love your question, and I'm going to answer it a little more broadly than you asked. There are four main approaches that I would use to identify opportunities to improve Net Promoter Score, in order of analytical sophistication and the maturity of the project you're using NPS to evaluate. (For those not familiar, Net Promoter is a common satisfaction score methodology used by many companies. Unless you tend to avoid surveys, you have probably answered the "net promoter" question many times.)
1. Segmentation
2. Qualitative evaluation of open-ended responses
3. Quantitative evaluation of open-ended responses
4. Experimentation
Segmentation: A business can usually divide its customers into a small number of groups along a couple of key dimensions. For example, between "new" and "returning" customers, or "enterprise" and "small business". These will often closely align with the distinctions made in pricing and in the business's financial reporting, and, happily, they often have dramatically different NP scores. The differences can sometimes surprise. The great thing about this approach is that it requires very little communications overhead. Most people listening to your data will already be familiar with the segments you describe and can envision the aspects of the product that could drive some groups to be more or less satisfied than others. It also makes it relatively easy to organize followup work and report on subsequent improvements.
Qualitative evaluation of open-ends: A common component of Net Promoter implementations is a followup question along the lines of "why did you answer the way that you did". Often, for a project new to Net Promoter, simply reading these responses and associating them with their ordinal responses will have people slapping their foreheads. "I thought our customers liked that feature!" "People love the rubberized grips - let's put them on everything!" This approach often peters out fairly soon, as a business identifies all the surface-level opportunities and makes strategic decisions to resolve or not resolve them.
Quantitative evaluation of open-ends: Following up on the qualitative evaluation, quantitative evaluation can yield some additional surprises. This approach would generally be to label each open-ended response with one or more categories, which can be done by hand at small scales, or using a variety of software at larger scales. This can reveal additional insights along the lines of "I knew that was an issue... but I didn't realize it affected 30% of our customers!" or "Most people who complain about xyz also complain about abc - maybe there is a common root cause"
Experimentation: It's often difficult to perform rigorous controlled-variable random-assignment experiments with Net Promoter. Because Net Promoter is intended to be a very holistic measure of the relationship between a company and its customers, companies are usually limited in their ability to experiment with the entirety of the relationship, due to the expense and risk of maintaining what might amount to two separate businesses at the same time. That said, other less rigorous forms of experiments are possible.
• Every major decision the business makes can be evaluated for its potential to impact NPS, and then the resulting changes can be subsequently measured. Because it fails to isolate a single variable at a time, care must be taken to build realistic predictions and also to be very cautious about results that were not predicted in advance.
• Sometimes we are lucky enough that natural experimental groups emerge that allow us to experiment with the past. A national retailer may roll out a new store format only on the west coast due to the logistical constraints of construction, or a content company may need to censor some types of searches in countries where the local regulations are different. It's not random assignment, but sometimes it's close.
Prediction: I know that I didn't address your real question about using other questions in surveys to "predict" NPS, and that's intentional. The approach that you describe has certain limitations and although I'm planning to try it again myself in the near future, I've had limited success with it in the past.
There are two main predictor approaches I've seen: correlation, and logit regression. Correlation looks at the strength of the relationship between two variables (do they always go up and down together? opposite? no relation?) The idea would be to find the variables most strongly correlated with NPS. Logit regression is a technique that takes all the variables you give it and uses them to try to predict the probability of a particular outcome. For example, you can try to predict the probability of each of your three NPS outcomes (Promoter, Neutral, Detractor). From this you can build a model that shows how much variation in different variables can affect NPS up or down, taking the NPS outcome with the highest predicted probability as the definitive outcome. I have also heard of people using Structural Equation Modelling with NPS, although I know little of this approach.
Some limitations of the predictor approach
• Few good independent variables in surveys: In order to be actionable, this approach requires that you think of NPS as a dependent variables and treat other questions as independent. But, even many of the "hard facts" type questions, such as you might use for your key customer segments are not truly independent from NPS. A returning customer is a returning customer in part because they had a positive experience in the past. Most customer satisfaction questions move together in lockstep, unless you have an offering with very distinctive and low-interacting components that can be performing at two different levels of service (a leisure hotel may try to break into the conference business and do so very poorly at first, in which case separate satisfaction questions on bedroom quality and conference room quality may be useful).
• Difficulty communicating: This analytical approach is usually quite far separated from the group that will be implementing recommendations. Because it relies on relatively sophisticated analysis, it's often difficult for implementors to take actions based on these sort of recommendations, because they don't understand what needs to be fixed.
The NPS works well as a performance monitoring or controlling tool. But what you want to find out is something else: You want to identify the true drivers of satisfaction.
The NPS is an aggregate score. If you want to say something about drivers of satisfaction, than you have to develop a model which explains the 10-point-scale "ultimate question".
However, rating scales are contaminated with scale usage heterogeneity (see paper). If you run a regression model, you will find that those sub-questions are highly correlated, causing high VIFs and maybe no significant betas, depending on how many sub-questions you have. You might also be overconfident in the significance of sub-questions, depending on the exact model set-up.
Consider this simple R script:
nobs <- 1000  # respondents
p <- 12       # sub-questions
means <- rnorm(nobs)             # each respondent's own scale location
vars <- 1 / rgamma(nobs, 3, 3)   # each respondent's own scale spread
data <- matrix(NA, nobs, p)
for (i in 1:nobs) {
  data[i, ] <- rnorm(p, means[i], sqrt(vars[i]))
}
cor(data)
Despite the fact that there is no relationship between the 12 variables, the correlations are far from being 0.
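The same point can be made without R. This stdlib-only Python sketch (the distributions and parameters are my own choices for illustration) gives each simulated respondent a personal scale location and spread, then averages the off-diagonal correlations between the twelve "independent" items:

```python
import random
import statistics

random.seed(0)
nobs, p = 1000, 12
rows = []
for _ in range(nobs):
    mu = random.gauss(0, 1)        # respondent's personal scale location
    sd = random.uniform(0.5, 2.0)  # respondent's personal scale spread
    rows.append([random.gauss(mu, sd) for _ in range(p)])

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / (sx * sy)

cols = list(zip(*rows))
mean_r = statistics.fmean(
    corr(cols[i], cols[j]) for i in range(p) for j in range(i + 1, p)
)
print(mean_r)  # clearly positive, although the items share no real signal
```

The respondent-level heterogeneity alone drives the average pairwise correlation well above zero, which is exactly the contamination problem described above.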
If you further reduce the scale level (Promotor yes/no), you throw away information. However, scale-adjusting models are very challenging, especially when you have ordinal and categorical variables, and if you look for a ready-made solution.
You could treat bottom, middle and top box as categorical outcomes and use a Bayesian Network to model the relationships. BNs treat everything as categorical, thus providing a nice general-purpose tool.
Have you considered a conjoint experiment? Depending on the service or product this is a nice alternative.
Some good answers already here. Here is a much simpler one, which is really an addition to the good answers you have received, not a substitute. Ordered logistic regression is the appropriate tool for modelling an ordinal response variable. There is an implementation - polr() - in Venables & Ripley's library(MASS) in R.
Then you are back in the standard world of model selection, on which there are many good questions and good answers on Cross-Validated.
• Right, this is probably the most straightforward approach. The categorical variables would need to be dummied up somehow, which is also easy. Feb 25, 2012 at 20:12
http://romainloiseau.fr/a-model-you-can-hear/
# Abstract
Machine learning techniques have proved useful for classifying and analyzing audio content. However, recent methods typically rely on abstract and high-dimensional representations that are difficult to interpret. Inspired by transformation-invariant approaches developed for image and 3D data, we propose an audio identification model based on learnable spectral prototypes. Equipped with dedicated transformation networks, these prototypes can be used to cluster and classify input audio samples from large collections of sounds. Our model can be trained with or without supervision and reaches state-of-the-art results for speaker and instrument identification, while remaining easily interpretable.
# Resources
@article{loiseau22amodelyoucanhear,
}
https://www.physicsforums.com/threads/an-airplane-in-the-wind.264423/
# Homework Help: An airplane in the wind
1. Oct 14, 2008
### mattst88
1. The problem statement, all variables and given/known data
An airplane is traveling at 30 m/s and wishes to travel to a point 8000 m NE (45 degrees). If there is a constant 10 m/s wind blowing west:
A) In what direction must the pilot aim the plane in degrees?
B) How long will the trip take?
2. Relevant equations
Basic kinematic equations and trigonometry.
3. The attempt at a solution
Since I know only the magnitude of the velocity vector, and have to find the direction, I'm having trouble.
I've tried taking the arcsin of 10/30 (Opposite over Hypotenuse) and got 19.47 degrees. Using the Law of Sines, I can calculate the other angles and the other side length.
Side length (m/s) | Angle (degrees)
10 | 19.47
30 | 58.4
29.33 | 102
Obviously, the 102 degrees doesn't make sense, since it is not opposite the largest side.
Am I making this much more difficult than it really is?
2. Oct 14, 2008
### LowlyPion
Likely you aren't making it difficult enough.
What you do have is a vector addition. Except this one involves certain variables. I would recommend that you construct the vectors and their components, and then add them as you know they must be added to end at your destination.
For instance, let A be your wind velocity vector blowing West. With East being positive $x$ and H being the time to get there:
$$\vec{A} = -10H\,\hat{x}$$
Likewise for the Plane:
$$\vec{B} = 30H\cos\theta\,\hat{x} + 30H\sin\theta\,\hat{y}$$
And then you have your Destination vector:
$$\vec{D} = 8000\cos 45^\circ\,\hat{x} + 8000\sin 45^\circ\,\hat{y}$$
Since you know
$$\vec{D} = \vec{A} + \vec{B}$$
Then solve for the angle.
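Carrying that setup through: taking the ratio of the $\hat{y}$ and $\hat{x}$ components of $\vec{D}=\vec{A}+\vec{B}$ eliminates $H$ and leaves $\cos\theta - \sin\theta = 10/30$, which has a closed form. A quick numeric check (plain Python; variable names are mine):

```python
import math

speed, wind = 30.0, 10.0           # plane airspeed and wind speed, m/s
dist, bearing = 8000.0, math.radians(45)

# Component equations of D = A + B (H = flight time, theta from east):
#   (speed*cos(theta) - wind) * H = dist*cos(bearing)
#   speed*sin(theta) * H          = dist*sin(bearing)
# With bearing = 45 deg the right sides are equal, so dividing gives
#   cos(theta) - sin(theta) = wind/speed,
# and cos(t) - sin(t) = sqrt(2)*cos(t + pi/4) solves it in closed form:
theta = math.acos((wind / speed) / math.sqrt(2)) - math.pi / 4
H = dist * math.sin(bearing) / (speed * math.sin(theta))

print(math.degrees(theta), H)  # about 31.4 degrees north of east, ~362 s
```

So the pilot aims slightly east of the 45-degree bearing, as expected with a westward wind.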
http://uu.diva-portal.org/smash/record.jsf?pid=diva2:805443
Studies of the Boundary Behaviour of Functions Related to Partial Differential Equations and Several Complex Variables
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Mathematics, Analysis and Probability Theory.
2015 (English). Doctoral thesis, comprehensive summary (Other academic).
##### Abstract [en]
This thesis consists of a comprehensive summary and six scientific papers dealing with the boundary behaviour of functions related to parabolic partial differential equations and several complex variables.
Paper I concerns solutions to non-linear parabolic equations of linear growth. The main results include a backward Harnack inequality, and the Hölder continuity up to the boundary of quotients of non-negative solutions vanishing on the lateral boundary of an NTA cylinder. It is also shown that the Riesz measure associated with such solutions has the doubling property.
Paper II is concerned with solutions to linear degenerate parabolic equations, where the degeneracy is controlled by a weight in the Muckenhoupt class $A_{1+2/n}$. Two main results are that non-negative solutions which vanish continuously on the lateral boundary of an NTA cylinder satisfy a backward Harnack inequality and that the quotient of two such functions is Hölder continuous up to the boundary. Another result is that the parabolic measure associated to such equations has the doubling property.
In Paper III, it is shown that a bounded pseudoconvex domain whose boundary is α-Hölder for each 0<α<1, is hyperconvex. Global estimates of the exhaustion function are given.
In Paper IV, it is shown that on the closure of a domain whose boundary locally is the graph of a continuous function, all plurisubharmonic functions with continuous boundary values can be uniformly approximated by smooth plurisubharmonic functions defined in neighbourhoods of the closure of the domain.
Paper V studies Poletsky’s notion of plurisubharmonicity on compact sets. It is shown that a function is plurisubharmonic on a given compact set if, and only if, it can be pointwise approximated by a decreasing sequence of smooth plurisubharmonic functions defined in neighbourhoods of the set.
Paper VI introduces the notion of a P-hyperconvex domain. It is shown that in such a domain, both the Dirichlet problem with respect to functions plurisubharmonic on the closure of the domain, and the problem of approximation by smooth plurisubharmoinc functions in neighbourhoods of the closure of the domain have satisfactory answers in terms of plurisubharmonicity on the boundary.
##### Place, publisher, year, edition, pages
Uppsala: Acta Universitatis Upsaliensis, 2015. , p. 52
##### Series
Uppsala Dissertations in Mathematics, ISSN 1401-2049 ; 89
##### Keyword [en]
uniformly parabolic equations, non-linear parabolic equations, linear growth, Lipschitz domain, NTA-domain, Riesz measure, boundary behavior, boundary Harnack, degenerate parabolic, parabolic measure, plurisubharmonic functions, continuous boundary, hyperconvexity, bounded exhaustion function, Hölder for all exponents, log-lipschitz, boundary regularity, approximation, Mergelyan type approximation, plurisubharmonic functions on compacts, Jensen measures, monotone convergence, plurisubharmonic extension, plurisubharmonic boundary values
##### National Category
Mathematical Analysis
Mathematics
##### Identifiers
ISBN: 978-91-506-2458-8 (print). OAI: oai:DiVA.org:uu-251325. DiVA id: diva2:805443
##### Public defence
2015-06-05, Polhemssalen, Ångströmslaboratoriet, Lägerhyddsvägen 1, Uppsala, 10:15 (English)
##### Supervisors
Available from: 2015-05-13 Created: 2015-04-15 Last updated: 2015-05-13
##### List of papers
1. Boundary estimates for non-negative solutions to non-linear parabolic equations
2015 (English). In: Calculus of Variations and Partial Differential Equations, ISSN 0944-2669, E-ISSN 1432-0835, Vol. 54, no. 1, pp. 847-879. Article in journal (Refereed). Published.
##### Abstract [en]
This paper is mainly devoted to the boundary behavior of non-negative solutions to the equation $Hu = \partial_t u - \nabla\cdot A(x,t,\nabla u) = 0$ in domains of the form $\Omega_T=\Omega\times (0,T)$, where $\Omega\subset\mathbb{R}^n$ is a bounded non-tangentially accessible (NTA) domain and $T>0$. The assumptions we impose on $A$ imply that $H$ is a non-linear parabolic operator with linear growth. Our main results include a backward Harnack inequality, and the Hölder continuity up to the boundary of quotients of non-negative solutions vanishing on the lateral boundary. Furthermore, to each such solution one can associate a natural Riesz measure supported on the lateral boundary, and one of our main results is a proof of the doubling property for this measure. Our results generalize, to the setting of non-linear equations with linear growth, previous results concerning the boundary behaviour, in Lipschitz cylinders and time-independent NTA-cylinders, established for non-negative solutions to equations of the type $\partial_t u-\nabla\cdot (A(x,t)\nabla u)=0$, where $A$ is a measurable, bounded and uniformly positive definite matrix-valued function. In the latter case the measure referred to above is essentially the caloric or parabolic measure associated to the operator and related to Green's function. At the end of the paper we also remark that our arguments are general enough to allow us to generalize parts of our results to general fully non-linear parabolic partial differential equations of second order.
Mathematics
Mathematics
##### Identifiers
urn:nbn:se:uu:diva-204871 (URN), 10.1007/s00526-014-0808-8 (DOI), 000359941200033
Available from: 2013-08-12. Created: 2013-08-12. Last updated: 2017-12-06. Bibliographically approved.
2. Boundary estimates for solutions to linear degenerate parabolic equations
2015 (English). In: Journal of Differential Equations, ISSN 0022-0396, E-ISSN 1090-2732, Vol. 259, no. 8, pp. 3577-3614. Article in journal (Refereed). Published.
##### Abstract [en]
Let $\Omega\subset\mathbb{R}^n$ be a bounded NTA-domain and let $\Omega_T=\Omega\times (0,T)$ for some $T>0$. We study the boundary behaviour of non-negative solutions to the equation $Hu = \partial_t u-\partial_{x_i}(a_{ij}(x,t)\partial_{x_j}u) = 0$, $(x,t)\in \Omega_T$. We assume that $A(x,t)=\{a_{ij}(x,t)\}$ is measurable, real, symmetric and that $\beta^{-1}\lambda(x)|\xi|^2\leq \sum_{i,j=1}^n a_{ij}(x,t)\xi_i\xi_j\leq\beta\lambda(x)|\xi|^2$ for all $(x,t)\in\mathbb{R}^{n+1}$, $\xi\in\mathbb{R}^{n}$, for some constant $\beta\geq 1$ and for some non-negative and real-valued function $\lambda=\lambda(x)$ belonging to the Muckenhoupt class $A_{1+2/n}(\mathbb{R}^n)$. Our main results include the doubling property of the associated parabolic measure and the Hölder continuity up to the boundary of quotients of non-negative solutions which vanish continuously on a portion of the boundary. Our results generalize previous results of Fabes, Kenig, Jerison, Serapioni, see [FKS], [FJK], [FJK1], to a parabolic setting.
Mathematics
Mathematics
##### Identifiers
urn:nbn:se:uu:diva-204869 (URN), 10.1016/j.jde.2015.04.028 (DOI), 000363434300004
Available from: 2013-08-12. Created: 2013-08-12. Last updated: 2017-12-06. Bibliographically approved.
3. A note on the hyperconvexity of pseudoconvex domains beyond Lipschitz regularity
2015 (English). In: Potential Analysis, ISSN 0926-2601, E-ISSN 1572-929X, Vol. 43, no. 3, pp. 531-545. Article in journal (Refereed). Published.
##### Abstract [en]
We show that bounded pseudoconvex domains whose boundaries are Hölder continuous for each α < 1 are hyperconvex, extending the well-known result by Demailly (Math. Z. 184, 1987) beyond Lipschitz regularity.
##### Keyword
plurisubharmonic functions, continuous boundary, hyperconvexity, bounded exhaustion function, Hölder for all exponents, log-lipschitz, boundary regularity, Reinhardt domains.
##### National Category
Mathematical Analysis
Mathematics
##### Identifiers
urn:nbn:se:uu:diva-251330 (URN), 10.1007/s11118-015-9486-1 (DOI), 000365769100010
Available from: 2015-04-15. Created: 2015-04-15. Last updated: 2017-12-04. Bibliographically approved.
4. Approximation of plurisubharmonic functions
2016 (English). In: Complex Variables and Elliptic Equations, ISSN 1747-6933, E-ISSN 1747-6941, Vol. 61, no. 1, pp. 23-28. Article in journal (Refereed). Published.
##### Abstract [en]
We extend a result by Fornæss and Wiegerinck [Ark. Mat. 1989;27:257-272] on plurisubharmonic Mergelyan type approximation to domains with boundaries locally given by graphs of continuous functions.
##### Keyword
plurisubharmonic functions, approximation, continuous boundary, boundary regularity, Mergelyan type approximation
##### National Category
Mathematical Analysis
Mathematics
##### Identifiers
urn:nbn:se:uu:diva-251324 (URN), 10.1080/17476933.2015.1053473 (DOI), 000365643500003
Available from: 2015-04-15. Created: 2015-04-15. Last updated: 2017-12-04. Bibliographically approved.
5. Plurisubharmonic functions on compact sets
2012 (English). In: Annales Polonici Mathematici, ISSN 0066-2216, E-ISSN 1730-6272, Vol. 106, pp. 133-144. Article in journal (Refereed). Published.
##### Abstract [en]
Poletsky has introduced a notion of plurisubharmonicity for functions defined on compact sets in $\mathbb{C}^n$. We show that these functions can be completely characterized in terms of monotone convergence of plurisubharmonic functions defined on neighborhoods of the compact set.
##### Keyword
plurisubharmonic functions on compacts, Jensen measures, monotone convergence
Natural Sciences
##### Identifiers
urn:nbn:se:uu:diva-189931 (URN), 10.4064/ap106-0-11 (DOI), 000311525700011
Available from: 2013-01-04. Created: 2013-01-04. Last updated: 2017-12-06. Bibliographically approved.
6. Plurisubharmonic approximation and boundary values of plurisubharmonic functions
2014 (English). In: Journal of Mathematical Analysis and Applications, ISSN 0022-247X, E-ISSN 1096-0813, Vol. 413, no. 2, pp. 700-714. Article in journal (Refereed). Published.
##### Abstract [en]
We study the problem of approximating plurisubharmonic functions on a bounded domain $\Omega$ by continuous plurisubharmonic functions defined on neighborhoods of $\overline{\Omega}$. It turns out that this problem can be linked to the problem of solving a Dirichlet type problem for functions plurisubharmonic on the compact set $\overline{\Omega}$ in the sense of Poletsky. A stronger notion of hyperconvexity is introduced to fully utilize this connection, and we show that for this class of domains the duality between the two problems is perfect. In this setting, we give a characterization of plurisubharmonic boundary values, and prove some theorems regarding the approximation of plurisubharmonic functions.
##### Keyword
Plurisubharmonic functions on compacts, Jensen measures, Approximation, Plurisubharmonic extension, Plurisubharmonic boundary values
Mathematics
##### Identifiers
urn:nbn:se:uu:diva-220970 (URN)10.1016/j.jmaa.2013.12.041 (DOI)000331344600014 ()
Available from: 2014-03-26 Created: 2014-03-24 Last updated: 2017-12-05Bibliographically approved
https://mathematica.stackexchange.com/questions/79011/manipulating-list-based-on-euclidean-distance
|
# Manipulating list based on Euclidean distance
I have an array of 101 matrices. Each matrix contains information (ID, position (x, y, z), time, etc.) about a collection of particles and looks like this:
{{{9.*10^7, 1.13076*10^6, 0.56, 0.56, 1.05, 1.31518, 25.}, {3.33657*10^8,
1.23356*10^6, 0.91, 0.79, 3.98, 4.15844, 25.}, {2.21834*10^8,
2.42599*10^6, 0.08, 1.85, 2.56, 3.15951, 25.}, {1.02159*10^8,
1.33635*10^6, 0.19, 3.05, 1.27, 3.30931, 25.}, {1.11154*10^8,
1.08964*10^6, 0.24, 3., 1.37, 3.30674, 25.}, {2.49596*10^8,
1.0074*10^6, 0.17, 3.6, 2.94, 4.65108, 25.}, {1.33363*10^8,
1.15132*10^6, 1.38, 0.46, 1.57, 2.1403, 25.}, {1.60336*10^8,
2.15872*10^6, 1.75, 0.29, 1.9, 2.59935, 25.}, {1.63155*10^8,
1.02796*10^6, 1.73, 0.31, 1.91, 2.59559, 25.}, {5.755*10^7,
1.43915*10^6, 1.59, 1.51, 0.7, 2.30178, 25.}, {4.03166*10^8,
1.04852*10^6, 1.3, 3.73, 4.8, 6.21634, 25.}, {1.0938*10^8,
1.02796*10^6, 2.92, 2.19, 1.26, 3.86136, 25.}, {1.41208*10^8,
1.06908*10^6, 4.3, 0.22, 1.68, 4.62177, 25.}, {2.33642*10^8,
1.04852*10^6, 4.97, 1.82, 2.7, 5.94166, 25.}, {2.35328*10^8,
1.45971*10^6, 4.98, 1.81, 2.72, 5.95608, 25.}, {1.38724*10^8,
1.23356*10^6, 0.07, 3.05, 1.65, 3.46841, 25.}, {1.35352*10^8,
2.48767*10^6, 0.05, 3.08, 1.63, 3.48508, 25.}, {2.78341*10^8,
2.56991*10^6, 0.01, 4.15, 3.3, 5.30213, 25.}}, {{8.61278*10^7,
1.13076*10^6, 0.56, 0.56, 1.05, 1.31518, 25.1}, {3.33657*10^8,
1.23356*10^6, 0.91, 0.79, 3.97, 4.14887, 25.1}, {2.21834*10^8,
2.38487*10^6, 0.08, 1.85, 2.56, 3.15951, 25.1}, {1.02159*10^8,
1.33635*10^6, 0.19, 3.05, 1.27, 3.30931, 25.1}, {1.33363*10^8,
1.15132*10^6, 1.38, 0.46, 1.57, 2.1403, 25.1}, {1.60336*10^8,
1.93257*10^6, 1.75, 0.29, 1.9, 2.59935, 25.1}, {5.755*10^7,
1.41859*10^6, 1.59, 1.51, 0.7, 2.30178, 25.1}, {4.03166*10^8,
1.04852*10^6, 1.3, 3.73, 4.8, 6.21634, 25.1}, {1.0938*10^8,
1.0074*10^6, 2.92, 2.19, 1.26, 3.86136, 25.1}, {1.41208*10^8,
1.06908*10^6, 4.3, 0.22, 1.68, 4.62177, 25.1}, {2.33642*10^8,
1.04852*10^6, 4.97, 1.82, 2.7, 5.94166, 25.1}, {2.35328*10^8,
1.31579*10^6, 4.98, 1.81, 2.72, 5.95608, 25.1}, {1.38724*10^8,
1.23356*10^6, 0.07, 3.05, 1.65, 3.46841, 25.1}, {1.35352*10^8,
2.40543*10^6, 0.05, 3.08, 1.63, 3.48508, 25.1}, {2.78341*10^8,
2.52879*10^6, 0.01, 4.15, 3.3, 5.30213, 25.1}}, {{8.61278*10^7,
1.1102*10^6, 0.56, 0.56, 1.05, 1.31518, 25.2}, {3.33657*10^8,
1.23356*10^6, 0.91, 0.79, 3.97, 4.14887, 25.2}, {2.21834*10^8,
2.36431*10^6, 0.08, 1.85, 2.56, 3.15951, 25.2}, {1.02159*10^8,
1.31579*10^6, 0.19, 3.05, 1.27, 3.30931, 25.2}, {1.33363*10^8,
1.15132*10^6, 1.38, 0.46, 1.57, 2.1403, 25.2}, {1.60336*10^8,
1.93257*10^6, 1.75, 0.29, 1.9, 2.59935, 25.2}, {5.755*10^7,
1.41859*10^6, 1.59, 1.51, 0.7, 2.30178, 25.2}, {4.03166*10^8,
1.0074*10^6, 1.31, 3.73, 4.8, 6.21844, 25.2}, {1.41208*10^8,
1.06908*10^6, 4.3, 0.22, 1.68, 4.62177, 25.2}, {2.33642*10^8,
1.04852*10^6, 4.97, 1.82, 2.7, 5.94166, 25.2}, {2.35328*10^8,
1.29523*10^6, 4.98, 1.81, 2.72, 5.95608, 25.2}, {1.38724*10^8,
1.213*10^6, 0.07, 3.05, 1.65, 3.46841, 25.2}, {1.35352*10^8,
2.34376*10^6, 0.05, 3.08, 1.63, 3.48508, 25.2}, {2.78341*10^8,
2.50823*10^6, 0.01, 4.15, 3.3, 5.30213, 25.2}}, {{8.61278*10^7,
1.1102*10^6, 0.56, 0.56, 1.05, 1.31518, 25.3}, {3.33657*10^8,
1.13076*10^6, 0.91, 0.79, 3.97, 4.14887, 25.3}, {2.21834*10^8,
2.30264*10^6, 0.08, 1.85, 2.56, 3.15951, 25.3}, {1.02159*10^8,
1.31579*10^6, 0.19, 3.05, 1.27, 3.30931, 25.3}, {1.33363*10^8,
1.15132*10^6, 1.38, 0.46, 1.57, 2.1403, 25.3}, {1.60336*10^8,
1.89145*10^6, 1.75, 0.29, 1.9, 2.59935, 25.3}, {5.755*10^7,
1.41859*10^6, 1.59, 1.51, 0.7, 2.30178, 25.3}, {1.41208*10^8,
1.06908*10^6, 4.3, 0.22, 1.68, 4.62177, 25.3}, {2.33642*10^8,
1.04852*10^6, 4.97, 1.82, 2.7, 5.94166, 25.3}, {2.35328*10^8,
1.29523*10^6, 4.98, 1.81, 2.72, 5.95608, 25.3}, {1.38724*10^8,
1.19244*10^6, 0.07, 3.05, 1.65, 3.46841, 25.3}, {1.35352*10^8,
2.3232*10^6, 0.05, 3.08, 1.63, 3.48508, 25.3}, {2.78341*10^8,
2.46711*10^6, 0.01, 4.15, 3.3, 5.30213, 25.3}}, {{8.61278*10^7,
1.1102*10^6, 0.56, 0.56, 1.05, 1.31518, 25.4}, {2.21834*10^8,
2.15872*10^6, 0.08, 1.85, 2.56, 3.15951, 25.4}, {1.02159*10^8,
1.31579*10^6, 0.18, 3.05, 1.27, 3.30875, 25.4}, {1.33363*10^8,
1.15132*10^6, 1.38, 0.46, 1.57, 2.1403, 25.4}, {1.60336*10^8,
1.89145*10^6, 1.75, 0.29, 1.9, 2.59935, 25.4}, {5.755*10^7,
1.37747*10^6, 1.59, 1.51, 0.7, 2.30178, 25.4}, {1.41208*10^8,
1.06908*10^6, 4.3, 0.22, 1.68, 4.62177, 25.4}, {2.35328*10^8,
1.29523*10^6, 4.98, 1.81, 2.72, 5.95608, 25.4}, {1.38724*10^8,
1.15132*10^6, 0.07, 3.05, 1.65, 3.46841, 25.4}, {1.35352*10^8,
2.26152*10^6, 0.05, 3.08, 1.63, 3.48508, 25.4}, {2.78341*10^8,
2.38487*10^6, 0.01, 4.15, 3.3, 5.30213, 25.4}}, {{8.61278*10^7,
1.08964*10^6, 0.56, 0.56, 1.05, 1.31518, 25.5}, {2.21834*10^8,
2.13816*10^6, 0.08, 1.85, 2.56, 3.15951, 25.5}, {1.02159*10^8,
1.27467*10^6, 0.18, 3.05, 1.27, 3.30875, 25.5}, {1.33363*10^8,
1.15132*10^6, 1.38, 0.46, 1.57, 2.1403, 25.5}, {1.60336*10^8,
1.85033*10^6, 1.75, 0.29, 1.9, 2.59935, 25.5}, {5.755*10^7,
1.25411*10^6, 1.59, 1.51, 0.7, 2.30178, 25.5}, {1.41208*10^8,
1.02796*10^6, 4.3, 0.22, 1.68, 4.62177, 25.5}, {2.35328*10^8,
1.29523*10^6, 4.98, 1.81, 2.72, 5.95608, 25.5}, {1.38724*10^8,
1.13076*10^6, 0.07, 3.05, 1.65, 3.46841, 25.5}, {1.35352*10^8,
2.2204*10^6, 0.05, 3.08, 1.63, 3.48508, 25.5}, {2.78341*10^8,
2.36431*10^6, 0.01, 4.15, 3.3, 5.30213, 25.5}}, {{8.61278*10^7,
1.08964*10^6, 0.56, 0.56, 1.05, 1.31518, 25.6}, {2.21834*10^8,
2.09704*10^6, 0.08, 1.85, 2.56, 3.15951, 25.6}, {1.02159*10^8,
1.23356*10^6, 0.18, 3.05, 1.27, 3.30875, 25.6}, {1.33363*10^8,
1.15132*10^6, 1.38, 0.46, 1.57, 2.1403, 25.6}, {1.60336*10^8,
1.37747*10^6, 1.75, 0.28, 1.9, 2.59825, 25.6}, {5.755*10^7,
1.19244*10^6, 1.59, 1.52, 0.7, 2.30835, 25.6}, {1.41208*10^8,
1.02796*10^6, 4.3, 0.22, 1.68, 4.62177, 25.6}, {2.35328*10^8,
1.25411*10^6, 4.98, 1.81, 2.72, 5.95608, 25.6}, {1.38724*10^8,
1.13076*10^6, 0.07, 3.05, 1.65, 3.46841, 25.6}, {1.35352*10^8,
2.1176*10^6, 0.05, 3.08, 1.63, 3.48508, 25.6}, {2.78341*10^8,
2.28208*10^6, 0.01, 4.15, 3.3, 5.30213, 25.6}}, {{8.61278*10^7,
1.08964*10^6, 0.56, 0.56, 1.05, 1.31518, 25.7}, {2.21834*10^8,
2.01481*10^6, 0.08, 1.85, 2.56, 3.15951, 25.7}, {1.02159*10^8,
1.213*10^6, 0.18, 3.05, 1.27, 3.30875, 25.7}, {1.33363*10^8,
1.15132*10^6, 1.38, 0.46, 1.57, 2.1403, 25.7}, {1.60336*10^8,
1.33635*10^6, 1.75, 0.28, 1.9, 2.59825, 25.7}, {5.755*10^7,
1.15132*10^6, 1.59, 1.52, 0.7, 2.30835, 25.7}, {1.41208*10^8,
1.0074*10^6, 4.3, 0.22, 1.68, 4.62177, 25.7}, {2.35328*10^8,
1.213*10^6, 4.98, 1.81, 2.72, 5.95608, 25.7}, {1.38724*10^8,
1.06908*10^6, 0.07, 3.05, 1.65, 3.46841, 25.7}, {1.35352*10^8,
2.09704*10^6, 0.05, 3.08, 1.63, 3.48508, 25.7}, {2.78341*10^8,
2.26152*10^6, 0.01, 4.15, 3.3, 5.30213, 25.7}}, {{8.61278*10^7,
1.08964*10^6, 0.55, 0.56, 1.05, 1.31095, 25.8}, {2.21834*10^8,
1.99425*10^6, 0.08, 1.85, 2.56, 3.15951, 25.8}, {1.02159*10^8,
1.19244*10^6, 0.18, 3.05, 1.27, 3.30875, 25.8}, {1.33363*10^8,
1.13076*10^6, 1.38, 0.46, 1.57, 2.1403, 25.8}, {1.60336*10^8,
1.33635*10^6, 1.75, 0.28, 1.9, 2.59825, 25.8}, {5.755*10^7,
1.15132*10^6, 1.59, 1.52, 0.7, 2.30835, 25.8}, {1.41208*10^8,
1.0074*10^6, 4.29, 0.22, 1.68, 4.61247, 25.8}, {2.35328*10^8,
1.19244*10^6, 4.98, 1.81, 2.72, 5.95608, 25.8}, {1.38724*10^8,
1.06908*10^6, 0.07, 3.05, 1.65, 3.46841, 25.8}, {1.35352*10^8,
1.99425*10^6, 0.05, 3.08, 1.63, 3.48508, 25.8}, {2.78341*10^8,
2.19984*10^6, 0.01, 4.15, 3.3, 5.30213, 25.8}}, {{8.61278*10^7,
1.08964*10^6, 0.55, 0.56, 1.05, 1.31095, 25.9}, {2.21834*10^8,
1.97369*10^6, 0.08, 1.85, 2.56, 3.15951, 25.9}, {1.02159*10^8,
1.15132*10^6, 0.18, 3.05, 1.27, 3.30875, 25.9}, {1.33363*10^8,
1.13076*10^6, 1.38, 0.46, 1.57, 2.1403, 25.9}, {1.60336*10^8,
1.33635*10^6, 1.75, 0.28, 1.9, 2.59825, 25.9}, {5.755*10^7,
1.06908*10^6, 1.59, 1.52, 0.7, 2.30835, 25.9}, {1.41208*10^8,
1.0074*10^6, 4.29, 0.22, 1.68, 4.61247, 25.9}, {2.35328*10^8,
1.15132*10^6, 4.98, 1.81, 2.72, 5.95608, 25.9}, {1.38724*10^8,
1.0074*10^6, 0.07, 3.05, 1.65, 3.46841, 25.9}, {1.35352*10^8,
1.91201*10^6, 0.05, 3.08, 1.63, 3.48508, 25.9}, {2.78341*10^8,
2.1176*10^6, 0.01, 4.15, 3.3, 5.30213, 25.9}}, {{8.61278*10^7,
1.06908*10^6, 0.55, 0.56, 1.05, 1.31095, 26.}, {2.21834*10^8,
1.97369*10^6, 0.08, 1.85, 2.56, 3.15951, 26.}, {1.02159*10^8,
1.08964*10^6, 0.18, 3.05, 1.27, 3.30875, 26.}, {1.33363*10^8,
1.13076*10^6, 1.38, 0.46, 1.57, 2.1403, 26.}, {1.60336*10^8,
1.33635*10^6, 1.75, 0.28, 1.9, 2.59825, 26.}, {2.35328*10^8,
1.15132*10^6, 4.98, 1.81, 2.72, 5.95608, 26.}, {1.35352*10^8,
1.80921*10^6, 0.05, 3.08, 1.63, 3.48508, 26.}, {2.78341*10^8,
2.07648*10^6, 0.01, 4.15, 3.3, 5.30213, 26.}}, {{8.61278*10^7,
1.0074*10^6, 0.55, 0.56, 1.05, 1.31095, 26.1}, {2.21834*10^8,
1.97369*10^6, 0.08, 1.85, 2.56, 3.15951, 26.1}, {1.02159*10^8,
1.0074*10^6, 0.18, 3.05, 1.27, 3.30875, 26.1}, {1.33363*10^8,
1.08964*10^6, 1.38, 0.46, 1.57, 2.1403, 26.1}, {1.60336*10^8,
1.31579*10^6, 1.75, 0.28, 1.9, 2.59825, 26.1}, {2.35328*10^8,
1.15132*10^6, 4.98, 1.81, 2.72, 5.95608, 26.1}, {1.35352*10^8,
1.74754*10^6, 0.05, 3.08, 1.63, 3.48508, 26.1}, {2.78341*10^8,
2.07648*10^6, 0.01, 4.15, 3.3, 5.30213, 26.1}}, {{2.21834*10^8,
1.97369*10^6, 0.08, 1.85, 2.56, 3.15951, 26.2}, {1.33363*10^8,
1.06908*10^6, 1.38, 0.46, 1.57, 2.1403, 26.2}, {1.60336*10^8,
1.08964*10^6, 1.75, 0.28, 1.9, 2.59825, 26.2}, {2.35328*10^8,
1.15132*10^6, 4.98, 1.81, 2.72, 5.95608, 26.2}, {1.35352*10^8,
1.72698*10^6, 0.05, 3.08, 1.63, 3.48508, 26.2}, {2.78341*10^8,
2.07648*10^6, 0.01, 4.15, 3.3, 5.30213, 26.2}}, {{2.21834*10^8,
1.89145*10^6, 0.08, 1.85, 2.56, 3.15951, 26.3}, {1.33363*10^8,
1.06908*10^6, 1.38, 0.46, 1.57, 2.1403, 26.3}, {1.60336*10^8,
1.08964*10^6, 1.75, 0.28, 1.9, 2.59825, 26.3}, {2.35328*10^8,
1.15132*10^6, 4.98, 1.81, 2.72, 5.95608, 26.3}, {1.35352*10^8,
1.72698*10^6, 0.05, 3.08, 1.63, 3.48508, 26.3}, {2.78341*10^8,
2.07648*10^6, 0.01, 4.15, 3.3, 5.30213, 26.3}}, {{2.21834*10^8,
1.87089*10^6, 0.08, 1.85, 2.56, 3.15951, 26.4}, {1.33363*10^8,
1.04852*10^6, 1.38, 0.46, 1.57, 2.1403, 26.4}, {2.35328*10^8,
1.15132*10^6, 4.98, 1.81, 2.72, 5.95608, 26.4}, {1.35352*10^8,
1.62418*10^6, 0.05, 3.08, 1.63, 3.48508, 26.4}, {2.78341*10^8,
1.95313*10^6, 0.01, 4.15, 3.3, 5.30213, 26.4}}, {{2.21834*10^8,
1.87089*10^6, 0.08, 1.85, 2.56, 3.15951, 26.5}, {1.33363*10^8,
1.02796*10^6, 1.38, 0.46, 1.57, 2.1403, 26.5}, {2.35328*10^8,
1.15132*10^6, 4.98, 1.81, 2.72, 5.95608, 26.5}, {1.35352*10^8,
1.58306*10^6, 0.05, 3.08, 1.63, 3.48508, 26.5}, {2.78341*10^8,
1.93257*10^6, 0.01, 4.15, 3.3, 5.30213, 26.5}}, {{2.21834*10^8,
1.87089*10^6, 0.08, 1.85, 2.57, 3.16762, 26.6}, {1.33363*10^8,
1.02796*10^6, 1.38, 0.46, 1.57, 2.1403, 26.6}, {2.35328*10^8,
1.06908*10^6, 4.98, 1.81, 2.72, 5.95608, 26.6}, {1.35352*10^8,
1.58306*10^6, 0.05, 3.08, 1.63, 3.48508, 26.6}, {2.78341*10^8,
1.89145*10^6, 0.01, 4.15, 3.3, 5.30213, 26.6}}, {{2.21834*10^8,
1.87089*10^6, 0.08, 1.85, 2.57, 3.16762, 26.7}, {1.33363*10^8,
1.02796*10^6, 1.38, 0.46, 1.57, 2.1403, 26.7}, {2.35328*10^8,
1.06908*10^6, 4.98, 1.81, 2.72, 5.95608, 26.7}, {1.35352*10^8,
1.5625*10^6, 0.05, 3.08, 1.63, 3.48508, 26.7}, {2.78341*10^8,
1.89145*10^6, 0.01, 4.15, 3.3, 5.30213, 26.7}}, {{2.21834*10^8,
1.87089*10^6, 0.08, 1.85, 2.57, 3.16762, 26.8}, {1.33363*10^8,
1.02796*10^6, 1.38, 0.46, 1.57, 2.1403, 26.8}, {2.35328*10^8,
1.04852*10^6, 4.98, 1.81, 2.72, 5.95608, 26.8}, {1.35352*10^8,
1.5625*10^6, 0.05, 3.08, 1.63, 3.48508, 26.8}, {2.78341*10^8,
1.89145*10^6, 0.01, 4.14, 3.3, 5.29431, 26.8}}, {{2.21834*10^8,
1.87089*10^6, 0.08, 1.85, 2.57, 3.16762, 26.9}, {2.35328*10^8,
1.02796*10^6, 4.98, 1.81, 2.72, 5.95608, 26.9}, {1.35352*10^8,
1.25411*10^6, 0.05, 3.08, 1.63, 3.48508, 26.9}, {2.78341*10^8,
1.58306*10^6, 0.01, 4.15, 3.3, 5.30213, 26.9}}, {{2.21834*10^8,
1.87089*10^6, 0.08, 1.85, 2.57, 3.16762, 27.}, {1.35352*10^8,
1.17188*10^6, 0.05, 3.08, 1.63, 3.48508, 27.}, {2.78341*10^8,
1.58306*10^6, 0.01, 4.15, 3.3, 5.30213, 27.}}, {{2.21834*10^8,
1.87089*10^6, 0.08, 1.85, 2.57, 3.16762, 27.1}, {1.35352*10^8,
1.17188*10^6, 0.05, 3.08, 1.63, 3.48508, 27.1}, {2.78341*10^8,
1.43915*10^6, 0.01, 4.15, 3.3, 5.30213, 27.1}}, {{2.21834*10^8,
1.85033*10^6, 0.08, 1.85, 2.57, 3.16762, 27.2}, {1.35352*10^8,
1.17188*10^6, 0.05, 3.08, 1.63, 3.48508, 27.2}, {2.78341*10^8,
1.35691*10^6, 0.01, 4.15, 3.3, 5.30213, 27.2}}, {{2.21834*10^8,
1.82977*10^6, 0.08, 1.85, 2.57, 3.16762, 27.3}, {1.35352*10^8,
1.17188*10^6, 0.05, 3.08, 1.63, 3.48508, 27.3}, {2.78341*10^8,
1.33635*10^6, 0.01, 4.15, 3.3, 5.30213, 27.3}}, {{2.21834*10^8,
1.78866*10^6, 0.08, 1.85, 2.57, 3.16762, 27.4}, {1.35352*10^8,
1.15132*10^6, 0.04, 3.08, 1.63, 3.48495, 27.4}, {2.78341*10^8,
1.25411*10^6, 0.01, 4.15, 3.3, 5.30213, 27.4}}, {{2.21834*10^8,
1.60362*10^6, 0.08, 1.85, 2.57, 3.16762, 27.5}, {1.35352*10^8,
1.1102*10^6, 0.04, 3.08, 1.63, 3.48495, 27.5}, {2.78341*10^8,
1.25411*10^6, 0.01, 4.15, 3.3, 5.30213, 27.5}}, {{2.21834*10^8,
1.5625*10^6, 0.08, 1.85, 2.57, 3.16762, 27.6}, {1.35352*10^8,
1.06908*10^6, 0.04, 3.08, 1.63, 3.48495, 27.6}, {2.78341*10^8,
1.23356*10^6, 0.01, 4.15, 3.3, 5.30213, 27.6}}, {{2.21834*10^8,
1.52139*10^6, 0.08, 1.85, 2.57, 3.16762, 27.7}, {1.35352*10^8,
1.04852*10^6, 0.04, 3.08, 1.63, 3.48495, 27.7}, {2.78341*10^8,
1.15132*10^6, 0.01, 4.15, 3.3, 5.30213, 27.7}}, {{2.78341*10^8,
1.15132*10^6, 0.01, 4.15, 3.3, 5.30213, 27.8}}, {{2.78341*10^8,
1.15132*10^6, 0.01, 4.15, 3.3, 5.30213, 27.9}}, {{2.78341*10^8,
1.15132*10^6, 0.01, 4.15, 3.3, 5.30213, 28.}}, {{2.78341*10^8,
1.13076*10^6, 0.01, 4.15, 3.3, 5.30213, 28.1}}, {{2.78341*10^8,
1.08964*10^6, 0.01, 4.15, 3.3, 5.30213, 28.2}}}
The first column is the ID, the 2nd its mass, the 3rd–5th its x, y, z coordinates, and the last column its time.
Each matrix is a snapshot of the system of particles at a different time. I am trying to select those particles whose inter-particle distance is never less than, say, 0.01 (in the units of this output). With the help of this forum (Modifying a list of matrices with conditional statement) I was able to sort particles based on their inter-particle distance being greater than 0.01 at each time, independently of their proximity at other times.
However, now I want to refine this further. I want to run a similar conditional sort to pick particles which never came within a distance of 0.01 of any other particle. For this I would like to start at the earliest time and identify, by halo ID (1st column), each particle that was at least a distance of 0.01 from all other particles; if any pair was less than 0.01 apart, I exclude those particles from further sorting. This way I make sure that at the end of the sorting I have, at each time, only those particles which never came close to any other particle in their history.
EDIT: I should note that the particles can merge to form another particle. So let's say at time t1 there are two particles A and B, 0.01 apart; at t2 a third particle closes in on B; at t3 the third particle merges with B and the result retains the particle ID B. Let's also assume that A and B are still more than 0.01 apart. Based on my previous sorting code I would get 2 particles at least 0.01 apart; in the new code, however, I want to exclude B, as it did come in close proximity to another particle in its history.
• First, reduce your matrix to just sets of {x,y,z} as ID, time, etc. are irrelevant. (You could help us by posting just such minimal data.) – David G. Stork Apr 4 '15 at 0:42
• I think particle ID is important, as otherwise it's not possible to pinpoint whether a particular particle was ever in closer proximity than 0.01 with any other particle. – HuShu Apr 4 '15 at 0:53
• Hello @Bill please see the edited post. – HuShu Apr 4 '15 at 1:37
• B is not included because it was in close proximity to the third merging particle at some point in its history, as the third particle merged into B. Making list and intersecting is not going to work because of the reason I stated : Particles merge, and retain the ID of the more massive particle, and both the merging particles should not be sorted. – HuShu Apr 4 '15 at 5:14
• This might be an application of Dataset. I tried with your data but reading it in was not successful. The lengths of your data are: In[11]:= Length /@ data Out[11]= {18, 15, 14, 13, 11, 11, 11, 11, 11, 11, 8, 8, 6, 6, 5, 5, 5, 5, 5, 4, 3, 3, 3, 3, 3, 3, 3, 3, 1, 1, 1, 1, 1}. But once the data is in a Dataset you can manage to calculate with this data quite easily. – mgamer Apr 4 '15 at 12:23
You are right about the format of your data; it is well suited to this question. I imagine that a solution could look like this:
findCollisions[collided_, pts_] := Module[
{npts = DeleteCases[pts, {Alternatives @@ collided, __}, {2}]},
collided~Union~Select[npts, withinRange[npts, 0.1]][[All, 1]]
]
withinRange[pts_, threshold_][pt_] := Length@Nearest[Complement[pts, {pt}][[All, {3, 4, 5}]], pt[[{3, 4, 5}]], {1, threshold}] > 0
ids = Fold[findCollisions, {}, data]
where data is your list and ids are the IDs of particles that ever came within the specified distance of another particle. However, upon running this code you will find that ids is an empty list. You have to increase the range from 0.01 to, for example, 0.1 before you get a non-empty list. This could either mean that I've made a mistake, or that the sample data doesn't include any situations where particles are within 0.01 of each other.
The idea is to delete the particles corresponding to ids from your original data:
DeleteCases[data, {Alternatives @@ ids, __}, {2}]
• Thank you very much for your reply. I think there is something wrong. I used a mock data which is very similar to the example that I gave and the id list is empty. – HuShu Apr 5 '15 at 0:34
• MockData1 = {{{9.*10^7, 1.13076*10^6, 0.56, 0.56, 1.05, 1.31518, 25.}, {3.33657*10^8, 1.23356*10^6, 0.91, 0.79, 3.98, 4.15844, 25.}}, {{9.*10^7, 1.13076*10^6, 0.56, 0.56, 1.05, 1.31518, 24.9}, {3.33657*10^8, 1.23356*10^6, 0.91, 0.79, 3.90, 4.15844, 24.9}, {3.33634*10^8, 1.23356*10^6, 0.90, 0.79, 3.90, 4.15844, 24.9}}, {{9.*10^7, 1.13076*10^6, 0.56, 0.56, 1.05, 1.31518, 24.8}, {3.33657*10^8, 1.23356*10^6, 0.91, 0.79, 3.90, 4.15844, 24.8}}} – HuShu Apr 5 '15 at 0:34
• @NilanjanBanik It depends on your threshold. 0.01 versus 0.1 versus 1. You have to try it on a sample where you know what the result should be to make sure it works. – C. E. Apr 5 '15 at 0:40
• Yes I did try it with 0.1 instead of 0.01, I also tried with 1, but then I get all the particles in the collided list, which should not be the case as the first particle is always away from all the others by at least >1. – HuShu Apr 5 '15 at 0:45
• Yes, I used the MockData1 to test the code out. I should expect only the first particle to be untouched and the collided list should the remaining two particles. See comments. – HuShu Apr 5 '15 at 0:53
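The accepted approach above — fold over the snapshots in time order, accumulate the IDs of particles that ever came within the threshold of another particle, and drop them from later snapshots — can also be sketched outside Mathematica. A minimal Python translation of the idea (not of the exact `Nearest`-based code); the row layout `(id, mass, x, y, z, r, t)` and the mock values are assumptions based on the question's description:

```python
import math

def find_collisions(snapshots, threshold):
    """Sweep snapshots in time order; return the set of particle IDs that
    ever came within `threshold` of another particle.  Particles already
    flagged are ignored in later snapshots, mirroring the Fold approach."""
    collided = set()
    for snap in snapshots:
        # drop particles already flagged in earlier snapshots
        alive = [p for p in snap if p[0] not in collided]
        for i, p in enumerate(alive):
            for q in alive[i + 1:]:
                # columns 3-5 (0-indexed 2:5) hold the x, y, z position
                if math.dist(p[2:5], q[2:5]) < threshold:
                    collided.update((p[0], q[0]))
    return collided

# Mock data in the assumed row layout: particle 3 closes to within
# 0.005 of particle 2 in the second snapshot.
mock = [
    [(1, 1.1e6, 0.56, 0.56, 1.05, 1.32, 25.0),
     (2, 1.2e6, 0.91, 0.79, 3.98, 4.16, 25.0)],
    [(1, 1.1e6, 0.56, 0.56, 1.05, 1.32, 25.1),
     (2, 1.2e6, 0.91, 0.79, 3.90, 4.16, 25.1),
     (3, 1.2e6, 0.905, 0.79, 3.90, 4.16, 25.1)],
]
ids = find_collisions(mock, 0.01)                          # -> {2, 3}
clean = [[p for p in s if p[0] not in ids] for s in mock]  # only particle 1 survives
```

The quadratic pairwise loop is fine for a few dozen particles per snapshot; for larger snapshots a spatial index (the role `Nearest` plays in the Mathematica answer) would be the right replacement.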
|
https://proxies-free.com/homological-algebra-cone-of-a-morphism-of-complexes-that-are-concentrated-in-degree-0-and-1/
|
homological algebra – Cone of a morphism of complexes that are concentrated in degree $0$ and $1$
Let $R$ be a ring and $f: A \to A'$ and $g: B \to B'$ be morphisms of $R$-modules. Let $h: C_\bullet \to C_\bullet'$ be a morphism of complexes of $R$-modules fitting in a morphism of distinguished triangles:
$$\require{AMScd} \begin{CD} C_\bullet @>>> A(0) @>>> B(0) @>>> C_\bullet(1) \\ @VVhV @VVfV @VVgV @VVh(1)V \\ C_\bullet' @>>> A'(0) @>>> B'(0) @>>> C_\bullet'(1) \end{CD}$$
From a paper I am reading, the following is used:
If $f$ and $g$ are injective morphisms, then we have a quasi-isomorphism $\operatorname{cone}(h) \cong (\operatorname{coker}(f) \to \operatorname{coker}(g))$.
It seems that the analogous general statement is false, i.e. when $f$ and $g$ are not assumed to be injective and $A(0)$, $A'(0)$, $B(0)$ and $B'(0)$ are not assumed to be concentrated in degree $0$ anymore (see for instance this question).
Does anyone have a proof – if true – of this statement?
Many thanks!
|
https://www.physicsforums.com/threads/sitting-n-married-couples-on-a-round-table.248149/
|
# Sitting n married couples on a round table
1. Aug 2, 2008
### lizzyb
Question: A total of 2n people, consisting of n married couples, are randomly seated (all possible orderings being equally likely) at a round table. Let C_i denote the event that couple i are seated next to each other, i = 1, 2, ... n.
(a) Find P(C_i).
There are two different ways to seat couple C_i together at a round table, and, treating the couple as one block, there are (2n - 2)! orderings of everyone else, hence we have:
P(C_i) = (2 (2n - 2)!)/((2n - 1)!) = 2/(2n - 1)
The (2n - 1)! is the total number of orderings (C_i included) at a round table. The book gives an answer of 2/(2n + 1).
What am I doing wrong?
(b) For j <> i, find P(C_j | C_i)
Instead of using the P(C_j | C_i) = P(C_j C_i) / P(C_i) formula, we may cut to the chase: since one couple has already been seated, we can view the round table as a straight line. The number of ways to seat C_j is:
i) 2 choices on which of the couple to sit
ii) Pick a place among (2n - 3) seats
iii) Place the other couple on the other side (1 choice)
2 (2n - 3)
So P(C_j | C_i) = ( 2 (2n - 3) (2n - 4)! )/ (2n - 2)! = 2 / (2n - 2)
Which is in the back of the book
(c) When n is large, approximate the probability that there are no married couples who are seated next to each other.
I imagine we're supposed to use the Poisson distribution?
2. Aug 2, 2008
### D H
Staff Emeritus
In short, you are ignoring that the table is round, and this changes things. There is a very easy way to solve this problem: What is the probability that the couple is not seated next to one another?
3. Aug 2, 2008
### lizzyb
There are 2n places for the first member of the couple, then 2n-2 places for the other member.
So P(E_i) = ( (2n)(2n - 2)(2n-2)! )/(2n-1)!??
4. Aug 2, 2008
### D H
Staff Emeritus
You did not answer my question. I asked you to compute the probability that the couple are not seated next to one another. Call this probability q. The probability that the couple are seated next to one another is p=1-q.
5. Aug 2, 2008
### lizzyb
That's what I meant by ( (2n)(2n - 2)(2n-2)! )/(2n-1)!, but that's probably not right.
Since its a round table, we can say that the location of the first person placed upon it doesn't matter, but for the other member of that couple, there are 2n-2 places to put him or her (since there are two places on either side of the first seat where his/her partner is). After these two have been placed, there are still the other people, of which there are (2n-4)! possible seating arrangements. This all goes over (2n-1)! since that is the total number of possible permutations of 2n people on a round table:
q = (2n - 2)(2n - 4)!/(2n - 1)! = 1/((2n - 1)(2n - 3))
does that look right to you?
6. Aug 2, 2008
### konthelion
For part (a) you did it correctly! The book is wrong :)
There are $$(2n-1)!$$ ways to arrange 2n people. Since the couple with index i are sitting together, you can think of them as one person thus you get $$(2n-2)!$$. But, there are 2 ways to arrange couple with index i.
Thus
$$\boxed{P(C_{i})=\frac{2(2n-2)!}{(2n-1)!}=\frac{2}{2n-1}}$$
(b) Well, you can further simplify it into $$\frac{1}{n-1}$$
(c) Yes, you use the Poisson distribution with parameter $$\lambda = pn$$. Now what is p? Then just find the probability that no couple is sitting together, i.e. $$\boxed{P[X=0] = e^{-\lambda}}$$, by definition of the Poisson distribution.
Last edited: Aug 2, 2008
7. Aug 2, 2008
### D H
Staff Emeritus
I took a second look, sorry.
You are doing nothing wrong. The book is what is wrong. The answer is 2/(2n-1) (n>1).
Consider one member of the couple in question. If the couple is not seated adjacently, both seats next to this member must be filled by someone other than the other member of the couple. These are the only two seats one need be concerned with, and the probability that neither is the other member of the couple is (2n-2)/(2n-1) · (2n-3)/(2n-2) = (2n-3)/(2n-1) = 1 - 2/(2n-1). The probability the couple *are* seated adjacently is thus 2/(2n-1).
As a sanity check, look at the case n = 2. The only way a couple is not seated adjacently at a table of four is when they are seated across from one another. There are 4·2 = 8 such seatings out of a total of 24, so the probability they are seated adjacently is (24 - 8)/24 = 16/24 = 2/3.
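This sanity check generalizes: brute-force enumeration over all (2n)! seat assignments confirms P(C_i) = 2/(2n − 1) for small n. A quick sketch in Python (an editorial aside, not part of the original thread):

```python
from fractions import Fraction
from itertools import permutations

def p_adjacent(n):
    """Exact probability that a fixed couple (persons 0 and 1) sits
    adjacently at a round table of 2n seats, by enumerating all (2n)!
    seat assignments; adjacency on the circle is taken mod 2n."""
    m = 2 * n
    hits = total = 0
    for perm in permutations(range(m)):
        total += 1
        i, j = perm.index(0), perm.index(1)
        if (i - j) % m in (1, m - 1):  # neighbouring seats on the cycle
            hits += 1
    return Fraction(hits, total)

assert p_adjacent(2) == Fraction(2, 3)   # the 16/24 case checked above
assert p_adjacent(3) == Fraction(2, 5)   # 2/(2n - 1) with n = 3
```

Enumerating all seat assignments is wasteful (rotations are counted 2n times each), but since rotation preserves adjacency the probability comes out the same as counting circular orderings.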
8. Aug 2, 2008
### lizzyb
Great! Thank you for verifying part a as well as the other way of understanding it as well.
As for part (c), I suppose I'm to guess a decent value of p based on the answers of (a) and (b)?
(a) P(C_i) = 2/(2n - 1)
(b) P(C_j | C_i) = 1/(n-1) (i <> j)
And this p should be a general guesstimate of the probability that a couple sits together? Both (a) and (b) are similar to 1/n ... that gives the right answer in the back of the book but I can't say I fully understand why it works.
Thanks for your help!
Yes, which you have already found namely $$P(C_{i})$$
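The Poisson heuristic in part (c) can be checked by simulation: with λ = n·P(C_i) = 2n/(2n − 1) → 1, the fraction of random seatings with no adjacent couple should approach e^{−1} ≈ 0.368 as n grows. A quick Monte Carlo sketch (the trial count and seed are arbitrary choices):

```python
import math
import random

def no_adjacent_couple(n, rng):
    """One uniform random circular seating of n couples (persons 2k and
    2k + 1 form couple k); True if no couple sits in adjacent seats."""
    m = 2 * n
    seating = list(range(m))
    rng.shuffle(seating)
    # seat i and seat (i + 1) mod m are neighbours; // 2 gives the couple index
    return all(seating[i] // 2 != seating[(i + 1) % m] // 2 for i in range(m))

rng = random.Random(0)
n, trials = 10, 20000
frac = sum(no_adjacent_couple(n, rng) for _ in range(trials)) / trials
# frac lands near exp(-1) ~ 0.368; inclusion-exclusion gives ~0.34 exactly
# for n = 10, so the Poisson approximation is already in the right range.
```

Shuffling a fixed seat list and testing circular adjacency is equivalent to seating people uniformly at random, since every permutation is equally likely.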
|
https://ncatlab.org/nlab/show/J.M.+Ellis+McTaggart
|
# nLab J.M. Ellis McTaggart
John McTaggart Ellis McTaggart1 (1866–1925) was a British metaphysician. He was part of British idealism, which had originated in German idealism with its key figures Kant, Hegel, Fichte, and Schelling. He was influential on the young Bertrand Russell.
## References
1. John McTaggart is really his first name and Ellis McTaggart his last name
Last revised on January 28, 2018 at 19:41:45. See the history of this page for a list of all contributions to it.
|
https://slideplayer.com/slide/705453/
|
# Strategies to strengthen the distribution system to improve the availability of medicine — Dr Wael Inmair, Director Assistant, Central Medical Supply Store, MOH
Strategies to strengthen the distribution system to improve the availability of medicine Dr Wael Inmair Director Assistant, Central Medical Supply Store, MOH
Content
1. Supply directorate organogram
2. Directorate responsibilities
3. Departments related to the supply chain process
4. Obstacles
5. Procedures done to overcome obstacles
6. Future plans

Supply Directorate Organogram: Director Assistant — North Regional Store Department, Stationery Department, Storage & Distribution, Inspection Department, Receiving Department, I.T. Department
Supply Directorate Responsibilities
- Managing drug selection, qualitative and quantitative
- Quantifying drug requirements
- Inventory management
- Contributing to the tender process
- Drug donations
- Managing drug distribution
- Medical stores management
- Drug management for health facilities
- Transport management
- Contributing to promoting rational prescribing
- Ensuring good dispensing practices
- Stock monitoring and evaluation
- Community participation
- Security management
Departments Related to the Supply Chain
- Ministry of Finance
- General Supply Department
- Joint Procurement Department
- Jordan Food & Drug Administration
- Procurement Department, MoH
- Finance Department, MoH
- Health Insurance Administration
- Health facilities
- Suppliers
- Human Resources Department, MoH
Obstacles
- Low budgeting
- Lead time for the tender process
- Lead time for direct purchasing
- Delays in quality control release
- Poor dispensing practice
- Tender awarding practice
- Delivery delays by the supplier
- Information system deficiencies
- Staff training programs
- Monitoring and feedback mechanism
- Electronic communication deficiencies between health facilities
- Auditing program deficiency
Directorate Accomplishments
- Budget increase
- Updating tender terms and conditions
- Better stock management to accommodate emergency stocks
- Distribution of the JNDF and RDL
- Improvement in information system management
- Drug efficacy reporting mechanism
- Mechanism for updating the JNDF, MoH EDL, and RDL
- Communication improvement between health facilities
Future Plans
- A web-enabled supply directorate website to facilitate interdepartmental information exchange
- Unifying the procurement process through one department (JPD)
- Distribution standards development and implementation
- Staff training programs
- Updating the directorate organogram
- Storage capacity increase
- Implementation of a new modern security system
- Transport system improvement
- Staff continuing education program
- Improved collaboration with national and international organizations
|
2020-09-30 22:59:51
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9458217024803162, "perplexity": 1309.7377924600291}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402128649.98/warc/CC-MAIN-20200930204041-20200930234041-00065.warc.gz"}
|
https://www.physicsforums.com/threads/an-imaginary-problem.1048254/
|
# An imaginary problem
• I
• Gear300
Gear300
TL;DR Summary
imaginary algebra
I saw a proof in which they came up with the ith root of i through the typical algebra.
$$i^{1/i} = i^{-i} = e^{i\frac{\pi}{2} \cdot -i} = e^{\frac{\pi}{2}} ~.$$
But it seems the proof is entirely algebraic, so we have no grounds for thinking it works anywhere. The only exception might be an analytic connection with power series, like a Laurent series. Is there such a connection, or is this as ad hoc as it seems?
Mentor
> TL;DR Summary: imaginary algebra
>
> I saw a proof in which they came up with the ith root of i through the typical algebra.
> $$i^{1/i} = i^{-i} = e^{i\frac{\pi}{2} \cdot -i} = e^{\frac{\pi}{2}} ~.$$
> But it seems the proof is entirely algebraic, so we have no grounds for thinking it works anywhere. The only exception might be an analytic connection with power series, like a Laurent series. Is there such a connection, or is this as ad hoc as it seems?
It's pretty straightforward.
##i = e^{i\pi/2} \Rightarrow i^{-i} = (e^{i\pi/2})^{-i} = e^{(i\pi/2) \cdot (-i)} = e^{\pi/2}##
topsquark
Gear300
> It's pretty straightforward.
> ##i = e^{i\pi/2} \Rightarrow i^{-i} = (e^{i\pi/2})^{-i} = e^{(i\pi/2) \cdot (-i)} = e^{\pi/2}##
So it's fine doing this sort of thing in a quantum mechanics equation?
Gold Member
MHB
> TL;DR Summary: imaginary algebra
>
> I saw a proof in which they came up with the ith root of i through the typical algebra.
> $$i^{1/i} = i^{-i} = e^{i\frac{\pi}{2} \cdot -i} = e^{\frac{\pi}{2}} ~.$$
> But it seems the proof is entirely algebraic, so we have no grounds for thinking it works anywhere. The only exception might be an analytic connection with power series, like a Laurent series. Is there such a connection, or is this as ad hoc as it seems?
Of course, this isn't unique.
##i^{1/i} = \left ( e^{i \pi / 2} \right )^{1/i} = \left ( e^{i \pi / 2 + 2 k \pi i } \right )^{1/i} = e^{\pi /2 + 2 k \pi}##
where k is any integer, so
##i^{1/i} = e^{5 \pi / 2}##
just as well.
-Dan
hutchphd
Mentor
So it's fine doing this sort of thing in a quantum mechanics equation?
Why not? This is mathematics.
Gear300
> Of course, this isn't unique.
> ##i^{1/i} = \left ( e^{i \pi / 2} \right )^{1/i} = \left ( e^{i \pi / 2 + 2 k \pi i } \right )^{1/i} = e^{\pi /2 + 2 k \pi}##
> where k is any integer, so
> ##i^{1/i} = e^{5 \pi / 2}##
> just as well.
>
> -Dan

> Why not? This is mathematics.
True enough. I guess it doesn't work unless it works.
Homework Helper
Gold Member
Far from "ad hoc"; these are basic properties of exponents and polar coordinates are the best way to deal with powers in the complex plane.
I admit that I have no geometric image of numbers to complex powers, but I have to accept it because so much of it works out perfectly for real powers.
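The values discussed in the thread are easy to check numerically; a small sketch in plain Python (Python's complex power uses the principal branch of the logarithm, which picks out the k = 0 value):

```python
import cmath
import math

i = 1j

# Principal value: i**(1/i) = i**(-i) = e**(pi/2)
principal = i ** (1 / i)
assert abs(principal - math.exp(math.pi / 2)) < 1e-9

# Other branches: i = exp(i*pi/2 + 2*pi*i*k), so i**(1/i) = exp(pi/2 + 2*pi*k)
for k in (-1, 0, 1):
    branch = cmath.exp((1 / i) * (1j * math.pi / 2 + 2j * math.pi * k))
    assert abs(branch - math.exp(math.pi / 2 + 2 * math.pi * k)) < 1e-9
```

This reproduces both the principal value e^(pi/2) and the multivalued family e^(pi/2 + 2k*pi) noted above.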
|
2023-04-01 20:51:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9916032552719116, "perplexity": 967.9170061345072}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950247.65/warc/CC-MAIN-20230401191131-20230401221131-00532.warc.gz"}
|
http://terrysylvester.com/adairs-gift-bxlggz/square-pyramidal-symmetry-elements-a6745e
|
In square pyramidal structures the apical angle found is usually between 100° and 106°, as shown in Table 2. In molecular geometry, square pyramidal geometry describes the shape of certain compounds with the formula ML5, where L is a ligand; if the ligand atoms were connected, the resulting shape would be that of a pyramid with a square base. Xenon oxytetrafluoride is an example of a molecule with the square pyramidal coordination geometry, and bromine pentafluoride, BrF5, likewise has a square pyramidal geometry and belongs to the C4v point group. The geometry is common for certain main group compounds that have a stereochemically active lone pair, as described by VSEPR theory: such a molecule is made up of six equally spaced sp3d2 hybrid orbitals arranged at 90° angles (the overall shape of the orbitals is octahedral); one orbital contains a lone pair of electrons, so the remaining five atoms connected to the central atom give the molecule a square pyramidal shape. This leads to AB5E1, or square pyramidal.

Structural data for five-coordinate complexes of the 3d elements have become fairly numerous. Some molecular compounds that adopt square pyramidal geometry are XeOF4 [2] and various halogen pentafluorides (XF5, where X = Cl, Br, I). Complexes of vanadium(IV), such as [VO(acac)2], are square pyramidal (acac = acetylacetonate, the deprotonated anion of acetylacetone, 2,4-pentanedione) [3][4]. Certain compounds crystallize in both the trigonal bipyramidal and the square pyramidal structures, notably [Ni(CN)5]3− [1]. Table 1 compares crystal-field stabilization energies (CFSE) for five-coordinate 3d metal complexes in the square pyramidal and trigonal bipyramidal geometries. As a trigonal bipyramidal molecule undergoes Berry pseudorotation, it proceeds via an intermediary stage with the square pyramidal geometry; thus, even though the square pyramidal geometry is rarely seen as the ground state, it is accessed by a low energy distortion from a trigonal bipyramid. Pseudorotation also occurs in square pyramidal molecules, and the mechanism used is similar to the Berry mechanism.

Molecular symmetry in chemistry describes the symmetry present in molecules and the classification of molecules into groups according to their symmetry. Point group symmetry is an important property of molecules widely used in some branches of chemistry: spectroscopy, quantum chemistry and crystallography. Symmetry elements are the points, lines, or planes with respect to which a symmetry operation is carried out; we thus define a point group as a collection of symmetry elements (operations), and a point group symbol is a shorthand notation which identifies the point group. A proper axis of rotation Cn is a rotation with respect to a line (the axis of rotation), through an angle of (360/n)°. A square has a C4 axis of symmetry: performing two successive C4 (360°/4 = 90°) rotations has the same effect as a single C2 (180°) rotation, so for every C4 axis there is always a collinear C2 axis; moreover, a C4 axis generates only two unique symmetry operations, C4 and C4^3. The operation of inversion is defined relative to the central point within the molecule, through which all symmetry elements must pass, typically the origin of the Cartesian coordinate system (x, y, z = 0, 0, 0). When listing the proper axes we conventionally write the principal axis first, and then all other axes from their highest to their lowest order; when there are axes of the same order, those with the least number of primes are denoted first, and those with the highest number of primes last. Two perpendicular axes of symmetry combine: suppose a set is mapped onto itself by the mapping M and also by the mapping N; then the set must also be mapped onto itself by MN and by NM.

Cnv, [n], (*nn), of order 2n, is pyramidal symmetry, the full acro-n-gonal group (abstract group Dihn); it is the symmetry group for a regular n-sided pyramid. In biology C2v is called biradial symmetry, and for n = 1 we have again Cs (1*). The point group symmetry involved here is of type C4v; the five mirror planes of a square planar molecule MX4 are grouped into three classes (σh, 2σv, 2σd). For the Berry pseudorotation of an ML5 species, the appropriate point group for the symmetry analysis is the one which contains only those symmetry elements that are preserved throughout the reaction path: a twofold proper rotation and two mutually perpendicular reflection planes that contain the axis, thus C2v. The symmetry of all molecular motion is obtained by viewing each atom as the center of 3 intersecting axes (x, y and z); the characters of the reducible representation are related to the changes in these axes as each symmetry operation is performed. If an atom is shifted by a symmetry operation, its contribution to the character is 0, and if an axis on an atom is shifted to its negative, that atom contributes −1. Exercises: A. Draw a 3D formula for BrF5 and identify and illustrate all symmetry elements associated with the molecule. B. Assuming that PtCl42− is planar, imagine that PtCl42− is non-planar (square pyramidal); identify all of the symmetry elements possessed by this molecule and identify its point group. What is the order of the C4v group?

Each crystal system and class is distinguished from the others by its own elements of symmetry, often called symmetry operations. There are six (6) elements of symmetry in crystals: a center of symmetry, an axis of symmetry, a plane of symmetry, an axis of rotatory inversion, a screw-axis of symmetry, and a glide-plane of symmetry. There are many symmetry point groups, but in crystals they must be consistent with the crystalline periodicity (translational periodicity); obviously, the symmetry elements that imply any lattice translations (glide planes and screw axes) are not point group operations. While not always immediately obvious, in most well formed crystal shapes the axes of rotation, axes of rotoinversion, center of symmetry, and mirror planes can be spotted. If a class has no center of symmetry, the faces on the top of the crystal do not occur on the bottom. The rhombic-pyramidal class, 2mm (mm2), with symmetry content 1A2, 2m, has two perpendicular mirror planes and a single 2-fold rotation axis. In generating a Form, a face is multiplied in virtue of the rotation axis; under a 6-fold rotation axis a face is multiplied six times, resulting in a hexagonal monopyramid.

Geometrically, the square-pyramidal pyramid, ( ) ∨ [( ) ∨ {4}], is a bisected octahedral pyramid. It has a square pyramid base and 4 tetrahedra along with one more square pyramid meeting at the apex, and it can also be seen in an edge-centered projection as a square bipyramid with four tetrahedra wrapped around the common edge. A prism with an equilateral triangle as cross section has a threefold axis, and a pyramid on a square base has a fourfold axis. The volume formula of a frustum of a square pyramid was introduced by ancient Egyptian mathematics in what is called the Moscow Mathematical Papyrus, written in the 13th dynasty (ca. 1850 BC): V = (1/3)h(a^2 + ab + b^2). New efficient quadrature formulas for pyramidal elements have been constructed; they are the most efficient means of calculating volume integrals for pyramids, and some achieve derived lower bounds, thus resulting in optimal formulas, while triple-product formulas require, on average, more than twice as many points.
|
2021-04-21 23:51:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5961030721664429, "perplexity": 2149.9416570867916}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039554437.90/warc/CC-MAIN-20210421222632-20210422012632-00522.warc.gz"}
|
https://math.stackexchange.com/questions/4085723/show-that-fx-x21-ccx-is-lebesgue-measurable-function/4085761
|
# Show that $f(x) = x^2 \, 1_{C^c}(x)$ is a Lebesgue measurable function
Define $$f:[0,1]\to [0,1]$$ as follows:
$$f(x) = \begin{cases} 0, & \text{if } x \in \mathbb{C} \\ x^2, & \text{if } x \notin \mathbb{C} \end{cases},$$ where $$\mathbb{C}$$ is the Cantor set. Show that $$f$$ is a Lebesgue measurable function.
I know a function $$f : \mathbb{R} \to \mathbb{R}$$ is called Lebesgue-measurable if preimages of Borel-measurable sets are Lebesgue-measurable. I think it is enough to show that for any $$a\geq 0$$, $$f^{-1}(a,\infty)=\{ f>a \}=\{x\in [0,1]\mid f(x)>a \}$$ is a Lebesgue-measurable set. But I am stuck here: $$f^{-1}(a,\infty)= \begin{cases} \mathbb{C}^c & a< 0\\ [0,1] & a\geq 0 \end{cases}.$$
Well, $$[0,1]=C\cup C^{c}$$ (I discourage you from using $$\mathbb{C}$$ for the Cantor set), and you can write $$f$$ as: $$f(x)=0\cdot\chi_{C}(x)+x^{2}\chi_{C^{c}}(x),$$ where by $$\chi$$ I mean the characteristic function of the set in the subscript. The sum of measurable functions is measurable, as is the product, and polynomial functions are measurable.
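As a complement to this answer, the superlevel-set approach the question attempted can also be finished directly; a short sketch:

```latex
\{ f > a \} = f^{-1}\big((a,\infty)\big) =
\begin{cases}
[0,1] & a < 0, \\
C^{c} \cap (\sqrt{a},\, 1] & 0 \le a < 1, \\
\varnothing & a \ge 1.
\end{cases}
```

Each set on the right is Lebesgue measurable: $$[0,1]$$ and $$\varnothing$$ trivially, and $$C^{c} \cap (\sqrt{a}, 1]$$ because the Cantor set is closed, hence measurable, so its complement intersected with an interval is measurable. Since every superlevel set is measurable, $$f$$ is measurable.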
https://iscinumpy.gitlab.io/post/boost-histogram-06/
# The boost-histogram beta release
The foundational histogramming package for Python, boost-histogram, hit beta status with version 0.6! This is a major update to the new Boost.Histogram bindings. Since I have not written about boost-histogram yet here, I will introduce the library in its current state. Version 0.6.2 was based on the recently released Boost C++ Libraries 1.72 Histogram package. Feel free to visit the docs, or keep reading this post.
This Python library is part of a larger picture in the Scikit-HEP ecosystem of tools for Particle Physics and is funded by DIANA/HEP and IRIS-HEP. It is the core library for making and manipulating histograms. Other packages are under development to provide a complete set of tools to work with and visualize histograms. The Aghast package is designed to convert between popular histogram formats, and the Hist package will be designed to make common analysis tasks simple, like plotting via tools such as the mplhep package. Hist and Aghast will be initially driven by HEP (High Energy Physics and Particle Physics) needs, but outside issues and contributions are welcome and encouraged.
The key design feature of the boost-histogram package (and the C++ library on which it is based) is the idea of a configurable and highly performant Histogram object. A histogram is made of two things: a storage and a collection of axes.
There are a preset collection of storages in boost-histogram, with different datatypes. Storages can hold simple data types (Int64, Double, and AtomicInt64), a special mutating Unlimited datatype that starts as an 8 bit int and grows, and even converts to double if weights are used (or, currently, if a view is requested). There are also several accumulators. You can use WeightedSum to track an error1, or Mean / WeightedMean to implement a “profile” histogram.
Axes come in a variety of flavors. Regular provides even bin spacings, Variable provides arbitrary bin spacings, Integer is a high performance simple integer spacing, and IntCategory / StrCategory provide arbitrary non-continuous categories. Each of these has options, such as growth=, which allows the axis to grow when out-of-range fills are made, overflow= and underflow=, which hold out-of-range fills on continuous axes, and circular=, which causes an axis to wrap around. The Regular axis also supports transforms, which map values in and out of the regular binning space. A few common transforms are provided, such as Pow(v), sqrt, and log, and users can supply a pair of CTypes function pointers to build new transforms with full performance. Feel free to use numba.cfunc to write your transforms directly in Python!
## Usage example
Let’s take a look at what using the library looks like. Let’s start with a simple, contrived 1D dataset:
import numpy as np
import boost_histogram as bh
import matplotlib.pyplot as plt
norm_vals = np.concatenate([
np.random.normal(loc=5, scale=1, size=1_000_000),
np.random.normal(loc=2, scale=.2, size=200_000),
np.random.normal(loc=8, scale=.2, size=200_000),
])
This has a large Gaussian peak at 5, with two narrow peaks at 2 and 8. Using the boost-histogram API, we would make a histogram object with a regular binning. We want 100 bins from 0 to 10:
hist = bh.Histogram(
bh.axis.Regular(100, 0, 10),
)
Now, we can call .fill to fill the histogram in-place:
hist.fill(norm_vals)
Histogram(Regular(100, 0, 10), storage=Double()) # Sum: 1399999.0 (1400000.0 with flow)
The histogram has a nice representation on the command line that includes the sum (both with and without the extra flow bins). If we want to plot densities, boost-histogram does not currently have a .density method/property, but you can easily use the built-in properties to compute it:
# Compute the "volume" of each bin (useful for 2D+)
volumes = np.prod(hist.axes.widths, axis=0)
# Compute the density of each bin
density = hist.view() / hist.sum() / volumes
The above code works on any number of dimensions, not just 1D histograms!
Now, plotting is also simple, using the same collection of properties:
plt.bar(*hist.axes.centers, density, *hist.axes.widths)
Since axes returns a tuple of items, we use * to unpack the only item (axes[0]), but you can just directly call axes[0] if you prefer.
We could use any other axis, or a transformed axis, and the downstream code would not change:
hist = bh.Histogram(
bh.axis.Regular(100, 1, 10, transform=bh.axis.transform.log),
)
hist.fill(norm_vals)
volumes = np.prod(hist.axes.widths, axis=0)
density = hist.view() / hist.sum() / volumes
plt.bar(*hist.axes.centers, density, *hist.axes.widths)
#### Numpy comparison
If you are familiar with Numpy, the equivalent code for the simple regular binning in Numpy would be:
%%timeit
bins, edges = np.histogram(norm_vals, bins=100, range=(0, 10))
17.4 ms ± 2.64 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
Of course, you then are either left on your own to compute centers, density, widths, and more, or in some cases you can change the computation call itself to add density=, or use the matching function inside Matplotlib, and the API is different if you want 2D or ND histograms. But if you already use Numpy histograms and you really don’t want to rewrite your code, boost-histogram has adaptors for the three histogram functions in Numpy:
%%timeit
bins, edges = bh.numpy.histogram(norm_vals, bins=100, range=(0, 10))
7.3 ms ± 55.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
This is only a hair slower2 than using the raw boost-histogram API, and is still a nice performance boost over Numpy. You can even use the Numpy syntax if you want a boost-histogram object later:
hist = bh.numpy.histogram(norm_vals, bins=100, range=(0, 10), histogram=bh.Histogram)
You can later get a Numpy style output tuple from a histogram object:
bins, edges = hist.to_numpy()
So you can transition your code slowly to boost-histogram.
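For reference, the bookkeeping that raw Numpy output leaves to you — centers, widths, density — is simple in 1D; a pure-Python sketch (our helper, not part of either library):

```python
def centers_widths_density(edges, counts):
    """Bin centers, widths, and normalized density from Numpy-style
    (counts, edges) histogram output, 1D only."""
    centers = [(lo + hi) / 2 for lo, hi in zip(edges, edges[1:])]
    widths = [hi - lo for lo, hi in zip(edges, edges[1:])]
    total = sum(counts)
    density = [c / total / w for c, w in zip(counts, widths)]
    return centers, widths, density

centers, widths, density = centers_widths_density([0.0, 1.0, 2.0], [2, 2])
print(centers, widths, density)  # [0.5, 1.5] [1.0, 1.0] [0.5, 0.5]
```

The `.axes` properties shown earlier do exactly this bookkeeping for you, in any number of dimensions.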
## ND histograms
All the above concepts (except for plotting details) expand gracefully to multiple dimensions. The .axes object can be indexed as a tuple, or it can directly call any method or property of the axes and return the results as a tuple. If the result is an array, the returned iterable of arrays will be ready for broadcasting, as well; this is why the above density calculation works regardless of the number of axes in your histogram.
h2 = bh.Histogram(
bh.axis.Regular(400, -2, 2),
bh.axis.Regular(200, -1, 1)
)
data = np.random.multivariate_normal(
(0, 0),
((1,0),(0,.5)),
10_000_000).T.copy()
h2.fill(*data)
# pcolormesh requires xy indexing, we produce ij for consistency with higher dimensions
x, y = h2.axes.edges
plt.pcolormesh(x.T, y.T, h2.view().T)
# Or you could do:
edges = h2.axes.edges
Histograms support the Numpy array protocol, so can often be used directly in places like plotting code instead of calling .view().
We can check the performance against Numpy again; Numpy does not do well with regular spaced bins in more than 1D:
%%timeit
np.histogram2d(*data, bins=(400, 200), range=((-2,2), (-1, 1)))
1.31 s ± 17.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%%timeit
bh.numpy.histogram2d(*data, bins=(400, 200), range=((-2,2), (-1, 1)))
101 ms ± 117 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
For more than one dimension, boost-histogram is more than an order of magnitude faster than Numpy for regular spaced binning. Although optimizations may be added to boost-histogram for common axes combinations later, in 0.6.2, all axes combinations share a common code base, so you can expect at least this level of performance regardless of the axes types or number of axes!
## Unified Histogram Indexing (UHI)
One of the key developments in boost-histogram is the indexing system, known as UHI. It was designed so that the indexing “tags” could be shared across libraries, and (in some cases) be implemented by users. The design of UHI follows Numpy indexing when Numpy is valid, that is, h[foo] == h.view()[foo] for all foo that is valid for both histograms and arrays. Any other valid foo for one should be invalid for the other. For example, histograms do not support integer steps. Numpy arrays do not support the callables listed below.
For a histogram, the slice should be thought of like this:
histogram[start:stop:action]
The start and stop can be either a bin number (following Python rules), or a callable; the callable will get the axis being acted on and should return an extended bin number (-1 and len(ax) are flow bins). A provided callable is bh.loc, which converts from axis data coordinates into bin number.
The final argument, action, is special. A general API is being worked on, but for now, bh.sum will “project out” or “integrate over” an axis, and bh.rebin(n) will rebin by an integral factor. Both work correctly with limits; bh.sum will remove flow bins if given a range. h[0:len:bh.sum] will sum without the flow bins.
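To make the slicing semantics concrete, here is a small pure-Python sketch of the *bin math* behind `bh.loc` and `bh.rebin(n)` on a regular axis (conceptual only — the real objects are locators/actions, not plain functions):

```python
def loc(x, start, stop, bins):
    """Data coordinate -> bin index on a regular axis
    (conceptually what bh.loc resolves to)."""
    return int((x - start) / (stop - start) * bins)

def rebin(counts, n):
    """Merge every n adjacent bins: an integral-factor rebin."""
    return [sum(counts[i:i + n]) for i in range(0, len(counts), n)]

# On Regular(100, 0, 10), the data coordinate 4.0 lands in bin 40:
print(loc(4.0, 0, 10, 100))          # 40
# Rebinning by 2 halves the number of bins:
print(rebin([1, 2, 3, 4, 5, 6], 2))  # [3, 7, 11]
```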
Here are a few examples that highlight the functionality of UHI:
#### Example 1:
You want to slice axis 0 from 0 to 20, axis 1 from -0.5 to 1.5 in data coordinates, axis 2 needs to have double-size bins (rebin by 2), and axis 3 should be summed over. You have a 4D histogram.
Solution:
ans = h[:20, bh.loc(-.5):bh.loc(1.5), ::bh.rebin(2), ::bh.sum]
#### Example 2:
You want to set all bins above 4.0 in data coordinates to 0 on a 1D histogram.
Solution:
h[bh.loc(4.0):] = 0
You can set with an array, as well. The array can either be the same length as the range you give, or the same length as the range + under/overflows if the range is open ended (no limit given). For example:
h = bh.Histogram(bh.axis.Regular(10,0,1))
h[:] = np.ones(10) # underflow/overflow still 0
h[:] = np.ones(12) # underflow/overflow now set too
Note that for clarity, while basic Numpy broadcasting is supported, axis-adding broadcasting is not supported; you must set a 2D histogram with a 2D array or a scalar, not a 1D array.
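The length rule can be sketched in plain Python (a hypothetical `set_view` helper to illustrate the rule, not the bh internals — a 1D storage with `nbins` bins holds `nbins + 2` cells including flow):

```python
def set_view(view, values):
    """Assign values to a 1D storage of length nbins + 2 (with flow cells).

    Accepts either nbins values (inner bins only) or nbins + 2 values
    (flow bins included) -- the rule applied for open-ended slices.
    """
    nbins = len(view) - 2
    if len(values) == nbins:
        view[1:-1] = values          # underflow/overflow untouched
    elif len(values) == nbins + 2:
        view[:] = values             # flow bins set too
    else:
        raise ValueError("length must be nbins or nbins + 2")
    return view

storage = [0.0] * 12                 # 10 bins + underflow + overflow
set_view(storage, [1.0] * 10)        # flow cells still 0
print(storage[0], storage[-1])       # 0.0 0.0
set_view(storage, [2.0] * 12)        # flow cells now set too
print(storage[0], storage[-1])       # 2.0 2.0
```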
#### Example 3:
You want to sum from -infinity to 2.4 in data coordinates in axis 1, leaving all other axes alone. You have an ND histogram, with N >= 2.
Solution:
ans = h[:, :bh.loc(2.4):bh.sum, ...]
Notice that last example could be hard to write if the axis number, 1 in this case, was large or programmatically defined. In these cases, you can pass a dictionary of {axis:slice} into the indexing operation. A shortcut to quickly generate slices is provided, as well:
ans = h[{1: slice(None,bh.loc(2.4),bh.sum)}]
# Identical:
s = bh.tag.Slicer()
ans = h[{1: s[:bh.loc(2.4):bh.sum]}]
#### Example 4:
You want the underflow bin of a 1D histogram.
Solution:
val = h1[bh.underflow]
## Analyses using axes
Taken together, the flexibility in axes and the tools to easily sum over axes can be applied to transform the way you approach analysis with histograms. For example, let’s say you are presented with the following data in a 3xN table:
| Data | Details |
| --- | --- |
| value | |
| is_valid | True or False |
| run_number | A collection of integers |
In a traditional analysis, you might bin over value where is_valid is True, and then make a collection of histograms, one for each run number. With boost-histogram, you can make a single histogram, and use an axis for each:
value_ax = bh.axis.Regular(100, -5, 5)
bool_ax = bh.axis.Integer(0, 2, underflow=False, overflow=False)
run_number_ax = bh.axis.IntCategory([], growth=True)
hist = bh.Histogram(value_ax, bool_ax, run_number_ax)
hist.fill([-2,2,4,3], [True, False, True, True], [3, 5, 5, 7])
Histogram(
Regular(100, -5, 5),
Integer(0, 2, underflow=False, overflow=False),
IntCategory([3, 5, 7], growth=True),
storage=Double()) # Sum: 4.0
Now, you can use these axes to create a single histogram that you can fill. If you want to get a histogram of all run numbers and just the True is_valid selection, you can use a sum:
hist[:, bh.loc(True), ::bh.sum]
Histogram(Regular(100, -5, 5), storage=Double()) # Sum: 3.0
If you want to pick a single category value, just use bh.loc(item). A range or set of values cannot be selected, but you can use a generator expression:
from functools import reduce
from operator import add
reduce(add, (hist[:, bh.loc(True), bh.loc(item)] for item in {3, 5}))
Histogram(Regular(100, -5, 5), storage=Double()) # Sum: 2.0
You should note that in the above examples, the index and the bin are the same for boolean axes (or any integer axis that starts at 0). So you can use True directly instead of bh.loc(True) if you prefer.
You can expand this example to any number of dimensions, boolean flags, and categories.
## Accumulator storages
There are three accumulator-based storages. WeightedSum keeps track of both a weighted sum (like Double) and also a variance. Profile histograms track a Mean (or a WeightedMean) instead of a sum; this is available as well. These types return accumulators when asked for a single value, or a Numpy record array-like view when asked for a view. This is done without copying the data, and you can also access a few computed properties. Here’s an example:
hm = bh.Histogram(
bh.axis.Regular(20, -1.5, 1.5, underflow=False, overflow=False),
storage=bh.storage.Mean(),
)
# Interesting data
data = np.random.multivariate_normal((0,1), ((.1, 0.3), (0.3, 1.0)), size=10_000).T.copy()
# Fill with sample
hm.fill(data[0], sample=data[1])
# Plot
fig, ax = plt.subplots()
ax.hist2d(*data, bins=100, cmap="gray_r")
ax.errorbar(hm.axes[0].centers, hm.view().value,
yerr=np.sqrt(hm.view().variance),
fmt="ro")
Here we have overlaid a Matplotlib 2D histogram with a boost-histogram 1D profile histogram.
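Under the hood, each Mean bin keeps a count, a running mean, and a variance estimate, updated per sample. A minimal pure-Python sketch of such an accumulator (ours, not the C++ implementation; it uses Welford's online update):

```python
class MeanAccumulator:
    """Running count / mean / sample variance for one bin, Welford-style.
    A conceptual sketch of what a Mean storage tracks per bin."""

    def __init__(self):
        self.count = 0
        self.value = 0.0   # running mean
        self._m2 = 0.0     # sum of squared deviations from the mean

    def fill(self, sample):
        self.count += 1
        delta = sample - self.value
        self.value += delta / self.count
        self._m2 += delta * (sample - self.value)

    @property
    def variance(self):
        # Sample variance; defined for count >= 2
        return self._m2 / (self.count - 1)

acc = MeanAccumulator()
for s in [1.0, 2.0, 3.0]:
    acc.fill(s)
print(acc.count, acc.value, acc.variance)  # 3 2.0 1.0
```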
## The little details
The library respects Python conventions and interacts well with IPython wherever possible. Python 2 is still supported for the 1.0 release so that Python 2 users will be able to access histogramming, but care was taken to ensure Python 3 users do not suffer for this. For example, if you inspect a function signature, you will see the proper Python 3 signature if you are in Python 3:
#### Python 3 + IPython
bh.axis.Regular?
Init signature:
bh.axis.Regular(
bins,
start,
stop,
*,
underflow=True,
overflow=True,
growth=False,
circular=False,
transform=None,
)
#### Python 2 + IPython
bh.axis.Regular?
Init signature:
bh.axis.Regular(
bins,
start,
stop,
**kwargs,
)
This means tab completion in IPython for Python 3 will properly suggest the keyword completions as you write, etc.
The histogram object behaves nicely when adding two histograms, multiplying or dividing by a scalar (depending on the storage), comparing histograms, printing histograms (try it with 1D!), setting with an array/view, accessing as a view, copying or deep-copying, and with any other expected Python manipulations or behaviours that are applicable to histograms. Currently, mismatched axes types or storages cannot be added, though this restriction may be carefully relaxed slightly in the future.
The library supports pickle3, and special care was taken to make sure histograms provide excellent pickle performance even with accumulator storages.
import boost_histogram as bh
import pickle
from pathlib import Path
h = bh.Histogram(bh.axis.Regular(2, -1, 1))
h_saved = Path("h_saved.pkl")
# Now save the histogram
with h_saved.open("wb") as f:
    pickle.dump(h, f, protocol=-1)
# And load it back
with h_saved.open("rb") as f:
    h2 = pickle.load(f)
You can use boost-histogram to do threaded filling, as well; the GIL is released during the filling process. Version 0.7.0 has a threads= argument that can be used in regular or numpy filling. If you want to implement it yourself, such as in 0.6.2, you can. For example, using concurrent.futures, you can do:
import os
from concurrent.futures import ThreadPoolExecutor

threads = os.cpu_count()

def fun(*args):
    return hist.copy().reset().fill(*args)

chunks_list = [np.array_split(d, threads) for d in data]

with ThreadPoolExecutor(threads) as pool:
    results = pool.map(fun, *chunks_list)

for h in results:
    hist += h

This can give you a 2-4x speedup over single-threaded filling, depending on the situation and the number of cores. The number of items in the fill must be large enough to offset the cost of making histogram copies and summing them afterwards.
## Obtaining boost-histogram
While the only requirement to build boost-histogram is a C++14 compatible compiler, this still takes some time and is not always available. A wheel building system was set up for boost-histogram, and is now being used in other Scikit-HEP packages. Each release produces a large collection of wheels that pip will install in a couple of seconds on most systems.
| System | Arch | Python versions |
| --- | --- | --- |
| ManyLinux1 (custom GCC 9.2) | 64 & 32-bit | 2.7, 3.5, 3.6, 3.7, 3.8 |
| ManyLinux2010 | 64-bit | 2.7, 3.5, 3.6, 3.7, 3.8 |
| macOS 10.9+ | 64-bit | 2.7, 3.6, 3.7, 3.8 |
| Windows | 64 & 32-bit | 2.7, 3.6, 3.7, 3.8 |
You simply install with:
pip install boost-histogram
and pip will give you the appropriate wheel in most cases, and will fall back on a source build if you are not covered in the above list. Linux distributions such as Alpine and ClearLinux are usually the only outliers, but those tend to have modern compilers.
We also support Conda through Conda-Forge:
conda install -c conda-forge boost-histogram
All Conda-Forge platforms are supported, including ARM and PowerPC. The one exception is Python 2.7 on Windows, which is not supported due to the C++14 requirements.
## Acknowledgements
This library was developed by Henry Schreiner and Hans Dembinski, and is based on the Boost.Histogram library, developed by Hans Dembinski and first released as part of the Boost libraries in version 1.70.
Support for this work was provided by the National Science Foundation cooperative agreement OAC-1836650 (IRIS-HEP) and OAC-1450377 (DIANA/HEP). Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
1. In ROOT, this is a histogram with .Sumw2() called before filling. ↩︎
2. About a 5% penalty for the above example. If you use the bins='auto' feature of Numpy histograms, boost-histogram will be slower, because it calls Numpy to study the data to compute the bin edges. You can still use it to produce a boost-histogram object, however, so it is still quite useful. ↩︎
3. Any protocol except protocol 0. In Python 2, this is the default, though several higher levels are available, and protocol 0 has dismal performance. ↩︎
https://qoto.org/@freemo/102695075136495547
Another math joke to show off 's math rendering. Feel free to share your own jokes!
Remember: Sex is fun, it's the law!
Let $$f(a) = \sqrt[n]{e^x}$$
$\lim_{t\rightarrow\infty} f(a) - \frac{i}{f(t)} = \frac{d}{dx} f(u) \\ \lim_{t\rightarrow\infty} f(a) - \frac{i}{\infty} = \frac{d}{dx} f(u) \\ \lim_{t\rightarrow\infty} f(a) - 0 = \frac{d}{dx} f(u)$
Then
$\sqrt[n]{e^x} = \frac{d}{dx} f(u) \\ (\sqrt[n]{e^x})^n = \frac{d}{dx} f(u)^n \\ e^x = \frac{d}{dx} f(u)^n \\ \int e^x = f(u)^n$
@freemo The horizontal bar on your radical sign seems to have a white border which obscures other symbols near it.
@khird Interesting, that is very odd...
@khird It doesnt actually render that way for me. What browser are you on and is your zoom factor standard at 100%?
@freemo I used the DuckDuckGo browser for Android, version 5.32.1, which is the latest available from F-Droid. LaTeX rendering doesn't work at all in standalone client Tusky, but I doubt there's much you can do about that. As far as I'm aware, there are no zoom settings for the browser so by default it's at 100%.
@khird Thanks ill test it on that browser. Might be a bug in MathJax on that browser
@freemo one day somebody is going to publish his paper as toots :)
https://www.doubtnut.com/question-answer/if-cos280-sin280k3-t-h-e-ncos170-is-equal-to-k3-sqrt2-b-k3-sqrt2-c-k3-sqrt2-d-none-of-these-642549410
If cos 28° + sin 28° = k³, then cos 17° is equal to:
Updated On: 27-06-2022
Text Solution
(a) k³/√2  (b) −k³/√2  (c) ±k³/√2  (d) none of these
Solution : cos 17° = cos(45° − 28°) = cos 45° cos 28° + sin 45° sin 28° = (cos 28° + sin 28°)/√2 = k³/√2
Transcript
Hello everyone. Today's question: if cos 28° + sin 28° = k³, then what is cos 17°? First, write cos 17° as cos(45° − 28°). By the formula cos(A − B) = cos A cos B + sin A sin B, this equals cos 45° cos 28° + sin 45° sin 28°. Since cos 45° = sin 45° = 1/√2, taking 1/√2 common gives (cos 28° + sin 28°)/√2. The question tells us that cos 28° + sin 28° = k³, so cos 17° = k³/√2. Checking the options, option (a) is the correct answer. Thank you.
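A quick numeric sanity check of the identity, using only Python's standard math module:

```python
import math

# cos 17° should equal (cos 28° + sin 28°) / sqrt(2),
# since 17° = 45° − 28° and cos 45° = sin 45° = 1/√2.
lhs = math.cos(math.radians(17))
rhs = (math.cos(math.radians(28)) + math.sin(math.radians(28))) / math.sqrt(2)
print(lhs, rhs)  # the two values agree to floating-point precision
```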
https://gamedev.meta.stackexchange.com/tags/formatting/hot?filter=all
# Tag Info
This is a community wiki post anyone can edit. Its purpose is to compile a list of Game Dev SE posts that would significantly benefit from MathJax/LaTeX markup, as part of the use-case argument for enabling MathJax here. We need a decent body of evidence that MathJax would be worthwhile. RPG.SE and CodeReview responded by gathering a list of posts that ...
We're going ahead and enabling this, as you've shown plenty of examples of cases where it'd be useful. As requested, the inline delimiters will be changed to $...$, and the block-level delimiters are $$...$$ and \[...\] (which are the defaults, and can't be changed). Fixed posts broken by this: Unreal Engine 4.15 Error C++ | ERROR: UBT ERROR: Failed to ...
Basic MathJax and Mathematics Displaying a formula For inline formulas, use $...$. For display-mode formulas (i.e. multiline, centered formulas which take up their own paragraph), use $$...$$. Various symbols will be displayed differently in inline vs multiline mode. For example, the equation \sum_{i=0}^n i^2 = \frac{(n^2+n)(2n+1)}{6} renders in ...
Vectors and Matrices Basic symbols \vec puts an arrow over the next symbol: $\vec a$. For larger groups, use \overrightarrow: $\overrightarrow{abc}$ \overleftrightarrow and \overleftarrow are also available: $\overleftrightarrow{abc}$ and $\overleftarrow{abc}$ \vert and \Vert display single and double vertical bars: $\vert a \vert$ or \\$\Vert a \...
https://pressbooks.library.upei.ca/montelpare/chapter/percentiles/
# What is a percentile?
The term “per cent” refers to “per 100”, and thus a percentile is a score representing a value relative to a base 100 scale.
The computation of percentiles is a useful way to evaluate scores within a frequency distribution, i.e., the set of frequency scores.
The percentile provides a baseline at which a given proportion of scores will fall.
In other words, if we consider the 60th percentile, then we are suggesting that 60% of the scores in a distribution or set of scores will fall below that particular value.
Percentiles always refer to a specific position within a frequency distribution.
Formulas to compute percentiles for grouped data:
i) ${k} = \frac{\textit{frequency}}{N} \times 100$
ii) ${\beta} = \frac{\textit{Cumulative Frequency for all scores below the Category of Interest}}{N} \times 100$
iii) $\textit{Percentile}={\beta} + (0.5 \times{k})$
The 0.5 is used to compute half of the number of scores within the category in which the score of interest resides.
Consider computing the percentile for the score 71 in the frequency distribution shown in Table 14.1
Table 14.1 Frequency Distribution Output
| Cell Boundaries | Freq (f) | ${k} = \frac{f}{N} \times 100$ | Cum. Freq. | β |
| --- | --- | --- | --- | --- |
| 58.5-61.5 | 4 | 4/200 × 100 = 2 | 4 | 4/200 × 100 = 2 |
| 61.5-64.5 | 12 | 12/200 × 100 = 6 | 16 | 16/200 × 100 = 8 |
| 64.5-67.5 | 44 | 44/200 × 100 = 22 | 60 | 60/200 × 100 = 30 |
| 67.5-70.5 | 64 | 64/200 × 100 = 32 | 124 | 124/200 × 100 = 62 |
| 70.5-73.5 | 56 | 56/200 × 100 = 28 | 180 | 180/200 × 100 = 90 |
| 73.5-76.5 | 16 | 16/200 × 100 = 8 | 196 | 196/200 × 100 = 98 |
| 76.5-79.5 | 4 | 4/200 × 100 = 2 | 200 | 200/200 × 100 = 100 |
The total sample of scores = 200. We are interested in the specific score with a value of 71. The score 71 resides within the category that has cell boundaries 70.5 to 73.5. This category has a corresponding frequency of 56, which indicates that there are 56 scores within the upper and lower boundaries of the category from 70.5 to 73.5. We can then enter 56 as the frequency value and 200 as the value of N in the following equation to determine the value of k in our series of percentile equations.
i) ${k} = \frac{\textit{frequency}}{N} \times 100$
${k} = \frac{56}{200} \times 100 = 28$
Here we see that in this scenario k= 28 where k represents the percent of scores in the category of interest. 56 of 200 scores represents 28% of all scores in our distribution.
Next we determine the value for ${\beta}$ based on the equation ${\beta} = \frac{\textit{Cumulative frequency for all scores below the category of interest}}{N} \times 100$. The score for ${\beta}$ represents the cumulative proportion of scores in the data set up to the category in which our score of interest resides. In this example, the cumulative frequency for all scores below the category of interest refers to the cumulative frequency in the category that precedes the category in which our score (71) resides. Here the cumulative frequency for all scores below the category of interest is 124. Using the equation to compute ${\beta}$ shown here, we see that the value is 62.
ii) ${\beta} = \frac{124}{200} \times 100 = 62$
After we have determined k and ${\beta}$, we can then work through the steps in equation iii) to determine the percent of scores falling at or below our score of interest.
iii) $\textit{Percentile}={62} + (0.5 \times{28})$
$\textit{Percentile}={62} + (14)$
$\textit{Percentile}=76^{th} \textit{percentile}$
The outcome indicates that 76 percent of the scores within this set (distribution) of scores fall below the score of 71.
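The three formulas collapse into a few lines of code; a sketch in Python (the function name is ours, not from the text):

```python
def percentile_grouped(freq_in_category, cum_freq_below, n_total):
    """Percentile of a score via the grouped-data formulas:
    k = (f / N) * 100, beta = (cum. freq. below / N) * 100,
    percentile = beta + 0.5 * k.
    """
    k = freq_in_category / n_total * 100
    beta = cum_freq_below / n_total * 100
    return beta + 0.5 * k

# The worked example: the score 71 sits in 70.5-73.5 (f = 56),
# with a cumulative frequency of 124 below that category, N = 200.
print(percentile_grouped(56, 124, 200))  # 76.0
```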
Working through the computation of percentiles from a set of scores
Use the table of frequency distributions for heights of Grade 5 elementary school children to compute the percentiles for the following values: 123, 136, 138, 149, 152; indicate the values of k and the percentile scores. Fill in the missing data in the following table to obtain a complete data set.
Table 14.2 Frequency Distribution For Heights Of Grade 5 Elementary School Children.
| Category | Frequency | Cumulative Frequency |
| --- | --- | --- |
| 120-122 | 1 | 1 |
| 123-125 | 3 | 4 |
| 126-128 | 3 | 7 |
| 129-131 | 3 | |
| 132-134 | 1 | 11 |
| 135-137 | | 13 |
| 138-140 | 1 | 14 |
| 141-143 | 2 | |
| 144-146 | 2 | 18 |
| 147-149 | 2 | |
| 150-152 | 3 | |
| sum of freq = | | |
A SAS Application — The Scenario: ZIKA Virus at the Summer Olympics
In August 2016 Brazil hosted the Olympic Summer Games. However, several athletes decided to boycott the games because of the risk of exposure to the ZIKA virus. ZIKA is a virus that can be transmitted through the bite of an infected Aedes mosquito. It is extremely dangerous for young women because it can reside in the blood for up to 3 months, and if the woman becomes pregnant the virus can have negative consequences for the developing fetus. In particular, the ZIKA virus has been implicated in the development of microcephaly in newborn children.
In this example, we will use a series of random number generating commands to create a data set with four variables and 1000 cases. The variables are sex, sport, case, and days, with the following format: sex (1=m, 2=f), sport (1=golf, 2=equestrian, 3=swimming, 4=gymnastics, 5=track & field), case (1=yes, 2=no), and days, a continuous variable representing the number of days since exposure to ZIKA virus-carrying mosquitoes.
PROC FORMAT;
VALUE SEXFMT 1 =’MALE’ 2 =’FEMALE’;
VALUE SPRTFMT 1 =’GOLF’ 2 =’EQUESTRIAN’ 3 =’SWIMMING’
4 =’GYMNASTICS’ 5 =’TRACK & FIELD’;
VALUE CASEFMT 1=’PRESENT’ 2=’ABSENT’;
DATA SASRNG;
/* Create 3 new variables labelled SCORE1 SCORE2 SCORE3 */
ARRAY SCORES SCORE1-SCORE3;
/* Set 1000 cases per variable */
DO K=1 TO 1000;
DAYS=RANUNI(13)*100;
DAYS=ROUND(DAYS, 0.02);
/* Loop through each variable to establish 1000 randomly generated scores */
DO I=1 TO 3;
SCORES(I)=RANUNI(I)*1000;
SCORES(I)=ROUND(SCORES(I));
SCORES(I)=1+(MOD(SCORES(I),105));
/* The variable sex will relate to score1, create a filter to establish the binary score for sex based on the randomly generated output */
IF SCORE1 > 55 THEN SEX = 2;
IF SCORE1 >2 AND SCORE1<56 THEN SEX = 1;
/* Sport Type */
IF SCORE2 >90 THEN SPORT = 5;
IF SCORE2 >80 AND SCORE2<91 THEN SPORT = 4;
IF SCORE2 >60 AND SCORE2<81 THEN SPORT = 3;
IF SCORE2 >30 AND SCORE2<61 THEN SPORT = 2;
IF SCORE2 >5 AND SCORE2<31 THEN SPORT=1;
/* Case */
IF SCORE3 > 48 THEN CASE = 1;ELSE CASE = 2;
END;
OUTPUT;
END; RUN;
PROC SORT DATA =SASRNG; BY SEX;
PROC FREQ; TABLES SEX SPORT CASE SEX*CASE;
FORMAT SEX SEXFMT. SPORT SPRTFMT. CASE CASEFMT. ;
PROC FREQ; TABLES SPORT*CASE;BY SEX;
FORMAT SEX SEXFMT. SPORT SPRTFMT. CASE CASEFMT. ;
PROC UNIVARIATE; VAR DAYS;
OUTPUT OUT=PCTLS PCTLPTS = 30 60
PCTLPRE = DAYS_
PCTLNAME = PCT30 PCT60;
PROC PRINT DATA= PCTLS;
RUN;
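For readers outside SAS, the data-generation portion of the program above can be sketched in Python. This is a rough analogue, not a reproduction: Python's RNG differs from SAS RANUNI, so the counts will not match the tables below, and the SAS filters leave a few boundary values (e.g. SCORE1 ≤ 2) unassigned, which the sketch collapses into else branches:

```python
import random

random.seed(13)  # arbitrary seed; not equivalent to SAS RANUNI(13)
rows = []
for _ in range(1000):
    days = round(random.uniform(0, 100), 2)             # days since exposure
    s1, s2, s3 = (1 + random.randrange(105) for _ in range(3))
    sex = 2 if s1 > 55 else 1                           # 1 = male, 2 = female
    if s2 > 90:
        sport = 5          # track & field
    elif s2 > 80:
        sport = 4          # gymnastics
    elif s2 > 60:
        sport = 3          # swimming
    elif s2 > 30:
        sport = 2          # equestrian
    else:
        sport = 1          # golf
    case = 1 if s3 > 48 else 2                          # ZIKA present / absent
    rows.append({"sex": sex, "sport": sport, "case": case, "days": days})
```

Each dictionary in `rows` plays the role of one observation in the SAS data set SASRNG.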
In SAS we can compute specific percentiles by applying the PROC UNIVARIATE procedure to a continuous variable. The command PROC UNIVARIATE; VAR DAYS; produces the following table of percentiles for the variable DAYS.
Table 14.3 Frequency Distribution Output Showing Percentiles
| Level | Quantile |
| --- | --- |
| 100% Max | 99.94 |
| 99% | 98.66 |
| 95% | 94.34 |
| 90% | 89.61 |
| 75% Q3 | 73.13 |
| 50% Median | 46.83 |
| 25% Q1 | 24.75 |
| 10% | 10.23 |
| 5% | 4.86 |
| 1% | 1.27 |
| 0% Min | 0.02 |
However, we can also compute specific percentile values for a continuous variable using the PCTLPTS=, PCTLPRE=, and PCTLNAME= options.
Together these three options identify and label specific percentiles within a data set. To select a specific percentile, such as the 30th percentile, we use PCTLPTS=30. The PCTLPRE= option provides the prefix for the percentile's label; here we use the prefix DAYS_. The PCTLNAME= option then lists the label suffix for each percentile. For example, the sequence of options PCTLPTS=30, PCTLPRE=DAYS_, and PCTLNAME=PCT30 identifies and labels the 30th percentile within the data set. In the following code we compute the 30th and 60th percentiles of the continuous variable DAYS using SAS commands to identify specific percentiles.
## SAS CODE to produce specific percentiles
output out=Pctls pctlpts = 30 60 pctlpre = days_ pctlname = pct30 pct60;
OUTPUT from the code above:
| Obs | days_pct30 | days_pct60 |
| --- | --- | --- |
| 1 | 28.64 | 57.08 |
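Outside SAS, the same kind of specific percentile can be computed with the Python standard library. A sketch on toy data (note that `statistics.quantiles` uses an interpolation rule that need not match PROC UNIVARIATE's default percentile definition):

```python
import statistics

days = list(range(0, 101))                    # toy stand-in for the DAYS variable
cuts = statistics.quantiles(days, n=100, method="inclusive")
days_pct30, days_pct60 = cuts[29], cuts[59]   # 30th and 60th percentiles
print(days_pct30, days_pct60)                 # -> 30.0 60.0
```

With `n=100` the function returns the 99 percentile cut points, so indexing at 29 and 59 picks out the 30th and 60th percentiles.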
The PROC FREQ procedure in SAS enables us to create descriptive tables for the frequency distribution of the categorical variables. For example, we can compute the number of females and males in our sample, the number of individuals in each of the sports, and the number of cases of ZIKA in our randomly generated data set of 1000 participants.
TABLE 14.5 ZIKA Random Number Generated data for SEX
| sex | Frequency | Percent | Cumulative Frequency | Cumulative Percent |
| --- | --- | --- | --- | --- |
| male | 533 | 53.30 | 533 | 53.30 |
| female | 467 | 46.70 | 1000 | 100.00 |
TABLE 14.6 ZIKA Random Number Generated data for Sports
| sport | Frequency | Percent | Cumulative Frequency | Cumulative Percent |
| --- | --- | --- | --- | --- |
| golf | 266 | 26.60 | 266 | 26.60 |
| equestrian | 286 | 28.60 | 552 | 55.20 |
| swimming | 192 | 19.20 | 744 | 74.40 |
| gymnastics | 96 | 9.60 | 840 | 84.00 |
| track & field | 160 | 16.00 | 1000 | 100.00 |
TABLE 14.7 ZIKA Random Number Generated data for Disease Present/Absent
| case | Frequency | Percent | Cumulative Frequency | Cumulative Percent |
| --- | --- | --- | --- | --- |
| present | 505 | 50.50 | 505 | 50.50 |
| absent | 495 | 49.50 | 1000 | 100.00 |
This procedure also enables us to create cross-tabular tables for comparisons of variables.
TABLE 14.8 ZIKA Random Number Generated Cross Tabulations
Table of Frequencies for case by sex

| SEX | Present | Absent | Total |
| --- | --- | --- | --- |
| Male | 275 | 258 | 533 |
| Female | 230 | 237 | 467 |
| Column totals | 505 | 495 | 1000 |
As in most SAS procedures, by including the PROC SORT command, we can arrange the processing and subsequent output of the data to control for the categorical variable(s). In this example we computed the cross-tabulation of the frequency distribution for the variables SPORT and CASE, controlling for SEX, to separate the output for Males and Females.
The table format provides the following data within each cell: frequency, followed by cell percent, followed by row percent, followed by column percent as shown in this example for the sport: golf.
TABLE 14.9 ZIKA Random Number Generated Cross Tabulations
Table of Frequencies for case by sport (golf shown)

| Sex, Sport | Present | Absent | Total |
| --- | --- | --- | --- |
| MALE, GOLF | Freq = 73, Cell Pct = 13.70, Row Pct = 53.28, Col Pct = 26.55 | Freq = 64, Cell Pct = 12.01, Row Pct = 46.72, Col Pct = 24.81 | Row Total = 137, Row Pct = 25.70 |
| FEMALE, GOLF | Freq = 56, Cell Pct = 11.99, Row Pct = 43.41, Col Pct = 24.35 | Freq = 73, Cell Pct = 15.63, Row Pct = 56.59, Col Pct = 30.80 | Row Total = 129, Row Pct = 27.62 |
| Column totals | 505 | 495 | 1000 |
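A cross-tabulation of this kind can be sketched with the Python standard library alone. The records below are toy data for illustration, not the SAS output above:

```python
from collections import Counter

# Toy (sex, case) records: sex 1 = male, 2 = female; case 1 = present, 2 = absent
records = [(1, 1), (1, 2), (2, 1), (1, 1), (2, 2), (2, 1)]
cells = Counter(records)        # frequency of each (sex, case) cell
n = len(records)
for (sex, case), freq in sorted(cells.items()):
    print(sex, case, freq, f"{100 * freq / n:.2f}%")   # cell frequency and cell percent
```

Row and column percents follow the same pattern, dividing each cell frequency by the corresponding row or column total instead of `n`.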
# Show Reference: "Binaural sound source localization using the frequency diversity of the head-related transfer function"
Binaural sound source localization using the frequency diversity of the head-related transfer function The Journal of the Acoustical Society of America, Vol. 135, No. 3. (01 March 2014), pp. 1207-1217, doi:10.1121/1.4864304 by Dumidu S. Talagala, Wen Zhang, Thushara D. Abhayapala, Abhilash Kamineni
@article{talagala-et-al-2014,
abstract = {The spectral localization cues contained in the head-related transfer function are known to play a contributory role in the sound source localization abilities of humans. However, existing localization techniques are unable to fully exploit this diversity to accurately localize a sound source. The availability of just two measured signals complicates matters further, and results in front to back confusions and poor performance distinguishing between the source locations in a vertical plane. This study evaluates the performance of a source location estimator that retains the frequency domain diversity of the head-related transfer function. First, a method for extracting the directional information in the subbands of a broadband signal is described, and a composite estimator based on signal subspace decomposition is introduced. The localization performance is experimentally evaluated for single and multiple source scenarios in the horizontal and vertical planes. The proposed estimator's ability to successfully localize a sound source and resolve the ambiguities in the vertical plane is demonstrated, and the impact of the source location, knowledge of the source and the effect of reverberation is discussed.},
author = {Talagala, Dumidu S. and Zhang, Wen and Abhayapala, Thushara D. and Kamineni, Abhilash},
citeulike-article-id = {13444170},
citeulike-linkout-0 = {http://dx.doi.org/10.1121/1.4864304},
day = {01},
doi = {10.1121/1.4864304},
issn = {0001-4966},
journal = {The Journal of the Acoustical Society of America},
keywords = {auditory, localization},
month = mar,
number = {3},
pages = {1207--1217},
posted-at = {2014-11-24 16:15:47},
priority = {2},
title = {Binaural sound source localization using the frequency diversity of the head-related transfer function},
url = {http://dx.doi.org/10.1121/1.4864304},
volume = {135},
year = {2014}
}
The way sound is shaped by the head and body before reaching the ears of a listener is described by a head-related transfer function (HRTF). There is a different HRTF for every angle of incidence.
A head-related transfer function summarizes ITD, ILD, and spectral cues for sound-source localization.
Sound source localization based only on binaural cues (like ITD or ILD) suffers from ambiguity due to the approximate point symmetry of the head: ITD and ILD identify only a `cone of confusion', i.e. a virtual cone whose tip is at the center of the head and whose axis is the interaural axis, not strictly a single angle of incidence.
Spectral cues provide disambiguation: due to the asymmetry of the head, the sound is shaped differently depending on where on a cone of confusion a sound source is.
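The ambiguity can be made concrete with the simplest far-field model of two point receivers, where ITD ≈ (d/c)·sin(θ) for azimuth θ measured from the median plane. This is an illustrative approximation, not the HRTF-based estimator of the paper, and the ear distance and speed of sound are assumed values:

```python
import math

def itd(azimuth_deg, ear_distance=0.18, speed_of_sound=343.0):
    """Interaural time difference (seconds) for a far-field source,
    two-point-receiver head model."""
    return ear_distance / speed_of_sound * math.sin(math.radians(azimuth_deg))

# A source 30 degrees in front and its mirror 150 degrees behind the listener
# produce the same ITD, so this cue alone cannot tell front from back:
front, back = itd(30), itd(150)
```

Since sin(θ) = sin(180° − θ), every source position on the cone of confusion maps to the same ITD in this model, which is exactly the ambiguity that spectral cues resolve.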
Talagala et al. measured the head-related transfer function (HRTF) of a dummy head and body in a semi-anechoic chamber and used this HRTF for sound source localization experiments.
Talagala et al.'s system can reliably localize sounds in all directions around the dummy head.
CI²MA - Publicaciones | Prepublicaciones
## Jessika Camaño, Cristian Muñoz, Ricardo Oyarzua:
### Abstract:
In this paper we analyze the numerical approximation of the Poisson problem in mixed form, considering a right-hand side $f \in L^p(\Omega)$, with $p \in (2n/(n+2), 2)$, where $n = 2,3$ is the dimension of $\Omega$. The analysis of the corresponding continuous and discrete problems is carried out by means of the classical Babuška-Brezzi theory, where the associated Galerkin scheme is defined by Raviart-Thomas elements of lowest order combined with piecewise constants. In particular, we prove well-posedness and convergence of the discrete scheme under a quasi-uniformity condition on the mesh. Next, we apply the theory developed for the Poisson problem to a convection-diffusion problem, providing well-posedness of the continuous and discrete problems and optimal convergence. Finally, we corroborate the theoretical results with suitable numerical results in two and three dimensions.
Download in PDF format
## example 9.2
Volume: $\Delta S = nR\ln \frac{V_{2}}{V_{1}}$
Temperature: $\Delta S = nC\ln \frac{T_{2}}{T_{1}}$
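Both formulas are one-liners to evaluate numerically. A quick sketch (the numbers below are made up for illustration, not from the textbook example):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def entropy_volume(n, v1, v2):
    """dS = nR ln(V2/V1): isothermal volume change of an ideal gas."""
    return n * R * math.log(v2 / v1)

def entropy_temperature(n, c, t1, t2):
    """dS = nC ln(T2/T1): heating at constant volume (C = molar heat capacity)."""
    return n * c * math.log(t2 / t1)

# e.g. 1 mol of ideal gas doubling in volume at constant temperature:
print(entropy_volume(1.0, 1.0, 2.0))   # about +5.76 J/K
```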
Liam Maxwell 2E
Posts: 53
Joined: Fri Sep 29, 2017 7:07 am
### example 9.2
The example asks you to find the change in entropy when a gas is heated at a constant volume. You have to use the equation $\Delta S = nC\ln(T_2/T_1)$, but I was wondering why you don't have to account for a change in pressure... or is there no change in pressure?
Ashin_Jose_1H
Posts: 51
Joined: Fri Sep 29, 2017 7:04 am
### Re: example 9.2
For that example, you would use pressure to calculate moles of the substance. Also, I don't think there was any change in pressure.
Gurshaan Nagra 2F
Posts: 49
Joined: Thu Jul 27, 2017 3:01 am
### Re: example 9.2
Can someone elaborate on whether or not there is a pressure change, and if so how can you tell?
Uniform controllability of the linear one dimensional Schrödinger equation with vanishing viscosity
ESAIM: Control, Optimisation and Calculus of Variations, Tome 18 (2012) no. 1, pp. 277-293.
This article considers the linear 1-d Schrödinger equation in $(0,\pi)$ perturbed by a vanishing viscosity term depending on a small parameter $\varepsilon > 0$. We study the boundary controllability properties of this perturbed equation and the behavior of its boundary controls $v_\varepsilon$ as $\varepsilon$ goes to zero. It is shown that, for any time $T$ sufficiently large but independent of $\varepsilon$ and for each initial datum in $H^{-1}(0,\pi)$, there exists a uniformly bounded family of controls $(v_\varepsilon)_\varepsilon$ in $L^2(0, T)$ acting on the extremity $x = \pi$. Any weak limit of this family is a control for the Schrödinger equation.
DOI : https://doi.org/10.1051/cocv/2010055
Classification : 93B05, 30E05, 35Q41
Keywords: null-controllability, Schrödinger equation, complex Ginzburg-Landau equation, moment problem, biorthogonal, vanishing viscosity
@article{COCV_2012__18_1_277_0,
author = {Micu, Sorin and Roven\c{t}a, Ionel},
title = {Uniform controllability of the linear one dimensional {Schr\"odinger} equation with vanishing viscosity},
journal = {ESAIM: Control, Optimisation and Calculus of Variations},
pages = {277--293},
publisher = {EDP-Sciences},
volume = {18},
number = {1},
year = {2012},
doi = {10.1051/cocv/2010055},
zbl = {1242.93019},
mrnumber = {2887936},
language = {en},
url = {http://www.numdam.org/articles/10.1051/cocv/2010055/}
}
Micu, Sorin; Rovenţa, Ionel. Uniform controllability of the linear one dimensional Schrödinger equation with vanishing viscosity. ESAIM: Control, Optimisation and Calculus of Variations, Tome 18 (2012) no. 1, pp. 277-293. doi : 10.1051/cocv/2010055. http://www.numdam.org/articles/10.1051/cocv/2010055/
# Changes between Version 6 and Version 7 of research/trig
Timestamp: Oct 14, 2011, 12:16:42 PM
Changed in the absolute-error derivation:

Old: "We know sin(x) is an odd function, so instead we look for a polynomial Q(x) such that P(x) = xQ(x²):"
New: "We know sin(x) is an odd function, so instead we look for a polynomial Q(x) such that P(x) = xQ(x²), and we reduce the range to positive values:"

Old:
{{{
#!latex
$\max_{x \in [-\pi/2, \pi/2]}{\big\vert\sin(x) - xQ(x^2)\big\vert} = E$
}}}
New:
{{{
#!latex
$\max_{x \in [0, \pi/2]}{\big\vert\sin(x) - xQ(x^2)\big\vert} = E$
}}}

Old: "Substitute y for x² and reduce the range to positive values:"
New: "Substitute y for x²:"

Added after the absolute-error coefficients (ending with a6 = -7.36458957326227991327065122848667046e-13):

=== Relative error ===

Searching for '''relative error''' instead:

{{{
#!latex
$\max_{x \in [-\pi/2, \pi/2]}{\dfrac{\big\vert\sin(x) - P(x)\big\vert}{|\sin(x)|}} = E$
}}}

Using the same method as for absolute error, we get:

{{{
#!latex
$\max_{y \in [0, \pi^2/4]}{\dfrac{\bigg\lvert\dfrac{\sin(\sqrt{y})-\sqrt{y}}{y\sqrt{y}} - R(y)\bigg\rvert}{\bigg\lvert\dfrac{\sin(\sqrt{y})}{y\sqrt{y}}\bigg\rvert}} = E$
}}}

{{{
#!cpp
static real myfun(real const &y)
{
    real x = sqrt(y);
    return (sin(x) - x) / (x * y);
}

static real myerr(real const &y)
{
    real x = sqrt(y);
    return sin(x) / (x * y);
}

RemezSolver<6> solver;
solver.Run(real::R_1 >> 400, real::R_PI_2 * real::R_PI_2, myfun, myerr, 15);
}}}

{{{
#!cpp
a0 = -1.666666666666666587374325845020415990185e-1;
a1 = +8.333333333333133768001243698120735518527e-3;
a2 = -1.984126984109960366729319073763957206143e-4;
a3 = +2.755731915499171528179303925040423384803e-6;
a4 = -2.505209340355388148617179634180834358690e-8;
a5 = +1.605725287696319345779134635418774782711e-10;
a6 = -7.535968124281960435283756562793611388136e-13;
}}}
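The relative-error coefficients listed in this revision can be sanity-checked numerically: sin(x) ≈ x + x³·R(x²), with R the degree-6 polynomial evaluated by Horner's rule. A quick double-precision check:

```python
import math

# R(y) coefficients from the relative-error minimax fit, y = x^2
A = [-1.666666666666666587374325845020415990185e-1,
     +8.333333333333133768001243698120735518527e-3,
     -1.984126984109960366729319073763957206143e-4,
     +2.755731915499171528179303925040423384803e-6,
     -2.505209340355388148617179634180834358690e-8,
     +1.605725287696319345779134635418774782711e-10,
     -7.535968124281960435283756562793611388136e-13]

def sin_approx(x):
    y = x * x
    r = 0.0
    for a in reversed(A):      # Horner evaluation of R(y)
        r = r * y + a
    return x + x * y * r       # sin(x) ~ x + x^3 * R(x^2)

# Maximum absolute error sampled over [0, pi/2]
err = max(abs(sin_approx(i * math.pi / 200) - math.sin(i * math.pi / 200))
          for i in range(101))
```

The sampled error stays far below 1e-11 in double precision, consistent with a high-order minimax fit.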
Heat Transfer
1. Notations and Units
| Notation | Quantity | Unit |
| --- | --- | --- |
| $\rho$ | fluid density | $kg \cdot m^{-3}$ |
| $C_p$ | heat capacity at constant pressure | $J \cdot kg^{-1} \cdot K^{-1}$ |
| $T$ | temperature | $K$ |
| $k$ | thermal conductivity | $W \cdot m^{-1} \cdot K^{-1}$ |
| $\boldsymbol{u}$ | fluid velocity | $m \cdot s^{-1}$ |
| $\beta$ | coefficient of thermal expansion | $K^{-1}$ |
| $\mu$ | dynamic viscosity | $Pa \cdot s$ |
| $\mathbf{g}$ | gravitational acceleration | $m \cdot s^{-2}$ |
| $\rho_0$ | reference density of air | $kg \cdot m^{-3}$ |
2. Equations
Convective heat equation
$\rho C_p \left( \frac{\partial T}{\partial t} + \boldsymbol{u} \cdot \nabla T \right) - \nabla \cdot \left( k \nabla T \right) = Q, \quad \text{ in } \Omega_H$
which is completed with boundary conditions and initial value
$\text{at } t=0, \quad T(x,0) = T_0(x)$
Equation of air movement (Navier-Stokes)
$\begin{cases} \rho (\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u}) -\nabla \cdot (\mu \nabla \mathbf{u}) + \nabla P = - \rho_0 \beta(T-T_{ref}) \mathbf{g} & \text{in } \Omega_F \quad (1) \\ \nabla \cdot \mathbf{u} = 0 & \text{in } \Omega_F \quad (2) \\ \mathbf{u}=0 & \text{on } \partial \Omega_F \quad \text{(Dirichlet boundary condition)} \end{cases}$
$\quad$The equation (1) is the momentum equation inherited from Newton’s law and (2) is the mass conservation equation for incompressible flows.
$\quad$We consider $\mathbf{\phi} \in \mathcal{H}_0^1(\Omega)^d$ a test function with compact support in the Sobolev space in dimension d. We multiply our equation by $\mathbf{\phi}$ and we integrate on $\Omega_F$.
$\rho \int_{\Omega_F}\frac{\partial \mathbf{u}}{\partial t}\cdot \mathbf{\mathbf{\phi}} + \rho\int_{\Omega_F} (\mathbf{u} \cdot \nabla \mathbf{u}) \cdot \mathbf{\phi} -\int_{\Omega_F} (\nabla \cdot (\mu \nabla \mathbf{u})) \cdot \mathbf{\phi} + \int_{\Omega_F} \nabla P \cdot \mathbf{\phi} = -\int_{\Omega_F} \rho_0 \beta(T-T_{ref}) \mathbf{g} \cdot \mathbf{\phi}$
$\quad$We can also assume that $\mu$ is constant. We have
$\rho \int_{\Omega_F}\frac{\partial \mathbf{u}}{\partial t}\cdot \mathbf{\mathbf{\phi}} + \rho\int_{\Omega_F} (\mathbf{u} \cdot \nabla \mathbf{u}) \cdot \mathbf{\phi} -\mu \int_{\Omega_F} \Delta \mathbf{u} \cdot \mathbf{\phi} + \int_{\Omega_F} \nabla P \cdot \mathbf{\phi} = -\int_{\Omega_F} \rho_0 \beta(T-T_{ref}) \mathbf{g} \cdot \mathbf{\phi}$
$\quad$Using successively the formulas of Green on the term in $\Delta \mathbf{u}$, then the term in pressure, we obtain:
$\rho \int_{\Omega_F}\frac{\partial \mathbf{u}}{\partial t}\cdot \mathbf{\mathbf{\phi}} + \rho\int_{\Omega_F} (\mathbf{u} \cdot \nabla \mathbf{u}) \cdot \mathbf{\phi} +\mu \int_{\Omega_F} \nabla \mathbf{u} : \nabla \mathbf{\phi} - \int_{\partial \Omega_F}\frac{\partial \mathbf{u}}{\partial n } \cdot \mathbf{\phi} + \int_{\Omega_F} \nabla P \cdot \mathbf{\phi} =- \int_{\Omega_F} \rho_0 \beta(T-T_{ref}) \mathbf{g} \cdot \mathbf{\phi}$
$\rho \int_{\Omega_F}\frac{\partial \mathbf{u}}{\partial t}\cdot \mathbf{\mathbf{\phi}} + \rho\int_{\Omega_F} (\mathbf{u} \cdot \nabla \mathbf{u}) \cdot \mathbf{\phi} +\mu \int_{\Omega_F} \nabla \mathbf{u} : \nabla \mathbf{\phi} - \int_{\partial \Omega_F}\frac{\partial \mathbf{u}}{\partial n } \cdot \mathbf{\phi} - \int_{\Omega_F} P \cdot \nabla \mathbf{\phi} + \int_{\partial \Omega_F} \frac{\partial P}{\partial n} \cdot \mathbf{\phi} = - \int_{\Omega_F} \rho_0 \beta(T-T_{ref})\mathbf{g} \cdot \mathbf{\phi}$
$\quad$Since $\mathbf{\phi}$ has compact support, the boundary terms vanish. We then obtain:
$\rho \int_{\Omega_F}\frac{\partial \mathbf{u}}{\partial t}\cdot \mathbf{\phi} + \rho\int_{\Omega_F} (\mathbf{u} \cdot \nabla \mathbf{u}) \cdot \mathbf{\phi} +\mu \int_{\Omega_F} \nabla \mathbf{u} : \nabla \mathbf{\phi} - \int_{\Omega_F} P \, (\nabla \cdot \mathbf{\phi}) =- \int_{\Omega_F} \rho_0 \beta(T-T_{ref})\mathbf{g} \cdot \mathbf{\phi} \quad (3)$
$\quad$We use the implicit Euler method for the time derivative:
$\frac{\partial \mathbf{u}}{\partial t} (t^{ k+1}) \approx \frac{ \mathbf{u} (t^{ k+1}) - \mathbf{u}(t^k)}{ dt} \quad \forall t^k \in \mathbb{R}^+ \text{ and } k \in \mathbb{N}$
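As an illustration of the implicit (backward) Euler step — a deliberately tiny toy of my own, not part of the original derivation — consider the scalar ODE $u' = -\lambda u$: the implicit update $(u^{k+1}-u^{k})/dt = -\lambda u^{k+1}$ solves to $u^{k+1} = u^{k}/(1+\lambda\,dt)$:

```python
import math

# Backward Euler for u' = -lam * u: (u_new - u_old)/dt = -lam * u_new
# => u_new = u_old / (1 + lam * dt). Unconditionally stable for lam > 0.
lam, dt, T = 1.0, 0.01, 1.0
u = 1.0
for _ in range(int(T / dt)):
    u = u / (1 + lam * dt)

exact = math.exp(-lam * T)
print(u, exact)  # first-order accurate: error O(dt)
```

The scheme slightly overestimates the decaying solution here (since $1/(1+x) > e^{-x}$ for $x>0$), but the error shrinks linearly with $dt$.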
$\quad$Denoting $\mathbf{u}^k = \mathbf{u}(t^k)$ and writing the formula at $t^{k+1}$, we obtain:
$\rho \int_{\Omega_F}\frac{\mathbf{u}^{k+1}}{dt}\cdot \mathbf{\phi} + \rho\int_{\Omega_F} (\mathbf{u}^{k+1} \cdot \nabla \mathbf{u}^{k+1}) \cdot \mathbf{\phi} +\mu \int_{\Omega_F} \nabla \mathbf{u}^{k+1} : \nabla \mathbf{\phi} - \int_{\Omega_F} P \, (\nabla \cdot \mathbf{\phi}) = \rho \int_{\Omega_F}\frac{\mathbf{u}^{k}}{dt}\cdot \mathbf{\phi} - \int_{\Omega_F} \rho_0 \beta(T-T_{ref})\mathbf{g} \cdot \mathbf{\phi}$
$\quad$If we restrict the space of test functions to $\mathcal{V}(\Omega)=\{v \in \mathcal{H}_0^1(\Omega)^3 \,|\, \nabla \cdot v=0 \}$, we obtain the following weak formulation:
$\rho \int_{\Omega_F}\frac{\mathbf{u}^{k+1}}{dt}\cdot \mathbf{\phi} + \rho\int_{\Omega_F} (\mathbf{u}^{k+1} \cdot \nabla \mathbf{u}^{k+1}) \cdot \mathbf{\phi} +\mu \int_{\Omega_F} \nabla \mathbf{u}^{k+1} : \nabla \mathbf{\phi} = \rho \int_{\Omega_F}\frac{\mathbf{u}^{k}}{dt}\cdot \mathbf{\phi} - \int_{\Omega_F} \rho_0 \beta(T-T_{ref})\mathbf{g} \cdot \mathbf{\phi}$
$\quad$Since the space $\mathcal{V}$ is difficult to construct, we will instead use formulation (3). We now look at what happens with equation (2). Let $q \in L^2 (\Omega)$; we multiply (2) by $q$:
$\int_{\Omega_F} (\nabla \cdot \mathbf{u}) \, q = 0$
$\quad$We then define two bilinear forms $a : \mathcal{H}_0^1(\Omega_F)^3 \times \mathcal{H}_0^1(\Omega_F)^3 \rightarrow \mathbb{R}$ and $b : \mathcal{H}_0^1(\Omega_F)^3 \times L^2(\Omega_F) \rightarrow \mathbb{R}$:
\begin{align} a(u,\phi)= \rho \int_{\Omega_F}\frac{u}{dt}\cdot \mathbf{\phi} + \mu \int_{\Omega_F} \nabla u : \nabla \mathbf{\phi} \\ b(\phi,p) =- \int_{\Omega_F} p \, (\nabla \cdot \mathbf{\phi}) \end{align}
$\quad$And a trilinear form $c: \mathcal{H}_0^1(\Omega_F)^3 \times \mathcal{H}_0^1(\Omega_F)^3 \times \mathcal{H}_0^1(\Omega_F)^3 \rightarrow \mathbb{R}$ :
$c(u,u,\phi)=\int_{\Omega_F} (u \cdot \nabla u) \cdot \mathbf{\phi}$
$\quad$The variational formulation is then written as follows:
$\begin{cases} c(\mathbf{u}^{k+1},\mathbf{u}^{k+1},\mathbf{\phi}) + a(\mathbf{u}^{k+1},\mathbf{\phi}) + b(\mathbf{\phi},P)= l\left( \rho \frac{\mathbf{u}^{k}}{dt} - \rho_0 \beta(T-T_{ref}) \mathbf{g},\mathbf{\phi}\right) \\ b(\mathbf{u}^{k+1},q)=0 \end{cases}$ where $l(f,\mathbf{\phi}) = \int_{\Omega_F} f \cdot \mathbf{\phi}$ is the associated linear form in $\mathbf{\phi}$.
$\quad$The Stokes problem associated with $a(u,\phi) + b(\phi,p)$ is well posed if the form $a$ is coercive on $\mathcal{H}_0^1(\Omega)$ and the form $b$ satisfies the inf-sup condition, that is to say:
$\exists \beta >0 \; \text{such that} \quad \sup_{\phi \in \mathcal{H}_0^1(\Omega), \phi \neq 0} \frac{b(\phi,q)}{\lVert \phi \rVert_{\mathcal{H}^1}}\geq \beta \lVert q \rVert_{L^2} \quad \forall q \in L^2(\Omega)$
$\quad$ At a minimum, these hypotheses must hold for the Navier-Stokes equation to have a solution. In our case the pressure is then defined only up to a constant; to obtain a unique pressure we can take it in the subspace of $L^2(\Omega)$ of zero-mean functions. But nothing guarantees that this is the right space, so the pressure will be determined numerically during the simulations.
$\quad$ We first consider the Stokes problem for the discretization. Let $\mathcal{V}_h$ be the discretized velocity space and $\mathcal{P}_h$ the discretized pressure space.
We set $N_u = \dim (\mathcal{V}_h)$ and $N_p = \dim (\mathcal{P}_h)$. Let $\{ \lambda_i \}_{i = 1, ..., N_u}$ be a basis of $\mathcal{V}_h$ and $\{ \mu_i \}_{ i = 1, ..., N_p}$ a basis of $\mathcal{P}_h$.
\begin{align} u_h = \sum_{i=1}^{N_u} u_i \lambda_i \\ p_h = \sum_{j=1}^{N_p} p_j \mu_j \end{align}
$\quad$ Substituting these expansions into the variational formulation, we obtain the discrete formulation.
$\begin{cases} a(\sum_{i=1}^{N_u} u_i \lambda_i , \phi) + b(\phi , \sum_{j=1}^{N_p} p_j \mu_j ) = l(\phi) \\ b(\sum_{i=1}^{N_u} u_i \lambda_i , q) = 0 \end{cases}$
$\quad$Taking $\phi = \lambda_j$ for $j = 1, ..., N_u$ and $q = \mu_k$ for $k = 1, ..., N_p$, we obtain:
$\begin{cases} j=1,...,N_u, \quad \sum_{i=1}^{N_u} u_i a(\lambda_i, \lambda_j) + \sum_{k=1}^{N_p} p_k b(\lambda_j, \mu_k) = l(\lambda_j) \\ k=1,...,N_p, \quad \sum_{i=1}^{N_u} u_i b(\lambda_i, \mu_k) = 0 \end{cases}$
$\quad$By putting the following matrices:
\begin{align} U =(u_i)_{i=1,..,N_u}^T \quad P =(p_i)_{i=1,...,N_p}^T \\ A = (a(\lambda_i, \lambda_j))_{1 \leq i,j \leq N_u} \quad B= (b(\lambda_i, \mu_j))_{1 \leq i \leq N_u , 1 \leq j \leq N_p} \\ F = (l(\lambda_i))_{i=1,...,N_u} \end{align}
$\quad$ We obtain the following linear system:
$\begin{pmatrix} A & B^T \\ B & 0 \\ \end{pmatrix} \begin{pmatrix} U \\ P \\ \end{pmatrix} = \begin{pmatrix} F \\ 0 \\ \end{pmatrix}$
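To make the structure of this saddle-point system concrete, here is a minimal sketch with a hypothetical toy system (two velocity degrees of freedom, one pressure degree of freedom — not an actual finite-element assembly) that builds the block matrix and solves it with plain Gaussian elimination:

```python
def solve(M, rhs):
    # Plain Gaussian elimination with partial pivoting (no external deps).
    n = len(M)
    A = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

# Toy blocks: A = 2*I (two velocity dofs), B = [[1, 1]] (one pressure dof)
A_blk = [[2.0, 0.0], [0.0, 2.0]]
B_blk = [[1.0, 1.0]]
F = [3.0, 1.0]

# Assemble [[A, B^T], [B, 0]] acting on [U; P] = [F; 0]
nu, npr = len(A_blk), len(B_blk)
M = [[0.0] * (nu + npr) for _ in range(nu + npr)]
for i in range(nu):
    for j in range(nu):
        M[i][j] = A_blk[i][j]
for i in range(npr):
    for j in range(nu):
        M[nu + i][j] = B_blk[i][j]   # B block
        M[j][nu + i] = B_blk[i][j]   # B^T block

u1, u2, p = solve(M, F + [0.0] * npr)
print(u1, u2, p)  # 0.5 -0.5 2.0
```

The zero block in the lower-right corner is what makes the system indefinite; this is exactly why the inf-sup condition on $b$ is needed for solvability.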
$\quad$ Since the Navier-Stokes problem is nonlinear, we will use Newton's or Picard's method to solve it.
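Picard's method handles the nonlinearity by lagging it: the convective term $c(\cdot,\cdot,\cdot)$ is evaluated with the previous iterate in its first slot, yielding a linear problem at each step. A scalar caricature of the same lagging idea (my own toy, not the actual Navier-Stokes solve) finds the root of $u + u^2 = 2$ by iterating $u_{n+1} = 2/(1+u_n)$:

```python
# Picard (fixed-point) iteration for u + u*u = 2:
# lag the nonlinearity -> u_new * (1 + u_old) = 2.
u = 0.0
for _ in range(60):
    u = 2.0 / (1.0 + u)
print(u)  # converges to the root u = 1
```

Convergence is only linear (the contraction factor is about 0.5 near the root), which is why Newton's method is often preferred once a good initial guess is available.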
http://ifc43-docs.standards.buildingsmart.org/IFC/RELEASE/IFC4x3/HTML/lexical/Pset_SpaceCommon.htm
# 5.4.4.21 Pset_SpaceCommon
## 5.4.4.21.1 Semantic definition
Properties common to the definition of all occurrences of IfcSpace. Please note that several space attributes are handled directly at the IfcSpace instance, the space number (or short name) by IfcSpace.Name, the space name (or long name) by IfcSpace.LongName, and the description (or comments) by IfcSpace.Description. Actual space quantities, like space perimeter, space area and space volume are provided by IfcElementQuantity, and space classification according to national building code by IfcClassificationReference. The level above zero (relative to the building) for the slab row construction is provided by the IfcBuildingStorey.Elevation, the level above zero (relative to the building) for the floor finish is provided by the IfcSpace.ElevationWithFlooring.
## 5.4.4.21.3 Properties
### 5.4.4.21.4 Changelog
#### 5.4.4.21.4.1 IFC4
• property, IsExternal
• property, Category
• property, CeilingCovering
• property, ConcealedCeiling
• property, ConcealedFlooring
• property, FloorCovering
• property, SkirtingBoard
• property, WallCovering
https://economictheoryblog.com/2012/10/
# Binomial Distribution
The binomial distribution is closely related to the Bernoulli distribution. In order to understand it better, assume that $X_{1},X_{2},...,X_{n}$ are i.i.d. (independent, identically distributed) variables following a Bernoulli distribution with $P(X_{i}=1)=\pi$ and $P(X_{i}=0)=1-\pi$.
# Bernoulli Distribution
Any experiment whose outcome has exactly two possible values follows a Bernoulli distribution.
Typical examples are a coin flip, or a medical treatment that either works or does not.
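The connection can be made explicit: the sum $X_1+\dots+X_n$ of the i.i.d. Bernoulli variables above follows a Binomial$(n,\pi)$ distribution. A short sketch (my own illustration) computes its pmf with the standard formula $P(K=k)=\binom{n}{k}\pi^k(1-\pi)^{n-k}$:

```python
import math

def binom_pmf(k, n, pi):
    # P(sum of n i.i.d. Bernoulli(pi) variables equals k)
    return math.comb(n, k) * pi**k * (1 - pi)**(n - k)

n, pi = 10, 0.3
pmf = [binom_pmf(k, n, pi) for k in range(n + 1)]
print(sum(pmf))                               # ≈ 1.0
print(sum(k * q for k, q in enumerate(pmf)))  # ≈ 3.0 (= n * pi)
```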
https://www.murphyandhislaw.com/blog/research-is-false/
# Digest: Why Most Published Research Findings Are False
## Preface
This article aims to provide some of the derivations from Ioannidis' 2005 article: "Why Most Published Research Findings Are False" which exposed what has since been termed "The Replication Crisis."
The issue begins with the subject of $p$-values, which measure the probability of a study finding a positive result, assuming the presented hypothesis is false. A strong $p$-value is conventionally taken to be $0.05$, i.e. a 5% chance of a positive result when the hypothesis is in fact false — which is often misread as meaning only 5% of published findings are false.
Before diving into the derivations, some examples:
## Example 1
Suppose we represent all possible hypotheses that can be tested with a more manageable 100,000 hypotheses. Let's allow a generous 50:50 true:false split for this set as well as a statistical power of 80%.
|  | True | False | Total |
| --- | --- | --- | --- |
| positive result $+$ | 40k | 2.5k | 42.5k |
| negative result $-$ | 10k | 47.5k | 57.5k |
| Total | 50k | 50k | 100k |
Here, the $p$-value $= \alpha = P(+ \vert \text{ f })$ where $+$ is a positive result and $\text{f}$ a false hypothesis. The statistical power is $P(+ \vert \text{ t })$, and the Positive Predictive Value is $\text{PPV} = P(\text{ t } \vert +) = \frac{40\text{k}}{42.5\text{k}} \approx 0.94$, which is pretty satisfactory given our generous values.
## Example 2
Once again, we'll take 100,000 hypotheses, but now with a 10:90 true:false split for this set as well as a statistical power of 80%. Filling out the table we get:
|  | True | False | Total |
| --- | --- | --- | --- |
| $+$ | 8k | 4.5k | 12.5k |
| $-$ | 2k | 85.5k | 87.5k |
| Total | 10k | 90k | 100k |
Here, $\text{PPV} = \frac{8k}{12.5k} = 0.64$, which is significantly worse than the 95% one might assume for a positive study — even before accounting for publication bias, cheating, etc., which are covered below.
Before getting much further, it will be useful to define a glossary:

| Symbol | Value | Meaning |
| --- | --- | --- |
| $p$ | $P(+ \vert \text{ f })$ | probability of a study finding a positive result, given that the hypothesis is false |
| $\text{PPV}$ | $P(\text{ t } \vert +)$ | Positive Predictive Value |
| $R$ | $\frac{P(\text{ t })}{P(\text{ f })}$ | the pre-study odds of the hypothesis tested |
| $\varTheta$ | $= R = \frac{P}{1 - P}$ | an alternate expression of probability, e.g. 10:90 odds: $\varTheta = \frac{10\%}{100\% - 10\%}$ |
| $P(\text{f})$ | $1 - P(\text{ t })$ | complement rule |
| $\alpha$ | $P( + \vert \text{ f })$ | Type I Error |
| $\beta$ | $P( - \vert \text{ t })$ | Type II Error |
| $P(\text{t } \vert +)$ | $\frac{P(\text{t}) \cdot P(+ \vert \text{ t })}{P(+)}$ | Bayes' Rule |
| $P(\text{t} \land +)$ | $P(\text{ t } \vert +) \cdot P(+)$ | Product Rule |
| $u$ |  | bias factor influenced by $p$-hacking, conflict of interest, competitive publication motivations, etc. |
## Table 1
Now we can recreate the general table for all such examples above and derive their values:
|  | True | False | Total |
| --- | --- | --- | --- |
| $+$ | $\frac{c(1 - \beta)R}{R+1}$ | $\frac{c \alpha}{R+1}$ | $\frac{c(R+\alpha-\beta R)}{R+1}$ |
| $-$ | $\frac{c \beta R}{R+1}$ | $\frac{c(1-\alpha)}{R+1}$ | $\frac{c(1-\alpha + \beta R)}{R+1}$ |
| Total | $\frac{cR}{R+1}$ | $\frac{c}{R+1}$ | $c$ (the number of relationships tested) |
## Derivations
Starting with the top left cell which represents:
$P(+ \land \text{ t }) = \underbrace{P(+ \vert \text{ t })}_{\text{\S}} \cdot \underbrace{P(\text{t})}_{\frac{R}{R+1}}$
$\quad \text{\S} 1.1: \beta = P(- \vert \text{ t })$
$\quad \text{\S} 1.2: P(+ \vert \text{ t }) + P(- \vert \text{ t }) = 1$
$\quad \text{\S} \therefore P(+ \vert \text{ t }) = 1 - \beta$
$= (1- \beta )\frac{R}{R+1}$, and multiplying by the total number of relationships $c$ gives the cell value $\frac{c(1 - \beta )R}{R+1}$
Similarly, for the top-middle cell:
$P(+ \land \text{ f }) = \underbrace{P(+ \vert \text{ f })}_{\alpha} \cdot \underbrace{P(\text{f})}_{1 - \frac{R}{R+1}}$
$= \alpha \left(1 - \frac{R}{R+1}\right)$, which multiplied by $c$ gives $\frac{c \alpha}{R+1}$
So, dividing the true positives by all positives (the top row of the table):
$\frac{\text{true positives}}{{\text{all positives}}} = \frac{\frac{c(1- \beta )R}{R+1}}{\frac{c(R+ \alpha - \beta R)}{R+1}} = \frac{(1 - \beta ) R}{ R + \alpha - \beta R} = \underbrace{P(\text{ t } \vert +)}_{\text {want this bad boi to be high}}$ in terms of Type I, II error and pre-study odds.
## When is a Study More Likely to be True than False?
$P(\text{ t } \vert +) > \frac{1}{2}$
$\rArr \frac{(1 - \beta ) R}{ R + \alpha - \beta R} > \frac{1}{2}$
$\rArr 2(\frac{(1 - \beta ) R}{ R + \alpha - \beta R}) > 2( \frac{1}{2})$
$\rArr 2(1 - \beta )R > R + \alpha - \beta R$
$\rArr R(1 - \beta ) > \alpha \iff (\text{pre-study odds})(\text{statistical power}) > p\text{-value}$
Some fields of study have inherently small $R$ or $(1 - \beta)$ values.
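The whole argument so far condenses into a few lines of code — a sketch of the formulas above, checked against Examples 1 and 2:

```python
def ppv(alpha, beta, R):
    # P(t | +) = (1 - beta) * R / (R + alpha - beta * R)
    return (1 - beta) * R / (R + alpha - beta * R)

# Example 1: 50:50 odds (R = 1), power 80%, alpha = 0.05
print(ppv(0.05, 0.2, 1.0))    # ≈ 0.941
# Example 2: 10:90 odds (R = 1/9)
print(ppv(0.05, 0.2, 1 / 9))  # ≈ 0.64
```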
## What Happens if we Introduce Bias?
$P(+ \land \text{ f }) = P(+ \vert \text{ f }) \cdot P(\text{f}) \underset{bias}{\longrightarrow}$ negative study results become positive with probability $u$
This can alter our outcome in two cases:
$\text{1. } P(\underbrace{+}_{\text{for any reason}} \land \text{t})$
$= \underbrace{P(+ \vert \text{ t })}_{1 - \beta} \cdot \underbrace{P(\text{t})}_{\frac{R}{R+1}} + u \cdot \underbrace{P(- \vert \text{ t })}_{\text{Type II Error: } \beta} \cdot P(\text{t})$
$= (1 - \beta )(\frac{R}{R+1}) + u \beta \frac{R}{R+1}$
$= \frac{(1 - \beta) R + u \beta R}{R+1}$
$\text{2. } P(\underbrace{+}_{\text{for any reason}} \land \text{f})$
$= P(+ \vert \text{ f }) \cdot \underbrace{P(\text{f})}_{1-P(t)} + u \cdot P(- \vert \text{ f }) \cdot P(\text{f})$
$= \alpha (1-\frac{R}{R+1}) + u(1 - \alpha)(1-\frac{R}{R+1})$. Note that the bias has to be independent of whether the hypothesis is true or false; otherwise we could not apply the Product Rule here.
$= \frac{\alpha + u(1 - \alpha)}{R+1}$
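Combining the two biased cells gives a bias-adjusted PPV — my own condensation of the two expressions above into one function:

```python
def ppv_biased(alpha, beta, R, u):
    # true positives (genuine, or bias-flipped with probability u)
    tp = (1 - beta) * R + u * beta * R
    # false positives (genuine Type I errors, or bias-flipped true negatives)
    fp = alpha + u * (1 - alpha)
    return tp / (tp + fp)

print(ppv_biased(0.05, 0.2, 1.0, 0.0))  # no bias: ≈ 0.941, as before
print(ppv_biased(0.05, 0.2, 1.0, 0.3))  # 30% bias drags the PPV down
```

With $u=0$ this reduces to the unbiased PPV; any $u>0$ inflates the false-positive cell faster than the true-positive one.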
## The Issue of Incorrect pre-publication $p$-values.
Research efforts do not occur in isolation. Several teams may be independently, competitively working on the same hypotheses over and over again without adjusting their $p$-values.
Relevant xkcd:
This means that the probability of a false positive grows as the experiments are repeated, dragging the PPV down:
$P(\underbrace{+_{\tiny > 0}}_{\text{with n studies}} \land \text{ t })$
$= \underbrace{P(+_{\tiny > 0} \vert \text{ t })}_{\text{\S}} \cdot P(\text{t})$
$\quad \text{\S}2.1: P(+_{\tiny > 0} \vert \text{ t }) = \displaystyle\sum^{n} P_n$, which could be ... a lot of probabilities, so instead use the complement:
$\quad \text{\S}2.2: (1 - P(-_{\forall \tiny n} \vert \text{ t })) \cdot P(\text{t})$, i.e. one minus the probability that all $n$ studies come back negative
$\quad \text{\S}2.3: (1 - \beta^n) \cdot P(\text{t})$
$= \frac{R(1 - \beta^n)}{R+1}$
Meaning that with each subsequent, competing trial, the likelihood of your own $p$-value genuinely being sufficiently small decreases.
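Under the same independence assumptions, the chance that at least one of $n$ studies comes out positive is $1-\beta^n$ for a true hypothesis and $1-(1-\alpha)^n$ for a false one, so the PPV of "somebody found a positive result" can be sketched as:

```python
def ppv_n_studies(alpha, beta, R, n):
    # P(at least one of n independent studies is positive | t) = 1 - beta**n
    tp = R * (1 - beta**n)
    # P(at least one positive | f) = 1 - (1 - alpha)**n
    fp = 1 - (1 - alpha)**n
    return tp / (tp + fp)

for n in (1, 5, 20):
    print(n, round(ppv_n_studies(0.05, 0.2, 1.0, n), 3))
# the PPV decays as n grows: false positives accumulate faster than power
```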
http://mathhelpforum.com/math-topics/113483-optimization-print.html
Optimization
• November 9th 2009, 11:53 AM
Dogod11
Optimization
Hello, at first had this problem:
A mountain lake in a national park is stocked each spring with two species of fish, S1 and S2. The average weight of the fish stocked is 4 pounds for S1 and 2 pounds for S2. Two foods, F1 and F2, are available in the lake. The average requirement of a fish of species S1 is 1 unit of F1 and 3 units of F2 each day. The corresponding requirement of S2 is 2 units of F1 and 1 unit of F2.
a). Suppose that 1000 units of F1 and 1800 units of F2 are available daily in the example above. How should the lake be stocked to maximize the weight of fish supported by the lake?
Well, I have already found the solution using the graphical method:
Maximize $4x_1 + 2x_2$
subject to restrictions
$x_1 + 2x_2 \leq{1000}$
$3x_1 + x_2 \leq{1800}$
And the solution is $x_1 = 520, x_2 = 240$
The question I have is to resolve this part:
In the previous problem, how should the lake be stocked to maximize the total number of fish supported by the lake?
• November 11th 2009, 01:21 AM
BobP
THe second part can be answered graphically as well, but here is an algebraic solution.
We have,
$x_1+2x_2\leq1000$ and $3x_1+x_2\leq1800.$
Add twice the first inequality to the second and you have
$5x_1+5x_2\leq3800.$
Divide by 5 and you get
$x_1+x_2\leq760.$
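For what it's worth, here is a quick numerical check (my own sketch — it enumerates the feasible corner points, since the optimum of a linear program always lies at a vertex):

```python
def intersect(a1, b1, c1, a2, b2, c2):
    # Solve a1*x + b1*y = c1, a2*x + b2*y = c2 by Cramer's rule.
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Corner points of x1 + 2*x2 <= 1000, 3*x1 + x2 <= 1800, x1, x2 >= 0
vertices = [(0.0, 0.0), (600.0, 0.0), (0.0, 500.0),
            intersect(1, 2, 1000, 3, 1, 1800)]

best_weight = max(vertices, key=lambda v: 4 * v[0] + 2 * v[1])
best_count = max(vertices, key=lambda v: v[0] + v[1])
print(best_weight, best_count)  # both maxima land on (520, 240)
```

So both objectives — total weight and total number — are maximized at the same vertex, which is exactly what the algebraic argument above shows.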
• November 11th 2009, 03:46 AM
Dogod11
But keep in mind that this problem comes with a third restriction, doesn't it?
$4x_1 + 2x_2 = 2560.$
• November 11th 2009, 04:57 AM
BobP
That isn't a problem is it ?
$x_{1}=520, \quad x_{2}=240,$ is at the limit of $x_{1}+x_{2}\leq760.$
What this is saying is that the two alternative requirements, (1) maximize the weight and (2) maximize the number, lead to the same result.
Am I missing something ?
• November 11th 2009, 05:17 AM
Dogod11
Hello,
I think it should be as you say
but I was posing it like this: since in the first part of the exercise we found the maximum weight supported by the lake,
$4(520) + 2(240) = 2560$
so now the problem is
maximize $P = x_1 + x_2$
subject to restrictions
$4x_1 + 2x_2 = 2560$
$x_1+2x_2\leq1000$
$3x_1+x_2\leq1800.$
Is this wrong, or does it give the same solution?
Thank you very much ...
• November 11th 2009, 05:27 AM
Dogod11
The result I get is $P = 760,$
$x = 520, y = 240$
well, anyway it does not matter, hehe
Greetings
• November 11th 2009, 05:42 AM
CaptainBlack
Quote:
Originally Posted by Dogod11
Hello, at first had this problem:
A mountain lake in a national park is stocked each spring with two species of fish, S1 and S2. The average weight of the fish stocked is 4 pounds for S1 and 2 pounds for S2. Two foods, F1 and F2, are available in the lake. The average requirement of a fish of species S1 is 1 unit of F1 and 3 units of F2 each day. The corresponding requirement of S2 is 2 units of F1 and 1 unit of F2.
a). Suppose that 1000 units of F1 and 1800 units of F2 are available daily in the example above. How should the lake be stocked to maximize the weight of fish supported by the lake?
Well, I have already found the solution, graphical method using the search
Maximize $4x_1 + 2x_2$
subject to restrictions
$x_1 + 2x_2 \leq{1000}$
$3x_1 + x_2 \leq{1800}$
And the solution is $x_1 = 520, x_2 = 240$
The question I have is to resolve this part:
In the previous problem, how should the lake be stocked to maximize the total number of fish supported by the lake?
You now have to maximise $x_1+x_2$ subject to the same constraints:
$x_1 + 2x_2 \leq{1000}$
$3x_1 + x_2 \leq{1800}$
CB
• November 11th 2009, 05:45 AM
CaptainBlack
Quote:
Originally Posted by Dogod11
Hello,
I think it should be as you say
but I was posing like this: Since the first part of the exercise was found the maximum weight supported by the lake
$4(520) + 2(240) = 2560$
so now the problem is
maximize $P = x_1 + x_2$
Yes
Quote:
subject to restrictions
$4x_1 + 2x_2 = 2560$
No, you cannot generally get the maximum mass of fish with the maximum number.
Quote:
$x_1+2x_2\leq1000$
$3x_1+x_2\leq1800$
Is this wrong, or does it give the same solution?
Thank you very much ...
• November 11th 2009, 05:46 AM
Dogod11
anyway it gives the same result: $x = 520, y = 240.$
Thank you very much....
https://physics.stackexchange.com/questions/691588/demonstration-of-noethers-theorem?noredirect=1
# Demonstration of Noether's Theorem [closed]
So, as many, many people before me, I'm trying to get some insight into Noether's Theorem and its demonstration. As I'm in the process of self-teaching here, there are several things I'm "missing" or "not understanding". I worked my way through Tong's notes, books like Peskin & Schroeder, Mandl & Shaw, Greiner (Field Quantization), etc., and several questions in PSE like this one and this one. Nevertheless, the more I read, the more questions I have. I will try to compile everything I've got so far, and I will write my questions bolded. I'm sorry if this post turns out to be a little too long, but I'm trying to be as accurate as possible without diving into the realm of group theory, that is, I'm trying to demonstrate the theorem at a pre-Ph.D. level. So, without further ado, I'll simply start.
Suppose we have an infinitesimal transformation $$x^{\mu}\rightarrow\tilde{x}^{\mu}=x^{\mu}+\delta x^{\mu}$$, which transforms our integration region $$\Omega\rightarrow\tilde{\Omega}$$. We have that under such a transformation, the fields may have a total variation $$\Delta\phi^a$$ such that: $$$$\tag{1} \Delta\phi^a = \tilde{\phi}^a(\tilde{x})-\phi^a(x)\,.$$$$
1) Where is $$\Delta\phi^a$$ evaluated at? $$x$$ or $$\tilde{x}$$?
I will assume, without understanding why, that $$\Delta\phi^a = \Delta\phi^a(x)$$. This variation must not be confused with the local variation showing only how the fields change:
$$$$\tag{2} \delta\phi^a(x) = \tilde{\phi}^a(x)-\phi^a(x)\,.$$$$
Nevertheless, they are related, because at least at first order (and assuming $$\delta x^{\mu}$$ and $$\delta \phi^a(x)$$ of the same order):
$$$$\tag{3} \Delta\phi^a(x) \simeq \delta\phi^a(x) + \delta x^{\mu}\frac{\partial\phi^a}{\partial x^{\mu}}(x)\,.$$$$
This can be easily proven by using the definition of derivative at a first order:
$$$$\frac{df}{dx}(x) = \lim_{h\to0}\frac{f(x+h)-f(x)}{h}\simeq\frac{f(x+h)-f(x)}{h}\rightarrow f(x+h) \simeq f(x)+h\frac{df}{dx}(x)\,,$$$$ i.e. $$\tilde{\phi}^a(\tilde{x}) = \tilde{\phi}^a(x+\delta x) = \tilde{\phi}^a(x) + \delta x^{\mu}\frac{\partial\tilde{\phi}^a}{\partial x^{\mu}}(x)$$, and using equation (2) sometimes.
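To make equation (3) concrete (this check is my addition, using a standard textbook case): for an infinitesimal translation $\delta x^{\mu} = \epsilon^{\mu}$ under which the field transforms as a scalar, $\tilde{\phi}^a(\tilde{x}) = \phi^a(x)$, the total variation vanishes and (3) reduces to the familiar transport term:

```latex
% Infinitesimal translation: \tilde{x}^{\mu} = x^{\mu} + \epsilon^{\mu},
% scalar transformation law: \tilde{\phi}^a(\tilde{x}) = \phi^a(x)
\Delta\phi^a(x) = 0
\quad\overset{(3)}{\Longrightarrow}\quad
\delta\phi^a(x) \simeq -\,\epsilon^{\mu}\,\frac{\partial\phi^a}{\partial x^{\mu}}(x)
```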
Later it will also come in handy to have an expression for $$\Delta(\partial_{\mu}\phi^a(x))$$. My first assumption here is that: $$$$\tag{4} \Delta(\partial_{\mu}\phi^a) = \tilde{\partial}_{\mu}\tilde{\phi}^a(\tilde{x})-\partial_{\mu}\phi^a(x)$$$$ 2) Same question as above, where is $$\Delta(\partial_{\mu}\phi^a)$$ evaluated at?
3) Is expression (4) actually correct?
I will calculate this by starting with $$\partial_{\mu}(\Delta\phi^a)$$ :
\begin{align} \partial_{\mu}(\Delta\phi^a) &= \partial_{\mu}\tilde{\phi}^a(\tilde{x}) - \partial_{\mu}\phi^a(x)\\ &= \tilde{\partial}_{\mu}\tilde{\phi}^a(\tilde{x})- \partial_{\mu}\phi^a(x) + \partial_{\mu}\tilde{\phi}^a(\tilde{x}) - \tilde{\partial}_{\mu}\tilde{\phi}^a(\tilde{x})\\ &= \Delta(\partial_{\mu}\phi^a) + \frac{\partial\tilde{x}^{\sigma}}{\partial x^{\mu}}\frac{\partial\tilde{\phi}^a}{\partial\tilde{x}^{\sigma}}(\tilde{x})-\tilde{\partial}_{\mu}\tilde{\phi}^a(\tilde{x})\,. \end{align} To make things shorter, last two terms can be worked by using $$\tilde{x}^{\mu}=x^{\mu}+\delta x^{\mu}$$, expanding $$\tilde{\phi}^a(\tilde{x}) = \tilde{\phi}^a(x+\delta x)$$ at first order, replacing, and keeping again all terms at first order, arriving to: $$$$\tag{5} \Delta(\partial_{\mu}\phi^a) = \partial_{\mu}(\Delta\phi^a) + \partial_{\mu}(\delta x^{\sigma})\partial_{\sigma}\phi^a(x)\,.$$$$
After the detour, we come back to Noether. I decided to proceed by using the invariance of the action: $$$$S[\phi^a(x),\partial_{\mu}\phi^a(x)]=\int_{\Omega}d^4x\;\mathcal{L}(\phi^a(x),\partial_{\mu}\phi^a(x))\,.$$$$
To shorten the following expressions, I'll choose to abuse the notation: $$S[\phi^a(x),\partial_{\mu}\phi^a(x)]=S[x]$$ and $$\mathcal{L}(\phi^a(x),\partial_{\mu}\phi^a(x))=\mathcal{L}(x)$$. We need our transformation to produce a variation of the action of maximum a boundary term, which is the integral of the divergence of a smooth field $$W^{\mu}$$, satisfying $$\left.W^{\mu}\right|_{\partial\Omega}=0$$, which is our condition for a well behaved variational principle. In summary, we need that:
\begin{align} \delta S &= \tilde{S}[\tilde{x}]-S[x]\\ &= \int_{\tilde{\Omega}}d^4\tilde{x}\;\tilde{\mathcal{L}}(\tilde{x}) - \int_{\Omega}d^4x\;\mathcal{L}(x)\tag{6}\\ &= \int_{\Omega}d^4x\;\partial_{\mu}W^{\mu}(x)\,. \end{align}
4) I assume that integrating $$\partial_{\mu}W^{\mu}$$ in $$\Omega$$ or $$\tilde{\Omega}$$ is exactly the same, it depends on which set of coordinates we would like to have at the end of the demonstration. Am I right?
5) In expression (6), should the first Lagrangian have a tilde or not?
If it doesn't have the tilde, and as stated by Goldstein, the underlying assumption would be that the functional form of the Lagrangian does not change with the transformation. This means, as an example, that if the Lagrangian was $$\mathcal{L}(x) = 2\phi_1(x) + \phi_2(x)$$, then after the transformation $$\tilde{\mathcal{L}}(\tilde{x}) = 2\tilde{\phi}_1(\tilde{x}) + \tilde{\phi}_2(\tilde{x})=\mathcal{L}(\tilde{x})$$. As far as I understand, this is not the general case, and there is a very good example/explanation in this post at PSE. I will therefore assume that the first term in my expression (6) must have a tilde.
The next step is where most of my headaches start. To find out what $$\delta S$$ is, I would like to turn the first term in (6) into an integral over $$\Omega$$. The story with the Jacobian for $$d^4\tilde{x}$$ is not a problem for me. My problems come with the Lagrangian. My first assumption would be to expand to first order in its variations, as follows:
\begin{align} \tilde{\mathcal{L}}(\tilde{x}) &= \tilde{\mathcal{L}}(\tilde{\phi}^a(\tilde{x}),\tilde{\partial}_{\mu}\tilde{\phi}^a(\tilde{x}))\\ &= \tilde{\mathcal{L}}(\phi^a(x) + \Delta \phi^a(x),\partial_{\mu}\phi^a(x)+\Delta(\partial_{\mu}\phi^a(x)))\\ &\overset{?}{\simeq} \tilde{\mathcal{L}}(x) + \Delta\phi^a(x)\frac{\partial\tilde{\mathcal{L}}^a}{\partial\phi^a}(x) + \Delta(\partial_{\mu}\phi^a(x))\frac{\partial\tilde{\mathcal{L}}}{\partial(\partial_{\mu}\phi^a)}(x)\,. \end{align}
6) Is last expression correct? If so, how do I get rid of the tilde's?
I tried to assume (I don't know why this should be true for the Lagrangian) that there is a local transformation for the Lagrangian as well, satisfying $$\tilde{\mathcal{L}}(x) = \mathcal{L}(x) + \delta\mathcal{L}(x)$$, replaced it in the last expression, and then I used expressions (3) and (5), the Jacobian in $$d^4\tilde{x}$$, but I never seem to get anywhere close to the expression I should be getting, which is expression (2.48) in Greiner's book (note that I used here a different notation): $$$$\delta S = \int_{\Omega} d^4x\left(\delta\mathcal{L}(x) + \frac{\partial}{\partial x^{\mu}}\left(\mathcal{L}(x)\delta x^{\mu}\right)\right)\,.$$$$ The next steps after this expression is reached are not so complicated, given that Greiner expands the local transformation of the Lagrangian at first order (very much like when finding the Euler-Lagrange equations), plays with the variations a little bit, and finally gets to the conserved current.
Once again, I'm sorry for the extension of the post, I have invested weeks already on this, and I do not seem to be able to advance somehow. Any inputs would be very much appreciated!
• Check section 2.4 in A First Book of Quantum Field Theory Second Edition by Lahiri and Pal. It might help. Jan 30 at 11:20
• @KasiReddySreemanReddy Thank you for the recommendation! I checked it and it confirmed my suspicions, that the answers to my questions 1) and 2) is "evaluated at $x$", but I still don't know why. And in equation (2.38) the book assumes $\tilde{\mathcal{L}}(\tilde{x})=\mathcal{L}(\tilde{x})$ also without explaining why. Jan 30 at 11:59
• @Condereal for 1) and 2) there is no special reason why $\Delta\phi^a$ is defined as a variable of $x$. $x$ and $\tilde{x}$ have a one-to-one correspondence. So we could have defined it using $\tilde{x}$, but it will be simple if we express it in $x$ then the entire right-hand side will be expressed in $x$. Jan 30 at 12:21
• are you comfortable with some basic differential geometry? differential forms, pullbacks, Lie derivatives? Feb 1 at 3:20
• @peek-a-boo I have acquired some of the basic skills during the years, I know the terminology and the theory, but maybe cannot apply them directly to a case like this one. Feb 1 at 18:53
These are the same issues I struggled with, and the resolution is obviously just to have clear definitions, not hidden behind (what I find to be) the mysterious tilde/ primed notation. I'll restrict attention just to the case of a scalar field. I'll first talk about the different variations of a scalar field, and then talk about variations of the Lagrangian then the action.
1. Changes to Scalar fields.
Fix a smooth oriented $$n$$-dimensional manifold $$M$$ (we interpret this as spacetime with $$n=4$$). For simplicity, let us just stick to real scalar fields, meaning smooth maps $$\phi:M\to\Bbb{R}$$. Before we get to the actual details, some preliminaries:
• Consider a smooth deformation $$\Phi$$ of $$\phi$$. This means we consider a smooth map $$\Phi:I\times M\to \Bbb{R}$$, where $$I\subset \Bbb{R}$$ is some open interval containing the origin, such that $$\Phi(0,\cdot)=\phi(\cdot)$$. It is of course tradition to write the values $$\Phi(\epsilon, p)$$ as $$\Phi_{\epsilon}(p)$$ for $$(\epsilon,p)\in I\times M$$.
• Consider a smooth vector field $$X$$ on $$M$$, and let $$\Psi$$ denote its flow (this is 'basic' ODE theory). The name "flow" really is apt here, because if you imagine a little marble being dropped in a river, it will follow a trajectory naturally as described by the flow of the water. More mathematically, the interpretation of the flow map $$\Psi$$ is that given a point $$p\in M$$ and sufficiently small $$|\epsilon|$$, the quantity $$\Psi_{\epsilon}(p)$$ can be thought of as "where the point $$p$$ ends up $$\epsilon$$ units of time later, if it is left under the influence of the vector field $$X$$".
It is standard terminology to call the vector field $$X$$ the "infinitesimal generator of $$\Psi$$". Here the adjective "infinitesimal" refers to the fact that this is at the level of tangent spaces. In a local coordinate chart $$(U, (x^1,\dots, x^n))$$, we can write the vector field as $$X=X^{\mu}\frac{\partial}{\partial x^{\mu}}$$. These $$X^{\mu}$$ are what you've written as $$\delta x^{\mu}$$.
Now that we have the concept of a deformation and a flow, we can start talking precisely about the various ways things change.
1. The first way to change a scalar field, is to consider a deformation of it. At an "infinitesimal level" this gives rise to a variation $$\delta \phi$$. By definition $$\delta\phi:M\to\Bbb{R}$$ is a smooth map defined for each $$p\in M$$ as $$\delta\phi(p):=\frac{d}{d\epsilon}\bigg|_{\epsilon=0}\Phi_{\epsilon}(p)$$.
2. The second way to investigate a change in a scalar field arises due to the effect of the vector field $$X$$ itself, which moves points in the manifold around. More precisely, this is the idea of the Lie derivative $$L_X\phi$$. By definition, this is a smooth map $$M\to\Bbb{R}$$, defined for each $$p\in M$$ as $$(L_X\phi)(p):=\frac{d}{d\epsilon}\bigg|_{\epsilon=0}(\Psi_{\epsilon}^*\phi)(p)=\frac{d}{d\epsilon}\bigg|_{\epsilon=0}\phi(\Psi_{\epsilon}(p))$$. In a local coordinate chart we can write $$L_X\phi=X^{\mu}\frac{\partial \phi}{\partial x^{\mu}}$$ (traditional physics texts write this term as $$\delta x^{\mu}\partial_{\mu}\phi$$).
3. The third way is to combine both effects. We define $$\Delta \phi:M\to\Bbb{R}$$ as $$(\Delta \phi)(p):=\frac{d}{d\epsilon}\bigg|_{\epsilon=0}(\Psi_{\epsilon}^*\Phi_{\epsilon})(p)=\frac{d}{d\epsilon}\bigg|_{\epsilon=0}\Phi_{\epsilon}(\Psi_{\epsilon}(p))$$. Using the chain rule, you can show $$\Delta \phi=\delta \phi+L_X\phi$$ (as a hint for how to efficiently prove this, fix a point $$p\in M$$ and consider the function $$h(s,t)=(\Psi_s^*\Phi_t)(p)=\Phi_t(\Psi_s(p))$$. This is a smooth function of two real variables, defined on a small open set around the origin. The goal is to calculate $$\frac{d}{d\epsilon}\bigg|_{\epsilon=0}h(\epsilon,\epsilon)$$, which by the chain rule is just $$\frac{\partial h}{\partial s}(0,0)+\frac{\partial h}{\partial t}(0,0)=\frac{d}{ds}\bigg|_{s=0}h(s,0)+\frac{d}{dt}\bigg|_{t=0}h(0,t)$$, and recall $$\Psi_0=\text{id}_M$$ and $$\Phi_0=\phi$$).
Hopefully this answers your question of where things are evaluated. We have $$\Delta \phi=\delta\phi+L_X\phi$$; this is an equality of smooth functions on $$M$$. You evaluate everything at the same point $$p$$. This equation is just telling you that the total change in the scalar field $$\phi$$ arising from the deformation and the vector field's flow moving points around is simply the sum of the two separate effects (by the chain rule).
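As a quick numerical sanity check of $$\Delta\phi=\delta\phi+L_X\phi$$, one can pick a concrete field, deformation, and flow on $$M=\Bbb{R}$$ and compare central-difference derivatives at $$\epsilon=0$$; the particular field, deformation, and flow below are arbitrary illustrative choices, not anything from the original question.

```python
import math

def phi(x):                       # scalar field phi(x) = x^2
    return x**2

def Phi(eps, x):                  # a deformation with Phi(0, .) = phi
    return x**2 + eps*math.sin(x)

def Psi(eps, x):                  # flow of the vector field X = x d/dx
    return x*math.exp(eps)

def d_deps(f, h=1e-6):            # central difference d/d(eps) at eps = 0
    return (f(h) - f(-h)) / (2*h)

p = 1.3                           # an arbitrary evaluation point

delta_phi = d_deps(lambda e: Phi(e, p))          # variation of the field alone
lie_phi   = d_deps(lambda e: phi(Psi(e, p)))     # Lie derivative (L_X phi)(p)
total     = d_deps(lambda e: Phi(e, Psi(e, p)))  # combined change (Delta phi)(p)

# chain rule: the total change is the sum of the two separate effects
assert abs(total - (delta_phi + lie_phi)) < 1e-6
```

Here $$\delta\phi(p)=\sin p$$ and $$(L_X\phi)(p)=2p^2$$, and the finite differences reproduce their sum.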
You next ask about things like $$\Delta (\partial_{\mu}\phi)$$ and so on, but if you just stick to the $$\Psi_{\epsilon}$$ and its pullback, and to the deformation $$\Phi_{\epsilon}$$, this makes things much clearer (for me anyway), and you'll never even have to write things like $$\Delta(\partial_{\mu}\phi)$$.
2. Changes to Lagrangian.
One technical detail is that rather than considering the Lagrangian as a real-valued function, it is much more convenient to consider it as being $$n$$-form valued, i.e. to include $$dx^1\wedge \cdots \wedge dx^n$$ as part of its definition. So, let $$\Lambda$$ denote this object; it eats scalar fields and their derivatives and outputs $$n$$-forms. So, in a local coordinate chart $$(U,(x^1,\dots, x^n))$$ and points $$p\in U$$, we can write \begin{align} \Lambda_{\phi}(p)&:=\Lambda(\phi(p),\partial_{\mu}\phi(p)):=\mathcal{L}(\phi(p),\partial_{\mu}\phi(p))\,(dx^1\wedge \cdots \wedge dx^n)(p) \end{align} So, $$\Lambda_{\phi}$$ is a perfectly good object to integrate on the oriented manifold $$M$$ (remember that $$n$$-forms should be integrated on oriented $$n$$-manifolds).
Again, there are three ways to change the Lagrangian:
1. Simply plug in the deformed field, i.e consider $$\Lambda_{\Phi_{\epsilon}}$$.
2. Plug in the original field $$\phi$$ but consider pullback by the flow map $$\Psi_{\epsilon}$$, i.e $$\Psi_{\epsilon}^*(\Lambda_{\phi})$$.
3. Do both of the above steps by considering $$\Psi_{\epsilon}^*(\Lambda_{\Phi_{\epsilon}})$$.
Again, for each sufficiently small $$\epsilon$$, the three procedures above give us a new $$n$$-form on $$M$$. We can differentiate at $$\epsilon=0$$ to obtain three variations of the Lagrangian. The first we can denote $$\delta(\Lambda_{\phi})$$ (variation due to plugging in the deformed field), the second is by definition the Lie derivative $$L_X(\Lambda_{\phi})$$ (the variation due to the vector field's flow moving points around), and the final we can denote $$\Delta(\Lambda_{\phi})$$, and it equals the sum $$\Delta(\Lambda_{\phi})=\delta(\Lambda_{\phi})+L_X(\Lambda_{\phi})$$ (the reason for the sum is the same chain rule reasoning as above).
Connecting back to more classical notation, writing $$\tilde{\mathcal{L}}(\tilde{x})$$ means you're considering the third type of change $$\Psi_{\epsilon}^*(\Lambda_{\Phi_{\epsilon}})$$. Why? The tilde in $$\tilde{\mathcal{L}}$$ denotes that we're plugging in the deformed fields into the Lagrangian (so something like $$\tilde{\mathcal{L}}(p)$$ is shorthand for $$\mathcal{L}(\Phi_{\epsilon}(p),\partial\Phi_{\epsilon}(p))$$) and the final $$(\tilde{x})$$ indicates that we should do a pullback $$\Psi_{\epsilon}^*$$ to capture the effect of the vector field moving things around. The expression $$\tilde{\mathcal{L}}(x)$$ is taken to mean you plug in just the deformed fields, i.e what I have denoted as $$\Lambda_{\Phi_{\epsilon}}$$ (the $$(x)$$ rather than $$(\tilde{x})$$ tells us we don't do any pullback via the vector field's flow).
3. Changes to Action
I'm sure you can guess what I'm about to say here. For each small enough $$\epsilon$$, and "nice" open set $$\Omega\subset M$$ (say for example having compact closure with smooth boundary), we can consider three types of integrals: \begin{align} \int_{\Omega}\Lambda_{\Phi_{\epsilon}}\quad\text{or}\quad\int_{\Omega}\Psi_{\epsilon}^*(\Lambda_{\phi})\quad \text{or}\quad \int_{\Omega}\Psi_{\epsilon}^*(\Lambda_{\Phi_{\epsilon}}). \end{align} In words, we take the three types of varied Lagrangians (either by virtue of the scalar field being deformed, or by the vector field moving points around, or both), and integrate the resulting $$n$$-form over a given set $$\Omega$$. I hope you realize now that there's no need to go down to the level of coordinates and look at the images of the set $$x[\Omega]$$ or $$x'[\Omega]$$ or whatever.
The general calculation of course takes both effects into account, so we focus on the third integral. The resulting total variation in the action is \begin{align} (\Delta S)(\phi; \Omega)&:=\frac{d}{d\epsilon}\bigg|_{\epsilon=0}\int_{\Omega}\Psi_{\epsilon}^*(\Lambda_{\Phi_{\epsilon}})\\ &=\int_{\Omega}\frac{d}{d\epsilon}\bigg|_{\epsilon=0}\Psi_{\epsilon}^*(\Lambda_{\Phi_{\epsilon}})\\ &=\int_{\Omega}\Delta (\Lambda_{\phi})\\ &=\int_{\Omega}\delta(\Lambda_{\phi})+L_X(\Lambda_{\phi}) \end{align} These are exactly the two terms which one arrives at. At this stage calculating $$\delta(\Lambda_{\phi})$$ is the usual Euler-Lagrange type calculation. The term $$L_X(\Lambda_{\phi})$$ can be massaged slightly if you invoke Cartan's magic formula that the Lie derivative on differential forms is given by $$L_X=d\iota_X+\iota_Xd$$, where $$\iota_X$$ denotes interior product and $$d$$ is exterior derivative. Now, since $$\Lambda_{\phi}$$ is an $$n$$-form on an $$n$$-manifold, its exterior derivative vanishes, so we just get $$L_X(\Lambda_{\phi})=d(\iota_X \Lambda_{\phi})\equiv d(X\,\,\lrcorner \,\,\Lambda_{\phi})$$, where $$\lrcorner$$ is just another symbol for the interior product. A straightforward coordinate calculation then shows that this term is exactly $$\frac{\partial (\mathcal{L}(x)X^{\mu})}{\partial x^{\mu}}\, dx^1\wedge \cdots \wedge dx^n$$; maybe you'll find this answer of mine helpful in carrying out the coordinate calculations with exterior derivatives and interior products.
4. Computations in Local Coordinates.
First, I'll write out $$\delta(\Lambda_{\phi})$$ in local coordinates $$(U,(x^1,\dots, x^n))$$. Fix a point $$p\in U$$, then we get: \begin{align} \frac{d}{d\epsilon}\bigg|_{\epsilon=0}\Lambda_{\Phi_{\epsilon}}(p)&=\frac{d}{d\epsilon}\bigg|_{\epsilon=0}\mathcal{L}\left(\Phi_{\epsilon}(p), \partial_{\mu}\Phi_{\epsilon}(p)\right)\, (dx^1\wedge \cdots \wedge dx^n)(p)\\ &=\left[\frac{\partial \mathcal{L}}{\partial \phi}\bigg|_{(\phi(p),\partial\phi(p))}\cdot \frac{d}{d\epsilon}\bigg|_{\epsilon=0}\Phi_{\epsilon}(p) + \frac{\partial \mathcal{L}}{\partial(\partial_{\mu}\phi)}\bigg|_{(\phi(p),\partial\phi(p))}\cdot \frac{d}{d\epsilon}\bigg|_{\epsilon=0}\partial_{\mu}\Phi_{\epsilon}(p)\right](dx^1\wedge\cdots \wedge dx^n)(p)\\ &=\left[\frac{\partial \mathcal{L}}{\partial \phi}\bigg|_{(\phi(p),\partial\phi(p))}\cdot\delta\phi(p)+\frac{\partial \mathcal{L}}{\partial(\partial_{\mu}\phi)}\bigg|_{(\phi(p),\partial\phi(p))}\cdot \partial_{\mu}(\delta\phi)(p)\right]\,(dx^1\wedge\cdots \wedge dx^n)(p). \end{align} The first line is just the chain rule, and the second line used the definition of $$\delta\phi$$ in the first term, and in the second term I used the fact that for smooth enough functions, derivatives commute which is why I could swap the $$\partial_{\mu}$$ with $$\frac{d}{d\epsilon}\bigg|_{\epsilon=0}$$.
Now, we can do the second term $$L_X(\Lambda_{\phi})$$. Here, I won't assume any familiarity with Cartan's formula or anything. We'll just work as much as possible from the definitions. One thing you should note however is that pullback of differential $$n$$-forms is different from pullback of functions, in the sense that we don't just do composition. We also have to deal with the $$dx^1\wedge \cdots\wedge dx^n$$ term; but this term transforms with the Jacobian determinant term, as you've probably seen in other courses. Let us write \begin{align} \Lambda_{\phi}=\mathcal{L}_{\phi}\,dx^1\wedge \cdots \wedge dx^n \end{align} so $$\mathcal{L}_{\phi}:U\to\Bbb{R}$$ is defined by plugging in the field: $$\mathcal{L}_{\phi}(p)=\mathcal{L}(\phi(p),\partial_{\mu}\phi(p))$$. Now, pullback behaves as follows: \begin{align} \Psi_{\epsilon}^*(\Lambda_{\phi})&=\Psi_{\epsilon}^*(\mathcal{L}_{\phi}\,dx^1\wedge \cdots \wedge dx^n)\\ &=(\Psi_{\epsilon}^*\mathcal{L}_{\phi})\cdot \Psi_{\epsilon}^*(dx^1\wedge \cdots \wedge dx^n)\\ &=(\mathcal{L}_{\phi}\circ \Psi_{\epsilon}) \cdot \det\left(\frac{\partial (x^i\circ\Psi_{\epsilon})}{\partial x^j}\right)\, dx^1\wedge \cdots \wedge dx^n \end{align} So, if we now want to differentiate with respect to $$\epsilon$$ at $$\epsilon=0$$, we apply the product rule (we have a product of functions of $$\epsilon$$). The first term's derivative is as I discussed in section 1, the definition of the Lie derivative $$L_X(\mathcal{L}_{\phi})$$, which simplifies to $$X^{\mu}\frac{\partial \mathcal{L}_{\phi}}{\partial x^{\mu}}$$. At $$\epsilon=0$$, the second term is the determinant of the identity matrix, so $$1$$.
Next, the derivative of the second term is \begin{align} \frac{d}{d\epsilon}\bigg|_{\epsilon=0}\det\left(\frac{\partial (x^i\circ\Psi_{\epsilon})}{\partial x^j}\right) &=\text{trace}\left(\frac{d}{d\epsilon}\bigg|_{\epsilon=0}\frac{\partial (x^i\circ \Psi_{\epsilon})}{\partial x^j}\right)=\text{trace}\left(\frac{\partial X^i}{\partial x^j}\right)=\frac{\partial X^{\mu}}{\partial x^{\mu}} \end{align} where in the middle, I again used commutativity of derivatives for smooth functions to swap $$\frac{d}{d\epsilon}$$ with $$\frac{\partial}{\partial x^{j}}$$. So, putting all of this together, we get \begin{align} L_X(\Lambda_{\phi})=\frac{d}{d\epsilon}\bigg|_{\epsilon=0}\Psi_{\epsilon}^*(\Lambda_{\phi})&=\left[X^{\mu}\frac{\partial \mathcal{L}_{\phi}}{\partial x^{\mu}}\cdot 1+ \mathcal{L}_{\phi}\cdot \frac{\partial X^{\mu}}{\partial x^{\mu}}\right]\,dx^1\wedge \cdots \wedge dx^n\\ &=\frac{\partial (\mathcal{L}_{\phi} X^{\mu})}{\partial x^{\mu}}\,dx^1\wedge \cdots \wedge dx^n. \end{align}
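The determinant-derivative step above (the $$\epsilon$$-derivative of the Jacobian determinant of the flow equals the divergence $$\partial X^{\mu}/\partial x^{\mu}$$) can be checked numerically for a linear vector field whose flow is known in closed form; the field below is just an illustrative choice.

```python
import math

# Flow of the linear vector field X = x d/dx + 2y d/dy on R^2:
#   Psi_eps(x, y) = (x e^eps, y e^{2 eps}),
# whose Jacobian is diag(e^eps, e^{2 eps}).
def jac_det(eps):
    return math.exp(eps) * math.exp(2*eps)   # = e^{3 eps}

h = 1e-6
d_det = (jac_det(h) - jac_det(-h)) / (2*h)   # derivative at eps = 0

div_X = 1 + 2    # dX^1/dx + dX^2/dy for X = (x, 2y)
assert abs(d_det - div_X) < 1e-6
```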
• wow, now this is a detailed answer! Thank you very much! I think I got it, but I'll have to dust off my differential geometry books to be sure, and then I'll come back here. Hopefully I'll have the time for it this weekend. Once again, thank you for your effort! Feb 5 at 9:11
https://www.doubtnut.com/qa-hindi/334963853
If Delta_(1)=|{:(x, sin theta...
Updated On: 27-06-2022
Written Answer
A. $\Delta_1 - \Delta_2 = x(\cos 2\theta - \cos 4\theta)$
B. $\Delta_1 + \Delta_2 = -2x^3$
C. $\Delta_1 + \Delta_2 = -2(x^3 + x - 1)$
D. $\Delta_1 - \Delta_2 = -2x^3$
https://math.stackexchange.com/questions/2813535/second-principal-curvature-of-a-surface-of-revolution-at-z-0
|
# Second principal curvature of a surface of revolution at z=0
For a surface of revolution, generated by rotating a curve $z(x)$ around the $x$-axis, the principal radii of curvature $R_1$ and $R_2$ are given by:
\begin{align} R_1 = -\frac{ds}{d\theta} = \frac{(1 + \dot{z}^2)^{3/2}}{\ddot{z}} \\ \\ R_2 = \frac{z}{\cos \theta} = z \sqrt{1 + \dot{z}^2} \end{align}
I understand how these were derived for this diagram, presented in this related question, but the equations fall apart at $z = 0$. For the diagram imagine the case where $b = 0$.
At $z = 0$ the second, out of plane radius of curvature, $R_2$ (denoted by $AN$, where $N$ must always must lie on the axis of rotation) becomes undefined. For a body such as this, or a cone, where the juncture at $z = 0$ is sharp, I can accept that there is a singularity and taking limits of $R_2$ will yield a believable answer of zero.
What if, however, the curve is perpendicular to the axis of rotation at $z = 0$, as it would be for a sphere or spheroid? We know that at this point $R_1 = R_2$, yet $R_2$ will clearly still be undefined and taking limits will yield zero, which is incorrect and not equal to $R_1$.
My question is, how can I explain this and calculate $R_2$ for an arbitrary body of revolution at $z = 0$?
(Sorry I wasn't able to include the diagram properly, missing rep.)
I figured this out a while ago and thought I'd share my solution in case someone stumbles across this in future..
For the case in which we have a blunt leading edge,
Let $$\kappa_1 = \frac{1}{R_1} \qquad \mathrm{and} \qquad \kappa_2 = \frac{1}{R_2}.$$
$\therefore$ $$z = \frac{\cos\theta}{\kappa_2} \qquad[1]$$
$$\theta \rightarrow \frac{\pi}{2}, \quad \mathrm{so} \quad \frac{\pi}{2} - \theta \rightarrow \epsilon \,\mathrm{(small)}$$
$\therefore$ $$\cos\theta \rightarrow \sin \epsilon \rightarrow \epsilon \qquad [2]$$
Express $\epsilon$ as an expansion in $s$: $$\epsilon=\frac{\partial \epsilon}{\partial s}s + \frac{1}{2!}\frac{\partial^2 \epsilon}{\partial s^2}s^2 + \frac{1}{3!}\frac{\partial^3 \epsilon}{\partial s^3}s^3 + ...$$ As $s \rightarrow 0$ we can write, $$\epsilon \approx \frac{\partial \epsilon}{\partial s}s \qquad [3] \qquad \mathrm{and} \qquad z \rightarrow s \qquad [4]$$ And $$\frac{\partial \epsilon}{\partial s} = -\frac{\partial \theta}{\partial s} = \kappa_1 \qquad [5]$$
Finally, substituting [2], [3], [4] and [5] into [1], $$s \approx \frac{\kappa_1 s}{\kappa_2}$$ $\therefore$ $$\kappa_1 \approx \kappa_2$$ $$Q.E.D.$$
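As a concrete sanity check of this conclusion, for a sphere $z(x)=\sqrt{a^2-x^2}$ the divergence of $\dot{z}$ exactly cancels the vanishing of $z$, so $R_2 = z\sqrt{1+\dot{z}^2}$ stays equal to the radius $a$ all the way down to $z = 0$. A short numerical sketch (the radius is chosen arbitrarily):

```python
import math

a = 2.0   # sphere radius; generating curve z(x) = sqrt(a^2 - x^2)

def R2(x):
    z = math.sqrt(a*a - x*x)
    z_dot = -x / z                  # dz/dx, blows up as x -> a
    return z * math.sqrt(1 + z_dot**2)

# R2 stays equal to the radius even as z -> 0 (i.e. x -> a):
for x in (0.0, 1.0, 1.9, 1.999999):
    assert abs(R2(x) - a) < 1e-6
```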
https://zbmath.org/?q=an%3A1152.34311
|
# zbMATH — the first resource for mathematics
Modified homotopy perturbation method: Application to quadratic Riccati differential equation of fractional order. (English) Zbl 1152.34311
Summary: In this paper, a modification of He’s homotopy perturbation method is presented. The new modification extends the application of the method to solve nonlinear differential equations of fractional order. In this method, which does not require a small parameter in an equation, a homotopy with an imbedding parameter $$p \in [0, 1]$$ is constructed. The proposed algorithm is applied to the quadratic Riccati differential equation of fractional order. The results reveal that the method is very effective and convenient for solving nonlinear differential equations of fractional order.
##### MSC:
34A45 Theoretical approximation of solutions to ordinary differential equations
https://www.frontporchmath.com/topics/arithmetic/multiplication/solve-multi-digit-multiplication-problems-using-box-method-video/
|
# How to Solve Multi-digit Multiplication Problems Using the Box Method
## Larger Multiplication Problems
You can solve multi-digit multiplication problems several different ways. In this video we will look at the “Box Method” (also referred to as a multiplication array or a variation of the distributive property).
A slightly more advanced example is $47 \times 53 = (50-3)(50+3) = 50^2 - 3^2 = 2491$, which multiplies a difference of squares.
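The decomposition behind the box method can be sketched in a few lines of Python; the splits below mirror the $47 \times 53$ example:

```python
# Box method: split each factor by place value, multiply every pair, and sum.
a_parts = (40, 7)        # 47 = 40 + 7
b_parts = (50, 3)        # 53 = 50 + 3
partials = [a * b for a in a_parts for b in b_parts]   # 2000, 120, 350, 21
assert sum(partials) == 47 * 53 == 2491

# The difference-of-squares shortcut from the text: (50-3)(50+3) = 50^2 - 3^2
assert 50**2 - 3**2 == 2491
```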
https://en.wikipedia.org/wiki/Monetary_policy_reaction_function
|
# Monetary policy reaction function
The monetary policy reaction function (MPRF) is the upward-sloping relationship between the inflation rate and the unemployment rate. When the inflation rate rises, a central bank wishing to fight inflation will raise interest rates to reduce output and thus increase the unemployment rate.
The MPRF is explained by the Taylor rule, the LM curve, and Okun's law.[citation needed]
The MPRF has the equation:
${\displaystyle u=u_{0}+\Phi (\pi -\pi _{t})}$
where ${\displaystyle \Phi }$ is a parameter that tells us how much unemployment rises when the central bank raises the real interest rate ${\displaystyle r}$ because it thinks that inflation is too high and needs to be reduced.
The slope of the MPRF is ${\displaystyle {\frac {1}{\Phi }}}$.
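The equation above can be sketched as a one-line function; the parameter values below are purely hypothetical illustrations, not estimates from the article.

```python
def mprf_unemployment(u0, phi, pi, pi_target):
    """u = u0 + Phi * (pi - pi_t): unemployment rate implied by the MPRF."""
    return u0 + phi * (pi - pi_target)

# Hypothetical numbers: 5% baseline unemployment, Phi = 0.5, 2% inflation target.
assert mprf_unemployment(5.0, 0.5, 4.0, 2.0) == 6.0   # inflation above target -> higher u
assert mprf_unemployment(5.0, 0.5, 2.0, 2.0) == 5.0   # at target, u = u0
```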
The MPRF is used hand in hand with the Phillips Curve to determine the effects of economic policy. This framework illustrates equilibrium levels of the unemployment rate and the inflation rate in a sticky-price model.
## Alternative
Alternatively, in Ben Bernanke and Robert H. Frank's Principles of Economics textbook, the MPRF is a model of the Fed's interest rate behavior. In its most simple form, the MPRF is an upward-sloping relationship between the real interest rate and the inflation rate. The following is an example of an MPRF from the third edition of the textbook[full citation needed]:
r = r* + g(π - π*)
where:
- r = target real interest rate (or actual real interest rate)
- r* = long-run target for the real interest rate
- g = constant term (or the slope of the MPRF)
- π = actual inflation rate
- π* = long-run target for the inflation rate
Of course, the MPRF above is just one example, and there are other examples (such as the Taylor rule) that are more complex.
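Both forms are linear and easy to evaluate; a small Python sketch (all parameter values are invented for illustration):

```python
def mprf_unemployment(u0, phi, pi, pi_target):
    """Unemployment implied by the MPRF: u = u0 + phi * (pi - pi_target)."""
    return u0 + phi * (pi - pi_target)

def mprf_interest_rate(r_star, g, pi, pi_star):
    """Bernanke-Frank form: r = r* + g * (pi - pi*)."""
    return r_star + g * (pi - pi_star)

# Illustrative numbers: inflation is 2 points above a 2% target.
u = mprf_unemployment(u0=5.0, phi=0.5, pi=4.0, pi_target=2.0)
r = mprf_interest_rate(r_star=2.0, g=0.5, pi=4.0, pi_star=2.0)
print(u)  # 6.0 : unemployment rises by phi * 2
print(r)  # 3.0 : the real rate rises by g * 2
```

With both curves parameterized, one can trace how a given inflation shock maps into a rate response and an unemployment outcome.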
|
2017-05-28 03:44:56
|
|
https://www.campusgate.co.in/2011/10/alligation-rule.html
|
# Alligation Rule and Mixtures and Replacements Formulas
Alligation Rule
The alligation rule helps us find in what ratio two mixtures with different concentrations are to be mixed to get a target concentration.
For example, suppose we have two mixtures with alcohol concentrations of 40% and 85%, and we need to find in what ratio they must be mixed to get a 50% concentration. To answer this we have to learn about the weighted average.
Alligation Rule Derivation:
Let $m$ quantities with an average $x$ and $n$ quantities with an average $y$ be mixed together. To get the final average $A$, the following weighted average formula is used.
$$\displaystyle\frac{{{m} \times {x} + {n} \times {y}}}{{{m} + {n}}} = {A}$$
Let us re-arrange the terms above,
$\Rightarrow m \times x + n \times y = A(m + n)$
$\Rightarrow m \times x + n \times y = A \times m + A \times n$
$$\Rightarrow n \times y - A \times n = A \times m - m \times x$$
$$\Rightarrow n\left( {y - A} \right) = m\left( {A - x} \right)$$
$$\Rightarrow \dfrac{m}{n} = \dfrac{{y - A}}{{A - x}}$$
In the usual alligation diagram, the initial averages or concentrations $x$ and $y$ are written at the top, the target average or concentration $A$ in the middle, and the weights $m,\,n$ to be taken are read off at the bottom as the cross-differences $y - A$ and $A - x$.
Note: Alligation rule gives us only the ratio in which the initial mixtures are to be mixed to get desired concentration but never gives the actual quantities.
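The result $\dfrac{m}{n} = \dfrac{y - A}{A - x}$ translates directly into code; a small Python sketch (the function name is mine):

```python
from math import gcd

def alligation_ratio(x, y, A):
    """Ratio m:n in which mixtures of concentration x and y must be
    mixed to reach the target concentration A (requires x < A < y)."""
    m, n = y - A, A - x          # m/n = (y - A)/(A - x)
    if isinstance(m, int) and isinstance(n, int):
        g = gcd(m, n)            # reduce integer ratios to lowest terms
        m, n = m // g, n // g
    return m, n

# 40% and 85% alcohol mixed to reach 50%:
print(alligation_ratio(40, 85, 50))  # (7, 2)
```

So the 40% mixture and the 85% mixture must be combined in the ratio 7 : 2, consistent with the cross-difference diagram.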
Mixtures and Replacements
Problems on mixtures are based on two important concepts: the alligation rule and the inverse proportionality rule.
In these problems we are asked to find the resultant concentration after mixing two or three components, or the final concentration when one component of the mixture is replaced by another, which is usually one of the components of the mixture.
Replacement formula:
The general formula for replacements is as follows: $FC = IC \times {\left( {1 - \dfrac{x}{V}} \right)^n}$
Here,
FC = Final concentration
IC = Initial concentration
x = replacement quantity
V = Final volume after replacement
n = number of replacements
Note: Always remember FC and IC are the concentrations of the second component in the mixture, and "x" is the quantity of the first component added at each replacement.
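A direct Python translation of the formula (the function name and the milk-and-water numbers are mine):

```python
def final_concentration(ic, x, V, n):
    """FC = IC * (1 - x/V)**n : concentration of the original component
    after n equal replacements of x units out of a total volume V."""
    return ic * (1 - x / V) ** n

# A 40-litre vessel of pure milk; 8 litres are replaced by water, twice.
# Milk fraction left: (1 - 8/40)^2, i.e. about 0.64, or 25.6 litres.
print(final_concentration(1.0, 8, 40, 2))  # ≈ 0.64
```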
|
2019-10-24 00:08:04
|
|
https://gamedev.stackexchange.com/questions/110441/fixedjoint-error-in-unity
|
# FixedJoint error in Unity?
I get the following error every time I run the code:
error CS0120: An object reference is required to access non-static member UnityEngine.Joint.breakForce'
Joint.breakForce = Mathf.Infinity;
How can I properly write this? I want to set the break force to Mathf.Infinity to render the joint unbreakable.
Looks like you just need to reference the object:
private FixedJoint joint;
void Start()
{
joint = GetComponent<FixedJoint>(); // assuming the joint and this script are attached to the same GameObject
joint.breakForce = Mathf.Infinity;
}
Here Joint is a class. If you have an object called Joint, rename it to lowercase joint, since that is the proper naming style in C#. Your code should then become:
joint.breakForce = Mathf.Infinity;
• And how do I properly write this? FixedJoint.breakForce = Mathf.Infinity; – That's me. Oct 28 '15 at 13:43
• Because I get error CS0103: The name joint' does not exist in the current context – That's me. Oct 28 '15 at 13:43
• Because there should be an object called 'joint', which you want to make unbreakable, but here you're trying to set the break force to a class called 'Joint' which doesn't make sense. Adding more context code into your question may help resolving the issue. – Maks Maisak Oct 28 '15 at 13:47
• My other question and what I want to achieve: stackoverflow.com/questions/33389802/… – That's me. Oct 28 '15 at 13:53
• Also, read the comments in the other question. – That's me. Oct 28 '15 at 13:54
|
2020-03-29 06:55:53
|
|
https://gamedev.stackexchange.com/questions/94036/xna-how-to-use-one-rectangle-in-sprites-for-collisions
|
# XNA - How to use One Rectangle in sprites for collisions
I'm new to XNA and game development in general. If I want to make an object solid so my character cannot pass through it, I would use several Rectangles to make the collision more "realistic"
What I do is use the Intersects() method with both.
These are the two rectangles that will be solid (highlighted in red):
And I wonder if there is a way to use one rectangle to achieve something like this:
Unfortunately, there is not. By definition, a rectangle has four vertices, and the new figure has six.
This is a limitation of the library; however, you could most likely create a wrapper around a few rectangles that has a similar API to the XNA Rectangle class. This, I think, is the only way to accomplish something like this, excepting @SteveH's solution.
• Ball (circle) or Round corners collision use other similar methods? are they built in XNA? For example, if I have a Race game, and a track where you can collide with the round corners – JuanBonnett Feb 14 '15 at 1:51
• There's a Sphere class for 3D, however natively XNA does not support circles. There is a library called Monogame that aims to be the open source, multiplatform version of XNA that is contemplating adding a circle class, however. That can be seen here: github.com/mono/MonoGame/pull/3419 – Pip Feb 14 '15 at 1:56
The closest you can get is something like this:
if (not in red rectangle)
    return; // no collision
else if (not in blue rectangle)
    run collision code // inside the L-shape
Here's another trick that can be helpful:
If you plan to set up a large group of these, you can first test a rectangle that encompasses the component rectangles, then test the interior rectangles individually.
For P[1] we check whether it intersects R[ab]. Since it doesn't we don't have to check R[a] or R[b].
P[2] intersects R[ab], so we have to check R[a] and R[b].
P[3] intersects R[ab], so we check R[a] and R[b] and find R[b] intersects it.
If R contained a lot of rectangles, we could save a lot of time by checking the union first.
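The same broad-phase/narrow-phase idea, sketched in Python rather than C# (rectangles as (x, y, w, h) tuples; all helper names are mine):

```python
def intersects(a, b):
    """Axis-aligned rectangle overlap test."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def union(a, b):
    """Smallest rectangle containing both a and b."""
    x, y = min(a[0], b[0]), min(a[1], b[1])
    right = max(a[0] + a[2], b[0] + b[2])
    bottom = max(a[1] + a[3], b[1] + b[3])
    return (x, y, right - x, bottom - y)

def collides(player, parts):
    """Broad phase: test the union rectangle first; only if it is hit,
    test each component rectangle (narrow phase)."""
    bounds = parts[0]
    for p in parts[1:]:
        bounds = union(bounds, p)
    if not intersects(player, bounds):     # cheap rejection
        return False
    return any(intersects(player, p) for p in parts)

Ra, Rb = (0, 0, 4, 2), (0, 2, 2, 4)        # two rectangles forming an L
print(collides((5, 5, 1, 1), [Ra, Rb]))    # False: misses the union
print(collides((1, 3, 1, 1), [Ra, Rb]))    # True: hits Rb
```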
• Yes, I'm now making a new class called "SolidSprite" public SolidSprite(SpriteBatch spriteBatch, Rectangle[] collisionRectangles) to implement this idea – JuanBonnett Feb 14 '15 at 4:02
|
2021-01-20 13:18:37
|
|
https://dzlab.github.io/dl/2019/03/16/pytorch-training-loop/
|
PyTorch training loop and callbacks
A basic training loop in PyTorch for any deep learning model consists of:
• looping over the dataset many times (aka epochs),
• in each one, loading a mini-batch from the dataset (with possible application of a set of transformations for data augmentation)
• zeroing the grads in the optimizer
• performing a forward pass on the given mini-batch of data
• calculating the losses between the result of the forward pass and the actual targets
• using these losses, performing a backward pass to update the weights of the model
The 5 steps of a gradient descent optimization algorithm
In 5 lines this training loop in PyTorch looks like this:
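The five steps above can be sketched as follows (the model, loss function and data here are placeholder choices, not the post's originals):

```python
import torch
from torch import nn

model = nn.Linear(10, 1)                  # placeholder model
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
data = [(torch.randn(4, 10), torch.randn(4, 1)) for _ in range(8)]

for epoch in range(2):                    # loop over the dataset (epochs)
    for xb, yb in data:                   # one mini-batch at a time
        optimizer.zero_grad()             # zero the grads
        pred = model(xb)                  # forward pass
        loss = loss_fn(pred, yb)          # loss vs. the actual targets
        loss.backward()                   # backward pass
        optimizer.step()                  # update the weights
```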
Note that if we don't zero the gradients, then in the next iteration, when we do a backward pass, they will be added to the current gradients. This is because PyTorch may use multiple sources to calculate the gradients, and the way it combines them is through a sum.
For some cases, one may want to do more to control the training loop. For instance, try different:
• regularization techniques
• hyperparameter schedules
• mixed precision training
• tracking metrics
For each case, you end up rewriting the basic loop and adding logic to accommodate these requirements. One way to enable endless possibilities for customizing the training loop is to use callbacks. A callback is a very common design pattern in many programming languages, with the basic idea of registering a handler that will be invoked on a specific condition. A typical case is a handler for specific errors that may be triggered when calling a remote service.
For use in a training loop, the events we may want handlers for include when the training begins or ends, when an epoch begins or ends, etc. Those handlers can return useful information or flags that skip steps or stop the training.
The Callback interface may look like this:
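A minimal sketch of such an interface in Python (the event names are illustrative, not the post's originals):

```python
class Callback:
    """Base class: subclasses override only the events they care about."""
    def on_train_begin(self): pass
    def on_train_end(self): pass
    def on_epoch_begin(self, epoch): pass
    def on_epoch_end(self, epoch): pass
    def on_batch_begin(self, batch): pass
    def on_batch_end(self, batch): pass
    def on_loss_end(self, loss): pass

class PrintingCallback(Callback):
    """Example subclass: log the end of every epoch."""
    def on_epoch_end(self, epoch):
        print(f"finished epoch {epoch}")
```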
Now, after adding callbacks to each life-cycle event of the training loop, the earlier training loop becomes:
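A framework-agnostic sketch of that instrumented loop, with the original five-step body abstracted into a train_step argument (all names are mine):

```python
def fit(epochs, batches, train_step, callbacks=()):
    """Run training, firing callback hooks around each life-cycle event.
    Any callback returning True from on_epoch_end stops training early."""
    for cb in callbacks: cb.on_train_begin()
    for epoch in range(epochs):
        for cb in callbacks: cb.on_epoch_begin(epoch)
        for i, batch in enumerate(batches):
            for cb in callbacks: cb.on_batch_begin(i)
            loss = train_step(batch)       # the original 5-step body
            for cb in callbacks: cb.on_loss_end(loss)
            for cb in callbacks: cb.on_batch_end(i)
        if any([cb.on_epoch_end(epoch) for cb in callbacks]):
            break                          # a callback asked to stop
    for cb in callbacks: cb.on_train_end()
```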
A basic use of callbacks is to log losses and metrics (e.g. accuracy) on the training/validation datasets after each epoch. More advanced uses actively act on the training by tweaking hyperparameters of the training loop (e.g. learning rates). Furthermore, each tweak can be written as its own callback. For instance:
learning rate scheduler
Over the course of training, adjusting the learning rate is a practical way to speed up convergence of the weights to their optimal values, thus requiring fewer epochs (which has the benefit of avoiding overfitting). There are different ways to schedule learning rate adjustments: time-based decay, step decay and exponential decay. All of them can be implemented with a callback, for instance before each mini-batch:
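For instance, time-based decay as a callback; it only assumes the optimizer exposes a param_groups list, as PyTorch optimizers do (class names and the decay constant are mine):

```python
class TimeBasedDecay:
    """Sets lr to lr0 / (1 + decay * step) before each mini-batch."""
    def __init__(self, optimizer, lr0, decay):
        self.optimizer, self.lr0, self.decay = optimizer, lr0, decay
        self.step = 0
    def on_batch_begin(self, batch):
        self.step += 1
        lr = self.lr0 / (1 + self.decay * self.step)
        for group in self.optimizer.param_groups:
            group["lr"] = lr

# Works with any optimizer-shaped object:
class FakeOpt:
    def __init__(self):
        self.param_groups = [{"lr": 0.1}]

opt = FakeOpt()
sched = TimeBasedDecay(opt, lr0=0.1, decay=0.5)
sched.on_batch_begin(0)
print(opt.param_groups[0]["lr"])   # 0.1 / 1.5 ≈ 0.0667
```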
early stopping
Early stopping aims to let the model train as long as a target metric is improving (e.g. accuracy on the validation set) and stop otherwise, in order to avoid overfitting on the training dataset. Using a callback, we can decide whether to continue training after each epoch as follows:
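A sketch of such a callback; here on_epoch_end is fed the validation metric directly and returns True when training should stop (names are mine):

```python
class EarlyStopping:
    """Stop training when the monitored metric has not improved
    for `patience` consecutive epochs."""
    def __init__(self, patience=3):
        self.patience, self.best, self.bad_epochs = patience, None, 0
    def on_epoch_end(self, metric):
        if self.best is None or metric > self.best:
            self.best, self.bad_epochs = metric, 0   # new best: reset
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience      # True => stop

stopper = EarlyStopping(patience=2)
for acc in [0.70, 0.75, 0.74, 0.73]:   # validation accuracy per epoch
    if stopper.on_epoch_end(acc):
        print("stopping early")        # fires after two non-improving epochs
        break
```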
parallel training
Use PyTorch's support for multi-GPUs; see the example.
Gradient clipping allows the use of a large learning rate ( $$lr=1$$ ), see discussion. It can be done by safely modifying Variable.grad.data in place after the backward pass has finished, see example.
The basic idea behind accumulating gradients is to sum (or average) the gradients of several consecutive backward passes (if they were not reset with model.zero_grad() or optimizer.zero_grad()). This can be straightforwardly implemented in a handler for the loss-calculated event:
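A sketch of gradient accumulation folded into the basic loop (model, data and the accumulation length are placeholder choices, not the post's originals):

```python
import torch
from torch import nn

model = nn.Linear(10, 1)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
data = [(torch.randn(4, 10), torch.randn(4, 1)) for _ in range(8)]
accum_steps = 4                       # effective batch = 4 mini-batches

optimizer.zero_grad()
for i, (xb, yb) in enumerate(data):
    loss = loss_fn(model(xb), yb) / accum_steps  # average over the group
    loss.backward()                   # grads sum across backward calls
    if (i + 1) % accum_steps == 0:
        optimizer.step()              # update once per group
        optimizer.zero_grad()         # then reset the accumulated grads
```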
Conclusion
Callbacks are a very handy way to experiment with techniques for training larger models (with 100 million parameters), larger batch sizes and bigger learning rates, but also for fighting overfitting and making the model generalizable. A well-designed callback system is crucial and has many benefits:
• keep training loop as simple as possible
• keep each tweak independent
• easily mix and match, or perform ablation studies
|
2022-09-29 21:56:48
|
|
http://www.dcs.gla.ac.uk/~wpc/gmm3/web/javadoc/com/c3d/image/package-summary.html
|
## Package com.c3d.image
### Interface Summary
- ImageComparator: Abstract method of comparing images.

### Class Summary
- ByteImage
- ColourTwist: This operator converts an image in tristimulus format by multiplying by a conversion matrix.
- CubePyramid
- DeBayer
- EuclideanMetric: Abstract method of comparing images.
- FloatImage: FloatImage, © Turing Institute, 1998, W P Cockshott.
- FractalEncoding: a FractalEncoding of an image is a representation that can be used to generate scaled-up versions of the original.
- FractalEncoding8: a FractalEncoding8 of an image is a representation that can be used to generate scaled-up versions of the original.
- FractalIndex: provides a fast means of indexing blocks of 4 pixels (2×2 blocks), so as to determine which block of a given larger size is most similar to a particular block of a smaller size.
- GammaCorrect: This operator takes an image in RGB and a saturation value b and multiplies the saturation of the image by b.
- ImageTransfer: designed to provide a means of passing one plane of data to the Intel image processing library for treatment with the MMX hardware.
- IntelBImage: Class that implements short integer images using the MMX processor to speed things up.
- IntelFImage: Float images whose arithmetic is done in the Intel IPL.DLL library.
- IntelImage: Class that implements short integer images using the MMX processor to speed things up.
- Jimage: The class Jimage (© Turing Institute, 1998) is an abstract class to support image processing operations.
- JpegEncoder
- LinearColour
- RelativeNorm: Abstract method of comparing images.
- Resaturate: This operator takes an image in RGB and a saturation value b and maps to the set of basis vectors $$\pmatrix{X\cr Y\cr Z}=\pmatrix{1\over \sqrt 3&1\over \sqrt 3&1\over \sqrt 3\cr 0&-1\over \sqrt 2&1\over \sqrt 2\cr -1\over \sqrt 2&0.5\over \sqrt 2&0.5\over \sqrt 2}\pmatrix{R\cr G\cr B}$$ Matrix chosen to have unit length vectors.
- RGB2XYZ: This operator converts an image in RGB format to an XYZ image by multiplying by a conversion matrix as follows: $$\pmatrix{X\cr Y\cr Z}=\pmatrix{0.412&0.357&0.18\cr 0.212&0.715&0.072\cr 0.019&0.119&0.95}\pmatrix{R\cr G\cr B}$$
- ShortImage
- Uimage: The class Uimage (© Turing Institute, 1998) is an abstract class to support operations on 2-dimensional arrays of objects.
- XYZ2RGB: This operator converts an image in XYZ format to an RGB image by multiplying by a conversion matrix as follows: $$\pmatrix{R\cr G\cr B}=\pmatrix{3.24&-1.537&-0.498\cr -0.969&1.875&0.041\cr 0.055&-0.204&1.057}\pmatrix{X\cr Y\cr Z}$$
- YUV422Sink
- YUV422Source
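The matrices quoted for RGB2XYZ and XYZ2RGB should be (approximately) mutual inverses; a quick Python check of the round trip, using only the values given above:

```python
RGB2XYZ = [[0.412, 0.357, 0.180],
           [0.212, 0.715, 0.072],
           [0.019, 0.119, 0.950]]
XYZ2RGB = [[ 3.240, -1.537, -0.498],
           [-0.969,  1.875,  0.041],
           [ 0.055, -0.204,  1.057]]

def apply(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

rgb = [0.5, 0.3, 0.2]
xyz = apply(RGB2XYZ, rgb)
round_trip = apply(XYZ2RGB, xyz)   # ≈ the original rgb, up to rounding
```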
|
2017-10-19 21:50:59
|
|
https://math.stackexchange.com/questions/3346379/if-px-is-a-polynomial-then-lim-k-to-infty-fracpk1pk-1
|
# If $p(x)$ is a polynomial then $\lim_{k \to \infty}\frac{p(k+1)}{p(k)}=1$
Let $$p(x)$$ be a polynomial of degree $$n$$, i.e $$p(x)=a_nx^n+a_{n-1}x^{n-1}+...a_0, a_n\neq0$$
For every $$\varepsilon >0$$, we have to find a stage such that after that stage it should be like $${\left|\frac{p(k+1)}{p(k)}-1\right|<\varepsilon},$$
i.e. $${\left|\frac{p(k+1)-p(k)}{p(k)}\right|=\frac{|a_n((k+1)^n-k^n))+...a_2((k+1)^2-k^2)+a_1|}{|a_nk^n+a_{n-1}k^{n-1}+...a_0|}}$$
I don't know how to proceed further.
• The limit is true for any nonzero polynomial. Divide the numerator and denominator by $a_nx^n$... – Jean-Claude Arbaut Sep 6 at 15:09
• The term $(k+1)^n-k^n$ does not contain any term of order $n$, so... – Mostafa Ayaz Sep 6 at 15:09
• Isn't it obvious if you divide $\frac{p(k+1)}{p(k)}$ by $k^n$? – Rishi Sep 6 at 15:39
• Do you wish a numerical approximation in terms of $a_i$ and $\epsilon$ for the first $K$ such that for all $k \geq K$ this holds true? – clark Sep 6 at 16:32
$$\lim_{k\to\infty}\frac{p(k+1)}{p(k)} =\lim_{k\to\infty}\frac{a_n(k+1)^n+a_{n-1}(k+1)^{n-1}+\cdots+a_0}{a_nk^n+a_{n-1}k^{n-1}+\cdots+a_0}$$
$$=\lim_{k\to\infty}\frac{a_n(1+{1 \over k})^n+{a_{n-1} \over k}(1+ {1 \over k})^{n-1}+\cdots+{a_0 \over k^n}}{a_n+{a_{n-1} \over k}+\cdots+{a_0 \over k^n}}$$ $$=\frac{a_n}{a_n}=1$$
Note that for $$x \neq 0$$, $${p(x) \over x^n} = a_n + a_{n-1} {1 \over x} + \cdots$$.
Hence $$\lim_{x \to \infty} {p(x) \over x^n} = a_n$$.
Hence $$\lim_{x \to \infty} {p(x+1) \over p(x)} = \lim_{x \to \infty}{{p(x+1) \over (x+1)^n} \over {p(x) \over x^n} } = {a_n \over a_n} = 1$$.
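The limit is easy to check numerically; a quick Python sketch with an arbitrary example polynomial:

```python
def p(x):
    # an arbitrary example polynomial: 3x^4 - 7x^2 + 5
    return 3 * x**4 - 7 * x**2 + 5

for k in (10, 100, 1000):
    print(k, p(k + 1) / p(k))
# the ratio approaches 1 as k grows, roughly like (1 + 1/k)^4
```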
|
2019-10-15 17:31:54
|
|
https://www.aminer.cn/pub/5f993e8b91e011a3fbe2fb57/toward-better-generalization-bounds-with-locally-elastic-stability
|
Toward Better Generalization Bounds with Locally Elastic Stability
Zhun Deng
Abstract:
Classical approaches in learning theory are often seen to yield very loose generalization bounds for deep neural networks. Using the example of "stability and generalization" \citep{bousquet2002stability}, however, we demonstrate that generalization bounds can be significantly improved by taking into account refined characteristics of m...
|
2021-04-18 20:34:57
|
|
https://www.impan.pl/pl/wydawnictwa/czasopisma-i-serie-wydawnicze/studia-mathematica/all/139/2/111751/on-absolutely-representing-systems-in-spaces-of-infinitely-differentiable-functions
|
## On absolutely representing systems in spaces of infinitely differentiable functions
### Volume 139 / 2000
Studia Mathematica 139 (2000), 175-188 DOI: 10.4064/sm-139-2-175-188
#### Abstract
The main part of the paper is devoted to the problem of the existence of absolutely representing systems of exponentials with imaginary exponents in the spaces $C^∞(G)$ and $C^∞(K)$ of infinitely differentiable functions, where $G$ is an arbitrary domain in $ℝ^p$, $p≥1$, while $K$ is a compact set in $ℝ^p$ with non-void interior $\dot K$ such that $\overline{\dot K} = K$. Moreover, absolutely representing systems of exponentials in the space $H(G)$ of functions analytic in an arbitrary domain $G ⊆ ℂ^p$ are also investigated.
#### Authors
• Yu. F. Korobeĭnik
|
2021-05-18 01:50:38
|
|
https://fridaymath.com/dynamicExercises/grade9/numberSense/numSense6.html
|
# Sample quiz on ratios and proportions
1. What is a ratio?
a special number that has rational factors
a comparison of two quantities with the same units
addition of two quantities with the same units
addition of two quantities with different units
2. An equation with a ratio on each side is called $\cdots$?
proportion
relation
product
quotient
3. Is it true that a ratio can also be expressed as a fraction?
No; they are two different things
Yes; it is always possible
Yes, but only for even numbers
Not sure to be honest
4. Express the ratio $12:15$ in its simplest form.
$4:15$
$12:5$
$5:4$
$4:5$
5. A class consists of $10$ boys and $14$ girls. What is the ratio of girls to boys?
$14:5$
$5:7$
$7:5$
$14:24$
6. If $2:5=x:15$, then the value of $x$ is $\cdots$?
$6$
$5$
$3$
$2$
7. A bag contains $6$ red balls and $9$ white balls. The ratio of white balls to the total number of balls is $\cdots$?
$6:9$
$9:6$
$6:15$
$3:5$.
8. Express the ratio $14:56$ as a fraction in its lowest terms.
$\frac{1}{4}$
$\frac{4}{1}$
$\frac{1}{3}$
$\frac{1}{5}$
9. In the proportion $x:13=10:52$, the value of $x$ is $\cdots$?
$40$
$20$
$2.5$
$25$
10. In the proportion $12:15=3:y$, the value of $y$ is $\cdots$?
$60$
$48$
$37.5$
$3.75$
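Questions 6, 9 and 10 are all instances of finding the missing term of a proportion by cross-multiplication ($a:b = c:d$ exactly when $ad = bc$); a small Python helper (the function is mine, not part of the quiz):

```python
def solve(a, b, c, d):
    """Return the one unknown (given as None) in the proportion a:b = c:d,
    using cross-multiplication a*d = b*c."""
    if a is None: return b * c / d
    if b is None: return a * d / c
    if c is None: return a * d / b
    return b * c / a

print(solve(2, 5, None, 15))   # question 6:  x = 6.0
print(solve(None, 13, 10, 52)) # question 9:  x = 2.5
print(solve(12, 15, 3, None))  # question 10: y = 3.75
```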
|
2021-02-26 10:19:02
|
|
https://psyteachr.github.io/glossary/r
|
# 19 R
## 19.1 R markdown
The R-specific version of markdown: a way to specify formatting, such as headers, paragraphs, lists, bolding, and links, as well as code blocks and inline code.
An R markdown file starts with a YAML header that usually contains the title, author, and output type.
---
title: "Analysis Plan Template"
author: "School of Psychology, University of Glasgow"
output: html_document
---
The rest of the file is a mix of markdown and code chunks. Here is an example of two section titles. The first section has an r chunk for loading the packages and the second section has a list of steps.
## Packages used
{r setup, include = FALSE}
# every time you add a new package, include it in this section
library(tidyverse)
## Data Processing
3. Wrangle data into appropriate format for analysis and checks. You might want to reshape the data or combine different values to make a new variable.
## 19.2 R-squared
A statistic that represents the proportion of the variance for a dependent variable explained by the predictor variable(s) in a linear model.
Let's simulate some data:
# simulate some data
set.seed(8675309)
n <- 100
intercept <- 10
effect <- 0.5
error_sd <- 2
dat <- tibble::tibble(
predictor = rnorm(n, 0, 1),
error = rnorm(n, 0, error_sd),
dv = intercept + (effect * predictor) + error
)
Analyse the data with a linear model and use the summary() function to view the R-squared values.
model <- lm(dv ~ predictor, dat)
summary(model)
#>
#> Call:
#> lm(formula = dv ~ predictor, data = dat)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -5.0692 -1.4106 0.2447 1.4562 3.9263
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 9.8913 0.1991 49.677 < 2e-16 ***
#> predictor 0.6361 0.2150 2.958 0.00388 **
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 1.988 on 98 degrees of freedom
#> Multiple R-squared: 0.08198, Adjusted R-squared: 0.07261
#> F-statistic: 8.752 on 1 and 98 DF, p-value: 0.003877
Adjusted R-squared is a modified version of R-squared adjusted for the number of predictors in the model. You can extract the R-squared and adjusted R-squared values from the model summary.
summary(model)$r.squared
#> [1] 0.08198174
summary(model)$adj.r.squared
#> [1] 0.0726142
## 19.3 random effect
An effect associated with an individual sampling unit, usually represented by an offset from a fixed effect.
Example: If the grand mean response time in a population is 600 milliseconds, that number represents the typical value. The mean response time for an individual subject $$s$$ can be represented as an offset (deviation) from that value. For example, if subject $$s$$ has a mean reaction time of 650 ms, that would imply a random effect for that subject of +50 ms.
Unlike fixed effects, we expect the underlying random effects to change from experiment to experiment as the sampling units (e.g., subjects) change.
In a mixed effects model, you can get a table of just the random effects with the code below:
model <- afex::lmer(
rating ~ rater_age * face_age + # fixed effects
(1 | rater_id) + (1 | face_id), # random effects
data = faux::fr4
)
broom.mixed::tidy(model, effects = "ran_pars")
effect group term estimate
ran_pars face_id sd__(Intercept) 0.6040734
ran_pars rater_id sd__(Intercept) 0.9144817
ran_pars Residual sd__Observation 1.0434014
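The text's 600 ms example can be simulated directly. This is an illustrative sketch (not the afex/lme4 API): each subject's mean is the grand mean plus a normally distributed random intercept; the numbers are made up.

```python
# Simulate per-subject mean response times as a grand mean (fixed
# effect) plus a random intercept (offset) per subject.
import random

random.seed(1)
grand_mean = 600          # ms, the fixed effect
subject_sd = 40           # SD of the random intercepts

# one random offset per subject; subject s's mean is grand_mean + offset
offsets = {s: random.gauss(0, subject_sd) for s in range(1, 6)}
subject_means = {s: grand_mean + off for s, off in offsets.items()}
```

A subject whose offset happens to be +50 ms has a simulated mean of 650 ms, exactly as in the example above; a fresh sample of subjects would produce a fresh set of offsets.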
## 19.4 random factor
A factor whose levels are taken to represent a proper subset of a population of interest, typically because the observed levels are the result of sampling.
If you perform a study where the population of interest is "undergraduates at the University of Glasgow", you are very unlikely to obtain data from all approximately 20,000 undergraduate students. Instead, you would likely obtain a (hopefully random) sample of undergraduates, with each undergraduate forming a single 'level' of the 'subject' factor. By treating 'subject' as random rather than fixed in your analysis, you will obtain parameter estimates that are closer to the true population values.
Random factors are usually contrasted with fixed factors, whose levels are assumed to represent all the levels of interest in a population.
## 19.5 random seed
A value used to set the initial state of a random number generator.
Random seeds are used in random number generation. Each time you generate a random number, the number you get depends on the state of the underlying random number generator. If you set this state to a known value, you will get the same random numbers in the same order.
Random seeds are used to make processes that involve random values reproducible. In R, you can set a random seed using the set.seed() function. If you put set.seed() at the start of your script, you will get the same output every time.
More...
set.seed(8675309) # Lisa's favourite seed
rnorm(3)
#> [1] -0.9965824 0.7218241 -0.6172088
set.seed(1) # a different seed
rnorm(3)
#> [1] -0.6264538 0.1836433 -0.8356286
set.seed(8675309) # the first seed again
rnorm(3)
#> [1] -0.9965824 0.7218241 -0.6172088
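The same demonstration works in any language with a seedable random number generator. Here is a sketch with Python's standard library RNG: re-seeding reproduces the identical draw sequence.

```python
import random

random.seed(8675309)
first = [random.random() for _ in range(3)]

random.seed(1)              # a different seed -> different numbers
second = [random.random() for _ in range(3)]

random.seed(8675309)        # the first seed again -> the same numbers
third = [random.random() for _ in range(3)]
```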
## 19.6 reactivity
Changes in a Shiny app that occur in response to user input.
Reactive features of your UI are rendered in the server.
## 19.7 README

A text file in the base directory of a project that describes the project.
A README is a plain text file that contains information you want new users of the project to read first. This commonly includes one or more of the following: configuration or installation instructions, a list of files included with the project, a usage license, citation information, and author contact info.
## 19.8 relative path
The location of a file in relation to the working directory.
For example, if your working directory is /Users/me/study/ and you want to refer to a file at /Users/me/study/data/faces.csv, the relative path is data/faces.csv. Use ../ to move up one directory.
# the working directory: /Users/me/study/
# read a file inside the wd: /Users/me/study/data/faces.csv
# read a file outside the wd: /Users/me/other_study/data/exp.csv
xdat <- readr::read_csv("../other_study/data/exp.csv")
Always use relative paths in an R Markdown document; knitting automatically sets the working directory to the directory that contains the .Rmd file.
Contrast with absolute path.
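How a relative path resolves against a working directory can be sketched mechanically. This example uses Python's standard library and the hypothetical paths from the text above:

```python
# Resolve relative paths against a working directory; "../" moves up
# one directory before descending.
import posixpath

wd = "/Users/me/study"

# a file inside the working directory
inside = posixpath.normpath(posixpath.join(wd, "data/faces.csv"))

# a file outside the working directory, reached via "../"
outside = posixpath.normpath(posixpath.join(wd, "../other_study/data/exp.csv"))
```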
## 19.9 render
To create a file (usually an image or PDF) or widget from source code
In the context of Shiny apps, render usually means to create the HTML to display in a reactive output from code in the server section.
In the context of R Markdown files, knit, render and compile tend to be used interchangeably.
## 19.10 repeated measures
A dataset has repeated measures if there are multiple measurements taken on the same variable for individual sampling units.
## 19.11 replicability
The extent to which the findings of a study can be repeated with new samples from the same population.
Alternatively, a scientific claim is replicable if it is supported by new data (Errington et al., 2021).
The aim of replicating a study is to test the underlying theoretical process, estimate the average effect size, and test whether you can observe the effect independent of the original researchers by recreating a study's methods as closely as possible (Brandt et al., 2014).
There is no single agreed definition of a successful replication, but there are some common markers (Errington et al., 2021):
• Is the replication effect in the same direction as the original study?
• Is the replication effect in the same direction as the original study and statistically significant?
• Is the effect size from the original study in the confidence interval of the replication?
• Is the effect size from the replication study in the confidence interval of the original?
• Is the effect size from the replication smaller or equal to the effect size from the original study?
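The confidence-interval markers above can be checked mechanically. This is an illustrative sketch using a normal approximation (estimate ± 1.96 × SE); the effect sizes and standard errors are made up for illustration.

```python
# Check two of the markers: does each study's effect size fall inside
# the other study's 95% confidence interval?

def in_ci(estimate, other_estimate, other_se):
    lo = other_estimate - 1.96 * other_se
    hi = other_estimate + 1.96 * other_se
    return lo <= estimate <= hi

original = {"d": 0.60, "se": 0.20}      # hypothetical original study
replication = {"d": 0.35, "se": 0.10}   # hypothetical replication

same_direction = (original["d"] > 0) == (replication["d"] > 0)
orig_in_rep_ci = in_ci(original["d"], replication["d"], replication["se"])
rep_in_orig_ci = in_ci(replication["d"], original["d"], original["se"])
```

Note that the markers can disagree: here the effects are in the same direction and the replication estimate sits inside the original's wide interval, yet the original estimate falls outside the replication's narrower interval.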
## 19.12 reprex
A reproducible example that is the smallest, completely self-contained example of your problem or question.
For example, you may have a question about how to select rows that contain the value "test" in a certain column, but your code isn't working. It's clearer if you can provide a concrete example, but you don't want to have to type out the whole table you're using or all the code that got you to this point in your script.
You can include a very small table with just the basics or a smaller version of your problem. Make comments at each step about what you expect and what you actually got.
More...
Which version is easier for you to figure out the solution?
# this doesn't work
no_test_data <- data %>%
filter(!str_detect(type, "test"))
# with a minimal example table
data <- tribble(
~id, ~type, ~x,
1, "test", 12,
2, "testosterone", 15,
3, "estrogen", 10
)
# this should keep IDs 2 and 3, but removes ID 2
no_test_data <- data %>%
filter(!str_detect(type, "test"))
One of the big benefits to creating a reprex is that you often solve your own problem while you're trying to break it down to explain to someone else.
If you really want to go down the rabbit hole, you can create a reproducible example using the reprex package from tidyverse.
## 19.13 reproducibility
The extent to which the findings of a study can be repeated in some other context
Reproducibility can be either with new samples from the same population (replicability) or with the same raw data but analyzed by different researchers or by the same researchers on a different occasion (computational or analytical reproducibility).
## 19.14 reproducible research
Research that documents all of the steps between raw data and results in a way that can be verified.
## 19.15 residual
The deviation of an observation from a model's expected value.
Mathematically, the residual is defined as the observed value minus the model's fitted value for that observation.
More...
For example, the linear model $$\hat{Y}_i = 3 + 2 X_i$$ predicts a value of $$\hat{Y}_i = 7$$ for $$X_i = 2$$. If you happen to have observed $$Y_i = 8$$ for observation $$i$$, then the residual for that observation would be $$Y_i - \hat{Y}_i = 8 - 7 = 1$$.
A related but slightly different notion is error.
For example, below we simulate data for two groups of 50 with means of 100 and 105 (and SDs of 10).
set.seed(8675309) # for reproducibility
group0 <- rnorm(n = 50, mean = 100, sd = 10)
group1 <- rnorm(n = 50, mean = 105, sd = 10)
df <- data.frame(
dv = c(group0, group1),
group = rep(0:1, each = 50)
)
model <- lm(dv ~ group, data = df)
summary(model)
#>
#> Call:
#> lm(formula = dv ~ group, data = df)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -25.9878 -7.1828 -0.1015 5.8804 20.1878
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 101.027 1.319 76.609 <2e-16 ***
#> group 3.992 1.865 2.141 0.0348 *
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 9.325 on 98 degrees of freedom
#> Multiple R-squared: 0.04467, Adjusted R-squared: 0.03492
#> F-statistic: 4.582 on 1 and 98 DF, p-value: 0.03478
model$coefficients
#> (Intercept)       group
#>  101.026918    3.992219
The model above has an intercept of 101 and an effect of group of 4, meaning that the predicted value for group 0 is 101 and the predicted value for group 1 is 105. The difference between these predictions and the observed values is the residual error.
intercept <- model$coefficients[[1]]
effect <- model\$coefficients[[2]]
df |>
mutate(predicted = intercept + effect * group,
residual = dv - predicted) |>
group_by(group) |>
slice(1:5) # show 5 from each group
dv group predicted residual
90.03418 0 101.0269 -10.992742
107.21824 0 101.0269 6.191323
93.82791 0 101.0269 -7.199007
120.29392 0 101.0269 19.266997
110.65416 0 101.0269 9.627242
106.34898 1 105.0191 1.329843
106.17582 1 105.0191 1.156684
96.74411 1 105.0191 -8.275028
83.64764 1 105.0191 -21.371500
107.14209 1 105.0191 2.122948
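The residual computation in the table above is just fitted value = intercept + effect × group and residual = observed − fitted. As a language-neutral sketch (Python, rounded coefficients taken from the model output above, a few rows of the observations made up to match the table):

```python
# Compute fitted values and residuals for a two-group linear model.
intercept = 101.027
effect = 3.992

observations = [
    (90.034, 0),    # (dv, group)
    (107.218, 0),
    (106.349, 1),
]

rows = []
for dv, group in observations:
    predicted = intercept + effect * group
    rows.append((dv, group, predicted, dv - predicted))
```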
## 19.16 response variable
A variable (in a regression) whose value is assumed to be influenced by one or more predictor variables.
In an experimental context, the response variable is often referred to as the dependent variable.
## 19.17 right_join
A mutating join that keeps all the data from the second (right) table and joins anything that matches from the first (left) table.
More...
X <- tibble(
id = 1:5,
x = LETTERS[1:5]
)
Y <- tibble(
id = 2:6,
y = LETTERS[22:26]
)
Table X:
id x
1 A
2 B
3 C
4 D
5 E

Table Y:
id y
2 V
3 W
4 X
5 Y
6 Z
If there is no matching data in the left table for a row, the values are set to NA.
# Y is the right table
data <- right_join(X, Y, by = "id")
id x y
2 B V
3 C W
4 D X
5 E Y
6 NA Z
Order is important for right joins.
# X is the right table
data <- right_join(Y, X, by = "id")
id y x
2 V B
3 W C
4 X D
5 Y E
1 NA A
See joins for other types of joins and further resources.
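The right-join semantics can be sketched in a few lines. This is an illustrative implementation over Python dicts (not the dplyr API): keep every row of the right table and fill unmatched keys from the left table with None (R's NA).

```python
# Minimal right-join sketch over id -> value mappings, mirroring the
# X and Y tables from the example above.
X = {1: "A", 2: "B", 3: "C", 4: "D", 5: "E"}   # id -> x
Y = {2: "V", 3: "W", 4: "X", 5: "Y", 6: "Z"}   # id -> y

def right_join(left, right):
    # one row per key of the right table, in the right table's order;
    # missing left values become None
    return [(i, left.get(i), y) for i, y in right.items()]

joined = right_join(X, Y)
```

As in the dplyr output above, id 6 appears with a missing x because it has no match in the left table, while id 1 is dropped.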
## 19.18 RStudio
An integrated development environment (IDE) that helps you write and run R code.
https://rdrr.io/cran/nnspat/man/funs.kNNdist.html
# funs.kNNdist: Functions for the k-th and k NN distances (in nnspat: Nearest Neighbor Methods for Spatial Patterns)
funs.kNNdist R Documentation
## Functions for the k-th and k NN distances
### Description
Two functions: kthNNdist and kNNdist.
kthNNdist returns the distances between subjects and their k-th NNs. The output is an n x 2 matrix, where n is the data size; the first column is the subject index and the second column contains the corresponding distances to the k-th NN subjects.
kNNdist returns the distances between subjects and their k NNs. The output is an n x (k+1) matrix, where n is the data size; the first column is the subject index and the remaining k columns contain the corresponding distances to the k NN subjects.
### Usage
kthNNdist(x, k, is.ipd = TRUE, ...)
kNNdist(x, k, is.ipd = TRUE, ...)
### Arguments
x: The IPD matrix (if is.ipd = TRUE) or a data set of points in matrix or data frame form where points correspond to the rows (if is.ipd = FALSE).
k: Integer specifying the number of NNs (of subjects).
is.ipd: A logical parameter (default = TRUE). If TRUE, x is taken as the inter-point distance matrix; otherwise, x is taken as the data set with rows representing the data points.
...: Further arguments, such as method and p, passed to the dist function.
### Value
kthNNdist returns an n x 2 matrix, where n is the data size (i.e. number of subjects); the first column is the subject index and the second column contains the k-th NN distances.
kNNdist returns an n x (k+1) matrix, where n is the data size (i.e. number of subjects); the first column is the subject index and the remaining k columns contain the corresponding distances to the k NN subjects.
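The output shape described above can be illustrated with a toy sketch (Python, not the nnspat implementation): from an inter-point distance matrix, return for each subject its 1-based index followed by the distances to its k nearest neighbours.

```python
# Illustrative kNN-distance sketch: for each subject, sort the
# distances to all other subjects and keep the k smallest, giving an
# n x (k+1) structure whose first column is the subject index.
def knn_dist(ipd, k):
    rows = []
    for i, dists in enumerate(ipd):
        # exclude the self-distance at position i
        others = sorted(d for j, d in enumerate(dists) if j != i)
        rows.append([i + 1] + others[:k])   # 1-based subject index
    return rows

# toy 3-point inter-point distance matrix (symmetric, zero diagonal)
ipd = [
    [0.0, 1.0, 4.0],
    [1.0, 0.0, 2.0],
    [4.0, 2.0, 0.0],
]
out = knn_dist(ipd, 2)
```

The k-th NN distance of kthNNdist is then just the last of the k columns.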
### Author(s)
Elvan Ceyhan
### See Also

NNdist and NNdist2cl
### Examples
#Examples for kthNNdist
#3D data points, gives NAs when n<=k
n<-20 #or try sample(1:20,1)
Y<-matrix(runif(3*n),ncol=3)
ipd<-ipd.mat(Y)
kthNNdist(ipd,3)
kthNNdist(Y,3,is.ipd = FALSE)
kthNNdist(ipd,5)
kthNNdist(Y,5,is.ipd = FALSE)
kthNNdist(Y,3,is.ipd = FALSE,method="max")
#1D data points
X<-as.matrix(runif(5)) # need to be entered as a matrix with one column
#(i.e., a column vector), hence X<-runif(5) would not work
ipd<-ipd.mat(X)
kthNNdist(ipd,3)
#Examples for kNNdist
#3D data points, gives NAs if n<=k for n,n+1,...,kNNs
n<-20 #or try sample(1:20,1)
Y<-matrix(runif(3*n),ncol=3)
ipd<-ipd.mat(Y)
kNNdist(ipd,3)
kNNdist(ipd,5)
kNNdist(Y,5,is.ipd = FALSE)
kNNdist(Y,5,is.ipd = FALSE,method="max")
kNNdist(ipd,1)
kthNNdist(ipd,1)
#1D data points
X<-as.matrix(runif(5)) # need to be entered as a matrix with one column
#(i.e., a column vector), hence X<-runif(5) would not work
ipd<-ipd.mat(X)
kNNdist(ipd,3)
nnspat documentation built on Aug. 30, 2022, 9:06 a.m.
https://ltwork.net/complete-the-table-the-answers--7741377
# Complete the table. The answers!
###### Question:
Complete the table.
https://www.physicsforums.com/threads/the-china-syndrome-and-three-mile-island.288250/
# The China Syndrome and Three Mile Island
1. Jan 29, 2009
### Ivan Seeking
Staff Emeritus
Some people here may remember the movie, The China Syndrome.
...allegedly leading to a complete meltdown of the core. Note that in spite of the movie's title, the notion of the core going all the way to China is rebuffed even in the movie.
I never saw the movie until recently, so I never realized that it opened twelve days before the incident at Three Mile Island!
http://en.wikipedia.org/wiki/The_China_Syndrome
Talk about bad luck for the nuclear industry... and great luck for Hollywood!
Then, four years later came the movie, Silkwood.
http://www.pbs.org/wgbh/pages/frontline/shows/reaction/interact/silkwood.html
http://www.imdb.com/title/tt0086312/
Not only that, in 1979 we also saw Love Canal.
http://www.epa.gov/history/topics/lovecanal/01.htm [Broken]
This certainly raised suspicion in the public mind in regards to the storage of toxic waste; including the storage of nuclear waste. The point? I just thought it was interesting to note the timeline of these events and the role that they certainly played in helping to shape the public perception of nuclear power.
Last edited by a moderator: May 3, 2017
2. Jan 29, 2009
### vanesch
Staff Emeritus
Indeed. It seems that nuclear is associated with a large "outrage" factor, much more so than in many other sectors of activity. An interesting read about that is what Peter Sandman tells about the issue on his website http://www.psandman.com/
Sandman is a known risk communication consultant and he has put much of his material online.
You see that in nuclear, most of the elements promoting "outrage" are present.
(see http://www.psandman.com/index-intro.htm and http://www.psandman.com/index-OM.htm on that same site).
Last edited by a moderator: Apr 24, 2017
3. Jan 29, 2009
### Ivan Seeking
Staff Emeritus
I think the key word there is "trust". Vietnam was a recent memory, as was Nixon, and distrust of the government and large corporations was a given. This easily played into distrust of the nuclear industry. And the story of Silkwood didn't help to change this perception. Not only that, we lived under MAD - mutually assured destruction. The potential for the destruction of civilization due to the use of tens of thousands of nuclear weapons was also a given. Even the men who implemented these policies called them MAD. As children, we had even practiced hiding under our desks in school in preparation for the day the Russians attacked. So the word "nuclear" was unavoidably associated with death and destruction. Next, the remote possibility of a meltdown in a nuclear plant was thought to be far more likely than calculated because of corruption and human failings, rather than because of the process or safeguards. I still adhere to this point of view in regards to both operation and security.
Note that in a post-911 world, we still find this:
http://abcnews.go.com/Blotter/story?id=6597151&page=1
http://www.psandman.com/index-OM.htm
Chernobyl was seven years later, in 1986.
Last edited: Jan 29, 2009
4. Jan 29, 2009
### Staff: Mentor
While neither a sleeping security guard nor Chernobyl has much useful to say about the safety of modern nuclear power, they really do hammer home the point of vanesch's link!
I've never seen that before, vanesch - it's a really good characterization of the problem. Thanks.
5. Jan 29, 2009
### Ivan Seeking
Staff Emeritus
Chernobyl, no, but in part because of the timing it did help to shape public opinion. It took place on the heels of the other events mentioned. But sleeping security guards are another matter. This is a great example of how a system can begin to slide. For a more dramatic example we might consider the Challenger shuttle explosion. In spite of redundant safeguards and a decision-making process intended to prevent that sort of disaster, in the end, budget concerns outweighed engineering concerns.
The Hubble telescope would be another example. In that case it was a blatant deception. And this latest peanut food poisoning fiasco might be another example of a malicious disregard for public safety in favor of profit.
Last edited: Jan 29, 2009
6. Jan 30, 2009
### Ivan Seeking
Staff Emeritus
The thing is, I have spent most of my thirty-year career watching people in industry cheat and cut corners. Not to say that it happens every day, but it is fairly common to see things that shouldn't be done. Also, organizations [companies] can become severely dysfunctional. This in turn can lead to extremely incompetent management of systems and operations. This is partly what happened at NASA in regards to the Challenger.
Last edited: Jan 30, 2009
7. Jan 30, 2009
### vanesch
Staff Emeritus
It was also the basis of the Chernobyl disaster, btw. The level of incompetence and dysfunction was total there, from design, through management, through operation, through emergency management. Everything there went as wrong as it could be, in every aspect.
But then, the whole Soviet Union was like that in the end.
Now, there's nothing special to nuclear. Nuclear disasters are not worse than other culminations of dysfunction and incompetence in large-scale activities when looking at cost and victims. If, on a large scale, everything malfunctions, then that will give rise to cost and victims on a large scale. Not to say that Chernobyl was nothing, but it didn't play in a different category of disaster than other disasters - except maybe for one point: a piece of land has been turned into a natural reserve for a century or two, and is economically dead. Indeed, with most other disasters, there's not this aspect: after a few years at most, you can reconstruct on the same spot.
You could say that the dysfunctioning of the financial and banking system will probably cost much more and cause way more victims than Chernobyl did. But, true, a few years from now, hopefully, we can put that nightmare behind us and pretend it never happened. That cannot be said of the natural reserve in the 30 km zone around Chernobyl, which is now a natural park "with a guarantee".
That said, as long as there is a safety culture (that's the thing to check regularly), individual errors are not a problem: the system is normally designed to be robust against individual errors, and even relatively small accumulations of errors. That's the whole idea: that the safety of the whole thing is not based upon one or two elements. It is not because the guard was sleeping that there was a serious problem, by itself. It is not because this or that didn't function, that there was a problem. That's what people have sometimes a hard time realizing: the system is designed such that individual problems are never an overall problem. You need a long chain of individual errors before something serious can start to happen. Of course, if the whole chain is rotten, you will get a serious problem in the end. So one should check the integrity of the whole system, which is nothing else but the safety culture.
8. Jan 30, 2009
Staff Emeritus
I don't think it was evil so much as ineptitude. The Allen Report says "The Perkin-Elmer plan for fabricating the primary mirror placed complete reliance on the reflective null corrector as the only test to be used in both manufacturing and verifying the mirror's surface with the required precision. NASA understood and accepted this plan. This methodology should have alerted NASA management to the fragility of the process and the possibility of gross error, that is, a mistake in the process, and the need for continued care and consideration of independent measurements."
I do think that this is an example of what happens when QA paperwork becomes an end unto itself and becomes more important than actual QA. This is the biggest problem I see with trying to create a "safety culture" - what can happen (and has happened) is that safety paperwork becomes more important than actual safety.
But there are certainly other disasters that do this. Coal seam fires have rendered sections of Pennsylvania uninhabitable for decades. St. Lucia's Flood in what is now the Netherlands in 1287 permanently reshaped the Dutch coast - places where people once lived are underwater even today.
9. Feb 6, 2009
### vanesch
Staff Emeritus
10. Feb 6, 2009
### mgb_phys
This was definitely the problem with NASA at the time of Hubble (and in the years after): they had a large number of inspectors at PE's plant, all of them checking the quality paperwork but nobody checking the actual work.
Rather like the security theatre of airport security
11. Mar 29, 2009
### Ivan Seeking
Staff Emeritus
In the news, yesterday was the 30th anniversary of the failure.
12. Mar 29, 2009
### vanesch
Staff Emeritus
The question is to what extent this was a "near catastrophe", or an almost non-event.
The first barrier (the fuel cladding) melted, but simply due to remnant heat (radioactive decay), the reactor was stopped. The second barrier (the reactor vessel) was somewhat damaged on the inside, but didn't break, and the 3rd barrier (the confinement building) was still there. So we were still 2 barriers away from the products being released to the outside, and even if that were the case, it would have been a slow, leaking release - nothing comparable to Chernobyl where everything was put out in a smoke plume driven by a still working reactor and a huge fire directly high in the atmosphere.
So even if (and that's pretty unthinkable) the 2 other barriers would have been broken, we would have had a very serious, but very local, contamination of the site and slightly beyond, apart from a release of volatiles such as I-131, which would have indeed contaminated a larger area, but for a few weeks only. So one would have to evacuate for a few weeks the area in, say a few miles around it, give non-active iodine to inhabitants, and have a serious local cleanup mess on site.
In other words, the hypothetical "near catastrophe" would be equivalent to a similar accident of a local release of toxic products in industry.
Statistically, probably a few people would get a cancer (as they would if there were a release somewhere of say, aromatic hydrocarbons or something) a few decades later.
So the catastrophe wasn't that near, and wasn't that terrible.
13. Mar 29, 2009
Staff Emeritus
Even the worst case of TMI would have been far less severe than Bhopal, and carbaryl is still in use almost worldwide.
14. Mar 29, 2009
### Ivan Seeking
Staff Emeritus
Can you even imagine an evacuation at that scale? Experience tells us that it is almost impossible to evacuate even one city.
So let's consider something like San Onofre nuclear power plant, in California. A 500 mile evacuation perimeter would include all of San Diego, Los Angeles, Phoenix, Las Vegas, and even San Francisco. Do you really think this is feasible? It would probably destroy the US economy... in fact it almost certainly would; if not the global economy.
Note that the GDP of California is about 1.7 trillion dollars US - counting as the 13th largest country and just a little smaller than the GDP of France.
https://www.cia.gov/library/publications/the-world-factbook/rankorder/2001rank.html
http://www.dof.ca.gov/HTML/FS_DATA/LatestEconData/Data/Miscellaneous/Bbgsp.xls [Broken]
Last edited by a moderator: May 4, 2017
15. Mar 29, 2009
### Ivan Seeking
Staff Emeritus
My favorite part of the video is where the country's two foremost experts on the situation were arguing about whether or not the plant was about to blow up due to a hydrogen explosion, even as Jimmy Carter was arriving to see the plant. By the time Carter was informed of the danger, he had already publicly committed to seeing the plant. To turn back at that point would send a catastrophic message to the nation. How would you like to be an engineer caught in the middle of that mess?
Last edited: Mar 29, 2009
16. Mar 29, 2009
### Brilliant!
A very unfortunate fate for such an amazingly powerful and efficient source of energy.
I'm of the opinion that the opposition isn't driven so much by fear as it is by politics, which is driven by the oil industry. Currently, 20% of the US' electricity is generated by nuclear power plants. I'd think if it were the case that the opposition was completely driven by fear, the government would spend just as much time fencing anti-nuke activists as they would environmentalists.
By the way, that statistic should be enough to destroy any argument about "incompetence". Currently, there are 104 operational nuclear reactors in the US. Perpetuating the belief that we aren't responsible enough for nuclear technology is reckless, irresponsible, and wholly dangerous to the economy.
Nuclear power is the way to go, and the market is ready for it. Now, if only people would stop being so unreasonable.
Last edited: Mar 29, 2009
17. Mar 29, 2009
Staff Emeritus
And about 2% by oil. Hard to see how the oil industry has much to gain here.
18. Mar 29, 2009
### Astronuc
Staff Emeritus
http://www.eia.doe.gov/cneaf/electricity/epm/table1_1.html
Coal is nearly 50%, and natural gas has eclipsed nuclear as the second leading source.
Certainly the coal industry has an incentive to have new generation being coal, while gas producers have an incentive to have new generation from gas-fired plants.
19. Mar 29, 2009
### vanesch
Staff Emeritus
To really be certain, a perimeter of 50 000 miles would even be safer, don't you think ?
If it would ruin the US economy to evacuate 500 miles around the plant, just to keep people in that area from getting, say, an extra 0.5 mSv (*) dose or something of the kind, then the best thing to do is not to evacuate. And to let them have their dose. It would be far, far less costly to have a few thousand extra cancers over a 50-year period than to do something crazy like that, wouldn't it?
A "nuclear catastrophe" is nothing else but a pollution event, with a certain potential health impact, which is, apart from strong expositions nearby, quite small as compared to most pollutions we already undergo. And it is pretty rare. Not entirely impossible, true.
(*) world average natural yearly dose: 2.4 mSv.
20. Mar 29, 2009
### Brilliant!
Sorry, I did mean 'coal industry'. I've had so many conversations about the oil industry that it's almost a habit to type "oil" before the word "industry".
That being said, Astronuc has made a good point in favor of my typo.
21. Mar 29, 2009
### Brilliant!
Fortunately for natural gas, it faces relatively few obstacles to expansion. In nuclear's case, most growth is made through upgrading the existing reactors.
When (and if) the majority comes to accept nuclear power, its share of electricity production will skyrocket.
22. Mar 29, 2009
### Staff: Mentor
I think we should design skyscrapers to withstand asteroid impacts.
23. Mar 29, 2009
### Astronuc
Staff Emeritus
:rofl: That reminds me of a lady, apparently a member of an anti-nuclear, pro-environmental group, who came to our university to sit in a presentation that was given by someone from the nuclear industry. The talk was on safety, and I think it covered some of the issues with Three Mile Island.
She asked - "What if a large meteoroid struck a nuclear power plant?" A large meteoroid is essentially an asteroid.
The presenter indicated that if a meteoroid were that large, the nuclear plant would be the least of the worries. The implication is that a sufficiently large meteoroid would have a much greater devastating effect than a breach of a nuclear plant.
The problem with natural gas is the price volatility. As I understand it, gas-fired plants lose money when the price of gas goes up because they were built to be profitable at around $2-3/MMBtu. Above about $5-6/MMBtu, they are marginal or break-even. But when gas was about $7+/MMBtu, the plants were losing money on generation, and the only way to make money was to sell gas directly for heating.
Since the completion of the last plants, various utilities have increased the output of many nuclear plants anywhere from about 7% to 20%, and some are looking at further power uprates, in addition to plant life extension from 40 yrs to 60 yrs.
We're still waiting for new plant orders, although NRG has apparently purchased two pressure vessels and large components for two ABWRs to be sited at the South Texas Project.
Last edited: Mar 29, 2009
24. Mar 29, 2009
### Brilliant!
This is really exciting because, along with GE, Toshiba and Hitachi are hoping to get in on the South Texas Project. If they become a fixed source of nuclear technology for the US, there's no telling what advances we'll see in the near future.
25. Mar 29, 2009
### Astronuc
Staff Emeritus
Toshiba just bought Westinghouse from BNFL, so Toshiba has more or less parted ways with GE/Hitachi, which have formed a joint company GEH (GE-Hitachi) in the US. Toshiba/Westinghouse markets the AP1000. GEH is marketing the ABWR and ESBWR. Unfortunately, two utilities who were looking at ESBWR have backed out.
AREVA has their EPR, two of which are being built at Flamanville, France and Olkiluoto, Finland, but both plants are having problems.
EPR - Flamanville 3
The EPR is designed to prevent a China Syndrome event.
https://studyqas.com/how-do-i-find-the-value-of-sec-pi-12/
# How do i find the value of sec pi/12?
## This Post Has 3 Comments
1. Expert says:
example 2 – simplify: step 1: completely factor both the numerators and denominators of all fractions. step 2: change the division sign to a multiplication sign and flip (or reciprocate) the fraction after the division sign; essentially, you need to multiply by the reciprocal. step 3: cancel or reduce the fractions.
2. vperez5765 says:
Sec x = 1/Cosx
Pi radians = 180 degrees, so Pi/12 = 180/12 = 15 degrees.
Sec15° = 1/Cos15°
Sec pi/12 = Sec 15° = 1/Cos 15°
Cos 15° = 0.9659 using a calculator.
Sec pi/12 = Sec 15° = 1/Cos 15° = 1/0.9659 = 1.0353
Sec Pi/12 = 1.0353
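The computation above can be sanity-checked in a few lines of Python; sec(pi/12) also has the exact closed form sqrt(6) - sqrt(2), since cos(15°) = (sqrt(6) + sqrt(2))/4:

```python
import math

# sec(pi/12) = 1 / cos(pi/12); pi/12 radians is 15 degrees
approx = 1 / math.cos(math.pi / 12)

# exact closed form: cos(15 deg) = (sqrt(6) + sqrt(2)) / 4,
# so sec(15 deg) = 4 / (sqrt(6) + sqrt(2)) = sqrt(6) - sqrt(2)
exact = math.sqrt(6) - math.sqrt(2)

print(round(approx, 4))  # 1.0353
assert abs(approx - exact) < 1e-12
```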
3. Expert says:
26?
step-by-step explanation:
http://stackoverflow.com/questions/18955033/twitter-bootstrap-misplaced-thumbnails
I'm writing a profile page with Twitter Bootstrap (2.3.2), composed of a row: a span7 for the person's projects, then a span5 for buttons:
<div class="row-fluid">
<div class="span7 profile-left">
<h4><?= $this->lang->line('profile_your_pjts') ?></h4><br>
<!-- THUMB PROGETTI -->
<ul class="thumbnails">
<?php foreach ($list_pjt as $progetto): ?>
<li class="span4 pjt-thumb">
<div class="thumbnail">
<!-- PROJECT DATA FROM PHP/MYSQL -->
</div>
</li>
<?php endforeach; ?>
</ul>
</div>
<div class="span5">
<h4>BUTTONS</h4>
<!-- SUBSCRIPTION DATA -->
<!-- BUTTONS -->
</div>
</div>
This is the result:
The problem comes when I add more thumbnails and they wrap to a new line, where instead of 3 per line I only get 2 from the second line on:
I tried fixing the left margin with plain CSS, but that just moves ALL the thumbs left and keeps the same problem. Any idea why?
-
Ok, maybe! But I have a foreach loop inside the class thumbnail how can I tell php to loop every three..? – Mr.Web Sep 23 '13 at 9:08
https://arbital.greaterwrong.com/p/ordered_ring?l=55j
# Ordered ring
An ordered ring is a ring $$R=(X,\oplus,\otimes)$$ with a total order $$\leq$$ compatible with the ring structure. Specifically, it must satisfy these axioms for any $$a,b,c \in X$$:
• If $$a \leq b$$, then $$a \oplus c \leq b \oplus c$$.
• If $$0 \leq a$$ and $$0 \leq b$$, then $$0 \leq a \otimes b$$
An element $$a$$ of the ring is called “positive” if $$0 < a$$ and “negative” if $$a < 0$$. The second axiom, then, says that the product of nonnegative elements is nonnegative.
An ordered ring that is also a field is an ordered field.
# Basic Properties
• For any element $$a$$, $$a \leq 0$$ if and only if $$0 \leq -a$$.
First suppose $$a \leq 0$$. Using the first axiom to add $$-a$$ to both sides, $$a+(-a) = 0 \leq -a$$. For the other direction, suppose $$0 \leq -a$$. Then $$a \leq -a+a = 0$$.
• The product of nonpositive elements is nonnegative.
Suppose $$a$$ and $$b$$ are nonpositive elements of $$R$$, that is $$a,b \leq 0$$. From the first axiom, $$a+(-a) = 0 \leq -a$$, and similarly $$0 \leq -b$$. By the second axiom $$0 \leq -a \otimes -b$$. But $$-a \otimes -b = a \otimes b$$, so $$0 \leq a \otimes b$$.
• The square of any element is nonnegative.
Let $$a$$ be such an element. Since the ordering is total, either $$0 \leq a$$ or $$a \leq 0$$. In the first case, the second axiom gives $$0 \leq a^2$$. In the second case, the previous property gives $$0 \leq a^2$$, since $$a$$ is nonpositive. Either way we have $$0 \leq a^2$$.
• The multiplicative identity satisfies $$1 \geq 0$$. (Unless the ring is trivial, $$1>0$$.)
Clearly $$1 = 1 \otimes 1$$. So $$1$$ is a square, which means it’s nonnegative.
# Examples
The real numbers are an ordered ring (in fact, an ordered field), as is any subring of $$\mathbb R$$, such as $$\mathbb Q$$.
The complex numbers are not an ordered ring, because there is no way to define the order between $$0$$ and $$i$$. Suppose that $$0 \le i$$, then, we have $$0 \le i \times i = -1$$, which is false. Suppose that $$i \le 0$$, then $$0 = i + (-i) \le 0 + (-i)$$, but then we have $$0 \le (-i) \times (-i) = -1$$, which is again false. Alternatively, $$i^2=-1$$ is a square, so it must be nonnegative; that is, $$0 \leq -1$$, which is a contradiction.
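The axioms and the derived properties above are easy to spot-check mechanically. A minimal Python sketch, checking both ordered-ring axioms on a finite sample of the integers (a spot check over a sample, not a proof):

```python
import itertools

# Spot-check the two ordered-ring axioms over a sample of Z,
# which is an ordered ring under the usual order.
sample = range(-5, 6)

for a, b, c in itertools.product(sample, repeat=3):
    if a <= b:                 # axiom 1: order survives translation
        assert a + c <= b + c
    if 0 <= a and 0 <= b:      # axiom 2: product of nonnegatives is nonnegative
        assert 0 <= a * b

# Derived property: every square is nonnegative.
assert all(0 <= a * a for a in sample)
print("axioms hold on the sample")
```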
https://www.reddit.com/r/CasualMath/comments/19ijrr/what_would_the_equation_for_generating_dd_ability/
[–] 4 points (7 children)
sorry, this has been archived and can no longer be voted on
There are formulae which do this, but they're usually not particularly nice. It's easier to just brute-force the problem in Excel, which gives:
Value Likelihood (%)
3 0.1
4 0.3
5 0.8
6 1.6
7 2.9
8 4.8
9 7.0
10 9.4
11 11.4
12 12.9
13 13.3
14 12.3
15 10.1
16 7.3
17 4.2
18 1.6
(I couldn't get it to make a nice table)
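The same brute force is a few lines of Python: enumerate all 6^4 = 1296 rolls, drop the lowest die, and tally the sums (this reproduces the percentages in the table above):

```python
from itertools import product
from collections import Counter

# Enumerate every 4d6 roll, drop the lowest die, sum the top three.
counts = Counter(sum(sorted(roll)[1:]) for roll in product(range(1, 7), repeat=4))

total = 6 ** 4  # 1296 equally likely ordered rolls
for value in range(3, 19):
    print(f"{value:2d} {100 * counts[value] / total:5.1f}")
```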
[–] 2 points (1 child)
You can use the code display mode by starting each line with four spaces:
Value Likelihood(%)
3 0.1
4 0.3
5 0.8
6 1.6
7 2.9
8 4.8
9 7.0
10 9.4
11 11.4
12 12.9
13 13.3
14 12.3
15 10.1
16 7.3
17 4.2
18 1.6
OP, you might be interested in Wolfram Alpha's dice-related features. Don't forget to try clicking the various buttons.
[–][S] 1 point (0 children)
Thank you!
[–] 2 points (3 children)
Can you give a little more detail? How do you account for the removed die?
[–] 2 points (1 child)
There are 6^4 = 1296 possible outcomes when rolling 4d6. When taking the sum of the max 3, only one yields 3: (1,1,1,1), whereas there are 21 that yield 18: (X,6,6,6), (6,X,6,6), (6,6,X,6), (6,6,6,X). This differs from just summing 3d6, where 3 and 18 would be equally likely.
It just comes down to counting.
[–] 1 point (0 children)
ah
gotcha. you take the highest three. makes sense.
[–] 0 points (0 children)
I don't know the rules of D&D, but I could walk through an example with dice that might help.
The probability of rolling any particular value is the number of ways to roll that value, divided by the total number of possible outcomes.
1 die
total number of possible outcomes = 6
Probability of rolling a 1 = P(1)
P(1) = number of ways to get a 1 / number of possible outcomes
P(1) = 1/6 ~ 0.167
P(2) = 1/6
P(3) = 1/6
P(4) = 1/6
P(5) = 1/6
P(6) = 1/6
2 dice
total number of possible outcomes = 6 on first die * 6 on second = 6*6 = 36
P(1) = 0/36
P(2) = 1/36 ~ 0.028 (only one way: roll double ones)
P(3) = 2/36 (can roll a one and a two, or roll a two and a one)
P(4) = 3/36 (can roll 1&3, 2&2, 3&1)
. . .
P(11) = 2/36 (can roll 5 & 6 or 6 & 5)
P(12) = 1/36 (boxcars)
note: please don't bother counting these up yourself.
If you are curious, which you should be, I recommend looking them up
and maybe find a few cool graphs.
So, you would calculate the probability as usual for the number of dice you have, then take away a die and recalculate with the remaining number of dice. Does that help? Maybe you could explain to me the situation where you are rolling and then adding or removing dice.
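The two-dice counts above can be generated by the same kind of enumeration (a quick Python sketch):

```python
from itertools import product
from collections import Counter

# Enumerate all 36 ordered outcomes of two dice and tally each sum.
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))

for total in range(2, 13):
    print(f"P({total}) = {counts[total]}/36")
```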
[–][S] 0 points (0 children)
Thanks!
https://cs.stackexchange.com/questions/44504/practical-bounding-box
# “Practical” bounding box?
For the sake of simplicity, let's say I have a bunch of 2d points, each with an X and a Y. The points are distributed somewhat randomly but not completely; they will be biased to be closer to the world center (wherever that might be, completely random) and they are likely to have some kind of a shape. You can imagine it like a bunch of stars forming a galaxy around a black hole...
Now I need to process these points (each involves a lot of calculation) and to optimize the processing, it would greatly help if I knew whether they were north or south of most of the other points, east or west, and whether the north/south spread is more or less than the east/west spread. The exact specificity of the distribution is not important, right now I'm imagining 8 sectors (NNE, NEE, SEE, SSE, SSW, SWW, NWW, NNW like on the map).
2 simple but incomplete solutions to this are:
1. average the position of all the points, creating the center of the collection, and compare the position of each point to that. This has 2 problems. For one, 99% of the points could be in a tight circular group while one point could be way far out somewhere. This would shift the center towards that point and, in the worst case, the 99% of the points would only be distributed between 2 sectors. Second, the distribution doesn't have to be circular; they are just as likely to show up in the shape of a giraffe. Due to the long neck of the giraffe, the Y amplitude would be much greater than the X, so the processing would be incorrectly biased between X and Y 50% of the time.
2. Also compare the x and y of each point to a global min/max creating a bounding box, the bounding box can then be used to normalize x/y distribution but would suffer even more from the one lone point that could be way out there.
So in essence, I'm looking for a way to find the average position and the bounding box without being affected by the potential flyaway. The quality of those 2 pieces of data would be defined as follows: After cutting the BB into 8 sectors like a pie originating from the center point, if a point is assigned to the NNE sector for example then
1. the greatest percentage of other points must be towards its south
2. the second greatest towards its west
3. the third greatest towards its east
4. and least amount of points towards its north
Any ideas?
• 1. I'm not clear on exactly what the algorithmic task is. For each point, you want to answer "Is it north of most other points?" -- is that right? Does "north of" simply mean that the Y coordinate is larger? Does "most" simply mean "at least 50%"? Can you provide a more precise specification of the problem, so I can be sure whether the approach I'm thinking of will meet your needs? 2. What do the sectors have to do with the problem statement? Or are they there only because they're used in the candidate solution you've considered? – D.W. Jul 18 '15 at 5:33
• Welcome to Computer Science Stack Exchange. Your question is very unclear. I do not really see what the bounding box is supposed to achieve nor how it is defined. I fail to see what part of the text is the problem and what part is a half baked solution that you think might help. At first you separate the problem and your attempts at a solution. But then starting "So in essence¨ we no longer know what is the problem, or your view of the solution. – babou Jul 18 '15 at 11:25
• For all I can understand of your problem, you want to find a median point serving as center such that half your points are north, half south, half west, half east. This is not a center of gravity, since distances do not seem to matter. This can be done by computing separately the median for X and for Y. NO giraffe effect, or lone star effect. Then, answering your question becomes trivial with two comparisons. What is wrong with that? But it seems so trivial that there must be a constraint you did not tell, or I did not see. What is it? – babou Jul 18 '15 at 11:37
For all I can understand of your problem, you want to find a median point serving as center such that half your points are north, half south, half west, half east.
This is not a center of gravity, since distances do not seem to matter. It can be done by computing separately the median of all the $X$s and of all the $Y$s. There is no giraffe effect, lone-star effect, or other.
Computing this median $X_M$ and $Y_M$ for all $X$s and $Y$s can be done in linear time $O(n)$ where $n$ is the number of points, as indicated in wikipedia and StackOverflow.
Then for any point $(X,Y)$ you can know whether it has more points south (low ordinates) by checking whether $Y > Y_M$, and conversely for more points north.
Similarly, by comparing $X$ to $X_M$ you can tell whether there are more points east or west of the point $(X,Y)$.
You can get similar results along any direction.
If I understand your problem statement, this can be solved efficiently using sorting. Simply sort all of the points by their Y coordinate, and for each point, remember its index in the sorted order. Now, for any given point, you can quickly identify where in the sorted list it is, which immediately tells you how many points are north of it. In particular, if the point is in the last half of the list, then it's north of at least 50% of the points. So, this lets you solve your question immediately.
The running time will be $O(n \lg n)$, the time to sort all of the points.
You can do a similar computation to find the points that are east of at least 50% of the points (simply use the X coordinate instead of the Y coordinate), or the points that are north-east of at least 50% of the points (rotate the coordinate system by 45 degrees, then do the same thing), and so on. So, this lets you accomplish each of your goals.
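Either approach is a few lines of Python. This sketch uses the median variant, with a hypothetical outlier at (100, 100) added to show that a lone far-away point does not drag the medians:

```python
import statistics

# Toy data: a tight cluster plus one "lone star" outlier (made-up points).
points = [(1, 2), (2, 1), (2, 3), (3, 2), (100, 100)]

# Medians are robust: the outlier does not shift them.
x_med = statistics.median(p[0] for p in points)
y_med = statistics.median(p[1] for p in points)

for x, y in points:
    ns = "north" if y > y_med else "south"
    ew = "east" if x > x_med else "west"
    print((x, y), ns, ew)
```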
https://stats.stackexchange.com/questions/20889/how-to-distribute-a-prize-among-a-group-of-people-given-their-scores
# How to distribute a prize among a group of people given their scores?
I asked the same question on math.stackexchange and had some responses, but I would also like to hear some input specifically from statisticians and data analysts and I feel that you guys may have something to offer.
The question is about finding reasonable ways of dividing a prize among $n$ people in the following situation:
To make the example specific, we have $6$ people in total who are going to share a prize of $100$ dollars, and let us denote the amount received by each person $i$ as $q_i$. In addition, each person $i$ is given a score $s_i$, and we can think of $s_i$ as measuring how much of the prize person $i$ deserves. The intuition here is that we would like a higher-scoring person to receive a larger portion of the prize than a lower-scoring person, that is, $q_i\geqslant q_j$ if and only if $s_i\geqslant s_j$. Further, the scores are bounded, so $s_{min} \le s_i \le s_{max}$. The problematic thing here is that $s_i$ can be either negative or positive. For example, $s_1=1.3, s_2=2.1, s_3=-0.8, s_4=-3.7, s_5=0.7, s_6=5.2$.
So, what would be the proper ways of dividing the prize given these scores?
One interesting answer by @opt suggests using the so-called Softmax function from the context of neural networks, which is basically ${\displaystyle p_i=\frac{\exp(s_i)}{\sum_{j=1}^n\exp(s_j)}}$, with $\sum_{i=1}^n p_i=1$. In other words, $p_i$ would be the portion of the prize that $i$ should receive given her score. I would like to hear your thoughts/opinions on this method.
Many thanks.
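The softmax split is easy to sketch in Python for the example scores (an illustration of the suggested method, not a recommendation; the max score is subtracted before exponentiating for numerical stability):

```python
import math

scores = [1.3, 2.1, -0.8, -3.7, 0.7, 5.2]
prize = 100.0

# Numerically stable softmax: shift by the max before exponentiating.
m = max(scores)
exps = [math.exp(s - m) for s in scores]
total = sum(exps)
shares = [prize * e / total for e in exps]

for s, q in zip(scores, shares):
    print(f"score {s:5.1f} -> {q:6.2f} dollars")

# Shares are positive, sum to the prize, and are monotone in the score.
assert abs(sum(shares) - prize) < 1e-9
```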
• There is no unique or best answer to this question. Similar questions have been debated at least since the 1200's and were simply insoluble until a definite probabilistic meaning was given to the scores (in a famous series of letters between Fermat and Pascal in the mid-17th century). To make progress and not be completely arbitrary, you must stipulate something additional about how those scores arise (as one of the math replies indicates). In particular, there's no basis to recommend softmax or anything else.
– whuber
Jan 10 '12 at 17:00
• @whuber, thanks for the comments. There is no particular probabilistic meaning associated with the scores. The scores are derived by a separate process which just measures how well each individual performs in a task, and the higher the score, the better the performance. Now given this measure of performance, I want to devise a way to distribute the prize proportionally to performance, hence my question. Maybe I should remove the "probability" tag from the question, which might be a bit misleading. Jan 10 '12 at 21:12
• A and B sat down to eat having brought $5$ and $3$ loaves respectively to the meal. Hungry C came without any food and asked to share, offering to pay for the food he consumed. The three divided up the food and ate it in equal shares of $8/3$ loaves. C left $8$ euros on the table before departing as payment for the food. A said, "Let's split the money $5$-$3$" but B wanted a $4$-$4$ split. They took the dispute to a judge who gave A $7$ euros and B $1$ euro since A ate $5-8/3$ of his $5$ loaves and gave $7/3$ loaves to C while B ate $3 - 8/3$ of his loaves and gave only $1/3$ loaf to C. Jan 10 '12 at 22:03
• More seriously, here is one way of doing it. If some of the $s_i$ are negative and at least one $s_j$ is positive, divide the prize $A$ into $A/[\sum_i (s_i + s_{\min})]$ pieces, give everybody with a positive score a number of pieces equal to their score. Merge together the remaining pieces of the prize and divide it into equal shares to give to those who had a positive score. Jan 10 '12 at 22:22
• It's ok to ignore probability (or rather, uncertainty), Simon, but that doesn't make the issue go away. Another way to look at it is to note that these scores have no units of measure. Therefore it is impossible to tell whether the range from -3.7 to 5.2 is enormous or inconsequential. The resulting division could be anywhere from giving all the money to the top score (in the former case) to splitting it evenly (in the latter case). This is why any answer is arbitrary (and therefore unfair and entirely without justification) in the absence of more information about what those scores mean.
– whuber
Jan 10 '12 at 22:35
http://docs.bonobo-project.org/en/master/reference/api/bonobo/execution/strategies.html
# Execution Strategies¶
Execution strategies define how an actual job execution will happen. The default and recommended strategy is “threadpool”, for now, which leverages a concurrent.futures.ThreadPoolExecutor to run each node in a separate thread.
In the future, the two strategies that would really benefit bonobo are subprocess and dask/dask.distributed. Feel free to give them a shot.
create_strategy(name=None)[source]
Create a strategy, or return it unchanged if it is already one.
Parameters: name – strategy name (or an existing strategy instance)
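The “threadpool” idea (each node running in its own worker thread) can be illustrated with plain concurrent.futures. This is a generic sketch of the pattern, not bonobo's actual implementation; the extract/transform functions are made up for the example:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical nodes of a tiny job.
def extract():
    return [1, 2, 3]

def transform(rows):
    return [r * 10 for r in rows]

# Run each node on a pool worker thread, passing results downstream.
with ThreadPoolExecutor(max_workers=2) as pool:
    rows = pool.submit(extract).result()
    result = pool.submit(transform, rows).result()

print(result)  # [10, 20, 30]
```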
http://shareme.com/programs/product/to-sum-formulas-proof
Product To Sum Formulas Proof
From Title
1. School formulas - Mobile/Education ... Have you ever searched for a formula to math, physics or chemistry? You had to find it somewhere on the internet? Now you don't have to. This application collects useful equations and formulas in a single application. formulas are divided into 3 sections: math, physics and chemistry. This application contains: Derivation Triangle Polygon Photon Velocity Force Density Molar mass Ideal gas and much more ... ...
2. Chemistry formulas - Mobile/Education ... This application can provide every possible formulas for anyone who study in AP Chemistry. News: This app supports iPad natively with full width and height. It can be used to build your own formulas reference in chemistry. It can store any cheat sheets, round-ups, quick reference cards, quick reference guides and quick reference sheets. You can organize the reference with clicks. Moreover, you can add description to a label on the image. For example, if you want to add more ...
3. Trigonometric formulas - Mobile/Education ... Simple and convenient trigonometric formulas guide. This is a best way for you to learn trigonometric formulas. The following contents are provided : ? Relationship between degrees and radians ? Right triangle definitions ? Circular function definitions ? Exact values for most commonly used angles ? Graphs of trigonometric functions ? Reciprocal Identities ? Quotient Identities ? Pythagorean Identities ? Periodicity Relationships ? Negative angle identities ? Supplementary angle ...
4. formulas for Knitters - Mobile/Business ... formulas for Knitters provides comprehensive information about: - Yarncount - Yarncount conversion formulas - Support for troubleshooting - Listing and description of yarns - A worldwide list of Memminger-IRO GmbH representatives (online only) ...
5. sum It Up - Lite - Mobile/Games ... A complete mind game for those who like puzzles! If you are addicted to Sudoku then heres a challenge for you! Add Columns/Rows/Diagonals to achieve the sum to the bottom and right red region number. Adjust your numbers to reach the goals and Win! - 2 Levels in Lite version! - Timing and Top Scores! - Saves all data, so dont worry about answering a call or replying to a text! - Great thinking game for small waiting periods! - sum It Up Lite has limited time for completion. ***Two ...
6. sum Nums - Mobile/Games ... ***** No.1 Grossing App in Education in Korea. Find the 8 different numbers to solve? You can use zero as a number. Fill in the blanks with 8 different numbers that sum to the number in the centre of the grid. Ignoring the number in the centre each horizontal row should sum to the numbers on the right hand side. Also Ignoring the number in the centre each vertical row should sum to the numbers on the bottom. ...
7. Geometry proof - Mobile/Education ... "Geometry proof" provides you with all of the definitions, postulates, and theorems you will need to ace your geometry class. Our well organized interface makes finding the exact theorem you need a breeze. Theorems are organized logically by section. If you are having trouble finding a particular theorem, you can search through the entire directory or any individual section. Features: - Over 150 definitions, postulates, and theorems - Both Portrait and Landscape views supported ...
8. sum Estimate - Mobile/Productivity ... As construction professionals we are often asked questions like, "Hey Justin, what would it cost to build an office building in Dallas, Texas?" Truthfully few people can quickly access accurate information to answer questions like this one. sum Estimation is devoted to making products that will allow customers to quickly produce conceptual estimates on the go using their iPhone. Our first product, sum Estimate, enables users to create a quick, conceptual, summary level cost estimate using ...
9. proof Wiki - Mobile/Reference ... proof Wiki is a math proofs and definitions wiki (complete with images) that does not require an internet connection to work. The content comes from the great folks at proofwiki.org, so make sure to visit and contribute or donate to the project if you enjoy this app: www.proofwiki.org NOTE: there will be updates with new content, features, improvements and fixes ...
10. Access proof - Utilities/File & Disk Management ... When you delete a file normally, the operating system doesn`t physically delete it, but "hides" it. Then it marks its sector as unoccupied for if it might be necessary to write another file over it. Until that happens, that data continues to be physically stored on disc, with the danger that comes if we want to keep it from prying eyes, as it could be easily recovered using those specific programs designed to do just that.To avoid this, you have AccessProof. It is a free program that before ...
Product To Sum Formulas Proof
From Short Description
1. Hydraulic Engineer - Mobile/Productivity ... HYDRAULIC ENGINEER includes over 60 formulas required by Hydraulic Engineers. The program contains sections on Actuator formulas, Fluid Power formulas, Hydraulic Tubing formulas, Pump formulas, Induction Motor Selection formulas, and Vehicle Drive formulas. There are also 60 Area formulas and 300 Conversion formulas. All formulas in HYDRAULIC ENGINEER can be saved and/or e-mailed. This is version 1.0 of Hydraulic Engineer. If there are more formulas you would like to see added for this app, ...
2. Bool Tool - Mobile/Utilities ... Experience boolean algebra like never before - truth tables don't stand a chance. Tackle complex equations with the built in truth table and sum of products/product of sum generators. Born from the minds that brought you SUPERMASSIVE, Fall of Man, and Super Breakup Simulator 1993 comes a whole new way to do boolean algebra - with style to boot. ...
3. 600 formulas Free - Mobile/Utilities ... Beyond a collection of more than 600 formulas in technical fields enclosed with description and indicated figure, 600 formulas Free also supports users to do calculation directly on these formulas. Moreover you can create your own formulas by using this software. Main Categories: - Astrophysics - Dynamics and mechanics - Electric Engineering - Math - Optics - Thermodynamics Features: - 600+ formulas in technical fields enclosed with description and indicated figure - Supports ...
4. Excel Query Compare - Business & Productivity Tools/Office Suites & Tools ... "Excel Query Compare" is a powerful tool for Excel file processing software, it provides a comprehensive and effective and simple comparison worksheets comparison of security, workbooks comparison, grouping sum, page sum, worksheets link, add blank rows, conditional query, conditional sum nine functional modules. Using "Excel Query Compare" can effectively help users to quickly select and sum of data, to reduce your work time and improve your productivity, so you don't have to in distress for ...
5. Fire Codes - Utilities/Other Utilities ... Fire Codes is for reconstructing a file using a weak check sum and a weighted or strong check sum. Also it can be used to data-mine the collisions off the check sum counter exploring a single weak check-sum for specific types of files. Scramble files. Create random number files. Data mine a check-sum counter with an AI looking for specific files. Data mine a check-sum counter to reconstruct a specific file matching a specific check-sum. Train an AI f ...
6. Civil Engineering Professional - Mobile/Productivity ... CIVIL ENGINEER PRO is perfect for any Civil Engineer or engineering student. Our Civil Engineer app contains over 250 important formulas needed by Civil Engineers, 200 conversion formulas and 100 area formulas. It also includes the complete International Building Code, as taken from the NJ Building Code. Major areas covered in the program now include: Area Formula, International Building Code, Building Charts, Building Code formulas Beams, Bridges, Columns, Concrete, Concrete and Excavating, ...
7. Folder Protect - Security/Encryption Tools ... Folder Protect is a new concept in Data Security. It lets you password protect and set different access rights to your files, folders, drives, installed programs and popular extensions. Folder Protect goes beyond normal file locking and encryption by letting you customize your security and choose between making files inaccessible, hidden, delete-proof or write-protected. The program uses Windows Kernel level protection that even works in Safe Mode ensuring complete security of protected ...
8. Collectors proof - Mobile/Lifestyle ... Collectors proof gives you & your friends a unique way to tag and share stories about your favorite things. You can even transfer items between owners and continue the story. * How It Works... The Collectors proof online community allows its members to register cherished items in our web-based gallery. Each registered item gets a unique Collectors proof ID and corresponding URL, which can be shared with the world. Ownership changes are tracked when items change hands, preserving ...
9. Show Me the Money Lite - Mobile/Games ... Show me the money! is a simple, fun, yet educational brain-tuning math game. Quickly scan through dollar bills and calculate the total sum. If you get the correct answer, you keep 10% of the total sum, but if you make a mistake, you lose the difference between the correct and wrong answer. Good luck and become rich! Features: - Incredibly simple, but amazingly engaging and improves your math skills. - $1,$2, $5,$10, $20,$50, $100,$500, & \$1000 US dollar bills - Three different ...
10. Paycheck Maker - Business & Productivity Tools/Databases & Tools ... Easily create proof of employment, proof of income, verification of employment. ...
Product To Sum Formulas Proof
From Long Description
1. Trigonometry Full Guide Puls - Mobile/Education ... Simple and convenient trigonometric formulas guide. This is a best way for you to learn trigonometric formulas. The following contents are provided : ? Relationship between degrees and radians ? Right triangle definitions ? Circular function definitions ? Exact values for most commonly used angles ? Graphs of trigonometric functions ? Reciprocal Identities ? Quotient Identities ? Pythagorean Identities ? Periodicity Relationships ? Negative angle identities ? ...
2. Karnaugh Analyzer - Educational/Mathematics ... Karnaugh Analyzer is a light and fast program that can analyze Karnaugh Map and not only this!!!! You can input your data in many form (Karnaugh Map, True Table, Function form - Boolean algebra, and in setOCOs of Term) and you can take the function in sum of product (SOP) or in product of sum (POS). ...
3. Trigonometry by WAGmob - Mobile/Reference ... * * * * * WAGmob: Over One million paying customer * * * * * WAGmob brings you Simple 'n Easy, on-the-go learning app for "Trigonometry". The app provides snack sized chapters for easy learning. Designed for both students and adults. This app provides a quick summary of essential concepts in Trigonometry by following snack sized chapters: Introduction *Introduction to Trigonometry. *Triangle Properties: Similar Triangles. *Solving the Height Problem (Using Similar ...
4. CalcBC Pro - Mobile/Education ... Download the Pro Version to support the developer This application was created to assist in Mathematics. It contains many useful formulas for those who are in Calculus AB (I) and Calculus BC (II). It is necessary to know all the formulas given in this app for the Calculus BC AP exam, but there are also other formulas not given that should be known. The CalcBC formulas app contains the following useful formulas: 1. Trig Identities 2. Trig Unit Circle 3. Top 20 Integrals 4. Derivative ...
5. NovoFormula - Educational/Other ... Geotechnical engineers can use this software for day-to-day analysis and calculations. Database of formulas include common correlations such as Cc, Cs, CBR, Es, etc as well as mass-volume relations and formulas. All formulas with their equations are presented in NovoFormula and user can add new equation. Report and export to Excel features are available. ...
6. Ya! Dice - Games/Board ... Try your luck and say "Ya"! The object of this game is to assign the rolled patterns of 5 dice to the best available combination. The 5 dice will be placed on the right of the screen, while the list of combinations will be shown on the left. Each round you can have 3 rolls before assigning the pattern to a certain combination. Click the "Roll the Dice" button at the bottom of the screen, and after the dice are rolled you need to click and select the dice that you would like ...
7. Tip Calculator Deluxe - Mobile/Finance ... Are you tired of calculating tips all the time? Is it difficult for you to decide which sum you should put in after a business lunch or a noisy party? Relax! We are glad to introduce Tip Calculator Deluxe to you. It's a very simple and handy application for tips calculation. All you need is to enter the check sum, number of people at the party and Tip Calculator will immediately define how much each person should pay. Tip Calculator Deluxe will perfectly fit into your everyday life and will ...
8. The Number 23 Calculator - Utilities/Other Utilities ... Inspired by the movie "Number 23". I've written a python program who calculates the sum of a string and he puts out a message if the sum has anything to do with the number 23.http://creativecommons.org/licenses/GPL/2.0/ ...
9. Inlage - Business & Productivity Tools/Office Suites & Tools ... Everyone who has created documents with LaTeX knows that it can be very time consuming, especially when you want to include formulas. Imagine you use a lot of commands like \frac and \sum and forget to close just one bracket, it will give you a lot of syntax errors in your LaTeX compiler and it will take you lots of time to fix it. Here is your solution! Inlage is a professional application designed to simply translate the output of the Math Input Panel into LaTeX code. for WindowsAll ...
10. SpreeFormulas - Mobile/Utilities ... Spree formulas is an application for viewing formulas for Math, Physics and viewing the Periodic System of Elements in Chemistry. ...
https://educationalresearchtechniques.com/2017/01/25/generalized-additive-models-in-r/?shared=email&msg=fail
|
# Generalized Additive Models in R
In this post, we will learn how to create a generalized additive model (GAM). GAMs are non-parametric generalized linear models: the linear predictor of the model uses smooth functions of the predictor variables. As such, you do not need to specify the functional relationship between the response and continuous variables. This lets you explore the data for potential relationships that can then be tested more rigorously with other statistical models.
In our example, we will use the "Auto" dataset from the "ISLR" package and use the variables "mpg", "displacement", "horsepower", and "weight" to predict "acceleration". We will also use the "mgcv" package. Below is some initial code to begin the analysis.
library(mgcv)
library(ISLR)
data(Auto)
We will now build the model in order to understand the response of "acceleration" to the explanatory variables "mpg", "displacement", "horsepower", and "weight". After fitting the model we will examine the summary. Below is the code.
model1<-gam(acceleration~s(mpg)+s(displacement)+s(horsepower)+s(weight),data=Auto)
summary(model1)
##
## Family: gaussian
##
## Formula:
## acceleration ~ s(mpg) + s(displacement) + s(horsepower) + s(weight)
##
## Parametric coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 15.54133 0.07205 215.7 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Approximate significance of smooth terms:
## edf Ref.df F p-value
## s(mpg) 6.382 7.515 3.479 0.00101 **
## s(displacement) 1.000 1.000 36.055 4.35e-09 ***
## s(horsepower) 4.883 6.006 70.187 < 2e-16 ***
## s(weight) 3.785 4.800 41.135 < 2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## R-sq.(adj) = 0.733 Deviance explained = 74.4%
## GCV = 2.1276 Scale est. = 2.0351 n = 392
All of the explanatory variables are significant, and the adjusted R-squared is 0.73, which is excellent. "edf" stands for "effective degrees of freedom"; this modified version of the degrees of freedom is due to the smoothing process in the model. GCV stands for generalized cross-validation, and this number is useful when comparing models: the model with the lowest value is the better model.
We can also examine the model visually with the "plot" function. This allows us to check whether the curvature fitted by the smoothing process was useful for each variable. Below is the code.
plot(model1)
We can also look at a 3D graph that includes the linear predictor along with the two strongest predictors. This is done with the "vis.gam" function. Below is the code.
vis.gam(model1)
If multiple models are developed, you can compare their GCV values to determine which model is best. Another way to compare models is with the "AIC" function. In the code below, we will create an additional model that includes "year", compare the GCV scores, and calculate the AIC. Below is the code.
model2<-gam(acceleration~s(mpg)+s(displacement)+s(horsepower)+s(weight)+s(year),data=Auto)
summary(model2)
##
## Family: gaussian
##
## Formula:
## acceleration ~ s(mpg) + s(displacement) + s(horsepower) + s(weight) +
## s(year)
##
## Parametric coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 15.54133 0.07203 215.8 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Approximate significance of smooth terms:
## edf Ref.df F p-value
## s(mpg) 5.578 6.726 2.749 0.0106 *
## s(displacement) 2.251 2.870 13.757 3.5e-08 ***
## s(horsepower) 4.936 6.054 66.476 < 2e-16 ***
## s(weight) 3.444 4.397 34.441 < 2e-16 ***
## s(year) 1.682 2.096 0.543 0.6064
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## R-sq.(adj) = 0.733 Deviance explained = 74.5%
## GCV = 2.1368 Scale est. = 2.0338 n = 392
#model1 GCV
model1$gcv.ubre
## GCV.Cp
## 2.127589
#model2 GCV
model2$gcv.ubre
## GCV.Cp
## 2.136797
As you can see, the second model has a higher GCV score when compared to the first model. This indicates that the first model is a better choice. This makes sense because in the second model the variable “year” is not significant. To confirm this we will calculate the AIC scores using the AIC function.
AIC(model1,model2)
## df AIC
## model1 18.04952 1409.640
## model2 19.89068 1411.156
Again, you can see that model1 is better due to its fewer degrees of freedom and slightly lower AIC score.
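The AIC trade-off described above can be sketched in a few lines of Python. The log-likelihood values below are hypothetical, roughly back-solved from the R output so the comparison comes out the same way:

```python
def aic(log_likelihood, k):
    # Akaike Information Criterion: penalize fit quality by parameter count.
    return 2 * k - 2 * log_likelihood

# Approximate log-likelihoods and effective degrees of freedom for two models.
models = {
    "model1": aic(log_likelihood=-686.8, k=18.0),
    "model2": aic(log_likelihood=-685.7, k=19.9),
}
best = min(models, key=models.get)
print(best)  # → model1
```

The model with the lower AIC wins even though model2 fits slightly better, because the extra degrees of freedom cost more than they buy.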
Conclusion
GAMs are most commonly used for exploring potential relationships in your data, because they are difficult to interpret and summarize. Therefore, it is often preferable to develop a generalized linear model rather than a GAM, given the difficulty of understanding what the data are telling you when using GAMs.
http://kspse.org/_PR/view/?aidx=26050&bidx=2188
|
[ Article ]
Journal of Power System Engineering - Vol. 24, No. 5, pp.92-101
ISSN: 2713-8429 (Print) 2713-8437 (Online)
Print publication date 31 Oct 2020
Received 06 Oct 2020 Revised 13 Oct 2020 Accepted 15 Oct 2020
# A Study on Vessel Motion Control with Towing Ropes and Dampers for Berthing
Young-Bok Kim* ; Hwan-Cheol Park** ; Chang-Woo Kim***,
*Professor, Department of Mechanical System Engineering, Pukyong National University.
**First Engineer, Training Ship Kaya, Pukyong National University.
***Professor, Korea Institute of Maritime and Fisheries Technology.
Correspondence to: Chang-Woo Kim : Professor, Korea Institute of Maritime and Fisheries Technology.: ckddn12@daum.net, Tel : 051-629-6197
## Abstract
The vessel maneuvering problem in the harbour area is generating considerable interest in terms of marine cybernetics. The vessel is operated by the pilot and moves at extremely low or zero speed in a shallow water area. In this case, the vessel is usually assisted by the cooperation of thrusters, the main propulsion system, tugboats, pilots, etc. In this paper, we suggest a new vessel berthing technique using dampers and winches as a solution for complex and dangerous berthing work. In the proposed berthing method, we design the controller to counteract waves and other effects, guaranteeing safety during the berthing process. We also conduct experiments to verify the proposed berthing method and the effectiveness of the designed control system under unpredictable external force. Finally, the experiment results show that the vessel approaches the defined position in time under the effective control action of the designed controller.
## Keywords:
Vessel, Tension, Harbour, Berthing system, Winch
## 1. Introduction
Vessel berthing is widely considered the most complicated process in the marine control and automation field.1-8) One reason is that the hydrodynamics of the vessel change considerably, and its controllability is greatly reduced, when the vessel moves from deep to shallow water. Furthermore, when only the main propellers and thrusters are manipulated, dangerous collisions can easily occur between the maneuvered vessel and moored ones because of unpredictable vessel motion.4)
Various approaches have been tried to solve this problem. Bui et al.5-8) conducted automatic ship berthing using bow and stern thrusters. The authors introduced a steering motion model of a vessel and identified all of the model parameters.
An observer-based optimal controller was designed to cope with the uncertain ship dynamics and to obtain desirable control performance by estimating the system states. This approach seems useful, but the reduced actuator controllability under low-speed conditions has not been fully resolved.
In 1994, Hasegawa and Fukutomi suggested an artificial neural network control strategy for automatic vessel berthing.6) However, the conventional methods proposed up to that time were automatic maneuvering technologies that could not provide practical solutions for completing the final berthing work.
Therefore, the authors propose a new method to complete the final berthing process while preserving working safety. The proposed system configuration is illustrated in Fig. 1. As seen in Fig. 1, two cylinders and two winch systems are prepared for controlling the ship motion. By pulling the vessel with the winch ropes and pushing it with the dampers, we can move the vessel to its final position more safely. This means that the complicated berthing work can be completed by controlling the winches and dampers with the designed control system9-15) without any supporting system. The effectiveness and usefulness of the proposed method are clearly demonstrated by the experiment results.
Schematic drawing of the proposed berthing system
## 2. Problem Statement
The system proposed and intended for the experiment is shown in Fig. 2. The authors use a vessel model to execute the berthing experiment, in which two pairs of motors and cylinders are installed on the harbor side.
Experimental equipment configuration
The proposed idea and system configuration are motivated by the final berthing condition. The most complicated area (or distance) for berthing may be confined to within 10 m from the ship to the quay side. In this area, tugboat and pilot assistance is generally needed to ensure a safe working process.
The object of this study is to present an easier and safer method than conventional technologies. In the proposed method, no additional assistance system such as a tugboat or pilot is necessary, apart from the damper and winch systems installed on the land side. The feasibility of the proposed method is demonstrated by the following experiments.
As previously mentioned, two pairs of winches and damper cylinders are prepared for controlling the ship motion. To start the berthing work, the ends of the dampers are first attached and the winch ropes are connected to the vessel simultaneously.
After that, the berthing process starts by controlling the pulling and pushing forces generated by the actuators. This means that desirable control performance and safe berthing work can be achieved by controlling the winch and damper systems properly and effectively.
### 2.1 System description
The system configuration of the controlled system illustrated in Fig. 2 is represented as Fig. 3.
System configuration of vessel berthing system with dampers and winches
Then, let us describe a model of the controlled system based on Fig. 3. The dynamic equations of motion in the horizontal plane of the controlled vessel, assisted by dampers and winches, are written as follows:5,7,8)
$$\dot{\eta} = R(\psi)\nu \tag{1}$$
$$M\dot{\nu} + D\nu = \tau. \tag{2}$$
$\eta = [x, y, \psi]^T$ describes the inertial position and the heading angle in the earth-fixed coordinate frame. $\nu = [u, v, r]^T$ represents the surge, sway and yaw rates of the vessel in the body-fixed coordinate frame. $R(\psi)$, used to express the kinematic equation of motion, is the rotation matrix in the yaw direction,
$$R(\psi) = \begin{bmatrix} \cos\psi & -\sin\psi & 0 \\ \sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{3}$$
And M is the inertia matrix, which includes the added mass:
$$M = \begin{bmatrix} m - X_{\dot{u}} & 0 & 0 \\ 0 & m - Y_{\dot{v}} & -Y_{\dot{r}} \\ 0 & -N_{\dot{v}} & I_Z - N_{\dot{r}} \end{bmatrix}, \tag{4}$$
where m is the mass of the vessel, $I_Z$ is the moment of inertia about the z-axis of the vessel, and $X_{\dot{u}}, Y_{\dot{v}}, Y_{\dot{r}}, N_{\dot{v}}, N_{\dot{r}}$ describe the hydrodynamic added mass. D is the damping matrix,
$$D = \begin{bmatrix} -X_u & 0 & 0 \\ 0 & -Y_v & -Y_r \\ 0 & -N_v & -N_r \end{bmatrix}. \tag{5}$$
where $X_u, Y_v, Y_r, N_v, N_r$ represent the linear damping coefficients. $\tau = [\tau_x, \tau_y, \tau_\psi]^T$ is the vector of control forces in the surge and sway directions and the yaw moment. In this paper, τ is the result of the combined efforts of the dampers and winches shown in Fig. 1.
$$\tau = B(\alpha)f. \tag{6}$$
Here, $f = [f_{D1}, f_{D2}, f_{W1}, f_{W2}]^T$ represents the pushing forces of the dampers and the pulling forces of the winches, and $\alpha = [\alpha_{D1}, \alpha_{D2}, \alpha_{W1}, \alpha_{W2}]^T$ defines the force directions of the dampers and winches.
B(α) is the geometric configuration matrix which captures the relationship between the dampers, the winches and the vessel,
$$B(\alpha) = \begin{bmatrix} c\alpha_{D1} & s\alpha_{D1} & -l_{D1}c\alpha_{D1} + l_{D1}s\alpha_{D1} \\ c\alpha_{D2} & s\alpha_{D2} & -l_{D2}c\alpha_{D2} + l_{D2}s\alpha_{D2} \\ c\alpha_{W1} & s\alpha_{W1} & -l_{W1}c\alpha_{W1} + l_{W1}s\alpha_{W1} \\ c\alpha_{W2} & s\alpha_{W2} & -l_{W2}c\alpha_{W2} + l_{W2}s\alpha_{W2} \end{bmatrix}^T. \tag{7}$$
where $s\alpha$ represents $\sin(\alpha)$ and $c\alpha$ represents $\cos(\alpha)$.
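As a numeric sanity check, the allocation τ = B(α)f can be sketched in pure Python. The actuator numbers below (unit moment arms, dampers pushing along the x-axis, winches pulling along the y-axis) are hypothetical, chosen only for illustration:

```python
import math

def allocate(forces, angles, arms):
    """tau = B(alpha) f: sum each actuator's contribution to surge, sway, yaw.

    Per actuator i, the column of B is
    [cos(a_i), sin(a_i), -l_i*cos(a_i) + l_i*sin(a_i)].
    """
    tau = [0.0, 0.0, 0.0]
    for f, a, l in zip(forces, angles, arms):
        tau[0] += f * math.cos(a)                            # surge force
        tau[1] += f * math.sin(a)                            # sway force
        tau[2] += f * (-l * math.cos(a) + l * math.sin(a))   # yaw moment
    return tau

# Two dampers (1 N each, pushing along +x) and two winches (2 N each, pulling along +y).
tau = allocate([1.0, 1.0, 2.0, 2.0],
               [0.0, 0.0, math.pi / 2, math.pi / 2],
               [1.0, 1.0, 1.0, 1.0])
```

With these numbers, tau comes out to roughly [2, 4, 2]: 2 N of surge force, 4 N of sway force, and a net 2 N·m yaw moment, since the winch moments outweigh the damper moments.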
### 2.2 Vessel mooring system modeling
Based on the prepared experimental apparatus, the authors performed a system identification process to describe the dynamic characteristics of the vessel berthing system for a half model. The dynamic equations of the berthing system, consisting of the pulling and pushing mechanisms, are represented as follows (i = 1, 2):
$$M_s\ddot{y}_{si} + b_{si}\dot{y}_{si} + k_{ri}(y_{si} - y_{wi}) = 0. \tag{8}$$
$$J_{wi}\ddot{\theta}_{wi} + b_{wi}\dot{\theta}_{wi} + k_{wi}\theta_{wi} = T_{wi} - r_{wi}T_i. \tag{9}$$
Here, $M_s$ ($= M/2$) is the mass of the vessel, $b_{si}$ the total damping coefficient of the vessel, $k_{ri}$ the rope stiffness, $y_{si}$ the displacement of the vessel in the $y_e$ direction, and $y_{wi}$ the wound rope length from the winch. $J_{wi}, b_{wi}, k_{wi}, T_{wi}, r_{wi}$ are the inertia moment, damping coefficient, stiffness, torque and radius of the winch system, respectively. Using the following relations:
$\begin{array}{c}{T}_{i}={k}_{ri}\left({y}_{wi}-{y}_{si}\right)\hfill \\ ={k}_{ri}{r}_{wi}{\theta }_{wi}-{k}_{ri}{y}_{si.}\hfill \end{array}$ (10)
${T}_{wi}={k}_{wi}{v}_{i}.$ (11)
then Eq. (8) and Eq. (9) are represented as follows:
${\stackrel{¨}{y}}_{si}=-\frac{{b}_{si}}{{M}_{s}}{\stackrel{˙}{y}}_{si}-\frac{{k}_{ri}}{{M}_{s}}{y}_{si}+\frac{{k}_{ri}{r}_{wi}}{{M}_{s}}{\theta }_{wi}.$ (12)
${\stackrel{¨}{\theta }}_{wi}={a}_{i}{v}_{i}+{b}_{i}{\stackrel{˙}{\theta }}_{wi}+{c}_{i}{\theta }_{wi}-{f}_{i}\left(t,{y}_{si}\right).$ (13)
Where,
$\begin{array}{c}{a}_{i}={k}_{wi}/{J}_{wi},{b}_{i}=-{b}_{wi}/{J}_{wi},\hfill \\ {c}_{i}=-\left({k}_{wi}+{r}_{wi}^{2}{k}_{ri}\right)/{J}_{wi},\hfill \end{array}$
which are identified by experiment. Here, ui = vi is the voltage input to the winch actuator, and fi(t,ysi) collects the uncertain parts that cannot be exactly described or are ignored in the model.
In particular, in order to identify the parameters in Eq. (13), several experiments were carried out; one result is illustrated in Fig. 4.
Step response for identifying the system parameters. (a) is the voltage input to the winch actuator(motor). (b) is the tension response
In this case, 10 V of electric power (shown in Fig. 4(a)) was applied to the winch system, and the resulting rope tension force is shown in Fig. 4(b). In this figure, the dotted line is the tension force of the real system obtained from the experiment, and the solid line is the tension force calculated from the identified model.
Furthermore, using the relations in Eq. (10)~(13), the parameter values are calculated as ai = 132, bi = 2.8 and ci = 26.2.
### 2.3 Berthing strategy
We may provide many types of berthing techniques and methods based on the proposed system configuration. In the proposed berthing system, two dampers and two winches are provided and controlled to achieve the given objective.
Here, the authors introduce a comparatively simple berthing strategy, called a semi-automatic berthing method. In this method, the dampers are manually controlled, while the winches are automatically controlled so that the rope tension forces are kept at the defined values. For example, if the damper control valves are closed, the winches rotate and pull the vessel until the tension forces approach the target values. This means that the vessel motion and position depend on the extended length of the damper cylinders under the appropriately defined rope tension.
For a fully automatic berthing system, both actuator systems should be actively controlled. However, the semi-automatic berthing method is the most practical one for real working conditions and environments. Based on this idea, the authors carried out berthing experiments several times; the results are shown in the next chapter.
## 3. Experiment
### 3.1 Experimental apparatus
In order to evaluate the usefulness of the designed control system, an experimental apparatus was set up and tested in a water basin, using a vessel model (weight: 215 kg, length: 2 m, width: 1 m).
The vessel motions are measured and controlled by data acquisition system PCI-6229 (NI) with LabVIEW software.
A pair of winch and damper systems is installed on the harbor side. The distance from the vessel to the harbor is measured by a laser sensor mounted on the plate, the rope tension by a load cell, and the cylinder pressure by a pressure sensor. All apparatus used for the experiment is represented in Fig. 2 and Table 1.
Specifications of the winch systems
### 3.2 Controller design
We design a controller based on the sliding mode control scheme in order to maintain the desired rope tension. In particular, the super-twisting algorithm is adopted for the sliding mode controller design.
The sliding surface is given as follows:
${s}_{i}=\stackrel{˙}{{e}_{i}}+{m}_{i}{e}_{i}.$ (14)
where mi is a positive parameter. The error ei between the desired winch angle θwid and the actual angle θwi is defined as follows:
${e}_{i}={\theta }_{wid}-{\theta }_{wi}.$ (15)
Taking the time derivative of Eq. (14) gives
$\begin{array}{c}\stackrel{˙}{{s}_{i}}=\stackrel{¨}{{e}_{i}}+{m}_{i}\stackrel{˙}{{e}_{i}}\hfill \\ \mathrm{ }={\stackrel{¨}{\theta }}_{wid}-{\stackrel{¨}{\theta }}_{wi}+{m}_{i}\stackrel{˙}{{e}_{i}}\hfill \end{array}$ (16)
From Eq. (14) and Eq. (16), the following Eq. (17) is obtained, with the sliding manifold condition defined as Eq. (18).
$\stackrel{˙}{{s}_{i}}=\stackrel{¨}{{\theta }_{wid}}-{a}_{i}{v}_{i}-{b}_{i}{\stackrel{˙}{\theta }}_{wi}-{c}_{i}{\theta }_{wi}+{f}_{i}\left(t,{y}_{si}\right)+{m}_{i}\stackrel{˙}{{e}_{i}}.$ (17)
${s}_{i}\cong \left\{{\theta }_{wi}\in R;{\theta }_{wi}={\stackrel{˙}{\theta }}_{wi}=0\right\}.$ (18)
In Eq. (17), fi(t,ysi) denotes the uncertain perturbation, whose derivative satisfies $\left|{\stackrel{˙}{f}}_{i}\left(t,{y}_{si}\right)\right|\le {\delta }_{i}$, where δi (> 0) is the upper bound of the derivative of the unknown perturbation. The control law based on the super-twisting algorithm is then calculated as follows:
${u}_{i}=\frac{1}{{a}_{i}}\left({u}_{is}+{u}_{iSTA}\right).$ (19)
Where
$\begin{array}{c}{u}_{is}={\stackrel{¨}{\theta }}_{wid}-{b}_{i}\stackrel{˙}{{\theta }_{wi}}-{c}_{i}{\theta }_{wi}+{m}_{i}\stackrel{˙}{{e}_{i},}\\ {u}_{iSTA}={K}_{i}{\left|{s}_{i}\right|}^{1/2}sign\left({s}_{i}\right)-{\zeta }_{i},\\ {\stackrel{˙}{\zeta }}_{i}=-{H}_{i}sign\left({s}_{i}\right).\end{array}$ (20)
The function sign( • ) is defined as follows:
$sign\left(s\right)=\begin{cases}1,& s>0\\ 0,& s=0\\ -1,& s<0\end{cases}$ (21)
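As an illustration only (not from the paper), the following Python sketch simulates one winch channel under the super-twisting law of Eqs. (19)-(20). The identified values ai = 132, bi = 2.8 and ci = 26.2 from Section 2.2 are used; the surface slope, the gains K and H, the disturbance, and the constant reference are assumed values for the example, and the winch model is taken in the form θ̈ = a·u + b·θ̇ + c·θ + f with a bounded disturbance f.

```python
import math

# Identified winch parameters from Section 2.2; the remaining values are assumed
a, b, c = 132.0, 2.8, 26.2
m = 5.0              # sliding-surface slope m_i (assumed)
K, H = 8.0, 4.0      # super-twisting gains (assumed; H exceeds the disturbance-rate bound)
theta_d = 1.0        # constant desired winch angle, so its time derivatives vanish
dt, T = 1e-3, 3.0

theta = dtheta = zeta = 0.0
t = 0.0
while t < T:
    f = 0.5 * math.sin(2.0 * t)              # bounded disturbance f_i(t), |df/dt| <= 1
    e, de = theta_d - theta, -dtheta         # tracking error and its rate, Eq. (15)
    s = de + m * e                           # sliding surface, Eq. (14)
    sgn = math.copysign(1.0, s)
    u_s = -b * dtheta - c * theta + m * de   # equivalent part u_is of Eq. (20)
    u_sta = K * math.sqrt(abs(s)) * sgn - zeta   # super-twisting part u_iSTA
    u = (u_s + u_sta) / a                    # control law, Eq. (19)
    zeta += -H * sgn * dt                    # integral state of Eq. (20)
    ddtheta = a * u + b * dtheta + c * theta + f   # assumed winch model
    dtheta += ddtheta * dt
    theta += dtheta * dt
    t += dt

print(abs(theta_d - theta))   # tracking error after 3 s: small
```

With these illustrative gains the tracking error settles to a small value despite the disturbance; the gains are not the paper's Eq. (24) values.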
Then, we need to verify the system stability under the control law of Eq. (19).
By substituting Eq. (19) into Eq. (17), the sliding-mode dynamics are obtained as follows:
${\stackrel{˙}{s}}_{i}=-{K}_{i}{\left|{s}_{i}\right|}^{1/2}sign\left({s}_{i}\right)+{\zeta }_{i}+{f}_{i}\left(t,{y}_{si}\right).$ (22)
where Ki and Hi are positive control gains. The system stability is then preserved if the controller gains satisfy the following conditions:
(23)
The stability condition can be proved by introducing a Lyapunov candidate function; the proof is omitted here.12-14)
Finally, the gains of the winch controller shown in Eq. (20) are chosen as follows:
(24)
### 3.3 Experiment results
The objectives of this study are to keep the target rope tension and to move the vessel to the desired position simultaneously. In particular, the rope tension should remain at the defined value throughout the entire berthing process.
The authors performed experiments with the designed controller. The experiment scenario is illustrated in Fig. 5; it imposes no special constraints and reflects real application conditions. First, a berthing experiment was carried out in the nominal condition without disturbance. The results are illustrated in Fig. 6 (bow side) and Fig. 7 (stern side), where (a) is the vessel position, (b) the rope tension and (c) the damper pressure. By controlling the winch and damper systems, excellent vessel berthing control performance was achieved, as seen in the figures. A further experiment was then performed considering some uncertainties.
Berthing experiment scenario
Dynamic responses of the bow side
Dynamic responses of the stern side
While the vessel moves to the target position, the rope tension should be kept as close as possible to the defined value in spite of disturbances.
In this case, the target rope tension was set to 120 N, shown as the red dotted line in Fig. 8(b). Fig. 8(a) shows the vessel position during the approach to the final position. While the vessel approaches the target position (0.57 m), external disturbances strike it several times: at 3 s, 7 s and 11 s the vessel is pushed toward the harbor side, and at 14 s and 18 s it is pulled away from it.
Experiment results when the disturbance attacks the vessel
Under the aforementioned conditions, the vessel reaches the final position in 14 s by controlling the two actuators. The controlled vessel motion is illustrated in Fig. 8(a); the rope tension and the winch control signal are shown in Fig. 8(b) and (c), respectively; and Fig. 8(d) shows the pressure change of the damper cylinder during the entire berthing process.
As the experimental results show, the vessel is well controlled and moves robustly to the final position as intended, regardless of the disturbances.
## 4. Conclusions
In conventional berthing technologies, where rudders, main propellers, thrusters, tugboats and mooring lines are generally used, there are many limits to the safety of the berthing operation. Although advanced technologies have been developed and applied in many industrial fields, berthing work still depends strongly on human power and remains at a pre-modern technology level.
To overcome this primitive berthing method, the authors proposed a new berthing technique combining dampers and winches. The proposed idea was motivated by real working conditions: conventionally, berthing apparatuses must be prepared on the land and sea sides of the harbour simultaneously. In this study, the authors proposed a berthing method using only apparatuses installed on the land side, without any sea-side equipment. This is the novelty of this research, and it brings numerous advantages; a representative one is the reduction of air pollution obtained by not using tugboats. Beyond that, further benefits of the proposed technique are expected.
## Author contributions
C. W. Kim; Conceptualization, Formal analysis, Methodology and Writing-review & editing. C. W. Kim and H. C. Park; Validation & editing. Y. B. Kim; Writing-original draft, Project administration and Supervision.
## References
• K. H. Kim, B. G. Kim and Y. B. Kim, 2018, "A study on the optimal tracking control system design for automatic ship berthing", Journal of the Korean Society for Power System Engineering, Vol. 22, No. 4, pp. 72-80. [https://doi.org/10.9726/kspse.2018.22.4.072]
• Y. B. Kim, Y. W. Choi and G. H. Chae, 2006, "A study on the development of automatic ship berthing system", Proceedings of the Korean Society for Power System Engineering Autumn Conference, pp. 419-423.
• Y. B. Kim, Y. W. Choi, J. H. Suh and K. S. Lee, 2006, "A study on the development of the real-time detection technique for automatic ship berthing", Proceedings of the Korean Society for Power System Engineering Spring Conference, pp. 306-313.
• Y. Zhang, G. E. Hearn and P. Sen, 1997, "A multivariable neural controller for automatic ship berthing", IEEE Control Systems, Vol. 17, No. 2, pp. 31-45. [https://doi.org/10.1109/37.608535]
• V. P. Bui, J. H. Jeong, Y. B. Kim and D. W. Kim, 2010, "Optimal control design for automatic ship berthing by using bow and stern thrusters", Journal of Ocean Engineering and Technology, Vol. 24, No. 2, pp. 10-17.
• K. Hasegawa and T. Fukutomi, 1994, "On harbour maneuvering and neural control system for berthing with tug operation", Proceedings of International Conference Maneuvering and Control of Marine Craft, pp. 197-210.
• V. P. Bui, H. Kawai, Y. B. Kim and K. S. Lee, 2011, "A ship berthing system design with four tug boats", Journal of Mechanical Science and Technology, Vol. 25, No. 5, pp. 1257-1264. [https://doi.org/10.1007/s12206-011-0215-4]
• V. P. Bui and Y. B. Kim, 2011, "Development of constrained control allocation for ship berthing by using autonomous tugboats", International Journal of Control Automation and Systems, Vol. 9, No. 6, pp. 1203-1208. [https://doi.org/10.1007/s12555-011-0622-4]
• C. Edwards and S. K. Spurgeon, 1998, "Sliding mode control theory and applications", T. J. International Ltd, Padstow, UK. (ISBN:9780748406012) [https://doi.org/10.1201/9781498701822]
• S. K. Spurgeon, 2008, "Sliding mode observers: a Survey", International Journal of System Science, Vol. 39, No. 8, pp. 751-764. [https://doi.org/10.1080/00207720701847638]
• L. Fridman, J. Moreno and R. Iriarte, 2011, "Sliding modes after the first decade of the 21st century", Springer. [https://doi.org/10.1007/978-3-642-22164-4]
• J. Rivera, L. Garcia, C. Mora, J. J. Raygoza and S. Ortega, 2011, "Sliding mode control", InTech, India. (ISBN: 978-953-307-162-6)
• A. Bacciotti and L. Rosier, 2005, "Liapunov functions and stability in control theory", Springer. (ISBN: 978-3-540-21332-1) [https://doi.org/10.1007/b139028]
• A. Barth, M. Reichhartinger, J. Reger, M. Horn and K. Wulff, 2015, "Lyapunov-design for a super-twisting sliding-mode controller using the certainty-equivalence principle", International Federation of Automatic Control, Vol. 2015, No. 48-11, pp. 860-865. [https://doi.org/10.1016/j.ifacol.2015.09.298]
• J. Davila, L. Fridman, and A. Levant, 2005, "Second-order sliding-mode observer for mechanical systems", IEEE Transactions on Automatic Control, Vol. 50, No. 11, pp. 1785-1789. [https://doi.org/10.1109/TAC.2005.858636]
### Fig. 1
Schematic drawing of the proposed berthing system
### Fig. 2
Experimental equipment configuration
### Fig. 3
System configuration of vessel berthing system with dampers and winches
### Fig. 4.
Step response for identifying the system parameters. (a) is the voltage input to the winch actuator(motor). (b) is the tension response
### Fig. 5
Berthing experiment scenario
### Fig. 6
Dynamic responses of the bow side
### Fig. 7
Dynamic responses of the stern side
### Fig. 8
Experiment results when the disturbance attacks the vessel
### Table 1
Specifications of the winch systems
| Items | Value | Unit |
| --- | --- | --- |
| Winch drum diameter | 46 | mm |
| DC motor power supply | 48 | V |
| DC motor nominal torque | 192 | N·m |
| DC motor power | 150 | W |
| DC motor gear ratio | 74:1 | – |
| Encoder resolution | 500 | ppr |
| Motor driver power | 250 | W |
https://www.nablu.com/2010/05/fake-data-part-3-bypassing-central.html
### Fake data, part 3: Bypassing the central limit theorem
May 2017: This is from my other blog that's no longer online. The original comments are no longer available, but you are welcome to add more.
Now that I have my black swan distribution, and I verified that it fits market returns remarkably well while being mathematically tractable, I want to use it to generate artificial market data.
Generating a series of closing prices is easy.
1. Select a starting price, take the logarithm
2. Select a mean μ and standard deviation σ of log returns ln(P0/P1) where P0 is the most recent price and P1 is the previous price.
3. Generate a uniformly-distributed random probability p between 0 and 1. Plug it into the inverse black swan distribution (using a=1.6):
$$B^{-1}(p;\mu,s) = \mu - 2 s \, \sinh \left[\frac {\tanh^{-1}(1-2p)} {a} \right]$$
4. Add the result to a running total.
5. Go to step 3. Repeat as often as desired.
Then simply calculate the antilog or exponential of the values in the running total to get prices.
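The recipe above translates directly into code. This Python sketch is my own implementation of steps 1-5; the starting price and the values of μ and s are arbitrary examples, not fitted to any market.

```python
import math
import random

def inv_black_swan(p, mu, s, a=1.6):
    # Inverse black swan distribution (step 3)
    return mu - 2.0 * s * math.sinh(math.atanh(1.0 - 2.0 * p) / a)

def generate_closes(p0, mu, s, n, seed=42):
    """Generate n closing prices as a random walk in log-price space."""
    rng = random.Random(seed)
    log_price = math.log(p0)       # step 1: logarithm of the starting price
    prices = []
    for _ in range(n):
        p = rng.random()                          # uniform p in [0, 1)
        log_price += inv_black_swan(p, mu, s)     # steps 3-4: accumulate a log return
        prices.append(math.exp(log_price))        # antilog recovers the price
    return prices

closes = generate_closes(100.0, mu=0.0005, s=0.007, n=250)
```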
This works fine. But what if we want to generate more than just single prices? How do I get a series of daily price bars, with high, low, and closing values for my fake market? The way to mimic real market behavior is to generate a series of n values for each "day" and record the highest, lowest, and final price for high, low, and close of the day. That is, define Sd as a daily step and Si as an intraday step:
$$S_d = \sum_{i=1}^n S_i$$ where $$S_i = B^{-1}(p,\mu_i,s_i) = \mu_i - 2 s_i \, \sinh \left[\frac {\tanh^{-1}(1-2p)} {a} \right]$$ $$\mu_i = \frac{\mu_d}{n}, \quad s_i = \frac {s_d}{\sqrt n}$$
The scaling of the intraday mean μi and standard deviation σi (corresponding to the shape parameter si) is necessary to preserve the desired mean μd and standard deviation σd for daily values. As described in the previous part, the approximation for the distribution shape parameter s is:
$$s \approx \frac{\sqrt{6}\,\sigma} {\pi \tanh^{-1} \left(\frac{1}{a}\right) \sqrt{\tanh^{-1} \left(\frac{1}{a}\right)^2+2}}$$ ...using a=1.6. But look at what happened! A disaster. The figure shows the resulting distributions for 1, 10, and 100 intraday steps. Look at the big rounded top made up of green dots representing n=100. As the number of intraday steps n grows, the resulting distribution of daily values approaches a normal distribution.
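The takeover can be watched numerically. This Python sketch (mine, not from the original post) applies the s-from-σ approximation above and measures the sample excess kurtosis of daily sums built from n intraday black swan steps; the fat-tailed n = 1 case shrinks toward the normal value of 0 as n grows.

```python
import math
import random

def inv_bs(p, mu, s, a=1.6):
    # Inverse black swan distribution
    return mu - 2.0 * s * math.sinh(math.atanh(1.0 - 2.0 * p) / a)

def shape_from_sigma(sigma, a=1.6):
    # The approximation above: shape parameter s for a target std. deviation sigma
    t = math.atanh(1.0 / a)
    return math.sqrt(6.0) * sigma / (math.pi * t * math.sqrt(t * t + 2.0))

def excess_kurtosis(xs):
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / (m2 * m2) - 3.0    # 0 for a normal distribution

rng = random.Random(0)
sigma_d = 0.01
kurt = {}
for n in (1, 10, 100):
    s_i = shape_from_sigma(sigma_d / math.sqrt(n))   # intraday scaling from the text
    kurt[n] = excess_kurtosis(
        [sum(inv_bs(rng.random(), 0.0, s_i) for _ in range(n)) for _ in range(4000)])
```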
I'm back where I started! The whole point was to find something better than a normal distribution to model the markets. I don't want to end up with a normal distribution, I want to end up with my black swan distribution.
So what happened? The central limit theorem, evidently the "sovereign law" of probability theory, came in and took over. It says that the sum of many random variables with a finite mean and variance will be normally distributed, regardless of the underlying distribution we start with.
Although the black swan distribution has infinite kurtosis, it does have a finite mean and variance. This means, if we generate a bunch of tiny black swan steps to build large "daily" steps in a random walk, the large steps will approximate a normal distribution.
How can I break this law?
After much experimentation, I found a way to get past the central limit theorem. If we perturb μd to vary black-swanly with each daily step (not each intraday step), we get values that still appear distributed according to black swan, and so are the sums! $$\large \varepsilon = \begin {cases} \frac{ B^{-1}(p,0,s_d)}{\sqrt{n}\sqrt{n+1}} = \frac{\mu_d - 2 s_d \, \sinh \left[\frac {\tanh^{-1}(1-2p)}{a} \right]}{\sqrt{n}\sqrt{n+1}} & n>1 \\ 0 & n=1 \end {cases}$$ Re-generate ε once per day, and use that value for all intraday steps (if you generate a new ε for each intraday step, you end up with a normal distribution again). For each intraday step Si, use these values of μi and si: $$\mu_i = \frac{\mu_d}{n}+\varepsilon, \quad s_i = \frac {s_d}{n}$$ For large n, the adjustment to μd basically perturbs the mean each day by a small black swan distribution having a standard deviation of σ/n. For n>50 or so, one can simply use n by itself in the denominator of the expression for ε. Again, p is a random probability between 0 and 1.
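Here is a sketch of the fixed generator in Python (again mine; the parameter values are examples). Note that ε is drawn once per day from B⁻¹(p, 0, s_d) as in the first form above, and that s_i is now s_d/n rather than s_d/√n:

```python
import math
import random

def inv_bs(p, mu, s, a=1.6):
    # Inverse black swan distribution
    return mu - 2.0 * s * math.sinh(math.atanh(1.0 - 2.0 * p) / a)

def daily_bar(log_close, mu_d, s_d, n, rng, a=1.6):
    """One day's (high, low, close) from n intraday steps, with the per-day
    mean perturbation eps that keeps the daily sums black-swan distributed."""
    if n > 1:
        eps = inv_bs(rng.random(), 0.0, s_d, a) / (math.sqrt(n) * math.sqrt(n + 1))
    else:
        eps = 0.0
    mu_i, s_i = mu_d / n + eps, s_d / n      # note s_d / n, not s_d / sqrt(n)
    hi = lo = log_close                       # the day's range includes the open
    for _ in range(n):
        log_close += inv_bs(rng.random(), mu_i, s_i, a)
        hi, lo = max(hi, log_close), min(lo, log_close)
    return math.exp(hi), math.exp(lo), math.exp(log_close)

rng = random.Random(1)
log_p = math.log(100.0)
bars = []
for _ in range(20):
    h, l, c = daily_bar(log_p, mu_d=0.0005, s_d=0.01, n=50, rng=rng)
    bars.append((h, l, c))
    log_p = math.log(c)
```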
Here's how it worked out:
That seems to work. I can now generate an artificial series of high-low-close prices that have similar statistical properties to actual markets. Now I must investigate whether any dependencies exist between successive values in the series.
I want to mention that the plots in this article use the Rnd() function in Excel's Visual Basic. The Visual Basic random number generator is fairly crude by modern standards, which may explain the noisy tails of the distributions shown (using 16,000 samples) versus the relatively cleaner tails of the distributions measured from actual market data (using fewer samples). I can't know for sure unless I try a better random number generator, but I'm not inclined to do so now — I'm satisfied with these results.
https://collegephysicsanswers.com/openstax-solutions/during-circus-act-one-performer-swings-upside-down-hanging-trapeze-holding
Question
During a circus act, one performer swings upside down hanging from a trapeze holding another, also upside-down, performer by the legs. If the upward force on the lower performer is three times her weight, how much do the bones (the femurs) in her upper legs stretch? You may assume each is equivalent to a uniform rod 35.0 cm long and 1.80 cm in radius. Her mass is 60.0 kg.
$1.90 \times 10^{-5} \textrm{ m}$
# OpenStax College Physics Solution, Chapter 5, Problem 29 (Problems & Exercises) (2:17)
Video Transcript
This is College Physics Answers with Shaun Dychko. In this question we're going to calculate by how much the femur in this trapeze artiste's leg stretches when she's hanging upside down doing this acrobatic trick. So the total force applied upwards we're told is three times her weight. So that's three times <i>mg</i> and there's some other information which you just write down here. It's the trapeze artiste weighs 60 kilograms. The original length of her femur is 35 centimeters which is 35 times ten to the minus two meters, and the radius is 1.8 times ten to the minus two meters. So, always taking care of conversions of units and so on in this initial step when you're just writing down data from the question. So she has two femur bones, so two times the force on each leg is going to be the total applied force upwards which we're told is three times her weight. If we divide both sides by two here, we see that the force on a single leg is half of three times her weight or one and a half times <i>mg</i>. Now the change in length of her femur bone is going to be one over the Young's Modulus of bone when it's under tension, multiplied by the force applied to that leg, divided by its cross sectional area, multiplied by its original length. Its cross sectional area is going to be pi times the radius of the bone squared. So we make substitutions into <i>delta l</i> here. We have <i>f l</i> is three <i>mg</i> over two and then we have the area is pi r squared. So we have one over <i>y</i> times three <i>mg</i> over two times <i>l naught</i> over pi r squared is the change in length. 
So that is one over 16 times ten to the nine Newtons per meter squared and this is the Young's Modulus for bone under tension when it's being stretched -- it's a different number if it was being compressed, which is not the case here -- times three times sixty times 9.8 Newtons per kilogram over two, times 35 times ten to the minus two meters, divided by pi times 1.8 times ten to the minus two meters radius squared, giving us a stretch of 1.9 times ten to the minus five meters, which is about 20 micrometers, a very small amount.
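The arithmetic in the transcript is easy to verify. This short Python check uses the values given in the problem and the transcript:

```python
import math

Y = 16e9                # Young's modulus of bone under tension, N/m^2
mass, g = 60.0, 9.8     # performer's mass (kg) and g (N/kg)
L0, r = 0.350, 0.0180   # femur length and radius, m

F = 3 * mass * g / 2                    # force on one femur: half of 3x her weight
dL = F * L0 / (Y * math.pi * r ** 2)    # delta L = (1/Y)(F/A) L0, with A = pi r^2
print(dL)   # about 1.90e-5 m, i.e. roughly 20 micrometers
```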
https://www.aimsciences.org/article/doi/10.3934/dcds.1995.1.207
American Institute of Mathematical Sciences
April 1995, 1(2): 207-216. doi: 10.3934/dcds.1995.1.207
The index at infinity of some twice degenerate compact vector fields
1 Institute for Information Transmission Problems, Russian Academy, 19 Ermolovoy st., 101447 Moscow, Russian Federation 2 Institut Mathématique Pure et Appliquée, Université Catholique de Louvain, B-1348 Louvain-la-Neuve
Received November 1994; published February 1995
The index at infinity of some compact vector fields associated with Nemytski operators is computed in situations where the linear part is degenerate and the nonlinear part does not satisfy the Landesman-Lazer conditions. Applications are given to the existence and multiplicity of solutions of nonlinear equations depending upon a parameter.
Citation: A.M. Krasnosel'skii, Jean Mawhin. The index at infinity of some twice degenerate compact vector fields. Discrete & Continuous Dynamical Systems - A, 1995, 1 (2) : 207-216. doi: 10.3934/dcds.1995.1.207
https://chemistry.stackexchange.com/questions/8057/the-reason-behind-the-steep-rise-in-ph-in-the-acid-base-titration-curve/8074
# The reason behind the steep rise in pH in the acid base titration curve
Most books refer to a steep rise in pH when a titration reaches the equivalence point. However, I do not understand why … I mean I am adding the same drops of acid to the alkali but just as I near the correct volume (i.e. the volume required to neutralize the alkali), the pH just suddenly increases quickly.
I've decided to tackle this question in a somewhat different manner. Instead of giving the chemical intuition behind it, I wanted to check for myself if the mathematics actually work out. As far as I understand, this isn't done often, so that's why I wanted to try it, even though it may not make the clearest answer. It turns out to be a bit complicated, and I haven't done much math in a while, so I'm kinda rusty. Hopefully, everything is correct. I would love to have someone check my results.
My approach here is to explicitly find the equation of a general titration curve and figure out from that why the pH varies quickly near the equivalence point. For simplicity, I shall consider the titration to be between a monoprotic acid and base. Explicitly, we have the following equilibria in solution
$$\ce{HA <=> H^+ + A^-} \ \ \ → \ \ \ K_\text{a} = \ce{\frac{[H^+][A^-]}{[HA]}}$$ $$\ce{BOH <=> B^+ + OH^-} \ \ \ → \ \ \ K_\text{b} = \ce{\frac{[OH^-][B^+]}{[BOH]}}$$ $$\ce{H2O <=> H^+ + OH^-} \ \ \ → \ \ \ K_\text{w} = \ce{[H^+][OH^-]}$$
Let us imagine adding two solutions, one of the acid $$\ce{HA}$$ with volume $$V_\text{A}$$ and concentration $$C_\text{A}$$, and another of the base $$\ce{BOH}$$ with volume $$V_\text{B}$$ and concentration $$C_\text{B}$$. Notice that after mixing the solutions, the number of moles of species containing $$\ce{A}$$ ($$\ce{HA}$$ or $$\ce{A^-}$$) is simply $$n_\text{A} = C_\text{A} V_\text{A}$$, while the number of moles of species containing $$\ce{B}$$ ($$\ce{BOH}$$ or $$\ce{B^+}$$) is $$n_\text{B} = C_\text{B} V_\text{B}$$. Notice that at the equivalence point, $$n_\text{A} = n_\text{B}$$ and therefore $$C_\text{A} V_\text{A} = C_\text{B} V_\text{B}$$; this will be important later. We will assume that volumes are additive (total volume $$V_\text{T} = V_\text{A} + V_\text{B}$$), which is close to true for relatively dilute solutions.
In search of an equation
To solve the problem of finding the final equilibrium after adding the solutions, we write out the charge balance and matter balance equations:
Charge balance: $$\ce{[H^+] + [B^+] = [A^-] + [OH^-]}$$
Matter balance for $$\ce{A}$$: $$\displaystyle \ce{[HA] + [A^-]} = \frac{C_\text{A} V_\text{A}}{V_\text{A} + V_\text{B}}$$
Matter balance for $$\ce{B}$$: $$\displaystyle \ce{[BOH] + [B^+]} = \frac{C_\text{B} V_\text{B}} {V_\text{A} + V_\text{B}}$$
A titration curve is given by the pH on the $$y$$-axis and the volume of added acid/base on the $$x$$-axis. So what we need is to find an equation where the only variables are $$\ce{[H^+]}$$ and $$V_\text{A}$$ or $$V_\text{B}$$. By manipulating the dissociation constant equations and the mass balance equations, we can find the following:
$$\ce{[HA]} = \frac{\ce{[H^+][A^-]}}{K_\text{a}}$$ $$\ce{[BOH]} = \frac{\ce{[B^+]}K_\text{w}}{K_\text{b}\ce{[H^+]}}$$ $$\ce{[A^-]} = \frac{C_\text{A} V_\text{A}}{V_\text{A} + V_\text{B}} \left(\frac{K_\text{a}}{K_\text{a} + \ce{[H^+]}}\right)$$ $$\ce{[B^+]} = \frac{C_\text{B} V_\text{B}}{V_\text{A} + V_\text{B}} \left(\frac{K_\text{b}\ce{[H^+]}}{K_\text{b}\ce{[H^+]} + K_\text{w}}\right)$$
Replacing those identities in the charge balance equation, after a decent bit of algebra, yields:
$$\ce{[H^+]^4} + \left(K_\text{a} + \frac{K_\text{w}}{K_\text{b}} + \frac{C_\text{B} V_\text{B}}{V_\text{A} + V_\text{B}}\right) \ce{[H^+]^3} + \left(\frac{K_\text{a}}{K_\text{b}}K_\text{w} + \frac{C_\text{B} V_\text{B}}{V_\text{A} + V_\text{B}} K_\text{a} - \frac{C_\text{A} V_\text{A}}{V_\text{A} + V_\text{B}}K_\text{a} - K_\text{w}\right) \ce{[H^+]^2} - \left(K_\text{a} K_\text{w} + \frac{C_\text{A} V_\text{A}}{V_\text{A} + V_\text{B}}\frac{K_\text{a}}{K_\text{b}} K_\text{w} + \frac{K^2_\text{w}}{K_\text{b}}\right) \ce{[H^+]} - \frac{K_\text{a}}{K_\text{b}} K^2_\text{w} = 0$$
Now, this equation sure looks intimidating, but it is very interesting. For one, this single equation will exactly solve any equilibrium problem involving the mixture of any monoprotic acid and any monoprotic base, in any concentration (as long as they're not much higher than about $$1~\mathrm{\small M}$$) and any volume. Though it doesn't seem to be possible to separate the variables $$\ce{[H^+]}$$ and $$V_\text{A}$$ or $$V_\text{B}$$, the graph of this equation represents any titration curve (as long as it obeys the previous considerations). Though in its full form it is quite daunting, we can obtain some simpler versions. For example, consider that the mixture is of a weak acid and a strong base. This means that $$K_\text{b} \gg 1$$, and so every term containing $$K_\text{b}$$ in the denominator is approximately zero and gets cancelled out. The equation then becomes:
Weak acid and strong base:
$$\ce{[H^+]^3} + \left(K_\text{a} + \frac{C_\text{B} V_\text{B}}{V_\text{A} + V_\text{B}}\right) \ce{[H^+]^2} + \left(\frac{C_\text{B} V_\text{B}}{V_\ce{A} + V_\ce{B}} K_\ce{a} - \frac{C_\ce{A} V_\ce{A}}{V_\ce{A} + V_\ce{B}}K_\ce{a} - K_\ce{w}\right) \ce{[H^+]} - K_\ce{a} K_\ce{w} = 0$$
For a strong acid and weak base ($$K_\text{a} \gg 1$$), you can divide both sides of the equation by $$K_\text{a}$$, and now all terms with $$K_\text{a}$$ in the denominator get cancelled out, leaving:
Strong acid and weak base:
$$\ce{[H^+]^3} + \left(\frac{K_\ce{w}}{K_\ce{b}}+\frac{C_\ce{B}V_\ce{B}}{V_\ce{A} + V_\ce{B}} - \frac{C_\ce{A} V_\ce{A}}{V_\ce{A} + V_\ce{B}}\right) \ce{[H^+]^2} - \left(K_\ce{w} + \frac{C_\text{A} V_\ce{A}}{V_\ce{A} + V_\ce{B}} \frac{K_\ce{w}}{K_\ce{b}}\right) \ce{[H^+]} - \frac{K^2_\ce{w}}{K_\ce{b}} = 0$$
The simplest case happens when adding a strong acid to a strong base ($$K_\ce{a} \gg 1$$ and $$K_\ce{b} \gg 1$$), in which case all terms containing either in the denominator get cancelled out. The result is simply:
Strong acid and strong base:
$$\ce{[H^+]^2} + \left(\frac{C_\text{B} V_\text{B}}{V_\text{A} + V_\text{B}} - \frac{C_\text{A} V_\text{A}}{V_\text{A} + V_\text{B}}\right) \ce{[H^+]} - K_\ce{w} = 0$$
It would be enlightening to draw some example graphs for each equation, but Wolfram Alpha only seems to be able to handle the last one, as the others require more than the standard computation time to display. Still, considering the titration of $$1~\text{L}$$ of a $$1~\ce{\small M}$$ solution of a strong acid with a $$1~\ce{\small M}$$ solution of a strong base, you get this graph. The $$x$$-axis is the volume of base added, in litres, while the $$y$$-axis is the pH. Notice that the graph is exactly as what you'll find in a textbook!
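Since the polynomial equations above cannot be separated in the variables, another option is to solve them numerically for $$\ce{[H^+]}$$ at each $$V_\ce{B}$$: the coefficient signs allow exactly one positive root, so a simple bisection suffices. A minimal sketch of the full quartic (the function name and test values are my own, not from the text):

```python
import math

def pH_mix(Ka, Kb, Ca, Va, Cb, Vb, Kw=1e-14):
    # Quartic coefficients in [H+], copied term by term from the equation above
    a = Ca * Va / (Va + Vb)   # total "A" concentration after mixing
    b = Cb * Vb / (Va + Vb)   # total "B" concentration after mixing
    c = [1.0,
         Ka + Kw / Kb + b,
         Ka * Kw / Kb + b * Ka - a * Ka - Kw,
         -(Ka * Kw + a * Ka * Kw / Kb + Kw ** 2 / Kb),
         -Ka * Kw ** 2 / Kb]
    f = lambda h: (((c[0] * h + c[1]) * h + c[2]) * h + c[3]) * h + c[4]
    # f(0) < 0 and f is eventually positive, with a single sign change:
    # bisect for the unique positive root
    lo, hi = 1e-16, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return -math.log10(hi)

# Weak acid (Ka = 1e-5) + strong base at the equivalence point: basic, pH ~ 8.85
print(round(pH_mix(1e-5, 10, 0.1, 1.0, 0.1, 1.0), 2))
# Strong acid + strong base at the equivalence point: neutral, pH ~ 7.00
print(round(pH_mix(1e3, 1e3, 0.1, 1.0, 0.1, 1.0), 2))
```

The weak-acid result agrees with the textbook hydrolysis estimate $\mathrm{pH} = 7 + \tfrac{1}{2}\mathrm{p}K_\text{a} + \tfrac{1}{2}\log_{10} C \approx 8.85$ for a $0.05~\mathrm{M}$ salt solution.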
Now what?
With the equations figured out, let's study how they work. We want to know why the pH changes quickly near the equivalence point, so a good idea is to analyze the derivative of the equation and figure out where they have a very positive or very negative value, indicating a region where $$\ce{[H^+]}$$ changes quickly with a slight addition of an acid/base.
Suppose we want to study the titration of an acid with a base. What we need then is the derivative $$\displaystyle \frac{\ce{d[H^+]}}{\ce{d}V_\ce{B}}$$. We will obtain this by implicit differentiation of both sides of the equations by $$\displaystyle \frac{\ce{d}}{\ce{d}V_\ce{B}}$$. Starting with the easiest case, the mixture of a strong acid and strong base, we obtain:
$$\frac{\ce{d[H^+]}}{\ce{d} V_\ce{B}}= \frac{K_\ce{w} - C_\ce{B} \ce{[H^+] - [H^+]^2}}{2(V_\ce{A} + V_\ce{B}) \ce{[H^+]} + (C_\ce{B} V_\ce{B} - C_\ce{A} V_\ce{A})}$$
Once again a complicated-looking fraction, but with very interesting properties. The numerator is not too important; it's the denominator where the magic happens. Notice that we have a sum of two terms ($$2(V_\ce{A} + V_\ce{B})\ce{[H^+]}$$ and $$(C_\ce{B} V_\ce{B} - C_\ce{A} V_\ce{A})$$). The lower this sum is, the higher $$\displaystyle \frac{\mathrm{d}\ce{[H^+]}}{\mathrm{d} V_\ce{B}}$$ is and the quicker the pH will change with a small addition of base. Notice also that, if the solutions aren't very dilute, the second term quickly dominates the denominator, because while adding base the value of $$\ce{[H^+]}$$ becomes quite small compared to $$C_\ce{A}$$ and $$C_\ce{B}$$. Now we have a very interesting situation: a fraction where the major component of the denominator contains a subtraction. When that subtraction gives a result close to zero, the function explodes. This means that the speed at which $$\ce{[H^+]}$$ changes becomes very sensitive to small variations of $$V_\ce{B}$$ near the critical region. And where does this critical region happen? Close to the region where $$C_\ce{B} V_\ce{B} - C_\ce{A} V_\ce{A}$$ is zero. If you remember the start of the answer, this is the equivalence point! So there, this proves mathematically that the speed at which the pH changes is maximal at the equivalence point.
This was only the simplest case though. Let's try something a little harder. Taking the titration equation for a weak acid with strong base, and implicitly differentiating both sides by $$\displaystyle \frac{\ce{d}}{\ce{d} V_\ce{B}}$$ again, we get the significantly more fearsome:
$$\displaystyle \frac{\ce{d[H^+]}}{\ce{d}V_\ce{B}} = \frac{ -\frac{V_\ce{A}}{(V_\ce{A} + V_\ce{B})^2} \ce{[H^+]} (C_\ce{B}\ce{[H^+]} - C_\ce{B} K_\ce{a} + C_\ce{A} K_\ce{a})}{3\ce{[H^+]^2 + 2[H^+]}\left(K_\ce{a} + \frac{C_\ce{B} V_\ce{B}}{V_\ce{A} + V_\ce{B}}\right) + \frac{K_\ce{a}}{V_\ce{A} + V_\ce{B}} (C_\ce{B} V_\ce{B} - C_\ce{A} V_\ce{A}) -K_\ce{w}}$$
Once again, the term that dominates the behaviour of the complicated denominator is the part containing $$C_\ce{B} V_\ce{B} - C_\ce{A} V_\ce{A}$$, and once again the derivative explodes at the equivalence point.
• @SatwikPasani It took a bit of effort (about 6-7 hours from start to finish, including checks and writing up the answer), but I was too curious to let it go. Though I've never done this calculation, I quickly saw the general solution outline, it just took a while to actually do the algebra and make sure I didn't screw up somewhere. Perseverance paid off, though I wish I could explore the equations numerically a bit more. It would also be fantastic to consider a diprotic weak acid and a strong base, but that would require another truckload of algebra and time. I think this is enough for now. – Nicolau Saker Neto Jan 26 '14 at 14:35
• If anyone is interested, Chem.SE user PH13 has far more expertise in pH titimetry, and has produced wonderful related answers such as this. – Nicolau Saker Neto Jul 2 '15 at 14:14
• Hi @NicolauSakerNeto would you mind explaining how you obtained the mathematical equation you put in WolframAlpha from the strong acid-strong base chemical equation? – Razorlance Jul 28 '15 at 11:20
• @Razorlance The biggest difference is that while here I deduced a formula involving the concentration of hydrogen ions, the graph has pH as the $y$-axis rather than $\mathrm{[H^+]}$. To get this, all that is needed is to use the identity $\mathrm{[H^+]=10^{-pH}}$, which is where the $10^{-y}$ terms come from. The $(1+x)$ factors multiplied in the first and last terms of the Wolfram Alpha equation are a consequence of removing the original $(1+x)$ denominator in the middle term, which comes from $(V_A+V_B)$. – Nicolau Saker Neto Jul 28 '15 at 12:28
• @ApoorvPotnis Sorry I don't really have a reference. I think some graduate-level analytical chemistry textbooks on pH titimetry may be the best place to start. – Nicolau Saker Neto Nov 5 '19 at 10:49
# A PRACTICAL DEMONSTRATION WITH PYTHON
Nicolau's answer is an excellent analysis of the function that describes the titration, but I would like to follow Ben Norris's answer (which I think tackles the problem better at its origin) using Python, hoping that this can help you become more confident and explore titrations on your own.
# Acidify pure water and calculate its pH
I think acidifying water is the first experiment you should do to understand pH. Imagine you have 1 L of water and you add 0.1 M HCl drop by drop. Assuming the volume of a drop is 1 mL, every drop adds 0.0001 mol of $$H^{+}$$ to the solution; furthermore, it adds 1 mL of water, so the concentration of $$H^{+}$$ after $$n$$ drops is: $$\ce{[ H^{+} ]= \frac{mol~H^{+}_{init} + (mol~H^{+}_{drop} \times n)}{V_{init} + (V_{drop} \times n)}}$$
Where "init" indicate the initial amount of $$H^{+}$$ and $$V$$olume of water. and n is the number of drops you added.
## $$H^{+}_{init}$$
Before you add the first drop of acid there is already some $$\ce{H3O^{+}}$$, because water dissociates into $$\ce{H3O+}$$ and $$\ce{OH-}$$, and the product of their concentrations $$\ce{[H3O+] [OH- ]}$$ at 25 °C is $$10^{-14}$$. In pure water there is the same amount of each, so $$\ce{[H3O+] = [OH- ]}$$ and we can write $$\ce{[H3O+]^{2} = 10^{-14}}$$, that is, $$\ce{[H3O+] = 10^{-7}}$$. Here is the Python code:
import numpy as np
import matplotlib.pyplot as plt

dropVolume = 0.001     # L, volume of one drop
molH = 0.0001          # mol of H+ added with every drop (1 mL of 0.1 M HCl)
intVolume = 1          # L, initial volume of water
intHmol = 10**(-7)     # mol of H3O+ already present in 1 L of pure water

n = np.arange(0, 30, 0.1)  # number of drops added

# Plot the concentration of H3O+
fig = plt.figure()
cH = (intHmol + molH*n) / (intVolume + dropVolume*n)
plt.ylabel(r'$[H_{3}O^{+}]$')
plt.xlabel(r'mL of acid added')
plt.plot(n, cH)
plt.show()
The result is this:
You can notice that the result is quite obvious: as you add acid, $$[H_3O^{+}]$$ increases roughly in proportion. Note also that $$H^{+}_{init}$$ is negligible.
# -log$$[H_3O^{+}]$$
Now you can calculate the pH. $$\ce{pH = -log[H3O+ ]}$$
# Plot pH and pOH
fig1 = plt.figure()
pH = np.log10((intVolume + dropVolume*n) / (intHmol + molH*n))               # -log10 [H3O+]
pOH = np.log10((intHmol + molH*n) / (10**(-14) * (intVolume + dropVolume*n)))  # -log10 of Kw/[H3O+]
plt.plot(n, pOH, '--', color='red', label='pOH $-log[OH^{-}]$')
plt.plot(n, pH, label='pH $-log[H_{3}O^{+}]$')
plt.ylabel(r'pH or pOH')
plt.xlabel(r'mL of acid added')
plt.xlim(-1, 30)
plt.grid(color='red')
plt.legend(loc=10)
plt.tick_params(length=4)
plt.xticks(np.arange(30))
plt.show()
This is more interesting. You can see that the pH suddenly decreases from 7 to 4 after the first mL of acid is added. This is in fact similar to what Ben Norris noted in his answer, and it is only related to the logarithmic operation. After this introduction we can deal more easily with a real titration. If you want, you can try to increase the pH of pure water by adding $$OH^{-}$$, then calculate the hydronium concentration $$\ce{[H3O+]}$$ from $$\ce{[H3O+] [OH- ] = 10^{-14}}$$, that is $$\ce{[H3O+] = \frac{10^{-14}}{[OH- ]}}$$, and plot the pH. In the example, I've done the inverse with pOH.
# Titration of HCl with NaOH
This is a simulation of the titration of a solution of HCl with NaOH: the system starts at a low pH, which increases as a solution of NaOH is added drop by drop. In this case $$H^{+}$$ from HCl is already present, and the following neutralization reaction occurs until all the HCl is neutralized: $$\ce{H+ + Cl- + Na+ + OH- -> H2O + NaCl}$$
So we should divide the process into three steps:
1. $$\ce{H+ > OH-}$$ As long as this condition is true, the number of moles of $$H^{+}$$ decreases by the same amount of $$OH^{-}$$ added, due to the neutralization reaction: $$\ce{[ H_{3}O^{+} ]= \frac{mol~H^{+}_{HCl} - (mol~OH^{-}_{drop} \times n)}{V_{init} + (V_{drop} \times n)}}$$
2. $$\ce{H+ = OH-}$$ at the equivalence point $$\ce{[H3O+] = [OH- ]}$$ so $$\ce{[H3O+] = 10^{-7}}$$.
3. $$\ce{H+ < OH-}$$ After the equivalence point we are actually "basifying" brine (a solution of water and salt), so we can get $$\ce{[H3O+]}$$ from $$\ce{[H3O+] = \frac{10^{-14}}{[OH- ]}}$$; we will use the absolute value to avoid negative concentrations: $$\ce{[ H_{3}O^{+} ]= \frac{10^{-14}}{\left( \frac{\lvert mol~H^{+}_{HCl} - (mol~OH^{-}_{drop} \times n )\rvert}{V_{init} + (V_{drop} \times n)} \right)}}$$
Here is the code:
import numpy as np
import matplotlib.pyplot as plt

dropVolume = 0.001  # L, volume of one drop
molOH = 10**(-4)    # mol of OH- added per drop
molH = 10**(-2)     # mol of H+ initially present
intVolume = 1       # L, initial volume

n = np.arange(0, 200, 0.1)
y = []
for i in n:
    if (molH - molOH*i) > 0:    # before the equivalence point
        y.append(np.log10((intVolume + dropVolume*i) / (molH - molOH*i)))
    if (molH - molOH*i) == 0:   # at the equivalence point
        y.append(7)
    if (molH - molOH*i) < 0:    # after the equivalence point: [H3O+] = Kw/[OH-]
        y.append(np.log10(np.absolute(molH - molOH*i) / (10**(-14) * (intVolume + dropVolume*i))))

fig = plt.figure()
plt.plot(n, y)
plt.ylabel(r'pH ($-log[H_{3}O^{+}]$)')
plt.xlabel(r'mL of base added')
plt.grid(color='red')
plt.show()
With these conditions we obtain the usual titration curve:
You can easily see that it derives directly from the previous example. We can also plot $$[OH^{-}]$$ and $$[H_3O^{+}]$$.
$$[H_3O^{+}]$$ simply drops to $$10^{-7}$$ at the equivalence point, and then continues to go down while $$[OH^{-}]$$ increases.
# Conclusion
So, in fact, the answer to your question is that, relative to the self-ionization equilibrium of water $$\ce{[H3O+] [OH- ] = 10^{-14}}$$, at the equivalence point you are switching between a solution with an excess of $$\ce{[H3O+]}$$ and a solution with an excess of $$\ce{[OH-]}$$, and it is these excesses that are determinant. Of course, if you plot the pH you get a steep rise; if you plot $$[H_3O^{+}]$$ you get only a little step. This is why using a logarithmic function (precisely, a p-function) is more convenient, even if perhaps trickier. Otherwise, you can zoom in a little with Python at the equivalence point; this gives you an idea of what I mean by "excess"...
The pH titration curve is shaped as it is because pH is a logarithmic scale. pH is the negative base ten logarithm of hydrogen ion concentration:
$$\text{pH}=-\log_{10}{\ce{[H+]}}$$
Thus, each change of 1 pH unit is a 10-fold change in concentration, and the linear size of that 10-fold change gets smaller as the pH increases. For example, going from pH 1 to pH 2 means changing the concentration of $\ce{H+}$ from $10^{-1}$ to $10^{-2}$, a change of $0.100 - 0.010 = 0.090$.
Going from pH 2 to pH 3 means changing the concentration of $\ce{H+}$ from $10^{-2}$ to $10^{-3}$, a change of only $0.0100 - 0.0010 = 0.0090$. If you add fixed amounts of base (which is what you do during a titration), then that fixed amount has a larger relative effect on the concentration of $\ce{H+}$ as the pH goes up.
Let's walk through a simple titration:
You have 10 mL of 0.100 M $\ce{HCl}$ that you want to titrate with 0.100 M $\ce{NaOH}$. We will consider the change to the solution after each mL of base is added. We have to keep track of moles of acid and base, since the volume is changing (we are adding base), and we need to remember the autoionization of water. The concentrations of $\ce{H+}$ and $\ce{OH-}$ never go to zero because of the autoionization:
$$\ce{H2O <=> H+ + OH-} \ \ \ \ K_w=[\ce{H+}][\ce{OH-}]=10^{-14}$$
Let $V_b$ be the volume of base added. At $V_b=0$, we have our original acid solution, which contains $\left(\dfrac{10\ \text{mL}}{1000 \ \frac{\text{mL}}{\text{L}}}\right)\left(0.100\ \dfrac{\text{mol}}{\text{L}}\right)=0.001 \ \text{moles of HCl}$. Each time we increase $V_b$ by 1 mL, we add $\left(\dfrac{1\ \text{mL}}{1000 \ \frac{\text{mL}}{\text{L}}}\right)\left(0.100\ \dfrac{\text{mol}}{\text{L}}\right)=0.0001 \ \text{moles of NaOH}.$ We decrease the moles of $\ce{H+}$ present by this amount and recalculate the concentration for the new volume. Then we recalculate the pH. We do this until...
When we reach $V_b =10 \ \text{mL}$, we will have added exactly as much base as there was acid originally. The concentrations of $\ce{H+}$ and $\ce{OH-}$ are both $10^{-7}\ \text{M}$, controlled by the autoionization. This is the equivalence point. As we add more base, we increase the concentration of $\ce{OH-}$. We can use the $K_w$ to calculate the concentration of $\ce{H+}$ and then the pH.
V_b V_T moles H+ moles OH- [H+] [OH-] pH
0 mL 10 mL 1.00E-3 1.00E-15 1.00E-1 1.00E-13 1.00
1 mL 11 mL 9.00E-4 1.34E-15 8.18E-2 1.22E-13 1.09
2 mL 12 mL 8.00E-4 1.80E-15 6.67E-2 1.50E-13 1.18
3 mL 13 mL 7.00E-4 2.41E-15 5.38E-2 1.86E-13 1.27
4 mL 14 mL 6.00E-4 3.27E-15 4.29E-2 2.33E-13 1.37
5 mL 15 mL 5.00E-4 4.50E-15 3.33E-2 3.00E-13 1.48
6 mL 16 mL 4.00E-4 6.40E-15 2.50E-2 4.00E-13 1.60
7 mL 17 mL 3.00E-4 9.63E-15 1.76E-2 5.67E-13 1.75
8 mL 18 mL 2.00E-4 1.62E-14 1.11E-2 9.00E-13 1.95
9 mL 19 mL 1.00E-4 3.61E-14 5.26E-3 1.90E-12 2.28
10 mL 20 mL 2.00E-9 2.00E-9 1.00E-7 1.00E-7 7.00
11 mL 21 mL 4.41E-14 1.00E-4 2.10E-12 4.76E-3 11.68
12 mL 22 mL 2.42E-14 2.00E-4 1.10E-12 9.09E-3 11.96
13 mL 23 mL 1.76E-14 3.00E-4 7.67E-13 1.30E-2 12.12
14 mL 24 mL 1.44E-14 4.00E-4 6.00E-13 1.67E-2 12.22
15 mL 25 mL 1.25E-14 5.00E-4 5.00E-13 2.00E-2 12.30
16 mL 26 mL 1.13E-14 6.00E-4 4.33E-13 2.31E-2 12.36
17 mL 27 mL 1.04E-14 7.00E-4 3.86E-13 2.59E-2 12.41
18 mL 28 mL 9.80E-15 8.00E-4 3.50E-13 2.86E-2 12.45
19 mL 29 mL 9.34E-15 9.00E-4 3.22E-13 3.10E-2 12.49
20 mL 30 mL 9.00E-15 1.00E-3 3.00E-13 3.33E-2 12.52
Notice that between $V_b =9$ and $V_b = 11$, the changes in $\ce{[H+]}$ and $\ce{[OH-]}$ are very large compared to their changes before and after the equivalence point. This is because near the equivalence point the concentrations are very low and controlled by the autoionization, not by the amount of acid or base present. Even if we zoom in around the equivalence point and determine the pH between $V_b =9$ and $V_b = 11$ in 0.1 mL increments, we still see similar behavior.
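The rows of the table above are easy to reproduce programmatically. A short sketch that recomputes the three rows around the equivalence point, treating the excess strong acid or base as fully dissociated and keeping the $K_w$ correction from the charge balance:

```python
import math

Kw = 1e-14
n_acid = 0.010 * 0.100            # 10 mL of 0.100 M HCl -> 0.001 mol H+

results = {}
for Vb_mL in (9, 10, 11):
    n_base = (Vb_mL / 1000) * 0.100       # mol OH- added
    V_T = (10 + Vb_mL) / 1000             # total volume in L
    excess = (n_acid - n_base) / V_T      # net strong-acid concentration (< 0 past equivalence)
    # charge balance [H+] - Kw/[H+] = excess gives [H+]^2 - excess*[H+] - Kw = 0
    h = (excess + math.sqrt(excess ** 2 + 4 * Kw)) / 2   # positive root
    results[Vb_mL] = round(-math.log10(h), 2)
    print(Vb_mL, "mL ->", results[Vb_mL])
# matches the 2.28 / 7.00 / 11.68 entries in the pH column of the table
```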
The following answer from Quora (https://www.quora.com/Whats-the-reason-behind-the-steep-rise-in-pH-in-the-acid-base-titration-curve) is pretty intuitive:
(This answer assumes that you are titrating acid with base. You see the same effect, in reverse, when titrating base with acid.)
You are losing the buffering effect of a high acid concentration.
Adding a drop (about 0.03 mL) of 1 M NaOH adds 0.03 mmol of NaOH to your solution. (There are about 30 drops per mL, 1/30 = 0.033333...)
Suppose you begin with a solution of 0.1 M acid (pH = 1 for a strong acid); that means that you have, in a typical 10-mL aliquot, 1 mmol of acid. It takes about 33 drops of your 1 M NaOH solution to neutralize that amount, at 0.03 mmol per drop.
Let's assume that you diluted your aliquot to 20 mL (so that the volume can be assumed to be constant, that simplifies things). That makes your concentration 0.05 M, for a pH of 1.30.
The first drop changes the H+ concentration from 0.05 M (1 mmol in 20 mL) to 0.0485 M (0.97 mmol in 20 mL), which corresponds to a pH of 1.31.
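Continuing that arithmetic for all 33 drops makes the buffering loss visible. A sketch (constant 20 mL volume assumed, as in the text; $K_w$ neglected since we stop just before equivalence):

```python
import math

V = 0.020                                # L, treated as constant, as above
pHs = []
for drops in (0, 1, 32, 33):
    mmol_H = 1.0 - 0.03 * drops          # 0.03 mmol of OH- neutralized per drop
    pH = -math.log10(mmol_H / 1000 / V)  # mmol -> mol, then divide by volume
    pHs.append(round(pH, 2))
    print(drops, "drops:", pHs[-1])
# the first drop moves the pH by ~0.01 units; the last drop before
# equivalence moves it by ~0.6 units
```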
http://w3.win.tue.nl/en/events/nmc2012/programme/diamant/
# Speaker list
Speaker #1: Monique Laurent
Affiliation: CWI Amsterdam and Tilburg University
Title: Spherical Gram embeddings of graphs
Abstract: We consider geometric realizations of graphs obtained by representing weighted graphs as inner products of vectors in low dimensional spheres.
Given a graph $G=(V,E)$, let $\mathcal{E}(G)$ denote the convex set of all edge weights $x\in \mathbb{R}^E$ that can be realized as inner products of unit vectors in some space $\mathbb{R}^d$. Then the *Gram dimension* of $G$ is the smallest dimension $d$ allowing one to realize all edge weights in $\mathcal{E}(G)$, while the *extreme Gram dimension* of $G$ is the smallest dimension $d$ allowing one to realize all extreme points of $\mathcal{E}(G)$. These two new graph parameters are motivated by their applications to finding low-rank solutions to semidefinite programs, their relevance to matrix completion problems, to the rank-constrained Grothendieck constant, and to Colin de Verdière type graph invariants. Both are minor monotone. We give an explicit forbidden-minor characterization of the graphs having small (extreme) Gram dimension, as well as structural results.
Based on joint work with M. E.-Nagy and A. Varvitsiotis (CWI Amsterdam).
*****************************************************************
Speaker #2: Joost Batenburg
Affiliation: CWI Amsterdam and University of Antwerp
Title: Discrete Tomography for lattice images
Abstract: Tomography deals with the reconstruction of images from their projections.
In Discrete Tomography, it is assumed that the unknown image only contains grey levels from a small, discrete set. If one additionally assumes that the image is defined on a discrete domain, we arrive at the field of Discrete Tomography for lattice images. Recently, tomography problems for lattice images have become of high practical relevance, due to their applicability to the reconstruction of nanocrystals at atomic resolution from projections obtained by electron microscopy. Although the reconstruction problem for lattice images appears quite elementary at first sight, it relates to many different subfields of mathematics, ranging from number theory and combinatorics to continuous optimization and analysis. In each of these directions, interesting and sometimes surprising results have been obtained during the past 10 years. In this lecture I will illustrate the links of this tomography problem with different fields of mathematics and highlight some important results, followed by posing some new research questions that are currently unsolved.
*******************************************************************
Speaker #3: Jan Draisma
Affiliation: TU Eindhoven
Title: Bounded-rank tensors
Abstract: The notion of rank for matrices (two-dimensional tensors) has a natural generalisation to higher-dimensional tensors. However, in higher dimensions it is much less well behaved than for matrices. For instance, it is NP-hard to compute, and equations for tensors of bounded rank (the analogue of determinants for matrices) are not known in general. I will present some recent and ongoing work on those equations and on the complexity when the rank is fixed but the dimension of the tensor is allowed to vary.
https://socratic.org/questions/two-power-generating-units-a-and-b-operate-in-parallel-to-supply-the-power-requi#313093
Two power generating units A and B operate in parallel to supply the power requirement of a small city. If there is a failure in power generation, what is the probability that the city will still have its full power supply?
The demand for power is subject to considerable fluctuation, and it is known that each unit has a capacity so that it can supply the city's full power requirement 75% of the time in case the other unit fails. The probability of failure of each unit is 0.10, whereas the probability that both units fail is 0.02.
Sep 22, 2016
15%
Explanation:
The probability of failure given is
$p(A = \text{fail}) = 0.10$
$p(B = \text{fail}) = 0.10$
$p(A = \text{fail} \wedge B = \text{fail}) = 0.02$
To determine the probability of full power, we need to account for the amount of power that can be supplied in each failure scenario.
If just one unit fails, then full supply is possible with probability $0.75$. If both are not working, there is no power.
We can use expectation here where we use the definition for discrete events as
$E(X) = \sum X \, p(X)$
In this case we sum over the possible values it can take on for each outcome. That is to say, what we expect to see given a failure is
$E(\text{Power} \mid \text{Failure}) = 0.10 \cdot 0.75 + 0.10 \cdot 0.75 + 0.02 \cdot 0 = 0.15$
So if there is a failure there is a 15% probability that there will be full power at any given time.
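The arithmetic in this answer can be reproduced directly (a trivial sketch of the expectation exactly as the answer sets it up):

```python
# probabilities given in the problem
p_A_fail = 0.10
p_B_fail = 0.10
p_both_fail = 0.02

# each surviving unit covers full demand 75% of the time; both failed -> no power
expected = p_A_fail * 0.75 + p_B_fail * 0.75 + p_both_fail * 0.0
print(round(expected, 2))   # 0.15 -> a 15% probability of full power given a failure
```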
https://math.stackexchange.com/questions/3075408/how-can-i-prove-the-following-function-is-positive
# How can I prove the following function is positive?
I have the following function.
$$F =[x_1,x_2,...,x_n]_{1 \times n}*M_{n \times n}*([\dfrac{x_1}{|x_1|^{1/2}},\dfrac{x_2}{|x_2|^{1/2}}, ..., \dfrac{x_n}{|x_n|^{1/2}}]^T)_{n \times 1}$$
where each $$x_i \in \mathbb{R}$$ and $$M$$ is a square matrix. In general, $$F$$ is not a positive function, but when I choose $$M$$ to be a positive definite matrix, $$F$$ is always positive (as I tested with MATLAB code).
Now I need to prove mathematically that $$F$$ is always positive (when $$M$$ is positive definite), and I need your guidance.
One answer gives a counterexample: the matrix below is positive definite (leading minor $$197 > 0$$ and determinant $$197 - 196 = 1 > 0$$), yet for $$x = (1, 49)$$, whose scaled vector $$(x_1/|x_1|^{1/2},\, x_2/|x_2|^{1/2})$$ is $$(1, 7)$$, the product is negative:

$$\begin{pmatrix}1\\49\end{pmatrix}^\top \begin{pmatrix}197&-14\\-14&1\end{pmatrix} \begin{pmatrix}1\\7\end{pmatrix} = -244$$
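The displayed counterexample is easy to verify numerically, both the positive definiteness and the negative value:

```python
import numpy as np

M = np.array([[197.0, -14.0],
              [-14.0,   1.0]])
x = np.array([1.0, 49.0])
y = x / np.sqrt(np.abs(x))                 # the vector [x_i / |x_i|^(1/2)] = (1, 7)

assert np.all(np.linalg.eigvalsh(M) > 0)   # M is positive definite
print(x @ M @ y)                           # -244.0
```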
https://hal.inria.fr/hal-01274008
|
# Into the Square - On the Complexity of Quadratic-Time Solvable Problems
GANG - Networks, Graphs and Algorithms
LIAFA - Laboratoire d'informatique Algorithmique : Fondements et Applications, Inria Paris-Rocquencourt
Abstract : This paper will analyze several quadratic-time solvable problems, and will classify them into two classes: problems that are solvable in truly subquadratic time (that is, in time $O(n^{2-\epsilon})$ for some $\epsilon>0$) and problems that are not, unless the well known Strong Exponential Time Hypothesis (SETH) is false. In particular, we will prove that some quadratic-time solvable problems are indeed easier than expected. We will provide an algorithm that computes the transitive closure of a directed graph in time $O(mn^{\frac{\omega+1}{4}})$, where $m$ denotes the number of edges in the transitive closure and $\omega$ is the exponent for matrix multiplication. As a side effect, we will prove that our algorithm runs in time $O(n^{\frac{5}{3}})$ if the transitive closure is sparse. The same time bounds hold if we want to check whether a graph is transitive, by replacing m with the number of edges in the graph itself. As far as we know, this is the fastest algorithm for sparse transitive digraph recognition. Finally, we will apply our algorithm to the comparability graph recognition problem (dating back to 1941), obtaining the first truly subquadratic algorithm. The second part of the paper deals with hardness results. Starting from an artificial quadratic-time solvable variation of the k-SAT problem, we will construct a graph of Karp reductions, proving that a truly subquadratic-time algorithm for any of the problems in the graph falsifies SETH. The analyzed problems are the following: computing the subset graph, finding dominating sets, computing the betweenness centrality of a vertex, computing the minimum closeness centrality, and computing the hyperbolicity of a pair of vertices. We will also be able to include in our framework three proofs already appeared in the literature, concerning the graph diameter computation, local alignment of strings and orthogonality of vectors.
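For readers unfamiliar with the problem, a naive cubic-time transitive-closure routine (a baseline sketch in Floyd–Warshall style, not the paper's subquadratic algorithm) looks like this:

```python
# Naive O(n^3) transitive closure of a directed graph given as a
# boolean adjacency matrix. This is only the baseline the paper
# improves on; their algorithm runs in O(m * n^((omega+1)/4)),
# where m is the number of edges in the closure.

def transitive_closure(adj):
    n = len(adj)
    reach = [row[:] for row in adj]
    for k in range(n):
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    if reach[k][j]:
                        reach[i][j] = True
    return reach

# Path 0 -> 1 -> 2 implies the closure contains the edge 0 -> 2.
g = [[False, True, False],
     [False, False, True],
     [False, False, False]]
print(transitive_closure(g)[0][2])  # → True
```

A graph is transitive exactly when it equals its own closure, which is the recognition problem the abstract mentions.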
Document type : Preprints, Working Papers, ...
Contributor : Michel Habib
Submitted on : Monday, February 15, 2016 - 11:12:48 AM
Last modification on : Wednesday, August 21, 2019 - 9:02:02 PM
### Identifiers
• HAL Id : hal-01274008, version 1
• ARXIV : 1407.4972
### Citation
Michele Borassi, Pierluigi Crescenzi, Michel Habib. Into the Square - On the Complexity of Quadratic-Time Solvable Problems. 2014. ⟨hal-01274008⟩
|
https://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780199669912.001.0001/acprof-9780199669912-chapter-13
|
Ta-Pei Cheng
Print publication date: 2013
Print ISBN-13: 9780199669912
Published to Oxford Scholarship Online: May 2013
DOI: 10.1093/acprof:oso/9780199669912.001.0001
Curved spacetime as a gravitational field
Chapter:
(p.200) 13 Curved spacetime as a gravitational field
Source:
Einstein's Physics
Publisher:
Oxford University Press
DOI:10.1093/acprof:oso/9780199669912.003.0013
Gravitational time dilation can be interpreted as showing the warpage of spacetime in the time direction. Such considerations led Einstein to propose general relativity as a gravitational field theory with the field being curved spacetime. The geodesic equation is identified as the equation of motion of this field theory. This equation reduces to Newton's equation of motion in the limit of particles moving with nonrelativistic velocities in a weak and static gravitational field. This clarifies the sense in which Newton's theory is extended by GR to new physical regimes. GR equations must be tensor equations with covariant derivatives in order to have proper transformation properties under position-dependent transformations. The replacement of ordinary derivatives by covariant derivatives, which brings in Christoffel symbols, automatically introduces gravity into physics equations.
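In standard textbook notation (not quoted from this chapter), the geodesic equation and its Newtonian limit read:

```latex
% Geodesic equation of motion in a curved spacetime:
\frac{d^2 x^\mu}{d\tau^2}
  + \Gamma^\mu_{\ \alpha\beta}\,
    \frac{dx^\alpha}{d\tau}\frac{dx^\beta}{d\tau} = 0
% For nonrelativistic motion in a weak, static field, only the
% \Gamma^i_{\ 00} term survives; with g_{00} = -(1 + 2\Phi/c^2),
% the equation reduces to Newton's law of motion:
\frac{d^2 x^i}{dt^2} = -\partial^i \Phi
```

Here $\Phi$ is the Newtonian gravitational potential, illustrating how the metric component $g_{00}$ plays the role of the gravitational field in the weak-field limit.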
|
https://proofwiki.org/wiki/Mathematician:Mathematicians/Sorted_By_Nation/Belgium
|
# Mathematician:Mathematicians/Sorted By Nation/Belgium
For more comprehensive information on the lives and works of mathematicians through the ages, see the MacTutor History of Mathematics archive, created by John J. O'Connor and Edmund F. Robertson.
'The army of those who have made at least one definite contribution to mathematics as we know it soon becomes a mob as we look back over history; 6,000 or 8,000 names press forward for some word from us to preserve them from oblivion, and once the bolder leaders have been recognised it becomes largely a matter of arbitrary, illogical legislation to judge who of the clamouring multitude shall be permitted to survive and who be condemned to be forgotten.'
-- Eric Temple Bell: Men of Mathematics, 1937, Victor Gollancz, London
## Burgundian Netherlands
##### Simon Stevin (1548 – 1620)
Flemish mathematician, engineer and writer most famous for inventing the decimal notation for the rendering of fractions.
Recommended that a decimal system be used for weights and measures, coinage, and the measurement of angles.
Wrote most of his work in Dutch, believing it the best language for communication of scientific and mathematical ideas.
## Spanish Netherlands
##### Grégoire de Saint-Vincent (1584 – 1667)
Flemish Jesuit and mathematician, best remembered for his work on quadrature of the hyperbola.
Gave an early account of the summation of geometric series.
Resolved Zeno's paradox by showing that the time intervals involved formed a geometric progression and thus had a finite sum.
## Belgium
##### Eugène Charles Catalan (1814 – 1894)
French and Belgian mathematician who is most famous for his work in combinatorics and number theory.
##### Maurice Kraitchik (1882 – 1957)
Belgian mathematician and writer who wrote on number theory and recreational mathematics.
Proved in $1922$ that the Mersenne number $M_{257}$ is composite, contrary to the claims of Marin Mersenne.
##### Paul Poulet (1887 – 1946)
Belgian amateur mathematician working in number theory.
Published his investigations into sociable numbers in $1918$.
Calculated the Fermat pseudoprimes to base $2$ (now called Poulet numbers) up to $50$ million in $1926$, then up to $100$ million in $1938$.
Published $43$ new multiperfect numbers in $1925$, including the first two known octo-perfect numbers.
##### Edouard Zeckendorf (1901 – 1983)
Belgian doctor, army officer and amateur mathematician, best known for Zeckendorf's Theorem.
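Zeckendorf's Theorem states that every positive integer has a unique representation as a sum of non-consecutive Fibonacci numbers; the standard greedy construction (a sketch, not taken from this page) is:

```python
# Greedy construction of the Zeckendorf representation: repeatedly
# subtract the largest Fibonacci number not exceeding the remainder.
# The greedy choice automatically avoids consecutive Fibonacci numbers,
# since after taking F_k the remainder is smaller than F_{k-1}.

def zeckendorf(n):
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    parts = []
    for f in reversed(fibs):
        if f <= n:
            parts.append(f)
            n -= f
    return parts

print(zeckendorf(100))  # → [89, 8, 3]
```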
##### Simon Bernhard Kochen (b. 1934)
Belgian-born American mathematician, working in the fields of model theory, number theory and quantum mechanics.
|
http://ieeexplore.ieee.org/xpls/icp.jsp?reload=true&arnumber=6186749
|
SECTION I
## INTRODUCTION
One of the remarkable technological achievements of the last 50 years is the integrated circuit. Built upon previous decades of research in materials and solid state electronics, and beginning with the pioneering work of Jack Kilby and Robert Noyce, the capabilities of integrated circuits have grown at an exponential rate. Codified as Moore's law, integrated circuit technology has had and continues to have a transformative impact on society. This paper endeavors to describe Moore's law for complementary metal–oxide–semiconductor (CMOS) technology, examine its limits, consider some of the alternative future pathways for CMOS, and discuss some of the recent proposals for successor CMOS technologies. In the spirit of the editorial guidance for this issue, an analysis of the living cell as an information processor is offered and estimates of its performance are given. For comparison, an equal volume CMOS cell is postulated, equipped with extremely scaled technologies, and performance estimates are generated. Indications are that the living cell is architected and operates in such a way that it is extraordinarily energy efficient relative to the performance of the comparison CMOS cell. This analysis is offered with the hope that it will encourage radical rethinking of possible future information processing technologies.
SECTION II
## BENEFITS OF SCALING: MOORE'S LAW FOR SEMICONDUCTORS
In 1965, Gordon Moore [1] observed that the number of transistors on a chip could be expected to double annually for at least ten years. At different time points in the ensuing decades, it has appeared that doubling time has varied from 18 months to three years. Overall, however, the chip transistor count has continued to increase, and conversely, the size of each transistor has decreased, at an amazing rate, and Gordon Moore's postulate became known as Moore's law. Fig. 1 is a plot of transistor count for a variety of microprocessor chips (listed in Table 1) versus time. Moore was indeed prescient for the transistor count, on average, has doubled approximately every two years.
Fig. 1. The number of transistors per microprocessor chip versus time, showing introduction of new enabling technologies.
TABLE 1 Microprocessor Data Used to Create Fig. 1
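Moore's two-year doubling cadence compounds dramatically; a minimal model (the 1971 starting count of 2,300 transistors is of the Intel 4004 class, used here only for illustration) makes the arithmetic concrete:

```python
# Moore's-law growth model: the transistor count doubles roughly every
# two years, i.e. N(t) = N0 * 2**(t / 2) for t in years.

def transistor_count(n0, years, doubling_period=2.0):
    return n0 * 2 ** (years / doubling_period)

# Two-year doubling over 40 years predicts growth by 2^20, about a
# millionfold, consistent with the trend plotted in Fig. 1.
print(transistor_count(2300, 40) / 2300)  # → 1048576.0
```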
As feature sizes have decreased, the areal density of transistors has correspondingly increased, supporting either more functionality for a given chip size or the reduction in chip size to obtain a given level of functionality. The latter benefit enabled the fabrication of more chips per wafer and thus continued cost reduction trends. Cost reductions have also resulted from increased wafer sizes, again allowing the production of more chips per wafer. Also, the individual transistor switching time and energy decrease with feature size scaling.
Minimum feature sizes circa 1980 were on the order of 3 $\mu$m, while today they are about 32 nm (a 100-fold decrease), giving four orders of magnitude increase in device density. Supply voltages in this time frame have decreased from five to one volt in an effort to reduce power consumption.
It has been the history of the semiconductor industry that, as obstacles are encountered, scientific and engineering solutions are developed to continue the cadence more or less as indicated by Moore's law. In the 1990s, it became evident that scaling was encountering a number of barriers including increasing interconnect power consumption, transistors that were consuming increased power in their off state, etc. This led to the search for new material systems and associated processes to sustain the growth in transistor counts that was providing increasing performance and functionality for the electronics and other industries. An example of technology innovation was the introduction of copper interconnects to replace aluminum-based interconnects on chip [2], [3]. Initially, this was viewed as a difficult task since copper diffuses in silicon (Si) and can be detrimental to metal–oxide–semiconductor field-effect transistor (MOSFET) performance. However, barrier systems were developed so that the higher conductivity of copper could be exploited. As another example, the decrease in the gate oxide thickness to a few nanometers was leading to increased gate leakage currents and higher off state power consumption. Research led to the incorporation of new gate materials with a higher dielectric constant (e.g., hafnium oxide) so that drive capacitance could be maintained and tunneling reduced [4], [5]. Due to the incompatibility of the high-$k$ dielectric with the traditional polysilicon gate, a new metal gate technology was introduced [5]. In order to increase channel mobility, new strained Si channel technologies were developed [6]. Looking ahead, the pace of innovation continues with, for example, research to determine if higher channel mobility might be achieved by introducing compound III-V channel materials into the Si MOSFET [7].
Even the structure of the MOSFET is under renewed consideration: different variations of multiple-gate FET devices [8], e.g., the Trigate, are now being introduced into production. These innovations have continued to provide increases in integrated circuit performance [9].
One indicator of the ultimate performance of an information processor, realized as an interconnected system of binary switches, is the maximum binary throughput (BIT), that is, the maximum number of on-chip binary transitions per unit time. It is the product of the number of devices $M$ with the clock frequency of the microprocessor $f$:
$$\beta = Mf. \tag{1}$$
(Note that $\beta$ is an aggregate indicator of technology capability.)
The computational performance of microprocessors $\mu$ is often measured in (millions of) instructions per second (IPS) that can be executed against a standard set of benchmarks. There is a strong correlation between system capability for IPS $(\mu)$ and the binary throughput $\beta$, as shown in Fig. 2, and to a good approximation
$$\mu = f(\beta) = k\beta^{p}. \tag{2}$$
For the selected class of microprocessors, $k \sim 0.1$ and $p \sim 0.64$ with a high degree of accuracy (coefficient of determination $R^{2} = 0.98$). This strong correlation suggests a possible fundamental law behind the empirical observation.
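Eq. (2) with the fitted constants is easy to evaluate; a small sketch (the constants come from the text, the sample $\beta$ values are illustrative):

```python
# Evaluate the empirical performance law mu = k * beta^p (Eq. 2)
# with the fitted constants reported in the text: k ~ 0.1, p ~ 0.64.

def predicted_mips(beta, k=0.1, p=0.64):
    """Predicted instructions/second from binary throughput beta (bits/s)."""
    return k * beta ** p

# A chip with beta = 10^12 bit/s lands at a few million IPS on this fit.
# At the brain's estimated beta ~ 10^19 b/s, the fit predicts far fewer
# IPS than the ~10^14 estimated for the brain, i.e. the brain sits well
# above the microprocessor trajectory.
print(predicted_mips(1e12), predicted_mips(1e19))
```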
Fig. 2 also shows an estimated capability of the human brain in $\mu$–$\beta$ metrics. While it is difficult to quantify the brain operations, there have been several attempts to estimate computational performance of the brain. In [10], an estimate of equivalent binary transitions was made from the analysis of the control function of the brain: the equivalent number of binary transitions to support language, deliberate movements, information-controlled functions of the organs, hormone system, etc., resulting in an “effective” binary throughput of the brain $\beta \sim 10^{19}$ b/s. An estimate of the number of equivalent IPS was made in [11] from the analysis of brain image processing capability, resulting in $\mu \sim 10^{14}$ IPS. It is clear that the brain is not on the microprocessor trajectory in Fig. 2, giving rise to the hope that there may exist alternate technologies and computing architectures offering higher performance (at much lower levels of energy consumption). In the following sections, some such technologies, complementary and/or alternative to CMOS circuitry, will be discussed in detail.
Fig. 2. Benchmark capability $\mu$ (instructions per second) as a function of $\beta$ (bits per second).
Without customers for the increased volume of transistors and chips, there would be little incentive to continue to drive scaling at an exponential pace. The 300 billion dollar semiconductor industry (circa 2011) provides essential components for the much larger electronics devices and systems industries as well as automotive, entertainment, medical device, and other technology-based industries. One effect of the rapidly increasing capability of integrated circuit systems is to support rapid growth in functionality, and this has translated into a feature-driven market. That is, electronics customers, as a rule, replace their older electronic systems well before they are no longer useful because the newer devices offer a compelling increase in capability.
This contrasts with many other industries where purchases are driven by the need for replacement at the end of the useful life of the product. The net effect is that electronic products have been part of a closed cycle where increasing markets have supported the capability of industry to invest in the continuous reduction of semiconductor costs via the introduction of new technologies. Moore's law is actually connected to a more fundamental premise, known as the learning curve, which relates decreases in the price of a product to corresponding increases in the volume of production of that product. The learning curve for semiconductors has shown much less volatility over the years than Moore's law. The learning curve states that the cost per unit decreases by a fixed percent every time total cumulative volume doubles. This is applicable across a wide range of products, but the striking difference for semiconductors is that the rate of decrease in cost per unit has occurred at a much higher rate than for many other industries. Figs. 3 and 4 (courtesy of W. Rhynes of Mentor Graphics [12]) illustrate the learning curve for transistors and for microprocessors.
Fig. 3. Transistor cost as a function of the cumulative number of transistors shipped [12].
Fig. 4. Personal computer cost (inflation adjusted) per millions of instructions per second versus cumulative units shipped [12].
Note from Fig. 3 that the per-transistor cost decreases by approximately a factor of two for every doubling of transistor production. To put this in a temporal perspective, the average compound annual rate of cost reduction for transistors is on the order of 35% per year. In Fig. 4, the cost per millions of instructions per second (MIPS) for personal computers has shown an even more dramatic rate of cost decrease: a reduction on the order of a factor of nine in cost-per-MIPS every two years.
It is interesting to contemplate whether integrated circuits will be able to sustain these steep learning curves indefinitely. There exist formidable technical and economic challenges to doing so and some of these are considered in this paper. A good example is the search for new lithographic tools that can provide efficient and cost-effective patterning for features less than 10 nm in size. There is a focus today on extreme ultraviolet lithography to replace optical lithography but this technology is not yet production ready. At the same time, research in directed self-assembly [13], [14] continues to make good progress and offers the hope for some relief for optical methods. Moreover, there are clear and fundamental limits for the scaling of electron-based devices [15]. In spite of directed research programs that seek to provide alternatives to the MOSFET switch for logic applications where other nonelectron representations for information might be used, such as the Nanoelectronics Research Initiative (www.src.org/program/nri), no compelling replacement options have yet been identified [16], [17]. It appears that for many of the proposed alternative devices, their unique properties might be utilized to advantage to achieve a special function when integrated with CMOS technologies. On the other hand, continuing research in alternative memory technologies has resulted in the identification of potential replacements offering the potential for smaller sizes that could meet information processing performance specifications. Indeed, progress in memory technology may foretell changes in memory architectures for information processing and could support an increased focus on data-centric as opposed to logic-centric processing. Nevertheless, what will happen when we reach scaling limits for CMOS-like technologies? Can the learning curve shown above be continued at the same rates as have been sustained to date? 
Continuation of the semiconductor learning curve beyond the end of scaling rests on several factors. 1) It is essential to sustain an ever-broadening applications space for integrated circuits since this provides the revenue base for advances in semiconductor technology. 2) As scaling of features becomes more difficult, it will be necessary for advances in design, architectures, 3-D packaging, etc., to play an increased role in cost reduction. 3) Parallel fabrication to decrease manufacturing costs per unit transistor needs to be emphasized, e.g., by increasing wafer size [18]. In this paper, possibilities for continuing the semiconductor virtuous cycle are explored. The perspective is that the future for semiconductor technologies is very bright, even as we face scaling limits, primarily because the opportunities for integration of new functionalities with CMOS are at an early point and the possibilities for expanding applications incorporating new on-chip physical domains (e.g., mechanical, thermal, chemical, optical) of operation are just beginning. Examples include the integration of sensors that respond to a wide range of stimuli, new architectures that can reason from data leading to integrated systems that can assess and respond, inclusion of devices operating in new physical domains to increase the energy efficiency and performance of information processing systems, the introduction of 3-D packaging technologies, etc. All of these opportunities will require advances in science and engineering, but this is the nature of the semiconductor enterprise.
SECTION III
## “MORE MOORE”: EXTREMELY SCALED CMOS LOGIC AND MEMORY DEVICES
In order to assess the performance characteristics for extremely scaled transistors and memory cells, it is instructive to consider the generic structure and the physical layout of a transistor (binary switch) and a nonvolatile memory cell.
The approach taken is to utilize simple physical and geometrical models to make evident the essential performance characteristics of FETs at the limits of scaling.
### A. Electronic Switch (FET)
An energy barrier is used to control electron transport in FETs. The barrier can be formed, e.g., by doping that creates built-in charges in the barrier (channel) region, as shown in Fig. 5(a). The height and the width of the barrier determine essential operational characteristics of transistors such as device size, switching speed, operating voltage, off leakage current, etc. In order to control the barrier height, a gate electrode is coupled to the barrier region, separated by a thin layer of gate insulator, e.g., SiO2 or HfO2. When a voltage is applied to the gate, an electric field is created between the gate and the barrier region. This electric field changes the barrier height, thus allowing electrons to pass through the channel. The width of the barrier is defined by device fabrication, and is represented by geometrical characteristics such as channel length $L_{ch}$ or gate length $L_{g}$. In the following, it will be assumed that $L_{ch} \approx L_{g} \approx F$, where $F$ is the critical feature size. In order to retain gate control with scaling, it is necessary to decrease the gate insulator thickness $T_{ox}$ proportionally to the decrease of the channel length. For an optimized FET structure, a rule of thumb suggests $T_{ox}/L_{ch} \sim 1/30$ [19]. The device platform for modern microelectronics is known as the MOSFET.
Fig. 5. Semiconductor FET: (a) materials system; (b) generic floorplan; (c) scaling limits; and (d) connected binary switches.
The barrier representation of a binary switch (e.g., FET) shown in Fig. 5(a) also suggests a generic topology for the ultimately scaled device [Fig. 5(b)].
The 2-D floor plan of a smallest possible binary switch is a $3F \times F$ rectangle consisting of three square “tiles” of the same size $F$ (representing the source, channel, and drain regions of the MOSFET). Further, it can be assumed that the associated insulator and metal layout elements are also composed of tiles of minimum size $F$. Finally, the metal interconnects, which connect individual devices in more complex logic circuits, can also be represented as a combination of the square tiles, as shown in Fig. 5(b). It is straightforward to show from both topology and physics considerations that in the limiting case, it is useful to consider the size of the interconnect tile as equal to the device tile $F$. The tiling framework is a useful tool for circuit/system physical-level explorations of different scenarios of extreme scaling. A detailed treatment of the tiling framework can be found in [20], and two important examples will be considered in Section III-C. One fundamental issue that limits physical scaling of MOSFETs, and therefore the minimum tile size $F$, is quantum mechanical tunneling, which dramatically increases the off leakage current, as depicted in Fig. 5(c). A simple estimate of the tunneling limit can be made using the Heisenberg relation [see the numerical insert in Fig. 5(c)]. For typical parameters of a Si FET, this estimate results in a minimum channel length of $\sim$4 nm. More detailed calculations yield similar findings: it is argued in a number of studies that tunneling off-state leakage becomes overwhelming for $L_{ch} =$ 4–7 nm, and this is sometimes cited as the “ultimate” FET [21]. These assessments are consistent with the International Technology Roadmap for Semiconductors (ITRS) [22], which projected the minimal physical gate length in high-performance logic FETs to be in the range from 4.5 nm (2007 ITRS) to 5.9 nm (2011 ITRS). Note that the result for the smallest channel length in Fig.
5(c) depends on the mass of the information-bearing particles, e.g., the effective mass of electrons in Si. Heavier particle mass could, in principle, allow for further scaling. One approach to deal with the severe leakage in a scaled transistor is to develop families of FETs optimized for specific applications. For example, if highest switching speed is the goal, the smallest channel length and therefore the thinnest gate dielectrics are required. As a result, the leakage can be relatively high, which still could be tolerated in some applications. On the other hand, in other applications, such as, e.g., mobile devices, standby power minimization is mandatory. This can be achieved by increasing $L_{g}$ and $T_{ox}$, thus giving up some performance and device density. A set of parameters projected for extremely scaled transistors developed by the ITRS is shown in Tables 2 and 3 for high-performance and low standby power transistors, respectively.
TABLE 2 ITRS Performance Projections for Extremely Scaled High-Performance FETs (2007 ITRS Edition [22])
TABLE 3 ITRS Performance Projections for Extremely Scaled Low Standby Power FETs (2007 ITRS Edition [22])
### B. Nonvolatile Electronic Memory
In memory cells that store electron charge, such as flash, dynamic random-access memory (DRAM), or static random-access memory (SRAM), two distinguishable states 0 and 1 are created by the presence (e.g., state 0) or absence (e.g., state 1) of electrons in a specific location (the charge storage node). In order to prevent losses of the stored charge, the storage node is defined by energy barriers of sufficient height $E_{b}$ to retain charge (as shown in Fig. 6). The properties of the barrier, i.e., barrier height $E_{b}$ and width $a$, determine the retention time of a memory cell.
Fig. 6. The two-energy-barrier model for a memory cell: (a) the principle of storage and sensing; (b) write operation; and (c) read operation.
In order to obtain a nonvolatile memory cell, sufficiently high barriers must be created to retain the charge for a long period of time. As was argued in [23], for > 10 y retention, the barrier height $E_{b}$ must be more than $\sim$1.7 eV. High barriers are formed by using layers of insulator (I), which surround a metallic storage node (M). Such an I–M–I structure forms the storage node in the floating gate cell, the basic element of flash memory. The barrier height $E_{b}$ is a material-specific property [see the table insert in Fig. 6(a)]. As shown in Fig. 6(a), the stored electrons can “leak” from the storage node either over the barrier (if the barrier is not sufficiently high), resulting in leakage current $I_{o-b}$, or by tunneling through the barrier (if the barrier is not sufficiently wide), resulting in leakage current $I_{T}$. For long retention (e.g., > 10 y), the theoretical barrier width must be > 5 nm for all known dielectric materials (typically > 7 nm in practical devices). The corresponding practical minimum size of the floating gate cell is $\sim$10 nm [23]. The requirement for a large barrier height and thickness also results in a fundamentally high operating voltage, both for write and read. For example, during the write operation, electrons are injected into the storage node, and this requires operation in the Fowler–Nordheim (F-N) tunneling regime for faster injection. The condition for F-N tunneling is $eV_{b} > E_{b}$, i.e., the potential difference across the barrier between the storage node and the external contact must be larger than the barrier height. Since the storage node is isolated from the external contacts by two barriers (i.e., it is floating), the total write voltage applied to the opposite external contacts of the memory cell must be more than twice the barrier height: $eV_{\rm write} > 2E_{b}$, as shown in Fig. 6(b). (A symmetric barrier structure is assumed.)
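The > 10 y retention criterion behind the ~1.7 eV figure can be illustrated with a simple Arrhenius estimate (the escape attempt time below is an assumed round number, not a value from the paper):

```python
import math

# Arrhenius-type estimate of over-barrier charge retention:
#   t_retention ~ t0 * exp(Eb / kT),
# where t0 is an assumed escape attempt time (~1e-13 s here,
# a hypothetical round number) and Eb is the barrier height in eV.

def retention_years(eb_ev, t0_s=1e-13, temp_k=300.0):
    kT = 8.617e-5 * temp_k          # Boltzmann constant in eV/K, times T
    seconds = t0_s * math.exp(eb_ev / kT)
    return seconds / 3.156e7        # seconds per year

# A ~1.7 eV barrier comfortably clears the 10-year retention target at
# room temperature, while a ~1.0 eV barrier falls far short.
print(retention_years(1.7) > 10, retention_years(1.0) > 10)
```

The exponential dependence on $E_{b}/kT$ is why a modest increase in barrier height buys many orders of magnitude in retention time.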
Thus, the floating gate structure inherently requires a high voltage for the write operation: for example, for SiO2 barriers, $V_{\rm write,min}>6$ V, and the write voltage should be > 10–15 V for faster ($\sim$ms–$\mu$s) operations. The presence or absence of stored electric charge in the storage node can be detected by an electrometer-type device [shown schematically in Fig. 6(a)]. The sensing device should be in immediate proximity to the storage node. A FET is commonly used as the sensor, and a complete nonvolatile floating gate memory cell consists of a stack of metallic and insulating layers on top of a FET channel, as shown in Fig. 6(b). The sensing FET is controlled by the voltage $V_{\rm read}$ applied to an external electrode, the control gate. The source-drain current of the FET depends on the presence or absence of charge in the floating gate; thus the memory state can be sensed by measuring the FET current. The control gate allows modulation of the semiconductor channel of the FET by external commands, similarly to the logic FET. However, differently from a conventional transistor, the degree of accessibility of the channel from the control gate is rather limited. First, the control gate is physically far from the channel, since the minimal thickness of both the top and bottom dielectric layers is large due to the retention requirements, and the minimal thickness of the insulator stack is > 10 nm. Second, the control gate affects the channel only indirectly, as the floating gate lies between the control gate and the channel. Therefore, a large read voltage must be applied to the control gate for reliable on/off transitions of the sense transistor. The maximum read voltage is, however, limited by the condition for F-N tunneling discussed above: for the read operation, $eV_{\rm read}<2E_{b}$; for example, for SiO2 barriers, $V_{\rm read,max}<6$ V for a nondisturbing read. In practice, a typical read voltage is 4.5–5 V.
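The write/read voltage window follows directly from the two-barrier F-N conditions $eV_{\rm write}>2E_{b}$ and $eV_{\rm read}<2E_{b}$. The SiO2/Si electron barrier of about 3.1 eV used below is a commonly quoted value assumed for illustration, consistent with the "> 6 V" figure in the text:

```python
E_B_SIO2 = 3.1  # eV; commonly quoted Si/SiO2 electron barrier height (assumed value)

def fn_threshold_voltage(e_b_ev):
    """Voltage across two symmetric barriers at the F-N onset, in volts.

    Write must exceed this value (to inject charge); read must stay
    below it (to avoid disturbing the stored state).
    """
    return 2 * e_b_ev  # numerically equal in volts when E_b is in eV

v_fn = fn_threshold_voltage(E_B_SIO2)
print(f"V_write > {v_fn:.1f} V, V_read < {v_fn:.1f} V")  # ~6.2 V either way
```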
### C. 2-D and 3-D Layouts of Logic and Memory Circuits

Binary switches in logic circuits will be assumed to be isolated, thus allowing for arbitrary wiring. From the tiling consideration, the most compact layout for an array of isolated devices (assuming at least one tile between each device for insulation) results in a maximum packing density of binary switches on a 2-D plane [Fig. 7(a)]

$$n_{L}={1\over 8F^{2}}.\eqno{(3a)}$$

In the following, (3a) will be assumed as the device density in the logic layout. Next, interconnects need to be added. To estimate the minimum number of interconnect tiles per device, assume that in a three-terminal device each terminal needs at least one "contacting" interconnect tile (three total) and one "connecting" interconnect tile (three total). This results in six interconnect tiles per binary switch; including the contacting tiles would result in eight interconnect tiles per switch. Thus, the average interconnect length obtained from the tiling consideration is $\langle L\rangle=(6$–$8)F$. This estimate is consistent with the wire-length distribution analysis in practical microprocessors [24]. For this densest arrangement, at least three additional layers of interconnects would be needed.

Fig. 7. Representation of maximum device density for (a) logic and (b) memory circuits.

Memory cells are typically organized in regular $X\times Y$ arrays, so only simple regular wiring is needed. Regularly wired memory cells in an array can be connected in series, thereby enabling higher packing density, as shown in Fig. 7(b)

$$n_{M}={1\over 4F^{2}}.\eqno{(3b)}$$

[The serial connection of Fig. 7(b) represents a NAND array, the typical array architecture of mainstream flash memory products.]
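For concreteness, (3a) and (3b) can be evaluated at an illustrative feature size of $F=10$ nm:

```python
def logic_density(f_m):
    """Maximum 2-D density of isolated logic switches, eq. (3a): 1/(8 F^2)."""
    return 1.0 / (8.0 * f_m**2)

def memory_density(f_m):
    """Maximum 2-D density of serially wired memory cells, eq. (3b): 1/(4 F^2)."""
    return 1.0 / (4.0 * f_m**2)

F = 10e-9  # half-pitch in meters (illustrative)
print(logic_density(F))   # 1.25e15 devices/m^2
print(memory_density(F))  # 2.5e15 cells/m^2, twice the logic density
```

The factor-of-two advantage of memory over logic comes entirely from the serial wiring, which removes the insulating tile between neighboring cells.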
The tiling framework provides a methodology to estimate the average energy per bit in an arbitrary logic circuit. As argued in [25], at the limits of scaling, the energy per tile is nearly the same for device and interconnect tiles and approximately equals the device switching energy ($E_{sw}$ in Tables 2 and 3). For a total number of tiles $k$ (devices and interconnects), the average switching energy per bit is

$$E_{\rm bit}(k)={1\over 2}k\cdot E_{sw}.\eqno{(4a)}$$

(The factor 1/2 originates from the assumed 50% activity factor.)
Now, assuming the average interconnect length $\langle L \rangle=6F$ and a total number of tiles per device $k=3+6=9$, from (4a) we obtain

$$E_{\rm bit}={9\over 2}E_{sw}.\eqno{(4b)}$$
The total dynamic energy consumption of a circuit of $N$ binary switches will be $NE_{\rm bit}$. Correspondingly, the energy dissipated by the transistors themselves (without interconnects) is $NE_{sw}$. It follows from (4b) that the transistors account for about 2/9, or 22%, of the total dynamic energy consumed by a logic circuit, which is consistent with the energy breakdown analysis of practical microprocessor chips [26].
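Equations (4a) and (4b) and the resulting transistor share of the energy budget can be checked with a few lines (energies in arbitrary units, since only ratios matter):

```python
def energy_per_bit(k_tiles, e_sw):
    """Average switching energy per bit, eq. (4a): E_bit = (1/2) k E_sw."""
    return 0.5 * k_tiles * e_sw

E_SW = 1.0            # device switching energy, arbitrary units
K = 3 + 6             # 3 device tiles + 6 interconnect tiles per switch
e_bit = energy_per_bit(K, E_SW)     # = 4.5 * E_sw, eq. (4b)

# Ratio of transistor energy N*E_sw to total energy N*E_bit, as in the text:
transistor_share = E_SW / e_bit
print(e_bit, round(transistor_share, 3))  # 4.5 0.222  -> 2/9 of the total
```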
For memory arrays, due to the regular wiring, in many instances the properties of the interconnecting array wires determine the operational characteristics of the memory system. A given cell in an array is selected (e.g., for a read operation) by applying appropriate signals to both interconnect lines, thus charging them. The relatively large operating voltage of flash results in a rather large line charging energy $\sim\! C_{\rm line}V^{2}$, where $C_{\rm line}$ is the line capacitance. For $F=10$ nm and a 128 × 128 array, the line capacitance is $\sim\! 10^{-14}$ F [27]. A random-access read with $V_{\rm read}\sim 5$ V then costs $\sim\! 10^{-13}$ J per line access (and therefore per random-access operation), or $\sim\! 10^{-15}$ J/bit for serial access. For the write operation with $V_{\rm write}\sim 15$ V, the write energy is $\sim\! 10^{-12}$ J/line. (In practical flash memory devices, the read energy is of the order of $10^{-13}$–$10^{-11}$ J/bit and the write energy of the order of $10^{-10}$–$10^{-9}$ J/bit [28], [29].)
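These order-of-magnitude line energies follow directly from $E\sim C_{\rm line}V^{2}$; a sketch, where dividing the read energy by 128 bits per line is an assumption based on the quoted 128 × 128 array size:

```python
def line_energy(c_line_f, v_volts):
    """Charging energy of an array line, ~ C_line * V^2 (order of magnitude)."""
    return c_line_f * v_volts**2

C_LINE = 1e-14                        # F, for F = 10 nm and a 128 x 128 array [27]
e_read = line_energy(C_LINE, 5.0)     # ~2.5e-13 J per line access (random read)
e_write = line_energy(C_LINE, 15.0)   # ~2.3e-12 J per line (write)
e_read_serial = e_read / 128          # ~2e-15 J/bit, assuming 128 bits share a line
print(e_read, e_write, e_read_serial)
```

The 9x read/write gap is just the square of the 5 V vs. 15 V voltage ratio, which is why the high write voltage of flash dominates its energy budget.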
A model for a tightly integrated 3-D system is useful as a conceptual tool for estimating the ultimate performance of CMOS systems. The limits of 3-D integration can be conceived using the methodology of stacking 3-D tiles. For example, in logic circuits, the thickness of the FET layer is $\sim$3F (including the vertical extension due to the gate and 1/2 interlayer insulation on each side) and the interconnect layer thickness is 2F (with insulation). As mentioned above, for this densest arrangement at least three additional layers of interconnects would be needed. Thus, the resulting thickness of one logic circuit layer is 9F. In memory circuits, the thickness of one layer can be 6F, which includes a layer of array grid interconnects and interlayer insulation. Taking into account (3a) and (3b), we obtain a limiting 3-D density of $1/72F^{3}$ for logic and $1/24F^{3}$ for memory. Table 4 summarizes the essential parameters of CMOS logic and memory devices at the limits of scaling. A question that arises is whether alternative technologies exist that could offer further improvements beyond CMOS. Some examples are considered in Section V.
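The 3-D limiting densities are simply the 2-D densities of (3a) and (3b) divided by the respective layer thicknesses (9F for logic, 6F for memory); a quick evaluation at an illustrative F = 10 nm:

```python
def logic_density_3d(f_m):
    """Limiting 3-D logic density: (1 / 8F^2 per plane) / (9F per layer) = 1/(72 F^3)."""
    return 1.0 / (72.0 * f_m**3)

def memory_density_3d(f_m):
    """Limiting 3-D memory density: (1 / 4F^2 per plane) / (6F per layer) = 1/(24 F^3)."""
    return 1.0 / (24.0 * f_m**3)

F = 10e-9  # m (illustrative)
print(logic_density_3d(F))   # ~1.4e22 devices/m^3
print(memory_density_3d(F))  # ~4.2e22 cells/m^3
```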
TABLE 4 “Ultimate CMOS”: Limiting Density and Energetics
SECTION IV
## NOT JUST MOORE (MORE THAN MOORE)
The possibility of extending the functionality of CMOS circuits by integration with other technologies has been referred to as "more than Moore" [30]. An example that has already encountered extraordinary market success in the last few years is provided by CMOS imagers, which can be found in any cell phone camera [31]. There, Si photodetectors or phototransistors constitute the optical sensors, which are monolithically integrated on a CMOS chip [32]. Another multifunctional combination within Si technology is provided by the integration of microelectromechanical system (MEMS) devices with CMOS [33]. In this case, hybrid integration is already available at a prototype level; monolithic integration will follow once issues related to thermal mismatch between the two processes are solved. Along the same line, digital micromirror devices (DMDs), micrometer-size mirrors realized on top of a CMOS circuit which controls their 3-D movement, are already used in projectors and TV sets [34]. A more ambitious path will be the integration not just of different functions within one material system, but also of different technologies, as, for instance, Si and III-V semiconductors. Several examples can be provided to indicate how relevant such integration would be. Since interconnection delay is already the bottleneck for the speed of an integrated circuit, optical links would provide an obvious solution. Nonlinear optical passive components can be fabricated in Si technology, which could provide guiding, routing, and other optical functionality directly on chip. Light sources, though, are still a domain of compound semiconductors like GaAs and InP. Thus, the integration of light-emitting diodes (LEDs) or lasers realized with such materials is necessary. Lattice mismatch of the semiconductors, thermal mismatch of the processes, and fabrication compatibility pose serious challenges to integration, even at a hybrid level.
Besides optics and radio frequency (RF), the integration with CMOS of other functions such as sensing, biological screening, or information storage can be of great interest. A review of some "more than Moore" devices and a brief discussion of the open challenges are provided below. Such a concept is nicely summarized in the scheme of a possible future chip shown in Fig. 8, which would be based on CMOS technology but would incorporate several other functionalities coming from alternative technologies [35]. The integration can be achieved directly on chip, requiring that the new technologies be fully compatible with CMOS; this is referred to as a "system-on-chip" approach. Alternatively, a 3-D integration is possible, where several chips, possibly realized with different technologies, are stacked on top of each other ("system on package").
Fig. 8. Illustration of the integration of many technologies on a single CMOS substrate [35].
### A. RF Technologies
Wireless communication has witnessed an unprecedented (and maybe unexpected) development in recent years. It is therefore more and more important to effectively bridge the two worlds of digital information processing and of RF transmission. If one looks at a modern cellular phone, a variety of specialized modules are present which perform specific tasks. Ideally, one unique chip should incorporate all necessary functionalities. RF functions are provided by antennas, filters, switches, and converters, which in turn use RF transistors, mechanical filters, and other active or passive devices. Many new materials and concepts have been introduced recently, which might have an important impact on RF applications in the future. In addition, they might help in the integration with CMOS technology.
MEMS technology has reached maturity and commercial success in recent years thanks to its application in electronic components for automotive and mobile communication [36]. One of its appealing features is full compatibility with CMOS technology. By scaling MEMS to the nanoscale, further RF functionalities could be reached [e.g., nanoelectromechanical system (NEMS) resonators]. Traditionally, some RF components, such as the quartz crystals used in the reference oscillator, are kept off-chip. In fact, integration leads to very poor quality factors and temperature instability, mainly due to the poor performance of the integrated inductors and capacitors. The best opportunities for miniaturization and integration of reference oscillators are provided by capacitively transduced microelectromechanical and nanoelectromechanical (MEM/NEM) resonators, which in recent years have reached operating frequencies of several gigahertz. Scaling to nanometer dimensions poses several problems connected, e.g., to fluctuations, friction, and dissipation mechanisms at the nanoscale. Nanostructures such as platinum [37] or Si [38] nanowires and carbon nanotubes (CNTs) [39], [40] have been demonstrated in NEMS. They are appealing due to their high stiffness, low density, defect-free structure, and ultrasmall cross section. Recently, graphene has also attracted considerable attention for these applications [41].
The signal produced by this class of devices is very small, and impedance matching can be a problem. One alternative is provided by movable gate transistors, which combine the advantage of a vibrating micro/nanostructure with the large output signal provided by the transistor drain current. Highly scaled versions of in-plane resonant gate transistors with a front-end process have been reported based on silicon-on-nothing technology [42]. However, the sub-100-nm gaps and 400-nm-thick single crystal resonators suffered from poor electron mobility. Nevertheless, full CMOS compatibility is guaranteed. An alternative structure has been proposed, the vibrating-body FET, where a combined effect due to modulation of the carrier density and of the piezoresistance in the channel is achieved. Si nanowires are the ideal channel material due to their pronounced piezoresistive properties.
Another appealing candidate for an RF oscillator that is fully CMOS compatible is provided by spin transfer torque effects [43]. In nanosized magnetic multilayer structures, metallic spin valves and magnetic tunnel junctions can drive uniform precession of the free-layer magnetization under an external input (provided by a magnetic field or an electrical current). This precession produces voltage responses that make those magnetic multilayers high-frequency spin torque oscillators. Oscillation frequencies ranging from several hundred megahertz to tens of gigahertz have been demonstrated [44]. The challenges that need to be overcome for practical applications include 1) reaching output powers in the milliwatt range, 2) improving the spectral purity of the oscillator, and 3) realizing auto-oscillating structures, thus eliminating the need for external magnetic fields.
RF mixers are an important building block of an RF front end. One candidate technology is the resonant tunneling diode [45]. Another candidate that should have provided low-noise solutions at RF, the single-electron transistor, could only demonstrate interesting performance at low temperature [46]. In principle, carbon-based devices possess the necessary nonlinearities in electrical characteristics to demodulate an AM signal [47], [48]. The main challenge consists, once again, in the integration of such technologies with CMOS [49].
It could be mentioned here that an alternative technology for RF applications (at least up to a few megahertz) has been developed in recent years, based on polymeric devices and circuits. The advantages of such components are the low fabrication cost, the independence of the type of substrate used, and the possibility of large-area manufacturing. Fully organic radio-frequency identification (RFID) circuits including electronics and antennas have been realized. Such circuits include more than 1000 organic thin film transistors (OTFTs) and can operate up to 13.5 MHz [50]. Organic electronics [51] will not compete in speed with CMOS-based solutions, since the low mobility of organic semiconductors (typically below 1 cm$^{2}$/V·s) limits the maximum achievable frequency. Nevertheless, applications such as active matrix backbones for OLED displays, ultralow-cost RFID systems, or biocompatible/disposable sensors can be envisaged.
### B. Optical Technologies
As mentioned earlier, interconnections have become one of the limiting factors for the speed of integrated circuits. Moving to optical interconnects would greatly enhance the available bandwidth, reduce the on-chip heat dissipation, and assure immunity from electrical interference. Si or Si-compatible optical components have long been demonstrated [52], [53]. Attempts to obtain efficient Si-based light sources have instead not been very successful. Si-based micro/nanostructured devices have been introduced [54]. Considerable interest has been attracted by the demonstration of a Si laser based on the Raman effect [55]. Currently, only optical pumping has been demonstrated; for any realistic application, electrical pumping has to be achieved. The field of passive components (waveguides, filters, connectors) has witnessed considerable advances with the discovery of photonic bandgap structures [56]. Nanotechnology has allowed researchers to realize 3-D, 2-D, or 1-D periodic structures with tailored spectral transmission properties. By inserting defects into an otherwise perfectly symmetric lattice, it is also possible to deflect a light beam over distances of a few nanometers. Thus, unconventional structures to guide and deflect light on a chip can be fabricated with unprecedented properties [57]. Furthermore, by exploiting the properties of surface plasmons in quantum dot structures, the absorption and/or transmission properties of materials and surfaces can be engineered [58]. Being based on Si or on Si-compatible materials, such optical components can be easily integrated with CMOS.
Another optical component which is fully based on Si technology is the CMOS imager, which has witnessed considerable commercial success in recent years, mostly due to its use in camera phones. Keys to this success have been features inherent to CMOS technology, such as size, weight, power consumption, mechanical robustness, and price. Two challenges remain on the agenda: the extension of the detectable range, especially to the infrared, and scaling of the optical components to keep pace with CMOS miniaturization.
Typically, CMOS imagers employ a Si photodetector or phototransistor as the optical sensor. Spectrally, the sensitivity of such components is limited to the visible range because of the Si energy gap. In order to move into the infrared and far-infrared range, which is very attractive for a variety of applications in the fields of security, screening, and environmental monitoring, one possibility is to use small-gap semiconductor materials. Photodetectors and imagers using, e.g., InAs or CdTe have been demonstrated and even commercialized [59]. The main problems with such technologies are, on the one side, the need to cool down the detector in order to have an acceptable noise level and, on the other, the difficulty of integrating III-V and II-VI materials with CMOS. An alternative approach uses a different concept: rather than converting the optical signal into an electrical one via creation of electron-hole pairs following photon absorption, one can use the change in some property of the sensing material, for instance, resistance or temperature, under illumination. The most successful component of this type is the bolometer, which can be structured into arrays and driven by CMOS circuitry. A thorough review of infrared detector technologies can be found in [60]. Uncooled infrared cameras based on microbolometers integrated on CMOS are available on the market. They provide acceptable performance, but they are quite expensive.
Photodetectors can also be realized using conductive polymers [61]. The advantages of such materials are: 1) the possibility to realize devices, circuits, and systems on large-area substrates, which in turn can be of different nature (e.g., glass, plastic, textile, and paper); and 2) the reduced fabrication cost, since the material can be processed from solution. Thus, cheap preparation techniques such as ink-jet printing, spin coating, and spray casting can be used [62]. The component that has received great attention not only in the research field but also on the market is the organic light-emitting diode (OLED) [63]. In fact, OLED displays have been introduced in several cellular and smartphones. This is the first time that an organic device, based on conductive polymers, has entered a large-volume market. Besides OLEDs, other electronic and optoelectronic organic components have been demonstrated [64]. Organic solar cells based on blends of conducting polymers have also been fabricated with roll-to-roll processes, displaying a conversion efficiency of a few percent [65]. Due to the flexibility in fabrication methods, organic photodetectors (OPDs) can be integrated onto CMOS. Inverted structures for OPDs that can be directly fabricated on CMOS as an end-of-the-line process have been demonstrated [66]. CMOS imagers with OPD active pixels would guarantee a much larger fill factor with respect to Si photodetectors. In connection with IR imagers, either low-gap polymers [67] or hybrid systems combining polymers with quantum dots [68] have displayed room-temperature sensitivity in this wavelength range. Their use in hybrid CMOS imagers would allow the realization of IR imagers at a cost comparable to conventional CMOS imagers.
Concerning pixel reduction, current technology allows for a 2.2-$\mu$m pixel pitch, and demonstrations exist for 1.7 $\mu$m. Pixel size reduction in active pixel sensors is crucial since it leads to higher numbers of pixels at almost constant price. A further reduction is challenging, due both to limited optical capabilities and to signal noise. A large part of today's imager cost is taken by lenses, whose complexity is bound to increase with miniaturization. In order to reverse such a trend: 1) innovative strategies exploiting coherent effects at the nanoscale have to be found, for instance, exploiting plasmonics and photonic bandgaps; and 2) image correction via on-chip computation will have to be implemented, fully exploiting the capability of the CMOS chip.
### C. Sensing Technologies
The computational power and the maturity of CMOS technology can be of great advantage in the sensor field, where environmental parameters have to be determined and corresponding actions undertaken. Since the signals to be sensed are mostly nonelectrical in nature, appropriate transducer elements are needed. Examples of external physical stimuli are mechanical (pressure, motion, vibration), electrical (voltage), thermal (temperature difference), electromagnetic (light), chemical (presence of a particular chemical species), etc. In response to the external stimulus, the transducer generates an electrical signal that is further processed by accompanying circuitry and is used to provide actionable information to the end user. Sensor metrics include sensitivity, selectivity, and repeatability. Nanotechnology can provide adequate solutions in the form of novel materials and structures with high sensitivity that can be integrated into mainstream Si technology. Two kinds of transducers are currently receiving considerable attention for sensing: 1-D structures (e.g., nanowires and nanotubes) [69], [70] and NEM devices [71].
Sensors may potentially be everywhere, providing instrumentation for the state of the environment, security, supporting regulation of different processes, etc. In many applications, it is necessary for the sensors to communicate their data to a central information collection/decision-making authority. This gives rise to the need to establish sensor communication networks that can be used, for instance, to create autonomic systems that are user-transparent, self-healing, self-configuring, self-optimizing, and self-protecting, analogous to many of the functions of the human nervous system such as control of heart rate, breathing, etc. Of course, sensor networks in general represent an important field of endeavor where issues of configuration, optimum communication protocols, and information carrying capacity are essential concerns [72].
One example of the many application areas for sensors is in fields related to biology. The state of a living system can be monitored by sensing different physical parameters, e.g., chemical, electrical, optical, thermal, magnetic, etc. There are indications that 1-D structures, such as semiconductor nanowires and CNTs, may offer superior sensitivity to planar devices and allow for picomolar detection of biomolecules [73]. An additional attractive feature of 1-D structures is that they might lend themselves to minimally invasive probes to contact or even puncture the cellular membrane, or even to be ingested into the cell itself. This suggests the intriguing possibility of electrically monitoring processes inside the cell [74]. Recently, it has been shown that nanowires and nanotubes can also be employed not individually but rather as a conductive film, which can serve as semitransparent electrodes, sensors, transistors, and, in general, for flexible and stretchable electronics [75], [76], [77], [78], [79]. Such solution-based technology is compatible with CMOS.
In many applications, sensors should operate in a standalone mode at extremely low levels of energy consumption. In some cases, operational energy could be harvested from the sensor environment, e.g., in the form of solar, thermal, electromagnetic, or mechanical energy. Currently, there is continuing progress in miniature energy harvesting devices to support autonomous operations of sensor units [80], [81]. Understanding of maximum performance potential of such energy harvesting devices, given practical size constraints, requires further studies. In parallel with the need to scavenge energy from the environment, there will be an increasing need to store it in batteries or capacitors.
Nanostructures and nanodevices, as well as novel materials, can be decisive in the search for efficient power solutions for future electronic systems. Some very promising results have already been obtained. Among them, the so-called third-generation solar cells promise enhanced efficiency and/or reduced costs by using quantum nanostructures or organic semiconductors [82]. CNT or graphene sheets provide ideal solutions for compact, long-lasting miniature supercapacitors [83]. Nanowires possess optical, electrical, and thermoelectric properties which can be useful in a number of energy-related applications [84]. In most cases, the technologies and devices just mentioned can be used standalone or in hybrid systems integrated on CMOS.
SECTION V
## BEYOND MOORE
### A. Terminology and Context
There is an international effort underway to identify an alternative to the CMOS transistor, which within one to two decades will no longer submit to feature size and voltage scaling [22]. Many of these alternative devices operate using state variables other than charge, and some of them may offer functionalities beyond those of a binary device that could be useful for more complex operations. Indeed, the choice of state variable for a device not only has ramifications for device performance but echoes up the abstraction hierarchy to impact device-to-device communication, achievable chip complexity, and ultimately system performance capability. The dependency is depicted in Fig. 9. The symbols in Fig. 9 have the following meaning:
- $L_{sw}$: the smallest device (switch) feature, e.g., gate length in CMOS;
- $t_{sw}$: device switching time, i.e., the time required to change state;
- $E_{sw}$: the energy required to change the device state (switching energy);
- $N_{\rm car}$: the number of information carriers required to transmit state to downstream devices;
- $M$: the device count, a measure of system complexity;
- $\beta$: binary information throughput, a measure of technological capability;
- $\mu$: instructions per second, a measure of information processor capability.
The state of a binary switch is that minimum set of physical variables that fully describe the system and its response to a given set of control variables. In characterizing the functionality of various candidate devices, it is important to draw a distinction between the physical entities used in their realization and the properties of these entities utilized in the operation of the device, which we refer to as variables. For example, physical entities might include electrons, atoms, ferromagnetic (FM) domains, etc. Associated with these physical entities are properties such as charge, spin, magnetic dipoles, etc.; the same entity might be used in two different devices, each exploiting a different property of that entity. In the following, “property” is used as a synonym for the word “variable” to agree with conventional usage.
Fig. 9. State variable and different facets of information processing system.
Now each device has input, state, and output variables: for example, the FET utilizes the electron as the physical entity, and the properties of charge are used for the input, output, and state variables. On the other hand, the spinFET utilizes electrons but it is controlled by electron spin, its state is defined by spin, and its output is transferred as charge.
Table 5 provides a tabulation of physical entities and the properties employed by several of the emerging devices. Also, an expanded taxonomy employed by the ITRS Emerging Research Devices Chapter [22] is shown in Fig. 10.
TABLE 5 Taxonomy for Candidate Information Processing Devices
Fig. 10. ITRS taxonomy for information processing nanotechnologies [22].
### B. Novel Device Examples
#### 1) III-V, Ge Channel, and Nanowire FET
It is well known that III-V compound semiconductors are ideal candidates for high-speed devices (several tens of gigahertz), due to their excellent bulk electron mobilities (e.g., 33 000 cm$^{2}{\hbox {V}}^{-1}{\hbox {s}}^{-1}$ for InAs and 80 000 cm$^{2}{\hbox {V}}^{-1}{\hbox {s}}^{-1}$ for InSb) and hole mobilities (1250 cm$^{2}{\hbox {V}}^{-1}{\hbox {s}}^{-1}$ for InSb and 850 cm$^{2}{\hbox {V}}^{-1}{\hbox {s}}^{-1}$ for GaSb). The integration of GaAs and InP on Si substrates has long been sought but never achieved. Advances in epitaxial techniques have recently offered new perspectives on this challenge. In particular, Sb-based compound semiconductors are seen as realistic CMOS channel replacement materials due to their high mobilities for both electrons and holes [85], [86]. A further appealing system is provided by InAs, which can be grown in the form of nanowires directly on Si substrates with excellent material quality [87]. The major challenges include the need for high-quality, high-$k$ gate dielectrics (if MOSFETs are going to be used), damage-free low-resistivity junctions, and heterointegration on very large-scale integration (VLSI)-compatible Si substrates. Similarly to III-V semiconductors, germanium (Ge) is also a potential channel replacement material because of its excellent bulk electron mobility of 3900 cm$^{2}{\hbox {V}}^{-1}{\hbox {s}}^{-1}$, almost three times higher than that of bulk Si. Unfortunately, the poor quality of the Ge/dielectric interface has resulted in much lower mobilities in fabricated transistors [88]. Strain engineering of Ge $n$-channel MOSFETs has also been studied as a performance booster technology, and its effectiveness has been demonstrated at small strain levels. An open issue is whether the low electron saturation velocity in Ge will limit the short channel performance of $n$-channel Ge MOSFETs relative to Si $n$-channel MOSFETs.
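The quoted bulk electron mobilities can be put side by side; the Si value of about 1400 cm$^{2}$/V·s used below is a standard textbook figure assumed for comparison, not a number from the text:

```python
# Bulk electron mobilities, cm^2 V^-1 s^-1. The Si value (~1400) is a
# standard textbook figure assumed here; the others are quoted in the text.
MU_N = {"InAs": 33000, "InSb": 80000, "Ge": 3900, "Si": 1400}

ratio_ge_si = MU_N["Ge"] / MU_N["Si"]
print(f"Ge/Si electron mobility ratio: {ratio_ge_si:.1f}")  # ~2.8, i.e., "almost three times"
```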
In conclusion, III-V compound semiconductor and Ge FETs are considered viable candidates to extend CMOS to the end of the Roadmap.
Nanowire FETs are structures in which the conventional planar MOSFET channel is replaced with a semiconducting nanowire. Such nanowires have been demonstrated with diameters as small as 0.5 nm. They may be composed of a wide variety of materials, including Si, Ge, various III-V compound semiconductors (GaN, AlN, InN, GaP, InP, GaAs, InAs), II-VI materials (CdSe, ZnSe, CdS, ZnS), as well as semiconducting oxides (${\hbox {In}}_{2}{\hbox {O}}_{3}$, ZnO, ${\hbox {TiO}}_{2}$) [89]. Nanowires can exhibit quantum confinement behavior, i.e., 1-D conduction, that can lead to the reduction of short channel effects and other limitations to the scaling of planar MOSFETs. The vapor–liquid–solid (VLS) growth mechanism has been used to demonstrate a variety of nanowires, including core-shell and core-multishell heterostructures [90], [91]. Heterogeneous composite nanowire structures have been configured in both core-shell and longitudinally segmented configurations using group IV and compound materials. The longitudinally segmented configurations are grown epitaxially so that the material interfaces are perpendicular to the axis of the nanowire. This allows substantial lattice mismatches without significant defects. Vertical transistors have been fabricated in this manner using Si, InAs, and ZnO, with quite good characteristics [92], [93], [94]. The small lateral dimension of the nanowires allows their direct growth on lattice-mismatched substrates without the typical problems of dislocations and defects encountered in films. Thus, for instance, InAs of very good morphological quality has been grown directly on Si. Circuit and system functionality of nanowire devices has been demonstrated, including individual CMOS logic gates and other prototype circuit elements [95], [96]. Still, a lot of work is needed to minimize parasitic components and achieve the high frequencies that have been predicted.
One of the crucial parameters controlling the power dissipation of CMOS devices is the subthreshold swing. In conventional MOSFETs, the thermal injection of carriers from the source into the channel sets a room-temperature lower limit of about 60 mV/dec.
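The 60 mV/dec figure follows directly from Boltzmann statistics: the swing of an ideal MOSFET is bounded below by $S = \ln(10)\,kT/q$. A quick numerical check of this limit (the constants are standard values; the computation is illustrative and not part of the original text):

```python
# Thermal limit of the MOSFET subthreshold swing: S = ln(10) * kT/q.
import math

k = 1.380649e-23      # Boltzmann constant, J/K
q = 1.602176634e-19   # elementary charge, C
T = 300.0             # room temperature, K

S = math.log(10) * k * T / q   # volts per decade of drain current
print(f"S = {S * 1e3:.1f} mV/dec")   # ~59.5 mV/dec, commonly rounded to 60
```

Any device whose injection mechanism bypasses thermal emission (e.g., band-to-band tunneling, as in the tunnel FETs below) can in principle undercut this bound.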
Tunnel FETs based on a gated p-i-n junction are expected to display an abrupt ${I}_{\rm on}/{I}_{\rm off}$ transition, thus lowering the subthreshold swing below the intrinsic MOSFET limit [97]. Such improvement is intrinsically connected to the quantum mechanical band-to-band tunneling process [98], which reacts sharply to variation of the gate voltage. High-performance tunnel FETs have been explored using low bandgap materials like Ge [99], SiGe [100], or based on Si nanowires [101] and CNTs [102]. A major challenge is the integration of such materials and structures on advanced Si platforms [103].
A completely different type of switch can be achieved by exploiting the mechanical displacement of a solid beam, controlled electrostatically, to create a conducting path between two electrodes [104]. Such a micro/nanoelectromechanical (M/NEM) switch has two major advantages with respect to MOSFETs: negligible leakage and negligible subthreshold swing. Thus, standby energy dissipation as well as dynamic energy consumption can be drastically reduced. The most recent developments suggest that M/NEM switches are attractive for ultralow-power digital logic applications. In addition, it is expected that the energy performance as well as the functional densities can largely improve with scaling. M/NEM switches can be fabricated by top-down approaches using conventional lithography techniques on Si, reaching actuation gaps as small as 15 nm [105]. Alternatively, bottom-up approaches employing CNTs [106] or Si nanowires [107] have been followed. In all cases, the leakage was virtually zero. The main weakness is switching speed, as the beam requires around 1 ns to move from the off position to the on position. A further challenge for M/NEM switches is the control of the surface forces and the reliability of the contacts.
#### 2) Carbon Electronics
The previous century has been the Silicon Century. The pervasiveness of electronic and optoelectronic devices across whole sectors of society has been made possible mostly by the success of CMOS technology (along the line of Moore's law). The new century might be the Carbon Century. Diamond has very interesting semiconducting properties, for instance, excellent thermal conduction and charge transport. Unfortunately, it is very difficult to obtain diamond in crystalline form at wafer level. Twenty years ago, CNTs were discovered. Some of their attributes make them very appealing in view of the miniaturization of electronic components. Despite a huge research effort, CNTs have not yet found a real application in nanoelectronics and optoelectronics. Part of the problem is the difficulty of controlling the exact morphology (which in turn determines the CNT electronic properties) in a reliable and reproducible way. Recently, applications for CNT networks have emerged which make such systems competitive with polymer materials for large-area, low-cost electronics and optoelectronics. A new carbon-made material has now appeared on the scene, receiving a great deal of attention. Graphene, a 2-D hexagonal grid of carbon atoms, has unique electronic, electrical, optoelectronic, and mechanical properties. It is therefore an appealing candidate for a variety of components like, e.g., transistors, sensors, electrodes, and lasers. Although it is too early to forecast the market impact of graphene, the academic and industrial community as well as the funding agencies are betting strongly on this novel nanomaterial. In the following, we will briefly discuss some of the important achievements for carbon-based devices and outline the main challenges.
CNT FETs are attractive because of the high mobility of charge carriers, the intrinsically small dimensions, and the possibility of minimizing short channel effects via all-around gate geometry. In the past two years, significant advances have been made in fabricating and characterizing CNT FETs [108], [109]. For instance, transistors with 15-nm channel length displayed no short channel effects and a transconductance of 40 $\mu$ S for a single channel [110]. Frequencies as high as 15 GHz have been reached [111]. Nevertheless, major challenges remain, in particular concerning the ability to control 1) bandgap energy and nanotube chirality with sufficient precision for industrial applications; 2) the positioning of the nanotubes in required locations and directions; 3) the deposition of a gate dielectric; and 4) the formation of low-resistance electrical contacts.
Thanks to its extremely high electron mobility, graphene is an ideal material for RF transistors [112], [113], [114]. Very high values of cutoff frequency have been demonstrated, in excess of 200 GHz [115]. In order to achieve better performance, the quality of the source and drain contacts has to be improved, especially in the top gate configuration. Graphene FETs were first based on exfoliated graphene to form a transistor channel, which offers the highest mobility, but is hardly manufacturable [116]. Recently, epitaxial graphene on SiC substrates and chemical vapor deposition (CVD)-grown graphene on, e.g., copper foils, have been obtained [117], [118]. Back-gated graphene FETs with SiO2 dielectric were typically shown to have room temperature field-effect mobilities up to around 10 000 cm2/Vs [119]. Suspended graphene or a graphene sheet on a flat and inert substrate such as boron nitride can reach mobilities above 100 000 cm2/Vs at room temperature [120], [121]. In top gate devices, lower mobilities are found, possibly because of a degradation of the channel properties when the gate dielectric is deposited [122]. Due to the peculiar band structure of graphene, electron and hole mobilities are similar. It also displays no energy gap, at least for extended single sheets. One crucial consequence is the bipolar transport characteristic, which implies very small ${I}_{\rm on}/{I}_{\rm off}$ ratios. This is of course a major limitation for digital applications. Several methods to open a bandgap have been proposed, as, for instance, through the use of graphene nanoribbons [123].
#### 3) Memristors
Recently, interest in hysteretic devices has risen in the context of nonvolatile memories. Such devices, named memristors, were pioneered in the work of Chua in the 1970s [124]. There, he identified the memristor as the missing element, in addition to inductors, resistors, and capacitors, needed for a coherent description of electronic circuits. Much later, other groups rediscovered the definition in connection with nonlinear elements embedded in a crossbar architecture [125].
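A memristor's defining signature is a pinched current–voltage hysteresis loop. As an illustrative sketch only, the widely cited linear ion-drift model (associated with the crossbar rediscovery of [125], not with any specific device in this section) can be simulated in a few lines; the values of R_on, R_off, the ion mobility, and the film thickness are assumed, order-of-magnitude numbers:

```python
# Linear ion-drift memristor model (illustrative sketch).
# State w in [0, 1] is the normalized width of the doped region;
# the device resistance interpolates between R_on and R_off.
import math

R_on, R_off = 100.0, 16e3     # on/off resistances, ohms (assumed values)
mu_v, D = 1e-14, 1e-8         # ion mobility (m^2/(V*s)), film thickness (m)
k = mu_v * R_on / D**2        # coefficient of the state equation

w, dt = 0.1, 1e-6             # initial state, time step (s)
resistances = []
for n in range(20000):        # one period of a 50-Hz sinusoidal drive
    v = math.sin(2 * math.pi * 50 * n * dt)
    R = R_on * w + R_off * (1 - w)           # memristance M(w)
    i = v / R
    w = min(1.0, max(0.0, w + k * i * dt))   # dw/dt = mu_v*R_on/D^2 * i
    resistances.append(R)
# The resistance drifts with the integral of the current (hysteresis),
# and i = 0 whenever v = 0: the loop is "pinched" at the origin.
```

The key point the sketch illustrates is that the resistance depends on the history of the current through the device, which is what makes the element usable as a nonvolatile memory cell.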
One possible memristor structure can be based on a polymeric film sandwiched between two metal electrodes [126], [127]. As pointed out earlier in connection with polymeric devices, the main motivation for using such material is the low fabrication cost. On the other hand, scaling has not been widely discussed. Although polymeric resistive memory arrays have been demonstrated, including a 3-D stack of three active layers, the memory operation mechanisms are still unclear [128]. Some research suggests that the changes in resistance could be due to intrinsic molecular mechanisms, charge trapping, or redox/ionic mechanisms [129].
Another type of memristive device is the so-called “atomic switch,” basically an electrochemical switch based on the diffusion of metal cations and their reduction/oxidation processes to form/dissolve a metallic conductive path [130]. The metal atoms are introduced into the ionic conductive materials from a reversible electrode. The atomic switch was initially developed as a two-terminal device using sulfide materials that were embedded in a crossbar architecture with scalability down to 20 nm [131]. Later, an atomic switch using fully CMOS compatible materials was developed to enable the formation of these devices in the metal layers of CMOS devices. This configuration resulted in the development of a new type of programmable logic device [132]. Three-terminal atomic switches characterized by high ${I}_{\rm on}/{I}_{\rm off}$ ratio, low on-resistance, nonvolatility, and low power consumption have also been demonstrated [133]. Several operating mechanisms have been proposed, including gate-controlled formation and annihilation of a metal filament, and gate-controlled nucleation of a metal cluster, but no complete understanding of the process currently exists. Switching speed, cyclic endurance, and the uniformity of the switching bias voltage and of the on-state and off-state resistances should be improved for general usage as a logic device [134].
In a variety of materials, ion migration combined with a redox process can cause a change in the resistance of a metal–insulator–metal structure [135]. For instance, with a silver electrode, Ag+ cations can drift through the insulator in the presence of an applied voltage, forming a highly conductive filament connecting the metal electrodes, resulting in the on-state of the cell. Reversing the applied voltage, an electrochemical dissolution of these filaments takes place, resetting the system into the high-resistance off-state [136]. In the case of transition metal oxides, such as TiO2, the motion of oxygen vacancies is responsible for the change in the cell resistance. In a third class of materials, a unipolar thermochemical mechanism leads to a stoichiometry change due to a current-induced increase of the temperature. In some cases, a formation process is required before the bistable switching can be started. Since the conduction is often of filamentary nature, memories based on this bistable switching process can be scaled to very small feature sizes. The switching speed is limited by the ion transport, which is typically rather slow; thus, the distance between the electrodes has to be limited to a few nanometers. Although the microscopic nature of the switching process has yet to be understood in detail, recent experimental demonstrations of scalability, retention, and endurance are encouraging [137].
From an architectural point of view, memristive devices could be coupled with two-terminal select devices in order to build passive memory arrays (crossbars) [138]. The general requirements for such two-terminal switches are sufficient on-currents at proper bias to support read and write operations and sufficient on/off ratio to enable selection even in the absence of a transistor. These specifications are quite challenging and severely limit the maximum size of a crossbar array [139]. Currently, two approaches to integrating a two-terminal select device with the storage node are being pursued. The first approach integrates the external select device in series with the storage element in a multilayer stack. The second approach uses a storage element with inherent nonlinear properties. The simplest realizations of two-terminal memory select devices use semiconductor diode structures, possibly in a back-to-back configuration for bipolar memory cells. Alternatively, a selector exhibiting resistive switching behavior could be used. That is, the selector works on the same principle as the storage element, the main difference being that it can be volatile. One possible device is based on a metal–insulator transition and exhibits a high resistance for voltage below a given value. As an example, a VO2-based device has been demonstrated as a select device for a NiOx resistive random access memory (RRAM) element [140]. The main challenge for switch-type select devices is to identify the right material and the switching mechanism to achieve the required reliability, drive current density, and ${I}_{\rm on}/{I}_{\rm off}$ ratio.
In addition to memories, it has been suggested that logic gates can also be built using memristors [141]. Furthermore, neuromorphic architectures based on memristive crossbars have been investigated [142], [143].
#### 4) Molecular Electronics
One approach to beyond CMOS electronics is based on the use of single conductive molecules [144], [145]. Due to their intrinsically small size and the possibility to use self-assembling techniques, single molecules could be an alternative to Si nanostructures for nonvolatile memories, diodes, or switches [146], [147]. In fact, when properly functionalized, single molecules can display nonlinear electrical characteristics and, in some cases, hysteresis [148]. In a molecular memory, data are stored by applying an external voltage that causes a transition of the molecule into one of two possible conduction states. Data are read by measuring resistance changes in the molecular cell. The concept emphasizes extreme scaling; in principle, one bit of information can be stored in the space of a single molecule, namely, a few nanometers. Computing with molecules as circuit building blocks is an exciting concept with several desirable advantages over conventional circuit elements. Because of their small size, very dense circuits could be built and bottom-up self-assembly of molecules in complex structures could be applied. However, major challenges still exist. First, the very nature of molecular conduction and molecular switching is not fully understood. The role of the metallic leads is not clear, and parasitic effects due to the environment could appear which might determine the transport characteristics of a molecular device. In any case, prototypical molecular memories have been built, which show remarkable endurance and reproducibility [149], [150]. At an architectural level, both molecular quantum cellular automata (QCA) and crossbar structures have been investigated [151], [152].
#### 5) Magnetic Components
Electronic systems combining computing and storage capabilities could be realized based on magnetic structures. Magnetic random-access memories (RAMs) [153] are a mature technology with some products already on the market. The control of single spins of either atoms or electrons has also proven to be a promising new way to achieve electronic functionalities. The possibility to build logic circuits with magnetic nanostructures has been demonstrated at a prototypical level. There, a novel architecture based on field coupling [called magnetic quantum cellular automaton (MQCA)] is adopted [154], where the spatial arrangement of coupled nanomagnets can be used to build logic functions and complete circuits. In the following, we will briefly describe some of the suggested magnetic components.
In spin transistors, the current is controlled by the magnetization configuration of the ferromagnetic electrodes or by the spin direction of the carriers [155]. This feature could lead to low-power circuit architectures that are inaccessible to ordinary CMOS circuits. Recently, an experimental demonstration of a spin FET was reported [156], [157]. Oscillatory spin signals controlled by a gate voltage were observed, implying spin precession of spin-polarized carriers in the channel. However, the origin of the observed spin signals is not yet clear. Spin MOSFETs using ferromagnetic electrodes have also been proposed but not yet demonstrated [158].
Spin wave devices (SWDs) are a type of magnetic logic exploiting collective spin oscillations (spin waves) for information transmission and processing [159]. The spin waves are generated in a magnetoelectric cell which is driven by external voltage pulses. Such a cell also acts as a detector and storage element. The information is encoded into the initial phase of the spin wave. Spin waves propagate through spin wave buses and interfere at the points of junction constructively or destructively, depending on the relative phase. The result of computation can be stored in the magnetization or converted into a voltage pulse by the output magnetoelectric cells. The primary expected advantages of SWDs are: 1) the ability to utilize phase in addition to amplitude for building logic devices with fewer elements than required for a transistor-based approach; 2) nonvolatile magnetic logic circuits; and 3) parallel data processing on multiple frequencies in the same device structure by exploiting each frequency as a distinct information channel. Prototypes operating at room temperature and at gigahertz frequencies have been demonstrated.
In nanomagnetic devices, binary information can be encoded in the magnetization state. Fringing field interactions between neighboring nanomagnets can be used to perform Boolean logic operations [160]. A functionally complete logic set based on nanomagnets has been demonstrated [161]. In addition, nanomagnetic devices have nonlinear response characteristics, the output of one device is capable of driving another, power amplification (or gain) is present, and dataflow directionality can be obtained. Nanomagnet logic (NML) has therefore a great potential for low-power applications. A clock modulates the energy barriers between magnetization states in an NML circuit. Recently, experimental demonstrations of individual island switching as well as the reevaluation of NML lines and gates with CMOS-compatible clock structures have been reported [162]. Furthermore, NML appears to be scalable to the ultimate limit of using individual atomic spins. Whether a circuit ultimately exhibits reliable and deterministic switching is a function of how it is clocked, and requires additional study.
Field coupling via magnetic interaction belongs to a novel class of architectures called MQCA [154]. A cellular automaton (CA) is an array of cells, organized in a regular grid [163], [164]. Each cell can be in one of a finite number of states from a predefined state set, which is usually a set of integers. The state of each cell is updated according to transition rules, which determine the cell's next state from its current state as well as from the states of the neighboring cells. The functionality of each cell is defined by the transition rules of the CA. Typically, each cell encodes one bit into a single electrical or magnetic dipole. The cell-to-cell communication is guaranteed by magnetic interaction between neighboring dipoles. A QCA architecture has some appealing features: its regular structure has the potential for manufacturing methods that can deliver huge numbers of cells in a cost-effective way. Top-down as well as bottom-up manufacturing methods can be used. Furthermore, the design of a cell can be relatively simple as compared to that of a microprocessor unit, so design efforts are greatly reduced. Wires are completely unnecessary since the cells can interact with their neighboring cells through some physical mechanism. Thus, interconnection delay and power dissipation through interconnects are avoided. Clearly, QCAs also have some drawbacks and challenges. For instance, input and output of data to cells with nanometer dimensions may be difficult. Clocking the cells requires additional wires or external inputs. Speed might be a limiting factor. Room temperature operation has to be assured for any realistic application, which, up to now, has only been demonstrated for magnetic QCAs.
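The update scheme described above (each cell's next state determined by its own state and those of its neighbors via fixed transition rules) can be made concrete with a minimal one-dimensional cellular automaton. The rule number (110) and the ring topology below are arbitrary illustrative choices, unrelated to any particular QCA implementation:

```python
# Minimal 1-D cellular automaton: each cell's next state is a function
# of its own state and its two nearest neighbors' states, looked up in
# an 8-entry transition rule (here Rule 110, chosen only as an example).
def step(cells, rule=110):
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (center << 1) | right   # 3-bit neighborhood index
        out.append((rule >> idx) & 1)               # next state from the rule table
    return out

cells = [0] * 31
cells[15] = 1            # single seed in the middle
for _ in range(10):
    cells = step(cells)  # states evolve purely by local interactions
```

Note how no "wiring" appears anywhere: information propagates only through nearest-neighbor coupling, which is the property QCA architectures seek to exploit physically.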
A concept that combines spin-controlled devices and nanomagnetic logic has been proposed recently [165]. In the all-spin logic (ASL), the information stored in the nanomagnets propagates as spin current in spin coherent channels. Recent advancements have shown that a combination of spintronics and magnetics can provide a low-power alternative to charge-based information processing. Key elements of ASL are the spin injection into metals and semiconductors from magnetic contacts and the switching of magnets by injected spins. Major challenges to be overcome are room temperature operation and a further improvement of the energy-delay product. It should be mentioned that ASL could also provide a natural implementation for biomimetic systems with architectures that are radically different from the standard von Neumann architecture.
#### 6) New Architectures for Beyond CMOS
Research on architectures that exploit the properties of the devices described in this section is at an early stage of development. Many different possibilities exist, two of which are CA and neural-inspired networks. CA typically use a form of nearest neighbor communication and they can be shown to be universal. Theoretical and experimental quantification of CA performance relative to the von Neumann architecture remains an open question [164]. Neural networks take a different approach by seeking to emulate structures in the brain and these have been studied for decades. So far it appears that neural networks can offer advantages for special classes of problems. There are indications that memristors in crossbar arrays may be able to emulate neural behavior. In the next section, a perspective on architectures for computation inspired by the operation of living cells is provided.
SECTION VI
## BIOLOGICAL COMPUTATION: LIVING CELL EXAMPLE
### A. A Basis for Quantitative Comparisons
The reliance of CMOS and many other proposed information technologies on electron charge to support their operations places them at risk as features scale downward into the few nanometer regime. Not only does tunneling become detrimental to performance, but also smaller features usually make the devices more susceptible to minute, manufacturing-induced variations in material structure and composition. It has been said that the creativity of nature far exceeds that of humans, and it seems reasonable to seek inspiration for new information processing technologies from this source. In what follows, it is argued that the living cell can be viewed as an information processor that is extraordinarily efficient in the execution of its functions. The living cell is, in a sense, a universal constructor as suggested by von Neumann, which is capable of creating copies of itself [166], [167]. The model used in the following is the E.coli cell, which has dimensions on the order of 1 $\mu$ m and which has been heavily studied, so that quantitative estimates of its complexity, performance, and energy efficiency are available. That said, it is important to point out that many of the mysteries of cell operation are yet unresolved and are the focus of continuing investigations.
In order to provide a benchmark for E.coli cell operation, we first extrapolate the capabilities of a 1-$\mu$ m scale CMOS information processor when end-of-scaling CMOS technology is utilized. Favorable assumptions for the 1-$\mu$ m CMOS cell are offered including the stipulation that no volume is required for energy storage and for communication. A development is then offered, from available data, of the information processing capability of the E.coli cell. It is argued that the information processing capabilities of the E.coli cell far exceed that of the 1-$\mu$ m CMOS cell and inferences are drawn suggestive of directions for future information processing technologies. The terminology in silico is used to refer to the semiconductor benchmark cell and the term in carbo is used to refer to the E.coli cell in the following.
### B. Bio-$\mu$ Cell Information Processor
Are there example information processing systems now extant from which inspiration might be drawn for new technologies? It has been recognized that individual cells, the smallest units of living matter, possess amazing computational capabilities, and are indeed the smallest known information processors [168]. As is argued in a number of studies, individual living cells, such as bacteria, have the attributes of a Turing machine, capable of general-purpose computation [168], [169], [170]. A cell can also be viewed as a universal constructor in the sense of von Neumann because it manufactures copies of itself, thus a computer making computers [169].
Just how does the cell go about implementing its information processing system? The cell is a very complex organism and any brief attempt to describe its operations is bound to be inadequate. A vastly oversimplified view of cellular processes is presented below.
A cell's primary functions can be described as follows.
1. Reproduction: making cells by acquiring/processing information from internal storage (DNA) and utilizing the structural building blocks and energy from the nutrients.
The reproduction task requires a massive information processing effort, a crude estimate of which is made in Section VII-A. In short, elementary structural building blocks (22 amino acids and five nucleotides) need to be synthesized or acquired, and then utilized to form functional building blocks, which include different proteins, RNA, and DNA molecules. Finally, all building blocks need to be properly placed within the cell's volume for assembly. A special cell-cycle control mechanism regulates the sequence, timing, etc., of the cell assembly process.
2. Adaptation for survival: Acquiring/processing information from external stimuli with feedback from DNA.
Single-cell organisms, such as E.coli bacteria, could not survive without the ability to sense the environment and adapt to its changes (positive or negative). For example, in response to the external presence of specific nutrients, particular proteins are produced within the cell to facilitate the uptake and digestion of those nutrients. In the absence of nutrients, the cell can switch to a resting mode, where the reproductive process is inhibited. In addition, single-cell organisms can respond to a variety of external stimuli such as temperature, light, presence of toxic chemicals, magnetic field, etc. Many single-cell organisms also possess motility organs (e.g., flagella in the case of E. coli).
3. Extracellular communication: Sending and receiving signals to coordinate community behavior.
Many unicellular organisms communicate to each other by the release and detection of special signal molecules. Cells use chemical signaling to detect population density and to exchange information about the local environment. Cell-to-cell communication coordinates the behavior of a cell population to increase access to nutrients, provide for collective defense, or enable the community to escape in case of threats to its survival.
In the following, we offer a simple estimate of single-cell computational capabilities based on two different approaches. A bottom-up approach counts the cell hardware, i.e., the number of memory and logic elements in the in carbo processor. The top-down approach deals with the total amount of computation needed to assemble a new cell.
### C. Cell Hardware
Fig. 11 shows a cartoon of a cell as an information processor. It contains a localized long-term memory block ${\bf M}$ (DNA molecule), a number of short-term memory and logic units ${\bf L}$ (different protein and RNA molecules), (input) sensors ${\bf S}$ to monitor both the outside environment and the cell interior (extracellular and intracellular receptor proteins), and two output units: the ribosomes, where new structural building blocks for reproduction are “printed,” and signaling units that “wirelessly” connect to neighboring cells by sending signal molecules.
Fig. 11. Unicellular organism as information processor.
The cell hardware is made from three types of macromolecules: proteins, DNA, and RNA. Table 6 presents a summary of the statistics and functions of these molecules in the E.coli cell. A description of essential features of different parts of the cell hardware is given below.
TABLE 6 Essential Parameters of the E.coli Molecular Processor
#### 1) Logic Hardware
Many proteins (Fig. 12) in living cells have as their primary function the transfer and processing of information, and are therefore regarded as logic elements of the in carbo processor [171], [172], [173], [174]. In fact, as recent studies indicate, the proportion of components devoted to computational networks increases with the complexity of the cell, and is absolutely dominant in humans [173]. Proteins can alter their 3-D structural shapes (conformation) in response to external stimuli, and different conformations can represent different logic states. These nanomechanical changes form a state variable, sometimes called a conformon [175]. The essential functions of the protein devices are determined by their conformational states. A simple example of the “binary” conformational change is the ion channel protein, which is embedded in a cell's membrane and acts as a gate for ions, and can be opened or closed depending on a command from either internal or external sources, e.g., light, pressure, chemical signal, etc. Different nanomechanical conformations of these protein devices are recognized by other elements of the in carbo cell circuit by a process based on the selective affinity of certain biomolecules with given conformational states. Molecular recognition implemented with conformons plays a fundamental role in the communication of information packages within the processor, and it facilitates targeted interactions between different elements, e.g., protein–protein, protein–DNA, RNA–ribosome, etc.
Fig. 12. Protein molecule formed from different amino acids (shown as circles of different colors).
The protein conformons control all processes in the cell, such as sensing, signaling, information retrieval, etc. Some examples will be given in the next section.
#### 2) Memory Hardware
All data about the structure and operation of a living cell are stored in the long DNA molecule. DNA coding uses a base-4 (quaternary) system. The information is encoded digitally by using four different molecular fragments, called nucleobases, to represent a state: adenine (A), cytosine (C), guanine (G), and thymine (T). The four molecular state symbols are attached in series to a flexible “tape” or “backbone” made of sugar and phosphate groups. The complete DNA unit consists of two complementary “tapes” forming the so-called double helix. Each state symbol (base) on the first tape forms a pair (base pair) with a complementary state symbol on the second tape: adenine forms a pair with thymine, while cytosine forms a pair with guanine. The information content in each tape is identical, but is written with different (complementary) sequences of symbols. Thus, the base pair (bp) is a natural unit of information stored in DNA. One bp equals two bits of binary information and corresponds to approximately 0.34 nm of length along the tape, as shown in Fig. 13.
Fig. 13. A fragment of DNA molecule formed from four different nucleotides.
Some examples of DNA storage capacity (genome size) are given in Table 7. Note that the storage density of molecular DNA memory is $\sim$10 Mb/$\mu$m$^{3}$ or $10^{19}$ b/cm$^{3}$, which is much denser than the density limits for the electronic long-term memory evaluated in Section III-B. It is also interesting to note that the single-cell organism Amoeba dubia stores a huge amount of information (1.34 Tb), compared to $\sim$6 Gb stored in the human genome.
TABLE 7 DNA Storage Capacity (Genome Size) for Several Representative Cellular Organisms
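As a back-of-envelope check of the quoted density, assume the E.coli genome of roughly 4.6 Mbp (an approximate, commonly quoted figure) stored in a cell volume of about 1 $\mu$m$^{3}$, with 2 bits per base pair as described above:

```python
# Back-of-envelope check of the DNA storage density quoted in the text.
genome_bp = 4.6e6        # E.coli genome length in base pairs (approximate)
bits = 2 * genome_bp     # 2 bits per base pair -> ~9.2 Mb
cell_volume_um3 = 1.0    # E.coli cell volume, order of magnitude

density_per_um3 = bits / cell_volume_um3    # ~1e7 b/um^3, i.e., ~10 Mb/um^3
density_per_cm3 = density_per_um3 * 1e12    # 1 cm^3 = 1e12 um^3 -> ~1e19 b/cm^3
print(f"{density_per_um3:.2e} b/um^3, {density_per_cm3:.2e} b/cm^3")
```

Both figures reproduce the $\sim$10 Mb/$\mu$m$^{3}$ and $10^{19}$ b/cm$^{3}$ values to within the precision of the inputs.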
##### a) Reading from long-term memory
Different parts of the DNA memory of the cell are continuously accessed to support its operation. One example is signal transduction, which is the DNA-controlled process of cellular response to external stimuli.
##### b) Writing to long-term memory
The view that DNA is a read-only memory has undergone a dramatic change in recent years. The copying of the parental DNA to the offspring, called vertical gene transfer, is the basis for inheritance and until recently was regarded as the only or at least the vastly predominant mechanism for transferring the genetic information. There is, however, an alternative mechanism for information transfer, which is lateral gene transfer. This can happen: 1) by a direct uptake (swallowing) of naked DNA from the cell environment; 2) by a virus; and 3) by direct physical contact between two cells. Fragments of DNA, imported from outside, can be integrated into the host DNA, and thus new information is written in the memory unit. Until the advent of the genome-sequencing era, a prevailing opinion among the research community was that lateral gene transfer was a rare and insignificant event. Currently, it is recognized that in prokaryotic, e.g., bacterial cells, lateral transfer is the predominant form of genetic variation and is one of the primary driving forces for bacterial evolution. In fact, the scale of lateral gene transfer can be very large: for example, two different strains of E.coli differ more radically in their genetic information than all mammals.
SECTION VII
## QUANTITATIVE ESTIMATES FOR THE BIO-$\mu$ CELL AND THE Si-$\mu$ CELL
### A. The Bio-$\mu$ Cell
The overall information content of a material system consists of information about the system's composition and shape [176]. Consider, for example, a von Neumann universal constructor, i.e., a computer tasked with the controlled assembly of a structure (e.g., another computer) from building blocks: a certain amount of information must be processed, which is related to the complexity of the material system. For each step, the computer must: 1) select the appropriate category of building block; and 2) calculate the $x$-, $y$-, $z$-coordinates of the position of that building block. If there are $N$ different building blocks (in the case of ultimate bottom-up construction, these building blocks could be the atoms composing a material structure), the information content of the selection in step 1 is $$I_{s}=\log_{2}N\ {\hbox {(bit)}}\eqno{\hbox{(5)}}$$ and the information of the $xyz$-positioning is $$I_{xyz}=3n\eqno{\hbox{(6)}}$$ where $n$ is the length of the binary number representing each coordinate. In this estimate, $n=32$ b will be used, which is sufficient for representing numbers with practically arbitrary precision ("floating point" format).
Thus, if the total number of building blocks in a material structure is $K$, the total information processed in assembly is $$I_{K}=K(\log_{2}N+3n).\eqno{\hbox{(7)}}$$
Now, consider the task of assembling a living cell of the E.coli bacterium from individual atoms. The elemental composition of the bacterial cell is known with high accuracy and is shown in Table 8 [177]. The cell is mainly composed of ten different kinds of atoms, with $\sim 3\times 10^{10}$ atoms in total. Thus, using (7), we obtain $$I_{\rm cell}\sim 3\times 10^{10}\times(\log_{2}10+3\times 32)\sim 3\times 10^{12}\ {\rm bit}.$$
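As a numerical cross-check (a short Python sketch, not part of the original paper), eq. (7) can be evaluated with the E.coli parameters quoted above:

```python
import math

def assembly_information(K, N, n=32):
    """Eq. (7): bits needed to assemble K building blocks drawn from
    N categories, with three n-bit coordinates per block."""
    return K * (math.log2(N) + 3 * n)

# E. coli: ~10 atom types, ~3e10 atoms in total (Table 8)
I_cell = assembly_information(K=3e10, N=10)
print(f"I_cell ~ {I_cell:.2e} bit")  # ~3e12 bit
```

The positioning term ($3n = 96$ b per atom) dominates the selection term ($\log_2 10 \approx 3.3$ b), so the result is essentially $K \cdot 3n$.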
TABLE 8 Elemental Composition of E.coli [177]
This result is remarkably close to experimental estimates of the informational content of bacterial cells based on microcalorimetric measurements, which range from $10^{11}$ to $10^{13}$ bits per cell. In the following, it is assumed that $I_{\rm cell}\sim 10^{11}$ bit, i.e., the conservative estimate is used.
### B. The Si-$\mu$ Cell
For a system-level comparison between extremely scaled Si-based technology and carbon-based computational elements in biosystems, consider a hypothetical computer realized in a cube of 1 $\mu$m in size (the volume of the bio-$\mu$ cell). Such a computer, referred to below as the Si-$\mu$ cell, must contain logic circuitry and nonvolatile memory to store its program. Suppose further that all components of the computer are implemented in the ultimately scaled Si technology summarized in Table 4. In the following, 3-D-stacked logic and memory circuit layers will be used to fill the 1-$\mu$m$^3$ volume. (In Table 4, the thickness of one layer in the stack is assumed to be $9F$ for logic and $6F$ for memory.) The corresponding densest conceivable 3-D arrangement of FETs is $1.5\times 10^{17}$ transistors/cm$^3$, and thus the 1-$\mu$m$^3$ volume could contain up to 150 000 logic transistors. For nonvolatile memory, the densest 3-D stack of NAND layers is $4.2\times 10^{16}$ b/cm$^3$, or 42 kb of memory in 1 $\mu$m$^3$. Compared with the bio-$\mu$ cell in Table 6, the Si-$\mu$ cell can contain $\sim$10× fewer logic elements and more than 100× less memory. (Note that this estimate was made before any partitioning of the 1-$\mu$m$^3$ volume between logic and memory, and does not include an energy source.)
Next, according to Table 4, the off-state leakage power in the ultimate FET circuit is $\sim$2.34 nW per transistor, and thus $\sim$357 $\mu$W of total static power is dissipated in a system of 152 000 FETs. This results in catastrophic heat densities in the 1-$\mu$m$^3$ cube: $q = 357\ \mu{\rm W}/6\ \mu{\rm m}^2 \approx 5800\ {\rm W/cm}^2$. This is almost equal to the heat density at the Sun's surface ($\sim$6000 W/cm$^2$, as shown in Table 11). Clearly, such a Si-$\mu$ cell computer cannot exist. Therefore, for such a system, larger scale devices and/or a smaller device count must be used.
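The static-power and heat-flux figures above follow from simple arithmetic; the sketch below (Python, not part of the original paper, using only the numbers quoted in the text) reproduces them to within rounding:

```python
P_leak_per_fet = 2.34e-9   # W, off-state leakage per ultimate FET (Table 4)
n_fets = 152_000           # device count assumed in the text

P_static = P_leak_per_fet * n_fets   # total static power, W (~357 uW)
area_cm2 = 6 * (1e-4) ** 2           # six 1-um^2 faces of the cube, in cm^2
q = P_static / area_cm2              # heat flux through the walls, W/cm^2

print(f"P_static = {P_static * 1e6:.0f} uW, q = {q:.0f} W/cm^2")
```

The computed flux ($\sim$5900 W/cm$^2$) matches the text's $\approx$5800 W/cm$^2$ to within the rounding of the inputs.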
What is the smallest device count that could suffice for the Si-$\mu$ cell? von Neumann argued that the minimum logic circuit complexity required to implement general-purpose computing is of the order of a few hundred devices [178]. In an attempt at a more accurate estimate of this von Neumann threshold, a model 1-bit minimal Turing machine (MTM) was constructed with a total device count of about 320 binary switches/transistors; it requires 8-b instruction words for its operation [25]. In the following, it will be assumed that the logic processor of the Si-$\mu$ cell is implemented by an MTM. This also allows maximization of the amount of memory in the Si-$\mu$ cell (which is still far less than in the bio-$\mu$ cell).
Suppose the MTM is implemented within the Si-$\mu$ cell using FETs with ${L}_{g}\sim$ 4.5 nm with the parameters listed in Table 4. The remainder of the 1-$\mu$ m cube is available for memory.
Implementation of each MTM instruction requires a minimum of three sequential operations/cycles [25]. On average, $\sim$50% of the transistors are active during each cycle, giving $\sim$160 switching events per cycle, or $\sim$500 switching events per instruction. Since execution of one instruction results in one output bit, the ratio of output bits to total transistor switchings (raw bits) is 1/500. Therefore, in order to generate $10^{11}$ output bits, the typical outcome of the biological computation, at least $5\times 10^{13}$ raw bits must be processed in the MTM. It takes $3\times 10^{11}$ MTM cycles to complete the computational task (i.e., three MTM cycles per output bit). If these events are required to occur over 2400 s (to match the bio-$\mu$ cell), the cycle time is $t_{\rm cycle}=8$ ns. This appears to be easily achievable by CMOS technology. The total switching energy and power per MTM cycle are $$E_{\rm cycle}=N \cdot E_{\rm bit}=320\times 2.93\times 10^{-18}=9.36\times 10^{-16}\ {\rm J}\eqno{\hbox{(8a)}}$$ [note that $E_{\rm bit}$ in (8a) corresponds to the 50% activity factor (4b)] and $$P_{\rm active}={E_{\rm cycle}\over t_{\rm cycle}}={9.36\times 10^{-16} \over 8\times 10^{-9}}=1.17\times 10^{-7}\ {\rm W}.\eqno{\hbox{(8b)}}$$ (There is also leakage power consumption, of approximately 749 nW.)
Next, the energy consumed by memory access must also be taken into account. At each cycle, an 8-b instruction must be read from the memory block. Assuming a serial read (typical for NAND memory) with only one line in the memory array charged, the energy for reading eight serial bits is close to $10^{-13}$ J, and hence $$P_{M_{\rm cycle}}={E_{M}\over t_{\rm cycle}}\sim{10^{-13}\ {\rm J}\over 8\times 10^{-9}\ {\rm s}}=1.25\times 10^{-5}\ {\rm W}.$$
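The energy bookkeeping of (8a), (8b), and the memory-read estimate can be reproduced in a few lines; this Python sketch (not part of the original paper) uses only numbers quoted in the text, so small rounding differences against the printed values are expected:

```python
n_devices = 320        # MTM transistor count
E_bit = 2.93e-18       # J per switching event at 50% activity factor (4b)
output_bits = 1e11     # target of the computational task
t_task = 2400.0        # s, to match the bio-u cell

cycles = 3 * output_bits           # three MTM cycles per output bit
t_cycle = t_task / cycles          # -> 8 ns
E_cycle = n_devices * E_bit        # eq. (8a), ~9.4e-16 J
P_active = E_cycle / t_cycle       # eq. (8b), ~1.17e-7 W

E_mem = 1e-13                      # J per serial 8-b instruction read
P_mem = E_mem / t_cycle            # memory-access power, 1.25e-5 W

print(t_cycle, E_cycle, P_active, P_mem)
```

Note that the memory-access power is roughly two orders of magnitude larger than the logic switching power, which anticipates the conclusion below that charging the memory access lines dominates the budget.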
A summary of the energetics of the Si-$\mu$ cell implemented with ultimate high-performance CMOS, and with low standby power technology, is given in Tables 9 and 10. As follows from the tables, the Si-$\mu$ cell cannot operate in this mode due to excessive heat generation. Therefore, the cycle time has to be increased to reduce the power dissipation. Note that the predominant source of power consumption in both cases is the charging of the memory access lines.
TABLE 9 Energetics of Si-$\mu$ cell Implemented With Ultimate High-Performance CMOS Technology
TABLE 10 Energetics of Si-$\mu$ cell Implemented With Ultimate Low-Standby Power CMOS Technology
How much heat could be tolerated by a Si-$\mu$ cell computer? Table 11 provides some reference numbers for several model heat generators along with heat removal capabilities for different cooling techniques. If it is postulated that only passive cooling can be used for the Si-$\mu$ cell (i.e., no additional space overheads), the maximum heat flux through the walls of the cube must be < 1 W/cm$^2$ ($\sim$ the maximum free water convection cooling rate).
TABLE 11 Cooling Capabilities for Air and Water and Examples of Representative Heat Generating Systems
If only passive air or water cooling is used, the maximum heat flux should be < 1 W/cm$^2$, and thus the total power dissipation < $6\times 10^{-8}$ W, which limits the cycle time to > 1.70 $\mu$s. In this case, the total time needed to emulate the bio-$\mu$ cell task (i.e., the equivalent of $10^{11}$ output bits) will be 510 000 s, which is more than 200× longer than the time needed by the bio-$\mu$ cell.
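The cooling-limited cycle time and total task time follow directly from the figures above; this check (Python, not part of the original paper) takes the memory-read energy per cycle as the dominant term, as the text notes, and agrees with the printed 1.70 $\mu$s and 510 000 s to within rounding:

```python
E_per_cycle = 1e-13 + 9.36e-16   # J: memory read (dominant) + logic switching
P_max = 1.0 * 6e-8               # W: 1 W/cm^2 over six 1-um^2 cube faces

t_cycle_min = E_per_cycle / P_max    # ~1.7e-6 s, cooling-limited cycle time
t_task = 3e11 * t_cycle_min          # ~5.1e5 s for 1e11 output bits
slowdown = t_task / 2400             # versus the 2400 s of the bio-u cell

print(t_cycle_min, t_task, slowdown)
```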
As follows from the above, the bio-$\mu$ cell outperforms the Si-$\mu$ cell in all respects. A summary of the comparison between the two $\mu$ cells is presented in Section VII-C and some of the implications are discussed.
### C. Comparisons and Implications
As is clear from the previous section, the Si-$\mu$ cell fundamentally cannot match the bio-$\mu$ cell in the density of memory and logic elements, in operational speed, or in operational energy. A core challenge is the MTM requirement for a large number of memory accesses per output bit. The above analysis suggests that there is much to be learned from the designs of nature, and that they may provide hints as to how future technologies could evolve. Fig. 14 provides a brief summary of comparative data. As follows from the analysis, the number of functional elements in the bio-$\mu$ cell is extraordinary and far exceeds foreseeable device densities of semiconductors. This may be at least in part due to the different mass of the information carriers: as was argued in Section III, the smallest size for both memory and logic devices depends on the mass of the information-bearing particles; e.g., the smallest barrier width in Si devices is $\sim$5 nm due to the low effective mass of electrons in Si. A heavier particle mass, in principle, allows for a smaller device size, which seems to be realized in the in carbo logic and memory elements. Emerging technologies discussed in Section V-B3, such as memristors, atomic switches, and redox memory, also use heavier mass particles, and their potential for very dense circuits needs to be further explored.
Fig. 14. Comparison of significant parameters of the bio-$\mu$ cell and the Si-$\mu$ cell.
As was shown in the previous section, memory access is the most severe limiting factor of the Si-$\mu$ cell. Not only are there simply not enough nonvolatile memory bits, but access to them to support computations also takes too much energy. In larger scale computers, this problem is easily circumvented by an initial massive serial readout from the nonvolatile media (e.g., hard disk drives or flash memory) and buffering of these data in low-energy SRAM or DRAM. However, at the scale of the 1-$\mu$m cube, there is no space for the buffers, and direct access to nonvolatile memory was assumed for the Si-$\mu$ cell operation. Another related observation is that organizing solid-state memory in crossbar arrays, while an elegant solution at larger scales, also contributes to excessive energy dissipation. In this regard, access to the DNA memory can be viewed as similar to access to hard disk drives [179], [180]. It could be argued that, at least in theory, the serial access principle of hard disk drives might be a better solution for low-energy systems (of course, in practice, the mechanical overheads significantly add to the total energy consumption of hard disk drives).
The architectural organization of computation in in carbo systems appears to be much more efficient than that of Si computers. As was mentioned in Section II, biological processors such as the brain are not on the computational trajectory for Si microprocessors in Fig. 2, suggesting that there may exist alternate technologies and computing architectures offering higher performance (at much lower levels of energy consumption). One key factor here is that basic algorithms need to work in very few steps [181]. Indeed, it appears that the bio-$\mu$ cell utilizes fine-grained and massive parallelism per instruction, i.e., by sending out into the cytoplasm multiple copies of DNA instructions via RNA messengers. Also, the bio-$\mu$ cell utilizes undirected, thermally driven motion of the mRNA molecules to achieve connectivity to the ribosomes in the cytoplasm. A correct transfer to an appropriate ribosome is achieved by electrostatic attraction arising from conformations, i.e., from the specific shapes of the transmitting and recipient molecular structures. In contrast, data transfer in electrical circuits follows predetermined routes that require an expenditure of energy. Whereas electrical circuits utilize a controllable energy barrier whose operation requires an expenditure of energy and whose physical extent is determined by electron tunneling considerations, it is not clear that there is a similar use of energy barriers in the ribosome's execution of RNA instructions.
As a side remark, although not emphasized in this study, the bio-$\mu$ cell includes within its volume the capability to take in materials from its environment and transform them into energy-yielding molecules in a form accessible to its processes. Such additional processes for energy transformation were not included in the analyses of the Si-$\mu$ cube.
Finally, in 1959, Feynman [182] gave a presentation in which he suggested the possibility of building computers whose dimensions were “submicroscopic.” Although the progress of CMOS technology has been extraordinary, submicroscopic computers remain outside our grasp. As has been indicated above, nature appears to have successfully addressed the submicroscopic design challenge.
SECTION VIII
## SUMMARY
Feature size scaling has enabled a very steep learning curve for CMOS technology that has helped to create a feature-driven marketplace. Although there are compelling physical arguments that physical scaling must end for CMOS, it appears that the benefits of Moore's law will continue for some time, aided by the advent of new materials, processes, and device structures. Very likely, the application space for CMOS technology will continue to grow rapidly as new functionalities are combined with more traditional information processing and communication capabilities.
At the same time, there is intense research underway to find alternatives to CMOS technology that have the potential to extend the benefits of Moore's law scaling for decades into the future. It was pointed out that there are many options at this time, but there is no one-for-one substitute for CMOS technology yet available. Replacement options may eventually be identified, but it appears that a likely scenario is that this research will yield devices with functionalities that can be integrated with CMOS technology to provide unique capabilities or to replace CMOS modules with special-purpose structures based on the novel devices.
It also may be that dramatic improvements in information processing technologies will result from a radical rethinking of both architectures and supporting technologies. A comparative analysis between the bio-$\mu$ cell and the Si-$\mu$ cell was offered to stimulate thinking about alternative scenarios. As the bio-$\mu$ cell goes about its complex task of creating a copy of itself, it does so using fine-grained processes, devices, and architectures that are completely different and much more energy efficient than existing CMOS/von Neumann paradigms. Perhaps, the design of nature's information processors can inspire radical breakthroughs in inorganic information processing.
There is substantial momentum to sustain Moore's law for many more decades because of the benefits that it accrues to society. The challenges that lie before us to achieve this are substantial but so is the creativity of scientists and engineers. Although the road ahead is not well marked, there are many indications that there are no insurmountable barriers that would deny progress in information processing technologies for the foreseeable future.
## Footnotes
R. K. Cavin, III and V. V. Zhirnov are with Semiconductor Research Corporation, Research Triangle Park, NC 27709 USA (e-mail: Ralph.Cavin@src.org; Victor.Zhirnov@src.org).
P. Lugli is with Lehrstuhl für Nanoelektronik, Technische Universität München, D-80333 Munich, Germany (e-mail: lugli@nano.ei.tum.de).
1A practical device is somewhat larger than the ideal shown in Fig. 5(b); e.g., wraparound gates, larger area of source and drain to minimize contact resistance, increased gate width to increase on current, etc. However, the idealized representation shown in Fig. 5(b) will be used in this paper to cast MOSFET technology most favorably for packing density.
# Occupancy with a categorical covariate
For the willow tit data, both the habitat covariates were continuous measures: elevation and percentage forest cover. We often want to use a categorical covariate such as forest type, and it’s easy to incorporate one such covariate in our JAGS code.
## The example data set
For this module we’ll use camera trap data for Cape leopards from the Boland area in South Africa. Thanks to Rajan Amin and Zoological Society of London (ZSL) for making the data available. After a first survey, the area was affected by wildfires, so the team returned after the fires to compare leopard occupancy – or rather habitat use – in areas burnt vs unburnt.
You can load and inspect the data with the following code:
# Read in data
leo <- read.csv(..., comment="#")
str(leo)
# 'data.frame': 47 obs. of 7 variables:
#  $ site: int  1 2 3 4 5 6 7 8 9 10 ...
#  $ n1  : int  130 130 130 130 130 130 130 130 130 130 ...
#  $ y1  : int  8 5 4 0 2 2 0 0 0 5 ...
#  $ n2  : int  104 130 85 130 130 130 130 39 112 130 ...
#  $ y2  : int  7 12 1 0 1 1 1 0 0 6 ...
#  $ fire: chr  "Unburnt" "Unburnt" "Unburnt" "Unburnt" ...
#  $ elev: int  286 894 426 1044 405 807 496 1174 653 793 ...

# Convert 'fire' to a factor
leo$fire <- factor(leo$fire)
summary(leo)
#       site            n1              y1               n2
#  Min.   : 1.0   Min.   :  0.0   Min.   : 0.000   Min.   : 18.0
#  1st Qu.:12.5   1st Qu.:113.0   1st Qu.: 0.000   1st Qu.: 95.5
#  Median :24.0   Median :130.0   Median : 2.000   Median :115.0
#  Mean   :24.0   Mean   :114.1   Mean   : 3.128   Mean   :108.0
#  3rd Qu.:35.5   3rd Qu.:130.0   3rd Qu.: 5.000   3rd Qu.:130.0
#  Max.   :47.0   Max.   :130.0   Max.   :20.000   Max.   :130.0
#        y2              fire         elev
#  Min.   : 0.000   Burnt  :20   Min.   : 285.0
#  1st Qu.: 0.000   Unburnt:27   1st Qu.: 414.0
#  Median : 2.000                Median : 592.0
#  Mean   : 2.617                Mean   : 641.8
#  3rd Qu.: 3.000                3rd Qu.: 800.0
#  Max.   :16.000                Max.   :1487.0

The rows correspond to 47 camera trap locations. n1 is the number of days that the camera trap was operating at the site during the first survey and y1 is the number of days when leopards were detected; n2 and y2 give the same data for the second survey. The fires occurred between the first and second surveys, and fire indicates whether or not the site was affected by the fire. elev is the elevation of the site in metres.

Let's check the distribution of burnt sites across elevation to see if the two covariates may be confounded.

range(leo$elev)  # 285 1487
breaks <- seq(200, 1500, by=100)
op <- par(mfrow=2:1)
hist(leo$elev[leo$fire=="Burnt"], breaks=breaks, xlim=c(200, 1500),
xlab="Elevation", main="Burnt")
hist(leo$elev[leo$fire=="Unburnt"], breaks=breaks, xlim=c(200, 1500),
xlab="Elevation", main="Unburnt")
par(op)
Below 1100m there’s a good mix of burnt and unburnt sites, but all 5 sites above 1100m were burnt. If leopards avoid those 5 sites after the fires, we won’t be able to tell whether it’s because of the burning or the elevation. Fortunately we have data from before the fires, which will tell us about the effect of elevation alone.
For the next section we’ll only use the data from the second survey, after the fires.
## Just one categorical covariate
With one covariate, which is categorical, we can simplify the model by estimating one value for occupancy for each category. We do not need to use a linear model.
This is almost like fitting a separate model to each category, and we need enough observations for each category to be able to do this. With a single model fit, we have the advantage of using the data from all the sites to estimate probability of detection.
Here’s the JAGS code for this model:
# File name "burn1.jags"
model{
# likelihood
for(i in 1:nSite) {
z[i] ~ dbern(psi[fire[i]])
y[i] ~ dbin(p * z[i], nOcc[i])
}
# priors
for(i in 1:nCat) {
psi[i] ~ dbeta(1,1)
}
p ~ dbeta(1,1)
}
Now we prepare the data. We need to convert the fire covariate to a vector with integers for the categories: 1 for burnt, 2 for unburnt (in alphabetical order, by default). If you had more categories, the numbers would be 1, 2, 3, …
fireI <- as.integer(leo$fire)
z <- ifelse(leo$y2 > 0, 1, NA)  # Known occupancy
bdata <- with(leo, list(nSite=47, nOcc=n2, y=y2, fire=fireI,
nCat=2, z=z))
str(bdata)
# List of 6
#  $ nSite: num 47
#  $ nOcc : int [1:47] 104 130 85 130 130 130 130 39 112 130 ...
#  $ y    : int [1:47] 7 12 1 0 1 1 1 0 0 6 ...
#  $ fire : int [1:47] 2 2 2 2 2 2 2 1 2 2 ...
#  $ nCat : num 2
#  $ z    : num [1:47] 1 1 1 NA 1 1 1 NA NA 1 ...
Now we can run the model:
library(jagsUI)
library(mcmcOutput)
wanted <- c("p", "psi")
outA1 <- jags(bdata, NULL, wanted, "burn1.jags", DIC=FALSE,
n.chains=3, n.iter=1e4)
mcoA1 <- mcmcOutput(outA1)
diagPlot(mcoA1)
summary(mcoA1)
# mean sd median l95 u95 Rhat MCEpc
# p 0.032 0.003 0.032 0.026 0.038 0.999 0.810
# psi[1] 0.595 0.111 0.596 0.377 0.806 1.000 0.689
# psi[2] 0.848 0.071 0.856 0.706 0.971 1.001 0.701
plot(mcoA1)
That ran very quickly and converged well. And it seems clear that leopards do prefer the unburnt sites (psi[2]) to the burnt sites (psi[1]). But let’s look at the effect of elevation before discussing the results further.
## Categorical and continuous covariates
Once we have multiple covariates, things become a bit less simple. We need to use a linear predictor and a link function, usually the logistic (“logit”) link. But we can still use a different intercept for each category. And provided we standardise the continuous covariates, the intercept is the logit of the probability of occupancy at the mean values of all the continuous covariates. So we can use a prior on the probability. Here’s the JAGS code for this model:
# File name "burn2.jags"
model{
# likelihood
for(i in 1:nSite) {
logit(psi[i]) <- b0[fire[i]] + bElev * elev[i]
z[i] ~ dbern(psi[i])
y[i] ~ dbin(p * z[i], nOcc[i])
}
# priors
for(i in 1:nCat) {
psi0[i] ~ dbeta(1,1)
b0[i] <- logit(psi0[i])
}
bElev ~ dunif(-5, 5)
p ~ dbeta(1,1)
}
We standardise the elevation to mean 0 and SD 1, then bundle the data and run the model:
library(wiqid)
bdata <- with(leo, list(nSite=47, nOcc=n2, y=y2, fire=fireI,
nCat=2, elev=standardize(elev), z=z))
str(bdata)
# List of 7
#  $ nSite: num 47
#  $ nOcc : int [1:47] 104 130 85 130 130 130 130 39 112 130 ...
#  $ y    : int [1:47] 7 12 1 0 1 1 1 0 0 6 ...
#  $ fire : int [1:47] 2 2 2 2 2 2 2 1 2 2 ...
#  $ nCat : num 2
#  $ elev : num [1:47] -1.227 0.87 -0.744 1.388 -0.817 ...
#  $ z    : num [1:47] 1 1 1 NA 1 1 1 NA NA 1 ...

wanted <- c("p", "psi0", "bElev")
outA2 <- jags(bdata, NULL, wanted, "burn2.jags", DIC=FALSE,
    n.chains=3, n.iter=1e4)
mcoA2 <- mcmcOutput(outA2)
diagPlot(mcoA2)
plot(mcoA2)
summary(mcoA2)
#          mean    sd median    l95    u95  Rhat MCEpc
# p       0.032 0.003  0.032  0.026  0.038 0.999 0.739
# psi0[1] 0.661 0.125  0.666  0.419  0.899 1.000 1.105
# psi0[2] 0.850 0.076  0.860  0.702  0.983 0.999 0.965
# bElev  -1.020 0.532 -0.975 -2.074 -0.018 1.000 1.212

The elevation coefficient is definitely negative – occupancy declines with elevation. We'll plot the curves for both burnt and unburnt sites. The code is similar to that for the willow tits, but we have to convert the two intercepts, psi0[1] and psi0[2], back to the logit scale.

# Convert psi0 back to b0
b0 <- qlogis(mcoA2$psi0)
# Create vector of elevations for the x axis
hist(leo$elev)  # only 1 site above 1200m
xx <- seq(250, 1200, , 101)
# Standardise in the same way that we standardised the original data
xxS <- standardize2match(xx, leo$elev)
# Get the values of the posterior mean and CrI for each value in xx
burnt <- unburnt <- matrix(NA, nrow=101, ncol=3)
for(i in 1:101) {
logitPsi1 <- b0[,1] + mcoA2$bElev * xxS[i]
psi1 <- plogis(logitPsi1)
burnt[i, ] <- c(mean(psi1), hdi(psi1))
logitPsi2 <- b0[,2] + mcoA2$bElev * xxS[i]
psi2 <- plogis(logitPsi2)
unburnt[i, ] <- c(mean(psi2), hdi(psi2))
}
plot(xx, unburnt[,1], type='n', ylim=range(burnt, unburnt),
xlab="Elevation, m", ylab="Occupancy", las=1)
polygon(c(xx, rev(xx)), c(burnt[,2], rev(burnt[,3])), border=NA,
    col=adjustcolor('black', 0.2))
polygon(c(xx, rev(xx)), c(unburnt[,2], rev(unburnt[,3])), border=NA,
    col=adjustcolor('blue', 0.2))
lines(xx, burnt[,1], lwd=2)
lines(xx, unburnt[,1], lwd=2, col='blue')
abline(v=mean(leo$elev), lty=3)
text(610, 0.3, 'mean elevation', pos=2)
arrows(600, 0.3, 641, 0.3, length=0.1)
legend('bottomleft', c("burnt", "unburnt"), col=c('black', 'blue'),
    lwd=2, bty='n')

A couple of things are clear from the graph. First, the credible intervals are very wide; that should not be surprising given that we only have 20 sites in the burnt area and 27 in the unburnt area. Second, the effect of burning is modest compared to elevation: burnt sites at 250m have higher occupancy than unburnt sites at 1000m.

## Results from before the fires

Now let's turn to the data from before the fires, and see how occupancy compared between the soon-to-be-burnt sites and the unburnt sites. Three of the 47 sites used for the second survey did not have camera traps during the first survey. They are the last 3 sites in the data table, and have n1 = 0 and y1 = 0. We can leave them in the data set even though they provide no information, or we can set nSite = 44, so that JAGS ignores the last 3 sites. We chose the latter for the code below.

z <- ifelse(leo$y1 > 0, 1, NA)  # Known occupancy
bdata <- with(leo, list(nSite=44, nOcc=n1, y=y1, fire=fireI,
nCat=2, elev=standardize(elev), z=z))
str(bdata)
# List of 7
#  $ nSite: num 44
#  $ nOcc : int [1:47] 130 130 130 130 130 130 130 130 130 130 ...
#  $ y    : int [1:47] 8 5 4 0 2 2 0 0 0 5 ...
#  $ fire : int [1:47] 2 2 2 2 2 2 2 1 2 2 ...
#  $ nCat : num 2
#  $ elev : num [1:47] -1.227 0.87 -0.744 1.388 -0.817 ...
#  $ z    : num [1:47] 1 1 1 NA 1 1 NA NA NA 1 ...

The code to run the model is the same, convergence is good, and here are the posterior plots. The results are very similar, with higher occupancy at the unburnt sites even before the fires! Let's get MCMC chains for the differences before and after and plot them together:

diffA2 <- mcoA2$psi0[,2] - mcoA2$psi0[,1]
diffB2 <- mcoB2$psi0[,2] - mcoB2$psi0[,1]
differences <- cbind(before=diffB2, after=diffA2)
postPlot(differences, compVal=0)
So, sure, occupancy was higher in the unburnt sites after the fires, but it was also higher in those sites before the fires. Indeed, the difference in occupancy has not changed as a result of the fires. Whatever is causing the difference, we can’t blame the fire.
The major lesson: don't draw conclusions just by comparing impacted sites with controls. Look at data from before and after the impact. This is the Before-After-Control-Impact (BACI) design.
# Probability question with interarrival times
(Why did the chicken cross the road?) A chicken wants to cross a single-lane road where the cars arrive according to a Poisson process with rate $\lambda$. She needs at least $k$ minutes to cross the road safely, so she will have to wait until she sees a gap of at least $k$ between the oncoming cars. If the gap between the car that just arrived and the next one is at least $k$ then she starts crossing the road immediately. Let $T$ denote the random time she needs to wait by the road.
Find the expected time needed to cross the road.
@Sasha You're confusing "chicken" and "hen" – crf Sep 14 '12 at 5:28
The chicken waits between the passages of car $i-1$ and car $i$ without crossing the road if and only if no gap between car $n-1$ and car $n$ is greater than $k$, for any $n\leqslant i$. Thus, $$T=\sum_{i=1}^{+\infty}D_i\cdot[D_1\leqslant k,\ldots,D_i\leqslant k],$$ where $(D_i)_{i\geqslant 1}$ is i.i.d. with exponential distribution of parameter $\lambda$. By independence, $$\mathrm E(D_i;D_1\leqslant k,\ldots,D_i\leqslant k)=a^{i-1}b,\quad a=\mathrm P(D\leqslant k),\quad b=\mathrm E(D;D\leqslant k),$$ hence $\mathrm E(T)=b/(1-a)$. One knows that $1-a=\mathrm P(D\gt k)=\mathrm e^{-\lambda k}$, and $$b=\int_0^kx\cdot\lambda\mathrm e^{-\lambda x}\cdot\mathrm dx=\frac{1-(1+\lambda k)\mathrm e^{-\lambda k}}\lambda.$$ Finally, $$\mathrm E(T)=\frac{\mathrm e^{\lambda k}-1-\lambda k}\lambda.$$
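The closed form is easy to check by simulation; the sketch below (Python, with $\lambda = k = 1$, not part of the original answer) draws exponential gaps until the first one exceeds $k$ and averages the accumulated waiting time, which for these parameters should be close to $e - 2 \approx 0.718$.

```python
import math
import random

def wait_time(lam, k, rng):
    """Waiting time: sum of inter-car gaps until the first gap > k."""
    t = 0.0
    while True:
        gap = rng.expovariate(lam)
        if gap > k:       # chicken crosses during this gap; stop waiting
            return t
        t += gap

lam = k = 1.0
rng = random.Random(1)
n = 200_000
sim = sum(wait_time(lam, k, rng) for _ in range(n)) / n
exact = (math.exp(lam * k) - 1 - lam * k) / lam   # closed form above
print(sim, exact)
```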
What does $D_i$ represent in therms of the problem? Is it the arrival of car $i$ ? – audiFanatic Mar 24 '14 at 5:31
@audiFanatic The gap between two successive arrivals. – Did Mar 24 '14 at 6:24
Ok, thanks. Could you take a look at my question? It is somewhat similar to this one, but the approach I took is different. math.stackexchange.com/questions/724228/… – audiFanatic Mar 24 '14 at 6:32
Hint: condition on the size of the first gap (i.e. the time until the first car arrives).
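Spelling the hint out (a sketch, using the notation $a=\mathrm P(D\leqslant k)$, $b=\mathrm E(D;D\leqslant k)$ from the first answer): if the first gap $D$ exceeds $k$, the chicken crosses immediately and waits $0$; otherwise she waits $D$ and, by memorylessness of the exponential distribution, faces a fresh copy of the same problem. Hence $$\mathrm E(T)=\mathrm E(D;\,D\leqslant k)+\mathrm P(D\leqslant k)\,\mathrm E(T)=b+a\,\mathrm E(T),$$ so $\mathrm E(T)=b/(1-a)=\bigl(\mathrm e^{\lambda k}-1-\lambda k\bigr)/\lambda$, in agreement with the first answer.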
The question is ambiguous.
1) If the question is how much time the chicken has to wait until it decides to cross the road, proceed as follows. The waiting time can be represented as $$T=\sum_{i=0}^N Y_i$$ where $Y_0, Y_1, \dots$ are the interarrival times of the cars (i.i.d. $\exp(\lambda)$, since the arrivals form a Poisson process with rate $\lambda$), and $N$ is the number of cars until (not including) the first one to arrive after a gap of at least $k$ minutes, so that $N\sim\mathrm{Geom}(P(k<Y_0))$. To calculate $E(T)$, use the law of total expectation, conditioning on $N$. In the course of the calculation you will need basic facts about infinite series of functions.
2) If the question is how much time it would take the chicken to cross the road once it started doing so, then again the data is ambiguous:
a. If it takes the chicken exactly $k$ minutes to cross the road, then the expected time is $k$.
b. If it takes the chicken at least $k$ minutes to cross the road, then we need to know the distribution of the time it takes the chicken to cross in order to answer which will come first: the chicken crossing the road, or it getting hit by a car.
Note that in all cases, it is assumed the chicken is clairvoyant and can predict with certainty that the next car will arrive in no less than k minutes, which is quite an accomplishment, whether or not you're a chicken.
If it's a straight road with no nearby entrance or exit points, and the cars maintain constant speed, and $k$ is small enough, then the chicken can already see the cars that will arrive within the next $k$ minutes, so the prediction is not that hard. You should avoid crossing a road if you can't predict whether a car will arrive before you finish crossing. – Robert Israel Sep 15 '12 at 0:43
@RobertIsrael: What if the car comes from the future, "Back to the Future" style, out of the blue? The poor chicken doesn't stand a chance, no matter how vigilant it is! – Evan Aad Sep 15 '12 at 6:33
Nothing new. I just want to upload for no reason ^_^
https://miccai2021.org/openaccess/paperlinks/2021/09/01/127-Paper0330.html
Paper Info Reviews Meta-review Author Feedback Post-Rebuttal Meta-reviews
# Authors
Cheng Peng, S. Kevin Zhou, Rama Chellappa
# Abstract
Medical image super-resolution (SR) is an active research area that has many potential applications, including reducing scan time, bettering visual understanding, increasing robustness in downstream tasks, etc. However, applying deep-learning-based SR approaches for clinical applications often encounters issues of domain inconsistency, as the test data may be acquired by different machines or on different organs. In this work, we present a novel algorithm called domain adaptable volumetric super-resolution (DA-VSR) to better bridge the domain inconsistency gap. DA-VSR uses a unified feature extraction backbone and a series of network heads to improve image quality over different planes. Furthermore, DA-VSR leverages the in-plane and through-plane resolution differences on the test data to achieve a self-learned domain adaptation. As such, DA-VSR combines the advantages of a strong feature generator learned through supervised training and the ability to tune to the idiosyncrasies of the test volumes through unsupervised learning. Through experiments, we demonstrate that DA-VSR significantly improves super-resolution quality across numerous datasets of different domains, thereby taking a further step toward real clinical applications.
SharedIt: https://rdcu.be/cyhUz
N/A
N/A
# Reviews
### Review #1
• Please describe the contribution of the paper
The authors describe a method for volumetric super-resolution (VSR) that can account for a domain mismatch during testing. The main contribution is a formulation that allows fine-tuning the feature extractor network on the test set while keeping the lightweight reconstruction network heads fixed. The method is trained for fixed upscaling factors on a publicly available lung CT dataset and evaluated on the volumetric SR problem for CT images of the liver, colon, and kidney. The method compares favourably to state-of-the-art methods (SAINT, 3D RCAN, 3D RDN) when matched for parameter count.
• Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
• Innovative formulation of a common feature extractor network as the basis for subsequent inplane, throughplane and refinement network heads. This feature extractor is adapted to the test domain in a self-supervised setting, while keeping the reconstruction heads frozen.
• The authors conduct an ablation study that shows a clear trend that the proposed modifications are beneficial
• Visual results provided in Fig. 2 are convincing and DA VSR seems to outperform compared reference methods quantitatively (Table 1)
• Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
• The authors downscale the original dataset from 512x512 to 256x256 to reduce computational complexity. The authors further interpolate the slice thickness from original 1-3.5mm to a standardised 2.5mm. Both of those “data normalization steps” constitute a severe limitation of the practical applicability of the method. It is critical for volumetric SR methods to account for the two challenges: a) deal with varying through plane upscaling factors b) control computational complexity imposed by large 3D input volumes with high inplane resolution.
• It is unclear how important the sample size of the test set is for an efficient self-supervised adaptation. How does the method perform on ‘new’ samples from the target domain (after adaptation)? What if there is only a single volume available from test domain T? If we would split test domain T into T_a and T_b, adapt parameters on T_a and test on T_b, how would SR performance compare on T_a/b?
• The magnitude of quantitative improvements due to domain adaptation are not very convincing, no statistical significance tests were conducted.
• It is questionable whether the experiment summarised in Table 1 really tests the method’s ability to handle domain shift, or is mostly driven by the fact that there are ~10 times more training scans in the lung dataset, allowing a better feature extractor to be trained regardless of the organ.
• Overall the method is comparatively complex and it is difficult to follow how exactly the network and reference methods are trained, e.g. which heads/components are trained and frozen at what stage. It seems the reference methods were re-implemented; to what extent were those reimplementations optimised to ensure a fair comparison?
• Please rate the clarity and organization of this paper
Satisfactory
• Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
There is a discrepancy between the provided reproducibility checklist and what is observed in the paper. For example, standard deviation/error bars/significance tests are not reported for the presented results; no description of hardware & software used for training.
• Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html
Potential typos:
• Equation (5): Should it be G_S^fro and G_F instead?
It would be valuable to report the ‘upper bound’ of the performance of a supervised method trained on the Liver/Colon/Kidney dataset with matched sample size.
Please discuss or correct the discrepancy of DA-VSR’s performance in Table 1 and Table 2. Wouldn’t one expect that this performance is identical, why has DA-VSR more parameters in Table 2 as compared to Table 1?
Evaluation in the context of a second application could further confirm the benefit of the method, e.g. perhaps on a modality with more pronounced domain shift across scanners/settings (e.g. MRI)
probably reject (4)
• Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The paper is interesting as it relies on a common backbone feature extractor for the inplane, throughplane and refinement reconstruction tasks. My biggest concern is centered around the evaluation of the method. In particular, the evaluation is conducted in an idealized setting (inplane downscaling to 256x256, standardization of through-plane resolution to 2.5mm). Further, the reference methods seem to be largely reimplemented, and it is not convincingly clear how they were trained and optimized to allow for a fair comparison. E.g., it should be discussed why DA-VSR^NA outperforms SAINT (Table 2) without any domain adaptation. Variability/error estimates of the performance figures are not provided.
• What is the ranking of this paper in your review stack?
2
• Number of papers in your stack
3
• Reviewer confidence
Confident but not absolutely certain
### Review #2
• Please describe the contribution of the paper
A new domain adaptation algorithm called DA-VSR is proposed for the 3D medical image super-resolution task. A unified encoder is used with different heads for super-resolution along multiple directions. In the test phase, a simple adaptation is employed, with part of the network fixed, to achieve the best results.
• Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
1. Significant improvement in PSNR and visual quality, judging from the samples given in the paper.
2. The new idea of performing multi-directional SR and combining the results using a shared encoder with different heads.
3. A general solution that is compatible with other methods such as SAINT and SMORE.
• Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
1. The methods used for comparison were implemented by the authors: “we implement all compared models to have similar number of parameters” – Page 5
2. Comparisons with other methods are lacking. Related works are mentioned, but the differences are not highlighted.
3. Some references are missing. This paper also performs SR along multiple axes, but within a single network: Georgescu, Mariana-Iuliana, Radu Tudor Ionescu, and Nicolae Verga. “Convolutional Neural Networks with Intermediate Loss for 3D Super-Resolution of CT and MRI Scans.” IEEE Access 8 (2020): 49112-49124.
This paper appears to be the first to use multiple slices as input for SR in CT: Yu, Haichao, et al. “Computed tomography super-resolution using convolutional neural networks.” 2017 IEEE International Conference on Image Processing (ICIP). IEEE, 2017.
• Please rate the clarity and organization of this paper
Very Good
• Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
Some details about the model are not given in the paper, such as the trade-off hyper-parameter $\lambda$, but the authors are committed to providing the source code for training and testing. The dataset is also publicly available. Overall, reproducibility is good, assuming the code is released in the future.
• Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html
1. The methods used for comparison were implemented by the authors, so it would be helpful to add a few sentences justifying the correctness of the implementations, e.g. ‘achieved similar PSNR to the original paper’.
2. Although the related works are mentioned, it would be better to clearly state the similarities and differences between the proposed method and existing ones.
3. Adding the missing references and some short discussion of them would make the paper more convincing.
accept (8)
• Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The proposed method achieves significant improvement in terms of PSNR, and the idea is also novel (a paper with a similar idea has been published, but the network structure and loss are different).
• What is the ranking of this paper in your review stack?
2
• Number of papers in your stack
5
• Reviewer confidence
Confident but not absolutely certain
### Review #3
• Please describe the contribution of the paper
The paper studies domain adaptation volumetric super-resolution methods based on a series of slice-wise. The authors propose a technique termed DA-VSR. In contrast to the existing method, DA-VSER uses a single feature extractor with several task-specific network heads for upsampling and fusion. DA-VSR leverages in-plane and through-plane resolution differences as self-supervised signals for self-learned domain adaptation. The authors demonstrate the robustness of their method compared with the state-of-the-art qualitatively and quantitatively.
• Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
– The paper is well organized, and the motivation behind mixing in domain adaptation is well justified.
– DA-VSR introduces a novel in-plane SR head in the self-learned domain adaptation stage.
– DA-VSR achieves better results than the state-of-the-art methods, both quantitatively and visually.
• Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
– Quantitative results are reported in Tables 1 and 2. How are the results computed? For example, on specific cases or over the whole dataset?
– I think the claims of the paper are a bit bold. The abstract claims that “DA-VSR significantly improves super-resolution quality across numerous datasets of different domains”. However, there is one dataset including different organs.
– Marginal improvement over the state of the art.
– Figure 1 needs to be presented more clearly. Would it be possible to make it more obvious what is happening? Otherwise, the loss functions could be included for better illustration.
– The windows in the figures are too wide.
– The authors seem to have missed some relevant literature. Specifically, they do not discuss learning-based methods for image enhancement at any length, missing out on several relevant citations, e.g. “Deep Generative Adversarial Networks for Compressed Sensing Automates MRI”, “Structurally-sensitive multi-scale deep neural network for low-dose CT denoising”, “Multiple Cycle-in-Cycle Generative Adversarial Networks for Unsupervised Image Super-Resolution”.
• Please rate the clarity and organization of this paper
Very Good
• Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
– Will code be available? It is not mentioned that code is made available.
• Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://miccai2021.org/en/REVIEWER-GUIDELINES.html
– “To facilitate faster and more efficient acquisitions or storage, it is routine to acquire or reconstruct a few high-resolution cross sectional images, leading to a low through-plane resolution when the acquired images are organized into an anisotropic volume.” is a little confusing. Please reorganize this sentence.
– It is necessary to use the correct references. On Page 4, Section 2.2, “SMORE [23]” is cited; I have read the cited paper but cannot find the term. Please check the manuscript carefully.
Probably accept (7)
• Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
DA-VSR introduces a novel in-plane SR head in the self-learned domain adaptation stage.
• What is the ranking of this paper in your review stack?
1
• Number of papers in your stack
8
• Reviewer confidence
Very confident
# Primary Meta-Review
• Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.
All reviewers agree that this is a mostly well motivated, innovative approach with convincing visual results. However, due to the large variance in the scores and some points brought up by R1 questioning the practicability of the method I recommend this paper for the rebuttal phase.
Among the reviewers there also seems to be a consensus that, despite the good visual results, the quantitative improvements are very small, no significance tests are reported, and it is not clear that the reimplemented baselines guarantee a fair comparison.
Specifically, R1 points out that the downscaling to 256x256, as well as the standardisation of the through-plane resolution to 2.5mm, limits the method’s applicability in practice.
Furthermore, R1 points out that positive results may be due to the much larger size of the training dataset.
Please make sure to address the above points in particular, along with the other issues raised by the reviewers.
• What is the ranking of this paper in your stack? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).
3
# Author Feedback
We sincerely thank all the reviewers for their time and valuable comments. We are glad to see that all reviewers consider our work to be interesting and novel. We also thank them for pointing out various presentation issues and suggestions on references which will be fixed. With limited space, we address some of the key questions from reviewers:
Reviewer 1+2: “Reference methods seem to be largely re-implemented”/Baseline Reliability Most of the presented baselines are in fact implemented by their respective authors. RDN/RCAN are implemented based on https://github.com/sanghyun-son/EDSR-PyTorch, which is recommended by the original author. We obtained SAINT from its original author, in fact the networks and training hyper-parameters follow the experiments done in SAINT by Peng et al [12]. The only algorithm that we took some liberty implementation-wise is SMORE [22] - for fair comparison, we replaced the original, shallow SR network with a deep, Residual-Dense network. By constraining on similar network size, we attempted our best at making fair comparisons.
Reviewer 1: “Comparison done in idealized setting” due to downscaling and slice thickness normalization. As described in Section 3.2, downscaling to 256 by 256 is performed due to the large memory consumption of 3D CNNs for SISR. Under a similar network size, 3D RDN and RCAN run out of memory on a 512 by 512 volume on an 11 GB Nvidia 2080 Ti GPU. This is also observed in Peng et al. [12], Chen et al. [2], and Wang et al. [16] (e.g., patch-wise inference and stitching is used to address OOM, but that can lead to artifacts on patch boundaries). To be clear, DA-VSR is scalable to very high resolution, as it breaks volumetric SR down into multi-directional 2D SR and is thus much more memory friendly, akin to [12].
While we agree with Reviewer 1 that varying slice thickness in CT/MRI is an important issue in real application, such an issue is orthogonal to our work. In our work, we seek to address the domain shift between training and test datasets. Therefore, we apply slice thickness normalization to eliminate a changing variable that can confound our experiments. Slice thickness normalization is widely adopted across works that deal with volumetric medical images and can be seen in previous SR works [2,16,22].
Reviewer 1: Whether DA-VSR is handling domain shift caused by different organs or benefited from larger training set and a better feature extractor. Table 1 shows that our method benefits both from larger training set and domain adaptation. Specifically, DA-VSR outperforms DA-VSR_NA, despite both using the same large lung dataset for training; this demonstrates the utility of domain adaptation. DA-VSR also outperforms DA-VSR_SMORE, therefore showing the utility of pre-training with a large dataset despite its domain difference from the test set.
Reviewer 3: There is only one multi-organ dataset used for our experiments. As described in Section 3.2, we used multiple datasets (LIDC, KITS, Medical Segmentation Decathlon), each of which is taken by different scanners and focused on different organs.
Reviewer 1+3: Marginal performance improvement/significance test. We thank Reviewer 1 for suggesting significance tests and will add them to the final version, where more space is given. As an example, the p-value of DA-VSR vs SAINT (X4) on the kidney dataset (59 samples) is 0.03.
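(Editorial sketch of what such a paired significance test could look like in practice. This is purely illustrative and is not the authors' procedure: the per-volume PSNR values below are made-up placeholders, not the paper's data, and a sign-flip permutation test on paired differences is just one reasonable choice among several, since the rebuttal does not specify which test was used.)

```python
# Hedged sketch of a paired significance test on per-volume scores.
# The score lists below are placeholder data, NOT taken from the paper.
import random

def paired_permutation_pvalue(scores_a, scores_b, n_perm=10_000, seed=0):
    """Two-sided p-value for H0: methods A and B score equally on average,
    via random sign flips of the paired per-volume differences."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    observed = abs(sum(diffs))
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        flipped = sum(d if rng.random() < 0.5 else -d for d in diffs)
        if abs(flipped) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one smoothing avoids p = 0

# Placeholder per-volume PSNR values for two methods on the same volumes:
psnr_a = [31.2, 30.8, 32.1, 29.9, 31.5, 30.4]
psnr_b = [30.9, 30.6, 31.8, 30.0, 31.1, 30.2]
p_val = paired_permutation_pvalue(psnr_a, psnr_b)
print(p_val)
```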
While significant, PSNR/SSIM are limited in showing improvements on details. As shown in Fig. 2 and in the supplementary material, there are distinctive domain-drift artifacts which DA-VSR addresses well. Domain drift also does not happen uniformly across a CT image: some patches do not suffer much, since similar patches can be observed in the training set, while others suffer heavily due to a lack of observations. We argue that preventing the generation of erroneous detail in those patches is a clinically valuable pursuit.
Code for DA-VSR will be released/submitted upon acceptance.
# Post-rebuttal Meta-Reviews
## Meta-review # 1 (Primary)
• Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
Many of the points raised by the reviewers have been addressed.
• The fair baseline issue was answered satisfactorily for me.
• The issue with the small effect sizes has been partially addressed since the authors propose to include statistical significance tests and report one example comparison in the rebuttal. However, I believe such a change should not be judged at the meta-review stage and without the necessary context and details about the kind of test performed etc.
• The issues with in-plane and through-plane resolution relating to practicability have been mostly addressed. While the points raised by R1 are valid, I am willing to accept that they can be set aside for a proof-of-concept work.
• Positive results are partially due to pretraining on a larger dataset but also due to domain adaptation as shown in the quantitative results.
Based on the rebuttal I follow the recommendation of R2 & R3 in accepting the paper.
• After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.
Accept
• What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).
7
## Meta-review #2
• Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
This paper proposes a novel algorithm called domain adaptable volumetric super-resolution (DA-VSR) for 3D image data. Two reviewers give relatively high marks, while the third raises concerns about an unclear experimental setting and small quantitative improvements.
In the rebuttal, the authors addressed these concerns; they are strongly encouraged to add this information to the final version.
• After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.
Accept
• What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).
10
## Meta-review #3
• Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
In the first round of reviews, the major concerns were raised by R1 with regard to practical applicability. The rebuttal has addressed most of the concerns, especially considering that practical steps have to be taken to address out-of-memory issues. However, the authors' argument that the results in Table 1 demonstrate handling of domain shift could have been presented better.
• After you have reviewed the rebuttal, please provide your final rating based on all reviews and the authors’ rebuttal.
Accept
• What is the rank of this paper among all your rebuttal papers? Use a number between 1 (best paper in your stack) and n (worst paper in your stack of n papers).
3
https://tex.stackexchange.com/questions/488634/how-adjust-and-align-properly-different-equations
# How adjust and align properly different equations
I'm trying to write this in LaTeX:
Using the align environment I was able to write this code:
\begin{align}
&\sum_k y_{ik}= \left\{ \begin{array}{l}
1, \\
m,
\end{array} \right. & \begin{array}{l}
i=2,...,n,\\
i=1,
\end{array}\\
&\sum_i q_i y_{ij} \le Q_k, &k=1,...,m, \\
\sum_j x_{ijk}= &\sum_j x_{ijk}= y_{ik}, &i=1,...,n, &&k=1,...,m,\\
&\sum_{i,j \in S} x_{ijk} \le |S| -1, &\text{para todo } S \subseteq \{2,...,n\},&& k=1,...,m,
\end{align}
which looks like this:
It's almost perfect, but the equals signs in the second column are not perfectly aligned, and I don't understand exactly how the environment works: why did I have to write two "&&" in some cases and just one in others? Why does the "para todo" ("for all" in Spanish) start further to the left instead of aligning with the column? It looks as I wanted, but I don't understand why. If somebody can show me how to write it, or refer me to a tutorial on the environment, I would be really grateful. Thank you.
• In an align environment, the first & is a rl align pair. That is the text to the left of & is right aligned and the text following a & is left aligned. The next & is an equation separator and the next & is again a rl align pair. I would suggest you use the alignat environment instead where each & is a rl align pair. Thus, abc & def will make the abc right aligned and the def will be left aligned, whereas abc && def will make the abc right aligned and the def to also be right aligned (since there is no empty text in between the &&). – Peter Grill May 1 at 18:04
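To illustrate the comment, here is an (untested) sketch of the question's display rewritten with alignat, where every & starts a new right/left pair; amsmath is assumed to be loaded:

```latex
% \begin{alignat}{3}: up to three rl pairs per row; every & alternates
% alignment, and alignat adds no inter-column space, so \qquad is
% inserted by hand between the pairs.
\begin{alignat}{3}
\sum_k y_{ik} &= \begin{cases} 1, & i = 2, \dots, n,\\ m, & i = 1, \end{cases} \\
\sum_i q_i y_{ij} &\le Q_k, &\qquad k &= 1, \dots, m, \\
\sum_j x_{ijk} = \sum_j x_{ijk} &= y_{ik}, &\qquad i &= 1, \dots, n, &\qquad k &= 1, \dots, m, \\
\sum_{i,j \in S} x_{ijk} &\le |S| - 1, &\qquad \text{para todo } S &\subseteq \{2, \dots, n\}, &\qquad k &= 1, \dots, m,
\end{alignat}
```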
You can think of the align environment as a table with alternating r and l columns. Every other & (switching from r to l) aligns the surrounding symbols, the others separate content of different alignment positions.
Your first line complicates things a bit, but the aligned environment works:
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align}
\sum_k y_{ik} &= \begin{cases} 1, \\ m, \end{cases} &
\begin{aligned}
i \\
i
\end{aligned} &
\begin{aligned}
&= 2, ..., n, \\
&= 1,
\end{aligned} \\
\sum_i q_i y_{ij} &\le Q_k, &
k &= 1, ..., m, \\
\sum_j x_{ijk} = \sum_j x_{ijk} &= y_{ik}, &
i &= 1, ..., n, &
k &= 1, ..., m, \\
\sum_{i, j \in S} x_{ijk} &\le |S| - 1, &
\text{para todo } S &\subseteq \{2, ..., n\}, &
k &= 1, ..., m,
\end{align}
\end{document}
I propose this, based on mathtools (no need to load amsmath separately, mathtools does it for you) and some trial and error to adjust the first line:
\documentclass[12pt, a4paper]{report}
\usepackage{mathtools}
\begin{document}
\begin{align}
\sum_k y_{ik} & =\mathrlap{\begin{cases}
1, & i=2,...,n,\\
m, \hspace{6.09em}& i=1,
\end{cases}} \\
\sum_i q_i y_{ij} &\le Q_k, &k&=1,...,m, \\
\sum_j x_{ijk}=\sum_j x_{ijk}&= y_{ik}, &i & =1,...,n, &&k=1,...,m, \\
\sum_{i,j \in S} x_{ijk} &\le |S| -1, &\text{para todo } S & \subseteq \{2,...,n\},&& k=1,...,m,
\end{align}
\end{document}
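Following Peter Grill's comment, the whole system can also be set with alignat, where every & is an rl align pair and no separator && is needed. A sketch of that approach (my own variant, not from the answers above; the `\qquad` spacing, the `cases` handling of the first line, and the English "for all" are my choices):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% alignat{4}: four rl pairs, so up to seven & per row;
% every & alternates right/left alignment, no separator && needed.
\begin{alignat}{4}
\sum_k y_{ik} &= \begin{cases} 1, & i = 2,\dots,n, \\ m, & i = 1, \end{cases} \\
\sum_i q_i y_{ij} &\le Q_k, &\qquad k &= 1,\dots,m, \\
\sum_j x_{ijk} = \sum_j x_{ijk} &= y_{ik}, &\qquad i &= 1,\dots,n, &\qquad k &= 1,\dots,m, \\
\sum_{i,j \in S} x_{ijk} &\le |S| - 1, &\quad \text{for all } S &\subseteq \{2,\dots,n\}, &\qquad k &= 1,\dots,m
\end{alignat}
\end{document}
```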
http://www.ams.org/mathscinet-getitem?mr=0352289
|
MathSciNet bibliographic data MR352289 20H25 (20F10). Fine, Benjamin. The structure of ${\rm PSL}_{2}(R)$; $R$, the ring of integers in a Euclidean quadratic imaginary number field. Discontinuous groups and Riemann surfaces (Proc. Conf., Univ. Maryland, College Park, Md., 1973), pp. 145–170. Ann. of Math. Studies, No. 79, Princeton Univ. Press, Princeton, N.J., 1974.
https://kb.osu.edu/dspace/handle/1811/19924
|
# FT MICROWAVE SPECTROSCOPY OF THE $H_{2}O-O_{2}$ COMPLEX
Title: FT MICROWAVE SPECTROSCOPY OF THE $H_{2}O-O_{2}$ COMPLEX
Creators: Kasai, Yasuko; Sumiyoshi, Yoshihiro; Endo, Yasuki
Issue Date: 2000
Publisher: Ohio State University
Abstract: Rotational transitions of a van der Waals complex, $H_{2}O-O_{2}$, in the $^{3}B_{2}$ ground vibronic state have been observed for the first time using a pulsed-nozzle Fourier-transform microwave spectrometer. Six transitions observed in the frequency region between 8 and 30 GHz have been assigned to the rotational and fine structure components of the $H_{2}O-O_{2}$ complex, and are analyzed to obtain the rotational, centrifugal distortion, and spin-spin coupling constants. Each of the observed transitions is further split into several hyperfine components due to the water protons. Analysis of the hyperfine structure is in progress. The structure of the complex is considered to be $C_{2v}$.
Description: Author Institution: Millimeter-Wave Remote Sensing Section, Global Environment Division, Communication Research Laboratory; Department of Pure and Applied Sciences, College of Art and Sciences, The University of Tokyo
URI: http://hdl.handle.net/1811/19924
Other Identifiers: 2000-WE-02
https://mathoverflow.net/tags/measure-concentration/hot
|
# Tag Info
16
A set of points on the unit sphere in ${\Bbb R}^n$ with $\langle x,y\rangle \le \cos \theta$ for all distinct $x$ and $y$ is called a spherical code with minimum angle $\theta$. For $0<\theta < \pi/2$, Kabatiansky and Levenshtein gave an exponential upper bound (of the form $\exp(C(\theta)n)$) for the maximum number of points in such a spherical code. ...
13
It is a theorem of Besicovitch that measures on $\mathbb R^d$ do satisfy the density theorem. Fremlin, Measure Theory, Chap. 47. Added: Besicovitch, around 1930, extended his density properties of sets to those of finite Hausdorff measure. Next: D. G. Larman, "A new theory of dimension", Proc. London Math. Soc. 17 (1967) 178-192 ...
11
You might be interested in something Jelani Nelson wrote me in an email on Oct. 13, 2011: "Another notion of derandomizing JL is the following: come up with a distribution over embeddings that can be sampled using as few random bits as possible so that, for any vector $x$ in $R^d$, a random vector has its $\ell_2$ norm preserved up to $1+\epsilon$ with ...
11
$\newcommand{\ep}{\varepsilon}$ Let $X$ be any nonnegative random variable (r.v.) with finite mean $\mu>0$ and variance $\sigma^2<\infty$. For any real $u>0$, we have $\ln\frac xu\le\frac xu-1$ for all real $x>0$, whence $x\ln x\le\frac{x^2}u+x\ln\frac ue$ and \begin{equation*} EX\ln X\le\frac{EX^2}u+EX\ln\frac ue=\frac{\sigma^2+\mu^2}u+\mu\,\cdots \end{equation*}
10
One useful trick that comes in handy sometimes (I originally saw it in this paper of Talagrand, though it may go further back): We can view a random permutation in $S_n$ as being generated as follows: Start with the identity permutation, and successively perform transpositions of the form $(n, a_n)$, then $(n-1, a_{n-1})$, and so on down to $(2, a_2)$, where ...
10
In the general case (especially for high codimension) there will not be such a relation: Every compact Riemannian manifold can be isometrically embedded into the Euclidean space $R^N$. As the image of the submanifold is compact, you can change the metric "near infinity" such that it defines a metric on the sphere $S^N.$ But in special cases, there might be ...
9
This inequality cannot be true. Let us rewrite it in the more common form $$P(R_n\ge x)\le e^{-x^2/2} \tag{1}$$ for $x\ge0$, where $R_n:=S_n/b_n$, $S_n:=\sum_1^n c_iB_i$, $b_n:=\sqrt{\sum_1^n c_i^2}$. Let $n=2$, $c_1=1$, and $c_2=aI\{B_1=-1\}$, where $I\{\cdot\}$ denotes the indicator function and $a>0$ is large enough so that $\frac{-1+a}{\sqrt{1+a^2}}$ ...
8
Yes, it is true for any $n$. The easiest way to see it is by using the fact that your condition means precisely that $X$ and $Y$ can be realized on the same probability space $\Omega$ in such a way that $Y\ge X$.
8
Without further assumptions you can't do better than the union bound (which should be $n e^{-\epsilon^2}$ as you've written things). If the $X_i$ are identically distributed and the events $(|X_i| > \epsilon_0)$ are disjoint, then you get equality in the union bound for the maximum whenever $\epsilon \ge \epsilon_0$. If the $X_i$ are $\epsilon$ times ...
8
From the area estimates you get that for fixed $\varepsilon>0$ this number, say $M_\varepsilon(n)$, grows quite fast. Direct calculations show that the total area of the locus of unit vectors in $\mathbb{R}^{n}$ which are not $\varepsilon$-perpendicular to the given vector $u$ is about $$2\cdot e^{-(n-2)\cdot\varepsilon^2/2}\cdot\mathop{\rm area}\cdots$$
7
For $n>100$ let $F_n = (k/n)_{k=1}^{n-1}$ and for $x= k/n$ in $F_n$ let $I_{k,n}$ be a symmetric interval around $x$ having length $n^{-n}/(n-1)$. Set $f_n = n^n \sum_{k=1}^{n-1} 1_{I_{k,n}}$. It is clear that $f_n$ converges weak$^*$ to $1_{[0,1]}$. But the $(f_n)$ are essentially disjointly supported and hence are equivalent to the unit vector ...
7
For $p>1$, the random variables you discuss do not possess exponential moments; you are in the regime of large deviations with stretched exponential tails. See for example the following recent paper by Gantert, Ramanan and Rembart http://arxiv.org/abs/1401.4577 (and the back references, going to Nagaev and earlier).
7
Basically, the proof goes along the following lines: (1) Take a small $\varepsilon>0$ and show that the expected exit time from the interval $[-\varepsilon\sqrt{vl},\varepsilon\sqrt{vl}]$ is less than $\varphi l$ (this is standard, using the fact that your martingale squared becomes a submartingale with uniformly positive drift, see e.g. Example 7.1 of ...
7
Let $X_i=(X_{i,1},\dots,X_{i,d})$, $S:=(S_1,\dots,S_d)$, $S_j:=\sum_{i=1}^d X_{i,j}/\sqrt n$. Then, by Hoeffding's inequality, for $s\ge0$ $$P(|S_j|\ge s)\le2e^{-s^2/2},$$ whence $$E\|S\|_\infty=\int_0^\infty ds\,P(\|S\|_\infty\ge s) \le\int_0^\infty ds\,\min(1,2d\,e^{-s^2/2}) =O(1+\sqrt{\ln d});$$ here we used the inequality $$\int_t^\infty ds\,e^{-s^2/2}\cdots$$
6
Short version: the set $\{\mu(B):B\in\mathcal{B}\}$ is a closed set for any probability space $(X,\mathcal{B},\mu)$. For atomic spaces this follows from an elementary topological argument, and for non-atomic spaces it is a closed interval by a classical (and easy) result of Sierpinski. Longer version with details: Let $A_n=A'_n\cup A''_n$ where $A'_n$ is ...
6
For any $\beta>0$, $$\mathbb{E}B(n,p)^k\leq k!\beta^{-k}\mathbb{E}e^{\beta B(n,p)}= k!\beta^{-k}(1-p+pe^{\beta})^n.$$ Now you can plug various $\beta$, e.g. $\beta=\frac{k}{np}$ which yields $$\mathbb{E}B(n,p)^k\leq (np)^k k!k^{-k}\left((1-p)+pe^{\frac{k}{pn}}\right)^n.$$ I assume that you got your estimate by elaborating on this expression, although in ...
6
On the one hand, the proof is very cheap. Let $Z_j=e^{2\pi iX_j}$. $X=\sum_j X_j$, $Z=e^{2\pi i X}$. Note that $\operatorname{Var}_{\mathbb R/\mathbb Z}X\approx 1-|EZ|$ and similarly for $X_j$ and $Z_j$. Now just use the identity $EZ=\prod_j EZ_j$ to conclude. On the other hand, finding the reference may be a highly non-trivial task, so I leave it to ...
6
I streamlined my proof a bit so it is postable now :-) First, a disclaimer. I have no doubt that there is some slick theorem dating back to 1980's that immediately implies what you want and all one needs is to wait for a while until someone posts a reference to it. Meanwhile, here is a crude computation that gives a rather dismal value of $\alpha$ but still ...
6
Exponential inequalities for sums of independent random variables (r.v.'s) can be extended to martingales in a standard and completely general manner; see Theorem 8.5 or Theorem 8.1 for real-valued martingales, and Theorem 3.1 or Theorem 3.2 for martingales with values in 2-smooth Banach spaces in this paper. In particular, Theorem 8.7 in the same paper ...
6
First, we need to fix the notation a bit. Let $X_1,X_2,\dots$ be iid zero-mean unit-variance random variables (r.v.'s). For each natural $n$, let the $n$-tuple $(J_1,\dots,J_n):=(J_{n,1},\dots,J_{n,n})$ of r.v.'s be independent of the $X_k$'s and have the multinomial distribution with parameters $n,1/n,\dots,1/n$. For each $k\in[n]:=\{1,\dots,n\}$, the ...
5
What you have is called an empirical process, although it is usually written with the points and the functions reversed: let $\mathcal{F}$ be a family of functions $\Omega \to \mathbb{R}$ and let $X_1, \dots, X_n$ be i.i.d. elements of $\Omega$. The empirical process indexed by $\mathcal{F}$ is the collection of random variables $\{Z_f : f \in \mathcal F\}$ ...
5
Actually, Bernstein's inequality does not really require boundedness of the i.i.d. random summands; a finite exponential moment of the absolute value of a random summand will suffice. However, here we can just use Markov's inequality. Let $X,X_1,\dots,X_n$ be independent identically distributed random variables (i.i.d. r.v.'s) such that $P(X=z)=\mu(z)$ ...
5
Here's a step that seems nice enough to point out. It still leaves a parameter to pick, and I'm not sure it's ever better than applying Bernstein, but it does something different. We can get a probability bound in terms of how much $S_n$ exceeds the Renyi entropy $H_{\alpha}$ of $\mu$ (equivalently, worded in terms of the $\ell_{\alpha}$ norm of $\mu$), for ...
5
This looks like a weak law of large numbers, and in fact a strong law holds: I claim that $\liminf_{t \to \infty} \frac{S_t}{t} \ge \Delta$ almost surely, which implies the desired result. The key is to show that $\frac{M_t}{t} \to 0$ almost surely. Then we have $\frac{S_t}{t} = \frac{M_t}{t} + \frac{D_t}{t} \ge \frac{M_t}{t} + \Delta$ and can take the ...
5
This is worked out in some detail in the paper of Fan, Grama and Liu, J. Math Anal. Appl. 448 (2017), 538-566 (see in particular Theorem 2.1 there, and the references). Unfortunately I do not have open access to an electronic copy, and it doesn't seem to exist on arXiv.
5
It appears you want to have the following: Let $X_1,\dots,X_n$ be independent zero-mean random variables (r.v.'s) (or, more generally, martingale differences) with $S_n:=X_1+\dots+X_n$, $B^2:=EX_1^2+\dots+EX_n^2$, and $M:=\frac1n\sum_1^n M_i$, where $M_i:=\text{ess sup}|X_i|$. Then \begin{equation*} P(S_n\ge x)\overset{\text{(?)}}\le\exp\frac{-x^2}{\cdots} \end{equation*}
5
By the Tsirel'son--Ibragimov--Sudakov argument, reviewed on the first page in Bobkov, pushing the measure forward from the cube to the canonical Gaussian on $\mathbb R^n$ and using the Gaussian isoperimetric inequality, we have $$1 - \mu_{\infty}(A_r)\le B(r):= B_p(r):= 1-\Phi\big(r\sqrt{2\pi}+\Phi^{-1}(p)\big),$$ where $r$ ...