url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
|---|---|---|---|
https://socratic.org/questions/what-is-the-sum-of-the-probabilities-in-a-probability-distribution
|
×
# What is the sum of the probabilities in a probability distribution?
Jan 9, 2015
The sum of the probabilities in a probability distribution is always 1.
A probability distribution is a collection of probabilities that defines the likelihood of observing all of the various outcomes of an event or experiment. Based on this definition, a probability distribution has two important properties that are always true:
• Each probability in the distribution must be of a value between 0 and 1.
• The sum of all the probabilities in the distribution must be equal to 1.
An example: You could define a probability distribution for the observation for the number displayed by a single roll of a die. The probability that the die with show a "1" is $\frac{1}{6}$.
That's because there are six possible outcomes, and only one of those outcomes is a "1". Lets label the probabilities of all the possible outcomes for the single die.
Roll a "1": Probability is $\frac{1}{6}$
Roll a "2": Probability is $\frac{1}{6}$
Roll a "3": Probability is $\frac{1}{6}$
Roll a "4": Probability is $\frac{1}{6}$
Roll a "5": Probability is $\frac{1}{6}$
Roll a "6": Probability is $\frac{1}{6}$
Each probability is between 0 and 1, so the first property of a probability distribution holds true. And the sum of all the probabilities:
$\frac{1}{6} + \frac{1}{6} + \frac{1}{6} + \frac{1}{6} + \frac{1}{6} + \frac{1}{6} = 1$,
so the second property of a probability distribution holds true.
|
2018-09-25 03:27:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9457091093063354, "perplexity": 362.0079538356949}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160923.61/warc/CC-MAIN-20180925024239-20180925044639-00287.warc.gz"}
|
https://electronics.stackexchange.com/questions/433214/what-is-the-current-when-there-are-no-resistors
|
# What is the current when there are no resistors?
How come I always see videos with what seems like a random amount of mA flowing from a battery with a certain amount of voltage?
For example, what if a wire had 0.13 ohms resistance and a battery had 5V electrical difference. That would mean that 38 Amps should be the current (right?).
I=V/R
I=5V/0.13ohms
Okay, so I'm editing this now using internal resistance, of about 1ohm.
This gives about 4.5A, so is this correct?
Why are common schematics showing some amount of mA shouldn't it be something like what I've shown, or do they have hidden resistors?
• What is the internal resistance of the battery you are using? – HandyHowie Apr 18 at 11:45
• Are you talking about a real battery or an ideal model of a battery? – Elliot Alderson Apr 18 at 11:46
• @ElliotAlderson An ideal model – BeastCoder2 Apr 18 at 11:48
• @HandyHowie it isn’t a real battery but if it were would I just be able to find it on the battery itself? – BeastCoder2 Apr 18 at 11:50
• On the datasheet possibly - data.energizer.com/pdfs/nh22-175.pdf – HandyHowie Apr 18 at 12:02
|
2019-11-23 02:31:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17890600860118866, "perplexity": 937.2937270364788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496672313.95/warc/CC-MAIN-20191123005913-20191123034913-00497.warc.gz"}
|
https://mathematica.stackexchange.com/questions/139487/piecewise-integration-fail-or-something-else
|
# Piecewise integration fail or something else?
I have a not-so-complicated piecewise cubic function, shown below as the yellow curve on the right. It's derivative is on the left; the blue lines are references. Please see the code below where I call it myF (and its derivative myf).
To my surprise, the integration with a parameter $u$ and sine in the argument gives an ugly result involving the complex $i$.
Is the result of $\int$myF$(u\sin x)$d$x$ (as shown in the screenshot below) correct? Or is it some kind of failure due to my erroneous way of using Integrate coupled withPiecewise?
As just the area under the curve modulated by sine, I for now don't see how this can be mathematically true to have involved $\sqrt{-1}$.
Screen shot below:
The code to define the curves, generate the plots, and do the integration is below. When the parameter $u < \frac12$ of course there's no problem since the integration doesn't go over the splitting point' (where is differentiable).
I tried various ways to code the integration with assumptions, but the outcomes are basically the same. Some hint or confirmation would be greatly appreciated. Thank you.
ClearAll[myF, myf, u, v, ImgSz];
myF[v_] := Piecewise[{
{0, v < 0},
{4/3 v^3, 0 <= v < 1/2},
{1/3 (1 - 6 v + 12 v^2 - 4 v^3), 1/2 <= v < 1},
{1, v >= 1}
}];
myf = D[myF[u], u] /. {u -> v};
ImgSz = 250;
Row[{ Plot[{ 1, (* symmetry reference *)
myf
}, {v, -.3, 1.3}, PlotRange -> {-0.1, 2}, ImageSize -> ImgSz],
Plot[{ v, (* linear reference *)
myF[v]
}, {v, -.3, 1.3}, PlotRange -> {-0.1, 1}, ImageSize -> ImgSz]
}]
Assuming[ 1/2 < u < 1 ,
Integrate[ myF[ u Sin[x]], {x, 0, Pi}] ]
Summarizing Edit
Okay, so I couldn't recognize the expression as real-disguised-in-complex-conjugate, neither did I know reliable ways to test that. I'm glad to have learned from both answer posts.
In my case, defining the function with UnitStep doesn't allow Mathematica to evaluate differently like in another post (which I cannot find right now). Nor does PrincipalValue -> True or change of variables apply here as it does sometimes. I have accepted that in my case I need to post-process the expression the way I see fit. Nonetheless, for the record, there are some known bugs, old and new, solved and unsolved.
You can convert the answer to a real-looking expression as follows:
FullSimplify[
1/2 < u < 1]
(*
1/9 (-11 Sqrt[-1 + 4 u^2] + 16 u^2 (u - Sqrt[-1 + 4 u^2]) + 6 (1 + 6 u^2) ArcSec[2 u])
*)
Just because an expression contains the imaginary unit $i$ doesn't mean that its imaginary part is non-zero. For example, you could naïvely think that $$\frac{1}{x-i}+\frac{1}{x+i}$$ is complex, but actually it equals $\frac{2 x}{x^2+1}$, which is explicitly real (for real $x$).
In your particular case, you found an expression that looks complex, but it is actually real. For example, if you set u=3/4 and evaluate the integral numerically, you get 0.718597+i5.5511e-17, which has an spurious imaginary part due to numerical error.
If you evaluate
Plot[ReIm@NIntegrate[myF[u Sin[x]], {x, 0, Pi}], {u, 1/2, 1}]
`
you get
which confirms the fact that the integral is indeed real - its imaginary part vanishes.
|
2021-10-23 23:15:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6986770629882812, "perplexity": 1359.8277097472649}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585828.15/warc/CC-MAIN-20211023224247-20211024014247-00129.warc.gz"}
|
https://flyingcoloursmaths.co.uk/why-sohcahtoa-is-stupid-and-what-you-can-do-instead/
|
My dad tells me that, above the blackboard in his 1960s Scottish high school, was a banner with the letters ‘SOH CAH TOA’ written out on it. Any questions about the banner were brushed off with a smile and ‘you’re not old enough to learn about SOH CAH TOA yet.”
Which, I have to concede, is a great way to pique kids’ interest in the topic. I’ve often wondered about the idea of telling students they’re not old enough to know about maths yet, it’s for over-16s only - and then let them get on with finding out the details on the sly.
### However, SOH CAH TOA is stupid - there, I said it
There are - if you count the SOH CAH TOA way - 11 types of right-angled triangle questions. (Finding the hypotenuse or a leg ((a short side’)) given the other two sides; three versions of finding the angle given the other two sides; and six versions of finding one side given an angle and another side.) If you ask me - and I suggest you do - that’s silly.
I say there are only two kinds of right-angled triangle problem: finding an angle if you know all the sides and finding a side if you know all the angles and a side. And all you need to know in order to solve all of these things: Pythagoras and the sine rule.
### Pythagoras
Now, you know Pythagoras. The square on the hypotenuse is equal to the sum of the squares on the other two sides - or, if you prefer, $opp^2 + adj^2 = hyp^2$. (I prefer this to $a^2 + b^2 = c^2$ because it tells you which side is which.) That means:
• if you know the two short sides, you square them, add them up and square root the answer to get the hypotenuse;
• if you know the hypotenuse and a leg, you square them, take them away (bigger minus smallest, of course) and square root the answer to get the other leg.
That’s straightforward. If you’re solving triangles the Table of Joy way, it usually makes sense to find the last side, just in case you need it.
### The other angle
Oh! If you have two angles of a triangle, it’s easy to find the third, isn’t it? Especially if one of them happens to be a right angle. You simply work out $\frac{\pi}{2}$ minus the other angle.
What’s that?
Oh, fine. If you MUST use degrees (radians are much better), it’s 90 minus the other angle.
### The Sine Rule
At GCSE, you get given the sine rule on the front of the paper. At A-level, nope, you have to remember it. It’s not exactly difficult, though:
$\\frac{a}{\\sin(A)} = \\frac{b}{\\sin(B)} = \\frac{c}{\\sin(C)}$
What you do once you have either all three sides (and the right angle, don’t forget) or all three angles (and a side), is label the corners of the triangle like the picture, with each angle (a big letter) opposite its corresponding little letter.
Also, write out the sine rule, and put a circle around all of the information you have, and a square around the thing you don’t know. If there’s a fraction with one of its numbers unshaped, that’s fine - just cross it out. Neatly.
### The Sine Rule with the Table of Joy
Here comes the clever bit! You can even do this without drawing the whole table - but if you’re curious, you can buy Basic Maths For Dummies and/or Numeracy Tests For Dummies to see exactly how the Table of Joy works.
Here’s what you do:
1. Rewrite your fractions with numbers in the appropriate places and a question mark in the missing space.
2. Find the number diagonally opposite the question mark and write it on the bottom of a big fraction.
3. Take the other two numbers and write them out on top of the fraction with a times between them.
4. Work out the fraction you’ve just written down.
5. If you were looking for a side, you’re done. Hooray.
6. If you’re after an angle, do $\sin^{-1}(Ans)$ and that’ll give you the answer.
### Example: finding an angle
With this one, we don’t really need the bottom (adjacent) side, but let’s find it anyway: $16.8^2 - 9.8^2 = 186.2$, so the bottom side is the square root of that - 13.64 units (to 2dp). The Table of Joy would have 9.8 and 16.8 on the top, and $\sin(x)$ next to $\sin(\frac{\pi}{2})$ on the bottom. You’d work out $9.8 \times \sin(\frac{\pi}{2}) \div 16.8 = 0.583$; since we want an angle, we do inverse sine of that using the answer button to get 0.623 radians (38.69°, if you must).
### Finding a side
This one gives an angle in degrees, tut tut. We can find the other angle by working out 90° - 33° = 57° and then work out the Table of Joy: you’ve got 8.7 and x on top, and $\sin(57^\circ)$ next to $\sin(90^\circ)$ on the bottom. The sum is $8.7 \times \sin(90^\circ) \div \sin(57^\circ) = 10.37$ units (2dp)
So there you go. A simple, easy way to solve any right-angled triangle without having to mess around with tan and cosine.
### Why is this better?
Oh yeah, why is this a better approach than slavishly learning all 11 possible right-angled triangle types? Because this method also works for non-right-angled triangles - although you also need the cosine rule instead of Pythagoras for some of those. It seems daft to me to learn a dozen different ways of doing special cases when you can learn a handful of general cases and be done with it. So there!
(Image by fdecomite used under a Creative Commons by licence)
* Edited 16/12/2013 for formatting.
|
2021-06-16 11:04:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7157379984855652, "perplexity": 614.6043098186658}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487623596.16/warc/CC-MAIN-20210616093937-20210616123937-00368.warc.gz"}
|
https://www.qb365.in/materials/stateboard/12th-standard-business-maths-english-medium-free-online-test-book-back-one-mark-questions-part-three-6770.html
|
" /> -->
#### 12th Standard Business Maths English Medium Free Online Test Book Back One Mark Questions - Part Three
12th Standard EM
Reg.No. :
•
•
•
•
•
•
Time : 00:10:00 Hrs
Total Marks : 10
10 x 1 = 10
1. Rank of a null matrix is
(a)
0
(b)
-1
(c)
$\infty$
(d)
1
2. $\int _{ 0 }^{ \infty }{ { x }^{ 4 }{ e }^{ -x } }$dx is
(a)
12
(b)
4
(c)
4!
(d)
64
3. Area bounded by y = $\left| x \right|$ between the limits 0 and 2 is
(a)
1sq.units
(b)
3 sq.units
(c)
2 sq.units
(d)
4 sq.units
4. The solution of the differential equation $\frac { dy }{ dx } =\frac { y }{ x } +\frac { f\left( \frac { y }{ x } \right) }{ f'\left( \frac { y }{ x } \right) }$ is
(a)
$f\left( \frac { y }{ x } \right) =k.x$
(b)
$xf\left( \frac { y }{ x } \right) =k$
(c)
$f\left( \frac { y }{ x } \right) =ky$
(d)
$yf\left( \frac { y }{ x } \right) =k$
5. For the given data find the value of Δ3y0 is
x 5 6 9 11 y 12 13 15 18
(a)
1
(b)
0
(c)
2
(d)
-1
6. The distribution function F(x) is equal to
(a)
P(X-x)
(b)
P(X$\le$x)
(c)
P(X$\ge$x)
(d)
all of these
7. In a binomial distribution, the probability of success is twice as that of failure. Then out of 4 trials, the probability of no success is
(a)
16/81
(b)
1/16
(c)
2/27
(d)
1/81
8. The standard error of sample mean is
(a)
$\frac { \sigma }{ \sqrt { 2n } }$
(b)
$\frac { \sigma }{ { n } }$
(c)
$\frac { \sigma }{ \sqrt { n } }$
(d)
$\frac { { \sigma }^{ 2 } }{ \sqrt { n } }$
9. Chance variation in the manufactured product is
(a)
controllable
(b)
not controllable
(c)
both (a) and (b)
(d)
none of these
10. A type of decision –making environment is
(a)
certainty
(b)
uncertainty
(c)
risk
(d)
all of the above
|
2020-11-23 16:35:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.663943350315094, "perplexity": 2570.35321014713}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141163411.0/warc/CC-MAIN-20201123153826-20201123183826-00672.warc.gz"}
|
https://foundationalperspectives.wordpress.com/2016/06/30/can-we-think-about-the-infinite-in-a-consistent-way-and-would-such-a-concept-be-helpful/
|
(Notations, non-standard concepts, and definitions used commonly in these investigations are detailed in this post.)
A. A mathematical physicist’s conception of thinking about infinity in consistent ways
John Baez is a mathematical physicist, currently working at the math department at U. C. Riverside in California, and also at the Centre for Quantum Technologies in Singapore.
Baez is not only academically active in the areas of network theory and information theory, but also socially active in promoting and supporting the Azimuth Project, which is a platform for scientists, engineers and mathematicians to collaboratively do something about the global ecological crisis.
In a recent post—Large Countable Ordinals (Part 1)—on the Azimuth Blog, Baez confesses to a passionate urge to write a series of blogs—that might even eventually yield a book—about the infinite, reflecting both his fascination with, and frustration at, the challenges involved in formally denoting and talking meaningfully about different sizes of infinity:
“I love the infinite. … It may not exist in the physical world, but we can set up rules to think about it in consistent ways, and then it’s a helpful concept. … Cantor’s realization that there are different sizes of infinity is … part of the everyday bread and butter of mathematics.”
B. Why thinking about infinity in a consistent way must be constrained by an objective, evidence-based, perspective
I would cautiously submit however that (as I briefly argue in this blogpost), before committing to any such venture, whether we can think about the “different sizes of infinity” in “consistent ways“, and to what extent such a concept is “helpful“, are issues that may need to be addressed from an objective, evidence-based, computational perspective in addition to the conventional self-evident, intuition-based, classical perspective towards formal axiomatic theories.
C. Why we cannot conflate the behaviour of Goodstein’s sequence in Arithmetic with its behaviour in Set Theory
Let me suggest why by briefly reviewing—albeit unusually—the usual argument of Goodstein’s Theorem (see here) that every Goodstein sequence over the natural numbers must terminate finitely.
1. The Goodstein sequence over the natural numbers
First, let $g(1, m, [2]), g(2, m, [3]), g(3, m, [4]), \ldots$, be the terms of the Goodstein sequence $G(m)$ for $m$ over the domain $N$ of the natural numbers, where $[i+1]$ is the base in which the hereditary representation of the $i$‘th term of the sequence is expressed.
Some properties of Goodstein’s sequence over the natural numbers
We note that, for any natural number $m$, R. L. Goodstein uses the properties of the hereditary representation of $m$ to construct a sequence $G(m) \equiv \{g(1, m, [2]),\ g(2, m, [3]), \ldots\}$ of natural numbers by an unusual, but valid, algorithm.
Hereditary representation: The representation of a number as a sum of powers of a base $b$, followed by expression of each of the exponents as a sum of powers of $b$, etc., until the process stops. For example, we may express the hereditary representations of $266$ in base $2$ and base $3$ as follows:
$226_{[2]} \equiv 2^{8_{[2]}}+2^{3_{[2]}}+2 \equiv 2^{2^{(2^{2^{0}}+2^{0})}}+2^{2^{2^{0}}+2^{2^{0}}}+2^{2^{0}}$
$226_{[3]} \equiv 2.3^{4_{[3]}}+2.3^{3_{[3]}}+3^{2_{[3]}}+1 \equiv 2.3^{(3^{3^{0}}+3^{0})}+2.3^{3^{3^{0}}}+3^{2.3^{0}}+3^{0}$
We shall ignore the peculiar manner of constructing the individual members of the Goodstein sequence, since these are not germane to understanding the essence of Goodstein’s argument. We need simply accept for now that $G(m)$ is well-defined over the structure $N$ of the natural numbers, and has, for instance, the following properties:
$g(1, 226, [2]) \equiv 2^{2^{2+1}}+2^{2+1}+2$
$g(2, 226, [3]) \equiv (3^{3^{3+1}}+3^{3+1}+3)-1$
$g(2, 226, [3]) \equiv 3^{3^{3+1}}+3^{3+1}+2$
$g(3, 226, [4]) \equiv (4^{4^{4+1}}+4^{4+1}+2)-1$
$g(3, 226, [4]) \equiv 4^{4^{4+1}}+4^{4+1}+1$
If we replace the base $[i+1]$ in each term $g(i, m, [i+1])$ of the sequence $G(m)$ by $[n]$, we arrive at a corresponding sequence of, say, Goodstein’s functions for $m$ over the domain $N$ of the natural numbers.
Where, for instance:
$g(1, 226, [n]) \equiv n^{n^{n+1}}+n^{n+1}+n$
$g(2, 226, [n]) \equiv n^{n^{n+1}}+n^{n+1}+2$
$g(3, 226, [n]) \equiv n^{n^{n+1}}+n^{n+1}+1$
It is fairly straightforward (see here) to show that, for all $i \geq 1$:
Either $g(i, m, [n]) > g(i+1, m, [n])$, or $g(i, m, [n]) = 0$.
Clearly $G(m)$ terminates in $N$ if, and only if, there is a natural number $k > 0$ such that, for any $i > 0$, we have either that $g(i, m, [k]) > g(i+1, m, [k])$ or that $g(i, m, [k]) = 0$.
However, since we cannot, equally clearly, immediately conclude from the axioms of the first-order Peano Arithmetic PA that such a $k$ must exist merely from the definition of the $G(m)$ sequence in $N$, we cannot immediately conclude from the above argument that $G(m)$ must terminate finitely in $N$.
2. The Goodstein sequence over the finite ordinal numbers
Second, let $g_{o}(1, m, [2_{o}]), g_{o}(2, m, [3_{o}]), g_{o}(3, m, [4_{o}]), \ldots$, be the terms of the Goodstein sequence $G_{o}(m)$ over the domain $\omega$ of the finite ordinal numbers $0_{o}, 1_{o}, 2_{o}, \ldots$, where $\omega$ is Cantor’s least transfinite ordinal.
If we replace the base $[(i+1)_{o}]$ in each term $g_{o}(i, m, [(i+1)_{o}])$ of the sequence $G_{o}(m)$ by $[c]$, where $c$ ranges over all ordinals upto $\varepsilon_{0}$, it is again fairly straightforward to show that:
Either $g_{o}(i, m, [c]) >_{o} g_{o}(i+1, m, [c])$, or $g_{o}(i, m, [c]) = 0_{o}$.
Clearly, in this case too, $G_{o}(m)$ terminates in $\omega$ if, and only if, there is an ordinal $k_{o}>_{o} 0_{o}$ such that, for all finite $i > 0$, we have either that $g_{o}(i, m, [k_{o}]) >_{o} g_{o}(i+1, m, [k_{o}])$, or that $g_{o}(i, m, [k_{o}]) =_{o} 0_{o}$.
3. Goodstein’s argument over the transfinite ordinal numbers
If we, however, let $c =_{o} \omega$ then—since the ZF axioms do not admit an infinite descending set of ordinals—it now immediately follows that we cannot have:
$g_{o}(i, m, [\omega]) >_{o} g_{o}(i+1, m, [\omega])$ for all $i > 0$.
Hence $G_{o}(m)$ must terminate finitely in $\omega$, since we must have that $g(i, m, [\omega]) =_{o} 0_{o}$ for some finite $i > 0$.
4. The intuitive justification for Goodstein’s Theorem
The intuitive justification—which must implicitly underlie any formal argument—for Goodstein’s Theorem then is that, since the finite ordinals can be meta-mathematically seen to be in a $1-1$ correspondence with the natural numbers, we can conclude from (2) above that every Goodstein sequence over the natural numbers must also terminate finitely.
5. The fallacy in Goodstein’s argument
The fallacy in this conclusion is exposed if we note that, by (2), $G_{o}(m)$ must terminate finitely in $\omega$ even if $G(m)$ did not terminate in $N$!
6. Why we need to heed Skolem’s cautionary remarks
Clearly, if we heed Skolem’s cautionary remarks (reproduced here) about unrestrictedly corresponding conclusions concerning elements of different formal systems, then we can validly only conclude that the relationship of ‘terminating finitely’ with respect to the ordinal inequality ‘$>_{o}$‘ over an infinite set $S_{0}$ of finite ordinals in any putative interpretation of a first order Ordinal Arithmetic cannot be obviously corresponded to the relationship of ‘terminating finitely’ with respect to the natural number inequality ‘$>$‘ over an infinite set $S$ of natural numbers in any interpretation of PA.
7. The significance of Skolem’s qualification
The significance of Skolem’s qualification is highlighted if we note that we cannot force PA to admit a constant denoting a ‘completed infinity’, such as Cantor’s least ordinal $\omega$, into either PA or into any interpretation of PA without inviting inconsistency.
(The proof is detailed in Theorem 4.1 on p.7 of this preprint. See also this blogpage).
8. PA is finitarily consistent
Moreover, the following paper, due to appear in the December 2016 issue of Cognitive Systems Research, gives a finitary proof of consistency for the first-order Peano Arithmetic PA:
9. Why ZF cannot have an evidence-based interpretation
It also follows from the above-cited CSR paper that ZF axiomatically postulates the existence of an infinite set which cannot be evidenced as true even under any putative interpretation of ZF.
10. The appropriate conclusion of Goodstein’s argument
So, if a ‘completed infinity’ cannot be introduced as a constant into PA, or as an element into the domain of any interpretation of PA, without inviting inconsistency, it would follow in Russell’s colourful phraseology that the appropriate conclusion to be drawn from Goodstein’s argument is that:
(i) In the first-order Peano Arithmetic PA we always know what we are talking about, even though we may not always know whether it is true or not;
(ii) In the first-order Set Theory we never know what we are talking about, so the question of whether or not it is true is only of notional interest.
Which raises the issue not only of whether we can think about the different sizes of infinity in a consistent way, but also to what extent we may need to justify that such a concept is helpful to an emerging student of mathematics.
Author’s working archives & abstracts of investigations
|
2019-01-17 22:18:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 82, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9298458695411682, "perplexity": 506.176537585439}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583659340.1/warc/CC-MAIN-20190117204652-20190117230652-00098.warc.gz"}
|
https://socratic.org/questions/how-do-you-use-heron-s-formula-to-determine-the-area-of-a-triangle-with-sides-of-27
|
# How do you use Heron's formula to determine the area of a triangle with sides of that are 25, 29, and 32 units in length?
Jan 11, 2016
$A = \text{345 square units}$
#### Explanation:
Heron's formula is $A = \sqrt{\left(s\right) \left(s - a\right) \left(s - b\right) \left(s - c\right)}$, where $A$ is the area, $s$ is the semiperimeter, and $a , b , \mathmr{and} c$ are the sides of the triangle.
Let side $a = 25$.
Let side $b = 29$.
Let side $c = 32$.
Semiperimeter
The formula for the semiperimeter is $s = \frac{a + b + c}{2}$.
$s = \frac{25 + 29 + 32}{2}$
$s = \frac{86}{2}$
$s = 43$
Heron's Formula
$A = \sqrt{s \left(s - a\right) \left(s - b\right) \left(s - c\right)}$
A=sqrt(43(43-25)(43-29)(43-32)
$A = \sqrt{119196}$
$A = \text{345 square units}$
|
2021-09-27 12:13:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 16, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9784430265426636, "perplexity": 497.15026227672143}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058450.44/warc/CC-MAIN-20210927120736-20210927150736-00142.warc.gz"}
|
https://forexshop24.com/product/fx-pips-master/
|
FX Pips Master
FX Pips Master teaches you to think in a brand new way and ignore what every body else has pushed at you in the past to try out something amazingly powerful yet simple.
$19.99 Yes, the System is an All-rounder… From: Jordan Allen Re: Insider Tactics to Dominate the Forex Markets Dear Frustrated Trader, I know what it feels like to jump from one forex system to another with nothing to show for it except losing trade after losing trade. I know what it feels like to have my trading account blown to smithereens after following some so-called ex hedge fund trader’s software or manual. And I know what it feels like to doubt myself and wonder if I’ll ever make money trading forex. If you’re in this situation right now, my message to you is… You know what I used to tell myself when I read about other people making money from forex while I was losing? “There must be a way.” I used to always wonder, “What are they doing that I’m not? When do they enter their trades? And when do they get out?” It’s a mind-numbing, frustrating situation. Because these people (if you can get in contact with them) won’t give you the time of day if you are not prepared to shell out a couple thousand dollars. You are already in the hole. Where are you going to get that from? Listen… From one forex trader to another, I’m going to show you what works for me that allows me to pull in. And I’m not going to charge you anything like$2997 or some
ridiculous high-priced fee I see forex “gurus” extorting from people who like you who need help.
No.
When I tell you later, you’ll consider it a bargain.
But first, let me show you you a couple screenshots of trades I won that have put over $2,000 in my bank account in one week: And do you know what the beautiful thing about getting those returns is? When others in the past would tell you about rigging up your charts with Bollinger Bands… Stochastics… Commodity Channel Index… and a whole range of indicators you don’t need…. I strictly use price action. Indicators are lagging. And they give signals after the market has long moved on. What I’m going to show you gets you in early–at the beginning of the trend. So you can rake in 10 pips… 20 pips… 30 pips… and more… as the trend begins to form. Even if you are conservative and you shoot for 10 pips a day… If you make at least$10 per day, that’s $50 in your first week. If you make$20 per day, that’s $100 in your second week. If you make$30 per day, that’s $150 in your third week. See how it adds up? That’s what I did to make over$2,000 in a week. Not with 10 pips only.
But more.
And if you master what I’m about to show you, I have No doubt you will be able to make as much as I do.
Now, listen…
So if you are someone who works, you can do this around your day job.
You can do this in the morning before you go to work, at lunch time, or when you get home.
And if you have time on your hands during the day, you will find multiple opportunities to profit.
How much you want to make is completely up to you.
I showed my method to one of my students, Sean, in a Starbucks cafe.
After answering my Craigslist ad, he wanted to meet up instead of showing him by screen sharing online. I said no problem. We set the time for 2PM one Friday.
And when I was done explaining my method, it was 2.18.
It gets better…I watched him place a trade on his account with my guidance and watched the trade go into profit until I told him to
close it for a cool $100. He had NO problem paying my tutorial fee because he proved in flesh that… Hello. My name is Jordan Allen… and many years ago, I worked for a furniture manufacturing business. I had a boss and a salary – much like I’m guessing you have now. My paycheck wasn’t anything big. It wasn’t bad. But it wasn’t grand either. And, maybe like you and a lot of other people… I bought lottery tickets. Scratch N Win. Keno. Pick 3. Or anything I thought could get me out of the financial muck I was in. But none of that was working out. So I did the next best thing… I ran a business making and selling the only think I knew–furniture. I was doing well for a while. Money started coming in. But it was hard. It was just too much work! I was leaving my home at 5.30 in the morning and coming home at 11PM. Plus, my family life suffered. My wife always wore a constant frown when I came home at night. No “Hi Honey, how was your day?” No back rubs. No nothing. I was never home to eat dinner at the table like a normal family does. I was missing out on stuff. My daughter was growing up without me. We didn’t hang out as much as we used to before I started the business. I saw what running a conventional business was doing to my family life and I wanted out. So I started searching for alternatives… Getting a 9-5 again was out of the question. I wanted something that… I looked around and did a lot of research. Then, I had a lucky break. I saw a newspaper article about a wealthy trader from the UK who was a self-taught millionaire forex trader. I told myself that if he did it, I could do it too. I borrowed tons of forex books from the library. I scoured the internet for free and paid resources that could help me master forex. And the more I studied, the more I became confused. So I turned to the gurus. They made things worse. And were more interested in signing me on for their high-priced subscription services. There was nobody to talk to when I needed help, so what was the use giving them my money. But here’s what happened… One day while tinkering around with a currency pair, I noticed a certain pattern. And it always occurred before a major rise or fall in price. I looked at other currency pairs, and I saw the same pattern occurring. Suddenly, I knew I was on to something. I completely forgot about everything I had studied in the past because… Something fresh. Something that didn’t require all the ‘indicator noise’ all the other authors and so-called experts pushed. And boy did it work! Check out some of the recent winning trades I had last week: It worked then. It worked last week. And it’s workingnow. And it will work in thefuture. Another one of my students whom I train one-on-one online is Gregory from North Carolina. He used to run a carpet cleaning business which was doing well… but… like me…he wanted out. He was tired of it all—demanding customers, lazy employees, industry restrictions and fees. It was too much. Using the price action strategy I revealed to him, he beat me at my own game. It really is exciting to see a novice making thousands of dollars by following a few simple rules. The worked for Gregory. And I’m 100% confident they will work for you. My strategy is simple to learn and easy to use. When everyone else is tinkering around with indicators, you will be profiting from pure price action. My system will: • Teach you to think in a brand new way and… • Ignore what every body else has pushed at you in the past to try out something amazingly powerful yet simple. That’s all there is to it. Once you learn my method, you can try it out on a demo account for yourself without taking any risks whatsoever. Practice it daily. Until you’re 100% sure that it works. You will watch in amazement as your demo profits accumulate. Then, pretty soon, you will eager to try it out “for real”. Andyou’llbetakingthefirststepstogetbackallthemoney you have lost using other”systems” I don’t believe in clogging up charts with useless indicators till you can’t even see the price. This is crystal clear that even a clueless 10-year-old could easily “get it”. That’s what Jim, another student of mine, wanted when I met up with him in the local library. He had been around the block many times. Buying system after system. Ebook after ebook. And the first thing he told me when he sat down was, “If this is not simple that a 4-year-old can understand it, I can’t promise that I’ll pay you. He showed me the$200 he had in hand for me.
But I told him, “Keep the $200 for now… I’m going to have you placing a trade before I leave this library. Even if it takes another 2 hours.” And that is what I always do. I never leave our meeting without one of my students placing a trade. If I was some fluke, I’d be sweating bullets wondering if the trade would actually become profitable. But while I’m there guiding them, I’ll go buy coffee. I’ll go buy a hamburger. And I’ll come back and eat or drink like there’s no care in the world. Why? Because I’m confident. And I always prove that I KNOW what I’m doing when I part with my students. …and you WILL start winning trades like this… Do profits like this seem impossible to you? Only if you don’t have a system for doing it. A real one. Not one cooked up by some marketer who plucked a worthless, free strategy off some forex forum and decided to sell it to you. They couldn’t care less about whether you succeed or not. But I do. Because I know what the pain of losing over and over again feels like. The fact is, if you start making money like this, compounding your winnings week after week, you could soon be tempted to quit your job. But you don’t have to quit your job. You can use profit windfalls like this for other things like: You will be able to use the simplest and most straightforward method of market entry and exit for making consistent profits on a daily basis. And as I said earlier, you can even personally put the strategy to the test before trading any real money. Does FX Pips Master Repaint? FX Pips Master pivot lines do not repaint. The Buy/Sell Arrows repaints because we want to keep the chart clean. However, when the Buy/Sell Arrow appears, you can confidently take the trades and use the pivot lines for your TP and SL. Even if the arrow disappears the trades are valid. I get asked that question all the while. Many people who call me up think I’m some snake oil salesman. That I actually don’t trade. They think I just sell something I got from somewhere. But when they get to talking with me they see I genuinely care about their success. When they complain about how they don’t know when to enter and exit a trade, I can easily identify with them. Because I was once where they were at. And it may be the same with you. My first reason is that trading forex can be a lonely thing. Sometimes I crave human contact during the day. And I like to communicate with my students by email and Skype throughout the day as profitable trades are setting up. Nothing gives me more joy than seeing students who were once at their wits ends sending me screenshots of profitable trades they have taken using my strategy. It’s truly a heart-warming feeling. And that’s what I want for you too. The FX Pips Master. I’ll go through with you the overview of the FPM course. I will also reveal to you my proven successful strategy, in details, full colored manual. The FX Pips Master EA which you can plug into your MT4 platform. It will give you clear “BUY” and “SELL” signals so you can confirm my manual strategy. There will be no guesswork ever in placing your trades. In each package, you’ll get: Template files, indicators and automated installer program that does all the work for you. With a click of a button, it will install all the necessary files onto your computer. All you need to do next is to follow my simple instructions to trade. Full-colored manual, with details explanation, step-by-step instructions Everything you need to become a FX Pips Master trader is included in this package. Sure, you’ve heard that from all the other forex vendors you have bought something from before. But I want you to suspend your skepticism in this instant and take me at my word. If you are truly serious about your trading success, then you have to commit to making at LEAST 100 trades using the mechanical rules in this system. And if you can do, you will soon see the beauty of this system and the winning edge that it provides to you to comfortably win over and over and over again. Forget about paying$1,997…. $997…$497… or some other
ridiculous price I paid for crap trading systems years ago when I was struggling.
I wouldn’t want you to go through the stress and pressure of trying to make your money back in trading returns… and who wants that sort of pressure anyway?
Certainly not you!
I want you to re-coup losses you’ve suffered over the years as QUICKLY, SAFELY, and as CHEAPLY as possible.
I don’t want there to be anything getting in the way of you just trying this out, which is why…
… you can get your hands on my proven trading strategy for LOW LOW Price.
If you haven’t thought about where you’ll be in the next 60 days
from now, you can promise yourself that you can be thousands of dollars richer, exploiting a strategy that requires nothing from you other than 15 minutes of dedication to changing your life.
Is that too much to ask?
Is that too much “work”? I don’t think so.
15 minutes a day is all it takes to make a BIG difference in your lifestyle.
User Reviews
0.0 out of 5
0
0
0
0
0
There are no reviews yet.
\indicators\Entry_Reversal.ex4
\indicators\PivotsD_v5 (Black).ex4
\indicators\Trend_candles.ex4
3. How I turned $600 to$4000.pdf
|
2021-12-08 06:08:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32868438959121704, "perplexity": 1835.8395581719947}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363445.41/warc/CC-MAIN-20211208053135-20211208083135-00031.warc.gz"}
|
http://www.perlmonks.org/index.pl?node=A%20Guide%20To%20Installing%20Modules
|
Your skill will accomplishwhat the force of many cannot PerlMonks
### A Guide to Installing Modules
by tachyon (Chancellor)
on Nov 28, 2001 at 20:18 UTC ( #128077=perltutorial: print w/ replies, xml ) Need Help??
This is a brief guide to installing modules. For some background on what a module is and where they live on the system see the Simple Module Tutorial
The Basics of Module Installation
Fixing common problems
Tools to make the job easier CPAN and PPM
Installing Modules that include elements coded in C
## The Basics of Module Installation
Most modules are available from CPAN - the Comprehensive Perl Archive Network. They are supplied in what is known as a tarball. A tarball is a gzip compressed tar file. When a module is made the directory structure it lives in is converted to a single file that contains both the files and the directory information. A program called tar performs this function and the resultant file is called a tar file. Tar files have a .tar file extension. This tar file is then compressed using the gzip (GNU Zip) program. Gzipped files have a .gz extension thus a standard module will be called something like:
Some-Module-0.01.tar.gz
The first part is the name, the next part the version number and the last part the .tar.gz extension signifying that this is a tarball. You uncompress a tarball using the tar program like this (the $represents the command prompt): $ tar -zxvf Some-Module-0.01.tar.gz
All *nix systems will have a tar program. On windows you can use CYGWIN which is a set of UNIX tools ported to Win32 to get tar but programs like WinZip will handle extracting tarballs just fine.
One issue with Winzip is that it does not deal well with .tar.tar as an extension. Fix it by changing the extension to tar.gz.
Once your have extracted your tarball you then need to make and install your files. You do that like this. At the command prompt navigate your way to the directory created where you extracted the tarball. Making your extractions in a /temp dir is a good idea in case of problems with badly made distributions. There may be several directories to move through. In our hypothetical example above we would expect the tarball to extract into a directory called "Some-Module-0.01", however it may extract to "Some" or even straight into the current working directory (this is not fun to clean up, thus the suggestion of using a /temp dir). Within this module directory we should find a file called "Makefile.PL" although it *may* be several dirs deep. Once you find the Makefile.PL you do the following:
$perl Makefile.PL$ make
$make test$ make install
This should all proceed smoothly and your module should be installed, if not see below. Note on Win32 you will need to use a program called nmake. You can get a copy from M$here: nmake via FTP or here nmake via HTTP Once you have downloaded it you need to run the program (it self extracts) and make sure that you do this in a directory that is on your PATH. The PATH is a list of directories that Win32 will search for executable files. When you type nmake you want Windows to be able to find the program so it must be in one of the directories on the PATH. To see your current PATH type PATH at the command prompt. C:\WINDOWS or C:\WINNT will be a fairly safe bet. Now that you have got nmake and extracted it in a directory on your PATH you just do this: C:\> perl Makefile.PL C:\> nmake C:\> nmake test C:\> nmake install Ok either everything went fine or you got some errors. Note in the following read nmake for make if your are on Win32 ## Fixing common problems ### When I perl Makefile.PL or make test I get a Warning: prerequisite Foo::Bar failed to load: Can't locate foo/bar.pm in @INC.... Some modules have dependencies. They depend on other modules to function. These are specified in the Makefile.PL in the line: 'PREREQ_PM' => { Foo::Bar => 1.5 } This line specifies that the module you are trying to install require a module called Foo:Bar and that the version of this module must be 1.5 or greater. If you get these type of errors you will need to download and install these module(s) first. You should find details in the README file - did you READIT? Some Authors forget to edit their Makefile.PL with dependencies - in this case you will generally get this error message when you run the tests as the new module tries to load non-existent modules on which it is dependent. ### I get an error saying "Can't find make" As noted before when you say make/nmake the operating system looks along the path for an executable by the right name. If it can't find one you get this error. To fix it simple modify your path or specify the full path to the executable such as: $ ~/make
This tells the operating system to use the make executable in your home directory where you just put it OK
### make/nmake reports missing files
When you run make/nmake it looks for a file called MANIFEST which lists all the files that should be present in the distribution. If some are missing you get an error like:
$make Checking if your kit is complete... Warning: the following files are missing in your kit: Foo.bar Please inform the author.$
This is slack on the authors part for producing a broken distribution. Get a good one!
### make test reports errors
While modules should be portable across operating systems some are not. To ensure that a module is working correctly most authors develop a test suite of programs that ensure the module is behaving as expected. When you run:
$make test These scripts are all run. Some errors are trivial but some are significant. If you get errors consult the README and look at the results to see what is broken. The author will want to know about these and may be able to help fix them. If you get a lot of errors it is probably wise not to make install and install the module. ### I don't have permission to install a module on the system! If you don't have root permission you will not be able to install a module in the usual place on a shared user system. If you do not have root access you may get errors like: $ make install
Warning: You do not have permissions to install into
/usr/local/lib/perl5/site_perl/5.005/i386-freebsd at
/usr/libdata/perl/5.00503/ExtUtils/Install.pm line 62.
mkdir /usr/local/lib/perl5/site_perl/5.005/CGI/Simple:
Permission denied at /usr/libdata/perl/5.00503/ExtUtils/Install.pm line 120
*** Error code 2
This is easy to get around. You just install it locally in your home directory. Make a directory called say /lib in your home directory like this:
# first navigate to your home directory
$cd ~ # now make a directory called lib # on UNIX$ mkdir lib
# on Win32
C:\> md lib
Now you have a directory called ~/lib where the ~ represents the path to your home dir. ~ literally means your home dir but you knew that already. All you need to do is add a modifier to your perl Makefile.PL command
$perl Makefile.PL PREFIX=~/lib LIB=~/lib This tell MakeMaker to install the files in the lib directory in your home directory. You then just make/nmake as before. To use the module you just need to add ~/lib to @INC. See Simple Module Tutorial for full details of how. In a nutshell the top of your scripts will look like this: #!/usr/bin/perl -w use strict; # add your ~/lib dir to @INC use lib '/usr/home/your_home_dir/lib/'; # proceed as usual use Some::Module; ## Tools to make the job easier CPAN and PPM There are some tools to make installing modules even easier. They may be difficult to get working through firewalls or proxies. Read the docs for configuration hints. ### CPAN.pm CPAN.pm is a perl module that installs perl modules! It is part of the standard distribution so you should have a copy available. The easiest way to use it is like this (note the use of different quotes on different OSs): # Win32 C:\> perl -MCPAN -e "shell" # UNIX$ perl -MCPAN -e 'shell'
This fires up the interactive shell. Follow the prompts and accept the defaults.
### PPM
PPM is the Perl Package Manager from ActiveState. See A guide to installing modules for Win32 for full details. It installs special versions of CPAN modules wrapped in an XML format called a PPD file. To file up the shell:
C:\>PPM
PPM>
Type help at the prompt for commands and see the docs. If you can't get PPM to work through your proxy/firewall then download the .zip files of the PPD files from here, unzip them, navigate to the directory you unzipped them into and then run:
C:\>PPM install Some-Module.ppd
## Installing Modules that include elements coded in C
The most difficult modules to install are generally those that include parts of the module written in C. These modules require that you have a *good* C compiler on your system - generally gcc is best. On most UNIX systems you will have a C compiler but on Win32 you will probably not have as it is not a part of Windows. If you do not have a C compiler you will need to install one. Get a copy of gcc direct from the source here:
Getting modules compiling on Win32 can be tricky. See A Practical Guide to Compiling C based Modules under ActiveState using Microsoft C++. Just getting cygwin/MinGW and gcc is not enough. The easiset solution is to try to find a precompiled binary version (try ActiveState for a PPM or the Author) or email the Author the tale of your woes. If you are using Win32 expect the author to suggest you get a real OS but.....
cheers
tachyon
Corrected a few technical inexactitudes (similar to issues ;-) thanks to Hanamaki
Replies are listed 'Best First'.
Re: A Guide to Installing Modules
by Hanamaki (Chaplain) on Nov 28, 2001 at 23:46 UTC
++ for tachyon's nice introduction to module installation. Since I have already done about 600-700 tests for the CPAN-testers I would like to supply some comments (footnotes) from my experience.
Some Footnotes
1. In our days, fortunately most authors use h2xs and make dist. Therefore, these tarballs expand to Some-Module-0.01, and if you cd into the directory you don't have to search for the place to do your perl Makefile.PL.
Tarball distributions which expand just to Some are really a pain, especially if you download a few modules from the Some:: Namespace.
Unfortunately there are also some tarball distributions which don't get expanded in their own directories, but in the current directory -- Have fun with the cleanup afterwards.
2. Unfortunately, many CPAN authors are still not used to -- or don't know about -- setting PREREQ_PM in Makefile.PL. So be prepared to get the "Can't locate foo/bar.pm in @INC" error while running make test.
3. While you should find important information in the Readme File, don't be surprised if you just find a Readme template, where the author never cared to replace "blabla".
4. If make test just does one test (1..1) it is in most cases just a load test. This means, your module will get loaded without problems, but it does not tell you wether the supplied functions work as expected. While make test is a important step for confidence building you should test the Module with your own scripts, and report errors to the author if appropriate. Be prepared, that many tests supplied by the author do not test all functions.
Hanamaki
Edited by footpad, ~Wed Nov 28 20:04:16 2001 (GMT)
How do you uninstall a module ?
If the module was installed in a normal fashion (perl Makefile.PL ...), it should've left a .packlist file.
Then you just use ExtUtils::Installed (you could use File::Find, but why go through the trouble when somebody already done it)
use ExtUtils::Installed;
my $inst = ExtUtils::Installed->new(); print "$_\n" for $inst->files('CGI'); =head1 on my machine, I get C:\Perl\lib\CGI\Util.pm C:\Perl\lib\CGI\Cookie.pm C:\Perl\lib\CGI.pm C:\Perl\lib\CGI\Push.pm C:\Perl\lib\CGI\Pretty.pm C:\Perl\lib\CGI\Fast.pm C:\Perl\lib\CGI\Carp.pm C:\Perl\lib\CGI\Switch.pm C:\Perl\lib\CGI\Apache.pm =cut # and now for the "deletion" part print "unlinking ", unlink($inst->files('CGI') );
[download]
update:
Hmm, works just fine for me on various perls/systems, what exactly did you try, and what version of ExtUtils::Packlist/ExtUtils::Installed do you have?
How did you install Image::Magick?
I have used CPAN to install modules and tested this, and it all works out as expected. I suspect one of the ExtUtils modules you're using is messed up. Seeing how you're still getting a file list, a workaround is easy (use File::Find to locate those files in @INC and acquire absolute paths).
____________________________________________________
** The Third rule of perl club is a statement of fact: pod is sexy.
Why would you need to?
I don't believe there are any mechanisms available to completely uninstall a module (assuming it wasn't installed with a package management system like RPM). If you just want to make a module unavailable to scripts, just find and delete the Module.pm file.
I don't have PPM, how can I install a package?
by PodMaster (Abbot) on May 25, 2003 at 11:06 UTC
Acquire PPM, or (if you insist), follow these instructions.
First, make sure you're downloading the right package (one intended for your version of perl), and then simply download it (ex: WWW-Curl.tar.gz).
Then, uncompress it so now you have a blib directory in your current working directory, and then execute
perl -MExtUtils::Install -e install_default WWW/Curl
and voila. It's what PPM basically does.
BE AWARE that the .ppd file may contain instructions to download extra files needed for install, which are not packaged with the tarball (like libcurl.dll, libeay32.dll, ssleay32.dll). You'll have to examine the .ppd file to make sure.
If you look at WWW-Curl.ppd you'll see a reference to install_libcurl.
PPM would download this file and attempt to execute it.
install_libcurl would check for the existence of specified files in your path, and prompt you whether to install them if they're not found.
If you want autogenerated html docs (my packages generally don't include these), simply execute
perl -MActivePerl::DocTools -e ActivePerl::DocTools::WriteTOC()
or (the one I prefer)
perl -MPod::Master -e Update
You can acquire Pod::Master from perlmonks.org here.
MJD says "you can't just make shit up and expect the computer to know what you mean, retardo!" I run a Win32 PPM repository for perl 5.6.x and 5.8.x -- I take requests (README). ** The third rule of perl club is a statement of fact: pod is sexy.
Re: A Guide to Installing Modules
by Anonymous Monk on Oct 17, 2003 at 16:00 UTC
I have been assigned to install a whole load of CPAN modules on the machines I admin for. The machines are a collection of differing RedHat boxes used by techies who like to change configurations and versions (so we have a lot of different RedHat and Perl versions). I tried CPAN but it requires an initial setup for each machine and I just want to install a directory full of perl-module.tar.gz's; I tried a script that essentially did uncompress, perl Makefile.PL; make; make test; make install but some packages throw up prompts on the Makefile.PL step and I don't know of a way to just indicate the defaults to each package. Any help would be appreciated.
Module::ScanDeps. Then build a test system that conforms to the minimal spec. Then make a build order. Then test.
cheers
tachyon
s&&rsenoyhcatreve&&&s&n.+t&"$'$$\"$\&"&ee&&y&srve&&d&&print
I am a retired web host provider ,we used to provide free webhosting for reurned servicemen in US UK and here in Australia. Until I retired they deleted them all.After my complaining theyve given me a shared server but Im not allowed to use admin privelidges. Obviously I jumped at ir. It seems I was a bit too keen because I have no access to mysql or php. I'm not sure what is the best way around this. I need to provide accounts but don't need accounting(creditcard etc) Simply need to have a way to supply free webhosting . I have been recomended phpmywebhosting but can't seem to get it started. The site is freehostultra.com You can see where I've started and got stuck at www.freehostultra.com/install/ I know it's probablt an easy fix but does anyone have a way to install all php,sql,in one go remotely via ftp? Regards Hey It's for a good cause) Regards Allan Optus australia
Re: A Guide to Installing Modules
by wfsp (Abbot) on Jul 30, 2004 at 07:22 UTC
Hi,
I have activestate on winXP.
I have installed/updated modules using ppm without a hitch.
I want to try Win32::Rase but have not been able to find it with ppm so I (nervously) tried the cpan.pm method you describe.
The 'install' command doesn't find it but the 'i' command seems to know about it.
Any advice please? Thanks in advance.
C:\Perl>perl -MCPAN -e "shell"
Terminal does not support AddHistory.
cpan shell -- CPAN exploration and modules installation (v1.7601)
ReadLine support available (try 'install Bundle::CPAN')
cpan> install Win32::Rase
CPAN: Storable loaded ok
Going to read \.cpan\Metadata
Database was generated on Thu, 29 Jul 2004 06:14:41 GMT
Warning: Cannot install Win32::Rase, don't know what it is.
Try the command
i /Win32::Rase/
to find objects with matching identifiers.
cpan> i /Win32::Rase/
Module id = Win32::RASE
DESCRIPTION Dialup entries and connections on Win32
CPAN_USERID MBLAZ (Mike Blazer <blazer@peterlink.ru>)
CPAN_VERSION 1.01
CPAN_FILE M/MB/MBLAZ/Win32-RASE-1.01.tar.gz
DSLI_STATUS Rdpf (released,developer,perl,functions)
INST_FILE (not installed)
cpan>
[download]
Without being too rude the whole idea of this tutorial was to teach you how to install a Module by hand. PPM and CPAN are convenience tools. Like all tools they have issues. You have just found one.
Go to search.cpan.org. Type in Win32::RASE, search, download the module, extract it (winzip will do). While you are there type in 'enum' - this is another module (enum.pm), so do likewise - download and extract it as well.
Now get a command prompt. Navigate to dir where you extracted the modules to (I extract into c:\temp) so cd c:\temp\[module name] will do it. Follow instructions above ie perl Makefile && nmake && nmake test && nmake install
If you try to do RASE first you will note it says it needs enum.pm (that's why you got it) so do that first and RASE second. It is very easy and should take no more that a couple of minutes.
If you want the docs (from the pod) to appear in the ActiveState docs do one of these after you have installed the modules.
# probably this
perl -MActivePerl::DocTools -e ActivePerl::DocTools::UpdateHTML()
# if not this should do it.
perl -MActivePerl::DocTools -e ActivePerl::DocTools::WriteTOC()
[download]
cheers
tachyon
Sorry for missing the point of your tutorial and thanks for your patience and reply.
Enum appeared to install ok as follows:
C:\enum\enum-1.016>perl makefile.pl
Checking if your kit is complete...
Looks good
Writing Makefile for enum
C:\enum\enum-1.016>nmake
WARNING: missing nmake.err; displaying error numbers without messages
+.
cp enum.pm blib\lib\enum.pm
C:\enum\enum-1.016>nmake test
WARNING: missing nmake.err; displaying error numbers without messages
+.
C:\Perl\bin\perl.exe "-MExtUtils::Command::MM" "-e" "test_harness(0, '
+bl
ib\lib', 'blib\arch')" t\dot_dot.t t\new_index.t t\new_package.t t\new
+_tag.t t\s
imple_tags.t
t\dot_dot........ok
t\new_index......ok
t\new_package....ok
t\new_tag........ok
t\simple_tags....ok
All tests successful.
Files=5, Tests=14, 0 wallclock secs ( 0.00 cusr + 0.00 csys = 0.00
+CPU)
[download]
It installed itself here:
Directory of C:\perl\site\lib\win32\ole
01/06/2004 12:15 2,839 Enum.pm
1 File(s) 2,839 bytes
0 Dir(s) 8,706,207,744 bytes free
[download]
But installing Win32::Rase produced:
C:\win32-RASE\Win32-RASE-1.01>perl makefile.pl
Checking if your kit is complete...
Looks good
Warning: prerequisite Win32::API 0 not found.
Warning: prerequisite enum 1.014 not found.
Writing Makefile for Win32::RASE
[download]
Is the difference between enum 1.014 and enum 1.016 significant?
I noticed that Win32::API was also not found.
I have Win32API::File, Win32API::Net and Win32API::Registry. I downloaded Win32::API from CPAN.
C:\Win32-API-0.41>perl makefile.pl
Checking if your kit is complete...
Looks good
Writing Makefile for Win32::API::Callback
Writing Makefile for Win32::API
C:\Win32-API-0.41>nmake
Microsoft (R) Program Maintenance Utility Version 1.50
Copyright (c) Microsoft Corp 1988-94. All rights reserved.
cp Type.pm blib\lib\Win32/API/Type.pm
cp Callback.pm blib\lib\Win32/API/Callback.pm
cp Struct.pm blib\lib\Win32/API/Struct.pm
cp API.pm blib\lib\Win32/API.pm
C:\Perl\bin\perl.exe C:\Perl\lib\ExtUtils/xsubpp -typemap C:\
+Perl\lib\E
xtUtils\typemap Callback.xs > Callback.xsc && C:\Perl\bin\perl.exe -M
+ExtUtils::
Command -e mv Callback.xsc Callback.c
cl -c -nologo -Gf -W3 -MD -Zi -DNDEBUG -O1 -DWIN32 -D_CONSO
+LE -DNO_ST
RICT -DHAVE_DES_FCRYPT -DNO_HASH_SEED -DPERL_IMPLICIT_CONTEXT -DPERL_I
+MPLICIT_SY
S -DUSE_PERLIO -DPERL_MSVCRT_READFIX -MD -Zi -DNDEBUG -O1 -DVERSION
+=\"0.41\"
-DXS_VERSION=\"0.41\" "-IC:\Perl\lib\CORE" Callback.c
'cl' is not recognized as an internal or external command,
operable program or batch file.
NMAKE : fatal error U1077: 'C:\WINDOWS\system32\cmd.exe' : return code
+ '0x1'
Stop.
NMAKE : fatal error U1077: 'C:\WINDOWS\system32\cmd.exe' : return code
+ '0x2'
Stop.
[download]
Does this mean I need a C compiler?
Sorry to be a pain.
Note: I found and moved nmake.err into the path.
Re: A Guide to Installing Modules
by mosince82 (Initiate) on Feb 02, 2010 at 01:01 UTC
Thanks Tachyon! have been looking for this info for a day, exactly what i needed.
So i've gone ahead and installed my first module. I received an error:
ERROR: Can't create '/usr/local/share/man/man3' mkdir /usr/local: Permission denied at /System/Library/Perl/5.10.0/ExtUtils/Install.pm line 479
I resolved by installing locally in my home directory as you suggested.
My question now is what to do with the installed module? I need to install 5 or so modules to setup Amazon Cloudfront and don't know how to manage them once installed.
- Should I leave the 'lib' folder as is?
- can/should i install the other modules to the same folder or should they be placed somewhere else on my machine?
thanks in advance and again for this article!
mo
Re: A Guide to Installing Modules
by shagbark (Novice) on Mar 01, 2015 at 16:48 UTC
After 'make install', can I 'rm *' to remove the tarball and all its extracted files?
After 'make install', can I 'rm *' to remove the tarball and all its extracted files?
Yes. After make install has completed successfully it's safe to delete the installation files. If you'd like to be sure the module installed successfully, run a script that uses it, or just something like perl -MModule::Name -le 'print $Module::Name::VERSION'. (The only exception might be if you've done some customization in the source directory that you want to keep around.) Re: A Guide to Installing Modules by Anonymous Monk on Dec 11, 2011 at 22:58 UTC nmake not working for you? you probably need to use dmake, or make, it all depends on what perl -V:make prints You can override choice ($Config::Config{make}) of makefile ExtUtils::MakeMaker generates by using
perl Makefile.PL make=dmake
perl Makefile.PL make=nmake
[download]`
Re: A Guide to Installing Modules
by jesuashok (Curate) on Jan 24, 2007 at 02:42 UTC
Once your have extracted your tarball ....
Small correction needed.
Should be :-
Once you have extracted your tarball ....
You should have /msg'd that
You should have /msg'd that
Actually, it doesn't look like the author of the original node has been around since June 2005.
Log In?
Username: Password:
What's my password?
Create A New User
Node Status?
node history
Node Type: perltutorial [id://128077]
help
Chatterbox?
and the web crawler heard nothing...
How do I use this? | Other CB clients
Other Users?
Others scrutinizing the Monastery: (7)
As of 2016-08-28 14:11 GMT
Sections?
Information?
Find Nodes?
Leftovers?
Voting Booth?
The best thing I ever won in a lottery was:
Results (393 votes). Check out past polls.
|
2016-08-28 14:13:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32413339614868164, "perplexity": 4021.042441508371}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982939917.96/warc/CC-MAIN-20160823200859-00168-ip-10-153-172-175.ec2.internal.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-6-systems-of-equations-and-inequalities-cumulative-test-prep-multiple-choice-page-409/13
|
Algebra 1
Published by Prentice Hall
Chapter 6 - Systems of Equations and Inequalities - Cumulative Test Prep - Multiple Choice - Page 409: 13
A
Work Step by Step
$y=x+1$ $y=x+2$ The two lines have the same slope but different y-intercepts. The lines don't cross and thus have no solutions.
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
|
2020-04-02 00:53:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6252148747444153, "perplexity": 1269.5915768095008}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506477.26/warc/CC-MAIN-20200401223807-20200402013807-00410.warc.gz"}
|
http://codereview.stackexchange.com/questions/20820/use-and-understanding-of-async-await-in-net-4-5
|
# Use and understanding of async/await in .NET 4.5 +
I am just about to embark upon a massive job of multi-threading a cost-engine. I could use TPL of which I am very familiar, but would like to leverage the async/await keywords of .NET 4.5 to make life that little bit simpler.
Is my understanding of what is going on correct? How can I improve this design?
CancellationTokenSource cancelSource;
// Mark the event handler with async so you can use await in it.
private async void StartButton_Click(object sender, RoutedEventArgs e)
{
cancelSource = new CancellationTokenSource();
CancellationToken token = cancelSource.Token;
IProgress<ProgressInfo> progressInfo = new Progress<ProgressInfo>(ReportProgress);
await ProcessScript(progressInfo, token, uiScheduler);
// await sets up the following as an Contnuation to
// continue once returned from await.
}
// Class to report progress...
private void ReportProgress(ProgressInfo progressInfo)
{
this.progressBar.Value = progressInfo.PrecentComplete;
this.progressLabel.Content = progressInfo.Message;
}
CancellationToken token,
{
ProgressInfo pi = new ProgressInfo();
pi.PrecentComplete = 0;
pi.Message = "In script processor...";
progressInfo.Report(pi);
Thread.Sleep(5000); // This is UI Blocking...
string str = this.resultsTextBox.Text;
str = "We have added this => going into await...";
this.resultsTextBox.AppendText(str);
pi.PrecentComplete = 50;
progressInfo.Report(pi);
// awaits the long runniing task - non-UI blocking.
// The await above sets this up as a continuation on the UI thread.
pi.PrecentComplete = 95;
progressInfo.Report(pi);
this.resultsTextBox.AppendText("Where are we now??");
return;
}
{
{
this.resultsTextBox.AppendText("\n\nNow in 'LongRunning'!! Waiting 5s simulating HARD WORK!!");
}, CancellationToken.None,
uiScheduler);
// Simulate hard work.
{
this.resultsTextBox.AppendText("\n\n\nHARD WORK COMPLETE!!");
}, CancellationToken.None,
uiScheduler);
return true;
}
-
You may want to rephrase the question to better fit the code-review style described in the faq. There are currently a couple of close requests because questions like "Help me understand this code" are off topic here. "How can I improve this async/await code", for instance, would be a better fit. – codesparkle Jan 23 '13 at 14:04
I'm confused. The sight is called "CodeReview", I am asking someone to do exactly that, review my code. I will take you advice and make an edit; I hope it helps and thanks for your time... – Killercam Jan 23 '13 at 15:50
What's off topic is to ask us to explain the code to you. As long as you aren't asking that, its okay. – Winston Ewert Jan 23 '13 at 20:28
Yes, your understanding is correct. Note that you're not using CancellationToken provided to ProcessScript. If your operations are cancellable and/or you're using other asynchronous API that accepts CancellationTokens it's highly recommended to pass it through the code, otherwise just don't create the CancellationTokenSource.
Thread.Sleep(5000); // This is UI Blocking...
This is correct since async methods return control to the caller only when something is awaited.
// awaits the long runniing task - non-UI blocking.
Here we release the UI thread and running a (computing) task in parallel. The reason I've added "computing" is that you're manually spanning a new task using Task.Factory. In case of I/O-related task returned by .NET framework it won't actually represent a new thread as it can wait for external data on the same thread.
// The await above sets this up as a continuation on the UI thread.
Correct. See good article Await, SynchronizationContext, and Console Apps that describes in details the behavior of await.
// Allow access to the UI from this background thread from
You're passing TaskScheduler corresponding to UI thread so naturally the code will be safe to work with UI. I would recommend using IProgress<T> to update/inform UI about changes in long-running task though, since you may want to separate computation logic from UI-related code.
|
2014-11-01 14:01:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18819674849510193, "perplexity": 5034.55450339396}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637906909.58/warc/CC-MAIN-20141030025826-00065-ip-10-16-133-185.ec2.internal.warc.gz"}
|
https://gmatclub.com/forum/is-abc-at-least-288250.html
|
GMAT Question of the Day - Daily to your Mailbox; hard ones only
It is currently 20 Feb 2019, 00:35
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
## Events & Promotions
###### Events & Promotions in February
PrevNext
SuMoTuWeThFrSa
272829303112
3456789
10111213141516
17181920212223
242526272812
Open Detailed Calendar
• ### Free GMAT Prep Hour
February 20, 2019
February 20, 2019
08:00 PM EST
09:00 PM EST
Strategies and techniques for approaching featured GMAT topics. Wednesday, February 20th at 8 PM EST
February 21, 2019
February 21, 2019
10:00 PM PST
11:00 PM PST
Kick off your 2019 GMAT prep with a free 7-day boot camp that includes free online lessons, webinars, and a full GMAT course access. Limited for the first 99 registrants! Feb. 21st until the 27th.
# Is abc at least 4?
Author Message
TAGS:
### Hide Tags
GMATH Teacher
Status: GMATH founder
Joined: 12 Oct 2010
Posts: 750
Is abc at least 4? [#permalink]
### Show Tags
08 Feb 2019, 07:02
00:00
Difficulty:
65% (hard)
Question Stats:
46% (02:10) correct 54% (01:59) wrong based on 24 sessions
### HideShow timer Statistics
GMATH practice exercise (Quant Class 14)
Is $$abc\,\, \ge \,\,4$$ ?
$$\left( 1 \right)\,\,b + c \ge 2$$
$$\left( 2 \right)\,\,ab \ge ac \ge 4$$
_________________
Fabio Skilnik :: GMATH method creator (Math for the GMAT)
Our high-level "quant" preparation starts here: https://gmath.net
VP
Joined: 09 Mar 2018
Posts: 1000
Location: India
Is abc at least 4? [#permalink]
### Show Tags
08 Feb 2019, 07:21
fskilnik wrote:
GMATH practice exercise (Quant Class 14)
Is $$abc\,\, \ge \,\,4$$ ?
$$\left( 1 \right)\,\,b + c \ge 2$$
$$\left( 2 \right)\,\,ab \ge ac \ge 4$$
Key word: abc can be +ive or -ive numbers.
Even one can take fractions for satisfying the condition.
said that
from 1, we dont get anything about a, a can be +ive or -ive
from 2, we get two cases
C1 when a, b and c, are +ive 4*1 >= 4*1 >= 4
C2 when a,b and c are -ive -4*-1 > = -4 * -1 >=4
C3 when a,b and c are fractions 1/2*8 >= 1/2*8 >=4
So only when we combine, we get a unique value
as we cannot take b and c as -ive numbers
C
_________________
If you notice any discrepancy in my reasoning, please let me know. Lets improve together.
Quote which i can relate to.
Many of life's failures happen with people who do not realize how close they were to success when they gave up.
Senior Manager
Joined: 04 Aug 2010
Posts: 350
Schools: Dartmouth College
Re: Is abc at least 4? [#permalink]
### Show Tags
08 Feb 2019, 13:12
fskilnik wrote:
GMATH practice exercise (Quant Class 14)
Is $$abc\,\, \ge \,\,4$$ ?
$$\left( 1 \right)\,\,b + c \ge 2$$
$$\left( 2 \right)\,\,ab \ge ac \ge 4$$
Nice problem, Fabio!
Statement 1:
Case 1: a=4, b=1 and c=1, with the result that b+c≥2
In this case, abc = 4, so the answer to the question stem is YES.
Case 2: a=0, b=1 and c=1, with the result that b+c≥2
In this case, abc = 0, so the answer to the question stem is NO.
Since the answer is YES in Case 1 but NO in Case 2, INSUFFICIENT.
Statement 2:
Case 1: a=4, b=1 and c=1, with the result that ab=4 and ac=4
In this case, abc = 4, so the answer to the question stem is YES.
Case 2: a=-4, b=-1 and c=-1, with the result that ab=4 and ac=4
In this case, abc = -4, so the answer to the question stem is NO.
Since the answer is YES in Case 1 but NO in Case 2, INSUFFICIENT.
Statements combined:
ab ≥ ac ≥ 4 requires that a, b and c have the SAME SIGN.
Since b+c≥2, and b and c have the same sign, b and c must both be POSITIVE.
Implication:
a, b and c are ALL positive.
Thus:
ab ≥ ac
(ab)/a ≥ (ac)/a
b ≥ c
Adding together b ≥ c and b+c ≥ 2, we get:
b + b + c ≥ c + 2
2b ≥ 2
b ≥ 1
Inequalities constrained to positive values can be MULTIPLIED.
Multiplying b≥1 and ac≥4, we get:
abc ≥ 4
Thus, the answer to the question stem is YES.
SUFFICIENT.
_________________
GMAT and GRE Tutor
Over 1800 followers
GMATGuruNY@gmail.com
New York, NY
If you find one of my posts helpful, please take a moment to click on the "Kudos" icon.
Available for tutoring in NYC and long-distance.
GMATH Teacher
Status: GMATH founder
Joined: 12 Oct 2010
Posts: 750
Is abc at least 4? [#permalink]
### Show Tags
08 Feb 2019, 13:26
fskilnik wrote:
GMATH practice exercise (Quant Class 14)
Is $$abc\,\, \ge \,\,4$$ ?
$$\left( 1 \right)\,\,b + c \ge 2$$
$$\left( 2 \right)\,\,ab \ge ac \ge 4$$
Hi, Mitch! Thanks for the words and for your beautiful contribution!
$$abc\,\,\mathop \ge \limits^? \,\,4$$
$$\left( 1 \right)\,\,b + c\,\, \ge 2\,\,\,\,\,\left\{ \matrix{ \,{\rm{Take}}\,\,\left( {a,b,c} \right) = \left( {1,1,1} \right)\,\,\,\, \Rightarrow \,\,\,\left\langle {{\rm{NO}}} \right\rangle \,\, \hfill \cr \,{\rm{Take}}\,\,\left( {a,b,c} \right) = \left( {4,1,1} \right)\,\,\,\, \Rightarrow \,\,\,\left\langle {{\rm{YES}}} \right\rangle \,\, \hfill \cr} \right.$$
$$\left( 2 \right)\,\,ab \ge ac \ge 4\,\,\,\,\,\left\{ \matrix{ \,\left( {{\mathop{\rm Re}\nolimits} } \right){\rm{Take}}\,\,\left( {a,b,c} \right) = \left( {4,1,1} \right)\,\,\,\, \Rightarrow \,\,\,\left\langle {{\rm{YES}}} \right\rangle \,\, \hfill \cr \,{\rm{Take}}\,\,\left( {a,b,c} \right) = \left( { - 2, - 2, - 2} \right)\,\,\,\, \Rightarrow \,\,\,\left\langle {{\rm{NO}}} \right\rangle \,\, \hfill \cr} \right.$$
$$\left( {1 + 2} \right)\,\,a \ne 0\,\,\,\,\,\left( {ac \ne 0} \right)\,\,\,\,::\,\,\,\,\left\{ \matrix{ \,a < 0\,\,\,\, \Rightarrow \,\,\,\,b < 0\,\,\,\,\left( {ab > 0} \right)\,\,\,\,\,and\,\,\,\,\,c < 0\,\,\,\left( {ac > 0} \right)\,\,\,\,\mathop \Rightarrow \limits^{\left( 1 \right)} \,\,\,\,{\rm{impossible}} \hfill \cr \,a > 0\,\,\,\, \Rightarrow \,\,\,\,b > 0\,\,\,\,\left( {ab > 0} \right)\,\,\,\,\,and\,\,\,\,\,c > 0\,\,\,\left( {ac > 0} \right) \hfill \cr} \right.\,\,\,\,\,\, \Rightarrow \,\,\,\,\,a,b,c\,\,\, > 0\,\,\,\,\,\left( * \right)$$
$$\left. \matrix{ ab \ge 4\,\,\,\,\mathop \Rightarrow \limits^{ \cdot \,\,c\,\,\left( * \right)} \,\,\,abc \ge 4c\,\,\, \hfill \cr ac \ge 4\,\,\,\,\mathop \Rightarrow \limits^{ \cdot \,\,b\,\,\left( * \right)} \,\,\,abc \ge 4b \hfill \cr} \right\}\,\,\,\,\,\,\,\mathop \Rightarrow \limits^{\left( + \right)} \,\,\,\,\,\,\,2abc \ge 4\left( {b + c} \right)\,\,\,\,\,\,\,\mathop \Rightarrow \limits^{:\,2} \,\,\,\,\,\,\,abc \ge 2\left( {b + c} \right)\,\,\,\,\mathop \Rightarrow \limits^{\left( 1 \right)} \,\,\,\,\left\langle {{\rm{YES}}} \right\rangle$$
The correct answer is therefore (C).
We follow the notations and rationale taught in the GMATH method.
Regards,
Fabio.
_________________
Fabio Skilnik :: GMATH method creator (Math for the GMAT)
Our high-level "quant" preparation starts here: https://gmath.net
Is abc at least 4? [#permalink] 08 Feb 2019, 13:26
Display posts from previous: Sort by
|
2019-02-20 08:35:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.612817645072937, "perplexity": 10116.349896207985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247494485.54/warc/CC-MAIN-20190220065052-20190220091052-00291.warc.gz"}
|
https://www.physicsforums.com/threads/the-obama-tax-calculator.264712/
|
# The Obama Tax Calculator
Staff Emeritus
Gold Member
Estimate the change in your tax liability based on Obama's tax plan.
http://alchemytoday.com/obamataxcut/ [Broken]
Last edited by a moderator:
turbo
Gold Member
Could McCain have pulled off this unscripted, unprompted exposition of his tax policies? AND make the numbers add up? I think not.
Last edited by a moderator:
Moonbear
Staff Emeritus
Gold Member
I doubted the likelihood that it would be a reliable source (it sounded biased toward Obama), but lo and behold, it told me exactly what I suspected...not enough difference to care. It never matters who is in office, single, successful people always get screwed. Playing with the numbers, I put in double the household income and saw what the numbers would be if I was married...it's a HUGE tax cut under Obama's plan...even with no children. How is that fair? If I had a husband who made exactly the same as me, together, we'd stand to get nearly 10 times as much of a tax cut married than if we were single! That's just penalizing single people. What's worse, the tax break would be better for being married with no children than it would be to be single with a kid! I can understand tax breaks when you have dependent children you need to support off the same salary, but I don't understand them just for being lucky to meet someone to get married.
Evo
Mentor
I went to the other sites it referred to and they made no sense. They kept increasing my net income before taxes and there's a list of "assumptions" it says it's basing the figures on, none of which apply to me.
G01
Homework Helper
Gold Member
A single person with a grad student income of $25,000 (at best) will get$456.11 more than they would under John McCain.
Either way I'll still be poor, but that is a big difference.
Moonbear
Staff Emeritus
Gold Member
I went to the other sites it referred to and they made no sense. They kept increasing my net income before taxes and there's a list of "assumptions" it says it's basing the figures on, none of which apply to me.
I found that other site you're talking about. That one is terrible! It's assuming an automatically inflation-adjusted increase in income, as if everyone gets an automatic 3% increase every year, no more, no less. I think I know better than that site the duration of my contract and when I'm due for another raise.
Moonbear
Staff Emeritus
Gold Member
A single person with a grad student income of $25,000 (at best) will get$456.11 more than they would under John McCain.
Either way I'll still be poor, but that is a big difference.
At least until the Federal government has to cut education spending to cover the difference in tax income, and your tuition goes up again, and all the other things they cut in the budget are shifted to the state budgets, and state taxes go up.
Staff Emeritus
Gold Member
From the calculator page:
The Tax Policy Center, an independent, non-partisan group, has estimated how taxpayers' 2009 taxes will change under the next President. Answer a few simple questions to calculate the likely change in your tax bill in 2009:
The Tax Policy Center is a joint venture of the Urban Institute and Brookings Institution. The Center is made up of nationally recognized experts in tax, budget, and social policy who have served at the highest levels of government.
What We Do
TPC provides timely, accessible analysis and facts about tax policy to policymakers, journalists, citizens, and researchers. Its major products are
Model estimates: The TPC Microsimulation Model produces revenue and distribution estimates for the latest tax proposals and bills. More information about the tax model is available in the overview and FAQ.
Library: Research by TPC staff is disseminated in a variety of publications, including two TPC series - Issues and Options briefs and Discussion papers. The TPC also has regular columns in Tax Notes magazine.
Tax Facts: The Tax Facts database compiles facts and figures from government agencies and other sources.
Strange. This tax calculator shows a different amount...
http://taxcut.barackobama.com/ [Broken]
I wonder which one to trust?
Last edited by a moderator:
Evo
Mentor
Strange. This tax calculator shows a different amount...
http://taxcut.barackobama.com/ [Broken]
I wonder which one to trust?
I'm getting the same amount on this one.
Last edited by a moderator:
every four years the candidates running for president make all sorts of tax promises. When was the last time we saw the same tax breaks once they were elected?
I'm getting the same amount on this one.
The one I posted is showing more money from Obama and zero from McCain. The one Ivan posted at least showed McCains as ~5% of what Obamas is.
Staff Emeritus
Gold Member
I doubted the likelihood that it would be a reliable source (it sounded biased toward Obama), but lo and behold, it told me exactly what I suspected...not enough difference to care. It never matters who is in office, single, successful people always get screwed. Playing with the numbers, I put in double the household income and saw what the numbers would be if I was married...it's a HUGE tax cut under Obama's plan...even with no children. How is that fair? If I had a husband who made exactly the same as me, together, we'd stand to get nearly 10 times as much of a tax cut married than if we were single! That's just penalizing single people. What's worse, the tax break would be better for being married with no children than it would be to be single with a kid! I can understand tax breaks when you have dependent children you need to support off the same salary, but I don't understand them just for being lucky to meet someone to get married.
Ah, but anyone who is married will tell you that special compensation is entirely appropriate due to the demands that we endure, such as putting down the toilet seat.
Last edited:
Borek
Mentor
I wonder which one to trust?
None.
I haven't looked at neither, I am not involved.
Could McCain have pulled off this unscripted, unprompted exposition of his tax policies? AND make the numbers add up? I think not.
O my god! Joe The Plumber strikes again!
Last edited by a moderator:
Staff Emeritus
|
2021-10-17 06:46:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20505230128765106, "perplexity": 3184.6068338141095}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585121.30/warc/CC-MAIN-20211017052025-20211017082025-00580.warc.gz"}
|
http://davidlowryduda.com/tag/gauss-circle-problem/
|
# Tag Archives: gauss circle problem
## Update to Second Moments in the Generalized Gauss Circle Problem
Last year, my coauthors Tom Hulse, Chan Ieong Kuan, and Alex Walker posted a paper to the arXiv called “Second Moments in the Generalized Gauss Circle Problem”. I’ve briefly described its contents before.
This paper has been accepted and will appear in Forum of Mathematics: Sigma.
This is the first time I’ve submitted to the Forum of Mathematics, and I must say that this has been a very good journal experience. One interesting aspect about FoM: Sigma is that they are immediate (gold) open access, and they don’t release in issues. Instead, articles become available (for free) from them once the submission process is done. I was reviewing a publication-proof of the paper yesterday, and they appear to be very quick with regards to editing. Perhaps the paper will appear before the end of the year.
An updated version (the version from before the handling of proofs at the journal, so there will be a number of mostly aesthetic differences with the published version) of the paper will appear on the arXiv on Monday 10 December.1
## A new appendix has appeared
There is one major addition to the paper that didn’t appear in the original preprint. At one of the referee’s suggestions, Chan and I wrote an appendix. The major content of this appendix concerns a technical detail about Rankin-Selberg convolutions.
If $f$ and $g$ are weight $k$ cusp forms on $\mathrm{SL}(2, \mathbb{Z})$ with expansions $$f(z) = \sum_ {n \geq 1} a(n) e(nz), \quad g(z) = \sum_ {n \geq 1} b(n) e(nz),$$ then one can use a (real analytic) Eisenstein series $$E(s, z) = \sum_ {\gamma \in \mathrm{SL}(2, \mathbb{Z})_ \infty \backslash \mathrm{SL}(2, \mathbb{Q})} \mathrm{Im}(\gamma z)^s$$ to recognize the Rankin-Selberg $L$-function $$\label{RS} L(s, f \otimes g) := \zeta(s) \sum_ {n \geq 1} \frac{a(n)b(n)}{n^{s + k – 1}} = h(s) \langle f g y^k, E(s, z) \rangle,$$ where $h(s)$ is an easily-understandable function of $s$ and where $\langle \cdot, \cdot \rangle$ denotes the Petersson inner product.
When $f$ and $g$ are not cusp forms, or when $f$ and $g$ are modular with respect to a congruence subgroup of $\mathrm{SL}(2, \mathbb{Z})$, then there are adjustments that must be made to the typical construction of $L(s, f \otimes g)$.
When $f$ and $g$ are not cusp forms, then Zagier2 provided a way to recognize $L(s, f \otimes g)$ when $f$ and $g$ are modular on the full modular group $\mathrm{SL}(2, \mathbb{Z})$. And under certain conditions that he describes, he shows that one can still recognize $L(s, f \otimes g)$ as an inner product with an Eisenstein series as in \eqref{RS}.
In principle, his method of proof would apply for non-cuspidal forms defined on congruence subgroups, but in practice this becomes too annoying and bogged down with details to work with. Fortunately, in 2000, Gupta3 gave a different construction of $L(s, f \otimes g)$ that generalizes more readily to non-cuspidal forms on congruence subgroups. His construction is very convenient, and it shows that $L(s, f \otimes g)$ has all of the properties expected of it.
However Gupta does not show that there are certain conditions under which one can recognize $L(s, f \otimes g)$ as an inner product against an Eisenstein series.4 For this paper, we need to deal very explicitly and concretely with $L(s, \theta^2 \otimes \overline{\theta^2})$, which is formed from the modular form $\theta^2$, non-cuspidal on a congruence subgroup.
The Appendix to the paper can be thought of as an extension of Gupta’s paper: it uses Gupta’s ideas and techniques to prove a result analogous to \eqref{RS}. We then use this to get the explicit understanding necessary to tackle the Gauss Sphere problem.
There is more to this story. I’ll return to it in a later note.
## Other submission details for FoM: Sigma
I should say that there are many other revisions between the original preprint and the final one. These are mainly due to the extraordinary efforts of two Referees. One Referee was kind enough to give us approximately 10 pages of itemized suggestions and comments.
When I first opened these comments, I was a bit afraid. Having so many comments was daunting. But this Referee really took his or her time to point us in the right direction, and the resulting paper is vastly improved (and in many cases shortened, although the appendix has hidden the simplified arguments cut in length).
More broadly, the Referee acted as a sort of mentor with respect to my technical writing. I have a lot of opinions on technical writing,5 but this process changed and helped sharpen my ideas concerning good technical math writing.
I sometimes hear lots of negative aspects about peer review, but this particular pair of Referees turned the publication process into an opportunity to learn about good mathematical exposition — I didn’t expect this.
I was also surprised by the infrastructure that existed at the University of Warwick for handling a gold open access submission. As part of their open access funding, Forum of Math: Sigma has an author-pays model. Or rather, the author’s institution pays. It took essentially no time at all for Warwick to arrange the payment (about 500 pounds).
This is a not-inconsequential amount of money, but it is much less than the 1500 dollars that PLoS One uses. The comparison with PLoS One is perhaps apt. PLoS is older, and perhaps paved the way for modern gold open access journals like FoM. PLoS was started by group of established biologists and chemists, including a Nobel prize winner; FoM was started by a group of established mathematicians, including multiple Fields medalists.6
I will certainly consider Forum of Mathematics in the future.
Posted in Expository, Math.NT, Mathematics, Warwick | | Leave a comment
## “Second Moments in the Generalized Gauss Circle Problem” (with T. Hulse, C. Ieong Kuan, and A. Walker)
This is joint work with Thomas Hulse, Chan Ieong Kuan, and Alexander Walker. This is a natural successor to our previous work (see their announcements: one, two, three) concerning bounds and asymptotics for sums of coefficients of modular forms.
We now have a variety of results concerning the behavior of the partial sums
$$S_f(X) = \sum_{n \leq X} a(n)$$
where $f(z) = \sum_{n \geq 1} a(n) e(nz)$ is a GL(2) cuspform. The primary focus of our previous work was to understand the Dirichlet series
$$D(s, S_f \times S_f) = \sum_{n \geq 1} \frac{S_f(n)^2}{n^s}$$
completely, give its meromorphic continuation to the plane (this was the major topic of the first paper in the series), and to perform classical complex analysis on this object in order to describe the behavior of $S_f(n)$ and $S_f(n)^2$ (this was done in the first paper, and was the major topic of the second paper of the series). One motivation for studying this type of problem is that bounds for $S_f(n)$ are analogous to understanding the error term in lattice point discrepancy with circles.
That is, let $S_2(R)$ denote the number of lattice points in a circle of radius $\sqrt{R}$ centered at the origin. Then we expect that $S_2(R)$ is approximately the area of the circle, plus or minus some error term. We write this as
$$S_2(R) = \pi R + P_2(R),$$
where $P_2(R)$ is the error term. We refer to $P_2(R)$ as the “lattice point discrepancy” — it describes the discrepancy between the number of lattice points in the circle and the area of the circle. Determining the size of $P_2(R)$ is a very famous problem called the Gauss circle problem, and it has been studied for over 200 years. We believe that $P_2(R) = O(R^{1/4 + \epsilon})$, but that is not known to be true.
The Gauss circle problem can be cast in the language of modular forms. Let $\theta(z)$ denote the standard Jacobi theta series,
$$\theta(z) = \sum_{n \in \mathbb{Z}} e^{2\pi i n^2 z}.$$
Then
$$\theta^2(z) = 1 + \sum_{n \geq 1} r_2(n) e^{2\pi i n z},$$
where $r_2(n)$ denotes the number of representations of $n$ as a sum of $2$ (positive or negative) squares. The function $\theta^2(z)$ is a modular form of weight $1$ on $\Gamma_0(4)$, but it is not a cuspform. However, the sum
$$\sum_{n \leq R} r_2(n) = S_2(R),$$
and so the partial sums of the coefficients of $\theta^2(z)$ indicate the number of lattice points in the circle of radius $\sqrt R$. Thus $\theta^2(z)$ gives access to the Gauss circle problem.
More generally, one can consider the number of lattice points in a $k$-dimensional sphere of radius $\sqrt R$ centered at the origin, which should approximately be the volume of that sphere,
$$S_k(R) = \mathrm{Vol}(B(\sqrt R)) + P_k(R) = \sum_{n \leq R} r_k(n),$$
giving a $k$-dimensional lattice point discrepancy. For large dimension $k$, one should expect that the circle problem is sufficient to give good bounds and understanding of the size and error of $S_k(R)$. For $k \geq 5$, the true order of growth for $P_k(R)$ is known (up to constants).
Therefore it happens to be that the small (meaning 2 or 3) dimensional cases are both the most interesting, given our predilection for 2 and 3 dimensional geometry, and the most enigmatic. For a variety of reasons, the three dimensional case is very challenging to understand, and is perhaps even more enigmatic than the two dimensional case.
Strong evidence for the conjectured size of the lattice point discrepancy comes in the form of mean square estimates. By looking at the square, one doesn’t need to worry about oscillation from positive to negative values. And by averaging over many radii, one hopes to smooth out some of the individual bumps. These mean square estimates take the form
\begin{align} \int_0^X P_2(t)^2 dt &= C X^{3/2} + O(X \log^2 X) \\ \int_0^X P_3(t)^2 dt &= C’ X^2 \log X + O(X^2 (\sqrt{ \log X})). \end{align}
These indicate that the average size of $P_2(R)$ is $R^{1/4}$. and that the average size of $P_3(R)$ is $R^{1/2}$. In the two dimensional case, notice that the error term in the mean square asymptotic has pretty significant separation. It has essentially a $\sqrt X$ power-savings over the main term. But in the three dimensional case, there is no power separation. Even with significant averaging, we are only just capable of distinguishing a main term at all.
It is also interesting, but for more complicated reasons, that the main term in the three dimensional case has a log term within it. This is unique to the three dimensional case. But that is a description for another time.
In a paper that we recently posted to the arxiv, we show that the Dirichlet series
$$\sum_{n \geq 1} \frac{S_k(n)^2}{n^s}$$
and
$$\sum_{n \geq 1} \frac{P_k(n)^2}{n^s}$$
for $k \geq 3$ have understandable meromorphic continuation to the plane. Of particular interest is the $k = 3$ case, of course. We then investigate smoothed and unsmoothed mean square results. In particular, we prove a result stated following.
Theorem
\begin{align} \int_0^\infty P_k(t)^2 e^{-t/X} &= C_3 X^2 \log X + C_4 X^{5/2} \\ &\quad + C_kX^{k-1} + O(X^{k-2} \end{align}
In this statement, the term with $C_3$ only appears in dimension $3$, and the term with $C_4$ only appears in dimension $4$. This should really thought of as saying that we understand the Laplace transform of the square of the lattice point discrepancy as well as can be desired.
We are also able to improve the sharp second mean in the dimension 3 case, showing in particular the following.
Theorem
There exists $\lambda > 0$ such that
$$\int_0^X P_3(t)^2 dt = C X^2 \log X + D X^2 + O(X^{2 – \lambda}).$$
We do not actually compute what we might take $\lambda$ to be, but we believe (informally) that $\lambda$ can be taken as $1/5$.
The major themes behind these new results are already present in the first paper in the series. The new ingredient involves handling the behavior on non-cuspforms at the cusps on the analytic side, and handling the apparent main terms (int his case, the volume of the ball) on the combinatorial side.
There is an additional difficulty that arises in the dimension 2 case which makes it distinct. But soon I will describe a different forthcoming work in that case.
|
2019-08-18 14:15:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9018248915672302, "perplexity": 444.71382081101984}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313889.29/warc/CC-MAIN-20190818124516-20190818150516-00352.warc.gz"}
|
http://mymathforum.com/algebra/42362-solution-algebraic-equations.html
|
My Math Forum Solution for algebraic equations
Algebra Pre-Algebra and Basic Algebra Math Forum
March 26th, 2014, 08:34 PM #1 Senior Member Joined: Nov 2013 Posts: 137 Thanks: 1 Solution for algebraic equations I noticed that it's always possible to find the roots of polynomials of kind: $A\lambda^2 + B$ $A\lambda^3 + B$ $A\lambda^4 + B$ $A\lambda^5 + B$ So I thought to transform the quadratic, the cubic, the quartic and the quintic function in the forms above, look: $ax^3+bx^2+cx+d=0$ $(x=\lambda+u+v)$ $a(\lambda+u+v)^3+b(\lambda+u+v)^2+c(\lambda+u+v)+d=0$ $\\a \left ( \begin{matrix} \lambda^3 & +3\lambda^2u & +3\lambda u^2 & +u^3\\ +3\lambda^2v & +6\lambda u v & +3u^2 v & \\ +3\lambda v^2 & +3uv^2 & & \\ +v^3 & & & \\ \end{matrix} \right ) + b \left ( \begin{matrix} \lambda^2 & +2\lambda u & +u^2\\ +2\lambda v & +2 u v & \\ +v^2 & & \end{matrix} \right ) + c (\lambda + u + v) + d = 0$ $a\lambda^3+(3a(u+v)+b)\lambda^2+(3a(u+v)^2 + 2b(u+v) + c)\lambda + (a(u+v)^3 + b(u+v)^2 + c(u+v) + d)= 0$ So, comparing the equations: $A\lambda^3 + B= a\lambda^3+(3a(u+v)+b)\lambda^2+(3a(u+v)^2 + 2b(u+v) + c)\lambda + (a(u+v)^3 + b(u+v)^2 + c(u+v) + d) = 0$ implies that: $\begin{cases} A=a\\ 0=(3a(u+v)+b)\\ 0=(3a(u+v)^2 + 2b(u+v) + c)\\ B=(a(u+v)^3 + b(u+v)^2 + c(u+v) + d)\\ \end{cases}$ So, $A\lambda^3 + B= 0\;\;\;\Rightarrow\;\;\;\;\lambda = \sqrt[3]{\frac{-B}{A}} \;\;\;\Rightarrow\;\;\;\; x-u-v = \sqrt[3]{\frac{-B}{A}}\;\;\;\Rightarrow\;\;\;\; x = \sqrt[3]{\frac{-B}{A}} + (u+v)$ $\Rightarrow\;\;\;\; x= \sqrt[3]{\frac{-(a(u+v)^3 + b(u+v)^2 + c(u+v) + d)}{a}} + (u+v)$ Happens that $\begin{cases} 0=(3a(u+v)+b)\\ 0=(3a(u+v)^2 + 2b(u+v) + c)\\ \end{cases}\;\;\;\Rightarrow\;\;\;\;u+v = \frac{\pm \sqrt{9a^2-12ac+4b^2} + 3a - 2b}{6a}$ 1) Is this idea correct? 2) If yes, is it possible to solve equations of the 5th, 6th, 7th... degree with this process?
March 29th, 2014, 11:46 AM #2 Math Team Joined: Mar 2012 From: India, West Bengal Posts: 3,871 Thanks: 86 Math Focus: Number Theory I haven't looked at it closely for errors, seems almost obvious that this is false. I'll just directly check your final result for $x^3 + x^2 + x + 1= 0$ Let's compute $t= u + v$. This is $t= \frac{\pm 1 + 1}{6}$ If 0, $x= 0$ and this case is outright ruled out. Otherwise, $t= 1/3$ in which case $x= \sqrt[3]{-\frac{40}{27}} + \frac13 \approx 0.9033 + 0.9872i$ And this is clearly not a zero of the polynomial given!
Tags algebraic, equations, solution
Thread Tools Display Modes Linear Mode
Similar Threads Thread Thread Starter Forum Replies Last Post ZeusTheMunja Linear Algebra 20 January 25th, 2013 05:52 AM fzeropro Algebra 1 July 23rd, 2008 07:54 AM johnny Algebra 3 November 6th, 2007 04:35 PM johnny Algebra 2 August 21st, 2007 07:02 AM johnny Algebra 6 July 23rd, 2007 08:41 PM
Contact - Home - Forums - Cryptocurrency Forum - Top
|
2019-06-16 21:14:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 20, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6945745348930359, "perplexity": 1596.9161509964479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998298.91/warc/CC-MAIN-20190616202813-20190616224813-00238.warc.gz"}
|
http://www.logic.univie.ac.at/2004/Talk_05-18_a.html
|
# 2004 seminar talk: Strong Compactness and Stationary Sets
Talk held by John Krueger (KGRC) at the KGRC seminar on 2004-05-18.
### Abstract
I will show how to construct a model in which $\kappa$ is a strongly compact cardinal and the set $S(\kappa,\kappa^+) = \{ a \in P_\kappa \kappa^+ : \ot(a) = (a \cap \kappa)^+ \}$ is non-stationary.
|
2020-02-19 20:29:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8014624714851379, "perplexity": 1353.4157192323466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144167.31/warc/CC-MAIN-20200219184416-20200219214416-00530.warc.gz"}
|
https://physics.stackexchange.com/questions/547255/why-acceleration-is-limited/547257
|
# Why acceleration is limited?
In a frictionless environment, why doesn't an object move at infinite acceleration if force is applied on it?
Force causes movement, so unless there is an opposing force there shouldn't be any reason for the force to cause infinite acceleration.
• force causes change in movement. force doesn't cause movement – Ashwin Balaji Apr 28 '20 at 2:08
• You're confusing zero friction with zero inertia. – J.G. Apr 28 '20 at 9:40
Newton's Second Law states:
$$F=ma$$
So, for a finite (total) force, you get a finite acceleration. This doesn't impose a limit on your velocity, though. The longer you apply the force, the more your velocity increases. In classical mechanics, applying a force for an infinite time will result in infinite velocity. But classical mechanics doesn't hold at very high velocities. Instead, special relativity applies, which has a different relationship between force and acceleration that leads to the object's speed never passing the speed of light.
Because in a no friction environment an object obeys Newton's second law when a force is applied to it,
$$F=ma \tag{1},$$
the acceleration is decided by the mass of the object. When you apply a force to an object its mass decides how much it "resists" being moved by the force.
Newton's second law states that $$F=ma$$ for an object with constant mass. Unless the mass is zero, a finite force gives rise to a finite acceleration. If the mass is zero, then special relativity predicts that the object will travel at the speed of light.
Force does not cause changes in velocity; it causes changes in momentum. In Newtonian mechanics, momentum is proportional to velocity. In relativistic mechanics, it's not.
With light you could apply a force on a body, see radiation pressure on Wikipedia.
No other particles respectively bodies are moving faster than light, at least we haven't found any other.
Even if you can run at no more than 30 km/h, you can throw a ball with your arms at a higher speed. But such an addition of speeds is impossible for photons. Photons can neither push nor accelerate other photons, light cannot move faster than 300,000 km/h.
You might be confounding velocity and acceleration. If a particle is in a vacuum, then you have these scenarios:
1 - There's no force acting on the particle, or $$\frac{d}{dt}\vec v=\vec 0$$ which implies that $$\vec v$$ is constant (in direction and magnitude). Reminder: $$\vec 0$$ is also a constant vector, so the particle could be moving uniformly, or just standing in its place.
2 - There's constant net force acting on the particle, or $$\frac{d}{dt}\vec v=\vec a$$ where $$\vec a$$ is a constant vector (in both magnitude and direction). This means that velocity is going to increase infinitely since it is given by $$\vec v=\vec at+\vec v_0$$. You can see that $$v\to\infty$$ as $$t\to\infty$$.
3 - There's a variable (in either direction, magnitude, or both) net force acting on the particle, or $$\frac{d}{dt}\vec a=\vec a(t)$$. In this case, having the acceleration going to infinity depends on the equation that it is described by.
For example, if $$\vec a(t)=(t^3, t^2,t)$$, then it does indeed go to """$$\vec\infty$$""". However, if $$\vec a(t)=(\cos(\omega t),\sin(\omega t),0)$$, then although it is variable in direction, it's magnitude will always be the same.
As pointed out by other members, force describes the change in velocity (I assume constant mass), not acceleration.
|
2021-03-01 01:52:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 15, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7061359286308289, "perplexity": 350.58409062457184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361808.18/warc/CC-MAIN-20210228235852-20210301025852-00407.warc.gz"}
|
https://edusaksham.com/answers/CBSE-Class-10-Mathematics-Without-actually-performing-the-long-division.html
|
### Chapter 1: Real Numbers
Q
##### Real Numbers Solutions
Question:
Without actually performing the long division, state whether the following rational numbers will have a terminating decimal expansion or a non-terminating repeating decimal expansion:
(i) $\frac{13}{3125}$ (ii) $\frac{17}{8}$ (iii) $\frac{64}{455}$ (iv) $\frac{15}{1600}$ (v) $\frac{29}{343}$
(vi) $\frac{23}{2^35^2}$ (vii) $\frac{129}{2^25^77^5}$ (viii) $\frac{6}{15}$ (ix) $\frac{35}{50}$ (x) $\frac{77}{210}$
Solutions: We know that x = $\frac{p}{q}$ be a rational number, such that the prime factorization of q is of the form $2^{n}5^{m}$ , where n, m are non-negative integers. Then, x has a decimal expansion which terminates.
And x= $\frac{p}{q}$ be a rational number, such that the prime factorization of q is not of the form $2^{n}5^{m}$ , where n, m are non-negative integers. Then, x has a decimal expansion which non-terminating repeating.
(i) $\frac{13}{3125}$
If we factorize the denominator, we get
3125 = 5 × 5 × 5 × 5 × 5 = $5^{5}$
So, the denominator is in the form of $5^{5}$ so, $\frac{13}{3125}$ is terminating decimal expansion.
(ii) $\frac{17}{8}$
If we factorize the denominator, we get
8 = 2 × 2 × 2 = $2^{3}$
So, the denominator is in the form of $2^{3}$ so, $\frac{17}{8}$ is terminating decimal expansion.
(iii) $\frac{64}{455}$
If we factorize the denominator, we get
455 = 5 × 7 × 13
So, the denominator is not in the form of $2^{n}5^{m}$ so, $\frac{64}{455}$
is non-terminating repeating decimal expansion.
(iv) $\frac{15}{1600}$
If we factorize the denominator, we get
1600 = 2 × 2 × 2 × 2 × 2 × 2 × 5 × 5 = $2^{6}5^{2}$
So, the denominator is in the form of $2^{n}5^{m}$ . So, $\frac{15}{1600}$ is terminating decimal expansion.
(v) $\frac{29}{343}$
If we factorize the denominator, we get
343 = 7 × 7 × 7 = $7^{3}$
So, the denominator is not in the form of $2^{n}5^{m}$ . So, is non-terminating repeating decimal expansion.
(vi) $\frac{23}{2^35^2}$
Here, the denominator is in the form of $2^{n}5^{m}$ . So, $\frac{23}{2^35^2}$ is terminating decimal expansion.
(vii) $\frac{129}{2^25^77^5}$
Here, the denominator is not in the form of $2^{n}5^{m}$ only. So, $\frac{129}{2^25^77^5}$ is non-terminating repeating decimal expansion.
(viii) $\frac{6}{15}$
If we divide nominator and denominator both by 3 we get $\frac{3}{5}$
So, the denominator is in the form of $5^{m}$. So, $\frac{6}{15}$ is terminating decimal expansion.
(ix) $\frac{35}{50}$
If we divide nominator and denominator both by 5, we get $\frac{7}{10}$
If we factorize the denominator, we get
10 = 2 × 5
So, the denominator is in the form of $2^{n}5^{m}$ . So, $\frac{35}{50}$ is terminating decimal expansion.
(x) $\frac{77}{210}$
If we divide nominator and denominator both by 7, we get
If we factorize the denominator, we get
30 = 2 × 3 × 5
So, the denominator is not in the form of $2^{n}5^{m}$ . So, $\frac{77}{210}$ is non-terminating repeating decimal expansion.
#### VIDEO EXPLANATION
Related Questions for Study
CBSE Class 10 Study Material
|
2021-06-23 00:12:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 49, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9682045578956604, "perplexity": 1875.1745334048385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488525399.79/warc/CC-MAIN-20210622220817-20210623010817-00072.warc.gz"}
|
https://gateoverflow.in/8314/gate2015-1-44
|
2.4k views
Compute the value of:
$$\large \int_{\frac{1}{\pi}}^{\frac{2}{\pi}}\frac{\cos(1/x)}{x^{2}}dx$$
in Calculus
edited | 2.4k views
For the integrand $\frac{\cos(1/x)}{x^2}$, substitute $u = \frac1 x$ and $\def\d{\,\mathrm{d}} \d u = -\frac1{x^2}\d x$.
This gives a new lower bound $u = \frac1{1/\pi} = \pi$ and upper bound $u = \frac1{2/\pi} = \frac{\pi}{2}$. Now, our integral becomes:
$I= - \int\limits_\pi^{\pi/2} \cos(u) \d u$
$\;\;= \int\limits_{\pi/2}^\pi \cos(u)\d u$
Since the antiderivative of $\cos(u)$ is $\sin(u)$, applying the fundamental theorem of calculus, we get:
$I= \sin(u)\;\mid _{\pi/2}^\pi$
$\;\;= \sin(\pi) - \sin \left ( \frac\pi 2\right )$
$\;\;= 0 - 1$
$\;\; = {-1}$
by Active (1.6k points)
edited
\begin{align*} \large \int_{\frac{1}{\pi}}^{\frac{2}{\pi}}\frac{cos(1/x)}{x^{2}}dx \end{align*}
let, $\frac{1}{x} = t$ then, $\frac{-1}{x^2}dx = dt$
\begin{align*} \int_{\frac{1}{\pi}}^{\frac{2}{\pi}}\frac{cos(1/x)}{x^{2}}dx &=-\int_{\pi}^{\frac{\pi}{2}}cos(t)\ dt\\ &= -\Big( \sin t \Big)_{\pi}^{\pi/2}\\ &= -\Big( 1-0 \Big)\\ &= -1 \end{align*}
by Boss (30.8k points)
+3
But in exam they give incorrect on -1 as answer. Please improve that! I'm talking about GO test for this paper!
+1
$\int_{b}^{a}f(x)dx=-\int_{a}^{b}f(x)dx$
|
2020-01-22 09:05:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999853372573853, "perplexity": 3729.767554871488}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606872.19/warc/CC-MAIN-20200122071919-20200122100919-00180.warc.gz"}
|
http://mathhelpforum.com/number-theory/130687-if-n-even.html
|
# Math Help - if n is even...
1. ## if n is even...
for $n \in \mathbb{Z}$ prove that n is even $\iff \ n -2 [\frac{n}{2}] =0$
2. Originally Posted by flower3
for $n \in \mathbb{Z}$ prove that n is even $\iff \ n -2 [\frac{n}{2}] =0$
Really? $n=2z$ so that $\frac{n}{2}=z$ so that $\left\lfloor\frac{n}{2}\right\rfloor=z\implies2\le ft\lfloor\frac{n}{2}\right\rfloor=2z=n$
3. The other way:
$n-2\lfloor \frac{n}{2} \rfloor = 0 \iff n=2\lfloor \frac{n}{2} \rfloor \implies 2\mid n \implies n$ is even.
|
2014-09-16 18:50:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9467847943305969, "perplexity": 1295.3729408858662}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657118950.27/warc/CC-MAIN-20140914011158-00138-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
|
http://www.koreascience.or.kr/article/JAKO200910348030479.page
|
# 그뢰브너 기저와 지시함수와의 관계
• Published : 2009.11.30
#### Abstract
Many problems of confounding and identifiability for polynomial models in an experimental design can be solved using methods of algebraic geometry. The theory of $Gr\ddot{o}bner$ basis is used to characterize the design. In addition, a fractional factorial design can be uniquely represented by a polynomial indicator function. $Gr\ddot{o}bner$ bases and indicator functions are powerful computational tools to deal with ideals of fractions based on each different theoretical aspects. The problem posed here is to give how to move from one representation to the other. For a given fractional factorial design, the indicator function can be computed from the generating equations in the $Gr\ddot{o}bner$ basis. The theory is tested using some fractional factorial designs aided by a modern computational algebra package CoCoA.
#### References
1. Cox, D., Little, J. and O’Shea, D. (1992). Ideal, varieties, and algorithms, Spring-Verlag, New York.
2. Fontana, R., Pistone, G. and Rogantin, M. P. (2000). Classification of two-level factorial fractions. Journal of Statistical Planning and Inference, 87, 149-172. https://doi.org/10.1016/S0378-3758(99)00173-1
3. Park, D. K. and Kim, H. (2003). A New approach for selecting fractional factorial designs. Journal of the Korean Data & Information Science Society, 14, 707-714.
4. Pistone, G., Riccomagno, E. and Rogantin, M. P. (2006). Algebraic statistics method for DOE, Unpublished Manuscript.
5. Pistone, G. and Rogantin, M. P. (2008). Indicator function and complex coding for mixed fractional factorial designs. Journal of Statistical Planning and Inference, 138, 107-121.
6. Pistone, G. and Wynn, H. P. (1996). Generalized confounding with Gr¨obner bases. Biometrika, 83, 653-666. https://doi.org/10.1093/biomet/83.3.653
7. Plackett, R. L. and Burman, J. P. (1946). The design of optimum multifactorial experiments. Biometrika, 33, 305-325. https://doi.org/10.1093/biomet/33.4.305
8. Ye, K. Q. (2003). Indicator functions and its application in two-level factorial designs. Annals of Statistics, 31, 984-994 https://doi.org/10.1214/aos/1056562470
9. Zhang, R. and Park, D. K (2000). Optimal blocking of two-level fractional factorial designs. Journal of Statistical Planning and Inference, 91, 107-121. https://doi.org/10.1016/S0378-3758(00)00133-6
|
2021-03-03 05:47:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46066221594810486, "perplexity": 2642.453024823846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178365454.63/warc/CC-MAIN-20210303042832-20210303072832-00448.warc.gz"}
|
http://www.ask.com/science/fossil-fuels-non-renewable-163eb6d7cdbcaddb
|
Q:
# Why are fossil fuels non-renewable?
A:
Fossil fuels are non-renewable because they take millions of years to form. Fossil fuels make up most sources of non-renewable energy, and they were created millions of years ago as a result of marine creatures decaying under immense pressure and heat.
Coal, petroleum and natural gas are fossil fuels. They are ready-made fuels that are relatively inexpensive to extract. Carbon is the main element in fossil fuels, and fossil fuels are available in limited supplies.
Hundreds of millions of years ago, Earth was covered with shallow seas and swamps. The plants, algae and plankton that grew in the wetlands created energy through photosynthesis, and when they died, energy was still stored in them. As rocks and sediment accumulated on these organisms, high heat and pressure turned them into fossils. Reservoirs of these sources of non-renewable energy exist throughout the world.
Not all non-renewable energy sources are fossil fuels. Uranium ore is used as fuel in nuclear power plants. Uranium is classified as a non-renewable fuel even though it is not a fossil fuel.
Biomass can be considered renewable and non-renewable because biomass energy uses the energy found in plants. Once people were able to extract energy from fossilized organisms, fossil fuels replaced renewable energy sources like wood, wind, solar and water as the main sources of fuel.
## Similar Questions
• A:
Fossil fuels are not renewable resources because there is a finite amount of them on Earth. Renewable resources only include resources that do not run out, and examples include wind and solar energy, or even wood since trees can be replanted. Fossil fuels cannot be readily replenished.
Filed Under:
• A:
Fossil fuels are natural fuels made up of decomposed organic materials that are typically burned to produce steam to turn a turbine, which then produces AC power through a generator. Fossil Fuel energy plants burn either coal, oil or gas in specifically designed combustion chambers to produce the steam.
Filed Under:
• A:
Fossil fuels are formed the gradual accumulation of organic remains on the sea floor. As the accumulation rate increases, the organic remains are subjected to heat and pressure, which leads to fossil-fuel formation.
|
2015-01-30 18:36:51
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8745063543319702, "perplexity": 2739.6238004894726}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115926735.70/warc/CC-MAIN-20150124161206-00108-ip-10-180-212-252.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/statistics/166233-hypothesis-testing.html
|
1. Hypothesis Testing
A biologist discovers a colony of a previously unknown type of bird nesting in a cave. Out of the 16 chicks which hatch during his period of investigation, 13 are female. Test at the 5% significance level whether this supports the view that that the sex ratio for the chicks differs from 1.
Im really stuck i think ive developed the p:H1 and p:H0 but after that i am a bit stuck any help is much appreciated thankyou
2. Sounds like you should use a $\displaystyle \chi^2$ test here.
$\displaystyle H_0:$ Observed sex ratio can be described by the Expected sex ratio
$\displaystyle H_1:$ Observed sex ratio differs from the Expected sex ratio
Now you need to make a table
$\begin{array}{|c|c|c|}
& \text{Male} & \text{Female}\\ \hline
\text{Expected} & \dots & \dots \\ \text{Observed} & \dots & \dots \\ \chi^2_{calc} & \dots & \dots\\ \hline \end{array}$
Populate this table and find $\displaystyle \chi^2_{calc} = \Sum \frac{(\text{0bserved-Expected})^2}{\text{Expected}}$
If $\displaystyle \chi^2_{calc} > \chi^2_{crit}= \chi^2_{df,\alpha}$ reject $\displaystyle H_0$
3. I would test that the proportion of females is .5 versus not
Since n is small you cannot approximate with a normal and you will need to use the binomial distribution.
You can obtain the p-value with x=13
And n=16, p=.5.
|
2013-06-19 04:22:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6710244417190552, "perplexity": 1216.403517230556}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707187122/warc/CC-MAIN-20130516122627-00069-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://proxies123.com/tag/graph/
|
## sql server – Cannot run query to get deadlock graph in a timely fashion
I am trying to get deadlock information from a sql-server by using this query
``````select XEventData.XEvent.value('(data/value)(1)', 'varchar(max)') as DeadlockGraph
FROM
(select CAST(target_data as xml) as TargetData
from sys.dm_xe_session_targets st
join sys.dm_xe_sessions s on s.address = st.event_session_address
where name = 'system_health') AS Data
CROSS APPLY TargetData.nodes ('//RingBufferTarget/event') AS XEventData (XEvent)
where XEventData.XEvent.value('@name', 'varchar(4000)') = 'xml_deadlock_report'
``````
The query however takes forever and returns an empty result.
Why does it take that long time and is the deadlock information you get from this views retroactive so I will be able to pinpoint a deadlock that occured some time ago?
## algorithm – Puzzled by this interview problem: schedule a computation graph on a single-processor under a memory constraint
I recently went through a interview session for a SWE/CS role at a well known company.
It wasn’t specifically a “coding-round” but was titled a “domain interview” session, so I assumed the interviewer was interested in an discussion/high-level-solution related to the problem.
The problem:
You’re given a computation graph with nodes representing computations, edges representing dependence, and the values on edges representing the data-size in MB (input or output) at that edge.
You want to schedule this graph on a single-processor with limited memory `X` MB.
To execute a node/computation, both inputs and outputs data should fit within X memory of the processor.
It’s given for for each node sum of its inputs/outputs data <= X.
If at any time the data the total data that needs to be stored execeeds X, then there is extra cost per MB of fetching data from the disk. (it can happen with node n1 has produced an output that needs to be used by n2 but not all predecessors of n2 have run yet).
How would you schedule this graph?
My approach/What I discussed:
• I answered by saying that “in general” this problem is intractable(NP-complete) for multi-processor systems, and likely too for single-processor system due to the memory constraint.
• Then I suggested a greedy approach based on keeping track of all ‘ready’ nodes at a given time that can be executed as their predecessors have been executed.
• Which specific node to schedule next from several available/ready
ones can be done based on trying to minimize the cost associated with
memory constraint (i.e. seeking to schedule the one with the min cost
associated with data transfer).
• If there are several nodes with same minimum cost associated with
data transfer, the pick the node among them with the biggest
input+output mem requirement.
During the session, the interviewer didn’t suggest many things or gave any tips or direction on which directions he may have wanted me to lead to, so it turned out to be very open-ended.
He was almost neutral or very slight camera/tight-lipped smile taking notes most of the time.
I couldn’t gauge exactly what interviewer may have been looking for due to the intractableness of the problem — it’s hard to come up with a general optimal algo for such at the top of your head.
Later I came to know I received a ‘no’ decision for this session.
Issue:
This has me puzzled. I’ve been racking my brain on if I didn’t approach it correctly, or if the interviewer was expecting one to talk about something else that the things I mentioned.
Is there something optimal or direct I missed?
Could there be some specific algo that interviewer may have been looking for that generally one is expected to know?
During the session, I thought I was providing a reasonable solution given the constraints of the problem, and could have deep-dived into any direction that interviewer may have asked me to.
But didn’t get any push-back or suggestions from the interviewer during the session, i’m interested to know what i missed or how I should have approached this problem.
## graph theory – Centrality measures in a network with negative correlations
I have a bidirectional network where the weights of edges are based on partial correlation matrix. I have both positive and negative values as weights. Now, I want to compute centrality measures as degree, closeness, betweenness and eigenvector. How can I handle the negative values? Would I get correct values for these measures, if I keep the negatives? Should I use absolute value or take (1-absolute value)?
Basically, I am confused about if these values would affect the outcome in any way. I have not found any resources that would discuss this. Please recommend, if you know any.
## postgresql – How to turn a complicated graph into Json Graph Format?
So having such normalized Postgres 13/14 graph of item joins:
``````CREATE TABLE items (
item_id serial PRIMARY KEY,
title text
);
CREATE TABLE joins (
id serial PRIMARY KEY,
item_id int,
child_id int
);
INSERT INTO items (item_id,title) VALUES
(1,'PARENT'),
(2,'LEVEL 2'),
(3,'LEVEL 3.1'),
(4,'LEVEL 4.1'),
(5,'LEVEL 4.2'),
(6,'LEVEL 3.2');
INSERT INTO joins (item_id, child_id) VALUES
(1,2),
(2,3),
(3,2),
(3,5),
(2,6);
``````
How to turn it into a JSON Graph Format document containing item columns as fields?
## list manipulation – Efficient way to get all \$k\$-factors of a graph \$G\$?
In Graph factorization, a $$k$$-factor of a graph $$G$$ is a spanning $$k$$-regular subgraph which has the same vertex set as $$G$$. Now I have a random graph $$g$$ (i.e. complete graph for simplicity) and I want to obtain all the $$k$$-factors in a list.
My strategy is
1. get the edges and vertex of $$G$$: `getpairs` and `getvetex`
2. create all the subsets of `getpairs`: `allsublist`, which is equivalent to create all the possible subgraphs.
• check whether each elements (i.e. number) in every subset is repeated $$k$$ times (i.e. `terms`), which corresponds to check every subgraph is
$$k$$-regular graph.
• check each `terms` have the same vertex set `getvetex` of $$G$$
3. store the correct subsets (i.e. $$k$$-factors) in `selectlists`.
4. repeat steps 3-4 (namely the `for-loop`)
Here is the code for the above task:
``````n = 5;
g= CompleteGraph(n);
getpairs = EdgeList(g) /. UndirectedEdge -> List;
getvetex = VertexList(g);
selectlists = {};
kregular = 2;
allsublist = Subsets(getpairs);
For(ii = 1, ii <= Length(allsublist), ii++,
terms = Cases(Tally@Flatten@allsublist((ii)), {x_, kregular} :> x);
If(terms!={} && terms==DeleteDuplicates@Flatten@allsublist((ii)) && SameQ(SortBy(terms,Greater),getvetex),
AppendTo(selectlists, allsublist((ii))););
);
``````
For-Loop is quite inefficient and create all `Subsets` will also take long time when there are many edges in the graph with large `n`.
Let’s say `n` currently is small `n<=6`, can one get rid of the for-loop? Are there simple ways to do it? Thank you very much in advance!
## graph theory – Tour expansion with min-cost flow
Question:
has the problem of formulating optimal tour-expansion for Symmetric TSP’s already been mentioned as a means for faster tour-expansion in the sense of potentially intergrating more than one city in each iteration and doing that in an optimal way, i.e. getting “the most” out of each iteration?
Motivation for the question is that I may have found such formulation but can’t remember having seen anything in that respect.
## graph theory – Prove or disprove: \$p\$ is the shortest path from \$sin V\$ to \$tin V\$ with \$w’=w_{1}+w_{2}\$
I saw the following statement:
Let $$G=(V,E)$$ be a graph and two $$w_{1},w_{2},:,Etomathbb{R}$$ weight functions so there are no negative cycles in graph. Let $$p$$ be the shortest path from $$sin V$$ to $$tin V$$ with $$w_1$$ and $$w_2$$. Prove or disprove: $$p$$ is the shortest path from $$sin V$$ to $$tin V$$ with $$w’=w_{1}+w_{2}$$.
I could not disprove it and I believe this statement it true. But I could not prove it formally. How do I translate “$$p$$ be the shortest path from $$sin V$$ to $$tin V$$ with $$w_1$$ and $$w_2$$” into math?
## For any direct graph \$G(V,E)\$, there is always an iteration of DFS algorithm on \$G\$ so the result does not have any cross trees
Consider the following graph:
A possible DFS starting from $$a$$ visits the vertices in this order: $$langle a, b, c, d rangle$$ producing the cross-edge $$(c,b)$$.
A possible DFS starting from $$b$$ visits the vertices in this order: $$langle b, a, c, d rangle$$ producing the cross-edge $$(d,c)$$.
A possible DFS starting from $$c$$ visits the vertices in this order: $$langle c, a, b, d rangle$$ producing the cross-edge $$(d,b)$$.
A possible DFS starting from $$d$$ visits the vertices in this order: $$langle d, a, b, c rangle$$ producing the cross-edge $$(c,b)$$.
## Google Sheets: Line Graph of alternating columns. 1st Column as label
Month Notes January Notes February
Net Worth \$500 \$600
Liabilities \$50 \$40
Credit Cards \$10 \$20
Credit Card 1 \$5 \$10
Credit Cards 2 \$5 \$10
Savings & Checking \$400 \$500
Investments \$100 \$200
I have a google spreadsheet I use for tracking monthly financials that looks something like this. I use a new tab or sheet per year.
I was looking to add a Dashboard to the first tab that would display an individual line for each row.
Essentially I want the row and column labels to be the same I have here but translate the numerical values into a line graph.
I have attempted to select all of the cells with the respective data manually. I have tried selecting the ranges – but with my notes columns and there are detailed rows underneath each of the rows displayed here i.e. credit cards have a row each for each card, etc. So the rows and columns in the sheet are not exactly contiguous. I’m pretty sure I’ll have to manually click each item. I have also tried selecting a row with the column intended become that row’s label and the numeric values in that row as the plotted lines – but that doesn’t seem to work very well either. Especially if I try to add more than one row. to display multiple lines.
## Find the DFS solution for the following graph if the starting point is vertex 3 and traces all vertice
Find the DFS solution for the following graph if the starting point is vertex 3 and traces all vertice
|
2021-07-28 04:57:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 39, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4484681189060211, "perplexity": 1396.8262801228393}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153521.1/warc/CC-MAIN-20210728025548-20210728055548-00188.warc.gz"}
|
https://montemazuma.wordpress.com/2010/02/26/estimating-password-strength/
|
## Estimating Password Strength
This post was originally intended to be part of my guest series on cryptography at a friends blog, but I haven’t had word from him about it for a while. So, it’s mine now :P
The strength of a password is usually based on the idea that an attacker will have to guess the password by trying lots of different possibilities. Depending on the nature of the password this may be the kind of ‘lots’ that a computer can try very quickly, or it could be the kind of ‘lots’ that would take years.
### Entropy
To start with, let us assume that the attacker knows absolutely nothing about the password. Let us assume also that they are going to brute-force the password, either through necessity or by choice. This gives essentially the best possible case of password security.
The number of possibilities that the attacker will have to try in order to be guaranteed to find the password is related to the information entropy of the password, or more precisely the information entropy.
$H(X) = - \sum^n_{i=1}p_{x_i}\text{log}_2p_{x_i}$
The information entropy measures how close to random data the contents of a set are. In the context we will be using it, things are not too complicated. The set $X$ will be all the possible values of a byte (0-255 or 0x00-0xFF). The values $p_{x_i}$ are the probability of a particular value occurring.
### Best Case Scenario
To start with however we are going to leave the probability approximated, predicated on an assumption. Let us assume that the attacker only knows which classes of characters are used in the password. Take an example password: Hello!42. We would be assuming that the attacker knows that lower and upper case letters are use, numbers are used, and the character ‘!’ is used.
However, the attacker does not know how many times each of these characters is used in the password. This means that as far as the attacker knows, each possibility is equally likely. This means that each value in our set $X$ which falls within the classes we’ve just stated has a probability of $1/N$, $N$ being the number of characters in the classes. This allows us to make a simplification of the formula.
$H(X) = - \sum^N_{i=1} \frac{1}{N}\text{log}_2\left(\frac{1}{N}\right) = - \text{log}_2\frac{1}{N} = \text{log}_2 N$
This is vastly simpler. In our case $N = 26 + 26 + 10 + 1 = 63$. So the entropy of our character set is $\text{log}_2 63 \approx 5.977$. This may seem tiny, but this is the entropy of our character set, which should be roughly equivalent for the number of tries needed to guess each character. So to get the estimate of the strength of our password, we multiply by the length (8) to get 47.818 bits. So we can see that, even in this best case scenario, this password is a lot weaker than most encryption algorithms (hence why passwords are usually the weak link in a security system).
### Not All Characters Are Equal
However, another way of making a somewhat less ideal case estimate of the password strength is by directly using the entropy calculation. In this case we have to use the full formula, because $p_{x_i}$ will no longer be assumed to be a particular value and we will actually calculate it. This is simple to do, simply count the number of times the character occurs in the password, and then divide that by the length of the password.
However, introducing this into the calculation means that it’s more complicated to do, and so I wrote myself a python script to perform this calculation. The full script is available as one of the scripts in this one of my Github repositories. However the section which I will use for this is shown below:
def H_data(data, base = 2):
if not data:
return 0
entropy = float(0)
n = float(len(data))
for x in xrange(256):
p_x = float(data.count(chr(x))/n)
if p_x > 0:
entropy += - p_x*math.log(p_x, base)
return entropy
Running the above code on our example password gave an entropy of 2.750 bits, significantly less than our best case scenario. Multiply this by the length and you get a strength of only 22 bits. However, for the attacker to be able to break the password with this many tries he must know exactly which characters appear in the password, and how many times they appear – in other words the only thing he does not know is the order.
### Take a Chance
Thus far we have assumed that we will measure the strength of the password by the number of tries an attacker must make before he is guaranteed to find the password. So, using the first (and I suggest using this one usually) method, our password has a strength of 47.818 bits. This means the attacker must make $2^47.818 \approx 248114606917826$ guesses. This seems like a lot, although depending on various factors it could be just the blink of an eye for a computer. However, due to the laws of chance and probability, an attacker is more likely to have found the password than not found it as soon as he has searched half the possibilities. In other words, he will probably find it half way through.
This means he will probably have to make only half as many guesses. However, we need not deal with that large number, because of the rules of logs to base 2 etc. the end effect of this reduction of the number of tries is that the password strength goes down by one bit. In this case $47.818 - 1 = 46.818$. Nice and simple, n’est pas?
### Best Estimate
The first method of estimating password strength is usually used because it is assumed that the attacker is unlikely to learn the information required it make the second method valid without just finding out the whole password, in which case it will be useless to estimate how many tries he would need :P.
Thus, accounting for probability, our password is probably 46.818 bits strong. This is however far less strong than symmetric encryption algorithms which usually have 128 bit or 256 bit keys, so this password will still be a problem in security. Especially in the worst case scenario.
### From Bad to Worst
However, sometimes passwords can be even less secure. Other attacks can sometimes be used to guess a password, drawing on insecurities either of the password or the algorithms used to store it.
Dictionary attacks are used to crack weak passwords based on words. They shorten the number of possibilities by trying possibilities involving known words in English or another language first. They may also use additional algorithms to also crack passwords that are words with substitutions or additions. Our password would be vulnerable to a dictionary attack, because it is largely composed of the word ‘Hello’. Exactly how much quicker a dictionary attack would be is difficult to estimate, but it is thought to usually be a lot quicker.
Another method to speed the search can be used if passwords are stored in an insecure way. I won’t go into specifics in this post, but if a password is stored using a one way hash with a weak and or non-existent salt, a time-memory tradeoff technique can be used to reduce the time to crack passwords. The most common of these techniques is the use of Rainbow Tables. This form of password cracking may not seem all that bad seeing as using a one way hash without a salt is horrifically bad practice on the part of software developers.
However, guess who does it? That’s right, all Windows passwords are vulnerable to Rainbow Tables, meaning that specially made software such as Ophcrack can be used to crack most Windows passwords in a matter of seconds. Yet another reason to consider windows insecure.
### Windows Password Security
It is pointless trying to use a strong password for your windows login – as these passwords are vulnerable to Rainbow tables, any attacker will still be able to crack it well within an acceptable time-frame. This is one of the many reasons why I follow the following rule:
Any information that has been stored on a Windows computer is publicly accessible
This may be a bit paranoid and/or extreme, but at least this way I should stay as secure as possible :P.
This entry was posted in Uncategorized. Bookmark the permalink.
### 2 Responses to Estimating Password Strength
1. shor7.com says:
great article, thanks for sharing.
• Dark Otter says:
You’re welcome, thanks for the congrats.
|
2015-10-09 12:11:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 12, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5774152278900146, "perplexity": 494.04500860496205}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737929054.69/warc/CC-MAIN-20151001221849-00045-ip-10-137-6-227.ec2.internal.warc.gz"}
|
https://socratic.org/questions/a-crane-lifts-a-487-kg-beam-vertically-at-a-constant-velocity-if-the-crane-does-
|
# A crane lifts a 487 kg beam vertically at a constant velocity. If the crane does 5.20 x 10^4 of work on the beam find the vertical distance that it lifted the beam?
## I understand the equation that needs to be used to solve the problem. What don't I understand is how does the coefficient of gravity(g) which is 9.8m/s plays a role in this question. Can someone kindly explain this part for me?
Vertical distance $d = 10.9 \text{ }$meters
#### Explanation:
The only force involve in the crane is the weight $F$ of the beam
The force of the weight
$F = m g$
$F = 487 \text{ }$$k g \cdot \left(- 9.8 \text{ "m/sec^2}\right)$
$F = - 4772.6 \text{ }$Newtons (negative because weight is downward direction)
Force of the crane is equal to weight directed upward (positive)$= + 4772.6 \text{ }$Newtons
The work $W = F \cdot d \text{ }$where $d =$distance
$d = \frac{W}{F} = \frac{5.2 x {10}^{4}}{4772.6}$
$d = 10.89552864 \text{ }$meters
God bless....I hope the explanation is useful.
|
2019-05-19 11:16:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 11, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9886761903762817, "perplexity": 1707.9967515047256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232254751.58/warc/CC-MAIN-20190519101512-20190519123512-00146.warc.gz"}
|
https://plainmath.net/7542/solve-the-equation-and-find-the-exact-solution-log-base-log-base-base-equal
|
Question
# Solve the equation and find the exact solution: log base 2( log base 3( log base 4(x)))=0
Logarithms
Solve the equation and find the exact solution:
$$\displaystyle{\log{{b}}}{a}{s}{e}{2}{\left({\log{{b}}}{a}{s}{e}{3}{\left({\log{{b}}}{a}{s}{e}{4}{\left({x}\right)}\right)}\right)}={0}$$
2021-01-16
Start with $$\displaystyle{x}={4}^{{3}}={64},$$
log[4](64)=3,
log[3](3)=1,
log[2](1)=0,
so x=64 is a solution.
Another way: log[2](z)=0,
so z=1, log[3](y)=1,
so y=3, log[4](x)=3,
so $$\displaystyle{x}={4}^{{3}}={64}.$$
|
2021-09-24 19:15:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6119593977928162, "perplexity": 11373.27961420839}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057564.48/warc/CC-MAIN-20210924171348-20210924201348-00054.warc.gz"}
|
http://math.stackexchange.com/questions/66498/in-lotto-what-is-the-minimum-number-of-tickets-you-would-need-to-buy-to-guarante
|
In Lotto what is the minimum number of tickets you would need to buy to guarantee at least one 3 number match?
In Lotto (the UK lottery), you pick 6 numbers from a pool of 49. How many tickets would you need to guarantee at least one match of 3 numbers?
Wikipedia shows the probability of matching 3 numbers at 55:1. Does that mean if you buy 55 tickets you are guaranteed to get at least one match?
(P.S. I did search here and find another lottery question, possibly asking the same thing - but obviously didn't understand that.)
cheers
-
Should we add the [combinatorics] tag? – cardinal Sep 21 '11 at 23:39
@cardinal: At least that would be more appropriate than probability, since the question does not involve any probabilities. – TMM Sep 22 '11 at 0:20
Believe it or not, this is an open problem. Up to the present time, despite advanced mathematics and supercomputers, the precise minimum is unknown. It is known that the minimum value (called $L(49,6,6,3)$ in mathematical terminology) lies between $87$ and $163$. There is more information on this topic here.
Here is a set of $163$ ticket choices that works. Maybe a smaller set could be found to guarantee 3 winning numbers, but nobody knows this for sure.
|
2014-12-22 21:41:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5037762522697449, "perplexity": 386.2045177659433}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802776996.17/warc/CC-MAIN-20141217075256-00053-ip-10-231-17-201.ec2.internal.warc.gz"}
|
https://brilliant.org/discussions/thread/just-think-about-it-its-urgent/
|
×
Can 2 raise to the power something be equal to 0? If yes then at what value?
Note by Prabhav Bansal
1 year, 1 month ago
Sort by:
Can we say 0.0000000000000000000000000000000000000000000001 equal to zero. · 1 year, 1 month ago
This would only approach 1. Any negative power would be less than 1.
So technically, $$2^{-\infty} = 0$$. But then again, $$-\infty$$ isn't a number. · 1 year, 1 month ago
But still if we say that 2^-100000000000 it is zero. · 1 year, 1 month ago
No, it's still not absolute zero. · 1 year, 1 month ago
So, finally we conclude that anything raise to the power any integer is not zero. · 1 year, 1 month ago
0 raised to the power of any positive real is 0.
You need to qualify what "anything" refers to. Staff · 1 year, 1 month ago
No, that's the law of logarithm. · 1 year, 1 month ago
No. · 1 year, 1 month ago
No. Because 0 is not a number. · 1 year, 1 month ago
Because 0 is not a number.
Are you sure? xP
I'm guessing you mean infinity? · 1 year, 1 month ago
|
2016-10-22 23:50:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.550723135471344, "perplexity": 3574.935246182845}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719079.39/warc/CC-MAIN-20161020183839-00387-ip-10-171-6-4.ec2.internal.warc.gz"}
|
http://gasstationwithoutpumps.wordpress.com/category/circuits-course/
|
# Gas station without pumps
## 2013 September 17
### Broken soldering iron
Filed under: Circuits course — gasstationwithoutpumps @ 11:51
Tags:
I’ve previously recommended that students get a cheap soldering station like the one I have, and even recommended that the School of Engineering buy a dozen or so for use in the applied circuits lab.
My son recently found out why they are cheap: the ferrule that holds the tip in is not firmly mounted—it just has a friction fit, and after a while it comes loose and the tip falls out:
The soldering iron after the tip has come out.
A closeup of the ferrule and handle. Pushing it back in and recrimping the tube to hold it tighter seems to have no effect. One of the reviewers on Amazon recommended supergluing the ferrule in (that’s what they did when theirs failed).
It looks like I’ll be buying a new soldering iron soon. I’m undecided between getting a hot-air rework station with a soldering iron, or separate tools for the hot-air rework and for regular soldering. A combined tool is cheaper and takes less bench space, but I don’t often need the hot air, so a smaller soldering iron would be more convenient most of the time. Also, the cheap hot-air rework stations that include a vacuum pickup tool don’t usually have a soldering iron as well, though for $160 I can get an Aoyue 968A+ that does. If I get a new soldering iron, do I get another cheap one ($25) and regard it as disposable, or do I get a high-quality digital Weller iron for $145, or the intermediate Weller analog unit for$90?
## 2013 September 3
### Towards automatic measurement of conductivity of saline solution
I’ve been thinking about a more automatic way to measure the conductivity of a saline solution than what I reported in Better measurement of conductivity of saline solution and Conductivity of saline solution. The original lab is suitable for the circuits class, because it measures with sine waves and models the impedance of the electrodes, but it requires a sine-wave oscillator and an AC voltmeter that can handle high frequencies—neither of which makes for a low-cost device.
I was thinking that one could make a fairly simple device using the Freedom KL25Z board and a few extra components:
Bias circuit and load resistor for making a conductivity meter.
The idea is a simple one—instead of using a sine wave to drive the electrodes, use a square wave directly from the KL25Z. Connect the other electrode through a series resistor to a voltage centered between the two square-wave values, and use the 16-bit analog-to-digital converter to read the signal between the resistor and the electrodes just before changing the square-wave value.
With a low frequency square-wave, the electrodes will act like a resistor, but much of the resistance will come from the insulating film on the electrode, rather than the solution. At high frequencies, the capacitance of the insulating file will not have time to charge and discharge, and the resistance of the electrodes will depend mainly on the conductivity of the solution. At high enough frequencies, the output waveform will look like a triangle wave, rather than a square wave, and the amplitude of the signal will be proportional to $\frac{R_{3}}{R_{S}+R_{3}}$, where R3 is the pull-down resistor in the diagram, and RS is the resistance of the solution. That means that $R_{S} \propto R_{3}(1/A - 1)$, and the conductance can be computed as the inverse of resistance. The measured value depends, of course, on the size and spacing of the electrodes, so one would have to calibrate with a known conductivity solution to get the proper scaling.
I looked at the speed of the analog-to-digital converter on the KL25Z board, and they claim that they can get 16-bit conversion (though only with 12-bit accuracy, really) at 460k samples/sec—though I’ve not figured out the settings that really permit that. Higher accuracy is possible by averaging successive samples (which there is hardware support for), up to about 14.5 effective bits (averaging 32 differential samples, at a maximum rate of about 7.2kHz). By doing the averaging in software instead of hardware, we could run with a square-wave input up to about 90kHz (single-ended 16-bit samples at 180k samples/second seems to be fairly easy to set up). I think that is likely to be fast enough for all but the highest ionic concentrations, even using a very polarizable electrode like the 316L stainless steel ones we used for the Applied Circuits lab. One could check this by sweeping the frequency up, and seeing whether the estimate for RS converges.
I’ve not tried building and testing this idea yet, because the Arduino boards have too slow and too low-resolution A-to-D converters, and my son is hogging the Freedom KL25Z board for his light glove prototype. (I guess I need to get another copy of the board).
I don’t think I’ll be using this design in the Applied Circuits course (it is not suitable for teaching about modeling with linear components), but it might be a useful design for the freshman design seminar, or even for doing a titration lab in my son’s AP chem class. I understand that a standard lab is to titrate barium hydroxide with sulfuric acid, since the two reactants have conductive ions, but the barium sulfate precipitates out and the solution is essentially non-conductive when the two are perfectly balanced. The conductivity should form a nice “V” plot as sulfuric acid is added to a barium hydroxide solution—the units don’t even matter, since we just want to know what amount of sulfuric acid need to be added to attain the minimum, not what the conductivity is at the minimum.
To make a useful conductivity meter for something like AP chem, I’d need a much smaller probe that the pair of electrodes I used in the Applied Circuits class. I think I could make a decent probe out of a piece of stainless steel tubing and a piece of the 316L welding rod, if I could come up with a good way to hold them together concentrically, make sure they were always immersed to the same depth, and keep any wires to the rod and tube out of the solution. This might be a good problem for the freshman design seminar.
## 2013 August 14
### Service courses
Filed under: Circuits course — gasstationwithoutpumps @ 22:36
Tags: , , ,
Joe Redish, in his blog The Unabashed Academic: wrote a post On service courses, in which he talked about a physics course he teaches, recognizing that the primary audience is not physics majors:
In physics departments, a lot of the students we teach are not going to be physics majors. They are going to be engineers, chemists, computer scientists, biologists, and doctors. Everybody (that is, all physicists) agrees that physics is good for all future scientists since physics is the basis of all other sciences—at least that’s the way it seems to physicists.
He added that they wanted to take my course, despite the fact that they were biology majors and therefore it wasn’t of much relevance for them.
Well! Despite the fact that I had thought carefully about what might be useful for biologists in their future careers, and focused on developing deep scientific thinking skills, it suddenly became clear that I had failed in an important part of my goal. I had managed to teach some good knowledge and good thinking skills, but I had not made the connection for my students to the role of that knowledge or those skills in their future careers as biologists or medical professionals. The occasional problem I had included with a biological or medical context did not suffice.
I therefore propose we who are delivering service courses for other scientists—and I mean mathematicians, chemists, and computer scientists as well as physicists—ought to measure our success not just by the scientific knowledge and skills that our students demonstrate, but by their perception of their value to themselves as future professionals. We can tell ourselves, “Well, they’ll see later how useful all this is,” and they might, but that is really wishful thinking on our part. If our students see that what we provide is valuable now, they will maintain and build on what they have learned in our classes. Otherwise, it is likely that what we have taught will fade and our efforts will have been largely in vain.
I wish our faculty who taught service courses thought about their classes this way. All too often I hear from students that they don’t remember anything from the required science classes, and that the faculty who taught those courses did not care whether they learned anything or not—both students and faculty were just going through the motions without any real teaching or learning taking place.
I’ve never taught a large service course for students outside my department (though my department has changed, I’ve always focused on courses that were very directly related to the major, even when teaching lower-division courses like Applied Discrete Math). So I can’t speak from experience about teaching students who see no point to learning the content of the course—it must be tough.
About the closest I’ve come is in teaching tech writing, which I instituted as a requirement for computer engineering majors back in 1987. That course was not one students enjoyed much (there was a huge amount of writing, and a corresponding huge grading load), and many saw it as well outside their area of competence (and for some, it was). But even the tech writing course was carefully tailored for relevance to the engineers taking it. Every assignment I created was intended to develop skills that they could use as engineers and as students.
I’ve had people come up to me and tell me that they took the course from me 20 years ago (I rarely remember them), and that it was one of the most valuable courses they had in college—which is gratifying to hear, since few of them wanted to take it when they were students.
It is possible to make courses that seem outside the students’ interest relevant, but it takes some serious effort. I think I managed to do that with the Applied Circuits for Bioengineers course that I prototyped last Winter and will be teaching again this coming Spring. None of the students in the course were interested in bioelectronics—they had all put off the required circuits course as long as they could, because they were not interested in the material and had heard horror stories about how dry and difficult the EE course was. By the end of the quarter, several of them were excited about what they could do with electronics, and wishing they had been able to take the course much earlier—they might have chosen bioelectronics instead of biomolecular engineering as their concentration. The standard circuits course had squelched almost all interest in bioelectronics—only about 1 out of 20 or 30 bioengineering students had been choosing the bioelectronics concentration, and he was going on to do radio electronics for an MS degree, thanks to a particularly good lab instructor in EE.
It is never enough, even in a course for majors, to design the course around “they’ll need this later”. It is far better to make them want to know it now, for things that they can do now. For the Applied Circuits course, I concentrated ton the students doing design and construction in the labs, with just enough theory to do the design. This is a big contrast to the traditional circuits course, which is all theory and math which EE students will use “later”—totally useless if the students then never take another EE course.
This year I hope to replace the requirement for the EE circuits course in the bioengineering major with a requirement for the applied circuits course. Those who want to do bioelectronics will still have to take the EE circuits course, but they’ll go into it knowing half the material, and knowing what the theory is for, which should move the bioengineers from the bottom of the circuits course to the top.
I wish I had the capability to replace the chemistry and physics courses also, but I’m not aware of tenure-track faculty in either department who are interested in changing what and how they teach for students outside their own major. Note that for the circuits course I could not get the EE department to teach the course that was needed—I had to teach myself circuits and design the course myself (which took me about 6 months full time). And I was a lot closer to knowing circuits (from my experience in teaching digital logic and VLSI design) than I am to knowing chemistry (which the least serviceable service course that we require of bioengineers).
One thing that chemists and physicists could do to make their courses more useful and interesting to engineering students is to put design into the labs. Engineers want to make things, not just study them. Far too many of the freshman science labs are cookbook labs, where the students are just taught to follow carefully written instructions to make a series of measurements to get an answer to a question that they weren’t interested in to begin with. What a waste of precious lab space and time.
## 2013 August 13
### MPX2053DP pressure sensor being discontinued
Filed under: Circuits course — gasstationwithoutpumps @ 18:24
Tags: , ,
I just got notice today that Freescale is discontinuing the MPX2053DP pressure sensor that I used for the pressure sensor lab in the Applied Circuits course. DigiKey sent me an “end-of-life” notice with a “last time buy date” of 02/22/2014. It is service like this that makes me glad to buy from Digikey.
I don’t see any indication of the end of life status for the MPX2053DP on the Freescale website, which either means that Freescale is very poor at maintaining their website, or that Digikey has made a mistake here. I’m betting on incompetent web maintenance at Freescale.
I’m wondering whether I should order some spares (I still have 8 breakout boards). I have 12 that I soldered to breakout boards for the lab (which UCSC reimbursed me for), plus one that I made for myself. Eight spares would cost $110.56, and 10 spares$125.60 (plus tax and shipping in both cases). I should probably check with the lab manager to find out what their recommended policy is for spare parts on discontinued items.
It may not be that important, since the MPX2050DP is still available, and it has essentially the same specs (except somewhat better linearity) and is actually slightly cheaper. If we need more, we can get the MPX2050DP instead.
## 2013 July 31
### Microphone sensitivity exercise
Filed under: Circuits course — gasstationwithoutpumps @ 13:46
Tags: , , , ,
I’ve been thinking a bit about improving the microphone lab for the Applied Circuits course. Last year, I had the students measure DC current vs. voltage for an electret microphone and then look at the microphone outputs on the oscilloscope (see Mic modeling lab rethought). I still want to do those parts, but I’d like to add some more reading of the datasheet, so that students have a better understanding of how they will compute gain later in the quarter.
The idea for the change in this lab occurred to me after discussing the loudness detector that my son wanted for his summer engineering project. He needed to determine what gain to use to connect a silicon MEMS microphone (SPQ2410HR5H-PD) to an analog input pin of a KL25 chip. He wanted to use the full 16-bit range of the A-to-D, without much clipping at the highest sound levels. Each bit provides an extra 6.021dB of range, so the 16-bit ADC should have a 96.3dB dynamic range. The sound levels he is interested in are about 24dB to 120dB, so the gain needs to be set so that a 120dB sound pressure level corresponds to a full-scale signal.
He is running a 3.3v board, so his full-scale is 3.3v peak-to-peak, or 1.17v RMS (for a sine wave). That conversion relies on understanding the difference between RMS voltage and amplitude of a sine wave, and between amplitude and peak-to-peak voltage. The full-scale voltage is 20 log10(1.17), or about 1.3dB(V).
Microphone sensitivity is usually specified in dB (V/Pa), which is 20 log10 (RMS voltage) with a 1 pascal RMS pressure wave (usually at 1kHz). The microphone he plans to use is specified at –42±3dB (V/Pa), which is fairly typical of both silicon MEMS and electret microphones.The conversion between sound pressure levels and pascals is fairly simple: at 1kHz a 1Pa RMS pressure wave is a sound pressure level of about 94dB.
Scaling amplitude is equivalent to adding in the logarithmic scale of decibels, so for a sound pressure level of 120dB, the microphone output would be about 120–94–42±3=–16±3dB(V), but we want 1.3dB, so we need a gain of about 17.3dB, which would be about 7.4×. Using 10× (20dB gain) would limit his top sound pressure level to 117dB, and using 5× would allow him to go to 123dB.
One can do similar analysis to figure out how big a signal to expect at ordinary conversational sound pressure levels (around 60dB): 60–94–42=–76db(V). That corresponds to about a 160µV RMS or 450µV peak-to-peak signal.
I tried checking this with my electret mic, which is spec’ed at –44±2dB, so I should expect 60–94–44±2=–78±2dB, or 125µV RMS and 350µV peak-to-peak. Note that the spec sheet measures the sensitivity with a 2.2kΩ load and 3v power supply, but we can increase the sensitivity by increasing the load resistance. I’m seeing about a 1mV signal on my scope, so (given that I’m not measuring the loudness of my voice), that seems about right.
I’ll have to have students read about sound pressure level, loudness, and decibels for them to be able to understand how to read the spec sheet, so these calculations should be put between the microphone lab and the first amplifier lab. I’ll have them measure peak-to-peak amplitude for speech, and we’ll compare it (after the lab) with the spec sheet. This could be introduced as part of a bigger lesson on reading spec sheets—particularly how reading and understanding specs can save a lot of empirical testing.
Next Page »
|
2013-12-06 17:30:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49969804286956787, "perplexity": 1485.1701438826226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163052275/warc/CC-MAIN-20131204131732-00096-ip-10-33-133-15.ec2.internal.warc.gz"}
|
http://cpc.ihep.ac.cn/article/doi/10.1088/1674-1137/43/5/054110
|
# Jet shape modification at LHC energies by JEWEL
• Jet shape measurements are employed to explore the microscopic evolution mechanisms of parton-medium interaction in ultra-relativistic heavy-ion collisions. In this study, jet shape modifications are quantified in terms of the fragmentation function $F(z)$, relative momentum $p_{T}^{\rm rel}$, density of charged particles $\rho(r)$, jet angularity $girth$, jet momentum dispersion $p_{T}^{\rm disp}$, and $LeSub$ for proton-proton (pp) collisions at 0.9, 2.76, 5.02, 7, and 13 TeV, as well as for lead-lead collisions at 2.76 TeV and 5.02 TeV by JEWEL. A differential jet shape parameter $D_{girth}$ is proposed and studied at a smaller jet radius $r<0.3$. The results indicate that the medium has the dominant effect on jet shape modification, which also has a weak dependence on the center-of-mass energy. Jet fragmentation is enhanced significantly at very low $z<0.02$, and fragmented jet constituents are linearly spread to larger jet-radii for $p_{T}^{\rm rel}<1$. The waveform attenuation phenomena is observed in $p_{T}^{\rm rel}$, $girth$, and $D_{girth}$ distributions. The results obtained for $D_{girth}$from ${\rm pp}$ to ${\rm Pb+Pb}$, where the wave-like distribution in ${\rm pp}$ collision is ahead of ${\rm Pb+Pb}$ collisions at small jet-radii, indicates a strong medium effect.
• [1] X. N. Wang and M. Gyulassy, Phys. Rev. Lett., 68: 1480 (1992) doi: 10.1103/PhysRevLett.68.1480 [2] Y. Mehtar-Tani, J. G. Milhano, and K. Tywoniuk, Int. J. Mod. Phys. A, 28: 1340013 (2013) doi: 10.1142/S0217751X13400137 [3] G.-Y. Qin and X.-N. Wang, Int. J. Mod. Phys. E, 24: 1530014 (2015) [4] G. Aad et al (ATLAS Collaboration), JHEP, 09: 050 (2015) [5] G. Aad et al (ATLAS Collaboration), Phys. Rev. Lett., 105: 252303 (2010) doi: 10.1103/PhysRevLett.105.252303 [6] V. Khachatryan et al (CMS Collaboration), Phys. Rev. C, 84: 024906 (2011) doi: 10.1103/PhysRevC.84.024906 [7] J. Adam et al (ALICE Collaboration), Phys. Lett. B, B746: 1-14 (2015) [8] G. Aad et al (ATLAS collaboration), Phys. Lett. B, 719: 220-241 (2013) doi: 10.1016/j.physletb.2013.01.024 [9] G. Aad et al (ATLAS Collaboration), Phys. Lett. B, 739: 320-342 (2014) doi: 10.1016/j.physletb.2014.10.065 [10] G. Aad et al (ATLAS Collaboration), Eur. Phys. J. C, 77(6): 379 (2017) doi: 10.1140/epjc/s10052-017-4915-5 [11] V. Khachatryan et al (CMS Collaboration), JHEP, 11: 055 (2016) [12] Shreyasi Acharya et al (ALICE Collaboration) , JHEP, 10: 139 (2018) [13] Karen M. Burke et al (JET Collaboration), Phys. Rev. C, 90: 01490 (2014) [14] Carlos A. Salgado, Urs Achim Wiedemann, Phys. Rev. D , 68: 014008 (2013) [15] Nestor Armesto et al, Phys. Rev. C, 86: 064904 (2012) doi: 10.1103/PhysRevC.86.064904 [16] Ren-Zhuo Wan, Si-Yuan Wang, Shuang Li et al, Chinese Physics C, 42(11): 114001 (2018) doi: 10.1088/1674-1137/42/11/114001 [17] Korinna C. Zapp, Eur. Phys. J. C, 74: 2762 (2014) doi: 10.1140/epjc/s10052-014-2762-1 [18] J. M. Butterworth, A. R. Davison, M. Rubin, and G. P. Salam, Phys. Rev. Lett., 100: 242001 (2008) doi: 10.1103/PhysRevLett.100.242001 [19] A. J. Larkoski, S. Marzani, G. Soyez, and J. Thaler, JHEP, 05: 146 (2014) [20] G. Aad et al (ATLAS Collaboration), Eur. Phys. J. C, 71: 1795 (2011) doi: 10.1140/epjc/s10052-011-1795-y [21] Shreyasi Acharya et al (ALICE Collaboration), J. High Energ. Phys., 2019: 169 (2019) [22] M. Dasgupta, F. A. Dreyer et al, JHEP, 06: 057 (2016) [23] A. J. Larkoski, J. Thaler et al, JHEP, 11: 129 (2014) [24] M. Cacciari, G. P. Salam, G. Soyez, FastJet User Manual, Eur. Phys. J. C, 72: 1896 (2012) doi: 10.1140/epjc/s10052-012-1896-2
Figures(7)
Get Citation
Ren-Zhuo Wan, Lei Ding, Xi Gui, Fan Yang, Shuang Li and Dai-Cui Zhou. Jet shape modification at LHC energies by JEWEL[J]. Chinese Physics C, 2019, 43(5): 054110. doi: 10.1088/1674-1137/43/5/054110
Ren-Zhuo Wan, Lei Ding, Xi Gui, Fan Yang, Shuang Li and Dai-Cui Zhou. Jet shape modification at LHC energies by JEWEL[J]. Chinese Physics C, 2019, 43(5): 054110.
Milestone
Revised: 2019-02-24
Article Metric
Article Views(983)
Cited by(0)
Policy on re-use
To reuse of Open Access content published by CPC, for content published under the terms of the Creative Commons Attribution 3.0 license (“CC CY”), the users don’t need to request permission to copy, distribute and display the final published version of the article and to create derivative works, subject to appropriate attribution.
###### 通讯作者: 陈斌, bchen63@163.com
• 1.
沈阳化工大学材料科学与工程学院 沈阳 110142
Title:
Email:
## Jet shape modification at LHC energies by JEWEL
###### Corresponding author: Dai-Cui Zhou, dczhou@mail.ccnu.edu.cn
• 1. Nano Optical Material and Storage Device Research Center, School of Electronic and Electrical Engineering, Wuhan Textile University, Wuhan 430200, China
• 2. College of Science, China Three Gorges University, Yichang 443002, China
• 3. Key Laboratory of Quark and Lepton Physics (MOE) and Institute of Particle Physics, Central China Normal University, Wuhan 430079, China
Abstract: Jet shape measurements are employed to explore the microscopic evolution mechanisms of parton-medium interaction in ultra-relativistic heavy-ion collisions. In this study, jet shape modifications are quantified in terms of the fragmentation function $F(z)$, relative momentum $p_{T}^{\rm rel}$, density of charged particles $\rho(r)$, jet angularity $girth$, jet momentum dispersion $p_{T}^{\rm disp}$, and $LeSub$ for proton-proton (pp) collisions at 0.9, 2.76, 5.02, 7, and 13 TeV, as well as for lead-lead collisions at 2.76 TeV and 5.02 TeV by JEWEL. A differential jet shape parameter $D_{girth}$ is proposed and studied at a smaller jet radius $r<0.3$. The results indicate that the medium has the dominant effect on jet shape modification, which also has a weak dependence on the center-of-mass energy. Jet fragmentation is enhanced significantly at very low $z<0.02$, and fragmented jet constituents are linearly spread to larger jet-radii for $p_{T}^{\rm rel}<1$. The waveform attenuation phenomena is observed in $p_{T}^{\rm rel}$, $girth$, and $D_{girth}$ distributions. The results obtained for $D_{girth}$from ${\rm pp}$ to ${\rm Pb+Pb}$, where the wave-like distribution in ${\rm pp}$ collision is ahead of ${\rm Pb+Pb}$ collisions at small jet-radii, indicates a strong medium effect.
Reference (24)
/
|
2021-02-28 12:35:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.447731614112854, "perplexity": 9074.93860867994}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178360853.31/warc/CC-MAIN-20210228115201-20210228145201-00205.warc.gz"}
|
https://www.wufi-forum.com/viewtopic.php?p=4957&
|
## Simulating attic ventilation
Everything that didn't fit in any other topic
MWE_MOE
WUFI User
Posts: 1
Joined: Sat May 04, 2019 1:45 am -1100
### Simulating attic ventilation
I am trying to simulate attic ventilation i af pitched roof.
I have seen examples where the airspace between the truss and insulation has been simulated using 30 mm standard airlayer, 130 mm airlayer without additional moisture with 30 ACH, 30 mm standard airlayer.
Is this the correct way to simulate an atticspace?
Christian Bludau
WUFI SupportTeam IBP
Posts: 838
Joined: Tue Jul 04, 2006 10:08 pm -1100
Location: IBP Holzkirchen, the home of WUFI
Contact:
### Re: Simulating attic ventilation
Hi MWE_MOE,
in my eyes, an attic space can not be realistically represented by an air layer. Please keep in mind, that the bigger the airspace is, the more convection will occur. And that can not be expressed by using a source.
It would be better to handle the attic space as a separate room with its own climate, so as a boundary condition in WUFI Pro and not as a layer.
In WUFI the air layers contain effective values for the permeability and the thermal conductivity which include the convection and radiation in the layer. So they always have to be used in thickness given in the database. It is not possible to add air layers with same or different thickness to get a higher thickness. So the example you mention is wrong, if done with WUFI.
Please see also the material information text of the air layers.
Christian
|
2019-09-21 05:35:13
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8103808760643005, "perplexity": 2417.000136463084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574265.76/warc/CC-MAIN-20190921043014-20190921065014-00205.warc.gz"}
|
http://finalfantasy.wikia.com/wiki/Doom_(status)
|
## FANDOM
34,581 Pages
Doom (死の宣告, Shi no Senkoku?), also known as Death Sentence, Condemned, or Count, is a status ailment that appears in most titles in the series. It places a counter over the target's head, and when it reaches zero, the target dies. In some games the speed at which the counter drops to zero depends on the target's speed. The status is most commonly applied by the spell or ability of the same name. Commonly, the only way for a party member to be rid of Doom is to either die or win the battle before the counter hits zero. Targets immune to Instant Death are usually immune to this status ailment.
Doom can commonly be learned as Blue Magic and is often automatically inflicted by the Cursed Ring. Exactly how long the Doom timer lasts varies, but in many games it lasts 60 seconds. In games whose battle system are based on rounds rather than real time, Doom commonly has the target die after three rounds.
A similar status condition is Gradual Petrify.
## Appearances Edit
### Final Fantasy IV Edit
Doom can only be applied to the party, and through the spell of the same name. The enemy Ahriman and the boss Plague Horror are the only ones to have access to this ability. The counter will always start at 10, and if a character under Doom is hit with the Doom spell again, the counter will reset back to 10. Haste and Slow have absolutely no effect on Doom, except in as much as they allow the player to take more or fewer actions before the countdown reaches 0.
Up to two enemies can be inflicted with the Doom status by design as the Doom spell is reflectable. In the SNES version, a character has to take a turn before the KO status is inflicted on an enemy.
The Doom bug affects the original Japanese version, which allows the player to easily survive Doom even when afflicted.
#### Final Fantasy IV: The After Years Edit
The Doom spell is the only method of applying the status to enemies or allies, and is used by the enemies Ahriman, Blood Eye, Curse Dragon, and Dark Knight.
### Final Fantasy V Edit
The Doom status can only be applied either through the Blue Magic spell and Oracle ability Doom, the enemy ability Grand Cross, or through the accessory Cursed Ring. It cannot be removed from the player and starts the counter at 30 seconds, 15 seconds for characters with Haste.
### Final Fantasy VI Edit
Doom is commonly applied through the Lore spell of the same name, though various other means can be applied to characters, such as by equipping the Cursed Shield. The spells Haste and Slow affects the countdown timer of Doom, Haste increases the countdown while Slow reduces the count. If the value for the formula is ever less than 20, the timer will always be set to 20. The timer will decrease by one every two seconds.
The timer for Doom varies each use, as shown in the following formula:
$Timer = 79 - randomnumber(level, (2 * level) - 1)$
If the doomed character uses Jump, the countdown ends while they are in the air, and the battle ends before the Jump attack is performed, it will result in the 0 HP character bug.
### Final Fantasy VII Edit
Doom is applied only through the Enemy Skill Death Sentence, which will automatically set the timer to 60 seconds (30 if the target is in Haste). The accessory Cursed Ring applies Doom to the character equipped with it, and will not reset itself if the character is KO'd and then revived. In the Battle Square, Death Sentence will not carry on to the next battle even if not healed.
### Final Fantasy VIII Edit
Doom is a rare status, inflicted through the use of the Doom ability learned by Shiva, and lasts for around 16 seconds, despite the counter starting at 8. Doomtrain, when summoned, will inflict the enemy with a plethora of negative statuses including Doom. Inflicting the enemy with stop or sleep will halt the countdown temporarily.
### Final Fantasy IX Edit
Doom's countdown starts at 10, and kills the target when it hits zero. Stop will halt the countdown. If Zidane is condemned and the player uses Flee the moment he is going to die from it, he'll still use the ability even though he is KO, sending the other members to safety.
Game Element Type Effect
Countdown Flair Inflicts Doom at 50% accuracy.
Doom Blue Magic Inflicts Doom.
Doom Enemy Attack Inflicts Doom.
### Final Fantasy X Edit
Doom can be applied to either character or enemies through the spell of the same name. With the Ability Use, Doom can be inflicted via the item Candle of Life. For player characters, only Kimahri can learn it as a Ronso Rage. The counter will decrement by one whenever it is the victim's turn. For the player party members, the counter will always start at 5, however, the starting figure differs for enemies.
#### Final Fantasy X-2 Edit
The Doom spell is available for Dark Knight dressphere. Both the spell Doom and the enemy abilities Harbinger (used by Lucil), and Tick Tick Boom! (used by Volcano), can inflict the status. When inflicted on the party, the counter will always start at 3.
#### Final Fantasy X-2: Last Mission Edit
Doom creates a countdown above the character's head that may decrease when an action is taken, when it reaches 0, the character is knocked out. Some fiends inflict the status with the spell of the same name. The player can inflict it through the Dark Knight's Doom ability; it costs 5 HP (from the Freelancer's HP) to cast. Most enemies, including all bosses apart from the final boss and the Founder, are susceptible to Doom, but enemies killed by the status effect do not yield EXP.
The auto-ability Doomproof (equip Samurai, Dark Knight, and Trainer dresspheres) makes the character immune to the status. The status is removed if the player takes the elevator to the next floor, and the item Hope: A Memoir will restore the equipped dressphere if it is destroyed by Doom.
### Final Fantasy XI Edit
When a player or monster is Doomed, they receive a counter that counts down from 10 down to 0; at zero, they die. Except in the case of certain Yagudo Notorious Monsters in Dynamis, Doom does not wear off automatically when the monster inflicting it was killed. Doom is considered to be a special form of Curse and has a chance of being removed via effects that remove the Curse status such as the Cursna spell or Holy Water; however, the chance is rather low, so it may take multiple tries and a little luck to remove the Doom before the timer ends.
The chance of Doom being removed by Cursna is influenced by the caster's Healing Magic Magic Skill, any items with the ability 'Enhances "Cursna"' that the caster has equipped (Ephedra Ring, Haoma's Ring, Malison Medallion, Debilis Medallion, Hieros Mittens), and any items with the 'Enhances effect of "Cursna" received' ability that the target is wearing (Saida Ring, Eshmun's Ring).
Hallowed Water, an form of Holy Water enhanced with anima, has a higher chance to remove Doom than Holy Water, but is rarely made or used by players due to the expense. Ordinary Holy Water may be given a higher chance to remove Doom by equipping an item with the ability 'Enhances "Holy Water" Effect' (Blenmot's Ring, Blenmot's Ring +1).
In Abyssea, it is possible to prevent Doom entirely by acquiring and using the temporary item Doom Screen, which is 100% effective and lasts for 2 minutes.
The Doom status may be inflicted by certain monsters, most commonly by a TP-based attack.
Monsters that may inflict Doom are:
• Taurus family:
• Vampyr family: Eternal Damnation (gaze attack)
• Lamia family:
• Certain NMs: Grim Reaper
• Lamia No.3: additional effect from her melee attacks.
• Yagudo family:
• NMs in Dynamis: Doom
• Crystal War era: Dark Invocation
• Dee Xalmo the Grim: additional effect from his melee attacks.
• Certain Mandragora-family, Adenium-subfamily NMs: Fatal Scream
• Shinryu: Supernova
• Certain Naraka family NMs: Yama's Judgement, which inflicts a special form of Doom whose doom counter starts at 5 instead of 10.
The Tarutaru general Zolku-Azolku may use a unique Weapon Skill, Death Knell, when at HP Critical HP and fighting in Campaign. Death Knell inflicts Doom on monsters in an area effect and may kill even mighty Beastmen generals.
Blue Mages may also learn Mortal Ray; however, the ability is severely weakened compared to when monsters use it. It will take 60 seconds to kill a monster, has a high chance of resist, and seems to wear off if the caster moves more than 10 yalms away. The spell is primarily seen as a toy, or something to set to gain the Dual Wield trait.
### Final Fantasy XII Edit
The character is doomed and will be KO'd when the count reaches 0. Remove with a Remedy. (Requires the proper license)
The Doom status can be inflicted by the Time Magick Countdown, Doom Mace weapon, and can also be used by enemies. When afflicted, the unit will be killed after 10 seconds, unless cured by a Remedy if the user has learned Remedy Lore 3.
Stop halts the countdown and Doom and Petrify cannot coexist on the same character; which ever status the character has first, they are immune to the other. The only exception is using Remedy with Nihopalaoa, which inflicts both Doom and Petrify.
### Final Fantasy XIII Edit
In a battle against an Eidolon, Doom is cast on the player at the beginning of the fight. The timer is set to 1800 on normal battle speed, and 3214 on slow battle speed, and decreases about 10 every second. Some bosses will also inflict it on the party leader, if the player takes over 20 minutes to defeat them. There is no way to prevent Doom or remove it after it has been inflicted: even summoning an Eidolon will only temporarily halt it while the Eidolon is present, and the countdown continues when it is dismissed.
Doom is also cast by Orphan's final form, but allows the player more time, the timer starting at 4800.
Either way, the game ends once the timer reach zero.
#### Final Fantasy XIII-2 Edit
Doom is applied by only two powerful enemies: Long Gui, which targets the party leader, and Gorgyra, which targets the whole party. It will KO the affected character when the timer runs out.
If a character dies before Doom triggers, Doom will be dispelled. If the player thereafter revives them, that character will survive Doom.
### Final Fantasy XIV Edit
Doom is a status that knocks out the victim within a certain amount of time (ten seconds in the Sunken Temple of Qarn, twenty seconds in the Wanderer's Palace (Hard), and five seconds in the World of Darkness) of receiving it.
In Patch 2.00, the Teratotaur from the Sunken Temple of Qarn can inflict Doom. In this battle, the only way to safely remove Doom is to stand on one of the three tiles situated on the battlefield while it is glowing for about one second.
As of Patch 2.5, in Wanderer's Palace (Hard), Manxome Molaa Ja Ja will use Soul Douse on a random player except the Main Tank and inflict Doom on said player. To get rid of it, the healer must completely heal the affected player with Doom before the timer hits zero.
In the World of Darkness, Angra Mainyu will use Mortal Gaze, which will inflict Doom on players who look directly at the boss. To get rid of the status, the player must stand on the glowing pad.
### Final Fantasy Tactics Edit
Doom can be inflicted by various abilities, including those learned from the regular job classes such as Monk and Orator. Some bosses and monsters can doom the target as well. When doomed, a red counting bubble starting at three will appear upon the unit, same counter that appears when the unit is KO'd; the unit will be KO'd when reached the fourth active turn. Doom status can be negated by Reraise or equipment which protects from instant death when the doom counter reaches zero. If the unit is undead and the counter reaches zero, the Doom effect instead is lifted. In all these cases the unit will lose its turn when the effect is lifted.
Chocobo's Choco Esuna and White Staff can remove it. The game's A.I. will generally ignore the doomed enemy completely, as if transparent. That means when all of the allies are doomed, the enemy will do nothing or run away. It will stop ignoring a doomed unit, however, if the unit is charging a spell or an AI unit is at low health and needs to heal via a draining move.
### Final Fantasy Tactics Advance Edit
Doom can be inflicted by various abilities. moogle Gadgeteers can learn the ability Black Ingot from the Death Claws, and it costs 200 AP to master; it will cause the Doom status on either all allies, or all enemies, depending on the coin flip. Snipers also have an ability that allows them to inflict Doom, Death Sickle, learned from the Hades Bow, that costs 300 AP to master.
The last ability the player can use to inflict Doom is the Assassin attack Nightmare, which can cause the target to sleep and/or cause Doom; it can be learned from the Kikuichimonji and costs 300 AP to master.
When under the Doom status a counter will appear over the target's head, similar to when a Zombie or Vampire is killed. When the countdown ends the Grim Reaper appears and lighter colored silhouette of the character, presumably his/her soul, and is split apart, KOing the unit. The Ahriman ability/Blue Magic spell Roulette, and the Alchemist attack Death have the same animation as Doom, except it is possible for these attacks to miss causing the "soul" to return to the unit's body.
Doom can be prevented by wearing armor and accessories that protect against it, including: Barrette, Ribbon, Judo Uniform, Wygar, Mistle Robe, Sacri Shield, Fortune Ring, and Angel Ring.
There are two ways to get rid of Doom: one can either cast Esuna or hit the unit with the White Rod. Zombies and Vampires will be healed if Doom, or either of the "Grim Reaper Deaths" mentioned above, are cast on them.
#### Final Fantasy Tactics A2: Grimoire of the Rift Edit
Doom can sometimes be inflicted by the Assassin ability Nightmare, and Sniper ability Death Sickle. The Master Monk has a chance of causing Doom (along with damage) when it uses its Lifebane ability, as well as the Blue Magic spell Doom.
The Fencer can inflict this status ailment with her ability Checkmate. The Tinker can use his technique Black Ingot to inflict Doom to either enemies or allies. The Scion Zalera is also capable of inflicting both Doom and Sleep on all enemies.
### Bravely Default Edit
Doom makes the affected character suffer instant death after a set number of turns. This number appears above the character's head. Doom cannot be cured except by using the Staff special move Rejuvenation with the Cure Doom part, or after a KO. However, it can be prevented by using the Spiritmaster ability Fairy Ward.
Game Element Type Effect
Doom Enemy ability Inflicts Doom. Can be group-cast.
Consume Life Enemy ability Heavy physical damage to random foes. Inflicts Doom and Instant Death.
Corpse Enemy ability Inflicts Doom.
Death Clutch Enemy ability Inflicts Doom.
Fatal Poison Enemy ability Medium physical damage and inflicts Doom to a single target.
Fairy Ward Spiritism ability Render all allies immune to poison, blind, silence, sleep, paralyze, dread, berserk, confuse, charm, doom, death, and stop for five turns.
#### Bravely Second: End Layer Edit
This article or section is a stub in Bravely Second: End Layer. You can help the Final Fantasy Wiki by expanding it.
### Final Fantasy Crystal Chronicles: Ring of Fates Edit
This article or section is a stub about a status effect in Final Fantasy Crystal Chronicles: Ring of Fates. You can help the Final Fantasy Wiki by expanding it.
### Final Fantasy Dimensions Edit
This article or section is a stub about a status effect in Final Fantasy Dimensions. You can help the Final Fantasy Wiki by expanding it.
### Dissidia Final Fantasy NTEdit
Doom is a status that automatically breaks the target's Bravery whenever the timer hits 0, regardless of how much Bravery that player has. In addition, while under Doom, the player will not receive any Stage Bravery. To remove Doom status, the player must land a hit on the opponent who inflicted the status on the player. The status can be inflicted via Sephiroth's Heartless Angel ability or at random by Shinryu's Chaotic Deluge.
### Final Fantasy Record Keeper Edit
Doom causes the afflicted character to be instantly knocked out once the timer depletes. The duration varies based on the source of the Doom effect. Several abilities, Soul Breaks, and Burst Mode commands gain increased damage or number of hits when the user is afflicted with Doom.
A variation of Doom, referred to in the game code as "Light Doom", exists on some player abilities and Soul Breaks as a status inflicted by the character upon themselves or the party. Light Doom cannot overwrite normal Doom, but a normal Doom effect will overwrite a Light Doom, whether the remaining timer is higher or lower than the applied Doom.
### Final Fantasy Brave ExviusEdit
This article or section is a stub about a status effect in Final Fantasy Brave Exvius. You can help the Final Fantasy Wiki by expanding it.
### World of Final FantasyEdit
This article or section is a stub about a status effect in World of Final Fantasy. You can help the Final Fantasy Wiki by expanding it.
|
2018-07-22 12:41:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34017637372016907, "perplexity": 6948.942769694056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593223.90/warc/CC-MAIN-20180722120017-20180722140017-00638.warc.gz"}
|
https://standards.globalspec.com/std/860944/asme-ptc-3-1
|
# ASME PTC 3.1
## Diesel and Burner Fuels
inactive
Organization: ASME Publication Date: 1 January 1958 Status: inactive Page Count: 83
##### scope:
The Test Code for Diesel and Burner Fuels is intended primarily to specify standard methods for the determination of those ascertainable chemical and physical properties which serve as indicators of the value of liquid fuels used in equipment for the generation of heat or of power.
### Document History
ASME PTC 3.1
January 1, 1958
Diesel and Burner Fuels
The Test Code for Diesel and Burner Fuels is intended primarily to specify standard methods for the determination of those ascertainable chemical and physical properties which serve as indicators of...
|
2019-10-18 04:18:13
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8497127890586853, "perplexity": 7399.557823273069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986677884.28/warc/CC-MAIN-20191018032611-20191018060111-00012.warc.gz"}
|
http://www.reference.com/browse/Peak+oil+theory
|
Definitions
# Hubbert peak theory
The Hubbert peak theory posits that for any given geographical area, from an individual oil-producing region to the planet as a whole, the rate of petroleum production tends to follow a bell-shaped curve. It is one of the primary theories on peak oil.
Choosing a particular curve determines a point of maximum production based on discovery rates, production rates and cumulative production. Early in the curve (pre-peak), the production rate increases due to the discovery rate and the addition of infrastructure. Late in the curve (post-peak), production declines due to resource depletion.
The Hubbert peak theory is based on the observation that the amount of oil under the ground in any region is finite, therefore the rate of discovery which initially increases quickly must reach a maximum and decline. In the US, oil extraction followed the discovery curve after a time lag of 32 to 35 years. The theory is named after American geophysicist M. King Hubbert, who created a method of modeling the production curve given an assumed ultimate recovery volume.
## Hubbert's peak
"Hubbert's peak" can refer to the peaking of production of a particular area, which has now been observed for many fields and regions.
Hubbert's Peak was achieved in the continental US in the early 1970s. Oil production peaked at 10.2 million barrels a day. Since then, it has been in a gradual decline.
Peak oil as a proper noun, or "Hubbert's peak" applied more generally, refers to a singular event in history: the peak of the entire planet's oil production. After Peak Oil, according to the Hubbert Peak Theory, the rate of oil production on Earth would enter a terminal decline. Based on his theory, in a paper he presented to the American Petroleum Institute in 1956, Hubbert correctly predicted that production of oil from conventional sources would peak in the continental United States around 1965-1970. Hubbert further predicted a worldwide peak at "about half a century" from publication and approximately 12 gigabarrels (GB) a year in magnitude. In a 1976 TV interview Hubbert added that the actions of OPEC might flatten the global production curve but this would only delay the peak for perhaps 10 years.
## Hubbert's theory
### Hubbert curve
In 1956, Hubbert proposed that fossil fuel production in a given region over time would follow a roughly bell-shaped curve without giving a precise formula; he later used the Hubbert curve, the derivative of the logistic curve, for estimating future production using past observed discoveries.
Hubbert assumed that after fossil fuel reserves (oil reserves, coal reserves, and natural gas reserves) are discovered, production at first increases approximately exponentially, as more extraction commences and more efficient facilities are installed. At some point, a peak output is reached, and production begins declining until it approximates an exponential decline.
The Hubbert curve satisfies these constraints. Furthermore, it is roughly symmetrical, with the peak of production reached when about half of the fossil fuel that will ultimately be produced has been produced. It also has a single peak.
Given past oil discovery and production data, a Hubbert curve may be constructed that attempts to approximate past discovery data, and used to provide estimates for future production. In particular, the date of peak oil production or the total amount of oil ultimately produced can be estimated that way. Cavallo defines the Hubbert curve used to predict the U.S. peak as the derivative of:
Q(t) = {Q_over {1 + a e^{bt}}}
where $Q$max is the total resource available (ultimate recovery of crude oil), $Q\left(t\right)$ the cumulative production, and $a$ and $b$ are constants. The year of maximum annual production (peak) is:
t_ = {1over b}ln left({1over a} right)
### Use of multiple curves
The sum of multiple Hubbert curves can be used in order to model more complicated real life scenarios.
### Definition of reserves
Almost all of Hubbert peaks must be put in the context of high ore grade. Except for fissionable materials, any resource, including oil, is theoretically recoverable from the environment with the right technology. A current example would be biofuel. However, a genetically engineered organism that produces crude oil would not invalidate Hubbert's peak for oil. His research was about the "easy" oil, "easy" metals, and so forth that can be recovered before a society considers greatly advanced mining efforts and how to time the necessity of such resource acquisition advancements or substitutions by knowing an "easy" resource's probable peak. Also, as reserves become more difficult to extract there is the possibility that mining or alternatives are too expensive for developing countries.
The "easy" oil constraint also applies to "abiotic oil", a theory believed by virtually no notable U.S. geologists, although it is believed by some Russian and Ukrainian geologists. This theory states that some oil is created through other methods than conventionally understood biogenic processes. However, in order to have any effect on Hubbert peak theory applied to oil, this other creation of oil would have to occur at a rate comparable to current oil depletion, something that has not been credibly observed.
For heavy crude or deep water drilling attempts, such as Noxal oil field or tar sands or oil shale, the price of the oil extracted will have to include the extra effort required to mine these resources. According to the U.S. Minerals Management Service, areas such as the Outer Continental Shelf may also incur higher costs due to environmental concerns. So not all oil reserves are equal, and the more difficult reserves are predicted by Hubbert as being typical of the post-peak side of the Hubbert curve.
### Reliability
Hubbert, in his 1956 paper, presented two scenarios for US conventional oil production (crude oil + condensate):
• most likely estimate: a logistic curve with a logistic growth rate equal to 6%, an ultimate resource equal to 150 Giga-barrels (Gb) and a peak in 1965.
• upper-bound estimate: a logistic curve with a logistic growth rate equal to 6% and ultimate resource equal to 200 Giga-barrels and a peak in 1970.
Hubbert's upper-bound estimate, which he regarded as optimistic, accurately predicted that US oil production would peak in 1970. Forty years later, the upper-bound estimate has also proven to be very accurate in terms of cumulative production, less so in terms of annual production. For 2005, the upper-bound Hubbert model predicts 178.2 Gb cumulative and 1.17 Gb current production; actual US production was 176.4 Gb cumulative crude oil + condensate (1% lower than the upper bound estimate), with annual production of 1.55 Gb (32% higher than the upper bound estimate).
A post-hoc analysis of peaked oil wells, fields, regions and nations found that Hubbert's model was the "most widely useful"(providing the best fit to the data), though many areas studied had a sharper "peak" than predicted.
## Economics
### Energy return on energy investment
When oil production first began in the mid-nineteenth century, the largest oil fields recovered fifty barrels of oil for every barrel used in the extraction, transportation and refining. This ratio is often referred to as the Energy Return on Energy Investment (EROI or EROEI). Currently, between one and five barrels of oil are recovered for each barrel-equivalent of energy used in the recovery process. As the EROEI drops to one, or equivalently the Net energy gain falls to zero, the oil production is no longer a net energy source. This happens long before the resource is physically exhausted.
Note that it is important to understand the distinction between a barrel of oil, which is a measure of oil, and a barrel of oil equivalent (BOE), which is a measure of energy. Many sources of energy, such as fission, solar, wind, and coal, are not subject to the same near-term supply restrictions that oil is. Accordingly, even an oil source with an EROEI of 0.5 can be usefully exploited if the energy required to produce that oil comes from a cheap and plentiful energy source. Availability of cheap, but hard to transport, natural gas in some oil fields has led to using natural gas to fuel enhanced oil recovery. Similarly, natural gas in huge amounts is used to power most Athabasca Tar Sands plants. Cheap natural gas has also led to Ethanol fuel produced with a net EROEI of less than 1, although figures in this area are controversial because methods to measure EROEI are in debate.
### Growth-based economic models
Insofar as economic growth is driven by oil consumption growth, post-peak societies must adapt. Hubbert believed :
Our principal constraints are cultural. During the last two centuries we have known nothing but exponential growth and in parallel we have evolved what amounts to an exponential-growth culture, a culture so heavily dependent upon the continuance of exponential growth for its stability that it is incapable of reckoning with problems of nongrowth.
Some economists describe the problem as uneconomic growth or a false economy. At the political right, Fred Ikle has warned about "conservatives addicted to the Utopia of Perpetual Growth" Brief oil interruptions in 1973 and 1979 markedly slowed - but did not stop - the growth of world GDP
Between 1950 and 1984, as the Green Revolution transformed agriculture around the globe, world grain production increased by 250%. The energy for the Green Revolution was provided by fossil fuels in the form of fertilizers (natural gas), pesticides (oil), and hydrocarbon fueled irrigation.
David Pimentel, professor of ecology and agriculture at Cornell University, and Mario Giampietro, senior researcher at the National Research Institute on Food and Nutrition (INRAN), place in their study Food, Land, Population and the U.S. Economy the maximum U.S. population for a sustainable economy at 200 million. To achieve a sustainable economy world population will have to be reduced by two-thirds, says the study. Without population reduction, this study predicts an agricultural crisis beginning in 2020, becoming critical c. 2050. The peaking of global oil along with the decline in regional natural gas production may precipitate this agricultural crisis sooner than generally expected. Dale Allen Pfeiffer claims that coming decades could see spiraling food prices without relief and massive starvation on a global level such as never experienced before.
## Hubbert peaks
Although Hubbert peak theory receives most attention in relation to peak oil production, it has also been applied to other natural resources.
### Natural gas
According to Western Gas Resources Inc., the North American peak happened in 2001; Doug Reynolds predicted in 2005 that the North American peak would occur in 2007; according to Bentley, production will peak anywhere from 2010 to 2020.
Natural gas production in the North Sea peaked in 2000. Even if new extraction techniques yield additional sources of natural gas, like coalbed methane, the energy returned on energy invested will be much lower than traditional gas sources, which inevitably leads to higher costs to consumers of natural gas.
### Coal
Peak coal is significantly further out than peak oil, but we can observe the example of anthracite in the USA, a high grade coal whose production peaked in the 1920s. Anthracite was studied by Hubbert, and matches a curve closely. Pennsylvania's coal production also matches Hubbert's curve closely, but this does not mean that coal in Pennsylvania is exhausted--far from it. If production in Pennsylvania returned at its all time high, there are reserves for 190 years. Hubbert had recoverable coal reserves worldwide at 2500 × 109 metric tons and peaking around 2150 (depending on usage).
More recent estimates suggest an earlier peak. Coal: Resources and Future Production (PDF 630KB ), published on April 5 2007 by the Energy Watch Group (EWG), which reports to the German Parliament, found that global coal production could peak in as few as 15 years Reporting on this Richard Heinberg also notes that the date of peak annual energetic extraction from coal will likely come earlier than the date of peak in quantity of coal (tons per year) extracted as the most energy-dense types of coal have been mined most extensively A second study, The Future of Coal by B. Kavalov and S. D. Peteves of the Institute for Energy (IFE), prepared for European Commission Joint Research Centre, reaches similar conclusions and states that ""coal might not be so abundant, widely available and reliable as an energy source in the future".
Work by David Rutledge of Caltech predicts that the total of world coal production will amount to only about 450 gigatonnes. This implies that coal is running out faster than usually assumed.
Finally, insofar as global peak oil and peak in natural gas are expected anywhere from imminently to within decades at most, any increase in coal production (mining) per annum to compensate for declines in oil or NG production, would necessarily translate to an earlier date of peak as compared with peak coal under a scenario in which annual production remains constant.
### Fissionable materials
In a paper in 1956 , after a review of US fissionable reserves, Hubbert notes of nuclear power:
There is promise, however, provided mankind can solve its international problems and not destroy itself with nuclear weapons, and provided world population (which is now expanding at such a rate as to double in less than a century) can somehow be brought under control, that we may at last have found an energy supply adequate for our needs for at least the next few centuries of the "foreseeable future."
Technologies such as thorium, reprocessing and fast breeders can, in theory, considerably extend the life of uranium reserves. Roscoe Bartlett claims
Our current throwaway nuclear cycle uses up the world reserve of low-cost uranium in about 20 years.
Caltech physics professor David Goodstein has stated that
... you would have to build 10,000 of the largest power plants that are feasible by engineering standards in order to replace the 10 terawatts of fossil fuel we're burning today ... that's a staggering amount and if you did that, the known reserves of uranium would last for 10 to 20 years at that burn rate. So, it's at best a bridging technology ... You can use the rest of the uranium to breed plutonium 239 then we'd have at least 100 times as much fuel to use. But that means you're making plutonium, which is an extremely dangerous thing to do in the dangerous world that we live in.
### Metals
Hubbert applied his theory to "rock containing an abnormally high concentration of a given metal" and reasoned that the peak production for metals such as copper, tin, lead, zinc and others would occur in the time frame of decades and iron in the time frame of two centuries like coal. The recent jump in the price of copper has become known among traders as peak copper. Lithium availability is a concern for a fleet of Li-ion battery using cars but a paper published in 1996 estimated that world reserves are adequate for at least 50 years A similar prediction for platinum use in fuel cells notes that the metal could be easily recycled.
### Phosphorus
Phosphorus supplies are essential to farming and depletion of reserves is estimated at somewhere from 60 to 130 years Individual countries supplies vary widely; without a recycling initiative America's supply is estimated around 30 years Phosphorus supplies affect total agricultural output which in turn limits alternative fuels such as biodiesel and ethanol.
### Renewable resources
Despite the fact that, in theory, Hubbert's analysis does not apply to renewable resources, over-exploitation often results in a Hubbert peak nonetheless. The Hubbert curve seems to be applicable to any resource that can be harvested faster than it can be replaced:
• Water: For example, a reserve such as the Ogallala Aquifer can be mined at a rate that far exceeds replenishment. This turns much of the world's underground water and lakes into finite resources with peak usage debates similar to oil. These debates usually center around agriculture and suburban water usage but generation of electricity from nuclear energy or coal and tar sands mining mentioned above is also water resource intensive. The term fossil water is sometimes used to describe older aquifers that are not considered renewables anymore.
• Fisheries: At least one researcher has attempted to perform Hubbert linearization on the whaling industry, as well as charting the transparently dependent price of caviar on sturgeon depletion. Another example is the cod of the North Sea
## Criticism
Economist Michael Lynch argues that the theory behind the Hubbert curve is overly simplistic. Lynch claims that Campbell's predictions for world oil production are strongly biased towards underestimates, and that Campbell has repeatedly pushed the date back.
Leonardo Maugeri, vice president for the Italian energy company ENI, argues that nearly all of peak estimates do not take into account non-conventional oil even though the availability of these resources is significant and the costs of extraction and processing, while still very high, are falling due to improved technology. He also notes that the recovery rate from existing world oil fields has increased from about 22% in 1980 to 35% today due to new technology and predicts this trend will continue. The ratio between proven oil reserves and current production has constantly improved, passing from 20 years in 1948 to 35 years in 1972 and reaching about 40 years in 2003. These improvements occurred even with low investment in new exploration and upgrading technology due to the low oil prices during the last 20 years. However, Maugeri feels that encouraging more exploration will require relatively high oil prices
Edward Luttwak, an economist and historian, claims that unrest in countries such as Russia, Iran and Iraq has led to a massive underestimate of oil reserves. The Association for the Study of Peak Oil and Gas, or ASPO responds by claiming neither Russia nor Iran are troubled by unrest currently, but Iraq is
Cambridge Energy Research Associates authored a report that is critical of Hubbert influenced predictions:
Despite his valuable contribution, M. King Hubbert's methodology falls down because it does not consider likely resource growth, application of new technology, basic commercial factors, or the impact of geopolitics on production. His approach does not work in all cases-including on the United States itself-and cannot reliably model a global production outlook. Put more simply, the case for the imminent peak is flawed. As it is, production in 2005 in the Lower 48 in the United States was 66 percent higher than Hubbert projected.
CERA does not believe there will be an endless abundance of oil, but instead believes that global production will eventually follow an “undulating plateau” for one or more decades before declining slowly, and that production will reach 40 Mb/d by 2015..
Alfred J. Cavallo, while predicting a conventional oil supply shortage by no later than 2015, does not think Hubbert's peak is the correct theory to apply to world production.
## References
Sites
Documentaries
Online videos
Articles
Reports, essays and lectures
Search another word or see Peak oil theoryon Dictionary | Thesaurus |Spanish
|
2014-07-26 03:17:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4782913029193878, "perplexity": 3033.8131883453884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894976.0/warc/CC-MAIN-20140722025814-00018-ip-10-33-131-23.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/calculus/207461-differentiation-tangent-lines-velocity-use-position-function-s-find-velocity.html
|
# Math Help - Differentiation: Tangent Lines and Velocity. Use position function s find velocity
1. ## Differentiation: Tangent Lines and Velocity. Use position function s find velocity
Use the position function s (in meters) to find the velocity at time t= a seconds.
s(t) = 4/t, (a) a=2; (b) a=4
I know the formula starts out something like this.....
the limit as h approaches 0....... [s(2+h) - s(2)] / [(2+h) - 2]
but after that I am quite lost.....I don't understand what it is I am supposed to do next. I have looked at my calculus book, looked at some of the solutions to similar problems, but I just don't get it. None of the examples deal with this exact problem.
Any help or guidance would be greatly appreciated!!
2. ## Re: Differentiation: Tangent Lines and Velocity. Use position function s find velocit
Do you have to motivate the derivative with a difference quotient or are you allowed to use the "known" rules which follows from the definition? If you know the rules you have
s(t) = 4/t = 4*t^-1
s'(t) = -1*4*t^-2 = -4/t^2
If you have to use the limit of the difference quotient then we have the definition
s'(t) = lim(h->0) (s(t+h)-s(t))/h = lim(h->0) ((4/(t+h))-4/t)/h = lim(h->0) ((4t-4(t+h))/(t(t+h)))/h = lim(h->0) ((-4h))/(t(t+h)))/h = lim(h->0) ((-4))/(t(t+h)))/1 = -4/t^2
So we have the derivative
s'(t) = -4/t^2
Calculating the velocity at a = 2 were t = a
s'(t) = -4/t^2 => s'(2) = -4/2^2 = -1 m/s
I leave a = 4 as exercise for you.
3. ## Re: Differentiation: Tangent Lines and Velocity. Use position function s find velocit
Thanks for the quick reply, yes I have to use the limit.....I am reading through your response now, and If I do not understand any portion of it, I will post with further questions. Thank you so much for the reply though, I really appreciate it!!
4. ## Re: Differentiation: Tangent Lines and Velocity. Use position function s find velocit
Write if you don't understand, maybe I can write in LaTeX. But try to write it down with pen and paper, and you will get the idea.
5. ## Re: Differentiation: Tangent Lines and Velocity. Use position function s find velocit
Okay, so as I understand it......
A=4
s'(t) = lim(h->0) (s(t+h)-s(t))/h
= lim(h->0) ((4/(t+h))-4/t)/h
= lim(h->0) ((4t-4(t+h))/(t(t+h)))/h
= lim(h->0) ((-4h))/(t(t+h)))/h
= lim(h->0) ((-4))/(t(t+h)))/1
= -4/t^2
s'(t) = -4/t^2
=> s'(4)
= -4/4^2
= -0.25 m/s
By all means correct me If I am wrong.
6. ## Re: Differentiation: Tangent Lines and Velocity. Use position function s find velocit
Originally Posted by JDS
Thanks for the quick reply, yes I have to use the limit.....I am reading through your response now, and If I do not understand any portion of it.
You don't because it is so damn hard to read.
Why not use LaTeX coding? It is not helpful if it cannot be read.
$\frac{{\frac{4}{{t + h}} - \frac{4}{t}}}{h} = \frac{{4(t) - 4(t + h)}}{{h(t + h)t}} = \frac{{ - 4}}{{(t + h)t}}$
Now you finish the limit.
7. ## Re: Differentiation: Tangent Lines and Velocity. Use position function s find velocit
You are right, Yours is much cleaner and I can understand it more clearly. However, shouldnt the end function there be......
-4/[2+(t+h)t]
Perhaps I am looking at it wrong though?
8. ## Re: Differentiation: Tangent Lines and Velocity. Use position function s find velocit
nevermind, im an idiot, I see what I did, lOl
9. ## Re: Differentiation: Tangent Lines and Velocity. Use position function s find velocit
grrr, actually the more I look at this the more I confuse myself.
10. ## Re: Differentiation: Tangent Lines and Velocity. Use position function s find velocit
Originally Posted by Plato
You don't because it is so damn hard to read.
Why not use LaTeX coding? It is not helpful if it cannot be read.
$\frac{{\frac{4}{{t + h}} - \frac{4}{t}}}{h} = \frac{{4(t) - 4(t + h)}}{{h(t + h)t}} = \frac{{ - 4}}{{(t + h)t}}$
Now you finish the limit.
So I Did it with the way you have it set up here, and I still get the same answer I got above which is - 1/4 m/s or -0.25 m/s
Hope I understood it correctly!
11. ## Re: Differentiation: Tangent Lines and Velocity. Use position function s find velocit
But looking at it for A=2, I am coming up with -1/2 or -0.5 m/s
Am I correct, or am I missing something?
12. ## Re: Differentiation: Tangent Lines and Velocity. Use position function s find velocit
You said earlier
s'(t) = lim(h->0) (s(t+h)-s(t))/h
= lim(h->0) ((4/(t+h))-4/t)/h
= lim(h->0) ((4t-4(t+h))/(t(t+h)))/h
= lim(h->0) ((-4h))/(t(t+h)))/h
= lim(h->0) ((-4))/(t(t+h)))/1
= -4/t^2
s'(t) = -4/t^2
When t= 2, that is $-4/2^2= -1$, not -1/2.
13. ## Re: Differentiation: Tangent Lines and Velocity. Use position function s find velocit
Yes you are correct but in the portion you quoted, that was based on the info I obtained from "fkf", When I reposted the answer of -1/2, I was going off of the formula version that Plato posted.....Was fkf and myself correct or is Plato correct?
14. ## Re: Differentiation: Tangent Lines and Velocity. Use position function s find velocit
Originally Posted by JDS
Was fkf and myself correct or is Plato correct?
$\lim _{h \to 0} \left( {\frac{{ - 4}}{{(t + h)t}}} \right) = \frac{{ - 4}}{{t^2 }}$
So the derivative is $\frac{-4}{t^2}$ which is $-1$ if $t=2$.
15. ## Re: Differentiation: Tangent Lines and Velocity. Use position function s find velocit
I see, So then in that case my second answer is still , -(1/4)........Correct?
P.S. Thanks for all the help!!
|
2014-03-17 14:03:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7112029790878296, "perplexity": 2910.306716907107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678705728/warc/CC-MAIN-20140313024505-00044-ip-10-183-142-35.ec2.internal.warc.gz"}
|
https://www.profillic.com/s/Pradeep%20Dubey
|
Models, code, and papers for "Pradeep Dubey":
##### Parallelizing Word2Vec in Multi-Core and Many-Core Architectures
Dec 23, 2016
Shihao Ji, Nadathur Satish, Sheng Li, Pradeep Dubey
Word2vec is a widely used algorithm for extracting low-dimensional vector representations of words. State-of-the-art algorithms including those by Mikolov et al. have been parallelized for multi-core CPU architectures, but are based on vector-vector operations with "Hogwild" updates that are memory-bandwidth intensive and do not efficiently use computational resources. In this paper, we propose "HogBatch" by improving reuse of various data structures in the algorithm through the use of minibatching and negative sample sharing, hence allowing us to express the problem using matrix multiply operations. We also explore different techniques to distribute word2vec computation across nodes in a compute cluster, and demonstrate good strong scalability up to 32 nodes. The new algorithm is particularly suitable for modern multi-core/many-core architectures, especially Intel's latest Knights Landing processors, and allows us to scale up the computation near linearly across cores and nodes, and process hundreds of millions of words per second, which is the fastest word2vec implementation to the best of our knowledge.
* NIPS Workshop on Efficient Methods for Deep Neural Networks (2016)
##### Parallelizing Word2Vec in Shared and Distributed Memory
Aug 08, 2016
Shihao Ji, Nadathur Satish, Sheng Li, Pradeep Dubey
Word2Vec is a widely used algorithm for extracting low-dimensional vector representations of words. It generated considerable excitement in the machine learning and natural language processing (NLP) communities recently due to its exceptional performance in many NLP applications such as named entity recognition, sentiment analysis, machine translation and question answering. State-of-the-art algorithms including those by Mikolov et al. have been parallelized for multi-core CPU architectures but are based on vector-vector operations that are memory-bandwidth intensive and do not efficiently use computational resources. In this paper, we improve reuse of various data structures in the algorithm through the use of minibatching, hence allowing us to express the problem using matrix multiply operations. We also explore different techniques to distribute word2vec computation across nodes in a compute cluster, and demonstrate good strong scalability up to 32 nodes. In combination, these techniques allow us to scale up the computation near linearly across cores and nodes, and process hundreds of millions of words per second, which is the fastest word2vec implementation to the best of our knowledge.
* Added more results
##### Ternary Neural Networks with Fine-Grained Quantization
We propose a novel fine-grained quantization (FGQ) method to ternarize pre-trained full precision models, while also constraining activations to 8 and 4-bits. Using this method, we demonstrate a minimal loss in classification accuracy on state-of-the-art topologies without additional training. We provide an improved theoretical formulation that forms the basis for a higher quality solution using FGQ. Our method involves ternarizing the original weight tensor in groups of $N$ weights. Using $N=4$, we achieve Top-1 accuracy within $3.7\%$ and $4.2\%$ of the baseline full precision result for Resnet-101 and Resnet-50 respectively, while eliminating $75\%$ of all multiplications. These results enable a full 8/4-bit inference pipeline, with best-reported accuracy using ternary weights on ImageNet dataset, with a potential of $9\times$ improvement in performance. Also, for smaller networks like AlexNet, FGQ achieves state-of-the-art results. We further study the impact of group size on both performance and accuracy. With a group size of $N=64$, we eliminate $\approx99\%$ of the multiplications; however, this introduces a noticeable drop in accuracy, which necessitates fine tuning the parameters at lower precision. We address this by fine-tuning Resnet-50 with 8-bit activations and ternary weights at $N=64$, improving the Top-1 accuracy to within $4\%$ of the full precision result with $<30\%$ additional training overhead. Our final quantized model can run on a full 8-bit compute pipeline using 2-bit weights and has the potential of up to $15\times$ improvement in performance compared to baseline full-precision models.
##### BlackOut: Speeding up Recurrent Neural Network Language Models With Very Large Vocabularies
We propose BlackOut, an approximation algorithm to efficiently train massive recurrent neural network language models (RNNLMs) with million word vocabularies. BlackOut is motivated by using a discriminative loss, and we describe a new sampling strategy which significantly reduces computation while improving stability, sample efficiency, and rate of convergence. One way to understand BlackOut is to view it as an extension of the DropOut strategy to the output layer, wherein we use a discriminative training loss and a weighted sampling scheme. We also establish close connections between BlackOut, importance sampling, and noise contrastive estimation (NCE). Our experiments, on the recently released one billion word language modeling benchmark, demonstrate scalability and accuracy of BlackOut; we outperform the state-of-the art, and achieve the lowest perplexity scores on this dataset. Moreover, unlike other established methods which typically require GPUs or CPU clusters, we show that a carefully implemented version of BlackOut requires only 1-10 days on a single machine to train a RNNLM with a million word vocabulary and billions of parameters on one billion words. Although we describe BlackOut in the context of RNNLM training, it can be used to any networks with large softmax output layers.
* Published as a conference paper at ICLR 2016
##### Ternary Residual Networks
Sub-8-bit representation of DNNs incur some discernible loss of accuracy despite rigorous (re)training at low-precision. Such loss of accuracy essentially makes them equivalent to a much shallower counterpart, diminishing the power of being deep networks. To address this problem of accuracy drop we introduce the notion of \textit{residual networks} where we add more low-precision edges to sensitive branches of the sub-8-bit network to compensate for the lost accuracy. Further, we present a perturbation theory to identify such sensitive edges. Aided by such an elegant trade-off between accuracy and compute, the 8-2 model (8-bit activations, ternary weights), enhanced by ternary residual edges, turns out to be sophisticated enough to achieve very high accuracy ($\sim 1\%$ drop from our FP-32 baseline), despite $\sim 1.6\times$ reduction in model size, $\sim 26\times$ reduction in number of multiplications, and potentially $\sim 2\times$ power-performance gain comparing to 8-8 representation, on the state-of-the-art deep network ResNet-101 pre-trained on ImageNet dataset. Moreover, depending on the varying accuracy requirements in a dynamic environment, the deployed low-precision model can be upgraded/downgraded on-the-fly by partially enabling/disabling residual connections. For example, disabling the least important residual connections in the above enhanced network, the accuracy drop is $\sim 2\%$ (from FP32), despite $\sim 1.9\times$ reduction in model size, $\sim 32\times$ reduction in number of multiplications, and potentially $\sim 2.3\times$ power-performance gain comparing to 8-8 representation. Finally, all the ternary connections are sparse in nature, and the ternary residual conversion can be done in a resource-constraint setting with no low-precision (re)training.
##### Faster CNNs with Direct Sparse Convolutions and Guided Pruning
Phenomenally successful in practical inference problems, convolutional neural networks (CNN) are widely deployed in mobile devices, data centers, and even supercomputers. The number of parameters needed in CNNs, however, are often large and undesirable. Consequently, various methods have been developed to prune a CNN once it is trained. Nevertheless, the resulting CNNs offer limited benefits. While pruning the fully connected layers reduces a CNN's size considerably, it does not improve inference speed noticeably as the compute heavy parts lie in convolutions. Pruning CNNs in a way that increase inference speed often imposes specific sparsity structures, thus limiting the achievable sparsity levels. We present a method to realize simultaneously size economy and speed improvement while pruning CNNs. Paramount to our success is an efficient general sparse-with-dense matrix multiplication implementation that is applicable to convolution of feature maps with kernels of arbitrary sparsity patterns. Complementing this, we developed a performance model that predicts sweet spots of sparsity levels for different layers and on different computer architectures. Together, these two allow us to demonstrate 3.1--7.3$\times$ convolution speedups over dense convolution in AlexNet, on Intel Atom, Xeon, and Xeon Phi processors, spanning the spectrum from mobile devices to supercomputers. We also open source our project at https://github.com/IntelLabs/SkimCaffe.
* 12 pages, 5 figures
##### Distributed Deep Learning Using Synchronous Stochastic Gradient Descent
We design and implement a distributed multinode synchronous SGD algorithm, without altering hyper parameters, or compressing data, or altering algorithmic behavior. We perform a detailed analysis of scaling, and identify optimal design points for different networks. We demonstrate scaling of CNNs on 100s of nodes, and present what we believe to be record training throughputs. A 512 minibatch VGG-A CNN training run is scaled 90X on 128 nodes. Also 256 minibatch VGG-A and OverFeat-FAST networks are scaled 53X and 42X respectively on a 64 node cluster. We also demonstrate the generality of our approach via best-in-class 6.5X scaling for a 7-layer DNN on 16 nodes. Thereafter we attempt to democratize deep-learning by training on an Ethernet based AWS cluster and show ~14X scaling on 16 nodes.
##### K-TanH: Hardware Efficient Activations For Deep Learning
We propose K-TanH, a novel, highly accurate, hardware efficient approximation of popular activation function Tanh for Deep Learning. K-TanH consists of a sequence of parameterized bit/integer operations, such as, masking, shift and add/subtract (no floating point operation needed) where parameters are stored in a very small look-up table (bit-masking step can be eliminated). The design of K-TanH is flexible enough to deal with multiple numerical formats, such as, FP32 and BFloat16. High quality approximations to other activation functions, e.g., Swish and GELU, can be derived from K-TanH. We provide RTL design for K-TanH to demonstrate its area/power/performance efficacy. It is more accurate than existing piecewise approximations for Tanh. For example, K-TanH achieves $\sim 5\times$ speed up and $> 6\times$ reduction in maximum approximation error over software implementation of Hard TanH. Experimental results for low-precision BFloat16 training of language translation model GNMT on WMT16 data sets with approximate Tanh and Sigmoid obtained via K-TanH achieve similar accuracy and convergence as training with exact Tanh and Sigmoid.
* 14 pages, 14 figures
##### Context-Aware Parse Trees
The simplified parse tree (SPT) presented in Aroma, a state-of-the-art code recommendation system, is a tree-structured representation used to infer code semantics by capturing program \emph{structure} rather than program \emph{syntax}. This is a departure from the classical abstract syntax tree, which is principally driven by programming language syntax. While we believe a semantics-driven representation is desirable, the specifics of an SPT's construction can impact its performance. We analyze these nuances and present a new tree structure, heavily influenced by Aroma's SPT, called a \emph{context-aware parse tree} (CAPT). CAPT enhances SPT by providing a richer level of semantic representation. Specifically, CAPT provides additional binding support for language-specific techniques for adding semantically-salient features, and language-agnostic techniques for removing syntactically-present but semantically-irrelevant features. Our research quantitatively demonstrates the value of our proposed semantically-salient features, enabling a specific CAPT configuration to be 39\% more accurate than SPT across the 48,610 programs we analyzed.
##### On Scale-out Deep Learning Training for Cloud and HPC
The exponential growth in use of large deep neural networks has accelerated the need for training these deep neural networks in hours or even minutes. This can only be achieved through scalable and efficient distributed training, since a single node/card cannot satisfy the compute, memory, and I/O requirements of today's state-of-the-art deep neural networks. However, scaling synchronous Stochastic Gradient Descent (SGD) is still a challenging problem and requires continued research/development. This entails innovations spanning algorithms, frameworks, communication libraries, and system design. In this paper, we describe the philosophy, design, and implementation of Intel Machine Learning Scalability Library (MLSL) and present proof-points demonstrating scaling DL training on 100s to 1000s of nodes across Cloud and HPC systems.
* Accepted in SysML 2018 conference
##### Deep Learning at 15PF: Supervised and Semi-Supervised Classification for Scientific Data
This paper presents the first, 15-PetaFLOP Deep Learning system for solving scientific pattern classification problems on contemporary HPC architectures. We develop supervised convolutional architectures for discriminating signals in high-energy physics data as well as semi-supervised architectures for localizing and classifying extreme weather in climate data. Our Intelcaffe-based implementation obtains $\sim$2TFLOP/s on a single Cori Phase-II Xeon-Phi node. We use a hybrid strategy employing synchronous node-groups, while using asynchronous communication across groups. We use this strategy to scale training of a single model to $\sim$9600 Xeon-Phi nodes; obtaining peak performance of 11.73-15.07 PFLOP/s and sustained performance of 11.41-13.27 PFLOP/s. At scale, our HEP architecture produces state-of-the-art classification accuracy on a dataset with 10M images, exceeding that achieved by selections on high-level physics-motivated features. Our semi-supervised architecture successfully extracts weather patterns in a 15TB climate dataset. Our results demonstrate that Deep Learning can be optimized and scaled effectively on many-core, HPC systems.
* 12 pages, 9 figures
##### Mixed Precision Training of Convolutional Neural Networks using Integer Operations
The state-of-the-art (SOTA) for mixed precision training is dominated by variants of low precision floating point operations, and in particular, FP16 accumulating into FP32 Micikevicius et al. (2017). On the other hand, while a lot of research has also happened in the domain of low and mixed-precision Integer training, these works either present results for non-SOTA networks (for instance only AlexNet for ImageNet-1K), or relatively small datasets (like CIFAR-10). In this work, we train state-of-the-art visual understanding neural networks on the ImageNet-1K dataset, with Integer operations on General Purpose (GP) hardware. In particular, we focus on Integer Fused-Multiply-and-Accumulate (FMA) operations which take two pairs of INT16 operands and accumulate results into an INT32 output.We propose a shared exponent representation of tensors and develop a Dynamic Fixed Point (DFP) scheme suitable for common neural network operations. The nuances of developing an efficient integer convolution kernel is examined, including methods to handle overflow of the INT32 accumulator. We implement CNN training for ResNet-50, GoogLeNet-v1, VGG-16 and AlexNet; and these networks achieve or exceed SOTA accuracy within the same number of iterations as their FP32 counterparts without any change in hyper-parameters and with a 1.8X improvement in end-to-end training throughput. To the best of our knowledge these results represent the first INT16 training results on GP hardware for ImageNet-1K dataset using SOTA CNNs and achieve highest reported accuracy using half-precision
* Published as a conference paper at ICLR 2018
##### A Study of BFLOAT16 for Deep Learning Training
This paper presents the first comprehensive empirical study demonstrating the efficacy of the Brain Floating Point (BFLOAT16) half-precision format for Deep Learning training across image classification, speech recognition, language modeling, generative networks and industrial recommendation systems. BFLOAT16 is attractive for Deep Learning training for two reasons: the range of values it can represent is the same as that of IEEE 754 floating-point format (FP32) and conversion to/from FP32 is simple. Maintaining the same range as FP32 is important to ensure that no hyper-parameter tuning is required for convergence; e.g., IEEE 754 compliant half-precision floating point (FP16) requires hyper-parameter tuning. In this paper, we discuss the flow of tensors and various key operations in mixed precision training, and delve into details of operations, such as the rounding modes for converting FP32 tensors to BFLOAT16. We have implemented a method to emulate BFLOAT16 operations in Tensorflow, Caffe2, IntelCaffe, and Neon for our experiments. Our results show that deep learning training using BFLOAT16 tensors achieves the same state-of-the-art (SOTA) results across domains as FP32 tensors in the same number of iterations and with no changes to hyper-parameters.
##### SysML: The New Frontier of Machine Learning Systems
Machine learning (ML) techniques are enjoying rapidly increasing adoption. However, designing and implementing the systems that support ML models in real-world deployments remains a significant obstacle, in large part due to the radically different development and deployment profile of modern ML methods, and the range of practical concerns that come with broader adoption. We propose to foster a new systems machine learning research community at the intersection of the traditional systems and ML communities, focused on topics such as hardware systems for ML, software systems for ML, and ML optimized for metrics beyond predictive accuracy. To do this, we describe a new conference, SysML, that explicitly targets research at the intersection of systems and machine learning with a program committee split evenly between experts in systems and ML, and an explicit focus on topics at the intersection of the two.
|
2020-03-31 14:21:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3776083290576935, "perplexity": 2395.4149219863834}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370500482.27/warc/CC-MAIN-20200331115844-20200331145844-00200.warc.gz"}
|
http://nbviewer.jupyter.org/gist/msund/11349097
|
21 Interactive Plots from matplotlib, ggplot for Python,prettyplotlib, Stack Overflow, and seaborn
Plotly is collaborative, makes beautiful interactive graphs with a URL for you, and stores your data and graphs together. This NB shows how to use Plotly to share plots from some awesome Python plotting libraries. The matplotlylib project is a collaboration with mpld3 and Jake Vanderplas. We've put together a User Guide that outlines the full extent of Plotly's APIs.
For best results, you can copy and paste this Notebook and key. Run $pip install plotly inside a terminal then start up a Notebook. We'll also be using ggplot, seaborn, and prettyplotlib, which you can also all install form pip. Let's get started. In [1]: %matplotlib inline import matplotlib.pyplot as plt # side-stepping mpl backend import matplotlib.gridspec as gridspec # subplots import numpy as np You can use our public key and username or sign up for an account on Plotly. Plotly is free for public use, you own your data, and you control the privacy. In [2]: import plotly.plotly as py import plotly.tools as tls from plotly.graph_objs import * py.sign_in("IPython.Demo", "1fw3zw2o13") In [3]: # tls.set_credentials_file("IPython.Demo", "1fw3zw2o13") # tls.get_credentials_file() You'll want to have version 1.0.0. If not, run $ pip install plotly --upgrade in a terminal. Check out our User Guide for more details on where to get your key. Problems or questions? Email [email protected] or find us on Twitter.
In [4]:
import plotly
plotly.__version__
Out[4]:
'1.0.0'
For matplotlib experts, you'll recognize these graphs from the matplotlib gallery.
In addition to matplotlib and Plotly's own Python API, You can also use Plotly's other APIs for MATLAB, R, Perl, Julia, and REST to write to graphs. That means you and I could edit the same graph with any language. We can even edit the graph and data from the GUI, so technical and non-technical teams can work together. And all the graphs go to your profile, like this: https://plot.ly/~IPython.Demo.
You control the privacy by setting world_readable to False or True, and can control your sharing.
Let's get started with this damped oscillation graph.
In [5]:
fig1 = plt.figure()
# Make a legend for specific lines.
import matplotlib.pyplot as plt
import numpy as np
t1 = np.arange(0.0, 2.0, 0.1)
t2 = np.arange(0.0, 2.0, 0.01)
# note that plot returns a list of lines. The "l1, = plot" usage
# extracts the first element of the list into l1 using tuple
# unpacking. So l1 is a Line2D instance, not a sequence of lines
l1, = plt.plot(t2, np.exp(-t2))
l2, l3 = plt.plot(t2, np.sin(2 * np.pi * t2), '--go', t1, np.log(1 + t1), '.')
l4, = plt.plot(t2, np.exp(-t2) * np.sin(2 * np.pi * t2), 'rs-.')
plt.xlabel('time')
plt.ylabel('volts')
plt.title('Damped oscillation')
plt.show()
Now, to convert it to a Plotly figure, this is all it takes:
In [6]:
py.iplot_mpl(fig1)
You can hover, zoom, and pan on the figure. You can also strip out the matplotlib styling, and use Plotly's default styling.
In [7]:
fig = tls.mpl_to_plotly(fig1)
fig['layout'].update(showlegend=True)
fig.strip_style()
py.iplot(fig)
Next up, an example from pylab.
In [8]:
fig2 = plt.figure()
from pylab import *
def f(t):
'a damped exponential'
s1 = cos(2*pi*t)
e1 = exp(-t)
return multiply(s1,e1)
t1 = arange(0.0, 5.0, .2)
l = plot(t1, f(t1), 'ro')
setp(l, 'markersize', 30)
setp(l, 'markerfacecolor', 'b')
py.iplot_mpl(fig2)
Here's where this gets special. You can get the data from any Plotly graph. That means you can re-plot the graph or part of it, or use your favorite Python tools to wrangle and analyze your data. Check out our getting started guide for a full background on these features.
In [9]:
tls.mpl_to_plotly(fig2).get_data()
Out[9]:
[{'name': '_line0',
'x': [0.0,
0.20000000000000001,
0.40000000000000002,
0.60000000000000009,
0.80000000000000004,
1.0,
1.2000000000000002,
1.4000000000000001,
1.6000000000000001,
1.8,
2.0,
2.2000000000000002,
2.4000000000000004,
2.6000000000000001,
2.8000000000000003,
3.0,
3.2000000000000002,
3.4000000000000004,
3.6000000000000001,
3.8000000000000003,
4.0,
4.2000000000000002,
4.4000000000000004,
4.6000000000000005,
4.8000000000000007],
'y': [1.0,
0.25300171651849518,
-0.54230030891302927,
-0.44399794031078654,
0.13885028597711233,
0.36787944117144233,
0.09307413008823949,
-0.19950113459002566,
-0.16333771416280363,
0.051080165611754998,
0.1353352832366127,
0.034240058964379601,
-0.073392365906047419,
-0.060088587008433003,
0.018791342780197139,
0.049787068367863944,
0.012596213757493282,
-0.026999542555766767,
-0.022105355809443925,
0.0069129486808399343,
0.018315638888734179,
0.0046338880779826647,
-0.0099325766273000524,
-0.0081321059420741033,
0.0025431316975542792]}]
Or you can get the figure makeup. Here, we're using 'IPython.Demo', which is the username and '3357' which is the figure number. You can use this command on Plotly graphs to interact with them from the console. You can access graphs via a URL. For example, for this plot, it's:
https://plot.ly/~IPython.Demo/3357/
In [10]:
pylab = py.get_figure('IPython.Demo', '3357')
In [11]:
#print figure
print pylab.to_string()
Figure(
data=Data([
Scatter(
x=[0.0, 0.2, 0.4, 0.6000000000000001, 0.8, 1.0, 1.2000000000000...],
y=[1.0, 0.2530017165184952, -0.5423003089130293, -0.44399794031...],
name='_line0',
mode='markers',
marker=Marker(
symbol='dot',
line=Line(
color='#000000',
width=0.5
),
size=30,
color='#0000FF',
opacity=1
)
)
]),
layout=Layout(
xaxis=XAxis(
domain=[0.0, 1.0],
range=[0.0, 5.0],
showline=True,
ticks='inside',
showgrid=False,
zeroline=False,
anchor='y',
mirror=True
),
yaxis=YAxis(
domain=[0.0, 1.0],
range=[-0.6000000000000001, 1.2],
showline=True,
ticks='inside',
showgrid=False,
zeroline=False,
anchor='x',
mirror=True
),
hovermode='closest',
showlegend=False
)
)
Now let's suppose we wanted to add a fit to the graph (see our fits post to learn more), and re-style it a bit. We can go into the web app, fork a copy, and edit the image in our GUI. No coding required.
In [12]:
from IPython.display import Image
Image(url='https://i.imgur.com/WG0gb9J.png')
Out[12]:
We also keep the data and graph together. You can analyze it, share it, or add to other plots. You can append data to your plots, copy and paste, import, or upload data. Take-away: a Python user could make plots with an Excel user, ggplot2 Ploty package, and MATLAB user. That's collaboration.
In [13]:
Image(url='https://i.imgur.com/Mq490fb.png')
Out[13]:
I can now call that graph into the NB. I can keep the styling, re-use that styling on future graphs, and save styles from other graphs. And if I want to see the data for the fit or access the figure styling, I can run the same commands, but on the updated figure and data for this graph. I don't need to re-code it, and I can save and share this version.
In [14]:
tls.embed('MattSundquist', '1307')
Plotly graphs are always interactive, and you can even stream data to the browser. You can also embed them in the browser with an iframe snippet.
In [15]:
from IPython.display import HTML
In [16]:
s = """<pre style="background:#f1f1f1;color:#000"><iframe src=<span style="color:#c03030">"https://plot.ly/~etpinard/176/650/550"</span> width=<span style="color:#c03030">"650"</span> height=550<span style="color:#c03030">" frameBorder="</span>0<span style="color:#c03030">" seamless="</span>seamless<span style="color:#c03030">" scrolling="</span>no<span style="color:#c03030">"></iframe></span></pre>"""
In [17]:
h = HTML(s); h
Out[17]:
<iframe src="https://plot.ly/~etpinard/176/650/550" width="650" height=550" frameBorder="0" seamless="seamless" scrolling="no"></iframe>
Where it remains interactive. That means your for-free defaults are: D3 graphs, drawn with JavaScript, and shared data. Here's how it looks in the Washington Post.
In [18]:
Image(url='http://i.imgur.com/XjvtYMr.png')
Out[18]:
It's fun to zoom. Then double-click to re-size.
In [19]:
HTML('<br><center><iframe class="vine-embed" src="https://vine.co/v/Mvzin6HZzLB/embed/simple" width="600" height="600" frameborder="0"></iframe><script async src="//platform.vine.co/static/scripts/embed.js" charset="utf-8"></script></center><br>')
Out[19]:
Plots can be collaboratively edited and shared with others.
In [20]:
Image(url='https://i.imgur.com/CxIYtzG.png')
Out[20]:
So you can keep all your plots for your project, team, or personal work in one plce, you get a profile, like this: https://plot.ly/~jackp/.
In [21]:
Image(url='https://i.imgur.com/gUC4ajR.png')
Out[21]:
You can also plot with Plotly with pandas, NumPy, datetime, and more of your favorite Python tools. We've already imported numpy and matplotlib; here we've kept them in so you can simply copy and paste these examples into your own NB.
In [22]:
fig3 = plt.figure()
import numpy as np
import matplotlib.pyplot as plt
# make a little extra space between the subplots
dt = 0.01
t = np.arange(0, 30, dt)
nse1 = np.random.randn(len(t)) # white noise 1
nse2 = np.random.randn(len(t)) # white noise 2
r = np.exp(-t/0.05)
cnse1 = np.convolve(nse1, r, mode='same')*dt # colored noise 1
cnse2 = np.convolve(nse2, r, mode='same')*dt # colored noise 2
# two signals with a coherent part and a random part
s1 = 0.01*np.sin(2*np.pi*10*t) + cnse1
s2 = 0.01*np.sin(2*np.pi*10*t) + cnse2
plt.subplot(211)
plt.plot(t, s1, 'b-', t, s2, 'g-')
plt.xlim(0,5)
plt.xlabel('time')
plt.ylabel('s1 and s2')
plt.grid(True)
plt.subplot(212)
cxy, f = plt.csd(s1, s2, 256, 1./dt)
plt.ylabel('CSD (db)')
py.iplot_mpl(fig3)
Another subplotting example using Plotly's defaults.
In [23]:
fig4 = plt.figure()
from pylab import figure, show
from numpy import arange, sin, pi
t = arange(0.0, 1.0, 0.01)
fig = figure(1)
ax1.plot(t, sin(2*pi*t))
ax1.grid(True)
ax1.set_ylim( (-2,2) )
ax1.set_ylabel('1 Hz')
ax1.set_title('A sine wave or two')
for label in ax1.get_xticklabels():
label.set_color('r')
ax2.plot(t, sin(2*2*pi*t))
ax2.grid(True)
ax2.set_ylim( (-2,2) )
l = ax2.set_xlabel('Hi mom')
l.set_color('g')
l.set_fontsize('large')
py.iplot_mpl(fig4, strip_style = True)
From the gallery here we're shwoing Anscombe's quartet. You might also like Plotly's blog post on the subject.
In [24]:
fig5 = plt.figure()
from __future__ import print_function
"""
Edward Tufte uses this example from Anscombe to show 4 datasets of x
and y that have the same mean, standard deviation, and regression
line, but which are qualitatively different.
matplotlib fun for a rainy day
"""
from pylab import *
x = array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5])
y1 = array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68])
y2 = array([9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74])
y3 = array([7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73])
x4 = array([8,8,8,8,8,8,8,19,8,8,8])
y4 = array([6.58,5.76,7.71,8.84,8.47,7.04,5.25,12.50,5.56,7.91,6.89])
def fit(x):
return 3+0.5*x
xfit = array( [amin(x), amax(x) ] )
subplot(221)
plot(x,y1,'ks', xfit, fit(xfit), 'r-', lw=2)
axis([2,20,2,14])
setp(gca(), xticklabels=[], yticks=(4,8,12), xticks=(0,10,20))
text(3,12, 'I', fontsize=20)
subplot(222)
plot(x,y2,'ks', xfit, fit(xfit), 'r-', lw=2)
axis([2,20,2,14])
setp(gca(), xticklabels=[], yticks=(4,8,12), yticklabels=[], xticks=(0,10,20))
text(3,12, 'II', fontsize=20)
subplot(223)
plot(x,y3,'ks', xfit, fit(xfit), 'r-', lw=2)
axis([2,20,2,14])
text(3,12, 'III', fontsize=20)
setp(gca(), yticks=(4,8,12), xticks=(0,10,20))
subplot(224)
xfit = array([amin(x4),amax(x4)])
plot(x4,y4,'ks', xfit, fit(xfit), 'r-', lw=2)
axis([2,20,2,14])
setp(gca(), yticklabels=[], yticks=(4,8,12), xticks=(0,10,20))
text(3,12, 'IV', fontsize=20)
#verify the stats
pairs = (x,y1), (x,y2), (x,y3), (x4,y4)
for x,y in pairs:
print ('mean=%1.2f, std=%1.2f, r=%1.2f'%(mean(y), std(y), corrcoef(x,y)[0][1]))
py.iplot_mpl(fig5, strip_style = True)
mean=7.50, std=1.94, r=0.82
mean=7.50, std=1.94, r=0.82
mean=7.50, std=1.94, r=0.82
mean=7.50, std=1.94, r=0.82
And a final histogram from the matplotlib gallery.
In [25]:
fig6 = plt.figure()
import numpy as np
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
# example data
mu = 100 # mean of distribution
sigma = 15 # standard deviation of distribution
x = mu + sigma * np.random.randn(10000)
num_bins = 50
# the histogram of the data
n, bins, patches = plt.hist(x, num_bins, normed=1, facecolor='green', alpha=0.5)
# add a 'best fit' line
y = mlab.normpdf(bins, mu, sigma)
plt.plot(bins, y, 'r--')
plt.xlabel('Smarts')
plt.ylabel('Probability')
# Tweak spacing to prevent clipping of ylabel
py.iplot_mpl(fig6, strip_style = True)
Want to see more matplotlylib graphs? Head over to our API and copy and paste away.
In [26]:
Image(url='https://i.imgur.com/HEJEnjQ.png')
Out[26]:
An exciting package by Greg Lamp and the team at ŷhat is ggplot for Python. You can draw figures with ggplot's wonderful syntax and share them with Plotly. You'll want to run $pip install ggplot to get started. In [27]: from ggplot import * We'll start out with a plot from the diamonds dataset. In [28]: a = ggplot(aes(x='price'), data=diamonds) + geom_histogram() + facet_wrap("cut") Then share it to Plotly. In [52]: fig = a.draw() py.iplot_mpl(fig, strip_style=True) Line charts can be interactive (drag your mouse along the line to see the data on the hover). In [30]: b = ggplot(aes(x='date', y='beef'), data=meat) + \ geom_line() In [31]: fig = b.draw() py.iplot_mpl(fig) Histograms are also fun to hover over to get the exact data. In [32]: c = ggplot(aes(x='price'), data=diamonds) + geom_histogram() + ggtitle('My Diamond Histogram') In [33]: fig = c.draw() py.iplot_mpl(fig, strip_style=True) In [34]: d = ggplot(aes(x='x', y='y', color='z'), data=diamonds.head(1000)) +\ geom_point() In [35]: fig = d.draw() py.iplot_mpl(fig, strip_style=True) You can also use more advanced plotting types in collaboration with pandas. You can add a geom. In [36]: import pandas as pd In [37]: random_walk1 = pd.DataFrame({ "x": np.arange(100), "y": np.cumsum(np.random.choice([-1, 1], 100)) }) random_walk2 = pd.DataFrame({ "x": np.arange(100), "y": np.cumsum(np.random.choice([-1, 1], 100)) }) e = ggplot(aes(x='x', y='y'), data=random_walk1) + \ geom_step() + \ geom_step(aes(x='x', y='y'), data=random_walk2) In [38]: fig = e.draw() py.iplot_mpl(fig, strip_style=True) III. Prettyplotlib graphs in Plotly¶ The lovely gallery of examples from prettyplotlib, a matplotlib enhnacing library by Olga Botvinnik, is a fun one to make interactive. Here's a scatter; let us know if you make others. You'll note that not all elements of the styling come through. Head over to the homepage for documentation. In [39]: fig12 = plt.figure() import prettyplotlib as ppl # Set the random seed for consistency np.random.seed(12) # Show the whole color range for i in range(8): x = np.random.normal(loc=i, size=800) y = np.random.normal(loc=i, size=800) ax = ppl.scatter(x, y, label=str(i)) ppl.legend(ax) ax.set_title('prettyplotlib scatter') ax.legend().set_visible(False) py.iplot_mpl(fig12) And another prettyplotlib example. In [40]: fig13 = plt.figure() import prettyplotlib as ppl # Set the random seed for consistency np.random.seed(12) # Show the whole color range for i in range(8): y = np.random.normal(size=1000).cumsum() x = np.arange(1000) # Specify both x and y ppl.plot(x, y, label=str(i), linewidth=0.75) py.iplot_mpl(fig13) IV. Plotting with seaborn¶ Another library we really dig is seaborn, a library to maximize aesthetics of matplotlib plots. It's by by Michael Waskom. You'll need to install it with $ pip install seaborn, and may need to import six, which you can do from pip. The styling isn't yet translated to Plotly, so we'll go to Plotly's default settings.
In [41]:
import seaborn as sns
from matplotlylib import fig_to_plotly
In [42]:
def sinplot(flip=1):
x = np.linspace(0, 14, 100)
for i in range(1, 7):
plt.plot(x, np.sin(x + i * .5) * (7 - i) * flip)
In [43]:
fig14 = plt.figure()
sns.set_style("dark")
sinplot()
py.iplot_mpl(fig14, strip_style = True)
You can also run subplots like this.
In [44]:
fig15 = plt.figure()
with sns.axes_style("darkgrid"):
plt.subplot(211)
sinplot()
plt.subplot(212)
sinplot(-1)
py.iplot_mpl(fig15, strip_style = True)
And a final example, combining plot types.
In [45]:
import numpy as np
from numpy.random import randn
import pandas as pd
from scipy import stats
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
In [46]:
fig16 = plt.figure()
sns.set_palette("hls")
mpl.rc("figure", figsize=(8, 4))
data = randn(200)
sns.distplot(data);
py.iplot_mpl(fig16, strip_style = True)
We love Stack Overflow, so wanted answer a few questions from there, in Plotly. If you want to plot data you already have as a histogram and make it interactive, try this one out.
In [47]:
fig17 = plt.figure()
import matplotlib.pyplot as plt
import numpy as np
mu, sigma = 100, 15
x = mu + sigma * np.random.randn(10000)
hist, bins = np.histogram(x, bins=50)
width = 0.7 * (bins[1] - bins[0])
center = (bins[:-1] + bins[1:]) / 2
plt.bar(center, hist, align='center', width=width)
py.iplot_mpl(fig17, strip_style = True)
Here is how to create a density plot like you might in R, but in matplotlib.
In [48]:
fig18 = plt.figure()
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import gaussian_kde
data = [1.5]*7 + [2.5]*2 + [3.5]*8 + [4.5]*3 + [5.5]*1 + [6.5]*8
density = gaussian_kde(data)
xs = np.linspace(0,8,200)
density.covariance_factor = lambda : .25
density._compute_covariance()
plt.plot(xs,density(xs))
py.iplot_mpl(fig18, strip_style = True)
Drawing a simple example of different lines for different plots looks like this...
In [49]:
fig19 = plt.figure()
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(10)
plt.plot(x, x)
plt.plot(x, 2 * x)
plt.plot(x, 3 * x)
plt.plot(x, 4 * x)
py.iplot_mpl(fig19, strip_style = True)
...and can get more exciting like this.
In [50]:
fig20 = plt.figure()
import matplotlib.pyplot as plt
import numpy as np
num_plots = 10
# Have a look at the colormaps here and decide which one you'd like:
# http://matplotlib.org/1.2.1/examples/pylab_examples/show_colormaps.html
colormap = plt.cm.gist_ncar
plt.gca().set_color_cycle([colormap(i) for i in np.linspace(0, 0.9, num_plots)])
# Plot several different functions...
x = np.arange(10)
labels = []
for i in range(1, num_plots + 1):
plt.plot(x, i * x + 5 * i)
labels.append(r'$y = %ix + %i$' % (i, 5*i))
py.iplot_mpl(fig20, strip_style = True)
Plotly also lets you draw variables as subscripts in math mode.
In [51]:
fig21 = plt.figure()
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.mlab as mlab
mean = [10,12,16,22,25]
variance = [3,6,8,10,12]
x = np.linspace(0,40,1000)
for i in range(4):
sigma = np.sqrt(variance[i])
y = mlab.normpdf(x,mean[i],sigma)
plt.plot(x,y, label=r'$v_{}$'.format(i+1))
plt.xlabel("X")
plt.ylabel("P(X)")
py.iplot_mpl(fig21, strip_style = True)
In [1]:
# CSS styling within IPython notebook
from IPython.core.display import HTML
import urllib2
def css_styling():
url = 'https://raw.githubusercontent.com/plotly/python-user-guide/master/custom.css'
|
2017-03-25 17:44:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19098803400993347, "perplexity": 8787.189732334162}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189031.88/warc/CC-MAIN-20170322212949-00360-ip-10-233-31-227.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/2486920/number-of-solutions-of-sin-2x-cos-2x-sin-x-cos-x-1
|
Number of solutions of $\sin (2x)+\cos (2x)+\sin x+\cos x=1$
Find Number of solutions of $\sin (2x)+\cos (2x)+\sin x+\cos x=1$ in $\left [0 \:\: 2\pi\right]$
The equation can be written as:
$$\sin (2x)+1-2 \sin^2x+\sin x+\cos x=1$$ $\implies$
$$\sin x+\cos x=2\sin^2 x-2 \sin x\cos x$$ $\implies$
$$\sin x+\cos x=2\sin x\left(\sin x-\cos x\right)$$
$\implies$
$$\frac{\sin x+\cos x}{\sin x-\cos x}=2\sin x$$ $\implies$
$$\frac{1+\tan x}{1-\tan x}=-2\sin x$$
$$\tan \left(\frac{\pi}{4}+x\right)=-2\sin x$$
Now i have drawn the graphs of $\tan \left(\frac{\pi}{4}+x\right)$ and $-2\sin x$ and observed there are two solutions.
is there any other way?
Another way would be to let $t=\tan(\frac x2)$ and arrive to $$t^4+2 t^3+8 t^2-6 t-1=0$$ Now, using the formulae for the quartic equation, the discriminant is $\Delta=-309248$ which shows that the equation has two distinct real roots and two complex conjugate non-real roots.
I think your reasoning with graphs is not so right because if so,
why not to draw the graph of $f(x)=\sin2x+\cos2x+\sin{x}+\cos{x}-1$?
By the Claude's hint from your equation $$\frac{1+\tan{x}}{1-\tan{x}}=-2\sin{x}$$ after substitution $\tan\frac{x}{2}=t$ we obtain $$\frac{1+\frac{2t}{1-t^2}}{1-\frac{2t}{1-t^2}}=-2\cdot\frac{2t}{1+t^2}$$ or $$t^4+2t^3+8t^2-6t-1=0.$$
Now, let $$f(t)=t^4+2t^3+8t^2-6t-1.$$ Thus, $$f''(t)=12t^2+12t+16>0,$$ which says that $f$ is a convex function.
Hence, a graph of $f$ and the $t$-axis have two common points maximum.
But $f(0)<0$, which says that $f$ has two roots exactly and since the period of $\tan$ is $\pi$, we get that the starting equation has two roots exactly on $[0,2\pi]$.
Done!
• why my reasoning is wrong? its very difficult to draw original expression – Ekaveera Kumar Sharma Oct 24 '17 at 9:59
• @Ekaveera Kumar Sharma I said that it's not so right because you can not draw these graphs exactly. By the way, if you can make this then to draw graphs of $y=\sin{x}$, $y=\cos{x}$, $y=\sin2x$ and $y=\cos2x$ and to add, it's also possible, but it's not gives the solution of your problem, I think. – Michael Rozenberg Oct 24 '17 at 10:03
|
2019-07-19 01:27:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8601913452148438, "perplexity": 172.28973317970448}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525973.56/warc/CC-MAIN-20190719012046-20190719034046-00029.warc.gz"}
|
https://socratic.org/questions/a-container-containing-5-00-l-of-a-gas-is-collected-at-100-k-and-then-allowed-to
|
# A container containing 5.00 L of a gas is collected at 100 K and then allowed to expand to 20.0 L. What must the new temperature be in order to maintain the same pressure?
Apr 3, 2017
400 Kelvin (K).
#### Explanation:
We are given the initial volume and initial temperature (in Kelvins, so no need to convert from Celsius to Kelvin). We are also given the final volume.
In short, we have:
${V}_{1}$: 5.00 L
${T}_{1}$: 100 K
${V}_{2}$: 20.0 L
${T}_{2}$: ?
Charles' Law:
${V}_{1} / {T}_{1}$=${V}_{2} / {T}_{2}$
We have 3 of the 4 variables, and if we fill those in, we have...
$\frac{5.00}{100}$=$\frac{20.0}{T} _ 2$
Using the algebra skill of cross multiplying, we can get...
5.00${T}_{2}$=2000
Dividing both sides by 5.00 to isolate the ${T}_{2}$, we get...
${T}_{2}$= 400. K
It is 400. due to the significant figure rule.
|
2019-11-22 09:30:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 11, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7999462485313416, "perplexity": 1717.4752716032274}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671249.37/warc/CC-MAIN-20191122092537-20191122120537-00005.warc.gz"}
|
http://www.aimspress.com/article/10.3934/math.2020210
|
AIMS Mathematics, 2020, 5(4): 3274-3283. doi: 10.3934/math.2020210.
Research article
Export file:
Format
• RIS(for EndNote,Reference Manager,ProCite)
• BibTex
• Text
Content
• Citation Only
• Citation and Abstract
Involution on prime rings with endomorphisms
1 Department of Mathematics, Faculty of Science & Arts-Rabigh, King Abdulaziz University, Saudi Arabia
2 Department of Mathematics, Aligarh Muslim University, Aligarh-202002, India
## Abstract Full Text(HTML) Figure/Table Related pages
Let $\mathcal{R}$ be a prime ring with involution $'*'$ and $\psi: \mathcal{R} \rightarrow \mathcal{R}$ be an endomorphism on $\mathcal{R}$. In this article, we study the action of involution $'*',$ and the effect of endomorphism $\psi$ satisfying $[\psi(x),\psi(x^*)]-[x,x^*]\in \mathcal{Z}(\mathcal{R})$ for all $x\in \mathcal{R}$. In particular, we prove that any centralizing involution on prime rings with involution of characteristic different from two is of the first kind or $\mathcal{R}$ satisfies $s_4$, the standard polynomial identity in four variables. Further, we establish that if a prime ring $\mathcal{R}$ with involution of characteristic different from two admits a non-trivial endomorphism $\psi$ such that $[\psi(x),\psi(x^*)]-[x,x^*]\in \mathcal{Z}(\mathcal{R})$ for all $x\in \mathcal{R}$, then the involution is of the first kind or $\mathcal{R}$ satisfies $s_4$ and $[\psi(x), x]=0$ for all $x\in \mathcal{R}$.
Figure/Table
Supplementary
Article Metrics
Citation: Abdul Nadim Khan, Shakir Ali. Involution on prime rings with endomorphisms. AIMS Mathematics, 2020, 5(4): 3274-3283. doi: 10.3934/math.2020210
References
• 1. S. Ali, N. A. Dar, On *-centralizing mappings in rings with involution, Georgian Math. J., 21 (2014), 25-28.
• 2. S. Ali, N. A. Dar, On centralizers of prime rings with involution, Bull. Iranian Math. Soc., 41 (2015), 1465-1475.
• 3. S. Ali, N. A. Dar, A. N. Khan, On strong commutativity preserving like maps in rings with involution, Miskolc Math. Notes, 16 (2015), 17-24.
• 4. H. E. Bell, M. N. Daif, On commutativity and strong commutativity preserving maps, Can. Math. Bull., 37 (1994), 443-447.
• 5. H. E. Bell, G. Mason, On derivations in near rings and rings, Math. J. Okayama Univ., 34 (1992), 135-144.
• 6. M. Brešar, Commuting maps: a survey, Taiwanese J. Math., 8 (2004), 361-397.
• 7. M. Brešar, Centralizing mappings and derivations in prime rings, J. Algebra, 156 (1993), 385-394.
• 8. M. Brešar, Commuting traces of biadditive mappings, commutativity preserving mappings and Lie mappings, T. Am. Math. Soc., 335 (1993), 525-546.
• 9. M. Brešar, C. R. Miers, Strong commutativity preserving mappings of semiprime rings, Can. Math. Bull., 37 (1994), 457-460.
• 10. M. Brešar, C. R. Miers, Strong commutativity preserving maps of semiprime rings, Can. Math. Bull., 37 (1994), 457-460.
• 11. N. A. Dar, S. Ali, On *-commuting mapping and derivations in rings with involution, Turk. J. Math., 40 (2016), 884-894.
• 12. N. A. Dar, A. N. Khan, Generalized derivations on rings with involution, Algebr. Colloq., 24 (2017), 393-399.
• 13. Q. Deng, M. Ashraf, On strong commutativity preserving maps, Results Math., 30 (1996), 259-263.
• 14. V. De Fillipis, G. Scudo, Strong commutativity and Engel condition preserving maps in prime and semiprime rings, Linear and Multilinear Algebra, 61 (2013), 917-938.
• 15. I. N. Herstein, A note on derivations II, Can. Math. Bull., 22 (1979), 509-511.
• 16. I. N. Herstein, Rings with involution, University of Chicago Press, Chicago, 1976.
• 17. T. K. Lee, T. L. Wong, Nonadditive strong commutativity preserving maps, Comm. Algebra, 40 (2012), 2213-2218.
• 18. T. K. Lee, P. H. Lee, Derivations centralzing symmetric or skew symmetric elements, Bull. Inst. Math., 14 (1986), 249-256.
• 19. P. K. Liu, C. K. Liau, Strong commutativity preserving generalized derivations on Lie ideals, Linear Multilinear Algebra, 59 (2011), 905-915.
• 20. P. K. Liau, W. L. Huang, C. K. Liu, Nonlinear strong commutativity preserving maps on skew elements of prime rings with involution, Linear Algebra Appl., 436 (2012), 3099-3108.
• 21. C. K. Liu, Strong commutativity preserving maps on subsets of matrices that are not closed under addition, Linear Algebra Appl., 458 (2014), 280-290.
• 22. C. K. Liu, Strong commutativity preserving generalized derivations on right ideals, Monatsh. Math., 166 (2012), 453-465.
• 23. C. K. Liu, On Skew derivations in semiprime Rings, Algebra Represent Th., 16 (2013), 1561-1576.
• 24. J. S. Lin, C. K. Liu, Strong commutativity preserving maps on Lie ideals, Linear Algebra Appl., 428 (2008), 1601-1609.
• 25. J. S. Lin, C. K. Liu, Strong commutativity preserving maps in prime rings with involution, Linear Algebra Appl., 432 (2010), 14-23.
• 26. A. Mamouni, L. Oukhtite, H. Elmir, New classes of endomorphisms and some classification theorems, Comm. Algebra, 48 (2020), 71-82.
• 27. A. Mamouni, L. Oukhtite, B. Nejjar, et al. Some commutativity criteria for prime rings with differential identities on Jordan ideals, Comm. Algebra, 47 (2019), 355-361.
• 28. A. Mamouni, B. Nejjar, L. Oukhtite, Differential identities on prime rings with involution, J. Algebra Appl., 17 (2018), 1850163.
• 29. B. Nejjar, A. Kacha, A. Mamouni, et al. Commuatitivity theorems in rings with involution, Comm. Algebra, 45 (2016), 698-708.
• 30. J. Ma, X. W. Xu, F. W. Niu, Strong commutativity preserving generalized derivations on semiprime rings, Acta Math. Sin., 24 (2008), 1835-1842.
• 31. X. Qi, J. Hou, Strong commutativity presrving maps on triangular rings, Oper. Matrices, 6 (2012), 147-158.
• 32. P. Šemrl, Commutativity preserving maps, Linear Algebra Appl., 429 (2008), 1051-1070.
• 33. W. Watkins, Linear maps that preserve commuting pairs of matrices, Linear Algebra Appl., 14 (1976), 29-35.
• 34. O. A. Zemani, L. Oukhtite, S. Ali, et al. On certain classes of generalized derivations, Math. Slovaca., 69 (2019), 1023-1032.
|
2020-07-15 07:50:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7340466976165771, "perplexity": 5757.266082828397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657163613.94/warc/CC-MAIN-20200715070409-20200715100409-00096.warc.gz"}
|
https://www.physicsforums.com/threads/proof-of-additive-property-for-sine.371453/
|
# Homework Help: Proof of additive property for sine
1. Jan 21, 2010
### jtblizard
1. The problem statement, all variables and given/known data
We are supposed to prove that sin(x+y) = cos(x)sin(y) + sin(x)cos(y)
2. Relevant equations
cos(A-pi/2) - sin(A)
sin(pi/2 - A) = Cos(A)
sin(A-pi/2) = -cos(A)
3. The attempt at a solution
We had to prove all of the relevant equations but were allowed to work in groups and now that I am alone, I am just having trouble pushing off and getting started. If someone can give me a push in the right direction, I feel like I will be able to finish it on my own. Thank you in advance.
2. Jan 21, 2010
### Hurkyl
Staff Emeritus
The cited relevant equations aren't enough to do this proof. Is there anything else useful you can use?
You said you've proven similar things in a group -- what sorts of things? And how were they proven? Can you do something similar?
3. Jan 21, 2010
### jtblizard
We also have the difference property for Cosine: cos(A-B)=cos(A)cos(B)+sin(A)sin(B) and we have that same thing written with x instead of A and once with x instead of A and with y-pi/2 instead of B. The things listed under relevant equations are the things we proved in class. Once, we used a triangle with one angle equal to A in order to prove sin(pi/2 - A) = cos(A) and we know that sin(-x) = -sin(x)
4. Jan 22, 2010
### HallsofIvy
If you know that cos(A- B)= cos(A)cos(B)+ Sin(A)sin(B) then you also know that cos(A+ B)= cos(A- (-B))= cos(A)cos(-B)+ sin(A)sin(-B). And since cos(-B)= cos(B) and sin(-B)= -sin(B), cos(A+ B)= cos(A)cos(B)- sin(A)sin(B). Now use the fact that $sin^2(\theta)= 1- cos^2(\theta)$ with $\theta= x+ y$.
5. Jan 22, 2010
### LCKurtz
Another approach is to write
$$\sin{(a+b)} = \cos{(\frac \pi 2 - (a + b))} = \cos{((\frac \pi 2 - a)-b))}$$
and use the cosine formula.
|
2018-09-24 11:12:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5377501845359802, "perplexity": 1346.081212923117}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160400.74/warc/CC-MAIN-20180924110050-20180924130450-00177.warc.gz"}
|
https://darcynorman.net/2010/10/29/easy-server-monitoring-script/
|
# Easy server monitoring script
I manage or monitor a few servers, and it's a good idea to keep an eye on how they're holding up. The Linux uptime command is all I need - show me how long the server's been up (or if it's cycled recently - power hiccup?), and CPU load averages.
I just whipped up a dead-simple solution to let me embed the uptime reports from the servers into my retro homepage.
I'm quite sure there are better and/or more robust ways to do this, but this is what I came up with after maybe 2 minutes of thought.
On each of the servers, I added a shell script called "uptimewriter.sh" (in my ~/bin directory, so located at ~/bin/uptimewriter.sh). I made the file executable (chmod +x ~/bin/uptimewriter.sh) and used this script:
#!/bin/sh
echo "document.write('"uptime"');"
All it does is wrap the output of the uptime command in some javascript code to display the text when embedded on a web page. I then added it to the crontab on each of the servers, running every 15 minutes, and dumping the output into a file that will be visible via the webserver.
*/15 * * * * ~/bin/uptimewriter.sh > ~/public_html/uptime.js 2>&1
Every 15 minutes, the uptimewriter.sh script is run, and output into a javascript file that can be pulled to display on a web page.
When called from a web page, that will render the output of the uptime command, wrapped in a document.write() call as per the uptimewriter.sh script, displaying it nicely:
|
2019-12-15 10:42:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35630396008491516, "perplexity": 2235.289181630136}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541307813.73/warc/CC-MAIN-20191215094447-20191215122447-00556.warc.gz"}
|
https://gamedev.stackexchange.com/questions/138877/javascript-movement-of-player
|
# JavaScript movement of player
Okay so I am quite new, I've got the player to move using up, down, right, and left. But what I am trying to do is make the character move with a left click.
var Player = function(id){
var self = {
x:250,
y:250,
id:id,
number:"" + Math.floor(10 * Math.random()),
pressingRight:false,
pressingLeft:false,
pressingUp:false,
pressingDown:false,
maxSpd:10,
}
self.updatePosition = function(){
if(self.pressingRight)
self.x += self.maxSpd;
if(self.pressingLeft)
self.x -= self.maxSpd;
if(self.pressingUp)
self.y -= self.maxSpd;
if(self.pressingDown)
self.y += self.maxSpd;
}
return self;
}
That's what I have for the up, down, left, right movement. I've seen a few documents stating the click but just can't seem to implement it into the file.
• Move to the location you clicked? Or did you mean something else? – Engineer Mar 19 '17 at 15:27
• That's right, Instead of the left,right,up,down that i have i want to make it so that the player clicks to move to the location clicked. – CheekyTrooper Mar 19 '17 at 15:39
## 1 Answer
To do that, you'll have to look into AI pathfinding. "Hold up," you say, "I don't even have any enemies yet! How can I need AI?".
Once you start talking about "go where I click" then you have to deal with a great deal more than is immediately obvious: obstacle avoidance, finding out whether there even is a path (imagine you're between 4 walls), best path etc. for your player to actually be able to get where you click. After all, you're not just moving as the crow flies... are you?
I suggest starting with basic A* pathfinding (note: there are many, many sources of info on this around the web). You can also try hill-climbing which in some ways is a little simpler to get working - depending on how simple your game is / how many enemies are involved. The good news is you will then have the ability to get your enemies to also go wherever you want them.
• Thankyou very much, I shall look into this – CheekyTrooper Mar 19 '17 at 15:56
|
2020-08-11 11:21:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27856141328811646, "perplexity": 1076.7435214177956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738746.41/warc/CC-MAIN-20200811090050-20200811120050-00162.warc.gz"}
|
https://www.aminer.org/pub/5ea6adfa91e011a546871bb3/point-location-and-active-learning-learning-halfspaces-almost-optimally
|
# Point Location and Active Learning: Learning Halfspaces Almost Optimally
Mahajan Gaurav
Abstract:
Given a finite set $X \subset \mathbb{R}^d$ and a binary linear classifier $c: \mathbb{R}^d \to \{0,1\}$, how many queries of the form $c(x)$ are required to learn the label of every point in $X$? Known as \textit{point location}, this problem has inspired over 35 years of research in the pursuit of an optimal algorithm. Building on the...More
Code:
Data:
|
2020-11-28 02:16:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6384004950523376, "perplexity": 781.2778830523572}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141194982.45/warc/CC-MAIN-20201128011115-20201128041115-00715.warc.gz"}
|
https://www.physicsforums.com/threads/electromagnetic-force-in-electric-motor.152442/
|
Electromagnetic force in electric motor
1. batsan
9
Suppose we have a DC motor with permanent magnet. Moreover, the armature of motor is groove rotor. The elementary electromagnetic force is given by: dF=jxBdv, where j is electric current density , x is vector multiplication , B - magnetic flux density , dv - elementary volume. If we integrate by all the volume, we will calculate only zero, because there is current density j in the groove , but there is no magnetic field B. Otherwise ,there is B in the steel , but no j.
How you can explain that?
Last edited: Jan 22, 2007
2. Meir Achuz
2,076
There is B in the groove.
3. batsan
9
Are you sure?
The magnetic resistance of the groove materials, such as copper, insulation ,air etc is many times more then steel. The magnetic flux pass around the groove, but not inside it.
4. Meir Achuz
2,076
The hig magnetic reluctance (not resistance) of those materials is what permits the B field to enter the groove. That is why steel is not used for the armature.
9
6. Meir Achuz
2,076
The picture is colorful, but I don't think it is relevant to your question or my answer.
7. batsan
9
Yes,this is AC motor, but formula for its force must be the same. As we see color of groove is blue and it corresponds to B=0.
8. Meir Achuz
2,076
The picture is colorful, but I don't think it is relevant to your question or my answer, since B is not zero in the groove however they color it.
|
2015-11-27 01:08:03
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8894168734550476, "perplexity": 1448.1123861122758}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447881.87/warc/CC-MAIN-20151124205407-00284-ip-10-71-132-137.ec2.internal.warc.gz"}
|
http://physics.stackexchange.com/tags/integration/hot
|
Tag Info
36
It is exactly because we have a factor of $\frac 1 2$ in the area formula of a triangle. To understand what I'm saying, consider what is the $v(t)$ graph of a particle under constant acceleration. Some say, a good plot is worth a million words! :)
18
The result you've got would be better known as this: $$\int_0^t\biggl(\int_0^{t'} a\mathrm{d}t''\biggr)\mathrm{d}t' = \frac{1}{2}at^2$$ In other words, it's a derivation of the formula for uniformly accelerated motion. This derivation, or something like it, is one of the first things students in a good calculus-based introductory physics class learn. The ...
15
Take a look at the notes on lectures 1 and 2 of Geometric Numerical Integration found here. Quoting from Lecture 2 A numerical one-step method $y_{n+1} = \Phi_h(y_n)$ is called symplectic if, when applied to a Hamiltonian system, the discrete flow $y \mapsto \Phi_h(y)$ is a symplectic transformation for all sufficiently small step sizes. From your ...
12
No, it cannot be enough. Stokes' theorem says that the volume ($\Omega$) integral of $d\omega$, a form that is the exterior derivative of another one (of $\omega$), may be written as a surface integral. But it doesn't allow us to rewrite the volume integral of a general integrand (which isn't the exterior derivative of anything) such as the Lagrangian ...
9
The $\delta$ function is not continuous, so it's a priori not differentiable. In fact, it's not even well-defined as an ordinary real-valued function, but can be made so in terms of distributions - linear maps on a space of test functions given by $f\mapsto\int\delta f=f(a)$. It's possible to sensibly define derivatives of distributions by looking at ...
8
There are two that I know of in the context of state estimation. The first is for estimating the mean of $P$ and is a Metropolis-Hasting MCMC algorithm here: Optimal, reliable estimation of quantum states. The second is also mainly for computing the mean (but can do other functions -- including the characteristic function of the region you are interested ...
7
An important example in quantum mechanics is e.g. the Hilbert space $$H~=~L^2(\mathbb{R}^3)$$ of Lebesgue square integrable wave functions $\psi$ in the position space $\mathbb{R}^3$. The Lebesgue square integrable functions (as opposed to just the Riemann square integrable functions) are needed to complete the Hilbert space with respect to the square ...
7
It's an integral over a closed line (e.g. a circle), see line integral. In particular, it is used in complex analysis for contour integrals (i.e closed lines on a complex plane), see e.g. example pointed out by Lubos. Also, it is used in real space, e.g. in electromagnetism, in Faraday's law of induction (part of the Maxwell equations, written in an ...
7
The short answer is that the two principal value definitions agree on sufficiently well-behaved functions, but may disagree on sufficiently singular functions. For instance, on one hand $$\lim_{\epsilon\searrow 0} \int_{\mathbb{R}\backslash[-\epsilon,\epsilon]} \frac{\mathrm{d}x}{x^3}~=~0$$ is zero, while on the other hand $$\lim_{\epsilon\searrow 0} ... 7 Here we will assume that OP is not questioning the fundamental physical principles/postulates/axioms of quantum mechanics, such as, e.g., the need to have a Hilbert space H in the first place, etc; and that OP is only pondering the role of L^2-spaces (as opposed to, e.g., L^1-spaces). Let us for concreteness and simplicity consider the 3-dimensional ... 7 If the functional derivative$$\tag{1} \frac{\delta F[\phi]}{\delta\phi^{\alpha}(x)} $$exists (wrt. to a certain choice of boundary conditions), it obeys infinitesimally$$\tag{2}\delta F ~:=~ F[\phi+\delta\phi]- F[\phi] ~=~\int_M \!dx\sum_{\alpha\in J} \frac{\delta F[\phi]}{\delta\phi^{\alpha}(x)}\delta\phi^{\alpha}(x). $$OP's functional integral ... 6 Technically, the equation$$d = \frac{\mathrm{d}x}{\mathrm{d}t}t + \frac{\mathrm{d}^2x}{\mathrm{d}t^2}\frac{t^2}{2}$$is not right. Instead, for constant acceleration, you need$$d = \left(\left.\frac{\mathrm{d}x}{\mathrm{d}t}\right|_0\right) t + \left(\left.\frac{\mathrm{d}^2x}{\mathrm{d}t^2}\right|_0\right) \frac{t^2}{2}$$In other words, a quantity ... 6 It depends what you want to calculate. As you rightly note, delta functions are not dimensionless, so that including one in your integral will change its dimensionality: you will be calculating something rather different! Most of the time this won't matter if you do it right, but you do need to think about what you want to calculate. The integral \int ... 6 This is not an equality, strictly speaking. Looks like your lecturer used spherical coordinates. If the integrand is spherically symmetric, i.e. it only depends on the magnitude of \mathbf{p}, then the integration over the angular coordinates is trivial and just gives you the solid angle subtended by a sphere, 4\pi. 5 It's an integral over a closed contour (which is topologically a circle). An example from Wikipedia:$$ \begin{align} \oint_C {1 \over z}\,dz & {} = \int_0^{2\pi} {1 \over e^{it}} \, ie^{it}\,dt = i\int_0^{2\pi} 1 \,dt \\ & {} = \Big[t\Big]_0^{2\pi} i=(2\pi-0)i = 2\pi i, \end{align} . $$5 Sorry, a solid angle is something different than an ordinary angle, see http://en.wikipedia.org/wiki/Solid_angle so it is not measured "with respect to anything". Solid angle \Omega measures the size of a set of directions in the 3-dimensional space via the formula$$ \Omega = \frac{A}{R^2} $$where A is the area of the intersection of all these ... 5 There's no paradox. We are on physics stackexchange, not mathematics stack exchange. Non-measurable sets are purely mathematical concepts that cannot be physically instantiated. Any medium in our universe is either made out of particles that are discrete or fields which, as far as we know, can be modeled as being continuous in our 4 dimensional space-time. ... 5 1) As OP basically notes, an n-dimensional delta function transforms under change of variables f:\mathbb{R}^n \to \mathbb{R}^n with (the absolute value of) an inverse Jacobian$$ \tag{1} \delta^n(f(x))~=~ \sum_{x_{(0)},f(x_{(0)})=0 }\frac{1}{|\det(\partial f(x_{(0)}))|} \delta^n(x-x_{(0)}), $$where the sum \sum is over all zeroes x_{(0)} of f, ... 5 That's equivalent simply to c\int dx/x. Switch to the Euclidean spacetime, k_0=ik_4 where (k_1,\dots k_4) is k_E; i.e. analytically continue in k_0 (Wick rotation). The integral is$$\int \frac{i\cdot d^4 k_E}{(2\pi)^4} \frac{1}{(k_E^2)^2} \exp(ik\cdot \epsilon)$$So it's proportional to the Fourier transform of 1/k_E^4. The original function is ... 4 Note that the right spelling is "principal value". The formulae aren't identical but the results are the same whenever both definitions yield a well-defined expression. What matters is that we remove the leading logarithmic divergence on both sides from x=0 and we do so in a symmetric way with respect to x\to -x. If you denote the second ... 4 If the integral I:=\int d\theta on the algebra {\cal A} of superfunctions f(\theta)=\theta a + b should be 1) a (graded) linear operation, 2) translation invariant, i.e., \int d\theta ~f(\theta+\theta') =\int d\theta~f(\theta), 3) and if the output \int d\theta~ f(\theta) should not depend on the integration variable \theta, then it is ... 4 The Dirac delta function is often defined as the following distribution:$$\int_a^b \delta(x - x_0) F(x)\mathrm{d}x = \begin{cases}F(x_0), & a < x_0 < b \\ 0, & \text{otherwise}\end{cases}$$where F is a suitable test function. Its derivative is then defined as$$\int_a^b \delta'(x - x_0) F(x)\mathrm{d}x = -\int_a^b \delta(x - x_0) ...
4
So, the properties of the derivative of the delta function can be shown relatively quickly though the following ansatz: Consider a function $\delta(x)$ such that $\delta(x) = \frac{1}{a^{2}}(x+a)$ if $-a<x<0$ and $\delta(x) = \frac{1}{a^{2}}(a-x)$ if $0<x<a$, and $\delta(x) = 0$ elsewhere. It is easy to see that $\delta(x)$ has area 1 ...
4
It is simply a matter of notation. The $p_1$ (and hence $E_1$ and $E_2$) in $$\int d\Pi_2=\int d\Omega\frac{p_1^2}{16\pi^2E_1E_2}(\frac{p_1}{E_1}+\frac{p_1}{E_2})^{-1}$$ is no longer an integration variable; it has the fixed value that satisfies the delta function $\delta(E_{cm}-E_1-E_2)$ in the previous integral. The factor ...
4
Define the LHS of the equation above: $$I=\int d^d q\frac{1}{(q^2+m_1^2)((q+p_1)^2+m_2^2)((q+p_1+p_2)^2+m_3^2)}$$ The first step is to squeeze the denominators using Feynman's trick: $$I=\int_0^1 dx\,dy\,dz\,\delta(1-x-y-z)\int d^d q\frac{2}{[y(q^2+m_1^2)+z((q+p_1)^2+m_2^2)+x((q+p_1+p_2)^2+m_3^2)]^3}$$ The square in $q^2$ may be completed in the ...
4
That looks correct to me. Consider the basic property of the delta functions $$\int dx f(x) \delta(x-a) = f(a).$$ Nothing forbids $f(x)$ to be a composite function, for example $f(x) \equiv g(x)\delta(x-b)$, so $f(a) = g(a) \delta(a-b)$. Hence we get, $$\int dx f(x) \delta(x-a) \equiv \int dx \, g(x)\delta(x-b) \delta(x-a) = g(a)\delta(a-b).$$
4
So you were on the right track with integrating over r and over t. Here's how you could do it: The acceleration at any radius, r (if we assume Earth is a point mass) is: $$a=-{GM\over r^2}$$ The minus sign is because the acceleration is anti-radial. Then you can do the following: $$\lim_{\Delta t\rightarrow 0}~-{GM\over r^2}\Delta t~=~\Delta v$$ $$thus$$ ...
4
Why don't you use energy conservation? Since this is a 1-dimensional task in potential field, it will be enough $$E/m = 0 - \frac{GM}{r(0)} = \frac{v(t)^2}{2} - \frac{GM}{r(t)}$$ For your assumption that the motion is strictly radial and downwards you have $v(t) = dr(t)/dt < 0$ so you can solve for $dr(t)/dt$ and get an ordinary first order ...
4
In your problem, you need integrals of kind : $I_{2n} = \int x^{2n} e^{- \large \frac{x^2}{a}} ~ dx$ Note first that $I_0 = (\pi)^\frac{1}{2} (\frac{1}{a})^ {-\frac{1}{2}}$ Now, it is easy to see that there is a reccurence relation between the integrals : $$I_{2n+2} = - \frac{\partial I_{2n}}{\partial (\frac{1}{a}) }$$ For instance, I_2 = - ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
2013-12-20 10:05:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9782866835594177, "perplexity": 339.5091147980609}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345771373/warc/CC-MAIN-20131218054931-00061-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://matthewcrews.com/blog/2020/12/2020-12-01/
|
Recently I was asked if it would be possible to add the log function to the Flips library. Flips is a library for modeling and solving Linear and Mixed-Integer Programming problems. Both classes of problems are constrained to only having linear (i.e. straight) lines. You may ask, “What do you mean by straight?” The following are examples of linear functions.
$$\displaylines{ \text{Linear Functions}\\ y=1.0x+2.0 \\ y=2.0x_{1}+3.0x_{2} \\ y=1.2x_{1}+1.7x_{2}+x_{3} }$$
The following are non-linear functions.
$$\displaylines{ \text{Non-Linear Functions} \\ y=1.0x^2+2.0 \\ y=2.0/x_{1}+3.0x_{2} \\ y=1.2x_{1}+1.7x_{2}\times x_{3} }$$
For a function to be linear in the domain of Linear/Mixed-Integer Programming the variables can only be added, subtracted, or multiplied by a coefficient. The reason this is important is because a Solver takes advantage of this structure which searching for solutions.
## What if we need a Non-Linear Function#
Fortunately, we have ways of working around this limitation. Another way to think of a curve is just a series of straight lines. We could approximate our curve using a series of straight lines that were close enough to the original function to make our answer meaningful. For this example, let’s try modeling the parabola $y=-x^2+10.0$. We will use this to represent the Objective Function of our model. Below you see a plot which has a smooth grey line for the exact values of our parabola and a series of point connect by blue line segments. You will notice that the blue line segments closely match the shape of the parabola.
Our goal is to now model our original parabola with a series of segments. We will create a Decision variable which corresponds to each point on the plot. To get a value along the line segments we take a percent of the adjacent points. If I wanted the value of $y$ at the point $x=0.5$, I would use 50% of the value of $x$ at 0.0 and 50% of the value of $x$ at $1.0$. You may recognize this as linear-interpolation. If we want a value for x that is between our Decision variable, we just use a percent of the adjacent decisions. Let’s get to the code!
We open the Flips library and generate the set of points we want Decisions for. We create a range of values from -5.0 to 5.0 and provide an index for the value. We extract the index values to be elsewhere in our code.
open Flips
open Flips.Types
open Flips.SliceMap
// The Range of values we want to consider and the index for the value
let valueRange =
[-5.0..5.0]
|> List.mapi (fun index value -> index, value)
// We will need the indices for the vertices of our lines
let indices = valueRange |> List.map fst
We now want to create a Decision which corresponds for each of these points.
// Create a decision variable for each point
let decs =
DecisionBuilder "Amount" {
for i in indices ->
Continuous (0.0, 1.0)
} |> SMap
Next, we need to create a constraints which says the total percentage of the points that we use must be equal to 1.0. This ensures that the solver is selecting a point along one of our segments.
// We create a constraint saying that we must
let totalOneConstraint = Constraint.create "TotalValue" (sum decs == 1.0)
One of the other rules that we need to impose is that the Solver can only use adjacent points for interpolation. It would make no sense if the Solver interpolated between the points -5.0 and 5.0. To enforce this behavior, we are going to need to create an additional set of Decisions which correspond to the adjacent points along our line. We use the List.pairwise function to iterate through the adjacent indices and create the corresponding Decision. This decision type will be a Boolean because we either want the solver to use the pair of points or to not use them at all.
// We create an indicator variable which corresponds to pairs of points
// on the line we are modeling
let usePairDecisions =
DecisionBuilder "UsePair" {
for pair in List.pairwise indices ->
Boolean
} |> SMap
Now that we have a Boolean decision which corresponds to the pairs of Decisions, we need to create a set of constraints which will ensure that the Solver is only using one pair of points. We will do this with two types of constraints. The first constraint states that only one of the Pair decisions can be on at any given time.
// A constraint stating that only one pair may be used
let onlyOnePair = Constraint.create "OnlyOnePair" (sum usePairDecisions == 1.0)
The second type of constraints is for each pair of points. It states that if the usePairDecision is set to 1.0, then the Solver must assign a total of 1.0 to the two corresponding decisions.
// We state that if we want to use the pair of vertices,
// the indicator variable associated with that pair must
// be on as well
let pairConstraints =
ConstraintBuilder "UsePair" {
for KeyValue ((i, j), d) in usePairDecisions ->
decs.[i] + decs.[j] >== d
}
We now have all the structure we need in place to solve a model using our approximation of the parabola. We create a costExpression which gives us the simplication of the parabola.
// We create a expression that is an approximation of our parabola
let costExpression =
List.sum [for (i, v) in valueRange -> (-1.0 * v ** 2.0 + 10.0) * decs.[i] ]
We take the costExpression and use that to create our Objective. From there we create the model and populate it with the constraints we created earlier.
// We creat an objective which is to maximize our expression
let objective = Objective.create "MaxValue" Maximize costExpression
// We create a model and add the constraints
let model =
Model.create objective
We are now ready to solve the model. We are only using the basic settings since this is such a simple problem. We call solve and print out the results.
// We are only using basic settings
let settings = Settings.basic
// We attempt to solve the problem and return the result
let result = Solver.solve settings model
// If the result is a success, we print out the value of the expression
// we were maximizing
match result with
| Optimal solution ->
printfn "Objective Value: %f" (Objective.evaluate solution objective)
The result that is printed out…
Objective Value: 10.000000
val it : unit = ()
We can validate this result visually by looking at the plot above.
## Constraints on Non-Linear Functions#
To make things more interesting, let’s add a constraint which says that our parabola can only go up to -1.0. This would correspond to saying $x\leq -1.0$. Now remember, we do not actually have a single $x$, we have a series of them which correspond to the different points on our plat. So how do we model this? Quite easily! We add a constraint which says the value of our decisions multiplied the corresponding y value, must be less or equal to -1.0.
let lessThanNegativeOne =
let valueExpression = List.sum [for (idx, v) in valueRange -> v * decs.[idx]]
Constraint.create "LessThan-1.0" (valueExpression <== -1.0)
We can use the same code for creating the model and solving. We just add our new constraint to the model.
let model =
Model.create objective
let settings = Settings.basic
let result = Solver.solve settings model
match result with
| Optimal solution ->
printfn "Objective Value: %f" (Objective.evaluate solution objective)
And the result…
Objective Value: 9.000000
val it : unit = ()
Again, we look at our plot and this makes sense. Hopefully, that provides a little insight into how to model non-linear functions using linear approximations.
|
2021-05-08 20:20:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4673781991004944, "perplexity": 868.515303357708}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988923.22/warc/CC-MAIN-20210508181551-20210508211551-00363.warc.gz"}
|
https://gmatclub.com/forum/if-the-interior-angles-of-a-triangle-are-in-the-ratio-3-to-4-to-5-wha-275961.html
|
GMAT Question of the Day - Daily to your Mailbox; hard ones only
It is currently 26 May 2019, 10:51
GMAT Club Daily Prep
Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
If the interior angles of a triangle are in the ratio 3 to 4 to 5, wha
Author Message
TAGS:
Hide Tags
Math Revolution GMAT Instructor
Joined: 16 Aug 2015
Posts: 7372
GMAT 1: 760 Q51 V42
GPA: 3.82
If the interior angles of a triangle are in the ratio 3 to 4 to 5, wha [#permalink]
Show Tags
12 Sep 2018, 02:45
00:00
Difficulty:
25% (medium)
Question Stats:
72% (00:35) correct 28% (00:21) wrong based on 24 sessions
HideShow timer Statistics
[Math Revolution GMAT math practice question]
If the interior angles of a triangle are in the ratio $$3$$ to $$4$$ to $$5$$, what is the measure of the largest angle?
A. $$30^0$$
B. $$45^0$$
C. $$60^0$$
D. $$75^0$$
E. $$90^0$$
_________________
MathRevolution: Finish GMAT Quant Section with 10 minutes to spare
The one-and-only World’s First Variable Approach for DS and IVY Approach for PS with ease, speed and accuracy.
"Only $149 for 3 month Online Course" "Free Resources-30 day online access & Diagnostic Test" "Unlimited Access to over 120 free video lessons - try it yourself" Director Joined: 18 Jul 2018 Posts: 908 Location: India Concentration: Finance, Marketing WE: Engineering (Energy and Utilities) Re: If the interior angles of a triangle are in the ratio 3 to 4 to 5, wha [#permalink] Show Tags 12 Sep 2018, 02:49 The sum of the interior angles in a triangle is 180. Sum of the ratios of the triangle is 3x+4X+5x = 180. x = 15. Largest angle is 5x = 5*15 = 75. D is the answer. _________________ Press +1 Kudo If my post helps! Math Revolution GMAT Instructor Joined: 16 Aug 2015 Posts: 7372 GMAT 1: 760 Q51 V42 GPA: 3.82 Re: If the interior angles of a triangle are in the ratio 3 to 4 to 5, wha [#permalink] Show Tags 14 Sep 2018, 00:40 => Let $$x, y$$ and $$z$$ be interior angles of the triangle. Since $$x:y:z = 3:4:5$$, we can write $$x = 3k, y = 4k$$ and $$z = 5k$$. Since the interior angles of the triangle add to $$180^o, x + y + z = 3k + 4k + 5k = 12k = 180^o,$$ and so $$k = \frac{180^o}{12} = 15^o.$$ Therefore, the largest angle of the triangle is $$z = 5k = 5(15) = 75^o.$$ Therefore, the answer is D. Answer: D _________________ MathRevolution: Finish GMAT Quant Section with 10 minutes to spare The one-and-only World’s First Variable Approach for DS and IVY Approach for PS with ease, speed and accuracy. "Only$149 for 3 month Online Course"
"Free Resources-30 day online access & Diagnostic Test"
"Unlimited Access to over 120 free video lessons - try it yourself"
Re: If the interior angles of a triangle are in the ratio 3 to 4 to 5, wha [#permalink] 14 Sep 2018, 00:40
Display posts from previous: Sort by
|
2019-05-26 17:51:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4090268015861511, "perplexity": 3421.328642384221}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232259327.59/warc/CC-MAIN-20190526165427-20190526191427-00134.warc.gz"}
|
http://openstudy.com/updates/51dca009e4b0d9a104da0408
|
## A community for students. Sign up today
Here's the question you clicked on:
## shannondidier 2 years ago Which of the following is the solution of log x + 50.001 = -3 ?
• This Question is Closed
1. shannondidier
here are the answer choices x = 5 x = 7 x = 13 x = 15
2. swissgirl
Well Lets try to isolate x So first we subtract 50.001 from both sides logx=-53.001
3. shannondidier
i typed it wrong, its base x+5 to the 0.001-3
4. swissgirl
Then we know that $$e^{ \log{x}} =x$$ So $$e^{ \log{x}}=e^{-53.001}$$
5. swissgirl
hmmmm Can you retype it please cuz I dont think I am following?
6. whpalmer4
better yet, draw it!
7. vinnv226
@shannondidier What should we use as the base of the unspecified "log?" Some people interpret this as log base 10, others interpret it as the natural log (base e)
8. shannondidier
|dw:1373413752114:dw|
9. swissgirl
Seems like the base is x+5 not 10
10. whpalmer4
ah, that's much different :-)
11. swissgirl
yaaaaaaaaaaaa
12. whpalmer4
so it is asking us to find a value $$x$$ such that $(x+5)^{-3} = 0.001$
13. whpalmer4
Here's a hint: $x^{-n} = \frac{1}{x^n}$ and $0.001 = \frac{1}{1000}$
14. shannondidier
yeah i understand that. so whats the final answer?
15. whpalmer4
okay, look at your answer choices. do any of them + 5 = a number that when raised to the -3 power = 0.001, or when raised to the 3 power = 1000?
16. whpalmer4
maybe work backwards — what number cubed gives you 1000?
17. shannondidier
x = 5 x = 7 x = 13 x = 15 would it be x=5
18. whpalmer4
it would! $(5+5)^{-3} = 10^{-3} = \frac{1}{10^3} = \frac{1}{1000} = 0.001$
19. shannondidier
thank you!!!
20. whpalmer4
next time, please don't make us do a warm-up problem ;-)
#### Ask your own question
Sign Up
Find more explanations on OpenStudy
Privacy Policy
|
2015-09-01 14:21:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5564882159233093, "perplexity": 2574.531803214682}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645191214.61/warc/CC-MAIN-20150827031311-00023-ip-10-171-96-226.ec2.internal.warc.gz"}
|
http://gmatclub.com/forum/help-critique-my-gmat-study-plan-119527.html
|
Find all School-related info fast with the new School-Specific MBA Forum
It is currently 27 May 2015, 18:52
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# Help critique my GMAT study plan
Author Message
Intern
Joined: 11 Sep 2009
Posts: 34
Concentration: Entrepreneurship, General Management
GPA: 3.5
WE: Project Management (Computer Software)
Followers: 1
Kudos [?]: 7 [0], given: 1
Help critique my GMAT study plan [#permalink] 24 Aug 2011, 01:57
Hi all,
I plan to retake the GMAT in about 9 weeks (Oct 22), and I would love to have any of you critique the schedule/plan I'd like to follow.
I reviewed a few posts on these boards and they have been very helpful:
2. GMAT Study Plan: go from 650 to 700+ (can't link to it)
Goal and reason for a retake:
On my first GMAT 2 years ago, I scored 690 (Q:44, V:41). They basically map to 69th percentile Quant and 89th Verbal. My main goal is to score at least 48 on Quant (84th percentile). Secondary goal is to keep my verbal at around my previous score (no less than 39, 87th percentile).
I took the GMAT prep a few days ago to determine my weaknesses, and I scored exactly like my first official GMAT: 690 (44, 41).
Materials available:
- MGMAT books (all 8).
- OG11 & OG12
- OG Quant & OG Verbal
- Kaplan Math Workbook
- Kaplan GMAT PP
Tentative Schedule:
Week #1:
- Word translations MGMAT review (including OG exercises).
- Specific error log review
- 40 CR questions (Kaplan)
Week #2:
- Number propperties MGMAT review (including OG exercises).
- Specific error log review
- 20 CR questions (Kaplan)
- SC idiom review
Week #3:
- Fractions, Decimals, Percent MGMAT review (including OG exercises).
- Specific error log review
- 5 RC passages (Kaplan)
- Review Quant time saving strategies
Week #4:
- Geometry/Inequalities MGMAT review (including OG exercises).
- Specific error log review
- Quant test strategies review
- Take MGMAT 1st CAT test online
Week #5:
- CR MGMAT review (including OG exercises).
- Specific error log review
- 40 word translations questions (OG Quant), error log review
- 40 number properties questions (OG Quant), error log review
- Take MGMAT 2nd CAT test
Week #6:
- RC MGMAT review (including OG exercises).
- Specific error log review
- Verbal time saving strategies review
- 40 SC questions (OG + OG Verbal), error log review
- Take MGMAT 3rd CAT test
Week #7:
- Global error log review
- test taking strategies, mental notes review
- MGMAT 4th CAT test
- GMATPrep 2nd CAT test
Week #8:
- Global error log review
- Essay strategy
I am open to criticism (that's the point of this post ), so please let me know what you think I should change. I put my Quant weaknesses first. I have flash cards from 2 years ago that are still helpful to me for SCs, so that's why I don't have an "SC week".
Questions I'd like to hear opinions on:
1. I plan to rely on Manhattan GMAT for all my reviews of basics. I used the one on SC, and that really helped me bump my verbal scores from ~35 to 41 despite English being my 2nd language, so I am assuming that their prep material are all very good. Is there any problem you see solely relying on MGMAT?
2. I plan to do OG exercises while I review basics as this is how MGMAT books are structured. I don't see an issue with that since I am retaking the GMAT and don't have much time, but I'd like to hear people's take on that.
3. Am I planning for enough CAT tests? I have 5 total in my schedule in addition to the one I just took.
On a side note, I'm thinking about a way to incorporate GMAT Club Tests rather than Kaplan's. Given my limited time, is it worth the $100? If anybody can point out something I am not doing right or has suggestions that will help a lot. Cheers. Kaplan GMAT Prep Discount Codes Knewton GMAT Discount Codes GMAT Pill GMAT Discount Codes Intern Joined: 11 Sep 2009 Posts: 34 Concentration: Entrepreneurship, General Management GPA: 3.5 WE: Project Management (Computer Software) Followers: 1 Kudos [?]: 7 [0], given: 1 Re: Help critique my GMAT study plan [#permalink] 24 Aug 2011, 21:58 Anybody? I now realise that my message was pretty long. Maybe someone can take a crack at one of the questions below? Cheers. bluesiege wrote: 1. I plan to rely on Manhattan GMAT for all my reviews of basics. I used the one on SC, and that really helped me bump my verbal scores from ~35 to 41 despite English being my 2nd language, so I am assuming that their prep material are all very good. Is there any problem you see solely relying on MGMAT? 2. I plan to do OG exercises while I review basics as this is how MGMAT books are structured. I don't see an issue with that since I am retaking the GMAT and don't have much time, but I'd like to hear people's take on that. 3. Am I planning for enough CAT tests? I have 5 total in my schedule in addition to the one I just took. On a side note, I'm thinking about a way to incorporate GMAT Club Tests rather than Kaplan's. Given my limited time, is it worth the$100?
GMAT Forum Moderator
Status: Accepting donations for the mohater MBA fund
Joined: 05 Feb 2008
Posts: 1860
Location: United States
Concentration: Healthcare, Economics
Schools: Ross '14 (M)
GMAT 1: 710 Q48 V38
GMAT 2: Q V
GPA: 3.54
WE: Accounting (Manufacturing)
Followers: 56
Kudos [?]: 499 [0], given: 234
Re: Help critique my GMAT study plan [#permalink] 25 Aug 2011, 11:34
A GMAT study plan needs to be partially fluid. You must change your plan to always address your weakest areas and not neglect other areas.
Keep an error log, document progress and figure out the "why" part of answering problems incorrectly.
Start off with a practice test and adjust your study plan. Go from weakest section to strongest. Take practice tests every so often to gauge progress and adjust the plan accordingly. Don't rush to take practice tests. Make sure you use your error log properly and make progress. Simply doing practice problems doesn't always work.
Your verbal score is fine and in the 80+ percentile. You only need a few points in math to get to 80+ percentile (one point to break 700), and if you make progress in both sections, scoring a 760+ should be achievable.
Best of luck
_________________
Strategy Discussion Thread | Strategy Master | GMAT Debrief| Please discuss strategies in discussion thread. Master thread will be updated accordingly. | GC Member Write Ups
GMAT Club Premium Membership - big benefits and savings
Intern
Joined: 11 Sep 2009
Posts: 34
Concentration: Entrepreneurship, General Management
GPA: 3.5
WE: Project Management (Computer Software)
Followers: 1
Kudos [?]: 7 [0], given: 1
Re: Help critique my GMAT study plan [#permalink] 28 Aug 2011, 13:58
Thanks Mohater.
I just got started and it is really tough to put the time each week so I have to be very efficient. You are correct that the Error Log is essential to any GMAT preparation.
GMAT Pill Instructor
Joined: 14 Apr 2009
Posts: 1892
Location: New York, NY
Followers: 321
Kudos [?]: 864 [0], given: 8
Re: Help critique my GMAT study plan [#permalink] 28 Aug 2011, 14:53
Expert's post
I'd recommend the last week or at least last few days to be focused on running through practice exams. Mark down the ones you get wrong and revisit them forwards, backwards, and in random order.
Do the same for multiple practice exams and jot down notes of where you've gone off path. And review those mental notes the day before the exam.
You can also see more of what we recommend here: http://www.gmatpill.com/the-gmat-pill-m ... tudy-plan/
_________________
Intern
Joined: 11 Sep 2009
Posts: 34
Concentration: Entrepreneurship, General Management
GPA: 3.5
WE: Project Management (Computer Software)
Followers: 1
Kudos [?]: 7 [0], given: 1
Re: Help critique my GMAT study plan [#permalink] 30 Aug 2011, 10:02
Thanks for the tips. What I try to do with my mental notes is to take note of what thinking patterns will help me solve the exercises which give me trouble the most. For instance, I can look at a few questions in my error log and write down "When doing work/rate problems, take 5 sec to make sure I did set up my equation correctly".
Re: Help critique my GMAT study plan [#permalink] 30 Aug 2011, 10:02
Similar topics Replies Last post
Similar
Topics:
2 Critique my Study Plan 4 02 Sep 2014, 17:37
First time GMAT - Critique Study Plan 4 22 Apr 2014, 16:05
Please critique my current study plan 1 05 Aug 2013, 12:03
1 Please critique my 3 month study plan 10 29 Feb 2012, 10:02
My Study Plan - Please Critique! 8 06 Oct 2005, 12:47
Display posts from previous: Sort by
# Help critique my GMAT study plan
Moderators: bagdbmba, WaterFlowsUp
Powered by phpBB © phpBB Group and phpBB SEO Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®.
|
2015-05-28 02:52:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21295559406280518, "perplexity": 10217.10371711571}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929205.63/warc/CC-MAIN-20150521113209-00140-ip-10-180-206-219.ec2.internal.warc.gz"}
|
http://sjbrown.co.uk/2011/01/31/now-youre-lighting-with-portals/
|
Simon’s Graphics Blog
# Now You're Lighting With Portals
path tracing maths
I hate dome lights. You always waste a ton of rays that are occluded by geometry, and the situation gets even worse when lighting indoor scenes with exterior dome lights!
So why not help your renderer out and place portals that, when hit, teleport to the dome light. Then instead of sampling the whole skydome, we just sample the portals, and avoid sending rays where we know they will be occluded.
As an example, here’s the Sponza scene using an exterior (uniform) dome light, rendered using unidirectional path tracing with multiple importance sampling:
Lots of rays never manage to find the open roof, so we get plenty of noise. Now let’s replace the dome light with a portal that covers the open roof, then allow that to be sampled instead:
Noise is greatly reduced, for exactly the same number of rays.
The sampling algorithm is simple enough to implement in your GPU path tracer of choice: sample the portal and use the usual conversion between pdf wrt area (the portal) and pdf wrt solid angle (the dome):
$$P_\sigma = \frac{P_A \|\mathbf{v}\|^2}{\cos(\theta)} = \frac{P_A \|\mathbf{v}\|^3}{\mathbf{v}.\mathbf{n}}$$
Where $\mathbf{v}$ is the vector between target point and the portal point, and $\mathbf{n}$ is the portal normal.
|
2017-06-26 15:25:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7579225897789001, "perplexity": 3166.37438274368}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320823.40/warc/CC-MAIN-20170626152050-20170626172050-00045.warc.gz"}
|
http://eprints.iisc.ernet.in/7758/
|
Electrical resistivities of $\gamma$-phase $Fe_xNi_{80-x}Cr_{20}$ alloys
Banerjee, S and Raychaudhuri, AK (1994) Electrical resistivities of $\gamma$-phase $Fe_xNi_{80-x}Cr_{20}$ alloys. In: Physical Review B, 50 (12). pp. 8195-8206.
PDF Electrical_resistivities_-105.pdf Restricted to Registered users only Download (2045Kb) | Request a copy
Abstract
In this paper we present a comprehensive investigation of electrical resistivities [\rho(T)] of the γ-phase pseudobinary Fe alloy series $Fe_xNi_{80-x}Cr_{20}$ in the temperature range 0.4–300 K. These alloys exhibit exotic magnetic behavior at and near the critical composition for ferromagnetism $x_c\simeq 59-63.$ Special attention has been given to the critical region $(x\simeqx_c)$ where we find sharp changes in $p_0$ (zero temperature resistivity) as well as in the temperature dependence of \rho(T). These observations have been explained as arising due to the absence of long-range magnetic order and a change in effective carrier concentration at $x\simeqx_c.$ For all x (≤66) these alloys exhibit a resistivity minimum at $T=T_{min}.$ $T_{min}$ continuously decreases from approximately 35 K for $x\approx12$ to approximately 6 K for $x\approx59.$ \rho(T) for $T<T_{min}$ shows a rise $(p{\sim}- \sqrt{T})$ that does not depend on the magnetic state of the alloy. For $T_{min}<T<80 K,\hspace{2mm}p(T)$ away from $x_c$ follows a predominantly $T^2$ dependence arising from electron–spin-wave scattering which shows a drastic reduction for $x\simeqx_c.$ We also compare our data with an $\alpha$-phase $Fe_{25}Cr_{75}$ alloy.
Item Type: Journal Article The copyright belongs to American Physical Society. Division of Physical & Mathematical Sciences > Physics 22 Nov 2007 19 Sep 2010 04:29 http://eprints.iisc.ernet.in/id/eprint/7758
|
2017-05-01 06:20:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8021394610404968, "perplexity": 1782.8965226184691}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917127681.84/warc/CC-MAIN-20170423031207-00193-ip-10-145-167-34.ec2.internal.warc.gz"}
|
http://despachogeral.com.br/trumpet-sheet-lmfu/greater-than-or-equal-to-sign-ff1587
|
Examples: 5 ≥ 4. For example, 4 or 3 ≥ 1 shows us a greater sign over half an equal sign, meaning that 4 or 3 are greater than or equal to 1. The less than or equal to symbol is used to express the relationship between two quantities or as a boolean logical operator. Here a could be greater … use ">=" for greater than or equal use "<=" for less than or equal In general, Sheets uses the same "language" as Excel, so you can look up Excel tips for Sheets. In Greater than or equal operator A value compares with B value it will return true in two cases one is when A greater than B and another is when A equal to B. 923 Views. Graphical characteristics: Asymmetric, Open shape, Monochrome, Contains straight lines, Has no crossing lines. But, when we say 'at least', we mean 'greater than or equal to'. Greater than or equal application to numbers: Syntax of Greater than or Equal is A>=B, where A and B are numeric or Text values. Copy the Greater-than Or Equal To in the above table (it can be automatically copied with a mouse click) and paste it in word, Or. Solution for 1. Select the Greater-than Or Equal To tab in the Symbol window. In an acidic solution [H]… 2 ≥ 2. In such cases, we can use the greater than or equal to symbol, i.e. When we say 'as many as' or 'no more than', we mean 'less than or equal to' which means that a could be less than b or equal to b. Sometimes we may observe scenarios where the result obtained by solving an expression for a variable, which are greater than or equal to each other. Category: Mathematical Symbols. “Greater than or equal to” and “less than or equal to” are just the applicable symbol with half an equal sign under it. ≥. Select Symbol and then More Symbols. "Greater than or equal to" is represented by the symbol " ≥ ≥ ". Less Than or Equal To (<=) Operator. is less than > > is greater than ≮ \nless: is not less than ≯ \ngtr: is not greater than ≤ \leq: is less than or equal to ≥ \geq: is greater than or equal to ⩽ \leqslant: is less than or equal to ⩾ Use the appropriate math symbol to indicate "greater than", "less than" or "equal to" for each of the following: a. If left-hand operator higher than or equal to right-hand operator then condition will be true and it will return matched records. The greater-than sign is a mathematical symbol that denotes an inequality between two values. With Microsoft Word, inserting a greater than or equal to sign into your Word document can be as simple as pressing the Equal keyboard key or the Greater Than keyboard key, but there is also a way to insert these characters as actual equations. Greater than or Equal in Excel – Example #5. As we saw earlier, the greater than and less than symbols can also be combined with the equal sign. For example, x ≥ -3 is the solution of a certain expression in variable x. Rate this symbol: (3.80 / 5 votes) Specifies that one value is greater than, or equal to, another value. For example, the symbol is used below to express the less-than-or-equal relationship between two variables: "Greater than or equal to", as the suggests, means something is either greater than or equal to another thing. The sql Greater Than or Equal To operator is used to check whether the left-hand operator is higher than or equal to the right-hand operator or not. Greater Than or Equal To: Math Definition. Select the Insert tab. This symbol is nothing but the "greater than" symbol with a sleeping line under it. Finding specific symbols in countless symbols is obviously a waste of time, and some characters like emoji usually can't be found. Greater than or equal to, another value for example, x ≥ -3 the. When we say 'at least ', we mean 'greater than or equal to symbol, i.e waste time... Mathematical symbol that denotes an inequality between two values the symbol ≥ ≥.... Nothing but the greater than or equal to right-hand operator then condition will true! greater than and less than symbols can also be combined with the equal sign time, and characters... Symbol window symbol window that one value is greater than or equal to symbol,.! X ≥ -3 is the solution of a certain expression in variable x ≥... No crossing lines means something is either greater than '' symbol with a sleeping line it! To another thing or equal to ' in variable x Contains straight lines, Has no crossing lines nothing... Select the greater-than sign is a mathematical symbol that denotes an inequality two. As we saw earlier, the greater than or equal to '' is represented by symbol!, means something is either greater than '' symbol with a sleeping under. Two values something is either greater than or equal to, another value 3.80!, and some characters like emoji usually ca n't be found than '' symbol with a sleeping under... ', we mean 'greater than or greater than or equal to sign to, another value characteristics Asymmetric. Equal sign straight lines, Has no crossing lines shape, Monochrome, Contains straight lines, no. Represented by the symbol window -3 is the solution of a certain expression in variable x right-hand operator condition... Variable x operator higher than or equal in Excel – example # 5 'greater! Characteristics: Asymmetric, Open shape, Monochrome, Contains straight lines, Has no crossing.... Mean 'greater than or equal to right-hand operator then condition will be true and it will matched... A certain expression in variable x in such cases, we mean than., another value another value and some characters like emoji usually ca n't be.! And less than or equal to another thing rate this symbol: ( /., Contains straight lines, Has no crossing lines use the greater than '' with. '' symbol with a sleeping line under it greater than or equal to, another.!, greater than or equal to sign ≥ -3 is the solution of a certain expression in variable.! As the suggests, means something is either greater than, or equal to '', as the suggests means. Select the greater-than greater than or equal to sign equal in Excel – example # 5 such,. Symbol, i.e ) Specifies that one value is greater than or equal to tab the... By the symbol ≥ ≥ ) Specifies that one value is greater than or equal to '' as! / 5 votes ) Specifies that one value is greater than or equal symbol... Then condition will be true and it will return matched records waste time. With a sleeping line under it and it will return matched records greater-than or to!, when we say 'at least ', we can use the than! Example # 5 ≥ -3 is the solution of a certain expression in variable x is greater. Obviously a waste of time, and some characters like emoji usually ca n't be found to thing. Tab in the symbol window to tab in the symbol ≥ ≥ symbols also... To right-hand operator then condition will be true and it will return matched.., or equal in Excel – example # 5 will return matched records value is than. To ( < = ) operator be found to another thing than symbols can be! Greater than or equal to symbol, i.e 3.80 / 5 votes ) Specifies that one value is than! Finding specific symbols in countless symbols is obviously a waste of time, and some characters like usually... Of time, and some characters like emoji usually ca n't be found will be true and it return. # 5 emoji usually ca n't be found than or equal in Excel – #... Of a certain expression in variable x is the solution of a certain expression in x. Specifies that one value is greater than and less than or equal to.. For example, x ≥ -3 is the solution of a certain expression in variable x and some characters emoji! To ' to, another value, i.e Has no crossing lines – example # 5 equal to,... A sleeping line under it and some characters like emoji usually ca n't be.... Shape, Monochrome, Contains straight lines, Has no crossing lines x ≥ -3 is solution! Represented by the symbol window represented by the symbol ≥ ≥ than... This symbol is nothing but the greater than and less than or equal to ( < = ).! The greater than or equal in Excel – example # 5 mathematical symbol that an! Line under it is either greater than '' symbol with a sleeping line under it graphical characteristics:,... Under it higher than or equal to ' crossing lines ≥ -3 is the solution of certain. To ( < = ) operator and less than symbols can also combined! Symbol is nothing but the greater than or equal to ' countless symbols is obviously a waste time. Asymmetric, Open shape, Monochrome, Contains straight lines, Has no crossing lines ', mean. With a sleeping line under it another thing ) Specifies that one value is than. Countless symbols is obviously a waste of time, and some characters like usually... One value is greater than or equal to symbol, i.e then condition will be and... Combined with the equal sign we mean 'greater than or equal to ' earlier! Symbol is nothing but the greater than and less than symbols can also be combined with the equal.... Crossing lines but, when we say 'at least ', we can use the greater than or to! Can use the greater than or equal to '', as the suggests means... Characters like emoji usually ca n't be found we say 'at least ', we mean than... Value is greater than or equal to '' is greater than or equal to sign by the symbol ≥... And it will return matched records and some characters like emoji usually ca n't be found greater than or to! Is either greater than or equal to another thing is obviously a waste of time and. ≥ -3 is the solution of a certain expression in variable x denotes an between..., when we say 'at least ', we can use the than. Will return matched records 'at least ', we mean 'greater than or equal to ( =! As the suggests, means something is either greater than or equal another. Open shape, Monochrome, Contains straight lines, Has no crossing lines ) operator cases, we can the! Can also be combined with the equal sign ca n't be found ) operator symbols is obviously a waste time... Less than or equal to, another value combined with the equal sign -3 is the of! A waste of time, and some characters like emoji usually ca n't be found symbols also. < = ) operator mathematical symbol that denotes an inequality between two values ', we use... Asymmetric, Open shape, Monochrome, Contains straight lines, Has crossing.: Asymmetric, Open shape, Monochrome, greater than or equal to sign straight lines, Has no crossing.... Votes ) Specifies that one value is greater than or equal to tab in the symbol window matched records is... To symbol, i.e for example, x ≥ -3 is the solution of a certain in... Mathematical symbol that denotes an inequality between two values symbol window ≥ symbol. Another thing symbol, i.e a waste of time, and some characters like usually... Least ', we can use the greater than '' symbol with a sleeping line under it condition be..., Has no crossing lines, Open shape, Monochrome, Contains straight lines, Has no lines., and some characters like emoji usually ca n't be found by the symbol ≥ ≥ for,... If left-hand operator higher than or equal in Excel – example # 5 by the symbol.! Represented by the symbol window symbol, i.e it will return matched records Asymmetric Open... ≥ ≥ ≥ ≥ and less than or equal Excel! Usually ca n't be found is obviously a waste of time, and some characters like usually! In Excel – example # 5 is represented by the symbol window we saw earlier, the than... Mathematical symbol that denotes an inequality between two values expression in variable x and less symbols!, means something is either greater than or equal to ( < = ).! Finding specific symbols in countless symbols is obviously a waste of time, and some characters like usually. Lines, Has no crossing lines to another thing like emoji usually ca n't be found nothing! < = ) operator cases, we mean 'greater than or equal to in... Some characters like emoji usually ca n't be found is obviously a waste of time, and characters! Right-Hand operator then condition will be true and it will return matched records greater. < = ) operator such cases, we can use the greater than or to!
greater than or equal to sign 2021
|
2021-06-24 06:10:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5308417677879333, "perplexity": 1144.051591711151}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488551052.94/warc/CC-MAIN-20210624045834-20210624075834-00589.warc.gz"}
|
http://masteringolympiadmathematics.blogspot.com/p/quiz-2-analysis.html
|
### Quiz 2: Analysis
Analysis for each problem on Quiz (2):
Analysis for Question 1:
Please answer the following questions based on the implicitly defined function $|x-y^2|=1-|x|$:
Which aspect should we consider when finding the domain of the given function?
$|x|$
$|x-y^2|$
We need calculus help
We need to solve the equation $|x-y^2|=1-|x|$
$y^2$
In any question that wants us to determine the domain of any given function that absolute value function(s), or square terms involved, these are the terms that help us to figure out the domain for the given function, so the correct answers should include the three of the below:
[MATH]\color{yellow}\bbox[5px,purple]{|x|}[/MATH], [MATH]\color{yellow}\bbox[5px,grey]{|x-y^2|}[/MATH] and [MATH]\color{yellow}\bbox[5px,blue]{y^2 }[/MATH].
In fact, we can draw a quick and useful partial conclusion since we know the absolute value of any number is always greater than or equal to 0.
So $|x-y^2|\ge 0\,\implies\,\,1-|x|\ge 0$ This further implies [MATH]\color{green}\bbox[5px,yellow]{-1\le x \le 1}[/MATH].
Analysis for Question 2:
Question 2: Which of the following definitions is true?
$A. 1-x=\begin{cases}x-y^2, & x-y^2\ge 0 \\ -(x-y^2), & x-y^2<0 \\ \end{cases}$ or
$B. 1-|x|=\begin{cases}x-y^2, & x-y^2\ge 0 \\ -(x-y^2), & x-y^2<0 \\ \end{cases}$
We are all very familiar about how to remove the absolute value bars on function like $y=|x|$ by rewriting the function of $y$ as a piecewise function:
$y=|x|=\begin{cases}x, & x\ge 0 \\ -(x), & x<0 \\ \end{cases}$
So, we do the same with the function that we're given, i.e.$|x-y^2|=1-|x|$, it might not be immediately obvious for you how you are going to relate the above $y=|x|$ with $|x-y^2|=1-|x|$.
That is fine, and we are not stuck yet, what we are going to do next is to further examine the piecewise function that we obtained from $y=|x|$:
Note that as long as we have isolated the any one of the absolute value function on either side, then we can began to redefine our function. Yes, we notice that we have now two absolute value functions, ( [MATH]\color{yellow}\bbox[5px,purple]{|x|}[/MATH] and [MATH]\color{yellow}\bbox[5px,grey]{|x-y²|}[/MATH]). Therefore you don't know for certain if you should work with:
$1-|x|=|x-y^2|$, and end up as
$1-|x|=\begin{cases}x-y^2, & x-y^2\ge 0 \\ -(x-y^2), & x-y^2<0 \\ \end{cases}$
or
But, if you start with $1-|x-y^2|=|x|$, you deal with:
$1-|x-y^2|=\begin{cases}x, & x\ge 0 \\ -(x), & x<0 \\ \end{cases}$
If you work with $1-|x|=|x-y^2|$, then we have:
$1-|x|=\begin{cases}x-y^2, & x-y^2\ge 0 \\ -(x-y^2), & x-y^2<0 \\ \end{cases}$
But, if you start with $1-|x-y^2|=|x|$, you deal with:
$1-|x-y^2|=\begin{cases}x, & x\ge 0 \\ -(x), & x<0 \\ \end{cases}$
Our quiz(2) problems revolve around the first option so we will work with that instead. So the correct answer for number 2 is B.
It is worth noticing that the first choice has the left side expression of $1-x$, but we could never remove the absolute value bars without doing anything to it, so, the first choice is actually out of the question and it cannot be the answer and one can simply rule that out. :D
Analysis for Question 3: Considering the part where $x-y^2<0$, what can we further conclude for the interval(s) of the domain of this given function?
A. $-1\le x \le 0$
B. $-1\le x \le -\dfrac{1}{2}$
C. $-\dfrac{1}{2}\le x <0$ and $0\le x \le -\dfrac{1}{2}$
D. $-\dfrac{1}{2}\le x \le 0$
From what we just defined, we have
$1-|x|=\begin{cases}x-y^2, & x-y^2\ge 0 \\ -(x-y^2), & x-y^2<0 \\ \end{cases}$
The question asked us to consider the interval where $x-y^2<0$, so here is the function we need to examine:
$1-|x|=-(x-y^2)$
We need to remember the first observation that we obtained, which is [MATH]\color{green}\bbox[5px,yellow]{-1\le x \le 1}[/MATH].
Now, to remove the absolute value bars around $x$, we have two cases to think here:
$-(x-y^2)=\begin{cases}1-x, & x\ge 0 \,\,\,\,\text{and}\,\,-1\le x \le 1\implies\,\,0\le x \le 1\\ 1-(-x), & x<0 \,\,\,\,\text{and}\,\,-1\le x \le 1\implies\,\,-1\le x <0\\ \end{cases}$
For the first case, we see that we have
$-(x-y^2)=1-x$
$-x+y^2=1-x$
$y^2=1$
$y=\pm 1$ on the interval $0\le x \le 1$.
For the second case, observe that
$-(x-y^2)=1-(-x)$
$-x+y^2=1+x$
$y^2=2x+1$
But $y^2\ge 0$, so $2x+1\ge 0\,\implies\,x\ge -\dfrac{1}{2}$ and now the interval that applied to the existing case becomes $-\dfrac{1}{2}\le x<0$ for the function $y^2=2x+1$.
By combining the results of two sub-intervals we found by considering $x-y^2<0$, the answer for question 3 is C.
Analysis for Question 4: Repeat question 3 but now considering the part where $x-y^2>0$.
A. $0\le x \le 1$
B. $\dfrac{1}{2}\le x \le 1$
C. $-\dfrac{1}{2}\le x <1$
D. $-1\le x \le 1$
From what we just defined, we have
$1-|x|=\begin{cases}x-y^2, & x-y^2\ge 0 \\ -(x-y^2), & x-y^2<0 \\ \end{cases}$
The question asked us to consider the interval where $x-y^2\ge 0$, so here is the function we need to examine:
$1-|x|=x-y^2$
We also need to remember the first observation that we obtained, which is [MATH]\color{green}\bbox[5px,yellow]{-1\le x \le 1}[/MATH].
By the similar token, to remove the absolute value bars around $x$, we have two cases to think here:
$x-y^2=\begin{cases}1-x, & x\ge 0 \,\,\,\,\text{and}\,\,-1\le x \le 1\implies\,\,0\le x \le 1\\ 1-(-x), & x<0 \,\,\,\,\text{and}\,\,-1\le x \le 1\implies\,\,-1\le x <0\\ \end{cases}$
For the first case, we see that we have
$x-y^2=1-x$
$y^2=2x-1$
But $y^2\ge 0$ therefore $2x-1\ge 0\implies x\ge \dfrac{1}{2}$ and the interval that applied to this case is hence $\dfrac{1}{2}\le x\le 1$.
For the second case, observe that
$x-y^2=1-(-x)$
$x-y^2=1+x$
$y^2=-1$
Therefore, we can say there is nothing valid from this case that is worth considering for.
The answer for this question 4 is hence B, $\dfrac{1}{2}\le x\le 1$.
Analysis for Question 5: Which of the following diagram represents the curve $|x-y^2|=1-|x|$?
By doing a summary for what we have collected so far, we see we have broken down the given original function into three intervals, each of which has different function that governs it:
For $-\dfrac{1}{2}\le x <0$, we have $y^2=2x+1$.
For $0\le x\le 1$, we have $y=-1$ and $y=1$.
For $\dfrac{1}{2}\le x\le 1$, we have $y^2=2x-1$.
By putting them altogether on the same coordinate plane, it is not hard to see that B is the diagram represents the curve $|x-y^2|=1-|x|$.
|
2017-08-20 05:48:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8941358327865601, "perplexity": 279.04289295328056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105976.13/warc/CC-MAIN-20170820053541-20170820073541-00525.warc.gz"}
|
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/800/1/
|
# Properties
Label 800.1 Level 800 Weight 1 Dimension 26 Nonzero newspaces 5 Newform subspaces 8 Sturm bound 38400 Trace bound 19
## Defining parameters
Level: $$N$$ = $$800 = 2^{5} \cdot 5^{2}$$ Weight: $$k$$ = $$1$$ Nonzero newspaces: $$5$$ Newform subspaces: $$8$$ Sturm bound: $$38400$$ Trace bound: $$19$$
## Dimensions
The following table gives the dimensions of various subspaces of $$M_{1}(\Gamma_1(800))$$.
Total New Old
Modular forms 968 231 737
Cusp forms 72 26 46
Eisenstein series 896 205 691
The following table gives the dimensions of subspaces with specified projective image type.
$$D_n$$ $$A_4$$ $$S_4$$ $$A_5$$
Dimension 18 0 0 8
## Trace form
$$26q - 2q^{5} + O(q^{10})$$ $$26q - 2q^{5} + 4q^{11} + 10q^{13} + 4q^{17} - 4q^{21} - 6q^{29} - 2q^{37} - 4q^{41} - 2q^{45} - 4q^{49} - 4q^{51} - 2q^{53} - 4q^{57} - 2q^{61} - 6q^{65} + 2q^{69} - 6q^{73} + 2q^{81} - 8q^{85} - 10q^{89} - 8q^{93} + O(q^{100})$$
## Decomposition of $$S_{1}^{\mathrm{new}}(\Gamma_1(800))$$
We only show spaces with odd parity, since no modular forms exist when this condition is not satisfied. Within each space $$S_k^{\mathrm{new}}(N, \chi)$$ we list the newforms together with their dimension.
Label $$\chi$$ Newforms Dimension $$\chi$$ degree
800.1.b $$\chi_{800}(351, \cdot)$$ None 0 1
800.1.e $$\chi_{800}(399, \cdot)$$ 800.1.e.a 2 1
800.1.g $$\chi_{800}(751, \cdot)$$ 800.1.g.a 1 1
800.1.g.b 1
800.1.h $$\chi_{800}(799, \cdot)$$ None 0 1
800.1.i $$\chi_{800}(57, \cdot)$$ None 0 2
800.1.k $$\chi_{800}(199, \cdot)$$ None 0 2
800.1.m $$\chi_{800}(593, \cdot)$$ None 0 2
800.1.p $$\chi_{800}(193, \cdot)$$ 800.1.p.a 2 2
800.1.p.b 2
800.1.p.c 2
800.1.r $$\chi_{800}(151, \cdot)$$ None 0 2
800.1.t $$\chi_{800}(457, \cdot)$$ None 0 2
800.1.w $$\chi_{800}(93, \cdot)$$ None 0 4
800.1.x $$\chi_{800}(51, \cdot)$$ None 0 4
800.1.z $$\chi_{800}(99, \cdot)$$ None 0 4
800.1.bc $$\chi_{800}(157, \cdot)$$ None 0 4
800.1.bd $$\chi_{800}(111, \cdot)$$ None 0 4
800.1.bf $$\chi_{800}(159, \cdot)$$ None 0 4
800.1.bh $$\chi_{800}(31, \cdot)$$ 800.1.bh.a 8 4
800.1.bi $$\chi_{800}(79, \cdot)$$ None 0 4
800.1.bk $$\chi_{800}(137, \cdot)$$ None 0 8
800.1.bn $$\chi_{800}(39, \cdot)$$ None 0 8
800.1.bo $$\chi_{800}(33, \cdot)$$ 800.1.bo.a 8 8
800.1.br $$\chi_{800}(17, \cdot)$$ None 0 8
800.1.bs $$\chi_{800}(71, \cdot)$$ None 0 8
800.1.bv $$\chi_{800}(73, \cdot)$$ None 0 8
800.1.bw $$\chi_{800}(13, \cdot)$$ None 0 16
800.1.bz $$\chi_{800}(19, \cdot)$$ None 0 16
800.1.cb $$\chi_{800}(11, \cdot)$$ None 0 16
800.1.cc $$\chi_{800}(53, \cdot)$$ None 0 16
## Decomposition of $$S_{1}^{\mathrm{old}}(\Gamma_1(800))$$ into lower level spaces
$$S_{1}^{\mathrm{old}}(\Gamma_1(800)) \cong$$ $$S_{1}^{\mathrm{new}}(\Gamma_1(80))$$$$^{\oplus 4}$$$$\oplus$$$$S_{1}^{\mathrm{new}}(\Gamma_1(100))$$$$^{\oplus 4}$$$$\oplus$$$$S_{1}^{\mathrm{new}}(\Gamma_1(160))$$$$^{\oplus 2}$$$$\oplus$$$$S_{1}^{\mathrm{new}}(\Gamma_1(200))$$$$^{\oplus 3}$$$$\oplus$$$$S_{1}^{\mathrm{new}}(\Gamma_1(400))$$$$^{\oplus 2}$$
|
2020-02-17 21:52:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9105190634727478, "perplexity": 11691.497064084762}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143373.18/warc/CC-MAIN-20200217205657-20200217235657-00074.warc.gz"}
|
http://mathematica.stackexchange.com/questions?page=105&sort=newest
|
# All Questions
50 views
### ImageAssemble and Export gif takes too much memory
Errr.... First Let's create some gifs: ...
129 views
212 views
### Filtering of an unwanted frequency through a Fourier Transform
I have a data set with evenly spaced data points. The plot is frequency vs. intensity. The overall shape of the plot is an upwards curve into a plateau, this cannot be seen in the data as this is an ...
249 views
### How can I make FilledCurve smoother?
In How can I generate and randomly assign color to annular sectors? J.M. showed an interesting way to create a sector graphic using FilledCurve. Unfortunately it ...
61 views
### Forcing a locator button to remain on the surface of a sphere
This question may or may not be possible and might be a hard one to answer. Suppose that I have a sphere centered at the point (1,2,-1) with radius 3. Is it possible to place a locator button on the ...
247 views
### Does Mathematica have an equivalent of C's nextafter?
In C (and many other programming languages), there is a function double nextafter(double x, double y) which takes two (IEEE 754) floating-point numbers and ...
442 views
### Does Mathematica have an equivalent of Python's float.as_integer_ratio?
The Python programming language has a float.as_integer_ratio(x) function which exactly converts an IEEE 754 floating-point number into a numerator/denominator pair ...
96 views
### Changing from Dashed to Solid
I have: Graphics[{ Blue, Thick, Arrow[{{0, 0}, f1}], Arrow[{{0, 0}, f2}], Red, Arrow[{{0, 0}, f1 + f2}], Dashed, Blue, Line[{{-9, 0}, {12, 0}}] }] ...
73 views
### How to reduce trigonometric expressions under special conditions
For the following problems I used "FullSimplify" and "TrigReduce" but I manage to reduce more by hand. ...
82 views
Consider: ...
89 views
### Undo history cleared after removing comments - bug?
If I've written some code followed by a comment, like this: x(*y*) and I remove the comment by first removing the right delimiter and then the left, my undo ...
84 views
...
105 views
### NIntegrate and Sum give different results in summing functions
I am using these two functions: ...
83 views
### Diff eqs can be solved?
I'm trying to solve a pair of coupled differential equations, but am having trouble making DSolve evaluate my equations. ...
67 views
### Import a jpeg file and a mov file and display them side by side
I am planning to use Mathematica for a presentation. In one of the slides I need to place a jpeg file side by side to a mov file. ...
40 views
### Create graph K5, delete 2 edges and 2 vertex, create adjacency matrix [duplicate]
I need help about this problem. I should create graph K5, delete 2 edges and 2 vertex from it and from the new graph create adjacency matrix. I have poor knowledge and i don't need anyone to solve the ...
110 views
### Why doesn't Plot draw function within prescribed range?
I have a function: TheoreticalFunction[x_] = 2 / Pi * ArcSin[x] And trying to plot its derivative: ...
159 views
### Creating truth tables [closed]
I want to write a module in Mathematica that receives a Boolean function f. The module should write on screen: Truth table of function ...
207 views
### Gradient in a plotstyle
I'm trying to replicate the figure below. Now a nice first approach to do this is by using Mathematica, by making some kind of contourplot with circles and just add a gradient to the contourstyle. Now ...
85 views
### Creating Matrix,Inverted and Upper Triangulate inside Module
I was trying to fit this into a module but I didn't have any luck. And for some reason it won't Inverse the Matrix. Here's the code: Input: ...
35 views
### Recurrence equation [duplicate]
I need to print the first 10 elements from the string which is a solution of a recurrence equation. If there is a 5 in the elements of the string, then that element should not be printed. For ...
55 views
### Transformation syntax with shortcuts to a more explicit syntax for an equation
I would like to write the following code line without the signs @#%^&*?! Sequence @@ (XMLNote[#1, m] &) /@ attributes I have made this code but it seems ...
183 views
### Using NDSolve within Manipulate
I'm trying to use NDSolve inside manipulate. Specifically, I have a series of differential equations with coefficients, k1, k2, ...
39 views
### Where can I find mathematica documentation about @@@? [duplicate]
For example, to build a rule from 2 lists {a,b,c},{1,2,3},I use : Rule @@@ Transpose[{{a, b, c}, {1, 2, 3}}] : {a -> 1, b -> 2, c -> 3} OK, but how @@@ works ? (Black magic for me) I am using ...
71 views
### Descending order of Permutations using Select[Tuples[{5, 4, 3, 2, 1}, {3}], OrderedQ]
I was wondering if there is a possible way to still use OrderedQ and get a descending order of permutations: This is Ascending order: ...
|
2015-11-26 03:22:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8036152124404907, "perplexity": 2235.3692443554696}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398446300.49/warc/CC-MAIN-20151124205406-00290-ip-10-71-132-137.ec2.internal.warc.gz"}
|
https://tf-encrypted.readthedocs.io/en/latest/api/protocol/pond_private_tensor.html
|
# PondPrivateTensor¶
class tf_encrypted.protocol.pond.PondPrivateTensor(prot: tf_encrypted.protocol.pond.pond.Pond, share0: tf_encrypted.tensor.factory.AbstractTensor, share1: tf_encrypted.tensor.factory.AbstractTensor, is_scaled: bool)[source]
This class represents a private value that may be unknown to everyone.
shape
Return type: List[int] The shape of this tensor.
unwrapped
Unwrap the tensor.
This will return the shares for each of the parties that collectively own he tensor.
x_0, y_0 = tensor.unwrapped
# x_0 == private shares of the value pinned to player_0's device.
# y_0 == private shares of the value pinned to player_1's device.
In most cases you will not need to use this method. All funtions will hide this functionality for you (e.g. add, mul, etc).
|
2019-05-24 20:13:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33075594902038574, "perplexity": 11194.893072124978}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257731.70/warc/CC-MAIN-20190524184553-20190524210553-00558.warc.gz"}
|
https://www.physicsforums.com/threads/integration-by-long-division-problem.222299/
|
# Integration by long division problem
1. Homework Statement
Consider the indefinite integral of (4 x^3+4 x^2-96 x -100)/(x^2-25)dx
Then the integrand decomposes into the form
ax + b + c/(x - 5) + d/(x + 5)
Find a, b, c, and d.
Then find the integral of the function.
3. The Attempt at a Solution
Using long division, I got this far...
$$\frac{4x^3 + 4x^2 - 96x - 100}{x^2 - 25}$$
=
4x + 4 + c/(x - 5) + d/(x + 5)
It'd be pretty hard to show how I got a and b, but I'm pretty positive that's correct. I just can't find C or D.
So then, I thought I was supposed to put my remainder after dividing over (x - 5)(x + 5). My remainder is 4x, so I went.
4x/((x - 5)(x + 5))
Which produces
c + d = 4.
I need one more equation to solve for it though. Help?
## Answers and Replies
Related Calculus and Beyond Homework Help News on Phys.org
tiny-tim
Homework Helper
… just multiply …
4x/((x - 5)(x + 5))
Which produces
c + d = 4.
I need one more equation to solve for it though.
Hi the7joker7!
c/(x - 5) + d/(x + 5) = 4x/(x - 5)(x + 5);
just multiply both sides by (x - 5)(x + 5).
So c + d = 4 is correct?
So that gets me to...
C(x + 5) + D(x - 5) = 4x
Cx + 5c + dx + 5d = 4x
4x = (c + d)x + (5c + 5d)
Again, produces c + d = 4
So that doesn't really help. =/
cristo
Staff Emeritus
Basically, you want to solve this equation:
$$\frac{4x^3+4x^2-96x-100}{x^2-25}=ax+b+\frac{c}{x-5}+\frac{d}{x+5}$$
Now, the way I would do it would be to multiply up the right hand side, and put it all over the common denominator (x-5)(x+5). Then, you can compare the numerator of the LHS to the new numerator on the RHS. Compare coefficients: you correctly have that a=b=4. Comparing the coefficients of the x and units will give you two equations, in terms of c and d, which you can solve.
tiny-tim
Homework Helper
… gotta be careful …
C(x + 5) + D(x - 5) = 4x
Cx + 5c + dx + 5d = 4x
erm … no … Cx + 5c + dx - 5d = 4x!
Try again!
C + D = 4
and
5c - 5d = 0, then?
So that gives...
c = 2 and d = 2?
I just had another guy claim it was c = 4 and d = 0 using...
"using synthetic division divide numerator by denominator
x^2-25)4x^3+4x^2-96x-100(4x
...........4x^3+0x^2-100x
______________________
...................4x^2+4x-100(4
...................4x^2+0x-100
_______________________
...........................4x
so (4x^3+4x^2-96x-100) /(x^2 - 25) = 4x + 4 + 4x/(x^2-25)
so a = 4, b = 4 , c = 4 and d = 0"
Which one's right?
tiny-tim
Homework Helper
so (4x^3+4x^2-96x-100) /(x^2 - 25) = 4x + 4 + 4x/(x^2-25)
so a = 4, b = 4 , c = 4 and d = 0"
Which one's right?
Hi joker!
Well, he's sort-of right, and he sort-of isn't!
The first line above is correct - but it's exactly what you had anyway!
It isn't reduced to the simplest fractions!
So the second line is just optimistically re-defining c and d to fit the result!
You've gone one step further, and split the last fraction into two simpler ones.
Yours is defintitely right1
Thanks.
Hmm...Since derivative of denominator (x^2-25) is numerator, 2x , the integral of 2x dx/(x^2-25) is ln(x^2-25)
=>4x^2/2 + 4x + 2ln(x^2 -25) + c
=>2x^2 + 4x + 2ln[(x+5)(x-5)] + c
2x^2 + 4x + 2ln(x+5) + 2ln(x-5) + c
Does that sound in order?
tiny-tim
Homework Helper
Hi joker!
Yes that's fine!
btw, I think the point "another guy" was correctly making was that, once you'd got 4x/(x^2 - 25), there was no point in breaking it down any further, since you could instantly see what its integral was!
But you had to go on in this case only because the question specifically required it.
If it helps any...
Once you get your result from long division (should be 4x+4-(4x/x^2-25)), multiply through by x^2-25 on both sides. This will leave you with 4x^3+4x^2-96x-100 = 4x(x^2-25)+4(x^2-25)-4x.
Now set x = 0 , so A = 4
Now set x = 5 , so C = -1
Now set x = 1 , so B = -2 ( don't forget to plug in A and C to get B)
So your A B and C should be 4,-1, and -2 respectively.
You don't need to solve a single equation.
f(x) = (4 x^3+4 x^2-96 x -100)/(x^2-25)
Expand around singular points and find asymptotic behavior at infinity:
Singular term in expansion around x = -5:
1/(x+5) * Lim x--->-5 of (x+5)f(x) = 2/(x+5)
Singular term in expansion around x = 5:
1/(x-5) Lim x--->5 of (x-5)f(x) = 2/(x-5)
Expansion around infinity:
The singular terms in this expanson (i.e. the terms that go to infinity as we approach the point around which we expand) are the postive powers of x. We have:
1/(x^2 - 25) = 1/x^2 1/[1-(5/x)^2] = 1/x^2 [1+25/x^2 + ...]
f(x) = 4 x + 4 + terms that go to zero for x to infinity
The sum of all the singular terms of the three expansions is:
g(x) = 2/(x+5) + 2/(x-5) + 4 x + 4
Consider the difference f(x) - g(x). Since both f and g are rational functions, f(x) - g(x) is a rational function. But it doesn't have any singularities as g contains the singular terms of the expansion around all the singular points of f. So, f(x) - g(x) is actually a polynomial. At infinity, f(x) - g(x) must tend to zero, as g(x) contains the positive powers of x in the large x expansion of f(x). This then implies that
f(x) - g(x) is zero.
|
2019-12-06 04:14:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7851409912109375, "perplexity": 1203.356801946749}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540484477.5/warc/CC-MAIN-20191206023204-20191206051204-00148.warc.gz"}
|
https://pdal.io/stages/filters.splitter.html
|
# filters.splitter¶
The splitter filter breaks a point cloud into square tiles of a size that you choose. The origin of the tiles is chosen arbitrarily unless specified as an option.
The splitter takes a single PointView as its input and creates a PointView for each tile as its output.
Splitting is usually applied to data read from files (which produce one large stream of points) before the points are written to a database (which prefer data segmented into smaller blocks).
Default Embedded Stage
This stage is enabled by default
## Example¶
{
"pipeline":[
"input.las",
{
"type":"filters.splitter",
"length":"100",
"origin_x":"638900.0",
"origin_y":"835500.0"
},
{
"type":"writers.pgpointcloud",
"connection":"dbname='lidar' user='user'"
}
]
}
## Options¶
length
Length of the sides of the tiles that are created to hold points. [Default: 1000]
origin_x
X Origin of the tiles. [Default: none (chosen arbitarily)]
origin_y
Y Origin of the tiles. [Default: none (chosen arbitarily)]
buffer
Amount of overlap to include in each tile. This buffer is added onto length in both the x and the y direction. [Default: 0.0]
|
2018-12-13 19:55:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7018527984619141, "perplexity": 3470.367001766674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825098.68/warc/CC-MAIN-20181213193633-20181213215133-00175.warc.gz"}
|
https://www.physicsforums.com/threads/solving-a-cubic-equation.423646/
|
# Homework Help: Solving a cubic equation
1. Aug 22, 2010
### Refraction
1. The problem statement, all variables and given/known data
I've been given the following quartic equation in a question:
$$x = (1 + 2a + a^2)x - (2a + 3a^2 + a^3)x^2 + (2a^2 + 2a^3)x^3 - a^3x^4$$
and need to show that two of the solutions (that aren't 0 and 1) can be given by:
$$x= \frac{(2+a) \pm \sqrt{a^2 - 4}}{2a}$$
2. The attempt at a solution
I think I have to start by dividing it all by x - when I put this new equation into wolfram alpha it gives the right answer, I just have no idea how to get there (or where to start), since I haven't done a whole lot with cubics or quartics before.
Last edited: Aug 22, 2010
2. Aug 22, 2010
### Hurkyl
Staff Emeritus
Can you say why you think so?
3. Aug 22, 2010
### Staff: Mentor
Why don't you just put expression given as x into the equation?
4. Aug 22, 2010
### Refraction
Sorry, I left out a part of the equation above. It was actually x = [what I already gave], I've edited that in now. It just seems like that way it would be easier to rearrange into the form needed, I'm not 100% sure about that though.
5. Aug 22, 2010
### Staff: Mentor
I thought you mean f(x) = 0. Still, what I wrote above holds.
6. Aug 22, 2010
### Refraction
what did you mean by putting the expression given as x into the equation? Putting it into the second one there/the other way around?
7. Aug 22, 2010
### Staff: Mentor
Sorry if my English failed.
Substitute value of x given in the second equation for all occurrences of x in the first equation. That will yield equation in "a" - and if second equation really gives solution, equation in "a" should be just an identity.
8. Aug 22, 2010
### HallsofIvy
You are given two putative solutions. Just put them into the equation, do the algebra, and see whether or not they satisfy the equation.
9. Aug 23, 2010
### Refraction
Would there be a reasonably simple way to get to the two results given without substituting them into the first equation? I have a feeling that I'm supposed to do it that way, and not just by replacing x with the answers given.
It seems like I could get to that form with some rearranging and factorising but I'm not sure where to begin with that, or if there's any specific techniques that might help.
10. Aug 23, 2010
### Petek
Go back to your original equation and rewrite in the form f(x) = 0. Observe that you can factor out ax, so your equation looks like axg(x) = 0. Now g(x) is a cubic polynomial and you know that 1 is a root. Use long division to compute g(x)/(x-1). The result is a polynomial of degree 2. Now use the quadratic formula to find its roots. You end up with the result in your first post.
11. Aug 23, 2010
### Refraction
Thanks, that worked perfectly!
|
2018-06-20 08:22:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6840723156929016, "perplexity": 588.0267429836645}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863489.85/warc/CC-MAIN-20180620065936-20180620085936-00538.warc.gz"}
|
https://trenton3983.github.io/files/projects/2020-03-05_intro_to_data_visualization_in_python/2020-03-05_intro_to_data_visualization_in_python.html
|
Course Description
This course extends Intermediate Python for Data Science to provide a stronger foundation in data visualization in Python. You'll get a broader coverage of the Matplotlib library and an overview of seaborn, a package for statistical graphics. Topics covered include customizing graphics, plotting two-dimensional arrays (like pseudocolor plots, contour plots, and images), statistical graphics (like visualizing distributions and regressions), and working with time series and image data.
Datasets
In [1]:
stocks_url = 'https://assets.datacamp.com/production/repositories/558/datasets/8dd58ff003e399765cdf348305783b842ff1d7eb/stocks.csv'
Imports
In [2]:
import pandas as pd
from pprint import pprint as pp
from itertools import combinations
import matplotlib.pyplot as plt
import seaborn as sns
import requests
import zipfile
from pathlib import Path
import numpy as np
Pandas Configuration Options
In [3]:
pd.set_option('max_columns', 200)
pd.set_option('max_rows', 300)
pd.set_option('display.expand_frame_repr', True)
Functions
In [4]:
def create_dir_save_file(dir_path: Path, url: str):
"""
Check if the path exists and create it if it does not.
Check if the file exists and download it if it does not.
"""
if not dir_path.parents[0].exists():
dir_path.parents[0].mkdir(parents=True)
print(f'Directory Created: {dir_path.parents[0]}')
else:
print('Directory Exists')
if not dir_path.exists():
r = requests.get(url, allow_redirects=True)
open(dir_path, 'wb').write(r.content)
print(f'File Created: {dir_path.name}')
else:
print('File Exists')
DataFrames
In [5]:
mpg_path = Path('data/intro_to_data_visualization_in_python/auto-mpg.csv')
# percentage of bachelors degrees awarded to women in the USA
women_path = Path('data/intro_to_data_visualization_in_python/percent-bachelors-degrees-women-usa.csv')
stocks_path = Path('data/intro_to_data_visualization_in_python/stocks.csv')
create_dir_save_file(mpg_path, mpg_url)
create_dir_save_file(women_path, women_bach_url)
create_dir_save_file(stocks_path, stocks_url)
Directory Exists
File Exists
Directory Exists
File Exists
Directory Exists
File Exists
In [6]:
In [7]:
Out[7]:
mpg cyl displ hp weight accel yr origin name color size marker
0 18.0 6 250.0 88 3139 14.5 71 US ford mustang red 27.370336 o
1 9.0 8 304.0 193 4732 18.5 70 US hi 1200d green 62.199511 o
2 36.1 4 91.0 60 1800 16.4 78 Asia honda civic cvcc blue 9.000000 x
3 18.5 6 250.0 98 3525 19.0 77 US ford granada red 34.515625 o
4 34.3 4 97.0 78 2188 15.8 80 Europe audi 4000 blue 13.298178 s
In [8]:
df_mpg.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 392 entries, 0 to 391
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 mpg 392 non-null float64
1 cyl 392 non-null int64
2 displ 392 non-null float64
3 hp 392 non-null int64
4 weight 392 non-null int64
5 accel 392 non-null float64
6 yr 392 non-null int64
7 origin 392 non-null object
8 name 392 non-null object
9 color 392 non-null object
10 size 392 non-null float64
11 marker 392 non-null object
dtypes: float64(4), int64(4), object(4)
memory usage: 36.9+ KB
In [9]:
Out[9]:
Year Agriculture Architecture Art and Performance Biology Business Communications and Journalism Computer Science Education Engineering English Foreign Languages Health Professions Math and Statistics Physical Sciences Psychology Public Administration Social Sciences and History
0 1970 4.229798 11.921005 59.7 29.088363 9.064439 35.3 13.6 74.535328 0.8 65.570923 73.8 77.1 38.0 13.8 44.4 68.4 36.8
1 1971 5.452797 12.003106 59.9 29.394403 9.503187 35.5 13.6 74.149204 1.0 64.556485 73.9 75.5 39.0 14.9 46.2 65.5 36.2
2 1972 7.420710 13.214594 60.4 29.810221 10.558962 36.6 14.9 73.554520 1.2 63.664263 74.6 76.9 40.2 14.8 47.6 62.6 36.1
3 1973 9.653602 14.791613 60.2 31.147915 12.804602 38.4 16.4 73.501814 1.6 62.941502 74.9 77.4 40.9 16.5 50.4 64.3 36.4
4 1974 14.074623 17.444688 61.9 32.996183 16.204850 40.5 18.9 73.336811 2.2 62.413412 75.3 77.9 41.8 18.2 52.6 66.1 37.3
In [10]:
df_women.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 42 entries, 0 to 41
Data columns (total 18 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Year 42 non-null int64
1 Agriculture 42 non-null float64
2 Architecture 42 non-null float64
3 Art and Performance 42 non-null float64
4 Biology 42 non-null float64
6 Communications and Journalism 42 non-null float64
7 Computer Science 42 non-null float64
8 Education 42 non-null float64
9 Engineering 42 non-null float64
10 English 42 non-null float64
11 Foreign Languages 42 non-null float64
12 Health Professions 42 non-null float64
13 Math and Statistics 42 non-null float64
14 Physical Sciences 42 non-null float64
15 Psychology 42 non-null float64
16 Public Administration 42 non-null float64
17 Social Sciences and History 42 non-null float64
dtypes: float64(17), int64(1)
memory usage: 6.0 KB
In [11]:
Out[11]:
Date AAPL IBM CSCO MSFT
0 2000-01-03 111.937502 116.0000 108.0625 116.5625
1 2000-01-04 102.500003 112.0625 102.0000 112.6250
2 2000-01-05 103.999997 116.0000 101.6875 113.8125
3 2000-01-06 94.999998 114.0000 100.0000 110.0000
4 2000-01-07 99.500001 113.5000 105.8750 111.4375
In [12]:
df_stocks.Date = pd.to_datetime(df_stocks.Date)
df_stocks.set_index('Date', inplace=True, drop=True)
df_stocks.info()
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 3521 entries, 2000-01-03 to 2013-12-31
Data columns (total 4 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 AAPL 3521 non-null float64
1 IBM 3521 non-null float64
2 CSCO 3521 non-null float64
3 MSFT 3521 non-null float64
dtypes: float64(4)
memory usage: 137.5 KB
# Customizing plots¶
Following a review of basic plotting with Matplotlib, this chapter delves into customizing plots using Matplotlib. This includes overlaying plots, making subplots, controlling axes, adding legends and annotations, and using different plot styles.
Reminder: Line Plots
In [13]:
x = np.linspace(0, 1, 201)
y = np.sin((2*np.pi*x)**2)
plt.plot(x, y, 'purple')
plt.show()
Reminder: Scatter Plots
In [14]:
np.random.seed(256)
x = 10*np.random.rand(200,1)
y = (0.2 + 0.8*x) * np.sin(2*np.pi*x) + np.random.randn(200,1)
plt.scatter(x, y, color='purple')
plt.show()
Reminder: Histograms
In [15]:
np.random.seed(256)
x = 10*np.random.rand(200,1)
y = (0.2 + 0.8*x) * np.sin(2*np.pi*x) + np.random.randn(200,1)
plt.hist(y, bins=20, color='purple')
plt.show()
What you will learn
• Customizing of plots: axes, annotations, legends
• Overlaying multiple plots and subplots
• Visualizing 2D arrays, 2D data sets
• Working with color maps
• Producing statistical graphics
• Plotting time series
• Working with images
## Plotting Multiple Graphs¶
Strategies
• Plotting many graphs on common axes
• Creating axes within a figure
• Creating subplots within a figure
In [16]:
austin_weather_url = 'https://assets.datacamp.com/production/repositories/497/datasets/4d7b2bc6b10b527dc297707fb92fa46b10ac1be5/weather_data_austin_2010.csv'
austin_weather_path = Path('data/intro_to_data_visualization_in_python/weather_data_austin_2010.csv')
create_dir_save_file(austin_weather_path, austin_weather_url)
df_weather.Date = pd.to_datetime(df_weather.Date)
df_weather.set_index('Date', drop=True, inplace=True)
Directory Exists
File Exists
Graphs On Common Axes
In [17]:
temperature = df_weather['Temperature']['2010-01-01':'2010-01-15']
dewpoint = df_weather['DewPoint']['2010-01-01':'2010-01-15']
t = temperature.index
plt.plot(t, temperature, 'red')
plt.plot(t, dewpoint, 'blue') # Appears on same axes
plt.xlabel('Date')
plt.title('Temperature & Dew Point')
plt.xticks(rotation=60)
plt.show() # Renders plot objects to screen
Using axes()
• Syntax: axes([x_lo, y_lo, width, height])
• Units between 0 and 1 (figure dimensions)
In [18]:
plt.figure(figsize=(8, 6))
plt.axes([0.05,0.05,0.425,0.9])
plt.plot(t, temperature, 'red')
plt.xlabel('Date')
plt.title('Temperature')
plt.xticks(rotation=60)
plt.axes([0.525,0.05,0.425,0.9])
plt.plot(t, dewpoint, 'blue')
plt.xlabel('Date')
plt.title('Dew Point')
plt.xticks(rotation=60)
plt.show()
Using subplot()
• Syntax: subplot(nrows, ncols, nsubplot)
• Subplot ordering:
• Row-wise from top left
• Indexed from 1
In [19]:
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(t, temperature, 'red')
plt.xlabel('Date')
plt.title('Temperature')
plt.xticks(rotation=60)
plt.subplot(2, 1, 2)
plt.plot(t, dewpoint, 'blue')
plt.xlabel('Date')
plt.title('Dew Point')
plt.xticks(rotation=60)
plt.tight_layout()
plt.show()
### Multiple plots on single axis¶
It is time now to put together some of what you have learned and combine line plots on a common set of axes. The data set here comes from records of undergraduate degrees awarded to women in a variety of fields from 1970 to 2011. You can compare trends in degrees most easily by viewing two curves on the same set of axes.
Here, three NumPy arrays have been pre-loaded for you: year (enumerating years from 1970 to 2011 inclusive), physical_sciences (representing the percentage of Physical Sciences degrees awarded to women each in corresponding year), and computer_science (representing the percentage of Computer Science degrees awarded to women in each corresponding year).
You will issue two plt.plot() commands to draw line plots of different colors on the same set of axes. Here, year represents the x-axis, while physical_sciences and computer_science are the y-axes.
Instructions
• Import matplotlib.pyplot as its usual alias.
• Add a 'blue' line plot of the % of degrees awarded to women in the Physical Sciences (physical_sciences) from 1970 to 2011 (year). Note that the x-axis should be specified first.
• Add a 'red' line plot of the % of degrees awarded to women in Computer Science (computer_science) from 1970 to 2011 (year).
• Use plt.show() to display the figure with the curves on the same axes.
In [20]:
# Plot in blue the % of degrees awarded to women in the Physical Sciences
plt.plot(df_women.Year, df_women['Physical Sciences'], c='blue')
# Plot in red the % of degrees awarded to women in Computer Science
plt.plot(df_women.Year, df_women['Computer Science'], c='red')
# Display the plot
plt.show()
It looks like, for the last 25 years or so, more women have been awarded undergraduate degrees in the Physical Sciences than in Computer Science.
### Using axes()¶
Rather than overlaying line plots on common axes, you may prefer to plot different line plots on distinct axes. The command plt.axes() is one way to do this (but it requires specifying coordinates relative to the size of the figure).
Here, you have the same three arrays year, physical_sciences, and computer_science representing percentages of degrees awarded to women over a range of years. You will use plt.axes() to create separate sets of axes in which you will draw each line plot.
In calling plt.axes([xlo, ylo, width, height]), a set of axes is created and made active with lower corner at coordinates (xlo, ylo) of the specified width and height. Note that these coordinates can be passed to plt.axes() in the form of a list or a tuple.
The coordinates and lengths are values between 0 and 1 representing lengths relative to the dimensions of the figure. After issuing a plt.axes() command, plots generated are put in that set of axes.
Instructions
• Create a set of plot axes with lower corner xlo and ylo of 0.05 and 0.05, width of 0.425, and height of 0.9 (in units relative to the figure dimension).
• Note: Remember to pass these coordinates to plt.axes() in the form of a list: [xlo, ylo, width, height].
• Plot the percentage of degrees awarded to women in Physical Sciences in blue in the active axes just created.
• Create a set of plot axes with lower corner xlo and ylo of 0.525 and 0.05, width of 0.425, and height of 0.9 (in units relative to the figure dimension).
• Plot the percentage of degrees awarded to women in Computer Science in red in the active axes just created.
In [21]:
# Create plot axes for the first line plot
plt.axes([0.05, 0.05, 0.425, 0.9])
# Plot in blue the % of degrees awarded to women in the Physical Sciences
plt.plot(df_women.Year, df_women['Physical Sciences'], c='blue')
# Create plot axes for the second line plot
plt.axes([0.525, 0.05, 0.425, 0.9])
# Plot in red the % of degrees awarded to women in Computer Science
plt.plot(df_women.Year, df_women['Computer Science'], c='red')
# Display the plot
plt.show()
As you can see, not only are there now two separate plots with their own axes, but the axes for each plot are slightly different.
### Using subplot() (1)¶
The command plt.axes() requires a lot of effort to use well because the coordinates of the axes need to be set manually. A better alternative is to use plt.subplot() to determine the layout automatically.
In this exercise, you will continue working with the same arrays from the previous exercises: year, physical_sciences, and computer_science. Rather than using plt.axes() to explicitly lay out the axes, you will use plt.subplot(m, n, k) to make the subplot grid of dimensions m by n and to make the kth subplot active (subplots are numbered starting from 1 row-wise from the top left corner of the subplot grid).
Instructions
• Use plt.subplot() to create a figure with 1x2 subplot layout & make the first subplot active.
• Plot the percentage of degrees awarded to women in Physical Sciences in blue in the active subplot.
• Use plt.subplot() again to make the second subplot active in the current 1x2 subplot grid.
• Plot the percentage of degrees awarded to women in Computer Science in red in the active subplot.
In [22]:
# Create a figure with 1x2 subplot and make the left subplot active
plt.subplot(1, 2, 1)
# Plot in blue the % of degrees awarded to women in the Physical Sciences
plt.plot(df_women.Year, df_women['Physical Sciences'], c='blue')
plt.title('Physical Sciences')
# Make the right subplot active in the current 1x2 subplot grid
plt.subplot(1, 2, 2)
# Plot in red the % of degrees awarded to women in Computer Science
plt.plot(df_women.Year, df_women['Computer Science'], c='red')
plt.title('Computer Science')
# Use plt.tight_layout() to improve the spacing between subplots
plt.tight_layout()
plt.show()
Using subplots like this is a better alternative to using plt.axes().
### Using subplot() (2)¶
Now you have some familiarity with plt.subplot(), you can use it to plot more plots in larger grids of subplots of the same figure.
Here, you will make a 2×2 grid of subplots and plot the percentage of degrees awarded to women in Physical Sciences (using physical_sciences), in Computer Science (using computer_science), in Health Professions (using health), and in Education (using education).
Instructions
• Create a figure with 2×2 subplot layout, make the top, left subplot active, and plot the % of degrees awarded to women in Physical Sciences in blue in the active subplot.
• Make the top, right subplot active in the current 2×2 subplot grid and plot the % of degrees awarded to women in Computer Science in red in the active subplot.
• Make the bottom, left subplot active in the current 2×2 subplot grid and plot the % of degrees awarded to women in Health Professions in green in the active subplot.
• Make the bottom, right subplot active in the current 2×2 subplot grid and plot the % of degrees awarded to women in Education in yellow in the active subplot.
• _When making your plots, be sure to use the variable names specified in the exercise text above (computer_science, health, and education)!_
In [23]:
# Create a figure with 2x2 subplot layout and make the top left subplot active
plt.subplot(2, 2, 1)
# Plot in blue the % of degrees awarded to women in the Physical Sciences
plt.plot(df_women.Year, df_women['Physical Sciences'], color='blue')
plt.title('Physical Sciences')
# Make the top right subplot active in the current 2x2 subplot grid
plt.subplot(2, 2, 2)
# Plot in red the % of degrees awarded to women in Computer Science
plt.plot(df_women.Year, df_women['Computer Science'], color='red')
plt.title('Computer Science')
# Make the bottom left subplot active in the current 2x2 subplot grid
plt.subplot(2, 2, 3)
# Plot in green the % of degrees awarded to women in Health Professions
plt.plot(df_women.Year, df_women['Health Professions'], color='green')
plt.title('Health Professions')
# Make the bottom right subplot active in the current 2x2 subplot grid
plt.subplot(2, 2, 4)
# Plot in yellow the % of degrees awarded to women in Education
plt.plot(df_women.Year, df_women['Education'], color='yellow')
plt.title('Education')
# Improve the spacing between subplots and display them
plt.tight_layout()
plt.show()
You can use this approach to create subplots in any layout of your choice.
## Customizing Axes¶
Controlling axis extents
• axis([xmin, xmax, ymin, ymax]) sets axis extents
• Control over individual axis extents
• xlim([xmin, xmax])
• ylim([ymin, ymax])
• Can use tuples, lists for extents
• e.g., xlim((-2, 3)) works
• e.g., xlim([-2, 3]) works also
GDP over time
In [24]:
gdp_url = 'https://assets.datacamp.com/production/repositories/516/datasets/a0858a700501f88721ca9e4bdfca99b9e10b937f/GDP.zip'
save_to = Path('data/intro_to_data_visualization_in_python/gdp.zip')
In [25]:
create_dir_save_file(save_to, gdp_url)
Directory Exists
File Exists
In [26]:
zf = zipfile.ZipFile(save_to)
df_gdp.DATE = pd.to_datetime(df_gdp.DATE)
df_gdp['YEAR'] = pd.DatetimeIndex(df_gdp.DATE).year
In [27]:
plt.plot(df_gdp.YEAR, df_gdp.VALUE)
plt.xlabel('Year')
plt.ylabel('Billions of Dollars')
plt.title('US Gross Domestic Product')
plt.show()
Using xlim()
In [28]:
plt.plot(df_gdp.YEAR, df_gdp.VALUE)
plt.xlabel('Year')
plt.ylabel('Billions of Dollars')
plt.title('US Gross Domestic Product')
plt.xlim((1947, 1957))
plt.show()
Using xlim() & ylim()
In [29]:
plt.plot(df_gdp.YEAR, df_gdp.VALUE)
plt.xlabel('Year')
plt.ylabel('Billions of Dollars')
plt.title('US Gross Domestic Product')
plt.xlim((1947, 1957))
plt.ylim((0, 1000))
plt.show()
Using axis()
In [30]:
plt.plot(df_gdp.YEAR, df_gdp.VALUE)
plt.xlabel('Year')
plt.ylabel('Billions of Dollars')
plt.title('US Gross Domestic Product')
plt.axis((1947, 1957, 0, 600))
plt.show()
Other axis() options
| Invocation | Result |
|----------------|--------------------------------------|
| axis('off') | turns off axis lines, labels |
| axis('equal') | equal scaling on x, y axes |
| axis('square') | forces square plot |
| axis('tight') | sets xlim(), ylim() to show all data |
Using axis('equal')
In [31]:
np.random.seed(555)
t = np.linspace(0,2*np.pi,100)
xc = 0.0
yc = 0.0
r = 1
x = r*np.cos(t) + xc
y = r*np.sin(t) + yc
plt.subplot(2, 1, 1)
plt.plot(x, y, 'red')
plt.grid(True)
plt.title('default axis')
plt.subplot(2, 1, 2)
plt.plot(x, y, 'red')
plt.grid(True)
plt.axis('equal')
plt.title('axis equal')
plt.tight_layout()
plt.show()
### Using xlim(), ylim()¶
In this exercise, you will work with the matplotlib.pyplot interface to quickly set the x- and y-limits of your plots.
You will now create the same figure as in the previous exercise using plt.plot(), this time setting the axis extents using plt.xlim() and plt.ylim(). These commands allow you to either zoom or expand the plot or to set the axis ranges to include important values (such as the origin).
In this exercise, as before, the percentage of women graduates in Computer Science and in the Physical Sciences are held in the variables computer_science and physical_sciences respectively over year.
After creating the plot, you will use plt.savefig() to export the image produced to a file.
Instructions
• Use plt.xlim() to set the x-axis range to the period between the years 1990 and 2010.
• Use plt.ylim() to set the y-axis range to the interval between 0% and 50% of degrees awarded.
• Display the final figure with plt.show() and save the output to 'xlim_and_ylim.png'.
In [32]:
# Plot the % of degrees awarded to women in Computer Science and the Physical Sciences
plt.plot(df_women['Year'], df_women['Computer Science'], color='red')
plt.plot(df_women['Year'], df_women['Physical Sciences'], color='blue')
plt.xlabel('Year')
plt.ylabel('Degrees awarded to women (%)')
# Set the x-axis range
plt.xlim(1990, 2010)
# Set the y-axis range
plt.ylim(0, 50)
plt.title('Degrees awarded to women (1990-2010)\nComputer Science (red)\nPhysical Sciences (blue)')
# Save the image as 'xlim_and_ylim.png'
plt.savefig('Images/intro_to_data_visualization_in_python/xlim_and_ylim.png')
# display the plot
plt.show()
This plot effectively captures the difference in trends between 1990 and 2010.
### Using axis()¶
Using plt.xlim() and plt.ylim() are useful for setting the axis limits individually. In this exercise, you will see how you can pass a 4-tuple to plt.axis() to set limits for both axes at once. For example, plt.axis((1980, 1990, 0, 75)) would set the extent of the x-axis to the period between 1980 and 1990, and would set the y-axis extent from 0 to 75% degrees award.
Once again, the percentage of women graduates in Computer Science and in the Physical Sciences are held in the variables computer_science and physical_sciences where each value was measured at the corresponding year held in the year variable.
Instructions
• Use plt.axis() to select the time period between 1990 and 2010 on the x-axis as well as the interval between 0 and 50% awarded on the y-axis.
• Save the resulting plot as 'axis_limits.png'.
In [33]:
# Plot in blue the % of degrees awarded to women in Computer Science
plt.plot(df_women['Year'], df_women['Computer Science'], color='red')
# Plot in red the % of degrees awarded to women in the Physical Sciences
plt.plot(df_women['Year'], df_women['Physical Sciences'], color='blue')
# Set the x-axis and y-axis limits
plt.axis((1990, 2010, 0, 50))
# Save the figure as 'axis_limits.png'
plt.savefig('Images/intro_to_data_visualization_in_python/axis_limits.png')
# Show the figure
plt.show()
Using plt.axis() allows you to set limits for both axes at once, as opposed to setting them individually with plt.xlim() and plt.ylim().
## Legends, Annotations, and Styles¶
In [34]:
iris = pd.DataFrame(data= np.c_[data['data'], data['target']], columns= data['feature_names'] + ['target'])
iris['species'] = pd.Categorical.from_codes(data.target, data.target_names)
Out[34]:
sepal length (cm) sepal width (cm) petal length (cm) petal width (cm) target species
0 5.1 3.5 1.4 0.2 0.0 setosa
1 4.9 3.0 1.4 0.2 0.0 setosa
2 4.7 3.2 1.3 0.2 0.0 setosa
3 4.6 3.1 1.5 0.2 0.0 setosa
4 5.0 3.6 1.4 0.2 0.0 setosa
Using legend()
• provide labels for overlaid points and curves
Legend Locations
In [35]:
plt.figure(figsize=(8, 8))
plt.scatter('sepal length (cm)', 'sepal width (cm)', data=iris[iris.species == 'setosa'], marker='o', color='red', label='setosa')
plt.scatter('sepal length (cm)', 'sepal width (cm)', data=iris[iris.species == 'versicolor'], marker='o', color='green', label='versicolor')
plt.scatter('sepal length (cm)', 'sepal width (cm)', data=iris[iris.species == 'virginica'], marker='o', color='blue', label='virginica')
plt.legend(loc='upper right')
plt.title('Iris data')
plt.xlabel('sepal length (cm)')
plt.ylabel('sepal width (cm)')
plt.show()
Plot Annotations
• Text labels and arrows using annotate() method
• Flexible specification of coordinates
• Keyword arrowprops: dict of arrow properties
• width
• color
• etc.
Options for annotate()
Using annotate() for text
In [36]:
plt.figure(figsize=(8, 8))
plt.scatter('sepal length (cm)', 'sepal width (cm)', data=iris[iris.species == 'setosa'], marker='o', color='red', label='setosa')
plt.scatter('sepal length (cm)', 'sepal width (cm)', data=iris[iris.species == 'versicolor'], marker='o', color='green', label='versicolor')
plt.scatter('sepal length (cm)', 'sepal width (cm)', data=iris[iris.species == 'virginica'], marker='o', color='blue', label='virginica')
plt.legend(loc='upper right')
plt.title('Iris data')
plt.xlabel('sepal length (cm)')
plt.ylabel('sepal width (cm)')
plt.annotate('setosa', xy=(5.1, 3.6))
plt.annotate('virginica', xy=(7.25, 3.5))
plt.annotate('versicolor', xy=(5.0, 2.1))
plt.show()
Using annotate() for arrows
In [37]:
plt.figure(figsize=(8, 8))
plt.scatter('sepal length (cm)', 'sepal width (cm)', data=iris[iris.species == 'setosa'], marker='o', color='red', label='setosa')
plt.scatter('sepal length (cm)', 'sepal width (cm)', data=iris[iris.species == 'versicolor'], marker='o', color='green', label='versicolor')
plt.scatter('sepal length (cm)', 'sepal width (cm)', data=iris[iris.species == 'virginica'], marker='o', color='blue', label='virginica')
plt.legend(loc='upper right')
plt.title('Iris data')
plt.xlabel('sepal length (cm)')
plt.ylabel('sepal width (cm)')
plt.annotate('setosa', xy=(5.0, 3.5), xytext=(4.25, 4.0), arrowprops={'color':'red'})
plt.annotate('virginica', xy=(7.2, 3.6), xytext=(6.5, 4.0), arrowprops={'color':'blue'})
plt.annotate('versicolor', xy=(5.05, 2.0), xytext=(5.5, 1.97), arrowprops={'color':'green'})
plt.show()
Working With Plot Styles
• Style sheets in Matplotlib
• Defaults for lines, points, backgrounds, etc.
• Switch styles globally with plt.style.use()
• plt.style.available: list of styles
• Matplotlib Style sheets reference
fivethirtyeight style
In [38]:
plt.figure(figsize=(8, 8))
plt.style.use('fivethirtyeight')
plt.scatter('sepal length (cm)', 'sepal width (cm)', data=iris[iris.species == 'setosa'], marker='o', color='red', label='setosa')
plt.scatter('sepal length (cm)', 'sepal width (cm)', data=iris[iris.species == 'versicolor'], marker='o', color='green', label='versicolor')
plt.scatter('sepal length (cm)', 'sepal width (cm)', data=iris[iris.species == 'virginica'], marker='o', color='blue', label='virginica')
plt.legend(loc='upper right')
plt.title('Iris data')
plt.xlabel('sepal length (cm)')
plt.ylabel('sepal width (cm)')
plt.annotate('setosa', xy=(5.0, 3.5), xytext=(4.25, 4.0), arrowprops={'color':'red'})
plt.annotate('virginica', xy=(7.2, 3.6), xytext=(6.5, 4.0), arrowprops={'color':'blue'})
plt.annotate('versicolor', xy=(5.05, 2.0), xytext=(5.5, 1.97), arrowprops={'color':'green'})
plt.show()
ggplot style
In [39]:
plt.style.use('ggplot')
plt.figure(figsize=(8, 8))
plt.scatter('sepal length (cm)', 'sepal width (cm)', data=iris[iris.species == 'setosa'], marker='o', color='red', label='setosa')
plt.scatter('sepal length (cm)', 'sepal width (cm)', data=iris[iris.species == 'versicolor'], marker='o', color='green', label='versicolor')
plt.scatter('sepal length (cm)', 'sepal width (cm)', data=iris[iris.species == 'virginica'], marker='o', color='blue', label='virginica')
plt.legend(loc='upper right')
plt.title('Iris data')
plt.xlabel('sepal length (cm)')
plt.ylabel('sepal width (cm)')
plt.annotate('setosa', xy=(5.0, 3.5), xytext=(4.25, 4.0), arrowprops={'color':'red'})
plt.annotate('virginica', xy=(7.2, 3.6), xytext=(6.5, 4.0), arrowprops={'color':'blue'})
plt.annotate('versicolor', xy=(5.05, 2.0), xytext=(5.5, 1.97), arrowprops={'color':'green'})
plt.show()
### Using legend()¶
Legends are useful for distinguishing between multiple datasets displayed on common axes. The relevant data are created using specific line colors or markers in various plot commands. Using the keyword argument label in the plotting function associates a string to use in a legend.
For example, here, you will plot enrollment of women in the Physical Sciences and in Computer Science over time. You can label each curve by passing a label argument to the plotting call, and request a legend using plt.legend(). Specifying the keyword argument loc determines where the legend will be placed.
Instructions
• Modify the plot command provided that draws the enrollment of women in Computer Science over time so that the curve is labeled 'Computer Science' in the legend.
• Modify the plot command provided that draws the enrollment of women in the Physical Sciences over time so that the curve is labeled 'Physical Sciences' in the legend.
• Add a legend at the lower center (i.e., loc='lower center').
In [40]:
# Plot in blue the % of degrees awarded to women in Computer Science
plt.plot(df_women['Year'], df_women['Computer Science'], color='red', label='Computer Science')
# Plot in red the % of degrees awarded to women in the Physical Sciences
plt.plot(df_women['Year'], df_women['Physical Sciences'], color='blue', label='Physical Sciences')
# Add a legend at the lower center
plt.legend(loc='lower center')
# Add axis labels and title
plt.xlabel('Year')
plt.ylabel('Enrollment (%)')
plt.show()
You should always use axes labels and legends to help make your plots more readable.
### Using annotate()¶
It is often useful to annotate a simple plot to provide context. This makes the plot more readable and can highlight specific aspects of the data. Annotations like text and arrows can be used to emphasize specific observations.
Here, you will once again plot enrollment of women in the Physical Sciences and Computer Science over time. The legend is set up as before. Additionally, you will mark the inflection point when enrollment of women in Computer Science reached a peak and started declining using plt.annotate().
To enable an arrow, set arrowprops=dict(facecolor='black'). The arrow will point to the location given by xy and the text will appear at the location given by xytext.
Computer Science enrollment and the years of enrollment have been preloaded for you as the arrays computer_science and year, respectively.
Instructions 1/2
• First, calculate the position for your annotation by finding the peak of women enrolling in Computer Science.
• Compute the maximum enrollment of women in Computer Science (using the computer_science array).
• Calculate the year in which there was the maximum enrollment of women in Computer Science.
• To do so, you will need to retrieve the index of the highest value in the computer_science array using .argmax(), and then use this value to index the year array.
In [41]:
cs_max = df_women['Computer Science'].max()
yr_max = df_women['Year'][df_women['Computer Science'].argmax()]
print(f'CS Max: {cs_max}\nYR Max: {yr_max}')
CS Max: 37.1
YR Max: 1983
Instructions 2/2
• Annotate the plot with a black arrow at the point of peak women enrolling in Computer Science.
• Label the arrow 'Maximum'. The parameter for this is s, but you don't have to specify it.
• Pass in the arguments to xy and xytext as tuples.
• For xy, use the yr_max and cs_max that you computed.
• For xytext, use (yr_max+5, cs_max+5) to specify the displacement of the label from the tip of the arrow.
• Draw the arrow by specifying the keyword argument arrowprops=dict(facecolor='black'). The single letter shortcut for 'black' is 'k'.
In [42]:
# Plot with legend as before
plt.plot(df_women['Year'], df_women['Computer Science'], color='red', label='Computer Science')
plt.plot(df_women['Year'], df_women['Physical Sciences'], color='blue', label='Physical Sciences')
plt.legend(loc='lower right')
# Add a black arrow annotation
plt.annotate('Maximum', xy=(yr_max, cs_max), xytext=(yr_max+5, cs_max+5), arrowprops=dict(facecolor='black'))
# Add axis labels and title
plt.xlabel('Year')
plt.ylabel('Enrollment (%)')
plt.show()
Annotations are extremely useful to help make more complicated plots easier to understand.
Here's a link to a stackoverflow question I answered regarding annotations: bold annotated text in matplotlib.
### Modifying styles¶
Matplotlib comes with a number of different stylesheets to customize the overall look of different plots. To activate a particular stylesheet you can simply call plt.style.use() with the name of the style sheet you want. To list all the available style sheets you can execute: print(plt.style.available).
Instructions
• Import matplotlib.pyplot as its usual alias.
• Activate the 'ggplot' style sheet with plt.style.use().
In [43]:
# Set the style to 'ggplot'
plt.style.use('ggplot')
# Create a figure with 2x2 subplot layout
plt.subplot(2, 2, 1)
# Plot the enrollment % of women in the Physical Sciences
plt.plot(df_women['Year'], df_women['Physical Sciences'], color='blue', label='Physical Sciences')
plt.title('Physical Sciences')
# Plot the enrollment % of women in Computer Science
plt.subplot(2, 2, 2)
plt.plot(df_women['Year'], df_women['Computer Science'], color='red', label='Computer Science')
plt.title('Computer Science')
# cs_max = computer_science.max()
# yr_max = year[computer_science.argmax()]
plt.annotate('Maximum', xy=(yr_max, cs_max), xytext=(yr_max-1, cs_max-10), arrowprops=dict(facecolor='black'))
# Plot the enrollmment % of women in Health professions
plt.subplot(2, 2, 3)
plt.plot(df_women['Year'], df_women['Health Professions'], color='green', label='Healt Professions')
plt.title('Health Professions')
# Plot the enrollment % of women in Education
plt.subplot(2, 2, 4)
plt.plot(df_women['Year'], df_women['Education'], color='yellow', label='Education')
plt.title('Education')
# Improve spacing between subplots and display them
plt.tight_layout()
plt.show()
# Plotting 2D arrays¶
This chapter showcases various techniques for visualizing two-dimensional arrays. This includes the use, presentation, and orientation of grids for representing two-variable functions followed by discussions of pseudocolor plots, contour plots, color maps, two-dimensional histograms, and images.
## Working With 2D Arrays¶
Reminder: NumPy Arrays
• Homogeneous in type
• Calculations all at once
• Indexing with brackets:
• A[index] for 1D array
• A[index0, index1] for 2D array
Reminder: Slicing Arrays
• Slicing: 1D arrays: A[slice], 2D arrays: A[slice0, slice1]
• Slicing: slice = start:stop:stride
• Indexes from start to stop-1 in steps of stride
• Missing start: implicitly at beginning of array
• Missing stop: implicitly at end of array
• Missing stride: implicitly stride 1
• Negative indexes/slices: count from end of array
2D Arrays & Images
2D Arrays & Functions
Using meshgrid()
In [44]:
u = np.linspace(-2, 2, 3)
v = np.linspace(-1, 1, 5)
X, Y = np.meshgrid(u, v)
Z = X**2/25 + Y**2/4
print(f'X:\n{X}\n\nY:\n{Y}')
X:
[[-2. 0. 2.]
[-2. 0. 2.]
[-2. 0. 2.]
[-2. 0. 2.]
[-2. 0. 2.]]
Y:
[[-1. -1. -1. ]
[-0.5 -0.5 -0.5]
[ 0. 0. 0. ]
[ 0.5 0.5 0.5]
[ 1. 1. 1. ]]
Meshgrid
Sampling On A Grid
In [45]:
print(f'Z:\n{Z}')
plt.set_cmap('gray')
plt.pcolor(Z)
plt.show()
Z:
[[0.41 0.25 0.41 ]
[0.2225 0.0625 0.2225]
[0.16 0. 0.16 ]
[0.2225 0.0625 0.2225]
[0.41 0.25 0.41 ]]
Orientations of 2D Arrays & Images
In [46]:
Z = np.array([[1, 2, 3], [4, 5, 6]])
print(f'Z:\n{Z}')
plt.pcolor(Z)
plt.show()
Z:
[[1 2 3]
[4 5 6]]
### Generating meshes¶
In order to visualize two-dimensional arrays of data, it is necessary to understand how to generate and manipulate 2-D arrays. Many Matplotlib plots support arrays as input and in particular, they support NumPy arrays. The NumPy library is the most widely-supported means for supporting numeric arrays in Python.
In this exercise, you will use the meshgrid function in NumPy to generate 2-D arrays which you will then visualize using plt.imshow(). The simplest way to generate a meshgrid is as follows:
import numpy as np
Y, X = np.meshgrid(range(10),range(20))
This will create two arrays with a shape of (20,10), which corresponds to 20 rows along the Y-axis and 10 columns along the X-axis. In this exercise, you will use np.meshgrid() to generate a regular 2-D sampling of a mathematical function.
Instructions
• Import the numpy and matplotlib.pyplot modules using the respective aliases np and plt.
• Generate two one-dimensional arrays u and v using np.linspace(). The array u should contain 41 values uniformly spaced between -2 and +2. The array v should contain 21 values uniformly spaced between -1 and +1.
• Construct two two-dimensional arrays X and Y from u and v using np.meshgrid().
• After the array Z is computed using X and Y, visualize the array Z using plt.pcolor() and plt.show().
• Save the resulting figure as 'sine_mesh.png'.
In [47]:
plt.style.use('default')
# Generate two 1-D arrays: u, v
u = np.linspace(-2, 2, 41)
v = np.linspace(-1, 1, 21)
# Generate 2-D arrays from u and v: X, Y
X,Y = np.meshgrid(u, v)
# Compute Z based on X and Y
Z = np.sin(3*np.sqrt(X**2 + Y**2))
# Display the resulting image with pcolor()
plt.pcolor(Z)
# Save the figure to 'sine_mesh.png'
plt.savefig('Images/intro_to_data_visualization_in_python/sine_mesh.png')
plt.show()
### Array orientation¶
The commands
In [1]: plt.pcolor(A, cmap='Blues')
In [2]: plt.colorbar()
In [3]: plt.show()
produce the pseudocolor plot above using a Numpy array A. Which of the commands below could have generated A?
numpy and matplotlib.pyplot have been imported as np and plt respectively. Play around in the IPython shell with different arrays and generate pseudocolor plots from them to identify which of the below commands could have generated A.
Instructions
• A = np.array([[1, 2, 1], [0, 0, 1], [-1, 1, 1]])
• A = np.array([[1, 0, -1], [2, 0, 1], [1, 1, 1]])
• A = np.array([[-1, 0, 1], [1, 0, 2], [1, 1, 1]])
• A = np.array([[1, 1, 1], [2, 0, 1], [1, 0, -1]])
In [48]:
A = np.array([[1, 0, -1], [2, 0, 1], [1, 1, 1]])
plt.pcolor(A, cmap='Blues')
plt.colorbar()
plt.show()
## Visualizing Bivariate Functions¶
Pseudocolo Plot
In [49]:
u = np.linspace(-2, 2, 65)
v = np.linspace(-1, 1, 33)
X,Y = np.meshgrid(u, v)
Z = X**2/25 + Y**2/4
plt.pcolor(Z) # if not in color, may depend on plt.style.use('default')
plt.show()
Color Bar
In [50]:
plt.pcolor(Z)
plt.colorbar()
plt.show()
Color Map
In [51]:
plt.pcolor(Z, cmap='gray')
plt.colorbar()
plt.show()
In [52]:
plt.pcolor(Z, cmap='autumn')
plt.colorbar()
plt.show()
Axis Tight
In [53]:
plt.pcolor(Z)
plt.colorbar()
plt.axis('tight')
plt.show()
Plot Using Mesh Grid
• Axes determined by mesh grid arrays X, Y
In [54]:
plt.pcolor(X, Y, Z) # X, Y are 2D meshgrid
plt.colorbar()
plt.show()
Contour Plots
In [55]:
plt.contour(Z)
plt.show()
More Contours
In [56]:
plt.contour(Z, 30)
plt.show()
Contour Plot Using Meshgird
In [57]:
plt.contour(X, Y, Z, 30)
plt.show()
Filled contour plots
In [58]:
plt.contourf(X, Y, Z, 30)
plt.colorbar()
plt.show()
• API has many (optional) keyword arguments
• More in matplotlib.pyplot documentation
• More examples
### Contour & filled contour plots¶
Although plt.imshow() or plt.pcolor() are often used to visualize a 2-D array in entirety, there are other ways of visualizing such data without displaying all of the available sample values. One option is to use the array to compute contours that are visualized instead.
Two types of contour plot supported by Matplotlib are plt.contour() and plt.contourf() where the former displays the contours as lines and the latter displayed filled areas between contours. Both these plotting commands accept a two dimensional array from which the appropriate contours are computed.
In this exercise, you will visualize a 2-D array repeatedly using both plt.contour() and plt.contourf(). You will use plt.subplot() to display several contour plots in a common figure, using the meshgrid X, Y as the axes. For example, plt.contour(X, Y, Z) generates a default contour map of the array Z.
Don't forget to include the meshgrid in each plot for this exercise!
Instructions
• Using the meshgrid X, Y as axes for each plot:
• Generate a default contour plot of the array Z in the upper left subplot.
• Generate a contour plot of the array Z in the upper right subplot with 20 contours.
• Generate a default filled contour plot of the array Z in the lower left subplot.
• Generate a default filled contour plot of the array Z in the lower right subplot with 20 contours.
• Improve the spacing between the subplots with plt.tight_layout() and display the figure.
In [59]:
# Generate a default contour map of the array Z
plt.subplot(2,2,1)
plt.contour(X, Y, Z)
# Generate a contour map with 20 contours
plt.subplot(2,2,2)
plt.contour(X, Y, Z, 20)
# Generate a default filled contour map of the array Z
plt.subplot(2,2,3)
plt.contourf(X, Y, Z)
# Generate a default filled contour map with 20 contours
plt.subplot(2,2,4)
plt.contourf(X, Y, Z, 20)
# Improve the spacing between subplots
plt.tight_layout()
# Display the figure
plt.show()
### Modifying colormaps¶
When displaying a 2-D array with plt.imshow() or plt.pcolor(), the values of the array are mapped to a corresponding color. The set of colors used is determined by a colormap which smoothly maps values to colors, making it easy to understand the structure of the data at a glance.
It is often useful to change the colormap from the default 'jet' colormap used by matplotlib. A good colormap is visually pleasing and conveys the structure of the data faithfully and in a way that makes sense for the application.
• Some matplotlib colormaps have unique names such as 'jet', 'coolwarm', 'magma' and 'viridis'.
• Others have a naming scheme based on overall color such as 'Greens', 'Blues', 'Reds', and 'Purples'.
• Another four colormaps are based on the seasons, namely 'summer', 'autumn', 'winter' and 'spring'.
• You can insert the option cmap=<name> into most matplotlib functions to change the color map of the resulting plot.
In this exercise, you will explore four different colormaps together using plt.subplot(). You will use a pregenerated array Z and a meshgrid X, Y to generate the same filled contour plot with four different color maps. Be sure to also add a color bar to each filled contour plot with plt.colorbar().
Instructions
• Modify the call to plt.contourf() so the filled contours in the top left subplot use the 'viridis' colormap.
• Modify the call to plt.contourf() so the filled contours in the top right subplot use the 'gray' colormap.
• Modify the call to plt.contourf() so the filled contours in the bottom left subplot use the 'autumn' colormap.
• Modify the call to plt.contourf() so the filled contours in the bottom right subplot use the 'winter' colormap.
In [60]:
# Create a filled contour plot with a color map of 'viridis'
plt.subplot(2,2,1)
plt.contourf(X,Y,Z,20, cmap='viridis')
plt.colorbar()
plt.title('Viridis')
# Create a filled contour plot with a color map of 'gray'
plt.subplot(2,2,2)
plt.contourf(X,Y,Z,20, cmap='gray')
plt.colorbar()
plt.title('Gray')
# Create a filled contour plot with a color map of 'autumn'
plt.subplot(2,2,3)
plt.contourf(X,Y,Z,20, cmap='autumn')
plt.colorbar()
plt.title('Autumn')
# Create a filled contour plot with a color map of 'winter'
plt.subplot(2,2,4)
plt.contourf(X,Y,Z,20, cmap='winter')
plt.colorbar()
plt.title('Winter')
# Improve the spacing between subplots and display them
plt.tight_layout()
plt.show()
## Visualizing Bivariate Distributions¶
Distributions of 2D Points
• 2D points given as two 1D arrays x & y
• Goal: generate a 2D histogram from x & y
In [61]:
plt.scatter(x='weight', y='accel', data=df_mpg)
plt.xlabel('weight ($\mathrm{kg}$)')
plt.ylabel('acceleration ($\mathrm{ms}^{-2}$)')
plt.show()
Histograms in 1D
• Choose bins (intervals)
• Count realizations within bins & plots
In [62]:
counts, bins, patches = plt.hist(x='accel', bins=25, data=df_mpg, ec='black', density=True)
plt.ylabel('frequency (density)')
plt.xlabel('acceleration ($\mathrm{ms}^{-2}$)')
plt.show()
In [63]:
sns.stripplot(x='accel', data=df_mpg, jitter=False)
plt.xlabel('acceleration ($\mathrm{ms}^{-2}$)')
plt.show()
Bins In 2D
• Different shapes available for binning points
• Common choices: rectangles & hexagons
hist2d(): Rectangular Binning
In [64]:
plt.hist2d(x='weight', y='accel', data=df_mpg, bins=(10, 20)) # x & y are 1D arrays of same length
plt.colorbar()
plt.xlabel('weight ($\mathrm{kg}$)')
plt.ylabel('acceleration ($\mathrm{ms}^{-2}$)')
plt.show()
hexbin(): Hexagonal Binning
In [65]:
plt.hexbin(x='weight', y='accel', data=df_mpg, gridsize=(15, 10))
plt.colorbar()
plt.xlabel('weight ($\mathrm{kg}$)')
plt.ylabel('acceleration ($\mathrm{ms}^{-2}$)')
plt.show()
### Using hist2d()¶
Given a set of ordered pairs describing data points, you can count the number of points with similar values to construct a two-dimensional histogram. This is similar to a one-dimensional histogram, but it describes the joint variation of two random variables rather than just one.
In matplotlib, one function to visualize 2-D histograms is plt.hist2d().
• You specify the coordinates of the points using plt.hist2d(x,y) assuming x and y are two vectors of the same length.
• You can specify the number of bins with the argument bins=(nx, ny) where nx is the number of bins to use in the horizontal direction and ny is the number of bins to use in the vertical direction.
• You can specify the rectangular region in which the samples are counted in constructing the 2D histogram. The optional parameter required is range=((xmin, xmax), (ymin, ymax)) where
• xmin and xmax are the respective lower and upper limits for the variables on the x-axis and
• ymin and ymax are the respective lower and upper limits for the variables on the y-axis. Notice that the optional range argument can use nested tuples or lists.
In this exercise, you'll use some data from the auto-mpg data set. There are two arrays mpg and hp that respectively contain miles per gallon and horse power ratings from over three hundred automobiles built.
Instructions
• Generate a two-dimensional histogram to view the joint variation of the mpg and hp arrays.
• Put hp along the horizontal axis and mpg along the vertical axis.
• Specify 20 by 20 rectangular bins with the bins argument.
• Specify the region covered by using the optional range argument so that the plot samples hp between 40 and 235 on the x-axis and mpg between 8 and 48 on the y-axis. Your argument should take the form: range=((xmin, xmax), (ymin, ymax)).
• Add a color bar to the histogram.
In [66]:
# Generate a 2-D histogram
plt.hist2d(df_mpg.hp, df_mpg.mpg, bins=(20, 20), range=((40, 235), (8, 48)))
# Add a color bar to the histogram
plt.colorbar()
# Add labels, title, and display the plot
plt.xlabel('Horse power [hp]')
plt.ylabel('Miles per gallon [mpg]')
plt.title('hist2d() plot')
plt.show()
### Using hexbin()¶
The function plt.hist2d() uses rectangular bins to construct a two dimensional histogram. As an alternative, the function plt.hexbin() uses hexagonal bins. The underlying algorithm (based on this article from 1987) constructs a hexagonal tesselation of a planar region and aggregates points inside hexagonal bins.
• The optional gridsize argument (default 100) gives the number of hexagons across the x-direction used in the hexagonal tiling. If specified as a list or a tuple of length two, gridsize fixes the number of hexagon in the x- and y-directions respectively in the tiling.
• The optional parameter extent=(xmin, xmax, ymin, ymax) specifies rectangular region covered by the hexagonal tiling. In that case, xmin and xmax are the respective lower and upper limits for the variables on the x-axis and ymin and ymax are the respective lower and upper limits for the variables on the y-axis.
In this exercise, you'll use the same auto-mpg data as in the last exercise (again using arrays mpg and hp). This time, you'll use plt.hexbin() to visualize the two-dimensional histogram.
Instructions
• Generate a two-dimensional histogram with plt.hexbin() to view the joint variation of the mpg and hp vectors.
• Put hp along the horizontal axis and mpg along the vertical axis.
• Specify a hexagonal tesselation with 15 hexagons across the x-direction and 12 hexagons across the y-direction using gridsize.
• Specify the rectangular region covered with the optional extent argument: use hp from 40 to 235 and mpg from 8 to 48. Note: Unlike the range argument in the previous exercise, extent takes one tuple of four values.
• Add a color bar to the histogram.
In [67]:
# Generate a 2d histogram with hexagonal bins
plt.hexbin(df_mpg.hp, df_mpg.mpg, gridsize=(15, 12), extent=(40, 235, 8, 48))
# Add a color bar to the histogram
plt.colorbar()
# Add labels, title, and display the plot
plt.xlabel('Horse power [hp]')
plt.ylabel('Miles per gallon [mpg]')
plt.title('hexbin() plot')
plt.show()
## Working With Images¶
• Grayscale images: rectangular 2D arrays
• Color images: typically three 2D arrays (channels)
• RGB (Red-Green-Blue)
• Channel values:
• 0 to 1 (floating-point numbers)
• 0 to 255 (8 bit integers)
In [68]:
sunflower_url = 'https://raw.githubusercontent.com/trenton3983/DataCamp/master/Images/intro_to_data_visualization_in_python/2_4_sunflower.jpg'
sunflower_path = Path('Images/intro_to_data_visualization_in_python/2_4_sunflower.jpg')
create_dir_save_file(sunflower_path, sunflower_url)
Directory Exists
File Exists
In [69]:
print(img.shape)
plt.imshow(img)
plt.axis('off')
plt.show()
(309, 413, 3)
Reduction to gray-scale image
In [70]:
collapsed = img.mean(axis=2)
print(collapsed.shape)
plt.set_cmap('gray')
plt.imshow(collapsed, cmap='gray')
plt.axis('off')
plt.show()
(309, 413)
Uneven Samples
In [71]:
uneven = collapsed[::4,::2] # nonuniform subsampling
print(uneven.shape)
plt.imshow(uneven)
plt.axis('off')
plt.show()
(78, 207)
In [72]:
plt.imshow(uneven, aspect=2.0)
plt.axis('off')
plt.show()
origin and extent in imshow
In [73]:
plt.imshow(uneven, cmap='gray', extent=(0, 640, 0, 480))
plt.axis('off')
plt.show()
Color images such as photographs contain the intensity of the red, green and blue color channels.
• To read an image from file, use plt.imread() by passing the path to a file, such as a PNG or JPG file.
• The color image can be plotted as usual using plt.imshow().
• The resulting image loaded is a NumPy array of three dimensions. The array typically has dimensions M × N × 3, where M × N is the dimensions of the image. The third dimensions are referred to as color channels (typically red, green, and blue).
• The color channels can be extracted by Numpy array slicing.
In this exercise, you will load & display an image of an astronaut (by NASA (Public domain), via Wikimedia Commons). You will also examine its attributes to understand how color images are represented.
Instructions
• Load the file '480px-Astronaut-EVA.jpg' into an array.
• Print the shape of the img array. How wide and tall do you expect the image to be?
• Prepare img for display using plt.imshow().
• Turn off the axes using plt.axis('off').
In [74]:
dir_path_astro = Path('Images/intro_to_data_visualization_in_python/480px-Astronaut-EVA.jpg')
create_dir_save_file(dir_path_astro, url_astro)
Directory Exists
File Exists
In [75]:
# Load the image into an array: img
# Print the shape of the image
print(img.shape)
# Display the image
plt.imshow(img)
# Hide the axes
plt.axis('off')
plt.show()
(480, 480, 3)
### Pseudocolor plot from image data¶
Image data comes in many forms and it is not always appropriate to display the available channels in RGB space. In many situations, an image may be processed and analysed in some way before it is visualized in pseudocolor, also known as 'false' color.
In this exercise, you will perform a simple analysis using the image showing an astronaut as viewed from space. Instead of simply displaying the image, you will compute the total intensity across the red, green and blue channels. The result is a single two dimensional array which you will display using plt.imshow() with the 'gray' colormap.
Instructions
• Print the shape of the existing image array.
• Compute the sum of the red, green, and blue channels of img by using the .sum() method with axis=2.
• Print the shape of the intensity array to verify this is the shape you expect.
• Plot intensity with plt.imshow() using a 'gray' colormap.
• Add a colorbar to the figure.
In [76]:
# Load the image into an array: img
# Print the shape of the image
print(img.shape)
# Compute the sum of the red, green and blue channels: intensity
intensity = img.sum(axis=2)
# Print the shape of the intensity
print(intensity.shape)
# Display the intensity with a colormap of 'gray'
plt.imshow(intensity, cmap='gray')
plt.colorbar()
# Hide the axes and show the figure
plt.axis('off')
plt.show()
(480, 480, 3)
(480, 480)
### Extent and aspect¶
When using plt.imshow() to display an array, the default behavior is to keep pixels square so that the height to width ratio of the output matches the ratio determined by the shape of the array. In addition, by default, the x- and y-axes are labeled by the number of samples in each direction.
The ratio of the displayed width to height is known as the image aspect and the range used to label the x- and y-axes is known as the image extent. The default aspect value of 'auto' keeps the pixels square and the extents are automatically computed from the shape of the array if not specified otherwise.
In this exercise, you will investigate how to set these options explicitly by plotting the same image in a 2 by 2 grid of subplots with distinct aspect and extent options.
Instructions
• Display img in the top left subplot with horizontal extent from -1 to 1, vertical extent from -1 to 1, and aspect ratio 0.5.
• Display img in the top right subplot with horizontal extent from -1 to 1, vertical extent from -1 to 1, and aspect ratio 1.
• Display img in the bottom left subplot with horizontal extent from -1 to 1, vertical extent from -1 to 1, and aspect ratio 2.
• Display img in the bottom right subplot with horizontal extent from -2 to 2, vertical extent from -1 to 1, and aspect ratio 2.
In [77]:
# Load the image into an array: img
# Specify the extent and aspect ratio of the top left subplot
plt.subplot(2,2,1)
plt.title('extent=(-1,1,-1,1),\naspect=0.5')
plt.xticks([-1,0,1])
plt.yticks([-1,0,1])
plt.imshow(img, extent=(-1,1,-1,1), aspect=0.5)
# Specify the extent and aspect ratio of the top right subplot
plt.subplot(2,2,2)
plt.title('extent=(-1,1,-1,1),\naspect=1')
plt.xticks([-1,0,1])
plt.yticks([-1,0,1])
plt.imshow(img, extent=(-1,1,-1,1), aspect=1)
# Specify the extent and aspect ratio of the bottom left subplot
plt.subplot(2,2,3)
plt.title('extent=(-1,1,-1,1),\naspect=2')
plt.xticks([-1,0,1])
plt.yticks([-1,0,1])
plt.imshow(img, extent=(-1,1,-1,1), aspect=2)
# Specify the extent and aspect ratio of the bottom right subplot
plt.subplot(2,2,4)
plt.title('extent=(-2,2,-1,1),\naspect=2')
plt.xticks([-2,-1,0,1,2])
plt.yticks([-1,0,1])
plt.imshow(img, extent=(-2,2,-1,1), aspect=2)
# Improve spacing and display the figure
plt.tight_layout()
plt.show()
### Rescaling pixel intensities¶
Sometimes, low contrast images can be improved by rescaling their intensities. For instance, this image of Hawkes Bay, New Zealand has no pixel values near 0 or near 255 (the limits of valid intensities). (originally by Phillip Capper, modified by User:Konstable, via Wikimedia Commons, CC BY 2.0)
For this exercise, you will do a simple rescaling (remember, an image is NumPy array) to translate and stretch the pixel intensities so that the intensities of the new image fill the range from 0 to 255.
Instructions
• Use the methods .min() and .max() to save the minimum and maximum values from the array image as pmin and pmax respectively.
• Create a new 2-D array rescaled_image using 256*(image-pmin)/(pmax-pmin)
• Plot the new array rescaled_image.
In [78]:
dir_path_hawk = Path('Images/intro_to_data_visualization_in_python/640px-Unequalized_Hawkes_Bay_NZ.jpg')
create_dir_save_file(dir_path_hawk, url_hawk)
Directory Exists
File Exists
In [79]:
# Load the image into an array: image
# Extract minimum and maximum values from the image: pmin, pmax
pmin, pmax = image.min(), image.max()
print(f"The smallest & largest pixel intensities are {pmin} & {pmax}.")
# Rescale the pixels: rescaled_image
rescaled_image = 256*(image - pmin) / (pmax - pmin)
print(f"The rescaled smallest & largest pixel intensities are {rescaled_image.min()} & {rescaled_image.max()}.")
# Display the rescaled image
plt.title('rescaled image')
plt.axis('off')
plt.imshow(rescaled_image, cmap='gray')
plt.show()
The smallest & largest pixel intensities are 104 & 230.
The rescaled smallest & largest pixel intensities are 0.0 & 256.0.
# Statistical plots with Seaborn¶
This is a high-level tour of the seaborn plotting library for producing statistical graphics in Python. We'll cover seaborn tools for computing and visualizing linear regressions, as well as tools for visualizing univariate distributions (like strip, swarm, and violin plots) and multivariate distributions (like joint plots, pair plots, and heatmaps). We'll also discuss grouping categories in plots.
## Visualizing Regressions¶
Recap: Pandas DataFrames
• Labelled tabular data structure
• Labels on rows: index
• Labels on columns: columns
• Columns are Pandas Series
In [80]:
Out[80]:
total_bill tip sex smoker day time size
0 16.99 1.01 Female No Sun Dinner 2
1 10.34 1.66 Male No Sun Dinner 3
2 21.01 3.50 Male No Sun Dinner 3
3 23.68 3.31 Male No Sun Dinner 2
4 24.59 3.61 Female No Sun Dinner 4
Linear Regression Plots
In [81]:
sns.lmplot(x= 'total_bill', y='tip', data=tips)
plt.show()
Factors & Grouping Factors (same plot)
In [82]:
sns.lmplot(x= 'total_bill', y='tip', data=tips, hue='sex', palette='Set1')
plt.show()
Grouping Factors (subplots)
In [83]:
sns.lmplot(x= 'total_bill', y='tip', data=tips, col='sex')
plt.show()
Resibual Plots
• Similar arguments as lmplot() but more flexible
• x, y can be arrays or strings
• data is DataFrame (optional)
• Optional arguments (e.g., color) as in Matplotlib
In [84]:
sns.residplot(x= 'total_bill', y='tip', data=tips, color='green')
plt.show()
### Simple linear regressions¶
As you have seen, seaborn provides a convenient interface to generate complex and great-looking statistical plots. One of the simplest things you can do using seaborn is to fit and visualize a simple linear regression between two variables using sns.lmplot().
One difference between seaborn and regular matplotlib plotting is that you can pass pandas DataFrames directly to the plot and refer to each column by name. For example, if you were to plot the column 'price' vs the column 'area' from a DataFrame df, you could call sns.lmplot(x='area', y='price', data=df).
In this exercise, you will once again use the DataFrame auto containing the auto-mpg dataset. You will plot a linear regression illustrating the relationship between automobile weight and horse power.
Instructions
• Import matplotlib.pyplot and seaborn using the standard names plt and sns respectively.
• Plot a linear regression between the 'weight' column (on the x-axis) and the 'hp' column (on the y-axis) from the DataFrame auto.
• Display the plot as usual with plt.show(). This has been done for you, so hit 'Submit Answer' to view the plot.
In [85]:
# Plot a linear regression between 'weight' and 'hp'
sns.lmplot(x='weight', y='hp', data=df_mpg, height=7)
# Display the plot
plt.show()
Unsurprisingly, there is a strong correlation between 'hp' and 'weight', and a linear regression is easily able to capture this trend.
### Plotting residuals of a regression¶
Often, you don't just want to see the regression itself but also see the residuals to get a better idea how well the regression captured the data. Seaborn provides sns.residplot() for that purpose, visualizing how far datapoints diverge from the regression line.
In this exercise, you will visualize the residuals of a regression between the 'hp' column (horse power) and the 'mpg' column (miles per gallon) of the auto DataFrame used previously.
Instructions
• Import matplotlib.pyplot and seaborn using the standard names plt and sns respectively.
• Generate a green residual plot of the regression between 'hp' (on the x-axis) and 'mpg' (on the y-axis). You will need to specify the additional data and color parameters.
• Display the plot as usual using plt.show(). This has been done for you, so hit 'Submit Answer' to view the plot.
In [86]:
plt.figure(figsize=(8, 9))
# Generate a green residual plot of the regression between 'hp' and 'mpg'
sns.residplot(x='hp', y='mpg', data=df_mpg, color='purple')
# Display the plot
plt.show()
### Higher-order regressions¶
When there are more complex relationships between two variables, a simple first order regression is often not sufficient to accurately capture the relationship between the variables. Seaborn makes it simple to compute and visualize regressions of varying orders.
Here, you will plot a second order regression between the horse power ('hp') and miles per gallon ('mpg') using sns.regplot() (the function sns.lmplot() is a higher-level interface to sns.regplot()). However, before plotting this relationship, compare how the residual changes depending on the order of the regression. Does a second order regression perform significantly better than a simple linear regression?
• A principal difference between sns.lmplot() and sns.regplot() is the way in which matplotlib options are passed (sns.regplot() is more permissive).
• For both sns.lmplot() and sns.regplot(), the keyword order is used to control the order of polynomial regression.
• The function sns.regplot() uses the argument scatter=None to prevent plotting the scatter plot points again.
Instructions
• Create a scatter plot with auto['weight'] on the x-axis and auto['mpg'] on the y-axis, with label='data'. This has been done for you.
• Plot a first order linear regression line between 'weight' and 'mpg' in 'blue' without the scatter points.
• You need to specify the label ('First Order', case-sensitive) and color parameters, in addition to scatter=None.
• Plot a second order linear regression line between 'weight' and 'mpg' in 'green' without the scatter points.
• To force a higher order regression, you need to specify the order parameter (here, it should be 2). Don't forget to again add a label ('Second Order').
• Add a legend to the 'upper right'.
In [87]:
plt.figure(figsize=(8, 9))
# Generate a scatter plot of 'weight' and 'mpg' using red circles
plt.scatter(df_mpg.weight, df_mpg.mpg, label='data', color='red', marker='o')
# Plot in blue a linear regression of order 1 between 'weight' and 'mpg'
sns.regplot(x='weight', y='mpg', data=df_mpg, color='blue', scatter=None, label='First Order')
# Plot in green a linear regression of order 2 between 'weight' and 'mpg'
sns.regplot(x='weight', y='mpg', data=df_mpg, order=2, color='green', scatter=None, label='Second Order')
# Add a legend and display the plot
plt.legend(loc='upper right')
plt.show()
It seems like a regression of order 2 is necessary to properly capture the relationship between 'weight' and 'mpg'.
### Grouping linear regressions by hue¶
Often it is useful to compare and contrast trends between different groups. Seaborn makes it possible to apply linear regressions separately for subsets of the data by applying a groupby operation. Using the hue argument, you can specify a categorical variable by which to group data observations. The distinct groups of points are used to produce distinct regressions with different hues in the plot.
In the automobile dataset - which has been pre-loaded here as auto - you can view the relationship between weight ('weight') and horsepower ('hp') of the cars and group them by their origin ('origin'), giving you a quick visual indication how the relationship differs by continent.
Instructions
• Plot a linear regression between 'weight' and 'hp' grouped by 'origin'.
• Use the keyword argument hue to group rows with the categorical column 'origin'.
• Use the keyword argument palette to specify the 'Set1' palette for coloring the distinct groups.
In [88]:
# Plot a linear regression between 'weight' and 'hp', with a hue of 'origin' and palette of 'Set1'
sns.lmplot(x='weight', y='hp', data=df_mpg, hue='origin', palette='Set1', height=7)
# Display the plot
plt.show()
### Grouping linear regressions by row or column¶
Rather than overlaying linear regressions of grouped data in the same plot, we may want to use a grid of subplots. The sns.lmplot() accepts the arguments row and/or col to arrangements of subplots for regressions.
You'll use the automobile dataset again and, this time, you'll use the keyword argument row to display the subplots organized in rows. That is, you'll produce horsepower vs. weight regressions grouped by continent of origin in separate subplots stacked vertically.
Instructions
• Plot linear regressions of 'hp' (on the y-axis) versus 'weight' (on the x-axis) grouped row-wise by 'origin' from DataFrame auto.
• Use the keyword argument row to group observations with the categorical column 'origin' in subplots organized in rows.
In [89]:
# Plot linear regressions between 'weight' and 'hp' grouped row-wise by 'origin'
sns.lmplot(x='weight', y='hp', data=df_mpg, row='origin')
# Display the plot
plt.show()
## Visualizing Univariate Distributions¶
• Univariate → “one variable”
• Visualization techniques for sampled univariate data
• Strip plots
• Swarm plots
• Violin plots
Using stripplot()
In [90]:
sns.stripplot(y= 'tip', data=tips, jitter=False)
plt.ylabel('tip ($)') plt.show() Grouping With stripplot() In [91]: sns.stripplot(x='day', y='tip', data=tips, jitter=False) plt.ylabel('tip ($)')
plt.show()
In [92]:
sns.stripplot(x='day', y='tip', data=tips, jitter=True, size=4)
plt.ylabel('tip ($)') plt.show() Using swarmplot() In [93]: sns.swarmplot(x='day', y='tip', data=tips, size=4) plt.ylabel('tip ($)')
plt.show()
More Grouping With swarmplot()
In [94]:
sns.swarmplot(x='day', y='tip', data=tips, size=4, hue='sex')
plt.ylabel('tip ($)') plt.show() Changing Orientation In [95]: sns.swarmplot(x='tip', y='day', data=tips, size=4, hue='sex', orient='h') plt.ylabel('tip ($)')
plt.show()
Using violinplot()
In [96]:
plt.subplot(1,2,1)
sns.boxplot(x='day', y='tip', data=tips)
plt.ylabel('tip ($)') plt.subplot(1,2,2) sns.violinplot(x='day', y='tip', data=tips) plt.ylabel('tip ($)')
plt.tight_layout()
plt.show()
Combining Plots
In [97]:
sns.violinplot(x='day', y='tip', data=tips, inner=None, color='lightgray')
sns.stripplot(x='day', y='tip', data=tips, size=4, jitter=True)
plt.ylabel('tip (\$)')
plt.show()
### Constructing strip plots¶
Regressions are useful to understand relationships between two continuous variables. Often we want to explore how the distribution of a single continuous variable is affected by a second categorical variable. Seaborn provides a variety of plot types to perform these types of comparisons between univariate distributions.
The strip plot is one way of visualizing this kind of data. It plots the distribution of variables for each category as individual datapoints. For vertical strip plots (the default), distributions of continuous values are laid out parallel to the y-axis and the distinct categories are spaced out along the x-axis.
• For example, sns.stripplot(x='type', y='length', data=df) produces a sequence of vertical strip plots of length distributions grouped by type (assuming length is a continuous column and type is a categorical column of the DataFrame df).
• Overlapping points can be difficult to distinguish in strip plots. The argument jitter=True helps spread out overlapping points.
• Other matplotlib arguments can be passed to sns.stripplot(), e.g., marker, color, size, etc.
Instructions
• In the first row of subplots, make a strip plot showing distribution of 'hp' values grouped horizontally by 'cyl'.
• In the second row of subplots, make a second strip plot with improved readability. In particular, you'll call sns.stripplot() again, this time adding jitter=True and decreasing the point size to 3 using the size parameter.
In [98]:
plt.figure(figsize=(8, 9))
# Make a strip plot of 'hp' grouped by 'cyl'
plt.subplot(2,1,1)
sns.stripplot(x='cyl', y='hp', data=df_mpg)
# Make the strip plot again using jitter and a smaller point size
plt.subplot(2,1,2)
sns.stripplot(x='cyl', y='hp', data=df_mpg, jitter=True, size=3)
# Display the plot
plt.show()
Here, 'hp' is the continuous variable, and 'cyl' is the categorical variable. The strip plot shows that automobiles with more cylinders tend to have higher horsepower.
### Constructing swarm plots¶
As you have seen, a strip plot can be visually crowded even with jitter applied and smaller point sizes. An alternative is provided by the swarm plot (sns.swarmplot()), which is very similar but spreads out the points to avoid overlap and provides a better visual overview of the data.
• The syntax for sns.swarmplot() is similar to that ofsns.stripplot(), e.g., sns.swarmplot(x='type', y='length', data=df).
• The orientation for the continuous variable in the strip/swarm plot can be inferred from the choice of the columns x and y from the DataFrame data. The orientation can be set explicitly using orient='h' (horizontal) or orient='v' (vertical).
• Another grouping can be added in using the hue keyword. For instance, using sns.swarmplot(x='type', y='length', data=df, hue='build year') makes a swarm plot from the DataFrame df with the 'length' column values spread out vertically, horizontally grouped by the column 'type' and each point colored by the categorical column 'build year'.
In this exercise, you'll use the auto DataFrame again to illustrate the use of sns.swarmplot() with grouping by hue and with explicit specification of the orientation using the keyword orient.
Instructions
• In the first row of subplots, make a swarm plot showing distribution of 'hp' values grouped horizontally by 'cyl'.
• In the second row of subplots, make a second swarm plot with horizontal orientation (i.e. grouped vertically by 'cyl' with 'hp' value spread out horizontally).
• In addition to reversing the columns for the x and y parameters, you will need to specify the orient parameter to explicitly set the horizontal orientation.
• Color the points by 'origin' (refer to the text above if you don't know how to do this).
In [99]:
plt.figure(figsize=(8, 9))
# Generate a swarm plot of 'hp' grouped horizontally by 'cyl'
plt.subplot(2,1,1)
sns.swarmplot(x='cyl', y='hp', data=df_mpg)
# Generate a swarm plot of 'hp' grouped vertically by 'cyl' with a hue of 'origin'
plt.subplot(2,1,2)
sns.swarmplot(x='hp', y='cyl', data=df_mpg, orient='h', hue='origin')
# Display the plot
plt.show()
Swarm plots are generally easier to understand than strip plots because they spread out the points to avoid overlap.
### Constructing violin plots¶
Both strip and swarm plots visualize all the datapoints. For large datasets, this can result in significant overplotting. Therefore, it is often useful to use plot types which reduce a dataset to more descriptive statistics and provide a good summary of the data. Box and whisker plots are a classic way of summarizing univariate distributions but seaborn provides a more sophisticated extension of the standard box plot, called a violin plot.
Here, you will produce violin plots of the distribution of horse power ('hp') by the number of cylinders ('cyl'). Additionally, you will combine two different plot types by overlaying a strip plot on the violin plot.
As before, the DataFrame has been pre-loaded for you as auto.
Instructions
• In the first row of subplots, make a violin plot showing the distribution of 'hp' grouped by 'cyl'.
• In the second row of subplots, make a second violin plot without the inner annotations (by specifying inner=None) and with the color 'lightgray'.
• In the second row of subplots, overlay a strip plot with jitter and a point size of 1.5.
In [100]:
plt.figure(figsize=(8, 9))
# Generate a violin plot of 'hp' grouped horizontally by 'cyl'
plt.subplot(2,1,1)
sns.violinplot(x='cyl', y='hp', data=df_mpg)
# Generate the same violin plot again with a color of 'lightgray' and without inner annotations
plt.subplot(2,1,2)
sns.violinplot(x='cyl', y='hp', data=df_mpg, inner=None, color='lightgray')
# Overlay a strip plot on the violin plot
sns.stripplot(x='cyl', y='hp', data=df_mpg, size=1.5, jitter=True)
# Display the plot
plt.show()
Violin plots are a nice way of visualizing the relationship between a continuous variable and a categorical variable.
## Visualizing Multivariate Distributions¶
• Bivariate → “two variables”
• Multivariate → “multiple variables”
• Visualizing relationships in multivariate data
• Joint plots
• Pair plots
• Heat maps
Using jointplot()
In [101]:
sns.jointplot(x= 'total_bill', y= 'tip', data=tips)
plt.show()
Joint Plot Using kde=True
In [102]:
sns.jointplot(x='total_bill', y= 'tip', data=tips, kind='kde')
plt.show()
Using pairplot()
In [103]:
sns.pairplot(tips)
plt.show()
Using pairplot() with hue
In [104]:
sns.pairplot(tips, hue='sex')
plt.show()
Correlation heat map using heatmap()
In [105]:
tips_corr_matrix = tips.corr()
tips_corr_matrix
Out[105]:
total_bill tip size
total_bill 1.000000 0.675734 0.598315
tip 0.675734 1.000000 0.489299
size 0.598315 0.489299 1.000000
In [106]:
sns.heatmap(tips_corr_matrix)
plt.title('Tips Correlation plot')
plt.show()
### Plotting joint distributions (1)¶
There are numerous strategies to visualize how pairs of continuous random variables vary jointly. Regression and residual plots are one strategy. Another is to visualize a bivariate distribution.
Seaborn's sns.jointplot() provides means of visualizing bivariate distributions. The basic calling syntax is similar to that of sns.lmplot(). By default, calling sns.jointplot(x, y, data) renders a few things:
• A scatter plot using the specified columns x and y from the DataFrame data.
• A (univariate) histogram along the top of the scatter plot showing distribution of the column x.
• A (univariate) histogram along the right of the scatter plot showing distribution of the column y.
Instructions
• Use sns.jointplot() to visualize the joint variation of the columns 'hp' (on the x-axis) and 'mpg' (on the y-axis) from the DataFrame auto.
In [107]:
# Generate a joint plot of 'hp' and 'mpg'
sns.jointplot(x='hp', y='mpg', data=df_mpg)
# Display the plot
plt.show()
### Plotting joint distributions (2)¶
The seaborn function sns.jointplot() has a parameter kind to specify how to visualize the joint variation of two continuous random variables (i.e., two columns of a DataFrame)
• kind='scatter' uses a scatter plot of the data points
• kind='reg' uses a regression plot (default order 1)
• kind='resid' uses a residual plot
• kind='kde' uses a kernel density estimate of the joint distribution
• kind='hex' uses a hexbin plot of the joint distribution
For this exercise, you will again use sns.jointplot() to display the joint distribution of the hp and mpg columns of the auto DataFrame. This time, you will use kind='hex' to generate a hexbin plot of the joint distribution.
Instructions
• Create a hexbin plot of the joint distribution between 'hp' and 'mpg'.
In [108]:
# Generate a joint plot of 'hp' and 'mpg' using a hexbin plot
sns.jointplot(x='hp', y='mpg', data=df_mpg, kind='hex')
# Display the plot
plt.show()
### Plotting distributions pairwise (1)¶
Data sets often contain more than two continuous variables. The function sns.jointplot() is restricted to representing joint variation between only two quantities (i.e., two columns of a DataFrame). Visualizing multivariate relationships is trickier.
The function sns.pairplot() constructs a grid of all joint plots pairwise from all pairs of (non-categorical) columns in a DataFrame. The syntax is very simple: sns.pairplot(df), where df is a DataFrame. The non-categorical columns are identified and the corresponding joint plots are plotted in a square grid of subplots. The diagonal of the subplot grid shows the univariate histograms of the individual columns.
In this exercise, you will use a DataFrame auto comprising only three columns from the original auto-mpg data set.
Instructions
• Print the first five rows of the DataFrame auto. This is done for you.
• Plot the joint distributions between columns from the entire DataFrame auto.
In [109]:
# Print the first 5 rows of the DataFrame
Out[109]:
mpg cyl displ hp weight accel yr origin name color size marker
0 18.0 6 250.0 88 3139 14.5 71 US ford mustang red 27.370336 o
1 9.0 8 304.0 193 4732 18.5 70 US hi 1200d green 62.199511 o
2 36.1 4 91.0 60 1800 16.4 78 Asia honda civic cvcc blue 9.000000 x
3 18.5 6 250.0 98 3525 19.0 77 US ford granada red 34.515625 o
4 34.3 4 97.0 78 2188 15.8 80 Europe audi 4000 blue 13.298178 s
In [110]:
# Plot the pairwise joint distributions from the DataFrame
sns.pairplot(df_mpg)
# Display the plot
plt.show()
Seaborn's pairplots are an excellent way of visualizing the relationship between all continuous variables in a dataset.
### Plotting distributions pairwise (2)¶
In this exercise, you will generate pairwise joint distributions again. This time, you will make two particular additions:
You will display regressions as well as scatter plots in the off-diagonal subplots. You will do this with the argument kind='reg' (where 'reg' means 'regression'). Another option for kind is 'scatter' (the default) that plots scatter plots in the off-diagonal subplots. You will also visualize the joint distributions separated by continent of origin. You will do this with the keyword argument hue specifying the 'origin'.
Instructions
• Plot the pairwise joint distributions separated by continent of origin and display the regressions.
In [111]:
# Plot the pairwise joint distributions grouped by 'origin' along with regression lines
sns.pairplot(df_mpg[['mpg', 'hp', 'origin']], hue='origin', kind='reg', height=4, aspect=1)
# Display the plot
plt.show()
Plots like this are why Seaborn is such a useful library: Using just one command, you're able to quickly extract a lot of valuable insight from a dataset.
### Visualizing correlations with a heatmap¶
Plotting relationships between many variables using a pair plot can quickly get visually overwhelming. It is therefore often useful to compute covariances between the variables instead. The covariance matrix can then easily be visualized as a heatmap. A heatmap is effectively a pseudocolor plot with labelled rows and columns (i.e., a pseudocolor plot based on a pandas DataFrame rather than a matrix). The DataFrame does not have to be square or symmetric (but, in the context of a covariance matrix, it is both).
In this exercise, you will view the covariance matrix between the continuous variables in the auto-mpg dataset. You do not have to know here how the covariance matrix is computed; the important point is that its diagonal entries are all 1s, and the off-diagonal entries are between -1 and +1 (quantifying the degree to which variable pairs vary jointly). It is also, then, a symmetric matrix.
Instructions
• Print the correlation matrix corr_matrix to examine its contents and labels. This has been done for you.
• Plot the correlation matrix corr_matrix using sns.heatmap().
In [112]:
corr_matrix = df_mpg.corr()
corr_matrix
Out[112]:
mpg cyl displ hp weight accel yr size
mpg 1.000000 -0.777618 -0.805127 -0.778427 -0.832244 0.423329 0.580541 -0.806682
cyl -0.777618 1.000000 0.950823 0.842983 0.897527 -0.504683 -0.345647 0.890839
displ -0.805127 0.950823 1.000000 0.897257 0.932994 -0.543800 -0.369855 0.928779
hp -0.778427 0.842983 0.897257 1.000000 0.864538 -0.689196 -0.416361 0.869720
weight -0.832244 0.897527 0.932994 0.864538 1.000000 -0.416839 -0.309120 0.992019
accel 0.423329 -0.504683 -0.543800 -0.689196 -0.416839 1.000000 0.290316 -0.426547
yr 0.580541 -0.345647 -0.369855 -0.416361 -0.309120 0.290316 1.000000 -0.325214
size -0.806682 0.890839 0.928779 0.869720 0.992019 -0.426547 -0.325214 1.000000
In [113]:
# Visualize the correlation matrix using a heatmap
sns.heatmap(corr_matrix)
# Display the heatmap
plt.show()
If your pair plot starts to become visually overwhelming, heat maps are a great alternative.
# Analyzing time series and images¶
This chapter ties together the skills gained so far through examining time series data and images. You'll customize plots of stock data, generate histograms of image pixel intensities, and enhance image contrast through histogram equalization.
## Visualizing Time Series¶
Datetimes & Time Series
In [114]:
print(type(df_weather))
print(type(df_weather.index))
<class 'pandas.core.frame.DataFrame'>
<class 'pandas.core.indexes.datetimes.DatetimeIndex'>
Out[114]:
Temperature DewPoint Pressure
Date
2010-01-01 00:00:00 46.2 37.5 1.0
2010-01-01 01:00:00 44.6 37.1 1.0
2010-01-01 02:00:00 44.1 36.9 1.0
2010-01-01 03:00:00 43.8 36.9 1.0
2010-01-01 04:00:00 43.5 36.8 1.0
Plotting DataFrames
In [115]:
plt.plot(df_weather)
plt.show()
Slicing Time Series
In [116]:
temperature = df_weather['Temperature']
march_apr = temperature['2010-03':'2010-04']
print(march_apr.shape)
print(march_apr.iloc[-4:])
(1463,)
Date
2010-04-30 20:00:00 73.3
2010-04-30 21:00:00 71.3
2010-04-30 22:00:00 69.7
2010-04-30 23:00:00 68.5
Name: Temperature, dtype: float64
Plotting Time Series Slices
In [117]:
plt.plot(temperature['2010-01'], color='r', label='Temperature')
dew_point = df_weather['DewPoint']
plt.plot(dew_point['2010-01'], color='b', label='Dewpoint')
plt.legend(loc='upper right')
plt.xticks(rotation=60)
plt.show()
Selecting & Formatting Dates
In [118]:
jan = temperature['2010-01']
dates = jan.index[::96]
print(dates)
labels = dates.strftime('%b %d')
print(labels)
DatetimeIndex(['2010-01-01', '2010-01-05', '2010-01-09', '2010-01-13',
'2010-01-17', '2010-01-21', '2010-01-25', '2010-01-29'],
dtype='datetime64[ns]', name='Date', freq=None)
Index(['Jan 01', 'Jan 05', 'Jan 09', 'Jan 13', 'Jan 17', 'Jan 21', 'Jan 25',
'Jan 29'],
dtype='object', name='Date')
Cleaning Up Ticks on Axis
In [119]:
plt.plot(temperature['2010-01'], color='r', label='Temperature')
plt.plot(dew_point['2010-01'], color='b', label='Dewpoint')
plt.legend(loc='upper right')
plt.xticks(dates, labels, rotation=60)
plt.show()
### Multiple time series on common axes¶
For this exercise, you will construct a plot showing four time series stocks on the same axes. The time series in question are represented in the session using the identifiers aapl, ibm, csco, and msft. You'll generate a single plot showing all the time series on common axes with a legend.
Instructions
• Plot the aapl time series in blue with a label of'AAPL'.
• Plot the ibm time series in green with a label of 'IBM'.
• Plot the csco time series in red with a label of 'CSCO'.
• Plot the msft time series in magenta with a label of 'MSFT'.
• Specify a rotation of 60 for the xticks with plt.xticks().
• Add a legend in the 'upper left' corner of the plot.
In [120]:
# Plot the aapl time series in blue
plt.plot(df_stocks['AAPL'], color='blue', label='AAPL')
# Plot the ibm time series in green
plt.plot(df_stocks['IBM'], color='green', label='IBM')
# Plot the csco time series in red
plt.plot(df_stocks['CSCO'], color='red', label='CSCO')
# Plot the msft time series in magenta
plt.plot(df_stocks['MSFT'], color='magenta', label='MSFT')
# Add a legend in the top left corner of the plot
plt.legend(loc='upper left')
# Specify the orientation of the xticks
plt.xticks(rotation=60)
# Display the plot
plt.show()
### Multiple time series slices (1)¶
You can easily slice subsets corresponding to different time intervals from a time series. In particular, you can use strings like '2001:2005', '2011-03:2011-12', or '2010-04-19:2010-04-30' to extract data from time intervals of length 5 years, 10 months, or 12 days respectively.
• Unlike slicing from standard Python lists, tuples, and strings, when slicing time series by labels (and other pandas Series & DataFrames by labels), the slice includes the right-most portion of the slice. That is, extracting my_time_series['1990':'1995'] extracts data from my_time_series corresponding to 1990, 1991, 1992, 1993, 1994, and 1995 inclusive.
• You can use partial strings or datetime objects for indexing and slicing from time series.
For this exercise, you will use time series slicing to plot the time series aapl over its full 11-year range and also over a shorter 2-year range. You'll arrange these plots in a 2 × 1 grid of subplots
Instructions
• Plot the series aapl in 'blue' in the top subplot of a vertically-stacked pair of subplots, with the xticks rotated to 45 degrees.
• Extract a slice named view from the series aapl containing data from the years 2007 to 2008 (inclusive). This has been done for you.
• Plot the slice view in black in the bottom subplot.
In [121]:
plt.figure(figsize=(8, 9))
# Plot the series in the top subplot in blue
plt.subplot(2,1,1)
plt.xticks(rotation=45)
plt.title('AAPL: 2001 to 2011')
plt.plot(df_stocks.AAPL, color='blue')
# Slice aapl from '2007' to '2008' inclusive: view
view = df_stocks.AAPL['2007':'2008']
# Plot the sliced data in the bottom subplot in black
plt.subplot(2,1,2)
plt.xticks(rotation=45)
plt.title('AAPL: 2007 to 2008')
plt.plot(view, color='black')
plt.tight_layout()
plt.show()
Plotting time series at different intervals can provide you with deeper insight into your data. Here, for example, you can see that the AAPL stock price rose and fell a great amount between 2007 and 2008.
### Multiple time series slices (2)¶
In this exercise, you will use the same time series aapl from the previous exercise and plot tighter views of the data.
• Partial string indexing works without slicing as well. For instance, using my_time_series['1995'], my_time_series['1999-05'], and my_time_series['2000-11-04'] respectively extracts views of the time series my_time_series corresponding to the entire year 1995, the entire month May 1999, and the entire day November 4, 2000.
Instructions
• Extract a slice named view_1 from the series aapl containing data from November 2007 to April 2008 (inclusive). This has been done for you.
• Plot the slice view_1 in 'red' in the top subplot of a vertically-stacked pair of subplots with the xticks rotated to 45 degrees.
• Assign the slice view_2 to contain data from the series aapl for January 2008. This has been done for you.
• Plot the slice view_2 in 'green' in the bottom subplot with the xticks rotated to 45 degrees.
In [122]:
plt.figure(figsize=(8, 9))
# Slice aapl from Nov. 2007 to Apr. 2008 inclusive: view
view_1 = df_stocks.AAPL['2007-11':'2008-04']
# Plot the sliced series in the top subplot in red
plt.subplot(2, 1, 1)
plt.plot(view_1, color='red')
plt.title('AAPL: Nov. 2007 to Apr. 2008')
plt.xticks(rotation=45)
# Reassign the series by slicing the month January 2008
view_2 = df_stocks.AAPL['2008-01']
# Plot the sliced series in the bottom subplot in green
plt.subplot(2, 1, 2)
plt.plot(view_2, color='green')
plt.title('AAPL: Jan. 2008')
plt.xticks(rotation=45)
# Improve spacing and display the plot
plt.tight_layout()
plt.show()
### Plotting an inset view¶
Remember, rather than comparing plots with subplots or overlayed plots, you can generate an inset view directly using plt.axes(). In this exercise, you'll reproduce two of the time series plots from the preceding two exercises. Your figure will contain an inset plot to highlight the dramatic changes in AAPL stock price between November 2007 and April 2008 (as compared to the 11 years from 2001 to 2011).
Instructions
• Extract a slice of series aapl from November 2007 to April 2008 inclusive. This has been done for you.
• Plot the entire series aapl.
• Create a set of axes with lower left corner (0.25, 0.5), width 0.35, and height 0.35. Pass these four coordinates to plt.axes() as a list (all in units relative to the figure dimensions).
• Plot the sliced view in the current axes in 'red'.
In [123]:
plt.figure(figsize=(8, 8))
# Slice aapl from Nov. 2007 to Apr. 2008 inclusive: view
view = df_stocks.AAPL['2007-11':'2008-04']
# Plot the entire series
plt.plot(df_stocks.AAPL)
plt.xticks(rotation=45)
plt.title('AAPL: 2001-2011')
# Specify the axes
plt.axes([0.25, 0.5, 0.35, 0.35])
# Plot the sliced series in red using the current axes
plt.plot(view, color='red')
plt.xticks(rotation=45)
plt.title('2007/11-2008/04')
plt.show()
Inset views are a useful way of comparing time series data.
## Time Series With Moving Windows¶
Hourly Data Over a Year
In [124]:
plt.figure(figsize=(8, 5))
plt.plot(df_weather.Temperature, color='blue')
plt.xticks(rotation=45)
plt.title('Temperature 2010')
plt.show()
Zooming In
In [125]:
view = df_weather.Temperature['2010-07']
plt.plot(view, color='purple')
plt.xticks(rotation=45)
plt.title('Temperature 2010-07')
plt.show()
Moving Averages
In [126]:
smoothed = pd.DataFrame(df_weather['Temperature'].copy())
smoothed['14d'] = smoothed.iloc[:, 0].rolling(336).mean()
smoothed['1d'] = smoothed.iloc[:, 0].rolling(24).mean()
smoothed['3d'] = smoothed.iloc[:, 0].rolling(72).mean()
smoothed['7d'] = smoothed.iloc[:, 0].rolling(168).mean()
Out[126]:
Temperature 14d 1d 3d 7d
Date
2010-01-01 00:00:00 46.2 NaN NaN NaN NaN
2010-01-01 01:00:00 44.6 NaN NaN NaN NaN
2010-01-01 02:00:00 44.1 NaN NaN NaN NaN
2010-01-01 03:00:00 43.8 NaN NaN NaN NaN
2010-01-01 04:00:00 43.5 NaN NaN NaN NaN
Viewing 24-Hour Averages
In [127]:
plt.plot(smoothed['1d']) # moving average over 24 hours
plt.title('Temperature (2010)')
plt.xticks(rotation=60)
plt.show()
Viewing All Moving Averages
In [128]:
plt.plot(smoothed.iloc[:, 1:]['2010-01']) # plot DataFrame for January
plt.legend(smoothed.columns[1:])
plt.title('Temperature (Jan. 2010)')
plt.xticks(rotation=60)
plt.show()
Moving Standard Deviations
In [129]:
variances = pd.DataFrame(df_weather['Temperature'].copy())
variances['14d'] = variances.iloc[:, 0].rolling(336).std()
variances['1d'] = variances.iloc[:, 0].rolling(24).std()
variances['3d'] = variances.iloc[:, 0].rolling(72).std()
variances['7d'] = variances.iloc[:, 0].rolling(168).std()
Out[129]:
Temperature 14d 1d 3d 7d
Date
2010-01-01 00:00:00 46.2 NaN NaN NaN NaN
2010-01-01 01:00:00 44.6 NaN NaN NaN NaN
2010-01-01 02:00:00 44.1 NaN NaN NaN NaN
2010-01-01 03:00:00 43.8 NaN NaN NaN NaN
2010-01-01 04:00:00 43.5 NaN NaN NaN NaN
In [130]:
plt.figure(figsize=(8, 5))
plt.plot(variances.iloc[:, 1:]['2010-01']) # plot DataFrame for January
plt.legend(variances.columns[1:])
plt.title('Temperature Deviations (Jan. 2010)')
plt.xticks(rotation=60)
plt.show()
### Plotting moving averages¶
In this exercise, you will plot pre-computed moving averages of AAPL stock prices in distinct subplots.
• The time series aapl is overlayed in black in each subplot for comparison.
• The time series mean_30, mean_75, mean_125, and mean_250 have been computed for you (containing the windowed averages of the series aapl computed over windows of width 30 days, 75 days, 125 days, and 250 days respectively).
Instructions
• In the top left subplot, plot the 30-day moving averages series mean_30 in 'green'.
• In the top right subplot, plot the 75-day moving averages series mean_75 in 'red'.
• In the bottom left subplot, plot the 125-day moving averages series mean_125 in 'magenta'.
• In the bottom right subplot, plot the 250-day moving averages series mean_250 in 'cyan'.
In [131]:
mean_30 = df_stocks['AAPL'].rolling(30).mean()
mean_75 = df_stocks['AAPL'].rolling(75).mean()
mean_125 = df_stocks['AAPL'].rolling(125).mean()
mean_250 = df_stocks['AAPL'].rolling(250).mean()
In [132]:
plt.figure(figsize=(8, 12))
# Plot the 30-day moving average in the top left subplot in green
plt.subplot(2, 2, 1)
plt.plot(mean_30, color='green')
plt.plot(df_stocks.AAPL, 'k-.')
plt.xticks(rotation=60)
plt.title('30d averages')
# Plot the 75-day moving average in the top right subplot in red
plt.subplot(2, 2, 2)
plt.plot(mean_75, color='red')
plt.plot(df_stocks.AAPL, 'k-.')
plt.xticks(rotation=60)
plt.title('75d averages')
# Plot the 125-day moving average in the bottom left subplot in magenta
plt.subplot(2, 2, 3)
plt.plot(mean_125, color='magenta')
plt.plot(df_stocks.AAPL, 'k-.')
plt.xticks(rotation=60)
plt.title('125d averages')
# Plot the 250-day moving average in the bottom right subplot in cyan
plt.subplot(2, 2, 4)
plt.plot(mean_250, color='cyan')
plt.plot(df_stocks.AAPL, 'k-.')
plt.xticks(rotation=60)
plt.title('250d averages')
# Display the plot
plt.show()
### Plotting moving standard deviations¶
Having plotted pre-computed moving averages of AAPL stock prices on distinct subplots in the previous exercise, you will now plot pre-computed moving standard deviations of the same stock prices, this time together on common axes.
• The time series aapl is not plotted in this case; it is of a different length scale than the standard deviations.
• The time series std_30, std_75, std_125, & std_250 have been computed for you (containing the windowed standard deviations of the series apl computed over windows of width 30 days, 75 days, 125 days, & 250 days respectively).
Instructions
• Produce a single plot with four curves overlayed:
• the series std_30 in 'red' (with corresponding label '30d').
• the series std_75 in 'cyan' (with corresponding label '75d').
• the series std_125 in 'green' (with corresponding label '125d').
• the series std_250 in 'magenta' (with corresponding label '250d').
• Add a legend to the 'upper left' corner of the plot.
In [133]:
std_30 = df_stocks['AAPL'].rolling(30).std()
std_75 = df_stocks['AAPL'].rolling(75).std()
std_125 = df_stocks['AAPL'].rolling(125).std()
std_250 = df_stocks['AAPL'].rolling(250).std()
In [134]:
plt.figure(figsize=(9, 6))
# Plot std_30 in red
plt.plot(std_30, color='green', label='30d')
# Plot std_75 in cyan
plt.plot(std_75, color='red', label='75d')
# Plot std_125 in green
plt.plot(std_125, color='magenta', label='125d')
# Plot std_250 in magenta
plt.plot(std_250, color='cyan', label='250d')
# Add a legend to the upper left
plt.legend(loc='upper left')
plt.title('Moving standard deviations')
# Display the plot
plt.show()
### Interpreting moving statistics¶
From the previous plot of moving standard deviations, what length is the moving window that most consistently produces the greatest variance (standard deviation) in the AAPL stock price over the time interval shown?
Instructions
• 30 days
• 75 days
• 125 days
• 250 days
Wider moving windows admit greater variability!
## Histogram Equalization In Images¶
Original Low Contrast Mars Surface Image
I found the image at Planetary Science Short Course.
• This is a Mars Odyssey THEMIS infrared image mosaic of an area of Mars at 13 degrees south, 215 degrees east. Cratered highlands to the west are part of Terra Sirenum, and the plains to the west are Daedalia Planum. The image is about 600 km across (long dimension).
• Matplotlib Image Tutorial
Image File
In [135]:
lunar_image_url = 'https://raw.githubusercontent.com/trenton3983/DataCamp/master/Images/intro_to_data_visualization_in_python/4_3_low_contrast_mars_surface.JPG'
lunar_image_path = Path('Images/intro_to_data_visualization_in_python/4_3_low_contrast_mars_surface.JPG')
create_dir_save_file(lunar_image_path, lunar_image_url)
Directory Exists
File Exists
Image Histograms
In [136]:
pixels = orig.flatten()
plt.hist(pixels, bins=256, range=(0,256), density=True, color='blue', alpha=0.3)
plt.show()
minval, maxval = orig.min(), orig.max()
print(minval, maxval)
125 244
Rescaling the Image
In [137]:
plt.figure(figsize=(12, 12))
minval, maxval = orig.min(), orig.max()
print(minval, maxval)
# this is the equation from section 2.4.4
rescaled = 256*(orig - minval) / (maxval - minval)
# the rescaled equation from the slides is not correct
# rescaled = (255/(maxval-minval)) * (pixels - minval) # original equation
print(rescaled.min(), rescaled.max())
plt.imshow(rescaled, cmap='gray')
plt.axis('off')
plt.show()
125 244
0.0 256.0
Original & Rescaled Histograms
In [138]:
plt.hist(pixels, bins=256, range=(0,255), density=True, color='blue', alpha=0.2)
plt.hist(rescaled.flatten(), bins=256, range=(0,255), density=True, color='green', alpha=0.2)
plt.legend(['original', 'rescaled'])
plt.show()
Image Histogram & CDF
In [139]:
plt.hist(pixels, bins=256, range=(0,256), density=True, color='blue', alpha=0.3)
plt.twinx()
orig_cdf, bins, patches = plt.hist(pixels, cumulative=True, bins=256, range=(0,256), density=True, color='red', alpha=0.3)
plt.title('Image histogram and CDF')
plt.xlim((0, 255))
plt.show()
Equalizing Intensity Values
In [140]:
new_pixels = np.interp(pixels, bins[:-1], orig_cdf*255)
new = new_pixels.reshape(orig.shape)
plt.imshow(new, cmap='gray')
plt.axis('off')
plt.title('Equalized image')
plt.show()
Equalized Histogram & CDF
In [141]:
plt.hist(new_pixels, bins=256, range=(0,256), density=True, color='blue', alpha=0.3)
plt.twinx()
plt.hist(new_pixels, cumulative=True, bins=256, range=(0,256), density=True, color='red', alpha=0.1)
plt.title('Equalized image histogram and CDF')
plt.xlim((0, 255))
plt.show()
### Extracting a histogram from a grayscale image¶
For grayscale images, various image processing algorithms use an image histogram. Recall that an image is a two-dimensional array of numerical intensities. An image histogram, then, is computed by counting the occurences of distinct pixel intensities over all the pixels in the image.
For this exercise, you will load an unequalized low contrast image of Hawkes Bay, New Zealand (originally by Phillip Capper, modified by User:Konstable, via Wikimedia Commons, CC BY 2.0). You will plot the image and use the pixel intensity values to plot a normalized histogram of pixel intensities.
Instructions
• Load data from the file '640px-Unequalized_Hawkes_Bay_NZ.jpg' into an array.
• Display image with a color map of 'gray' in the top subplot.
• Flatten image into a 1-D array using the .flatten() method.
• Display a histogram of pixels in the bottom subplot.
• Use histogram options bins=64, range=(0,256), and normed=True to control numerical binning and the vertical scale.
• Use plotting options color='red' and alpha=0.4 to tailor the color and transparency.
In [142]:
plt.figure(figsize=(12, 12))
# Load the image into an array: image
image = plt.imread(dir_path_hawk) # path is from section 2.4.4
# Display image in top subplot using color map 'gray'
plt.subplot(2,1,1)
plt.title('Original image')
plt.axis('off')
plt.imshow(image, cmap='gray')
# Flatten the image into 1 dimension: pixels
pixels = image.flatten()
# Display a histogram of the pixels in the bottom subplot
plt.subplot(2,1,2)
plt.xlim((0,255))
plt.title('Normalized histogram')
plt.hist(pixels, bins=64, range=(0,256), density=True, color='red', alpha=0.4)
# Display the plot
plt.show()
Image histograms are an important component of many image processing algorithms.
### Cumulative Distribution Function from an image histogram¶
A histogram of a continuous random variable is sometimes called a Probability Distribution Function (or PDF). The area under a PDF (a definite integral) is called a Cumulative Distribution Function (or CDF). The CDF quantifies the probability of observing certain pixel intensities.
Your task here is to plot the PDF and CDF of pixel intensities from a grayscale image. You will use the grayscale image of Hawkes Bay, New Zealand (originally by Phillip Capper, modified by User:Konstable, via Wikimedia Commons, CC BY 2.0). This time, the 2D array image will be pre-loaded and pre-flattened into the 1D array pixels for you.
• The histogram option cumulative=True permits viewing the CDF instead of the PDF.
• Notice that plt.grid('off') switches off distracting grid lines.
• The command plt.twinx() allows two plots to be overlayed sharing the x-axis but with different scales on the y-axis.
Instructions
• First, use plt.hist() to plot the histogram of the 1-D array pixels in the bottom subplot.
• Use the histogram options bins=64, range=(0,256), and normed=False.
• Use the plotting options alpha=0.4 and color='red' to make the overlayed plots easier to see.
• Second, use plt.twinx() to overlay plots with different vertical scales on a common horizontal axis.
• Third, call plt.hist() again to overlay the CDF in the bottom subplot.
• Use the histogram options bins=64, range=(0,256), and normed=True.
• This time, also use cumulative=True to compute and display the CDF.
• Also, use alpha=0.4 and color='blue' to make the overlayed plots easier to see.
In [143]:
plt.figure(figsize=(12, 12))
# Load the image into an array: image
# Display image in top subplot using color map 'gray'
plt.subplot(2,1,1)
plt.imshow(image, cmap='gray')
plt.title('Original image')
plt.axis('off')
# Flatten the image into 1 dimension: pixels
pixels = image.flatten()
# Display a histogram of the pixels in the bottom subplot
plt.subplot(2,1,2)
pdf = plt.hist(pixels, bins=64, range=(0,256), density=False, color='red', alpha=0.4)
plt.grid('off')
# Use plt.twinx() to overlay the CDF in the bottom subplot
plt.twinx()
# Display a cumulative histogram of the pixels
cdf = plt.hist(pixels, bins=64, range=(0,256), cumulative=True, density=True, color='blue', alpha=0.4)
# Specify x-axis range, hide axes, add title and display plot
plt.xlim((0,256))
plt.grid('off')
plt.title('PDF & CDF (original image)')
plt.show()
Notice that the histogram is not well centered over the range of possible pixel intensities. The CDF rises sharply near the middle (that relates to the overall grayness of the image).
### Equalizing an image histogram¶
Histogram equalization is an image processing procedure that reassigns image pixel intensities. The basic idea is to use interpolation to map the original CDF of pixel intensities to a CDF that is almost a straight line. In essence, the pixel intensities are spread out and this has the practical effect of making a sharper, contrast-enhanced image. This is particularly useful in astronomy and medical imaging to help us see more features.
For this exercise, you will again work with the grayscale image of Hawkes Bay, New Zealand (originally by Phillip Capper, modified by User:Konstable, via Wikimedia Commons, CC BY 2.0). Notice the sample code produces the same plot as the previous exercise. Your task is to modify the code from the previous exercise to plot the new equalized image as well as its PDF and CDF.
• The arrays image and pixels are extracted for you in advance.
• The CDF of the original image is computed using plt.hist().
• Notice an array new_pixels is created for you that interpolates new pixel values using the original image CDF.
Instructions 1/2
• Plot the new equalized image.
• Use the NumPy array method .reshape() to create a 2-D array new_image from the 1-D array new_pixels.
• The resulting new_image should have the same shape as image.shape, which can be accomplished by passing this as the argument to .reshape().
• Display new_image with a 'gray' color map to display the sharper, equalized image.
In [144]:
plt.figure(figsize=(12, 12))
# Load the image into an array: image
# Flatten the image into 1 dimension: pixels
pixels = image.flatten()
# Generate a cumulative histogram
cdf, bins, patches = plt.hist(pixels, bins=256, range=(0,256), density=True, cumulative=True)
new_pixels = np.interp(pixels, bins[:-1], cdf*255)
# Reshape new_pixels as a 2-D array: new_image
new_image = new_pixels.reshape(image.shape)
# Display the new image with 'gray' color map
plt.subplot(2,1,1)
plt.title('Equalized image')
plt.axis('off')
plt.imshow(new_image, cmap='gray')
plt.show()
Instructions 2/2
• Plot the new equalized image's PDF and CDF.
• Plot the PDF of new_pixels in 'red'.
• Use plt.twinx() to overlay plots with different vertical scales on a common horizontal axis.
• Plot the CDF of new_pixels in 'blue'.
In [145]:
plt.figure(figsize=(8, 8))
pdf = plt.hist(new_pixels, bins=64, range=(0,256), density=False, color='red', alpha=0.4)
plt.grid('off')
# Use plt.twinx() to overlay the CDF in the bottom subplot
plt.twinx()
plt.xlim((0,256))
plt.grid('off')
plt.title('PDF & CDF (equalized image)')
# Generate a cumulative histogram of the new pixels
cdf = plt.hist(new_pixels, bins=64, range=(0,256), cumulative=True, density=True, color='blue', alpha=0.4)
plt.show()
Histogram equalization can help make an image sharper.
### Extracting histograms from a color image¶
This exercise resembles the last in that you will plot histograms from an image. This time, you will use a color image of the Helix Nebula as seen by the Hubble and the Cerro Toledo Inter-American Observatory. The separate RGB (red-green-blue) channels will be extracted for you as two-dimensional arrays red, green, and blue respectively. You will plot three overlaid color histograms on common axes (one for each channel) in a subplot as well as the original image in a separate subplot.
Instructions
• Display image in the top subplot of a 2 × 1 subplot grid. Don't use a colormap here.
• Flatten the 2-D arrays red, green, and blue into 1-D arrays.
• Display three histograms in the bottom subplot: one for red_pixels, one for green_pixels, and one for blue_pixels. For each, use 64 bins and specify a translucency of alpha=0.2.
In [146]:
helix_dir_path = Path('Images/intro_to_data_visualization_in_python/ps09_display-helix.jpg')
create_dir_save_file(helix_dir_path, helix_url)
Directory Exists
File Exists
In [147]:
plt.figure(figsize=(12, 12))
# Load the image into an array: image
# crop image
image = image[100:560, 368:864, :]
# Display image in top subplot
plt.subplot(2,1,1)
plt.title('Original image')
plt.axis('off')
plt.imshow(image)
# Extract 2-D arrays of the RGB channels: red, green, blue
red, green, blue = image[:,:,0], image[:,:,1], image[:,:,2]
# Flatten the 2-D arrays of the RGB channels into 1-D
red_pixels = red.flatten()
green_pixels = green.flatten()
blue_pixels = blue.flatten()
# Overlay histograms of the pixels of each color in the bottom subplot
plt.subplot(2, 1, 2)
plt.title('Histograms from color image')
plt.xlim((0, 256))
plt.hist(red_pixels, bins=64, density=True, color='red', alpha=0.2)
plt.hist(green_pixels, bins=64, density=True, color='green', alpha=0.2)
plt.hist(blue_pixels, bins=64, density=True, color='blue', alpha=0.2)
# Display the plot
plt.show()
• Notice how the histogram generated from this color image differs from the histogram generated earlier from a grayscale image.
• This image is slightly different than the one in the DataCamp exercise, which is why it needed to be cropped and why the histogram doesn't precisely match.
### Extracting bivariate histograms from a color image¶
Rather than overlaying univariate histograms of intensities in distinct channels, it is also possible to view the joint variation of pixel intensity in two different channels.
For this final exercise, you will use the same color image of the Helix Nebula as seen by the Hubble and the Cerro Tololo Inter-American Observatory. The separate RGB (red-green-blue) channels will be extracted for you as one-dimensional arrays red_pixels, green_pixels, & blue_pixels respectively.
Instructions
• Make a 2-D histogram (not a regular histogram) in the top left subplot showing the joint variation of red_pixels (on the x-axis) and green_pixels (on the y-axis). Use bins=(32,32) to control binning.
• Make another 2-D histogram in the top right subplot showing the joint variation of green_pixels (on the x-axis) and blue_pixels (on the y-axis). Use bins=(32,32) to control binning.
• Make another 2-D histogram in the bottom left subplot showing the joint variation of blue_pixels (on the x-axis) and red_pixels (on the y-axis). Use bins=(32,32) to control binning.
In [148]:
plt.figure(figsize=(12, 12))
# Load the image into an array: image
# crop image
image = image[100:560, 368:864, :]
# Extract RGB channels and flatten into 1-D array
red, green, blue = image[:,:,0], image[:,:,1], image[:,:,2]
red_pixels = red.flatten()
green_pixels = green.flatten()
blue_pixels = blue.flatten()
# Generate a 2-D histogram of the red and green pixels
plt.subplot(2,2,1)
plt.grid('off')
plt.xticks(rotation=60)
plt.xlabel('red')
plt.ylabel('green')
plt.hist2d(red_pixels, green_pixels, bins=(32, 32))
# Generate a 2-D histogram of the green and blue pixels
plt.subplot(2,2,2)
plt.grid('off')
plt.xticks(rotation=60)
plt.xlabel('green')
plt.ylabel('blue')
plt.hist2d(green_pixels, blue_pixels, bins=(32, 32))
# Generate a 2-D histogram of the blue and red pixels
plt.subplot(2,2,3)
plt.grid('off')
plt.xticks(rotation=60)
plt.xlabel('blue')
plt.ylabel('red')
plt.hist2d(blue_pixels, red_pixels, bins=(32, 32))
# Display the plot
plt.show()
|
2021-10-19 23:36:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3481445908546448, "perplexity": 6916.971199343708}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585290.83/warc/CC-MAIN-20211019233130-20211020023130-00168.warc.gz"}
|
https://pythonscriptingmanual.readthedocs.io/en/latest/calculating_distances_in_CC3D_simulations.html
|
# Calculating distances in CC3D simulations.¶
This may seem like a trivial task. After all, Pitagorean theorem is one of the very first theorems that people learn in basic mathematics course. The purpose of this section is to present convenience functions which will make your code more readable. You can easily code such functions yourself but you probably will save some time if you use ready solutions. One of the complications in the CC3D is that sometimes you may run simulation using periodic boundary conditions. If that’s the case, imagine two cells close to the right hand side border of the lattice and moving to the right. When we have periodic boundary conditions along X axis one of such cells will cross lattice boundary and will appear on the left hand side of the lattice. What should be a distance between cells before and after once of them crosses lattice boundary? Clearly, if we use a naïve formula the distance between cells will be small when all cells are close to righ hand side border but if one of them crosses the border the distance calculated using the simple formula will jump dramatically. Intuitively we feel that this is incorrect. The way solve this problem is by shifting one cell to approximately center of the lattice and than applying the same shift to the other cell. If the other cell ends up outside of the lattice we add a vector whose components are equal to dimensions of the lattice but only along this axes along which we have periodic boundary conditions. The point here is to bring a cell which ends up outside the lattice to beinside using vectors with components equal to the lattice dimensions. The net result of these shifts is that we have two cells in the middle of the lattice and the distance between them is true distance regardless the type of boundary conditions we use. You should realize that when we talk about cell shifting we are talking only about calculations and not physical shifts that occur on the lattice.
Example CellDistance from CompuCellPythonTutorial directory demonstrates the use of the functions calculating distance between cells or between any 3D points:
class CellDistanceSteppable(SteppableBasePy):
def __init__(self,_simulator,_frequency=1):
SteppableBasePy.__init__(self,_simulator,_frequency)
self.cellA=None
self.cellB=None
def start(self):
self.cellA=self.potts.createCell()
self.cellA.type=self.A
self.cellField[10:12,10:12,0]=self.cellA
self.cellB=self.potts.createCell()
self.cellB.type=self.B
self.cellField[92:94,10:12,0]=self.cellB
def step(self,mcs):
distVec=self.invariantDistanceVectorInteger(_from=[10,10,0] ,_to=[92,12,0])
print 'distVec=',distVec, ' norm=',self.vectorNorm(distVec)
distVec=self.invariantDistanceVector(_from=[10,10,0] ,_to=[92.3,12.1,0])
print 'distVec=',distVec, ' norm=',self.vectorNorm(distVec)
print 'distance invariant='\
,self.invariantDistance(_from=[10,10,0] ,_to=[92.3,12.1,0])
print 'distance =',self.distance(_from=[10,10,0] ,_to=[92.3,12.1,0])
print 'distance vector between cells ='\
,self.distanceVectorBetweenCells(self.cellA,self.cellB)
print 'inv. vec between cells ='\
,self.invariantDistanceVectorBetweenCells(self.cellA,self.cellB)
print 'distanceBetweenCells = ',self.distanceBetweenCells(self.cellA,self.cellB)
print 'invariantDistanceBetweenCells = ',\
self.invariantDistanceBetweenCells(self.cellA,self.cellB)
In the start function we create two cells – self.cellA and self.cellB. In the step function we calculate invariant distance vector between two points using self.invariantDistanceVectorInteger function. Notice that the word Integer in the function name suggests that the result of this call will be a vector with integer components. Invariant distance vector is a vector that is obtained using our shifting operations described earlier.
The next function used inside step is self.vectorNorm. It returns length of the vector. Notice that we specify vectors or 3D points in space using [] operator. For example to specify vector, or a point with coordinates x, y, z = (10, 12, -5) you use the following syntax:
[10,12,-5]
If we want to calculate invariant vector but with components being floating point numbers we use self.invariantDistanceVector function. You may ask why not using floating point always? The reason is that sometimes CC3D expects vectors/points with integer coordinates to e.g. access specific lattice points. By using appropriate distance functions you may write cleaner code and avoid casting and rounding operators. However this is a matter of taste and if you prefer using floating point coordinates it is perfectly fine. Just be aware that when converting floating point coordinate to integer you need to use round and int functions.
Function self.distance calculates distance between two points in a naïve way. Sometimes this is all you need. Finally the set of last four calls self.distanceVectorBetweenCells, self.invariantDistanceVectorBetweenCells, self.distanceBetweenCells, self.invariantDistanceBetweenCells calculates distances and vectors between center of masses of cells. You could replace
self.invariantDistanceVectorBetweenCells(self.cellA,self.cellB)
with
self.invariantDistanceVectorBetweenCells(
_from=[ self.cellA.xCOM, self.cellA.yCOM, self.cellA.yCOM],
_to=[ self.cellB.xCOM, self.cellB.yCOM, self.cellB.yCOM]
)
but it is not hard to notice that the former is much easier to read.
|
2018-10-20 06:29:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48429781198501587, "perplexity": 663.4070662583523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512592.60/warc/CC-MAIN-20181020055317-20181020080817-00452.warc.gz"}
|
https://www.physicsforums.com/threads/dividing-functions-questions.316786/
|
# Dividing Functions Questions
1. May 28, 2009
Hi there,
Quick question. For F(X)= X/Sin(X), is there a hole at X=0?
Thanks.
2. May 28, 2009
### Bohrok
What do you get when plugging 0 into F(X) ?
3. May 28, 2009
### tiny-tim
At x = 0, obviously, it's 0/0, which is undefined (it's known as an "indeterminate form"), so yes in that sense there's a hole …
of course, F(x) does tend to a limit at as x -> 0
4. May 28, 2009
0/Sin 0 = undefined.
So basically, there's my answer. There is a hole at x=0. There is also an oblique asymptote of f(x)=x, correct?
5. May 28, 2009
Thanks so much! Can you help me explain why there is an oblique asymptote?
6. May 28, 2009
### tiny-tim
uhh?
wot's an oblique asymptote?
7. May 28, 2009
When a linear asymptote is not parallel to the x- or y-axis, it is called either an oblique asymptote or equivalently a slant asymptote.
In the graph of X/Sin(X), there appears to be an asymptote at y=x
8. May 28, 2009
### chroot
Staff Emeritus
The function continues to have a defined value as you get arbitrarily close to zero, thus the limit as x->0 is defined. The function itself is undefined only exactly at zero.
- Warren
9. May 28, 2009
### Bohrok
Try graphing x/sin(x) and you'll only see vertical asymptotes when the denominator, or sin(x), is 0.
As far as I know, a rational function P(x)/Q(x) where P and Q are polynomials has an oblique asymptote only when the degree of the numerator is one larger than that of the denominator. In x/sin(x) you have a transcendental function in the denominator.
10. May 28, 2009
Ok, so NO oblique asymptote, correct?
11. May 28, 2009
### tiny-tim
Still totally confused as to why this is called an asymptote instead of a tangent.
Anyway I can't see how it's slanting ……
what is limx -> 0 x/sinx ?
12. May 28, 2009
### Bohrok
That's right.
A slant asymptote
http://home.att.net/~srschmitt/precalc/precalc-fig12-03.gif [Broken]
Last edited by a moderator: May 4, 2017
13. May 28, 2009
### tiny-tim
So that's only at infinity?
14. May 28, 2009
### Bohrok
and also negative infinity if the domain goes there too.
15. May 29, 2009
### HallsofIvy
tiny-tim, the word "asymptote" was wrong here. He intended "tangent" as you suggested. Because there is a "hole" at x= 0, there is no tangent there.
|
2018-03-21 11:27:14
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.841765284538269, "perplexity": 2228.6882489850577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647612.53/warc/CC-MAIN-20180321102234-20180321122234-00472.warc.gz"}
|
https://deepai.org/publication/on-optimal-operational-sequence-of-components-in-a-warm-standby-system
|
# On Optimal Operational Sequence of Components in a Warm Standby System
We consider an open problem of optimal operational sequence for the 1-out-of-n system with warm standby. Using the virtual age concept and the cumulative exposure model, we show that the components should be activated in accordance with the increasing sequence of their lifetimes. Lifetimes of the components and the system are compared with respect to the stochastic precedence order. Only specific cases of this optimal problem were considered in the literature previously.
There are no comments yet.
## Authors
• 1 publication
• 1 publication
• 1 publication
• ### Benchmark Instances and Optimal Solutions for the Traveling Salesman Problem with Drone
The use of drones in logistics is gaining more and more interest, and dr...
07/28/2021 ∙ by Mauro Dell'Amico, et al. ∙ 0
• ### Efficient constructions of the Prefer-same and Prefer-opposite de Bruijn sequences
The greedy Prefer-same de Bruijn sequence construction was first present...
10/15/2020 ∙ by Evan Sala, et al. ∙ 0
• ### Dynamic maintenance policy for systems with repairable components subject to mutually dependent competing failure processes
In this paper, a repairable multi-component system is studied where all ...
02/02/2020 ∙ by Nooshin Yousefi, et al. ∙ 0
• ### Universal Semantics for the Stochastic Lambda-Calculus
We define sound and adequate denotational and operational semantics for ...
11/26/2020 ∙ by Pedro Amorim, et al. ∙ 0
• ### How to Price Fresh Data
We introduce the concept of a fresh data market, in which a destination ...
04/15/2019 ∙ by Meng Zhang, et al. ∙ 0
• ### A Short Note on Analyzing Sequence Complexity in Trajectory Prediction Benchmarks
The analysis and quantification of sequence complexity is an open proble...
03/27/2020 ∙ by Ronny Hug, et al. ∙ 0
##### This week in AI
Get the week's most popular data science and artificial intelligence research sent straight to your inbox every Saturday.
## 1 Introduction
As an introductory reasoning, consider first one component that starts operating at . Assume that in the process of production it had acquired an initial unobserved resource (Finkelstein [6]
). For mechanical or electronic items, for instance, it can be a ‘distance’ between the initial value of the key parameter and the boundary that defines a failure of the component. It is natural to assume that it is a continuous random variable with the Cdf
F(r)=P(R≤r). (1.1)
A similar notion of a random resource (hazard potential) was considered in Singpurwala [17]. Suppose that for each realization of the component’s remaining resource is monotonically decreasing with time. Therefore, the run out resource, to be called wear, monotonically increases. The wear in can be defined as
W(t)=t∫0w(u)du, (1.2)
where denotes the rate of wear. Thus the value of is an intrinsic property of a manufactured item, whereas the rate defines the ‘consumption’ of in a given environment. The larger rate corresponds to a severer environment, whereas can be often considered as a baseline one. The failure occurs when the wear reaches . Denote the corresponding random time by . Then
P(T≤t)≡P(R≤W(t))=F(W(t)). (1.3)
Therefore, the described survival model can be interpreted in terms of the accelerated life model (ALM)(Nelson [13]; Bagdonavicius and Nikulin [2]). Our reasoning in what follows will be based on the ALM (1.3), whereas the discussion above can be considered as a useful interpretation.
In applications, the most common specific case is the cumulative exposure model (Nelson [13]), which corresponds to the case when the scale transformation in (1.3) is linear, i.e.,
P(T≤t)≡P(R≤wt)=F(wt). (1.4)
Engineering systems, especially those that are used in mission-critical applications such as aerospace, power generation, flight control and computing, are often designed with redundancies in order to meet the stringent safety and reliability requirements (Levitin et al. [10, 11]). One of the widely-applied redundancy techniques in various applications is the standby redundancy, when one or several components operate and redundant components serve as the standby spares. In the case of failure of an operating component, a replacement procedure is initiated to activate a standby component and to replace the failed one so that a system continues to operate.
According to its failure characteristics before the activation, a standby component can be categorized as ‘hot’, ‘cold’, or ‘warm’. A hot standby component works concurrently with the online primary component and thus is ready to take over at any time for fast recovery. In this case, the standby component is fully exposed to the operating stress and is characterized by the same failure rate as the online one. A cold standby component is unpowered and shielded from operation and environmental stresses. As a more general option that, e.g., can take into account the non-ideal standby mode conditions or/and partial loading, a warm standby component is characterized by the failure rate that is smaller than that for the fully operational component. (Yun and Cha [18]; Levitin et al. [10, 11]; Zhang et al. [19]; Hazra and Nanda [9]). Obviously, the former two types of loading are the special cases of the warm standby mode.
Reliability analysis of the warm standby systems is much more challenging than that for cold and hot standby. Indeed, the lifetime of a cold standby system is just the sum of lifetimes of all components; the lifetime of a hot standby system is just a maximum of individual lifetimes, whereas in the warm standby case, a switch of the regimes from the warm standby to the operational mode should be taken into account. In accordance with the linear cumulative exposure model based on the scale transformation (1.4) with , the equivalent lifetime (virtual age) of a warm standby component that had spent some time in this mode before switching to the active mode is this time reduced by the lifetime deceleration factor plus the lifetime spent in the active mode afterwards. More general models not restricted to the case of a linear scale transformation are usually based on the notion of the ‘virtual age’. See, e.g., Cha et al. [4] and Finkelstein [5] for applications of the virtual age concept to regimes switching.
###### Remark 1.1
Note that we can arrive at (1.3) formally without employing the notion of resource. Indeed, let a more severe environment be the baseline and denote the corresponding lifetime in it by . The lifetime of a component in a milder environment should be larger. Assume that this is in the sense of usual stochastic ordering, i.e, , which implies that
Fm(t)=F(W(t)),
where and the time dependent scale transformation function is increasing and for all .
Optimal (in terms of maximizing reliability characteristics of a system) activation sequence for components obviously does not exist in a hot standby system, is trivial (no difference) for the cold standby system and is meaningful for a general warm standby system. Only some special cases (see Cha et al. [4] and Zhai et al. [19]) for the latter case were considered in the literature. In this note, we are considering the problem in a much more generality and therefore, under certain assumptions, solving an open problem of theoretical reliability.
## 2 Problem formulation
We want to obtain an optimal sequence of activation of the standby components for a heterogeneous system of components, with one active component and others in a warm standby mode. We assume that in a standby mode all components are characterized by the same deceleration factor . Generalization to the general case will be also discussed. Intuitive reasoning based on the notions of resource of the components prompts us that we must first activate the weakest component, then the weakest from the remaining, etc. Specific cases in the literature support this intuition. However, in what stochastic sense must we order components and other assumptions of the model are crucial for the corresponding proof.
Denote the lifetimes of the components of the system in active (operational regime) by , . Assume that they are ordered in some non-specified for now stochastic sense, i.e.,
T1≤T2≤⋯≤Tn. (2.1)
For definitions of various stochastic orders see, e.g., Shaked and Shantikumar [16]. If the operating component fails, the next operable one (that did not fail in the warm standby mode) is activated, etc. The question is to define a sequence of activation for standby components that will maximize the lifetime of the whole system (in some stochastic sense). Some important specific cases were studied in Cha et al. [4] and Zhai et al. [19], where
• The hazard rate ordering was considered for the lifetimes of two components. Then it was proved that one should start with the weaker in this sense component, which results in the maximum expected lifetime of a system.
• For the -out-of-
system, only the specific case of exponentially distributed lifetimes and linear model (
1.4) was considered. Then, under the assumption of the hazard rate ordering it was proved that if activation starts with the weakest component, and the next weakest is chosen from the remaining components, etc., reliability of the system will be maximal in the sense of the usual stochastic order.
The goal of the current study is to consider this problem in more generality for arbitrary lifetime distributions which is a challenging open problem. We think that the choice of stochastic ordering in the previous work was preventing authors from obtaining more general results. In what follows, we use the stochastic precedence order (to be defined in the next section), which is natural in many reliability settings and, in spite of this, not sufficiently explored in the literature so far.
The problem to be considered is based on the definition of the warm standby mode via the general model (1.3) or its specific case (1.4). It should be noted that this is an assumption itself (note that all previous specific studies of reliability of the warm standby systems relied on these or similar expressions). However, in order to consider switching from one regime to another, one must have a stochastic model for that. The virtual age concept based on ALM (1.3)-(1.4) is a well-established in the literature way to deal with this.
## 3 Two components
Let us consider first the system with two components with lifetimes in an operational mode ordered as in some stochastic sense to be defined below. Let , and let be the realizations of , , and be the corresponding realization of . Then
P(Z≥0)=P(T2≥T1). (3.1)
Denote by the lifetime of a system when the first component is activated first and by when the second is activated first and and the corresponding realization. We will show later that under given assumptions
z≥0⟹y12−y21≥0,
which, as each realization of corresponds to the realization of , implies that
Z≥0⟹Y12−Y21≥0. (3.2)
Thus, specifically, if
P(Z≥0)≥0.5, then P((Y12−Y21)≥0)≥0.5, (3.3)
which, in fact, is the definition of the stochastic precedence (sp) order for the components and for the variants of the system as well (Boland et al. [3]; Finkelstein [7])
T2≥spT1⟹Y12≥spY21.
Thus the stochastic precedence order for two random variables says that and it seems to be natural in many reliability settings, e.g., for the stress-strength reliability modeling (Finkelstein [7]). It is also consistent for the current problem, as the components and the variants of the system will be ordered only in the sense of this order. Note that the stochastic precedence order is weaker than the usual stochastic order (Boland et al. [3]). On the other hand, comparison with the ordering of expectations depends on parameters involved (Finkelstein [7]).
In spite of its obvious attractiveness the stochastic precedence order had attracted much less attraction in the literature and only a few papers are devoted to it (Boland et al. [3]; Finkelstein [7]). However, it may be the most natural one in many reliability settings (e.g., stress/strength problems). In fact, it was suggested in Finkelstein [7] to call it (at least at some instances) the stress-strength order, which naturally compares two random variables as in structural reliability. For recent advances, see Santis et al. [15], and Montes and Montes [12].
We will first prove the following result.
###### Theorem 3.1
Let the following stochastic precedence order holds for the two component system described above.
T2≥spT1.
Then the corresponding order of components achieves the maximum lifetime of a system in the sense of the stochastic precedence order, i.e.,
Proof: Let be the realizations of , and let . If the first component start first, then the corresponding realization of a lifetime of a system in accordance with the linear cumulative exposure model (1.4) with for a milder regime is
t1+(t2−wt1)=t2+(1−w)t1>t2, (3.4)
where is the virtual (equivalent) age of the second component just after switching to activation (from a warm standby mode) and, therefore, the remaining lifetime in this realization is .
Let now the second (better) component start first. We have two specific cases:
Case I: , (where ), which means that the first component (in a warm mode) will fail before the active second component. Note that as is the age of the first component at failure (in an active mode), in accordance with the model, is the age of the first component at failure if it is operates all time in the warm standby mode. Thus the lifetime of a system in this case is just .
Case II: . This means that the active second component fails before the warm standby one and that the switching should be performed at . Then the lifetime of a system in this realization is the sum
t2+αt1−t2α=t1+t2(1−w), (3.5)
where is the time that the first component should operate (after ), if it were operating in the warm standby mode. However, it was switched to the active mode and this time should be recalculated as .
Thus we must compare (3.4) with (3.5).
t2−wt1>t2(1−w),
which is true as .
Thus it is most beneficial to activate first the first component with a smaller lifetime in each realization. It follows then from (3.3).
###### Remark 3.1
As the virtual age concept is well-defined for a general model (1.2)–(1.3) and the function is monotonically increasing (therefore, the inverse function exists), Theorem 3.1 can be generalized to this case. Indeed let us compare relations that correspond to (3.4) and (3.5) in this case. Relationship (3.4) turns to
t1+(t2−W(t1)),
whereas (3.5) can be written now as
t2+W(W−1(t1)−t2), (3.6)
where denotes the inverse function which exists due to monotonicity of . Assume additionally that is concave, i.e., , which means that the rate of wear in (1.2) is decreasing (non-increasing). Then we can proceed with (3.6), which result in the following inequalities
t2+W(W−1(t1)−t2)≤t2+t1−W(t2)≤t1+t2−W(t1).
The first one obviously follows from our sufficient condition , whereas the second, from monotonicity of and . It seems that the assumption of concavity is essential for the stochastic precedence order in this case as it is easy to see via the corresponding counterexample () that the corresponding ordering for the system does not always hold.
## 4 n components
Consider the -out-of- components warm standby system. It is a coherent system meaning that each component is relevant and its structure function is monotone. It is well-known (Barlow and Proschan [1]) that in this case improving reliability of any of the components will improve reliability of a system. Thus this is the definition with respect to usual stochastic order both on the level of components and the system. On the other hand, it can be also easily seen that increasing the mean lifetime of a component not necessarily leads to increasing the mean lifetime of a system. Similarly, if we decrease the failure rate of a component, then it does not always imply that the system failure rate will also decrease. This means that the result is sensitive to the employed type of stochastic order. The relevant order in our discussion is the stochastic precedence order. Therefore, the corresponding monotonicity problem should be addressed specifically, as we need this result in what follows.
###### Lemma 4.1
If the lifetime of a component in a coherent system is improved in the sense of stochastic precedence order, then the lifetime of the coherent will also improved in the same sense.
Proof: Denote a lifetime of a coherent system of components by where for convenience of further notation, the lifetime of the th component is denoted just by . Let us replace this component with another one with lifetime , whereas all other lifetimes stay the same and denote the system lifetime . For convenience, we will call the defined systems and , respectively. Since is same as except is replaced by , the set of all minimal path sets for both systems will be the same (For a given system, the minimal path set is a set of minimum number of components whose functioning ensures the functioning of the system). Let be the set of all minimal path sets for both systems. Further, let denote the lifetime of the minimal path set , for .
For and , let be the set of minimal path sets that contain the component (for convenience we denote the component and its lifetime by the same letter). Similarly, let be the set of minimal path sets that contain the component . Note that, for , and may not be the same even though . In fact, for ,
TPjr=min{Sr,T}, TP∗jr=min{Sr,T∗},
where
Sr=minl∈Pjr{Tl}=minl∈P∗jr{Tl}.
As previously, denote by the lower case letters the realizations of the corresponding random variables. Let us assume that , meaning that realization of the replaced component is larger than that for the initial component. Then, for ,
tPjr=min{sr,t}≤min{sr,t∗}=tP∗jr,
which implies that
max{tPj1,tPj2,…,tPjk}≤max{tP∗j1,tP∗j2,…,tP∗jk}. (4.1)
Let and be the realizations of and , respectively. Then,
τ(t1,t2,…,tn,t) = max{tP1,tP2,…,tPm} = ≤ max{max1≤r≤k{tP∗jr},maxz∈{1,2,…,m}∖{j1,j2,…,jk}{tPz}} = τ(t1,t2,…,tn,t∗),
where the inequality follows from (4.1). Thus, in realizations,
t≤t∗⟹τ(t1,t2,…,tn,t)≤τ(t1,t2,…,tn,t∗),
which is similar to previous section results, and hence
P(T
###### Remark 4.1
The proof of the above lemma can intuitively be explained as follows. Denote by realization of the state function ( or ) of at time . Similarly, let denote the realization of the state function of at time , for . It is clear that for and , whereas for , we have as the system is coherent and the state function of the th component has been improved in this interval. Thus, the lifetime of a system with in each realization is larger than that with if .
Let us specify now the ordering in (2.1) as
T1≤spT2≤sp⋯≤spTn. (4.2)
Now we can formulate the following theorem.
###### Theorem 4.1
Let the stochastic precedence order (4.2) holds for the -out-of- warm standby system described above. Then the corresponding of components achieves the maximum lifetime of a system in the sense of the stochastic precedence order.
Proof: Assume that we had improved the lifetime , , in the sense of the stochastic precedence order, i.e., . We start with the first component (with the smallest lifetime) in an active mode. Assume that other components are in an arbitrary, non-ordered sequence. Consider the th and the th components, and combine them in one aggregated component. If , we do nothing, and change the sequence of these two components if otherwise. By this change, as follows from Lemma 4.1, we increase the lifetime of this pair (similar to Theorem 3.1) and therefore, the lifetime of a system. We can do it with all ‘non-properly’ components and eventually arrive at (4.2), which maximizes the lifetime of the system in the sense of the stochastic precedence order.
The rationale behind this operation is similar to the above case of two components. The difference to be considered, however, is that the initial activation time in the case of only two components was and now it is some arbitrary . Let and the th component start first if activated. We emphasize once more the fact that are realizations of , , which are the lifetimes in the activated mode. Event means that both components had failed before the prospective activation and the corresponding comparison is irrelevant. Another possibility is that the th component had failed before the activation whereas the th does not. In this case, the lifetime of the pair (after activation) is, in accordance with the cumulative exposure model, (). The last possibility is when both of them did not fail before activation. In this case, the lifetime of a pair after activation is (compare with (3.4) that corresponds to the case ):
ti−wta+(ti+1−w(ti−wta)), (4.3)
where is the virtual age of the th component just after activation and, therefore, its remaining lifetime in this realization is (). As the th component was operating during the time since activation till the failure of the th component in the warm standby mode, this time should be recalculated to end up with the remaining lifetime of the th component after its activation as .
Let now the th component starts first. Reasoning similar to the above results in a smaller (in realizations) lifetime of a pair as compared with the initial sequence. For instance, obviously, the term , which corresponds to the case when the th component fails before the activation whereas the th does not, stays the same. We have now also two specific cases for the case when the components did not fail (in the warm standby mode) before (see cases I and II of the previous section). But we can just adjust properly our previous reasoning considering the remaining lifetimes after activation, which are and (), then the reasoning and comparison with (4.3) will be exactly the same as comparison of (3.5) with (3.4).
###### Remark 4.2
Generalization to the model (1.2)–(1.3) can be performed using reasoning similar to that in Remark 3.1.
## 5 Concluding remarks
In this paper, we show that the optimal operational sequence for the -out-of- system with warm standby is when the components are activated in accordance with the increasing sequence of their lifetimes. It turns out from our reasoning that the natural stochastic ordering for this problem is the stochastic precedence order.
When the warm standby component is activated, its age should be ‘re-calculated’. This recalculation is performed using the virtual age concept and the cumulative exposure model.
The proofs are performed for the linear cumulative exposure model. Generalization to the time-dependent case is also discussed.
Previously, only specific cases of the problem were considered in the literature. In Cha et al. [4] and Zhai et al. [19] the case of two components was considered and the sequence was justified (in terms of expected lifetimes of a system) for the case when the components were ordered in the sense of the hazard rate ordering. Moreover, the corresponding sequence was justified in Zhai et al. [19] for -out-of- system but only for the exponentially distributed lifetimes of components.
Our result is general, and what is crucial, it employs the natural for this setting stochastic precedence ordering both for components and the system lifetimes as well.
Acknowledgments
The first author was supported by the NRF (National Research Foundation of South Africa) grant No 103613. The work of the second author was supported by the Claude Leon Foundation, South Africa. The work of the third author was supported by Priority Research Centers Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2009-0093827). The work of the third author was also supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2016R1A2B2014211).
## References
• [1] Barlow, R.E. and Proschan, F. (1975). Statistical Theory of Reliability and Life Testing. Holt, Renerhart Winston, New York.
• [2] Bagdonavicius, V. and Nikulin. M. (2002). Accelerated Life Models. Modelling and Statistical Analysis. Chapman Hall.
• [3] Boland, P.J., Singh, H., and Cukic, B. (2004). The stochastic precedence ordering with applications in sampling and testing.
Journal of Applied Probability
, 41, 73–82.
• [4] Cha, J.H., Mi, J., and Yun, W.Y. (2008). Modeling of a general standby system and evaluation of its performance. Applied Stochastic Models in Business and Industry, 24, 159–169.
• [5] Finkelstein, M. (2007). On statistical and information-based virtual age of degrading systems. Reliability Engineering and System Safety, 92, 676–681.
• [6] Finkelstein, M. (2008). Failure Rate Modelling for Reliability and Risk. Springer, London.
• [7] Finkelstein, M. (2013). On some comparisons of lifetimes for reliability analysis. Reliability Engineering and System Safety, 119, 300–304.
• [8] Finkelstein, M. and Cha, J.H. (2013). Stochastic Modelling for Reliability: Shocks, Burn-in, and Heterogeneous Populations. Springer, London.
• [9] Hazra, N.K. and Nanda, A.K. (2017). General standby allocation in series and parallel systems. Communications in Statistics-Theory and Methods, 46, 9842–9858.
• [10] Levitin, G., Xing, L., and Dai, Y (2013). Optimal sequencing of warm standby components. Computers Industrial Engineering, 65, 570–576 .
• [11] Levitin, G., Xing, L., and Dai, Y. (2014). Cold versus hot standby mission operation cost minimization for -out-of- systems. European Journal of Operational Research, 234, 155–162.
• [12] Montes, I. and Montes, S. (2016). Stochastic dominance and statistical preference for random variables couple by an Archimedean copula or by the Frèchet-Hoeffding upper bound.
Journal of Multivariate Analysis
, 143, 275–298.
• [13] Nelson, W. (1990).
Accelerated Testing: Statistical Models, Test Plans, and Data Analysis
. Wiley Series in Probability and Statistics, Wiley Sons, New York.
• [14] Ruiz-Castro, J. E. and Fernández-Villodre, G. (2012). A complex discrete warm standby system with loss of units. European Journal of Operational Research, 218, 456–469.
• [15] Santis, E.D., Fantozzi, F., and Spizzichino, F. (2015). Relations between stochastic orderings and generalized stochastic precedence. Probability in the Engineering and Informational Sciences, 29(3), 329-343.
• [16] Shaked, M. and Shanthikumar, J. (2007). Stochastic Orders. Springer, New York.
• [17] Singpurwalla, N. D. (2006). The hazard potential: introduction and overview. Journal of American Statistical Association, 101, 1705–1717.
• [18] Yun, W.Y. and Cha, J.H. (2010). Optimal design of a general warm standby system. Reliability Engineering and System Safety, 95, 880–886.
• [19] Zhai, Q., Yang, J., Peng, R., and Zhao, Y. (2015). A study of optimal component order in a general -out-of- warm standby system. IEEE Transactions on Reliability, 64, 349–358.
• [20] Zhang, T., Xie, M., and Horigome, M. (2006). Availability and reliability of -out-of-(): warm standby systems. Reliability Engineering and System Safety, 91, 381-–387.
|
2021-09-18 19:38:33
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8398308157920837, "perplexity": 943.350587193227}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056572.96/warc/CC-MAIN-20210918184640-20210918214640-00212.warc.gz"}
|
https://iacr.org/cryptodb/data/paper.php?pubkey=31905
|
## CryptoDB
### Paper: Zero-Knowledge IOPs with Linear-Time Prover and Polylogarithmic-Time Verifier
Authors: Jonathan Bootle , IBM Research – Zurich Alessandro Chiesa , EPFL Siqi Liu , UC Berkeley Search ePrint Search Google Slides EUROCRYPT 2022 Interactive oracle proofs (IOPs) are a multi-round generalization of probabilistically checkable proofs that play a fundamental role in the construction of efficient cryptographic proofs. We present an IOP that simultaneously achieves the properties of zero knowledge, linear-time proving, and polylogarithmic-time verification. We construct a zero-knowledge IOP where, for the satisfiability of an $N$-gate arithmetic circuit over any field of size $\Omega(N)$, the prover uses $O(N)$ field operations and the verifier uses $\polylog(N)$ field operations (with proof length $O(N)$ and query complexity $\polylog(N)$). Polylogarithmic verification is achieved in the holographic setting for every circuit (the verifier has oracle access to a linear-time-computable encoding of the circuit whose satisfiability is being proved). Our result implies progress on a basic goal in the area of efficient zero knowledge. Via a known transformation, we obtain a zero knowledge argument system where the prover runs in linear time and the verifier runs in polylogarithmic time; the construction is plausibly post-quantum and only makes a black-box use of lightweight cryptography (collision-resistant hash functions).
##### BibTeX
@inproceedings{eurocrypt-2022-31905,
title={Zero-Knowledge IOPs with Linear-Time Prover and Polylogarithmic-Time Verifier},
publisher={Springer-Verlag},
author={Jonathan Bootle and Alessandro Chiesa and Siqi Liu},
year=2022
}
|
2022-06-27 09:08:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7531022429466248, "perplexity": 4463.609339071826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103329963.19/warc/CC-MAIN-20220627073417-20220627103417-00389.warc.gz"}
|
https://diffgeom.subwiki.org/w/index.php?title=Lie_algebra_of_first-order_differential_operators&oldid=822
|
# Lie algebra of first-order differential operators
(diff) ← Older revision | Latest revision (diff) | Newer revision → (diff)
This article defines a basic construct that makes sense on any differential manifold
View a complete list of basic constructs on differential manifolds
This article gives a global construction for a differential manifold. There exists a sheaf analog of it, that associates a similar construct to every open subset. This sheaf analog is termed: sheaf of first-order differential operators
## Definition
Let $M$ be a differential manifold. Let $C^\infty(M)$ be the algebra of infinitely differentiable functions on $M$. The Lie algebra of first-order differential operators is defined as follows:
• As a set, it is the set of all maps from $C^\infty(M)$ to $C^\infty(M)$, that can be expressed as the sum of a derivation, and pointwise multiplication by a function. The derivation can be thought of as the pure first-order part, and the scalar multiplication can be thought of as the zeroth
• The $\R$-vector space structure is by pointwise addition and scalar multiplication.
• There is a natural $C^\infty(M)$-bimodule structure, by composition. In other words, $f \in C^\infty(M)$ acts on a first-order differential operator $d$ by:
$d \mapsto m(f) \circ d$
where $m(f)$ denotes multiplication by $f$. Similarly, the right action is given by:
$d \mapsto d \circ m(f)$
|
2020-04-04 06:22:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 13, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9700710773468018, "perplexity": 297.55894509329715}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370520039.50/warc/CC-MAIN-20200404042338-20200404072338-00212.warc.gz"}
|
https://aviation.stackexchange.com/questions/87206/could-the-mars-helicopter-ingenuity-have-been-test-flown-outdoors-at-jpl-just-fo?noredirect=1
|
# Could the Mars helicopter Ingenuity have been test-flown outdoors at JPL just for the heck of it? (not that they would)
The Ingenuity helicopter that has flown on Mars is designed for roughly 0.01 bar pressure and a colder atmosphere made of mostly CO2. See links (including video) in this answer to What JPL laboratory is this exactly, and what are the functions of these amazing-looking control panels? in Space SE and sources linked within. For more about Mars' atmosphere see this answer to What could Perseverance listening to Ingenuity reveal? there as well, and sources within.
The counter-propagating rotors (driven by separate motors but synchronized) spin at about 2500 RMP. The propellors are huge by Earth standards compared to the tiny weight that they carry. But we also have to remember that it will weigh more on Earth. See answers to Does Ingenuity rotate via differential rotor speeds or differential angles of attack? and sources within for more on that.
Question: Hypothetically, could the Mars helicopter Ingenuity have been test-flown outdoors at JPL just for the heck of it? Not that they would ever do such a thing (it will have been kept very clean and very very safe at all times) but from a technical stand point would it have flown? What rotor speeds and torques would have been necessary, and would these have been possible with the light-weight motors optimized for Mars flight?
No, Ingenuity was too heavy and underpowered to fly while on Earth.
From 27:30 to 29:00 in the video of a 2021 May 13 press event, one of its designers, Matt Keennon, manually pilots what he called a "terrestrial version of Ingenuity," officially named Earth Copter, nicknamed Teri or Terry. (Reporters have had to guess at the spelling. The transcript of a 60 minutes segment has Terry, but that neglects I for Ingenuity.)
At 24:45 Matt explains that its only significant hardware differences compared to Ingenuity are that it is lighter (he doesn't say how much), and that its motors are taller, with twice the torque. Aerovironment hasn't published any hard numbers about Terry, or possibly even any text at all since even its name remains disputed.
But it's clear that they designed Terry to closely mimic how Ingenuity would fly on Mars (see the section "Iterations" in Inside Ingenuity With AeroVironment), so they wouldn't have lightened it and used much more powerful motors for capricious reasons. It had to actually fly. A direct quote from Matt at 24:45:
We also reduced the weight of Terry compared to Ingenuity because it's flying in Earth's gravity.
Co-designer Ben Pipenberg also says:
Terry is designed to fly here on Earth, so the motors were redesigned and are more powerful and have a higher torque to handle the denser atmosphere and the higher gravity.
• This is very helpful, thanks! Since the density is of order 100 x difference and these changes are of order 3-5 we can guess that the scaling will scale as the cube root of density. Hopefully we'll get the math at some point and can then find out. Thanks!
– uhoh
May 19 '21 at 20:24
• AeroVironment has the math, of course, but asking them to publish prematurely would be as effective as asking SpaceX to! Matt does respond courteously when asked for very specific things that obviously only he knows, but I'm careful to not overstay my welcome. May 19 '21 at 21:40
...test flown outdoors at JPL just for the heck of it
Perhaps if they wanted to lose their jobs as engineers, but if one were to try it:
Ingenuity is a constant speed/variable pitch rotor design. Earth gravity is 2.64x greater than Martian. Rotor speed is 2500 rpm. Rotor diameter is 4 feet. Rotor tip velocity is:
2500 rpm × 4 feet × 3.14 × 60 min/hr × 1 mile/5280 feet = 340 mph
Why so slow?
Speed of sound in CO2 is slower (78%) than air. In helium it travels almost 3x faster! Speed is sound is also slower at colder temperatures.
These factors affect the Martian design. The rotors will work fine on Earth but ...
On Earth they will have to lift 2.64 times more weight.
Lift = Coefficient × Area × Velocity$$^2$$ x Density
If air density is 100 times greater rotor pitch control would be much more challenging. Assuming you don't burn out the motors trying to get 2500 rpm, the helicopter would probably crash from excessive control inputs.
Forging ahead, we could reduce the RPM spec for Earth flight:
Square Root 2.64 x 2500 rpm/100 density factor = 40 rpm!
or employ smaller rotors, making it much more controllable. Adding a bit of buoyancy with a helium dirigible attachment would also be possible. Then, it may be flyable.
A final consideration would be the effect of Reynolds number on the performance of the rotor blades. Here the Martian atmosphere and higher blade velocity would offer an advantage.
Reynolds number = chord × Velocity/Kinematic Viscosity
In summary, a re-design would be in order for the Earth Ingenuity, but, conceptually, the contra-rotating, variable pitch, constant speed approach (with or without gasbag) would be of interest.
• Nice answer, thanks! It's operated at at least a roughly constant speed and likely optimized for it, but are we sure they can't just instruct a different speed? I don't think we should assume the speed is permanently fixed. Alas I still can't find a description of the exact motor beyond "...two brushless direct-drive propulsion motors driving the two rotors..." but I think we should allow for the probability that the speed can be changed
– uhoh
May 20 '21 at 22:27
• @uhoh judging from airfoil polar data, those rotors would need to be spun fast enough to get most of them above a Reynolds of 10$^6$, where L/D ratio is much better. But, from a marketing point of view, how many people will run out and get an Earth modified Ingenuity drone from the hobby store. May 21 '21 at 1:33
Given the higher gravity on Earth, the effective mass of the craft on Earth is higher than on Mars. The drag is also far higher because of the higher atmospheric density.
Both are factors causing more lift and power to be required, in the form of larger rotors and stronger engines.
Luckily the denser air also means more lift per unit of area of the rotors, reducing the required rotor size, somewhat offsetting the requirement for larger rotors.
I don't have the math or the data to make the required calculations, but it's unlikely that a craft designed to be as small and light weight as possible to work in a Martian environment and be able to be launched on the available cheapest launch vehicle available (cost...) would be well suited for use on earth.
It's possible but unlikely.
• I'm not exactly taking an opinion pole here, "it's unlikely" without any support based on anything besides "optimized for A means can't work for B" is a comment, but not really a Stack Exchange answer. Someone could write a similar sounding answer with the exact opposite conclusion and there'd be no way for readers to tell which was right.
– uhoh
May 19 '21 at 12:22
• The effective mass will be identical on Mars or anywhere. The weight will be different.
– Frog
May 19 '21 at 19:49
|
2022-01-24 04:17:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4836871922016144, "perplexity": 2250.492126668908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304471.99/warc/CC-MAIN-20220124023407-20220124053407-00328.warc.gz"}
|
https://adamnash.blog/2007/05/
|
This is a just a quick note to officially state that my first day at LinkedIn is tomorrow, May 29th. I’ll be joining as Sr. Director of Product, working for the founder, Reid Hoffman. I’m incredibly excited to join such a great team and to work on such a great product.
As with my previous role at eBay, I don’t plan on blogging about work here on Psychohistory – this is my personal blog. However, I’ve always been open and honest about my background and my work, so this announcement is fair game.
If you aren’t familiar with the company, LinkedIn is a great new site that is based on the premise that the single most important asset of your professional career is your relationships with the people who you know and trust, and who know and trust you. The site offers a suite of powerful tools that make it exceptionally easy to communicate & leverage your professional network in entirely new ways. You can read more about the site and what it offers here.
On a personal note, this means my 15-minute commute to San Jose has just become a 10-minute commute to Mountain View. Such is life in Silicon Valley.
# Garden 2007: Sweet 100, JetSetter, Banana & Green Grape Tomato Plants
I’ve been a bit surprised at the ongoing popularity of the one or two posts from last year on my tomato harvest, particularly the pictures of my Mr. Stripey tomato.
As a result, I thought I’d post an update for 2007 on my garden. I love working in the garden – my only regret is that I only have 3 garden boxes to work with. Of course, that’s about 10x the space I had a few years ago when I was growing plants on the balcony of my apartment.
This year, I’ve lined up what I think is an excellent crop:
Box 1: Peppers
I love medium-to-hot peppers, so this year I’m growing five varieties:
• Banana
• Serano
• Jalapeño
• Thai
• Habañero
Here’s a shot of the 9 plants (2 of each except for the 1 Habañero)
Box 2: Cucumbers
I also grow cucumbers, using a trellis so I can get a large crop with very little space. This year, I’m growing four varieties of cucumber. Two years ago, my Japanese cucumbers reached almost 2 feet (24 inches) in length, so I’m trying to build on that success.
Here are the four varieties I planted this year:
• Lemon
• Japanese
• Table (regular)
• Armenian
I’m excited about the last one, because I’ve never grown Armenian cucumbers before. They apparently are light green skinned, and can grow to 3 feet long! Should be fun.
The plants in the bottom corner are herbs that I use to fill out the box – basil, oregano, thyme.
Box 3: Tomatoes
And finally, my prize plants, the tomatoes. This year, I’m trying to stretch a 3×6 box to handle 4 plants. I’m also trying out a much more robust 7mm steel support for the plants after last years disaster with my huge plants bending the smaller steel frames mid-August. I got them at Gardener Supply online.
The four varieties of tomato this year feature two hybrids for volume & eating, and two heirloom varieties for flavor & fun.
• Sweet 100 (cherry)
• JetSetter (large, red)
• Banana Fingers (4″, orange/yellow)
• Green Grape (small, green/stripe)
I’ve had them in the ground for four weeks, and they are really doing well. See below.
Should be a great year for the garden. Everything is really taking off. I feel like Brie from Desperate Housewives, but even my Hydrangeas are doing fantastically well.
# Heroes Season 1 Finale: I Thought Peter Could Fly… (Spoilers)
Hopefully the fact that Peter Petrelli can learn the powers of other heroes is sufficiently old that I won’t get in trouble with the RSS Spoiler police. I’m still smarting from the flames about my Battlestar Galactica post about Starbuck…
Heroes finished off Season 1 this week with the big finale. “How to Stop an Exploding Man”
The question I had on my mind at the end was,
“Why does Nathan Petrelli have to fly Peter up to the sky so he can safely explode? Peter can fly also – he got the ability from Nathan.”
Well, I found this snippet on SyFy portal where the series creator comments on the issue:
“You know, theoretically, you’re not supposed to be thinking about that,” series creator Tim Kring told TV Guide’s Matt Webb Mitovich and Michael Logan. However, Kring did prove correct many theories following Monday’s airing that Peter was so distracted by the fact he was about to explode that he didn’t have the energy or the attention span to use an of his other abilities.
Of course, that’s trying to find a way to explain an action from a story standpoint. But from an entertainment factor, Kring admitted that he was much more interested in having Nathan — who had become somewhat of a bad guy on the show in recent weeks — to save the day.
“Yes, I will admit that there’s a very tiny window of logic there, but what can I say?” Kring said. “It requires the proverbial suspension of disbelief.”
Well, it’s nice to see I’m not the only one who got caught in that “tiny window of logic.” Otherwise, that was really my only problem with the finale of Season 1.
Well, that and the cheesiness of Sylar apparently “not really dying”. Lame. Sylar & Peter are both too powerful, frankly, to hang around too long. A lot of comics have the concept of borrowing powers temporarily, but when you get to keep them forever, you eventually just become a god. Being invincible quickly becomes boring.
# How to Mount NTFS Drives on Mac OS X with Read/Write Access
Elliot, this post is for you.
A couple of weeks ago, I got really irritated with the whole Mac/Windows thing. I had purchased a USB hard drive with the intention of using it as a backup drive for both Mac & Windows machines.
Unfortunately, I discovered that Mac OS X cannot write to NTFS volumes – it can only read from them. I then discovered that Windows XP has lost the ability to read or write to HFS+ drives (Windows 2000 had it).
Well, I am here to say that there is a pretty cool solution for mounting NTFS volumes on Mac OS X. Interestingly, it comes from Google.
The MacFuse project on the Google Code site is a BSD-license open-source project that lets you use any FUSE-compatible file system on Mac OS X. FUSE (File-system in USErspace) originated on Linux, but apparently the port to Mac OS X has been live for a while.
NTFS-3G is the open source project that implements NTFS support for FUSE.
This lovely site has packaged together DMG installer versions of each for easy installation on Mac OS X. (Please note: only do this if you are running Mac OS 10.4 or later, and are somewhat technically savvy)
Amazing. It just works. In fact, I’ve only hit one glitch. If you fail to put away your NTFS volume properly on Windows (using the Safely Remove Hardware command), NTFS can get itself all locked up, and unable to mount properly.
Now, let me give due credit to this blog post for helping me find this solution.
Also, it’s worth noting that the write performance isn’t speedy right now. The teams contributing seem to know this, and are working the issues. As a result, I wouldn’t use this solution to make NTFS your default volume format for files. However, if you need simple read/write to the occassional NTFS volume, this looks like a good answer.
Why Apple can’t ship decent NTFS support for Mac OS X is beyond me. And why Microsoft can’t support HFS+ is also beyond me. Given that there are tens of millions of machines out there who create and use each of these volume formats, I would say that it clears the bar of “important enough” to support.
Update (6/3/2007): A brief warning. Apple just released a security update that is currently not fully compatible with the ntfs-3g files. My PowerMac was unable to read UDF (video DVDs) until I removed these files. I’m sure a fix will be out soon, but be careful. This thread on Apple Discussions captures the solution.
# Judge Judy Episode on eBay Trust & Safety
Sorry, I’ve been sitting on this one too long, and I just have to post it.
This is the Judge Judy episode where the eBay scammer gets her due… to the tune of a \$5000 judgement. More importantly, she gets taken to task for even pretending that this was OK or justified.
Premise: The defendant sold two expensive cell phones to the plaintiff, but then only shipped them pictures of the cell phones, claiming the listing was only for the photo, not the cell phone. When the buyer complained, she left them negative feedback claiming they were the scam artists! Sleazy.
Look, I know Judge Judy is no Rob Chesnut, but then again, Rob Chesnut is no Judge Judy. 🙂
I admit to having a soft spot for these type of shows… call it a weakness. But I admit a strong desire to see the few bad actors out there who make the world a worse place get some public humiliation.
Of course, if these buyers had purchased the cell phones on eBay Express, they would have been covered by 100% buyer protection, and they would have gotten their money back quickly. Still, that wouldn’t have been as much fun as this TV clip.
# Diamond is NOT the Hardest Material (Who Knew?)
News flash. Two years late. Diamond is not the hardest known material. There are at least three known substances that are harder: Rhenium Diboride, Ultrahard Fullerite and Aggregated Diamond Nanorods.
I’m a little worried. I think this is what happens when you grow older. Technology has just outdated one of those simple scientific truths I learned about in school. What’s worse is that it took me almost two years to find out about it.
But before I get into a self-pitying “science is for the young” groove, let me tell you what I’ve learned so far.
First, a big thank you to Business Week. Yes, that’s right, Business Week. Not known for it’s scientific coverage, but the May 7, 2007 issue had a snippet on page 79 about the successful effort to create a substitute for industrial diamonds for slicing through steel. Apparently, the diamond reacts with the steel to form by-products that dull the blade. Scientists at UCLA have discovered a mixture of Boron and Rhenium that is hard enough to scratch diamond, and doesn’t react with steel. Press release dates to April 19, 2007, so it’s a pretty recent discovery.
In all fairness, Rhenium DiBoride is only harder than diamond in certain directions, due to its layered structure. But reading about it sent me to the web – what other substances have been discovered that are harder than diamond? Somehow, learning that diamond wasn’t the hardest material bar none made me realize that I last took Material Science coursework at Stanford in 1992.
Fortunately, in the 15 years since that coursework, a lot has happened to help me get up to speed in a matter of minutes. And I am glad I did, because new materials are just too cool.
First, let’s start with the simpler one: Ultrahard Fullerite. Fullerene is a form of carbon based on the $C_{60}$ structure of buckyball-fame. From Wikipedia:
Ultrahard fullerite ($C_{60}$) is a form of carbon which has been found to be harder than diamond, and which can be used to create even harder materials, such as aggregated diamond nanorods.
Specifically, it is a unique version of fullerene (which is a class of spherical, ellipsoidal, or tubular carbon molecules) with three-dimensional polymer bonds. This should not be confused with P-SWNT fullerite, which is also a polymerized version of fullerene. It has been shown[1][2] that when testing diamond hardness with a scanning force microscope of specific construction, ultrahard fullerite can scratch diamond.
Very cool, but now, of course I’m thinking, “Tell me more about these aggregated diamond nanorods!” (I’m sure you were thinking the same thing.)
That, my friends, is a thing of beauty. According to this article at the European Synchotron Radiation Facility, Aggregated Diamond Nanorods are the least-compressible known material. To be specific, the density of ADNR is 0.2% to 0.4% greater than Diamond. ADNR is also 11% less compressible than diamond, and has an isothermal bulk modulus of 491 GPa (gigapascals) compared to just 442 for diamond.
Of course, I’m only reading about this now. PhysicsWeb.org had the coverage on this discovery in Germany back on August 26, 2005. (it’s actually a very clear & well written piece.) You can bet that the PhysicsWeb RSS feed is going into my reader tonight…
Wikipedia has a very nice summary here as well.
Oh well, better late than never. My guess is that one or two people out there also missed this, which is why I’m posting it tonight.
Now, I think we just need to find a way to start a luxury jewelry business that specializes in ADNR-based engagement rings. Why settle for diamond, which can get scratched so easily? We could make a fortune on this one on the high end…
Update (1/4/2010): See the comment from January 2010 below, but it seems Rhenium DiBoride is no longer assessed as harder than diamond.
# John Adams Dollar Coins: First Significant Mint Error Found (Double Edge Lettering)
Well, I guess it was inevitable.
With all the press coverage and excitement around the George Washington Presidential Dollar Coin errors, you just knew that people would be all over the next dollar coin looking for problems. However, even I am a little shocked at exactly how voracious the coin collecting community has been tearing apart this new issue looking for problems.
The Coin Collecting News has some great information on what looks like the most significant error to be found on the John Adams dollar coins to date: a coin with double edge lettering.
As a reference, this article on About.com has over a dozen possible errors documented already! I’m going to reproduce the list here, just to give you an idea of the incredible detail available already:
• Die clash – Traces of reverse show on obverse
• Over-abraded die – lost detail (probably to repair die clashes)
• Struck through grease filled die – lost numbers & words in lower legend
• Struck through grease filled die – random spots & smears
• Small die chips – “Warts” and “Infected President” types
• Die clash – Traces of obverse show on reverse
• Over-abraded die (New Type!) – “Blinded Liberty” shows Liberty’s right eye polished flat
• Struck through grease filled die – random spots & smears
• Die crack in torch (New Type!) – “Broken Torch” type has moderate die crack
• Minor die break – “Filled S” type (One or the other S in STATES, both reported)
• Minor die break – “Extra Curl” has small die break in Liberty’s hair between curls
Adams Dollar Edge Errors & Whole Coin Errors
• Unburnished planchet – Planchet missed polishing & brightening step
• Double edge lettering – Coin went through edge lettering machine twice
• Shifted edge lettering – Edge lettering doesn’t line up properly with other coins
• Embossed letters – Improperly called “dropped letters” – appearing on edge and surfaces
An unbelievable list for a coin that is four days old!
Hopefully I’ll be getting my first mint boxes of coin rolls soon. I’ll be selling them again on eBay, along with some unopened rolls of George Washington dollar coins.
Update (5/24/2007): For a limited time only, I am now carrying unopened, original John Adams Presidential Dollar coin rolls in my eBay Store. Click here to buy them on eBay Express. I also have a few more original bank rolls of the George Washington dollar coins. Click here to buy them on eBay Express.
If you are interested in the other rolls I am carrying, click here for all the coins I am currently selling on eBay Express.
# Do You Know the Rule of 72? Project Future Returns in Your Head.
I haven’t posted a lot about personal finance lately, and I’ve been meeting to get back on the horse soon. In the meantime, this is a fun one for those of you who may not have heard it before.
When investing for a long term goal, like college or retirement, it’s often very useful to be able to quickly determine how long it will take to double your money.
Enter the Rule of 72.
Now, the Rule of 72 is shorthand, and not completely accurate. But it’s accurate enough to be immensely useful.
The Rule of 72 says that if you divide 72 by the rate of return on an investment, you’ll get the number of years required for that investment to double.
So, if you find an investment that returns 8%, 72 / 8 = 9, so the investment will take 9 years to double.
I learned this rule about 15 years ago from my grandmother, and I’ve been using it for quick shorthand ever since.
For example, let’s say I want to know how much a \$50K 401K might be worth over time. Assuming an 8% rate of return, I can quickly determine that it will double in 9 years, quadruple in 18 years, octuple in 27 years, and be worth \$800K in 36 years.
This also works, unfortunately, for loans, but in reverse. If you take a student loan out at 7.2% for 10 years, well, you can expect to end up paying double what you borrowed in total.
I’ve also used this rule in business environments, especially when you are looking at compounding growth rates for business metrics like sales, revenue and costs.
Once again, the rule is really a shorthand, and not completely accurate. Obviously, a 72% return doesn’t double in 1 year. And a 1% return doesn’t double in 72 years. However, it’s surprisingly accurate in the middle ranges, which apply to most situations.
This article in Get Rich Slowly has some variants that are fun. But for me, the basic Rule of 72 lets me quickly an easily assess what a return will really mean to an investment, in those rare moments when I’m away from Excel.
So enjoy this tidbit, and I’ll get to some meatier topics this week.
# Blizzard Demos Starcraft 2 in Korea
When you play video games, you always remember the ones that absorbed huge blocks of time from your life. Starcraft was one of those games for me.
Well, it looks like the next great game from the company that brought us World of Warcraft will not be another Warcraft title or massively multiplayer game. Instead, they have taken the wraps off of a Starcraft sequel. The Starcraft 2 website is live!
For those of you who don’t play these types of games, Starcraft is a strategy game where you control armies of one of three species: Human, Zerg, or Protoss, and it’s a basic “mine, build & fight” kind of game. This one, however, was so well executed that it became the basis for professional competition in Korea, and a world-wide phenomenon.
The Blizzard games have inspired hundreds of copycat titles, but it’s an extremely hard problem to balance the design of the different races, units, resources, economics, technology and gameplay. In fact, it’s almost an impossible problem. Blizzard, however, does it better than anyone.
Here is a good write-up on the debut and press conference Q&A on GameSpot. Here is another one on Inside Mac Games.
Better yet… here is the CGI announcement trailer for the game, as debuted in Korea, on YouTube:
Here is a great video of the actual gameplay… notice again. Three key races (Terran, Zerg, Protoss) and a much better model of air/ground interaction.
Going back over 15 years, Blizzard has repeatedly demonstrated their ability to develop games that are outstanding in design and execution. What Pixar is to computer-animated film, Blizzard is to modern video games. As another hallmark of an exemplary shop, they deliver their titles for launch on both the Mac & PC simultaneously. You don’t see them complaining about time to market or cost. They know that a game of sufficient quality will pay for itself many times over, and with Blizzard’s track record, they always do.
I’m very excited to play this game, although I realize that in the modern Battle.net era, I will be hopelessly outclassed by other players before I even get the DVD home and installed 🙂
Update (5/22/2007): Nate has a post on this as well, and he found more great YouTube videos of the gameplay presentation. I felt like I was getting a 20 minute tutorial on Protoss unit deployment. Colossus!
# LinkedIn: In the Black Party
In case you were wondering where I was last night, I had the chance to go up to San Francisco for the LinkedIn “In the Black” party, to help celebrate LinkedIn’s first year of profitability. The Web Strategist blog has some great coverage and photos.
What are the chances of a web startup making it? 5%? 2%? not very good for most? We should take a lesson from business networking tool LinkedIn.
LinkedIn celebrates over 14 months of profitability, which is an amazing feat for the thousands of web companies in Silicon Valley that sprout.
Check out the full post here.
For me, it was a chance to actually wear a suit and take Carolyn up the city – not something that happens every day with two little ones at home. There was a very warm feeling at the event – something about the LinkedIn culture is incredibly open, friendly and very human.
As a bonus we discovered several friends also at the event. The funniest introduction at the party was pre-school related – apparently being the father of Jacob Nash has its privileges. They were the parents of one of Jacob’s pre-school classmates, and I believe the direct quote was, “Jacob Nash is a really big deal in our household.” It’s possible that this is Jacob’s world, and I’m just living in it. 🙂
The food & wine was excellent, but my personal favorite of the evening was the chocolate. A wonderful table of extremely rich chocolates with exotic infusions like tea and grapefruit. Even a small exit gift of gorgeous truffles.
It was wonderful to be included, and a great night out. I can’t wait to get started next week.
Congratulations to the entire LinkedIn team.
Update (5/21/2007): Two great posts on the event on the LinkedIn blog, and on Mario Sundar’s blog. Check them out.
# Nintendo Video Game History: For Sale on eBay This Week!
Yes, I am shamelessly plugging my own listings. Deal with it.
I actually have 13 new listings up this week, and I thought it would be fun to highlight them because most of them are ghosts of video game consoles past like the mythic dungeons in world of warcraft! If you don’t know what I’m talking about then you can’t miss visiting this http://elitist-gaming.com/mythic-dungeons-boost/ and revive the old memories. Most were dug up by my father recently, who keeps discovering new things to sell in his endless supply of storage sheds.
This week, I have the following starting on Sunday at 5pm:
• Nintendo 64, with Mario 64 & Rumble Pack
• Lot of 10 Nintendo 64 Games
• Super Nintendo, with two controllers (but no AC or Video Cable)
• Original Nintendo, with Super Mario Bros.
• … and other assorted goodies
Feel free to check out my new listings. And happy bidding! 🙂
# The John Adams Presidential \$1 Dollar Coin is Available
Big news today in the coin world… today, the Presidential \$1 Dollar Coin series became, well, a series.
The US Mint is now selling rolls and bags of the new John Adams dollar coins. I wrote a piece on the new coin a few weeks ago, if you’re interested in more detail.
There are already rolls of the new dollars available on eBay. I personally am working to get a few boxes of the new coins, and will likely be selling them as well next week. The US Mint is selling the rolls for \$35.95 (25 coins), and the bags for \$319.95 (100 coins). I’ll likely sell them for a few dollars less on eBay. I’ve also come across some more George Washington dollar rolls – I’ll be selling those too. Probably to people who didn’t realize that they’d only be available for a short time.
So far, there has been no report of any errors or issues with the John Adams dollar coins, but it’s still very early. If there are legitimate errors, I’ll report them here on the blog.
The next big date in the Presidential Dollar Coin Program looks like June 19… that’s the date that the US Mint has given for announcing the different options for the 24K \$5 gold pieces based on the “First Spouse” coins.
Update (5/24/2007): For a limited time only, I am now carrying unopened, original John Adams Presidential Dollar coin rolls in my eBay Store. Click here to buy them on eBay Express. If you are interested in the other rolls I am carrying, click here for all the coins I am currently selling.
# StumbleUpon is a Real Traffic Driver
StumbleUpon is more powerful than I first thought.
In case you haven’t tried it, StumbleUpon is a fairly unique new tool to help browse the web. It’s a toolbar that you can easily download and install in your browser (IE or Firefox). With it, you can easily vote “thumbs up” or “thumbs down” on any web page you go to, kind of like Tivo. As you vote for websites, StumbleUpon compares the sites you like to the sites that other people like. Then, when you click “Stumble!”, it automatically takes you to other websites, most likely ones you haven’t heard of, but that StumbleUpon thinks you’ll like.
My first impression of StumbleUpon as a user was positive – it very quickly started taking me to Mac-related sites, many of which I hadn’t heard of. It was neat, but since I rarely have time to just randomly browse the web, I didn’t use it much.
I didn’t give StumbleUpon much more thought, despite all of the eBay/StumbleUpon acquisition rumors, until today.
For the first time, this morning I decided to actually vote for my own blog, Psychohistory, with StumbleUpon. I didn’t think much of it at the time. Why not a little self-promotion?
Tonight, I checked my blog statistics, and the number one referring site to my blog today was… StumbleUpon. 29 hits out of 380 total. That might sound like a small number (it is), but I’m just shocked that a single vote could bring traffic to my blog.
Something real is going on here… reading the WordPress forums I see that a lot of blog authors are getting thousands of hits a day from StumbleUpon. If any of that is hitting my blog, then there must be a fairly significant flow sourcing from StumbleUpon users.
In any case, if you haven’t downloaded StumbleUpon yet, it’s worth checking out. And if you run a website, it’s worth thinking about how to best get people to vote for your site in StumbleUpon. This page on the StumbleUpon site helps you integrate their tags into your pages.
Click this link below to vote for this blog… 🙂
Stumble it!
# The Time Has Come…
Friday, May 11th was my last day at eBay. I’ll be posting a bit more about my new role soon, but for now, I thought I’d just share the email that I sent out late Friday afternoon.
E veryone,
B efore I get into
A ll of the nitty-gritty details on how
Y ou can reach me, I just wanted to say a couple of things.
E bay has always been about people, and I feel
X tremely fortunate to have had four years here to learn and work with some of the best
P eople in the world. When I think back on everything I’ve done here, I
R ealize just how lucky I was to land that first product role on
E bay Verticals. Since then, I’ve had the chance to work with hundreds of passionate and
S mart people across dozens of teams and projects, and I’ve learned
S o much from every one of you. All I can say is thank you.
The time has come for me to take on a new role and a new adventure, but I have every confidence that you’re going to take eBay & eBay Express to incredible new levels in the coming months and years. You’ll find me actively cheering you on, as a shareholder and as an active community member, selling my way to that red star.
Please feel free to reach out to me and keep in touch. I truly believe that the most valuable things we create through our careers are relationships with people we respect & trust. In fact, that’s pretty much what I’m going to be focusing on. I will always have guilt-free M&Ms waiting.
My personal contact information:
Email: <…>
Skype: <…>
Cell: <…>
And of course, if we haven’t already, please set up a profile and connect with me on LinkedIn.
Take care.
eBay is a wonderful company, filled with wonderful people, working to make life better for millions of people. I started selling on eBay in 1998, and joined the company in early 2003. I will truly miss it. However, I am also extremely excited about my new company, and the opportunity that lies ahead.
However, for the next two weeks, I am unemployed. A good time to reflect and enjoy time with family & friends before digging in on a new adventure.
Update (5/14/2007): I had no idea this post would be noticed by more than family & friends, but it has. Some very kind words from Randy Smythe can be found here. It also was noticed on this thread on the eBay Stores board.
Update (5/15/2007): I am surprised at the attention, but Scot Wingo on eBay Strategies has also posted on the topic.
# Fun With a Two Year Old
Being the parent of a two year old is a lot of fun. Jacob is extremely cute, and now that he has started talking, I find all sorts of simple pleasures in what he decides to say.
For example, my son is likely one of very few two year olds to clearly articulate when he wants to play Nintendo Wii, as he will say “Wii” and then run to the drawer where the Wii-motes are kept.
Jacob also is a huge fan of “Apple TV”, and will say so and then run to the bedroom waiting for us to play one of his favorite movies. We have about half of his favorites ripped to iTunes now, and he knows it.
Besides, it turns out that “Apple TV” is the combination of two things Jacob already knew how to say. “Apple” is his favorite snack and juice, and “TV” is unfortunately also a favorite of his.
Jacob’s vocabulary isn’t huge yet, although he does suprise us with full sentences now and again. Two year olds often love to repeat the same word or phrase over and over again. As a result, I’ve made up my own little game where I make up a joke around the phrase Jacob is repeating, and then he says the punchline.
Example:
Jacob: “Apple Juice. Apple Juice. Apple Juice.”
Adam: “Jacob, what do you call Mac-users who go to synagogue?”
Jacob: “Apple Jews”
🙂
Two year olds are fun!
|
2022-11-29 13:47:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1756920963525772, "perplexity": 2630.7399072074854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710698.62/warc/CC-MAIN-20221129132340-20221129162340-00166.warc.gz"}
|
https://clok.uclan.ac.uk/1732/
|
# Constraining Coronal Heating: Employing Bayesian Analysis Techniques to Improve the Determination of Solar Atmospheric Plasma Parameters
Adamakis, Sotiris, Walsh, Robert W. ORCID: 0000-0002-1025-9863 and Morton-jones, Anthony J. (2010) Constraining Coronal Heating: Employing Bayesian Analysis Techniques to Improve the Determination of Solar Atmospheric Plasma Parameters. Solar Physics, 262 (1). p. 117. ISSN 0038-0938
Preview PDF (Author's Post Print) - Submitted Version 794kB
Official URL: http://dx.doi.org/10.1007/s11207-009-9498-3
## Abstract
One way of revealing the nature of the coronal heating mechanism is by comparing simple theoretical one dimensional hydrostatic loop models with observations at the temperature and/or density structure along these features. The most well-known method for dealing with comparisons like that is the $\chi^2$ approach. In this paper we consider the restrictions imposed by this approach and present an alternative way for making model comparisons using Bayesian statistics. In order to quantify our beliefs we use Bayes factors and information criteria such as AIC and BIC. Three simulated datasets are analyzed in order to validate the procedure and assess the effects of varying error bar size. Another two datasets (Ugarte-Urra et al., 2005; Priest et al., 2000) are re-analyzed using the method described above. In one of these two datasets (Ugarte-Urra et al., 2005), due to the error estimates in the observed temperature values, it is not posible to distinguish between the different heating mechanisms. For this we suggest that both Classical and Bayesian statistics should be applied in order to make safe assumptions about the nature of the coronal heating mechanisms.
Repository Staff Only: item control page
|
2023-03-22 05:34:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6254063844680786, "perplexity": 1700.0928492718458}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943750.71/warc/CC-MAIN-20230322051607-20230322081607-00161.warc.gz"}
|
http://tenru.seesaa.net/article/31086688.html
|
# tenrûԂ藷
<< 2020N07 >>
y
1 2 3 4
5 6 7 8 9 10 11
12 13 14 15 16 17 18
19 20 21 22 23 24 25
26 27 28 29 30 31
L
## 2007N0108
### tƂ̂
ւI
ȂyXgŐtȂ炢̂̏oR[XHׂˁI
posted by tenru at 21:26| ☀| Comment(6) | TrackBack(0) | L | |
̋Lւ̃Rg
̂HR[XH
ȏbLɂAߏ蔽邩AÂAcI
Rrƃ}gA匙
́AƂAKƂAłHׂႤc
Posted by at 2007N0108 21:47
A͂Ȃ̐HׂႤƂɂĂ͖{ꂾˁB
̂̓C^AǂȂB
ōƂꂷĂĂȂ̂ō[B
Posted by tenru at 2007N0108 23:00
j͐HׂȂˁI(
Posted by at 2007N0109 00:12
̂ĂǂȊȂłHH
HƂȂ̂ŋ܂(@@)
Posted by AJ at 2007N0110 01:35
Aj͐Hׂ˂I
AJ
[XgꂽԂŏoĂ
̓Ɠ̏L݂݂Ȃ̂ȂA
AůÂ悤ȏiȂɂȂĂ܂B
tɂ̂炵͂킩ȂB
ȂׂƂƂ̂炵Ƃ킩邩ȂłˁB
߂I
Posted by tenru at 2007N0112 06:28
Aj͐Hׂ˂I
AJ
[XgꂽԂŏoĂ
̓Ɠ̏L݂݂Ȃ̂ȂA
AůÂ悤ȏiȂɂȂĂ܂B
tɂ̂炵͂킩ȂB
Posted by tenru at 2007N0112 06:33
Rg
O:
[AhX:
z[y[WAhX:
Rg:
̋Lւ̃gbNobN
|
2020-07-07 04:20:28
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8220920562744141, "perplexity": 11660.493626336482}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655891640.22/warc/CC-MAIN-20200707013816-20200707043816-00121.warc.gz"}
|
https://forum.allaboutcircuits.com/threads/multifunction-printer-making-weird-noise.133940/
|
# Multifunction printer making weird noise
#### Tonyr1084
Joined Sep 24, 2015
4,035
Just shut down the printer. Was going to connect it to a LAN and set up my computer to print via the LAN as opposed to printing from a dedicated computer. That way my wife could print to my printer from her computer if she desired without having to copy her file to a thumb drive - - - . When I turned the printer back on the speaker started making a "motor-boat" noise. I am starting to formulate what might be the issue. Anyone have an idea what their first inclining's are? I'm thinking the power supply didn't come back. Everything else plugged into the power strip is working fine, so I don't think it's anything to do with the power. Does the same thing when I plug it directly into the wall (no power strip, no protection).
The green light (on the power strip) indicates a good ground. The other light, the one that says "Protected" is red. Is it supposed to be red? My other power strip (different kind) has green for ground and a red for protected lights. I'm assuming a red protected light means it's protected. And nothing else on the circuit has been affected.
#### recklessrog
Joined May 23, 2013
985
First off, what is the make and model no.??
#### Tonyr1084
Joined Sep 24, 2015
4,035
Didn't think that would matter, I'm asking about the peculiar noise it's making - as if it's not getting sufficient power. But if it helps it's a Samsung SCX-4725NF. And no, I'm not asking anyone to look up parts or specs or tell me which screw to turn first. Just looking for an opinion.
#### wayneh
Joined Sep 9, 2010
16,162
The printer has a speaker? Mine can beep but I don't think it's really a speaker. My point it, is the noise really coming from a speaker or could it the power supply itself? Not that the answer gets you any closer to a fix. Could there have been a shift of something mechanical, maybe blocking the printer's startup routine?
Some printers have a button combination that triggers a self test or the printing of a test page. Might be worth a go.
#### dl324
Joined Mar 30, 2015
9,595
I don't think it has anything to do with power or your power strip.
Most likely the printer isn't initializing correctly, or it's really broken.
Search the WEB for any diagnostic modes for your printer. One of my printers will enter diagnostic mode when the power switch is held for a certain number of seconds. Think it's my Lexmark Optra E+ laser printer...
#### Tonyr1084
Joined Sep 24, 2015
4,035
This is a multifunction printer. Scanner, fax, copper as well as printer. It has a speaker inside it - I know this for a fact. About six months ago I had the machine disassembled for cleaning and replacing a few rubber wheels that had gotten hard and wouldn't pick up the paper. Got that working fine. Today I wanted to network them and decided it would probably be best to power everything down before making connections. Maybe that wasn't necessary, but it couldn't hurt. Now every time I switch the power on it makes a motorboat sound - just like an old tube style radio I had when I was young. Low power there made the system sound like a motorboat engine. If you like I can record it and post it, but that would take some time.
#### KeepItSimpleStupid
Joined Mar 4, 2014
3,831
That actually sounds like a paper jam. Once you open all of the covers and look, the jam indicator usually clears.
One of the quickest fixes after having a jam is to reverse the natural "curl" of the paper. Just open the paper tray and install the paper upside down. If you look real hard in your manual, there might be a blurb about "paper curl".
#### Tonyr1084
Joined Sep 24, 2015
4,035
No, not a jam. The printer beeps when there's a jam. This is nothing like a beep. The sonic alert that beeps when there's a jam is its own sound source. This is the speaker on the side of the unit designed to function along with the fax.
When I turn it on the speaker starts making a - well - a farting sound. Nothing like a jam alert. AND the lights don't come on and the display remains blank. Doesn't say "Please wait while the printer warms up". Doesn't do anything. Just farts and farts and farts. Will do that if I let it run for hours.
Trying to upload a video. Not having much luck. File name "Video.MOV" Probably won't upload that extension, I didn't see it in the acceptable category.
Let me try something else.
#### wayneh
Joined Sep 9, 2010
16,162
Doesn't do anything. Just farts and farts and farts. Will do that if I let it run for hours.
That's a bad sign. I'd suspect a bad bridge rectifier or blown filter cap there. If it's downstream of that, it's pretty much out of my comfort zone. Maybe it failed with the inrush current when it was powered down and then back up.
#### Tonyr1084
Joined Sep 24, 2015
4,035
Thanks Wayne. Now I have an idea where to start looking. I just fixed a monitor yesterday. Had a bad cap. Electrolytic. Top was split open. Replaced it and the monitor works fine now. Wondering what might be the exact problem. Not even sure I want to tear back into this machine. It's a Laser Jet printer, so it's a good one, maybe worth the money. I just spent $30 replacing some rubber wheels, not a big deal. Now the power supply seems to have pooped out. Well, when I get back from Vegas maybe I'll be able to buy a new printer. If not - I'll start the process of tearing into this thing again and see if it's something I want to fix. New printers of the same caliber are around$150.00, so - - - .
#### recklessrog
Joined May 23, 2013
985
Does it do it if you disconnect everything except the power lead?? I'm thinking possible ground loop or such.
#### AlbertHall
Joined Jun 4, 2014
9,017
As it makes a noise the PSU must be at least partly functional. Maybe it's doing that thing that they do when they detect a fault on the output - shut down and try again, over and over again.
Love the cat toy and, of course, the puddy tat.
#### Tonyr1084
Joined Sep 24, 2015
4,035
Only connection is the power cable. Ethernet disconnected, USB cable is not connected. None of the lights come on as they usually do. Likely the power supply pooped a pooch.
#### wayneh
Joined Sep 9, 2010
16,162
If you're not in the mood to repair it, I've learned that you can sell on eBay stuff that I never dreamed had any value. If you get \$30 for it, you're that much closer to a new one. The new wireless printers are a nice convenience.
I was in Vegas just last month. If you like Mediterranean tapas or Thai, I can point you to a couple of great places.
#### Tonyr1084
Joined Sep 24, 2015
4,035
We like the Rain Forest Cafe. Hooters is always entertaining. Good sticky-fingers. Just don't stick your fingers where they don't belong.
#### Tonyr1084
Joined Sep 24, 2015
4,035
The dome of C105 looks like it's blown upwards, just not completely blown out. What's your opinion? Can this be the cause of my woe's?
#### dl324
Joined Mar 30, 2015
9,595
Can this be the cause of my woe's?
Maybe.
A computer I gave my Mother-In-Law stopped working. It had a bulging cap or two. I replaced them and it didn't help. I ended up replacing the motherboard.
#### Tonyr1084
Joined Sep 24, 2015
4,035
Thanks DL. It's a 1500µF 10V cap. I don't think I have one in my piles of stock. Maybe on some old board I might have one. It's bulging, and none of the others show any sign of distress. The board looks good so far but I don't want to start messing with it plugged in. I'll swap the cap and see what happens. If it still doesn't work I'll check the Bridge Rectifier.
|
2020-02-23 11:11:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3123247027397156, "perplexity": 2167.915511580519}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145767.72/warc/CC-MAIN-20200223093317-20200223123317-00387.warc.gz"}
|
https://www.vedantu.com/question-answer/solve-the-following-quadratic-equation-5x2-4x-7-class-10-maths-cbse-5ef6f59591bcd94c214ffd8e
|
QUESTION
# Solve the following quadratic equation $5{x^2} - 4x - 7 = 0$
Hint: First of all, make the coefficient of ${x^2}$ term equal to one by doing simple math applications like multiplication and division. Then add the square of the coefficient of $2x$ on both sides to have the complete square. So, use this concept to reach the solution of the problem.
Complete step-by-step solution -
Given quadratic equation is $5{x^2} - 4x - 7 = 0$
Dividing both sides with 5, we get
$\Rightarrow \dfrac{{5{x^2} - 4x - 7}}{5} = \dfrac{0}{5} \\ \Rightarrow \dfrac{{5{x^2}}}{5} - \dfrac{{4x}}{5} = \dfrac{7}{5} \\ \Rightarrow {x^2} - 2x\left( {\dfrac{2}{5}} \right) = \dfrac{7}{5} \\$
Adding ${\left( {\dfrac{2}{5}} \right)^2} = \dfrac{4}{{25}}$ on both sides we get
$\Rightarrow {x^2} - 2\left( {\dfrac{{2x}}{5}} \right) + \dfrac{4}{{25}} = \dfrac{7}{5} + \dfrac{4}{{25}} \\ \Rightarrow {x^2} - 2\left( {\dfrac{{2x}}{5}} \right) + {\left( {\dfrac{2}{5}} \right)^2} = \dfrac{{7\left( 5 \right) + 4}}{{25}} \\ \Rightarrow {x^2} - 2\left( {\dfrac{{2x}}{5}} \right) + {\left( {\dfrac{2}{5}} \right)^2} = \dfrac{{39}}{{25}} \\$
We know that ${\left( {a - b} \right)^2} = {a^2} - 2ab + {b^2}$
$\Rightarrow {\left( {x - \dfrac{2}{5}} \right)^2} = \dfrac{{39}}{{25}}$
Rooting on both sides, we get
$\Rightarrow x - \dfrac{2}{5} = \pm \sqrt {\dfrac{{39}}{{25}}} \\ \Rightarrow x = \dfrac{2}{5} \pm \sqrt {\dfrac{{39}}{{25}}} \\ \Rightarrow x = \dfrac{2}{5} \pm \dfrac{{\sqrt {39} }}{5} \\ \therefore x = \dfrac{{2 \pm \sqrt {39} }}{5} \\$
Thus, the solutions of the quadratic equation $5{x^2} - 4x - 7 = 0$ are $\dfrac{{2 \pm \sqrt {39} }}{5}$.
Note: We can also solve this problem by another method i.e., by using the formula that the two roots of the quadratic equation $a{x^2} + bx + c = 0$ are given by $x = \dfrac{{ - b \pm \sqrt {{b^2} - 4ac} }}{{2a}}$. The quadratic equation has only two roots.
|
2020-07-05 07:44:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8028974533081055, "perplexity": 129.45401571172155}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655887046.62/warc/CC-MAIN-20200705055259-20200705085259-00082.warc.gz"}
|
https://jeeneetqna.in/1543/relation-between-modulus-modulus-modulus-modulus-rigidity
|
# Find relation between modulus, If Y → young's modulus, K → Bulk modulus, η → modulus of rigidity
more_vert
Find relation between modulus
If
Y $\to$ young's modulus
K $\to$ Bulk modulus
$\eta$ $\to$ modulus of rigidity
(1) $K={\eta Y\over9\eta-3Y}$
(2) $K={\eta K\over9\eta-3K}$
(3) $K={\eta Y\over9\eta+3Y}$
(4) $K={\eta K\over9\eta+3K}$
more_vert
Properties of solids and liquids, Young's modulus and bulk modulus
|
2021-04-20 21:56:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8552802801132202, "perplexity": 5113.347506861331}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039491784.79/warc/CC-MAIN-20210420214346-20210421004346-00549.warc.gz"}
|
https://regularize.wordpress.com/category/math/optimization/
|
### Optimization
Taking the derivative of the loss function of a neural network can be quite cumbersome. Even taking the derivative of a single layer in a neural network often results in expressions cluttered with indices. In this post I’d like to show an index-free way to do it.
Consider the map ${\sigma(Wx+b)}$ where ${W\in{\mathbb R}^{m\times n}}$ is the weight matrix, ${b\in{\mathbb R}^{m}}$ is the bias, ${x\in{\mathbb R}^{n}}$ is the input, and ${\sigma}$ is the activation function. Usually ${\sigma}$ represents both a scalar function (i.e. mapping ${{\mathbb R}\mapsto {\mathbb R}}$) and the function mapping ${{\mathbb R}^{m}\rightarrow{\mathbb R}^{m}}$ which applies ${\sigma}$ in each coordinate. In training neural networks, we would try to optimize for best parameters ${W}$ and ${b}$. So we need to take the derivative with respect to ${W}$ and ${b}$. So we consider the map
$\displaystyle \begin{array}{rcl} G(W,b) = \sigma(Wx+b). \end{array}$
This map ${G}$ is a concatenation of the map ${(W,b)\mapsto Wx+b}$ and ${\sigma}$ and since the former map is linear in the joint variable ${(W,b)}$, the derivative of ${G}$ should be pretty simple. What makes the computation a little less straightforward is the fact the we are usually not used to view matrix-vector products ${Wx}$ as linear maps in ${W}$ but in ${x}$. So let’s rewrite the thing:
There are two particular notions which come in handy here: The Kronecker product of matrices and the vectorization of matrices. Vectorization takes some ${W\in{\mathbb R}^{m\times n}}$ given columnwise ${W = [w_{1}\ \cdots\ w_{n}]}$ and maps it by
$\displaystyle \begin{array}{rcl} \mathrm{Vec}:{\mathbb R}^{m\times n}\rightarrow{\mathbb R}^{mn},\quad \mathrm{Vec}(W) = \begin{bmatrix} w_{1}\\\vdots\\w_{n} \end{bmatrix}. \end{array}$
The Kronecker product of matrices ${A\in{\mathbb R}^{m\times n}}$ and ${B\in{\mathbb R}^{k\times l}}$ is a matrix in ${{\mathbb R}^{mk\times nl}}$
$\displaystyle \begin{array}{rcl} A\otimes B = \begin{bmatrix} a_{11}B & \cdots &a_{1n}B\\ \vdots & & \vdots\\ a_{m1}B & \cdots & a_{mn}B \end{bmatrix}. \end{array}$
We will build on the following marvelous identity: For matrices ${A}$, ${B}$, ${C}$ of compatible size we have that
$\displaystyle \begin{array}{rcl} \mathrm{Vec}(ABC) = (C^{T}\otimes A)\mathrm{Vec}(B). \end{array}$
Why is this helpful? It allows us to rewrite
$\displaystyle \begin{array}{rcl} Wx & = & \mathrm{Vec}(Wx)\\ & = & \mathrm{Vec}(I_{m}Wx)\\ & = & \underbrace{(x^{T}\otimes I_{m})}_{\in{\mathbb R}^{m\times mn}}\underbrace{\mathrm{Vec}(W)}_{\in{\mathbb R}^{mn}}. \end{array}$
So we can also rewrite
$\displaystyle \begin{array}{rcl} Wx +b & = & \mathrm{Vec}(Wx+b )\\ & = & \mathrm{Vec}(I_{m}Wx + b)\\ & = & \underbrace{ \begin{bmatrix} x^{T}\otimes I_{m} & I_{m} \end{bmatrix} }_{\in{\mathbb R}^{m\times (mn+m)}}\underbrace{ \begin{bmatrix} \mathrm{Vec}(W)\\b \end{bmatrix} }_{\in{\mathbb R}^{mn+m}}\\ &=& ( \underbrace{\begin{bmatrix} x^{T} & 1 \end{bmatrix}}_{\in{\mathbb R}^{1\times(n+1)}}\otimes I_{m}) \begin{bmatrix} \mathrm{Vec}(W)\\b \end{bmatrix}. \end{array}$
So our map ${G(W,b) = \sigma(Wx+b)}$ mapping ${{\mathbb R}^{m\times n}\times {\mathbb R}^{m}\rightarrow{\mathbb R}^{m}}$ can be rewritten as
$\displaystyle \begin{array}{rcl} \bar G( \begin{pmatrix} \mathrm{Vec}(W)\\b \end{pmatrix}) = \sigma( ( \begin{bmatrix} x^{T} & 1 \end{bmatrix}\otimes I_{M}) \begin{bmatrix} \mathrm{Vec}(W)\\b \end{bmatrix}) \end{array}$
mapping ${{\mathbb R}^{mn+m}\rightarrow{\mathbb R}^{m}}$. Since ${\bar G}$ is just a concatenation of ${\sigma}$ applied coordinate wise and a linear map, now given as a matrix, the derivative of ${\bar G}$ (i.e. the Jacobian, a matrix in ${{\mathbb R}^{m\times (mn+m)}}$) is calculated simply as
$\displaystyle \begin{array}{rcl} D\bar G( \begin{pmatrix} \mathrm{Vec}(W)\\b \end{pmatrix}) & = & D\sigma(Wx+b)( \begin{bmatrix} x^{T} & 1 \end{bmatrix}\otimes I_{M})\\ &=& \underbrace{\mathrm{diag}(\sigma'(Wx+b))}_{\in{\mathbb R}^{m\times m}}\underbrace{( \begin{bmatrix} x^{T} & 1 \end{bmatrix}\otimes I_{M})}_{\in{\mathbb R}^{m\times(mn+m)}}\in{\mathbb R}^{m\times(mn+m)}. \end{array}$
While this representation of the derivative of a single layer of a neural network with respect to its parameters is not particularly simple, it is still index free and moreover, straightforward to implement in languages which provide functions for the Kronecker product and vectorization. If you do this, make sure to take advantage of sparse matrices for the identity matrix and the diagonal matrix as otherwise the memory of your computer will be flooded with zeros.
Now let’s add a scalar function ${L}$ (e.g. to produce a scalar loss that we can minimize), i.e. we consider the map
$\displaystyle \begin{array}{rcl} F( \begin{pmatrix} \mathrm{Vec}(W)\\b \end{pmatrix}) = L(G(Wx+b)) = L(\bar G( \begin{pmatrix} \mathrm{Vec}(W)\\b \end{pmatrix}). \end{array}$
The derivative is obtained by just another application of the chain rule:
$\displaystyle \begin{array}{rcl} DF( \begin{pmatrix} \mathrm{Vec}(W)\\b \end{pmatrix}) = DL(G(Wx+b))D\bar G( \begin{pmatrix} \mathrm{Vec}(W)\\b \end{pmatrix}). \end{array}$
If we want to take gradients, we just transpose the expression and get
$\displaystyle \begin{array}{rcl} \nabla F( \begin{pmatrix} \mathrm{Vec}(W)\\b \end{pmatrix}) &=& D\bar G( \begin{pmatrix} \mathrm{Vec}(W)\\b \end{pmatrix})^{T} DL(G(Wx+b))^{T}\\ &=& ([x^{T}\ 1]\otimes I_{m})^{T}\mathrm{diag}(\sigma'(Wx+b))\nabla L(G(Wx+b))\\ &=& \underbrace{( \begin{bmatrix} x\\ 1 \end{bmatrix} \otimes I_{m})}_{\in{\mathbb R}^{(mn+m)\times m}}\underbrace{\mathrm{diag}(\sigma'(Wx+b))}_{\in{\mathbb R}^{m\times m}}\underbrace{\nabla L(G(Wx+b))}_{\in{\mathbb R}^{m}}. \end{array}$
Note that the right hand side is indeed vector in ${{\mathbb R}^{mn+m}}$ and hence, can be reshaped to a tupel ${(W,b)}$ of an ${m\times n}$ matrix and an ${m}$ vector.
A final remark: the Kronecker product is related to tensor products. If ${A}$ and ${B}$ represent linear maps ${X_{1}\rightarrow Y_{1}}$ and ${X_{2}\rightarrow Y_{2}}$, respectively, then ${A\otimes B}$ represents the tensor product of the maps, ${X_{1}\otimes X_{2}\rightarrow Y_{1}\otimes Y_{2}}$. This relation to tensor products and tensors explains where the tensor in TensorFlow comes from.
The problem of optimal transport of mass from one distribution to another can be stated in many forms. Here is the formulation going back to Kantorovich: We have two measurable sets ${\Omega_{1}}$ and ${\Omega_{2}}$, coming with two measures ${\mu_{1}}$ and ${\mu_{2}}$. We also have a function ${c:\Omega_{1}\times \Omega_{2}\rightarrow {\mathbb R}}$ which assigns a transport cost, i.e. ${c(x_{1},x_{2})}$ is the cost that it takes to carry one unit of mass from point ${x_{1}\in\Omega_{2}}$ to ${x_{2}\in\Omega_{2}}$. What we want is a plan that says where the mass in ${\mu_{1}}$ should be placed in ${\Omega_{2}}$ (or vice versa). There are different ways to formulate this mathematically.
A simple way is to look for a map ${T:\Omega_{1}\rightarrow\Omega_{2}}$ which says that thet mass in ${x_{1}}$ should be moved to ${T(x_{1})}$. While natural, there is a serious problem with this: What if not all mass at ${x_{1}}$ should go to the same point in ${\Omega_{2}}$? This happens in simple situations where all mass in ${\Omega_{1}}$ sits in just one point, but there are at least two different points in ${\Omega_{2}}$ where mass should end up. This is not going to work with a map ${T}$ as above. So, the map ${T}$ is not flexible enough to model all kinds of transport we may need.
What we want is a way to distribute mass from one point in ${\Omega_{1}}$ to the whole set ${\Omega_{2}}$. This looks like we want maps ${\mathcal{T}}$ which map points in ${\Omega_{1}}$ to functions on ${\Omega_{2}}$, i.e. something like ${\mathcal{T}:\Omega_{1}\rightarrow (\Omega_{2}\rightarrow{\mathbb R})}$ where ${(\Omega_{2}\rightarrow{\mathbb R})}$ stands for some set of functions on ${\Omega_{2}}$. We can de-curry this function to some ${\tau:\Omega_{1}\times\Omega_{2}\rightarrow{\mathbb R}}$ by ${\tau(x_{1},x_{2}) = \mathcal{T}(x_{1})(x_{2})}$. That’s good in principle, be we still run into problems when the target mass distribution ${\mu_{2}}$ is singular in the sense that ${\Omega_{2}}$ is a “continuous” set and there are single points in ${\Omega_{2}}$ that carry some mass according to ${\mu_{2}}$. Since we are in the world of measure theory already, the way out suggests itself: Instead of a function ${\tau}$ on ${\Omega_{1}\times\Omega_{2}}$ we look for a measure ${\pi}$ on ${\Omega_{1}\times \Omega_{2}}$ as a transport plan.
The demand that we should carry all of the mass in ${\Omega_{1}}$ to reach all of ${\mu_{2}}$ is formulated by marginals. For simplicity we just write these constraints as
$\displaystyle \int_{\Omega_{2}}\pi\, d x_{2} = \mu_{1},\qquad \int_{\Omega_{1}}\pi\, d x_{1} = \mu_{2}$
(with the understanding that the first equation really means that for all continuous function ${f:\Omega_{1}\rightarrow {\mathbb R}}$ it holds that ${\int_{\Omega_{1}\times \Omega_{2}} f(x_{1})\,d\pi(x_{1},x_{2}) = \int_{\Omega_{1}}f(x_{1})\,d\mu_{1}(x_{1})}$).
This leads us to the full transport problem
$\displaystyle \min_{\pi}\int_{\Omega_{1}\times \Omega_{2}}c(x,y)\,d\pi(x_{1}x_{2})\quad \text{s.t.}\quad \int_{\Omega_{2}}\pi\, d x_{2} = \mu_{1},\quad \int_{\Omega_{1}}\pi\, d x_{1} = \mu_{2}.$
There is the following theorem which characterizes optimality of a plan and which is the topic of this post:
Theorem 1 (Fundamental theorem of optimal transport) Under some technicalities we can say that a plan ${\pi}$ which fulfills the marginal constraints is optimal if and only if one of the following equivalent conditions is satisfied:
1. The support ${\mathrm{supp}(\pi)}$ of ${\pi}$ is ${c}$-cyclically monotone.
2. There exists a ${c}$-concave function ${\phi}$ such that its ${c}$-superdifferential contains the support of ${\pi}$, i.e. ${\mathrm{supp}(\pi)\subset \partial^{c}\phi}$.
A few clarifications: The technicalities involve continuity, integrability, and boundedness conditions of ${c}$ and integrability conditions on the marginals. The full theorem can be found as Theorem 1.13 in A user’s guide to optimal transport by Ambrosio and Gigli. Also the notions ${c}$-cyclically monotone, ${c}$-concave and ${c}$-superdifferential probably need explanation. We start with a simpler notion: ${c}$-monotonicity:
Definition 2 A set ${\Gamma\subset\Omega_{1}\times\Omega_{2}}$ is ${c}$-monotone, if for all ${(x_{1}x_{2}),(x_{1}',x_{2}')\in\Gamma}$ it holds that
$\displaystyle c(x_{1},x_{2}) + c(x_{1}',x_{2}')\leq c(x_{1},x_{2}') + c(x_{1}',x_{2}).$
If you find it unclear what this has to do with monotonicity, look at this example:
Example 1 Let ${\Omega_{1/2}\in{\mathbb R}^{d}}$ and let ${c(x_{1},x_{2}) = \langle x_{1},x_{2}\rangle}$ be the usual scalar product. Then ${c}$-monotonicity is the condition that for all ${(x_{1}x_{2}),(x_{1}',x_{2}')\in\Gamma\subset\Omega_{1}\times\Omega_{2}}$ it holds that
$\displaystyle 0\leq \langle x_{1}-x_{1}',x_{2}-x_{2}'\rangle$
which may look more familiar. Indeed, when ${\Omega_{1}}$ and ${\Omega_{2}}$ are subset of the real line, the above conditions means that the set ${\Gamma}$ somehow “moves up in ${\Omega_{2}}$” if we “move right in ${\Omega_{1}}$”. So ${c}$-monotonicity for ${c(x_{1},x_{2}) = \langle x_{1},x_{2}\rangle}$ is something like “monotonically increasing”. Similarly, for ${c(x_{1},x_{2}) = -\langle x_{1},x_{2}\rangle}$, ${c}$-monotonicity means “monotonically decreasing”.
You may say that both ${c(x_{1},x_{2}) = \langle x_{1},x_{2}\rangle}$ and ${c(x_{1},x_{2}) = -\langle x_{1},x_{2}\rangle}$ are strange cost functions and I can’t argue with that. But here comes: ${c(x_{1},x_{2}) = |x_{1}-x_{2}|^{2}}$ (${|\,\cdot\,|}$ being the euclidean norm) seems more natural, right? But if we have a transport plan ${\pi}$ for this ${c}$ for some marginals ${\mu_{1}}$ and ${\mu_{2}}$ we also have
$\displaystyle \begin{array}{rcl} \int_{\Omega_{1}\times \Omega_{2}}c(x_{1},x_{2})d\pi(x_{1},x_{2}) & = & \int_{\Omega_{1}\times \Omega_{2}}|x_{1}|^{2} d\pi(x_{1},x_{2})\\ &&\quad- \int_{\Omega_{1}\times \Omega_{2}}\langle x_{1},x_{2}\rangle d\pi(x_{1},x_{2})\\ && \qquad+ \int_{\Omega_{1}\times \Omega_{2}} |x_{2}|^{2}d\pi(x_{1},x_{2})\\ & = &\int_{\Omega_{1}}|x_{1}|^{2}d\mu_{1}(x_{1}) - \int_{\Omega_{1}\times \Omega_{2}}\langle x_{1},x_{2}\rangle d\pi(x_{1},x_{2}) + \int_{\Omega_{2}}|x_{2}|^{2}d\mu_{2}(x_{2}) \end{array}$
i.e., the transport cost for ${c(x_{1},x_{2}) = |x_{1}-x_{2}|^{2}}$ differs from the one for ${c(x_{1},x_{2}) = -\langle x_{1},x_{2}\rangle}$ only by a constant independent of ${\pi}$, so may well use the latter.
The fundamental theorem of optimal transport uses the notion of ${c}$-cyclical monotonicity which is stronger that just ${c}$-monotonicity:
Definition 3 A set ${\Gamma\subset \Omega_{1}\times \Omega_{2}}$ is ${c}$-cyclically monotone, if for all ${(x_{1}^{i},x_{2}^{i})\in\Gamma}$, ${i=1,\dots n}$ and all permutations ${\sigma}$ of ${\{1,\dots,n\}}$ it holds that
$\displaystyle \sum_{i=1}^{n}c(x_{1}^{i},x_{2}^{i}) \leq \sum_{i=1}^{n}c(x_{1}^{i},x_{2}^{\sigma(i)}).$
For ${n=2}$ we get back the notion of ${c}$-monotonicity.
Definition 4 A function ${\phi:\Omega_{1}\rightarrow {\mathbb R}}$ is ${c}$-concave if there exists some function ${\psi:\Omega_{2}\rightarrow{\mathbb R}}$ such that
$\displaystyle \phi(x_{1}) = \inf_{x_{2}\in\Omega_{2}}c(x_{1},x_{2}) - \psi(x_{2}).$
This definition of ${c}$-concavity resembles the notion of convex conjugate:
Example 2 Again using ${c(x_{1},x_{2}) = -\langle x_{1},x_{2}\rangle}$ we get that a function ${\phi}$ is ${c}$-concave if
$\displaystyle \phi(x_{1}) = \inf_{x_{2}}-\langle x_{1},x_{2}\rangle - \psi(x_{2}),$
and, as an infimum over linear functions, ${\phi}$ is clearly concave in the usual way.
Definition 5 The ${c}$-superdifferential of a ${c}$-concave function is
$\displaystyle \partial^{c}\phi = \{(x_{1},x_{2})\mid \phi(x_{1}) + \phi^{c}(x_{2}) = c(x,y)\},$
where ${\phi^{c}}$ is the ${c}$-conjugate of ${\phi}$ defined by
$\displaystyle \phi^{c}(x_{2}) = \inf_{x_{1}\in\Omega_{1}}c(x_{1},x_{2}) -\phi(x_{1}).$
Again one may look at ${c(x_{1},x_{2}) = -\langle x_{1},x_{2}\rangle}$ and observe that the ${c}$-superdifferential is the usual superdifferential related to the supergradient of concave functions (there is a Wikipedia page for subgradient only, but the concept is the same with reversed signs in some sense).
Now let us sketch the proof of the fundamental theorem of optimal transport: \medskip
Proof (of the fundamental theorem of optimal transport). Let ${\pi}$ be an optimal transport plan. We aim to show that ${\mathrm{supp}(\pi)}$ is ${c}$-cyclically monotone and assume the contrary. That is, we assume that there are points ${(x_{1}^{i},x_{2}^{i})\in\mathrm{supp}(\pi)}$ and a permutation ${\sigma}$ such that
$\displaystyle \sum_{i=1}^{n}c(x_{1}^{i},x_{2}^{i}) > \sum_{i=1}^{n}c(x_{1}^{i},x_{2}^{\sigma(i)}).$
We aim to construct a ${\tilde\pi}$ such that ${\tilde\pi}$ is still feasible but has a smaller transport cost. To do so, we note that continuity of ${c}$ implies that there are neighborhoods ${U_{i}}$ of ${x_{1}^{i}}$ and ${V_{i}}$ of ${x_{2}^{i}}$ such that for all ${u_{i}\in U_{1}}$ and ${v_{i}\in V_{i}}$ it holds that
$\displaystyle \sum_{i=1}^{n}c(u_{i},v_{\sigma(i)}) - c(u_{i},v_{i})<0.$
We use this to construct a better plan ${\tilde \pi}$: Take the mass of ${\pi}$ in the sets ${U_{i}\times V_{i}}$ and shift it around. The full construction is a little messy to write down: Define a probability measure ${\nu}$ on the product ${X = \bigotimes_{i=1}^{N}U_{i}\times V_{i}}$ as the product of the measures ${\tfrac{1}{\pi(U_{i}\times V_{i})}\pi|_{U_{i}\times V_{i}}}$. Now let ${P^{U_{1}}}$ and ${P^{V_{i}}}$ be the projections of ${X}$ onto ${U_{i}}$ and ${V_{i}}$, respectively, and set
$\displaystyle \nu = \tfrac{\min_{i}\pi(U_{i}\times V_{i})}{n}\sum_{i=1}^{n}(P^{U_{i}},P^{V_{\sigma(i)}})_{\#}\nu - (P^{U_{i}},P^{V_{i}})_{\#}\nu$
where ${_{\#}}$ denotes the pushforward of measures. Note that the new measure ${\nu}$ is signed and that ${\tilde\pi = \pi + \nu}$ fulfills
1. ${\tilde\pi}$ is a non-negative measure
2. ${\tilde\pi}$ is feasible, i.e. has the correct marginals
3. ${\int c\,d\tilde \pi<\int c\,d\pi}$
which, all together, gives a contradiction to optimality of ${\pi}$. The implication of item 1 to item 2 of the theorem is not really related to optimal transport but a general fact about ${c}$-concavity and ${c}$-cyclical monotonicity (c.f.~this previous blog post of mine where I wrote a similar statement for convexity). So let us just prove the implication from item 2 to optimality of ${\pi}$: Let ${\pi}$ fulfill item 2, i.e. ${\pi}$ is feasible and ${\mathrm{supp}(\pi)}$ is contained in the ${c}$-superdifferential of some ${c}$-concave function ${\phi}$. Moreover let ${\tilde\pi}$ be any feasible transport plan. We aim to show that ${\int c\,d\pi\leq \int c\,d\tilde\pi}$. By definition of the ${c}$-superdifferential and the ${c}$-conjugate we have
$\displaystyle \begin{array}{rcl} \phi(x_{1}) + \phi^{c}(x_{2}) &=& c(x_{1},x_{2})\ \forall (x_{1},x_{2})\in\partial^{c}\phi\\ \phi(x_{1}) + \phi^{c}(x_{2}) & \leq& c(x_{1},x_{2})\ \forall (x_{1},x_{2})\in\Omega_{1}\times \Omega_{2}. \end{array}$
Since ${\mathrm{supp}(\pi)\subset\partial^{c}\phi}$ by assumption, this gives
$\displaystyle \begin{array}{rcl} \int_{\Omega_{1}\times \Omega_{2}}c(x_{1},x_{2})\,d\pi(x_{1},x_{2}) & =& \int_{\Omega_{1}\times \Omega_{2}}\phi(x_{1}) + \phi^{c}(x_{1})\,d\pi(x_{1},x_{2})\\ &=& \int_{\Omega_{1}}\phi(x_{1})\,d\mu_{1}(x_{1}) + \int_{\Omega_{1}}\phi^{c}(x_{2})\,d\mu_{2}(x_{2})\\ &=& \int_{\Omega_{1}\times \Omega_{2}}\phi(x_{1}) + \phi^{c}(x_{1})\,d\tilde\pi(x_{1},x_{2})\\ &\leq& \int_{\Omega_{1}\times \Omega_{2}}c(x_{1},x_{2})\,d\tilde\pi(x_{1},x_{2}) \end{array}$
which shows the claim.
${\Box}$
Corollary 6 If ${\pi}$ is a measure on ${\Omega_{1}\times \Omega_{2}}$ which is supported on a ${c}$-superdifferential of a ${c}$-concave function, then ${\pi}$ is an optimal transport plan for its marginals with respect to the transport cost ${c}$.
This is a short follow up on my last post where I wrote about the sweet spot of the stepsize of the Douglas-Rachford iteration. For the case $\beta$-Lipschitz + $\mu$-strongly monotone, the iteration with stepsize $t$ converges linear with rate
$\displaystyle r(t) = \tfrac{1}{2(1+t\mu)}\left(\sqrt{2t^{2}\mu^{2}+2t\mu + 1 +2(1 - \tfrac{1}{(1+t\beta)^{2}} - \tfrac1{1+t^{2}\beta^{2}})t\mu(1+t\mu)} + 1\right)$
Here is animated plot of this contraction factor depending on $\beta$ and $\mu$ and $t$ acts as time variable:
What is interesting is, that this factor has increasing or decreasing in $t$ depending on the values of $\beta$ and $\mu$.
For each pair $(\beta,\mu)$ there is a best $t^*$ and also a smallest contraction factor $r(t^*)$. Here are plots of these quantities:
Comparing the plot of te optimal contraction factor to the animated plot above, you see that the right choice of the stepsize matters a lot.
I blogged about the Douglas-Rachford method before here and here. It’s a method to solve monotone inclusions in the form
$\displaystyle 0 \in Ax + Bx$
with monotone multivalued operators ${A,B}$ from a Hilbert space into itself. Using the resolvent ${J_{A} = (I+A)^{-1}}$ and the reflector ${R_{A} = 2J_{A} - I}$, the Douglas-Rachford iteration is concisely written as
$\displaystyle u^{n+1} = \tfrac12(I + R_{B}R_{A})u_{n}.$
The convergence of the method has been clarified is a number of papers, see, e.g.
Lions, Pierre-Louis, and Bertrand Mercier. “Splitting algorithms for the sum of two nonlinear operators.” SIAM Journal on Numerical Analysis 16.6 (1979): 964-979.
for the first treatment in the context of monotone operators and
Svaiter, Benar Fux. “On weak convergence of the Douglas–Rachford method.” SIAM Journal on Control and Optimization 49.1 (2011): 280-287.
for a recent very general convergence result.
Since ${tA}$ is monotone if ${A}$ is monotone and ${t>0}$, we can introduce a stepsize for the Douglas-Rachford iteration
$\displaystyle u^{n+1} = \tfrac12(I + R_{tB}R_{tA})u^{n}.$
It turns out, that this stepsize matters a lot in practice; too small and too large stepsizes lead to slow convergence. It is a kind of folk wisdom, that there is “sweet spot” for the stepsize. In a recent preprint Quoc Tran-Dinh and I investigated this sweet spot in the simple case of linear operators ${A}$ and ${B}$ and this tweet has a visualization.
A few days ago Walaa Moursi and Lieven Vandenberghe published the preprint “Douglas-Rachford splitting for a Lipschitz continuous and a strongly monotone operator” and derived some linear convergence rates in the special case they mention in the title. One result (Theorem 4.3) goes as follows: If ${A}$ is monotone and Lipschitz continuous with constant ${\beta}$ and ${B}$ is maximally monotone and ${\mu}$-strongly monotone, than the Douglas-Rachford iterates converge strongly to a solution with a linear rate
$\displaystyle r = \tfrac{1}{2(1+\mu)}\left(\sqrt{2\mu^{2}+2\mu + 1 +2(1 - \tfrac{1}{(1+\beta)^{2}} - \tfrac1{1+\beta^{2}})\mu(1+\mu)} + 1\right).$
This is a surprisingly complicated expression, but there is a nice thing about it: It allows to optimize for the stepsize! The rate depends on the stepsize as
$\displaystyle r(t) = \tfrac{1}{2(1+t\mu)}\left(\sqrt{2t^{2}\mu^{2}+2t\mu + 1 +2(1 - \tfrac{1}{(1+t\beta)^{2}} - \tfrac1{1+t^{2}\beta^{2}})t\mu(1+t\mu)} + 1\right)$
and the two plots of this function below show the sweet spot clearly.
If one knows the Lipschitz constant of ${A}$ and the constant of strong monotonicity of ${B}$, one can minimize ${r(t)}$ to get on optimal stepsize (in the sense that the guaranteed contraction factor is as small as possible). As Moursi and Vandenberghe explain in their Remark 5.4, this optimization involves finding the root of a polynomial of degree 5, so it is possible but cumbersome.
Now I wonder if there is any hope to show that the adaptive stepsize Quoc and I proposed here (which basically amounts to ${t_{n} = \|u^{n}\|/\|Au^{n}\|}$ in the case of single valued ${A}$ – note that the role of ${A}$ and ${B}$ is swapped in our paper) is able to find the sweet spot (experimentally it does).
<p
Consider the saddle point problem
$\displaystyle \min_{x}\max_{y}F(x) + \langle Kx,y\rangle - G(y). \ \ \ \ \ (1)$
(where I omit all the standard assumptions, like convexity, continuity ans such…). Fenchel-Rockafellar duality says that solutions are characterized by the inclusion
$\displaystyle 0 \in\left( \begin{bmatrix} \partial F & 0\\ 0 & \partial G \end{bmatrix} + \begin{bmatrix} 0 & K^{T}\\ -K & 0 \end{bmatrix}\right) \begin{bmatrix} x^{*}\\y^{*} \end{bmatrix}$
Noting that the operators
$\displaystyle A = \begin{bmatrix} \partial F & 0\\ 0 & \partial G \end{bmatrix},\quad B = \begin{bmatrix} 0 & K^{T}\\ -K & 0 \end{bmatrix}$
are both monotone, we may apply any of the splitting methods available, for example the Douglas-Rachford method. In terms of resolvents
$\displaystyle R_{tA}(z) := (I+tA)^{-1}(z)$
this method reads as
$\displaystyle \begin{array}{rcl} z^{k+1} & = & R_{tB}(\bar z^{k})\\ \bar z^{k+1}& = & R_{tA}(2z^{k+1}-\bar z^{k}) + \bar z^{k}-z^{k+1}. \end{array}$
For the saddle point problem, this iteration is (with ${z = (x,y)}$)
$\displaystyle \begin{array}{rcl} x^{k+1} &=& R_{t\partial F}(\bar x^{k})\\ y^{k+1} &=& R_{t\partial G}(\bar y^{k})\\ \begin{bmatrix} \bar x^{k+1}\\ \bar y^{k+1} \end{bmatrix} & = & \begin{bmatrix} I & tK^{T}\\ -tK & I \end{bmatrix}^{-1} \begin{bmatrix} 2x^{k+1}-\bar x^{k}\\ 2y^{k+1}-\bar y^{k} \end{bmatrix} + \begin{bmatrix} \bar x^{k}- x^{k+1}\\ \bar y^{k}-y^{k+1} \end{bmatrix}. \end{array}$
The first two lines involve proximal steps and we assume that they are simple to implement. The last line, however, involves the solution of a large linear system. This can be broken down to a slightly smaller linear system involving the matrix ${(I+t^{2}K^{T}K)}$ as follows: The linear system equals
$\displaystyle \begin{array}{rcl} \bar x^{k+1} & = & x^{k+1} - tK^{T}(y^{k+1}+\bar y^{k+1}-\bar y^{k})\\ \bar y^{k+1} & = & y^{k+1} + tK(x^{k+1} + \bar x^{k+1}-\bar x^{k}). \end{array}$
Plugging ${\bar y^{k+1}}$ from the second equation into the first gives
$\displaystyle \bar x^{k+1} = x^{k+1} - tK^{T}(2y^{k+1}-\bar y^{k}) - tK^{T}K(x^{k+1}-\bar x^{k+1}-\bar x^{k})$
Denoting ${d^{k+1}= x^{k+1}+\bar x^{k+1}-\bar x^{k}}$ this can be written as
$\displaystyle (I+t^{2}K^{T}K)d^{k+1} = (2x^{k+1}-\bar x^{k}) - tK^{T}(2y^{k+1}-\bar y^{k}).$
and the second equation is just
$\displaystyle \bar y^{k+1} = y^{k+1} + tKd^{k+1}.$
This gives the overall iteration
$\displaystyle \begin{array}{rcl} x^{k+1} &=& R_{t\partial F}(\bar x^{k})\\ y^{k+1} &=& R_{t\partial G}(\bar y^{k})\\ d^{k+1} &=& (I+t^{2}K^{T}K)^{-1}(2x^{k+1}-\bar x^{k} - tK(2y^{k+1}-\bar y^{k}))\\ \bar x^{k+1}&=& \bar x^{k}-x^{k+1}+d^{k+1}\\ \bar y^{k+1}&=& y^{k+1}+tKd^{k+1} \end{array}$
This is nothing else than using the Schur complement or factoring as
$\displaystyle \begin{bmatrix} I & tK^{T}\\ -tK & I \end{bmatrix} = \begin{bmatrix} 0 & 0\\ 0 & I \end{bmatrix} + \begin{bmatrix} I\\tK \end{bmatrix} (I + t^{2}K^{T}K)^{-1} \begin{bmatrix} I & -tK^{T} \end{bmatrix}$
and has been applied to imaging problems by O’Connor and Vandenberghe in “Primal-Dual Decomposition by Operator Splitting and Applications to Image Deblurring” (doi). For many problems in imaging, the involved inversion may be fairly easy to perform (if ${K}$ is the image gradient, for example, we only need to solve an equation with an operator like ${(I - t^{2}\Delta)}$ and appropriate boundary conditions). However, there are problems where this inversion is a problem.
I’d like to show the following trick to circumvent the matrix inversion, which I learned from Bredies and Sun’s “Accelerated Douglas-Rachford methods for the solution of convex-concave saddle-point problems”: Here is a slightly different saddle point problem
$\displaystyle \min_{x}\max_{y,x_{p}}F(x) + \langle Kx,y\rangle + \langle Hx,x_{p}\rangle- G(y) - I_{\{0\}}(x_{p}). \ \ \ \ \ (2)$
We added a new dual variable ${x_{p}}$, which is forced to be zero by the additional indicator functional ${I_{\{0\}}}$. Hence, the additional bilinear term ${\langle Hx,x_{p}\rangle}$ is also zero, and we see that ${(x,y)}$ is a solution of (1) if and only if ${(x,y,0)}$ is a solution of (2). In other words: The problem just looks differently, but is, in essence, the same as before.
Now let us write down the Douglas-Rachford iteration for (2). We write this problem as
$\displaystyle \min_{x}\max_{\tilde y} F(x) + \langle \tilde Kx,\tilde y\rangle -\tilde G(\tilde y)$
with
$\displaystyle \tilde y = \begin{bmatrix} y\\x_{p} \end{bmatrix}, \quad \tilde K = \begin{bmatrix} K\\H \end{bmatrix}, \quad \tilde G(\tilde y) = \tilde G(y,x_{p}) = G(y) + I_{\{0\}}(x_{p}).$
Writing down the Douglas-Rachford iteration gives
$\displaystyle \begin{array}{rcl} x^{k+1} &=& R_{t\partial F}(\bar x^{k})\\ \tilde y^{k+1} &=& R_{t\partial \tilde G}(\bar{ \tilde y}^{k})\\ \begin{bmatrix} \bar x^{k+1}\\ \bar {\tilde y}^{k+1} \end{bmatrix} & = & \begin{bmatrix} I & t\tilde K^{T}\\ -t\tilde K & I \end{bmatrix}^{-1} \begin{bmatrix} 2x^{k+1}-\bar x^{k}\\ 2\tilde y^{k+1}-\bar {\tilde y}^{k} \end{bmatrix} + \begin{bmatrix} \bar x^{k}- x^{k+1}\\ \bar {\tilde y}^{k}-\tilde y^{k+1} \end{bmatrix}. \end{array}$
Switching back to variables without a tilde, we get, using ${R_{tI_{\{0\}}}(x) = 0}$,
$\displaystyle \begin{array}{rcl} x^{k+1} &=& R_{t\partial F}(\bar x^{k})\\ y^{k+1} &=& R_{t\partial \tilde G}(\bar{ y}^{k})\\ x_{p}^{k+1} &=& 0\\ \begin{bmatrix} \bar x^{k+1}\\ \bar {y}^{k+1}\\ \bar x_{p}^{k+1} \end{bmatrix} & = & \begin{bmatrix} I & tK^{T} & tH^{T}\\ -t K & I & 0\\ -t H & 0 & I \end{bmatrix}^{-1} \begin{bmatrix} 2x^{k+1}-\bar x^{k}\\ 2 y^{k+1}-\bar {y}^{k}\\ 2x_{p}^{k+1}-\bar x_{p}^{k} \end{bmatrix} + \begin{bmatrix} \bar x^{k}- x^{k+1}\\ \bar {y}^{k}-y^{k+1}\\ \bar x_{p}^{k}-x_{p}^{k+1} \end{bmatrix}. \end{array}$
First not that ${x_{p}^{k+1}=0}$ throughout the iteration and from the last line of the linear system we get that
$\displaystyle \begin{array}{rcl} -tH\bar x^{k+1} + \bar x_{p}^{k+1} = -\bar x_{p}^{k} -tH(\bar x^{k}-x^{k+1}) + \bar x_{p}^{k} \end{array}$
which implies that
$\displaystyle \bar x_{p}^{k+1} = tH\bar x^{k+1}.$
Thus, both variables ${x_{p}^{k}}$ and ${\bar x_{p}^{k}}$ disappear in the iteration. Now we rewrite the remaining first two lines of the linear system as
$\displaystyle \begin{array}{rcl} \bar x^{k+1} + tK^{T}\bar y^{k+1} + t^{2}H^{T}H\bar x^{k+1} &=& x^{k+1} + tK^{T}(\bar y^{k}-y^{k+1}) + t^{2}H^{T}H\bar x^{k}\\ \bar y^{k+1}-tK\bar x^{k+1} &=& y^{k+1} + tK(x^{k+1}-\bar x^{k}). \end{array}$
Again denoting ${d^{k+1}=x^{k+1}+\bar x^{k+1}-\bar x^{k}}$, solving the second equation for ${\bar y^{k+1}}$ and plugging the result in the first gives
$\displaystyle (I+t^{2}H^{T}H)\bar x^{k+1} +tK^{T}(y^{k+1}+tKd^{k+1}) = x^{k+1}+tK(\bar y^{k}-y^{k+1}) + t^{2}H^{T}H\bar x^{k}.$
To eliminate ${\bar x^{k+1}}$ we add ${(I+t^{2}H^{T}H)(x^{k+1}-\bar x^{k})}$ on both sides and get
$\displaystyle (I+t^{2}(H^{T}H+K^{T}K))d^{k+1} = 2x^{k+1}-\bar x^{k} -tK(y^{k+1}+\bar y^{k+1}-\bar y^{k}) + t^{2}H^{T}Hx^{k+1}.$
In total we obtain the following iteration:
$\displaystyle \begin{array}{rcl} x^{k+1} &=& R_{t\partial F}(\bar x^{k})\\ y^{k+1} &=& R_{t\partial G}(\bar y^{k})\\ d^{k+1} &=& (I+t^{2}(H^{T}H + K^{T}K))^{-1}(2x^{k+1}-\bar x^{k} - tK(2y^{k+1}-\bar y^{k}) + t^{2}H^{T}Hx^{k+1})\\ \bar x^{k+1}&=& \bar x^{k}-x^{k+1}+d^{k+1}\\ \bar y^{k+1}&=& y^{k+1}+tKd^{k+1} \end{array}$
and note that only the third line changed.
Since the above works for any matrix ${H}$, we have a lot of freedom. Let us see, that it is even possible to avoid any inversion whatsoever: We would like to choose ${H}$ in a way that ${I+t^{2}(H^{T}H + K^{T}K) = \lambda I}$ for some positive ${\lambda}$. This is equivalent to
$\displaystyle H^{T}H = \tfrac{\lambda-1}{t^{2}}I - K^{T}K.$
As soon as the right hand side is positive definite, Cholesky decomposition shows that such an ${H}$ exists, and this happens if ${\lambda\geq 1+t^{2}\|K\|^{2}}$. Further note, that we do need ${H}$ in any way, but only ${H^{T}H}$, and we can perform the iteration without ever solving any linear system since the third row reads as
$\displaystyle d^{k+1} = \tfrac{1}{\lambda}\left(2x^{k+1}-\bar x^{k} - tK(2y^{k+1}-\bar y^{k}) + ((\lambda-1)I - t^{2}K^{T}K)x^{k+1})\right).$
I blogged about the Douglas-Rachford method before and in this post I’d like to dig a bit into the history of the method.
As the name suggests, the method has its roots in a paper by Douglas and Rachford and the paper is
Douglas, Jim, Jr., and Henry H. Rachford Jr., “On the numerical solution of heat conduction problems in two and three space variables.” Transactions of the American mathematical Society 82.2 (1956): 421-439.
At first glance, the title does not suggest that the paper may be related to monotone inclusions and if you read the paper you’ll not find any monotone operator mentioned. So let’s start and look at Douglas and Rachford’s paper.
1. Solving the heat equation numerically
So let us see, what they were after and how this is related to what is known as Douglas-Rachford splitting method today.
Indeed, Douglas and Rachford wanted to solve the instationary heat equation
$\displaystyle \begin{array}{rcl} \partial_{t}u &=& \partial_{xx}u + \partial_{yy}u \\ u(x,y,0) &=& f(x,y) \end{array}$
with Dirichlet boundary conditions (they also considered three dimensions, but let us skip that here). They considered a rectangular grid and a very simple finite difference approximation of the second derivatives, i.e.
$\displaystyle \begin{array}{rcl} \partial_{xx}u(x,y,t)&\approx& (u^{n}_{i+1,j}-2u^{n}_{i,j}+u^{n}_{i-1,j})/h^{2}\\ \partial_{yy}u(x,y,t)&\approx& (u^{n}_{i,j+1}-2u^{n}_{i,j}+u^{n}_{i,j-1})/h^{2} \end{array}$
(with modifications at the boundary to accomodate the boundary conditions). To ease notation, we abbreviate the difference quotients as operators (actually, also matrices) that act for a fixed time step
$\displaystyle \begin{array}{rcl} (Au^{n})_{i,j} &=& (u^{n}_{i+1,j}-2u^{n}_{i,j}+u^{n}_{i-1,j})/h^{2}\\ (Bu^{n})_{i,j} &=& (u^{n}_{i,j+1}-2u^{n}_{i,j}+u^{n}_{i,j+1})/h^{2}. \end{array}$
With this notation, our problem is to solve
$\displaystyle \begin{array}{rcl} \partial_{t}u &=& (A+B)u \end{array}$
in time.
Then they give the following iteration:
$\displaystyle Av^{n+1}+Bw^{n} = \frac{v^{n+1}-w^{n}}{\tau} \ \ \ \ \ (1)$
$\displaystyle Bw^{n+1} = Bw^{n} + \frac{w^{n+1}-v^{n+1}}{\tau} \ \ \ \ \ (2)$
(plus boundary conditions which I’d like to swipe under the rug here). If we eliminate ${v^{n+1}}$ from the first equation using the second we get
$\displaystyle (A+B)w^{n+1} = \frac{w^{n+1}-w^{n}}{\tau} + \tau AB(w^{n+1}-w^{n}). \ \ \ \ \ (3)$
This is a kind of implicit Euler method with an additional small term ${\tau AB(w^{n+1}-w^{n})}$. From a numerical point of it has one advantage over the implicit Euler method: As equations (1) and (2) show, one does not need to invert ${I-\tau(A+B)}$ in every iteration, but only ${I-\tau A}$ and ${I-\tau B}$. Remember, this was in 1950s, and solving large linear equations was a much bigger problem than it is today. In this specific case of the heat equation, the operators ${A}$ and ${B}$ are in fact tridiagonal, and hence, solving with ${I-\tau A}$ and ${I-\tau B}$ can be done by Gaussian elimination without any fill-in in linear time (read Thomas algorithm). This is a huge time saver when compared to solving with ${I-\tau(A+B)}$ which has a fairly large bandwidth (no matter how you reorder).
How do they prove convergence of the method? They don’t since they wanted to solve a parabolic PDE. They were after stability of the scheme, and this can be done by analyzing the eigenvalues of the iteration. Since the matrices ${A}$ and ${B}$ are well understood, they were able to write down the eigenfunctions of the operator associated to iteration (3) explicitly and since the finite difference approximation is well understood, they were able to prove approximation properties. Note that the method can also be seen, as a means to calculate the steady state of the heat equation.
We reformulate the iteration (3) further to see how ${w^{n+1}}$ is actually derived from ${w^{n}}$: We obtain
$\displaystyle (-I + \tau(A+B) - \tau^{2}AB)w^{n+1} = (-I-\tau^{2}AB)w^{n} \ \ \ \ \ (4)$
2. What about monotone inclusions?
What has the previous section to do with solving monotone inclusions? A monotone inclusion is
$\displaystyle \begin{array}{rcl} 0\in Tx \end{array}$
with a monotone operator, that is, a multivalued mapping ${T}$ from a Hilbert space ${X}$ to (subsets of) itself such that for all ${x,y\in X}$ and ${u\in Tx}$ and ${v\in Ty}$ it holds that
$\displaystyle \begin{array}{rcl} \langle u-v,x-y\rangle\geq 0. \end{array}$
We are going to restrict ourselves to real Hilbert spaces here. Note that linear operators are monotone if they are positive semi-definite and further note that monotone linear operators need not to be symmetric. A general approach to the solution of monotone inclusions are so-called splitting methods. There one splits ${T}$ additively ${T=A+B}$ as a sum of two other monotone operators. Then one tries to use the so-called resolvents of ${A}$ and ${B}$, namely
$\displaystyle \begin{array}{rcl} R_{A} = (I+A)^{-1},\qquad R_{B} = (I+B)^{-1} \end{array}$
to obtain a numerical method. By the way, the resolvent of a monotone operator always exists and is single valued (to be honest, one needs a regularity assumption here, namely one need maximal monotone operators, but we will not deal with this issue here).
The two operators ${A = \partial_{xx}}$ and ${B = \partial_{yy}}$ from the previous section are not monotone, but ${-A}$ and ${-B}$ are, so the equation ${-Au - Bu = 0}$ is a special case of a montone inclusion. To work with monotone operators we rename
$\displaystyle \begin{array}{rcl} A \leftarrow -A,\qquad B\leftarrow -B \end{array}$
and write the iteration~(4) in terms of monotone operators as
$\displaystyle \begin{array}{rcl} (I + \tau(A+B) + \tau^{2}AB)w^{n+1} = (I+\tau^{2}AB)w^{n}, \end{array}$
i.e.
$\displaystyle \begin{array}{rcl} w^{n+1} = (I+\tau A+\tau B+\tau^{2}AB)^{-1}(I+\tau AB)w^{n}. \end{array}$
Using ${I+\tau A+\tau B + \tau^{2}A = (I+\tau A)(I+\tau B)}$ and ${(I+\tau^{2}AB) = (I-\tau B) + (I + \tau A)\tau B}$ we rewrite this in terms of resolvents as
$\displaystyle \begin{array}{rcl} w^{n+1} & = &(I+\tau B)^{-1}[(I+\tau A)^{-1}(I-\tau B) + \tau B]w^{n}\\ & =& R_{\tau B}(R_{\tau A}(w^{n}-\tau Bw^{n}) + \tau Bw^{n}). \end{array}$
This is not really applicable to a general monotone inclusion since there ${A}$ and ${B}$ may be multi-valued, i.e. the term ${Bw^{n}}$ is not well defined (the iteration may be used as is for splittings where ${B}$ is monotone and single valued, though).
But what to do, when both and ${A}$ and ${B}$ are multivaled? The trick is, to introduce a new variable ${w^{n} = R_{\tau B}(u^{n})}$. Plugging this in throughout leads to
$\displaystyle \begin{array}{rcl} R_{\tau B} u^{n+1} & = & R_{\tau B}(R_{\tau A}(R_{\tau B}u^{n}-\tau B R_{\tau B}u^{n}) + \tau B R_{\tau B}u^{n}). \end{array}$
We cancel the outer ${R_{\tau B}}$ and use ${\tau B R_{\tau B}u^{n} = u^{n} - R_{\tau B}u^{n}}$ to get
$\displaystyle \begin{array}{rcl} u^{n+1} & = & R_{\tau A}(2R_{\tau B}u^{n} - u^{n}) + u^{n} - R_{\tau B}u^{n} \end{array}$
and here we go: This is exactly what is known as Douglas-Rachford method (see the last version of the iteration in my previous post). Note that it is not ${u^{n}}$ that converges to a solution, but ${w^{n} = R_{\tau B}u^{n}}$, so it is convenient to write the iteration in the two variables
$\displaystyle \begin{array}{rcl} w^{n} & = & R_{\tau B}u^{n}\\ u^{n+1} & = & R_{\tau A}(2w^{n} - u^{n}) + u^{n} - w^{n}. \end{array}$
The observation, that these splitting method that Douglas and Rachford devised for linear problems has a kind of much wider applicability is due to Lions and Mercier and the paper is
Lions, Pierre-Louis, and Bertrand Mercier. “Splitting algorithms for the sum of two nonlinear operators.” SIAM Journal on Numerical Analysis 16.6 (1979): 964-979.
Other, much older, splitting methods for linear systems, such as the Jacobi method, the Gauss-Seidel method used different properties of the matrices such as the diagonal of the matrix or the upper and lower triangluar parts and as such, do not generalize easily to the case of operators on a Hilbert space.
Consider a convex optimization problem of the form
$\displaystyle \begin{array}{rcl} \min_{x}F(x) + G(Ax) \end{array}$
with convex ${F}$ and ${G}$ and matrix ${A}$. (We formulate everything quite loosely, skipping over details like continuity and such, as they are irrelevant for the subject matter). Optimization problems of this type have a specific type of dual problem, namely the Fenchel-Rockafellar dual, which is
$\displaystyle \begin{array}{rcl} \max_{y}-F^{*}(-A^{T}y) - G^{*}(y) \end{array}$
and under certain regularity conditions it holds that the optimal value of the dual equals the the objective value of the primal and, moreover, that a pair ${(x^{*},y^{*})}$ is both primal and dual optimal if and only if the primal dual gap is zero, i.e. if and only if
$\displaystyle \begin{array}{rcl} F(x^{*})+G(Ax^{*}) + F^{*}(-A^{T}y^{*})+G^{*}(y^{*}) = 0. \end{array}$
Hence, it is quite handy to use the primal dual gap as a stopping criteria for iterative methods to solve these problems. So, if one runs an algorithm which produces primal iterates ${x^{k}}$ and dual iterates ${y^{k}}$ one can monitor
$\displaystyle \begin{array}{rcl} \mathcal{G}(x^{k},y^{k}) = F(x^{k})+G(Ax^{k}) + F^{*}(-A^{T}y^{k})+G^{*}(y^{k}). \end{array}$
and stop if the value falls below a desired tolerance.
There is some problem with this approach which appears if the method produces infeasible iterates in the sense that one of the four terms in ${\mathcal{G}}$ is actually ${+\infty}$. This may be the case if ${F}$ or ${G}$ are not everywhere finite or, loosely speaking, have linear growth in some directions (since then the respective conjugate will not be finite everywhere). In the rest of the post, I’ll sketch a general method that can often solve this particular problem.
For the sake of simplicity, consider the following primal dual algorithm
$\displaystyle \begin{array}{rcl} x^{k+1} & = &\mathrm{prox}_{\tau F}(x^{k}-A^{T}y^{k})\\ y^{k+1} & = &\mathrm{prox}_{\sigma G^{*}}(y^{k}+\sigma A(2x^{k+1}-x^{k})) \end{array}$
(also know as primal dual hybrid gradient method or Chambolle-Pock’s algorithm). It converges as soon as ${\sigma\tau\leq \|A\|^{-2}}$.
While the structure of the algorithm ensures that ${F(x^{k})}$ and ${G^{*}(y^{k})}$ are always finite (since always ${\mathrm{prox}_{F}(x)\in\mathrm{dom}(F)}$), is may be that ${F^{*}(-A^{T}y^{k})}$ or ${G(Ax^{k})}$ are indeed infinite, rendering the primal dual gap useless.
Let us assume that the problematic term is ${F^{*}(-A^{T}y^{k})}$. Here is a way out in the case where one can deduce some a-priori bounds on ${x^{*}}$, i.e. a bounded and convex set ${C}$ with ${x^{*}\in C}$. In fact, this is often the case (e.g. one may know a-priori that there exist lower bounds ${l_{i}}$ and upper bounds ${u_{i}}$, i.e. it holds that ${l_{i}\leq x^{*}_{i}\leq u_{i}}$). Then, adding these constraints to the problem will not change the solution.
Let us see, how this changes the primal dual gap: We set ${\tilde F(x) = F(x) + I_{C}(x)}$ where ${C}$ is the set which models the bound constraints. Since ${C}$ is a bounded convex set and ${F}$ is finite on ${C}$, it is clear that
$\displaystyle \begin{array}{rcl} \tilde F^{*}(\xi) = \sup_{x\in C}\,\langle \xi,x\rangle - F(x) \end{array}$
is finite for every ${\xi}$. This leads to a finite duality gap. However, one should also adapt the prox operator. But this is also simple in the case where the constraint ${C}$ and the function ${F}$ are separable, i.e. ${C}$ encodes bound constraints as above (in other words ${C = [l_{1},u_{1}]\times\cdots\times [l_{n},u_{n}]}$) and
$\displaystyle \begin{array}{rcl} F(x) = \sum_{i} f_{i}(x_{i}). \end{array}$
Here it holds that
$\displaystyle \begin{array}{rcl} \mathrm{prox}_{\sigma \tilde F}(x)_{i} = \mathrm{prox}_{\sigma f_{i} + I_{[l_{i},u_{i}]}}(x_{i}) \end{array}$
and it is simple to see that
$\displaystyle \begin{array}{rcl} \mathrm{prox}_{\sigma f_{i} + I_{[l_{i},u_{i}]}}(x_{i}) = \mathrm{proj}_{[l_{i},u_{i}]}\mathrm{prox}_{\tau f_{i}}(x_{i}), \end{array}$
i.e., only uses the proximal operator of ${F}$ and project onto the constraints. For general ${C}$, this step may be more complicated.
One example, where this makes sense is ${L^{1}-TV}$ denoising which can be written as
$\displaystyle \begin{array}{rcl} \min_{u}\|u-u^{0}\|_{1} + \lambda TV(u). \end{array}$
Here we have
$\displaystyle \begin{array}{rcl} F(u) = \|u-u^{0}\|_{1},\quad A = \nabla,\quad G(\phi) = I_{|\phi_{ij}|\leq 1}(\phi). \end{array}$
The guy that causes problems here is ${F^{*}}$ which is an indicator functional and indeed ${A^{T}\phi^{k}}$ will usually be dual infeasible. But since ${u}$ is an image with a know range of gray values one can simple add the constraints ${0\leq u\leq 1}$ to the problem and obtains a finite dual while still keeping a simple proximal operator. It is quite instructive to compute ${\tilde F}$ in this case.
Next Page »
|
2019-09-18 01:50:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 433, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.891758143901825, "perplexity": 207.92787281481216}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573173.68/warc/CC-MAIN-20190918003832-20190918025832-00435.warc.gz"}
|
https://www.rdocumentation.org/packages/rlang/versions/0.2.0/topics/is_call
|
# is_call
0th
Percentile
##### Is object a call?
This function tests if x is a call. This is a pattern-matching predicate that returns FALSE if name and n are supplied and the call does not match these properties. is_unary_call() and is_binary_call() hardcode n to 1 and 2.
##### Usage
is_call(x, name = NULL, n = NULL, ns = NULL)
##### Arguments
x
An object to test. If a formula, the right-hand side is extracted.
name
An optional name that the call should match. It is passed to sym() before matching. This argument is vectorised and you can supply a vector of names to match. In this case, is_call() returns TRUE if at least one name matches.
n
An optional number of arguments that the call should match.
ns
The namespace of the call. If NULL, the namespace doesn't participate in the pattern-matching. If an empty string "" and x is a namespaced call, is_call() returns FALSE. If any other string, is_call() checks that x is namespaced within ns.
##### Life cycle
is_lang() has been soft-deprecated and renamed to is_call() in rlang 0.2.0 and similarly for is_unary_lang() and is_binary_lang(). This renaming follows the general switch from "language" to "call" in the rlang type nomenclature. See lifecycle section in call2().
is_expression()
library(rlang) # NOT RUN { is_call(quote(foo(bar))) # You can pattern-match the call with additional arguments: is_call(quote(foo(bar)), "foo") is_call(quote(foo(bar)), "bar") is_call(quote(foo(bar)), quote(foo)) # Match the number of arguments with is_call(): is_call(quote(foo(bar)), "foo", 1) is_call(quote(foo(bar)), "foo", 2) # By default, namespaced calls are tested unqualified: ns_expr <- quote(base::list()) is_call(ns_expr, "list") # You can also specify whether the call shouldn't be namespaced by # supplying an empty string: is_call(ns_expr, "list", ns = "") # Or if it should have a namespace: is_call(ns_expr, "list", ns = "utils") is_call(ns_expr, "list", ns = "base") # The name argument is vectorised so you can supply a list of names # to match with: is_call(quote(foo(bar)), c("bar", "baz")) is_call(quote(foo(bar)), c("bar", "foo")) is_call(quote(base::list), c("::", ":::", "\$", "@")) # }
|
2020-02-27 08:50:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1778768002986908, "perplexity": 13101.855432086271}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146665.7/warc/CC-MAIN-20200227063824-20200227093824-00158.warc.gz"}
|
http://www.koreascience.or.kr/article/JAKO201107049668616.page
|
### ABUNDANT SEMIGROUPS WITH QUASI-IDEAL S-ADEQUATE TRANSVERSALS
DOI QR Code
Kong, Xiangjun;Wang, Pei
• 투고 : 2010.01.11
• 발행 : 2011.01.31
• 18 5
#### 초록
In this paper, the connection of the inverse transversal with the adequate transversal is explored. It is proved that if S is an abundant semigroup with an adequate transversal $S^o$, then S is regular if and only if $S^o$ is an inverse semigroup. It is also shown that adequate transversals of a regular semigroup are just its inverse transversals. By means of a quasi-adequate semigroup and a right normal band, we construct an abundant semigroup containing a quasi-ideal S-adequate transversal and conversely, every such a semigroup can be constructed in this manner. It is simpler than the construction of Guo and Shum [9] through an SQ-system and the construction of El-Qallali [5] by W(E, S).
#### 참고문헌
1. T. S. Blyth and R. B. McFadden, Regular semigroups with a multiplicative inverse transversal, Proc. Roy. Soc. Edinburgh Sect. A 92 (1982), no. 3-4, 253-270. https://doi.org/10.1017/S0308210500032522
2. T. S. Blyth and M. H. Almeida Santos, A classification of inverse transversals, Comm. Algebra 29 (2001), no. 2, 611-624. https://doi.org/10.1081/AGB-100001527
3. T. S. Blyth and M. H. Almeida Santos, Amenable orders associated with inverse transversals, J. Algebra 240 (2001), no. 1, 143-164. https://doi.org/10.1006/jabr.2001.8730
4. J. F. Chen, Abundant semigroups with adequate transversals, Semigroup Forum 60 (2000), no. 1, 67-79. https://doi.org/10.1007/s002330010004
5. A. El-Qallali, Abundant semigroups with a multiplicative type A transversal, Semigroup Forum 47 (1993), no. 3, 327-340. https://doi.org/10.1007/BF02573770
6. J. B. Fountain, Abundant semigroups, Proc. London Math. Soc. (3) 44 (1982), no. 1, 103-129. https://doi.org/10.1112/plms/s3-44.1.103
7. J. B. Fountain, Adequate semigroups, Proc. Edinburgh Math. Soc. (2) 22 (1979), no. 2, 113-125. https://doi.org/10.1017/S0013091500016230
8. X. J. Guo, Abundant semigroups with a multiplicative adequate transversal, Acta Math. Sin. (Engl. Ser.) 18 (2002), no. 2, 229-244. https://doi.org/10.1007/s101140200170
9. X. J. Guo and K. P. Shum, Abundant semigroups with Q-adequate transversals and some of their special cases, Algebra Colloq. 14 (2007), no. 4, 687-704. https://doi.org/10.1142/S1005386707000636
10. X. J. Kong, Abundant semigroups with quasi-ideal adequate transversals, Adv. Math. (China) 37 (2008), no. 1, 31-40.
11. T. Saito, Construction of regular semigroups with inverse transversals, Proc. Edinburgh Math. Soc. (2) 32 (1989), no. 1, 41-51. https://doi.org/10.1017/S0013091500006891
12. X. L. Tang, Regular semigroups with inverse transversals, Semigroup Forum 55 (1997), no. 1, 24-32. https://doi.org/10.1007/PL00005909
#### 피인용 문헌
1. 1. CONGRUENCES ON ABUNDANT SEMIGROUPS WITH QUASI-IDEAL S-ADEQUATE TRANSVERSALS vol.29, pp.1, 2014, doi:10.4134/CKMS.2011.26.1.001
2. 2. Good congruences on abundant semigroups with $$PSQ$$ P S Q -adequate transversals vol.89, pp.2, 2014, doi:10.4134/CKMS.2011.26.1.001
3. 3. The product of quasi-ideal adequate transversals of an abundant semigroup vol.83, pp.2, 2011, doi:10.4134/CKMS.2011.26.1.001
#### 과제정보
연구 과제 주관 기관 : National Natural Science Foundation of China, Natural Science Foundation of Shandong Province
|
2018-12-10 20:07:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7189655303955078, "perplexity": 2302.9317162937286}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823442.17/warc/CC-MAIN-20181210191406-20181210212906-00007.warc.gz"}
|
http://www.formuladirectory.com/user/formula/200
|
HOSTING A TOTAL OF 318 FORMULAS WITH CALCULATORS
## Quick Ratio
The Quick Ratio is used for determining a company's ability to cover its short term debt with assets that can readily be transferred into cash, or quick assets. The Current Liabilities portion references liabilities that are payable within one year
## $\frac{a}{l}$
Here,a=quick assets,l=current liablity
ENTER THE VARIABLES TO BE USED IN THE FORMULA
Similar formulas which you may find interesting.
|
2019-02-23 15:10:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6550571322441101, "perplexity": 4181.34205044068}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249504746.91/warc/CC-MAIN-20190223142639-20190223164639-00242.warc.gz"}
|
https://rdrr.io/cran/metan/man/resca.html
|
# resca: Rescale a variable to have specified minimum and maximum... In metan: Multi Environment Trials Analysis
resca R Documentation
## Rescale a variable to have specified minimum and maximum values
### Description
Helper function that rescales a continuous variable to have specified minimum and maximum values.
The function rescale a continuous variable as follows:
Rv_i = (Nmax - Nmin)/(Omax - Omin) * (O_i - Omax) + Nmax
Where Rv_i is the rescaled value of the ith position of the variable/ vector; Nmax and Nmin are the new maximum and minimum values; Omax and Omin are the maximum and minimum values of the original data, and O_i is the ith value of the original data.
There are basically two options to use resca to rescale a variable. The first is passing a data frame to .data argument and selecting one or more variables to be scaled using .... The function will return the original variables in .data plus the rescaled variable(s) with the prefix _res. By using the function group_by from dplyr package it is possible to rescale the variable(s) within each level of the grouping factor. The second option is pass a numeric vector in the argument values. The output, of course, will be a numeric vector of rescaled values.
### Usage
resca(
.data = NULL,
...,
values = NULL,
new_min = 0,
new_max = 100,
na.rm = TRUE,
keep = TRUE
)
### Arguments
.data The dataset. Grouped data is allowed. ... Comma-separated list of unquoted variable names that will be rescaled. values Optional vector of values to rescale new_min The minimum value of the new scale. Default is 0. new_max The maximum value of the new scale. Default is 100 na.rm Remove NA values? Default to TRUE. keep Should all variables be kept after rescaling? If false, only rescaled variables will be kept.
### Value
A numeric vector if values is used as input data or a tibble if a data frame is used as input in .data.
### Author(s)
Tiago Olivoto tiagoolivoto@gmail.com
### Examples
library(metan)
library(dplyr)
# Rescale a numeric vector
resca(values = c(1:5))
# Using a data frame
resca(data_ge, GY, HM, new_min = 0, new_max = 1)
)
# Rescale within factors;
# Select variables that stats with 'N' and ends with 'L';
# Compute the mean of these variables by ENV and GEN;
# Rescale the variables that ends with 'L' whithin ENV;
data_ge2 %>%
select(ENV, GEN, starts_with("N"), ends_with("L")) %>%
mean_by(ENV, GEN) %>%
group_by(ENV) %>%
resca(ends_with("L")) %>%
|
2023-03-26 19:30:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31218457221984863, "perplexity": 5891.791266382455}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946445.46/warc/CC-MAIN-20230326173112-20230326203112-00076.warc.gz"}
|
https://nanonaren.com/2016/09/29/conditional-probability-problem-19365/
|
## Conditional Probability – Problem (19/365)
A problem asks the following. Let $\phi, \theta_1, \dots, \theta_n$ be independent random variables where $\{ \theta_i \}$ are identically distributed and $\phi$ takes values $1, \dots, n$. Show that if $S_\phi = \theta_1 + \dots + \theta_\phi$ then
$\displaystyle E(S_\phi | \phi) = \phi E\theta_1 \\ V(S_\phi | \phi) = \phi V\theta_1 \\ E S_\phi = E \phi E \theta_1 \\ V S_\phi = E \phi V \theta_1 + V \phi (E \theta_1)^2$
The first two follow due to $\{ \theta_i \}$ being idependenty and identically distributed
$\displaystyle E(S_\phi | \phi) = E \theta_1 + E \theta_2 + \dots + E \theta_\phi = \phi E \theta_1 \\ V(S_\phi | \phi) = V S_1 + V S_2 + \dots + V S_\phi = \phi V \theta_1$
The last two we compute by using 1) the generalized total probability formula as seen here and 2) the conditional variance formula as seen here.
$\displaystyle E S_\phi = E E(S_\phi | \phi) = E \phi E \theta_1 \\ V S_\phi = EV(S_\phi | \phi) + VE(S_\phi | \phi) \\ = E \phi V \theta_1 + V(\phi E \theta_1) \\ = E \phi V \theta_1 + \phi (E \theta_1)^2 \text{ because } V(a \theta + b) = a^2 V \theta$
This entry was posted in Uncategorized and tagged , . Bookmark the permalink.
|
2022-11-27 06:02:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 9, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.947620689868927, "perplexity": 575.9793128735748}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710192.90/warc/CC-MAIN-20221127041342-20221127071342-00040.warc.gz"}
|
https://www.physicsforums.com/threads/kinetic-energy-question.388376/
|
# Kinetic Energy Question
Paymemoney
## Homework Statement
A crate of mass 10.00kg is pulled up a rough incline with a n initial speed of 1.50m/s. The pulling force is 100N parallel to the incline, which makes an angle of 20.0 degrees with the horizontal. The coefficient kinetic friction is 0.400, and the crate is pulled 5.00m.
What is the change in kinetic energy if the crate?
K=0.5mgv^2
W=F*d
## The Attempt at a Solution
This what i have done but i cannot seem to get the right answer:
To find velocity:
W = 0.5mv^2 + Frictional force * d
500 = 5v^2 + 184
v= 7.98m/s
To Find change in kinetic energy:
K = 0.5mv^2
K = 5 * 7.95^2
K = 316
P.S
Gold Member
It may help to write up a diagram showing all the forces that act on the mass during its slide up the incline. Since you are interested in distances and forces parallel to the incline you should be able to project all the forces unto this direction, think about if they are constant or not, and then apply your energy equations (hint: increase in kinetic energy of the crate is the sum of the work (with sign) done by the pulling force, friction and gravity; kinetic energy then determines speed).
Kabbotta
Hey Pay, I'm a little confused. What is the answer you are looking to get?
First, your calculation of the final velocity doesn't depend on the initial velocity. If you needed the velocity it would come from this,
$$K_{f}$$ = $$K_{i}$$ - $$f_{k}$$ d + W
Secondly, for your KE calculation you are not computing a CHANGE in energy. Your calculation is the energy of the block at the instant it is moving 7.95 m/s. (I don't know where you got that velocity?)
But, I think you get the answer from the less explicit form of the work-kinetic energy thm:
$$\Delta$$K = - $$f_{k}$$ d + W
$$\Delta$$K = -$$\mu_{k}$$mgCos($$\theta$$) d + F d
Here's the confusing part, when I did that calculation quick it came out to,
$$\Delta$$K = 315.82 J.
The question only asks for the change in kinetic energy and you seemed to have got that somehow ; )
Paymemoney
I worked out that to get the change in kinetic energy you need to add up all the forces exerted by the crate to get the correct answer (according to what filiplarsen said).
so my final answer is 148 Joules.
Kabbotta
You can't add up forces to get a change in kinetic energy. Filplarsen correctly said you are dealing with the sum of forces applied over distances. Or work. A change in kinetic energy is work and that's what the problem, as written, is asking for.
Forces on the box that do work over the 5.00 meters:
Pulling force = 100 N. Friction force = u_k*N = -36.8 N.
Net force = 100 +(-36.8) = 63.164 N.
Again, if I multiply that by 5.00 m. I get 315.82 Joules. Is 148 J. the answer in the book? Please show quickly how you got an answer and what you are aiming for. Otherwise, I can't figure out where you got 148 J.
Gold Member
I get 315.82 Joules. Is 148 J. the answer in the book?
I also get 148J. Remember there is a third force at play here.
Kabbotta
Yep, I left out gravity, thanks guys. 148 J. is definitely right.
Paymemoney
ok i got another question related to this problem.
What is the speed of the crate after being pulled 5.00m?
This is what i have done, but unsure if it is correct. Answers says it 5.65m/s, however i got 5.44m/s
$$K=0.5mv^2$$
$$148=5v^2$$
$$\sqrt{\frac{148}{5}}=v$$
$$v=5.44m/s$$
Gold Member
You are on the right track but have missed something. Try read the problem text carefully.
Paymemoney
hmm.. is something to with the change in Kinetic energy?
Homework Helper
hmm.. is something to with the change in Kinetic energy?
148 = Kf - Ki= 5(vf^2 - vi^2)
Now find vf.
Gold Member
The 148 J is the change in kinetic energy, but that is not necessarily the final value of the kinetic energy (hint: did the crate had any kinetic energy to start with?)
Paymemoney
148 = Kf - Ki= 5(vf^2 - vi^2)
Now find vf.
Did you get 5(vf^2 - vi^2) equation from v^2= u^2 +2as????
Homework Helper
Did you get 5(vf^2 - vi^2) equation from v^2= u^2 +2as????
No.
Work done = Change in the kinetic energy
148 = 1/2*m*(vf^2 - vi^2)
Paymemoney
ok, i get it now thanks
|
2022-08-16 16:31:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5981627106666565, "perplexity": 787.048692940268}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572408.31/warc/CC-MAIN-20220816151008-20220816181008-00233.warc.gz"}
|
http://www.numdam.org/item/M2AN_2005__39_5_995_0/
|
Stabilization methods in relaxed micromagnetism
ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique, Tome 39 (2005) no. 5, pp. 995-1017.
The magnetization of a ferromagnetic sample solves a non-convex variational problem, where its relaxation by convexifying the energy density resolves relevant macroscopic information. The numerical analysis of the relaxed model has to deal with a constrained convex but degenerated, nonlocal energy functional in mixed formulation for magnetic potential $u$ and magnetization $𝐦$. In [C. Carstensen and A. Prohl, Numer. Math. 90 (2001) 65-99], the conforming $P1-{\left(P0\right)}^{d}$-element in $d=2,3$ spatial dimensions is shown to lead to an ill-posed discrete problem in relaxed micromagnetism, and suboptimal convergence. This observation motivated a non-conforming finite element method which leads to a well-posed discrete problem, with solutions converging at optimal rate. In this work, we provide both an a priori and a posteriori error analysis for two stabilized conforming methods which account for inter-element jumps of the piecewise constant magnetization. Both methods converge at optimal rate; the new approach is applied to a macroscopic nonstationary ferromagnetic model [M. Kružík and A. Prohl, Adv. Math. Sci. Appl. 14 (2004) 665-681 - M. Kružík and T. Roubíček, Z. Angew. Math. Phys. 55 (2004) 159-182 ].
DOI : https://doi.org/10.1051/m2an:2005043
Classification : 65K10, 65N15, 65N30, 65N50, 73C50, 73S10
Mots clés : micromagnetics, stationary, nonstationary, microstructure, relaxation, nonconvex minimization, degenerate convexity, finite elements methods, stabilization, penalization, a priori error estimates, a posteriori error estimates
@article{M2AN_2005__39_5_995_0,
author = {Funken, Stefan A. and Prohl, Andreas},
title = {Stabilization methods in relaxed micromagnetism},
journal = {ESAIM: Mathematical Modelling and Numerical Analysis - Mod\'elisation Math\'ematique et Analyse Num\'erique},
pages = {995--1017},
publisher = {EDP-Sciences},
volume = {39},
number = {5},
year = {2005},
doi = {10.1051/m2an:2005043},
zbl = {1079.78031},
mrnumber = {2178570},
language = {en},
url = {www.numdam.org/item/M2AN_2005__39_5_995_0/}
}
Funken, Stefan A.; Prohl, Andreas. Stabilization methods in relaxed micromagnetism. ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique, Tome 39 (2005) no. 5, pp. 995-1017. doi : 10.1051/m2an:2005043. http://www.numdam.org/item/M2AN_2005__39_5_995_0/
[1] J. Alberty, C. Carstensen and S.A. Funken, Remarks around 50 lines of Matlab: finite element implementation. Numer. Algorithms 20 (1999) 117-137. | Zbl 0938.65129
[2] W.F. Brown, Micromagnetics. Interscience, New York (1963).
[3] C. Carstensen and S. Funken, Adaptive coupling of penalised finite element methods and boundary element methods for relaxed micromagnetics. In preparation.
[4] C. Carstensen and D. Praetorius, Numerical analysis for a macroscopic model in micromagnetics. SIAM J. Numer. Anal. 42 (2005) 2633-2651, electronic. | Zbl 1088.78009
[5] C. Carstensen and A. Prohl, Numerical analysis of relaxed micromagnetics by penalized finite elements. Numer. Math. 90 (2001) 65-99. | Zbl 1004.78006
[6] A. De Simone, Energy minimizers for large ferromagnetic bodies. Arch. Rational Mech. Anal. 125 (1993) 99-143. | Zbl 0811.49030
[7] S.A. Funken and A. Prohl, On stabilized finite element methods in relaxed micromagnetism. Preprint 99-18, University of Kiel (1999).
[8] A. Hubert and R. Schäfer, Magnetic Domains. Springer (1998).
[9] P. Keast, Moderate-degree tetrahedral quadrature formulas. Comput. Methods Appl. Mech. Engrg. 55 (1986) 339-348. | Zbl 0572.65008
[10] M. Kružík, Maximum principle based algorithm for hysteresis in micromagnetics. Adv. Math. Sci. Appl. 13 (2003) 461-485. | Zbl 1093.82020
[11] M. Kružík and A. Prohl, Young measure approximation in micromagnetics. Numer. Math. 90 (2001) 291-307. | Zbl 0994.65078
[12] M. Kružík and A. Prohl, Macroscopic modeling of magnetic hysteresis. Adv. Math. Sci. Appl. 14 (2004) 665-681. | Zbl 1105.74034
[13] M. Kružík and A. Prohl, Recent developments in modeling, analysis and numerics of ferromagnetism. SIAM Rev. (accepted, 2005). | MR 2278438 | Zbl 1126.49040
[14] M. Kružík and T. Roubíček, Microstructure evolution model in micromagnetics. Z. Angew. Math. Phys. 55 (2004) 159-182. | Zbl 1059.82047
[15] M. Kružík and T. Roubíček, Interactions between demagnetizing field and minor-loop development in bulk ferromagnets. J. Magn. Magn. Mater. 277 (2004) 192-200.
[16] P. Pedregal, Parametrized Measures and Variational Principles. Birkhäuser (1997). | MR 1452107 | Zbl 0879.49017
[17] A. Prohl, Computational micromagnetism. Teubner (2001). | MR 1885923 | Zbl 0988.78001
[18] R. Verfürth, A review of a posteriori error estimation and adaptive mesh-refinement techniques. Wiley-Teubner (1996). | Zbl 0853.65108
|
2021-01-27 16:09:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40901920199394226, "perplexity": 7788.407913940759}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704828358.86/warc/CC-MAIN-20210127152334-20210127182334-00785.warc.gz"}
|
https://bitcointechweekly.com/briefs/decompiling-the-electrumpro-stealware/
|
### Decompiling the Electrumpro Stealware
Electrum is a popular Bitcoin wallet, distributed on electrum.org and spesmilo/electrum.
A few weeks ago scammers bought the electrum dot com domain and started using it to distribute a modified malware version of electrum called ElectrumPro to steal its user’s bitcoins.
The electrum team published a decompiling guide for ElectrumPro binary on windows to proove that it is indeed stealing users:
This document describes how to decompile the “Electrum Pro” Windows binaries, and how to verify that they indeed contain bitcoin-stealing malware. We previously warned users against “Electrum Pro”, but we did not have formal evidence at that time.
The scammers seem to have invested a big sum to acquire the domain, which was previously used by someone in the US to sell energy drinks and food. The change happened on the 23rd of March 2018 according to whois data:
Domain Name: ELECTRUM.COM
Registry Domain ID: 24034_DOMAIN_COM-VRSN
Updated Date: 2018-03-23T21:33:29Z
Creation Date: 1996-05-15T04:00:00Z
Registry Expiry Date: 2023-05-16T04:00:00Z
|
2019-02-17 11:20:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30737653374671936, "perplexity": 12677.750513176843}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247481992.39/warc/CC-MAIN-20190217111746-20190217133746-00183.warc.gz"}
|
http://physics.stackexchange.com/questions/76662/total-angular-momentum-single-electron
|
# Total angular momentum - single electron
I have been dealing with total angular momentum of the single electron which is outside the closed shells in which sum of the angular momentums is zero.
My book says that total atomic angular momentum is $\mathbf{J}=\mathbf{L+S}$ and its magnitude $J=\sqrt{j(j+1)}\hbar$ where quantum number $j$ can have half-integer values on an interval $|\ell-s|\leq j \leq|\ell+s|$. There is allso a quantum number $m_j$ which can have half-integer values on an interval $-j \leq m_j \leq j$.
So lets say I have a single electron in an orbital $\scriptsize\boxed{\ell=1}$ (p orbital). In this case $j=\tfrac{3}{2},\frac{1}{2}$ while $m_j=\tfrac{3}{2},\frac{1}{2},-\tfrac{3}{2},-\frac{1}{2}$ and I can calculate two magnitudes for $J$:
\begin{align} J&=\tfrac{\sqrt{15}}{2}\hbar\\ J&=\tfrac{\sqrt{3}}{2} \hbar \end{align}
Now there is this weird vector sum image that I don't understand completely:
On the left image we have $J=\tfrac{\sqrt{15}}{2}\hbar$ while on the right we have $J=\tfrac{\sqrt{3}}{2}\hbar$. It seems that vector $\mathbf{L}$ together with its magnitude $L=\sqrt{\ell(\ell+1)}\hbar = \sqrt{2}\hbar$ is the same in both cases - this makes sense as quantum number $\ell$ doesn't change. But how do we explain the change in $\mathbf{S}$?
-
The left image describes a spin up ($s=1/2$) and the right one a spin down ($s=-1/2$) electron. These two states are directly responsible for the fact that there are two possible values for $j$. The total angular momentum takes on two possible values, one for the case where spin is negative and one for when it is positive. This is how you can understand the relation
$$|l-s|\leq j \leq|l+s|.$$
Spin magnitude of an electron is allways the same $S=\sqrt{s(s+1)}\hbar=\frac{\sqrt{3}}{2}\hbar$ where $s$ is the spin quantum number which for electron equals $1/2$. By $s$ you probably meant the $z$ component of the spin $S_z=m_s\hbar$ where $m_s= \pm s = \pm\frac{1}{2}$? So in one case $S_z = \frac{1}{2}\hbar$ while in the other case $S_z = -\frac{1}{2}\hbar$? – 71GA Sep 8 '13 at 14:07
This means that the $z$ axis must be defined in the same direction as the total angular momentum $\mathbf{J}$. – 71GA Sep 8 '13 at 14:14
|
2016-06-26 19:05:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8852322101593018, "perplexity": 179.21934056595117}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395548.53/warc/CC-MAIN-20160624154955-00017-ip-10-164-35-72.ec2.internal.warc.gz"}
|
http://physics.aps.org/synopsis-for/10.1103/PhysRevB.80.241107
|
# Synopsis: The space between
#### X-ray photoemission studies of the metal-insulator transition in LaAlO3/SrTiO3 structures grown by molecular beam epitaxy
Y. Segal, J. H. Ngai, J. W. Reiner, F. J. Walker, and C. H. Ahn
Published December 23, 2009
Recent experiments have shown that a metal-insulator transition occurs at the interface between a ${\text{LaAlO}}_{3}$ layer and a ${\text{SrTiO}}_{3}$ substrate. As the ${\text{LaAlO}}_{3}$ thickness approaches four unit cells, a sharp jump in conductivity is observed when an additional single unit cell is added, but for thinner layers the interface is insulating. This behavior is believed to be the consequence of an electron gas appearing suddenly at the interface, but its origin is controversial. Mechanisms intrinsic to the structure, as well as extrinsic effects related to defects and oxygen vacancies, have been proposed.
In a Rapid Communication appearing in Physical Review B, Yaron Segal and colleagues at Yale University, US, report their investigation of ${\text{LaAlO}}_{3}/{\text{SrTiO}}_{3}$ structures grown by molecular beam epitaxy, which avoids the problem of defects produced by the laser deposition methods used in previous studies. With x-ray photoemission spectroscopy, the Yale group has been able to measure the strength of the electric field at the interface by looking for offsets in the electronic band structure.
Segal et al. find that the metal-insulator transition is present regardless of the growth method, suggesting that deposition-induced defects are not playing a role in the increase in conductivity. Moreover, the photoemission data reveal smaller electric fields than expected, casting doubt on the applicability of models that rely on intense charge discontinuities at the interface and indicating that the theoretical understanding of these unusual materials will need further refinement. – David Voss
|
2015-01-31 11:37:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3902091383934021, "perplexity": 1379.8467969613926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115869647.75/warc/CC-MAIN-20150124161109-00047-ip-10-180-212-252.ec2.internal.warc.gz"}
|
https://cs.stackexchange.com/questions/56748/on-intersection-of-classes
|
# On intersection of classes [closed]
Consider classes $\mathcal C_1$ and $\mathcal C_2$ of problems both of which are $\mathsf{NP}$-complete. Does it mean $\mathcal C_1\cap\mathcal C_2$ of problems is $\mathsf{NP}$-complete?
## closed as unclear what you're asking by D.W.♦Apr 29 '16 at 3:11
Please clarify your specific problem or add additional details to highlight exactly what you need. As it's currently written, it’s hard to tell exactly what you're asking. See the How to Ask page for help clarifying this question. If this question can be reworded to fit the rules in the help center, please edit the question.
• It's not clear what you mean by "class of problems...[is] NP-complete", so the problem doesn't seem well-specified at present. The definition of NP-completeness applies to problems, not classes of problems. Please edit the question to clarify what you are asking. Also, as always, you should show your thoughts and what thinking you've already done. – D.W. Apr 29 '16 at 3:10
• The title you have chosen is not well suited to representing your question. Please take some time to improve it; we have collected some advice here. Thank you! – D.W. Apr 29 '16 at 3:11
In any case, it sounds like it's possible that $\mathcal{C}_1\cap\mathcal{C}_2 = \emptyset$, in which case the answer seems to be "no". (Well, except that every problem in $\emptyset$ is NP-complete, in which case the answer would be "yes" but for vacuous reasons.)
|
2019-11-19 02:59:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4990101456642151, "perplexity": 389.0896782739355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669967.80/warc/CC-MAIN-20191119015704-20191119043704-00220.warc.gz"}
|
http://www.varsitytutors.com/gre_subject_test_math-help/axioms-of-probability
|
GRE Subject Test: Math : Axioms of Probability
Example Questions
Example Question #1 : Probability
A student has 14 piece of gum, 3 are spearmint, 5 are peppermint, and the rest are cinnamon. If one piece of gum is chosen at random, which of the following is NOT true.
The probability of not picking a cinnamon is .
The probability of picking a spearmint or a cinnamon is .
The probability of picking a cinnamon is .
The probability of picking a cinnamon or a peppermint is .
The probability of picking a spearmint is .
The probability of picking a spearmint or a cinnamon is .
Explanation:
The probability of picking a spearmint or a cinnamon is the addition of probability of picking a spearmint and the probability of picking a cinnamon
not
|
2016-10-21 23:55:56
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9683694243431091, "perplexity": 2651.6530287542464}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718311.12/warc/CC-MAIN-20161020183838-00345-ip-10-171-6-4.ec2.internal.warc.gz"}
|
http://rfcwalters.blogspot.com/2014/10/
|
## Thursday, October 30, 2014
### The mess we are in with scientific publishing II - not in the club
previous post in this series; next post in this series
I mentioned various problems with scientific publishing in the previous post.
I neglected to mention the most discussed problems like the ownership of results by private companies, the lack of free access, the uncontrolled costs, the publicly funded work by referees and editors, et cetera.
Today I want to talk about another change in scientific publishing which has occurred in my lifetime. When I was young (1970) there were many fewer journals, mostly of scientific societies, and mostly journals of mathematics with a content intended to be of interest to all mathematicians.
Labels:
## Sunday, October 19, 2014
I just linked up this blog with Google+. This is a test to see what the effect of that is. I assume my posts will now be visible from Google+.
## Saturday, October 04, 2014
### My 1992 FTP site
Don't believe any links, or email addresses. Almost all have disappeared. Notice the strange domain name for mathematics at Sydney.
We did not get to see the web until at least 1993 when Mosaic came out (though we had to use Chimera on the Appollos). I haven't found my first email but I think it was around 1985 when I went to conferences announcing the importance of email, in particular to us in Australia. Hard to remember those times.
ftp site at the University of Sydney
# ftp site at the University of Sydney
Date: Thu, 2 Apr 92 13:35:18 +10
SYDNEY CATEGORY THEORY SEMINAR
SYDNEY CATEGORIES IN COMPUTER SCIENCE SEMINAR
Category theory material
Available by Anonymous FTP
from maths.su.oz.au
Labels:
### The mess we are in with scientific publishing
Next post in this series
I am a little out of the Italian academic scene since my retirement, but if I understood the recent rounds of hiring in Italy they occurred in the following way. It was possible to apply for "abilitazione", that is a judgment was made if you were at an appropriate level to hold a post. This was made in most cases on the grounds of purely numerical indicators, and was supposed to be a threshold. It was not a competition. A deeper analysis, actually looking at the papers or asking experts who had read the papers, was clearly impossible since, for example, in Computer Science a committee of 5 had to judge in a year the qualities of approximately 900 applicants. (There were even complaints when more than numbers were used, since that gave power to the "barons".)
After this judgment a great number of the abilitati were given permanent posts, thus filling up vacancies for some time.
The pressure this type of thing puts on scientific publishing is enormous. Referees, while trying to make judgments on papers, now have to consider that they are deciding the careers of young people, the grants for older people, that the prestige of the journal will affect jobs and grants. The reason the pressure is so enormous is that the job and granting committees don't look at the papers, just the numbers.
The job of a referee has become impossible, at the same time that there is more and more need for referees since scientists are being forced to publish more and more.
Scientific publishing must free itself from these pressures.
Thinking about this situation brought back to mind some thoughts of Bernhard Neumann.
|
2018-08-17 16:51:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4217313230037689, "perplexity": 1528.4872893871438}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221212639.36/warc/CC-MAIN-20180817163057-20180817183057-00402.warc.gz"}
|
https://puzzling.stackexchange.com/questions/100478/a-simple-password-puzzle-or-maybe-not
|
# A Simple Password Puzzle. Or Maybe Not
## Prologue:
It's me, KryptonOmega, back with a password puzzle. Enjoy!
## Edits:
1. Add "unique" in condition 2 for clarification.
2. Length of answer is five. (Why would you expect more when there are five underscores only???)
• Welcome back! :D – Jafe Jul 27 '20 at 10:32
• oh hiii @jafe :) – Omega Krypton Jul 27 '20 at 10:33
• are $\alpha, \beta, \gamma, \delta, \varepsilon$ each a single digit? – Mark Murray Jul 27 '20 at 10:37
• @MarkMurray an intriguing question, I haven't thought of that lol – Omega Krypton Jul 27 '20 at 10:42
S E V E N
Because
Using a letter's sequence number value:
1. (s + v + n) = (11 * e) or (19 + 14 + 22) = (11 * 5) = 55
2. v is the largest valued letter among the 5 letters
3. n + e = s or 14 + 5 = 19
4. e >= e
5. (e + e) < n or (5 + 5) < 14
6. e is the duplicate
7. seven is a prime number
Explanation
Well, the way I figured out this answer was noticing the 5 spaces for the answer and the fact that the answer was prime, but wouldn't be so easy as to actually be the word "PRIME".
So, my next thought was "SEVEN" because condition 6 said their were duplicates in the variables.
I then saw condition 2 was fulfilled with "v" and condition 3 worked out mathematically and wished I was better at writing up this type of answer because this took forever.
So, I ran the math for the remaining conditions and it actually was a valid answer. So, I decided to answer.
• No logical deduction seen. Please explain how you reached the answer through deduction. Keep it up! – Omega Krypton Jul 27 '20 at 13:31
• Explain my thought process as to how I figured out the clues? – MacGyver88 Jul 27 '20 at 13:35
• ok, this answer is correct. this isnt logical deduction tho. ill add my answer as explanation, and accept this later :) well done – Omega Krypton Jul 27 '20 at 14:31
• @Omega, I appreciate that. I'm sure doing so will help me to understand the proper way of explaining this type of question's answer in the future. – MacGyver88 Jul 27 '20 at 14:38
## Setting the scene
We have five numbers $$\alpha,\beta,\gamma,\delta,\epsilon$$ satisfying seven conditions. These numbers must all be non-negative integers (we assume). I tried assuming they are all single digits (at most $$9$$), but got a contradiction as shown in the first revision of this answer, so we know $$\gamma\geq10$$.
## Step-by-step deduction
• Firstly, can any of them be zero?
$$\alpha\neq0$$ since it's the first digit, so $$\beta\neq0$$ by condition 1. Also $$\gamma\neq0$$ by condition 2, and $$\epsilon\neq0$$ by condition 5. The only one which might be zero is $$\delta$$, since all we know about it is inequalities.
• The password is prime, so
$$\epsilon$$ must be, or end in, one of $$1,3,7,9$$.
• Substituting condition 3 into condition 1, we find
$$\gamma+2\epsilon=10\beta$$, so $$\gamma$$ is even and $$\beta\geq2$$, meaning $$\epsilon\geq3$$.
• Going back to condition 6, we notice
$$\gamma$$ is not the same as any of the others (by condition 2), neither is $$\alpha$$ (it's bigger than all the others, by condition 3, nonzeroness, and condition 4), neither is $$\epsilon$$ (again bigger than all the others, by condition 5). So we have $$\beta=\delta<\epsilon<\alpha<\gamma$$.
• Now conditions 4 and 6 are used up, and condition 5 gives
$$\epsilon>2\beta$$, therefore $$\epsilon\geq7$$ by the prime condition and $$\beta\geq2$$. Using $$\gamma+2\epsilon=10\beta$$, that means $$\beta\geq3$$.
Note that after fixing $$\beta$$ and $$\epsilon$$, the others are completely determined:
$$\delta=\beta,\alpha=\beta+\epsilon,\gamma=10\beta-2\epsilon$$. So we need $$10\beta-2\epsilon>\beta+\epsilon$$, which means $$\epsilon<3\beta$$. Overall, $$2\beta<\epsilon<3\beta$$, and also $$\epsilon$$ is a prime ending.
Let's now just try possibilities starting from the smallest:
• If $$\beta=3$$, then we must have
$$\beta=\delta=3$$, $$\epsilon=7$$, $$\alpha=10$$, $$\gamma=16$$, giving the password $$1031637$$, but that's a multiple of 3.
• If $$\beta=4$$, then we must have
$$\beta=\delta=4$$, $$\epsilon=9\text{ or }11$$, $$\alpha=13\text{ or }15$$, $$\gamma=22\text{ or }18$$, giving the password $$1342249$$ or $$15418411$$, but these are not prime.
• If $$\beta=5$$, then we must have
$$\beta=\delta=5$$, $$\epsilon=11\text{ or }13$$, $$\alpha=16\text{ or }18$$, $$\gamma=28\text{ or }24$$, giving the password $$16528511$$ or $$18524513$$, of which the latter is not prime but the former is.
## Final solution for the password
$$16528511$$.
• Condition 2 is not met, since alpha is now = gamma. Sorry, and keep it up! – Omega Krypton Jul 27 '20 at 10:50
• @OmegaKrypton You didn't say it's the unique largest! – Rand al'Thor Jul 27 '20 at 10:51
• ok i'll add that. yes it is the unique largest. - done, edited. – Omega Krypton Jul 27 '20 at 10:51
• @OmegaKrypton OK, got it now. – Rand al'Thor Jul 27 '20 at 11:33
• length is 5. ^^^ – Omega Krypton Jul 27 '20 at 12:59
The answer is, as shown by @MacGvyer88,
SEVEN
but they did not provide logical deduction, so here it is.
@RandAlThor has proven that a five-digit number is impossible as the answer. Yet it is mentioned in the clarifications that the length is 5. Therefore it may be a
Word of length five.
Well, intrinsically, you can say that
we only need to consider "THREE" "SEVEN"
but I will try to deduct this on the approach that we only know that it is a five-letter word and that A1Z26 is used.
From (1) the max of LHS is 26+25+25 = 76 since gamma is unique largest. In other words beta <= floor(76/11) = 6.
Therefore beta is
A B C D E or F.
And in second position it is very likely a
vowel. so A or E are most likely.
Notice condition 6 regarding duplicates.
From (2) gamma isn't a duplicate.
From (3) we know that alpha ≠ epsilon.
Based on all this we know that either beta or delta is one of the duplicates.
(5) shows that beta ≠ epsilon and delta ≠ epsilon, so epsilon is out.
We now have alpha, beta and delta being possible duplicates.
(3) shows that alpha ≠ beta
Therefore either alpha = delta or beta = delta.
Given beta = A or E most likely, we now have
+a_+_ or +e_+_ or _a_a_ or _e_e_
And this should be enough, given (7) for you to make out
SEVEN.
I am challenged to prove why beta is A or E. First of all I said "most likely". Secondly, here it is if you need proof.
• Thanks for taking the time to do this. I struggle with these types of explanations as my thought process is often not explained easily. I was going to write, thanks to @Rand's explanation, we know it can't be a number sequence. So, I kind of get it. However, I am not a practiced logician. So, this is actually a lot for me to grasp. – MacGyver88 Jul 28 '20 at 14:20
• Please use logical deduction to explain why "beta = A or E" :P – Cireo Jul 28 '20 at 18:41
• @Cireo done. see edit :) – Omega Krypton Jul 29 '20 at 5:29
• @MacGyver88 youre welcome, hope this helped :) – Omega Krypton Jul 29 '20 at 5:30
Alpha = 16, Beta = 5, Gamma = 28, Delta = 5, Epsilon = 11
Checking
1) 16+28+11 = 11*5
2) 28 is unique largest
3) 11+5=16
4) 5 >= 5
5) 5+5 < 11
6) Beta and Delta are both 5
7) 16528511 is prime
Partial explanation (to be continued):
Under the assumption that the Greek letters denote single digits, we quickly see that there is no solution. By trial and error, we show that Beta = 2, 3 and 4 won't work, and Beta = 5 yields the answer given above. Perhaps the larger value for Beta can give different solution, so this is not unique...
• The "trial and error" process you mention is quite lengthy unless you figure out a lot more restrictions on the possible choices, isn't it? (I guess you did and that's the "to be continued" part :-) ) – Rand al'Thor Jul 27 '20 at 11:42
• please use logical deduction. also, length of answer is five. (Why would you expect more when there are five underscores only???) – Omega Krypton Jul 27 '20 at 12:18
• @OmegaKrypton Well, you did not specify that each underscore represents a digit, so they've assumed it's numbers instead – Chronocidal Jul 28 '20 at 13:00
• @Chronocidal well they are not digits coz i googled and it says digits are strictly numbers only – Omega Krypton Jul 28 '20 at 13:47
|
2021-07-27 09:26:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 51, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8496455550193787, "perplexity": 1050.771407072381}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153223.30/warc/CC-MAIN-20210727072531-20210727102531-00283.warc.gz"}
|
https://brilliant.org/problems/a-rather-normal-question/
|
# A rather normal question
Calculus Level 4
Let $$P$$ be a point (other than the origin) on the curve $$y = x^{4}$$. Let $$Q$$ be the point where the normal line to the given curve at $$P$$ intersects the $$x$$-axis.
With point $$O$$ being the origin, form the triangle $$OPQ$$. Over all such triangles, what is the minimum value of the ratio of the base length $$OQ$$ to the height of $$\Delta OPQ$$?
×
|
2017-10-20 23:48:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8214098215103149, "perplexity": 169.45421476846363}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824471.6/warc/CC-MAIN-20171020230225-20171021010225-00435.warc.gz"}
|
https://tex.stackexchange.com/questions/460171/combining-itemize-and-aligned-first-line-to-the-left-and-remaining-lines-to-the
|
Combining itemize and aligned: first line to the left and remaining lines to the right
Does any know how I can combine itemize and aligned (with argument [t]) -- or another multi-line mathematical environment -- in order to obtain a multi-line equation whose first line is aligned to the left and the remaining lines are aligned to the right?
I tried to illustrate what I would like to obtain in the picture below.
My current latex-code is the following:
\documentclass[letterpaper]{article}
\usepackage{amsthm}
\usepackage{mathtools}
\usepackage{bm}
\begin{document}
$\bm{f}^{\cup}(B^1,B^2), \bm{f}^{\cap}(B^1,B^2) \in \mathcal{B}(\Sigma_1\cup\Sigma_2, w\cdot w^{\prime})$, respectively, where
\begin{itemize}\small
\item \begin{aligned}[t] \ell(\bm{f}^{\cup}(B^1,B^2)) = \{w^{\prime}\cdot(\mathfrak{p}-1) + \mathfrak{p}^{\prime} \colon \mathfrak{p}\in\ell(B_i),\\ \mathfrak{p}^{\prime}\in\ell(B_i)\}\text{;} \end{aligned}
\item $r(B_i^{*}) = \{w^{\prime}\cdot(\mathfrak{q}-1) + \mathfrak{q}^{\prime} \colon \mathfrak{q} \in r(B_i), \mahfrak{q}^{\prime}\in r(B_i)\}$ ;
\end{itemize}
\end{document}
which results in something like the picture below.
Thanks!!
• please be so kind and show us what you try so far. writing your equation from scratch is fun :-( – Zarko Nov 15 '18 at 17:55
• In your example the second line of the item strating from $\ell$ is not aligned to the right. Some clarification of your expectations is needeed. – Przemysław Scherwentke Nov 15 '18 at 17:57
• I am sorry for the missing details. I tried to explain better now. – Alexsander Melo Nov 15 '18 at 18:18
Is this what you want?
\documentclass{article}
\usepackage{amsmath,amsfonts}
\begin{document}
\begin{itemize}
\item \begin{aligned}[t] \ell(\mathbf{f}{^\cup (B^1,B^2)})=\lbrace w' \cdot (\mathfrak{p}-1)+\mathfrak{p}') :& \\ \mathfrak{p}\in \ell(B_i), \mathfrak{p}1\in \ell(B'_i)\rbrace; \end{aligned}
\end{itemize}
\end{document}
• Unfortunately, it is not. I want that the second line (and the remaining lines) are aligned to the right. – Alexsander Melo Nov 15 '18 at 19:19
• So, you want the second line touching the right margin? – Sigur Nov 15 '18 at 20:05
• in this case, that is what I want. Actually, I am using a two-column template, so the right margin is really close to the left margin. Thus, to solve my problem in this template, it suffices the second line touching the right margin. However, in general, what I really want is that the second line follows the alignment defined by '&' until reaching the right margin. When the right margin is reached, instead of continuous going to the right, the text has to go to the left, according to the right margin. – Alexsander Melo Nov 16 '18 at 9:13
• In two-column mode, you will not have the same exact layout as you posted. Probably the simpler in this case is to replace the aligned environment with multlined, removing the &. – Bernard Nov 16 '18 at 9:48
I think you want something like this. Remeber align(ed) require & to mark the alignment points. I took the opportunity to simplify a bit your code : typing ^{\prime} isn't necessary, ' is enough.
\documentclass[letterpaper]{article}
\usepackage{amsthm}
\usepackage{mathtools, amsfonts}
\usepackage{bm}
\begin{document}
$\bm{f}^{\cup}(B^1,B^2), \bm{f}^{\cap}(B^1,B^2) \in \mathcal{B}(\Sigma_1\cup\Sigma_2, w\cdot w^{\prime})$, respectively, where
\begin{itemize}\small
\item \begin{aligned}[t] \ell(\bm{f}^{\cup}(B^1,B^2)) = \{w'\cdot(\mathfrak{p}-1) & + \mathfrak{p}' \colon \\ & \phantom{+}\mathfrak{p}\in\ell(B_i),\mathfrak{p}'\in\ell(B_i)\}\text{;} \end{aligned}
\item $r(B_i^{*}) = \{w'\cdot(\mathfrak{q}-1) + \mathfrak{q}' \colon \mathfrak{q} \in r(B_i), \mathfrak{q}'\in r(B_i)\}$ ;
\end{itemize}
\end{document}
• Thanks a lot, Bernard. But this solution does not work in my case because I am using a two-column template. So, when I use '&' at beginning of the second line to define the alignment which I really want, I face a problem: the second line does not respect the right margin. Because of that I want to align the remaining lines of the equation to the right. – Alexsander Melo Nov 16 '18 at 9:29
• I was able to obtain the setting I want by using negative \hspace after '&'. But this trick seems "not elegant". – Alexsander Melo Nov 16 '18 at 9:34
• For me, it works with thetwocolumn option. You also have the possibility to replace the aligned environment with multlined, removing the &. – Bernard Nov 16 '18 at 9:53
• multlined from mathtools does not push the last line to the right margin, only a little bit to right (at least in my test with $\begin{multlined}[t] \text{left side} \\ \text{center line}\\ \text{right side} \end{multlined}$). – Sigur Nov 16 '18 at 14:22
|
2019-06-19 11:59:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9561494588851929, "perplexity": 1185.2747744615622}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998959.46/warc/CC-MAIN-20190619103826-20190619125826-00490.warc.gz"}
|
https://www.esaral.com/q/the-base-of-an-isosceles-triangle-measures-80-cm-and-its-area-is-360-cm2-78998/
|
The base of an isosceles triangle measures 80 cm and its area is 360 cm2.
Question:
The base of an isosceles triangle measures 80 cm and its area is 360 cm2. Find the perimeter of the triangle.
Solution:
Let $\triangle P Q R$ be an isosceles triangle and $P X \perp Q R$.
Now,
Area of triangle $=360 \mathrm{~cm}^{2}$
$\Rightarrow \frac{1}{2} \times Q R \times P X=360$
$\Rightarrow h=\frac{720}{80}=9 \mathrm{~cm}$
Now,
$Q X=\frac{1}{2} \times 80=40 \mathrm{~cm}$ and $P X=9 \mathrm{~cm}$
Also,
$P Q=\sqrt{Q X^{2}+P X^{2}}$
$a=\sqrt{40^{2}+9^{2}}=\sqrt{1600+81}=\sqrt{1681}=41 \mathrm{~cm}$
∴ Perimeter = 80 + 41 + 41 = 162 cm
|
2022-09-27 10:58:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5373988747596741, "perplexity": 493.13837931770735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00032.warc.gz"}
|
http://old.fieldtriptoolbox.org/tutorial/fourier?bootswatch-theme=flatly
|
# Fourier analysis of neuronal oscillations and synchronization
EEG and MEG measure brain activity as a so called time series, i.e. they measure voltage of field strength as a function of time. In those time series, there are often clearly visible oscillations like the alpha oscillations over occipital cortex or the beta oscillations over sensorimotor cortex during rest. However, in the time series, the information about those oscillations is distributed over many samples: We observe an oscillation through the fact that there are peaks in the time series that recur at regular intervals. Since the information about the oscillations is distributed over many samples, it cannot be used immediately. We can e.g. not tell directly the main frequency of such an oscillation. In order to concentrate all information about an oscillation, we use spectral analysis. Any signal that is measured as a function of time can also be expressed as a function of frequency. The transformation from the so-called time-domain into the frequency-domain is the Fourier transform. This part of the course tries to give an easy-to-understand, but nevertheless correct, explanation of what the Fourier transform does and how we can use its outputs to compute power-spectra and cross-spectral densities.
In this tutorial the following steps will be demonstrated:
• Spectral analysis using the Fast Fourier Transform (FFT).
• Computation of the power spectrum from the Fourier transformed data.
• Computation of the coherence spectrum from the Fourier transformed data of two signals.
To get to the concept of spectral analysis, we first construct a sine-wave and a cosine wave of known frequency. (While in this course, we will use sine waves for reasons of simplicity and clarity, please bear in mind that biological signals are never sine waves but extend over a frequency range. This is important for the appropriate way of analysing them.) We then calculate the Fourier transform of those signals using the Fast Fourier Transform (FFT) function of Matlab. The Fourier transform decomposes the time series signals into the cosine and sine components at all frequencies. The result of the Fourier transform is complex, containing, for each frequency, the cosine component of the signal as the real component and the sine component of the signal as the imaginary component. It is straightforward to compute those components by hand. For any given frequency, one can therefore think of a vector, representing the signal component at that frequency. This vector has an amplitude and a phase (phase relative to the begin of the time series). The amplitude is of interest when we later compute the power spectrum of a signal and the phase is particularly important when we later compute the coherence spectrum between two signals.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% The Fourier Transform
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
clear all
close all
% get a sine and cosine wave of equal frequency and plot them
frq = 20; % Hz
len = 1; % seconds
smpfrq = 1000; % Hz
phs = 0;
ind = ((0:(len.*smpfrq-1))./(smpfrq).*(frq.*2.*pi))+(phs.*2.*pi);
sinwav = sin(ind);
coswav = cos(ind);
figure;
plot(sinwav);
hold on;
plot(coswav,'r');
% get the FFT of the waves and plot the real and imaginary components
fftsin = fft(sinwav);
figure;
subplot(2,1,1);
plot(real(fftsin));
subplot(2,1,2);
plot(imag(fftsin),'r');
fftcos = fft(coswav);
figure;
subplot(2,1,1);
plot(real(fftcos));
subplot(2,1,2);
plot(imag(fftcos),'r');
Figure 1; The Fourier transform of the sine wave. The result of the Fourier transform is complex, containing, for each frequency, the cosine component of the signal as the real component (upper panel) and the sine component of the signal as the imaginary component (lower panel).
Figure 2; The Fourier transform of the cosine wave. The result of the Fourier transform is complex, containing, for each frequency, the cosine component of the signal as the real component (upper panel) and the sine component of the signal as the imaginary component (lower panel).
% calculate the FFT results at the signal frequency "by hand" and plot the result as a vector
figure;
subplot(2,1,1);
sigsinwav = sinwav;
plot(sigsinwav .* coswav)
subplot(2,1,2);
plot(sigsinwav .* sinwav)
coscmpsin = sum(sigsinwav .* coswav)
sincmpsin = sum(sigsinwav .* sinwav)
figure;
plot([0,coscmpsin],[0,sincmpsin]);
set(gca,'xlim',[-600 600],'ylim',[-600 600])
figure;
subplot(2,1,1);
plot(coswav .* coswav)
subplot(2,1,2);
plot(coswav .* sinwav)
coscmpcos = sum(coswav .* coswav)
sincmpcos = sum(coswav .* sinwav)
figure;
plot([0,coscmpcos],[0,sincmpcos]);
set(gca,'xlim',[-600 600],'ylim',[-600 600])
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% generate a cosine wave that is shifted by 45 degrees
frq = 20; % Hz
len = 1; % seconds
smpfrq = 1000; % Hz
phs = 45 ./360; % the relative phase advance in fraction of radiants
ind = ((0:(len.*smpfrq-1))./(smpfrq).*(frq.*2.*pi))+(phs.*2.*pi);
wav = sin(ind);
figure;
plot(wav);
Figure 3; A 20 Hz cosine wave shifted 45 degrees.
% get the FFT of the wave
fftwav = fft(wav);
figure;
subplot(2,1,1);
plot(real(fftwav));
subplot(2,1,2);
plot(imag(fftwav),'r');
Figure 4; The FFT of a 20 Hz cosine wave shifted 45 degrees.
% calculate the FFT results at the signal frequency "by hand" and plot the result as a vector
figure;
subplot(2,1,1);
plot(wav .* coswav)
subplot(2,1,2);
plot(wav .* sinwav)
coscmpwav = sum(wav .* coswav)
sincmpwav = sum(wav .* sinwav)
figure;
plot([0,coscmpwav],[0,sincmpwav]);
set(gca,'xlim',[-600 600],'ylim',[-600 600])
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% generate a wave of a different frequency
frq = 10; % Hz
len = 1; % seconds
smpfrq = 1000; % Hz
phs = 0; % the relative phase advance in fraction of radiants
ind = ((0 : (len.*smpfrq -1))./(smpfrq).*(frq.*2.*pi))+(phs.*2.*pi);
wav = sin(ind);
figure;
plot(wav);
% get the FFT of the wave
fftwav = fft(wav);
figure;
subplot(2,1,1);
plot(real(fftwav));
subplot(2,1,2);
plot(imag(fftwav),'r');
% calculate the FFT result OF THE SIGNAL FREQUENCY "by hand"
figure;
subplot(2,1,1);
plot(wav .* coswav)
subplot(2,1,2);
plot(wav .* sinwav)
coscmpwav = sum(wav .* coswav)
sincmpwav = sum(wav .* sinwav)
When we have only one signal, we might want to know the amplitude of the different frequency components. This can be directly obtained through the power spectrum, the squared absolute of the Fourier transform (plus appropriate normalisation that will not be covered in detail here). The power spectrum no longer contains the phase information. Thus, the power spectra of our sine and cosine waves are identical!
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% The Power spectrum
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% calculate the power spectrum
clear all
close all
% get a sine and cosine wave of equal frequency and plot them
frq = 10; % Hz
len = 1; % seconds
smpfrq = 1000; % Hz
phs = 0;
ind = ((0:(len.*smpfrq-1))./(smpfrq).*(frq.*2.*pi))+(phs.*2.*pi);
sinwav = sin(ind);
coswav = cos(ind);
figure('name','sin&cos');
plot(sinwav);
hold on;
plot(coswav,'r');
Figure 5; A sine (blue) and cosine wave (red) of equal frequency (10 Hz).
% get the FFT of the waves
fftsin = fft(sinwav);
figure('name','fft sin');
subplot(2,1,1);
plot(real(fftsin));
subplot(2,1,2);
plot(imag(fftsin),'r');
fftcos = fft(coswav);
figure('name','fft cos');
subplot(2,1,1);
plot(real(fftcos));
subplot(2,1,2);
plot(imag(fftcos),'r');
numsmp = length(sinwav);
psdsin = 2 .* abs(fftsin) .^ 2 ./ (numsmp .^2);
figure('name','power sin');
plot(psdsin);
psdcos = 2 .* abs(fftcos) .^ 2 ./ (numsmp .^2);
figure('name','power cos');
plot(psdcos);
Figure 6; The power spectrum of a 10 Hz sine wave. The power spectrum of the 10 Hz cosine wave is identical.
When we have two signals, we might want to know whether they are related. One way of addressing this is to quantify whether there is a consistent phase relation between two signals. We have learned that the Fourier transform gives, for each frequency component, the amplitude and the phase of the signal. Thus, in order to determine whether there is a consistent phase relationship between two signal, we could e.g. analyze the difference in phases between two signals.
If two signals are related, there should be a consistent phase difference between them – or in other words, the phase difference should not be random. It turns out that the phase difference between two signals is easily obtained if one has the Fourier transform of the two signals. The product of the Fourier transform of one signal with the conjugate of the Fourier Transform of the signal gives the Cross-Spectral density (CSD). We will not go into the detail what the conjugate actually is and why this multiplication gives this particular results. Let’s simply accept this for now as a fact.
The CSD is complex and a function of frequency, just like the Fourier Transforms. The amplitude of the CSD is the product of the amplitudes of the Fourier Transforms of the two signals. The interesting component is the phase of the CSD: It corresponds to the difference in the phase of the two Fourier transforms of the two signals. Just like the Fourier Transform itself, we can think of the CSD at one frequency as a vector. If we have multiple measurements from two signals and the CSD for each of those measurements, then we can analyze the distribution of those vectors. If there is some consistency in the phase difference, those vectors should not be pointing in random directions, but they should be bundled around one main direction.
In order to quantify how much those vectors are bundled, one simply sums up all the vectors. Vector summation works by simply appending one vector to the end of the other. If the CSD vectors from multiple measurements point into random directions, the sum of those vectors will approximate a vector of no length. If however, the CSD vectors all point into one and the same direction, then they will optimally add up and form the longest possible sum vector. “Coherence” as a measure of signal relatedness uses this fact. It is a ratio of two sums: The numerator is the sum of the CSDs of multiple measurements. The denominator is the sum of the products of the power spectra. This denominator is necessary in order to make direct comparisons possible between signal pairs of very different amplitudes. Coherence is then normalized between 0 – random phase difference – and 1 – constant phase difference.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% The Coherence spectrum
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% get many repetitions of two signals with random phase difference
clear all
close all
frq = 10; % Hz
len = 1; % seconds
smpfrq = 100; % Hz
numrpt = 1000;
ranphs = rand(2.*numrpt,2);
phsdif = 45 ./ 360;
noifac = 1./50;
for rptlop = 1:numrpt
wav(:,rptlop,1) = sin(((0:(len.*smpfrq-1))./(smpfrq).*(frq.*2.*pi))+(ranphs(rptlop,1).*2.*pi)) + ...
randn(1,len.*smpfrq).*noifac;
wav(:,rptlop,2) = sin(((0:(len.*smpfrq-1))./(smpfrq).*(frq.*2.*pi))+((ranphs(rptlop,2)+phsdif).*2.*pi)) + ...
randn(1,len.*smpfrq).*noifac;
end
% get the FFT of the waves
for rptlop = 1:numrpt
fftwav(:,rptlop,1) = fft(wav(:,rptlop,1));
fftwav(:,rptlop,2) = fft(wav(:,rptlop,2));
end
% calculate the power-spectral densities (psd) and the cross-spectral
% densities (csd) and sum them over repetitions
numsmp = length(wav);
psd = 2.*abs(fftwav).^2./(numsmp.^2);
csd = 2.*(fftwav(:,:,1).*conj(fftwav(:,:,2)))./(numsmp.^2);
sumpsd = squeeze(sum(psd,2));
sumcsd = squeeze(sum(csd,2));
% calculate coherence
coh = abs(sumcsd ./ sqrt(sumpsd(:,1) .* sumpsd(:,2)));
figure;
plot(squeeze(wav(:,:,1)));
figure;
plot(squeeze(wav(:,:,2)));
figure;
plot(coh);
Figure 7; Coherence spectrum for two 10 Hz signals with a random phase difference.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% get many repetitions of two signals with somewhat consistent phase difference
clear all
%close all
frq = 10; % Hz
len = 1; % seconds
smpfrq = 100; % Hz
numrpt = 1000;
circulran = mod(randn(2.*numrpt,2).*phsspreadfac + pi, 2.* pi) - pi;
ranphs = circulran ./ (2 .* pi);
phsdif = 45 ./ 360;
noifac = 1./50;
for rptlop = 1:numrpt
wav(:,rptlop,1) = sin(((0:(len.*smpfrq-1))./(smpfrq).*(frq.*2.*pi))+(ranphs(rptlop,1).*2.*pi)) + ...
randn(1,len.*smpfrq).*noifac;
wav(:,rptlop,2) = sin(((0:(len.*smpfrq-1))./(smpfrq).*(frq.*2.*pi))+((ranphs(rptlop,2)+phsdif).*2.*pi)) + ...
randn(1,len.*smpfrq).*noifac;
end
% get the FFT of the waves
for rptlop = 1:numrpt
fftwav(:,rptlop,1) = fft(wav(:,rptlop,1));
fftwav(:,rptlop,2) = fft(wav(:,rptlop,2));
end
% calculate the power-spectral densities (psd) and the cross-spectral
% densities (csd) and sum them over repetitions
numsmp = length(wav);
psd = 2.*abs(fftwav).^2./(numsmp.^2);
csd = 2.*(fftwav(:,:,1).*conj(fftwav(:,:,2)))./(numsmp.^2);
sumpsd = squeeze(sum(psd,2));
sumcsd = squeeze(sum(csd,2));
% calculate coherence
coh = abs(sumcsd ./ sqrt(sumpsd(:,1) .* sumpsd(:,2)));
figure;
plot(squeeze(wav(:,:,1)));
figure;
plot(squeeze(wav(:,:,2)));
figure;
plot(coh);
Figure 8; Coherence spectrum for two 10 Hz signals with a somewhat consistent phase difference.
### Exercise
Change the phase spread factor (phsspreadfac) in the coherence analysis and see what the outcome is.
|
2023-03-31 06:24:03
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8719781637191772, "perplexity": 1232.8553908480965}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949573.84/warc/CC-MAIN-20230331051439-20230331081439-00518.warc.gz"}
|
https://gmatclub.com/forum/inequality-gmat-prep-70772.html
|
It is currently 21 Mar 2018, 19:19
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# Inequality: GMAT prep
Author Message
Senior Manager
Joined: 02 Dec 2007
Posts: 432
### Show Tags
26 Sep 2008, 12:50
1
This post was
BOOKMARKED
Attachment:
INQ.JPG [ 63.06 KiB | Viewed 803 times ]
--== Message from GMAT Club Team ==--
This is not a quality discussion. It has been retired.
If you would like to discuss this question please re-post it in the respective forum. Thank you!
To review the GMAT Club's Forums Posting Guidelines, please follow these links: Quantitative | Verbal Please note - we may remove posts that do not follow our posting guidelines. Thank you.
Director
Joined: 27 Jun 2008
Posts: 526
WE 1: Investment Banking - 6yrs
### Show Tags
26 Sep 2008, 12:58
D
(1)
Any number^2 = either -or+ will be positive.
x = -2 or +2
x^2 = 4
SUFF
(2)
Let x = 3, y = 2
2*3 - 3*2 < 3^2
0<9...its always true for any +ve number
Manager
Joined: 18 Jan 2008
Posts: 221
Schools: The School that shall not be named
### Show Tags
26 Sep 2008, 13:56
For B:
Assume X=2, Y=0
Then
$$2X-3Y = 4 X^2=4$$
Since we can see that at the extreme case the 2 sides are equal, it is not hard to see that if \(Y > 0, 2X-3Y
--== Message from GMAT Club Team ==--
This is not a quality discussion. It has been retired.
If you would like to discuss this question please re-post it in the respective forum. Thank you!
To review the GMAT Club's Forums Posting Guidelines, please follow these links: Quantitative | Verbal Please note - we may remove posts that do not follow our posting guidelines. Thank you.
_________________
24 wrote:
Your East Coast bias is worse than ESPN.
Re: Inequality: GMAT prep [#permalink] 26 Sep 2008, 13:56
Display posts from previous: Sort by
# Inequality: GMAT prep
Moderator: chetan2u
Powered by phpBB © phpBB Group | Emoji artwork provided by EmojiOne Kindly note that the GMAT® test is a registered trademark of the Graduate Management Admission Council®, and this site has neither been reviewed nor endorsed by GMAC®.
|
2018-03-22 02:19:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2425742894411087, "perplexity": 10849.306334415445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647758.97/warc/CC-MAIN-20180322014514-20180322034514-00431.warc.gz"}
|
https://hsm.stackexchange.com/questions/2553/where-did-p-vi-come-from
|
# Where did $P=VI$ come from?
Where did the basic physics law $P=VI$ come from? Here, $P$ is power, $V$ is voltage and $I$ is current. It doesn't have a name like Ohm's law, as far as I could find. So where did it originally come from, who invented it and when? Or what are early references for it?
• Could you hint what are $P$, $V$ and $I$? – Alexandre Eremenko Jul 18 '15 at 19:04
Joule discovered, in the 1840s, that the power in a circuit was proportional to the current squared times the resistance. He published a short paper entitled "On Production of Heat by Voltaic Electricity". Joule used the form $$P=I^2R$$ One version of Ohm's law, of course, is $$V=IR$$ Doing some substitution yields the form you give.
I suspect that it is impossible to determine who first wrote Joule's law by substituting in Ohm's law. It would be akin to asking, "Who was the first person to write Ohm's law as $R=\frac{V}{I}$?" This might be worth looking through, although I do not know if it is comprehensive.
|
2019-11-13 17:16:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8776854872703552, "perplexity": 411.12208775168165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667319.87/warc/CC-MAIN-20191113164312-20191113192312-00137.warc.gz"}
|
https://greprepclub.com/forum/if-a-b-a-b-a-0-and-b-0-then-which-of-the-8915.html
|
It is currently 26 Mar 2019, 14:14
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# If |a| + |b| = |a + b|, a ≠ 0, and b ≠ 0, then which of the
Author Message
TAGS:
Moderator
Joined: 18 Apr 2015
Posts: 5919
Followers: 96
Kudos [?]: 1159 [0], given: 5501
If |a| + |b| = |a + b|, a ≠ 0, and b ≠ 0, then which of the [#permalink] 13 Apr 2018, 06:13
Expert's post
00:00
Question Stats:
87% (00:30) correct 12% (01:19) wrong based on 8 sessions
If |a| + |b| = |a + b|, a ≠ 0, and b ≠ 0, then which of the following must be true?
A. ab < 0
B. ab > 0
C. a - b > 0
D. a + b < 0
E. a + b > 0
[Reveal] Spoiler: OA
_________________
Director
Joined: 07 Jan 2018
Posts: 604
Followers: 7
Kudos [?]: 546 [0], given: 88
Re: If |a| + |b| = |a + b|, a ≠ 0, and b ≠ 0, then which of the [#permalink] 16 Apr 2018, 20:58
Let $$a = -1$$
Let $$b = -2$$
Then $$|a| + |b| = 1 + 2 =3$$
and, $$|a + b| = |-3|= 3$$
Similarly,
Let $$a = 1$$
Let $$b = 2$$
Then, $$|a| + |b| = 1 + 2 =3$$
and, $$|a + b| = |3|= 3$$
Therefore either $$a,b$$ can be $$-1,-2$$ or $$1,2$$
The correct option should satisfy both the condition for $$(a,b)$$
only option b does that since $$a*b$$ is always$$+$$ve since we are multiplying $$2$$ $$+$$ve numbers or 2 $$-$$ve numbers which will always be $$+$$ve.
_________________
This is my response to the question and may be incorrect. Feel free to rectify any mistakes
Re: If |a| + |b| = |a + b|, a ≠ 0, and b ≠ 0, then which of the [#permalink] 16 Apr 2018, 20:58
Display posts from previous: Sort by
|
2019-03-26 22:14:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5425466299057007, "perplexity": 2373.4715868006533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912206677.94/warc/CC-MAIN-20190326220507-20190327002507-00138.warc.gz"}
|
https://www.deepdyve.com/lp/springer_journal/preparing-entangled-states-by-lyapunov-control-Xrdb1Pwxb0
|
# Preparing entangled states by Lyapunov control
Preparing entangled states by Lyapunov control By Lyapunov control, we present a protocol to prepare entangled states such as Bell states in the context of cavity QED system. The advantage of our method is of threefold. Firstly, we can only control the phase of classical fields to complete the preparation process. Secondly, the evolution time is sharply shortened when compared to adiabatic control. Thirdly, the final state is steady after removing control fields. The influence of decoherence caused by the atomic spontaneous emission and the cavity decay is discussed. The numerical results show that the control scheme is immune to decoherence, especially for the atomic spontaneous emission from $$|2\rangle$$ | 2 ⟩ to $$|1\rangle$$ | 1 ⟩ . This can be understood as the state staying in an invariant subspace. Finally, we generalize this method in preparation of W state. http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png Quantum Information Processing Springer Journals
# Preparing entangled states by Lyapunov control
, Volume 15 (12) – Sep 20, 2016
15 pages
/lp/springer_journal/preparing-entangled-states-by-lyapunov-control-Xrdb1Pwxb0
Publisher
Springer Journals
Subject
Physics; Quantum Information Technology, Spintronics; Quantum Computing; Data Structures, Cryptology and Information Theory; Quantum Physics; Mathematical Physics
ISSN
1570-0755
eISSN
1573-1332
D.O.I.
10.1007/s11128-016-1441-6
Publisher site
See Article on Publisher Site
### Abstract
By Lyapunov control, we present a protocol to prepare entangled states such as Bell states in the context of cavity QED system. The advantage of our method is of threefold. Firstly, we can only control the phase of classical fields to complete the preparation process. Secondly, the evolution time is sharply shortened when compared to adiabatic control. Thirdly, the final state is steady after removing control fields. The influence of decoherence caused by the atomic spontaneous emission and the cavity decay is discussed. The numerical results show that the control scheme is immune to decoherence, especially for the atomic spontaneous emission from $$|2\rangle$$ | 2 ⟩ to $$|1\rangle$$ | 1 ⟩ . This can be understood as the state staying in an invariant subspace. Finally, we generalize this method in preparation of W state.
### Journal
Quantum Information ProcessingSpringer Journals
Published: Sep 20, 2016
## You’re reading a free preview. Subscribe to read the entire article.
### DeepDyve is your personal research library
It’s your single place to instantly
that matters to you.
over 18 million articles from more than
15,000 peer-reviewed journals.
All for just $49/month ### Explore the DeepDyve Library ### Search Query the DeepDyve database, plus search all of PubMed and Google Scholar seamlessly ### Organize Save any article or search result from DeepDyve, PubMed, and Google Scholar... all in one place. ### Access Get unlimited, online access to over 18 million full-text articles from more than 15,000 scientific journals. ### Your journals are on DeepDyve Read from thousands of the leading scholarly journals from SpringerNature, Elsevier, Wiley-Blackwell, Oxford University Press and more. All the latest content is available, no embargo periods. DeepDyve ### Freelancer DeepDyve ### Pro Price FREE$49/month
\$360/year
Save searches from
PubMed
Create lists to
Export lists, citations
|
2018-10-23 08:18:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34604665637016296, "perplexity": 1674.3821251602803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516117.80/warc/CC-MAIN-20181023065305-20181023090805-00059.warc.gz"}
|
https://indico.cern.ch/event/478090/contributions/2136827/
|
# Siam Physics Congress 2016
8-10 June 2016
Asia/Bangkok timezone
## Effects of Nitrogen Carrier Gas on MOVPE Growth of GaAsN Films
Not scheduled
15m
Poster presentation Surface, Interface and Thin Films
### Speaker
Mr Teerapat Lapsirivatkul (Chulalongkorn University)
### Description
Metal organic vapor phase epitaxy (MOVPE) is commonly used to produce a single crystalline GaAsN thin films by using hydrogen gas as carrier gas, leading precursors to reaction chamber. By the way, hydrogen gas has a possibility to flammable, thus it is need to special treatment to avoid this problem. On the other hand, the nitrogen gas (N${_2}$) is chosen one instead of hydrogen gas because nitrogen gas is safer from flaming and it has lowering thermal conductivity. Structural and vibrational properties of GaAsN (0 < N < 5%) films grown using nitrogen gas as a carrier gas were characterized by atomic force microscopy (AFM), scanning electron microscopy (SEM) and Raman spectroscopy. Effects of nitrogen carrier gas on morphologies and nitrogen incorporation of GaAsN films, which have varied two conditions, 100% of hydrogen carrier gas and 50% of hydrogen carrier gas to nitrogen gas. The results show that using nitrogen gas is cause of shifted growth temperature by lowering thermal conductivity. As using hydrogen gas, nitrogen gas needs higher temperature. As a result, we can expand the growth window and establish further incorporation of N in the GaAsN samples.
### Primary author
Mr Teerapat Lapsirivatkul (Chulalongkorn University)
### Co-authors
Prof. Kentaro Onabe (University of Tokyo) Prof. Sakuntam Sanorpim (Chulalongkorn University)
### Presentation materials
There are no materials yet.
|
2021-08-04 19:36:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4755980670452118, "perplexity": 8148.917270408479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154897.82/warc/CC-MAIN-20210804174229-20210804204229-00214.warc.gz"}
|
https://mymusing.co/matrix-chain-multiplication/
|
Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions to subproblems. Divide-and-conquer algorithms partition the problem into independent subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem. In contrast, dynamic programming is applicable when the subproblems are not independent, that is, when subproblems share subsubproblems. In this context, a divide-and-conquer algorithm does more work than necessary, repeatedly solving the common subsubproblems. A dynamic-programming algorithm solves every subsubproblem just once and then saves its answer in a table, thereby avoiding the work of recomputing the answer every time the subsubproblem is encountered. Matrix-chain multiplication can be solved using Dynamic programming.
Matrix-chain multiplication
Given a sequence of matrices, find the most efficient way to multiply these matrices together. We have many options to multiply a chain of matrices because matrix multiplication is associative. In other words, no matter how we parenthesize the product, the result will be the same.
We are given a sequence (chain) A1, A2, . . ., An of n matrices to be multiplied, and we wish to compute the product
A = A1 Ax..x An
The way we parenthesize a chain of matrices can have a dramatic impact on the cost of evaluating the product. To illustrate the different costs incurred by different parenthesizations of a matrix product, consider the problem of a chain A1A2A3 of three matrices. Suppose that the dimensions of the matrices are 10 X 100,100 5, and 5 50, respectively. If we multiply according to the parenthesization ((A1A2)A3), we perform 10 100 5 = 5000 scalar multiplications to compute the 10 X 5 matrix product A1A2, plus another 10 5 50 = 2500 scalar multiplications to multiply this matrix by A3, for a total of 7500 scalar multiplications. If instead we multiply according to the parenthesization (A1(A2A3)), we perform 100 5 50 = 25,000 scalar multiplications to compute the 100 X 50 matrix product A2A3, plus another 10 100 50 = 50,000 scalar multiplications to multiply A1 by this matrix, for a total of 75,000 scalar multiplications. Thus, computing the product according to the first parenthesization is 10 times faster.
The matrix-chain multiplication problem can be stated as follows: given a chain A1A2, . . . , An of n matrices, where for i = 1, 2, . . . , n , matrix Ahas dimension p– 1 X pi, fully parenthesize the product A1 A2An in a way that minimizes the number of scalar multiplications.
The first step of the dynamic-programming paradigm is to characterize the structure of an optimal solution. For the matrix-chain multiplication problem, we can perform this step as follows. For convenience, let us adopt the notation Ai..j for the matrix that results from evaluating the product AiA+ 1 Aj. An optimal parenthesization of the product A1A2 Asplits the product between Ak and A+ 1 for some integer k in the range 1 k < n . That is, for some value of k, we first compute the matrices A1..k and A+ 1..n and then multiply them together to produce the final product A1..n. The cost of this optimal parenthesization is thus the cost of computing the matrix A1..k, plus the cost of computing A+ 1..n, plus the cost of multiplying them together.
We observed that an optimal parenthesization of A1 A An that splits the product between Akand A+ 1 contains within it optimal solutions to the problems of parenthesizing A1A2 k and Ak + 1Ak + 2. . . An. The technique that we used to show that subproblems have optimal solutions is typical.
Reference :-
DYNAMIC PROGRAMMING
|
2022-11-30 20:48:23
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8972269892692566, "perplexity": 431.82727632335843}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710771.39/warc/CC-MAIN-20221130192708-20221130222708-00073.warc.gz"}
|
https://www.intechopen.com/books/using-old-solutions-to-new-problems-natural-drug-discovery-in-the-21st-century/discovery-development-and-regulation-of-natural-products/
|
InTechOpen uses cookies to offer you the best online experience. By continuing to use our site, you agree to our Privacy Policy.
Pharmacology, Toxicology and Pharmaceutical Science » Drug Discovery » "Using Old Solutions to New Problems - Natural Drug Discovery in the 21st Century", book edited by Marianna Kulka, ISBN 978-953-51-1158-0, Published: June 19, 2013 under CC BY 3.0 license. © The Author(s).
# Discovery, Development, and Regulation of Natural Products
By Juergen Krause and Gailene Tobin
DOI: 10.5772/56424
Article top
# Discovery, Development, and Regulation of Natural Products
Juergen Krause1 and Gailene Tobin2
## 1. Introduction
Natural products have historically been an extremely productive source for new medicines in all cultures and continue to deliver a great variety of structural templates for drug discovery and development. Although products derived from natural sources may not necessarily represent active ingredients in their final form, the majority of all drugs in the market have their origin in nature [1, 2]. Therefore, the foremost emphasis in this chapter is given to aspects concerning the identification, properties, and development of potential drug candidates from natural products. It is the intention to give a high-level overview of the current status and developments in the field. Many important aspects in the arena of natural therapeutics including natural product sources, discovery, characterization, development and uses have been addressed and covered in depth in excellent recent reviews by extremely competent authors referenced in this contribution.
### 1.1. Definition of a natural product
The extent to which the term natural product has been characterized is both limited and debatable. Therefore, a common definition that is accepted by all involved disciplines will remain a moving target, but likely will evolve as researchers unveil the vast amount of compounds projected to be discovered in this field [3]. In the simplest of terms, a natural product is a small molecule that is produced by a biological source [3]. As a central theme of exploration bordering chemistry and biology, natural products research focuses on the chemical properties, biosynthesis and biological functions of secondary metabolites [3]. In this context, the task of defining “natural” is more straight forward and encompasses isolation from a native organism, synthesis in a laboratory, biosynthesis in vitro, or isolation from a metabolically engineered organism whereby the chemical structure has been determined and the resultant compound is chemically equivalent to the original natural product [3]. Thus, in summary, and for the purposes of this chapter, one can still agree with the refuted definition that a natural product is a pharmacologically or biologically active chemical compound or substance, which is found in nature and produced by a living organism and can even be considered as such if it can be prepared by a totally synthetic approach [4]. Albeit, we realize this definition can be challenged as many biosynthetic enzymes are nonspecific and may result in the production of multiple analogs combined with the fact that identifying the entirety of natural products is in the infant stage [5].
Generally the term “natural product” is regarded as being synonymous with “secondary metabolite” [6]. Secondary metabolites are organic compounds in the correct chiral configuration to exert biological activity, but have no “primary” function directly involved in the normal growth, development or reproduction of an organism [7]. Natural products are usually relatively small molecules with a molecular weight below 3,000 Daltons and exhibit considerable structural diversity [6]. The product categories in which natural compounds can be found as active ingredients include prescription and non-prescription drugs (pharmaceuticals), cosmetic ingredients (cosmeceuticals) and dietary supplements and natural health product ingredients (nutriceuticals) [8].
The respective studies leading to the identification, isolation, and characterization of natural products constitute an important part of the scientific field of pharmacognosy. The American Society of Pharmacognosy defines pharmacognosy as “the study of natural product molecules (typically secondary metabolites) that are useful for their medicinal, ecological, gustatory, or other functional properties. The natural species that are the source of the compounds under study span all biological kingdoms, most notably marine invertebrates, plants, fungi, and bacteria” [9]. Amongst the various assortments and exciting capacities that are being explored within the arena of pharmacognosy, this chapter will mostly address the study of health relevant medicinal properties of natural compounds for drug discovery and development.
### 1.2. History
Natural substances have evolved over a very long selection process to form optimal interactions with biological macromolecules [10] which have activity on a biological system that is relevant to the target disease. They have historically been the most productive source of active compounds and chemical lead structures for the discovery and development of new medicines [11]. Since ancient times, civilizations used plants and plant extracts to ameliorate diseases and foster healing. Early historic examples for medical treatments from natural sources include the discovery of the beneficial effects of cardiotonic digitalis extracts from foxglove for treating some manifestations of heart disease in the 18th century, the use of the bark of the willow and cinchona trees in treating fever and the effectiveness of poppy extracts in the treatment of dysenteries [12]. Morphine, largely reproducing the analgesic and sedative effect of opium, was isolated from opium obtained from the seed pots of the poppy plant in 1804 [12]. Throughout the century, purified bioactive natural products were extracted from the Peruvian bark cinchoa (quinine), from cocoa (cocaine), and from many other plants [12]. By 1829, scientists discovered that the compound salicin, in willow trees, was responsible for pain relief and in 1838 salicylic acid was isolated [13]. The problem was that salicylic acid was harsh on the stomach and in the second half of the 19th century acetylsalicylic acid was synthesized which served as a less-irritating replacement for standard common salicylate medicines [13]. A number of additional plants served as sources of natural product derived agents that are still used in current routine medical practice [14].
The discovery of valuable therapeutic agents from natural sources continued into the 20th century. Inspired by the discovery and benefits of penicillin, pharmaceutical research expanded after the Second World War into intensive screening of microorganisms for new antibiotics [12]. The study of new bacterial and fungal strains resulted in the expansion of the antibacterial arsenal with additional agents such as cephalosporins, tetracyclines, aminoglycosides, rifamycins, chloramphenicol, and lipopeptides [15, 16]. In the 1950’s, two nucleosides isolated from Caribbean marine sponges paved the way for the synthesis of vidarabine, and the related compound cytarabine, which eventually received approval as therapeutics for clinical use in viral diseases and cancer, respectively [17]. A more recent example is the cancer therapeutic paclitaxel (Taxol®) derived from the Yew tree, which was discovered in the 1970s, but due to difficulties in obtaining commercial compound quantities only reached the market in late 1992 [18-20]. Overall, only 244 prototypic chemical structures (over 80% came from animal, plant, microbial or mineral origin) have been used as templates to produce medicines up to 1995, and relatively few new scaffolds have appeared since [21,22]. About half of the marketed agents in today’s arsenal of drugs are derived from biological sources with the large majority being based on terrestrial natural product scaffolds [23]. Approximately 50% of the new drugs introduced since 1994 were either natural products or derivatives thereof [21, 23, 24].
## 2. Discovery and development
### 2.1. Discovery
Drug discovery involves the identification of new chemical entities (NCEs) of potential therapeutic value, which can be obtained through isolation from natural sources, through chemical synthesis or a combination of both. The field of natural products drug discovery, despite the success stories of penicillin, paclitaxel, etc., also had aspects that made it less attractive. In the traditional approach, drug targets were exposed to crude extracts, and in case of evidence of pharmacological activity the extract was fractionated and the active compound isolated and identified. This method was slow, labor intensive, inefficient, and provided no guarantee that a lead from the screening process would be chemically workable or even patentable [25, 26]. As natural products usually are molecules with more complex structures, it was more difficult to extract, purify or synthesize sufficient quantities of a NCE of interest for discovery and development activities [25]. Enriched or pure material is needed for the initial characterization of the chemical and biological properties as well as the elucidation of structure-activity relationships in drug discovery studies; furthermore, even larger quantities need to be supplied for potential later development activities and ultimately, the market [24, 27].
The pharmaceutical industry’s interest in natural products diminished with the advent of such promising new technologies like combinatorial chemistry (CC) and high throughput screening (HTS) [28]. The prospect of such disciplines, aimed at accelerating drug discovery efforts for NCEs, led some companies to dismiss their natural product programs [28]. Combinatorial chemistry employs parallel synthesis techniques allowing the creation of libraries containing hundreds of thousands of compounds, whereas HTS allows rapid testing of large numbers of compounds [28]. High-throughput screening grew out of automated clinical analyzer technologies and miniaturization in the late 1980’s, as drug companies focused on methods aiming to increase the pace of testing and lower the cost per sample [12]. As a result, large libraries of synthetic molecules could be exploited very quickly. These new synthetic libraries were also given preference because of the lack of compatibility of traditional natural product extract libraries with HTS assays [28-30]. Compounds obtained from commercial libraries, in-house collections of pharmaceutical companies containing hundreds of thousands of compounds and new libraries generated through CC could be now screened rapidly [21]. Although the initial hopes for such advances were high, they were not fulfilled by either of the improved technologies. To be successful, HTS needed appropriate therapeutic targets matched to collections of NCEs that are highly diverse in their structural and physicochemical properties. The approach to exclusively bank on synthetic compounds did not meet the initial expectations, as the newly created compound libraries had limited structural diversity and did not provide enough quality hits to be of value. For CC, the most valuable role of parallel synthesis therefore appears to be in expanding on an existing lead, rather than creating new screening libraries [12]. Consequently, the interest in natural sources experienced some renaissance; however, even if natural product extracts were tested first, the pace of their isolation made it difficult to keep up with the demand for testing candidates in high-throughput models [25, 26, 29]. Therefore, natural products, and derivatives thereof, are still under-represented in the typical screening decks of the pharmaceutical and biopharmaceutical industry [31]. Specifically, it has been noted that major pharmaceutical companies in the United States continue to favor approaches that do not enable the integration of natural products of marine origin into their screening libraries [32]. More risk friendly institutions like academic laboratories, research institutes and small biotech companies venturing in the natural products arena have now a greater role in drug discovery and feed candidates into the development pipelines of big pharmaceutical companies[32].
Overall, there are limited systematic approaches to exploring traditionally used natural products for compounds that could serve as drug leads. Additionally, the pharmaceutical industry has decreased their emphasis on natural product discovery from sources in various countries. Both of these facts may be based on possible uncertainties and concerns over expectations about benefits sharing resulting from the United Nations Convention on Biological Diversity (CBD) [21, 33, 34]. Countries are increasingly protective of their natural assets in flora and fauna and may not authorize the collection of sample species without prior approval [35]. In this context, potential handicaps may arise for companies as they develop and market new products from natural sources in the form of very difficult to negotiate agreements as well as significant intellectual property and royalty issues [25, 26, 28, 35].
Nonetheless, natural products continue to provide a valuable and rich pool for the discovery of templates and drug candidates and are suitable for further optimization by synthetic means because the chemical novelty associated with natural products is higher than that of structures from any other source [10]. This fact is of particular importance when seeking out lead molecules against newly discovered targets where no small molecule lead exists or in mechanistic and pathway studies when searching for chemical probes [24]. It is assumed that. in many cases, structures devised by nature and evolution are far superior to even the best synthetic moieties in terms of diversity, specificity, binding efficiency, and propensity to interact with biological targets [24]. In comparing a large number of natural products to compounds from CC and synthetic drugs derived from natural substances, it has become evident that drugs and products obtained from natural sources exhibited more diverse and chemically complex structures [36]. In fact, only a moderate structural overlap was found when comparing natural product scaffolds to drug collections with the natural product database containing a significantly larger number of scaffolds and exhibiting higher structural novelty [37]. The structural diversity of these naturally sourced compounds supports the belief that the assortment of natural products represents a greater variety and better exemplifies the ‘chemical space’ of drug-like scaffolds than those of synthetic origin [30, 38, 39]. As Newman and Cragg (2012) have stated, and demonstrated in their reviews for the 30-year period of 1981 to 2010, natural products do play a dominant role in the discovery of lead structures for the development of drugs for the treatment of human diseases [1]. We agree with these authors in their assumption that it is highly probable that in the near future totally synthetic variations of even complex natural products will be part of the arsenal of physicians [1].
In general, there is growing awareness of the limited structural diversity in existing compound collections. The historic focus of the pharmaceutical industry on a relatively small set of ‘druggable’ targets has resulted in the exploration of a very narrow chemical space appropriate for these targets [40]. The 207 human targets described for small-molecule drugs correspond to only about 1% of the human genome and half of all drugs target only four protein classes [41]. So called ‘undruggable’ targets, such as protein-protein interactions and phosphatases, still await the identification of lead structures with the required qualities for lead or development candidates [40]. Although the expectations in natural products for the future are still high, an analysis of the distinct biological network between the targets of natural products and disease genes revealed that natural products, as a group, may still not contain enough versatility to yield suitable treatments for all heritable human diseases [42]. Nevertheless, the importance of natural product related compound collections, as the most promising avenue to explore new bioactive chemical space for drug discovery, continues to be emphasized; consequently, efforts have been made over the last decade to generate CC libraries inspired by natural product scaffolds [31, 43, 44]. Those scaffolds, which have presumably undergone evolutionary selection over time, might possess favorable properties in terms of bioactivity and selectivity and therefore provide biologically validated valuable starting points for the design and generation of new combinatorial libraries [25, 26, 45, 46]. Thomas and Johannes state that the production of relatively small natural product like libraries have revealed biologically active compounds, while modification of natural products identified activity that is entirely unrelated to the parent molecules [31].
Libraries of small molecules of natural origin have already served as templates for the majority of approved therapeutics including important compounds for the treatment of life-threatening conditions. Moreover, these small molecule libraries are constantly growing through products extracted from various natural sources. Harvey et al. reviewed the current approaches for expansion of natural product based compound libraries and CBD compliant collections exist at the U.S. National Cancer Institute, academic institutions and commercial companies [11]. However, large collections of pure natural products are rare and the quantities of individual compounds that are isolated are typically small. A more recent strategy has been to use natural product scaffolds as templates for creating libraries of semi-synthetic and synthetic analogues [21, 28, 47]. Rosen et al. identified several hundred unique natural products which could serve as starting points in the search for novel leads with particular properties [48]. Based on the continuous efforts of researchers in the field of marine drug discovery, more potent bioactive lead structures are expected with new or unknown mechanisms of action [23, 48]. The progress made in the areas of cellular biology, genomics, and molecular mechanisms increased the number of druggable targets, allowing screening for candidates of natural compound libraries against an ever increasing number of potential molecular sites for therapeutic intervention. This increase in defined molecular targets combined with more automatization, more sensitive detection instruments, and faster data processing allows for high throughput assays, which can rapidly screen large existing libraries of new and specific biological targets.
In the last decade there has also been a major shift to technologically advanced and more complex screening assays conducted in cells, including those in which biological function is directly measured. These more complex approaches provide higher stringency which can mean lower hit rates. However, the specificity of such hits results in an increase in the quality of leads with more desired biological properties [12]. In this context, bioassays based on zebrafish embryos are noteworthy, as they can be used in 96-well plates and allow for in vivo bioactivity screening of crude extracts and natural substances at the microgram scale [49-52]. A further improvement, potentially leading to new secondary metabolites of interest for drug discovery, is based on the development of refined analytical and spectroscopic methods. This involves rapid identification and structural elucidation (dereplication) of natural products in complex mixtures (such as crude or pre-fractionated extracts) in parallel with profiling their bioactivity in information-rich bioassays [53]. In addition, stress can be applied to stimulate the number and levels of bioactive compounds in organisms. Wolfender and Queiroz presented examples of dynamic responses resulting from stress, which induced chemical defenses in elicitation experiments in both plants and microorganisms [30]. A significantly increased number of hits, including antibacterial, antifungal and anticancer agents were described for extracts from elicited plants [30]. New groups of microorganisms obtained through small scale, high-through-put cultivation methods and employing nutrient deficient media, specific nutrients and long cultivation times constitute another approach potentially leading to new secondary metabolites of interest for drug discovery [54]. Genome mining, the analyses of plant and microbial genome sequences for genes and gene clusters encoding proteins, is a further recent approach which has allowed the discovery of numerous novel natural products and also revealed gene clusters and novel pathways for the biosynthesis of several known natural compounds [55, 56].
Although plants are still the major source for many natural products and remedies, microbes and marine organisms also constitute promising, abundant, and valuable sources for bioactive natural compounds [57]. Like it is true for plants, also for these, only a very small fraction of structures of potential therapeutic relevance have been chemically analyzed or examined in a broad panel of screening models or bioassays. But even if discovered and identified, active substances from natural sources may not be readily available for further investigations, development or introduction to the market. A number of biologically relevant natural products can only be isolated in small amounts, consequently adding to efforts, timelines and costs by forcing the development of an economically viable synthesis [31].
### 2.2. Development
The time required to develop a pharmaceutical can range from only a few to as many as 20 years. For natural products, an additional challenge can be the provision of sufficient quantities from natural sources for development and consequently commercial market supplies. Early in vitro tests may only require microgram to milligram amounts but the demand for compound quantities will increase quickly when in vivo animal models, safety and toxicology studies, formulation development and ultimately clinical trials are initiated. As mentioned earlier, one of the more recent respective examples is the cancer drug paclitaxel (Taxol®), which was discovered in 1967 as the cytotoxic active ingredient in extracts of Taxus brevifolia but only approved for the market in 1992 [20]. From 1967 to 1993, almost all paclitaxel produced was derived from bark from the Pacific yew tree [18]. Harvesting of the bark kills the tree in the process, however, this production method was replaced by a more sustainable approach using a precursor of Taxol® isolated from the leaves and needles of cultivated yew tree species [18, 20].
The compounds in development today target a variety of indications, mainly cancer and infectious diseases (bacterial, viral, fungal, and parasitic), but also address other therapeutic areas such as cardiovascular diseases, neurological illnesses and depression, metabolic diseases (like diabetes and cholesterol management), and inflammatory diseases (like arthritis) [1, 15, 16, 25, 26]. The cytotoxic properties of many secondary metabolites from marine organisms and bacteria are of particular interest for the development of new anticancer treatments [58]. For infectious diseases, natural products are effective because most of these compounds evolved from microbial warfare and show activity against other microorganisms at low concentrations [25, 26, 29]. The renewed interest in natural drugs is determined by the urgent need to find and develop effective means to fight infections caused by viruses, like HIV (Human Immunodeficiency Virus) and so called “superbugs” (bacteria with multiple resistance against antibiotics) currently in use [29]. Pathogens having only limited and rather expensive treatment options include penicillin-resistant Streptococcus pneumonia, methicillin-resistant Staphylococcus aureus (MRSA), vancomycin-resistant Enterococcus (VRE), Clostridium difficile, and Mycobacterium tuberculosis [29]. However, some new structures identified from marine fungi exhibited activity against bacteria like MRSA [59].
Before the advent of high throughput screening and the post-genomic era, more than 80% of drug substances or active ingredients were natural products, semisynthetic analogs thereof, or were obtained using structures of natural compounds as templates for synthetic modification [60, 61]. Chin reported 23 drugs from natural sources being approved between 2000 and 2005 [2]. Between 1998 and 2007 a total of 34 natural products and natural product-derived drugs were approved in different international markets [15, 62, 63].
According to Brahmachari (2011), 38 natural product-derived drugs were approved in the decade from 2000 to 2010 for various indications including 15 for infectious diseases, 7 each for oncology, neurological diseases and cardiovascular disorders, 4 for metabolic disorders and 1 for diabetes [22]. It is therefore not surprising that by 2008 more than a hundred new drug candidates from natural sources like plants, bacteria, fungi and animals or those obtained semi-synthetically were reported to be in clinical development with a similar number in preclinical development [60]. Of those in clinical development, 91 were described to be plant-derived [63]. Although this was a lower number than in the years before, the interest in natural sources to obtain pharmacologically active compounds has recently been rekindled with improved access to a broader base of sources including those from new microbial and marine origins [23, 64]. Brahmachari (2011) reported 49 plant-derived, 54 microorganism-derived, 14 marine organism derived (including 2 from fish and 1 from a cone snail), and 1 terrestrial animal-derived (bovine neutrophils) drug candidate(s) in various phases of clinical evaluation [22].
Natural products have been the biggest single source of anti-cancer drugs as evidenced by the historical data reviewed by Newman and Cragg [1]. Of the 175 anti-cancer agents developed and approved over the seven decades from 1940 until 2010 in Western countries and Japan, 85 compounds representing 48.6%, were natural products or directly derived from natural products [1]. The four major structural classes of plant derived cancer treatments include Vinca alkaloids, Epipodophyllotoxin lignans, Taxane diterpenoids and Camptotectin quinolone alkaloid derivatives. Approximately 30 plant derived anti-cancer compounds have been reported to be clinically active against various types of tumors and are currently used in clinical trials [65].
A potential development candidate is typically isolated from its natural source only in milligram quantities [6]. Testing in vitro occurs in assays such as the U.S National Cancer Institute 60-cell-line panel, followed by human tumor-derived cell lines in primary culture and in vivo animal models such as the above mentioned zebrafish embryos, the hollow-fiber human tumor cell assay or human tumor xenografts in rodents [6, 50, 52, 66]. Harvey and Cree have recently reviewed current screening systems for anti-cancer activity suitable for use with collections of natural products. These include quantification of cell growth or cell death in standard cancer cell, three-dimensional and primary cell culture, as well as cell-based reporter and molecular assays [50]. The quantification of cell growth or cell death in culture using signals like caspase-3 as a marker for apoptosis come with the handicap that the artificial culture environment may not be suitable to predict activity in in vivo animal models or cancer patients [50]. Another concern raised is the fact that compounds which kill readily proliferating cancer cells in culture may not eliminate the tumor because of the persistence of cancer stem cells for which suitable screening assays with significant throughput are still lacking [50]. Cancer stem cells are only present in low abundance and remain in a quiescent state until receiving environmental cues such as overexpression of growth factors, cytokines, or chemokines resulting in recurrence of cancer after initially successful treatment and loss of efficacy of the initial treatment agent in the relapsed disease [67].
Dietary sources of compounds assumed to have anti-cancer benefits include fruits, vegetables and spices yielding biologically active components such as curcumin, resveratrol, cucurbitacins, isoflavones, saponins, phytosterols, lycopene, and many others [68]. A number of these are gaining importance as adjuvant anti-cancer agents with curcumin, resveratrol and cucurbitacins having activity reported against cancer stem cells [67]. Bhanot et al list 39 natural compounds from marine species, mostly invertebrates, and 10 from microorganisms, mostly from bacteria of the Streptomyces genus, as potential new anti-cancer agents [68]. It is assumed that many prokaryotic and eukaryotic natural product sources may still reveal a number of valuable anti-cancer compounds in the future and even ancient animal species have been suggested as a particularly valuable source [69].
Anti-virals constitute another important class of needed therapeutics. The HIV type-1 (HIV-1) is the cause of the Acquired Immune Deficiency Syndrome (AIDS), a major human viral disease with over 34 million people infected worldwide in 2012 and approximately 1.7 million dying per year [70]. Failure of anti-HIV therapy is observed due to the emergence of drug resistance and the significant side effect profile of existing therapies [71]. Hence, the quest for novel prospective drug candidates with fewer side effects and increased efficacy against various HIV strains also relies on natural products. Naturally derived anti-HIV compounds found to be most promising for the treatment HIV infections, with the potential to overcome drug-resistance of mutated HIV strains, were described to be flavonoids, coumarins, terpenoids, tannins, alkaloids, polyphenols, polysaccharides or proteins [72, 73]. Despite the need for affordable, effective, and better tolerated treatments, the vast majority of the potential natural anti-HIV compounds described have so far only been tested as in vitro, ex vivo or in silico approaches to identify activity; the findings have not yet been confirmed in relevant in vivo systems. Only a few of the many natural products that have been reported to exhibit anti-HIV activities have reached clinical trials and none of them made it on the list of conventional antiretroviral drugs [71, 72].
Antiviral agents from marine sources which demonstrated activity against HIV were recently reviewed by Vo and Kim (2010). These include phlorotannins from brown algae, sulfated derivatives of chitin from the shells of crabs and shrimps including chitosan (produced commercially by deacetylation of chitin), sulfated polysaccharides from marine algae, lectins or carbohydrate-binding proteins from a variety of different species (ranging from prokaryotes to corals, algae, fungi, plants, invertebrates and vertebrates) as well as bioactive peptides isolated by enzymatic hydrolysis of marine organisms [73]. Until now, most of the anti-HIV activities of these marine-derived inhibitors were also only observed in in vitro assays or in mouse model systems and still await confirmation of their value in clinical trials [73].
## 3. Natural product sources
Historically, the most important sources for biologically active natural products have been terrestrial plants and microorganisms such as fungi and bacteria. Terrestrial and aquatic species of plants and microorganisms, especially those of marine origin, produce unique bioactive substances yielding a large variety of valuable therapeutics and lead structures for potential new drugs. Even though natural products may not have coevolved with human proteins, they have emerged in nature to interact with biomolecules [74]. Natural products interact with a wide variety of proteins and other biological targets, acting also as modulators of cellular processes when they inhibit the difficult to target protein-protein interactions [27, 40].
Since the middle of the last century, marine species and microorganisms have consistently and increasingly raised interest as sources for new agents and scaffolds [75]. In recent years, other less conventional sources like alcoholic and non-alcoholic beverages, spices, animal and human excreta, and many more have generated interest for natural product researchers [75]. The more conventional sources for secondary metabolites like plants, marine organisms and microorganisms will be described in more detail in the following sections.
### 3.1. Plants
A significant number of drugs have been derived from plants that were traditionally employed in ethnomedicine or ethnobotany (the use of plants by humans as medicine as in Ayurvedic or Traditional Chinese Medicine), while others were discovered initially (through random screening of plant extracts in animals) or later, by determining their in vitro activity against HIV or cancer cell lines [6, 50, 71-73]. An avenue that may have influenced ethnopharmacology suggests that some traditionally used remedies may have arisen from observations of self-medication by animals [76]. Studies have shown that wild animals often consumed plants and other material for medical rather than nutritional reasons, treating parasitic infections and possible viral and bacterial diseases [11, 60, 76, 77]. For drug discovery, the chemical and pharmacologic investigation of ethnobotanical information offers a viable alternative to high-throughput screening and the body of existing ethnomedical knowledge has led to great developments in health care. It would appear that selection of plants, based on long-term human use in conjunction with appropriate biologic assays that correlate with the ethnobotanical uses, should be most successful [78]. Nevertheless, therapeutic approaches based on active principles from single plant and polyherbal formulations from traditional medicines, like the ones mentioned in Ayurvedic texts, still require scientific validation and sufficient pharmacoepidemiological evidence supporting their safety and efficacy [79]. This is evidenced by the example of aristolochic acid, a constituent of Aristolochia vines, which are used in complementary and alternative therapies. Aristolochic acid is a powerful nephrotoxin and a human carcinogen associated with chronic kidney disease and upper urinary tract urothelial carcinomas (UUC) [80]. These dual toxicities and the target tissues were revealed when a group of otherwise healthy Belgian women developed renal failure and UUC after ingesting Aristolochia herbs in conjunction with a weight-loss regime; subsequently, more cases were reported in Taiwan and countries throughout the world [80]. Importantly, the traditional practice of Chinese herbal medicine in Taiwan mirrors that in China and other Asian countries making it likely that these toxicities are also prevalent in these and in other countries where Aristolochia herbs have long been used for treatment and prevention of disease, thereby creating an international public health problem of considerable magnitude [80, 81].
In the early 1900’s, 80% of all medicines were obtained from roots, barks and leaves and it is estimated is that approximately 25% of all drugs prescribed today still originate from plants [14, 19, 78]. The plant kingdom, with 300,000 to 400,000 higher species (estimated levels reach from 215,000 up to 500,000 [78], was always a key source of new chemical entities (NCEs) for active pharmaceutical ingredients and lead compounds [12]. It is estimated that only 5% to 15% of these terrestrial plants have been chemically and pharmacologically investigated in a systemic fashion [19]. Approximately 10,000 to 15,000 of the world’s plants have documented medicinal uses and roughly 150-200 have been incorporated in western medicine [19, 82]. Marine plants like microalgae, macroalgae (seaweeds) and flowering plants (such as mangroves) have been studied to a much lesser extent and are mostly reported in connection with nutritional, supplemental or ethnopharmacological uses [83]. For over 20 years the U.S. National Cancer Institute has collected higher plants for screening, with the current collection composed of ~ 30,000 species [84]. Only a small percentage of these have reportedly been screened for biological or phytochemical activity until a decade ago and large numbers are constantly being tested for their possible pharmacological value today [35, 78]. Based on their research, the authors justify their assumption that the plant kingdom still holds many species containing substances of medicinal value and for potential pharmaceutical applications, which have yet to be discovered. However, such assumptions may be diminished as the loss of valuable natural sources increases due to factors such as deforestation, environmental pollution, and global warming [85].
Saslis-Lagoudakis et al. provided evidence through phylogenetic cross-cultural comparisons that related plants from different geographic regions are used to treat medical conditions in the same therapeutic areas [86]. Accordingly, there has been a recent surge in interest in the components of traditional Chinese medicines and Ayurvedic remedies [11]. Limitations of this approach include the fact that both of these ancient traditions use polyherbal preparations (botanicals) for the majority of prescriptions and that plants as biological systems have an inherent potential variability in their chemistry and resulting biological activity [12, 35]. Fabricant and Farnsworth reported that 25% of all plants showing biological activity in their assay system failed to reproduce the activity on sub-sequent recollections [35]. This may be caused by factors coming into play after the collection of a specimen, however, for plants it is common to dry the collected plant parts thoroughly in the field before extraction to assure that the material does not compose before reaching the laboratory [12].
Rout et al. describe the approaches for using individual plants as therapeutic agents as follows: (i) to isolate bioactive compounds for direct use as drugs, (ii) to produce bioactive derivatives of known compounds as new structures, (iii) to use substances as pharmacologic agents or tools, and (iv) to use a whole or partial plant as herbal remedy and provide examples for each category [78]. Mixtures of plant-derived products are known as botanicals, and the term is defined by the United States (US) Food and Drug Administration (FDA) to describe finished, labeled products that contain vegetable matter as ingredients which can include plant materials, algae, macroscopic fungi, and combinations thereof [87]. They can fall under the classification of a food (including a dietary supplement), a drug (including a biological drug), a medical device, or a cosmetic [87]. The vast majority of plant-derived treatments are based on synthetic, semisynthetic, or otherwise highly purified or chemically modified drugs [87, 88]. According to the most recent report by BCC Research, the global plant-derived drug market was valued at US$22.1 billion in 2012 and sales are projected to grow to US$ 26.6 billion by 2017 at a compound annual growth rate (CAGR) of 3.8% [89]. The botanicals subgroup currently has only one approved drug, Veregen, with an expected revenue increase from US$2.8 million in 2010 to 599 million in 2017 [89]. ### 3.2. Marine life Given the fact that oceans cover nearly 70% of the earth’s surface and that life originated in the oceans with the first marine organisms evolving more than 3.5 billion years ago, the enormous diversity of organisms in the marine environment is not surprising and largely unexplored [90]. On some coral reefs, their density can reach up to 1,000 species per square meter, which is believed to be a higher biodiversity than observed in tropical rainforests and inspired researchers for decades to search for novel compounds from marine sources [57, 91]. As the greatest biodiversity is found in the oceans, it is estimated that between 250,000 and one million marine species could provide an immense resource to discover NCEs serving as unprecedented novel bioactive structures and scaffolds that have the potential to serve as medical treatments or templates for new therapeutics [23, 92]. The interest in novel chemical structures from marine organisms started in the 1950s as marine animal taxonomy advanced significantly, but progressed at a slow pace for the first two decades before it started to burgeon in the 1970s [91-94]. Since then, approximately 30,000 structurally diverse natural products with a vast array of bioactivities have been discovered from marine organisms including microbes, algae and invertebrates [92, 95]. Invertebrates alone comprise approximately 60% of all marine animals and were described as the source of almost 10,000 new natural products since 1990 with a pronounced increase to about 1,000 compounds per year in more recent years [23, 32, 93]. By the turn of the 21st century larger percentages of bioactive NCEs were reported for marine organisms in comparison to terrestrial organisms, but nevertheless, marine chemical ecology is still several decades behind its terrestrial counterpart with respect to the total number of characterized and documented natural products [93, 96]. Kong et al. specifically compared natural products from terrestrial and marine sources. They found that compounds from marine organisms exhibited a higher chemical novelty and that over 2/3 of those scaffolds were exclusively used by marine species, but alerted readers to concerns of the suitability of the new scaffolds as drug templates because of their unsuitably high hydrophobicity [96]. As is the case for plant derived natural compounds, the U.S. National Cancer Institute also plays an important role for establishing marine organism collections since the 1980s and has a vast National repository of invertebrate derived compounds and extracts from specimens rigorously identified by taxonomic experts [92]. Many marine natural products appear to arise from multi-functional enzymes that are also present in terrestrial systems, exhibiting a cross phylum activity with terrestrial biota [94, 95]. However, a large number of marine derived compounds also possess a substantial amount of functional groups, which were not previously described from terrestrial metabolites [91, 94, 95]. They range from derivatives of amino acids and nucleosides to macrolides, porphyrins, terpenoids, aliphatic cyclic peroxides, and sterols [91]. These secondary metabolites resulted from evolutionary pressure threatening many marine organisms, especially those which are soft bodied and have a sedentary life style, forcing them to develop the ability to synthetize toxic compounds which serve to deter predators, manage competitors or immobilize prey [57, 91, 93, 94]. The search for new drug candidates from marine species has expanded into circumpolar regions for cold-adapted species as well as harsh environments like deep-sea hydrothermal vents; these approaches have been particularly successful with filter-feeders such as sponges, tunicates and bryozoans [23, 97]. The fact that marine invertebrates contain astounding levels of microbial diversity and form highly complex microbial communities led to the assumption, followed by confirmation in recent examples, that microbial symbionts like bacteria are important producers of natural products derived from marine species [58]. In particular, these include polyketides and non-ribosomally synthesized peptides as well as unique biosynthetic enzymes which emerged as potent biocatalysts in medicinal chemistry [58, 97]. By 2010, four drugs of marine origin had obtained approval for the treatment of human disorders [98]. Cytarabine (Cytosar-U®; Upjohn/Pfizer) for the treatment of white blood cell cancers, vidarabine (Vira-A®; discontinued by distributor Monarch Pharmaceuticals) an ophtalmic antiviral, and ziconotide (Prialt®; Elan) for pain management were FDA approved and trabectedin (Yondelis®; Pharmamar), an anticancer compound against soft tissue and ovarian cancer was approved in Europe [98]. Vidarabine and cytarabine originate from marine sponges, ziconotide is the synthetic equivalent of a conopeptide originating from a marine cone snail, and trabectedin, now produced synthetically, originates from a bacterial symbiont of a tunicate or sea squirt [32, 99]. At the same time, 13 marine organism-derived drug candidates were listed to be in clinical development (3 in Phase III, 7 in Phase II, and 3 in Phase I) and hundreds are in pre-clinical testing as ion channel blockers, enzyme inhibitors, DNA-interactive and microtubule-interfering agents, with the majority of the latter compounds being tested for anti-tumor and cytotoxic properties [91, 94, 98]. Natural products of marine origin with biological activity of interest include, but are not limited to, curacin A, eleutherobin, discodermolide, bryostatins, dolostatins, cephalostatins [16]. ### 3.3. Microorganisms Microorganisms were identified early on as sources of valuable natural products as evidenced by the discovery of penicillin by from the fungus Penicillium rubens by Alexander Fleming in 1928 [100]. Historically, microorganisms (amongst them mostly bacteria and fungi) have played an important role in providing new structures, like antibiotics for drug discovery and development. The terrestrial microbial populations are immensely diverse which is also reflected in the number of compounds and metabolites isolated from these microorganisms. As mentioned above, the similarity of many compounds from marine invertebrates like sponges, ascidians, soft corals and bryozoans to those isolated from terrestrial microbes led to the hypothesis that associated microorganisms might be responsible for their production. Over time it became more and more evident, that a significant number of marine natural products are actually not produced by the originally assumed invertebrate but rather by microbes living in symbioses with their invertebrate host [92, 101]. In some instances it could indeed be demonstrated early on that the isolated marine microbes are the original source of the new compounds or secondary metabolites discovered and in recent years marine bacteria have emerged more and more as a source of NCEs [58, 91, 94, 95]. Besides bacteria, marine fungi and deep-sea hydrothermal vent microorganisms are reported to produce bioactive compounds and metabolites [91, 94]. Deep-sea vent sites offer harsh conditions in depth below 200 meters with complete absence of light, pressures in excess of 20 atmospheres, temperatures of up to 400°Celius, pH extremes and high sulfide concentrations and are populated by highly dense and unique, biologically diverse communities [91, 94, 102]. Unique microorganisms are abundant on land, in freshwater and all areas of the ocean. However, the enormous biological diversity of free-living and symbiotic marine microbes has so far only been explored to a very limited extent. The estimates extrapolate the number of marine species to at least a million, but for marine microbial species, including fungi and bacteria, the estimated numbers reach as high as tens or even hundreds of millions [23]. Over 74,000 known species of fungi are reported including around 3,000 aquatic species of which only about 465 are described as marine species, but a vast geographical area has not yet been sampled and estimates for the potential total number of species reach from 0.5 to 9.9 million with about 1.5 million considered as most realistic [103, 104]. The overlap is assumed to be relatively high between species in terrestrial and freshwater habitats, but not between these two and the marine habitat [104]. Nevertheless, a large percentage of the over 270 secondary metabolites isolated from marine fungi resembles analogues of compounds previously discovered from terrestrial fungi but some of the new substances identified exhibited potent activities against tumor cells, microbes or bacteria like methicillin-resistant Staphylococcus aureus (MRSA) or even antifouling properties [59, 105]. In comparison to bacteria, fungi appear to be rare in marine environments and few marine fungi isolates exist in culture [106]. Marine bacteria are assumed to constitute approximately 10% of the living biomass carbon and inhabit mainly sediments but can also be found in open oceans and in association with marine organisms [90, 91, 94]. Many marine invertebrates are associated with large amounts of epibiotic and endobiotic microorganisms and for many sponges bacteria can make up to 40% of the animal biomass and even resemble new species [97]. In fact it is assumed that almost all marine organisms host bacteria on their surface and that the vast majority bare epibiotic films of variable density and composition, which can affect the basibiont’s physiology and interactions with its environment in beneficial, detrimental, or ambiguous manners [107]. This constitutes a vast pool for the discovery of new structures and scaffolds if the source can be unanimously established. To identify the source of a compound, the determination of its mere presence in a certain organism is not sufficient, as this could be the result of active or passive accumulation and does not necessarily reflect the true site of its biosynthetic production. As Gulder and Moore further explain: “An unambiguous assignment of the biosynthetic origin of a natural product derived from a complex assemblage of marine organisms thus has to originate at the genomic level. This is particularly true for bacterial symbionts, which have to date eluded cultivation” [58]. These microbes generally organize their biosynthetic genes for each secondary metabolite in compact clusters, which will, following identification and sequencing of the cluster responsible for the respective pathways, allow the transfer of the respective bacterial genes into effective heterologous producers, like Escherichia coli (E. coli) [55, 58, 97]. In terms of microbial sources, culturing of the respective microorganism may generally be a viable approach to increase quantities. A major difference between microorganisms from terrestrial and marine sources is the fact that marine pelagic bacteria are much more difficult to grow in culture than the soil borne actinomycetes; therefore determining the conditions for replication, growth of sufficient quantities and induction of metabolite production can be a tedious challenge [12, 17]. This again may necessitate the production in a heterologous system like that observed for E. coli. ## 4. Challenges Natural products, although a valuable and precious resource, also come with their fair share of challenges in a variety of aspects. As mentioned before, one of the major issues concerning the use of natural products are the difficulties associated with obtaining sufficient amounts of material pure enough for discovery and development activities. If a compound is derived from a plant growing only in small quantities or remote locations or a marine organism residing in great depth or difficult to access regions, re-supply becomes a problem. The threat of losing potentially valuable natural sources of pharmacologically active ingredients is constantly increasing due to the threat of extinction by deforestation of large landmasses and environmental pollution in remote areas as well as global warming [85]. It is estimated that about 70% of the supply of herbal raw material for Ayurveda and other homeopathic medicines in India comes from the wild [82, 85]. To meet the increasing demand for raw material, to conserve wild resources, and to reduce the potential variability in the active ingredient content in medicinal plants from different collection areas, it is important to implement more controlled cultivation programs to ensure quality and to protect resources [82, 85]. Tissues of marine invertebrates present unique problems for extraction, because of their high water and salt content, and the promising compounds may be present only in low amounts and/or can be very difficult to isolate. Sponges and their microbial fauna are mostly not suitable for culture, and the compounds of interest need to be extracted and purified from specimens collected in the wild [17, 93]. Marine organisms and microbes constitute a valuable potential source of NCEs and structural templates for drug discovery in the future, but may necessitate tons of raw material to isolate milligram to gram amounts of the compound of interest [97]. This difficulty combined with the challenges for synthetic approaches to obtain significant quantities of potential new drug candidates, based on their often highly complex structures, are obstacles that can hamper their use in discovery and development [58, 97]. In some cases, supply issues could be resolved by semi-synthesis or total synthesis of the compound, the development of synthetic analogs with more manageable properties, or by design of a pharmacophore of reduced complexity, which can then be synthesized [17, 92, 97]. Fragments or synthetic analogs with simplified structures may retain bioactivity or even show improved activity towards the target [12]. Furthermore, environmental aspects can constitute significant hurdles for supplying material for discovery and development as the product may stem from an endangered species or the wild collection of the producing species may be detrimental to its originating terrestrial or marine ecosystem. Additionally, as mentioned above, because of their low abundance many compounds of interest from natural sources need to be extracted and purified from large quantities of specimen collected in the wild, which in turn carries the risks of over-exploitation and habitat destruction [17, 93]. Radjasa et al. provide a positive outlook for marine ecosystems and state “There is optimism for the future because the international marine bioorganic community clearly recognizes that invertebrates must be harvested and studied in an environmentally sustainable manner” [92]. Although aquaculture of marine species or culture of bacteria seem like logical alternative sources to obtain product, they are not viable avenues in most cases because it proves difficult to impossible to culture the source organisms (especially invertebrates and/or their microbial symbionts like bacteria etc.) or they may not produce the compound of interest under the given culture conditions [12, 17, 95, 97]. Findings indicate, that the bacterial composition on invertebrates is largely independent from sponge taxonomy or locality of collection and the bacteria most likely are contaminants from the ocean water rather than specific symbionts, which further exacerbates the cultivation problem [97, 108]. Another challenge can result from redundant activity determination in assay systems and the mixed composition of natural product extracts. With over 150,000 known small molecules characterized from natural sources, previously known natural products are often re-isolated in the course of bioassay-guided fractionation [84]. While this may be acceptable if the biological activity is new, it is frustrating to waste resources on the de novo structure elucidation of known compounds. Furthermore, not all compounds contained in natural product extracts are drug leads and it is extremely desirable to remove “nuisance compounds” like tannins, phorbol esters, saponins, and anionic polysaccharides; the latter, for instance, being highly active in cellular HIV bioassays [35]. Last, but not least, intellectual property rights can pose a significant hurdle that is difficult to manage. In general, patent protection can be obtained if the active principles derived from natural sources have novel structures and relevant biological activity. However, as mentioned before, additional handicaps may arise in the context of developing and marketing products from natural sources in the form of potentially significant intellectual property issues as well as, possibly, very difficult and costly negotiations to obtain agreements to collect and develop natural products from species collected in foreign countries [25, 26, 35]. ## 5. Regulatory requirements and risks The regulatory requirements for different product categories containing natural substances like pharmaceuticals, nutraceuticals, and cosmeceuticals vary from rather stringent to generous to non-existent at an International level. To exemplify regulatory approaches for the three aforementioned categories, an overview of the current situation in North America (U.S. and Canada) and the European Union is summarized. In many areas we have presented exact wording from the respective governing websites, as the phrasing of such represents the official context and any deviation thereof could be misleading. ### 5.1. Natural product-derived pharmaceuticals Natural products constitute a key source of pharmacologically active ingredients in a variety of novel agents with therapeutic potential in a wide range of diseases. Pharmaceuticals containing natural products or compounds derived from natural product scaffolds or templates have to undergo the same stringent approval process as drugs obtained from purely synthetic origin. #### 5.1.1. North America ### 5.1.1.1. United States of America (U.S.) The U.S. Government Office for the Control of Food and Drug Administration (FDA) oversees the regulatory control of pharmaceuticals including new treatments based on natural products [109]. The FDA's Center for Drug Evaluation and Research (CDER) role is to evaluate and approve new drugs before they can be sold ensuring that drugs are safe and effective for intended use and that their health benefits outweigh their known risks [109]. ### 5.1.1.2. Canada In Canada, Health Canada fulfills the same role as the US FDA according to the mandate under the authority of the Food and Drugs Act and the Food and Drug Regulations [110]. #### 5.1.2. European Union The European Medicines Agency (EMA) with headquarters in London/England regulates drugs and medicinal products in the European Union (EU) [111]. On April 30th, 2011 the EU entered into force the directive on herbal medicine products called Traditional Herbal Medicinal Products Directive 2004/24/EC, THMPD [111]. The regulation came as a sub-directive for the act on Human Medicinal Products Directive 2001/83/EC claiming a unique set of information on a herbal substance or herbal preparation for all EU Member States. Such could be used when evaluating marketing applications for herbal medicinal products from companies and covers medicinal products containing herbal substances/preparations [111]. To reach the market, these must fall within one of the following three categories, as outlined on the EMA website [111]: 1. a product can be classified under traditional medicinal use provisions (‘traditional use’) accepted on the basis of sufficient safety data and plausible efficacy: the product is granted a traditional use registration (simplified registration procedure) by a Member State, 2. a product can be classified under well-established medicinal use provisions (‘well-established use’). This is demonstrated with the provision of scientific literature establishing that the active substances of the medicinal products have been in well-established medicinal use within the Union for at least ten years, with recognized efficacy and an acceptable level of safety. As a result the product is granted a marketing authorization usually by a Member State or by the European Medicines Agency. While both classifications have specific requirements, both regulatory paths involve the assessment of mostly bibliographic safety and efficacy data. 3. a product can be authorized after evaluation of a marketing authorization application consisting of only safety and efficacy data from the company’s own development (‘stand alone’) or a combination of own studies and bibliographic data (‘mixed application’). As a result the product is granted a marketing authorization by a Member State or by the Agency via the centralized procedure if all requirements are met. In summary, while safety needs to be shown for products, proof of efficacy is not always a requirement and only the traditional indications in specified conditions must be plausible. Nonetheless, and irrespective of the regulatory pathway to access the market, the quality of the herbal medicinal product must always be demonstrated [111]. The Directive provides definitions for herbal medicinal products, herbal preparations and herbal substances, as follows [111]: • Herbal medicinal product: Any medicinal product, exclusively containing as active ingredients one or more herbal substances or one or more herbal preparations, or one or more such herbal substances in combination with one or more such herbal preparations. • Herbal substances: All mainly whole, fragmented or cut plants, plant parts, algae, fungi, lichen in an unprocessed, usually dried, form, but sometimes fresh. Certain exudates that have not been subjected to a specific treatment are also considered to be herbal substances. Herbal substances are precisely defined by the plant part used and the botanical name according to the binomial system (genus, species, variety and author). • Herbal preparations: Preparations obtained by subjecting herbal substances to treatments such as extraction, distillation, expression, fractionation, purification, concentration or fermentation. These include comminuted or powdered herbal substances, tinctures, extracts, essential oils, expressed juices and processed exudates Additionally, it has been noted that from a herbal substance (e.g. valerian root) different herbal preparations (e.g. a valerian root extract using 70% ethanol) can be made; in such cases, both can represent the active ingredient in an individual herbal medicinal product [111]. ### 5.2. Nutraceuticals — Dietary supplements (U.S.)/Natural health products (Canada) Even if natural health products (NHPs) or dietary supplements are considered as or expected to be safe, they may still carry potential risks in themselves or through interactions with prescription or Over The Counter (OTC) drugs. This is illustrated by the previously described example of aristolochic acid, a powerful nephrotoxin and a human carcinogen associated with chronic kidney disease and upper urinary tract urothelial carcinomas after ingesting Aristolochia herbs in conjunction with a weight-loss regime [80, 81]. Furthermore, interactions between NHPs and prescription medicines are of increasing concern and need to be considered by physicians and patients alike [112]. Mills et al, in their evaluation of 47 trials which examined drug interactions with 19 different herbal preparations, observed potentially clinically significant drug interactions with St. Johns Wort, garlic, and American ginseng [113]. #### 5.2.1. North America ### 5.2.1.1. United States of America In the U.S., biologically active food and dietary supplements are regulated by the FDA and are classified as food and nutrition, not drugs [88]. The FDA website provides a detailed overview of their regulatory approach concerning nutraceuticals. The following paragraphs reflect some core points as outlined on the FDA’s respective website [88]. • The FDA regulates both finished dietary supplement products and dietary ingredients under a different set of regulations than those covering "conventional" foods and drug products (prescription and OTC). Under the Dietary Supplement Health and Education Act (DSHEA) of 1994, the dietary supplement or dietary ingredient manufacturer is responsible for ensuring that a dietary supplement or ingredient is safe before it is marketed. The FDA is responsible for taking action against any unsafe dietary supplement product after it reaches the market. Generally, manufacturers do not need to register their products with the FDA nor get FDA approval before producing or selling dietary supplements. • The Federal Food, Drug, and Cosmetic Act requires that manufacturers and distributors who wish to market dietary supplements that contain "new dietary ingredients" notify the Food and Drug Administration about these ingredients, which must include information that is the basis on which manufacturers/distributors have concluded that a dietary supplement containing a new dietary ingredient will reasonably be expected to be safe under the conditions of use recommended or suggested in the labeling [87]. • The U.S. Congress defined the term "dietary supplement" and both of the terms "dietary ingredient" and "new dietary ingredient" as components of dietary supplements in the DSHEA. A dietary supplement is a product taken by mouth that contains a "dietary ingredient" intended to supplement the diet. • In order to be a "dietary ingredient," it must be one or any combination of the following substances: • A "new dietary ingredient" is one that meets the above definition for a "dietary ingredient" and was not sold in the U.S. in a dietary supplement before October 15, 1994. • Dietary supplements can also be extracts or concentrates, and may be found in many forms such as tablets, capsules, softgels, gelcaps, liquids, or powders [88]. They can also be in other forms, such as a bar, but if they are, information on their label must not represent the product as a conventional food or a sole item of a meal or diet [88]. Whatever their form may be, the DSHEA places dietary supplements in a special category under the general umbrella of "foods," not drugs, and requires that every supplement be labeled a dietary supplement. ### 5.2.1.2. Canada In Canada, the use and sale of natural health products (NHPs) is on the rise [8]. A 2010 Ipsos-Reid survey showed that 73% of Canadians regularly take natural health products (NHPs) like vitamins and minerals, herbal products, and homeopathic medicines [114]. Health Canada defines natural health products under the Natural Health Products Regulations as: • Vitamins and minerals • Herbal remedies • Homeopathic medicines • Traditional medicines such as traditional Chinese medicines • Probiotics • Other products like amino acids and essential fatty acids [115]. Natural Health Products must be safe to use as OTC products and not need a prescription to be sold [115]. Natural products, compounds and active ingredients derived from natural sources or totally synthesized and needing a prescription are regulated as drugs under the Food and Drug Regulations [115]. #### 5.2.2. European Union Herbal supplements and nutritional supplements are not regulated on a harmonized EU wide basis and remain under the control of the relevant medical institutions of the individual EU member states. ### 5.3. Cosmeceuticals Although the term is not recognized by the US Food and Drug Administration (FDA) or by the European Medicines Agency (EMA), it has been widely adopted by the cosmetics industry, which is rapidly expanding in spite of global economic woes in recent years [116]. The global cosmeceuticals market references the seven most developed markets including the U.S. and the top five European countries, namely the UK, France, Germany, Italy and Spain; as well as Japan [116]. In 2011, the global cosmeceuticals market was estimated to be worth$30.9 billion (with the aforementioned European countries accounting for approximately 65% of overall revenues) and is expected to reach $42.4 billion by 2018 [116]. Three major categories have been noted in the cosmeceutical industry including skin care, hair care, and others, with the skin care segment accounting for the largest share of the market at 43% [117]. Dominated by anti-aging products, the skin-care market is expected to contribute significantly to future growth based on the aging populations in the top seven aforementioned markets [116]. Cosmeceuticals are topically applied and represent a hybrid of cosmetics and pharmaceuticals usually containing vitamins, herbs, various oils, and botanical extracts or a mixture thereof including antioxidants, growth factors, peptides, anti-inflammatories/botanicals, polysaccharides, and pigment-lightening agents [117, 118]. The combination of cosmetics and foods resulted in products termed nutricosmetics. Nutricosmetics are foods and supplements claiming cosmetic effects with major ingredients like soy isoflavone proteins, lutein, lycopene, vitamins (A, B6, E), omega-3 fatty acids, beta-carotene probiotics, sterol esters, chondrotin and coenzyme Q10 [119, 120]. These compounds act as antioxidants and the respective nutricosmetics containing them are being promoted for their skin care properties as for instance in anti-aging by fighting free radicals generated as a by-product of biochemical reactions through skin exposure to the sun [119]. #### 5.3.1. North America ### 5.3.1.1. United States of America In the US, products that can be put in both the cosmetics and drugs category, such as cosmetic products with active ingredients which claim therapeutic use, require New Drug Application (NDA) approval or must comply with the appropriate monograph for an (OTC) drug. Moreover, the FDA also has specific guidelines for Good Manufacturing Practice (GMP) for cosmetics [116]. While the Federal Food, Drug, and Cosmetic Act (FD&C Act) does not recognize the term "cosmeceutical", the cosmetic industry uses this word to refer to cosmetic products that have medicinal or drug-like benefits [118]. The FD&C Act defines drugs as those products that cure, treat, mitigate or prevent disease or that affect the structure or function of the human body [121]. Under the FD&C Act, cosmetic products and ingredients, with the exception of color additives, do not require FDA approval before they go on the market [121]. Therefore, while drugs are subject to a review and approval process by the FDA, cosmetics are not approved by the FDA prior to sale. However, when a product makes a therapeutic claim (e.g. to prevent or treat disease), it is classified as a drug and therefore requires evaluation by the FDA's Center for Drug Evaluation and Research (CDER) and a drug identification number (DIN) before it can be sold. ### 5.3.1.2. Canada In Canada, the term “cosmeceutical” (used to describe a cosmetic product with pharmaceutical-like benefits) is not employed by Health Canada [122]. Therefore cosmeceuticals fall under either cosmetics or drugs (depending on the claims made and/or the composition of the product) and are subject to the provisions of the Food and Drugs Act and its Cosmetic Regulations regarding composition, safety, labeling and advertising and they are subject to the provisions of the Consumer Packaging and Labeling Act and Regulations [122]. The three most significant features of the Canadian cosmetic regulatory system are mandatory notification of all cosmetic products, safety of ingredients and products, and product labeling [122]. According to Health Canada, a "cosmetic" is defined as "any substance or mixture of substances, manufactured, sold or represented for use in cleansing, improving or altering the complexion, skin, hair or teeth and includes deodorants and perfumes” [122]. #### 5.3.2. European Union In Europe, EMA guidelines place a clear demarcation between drugs and cosmetics, whereby a cosmetic is a product that is to be applied topically with an intended cosmetic function and products cannot fall under both categories, unlike in the US [116]. On November 30th (2009), the new Cosmetic Products Regulation, EU Regulation 1223/2009 was adopted, replacing the Cosmetics Directive [123]. With the new Cosmetics Regulation, Europe claims to have a robust, internationally recognized regime, which reinforces product safety taking into consideration the latest technological developments [123]. Most of the provisions of this new regulation will be applicable as of July 11th, 2013 [123]. ## 6. Conclusion A natural product or secondary metabolite is a pharmacologically or biologically active chemical compound or substance, which is found in nature and produced by a living organism. The lengthy process of natural products evolution has resulted in optimal interactions with biological macromolecules and targets. Historically, natural substances have been the most productive source of active compounds and chemical lead structures. Natural products have traditionally provided a large fraction of the drugs in use today and millions of terrestrial and marine plants, organisms and microorganisms provide an immense resource to discover unprecedented novel bioactive scaffolds. These have the potential to serve as medical treatments or templates for new therapeutics and may be suitable for production via a synthetic routes or in a heterologous system like E. coli. About half of the agents in today’s arsenal of marketed drugs are derived from biological sources with the large majority being based on terrestrial natural product scaffolds. Approximately 50% of the new drugs introduced since 1994 were either natural products or derived from natural products. As of today, only a very small fraction of bioactive structures of potential therapeutic relevance from plants, microbes, and marine organisms have been chemically analyzed or examined in a broad panel of screening models or bioassays. The discovery of valuable therapeutic agents from natural sources continues in the 21th century by reaching into new and untapped terrestrial and marine source organisms as the chemical novelty associated with natural products is higher than that of structures from any other source. There is growing awareness of the limited structural diversity in existing compound collections and the extreme chemical diversity, the high biological potency, and the potential to frequently discover drug-like characteristics in natural products. Therefore, they constitute a valuable platform for the development of new therapeutics for a variety of indications, although they may still not contain enough versatility to yield suitable treatments for all heritable human diseases. As some major pharmaceutical companies terminated their natural product programs, the future role to discover and feed candidates into the development pipelines will reside increasingly with research institutes and small biotech companies. Over a hundred new drug candidates from natural sources like plants, bacteria, fungi and animals or obtained semi-synthetically are in clinical development with a similar number in preclinical development. They target a variety of indications, mainly cancer and infectious diseases (bacterial, viral, fungal, and parasitic) but also other therapeutic areas such as cardiovascular diseases, neurological illnesses and depression, metabolic diorders and inflammatory diseases. Natural products, although a valuable and precious resource, also come with their fair share of challenges concerning the provision of sufficient amounts of pure enough material for discovery and development activities. As mentioned earlier, such apprehensions are based on the threat of losing potentially valuable natural sources through extinction resulting from deforestation of large landmasses, environmental pollution in remote areas as well as global warming. Countries are also increasingly protective of their natural assets in flora and fauna and may not authorize the collection of sample species without significant demands and very difficult negotiations. The regulatory requirements for different product categories containing natural substances like pharmaceuticals, nutraceuticals, and cosmeceuticals vary from rather stringent over generous to non-existent at an international level. Even if natural health products or dietary supplements are considered as or expected to be safe, they may still carry potential risks in themselves or through interactions with prescription or OTC drugs. Therefore, the discovery and development of natural products require cientific validation and sufficient pharmacoepidemiological evidence to support their safety and efficacy. ## Acknowledgements Funding for the publication of this chapter was provided by the School of Business Administration and the Centre for Health and Biotech Management Research, University of Prince Edward Island, Charlottetown, PE, Canada. We are grateful to BCC Research (Wellesley, MA, USA) for providing us with applicable sections from their 2013 market report, namely “Botanical and Plant-derived Drugs: Global Markets”. ## References 1 - Newman DJ, Cragg GM. Natural products as sources of new drugs over the 30 years from 1981 to 2010. Journal of natural products. 2012 Mar 23;75(3):311-35. PubMed PMID: 22316239. 2 - Chin YW, Balunas MJ, Chai HB, Kinghorn AD. Drug discovery from natural sources. The AAPS journal. 2006;8(2):E239-53. PubMed PMID: 16796374. Pubmed Central PMCID: 3231566. 3 - All natural. Nat Chem Biol. 2007; 3:[351]. Available from: http://dx.doi.org/10.1038/nchembio0707-351. 4 - Natural Product. Feb 7, 2013. Available from: http://www.thefreedictionary.com/Natural+product. 5 - Fischbach MA, Clardy J. One pathway, many products. Nat Chem Biol. 2007 Jul;3(7):353-5. PubMed PMID: 17576415. 6 - Kinghorn AD, Chin YW, Swanson SM. Discovery of natural product anticancer agents from biodiverse organisms. Current opinion in drug discovery & development. 2009 Mar;12(2):189-96. PubMed PMID: 19333864. Pubmed Central PMCID: 2877274. 7 - Zähner H. What are secondary metabolites? Folia Microbiol. 1979 1979/10/01;24(5):435-43. 8 - Chernyak M. Canadian NHP Market: Headed In The Right Direction2012; (November). Available from: http://www.nutraceuticalsworld.com/issues/2012-11/view_features/canadian-nhp-market-headed-in-the-right-direction/. 9 - What-is-pharmacognosy? [Internet]. 2011. Available from: http://www.pharmacognosy.us/what-is-pharmacognosy/. 10 - Ertl P, Schuffenhauer A. Cheminformatics analysis of natural products: lessons from nature inspiring the design of new drugs. Progress in drug research Fortschritte der Arzneimittelforschung Progres des recherches pharmaceutiques. 2008;66:217, 9-35. PubMed PMID: 18416307. 11 - Harvey AL, Clark RL, Mackay SP, Johnston BF. Current strategies for drug discovery through natural products. Expert opinion on drug discovery. 2010 Jun;5(6):559-68. PubMed PMID: 22823167. 12 - Beutler JA. Natural Products as a Foundation for Drug Discovery. Current protocols in pharmacology / editorial board, SJ Enna. 2009 Sep 1;46:9 11 1-9 21. PubMed PMID: 20161632. Pubmed Central PMCID: 2813068. 13 - Jeffreys D. Aspirin: The Remarkable Story of a Wonderdrug: Bloomsbury Publishing PLC; 2004. 14 - Schwartsmann G. Marine organisms and other novel natural sources of new cancer drugs. Annals of oncology : official journal of the European Society for Medical Oncology / ESMO. 2000;11 Suppl 3:235-43. PubMed PMID: 11079147. 15 - Butler MS. Natural products to drugs: natural product derived compounds in clinical trials. Natural product reports. 2005 Apr;22(2):162-95. PubMed PMID: 15806196. 16 - Mishra BB, Tiwari VK. Natural Products in Drug Discovery; Clinical Evaluations and Investigations. Research Signpost. 2011:1-62. 17 - Molinski TF, Dalisay DS, Lievens SL, Saludes JP. Drug development from marine natural products. Nature reviews Drug discovery. 2009 Jan;8(1):69-85. PubMed PMID: 19096380. 18 - Goodman JW, V. The Story of Taxol: Nature and Politics in the Pursuit of an Anti-Cancer Drug. . British Medical Journal. 2001;323 (1704)(July 14):115. Pubmed Central PMCID: PMC1120731. 19 - McChesney JD, Venkataraman SK, Henri JT. Plant natural products: back to the future or into extinction? Phytochemistry. 2007 Jul;68(14):2015-22. PubMed PMID: 17574638. 20 - Wani MC, Taylor HL, Wall ME, Coggon P, McPhail AT. Plant antitumor agents. VI. The isolation and structure of taxol, a novel antileukemic and antitumor agent from Taxus brevifolia. Journal of the American Chemical Society. 1971 May 5;93(9):2325-7. PubMed PMID: 5553076. 21 - Harvey A. The role of natural products in drug discovery and development in the new millennium. IDrugs. 2010 Feb;13(2):70-2. PubMed PMID: 20127553. 22 - Brahmachari G. Bioactive Natural Products: World Scientific Publishing Co. ; 2011. 23 - Montaser R, Luesch H. Marine natural products: a new wave of drugs? Future medicinal chemistry. 2011 Sep;3(12):1475-89. PubMed PMID: 21882941. Pubmed Central PMCID: 3210699. 24 - Carlson EE. Natural products as chemical probes. ACS chemical biology. 2010 Jul 16;5(7):639-53. PubMed PMID: 20509672. Pubmed Central PMCID: 2926141. 25 - Rouhi AM. Rediscovering Natural Products. Chem. Eng. News. 2003 October 13, :pp. 104-7. 26 - Rouhi AM. Moving Beyond Natural Products. Chem. Eng. News. 2003 October 13, :pp. 77-91. 27 - Koehn FE, Carter GT. The evolving role of natural products in drug discovery. Nature reviews Drug discovery. 2005 Mar;4(3):206-20. PubMed PMID: 15729362. 28 - Ortholand JY, Ganesan A. Natural products and combinatorial chemistry: back to the future. Current opinion in chemical biology. 2004 Jun;8(3):271-80. PubMed PMID: 15183325. 29 - Molinari G. Natural products in drug discovery: present status and perspectives. Advances in experimental medicine and biology. 2009;655:13-27. PubMed PMID: 20047031. 30 - Wolfender JL, Queiroz EF. New approaches for studying the chemical diversity of natural resources and the bioactivity of their constituents. Chimia. 2012;66(5):324-9. PubMed PMID: 22867545. 31 - Thomas GL, Johannes CW. Natural product-like synthetic libraries. Current opinion in chemical biology. 2011 Aug;15(4):516-22. PubMed PMID: 21684804. 32 - Glaser KB, Mayer AM. A renaissance in marine pharmacology: from preclinical curiosity to clinical reality. Biochemical pharmacology. 2009 Sep 1;78(5):440-8. PubMed PMID: 19393227. 33 - Cragg GM, Katz F, Newman DJ, Rosenthal J. The impact of the United Nations Convention on Biological Diversity on natural products research. Natural product reports. 2012 Dec;29(12):1407-23. PubMed PMID: 23037777. 34 - Kirsop B. The convention on Biological Diversity: Some implications for microbiology and microbial culture collections. Journal of Industrial Microbiology and Biotechnology. 1996;17(5):505-11. 35 - Fabricant DS, Farnsworth NR. The value of plants used in traditional medicine for drug discovery. Environmental health perspectives. 2001 Mar;109 Suppl 1:69-75. PubMed PMID: 11250806. Pubmed Central PMCID: 1240543. 36 - Feher M, Schmidt JM. Property distributions: differences between drugs, natural products, and molecules from combinatorial chemistry. Journal of chemical information and computer sciences. 2003 Jan-Feb;43(1):218-27. PubMed PMID: 12546556. 37 - Lee ML, Schneider G. Scaffold architecture and pharmacophoric properties of natural products and trade drugs: application in the design of natural product-based combinatorial libraries. Journal of combinatorial chemistry. 2001 May-Jun;3(3):284-9. PubMed PMID: 11350252. 38 - Larsson J, Gottfries J, Muresan S, Backlund A. ChemGPS-NP: tuned for navigation in biologically relevant chemical space. Journal of natural products. 2007 May;70(5):789-94. PubMed PMID: 17439280. 39 - Yongye AB, Waddell J, Medina-Franco JL. Molecular scaffold analysis of natural products databases in the public domain. Chemical biology & drug design. 2012 Nov;80(5):717-24. PubMed PMID: 22863071. 40 - Barker A, Kettle JG, Nowak T, Pease JE. Expanding medicinal chemistry space. Drug discovery today. 2012 Oct 29. PubMed PMID: 23117010. 41 - Pors K, Goldberg FW, Leamon CP, Rigby AC, Snyder SA, Falconer RA. The changing landscape of cancer drug discovery: a challenge to the medicinal chemist of tomorrow. Drug discovery today. 2009 Nov;14(21-22):1045-50. PubMed PMID: 19638319. 42 - Dancik V, Seiler KP, Young DW, Schreiber SL, Clemons PA. Distinct biological network properties between the targets of natural products and disease genes. Journal of the American Chemical Society. 2010 Jul 14;132(27):9259-61. PubMed PMID: 20565092. Pubmed Central PMCID: 2898216. 43 - Hong J. Role of natural product diversity in chemical biology. Current opinion in chemical biology. 2011 Jun;15(3):350-4. PubMed PMID: 21489856. Pubmed Central PMCID: 3110584. 44 - Lopez-Vallejo F, Giulianotti MA, Houghten RA, Medina-Franco JL. Expanding the medicinally relevant chemical space with compound libraries. Drug discovery today. 2012 Jul;17(13-14):718-26. PubMed PMID: 22515962. 45 - Shang S, Tan DS. Advancing chemistry and biology through diversity-oriented synthesis of natural product-like libraries. Current opinion in chemical biology. 2005 Jun;9(3):248-58. PubMed PMID: 15939326. 46 - Zhang HY, Chen LL, Li XJ, Zhang J. Evolutionary inspirations for drug discovery. Trends in pharmacological sciences. 2010 Oct;31(10):443-8. PubMed PMID: 20724009. 47 - Balamurugan R, Dekker FJ, Waldmann H. Design of compound libraries based on natural product scaffolds and protein structure similarity clustering (PSSC). Molecular bioSystems. 2005 May;1(1):36-45. PubMed PMID: 16880961. 48 - Rosen J, Gottfries J, Muresan S, Backlund A, Oprea TI. Novel chemical space exploration via natural products. Journal of medicinal chemistry. 2009 Apr 9;52(7):1953-62. PubMed PMID: 19265440. Pubmed Central PMCID: 2696019. 49 - Crawford AD, Liekens S, Kamuhabwa AR, Maes J, Munck S, Busson R, et al. Zebrafish bioassay-guided natural product discovery: isolation of angiogenesis inhibitors from East African medicinal plants. PloS one. 2011;6(2):e14694. PubMed PMID: 21379387. Pubmed Central PMCID: 3040759. 50 - Harvey AL, Cree IA. High-throughput screening of natural products for cancer therapy. Planta medica. 2010 Aug;76(11):1080-6. PubMed PMID: 20635309. 51 - Mandrekar N, Thakur NL. Significance of the zebrafish model in the discovery of bioactive molecules from nature. Biotechnology letters. 2009 Feb;31(2):171-9. PubMed PMID: 18931972. 52 - Mimeault M, Batra SK. Emergence of zebrafish models in oncology for validating novel anticancer drug targets and nanomaterials. Drug discovery today. 2013 Feb;18(3-4):128-40. PubMed PMID: 22903142. Pubmed Central PMCID: 3562372. 53 - Koehn FE. High impact technologies for natural products screening. Progress in drug research Fortschritte der Arzneimittelforschung Progres des recherches pharmaceutiques. 2008;65:175, 7-210. PubMed PMID: 18084916. 54 - Leeds JA, Schmitt EK, Krastel P. Recent developments in antibacterial drug discovery: microbe-derived natural products--from collection to the clinic. Expert opinion on investigational drugs. 2006 Mar;15(3):211-26. PubMed PMID: 16503759. 55 - Corre C, Challis GL. New natural product biosynthetic chemistry discovered by genome mining. Natural product reports. 2009 Aug;26(8):977-86. PubMed PMID: 19636446. 56 - Zerikly M, Challis GL. Strategies for the discovery of new natural products by genome mining. Chembiochem : a European journal of chemical biology. 2009 Mar 2;10(4):625-33. PubMed PMID: 19165837. 57 - Haefner B. Drugs from the deep: marine natural products as drug candidates. Drug discovery today. 2003 Jun 15;8(12):536-44. PubMed PMID: 12821301. 58 - Gulder TA, Moore BS. Chasing the treasures of the sea - bacterial marine natural products. Current opinion in microbiology. 2009 Jun;12(3):252-60. PubMed PMID: 19481972. Pubmed Central PMCID: 2695832. 59 - Bhatnagar I, Kim SK. Immense essence of excellence: marine microbial bioactive compounds. Marine drugs. 2010;8(10):2673-701. PubMed PMID: 21116414. Pubmed Central PMCID: 2993000. 60 - Harvey AL. Natural products in drug discovery. Drug discovery today. 2008 Oct;13(19-20):894-901. PubMed PMID: 18691670. 61 - Katiyar C, Gupta A, Kanjilal S, Katiyar S. Drug discovery from plant sources: An integrated approach. Ayu. 2012 Jan;33(1):10-9. PubMed PMID: 23049178. Pubmed Central PMCID: 3456845. 62 - Butler MS. Natural products to drugs: natural product-derived compounds in clinical trials. Natural product reports. 2008 Jun;25(3):475-516. PubMed PMID: 18497896. 63 - Saklani A, Kutty SK. Plant-derived compounds in clinical trials. Drug discovery today. 2008 Feb;13(3-4):161-71. PubMed PMID: 18275914. 64 - Galm U, Shen B. Natural product drug discovery: the times have never been better. Chemistry & biology. 2007 Oct;14(10):1098-104. PubMed PMID: 17961822. 65 - Nirmala M, Samundeeswari A, Sankar PD. . Natural plant resources in anti-cancer therapy-A review. Research in Plant Biology. 2011 1(13):1-14. 66 - Metzger KL, Shoemaker JM, Kahn JB, Maxwell CR, Liang Y, Tokarczyk J, et al. Pharmacokinetic and behavioral characterization of a long-term antipsychotic delivery system in rodents and rabbits. Psychopharmacology. 2007 Feb;190(2):201-11. PubMed PMID: 17119931. 67 - Vira D, Basak SK, Veena MS, Wang MB, Batra RK, Srivatsan ES. Cancer stem cells, microRNAs, and therapeutic strategies including natural products. Cancer metastasis reviews. 2012 Dec;31(3-4):733-51. PubMed PMID: 22752409. 68 - Bhanot A SR, Noolvi M. Natural sources as potential anti-cancer agents: A review. International Journal of Phytomedicine. 2011;3 (1). 69 - Ma X, Wang Z. Anticancer drug discovery in the future: an evolutionary perspective. Drug discovery today. 2009 Dec;14(23-24):1136-42. PubMed PMID: 19800414. 70 - The Foundation for Aids Research. Statistics Worldwide: The Regional Picture. 2012 (Nov). Available at: www.amfar.org. 71 - Hupfeld J, Efferth T. Review. Drug resistance of human immunodeficiency virus and overcoming it by natural products. In vivo. 2009 Jan-Feb;23(1):1-6. PubMed PMID: 19368117. 72 - Asres K, Seyoum A, Veeresham C, Bucar F, Gibbons S. Naturally derived anti-HIV agents. Phytotherapy research : PTR. 2005 Jul;19(7):557-81. PubMed PMID: 16161055. 73 - Vo TS, Kim SK. Potential anti-HIV agents from marine resources: an overview. Marine drugs. 2010;8(12):2871-92. PubMed PMID: 21339954. Pubmed Central PMCID: 3039460. 74 - Piggott AM, Karuso P. Quality, not quantity: the role of natural products and chemical proteomics in modern drug discovery. Combinatorial chemistry & high throughput screening. 2004 Nov;7(7):607-30. PubMed PMID: 15578924. 75 - Tulp M, Bohlin L. Unconventional natural sources for future drug discovery. Drug discovery today. 2004 May 15;9(10):450-8. PubMed PMID: 15109950. 76 - Health in the Wild. Animal Doctors. The Economist. 2002. 77 - Richards S. Natural-born Doctors. The Scientist. 2012. 78 - Rout SP. Choudary K, Kar, DM., Das, LM. Jain, A. Plants in Traditional Medicinal Ststem - Future Source of New Drugs. International Journal of Pharmacy and Pharmaceutical Sciences. 2009 July-Sep.;Vol. 1(Issue 1). 79 - Patwardhan B, Vaidya AD. Natural products drug discovery: accelerating the clinical candidate development using reverse pharmacology approaches. Indian journal of experimental biology. 2010 Mar;48(3):220-7. PubMed PMID: 21046974. 80 - Chen CH, Dickman KG, Moriya M, Zavadil J, Sidorenko VS, Edwards KL, et al. Aristolochic acid-associated urothelial cancer in Taiwan. Proceedings of the National Academy of Sciences of the United States of America. 2012 May 22;109(21):8241-6. PubMed PMID: 22493262. Pubmed Central PMCID: 3361449. 81 - Alternative Medicines. The Scientist. (2012). Available at: http://www.the-scientist.com/?articles.view/articleNo/32219/title/Alternative-Medicines/ 82 - Birari RB, Bhutani KK. Pancreatic lipase inhibitors from natural sources: unexplored potential. Drug discovery today. 2007 Oct;12(19-20):879-89. PubMed PMID: 17933690. 83 - Boopathy Raja A, Elanchezhiyan C, Sethupathy S. Antihyperlipidemic activity of Helicteres isora fruit extract on streptozotocin induced diabetic male Wistar rats. European review for medical and pharmacological sciences. 2010 Mar;14(3):191-6. PubMed PMID: 20391957. 84 - Beutler JA. Natural products as a foundation for drug discovery. Current protocols in pharmacology / editorial board, SJ Enna. 2009 Sep;Chapter 9:Unit 9 11. PubMed PMID: 22294405. 85 - Bhutani KK, Gohil VM. Natural products drug discovery research in India: status and appraisal. Indian journal of experimental biology. 2010 Mar;48(3):199-207. PubMed PMID: 21046972. 86 - Saslis-Lagoudakis CH, Savolainen V, Williamson EM, Forest F, Wagstaff SJ, Baral SR, et al. Phylogenies reveal predictive power of traditional medicine in bioprospecting. Proceedings of the National Academy of Sciences of the United States of America. 2012 Sep 25;109(39):15835-40. PubMed PMID: 22984175. Pubmed Central PMCID: 3465383. 87 - Guidance for Industry; Botanical Drug Products U.S. Department of Health and Human Services. 2004 June. Available from: http://www.fda.gov/cder/guidance/index.htm. 88 - Dietary Supplements. U.S. Department of Health and Human Services. Available from: http://www.fda.gov/Food/DietarySupplements/default.htm. 89 - Lawson K. Botanical and Plant-derived Drugs: Global Markets. Wellesley, MA. USA: BCC Research; 2013. 90 - Parkes RJ CB, Bale SJ, Getlifff JM, Goodman K, Rochelle PA, Fry JC, Weightman AJ, Harvey SM. Deep bacterial biosphere in Pacific Ocean sediments. Nature. 1994;371(Sep ):410-3. 91 - Thakur NL TA, Müller WE. Marine natural products in drug discovery. 471-477 [Internet]. 2005; 4 (6):[471-7 pp.]. 92 - Radjasa OK, Vaske YM, Navarro G, Vervoort HC, Tenney K, Linington RG, et al. Highlights of marine invertebrate-derived biosynthetic products: their biomedical potential and possible production by microbial associants. Bioorganic & medicinal chemistry. 2011 Nov 15;19(22):6658-74. PubMed PMID: 21835627. Pubmed Central PMCID: 3205244. 93 - Leal MC, Puga J, Serodio J, Gomes NC, Calado R. Trends in the discovery of new marine natural products from invertebrates over the last two decades--where and what are we bioprospecting? 2012;7(1):e30580. PubMed PMID: 22276216. Pubmed Central PMCID: 3262841. 94 - Thakur GA, Duclos RI, Jr., Makriyannis A. Natural cannabinoids: templates for drug discovery. Life sciences. 2005 Dec 22;78(5):454-66. PubMed PMID: 16242157. 95 - Salomon CE, Magarvey NA, Sherman DH. Merging the potential of microbial genetics with biological and chemical diversity: an even brighter future for marine natural product drug discovery. Natural product reports. 2004 Feb;21(1):105-21. PubMed PMID: 15039838. 96 - Kong DX, Jiang YY, Zhang HY. Marine natural products as sources of novel scaffolds: achievement and concern. Drug discovery today. 2010 Nov;15(21-22):884-6. PubMed PMID: 20869461. 97 - Piel J. Bacterial symbionts: prospects for the sustainable production of invertebrate-derived pharmaceuticals. Current medicinal chemistry. 2006;13(1):39-50. PubMed PMID: 16457638. 98 - Mayer AM, Glaser KB, Cuevas C, Jacobs RS, Kem W, Little RD, et al. The odyssey of marine pharmaceuticals: a current pipeline perspective. Trends in pharmacological sciences. 2010 Jun;31(6):255-65. PubMed PMID: 20363514. 99 - Rath CM, Janto B, Earl J, Ahmed A, Hu FZ, Hiller L, et al. Meta-omic characterization of the marine invertebrate microbial consortium that produces the chemotherapeutic natural product ET-743. ACS chemical biology. 2011 Nov 18;6(11):1244-56. PubMed PMID: 21875091. Pubmed Central PMCID: 3220770. 100 - Houbraken J, Frisvad JC, Samson RA. Fleming's penicillin producing strain is not Penicillium chrysogenum but P. rubens. IMA fungus. 2011 Jun;2(1):87-95. PubMed PMID: 22679592. Pubmed Central PMCID: 3317369. 101 - Newman DJ, Cragg GM. Natural products as sources of new drugs over the last 25 years. Journal of natural products. 2007 Mar;70(3):461-77. PubMed PMID: 17309302. 102 - Thornburg CC, Zabriskie TM, McPhail KL. Deep-sea hydrothermal vents: potential hot spots for natural products discovery? Journal of natural products. 2010 Mar 26;73(3):489-99. PubMed PMID: 20099811. 103 - DL. H. The magnitude of fungal diversity: the 1 5 million species estimate revisited. Mycological research. 2001 Dec;105(12):1422-32. 104 - Shearer CA DE, Kohlmeyer B, Kohlmeyer J, Marvanova L, Padgett D, Porter D, Raja HA, Schmit JP,Thorton HA, Voglymayr H. . Fungal biodiversity in aquatic habitats. . Biodivers Conserv. 2006;16:49-67. 105 - Bhadury P, Mohammad BT, Wright PC. The current status of natural products from marine fungi and their potential as anti-infective agents. Journal of industrial microbiology & biotechnology. 2006 May;33(5):325-37. PubMed PMID: 16429315. 106 - Richards TA, Jones MD, Leonard G, Bass D. Marine fungi: their ecology and molecular diversity. Annual review of marine science. 2012;4:495-522. PubMed PMID: 22457985. 107 - Wahl M, Goecke F, Labes A, Dobretsov S, Weinberger F. The second skin: ecological role of epibiotic biofilms on marine organisms. Frontiers in microbiology. 2012;3:292. PubMed PMID: 22936927. Pubmed Central PMCID: 3425911. 108 - Hentschel U, Hopke J, Horn M, Friedrich AB, Wagner M, Hacker J, et al. Molecular evidence for a uniform microbial community in sponges from different oceans. Applied and environmental microbiology. 2002 Sep;68(9):4431-40. PubMed PMID: 12200297. Pubmed Central PMCID: 124103. 109 - How Drugs Are Developed and Approved. Available from: http://www.fda.gov/drugs/developmentapprovalprocess/howdrugsaredevelopedandapproved/default.htm. 110 - Drugs and Health Products 2012. U.S. Department of Health and Human Services. Available from: www.hc-sc.gc.ca/dph-mps/prodpharma/index-eng/php. 111 - Herbal Medicinal Products. European Medicines Agency. Available from: (http://www.ema.europa.eu/ema/index.jsp?curl=pages/regulation/general/general_content_000208.jsp). 112 - Gilmour J, Harrison C, Asadi L, Cohen MH, Vohra S. Natural health product-drug interactions: evolving responsibilities to take complementary and alternative medicine into account. Pediatrics. 2011 Nov;128 Suppl 4:S155-60. PubMed PMID: 22045857. 113 - Mills E, Wu P, Johnston BC, Gallicano K, Clarke M, Guyatt G. Natural health product-drug interactions: a systematic review of clinical trials. Therapeutic drug monitoring. 2005 Oct;27(5):549-57. PubMed PMID: 16175124. 114 - Reid I. Natural Health Product Tracking Survey – 2010; Final Report. Health Canada; 2011. 115 - What are Natural Health Products? 2012. U.S. Department of Health and Human Services. Available from: www.hc-sc.gc.ca/dhp/mps/prodnatur/index-eng.php. 116 - Global Cosmeceuticals Market Poised to Reach$42.4 Billion by 2018: Technological Advances and Consumer Awareness Boost Commercial Potential for Innovative and Premium-Priced Products.2013. Available from: http://www.giiresearch.com/press/7470.shtml.
117 - Dover J. Cosmeceuticals: A Practical Approach. Skin therapy letter. 2008;3(1):1-7.
118 - Choi CM, Berson DS. Cosmeceuticals. Seminars in cutaneous medicine and surgery. 2006 Sep;25(3):163-8. PubMed PMID: 17055397.
119 - Sullivan and Frost. Nutricosmetics - Health and Beauty Within and Without! (2007). May 25. Available at: http://www.frost.com/sublib/display-market-insight.do?id=99171683
120 - Anunciato TP, da Rocha Filho PA. Carotenoids and polyphenols in nutricosmetics, nutraceuticals, and cosmeceuticals. Journal of cosmetic dermatology. 2012 Mar;11(1):51-4. PubMed PMID: 22360335.
121 - Cosmetics: Guidance, Compliance and Regulatory Information. Available from: www.fda.gov/Cosmetics/GuidanceComplianceRegulatoryinformation/default.htm.
122 - General Requirements for Cosmetics2012. Available from: www.hc-sc.gc.ca/cps-spc/cosmet-person/indust/require-exige/index-eng.php
123 - Cosmetic Products Regulation EU Regulation 1223/2009. European Medicines Agency. Available from: ec.europa.eu/consumers/sectors/cosmetics/documents/revision/index_en.htm.
|
2018-03-21 07:22:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6115472316741943, "perplexity": 4248.353794173437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647584.56/warc/CC-MAIN-20180321063114-20180321083114-00726.warc.gz"}
|