https://stats.stackexchange.com/questions/169213/is-there-a-non-parametric-coefficient-of-variation
# Is there a non-parametric Coefficient of Variation?
I was reading a paper where the author mentions: "The coefficient of variation is primarily a descriptive statistic, but it is amenable to statistical inferences such as null hypothesis testing or confidence intervals. Standard procedures are often very dependent of the normality assumption and current work is exploring alternative procedures which are less dependent on this normality assumption."
The paper is from 2010. So my question is: has there been any recent "advancements" in terms of a coefficient of variation statistic that is not dependent upon the normality assumption? And, is this what one may call a non-parametric CV?
• The CV by itself is non-parametric; what you ask for is non-parametric tests/confidence intervals for the CV. You could try the bootstrap! – kjetil b halvorsen Aug 28 '15 at 17:37
• In some fields they call the coefficient of variation a "signal/noise" ratio. – Aksakal Aug 28 '15 at 17:39
• @Aksakal I think it is mean/SD, i.e. 1/CV, that is signal/noise. – Nick Cox Aug 28 '15 at 17:50
• @NickCox, right, noise/signal, same difference. – Aksakal Aug 28 '15 at 17:51
• What do you actually intend by the phrase "nonparametric" there? – Glen_b Aug 29 '15 at 4:32
The coefficient of variation is not strongly associated with the normal distribution at all. It is most obviously pertinent for distributions like the lognormal or gamma. See e.g. this thread.
Looking at ratios such as interquartile range/median is possible. In many situations that ratio might be more resistant to extreme values than the coefficient of variation. The measure seems neither common nor especially useful, but it certainly predates 2010. Tastes vary, but I see no reason to call that ratio nonparametric; it just uses different parameters.
A much better developed approach is to use the ratio of the second and first $L$-moment. The first $L$-moment is just the mean, but the second $L$-moment has more resistance than the standard deviation. Start (e.g.) here for more on $L$-moments.
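Not from the answer itself, but a rough sketch of the standard sample estimators (Hosking's $b$-weights; the function name and test data are mine) may make the ratio concrete — the first $L$-moment is $b_0$ and the second is $2b_1 - b_0$:

```python
import numpy as np

def l_cv(x):
    """Sample L-CV: second L-moment over the first.

    Uses the standard estimators: lambda_1 = b0 (the mean) and
    lambda_2 = 2*b1 - b0, where b1 is a rank-weighted mean of the
    sorted sample. lambda_2 resists extreme values better than the SD.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    b0 = x.mean()
    b1 = np.sum(np.arange(n) / (n - 1) * x) / n
    lam1, lam2 = b0, 2 * b1 - b0
    return lam2 / lam1

print(l_cv([1, 2, 3, 4, 5]))  # 1/3, since lambda_1 = 3 and lambda_2 = 1
```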
Whenever the coefficient of variation seems natural, that's usually a sign that analyses should be conducted on a logarithmic scale. If CV is (approximately) constant, then SD is proportional to the mean, which goes with comparisons and changes being multiplicative rather than additive, which implies thinking logarithmically.
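The lognormal makes this point checkable in closed form (my illustration, not the answer's): its CV depends only on the log-scale spread $\sigma$, so changing the multiplicative scale moves the mean and SD together and leaves the CV untouched.

```python
import math

def lognormal_cv(mu, sigma):
    """CV of a LogNormal(mu, sigma) variable.

    mean = exp(mu + sigma^2 / 2) and
    sd   = mean * sqrt(exp(sigma^2) - 1),
    so sd / mean = sqrt(exp(sigma^2) - 1): the SD is proportional
    to the mean, and mu (the multiplicative scale) cancels out.
    """
    mean = math.exp(mu + sigma**2 / 2)
    sd = mean * math.sqrt(math.exp(sigma**2) - 1)
    return sd / mean

# Very different means (a factor of e^3, about 20x), identical CV.
print(lognormal_cv(0.0, 0.5), lognormal_cv(3.0, 0.5))
```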
Note: The paper cited starts quite well, but then focuses on testing the CV when the distribution is normal. As above, if the distribution is normal, then the CV seems utterly uninteresting in practice, so the emphasis is puzzling to me. Your inclinations may differ.
• Thanks, for the info on L-moments, very interesting. Two things, though: i) if the CV is associated w/ the log-normal distribution, wouldn't this also hold for the normal distribution since if X is normal, then exp(X) is log-normal; and ii) can you explain what you mean by "whenever the coefficient of variation seems natural"? – StatsScared Aug 28 '15 at 18:37
• The CV is not necessarily associated with, e.g., the log-normal (or any other particular distribution). I believe Nick's point is that the CV is interpretable for a log-normal distribution, and not for a normal. From the linked thread: "In principle and practice the coefficient of variation is only defined fully and at all useful for variables that are entirely positive." – jtobin Aug 28 '15 at 19:04
• @StatsScared This may seem perverse but I really don't want to define or explain "natural" as if it were a technical term: it is not technical at all. It just means that an idea helps analysis because it matches something about the generating process, to a good approximation. If raspberry bush heights and redwood tree heights have roughly the same coefficient of variation despite very different SD and mean, then the CV is an idea that is of scientific use for such data. Economists too find some use for the CV of incomes, which also are often best thought of logarithmically. – Nick Cox Aug 28 '15 at 19:17
• @jtobin can you expand on "CV is interpretable for a log-normal distribution, and not for a normal". This notion is still not clear to me. – StatsScared Sep 2 '15 at 17:58
• @StatsScared The CV is a mean-standardized measure of dispersion; if the mean is 0 or less than 0, what useful information does the CV communicate about dispersion? A normal distribution can have a pathological CV in this sense, whereas a log-normal distribution cannot (its mean must be strictly positive). – jtobin Sep 2 '15 at 22:02
The coefficient of variation $\sigma / \mu$ itself has no normality assumption - just that the first two moments of the distribution are finite (and the mean nonzero). I think it's a nonparametric test for the CV that you're looking for.
Googling "nonparametric coefficient of variation test" turned up a variety of papers on tests, depending on what kind of test you might be interested in. Ex:
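One assumption-light route, as kjetil's comment suggests, is a percentile bootstrap interval for the CV. A minimal sketch (NumPy assumed; the gamma-distributed data are a hypothetical stand-in for a positive, skewed sample):

```python
import numpy as np

rng = np.random.default_rng(42)

def cv(x):
    """Sample coefficient of variation: SD over mean."""
    return np.std(x, ddof=1) / np.mean(x)

def bootstrap_ci_cv(x, n_boot=5000, alpha=0.05):
    """Percentile bootstrap CI for the CV -- no normality assumed."""
    boot = np.array([
        cv(rng.choice(x, size=len(x), replace=True))
        for _ in range(n_boot)
    ])
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

# Positive, right-skewed data where the CV makes sense (true CV = 0.5).
x = rng.gamma(shape=4.0, scale=10.0, size=200)
lo, hi = bootstrap_ci_cv(x)
```

The percentile interval is the simplest choice; BCa or studentized variants are usually preferred when the bootstrap distribution is skewed.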
http://mathhelpforum.com/geometry/188038-find-center-sphere-given-3-points-its-radius.html
# Math Help - Find the center of a sphere given 3 points and its radius
1. ## Find the center of a sphere given 3 points and its radius
Hey everyone,
This is my first post on this site, I hope it ends well!
So I have a deceptively simple problem, I need to find the center of a sphere given 3 non-colinear points and the desired radius of the sphere. There should be two possible answers.
Thanks!
2. ## Re: Find the center of a sphere given 3 points and its radius
Originally Posted by Starstorms
Hey everyone,
This is my first post on this site, I hope it ends well!
So I have a deceptively simple problem, I need to find the center of a sphere given 3 non-colinear points and the desired radius of the sphere. There should be two possible answers.
Thanks!
The equation of a sphere in 3D space is $(x-a)^2+(y-b)^2+(z-c)^2 = r^2$, where $(a,b,c)$ is the center and $r$ is the radius.
3. ## Re: Find the center of a sphere given 3 points and its radius
@piscoau
Thanks for your reply. I am well aware of the equation for a sphere in 3D space, the issue is finding the right a, b, and c values when given r and the 3 points that are solutions to this equation.
4. ## Re: Find the center of a sphere given 3 points and its radius
Originally Posted by Starstorms
This is my first post on this site, I hope it ends well! So I have a deceptively simple problem, I need to find the center of a sphere given 3 non-colinear points and the desired radius of the sphere. There should be two possible answers.
I am not sure why you say it is simple or there should be two possible answers.
There is one answer if you want the three points to be on a great circle of the sphere. Otherwise, there is a whole line of centers.
If $P,~Q,~\&~R$ are the three points then the points of the plane $\pi_1$ that is the perpendicular bisector of $\overline{PQ}$ are equidistant from $P~\&~Q$.
Same said for $\pi_2$ with respect to $P~\&~R$.
The points on the line $\pi_1\cap\pi_2$ are equidistant from $P,~Q,~\&~R$.
That is a messy problem to solve algebraically.
5. ## Re: Find the center of a sphere given 3 points and its radius
Originally Posted by Plato
I am not sure why you say it is simple or there should be two possible answers.
There is one answer if you want the three points to be on a great circle of the sphere. Otherwise, there is a whole line of centers.
If $P,~Q,~\&~R$ are the three points then the points of the plane $\pi_1$ that is the perpendicular bisector of $\overline{PQ}$ are equidistant from $P~\&~Q$.
Same said for $\pi_2$ with respect to $P~\&~R$.
The points on the line $\pi_1\cap\pi_2$ are equidistant from $P,~Q,~\&~R$.
That is a messy problem to solve algebraically.
Hey Plato, thanks for the reply. First off, I said it was "deceptively simple" haha, meaning it's a simple question to ask but a very hard one to answer.
But anyway, I'm really confused by why you think there aren't just two answers. Given just 3 points then yes there would be a line of infinite possible points going through the center of the triangle formed by the points. However, we can specify these points down to just 2, one on either side, because we know the radius of the sphere, right?
6. ## Re: Find the center of a sphere given 3 points and its radius
Three points will determine a unique circle. Taking a line through the center of that circle, perpendicular to the plane determined by the three points, any point along that line will serve as the center for a sphere containing the given three points.
If, in addition to the three points, you are given a radius (at least as large as the circumradius), there exists a center on either side of the plane determined by the three points that will give that radius. That is why there are two such spheres.
7. ## Re: Find the center of a sphere given 3 points and its radius
Originally Posted by Starstorms
But anyway, I'm really confused by why you think there aren't just two answers. Given just 3 points then yes there would be a line of infinite possible points going through the center of the triangle formed by the points. However, we can specify these points down to just 2, one on either side, because we know the radius of the sphere, right?
For any real number greater than the circumradius there will be two spheres with that radius containing the three given points.
Note the word any. At the circumcenter the sphere with the circumradius has the three points on a great circle of the sphere.
If you understand vector geometry: on the webpage referenced above, about halfway down, there are instructions on finding both the circumradius and the circumcenter.
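To make the construction concrete: the two centers lie on the normal line through the circumcenter, offset by $\sqrt{r^2 - \rho^2}$ where $\rho$ is the circumradius. A rough NumPy sketch (the vector circumcenter formula is the standard one, not taken from the thread; the names are mine):

```python
import numpy as np

def sphere_centers(p, q, s, radius):
    """The two centers of spheres of the given radius through three
    non-collinear points p, q, s (requires radius >= circumradius)."""
    p, q, s = (np.asarray(v, dtype=float) for v in (p, q, s))
    a, b = q - p, s - p
    u = np.cross(a, b)                       # normal to the plane pqs
    # Circumcenter via the standard 3D vector formula.
    cc = p + np.cross(a.dot(a) * b - b.dot(b) * a, u) / (2 * u.dot(u))
    h2 = radius**2 - np.dot(cc - p, cc - p)  # r^2 - circumradius^2
    if h2 < 0:
        raise ValueError("radius is smaller than the circumradius")
    offset = np.sqrt(h2) * u / np.linalg.norm(u)
    return cc + offset, cc - offset

# Three points on the unit circle in the z = 0 plane; radius sqrt(2)
# puts the two centers at (0, 0, 1) and (0, 0, -1).
c1, c2 = sphere_centers((1, 0, 0), (0, 1, 0), (-1, 0, 0), np.sqrt(2))
```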
https://nebusresearch.wordpress.com/2018/03/29/reading-the-comics-march-24-2018-arithmetic-and-information-edition/?replytocom=8826
# Reading the Comics, March 24, 2018: Arithmetic and Information Edition
And now I can bring last week’s mathematically-themed comics into consideration here. Including the whole images hasn’t been quite as much work as I figured. But that’s going to change, surely. One of about four things I know about life is that if you think you’ve got your workflow set up to where you can handle things you’re about to be surprised. Can’t wait to see how this turns out.
John Deering’s Strange Brew for the 22nd is edging its way toward an anthropomorphic numerals joke.
Brant Parker and Johnny Hart’s Wizard of Id for the 22nd is a statistics joke. Really a demographics joke. Which still counts; much of the historical development of statistics was in demographics. That it was possible to predict accurately the number of people in a big city who’d die, and what from, without knowing anything about whether any particular person would die was strange and astounding. It’s still an astounding thing to look directly at.
Hilary Price and Rina Piccolo’s Rhymes with Orange for the 23rd has the form of a story problem. I could imagine turning this into a proper story problem. You’d need some measure of how satisfying the 50-dollar wines are versus the 5-dollar wines. Also how much the wines affect people’s ability to notice the difference. You might be able to turn this into a differential equations problem, but that’s probably overkill.
Mark Anderson’s Andertoons for the 23rd is Mark Anderson’s Andertoons for this half of the week. It’s a student-avoiding-the-problem joke. Could be any question. But arithmetic has the advantages of being plausible, taking up very little space to render, and not confusing the reader by looking like it might be part of the joke.
John Zakour and Scott Roberts’s Working Daze for the 23rd has another cameo appearance by arithmetic. It’s also a cute reminder that there’s no problem you can compose that’s so simple someone can’t over-think it. And it puts me in mind of the occasional bit where a company’s promotional giveaway will technically avoid being a lottery by, instead of awarding prizes, awarding the chance to demonstrate a skill. Demonstration of that skill, such as a short arithmetic quiz, gets the prize. It’s a neat bit of loophole work and does depend, as the app designers here do, on the assumption there’s some arithmetic that people can be sure of being able to do.
Teresa Burritt’s Frog Applause for the 24th is its usual bit of Dadaist nonsense. But in the talk about black holes it throws in an equation: $S = \frac{A k c^3}{4 G \hbar}$. This is some mathematics about black holes, legitimate and interesting. It is the entropy of a black hole. The dazzling thing about this is all but one of those symbols on the right is the same for every black hole. ‘c’ is the speed of light, as in $E = mc^2$. G is the gravitational constant of the universe, a measure of how strong gravity is. $\hbar$ is the reduced Planck constant, a kind of measure of how big quantum mechanics effects are. ‘k’ is the Boltzmann constant, which normal people never heard of but that everyone in physics and chemistry knows well. It’s what you multiply by to switch from the temperature of a thing to the thermal energy of the thing, or divide by to go the other way. It’s the same number for everything in the universe.
The only thing custom to a particular black hole is ‘A’, which is the surface area of the black hole. I mean the surface area of the event horizon. Double the surface area of the event horizon and you double its entropy. (This isn’t doubling the radius of the event horizon; it corresponds to growing the radius by a factor of $\sqrt{2}$.) Also entropy. Hm. Everyone who would read this far into a pop mathematics blog like this knows that entropy is “how chaotic a thing is”. Thanks to people like Boltzmann we can be quantitative, and give specific and even exact numbers to the entropy of a system. It’s still a bit baffling since, superficially, a black hole seems like it’s not at all chaotic. It’s a point in space that’s got some mass to it, and maybe some electric charge and maybe some angular momentum. That’s about it. How messy can that be? It doesn’t even have any parts. This is how we can be pretty sure there’s stuff we don’t understand about black holes yet. Also about entropy.
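For a sense of scale, plugging rough constant values into the formula (my numbers, not the post's) gives the entropy of a one-solar-mass black hole:

```python
import math

# Physical constants in SI units (approximate).
k_B = 1.380649e-23      # Boltzmann constant, J/K
c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.054571817e-34  # reduced Planck constant, J s
M_sun = 1.989e30        # solar mass, kg (approximate)

def bh_entropy(mass):
    """Bekenstein-Hawking entropy S = A k c^3 / (4 G hbar) for a
    Schwarzschild black hole, with A the event-horizon area."""
    r_s = 2 * G * mass / c**2    # Schwarzschild radius, ~3 km for the Sun
    area = 4 * math.pi * r_s**2  # horizon area
    return area * k_B * c**3 / (4 * G * hbar)

S = bh_entropy(M_sun)  # roughly 1.4e54 J/K, i.e. ~1e77 in units of k

# Area scales as mass^2, so doubling the mass quadruples the entropy;
# doubling the *area* exactly doubles it, as in the text.
```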
This strip might be an oblique and confusing tribute to Dr Stephen Hawking. The entropy formula described was demonstrated by Drs Jacob Bekenstein and Stephen Hawking in the mid-1970s. Or it might be coincidence.
## Author: Joseph Nebus
I was born 198 years to the day after Johnny Appleseed. The differences between us do not end there. He/him.
## 2 thoughts on “Reading the Comics, March 24, 2018: Arithmetic and Information Edition”
1. Perhaps it’s a tribute to Ringo “I’ve got a ‘ole in me pocket!” Starr. Poor guy’s so depressed he’d jump in the river Mersey ‘cept it looks like rain. He could use the ego-boost. Guy’s likely to do anything like show his motor to a Nowhere Man.
1. Might be and, really, that is the sort of dense joke I’d see appealing to Burritt, too, thanks.
http://mathhelpforum.com/math-topics/125058-how-pronounce-cesaro.html
# Thread: How to pronounce "Cesàro"?
1. ## How to pronounce "Cesàro"?
That is, the name in "Cesàro summability". Thanks!
2. Originally Posted by zzzhhh
That is, the name in "Cesàro summability". Thanks!
Check this out. I often want to know how to pronounce names, so I go here...
Pronunciation Guide for Mathematics
https://gmatclub.com/forum/get-paid-100-to-take-a-gmat-practice-test-conditions-apply-92414.html?kudos=1
Founder
Affiliations: AS - Gold, HH-Diamond
Joined: 04 Dec 2002
Posts: 14549
Location: United States (WA)
GMAT 1: 750 Q49 V42
GPA: 3.5
Get Paid $100 to take a GMAT practice test conditions apply [#permalink]
### Show Tags
07 Apr 2010, 22:52
If you are taking the GMAT in the next 30 days or have taken it within the last week (7 days or less), Knewton will pay you $100 to take one of their tests.
You must have taken the GMAT within the last 7 days or will be taking it within the next 30 days to be eligible.
To apply, please fill out this survey: http://bit.ly/KnewtonCAT
_________________
Founder of GMAT Club
US News Rankings progression - last 10 years in a snapshot - New!
Just starting out with GMAT? Start here...
Need GMAT Book Recommendations? Best GMAT Books
Co-author of the GMAT Club tests
GMAT Club Premium Membership - big benefits and savings
Manager
Joined: 15 Mar 2010
Posts: 100
### Show Tags
09 Apr 2010, 08:30
JoshKnewton wrote:
Thanks for your interest in the program.
We are constantly striving to provide the very best GMAT CAT experience.
For now we are only looking for people who have taken -- or will take -- the GMAT within 7 days. This may change in future, but for now those are the rules.
Please let me know if you have any more questions,
Josh
Thanks Josh.
Just to clarify - you want the person to take the practice CAT test within 7 days of taking the GMAT (before or after) but you will have the opportunity to do that for a few weeks, correct? (meaning if I take the GMAT in 2 weeks from now, I can take the CAT in a week or 10 days from now).
In my opinion, it is a great opportunity to make the GMAT cost $150 vs $250.
If you were thinking/waiting/planning to take the test in a month or later on, you may want to set the date in a few weeks and get the $100 rebate.
(By the way, what is the means of refunding? paypal?)
Knewton GMAT Representative
Joined: 23 Oct 2009
Posts: 112
Location: New York, NY
Schools: BA Amherst College, MFA Brooklyn College
Re: Get Paid $100 to take a GMAT practice test conditions apply [#permalink]
### Show Tags
09 Apr 2010, 09:24
Thanks for the questions.
Yes, you can sign up today if you are taking the test in, say, a month. And I will contact you a week before your scheduled exam to say that you have two weeks to take our CAT (one before test and one week after).
No one will get paid without producing a score report from the actual GMAT.
We can mail people a check. There will be no "refund" as we are not affiliated with the actual GMAT. Though, yes, it is a nice way to get paid to study for the exam.
For those who take our test after taking the actual GMAT (and yes we want you folks too), we want to make sure you are incentivized -- so if your score on our tests is not within 120 points of your actual score you will not get paid.
Josh
_________________
Josh Anish
Senior Editor
Knewton, Inc
Free GMAT Club tests ($250 value) in addition to any other discounts or coupons when you buy the Knewton Course with KnewtonBest-GMAT-Club discount code. Use this promo code when you sign up for Knewton: KnewtonBest-GMAT-Club
Intern
Joined: 03 Apr 2010
Posts: 15
Re: Get Paid $100 to take a GMAT practice test conditions apply [#permalink]
### Show Tags
08 Apr 2010, 21:41
Knewton GMAT Representative
Joined: 23 Oct 2009
Posts: 112
Location: New York, NY
Schools: BA Amherst College, MFA Brooklyn College
Re: Get Paid $100 to take a GMAT practice test conditions apply [#permalink]
### Show Tags
09 Apr 2010, 06:11
Thanks for your interest in the program.
We are constantly striving to provide the very best GMAT CAT experience.
For now we are only looking for people who have taken -- or will take -- the GMAT within 7 days. This may change in future, but for now those are the rules.
Please let me know if you have any more questions,
Josh
Manager
Joined: 10 Aug 2009
Posts: 123
### Show Tags
09 Apr 2010, 06:18
Hi Josh, I signed up. My test will be next Saturday, 4/17. Let me know if I can be of any help.
Manager
Joined: 10 Aug 2009
Posts: 123
Re: Get Paid $100 to take a GMAT practice test conditions apply [#permalink]
### Show Tags
09 Apr 2010, 08:36
Yeh this seems like a great way to subsidise the cost of my exam. Anyway how do they choose participants? I'm taking the exam next week and would like to know if I can take part.
Manager
Joined: 05 Dec 2009
Posts: 127
### Show Tags
13 Apr 2010, 05:01
Josh,
How do we get selected? I finished a test on 4/10 and submitted the survey. Will I be notified through email? thanks!
Intern
Joined: 13 Mar 2010
Posts: 14
Schools: Johnson '14 (A)
### Show Tags
16 Apr 2010, 10:27
nickk wrote:
Thanks bb. I also wanted to incorporate their practice exam into my study program (GMAT is on Monday).
Good Luck!!!
Take a good rest over the weekend and prepare to fight for every single question on the exam; give it your best but don't get carried away. There is a time to move on. I spent 4 mins on one of my quant questions and never cracked it, so if you don't figure it out in 3 mins, probably not worth it. It could be an experimental question and then you wasted a whole bunch of time.
Manager
Joined: 10 Aug 2009
Posts: 123
### Show Tags
21 Apr 2010, 11:27
Sorry for any lag.
Here's the process.
2) I'll send you an email within 7 days of when you told me you're going to take the real GMAT.
3) Signup for the Knewton GMAT course's free trial (with the same email address you gave me when you signed up for GMAT club) and take a CAT.
4) Send me via pdf or via snail mail your official score report.
5) Every Thurs night we will verify all the score reports we got in that week, and make sure that effort was expended on our CATs.
6) We'll paypal money to your email address on Friday. If have a paypal account you can transfer to your bank account easily. If you don't have an account you can just create a free one and claim your money.
Any other questions?
Knewton
c/o Josh Anish
19 Union Square West
12th floor
New York, NY 10003
Cheers,
Josh
Senior Manager
Joined: 21 Mar 2010
Posts: 314
Re: Received $100 via paypal [#permalink]
### Show Tags
23 Apr 2010, 13:23
lagomez wrote:
mbafall2011 wrote:
I took my real GMAT on the 12th. I sent my score after taking the Knewton GMAT a day later. I took a week to mail the score.
Some one from Josh's team sent me $100 via paypal. Thanks Josh for the quick turnaround. They processed my$100 in 2 days i think.
Thanks
What was the correlation between your Knewton score and the official score?
I was off 20 points but i attribute it to the fact that i wasn't putting in 100% effort. In reality, it is the only test that i finished with 15 minutes to spare.
Knewton GMAT Representative
Joined: 23 Oct 2009
Posts: 112
Location: New York, NY
Schools: BA Amherst College, MFA Brooklyn College
Re: Get Paid $100 to take a GMAT practice test conditions apply [#permalink]
### Show Tags
25 Apr 2010, 04:38
Send me a PM if you believe I have neglected to get back to you. Thanks.
Knewton GMAT Representative
Joined: 23 Oct 2009
Posts: 112
Location: New York, NY
Schools: BA Amherst College, MFA Brooklyn College
Re: Get Paid $100 to take a GMAT practice test conditions apply [#permalink]
### Show Tags
04 May 2010, 08:56
Thanks for all the chat.
You can send me your score report via PDF, snail mail, or even just send the link and password to your score report on the web.
Let me know if you have any questions.
Best,
Josh
Manager
Joined: 30 Jun 2004
Posts: 177
Location: Singapore
### Show Tags
08 Apr 2010, 02:25
Why the one month time bracket? Wish they could have extended it to two or three months
http://spatialaudio.net/updating-multiple-svn-working-copies-at-once/
|
# Updating multiple SVN working copies at once.
This is not really audio related, but maybe someone is interested anyway …
We use Subversion (SVN) for several different things — software, papers, scripts — and one of the most common tasks is to update working copies all over the place. This can be quite annoying if you are working on many SVN repositories.
To make this repetitive task a little easier, the following scripts can be used.
To use them, create a new directory, put all your working copies in subdirectories, and run one of the following scripts to do the rest of the work.
Windows
I suppose you have TortoiseSVN installed — if not, install it.
Put the following in a file with the extension .bat, place it in the directory with your working copies and double-click on it.
```bat
@echo off
FOR /D %%A IN (*) DO START TortoiseProc.exe /command:update /path:%%A /closeonend:0
```
(The basic idea is stolen from here.)
Linux/Unix/…
Put the following code in a file, make it executable and put it in the directory with the working copies (or somewhere in your path, e.g. in $HOME/bin/). Then start the script.

```bash
#!/bin/bash
# Shell script that runs "svn update" in each subdirectory.
# If arguments are given, they are used instead of "update".
# The script also works if a directory name contains spaces.

COMMAND=svn
# if no options are specified on the command line, this is used:
OPTIONS=update
LOGFILE=error.log

if [ $# -gt 0 ]
then
    OPTIONS="$@"
fi

for DIR in */
do
    echo "============> entering "$DIR""
    (cd "$DIR" && $COMMAND $OPTIONS) 2> >(tee -a $LOGFILE >&2)
    echo "============> leaving "$DIR""
    echo
done

# if the error log is empty, remove it
if [ $(wc -l < $LOGFILE) -eq 0 ]
then
    rm -f $LOGFILE
fi
```
This entry was posted in Uncategorized and tagged . Bookmark the permalink.
### One Response to Updating multiple SVN working copies at once.
1. Jay says:
Great Script. Was exactly what I was looking for.
Jay
This site uses Akismet to reduce spam. Learn how your comment data is processed.
http://www.panoradio-sdr.de/analog-digital-conversion/
# Digitizing Signals and Discretization
The heart of a direct conversion receiver is the AD converter, which samples analog signals in order to transform them into the digital domain. Sampling involves two tasks: discretization in time and discretization in amplitude. Discretization in the time domain is determined by the sampling frequency, whereas discretization of the amplitude is determined by the bit width of the digital values; it can be seen as a "rounding" operation, often called "quantization". The analog signal is a continuous waveform; its sampled digital signal is just a list of numbers. A digital signal always needs the additional information of its sampling frequency fs to carry meaningful information.
Sampling is discretization in time and in amplitude. A digital signal is basically a list of numbers with an associated sampling frequency.
# The Sampling Theorem
Shannon's sampling theorem answers the question of how fast the sample rate $f_{s}$ of the AD conversion must be in order not to lose information: the sampling frequency $f_{s}$ must be at least twice the frequency (or highest frequency component) $f_{analog}$ of the analog signal.
$$f_{s} \geq 2 f_{analog}$$
If this rule is fulfilled, the digital samples contain the full information about the analog waveform. This means that the digital signal can be perfectly converted back to the original analog signal. This could be done by a sample-and-hold circuit, which first creates an intermediate signal resembling a staircase; in a second step an (ideal) low-pass filter with an edge frequency of $f_{s}/2$ creates a smooth waveform.
If the sampling theorem is violated, i.e. if analog signals are faster than half the sampling rate $f_{s}/2$, the so-called "aliasing" effect occurs. In this case the digitized signal represents an analog signal with a different, lower "alias" frequency: a DA converter with a sample-and-hold circuit and low-pass filter would reconstruct the signal at this lower frequency. In general, an analog signal of frequency $f_{analog}$ has in the digital domain the (alias) frequency
$$f=|nf_s-f_{analog}|$$
where $n$ is an integer chosen such that $f < f_{s}/2$.
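The folding rule $f=|nf_s-f_{analog}|$ can be sketched numerically. A minimal Python sketch (the helper name is my own, not from the article):

```python
def alias_frequency(f_analog, fs):
    """Apparent frequency of a tone at f_analog after sampling at rate fs,
    i.e. f = |n*fs - f_analog| with n chosen so that f < fs/2."""
    f = f_analog % fs    # fold into [0, fs)
    if f > fs / 2:       # mirror the upper half down
        f = fs - f
    return f

# A 70 MHz tone sampled at 100 MHz appears at |1*100 - 70| = 30 MHz:
print(alias_frequency(70e6, 100e6))  # 30000000.0
```

A tone already in the first Nyquist zone (below $f_s/2$) is returned unchanged, matching the case where the sampling theorem is fulfilled.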
The analog spectrum can be divided into so-called Nyquist zones, which represent slices of bandwidth $f_{s}/2$. Frequencies within the same Nyquist zone alias unambiguously to digital frequencies (regardless of whether the sampling theorem is fulfilled). The sampling theorem is fulfilled if the analog frequencies are located exclusively in the first Nyquist zone.
During AD conversion with aliasing, analog frequencies are folded onto frequencies below $f_{s}/2$. The numbers indicate the different Nyquist zones.
The sampling theorem has an important implication for the digital signal processing in SDRs: there is a strong relationship between SDR bandwidth and data rate (corresponding to the sampling frequency). The larger the bandwidth, the higher the data rate generated in the digital domain. Therefore high bandwidths pose a challenge not only for AD conversion, but also for the subsequent DSP.
# Undersampling
In practical applications aliasing is not always a problem; it can even be exploited for signal frequencies above $f_{s}/2$. This technique, called undersampling (or sometimes IF or bandpass sampling), deliberately samples too slowly and violates the sampling theorem, so that aliasing occurs intentionally. Frequencies above $f_{s}/2$ then appear at frequencies below $f_{s}/2$. In this sense undersampling behaves like a mixer, as it shifts signals in frequency. Undersampling works perfectly well if all analog input signals reaching the AD converter belong to a single Nyquist zone. For that purpose a proper band-pass filter may be required before AD conversion to cut off all frequencies of other Nyquist zones. If the frequency band to be undersampled spans more than one Nyquist zone (e.g. if the signal frequencies lie around $nf_{s}/2$), it may be advantageous to change the sampling frequency.
Undersampling of the third Nyquist zone. If the other zones are suppressed by a band-pass filter, this technique works perfectly fine.
# The Influence of Clock Jitter on SNR
A sometimes overlooked property is the quality of the clock source used to sample the analog signals. Clock jitter introduces additional noise, because due to the jitter the analog signal is (randomly) sampled slightly too early or too late, which results in a small random amplitude error. In AD conversion this additional noise limits the achievable overall SNR. The following formula describes the theoretical maximum SNR in dB that can be achieved for a given clock jitter $t_{jitter}$:
$$SNR = 20 \log_{10} \left( \frac{1}{2 \pi f_{analog} t_{jitter} } \right)$$
Maximum achievable SNR depending on the sampling clock jitter and the frequency of the analog input signal
This SNR depends not only on the clock jitter itself, but also on the frequency $f_{analog}$ of the analog signal. Fast analog signals are more susceptible to clock jitter noise: because the waveform changes more rapidly, even a small deviation in the sampling time leads to a large amplitude error.
Amplitude errors due to deviating sampling times for a slow and a fast analog input signal
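The jitter-SNR formula can be evaluated directly. A minimal Python sketch (the function name is my own):

```python
import math

def jitter_limited_snr_db(f_analog, t_jitter):
    """Maximum achievable SNR in dB for input frequency f_analog (Hz)
    and RMS sampling clock jitter t_jitter (s):
    SNR = 20*log10(1 / (2*pi*f_analog*t_jitter))."""
    return 20 * math.log10(1 / (2 * math.pi * f_analog * t_jitter))

# 10 MHz input with 1 ps RMS clock jitter:
print(jitter_limited_snr_db(10e6, 1e-12))  # ≈ 84 dB
```

Doubling either the input frequency or the jitter costs about 6 dB, which is why fast analog signals are more susceptible to jitter noise.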
https://brilliant.org/discussions/thread/calculuscombinatoricscalculatorics/
# Calculus + Combinatorics = Calculatorics
Points $$P$$ and $$Q$$ are chosen on the two short sides of a right isosceles triangle, one on each side. The perpendiculars $$PM$$ and $$QN$$ are drawn such that $$M$$ and $$N$$ are points on the hypotenuse of the right isosceles triangle. Find the expected value of the area of trapezium $$PMNQ$$, with proof.
Bonus: Generalise this for a right triangle with the two short sides of length $$a$$ and $$b$$.
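A Monte Carlo sketch of the problem (my own, not a proof), assuming legs of unit length and $$P$$, $$Q$$ chosen independently and uniformly on their sides; both assumptions are mine, since the note fixes neither:

```python
import random

def trapezium_area(p, q):
    # Legs along the axes, hypotenuse x + y = 1.  With P = (p, 0) and
    # Q = (0, q): PM = (1-p)/sqrt(2), QN = (1-q)/sqrt(2), and the feet
    # M, N are a distance MN = (p+q)/sqrt(2) apart, so the trapezium
    # area is (PM + QN)/2 * MN = (2 - p - q)*(p + q)/4.
    return (2 - p - q) * (p + q) / 4

random.seed(0)
n = 100_000
est = sum(trapezium_area(random.random(), random.random()) for _ in range(n)) / n
print(est)  # ≈ 0.208, consistent with an expected area of 5/24
```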
Note by Sharky Kesa
1 year, 1 month ago
https://www.studypug.com/algebra-help/sigma-notation
# Sigma notation - Sequences and Series
Don't you find it tiring when we express a series with many terms using numerous addition and/or subtraction signs? Don't you wish that we have something to symbolise this action? Well we have a solution, introducing the "Sigma Notation"! In this section, we will learn how to utilise the sigma notation to represent a series, as well as how to evaluate it.
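As a quick preview (my own illustration, not part of the lesson): the series 1² + 2² + 3² + 4² + 5², written in sigma notation as Σ k² with k running from 1 to 5, can be evaluated directly:

```python
# sum_{k=1}^{5} k^2 = 1 + 4 + 9 + 16 + 25
total = sum(k**2 for k in range(1, 6))
print(total)  # 55
```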
https://zbmath.org/?q=an:1357.53090
# zbMATH — the first resource for mathematics
A globalization for non-complete but geodesic spaces. (English) Zbl 1357.53090
The article shows that if a geodesic space $$X$$ has the property that for every point $$x \in X$$ there exists a neighborhood $$\Omega \ni x$$ such that the $$\kappa$$-comparison holds for any quadruple of points in $$\Omega$$, then the completion of $$X$$ is an Alexandrov space with curvature $$\geq \kappa$$. This answers a question asked by Viktor Schroeder around 2009.
Here an Alexandrov space with curvature $$\geq \kappa$$ is a complete length space such that for any quadruple of points $$(p; x^1; x^2; x^3)$$ the $$(1+3)$$-point comparison holds: $\measuredangle^\kappa(p^{x^1}_{x^2}) + \measuredangle^\kappa(p^{x^2}_{x^3}) + \measuredangle^\kappa(p^{x^3}_{x^1}) \leq 2 \pi.$
##### MSC:
- 53C70 Direct methods ($$G$$-spaces of Busemann, etc.)
- 53C45 Global surface theory (convex surfaces à la A. D. Aleksandrov)
https://byjus.com/question-answer/what-will-be-the-slope-of-line-having-the-equation-as-y-2x-5-1/
Question
# What will be the slope of the line with the equation $$(y+2x)=5$$?
Solution
$$y+2x=5$$ can be written as $$y=-2x+5$$, which is of the form $$y=mx+c$$. Comparing, we get $$m=-2$$, so the slope is $$-2$$.
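A quick numerical sanity check (my own sketch, not part of the original solution): the slope is the change in $$y$$ divided by the change in $$x$$ between any two points on the line.

```python
def y(x):
    # y = 5 - 2x, the line rearranged into slope-intercept form
    return 5 - 2 * x

slope = (y(3) - y(1)) / (3 - 1)
print(slope)  # -2.0
```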
https://eduzip.com/ask/question/the-dimensional-formula-of-latent-heat-is-identical-to-that-of-273348
Physics
# The dimensional formula of latent heat is identical to that of
Answer: gravitational potential
##### SOLUTION
Latent heat is given by:
$Q=mL$
$\Rightarrow L=\displaystyle \frac {Q}{m}$, i.e. energy per unit mass.
a) Internal energy has the units of energy.
b) Angular momentum has the units joule-second, i.e. energy × time.
c) Gravitational potential is the potential energy per unit mass, i.e. energy per mass.
d) Electric potential is the potential energy per unit charge.
Hence latent heat has the same dimensional formula as gravitational potential.
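The comparison can be made mechanical by tracking the exponents of M, L and T; the tuple representation below is my own sketch:

```python
# Dimensions as (M, L, T) exponent tuples
ENERGY = (1, 2, -2)   # [M L^2 T^-2]
MASS = (1, 0, 0)

def per(a, b):
    """Dimensions of the quotient a/b: subtract exponents."""
    return tuple(x - y for x, y in zip(a, b))

latent_heat = per(ENERGY, MASS)       # L = Q/m, energy per unit mass
grav_potential = per(ENERGY, MASS)    # potential energy per unit mass
print(latent_heat, latent_heat == grav_potential)  # (0, 2, -2) True
```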
http://mathoverflow.net/revisions/19758/list
Revision 3 (latest): corrected disphenoid example by adding congruence assumption.
Lemma: In a polyhedron of this type with polygons $A$ and $B$ sharing an edge $e$, the two other polygons meeting $e$ must have the same number of sides.
Proof: By local symmetry reflecting through the perpendicular bisector of $e$, the angles are equal.
Sergei Ivanov proved the same lemma in the comments.
Since 5 is odd, all of the polygons around a pentagon must have the same number of sides, since you can't have a nonconstant alternating sequence. So, the only possibilities are that all polygons are pentagons, or that each pentagon is surrounded by hexagons, and each of these hexagons is surrounded by 3 pentagons and 3 hexagons. In the latter case, attaching pentagonal pyramids to each pentagon extends each hexagon into an equilateral triangle, producing a polyhedron whose faces are equilateral triangles with 5 meeting at a vertex, an icosahedron, so the original was a truncated icosahedron.
Note that if you have equilateral triangles and squares meeting 4 to a vertex, then there are two possibilities for 3 squares and 1 triangle at a vertex, one with cubic symmetry and one with only dihedral symmetry with a belt which is an octagonal prism.
By contrast, if you require that there are 3 congruent triangles meeting at a vertex, but drop the regularity assumption, you get a family of disphenoids, which generically have the Klein 4-group as symmetries and no reflective symmetry. These are related to ideal hyperbolic tetrahedra.
http://hifa.fondazionemercantini.it/divergence-theorem-calculator.html
# The Divergence Theorem

The divergence (or flux density) of a vector field $F = P\,\mathbf{i} + Q\,\mathbf{j} + R\,\mathbf{k}$ is defined to be $\operatorname{div} F = \nabla \cdot F = \partial P/\partial x + \partial Q/\partial y + \partial R/\partial z$. Divergence can be viewed as a measure of the magnitude of a vector field's source or sink at a given point. To visualize this, picture an open drain in a tub full of water: the drain acts as a sink, and the water velocity at each point of the tub forms the vector field. Vector fields with zero divergence are often called solenoidal fields.

The divergence theorem (also known as Gauss's theorem or the Gauss–Ostrogradsky theorem) relates the flux of a vector field through a closed surface to the divergence of the field over the enclosed volume. Let $E$ be a simple solid region, let $S$ be the boundary surface of $E$ with positive (outward) orientation, and let $F$ be a vector field whose components have continuous first-order partial derivatives. Then

$$\iint_S F \cdot d\mathbf{S} = \iiint_E \operatorname{div} F \, dV.$$

The theorem can be seen as a three-dimensional generalization of Green's theorem, and it is useful in both directions: a surface integral (a flux) can be evaluated as a single triple integral of the divergence, and a triple integral can be evaluated by turning it into a surface integral.

For example, for $F = (y^2 + yz)\,\mathbf{i} + (\sin(xz) + z^2)\,\mathbf{j} + z^2\,\mathbf{k}$ the divergence is $\partial_x(y^2 + yz) + \partial_y(\sin(xz) + z^2) + \partial_z(z^2) = 2z$, so the flux of $F$ out of a solid region $B$ is $\iiint_B 2z \, dx\,dy\,dz$. In the other direction, just as Green's theorem allows the area of a plane region to be computed by a boundary integral, the volume of a solid region can be computed as a flux integral: the vector field $F(x,y,z) = \langle x, 0, 0\rangle$ has divergence 1, so its flux through the boundary equals the volume of the region.

One caution: the divergence theorem only applies to closed surfaces. It cannot be used directly to calculate a flux through a surface with a boundary, such as part of a cone or a paraboloid. If you want to use the theorem to calculate the flux out of an open cone, you have to add a top to the cone to make the surface closed (and then account for the flux through the added piece separately). Alternatively, for an open surface such as the top half of a sphere, Stokes' theorem can reduce the surface integral to a line integral over the equator.
is the divergence of the vector field $$\mathbf{F}$$ (it's also denoted $$\text{div}\,\mathbf{F}$$) and the surface integral is taken over a closed surface. The Divergence Theorem - Examples (MATH 2203, Calculus III) November 29, 2013 The divergence (or flux density) of a vector field F = i + j + k is defined to be div(F)=∇·F = + +. The flux of this vector field through. [T] Use a CAS and the divergence theorem to calculate flux where and S is a sphere with center (0, 0) and radius 2. By the divergence theorem, the flux is zero. The Divergence Theorem Example 4. ds; that is, calculate the flux of F across S. Use the Divergence Theorem to calculate the surface integral? ∫∫S F · dS; that is, calculate the flux of F across S. Math · Multivariable calculus · Green's, Stokes', and the divergence theorems · Divergence theorem (articles) 3D divergence theorem Also known as Gauss's theorem, the divergence theorem is a tool for translating between surface integrals and triple integrals. The flux of this vector field through. The Divergence Theorem is a theorem relating the flux across a surface to the integral of the divergence over the interior. x 2 + y 2 + z 2 = a 2, z ≥ 0. Free series convergence calculator - test infinite series for convergence step-by-step This website uses cookies to ensure you get the best experience. Let S be the surface x 2 3y2 z 4 with positive orientation and let F~ xx3 y3;y3 z3;z3 x y. It is a result that links the divergence of a vector field to the value of surface integrals of the flow defined by the field. Divergence and Curl calculator. Let S be the surface of the solid bounded by y2 z2 1, x 1, and x 2 and let F~ x3xy2;xez;z3y. Use the Divergence Theorem to calculate the surface integral F. Since $\div \dlvf = y^2+z^2+x^2$, the surface integral is equal to the triple integral \begin{align*} \iiint_B (y^2+z^2+x^2) dV \end{align. 
In this paper, we propose and investigate a divergence-free reconstruction of the nonconforming virtual element for the Stokes problem. Explanation:. dS, where S is the surface of the cube with corners at (0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (0, 0, 1), (1, 0, 1), (0, 1, 1), and. The flux of a vector field F across a closed oriented surface S in the direction of the surface's outward unit normal field n equals the integral of V,F over the region D enclosed by the surface: F dV. The Divergence theorem in vector calculus is more commonly known as Gauss theorem. When you studied surface integrals, you learned how to calculate the flux integral $$\displaystyle{\iint_S{\vec{F} \cdot \vec{N} ~ dS} }$$ which is basically the flow through a surface S. 3 The Divergence Theorem Let Q be any domain with the property that each line through any interior point of the domain cuts the boundary in exactly two points, and such that the boundary S is a piecewise smooth closed, oriented surface with unit normal n. F = xyi + yz j + xzk; D the region bounded by the unit cube defined by 0 ≤ x ≤1, 0 ≤y ≤1, 0 ≤z ≤1 - 2169903. Try the Stokes' theorem instead: it will reduce the surface integral to a line integral over the equator. Which translates the integral into the surface integral in Divergence Theorem of Gauss, which implies the volume integral will be Div of Curl of u, but this Div (Curl u) is zero. Advanced Math Q&A Library Use the Divergence Theorem to calculate the surface integral F · dS; that is, calculate the flux of F across S. In this paper, we propose and investigate a divergence-free reconstruction of the nonconforming virtual element for the Stokes problem. In this section we are going to relate surface integrals to triple integrals. , Arfken 1985) and also known as the Gauss-Ostrogradsky theorem, is a theorem in vector calculus that can be stated as follows. Let be a region in space with boundary. 
divergence line: Divergenzlinie {f} divergence loss: Streuverlust {m} divergence matrix: Divergenzmatrix {f} math. By the divergence theorem, the flux is zero. The Divergence Theorem It states that the total outward flux of vector field say A , through the closed surface, say S, is same as the volume integration of the divergence of A. Notice that the limit being taken is of the ratio of the flux through a surface to the volume enclosed by that surface, which gives a rough measure of the flow "leaving" a point, as we mentioned. Let V be a region in space with boundary partialV. dS of the vector field F = (r*y+ xz – ry, –ry + ry – yz, 2x° + yz –…. Gauss-Ostrogradsky Divergence Theorem Proof, Example. Solution: Since I am given a surface integral (over a closed surface) and told to use the divergence theorem, I must convert the surface integral into a triple integral over the region inside the surface. Verify the divergence theorem. The proof can then be extended to more general solids. First compute integrals over S1 and S2, where S1 is the disk x2 + y2 ≤ 4, oriented downward, and S2 = S1 union S. The Divergence Theorem relates surface integrals of vector fields to volume integrals. X and Y must have the same number of elements, as if produced by meshgrid. ∇ ⋅F = ∂P ∂x + ∂Q ∂y + ∂R ∂z is the divergence of the vector field F (it’s also denoted divF) and the surface integral is taken over a closed surface. dS div F dV, to calculate the flux F. The divergence theorem can be used to calculate a flux through a closed surface that fully encloses a volume, like any of the surfaces on the left. ds; that is, calculate the flux of F across S. The divergence theorem can also be used to evaluate triple integrals by turning them into surface integrals. Use the Divergence Theorem to calculate the surface integral F. More precisely, the divergence theorem states that the surface integral of a vector field over a closed surface, which is called the flux through the surface, is. 
In vector calculus, the divergence theorem, also known as Gauss's theorem or Ostrogradsky's theorem, is a result that relates the flux of a vector field through a closed surface to the divergence of the field in the volume enclosed. 3 Divergence Theorem (1) The divergence of a vector field F = M i + j + P k is div(F) = V \$F = (2) Divergence Theorem. The Divergence Theorem states that if is an oriented closed surface in 3 and is the region enclosed by and F is a vector field whose components. F(x, y, z) = x&2yi + xy^2j + 2xyzk, S is the surface of the tetrahedron bounded by the planes x = 0, y = 0, z = 0,. Gauss's Divergence Theorem Let F(x,y,z) be a vector field continuously differentiable in the solid, S. Then the volume integral of the divergence del ·F of F over V and the surface integral of F over the boundary. Divergence Calculator The calculator will find the divergence of the given vector field, with steps shown. Use the Divergence Theorem to calculate the surface integral F. div = divergence(X,Y,U,V) computes the divergence of a 2-D vector field U, V. Answer to: Use the divergence theorem to calculate the flux of F = (x - 2 y) i + (y - z) j + (z - 8x) k out of the unit sphere. divergence time: Divergenzzeit {f} biol. Divergence is when the price of an asset and a technical indicator move in opposite directions. Apply the divergence theorem to an electrostatic field. Divergence Theorem Let E E be a simple solid region and S S is the boundary surface of E E with positive orientation. By contrast, the divergence theorem allows us to calculate the single triple integral ∭ E div F d V, ∭ E div F d V, where E is the solid enclosed by the cylinder. dS of the vector field F = (r*y+ xz - ry, -ry + ry - yz, 2x° + yz -…. Advanced Math Q&A Library Use the Divergence Theorem to calculate the surface integral F · dS; that is, calculate the flux of F across S. 
∇ ⋅F = ∂P ∂x + ∂Q ∂y + ∂R ∂z is the divergence of the vector field F (it’s also denoted divF) and the surface integral is taken over a closed surface. ∫B∇⋅Fdxdydz= ∫B2zdxdydz. Get the free "MathsPro101 - Curl and Divergence of Vector " widget for your website, blog, Wordpress, Blogger, or iGoogle. [T] Use a CAS and the divergence theorem to calculate flux where and S is a sphere with center (0, 0) and radius 2. Solution for Problem. Solution: Since I am given a surface integral (over a closed surface) and told to use the divergence theorem, I must convert the surface integral into a triple integral over the region inside the surface. Question (5): (8 points) ILO's: K1 – 12 – P1] (a) Use the Divergence Theorem to calculate the outward flux of the vector field ] = (z? + x – y) i + (x + y3 – z)j + (x – z/x2 + y2 + y) k across the surface of solid bounded by 0 SXS 9 – y2, -3 sy s 3,0 < Z = 9. F(x, y, z) = el tan(z)i + YV 6 - x2j + x sin(y)k, S is the surface of the solid that lies above the xy-plane and below the surface z = 2 - x4 - y4, -1
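The radius-3 sphere example (F = (x³ + y³) i + (y³ + z³) j + (z³ + x³) k) can be checked numerically. The sketch below (Python; the grid resolution is an illustrative choice) evaluates ∫∫∫B 3(x² + y² + z²) dV in spherical coordinates with the midpoint rule and compares it with the exact value, 2916π/5.

```python
import math

# div F = 3*(x^2 + y^2 + z^2) = 3*r^2 for F = (x^3+y^3, y^3+z^3, z^3+x^3).
# By the divergence theorem, the flux through the sphere of radius 3 equals
# the triple integral of div F over the ball, with dV = r^2 sin(phi) dr dphi dtheta.
def flux_by_divergence_theorem(R=3.0, n=200):
    dr, dphi = R / n, math.pi / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr                  # midpoint rule in r
        for j in range(n):
            phi = (j + 0.5) * dphi          # midpoint rule in phi
            # integrand 3*r^2 times Jacobian r^2*sin(phi);
            # the integrand is independent of theta, which contributes 2*pi
            total += 3 * r**2 * r**2 * math.sin(phi) * dr * dphi * 2 * math.pi
    return total

exact = 2916 * math.pi / 5                  # closed form: 12*pi * 3^5 / 5
approx = flux_by_divergence_theorem()
print(approx, exact)
```

The midpoint rule converges quadratically, so even this modest grid agrees with 2916π/5 to several digits.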
https://dantopology.wordpress.com/2009/11/18/compact-spaces-with-g-delta-diagonals/
# Compact Spaces With G-delta Diagonals
In a previous post, I showed that any compact space with a countable network is metrizable. Another classic metrization theorem for compact spaces is that any compact space with a $G_\delta-$diagonal is metrizable ([6]). The theorem I try to prove is: for a compact space $X$, $X^2$ is perfectly normal if and only if $X$ has a $G_\delta-$diagonal if and only if $X$ is metrizable. My proof is based on the notion of $G^*_\delta-$diagonal. Every compact space with a $G_\delta-$diagonal has a $G^*_\delta-$diagonal, which allows us to define a countable base. The theorem discussed here has since been generalized (see the comment at the end of this post). All spaces are at least Hausdorff.
Let $X$ be a space. The set $\Delta=\lbrace{(x,x):x \in X}\rbrace$ is called the diagonal of the space $X$. The space $X$ has a $G_\delta-$diagonal if $\Delta$ is a $G_\delta-$set in $X^2$.
Let $\mathcal{G}$ be a collection of subsets of $X$ and let $x \in X$. Define $st(x,\mathcal{G})=\bigcup \lbrace{G \in \mathcal{G}:x \in G}\rbrace$. A sequence $\lbrace{\mathcal{G}_n}\rbrace_{n<\omega}$ of open covers of $X$ is called a $G_\delta-$diagonal sequence of $X$ if for each $x \in X$, $\lbrace{x}\rbrace=\bigcap_{n<\omega} st(x,\mathcal{G}_n)$. Lemma 1 shows that a space has a $G_\delta-$diagonal if and only if it has a $G_\delta-$diagonal sequence. This lemma is due to Ceder ([2]).
Another notion we need is that of the $G^*_\delta-$diagonal. The space $X$ has a $G^*_\delta-$diagonal if there is a $G_\delta-$diagonal sequence $\lbrace{\mathcal{G}_n}\rbrace_{n<\omega}$ such that for each $x \in X$, $\lbrace{x}\rbrace=\bigcap_{n<\omega} \overline{st(x,\mathcal{G}_n)}$. Such a $G_\delta-$diagonal sequence is called a $G^*_\delta-$diagonal sequence. The notion of $G^*_\delta-$diagonal is due to R. E. Hodel ([4]). Lemma 2 below shows that any compact space with a $G_\delta-$diagonal has a $G^*_\delta-$diagonal.
Lemma 1. The space $X$ has a $G_\delta-$diagonal if and only if it has a $G_\delta-$diagonal sequence.
Proof. $\Rightarrow$ Suppose that $\Delta=\bigcap_{n<\omega}U_n$ where each $U_n$ is open in $X^2$. Let $\tau$ denote the topology on $X$. For each $n$, let $\mathcal{G}_n=\lbrace{V \in \tau:V \times V \subset U_n}\rbrace$. We claim that for each $x \in X$, $\lbrace{x}\rbrace=\bigcap_{n<\omega} st(x,\mathcal{G}_n)$. Obviously $\lbrace{x}\rbrace \subset \bigcap_{n<\omega} st(x,\mathcal{G}_n)$. Let $y \in \bigcap_{n<\omega} st(x,\mathcal{G}_n)$. For each $n$, $y \in V_n$ where $V_n \in \mathcal{G}_n$ and $x \in V_n$. Thus $(x,y) \in V_n \times V_n \subset U_n$ for each $n$. This implies $(x,y) \in \Delta$ and $x=y$. Thus $\lbrace{x}\rbrace=\bigcap_{n<\omega} st(x,\mathcal{G}_n)$. We have established that $\{ \mathcal{G}_n \}$ is a $G_\delta$-diagonal sequence of $X$.
$\Leftarrow$ Suppose $\lbrace{\mathcal{G}_n}\rbrace$ is a $G_\delta-$diagonal sequence of $X$. For each $n$, let $U_n=\bigcup \lbrace{V \times V:V \in \mathcal{G}_n}\rbrace$. Since $\bigcap_n st(x,\mathcal{G}_n)=\{ x \}$, $\Delta \subset \bigcap_{n<\omega} U_n$. To show the set inclusion for the other direction, let $(x,y) \in \bigcap_{n<\omega} U_n$. For each $n$, $(x,y) \in V_n \times V_n$ for some $V_n \in \mathcal{G}_n$. This implies that $y \in st(x,\mathcal{G}_n)$ for each $n$. It follows that $y=x$. Thus $\Delta = \bigcap_{n<\omega} U_n$.
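As a standard illustration of Lemma 1 (not part of the original post): every metric space has a $G_\delta-$diagonal sequence. Take $\mathcal{G}_n$ to be the cover by open balls of radius $2^{-n}$; any member of $\mathcal{G}_n$ containing $x$ lies inside $B(x,2^{-(n-1)})$, so

```latex
\mathcal{G}_n=\lbrace{B(c,2^{-n}):c \in X}\rbrace, \qquad
st(x,\mathcal{G}_n) \subset B(x,2^{-(n-1)}), \qquad
\bigcap_{n<\omega} st(x,\mathcal{G}_n)=\lbrace{x}\rbrace.
```

Hence, by Lemma 1, the diagonal of a metric space is a $G_\delta-$set in $X^2$, recovering the familiar special case.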
Lemma 2. If $X$ is compact and has a $G_\delta-$diagonal, then $X$ has a $G^*_\delta-$diagonal. Furthermore, the $G^*_\delta-$diagonal sequence can be chosen so that each of its open covers is finite.
Proof. Let $\lbrace{\mathcal{G}_n}\rbrace_{n<\omega}$ be the $G_\delta-$diagonal sequence obtained in Lemma 1. We inductively define $\lbrace{\mathcal{H}_n}\rbrace_{n<\omega}$, another $G_\delta-$diagonal sequence.
Using the compactness of $X$, obtain a finite subcollection $\mathcal{H}_0$ of $\mathcal{G}_0$ such that $\mathcal{H}_0$ is a cover of $X$. Here’s how I obtain $\mathcal{H}_1$. For each $x \in X$, choose an open set $G_x \in \mathcal{G}_1$ and an open set $H_x \in \mathcal{H}_0$ such that $x \in G_x$ and $x \in H_x$. Choose open set $V_x$ such that $x \in V_x$ and $\overline{V_x} \subset G_x \cap H_x$. Let $\mathcal{H}_1$ be a finite subcollection of $\lbrace{V_x:x \in X}\rbrace$ such that $\mathcal{H}_1$ is a cover of $X$. Continue the inductive process and we produce a sequence of open covers $\lbrace{\mathcal{H}_n}\rbrace_{n<\omega}$ satisfying the following two claims.
Claim 1
For each $x \in X$, $\lbrace{x}\rbrace=\bigcap_{n<\omega} st(x,\mathcal{H}_n)$.
Because each open cover $\mathcal{H}_n$ is chosen to be a subcover or a refinement of the open cover $\mathcal{G}_n$, we have $st(x,\mathcal{H}_n) \subset st(x,\mathcal{G}_n)$. Since $\lbrace{x}\rbrace=\bigcap_{n<\omega} st(x,\mathcal{G}_n)$ (from the definition of $G_\delta-$diagonal sequence), we have $\lbrace{x}\rbrace=\bigcap_{n<\omega} st(x,\mathcal{H}_n)$.
Claim 2
For each $x \in X$, $\lbrace{x}\rbrace=\bigcap_{n<\omega} \overline{st(x,\mathcal{H}_n)}$.
We only need to show $\bigcap_{n<\omega} \overline{st(x,\mathcal{H}_n)} \subset \lbrace{x}\rbrace$. Let $y \in \overline{st(x,\mathcal{H}_n)}$ for each $n$. Because $\mathcal{H}_n$ is finite, $y \in \overline{V}$ for some $V \in \mathcal{H}_n$ with $x \in V$. Each such $\overline{V} \subset U$ for some $U \in \mathcal{H}_{n-1}$. Thus $y \in \overline{st(x,\mathcal{H}_n)}$ implies $y \in st(x,\mathcal{H}_{n-1})$. By Claim 1, $y=x$.
We have shown that $X$ has a $G^*_\delta-$diagonal by producing a $G^*_\delta-$diagonal sequence $\lbrace{\mathcal{H}_n}\rbrace_{n<\omega}$.
Theorem. If $X$ is compact and has a $G_\delta-$diagonal, then $X$ is metrizable.
Proof. Let $\lbrace{\mathcal{H}_n}\rbrace_{n<\omega}$ be the $G^*_\delta-$diagonal sequence obtained in Lemma 2. Furthermore, each $\mathcal{H}_n$ is a finite open cover. Let $\mathcal{H}=\bigcup_{n<\omega} \mathcal{H}_n$. The collection $\mathcal{H}$ satisfies the properties stated in the following two claims.
Claim 3
For each $x,y \in X$ with $x \neq y$, there is a $U \in \mathcal{H}$ such that $x \in U$ and $y \notin \overline{U}$.
Since $\lbrace{x}\rbrace=\bigcap_{n<\omega} \overline{st(x,\mathcal{H}_n)}$, $y \notin \overline{st(x,\mathcal{H}_n)}$ for some $n$. Then there is some $U \in \mathcal{H}_n$ such that $x \in U$ and $y \notin \overline{U}$. In fact, for any $U \in \mathcal{H}_n$ with $x \in U$, it must be the case that $y \notin \overline{U}$.
Claim 4
Let $\mathcal{B}=\lbrace{X-\overline{\bigcup F}: F \subset \mathcal{H} \text{ and } \vert F \vert < \omega}\rbrace$. Then $\mathcal{B}$ is a countable base for $X$.
To see Claim 4, note that $\mathcal{B}$ is a cover of $X$ and is closed under finite intersections. This makes $\mathcal{B}$ a base for a topology. To show that this base generates the same topology on $X$, let $y \in U \subset X$ where $U$ is open. Then $X-U$ is compact. For each $x \in X-U$, let $V_x \in \mathcal{H}$ such that $x \in V_x$ and $y \notin \overline{V_x}$. By compactness, we can choose $F=\lbrace{V_{x(0)},...,V_{x(n)}}\rbrace$ such that $F$ is a cover of $X-U$. Then $y \in X-\overline{\bigcup F} \subset U$.
With Claims 3 and 4, the theorem is established.
Corollary. Let $X$ be a compact space. The following conditions are equivalent.
1. $X^2$ is perfectly normal.
2. $X$ has a $G_\delta-$diagonal.
3. $X$ has a countable base.
Proof. $1 \rightarrow 2$ and $3 \rightarrow 1$ are obvious. $2 \rightarrow 3$ follows from the theorem.
Examples. Based on the corollary, any non-metrizable compact Hausdorff space does not have a $G_\delta-$diagonal. One handy example is the uncountable product of the unit interval $I^{\omega_1}$ where $I=[0,1]$. Both $I \times I$ with the lexicographic order and the double arrow space are compact and non-metrizable (thus have no $G_\delta-$diagonal). I discussed these two spaces in a previous post.
Comment. The notion of $G_\delta-$diagonal plays an important role in metrization theorems. The theorem for compact spaces with a $G_\delta-$diagonal has long since been generalized. For example, in [3] Chaber showed that any countably compact space with a $G_\delta-$diagonal is metrizable. In [1] and [5], it was shown that any paracompact space with a $G_\delta-$diagonal is submetrizable. The theorem proved in this post would simply be a corollary of this result. In upcoming posts, I plan to discuss some of these theorems as well as explore the connection between submetrizability and various $G_\delta-$diagonal properties.
Reference
1. Borges, C. R. On stratifiable spaces, Pacific J. Math., 17 (1966), 1-16.
2. Ceder, J. G. Some generalizations of metric spaces, Pacific J. Math., 11 (1961), 105-125.
3. Chaber, J., Conditions which imply compactness in countably compact spaces, Bull. Acad. Pol. Sci. Ser. Math., 24 (1976), 993-998.
4. Hodel, R., E., Moore spaces and $w \Delta-$spaces, Pacific J. Math., 38, (1971), 641-652.
5. Okuyama, A., On metrizability of M-spaces, Proc. Japan Acad., 40 (1964), 176-179.
6. Sneider, V., Continuous images of Souslin and Borel sets: metrization theorems, Dokl. Acad. Nauk USSR, 50 (1945), 77-79.
$\copyright$ 2009 – Dan Ma, Revised February 1, 2018
https://worldbuilding.stackexchange.com/questions/110963/reasons-people-wouldnt-weaponize-these-gravity-machines
Reasons People Wouldn't Weaponize These Gravity Machines
So when making my space stories, I have put them in a technological time in history where gravity manipulators are available to the common creature, and you can use them to go to other planets and live in zero-G environments. The science isn't important (mostly because I'm shamelessly hand-waving it with two words: Exotic Matter). What a manipulator does, exactly, is make the gravity a person (creature) experiences equal to that of their home planet, rather than that of their current environment. And this is a problem, because it leaves the option open for any person to use the gravity machine to experience less than their normal gravity, effectively giving them super strength. That happening would mess up the story and plot, as a major theme is the natural ability of different creatures. So my question is: what excuses or reasons could I give for why no one uses this technology like this?
Gravity changes weight but it doesn't change mass. You can't swing a 2 ton anime sword like a normal sword just because you're in zero G. It would be more like pushing a really heavy fridge around on wheels. Hard to start and hard to stop.
Changing gravity isn't really super strength. You can move heavy objects but only slowly and the heavier they are, the slower you can move them.
I'd rather an exoskeleton which would actually give me super strength.
Gravity weapons would be far different than plain old strength. If you reverse gravity, your enemy falls up or if you increase it, you squash them like a fly. If you use zero G, you stop them from fighting back easily.
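The inertia point above can be put in rough numbers (all values here are illustrative, not from the thread): the force needed to bring a massive object up to swing speed is the same in zero-G as on the ground, because it depends on mass, not weight.

```python
# Weight disappears in zero-G; mass (inertia) does not. Force needed to bring
# a 2000 kg "anime sword" up to swing speed in half a second, in any gravity:
mass = 2000.0         # kg (illustrative)
swing_speed = 10.0    # m/s, a brisk sword swing (illustrative)
swing_time = 0.5      # s

force = mass * swing_speed / swing_time   # F = m * (dv/dt)
print(force)  # 40000.0 N
```

Tens of kilonewtons is hundreds of times what a human arm can supply, which is why lowering gravity alone does not grant super strength.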
• It really depends on the degree of precision with which the gravity can be manipulated. If gravity can be added or subtracted in all direction at any location, you can effectively regard it as changing the individual force vectors of the 3D vector field, which means you can do whatever you want. So, to the OP, I suggest you limit the degree of precision and the maximum magnitude. – Lowell May 1 '18 at 7:31
Nobody messes with standard gravity because gravity isn't personal. If I reduce it for me, so that I am stronger relative to the massive objects around me, I am also reducing it for you and making you stronger. So instead of beating each other up with baseball bats, we swing automobiles at each other. (sort of reminds me of Superman vs General Zod in that old Christopher Reeve flick).
Since medical science is not enhanced by gravity manipulation, and since it is easier to reconstruct a skull that has been hit by a bat than one that has been hit by a bus...
Nobody messes with the gravity controls.
• these machines are suits that change the gravity for a specific individual not for the whole ship – Amoeba Apr 30 '18 at 23:54
• what if they change the gravity inside the suit but not outside it? Whatever you pick up still weighs whatever the outside gravity says it should. – John Apr 30 '18 at 23:58
• Like John just said... the gravity based strength enhancement will only be effective when moving things around inside the suit. If I grab a (local environment relative) one ton brick with the glove of my suit and throw it at you, it doesn't move and you remain unhurt. If I want to throw heavy objects at you, the secret isn't to use the suit's interior gravity manipulation circuitry to lighten the load. I just need to turn up the power on the suit's exoskeleton muscles so they become strong enough for the job. – Henry Taylor Apr 30 '18 at 23:59
• exactly how could they weaponize it anyway? A car still has the same inertia no matter what the gravity. If you can't swing a car around in normal gravity you won't be doing it in lower gravity either. – John May 1 '18 at 0:02
• John Carter is actually stronger than the other people around him, not just reducing the weight of his own body using gravity manipulation. It is also a serious exaggeration of the difference in gravity and the advantage it would give. They might be able to jump really high but that's it; they could not hit harder. – John May 1 '18 at 1:52
Can it just be a really heavy pair of boots?
• well considering that most the organisms may not have feet or more than two, well who ever designed it would get sued for human chauvinism – Amoeba May 1 '18 at 0:11
• @Amoeba I think the word you're looking for is Anthropocentrism. – Tim B II May 1 '18 at 0:14
• @TimBII rolls right off the tongue – Amoeba May 1 '18 at 0:17
• Please excuse my anthropocentrism. I blame my upbringing :( – Pink Sweetener May 1 '18 at 1:41
The sad answer to your question is "no", people will manipulate gravity machines for their own purposes, and if they think they can achieve a benefit (especially an unearned benefit) by using the gravity machine with little risk to themselves, then they will likely go for it.
You can see this with people using various tools and implements in ways they were never intended or designed to be used in order to commit crimes. This isn't even crimes of violence (that was settled by our distant hominid ancestors when they discovered rocks and long thigh bones amplified their strength), computer hacking and programming malware, phone "phreaking", counterfeiting books, money and luxury purses and other technological flim flammery have existed for a very long time.
Now why didn't I think of this sooner?
Gravity machines, especially ones powered by handwavium, provide plenty of opportunities for characters to mess with your worldbuilding, once you start thinking through all the implications. As many people have mentioned, gravity does not change mass or inertia: you can easily be killed by a car drifting into you and crushing you against a wall in zero-g. Similarly, if the gravity effect can be focused as you imply in your description, then you could increase local gravity and stick a person to a wall or the floor while you go about your nefarious plan. Since gravity is thought to be the effect of bending space-time by mass, you also need to figure out where the mass or equivalent energy is coming from or going to, as well as issues like heat dissipation. In other words, manipulating gravity has very profound effects beyond making you feel lighter or heavier.
And of course, what happens when the bad guys have a falling out, or the good guys grab their own gravity machines to battle the bad guys? How do intersecting gravitational fields work in your world? Can you set them "out of phase" and neutralize another gravity projector? Would you create singularities and suddenly deal with miniature black holes rapidly radiating away their gravitational potential energy through Hawking Radiation and radiating Petawatts of energy in their final moments?
Micro black holes are very bad news
So perhaps you need to rein in your enthusiasm and carefully think through the logic of your worldbuilding. Gravity generators are likely to have far more interesting and profound ramifications than simply making you temporarily lighter or heavier.
I don't exactly know how the gravity manipulator would work, but I picture the following. There is the local gravitation field F_L without the effect of the manipulator, provided by the planet and everything around the region. And then the gravity manipulator adds an additional local field F_M to the original field, resulting in the effective field F.
F_L is the given -- it cannot be changed. The only thing you can restrict is F_M. I presume you want to preserve the directional freedom.
Therefore, you can limit (1) the maximum magnitude of the vectors in F_M and (2) the precision of control -- i.e. the maximum rate of change of the directions and the magnitudes of the vectors.
Please note that the fields do not represent the forces themselves, but rather the gravitational potential. And I did not care about the "conservativeness" of the field in the illustration, but gravitational fields do have to be conservative.
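Limit (1) can be sketched directly (Python; the field values and the cap below are made-up illustrations, not from the answer): clamp the magnitude of the suit's added field F_M before superposing it on the local field F_L.

```python
import math

def clamp_magnitude(v, cap):
    # Limit (1): the manipulator's added field F_M may point in any direction,
    # but its magnitude is capped at `cap`.
    mag = math.sqrt(sum(c * c for c in v))
    if mag <= cap:
        return v
    return tuple(c * cap / mag for c in v)

F_L = (0.0, 0.0, -9.8)                        # local planetary field (illustrative)
F_M = clamp_magnitude((0.0, 0.0, 30.0), 9.8)  # suit requests 30, gets capped at 9.8
F = tuple(a + b for a, b in zip(F_L, F_M))    # effective field F = F_L + F_M
print(F)  # with this cap, the suit can at best cancel local gravity, not reverse it
```

Limit (2) would work the same way, except the clamp is applied to the change in F_M per unit time rather than to F_M itself.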
• Just to add a little more, you can consider having gravity manipulators of varying qualities, where the ones of high qualities can yield more control over the field, enabling one to change the field more drastically. But that is completely up to you to include in the story:) – Lowell May 1 '18 at 7:49
You can make the use of the suits in such a manner (aka making yourself super strong) similar to gun usage in the USA (I'm not American, so this could be 100% BS).
Anyone could use the suit in such a manner, but to do so would be criminal, and it also raises the issue of escalation, where someone else will turn down the gravity of their suit to be able to combat you. With wide enough spread and usage, as well as a high level of education and a strong moral compass, the users of such suits simply find no need or urge to use the suits in such a manner, and if they did, the other users would be able to shut them down.
You could also throw in some past history of criminals using the suits as such and being shut down, or maybe the gravity goes -ve and tears them apart. Some horror stories to stop people from going to the extremes of the suits power.
Finally, you could have each suit linked to some central system, with differing amounts of gravity requiring different amounts of authorization. Hence a person who wants to have super strength will need to seek approval before being able to access it. Of course you might have people in modified suits that circumvent the authorization issue, but you will always have criminals.
https://answerriddle.com/answer-which-one-of-the-following-jazz-musicians-served-in-the-u-s-army-in-the-1940s-and-helped-form-wolfpack-one-of-first-integrated-bands-in-the-u-s-military/
# Answer: Which one of the following jazz musicians served in the U.S. Army in the 1940s and helped form Wolfpack, one of first integrated bands in the U.S. Military?
The Question: Which one of the following jazz musicians served in the U.S. Army in the 1940s and helped form Wolfpack, one of first integrated bands in the U.S. Military?
Thelonious Monk
Dave Brubeck
Miles Davis
Buddy Rich
Sonny Rollins
The Answer: Dave Brubeck.
https://socratic.org/questions/how-do-you-simplify-4-sqrt7-sqrt-3
# How do you simplify 4 /(sqrt7 + sqrt 3)?
Apr 3, 2017
You multiply both the numerator and the denominator by $\sqrt{7} - \sqrt{3}$ and use the special product $\left(A + B\right) \left(A - B\right) = {A}^{2} - {B}^{2}$
#### Explanation:
$= \frac{4}{\sqrt{7} + \sqrt{3}} \times \frac{\sqrt{7} - \sqrt{3}}{\sqrt{7} - \sqrt{3}}$
$= \frac{4 \left(\sqrt{7} - \sqrt{3}\right)}{\left(\sqrt{7} + \sqrt{3}\right) \left(\sqrt{7} - \sqrt{3}\right)}$
$= \frac{4 \left(\sqrt{7} - \sqrt{3}\right)}{{\sqrt{7}}^{2} - {\sqrt{3}}^{2}} = \frac{4 \left(\sqrt{7} - \sqrt{3}\right)}{7 - 3} =$
$= \frac{\cancel{4} \left(\sqrt{7} - \sqrt{3}\right)}{\cancel{4}} = \sqrt{7} - \sqrt{3}$
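A quick numeric sanity check of the result (added here for illustration, not part of the original answer):

```python
import math

# Rationalizing 4 / (sqrt(7) + sqrt(3)) should give sqrt(7) - sqrt(3).
original = 4 / (math.sqrt(7) + math.sqrt(3))
rationalized = math.sqrt(7) - math.sqrt(3)

assert math.isclose(original, rationalized)  # both ≈ 0.91370
```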
https://mathematica.stackexchange.com/questions/123965/how-to-get-old-message-formatting-in-version-11
# How to get old message formatting in version 11?
Version 11 uses a new-style message formatting.
The new style has useful features, and it is usually desirable. However, when saved in a notebook, it doesn't display correctly in older versions of Mathematica. If we use version 11 to write documentation for a package that is also compatible with older versions, we cannot include new-style messages in the notebook.
How can we turn off this new message formatting?
Internal`$MessageMenu = False
https://stacks.math.columbia.edu/tag/031N
Lemma 10.149.3. Let $R$ be a ring. Let $I$ be a directed set. Let $(S_ i, \varphi _{ii'})$ be a system of $R$-algebras over $I$. If each $R \to S_ i$ is formally étale, then $S = \mathop{\mathrm{colim}}\nolimits _{i \in I} S_ i$ is formally étale over $R$.
Proof. Consider a diagram as in Definition 10.149.1. By assumption we get unique $R$-algebra maps $S_ i \to A$ lifting the compositions $S_ i \to S \to A/I$. Hence these are compatible with the transition maps $\varphi _{ii'}$ and define a lift $S \to A$. This proves existence. The uniqueness is clear by restricting to each $S_ i$. $\square$
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-6th-edition/chapter-7-summary-review-and-test-cumulative-review-exercises-chapters-1-7-page-7074/7
## College Algebra (6th Edition)
$x = 3$
$$log_{2}(x + 1) + log_{2}(x - 1) = 3$$
By using the rules of logarithms, we can combine the two terms on the left:
$$log_{2}(x + 1) + log_{2}(x - 1) = log_{2}(x+1)(x-1) = log_{2}(x^{2} - 1)$$
so the original equation becomes
$$log_{2}(x^{2} - 1) = 3$$
Finally, we can write the equation in exponential form and solve accordingly:
$$2^{3} = x^{2} - 1$$
$8 = x^{2} - 1$
$9 = x^{2}$
$x = \pm\sqrt{9} = \pm 3$
Since using $x = -3$ in the original equation would give $log_{2}(-2) + log_{2}(-4) = 3$, with undefined logarithms of negative numbers, the only possible solution is $x = 3$.
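A quick check of the solution (an added illustration): substituting $x = 3$ back into the original equation gives $log_{2}(4) + log_{2}(2) = 2 + 1 = 3$.

```python
import math

# Verify x = 3 satisfies log2(x + 1) + log2(x - 1) = 3;
# x = -3 would require logs of negative numbers, so it is rejected.
x = 3
assert math.isclose(math.log2(x + 1) + math.log2(x - 1), 3)
```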
https://math.stackexchange.com/questions/1509985/finding-a-bitstring-having-a-maximum-hamming-distance-away-from-a-list-of-bitstr
# Finding a bitstring having a maximum Hamming distance away from a list of bitstrings
If I have a list of $2^{2k}$ bitstrings of length $2k+1$, each pair of which has Hamming distance $\leq 2k$, is it possible to find a bitstring (either one of the ones on the list or another one) such that the Hamming distance (or some other metric) between it and every string on the list is $\leq k$? If it is possible, is there an efficient algorithm for doing so?
I have a feeling that there may not be a solution to my specific question, since I'm thinking that there are $2^{2^{2k}}$ different sets of strings with pairwise Hamming distance $\leq 2k$ (just think of choosing either a string or the string with every bit flipped) and only $2^{2k+1}$ possible "centers". However, I'm not sure. I'm hoping that this may be possible with some other metric.
The counting argument you're grasping towards works to show that there is not always a possible center.
When you have a set $2^{2k}$ strings of length $2k+1$, that's exactly half of all strings with length $2k+1$. Also it is not possible to have both a string and its negation in the set, because they would have Hamming distance $2k+1$. This means that for every string, either it or its negation has to be in the set, or the set wouldn't be large enough.
And this means that for any particular choice of "center" you can reconstruct exactly what the original list must have been -- namely, every string with a Hamming distance of $> k$ from the center has to be excluded, which means that its negation (which has a distance of $\le k$ from the center) must be included. This fixes the included-or-excluded status of every possible string.
So there's only one list that works for each "center", which, as you notice are much fewer than the number of possible lists (unless $k=0$, which is kind of a degenerate case).
If you have a list (or rather, an oracle that determines membership in the list) and know that it has a possible center, is it easy to determine what that center is with a small number of queries to the oracle? Yes:
First find two strings with Hamming distance $1$ where one of them is in the set and the other isn't. (For example, just try all strings of the form $1^n0^{2k+1-n}$ until the answer you get changes). This will immediately tell you what the right bit at the position where the two strings differ is. For each other position, take the string you know is just outside the set and see if it goes inside if you flip that bit. If it does, the flipped bit was right, otherwise the original bit was.
In this way you can determine the center in at most $3k+1$ queries to the oracle (modulo possible fencepost errors, but it should be thereabout).
• Thank you. Do you know if there would be more sets covered by other metrics such as the Levenshtein distance? – Ari Nov 2 '15 at 21:26
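The query procedure described in the answer can be sketched in code (a toy version written for illustration; the oracle interface and names like `find_center` are my own, not from the thread):

```python
def hamming(a, b):
    """Hamming distance between two bitmasks."""
    return bin(a ^ b).count("1")

def find_center(oracle, n):
    """Recover the hidden 'center' using only membership queries.

    Assumes, as in the answer, that the set consists of exactly the
    length-n strings within Hamming distance k = (n - 1) // 2 of some
    center, with oracle(s) testing membership of the bitmask s.
    """
    # Step 1: walk through the strings 1^m 0^(n-m).  Membership must flip
    # somewhere between all-zeros and all-ones, because a string and its
    # negation cannot both be in the set.
    prev = 0  # the all-zeros string
    for m in range(1, n + 1):
        cur = prev | (1 << (n - m))  # set one more leading bit
        if oracle(cur) != oracle(prev):
            inside, outside = (cur, prev) if oracle(cur) else (prev, cur)
            changed_pos = n - m  # the single bit where cur and prev differ
            break
        prev = cur

    # The in-set string has distance exactly k and the out-of-set one k + 1,
    # so the in-set string agrees with the center at the changed position.
    bits = {changed_pos: (inside >> changed_pos) & 1}

    # Step 2: flip each remaining bit of the just-outside string; if the
    # flip moves it into the set, the flipped value is the center's bit.
    for q in range(n):
        if q == changed_pos:
            continue
        b_bit = (outside >> q) & 1
        bits[q] = 1 - b_bit if oracle(outside ^ (1 << q)) else b_bit

    return sum(bit << q for q, bit in bits.items())

# Demo: n = 5 (so k = 2) with secret center 0b10110.
n, secret = 5, 0b10110
assert find_center(lambda s: hamming(s, secret) <= 2, n) == secret
```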
http://math.stackexchange.com/questions/648827/difficult-integral-maybe-multidimensional-contour-integration
# Difficult integral, maybe multidimensional contour integration
I need to solve $$\int_{\mathbb{R}^3}d^3k\frac{e^{i\vec{k}\cdot\vec{\rho}}}{|\theta|+k^2}$$ I have a feeling that I should use contour integration, but in three variables I do not know how to employ it. Moreover, Mathematica cannot solve it for me.
I should say that I have some notes that say that I should pass to a form like $-i4\pi\int_{-1}^{1}d\cos\theta\int_0^\infty\frac{dk\sin(kr)}{|\theta|+k^2}$, but I do not know why.
I'd appreciate help in solving this.
Since you are a new user, you should explain what you tried. That way, it would be easier for any user to help you. – Felix Marin Jan 23 '14 at 15:42
In the integral, do you mean $\rho^2=k^2$ ? – Tom-Tom Jan 23 '14 at 15:52
V.Rossetto, you are right. Corrected – Rimon Jan 23 '14 at 16:45
$$\int_{\mathbb{R}^{3}} \frac{e^{i\vec{k}\cdot\vec{\rho}}}{|\theta| + k^{2}}\,d^{3}k = \frac{2\pi^{2}}{\rho}\,e^{-\rho\sqrt{|\theta|}}$$
@Rimon It's a usual one since $\large{\rm e}^{{\rm i}\theta} = \cos\left(\theta\right) + {\rm i}\sin\left(\theta\right)$. Thanks. – Felix Marin Jan 23 '14 at 16:55
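For completeness, here is a sketch of the standard route to that closed form (my own addition; the thread only states the result). Taking the $k_z$-axis along $\vec{\rho}$ so that $\vec{k}\cdot\vec{\rho} = k\rho\cos\vartheta$, the angular integration gives

$$\int_{\mathbb{R}^3}\frac{e^{i\vec{k}\cdot\vec{\rho}}}{|\theta|+k^2}\,d^3k = 2\pi\int_0^\infty\frac{k^2\,dk}{|\theta|+k^2}\int_{-1}^{1}e^{ik\rho u}\,du = \frac{4\pi}{\rho}\int_0^\infty\frac{k\sin(k\rho)}{|\theta|+k^2}\,dk,$$

which explains the one-dimensional form in the question. Since the last integrand is even in $k$, one extends it over the whole real line, writes $\sin(k\rho)$ via $e^{ik\rho}$, and closes the contour in the upper half-plane around the pole $k = i\sqrt{|\theta|}$; the integral evaluates to $\frac{\pi}{2}e^{-\rho\sqrt{|\theta|}}$, giving $\frac{2\pi^2}{\rho}e^{-\rho\sqrt{|\theta|}}$ overall.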
https://www.physicsforums.com/threads/transformation-that-takes-all-point-on-parabola-onto-unit-circle.312638/
# Transformation that takes all points on a parabola onto the unit circle
## Homework Statement
Show that the transformation
Code:
[  0  -2  1 ] [ x ]
[ -2   2  0 ] [ y ]
[  2  -2  1 ] [ 1 ]
takes all points on the parabola y^2 = x onto the unit circle x^2 + y^2 = 1
## The Attempt at a Solution
I can't figure out what to do; I just need a hint about how to get started on this.
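Not a full solution, but one way to see what is going on (an added illustration): the matrix acts on homogeneous coordinates (x, y, 1), and the image point is recovered by dividing by the third component. Parametrizing the parabola as (x, y) = (t^2, t), one can check numerically that every image lands on the unit circle:

```python
# Apply the given 3x3 matrix to homogeneous coordinates of parabola points
# and verify the dehomogenized images satisfy x^2 + y^2 = 1.
M = [[0, -2, 1],
     [-2, 2, 0],
     [2, -2, 1]]

for t in [-3, -1, -0.5, 0, 0.25, 1, 2, 10]:
    x, y = t * t, t  # a point on y^2 = x
    X, Y, W = (row[0] * x + row[1] * y + row[2] for row in M)
    u, v = X / W, Y / W  # back from homogeneous coordinates
    assert abs(u * u + v * v - 1) < 1e-9, (t, u, v)
```

Proving the claim then amounts to showing (1 - 2t)^2 + (2t - 2t^2)^2 = (2t^2 - 2t + 1)^2 for all t.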
https://amathew.wordpress.com/category/logic/
### logic
I recently started writing up some material on finite presentation for the CRing project. There seems to be a folk “finitely presented” approach in mathematics: to prove something over a big, scary uncountable field like $\mathbb{C}$, one argues that the problem descends to some much smaller subobject, for instance a finitely generated subring of the complex numbers. It might be possible to prove using elementary methods the analog for such smaller subobjects, from which one can deduce the result for the big object.
One way to make these ideas precise is the characteristic p principle of Abraham Robinson, which I blogged about in the past when describing the model-theoretic approach to the Ax-Grothendieck theorem. Today, I want to describe a slightly different (choice-free!) argument in this vein that I learned from an article of Serre.
Theorem 1 Let ${F: \mathbb{C}^n \rightarrow \mathbb{C}^n}$ be a polynomial map with ${F \circ F = 1_{\mathbb{C}^n}}$. Then ${F}$ has a fixed point.
We can phrase this alternatively as follows. Let ${\sigma: \mathbb{C}[x_1, \dots, x_n] \rightarrow \mathbb{C}[x_1, \dots, x_n]}$ be a ${\mathbb{C}}$-involution. Then the map on the ${\mathrm{Spec}}$‘s has a fixed point (which is a closed point).
In fact, this result can be proved directly using Robinson's principle (exercise!). The present argument, though, has more of an algebro-geometric feel to it, and it now appears in the CRing project — you can find it in the chapter currently marked "various".
Model theory often provides a framework from one which one can obtain “finitary” versions of infinitary results, and vice versa.
One spectacular example is the Ax-Grothendieck theorem, which states that an injective polynomial map ${P: \mathbb{C}^n \rightarrow \mathbb{C}^n}$ is surjective. The key idea here is that the theorem for polynomial maps of a fixed degree is a statement of first-order logic, to which the compactness theorem applies. Next, the theorem is trivial when ${\mathbb{C}}$ is replaced by a finite field, and one then deduces it for ${\overline{\mathbb{F}_p}}$ (and maps ${P: \overline{\mathbb{F}_p} \rightarrow \overline{\mathbb{F}_p}}$) by an inductive limit argument. It then holds for algebraically closed fields of nonzero characteristic, because ${ACF_p}$ is a complete theory—any first-order statement true in one algebraically closed field of characteristic ${p}$ is true in any such field. Finally, one appeals to a famous result of Abraham Robinson that any first-order statement true in algebraically closed fields of characteristic ${p>p_0}$ is true in algebraically closed fields of characteristic zero.
There is a discussion of this result and other proofs by Terence Tao here.
For fun, I will formally state and prove Robinson’s theorem.
Theorem 1 (A. Robinson) Let ${S}$ be a statement in first-order logic in the language of fields (i.e., referring to the operations of addition and multiplication, and the constants ${0,1}$). Then ${S}$ is true in algebraically closed fields of characteristic zero if and only if ${S}$ is true in algebraically closed fields of arbitrarily high (or sufficiently high, ${p>p_0}$) characteristic ${p}$.
http://mathhelpforum.com/algebra/97607-find-larger-integer.html
# Thread: Find the Larger Integer
1. ## Find the Larger Integer
The product of two integers is -40. If the same two integers are added, their sum is -3. Find the larger of the integers.
I did this:
x*y = -40...Equation A
x + y = -3..Equation B
I solved Equation B for y and got y = -x - 3.
I plugged the value for y into Equation A and solved for x.
x(-x - 3) = -3
-x^2 - 3x = -3
I moved everything to the right side.
0 = x^2 + 3x - 40
My final answer is 5 but the math book tells me that the correct answer is -8. Can you explain who is right and why?
2. Depends on whether they mean "largest" in the sense of "absolute value" (in which case it's -8) or "largest" in the sense of "most positive". Go back thru the book and see how they define "largest".
3. Originally Posted by sharkman
The product of two integers is -40. If the same two integers are added, their sum is -3. Find the larger of the integers.
I did this:
x*y = -40...Equation A
x + y = -3..Equation B
I solved Equation B for y and got y = -x - 3.
I plugged the value for y into Equation A and solved for x.
x(-x - 3) = -40
-x^2 - 3x = -3
I moved everything to the right side.
0 = x^2 + 3x - 40
My final answer is 5 but the math book tells me that the correct answer is -8. Can you explain who is right and why?
Note the minor correction above.
But, yes you are correct.
EDIT: I think that "largest" means largest.
4. ## ok but...
Originally Posted by Plato
Note the minor correction above.
But, yes you are correct.
EDIT: I think that "largest" means largest.
Are you saying that the equation x(-x - 3) = -40 should not be equated to -40? But the question stated that the product of the two integers is -40. I moved everything to the right side and so -40 remains -40, right? Please explain what you meant by "minor correction" in your reply. Thank you.
5. Originally Posted by sharkman
Are you saying that the equation x(-x - 3) = -40should not be equated to -40? But the question stated that the product of two integers is -40. I moved everything to the right side and so -40 remains -40, right? Please, explain what you meant by "minor correction" in your reply. Thank you.
You had $\color{red}x(-x-3)=-3$.
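Working the corrected system through (an illustrative check): x*y = -40 and x + y = -3 give 0 = x^2 + 3x - 40 = (x + 8)(x - 5), so the two integers are -8 and 5, and the disagreement is only about what "larger" means.

```python
# Find the integer roots of x^2 + 3x - 40 = 0 by brute force.
roots = [x for x in range(-40, 41) if x * x + 3 * x - 40 == 0]
assert roots == [-8, 5]

# The two integers pair with each other, matching both conditions:
assert -8 + 5 == -3 and -8 * 5 == -40

# "Larger" as most positive gives 5; larger in absolute value gives -8,
# which is why the poster and the book disagree.
assert max(roots) == 5
assert max(roots, key=abs) == -8
```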
https://astronomy.stackexchange.com/questions/14019/discovery-in-astronomy-vs-one-in-physics-do-they-differ-in-required-burden-of
# Discovery in Astronomy vs one in Physics - do they differ in required burden of evidence?
Discovery in astronomy/astrophysics (of astronomical objects) vs. discovery of physical phenomena in experimental physics as such: do they differ in the required burden of evidence (given that astronomy/astrophysics deals with real, physically existing astronomical objects, while the aim of experimental physics per se is mostly to verify somewhat abstract concepts of theoretical physics)?
Perhaps answering my question in some definite way would require analysis with historical perspective and consideration of precedents...
This generalized question of mine was motivated by more specific case featured in https://physics.stackexchange.com/q/236107/25575
• Isn't astronomy a part of physics – user5402 Jul 15 '19 at 15:38
There is no central authority in science. There is no council that sets the standards. The criteria for a discovery are the same: You publish your findings, and your peers accept your results.
There is the 5 sigma rule in particle physics. Perhaps you were thinking of this. But that is not an official rule; instead it's a convention among particle physicists. And it's not applicable to every field.
Things are never cut and dried. What does "peers" mean? What if only most are convinced? Does it count? Discoveries are usually judged in retrospect, after the dust has settled and the debates are over.
//edit
Why is the bar set so high in physics? When "discovering" a particle there is no way to "see" it; instead you observe a mass of data and look for a statistical discrepancy. CERN produces masses and masses of data, and it is sifted for anything unusual. And with so much data, the chance of seeing something unusual is actually quite high. (Imagine searching for repetitions of '9' in the digits of pi: if you search far enough you can be sure to find a string of six 9s, even though the chance of a random string containing six 9s is very low.)
With a mass of data, and the opportunity to repeat experiments it makes sense to set the bar very high.
Compare with an observation such as the "chirp" at LIGO. We can't repeat it at will; the data is there: something happened that is consistent with a black hole merger. No other theory has been proposed that can explain the observations. The observation is less dependent on a statistical finding after running multiple experiments; it is a single direct observation.
In fields in which a result could be explained by chance, then a statistical analysis is done, and published. This contributes to the quality of a finding, and so the number of people you will convince. And convincing your peers is the only criteria that counts in the end.
• Good point about the $5\sigma$ rule in physics. This actually distinguishes physics a bit from astronomy, as in the latter field, people tend to happily publish a $3\sigma$ result (although they may call it "tentative evidence"). – pela Mar 5 '16 at 14:29
• @pela - thanks for the comment - this is amazing, since, IMHO, astronomy/astrophysics deals with real physically existing astronomical objects, while the aim of experimental physics is mostly to verify (somewhat abstract) concepts of theoretical physics ... – Alex Mar 5 '16 at 14:42
• Would it be worth a comparison with psychology, where p=0.05 is "significant" (equivalent to 2 sigma) and p=0.1 is "highly suggestive and tending towards significance"? – James K Mar 5 '16 at 16:42
• @JamesKilfiger - why not ... I wonder if there already exists some research/publication, covering this issue across variety of science domains. – Alex Mar 5 '16 at 16:49
• @Alex: The number of sigmas needed to take a result seriously does depends on the type of observation. But for instance, if one detects something which would be unusual — like a galaxy breaking the distance record — then a $3\sigma$ would make most people happy. After all, this still means that "we're 99% sure it's there". Then better follow-up observations may confirm or reject the observation. And as James Kilfiger mention, we're still doing better than most social sciences :) – pela Mar 5 '16 at 17:21
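The sigma thresholds discussed above translate to tail probabilities via the standard Gaussian tail formula (added here for context; the specific thresholds quoted in the thread are conventions, not rules):

```python
import math

# One-sided tail probability of an n-sigma fluctuation under a normal
# distribution, via the complementary error function.
def one_sided_p(n_sigma):
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

p2 = one_sided_p(2)  # ~0.023, near the p = 0.05 "significant" cutoff
p3 = one_sided_p(3)  # ~0.0013, a typical astronomy "tentative" detection
p5 = one_sided_p(5)  # ~2.9e-7, the particle-physics discovery convention

assert p5 < p3 < p2
```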
https://quant.stackexchange.com/questions/38027/finding-a-minimum-variance-portfolio-when-using-a-regulariser
# Finding a minimum variance portfolio when using a regulariser?
I am aware that the minimum variance portfolio of a market with $n$ securities can be shown to be:
$$w^* = (1^T_n\Sigma^{-1}1_n)^{-1}\Sigma^{-1}1_n, \\ s.t. \ \ 1^T_nw = 1$$
by using the method of Lagrange multipliers or otherwise. I am interested in a demonstration of the extension: $$w^* = \underset{w}{\mathrm{argmin}}\lbrace w^T \Sigma w + \lambda\sum_{i=1}^n\rho(w_i)\rbrace\\ s.t. \ \ 1^T_nw = 1$$
where $\rho(.)$ is some arbitrary penalty function (e.g. $\lvert w_i\rvert$).
Perhaps you could go through the process step by step as I am getting lost when I try.
Thanks!
You're not going to get an analytic formula except for special choices of the penalty function $\rho(x)$, and you'll probably want $\rho$ convex.
• If $\rho$ is convex, the problem is a convex optimization problem and can be efficiently solved numerically. If $\rho$ isn't convex, the optimization problem may be difficult to solve.
• If $\rho(x) = |x|$ you basically have the LASSO objective which doesn't have an analytic solution (though the solution can be efficiently found numerically).
• If $\rho(x) = x^2$, you get a clean formula.
### Special case $\rho(x) = x^2$
Then $\lambda \sum_i \rho(w_i) = \lambda \mathbf{w}'I\mathbf{w}$. Your optimization problem is then:
$$\begin{aligned} &\underset{\mathbf{w}}{\text{minimize}} & & \mathbf{w}' \left(\Sigma + \lambda I \right)\mathbf{w} \\ &\text{subject to} & & \textstyle\sum_i w_i = 1 \end{aligned}$$
And it's essentially the same as your original problem. $\Sigma$ is replaced by $\Sigma + \lambda I$.
$$w^* = \frac{\left( \Sigma + \lambda I\right)^{-1}\mathbf{1}}{\mathbf{1}'\left( \Sigma + \lambda I\right)^{-1}\mathbf{1}}$$
(Just to be explicit, I use bold letters for vectors and $I$ is the identity matrix.)
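As a quick sketch of the closed-form result above (my own numpy illustration, not code from the original answer; the covariance matrix is randomly generated):

```python
import numpy as np

def min_var_weights(Sigma, lam=0.0):
    """Closed-form minimum-variance weights with an L2 (ridge) penalty.

    Solves  min_w  w' Sigma w + lam * w'w   s.t.  1'w = 1,
    i.e. the unpenalised problem with Sigma replaced by Sigma + lam*I.
    """
    n = Sigma.shape[0]
    ones = np.ones(n)
    x = np.linalg.solve(Sigma + lam * np.eye(n), ones)  # (Sigma + lam*I)^{-1} 1
    return x / (ones @ x)                               # normalise: weights sum to 1

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8))
Sigma = A @ A.T + 8 * np.eye(8)        # a random positive-definite "covariance"

w0 = min_var_weights(Sigma)            # classic minimum-variance portfolio
w_big = min_var_weights(Sigma, 1e6)    # heavy shrinkage: weights approach 1/n
```

As $\lambda$ grows, $\Sigma + \lambda I$ is dominated by the identity, so `w_big` lands very close to the equal-weight portfolio, consistent with the $1/n$ limit noted in the comments.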
-- Update -- Motivated by the comment from @noob2, I've attached a simulated example showing how the security weights (for $n = 8$) change as $\lambda$ increases. As @noob2 pointed out, higher $\lambda$ pushes the weights towards the equal-weight portfolio.
(Note: I used a random covariance matrix, not one based on actual data, so don't overgeneralize anything besides the long-run convergence towards $1/n$.)
• Interesting. What this is saying is: as $\lambda \rightarrow \infty$, make the portfolio look more and more like the $\frac{1}{N}$ portfolio; for $\lambda=0$, take the usual min-var portfolio; and for $\lambda$ in between, a compromise between the two. Feb 2, 2018 at 15:28
• @noob2 Yeah, I added a picture from an example simulation. Feb 2, 2018 at 15:51
• @MatthewGunn This is very useful, thanks. Could you point me towards a resource that demonstrates finding a numerical solution in the Lasso case? Maybe an algorithm or pseudocode? Thanks. Feb 2, 2018 at 16:04
• Following my above comment am I right in assuming I could use gradient descent or some Newton Raphson based approach to converge on an optimal solution in the convex case? Feb 2, 2018 at 16:23
• @PsychicSteven717 If you use cvx in MATLAB (you'll have to add carriage returns) cvx_begin variables w(k); dual variable u; minimize(quad_form(w, S) + lambda * norm(w, 1)) subject to: u: sum(w) == 1; cvx_end Of course you can find optimization libraries for any language. Feb 2, 2018 at 16:27
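For the $\rho(x)=|x|$ case asked about in the comments, here is a rough numerical sketch (my own illustration, using scipy rather than the MATLAB cvx snippet in the last comment; SLSQP handles the nonsmooth penalty only approximately, so a dedicated convex solver would be more robust):

```python
import numpy as np
from scipy.optimize import minimize

def l1_min_var(Sigma, lam):
    """Numerically solve min_w w'Sigma w + lam*sum|w_i| s.t. sum(w) = 1."""
    n = Sigma.shape[0]
    objective = lambda w: w @ Sigma @ w + lam * np.abs(w).sum()
    budget = {"type": "eq", "fun": lambda w: w.sum() - 1.0}
    res = minimize(objective, np.full(n, 1.0 / n),   # start from equal weights
                   method="SLSQP", constraints=[budget])
    return res.x

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
Sigma = A @ A.T + 5 * np.eye(5)   # random positive-definite covariance
w = l1_min_var(Sigma, lam=0.5)
```

Because the objective is convex, the solution this returns should be (approximately) the global optimum; for production use a convex-optimization library with exact handling of the L1 term is preferable.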
https://www.physicsforums.com/threads/optimization-problem-with-an-rc-bp-filter.886178/
|
# Optimization Problem with an RC BP filter
1. Sep 21, 2016
### BiGyElLoWhAt
I am assigned to design a circuit whose output voltage peaks at 10 kHz and is less than half the peak voltage at 3 kHz and 30 kHz. Only capacitors and resistors are allowed. The circuit I'm using is attached. I end up with
$I_0 = [\frac{-R_2\omega^2C_1C_2 + [C_1+C_2]R_1R_2\omega^3C_1C_2 - R_1\omega^2[C_1+C_2]^2 - R_2\omega^2C_2[C_1+C_2]}{(R_1R_2\omega^2C_1C_2-R_1\omega[C_1+C_2]-R_2C_2\omega)^2 +1} + \frac{-R_1R_2^2\omega^4C_1^2C_2^2 + R_1R_2\omega^3C_1C_2[C_1+C_2] +R_2^2\omega^3C_1C_2^2 + [C_1+C_2]\omega}{(R_1R_2\omega^2C_1C_2-R_1\omega[C_1+C_2]-R_2C_2\omega)^2 +1}i] V_{in}$
Which I just realized I have I naught and not I2 (running through R2), so I need to fix that.
I'm assuming there are some tricks to picking values to accomplish this? We're staying in the pico-to-micro range for caps and <1 MΩ for resistors, just for accessibility.
If I make a spreadsheet trying to capture all the variance, it's too huge. I'm already out to column DZ and have only covered the $R_1$–$\omega$ and $R_2$–$\omega$ variation.
I don't think I'll even be able to solve for $\frac{d}{d\omega} \text{Transfer} = 0$
Since the resistors don't seem to change much of the frequency response (I assumed they might have to do with the width of the peak, as in an RLC Q-factor), I should be able to simplify a little, but I think the key will be in my capacitance ratios.
Thanks.
#### Attached Files:
• Circuit.png (12.6 KB)
2. Sep 22, 2016
### andrewkirk
Spreadsheets are lousy for optimisation. You're much better off using a more versatile tool like R, Matlab or Mathematica. They can do in a few lines of code what a spreadsheet might require hundreds of columns and many MB of spreadsheet size to accomplish, and they'll do it more quickly and intuitively too.
It looks like you have at most four variables, so the optimisation should be quite tractable.
Have you performed dimensional analysis on your formula? I'm not that familiar with electrical formulas but it looks to me like the terms in the numerator do not all have the same units, and the same goes for the denominator. If that's right then the formula may not be correct.
3. Sep 22, 2016
### Staff: Mentor
Optimization can mean different things to different people; it depends upon what you are trying to optimize (or maximize or minimize). What do you mean by "optimization" here?
For a 2-stage passive filter we often desire the highest Q, and this is achieved by having the second stage impose the least feasible loading on the first RC stage. So you might try to make the impedance of C2 and R2 on the order of 10 or more times the impedance of R1 and C1.
4. Sep 22, 2016
### Aaron Crowl
The first stage is a low-pass filter and the second stage is a high pass filter. How do you think they would behave if they were separate? Hint 1: C1 is going to be larger than C2. Hint 2: C1 is going to behave somewhat like an open until you get close to the high frequency cutoff and C2 is going to behave somewhat like a short when you are well above the low frequency cutoff. Approximations are OK for getting starting values.
5. Sep 22, 2016
### BiGyElLoWhAt
I actually thought of this late last night, but was wondering if it was reasonable to treat it as such. The approximation part was what bothered me slightly. So pick my RC values in my first circuit so that I have it about 100% at 10kHz, and the same with the second? What about the diminishing effects? Do I set 30k = 50%*@10k on my hipass and same with lowpass? My hipass should have negligible effects on low frequencies, and vice versa, so that would be a good approximation, I think.
Last edited: Sep 22, 2016
6. Sep 22, 2016
### BiGyElLoWhAt
I tried using wolfram, and I'm not sure if I entered it correctly, but it didn't "understand my query". They do have the same units. The numerator terms all have ohm^-1 and denominator is unitless (1/wc has units of ohm, so wc is ohm^-1). That's how I caught mistakes on my first 3 attempts at the analysis haha.
When I tried an approximate maximization by setting the denominator to zero, I got something to the effect of $\sqrt{abcd + 4iab}$, with $i = \sqrt{-1}$, which doesn't seem to make sense to me.
7. Sep 22, 2016
### BiGyElLoWhAt
Perhaps optimization wasn't the best term to use, but I stated that I needed to make it peak at 10kHz and have a q factor such that it was less than 50% voltage at 3k and 30k.
8. Sep 22, 2016
### Staff: Mentor
It is not obvious to me why C1, as you say, is going to be larger than C2. Can you give a hint why this should be so?
9. Sep 22, 2016
### Aaron Crowl
Sure it's reasonable. C1 is part of a low-pass so it's going to have a really high impedance at lower frequencies because C1 dominates the voltage divider until you get to the cut-off frequency. Approximate it as an open circuit and analyze what is left at the turn-on frequency of the second stage (3kHz). We're justified in doing this by the fact that the parallel equivalent of an extremely high impedance in parallel with a low impedance branch will be approx equal to that low impedance branch.
$$\lim_{X_{1}\to\infty}X_{1}\parallel X_{2}=\lim_{X_{1}\to\infty}\frac{X_{1}X_{2}}{X_{1}+X_{2}}=X_{2}$$
The impedance of C1 should not become significantly low until you get near 30kHz. Try it, take C1 out of the circuit and analyze it with only R1,R2, and C2 at 3kHz.
Also consider the limit of this addition $\lim_{X_{2}\to 0}(X_{2} + R_{2})=R_{2}$. The value of C2 won't be significant compared to R2 at the high frequency cutoff (30kHz). It could be approximated as a short.
You could say that the low-freq cutoff is a strong function of C2 and the high-freq cutoff is a strong function of C1.
Edit: The first approximation requires that the impedance of C1 is much greater than the impedance of (R2+C2) at 3kHz. Keep that in mind when you select resistance values.
Last edited: Sep 22, 2016
10. Sep 22, 2016
### Svein
11. Sep 22, 2016
### BiGyElLoWhAt
I never realized you can't actually maximize an RC circuit. You can, however, set the values equal to the appropriate ratio and solve for the relationship. So I know that
$\omega = \frac{\sqrt{3}}{R_1C_1}$ gives me Vout~ 1/2 Vin at 30k and that $\omega = \frac{\sqrt{3}}{3R_2C_2}$ for 3k, but when I equate the two to find the frequency that they are both at unity, I get a mess.
$(R_2\omega C_2)^2 = \frac{1 +R_2^2\omega^2C_2^2}{1+R_1^2\omega^2C_1^2}$
12. Sep 22, 2016
### andrewkirk
I make out the first term in the numerator $$R_2\omega^2C_1C_2$$ to have units of $\Omega^3$, and the second term, which is
$$[C_1+C_2]R_1R_2\omega^3C_1C_2$$ to have units of $\Omega^5$.
13. Sep 23, 2016
### BiGyElLoWhAt
R is ohms, ωC is per ohm. That term is ohms · ohms^-2.
The second is as well. 1/ωC = ohm,
so ωC = 1/ohm.
14. Sep 23, 2016
### Staff: Mentor
Maybe the word is customize?
15. Sep 23, 2016
### BiGyElLoWhAt
Perhaps, I'm not sure what you would call it, honestly. Optimization was the best I could come up with at the time.
16. Sep 23, 2016
### BiGyElLoWhAt
As far as approximations go, how about this:
Their maximum would be approximately when the magnitude of the phase angle of the high pass was equal to that of the low pass; that would give me the value that satisfies $R_1 \omega C_1 = \frac{1}{R_2\omega C_2} \to \omega^2 = \frac{1}{R_1R_2C_1C_2}$. So then I think I will be able to set up a system of equations relating 2 variables in each. That might be useful if I can pick values. However, if I impose another boundary condition, would that be sufficient to solve 5 equations 5 unknowns? Do I simply need another defined point on the graph?
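As a quick numerical check of this $\omega^2 = \frac{1}{R_1R_2C_1C_2}$ estimate, here is a sketch in Python (an illustration, not part of the thread), assuming the usual topology of series R1 / shunt C1 followed by series C2 / shunt R2, with made-up component values. The loading of the second stage shifts things a little, but an interior peak does show up near the estimate:

```python
import cmath

def gain(omega, R1, C1, R2, C2):
    # Two-stage RC band-pass: series R1, shunt C1, then series C2, shunt R2.
    # The second stage loads the first: C1 sits in parallel with (C2 + R2).
    ZC1 = 1 / (1j * omega * C1)
    ZC2 = 1 / (1j * omega * C2)
    Zp = ZC1 * (ZC2 + R2) / (ZC1 + ZC2 + R2)
    v_mid = Zp / (R1 + Zp)                 # divider of the first stage
    return abs(v_mid * R2 / (R2 + ZC2))    # divider of the second stage

R1, C1 = 1e3, 10e-9     # low-pass corner ~16 kHz (made-up values)
R2, C2 = 10e3, 10e-9    # high-pass corner ~1.6 kHz
freqs = [10 ** (3 + 2 * i / 400) for i in range(401)]   # 1 kHz .. 100 kHz, log-spaced
gains = [gain(2 * cmath.pi * f, R1, C1, R2, C2) for f in freqs]
peak = max(range(len(freqs)), key=lambda i: gains[i])
print(round(freqs[peak]))   # interior peak, near 1/(2*pi*sqrt(R1*R2*C1*C2)) ~ 5 kHz
```

The gain never reaches unity, which matches the observation earlier in the thread that a passive RC stage cannot be "maximized" to 100%.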
17. Sep 23, 2016
### andrewkirk
As I understand it, you are trying to find values of $C_1,C_2,R_1,R_2$ such that, if we write the real part of the formula above as $F( C_1,C_2,R_1,R_2,\omega)$, then all the following are satisfied:
$$\frac \partial{\partial\omega}F( C_1,C_2,R_1,R_2,2\pi\times 10^4)=0$$
$$\frac {\partial^2}{\partial\omega^2}F( C_1,C_2,R_1,R_2,2\pi\times 10^4)<0$$
$$F( C_1,C_2,R_1,R_2,6\pi\times 10^3)<\frac12 F( C_1,C_2,R_1,R_2,2\pi\times 10^4)$$
$$F( C_1,C_2,R_1,R_2,60\pi\times 10^3)<\frac12 F( C_1,C_2,R_1,R_2,2\pi\times 10^4)$$
Is that correct?
18. Sep 23, 2016
### BiGyElLoWhAt
Yes.
So 4 equations, 4 unknowns. Am I understanding you correctly?
Also, thanks for the help.
19. Sep 23, 2016
### BiGyElLoWhAt
Any recommendations for a system of equation solver? I've exceeded the maximum number of characters in wolfram. I just can't type anymore.
20. Sep 23, 2016
### andrewkirk
Hmmm. I wrote a little R script, shown below, to try the formula out. It sampled each of the two resistances at twenty equidistant points over a range from $100\Omega$ to $1{,}000{,}100\Omega$, did the same for the two caps over a range from 100 pF to 1,000,100 pF, and for each combination of the four variables calculated the value of the real part of the formula (output voltage?) for frequencies from 1kHz to 30kHz at intervals of 30Hz, then tested that to see whether it had a peak - the alternative being that it is monotonic increasing or decreasing.
The result was that for none of the combinations tested was there a peak. I looked at a few sample curves and they were all monotonic decreasing, which is in line with my naive expectation that output voltage would decrease with frequency.
I could do wider search ranges or finer meshes, but it already takes 22 secs to run and with four variables, any significant widening or increased granularity of the search space could rapidly blow out the run time.
Is there something I'm missing here?
Here's the code
Code (Text):
voltom<-function(omega){
(-R_2*omega^2*C_1*C_2 + (C_1+C_2)*R_1*R_2*omega^3*C_1*C_2 - R_1*omega^2*(C_1+C_2)^2 - R_2*omega^2*C_2*(C_1+C_2)) /
((R_1*R_2*omega^2*C_1*C_2-R_1*omega*(C_1+C_2)-R_2*C_2*omega)^2 +1)
}
haspeak<-function(vec){
!(which.max(vec) %in% c(1,length(vec)))
}
omegalo<-1000
omegarange<-30000
omeganum<-1000
omega<-2*pi*(omegalo+(1:omeganum)*omegarange/omeganum)
num<-20
#capacitances expressed in picofarads (10^-12)
caplo<-100
caprange<-1000*1000
capnum<-num
C1<-C2<-(caplo+(1:capnum)*caprange/capnum)
resrange<-1000*1000
reslo<-100
resnum<-num
R1<-R2<-(reslo+(1:resnum)*resrange/resnum)
found<-FALSE
i<-1
while ((i <=capnum)&!found){
C_1<-C1[i]
j<-1
while ((j <=capnum)& !found){
C_2<-C2[j]
k<-1
while ((k <=resnum)& !found){
R_2<-R2[k]
l<-1
while ((l <=resnum)& !found){
R_1<-R1[l]  # fixed: this line originally reassigned R_2, leaving R_1 unset
found<- haspeak(voltom(omega))
l<-l+1
}
k<-k+1
}
j<-j+1
}
i<-i+1
}
https://maurobringolf.ch/2017/01/modeling-ordered-pairs-using-nothing-but-unordered-sets/
While studying for an exam in Discrete Mathematics I stumbled upon an interesting little thing in the lecture notes. In the chapter on basic set theory there was an example about ordered sets (tuples, lists, or whatever you decide to call them). It showed how to define ordered sets using nothing but unordered sets. It sounds paradoxical but is quite simple.
With unordered set I mean a mathematical set. It has no notion of order and repetitions in notation are irrelevant. For example, these notations all mean the same set containing the numbers 1, 2 and 3:

$\{1,2,3\} = \{3,1,2\} = \{1,1,2,2,3\}$
Sets are probably the most fundamental objects in mathematics. Some areas of mathematics even try to build everything upon sets, so it is worth looking at them in more detail. One of the first things that seems to be missing is the relative order of elements. To capture two elements and their order, one usually writes $(1,2)$ which denotes the ordered set containing 1 first and 2 second. This is pretty much the mathematical equivalent of an array and of course not the same as $(2,1)$, because their order is different.
There seems to be a fundamental difference between the two concepts, right? The astonishing thing is that order can be modeled without order. The notion of order introduced in ordered sets is nothing new and is already contained within the notion of an unordered set. Here is how an ordered pair of two elements $a$ and $b$ can be defined using nothing but unordered sets:

$(a,b) := \{ \{ a \}, \{ a, b \} \}$
As you can see, on the left is the ordered pair and on the right are only unordered sets. This is the standard definition for an ordered pair by Kuratowski. The thing on the right is sort of like a data structure built with things we have (unordered sets) for the abstract thing on the left we want to model (ordered pair). Given a proper system of axioms for sets, one can formally prove that this construction has exactly the same properties as an ordered pair. But for now I am satisfied with a bit of intuition for how it works. The main requirement for ordered pairs is the following property:

$(a,b) = (c,d) \iff a = c \text{ and } b = d$
This statement captures the notion of a first and a second element within the pair. If they are different and switch positions, the resulting pair is different. This is how the first element $x$ can be extracted from the ordered pair $P = (x,y)$ defined as above:

$\forall A \in P : x \in A$
The elements of $P$ are sets. One contains both elements and one contains just the first element. Only the first element is contained in both and therefore satisfies this property. That means the “first” element in $P$ can be characterized in terms of sets. Defining the second element $y$ is a little bit more involved, but can be done as well. It is the element that is in exactly one of the sets within $P$. Stated as a mathematical formula, this looks something like the following:

$\exists A \in P : y \in A \wedge \forall B, C \in P : B \neq C \rightarrow y \notin B \vee y \notin C$
In words: There is a set $A$ in $P$ such that $y$ is in $A$ and for any two different sets in $P$, $y$ is not contained in one of them. The first part says that there is at least one set that contains $y$ and the second part says that there is at most one of them. Again, only the “second” element of $P$ satisfies this condition and can therefore be identified in terms of sets.
The last thing I want to look at is what happens for ordered pairs of the form $(x,x)$ that contain the same element twice. Using the definition and removing repeated elements from set notation yields:

$(x,x) = \{ \{ x \}, \{ x, x \} \} = \{ \{ x \} \}$
The cool thing about the characterizations of the first and second element above is that they still work in this case:
• The first element is in all sets contained in $\{ \{ x \} \}$
• The second element is in exactly one set contained in $\{ \{ x \} \}$
The element $x$ satisfies both of them and is therefore the first and the second element. This is a great example of a problem that sounds impossible but has a simple solution. While this one is purely theoretical there are practical questions that are similar. One that comes to my mind is public-key cryptography: Is it possible to communicate securely over an insecure channel? The unintuitive but correct answer is yes. And I think studying “easier” problems that follow this pattern helps understanding and solving harder, more useful questions later on.
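The two characterizations translate directly into code. Here is a small Python sketch (my own illustration, not from the lecture notes) using `frozenset`, since Python sets are unordered and a set cannot contain a mutable set:

```python
def pair(a, b):
    # Kuratowski encoding of the ordered pair (a, b) as nested frozensets
    return frozenset({frozenset({a}), frozenset({a, b})})

def first(p):
    # The first element is contained in every set inside the pair
    (x,) = frozenset.intersection(*p)
    return x

def second(p):
    # The second element lies in exactly one of the sets inside the pair
    (y,) = {y for s in p for y in s if sum(y in t for t in p) == 1}
    return y

print(first(pair(1, 2)), second(pair(1, 2)))   # 1 2
print(pair(1, 2) == pair(2, 1))                # False: order matters
print(first(pair(3, 3)), second(pair(3, 3)))   # 3 3: the degenerate case works too
```

Note that `pair(3, 3)` collapses to `{{3}}` automatically, exactly as in the derivation above, and both extraction functions still return 3.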
http://jmre.ijournals.cn/en/ch/reader/view_abstract.aspx?flag=2&file_no=202102200000001&journal_id=en
A note on a problem of Sárközy and Sós
Received: February 20, 2021. Revised: February 20, 2021.
Key Words: representation function; linear form
Author: Min Tang (School of Mathematics and Statistics, Anhui Normal University; School of Mathematics and Computer Science, Anhui Normal University)
Abstract:
Let $k,\ell \geq 2$ be positive integers. Let $A$ be an infinite set of nonnegative integers. For $n\in \mathbb{N}$, let $r_{1,k,\ldots,k^{\ell-1}} (A, n)$ denote the number of solutions of $n=a_0+ka_1+\cdots +k^{\ell-1}a_{\ell-1}$, $a_0, \ldots, a_{\ell-1}\in A$. In this paper, we show that $r_{1,k,\ldots,k^{\ell-1}} (A, n)=1$ for all $n\geq 0$ if and only if $A$ is the set of all nonnegative integers whose digits in the $k^\ell$-adic expansion are smaller than $k$. This result partially answers a question of Sárközy and Sós on representation for multivariate linear forms.
https://boredofstudies.org/threads/electromagnetism-help.391444/
# Electromagnetism help
#### zacn
##### New Member
Anyone help with this question please.
#### Canteen
##### New Member
$\frac{F}{\ell}=k\frac{I^2}{d} \implies I=\sqrt{\frac{Fd}{\ell k}}\ \ \left(\textrm{where } k\textrm{ is } \frac{\mu_0}{2\pi}\right)$
So we want something that looks like $y=\sqrt{x}$, making the answer D
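A quick numeric sketch of that rearrangement (an illustration with made-up values for the current and separation; `k` here is $\mu_0/2\pi$ as in the answer above):

```python
import math

MU0 = 4 * math.pi * 1e-7          # permeability of free space, in T*m/A
k = MU0 / (2 * math.pi)           # the constant k in F/l = k*I^2/d

def force_per_length(I, d):
    # Force per unit length between parallel wires carrying equal currents I
    return k * I ** 2 / d

def current_from_force(F_per_l, d):
    # Rearranged: I = sqrt((F/l) * d / k)
    return math.sqrt(F_per_l * d / k)

I, d = 5.0, 0.1                   # made-up values: 5 A, wires 10 cm apart
F = force_per_length(I, d)
print(F)                          # about 5e-05 N/m
print(current_from_force(F, d))   # recovers ~5.0; note I grows like sqrt(F)
```

Quadrupling the force per unit length doubles the recovered current, which is exactly the $y=\sqrt{x}$ shape that selects answer D.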
https://www.tutorialspoint.com/the-centre-of-a-circle-is-2-a-a-7-find-the-values-of-a-if-the-circle-passes-through-the-point-11-9-and-has-diameter-10-sqrt-2-units
# The centre of a circle is $(2 a, a-7)$. Find the values of $a$ if the circle passes through the point $(11,-9)$ and has diameter $10 \sqrt{2}$ units.
Given:
The centre of a circle is $(2a, a – 7)$.
To do:
We have to find the values of $a$ if the circle passes through the point $(11, -9)$ and has diameter $10\sqrt2$ units.
Solution:
From the figure,
Radius of the circle $=$ Distance between the centre $C (2a, a-7)$ and the point $P (11, -9)$
We know that,
The distance between two points $(x_{1}, y_{1})$ and $(x_{2}, y_{2})=\sqrt{(x_{2}-x_{1})^{2}+(y_{2}-y_{1})^{2}}$
Radius of the circle $=\sqrt{(11-2a)^2+(-9-a+7)^2}$
$=\sqrt{(11-2a)^2+(2+a)^2}$......(i)
The length of the diameter $=10 \sqrt{2}$ units. This implies,
The length of the radius $=\frac{\text { Length of diameter }}{2}$
$=\frac{10 \sqrt{2}}{2}=5 \sqrt{2}$
Therefore,
$5 \sqrt{2}=\sqrt{(11-2 a)^{2}+(-2-a)^{2}}$
Squaring on both sides, we get,
$50=(11-2 a)^{2}+(2+a)^{2}$
$\Rightarrow 50=121+4 a^{2}-44 a+4+a^{2}+4 a$
$\Rightarrow 5 a^{2}-40 a+75=0$
$\Rightarrow a^{2}-8 a+15=0$
$\Rightarrow a^{2}-5 a-3 a+15=0$
$\Rightarrow a(a-5)-3(a-5)=0$
$\Rightarrow(a-5)(a-3)=0$
$\therefore a=3,5$
Hence, the required values of $a$ are 5 and 3.
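Both roots can be checked numerically: each candidate centre should sit at distance $5\sqrt{2}$ from $(11, -9)$. A small Python check (an illustration, not part of the original solution):

```python
import math

def distance(p, q):
    # Distance formula between two points
    return math.hypot(p[0] - q[0], p[1] - q[1])

radius = 10 * math.sqrt(2) / 2    # diameter 10*sqrt(2), so radius 5*sqrt(2)

for a in (3, 5):
    centre = (2 * a, a - 7)       # (6, -4) for a=3 and (10, -2) for a=5
    print(a, distance(centre, (11, -9)))   # both distances equal 5*sqrt(2) ~ 7.0711
```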
Updated on 10-Oct-2022 13:28:51
https://lectures.quantecon.org/jl/kalman.html
How to read this lecture...
Code should execute sequentially if run in a Jupyter notebook
# A First Look at the Kalman Filter¶
## Overview¶
This lecture provides a simple and intuitive introduction to the Kalman filter, for those who either
• have heard of the Kalman filter but don’t know how it works, or
• know the Kalman filter equations, but don’t know where they come from
For additional (more advanced) reading on the Kalman filter, see
The second reference presents a comprehensive treatment of the Kalman filter
Required knowledge: Familiarity with matrix manipulations, multivariate normal distributions, covariance matrices, etc.
## The Basic Idea¶
The Kalman filter has many applications in economics, but for now let’s pretend that we are rocket scientists
A missile has been launched from country Y and our mission is to track it
Let $$x \in \mathbb{R}^2$$ denote the current location of the missile—a pair indicating latitude-longitude coordinates on a map
At the present moment in time, the precise location $$x$$ is unknown, but we do have some beliefs about $$x$$
One way to summarize our knowledge is a point prediction $$\hat x$$
• But what if the President wants to know the probability that the missile is currently over the Sea of Japan?
• Then it is better to summarize our initial beliefs with a bivariate probability density $$p$$
• $$\int_E p(x)dx$$ indicates the probability that we attach to the missile being in region $$E$$
The density $$p$$ is called our prior for the random variable $$x$$
To keep things tractable in our example, we assume that our prior is Gaussian. In particular, we take
(1)$p = N(\hat x, \Sigma)$
where $$\hat x$$ is the mean of the distribution and $$\Sigma$$ is a $$2 \times 2$$ covariance matrix. In our simulations, we will suppose that
(2)$\begin{split}\hat x = \left( \begin{array}{c} 0.2 \\ -0.2 \end{array} \right), \qquad \Sigma = \left( \begin{array}{cc} 0.4 & 0.3 \\ 0.3 & 0.45 \end{array} \right)\end{split}$
This density $$p(x)$$ is shown below as a contour map, with the center of the red ellipse being equal to $$\hat x$$
### The Filtering Step¶
We are now presented with some good news and some bad news
The good news is that the missile has been located by our sensors, which report that the current location is $$y = (2.3, -1.9)$$
The next figure shows the original prior $$p(x)$$ and the new reported location $$y$$
The bad news is that our sensors are imprecise.
In particular, we should interpret the output of our sensor not as $$y=x$$, but rather as
(3)$y = G x + v, \quad \text{where} \quad v \sim N(0, R)$
Here $$G$$ and $$R$$ are $$2 \times 2$$ matrices with $$R$$ positive definite. Both are assumed known, and the noise term $$v$$ is assumed to be independent of $$x$$
How then should we combine our prior $$p(x) = N(\hat x, \Sigma)$$ and this new information $$y$$ to improve our understanding of the location of the missile?
As you may have guessed, the answer is to use Bayes’ theorem, which tells us to update our prior $$p(x)$$ to $$p(x \,|\, y)$$ via
$p(x \,|\, y) = \frac{p(y \,|\, x) \, p(x)} {p(y)}$
where $$p(y) = \int p(y \,|\, x) \, p(x) dx$$
In solving for $$p(x \,|\, y)$$, we observe that
• $$p(x) = N(\hat x, \Sigma)$$
• In view of (3), the conditional density $$p(y \,|\, x)$$ is $$N(Gx, R)$$
• $$p(y)$$ does not depend on $$x$$, and enters into the calculations only as a normalizing constant
Because we are in a linear and Gaussian framework, the updated density can be computed by calculating population linear regressions
In particular, the solution is known [1] to be
$p(x \,|\, y) = N(\hat x^F, \Sigma^F)$
where
(4)$\hat x^F := \hat x + \Sigma G' (G \Sigma G' + R)^{-1}(y - G \hat x) \quad \text{and} \quad \Sigma^F := \Sigma - \Sigma G' (G \Sigma G' + R)^{-1} G \Sigma$
Here $$\Sigma G' (G \Sigma G' + R)^{-1}$$ is the matrix of population regression coefficients of the hidden object $$x - \hat x$$ on the surprise $$y - G \hat x$$
This new density $$p(x \,|\, y) = N(\hat x^F, \Sigma^F)$$ is shown in the next figure via contour lines and the color map
The original density is left in as contour lines for comparison
Our new density twists the prior $$p(x)$$ in a direction determined by the new information $$y - G \hat x$$
In generating the figure, we set $$G$$ to the identity matrix and $$R = 0.5 \Sigma$$ for $$\Sigma$$ defined in (2)
(The code for generating this and the preceding figures can be found in the file gaussian_contours.jl)
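To make (4) concrete, here is a scalar special case in Python (all quantities 1×1 and $G = 1$; the numbers are made up, not the lecture's): the update pulls the prior mean toward the measurement by the regression coefficient $\Sigma/(\Sigma + R)$ and always shrinks the variance.

```python
def filter_update(x_hat, Sigma, y, R):
    # Scalar special case of (4) with G = 1:
    #   x_hat_F = x_hat + Sigma/(Sigma + R) * (y - x_hat)
    #   Sigma_F = Sigma - Sigma^2/(Sigma + R)
    gain = Sigma / (Sigma + R)    # regression coefficient of x - x_hat on y - x_hat
    return x_hat + gain * (y - x_hat), Sigma - gain * Sigma

x_hat, Sigma = 0.2, 0.4           # prior mean and variance (made-up scalars)
y, R = 2.3, 0.2                   # measurement and its noise variance
x_F, Sigma_F = filter_update(x_hat, Sigma, y, R)
print(x_F, Sigma_F)               # mean pulled toward y, variance shrunk: ~1.6, ~0.133
```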
### The Forecast Step¶
What have we achieved so far?
We have obtained probabilities for the current location of the state (missile) given prior and current information
This is called “filtering” rather than forecasting, because we are filtering out noise rather than looking into the future
• $$p(x \,|\, y) = N(\hat x^F, \Sigma^F)$$ is called the filtering distribution
But now let’s suppose that we are given another task: to predict the location of the missile after one unit of time (whatever that may be) has elapsed
To do this we need a model of how the state evolves
Let’s suppose that we have one, and that it’s linear and Gaussian. In particular,
(5)$x_{t+1} = A x_t + w_{t+1}, \quad \text{where} \quad w_t \sim N(0, Q)$
Our aim is to combine this law of motion and our current distribution $$p(x \,|\, y) = N(\hat x^F, \Sigma^F)$$ to come up with a new predictive distribution for the location in one unit of time
In view of (5), all we have to do is introduce a random vector $$x^F \sim N(\hat x^F, \Sigma^F)$$ and work out the distribution of $$A x^F + w$$ where $$w$$ is independent of $$x^F$$ and has distribution $$N(0, Q)$$
Since linear combinations of Gaussians are Gaussian, $$A x^F + w$$ is Gaussian
Elementary calculations and the expressions in (4) tell us that
$\mathbb{E} [A x^F + w] = A \mathbb{E} x^F + \mathbb{E} w = A \hat x^F = A \hat x + A \Sigma G' (G \Sigma G' + R)^{-1}(y - G \hat x)$
and
$\operatorname{Var} [A x^F + w] = A \operatorname{Var}[x^F] A' + Q = A \Sigma^F A' + Q = A \Sigma A' - A \Sigma G' (G \Sigma G' + R)^{-1} G \Sigma A' + Q$
The matrix $$A \Sigma G' (G \Sigma G' + R)^{-1}$$ is often written as $$K_{\Sigma}$$ and called the Kalman gain
• The subscript $$\Sigma$$ has been added to remind us that $$K_{\Sigma}$$ depends on $$\Sigma$$, but not $$y$$ or $$\hat x$$
Using this notation, we can summarize our results as follows
Our updated prediction is the density $$N(\hat x_{new}, \Sigma_{new})$$ where
(6)\begin{split}\begin{aligned} \hat x_{new} &:= A \hat x + K_{\Sigma} (y - G \hat x) \\ \Sigma_{new} &:= A \Sigma A' - K_{\Sigma} G \Sigma A' + Q \nonumber \end{aligned}\end{split}
• The density $$p_{new}(x) = N(\hat x_{new}, \Sigma_{new})$$ is called the predictive distribution
The predictive distribution is the new density shown in the following figure, where the update has used parameters
$\begin{split}A = \left( \begin{array}{cc} 1.2 & 0.0 \\ 0.0 & -0.2 \end{array} \right), \qquad Q = 0.3 * \Sigma\end{split}$
### The Recursive Procedure¶
Let’s look back at what we’ve done
We started the current period with a prior $$p(x)$$ for the location $$x$$ of the missile
We then used the current measurement $$y$$ to update to $$p(x \,|\, y)$$
Finally, we used the law of motion (5) for $$\{x_t\}$$ to update to $$p_{new}(x)$$
If we now step into the next period, we are ready to go round again, taking $$p_{new}(x)$$ as the current prior
Swapping notation $$p_t(x)$$ for $$p(x)$$ and $$p_{t+1}(x)$$ for $$p_{new}(x)$$, the full recursive procedure is:
1. Start the current period with prior $$p_t(x) = N(\hat x_t, \Sigma_t)$$
2. Observe current measurement $$y_t$$
3. Compute the filtering distribution $$p_t(x \,|\, y) = N(\hat x_t^F, \Sigma_t^F)$$ from $$p_t(x)$$ and $$y_t$$, applying Bayes rule and the conditional distribution (3)
4. Compute the predictive distribution $$p_{t+1}(x) = N(\hat x_{t+1}, \Sigma_{t+1})$$ from the filtering distribution and (5)
5. Increment $$t$$ by one and go to step 1
Repeating (6), the dynamics for $$\hat x_t$$ and $$\Sigma_t$$ are as follows
(7)\begin{split}\begin{aligned} \hat x_{t+1} &= A \hat x_t + K_{\Sigma_t} (y_t - G \hat x_t) \\ \Sigma_{t+1} &= A \Sigma_t A' - K_{\Sigma_t} G \Sigma_t A' + Q \nonumber \end{aligned}\end{split}
These are the standard dynamic equations for the Kalman filter (see, for example, [LS12], page 58)
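As a sanity check of (7), here is a scalar Python rendition (not the lecture's Julia code), run on the constant-state setup of Exercise 1 where $A = G = 1$ and $Q = 0$; the posterior variance collapses like $1/(t+1)$ and the estimate homes in on the true state:

```python
import random

def kalman_step(x_hat, Sigma, y, A, G, Q, R):
    # Scalar version of the recursion (7)
    K = A * Sigma * G / (G * Sigma * G + R)       # Kalman gain K_Sigma
    x_hat_new = A * x_hat + K * (y - G * x_hat)
    Sigma_new = A * Sigma * A - K * G * Sigma * A + Q
    return x_hat_new, Sigma_new

random.seed(42)
A, G, Q, R = 1.0, 1.0, 0.0, 1.0   # constant hidden state, as in Exercise 1
theta = 10.0                      # the true state, unknown to the filter
x_hat, Sigma = 8.0, 1.0           # prior mean and variance
for t in range(200):
    y = theta + random.gauss(0, 1)                # noisy measurement
    x_hat, Sigma = kalman_step(x_hat, Sigma, y, A, G, Q, R)
print(x_hat, Sigma)               # x_hat near 10, Sigma = 1/201: shrinks like 1/(t+1)
```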
## Convergence¶
The matrix $$\Sigma_t$$ is a measure of the uncertainty of our prediction $$\hat x_t$$ of $$x_t$$
Apart from special cases, this uncertainty will never be fully resolved, regardless of how much time elapses
One reason is that our prediction $$\hat x_t$$ is made based on information available at $$t-1$$, not $$t$$
Even if we know the precise value of $$x_{t-1}$$ (which we don’t), the transition equation (5) implies that $$x_t = A x_{t-1} + w_t$$
Since the shock $$w_t$$ is not observable at $$t-1$$, any time $$t-1$$ prediction of $$x_t$$ will incur some error (unless $$w_t$$ is degenerate)
However, it is certainly possible that $$\Sigma_t$$ converges to a constant matrix as $$t \to \infty$$
To study this topic, let’s expand the second equation in (7):
(8)$\Sigma_{t+1} = A \Sigma_t A' - A \Sigma_t G' (G \Sigma_t G' + R)^{-1} G \Sigma_t A' + Q$
This is a nonlinear difference equation in $$\Sigma_t$$
A fixed point of (8) is a constant matrix $$\Sigma$$ such that
(9)$\Sigma = A \Sigma A' - A \Sigma G' (G \Sigma G' + R)^{-1} G \Sigma A' + Q$
Equation (8) is known as a discrete time Riccati difference equation
Equation (9) is known as a discrete time algebraic Riccati equation
Conditions under which a fixed point exists and the sequence $$\{\Sigma_t\}$$ converges to it are discussed in [AHMS96] and [AM05], chapter 4
A sufficient (but not necessary) condition is that all the eigenvalues $$\lambda_i$$ of $$A$$ satisfy $$|\lambda_i| < 1$$ (cf. e.g., [AM05], p. 77)
(This strong condition assures that the unconditional distribution of $$x_t$$ converges as $$t \rightarrow + \infty$$)
In this case, for any initial choice of $$\Sigma_0$$ that is both nonnegative and symmetric, the sequence $$\{\Sigma_t\}$$ in (8) converges to a nonnegative symmetric matrix $$\Sigma$$ that solves (9)
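The convergence claim is easy to see numerically in the scalar case (a sketch with $G = 1$, everything 1×1, and made-up coefficients satisfying $|A| < 1$): iterating (8) from an arbitrary nonnegative start settles on the fixed point of (9).

```python
def riccati_step(Sigma, A, R, Q):
    # Scalar version of (8) with G = 1:
    #   Sigma' = A^2*Sigma - A^2*Sigma^2/(Sigma + R) + Q
    return A * A * Sigma - (A * Sigma) ** 2 / (Sigma + R) + Q

A, R, Q = 0.9, 0.5, 0.3           # made-up coefficients with |A| < 1
Sigma = 0.9                       # arbitrary nonnegative starting value
for _ in range(200):
    Sigma = riccati_step(Sigma, A, R, Q)
print(Sigma)                      # ~0.5031, the fixed point of the scalar (9)
```

At the limit, one more iteration leaves $\Sigma$ unchanged to machine precision, which is exactly the fixed-point condition (9).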
## Implementation¶
The type Kalman from the QuantEcon.jl package implements the Kalman filter
• Instance data:
• The parameters $$A, G, Q, R$$ of a given model
• the moments $$(\hat x_t, \Sigma_t)$$ of the current prior
• The type Kalman from the QuantEcon.jl package has a number of methods, some that we will wait to use until we study more advanced applications in subsequent lectures
• Methods pertinent for this lecture are:
• prior_to_filtered, which updates $$(\hat x_t, \Sigma_t)$$ to $$(\hat x_t^F, \Sigma_t^F)$$
• filtered_to_forecast, which updates the filtering distribution to the predictive distribution – which becomes the new prior $$(\hat x_{t+1}, \Sigma_{t+1})$$
• update, which combines the last two methods
• stationary_values, which computes the solution to (9) and the corresponding (stationary) Kalman gain
You can view the program on GitHub
## Exercises¶
### Exercise 1¶
Consider the following simple application of the Kalman filter, loosely based on [LS12], section 2.9.2
Suppose that
• all variables are scalars
• the hidden state $$\{x_t\}$$ is in fact constant, equal to some $$\theta \in \mathbb{R}$$ unknown to the modeler
State dynamics are therefore given by (5) with $$A=1$$, $$Q=0$$ and $$x_0 = \theta$$
The measurement equation is $$y_t = \theta + v_t$$ where $$v_t$$ is $$N(0,1)$$ and iid
The task of this exercise to simulate the model and, using the code from kalman.jl, plot the first five predictive densities $$p_t(x) = N(\hat x_t, \Sigma_t)$$
As shown in [LS12], sections 2.9.1–2.9.2, these distributions asymptotically put all mass on the unknown value $$\theta$$
In the simulation, take $$\theta = 10$$, $$\hat x_0 = 8$$ and $$\Sigma_0 = 1$$
Your figure should – modulo randomness – look something like this
### Exercise 2¶
The preceding figure gives some support to the idea that probability mass converges to $$\theta$$
To get a better idea, choose a small $$\epsilon > 0$$ and calculate
$z_t := 1 - \int_{\theta - \epsilon}^{\theta + \epsilon} p_t(x) dx$
for $$t = 0, 1, 2, \ldots, T$$
Plot $$z_t$$ against $$T$$, setting $$\epsilon = 0.1$$ and $$T = 600$$
Your figure should show the error erratically declining, something like this
### Exercise 3¶
As discussed above, if the shock sequence $$\{w_t\}$$ is not degenerate, then it is not in general possible to predict $$x_t$$ without error at time $$t-1$$ (and this would be the case even if we could observe $$x_{t-1}$$)
Let’s now compare the prediction $$\hat x_t$$ made by the Kalman filter against a competitor who is allowed to observe $$x_{t-1}$$
This competitor will use the conditional expectation $$\mathbb E[ x_t \,|\, x_{t-1}]$$, which in this case is $$A x_{t-1}$$
The conditional expectation is known to be the optimal prediction method in terms of minimizing mean squared error
(More precisely, the minimizer of $$\mathbb E \, \| x_t - g(x_{t-1}) \|^2$$ with respect to $$g$$ is $$g^*(x_{t-1}) := \mathbb E[ x_t \,|\, x_{t-1}]$$)
Thus we are comparing the Kalman filter against a competitor who has more information (in the sense of being able to observe the latent state) and behaves optimally in terms of minimizing squared error
Our horse race will be assessed in terms of squared error
In particular, your task is to generate a graph plotting observations of both $$\| x_t - A x_{t-1} \|^2$$ and $$\| x_t - \hat x_t \|^2$$ against $$t$$ for $$t = 1, \ldots, 50$$
For the parameters, set $$G = I, R = 0.5 I$$ and $$Q = 0.3 I$$, where $$I$$ is the $$2 \times 2$$ identity
Set
$\begin{split}A = \left( \begin{array}{cc} 0.5 & 0.4 \\ 0.6 & 0.3 \end{array} \right)\end{split}$
To initialize the prior density, set
$\begin{split}\Sigma_0 = \left( \begin{array}{cc} 0.9 & 0.3 \\ 0.3 & 0.9 \end{array} \right)\end{split}$
and $$\hat x_0 = (8, 8)$$
Finally, set $$x_0 = (0, 0)$$
You should end up with a figure similar to the following (modulo randomness)
Observe how, after an initial learning period, the Kalman filter performs quite well, even relative to the competitor who predicts optimally with knowledge of the latent state
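There is also a simple benchmark for the level the competitor's error should fluctuate around: since $$x_t - A x_{t-1} = w_t$$, the competitor's squared error is just $$\| w_t \|^2$$, whose mean is $$\mathrm{trace}(Q) = 0.6$$ when $$Q = 0.3 I$$. A quick Monte Carlo sanity check (Python here for illustration; the lecture's code is Julia):

```python
import math
import random

def mc_competitor_mse(q=0.3, dim=2, n=100_000, seed=0):
    """Average squared error of the conditional-expectation predictor.
    Because x_t - A x_{t-1} = w_t with w_t ~ N(0, q*I), the mean squared
    error is E||w_t||^2 = q * dim = trace(Q)."""
    rng = random.Random(seed)
    s = math.sqrt(q)
    total = 0.0
    for _ in range(n):
        w = [rng.gauss(0.0, s) for _ in range(dim)]
        total += sum(wi * wi for wi in w)
    return total / n

mse = mc_competitor_mse()
print(round(mse, 3))  # close to trace(Q) = 0.6
```

The Kalman filter's error in the figure eventually hovers near this same irreducible level, which is the point of the exercise.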
### Exercise 4
Try varying the coefficient $$0.3$$ in $$Q = 0.3 I$$ up and down
Observe how the diagonal values in the stationary solution $$\Sigma$$ (see (9)) increase and decrease in line with this coefficient
The interpretation is that more randomness in the law of motion for $$x_t$$ causes more (permanent) uncertainty in prediction
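The stationary $$\Sigma$$ referred to in (9) solves the discrete Riccati equation $$\Sigma = A \Sigma A' - A \Sigma G'(G \Sigma G' + R)^{-1} G \Sigma A' + Q$$, and fixed-point iteration on that map is one way to see the effect of varying $$Q$$. A plain-Python sketch (2×2 case only, $$G = I$$; all helper names are mine, not QuantEcon's):

```python
def mat_mul(a, b):
    # 2x2 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(a, b, sign=1.0):
    # entrywise a + sign*b
    return [[a[i][j] + sign * b[i][j] for j in range(2)] for i in range(2)]

def mat_t(a):
    # transpose
    return [[a[j][i] for j in range(2)] for i in range(2)]

def mat_inv(a):
    # 2x2 inverse via the adjugate
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [[ a[1][1] / det, -a[0][1] / det],
            [-a[1][0] / det,  a[0][0] / det]]

def stationary_sigma(A, Q, R, S, tol=1e-12, max_iter=100_000):
    # Iterate the Riccati map (with G = I):
    #   Sigma' = A (Sigma - Sigma (Sigma + R)^{-1} Sigma) A' + Q
    for _ in range(max_iter):
        inner = mat_mul(mat_mul(S, mat_inv(mat_add(S, R))), S)
        S_new = mat_add(mat_mul(mat_mul(A, mat_add(S, inner, sign=-1.0)), mat_t(A)), Q)
        if max(abs(S_new[i][j] - S[i][j]) for i in range(2) for j in range(2)) < tol:
            return S_new
        S = S_new
    return S

A = [[0.5, 0.4], [0.6, 0.3]]
R = [[0.5, 0.0], [0.0, 0.5]]
S0 = [[0.9, 0.3], [0.3, 0.9]]
S_03 = stationary_sigma(A, [[0.3, 0.0], [0.0, 0.3]], R, S0)  # Q = 0.3 I
S_01 = stationary_sigma(A, [[0.1, 0.0], [0.0, 0.1]], R, S0)  # Q = 0.1 I
print(S_03)  # ≈ [[0.4033, 0.1051], [0.1051, 0.4106]], matching the solutions' output
```

Comparing `S_03` with `S_01` shows the diagonal entries rising with the coefficient on $$Q$$, which is exactly the effect the exercise asks you to observe.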
## Solutions
using QuantEcon, LaTeXStrings
using Plots
pyplot()
### Exercise 1
import Distributions: Normal, pdf
# == Parameters == #
theta = 10
A, G, Q, R = 1.0, 1.0, 0.0, 1.0
x_hat_0, Sigma_0 = 8.0, 1.0
# == Initialize Kalman filter == #
kalman = Kalman(A, G, Q, R)
set_state!(kalman, x_hat_0, Sigma_0)
# == Run == #
N = 5
xgrid = linspace(theta - 5, theta + 2, 200)
densities = []
labels = []
for i=1:N
# Record the current predicted mean and variance, and plot their densities
m, v = kalman.cur_x_hat, kalman.cur_sigma
push!(densities, pdf(Normal(m, sqrt(v)), xgrid))
push!(labels, LaTeXString("\$t=$i\$"))
# Generate the noisy signal
y = theta + randn()
# Update the Kalman filter
update!(kalman, y)
end
plot(xgrid, densities, label=reshape(labels, 1, length(labels)), legend=:topleft,
     grid=false, title=LaTeXString("First $N densities when \$\\theta = $theta\$"))
### Exercise 2
srand(42)  # reproducible results
epsilon = 0.1
kalman = Kalman(A, G, Q, R)
set_state!(kalman, x_hat_0, Sigma_0)
nodes, weights = qnwlege(21, theta - epsilon, theta + epsilon)
T = 600
z = Array{Float64}(T)
for t = 1:T
    # Record the current predicted mean and variance
    m, v = kalman.cur_x_hat, kalman.cur_sigma
    dist = Normal(m, sqrt(v))
    integral = do_quad(x -> pdf(dist, x), nodes, weights)
    z[t] = 1. - integral
    # Generate the noisy signal and update the Kalman filter
    update!(kalman, theta + randn())
end
plot(1:T, z, fillrange=0, color=:blue, fillalpha=0.2, grid=false,
     legend=false, xlims=(0, T), ylims=(0, 1))
### Exercise 3
import Distributions: MultivariateNormal, rand
srand(41)  # reproducible results
# === Define A, Q, G, R === #
G = eye(2)
R = 0.5 .* G
A = [0.5 0.4
     0.6 0.3]
Q = 0.3 .* G
# === Define the prior density === #
Sigma = [0.9 0.3
         0.3 0.9]
x_hat = [8, 8]''
# === Initialize the Kalman filter === #
kn = Kalman(A, G, Q, R)
set_state!(kn, x_hat, Sigma)
# === Set the true initial value of the state === #
x = zeros(2)
# == Print eigenvalues of A == #
println("Eigenvalues of A:\n$(eigvals(A))")
# == Print stationary Sigma == #
S, K = stationary_values(kn)
println("Stationary prediction error variance:\n\$S")
# === Generate the plot === #
T = 50
e1 = Array{Float64}(T)
e2 = Array{Float64}(T)
for t=1:T
# == Generate signal and update prediction == #
dist = MultivariateNormal(G*x, R)
y = rand(dist)
update!(kn, y)
# == Update state and record error == #
Ax = A * x
x = rand(MultivariateNormal(Ax, Q))
e1[t] = sum((x - kn.cur_x_hat).^2)
e2[t] = sum((x - Ax).^2)
end
plot(1:T, e1, color=:black, linewidth=2, alpha=0.6, label="Kalman filter error", grid=false)
plot!(1:T, e2, color=:green, linewidth=2, alpha=0.6, label="conditional expectation error")
Eigenvalues of A:
[0.9,-0.1]
Stationary prediction error variance:
[0.403291 0.105072; 0.105072 0.410617]
Footnotes
[1] See, for example, page 93 of [Bis06]. To get from his expressions to the ones used above, you will also need to apply the Woodbury matrix identity.
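The Woodbury matrix identity mentioned in the footnote states $$(A + UCV)^{-1} = A^{-1} - A^{-1}U(C^{-1} + VA^{-1}U)^{-1}VA^{-1}$$. A small numeric check in the scalar case, where every term is just a number (Python for illustration):

```python
def woodbury_scalar(a, u, c, v):
    """Right-hand side of the Woodbury identity with scalar a, u, c, v:
    (a + u*c*v)^{-1} = a^{-1} - a^{-1} u (c^{-1} + v a^{-1} u)^{-1} v a^{-1}."""
    a_inv = 1.0 / a
    inner = 1.0 / (1.0 / c + v * a_inv * u)
    return a_inv - a_inv * u * inner * v * a_inv

lhs = 1.0 / (2.0 + 3.0 * 0.5 * 4.0)        # direct inverse with a=2, u=3, c=0.5, v=4
rhs = woodbury_scalar(2.0, 3.0, 0.5, 4.0)  # Woodbury form
print(lhs, rhs)  # both 0.125
```

In the matrix case the same rearrangement lets one trade an inversion in the state dimension for one in the (often much smaller) observation dimension, which is the use Bishop makes of it.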
# Journal of Operator Theory
Volume 68, Issue 1, Summer 2012 pp. 19-66.
Coactions of Hopf $C^*$-bimodules
Authors Thomas Timmermann
Author institution: FB 10 Mathematisches Institut, Westfaelische Wilhelms-Universitaet, Einsteinstrasse 62, 48149 Muenster, Germany
Summary: Coactions of Hopf $C^{*}$-bimodules simultaneously generalize coactions of Hopf $C^{*}$-algebras and actions of groupoids. Following an approach of Baaj and Skandalis, we construct reduced crossed products and establish a duality for fine coactions. Examples of coactions arise from Fell bundles on groupoids and actions of a groupoid on bundles of $C^{*}$-algebras. Continuous Fell bundles on an étale groupoid correspond to coactions of the reduced groupoid algebra, and actions of a groupoid on a continuous bundle of $C^{*}$-algebras correspond to coactions of the function algebra.
Future fields of employment that will grow rapidly
theb2
What fields in the future do you see blowing up? Will all the automated truck drivers and grocery clerks become software engineers?
Security cam watcher.
kyphysics, Tom.G and theb2
kyphysics
What fields in the future do you see blowing up? Will all the automated truck drivers and grocery clerks become software engineers?
NO WAY do I think there will ever be legalized automated truck drivers. Maybe self-driving cars, but even then I think the moral questions about accidents and who's at fault could stall that for decades, if not longer. Just a personal opinion.
But with trucks? NO way...they are too large. Unless your technology is perfect, I just feel the legal questions of fault over a potentially disastrous accident would be too great with large motor vehicles. That thing could do massive damage.
I think as more of humans' basic necessities are met, perhaps more emphasis will be had in society on enjoying life more and/or exploring philosophical questions or a greater desire for things like art. So much of our basic lives are spent in survival mode. Work to pay the bills, so we have:
food, clothing, shelter, and healthcare
Let's say you've got all that in a society. What do you seek out and/or do with your life? I think things:
entertainment
leisure and pleasure (sports, amusement parks, food experiences, etc.)
knowledge (pursuing "useless degrees" - I think this is a relative term - in art, literature, philosophy, etc.) art and "beautification" services
...all could be focal points in society and areas of economic growth. It's interesting to imagine a world where all of one's basic necessities are met already and where a society would go from there. I see broadly:
enjoyment + meaning
being things that people focus on.
Last edited:
On the contrary, I actually think that trucks are probably the mode of transportation (along with rail and subway trains) most likely to be effectively automated, given their relatively fixed routes, which mitigates the sheer volume of data the machine learning algorithm needs to adapt while driving (although questions regarding safety may temper adoption, at least in the immediate term).
However, unlike yourself, I'm not convinced that developments in technology alone will necessarily move us toward the meeting of all basic necessities of what you speak of. For that to take place, changes in basic governance to ensure that the basic needs are met for all citizens need to be made (e.g. experimenting with guaranteed annual income, negative income tax, etc.) After all, much of the rise in the standard of living that we in the Western world has enjoyed are as much of a result of social welfare systems being instituted by democratic governments (often under pressure from activists, unions, and the broader electorate) in concert with the developing economy.
Otherwise, I'm afraid that developments in current technology in a laissez-faire economic system like the US will lead to a great exacerbation of already present inequalities.
russ_watters
kyphysics
I could be biased, having been hit by a dump truck in an auto accident a few years ago (lots of physical therapy followed). It is scary seeing a large vehicle like that come at you.
I've read that they could have designated automated driving lanes on roads and that would make me feel a bit safer. Even better would be those designated automated driving lanes with big barriers between them and the regular lanes. But mixing automated driving vehicles with human drivers in the same lanes does make me feel uneasy. I just don't know if we'll ever get to a point where those self-driving cars (big and small) can be smart enough and be able to make judgments about things like humans can to be mixed in the same lanes.
Oh, I agree! I should have qualified my comments earlier with something like IF we ever reach that stage...
IF we have a society where technology and morality are advanced enough (I think of Star Trek's universe where people don't care about money anymore and they have the technology to meet basic human needs and use their time and technology for discovery...), I could see "economics" (if it exists) maybe more focused on human enjoyment and meaning (like people pursuing knowledge for knowledge's sake...getting those "useless degrees" some people criticize nowadays...I don't think they're useless...it's just that most humans lives on Earth - especially outside of the U.S. and modernized nations - is overwhelmingly focused on practical survival, which can leave little room for sitting around to contemplate philosophy questions or study ancient art and literature, etc. ...but in some society where survival didn't occupy such a huge part of our daily existence, those other things seem like they'd take on greater interest...just speculation on my part).
What I fear could get in the way are evil, selfish people who don't want to use our resources and improving technologies for this type of Star Trekian world, but rather to take more for themselves by any means necessary.
One of the biggest growth professions is in health care. Boomers are retiring and over the next decade or two we will need more nurses and other patient care workers. Even today there is a nurse shortage. see http://money.cnn.com/2018/04/30/news/economy/nursing-school-rejections/index.html
There are currently about three million nurses in the United States. The country will need to produce more than one million new registered nurses by 2022 to fulfill its health care needs, according to the American Nurses Association estimates.
That's a problem.
In 2017, nursing schools turned away more than 56,000 qualified applicants from undergraduate nursing programs. Going back a decade, nursing schools have annually rejected around 30,000 applicants who met admissions requirements, according to the American Association of Colleges of Nursing.
"Some of these applicants graduated high school top of their class with a 3.5 GPA or higher," said Rosseter. "But the competition to get into a nursing school right now is so intense."
Because of the lack of openings, nursing programs across the board -- in community colleges to undergraduate and graduate schools -- are rejecting students in droves.
This is the current need and it will increase into the 2040's. Of course along with this will be a need for more extended care facilities. One can also foresee attempts to use technology to create a less costly and/or more efficient health care system.
russ_watters
kyphysics
I do wonder...will we need more or less (or same) health care professionals if the U.S. got universal healthcare?
On the one hand, people would have better preventative care and you'd see perhaps less people with "built-up" illnesses and problems. On the other hand, maybe lots more people will have affordable access and start using services more.
Healthcare jobs will always be around as long as humans are mortal and can be sick in any way shape or form. Safe bet.
Welding jobs are in such high demand that many drug addicts (who are starting fresh and getting clean) are being trained to do them. You can get paid something like $90K doing manual labor in this field. Not enough people have the qualifications and specialized training. Yet, there's a big shortage. Even ex-felons who are trained can get jobs immediately. Demand is so high! Truck driving is another in-demand field. It's hard work staying awake to make time-sensitive routes and can sometimes be dangerous (Google drivers falling asleep and crashing and lack of protection laws), but it's a job that you can make good money off of (over$100,000 in some areas of driving) without a college education.
Computer programming is one where you can make good money too via what are known as coding bootcamps. You need no prior experience. My cousin currently attends one in NYC and starting salaries are \$100,000 + (not much for NYC, granted) for less than a year of training. I've been learning a little on my own on the side...just for fun, though.
Things may change 5-10 years later, but these three are currently pretty good jobs in high-demand.
Anything AI related
russ_watters
Anything AI related
For "truck drivers and grocery clerks"? I don't think so...
There will be a great need of IT security 'adepts' too, but that's also not a job what fits for those who can be replaced with robots or basic AIs.
For "truck drivers and grocery clerks"? I don't think so...
In the future, there won't be any, because AI will take over.
In the future, there won't be any, because AI will take over.
Yeah. So the 'liberated' masses will need some kind of jobs. As I see that's what theb2 asked about
Mentor
Old people tend to need more healthcare services than young people. <cough> The proportion of old people in the population is increasing. So is the demand for in-home care services, so that hopefully fewer old people need to move to special facilities (assisted-care and nursing homes), or at least more of them can postpone it.
Unfortunately, at the moment it seems to me that in-home health care aides tend to be at or near the bottom of the totem pole when it comes to pay, status, etc., among medical workers. How long will it be before robots can help people dress, bathe, go to the toilet, etc.?
Mentor
Yeah. So the 'liberated' masses will need some kind of jobs. As I see that's what theb2 asked about
I'm not sure the intent was that strictly limited, but even if it was, the answer is still relevant. Remember, there are two basic types of people who may be affected by a technology shift like this:
1. Current truck drivers/clerks, etc. who will be displaced.
2. Kids who have not yet selected a field of employment.
Grocery clerk is not a career, so nobody should be aspiring to make it one, so I don't think that needs to be addressed. People make careers of truck driving though, so kids should take into account the potential AI takeover of it when selecting a career.
Mike Rowe turned Dirty Jobs into advocacy of the idea that too many people go to college and get useless degrees and not enough go to tech schools and become tradespeople. I think he's correct...Today. That may not be the case 20 years from now in general and definitely certain individual fields will be greatly affected.
Suyash Singh
I am studying right now so i think i know some things
Robotics and computing surely will continue to grow (especially the new fields )
Cyber security
Doctors
Weapon development and army
Sex robot makers and prostitutes
Entertainment field is ever growing with increasing number of main actors, plus their bodyguards , and all their crew
Check out the list of most common jobs in the US and see what you think what jobs will be affected by technology in the coming decades and the effect on the general employment in the country.
https://www.ranker.com/list/most-common-jobs-in-america/american-jobs
Note that retail sales and cashiers come in at 1 and 2 while the first computer/software jobs come in at 47,58,59
kyphysics
Unfortunately, at the moment it seems to me that in-home health care aides tend to be at or near the bottom of the totem pole when it comes to pay, status, etc., among medical workers. How long will it be before robots can help people dress, bathe, go to the toilet, etc.?
Definitely a tough job too. My great granduncle's wife essentially does all this "medical assistance" stuff for my GGU. He's unable to do a lot of basic stuff like even getting up to shower. She has to physically help lift him up and help him shower/bathe. They save money by not hiring help, but she spends a good portion of time and energy taking care of him every day.
Beyond robotic help, maybe advances in medicine and neuroscience can reduce some of the degenerative effects of aging.
At least with life-expectancy, we've come a long way. Look at the early 1900's to the early 2000's. People are living to 80 years old and beyond regularly now. Who knows what will change in the next 50-100 years. Will we get 125 or 150 year old people regularly?
kyphysics
In the future, there won't be any, because AI will take over.
Any sources/references for AI definitely taking over truck driving?
I hear stuff like this all the time, but am skeptical. How close to certain are you guys on this one?
kyphysics
Something this thread got me thinking about is the social aspect of life going forward in the age of AI.
With robots possibly taking over so much of everyday needs and replacing humans, will there be a kind of increased social need of humans? Granted, we don't necessarily get our social needs fulfilled talking to our waiter or grocery store clerk, etc. But, imagine having lots and lots of human jobs replaced by robots and not physically being near such people anymore. Would there be a surge in "social interaction" economies/industries to replace the displaced human interactions we're used to having?
Suyash Singh
Question
# The chart below shows data about the number of employees at PepdiCO, a popular beverage company.

|                 | 2012  | 2013  | 2014  |
|-----------------|-------|-------|-------|
| Total Employees | 1,670 | 1,890 | 2,110 |
| Percent Male    | 65%   | 60%   | 55%   |
| Percent Female  | 35%   | 40%   | 45%   |

If the number of male and female employees decreases and increases by $$5\%$$ respectively, approximately how many male employees will work at PepdiCO in $$2015$$?
A
1,515
B
1,398
C
1,282
D
1,165
Solution
## The correct option is D: $$1,165$$
From the table, total employees were $$1,670$$ in $$2012$$, $$1,890$$ in $$2013$$, and $$2,110$$ in $$2014$$.
Difference between $$2012$$ and $$2013$$: $$1890-1670=220$$. Difference between $$2013$$ and $$2014$$: $$2110-1890=220$$.
Extrapolating the trend, total employees in $$2015 = 2110+220=2330$$.
As given in the question, the male and female shares decrease and increase by $$5\%$$ respectively, so the male share falls from $$55\%$$ in $$2014$$ to $$55-5=50\%$$ in $$2015$$.
Male employees in $$2015 = \dfrac{50}{100}\times 2330=1165$$
# Solved: Determine the volume of 0.150 M NaOH solution required to neutralize each sample
ISBN: 9780321910295
## Solution for problem 89P Chapter 13
Introductory Chemistry | 5th Edition
Problem 89P
Determine the volume of 0.150 M NaOH solution required to neutralize each sample of hydrochloric acid. The neutralization reaction is:
$$\mathrm {NaOH}(aq)+\mathrm {HCl}(aq) \longrightarrow \mathrm{H_2O}(l)+\mathrm{NaCl}(aq)$$
(a) 25 mL of a 0.150 M HCl solution
(b) 55 mL of a 0.055 M HCl solution
(c) 175 mL of a 0.885 M HCl solution
Step-by-Step Solution:
Step 1 of 8
Solution: Here, we are going to calculate the volume of NaOH solution required to neutralize the given HCl solutions.
Neutralization is the reaction between an acid and a base to give salt and water. The neutralization reaction in the given problem is:
$$\mathrm {NaOH}(aq)+\mathrm {HCl}(aq) \longrightarrow \mathrm{H_2O}(l)+\mathrm{NaCl}(aq)$$
Here, 1 mole of HCl requires 1 mole of NaOH for complete neutralization.
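Since NaOH and HCl react 1:1, each required volume follows from the dilution relation M1·V1 = M2·V2, i.e. V(NaOH) = M(HCl)·V(HCl)/M(NaOH). The arithmetic for all three parts (Python used here just for the calculation; function name is mine):

```python
def naoh_volume_ml(hcl_molarity, hcl_volume_ml, naoh_molarity=0.150):
    """Volume of 0.150 M NaOH needed for 1:1 neutralization: M1*V1 = M2*V2."""
    return hcl_molarity * hcl_volume_ml / naoh_molarity

# (a) 25 mL of 0.150 M HCl, (b) 55 mL of 0.055 M HCl, (c) 175 mL of 0.885 M HCl
for m, v in [(0.150, 25), (0.055, 55), (0.885, 175)]:
    print(round(naoh_volume_ml(m, v), 1), "mL")
# → 25.0 mL, 20.2 mL, 1032.5 mL (about 1.03 L)
```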
4) Solve problems:
C. The mass of an atom of hydrogen is 1.008 u. What is the mass of 18 atoms of hydrogen?
## 1 Answer
Given : mass of hydrogen = 1.008 u
To find : mass of 18 atoms of hydrogen
Solution :
mass of 18 atoms of hydrogen = $$18 \times$$ mass of one hydrogen atom
$$= 18 \times 1.008\,u = 18.144\,u$$
$$\therefore$$ the mass of 18 atoms of hydrogen is 18.144 u
The speed of an object undergoing constant acceleration increases from 8.0 m/s to 12.0 m/s in 10.0 seconds.
Question:
The speed of an object undergoing constant acceleration increases from 8.0 m/s to 12.0 m/s in 10.0 seconds. How far does the object travel during the 10.0 seconds?
4 m
100 m
800 m
10 m
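One way to check the options: under constant acceleration, distance equals average velocity times time. A quick sketch (added here, not part of the original page):

```python
# For constant acceleration, distance = average velocity * time.
v0, v1, t = 8.0, 12.0, 10.0  # initial speed (m/s), final speed (m/s), time (s)
distance = (v0 + v1) / 2 * t
print(distance)  # 100.0 (metres), matching the "100 m" option
```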
|
2022-08-16 13:23:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5860190987586975, "perplexity": 1348.7176115633213}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572304.13/warc/CC-MAIN-20220816120802-20220816150802-00644.warc.gz"}
|
https://www.qb365.in/materials/stateboard/10th-standard-science-public-exam-march-official-model-question-paper-2019-2909.html
|
10th Public Official Model Question 2019
10th Standard
Reg.No. :
Science
Time : 02:15:00 Hrs
Total Marks : 75
15 x 1 = 15
1. In a pea plant, the yellow colour of the seed dominates over the green colour. The genetic make up of the green colour of the seed can be shown as __________
(a)
GG
(b)
Gg
(c)
Yy
(d)
yy
2. The sensory organs contain ______________________.
(a)
Unipolar neuron
(b)
Bipolar neuron
(c)
Multipolar neuron
(d)
Medullated neuron
3. Thin walled non-motile spores are called_____
(a)
spirogyra
(b)
aplanospore
(c)
zoospore
(d)
akinete
4. Select important characteristic features of mammals
(a)
four-chambered heart
(b)
fore-limbs and hind limbs
(c)
milk-producing glands
(d)
post anal tail
5. The special root-like structures of plant parasites such as Cuscuta and Viscum are called ......................
(a)
Rhizoids
(b)
Haustoria
(c)
Hyphae
(d)
Stolons
6. The particles in various forms are visible only under an ultramicroscope. A solution containing such particles is called __________.
(a)
true solution
(b)
colloidal solution
7. .......................... have equal number of neutrons.
(a)
Isobars
(b)
Isotones
(c)
Isotopes
(d)
Mass Numbers
8. An example for weak acid is ________
(a)
nitric acid
(b)
acetic acid
(c)
sulphuric acid
9. The third period contains elements. Out of these elements, how many are non-metals?
(a)
8
(b)
5
10. The enzyme which dissociates glucose into ethanol is________
(a)
sucrose
(b)
amylase
(c)
maltose
(d)
zymase
11. Screw Gauge is an instrument used to measure the dimensions of very small objects up to ____________.
(a)
0.1 cm
(b)
0.01 cm
(c)
0.1 mm
(d)
0.01 mm
12. Chandrayaan-1 was launched by the _________ in October 2008 from Sriharikota in Andhra Pradesh.
(a)
ISRO
(b)
NASA
(c)
ESA
13. The potential difference required to pass a current 0.2 A in a wire of resistance 20 ohm is _________.
(a)
100V
(b)
4V
(c)
0.01V
(d)
40V
14. The positive electrode of voltaic cell is_______
(a)
copper
(b)
zinc
(c)
dilute sulphuric acid
15. The phenomenon of producing an emf in a circuit whenever the magnetic flux linked with a coil changes is________.
(a)
electromagnetic induction
(b)
inducing current
(c)
inducing voltage
(d)
change in current
20 x 2 = 40
17. A change that affects the body cell is not inherited.However, a change in the gamete is inherited. The effects of radiation at Hiroshima have been affecting generations.
Analyze the above statements and give your interpretation.
18. What is Germinal Variation?
19. A health worker advises the people in a locality not to have tattooing done using common needles and to insist that the barber change the shaving razors/blades in the salon. Name the dreadful disease, the spreading of which can be prevented by following these measures. Also mention other preventive measures that can be taken with regard to this disease.
20. Tuberculosis and Typhoid are bacterial diseases. A short rod-shaped bacterium with numerous flagella, Salmonella typhi, causes Tuberculosis. Is this statement correct? If not, correct it.
21. What is the function of the endocrine system in man?
22. Give any two examples for each of the following cases where dispersal of fruits and seeds take place :
(i) by birds (through excreta) (ii) by human beings
23. In balsam plant the seeds fall off far away from the mother plant.
a) Is this statement correct or incorrect?
b) Give reason.
24. Mention the two unique characteristics of mammals.
25. What are the four compositions of circulatory system of man?
26. In human beings, air enters into the body through ___________ and moves into __________. In fishes, water enters into the body through __________ and the dissolved oxygen diffuses into ___________.
27. Classify the following into producers, consumers, decomposers.
i) butterfly ii) grasshopper iii) calotes iv) snakes v) shoe flower vi) nitrobacteria
28. What is pollution?
29. Hydrogen is found to be a good choice among the alternative fuels. Why?
1.Hydrogen can meet all the energy needs of human society, including power generation more efficiently and more economically than petrol fuels and in total compatibility with the environment.
2.Hydrogen is non - toxic, reasonably safe to handle, distribute and to be used as a fuel.
30. Radha prepared a solution which could be separated by filtration.
i) Name the type of solution.
ii) Is the solution transparent or opaque?
iii) Mention the nature of the solution.
iv) Mention the size of the solute particle.
31. What are the factors that affect solubility?
32. Complete the table given below:
Element       Atomic Mass   Molecular Mass   Atomicity
Oxygen        16            32               __
Hydrogen      1             __               2
Phosphorous   40            __               4
33. Observe the given chemical change and answer the following:
i) Identify ‘A’ and ‘B’.
ii) Write the commercial name of calcium hydroxide.
iii) Identify products ‘C’ and 'D' , when HCl is allowed to react with calcium oxide.
iv) Say whether calcium oxide is acidic or basic.
34. Redox reactions are reactions during which electron transfer takes place. Here the magnesium atom transfers two electrons, one each to the two chlorine atoms.
i) What are the products of this reaction?
ii) Write the balanced equation for the complete reaction.
iii) Which element is being oxidized?
iv) Which element is being reduced?
v) Write the reduction part of the reaction.
35. Correct the given equation if it is wrong.
i) Metallic oxide + Acid $\longrightarrow$ Salt + Water
ii) Metal carbonate + Acid $\longrightarrow$ Salt + Water + Carbon dioxide
iii) Metal + Acid $\longrightarrow$ Salt + Carbon dioxide
iv) Metallic oxide + Water $\longrightarrow$ Salt + Oxygen
36. Coating the surface of iron with other metal prevents it from rusting. If it is coated with a thin layer of zinc, it is called _______ .
(galvanization / painting / cathodic protection)
37. 'A' is a metal used to build airplanes and other industrial parts. 'A' reacts with strong caustic alkalies and forms 'B' along with hydrogen. Identify A and B.
38. Read each description given below and say whether it fits ethanol or ethanoic acid.
i) It is a clear liquid with a burning taste.
ii) It is used to preserve biological specimens in laboratories.
iii) It is used to preserve food and fruit juices.
iv) On cooling, it is frozen to form ice flakes which look like a glacier.
39. Reason and Assertion
Assertion :
Functional group is responsible for the characteristic properties of the compounds.
Reason : The chemical properties of organic compounds are determined by functional groups.
Does the reason satisfy the given assertion?
40. When sodium salt of A is heated with soda lime (solid mixture of 3 parts of NaOH and 1 part of CaO) B gas is formed.
i) Identify A and B
ii) Write the corresponding chemical equation
41. Match the items in group A with the items in group B:
Sl.No.   Group-A            Group-B
1.       Small dimensions   Kilometre
2.       Large dimensions   Screw gauge
3.       Long distance      Scale
4.       Small distance     Lightyear
                            Altimeter
42. If an angel visits an asteroid called B 612, which has a radius of 20 m and a mass of $10^4$ kg, what will be the acceleration due to gravity on B 612?
43. Observe the diagram and write the answer.
a) When a gun is fired, which law acts on it?
b) What is the action and reaction in this case?
c) Since the gun has a much greater mass than the bullet, what will be the acceleration of the gun?
44. Write any five achievements of Chandrayaan - I
45. Fuse wire is made up of an alloy of ------- which has high resistance and ------- melting point.
46. Define electric power.
47. Light which is incident on a flat surface makes an angle of 15° with the surface.
i) What is the angle of incidence?
ii) What is the angle of reflection?
iii) Find the angle of deviation.
48. The focal length of a concave lens is 2m. Calculate the power of the lens.
4 x 5 = 20
50. Human evolution has undergone a record of changes during the past 15 million years.
i) Name the different species of mankind in chronological order from primitive to modern man.
ii) When were the primitive caves developed?
iii) Narrate the life led by early man like hominids.
51. What is known as pollination? List out biotic and abiotic factors which are involved in pollination?
52. With a suitable diagram, describe the structure and functions of the human heart.
53. Observe the picture given below and find out what type of energy is produced
i) Identify whether this energy is conventional or nonconventional.
ii) Draw the given diagram and label it with the parts given below:(battery, battery charger controller, solar incidence, DC load, battery system)
iii) In the given picture, _______energy is transformed into ______energy.
54. (a) List out the differences between atoms and molecules.
(b) Find the number of moles in copper containing $12.046 \times 10^{23}$ molecules.
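Part (b) above reduces to dividing the particle count by Avogadro's number. A quick cross-check (a sketch, not part of the original paper; the rounded textbook value 6.023 × 10²³ is assumed):

```python
# Number of moles n = N / N_A.
N = 12.046e23   # particles given in the problem
N_A = 6.023e23  # Avogadro's number (common textbook rounding; an assumption here)
print(round(N / N_A, 6))  # 2.0
```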
55. Homologous series predict the properties of the members of the series. Justify this statement through its characteristics.
56. Explain Newton's second law of motion with an example.
57. The effective resistance of three resistors connected in parallel is 60/47 Ω. When one wire breaks, the effective resistance becomes 15/8 Ω. Find the resistance of the wire that is broken.
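The last problem can be cross-checked numerically (a sketch, not part of the original paper): in a parallel combination, conductances (1/R) add, so the broken wire's conductance is the effective conductance before the break (from 60/47 ohm) minus the one after it (from 15/8 ohm).

```python
from fractions import Fraction

# In parallel, conductances add. The broken wire's conductance is the
# effective conductance before the break minus the one after it.
g_before = 1 / Fraction(60, 47)  # effective resistance 60/47 ohm -> 47/60 S
g_after = 1 / Fraction(15, 8)    # effective resistance 15/8 ohm  -> 8/15 S
r_broken = 1 / (g_before - g_after)
print(r_broken)  # 4  (ohms)
```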
|
2019-03-23 07:41:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.442545086145401, "perplexity": 4201.972033953137}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202728.21/warc/CC-MAIN-20190323060839-20190323082839-00536.warc.gz"}
|
https://learn.careers360.com/ncert/question-find-the-sum-of-the-odd-numbers-between-0-and-50/
|
# Q : 14 Find the sum of the odd numbers between $\small 0$ and $\small 50$.
The odd numbers between 0 and 50 are
1, 3, 5, ..., 49
This is an AP with $a = 1$ and $d = 2$
There are a total of 25 odd numbers between 0 and 50, so $n = 25$
Now, we know that
$S_n= \frac{n}{2}\left \{ 2a+(n-1)d \right \}$
$\Rightarrow S_{25}= \frac{25}{2}\left \{ 2\times 1+(25-1)2 \right \}$
$\Rightarrow S_{25}= \frac{25}{2}\left \{ 2+48 \right \}$
$\Rightarrow S_{25}= \frac{25}{2}\times 50$
$\Rightarrow S_{25}= 25 \times 25 = 625$
Therefore, the sum of the odd numbers between $\small 0$ and $\small 50$ is 625
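The formula result can be cross-checked against a direct summation (a quick sketch, not part of the original solution):

```python
# Sum of the odd numbers between 0 and 50, by direct summation and by the
# AP formula S_n = n/2 * (2a + (n - 1)d).
odds = list(range(1, 50, 2))       # 1, 3, 5, ..., 49
n, a, d = len(odds), 1, 2          # n = 25
s_formula = n * (2 * a + (n - 1) * d) // 2
print(sum(odds), s_formula)  # 625 625
```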
|
2020-06-07 07:39:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7547736763954163, "perplexity": 1141.389196070674}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348523564.99/warc/CC-MAIN-20200607044626-20200607074626-00331.warc.gz"}
|
https://physics.stackexchange.com/questions/300551/how-can-wifi-penetrate-through-walls-when-visible-light-cant
|
# How can wifi penetrate through walls when visible light can't?
I did search for this question on Physics S.E., considering it would have been asked previously. I found this: How come Wifi signals can go through walls, and bodies, by kitchen-microwaves only penetrate a few centimeters through absorbing surfaces? But in that question, the answers are with respect to, or in comparison with, microwaves, their absorption and certain other things. I didn't find a general answer to the question.
So the question is - wifi or radio waves reach us through concrete walls. They also reach us through the ceiling (if someone is using it in the flat above ours). Even through the air they travel a long way, bending around corners or doors. Now I would not compare them to microwaves (because I don't want the answer in terms of the properties of the material, but in terms of physics). Visible light, which is so much more powerful than them, can't penetrate black opaque paper, let alone walls. The same is true for gamma rays (penetration through a very thick wall).
So why, then, are radio waves, being so much less powerful than light waves, able to travel through walls?
There should be a general concept as to why radio waves are able to pass through walls but microwaves or light waves cannot! A linked question is why sound travels much faster in solids (walls) but is not audible through them, though it is through air.
After reading @BillN's answer, it would be really helpful if anyone could explain it in terms of molecular resonance, crystalline structure, or electrical conductivity, and how these properties cause this behaviour.
• One cannot answer your question without talking about properties of the materials. Just consider that light doesn't travel through concrete but does travel through glass. – Diracology Dec 23 '16 at 20:49
• @Diracology Yes, I understand that. Since I did not know, I just wanted to ask whether it is only due to properties of the material or whether it also has a stronger theoretical basis – Shashaank Dec 23 '16 at 20:54
• It's important to understand in a lot of this stuff that microwaves (which wifi is) essentially can't get through walls: they are hugely attenuated. But wifi devices have both very sensitive receivers and a great mass of error-correcting codes which makes them able to hear very tiny signals. GPS receivers are a good example (albeit the signal there is so tiny they generally won't work indoors). – tfb Dec 25 '16 at 13:50
• I think it's the same way they pass through windows. – JEB Jun 3 '18 at 0:06
Different molecules and different crystalline structures have frequency dependent absorption/reflection/transmission properties. In general, light in the human visible range can travel with little absorption through glass, but not through brick. UV can travel well through plastic, but not through silicate-based glass. Radio waves can travel through brick and glass, but not well through a metal box. Each of these differences has a slightly different answer, but each answer is based on molecular resonance or crystalline structure (or lack thereof) or electrical conductivity.
Bottom line: There isn't one general answer for why $\lambda_A$ goes through material X but $\lambda_B$ doesn't.
• Thanks for the answer. Could you please also explain how molecular resonance or electrical conductivity can explain this particular case of radio waves penetrating through walls but not light, so that I can get a general idea as to what happens in this process. – Shashaank Dec 23 '16 at 21:29
• @Shashaank, you might find this helpful: Interaction of Radiation with Matter – Alfred Centauri Dec 23 '16 at 22:06
• @AlfredCentauri Thanks. Yes, I found it really helpful – Shashaank Dec 24 '16 at 5:59
The way light, radio waves or microwaves interact with matter is through electromagnetic interaction with the microscopic charged particles. Different types of excitation can happen with these charges depending on the energy of the photons constituting the radiation. With increasing energy the radiation can cause molecular rotations, molecular vibrations, electronic polarization, electronic excitation, ionization, atomic excitation and so on.
Wifi operates at microwave frequencies, and these can only excite rotations or perhaps vibrations in the molecules. In the process of penetrating the material and interacting with the molecules, the microwave loses energy as heat. In general, however, these losses are small and the microwave can penetrate a long distance into the material.
Light on the other hand interacts with matter via electronic excitation or electronic polarization. There is a quite general theory that describe the electrons in solids called band theory. According to it the electrons have energy levels distributed along energy bands with the range of a few electron volts. Moreover, these bands are separated by "forbidden levels" called band gaps. For conductors the last band (valence band) is only partially filled whereas for insulators it is completely filled. This fact is crucial to the electric and optical properties of the material.
Given the frequency $\nu$ of the photon, its energy can be calculated from $$E=h\nu.$$ In particular, the photons composing visible light have energies approximately between $1.8\, \mathrm{eV}$ (red light) and $3.1\, \mathrm{eV}$ (violet light). If you shine light on a material with a band gap of less than $1.8\, \mathrm{eV}$, then every photon is able to excite an electron from the valence band to the conduction band. The electrons then re-emit these photons and the overall effect is that the material is opaque. On the other hand, if the material has a band gap greater than $3.1\, \mathrm{eV}$, no photon (in the visible) can be absorbed. The material is then transparent to light, such as glass. There is also absorption of light in transparent materials through electronic polarization, so a very thick glass transmits less light.
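The eV figures quoted above follow from $E = h\nu = hc/\lambda$; here is a quick sketch with rounded constants (added for illustration, not part of the original answer):

```python
# Photon energy E = h*nu = h*c/lambda, converted to electron volts.
H = 6.626e-34   # Planck constant in J*s
C = 3.0e8       # speed of light in m/s (rounded)
EV = 1.602e-19  # joules per electron volt

def photon_energy_ev(wavelength_m):
    """Energy of a photon of the given wavelength, in eV."""
    return H * C / wavelength_m / EV

red = photon_energy_ev(700e-9)     # ~1.8 eV (red edge of the visible range)
violet = photon_energy_ev(400e-9)  # ~3.1 eV (violet edge)
# A band gap below ~1.8 eV absorbs all visible photons (opaque);
# a band gap above ~3.1 eV absorbs none of them (transparent, e.g. glass).
print(round(red, 2), round(violet, 2))
```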
If you keep increasing the energy of the photon, let us say to the ultraviolet regime, then even for glass there will be valence band-conduction band transitions and the glass is as opaque to UV as wood is to visible light.
• So, in general, it is about how radiation interacts with matter. For a particular substance a low-energy (low-frequency) wave may not lose its energy, but a high-energy wave may lose all of its energy. Can you please name a substance through which infrared waves pass but UV waves can't – Shashaank Dec 25 '16 at 7:11
|
2019-01-20 12:59:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5313610434532166, "perplexity": 513.4850579505272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583716358.66/warc/CC-MAIN-20190120123138-20190120145138-00129.warc.gz"}
|
https://mitpress.mit.edu/books/going-alone
|
Hardcover | $14.75 Short | £12.95 | 592 pp. | 6 x 9 in | 21 illus. | August 2002 | ISBN: 9780262025218
Paperback | $35.00 X | £27.95 | 592 pp. | 6 x 9 in | 21 illus. | December 2002 | ISBN: 9780262513760
## Going Alone
The Case for Relaxed Reciprocity in Freeing Trade
## Overview
Since the end of World War II, the freeing of trade has been most visible in reciprocal liberalization agreements negotiated under the General Agreement on Tariffs and Trade, or GATT, and through increasing bilateral and plurilateral agreements. There has also, however, been a significant, if less visible, unilateral freeing of trade by several nations.
This book, based on a research project directed by Jagdish Bhagwati, examines the experiences with such unilateral trade liberalization. Part 1 considers historical experiences, following Britain’s unilateral embrace of free trade. Part 2 discusses recent examples, and Part 3 discusses unilateral liberalization in specific sectors. The substantive introduction provides a synthesis of the findings as well as theoretical support. It argues that although unilateral freeing of trade is generally less beneficial than reciprocity, it can trigger "sequential" reciprocity through example or by encouraging lobbies abroad to favor trade expansion.
|
2017-02-22 05:23:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24395398795604706, "perplexity": 12284.859711781271}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170884.63/warc/CC-MAIN-20170219104610-00581-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://iwaponline.com/wpt/article/11/2/245/20706/Assessing-the-impacts-of-water-on-industry-the
|
Industrial water use accounts for 22% of global water consumption and for as much as 60% in high-income countries. Thus, it is a considerable factor when dealing with global water issues. This paper assesses the impacts of water on the brewing industry using the case example of Asia Pacific Breweries (APB). By deriving a true cost of water (TCW) via a three-step approach involving (1) desalination, (2) secondary treatment and (3) reclamation, it reflects all water-related risks, from physical availability to reputational, environmental, and legal risks, in the context of a value-based management framework that embeds sustainability in the company's operational conduct. An Excel-based sensitivity model takes into consideration all relevant drivers, allowing for a detailed cost breakdown for each of the three steps. The resulting transparency into what drives the TCW showed that the main components are energy, carbon, and capital costs. As derivatives on all three components exist, they were used to construct a hypothetical water option by replicating its pay-off as a weighted average of energy and carbon derivatives. At APB, water accounted for merely 0.4% of total operating costs, but drove 335 times its cost in revenues. The resulting biomass from secondary treatment was also considered for energy recovery through anaerobic digestion and thermal hydrolysis, which reduced secondary treatment energy costs by 56%. The application of discriminatory pricing for industrial end-users of water was therefore concluded to be a viable option. Still, the demand-price elasticity of the company's products needs to be considered when applying said option to avoid the passing on of costs to consumers. A practical approach is considered involving a revenues per m³ of water over the TCW metric to determine the extent of discriminatory pricing and temporary tax breaks, so that costs are not passed on to consumers.
## INTRODUCTION
Freshwater and energy constitute two of the most essential and inseparable commodities for human life on earth. Industry, as the spine of modern society, however, places significant demands on both resources (Gude et al. 2010), as it constitutes 22% of global water demand (UNESCO 2003b) and even up to 59% of demand in 'high-income countries' (WBCSD 2009). The idiom 'no water, no business' illustrates all too clearly the inherent interdependence between industry and water (WBCSD 2009). As water is often underpriced, it is seldom adequately accounted for in production processes and product pricing. This leads to unsustainable water usage which takes its toll on the environment and eventually societal wellbeing. In light of population, economic and industrial growth, water, which was long treated as a ubiquitous and abundant resource, is now increasingly becoming the constraining factor to the very growth it is fueling (Veolia Water 2010). This ouroboros relationship necessitates a thorough understanding of what creates value and what drives costs. Therefore, this paper aims to provide a framework to assess the role of water for the brewing industry from a management and sustainability perspective. A value-based management (VBM) evaluation framework is applied using the case example of Asia Pacific Breweries (APB). By linking all the relevant drivers with the three pillars of sustainable development (SD) in an Excel-based sensitivity analysis, the True Cost of Water (TCW) is calculated to reflect all water-related risks whilst considering relationships such as the energy-water nexus and energy recovery from anaerobic digestion.
The TCW is a figure that takes into consideration all risk factors associated with water and converts them into a cost figure. Conceptually, it assumes that the cost of absolute water scarcity is the cost obtained by sourcing and treating water via a three-step approach: (1) sourcing water through reverse osmosis (RO) desalination, (2) treating the process effluent via conventional secondary treatment (e.g. activated sludge (AS)) and (3) keeping it in the loop using micro/ultra filtration (MF/UF), RO and finally ultraviolet disinfection (UV). This approach stipulates that a physical lack of water can be replaced by the cost of desalination and a closed-loop water treatment system. This makes the TCW a function of various cost inputs ranging from energy to capital costs. This paper focuses on the most volatile and significant cost components of the TCW, reflecting the impacts of changes in its drivers so as to give directional implications as to its development. It also shows that water risks can be quantified purely in financial terms. This illustrates that the TCW is a good proxy to estimate the impacts of water on industry and water risks in general. Overall, the costs considered are energy costs, capital costs, the costs of carbon as an environmental externality, labour, maintenance, and chemical costs. An analysis of global data on desalination plants (Desaldata.com 2011) yielded that there is already a tendency to replace water scarcity with the cost of desalination. Interviews with industry representatives at the TUAS Desalination plant in Singapore revealed that desalination is in fact APB's main source of water (Siew 2011), thus, validating the viability and real world applicability of the three-step approach used to calculate the TCW.
## WATER-RISKS AND INDUSTRY
Temporary water shortages, or even mere constraints, can already disrupt production and cause substantial losses, growth constraints, and eventually bankruptcy. Should the availability of water or its sourcing and treatment costs be subject to sudden supply and price shocks, not accounting for such developments may pose a significant business risk. Thus, a thorough understanding of what drives water costs under all conditions is not only required to effectively manage water risks on an industrial level, but also to manage water efficiently and effectively as an industrial resource and commodity. Industry should also be concerned with the underlying carbon footprint of its water consumption, as well as the socio-economic impacts thereof. Water consumption and subsequent wastewater treatment volumes therefore need to be understood in terms of their true costs, which should also include a monetized greenhouse gas carbon equivalent value per m³ of water used. It is then only a matter of time before a closed-loop water management system becomes the next inevitable step water-dependent industries will have to take. The sludge from secondary treatment not only poses a considerable cost factor but is also significant in terms of its carbon footprint and environmental impacts. Hence, anaerobic digestion and thermal hydrolysis-based energy recovery was also considered when assessing the overall energy and carbon footprints and the TCW.
### VBM
APB Singapore was analysed in terms of its theoretical water consumption, wastewater strength, and carbon emissions. The resulting costs were then imposed onto its cost and profit structures to reveal the impacts of water on its operations. The data used were taken directly from APB's 2010 Annual Report (APB 2010). The analysis was conducted using VBM. VBM provides a management approach that takes into account all relevant factors industries need to consider when looking to embed sustainability within their business. The three pillars of sustainability, i.e. economics, environment, and society, are broken down into six factors: efficiency, costs, carbon, environmental health, socio-political, and socio-economic factors. At the same time these factors are directly interlinked with the industry-specific technology the respective company employs. Additionally, a fully functioning and sustainable industry needs to operate within the constraints of the underlying legal framework whilst adhering to the requirements imposed by it. VBM lets companies monitor, measure, and manage all relevant factors accordingly, whilst also providing a comprehensive driver model for sensitivity analyses and impact assessment. See Figure 1 for a generalized conceptual depiction of a VBM framework. In this paper, the VBM framework was used to calculate the TCW based on the six factors mentioned above.
Figure 1
SD pyramid.
### Value-driver tree approach
In order to account for the various drivers and parameters involved in the derivation of the TCW, a driver-tree model interlinking all drivers with each other as well as with the defined key performance indicators (KPIs) was used. The KPIs defined were kWh/m³ and S$/m³ as well as the cost of carbon per m³ in S$. The driver tree enabled the modelling of KPI sensitivities with respect to the driving factors behind them. The processes of RO-treatment, for example, were modelled in trees to reflect energy consumption per m³ as a function of salinity measured in total dissolved solids (TDS) as well as capacity in m³ to reflect the scale economies of higher capacities. (See Figure 2.) The approach was used for each of the three steps to derive the TCW.
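For illustration, the roll-up from leaf drivers to the S$/m³ KPI and a simple sensitivity run can be sketched as follows. All plant figures below are hypothetical, and the toy energy function merely mimics the salinity and capacity dependencies described above; it is not the paper's fitted model.

```python
# Illustrative driver-tree sketch (hypothetical figures, not APB data):
# leaf drivers roll up into the KPIs kWh/m3, S$/m3 and carbon cost per m3.

def energy_kwh_per_m3(salinity_ppm, capacity_m3_day):
    """Toy energy model: higher salinity raises, larger capacity lowers, kWh/m3."""
    return 3.0 + 4.0e-5 * salinity_ppm - 4.0e-7 * capacity_m3_day

def cost_per_m3(salinity_ppm, capacity_m3_day,
                energy_price=0.10,      # S$/kWh (base case in the paper)
                carbon_price=20.6,      # S$/tonne CO2e (base case in the paper)
                grid_factor=0.545,      # tonne CO2e per MWh of grid electricity
                capital_other=1.5):     # S$/m3, hypothetical lump for capex etc.
    kwh = energy_kwh_per_m3(salinity_ppm, capacity_m3_day)
    energy_cost = kwh * energy_price
    carbon_cost = kwh / 1000 * grid_factor * carbon_price
    return energy_cost + carbon_cost + capital_other

base = cost_per_m3(35_000, 100_000)
shock = cost_per_m3(35_000, 100_000, energy_price=0.20)  # energy price doubles
print(f"base S$/m3 = {base:.2f}, energy shock S$/m3 = {shock:.2f}")
```

Bumping a single leaf driver (here the energy price) and recomputing the KPI is exactly the kind of sensitivity the driver tree enables.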
Figure 2
High level illustration of driver-tree approach.
The framework is applicable to both municipalities and industrial end users. Municipalities can estimate their KPIs based on assumptions pertaining to biochemical oxygen demand (BOD), chemical oxygen demand (COD), Nitrogen (N), Phosphorous (P) and suspended solids (SS) per population equivalent (PE) per day. This may vary according to location, e.g. while BOD is estimated to be 60 grams in most parts of Europe it is about 70 grams per PE in the USA (Tölgyessy 1993). Thus, the accuracy of the estimation of the TCW is highly dependent on the accuracy of the respective input parameters and limited to the explanatory power of average values.
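The per-PE loading estimates above translate directly into plant loads. A minimal sketch, using the cited 60 g (Europe) and roughly 70 g (USA) of BOD per PE per day:

```python
# Per-capita loading estimate: 60 g BOD per PE per day (Europe) vs ~70 g (USA),
# per the figures cited from Tölgyessy (1993).
def daily_bod_load_kg(population_equivalent, grams_per_pe_day=60):
    return population_equivalent * grams_per_pe_day / 1000.0

print(daily_bod_load_kg(100_000))       # kg BOD/day, European figure
print(daily_bod_load_kg(100_000, 70))   # kg BOD/day, US figure
```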
As higher levels of rain and infiltration, together with a weaker BOD loading, influence the cost per unit of BOD by increasing non-BOD-related removal costs such as pumping and aeration, equalization chambers are assumed as per the Singaporean example so as to ensure a stable loading rate with a constant flow. This is to avoid diversion of costs to units not related to BOD removal (EPA Water Quality Office 1971). The parameters hydraulic retention time (HRT), solids retention time (SRT), and mixed liquor suspended solids (MLSS) were held constant and assumed to be functioning at steady-state equilibrium with optimum efficiency for a conventional AS plant. Dissolved oxygen (DO) and the food-to-microorganisms ratio (F:M ratio) were assumed to be above their critical values, allowing for efficiently working, cost-effective processes; i.e. a DO concentration of at least 2.0 mg/l (Davies 2005) with an F:M ratio of 0.2–0.4 kg BOD/kg MLVSS (Bitton 1998). Potential model extensions could allow for a variable HRT, SRT, and MLSS, reflecting how they would change in accordance with variations in the variable parameters flow (Q), organic loading (BOD, COD) and nutrients (N and P) so as to uphold the ideal equilibrium. This would allow for a higher complexity and accuracy of the model.
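The F:M assumption can be checked with the standard formula F:M = Q·S₀/(V·X); the flow, BOD, volume and MLVSS figures below are hypothetical and serve only to illustrate the 0.2–0.4 operating window:

```python
# F:M ratio check for the steady-state assumption (standard formula
# F:M = Q*S0 / (V*X)); all numbers below are hypothetical.
def fm_ratio(flow_m3_d, bod_mg_l, volume_m3, mlvss_mg_l):
    # mg/l cancels between numerator and denominator, leaving kg/kg/day
    return (flow_m3_d * bod_mg_l) / (volume_m3 * mlvss_mg_l)

fm = fm_ratio(flow_m3_d=10_000, bod_mg_l=250, volume_m3=4_000, mlvss_mg_l=2_500)
print(f"F:M = {fm:.2f} kg BOD per kg MLVSS per day")
assert 0.2 <= fm <= 0.4  # within the efficient operating window cited above
```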
The pumping costs of water represent a significant, non-negligible fraction of the costs which, however, vary significantly depending on the type of pumps and the volume pumped (U.S. Department of Energy 1999), as well as pipe material (Rawlings & Sykulski 1999) and diameter (Doyle & Parsons 2002), and the distances and slopes involved. Moreover, the infrastructure costs (i.e. building a distribution network) also constitute a considerable fraction of costs which, however, were not modelled in this approach. The aim at this stage was simply to identify the major cost constituents driving the cost of water and to model these based on a steady-state equilibrium of optimally functioning processes. Nonetheless, every single step required in the process was modelled using data from literature and industry to derive a unique function yielding a cost figure per m³ of water treated (i.e. oxygen and energy requirements for BOD and nitrogen removal, return activated sludge (RAS) pumping, phosphorous and COD removal, UASB pre-treatment for high strength industrial effluent, capital costs, sludge treatment and energy recovery, and process emissions and carbon equivalents). For a detailed derivation of the functions and models please refer to Leusder (2011).
### Modeling energy, carbon and capital costs
#### Desalination
An equation reflecting potential energy consumption per m³ of RO permeate as a function of capacity and feed water salinity (TDS in parts per million, ppm) was estimated using an ordinary least squares (OLS) regression. Prior to this, the data were analysed by creating scatter plots and determining the best-fit equations of salinity and of capacity to kWh/m³ separately. Figure 3 below shows the potential linear, exponential and power univariate relationships of salinity and capacity to power consumption in kWh/m³.
Figure 3
Linear, exponential and power uni-variate regression.
These plots were used to assess what functional form may provide a sensible fit for the independent variables to best reflect the data, in order to then apply it in a multivariate regression analysis.
Based on the output of the regressions above, two multivariate regressions were compared. In the first model, the natural logarithm of power consumption (y), which reflects the exponential relationship found above, was used along with a linear form for salinity (x1) and capacity (x2). In the second model, a linear form for power consumption (y) salinity (x1) and capacity (x2) was used. The regressions were modelled as follows to establish as to whether the linear log-model or the plain linear model yield the better fit between power consumption, salinity and capacity:
1. Log linear model: ln(y) = β0 + β1·x1 + β2·x2 + ε
2. Linear model: y = β0 + β1·x1 + β2·x2 + ε

where y is power consumption in kWh/m³, x1 is salinity in ppm and x2 is capacity in m³ per day.
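The two competing specifications can be sketched as follows. Since the 50-plant dataset is not reproduced here, synthetic data with an assumed exponential salinity effect stand in for it, and `np.linalg.lstsq` plays the role of the OLS estimator:

```python
import numpy as np

# The 50-plant dataset is not reproduced in the text, so synthetic data with
# an assumed exponential salinity effect stand in for it.
rng = np.random.default_rng(0)
x1 = rng.uniform(1_000, 45_000, 50)     # salinity, ppm
x2 = rng.uniform(5_000, 500_000, 50)    # capacity, m3/day
y = np.exp(0.5 + 4e-5 * x1) + rng.normal(0.0, 0.05, 50)  # kWh/m3, synthetic

X = np.column_stack([np.ones(50), x1, x2])  # design matrix with intercept

# Model 1 (log linear): ln(y) = b0 + b1*x1 + b2*x2 + e
b_log, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
# Model 2 (linear):     y = b0 + b1*x1 + b2*x2 + e
b_lin, *_ = np.linalg.lstsq(X, y, rcond=None)

print("log linear coefficients:", b_log)
print("linear coefficients:    ", b_lin)
```

On data generated this way, the log linear fit recovers the exponential salinity effect while the capacity coefficient is indistinguishable from zero, mirroring the significance pattern reported below.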
The linear model was found to exhibit a meaningful correlation between capacity and energy consumption, yielding that both salinity and plant capacity affect power consumption. An increase in salinity is reflected in an increase in power consumption (t = 12.32, d. f. = 48, p < 0.01) as is also confirmed by the findings of the Committee on Advancing Desalination Technology (2008). Plant capacity, however, was found to have the opposite effect with large plants showing slight economies of scale (t = 2.28, d. f. = 48, p < 0.05). The log linear model also yielded that there is a strong relationship between salinity and power consumption (t = 16.80, d. f. = 48, p < 0.05) and confirms the exponential relationship of salinity and energy as a result of increasing osmotic pressure throughout the desalination process.
As the removal of fresh water increases, feed water concentration goes up, resulting in more energy required to remove the salt as a function of the recovery ratio, as indicated in Figure 4 (ADU-RES 2006). Experience from Singapore (Siew 2011) and literature reviews have shown temperatures around 25 °C to be ideal for high recovery ratios with good permeate quality (Nisan et al. 2005). See Figure 5 below for a depiction of functions pertaining to temperature and permeate recovery.
Figure 4
Theoretical minimum energy required to desalinate seawater at 25 °C.
Figure 5
Permeate recovery as a function of temperature (Nisan et al. 2005).
Moreover, while in this model the relationship between capacity and energy consumption is still measurable, it is not significant (t = −0.97, d. f. = 48, p > 0.10), therein reflecting the findings of the capacity sensitivity analysis conducted by the Committee on Advancing Desalination Technology (2008). Given that the underlying analysis is based on a limited set of 50 data points, caution must still be exercised as to the extent of the validity of an exponential over a linear relationship. Yet, as the model fits the data best, it can be used to provide exemplary outputs where there is a lack of data. See Figure 6 below for a statistical summary.
Figure 6
Log linear (left) and linear (right) multivariate regression model output.
Based on the data from industry and the literature cited above, the formula to estimate energy consumption for desalination and reclamation in kWh/m³ was derived in the form

y = β0 + 0.0000398·x1 − 0.000000409·x2

where β0 denotes the estimated intercept. This implies that an increase in salinity (x1) by one unit raises kWh/m³ (y) by 0.0000398, while an increase in capacity (x2) by one unit lowers it by 0.000000409; the model has an R² of 0.86. Given that there are differing views in the literature and in practice as to the effects of capacity on energy consumption, with higher capacities said to either decrease energy consumption (Hitachi 2010) or have virtually no measurable financial effect at all on a per-m³ basis (Committee on Advancing Desalination Technology 2008), the approach offers a choice between the multivariate log linear model, where capacity is non-significant, and the multivariate linear model, where it is significant.
Eventually, a lot depends on plant configuration, i.e. a lot can be modelled depending on the type of technology employed. Industry will be reliant on what is actually available and, thus, on estimates of what it currently costs to produce freshwater from wastewater (TDS 1,000), brackish water (TDS 10,000), average seawater (TDS 35,000), and Gulf seawater (TDS 48,000). The modelled equation can also be used to estimate energy requirements for RO-based desalination where data are confidential and not available.
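As a sketch, the quoted salinity and capacity coefficients can be evaluated for such feed water categories in a linear specification. Note that the regression intercept is not reported in the text, so the value of B0 below is purely hypothetical and chosen only to yield plausible magnitudes:

```python
# Energy estimate using the fitted coefficients quoted above
# (+0.0000398 per ppm of salinity, -0.000000409 per m3/day of capacity).
# The intercept B0 is NOT reported in the text; the value here is hypothetical.
B0 = 3.0          # hypothetical intercept, kWh/m3
B1 = 3.98e-5      # salinity coefficient, per ppm (from the regression)
B2 = -4.09e-7     # capacity coefficient, per m3/day (from the regression)

def kwh_per_m3(tds_ppm, capacity_m3_day=100_000):
    return B0 + B1 * tds_ppm + B2 * capacity_m3_day

for label, tds in [("wastewater", 1_000), ("brackish", 10_000),
                   ("seawater", 35_000), ("Gulf seawater", 48_000)]:
    print(f"{label:14s} {kwh_per_m3(tds):.2f} kWh/m3")
```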
The point of the OLS regression is to illustrate what the current desalination power consumption is, based on real data. While there are other tools developed by various companies and agencies in the industry to estimate desalination costs, there are limitations to each of them (Committee on Advancing Desalination Technology 2008). As many of them are mostly for design purposes they may not reflect the actual power consumption as found in practice which may in fact vary significantly (Siew 2011). This is because desalination cost data, as with all water costing data, are a function of numerous variables with components not easy to ascertain. In line with the approach taken by the Committee on Advancing Desalination Technology (2008), which deemed the transparency of actual operating data from existing desalination plants a better indication as to potential costs, this analysis is further enhanced by data from Global Water Intelligence's Desalination Database (Desaldata.com 2011) as was modelled into the above equation.
For desalination the model yielded the following cost break-down based on the above parameter settings. Energy and capital costs are the main constituents of the per m³ cost of desalinated water, together accounting for 75% (Energy 41%, capital 34%). Capital costs were modelled using a standard annuity formula and a 4% WACC with a duration of 35 years (Leusder 2011). This was analogously done for secondary treatment and reclamation. With a price of S$20.6 per tonne of carbon, carbon costs do not represent a significant cost factor at S$ 0.04 per m³ of water produced (5% of total costs). The remaining 20% are made up of chemical, labour, and maintenance costs and are based on literature estimates as per the Committee on Advancing Desalination Technology (2008).
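The annuity-based capital cost calculation can be sketched as follows; the 4% WACC and 35-year duration follow the text, while the capex, capacity and utilisation figures are hypothetical:

```python
# Capital cost annualisation: standard annuity formula with a 4% WACC over
# 35 years (as in the text). Capex and plant capacity figures are hypothetical.
def annuity_factor(rate, years):
    # fraction of the capex paid each year so that it is fully recovered
    return rate / (1 - (1 + rate) ** -years)

def capital_cost_per_m3(capex, rate=0.04, years=35,
                        capacity_m3_day=100_000, utilisation=0.95):
    annual_payment = capex * annuity_factor(rate, years)
    annual_m3 = capacity_m3_day * 365 * utilisation
    return annual_payment / annual_m3

af = annuity_factor(0.04, 35)
cc = capital_cost_per_m3(200_000_000)  # hypothetical S$ 200m plant
print(f"annuity factor = {af:.5f}")
print(f"capital cost = {cc:.3f} S$/m3")
```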
#### Secondary treatment
The constituents of AS were modelled in a complex approach considering oxygen and energy requirements for BOD and nitrogen removal, RAS pumping, phosphorous and COD removal, UASB pre-treatment for high strength industrial effluent, capital costs, sludge treatment and energy recovery, and process emissions and carbon equivalents. The costs are broken down into 39% sludge costs, 12%, 5% carbon costs, 13% COD removal costs, 14% labour costs, 10% phosphorous removal costs, and 3% capital costs (Leusder 2011).
#### Closing the loop
Water reclamation, which merely requires a one-stage RO train with two passes, is a cheaper alternative to desalination, which is a more complicated process (Jensen 2004) requiring higher feed pressures at around 60 bar, a two-stage RO process, and thus higher energy requirements at around 4 kWh/m³ (Siew 2011). As reclamation requires only about 1 kWh/m³ and can be used where access to seawater is limited, it is generally the preferred alternative, given that it subsequently emits less carbon (Jensen 2004). Yet, given the limited public acceptance of treated effluent, its applicability for households and for beverage and food production is highly limited. As with desalination, the main cost constituents of reclamation are energy, capital, and carbon costs, which were modelled using the same approach as for desalination above. For regions where no data could be obtained, the aforementioned linear log model can also be applied to estimate energy requirements; the required inputs to derive an energy price using the above equation are feed water TDS and the capacity of the plant. In the two cases analysed here, the equation was not used as all data were readily available for modelling purposes. The two cases compared were the Singaporean water reclamation approach (NEWater) involving UV disinfection and the Californian approach involving advanced oxidation processes (AOP) using hydrogen peroxide. The latter allows for the removal of organic compounds that are not easily removable, such as highly toxic and poorly biodegradable chlorophenols (CPs) (Pera-Titus et al. 2004), endocrine disrupting compounds (Rosenfeldt & Linden 2004), and even pesticides (Badawy et al. 2006).
The modelled approach allows for the selection of UV treatment for disinfection purposes at 70 mJ/cm² at an average energy consumption of 0.035 kWh/m³ (PUB 2002, PUB 2011) versus AOP at 300 mJ/cm² at 0.08 kWh/m³ (OCWD 2011). Moreover, it breaks down the total energy costs into micro- and ultra-filtration (MF/UF) at 0.2 kWh/m³, RO at 0.6 kWh/m³ and a remainder of 0.165 kWh/m³ for the air instrumentation system and the chemical dosing pumps. Whilst labour costs were derived from literature, maintenance costs were estimated as second pass SWRO permeate costs at 0.01 (Committee on Advancing Desalination Technology 2008) and 30% thereof for second pass permeate, as per PUB experience (PUB 2002, PUB 2011). Chemical costs were derived as per data by the Orange County Water District (OCWD 2011). Overall, total chemical costs for water reclamation were thus estimated at 0.0016 S$ per m³, whilst permeate conditioning costs were assumed to be equal to desalinated water conditioning costs as per the Committee on Advancing Desalination Technology (2008). In summary, the cost constituents of reclaimed water are 75% capital costs, 10.5% energy costs, 1.5% carbon costs and 14% other costs, incl. chemical costs (Leusder 2011).
### Wastewater biomass and energy recovery
Conventional aerobic wastewater treatment is associated with the production of sludge (UNEP 2000). Whilst some aerobic treatment systems are now said to be sludge-free due to an aerobic-aeration process with ‘supercharged’ air sustaining a highly effective floc of enzymes devouring the sludge material in the system (Global Water Group 2011), these are not widely distributed globally. Therefore, when assessing the impacts of wastewater treatment, sludge must be taken into consideration, both in terms of treatment costs and disposal emissions and in terms of energy recovery from anaerobic digestion and pre-treatment. Sludge treatment is an expensive and carbon intensive process.
For a granular analysis of the process emissions and carbon equivalents associated with sludge please also refer to Leusder (2011). While sludge treatment costs vary between different regions, studies show that as a rule of thumb the overall costs (manpower, equipment and energy) can account for about 50% of the total cost of wastewater treatment (Kemira 2008). Studies by the EC confirm this. The most significant cost constituent is sludge treatment, which is expected to increase further as more stringent hygiene standards are introduced. Sludge dewatering and drying is also very costly, yet savings from the transport of wet sludge off-set these to a certain extent (European Commission Environment 1999). In the following section, costs, emissions, and energy recovery are discussed exemplarily based on a set of underlying assumptions. The pre-treatment assumed is the Norwegian thermal hydrolysis system CAMBI®. As per process data by CAMBI and Black & Veatch, an electrical output of approximately 1 MWh per tonne of dry solids (TDS) after an electrical input of 310 kWh can be achieved (Jolly & Gillard 2009) for thermal hydrolysis pre-treatment plants sized as small as 3,600 metric tonnes per year (CAMBI 2007). This is based on 400 m³ biogas per TDS with a methane yield of 68% at 61% overall volatile solids (VS) destruction and 48% TS removal (McCausland & McGrath 2010). The mixing ratio of primary vs. waste activated sludge (WAS) is around 50/50 on a dry solids basis (Jolly & Gillard 2009) with a tendency towards a higher WAS content (Rognlien 2011). The sludge is fed at about 10–12% DS assuming 90% effective digester volume (EDV) and average VS of 75%, with the VS loading being about 6.5 kg VS per m³ digester capacity per day and an HRT of just over 12 days (Lowe et al. 2007).
In order to approximate the amounts of sludge contributed to municipal sewage treatment works by APB in Singapore, a simplified regression model for primary and secondary sludge based on municipal AS data from Ontario and Kuwait was used. The quantity and characteristics of sludge produced depend on the wastewater characteristics and the degree of treatment. The two main sources of sludge generated in AS plants are SS in the influent wastewater (raw primary sludge) and BOD removed during the AS process (oxidized secondary sludge). The capacities of the plants analysed ranged from 500 to 200,000 m³/day. The wastewater treated is municipal and primarily domestic, with treatment encompassing primary and secondary (conventional AS) treatment stages (Hamoda 1988). A regression equation with an R² of 0.88 was used to estimate annual sludge production in tonnes from total SS and BOD removed. Based on the average brewery wastewater characteristics of 1,500 mg/l of BOD and SS of 60 mg/l (World Bank 1998), the primary and secondary sludge volumes estimated for APB totalled 176.1 and 368.8 tonnes per annum, respectively, yielding a total annual sludge tonnage of approximately 544.9 tonnes. This is based on 70% of SS and 30% of BOD being removed during primary sedimentation (FAO 1987) at an overall treatment efficiency of 95%. As the ratio of primary vs. secondary brewery sludge is 33/67 on a dry solids basis, the accuracy of the potential electrical energy output as per Cambi's thermal hydrolysis pre-treatment is uncertain. Given that experimental results showed that municipal and brewery sludge mixtures at a ratio of 25:75% by weight (sewage:brewery) yielded higher biogas production (Babel et al. 2009), comparable outcomes in terms of net energy production may still be possible, though this requires further investigation.
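A quick cross-check of the quoted tonnages and the primary/secondary split (the ~33/67 ratio corresponds to a primary share of roughly 32–33%):

```python
# Cross-check of the quoted sludge tonnages and the primary/secondary split.
primary_t, secondary_t = 176.1, 368.8   # tonnes per annum, as quoted
total_t = primary_t + secondary_t
primary_share = primary_t / total_t

print(f"total sludge = {total_t:.1f} t/a, primary share = {primary_share:.0%}")
```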
For simplicity's sake it is assumed that the energy yield is the same so as to give a rough ballpark figure by which to illustrate potential carbon and energy savings. The total energy yield that could be achieved from such an amount of sludge was then calculated at 490.4 MWh per annum with a carbon equivalent of approximately 267 tonnes of carbon, assuming that 1 kWh of grid electricity has a carbon equivalent of 545 grams (Carbon Trust 2011) while energy generated via combined heat and power (CHP) has no carbon footprint. Alternatively, when assuming that energy generated via CHP has a carbon equivalent of 0.295 kg per kWh (WRc 2007), this still saves 123 tonnes of carbon emissions. Overall, up to 89% of the energy used to aerate APB's wastewater and 56% of total AS energy cost can be saved using thermal hydrolysis, as per Cambi, if all of APB's sludge is pre-treated as per the above mentioned methodology. This reduces the aeration footprint to 0.147 kWh/m³ from 1.315 kWh/m³ previously and cuts total energy cost for AS from S$ 76,584 to S$ 33,464. In other words, sludge can save 1.168 kWh/m³.
### Water derivatives
The results of the three-step approach yielded that the two main components of the TCW are energy and capital costs. Given that the TCW was used to assess the impacts of water on APB, it makes financial sense to address potential fluctuations in the price of water by locking it in at a certain point in time. By replicating the pay-off of an underlying without having to physically hold it, derivatives provide such a lock-in mechanism. As water derivatives do not exist at present, the approach employed in this paper replicates their hypothetical pay-off by pricing an option based on the main constituents of the TCW, i.e. energy, carbon and capital costs.
As energy, carbon, and interest rate derivatives (as a capital cost proxy) do exist, the volatilities of their underlyings can be used to replicate the pay-offs of a hypothetical water option. The example uses a European option as a fixed-maturity instrument. An option is used as it is the most costly yet also most flexible means of managing risk in the short run. It also allows for the evaluation of delta, gamma and vega risks, which can reflect the risk companies face with respect to water as they show how the price of water and of water risk management changes relative to movements in the price of the underlying. In other words, they provide somewhat of an embedded market sensitivity analysis of water's price based on its main pricing constituents. The option price itself is calculated using the Black-Scholes option pricing model (Black & Scholes 1973). With energy accounting for ∼50% of the TCW, its volatility can be assumed to be a good proxy for the volatility of the water price itself. Carbon is yet another source of volatility in the water price and can still be subject to massive legislation-induced price hikes. Thus, the volatility of natural gas at 0.375 (as an exemplary proxy for electricity) and the volatility of carbon at 0.355 (Bloomberg 2011) were used as a weighted average to estimate a proxy for the volatility of water. The weighting applied was 97/3 energy/carbon given the relative contributions of the respective components to the TCW. Increasing carbon prices will, eventually, skew this weighting towards carbon. By using the weighted average cost of capital (WACC) of the company as the risk-free rate for the determination of the option price, the volatility of the cost of capital is also reflected in the option price. An overview of the input parameters used to determine an option price is depicted in Table 1.
Table 1

Option input parameters

Underlying S0: 1,722,376
Strike price/exercise price: 1,722,376
Risk-free rate: 0.04
Time to expiry in years: 1
Volatility σ: 0.374418

The option is priced with a duration of 1 year and a WACC of 4% to represent the risk-free rate. The volatility used is the implied volatility of natural gas as a source of energy (Bloomberg 2011). The underlying price and the strike price were assumed equal. Other costs were assumed constant over time. Based on this input, the price of a call option is S$ 285,944 and that of the corresponding put option is S$ 218,409. The delta of an option is the first partial derivative of the call option price with respect to the price of the underlying, i.e. water or in this case its proxies, and measures the rate of change of the option price with respect to the underlying while holding all else constant (Hull 2008). So if the price of water or the components constituting its price change by one dollar, the price of the option changes by the value of delta. Along these lines, the gamma of an option is defined as the rate of change of the option's delta with respect to the price of water or the components constituting its price. Mathematically speaking, it is the second partial derivative of the option value with respect to the underlying (Hull 2008). Here, a one-unit change in the price of water changes the option delta by the value of gamma. The vega of an option is defined as the rate of change of the option price with respect to the volatility of the underlying and is calculated as the first partial derivative of the option price with respect to the volatility of the underlying asset (Hull 2008). In this case, vega has been adjusted to reflect a percentage change; that is, a 1% increase in volatility increases the option price by approximately S$ 6,581 (cf. Table 2).
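The option price and Greeks can be reproduced with a standard Black-Scholes implementation using the parameters of Table 1 (S0 = K = 1,722,376, r = 4%, T = 1 year, σ = 0.374418); small differences are rounding:

```python
from math import log, sqrt, exp, erf, pi

# Black-Scholes price and Greeks for the hypothetical water option.

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_pdf(x):
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

S, K, r, T, sigma = 1_722_376, 1_722_376, 0.04, 1.0, 0.374418

d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)

call = S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
put = call - S + K * exp(-r * T)               # put-call parity
call_delta = norm_cdf(d1)                      # put delta = call_delta - 1
gamma = norm_pdf(d1) / (S * sigma * sqrt(T))   # identical for call and put
vega_1pct = S * norm_pdf(d1) * sqrt(T) / 100   # per 1% volatility move

print(f"call = S$ {call:,.0f}, put = S$ {put:,.0f}")
print(f"delta = {call_delta:.6f}, gamma = {gamma:.2e}, vega(1%) = S$ {vega_1pct:,.1f}")
```

Put-call parity keeps the two prices mutually consistent, and the resulting delta of ≈0.6156 matches the magnitude reported for the call in Table 2.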
As such, the ‘Greeks’ can be used to reflect the changing costs of managing water risks through derivatives, thereby, offering a different perspective on water risks and their management. While there is a vast array of ‘Greeks’ for the risk management of derivatives, this paper merely illustrates the three mentioned above. For an overview of the put and call ‘Greeks’ see Table 2 below.
Table 2

Option Greeks

        Call option risks   Put option risks
Delta   0.615637            −0.384363
Gamma   0.000001            0.000001
Vega    6,580.6             6,580.6

(Under Black-Scholes, gamma and vega are identical for calls and puts; the put delta equals the call delta minus one.)
## RESULTS AND DISCUSSION
This paper illustrated the dependence of the price of water on the parameters energy, capital, and carbon. The underlying analysis was based on an energy price of S$ 0.1 and a carbon price of S$ 20.6 (base case). Energy prices are often subsidized and generally subject to high fluctuation given the finite nature of oil, coal, and gas. Moreover, with carbon prices required to be at least € 35 per tonne (Crooks et al. 2010), water is subject to yet another volatile cost component. Therefore, in order to make the model more robust, a sensitivity analysis to reflect these changes was conducted holding constant labour, maintenance, and chemical costs, hence referred to as other costs. The figure below illustrates the impacts of a 100% increase in energy costs, holding carbon and capital costs constant, vis-à-vis the base case. The impacts of energy recovery from sludge were not taken into consideration in this analysis.
Figure 7 shows an increase in relative energy costs from 14% to 25% for water reclamation, 13% to 22% for AS, and finally an increase from 41 to 58% for desalination, the most energy intensive of the three options. Conducting the same analysis for a carbon price of S$61.92 at an energy price of S$ 0.1 yielded that the relative costs of carbon for water reclamation increased to 4% from formerly 2%. The relative cost of carbon for AS, the most carbon-intense process, more than doubled to 13% from a mere 5%, while desalination increased to 9%. (See Figure 8.)
Figure 7
Energy sensitivity analysis Scenario 1 vis-à-vis the base case.
Figure 8
Carbon sensitivity analysis Scenario 2 vis-á-vis the base case.
The analysis showed that even with substantial increases in the relevant parameters of the TCW, the resulting costs remained negligibly low for breweries. This implies that there is room for discriminatory pricing.
Based on a carbon price of S$ 20.64 and energy selling at S$ 0.1 per kWh, the results yielded a TCW of S$ 4.15 per m³. This accounts for a mere 0.4% of APB's total operating costs. However, 1 m³ of water drives S$ 1,125 in revenues (it yields 125 litres of beer at S$ 9 per litre), with profits of S$ 195 per m³ (see Table 3).
Table 3
Impacts of the TCW on APB
¹ Publicly available data from annual report.
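The revenue and profit multiples quoted in this paper follow directly from the per-m³ figures above. A quick sketch, using the rounded values of S$ 1,125 revenue and S$ 195 profit per m³ (so the results differ from the reported multiples only through rounding of the TCW):

```python
def water_multiples(tcw, revenue_per_m3=1125.0, profit_per_m3=195.0):
    """Revenue and profit driven by 1 m^3 of water, relative to its cost (the TCW)."""
    return revenue_per_m3 / tcw, profit_per_m3 / tcw

rev_base, prof_base = water_multiples(4.15)  # roughly the reported 273 and 46
rev_s2, prof_s2 = water_multiples(5.13)      # roughly the reported 219 and 37
```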
Scenario 2 increased energy and carbon prices by 100% and 300%, respectively. The changes were then reflected using the previous driver-tree approach. The results increased the TCW to S$ 5.13. This raised water's fraction of total costs from 0.4% to 0.5%, whilst augmenting its fraction of profits from 2.1% to 2.6%. The profit and revenue multiples (how much profit and revenue is driven by 1 m³ relative to its price) fell to 37 and 219 from 46 and 273, respectively.

Water derivatives offer a means of using the price of water as a parameter to determine the cost of managing water risks. Moreover, they also offer a set of additional risk metrics: the Greeks. These underline how managing water risk essentially boils down to managing energy, carbon, and interest risk as well, as they reflect how the price of the option changes as a result of small changes in the price of the underlying, which in turn is approximated by its major constituents (carbon, energy, and interest rates) and changes thereof.

## CONCLUSION

This paper illustrated that water risks can be quantified purely in financial terms, showing that the TCW is a good proxy for estimating industrial water risks and their impacts. It was derived using a VBM model that accounts for all drivers of the water price and is driven mainly by energy, carbon, and capital costs. As such, water risk management is largely a question of adequately accounting for these constituents. Derivatives offer an effective means of incorporating all three constituents in one financial product. The TCW was thus also used to derive a water option price as well as its delta, gamma, and vega. The option pricing was based on the weighted average implied volatilities of the carbon price and the Indonesian gas price, the latter serving as a proxy for electrical-energy volatility in Singapore.
The cost of an option at par was S$ 285.944 for calls and S$ 218.409 for puts, given a maturity of 1 year and a risk-free rate of 4%. Water derivatives facilitate the replication of the pay-offs of purchasing these underlying constituents. However, as water makes up merely 0.4% of APB's total operating costs, a derivative protecting against water-price fluctuations without actual physical delivery will only address 0.4% of the company's costs, leaving its profits to stand and fall with water's physical availability. Such a derivative does not protect APB from forfeiting the entirety of its revenues, as without water it simply cannot produce anything. A consistent supply of a standardized-quality effluent, traded via forward contracts, would offer protection against such a shortfall.
Thermal hydrolysis of sludge even cut secondary-treatment energy costs by 56%. As the TCW constitutes such a low fraction of total operating costs, yet water drives a significant amount of revenues and profits, applying higher water tariffs to APB based on the respective value of water to it (discriminatory pricing) may lead to an efficient outcome, provided the demand-price elasticities of the company's products are accounted for accordingly. From the analysis in Table 3 it follows that the higher the value that can be derived from one m³ of water, relative to the fraction of the TCW in the firm's total operating costs, the more the firm should be charged per incremental m³, and the more efficient the outcome of discriminatory pricing will be. However, whilst this approach yields a financially accurate basis for water pricing, it would still skew the desired results unfavourably if the corresponding demand-price elasticity is not taken into consideration. As such, a discriminatory increment in the water price would need to go hand in hand with price fixing, or rather be administered through a temporary tax break for the company, until it manages to incur the same total cost of water by reducing its overall water consumption or by simply closing the loop and recycling its own effluent.
The key difference between the TCW/VBM framework and conventional assessment and management approaches is that it illustrates how each individual driver affects the KPIs. A further feature is the ability to feed balance-sheet data into the model, yielding KPIs that reflect a company's susceptibility to water and wastewater treatment costs. Applying this approach to a number of companies in the same sector can be used to derive an index by which to assess entire industries and their relationship to water. The developed approach is not a design software but an impact-assessment tool considering environmental (carbon and pollution), economic (energy and carbon costs), and societal (e.g. acceptability of reclaimed and desalinated water) impacts on companies. As all the required data are publicly available, the model can provide a directional indication of potential impacts for companies without access to confidential information. However, in its present configuration it can only accommodate municipalities and food/beverage companies. Further research is required to adapt it to, e.g., semiconductor companies and other industries.
## REFERENCES

2006 Co-ordination Action for Autonomous Desalination Units based on Renewable Energy System – Deliverable 6.1, Energy Consumption Modelling. Work Package 6, Further Development of Integrated Plant Design. CREST, United Kingdom. Report number: INCO-CT-2004-509093.

APB 2010 Annual Report 2010. Asia Pacific Breweries Limited, Singapore.

Babel, S., Sae-Tang, J. & Pecharaply, A. 2009 International Journal of Environmental Science and Technology 6 (1), 131–140.

Ghaly, M. Y. et al. 2006 Desalination 194 (1–3), 166–175.

Bitton, G. 1998 Formula Handbook for Environmental Engineers and Scientists. Environmental Science and Technology. John Wiley and Sons Inc., New Jersey, USA.

Black, F. & Scholes, M. 1973 Journal of Political Economy 81 (3), 637–654.

Bloomberg 2011 Natural Gas Futures. [Online] http://www.bloomberg.com/ (accessed 17 Aug 2011).

CAMBI 2007 (accessed 13 July 2011).

Carbon Trust 2011 (accessed 17 Aug 2011).

2008 Desalination: A National Perspective. Washington DC.

Crooks, E., Pfeifer, S. & Rigby, E. 2010 Carbon-price boost for green energy. Financial Times.

Davies, P. S. 2005 The Biological Basis of Wastewater Treatment. Strathkelvin Instruments Ltd.

Desaldata.com 2011 Incorporating the IDA desalination plants inventory. [Online] http://desaldata.com/projects/analysis (accessed 17 Aug 2011).

Doyle, J. D. & Parsons, S. A. 2002 Water Research 36 (16), 3925–3940.

EPA Water Quality Office 1971 Combined treatment of domestic and industrial waste by activated sludge. Clean Water – Water Pollution Control Research Series 5 (71), 1–119.

European Commission Environment 1999 Proceedings of the workshop on ‘Problems around sludge’ – Technology and innovative options related to sludge management, Stresa, Italy. In: Hall, J., Reimann, D. O., Tilche, A., Bortone, G., Dohányos, M., Ulmgren, L. & Phan, L. (eds). European Commission, pp. 155–212.

FAO 1987 Wastewater treatment and use in agriculture. FAO Corporate Document Repository.

Global Water Group 2011 Pure, Safe, and Green: A New Paradigm for Water and Wastewater Processing. [Online] http://www.india.aquatechtrade.com/in/en/2011/Documents/AquaStagePresentations/GWI.pdf.

Gude, V. G., Nirmalakhandan, N. & Deng, S. 2010 Renewable and Sustainable Energy Reviews 14 (9), 2641–2654.

Hamoda, M. F. 1988 Environment International 14 (1), 29–35.

Hitachi 2010 Seawater reverse osmosis desalination system. Report number: HAQT-SW3-2010.

Hull, J. 2008 Options, Futures, and Other Derivatives with Derivagem CD, 7th edn. Prentice Hall, Upper Saddle River, NJ.

Jensen, O. 2004 The last NEWater project. Global Water Intelligence (GWI), Singapore. Report number: 5.

Jolly, M. & Gillard, J. 2009 The economics of advanced digestion. 14th European Biosolids and Organic Resources Conference and Exhibition, CAMBI, Black & Veatch.

Kemira 2008 Sludge treatment solution. Application brochure (AP-KCC-EN-WW_080409), 1–15.

Leusder, C. J. 2011 Assessing the Impacts of Water on Business: Modelling the True Cost of Water in Terms of Capital, Energy, and Carbon. Imperial College London, Centre for Environmental Policy, London, UK.

Lowe, P., Horan, N. J. & Walley, P. 2007 In: 12th European Biosolids and Organic Resources Conference, Aqua Enviro, Manchester, UK.

McCausland, C. & McGrath, S. 2010 Anaerobic digestion of CAMBI treated sludge at Ringsend WWTP: the impact of changing sludge blends on key performance parameters. In: 15th European Biosolids and Organic Resources Conference, Celtic Anglian Water / Aqua Enviro Technology Transfer, Leeds, England.

Nisan, S., Commerçon, B. & Dardour, S. 2005 Desalination 182 (1–3), 483–495.

OCWD 2011 Water reclamation energy and chemical cost. Personal communication with the Orange County Water District (interviewed by Mehul Patel).

Pera-Titus, M., García-Molina, V., Baños, M. A., Giménez, J. & Esplugas, S. 2004 Applied Catalysis B: Environmental 47 (4), 219–256.

PUB 2002 Singapore Water Reclamation Study – Expert Panel Review and Findings, 1–24.

PUB 2011 Requirements for discharge to the public sewers. [Online] www.pub.gov.sg/general/…/RequirementsForDischargeToSewer.doc (accessed 17 Aug 2011).

Rawlings, R. H. D. & Sykulski, J. R. 1999 Building Services Engineering Research and Technology 20 (3), 119–129.

Rognlien, H. 2011 Cambi average sludge ratio.

Rosenfeldt, E. J. & Linden, K. G. 2004 Environmental Science & Technology 38 (20), 5476–5483.

Siew, M. 2011 Senior General Manager, Global Operations & Maintenance, Singapore. Personal interview.

Tölgyessy, J. 1993 Water chemistry. In: Studies in Environmental Science: Chemistry and Biology of Water, Air and Soil – Environmental Aspects, Vol. 53 (Tölgyessy, J., ed.). Slovak Technical University, Bratislava, Czechoslovakia. Elsevier, pp. 14–322.

U.S. Department of Energy 1999 Reduce pumping costs through optimum pipe sizing. Energy Efficiency and Renewable Energy, DOE/GO-10099-879.

UNEP 2000 Sludge treatment, reuse and disposal. [Online] http://www.unep.or.jp/ietc/publications/freshwater/sb_summary/10.asp (accessed 16 July 2011).

UNESCO 2003b (accessed 12 July 2011).

Veolia Water 2010 The Water Impact Index and the First Carbon-Water Analysis of a Major Metropolitan Water Cycle. Veolia Water White Paper.

WBCSD 2009 Water Facts & Trends, Version 2. www.wbcsd.org.

World Bank 1998 Breweries – Industry Description and Practices. Pollution Prevention and Abatement Handbook, 272–274.

WRc 2007 WI GHG Estimator [1.0]. Carbon Trust, UKWIR, Water UK, UK.
https://socratic.org/questions/let-f-x-9x-2-and-g-x-x-3-how-do-you-find-f-g-x
# Let f(x) = 9x - 2 and g(x) = -x + 3, how do you find f(g(x))?
Feb 15, 2017
$f \left(g \left(x\right)\right) = - 9 x + 25$
#### Explanation:
Substitute g(x) = -x + 3 in place of x in f(x):
$f \left(g \left(x\right)\right) = f \left(\textcolor{red}{- x + 3}\right)$
$\textcolor{w h i t e}{f \left(g \left(x\right)\right)} = 9 \left(\textcolor{red}{- x + 3}\right) - 2$
$\textcolor{w h i t e}{f \left(g \left(x\right)\right)} = - 9 x + 27 - 2$
$\textcolor{w h i t e}{f \left(g \left(x\right)\right)} = - 9 x + 25$
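A quick numeric check of the composition (a sketch in Python):

```python
def f(x):
    return 9 * x - 2

def g(x):
    return -x + 3

# f(g(x)) should equal the simplified form -9x + 25 for every x
assert all(f(g(x)) == -9 * x + 25 for x in range(-10, 11))
```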
http://www.cfd-online.com/W/index.php?title=Approximation_Schemes_for_convective_term_-_structured_grids_-_definitions&oldid=2943
# Approximation Schemes for convective term - structured grids - definitions
Here we develop common definitions and conventions, because:
• different articles have used different definitions and notations
• we are searching for a common approach and generalisation
## Usual notation for the convected variable
$\boldsymbol{f}$
$\boldsymbol{\phi}$
## Definition of the considered face, upon which the approximation is applied
The face usually considered is $\boldsymbol{w}$, for which the flux is directed from left to right.
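As a minimal illustration of these conventions (a sketch, assuming a hypothetical 1-D grid stored as a Python list): the simplest approximation for the convected variable at the face $\boldsymbol{w}$ is first-order upwinding, which takes the upstream nodal value when the flux through the face is directed from left to right.

```python
def face_value_upwind(phi, i, face_velocity):
    """First-order upwind value of the convected variable phi at the
    'w' face of node i (the face between nodes i-1 and i).

    When the flux through the face is directed from left to right
    (positive face velocity), the upstream value phi[i-1] is used;
    otherwise phi[i]."""
    return phi[i - 1] if face_velocity > 0.0 else phi[i]

phi = [1.0, 2.0, 4.0, 8.0]
assert face_value_upwind(phi, 2, face_velocity=+1.0) == 2.0  # left-to-right flux
assert face_value_upwind(phi, 2, face_velocity=-1.0) == 4.0  # reversed flux
```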
http://openstudy.com/updates/507d5d49e4b07c5f7c1fc7e8
## mathslover: How can we say that in the golden ratio $$\frac{a+b}{a} = \frac{a}{b}$$ ?
1. mathslover: Is there any proof for the above equation?
2. amistre64: That is simply how it is defined.
3. amistre64: You might as well ask: is there any proof that green is green?
4. mathslover: You want to say that (a+b)/a = a/b (if a > b), that is, 1 + b/a = a/b, is universal?
5. mathslover: Sorry, I meant: green = green is OK, but a/b + 1 = b/a seems hard to agree with.
6. mathslover: Usually when we are to prove x = y, it is agreed that YES IT IS LIKE GREEN = GREEN, but not exactly; it is like: color of leaf = green (TO PROVE). I hope you are getting my point, sir.
7. amistre64: Take a line of unit length (1); take some part of it and define it as "x", which leaves us with the rest of it as (1 - x). The golden ratio is defined as the value such that $\frac{1}{x}=\frac{x}{1-x}$
8. amistre64: By redefining the parts as a = x, b = 1 - x, a + b = 1, we have your setup.
9. mathslover: Sir, but what is the proof that 1 - x = x^2? Are we estimating this?
10. mathslover: Estimation (in the following sense): in this special case of the golden ratio, (a+b)/a = a/b.
11. amistre64: When 1 - x = x^2, the ratio of the parts to the whole IS the golden ratio. This is along the same line of thought as: the ratio of the circumference of a circle to its diameter defines pi. How would you prove that C/d = pi? It is not a matter of proof, but of definition.
12. mathslover: So can we regard that as an axiom? A postulate? Well, I think it can be well described as a 'definition', correct? By the way, a nice example given by you, sir.
13. amistre64: I'd say "definition" is a good term to use :) We can prove that the value of the golden ratio is what it is by solving for "x".
14. mathslover: OK, thanks a lot, sir!
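amistre64's last point, that the value follows from solving for x, can be checked numerically (a sketch):

```python
from math import sqrt

# 1/x = x/(1 - x)  =>  x**2 + x - 1 = 0; take the positive root.
x = (sqrt(5.0) - 1.0) / 2.0        # about 0.618
golden_ratio = 1.0 / x             # about 1.618

assert abs(1.0 / x - x / (1.0 - x)) < 1e-12             # the defining ratio holds
assert abs(x ** 2 - (1.0 - x)) < 1e-12                  # so 1 - x = x^2
assert abs(golden_ratio - (1.0 + sqrt(5.0)) / 2.0) < 1e-12
```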
https://www.gamedev.net/forums/topic/109753-making-an--agent/
# Making an Agent
## Recommended Posts
Ok, I just started getting into the AI thing. I was reading an article in a DirectX book and the behaviours displayed were driven by FSMs. So I looked on Google and ran into tons of AI sites, most of them trying to explain a little about FSMs and how they can be used. Little did I know I've been using them all the time. So I thought I would try to create some sort of FSM agent with a special purpose. I want to get to know the framework first. I have had no luck finding any good articles that explain good algorithms for piecing together the framework for an agent (FSM). What I want is some good info on designing such a system: what types of classes, structures and C++ methods should be used. The type of AI I need to make is something pretty basic for now. I want it to be able to react to certain things in its environment, plus some basic attack knowledge, and that's about it for now. Later I want to add a learning system, which I heard can mostly be built on top of an FSM. Below is something I just came up with, just a layout of things. I want to code a good framework and layout so that I then have something to build off of.
enum STATE { DEAD, ALIVE, THINK };
STATE s = ALIVE;   // current agent state
MSG msg;           // Win32 message
while( 1 )
{
    if( ::PeekMessage( &msg, NULL, 0, 0, PM_REMOVE ) )
    {
        ::TranslateMessage( &msg );
        ::DispatchMessage( &msg );
        if( msg.message == WM_QUIT )
            break;
    }
    else
    {
        // AI process(es): run one step of the finite state machine
        switch( s )
        {
        case ALIVE:
            // react to the environment; possibly s = THINK;
            break;
        case THINK:
            // decide on an action; possibly s = ALIVE or s = DEAD;
            break;
        case DEAD:
            // agent is dead; nothing left to do
            break;
        }
        // End process
    }
}
I wanted to get an FSM framework set up like the ones in the Quake games, because I like the way they work from what I've heard. Those are the type of algorithms I'm talking about.
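Not the Quake code itself, but the shape of such a framework can be sketched quickly (in Python here for brevity; the same design maps onto a C++ class with member-function pointers): each state gets a handler, each handler returns the next state, and the game loop calls update() once per frame.

```python
class Agent:
    """Minimal finite state machine agent: one handler per state,
    each handler returns the name of the next state."""

    def __init__(self):
        self.state = "alive"
        self.handlers = {
            "alive": self.do_alive,
            "think": self.do_think,
            "dead": self.do_dead,
        }

    def update(self):
        # One FSM step per game-loop iteration.
        self.state = self.handlers[self.state]()

    def do_alive(self):
        # React to the environment; here we always stop to think.
        return "think"

    def do_think(self):
        # Decide what to do next; here we just resume acting.
        return "alive"

    def do_dead(self):
        return "dead"  # absorbing state

agent = Agent()
agent.update()
assert agent.state == "think"
agent.update()
assert agent.state == "alive"
```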
http://www.eng-tips.com/viewthread.cfm?qid=254489
# The $150 [Near] Space Camera

wktaylor (Aeronautics) (OP) 16 Sep 09 10:40

Wow... This was truly an exciting, cutting-edge, beer-budget engineering project that had to consider every aspect of high-altitude flight, including launch and recovery! Hmmm: I wonder if an extra [emergency plug-in?] battery pack for the camera and cell-phone could have allowed the combo to transmit each photo when taken [USB'ed together]... or on the descent??? I can't wait to see the photos strung together in a video sequence!!!

http://www.wired.com/gadgetlab/2009/09/the-150-space-camera-mit-students-beat-nasa-on-beer-money-budget/

The $150 Space Camera: MIT Students Beat NASA On Beer-Money Budget

Bespoke is old hat. Off-the-shelf is in. Even Google runs the world's biggest and scariest server farms on computers home-made from commodity parts. DIY is cheaper and often better, as Justin Lee and Oliver Yeh found out when they decided to send a camera into [near] space.

The two students (from MIT, of course) put together a low-budget rig to fly a camera high enough to photograph the curvature of the Earth. Instead of rockets, boosters and expensive control systems, they filled a weather balloon with helium and hung a styrofoam beer cooler underneath to carry a cheap Canon A470 compact camera. Instant hand warmers kept things from freezing up and made sure the batteries stayed warm enough to work.

Of course, all this would be pointless if the guys couldn't find the rig when it landed, so they dropped a prepaid GPS-equipped cellphone inside the box for tracking. Total cost, including duct tape? $148.

Launch

Two weeks ago, on Sept. 2, at the leisurely post-breakfast hour of 11:45 a.m., the balloon was launched from Sturbridge, Massachusetts.
Lee and Yeh took a road trip in order to stop prevailing winds from taking the balloon out onto the Atlantic, and checked in on the University of Wisconsin's balloon trajectory website to estimate the landing site. Because of spotty cellphone coverage in central Massachusetts, it was important to keep the rig in the center of the state so it could be found upon landing. Light winds meant the guys got lucky and, although the cellphone's external antenna was buried upon landing, the fix they got as the balloon was coming down was close enough.

The Photographs

The balloon and camera made it up high enough to see the black sky curling around our blue planet. The Canon was hacked with the CHDK (Canon Hacker's Development Kit) open-source firmware, which adds many features to Canon's cameras. The intervalometer (interval timer) was set to shoot a picture every five seconds, and the 8-GB memory card was enough to hold pictures for the five-hour duration of the flight.

The picture you see above was shot from around 93,000 feet, just shy of 18 miles high. To give you an idea of how high that is, when the balloon burst, the beer-cooler took 40 minutes to come back to Earth.

What is most astonishing about this launch, named Project Icarus, is that anyone could do it. The budget is so small as to be almost nonexistent (the guys slept in their car the night before the launch to save money), so that even if everything went wrong, a second, third or fourth attempt would be easy. All it took was a grand idea and an afternoon poking around the hardware store.

The project website has few details on how the balloon was put together, but the students say they will be posting the step-by-step instructions soon. UPDATE: The instructions will be available for free, not $150, as earlier reported.

Project Icarus page [1337 Arts] http://space.1337arts.com/

Photo credit: 1337 Arts/Justin Lee and Oliver Yeh

Regards, Wil Taylor
KENAT (Mechanical) 16 Sep 09 10:58
That's what the world needs, more junk falling out of the sky to hit us.
KirbyWan (Aerospace) 17 Sep 09 8:51
Great post Wil, I've always been a fan of repurposing equipment to do something new and cool. I've always thought that using a hydrogen-filled high-altitude balloon as the first stage of a light orbital rocket would be a good idea. I know that it won't add any significant velocity to the rocket, and to get something orbital it needs velocity. But by getting it above most of the atmosphere you don't waste any thrust overcoming drag. The hydrogen would get it higher than the same balloon filled with helium, and if you could figure out a way of combusting the hydrogen it might even work as a fuel tank. I don't know how much the very large high-altitude balloons that NASA uses can lift, but I know they have reached 100k feet using helium, which is above 99.9% of the atmosphere. I guess I'd have to think about stability, since fins would be largely ineffective, but without aerodynamic loads the structure could be lighter.

-Kirby

Kirby Wilkerson
Remember, first define the problem, then solve it.
IRstuff (Aerospace) 17 Sep 09 10:28
Clearly, this comes from the heart of American bias against intellectualism. Why the comparison against NASA; was there a competition going on? Considering that both the GPS and the camera that they used owe much to NASA's and the Air Force's development of technology and exploration tools, it's no wonder that they can enjoy the benefits of money that NASA previously invested. So, yes, it's $150 now, but that's on top of billions of sunk costs by NASA.
wktaylor (Aeronautics) (OP) 22 Sep 09 12:08
Dizzying time-lapse video of still-photos [every-5-seconds] strung together: http://www.youtube.com/watch?v=MCBBRRp9DOQ Regards, Wil Taylor
wktaylor (Aeronautics) (OP) 22 Sep 09 12:17
And here is the video log of another amateur flight in Canada, ~late August 2009 (?)... remarkably similar to the flight by the MIT college students... http://www.youtube.com/watch?v=m-NMa8u0OJ8 Regards, Wil Taylor
https://eduzip.com/ask/question/find-the-complement-of-the-following-angle60circ-521170
Mathematics
# Find the complement of the following angle: $60^\circ$
##### SOLUTION
Two angles are said to be complementary if the sum of their measures is $90^\circ.$
The given angle is $60^\circ$
Let the measure of its complement be $x^\circ.$
Then,
$\implies x + 60 = 90$
$\implies x = 90 - 60$
$\implies x = 30^\circ$
Hence, the complement of the given angle measures $30^\circ$.
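The same computation as a one-line sketch:

```python
def complement(angle_deg):
    """Complement of an acute angle: the two measures sum to 90 degrees."""
    return 90 - angle_deg

assert complement(60) == 30
assert complement(60) + 60 == 90
```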
Published on 09th 09, 2020
|
2022-01-18 20:28:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.587054431438446, "perplexity": 9358.404370908547}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300997.67/warc/CC-MAIN-20220118182855-20220118212855-00420.warc.gz"}
|
https://www.mendeley.com/research-papers/55-tesla-coercive-magnetic-field-frustrated-sr3niiro6/
|
## 55 Tesla coercive magnetic field in frustrated Sr3NiIrO6
• Singleton J
• Kim J
• Topping C
#### Abstract
We have measured extremely large coercive magnetic fields of up to 55 T in Sr$_3$NiIrO$_6$, with a switched magnetic moment $\approx 0.8~\mu_{\rm B}$ per formula unit. As far as we are aware, this is the largest coercive field observed thus far. This extraordinarily hard magnetism has a completely different origin from that found in conventional ferromagnets. Instead, it is due to the evolution of a frustrated antiferromagnetic state in the presence of strong magnetocrystalline anisotropy due to the overlap of spatially-extended Ir$^{4+}$ 5$d$ orbitals with oxygen 2$p$ and Ni$^{2+}$ 3$d$ orbitals. This work highlights the unusual physics that can result from combining the extended $5d$ orbitals in Ir$^{4+}$ with the frustrated behaviour of triangular lattice antiferromagnets.
|
2018-08-16 00:24:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4716816842556, "perplexity": 4156.397890692959}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210387.7/warc/CC-MAIN-20180815235729-20180816015729-00501.warc.gz"}
|
https://content.ces.ncsu.edu/suggestions-for-establishing-a-blueberry-planting-in-western-north-carolina
|
NC State Extension Publications
## Introduction
Blueberry production in Western North Carolina differs from the main commercial production areas in the southeastern part of the state because of differing climate and soil conditions. Highbush blueberry cultivars (Vaccinium corymbosum) should be used exclusively; rabbiteye blueberries (Vaccinium ashei) will not consistently survive the low winter temperatures that occur in Western NC. For general information on pick-your-own (PYO) and home blueberry production, see HIL-202, Blueberry Production for Local Sales and Pick-Your-Own Operations and HIL-8207, Growing Blueberries in the Home Garden. For specific information on pruning blueberries or on using overhead irrigation for frost/freeze protection, see HIL-201B, Principles of Pruning the Highbush Blueberry and HIL-201E, Blueberry Freeze Damage and Protection Measures, respectively.
## Site Selection
• Well-drained, sandy or loamy soils.
• pH 4.0 to 5.0, high organic matter -- 3% or greater.
• Level or rolling land, elevated area with good air drainage.
• Possibilities for irrigation.
## Preparation of Land
• Test soil and bring to a medium level of phosphorous before planting.
• Eliminate problem weed species with herbicides or cultivation the year before planting.
• Incorporate bark humus or sawdust into the soil to bring organic matter to 3% or greater if needed in the rows (2- to 4-feet-wide strips) before planting.
• Set plants 5 feet apart in rows, 9 to 10 feet between rows, in late winter or early spring (as soon as the soil can be worked).
• Sawdust mulch (4 to 6 inches deep) over row immediately after setting plants.
• Row middles should be in sod (fescue or bluegrass).
## Planting Tips
• Before setting plants in the field, prune to remove at least half of the height of the canes, and thin to 1 to 3 strong canes per plant, removing all weak or twiggy growth.
• Early fruiting places stress on young plants. Plants should not be allowed to fruit the first 2 years. Remove fruiting wood and weak growth during the dormant season.
## Cultivar Selection
| Cultivar Name | Harvest Begins | Harvest Ends | Berry Size | Berry Color | Berry Flavor |
| --- | --- | --- | --- | --- | --- |
| *Weymouth | 6/15 to 7/1 | 7/15 to 8/1 | small | dark blue | poor |
| *Earliblue | 6/15 to 7/1 | 7/11 to 7/28 | medium | med blue | good |
| Spartan | 6/21 to 7/6 | 7/21 to 8/7 | large | light blue | excellent |
| Collins | 6/22 to 7/7 | 7/22 to 8/8 | medium-large | light blue | good |
| Patriot | 6/28 to 7/13 | 7/28 to 8/12 | large | med blue | excellent |
| Bluejay | 6/30 to 7/15 | 7/30 to 8/20 | med-large | light blue | good, mild |
| *Blueray | 7/3 to 7/19 | 8/3 to 8/20 | large | dark blue | good |
| *Bluecrop | 7/7 to 7/23 | 8/13 to 8/29 | med-large | light blue | good |
| *Berkeley | 7/7 to 7/23 | 8/7 to 8/20 | large | light blue | fair, mild |
| *Jersey | 7/14 to 7/30 | 8/18 to 9/3 | small | light blue | good |
| Coville | 7/20 to 8/5 | 8/20 to 9/5 | med-large | med blue | good, tart |
| Elliott | 7/30 to 8/15 | 8/30 to 9/15 | med | light blue | good |
* Varieties that have been grown successfully in mountain areas of North Carolina. The other varieties are suggested for trial planting. Other cultivars worthy of trial use include 'Duke', 'Sunrise' and 'Toro'.
## Availability of Plants
Nurseries usually have an ample supply of plants priced from $0.50 to $3.00 per plant, depending on quantity, variety, and size. Two-year-old plants are preferred. Additional plants may be obtained in later years from locally grown cuttings. See HIL-8207, Growing Blueberries in the Home Garden, for a current list of blueberry nurseries.
## Cultivation
Cultivate during the first year only to control weeds and grass. A 4- to 6-inch mulch of sawdust or bark helps control weeds and grass. Keep row middles mowed to conserve soil moisture and to keep the ground cover under control.
## Fertilization
(Caution: Blueberry plants are easily damaged by too much fertilizer.) Acid-forming fertilizers that have little limestone filler are desirable. Special azalea or rhododendron fertilizers meet this requirement, but the price may be prohibitive for more than a few bushes. A standard 12-12-12, 10-10-10 or 8-8-8 can be used if a special blueberry fertilizer is not available. High-analysis fertilizers such as 12-12-12 generally have lower amounts of limestone filler than lower-analysis fertilizers like 8-8-8. Ammonium nitrate (33.5-0-0) or ammonium sulfate (20.5-0-0) are desirable sources of supplemental nitrogen. If the soil pH is below 5.0, use ammonium nitrate; if the pH is above 5.0, use ammonium sulfate for its stronger acid-forming effect. Special attention should be given to leaf yellowing (over the whole area of both young and old leaves) caused by nitrogen deficiency when sawdust or bark was combined with the planting soil: as the sawdust or bark decomposes, organisms in the soil deplete the available nitrogen and cause a deficiency for the blueberry plant.
First Year — Uniformly distribute 16 lb of nitrogen per acre after the first flush of growth is complete (6 to 8 weeks after planting) within a band 1 foot on each side of the plant. The 16 lb of nitrogen are supplied by 133, 160 or 200 lb, respectively, of 12-12-12, 10-10-10 or 8-8-8.
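The product weights quoted above follow directly from the nitrogen analysis (the first number of the fertilizer grade): pounds of product = pounds of N ÷ (%N / 100). A quick check, sketched in Python:

```python
def product_needed(lbs_nitrogen, pct_n):
    # Pounds of fertilizer product that supply the given pounds of N.
    return lbs_nitrogen / (pct_n / 100)

# 16 lb N via 12-12-12, 10-10-10 and 8-8-8:
for grade in (12, 10, 8):
    print(round(product_needed(16, grade)))  # 133, 160, 200
```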
Fertilizer can also be applied by hand around individual bushes. Uniformly distribute 1/2 oz (1 Tbsp) of 12-12-12 within a circle 1 foot from the plant. Use proportionately more 10-10-10 or 8-8-8. Repeat applications using ammonium nitrate or ammonium sulfate every 4 to 6 weeks until July 1. Extend application intervals during dry periods until rainfall has totaled 4 inches. Use 50 lb per acre of ammonium nitrate or 80 lb per acre of ammonium sulfate in a 2-foot band (1 foot on each side of the bush). This rate corresponds to about 1/4 oz (1/2 Tbsp) of ammonium nitrate or 3/8 oz (3/4 Tbsp) of ammonium sulfate within the circle 1 foot from the plant.
Second Year — Double the first-year rates, but increase the band width to 3 feet or the circle around individual plants to 1 1/2 feet.
Bearing Plants — Apply 300-500 lb per acre of 12-12-12 or an equivalent amount of 10-10-10 or 8-8-8 in a 3- to 4-foot band. For individual bushes, apply the equivalent of 1/2 lb (1 cup) of 12-12-12 within a circle 3 feet from the plant. Sidedress with 30 lb of N (about 100 lb of ammonium nitrate or 150 lb of ammonium sulfate) per acre 4-6 weeks later. For individual bushes, this is 2 oz (1/4 cup) of ammonium nitrate or 3 oz (3/8 cup) of ammonium sulfate.
## Insect and Disease Control
Insects and diseases have not been serious problems; however, check for damage periodically. Wild blueberries are common in western North Carolina; and, therefore, some pest problems may be expected at one time or another. For more detailed information, refer to AG-468, Diseases and Arthropod Pests of Blueberry.
|
2020-04-01 18:55:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33187514543533325, "perplexity": 10203.952561601045}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370505826.39/warc/CC-MAIN-20200401161832-20200401191832-00337.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/algebra-a-combined-approach-4th-edition/chapter-12-section-12-7-common-logarithms-natural-logarithms-and-change-of-base-practice-page-885/2
|
## Algebra: A Combined Approach (4th Edition)
Using the property of common logarithms, $\log 1000=\log 10^{3}=3$
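The same property generalizes: $\log 10^{n}=n$ for the common logarithm. A quick numerical check in Python (`math.log10` is the base-10 logarithm):

```python
import math

# log 1000 = log 10^3 = 3
print(math.log10(1000))  # 3.0
```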
|
2018-07-19 12:02:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1981690227985382, "perplexity": 5733.900353814992}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590866.65/warc/CC-MAIN-20180719105750-20180719125750-00582.warc.gz"}
|
http://www.sawaal.com/time-and-distance-questions-and-answers/two-horses-start-trotting-towards-each-other-one-from-a-to-b-and-another-from-b-to-a-they-cross-each_6727
|
Q:
# Two horses start trotting towards each other, one from A to B and another from B to A. They cross each other after one hour and the first horse reaches B, 5/6 hour before the second horse reaches A. If the distance between A and B is 50 km. what is the speed of the slower horse?
A) 30 km/h B) 15 km/h C) 25 km/h D) 20 km/h
Explanation:
If the speed of the faster horse be $f_{s}$ and that of the slower horse be $s_{s}$, then
$f_{s}+s_{s}=\frac{50}{1}=50$
and $\frac{50}{s_{s}}-\frac{50}{f_{s}}=\frac{5}{6}$
Now, you can go through options.
The speed of slower horse is 20km/h
Since, 20+30=50
and $\frac{50}{20}-\frac{50}{30}=\frac{5}{6}$
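Rather than checking the options, the pair of equations can be searched directly; a small sketch in Python using exact rational arithmetic:

```python
from fractions import Fraction

TOTAL = 50  # f + s = 50, since together they cover the 50 km in 1 hour

for s in range(1, TOTAL):          # candidate slower-horse speeds, km/h
    f = TOTAL - s
    # slower horse needs 50/s hours, faster 50/f; the gap is 5/6 hour
    if Fraction(TOTAL, s) - Fraction(TOTAL, f) == Fraction(5, 6):
        print(s, f)  # 20 30
        break
```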
Q:
A thief goes away with a MARUTHI car at a speed of 40 kmph. The theft is discovered after half an hour and the owner sets off on a bike at 50 kmph. When will the owner overtake the thief, counting from his start?
A) 2 hrs 10 min B) 2 hrs C) 2 hrs 5 min D) 2 hrs 30 min
Explanation:
owner (50 kmph) |---------- 20 km ----------| thief (40 kmph)
Difference = 20kms
Relative Speed = 50 – 40 = 10 kmph
Time = 20/10 = 2 hours.
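The two-step reasoning above (head start, then closing speed) as a sketch in Python:

```python
head_start = 40 * 0.5        # thief's lead when the chase begins, km
closing_speed = 50 - 40      # relative speed, km/h
print(head_start / closing_speed)  # 2.0 hours
```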
Q:
A boatman can row 96 km downstream in 8 hr. If the speed of the current is 4 km/hr, then find how long it will take him to cover 8 km upstream.
A) 1.5 hrs B) 1 hrs C) 2.5 hrs D) 2 hrs
Explanation:
Speed in downstream = 96/8 = 12 kmph
Speed of current = 4 km/hr
Speed of the boatman in still water = 12 – 4 = 8 kmph
Speed in upstream = 8 – 4 = 4 kmph
Time taken to cover 8 km upstream = 8/4 = 2 hours.
Q:
A boat takes 19 hours for travelling downstream from point A to point B and coming back to a point C which is at midway between A and B. If the velocity of the stream is 4 kmph and the speed of the boat in still water is 14 kmph, what is the distance between A and B ?
A) 180 km B) 160 km C) 140 km D) 120 km
Explanation:
Speed in downstream = (14 + 4) km/hr = 18 km/hr;
Speed in upstream = (14 – 4) km/hr = 10 km/hr.
Let the distance between A and B be x km. Then,
x/18 + (x/2)/10 = 19 ⇔ x/18 + x/20 = 19 ⇒ x = 180 km.
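The same equation can be solved mechanically; a sketch in Python with exact fractions:

```python
from fractions import Fraction

down = 14 + 4   # downstream speed, km/h
up = 14 - 4     # upstream speed, km/h
# x/down + (x/2)/up = 19  =>  x = 19 / (1/down + 1/(2*up))
x = Fraction(19) / (Fraction(1, down) + Fraction(1, 2 * up))
print(x)  # 180
```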
Q:
A boatman can row 3 km against the stream in 20 minutes and return in 18 minutes. Find the rate of current ?
A) 1/3 kmph B) 2/3 kmph C) 1/4 kmph D) 1/2 kmph
Explanation:
Speed in upstream = Distance / Time = 3 x 60/20 = 9 km/hr.
Speed in downstream = 3 x 60/18 = 10 km/hr
Rate of current = (10-9)/2 = 1/2 km/hr.
Q:
A person takes 20 minutes more to cover a certain distance by decreasing his speed by 20%. What is the time taken to cover the distance at his original speed ?
A) 1hr B) 1 hr 20 min C) 1 hr 10 min D) 50 min
Explanation:
Let the distance and original speed be 'd' km and 'k' kmph respectively.
d/0.8k - d/k = 20/60 => 5d/4k - d/k = 1/3
=> (5d - 4d)/4k = 1/3 => d = 4/3 k
Time taken to cover the distance at original speed
= d/k = 4/3 hours = 1 hour 20 minutes.
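The algebra above in executable form (a sketch; `factor` is the reduced speed as a fraction of the original):

```python
from fractions import Fraction

extra = Fraction(20, 60)   # 20 extra minutes, in hours
factor = Fraction(4, 5)    # speed reduced by 20%
# d/(factor*k) - d/k = extra  =>  (d/k) * (1/factor - 1) = extra
original_time = extra / (1 / factor - 1)
print(original_time)  # 4/3 hours = 1 hour 20 minutes
```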
|
2017-04-30 18:39:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6200537085533142, "perplexity": 1459.085919838014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125841.92/warc/CC-MAIN-20170423031205-00643-ip-10-145-167-34.ec2.internal.warc.gz"}
|
https://franklin.dyer.me/notes/note/Liouvilles_Theorem
|
## Franklin's Notes
### Liouville's Theorem
Liouville's Theorem is as follows:
Liouville's Theorem. If $f$ is holomorphic and bounded on $\mathbb C$, then it is constant.
Proof. We have the following formula for the derivative of the function $f$: $$f'(z)=\frac{1}{2\pi i}\oint_\gamma \frac{f(\zeta)}{(\zeta-z)^2}\, d\zeta$$ where $\mathrm{ind}(\gamma,z)=1$. If we suppose that $f$ is bounded in magnitude, so that $|f(z)|\le M$ for all $z$, and we choose $\gamma$ to be a circle of radius $R$ centered at $z$, the integral identity can be transformed into the following bound: $$|f'(z)| \le \frac{1}{2\pi}\cdot\frac{M}{R^2}\cdot 2\pi R = \frac{M}{R},$$ but because $R$ can be chosen to be arbitrarily large, we have that $|f'(z)|=0$, and hence $f'(z)$ vanishes for all $z$, implying that $f$ is constant. $\blacksquare$
|
2023-02-07 01:13:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9958459138870239, "perplexity": 67.97820892234229}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500368.7/warc/CC-MAIN-20230207004322-20230207034322-00246.warc.gz"}
|
https://proxies-free.com/tag/jump/
|
## unity – How to make Player consistently jump *to* a specific target rather than *towards* it?
I’m trying to make it so if my player hits “E” then they jump precisely to a specific target. The current code results in player jumping towards the target but not really making it there.
I’ve tried adjusting forceMultiply and ForceMode and adding an offset but I just can’t get a result that simply consistently launches the player’s rigidbody `rb` towards the `anchorPoint` position.
The code I have is:
``````private void Update()
{
if (Input.GetKeyDown(KeyCode.E))
{
tryGrabbing = true;
}
}
private void FixedUpdate()
{
if (tryGrabbing)
{
}
}
``````
Thank you for any help!!
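The `FixedUpdate` body here is empty, but force-based launches tend to undershoot because drag and `ForceMode` tuning fight the kinematics. One reliable approach is to compute the exact initial velocity that lands on `anchorPoint` after a chosen flight time and assign it to `rb.velocity` directly. The math, sketched in Python (the 1-second `flight_time` and the function name are assumptions, not Unity API):

```python
def launch_velocity(start, target, gravity=(0.0, -9.81), flight_time=1.0):
    # Invert p(T) = p0 + v0*T + 0.5*g*T^2 for v0, component by component.
    return tuple(
        (t - s) / flight_time - 0.5 * g * flight_time
        for s, t, g in zip(start, target, gravity)
    )

# Launch from the origin to a point 4 m away and 3 m up, arriving in 1 s.
v0 = launch_velocity((0.0, 0.0), (4.0, 3.0))
```

In Unity this would translate to setting `rb.velocity` to the computed vector once when grabbing starts (and clearing `tryGrabbing` so it fires a single time).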
## bash – Use specific key to connect to jump host without modifying ssh config file
I’m trying to write a script that connect to a linux server by using an other one as a ProxyJump:
``````ssh -J root@proxyhost root@target
``````
I have two different keys (actually ssh certificate) and I would like to tell ssh to use one for the proxy host and the other for the target. I know I could modify the ssh config for that but I would like to specify it on the command line so I don’t have to rely on a valid ssh configuration.
So I’m looking for something like:
``````ssh -i proxyhostkey -J root@proxyhost -i targetkey root@target
``````
The ssh man page of the `-J` options says (emphasis mine):
Note that configuration directives supplied on the command-line
generally apply to the destination host and not any specified jump
hosts. Use ~/.ssh/config to specify configuration for jump hosts.
Is it possible to do want I want ? Or do I have to ensure that the `~/.ssh/config` file will be correct ?
## The Problem
I am using a Logitech K480 Bluetooth keyboard with a 7th generation iPad (iPadOS 14.6) and I often notice that when I am navigating a body of text I have typed (e.g. in the Notes or Facebook apps) with the keyboard arrows the cursor will jump to the beginning of a text field if I push the up arrow twice in a particular location. It can be quite annoying when trying to navigate text I’ve typed; it’s difficult to intentionally navigate using it (or avoid it) because of the variable length of lines in paragraphs.
The first push of the up arrow will move the cursor one line up and then the second push will jump the cursor; it happens both immediately and if I wait. If I push the down key after the jump (again either immediately or if I wait), the cursor will jump back to the original line (i.e. the line I was on before I pushed the up key the first time). The cursor jumping behaviour will occur again in the same manner when pushing the up key twice.
This behaviour consistently occurs if the cursor is at the end of a line following the first up arrow push, no matter the location of the paragraph on the page or the position of the line in the paragraph.
## Related Behaviour
There is some related behaviour. When using Command + left arrow to skip to the end of the line, pushing the up arrow will consistently not move the cursor to the line above but will instead jump the cursor to the beginning of the same line (pushing the down cursor subsequently will jump the cursor back to its original position at the end of the line). Sometimes when navigating a typed document with the arrow keys, a push of the up or down arrow key seems to move two lines at once.
## Possible Explanations
Together these behaviours and their consistency make me think this is an iPadOS ‘autoscrolling’ function, analogous to how deleting text with the onscreen keyboard speeds up the longer the backspace button is held. However, I have not found any documentation of this behaviour by Apple or anywhere else online.
I don’t think this is a feature of the K480 keyboard; it is not listed anywhere in the Logitech documentation, and the responder to my enquiry to Logitech seemed to have no knowledge of this phenomenon (though it was a particularly unhelpful response…). My work’s IT manager thinks it is likely an iPad software feature.
Perhaps this is just a feature of Apple devices that I’m not aware of (e.g. something that’s shared between Macs and iOS devices), but it is strange that I can’t seem to find even a mention of this behaviour.
## The Questions
Has anyone else observed this behaviour? Can it be customised or toggled? Is there some documentation (or personal experience) of the behaviour that lays out how it works?
## unity – How to stop a jump midair and remove y velocity in the process
What I want to do is to cut a jump midair by turning off the y velocity when the player releases the jump button (like in Hollow Knight). I managed to do that with:
``````rb.velocity = new Vector2(rb.velocity.x, 0f);
``````
My problem is when I cut the jump and I am midair, if I repeat the release jump button process, the whole thing happens again, the character staggers as long as I am releasing the jump button continously.
I would like to be able to do that only once when I am off the ground. I am a noob when it comes to coding so I don’t really know what to do.
`````` using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Events;
public class PlayerController : MonoBehaviour {
public Animator animator;
private Rigidbody2D rb;
public float speed;
private float moveInput;
public float jumpForce;
private bool isGrounded;
public Transform feetPos;
private float jumpTimeCounter;
public float jumpTime;
private bool isJumping;
private bool facingRight = true;
public float jumpDownForceY = 0;
void Start()
{
rb = GetComponent<Rigidbody2D>();
}
void Update()
{
if (Input.GetButtonDown("Jump") && rb.velocity.y == 0)
if (Mathf.Abs(moveInput) > 0 && rb.velocity.y == 0)
animator.SetBool("Running", true);
else
animator.SetBool("Running", false);
if (rb.velocity.y == 0)
{
animator.SetBool("Jumping", false);
animator.SetBool("Falling", false);
}
if (rb.velocity.y > 0.2)
{
animator.SetBool("Jumping", true);
}
if (rb.velocity.y < 0)
{
animator.SetBool("Jumping", false);
animator.SetBool("Falling", true);
}
if (isGrounded == true && Input.GetButtonDown("Jump"))
{
isJumping = true;
jumpTimeCounter = jumpTime;
rb.velocity = Vector2.up * jumpForce;
}
if (Input.GetButton("Jump") && isJumping == true)
{
if (jumpTimeCounter > 0){
rb.velocity = Vector2.up * jumpForce;
jumpTimeCounter -= Time.deltaTime;
} else {
isJumping = false;
}
}
if (Input.GetButtonUp("Jump"))
{
isJumping = false;
rb.velocity = new Vector2(rb.velocity.x, 0f);
}
}
void FixedUpdate()
{
moveInput = Input.GetAxisRaw("Horizontal");
rb.velocity = new Vector2(moveInput * speed, rb.velocity.y);
Flip(moveInput);
}
private void Flip(float moveInput)
{
if (moveInput > 0 && !facingRight || moveInput < 0 && facingRight)
{
facingRight = !facingRight;
Vector3 theScale = transform.localScale;
theScale.x *= -1;
transform.localScale = theScale;
}
}
}
``````
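A one-shot flag removes the stagger: allow the velocity zeroing at most once per airborne phase, and re-arm it when the character touches the ground. The logic, sketched in Python (method names are placeholders; in the controller above, `on_grounded` corresponds to the `rb.velocity.y == 0` ground check and `on_jump_released` to the `GetButtonUp("Jump")` branch):

```python
class JumpCut:
    """Allow the y-velocity zeroing to happen at most once per airborne phase."""

    def __init__(self):
        self.used = False

    def on_grounded(self):
        self.used = False          # re-arm when the player lands

    def on_jump_released(self, vy):
        if not self.used and vy > 0:
            self.used = True       # consume the cut
            return 0.0             # kill upward velocity once
        return vy                  # later releases leave velocity alone

cut = JumpCut()
```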
## libgdx – Should delta be applied to every change per frame ? (e.g. acceleration, deceleration, jump, etc.?)
I have written a game based on another game’s original physics. I have all the constants the original game used in the Sega Megadrive. For example:
``````float ACCELERATION = 0.03287f;
float DECELERATION = 0.4f;
float FRICTION = ACCELERATION;
float TOP_SPEED = 8f;
``````
when the player presses the right button I do:
`````` if (rightPressed) {
speed.x += ACCELERATION * delta; // accelerate
if (speed.x >= TOP_SPEED * delta) {
speed.x = TOP_SPEED * delta; // impose a top speed limit
}
}
...
else { // user is not moving the player (left/right)
speed.x -= Math.min(Math.abs(speed.x), FRICTION * delta) * Math.signum(speed.x);
}
``````
Several lines later:
`````` x += speed.x * delta;
y += speed.y * delta ;
``````
Is the delta here used correctly? Should it appear anywhere as I did or just the moment I set x and y? My understanding is that speed should accelerate according to delta as well.
Another problem I have is that, by using the original game constants, the character moves in slow motion (even if I only have delta at the moment of updating x and y) so I had to multiply all constants by 15000 in order to have normal playing speed. I expected I may have to multiply by 60 (because of the 60 frames per second of Sega Megadrive) but not by 15000.
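The rule is: `delta` is applied once per integration step. Anything with per-second units (acceleration, friction) gets one `delta` when added to a speed, and the speed gets one `delta` when added to a position; the top-speed cap is a comparison between two speeds, so it takes no `delta` at all (comparing against `TOP_SPEED * delta` shrinks the cap by the frame time). Since the Megadrive constants are per-frame values at 60 fps, convert them once to per-second units (speeds × 60, accelerations × 60² = 3600) rather than scaling by an empirical factor. A sketch in Python:

```python
FPS = 60                          # the original game updated 60 times per second
ACCEL = 0.03287 * FPS * FPS       # per-frame accel -> units per second^2
TOP_SPEED = 8.0 * FPS             # per-frame speed -> units per second

def step(x, vx, dt, right_pressed):
    if right_pressed:
        vx = min(vx + ACCEL * dt, TOP_SPEED)  # cap compares speeds: no dt here
    x += vx * dt                              # dt once per integration
    return x, vx

# One simulated second at 60 fps matches 60 per-frame updates of the original.
x, vx = 0.0, 0.0
for _ in range(60):
    x, vx = step(x, vx, 1 / 60, True)
```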
## c++ – SFML – My jump never stops
I'm making a short jump test in SFML. However, I have a problem: whenever the jump button is pressed the character will jump but will not fall down; he will keep going higher. At the beginning of the test I placed the sprite above the ground to see if the gravity function worked, but this still happens. Here is a video
Here is my code:
``````#include <SFML/Graphics.hpp>
using namespace sf;
float VelocityX = 0, VelocityY = 0;
float x = 0, y = 0;
int gravity = 2;
float dt;
void movement() {
if (y < 444) {
VelocityY += gravity * dt;
}
else if (y > 444) {
y = 444;
}
x += VelocityX;
y += VelocityY;
}
int main() {
RenderWindow window(VideoMode(800, 600), "jump test");
RectangleShape rect(Vector2f(20, 20));
const int moveSpeed = 500, jumpForce = 10;
Clock deltaTime;
while (window.isOpen())
{
dt = deltaTime.restart().asSeconds();
Event event;
while (window.pollEvent(event)) {
if (event.type == Event::Closed) {
window.close();
}
}
if (Keyboard::isKeyPressed(Keyboard::Right)) {
VelocityX = moveSpeed * dt;
}
else if (Keyboard::isKeyPressed(Keyboard::Left)) {
VelocityX = -moveSpeed * dt;
}
else {
VelocityX = 0;
}
if (Keyboard::isKeyPressed(Keyboard::Space)) {
VelocityY -= jumpForce;
}
movement();
rect.setPosition(x, y);
window.clear();
window.draw(rect);
window.display();
}
return 0;
}
``````
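The keep-going-higher comes from `VelocityY -= jumpForce` running on every frame Space is held: an impulse with no `dt`, reapplied ~60 times a second, easily outpacing gravity. Gate the impulse on being grounded so it applies once per jump, and zero the vertical velocity when landing. The shape of the fix, sketched in Python with assumed constants (screen y grows downward, as in SFML):

```python
GROUND_Y = 444.0
GRAVITY = 2000.0      # px/s^2 downward -- assumed value
JUMP_SPEED = 800.0    # px/s -- assumed value

def step(y, vy, dt, jump_pressed):
    on_ground = y >= GROUND_Y
    if jump_pressed and on_ground:
        vy = -JUMP_SPEED           # impulse applied once, only from the ground
    vy += GRAVITY * dt             # gravity integrates every frame
    y += vy * dt
    if y >= GROUND_Y:              # landed: clamp position, stop falling
        y, vy = GROUND_Y, 0.0
    return y, vy
```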
I am trying to create a google sheet that lists all payments made for tuition by family. I would like to use only one tab (not create a tab for each family) as I have about 120 families to keep track of.
Any thoughts on the best way to do this without having to scroll forever to get to the specific family I need to look at? Currently I have the families listed in alphabetical order in multiples of 4 going down, then the next set of 4, etc. This is easier when it comes to printing certain sections of the sheet for cross reference (i.e., not having to scroll down forever to find a name).
Date Family
Penland
8/6/20 $150 cc = app fee
8/17/20 $550 ck #2635 = reg, corp
9/10/20 $650 ck #3016 = Aug tuit
10/12/20 $650 ck #4361 = Oct tuit
11/10/20 $650 ck #3027 = Nov tuit
12/16/20 $650 ck #2143 = Dec tuit
1/20/21 $650 ck #2738 = Jan tuit
2/26/21 $650 ck #4622 = Feb tuit
I know how to use the F5 shortcut to get to a specific cell but how do you get to a specific cell by typing in the name of the family? That would help with the whole scrolling issue.
Or if anyone has a better idea of how to record these payments I’d love your help.
## plugins – Shortcode use creates a scroll error/ jump to top of page?
|
2021-07-28 03:36:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28900471329689026, "perplexity": 3760.2236756724287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153521.1/warc/CC-MAIN-20210728025548-20210728055548-00397.warc.gz"}
|
https://www.codespeedy.com/polar-to-rectangular-conversion-in-cpp/
|
# Polar to Rectangular conversion in C++
In this tutorial, we are going to learn Polar to Rectangular conversion in C++. Here we will learn about the rectangular form, polar form, and Polar to Rectangular conversion. After that, we will see the C++ program for the same.
Let us see how to represent z = a + ib in polar form.
Let r be the distance of the point (a, b) from the origin and θ the angle that line makes with the positive real axis. By the Pythagorean theorem:
r² = a² + b²
By using basic trigonometry:
=> cos θ = a/r and sin θ = b/r
=> a = r cos θ and b = r sin θ
Substituting the values of a and b in z = a + ib, we get
z = a + ib = r cos θ + (r sin θ)i = r(cos θ + i sin θ)
This is called the polar form.
## Conversion of polar form to rectangular form in C++
To convert the polar form r(cos θ + i sin θ) to the rectangular form a + ib, we have to calculate the values of a and b:
a = r cos θ
b = r sin θ
and substitute these values in a+ib to get the rectangular form.
#### C++ program
So, here is the C++ implementation of the above conversion. Note that cos() and sin() from <cmath> interpret the angle in radians, and that the computation is done in double so the fractional parts are not truncated:
#include <iostream>
#include <cmath>
using namespace std;

/*===============================================
FUNCTION FOR CONVERSION FROM POLAR TO RECTANGULAR
================================================*/
void convert_to_rect(double r, double angle)
{
    double a, b;
    // Calculating the values of a and b (angle is in radians)
    a = r * cos(angle);
    b = r * sin(angle);
    // Displaying the rectangular form
    cout << "Rectangular form is: " << a << " + i(" << b << ")" << endl;
}

/*======================================
MAIN FUNCTION
=======================================*/
int main()
{
    double r = 5, angle = 10;
    // Displaying the polar form
    cout << "Polar form is: " << r << "*(cos(" << angle << ")+isin("
         << angle << "))" << endl;
    // Passing r and angle to convert_to_rect
    convert_to_rect(r, angle);
    return 0;
}
Output:
Polar form is: 5*(cos(10)+isin(10))
Rectangular form is: -4.19536 + i(-2.72011)
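As a quick independent cross-check (this sketch is not part of the original tutorial), the same conversion can be reproduced with Python's cmath.rect(), which likewise treats the angle as radians:

```python
import cmath

r, angle = 5, 10              # angle in radians, matching the C++ program
z = cmath.rect(r, angle)      # z = r*cos(angle) + r*sin(angle)*1j
print(f"Rectangular form is: {z.real:.5f} + i({z.imag:.5f})")
# Rectangular form is: -4.19536 + i(-2.72011)
```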
http://www.physicsforums.com/forumdisplay.php?f=193&pp=40&sort=lastpost&order=asc&daysprune=-1&page=5
|
# Career Guidance
- Discuss topics on science professions and career paths. Job descriptions, salary, requirements...
Views: 775 Announcement: End of year contest, $75+$50 prize! Dec18-13 Meta Thread / Thread Starter Last Post Replies Views Pinned: Cosmology: a good career choice? ( 1 2 3 ... Last) i find the study of cosmology very fascinating and now contemplating to study it. But is it a wise choice? I mean,... Mar30-11 12:11 AM Entropee 79 63,894 Pinned: Mechanical vs. Civil engineering ( 1 2) Hi everyone. I am a sophmore studying to get a bs in mechanical engineering. I am thinking about switching to civil... Aug18-13 06:42 PM char808 35 130,198 Pinned: Anyone considering a career as a patent attorney? ( 1 2 3 ... Last) Hey folks, I'm Greg's sister. One career many scientists do not consider is becoming a patent attorney. I've been... Dec13-13 07:32 PM TheKracken 150 84,714 I have graduated with an electrical engineering degree last May, and have been on the job hunt since then. Right now I... Oct10-07 07:57 PM mr_coffee 13 3,159 Hi i am in my last year of high skool. I'll be joinin college next yr in the usa. I was wonderin wat are the most... Oct13-07 06:23 PM Ki Man 2 2,114 Hello, So, I've done some research and read other posts (http://www.physicsforums.com/showthread.php?t=99432) about... Oct13-07 09:37 PM Everdawn 11 3,026 I have a question concerning a career move. I have a chance to switch from the nuclear industry to the aerospace... Oct14-07 07:18 PM quetzalcoatl9 10 3,022 i dont know how many times this has been asked(probably 347374 times, coz thats what this Academic & Career Guidance... Oct18-07 03:15 PM ank_gl 2 1,812 I'm finishing up my bachelors in Mathematical Physics in april, and haven't done much planning beyond that. I was... Oct22-07 09:17 PM NeoDevin 0 3,247 Alright so I should probably start thinking about what to do in Uni so here are my 3 choices, and i have some... Oct23-07 10:20 PM ekrim 1 2,719 Hello. Maybe this sort of question has been answered before so excuse my laziness. At the moment I'm doing my 2nd... 
Oct26-07 10:00 PM eastside00_99 1 1,336 Hi, I'm currently in my first year of 6th form, meaning that in one and a half years i have to choose a major at... Oct27-07 06:37 PM BadlyAddicted 0 2,598 I was just in a thread over in the Academic and Career Guidance forum asking if there was a special thread for listing... Nov5-07 02:00 PM robphy 6 1,979 So I'm graduating in May with my B.S in Electrical Engineering Technology. well walking in may I still have to take my... Nov11-07 08:25 PM ENGRedcupcake 7 1,260 Hey This will be my first post on this forum, but I have been reading posts for quite a while. I find it really... Nov23-07 01:48 AM Chris Hillman 12 8,326 Dear friends, I have several problems regarding selection of major field and subfields. I greatly appreciate your... Nov24-07 09:56 PM rukshan 3 1,990 Recently, I've constantly been debating going into Academia, versus industry. First off, I absolutely love conducting... Nov27-07 12:23 PM robphy 2 3,182 Losing your house? I hope not, but if so the BBC knows who to blame :rolleyes: Nov28-07 04:07 PM ΔxΔp≥ћ/2 1 1,893 I'am in High School right now, and I'am still not really sure what job I would want to have. However, I do have... Nov28-07 06:52 PM Chris Hillman 3 1,561 I have a 4.0 so far but and flew through Diff EQ, Calc 1-3 with no problemo. However, Linear Algebra is kicking my... Nov29-07 05:19 AM ROLEX4life 20 5,549 In the entire world, what type of profession is most respectable? I know this is based on your opinions. I just want... Nov29-07 03:43 PM scorpa 78 14,711 Hi all members, Please help me out!!!!!!!!!!!!!11 I have finished my 12th with biology,chemistry and physics as... Dec8-07 04:38 PM Moonbear 1 1,739 I am usually not a poster of Dilbert cartoons, but I think this one is an exception. I think this may have to be a... Dec22-07 01:43 PM FredGarvin 8 1,693 Hey everyone, I've been wondering about where Physics Ph.D. graduates work a few years after they've graduated and... 
Dec30-07 04:52 PM will.c 27 5,601 Hi, Anyone a maths teacher or science teacher here? need advice if i should become a maths teacher? good... Jan3-08 02:22 AM RasslinGod 5 5,041 Post: Need advice on Numerical Methods Book User: careerassessm Infraction: Spam Level 2 Points: 10 ... Jan7-08 05:18 AM Integral 0 763 I'm a 3rd year university student, microbiology/biochem major. I also recently found after taking Organic One that I... Jan8-08 11:28 AM Spirochete 0 1,251 I have read it is highly cyclical I have also heard that it is very stable. Is Civil Engineering a stable career ? Jan17-08 09:37 AM RufusDawes 4 10,276 I'm an aerospace engineer, currently job hunting, but I'm keeping myself occupied by refreshing my memory in c++... Jan22-08 11:40 PM jaap de vries 9 6,662 Hi, I want to get a bachelor's in physics and math, with a master's in engineering.... that's the plan, but what... Jan24-08 12:41 AM rbj 1 2,147 I'm currently a high school senior planning on going to college to study Electrical Engineering. I've become quite... Jan25-08 02:59 PM Poop-Loops 9 2,360 Hi, my mate has just left school and wants a career in jet propulsion or aircraft structure, he lives in plymouth (uk)... Jan27-08 12:48 PM TVP45 15 2,588 Hello, I'm currently a sophomore in high school and have recently become very interested in physics. I do have a... Jan28-08 10:05 AM Laura1013 2 2,188 Hello I’m a junior in high school. I was just wondering if I could get some advice on careers. I am considering... Feb4-08 03:41 PM iceman99 8 2,863 i'm in last year of high school and it's time to decide a career path. i'm mostly into biology and chemistry. can... Feb7-08 09:33 PM sulymani 2 1,912 I am currently in my honours year of a biomedical science degree in Australia, currently working on my thesis.... Feb16-08 10:18 PM Scatterbrains 4 3,931 How would a criminal record (while obtaining a Phd) affect one's academic career, if any at all? 
Feb26-08 07:36 PM Moonbear 30 8,101 Hi, I'm a physics major at Cornell and after reading a few of the threads on this site, I'm worried I might... Feb26-08 09:57 PM ZapperZ 1 1,156 What does on do in an industry setting with a BS in physics? I really do no think I want to work IN physics. I... Mar3-08 12:39 PM clope023 9 2,479 Hey, To pursue a career in accountancy would I need to do a degree in accountancy? Some people say no and others... Mar9-08 05:52 PM _Mayday_ 0 961 So, I am a sophomore at a Math/Science Magnet school and I am considering applying to colleges a year early and... Mar30-08 09:41 PM mgiddy911 12 1,978 Hi, I'm currently a student in high school, and I found this forum. I want to go into aerospace engineering, due to... Apr2-08 12:32 PM bentrinh 16 4,963 I have a great interest in space exploration and am trying to find somewhere I could work in that field. I don't... Apr4-08 10:16 AM D H 11 2,456
https://docs.python.org/2/tutorial/floatingpoint.html
|
# 14. Floating Point Arithmetic: Issues and Limitations
Floating-point numbers are represented in computer hardware as base 2 (binary) fractions. For example, the decimal fraction
0.125
has value 1/10 + 2/100 + 5/1000, and in the same way the binary fraction
0.001
has value 0/2 + 0/4 + 1/8. These two fractions have identical values, the only real difference being that the first is written in base 10 fractional notation, and the second in base 2.
Unfortunately, most decimal fractions cannot be represented exactly as binary fractions. A consequence is that, in general, the decimal floating-point numbers you enter are only approximated by the binary floating-point numbers actually stored in the machine.
The problem is easier to understand at first in base 10. Consider the fraction 1/3. You can approximate that as a base 10 fraction:
0.3
or, better,
0.33
or, better,
0.333
and so on. No matter how many digits you’re willing to write down, the result will never be exactly 1/3, but will be an increasingly better approximation of 1/3.
In the same way, no matter how many base 2 digits you’re willing to use, the decimal value 0.1 cannot be represented exactly as a base 2 fraction. In base 2, 1/10 is the infinitely repeating fraction
0.0001100110011001100110011001100110011001100110011...
Stop at any finite number of bits, and you get an approximation.
On a typical machine running Python, there are 53 bits of precision available for a Python float, so the value stored internally when you enter the decimal number 0.1 is the binary fraction
0.00011001100110011001100110011001100110011001100110011010
which is close to, but not exactly equal to, 1/10.
It’s easy to forget that the stored value is an approximation to the original decimal fraction, because of the way that floats are displayed at the interpreter prompt. Python only prints a decimal approximation to the true decimal value of the binary approximation stored by the machine. If Python were to print the true decimal value of the binary approximation stored for 0.1, it would have to display
>>> 0.1
0.1000000000000000055511151231257827021181583404541015625
That is more digits than most people find useful, so Python keeps the number of digits manageable by displaying a rounded value instead
>>> 0.1
0.1
It’s important to realize that this is, in a real sense, an illusion: the value in the machine is not exactly 1/10, you’re simply rounding the display of the true machine value. This fact becomes apparent as soon as you try to do arithmetic with these values
>>> 0.1 + 0.2
0.30000000000000004
Note that this is in the very nature of binary floating-point: this is not a bug in Python, and it is not a bug in your code either. You’ll see the same kind of thing in all languages that support your hardware’s floating-point arithmetic (although some languages may not display the difference by default, or in all output modes).
Other surprises follow from this one. For example, if you try to round the value 2.675 to two decimal places, you get this
>>> round(2.675, 2)
2.67
The documentation for the built-in round() function says that it rounds to the nearest value, rounding ties away from zero. Since the decimal fraction 2.675 is exactly halfway between 2.67 and 2.68, you might expect the result here to be (a binary approximation to) 2.68. It’s not, because when the decimal string 2.675 is converted to a binary floating-point number, it’s again replaced with a binary approximation, whose exact value is
2.67499999999999982236431605997495353221893310546875
Since this approximation is slightly closer to 2.67 than to 2.68, it’s rounded down.
If you’re in a situation where you care which way your decimal halfway-cases are rounded, you should consider using the decimal module. Incidentally, the decimal module also provides a nice way to “see” the exact value that’s stored in any particular Python float
>>> from decimal import Decimal
>>> Decimal(2.675)
Decimal('2.67499999999999982236431605997495353221893310546875')
Another consequence is that since 0.1 is not exactly 1/10, summing ten values of 0.1 may not yield exactly 1.0, either:
>>> sum = 0.0
>>> for i in range(10):
... sum += 0.1
...
>>> sum
0.9999999999999999
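When many such values must be added, math.fsum() avoids this drift by tracking the intermediate partial sums exactly (a side note; this helper is not discussed in the text above):

```python
import math

total = 0.0
for _ in range(10):
    total += 0.1          # each addition introduces a tiny rounding error
print(total)              # 0.9999999999999999

print(math.fsum([0.1] * 10))   # 1.0 -- the rounding errors are compensated
```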
Binary floating-point arithmetic holds many surprises like this. The problem with “0.1” is explained in precise detail below, in the “Representation Error” section. See The Perils of Floating Point for a more complete account of other common surprises.
As that says near the end, “there are no easy answers.” Still, don’t be unduly wary of floating-point! The errors in Python float operations are inherited from the floating-point hardware, and on most machines are on the order of no more than 1 part in 2**53 per operation. That’s more than adequate for most tasks, but you do need to keep in mind that it’s not decimal arithmetic, and that every float operation can suffer a new rounding error.
While pathological cases do exist, for most casual use of floating-point arithmetic you’ll see the result you expect in the end if you simply round the display of your final results to the number of decimal digits you expect. For fine control over how a float is displayed see the str.format() method’s format specifiers in Format String Syntax.
## 14.1. Representation Error
This section explains the “0.1” example in detail, and shows how you can perform an exact analysis of cases like this yourself. Basic familiarity with binary floating-point representation is assumed.
Representation error refers to the fact that some (most, actually) decimal fractions cannot be represented exactly as binary (base 2) fractions. This is the chief reason why Python (or Perl, C, C++, Java, Fortran, and many others) often won’t display the exact decimal number you expect:
>>> 0.1 + 0.2
0.30000000000000004
Why is that? 1/10 and 2/10 are not exactly representable as a binary fraction. Almost all machines today (July 2010) use IEEE-754 floating point arithmetic, and almost all platforms map Python floats to IEEE-754 “double precision”. 754 doubles contain 53 bits of precision, so on input the computer strives to convert 0.1 to the closest fraction it can of the form J/2**N where J is an integer containing exactly 53 bits. Rewriting
1 / 10 ~= J / (2**N)
as
J ~= 2**N / 10
and recalling that J has exactly 53 bits (is >= 2**52 but < 2**53), the best value for N is 56:
>>> 2**52
4503599627370496
>>> 2**53
9007199254740992
>>> 2**56/10
7205759403792793
That is, 56 is the only value for N that leaves J with exactly 53 bits. The best possible value for J is then that quotient rounded:
>>> q, r = divmod(2**56, 10)
>>> r
6
Since the remainder is more than half of 10, the best approximation is obtained by rounding up:
>>> q+1
7205759403792794
Therefore the best possible approximation to 1/10 in 754 double precision is that over 2**56, or
7205759403792794 / 72057594037927936
Note that since we rounded up, this is actually a little bit larger than 1/10; if we had not rounded up, the quotient would have been a little bit smaller than 1/10. But in no case can it be exactly 1/10!
So the computer never “sees” 1/10: what it sees is the exact fraction given above, the best 754 double approximation it can get:
>>> .1 * 2**56
7205759403792794.0
If we multiply that fraction by 10**30, we can see the (truncated) value of its 30 most significant decimal digits:
>>> 7205759403792794 * 10**30 // 2**56
100000000000000005551115123125L
meaning that the exact number stored in the computer is approximately equal to the decimal value 0.100000000000000005551115123125. In versions prior to Python 2.7 and Python 3.1, Python rounded this value to 17 significant digits, giving ‘0.10000000000000001’. In current versions, Python displays a value based on the shortest decimal fraction that rounds correctly back to the true binary value, resulting simply in ‘0.1’.
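The fraction derived above can also be verified directly with the fractions module (an aside, not part of the original text); a Fraction constructed from a float is exact, so the comparisons below involve no rounding:

```python
from fractions import Fraction

# The exact value of the 754 double nearest to 1/10 equals the fraction
# derived above (Fraction reduces it to lowest terms internally).
print(Fraction(0.1) == Fraction(7205759403792794, 2**56))  # True
print(Fraction(0.1) == Fraction(1, 10))                    # False
```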
http://mathoverflow.net/questions/78706/probability-one-event-for-markov-chain
|
# Probability-one event for Markov chain
Let $X$ be a Markov chain, with countable state space $I$ and transition probability matrix $P$. $X$ is irreducible, but need not be recurrent. Let $S$ be a fixed subset of $I$.
Define a subset $K$ of $I$ to be "nice" if there exists $\epsilon = \epsilon_K$ such that for all $k \in K$, $P_{kS} \geq \epsilon$. (Here, $P_{kS} = \sum_{s \in S} P_{ks}$.)
Given: with probability 1, there exists a nice set which $X$ visits infinitely often. (Note that the set $K$, and therefore the value of $\epsilon_K$, may be random.)
Want to show: with probability 1, $X$ visits $S$ infinitely often.
It seems like it ought to be either trivially true or trivially false, but I'm failing to determine which...
I'm a bit confused by this problem. How can $K$ be random? Any subset of $I$ is either nice or it isn't, and that determination only depends on $I$, $S$, and $P$, all of which are non-random entities. Am I missing something? I can delete this later, as I realize this doesn't qualify as an answer. I would just like some clarification. – Jeremy Voltz Oct 20 '11 at 21:45
Let me write this in terms of the underlying probability space $\Omega$: I know that for almost all $\omega \in \Omega$, there exists $K = K(\omega)$ such that $X_n(\omega) \in K(\omega)$ for infinitely many $n$. However, it may not be true that there is a single deterministic $K$ which almost all $X_n(\omega)$'s visit. – Elena Yudovina Oct 20 '11 at 22:18
Thank you, that makes sense. – Jeremy Voltz Oct 22 '11 at 15:17
If I've understood your problem correctly, an argument along these lines may help:
Let ${\cal F}_n=\sigma(X_0,X_1,\dots,X_n)$ and define $S_n=\left(X_n\in S\right)$, so that $S_n\in {\cal F}_n$. We will use Levy's generalization of the Borel-Cantelli Lemma which states that $$\left( S_n\mbox{ i.o.} \right)=\left(\sum_n \mathbb{P}(S_{n+1} | {\cal F}_{n})=\infty\right).$$
Let's calculate the conditional probability. Letting $E(x)=\{ X_{n}=x_{n},X_{n-1}=x_{n-1},\dots,X_0=x_0\}$ be a generic partition set, we get \begin{eqnarray*} \mathbb{P}(S_{n+1}\,|\,{\cal F}_n)&=&\sum_x\mathbb{P}(X_{n+1}\in S\,|\,E(x))1_{E(x)}\cr &=&\sum_x\mathbb{P}(X_{n+1}\in S\,|\,X_n=x_n)1_{E(x)}\cr &=&\sum_x P(x_n, S)1_{E(x)}\cr &=&P(X_n, S), \end{eqnarray*} where $P$ is the transition kernel for the Markov chain.
The definition of "nice" set gives $P(X_n,S)\geq \varepsilon_K 1_K(X_{n}),$ and since $(X_n)$ visits $K$ infinitely often, we have $$\sum_n P(X_n,S)\geq \varepsilon_K \sum_n 1_K(X_{n})=\infty$$ almost surely.
I think this misses the point that $K(\omega)$ was meant to be a random set depending on $\omega$. – Anthony Quas Oct 21 '11 at 4:56
In my solution $\varepsilon_K(\omega)>0$ and $K(\omega)$ can be random. Only $S$ must be a non-random set. – Byron Schmuland Oct 21 '11 at 15:17
https://academic.oup.com/aob/article/113/6/909/87755/The-hydroclimatic-and-ecophysiological-basis-of
|
## Abstract
Background
Tropical montane cloud forests (TMCFs) are characterized by a unique set of biological and hydroclimatic features, including frequent and/or persistent fog, cool temperatures, and high biodiversity and endemism. These forests are one of the most vulnerable ecosystems to climate change given their small geographic range, high endemism and dependence on a rare microclimatic envelope. The frequency of atmospheric water deficits for some TMCFs is likely to increase in the future, but the consequences for the integrity and distribution of these ecosystems are uncertain. In order to investigate plant and ecosystem responses to climate change, we need to know how TMCF species function in response to current climate, which factors shape function and ecology most and how these will change into the future.
Scope
This review focuses on recent advances in ecophysiological research of TMCF plants to establish a link between TMCF hydrometeorological conditions and vegetation distribution, functioning and survival. The hydraulic characteristics of TMCF trees are discussed, together with the prevalence and ecological consequences of foliar uptake of fog water (FWU) in TMCFs, a key process that allows efficient acquisition of water during cloud immersion periods, minimizing water deficits and favouring survival of species prone to drought-induced hydraulic failure.
Conclusions
Fog occurrence is the single most important microclimatic feature affecting the distribution and function of TMCF plants. Plants in TMCFs are very vulnerable to drought (possessing a small hydraulic safety margin), and the presence of fog and FWU minimizes the occurrence of tree water deficits and thus favours the survival of TMCF trees where such deficits may occur. Characterizing the interplay between microclimatic dynamics and plant water relations is key to foster more realistic projections about climate change effects on TMCF functioning and distribution.
## INTRODUCTION
The climatic conditions associated with elevation in tropical landscapes favour the occurrence of a unique and endangered ecosystem known as tropical montane cloud forests (TMCFs). Despite the occurrence of TMCFs in a wide range of climatic envelopes (Jarvis and Mulligan, 2010), the main common climatic attribute for every TMCF is frequent and persistent cloud immersion (i.e. fog; Scholl et al., 2010; Bruijnzeel et al., 2011). Fog frequency and intensity is an important factor determining several structural features of TMCFs (Grubb and Whitmore, 1966; Bruijnzeel and Veneklaas, 1998; Bruijnzeel and Hamilton, 2000; Bruijnzeel, 2001). As a general rule, there is an increase in epiphyte cover and decrease in tree height, canopy stratification and leaf area index in high-altitude TMCFs (also called upper montane cloud forests). TMCFs located at lower altitudes (lower montane cloud forests) are closer structurally to lowland tropical forests (Bruijnzeel and Hamilton, 2000; Bruijnzeel, 2001; Bruijnzeel et al., 2011).
The climatic and structural characteristics of TMCFs are widely assumed to be responsible for some of the ecosystem services provided by TMCFs. The environments of TMCFs are thought to increase streamflow volume, not only because of the additional inputs of cloud water interception (CWI), but also because of the low average atmospheric demand and thus low evapotranspiration, caused by the frequent cloud immersion (Bruijnzeel et al., 2011). Water quality may also be improved by the role of the TMCF cover in reducing soil erosion and landslides compared with other land uses (Sidle et al., 2006). These ecosystem services might be extremely valuable in some regions in which significant populations occur downslope and downstream of cloud forests, and are sometimes considered as a basis for TMCF conservation programmes through ‘payment for ecosystem services’ schemes (Bruijnzeel et al., 2011).
Cloud forests are also extremely valuable from a biological conservation point of view; the uniqueness of TMCF environments is also reflected in their high biodiversity and endemism levels (Bruijnzeel et al., 2010a, b). These ecosystems have a unique floristic composition, significantly distinct from that of lowland tropical forest (Grubb and Whitmore, 1966; Bertoncello et al., 2011). Neotropical TMCFs present an abundance of temperate-climate taxa, such as Podocarpus, Alnus, Drimys, Weinmannia and Magnoliaceae (Webster, 1995; Bertoncello et al., 2011). Based on the disjunct distribution of these taxa in tropical landscapes and palinological records, some authors suggest that the modern floristic composition and distribution of Neotropical TMCFs could be explained by Pleistocene climatic fluctuations, causing expansions and retractions in vegetation (Webster, 1995; Meireles, 2003; Bertoncello et al., 2011) and the reconnection between North and South America during the Pliocene, which allowed the migration of Andean and cordilleran taxa between north and south (Webster, 1995). The relatively low endemism at species level, despite high generic endemism, suggests recent and rapid speciation in TMCFs (Webster, 1995).
Various assessments of the distribution of TMCFs exist. The most comprehensive assessment of the distribution of cloud forests throughout the tropics is that compiled under the auspices of UNEP–WCMC by Aldrich et al. (1997). This is a database comprising >560 point observations distributed throughout the tropics and representing areas that have been defined as cloud forests in the literature or by local experts. These point observations have been used to help develop spatial assessments of cloud forest distribution on the basis of nationally or regionally defined elevational bands and remotely sensed forest cover assessments (Bubb et al., 2004; Scatena et al., 2010). The derived total cover of TMCFs was estimated to be in the order of 215 000 km2 (1·4 % of the total area of all tropical forests). However, TMCFs are defined by the frequency and persistence of cloud cover, not by elevation, and Jarvis and Mulligan (2011) stress the very wide range of climatic and landscape situations (temperature, rainfall, altitude, distance to sea and mountain size) represented by the >560 observed UNEP–WCMC cloud forest sites. Because this climatic variability is not just controlled by elevation, elevation-based approaches to estimate cloud forest distribution will be able to indicate the major cloud forest areas but they are not likely to identify all cloud-affected forests and may thus represent an underestimate of the true cloud forest distribution and extent.
The cloud frequency-based pan-tropical assessment of Mulligan (2010) models the distributions of cloud forest hydroclimatically to define the distribution of hydroclimatic cloud-affected forests (CAFs) rather than elevationally or ecologically defined TMCFs (Fig. 1). The most affected CAFs will have ecological adaptations that are characteristic of TMCFs (ecologically or elevationally defined), but lesser CAFs may still be hydrologically and ecologically distinct from forests that are not cloud affected but might not be considered as cloud forest structurally or ecologically. CAFs represent some 14·2 % of all tropical forests and cover an area of 2·21 Mkm2 between 23·5°N and 35°S (Mulligan, 2010).
Fig. 1.
Global distribution of cloud-affected forests (CAFs) defined hydroclimatically (Mulligan, 2010) in South-east Asia and Oceania (A), Paleotropics (B) and Neotropics (C). Areas with >40 % tree cover are shown; the darkest shades are 100 % tree cover.
The archipelagic distribution of TMCFs (Luna-Vega et al., 2001) and the relationship between altitude and TMCF structure and composition (Grubb and Whitmore, 1966; Bruijnzeel and Hamilton, 2000; Bruijnzeel, 2001; Bertoncello et al., 2011; Bruijnzeel et al., 2011) raise the question of which ecophysiological traits allow TMCF plants to occupy, and restrict them to, these specific hydroclimatic conditions. Addressing this question will help provide a mechanistic basis for investigating how these ecosystems will respond to the climatic changes projected to affect tropical montane regions.
Temperature projections of general circulation models (GCMs) agree reasonably well that tropical mountains will warm over the coming decades. Some models project an increase in the height of cloud formation (‘cloud uplift’) and higher evapotranspiration in tropical montane regions as a consequence of increasing earth surface temperatures (Still et al., 1999). These changes may affect TMCF structure and functioning in a number of ways, from drought-induced mortality of some tree species (Lowry et al., 1973; Werner, 1988) to an upward shift of lowland fauna and flora and invasion by pre-montane and lowland tropical species (Pounds et al., 1999). There is much less agreement between GCMs concerning the projected distribution of rainfall in tropical mountains (Mulligan et al., 2011), and different GCMs disagree in both the magnitude and direction of rainfall change at the regional scale (Bruijnzeel et al., 2011). Given the spatial complexity of climate in general, and of rainfall in particular, in tropical mountains, the local-scale impacts of these rainfall changes are impossible to project (Oliveira et al., 2014). Given their limited geographic extent, island-like isolation by elevation, surrounding land use change and strong dependence on a unique set of climate characteristics, it is clear that changes in rainfall and temperature will place significant stress on these systems.
In this review, we link the unique hydrometeorological conditions of TMCFs with the distribution, functioning and survival of TMCF vegetation in current and future climates. We do so by coupling published and new data on TMCF plant water relations, including recent advances regarding foliar water uptake (FWU; Eller et al., 2013; Goldsmith et al., 2013) and the hydraulic safety margin (Choat et al., 2012), with published and new data on current and projected TMCF microclimate.
## HYDROCLIMATIC CONDITIONS AND HYDRAULIC FUNCTIONING OF TMCF TREES
### Temporal and spatial patterns of fog occurrence in TMCFs
Mulligan (2010) calculates the lifting condensation level (LCL) for four periods of the day for each month on the basis of pan-tropical climatological data and finds that the LCL is very frequently at ground level (i.e. fog is possible) in the Andes and Central America, but also in Africa and, to a lesser extent, parts of South-east Asia. However, elevation was not a good surrogate for satellite-observed cloud frequency across the tropics. Although the minimum observed cloud frequency does increase linearly with altitude (areas close to sea level having cloud frequencies of around 30 % in the tropics), sites at a particular altitude can show a range of cloud frequencies, depending on other factors. Nevertheless, at altitudes >1400 m a.s.l., cloud frequencies are generally >65 % (Mulligan, 2010).
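The link between the LCL and fog occurrence can be illustrated with a much simpler first-order approximation than the climatological calculation of Mulligan (2010): by Espy's rule of thumb, the cloud base rises roughly 125 m for every degree Celsius of dew-point depression. The sketch below is illustrative only and is not the method used in the cited study.

```python
# First-order LCL estimate via Espy's rule of thumb: cloud base rises
# ~125 m per degree C of dew-point depression. Illustrative only; not
# the climatological method of Mulligan (2010).

def lcl_height_m(temp_c: float, dewpoint_c: float) -> float:
    """Approximate LCL height above ground (m)."""
    return 125.0 * max(temp_c - dewpoint_c, 0.0)

def fog_possible(temp_c: float, dewpoint_c: float, threshold_m: float = 10.0) -> bool:
    """Fog is plausible when the LCL is effectively at ground level."""
    return lcl_height_m(temp_c, dewpoint_c) <= threshold_m

# Saturated surface air (T == Td) puts the LCL at ground level:
print(lcl_height_m(17.7, 17.7), fog_possible(17.7, 17.7))   # 0.0 True
# A drier lowland air mass puts the cloud base ~1250 m up:
print(lcl_height_m(24.0, 14.0), fog_possible(24.0, 14.0))   # 1250.0 False
```

The threshold of 10 m for "fog at ground level" is an arbitrary illustrative cutoff.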
Jarvis and Mulligan (2011) found the climate of the UNEP–WCMC TMCF sites to be highly variable, with an average rainfall of 2000 mm year–1 and an average temperature of 17·7 °C. They also found TMCFs to be wetter (rainfall being 184 mm year–1 higher on average), cooler (by 4·2 °C on average) and less seasonally variable than the average for all montane forests (defined as all tropical forests at >500 m elevation). These global generalizations hide significant variability within and between sites.
Fog tends to occur much more frequently in the afternoon and night (Mulligan, 2010) and may persist through the dry season when rainfall is low or zero. This may be important hydrologically and ecophysiologically in seasonally dry environments (Bruijnzeel et al., 2011). Observations of cloud frequency (2001–2006) based on the MODIS cloud climatology developed by Mulligan (2010) for CAF areas in Colombia (forest cover >40 %) show an area-average frequency of 0·66. Rainfall extracted from WorldClim for the same CAF areas and months shows an average of 179 mm month–1 (Table 1). Diurnality of fog frequency for Colombian TMCFs defined using the elevational limits of Bubb et al. (2004), CAFs defined by Mulligan (2010) and all land in Colombia is shown in Table 2. Clearly, CAFs do not have significantly greater cloud frequency than all land in Colombia except in the evening, whereas the elevationally defined TMCFs have very low observed cloud frequency at this time. High cloud frequency during the day will lead to lower incident solar radiation and photosynthetically active radiation (PAR) loads, with an increased diffuse fraction of light radiation (Letts and Mulligan, 2005; Mercado et al., 2009), whereas high night-time cloud frequency will tend to reduce outgoing long-wave radiation and thus daily temperature range. In contrast to the pan-tropical mean, for Colombia the mean annual rainfall for CAFs and TMCFs is lower than for all land, though the monthly minimum for CAFs (97 mm) is higher than the minimum for all land (75 mm). Mean annual temperature for CAFs in Colombia is 18 °C (lower than all land at 24 °C) but not as low as for TMCFs at 13·25 °C. Jarvis and Mulligan (2011) show that rainfall seasonality is highly variable between TMCF sites, with most showing low seasonality but some having a strong seasonality of rainfall.
Table 1.
Climatic characteristics of CAFs, TMCFs and all land in Colombia
| Variable | Area | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec | Mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Cloud frequency (fraction) | TMCFs* | 0·36 | 0·51 | 0·63 | 0·76 | 0·79 | 0·74 | 0·72 | 0·66 | 0·77 | 0·74 | 0·62 | 0·57 | 0·66 |
| | CAFs | 0·39 | 0·53 | 0·63 | 0·78 | 0·79 | 0·74 | 0·74 | 0·66 | 0·76 | 0·76 | 0·63 | 0·58 | 0·67 |
| | All land | 0·36 | 0·53 | 0·65 | 0·78 | 0·79 | 0·75 | 0·74 | 0·69 | 0·76 | 0·75 | 0·63 | 0·58 | 0·67 |
| Rainfall (mm month–1) | TMCFs* | 75 | 86 | 110 | 180 | 180 | 140 | 130 | 120 | 130 | 190 | 160 | 110 | 134·25 |
| | CAFs | 97 | 110 | 140 | 230 | 240 | 200 | 170 | 170 | 190 | 250 | 210 | 140 | 178·92 |
| | All land | 91 | 110 | 140 | 250 | 300 | 280 | 260 | 250 | 240 | 280 | 220 | 140 | 213·42 |
| Temperature (°C) | TMCFs* | 13 | 13 | 14 | 14 | 14 | 13 | 13 | 13 | 13 | 13 | 13 | 13 | 13·25 |
| | CAFs | 18 | 19 | 19 | 19 | 19 | 18 | 18 | 18 | 18 | 18 | 18 | 18 | 18·33 |
| | All land | 24 | 24 | 25 | 24 | 24 | 24 | 23 | 24 | 24 | 24 | 24 | 24 | 24·00 |

*Bubb et al. (2004); 2000–3500 m a.s.l.
Table 2.
Diurnality of satellite observed cloud frequency (2000–2006) for CAFs, TMCF and all land in Colombia
| Local time | TMCFs* | CAFs | All land |
| --- | --- | --- | --- |
| 0600–1200 | 0·62 | 0·66 | 0·67 |
| 1200–1800 | 0·78 | 0·75 | 0·75 |
| 1800–2400 | 0·56 | 0·80 | 0·76 |
| 2400–0600 | 0·62 | 0·60 | 0·62 |

*Bubb et al. (2004); 2000–3500 m a.s.l.
Fog affects the mean and seasonal behaviour of solar radiation, temperature and precipitation in TMCFs, but perhaps the key feature of TMCF climate for climate change studies is the altitudinally and topographically controlled spatial variability of climate, which means that cloud forests occur over highly restricted ranges with sharp climatic gradients. Table 3 shows the spatial gradients of temperature, precipitation and cloud frequency for all land, TMCFs and CAFs (>40 % tree cover) in Colombia. Although gradients of cloud frequency are only slightly steeper in TMCFs and CAFs than for all land, gradients of rainfall and temperature are much steeper. It is these gradients that make cloud forest ecosystems sensitive to climate change, because they create barriers to dispersal and migration as cloud forest climates shift.
Table 3.
Spatial variability in the climate characteristics of TMCFs, CAFs and all land in Colombia expressed as the mean gradient for each variable in each zone
| Variable | Area | Mean gradient (units per km) |
| --- | --- | --- |
| Cloud frequency (fraction) | TMCFs* | 0·012 |
| | CAFs | 0·012 |
| | All land | 0·011 |
| Rainfall (mm month–1) | TMCFs* | 90 |
| | CAFs | 110 |
| | All land | 46 |
| Temperature (°C) | TMCFs* | 0·9 |
| | CAFs | 0·8 |
| | All land | 0·3 |

*Bubb et al. (2004); 2000–3500 m a.s.l.
### Water use patterns of TMCF trees
The linkages between the highly variable hydrometeorological conditions in TMCFs and vegetation water use remain poorly explored. Though there is a paucity of studies quantifying tree transpiration in TMCFs compared with other systems, Bruijnzeel et al. (2011) show a general negative relationship between TMCF vegetation water use and altitude. Forests located at higher altitudes are more affected by fog (upper montane cloud forests and elfin cloud forests) and transpire less (380·4 ± 31·8 mm year–1) than lower montane cloud forests (646 ± 38·8 mm year–1) and lowland evergreen rain forests (1004 ± 81·6 mm year–1). This negative relationship between vegetation water use and altitude can be attributed mostly to increased cloud cover (and thus reduced evaporative demand) at higher altitudes (Zotz et al., 1998) as well as to reduced leaf area index (Bruijnzeel et al., 2011). Cavelier (1996) proposed that hydraulic inefficiency could constrain TMCF tree transpiration, but several studies have shown that peak transpiration rates of TMCF trees are comparable with those of lowland forests (Zotz et al., 1998; Feild and Holbrook, 2000; Santiago et al., 2010). Santiago et al. (2010) even showed that xylem area per unit of leaf area increased with altitude in the Hawaiian tree species Metrosideros polymorpha.
The TMCFs located at higher altitudes are usually exposed to more persistent fog events (Grubb and Whitmore, 1966; Jarvis and Mulligan, 2011), a microclimatic condition that affects tree water relations through transpiration suppression and by the addition of a water subsidy to the ecosystem (Fig. 2). Transpiration suppression caused by fog has been described in several fog-affected ecosystems (Burgess and Dawson, 2004; Reinhardt and Smith, 2008; Limm et al., 2009), including TMCFs (Gotsch et al., 2014; C. B. Eller et al., unpubl. data). The mechanism behind this suppression is probably the decrease in atmospheric vapour pressure deficit (VPD) and PAR associated with fog events (Reinhardt and Smith, 2008), which decreases the driving gradient for water loss by the vegetation. The formation of a water film on leaves also limits gas exchange and contributes to transpiration suppression (Smith and McClean, 1989; Brewer and Smith, 1997; Letts and Mulligan, 2005). Moreover, high-altitude TMCFs are subjected to lower mean air temperatures when compared with lowland forests (Bruijnzeel et al., 2011; Jarvis and Mulligan, 2011), which leads to lower VPD and, consequently, lower plant transpiration rates.
Fig. 2.
Scenarios illustrating the direction and magnitude of water fluxes in tropical montane cloud forests (TMCFs) under contrasting micrometeorological conditions. In scenario (A), clear days and nights, TMCF trees lose water to the atmosphere by transpiration (E). In scenarios (B) and (C), leaf-wetting events suppress transpiration of TMCF trees and provide additional water supply to the vegetation by cloud water interception (CWI), which is the water intercepted by the plant aerial tissue that then drips to the soil, and by foliar water uptake (FWU) that is the water directly intercepted and absorbed by plant leaves which may be redistributed downwards through the plant xylem to the soil (see Eller et al., 2013). The magnitude of FWU, CWI and water drip to the soil will depend on: (1) the duration and magnitude of fog events; (2) canopy water storage capacity; and (3) atmosphere–soil water potential gradient (WPG). In scenario (B), fog events of high magnitude and long duration saturate canopy water storage capacity and increase CWI, causing an increase in soil water potential and a decrease in FWU. In scenario (C), hydrological inputs of low magnitude and/or duration wet the canopy but not the soil, increasing the WPG and the magnitude of FWU. However, during the wet season or in very humid TMCFs, when the soil has high water potential, the FWU should be minor regardless of fog event intensity, because of the small WPG. Soil water potential values are monthly means of the wettest month (–0·19 MPa) and driest month (–0·77 MPa) in a Brazilian cloud forest stand.
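The suppression of transpiration by fog and cool air can be made concrete by computing the vapour pressure deficit (VPD). The sketch below uses the standard Tetens formula for saturation vapour pressure; it is a minimal illustration of the mechanism, not a calculation from the cited studies.

```python
import math

# Vapour pressure deficit (VPD) from air temperature and relative humidity,
# using the Tetens formula for saturation vapour pressure. A minimal sketch
# to show why fog (RH near 100 %) and the lower temperatures of high-altitude
# TMCFs both shrink the driving gradient for transpiration.

def saturation_vp_kpa(temp_c: float) -> float:
    """Saturation vapour pressure (kPa) at temp_c (deg C), Tetens formula."""
    return 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))

def vpd_kpa(temp_c: float, rh_percent: float) -> float:
    """Vapour pressure deficit (kPa)."""
    return saturation_vp_kpa(temp_c) * (1.0 - rh_percent / 100.0)

# Fog-immersed cloud forest air (13 degC, saturated): VPD collapses to zero.
print(round(vpd_kpa(13.0, 100.0), 3))   # 0.0
# Warm, drier lowland air (24 degC, 60 % RH) exceeds the ~1-1.2 kPa range
# at which stomata of some TMCF trees are reported to close.
print(round(vpd_kpa(24.0, 60.0), 2))
```

The example temperatures follow the Colombian means in Table 1; the 60 % relative humidity is an assumed illustrative value.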
Night-time transpiration is another common and important component of tree and ecosystem water balance in TMCFs (Dawson et al., 2007). The few studies of night-time transpiration in TMCFs show moderate to very high water losses at night (Feild and Holbrook, 2000; Rosado et al., 2012; Gotsch et al., 2014). The functional meaning of night-time transpiration is not completely clear, but it is often suggested that it can contribute to nutrient acquisition (Scholz et al., 2007; Snyder et al., 2008). Nocturnal sap flow in TMCF trees during drier nights could compensate for the lack of nutrient acquisition during periods in which transpiration is suppressed by leaf-wetting events.
Soil water conditions might pose additional constraints on the water use of TMCF vegetation. Extreme conditions, such as waterlogging, constrain plant transpiration in some TMCFs because of poorly developed root systems or the lower leaf area of trees inhabiting anoxic soils (Jane and Green, 1985; Santiago et al., 2000). Soil water deficits, documented in seasonally dry TMCF areas (Jarvis and Mulligan, 2011), can cause a decrease in tree crown conductance and constrain plant transpiration (Kumagai et al., 2004, 2005; Chu et al., 2014).
Stomatal behaviour of TMCF trees might be quite conservative, closing in response to relatively low VPD (Jane and Green, 1985), such that high VPD (>1–1·2 kPa) inhibits tree transpiration even under non-limiting soil water conditions (Motzer, 2005). This type of stomatal behaviour is usually associated with plants vulnerable to hydraulic failure (McDowell et al., 2008). Despite the paucity of tree hydraulic data for TMCFs, Santiago et al. (2000) demonstrated that M. polymorpha trees from TMCFs are more susceptible to xylem cavitation than lowland forest trees. Drimys brasiliensis, one of the most abundant and ubiquitous tree species in Brazilian TMCFs (Bertoncello et al., 2011), also has a hydraulic system that is very vulnerable to drought, losing 50 % of its hydraulic conductivity at –1·56 MPa (Fig. 3), a very high value when compared with the average of –2·6 MPa for tropical forests (Choat et al., 2012). In addition, this species has a very narrow xylem hydraulic safety margin, indicating that it operates close to the steepest point of its xylem vulnerability curve and is therefore very prone to catastrophic embolism (Fig. 3). These results support the view that TMCF trees are particularly vulnerable to droughts and might depend on alternative water sources, such as cloud water, to avoid hydraulic failure.
Fig. 3.
Embolism vulnerability curve showing loss of hydraulic conductivity (PLC, %) as a function of xylem water potential (Ψx, MPa) for branches of Drimys brasiliensis (Winteraceae), a dominant species in Brazilian tropical montane cloud forests. Ψ50 (–1·55 MPa) and Ψ88 are the xylem water potentials inducing 50 and 88 % embolism, respectively. Ψmin (–1·54 MPa) is the minimum xylem water potential measured in the field during 24 months. Ψf is the increase in Ψmin due to fog occurrence. The difference between Ψ50 and Ψmin (vertical red bar) represents the ‘safety margin’ under which the plant operates in the driest conditions, which is 0·01 MPa. The blue arrow represents the increase in leaf water potential and hydraulic safety margin after fog exposure and foliar water uptake (FWU) (from Eller et al., 2013). The curve was fitted using an exponential sigmoidal equation: PLC = 100/{1 + exp[a(Ψx – Ψ50)]}, where a is the slope of the curve. The R2 value for the fit (0·72) was obtained by linear regression of the transformed data (Pammenter and Van der Willigen, 1998).
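The vulnerability sigmoid of Fig. 3 can be evaluated numerically. In the sketch below, Ψ50 and Ψmin follow the values quoted in the figure caption, but the slope parameter `a` is a made-up illustrative value, not the fitted parameter for D. brasiliensis.

```python
import math

# Numerical form of the Pammenter & Van der Willigen (1998) vulnerability
# curve shown in Fig. 3:
#   PLC = 100 / (1 + exp(a * (psi_x - psi_50)))
# PSI_50 and PSI_MIN follow the Fig. 3 caption; the slope `a` is an
# illustrative assumption, not the fitted value.

PSI_50 = -1.55   # MPa, potential at 50 % loss of conductivity (Fig. 3)
PSI_MIN = -1.54  # MPa, driest field measurement over 24 months (Fig. 3)

def plc(psi_x: float, a: float = 5.0) -> float:
    """Percentage loss of hydraulic conductivity at xylem potential psi_x (MPa)."""
    return 100.0 / (1.0 + math.exp(a * (psi_x - PSI_50)))

def psi_at_plc(target_plc: float, a: float = 5.0) -> float:
    """Invert the sigmoid: xylem potential (MPa) giving a target PLC (%)."""
    return PSI_50 + math.log(100.0 / target_plc - 1.0) / a

print(round(plc(PSI_50), 1))        # 50.0, by definition of Psi50
print(round(PSI_MIN - PSI_50, 2))   # 0.01 MPa hydraulic safety margin
```

Note that the safety margin falls directly out of the two caption values: Ψmin − Ψ50 = 0·01 MPa, the vertical red bar in the figure.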
## CLOUD WATER INPUTS IN TMCFs
### Cloud water interception
Cloud water interception (CWI) and its subsequent precipitation as fog drip may represent a major hydrological input to TMCFs. There is a general trend of higher altitude TMCFs presenting higher CWI values (Giambelluca and Gerold, 2011); however, the relative importance of this hydrological input varies considerably between sites because of the importance of vegetation structure and epiphytism, fog frequency, fog water content, topographic exposure, wind direction and wind speed (Bruijnzeel et al., 2011). Holwerda (2010) found CWI values as low as 0·15 mm d–1 (1·7 % of the rainfall at the site) in a Mexican lower montane cloud forest, while Takahashi et al. (2010) found values as high as 3·3 mm d–1 (37 % of the rainfall at the site) in a Hawaiian lower montane cloud forest.
The hydrological relevance of CWI might vary seasonally and peak during dry seasons when rainfall inputs are lowest. Brown (1996) used throughfall data (water captured below the canopy during fog or rainfall) to investigate seasonal variation in CWI in a TMCF in Guatemala. He found that throughfall in an upper montane cloud forest, despite being relatively high during the entire year, can exceed rainfall by 147 mm during the dry season. Holder (2004) estimates that the contribution of fog precipitation to the hydrological budget in Guatemalan TMCF is 1 mm d–1 during the dry season and 0·5 mm d–1 during the rainy season. The impact of the seasonality of this water input on vegetation water use has yet to be demonstrated directly in TMCFs. Plants from redwood forests, a non-montane fog-affected ecosystem, use significantly more fog water during the dry season, when fog incidence is higher (Dawson, 1998). It is likely that fog inputs have the greatest impacts hydrologically in low rainfall, seasonally dry but frequently foggy and highly exposed forests (Bruijnzeel et al., 2011).
### Foliar water uptake
Recent studies have suggested that direct foliar water uptake is an ecophysiologically important input in TMCFs (Eller et al., 2013; Goldsmith et al., 2013). Unlike CWI, FWU is a water flux within the plant, driven by water potential gradients between sources and sinks along the soil–plant–atmosphere continuum (SPAC) and the hydraulic conductivity between SPAC compartments (Fig. 2). Simonin et al. (2009) suggested that FWU can be described using a simple equation based on Darcy's law:
$$\hbox{FWU} = k_{\rm Atm\hbox{-}L}\, \Delta\psi_{\rm Atm\hbox{-}L}$$
where kAtm−L is the efficiency of leaf water uptake, which is basically the leaf surface total conductivity to water entry, and ΔψAtm−L is the water potential (ψH2O) gradient between the inside and the outside of the leaf. During fog events, the atmospheric boundary layer surrounding leaves is saturated with moisture and the ψH2O outside the leaf should be close to zero. If leaf ψH2O is negative, FWU should be higher than 0 during most leaf-wetting events provided that the leaf surface is hydrophilic enough to allow water film formation.
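As a worked illustration of this Darcy-type formulation, the sketch below computes the flux for hypothetical values; the conductance is purely illustrative, not a measured kAtm−L.

```python
# Worked form of the Darcy-type FWU equation above:
#   FWU = k_AtmL * (psi_atm - psi_leaf)
# All numbers are hypothetical, chosen only to show the direction and
# relative magnitude of the flux.

def fwu_flux(k_atm_l: float, psi_atm: float, psi_leaf: float) -> float:
    """Water flux into the leaf (positive = uptake).

    k_atm_l  -- leaf-surface conductance to water entry (illustrative units)
    psi_atm  -- water potential outside the leaf (MPa); ~0 under fog
    psi_leaf -- leaf water potential (MPa), typically negative
    """
    return k_atm_l * (psi_atm - psi_leaf)

# Under fog (psi_atm ~ 0), a water-stressed leaf takes up water faster than
# a well-hydrated one, for the same conductance:
print(round(fwu_flux(0.1, 0.0, -1.5), 3))  # 0.15
print(round(fwu_flux(0.1, 0.0, -0.2), 3))  # 0.02
# In dry air the gradient reverses and the same pathway loses water:
print(fwu_flux(0.1, -2.0, -0.2) < 0)       # True
```

The first two calls illustrate the prediction discussed below: with kAtm−L held constant, FWU rises with leaf water deficit.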
With constant kAtm−L, we should expect higher FWU rates in leaves experiencing water deficits, which should be more common during periods of low soil water availability (Fig. 2B, C). Supporting this prediction, Breshears et al. (2008) showed that the effect of FWU on leaf water potential is greater when the plant is subjected to water stress. Also, sap flow reversals of higher magnitude have been observed during the dry season in D. brasiliensis at a Brazilian TMCF (C. B. Eller et al., unpubl. data). However, Burgess and Dawson (2004) observed that well-watered leaves of Sequoia sempervirens absorbed more fog water than water-stressed leaves, implying that kAtm−L is more dynamic in some species than in others. FWU may therefore be controlled more by ΔψAtm−L in some species, while in others kAtm−L plays the larger role.
The kAtm−L should be largely determined by leaf cuticle permeability to water and the occurrence of structures that facilitate water uptake. Despite their role in restricting molecular diffusion and thus water loss, the cuticles of leaves are known to be permeable to various molecules (Schönherr and Riederer, 1989; Schreiber and Riederer, 1996; Niederl et al., 1998). Water might diffuse through a lipophilic pathway in the cuticle, with lipophilic cutin and wax domains forming its transport path (Schreiber, 2005), or aqueous pores, which are formed by the hydration of dipoles and ionic functional groups (Schönherr, 2006). It is important to note that cuticle permeability to water might vary by several orders of magnitude between species (Kerstiens, 1996), and might be quite sensitive to changes in environmental conditions, increasing under high temperature (Schreiber, 2001) and high atmospheric humidity (Schreiber et al., 2001; Eller et al., 2013).
The occurrence of structures on the leaf epidermis that facilitate water uptake can increase kAtm−L even further. Trichomes (Schreiber et al., 2001; Schönherr, 2006), hydathodes (Martin and von Willert, 2000), guard cells (Schlegel et al., 2005) and stomatal plugs (Westhoff et al., 2009) are examples of epidermal structures that might be preferential paths for FWU in some species because of the differential properties of the cuticle over these structures. For example, Schönherr (2006) showed that aqueous pores are more likely to occur at the base of trichomes. Spatial heterogeneity in the wax content of the cuticle might also strongly affect water permeability (Schönherr and Lendzian, 1981; Becker et al., 1986).
There is also substantial empirical evidence that water might enter into leaves through stomatal apertures (Eichert et al., 2008; Burkhardt et al., 2012). Until recently, direct water entry through stomata was considered physically impossible because of high water surface tension and the morphology of stomata (Schönherr and Bukovac, 1972), but the recent hypothesis of ‘hydraulic activation of stomata’ by Burkhardt (2010) provides a possible explanation for this process. Burkhardt (2010) suggested that the deposition of hygroscopic particles around the guard cells and sub-stomatal cavity might break water surface tension and allow the formation of thin water films along the stomata, establishing a hydraulic connection between the outside surfaces of the leaf and the apoplast.
Considering the multiple water entry pathways into the leaf, it is not surprising that FWU is a very widespread phenomenon, confirmed in >70 species (>85 % of all studied species; Goldsmith et al., 2013). To our knowledge, all studies investigating FWU in TMCFs have found this mechanism to be present at least to some extent in the studied trees (Lima, 2010; Cassana and Dillenburg, 2012; Eller et al., 2013; Goldsmith et al., 2013). The prevalence of this mechanism in TMCFs enhances vegetation survival during seasonal droughts (Eller et al., 2013), and might affect biotic interactions, foliar traits associated with fog interception efficiency (Martorell and Ezcurra, 2007) and perhaps even hydraulic niche differentiation (Silvertown et al., 1999) and community assembly patterns. This process also adds a potentially important biotic component to TMCF water fluxes that has so far been ignored in hydrometeorological models.
### Ecological consequences of cloud immersion
Cloud immersion generally has a positive effect on leaf, plant and forest water balance (Bruijnzeel et al., 2011; Eller et al., 2013; Goldsmith et al., 2013). Even if a given tree species is not capable of significant FWU, the suppressive effect on plant transpiration (Limm et al., 2009; Gotsch et al., 2014) and additional soil water input by fog drip can provide an important water subsidy for plants (Dawson, 1998; Liu et al., 2004). However, there are important ecological differences between the water subsidies provided by FWU and by fog drip. First, part of the water from some leaf-wetting events might never reach the soil because of canopy storage and subsequent evaporation; thus, species capable of FWU can benefit even from a weak leaf-wetting event. Also, the water absorbed by FWU might be redistributed inside the plant and even reach the plant rhizosphere (Eller et al., 2013). The increase in root moisture associated with this transport should have ecological consequences for plants similar to those of water redistribution between roots in different soil layers (hydraulic redistribution; Burgess et al., 1998; Oliveira et al., 2005a). These consequences include a decrease in branch and root embolism, an increase in root life span (Domec et al., 2004, 2006; Bauerle et al., 2008), benefits to mycorrhizal development (Querejeta et al., 2007) and even increased nutrient availability in the soil close to the roots (Dawson, 1997; Pang et al., 2013). The effects of this water transport on biotic interactions and below-ground resource competition could also be significant (Dawson, 1993; Zou et al., 2005; Prieto et al., 2011). However, there are also a number of potentially negative ecological consequences associated with FWU.
If FWU occurs directly through the cuticle, as seems to be the case in some species (Schönherr, 1976, 2006; Kerstiens, 2006; Eller et al., 2013), this additional water permeability could work both ways, leading to higher cuticular conductance, which can be detrimental to plant drought resistance, mostly because of the reduced capacity to control leaf water loss during droughts (Burkhardt and Riederer, 2003). Another possible cost associated with FWU comes from the potential negative relationship between FWU and leaf water repellency (LWR) (Fig. 4; Grammatikopoulos and Manetas, 1994; Rosado et al., 2010; Rosado and Holder, 2013). FWU is thought to be favoured in plants with lower LWR (i.e. plants that stay wet for longer). Comparative studies show that LWR in cloud forests is lower than in lowland forests (Holder, 2007a), which indirectly reinforces the proposed LWR–FWU relationship (Fig. 4), now that we have evidence that FWU occurs in TMCFs (Lima, 2010; Eller et al., 2013; Goldsmith et al., 2013). Low LWR might have several negative consequences for the leaf, such as facilitation of pathogen infection (Reynolds et al., 1989; Evans et al., 1992), foliar nutrient leaching (Cape, 1996), epiphyll growth (Holder, 2007b), decrease in leaf self-cleaning properties (Barthlott and Neinhuis, 1997) and decrease in leaf gas exchange (Smith and McClean, 1989; Brewer and Smith, 1997; Letts and Mulligan, 2005).
Fig. 4.
Hypothetical relationship between foliar water uptake (FWU) and leaf water repellency (LWR) (A); higher contact angles mean a more hydrophobic leaf surface. We propose a negative non-linear relationship between FWU and LWR. More hydrophobic cuticles could reduce FWU due both to a more impermeable biochemical structure and to reduced formation of water films on its surface. The region where the curve approaches its asymptote (close to 90°) represents a hypothetical point where the leaf would dry too quickly to allow significant FWU. The arrows in (A) represent ecosystem-level consequences of this relationship: a more hydrophilic canopy should increase canopy storage, while a more hydrophobic canopy should increase dripping during leaf-wetting events. At the plant level, plants with more hydrophilic leaves (B) should have higher FWU rates than plants with more hydrophobic leaves (C).
Because of the negative impact that cloud immersion might have on leaf gas exchange of plants with low LWR, we hypothesize that the relationship LWR–FWU (Fig. 4) should influence how leaf-wetting events affect plant carbon balance. In a scenario where plant carbon assimilation is not being limited by water, cloud immersion should decrease plant instantaneous gas exchange rates due to the formation of a water film on leaves (Fig. 4; Smith and McClean, 1989; Brewer and Smith, 1997; Letts and Mulligan, 2005) and the decrease in PAR (Reinhardt and Smith, 2008). In this scenario, water gains by FWU should be minor when the water potential gradient between the atmosphere and the soil is small (Fig. 2B); thus, plants with high LWR should be favoured because they can achieve their maximum assimilation rates after the fog events more rapidly than plants with low LWR/high FWU (Fig. 4A). However, in a scenario where carbon assimilation is limited by water, the additional water subsidy provided by FWU (in comparison with plants that only depend on fog drip to use cloud water) should allow plants with higher FWU capacity to achieve higher assimilation rates after the fog events (Fig. 4B). The LWR–FWU relationship will further depend on the predominant time of occurrence of leaf-wetting events (night-time or daytime) and on the relative importance of light energy limitation compared with CO2 supply limitation during the leaf-wetting events.
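The hypothesized LWR–FWU relationship of Fig. 4 can be sketched numerically. The function below is purely illustrative — the shape (saturating decline), the rate constant `k` and the 90° cut-off are assumptions taken from the qualitative description above, not fitted values:

```python
import math

def fwu_rate(contact_angle_deg, fwu_max=1.0, k=0.08, theta_dry=90.0):
    """Hypothetical relative FWU rate as a function of leaf contact angle.

    Uptake declines non-linearly with increasing hydrophobicity and
    approaches zero near theta_dry, where the leaf is assumed to dry
    too quickly for significant FWU. All parameters are illustrative.
    """
    if contact_angle_deg >= theta_dry:
        return 0.0
    return fwu_max * (1 - math.exp(-k * (theta_dry - contact_angle_deg)))

# Hydrophilic leaves (low contact angle) take up more water than
# hydrophobic ones, mirroring panels B and C of Fig. 4.
assert fwu_rate(40) > fwu_rate(80) > fwu_rate(89) > fwu_rate(95) == 0.0
```

Any monotonically decreasing saturating curve would serve equally well here; the point is only that small increases in repellency near the asymptote cost little FWU, while differences among strongly hydrophilic leaves matter much more.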
Because of the many ecological benefits of FWU in water-limited conditions, we hypothesize that FWU could have been selected for in seasonally dry TMCFs. The presence of fog could favour the selection of a unique strategy for dealing with soil drought in these environments. The additional water supply could allow plants capable of FWU to maintain gas exchange even during drier seasons without presenting other drought resistance-related traits such as deep roots or cavitation-resistant xylem. However, the costs associated with FWU might make it a sub-optimal strategy in very humid TMCFs. Considering that all empirical evidence of FWU in TMCFs thus far comes from seasonally dry TMCFs (Eller et al., 2013; Goldsmith et al., 2013), studies investigating the prevalence of FWU in very humid TMCFs could help clarify whether FWU occurrence can be attributed to environmental selection or is just a consequence of TMCF leaves not being completely impermeable.
## IMPACT OF CLIMATE CHANGE ON TMCFs
Given the importance of their unique hydrometeorological conditions to TMCF vegetation–water relations, climate change will probably affect TMCF functioning and structure. However, the exact response of TMCF ecosystems to climate change will depend on the nature of changes in the seasonal and diurnal distribution of climatic variables and their intensity–frequency distribution, none of which can be projected well by GCMs (Oliveira et al., 2014). Still et al. (1999) showed that increases in land surface temperature might decrease the frequency of cloud immersion events in tropical mountains because of ‘cloud uplift’. Given the importance of cloud water inputs to the TMCF water budget, a decrease in the frequency of ground-level cloud (fog) (assuming no significant changes in other climatic parameters such as rainfall and temperature) will probably increase TMCF evapotranspiration, vegetation drought stress and, ultimately, plant mortality.
However, cloud uplift will probably be accompanied by changes in rainfall and temperature. We can use GCMs to examine the projections for changes in these variables in a typical cloud forest situation. Using the SRES A2a climate scenarios downscaled for 17 GCMs by Ramirez-Villegas and Jarvis (2010) and cut for the tropical montane areas of Colombia, we can calculate an ensemble mean temperature and rainfall for the 2050s. We compare the ensemble mean with the mean + 1 standard deviation (mean +1 s.d.) and the mean – 1 s.d. of the ensemble (Table 4). For all land areas in Colombia, baseline mean annual temperature (MAT) is 24 °C, rising to 26 °C for the SRES A2a ensemble mean, 27 °C for mean +1 s.d. and 25 °C for mean – 1 s.d. However, precipitation for the baseline is 2500 mm year–1, rising to 7800 mm year–1 for the ensemble mean, 8900 mm year–1 for mean +1 s.d. and 6800 mm year–1 for mean – 1 s.d. The patterns are similar when analyses are confined to the areas of the country defined as CAFs by Mulligan (2010) and the areas defined as TMCFs according to the elevation limits used by Bubb et al. (2004). Based on GCM results, one could postulate that temperature increases may potentially reduce TMCF distribution, while the increases in rainfall are likely to work in the opposite direction.
Table 4. Climate change uncertainty for TMCFs, CAFs and all land in Colombia based on a 17 GCM ensemble for a SRES A2a scenario

| Scenario | TMCFs* MAT (°C) | TMCFs* Precipitation (mm year–1) | CAFs MAT (°C) | CAFs Precipitation (mm year–1) | All land MAT (°C) | All land Precipitation (mm year–1) |
| --- | --- | --- | --- | --- | --- | --- |
| Baseline (1950–2000) | 13 | 1600 | 18 | 2200 | 24 | 2500 |
| Mean of 17 GCMs, A2a 2050s | 16 | 5100 | 21 | 6700 | 26 | 7800 |
| Mean of 17 GCMs + 1 s.d., A2a 2050s | 17 | 6000 | 22 | 7500 | 27 | 8900 |
| Mean of 17 GCMs – 1 s.d., A2a 2050s | 15 | 4300 | 20 | 5800 | 25 | 6800 |

*Bubb et al. (2004); 2000–3500 m a.s.l.

MAT, mean annual temperature.
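The scenario rows of Table 4 are simply the ensemble mean and the mean ± 1 sample standard deviation across the 17 downscaled GCMs. A minimal sketch of that calculation, using invented MAT projections (the real downscaled values come from Ramirez-Villegas and Jarvis, 2010):

```python
import statistics

# Illustrative (invented) 2050s MAT projections (°C) for TMCF areas
# from a hypothetical 17-member SRES A2a GCM ensemble.
mat_2050s = [15.2, 15.5, 15.8, 16.0, 16.1, 16.3, 15.1, 16.9,
             15.6, 16.4, 15.9, 16.2, 17.0, 15.4, 16.6, 15.7, 16.3]

ensemble_mean = statistics.mean(mat_2050s)
ensemble_sd = statistics.stdev(mat_2050s)  # sample standard deviation

# The three projection rows of Table 4 are then:
rows = {
    "mean": round(ensemble_mean),
    "mean + 1 s.d.": round(ensemble_mean + ensemble_sd),
    "mean - 1 s.d.": round(ensemble_mean - ensemble_sd),
}
print(rows)
```

With these invented inputs the rows come out as 16, 17 and 15 °C, matching the TMCF column of Table 4; the ± 1 s.d. spread is a simple measure of inter-model uncertainty, not a confidence interval.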
Combining the Still et al. (1999) ‘cloud uplift’ predictions with the GCM results presented here, we propose two broad directions of response of TMCFs to climate change: in one scenario, the increased rainfall would not be enough to offset the drying effects of the ‘cloud uplift’ and higher temperatures. This would probably lead to drought-induced mortality of the more vulnerable species. Drought-induced tree mortality has already been documented in TMCFs during extreme droughts (Lowry et al., 1973; Werner, 1988). As mentioned previously, TMCF tree species operate close to their limit of hydraulic failure (Fig. 3). This means that the dry-season changes in soil water availability and atmospheric demand expected in this drier scenario might seriously damage these species' hydraulic systems and increase the chance of large-scale vegetation mortality. Plants with high FWU capacity could be particularly vulnerable to the decrease in leaf-wetting events, not only because of the key role of FWU in the maintenance of ecophysiological performance during drought (Simonin et al., 2009; Eller et al., 2013), but also because of the role that FWU might play in hydraulic failure avoidance. The increase in leaf water potential associated with FWU (average of 0·4 MPa in D. brasiliensis; Eller et al., 2013) might decrease xylem tension (Brodersen and McElrone, 2013) and increase the plant hydraulic safety margin (Fig. 3). Foliar water uptake can also be an important mechanism responsible for successful embolism repair in leaves and stems of TMCF plants (Limm et al., 2009; Simonin et al., 2009; Eller et al., 2013). Cuticular absorption could reduce the tension on the xylem enough to allow for refilling (Burgess and Dawson, 2007; Limm et al., 2009; Oliveira et al., 2005b).
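The effect of FWU on the hydraulic safety margin is simple arithmetic and can be made explicit. In the sketch below only the 0·4 MPa FWU-driven increase in leaf water potential comes from Eller et al. (2013); the minimum water potential and P50 values are illustrative, chosen to represent a tree operating close to its hydraulic limit:

```python
def safety_margin(psi_min_mpa, p50_mpa):
    """Hydraulic safety margin (MPa): distance between the minimum leaf
    water potential a plant experiences and the xylem pressure causing
    50% loss of conductivity (P50). Both arguments are negative MPa."""
    return psi_min_mpa - p50_mpa

# Illustrative (not measured) values for a TMCF tree near its limit.
psi_min, p50 = -1.9, -2.1
fwu_gain = 0.4  # MPa recovered through foliar water uptake (Eller et al., 2013)

print(round(safety_margin(psi_min, p50), 2))             # margin without FWU
print(round(safety_margin(psi_min + fwu_gain, p50), 2))  # margin with FWU
```

Under these assumed numbers the margin triples (from 0·2 to 0·6 MPa), which is why a loss of leaf-wetting events could be disproportionately damaging for species that currently rely on FWU to stay on the safe side of hydraulic failure.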
In another possible scenario, the increased rainfall completely offsets the reduced cloud water contribution to the TMCF water budget and the increased atmospheric demand caused by higher temperatures. This would expose TMCF vegetation to a warmer, less foggy but rainier climate, similar to the climatic envelope of lowland tropical forests. This kind of change could favour the invasion of TMCFs by lower elevation species (Loope and Giambelluca, 1998; Pauchard et al., 2009). Lowland species are likely to be better competitors in this climatic scenario, due to their higher leaf area index (Bruijnzeel et al., 2011) and higher optimum temperature for photosynthesis (Allen and Ort, 2001), which lead to faster growth rates and potentially higher seed output. The invasion of TMCFs by lowland animals observed by Pounds et al. (1999) and associated with climate change could also increase dispersal of lowland tree species upwards into the mountains.
In both of the hypothetical scenarios, TMCF functioning and structure would be altered. In the drier scenario, drought should induce widespread mortality of less drought-resistant species, while in the warmer scenario, TMCF species could be competitively displaced by lower elevation species. More knowledge about TMCF vegetation and ecosystem functioning is necessary in order to understand more precisely to what extent each particular scenario could affect TMCFs. It is possible that different TMCFs would be more or less vulnerable to a particular scenario depending on their current climate characteristics. For example, current seasonally dry TMCFs could be less vulnerable to a drier scenario, because one could assume that species from these TMCFs are already more drought resistant.
## CONCLUDING REMARKS AND PERSPECTIVES
In this review, we propose that TMCF distribution depends strongly on the relationship between particular plant ecophysiological traits, such as FWU (increasing the hydraulic safety margin), and unique hydrometeorological conditions of TMCFs. Changes in these conditions, especially related to cloud immersion events, could drastically change the costs and benefits associated with FWU and, consequently, TMCF structure and functioning. More information about the mechanisms behind drought-induced mortality in TMCF plants is needed to clarify how drought events might affect population dynamics and community structure of TMCFs under drier climates. Despite knowing that leaf-wetting events and FWU might be important to some TMCF species during droughts (Eller et al., 2013), we do not know what proportion of TMCF species depend on FWU for survival during drought. We also do not know if a small hydraulic safety margin (Fig. 3) is a widespread trait in different TMCF species.
Potential effects of increased precipitation – which will vary greatly between cloud forests in different topographic and continental settings – could either compensate for the reduction of leaf-wetting events or combine with warming to create a microclimatic envelope that could facilitate the invasion of TMCFs by lowland species. Studies of competitive interactions between TMCF and lowland forest species under different environmental conditions are also needed to clarify TMCF vulnerability to lowland species invasion and the consequences for TMCF community structure and ecosystem-level processes.
We believe that the inclusion of non-standard climate variables (fog frequency and terrain exposure) and species functional attributes is essential for an accurate niche-based modelling of species distribution and also for more accurate predictions of ecophysiological models, especially under climate change. The FWU phenomenon in TMCFs, for example, adds an important component that needs to be taken into consideration in TMCF ecophysiological models, since it could increase the predicted contribution of fog to the ecophysiology of these ecosystems and also affect canopy water storage and re-evaporation to the atmosphere. The water subsidy provided by fog could also allow species capable of FWU to occur in places where they could not otherwise occur, if they depended only on soil water.
## ACKNOWLEDGEMENTS
This review was based, in part, on a plenary lecture presented at the ComBio2013, Perth, Australia, sponsored by the Annals of Botany. The authors would like to express their thanks to the Graduate Program in Ecology from the University of Campinas (UNICAMP), and to Hans Lambers and Tim Colmer (University of Western Australia) for the invitation to present a lecture at ComBio2013. This work was supported by the São Paulo Research Foundation (FAPESP) (grant no. 10/17204-0), FAPESP/Microsoft Research (grant no. 11/52072-0) awarded to R.S.O., and the Higher Education Co-ordination Agency (CAPES) (scholarship to C.B.E. and P.L.B.). The cloud forest mapping was supported by the UK Department for International Development Forestry Research Programme (ZF0216).
## LITERATURE CITED
Aldrich M, Billington C, Edwards M, Laidlaw R. 1997. A global directory of tropical montane cloud forests. Cambridge, UK: UNEP–WCMC.

Allen DJ, Ort DR. 2001. Impacts of chilling temperatures on photosynthesis in warm-climate plants. Trends in Plant Science 6: 36–42.

Barthlott W, Neinhuis C. 1997. Purity of sacred lotus, or escape from contamination in biological surfaces. Planta 202: 1–8.

Bauerle TL, Richards JH, Smart DR, Eissenstat DM. 2008. Importance of internal hydraulic redistribution for prolonging the lifespan of roots in dry soil. Plant, Cell and Environment 31: 177–186.

Becker M, Kerstiens G, Schönherr J. 1986. Water permeability of plant cuticles: permeance, diffusion and partition coefficients. Trees 1: 54–60.

Bertoncello R, Yamamoto K, Meireles LD, Shepherd GJ. 2011. A phytogeographic analysis of cloud forests and other forest subtypes amidst the Atlantic forests in south and southeast Brazil. Biodiversity and Conservation 20: 3413–3433.

Breshears DD, McDowell NG, Goddard KL, et al. 2008. Foliar absorption of intercepted rainfall improves woody plant water status most during drought. Ecology 89: 41–47.

Brewer CA, Smith WK. 1997. Patterns of leaf surface wetness for montane and subalpine plants. Plant, Cell and Environment 20: 1–11.

Brodersen CR, McElrone AJ. 2013. Maintenance of xylem network transport capacity: a review of embolism repair in vascular plants. Frontiers in Plant Science 4: 1–11.

Brown MB, de la Roca I, Vallejo A, et al. 1996. A valuation analysis of the role of cloud forests in watershed protection. Sierra de las Minas Biosphere Reserve, Guatemala and Cusuco N.P., Honduras. RARE Center for Tropical Conservation.

Bruijnzeel LA. 2001. Hydrology of tropical montane cloud forest: a reassessment. Land Use and Water Resources Research 1: 1–18.

Bruijnzeel LA, Hamilton LS. 2000. Decision time for cloud forests. IHP Humid Tropics Programme Series no. 13. Paris: IHP-UNESCO; Amsterdam: IUCN-NL; Gland: WWF International.

Bruijnzeel LA, Veneklas E. 1998. Climatic conditions and tropical montane forest productivity: the fog has not lifted yet. Ecology 79: 3–9.

Bruijnzeel LA, Scatena FN, Hamilton LS, eds. 2010a. Tropical montane cloud forests. Science for conservation and management. Cambridge: Cambridge University Press.

Bruijnzeel LA, Kappelle M, Mulligan M, Scatena FN. 2010b. Tropical montane cloud forests: state of knowledge and sustainability perspectives in a changing world. In: Bruijnzeel LA, Scatena FN, Hamilton LS, eds. Tropical montane cloud forests. Science for conservation and management. Cambridge: Cambridge University Press, 691–740.

Bruijnzeel LA, Mulligan M, Scatena FN. 2011. Hydrometeorology of tropical montane cloud forests: emerging patterns. Hydrological Processes 25: 465–498.

Bubb P, May I, Miles L, Sayer J. 2004. Cloud forest agenda. Cambridge: UNEP–WCMC.

Burgess SSO, Dawson TE. 2004. The contribution of fog to the water relations of Sequoia sempervirens (D. Don): foliar uptake and prevention of dehydration. Plant, Cell and Environment 27: 1023–1034.

Burgess SSO, Dawson TE. 2007. Predicting the limits to tree height using statistical regressions of leaf traits. New Phytologist 174: 626–636.

Burgess SSO, Adams MA, Turner NC, Ong CK. 1998. The redistribution of soil water by tree root systems. Oecologia 115: 306–311.

Burkhardt M. 2010. Hygroscopic particles on leaves: nutrients or desiccants? Ecological Monographs 80: 369–399.

Burkhardt M, Riederer M. 2003. Ecophysiological relevance of cuticular transpiration of deciduous and evergreen plants in relation to stomatal closure and leaf water potential. Journal of Experimental Botany 54: 1941–1949.

Burkhardt J, Basi S, Pariyar S, Hunsche M. 2012. Stomatal penetration by aqueous solutions – an update involving leaf surface particles. New Phytologist 196: 774–787.

Cape JN. 1996. Surface wetness and pollutant deposition. In: Kerstiens G, ed. Plant cuticles: an integrated approach. Oxford: BIOS Scientific Publishers, 283–300.

Cavelier J. 1996. Tissue water relations in elfin cloud forest tree species of Serrania de Macuira, Guajira, Colombia. Trees 4: 155–163.

Cassana FF, Dillenburg LR. 2012. The periodic wetting of leaves enhances water relations and growth of the long-lived conifer Araucaria angustifolia. Plant Biology 15: 75–83.

Choat B, Jansen S, Brodribb TJ, et al. 2012. Global convergence in the vulnerability of forests to drought. Nature 491: 752–755.

Chu H-S, Chang S-C, Klemm O, et al. 2014. Does canopy wetness matter? Evapotranspiration from a subtropical montane cloud forest in Taiwan. Hydrological Processes 28: 1190–1214.

Dawson TE. 1993. Hydraulic lift and the water use by plants: implications for water balance, performance and plant–plant interactions. Oecologia 95: 565–574.

Dawson TE. 1997. Water loss from tree roots influences soil water and nutrient status and plant performance. In: Flore HE, Lynch JP, Eissenstat DM, eds. Radical biology: advances and perspectives on the function of plant roots. Rockville, MD: American Society of Plant Physiologists, 235–250.

Dawson TE. 1998. Fog in the California redwood forest: ecosystem inputs and use by plants. Oecologia 117: 476–485.

Dawson TE, Burgess SS, Tu KP, et al. 2007. Nighttime transpiration in woody plants from contrasting ecosystems. Tree Physiology 27: 561–575.

Domec JC, Warren JM, Meinzer FC. 2004. Native root xylem embolism and stomatal closure in stands of Douglas-fir and ponderosa pine: mitigation by hydraulic redistribution. Oecologia 141: 7–16.

Domec JC, Scholz FG, Bucci SJ, Meinzer FC, Goldstein G, Villalobos-Vega R. 2006. Diurnal and seasonal changes in root xylem embolism in Neotropical savanna woody species: impact on stomatal control of plant water status. Plant, Cell and Environment 29: 26–35.

Eichert T, Kurtz A, Steiner U, Goldbach H. 2008. Size exclusion limits and lateral heterogeneity of the stomatal foliar uptake pathway for aqueous solutes and water suspended nanoparticles. Physiologia Plantarum 134: 151–160.

Eller CB, Lima AL, Oliveira RS. 2013. Foliar uptake of fog water and transport belowground alleviates drought effects in the cloud forest tree species, Drimys brasiliensis (Winteraceae). New Phytologist 199: 151–162.

Evans KJ, Nyquist WE, Latin RX. 1992. A model based on temperature and leaf wetness duration for establishment of Alternaria leaf blight of muskmelon. Phytopathology 82: 890–895.

Feild TS, Holbrook NM. 2000. Xylem sap flow and stem hydraulics of the vesselless angiosperm Drimys granadensis (Winteraceae) in a Costa Rican elfin forest. Plant, Cell and Environment 23: 1067–1077.

Giambelluca TW, Gerold G. 2011. Hydrology and biogeochemistry of tropical montane cloud forests. In: Levia DF, Carlyle-Moses D, Tanaka T, eds. Hydrology and biogeochemistry of forest ecosystems. New York: Springer Verlag, 221–259.

Goldsmith GR, Matzke NJ, Dawson TE. 2013. The incidence and implications of clouds for cloud forest plant water relations. Ecology Letters 16: 307–314.

Gotsch SG, Asbjornsen H, Holwerda F, Goldsmith GR, Weintraub AE, Dawson TE. 2014. Foggy days and dry nights determine crown-level water balance in a seasonal tropical montane cloud forest. Plant, Cell and Environment 37: 261–272.

Grammatikopoulos G, Manetas Y. 1994. Direct absorption of water by hairy leaves of Phlomis fruticosa and its contribution to drought avoidance. Canadian Journal of Botany 72: 1805–1811.

Grubb PJ, Whitmore TC. 1966. A comparison of montane and lowland rain forest in Ecuador. II. The climate and its effects on the distribution and physiognomy of the forests. Journal of Ecology 54: 303–333.

Holder CD. 2004. Rainfall interception and fog precipitation in a tropical montane cloud forest of Guatemala. Forest Ecology and Management 190: 373–384.

Holder CD. 2007a. Leaf water repellency of species in Guatemala and Colorado (USA) and its significance to forest hydrology studies. Journal of Hydrology 336: 147–154.

Holder CD. 2007b. Leaf water repellency as an adaptation to tropical montane cloud forest environments. Biotropica 39: 767–770.

Holwerda F, Bruijnzeel LA, Oord AL, et al. 2010. Fog interception in a Puerto Rican elfin cloud forest: a wet-canopy water budget approach. In: Bruijnzeel LA, Scatena FN, Hamilton LS, eds. Tropical montane cloud forests. Science for conservation and management. Cambridge: Cambridge University Press, 282–292.

Jane GT, Green TGA. 1985. Patterns of stomatal conductance in six evergreen tree species from a New Zealand cloud forest. Botanical Gazette 146: 413–420.

Jarvis A, Mulligan M. 2011. The climate of cloud forests. Hydrological Processes 25: 327–343.

Kerstiens G. 1996. Cuticular water permeability and its physiological significance. Journal of Experimental Botany 47: 1813–1832.

Kerstiens G. 2006. Water transport in plant cuticles: an update. Journal of Experimental Botany 57: 2493–2499.

Kumagai T, Saitoh TM, Sato Y, et al. 2004. Transpiration, canopy conductance and the decoupling coefficient of a lowland mixed dipterocarp forest in Sarawak, Borneo: dry spell effects. Journal of Hydrology 287: 237–251.

Kumagai T, Saitoh TM, Sato Y, et al. 2005. Annual water balance and seasonality of evapotranspiration in a Bornean tropical rainforest. Agricultural and Forest Meteorology 128: 81–92.

Letts MG, Mulligan M. 2005. The impact of light quality and leaf wetness on photosynthesis in north-west Andean tropical montane cloud forest. Journal of Tropical Ecology 21: 549–557.

Lima AL. 2010. The ecological role of fog and foliar water uptake in three woody species from Southeastern Brazilian Cloud Forest. Masters thesis, University of Campinas (UNICAMP), Brazil.

Limm E, Simonin K, Bothman A, Dawson T. 2009. Foliar water uptake: a common water acquisition strategy for plants of the redwood forest. Oecologia 161: 449–459.

Liu W, Meng F, Zhang Y, Liu Y, Li H. 2004. Water input from fog drip in the tropical seasonal rain forest of Xishuangbanna, South-West China. Journal of Tropical Ecology 20: 517–524.

Loope LL, Giambelluca TW. 1998. Vulnerability of island tropical montane cloud forests to climate change, with special reference to east Maui, Hawaii. Climatic Change 39: 503–517.

Lowry JB, Lee DW, Stone BC. 1973. Effects of drought on Mount Kinabalu. Malayan Nature Journal 26: 178–179.

Luna-Vega I, Morrone JJ, Ayala OAA, Organista DE. 2001. Biogeographical affinities among Neotropical cloud forests. Plant Systematics and Evolution 228: 229–239.

Martin CE, von Willert DJ. 2000. Leaf epidermal hydathodes and the ecophysiological consequences of foliar water uptake in species of Crassula from the Namib Desert in southern Africa. Plant Biology 2: 229–242.

Martorell C, Ezcurra E. 2007. The narrow-leaf syndrome: a functional and evolutionary approach to the form of fog-harvesting rosette plants. Oecologia 151: 561–573.

McDowell N, Pockman WT, Allen CD, et al. 2008. Mechanisms of plant survival and mortality during drought: why do some plants survive while others succumb to drought? New Phytologist 178: 719–739.

Meireles LD. 2003. Florística das fisionomias vegetacionais e estrutura da floresta alto-montana de Monte Verde, Serra da Mantiqueira, MG. Masters thesis, University of Campinas (UNICAMP), Brazil.

Mercado LM, Bellouin N, Sitch S, et al. 2009. Impact of changes in diffuse radiation on the global land carbon sink. Nature 458: 1014–1017.

Motzer T. 2005. Micrometeorological aspects of a tropical mountain forest. Agricultural and Forest Meteorology 135: 230–240.

Mulligan M. 2010. Modeling the tropics-wide extent and distribution of cloud forest and cloud forest loss, with implications for conservation priority. In: Bruijnzeel LA, Scatena FN, Hamilton LS, eds. Tropical montane cloud forests. Science for conservation and management. Cambridge: Cambridge University Press, 14–39.

Mulligan M, Fisher M, Sharma B, et al. 2011. The nature and impact of climate change in the Challenge Program on Water and Food (CPWF) basins. Water International 36: 96–124.

Niederl S, Kirsch T, Riederer M, Schreiber L. 1998. Co-permeability of 3H-labelled water and 14C-labelled organic acids across isolated plant cuticles: investigating cuticular paths of diffusion and predicting cuticular transpiration. Plant Physiology 116: 117–123.

Oliveira RS, Dawson TE, Burgess SSO, Nepstad DC. 2005a. Hydraulic redistribution in three Amazonian trees. Oecologia 145: 354–363.

Oliveira RS, Dawson TE, Burgess SSO. 2005b. Evidence for direct water absorption by the shoot of the desiccation-tolerant plant Vellozia flavicans in the savannas of central Brazil. Journal of Tropical Ecology 21: 585–588.

Oliveira RS, Christoffersen BO, Barros FV, et al. 2014. Changing precipitation regimes and the water and carbon economies of trees. Theoretical and Experimental Plant Physiology.

Pang J, Wang Y, Lambers H, Tibbet M, Siddique KHM, Ryan MH. 2013. Commensalism in an agroecosystem: hydraulic redistribution by deep-rooted legumes improves survival of a droughted shallow-rooted legume companion. Physiologia Plantarum 149: 79–90.

Pammenter NW, Van der Willigen C. 1998. A mathematical and statistical analysis of the curves illustrating vulnerability of xylem to cavitation. Tree Physiology 18: 589–593.

Pauchard A, Kueffer C, Dietz H, et al. 2009. Ain't no mountain high enough: plant invasions reaching new elevations. Frontiers in Ecology and the Environment 7: 479–486.

Pounds JA, Fogden MP, Campbell JH. 1999. Biological response to climate change on a tropical mountain. Nature 398: 611–615.

Prieto I, Padilla FM, Armas C, Pugnaire FI. 2011. The role of hydraulic lift on seedling establishment under a nurse plant species in a semi-arid environment. Perspectives in Plant Ecology, Evolution and Systematics 13: 181–187.

Querejeta JI, Egerton-Warburton LM, Allen MF. 2007. Hydraulic lift may buffer rhizosphere hyphae against the negative effects of severe soil drying in a California oak savanna. Soil Biology and Biochemistry 39: 409–417.

Ramirez-Villegas J, Jarvis A. 2010. Downscaling global circulation model outputs: the delta method. Decision and Policy Analysis Working Paper No. 1. Colombia: CIAT.

Reinhardt K, Smith WK. 2008. Impacts of cloud immersion on microclimate, photosynthesis and water relations of Abies fraseri (Pursh.) Poiret in a temperate mountain cloud forest. Oecologia 158: 229–238.

Reynolds KM, Madden LV, Richard DL, Ellis MA. 1989. Splash dispersal of Phytophthora cactorum from infected strawberry fruit by simulated canopy drip. Phytopathology 79: 425–432.

Rosado BHP, Holder CD. 2013. The significance of leaf water repellency in ecohydrological research: a review. Ecohydrology 6: 150–161.

Rosado BHP, Oliveira RS, Aidar MPM. 2010. Is leaf water repellency related to vapor pressure deficit and crown exposure in tropical forests? Acta Oecologica 36: 645–649.

Rosado BHP, Oliveira RS, Joly CA, Aidar MPM, Burgess SSO. 2012. Diversity in nighttime transpiration behavior of woody species of the Atlantic Rain Forest, Brazil. Agricultural and Forest Meteorology 158–159: 13–20.

Santiago LS, Jones T, Goldstein G. 2010. Physiological variation in Hawaiian Metrosideros polymorpha across a range of habitats: from dry forests to cloud forests. In: Bruijnzeel LA, Scatena FN, Hamilton JG, Juvik JO, Bubb P, eds. Mountains in the mist: science for conserving and managing tropical montane cloud forests. Honolulu: University of Hawaii Press, 456–464.

Santiago LS, Goldstein G, Meinzer FC, Fownes J, Mueller-Dombois D. 2000. Transpiration and forest structure in relation to soil waterlogging in a Hawaiian montane cloud forest. Tree Physiology 20: 673–681.

Scatena FN, Bruijnzeel LA, Bubb P, Das S. 2010. Setting the stage. In: Bruijnzeel LA, Scatena FN, Hamilton LS, eds. Tropical montane cloud forests. Science for conservation and management. Cambridge: Cambridge University Press, 3–13.

Schlegel TK, Schönherr J, Schreiber L. 2005. Size selectivity of aqueous pores in stomatous cuticles of Vicia faba leaves. Planta 221: 648–655.

Scholl M, Eugster W, Burkard R. 2010. Understanding the role of fog in forest hydrology: stable isotopes as tools for determining input and partitioning of cloud water in montane forests. Hydrological Processes 25: 353–366.

Scholz FG, Bucci SJ, Goldstein G, Meinzer FC, Franco AC, Miralles-Wilhelm F. 2007. Removal of nutrient limitations by long-term fertilization decreases nocturnal water loss in savanna trees. Tree Physiology 27: 551–559.

Schönherr J. 1976. Water permeability of isolated cuticular membranes: the effect of pH and cations on diffusion, hydrodynamic permeability and size of polar pores. Planta 128: 113–126.

Schönherr J. 2006. Characterization of aqueous pores in plant cuticles and permeation of ionic solutes. Journal of Experimental Botany 57: 2471–2491.

Schönherr J, Bukovac MJ. 1972. Penetration of stomata by liquids: dependence on surface tension, wettability, and stomatal morphology. Plant Physiology 49: 813–819.

Schönherr J, Lendzian K. 1981. A simple and inexpensive method of measuring water permeability of isolated plant cuticular membranes. Zeitschrift für Pflanzenphysiologie 102: 321–327.

Schönherr J, Riederer M. 1989. Foliar penetration and accumulation of organic chemicals in plant cuticles. Reviews of Environmental Contamination and Toxicology 108: 1–70.

Schreiber L. 2001. Effect of temperature on cuticular transpiration of isolated cuticular membranes and intact leaf disks. Journal of Experimental Botany 52: 1893–1900.

Schreiber L. 2005. Polar paths of diffusion across plant cuticles: new evidence for an old hypothesis. Annals of Botany 95: 1069–1073.

Schreiber L, Riederer M. 1996. Ecophysiology of cuticular transpiration: comparative investigation of cuticular water permeability of plant species from different habitats. Oecologia 107: 426–432.

Schreiber L, Skrabs M, Hartmann KD, Diamantopoulos P, Simanova E, Santrucek J. 2001. Effect of humidity on cuticular water permeability of isolated cuticular membranes and leaf disks. Planta 214: 274–282.

Sidle RC, Ziegler AD, Negishi JN, Nik AR, Siew R, Turkelboom F. 2006. Erosion processes in steep terrain: truths, myths, and uncertainties related to forest management in Southeast Asia. Forest Ecology and Management 224: 199–225.

Silvertown J, Dodd ME, Gowing D, Mountford O. 1999. Hydrologically-defined niches reveal a basis for species-richness in plant communities. Nature 400: 61–63.

Simonin KA, Santiago LS, Dawson TE. 2009. Fog interception by Sequoia sempervirens (D.Don) crowns decouples physiology from soil water deficit. Plant, Cell and Environment 32: 882–892.

Smith WK, McClean TM. 1989. Adaptive relationship between leaf water repellency, stomatal distribution, and gas exchange. American Journal of Botany 76: 465–469.

Snyder KA, James JJ, Richards JH, Donovan LA. 2008. Does hydraulic lift or nighttime transpiration facilitate nitrogen acquisition? Plant and Soil 306: 159–166.

Still CJ, Foster PN, Schneider SH. 1999. Simulating the effects of climate change on tropical montane cloud forests. Nature 398: 608–610.

Takahashi M, Giambelluca TW, Mudd RG, et al. 2010. Rainfall partitioning and cloud water interception in native forest and invaded forest in Hawai'i Volcanoes National Park. Hydrological Processes 25: 448–464.

Westhoff M, Zimmermann D, Zimmermann G, et al. 2009. Distribution and function of epistomatal mucilage plugs. Protoplasma 235: 101–105.

Webster GL. 1995. The panorama of Neotropical cloud forests. In: Churchill SP, Balslev H, Forero E, Luteyn JL, eds. Biodiversity and conservation of Neotropical montane forests. Neotropical Montane Forest Biodiversity and Conservation Symposium 1. New York Botanical Garden, 53–77.

Werner WL. 1988. Canopy dieback in the upper montane rain forests of Sri Lanka. GeoJournal 17: 245–248.

Zotz G, Tyree MT, Patiño S, Carlton MR. 1998. Hydraulic architecture and water use of selected species from a lower montane cloud forest in Panama. Trees 12: 302–309.

Zou C, Barnes P, Archer S, McMurtry C. 2005. Soil moisture redistribution as a mechanism of facilitation in savanna tree–shrub clusters. Oecologia 145: 32–40.
http://tensornetworktheory.org/documentation/a00113.html
Tensor Network Theory Library Beta release 1.2.1 A library of routines for performing TNT-based operations
Time evolution
## Detailed Description
This section describes MPS functions which are useful for performing time evolution using the TEBD algorithm, although they are not limited to this.
## Functions
void tntMpoPropST2scConnect (tntNetwork mpo, tntNetwork PropB, tntNetwork PropT)
double tntMpoPropSTscContract (tntNetwork mpoProp, int chi)
tntNodeArray tntMpsCreatePropArray (unsigned L, tntComplex h, tntNodeArray *nnl, tntNodeArray *nnr, tntComplexArray *nnparam, tntNodeArray *os, tntComplexArray *osparam)
tntNetwork tntMpsPropArrayToST2sc (tntNodeArray Proparr)
void tntMpsPropST2scConnect (tntNetwork mps, tntNetwork Prop)
double tntMpsPropSTscContract (tntNetwork mpsProp, int chi)
tntNode tntMpsPropTwoSite (tntNode *Alp, tntNode *Arp, int chi, double *err)
## Function Documentation
void tntMpoPropST2scConnect ( tntNetwork mpo, tntNetwork PropB, tntNetwork PropT )
Connects networks created using tntMpsCreatePropST2sc() to an MPO network. It removes the singleton legs from the propagator network and connects the bottom propagator to the downward set of physical legs, and a conjugate copy to the upward set of physical legs. Note that in the convention used in the library, the bottom-facing legs are those that belong to the 'ket' and the top-facing legs are those that belong to the 'bra'.
The MPO passed as an argument is used, so the connections on the physical legs are modified by the function, and they must initially be unconnected. A copy of the propagator network is connected, so the propagator network passed in the arguments is unchanged by the function.
Use tntMpoPropST2scProduct() to then take the product of the network with an MPS.
Returns
No return value
Parameters
mpo The network representing the MPO PropB The network representing the propagator to contract to the bottom (ket) legs PropT The network representing the propagator to contract to the top (bra) legs
Definition at line 27 of file tntMpoST2sc.c.
Referenced by tntMpoPropST2scProduct().
double tntMpoPropSTscContract ( tntNetwork mpoProp, int chi )
Performs a single sweep in one direction of a Suzuki-Trotter staircase sweep, contracting the propagators with the MPO nodes. The direction is determined by the connections of the network.
The connections of the network must be as follows for a left to right sweep:
and as follows for a right to left sweep:
Any nodes connected to the downward-facing legs of the propagator network or the upward-facing legs of the conjugate propagator network are unaffected by the function.
Returns
The truncation error for that sweep, which is the sum of the truncation errors for all SVDs. The truncation error for each SVD can be defined in a few different ways - see the documentation for tntTruncType() for more details.
Parameters
mpoProp Network representing the MPO connected to the staircase. Will be changed by the function, by applying the propagator. chi The maximum internal dimension. All SVDs will be truncated to this value.
Definition at line 113 of file tntMpoST2sc.c.
Referenced by tntMpoPropST2scProduct().
tntNodeArray tntMpsCreatePropArray ( unsigned L, tntComplex h, tntNodeArray * nnl, tntNodeArray * nnr, tntComplexArray * nnparam, tntNodeArray * os, tntComplexArray * osparam )
Creates an array of propagators (length $$L-1$$) required for evolution under a site-wide operator $$\hat{O}$$ formed from a sum of nearest-neighbour and onsite terms.
$\hat{O} = \sum_{j=0}^{L-2}\sum_i^{n_n}\alpha_{i,j}\hat{o}^l_{i}\otimes\hat{o}^r_i + \sum_{j=0}^{L-1}\sum_i^{n_o}\beta_{i,j}\hat{o}^s_{i}$
Nearest-neighbour operators $$\hat{o}^l_{i}$$ and $$\hat{o}^r_i$$ should be provided in arrays nnl and nnr respectively both having length $$n_n$$. Onsite operators $$\hat{o}^s_{i}$$ should be provided in array os having length $$n_o$$. The operators should be single-site operators or product MPOs, i.e. no internal legs, and two physical legs with the legs labelled as follows:
All the operators should have the same dimension for the physical legs.
The parameters $$\alpha_{i,j}$$ and $$\beta_{i,j}$$ for the nearest-neighbour and onsite terms are supplied in matrices nnparam and osparam respectively. The matrix must have a number of rows equal to the length $$n_n$$ or $$n_o$$ of its respective operator array, but there are two options for the number of columns:
1. All the parameters are uniform across the lattice. In this case the parameters array should have one column (which is the default behaviour if it is created with only one dimension specified). The parameter $$\alpha_{i,j}$$ or $$\beta_{i,j}$$ should be in position i,1 in the matrix for all sites.
2. One or more of the parameters can vary across the lattice. In this case the parameters matrix should have L-1 columns for nearest-neighbour operators and L columns for onsite operators. The parameter $$\alpha_{i,j}$$ or $$\beta_{i,j}$$ for operator i and site j should be at position i,j in the matrix. Any uniform operators should have identical entries for all the sites.
The two-site operator for the nearest-neighbour terms is simply formed from the supplied arguments as
$\hat{T}^n_{j,j+1} = \sum_i^{n_n}\alpha_{i,j}\hat{o}^l_i\otimes\hat{o}^r_i$
for each pair of sites $$j,j+1$$.
The two-site operator for the on-site terms is formed from the supplied arguments as
$\hat{T}^s_{j,j+1} = \frac{1}{2}\sum_i^{n_o}\left[(\beta_{i,j}+\delta_{0,j}\beta_{i,0})\hat{o}^s_i\otimes\mathbb{1}+(\beta_{i,j+1}+\delta_{L-1,j+1}\beta_{i,L-1})\mathbb{1}\otimes\hat{o}^s_i\right]$
for each pair of sites $$j,j+1$$ i.e. the onsite terms are distributed symmetrically amongst the two-site terms.
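As a quick sanity check of this symmetric splitting (a standalone Python sketch, not part of the TNT library; the parameter values are made up), summing the onsite weight that each bond contributes per site must reproduce the original per-site coefficients:

```python
# Check the symmetric splitting of onsite terms across two-site gates:
# bond (j, j+1) carries 0.5*(beta[j] + delta(0,j)*beta[0]) on its left site
# and 0.5*(beta[j+1] + delta(L-1,j+1)*beta[L-1]) on its right site.
L = 5
beta = [0.3, -1.2, 0.7, 2.0, -0.5]  # made-up onsite parameters, one per site

site_weight = [0.0] * L
for j in range(L - 1):
    site_weight[j] += 0.5 * (beta[j] + (beta[0] if j == 0 else 0.0))
    site_weight[j + 1] += 0.5 * (beta[j + 1] + (beta[L - 1] if j + 1 == L - 1 else 0.0))

# Summing the bond contributions recovers the full site-wide coefficients.
assert all(abs(w - b) < 1e-12 for w, b in zip(site_weight, beta))
print(site_weight)
```

The boundary doubling (the delta terms) compensates for sites 0 and L-1 appearing in only one bond each.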
The propagator $$\hat{P}_{j,j+1}$$ in array entry j is then related to the operators $$\hat{T}_{j,j+1}=\hat{T}^n_{j,j+1}+\hat{T}^s_{j,j+1}$$ by:
$\hat{P}_{j,j+1} = \mathrm{exp}\left[-\mathrm{i}\Re(h)\hat{T}_{j,j+1} \right]\times\mathrm{exp}\left[-\Im(h)\hat{T}_{j,j+1} \right].$
The parameter $$h$$ is a uniform scale factor, which is usually related to the time step in simulations (e.g. half the time-step for second order decompositions). The propagators will then have four physical legs, each physical leg having the same dimension as the original physical legs, which are labelled as follows:
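The role of the real and imaginary parts of $$h$$ can be illustrated on a scalar (a standalone sketch, not the library API): a purely real $$h$$ gives a pure phase (real-time evolution), while a purely imaginary $$h$$ gives a real decay factor (imaginary-time evolution).

```python
import cmath

def scalar_propagator(h: complex, T: float) -> complex:
    # P = exp(-i*Re(h)*T) * exp(-Im(h)*T), i.e. the formula above applied to a scalar T.
    return cmath.exp(-1j * h.real * T) * cmath.exp(-h.imag * T)

p_real = scalar_propagator(0.1 + 0.0j, 2.0)   # real h: |P| = 1, a unitary phase
p_imag = scalar_propagator(0.0 + 0.1j, 2.0)   # imaginary h: P = exp(-0.2), a real decay

print(abs(p_real), p_imag)
```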
Returns
Returns the propagator for each pair of sites in an array.
Parameters
L Length of system. h Uniform scale factor to apply to all terms. See the main description for information on how real and imaginary parts are applied nnl Array of nearest-neighbour operators for left site. Send NULL if there are no nearest neighbour terms. Unchanged by function - copies are used. nnr Array of nearest-neighbour operators for right site. Send NULL if there are no nearest neighbour terms. Unchanged by function - copies are used. nnparam Array of parameters for nearest-neighbour operators. Send NULL if there are no nearest neighbour terms. os Array of onsite operators. Send NULL if there are no on-site operators. Unchanged by function - copies are used. osparam Parameters for the on-site operators. Send NULL if there are no on-site operators.
Definition at line 75 of file tntMpsCreatePropagator.c.
Referenced by tntMpsCreatePropST2sc().
tntNetwork tntMpsPropArrayToST2sc ( tntNodeArray Proparr )
Creates a network of two-site terms representing a site-wide propagator from an array of propagators decomposed using a Suzuki-Trotter second-order staircase expansion. Array entry j is assumed to contain the two-site propagator for sites $$j,j+1$$, and two copies of the propagator are placed in the network i.e. one for a right to left sweep, and one for a left-to-right sweep. Such an array can be created using tntMpsCreatePropArray(), although it does not have to be (e.g. the nodes could be loaded from an initialisation file instead). The final network has the following form:
The orange diamonds represent the start and end of the network i.e. the network starts at the top-most propagator and ends at the bottom-most propagator.
Returns
A network representing the site-wide propagator
Parameters
Proparr Array of propagators. Unchanged by the function - copies of all nodes are used.
Definition at line 115 of file tntMpsCreateSTstaircase.c.
Referenced by tntMpsCreatePropST2sc().
void tntMpsPropST2scConnect ( tntNetwork mps, tntNetwork Prop )
Connects a network created using tntMpsCreatePropST2sc() to an MPS network. It removes the singleton legs from the propagator network, and the start and end legs of the complete network are those that originally belonged to the MPS, as shown below.
Note the orange diamonds represent the start and end of the network.
The MPS passed as an argument is used, so the connections on the physical legs are modified by the function, and they must initially be unconnected. A copy of the propagator network is connected, so the propagator network passed in the arguments is unchanged by the function.
Use tntMpsPropST2scProduct() to then take the product of the network with an MPS.
Returns
No return value
Parameters
mps The network representing the MPS Prop The network representing the propagator
Definition at line 31 of file tntMpsST2sc.c.
Referenced by tntMpsPropST2scProduct().
double tntMpsPropSTscContract ( tntNetwork mpsProp, int chi )
Performs a single sweep in one direction of a Suzuki-Trotter staircase sweep, contracting the propagators with the MPS nodes. The direction is determined by the connections of the network.
The connections of the network must be as follows for a left to right sweep:
and as follows for a right to left sweep:
Any node connected to the downwards facing legs of the propagator network are unaffected by the function.
The `twist' or orthogonality centre is moved as the two-site gates are swept across. After completion of a left to right sweep the orthogonality centre will be on the last site, and after the completion of a right to left sweep the orthogonality centre will be on the first site.
Returns
The truncation error for that sweep, which is the sum of the truncation errors for all SVDs. The truncation error for each SVD can be defined in a few different ways - see the documentation for tntTruncType() for more details.
Parameters
mpsProp Network representing the MPS connected to the staircase. Will be changed by the function, by applying the propagator. chi The maximum internal dimension. All SVDs will be truncated to this value.
Definition at line 97 of file tntMpsST2sc.c.
Referenced by tntMpsPropST2scProduct().
tntNode tntMpsPropTwoSite ( tntNode * Alp, tntNode * Arp, int chi, double * err )
Performs the local contraction of two MPS nodes with the propagator, then factorises back into two unitary MPS nodes and a central node of singular values.
Returns
The node representing the matrix of singular values
Parameters
Alp Points to the left node of the pair Arp Points to the right node of the pair chi The maximum internal dimension for the SVD err Total truncation error
Definition at line 157 of file tntMpsST2sc.c.
References tntNodeFindConn(), and tntNodeSVD().
Referenced by tntMpsPropSTscContract().
https://proofwiki.org/wiki/Inverse_of_Transitive_Relation_is_Transitive/Proof_2
# Inverse of Transitive Relation is Transitive/Proof 2
## Theorem
Let $\RR$ be a relation on a set $S$.
Let $\RR$ be transitive.
Then its inverse $\RR^{-1}$ is also transitive.
## Proof
Let $\RR$ be transitive.
Thus by definition:
$\RR \circ \RR \subseteq \RR$
Thus:
$\ds \RR^{-1} \circ \RR^{-1} = \paren {\RR \circ \RR}^{-1}$ by Inverse of Composite Relation
$\ds \paren {\RR \circ \RR}^{-1} \subseteq \RR^{-1}$ by Inverse of Subset of Relation is Subset of Inverse
Hence $\RR^{-1} \circ \RR^{-1} \subseteq \RR^{-1}$, and the result follows by definition of transitive relation.
$\blacksquare$
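The same fact can also be checked element-wise (a sketch in Lean 4 using our own definitions, which follows the definition of transitivity directly rather than the composition identity used above):

```lean
-- R⁻¹ relates x to y exactly when R relates y to x.
def inv {α : Type} (R : α → α → Prop) : α → α → Prop :=
  fun x y => R y x

-- If R is transitive, so is its inverse: from R b a and R c b we get R c a.
theorem inv_trans {α : Type} (R : α → α → Prop)
    (hR : ∀ a b c, R a b → R b c → R a c) :
    ∀ a b c, inv R a b → inv R b c → inv R a c :=
  fun a b c hab hbc => hR c b a hbc hab
```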
http://www.c0t0d0s0.org/2010-03-02/less-known-solaris-features-logadm.c0t0d0s0
# Less known Solaris features - logadm
The mighty root couldn’t sleep at night, so root walked around the castle. Deep down in the foundation of the super user's castle there was a strange room. It was filled with scrolls, and some of the serving daemons of root filled this room with even more scrolls while root was standing there. So root looked in some of them: there were new ones, interesting ones. Then root took some old ones, blew away the dust, and after a short look root thought "Damned … those scrolls are so old, they aren't true anymore. Someone has to clean up this place". So root spoke a mighty spell of infinite power and another daemon spawned from the ether: "You are the keeper of the scrolls. But don't keep everything. Just the last ones." And so it has been done since the day of this sleepless night.
## Housekeeping in your logfile directories
One of the regular tasks of any decent admin should be housekeeping in the logfile directory. Logfiles are really useful when something goes wrong, but often they just fill the directories with data. Albeit it is sometimes useful to have even very old logfiles, most of the time you just need the recent ones in direct access.
One of the big conundrums with Solaris is that few people know the logadm tool. It has been available with Solaris since the release of Solaris 9. However, it is still one of the well-kept secrets of Solaris, despite the fact that the tool is well documented and already in use in Solaris. I often wonder what users of Solaris think about where those .0 files come from ;)
#### Capabilities
For all the usual tasks surrounding logfile handling you can use the command logadm. It is a really capable tool:
• rotating logs (by copying/truncating or moving)
• configuring rules for when a log rotation should take place. These rules can be based on ...
• the size of the log file
• the time since last log rotation
• executing commands before and after a logfile rotation
• compressing rotated log files based on rules
• specifying your own commands to copy/move the files
• specifying commands that should be used instead of a simple deletion for expiration of files
#### How it works
logadm is the tool you use to configure log rotation, as well as the tool that does the rotation itself. To automatically rotate the logs, logadm is executed by cron once a day. Let's look into the crontab of the root user.
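The relevant crontab entry looks like this (reconstructed from the schedule described below; the exact line may vary between Solaris releases):

```
10 3 * * * /usr/sbin/logadm
```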
So as you may have already recognized, the logadm script is executed daily at 3:10 am in a Solaris default configuration. But how is the rotation itself done? Well, exactly as you would do it by typing in commands on the shell. logadm does the same job - just automatically. It generates a sequence of commands to rotate the logs and sends them to a C shell for execution.
### Configuring it
Albeit it's possible to invoke the rotation entirely from the command line each time, this isn't a comfortable way to rotate logfiles. In almost every case you would configure the logadm facility so it does the rotation task automatically. So how is this done?
The daily run of logadm depends on a config file. When you don't specify a config file, it is /etc/logadm.conf by default. The configuration is rather short and just rotates the usual suspects. The following file is a little bit extended, as the default configuration file doesn't use all the important options:
You can edit this file directly, but it’s preferred to change it with the logadm command itself. Let’s dissect some lines of this configuration.
#### Standard log rotation - introducing -C,-P and -a
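An /etc/logadm.conf entry of the kind discussed here looks roughly like this (reconstructed from the description below; the -P timestamp is a made-up example):

```
/var/log/syslog -C 8 -P 'Mon Mar 1 03:10:00 2010' -a 'kill -HUP `cat /var/run/syslog.pid`'
```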
This line is responsible for rotating /var/log/syslog. The -C 8 specifies that logadm should keep 8 old versions before it expires (read: deletes) old logfiles. With the -a 'kill -HUP `cat /var/run/syslog.pid`' option, syslogd gets a HUP signal after the log rotation; it needs this to recreate the logfile and to restart logging. The -P isn't there in the pristine version of the file: this option is inserted when logadm runs and records when the last rotation was done, and logadm updates it each time it rotates the logfile. It's important to know that this time is GMT, so don't wonder about the offset to the local time shown in your logfiles. This statement contains no explicit time or size that triggers a log rotation, so the default values "1 byte and 1 week" kick in: /var/log/syslog is rotated when the last rotation was at least one week in the past *and* the file is at least one byte in size.
#### Explicit definition of a rotation time span - introducing -p
With -p you can control the period between log rotations. For example, 1d specifies that the log files are rotated on a daily schedule.
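An entry with an explicit daily period might look like this (/var/log/mylog is our own example name):

```
/var/log/mylog -p 1d
```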
So this logfile is rotated every day by the logadm execution initiated by the entry in the crontab.
#### A template for the name of the rotated log - introducing -t
-t specifies the way the names for rotated logfiles are created by logadm. The template for the new name of the logfile isn't just a fixed string; you can use several variables to control the name of the file.
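A hypothetical entry using a template (our own example; the logadm man page lists the full set of template variables):

```
/var/log/mylog -t '$file.$n'
```

With this template the rotated files would be named /var/log/mylog.0, /var/log/mylog.1, and so on.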
This is a relatively simple example: $n is just the version number of the file, starting with 0. $n is only one of the variables available for use in the templates; the logadm man page specifies further possible variables.
#### Explicit definition of maximum logfile size - introducing -s
Sometimes you want to limit your current logfile based on file size rather than on a time interval. The -s option allows you to do so:
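An entry matching the 512k example below might look like this (the logfile name is our own):

```
/var/log/mylog -s 512k
```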
With this option logadm will rotate the logfile as soon as it is 512 KB or larger at the time logadm runs.
#### Specifying both: Space and time
When you specify both a maximum logfile size (-s) and a maximum logfile period (-p), the two conditions are connected with an AND. So the default "1 byte and 1 week" translates to: rotate when the logfile has a size of at least 1 byte AND is one week old; thus a week-old zero-length logfile is not rotated by the default configuration.
#### Copy and truncate instead of moving - introducing -c
Rotating logfiles isn't unproblematic. Some applications don't like it if you simply move the file away: they may keep using the rotated log instead of a new one, or they simply don't create a new logfile. Thus you have to restart the service or send it a signal. There is an alternative, called truncating.
The -c option forces logadm to use cp instead of mv to rotate the log. To get a fresh start in the new log, /dev/null is copied over the current logfile, effectively making a 0-byte file out of it. This circumvents the need for a restart of the application to restart logging.
#### Compress after rotation - introducing -z
Due to their structure, logfiles can yield an excellent compression ratio, so it's sensible to compress them after rotation. You can configure this with the -z option. However, it's often somewhat impractical to work with compressed files, so the parameter after -z tells logadm not to compress that number of the most recent log files. By doing so you have the recent logfiles available for processing without decompressing, without sacrificing space by leaving all logfiles uncompressed.
-z 1 configures logadm to leave the most recent rotated logfile untouched and compress the next one. This results in the following files:
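With, say, -C 8 and -z 1 on /var/log/syslog, a directory listing would look roughly like this (filenames are illustrative):

```
/var/log/syslog
/var/log/syslog.0
/var/log/syslog.1.gz
/var/log/syslog.2.gz
/var/log/syslog.3.gz
```

and so on up to syslog.7.gz: the most recent rotated file stays uncompressed, the older ones are gzipped.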
You can check an addition with the logadm -V command. -V triggers validation of the file, so it's sensible to run this command after a direct addition to the file as well.
Okay, we've worked a while with this log rotation configuration. You will notice that the configuration line has changed.
Now you want to get rid of this configuration statement. This is done with the -R option in conjunction with the name of the logfile.
It's gone … /var/squid/logs/cache.log will not be rotated the next time logadm is executed by cron or at the command line.
#### Forcing an immediate rotation
Sometimes you want to execute an immediate log rotation, for example because you want to restart a service with a fresh log to ease analysis. -p now executes the log rotation right at the moment you start logadm.
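Such a forced rotation might look like this (reusing the squid logfile from the example above):

```
# logadm -p now /var/squid/logs/cache.log
```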
With -p now you can even execute log rotations that are never executed automatically.
#### What happens under the hood?
As I wrote at the beginning, logadm works by sending a lot of commands to a C shell. logadm has an interesting mode of operation: when you use the -v option, the tool shows the commands it uses to rotate the logfiles.
As you may have recognized, it is pretty much the same as you would do manually.
## Tips and Tricks
It's perfectly possible to have more than one configuration file. You can choose the config file with -f: you just add an additional entry in the cron table that specifies a different config file. A colleague of mine uses this "feature" to have an independent log rotation config file for SamFS, so he doesn't have to edit the file supplied by the distribution.
Albeit it's just a really small tool, the possibilities of logadm to help you with your logfiles are vast. I really wonder why many people don't know about it. I hope I've changed that at least a little bit by writing this tutorial.
https://forum.azimuthproject.org/plugin/ViewComment/22800
Here's the way to think about using negative entropy to find an optimal solution. Consider autonomous vs non-autonomous differential equations. One way to think about the distinction is that the transfer function for the non-autonomous case depends only on the presenting input. Thus, it acts like an op-amp with infinite bandwidth, or, below saturation, it gives perfectly linear amplification.
In contrast, for an autonomous formulation, the amplification depends on prior values so it requires a time-domain convolution or a frequency-domain transfer function
Yet there are many other non-autonomous formulations that aren't linear, for example a companding transfer that takes the square root of the input (used for compressing the dynamic range of a signal).
What does this have to do with entropy? Well that transfer function can get very strange but still possess underlying order. Yet that order or pattern may be difficult to discern without adequate information. So consider if the non-autonomous transfer function itself is something odd, such as an unknown and potentially complex sinusoidal modulation. This occurs in Mach-Zehnder modulation. The effect is to distort the input enough to fold the amplitude at certain points.
The difficulty is if we have little knowledge of the input forcing or the modulation, we will not be able to decode anything. But with a measure such as Negative Shannon Entropy, we can see how far we can go with limited info.
So consider this output waveform that we are told is due to Mach-Zehnder modulation of an unknown input
All we know is that there may be a basis forcing that consists of a couple of sinusoids, and that there is an obvious non-autonomous complex modulation that is generating the above waveform
The idea is that we test out various combinations of sinusoidal parameters and then maximize the Shannon entropy of the *power spectrum* of the transfer from input to output (see the citation in the previous post). We can do this by calculating a discrete Fourier transform or an FFT and multiplying by the complex conjugate to get the power spectrum. For a perfectly linear amplification as in the first example, it is essentially a delta function at a frequency of zero, indicating maximum order with a maximum negative Shannon entropy. And for a single sinusoidal frequency modulation, the power spectrum would be a delta *shifted* to the frequency of the modulation. Again this will be a maximally-ordered amplification, and again with a maximum in negative Shannon entropy. Yet, in practical terms, perhaps something such as a Renyi or Tsallis entropy measure would work even better than Shannon entropy. Actually, the [Tsallis entropy](https://en.wikipedia.org/wiki/Tsallis_entropy) is close to describing a mean-square variance error in a signal, whereby it exaggerates clusters or strong excursions when compared against a constant background.
So this is what I have used that works quite well. I essentially maximize the normalized mean-squared variance of the power spectrum
$$\frac{\sum_\omega \left(F(\omega)-\langle F \rangle\right)^2}{\sum_\omega F(\omega)}$$
where $\langle F \rangle$ is the mean of the power spectrum $F(\omega)$.
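To make the metric concrete (a standalone Python sketch with our own toy signals, not the search algorithm itself): a spectrum dominated by a few sharp peaks scores far higher on this measure than a flat, noise-like spectrum.

```python
import cmath
import math
import random

def power_spectrum(x):
    # Naive O(N^2) DFT power spectrum |F(w)|^2; an FFT would be used in practice.
    N = len(x)
    F = [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
         for k in range(N)]
    return [abs(f) ** 2 for f in F]

def spectral_variance(spec):
    # Normalized mean-squared variance: sum((F - <F>)^2) / sum(F).
    mean = sum(spec) / len(spec)
    return sum((s - mean) ** 2 for s in spec) / sum(spec)

N = 64
tone = [math.sin(2 * math.pi * 5 * n / N) for n in range(N)]  # ordered: two delta peaks
random.seed(0)
noise = [random.random() - 0.5 for _ in range(N)]             # disordered: roughly flat

sv_tone = spectral_variance(power_spectrum(tone))
sv_noise = spectral_variance(power_spectrum(noise))
print(sv_tone, sv_noise)  # the ordered signal scores far higher
```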
The result of a search algorithm of input sinusoidal factors to maximize the power spectrum variance value is this power spectrum
which stems from this optimal input forcing
Note that this is not the transfer modulation, which we still need to extract from the power spectrum.
As a result, this negative entropy algorithm is able to deconstruct or decode a Mach-Zehnder modulation of two sinusoidal factors that's encoding an input forcing of another pair of sinusoidal factors. So essentially we are able to find 4 unknown factors (or 8 if both amplitude and phase are included) by only searching on 2 factors (or 4 if amplitude and phase are included). But how is that possible? It's actually not a free lunch because the power spectrum calculation is essentially testing all possible modulations in parallel and the negative entropy calculation is keeping track of the frequency components that maximize the delta functions in the spectrum. That is the mean-square variance is weighting greater excursions than a flat highly-random background would.
From the paper, this is the general idea. For negative entropy we are looking for the upper spectrum, not the lower, which is a maximum entropy
Good luck, this works well for certain applications. It may even work better in a search algorithm than if you did a pure RMS minimization of fitting the 4 sinusoidal factors directly against the output, as it may not fall into local minima as easily. Doing the power spectrum helps to immediately broaden the search I think.
https://rviews.rstudio.com/2021/09/27/august-2021-top-40-new-cran-packages/
# August 2021: "Top 40" New CRAN Packages
by Joseph Rickert
One hundred sixty new packages covering a wide array of topics made it to CRAN in August. I thought I would emphasize the breadth of topics by expanding the number of categories organizing my “Top 40” selections beyond core categories that appear month after month. Here are my picks in fourteen categories: Archaeology, Computational Methods, Data, Education, Finance, Forestry, Genomics, Machine Learning, Medicine, Science, Statistics, Time Series, Utilities, and Visualization. Based on informal impressions formed over the last several months, I believe a new category combining applications in forestry, animal populations, and climate change could become a regular core category.
### Archaeology
DIGSS v1.0.2: Provides a simulation tool to estimate the rate of success that surveys including user-specific characteristics have in identifying archaeological sites given specific parameters of survey area, survey methods, and site properties. See Kintigh (1988) for background and the vignette for examples.
### Computational Methods
simlandr v0.1.1: Provides a set of tools for constructing potential landscapes for dynamical systems using Monte-Carlo simulation which is especially suitable for formal psychological models. There are vignettes on Dynamic Models and Simulations, Constructing Potential Landscapes, and Calculating the Lowest Elevation Path.
### Data
metaboData v0.6.2: Provides access to remotely stored data sets from a variety of biological sample matrices analyzed using mass spectrometry metabolomic analytical techniques. See the vignette.
nflreadr v1.1.0: Provides functions for downloading data from the GitHub repository for the nflverse project. There is a brief Introduction and several short vignettes that serve as the data dictionary for the various files Draft Picks, Rankings, etc.
OCSdata v1.0.2: Provides functions to access and download data from the Open Case Studies repositories on GitHub. See the vignette to get started.
rATTAINS v0.1.2: Implements an interface to United States Environmental Protection Agency (EPA) ATTAINS database used to track information provided by states about water quality assessments conducted under federal Clean Water Act requirements. There is a vignette.
taylor v0.2.1: Provides access to a curated data set of Taylor Swift songs, including lyrics and audio characteristics. Data comes from Genius and the Spotify API. See README for examples.
### Education
karel v0.1.0: Provides an R implementation of Karel the robot, a programming language for teaching introductory concepts about general programming in an interactive and fun way, by writing programs to make Karel achieve tasks in the world she lives in. There are several vignettes including one on Control Structures and another on Algorithmic Decomposition.
roger v0.99-0: Implements tools for grading the coding style and documentation of R scripts. This is the R component of Roger the Omni Grader, an automated grading system for computer programming projects based on Unix shell scripts. Look here for more information.
### Finance
dispositionEffect v1.0.0: Implements four different methodologies to evaluate the presence of the disposition effect and other irrational investor behaviors based on investor transactions and financial market data. There is a Getting Started Guide, and vignettes on Analysis, Disposition Effects in Parallel, and Time Series Disposition Effects.
HDShOP v0.1.1: Provides functions to construct shrinkage estimators of high-dimensional mean-variance portfolios and performs high-dimensional tests on optimality of a given portfolio. See Bodnar et al. (2018), Bodnar et al. (2019), and Bodnar et al. (2020) for background.
tcsinvest v0.1.1: Implements an interface to the Tinkoff Investments API so that analysts and traders can interact with account and market data from within R. Clients for both REST and Streaming protocols have been implemented. There is a vignette.
### Forestry
APAtree v1.0.1: Provides functions to map the area potentially available (APA) using the approach from Gspaltl et al. (2012) and also aggregation functions to calculate stand characteristics based on APA-maps and the neighborhood diversity index as described in Glatthorn (2021). See the vignette for examples.
efdm v0.1.0: Implements the European Forestry Dynamics Model (EFDM), a large-scale forest model that simulates the development of a forest and estimates volume of wood harvested for any given forested area. See Packalen et al. (2015) for background and the vignette for examples.
### Genomics
molnet v0.1.0: Implements a network analysis pipeline that enables integrative analysis of multi-omics data including metabolomics. It allows for comparative conclusions between two different conditions, such as tumor subgroups, healthy vs. disease, or generally control vs. perturbed. The case study presented in the vignette uses data published by Krug (2020).
simtrait v1.0.21: Provides functions to simulate complex traits given a SNP genotype matrix and model parameters with an emphasis on avoiding common biases due to the use of estimated allele frequencies. Traits can follow three models: random coefficients, fixed effect sizes, and multivariate normal. GWAS method benchmarking functions as described in Yao and Ochoa (2019) are also provided. See the vignette.
statgenIBD v1.0.1: Provides functions to calculate biparental, three and four-way crosses Identity by Descent (IBD) probabilities using Hidden Markov Models and inheritance vectors following Lander & Green (1987) and Huang (2011). See the vignette for examples.
### Machine Learning
text2map v0.1.0: Provides functions for computational text analysis for the social sciences including functions for working with word embeddings, text networks, and document-term matrices. For background on the methods used see Stoltz and Taylor (2019), Taylor and Stoltz (2020), Taylor and Stoltz (2020), and Stoltz and Taylor (2021). There is a Quick Start Guide and a vignette on Concept Class Analysis.
NPRED v1.0.5: Uses partial informational correlation (PIC) to identify the meaningful predictors from a large set of potential predictors. Details can be found in Sharma & Mehrotra, (2014), Sharma et al.(2016), and Mehrotra & Sharma (2006). See the vignette for examples.
stabiliser v0.1.0: Implements an approach to variable selection through stability selection and the use of an objective threshold based on permuted data. See Lima et al (2021) and Meinshausen & Buhlmann (2010) for details and the vignette for an example.
### Medicine
dreamer v3.0.0: Fits longitudinal dose-response models utilizing a Bayesian model averaging approach as outlined in Gould (2019) for both continuous and binary responses. See the vignette.
smartDesign v0.72: Implements the SMART trial design, as described by He et al. (2021) which includes multiple stages of randomization where participants are randomized to an initial treatment in the first stage and then subsequently re-randomized between treatments in the following stage. There is a Dynamic Treatment Tutorial and a Sequential Design Tutorial.
### Science
bootf2 v0.4.1: Provides functions to compare dissolution profiles with confidence intervals of the similarity factor f2 and also functions to simulate dissolution profiles. There are multiple vignettes, including an Introduction and a Simulation Example.
track2KBA v1.0.1: Provides functions to prepare and analyze animal tracking data in order to identify areas of potential interest for population level conservation. See Lascelles et al. (2016) for background on the methodology employed and the vignette for examples and workflow.
### Statistics
chyper v0.3.1: Provides functions to work with the conditional hypergeometric distribution. See the vignette.
sprtt v0.1.0: Provides functions to perform sequential t-tests including those of Wald (1947), Rushton (1950), Rushton (1952), and Hajnal (1961). There is an Introduction to the package, a Use Case, and a vignette on the Sequential t-test.
SurvMetrics v0.3.5: Implements popular evaluation metrics commonly used in survival prediction including Concordance Index, Brier Score, Integrated Brier Score, Integrated Square Error, Integrated Absolute Error and Mean Absolute Error. For detailed information, see Ishwaran et al. (2008) and Moradian et al. (2017). The vignette offers examples.
### Time Series
DCSmooth v1.0.2: Implements nonparametric smoothing techniques for data on a lattice or functional time series which allow for modeling a dependency structure of the error terms of the nonparametric regression model. See Beran & Feng (2002), Mueller & Wang (1994), Feng & Schaefer (2021), and Schaefer & Feng (2021) for the background and the vignette for examples.
STFTS v0.1.0: Implements statistical hypothesis tests of functional time series including a functional stationarity test, a functional trend stationarity test and a functional unit root test.
WASP v1.4.1: Implements wavelet-based variance transformation methods for system modeling and prediction. For details see Jiang et al. (2020), Jiang et al. (2020), and Jiang et al. (2021). There is a vignette with examples.
### Utilities
ExpImage v0.2.0: Provides an image editing tool for researchers which includes functions for segmentation and for obtaining biometric measurements. There are several vignettes including: Contagem de bovinos, Contagem de objetos, and Como editar imagens.
meltr v1.0.0: Provides functions to read non-rectangular data, such as ragged forms of csv (comma-separated values), tsv (tab-separated values), and fwf (fixed-width format) files. See README to get started.
plumbertableau v0.1.0: Implements tools for building plumber APIs that can be used in Tableau workbooks. There is a package Introduction and vignettes on Writing Extensions, Using Extensions in Tableau, and Publishing Extensions to RStudio Connect.
string2path v0.0.2: Provides functions to extract glyph information from a font file, translate the outline curves to flattened paths or tessellated polygons, and return the results as a data.frame. See README for an example.
trackdown v1.0.0: Uses Google Drive to implement tools for collaborative writing and editing of R Markdown and Sweave documents. There are some Tech Notes and vignettes on Features and Workflow.
### Visualization
aRtsy v0.1.1: Provides algorithms for creating artwork in the ggplot2 language that incorporate some form of randomness. See README for examples and package use.
ggcleveland v0.1.0: Provides functions to produce ggplot2 versions of the visualization tools described in William Cleveland’s book Visualizing Data. The vignette contains several examples.
ggtikz v0.0.1: Provides tools to annotate ggplot2 plots with TikZ code using absolute data or relative coordinates. See the vignette.
tidycharts v0.1.2: Provides functions to generate charts compliant with the International Business Communication Standards (IBCS) including unified bar widths, colors, chart sizes, etc. There is a Getting Started guide and vignettes on EDA, Customization, and Joining Charts.
http://openstudy.com/updates/55d1ded1e4b09ad8b749e67c
## mathmath333 one year ago Probability question
1. mathmath333
An urn contains 10 black and 5 white balls. Two balls are drawn from the urn one after the other without replacement. What is the probability that both drawn balls are black?
2. misty1212
HI!!
3. misty1212
what is the probability that the first one is black?
4. mathmath333
Probability that the first one is black = 10/15 = 2/3
5. mathmath333
hii!
6. misty1212
ok now how many black balls are left in the urn?
7. mathmath333
9
8. misty1212
right and how many balls total?
9. mathmath333
14
10. misty1212
so therefore the probability that the second ball chosen is black, given that the first ball was black, is ?
11. mathmath333
9/14
12. misty1212
exactly
13. misty1212
your final job is $\frac{2}{3}\times \frac{9}{14}$
14. mathmath333
3/7
15. misty1212
ok looks good
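The arithmetic in the thread can be double-checked both exactly and by simulation; a short sketch (not part of the original discussion):

```python
from fractions import Fraction
import random

# Exact: P(first black) * P(second black | first black) = 10/15 * 9/14
exact = Fraction(10, 15) * Fraction(9, 14)
print(exact)  # 3/7

# Monte Carlo cross-check: draw two balls without replacement.
rng = random.Random(0)
urn = ["B"] * 10 + ["W"] * 5
trials = 100_000
hits = sum(rng.sample(urn, 2) == ["B", "B"] for _ in range(trials))
print(hits / trials)  # close to 3/7 ≈ 0.4286
```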
https://maker.pro/forums/threads/babys-first-opamp.7151/
# Baby's First OpAmp
C
#### CitizenRuth
Jan 1, 1970
0
Waaaanh! Electronics is hard.
I'm trying to become familiar with my 358n dual OpAmp chip. I am trying
to amplify an oscillator signal, an AR pulse from a 555. In general,
I'm trying to learn about synthesizer basics.
I'm using a non-inverting amp configuration with the R1/R2 =~ 1 for a
gain of approx. 2. But when I hook up the pulse, the output from the
OpAmp is actually significantly lower than the input.
I figured this chip is bad, because I got it on the street, but when I
use a straight DC voltage at the input instead of the output from the
555, the OpAmp circuit works as one would expect.
Can someone help me understand? It's not incorrect use of the multimeter, is
it? I have the $60 Radio Shack one and I tried measurements with both the
AC and DC voltage settings (for the pulse circuit).
Thanks much!
J
#### John Popelish
Jan 1, 1970
0
No opamp is perfect and does exactly what you expect an opamp to do.
And the 358 is no exception. I think it might be time for you to
learn how to read a data sheet to anticipate if the imperfections in a
particular opamp will have significant impact on a particular application.
http://www.national.com/ds.cgi/LM/LM158.pdf
The fact that your opamp performs more ideally at DC than it does with
a pulse input may either have to do with voltage swing peaks or with
voltage swing time rate of change. Opamps have limits in both areas.
Take a look at the specs called Input Common-Mode Voltage Range,
Output Voltage Swing, the graph of Open Loop Frequency Response (gain
versus frequency), and the graph of Large Signal Frequency Response
(signal swing versus frequency).
If you have any questions what any of these specs mean to your
experiment, come back with those questions and the actual values of
supply voltage, resistor values and pulse voltage and frequency you
are experimenting with.
B
#### Bob Eldred
Jan 1, 1970
0
What are the power supply voltages that the 555 and the 358 are operating at? Also, what is the frequency of the 555 and the pulse width? Typically the 555 will put out a pulse from near zero to within a volt or so of its power supply voltage. If that is 12 volts, the output might get to 11 or so. Similarly, the 358 can only get within a volt or so of its power supply voltages. Now if the 555 is delivering a signal that is 0 to 11 volts and you are trying to multiply it by two, non-inverting, the op-amp would have to deliver 0 to 22 volts, clearly beyond its range.
So that is the first thing to look for: what is the voltage you are trying to amplify, and what is the available voltage that the amplifier can swing? Obviously you have to stay within that range by lowering the gain, lowering the 555 output, or offsetting the amplifier so it stays within range, e.g., -11V to +11V or something similar.
A second thing to look for is the frequency and pulse width of the 555. If the frequency is too high or the pulse width too short, the amplifier cannot respond. Furthermore, the rise and fall times of a 555 are surely much faster than the slew rate of a 358 will allow. This will put slopes on the rise and fall of the waveform. A 358 is not an appropriate amplifier for fast pulses, and this sloping may keep the waveform from reaching its peak, depending on how wide the pulse is.
Thirdly, a multimeter is not the proper instrument to measure pulses with. You need a scope to see what is actually happening. Good luck.
Bob
B
#### Ban
Jan 1, 1970
0
In the data sheet it is written: single supply, 5V to 32V. If you want the circuit to function, you will need at least a +24V supply for the op-amp (in case the 555 is working on 12V). Another possibility would be to operate the 555 on 5V and then use the 358 with a +12V supply. Since the OP already has a timer running, he or she will also get this op-amp stage to work.
When measuring with a multimeter, it will show only half the expected value, because half of the time the output is at 0V and half at the double voltage, if the duty cycle is 50%. In this case a measurement taken directly at the output of the timer can be compared to one at the op-amp output. It will show the desired 2:1 ratio when the supply voltage is high enough.
B
#### Bob Eldred
Jan 1, 1970
0
True, but if the 555 is running at several hundred kHz then the 358 will not even respond, at least not to those voltages. There is no mention of frequency or pulse width, and I don't think you can assume a 50% duty cycle. In short, we don't have enough information to conclude much of anything.
Bob
P
#### Peter Bennett
Jan 1, 1970
0
When you feed DC into the op-amp, and measure the output with the
meter, the meter is reading DC, and should indicate the expected
voltage.
When you measure the output voltage with the pulse input, the meter,
when set to DC volts, will read the average voltage. If the pulse
duty cycle is 50%, the meter should read about half the maximum value.
If the pulse has a much shorter positive period than zero period, the
meter will read well below half.
With the meter set to AC, it may read the RMS value, which will be
less than half the peak-to-peak value.
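Peter Bennett's averaging point is easy to check numerically. A minimal sketch (the resistor and voltage values below are made up for illustration, not taken from the thread):

```python
def noninverting_gain(rf, rg):
    """Ideal non-inverting op-amp gain: 1 + Rf/Rg (= 2 when Rf = Rg)."""
    return 1 + rf / rg

def dc_meter_reading(v_high, v_low, duty_cycle):
    """A DC voltmeter averages a fast pulse train: it reads the
    time-weighted mean of the waveform, not its peak value."""
    return duty_cycle * v_high + (1 - duty_cycle) * v_low

print(noninverting_gain(10e3, 10e3))     # 2.0
# A 0-10V pulse at 50% duty reads only 5V on a DC meter,
# and even less if the positive period is shorter:
print(dc_meter_reading(10.0, 0.0, 0.5))  # 5.0
print(dc_meter_reading(10.0, 0.0, 0.2))  # 2.0
```

So a meter reading "significantly lower than the input" is exactly what an averaging meter would show for a short-duty-cycle pulse, even with the amplifier working correctly.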
C
#### CitizenRuth
Jan 1, 1970
0
Thanks for all the input, everybody - very helpful. Lots of things I
By the way, I meant to write AF, not AR, i.e., Audio Frequency. AR is a
CSound term . . . .
http://openstudy.com/updates/4dc1c62cb0ab8b0b699c8c8b
## anonymous 5 years ago can you help me to solve and graph |-2x-4|>6
1. anonymous
-2x-4>6 -2x > 10 x < -5 <-- when dividing by a negative number the sign flips. This can be plotted on a number line. x = every number greater than negative five.
2. anonymous
Uh.. that's not correct. Recall that: $|a| > b \iff a > b \text{ or } a < -b$ In this case a is (-2x - 4) and b is 6 So we have -2x - 4 < -6 or -2x - 4 > 6 -2x < -2 or -2x > 10 x > 1 or x < -5
3. anonymous
You can also see that your solution is wrong since 0 is greater than -5 and yet $|-2(0) -4| = |-4| = 4 < 6$
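The corrected solution set (x > 1 or x < -5) can also be verified numerically with a quick sketch:

```python
# Brute-force check of |-2x - 4| > 6 over sample x values:
# the inequality should hold exactly when x > 1 or x < -5.
def holds(x):
    return abs(-2 * x - 4) > 6

for x in [-7, -5, -1, 0, 1, 3]:
    print(x, holds(x), (x > 1 or x < -5))
```

Note the boundary points x = 1 and x = -5 give |−2x − 4| = 6, which does not satisfy the strict inequality, so both endpoints are excluded on the number line (open circles).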
https://www.gatecseit.in/information-communication-technology/ict-mcq-set-208/
# ICT MCQ set 208
By | December 21, 2018
## Information and Communicating Technology (ICT) MCQ QUESTION ANSWERS
Question 1
LCD stands for..........
A Low Crystal Display B Less Crystal Display C Liquid Crystal Display D All the above
Question 2
Half of a byte is also called what?
A Half-byte B Quadra C Nybble D Binary
Question 3
Microsoft PowerPoint can insert objects from the following add-ins?
A Equation Editor B Organization Chart C Photo Album D All of these
Question 4
The output quality of a printer is measured by
A Dot per inch B Dot per sq. inch C Dots printed per unit time D All of above
Question 5
Which button do you click to add up a series of numbers?
A The autosum button B The Formula button C The quicktotal button D The total button
Question 6
Which of the following describes the characteristic features of SRAM?
A Cheap but slow B More consumption of power and much costly C Based on transistor-capacitor combination D Low consumption of power
Question 7
For separating channels in FDM, it is necessary to use..........
A Bandpass filters B Time slots C All of the above D None of these
Question 8
Touch Screen is..........?
A Input device B Output device C Both A& B above D None of these
Question 9
Which of the following is used to connect two strings of text?
A * B ^ C & D ~
Question 10
MICR stands for
A Magnetic Ink Cases Reader B Magnetic Ink Card Recognition C Magnetic Ink Character Recognition D None of the above
Question 11
..........is the process of finding errors in software code
A Compiling B Testing C Running D Debugging
Question 12
In 1999, the Melissa virus was a widely publicized:
A e-mail Virus B Macro virus C Trojan Horse D Time Bomb
Question 13
The sequence of events that happen during a typical fetch operation is
A PC--> Memory--> IR B PC--> MAR--> Memory--> IR C PC--> MAR--> Memory---> MDR--> IR D PC--> Memory--> MDR--> IR
Question 14
Which of the following is correct regarding the Background of slides
A Background color of slides can be change B Picture can be set as Slide Background C Texture can be set as Slide Background D All of the Above
Question 15
How many vacuum tubes were used in the first analytical engine?
A 12000 B 10000 C 90000 D 14000
Question 16
The overall design, construction, organization and interconnection of the various components of a computer system is referred to as
A Computer Architecture B Computer Flow chart C Computer Algorithm D None of these
There are 16 questions to complete.
http://math.stackexchange.com/questions/152512/why-is-the-topological-pressure-called-pressure
# Why is the topological pressure called pressure?
Let us consider a compact topological space $X$, and a continuous function $f$ acting on $X$. One of the most important quantities related to such a topological dynamical system is the entropy.
For any probability measure $\mu$ on $X$, one can define the measure-theoretic (or Kolmogorov-Sinai) entropy. Without reference to any measure, one can define the topological entropy, which has the good property of being an invariant under homeomorphisms. These two notions are related via a variational principle:
$$h_\mathrm{top} (f) = \sup_{\{\mu\ \mathrm{inv.}\}} h_\mu (f),$$
and are also related to the physical notion of entropy of a system (well, the KS entropy is, at least. The case for the topological entropy is less clear for me, although things behave nicely in the cases I know and which have a physical interest).
Given a continuous potential $\varphi:X \to \mathbb{R}$, one can define the topological pressure $P(\varphi, f)$ by mimicking the definition of the topological entropy (other definitions include the following equation, and some extensions for complex potentials). Then one can get another variational principle:
$$P (\varphi, f) = \sup_{\{\mu \ \mathrm{inv.}\}} \left\{ \int_X \varphi \ d \mu + h_\mu (f) \right\}.$$
The RHS in the variational principle above is the supremum of $\int_X \varphi \ d \mu + h_\mu (f)$, which is, up to a change of sign (1), what is called in physics the free energy of the system. And we try to maximize it, as in physics (modulo the change of sign).
So it would seem logical that, just as we have measure-theoretic and topological entropies, we would have measure-theoretic and topological free energies. And I can't find why one would want to call "pressure" what is the maximum of the free energy. I looked at some old works by David Ruelle, but couldn't find how this term was coined, and soon ran into the "not on the Internet nor in the library" wall. It may have something to do with lattice gases, but I emphasize the "may".
So my question is: why is this thing called pressure?
1. The first clue is that the entropy has a positive, and not negative, sign. The second is that we try to maximize the quantity, while in physics one tries to minimize it. Other clues include the fact that, in non-compact cases, a good condition is to have $\lim_\infty \varphi = - \infty$, again in opposition with physics.
Edit: I have asked three people who are familiar with the subject, but none gave me a good answer (actually, I got somewhat conflicting answers). I am starting a bounty to draw some attention, but this might be better suited to MathOverflow...
docs.google.com/viewer?a=v&q=cache:O1CB20zCU3wJ:www.csm.ro/… maybe it will help you, but I am not sure – accounts-manager Mar 17 '13 at 19:29
Perhaps because, in the absence of chemical potentials, the free energy is $pV$ (pressure times volume), and in a probability space, the "volume" (or total measure) is 1 — though the analogies can't be carried too far, as you have indicated.
|
2015-10-09 20:11:01
|
https://ai.stackexchange.com/tags/graph-neural-networks/hot
|
# Tag Info
### What is geometric deep learning?
To complete the first answer that is rather graph oriented, I will write a little about deep learning on manifolds, which is quite general in terms of GDL thanks to the nature of manifolds. Note ...
### What is non-Euclidean data?
I presume this question was prompted by the paper Geometric deep learning: going beyond Euclidean data (2017). If we look at its abstract: Many scientific fields study data with an underlying ...
### What is non-Euclidean data?
Non-Euclidean geometry can generally be boiled down to the phrase: the shortest path between 2 points isn't necessarily a straight line. Or, put in a way that lends itself very much to machine ...
### What is geometric deep learning?
The article Geometric deep learning: going beyond Euclidean data (by Michael M. Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, Pierre Vandergheynst) provides an overview of this relatively new sub-...
### What is a filter in the context of graph convolutional networks?
Short answer Check out the paper of Shuman et al. [1], it provides some background on Graph Signal Processing, including answers to your questions in sections II.C and III.A Long Answer Question 1 Yes,...
### What is the difference between graph convolution in the spatial vs spectral domain?
Spectral Convolution In a spectral graph convolution, we perform an Eigen decomposition of the Laplacian Matrix of the graph. This Eigen decomposition helps us in understanding the underlying ...
### How are GCN doing semi-supervised learning?
In the introduction, the authors write We consider the problem of classifying nodes (such as documents) in a graph (such as a citation network), where labels are only available for a small subset of ...
### How does the K-dimensional WL test work?
I never used k-WL in practice, but I did apply Weisfeiler-Lehman for my graph tasks. As you may know, WL produces a coloring by an iterative procedure that assigns each node a 'color' (...
### Is there an open-source implementation for graph convolution networks for weighted graphs?
You can use the Pytorch_Geometric library for your projects. It supports weighted GCNs. It is a rapidly evolving open-source library with easy-to-use syntax. It is mentioned on the landing page of ...
### What is the best resources to learn Graph Convolutional Neural Networks?
I believe Graph Representation Learning book by William L. Hamilton is a great resource to start
### Machine learning with graph as input and output
You can flatten the graph into a matrix and then train it like a normal neural network input. Perhaps an adjacency graph or maybe simply a series of linear equations representing the nodes and convert ...
### What is a graph neural network?
Graph Neural Networks The term Graph Neural Network, in its broadest sense, refers to any Neural Network designed to take graph structured data as its input: To cover a broader range of methods, this ...
• 831
### What is the difference between graph convolution in the spatial vs spectral domain?
After I read multiple explanations from different sources I think I found the main difference between the two methods. Implementation wise the only difference is the matrix that you're multiplying the ...
### Are there neural networks that accept graphs or trees as inputs?
Yes, there are numerous, coming under the umbrella term Graph Neural Networks (GNN). The most common input structures accepted by these techniques are the adjacency matrix of the graph (optionally ...
### Understanding the node information score in the paper "Hierarchical Graph Pooling with Structure Learning"
Here, $H$ is a $n * d$ matrix where $n$ is the number of total nodes in the graph and $d$ is the dimension of embedding of each node. Using the notation in the question, the basic GNN formulation ...
### What are some conferences for publishing papers on graph convolutional networks?
Based on past publications, here are some journals and conferences where you can possibly publish or present a research paper on geometric deep learning or graph neural networks Neural Information ...
### Is there an open-source implementation for graph convolution networks for weighted graphs?
A Comprehensive Survey on Graph Neural Networks (2019) presents a list of ConvGNN's. All of the following accept weighted graphs, and three accept those with edge weights as well: And below is a ...
### How Graph Convolutional Neural Networks forward propagate?
I think the picture you're presenting is mostly for educational purposes and that's why they are excluding the node itself from its neighbors and using two distinct networks (most of the papers I've ...
### Are Graph Neural Networks generalizations of Convolutional Neural Networks?
Excuse my lack of rigor. Although I believe this could be rigorously proven for certain definitions of GNN, the term is still too loose for me to honestly claim one way or another on this. Hopefully ...
### Why don't we use diffusion for non-graph CNNs?
Just for completeness, here is one simple formalization of a diffusion GCN (Gasteiger et al.): $\text{D-GCN}(X) = \sum_{k=1}^K A^k X W_k$ You have a diffusion factor $k \in [1 .. K]$ and you apply ...
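To make the quoted formula concrete, here is a minimal toy evaluation of $\text{D-GCN}(X) = \sum_{k=1}^K A^k X W_k$ (my own sketch, not from the answer; the graph, features, and weights are made up):

```python
# Toy diffusion-GCN layer on a 3-node path graph, using plain
# list-of-lists matrices. Everything below is an illustrative example.
def matmul(P, Q):
    return [[sum(P[i][j] * Q[j][k] for j in range(len(Q)))
             for k in range(len(Q[0]))] for i in range(len(P))]

def d_gcn(A, X, Ws):
    """Diffusion GCN: sum over diffusion steps k of A^k X W_k."""
    out = [[0.0] * len(Ws[0][0]) for _ in range(len(X))]
    Ak = A
    for W in Ws:                      # W_1 ... W_K
        term = matmul(matmul(Ak, X), W)
        out = [[o + t for o, t in zip(ro, rt)] for ro, rt in zip(out, term)]
        Ak = matmul(Ak, A)            # advance to A^(k+1)
    return out

A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]   # adjacency matrix of a path graph
X = [[1.0], [2.0], [3.0]]               # one feature per node
Ws = [[[1.0]], [[0.5]]]                 # K = 2 scalar weight "matrices"
print(d_gcn(A, X, Ws))                  # [[4.0], [6.0], [4.0]]
```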
### What kind of features does each node have as an input graph to a graph neural network?
Applying GNNs to images can be realized in different ways. If you only substitute a visual ConvNet by a GNN, then the pixel values would be the same as what goes into a ConvNet. The only difference ...
### Is "node embedding" in GNN analogous to "hidden layer" of FFN?
Embeddings are vectors. Layers are functions. So, node embeddings (e.g. produced by TransE) are analogous to word embeddings or code embeddings, i.e. they are vector (and lower-dimensional) ...
### Is there a graph neural network algorithm that can deal with a different number of input and output nodes?
I suggest you look into link prediction. I have had good luck with the StellarGraph library. They have several algorithms implemented, including GCN. Link prediction is a binary classification problem....
### Graph Convolutional Networks: why are non-parametric filters not localized in space?
Your explanation is correct. Probably, the term non-parametric is not very appropriate. But the meaning of it here, as far as I understand, is the parametrization ...
### How do graph neural networks adapt to different number of nodes and connections of different graphs?
The essence of the reason, why this approach works for graphs with a different number of nodes is the locality and node order permutation invariance. The typical form of the layer-wise signal ...
### Does the Weisfeiler-Lehman Isomorphism Test end?
Notice that a partition (set of nodes with the same label) can never get combined with another partition during an iteration. If two nodes are in different partitions, they stay in different ...
### How do convolutional layers of basic Graph Convolutional Networks work?
Actually, the given pipeline was used in the old days of Graph Neural Networks. Canonical paper on the subject is Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering. You ...
1 vote
### How does a GCN handle new input graphs?
Graph neural networks, of which GCNs are a specific type, are able to handle arbitrary graphs as input. GNNs operate first over "neighborhoods" of nodes to compute individual node ...
### What is the best resources to learn Graph Convolutional Neural Networks?
There is also the proto-book Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges (2021), written by some of the experts on the topic. The book does not focus only on graphs and graph ...
### Are there neural networks that accept graphs or trees as inputs?
There are types of neural networks designed exactly for that purpose. For example, graph convolutional networks (GCN) by Thomas N. Kipf. The input to the network will be a matrix of size $N \times F$, ...
|
2022-10-04 16:11:43
|
https://ltwork.net/precipitation-is-related-to-high-and-low-preasure-air-with--6913190
|
# Precipitation is related to high and low pressure air, with low pressure systems leading to rain
###### Question:
Precipitation is related to high and low pressure air, with low pressure systems leading to rain.
|
2022-09-28 21:39:57
|
https://matchmaticians.com/questions/q6xevb/q6xevb-how-do-i-calculate-the-last-six-digits-of-number-span
|
The last six digits of the number $30001^{18}$
I am familiar with the method of exponentiation by squaring. However, since I am only interested in the last 6 digits, it seems unnecessary to calculate the whole number.
Is there a method to do this effectively and preferably by hand? A detailed explanation would be great!
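One standard approach (my addition, not part of the original question) is to work modulo $10^6$ throughout. By hand: since $30001 = 30000 + 1$ and $30000^2 = 9 \cdot 10^8 \equiv 0 \pmod{10^6}$, the binomial theorem kills every term past the linear one, so $30001^{18} \equiv 1 + 18 \cdot 30000 = 540001 \pmod{10^6}$. The same reduce-at-every-step idea is what Python's three-argument `pow` implements via squaring:

```python
# Modular exponentiation: pow(base, exp, mod) squares repeatedly and
# reduces mod 10**6 at each step, so the full number is never formed.
last_six = pow(30001, 18, 10**6)
print(last_six)  # 540001
```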
|
2023-03-22 00:42:03
|
https://bitworking.org/news/2006/10/wsgi_url_vars/
|
# wsgi.url_vars
Ian Bicking: I think there's room for some more standards building on WSGI (that aren't actually extensions of the WSGI spec itself).
I put a page up on the wsgi.org site for this: http://wsgi.org/wsgi/Specifications
And I'm introducing what I think is low-hanging fruit in the specification realm: wsgi.url_vars http://wsgi.org/wsgi/Specifications/url_vars
It appears that wsgicollection was part of the motivation, which is gratifying.
I have updated wsgicollection to support wsgi.url_vars in addition to the currently supported selector.vars.
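To make the idea concrete, here is a toy dispatcher/app pair of my own (the key name `wsgi.url_vars` and its dict-of-strings contents are assumptions based on the draft spec linked above, not an established API):

```python
import re

def hello_app(environ, start_response):
    # Hypothetical consumer: reads URL-template variables that a
    # dispatcher is assumed to have stored under 'wsgi.url_vars'.
    name = environ.get('wsgi.url_vars', {}).get('name', 'world')
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [('Hello, %s!' % name).encode('utf-8')]

def dispatcher(environ, start_response):
    # Toy router: matches /hello/{name} and records the captured
    # variables in the environ before delegating to the app.
    match = re.match(r'^/hello/(?P<name>\w+)$', environ.get('PATH_INFO', ''))
    if match is None:
        start_response('404 Not Found', [('Content-Type', 'text/plain')])
        return [b'not found']
    environ['wsgi.url_vars'] = match.groupdict()
    return hello_app(environ, start_response)
```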
|
2020-09-25 13:52:25
|
https://www.gradesaver.com/textbooks/math/calculus/calculus-early-transcendentals-8th-edition/chapter-12-section-12-1-three-dimensional-coordinate-systems-12-1-exercises-page-797/25
|
## Calculus: Early Transcendentals 8th Edition
The equation $x=5$ represents a vertical plane parallel to the yz-plane, shifted 5 units in the positive x direction.
The equation fixes the value of x independently of y or z: every point of the form $(5,y,z)$ satisfies it, so the graph is a copy of the yz-plane translated 5 units along the x-axis.
|
2018-07-22 06:40:26
|
https://dsp.stackexchange.com/questions/28483/is-this-system-lti
|
# Is this system LTI?
Assuming the system $h[n]$ is LTI (and has an associated $H(z)$ transform), is the whole system below LTI?
I found the impulse response of the system and I got that it is $$h_{0}[n]=\alpha ^{-n}\cdot h[n]$$ where $h_{0}[n]$ is $y[n]$ when $x[n]=\delta [n]$.
That doesn't give us much information, so I thought of testing the system with the input signal being a complex exponential (i.e. $x[n]=a^{n}$). Because complex exponentials are eigenfunctions of LTI systems, if the output is not something like $y[n]=z^{n}\cdot H(z)$ then the system would not be LTI.
However, the output with $x[n]=a^{n}$ turned out to be $$y[n]=a^{n}\cdot H(a\cdot \alpha)$$
So, apparently, complex exponentials are eigenfunctions of the whole system. Also, I think that $H_{0}(z)=H(z\cdot \alpha)$. Is that right?
Is this enough to affirm that the whole system is LTI? Or does it just prove that it could be LTI?
You are right that determining the response to an impulse will generally not lead to any useful description of system behavior, unless the system is LTI. Your reasoning using eigenfunctions is correct. However, I would approach the problem as follows. If (and only if) the input/output relation can be formulated as a convolution of the input signal with a sequence that is independent of the input signal (the impulse response), then the system is LTI. And this is indeed possible:
\begin{align}y[n]&=\alpha^{-n}\sum_{k=-\infty}^{\infty}x[k]\alpha^kh[n-k]\\&=\sum_{k=-\infty}^{\infty}x[k]\alpha^{-(n-k)}h[n-k]\\&=\sum_{k=-\infty}^{\infty}x[k]\tilde{h}[n-k]\end{align}\tag{1}
where $\tilde{h}[n]=\alpha^{-n}h[n]$ is the impulse response of the total system, as you've already found out by yourself.
The $\mathcal{Z}$-transform of $\tilde{h}[n]$ is indeed easily expressed in terms of $H(z)$:
$$\tilde{H}(z)=\sum_{n=-\infty}^{\infty}\tilde{h}[n]z^{-n}=\sum_{n=-\infty}^{\infty}h[n]\alpha^{-n}z^{-n}=H(\alpha z)$$
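As a quick numerical sanity check of (1) (my own addition, not part of the original answer), one can compare the modulate-filter-demodulate chain from the figure against a single convolution with $\tilde{h}[n]$ on random finite sequences:

```python
import random

def convolve(a, b):
    """Full linear convolution of two finite sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

random.seed(0)
alpha = 1.1
N = 16
h = [random.gauss(0, 1) for _ in range(N)]   # arbitrary FIR impulse response
x = [random.gauss(0, 1) for _ in range(N)]   # random input signal

# Chain from the figure: multiply by alpha**n, filter with h,
# then multiply by alpha**(-n).
up = [alpha**n * xn for n, xn in enumerate(x)]
chain = [alpha**(-n) * yn for n, yn in enumerate(convolve(up, h))]

# Single LTI system with impulse response h_tilde[n] = alpha**(-n) * h[n]
h_tilde = [alpha**(-n) * hn for n, hn in enumerate(h)]
lti = convolve(x, h_tilde)

assert all(abs(c - l) < 1e-9 for c, l in zip(chain, lti))
```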
• I'd just like to add that this is not true if $\alpha=0$. There's also trouble if $\alpha$ is negative and $n$ is rational, and I suspect it's even worse if they're allowed to be complex. – MBaz Jan 26 '16 at 20:27
• Thank you Matt, brilliant as usual. And nice observation by MBaz – Tendero Jan 27 '16 at 1:21
• Sorry to come back here, but I was checking this again and I don't really get your answer. Where did the $y[n]=\alpha^{-n}\sum_{k=-\infty}^{\infty}x[k]\alpha^kh[n-k]$ come from? – Tendero Feb 10 '16 at 1:26
• @M.S.: This is just the mathematical representation of the system in the figure: $x[n]$ is multiplied by $\alpha^n$, then convolved with $h[n]$, and finally multiplied by $\alpha^{-n}$. – Matt L. Feb 10 '16 at 8:50
|
2019-06-19 20:02:30
|
https://slideplayer.com/slide/5977959/
|
Probability and Statistics, Dr. Saeid Moloudzadeh (www.soran.edu.iq): Axioms of Probability / Basic Theorems
Contents: Descriptive Statistics; Axioms of Probability; Combinatorial Methods; Conditional Probability and Independence; Distribution Functions and Discrete Random Variables; Special Discrete Distributions; Continuous Random Variables; Special Continuous Distributions; Bivariate Distributions
Chapter 1: Axioms of Probability. Contents: Sample Space and Events; Axioms of Probability; Basic Theorems
Section 3: Axioms of Probability. Definition 2-2-1 (Probability Axioms): Let S be the sample space of a random phenomenon. Suppose that to each event A of S, a number denoted by P(A) is associated with A. If P satisfies the following axioms, then it is called a probability and the number P(A) is said to be the probability of A: (1) P(A) ≥ 0 for every event A; (2) P(S) = 1; (3) for any sequence of mutually exclusive events A1, A2, ..., P(A1 ∪ A2 ∪ ...) = P(A1) + P(A2) + ...
Let S be the sample space of an experiment. Let A and B be events of S. We say that A and B are equally likely if P(A) = P(B). We will now prove some immediate implications of the axioms of probability.
Theorem 1.1: The probability of the empty set is 0. That is, P(∅) = 0. Theorem 2-2-3: Let {A1, A2, ..., An} be a mutually exclusive set of events. Then P(A1 ∪ A2 ∪ ... ∪ An) = P(A1) + P(A2) + ... + P(An).
It is now called the classical definition of probability. The following theorem, which shows that the classical definition is a simple result of the axiomatic approach, is also an important tool for the computation of probabilities of events for experiments with finite sample spaces. Theorem 1.3: Let S be the sample space of an experiment. If S has N points that are all equally likely to occur, then for any event A of S, P(A) = N(A)/N, where N(A) is the number of points of A.
Example 1.11: Let S be the sample space of flipping a fair coin three times and A be the event of at least two heads; then S = {HHH, HTH, HHT, HTT, THH, THT, TTH, TTT} and A = {HHH, HTH, HHT, THH}. So N = 8 and N(A) = 4. Therefore, the probability of at least two heads in flipping a fair coin three times is N(A)/N = 4/8 = 1/2.
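The enumeration in Example 1.11 can be checked mechanically (my addition, not from the slides):

```python
# Enumerate the equally likely outcomes of three fair coin flips and
# count those with at least two heads.
from itertools import product

outcomes = list(product("HT", repeat=3))                  # sample space S, N = 8
at_least_two_heads = [w for w in outcomes if w.count("H") >= 2]
prob = len(at_least_two_heads) / len(outcomes)
print(len(outcomes), len(at_least_two_heads), prob)       # 8 4 0.5
```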
Section 4: Basic Theorems
|
2022-01-29 10:43:38
|
http://mathhelpforum.com/advanced-algebra/138064-matrix-multiplication-proof.html
|
1. ## matrix multiplication proof
can anyone please direct me to or give me a proof of why matrices are multiplied the way they are, with the rows of the first matrix multiplied by the corresponding columns of the second matrix?
2. Originally Posted by duoc12
Can anyone please direct me to, or give me, a proof of why matrices are multiplied the way they are, with the rows of the first matrix multiplied by the corresponding columns of the second matrix?
Dear duoc,
Matrix multiplication is defined that way. Therefore there is no proof of why matrices are multiplied the way they are. If you need to clarify the definition of matrix multiplication, please refer to Multiplication of Matrices
3. In addition, the REASON matrix multiplication has been defined that way is to create a shorthand method of writing systems of equations and shorthand methods of solving them.
I.e.
$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \dots + a_{1n}x_n = b_1$
$a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \dots + a_{2n}x_n = b_2$
$\vdots$
$a_{n1}x_1 + a_{n2}x_2 + a_{n3}x_3 + \dots + a_{nn}x_n = b_n$
Can be written as
$\left[\begin{matrix}a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn}\end{matrix}\right]\left[\begin{matrix}x_1 \\ x_2 \\ \vdots \\ x_n\end{matrix}\right] = \left[\begin{matrix}b_1 \\ b_2 \\ \vdots \\ b_n \end{matrix}\right]$
Or, in even shorter shorthand...
$A\mathbf{x} = \mathbf{b}$...
And then by using some matrix algebra
$A^{-1}A\mathbf{x} = A^{-1}\mathbf{b}$
$I\mathbf{x} = A^{-1}\mathbf{b}$
$\mathbf{x} = A^{-1}\mathbf{b}$.
If $A^{-1}$ can be calculated, that means that you can easily find $\mathbf{x}$.
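The rows-times-columns rule is easy to demystify with a few lines of code. This Python sketch (the helper name `matmul` is my own, not from the thread) implements the definition directly, so you can see that multiplying $A$ by a column of unknowns reproduces the left-hand sides of the system above.

```python
def matmul(A, B):
    """Multiply matrices given as lists of rows: entry (i, j) is the
    dot product of row i of A with column j of B."""
    n, m, p = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A), "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# The system  x1 + 2*x2 = 17,  3*x1 + 4*x2 = 39  is exactly A x = b:
A = [[1, 2], [3, 4]]
x = [[5], [6]]
b = matmul(A, x)
print(b)   # [[17], [39]]
```

With x1 = 5 and x2 = 6, each row of the product is one equation of the system evaluated at that solution, which is precisely why the definition was chosen.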
https://www.homebuiltairplanes.com/forums/threads/beast-one-the-next-generation-microjet.34976/page-4
# "Beast One" - the next generation Microjet
#### Voidhawk9
##### Well-Known Member
HBA Supporter
MSFS 2020 has a new system similar to X-Plane.
Nah, it's not even close, despite what the hype and marketing proclaims. I have both. FS2020 is pretty, X-plane is a capable engineering tool.
#### Scheny
##### Well-Known Member
I ignored the marketing and am referring to the page in the SDK where they describe the new model. Maybe I will add X-Plane in the future, but I have to concentrate my resources now.
The simulator is not used as an engineering tool for me. I will only use it to get to accustom myself to the aircraft, like low sitting position and approach profile (drag=min_thrust) where you have to decelerate from cruise to pattern speed before you are able to use any drag device. For this reason I bought a VR headset yesterday (I have good experience from a VR Eurofighter on a full motion platform --> comes very close to being in the real thing).
Engineering was done with Excel sheets, XFLR5 and now since I got the new Ryzen 9 5900X CPU, hundreds of CFD computing hours.
#### Map
##### Well-Known Member
I have a different kind of observation on airplanes this size. While I was flying, I had two close encounters with little Sonex Jets lately, both near airports (different airports and different planes). The small size makes them hard to see, and they come up so fast. One encounter was at a controlled airport and I knew about the traffic, but I still only managed to see it when it was very close. Not good.
#### Scheny
##### Well-Known Member
I am implementing the Beast One for MSFS and there has been quite some progress. By now, also the nosewheel is in, but the rest of the gear mechanism is still missing. Elevator fairing is almost completed and the Garmin G3X Touch in the cockpit already works.
Next task: get the switches working and then model the electrical system correctly.
#### jandetlefsen
##### Well-Known Member
HBA Supporter
Just downloaded and tried it out in FS2020, really fun little plane. Can't wait to see more progress!
Also It has full gear now.
#### Yellowhammer
##### Well-Known Member
HBA Supporter
Just downloaded and tried it out in FS2020, really fun little plane. Can't wait to see more progress!
Also It has full gear now.
Looks like a carton to me.
#### Urquiola
##### Well-Known Member
Two ideas already consulted elsewhere: a MiG-17 with an area-ruled fuselage? An F-104 with a straight-leading-edge wing? Would this somehow improve these aircraft? Blessings +
#### Scheny
##### Well-Known Member
Just downloaded and tried it out in FS2020, really fun little plane. Can't wait to see more progress!
And it will get better. Aerodynamics (in MSFS) are not fine-tuned yet.
Looks like a carton to me.
Yeah, the texture is still missing, as I am not used to creating UV-maps yet. They will be added soon, but first I am evaluating switching to a T-tail.
#### Scheny
##### Well-Known Member
Two ideas already consulted elsewhere: a MiG-17 with an area-ruled fuselage? An F-104 with a straight-leading-edge wing? Would this somehow improve these aircraft? Blessings +
Unfortunately not. At Mach 0.5 to 0.6 cruise the area rule has no impact, but pressure gradients (Arnold's "poor man's area rule") are used. This is why I am calculating a T-tail right now, as the elevator is creating a lot of low-pressure areas which create too much drag.
#### Scheny
##### Well-Known Member
I was quiet for some time, as I had serious doubts about the interference drag of the tail. Around Christmas, I spent a few thousand on a high-end PC, which has been crunching CFD numbers for a few days to evaluate the differences between a standard and a T-tail. The AMD 5900X is really amazing, running 24 threads at an average of 4.45 GHz and 80°C.
Unlike another better funded and more discussed project in this forum, the CFD is calculated with a representative mesh of 8.4 million cells just for the first approximation (and the 6 million are advanced meshing, where 90% is in the boundary layer).
I will post the outcome by end of this weekend. This is also the reason why the work on the MSFS model was halted after 0.3.1 as I was not sure if the tail will change.
#### jandetlefsen
##### Well-Known Member
HBA Supporter
Would be cool to see your CFD process, something i have very little idea about.
#### Pops
##### Well-Known Member
HBA Supporter
Log Member
Unfortunately not. At Mach 0.5 to 0.6 cruise the area rule has no impact, but pressure gradients (Arnold's "poor man's area rule") are used. This is why I am calculating a T-tail right now, as the elevator is creating a lot of low-pressure areas which create too much drag.
Never flew a T-tail that I liked.
#### Dun
##### Member
Hi,
recently joined on HBA and now found this thread;
Excellent idea and commendable effort for this airplane, Scheny
I wish you the best with the project.
As mentioned by others here, if your target speeds are achieved with the structure able to withstand aerobatic loads, your airplane will surely be a lucrative project.
Have you pondered on whether it would be supplied in kit form or fully manufacturing it?
Also, I immediately wondered if this could have a later variant version:
An option for a less aerobatically capable version, with extended wingtip tanks for some very desirable range, up to the 600kg EASA limit.
Your thoughts on this possibility, with regards to lift and wing area, structural load bearing, and drag?
Thank you
#### Scheny
##### Well-Known Member
It is a real pity that the test pilot is against the T-tail version, as the airflow is just as good as it can be:
Laminar flow over most of the surface and the boundary layer is very thin, even at the tail. There is some weird flow at the nose, which I am investigating right now. It has no impact, but if I can get rid of it, I will. Also, the mid fuselage will get slightly wider at the fuel tank, as aerodynamics showed decreased drag for increased width there (it sounds strange, but it minimizes interference).
My aerodynamicist voted against showing any pictures of the wing, but this is a good sign. I happened to design the airfoils and layout so perfectly aligned that it looks like an elliptic wing, and there was nothing left to improve. Testing was for cruise config, so let's see if this also holds for landing config.
#### Scheny
##### Well-Known Member
As mentioned by others here, if your target speeds are achieved with the structure able to withstand aerobatic loads, your airplane will surely be a lucrative project.
Have you pondered on whether it would be supplied in kit form or fully manufacturing it?
At the moment (after losing funding due to Corona) I am happy if I manage to build at least one. But as soon as one is flying and tested thoroughly, there is nothing against selling it in kit form. Price would be 300k$ if built in single orders, or 200k$ at one per month. This sounds high, as the JSX-2 is ~150k$, but instead of getting only a few aluminum sheets, it comes already painted and sanded, so that you only have to glue the left and right halves together (saving at least 1,000 h of work).
Also, I immediately wondered if this could have a later variant version:
An option for a less aerobatically capable version, with extended wingtip tanks for some very desirable range, up to the 600kg EASA limit.
Your thoughts on this possibility, with regards to lift and wing area, structural load bearing, and drag?
Unlike the first draft, it has a center tank, so the wings are free for wing tanks. No problem doing that, except that you need some connectors and extra pumps. As the weight sits inside the wing, it does not add bending load to the spar (so no change required here). The aircraft uses 450kg-class parts, so it has capability to "grow" by 70kg, which means the fuel can be +50% (or another 20gal) --> cruise up to 3h possible at slightly lower speed.
As it is my first design, I use 6G with FOS 2.25 or equally 9G with FOS 1.5 to be on the safe side using it up to 6G in real life (the maximum rating for most components). The gear is calculated for 3G impact at 450kg, so also no problem here.
Drag should be almost linear and the wing area is rather large for a jet. When taking off and rotating at 70kt, you will already have 90kt by the time the wheels lift off. So there is not much change here (except for a longer take-off roll) and at landing you should be down to the same weight as normal. Otherwise it will touch down at 90kt instead of 80kt.
#### rv6ejguy
##### Well-Known Member
HBA Supporter
Laminar flow over most of the surface and the boundary layer is very thin, even at the tail.
Laminar flow over most of the surface? That would be quite an accomplishment. Even if you did, a few bugs in the summer would trash all that.
#### rv6ejguy
##### Well-Known Member
HBA Supporter
Drag should be almost linear and the wing area is rather large for a jet.
Pretty sure drag will vary as the square of the velocity rather than in a linear fashion... No?
#### Dun
##### Member
Pretty sure drag will vary as the square of the velocity rather than in a linear fashion... No?
The response was to my query whether Scheny has considered increasing the fuel capacity and if the wing area should be increased in this case, but as he replied the current parameters are sufficient to bear the added weight with no major difference to the speeds.
@Scheny , that all sounds quite good and the pricing as well is quite reasonable. In case the 3-hour cruise would give equal or better range, that would be great.
In a dream scenario, massive production could reduce the price a bit lower and edge towards something achievable for the average pilot in US & EU instead of buying a house.
The only difference would be the operating cost per hour, do you have an estimate for that?
As you say, building one and finishing it will be the deal, unlike all the vaporware, e.g. in the eVTOL market. Once a video hits YouTube of this small fighter-like jet with better speeds and as good a range as others in the weight class....
https://proofwiki.org/wiki/Woset_is_Isomorphic_to_Set_of_its_Initial_Segments
# Woset is Isomorphic to Set of its Initial Segments
## Theorem
Let $\struct {S, \preceq}$ be a well-ordered set.
Let:
$A = \set {a^\prec: a \in S}$
where $a^\prec$ is the strict lower closure of $S$ determined by $a$.
Then:
$\struct {S, \preceq} \cong \struct {A, \subseteq}$
where $\cong$ denotes order isomorphism.
## Proof
Define $f: S \to A$ as:
$\forall a \in S: \map f a = a^\prec$
where $a^\prec$ is the initial segment determined by $a$.
### $f$ is Surjective
$f$ is trivially surjective by the definition of $A$.
$\Box$
### $f$ is Strictly Increasing
Let $x, y \in S$ with $x \prec y$.
Let $z \in \map f x$.
Then by the definition of initial segment:
$z \prec x$
By Reflexive Reduction of Ordering is Strict Ordering, $\prec$ is also transitive.
Thus:
$z \prec y$
Thus by the definition of initial segment:
$z \in y^\prec = \map f y$
As this holds for all such $z$:
$\map f x \subseteq \map f y$
As $x \prec y$:
$x \in y^\prec = \map f y$
But since $\prec$ is antireflexive:
$x \nprec x$
so:
$x \notin \map f x$
Thus:
$\map f x \subsetneqq \map f y$
As this holds for all such $x$ and $y$, $f$ is strictly increasing.
$\Box$
Since a well-ordering is a total ordering, $f$ is an order embedding by Mapping from Totally Ordered Set is Order Embedding iff Strictly Increasing.
Thus $f$ is a surjective order embedding and therefore an order isomorphism.
$\blacksquare$
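For a finite woset the theorem can also be verified mechanically. The Python sketch below (function names are mine, and the usual integer order stands in for $\preceq$) builds each strict lower closure $a^\prec$ as a frozenset and checks that $x \preceq y$ holds exactly when $x^\prec \subseteq y^\prec$, i.e. that $f$ is an injective order embedding.

```python
def initial_segment(a, S):
    """The strict lower closure a^< of a in S, under the usual integer order."""
    return frozenset(x for x in S if x < a)

def f_is_order_isomorphism(S):
    """Check that a -> a^< is injective and that x <= y iff a^< is a subset of b^<."""
    S = list(S)
    segs = {a: initial_segment(a, S) for a in S}
    injective = len(set(segs.values())) == len(S)
    # frozenset's <= operator is subset comparison
    order_ok = all((x <= y) == (segs[x] <= segs[y]) for x in S for y in S)
    return injective and order_ok

print(f_is_order_isomorphism(range(6)))   # True
```

Surjectivity onto the set of initial segments is automatic here, just as in the proof, because the codomain is defined as the image of $f$.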
https://www.jiskha.com/display.cgi?id=1363140363
# Geometry
Let $S = \{ 1, 2, 3, \ldots, 12\}$ and let $T_1, T_2, \ldots, T_a$ be subsets of $S$ such that $T_i \not\subset T_j$ for all $i \neq j$. What is the maximum possible value of $a$?
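The condition $T_i \not\subset T_j$ says the family is an antichain in the power set of $S$, and Sperner's theorem gives the maximum: the largest antichain among subsets of an $n$-set has size $\binom{n}{\lfloor n/2 \rfloor}$, so here $a = \binom{12}{6} = 924$. The theorem can be sanity-checked by brute force for very small ground sets; this Python sketch (my own, with subsets encoded as bitmasks) examines every family of subsets, which is only feasible for tiny $n$.

```python
from itertools import combinations
from math import comb

def is_antichain(family):
    """True if no member of the family is a subset of another (bitmask encoding)."""
    return all(a & b != a and a & b != b for a, b in combinations(family, 2))

def max_antichain_size(n):
    """Largest antichain among subsets of an n-set, by exhaustive search."""
    subsets = list(range(1 << n))
    best = 0
    for mask in range(1 << len(subsets)):          # every family of subsets
        family = [s for i, s in enumerate(subsets) if mask >> i & 1]
        if len(family) > best and is_antichain(family):
            best = len(family)
    return best

print(max_antichain_size(3), comb(3, 1))   # 3 3
```

The brute force confirms Sperner's bound for $n = 3$; the $n = 12$ case is far beyond enumeration, which is exactly why the theorem is needed.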
https://mathoverflow.net/questions/236353/compressing-a-hypersurface-on-the-sphere
# Compressing a hypersurface on the sphere
Let $M^n$ be a compact, connected, orientable hypersurface of the unit sphere $S^{n+1} \subset \mathbb{R}^{n+2}$. Suppose $M$ is contained in the northern hemisphere $S_+^{n+1}$ and has nonzero principal curvatures everywhere, i.e., has nonzero Gaussian curvature everywhere. It seems to me that if we compress $M$ somehow in the direction of the north pole, its principal curvatures will, in absolute value, get arbitrarily large.
Is it true? If so, is there a way to show it without explicitly exhibiting a map $S_+^{n+1} \to S_+^{n+1}$ that does the job? Instead of compressing $M$ towards the north pole, let us think more generally: is it true that there exists a diffeomorphic copy of $M$ in $S_+^{n+1}$ having principal curvatures, say, bigger than 1 in absolute value?
Thanks for your help and thoughts!
• You can write out the Ptolemaic coordinates on the sphere with an explicit expression for the Riemannian metric, so that dilation in those coordinates gives a very explicit compression toward the north pole. You can see explicitly an orthonormal frame and how it transforms in those coordinates, I think, so you should be able to see how shape operators transform. – Ben McKay Apr 15 '16 at 20:48
• @BenMcKay What are Ptolemaic coordinates? – Eduardo Longa Apr 15 '16 at 23:25
• I am not sure which one Ben McKay means specifically, but you can do his construction in either stereographic projection or orthographic projection. – Willie Wong Apr 29 '16 at 19:06
http://support.sas.com/documentation/cdl/en/statug/66859/HTML/default/statug_hpmixed_syntax15.htm
TEST Statement
TEST fixed-effects </ options> ;
The TEST statement performs a hypothesis test on the fixed effects. You can specify multiple effects in one TEST statement or in multiple TEST statements, and all TEST statements must appear after the MODEL statement.
You can specify the following options in the TEST statement after a slash (/).
HTYPE=value-list
indicates the type of hypothesis test to perform on the specified effects. Valid entries for values in the value-list are 3, corresponding to a Type III test. The default value is 3. The ODS table name is Tests3 for the Type III test.
E
requests that matrix coefficients associated with test types be displayed for specified effects.
E3 | EIII
requests that Type III matrix coefficients be displayed if a Type III test is performed.
CHISQ
requests that chi-square tests be performed in addition to any F tests. A chi-square statistic equals its corresponding F statistic times the associated numerator degrees of freedom, and these same degrees of freedom are used to compute the p-value for the chi-square test. This p-value will always be less than that for the F test, because it effectively corresponds to an F test with infinite denominator degrees of freedom.
http://dyinglovegrape.com/math/primes_elliott_halberstam.php
# DyingLoveGrape.
## Primes and the Elliott-Halberstam Conjecture.
### Introduction.
Sometimes simple questions have simple answers. Sometimes simple questions have extremely dense and difficult answers. Sometimes simple questions have such difficult answers that even starting to understand the answer requires years of study and expertise in a number of different fields.
Recently, an exciting attack on the Twin Primes Conjecture was detailed. Unfortunately, since I am not a number theorist, it was difficult for me to get through even the introduction to the papers giving the gist of the idea that the attack was using. Most of them require defining a certain error function and showing that a particular bound holds for this function. To motivate this error function, it is necessary to understand Euler's totient function. Therefore, my plan for this post is to detail and give examples for the totient function (for those who have not seen it before) and use it to motivate this error function.
Prereqs: Just general Calculus-level mathematical maturity. Also, some basic understanding of modular arithmetic.
### Primes and Euler's Totient Function.
Much has been written about the totient function; I will just briefly touch on it. The idea behind the totient function is easy: $\phi(n)$ is defined to be the number of natural numbers less than $n$ which are relatively prime to $n$. Hence, for example, $\phi(8) = 4$, since the only natural numbers less than 8 which are relatively prime to 8 are 1, 3, 5, 7. There are four of them, so $\phi(8) = 4$. Here are a few more: \begin{align*} \phi(1) &=1\\ \phi(2) &=1\\ \phi(3) &=2\\ \phi(4) &=2\\ \phi(5) &=4\\ \phi(6) &=2\\ \phi(7) &=6\\ \phi(8) &=4\\ \phi(9) &=6\\ \phi(10) &= 4\\ \phi(11) &= 10\\ \phi(12) &= 4\\ \phi(13) &= 12\\ \end{align*} One's initial reaction is to look for a pattern, but there doesn't seem to be much of one at first glance. We notice that for some numbers $p$ we have $\phi(p) = p-1$; indeed, if $p$ is a prime number then every number less than it is relatively prime to it, so $\phi(p) = p-1$. Moreover, it's a bit difficult to see from this chart, but something interesting holds: $\phi(p)\phi(q) = \phi(pq)$ for any natural numbers $p,q$ which are relatively prime to each other. Plug in a few numbers and see that this works out. What's the reasoning behind this? Let's just think about what happens if $p$ and $q$ happen to be primes. Then what do we have? The right-hand side counts all numbers less than and relatively prime to $pq$. Let's think about which numbers are not relatively prime to $pq$. First, the multiples of $p$ less than $pq$ aren't relatively prime to $pq$: these are $p, 2p, 3p, \dots, (q-1)p$, which gives us $q-1$ non-relatively prime numbers (make sure you understand why these numbers work!). Next, the multiples of $q$: these are $q, 2q, \dots, (p-1)q$, which gives us $p-1$ non-relatively prime numbers. This is a total of $p-1 + q-1 = p + q - 2$ numbers which are less than but not relatively prime to $pq$.
Since there are $pq-1$ numbers less than $pq$ in general, the number of relatively prime numbers is the ones which are left after we subtract off the non-relatively prime numbers; that is: $(pq -1) - (p + q - 2) = pq - p - q + 1$ Now let's look back and notice that $\phi(p) = p-1$ and $\phi(q) = q-1$ for primes $p,q$ and that, moreover, $(p-1)(q-1) = pq - p - q + 1$ which is what we found $\phi(pq)$ to be!
Unfortunately, if $p=q$ is prime, then we must be a bit more clever in thinking about $\phi(p^{2})$. Which numbers less than $p^{2}$ are not relatively prime to it? Exactly the multiples of $p$: $p, 2p, 3p, \dots, (p-1)p,$ and there are $p-1$ of them. Hence, $\phi(p^{2}) = (p^{2} - 1) - (p-1) = p^{2} - p$. A similar argument holds when we have higher powers of $p$: the multiples of $p$ less than $p^{n}$ are $p, 2p, 3p, \dots, p^{2}, (p+1)p, \dots, (p^{n-1}-1)p,$ of which there are $p^{n-1}-1$. Therefore, since there are $p^{n}-1$ numbers less than $p^{n}$ in general, the number which are relatively prime is given by $p^{n} - 1 - (p^{n-1} -1) = p^{n} - p^{n-1}.$ The general formula for $p$ prime and $n$ a natural number: $\phi(p^{n}) = p^{n} - p^{n-1}.$
Using all of the previous things we've learned, if we have some number $x$ which is not necessarily prime, we know that we may prime factorize $x$ and obtain: \begin{align*}\phi(x) &= \phi(p_{1}^{a_{1}}p_{2}^{a_{2}}\cdots p_{n}^{a_{n}}) \\ &=(p_{1}^{a_{1}} - p_{1}^{a_{1}-1})(p_{2}^{a_{2}} - p_{2}^{a_{2}-1}) \cdots (p_{n}^{a_{n}} - p_{n}^{a_{n}-1}) \end{align*} which is great, since it tells us we know $\phi$ for every number so long as we know $\phi$ for every prime which divides that number. This is a powerful idea! For example, counting the number of numbers less than and relatively prime to $30030$ would be pretty tedious without using what we've just shown: \begin{align*}\phi(30030) &= \phi(2\times 3\times 5\times 7\times 11\times 13)\\ &=\phi(2)\phi(3)\phi(5)\phi(7)\phi(11)\phi(13)\\ &=(2-1)(3-1)(5-1)(7-1)(11-1)(13-1)\\ &=(1)(2)(4)(6)(10)(12)\\ &=5760. \end{align*} There is a ton we can say about Euler's totient function, but we will restrict ourselves for now; the interested reader can probably figure a few things out at this point about the totient function which we haven't listed.
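The product formula translates directly into code. The Python sketch below (the name `phi` is mine, not from the post) factors $n$ by trial division and multiplies together the $p^{a} - p^{a-1}$ contributions.

```python
def phi(n):
    """Euler's totient, via the factorization formula phi(p^a) = p^a - p^(a-1)."""
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            pk = 1
            while m % p == 0:          # pull out the full power p^a dividing n
                m //= p
                pk *= p
            result *= pk - pk // p     # contribute p^a - p^(a-1)
        p += 1
    if m > 1:                          # leftover prime factor (exponent 1)
        result *= m - 1
    return result

print(phi(8), phi(13), phi(30030))   # 4 12 5760
```

The 30030 example from the text comes out instantly, with no need to test 5759 candidates for coprimality one by one.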
### Approximating Primes.
At this point, we need some big guns.
Dirichlet's Prime Number Theorem says that, given two relatively prime numbers $a,d$ there are infinitely many primes of the form $a + nd$ where $n\geq 0$. This means that in the sequence $a, a+d, a+2d, a+3d, \dots$ there will be infinitely many prime numbers. Let's test this out with $a = 2$ and $d = 5$ just to see if we get a whole bunch of prime numbers (infinitely many would take too long to check!). We get, $2, 7, 12, 17, 22, 27, 32, 37, \dots$ and in that part of the sequence we have the primes $2, 7, 17, 37$. If we went onward (you may try this if you have some programming experience or a pencil and paper and some free time) we'd find many more primes in this sequence.
Let's investigate this a bit more. Let's say we fix some number $d$ and look at all the possible $a$'s less than $d$ which would work for Dirichlet's theorem. How many are there? We've seen that there are $\phi(d)$ such numbers: exactly the numbers less than and relatively prime to $d$.
To be a bit concrete, let's work with $d = 7$. Then we may have the following numbers and their corresponding sequences: \begin{align*} 1 &: 1, 8, 15, 22, \dots\\ 2 &: 2, 9, 16, 23, \dots\\ 3 &: 3, 10, 17, 24, \dots\\ 4 &: 4, 11, 18, 25, \dots\\ 5 &: 5, 12, 19, 26, \dots\\ 6 &: 6, 13, 20, 27, \dots\\ \end{align*} Neat. Notice that besides the numbers in the sequence $7, 14, 21, \dots$ we have every natural number contained in some sequence. Here's something amazing: each of these sequences contains the same proportion of primes, and that proportion is therefore equal to $\dfrac{1}{\phi(d)}$, which for a prime $d$ like 7 is $\dfrac{1}{d-1}$. Note that in the $\frac{1}{\phi(d)}$ form, this is true even if $d$ is not a prime number. That is, if we were to ask, "Out of the total number of primes (which is infinite), what proportion does the sequence $1, 8, 15, 22, \dots$ contain?" we'd receive the answer that $\frac{1}{6}$th of the primes live in this sequence.
Perhaps a more telling example is if we allow $d = 4$. Then we obtain the sequences: \begin{align*} 1 &: 1, 5, 9, 13, \dots\\ 3 &: 3, 7, 11, 15, \dots\\ \end{align*} which, according to the claim above, each contain one-half of the primes! Notice that, because the list of primes is infinite, it is a bit strange to take proportions; in particular, the prime number $2$ is in neither list. What does this claim mean, then?
We certainly can't look at all of the primes at once, so let's choose some big number $x$ to fix and just look at all of the numbers which are less than or equal to $x$. Let's define $\pi(x)$ to be the number of primes less than or equal to $x$. We want to see how many primes are in each of the sequences, so we do something sneaky: in the above example where $d = 4$ all of the primes in the list where $a = 1$ have the property that they are $1\mod 4$; that is, they are $a\mod d$. Similarly, when $a = 3$, all of the primes in that list are of the form $3\mod 4$ which is also $a\mod d$. If we want to count all the primes in a particular sequence, we just need to count the primes $p$ which have the property $p \equiv a\mod d$. To this end, let's define the function $\pi(x; d,a)$ to be the number of primes less than or equal to $x$ which are equivalent to $a\mod d$.
### A Quick Example.
A quick example might help solidify these concepts if you've never seen them before. Let's let $x = 25$ and let's let $d = 4$ again. Then we have the sequences \begin{align*} 1 &: 1, 5, 9, 13, 17, 21, 25.\\ 3 &: 3, 7, 11, 15, 19, 23.\\ \end{align*} We notice that $\pi(25; 4,1) = 3$ since only $5, 13, 17$ are primes in that sequence. Similarly, we may count $\pi(25; 4,3) = 5$ since $3, 7, 11, 19, 23$ are primes in that sequence. Notice that these are not exactly equal, but they are roughly equal to each other at this point. For fun, we also note that $\pi(25) = 9$ since $2,3,5,7,11,13,17,19,23$ are the only primes less than or equal to 25.
Since there are two sequences (since $\phi(4) = 2$) and each had to divide up $\pi(25) = 9$ primes, we'd expect them to have $\dfrac{\pi(25)}{\phi(4)} = \dfrac{9}{2} = 4.5$ each. That's not exactly what happened, but it's close to what happened; so we'll say that $\pi(25;4,1) \approx \pi(25;4,3) \approx \dfrac{\pi(25)}{\phi(4)}$ or, because the value of $a$ doesn't really matter (since each sequence ought to have the same proportion of primes, approximately), $\pi(25; 4,a)\approx \dfrac{\pi(25)}{\phi(4)}.$
Let's try this again with $x = 500$. I won't write out all the terms, but I'll note \begin{align*} \phi(4) &= 2\\ \pi(500;4,1)&=44\\ \pi(500;4,3)&=50\\ \pi(500) &= 95\\ \end{align*} This time, we have that $\dfrac{\pi(500;4,1)}{\pi(500)} \approx 0.463\dots$ $\dfrac{\pi(500;4,3)}{\pi(500)} \approx 0.526\dots$ These are both relatively close to the value we'd expect, which is $\frac{1}{2}$. As $x$ gets larger, these values will get closer.
Just for kicks, one more time with $x = 1000000$. \begin{align*} \phi(4) &= 2\\ \pi(1000000;4,1)&=39175\\ \pi(1000000;4,3)&=39322\\ \pi(1000000) &= 78498\\ \end{align*} Then, $\dfrac{\pi(1000000;4,1)}{\pi(1000000)} \approx 0.4990573\dots$ $\dfrac{\pi(1000000;4,3)}{\pi(1000000)} \approx 0.50093\dots$ Pretty darn close.
And this doesn't just work with $d = 4$. Let's try one where $d$ is something slightly larger, just so you know nothing fishy is going on. Let's let $d = 9$. We know that $\phi(9) = 6$ and the admissible values of $a$ are in $\{1, 2, 4, 5, 7, 8\}$. I'll use Mathematica to record the data we need up to $x = 1000000$ for four of these residue classes. \begin{align*} \phi(9) &= 6\\ \pi(1000000;9,1)&=13063\\ \pi(1000000;9,2)&=13099\\ \pi(1000000;9,4)&=13070\\ \pi(1000000;9,8)&=13099\\ \pi(1000000) &= 78498\\ \end{align*} Then, $\dfrac{\pi(1000000;9,1)}{\pi(1000000)} \approx 0.1664\dots$ $\dfrac{\pi(1000000;9,2)}{\pi(1000000)} \approx 0.1669\dots$ $\dfrac{\pi(1000000;9,4)}{\pi(1000000)} \approx 0.1665\dots$ $\dfrac{\pi(1000000;9,8)}{\pi(1000000)} \approx 0.1669\dots$ We'd expect each class to contain $\frac{1}{6}$th of the primes ($\frac{1}{6} \approx 0.1667$), and each seems to have approximately that proportion.
### The Error.
What we've found is that these proportions are approximately correct — but how approximate are we claiming? The standard way to think about how close something is to its approximation is to consider the error term. In this case, we can think about the error as how close $\pi(x;d,a)$ is to $\dfrac{\pi(x)}{\phi(d)}$. Instead of proportions, we're now asking if the number of primes in $\pi(x;d,a)$ is close to the total number of primes less than or equal to $x$ divided into $\phi(d)$ parts. We can write this out as: $E(x;d,a) = |\pi(x;d,a) - \dfrac{\pi(x)}{\phi(d)}|$ which gives us the error if we're given some $a$. Since there's a finite number of $a$'s to work with, we can actually bound our errors nicely just by talking about the largest error; in other words, recalling that $a \leq d$ by definition, $E(x;d) = \max_{gcd(a,d) = 1}|\pi(x;d,a) - \dfrac{\pi(x)}{\phi(d)}|$ that is, $E(x;d)$ is the largest $E(x;d,a)$ when we plug in all of the possible values for $a$. There's nothing particularly crazy about this error, but we'd like to be able to say that it's bounded by something we like. Usually, bounds are given by some constant, or some polynomial, or some other function that we know a lot about (and preferably one which is simple!)...
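The error term is just as easy to compute as the counts themselves. Here is a sketch (the function names are mine, not the post's) that evaluates $E(x;d)$ directly from the definition:

```python
from math import gcd

def primes_up_to(x):
    """Sieve of Eratosthenes: the list of primes <= x."""
    if x < 2:
        return []
    is_prime = [True] * (x + 1)
    is_prime[0] = is_prime[1] = False
    for n in range(2, int(x ** 0.5) + 1):
        if is_prime[n]:
            for m in range(n * n, x + 1, n):
                is_prime[m] = False
    return [n for n in range(2, x + 1) if is_prime[n]]

def error_term(x, d):
    """E(x; d): the maximum over residues a coprime to d of
    |pi(x; d, a) - pi(x)/phi(d)|."""
    primes = primes_up_to(x)
    units = [a for a in range(1, d) if gcd(a, d) == 1]
    expected = len(primes) / len(units)  # pi(x) / phi(d)
    counts = {a: 0 for a in units}
    for p in primes:
        r = p % d
        if r in counts:
            counts[r] += 1
    return max(abs(counts[a] - expected) for a in units)
```

With the $x = 25$, $d = 4$ example above, `error_term(25, 4)` returns $1.5$: the class counts $3$ and $5$ each sit $1.5$ and $0.5$ away from the expected $9/2 = 4.5$.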
### The Elliott-Halberstam Conjecture.
Indeed, there is such a bound conjectured, but the Elliott-Halberstam conjecture states this bound in a somewhat strange way. First, it is taken over a sum of all possible values of $d$ which are somewhat smaller than $x$. This is a bit strange, but it will tell us that "on average" the error is not much larger than some particular function. Let's write it out.
Elliott-Halberstam Conjecture. For every $\theta \lt 1$ and every $A\gt 0$, there exists some constant $C\gt 0$ such that $\sum_{1\leq d\leq x^{\theta}} E(x;d) \leq \frac{Cx}{(\log(x))^{A}}$ for every $x\gt 2$.
This conjecture has a number of interesting consequences. Moreover, this is a foundational conjecture which was altered slightly to prove some more recent theorems which were, in turn, altered slightly to create the technique which was used to attack the Twin Prime conjecture recently. We won't detail these new advancements in this post but hope that we've given the reader enough to look up some of these concepts in greater detail.
https://tex.stackexchange.com/questions/228302/transform-sepfootnotecontent-into-an-environment/229964
# Transform \sepfootnotecontent into an environment
I am currently cleaning the code of a 1000-page book that contains hundreds of very long footnotes, so I decided to move them into separate files using the sepfootnotes package. It’s easy to use:
\sepfootnotecontent{label}{The content}
...
\sepfootnote{label}
As I said, the footnotes are very long and the footnote file is hard to read. Using an environment would make it easier to parse:
\begin{nbp}{label}
The content
\end{nbp}
(“nbp” stands for “note de bas de page”, French for “footnote”; it’s a French document.)
I found out that the environ package makes it easy to “turn” a command into an environment:
\NewEnviron{nbp}[1]{\sepfootnotecontent{#1}{\BODY}}
There is no error during the compilation but nothing appears in the footnote:
\begin{nbp}{1}
dolor sit amet.
\end{nbp}
Lorem ipsum\sepfootnote{1}
Expected output:
Here is a minimal working example:
\documentclass{article}
\usepackage{environ}
\usepackage{sepfootnotes}
\NewEnviron{nbp}[1]{\sepfootnotecontent{#1}{\BODY}}
\begin{document}
\sepfootnotecontent{works}{dolor sit amet.}
\begin{nbp}{doesntwork}
dolor sit amet.
\end{nbp}
Lorem ipsum\sepfootnote{works}\sepfootnote{doesntwork}
\end{document}
• Would be nice if there were a complete minimal example (or see here) to work with.... – jon Feb 15 '15 at 20:18
• @jon Here is the MWE, after fighting with the minimal class that don’t recognize \sepfootnote :( – Zoxume Feb 16 '15 at 12:36
• Oh yeah: don't use the minimal class for minimal examples. (Confusing, I realize, but that is not what it was designed for.) – jon Feb 16 '15 at 19:06
No need to change the internals of sepfootnotes: an \aftergroup trickery is sufficient.
\documentclass{article}
\usepackage{environ}
\usepackage{sepfootnotes}
\NewEnviron{nbp}[1]{%
\xdef\nbptemp{{#1}{\unexpanded\expandafter{\BODY}}}%
\aftergroup\donpb
}
\newcommand{\donpb}{\expandafter\sepfootnotecontent\nbptemp}
\begin{document}
\sepfootnotecontent{works}{Dolor sit amet.}
\begin{nbp}{doesntwork}
Again dolor sit amet.
\end{nbp}
Lorem ipsum\sepfootnote{works}\sepfootnote{doesntwork}
\end{document}
Note: I compiled the file with a reduced \textheight just to have a smaller image.
• Your answer is better because it doesn’t need any extra packages. Thank you! – Zoxume Feb 25 '15 at 15:07
There are two problems: the macro \sepfootnotecontent saves its contents locally, which means they are forgotten after the environment ends. The second problem: the macro \BODY is saved as the footnote content, but what you really want is the first expansion of \BODY, not the macro itself.
With the help of the etoolbox package and its \patchcmd we can easily create a global version of the internal macro \sep@namedef:
\let\sep@namegdef\sep@namedef
\patchcmd\sep@namegdef{\@namedef}{\global\@namedef}{}{}
Now we need a global equivalent of \sepfootnotecontent:
% \gsepfootnoteenvcontent{<content>}{<id>}
\newcommand\gsepfootnoteenvcontent[2]{\sep@namegdef{sepfoot}{#2}{#1}}
Note the swap of the last two arguments: this makes it easier to expand the \BODY macro in the next step before passing it to \sep@namegdef.
Last the environment where \BODY is expanded before it is passed to \gsepfootnoteenvcontent:
\NewEnviron{nbp}[1]{\expandafter\gsepfootnoteenvcontent\expandafter{\BODY}{#1}}
A full example:
\documentclass{article}
\usepackage{environ,etoolbox}
\usepackage{sepfootnotes}
\makeatletter
% \gsepfootnoteenvcontent{<content>}{<id>}
\newcommand\gsepfootnoteenvcontent[2]{\sep@namegdef{sepfoot}{#2}{#1}}
\let\sep@namegdef\sep@namedef
\patchcmd\sep@namegdef{\@namedef}{\global\@namedef}{}{}
\makeatother
\NewEnviron{nbp}[1]{\expandafter\gsepfootnoteenvcontent\expandafter{\BODY}{#1}}
\begin{document}
\sepfootnotecontent{works}{dolor sit amet.}
\begin{nbp}{doesntwork}
dolor sit amet.
\end{nbp}
Lorem ipsum\sepfootnote{works}\sepfootnote{doesntwork}
\end{document}
https://physics.stackexchange.com/questions/502645/electromagnetic-wave-equation-can-we-ignore-the-constant-of-integration
# Electromagnetic wave equation: can we ignore the constant of integration?
Suppose we obtain a solution for each of $$\mathbf B$$, $$\mathbf E$$ of Maxwell's equations in the vacuum ($$\rho=0$$). Clearly, for any constant vectors $$\mathbf k$$ and $$\mathbf m$$, $$\mathbf {B+k}$$ and $$\mathbf{E+m}$$ also satisfy the same set of differential equations. Presumably, we can call $$\mathbf k$$, $$\mathbf m$$ "constants of integration".
My question is, though, is it okay to choose those constants randomly as I like? It is really difficult for me to "choose" appropriate values of $$\mathbf k, \mathbf m$$.
I might be able to determine $$\mathbf k, \mathbf m$$ if $$\mathbf E$$ is assumed to vanish at $$\infty$$. However, in the case that $$\mathbf E$$ is a sinusoidal electromagnetic wave, it certainly does not vanish at infinity.
• Are you really allowed to add a constant vector to E and still satisfy all of the equations? Presumably if the oscillations are reduced to zero you would be left with a uniform E field in all of space and that would require an infinite source at infinity.
– user196418
Sep 13 '19 at 14:22
Usually, you would formulate it in terms of the Sommerfeld radiation condition:
$$\lim_{r\to\infty} r\left( \frac{\partial u}{\partial r}-iku \right)=0$$ Here $$u$$ would be some wave quantity, $$r$$ is the radial coordinate, $$i=\sqrt{-1}$$, and $$k$$ is the wavenumber.
See the following reference for a detailed discussion of its origin and the role in mathematical physics:
Excerpt from this paper's abstract:
In 1912 Sommerfeld introduced his radiation condition to ensure the uniqueness of the solution of certain exterior boundary value problems in mathematical physics. In physical applications these problems generally describe wave propagation where an incident time-harmonic wave is scattered by an object, and the resulting diffracted or scattered waves need to be calculated. When formulated mathematically, these problems usually take the form of an exterior Dirichlet or Neumann problem for the Helmholtz partial differential equation. The Sommerfeld condition is applied at infinity and, when added to the statement of the boundary value problem, singles out only the solution which represents “outgoing” (rather than “incoming” or “standing”) waves in the physical applications.
It is also important to note the uniqueness theorem (as it is usually called in the engineering graduate texts on electromagnetics). See, for example,
In this chapter, the proof first requires the medium to be lossy to prove the uniqueness (thus fields decay), and then uses the lossless case as a limiting case when loss approaches zero.
How deeply and how mathematical you want to dive into this topic – depends totally on your needs.
We are almost always satisfying Maxwell's equations (or any set of differential equations) with respect to some boundary conditions. Usually we assume that a vector field goes to zero at infinity, which means it is uniquely specified by its divergence and curl. (See the Helmholtz decomposition.) If it doesn't go to zero at infinity, then we can specify some other boundary conditions that account for that constant.
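For reference, the Helmholtz decomposition mentioned above can be written out explicitly; this is the standard textbook statement, not a quotation from the answer. For a field $$\mathbf F$$ that decays sufficiently fast at infinity,

```latex
% Helmholtz decomposition: a curl-free part plus a divergence-free part
\mathbf{F}(\mathbf{r}) = -\nabla\Phi(\mathbf{r}) + \nabla\times\mathbf{A}(\mathbf{r}),
\qquad
\Phi(\mathbf{r}) = \frac{1}{4\pi}\int \frac{\nabla'\cdot\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,dV',
\qquad
\mathbf{A}(\mathbf{r}) = \frac{1}{4\pi}\int \frac{\nabla'\times\mathbf{F}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,dV'
```

Adding a constant $$\mathbf m$$ to $$\mathbf E$$ changes neither the divergence nor the curl, so it is exactly the decay condition at infinity that forces $$\mathbf m = 0$$.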
• So, what is the boundary condition for electromagnetic waves? Sep 13 '19 at 22:10
https://www.ideals.illinois.edu/handle/2142/11272
## Description
Title: Surface and Medial Axis Topology Through Distance Flows Induced by Discrete Samples
Author(s): Sadri, Bardia
Subject(s): computer science
Abstract: The distance function induced by a surface in R^n is known to carry a great deal of topological information about the surface and its embedding in space. It is an important question, from both theoretical and practical standpoints, whether such information about a surface can be extracted from the distance function induced by a discrete sample of it. Distance functions induced by discrete samples of surfaces and their associated mathematical structures are the main focus of this dissertation. These functions lead to continuous flow maps that turn out to be powerful topological tools. Based on these flow maps we design and analyze a number of simple and natural shape and medial axis reconstruction algorithms for which we can guarantee the topological type of the output. We prove that critical points of distance functions induced by dense enough (relative) epsilon-samples of surfaces are concentrated around the surface and its medial axis. These two types of critical points can be distinguished from each other algorithmically. This "separation" of critical points is crucial to the design and analysis of the above-mentioned algorithms. Specifically, we present an algorithm for homeomorphic reconstruction of surfaces in 3D. This algorithm generalizes to higher dimensions with a slight change in the type of the provided topological guarantee. We also present an algorithm for medial axis approximation that computes a piece-wise linear core for the given sample. This core is guaranteed to be homotopy equivalent to the medial axis of the shape enclosed by the original surface. We then show that the core can be enhanced by any geometric medial axis approximation scheme without compromising the topological equivalence of the output and the medial axis being approximated.
Finally, we present an analysis of Herbert Edelsbrunner's well-known Wrap reconstruction algorithm and show that under relative epsilon-sampling in 3D, the output of Wrap is homotopy equivalent to the original shape.
Issue Date: 2006-12
Genre: Technical Report
Type: Text
URI: http://hdl.handle.net/2142/11272
Other Identifier(s): UIUCDCS-R-2006-2789
Rights Information: You are granted permission for the non-commercial reproduction, distribution, display, and performance of this technical report in any format, BUT this permission is only for a period of 45 (forty-five) days from the most recent time that you verified that this technical report is still available from the University of Illinois at Urbana-Champaign Computer Science Department under terms that include this permission. All other rights are reserved by the author(s).
Date Available in IDEALS: 2009-04-21
https://datascience.stackexchange.com/questions/79923/how-can-be-proved-that-the-softmax-output-forms-a-probability-distribution-and-t
# How can be proved that the softmax output forms a probability distribution and the sigmoid output does not?
I was reading Nielsen's book, and in this part of chapter 3 about the softmax function he says, just before the following exercise, that the output of a neural network with a softmax output layer forms a probability distribution, while the sigmoid output does not always form one. Now I've been wondering about the output of a neural network: if I have a sigmoid output layer and, say, for one observation the output is 0.7 for class 0, should the probability for class 1 be 0.3? Or, in this binary classification example, using a softmax output, would the first output neuron be 0.7 for class 0 and the second 0.3 for class 1 for that particular observation?
Softmax maps $$f:ℝ^n\rightarrow (0,1)^n$$ such that $$\sum f(\vec x) =1$$. Therefore, we can interpret the output of softmax as probabilities.
With sigmoidal activation, there are no such constraints on the summation: even though each component satisfies $$0 < S(\vec x)_i < 1$$, it is not guaranteed that $$\sum S(\vec x)=1$$. The sigmoidal function does not normalize the outputs, so in your example where class 0 has output $$0.7$$, class 1 could have any value in $$(0,1)$$, which might not be $$0.3$$.
Here's an example:
$$\vec x=[-5,\pi,\frac{1}{3},0]$$
$$f(\vec x)\approxeq [2.6379\times10^{-4},0.9059,0.05464,0.03915]$$
$$S(\vec x)\approxeq [6.693\times10^{-3},0.9586,0.5826,0.5]$$
Because $$0 < f(\vec x)_i < 1$$ and $$\sum f(\vec x)=1$$, the softmax output vector can be interpreted as probabilities. On the other hand, $$\sum S(\vec x) > 1$$, so you cannot interpret the sigmoidal output as a probability distribution, even though $$0 < S(\vec x)_i < 1$$.
(I chose the above $$\vec x$$ arbitrarily to demonstrate that the inputs need not be negative, non-negative, rational, etc., hence $$\vec x\in ℝ^n$$)
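To make the contrast concrete, here is a short sketch (mine, not from the original answer) reproducing the comparison numerically:

```python
import math

def softmax(xs):
    """Exponentiate and normalize: each component lies in (0, 1) and the
    components sum to exactly 1, so the output is a probability distribution."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(xs):
    """Element-wise logistic function: each component lies in (0, 1), but
    nothing constrains the sum of the components."""
    return [1.0 / (1.0 + math.exp(-x)) for x in xs]

x = [-5, math.pi, 1 / 3, 0]
f = softmax(x)   # sums to 1: a valid probability distribution
s = sigmoid(x)   # sums to roughly 2.05: not a probability distribution
```

Printing `sum(f)` gives 1 (up to floating-point rounding), while `sum(s)` exceeds 1, which is exactly why only the softmax output can be read as a distribution.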
• Thank you! So, in your example, I can see that the sum of all f(x) is 1 for the observation x, but for S(x), why do you sum it when actually, by using sigmoidal output, isn't it just one number (eg, 0.7 of being 1) because in that case we only use one output sigmoid neuron? Being that the case of one output neuron, wouldn't the probability of being 0 be 0.3? as in, for example, logistic regression? All these is like the second part of the question I coulnd't get to ask. – valware_xyz Aug 10 '20 at 3:57
• For binary classification, a single sigmoidal output neuron would be sufficient for the reasons you stated. For multiclass classification, sigmoidal neurons aren't guaranteed (and probably won't) output a legitimate probability distribution for the reasons I stated. So yes, in a binary classification problem, a single sigmoidal neuron is totally fine. – Benji Albert Aug 10 '20 at 15:32
• Thank you very much, sir! Would you recommend any books or resources to deepen the understanding of this topic? I am finishing Nielsen's book, and then go through Stanford's NLP and CNN courses. – valware_xyz Aug 10 '20 at 18:18
• Unfortunately, I am not up-to-date with reading materials for fundamentals. However, I'd recommend reading highly cited papers after you understand the basic concepts. Good luck! – Benji Albert Aug 10 '20 at 18:40
https://cran.ma.imperial.ac.uk/web/packages/superb/vignettes/Vignette2.html
Most studies examine the effect of a certain factor on a measure. However, very rarely do we have hypotheses stating what would be the expected result. Instead, we have a control group to which the results of the treatment group will be compared (Cousineau, 2017).
Imagine a study in a school examining the impact of playing collaborative games before beginning the classes. This study most likely will have two groups: one where the students are playing collaborative games and one where the students have unstructured activities prior to classes. The objective of the study is to compare the two groups.
Consider the results obtained. The measurement instrument tends to return scores near 100.
As seen, there seems to be a better score for students playing collaborative games. Taking the confidence intervals into account, the manipulation seems to improve the learning behavior significantly, as the lower end of the Collaborative games interval is above the Unstructured activity mean (and vice versa).
What a surprise to discover that a t test will NOT confirm this impression (t(48) = 1.76, p = .085):
t.test(dataFigure1$score[dataFigure1$grp==1],
dataFigure1$score[dataFigure1$grp==2],
var.equal=T)
##
## Two Sample t-test
##
## data: dataFigure1$score[dataFigure1$grp == 1] and dataFigure1$score[dataFigure1$grp == 2]
## t = 1.7612, df = 48, p-value = 0.08458
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.7082201 10.7082201
## sample estimates:
## mean of x mean of y
## 105 100
## The origin of the paradox
The reason is that the confidence intervals used are “stand-alone”: They can be used to compare, say, the first group's mean to the value 100. As this value is outside the interval, we are correct in concluding that the first group’s mean is significantly different (at level $$\alpha$$ = .05) from 100, as confirmed by a single-group t test:
t.test(dataFigure1$score[dataFigure1$grp==1], mu=100)
##
## One Sample t-test
##
## data: dataFigure1$score[dataFigure1$grp == 1]
## t = 2.5021, df = 24, p-value = 0.01956
## alternative hypothesis: true mean is not equal to 100
## 95 percent confidence interval:
## 100.8756 109.1244
## sample estimates:
## mean of x
## 105
Likewise, the second group’s mean is significantly different from 105 (which happens to be the first group’s mean):
t.test(dataFigure1$score[dataFigure1$grp==2], mu=105)
##
## One Sample t-test
##
## data: dataFigure1$score[dataFigure1$grp == 2]
## t = -2.4794, df = 24, p-value = 0.02057
## alternative hypothesis: true mean is not equal to 105
## 95 percent confidence interval:
## 95.83795 104.16205
## sample estimates:
## mean of x
## 100
This is precisely the purpose of stand-alone confidence intervals: to compare a single result to a fixed value. The fixed value (here 100 for the first group and 105 for the second group) has no uncertainty; it is a constant.
In contrast, the two-group t test compares two means, both of which are uncertain. Therefore, in making a confidence interval, it is necessary to inform the basic, stand-alone confidence interval that it is going to be compared, not to a fixed value, but to a second quantity which is itself uncertain.
Using the language of analysis of variance, we can say that when the purpose of the plot is to compare means to other means, there is more variance in the comparisons than there is in single groups in isolation.
Assuming that the variances are roughly homogeneous between groups (an assumption made by the t test, but see below), there is a simple adjustment that can be brought to the error bars: just increase their length by a factor of $$\sqrt{2}$$. As $$\sqrt{2} \approx 1.41$$, this means increasing their length by 41%.
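The vignette's plots are made in R, but the arithmetic of the adjustment is easy to check by hand. The following Python sketch uses summary statistics inferred from the t tests above; the per-group standard deviation of about 10 and n = 25 are assumptions, not values quoted from the data:

```python
import math

# Assumed summary statistics, inferred from the one-sample t tests above
# (illustrative only; not quoted from the vignette's dataFigure1):
mean1, mean2 = 105.0, 100.0   # group means
sd, n = 10.0, 25              # per-group standard deviation and sample size
t_crit = 2.0639               # 97.5th percentile of t with n - 1 = 24 df

se = sd / math.sqrt(n)
standalone = t_crit * se                 # stand-alone 95% CI half-width
adjusted = math.sqrt(2) * standalone     # difference-adjusted half-width

# The stand-alone interval around 105 excludes 100 (the apparent paradox)...
assert not (mean1 - standalone <= mean2 <= mean1 + standalone)
# ...but the difference-adjusted interval contains 100, which is coherent
# with the non-significant two-sample t test (p = .085).
assert mean1 - adjusted <= mean2 <= mean1 + adjusted
```

When the difference-adjusted interval of one group covers the other group's mean, the figure and the two-sample test tell the same story.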
With superbPlot, this so-called difference adjustment (Baguley, 2012) is obtained easily by adding an adjustment to the list of adjustments with adjustments = list(purpose = "difference"), as seen below.
superbPlot(dataFigure1,
BSFactors = "grp",
adjustments=list(purpose = "difference"), # the only new thing here
variables = "score",
plotStyle="line" ) +
xlab("Group") + ylab("Score") +
coord_cartesian( ylim = c(85,115) ) +
theme_gray(base_size=16) +
scale_x_discrete(labels=c("1" = "Collaborative\ngames", "2" = "Unstructured\nactivity"))
This is where the usefulness of superb is apparent: a single option turns a plot showing means with stand-alone confidence intervals into a plot showing means with difference-adjusted confidence intervals (Cousineau, Goulet, & Harding, 2021).
## Illustrating the impact of the adjustments
Just for comparison purposes, let’s show both plots side-by-side.
library(gridExtra)
plt1 <- superbPlot(dataFigure1,
BSFactors = "grp",
variables = "score",
plotStyle="line" ) +
xlab("Group") + ylab("Score") +
labs(title="(stand-alone)\n95% confidence intervals") +
coord_cartesian( ylim = c(85,115) ) +
theme_gray(base_size=16) +
scale_x_discrete(labels=c("1" = "Collaborative\ngames", "2" = "Unstructured\nactivity"))
plt2 <- superbPlot(dataFigure1,
BSFactors = "grp",
adjustments=list(purpose = "difference"),
variables = "score",
plotStyle="line" ) +
xlab("Group") + ylab("Score") +
labs(title="Difference-adjusted\n95% confidence intervals") +
coord_cartesian( ylim = c(85,115) ) +
theme_gray(base_size=16) +
scale_x_discrete(labels=c("1" = "Collaborative\ngames", "2" = "Unstructured\nactivity"))
plt <- grid.arrange(plt1, plt2, ncol=2)
A second way to compare the two plots is to superimpose them, as in Figure 4:
# generate the two plots, nudging the error bars, using distinct colors, and
# having the second plot's background transparent (with makeTransparent() )
plt1 <- superbPlot(dataFigure1,
BSFactors = "grp",
variables = "score",
errorbarParams = list(color="blue",position = position_nudge(-0.05) ),
plotStyle="line" ) +
xlab("Group") + ylab("Score") +
labs(title="(red) Difference-adjusted 95% confidence intervals\n(blue) (stand-alone) 95% confidence intervals") +
coord_cartesian( ylim = c(85,115) ) +
theme_gray(base_size=12) +
scale_x_discrete(labels=c("1" = "Collaborative\ngames", "2" = "Unstructured\nactivity"))
plt2 <- superbPlot(dataFigure1,
BSFactors = "grp",
adjustments=list(purpose = "difference"),
variables = "score",
errorbarParams = list(color="red",position = position_nudge(0.05) ),
plotStyle="line" ) +
xlab("Group") + ylab("Score") +
labs(title="(red) Difference-adjusted 95% confidence intervals\n(blue) (stand-alone) 95% confidence intervals") +
coord_cartesian( ylim = c(85,115) ) +
theme_gray(base_size=12) +
scale_x_discrete(labels=c("1" = "Collaborative\ngames", "2" = "Unstructured\nactivity"))
# transform the ggplots into "grob" so that they can be manipulated
plt1g <- ggplotGrob(plt1)
plt2g <- ggplotGrob(plt2 + makeTransparent() )
# put the two grob onto an empty ggplot (as the positions are the same, they will be overlayed)
ggplot() +
annotation_custom(grob=plt1g) +
annotation_custom(grob=plt2g)
As seen, the difference-adjusted error bars are wider. This is to be expected: their purpose (comparing two means) introduces more variability, and variability always reduces precision.
## Two options
There are two methods to adjust for the purpose of the error bars:
• "difference": This method is the simplest. It increases the error bars by a factor of $$\sqrt{2}$$ on the premise that the variances are homogeneous
• "tryon": This method, proposed in Tryon (2001), is used when the variances are inhomogeneous. It replaces the $$\sqrt{2}$$ correction factor by a factor $$2 \times E$$ based on the heterogeneity of the variances. In the case where the error bars are roughly homogeneous, there is no visible difference with "difference". See[Vignette 7] (https://dcousin3.github.io/superb/articles/Vignette7.html) for more
The option "single" is used if the purpose is obtain “stand-alone” error bars or error bars that are to be compared to an a priori determine value. Such error bars are inapt to perform pair-wise comparisons.
## In conclusion
Adjusting the confidence intervals is important for coherence between the text and the figure. If you claim that there is no difference but show Figure 1, an examiner (you know, a reviewer) may raise a red flag and cast doubt on your conclusions (opening the door to many rounds of reviews, or rejection if this is a submitted work).
Having coherence between the figures and the tests reported in your document is one way to improve the clarity of your work. Coherence here comes cheap: You just need to add in the figure caption “Difference-adjusted” before “95% confidence intervals.”
# References
Baguley, T. (2012). Calculating and graphing within-subject confidence intervals for ANOVA. Behavior Research Methods, 44, 158–175. https://doi.org/10.3758/s13428-011-0123-7
Cousineau, D. (2017). Varieties of confidence intervals. Advances in Cognitive Psychology, 13, 140–155. https://doi.org/10.5709/acp-0214-z
Cousineau, D., Goulet, M.-A., & Harding, B. (2021). Summary plots with adjusted error bars: The superb framework with an implementation in R. Advances in Methods and Practices in Psychological Science, 4, 1–18. https://doi.org/10.1177/25152459211035109
Tryon, W. W. (2001). Evaluating statistical difference, equivalence, and indeterminacy using inferential confidence intervals: An integrated alternative method of conducting null hypothesis statistical tests. Psychological Methods, 6, 371–386. https://doi.org/10.1037/1082-989X.6.4.371
http://blog.stata.com/2012/07/18/using-statas-random-number-generators-part-1/
## Using Stata’s random-number generators, part 1
I want to start a series on using Stata’s random-number function. Stata in fact has ten random-number functions:
1. runiform() generates rectangularly (uniformly) distributed random numbers over [0, 1).
2. rbeta(a, b) generates beta-distribution beta(a, b) random numbers.
3. rbinomial(n, p) generates binomial(n, p) random numbers, where n is the number of trials and p the probability of a success.
4. rchi2(df) generates χ2 with df degrees of freedom random numbers.
5. rgamma(a, b) generates Γ(a, b) random numbers, where a is the shape parameter and b, the scale parameter.
6. rhypergeometric(N, K, n) generates hypergeometric random numbers, where N is the population size, K is the number in the population having the attribute of interest, and n is the sample size.
7. rnbinomial(n, p) generates negative binomial — the number of failures before the nth success — random numbers, where p is the probability of a success. (n can also be noninteger.)
8. rnormal(μ, σ) generates Gaussian normal random numbers.
9. rpoisson(m) generates Poisson(m) random numbers.
10. rt(df) generates Student’s t(df) random numbers.
You already know that these random-number generators do not really produce random numbers; they produce pseudo-random numbers. This series is not about that, so we’ll be relaxed about calling them random-number generators.
You should already know that you can set the random-number seed before using the generators. That is not required but it is recommended. You set the seed not to obtain better random numbers, but to obtain reproducible random numbers. In fact, setting the seed too often can actually reduce the quality of the random numbers! If you don’t know that, then read help set seed in Stata. I should probably pull out the part about setting the seed too often, expand it, and turn it into a blog entry. Anyway, this series is not about that either.
This series is about the use of random-number generators to solve problems, just as most users usually use them. The series will provide practical advice. I’ll stay away from describing how they work internally, although long-time readers know that I won’t keep the promise. At least I’ll try to make sure that any technical details are things you really need to know. As a result, I probably won’t even get to write once that if this is the kind of thing that interests you, StataCorp would be delighted to have you join our development staff.
runiform(), generating uniformly distributed random numbers
Mostly I’m going to write about runiform() because runiform() can solve such a variety of problems. runiform() can be used for
• shuffling data (putting observations in random order),
• drawing random samples without replacement (there’s a minor detail we’ll have to discuss because runiform() itself produces values drawn with replacement),
• drawing random samples with replacement (which is easier to do than most people realize),
• drawing stratified random samples (with or without replacement),
• manufacturing fictional data (something teachers, textbook authors, manual writers, and blog writers often need to do).
runiform() generates uniformly, a.k.a. rectangularly distributed, random numbers over the interval, I quote from the manual, “0 to nearly 1”.
Nearly 1? “Why not all the way to 1?” you should be asking. “And what exactly do you mean by nearly 1?”
The answer is that the generator is more useful if it omits 1 from the interval, and so we shaved just a little off. runiform() produces random numbers over [0, 0.999999999767169356].
Here are two useful formulas you should commit to memory.
1. If you want to generate continuous random numbers between a and b, use
generate double u = (b-a)*runiform() + a
The random numbers will not actually be between a and b, they will be between a and nearly b, but the top will be so close to b, namely a + 0.999999999767169356*(b-a), that it will not matter.
Remember to store continuous random values as doubles.
2. If you want to generate integer random numbers between a and b, use
generate ui = floor((b-a+1)*runiform() + a)
In particular, do not even consider using the formula for continuous values but rounded to integers, which is to say, round(u) = round((b-a)*runiform() + a). If you use that formula, and if b-a>1, then a and b will be underrepresented by 50% each in the samples you generate!
I stored ui as a default float, so I am assuming that -16,777,216 ≤ a < b ≤ 16,777,216. If you have integers outside of that range, however, store as a long or double.
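The two formulas carry over to any language with a [0, 1) uniform generator. Here is a quick sketch in Python (not Stata), using the standard library’s random.random(), which also returns values in [0, 1):

```python
import math
import random

random.seed(1)  # for reproducibility, as with Stata's `set seed`

def runiform_continuous(a, b):
    """Continuous uniform over [a, b): (b - a)*u + a."""
    return (b - a) * random.random() + a

def runiform_int(a, b):
    """Integer uniform over [a, b]: floor((b - a + 1)*u + a)."""
    return math.floor((b - a + 1) * random.random() + a)

# Each integer 1..5 should appear roughly 20% of the time.
draws = [runiform_int(1, 5) for _ in range(100_000)]
counts = {k: draws.count(k) for k in range(1, 6)}
assert set(counts) == {1, 2, 3, 4, 5}
assert all(abs(c / 100_000 - 0.20) < 0.01 for c in counts.values())
```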
I’m going to spend the rest of this blog entry explaining the above.
First, I want to show you how I got the two formulas and why you must use the second formula for generating integer uniform deviates.
Then I want to explain why we shaved a little from the top of runiform(), namely (1) while it wouldn’t matter for formula 1, it made formula 2 a little easier, (2) the code would run more quickly, (3) we could more easily prove that we had implemented the random-number generator correctly, and (4) anyone digging deeper into our random numbers would not be misled into thinking they had more than 32 bits of resolution. That last point will be important in a future blog entry.
Continuous uniforms over [a, b)
runiform() produces random numbers over [0, 1). It therefore obviously follows that (b-a)*runiform()+a produces numbers over [a, b). Substitute 0 for runiform() and the lower limit is obtained. Substitute 1 for runiform() and the upper limit is obtained.
I can tell you that in fact, runiform() produces random numbers over [0, (2^32-1)/2^32].
Thus (b-a)*runiform()+a produces random numbers over [a, a + ((2^32-1)/2^32)*(b-a)].
(2^32-1)/2^32 approximately equals 0.999999999767169356 and exactly equals 1.fffffffeX-01 if you will allow me to use %21x format, which Stata understands and which you can understand if you see my previous blog posting on precision.
Thus, if you are concerned about results being in the interval [a, b) rather than [a, b], you can use the formula
generate double u = ((b-a)*runiform() + a) / 1.fffffffeX-01
There are seven f’s followed by e in the hexadecimal constant. Alternatively, you could type
generate double u = ((b-a)*runiform() + a) / ((2^32-1)/2^32)
but dividing by 1.fffffffeX-01 is less typing so I’d type that. Actually I wouldn’t type either one; the small difference between values lying in [a, b) or [a, b] is unimportant.
Integer uniforms over [a, b]
Whether we produce real, continuous random numbers over [a, b) or [a, b] may be unimportant, but if we want to draw random integers, the distinction is important.
runiform() produces continuous results over [0, 1).
(b-a)*runiform()+a produces continuous results over [a, b).
To produce integer results, we might round continuous results over segments of the number line:
a a+.5 a+1 a+1.5 a+2 a+2.5 b-1.5 b-1 b-.5 b
real line +-----+-----+-----+-----+-----+-----------+-----+-----+-----+
int line |<-a->|<---a+1--->|<---a+2--->| |<---b-1--->|<-b->|
In the diagram above, think of the numbers being produced by the continuous formula u=(b-a)*runiform()+a as being arrayed along the real line. Then imagine rounding those values, say by using Stata’s round(u) function. If you rounded in that way, then
• Values of u between a and a+0.5 will be rounded to a.
• Values of u between a+0.5 and a+1.5 will be rounded to a+1.
• Values of u between a+1.5 and a+2.5 will be rounded to a+2.
• Values of u between b-1.5 and b-0.5 will be rounded to b-1.
• Values of u between b-0.5 and b will be rounded to b.
Note that the width of the first and last intervals is half that of the other intervals. Given that u follows the rectangular distribution, we thus expect half as many values rounded to a and to b as to a+1 or a+2 or … or b-1.
And indeed, that is exactly what we would see:
. set obs 100000
obs was 0, now 100000
. gen double u = (5-1)*runiform() + 1
. gen i = round(u)
. summarize u i
Variable | Obs Mean Std. Dev. Min Max
-------------+--------------------------------------------------------
u | 100000 3.005933 1.156486 1.000012 4.999983
i | 100000 3.00489 1.225757 1 5
. tabulate i
i | Freq. Percent Cum.
------------+-----------------------------------
1 | 12,525 12.53 12.53
2 | 24,785 24.79 37.31
3 | 24,886 24.89 62.20
4 | 25,284 25.28 87.48
5 | 12,520 12.52 100.00
------------+-----------------------------------
Total | 100,000 100.00
To avoid the problem we need to make the widths of all the intervals equal, and that is what the formula floor((b-a+1)*runiform() + a) does.
a a+1 a+2 b-1 b b+1
real line +-----+-----+-----+-----+-----------------------+-----+-----+-----+-----+
int line |<--- a --->|<-- a+1 -->| |<-- b-1 -->|<--- b --->)
Our intervals are of equal width and thus we expect to see roughly the same number of observations in each:
. gen better = floor((5-1+1)*runiform() + 1)
. tabulate better
better | Freq. Percent Cum.
------------+-----------------------------------
1 | 19,808 19.81 19.81
2 | 20,025 20.02 39.83
3 | 19,963 19.96 59.80
4 | 20,051 20.05 79.85
5 | 20,153 20.15 100.00
------------+-----------------------------------
Total | 100,000 100.00
So now you know why we shaved a little off the top when we implemented runiform(); it made the formula
floor((b-a+1)*runiform() + a)
easier. Our integer [a, b] formula did not have to concern itself that runiform() would sometimes — rarely — return 1. If runiform() did return the occasional 1, the simple formula above would produce the (correspondingly occasional) b+1.
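The boundary case is easy to check numerically; a small Python illustration (the constants mirror the text’s description of runiform()):

```python
import math

a, b = 1, 5

# With u strictly below 1, floor((b - a + 1)*u + a) stays within [a, b]...
u_max = (2**32 - 1) / 2**32          # the largest value runiform() can return
assert math.floor((b - a + 1) * u_max + a) == b

# ...but if u could equal exactly 1, the formula would spill over to b + 1.
assert math.floor((b - a + 1) * 1.0 + a) == b + 1
```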
How Stata calculates continuous random numbers
I’ve said that we shaved a little off the top, but the fact was that it was easier for us to do the shaving than not.
runiform() is based on the KISS random-number generator. KISS produces 32-bit integers, meaning integers in the range [0, 2^32-1], or [0, 4,294,967,295]. You might wonder how we converted that range to being continuous over [0, 1).
Start by thinking of the number KISS produces in its binary form:
b31b30b29b28b27b26b25b24b23b22b21b20b19b18b17b16b15b14b13b12b11b10b9b8b7b6b5b4b3b2b1b0
The corresponding integer is b31*2^31 + b30*2^30 + … + b0*2^0. All we did was insert a binary point out front:
. b31b30b29b28b27b26b25b24b23b22b21b20b19b18b17b16b15b14b13b12b11b10b9b8b7b6b5b4b3b2b1b0
making the real value b31*2^-1 + b30*2^-2 + … + b0*2^-32. Doing that is equivalent to dividing by 2^32, except insertion of the binary point is faster. Nonetheless, if we had wanted runiform() to produce numbers over [0, 1], we could have divided by 2^32-1.
Anyway, if the KISS random number generator produced 3190625931, which in binary is
10111110001011010001011010001011
we converted that to
0.10111110001011010001011010001011
which equals 0.74287549 in base 10.
The largest number the KISS random number generator can produce is, of course,
11111111111111111111111111111111
and 0.11111111111111111111111111111111 equals 0.999999999767169356 in base 10. Thus, the runiform() implementation of KISS generates random numbers in the range [0, 0.999999999767169356].
I could have presented all of this mathematically in base 10: KISS produces integers in the range [0, 2^32-1], and in runiform() we divide by 2^32 to thus produce continuous numbers over the range [0, (2^32-1)/2^32]. I could have said that, but it loses the flavor and intuition of my longer explanation, and it would gloss over the fact that we just inserted the binary point. If I asked you, a base-10 user, to divide 232 by 10, you wouldn’t actually divide in the same way that you would divide by, say, 9. Dividing by 9 is work. Dividing by 10 merely requires shifting the decimal point. 232 divided by 10 is obviously 23.2. You may not have realized that modern digital computers, when programmed by “advanced” programmers, follow similar procedures.
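The arithmetic above is easy to verify; a short Python check of the example values (illustrative only, this is not how Stata implements it internally):

```python
# The example KISS draw from the text, converted by "inserting the binary
# point", i.e. dividing the 32-bit integer by 2^32.
kiss_output = 3190625931
u = kiss_output / 2**32
assert abs(u - 0.74287549) < 5e-9    # matches the base-10 value in the text

# The largest possible draw, with all 32 bits set.
largest = (2**32 - 1) / 2**32
assert abs(largest - 0.999999999767169356) < 1e-15
assert largest < 1.0                 # runiform() never returns 1
```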
Oh gosh, I do get to say it! If this sort of thing interests you, consider a career at StataCorp. We’d love to have you.
Is it important that runiform() values be stored as doubles?
Sometimes it is important. It’s obviously not important when you are generating random integers using floor((b-a+1)*runiform() + a) and -16,777,216 ≤ a < b ≤ 16,777,216. Integers in that range fit into a float without rounding.
When creating continuous values, remember that runiform() produces 32 bits. floats store 23 bits and doubles store 52, so if you store the result of runiform() as a float, it will be rounded. Sometimes the rounding matters, and sometimes it does not. Next time, we will discuss drawing random samples without replacement. In that case, the rounding matters. In most other cases, including drawing random samples with replacement (something else for later), the rounding does not matter. Rather than thinking hard about the issue, I store all my non-integer random values as doubles.
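The float-versus-double point can be seen directly with Python’s struct module (illustrative; Python floats are doubles, so we round-trip through a 32-bit float to imitate float storage):

```python
import struct

u = (2**32 - 1) / 2**32              # a full-resolution runiform() value

# Stored as a 64-bit double (52-bit mantissa), the value survives intact.
as_double = struct.unpack('<d', struct.pack('<d', u))[0]
assert as_double == u

# Stored as a 32-bit float (23-bit mantissa), it rounds -- here all the
# way up to 1.0, a value runiform() itself can never return.
as_float = struct.unpack('<f', struct.pack('<f', u))[0]
assert as_float == 1.0
```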
Tune in for the next episode
Yes, please do tune in for the next episode of everything you need to know about using random-number generators. As I already mentioned, we’ll discuss drawing random samples without replacement. In the third installment, I’m pretty sure we’ll discuss random samples with replacement. After that, I’m a little unsure about the ordering, but I want to discuss oversampling of some groups relative to others and, separately, discuss the manufacturing of fictional data.
Am I forgetting something?
• Simon
Thanks for the very helpful explanation, William.
I am trying to build up “fake” dataset, using random generated variables. Of each variable I know:
1) number of observations
2) mean
3) std error
4) min and max values.
Do you know how I can ask Stata to randomly generate a variable with all those specified parameters?
Many thanks,
Simon
• Jorge
This is one of the most useful entries about stata that I have ever read. Thank you William for taking the time of explaining this.
I was working with the code you propose for drawing random integers, and find out that it actually requires a lot of observations to get an even distribution between your intervals. Of course, you would expect to have a decent number of observations in each interval (~30 maybe), but even with this number, you don’t get a distribution as smooth as you expect.
Is there any way to deal with this problem on small size samples, without messing the random process?
Thank you,
Jorge
• A set of random draws is less smooth or regular than many people expect. So the behavior you report is to be expected and is desirable.
There are special circumstances where you may want something more regular than a random draw, e.g. when using maximum simulated likelihood. For those situations one option is to use a Halton set, see -help mf_halton- and the special issue of the Stata Journal on maximum simulated likelihood: http://www.stata-journal.com/sj6-2.html
• mark
I have the same problem as yours. Have you already had a solution to it? if so, can you please share with me? many thanks.
• There has been a long discussion of that on the Statalist recently. It started here: http://www.stata.com/statalist/archive/2013-03/msg00417.html
• Hi, I would like to know how I can generate random between two decimals. Example: Random between 0.5 to 0.9.
thanks
• Ana Milena Plata Fajardo
Please, I generated the random variable and I’m trying to run spmat command. However stata give me an error : “Variable _ID not found”
• David Crow
I know that drawnorm allows one to simulate data for a multivariate normal distribution, but can one simulate data from other multivariate probability distributions, including the uniform?
• Emmanuel Asare-bediako
please, help me to analyse by data using mixed logit
• Pedro
Hello,
How can I generate a random 3×1 vector, which has a mean-vector = b and a 3×3 variance-covariance matrix = B?
I need to generate a random vector of 3 coefficients which are related among them by a 3×3 variance-covariance matrix.
• Chek Meout
Is there a program i can use use to generate random numbers between 1-80 by inputting the data from the previous randomly generated numbers ?
• ptd006
Dear William,
is there a C(++) implementation of Stata’s random number generator available? (I am porting some Stata code and would like it for testing purposes)
Thanks!
• ptd006
In case it is useful to anyone else- I wrote a C++ plugin to use mt19937 generated uniforms in Stata, using the default seed. See http://stackoverflow.com/questions/29419165/stata-random-number-generator-in-c/29609299#29609299
• Russell Pieters
HI William,
How would I go about generating a random dummy variable, for a falsification test, i.e. a variable that randomly allocates values 0 or 1 into a vector?
Kind Regards,
Russell
• Mona
I think you can use runiformint(0,1) for that.
• Anne Bendixen
we seek to randomly allocate Don’t know (9) answers, so that 1/4 (25%) of the them is valued as correct. We have a multiple choice question with 4 opportunities, but we already coded them 0/1 – correct/incorrect. Therefore we want to recode DK saying: recode var1 (9=1 “correct” …. every 4th time).
Is that possible?
https://www.kybernetika.cz/content/1994/6/725
# Abstract:
In this paper, the robust ripple-free tracking and disturbance rejection problem is solved for multirate sampled-data systems whose matrices are assumed to depend on some "physical" parameters. Making use of a hybrid control system structure, including a continuous-time internal model of the exogenous signals and a periodic discrete-time subcompensator, a ripple-free null steady-state error response is obtained in a neighbourhood of the nominal "physical" parameters of the plant, and a ripple-free dead-beat error response at the nominal ones.
# Classification:
93B51, 93C73, 93C57, 93B35, 93A99
https://jonathanweisberg.org/vip/chbayes.html
# 8 Bayes’ Theorem
…in no other branch of mathematics is it so easy for experts to blunder as in probability theory.
—Martin Gardner
In a famous psychology experiment, subjects were asked to solve the following problem. The experiment was first published in 1971. It was performed by Daniel Kahneman and Amos Tversky. Their work on human reasoning reshaped the field of psychology, and eventually won a Nobel prize in 2002.
A cab was involved in a hit and run accident at night. Two cab companies, the Green and the Blue, operate in the city. You are given the following data:
1. $$85\%$$ of the cabs in the city are Green and $$15\%$$ are Blue.
2. A witness identified the cab as Blue. The court tested the reliability of the witness under the same circumstances that existed on the night of the accident and concluded that the witness correctly identified each one of the two colors $$80\%$$ of the time and failed $$20\%$$ of the time.
What is the probability that the cab involved in the accident was blue rather than green?
Most people answer $$80\%$$, because the witness is $$80\%$$ reliable. But the right answer is $$12/29$$, or about $$41\%$$.
How could the probability be so low when the witness is $$80\%$$ reliable? The short answer is: because blue cabs are rare. So most of the time, when the witness says a cab is blue, it’s one of the $$20\%$$ of green cabs they mistakenly identify as blue.
But this answer still needs some explaining. Let’s use a diagram.
Figure 8.1: The taxicab problem. There are $$15$$ blue cabs, $$85$$ green. The dashed region indicates those cabs the witness identifies as “blue.” It includes $$80\%$$ of the blue cabs ($$12$$), and only $$20\%$$ of the green ones ($$17$$). Yet it includes more green cabs than blue.
Imagine there are just $$100$$ cabs in town, $$85$$ green and $$15$$ blue. The dashed blue line represents the cabs the witness identifies as “blue,” both right or wrong. Because the witness is $$80\%$$ accurate, that line encompasses $$80\%$$ of the blue cabs, which is $$12$$ cabs. But it also encompasses $$20\%$$ of the green cabs, which is $$17$$. That’s a total of $$29$$ cabs identified as “blue,” only $$12$$ of which actually are blue.
So out of the $$29$$ cabs the witness calls “blue,” only $$12$$ really are blue. The probability a cab really is blue given the witness says so is only $$12/29$$, about $$41\%$$.
Another way to think about the problem is that there are two pieces of information relevant to whether the cab is blue. The witness says the cab is blue, but also, most cabs are not blue. So there’s evidence for the cab being blue, but also strong evidence against it. The diagram in Figure 8.1 shows us how to balance these two, competing pieces of evidence and come to the correct answer.
What trips people up so much in the taxicab problem? Remember how order matters with conditional probability. In this problem, we’re asked to find $$\p(B \given W)$$, the probability the cab is blue given that the witness says it is. That’s not the same as $$\p(W \given B)$$, the probability the witness will say the cab is blue if it really is. The problem tells us $$\p(W \given B) = 8/10$$, but it doesn’t tell us a number for $$\p(B \given W)$$. We have to figure that out.
We saw back in Chapter 6 that $$\p(A \given B)$$ is usually a different number from $$\p(B \given A)$$. A university student will probably be young, but a young person will probably not be a university student. That’s an example where it’s easy to see that order matters. The taxicab example makes it much less obvious, in fact it tempts us to confuse $$\p(B \given W)$$ with $$\p(W \given B)$$.
## 8.1 Bayes’ Theorem
Problems where we’re given $$\p(B \given A)$$ and we have to figure out $$\p(A \given B)$$ are extremely common. Luckily, there’s a famous formula for solving them.
Thomas Bayes (1701–1761) was an English minister and mathematician, the first to formulate the theorem that now bears his name.
Bayes’ Theorem
If $$\p(A),\p(B)>0$$, then $\p(A \given B) = \frac{\p(A)\p(B \given A)}{\p(B)}.$
Notice two things here. First, we need $$\p(A)$$ and $$\p(B)$$ to both be positive, because otherwise $$\p(A \given B)$$ and $$\p(B \given A)$$ aren’t well-defined. Second, we need to know more than just $$\p(B \given A)$$ to apply the formula. We also need numbers for $$\p(A)$$ and $$\p(B)$$.
In the taxicab problem we’re given two of the three pieces of information we need. Here’s Bayes’ theorem for the taxicab example: $\p(B \given W) = \frac{\p(B) \p(W \given B)}{\p(W)}.$ Whereas the problem gives us the following information:
• $$\p(W \given B) = 80/100$$.
• $$\p(W \given \neg B) = 20/100$$.
• $$\p(B) = 15/100$$.
• $$\p(\neg B) = 85/100$$.
What’s missing for Bayes’ Theorem is $$\p(W)$$. Fortunately, we can calculate it with the Law of Total Probability! \begin{aligned} \p(W) &= \p(W \given B)\p(B) + \p(W \given \neg B)\p(\neg B)\\ &= (80/100)(15/100) + (20/100)(85/100)\\ &= 29/100. \end{aligned} And now we have everything we need to plug into Bayes’ Theorem: \begin{aligned} \p(B \given W) &= \frac{\p(B) \p(W \given B)}{\p(W)}\\ &= \frac{(15/100)(80/100)}{29/100}\\ &= 12/29. \end{aligned} This is the same answer we got with our grid diagram (Figure 8.1). And notice, it’s also the answer we’d get from a tree diagram too (Figure 8.2).
Figure 8.2: Tree diagram for the taxicab problem. Since $$\p(B \wedge W) = .12$$ and $$\p(W) = .12 + .17$$, the definition of conditional probability yields $$\p(B \given W) = 12/29$$.
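The whole calculation fits in a few lines of code. A sketch in Python, using exact fractions so the answer comes out as 12/29 rather than a rounded decimal:

```python
from fractions import Fraction

p_B      = Fraction(15, 100)   # prior: the cab is blue
p_W_B    = Fraction(80, 100)   # P(witness says "blue" | blue)
p_W_notB = Fraction(20, 100)   # P(witness says "blue" | green)

# Law of Total Probability gives the denominator P(W).
p_W = p_W_B * p_B + p_W_notB * (1 - p_B)
assert p_W == Fraction(29, 100)

# Bayes' theorem: P(B | W) = P(B) P(W | B) / P(W).
p_B_W = p_B * p_W_B / p_W
assert p_B_W == Fraction(12, 29)   # about 41%, not 80%
```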
## 8.2 Understanding Bayes’ Theorem
Why does Bayes’ theorem work? One way to think about it is to start by recalling the definition of conditional probability: $\p(A \given B) = \frac{\p(A \wedge B)}{\p(B)}.$ Then apply the General Multiplication Rule to the numerator: $\p(A \wedge B) = \p(B \given A)\p(A).$ And plug that back into the first equation: $\p(A \given B) = \frac{\p(A) \p(B \given A)}{\p(B)}.$ This proves that Bayes’ theorem is correct. But it also suggests a way of understanding it visually, with an Euler diagram.
Figure 8.3: An Euler diagram for visualizing Bayes’ theorem
We’ve talked before about $$\p(A \given B)$$ being the portion of the $$B$$ region that’s inside the $$A$$ region. The purple portion of the blue $$B$$ circle, in other words: \begin{aligned} \p(A \given B) &= \frac{ \color{bookpurple}{\p(A \wedge B)} }{ \color{bookblue}{\p(B)} }.\\ \end{aligned} So when we apply the General Multiplication Rule to the purple region we get: \begin{aligned} \p(A \given B) &= \frac{ \color{bookpurple}{\p(B \given A)\p(A)} }{ \color{bookblue}{\p(B)} }.\\ \end{aligned}
We’ll come back to another, non-visual way of understanding Bayes’ theorem in Chapter 10.
There are lots more visual explanations of Bayes’ theorem you can find online. There’s even one using Legos! But they all tend to come down to the same thing. A two step explanation that goes:
1. Start with the definition of conditional probability, which we learned how to visualize in Chapter 6.
2. Then apply the General Multiplication Rule, which we learned how to visualize in Chapter 7.
This is a perfectly good and helpful way to think about Bayes’ theorem. But it’s not really a visualization of the theorem itself. It’s two separate visualizations of the ingredients we use to derive the theorem.
Figure 8.4: Bayes’ theorem on display at the offices of HP Autonomy, in Cambridge, UK
In any case, Bayes’ theorem comes up so often it’s good to memorize the formula itself. The visualization in Figure 8.4 is probably about as helpful as it gets for this purpose.
## 8.3 Bayes’ Long Theorem
We had to apply the Law of Total Probability first, before we could solve the taxicab problem with Bayes’ theorem, to calculate the denominator. This is so common that you’ll often see Bayes’ theorem written with this calculation built in. That is, the denominator $$\p(B)$$ is expanded out using the Law of Total Probability.
Bayes’ Theorem (long version)
If $$1 > \p(A) > 0$$ and $$\p(B)>0$$, then $\p(A \given B) = \frac{\p(A)\p(B \given A)}{\p(A)\p(B \given A) + \p(\neg A)\p(B \given \neg A)}.$
Notice how there’s some repetition in the numerator and the denominator. The term $$\p(A)\p(B \given A)$$ appears both above and below. So, when you’re doing a calculation with this formula, you can just do that bit once and then copy it in both the top and bottom. Then you just have to do the bottom-right term to complete the formula.
Figure 8.5: A tree diagram for the long form of Bayes’ theorem. The definition of conditional probability tells us $$\p(A \given B)$$ is the first leaf divided by the sum of the first and third leaves.
A tree diagram helps illuminate the long version of Bayes’ theorem. To calculate $$\p(A \given B)$$, the definition of conditional probability directs us to calculate $$\p(A \wedge B)$$ and $$\p(B)$$: $\p(A \given B) = \frac{ \p(A \wedge B) }{ \p(B) }.$ Looking at the tree diagram in Figure 8.5, we see that this amounts to computing the first leaf for the numerator, and the sum of the first and third leaves for the denominator. Which yields the same formula as in the long form of Bayes’ theorem.
## 8.4 Example
Let’s practice the long form of Bayes’ theorem.
Joe has just heard about the Zika virus and wonders if he has it. His doctor tells him only $$1\%$$ of the population has the virus, but he’s still worried. There’s a blood test he can take, but it’s not perfect. The test always comes up either negative or positive, but:
• 95% of people who have the virus test positive.
• 85% of people who don’t have the virus test negative.
If Joe tests positive, what is the probability he really has the Zika virus?
We’re asked to calculate $$\p(Z \given P)$$, and we’re given the following:
• $$\p(Z) = 1/100$$.
• $$\p(P \given Z) = 95/100$$.
• $$\p(P \given \neg Z) = 15/100$$.
So we can calculate $$\p(Z \given P)$$ using the long form of Bayes’ theorem: \begin{aligned} \p(Z \given P) &= \frac{\p(Z)\p(P \given Z)}{\p(Z)\p(P \given Z) + \p(\neg Z)\p(P \given \neg Z)}\\ &= \frac{(1/100)(95/100)}{(1/100)(95/100) + (99/100)(15/100)}\\ &= \frac{95}{95 + 1,485}\\ &= 19/316\\ &\approx .06. \end{aligned} The calculation is also diagrammed in Figure 8.6.
Figure 8.6: Tree diagram of the Zika problem
It turns out there’s only about a $$6\%$$ chance Joe has the virus. Even though he tested positive with a fairly reliable blood test! It’s counterintuitive, but the reason is the same as with the taxicab problem. There are two, conflicting pieces of evidence: the blood test is positive but the virus is rare. It turns out the virus is so rare that the positive blood test doesn’t do much to increase Joe’s chances of being infected.
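It’s easy to check this calculation with a few lines of code. The helper below (my own sketch; the function name is not from the text) encodes the long form of Bayes’ theorem directly:

```python
# Long form of Bayes' theorem:
# P(A|B) = P(A)P(B|A) / [P(A)P(B|A) + P(~A)P(B|~A)]

def bayes_long(prior, likelihood, false_positive_rate):
    """Posterior P(A|B) from the prior P(A), the likelihood P(B|A),
    and the false-positive rate P(B|~A)."""
    numerator = prior * likelihood
    denominator = numerator + (1 - prior) * false_positive_rate
    return numerator / denominator

# Joe's Zika numbers from this section
p = bayes_long(prior=0.01, likelihood=0.95, false_positive_rate=0.15)
print(round(p, 2))  # 0.06, matching 19/316
```

With the classic taxicab numbers (15% base rate and an 80%-reliable witness; the witness figure is the standard version of the problem, not stated in this excerpt), the same helper gives roughly 0.41.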
Bayes’ theorem doesn’t always give such surprising results. In fact the results are often very intuitive. Professors just like to use the counterintuitive examples to demonstrate how important Bayes’ theorem is. Without it, it’s easy to go astray.
## 8.5 The Base Rate Fallacy
In the Zika example, the rate of infection in the general population is very low, just $$1\%$$. The rate at which something happens in general is called the base rate. In the taxicab example, the base rate for blue cabs was $$15\%$$.
One lesson of this chapter is that you have to take the base rate into account to get the right answer, via Bayes’ theorem. Humans have a tendency to ignore the base rate, and focus only on the “test” performed: the blood test in the Zika example, the testimony of the witness in the taxicab example. This mistake is called base rate neglect, or the base rate fallacy.
This video by Julia Galef explains more about how you can use Bayes’ theorem, not just to avoid the base rate fallacy, but also to improve your thinking more generally. In Chapter 10 we’ll talk more about how Bayes’ theorem illuminates good scientific reasoning.
The base rate fallacy is so tempting, even trained professionals are susceptible to it. In one famous study, $$160$$ medical doctors were given a problem similar to our Zika example (but with cancer instead of Zika). The question was multiple choice, and it used easier numbers than we did. Yet only $$34$$ of the doctors got it right. Microsoft founder Bill Gates seems to be making a similar mistake in this tweet.
Hence the quote that opens this chapter: “in no other branch of mathematics is it so easy for experts to blunder as in probability theory.”
## Exercises
1. Recall this problem from Chapter 6:
Five percent of tablets made by the company Ixian have factory defects. Ten percent of the tablets made by their competitor company Guild do. A computer store buys $$40\%$$ of its tablets from Ixian, and $$60\%$$ from Guild.
Use Bayes’ theorem to find $$\p(I \given D)$$, the probability that a tablet from this store is made by Ixian, given that it has a factory defect.
2. Recall this problem from Chapter 6:
In the city of Elizabeth, the neighbourhood of Southside has lots of chemical plants. $$2\%$$ of Elizabeth’s children live in Southside, and $$14\%$$ of those children have been exposed to toxic levels of lead. Elsewhere in the city, only $$1\%$$ of the children have toxic levels of exposure.
Use Bayes’ theorem to find $$\p(S \given L)$$, the probability that a randomly chosen child from Elizabeth who has toxic levels of lead exposure lives in Southside.
3. The probability that Nasim will study for her test is $$4/10$$. The probability that she will pass, given that she studies, is $$9/10$$. The probability that she passes, given that she does not study, is $$3/10$$. What is the probability that she has studied, given that she passes?
4. At the height of flu season, roughly $$1$$ in every $$100$$ people have the flu. But some people don’t show symptoms even when they have it: only half the people who have the virus show symptoms.
Flu symptoms can also be caused by other things, like colds and allergies. So about $$1$$ in every $$20$$ people who don’t have the flu still have flu-like symptoms.
If someone has flu-like symptoms at the height of flu season, what is the probability that they actually have the flu?
5. A magic shop sells two kinds of trick coins. The first kind are biased towards heads: they come up heads $$9$$ times out of $$10$$ (the tosses are independent). The second kind are biased towards tails: they come up tails $$8$$ times out of $$10$$ (tosses still independent). Half the coins are the first kind, half are the second kind. But they don’t label the coins, so you have to experiment to find out which are which.
You pick a coin at random and flip it once. Given that it comes up heads, what is the probability it’s the first kind of coin?
6. There is a room filled with two types of urns.
• Type A urns have $$30$$ yellow marbles, $$70$$ red.
• Type B urns have $$20$$ green marbles, $$80$$ yellow.
The two types of urn look identical, but $$80\%$$ of them are Type A. You pick an urn at random and draw a marble from it at random.
(a) What is the probability the marble will be yellow?
Now you look at the marble: it is yellow.
(b) What is the probability the urn is a Type B urn, given that you drew a yellow marble?
7. In the long form of Bayes’ theorem, the denominator is broken down by applying the Law of Total Probability to a partition of two propositions, $$A$$ and $$\neg A$$. We can extend the same idea to a partition of three propositions, $$X$$, $$Y$$, and $$Z$$. Here is a start on the formula:
$\p(X \given B) = \frac{\p(X)\p(B \given X)}{\p(X)\p(B \given X) + \;\ldots\;}.$
Fill in the rest of the formula, then justify it.
8. A company makes websites, always powered by one of three server platforms: Bulldozer, Kumquat, or Penguin. Bulldozer crashes $$1$$ out of every $$10$$ visits, Kumquat crashes $$1$$ in $$50$$ visits, and Penguin only crashes $$1$$ out of every $$200$$ visits.
Half of the websites are run on Bulldozer, $$30\%$$ are run on Kumquat, and $$20\%$$ are run on Penguin.
(This problem is based on Exercise 6 from p. 78 of Ian Hacking’s An Introduction to Probability & Inductive Logic.)
You visit one of their sites for the first time and it crashes. What is the probability it was run on Penguin?
9. You and Carlos are at a party, which means there’s a $$2/3$$ chance he’s been drinking. You decide to experiment to find out: you throw a tennis ball to Carlos and he misses the catch. Five minutes later you try again and he misses again. Assume the two catches are independent.
When he’s sober, Carlos misses a catch only two times out of ten. When he’s been drinking, Carlos misses catches half the time.
What is the probability that Carlos has been drinking, given that he missed both catches?
10. The Queen Gertrude Hotel has two kinds of suites: singles have one bed, royal suites have three beds. There are $$80$$ singles and $$20$$ royals.
In a single, the probability of bed bugs is $$1/100$$. But every additional bed put in a suite doubles the chance of bed bugs.
If a suite is inspected at random and bed bugs are found, what is the probability it’s a royal?
11. Willy Wonka Chocolates Inc. makes two kinds of boxes of chocolates. The “wonk box” has four caramel chocolates and six regular chocolates. The “zonk box” has six caramel chocolates, two regular chocolates, and two mint chocolates. A third of their boxes are wonk boxes, the rest are zonk boxes.
They don’t mark the boxes. The only way to tell what kind of box you’ve bought is by trying the chocolates inside. In fact, all the chocolates look the same; you can only tell the difference by tasting them.
If you buy a random box, try a chocolate at random, and find that it’s caramel, what is the probability you’ve bought a wonk box?
12. A room contains four urns. Three of them are Type X, one is Type Y.
• The Type X urns each contain $$3$$ black marbles, $$2$$ white marbles.
• The Type Y urn contains $$1$$ black marble, $$4$$ white marbles.
You are going to pick an urn at random and start drawing marbles from it at random without replacement.
What is the probability the urn is Type X if the first draw is black?
13. Suppose I have an even mix of black and white marbles. I choose one at random without letting you see the colour, and I put it in a hat. Then I add a second, black marble to the hat. If I draw one marble at random from the hat and it’s black, what is the probability the marble left in the hat is black? (This problem was devised by Lewis Carroll, author of Alice’s Adventures in Wonderland.)
14. Suppose you have a test for some disease, which always comes up positive for people who have the disease: $$\p(P \given D) = 1$$. The base-rate in the population for this disease is $$1\%$$. How low does the false-positive rate $$\p(P \given \neg D)$$ have to be for the test to achieve $$95\%$$ reliability, i.e. to have $$\p(D \given P) = .95$$?
15. Suppose the test for some disease is perfect for people who have the disease, $$\p(P \given D) = 1$$. And it’s almost perfect for people who don’t have the disease: $$\p(\neg P \given \neg D) = 98/99$$. How high does the base rate have to be for the test to be $$99\%$$ reliable, i.e. to have $$\p(D \given P) = .99$$?
16. An urn contains $$4$$ marbles, either blue or green. The number of blue marbles is equally likely to be $$0, 1, 2, 3$$, or $$4$$. Suppose we do $$3$$ random draws with replacement, and the observed sequence is: blue, green, blue. What is the probability the urn contains just $$1$$ blue marble?
17. Consider the equation $$\p(A \given B) = \p(B \given A)$$. Assume all conditional probabilities are well-defined. Which one of the following is correct?
1. This equation always holds.
2. This equation never holds.
3. This equation only holds sometimes: whenever $$\p(A \given B) = \p(A)$$.
4. This equation only holds sometimes: whenever $$\p(A) = \p(B)$$.
5. This equation only holds sometimes: whenever $$\p(B) = 1$$.
https://www.calctool.org/math-and-statistics/slope
# Slope Calculator
Created by Luis Hoyos
Last updated: Jul 05, 2022
If you were searching for how to find a slope with two points of a line, this slope calculator is what you need.
Understanding the slope and its formula is a first step toward more complex topics in math and physics:
• The slope concept is necessary to understand the derivative of a function.
• Once you understand the derivative concept, you'll notice that velocity is no more than the derivative of position with respect to time.
• If you're studying solid mechanics, you'll figure out that the slope of the linear portion of the stress-strain diagram equals the famous Young's modulus.
• When studying the normal force over objects lying on inclined surfaces, you'll find out that this normal force varies with the slope of that surface.
This tool also has other functionalities:
• Calculating the y-intercept of the line forming those two points.
• Calculating the distance between the two points.
• Calculating the slope percentage and the slope angle.
Read on to learn more about the slope equation used to calculate slope with two points and how to calculate those other terms.
## Slope formula
If you have two points, (x₁, y₁) and (x₂, y₂), and want to calculate the slope, the equation to use is:
m = (y₂ - y₁)/(x₂ - x₁)
where:
• m — Slope of the line;
• x₁ — x coordinate of the first point;
• y₁ — y coordinate of the first point;
• x₂ — x coordinate of the second point; and
• y₂ — y coordinate of the second point.
## How to find a slope with two points
1. Identify the two points: (x₁, y₁) and (x₂, y₂).
2. Subtract the y coordinate of the first point from the y coordinate of the second point: y₂ - y₁.
3. Subtract the x coordinate of the first point from the x coordinate of the second point: x₂ - x₁.
4. Finally, calculate the slope m by dividing the difference of the y coordinates by the difference of the x coordinates: m = (y₂ - y₁)/(x₂ - x₁). This is the slope formula.
This tool also calculates the slope angle, the angle the line forms with the horizontal axis. To find this angle (θ), calculate the inverse tangent (arctan) of the slope m:
θ = arctan(m)
💡 Key points:
• For positive slopes, the angle is positive and measured about the right part of the x-axis.
• For negative slopes, the angle is negative and measured about the left part of the x-axis.
• A higher absolute value of slope (m) implies a higher absolute value for the angle (θ).
Percent grade or slope percentage is simply the slope expressed as a percentage. For example, for m = 1, the percent grade equals 100%, while an m = 0.5 implies a 50% percent grade. To calculate the slope percentage, multiply the slope by 100.
## Calculating the y-intercept
The y-intercept is the point where the function intersects the y-axis, and we can calculate it with one of the following formulas:
b = y₂ - mx₂
b = y₁ - mx₁
As we can see, to calculate the y-intercept, we only need one point (x₁, y₁ or x₂, y₂) and the slope.
## Calculating the distance between two points
To calculate the distance between two points (d), we use the following formula, which relies on the Pythagorean theorem:
d = √[(x₂ - x₁)² + (y₂ - y₁)²]
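All of the formulas above fit in a few lines of code. This sketch (function and variable names are my own, not from the article) computes the slope, y-intercept, slope angle, percent grade, and distance for two points:

```python
import math

def line_stats(x1, y1, x2, y2):
    """Slope-related quantities for the line through (x1, y1) and (x2, y2)."""
    m = (y2 - y1) / (x2 - x1)           # slope formula
    b = y1 - m * x1                      # y-intercept
    theta = math.degrees(math.atan(m))   # slope angle in degrees
    percent = m * 100                    # percent grade
    d = math.hypot(x2 - x1, y2 - y1)     # distance between the points
    return m, b, theta, percent, d

m, b, theta, percent, d = line_stats(0, 1, 4, 4)
print(m, b, d)  # 0.75 1.0 5.0
```

Note that the slope is undefined for a vertical line (x₁ = x₂); this sketch would raise a `ZeroDivisionError` there.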
https://brilliant.org/problems/quadratic-non-residues/
Find the smallest $$n$$ such that for some prime $$p$$, at least $$20$$ of the numbers $$1,2,...,n$$ are quadratic non-residues modulo $$p$$.
Details and assumptions
$$k$$ is a quadratic residue modulo $$p$$ if there exists an integer $$j$$ such that $$j^2 \equiv k \pmod{p}$$.
There is no condition on the relative sizes of $$n$$ and $$p$$. As an explicit example, if $$p=3$$, then $$n=59$$ would satisfy the conditions of the question.
http://mathhelpforum.com/calculus/275788-integral-dirac-delta-distribution-over-finite-region.html
# Thread: Integral of Dirac delta distribution over finite region
1. ## Integral of Dirac delta distribution over finite region
Hi there. I was wondering about what happens if one integrates the Dirac delta in a finite region.
It is well known that:
$\int_{-\infty}^{\infty} \delta (x)dx=1$
and:
$\int_{-\infty}^{\infty} f(x) \delta (x)dx=f(0)$
Now, what happens if the integration is defined in a finite region, for example:
$\int_{-5}^{5} \delta (x)dx=$?
$\int_{-5}^{5} f(x) \delta (x)dx=$?
Or even:
$\int_{0}^{x_f} f(x) \delta (x)dx=$?
Does this have a meaning at all?
2. ## Re: Integral of Dirac delta distribution over finite region
$\forall \epsilon > 0,~\displaystyle \int_{-\epsilon}^\epsilon~\delta(\tau)~d\tau = 1$
$\forall \epsilon > 0,~\displaystyle \int_{-\epsilon}^0~\delta(\tau)~d\tau = \int_0^\epsilon~\delta(\tau)~d\tau = \dfrac 1 2$
$\forall \epsilon > 0, ~\displaystyle \int_{-\epsilon}^\epsilon~f(\tau) \delta(\tau)~d\tau = \int_0^\epsilon~f(\tau) \delta(\tau)~d\tau = \int_{-\epsilon}^0~f(\tau) \delta(\tau)~d\tau=f(0)$
3. ## Re: Integral of Dirac delta distribution over finite region
Originally Posted by romsek
$\forall \epsilon > 0,~\displaystyle \int_{-\epsilon}^0~\delta(\tau)~d\tau = \int_0^\epsilon~\delta(\tau)~d\tau = \dfrac 1 2$
$\forall \epsilon > 0, ~\displaystyle \int_{-\epsilon}^\epsilon~f(\tau) \delta(\tau)~d\tau = \int_0^\epsilon~f(\tau) \delta(\tau)~d\tau = \int_{-\epsilon}^0~f(\tau) \delta(\tau)~d\tau=f(0)$
With $f(x)=1$, you have a contradiction between the two lines here.
$\int_{-a}^a f(x)\delta(x)\,\mathrm d x = f(0)$ is a trivial result when you consider the value of $\delta(x)$ on the intervals $(-\infty,-a)$ and $(a,\infty)$.
4. ## Re: Integral of Dirac delta distribution over finite region
Originally Posted by Archie
With $f(x)=1$, you have a contradiction between the two lines here.
$\int_{-a}^a f(x)\delta(x)\,\mathrm d x = f(0)$ is a trivial result when you consider the value of $\delta(x)$ on the intervals $(-\infty,-a)$ and $(a,\infty)$.
yeah... my mistake
$\displaystyle \int_{-\epsilon}^0 f(\tau)\delta(\tau)~d\tau = \int_0^\epsilon f(\tau)\delta(\tau)~d\tau = \dfrac 1 2 f(0)$
5. ## Re: Integral of Dirac delta distribution over finite region
Are there different interpretations of this integral? Wolframalpha seems to think:
$\displaystyle \int_0^\epsilon \delta(\tau)d\tau = \int_{-\epsilon}^0 \delta(\tau)d\tau = 1$
Wolfram|Alpha: Computational Knowledge Engine
Wolfram|Alpha: Computational Knowledge Engine
Is that a mistake? I'm sure Wolfram can fix it if it is.
6. ## Re: Integral of Dirac delta distribution over finite region
Originally Posted by romsek
yeah... my mistake
$\displaystyle \int_{-\epsilon}^0 f(\tau)\delta(\tau)~d\tau = \int_0^\epsilon f(\tau)\delta(\tau)~d\tau = \dfrac 1 2 f(0)$
Now that I think about it this result was key to showing that the Fourier Series converges to the average of the left and right hand limits at discontinuities.
7. ## Re: Integral of Dirac delta distribution over finite region
Originally Posted by SlipEternal
Are there different interpretations of this integral? Wolframalpha seems to think:
$\displaystyle \int_0^\epsilon \delta(\tau)d\tau = \int_{-\epsilon}^0 \delta(\tau)d\tau = 1$
Wolfram|Alpha: Computational Knowledge Engine
Wolfram|Alpha: Computational Knowledge Engine
Is that a mistake? I'm sure Wolfram can fix it if it is.
every representation of the delta function I've ever seen has shown it as symmetric with respect to the origin.
It's considered the limit of a symmetric distribution where the standard deviation tends to zero.
If you integrate one side or the other you get 1/2
You can stuff your snide little attitude asshole.
8. ## Re: Integral of Dirac delta distribution over finite region
Originally Posted by SlipEternal
Are there different interpretations of this integral? Wolframalpha seems to think:
$\displaystyle \int_0^\epsilon \delta(\tau)d\tau = \int_{-\epsilon}^0 \delta(\tau)d\tau = 1$
Wolfram|Alpha: Computational Knowledge Engine
Wolfram|Alpha: Computational Knowledge Engine
Is that a mistake? I'm sure Wolfram can fix it if it is.
When I entered the link it says that the integral from 0 to 1 gives $\theta(0)$, where theta is the Heaviside step function. The Heaviside step function is 1/2 at zero: https://en.wikipedia.org/wiki/Heaviside_step_function
It actually depends on the definition of the Heaviside function, but I'm pretty sure that the definition wolfram handles is the one with 1/2 at zero.
9. ## Re: Integral of Dirac delta distribution over finite region
Originally Posted by Ulysses
When I entered the link it says that the integral from 0 to 1 gives $\theta(0)$, where theta is the heaviside step function. The heaviside step function is 1/2 at zero: https://en.wikipedia.org/wiki/Heaviside_step_function
It actually depends on the definition of the Heaviside function, but I'm pretty sure that the definition wolfram handles is the one with 1/2 at zero.
Nope. I tried heavysidetheta(0) and it gave one. I think it is an error at wolframalpha.
10. ## Re: Integral of Dirac delta distribution over finite region
Originally Posted by romsek
You can stuff your snide little attitude asshole.
I was actually asking in earnest, Mr. Paranoid.
11. ## Re: Integral of Dirac delta distribution over finite region
Relax guys , everything is fine. Thank you both.
12. ## Re: Integral of Dirac delta distribution over finite region
I contacted Wolframalpha and asked that they include both answers as a result for Heavisidetheta[0].
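A quick numerical check of the symmetric-limit picture described in the thread (this code is illustrative and is not from any of the posts): approximating δ by a narrow Gaussian and integrating each half gives 1/2 on each side.

```python
import math

def gaussian(x, sigma):
    """Nascent delta: a normalized Gaussian that narrows as sigma -> 0."""
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def integrate(f, a, b, n=100_000):
    """Simple midpoint-rule integration of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

sigma = 1e-3  # narrow compared to the integration interval
left = integrate(lambda x: gaussian(x, sigma), -1, 0)
right = integrate(lambda x: gaussian(x, sigma), 0, 1)
print(round(left, 3), round(right, 3))  # 0.5 0.5
```

This matches romsek’s corrected claim: the full integral is 1, and each one-sided integral is 1/2, at least for this symmetric representation of the delta.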
https://tex.stackexchange.com/questions/137567/getting-unicode-lower-block-series-to-appear
getting unicode lower block series to appear
In contexts outside of LaTeX, I've found rectangles of increasing size very useful. I can enter these in UTF-8 in my text editor, but despite trying every font-related fix I can find, I can't seem to get these glyphs to show up (I'm using xelatex). Can anyone provide a MWE in which these will appear?
• ' ' U+2002 En Space Nut
• '▁' U+2581 lower 1/8
• '▂' U+2582 lower 1/4
• '▃' U+2583 lower 3/8
• '▄' U+2584 lower 1/2
• '▅' U+2585 lower 5/8
• '▆' U+2586 lower 3/4
• '▇' U+2587 lower 7/8
• '█' U+2588 full block
I can get the ding symbols for horizontally increasing rectangles working ( \ding{120} ❘, \ding{121} ❙, and \ding{122} ❚ ), but I really want vertically increasing blocks.
• Are you sure that the font you use contains those characters? Which one are you using? It works here with DejaVu. – Marco Oct 12 '13 at 21:45
• I would guess that the font I'm using doesn't have those characters, but this is why I'm asking--they don't show up in the document I'm compiling, and I've tried several fonts now. The problem is that I'm new enough to latex that I don't know if I'm just missing something or if I keep choosing 'incomplete' fonts. Is there a list of fonts somewhere of those that would typically support a full set of unicode characters? Thanks! – bwv549 Oct 13 '13 at 6:30
• I assume the font doesn't have the glyphs. Did you try DejaVu? There is no font covering the entire Unicode range. Arial Unicode MS and Everson Mono have quite good coverage, though. To inspect the glyphs of the font use your favourite font viever (e.g. fontforge) or have a look at the anwers to Generating a table of glyphs with XeTeX. – Marco Oct 13 '13 at 10:35
• …and since you're dealing with fonts I highly recommend using LuaTeX or XeTeX. – Marco Oct 13 '13 at 10:36
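Along the lines Marco suggests in the comments, a minimal XeLaTeX sketch of the font-switching approach (this assumes the DejaVu Sans Mono font is installed; any font covering U+2581–U+2588 will do):

```latex
% Compile with xelatex; assumes DejaVu Sans Mono is installed on the system
\documentclass{article}
\usepackage{fontspec}
\newfontfamily\blockfont{DejaVu Sans Mono}
\begin{document}
Levels: {\blockfont ▁▂▃▄▅▆▇█}
\end{document}
```

If no suitable font is available, the pmboxdraw approach below draws the blocks with rules instead of relying on font glyphs.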
Package pmboxdraw defines box drawing characters using rules:
\documentclass{article}
\usepackage{pmboxdraw}
\begin{document}
\renewcommand*{\arraystretch}{1.2}
\begin{tabular}{lll}
U+2581 & \pmboxdrawuni{2581} & \verb|\pmboxdrawuni{2581}|\\
U+2582 & \pmboxdrawuni{2582} & \verb|\pmboxdrawuni{2582}|\\
U+2583 & \pmboxdrawuni{2583} & \verb|\pmboxdrawuni{2583}|\\
U+2584 & \textdnblock & \verb|\textdnblock|\\
U+2585 & \pmboxdrawuni{2585} & \verb|\pmboxdrawuni{2585}|\\
U+2586 & \pmboxdrawuni{2586} & \verb|\pmboxdrawuni{2586}|\\
U+2587 & \pmboxdrawuni{2587} & \verb|\pmboxdrawuni{2587}|\\
U+2588 & \textblock & \verb|\textblock|\\
\end{tabular}
\end{document}
Remarks:
• \usepackage[utf8]{inputenc} is already supported by package pmboxdraw. Thus the symbols can be input as Unicode characters.
• With XeTeX (or LuaTeX), you can make the Unicode input characters active, e.g.:
\catcode`\^^^^2581=\active
\def^^^^2581{\pmboxdrawuni{2581}}
https://blog.axelrobbe.nl/post/2020-07-09-nexus9k-vpc-with-fhrp-in-2-datacenters/
# Nexus 9k VPC (back to back) and FHRP setup in 2 data centers
This post describes the setup of VPCs on a data center interconnect and HSRP as the first hop redundancy protocol for the VLAN interfaces (SVIs). This configuration has been performed on a Nexus 93180YC-EX with software version 7.0(3)I7(8). The switches have the system default switchport command set, so all ports are switchports by default, but this does not matter for the setup.
## Background
This configuration is for a setup where the current network “core” is a Catalyst 6500 in VSS mode with a chassis in each data center. This has some benefits in terms of a single management plane, for example. The problem with this device was that the customer didn’t have enough line cards to provide redundancy to all connected devices. Two extra line cards per chassis were almost the same cost as 4 Nexus 93180YC-EX switches with some extra GBIC SFPs. The reason to go for this particular model is that it allows using both fiber and GBIC SFPs in the same slot. The size of the customer did not allow for more switches, so going with this flexible switch model combined with GBIC SFPs was the reasonable choice.
Each data center has 2 Nexus switches configured in a VPC domain. They connect to the other data center location via a LACP-VPC link. To provide high availability, each Nexus has all the SVIs of all the other Nexus switches, because the VLANs span both data centers. The SVIs in turn are configured with HSRP as first hop redundancy protocol. The HSRP traffic is isolated via an ACL so that each data center has the gateway local and traffic does not needlessly traverse the DCI.
The migration strategy will differ heavily per scenario and requirements. In this setup, I went with a L2 connection to the VSS to easily move SVIs in each data center and rollback when necessary to the VSS. After everything is connected to the Nexus switches and tested, the VSS was disconnected from the Nexus switches. The VSS remained operational by itself (without anything attached) for another week, just in case. Afterwards, the VSS was decommissioned and the fibers previously in use by the VSS were added to the Nexus LACP links as extra data links for the DCI (data center interconnect).
## VPC configuration
To perform a VPC configuration, you need to activate the vpc feature with feature vpc. Below is a code snippet of the first Nexus switch.
vrf context keepalive
interface Ethernet1/48
no switchport
vrf member keepalive
no shutdown
vpc domain 1
peer-switch
role priority 1
system-priority 8192
peer-keepalive destination 10.40.98.2 source 10.40.98.1 vrf keepalive
peer-gateway
ip arp synchronize
The IP addresses are configured in a separate VRF that has no routing entries. The IP addresses are not used elsewhere in the network.
The other switch in the VPC pair needs to have the destination and source addresses reversed. It’s also important to remember that the secondary location needs to be configured with a different VPC domain ID: Nexus VPC domains allow only two devices per domain, and reusing an ID across sites can cause problems establishing LACP port-channels and forwarding traffic (non-LACP port-channels will establish, but forwarding problems remain).
I configured the peer-switch and peer-gateway because I want both Nexus switches to forward L2 and L3 traffic by themselves.
## Configuring the DCI
To configure the DCI, in this example I use port Ethernet 1/46. After configuring the basics for the port, the rest can be applied to the port-channel. Below is a code snippet of the first Nexus switch.
conf t
interface Ethernet1/46
description To_DC2
switchport mode trunk
channel-group 999
no shutdown
interface port-channel999
description To_DC2
switchport mode trunk
spanning-tree port type
vpc 999
Later on, I added the additional links that were used by the VSS for extra bandwidth and redundancy. Those links have a similar config to Ethernet1/46.
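After bringing the port-channel up on both sides, it’s worth checking the bundle and VPC state. A few standard NX-OS show commands for this:

```
show port-channel summary interface port-channel 999
show vpc consistency-parameters interface port-channel 999
show vpc brief
```

The member ports should show as up and bundled, and the consistency parameters must match on both VPC peers, otherwise the VPC stays suspended.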
## HSRP configuration
So why use HSRP? First of all, for each VLAN that is routed by the Nexus switches, you want a single gateway IP: a VIP. Each Nexus switch has its own IP address assigned to the SVI, so a first hop redundancy protocol makes things a lot easier and transparent to the connected hosts. Second, HSRP is Cisco proprietary, and the design involves blocking HSRP traffic between the two data centers so that each data center has its own active gateway per VLAN. This drastically reduces the traffic load on the DCI and the potential latency for a routed packet. Because the FHRP is blocked between the data centers, it’s preferable to use HSRP here and keep VRRP available for other instances that need an FHRP, such as the firewall clusters. The firewalls wouldn’t be able to speak HSRP in the first place, since they’re not Cisco branded. Choosing HSRP for the Nexus platform therefore makes the most sense.
To configure HSRP, you need to activate the HSRP feature with feature hsrp. Below is a code snippet of the first Nexus switch.
interface Vlan100
description SOME_VLAN
no shutdown
no ip redirects
no ipv6 redirects
no ip ospf passive-interface
no ip arp gratuitous hsrp duplicate
hsrp version 2
hsrp 100
priority 120
ip 10.10.0.254
HSRP version 2 is configured here mainly so the HSRP group ID can match the VLAN ID; version 2 extends the group number range to 0-4095. It’s a Cisco recommendation to use a different group per VLAN or subnet.
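The second switch in the same data center gets an identical SVI apart from its own interface IP and a lower HSRP priority, so the switch above (priority 120) wins the active role. A sketch, assuming the default priority of 100 on the standby peer; the VIP stays the same on both:

```
interface Vlan100
  description SOME_VLAN
  no shutdown
  no ip redirects
  no ipv6 redirects
  hsrp version 2
  hsrp 100
    priority 100
    ip 10.10.0.254
```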
### HSRP ACL
In this design the idea is to get two HSRP primaries, one for each data center, so that traffic can be routed on-site instead of traversing the DCI. To accomplish this, HSRP traffic has to be limited to each location: set up an HSRP access list, apply it to the DCI port-channel, and prevent gratuitous ARP for HSRP on each VLAN SVI that is present in both locations.
NOTE: By default, the switches I was using did not allow for port access-groups due to insufficient TCAM memory allocation. Please refer to the troubleshooting section if you notice similar behavior - Errors related to TCAM entries.
ip access-list DENY_HSRP_IP
10 deny udp any 224.0.0.2/32 eq 1985
20 deny udp any 224.0.0.102/32 eq 1985
30 permit ip any any
interface port-channel999
ip port access-group DENY_HSRP_IP in
interface Vlan100
no ip arp gratuitous hsrp duplicate
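With the ACL and the gratuitous ARP suppression in place, each data center should elect its own active HSRP router. You can confirm this from a switch in each location:

```
show hsrp brief
```

Both sites should show a local switch as Active for group 100, rather than one site standing by for the other.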
### Spanning Tree
You might also want to consider the spanning-tree topology. Whatever you do, make sure the two VPC peers in a pair always have the same spanning-tree priority. They are seen as one and the same switch by everything that is attached, including the Nexus switches in the other data center.
Because two VLANs are stretched to a tertiary location, I wanted to control spanning tree a bit more precisely. I used the long pathcost method because of the high-speed links and defined the DC1 switches as the primary for all VLANs. However, setting that priority is not necessary when you don’t have to worry about another location or, as in my case, having the old VSS switches still attached for a short while.
NOTE: VLAN 3967 is as high as NX-OS lets you configure.
spanning-tree pathcost method long
spanning-tree vlan 1-3967 priority 4096
Furthermore, it’s advisable to enable a BPDU filter on the DCI and activate storm control to limit broadcast traffic.
interface port-channel999
spanning-tree bpdufilter enable
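The snippet above only covers the BPDU filter. For the storm control part, here is a sketch assuming a 1% broadcast threshold (tune the level to your own traffic profile):

```
interface port-channel999
  storm-control broadcast level 1.00
```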
## Troubleshooting
When configuring this, I ran into a few issues myself; I’ve provided the solutions below. The guides I used were written for the Nexus 7K and were therefore not one-to-one applicable to the Nexus 9K platform.
The logs can show a warning relating to the TCAM (ing-ifacl) memory state for the ingress PACL.
2020 Jun 10 11:38:14 Switch01 %ACLQOS-SLOT1-2-ACLQOS_FAILED: ACLQOS failure: TCAM region is not configured for feature PACL class IPv4 direction ingress. Please configure TCAM region Ingress PACL [ing-ifacl] and retry the command.
2020 Jun 10 11:38:14 Switch01 %ETHPORT-5-IF_SEQ_ERROR: Error ("TCAM region is not configured. Please configure TCAM region and retry the command") communicating with MTS_SAP_ACLMGR for opcode MTS_OPC_ETHPM_BUNDLE_MEMBER_BRINGUP (RID_PORT: Ethernet1/46)
2020 Jun 10 11:38:14 Switch01 %ETHPORT-5-IF_DOWN_PORT_CHANNEL_MEMBERS_DOWN: Interface port-channel999 is down (No operational members)
2020 Jun 10 11:38:14 Switch01 last message repeated 1 time
2020 Jun 10 11:38:14 Switch01 %ETHPORT-5-IF_DOWN_ERROR_DISABLED: Interface Ethernet1/46 is down (Error disabled. Reason:TCAM region is not configured. Please configure TCAM region and retry the command)
To view the current allocation, you can use show system internal access-list globals. Here you can see how much TCAM is allocated and in use; you might need to free up some memory elsewhere in order to allocate it to the ingress interface ACL (PACL).
Switch01(config)# show system internal access-list globals
slot 1
=======
Atomic Update : ENABLED
Default ACL : DENY
Bank Chaining : DISABLED
Fabric path DNL : DISABLED
NS Buffer Profile: Burst optimized
Min Buffer Profile: all
EOQ Class Stats: qos-group-0
NS MCQ3 Alias: qos-group-3
Ing PG Share: ENABLED
IPG in Shape: DISABLED
Classify ns-only : DISABLED
Ing PG Min: NOT-DISABLED
OQ Drops Type: both
OQ Stats Type: [c0]: q 0 both
[c1]: q 1 both
[c2]: q 2 both
[c3]: q 3 both
[c4]: q 4 both
[c5]: q 5 both
[c6]: q 6 both
[c7]: q 7 both
[c8]: q 8 both
[c9]: q 9 both
peak count type: port
counter 0 classes: 255
counter 1 classes: 0
OOBST Max records: 1000
DPP Aging Period: 5000
DPP Max Number of Packets: 120
AFD ETRAP Aging Period: 50
AFD ETRAP Byte Count: 1048555
AFD ETRAP Bandwidth Threshold: 500
ACL Inner Header Match : DISABLED
ACL Inner Header Match : DISABLED
LOU Threshold Value : 5
--------------------------------------------------------------------------------------
INSTANCE 0 TCAM Region Information:
--------------------------------------------------------------------------------------
Ingress:
--------
Region TID Base Size Width
--------------------------------------------------------------------------------------
NAT 13 0 0 1
Ingress PACL 1 0 0 1
Ingress VACL 2 0 0 1
Ingress RACL 3 0 1792 1
Ingress RBACL 4 0 0 1
Ingress L2 QOS 5 1792 256 1
Ingress L3/VLAN QOS 6 2048 512 1
Ingress SUP 7 2560 512 1
Ingress L2 SPAN ACL 8 3072 256 1
Ingress L3/VLAN SPAN ACL 9 3328 256 1
Ingress FSTAT 10 0 0 1
SPAN 12 3584 512 1
Ingress REDIRECT 14 0 0 1
Ingress NBM 30 0 0 1
-------------------------------------------------------------------------------------
Total configured size: 4096
Remaining free size: 0
Note: Ingress SUP region includes Redirect region
Egress:
--------
Region TID Base Size Width
--------------------------------------------------------------------------------------
Egress VACL 15 0 0 1
Egress RACL 16 0 1792 1
Egress SUP 18 1792 256 1
Egress L2 QOS 19 0 0 1
Egress L3/VLAN QOS 20 0 0 1
-------------------------------------------------------------------------------------
Total configured size: 2048
Remaining free size: 0
--------------------------------------------------------------------------------------
INSTANCE 1 TCAM Region Information:
--------------------------------------------------------------------------------------
Ingress:
--------
Region TID Base Size Width
--------------------------------------------------------------------------------------
NAT 13 0 0 1
Ingress PACL 1 0 0 1
Ingress VACL 2 0 0 1
Ingress RACL 3 0 1792 1
Ingress RBACL 4 0 0 1
Ingress L2 QOS 5 1792 256 1
Ingress L3/VLAN QOS 6 2048 512 1
Ingress SUP 7 2560 512 1
Ingress L2 SPAN ACL 8 3072 256 1
Ingress L3/VLAN SPAN ACL 9 3328 256 1
Ingress FSTAT 10 0 0 1
SPAN 12 3584 512 1
Ingress REDIRECT 14 0 0 1
Ingress NBM 30 0 0 1
-------------------------------------------------------------------------------------
Total configured size: 4096
Remaining free size: 0
Note: Ingress SUP region includes Redirect region
Egress:
--------
Region TID Base Size Width
--------------------------------------------------------------------------------------
Egress VACL 15 0 0 1
Egress RACL 16 0 1792 1
Egress SUP 18 1792 256 1
Egress L2 QOS 19 0 0 1
Egress L3/VLAN QOS 20 0 0 1
-------------------------------------------------------------------------------------
Total configured size: 2048
Remaining free size: 0
As you can see, I had 0 remaining free space, so I removed some memory from the RACL allocation and assigned it to the ingress PACL. This works in increments of 256.
conf t
hardware access-list tcam region ing-racl 1536
hardware access-list tcam region ing-ifacl 256
end
Reload the device and wait for it to come back. Don’t forget to save your config beforehand though!
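In other words, something along these lines; the TCAM re-carving only takes effect after the reload:

```
copy running-config startup-config
reload
```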
After the reboot, you can verify the reallocated memory:
Switch01# sh hardware access-list tcam region
NAT ACL[nat] size = 0
Ingress PACL [ing-ifacl] size = 256
VACL [vacl] size = 0
Ingress RACL [ing-racl] size = 1536
Ingress RBACL [ing-rbacl] size = 0
Ingress L2 QOS [ing-l2-qos] size = 256
Ingress L3/VLAN QOS [ing-l3-vlan-qos] size = 512
Ingress SUP [ing-sup] size = 512
Ingress L2 SPAN filter [ing-l2-span-filter] size = 256
Ingress L3 SPAN filter [ing-l3-span-filter] size = 256
Ingress FSTAT [ing-fstat] size = 0
span [span] size = 512
Egress RACL [egr-racl] size = 1792
Egress SUP [egr-sup] size = 256
Ingress Redirect [ing-redirect] size = 0
Egress L2 QOS [egr-l2-qos] size = 0
Egress L3/VLAN QOS [egr-l3-vlan-qos] size = 0
Ingress NBM [ing-nbm] size = 0