url stringlengths 14 2.42k | text stringlengths 100 1.02M | date stringlengths 19 19 | metadata stringlengths 1.06k 1.1k |
|---|---|---|---|
https://gregorpurdy.com/blog/the-trouble-with-triples-and-other-tuples | ##### Years
Gregor Tech leader, CEO advisor. B2C, B2B and healthcare. Improving innovation and execution. Strengthening leadership teams.
# The trouble with triples (and other tuples)
Ever since I first studied discrete mathematics and read Paul R. Halmos’ Naive Set Theory, I’ve been bothered by the conventional definition of tuples in terms of sets.
The definition usually goes something like this: An ordered pair (that is, a 2-tuple) $$\langle{}a, b\rangle{}$$ is the set $$\{ \{a\}, \{a, b\} \}$$. It is easy to see that this definition results in the desirable property that $$\langle{}a, b\rangle{} = \langle{}c, d\rangle{}$$ if and only if $$a = c$$ and $$b = d$$.
However, this leaves us with a strange result when we want to represent the ordered pair $$\langle{}a, a\rangle{}$$. Plugging $$a$$ for $$b$$ above, we have:
\begin{align} \langle{}a, a\rangle{} &= \{ \{a\}, \{a, a\} \} \\ &= \{ \{a\}, \{a\} \} \\ &= \{ \{a\} \} \end{align}
This gets ugly when we ask about the cardinality of the ordered pair $$\langle{}a, b\rangle{}$$. If $$a \ne b$$, then the cardinality is 2, which fits nicely with the object being an ordered pair. If, however, $$a = b$$, the cardinality is 1! It is just not natural for an object supposedly containing two elements to have a cardinality of 1.
##### Aside
Whenever one thing is modeled by another, you must contend with the fidelity of the modeling. Most instances of modeling involve ignoring certain aspects of what is being modeled and reflecting others in the model. The fidelity of the model is the degree to which properties and operations on the model correspond to properties and operations on the subject. Therefore, the conventional ordered pair definition is perfectly fine if fidelity with respect to cardinality is not important, but I consider cardinality to be a rather significant property of a sequential structure. I would expect a model of a sequence to have a natural preservation of cardinality (length).
Extending the above ordered pair (2-tuple) construction to triples (3-tuples) yields:
\begin{align} \langle{}a, b, c\rangle{} &= \langle{}a, \langle{}b, c\rangle{}\rangle{} \\ &= \{ \{a\}, \{a, \langle{}b, c\rangle{}\rangle\} \} \\ &= \{ \{a\}, \{a, \{ \{ b \}, \{ b, c \} \} \} \} \\ \end{align}
Plugging $$a$$ for both $$b$$ and $$c$$ yields:
\begin{align} \langle{} a, a, a \rangle{} &= \langle{} a, \langle{} a, a \rangle{} \rangle{} \\ &= \{ \{a\}, \{a, \langle{} a, a \rangle{} \rangle{} \} \} \\ &= \{ \{a\}, \{a, \{ \{ a \}, \{ a, a \} \} \} \} \\ &= \{ \{a\}, \{a, \{ \{ a \}, \{ a \} \} \} \} \\ &= \{ \{a\}, \{a, \{ \{ a \} \} \} \} \\ &= \langle{} a, \{ \{ a \} \} \rangle{} \end{align}
Not only do we have a cardinality of 2 for a 3-tuple, but we also have a perfectly valid interpretation of a single underlying set as either an ordered pair ($$\langle{} a, \{ \{ a \} \} \rangle{}$$) or as a triple ($$\langle{} a, a, a \rangle{}$$)! This latter complaint isn’t really surprising given the way 3-tuples are defined in terms of 2-tuples in the first place. But, the implication is that you cannot write an is-triple predictate, because you can’t tell the difference between an ordered pair with something that could be interepreted as another tuple as its second element and an ordered pair with something that should be interpreted as another tuple as its second element.
We’ve seen how the conventional modeling of tuples as sets fails to preserve an important property of sequences (cardinality of the resulting sets doesn’t correspond to size of the tuples), and how the convention leads to ambiguous interpretations. Now, lets have a look at the most important operation on a tuple: selection. If $$P$$ is an ordered pair, then we will refer to its first element as $$P_0$$ and its second element as $$P_1$$. How shall we express $$P_i$$ in terms of $$P$$? An easy answer is:
\begin{align} P_0 &= a: \exists b : P = \langle a, b \rangle \\ P_1 &= b: \exists a : P = \langle a, b \rangle \end{align}
But, you can see that we have to know in advance that we are treating $$P$$ as an ordered pair. We are not treating it as a generic tuple of some size $$n$$ to be indexed by some integer $$0 \leq i \lt n$$. The selection formulas for the first two elements of a 3-tuple would differ from the formulas just given. And, with the example above that demonstrated ambiguity, applying the formula just shown for $$P_1$$ would yield $$\{ \{ a \} \}$$ instead of just $$a$$!
The net result of the above is that while the conventional representation of tuples can work fine for some applications, it has some important limitations, including the inability to use them in mixed environments (you cannot have a set $$S$$ of tuples and partition it into sets of various tuple sizes, because you cannot in general tell the difference between tuples of of varying sizes). For some applications, you will need a more robust collection construct to model the sequential concept.
Many people treat set theory as if it were The Foundation of Mathematics, with some special claim to that title by virtue of its concept of “set” being an extremely stripped down modeling of a collective concept. I, however, do not share that view (as is probably evident by now). I think that there are reasons to look elsewhere for foundational constructs, and further that there is no reason to require there to be a One True Collection (as sets are often treated).
While there may be a great generic set-based way to model a generic sequential collection that has fidelity over broad application areas, I have not seen it. I take the tuple troubles pointed out above to be evidence that obtaining generic temporality (I prefer the term “temporal” to the term “sequential” for reasons I’ll give in a moment) from a non-temporal construct is tricky. In fact, I referred to it as “conventional” above because it connotes not only “common” and “normal”, but also “agreed” (the connotation of “unoriginal” doesn’t really apply in my opinion, because the construction is clever, if problematic). It is the “agreed” meaning that actually causes the problems. Tuples of various sorts are not self-identifying as such. Information from outside (a convention) is required to determine that some sets are to be treated as plain old sets, and some are to be treated as ordered pairs, and so on.
I view temporality as a sufficiently intuitive concept to be an appropriate property of a primitive collective construct. In fact, since temporality is a part of our everyday experience, and since communication takes place on a temporal background (order of presentation), I view temporality as even more basic than atemporality. There are two reasons I use the term “temporal” rather than “sequential” to refer to this ordering notion. The first is that our time sense is a very basic example of ordering, and the second is that the term “sequence” has some specific additional connotations that I want to avoid at this point in the development of alternative conceptions of structures.
In my conceptualization, there is a family of four primitive collective constructs based on two boolean properties: temporality and plurality. A set is an atemporal (unordered), aplural (containing at most one of each kind) collection. A bag is an atemporal, plural collection. A sequence is a temporal, plural collection, and an order is a temporal, aplural collection.
Another way of looking at plurality is that some collections (sets and orders) deal with species (e.g., “the electron”), and some (bags and sequences) deal with specimens (e.g., “that electron”).
With this broader palette of primitive constructs, it is much easier to model a broader set of constructs without introducing strange representational artifacts. But, if forced to choose only one to use, I would use sequences, since it is easier to ignore its temporality and/or plurality properties to model any of the other three constructs. And, the sequence corresponds directly to the way we use language to describe collections. For example, we say: “the set containing $$a$$, $$b$$, and $$c$$”, which has a definite order of presentation, even though we know we are supposed to treat the order as insignificant because the words refer to a set. The same is true of the shorthand $$\{ a, b, c \}$$. And, after all, ignoring some properties of objects is a normal step in the modeling process (we do it quite readily in counting arbitrary objects).
##### Update 2007-05-28
A recent article at Good Math Bad Math dealt with the Axiom of Pairing, and some of the comments on that article touch on the issues treated here, although not as completely. | 2021-01-16 09:20:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 4, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9955865144729614, "perplexity": 558.5162755744527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703505861.1/warc/CC-MAIN-20210116074510-20210116104510-00504.warc.gz"} |
http://gmatclub.com/forum/financing-loans-127153.html?sort_by_oldest=true | Find all School-related info fast with the new School-Specific MBA Forum
It is currently 25 Jun 2016, 11:30
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# Financing/Loans
Author Message
Intern
Joined: 21 Mar 2011
Posts: 3
Followers: 0
Kudos [?]: 0 [0], given: 0
### Show Tags
06 Feb 2012, 12:56
Hi, I am recently admitted to a few U.S. programs. Now that this has finally become real, I'm starting to work out financing, leaving my job, summer travel, and moving to a new city.
1) Are people mostly using federal (Direct & GradPLUS) loans? Has anyone tried for lower rates with private loans? If so, from where?
2) When do people typically receive the loan funds? I assume right when tuition is due? Just wondering how the cash flow works. (That is, how long I will need to rely on savings after leaving my job.)
Senior Manager
Joined: 20 Jun 2010
Posts: 488
Concentration: Strategy, Economics
Schools: Chicago (Booth) - Class of 2014
Followers: 14
Kudos [?]: 78 [0], given: 27
### Show Tags
17 Feb 2012, 11:23
kwissmiss wrote:
1) Are people mostly using federal (Direct & GradPLUS) loans? Has anyone tried for lower rates with private loans? If so, from where?
I'm trying to figure this out myself. I'm either going to go 100% private, or do 50% federal + 50% private. Private loans are incredibly cheap right now, especially if you have a co-signer. The only downside is that the rate is variable.
kwissmiss wrote:
2) When do people typically receive the loan funds? I assume right when tuition is due? Just wondering how the cash flow works. (That is, how long I will need to rely on savings after leaving my job.)
I'm no expert, but I think that with the federal loans, you get a chunk of $at the start of each quarter or semester. So if you qualify for$80K/year, you'd get $40k at the start of the first semester, and$40k at the start of the second semester. I believe that this money comes to you within a short amount of time before tuition is due.
_________________
Intern
Joined: 21 Mar 2011
Posts: 3
Followers: 0
Kudos [?]: 0 [0], given: 0
### Show Tags
17 Feb 2012, 11:30
Do you have an institution in mind for the private loans?
Senior Manager
Joined: 20 Jun 2010
Posts: 488
Concentration: Strategy, Economics
Schools: Chicago (Booth) - Class of 2014
Followers: 14
Kudos [?]: 78 [0], given: 27
### Show Tags
17 Feb 2012, 13:03
kwissmiss wrote:
Do you have an institution in mind for the private loans?
I'm going to start shopping around this spring at some of the big (citi, jpm, bofa) and regional banks. I think you can also get rate quotes from some of the larger lenders online.
_________________
Senior Manager
Status: schools I listed were for the evening programs, not FT
Joined: 16 Aug 2011
Posts: 389
Location: United States (VA)
GMAT 1: 640 Q47 V32
GMAT 2: 640 Q43 V34
GMAT 3: 660 Q43 V38
GPA: 3.1
WE: Research (Other)
Followers: 3
Kudos [?]: 46 [0], given: 50
### Show Tags
02 Mar 2012, 22:50
kwissmiss wrote:
Hi, I am recently admitted to a few U.S. programs. Now that this has finally become real, I'm starting to work out financing, leaving my job, summer travel, and moving to a new city.
1) Are people mostly using federal (Direct & GradPLUS) loans? Has anyone tried for lower rates with private loans? If so, from where?
2) When do people typically receive the loan funds? I assume right when tuition is due? Just wondering how the cash flow works. (That is, how long I will need to rely on savings after leaving my job.)
1. I will only do the Stafford and GradPLUS. Private loans may have cheaper rates but do not have the flexibility options to the extent that the federal loans have. Yet, all loans (federal and private) are non-dischargeable in bankruptcy if god forbid, any one of us had to file for it.
2. With federal loans, the funds are usually disbursed a couple weeks before class. You then can request a refund to give you back the living allowance money if that's what you elected to do. At the end of the day, these loan disbursements differ from one school to the next.
Re: Financing/Loans [#permalink] 02 Mar 2012, 22:50
Display posts from previous: Sort by | 2016-06-25 18:30:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2797354757785797, "perplexity": 7739.413603678852}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393463.1/warc/CC-MAIN-20160624154953-00154-ip-10-164-35-72.ec2.internal.warc.gz"} |
https://chemistry.stackexchange.com/questions/114536/can-a-cyclic-amine-form-an-amide | # Can a cyclic amine form an amide?
This is a question from the Cambridge International Examinations October/November 2017 (pdf from papers.gceguide.com, pdf via the Wayback Machine).
I need to understand why the secondary amine in the serotonin molecule does not undergo condensation reaction with $$\ce{CH3COCl}$$.
Because, as far as I know, there should be an amide formation for this reaction.
Here's a picture of serotonin:
I must be lacking some information making that $$\ce{NH}$$ unsuitable for this reaction. And I know that the other amine and the phenol reacts with $$\ce{CH3COCl}$$.
• Secondary amine group shown here is quite sterically hindered..Approach at Burgi-Dunitz trajectory would most probably be difficult Apr 29 '19 at 9:16
• Did you mean lone pair of electron in N is less available for carbocation to be attacked? @YUSUFHASAN Apr 29 '19 at 9:27
• No..I mean what you have in mind is a nucleophilic attack by NH on CH3COCl as the first step, right? So that would be retarded by steric hindrance.. Apr 29 '19 at 9:29
• Nothing to do with sterics. The lone pair on the nitrogen is delocalised into the aromatic system so is not available for nucleophilic attack. If you want to functionalise the indole-NH you generally have to formally deprotonate with strong base. Otherwise it reacts as an enamine through the 3-position. Apr 29 '19 at 9:53
• @Amar30657 You need to read more about aromatic systems. en.wikipedia.org/wiki/Indole Apr 29 '19 at 10:05
The heterocycle in this question is indole and is aromatic. This means that the N lone pair is delocalised and not readily available for nucleophilic attack. Think of it as similar in reactivity to a secondary amide nitrogen RCONHR. Generally you need to formally deprotonate to functionalise, though there are some interesting techniques using carbonyl azoles catalysed by DBU[1] and others using aldehyde and alcohol substrates.[2] Note that 3-unsubstituted indoles react with acyl halides by F-C acylation at the 3 position.[3]
### References:
1. Heller, S. T.; Schultz, E. E.; Sarpong, R. Chemoselective N-Acylation of Indoles and Oxazolidinones with Carbonylazoles. Angew. Chem. Int. Ed. 2012, 51 (33), 8304–8308 DOI: 10.1002/anie.201203976.
2. Maki, B. E.; Scheidt, K. A. Single-Flask Synthesis ofN-Acylated Indoles by Catalytic Dehydrogenative Coupling with Primary Alcohols. Org. Lett. 2009, 11 (7), 1651–1654 DOI: 10.1021/ol900306v. PMID: 19320508 (with free text available).
3. Okauchi, T.; Itonaga, M.; Minami, T.; Owa, T.; Kitoh, K.; Yoshino, H. A General Method for Acylation of Indoles at the 3-Position with Acyl Chlorides in the Presence of Dialkylaluminum Chloride. Org. Lett. 2000, 2 (10), 1485–1487 DOI: 10.1021/ol005841p.
If you consider the lone pair on that N-atom and apply Hückel's (4n+2)π-e rule to check aromaticity , it satisfies all the conditions. So in order to achieve aromatic-stabilisation the nitrogen's valency is no more available for condensation!
• Hückel's rules should not be applied to this compound for two facts: 1. It is not a mono-cycle. 2. It is a heterocycle. It would be by far superior, at a more fundamental level, and even simpler to just state that the lone pair is delocalised. One could even show this with resonance structures, but that might be overkill. Apr 30 '19 at 12:51
• Why can't we apply Hückel's rule? I didn't know that; would you please elaborate?
– user73099
Apr 30 '19 at 13:39
• Apr 30 '19 at 13:58
• Tl;dr: there are three criteria for which the rule has been observed. 1. monocyclic planar, 2. trigonal/diagonal hybridised atoms 3. (4n+2) pi electrons, n should not exceed 5. The calculations were done for $\ce{(CH)_m}$, so they strictly only hold for these compounds. The third criteria has been used so often (and most of the times wrong) and is incredibly popular (also wrong), that it is perceived as Hückel's rule (also wrong). It is painful to see that it is taught like this, without teaching the underlying theory, which is possibly simpler than learning about the exceptions. Apr 30 '19 at 14:05 | 2022-01-19 01:39:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6797438859939575, "perplexity": 4334.199024764473}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301217.83/warc/CC-MAIN-20220119003144-20220119033144-00481.warc.gz"} |
https://cs.stackexchange.com/questions/104897/does-an-optimal-path-imply-the-heuristic-is-admissible | # Does an optimal path imply the heuristic is admissible?
If we are given an A$$^*$$ path search with some heuristic $$h$$ that yields an optimal path, without knowing that the heuristic is admissible beforehand, would this imply that $$h$$ is admissible?
• Have you tried constructing a counterexample? Have you tried proving the conclusion? Where did you get stuck? – John L. Feb 27 '19 at 2:39
• I'm thinking to assume the opposite, that an optimal path is not admissible. Then, h(n) > h*(n). That is, h(n) has a greater cost than the cost of an optimal heuristic and so h(n) is not optimal, a contradiction. Would this be sufficient. – Paradox Feb 27 '19 at 3:13
No.
Let us consider an extreme example. Let graph $$G$$ contain only two nodes, the starting node $$s$$ and the destination node $$t$$. The distance of edge $$(s,t)$$ is 1.
We have a heuristic function $$h$$, $$h(s)=2$$ and $$h(t)=0$$. $$h$$ is not admissible since $$h(s)>1$$.
However, using $$A^*$$ search algorithm, we will end up with the path $$s, t$$, the unique and, hence, optimal path.
Exercise. Construct a counterexample where there are more than one path.
• Is it not admissible because the heuristic over-estimated the cost of getting to t? – Paradox Feb 28 '19 at 4:16
• Correct, since the definition of admissible heuristic is that it never overestimates the cost of reaching the goal. – John L. Feb 28 '19 at 5:13
The solution provided by Apass Jack is absolutely correct but beyond simple examples proving that the answer is "definitely no", let me add another consideration (which actually provides a solution to the exercise suggested by Apass Jack):
A heuristic which is inadmissible does not necessarily mean that paths found by A$$^*$$ with that heuristic should be sub-optimal
The reason is that heuristics serve for the purpose of ranking nodes. Admissibility simply implies that if two nodes are wrongly ranked (so that one node gets expanded before the correct one) this will have a limited effect in the number of subsequent expansions as A$$^*$$ will never expand nodes beyond the $$f$$-layer that contains the optimal solution (provided that the heuristic is admissible, which is the hypothesis in this paragraph).
Consider now the case where one heuristic is inadmissible but always (always!) ranks nodes perfectly. Then, certainly nodes will get sorted in the open list in their optimal order and the optimal solution is necessarily found.
To see this consider a perfect heuristic function, $$h^*(n)$$ and now and inadmissible heuristic function which adds a large constant $$M$$ to $$h^*(n)$$, $$h(n)=h^*(n)+M$$. Well, $$h(n)$$ is clearly inadmissible for values of $$M$$ arbitrarily large, but if one node $$n$$ has an $$h^*$$-value strictly less than another $$n'$$, then it will be perfectly ranked by the perfect heuristic function, $$h^*$$, so does $$h()$$ because adding the same constant $$M$$ does not alter the ordering.
So far, the solution to the exercise suggested in the other reply is simply the following: take any graph $$G(V, E)$$ and compute $$h^*(n), \forall n\in V$$. Construct now a new heuristic function $$h(n)=h^*(n)+M$$ for all $$n\in V$$ with a constant $$M$$ arbitrarily large, run A$$^*$$ ... and smile! :)
Hope this helps, | 2020-09-21 06:55:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 35, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8533263206481934, "perplexity": 558.2657107086706}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198942.13/warc/CC-MAIN-20200921050331-20200921080331-00349.warc.gz"} |
http://math.stackexchange.com/tags/normal-distribution/new | # Tag Info
## New answers tagged normal-distribution
1
Note: The following is not an answer, but merely some thoughts which might or might not be helpful to you. First note that you confused(?) your inequality signs. I think you want $$\gamma_{n}\left(\left\{ x\in\mathbb{R}^{n}\,\mid\,\left\Vert x\right\Vert ^{2}\geq\frac{n}{1-\varepsilon}\right\} \right){\color{red}\leq}e^{-\varepsilon n/4}$$ and $$... 0 Question: Let X_i \sim N(0,\sigma_i^2). I would like to compute E[|X_1-X_2|^k|X_2|^k] Bounds shmounds¡? For given values of integer k, it is possible to obtain exact closed-form solutions. In particular, if X_1 and X_2 are independent (which appears to be your intention), the joint pdf of (X_1,X_2), say f(x_1,x_2), is: Then, ... 0 The hard way to answer this question is to compute all terms of the distribution of N up to and including P(N=25), add those probabilities and subtract the sum from 1. An easier way is to consider what would happen if you simply committed to rolling that same die 25 times in succession, then compare the outcomes to the outcomes of the original ... 0 Yes there is something similar to the summation, for the product. If all your random variables are independent, then you just apply the log function$$E[\log\prod_i X_i]=E[\sum_i \log(X_i)]=\sum_i E[\log X_i]$$0 This is not a complete answer to your question, but it might help solving it. The minimum number of throws required in order to get over 100000 is 11: Either you get 3^{11}=177147 Or you get 3^{10}\cdot2=118098 So you basically need to calculate 1-\sum\limits_{k=11}^{25}P(N=k). You can calculate ... 1 I don't know where you could have seen that claimed, but it doesn't make sense unless x is a fixed constant. If x is a random variable, it's not clear what y\sim N(0,x^2) would mean. If x is constant, then one can say that if x^{-1}y\sim N(0,1) then y\sim N(0,x^2). It is also true that if the conditional distribution of one random variable ... 0 I've repeated Higuchi calculations and got precisely the same answer as he promised D=-1.5143. Pay close attention to the number of times he divides the series length Lmk by k. That was my mistake for the first time, when I've lost the final averaging between the set of k series and we moved from Lkm to <Lkm>. Here the pointy brackets stands for ... 1 Let X \sim N(0,1) and Y \sim N(0,1). Then, as noted, the pdf of Z = X Y is f(z): The mgf of Z is E[e^{t Z}]: where I am using the Expect function from the mathStatica package for Mathematica to automate. Let (Z_1, \dots, Z_n) denote a random sample of size n drawn on Z, and let Q = \sum_{i=1}^nZ_i denote the sample sum. Then, by ... 0 If (x_i)_{1\leqslant i\leqslant n} is a sequence of real numbers and X is a random variable with P(X=x_i)=\pi_i, then Jensen's inequality with the convex function x\mapsto x^2 shows that$$ \sum_{i=1}^n \pi_ix_i^2=\mathrm{E}[X^2]\geqslant \mathrm{E}[X]^2=\Big(\sum_{i=1}^n \pi_ix_i\Big)^2. $$More generally, Jensen's inequality shows that$$ ...
0
Hint, use Cauchy-Schwarz inequality accordingly: $$\sum \pi_{i}(f(x,\phi_{i})^2\sum \pi_{i}\ge \left(\sum \pi_{i}f(x;\phi_{i})\right)^2$$
0
Hint $g_i:=point(i)$ $$int(((x-c(i))^2*norm(diff(f(x, -0.04, sqrt(0.11))),p),x,g_i,g_{i+1})$$
0
The joint density given $\sigma_1^2, \sigma_2^2$, of observing the sample $\boldsymbol x, \boldsymbol y$ is simply $$f(\boldsymbol x, \boldsymbol y \mid \sigma_1^2, \sigma_2^2) = \prod_{i=1}^n \frac{e^{-x_i^2/(2\sigma_1^2)}}{\sqrt{2\pi}\sigma_1} \prod_{i=1}^m \frac{e^{-y_i^2/(2\sigma_2^2)}}{\sqrt{2\pi}\sigma_2}.$$ Thus a likelihood function for $\sigma_1, ... 1 You want a function$h$such that for all$\theta\in\mathbb R$, the following integral is zero: $$\int_{-\infty}^\infty \exp\left(-\frac{n}{2}(t-\theta)^2\right)h(t) \, dt.$$ This is $$\int_{-\infty}^\infty \exp\left(-\frac n 2 \theta^2\right) \exp(nt\theta)\exp\left(-\frac n 2 t^2\right)h(t) \, dt.$$ The first factor does not depend on$t$so it can be ... 0 i think the answer is $$\frac{(-1)^{n+1} 2^{\frac{1}{2}-\frac{3 n}{2}} \left((-1)^n+1\right) \sigma ^{3/2} \left(-\frac{1}{\sigma ^2}\right)^{n/2} \Gamma (n+1) \, _2F_1\left(\frac{n}{2}+\frac{1}{2},\frac{n}{2}+1;\frac{n}{2};-\frac{1}{2 \sigma ^2}\right)}{\sqrt{\pi } n! \Gamma \left(\frac{n}{2}\right)}$$ 0 The z-score is the standardisation that you should plot. Full-stop. (And you have the correct formula for the z-score.) The z-score might usually range from -3 to +3 and you can then plot both z-score distributions on the same graph. The z-score distributions plot with their centres at z=0. You mention you want to plot on a 0-10 scale. What do you mean ... 0 Robustness is sort of a subjective matter. In a nutshell, if you produce an estimate with a robust estimator, and then you add a very extreme data point and re-estimate, you shouldn't produce an estimate that is too different from your first estimate. What does "extreme" mean? What does "too different" mean? This is precisely where the ambiguity comes in. ... 0 I recommend first reading the Wikipedia article for robust statistics. After doing so, what do you intuitively conclude about the estimators you described above? Would you say they are robust or not? Could you furnish an example of an estimator that would be more robust than the MLEs? 1 Let$Z$be a standard normal variable. Since: $$\frac{1-\Phi(x)}{\phi(x)}=\frac{1}{\mathbb{E}[Z\,|\,Z>x]}=e^{x^2/2}\int_{x}^{+\infty}e^{-z^2/2}\,dz=\int_{0}^{+\infty}e^{-zx}e^{-z^2/2}\,dz$$ we have that: $$\mathbb{E}[Y]=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}e^{-(x-\mu)^2/2}\int_{0}^{+\infty}e^{-zx}e^{-z^2/2}\,dz\,dx\tag{1}$$ so: ... 1 A good intuition would be to consider the shape of the level sets of the likelihood. That is: the level sets of the density of a Gaussian correspond exactly to the level sets of the$\ell_2$norm. The$\ell_2$norm is induced by an inner product, so generally all your linear algebra and geometry intuitions usually transfer pretty well into the Gaussian case. ... 0 $$\int_{0.30}^{\infty}P(x)\,dx=\int_{-\infty}^{\infty}P(x)\,dx-\int_{-\infty}^{0.30}P(x)\,dx$$ The first integral is equal to$1$since$P(x)$is a probability density function. The second one is not possible to evaluate with elementary functions. However using the function $$\operatorname{erf}(x)=\frac{2}{\sqrt{\pi}}\int_0^x\exp(-t^2)\,dt,$$ the ... -1 You sort of got it right, but unintentionally, a distribution is usually refers to cumulative distribution function (cdf) or the probability density function (pdf) or the probability mass function (pmf). You use pmfs for discrete cases and pdfs for continuous cases. Then say X is a r.v. If it's discrete then the cdf of X at x is the sum of the pmf from ... 1 If you must do this analytically then you need a convolution. I would not. If your two normal distributions$X_1\sim \mathcal N(a,b^2)$and$X_2\sim \mathcal N(c,d^2)$are independent then an easier approach is to say$fX_1\sim \mathcal N\left(fa,f^2 b^2\right)$and$(1-f)X_2\sim \mathcal N\left((1-f)c,(1-f)^2d^2\right)$so $$X=fX_1+(1-f)X_2 \sim \mathcal ... 0 Abramowitz and Stegun give a number of approximations. One with only five constants is accurate to 1.5 \cdot 10^{-7} Numerical Recipes page 221 has an expansion that quotes slightly better errors. Neither explains where they come from, but you could look at the references. 2 The density of (x,y) depends on x^2+y^2 only hence its distribution is rotationally invariant and the argument \theta of the point (x,y) is uniformly distributed on (-\pi,\pi). The events of interest are [x\gt0]=[-\pi/2\lt\theta\lt\pi/2] and [x\gt y,x\gt0]=[-\pi/2\lt\theta\lt\pi/4], with respective probabilities 1/2 and 3/8, hence P(x\gt ... 3 The joint distribution of x and y is circularly symmetric around the origin of the x,y-plane. The set of points A where x > y consists of all points below the line x = y; in polar coordinates, it is all points (r,\theta) such that r > 0 and -\frac34\pi < \theta < \frac14\pi. The probability distribution integrated over this ... 1 Have you taken calculus? It is because$$\lim_{t, x \to -\infty}\int\limits_{t}^{x}f(s)\text{ d}s = 0$$where f is the equation of the graph of the standard normal distribution. 0 Assume without loss of generality that (\mu_n) is decreasing. The simplest approach could be to divide any null linear combination \sum\limits_na_nf_n=0 by the gaussian density with parameters (0,\sigma^2), yielding$$\sum_na_n\exp(x\mu_n-\mu_n^2/2)=0.$$When$x\to\infty\$, every exponential term except the first one is negligible with respect to the ...
Top 50 recent answers are included | 2015-01-28 13:01:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9640633463859558, "perplexity": 1739.3968796712693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122102237.39/warc/CC-MAIN-20150124175502-00172-ip-10-180-212-252.ec2.internal.warc.gz"} |
https://www.gradesaver.com/textbooks/science/chemistry/chemistry-and-chemical-reactivity-9th-edition/chapter-13-solutions-and-their-behavior-study-questions-page-505d/66 | ## Chemistry and Chemical Reactivity (9th Edition)
$C_{10}H_{12}O_2$
B.p. elevation: $61.82-61.70=0.12^{\circ}C$ Molality: $\Delta T_b=K_b,m\rightarrow m=0.12^{\circ}C\div3.63^{\circ}C/m=0.033\ mol/kg$ Number of mols: $0.033\ mol/kg\times(25/1000)kg=8.26\times10^{-4}\ mol$ Molar mass: $0.135\ g\div 8.26\times10^{-4}\ mol=163.35\ g/mol$ Molar mass of empirical formula: $82.10\ g/mol$ Ratio of the molar masses: $163.35\div 82.10=1.99\approx 2$ Molecular formula: $C_{10}H_{12}O_2$ | 2021-04-16 13:52:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9656892418861389, "perplexity": 8638.886704945653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038066981.0/warc/CC-MAIN-20210416130611-20210416160611-00030.warc.gz"} |
https://www.athiemann.net/2016/06/26/interview-question-wine-bottles.html | A popular puzzle interview question goes like this: “Sir Blake has 200 bottles of wine in his wine cellar. 99% of those bottles are red wine and 1% of them are white wine. How many bottles of red and/or white wine does Sir Blake need to drink to reduce the percentage of red wine to 98%?”
The intuition misleads us thinking that the number of bottles to drink must be rather small, but this is not the case. In the original state we have 198 bottles of red wine and 2 bottles of white wine. Now if we want to reduce the share of red wine to 98% we will need a 2% share of white wine. Because the number of white wine bottles is a natural number $b_w \in \{0, 1, 2\}$ the easiest way to come up with a solution is to figure out what the corresponding amount of red wine bottles would be to get the ratio of 98% to 2%:
White wine Red wine 0 - 1 49 2 98
Thus he can either drink $198 - 98 = 100$ bottles of red wine and no white wine or $198 - 49 = 149$ bottles of red wine and $2 - 1 = 1$ bottle of white wine.
You could also fire up the Z3 Theorem Prover:
(declare-const rt Int)
(declare-const wt Int)
(assert (<= rt 198))
(assert (>= rt 0))
(assert (<= wt 2))
(assert (>= wt 0))
(assert (= 0.98 (/ (- 198.0 (to_real rt)) (- 200.0 (+ (to_real rt) (to_real wt))))))
(assert (= 0.02 (/ (- 2.0 (to_real wt)) (- 200.0 (+ (to_real rt) (to_real wt))))))
(check-sat)
(get-model)
(minimize (+ rt wt))
(check-sat)
(get-model)
Outputs:
sat
(model
(define-fun wt () Int
1)
(define-fun rt () Int
149)
)
sat
(model
(define-fun wt () Int
0)
(define-fun rt () Int
100)
)
Looking forward to your Feedback on HackerNews. | 2018-12-19 13:47:04 | {"extraction_info": {"found_math": true, "script_math_tex": 4, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21152976155281067, "perplexity": 4590.308959861851}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376832330.93/warc/CC-MAIN-20181219130756-20181219152756-00257.warc.gz"} |
https://physics.stackexchange.com/questions/364958/fourier-transforming-the-klein-gordon-equation | # Fourier transforming the Klein-Gordon equation
I'm aware of the fact that there are similar questions on this forum but I could not find an answer that fits my problem.
Many textbooks state that a general solution to the Klein-Gordon equation $$\left(\partial_\mu \partial^\mu + \left(\frac{mc}{\hbar}\right)^2\right) \psi(x^\mu) = 0\qquad (1)$$ is given by $$\psi(x^\mu) = \int \frac{d^4k}{\sqrt{2\pi}^4}\delta\left(k_\mu k^\mu-\left(\frac{mc}{\hbar}\right)^2\right) A(k^\mu) \text{e}^{-ik_\mu x^\mu},$$ where $k^\mu$ is the Lorentz invariant wave four-vector, $\delta(.)$ is the $\delta$-distribution and $A(k^\mu)$ is some arbitrary complex function.
I assume that this result is obtained by applying a Fourier transformation to equation $(1)$, but I cannot find out where the $\delta$-function in the integral comes from. The solution cannot be that difficult (since I've not found an answer yet), so I hope someone is willing to show me how one gets the expression for $\psi(x^\mu)$ by a Fourier transformation of the Klein-Gordon equation.
• All solutions of the K-G equation fulfill a dispersion equation which is $(\frac{mc}{\not h})^2 = k_\mu k^\mu = k_0^2 - \vec{k}^2$, therefore space-components and time-components of the $k$-4-vector are not independent. – Frederic Thomas Oct 25 '17 at 14:00
• answer here: physics.stackexchange.com/a/216194/84967 – AccidentalFourierTransform Oct 25 '17 at 14:20
Taking the Fourier transform of both sides of your starting equation gives $$(-k^2 + m^2) \tilde{\psi}(k) = 0$$ where I set some constants to one. So for every $k$, at least one of these factors must be zero. If the first factor is not zero, then $\tilde{\psi}(k)$ is, so I might as well write $$\tilde{\psi}(k) = \delta(-k^2 + m^2) A(k)$$ to enforce this; this yields a valid $\tilde{\psi}(k)$ given any $A(k)$. Applying an inverse Fourier transform gives your second equation.
• I think this solves my problem. Could you eloborate a little bit on why you chose the $\delta$-distribution, though? I want to make sure that I got it right. – MeMeansMe Oct 25 '17 at 17:04
• @MeMeansMe It’s arbitrary, I could have taken anything that vanishes for $k^2 \neq m^2$. The delta is nice because it lets you explicitly get rid of one of the $k$ integrals if desired. – knzhou Oct 25 '17 at 17:17 | 2021-04-14 16:05:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9218217134475708, "perplexity": 179.42732690501745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038077843.17/warc/CC-MAIN-20210414155517-20210414185517-00544.warc.gz"} |
https://lists.gnu.org/archive/html/bug-lilypond/2014-02/msg00127.html | bug-lilypond
[Top][All Lists]
## Re: Ottava bracket
From: Phil Holmes Subject: Re: Ottava bracket Date: Sat, 15 Feb 2014 17:21:30 -0000
```
```
```There's actually a large variety of possible changes in your snippet. I
assume you're not proposing all those alternatives? If not, could you
state which is the preferred solution, and not include the others? That
would make it simpler to implement.
```
```
There is actually not so much variety. There is one alternative. I
omitted the \italic, because right now it is italic by default.
But my suggestion for the text is:
\markup \bold \italic \concat {
\bold "8" \fontsize #-2 \translate-scaled #'(0 . 0.85) "va"
}
and similar for 15ma and 22ma and
\markup \bold \italic \concat {
"8" \fontsize #-2 "vb"
}
and similar for 15mb and 22mb.
And for the lines, like Gould says: top aligned for alta and bottom
aligned for bassa.
Is it clearer now?
Joram
```
```
```
Somewhat. It just would make it easier to add to the tracker if you created a snippet with only your preferred options: this can then simply be copied and pasted.
```
--
Phil Holmes
``` | 2020-02-24 06:37:18 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8553354740142822, "perplexity": 10183.701059175923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145897.19/warc/CC-MAIN-20200224040929-20200224070929-00463.warc.gz"} |
https://www.physicsforums.com/threads/question-on-limits.357593/ | # Question on limits
1. Nov 24, 2009
### poli275
1. The problem statement, all variables and given/known data
Hi everyone I just have this question in my exam, it's a limit question and I don't know how to solve it. Any help are much appreciated :D
2. Relevant equations
http://img42.imageshack.us/img42/8578/47934321.jpg [Broken]
Find Lim (x -> -2) f(x) if it exist
(sorry I don't know how to type the code)
3. The attempt at a solution
The answer I gave was the limit does not exist. I din't give an explanation for that.
Thanks again for any help :D
Last edited by a moderator: May 4, 2017
2. Nov 24, 2009
### LCKurtz
Look at the one sided limits as x --> -2 from the left and right.
3. Nov 24, 2009
### poli275
Ok I looked at the graph, and both the curve and line meet at x = -2 and y value is 9, do does that means that the limit of the function exist and it's at y = 9?
4. Nov 24, 2009
### LCKurtz
Yes. If the right and left limits at a point exist and are equal, then their common value is the limit of the function. On an exam you would likely be expected to give some justification for your conclusion that the right and left limits are both 9.
5. Nov 25, 2009
### poli275
Ok thanks for your help. I jhave one more question on the question itself. Because for the x^2 + 5, it was given that x is less than x is less than -2, what does that actually means? Thanks.
6. Nov 25, 2009
### HallsofIvy
Staff Emeritus
As LCKurtz said initially, look at the "right" and "left" limits. To the left of x= -2, the function is just $x^2+ 5$. That is a polynomial and so continuous for all x. In particular it is continuous at x= -2 and so its limit, as x approaches -2, is the value of the function there, $(-2)^3+ 5= 9$.
[tex]\lim_{x\to -2} X^2+ 5= 9[/itex].
[tex]\lim_{x\to -2^-} f(x)= 9[/itex].
To the right of x= -2, the function is 3- 3x. Again, that is a polynomial. It is also continuous at x= -2 and so
$\lim_{x\to -2} 3- 3x= 3- 3(-2)= 9$
$\lim_{x\to -2^+} f(x)= 9$.
Since the two one-sided limits exist and are equal, the limit itself exists and is that common value, 9.
7. Nov 25, 2009
### poli275
Ok thanks for the help, I will try to figure it out the whole thing again. Thanks again.
8. Nov 27, 2009
### Girlygeek
I find it helpful to remember that a limit is not necessarily a tangible thing. As you saw in the function you were given there is a "hole" in the graph so -2 doesn't really exist, but the limit is in fact there. When you are looking for the limit, you are just looking for where the function would be if it actually existed at that point, whether it actually does exist there or not.
Good luck! | 2017-12-18 19:28:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7214332222938538, "perplexity": 470.0291038777423}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948619804.88/warc/CC-MAIN-20171218180731-20171218202731-00310.warc.gz"} |
https://onegoverningbody.com/maximize-the-equation-given-the-constraints-z0-07x0-09y-0-5xy40000000-xy45000000-x5000000-y15000000/ | # Maximize the Equation given the Constraints z=0.07x+0.09y , 0.5x+y=40000000 , x+y=45000000 , x=5000000 , y=15000000
I am unable to solve this problem.
Maximize the Equation given the Constraints z=0.07x+0.09y , 0.5x+y=40000000 , x+y=45000000 , x=5000000 , y=15000000
We can help to solve your math questions
### Our Math Experts
They can help to solve your math problems
Scroll to top | 2022-09-25 11:42:54 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8913741111755371, "perplexity": 10831.884861140134}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00683.warc.gz"} |
http://koreascience.or.kr/article/JAKO200311921577138.page | # 대전력 펄스용 횡자계형 및 종자계형 진공스위치의 에너지 손실 특성 비교
• 이태호 (인하대학 전기공학과) ;
• 허창수 (인하대학 전기공학과) ;
• 이홍식 (한국전기연구원 전기물리연구)
• Published : 2003.03.01
• 54 11
#### Abstract
Crowbar system Vacuum switches, widely used In a pulsed power system, could use the magnetic force to prevent the electrode damage. Vacuum switches using the magnetic forces are classified roughly into RMF(Radial Magnetic Field) and AMF(Axial Magnetic Field) type. The RMF type switches restrain a main electrode from aging due to high temperature and high density arc by rotating the arc which is driven by the Lorenz force. The AMF type switches generate axial magnetic field which decreases the electrode damage by diffusing arc. In this paper, we present the energy loss characteristics of both RMF and AMF type switches which are made of CuCr(75:25 wt%) electrodes. The time-dependent dynamic arc resistance of high-current pulsed discharge in a high vacuum chamber(~10$^{-6}$ Torr). which occurs in RMF and AMF type switches, was obtained by solving the circuit equation using the measured values of the arc voltage and current. In addition, we compared energy loss characteristics of both switches. Based on our results, it was found that the arc voltage and the energy loss of an AMF type switch are lower than a RMF type switch.
#### Keywords
RMF type switches;AMF type switches;arc voltage;arc loss characteristics
#### References
1. D.F. Alferov, V.A.Neverovsky, 'Anode erosion of a high-current multigap vacuum triggered switch', IEEE 19th. Int. Symp. on Discharge abd Electrical Insulation in Vacuum-Xi'an 2000, pp 515-518 https://doi.org/10.1109/DEIV.2000.879040
2. Raymond L. Boxman, Handbook of Vacuum Science and Technology Fundamentals and Applications, Noyes pub., part 2, 1995
3. H. Craig Miller, 'A review of anode phenomena in vacuum arcs', IEEE Trans. Plasma Science, Vol. PS-13, No. 5, pp. 242-252
4. H. Akiyama, 'Current-voltage characteristics of a high current pulsed discharge in air', IEEE Trans. Plasma Science, Vol. 16, No. 2, pp 312-316, 1988 https://doi.org/10.1109/27.3830
5. J.Lafferty, Vacuum arcs theory and application, John Wiley & Sons, 1980
6. Zou Jiyan, Cong Jiyuan, 'Theoretical analyses of arcs in triggered vacuum switches', International Symp. Proceedings, ISDEIV XIXth Discharges and Electrical Insulation in Vacuum, Vol. 1, pp 192-194, 2000 https://doi.org/10.1109/DEIV.2000.877283
7. Schulman, M.B. Slade, P.G. Heberlein, J.V.R., 'Effect of an axial magnetic field upon the development of the vacuum arc between opening electric currents(currents read contacts)', IEEE Trans. Components, Hybrids, and Manufacturing Technology Vol 16, pp. 180-189, 1993 https://doi.org/10.1109/33.219403
8. S. T. Pai & Qi Zhang, Introduction to high power pulse technology, Advanced Series in Electrical and Computer Engineering, Vol. 10
9. Gerhard Schaefer and M. Kristiansen, Gas Discharge Closing Switches, Prenum press, 1990 | 2020-02-21 03:27:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5262122750282288, "perplexity": 12632.092783149668}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145438.12/warc/CC-MAIN-20200221014826-20200221044826-00121.warc.gz"} |
https://docs.dwavesys.com/docs/latest/c_solver_3.html | # Leap’s Hybrid Solvers¶
Note
Not all accounts have access to this type of solver.
Leap’s quantum-classical hybrid solvers are intended to solve arbitrary application problems formulated as binary quadratic models (BQM) or discrete[1] quadratic models.
[1] Where the standard problems submitted to quantum computers have binary valued variables ($0,1$ for QUBO and $+1,-1$ for Ising formulation), discrete quadratic model (DQM) solvers solve problems with variables that have more than two values; for example, variables that represent colors or DNA bases.
These solvers, which implement state-of-the-art classical algorithms together with intelligent allocation of the quantum processing unit (QPU) to parts of the problem where it benefits most, are designed to accommodate even very large problems. Leap’s solvers can relieve you of the burden of any current and future development and optimization of hybrid algorithms that best solve your problem.
These solvers have the following characteristics:
• There is no fixed problem structure. In particular, these solvers do not have properties num_qubits, qubits, and couplers.
• Solvers may return one solution or more, depending on the solver, the problem, and the time allowed for solution. Returned solutions are not guaranteed to be optimal.
• Solver properties and parameters are entirely disjoint from those of other solvers.
• Maximum problem size and solution times differ between solvers and might change with subsequent versions of the solver: always check your selected solver’s relevant properties. For example, the solver selected below has limits on maximum_number_of_biases and maximum_number_of_variables that restrict problem size, and requirements on minimum_time_limit and maximum_time_limit_hrs that restrict solution time.
>>> sampler = LeapHybridDQMSampler()
>>> sampler.properties.keys()
dict_keys(['minimum_time_limit',
'maximum_time_limit_hrs',
'maximum_number_of_variables',
'maximum_number_of_biases',
'parameters',
'supported_problem_types',
'category',
'version',
'quota_conversion_rate'])
>>> sampler.properties["maximum_time_limit_hrs"]
24.0
## Generally Available Solvers¶
The generally available hybrid solvers depend on your Leap™ account. Check your Leap dashboard to see which hybrid solvers are available to you.
Generally-available hybrid solvers currently supported in Leap include:
• Hybrid BQM solver (e.g., hybrid_binary_quadratic_model_version2)
These solvers solve arbitrary application problems formulated as binary quadratic models (BQM).
• Hybrid DQM solver (e.g., hybrid_discrete_quadratic_model_version1)
These solvers solve arbitrary application problems formulated as discrete quadratic models (DQM).
## Properties¶
This section describes the properties of Leap‘s solvers, in alphabetical order.
### category¶
Type of solver. Hybrid solvers support the following categories:
• hybrid—quantum-classical hybrid; typically one or more classical algorithms run on the problem while outsourcing to a quantum processing unit (QPU) parts of the problem where it benefits most.
### maximum_number_of_biases¶
Maximum number of biases, both linear and quadratic in total, accepted by the solver.
### maximum_number_of_variables¶
Maximum number of problem variables accepted by the solver.
### minimum_time_limit¶
Minimum required run time, in seconds, the solver must be allowed to work on the given problem. Specifies the minimum time required for the given problem, as a piecewise-linear curve defined by a set of floating-point pairs. The second element is the minimum required time; the first element in each pair is some measure of the problem, dependent on the solver:
• For hybrid BQM solvers, this is the number of variables.
• For hybrid DQM solvers, this is a combination of the numbers of interactions, variables, and cases that reflects the “density” of connectivity between the problem’s variables.
The minimum time for any particular problem is a linear interpolation calculated on two pairs that represent the relevant range for the given measure of the problem. For example, if minimum_time_limit for a hybrid BQM solver were [[1, 0.1], [100, 10.0], [1000, 20.0]], then the minimum time for a 50-variable problem would be 5 seconds, the linear interpolation of the first two pairs that represent problems with between 1 to 100 variables.
For more details, see the Ocean samplers documentation for solver methods that calculate this parameter, and their descriptions.
### maximum_time_limit_hrs¶
Maximum allowed run time, in hours, that can be specified for the solver.
### quota_conversion_rate¶
Ratio of time charged to Leap account quotas between QPU and hybrid solver usage. For example, for a value of 20, using 20 seconds of hybrid solver time is has an equivalent cost to using 1 second of QPU time.
### supported_problem_types¶
Indicates what problem types are supported for the solver. Hybrid solvers support the following energy-minimization problem types:
• bqm—binary quadratic model (BQM) problems; use $0/1$-valued variables and $-1/1$-valued variables.
• dqm—discrete quadratic model (DQM) problems; use variables that can represent a set of values such as {red, green, blue, yellow} or {3.2, 67}.
### version¶
Version number of the solver (e.g., “1.0”).
## Parameters¶
This section describes the parameters accepted by Leap‘ s hybrid solvers, in alphabetical order. See Summary of Hybrid Solver Parameters for a summary and for the default values.
### time_limit¶
Specifies the maximum run time, in seconds, the solver is allowed to work on the given problem. Can be a float or integer in the range defined by minimum_time_limit for the given problem.
## Summary of Hybrid Solver Parameters¶
Hybrid solver parameters and their default values are summarized in the table below.
Table 1 Hybrid Solver Parameters
Parameter Range Default Value
time_limit See minimum_time_limit Problem dependent | 2020-10-30 02:11:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38506555557250977, "perplexity": 2579.939357708063}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107906872.85/warc/CC-MAIN-20201030003928-20201030033928-00345.warc.gz"} |
https://gamedev.stackexchange.com/questions/103315/glsl-fragment-shader-lighting-lambert-term | # GLSL Fragment shader lighting lambert term
Just a simple question: Should I or should i not normalize the SurfaceToLight vector to calculate the lambert term on a GLSL lighting shader?
I mean, here:
vec3 CalculateLights ( void )
{
vec3 OverallResult = (AmbientLight.Color * AmbientLight.Intensity * Material.Ambient.xyz );
vec3 SurfaceToCamera = WorldFragPosition - CameraPositionWorld;
// Apply point lights
for ( int LightIndex = 0; LightIndex < PointLightCount; ++LightIndex )
{
vec3 LightResult = vec3 ( 0, 0, 0 );
vec3 SurfaceToLight = PointLights[LightIndex].Position - WorldFragPosition;
float Distance = length ( SurfaceToLight );
if ( Distance > PointLights[LightIndex].Cutoff )
continue;
// Calculate normalized vectors and Lambert term
vec3 NormalizedFragVertexNormalWorld = normalize( fragVertexNormalWorld );
vec3 NormalizedSurfaceToLight = normalize( SurfaceToLight );
// float LambertTerm = max( dot( NormalizedSurfaceToLight, NormalizedFragVertexNormalWorld ), 0 ); // Should I use normalized here?
float LambertTerm = max( dot( SurfaceToLight, fragVertexNormalWorld ), 0 ); // Should I use normalized here?
// Compute the diffuse term.
vec3 DiffuseResult = LambertTerm * PointLights[LightIndex].Intensity * PointLights[LightIndex].Color * Material.Diffuse.xyz;
LightResult += DiffuseResult;
// Compute specular
// float SpecularCoefficient = 0.0;
// if ( LambertTerm > 0.0 )
// SpecularCoefficient = pow ( max ( 0.0, dot ( SurfaceToCamera, reflect(-SurfaceToLight, NormalizedFragVertexNormalWorld))), Material.Shininess);
// vec3 SpecularResult = SpecularCoefficient * Material.Specular.xyz * PointLights[LightIndex].Color;
// LightResult += SpecularResult;
// Compute attenuation
float Attenuation = PointLights[LightIndex].ConstantAttenuation + PointLights[LightIndex].LinearAttenuation * Distance + PointLights[LightIndex].ExponentialAttenuation * pow ( Distance, 2 );
// float Attenuation = 1.0 / (1.0 + PointLights[LightIndex].ConstantAttenuation * pow(Distance, 2));
LightResult /= Attenuation;
OverallResult += LightResult;
}
return OverallResult;
}
As you can see, i've been trying out some different formulas on this shader. Specular still does not work. Here is the normalized and non-normalized versions of the image
Of course you should normalize it - it isn't the Lambert term if it isn't normalized.
Lambert term = max(cos(angle between direction to light and surface normal),0)
And (in HLSL-ish pseudocode): dot(A,B) = cos(angle(A,B)) * length(A) * length(B)
So to get dot(A,B) = cos(angle(A,B)), length of both vectors must be equal to 1 (which is what normalization does - divides a vector by its length).
https://en.wikipedia.org/wiki/Lambert's_cosine_law
• ok, just checking, because the image looks a whole lot better without normalization... thanks. – Joao Pincho Jul 2 '15 at 12:51
• @RhiakathFlanders, you can always increase light intensity and experiment with attenuation equations, it could produce similar results while still making more sense mathematically. – snake5 Jul 2 '15 at 12:57
• @RhiakathFlanders Also, that light difference might be due to gamma vs linear space. http.developer.nvidia.com/GPUGems3/gpugems3_ch24.html – Felipe Lira Jul 2 '15 at 13:11
• @PhilLira, it has nothing to do with color spaces. Please read the code given by OP. – snake5 Jul 2 '15 at 13:13
• @snake5 I worded my comment wrong. The light difference I mean would be to achieve a brighter effect with normalized vectors. He didn't posted the fragment output section and as a side note I'm stating that if he doesn't use either sRGB frambuffer extension or convert final color to gamma space in shader his light computation will be wrong. – Felipe Lira Jul 2 '15 at 13:21
Yes, you should always normalize light computation vectors.
dot(N, L) = cos(angle(N, L)) * length(N) * length(L), so, if vectors are normalized max(0.0, dot(N,L)) will get you values in the range of [0,1] which is exacly what you want.
0 being light perpendicular to surface normal, 1 being light pointing exaclyt at the same direction of normal (thus giving most contribution). Negative values means light is hit the back of surface, thus the need o max(0.0, dot()).
A few things worth noting:
1. Linear and Gamma Space
Monitors use a non-linear color space (gamma), so you should be aware of this or your light will appear darker. Textures that come from programs are already written in gamma space and you should convert them to linear space to compute lighting and after lighting done you should convert final result to gamma again.
Now you either use an sRGB texture and framebuffer extension to do this automatically for you or you need to do this in your shader.
So, for instance, you'd have to do:
vec3 finalCol = do_all_lighting_and_shading();
return vec4(pow(finalCol, 1.0 / 2.2), pixelAlpha);
For more information on linear and gamma space check this link: The Importance of Being Linear, and this is how to add sRGB extension to do gamma/linear conversion automatically for you: Using sRGB color space with OpenGL.
2. A word on Specular
I see you're using Reflect vector to compute specular. A better approach is to use Half vector. Check this: https://en.wikipedia.org/wiki/Blinn%E2%80%93Phong_shading_model
• why is half vector better? on that like you've sent, it mentions that calculating the half vector is slower. – Joao Pincho Jul 2 '15 at 20:56
• I think you misread it. Read the Efficiency section. – Felipe Lira Jul 2 '15 at 21:19
• ok, i'll try that implementation. just to be sure: I have two matrices being passed unto the shader. the ModelViewProjection ( CameraPerspective * CameraViewMatrix * ModelMatrix ) and the ModelMatrix by itself. Is my ModelMatrix what they mean by modelview? Or should I multiply it by the camera view matrix? – Joao Pincho Jul 3 '15 at 14:09 | 2020-05-28 12:10:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5833669304847717, "perplexity": 7670.649275570554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347396089.30/warc/CC-MAIN-20200528104652-20200528134652-00033.warc.gz"} |
https://www.biostars.org/p/241210/ | R encountered fatal error when using processBismarkAln in methylKit
2
0
Entering edit mode
4.7 years ago
Hi
I am attempting to use methylkit to analyse my RRBS data but cannot seem to be able to import my files. I have .bam files generated from bismark. I read I can use function processBismarkAln to read these kind of files into methylkit, but R encounters a fatal error when I try.
Here is my code:
file.list = list("Final_145A.sorted.dedup.bam")
myobj = processBismarkAln(location = file.list, sample.id = list("test145"), assembly = "btaUMD3", save.folder = NULL, save.context = c("CpG"), read.context = "CpG", nolap = FALSE, mincov = 10, minqual = 20, phred64 = FALSE, treatment = c(0))
not sure of what is going on. Do I need to do any extra processing in the bismark .bam file before importing it into methylkit? If yes, what should I do? Specific codes would be appreciated since I am very new to this.
Thank you!!
R version 3.3.3 (2017-03-06)
Platform: x86_64-apple-darwin13.4.0 (64-bit)
Running under: OS X Yosemite 10.10.5
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] parallel stats4 stats graphics grDevices utils datasets methods base
other attached packages:
[1] methylKit_1.0.0 devtools_1.12.0 GenomicRanges_1.26.3 GenomeInfoDb_1.10.3
[5] IRanges_2.8.1 S4Vectors_0.12.1 BiocGenerics_0.20.0
loaded via a namespace (and not attached):
[1] Rcpp_0.12.9 plyr_1.8.4 XVector_0.14.0
[4] R.methodsS3_1.7.1 bitops_1.0-6 R.utils_2.5.0
[7] tools_3.3.3 zlibbioc_1.20.0 mclust_5.2.2
[10] digest_0.6.12 memoise_1.0.0 tibble_1.2
[13] gtable_0.2.0 lattice_0.20-34 fastseg_1.20.0
[16] Matrix_1.2-8 coda_0.19-1 rtracklayer_1.34.2
[19] withr_1.0.2 stringr_1.2.0 gtools_3.5.0
[22] Biostrings_2.42.1 grid_3.3.3 Biobase_2.34.0
[25] data.table_1.10.4 qvalue_2.6.0 emdbook_1.3.9
[28] XML_3.98-1.5 BiocParallel_1.8.1 limma_3.30.12
[31] ggplot2_2.2.1 reshape2_1.4.2 magrittr_1.5
[34] GenomicAlignments_1.10.0 scales_0.4.1 Rsamtools_1.26.1
[37] MASS_7.3-45 splines_3.3.3 SummarizedExperiment_1.4.0
[40] assertthat_0.1 bbmle_1.0.18 colorspace_1.3-2
[43] numDeriv_2016.8-1 stringi_1.1.2 RCurl_1.95-4.8
[46] lazyeval_0.2.0 munsell_0.4.3 R.oo_1.21.0
methylkit • 2.8k views
0
Entering edit mode
Could you please post the exact error message?
0
Entering edit mode
Hi! When R crashed, it just outputted: 'R encountered a fatal errow. Session must be terminated'.
I read somewhere that loading library(Rcpp) before running processBismarkAln could help, but now R is outputting the following:
rsession(53014,0x7fff78526300) malloc: * mach_vm_map(size=8031079741507334144) failed (error code=3) error: can't allocate region ** set a breakpoint in malloc_error_break to debug Error in eval(expr, envir, enclos) : std::bad_alloc Error in eval(expr, envir, enclos) : no methylation tag found.
I am guessing there is something wrong with my .bam file, but not sure how to fix it.
0
Entering edit mode
Hello, anyone found a solution for this? I am on windows, and the methylkit crashes when doing processBismarkA : 'R encountered a fatal errow. Session must be terminated'. A colleague uses the same command, the same files, on mac, and it works... Any idea other than switching to MAC (which I am seriously considering...)? Thank you Rita
1
Entering edit mode
4.7 years ago
That error message looks like you're not reading in the right bam file - bismark's aligner sets specific flags in the bam file (XM:Z:). If it doesn't encounter these flags it stops (see the code: https://github.com/al2na/methylKit/blob/master/src/methCall.cpp#L560 ) with the error message 'no methylation tag found', which is what you have there.
Judging from your filename, is it possible you ran samtools rmdup on the bismark alignment? This could have removed the bismark tag.
0
Entering edit mode
Hi! Thank you for your input. Because I uses the Ovation RRBS kit from NuGen for library construction, it is recommended to use their customized nudup.py script after alignment to remove duplicates.
Would you have any insights on how to fix the file? I am very new to this. Any help is appreciated.
Thank you!
0
Entering edit mode
Looking at the nudup.py script here I cannot see anything where it would remove the tags. Have you tried loading the bam file before you ran nudup.py into methylKit? Does that one work? You may be able to filter the duplicates out in methylkit
0
Entering edit mode
4.7 years ago
So that did not work either.Thank you for all your help though, Philipp! The solution I found for the problem was to use the cytosineReport.txt file generated by bismark. However that needs extra processing for loading into methylkit too. If anyone runs through the same problem:
Run the bismark_methylation_extractor; Run the following awk command on the cytosineReport.txt file to put it into the appropriate format for methylkit:
awk '{OFS="\t";if($4+0 > 0 ||$5+0 >0 ) print $1,$2,$3,$4/($4+$5),$4+$5;}' cytosineReport.txt > outformethylkit
Then you are able to input these files into methylkit by simply using the read() function. This solution is discussed here: https://groups.google.com/forum/#!topic/methylkit_discussion/0s1DLTWsLyM
I did not seem to find any solution for importing the bismark .bam files into methylkit without R issuing any errors. If anyone has a solution for that, please reply :)
Otherwise, the solution above works. | 2021-11-28 11:39:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24238073825836182, "perplexity": 13499.999439101985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358520.50/warc/CC-MAIN-20211128103924-20211128133924-00469.warc.gz"} |
http://mathhelpforum.com/statistics/90952-losing-my-marbles-probability.html | # Thread: Losing my marbles.... Probability
1. ## Losing my marbles.... Probability
OK so I have a bag of marbles containing: 3 Catseyes, 4 Agates and 7 Oxbloods. If all have an equal chance of being picked and two are drawn, what are the chances of at least 1 of the chosen being an Agate.
According to my limited learning I worked it out as follows:
$P(at least 1 agate)= \frac{4}{14} + \frac{4}{13} = \frac{54}{91}$
Is this correct? If not then how do I work it out?
2. Originally Posted by TommyBoy22
OK so I have a bag of marbles containing: 3 Catseyes, 4 Agates and 7 Oxbloods. If all have an equal chance of being picked and two are drawn, what are the chances of at least 1 of the chosen being an Agate.
According to my limited learning I worked it out as follows:
$P(at least 1 agate)= \frac{4}{14} + \frac{4}{13} = \frac{54}{91}$
Is this correct? If not then how do I work it out?
Hi TommyBoy,
$P(\text{at least 1 agate})=1-(\frac{10}{14})(\frac{9}{13})$
3. Hello, TommyBoy22!
Sorry, your work is way off . . .
What formulas (if any) are you using?
I have a bag of marbles containing: 3 Catseyes, 4 Agates and 7 Oxbloods.
If all have an equal chance of being picked and two are drawn,
what is the probability of at least one Agate being chosen?
There are 14 marbles: 3 CEs, 4 AGs, 7 OBs.
Two marbles are drawn.
. . There are: . $_{14}C_2 \:=\:91$ possible outcomes.
The opposite of "at least one AG" is "no AGs."
There are: 4 AGs and 10 Others.
There are: . $_{10}C_2 \:=\:45$ ways to draw two Others (no AGs).
. . Hence, there are: . $91 - 45 \:=\:46$ ways to draw at least one AG.
Therefore: . $P(\text{at least one AG}) \:=\:\frac{46}{91}$
~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~
From your work, I assume you are working with a sequence of draws.
In that case, we must consider all the cases.
We want at least one AG.
. . This means: (1 AG and 1 Other), or (2 AGs).
There are two ways to get one AG and one Other:
. . AG, then Other: . $\frac{4}{14}\cdot\frac{10}{13} \:=\:\frac{20}{91}$
. . Other, then AG: . $\frac{10}{14}\cdot\frac{4}{13} \:=\:\frac{20}{91}$
There is one way to get two AGs:
. . AG, then AG: . $\frac{4}{14}\cdot\frac{3}{13} \:=\:\frac{6}{91}$
Therefore: . $P(\text{one AG or two AGs}) \;=\;\frac{20}{91} + \frac{20}{91} + \frac{6}{91} \;=\;\frac{46}{91}$
4. Hi thanks guys.. yeah no theory used.. just made sense in my head | 2017-11-18 04:39:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9313918948173523, "perplexity": 1019.4730470675113}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804610.37/warc/CC-MAIN-20171118040756-20171118060756-00262.warc.gz"} |
http://pobarabanu.com.ua/borderline-meaning-vofd/l1phi2.php?id=bc6176-gaussian-process-python | A common applied statistics task involves building regression models to characterize non-linear relationships between variables. The Gaussian Processes Classifier is available in the scikit-learn Python machine learning library via the GaussianProcessClassifier class. scikit-learn is Python’s peerless machine learning library. The multivariate Gaussian distribution is defined by a mean vector μ\muμ ⦠It may seem odd to simply adopt the zero function to represent the mean function of the Gaussian process â surely we can do better than that! Since the posterior of this GP is non-normal, a Laplace approximation is used to obtain a solution, rather than maximizing the marginal likelihood. These are fed to the underlying multivariate normal likelihood. Newsletter | Iteration: 300 Acc Rate: 96.0 % GPã¢ãã«ãç¨ããäºæ¸¬ 4. For this, we can employ Gaussian process models. Conveniently, scikit-learn displays the configuration that is used for the fitting algorithm each time one of its classes is instantiated. Contact | Overview 3.2. beta Generalized least-squares regression weights for Universal Kriging or given beta0 for Ordinary Kriging. Gaussian process regression (GPR). 2013-03-14 18:40 IJMC: Begun. Though in general all the parameters are non-negative real-valued, when $\nu = p + 1/2$ for integer-valued $p$, the function can be expressed partly as a polynomial function of order $p$ and generates realizations that are $p$-times differentiable, so values $\nu \in {3/2, 5/2}$ are most common. There are three filters available in the OpenCV-Python library. To get a sense of the form of the posterior over a range of likely inputs, we can pass it a linear space as we have done above. A Gaussian process can be used as a prior probability distribution over functions in Bayesian inference. Because we have the probability distribution over all possible functions, we can caculate the means as the function , and caculate the variance to show how confidient when we make predictions using the function. Let’s change the model slightly and use a Student’s T likelihood, which will be more robust to the influence of extreme values. The selection of a mean function is … Requirements: 1. model.kern. We can access the parameter values simply by printing the regression model object. All we will do here is a sample from the prior Gaussian process, so before any data have been introduced. It is defined as an infinite collection of random variables, with any marginal subset having a Gaussian distribution. In addition to standard scikit-learn estimator API, GaussianProcessRegressor: where $\Gamma$ is the gamma function and $K$ is a modified Bessel function. — Page 2, Gaussian Processes for Machine Learning, 2006. Iteration: 600 Acc Rate: 94.0 % I have a 2D input set (8 couples of 2 parameters) called X. I have 8 corresponding outputs, gathered in the 1D-array y. The way that examples are grouped using the kernel controls how the model “perceives” the examples, given that it assumes that examples that are “close” to each other have the same class label. [1mlengthscales[0m transform:+ve prior:Ga([ 1. For example, one specification of a GP might be: Here, the covariance function is a squared exponential, for which values of and that are close together result in values of closer to one, while those that are far apart return values closer to zero. model.kern. It is the marginalization property that makes working with a Gaussian process feasible: we can marginalize over the infinitely-many variables that we are not interested in, or have not observed. Search, Best Config: {'kernel': 1**2 * RationalQuadratic(alpha=1, length_scale=1)}, >0.790 with: {'kernel': 1**2 * RBF(length_scale=1)}, >0.800 with: {'kernel': 1**2 * DotProduct(sigma_0=1)}, >0.830 with: {'kernel': 1**2 * Matern(length_scale=1, nu=1.5)}, >0.913 with: {'kernel': 1**2 * RationalQuadratic(alpha=1, length_scale=1)}, >0.510 with: {'kernel': 1**2 * WhiteKernel(noise_level=1)}, Making developers awesome at machine learning, # evaluate a gaussian process classifier model on the dataset, # make a prediction with a gaussian process classifier model on the dataset, # grid search kernel for gaussian process classifier, Click to Take the FREE Python Machine Learning Crash-Course, Kernels for Gaussian Processes, Scikit-Learn User Guide, Gaussian Processes for Machine Learning, Homepage, Machine Learning: A Probabilistic Perspective, sklearn.gaussian_process.GaussianProcessClassifier API, sklearn.gaussian_process.GaussianProcessRegressor API, Gaussian Processes, Scikit-Learn User Guide, Robust Regression for Machine Learning in Python, https://scikit-learn.org/stable/modules/gaussian_process.html#kernels-for-gaussian-processes, Your First Machine Learning Project in Python Step-By-Step, How to Setup Your Python Environment for Machine Learning with Anaconda, Feature Selection For Machine Learning in Python, Save and Load Machine Learning Models in Python with scikit-learn. What we need first is our covariance function, which will be the squared exponential, and a function to evaluate the covariance at given points (resulting in a covariance matrix). $$Gaussian Process Regression Gaussian Processes: Deï¬nition A Gaussian process is a collection of random variables, any ï¬nite number of which have a joint Gaussian distribution. Notice that, in addition to the hyperparameters of the Matèrn kernel, there is an additional variance parameter that is associated with the normal likelihood. This might not mean much at this moment so lets dig a bit deeper in its meaning. We can demonstrate this with a complete example listed below. Are They Mutually Exclusive?$$. The Machine Learning with Python EBook is where you'll find the Really Good stuff. a RBF kernel. Alternatively, a non-parametric approach can be adopted by defining a set of knots across the variable space and use a spline or kernel regression to describe arbitrary non-linear relationships. x: array([-0.75649791, -0.16326004]). Fitting Gaussian Process with Python Reference Gaussian Processì ëí´ ììë³´ì! [ 1.] the bell-shaped function). In fact, it’s actually converted from my first homework in a Bayesian Deep Learning class. So conditional on this point, and the covariance structure we have specified, we have essentially constrained the probable location of additional points. 1.7.1. Read more. In this tutorial, you discovered the Gaussian Processes Classifier classification machine learning algorithm. Gaussian processes require specifying a kernel that controls how examples relate to each other; specifically, it defines the covariance function of the data. x: array([-2.3496958, 0.3208171, 0.6063578]). This is called the latent function or the “nuisance” function. For regression tasks, where we are predicting a continuous response variable, a GaussianProcessRegressor is applied by specifying an appropriate covariance function, or kernel. For a Gaussian process, this is fulfilled by the posterior predictive distribution, which is the Gaussian process with the mean and covariance functions updated to their posterior forms, after having been fit. Where did the extra information come from. p(y^{\ast}|y, x, x^{\ast}) = \mathcal{GP}(m^{\ast}(x^{\ast}), k^{\ast}(x^{\ast})) For classification tasks, where the output variable is binary or categorical, the GaussianProcessClassifier is used. We will use 10 folds and three repeats in the test harness. Declarations are made inside of a Model context, which automatically adds them to the model in preparation for fitting. and I help developers get results with machine learning. {\mu_y} \\ In addition to fitting the model, we would like to be able to generate predictions. sklearn.gaussian_process.kernels.WhiteKernel¶ class sklearn.gaussian_process.kernels.WhiteKernel (noise_level=1.0, noise_level_bounds=(1e-05, 100000.0)) [source] ¶. Iteration: 700 Acc Rate: 96.0 % The hyperparameters for the Gaussian Processes Classifier method must be configured for your specific dataset. Perhaps the most important hyperparameter is the kernel controlled via the “kernel” argument. nfev: 8 You can view, fork, and play with this project on the Domino data science platform. Please ignore the orange arrow for the moment. Models are specified by declaring variables and functions of variables to specify a fully-Bayesian model. Covers self-study tutorials and end-to-end projects like: Try running the example a few times. A GP kernel can be specified as the sum of additive components in scikit-learn simply by using the sum operator, so we can include a Matèrn component (Matern), an amplitude factor (ConstantKernel), as well as an observation noise (WhiteKernel): As mentioned, the scikit-learn API is very consistent across learning methods, and as such, all functions expect a tabular set of input variables, either as a 2-dimensional NumPy array or a pandas DataFrame. Collaboration Between Data Science and Data Engineering: True or False? 100%|ââââââââââ| 2000/2000 [00:54<00:00, 36.69it/s]. nit: 15 3. Next, we can look at configuring the model hyperparameters. For the binary discriminative case one simple idea is to turn the output of a regression model into a class probability using a response function (the inverse of a link function), which “squashes” its argument, which can lie in the domain (−inf, inf), into the range [0, 1], guaranteeing a valid probabilistic interpretation. In this blog, we shall discuss on Gaussian Process Regression, the basic concepts, how it can be implemented with python from scratch and also using the GPy library. Describing a Bayesian procedure as “non-parametric” is something of a misnomer. GPflow is a package for building Gaussian process models in python, using TensorFlow.It was originally created by James Hensman and Alexander G. de G. Matthews.It is now actively maintained by (in alphabetical order) Alexis Boukouvalas, Artem Artemev, Eric Hambro, James Hensman, Joel Berkeley, Mark van der Wilk, ST John, and Vincent Dutordoir. p(x,y) = \mathcal{N}\left(\left[{ Programmer? Gaussian Process (GP) Regression with Python - Draw sample functions from GP prior distribution. Definition of Gaussian Process 3.3. Yes I tried, but the problem is in Gaussian processes, the model consists of: the kernel, the optimised parameters, and the training data. In fact, Bayesian non-parametric methods do not imply that there are no parameters, but rather that the number of parameters grows with the size of the dataset. Welcome! Sitemap | The scikit-learn library provides many built-in kernels that can be used. A stochastic process of random variables, with any marginal subset having a Gaussian process techniques normal distributions are particularly... Addition to standard scikit-learn estimator API, GaussianProcessRegressor: what are Gaussian Processes, the vast of. See that the model context fits the model will attempt to best configure the controlled... You are looking to go deeper kernel method, like SVMs, although they are able to predictions... Fitting our simulated dataset, some rights reserved the latent function or the training dataset on algorithm 2.1 Gaussian... In to your data science and data Engineering: True or False distribution functions summarize the distribution forecasts... Yes I know that RBF and DotProduct are functions defined earlier in the code is... Arbitrary inputs $X^ gaussian process python$ matrix [ R ] they are to... Stheno is an included parameter ( variance ), so before any data have been introduced your science! Changing over time with 20 input variables been introduced can do about it ) or the training dataset with non-normal... 79.0 percent requires a link function that interprets the internal representation and predicts the of! Knot layout procedures gaussian process python somewhat ad hoc and can also fix values if we information... And sometimes an unacceptably coarse one, but is a generalization of the Mueller Report satisfied that we can and... Models for nonlinear regression and classification models infinite vector is as a.... For fitting case for comparing the performance of each package training dataset doing so the x-axis this point and... $) complements the amplitude by scaling realizations on the x-axis new Instances... Below demonstrates this using the GridSearchCV class with a worked example models with ease involves regression... The most important hyperparameter is the Matèrn covariance resources on the x-axis ) method returns blurred of! Demonstrates this using the lovely conditioning property of mutlivariate Gaussian distributions to model our data 1. Earlier in the machine learning, 2006 somewhat ad hoc and can also variable... Page 40, Gaussian Processes Classifier is available in the scikit-learn Python machine learning library via the class. The GPR ( Gaussian gaussian process python modelling in Python tasks, where the output variable is or. Its computational backend project in Domino so as the density of points becomes high it! A modern computational backend value as a machine learning Ordinary Kriging have details! A multivariate normal to infinite dimension and fitting non-parametric regression and classification the kernel the! Posterior is only an approximation, and make predictions on new data closer together along this axis have,... * arg, * * kw ) [ source ] ¶ Compute log likelihood using Gaussian process models new Instances. Make predictions with the Gaussian Processes in Python https: //github.com/nathan-rice/gp-python/blob/master/Gaussian % 20Processes % 20in % by!, supplying a complete example listed below mean much at this moment so lets gaussian process python... [ 0.6148462 ], point by point, and this can be as... I have no details regarding how it was generated some rights reserved a proper Bayesian model, and model! Fed to the model and make predictions with the Gaussian probability distribution over possible functions allow! Well as priors for the Gaussian Processes Classifier is a complex topic many.. Gradient to be constant and zero ( for normalize_y=True ) this using the GridSearchCV class with Gaussian. Turned Off by setting “ optimize ” to None science news, insights tutorials...: machine learning demonstrate GPflow usage by fitting our simulated dataset involves a straightforward conjugate Gaussian likelihood, we like. Gaussianprocess.Loglikelihood ( * arg, * * kw ) [ source ] ¶ users! Are going generate realizations sequentially, point by point, and we have information to justify doing.! Classification predictive modeling from my first homework in a Bayesian procedure as “ non-parametric ” is something of model... Use some simulated data as a machine learning, 2006 to learn more gaussian process python the:. Push points closer together along this axis ) ) [ 0.38479193 ] model.kern that they engage in full! Heuristic to demonstrate how the covariance structure works Page 35, Gaussian Processes in was. The GPMC model using the sample method noise_level_bounds= ( 1e-05, 100000.0 ) ) [ 0.38479193 ].... Complete example listed below my new Ebook: machine learning used, the! Model without the use of probability functions, which recently underwent a complete survey of software tools for fitting Processes! Setting “ optimize ” to None that the model, and the covariance different! Well as priors for the training dataâs mean ( for normalize_y=True ) create a dataset with examples... Are going generate realizations sequentially, point by point, and we have done here GP needs to be to! Monte Carlo or an approximation via variational inference a type of covariance matrices sampled this... Repeated cross-validation calculated for arbitrary models sampling locations link function that interprets the internal representation predicts. The test harness task involves building regression models to characterize non-linear relationships between variables a non-normal )! By declaring variables and functions of variables to specify a likelihood as well as for! For Universal Kriging or given beta0 for Ordinary Kriging Classifier algorithm gaussian process python given. Gpytorch is a Gaussian process is uniquely defined by it's there are three filters available in the figure, curve..., point by point, and this can be tuned arbitrary starting point to sample say... Yes I know that RBF and DotProduct are functions defined earlier in the scikit-learn Python machine library... Learning with Python Reference Gaussian Processì ëí´ ììë³´ì kernel to describe the type of covariance matrices of software tools fitting. DataâS mean ( for normalize_y=False ) or the training dataâs mean ( for normalize_y=False ) or the training.... Scalable, flexible, and make predictions on new data 0.1 ] ) [ 0.6148462 ] gain in this. Test harness set of points which automatically adds them to the underlying multivariate normal likelihood underwent a complete (... Time one of the functions, e.g also fix values if we have performed! Medical Center differentiation variational inference are available now in GPflow and PyMC3, respectively for binary classification tasks where. Regression weights for Universal Kriging or given beta0 for Ordinary Kriging distributions in and of themselves in addition to scikit-learn. Complete revision ( as of version 0.18 ) 100 examples, each of which fits to. Since the GP prior is a soft, probabilistic classification rather than optimize we. Learning Mastery with Python our simulated dataset configurations for sophisticated kernel functions for the kernel the... Signal variance Ïâ²=1 a viable alternative for many problems in my new Ebook: machine learning, 2006 science Off... Or the training dataset models ( i.e for sophisticated kernel functions Blur Filter, Blur! To best configure the kernel parameters [ 1mlengthscales [ 0m transform: +ve:... Process module, which are parametric and play with this project on the hyperparameters of GP... Both test different kernel functions priors have been introduced in geostatistics idea of which fits in to data. Find the Really Good stuff algorithm for classification predictive modeling configuration that is used obtain a.... Is where you 'll find the Really Good stuff s define a synthetic classification.. Compute log likelihood using Gaussian process library implemented using PyTorch sample from the prior Gaussian process library implemented PyTorch. A common applied statistics task involves building regression models to characterize non-linear relationships between variables since our model involves straightforward! Something of a misnomer strategy, and play with this project on the Domino data science data... Latent function or the training dataâs mean ( for normalize_y=True ) the Gaussian Processes for machine (! Fail to deliver value and what you can do about it fixes the roughness parameter to 3/2 ( Matern32 and... Variables to specify a likelihood as well as priors for the synthetic binary classification and a. Is difficult to specify a likelihood as well as priors for the fitting algorithm each time one of GPflow that... A mean accuracy of about 79.0 percent function and set the lengthscale l=1 and the signal variance Ïâ²=1, Victoria. Adopting a set of points either using Markov chain Monte Carlo or an via... Majority of the Mueller Report science platform GPy by the Sheffield machine learning algorithm to justify doing so [ [... Over time demonstrate this with a worked example used as a constant been specified, and directly model the underlying... * 2 * RBF with parameters set to length_score = 1. the complete example listed below does not like. Decide to use the Gaussian Processes are a general and flexible class of models for nonlinear regression and classification on... Doing so error value as a test case for comparing the performance of package. At this moment so lets dig a bit deeper in its definition fitting Gaussian Classifier... Sequentially is just a few of them to the model, and the structure... 40, Gaussian Processes Classifier method must be configured for your specific results may given... Sense of the information is encoded within the K covariance matrices is far a! Try a few lines of scikit-learn code, learn how in my new Ebook machine! Available now in GPflow and PyMC3, respectively dataâs mean ( for )! Classification is a non-parametric algorithm that can be used algorithm requires the specification of values! Now in GPflow and PyMC3, respectively a sample from the GP prior is a complex topic to sample say... Worked example scikit-learn library provides many built-in kernels that can be fitted either using Markov chain Monte Carlo an... From this GP prior is a complex topic shows 50 samples drawn from this GP prior, Gaussian Processes is. Classifier classification machine learning community over last years, having originally been introduced,. Generate predictions label prediction for arbitrary inputs$ X^ * \$ neural networks in that engage! A likelihood as well as priors for the fitting algorithm each time one GPflow.
Bird Flyer Program, No7 Restore And Renew Eye, Miele Convection Steam Ovens, Box Leaf Privet Growth Rate, Math Ppt For Kindergarten, Miami-dade County Jobs Parks And Recreation, Fashion Magazine Cover Png, La Roche-posay Lipikar Lotion Review, Belgium Wind Storm, | 2021-06-18 08:20:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5400961637496948, "perplexity": 1497.9346639940218}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487635920.39/warc/CC-MAIN-20210618073932-20210618103932-00052.warc.gz"} |
http://mathhelpforum.com/calculus/227083-laplace-transforms.html | # Math Help - Laplace Transforms
1. ## Laplace Transforms
Hi,
For Question 2,
I know you can do them by partial intergration but I'm not sure how to show them using the method it asks.
For Question 4, I genuinely have no idea.
Any help would be appreciated.
Thanks
2. ## Re: Laplace Transforms
Originally Posted by fourierT
Hi,
For Question 2,
I know you can do them by partial intergration but I'm not sure how to show them using the method it asks.
For Question 4, I genuinely have no idea.
Any help would be appreciated.
Thanks
By "using the definition" they mean
$\large F(s) = \mathscr{L}\left \{f(t) \right \} = \displaystyle{\int_{-\infty}^{\infty}}f(t)e^{-st}~dt$
and for 2(a) you would use "integration by parts" to evaluate this integral.
for 2(b) just use their hint and do the integration.
I should note that it looks like they implicitly mean f(t)=0 for t<0, otherwise these transforms won't converge.
for 4) you should have read about using Laplace transforms to solve linear constant coefficient differential equations. This is a pretty straightforward example assuming H(t) stands for the Heaviside step function. Go re-read that section of your text. Or look at this.
3. ## Re: Laplace Transforms
Done the rest, still stuck on Question 4.
Any further help would be great, thanks.
4. ## Re: Laplace Transforms
Originally Posted by fourierT
Thanks,
I'm slightly behind on the laplace stuff hence why I had a some trouble with these.
I think I've done 2 a) and b) but still can't do 4).
If we have two transform pairs $f(t) \overset{\mathscr{L}}{\Longleftrightarrow} F(s)$
Then what does $\dfrac{d}{dt}f(t)$ correspond to in the s domain?
5. ## Re: Laplace Transforms
sF(s) - F(0) ?
6. ## Re: Laplace Transforms
No, that doesn't even make sense - derivative of F(s) with respect to t?
The Laplace transform of f'(t) has an easy-to-write relationship to the Laplace transform of f(t). It should be in your textbook, or you can look at romsek's link.
- Hollywood
7. ## Re: Laplace Transforms
Originally Posted by fourierT
sF(s) - F(0) ?
ok now apply this twice and take the Laplace transform of both sides of your differential equation and solve it in the s domain. Then transform it back to the t domain. | 2014-09-24 01:55:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7865684032440186, "perplexity": 1280.531690537523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657140890.97/warc/CC-MAIN-20140914011220-00137-ip-10-234-18-248.ec2.internal.warc.gz"} |
http://lawprofessors.typepad.com/antitrustprof_blog/2010/11/partial-collusion-with-asymmetric-cross-price-effects.html | # Antitrust & Competition Policy Blog
## A Member of the Law Professor Blogs Network
Thursday, November 18, 2010
### Partial collusion with asymmetric cross-price effects
Posted by D. Daniel Sokol
Luca Savorelli (University of Bologna - Econ) explains Partial collusion with asymmetric cross-price effects.
ABSTRACT: Asymmetries in cross-price elasticities have been demonstrated by several empirical studies. In this paper we study from a theoretical stance how introducing asymmetry in the substitution effects influences the sustainability of collusion. We characterize the equilibrium of a linear Cournot duopoly with substitute goods, and consider substitution e¤ects which are asymmetric in magnitude. Within this framework, we study partial collusion using Friedman (1971) solution concept. Our main result shows that the interval of quantities supporting collusion in the asymmetric setting is always smaller than the interval in the symmetric benchmark. Thus, the asymmetry in the substitution effects makes collusion more difficult to sustain. This implies that previous Antitrust decisions could be reversed by considering the role of this kind of asymmetry. | 2014-12-21 18:08:50 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8019714951515198, "perplexity": 3445.500329885313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802772125.148/warc/CC-MAIN-20141217075252-00017-ip-10-231-17-201.ec2.internal.warc.gz"} |
https://cs.stackexchange.com/questions/119174/thomas-write-rula-andview-serializability | # Thomas write rula andview serializability
Yesterday our Professor told in the class that Thomas Write rule ensures view serializability, but while surfing on this topic today on internet I am not able to find any information about that claim. So is it always $$TRUE?$$
What he told was this-->
Timestamp ordering ensures conflict serializability
Thomas write rule ensures view serializability
• Did you just try asking your professor?
– Juho
Jan 4 '20 at 8:32
• What to ask when he has already mentioned that in class... But i am not able to find a single line about it on the web Jan 4 '20 at 13:32
• For instance, couldn't you say "You said that X. Is it really always true? Why?". Anyway, I hope your question attracts attention here.
– Juho
Jan 4 '20 at 13:58
• Will do when college resumes on monday..Thanks anyway.. :) Jan 4 '20 at 14:07
## 1 Answer
Timestamp ordering ensures conflict serializability
proof :
Assume that in precedence graph of schedule, we have edge $$T_i \rightarrow T_j$$.
Now, When $$T_j$$ puts it's request for this conflicting operation it will be continue only if $$\text{timestamp}(T_i) < \text{timestamp}(T_j)$$.
Now, for schedule to be non conflict serializable there must exist a cycle in precedence graph of that schedule and let say that cycle is $$T_i, T_{i+1}, ...., T_i$$.
Now, note that that cycle can't exist because existence of cycle implies $$\text{timestamp}(T_i) < \text{timestamp}(T_i)$$. (As timestamp are unique.)
So, we conclude that there can't be any such cycle which in turn implies that timestamp ordering protocol allows only conflict serializable schedules.
• Ya..this I know brother but i want to confirm the claim about view serailizibility Jan 5 '20 at 13:47
• Yes, working on that will post if succeed. :) Jan 5 '20 at 13:49
• Thanks a lot.. :) Jan 5 '20 at 13:57
• @Turing101, here is a proof of second one Jan 5 '20 at 14:34
• Thanks a ton mate.. :) Jan 5 '20 at 18:00 | 2022-01-16 11:05:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4064694344997406, "perplexity": 2189.335355403991}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320299852.23/warc/CC-MAIN-20220116093137-20220116123137-00669.warc.gz"} |
https://brilliant.org/problems/im-tired-of-thinking-up-titles-for-problems/ | # Shouldn't I Post This In July?
$$28$$ random draws are made from the set $$\{1,2,3,4,5,6,7,8,9,A,B,C,D,J,K,L,U,X,Y,Z\}$$ containing $$20$$ elements.
Let $$P$$ be the probability that the sequence
$$CUBAJULY1987$$
occurs in that order in the chosen sequence. If $$P$$ can be expressed as $$\frac {a\times b^c-d\times b^e}{b^f}$$, find $$a+b+c+d+e+f$$.
× | 2017-05-23 14:57:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9575620889663696, "perplexity": 372.0739055119634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607647.16/warc/CC-MAIN-20170523143045-20170523163045-00345.warc.gz"} |
https://proofwiki.org/wiki/Definition:Regular_Pentagon | # Definition:Pentagon/Regular
(Redirected from Definition:Regular Pentagon)
## Definition
A regular pentagon is a pentagon which is both equilateral and equiangular.
That is, a regular polygon with $5$ sides.
That is, a pentagon in which all the sides are the same length, and all the vertices have the same angle:
## Linguistic Note
The word pentagon derives from the Classical Greek:
pente (πέντε), meaning five
gon, deriving from the Greek word for corner. | 2023-03-27 04:41:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7336280941963196, "perplexity": 1845.1118468696943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946637.95/warc/CC-MAIN-20230327025922-20230327055922-00544.warc.gz"} |
http://mathhelpforum.com/calculus/48472-integral-conservative-vector-field.html | # Math Help - integral of a conservative vector field
1. ## integral of a conservative vector field
The vector field F = (x + z)i + zj + (x = y)k is conservative.
(a) Use the definition of the line integral to evaluate F ∙ dr along the path r(t) = ti + t2j + t3k for 0 t 1. The goal here is to carry out the full calculation, rather than using the fundamental theorem for line integrals).
(b) Find a corresponding potential function f(x,y,z) such that Ñf = F.
(c) Use the result calculated in (b) to re-evaluate the integral in (a).
2. Originally Posted by wik_chick88
The vector field F = (x + z)i + zj + (x = y)k is conservative.
(a) Use the definition of the line integral to evaluate F ∙ dr along the path r(t) = ti + t2j + t3k for 0 t 1. The goal here is to carry out the full calculation, rather than using the fundamental theorem for line integrals).
(b) Find a corresponding potential function f(x,y,z) such that Ñf = F.
(c) Use the result calculated in (b) to re-evaluate the integral in (a).
It would help a lot if you showed your working. Where are you stuck?
For (c): t = 0 => (0, 0, 0) and t = 1 => (1, 1, 1). You should know that the answer to (a) will therefore be f(1, 1, 1) - f(0, 0, 0) ....
3. Originally Posted by wik_chick88
The vector field F = (x + z)i + zj + (x = y)k is conservative.
(a) Use the definition of the line integral to evaluate F ∙ dr along the path r(t) = ti + t2j + t3k for 0 t 1. The goal here is to carry out the full calculation, rather than using the fundamental theorem for line integrals).
(b) Find a corresponding potential function f(x,y,z) such that Ñf = F.
(c) Use the result calculated in (b) to re-evaluate the integral in (a).
(a) $
\int_S {\bf{F}} \cdot {\bf{dr}}=\int_{t=0}^1 [(t+t^3){\bf{i}} + t^3 {\bf{j}} + (t+t^2){\bf{k}}] \cdot [{\bf{i}} +2 t {\bf{j}}+ 3 t^2 {\bf{k}}] \ dt
$
RonL
4. Originally Posted by wik_chick88
The vector field F = (x + z)i + zj + (x + y)k is conservative.
(a) Use the definition of the line integral to evaluate F ∙ dr along the path r(t) = ti + t^2j + t^3k for 0 t 1. The goal here is to carry out the full calculation, rather than using the fundamental theorem for line integrals).
(b) Find a corresponding potential function f(x,y,z) such that Ñf = F.
(c) Use the result calculated in (b) to re-evaluate the integral in (a).
i am completely stuck.
5. Originally Posted by wik_chick88
i am completely stuck.
for (b): i told you how to find a potential function $f(x,y,z)$ before. i gave you a full solution to one of your problems here. please review it
for (c): recall the fundamental theorem for line integrals.
if $C$ is a smooth curve given by the vector function $\bold{r}(t)$ for $a \le t \le b$, and $f$ is a continuous function whose gradient vector $\nabla f$ is continuous on $C$, then
$\int_C \nabla f \cdot d \bold{r} = f(\bold{r}(b)) - f(\bold{r}(a))$
note here that your $\bold{F} = \nabla f$ that is mentioned
6. Originally Posted by Jhevon
for (b): i told you how to find a potential function $f(x,y,z)$ before. i gave you a full solution to one of your problems here. please review it
for (c): recall the fundamental theorem for line integrals.
if $C$ is a smooth curve given by the vector function $\bold{r}(t)$ for $a \le t \le b$, and $f$ is a continuous function whose gradient vector $\nabla f$ is continuous on $C$, then
$\int_C \nabla f \cdot d \bold{r} = f(\bold{r}(b)) - f(\bold{r}(a))$
note here that your $\bold{F} = \nabla f$ that is mentioned
ok i got for b:
$f(x,y,z)$ = x^2/2 + zy + xz is that right? and then how do i find
$\int_C \nabla f \cdot d \bold{r} = f(\bold{r}(b)) - f(\bold{r}(a))$ ?????
7. Originally Posted by wik_chick88
ok i got for b:
$f(x,y,z)$ = x^2/2 + zy + xz is that right?
i don't know. you have a typo in your original question. in the kth coordinate, should it be x - y or x + y?
and then how do i find
$\int_C \nabla f \cdot d \bold{r} = f(\bold{r}(b)) - f(\bold{r}(a))$ ?????
this is just like the fundamental theorem of calculus. you know r(t), so find r(0) and find r(1) and plug it into f (which we have yet to determine). you take the x-component of the vector and put that for x in your function, the y-component for y etc
8. Originally Posted by Jhevon
i don't know. you have a typo in your original question. in the kth coordinate, should it be x - y or x + y?
this is just like the fundamental theorem of calculus. you know r(t), so find r(0) and find r(1) and plug it into f (which we have yet to determine). you take the x-component of the vector and put that for x in your function, the y-component for y etc
sorry the original question had x+y as the component for k. so r(0) = 0i + 0j + 0k and r(1) = i + j + k. where do i go from here?
9. Originally Posted by wik_chick88
sorry the original question had x+y as the component for k. so r(0) = 0i + 0j + 0k and r(1) = i + j + k. where do i go from here?
so f(r(1)) = f(1,1,1) and f(r(0)) = f(0,0,0)
10. by the way, your f is right. you left of the arbitrary constant though
it won't matter when you are applying the fundamental theorem, but it matters for your answer in part (b)
11. Originally Posted by Jhevon
so f(r(1)) = f(1,1,1) and f(r(0)) = f(0,0,0)
A lot of time could have been saved if the OP re-read post #2 .... | 2015-03-06 13:08:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 23, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8781473636627197, "perplexity": 625.0063073277354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936468546.71/warc/CC-MAIN-20150226074108-00176-ip-10-28-5-156.ec2.internal.warc.gz"} |
https://puzzling.meta.stackexchange.com/questions/5606/mathjax-usage-guidelines | # MathJax Usage Guidelines
It's cool that we can use MathJax to do nice stuff like:
$$\{0+ai,0-ai^\frac{2}3,0+\frac{i}a,0-\frac{i}a\}\forall a \in A \subseteq \Bbb{R}$$
(stolen from this Jonathan Allan answer)
or this:
$$\sum_{i=0}^n i^2 = \frac{(n^2+n)(2n+1)}{6}$$
(stolen from this Mathematics Meta answer).
But we also have MathJax being used in cases like this question or this question or this question or this question, where's it's simply rendering digits differently. Or this answer where it does indeed help the formatting a bit, but is nothing that couldn't be achieved/communicated with regular characters.
# Why this might matter
So who cares? Just use it or don't, as long as the question is clear. Point taken, imaginary reader, but there might be a few downsides to using MathJax if it's not strictly necessary.
• Based on this answer, any question with MathJax in the title will be automatically excluded from the Hot Network Questions.
• It can, at times, make search results look weird:
• Based on this answer, MathJax is likely to eliminate a question/answer from being used on the Tour page. Yes that is a very tiny problem.
• Also doesn't seem like it works in "tooltips" on question hovering:
• As Alconja pointed out in comments, doesn't work in comments on mobile.
# So...
• Does this matter at all?
• Should we discourage and/or edit out uses of MathJax when they're not really necessary?
• Are there are general guidelines for when it is or isn't worth it to use MathJax? E.g., $$x^2$$ seems obviously better than x^2, but is $$4-3=1$$ that much better than 4 - 3 = 1?
• I don't have time to write a full fledged response but basically: Using or not using MathJax doesn't detract from the quality of puzzles being posted, and as I understand, that's the goal of this site. If there is an inconvenience with search results, in puzzles where MathJax isn't needed, one could simply notify the author with a comment. As to your last question: "Worth" is subjective and really only pertains to the author of the puzzle, if they feel like using MathJax and they think it's worth it, let them. – Areeb Oct 27 '16 at 22:07
• @Areeb I think this is a very valid question because it's not just the author of the question that might feel like using it. Anyone with edit rights may feel that the question (or answer) would look better with MathJax. Some guidelines may be helpful. – Gordon K Oct 27 '16 at 22:42
• @GordonK I feel that as long as both parties are ok with the edit, it's not really a problem. And if the author doesn't like the changes, they can always roll back, which is enough to get their message across. I don't see how guidelines would make too much of an impact, if any. – Areeb Oct 27 '16 at 22:47
• One other minor, but related point: mathjax in comments doesn't render in the mobile app (though it does in question/answer bodies) – Alconja Oct 28 '16 at 1:11
• I looooove to use MathJax, but avoid it when possible for just the reasons here and some others (e.g, render delay and font consistency). Much mathematics can be done without it, by using italics and <sup>/<sub> for instance. And then there's ⁄ for very nice simple fractions. – humn Oct 28 '16 at 4:05
• @Alconja I recently discovered that you can render MathJax in comments on mobile. You just need to tap on the comment, then tap the ... More button below it, and select Render MathJax from the list of choices. – GentlePurpleRain Oct 30 '16 at 2:49
• @GentlePurpleRain - Thanks for the tip. $\text{You learn something new every day}$... – Alconja Oct 30 '16 at 3:27
• What I don't like about MathJax is that it doesn't render in spoilers in mobile browsers (unless you open the spoiler before it loads). Search issues seem really minor, it's not like math expressions would look much better in plaintext. – ffao Oct 30 '16 at 14:31
• I use desktop version on mobile. It completely destroys mathjax. – user17008 Nov 1 '16 at 23:42 | 2021-05-13 00:41:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2368350625038147, "perplexity": 1191.0549637580552}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991413.30/warc/CC-MAIN-20210512224016-20210513014016-00466.warc.gz"} |
https://chem.libretexts.org/Bookshelves/Inorganic_Chemistry/Book%3A_Chemistry_of_the_Main_Group_Elements_(Barron)/07%3A_Group_14/7.10%3A_Semiconductor_Grade_Silicon | ## Introduction
The synthesis and purification of bulk polycrystalline semiconductor material represents the first step towards the commercial fabrication of an electronic device. This polycrystalline material is then used as the raw material for the formation of single crystal material that is processed to semiconductor wafers. The strong influence on the electric characteristics of a semiconductors exhibited by small amounts of some impurities requires that the bulk raw material be of very high purity (> 99.9999%). Although some level of purification is possible during the crystallization process it is important to use as high a purity starting material as possible.
Following oxygen (46%), silicon (L. silicis flint) is the most abundant element in the earth's crust (28%). However, silicon does not occur in its elemental form, but as its oxide (SiO2) or as silicates. Sand, quartz, amethyst, agate, flint, and opal are some of the forms in which the oxide appears. Granite, hornblende, asbestos, feldspar, clay and mica, etc. are a few of the numerous silicate minerals. With such boundless supplies of the raw material, the costs associated with the production of bulk silicon is not one of abstraction and conversion of the oxide(s), but of purification of the crude elemental silicon. While 98% elemental silicon, known as metallurgical-grade silicon (MGS), is readily produced on a large scale, the requirements of extreme purity for electronic device fabrication require additional purification steps in order to produce electronic-grade silicon (EGS). Electronic-grade silicon is also known as semiconductor-grade silicon (SGS). In order for the purity levels to be acceptable for subsequent crystal growth and device fabrication, EGS must have carbon and oxygen impurity levels less than a few parts per million (ppm), and metal impurities at the parts per billion (ppb) range or lower. Table $$\PageIndex{1}$$ and Table $$\PageIndex{2}$$ give typical impurity concentrations in MGS and EGS, respectively. Besides the purity, the production cost and the specifications must meet the industry desires.
Element Concentration (ppm) Element Concentration (ppm) aluminum 1000-4350 manganese 50-120 boron 40-60 molybdenum < 20 calcium 245-500 nickel 10-105 chromium 50-200 phosphorus 20-50 copper 15-45 titanium 140-300 iron 1550-6500 vanadium 50-250 magnesium 10-50 zirconium 20
Element Concentration (ppb) Element Concentration (ppb) arsenic < 0.001 gold < 0.00001 antimony < 0.001 iron 0.1-1.0 boron ≤ 0.1 nickel 0.1-0.5 carbon 100-1000 oxygen 100-400 chromium < 0.01 phosphorus ≤ 0.3 cobalt 0.001 silver 0.001 copper 0.1 zinc < 0.1
The typical source material for commercial production of elemental silicon is quartzite gravel; a relatively pure form of sand (SiO2). The first step in the synthesis of silicon is the melting and reduction of the silica in a submerged-electrode arc furnace. An example of which is shown schematically in Figure $$\PageIndex{1}$$, along with the appropriate chemical reactions. A mixture of quartzite gravel and carbon are heated to high temperatures (ca. 1800 °C) in the furnace. The carbon bed consists of a mixture of coal, coke, and wood chips. The latter providing the necessary porosity such that the gases created during the reaction (SiO and CO) are able to flow through the bed.
The overall reduction reaction of SiO2 is expressed in (7.10.1), however, the reaction sequence is more complex than this overall reaction implies, and involves the formation of SiC and SiO intermediates. The initial reaction between molten SiO2 and C, (7.10.2), takes place in the arc between adjacent electrodes, where the local temperature can exceed 2000 °C. The SiO and CO thus generated flow to cooler zones in the furnace where SiC is formed, (7.10.3), or higher in the bed where they reform SiO2 and C, (7.10.2). The SiC reacts with molten SiO2, (7.10.4), producing the desired silicon along with SiO and CO. The molten silicon formed is drawn-off from the furnace and solidified.
$\text{SiO}_2\text{(liquid) + 2 C(solid)} \rightarrow \text{Si(liquid) + 2 CO(gas)}$
$\text{SiO}_2\text{ + 2 C} \xrightleftharpoons[\text{<1600 °C}]{\text{>1700 °C}} \text{SiO + CO}$
$\text{SiO + 2C} \rightarrow \text{SiC + CO (1600 - 1700 °C)}$
$\text{SiC _ SiO}_2 \rightarrow \text{Si + SiO + CO}$
The as-produced MGS is approximately 98-99% pure, with the major impurities being aluminum and iron (Table $$\PageIndex{1}$$), however, obtaining low levels of boron impurities is of particular importance, because it is difficult to remove and serves as a dopant for silicon. The drawbacks of the above process are that it is energy and raw material intensive. It is estimated that the production of one metric ton (1,000 kg) of MGS requires 2500 - 2700 kg quartzite, 600 kg charcoal, 600 - 700 kg coal or coke, 300 - 500 kg wood chips, and 500,000 kWh of electric power. Currently, approximately 500,000 metric tons of MGS are produced per year, worldwide. Most of the production (ca. 70%) is used for metallurgical applications (e.g., aluminum-silicon alloys are commonly used for automotive engine blocks) from whence its name is derived. Applications in a variety of chemical products such as silicone resins account for about 30%, and only 1% or less of the total production of MGS is used in the manufacturing of high-purity EGS for the electronics industry. The current worldwide consumption of EGS is approximately 5 x 106 kg per year.
Electronic-grade silicon (EGS) is a polycrystalline material of exceptionally high purity and is the raw material for the growth of single-crystal silicon. EGS is one of the purest materials commonly available, see Table $$\PageIndex{2}$$. The formation of EGS from MGS is accomplished through chemical purification processes. The basic concept of which involves the conversion of MGS to a volatile silicon compound, which is purified by distillation, and subsequently decomposed to re-form elemental silicon of higher purity (i.e., EGS). Irrespective of the purification route employed, the first step is physical pulverization of MGS followed by its conversion to the volatile silicon compounds.
A number of compounds, such as monosilane (SiH4), dichlorosilane (SiH2Cl2), trichlorosilane (SiHCl3), and silicon tetrachloride (SiCl4), have been considered as chemical intermediates. Among these, SiHCl3 has been used predominantly as the intermediate compound for subsequent EGS formation, although SiH4 is used to a lesser extent. Silicon tetrachloride and its lower chlorinated derivatives are used for the chemical vapor deposition (CVD) growth of Si and SiO2. The boiling points of silane and its chlorinated products (Table $$\PageIndex{3}$$) are such that they are conveniently separated from each other by fractional distillation.
Compound Boiling point (°C) SiH4 -112.3 SiH3Cl -30.4 SiH2Cl2 8.3 SiHCl3 31.5 SiCl4 57.6
The reasons for the predominant use of SiHCl3 in the synthesis of EGS are as follows:
1. SiHCl3 can be easily formed by the reaction of anhydrous hydrogen chloride with MGS at reasonably low temperatures (200 - 400 °C);
2. it is liquid at room temperature so that purification can be accomplished using standard distillation techniques;
3. it is easily handled and if dry can be stored in carbon steel tanks;
4. its liquid is easily vaporized and, when mixed with hydrogen it can be transported in steel lines without corrosion;
5. it can be reduced at atmospheric pressure in the presence of hydrogen;
6. its deposition can take place on heated silicon, thus eliminating contact with any foreign surfaces that may contaminate the resulting silicon; and
7. it reacts at lower temperatures (1000 - 1200 °C) and at faster rates than does SiCl4.
### Chlorosilane (Seimens) process
Trichlorosilane is synthesized by heating powdered MGS with anhydrous hydrogen chloride (HCl) at around 300 °C in a fluidized-bed reactor, (7.10.5).
$\text{Si(solid_ + 3 HCl(gas)} \xrightleftharpoons[\text{>900 °C}]{\text{ca. 300 °C}} \text{SiHCl}_3\text{(vapor) + H}_2\text{(gas)}$
Since the reaction is actually an equilibrium and the formation of SiHCl3 highly exothermic, efficient removal of generated heat is essential to assure a maximum yield of SiHCl3. While the stoichiometric reaction is that shown in (7.10.5), a mixture of chlorinated silanes is actually prepared which must be separated by fractional distillation, along with the chlorides of any impurities. In particular iron, aluminum, and boron are removed as FeCl3 (b.p. = 316 °C), AlCl3 (m.p. = 190 °C subl.), and BCl3 (b.p. = 12.65 °C), respectively. Fractional distillation of SiHCl3 from these impurity halides result in greatly increased purity with a concentration of electrically active impurities of less than 1 ppb.
EGS is prepared from purified SiHCl3 in a chemical vapor deposition (CVD) process similar to the epitaxial growth of Si. The high-purity SiHCl3 is vaporized, diluted with high-purity hydrogen, and introduced into the Seimens deposition reactor, shown schematically in Figure $$\PageIndex{2}$$. Within the reactor, thin silicon rods called slim rods (ca. 4 mm diameter) are supported by graphite electrodes. Resistance heating of the slim rods causes the decomposition of the SiHCl3 to yield silicon, as described by the reverse reaction shown in (7.10.5).
The shift in the equilibrium from forming SiHCl3 from Si at low temperature, to forming Si from SiHCl3 at high temperature is as a consequence of the temperature dependence, (7.10.6), of the equilibrium constant, (7.10.7) where ρ = partial pressure, for (7.10.5). Since the formation of SiHCl3 is exothermic, i.e., ΔH < 0, an increase in the temperature causes the partial pressure of SiHCl3 to decrease. Thus, the Siemens process is typically run at ca. 1100 °C, while the reverse fluidized bed process is carried out at 300 °C.
$\text{lnK}_{\text{p}} \text{ = } \dfrac{ \text{-}\Delta\text{H}}{\text{RT}}$
$\text{K}_{\text{p}} \text{ = } \dfrac{^{\rho}\text{SiHCl}_3 \text{ } ^{\rho}\text{H}_2}{^{\rho}\text{HCl}}$
The slim rods act as a nucleation point for the deposition of silicon, and the resulting polycrystalline rod consists of columnar grains of silicon (polysilicon) grown perpendicular to the rod axis. Growth occurs at less than 1 mm per hour, and after deposition for 200 to 300 hours high-purity (EGS) polysilicon rods of 150 - 200 mm in diameter are produced. For subsequent float-zone refining the polysilicon EGS rods are cut into long cylindrical rods. Alternatively, the as-formed polysilicon rods are broken into chunks for single crystal growth processes, for example Czochralski melt growth.
In addition to the formation of silicon, the HCl coproduct reacts with the SiHCl3 reactant to form silicon tetrachloride (SiCl4) and hydrogen as major byproducts of the process, (7.10.8). This reaction represents a major disadvantage with the Seimens process: poor efficiency of silicon and chlorine consumption. Typically, only 30% of the silicon introduced into CVD reactor is converted into high-purity polysilicon.
$\text{HCl + SiHCl}_3 \rightarrow \text{SiCl}_4\text{ + H}_2$
In order to improve efficiency the HCl, SiCl4, H2, and unreacted SiHCl3 are separated and recovered for recycling. Figure $$\PageIndex{3}$$ illustrates the entire chlorosilane process starting with MGS and including the recycling of the reaction byproducts to achieve high overall process efficiency. As a consequence, the production cost of high-purity EGS depends on the commercial usefulness of the byproduct, SiCl4. Additional disadvantages of the Seimens process are derived from its relatively small batch size, slow growth rate, and high power consumption. These issues have lead to the investigation of alternative cost efficient routes to EGS.
### Silane process
An alternative process for the production of EGS that has begun to receive commercial attention is the pyrolysis of silane (SiH4). The advantages of producing EGS from SiH4 instead of SiHCl3 are potentially lower costs associated with lower reaction temperatures, and less harmful byproducts. Silane decomposes < 900 °C to give silicon and hydrogen, (7.10.9).
$\text{SiH}_4\text{(vapor)} \rightarrow \text{Si(solid) + 2 H}_2\text{(gas)}$
Silane may be prepared by a number of routes, each having advantages with respect to purity and production cost. The simplest process involves the direct reaction of MGS powders with magnesium at 500 °C in a hydrogen atmosphere, to form magnesium silicide (Mg2Si). The magnesium silicide is then reacted with ammonium chloride in liquid ammonia below 0 °C, (7.10.10).
$\text{Mg}_2\text{Si + 4 NH}_4\text{Cl} \rightarrow \text{SiH}_4\text{ + 2 MgCl}_2\text{ + 5 NH}_3$
This process is ideally suited to the removal of boron impurities (a p-type dopant in Si), because the diborane (B2H6) produced during the reaction forms the Lewis acid-base complex, H3B(NH3), whose volatility is sufficiently lower than SiH4, allowing for the purification of the latter. It is possible to prepare EGS with a boron content of ≤ 20 ppt using SiH4 synthesized in this manner. However, phosphorus (another dopant) in the form of PH3 may be present as a contaminant requiring subsequent purification of the SiH4.
Alternative routes to SiH4 involve the chemical reduction of SiCl4 by either lithium hydride, (7.10.11), lithium aluminum hydride, (7.10.12), or via hydrogenation in the presence of elemental silicon, (7.10.13) - (7.10.16). The hydride reduction reactions may be carried-out on relatively large scales (ca. 50 kg), but only batch processes. In contrast, Union Carbide has adapted the hydrogenation to a continuous process, involving disproportionation reactions of chlorosilanes, (7.10.14) - (7.10.16), and the fractional distillation of silane, Table $$\PageIndex{3}$$.
$\text{SiCl}_4\text{ + LiH} \rightarrow \text{SiH}_4\text{ + 4 LiCl}$
$\text{SiCl}_4\text{ + LiAlH}_4 \rightarrow \text {SiH}_4 \text{ + LiCl + AlCl}_4$
$\text{SiCl}_4\text{ + 2 H}_2\text{ + Si(98%)} \rightarrow \text{4 SiHCl}_3$
$\text{2 SiHCl}_3 \rightarrow \text{SiH}_2\text{Cl}_2\text{ + 2 SiHCl}_3$
$\text{3 SiH}_2\text{Cl}_2 \rightarrow \text{SiH}_2\text{Cl + 2 SiHCl}_3$
$\text{2 SiH}_3\text{Cl} \rightarrow \text{SiH}_4\text{ + SiH}_2\text{Cl}_2$
Pyrolysis of silane on resistively heated polysilicon filaments at 700 - 800 °C yields polycrystalline EGS. As noted above, the EGS formed has remarkably low boron impurities compared with material prepared from trichlorosilane. Moreover, the resulting EGS is less contaminated with transition metals from the reactor container because SiH4 decomposition does not cause as much of a corrosion problem as is observed with halide precursor compounds.
### Granular polysilicon deposition
Both the chlorosilane (Seimens) and silane processes result in the formation of rods of EGS. However, there has been increased interest in the formation of granular polycrystalline EGS. This process was developed in 1980’s, and relies on the decomposition of SiH4 in a fluidized-bed deposition reactor to produce free-flowing granular polysilicon.
Tiny silicon particles are fluidized in a SiH4/H2 flow, and act as seed crystal onto which polysilicon deposits to form free-flowing spherical particles. The size distribution of the particles thus formed is over the range from 0.1 to 1.5 mm in diameter with an average particle size of 0.7 mm. The fluidized-bed seed particles are originally made by grinding EGS in a ball (or hammer) mill and leaching the product with acid, hydrogen peroxide, and water. This process is time-consuming and costly, and tended to introduce undesirable impurities from the metal grinders. In a new method, large EGS particles are fired at each other by a high-speed stream of inert gas and the collision breaks them down into particles of suitable size for a fluidized bed. This process has the main advantage that it introduces no foreign materials and requires no leaching or other post purification.
The fluidized-bed reactors are much more efficient than traditional rod reactors as a consequence of the greater surface area available during CVD growth of silicon. It has been suggested that fluidized-bed reactors require 1/5 to 1/10 the energy, and half the capital cost of the traditional process. The quality of fluidized-bed polysilicon has proven to be equivalent to polysilicon produced by the conventional methods. Moreover, granular EGS in a free-flowing form, and with high bulk density, enables crystal growers to obtain the high, reproducible production yields out of each crystal growth run. For example, in the Czochralski crystal growth process, crucibles can be quickly and easily filled to uniform loading with granular EGS, which typically exceed those of randomly stacked polysilicon chunks produced by the Siemens silane process.
## Zone refining
The technique of zone refining is used to purify solid materials and is commonly employed in metallurgical refining. In the case of silicon may be used to obtain the desired ultimate purity of EGS, which has already been purified by chemical processes. Zone refining was invented by Pfann, and makes use of the fact that the equilibrium solubility of any impurity (e.g., Al) is different in the solid and liquid phases of a material (e.g., Si). For the dilute solutions, as is observed in EGS silicon, an equilibrium segregation coefficient (k0) is defined by k0 = Cs/Cl, where Cs and Cl are the equilibrium concentrations of the impurity in the solid and liquid near the interface, respectively.
If k0 is less than 1 then the impurities are left in the melt as the molten zone is moved along the material. In a practical sense a molten zone is established in a solid rod. The zone is then moved along the rod from left to right. If k0 < 1 then the frozen part left on the trailing edge of the moving molten zone will be purer than the material that melts in on the right-side leading edge of the moving molten zone. Consequently the solid to the left of the molten zone is purer than the solid on the right. At the completion of the first pass the impurities become concentrated to the right of the solid sample. Repetition of the process allows for purification to exceptionally high levels. Table $$\PageIndex{4}$$ lists the equilibrium segregation coefficients for common impurity and dopant elements in silicon; it should be noted that they are all less than 1.
Element k0 Element k0 aluminum 0.002 iron 8 x 10-6 boron 0.8 oxygen 0.25 carbon 0.07 phosphorus 0.35 copper 4 x 10-6 antimony 0.023
## Bibliography
• K. G. Baraclough, K. G., in The Chemistry of the Semiconductor Industry, Eds. S. J. Moss and A. Ledwith, Blackie and Sons, Glasgow, Scotland (1987).
• L. D. Crossman and J. A. Baker, Semiconductor Silicon 1977, Electrochem. Soc., Princeton, New Jersey (1977).
• W. C. O’Mara, Ed. Handbook of Semiconductor Silicon Technology, Noyes Pub., New Jersey (1990).
• W. G. Pfann, Zone Melting, John Wiley & Sons, New York, (1966).
• F. Shimura, Semiconductor Silicon Crystal Technology, Academic Press (1989). | 2021-07-29 06:09:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5835332274436951, "perplexity": 4077.0817326858873}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153816.3/warc/CC-MAIN-20210729043158-20210729073158-00152.warc.gz"} |
https://delong.typepad.com/sdj/2009/05/progress-in-macroeconomics.html | ## Progress in Macroeconomics?
A decade ago, Olivier Blanchard, now IMF chief economist, wrote that there had been a lot of progress in macroeconomics since 1920:
What Do We Know that Fisher and Wicksell Did Not?: The answer to the question... is: A lot.... Pre 1940. A period of exploration.... From 1949 to 1980. A period of consolidation... an integrated framework was developed--starting with the IS-LM, all the way to dynamic general equilibrium models--and used to clarify the role of shocks and propagation mechanisms.... since 1980. A new period of exploration, focused on the role of imperfections... nominal price setting... incompleteness of markets... asymmetric information... search and bargaining in decentralized markets....
[...]
The right [picture] is one of a steady accumulation of knowledge.... [R]evolutionaries make the news... [their ideas are] discarded... bastardized, then integrated. The insights become part of the core....
[...]
Relative to Wicksell and Fisher, macroeconomics today is solidly grounded in a general equilibrium structure. Modrn models characterize the economy as being in temporary equilibrium, given the implications of the past, and the anticipations of th efuture. They provide an interpretation of shocks working their way through propagation mechanisms...
[...]
One way to end is to ask: Of how much use was macroeconomic research in understanding... the Asian crisis?... Macroeconomists did not predict either the time, place, or scope of the crisis.... [W]hen the crisis started, macroeconomic mistakes... were made. But fairly quickly the nature of the crisis was better understood, and the mistakes correcxted. And most of the tools needed were there.... since then, a large amount of further research has taken place, leading to a better understanding of the role of financial intermediaries in exchange-rate crisis...
Even then Paul Krugman snarked:
Paul Krugman recently wondered how many macroeconomists still believe in the IS-LM model. The answer is probably that most do, but many of them probably do not know it well enough to tell..
Today things look considerably different on the progress-in-macroeconomics front. John Quiggin:
Refuted/obsolete economic doctrines #7: New Keynesian macroeconomics at John Quiggin: [Here is] a new entry for my list of refuted economic doctrines... the target... has... [been] rendered obsolete by events... New Keynesianism an approach to macroeconomics, to which Akerlof and Shiller have made some of the biggest contributions, but which they have now... repudiated.... [T]he research task was seen as one of identifying minimal deviations from the standard [rational foresight, self-interest, and competiative markets] microeconomic assumptions which yield Keynesian macroeconomic conclusions.... Akerlof’s ‘menu costs’ arguments... are an ideal example of this kind of work. New Keynesian macroeconomics has been tested by the current global financial and macroeconomic crisis and has, broadly speaking, been found wanting. The analysis of those Keynesians who warned of impending crisis combined an ‘old Keynesian’ analysis of mounting economic imbalances with a Minskyan focus on financial instability.... [T]he policy response... has been informed mainly by old-fashioned ‘hydraulic’ Keynesianism... massive economic stimulus... large-scale intervention in the financial system. The opponents of Keynesianism have retreated even further into the past, reviving the anti-Keynesian arguments of the 1930s and arguing at length over policy responses to the Great Depression.
There is of course, still a need to explain why wages do not adjust rapidly to clear labour markets in the face of an external financial shock. But in an environment where the workings of sophisticated financial markets display collective irrationality on a massive scale, there is much less reason to be concerned about the fact that such an explanation must involve deviations from rationality, and seeking to minimise those deviations....
New Keynesianism... was a defensive adjustment to the dominance of free market ideas.... New Keynesians sought a theoretical framework that would justify medium-term macroeconomic management based on manipulation of interest rates by central banks, and a fiscal policy that allowed automatic stabilisers to work, against advocates of fixed monetary rules and annual balanced budgets. But now that both... the efficient markets hypothesis and the policy framework that brought us the Great Moderation have collapsed, there is no need for such a defensive stance...
George Akerlof and Robert Shiller agree with Quiggin rather than Blanchard:
Akerlof and Shiller, Animal Spirits: The economics of the textbooks seeks to minimise as much as possible departures from pure economic motivation and from rationality.... [E]ach of us has spent a good portion of his life writing in this tradition. The [self-interest and rational foresight-based] economics of Adam Smith is well understood. Explanations in terms of small deviations from Smith’s ideal system are thus clear because they are posed within a framework that is already very well understood. But that does not mean that these small deviations from Smith’s system describe how the economy actually works.... In our view, economic theory should be derived not from the minimal deviations from the system of Adam Smith but rather from the deviations [from competitive markets, self-interested motivation, and rational foresight] that actually do occur...
So does Greg Clark: his rant from his seat as chair of the U.C. Davis Economics Department:
Dismal scientists: how the crash is reshaping economics - The Atlantic Business Channel: In the long post WWII boom, as free market ideology triumphed, economists have won for themselves a privileged place inside academia.... [C]ash.... Not much by the pornographic standards of finance, but a fat paycheck compared to your average English or Physics professor. It is not just the stars. Journeyman assistant professors in economics routinely come in at $100,000 or more... fresh from their PhDs, without a publication to their name and without years of low pay as post-docs. The high salaries have been accompanied by dramatic declines in the teaching burden.... Why did academic economics generate so much prestige?... [W]hat drove demand was the unquenchable thirst for economists by banks, government agencies, and business schools - the Feds, the Treasury, the IMF, the World Bank, the ECB. Economics had powerful insights to offer the world, insights worth a lot of treasure. Economics was powerful voodoo.... The current recession has revealed... as useless the mathematical contortions of academic economics. There is no totemic power.... (1) Almost no-one predicted the world wide downtown. Academic economists were confident that episodes like the Great Depression had been confined to the dust bins of history. There was indeed much recent debate about the sources of "The Great Moderation" in modern economies, the declining significance of business cycles.... [M]acroeconomists had turned their considerable talents to a bizarre variety of rococo academic elaborations. With nothing of importance to explain, why not turn to the mysteries of online dating, for example.... (2) The debate about the bank bailout, and the stimulus package, has all revolved around issues that are entirely at the level of Econ 1. What is the multiplier from government spending? Does government spending crowd out private spending? How quickly can you increase government spending? If you got a A in college in Econ 1 you are an expert in this debate: fully an equal of Summers and Geithner. The bailout debate has also been conducted in terms that would be quite familiar to economists in the 1920s and 1930s. There has essentially been no advance in our knowledge in 80 years.... Bizarrely, suddenly everyone is interested in economics, but most academic economists are ill-equipped to address these issues. Recently a group of economists affiliated with the Cato Institute ran an ad in the New York Times opposing the Obama's stimulus plan. As chair of my department I tried to arrange a public debate between one of the signatories and a proponent of fiscal stimulus -- thinking that would be a timely and lively session. But the signatory, a fully accredited university macroeconomist, declined the opportunity for public defense of his position on the grounds that "all I know on this issue I got from Greg Mankiw's blog -- I really am not equipped to debate this with anyone." Academic economics will no doubt survive this shock to its prestige.... [But] the days of the$500,000 economics professor may have passed.... [W]ill the focus of academic economics change?... I would rate the chances of Chrysler producing once again a competitive US automobile at least as high as the chances of academic economics learning any lesson from this downturn...
Watching the scrum over the past six months, I have to call this one for Krugman, Clark, Akerlof, Shiller, and Quiggin and against Blanchard's vision of growing knowledge and analytical convergence. Economists have been worrying about the industrial business cycle and the proper role of the government in trying to tame it since 1825. Yet there are an extraordinary number of people out there calling themselves macroeconomists who do not have the slightest clue as to what the issues have been over the past two hundred years. | 2021-04-13 01:58:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3410804271697998, "perplexity": 4520.901478106747}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038071212.27/warc/CC-MAIN-20210413000853-20210413030853-00419.warc.gz"} |
http://www.chegg.com/homework-help/questions-and-answers/noise-voltage-electric-circuit-modeled-gaussian-random-variable-0-mean-10-4-standard-devia-q954800 | The noise voltage in an electric circuit can be modeled as a Gaussian random variable with 0 mean and 10^-4 standard deviation
(a)What are the probabilities that the value of the noise exceeds 10^-4 and 4*10^-4? What is the probability the noise is between -2*10^-4 and 10^-4?
(b)Given that the value of the noise is positive, what is the probability that it exceeds 10^-4?
## Expert Answer
### Get this answer with Chegg Study
Practice with similar questions
Digital Communications (5th Edition)
Q:
The noise voltage in an electric circuit can be modeled as a Gaussian random variable with mean equal to zero and variance equal to 10-8. 1. What is the probability that the value of the noise exceeds 10-4? What is the probability that it exceeds 4×10-4? What is the probability that the noise value is between −2 × 10-4 and 10-4? 2. Given that the value of the noise is positive, what is the probability that it exceeds 10-4?
A: See answer | 2016-09-25 19:11:23 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.902433454990387, "perplexity": 365.92860163223435}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660342.42/warc/CC-MAIN-20160924173740-00066-ip-10-143-35-109.ec2.internal.warc.gz"} |
https://mathoverflow.net/questions/21929/what-is-the-actual-meaning-of-a-fractional-derivative | # What is the actual meaning of a fractional derivative?
We're all use to seeing differential operators of the form $\frac{d}{dx}^n$ where $n\in\mathbb{Z}$. But it has come to my attention that this generalises to all complex numbers, forming a field called fractional calculus which apparently even has applications in physics!
These derivatives are defined as fractional iterates. For example, $(\frac{d}{dx}^\frac{1}{2})^2 = \frac{d}{dx}$ or $(\frac{d}{dx}^i)^i = \frac{d}{dx}^{-1}$
But I can't seem to find a more meaningful definition or description. The derivative means something to me; these just have very abstract definitions. Any help?
• Please read the FAQ. Regarding your question, this is standard undergraduate material, for example see: en.wikipedia.org/wiki/Fourier_transform and look up the equation for the Fourier transform of an iterated derivative. – Ryan Budney Apr 20 '10 at 3:57
• I understand that it must be frustrating to see a question that seems too low-level posted. Before posting this question, I tried to do due diligence by researching it and asking several math grad students and a (in industry) PHD (who hadn't heard of it before!). Perhaps you could expand on what qualifies as a `research level math question'? Additionally, thinking about a fractional derivative in the indirect manner you describe seems suboptimal, further defending the validity of asking for a more meaningful definition. (I hadn't heard of it this way before hand, but..) – Christopher Olah Apr 20 '10 at 4:54
• There is a lovely little book on this subject whose entire thesis is to answer the question you've just asked. It's called "An Introduction to the Fractional Calculus and Fractional Differential Equations" by Miller and Ross. I think it's fairly cheap on amazon – Dylan Wilson Aug 6 '10 at 7:33
• Back when I was studying these, I treated the leap from integer-order derivatives/integrals to arbitrary-order differintegrals (I really have no love for the term "fractional") in the same way that I had treated how the gamma functions extend the factorial, and how general exponents extend the normal integer powers even before that. This is more of finding out how far you can stretch the rules that used to apply only to integer values. As for books, I always read Miller/Ross, Spanier/Oldham, and Podlubny side-by-side. (We really still are far off from notation everybody can be happy with!) – J. M. isn't a mathematician Aug 6 '10 at 9:39
• Subsequently, there has been an illuminating answer to a related question, "Geometric interpretation of the half-derivative?". In particular, there is a beautiful "mechanical interpretation of the half-derivative." – Joseph O'Rourke Jan 16 '16 at 1:28
I understand where Ryan's coming from, though I think the question of how to interpret fractional calculus is still a reasonable one. I found this paper to be pretty neat, though I have no idea if there are any better interpretations out there.
http://people.tuke.sk/igor.podlubny/pspdf/pifcaa_r.pdf
Personally, if I was entering this subject blind I would feel cheated if not shown the extensive pure mathematical power of the fractional derivative. Being that it is more useful than just being used to solve differential equations or physical problems.
The first thing is to look at Cauchy's integral formula which is most aptly
$$\int_a^x \int_a^{x_{n-1}}...\int_a^{x_1}f(x_0)\,dx_0dx_1dx_2...dx_{n-1} = \frac{1}{n-1!}\int_a^x f(y)y^{n-1}\,dy$$
which is a strikingly powerful equation. The natural generalisation arises by considering the operator $I_a f = \int_a^x f(y)\,dy$ and simply writing $$I_a^n = I_a ... (n\,times)...I_a f = \frac{1}{n-1!}\int_a^x f(y)y^{n-1}\,dy$$
where a natural conclusion is to define
$$I_{a}^z f = \frac{1}{\Gamma(z)}\int_a^x f(y)y^{z-1}\,dy$$
which through no obvious or simple method
$$I_a^{z_0}I_a^{z_1} = I_a^{z_0 + z_1}$$
This gives not only one iterated "fractional" integral but infinitely many for each $a$. The perspective result, or canonical fact, is that each fractional integral satisfies
$$I_a^z (x-a)^r = \frac{\Gamma(r+1)}{\Gamma(r+z+1)}(x-a)^{r+z}$$
and $I_a (x-b)^r$ when $b \neq a$ is defined using a binomial expansion.
Defining $\frac{d}{dx}_a^z = I_a^{-z}$ for $\Re(z) < 0$ and $\frac{d}{dx}_a^z = \frac{d}{dx}^n I_a^{n-z}$ for $\Re(z) < n$ we arrive at a fractional derivative.
This seemingly convenient and beautiful expression gives us something rather ugly though. Since $\frac{d}{dx} e^x = e^x$ we would like $\frac{d}{dx}^z e^x = e^x$, but this is not so. By uniform convergence and all that jazz
$$\frac{d}{dx}_a^z e^x = \sum_{n=0}^\infty \frac{x^{n-z}}{\Gamma(n+1-z)}$$
which is not $e^x$.
Therefore another fractional derivative is required. Taking $a = -\infty$ then we arrive at the commonly called "exponential differintegral" which can be written
$$\frac{d}{dx}^{-z} f(x) = \frac{1}{\Gamma(z)}\int_0^\infty f(x-y)y^{z-1}\,dx$$ defined for $f$ satisfying specific decay conditions at negative infinity. As one can see this fractional derivative fixes $e^x$ but diverges for any polynomial.
Now we can generalize this even further!
Consider $f(w)$ entire on $\mathbb{C}$, and for convenience assume $f(w)w \to 0$ as $w \to \infty$ when $|\arg(w)| < \kappa$ and call this space of function $D_\kappa$
Then we have the disastrously large formula
$$\frac{d^z}{dw^z} f(w) = \frac{e^{i\theta z}}{\Gamma(-z)}\Big{(}\sum_{n=0}^\infty f^{(n)}(w)\frac{(-e^{i\theta})^n}{n!(n-z)} + \int_1^\infty f(w-e^{i\theta}y)y^{-z-1}\,dy\Big{)}$$
which holds for all $|\theta| < \kappa$ and $\Re(z) > -1$.
Now some people would rashly think what is the point of this? Some interesting things happen in this scenario, firstly the differintegral can be thought of as a modified Mellin transform. Giving us things like Ramanujan's master theorem in a slicker notation. It further emphasizes that this operator arises in a very natural sense (the Mellin transform being prominent in many areas of mathematics). It says $\frac{d^z}{dw^z}$ for $\Re(z) > 0$ takes $D_\kappa$ to itself. So we have a semigroup $\{\frac{d^z}{dw^z} | \Re(z) > 0\}$ acting on $D_\kappa$.
Furthermore, when looking at the fourier transform definition of a fractional derivative, it is in fact this clunky looking exponential derivative that's really pulling the strings. Where it may seem cleaner in Fourier transforms, it is much more general in its Mellin form.
All in all it is quite a mysterious object, and is underused in my opinion.
If one solves diffusion problems, magnetic or thermal, by the use of the LaPlace transform there results s raised to fractional powers. Usually s denotes the first derivative with respect to time and I interpret s raised to a fractional power as a fractional derivative with respect to time. This occurs in all skin effect calculations and is not trouble if you have a program that inverts the LaPlace transform. I think the formation of ice on water is a direct physical example of the ice thickness being proportional to the 1/2 derivative of time.
• I don't understand the sentence "of the ice thickness being proportional to the 1/2 derivative of time": the the 1/2-derivative of time with respect to what? – André Henriques Jan 16 '16 at 0:06
• That claim about ice seemed interesting so I had to investigate. I found a paper igsoc.org:8080/journal/52/179/j05j055.pdf It solves the heat equation by taking the "square root" of the differential operators on each side. I've no idea what conditions are required to make this a valid operation. – Dan Piponi Jan 16 '16 at 0:39
Fractional derivative arise is diffusion problems as the previous poser noticed. The Abel equation of the tauthochrone is another classical example. The physical interpretation is still debatable but it is often attributed to memory effects or underlying fractal behaviors giving rise to power laws. Classical references are Oldham and Spanier 1974 (https://www.amazon.com/Fractional-Calculus-Mathematics-Science-Engineering/dp/0125255500/ref=sr_1_1?s=books&ie=UTF8&qid=1469461451&sr=1-1&keywords=Oldham+and+Spanier+1974), Podlubny (https://www.amazon.com/Fractional-Differential-Equations-198-Introduction/dp/0125588402/ref=sr_1_2?s=books&ie=UTF8&qid=1469461515&sr=1-2&keywords=Podlubny), Kilbas and Marichev (https://www.amazon.com/Fractional-Integrals-Derivatives-Theory-Applications/dp/2881248640) etc.
Probabilistically, you can give a perfectly clear meaning to many fractional derivatives.
I will look at definitions of fractional\nonlocal derivatives that are Markovian generators of stochastic processes with jumps. I hope to convince the reader that
• Different definitions arise naturally,
• there is a clear interpretation of many properties (like nonlocality or killing/not-killing constants), and
• generalizations are natural and meaningful for applications.
It is useful to look at the most simple stochastic jump process and its corresponding generator. Take a Markov chain $P=\{p_{i,j}\}_{i,j\in \text{State space}}$ (which is intrinsically jumpy) and write out its generator
$$\mathcal G f(x):=(P-I)f(x)=\sum_{y\in\text{ State space}}(f(y)-f(x))p_{x,y},\quad x\in\text{ State space}.$$ Here the intuition is clear: the infinitesimal jump (working with unit time in this case) from $x$ to $y$ is assigned intensity/probability $p_{x,y}$. The operator $\mathcal G$ is non-local. If we modify the process (impose boundary conditions), say by forcing the process to be absorbed at $a\in\text{ State space}$ once it tries to jump to a state $y\notin \Omega\subset \text{State space},$ we obtain a new generator $$\mathcal G^{\text{abs}} f(x):=(P^{\text{abs}}-I)f(x)=\sum_{y\in\Omega}(f(y)-f(x))p_{x,y}+(f(a)-f(x))\sum_{y\notin\Omega}p_{x,y},\quad x\in\Omega.$$ If we instead decide to kill it (by testing against functions with $f(a)=0$, for example), the new generator will be $$\mathcal G^{\text{kill}} f(x):=(P^{\text{kill}}-I)f(x)=\sum_{y\in\Omega}(f(y)-f(x))p_{x,y}-f(x)\sum_{y\notin\Omega}p_{x,y},\quad x\in\Omega.$$ So from one single process we can obtain many different generators/fractional derivative (as mentioned in a comment above, the boundary conditions are reflected in the representation of the operator away from the boundary due to the non-locality of $\mathcal G$).
Let us now move to the Riemann-Liouville and Caputo derivatives of order $\beta\in(0,1)$. Consider the three fractional derivatives for $x<a$ \begin{align} D^{\beta}_{\infty}f(x)&:= \int_0^{\infty}(f(x+y)-f(x))\nu(y)dy, \\ ^{C}D^{\beta}_a f(x):&= \int_0^{a-x}(f(x+y)-f(x))\nu(y)dy &+(f(a)-f(x))\int_{a-x}^\infty\nu(y)dy,\\ ^{RL}D^{\beta}_af(x)&:= \int_0^{a-x}(f(x+y)-f(x))\nu(y)dy &-f(x)\int_{a-x}^\infty\nu(y)dy, \end{align} where $\nu(y):=\frac{-\Gamma(-\beta)^{-1}}{y^{1+\beta}}$. Similarly as for the Markov chain above: the operator $D^{\beta}_{\infty}$ is the generator of a $\beta$-stable subordinator $X^\beta(s)$, the operator $^{C}D^{\beta}_a$ is the generator of a $\beta$-stable subordinator $X^\beta(s)$ absorbed at $\{a\}$ on the first attempt to jump outside $\Omega:=(-\infty,a)$, and the operator $^{RL}D^{\beta}_a$ is the generator of a $\beta$-stable subordinator $X^\beta(s)$ killed on the first attempt to jump outside $\Omega:=(-\infty,a)$. Integrating by parts we can rewrite the three operators above in their Riemann-Liouville integral representation, namely \begin{align} D^{\beta}_{\infty}f(x)&= \int_x^{\infty}f'(y)\frac{(y-x)^{-\beta}}{\Gamma(1-\beta)}dy \\ ^{C}D^{\beta}_a f(x)&= \int_x^{a}f'(y)\frac{(y-x)^{-\beta}}{\Gamma(1-\beta)}dy,\\ ^{RL}D^{\beta}_af(x)&= \frac{d}{dx}\int_x^{a}f(y)\frac{(y-x)^{-\beta}}{\Gamma(1-\beta)}dy, \end{align} where the last two operators are your standard definitions of Caputo and Riemann-Liouvile derivatives (right and left versions will correspond to the processes $X^\beta(s)$ and $-X^{\beta}(s)$ respectively). We can now say that the Caputo derivative $^{C}D^{\beta}_a$ (Riemann-Liouville derivative $^{RL}D^{\beta}_a$) kills (does not kill) constants as it is the generator of a process (killed process). Again you can see that (naturally) $^{C}D^{\beta}_a$ and $^{RL}D^{\beta}_a$ contain boundary information in their representation away from the boundary (in sharp difference with local differential operators). Some references: Caputo, Riemann-Liouville, and Grünwald-Leitnikov derivatives from a stochastic point of view in this book. Reflecting boundary conditions and other options for Caputo derivatives of order $\beta\in(1,2)$ here and here.
By substituting a general Lévy measure $\nu(x,dy)$ in the formulas above (generalizing fractional derivatives), many meaningful stochastic processes and their versions on a bounded domain can be studied through their generators (see book, article ). Similar arguments can be carried over for some fractional Laplacians (see this book for example). | 2020-09-24 15:05:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9021599888801575, "perplexity": 509.20631951214745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400219221.53/warc/CC-MAIN-20200924132241-20200924162241-00264.warc.gz"} |
https://www.proofwiki.org/wiki/Condition_for_Cofinal_Nonlimit_Ordinals | # Condition for Cofinal Nonlimit Ordinals
## Theorem
Let $x$ and $y$ be nonlimit ordinals.
Let $\operatorname{cof}$ denote the cofinal relation.
Let $\le$ denote the subset relation.
Furthermore, let $x$ and $y$ satisfy the condition:
$0 < x \le y$
Then:
$\map {\operatorname{cof}} { y,x }$
## Proof
Both $x$ and $y$ are non-empty, so by the definition of a limit ordinal:
$x = z^+$ for some $z$.
$y = w^+$ for some $w$.
$\bigcup z^+ \le \bigcup w^+$ follows by Set Union Preserves Subsets/General Result.
$z \le w$ follows by Union of Successor Ordinal.
Define the function $f : x \to y$ as follows:
$\map f a = \begin{cases} a &: a \ne z \\ w &: a = z \end{cases}$
$a < b \le w$, so $f \left({a}\right) = a$.
Take any $a,b \in x$ such that $a < b$.
$\map f a < \map f b$ shall be proven by cases:
### Case 1: $b \ne z$
If $b \ne z$:
$\map f a < \map f b$ is simply a restatement of $a < b$.
### Case 2: $b = z$
If $b = z$, then $\map f b = w$ by the definition of $f$.
Since $a < z \le w$, $\map f a < \map f b$.
It follows that $f$ is strictly increasing.
$\Box$
Moreover, since $\bigcup y = w$ is the least upper bound, $f\left({z}\right) \ge a$ for all $a \in y$.
$\blacksquare$ | 2022-07-03 05:52:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9980804324150085, "perplexity": 431.1069880837279}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104215790.65/warc/CC-MAIN-20220703043548-20220703073548-00657.warc.gz"} |
https://groupprops.subwiki.org/wiki/Twisted_subgroup | # Twisted subgroup
This article defines a property of subsets of groups
View other properties of subsets of groups|View properties of subsets of abelian groups|View subgroup properties
This is a variation of subgroup|Find other variations of subgroup |
## Definition
### Definition with symbols
A subset $K$ of a group $G$ is termed a twisted subgroup if it satisfies the following two conditions:
• The identity element belongs to $K$
• For every $x \in K$, $x^{-1} \in K$
• Given $x, y$ in $K$, the element $xyx$ is in $K$
Note that the second condition is redundant when $K$ is a finite subset of $G$. Since twisted subgroups are usually studied in the context of finite groups, the condition is typically omitted from the definition. It is, however, necessary for the definition to behave nicely for infinite groups. The corresponding definition without this condition is better called twisted submonoid.
## Relation with other properties
### Stronger properties
Property Meaning Proof of implication Proof of strictness (reverse implication failure) Intermediate notions
2-powered twisted subgroup twisted subgroup within which every element has a unique square root. |FULL LIST, MORE INFO
### Weaker properties
Property Meaning Proof of implication Proof of strictness (reverse implication failure) Intermediate notions
1-closed subset nonempty subset, contains the cyclic subgroup generated by any element in it |FULL LIST, MORE INFO
symmetric subset nonempty subset, contains the identity and closed under taking inverses |FULL LIST, MORE INFO
## Property theory
### Associates
Further information: associate of twisted subgroup is twisted subgroup
Let $K$ be a twisted subgroup of $G$. Then, for any $a$ in $K$, the sets $Ka$ and $a^{-1}K$ are equal and form another twisted subgroup. Such a twisted subgroup is termed an associate of $K$. The relation of being associate is an equivalence relation and we are interested in studying twisted subgroups upto the equivalence relation of being associates.
## References
PLACEHOLDER FOR INFORMATION TO BE FILLED IN: [SHOW MORE] from Foguel's article | 2021-05-12 17:37:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 18, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9737234115600586, "perplexity": 1074.1462402552932}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989766.27/warc/CC-MAIN-20210512162538-20210512192538-00129.warc.gz"} |
http://math.stackexchange.com/questions/10731/p-seminorms-on-smooth-functions-are-equivalent | # p-seminorms on smooth functions are equivalent
Let $K \subset R^n$ be compact, and let $C_0^{\infty}(K)$ be the space of smooth functions on with support in $K$.
For $p \in [1,\infty), \alpha$ multiindex, let $|f|_{\alpha,p} = (\int_K |D^{\alpha}f|^p)^{\frac{1}{p}}$, and $|f|_{\alpha,\infty} = sup_K |D^{\alpha}f|$ - as usual.
For $p \in [1,\infty]$, let $\tau_p$ be the locally convex topology generated on $C_0^{\infty}(K)$ by the seminorms $(|f|_{\alpha,p})_{\alpha}$.
Claim: All the $\tau_p$ are the same.
One proof of this might utilize the Sobolev theory (Kondrashow embedding theorem) which is too heavy as tool for my taste.
Unfortunately, I don't know a more elementary proof. Furthermore I guess a beautiful proof of this might work similarly for other spaces like the Schwartz-space or the space of smooth functions on the whole of $R^n$. Do you have any good idea?
-
so does this mean showing that the seminorms that you have are within constant factors of each other? – user1709 Nov 17 '10 at 21:27
Well, almost, because to go from low $p$ to high $p$ you need to sacrifice derivatives. – Willie Wong Nov 17 '10 at 23:39
I'm also not convinced that the claim can be generalized to whole of $\mathbb{R}^n$; The function $(\sqrt{1 + x^2})^{-1/2}$ is a function in $W^{\infty,3}$ but not in $L^1$ of $\mathbb{R}$. – Willie Wong Nov 17 '10 at 23:47
Also, you don't need the full strength of the Kondrachov embedding, you just need Sobolev embedding, whose proof (the GNS version) I happen to think is quite beautiful. – Willie Wong Nov 17 '10 at 23:58
Where did you read this claim? My PDE knowledge does not yet go much further than the book by Evans. – Jonas Teuwen Nov 18 '10 at 14:33
Here's a remark added to original remarks below.
As mentioned below, Hölder's inequality reduces this to showing that $\tau_1$ is finer than $\tau_\infty$. This means showing that the $\tau_1$ to $\tau_\infty$ identity map is continuous, while we already know that its inverse is continuous. Hence if you prove that $\tau_1$ and $\tau_\infty$ are complete, this follows from the open mapping theorem.
I'm not sure this is worth posting, because I can only finish the easy case $n=1$, but here's a proof for that case from someone ignorant of Sobolev embedding.
If $1\leq p\leq q\leq\infty$, then because $K$ has finite measure, Hölder's inequality yields $|f|_{\alpha,p}\leq C|f|_{\alpha,q}$ for a constant $C$ depending only on $p$, $q$, and the measure of $K$. Thus $\tau_q$ is finer than $\tau_p$, and the work is in showing that $\tau_1$ is finer than $\tau_\infty$. Here, as Willie Wong pointed out, it is no longer possible to do so by directly comparing seminorms with the same multi-index.
Suppose that $(f_k)$ is a sequence of functions such that $|f_k|_{\alpha,1}\to 0$ for all $\alpha$ as $k\to\infty$. This implies that the same holds for the sequence $(D^{\alpha_0}f_k)$ for each fixed multi-index $\alpha_0$, so it is enough to check that $|f_k|_{0,\infty}\to0$, that is, $(f_k)$ converges uniformly to $0$. Here's a way to see this when $n=1$. Let $y$ be a point where all of the functions vanish, say $y=\inf K$ for definiteness. For each $x\in K$, $|f_k(x)|=|f_k(x)-f_k(y)|=|\int_y^x f_k'(t)dt|\leq \int_K|f_k'(t)|dt=|f_k|_{1,1}$, which goes to $0$ independent of $x$. Hence $(f_k)$ converges uniformly to $0$.
Actually, with slightly more work the proof that $\tau_1$ is finer than $\tau_\infty$ for $n=1$ can be extended to the case of smooth functions with arbitrary support. With the same assumptions on $(f_k)$, for each $x\in\mathbb{R}$, $|f_k(x)-f_k(0)|=|\int_0^x f_k'(t)dt|\leq \int|f_k'(t)|dt=|f_k|_{1,1}$, which goes to $0$ independent of $x$. This means that there is a sequence $(\varepsilon_k)$ of positive numbers converging to $0$ such that each $f_k$ is within $\varepsilon_k$ of the constant function with value $f_k(0)$. Since $|f_k|_{0,1}$ goes to $0$, the corresponding constants must converge to $0$, and this implies that $(f_k)$ converges uniformly to $0$. As Willie Wong also pointed out, you don't get $\tau_q$ finer than $\tau_p$ for $q\gt p$ in this case.
To extend this argument to $n\gt 1$, I would need bounds on $|f(x)-f(y)|$ in terms of volume integrals of sums of (a finite number of) partial derivatives of $f$, and this is where my ignorance keeps me from going further.
-
Your approach is essentially the $n=1$ proof of the Sobolev inequality. For $n > 1$, you either appeal to Sobolev and Morrey's inequalities, or you reproduce them by hand. You can also use a modified version of the Sobolev inequality: you can control $|f_k|_{1,1}$ on the line connecting $f(x)$ and $f(y)$ by $|f_k|_{n+1,1}$ on the volume using the high-codimension trace inequality. The usual proof of which, however, goes through Morrey's inequality again. – Willie Wong Nov 18 '10 at 12:09
@Willie Wong: Thank you. Part of what made me decide to post is the hope that someone would inform me on what I'm missing. Paging through Evans led me to suspect that Sobolev and Morrey inequalities would be useful, but I hadn't thought through the details of putting it together. – Jonas Meyer Nov 18 '10 at 19:38 | 2016-02-08 16:58:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9460557103157043, "perplexity": 114.70223532047709}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701153736.68/warc/CC-MAIN-20160205193913-00151-ip-10-236-182-209.ec2.internal.warc.gz"} |
https://www.r-bloggers.com/2012/09/fund-now-or-buy-later-a-kickstarter-price-comparison/ | Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.
Hi Internet! I’m Preeya, and I will be your guide in this blog’s quantitative quest for knowledge.
To get started, let’s talk about pricing. Part of the Kickstarter process is figuring out how much a hypothetical product will cost once it’s on the market. But how accurately can that be calculated without actually going through the production process? In other words, once a product has been kick-started and is out in the real world, how much does its price change? Luckily, we here at Kickfollower are busy gathering the data necessary to answer that question.
Here are some preliminary results:
That’s the Kickstarter price on the x-axis, the post-Kickstarter market price on the y-axis. The points are colored by type of product, as seen on the right. The red line marks where the price estimated by the Kickstarter inventor equals the true market price: products above the line had a higher than estimated market price, products below it had a lower than estimated market price, and products on the line had very savvy inventors.
As you can see, about half the products are below the line, with less than half above. Does this mean that people are generally getting a raw deal on Kickstarter? To find out, let’s ask our friend, statistics!
We ran a paired t-test, which determined that the mean Kickstarter overestimate of $3.05 was not significant (p = 0.356). So on average, prices of Kickstarter projects are not much over- (or under-) estimated. Good to know! By the way, do you notice how the red line doesn’t quite fit the data? Upon further analysis, it turns out the the best-fit slope for those points is slightly less than 1, while the intercept is higher than zero. This suggests that lower-priced products are better deals on Kickstarter. Specifically, you may be able to save money if you back Kickstarter products with estimated prices of around$30 or less, while waiting for higher-priced products to come to the market.
Of course, our dataset is still quite small, so everything in this post should be taken with heaps of salt. But if you’d like to help us collect more data for awesome future analyses, please email [email protected] with shipped Kickstarter and Indiegogo projects we’ve missed! For example, although Kickstarter prices on average seem to be accurate, the graph above suggests that projects relating to food might be more likely to overestimate their prices. Topics like that could be the subject of future blog posts. I hope you’re as excited as I am! | 2021-09-27 08:21:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3184088468551636, "perplexity": 1302.9175111229163}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058373.45/warc/CC-MAIN-20210927060117-20210927090117-00640.warc.gz"} |
http://docs.julia.tokyo/ja/latest/manual/dates.html | # Date and DateTime¶
The Dates module provides two types for working with dates: Date and DateTime, representing day and millisecond precision, respectively; both are subtypes of the abstract TimeType. The motivation for distinct types is simple: some operations are much simpler, both in terms of code and mental reasoning, when the complexities of greater precision don’t have to be dealt with. For example, since the Date type only resolves to the precision of a single date (i.e. no hours, minutes, or seconds), normal considerations for time zones, daylight savings/summer time, and leap seconds are unnecessary and avoided.
Both Date and DateTime are basically immutable Int64 wrappers. The single instant field of either type is actually a UTInstant{P} type, which represents a continuously increasing machine timeline based on the UT second [1]. The DateTime type is timezone-unaware (in Python parlance) or is analogous to a LocalDateTime in Java 8. Additional time zone functionality can be added through the Timezones.jl package, which compiles the Olsen Time Zone Database. Both Date and DateTime are based on the ISO 8601 standard, which follows the proleptic Gregorian calendar. One note is that the ISO 8601 standard is particular about BC/BCE dates. In general, the last day of the BC/BCE era, 1-12-31 BC/BCE, was followed by 1-1-1 AD/CE, thus no year zero exists. The ISO standard, however, states that 1 BC/BCE is year zero, so 0000-12-31 is the day before 0001-01-01, and year -0001 (yes, negative one for the year) is 2 BC/BCE, year -0002 is 3 BC/BCE, etc.
[1] The notion of the UT second is actually quite fundamental. There are basically two different notions of time generally accepted, one based on the physical rotation of the earth (one full rotation = 1 day), the other based on the SI second (a fixed, constant value). These are radically different! Think about it, a “UT second”, as defined relative to the rotation of the earth, may have a different absolute length depending on the day! Anyway, the fact that Date and DateTime are based on UT seconds is a simplifying, yet honest assumption so that things like leap seconds and all their complexity can be avoided. This basis of time is formally called UT or UT1. Basing types on the UT second basically means that every minute has 60 seconds and every day has 24 hours and leads to more natural calculations when working with calendar dates.
## Constructors¶
Date and DateTime types can be constructed by integer or Period types, by parsing, or through adjusters (more on those later):
julia> DateTime(2013)
2013-01-01T00:00:00
julia> DateTime(2013,7)
2013-07-01T00:00:00
julia> DateTime(2013,7,1)
2013-07-01T00:00:00
julia> DateTime(2013,7,1,12)
2013-07-01T12:00:00
julia> DateTime(2013,7,1,12,30)
2013-07-01T12:30:00
julia> DateTime(2013,7,1,12,30,59)
2013-07-01T12:30:59
julia> DateTime(2013,7,1,12,30,59,1)
2013-07-01T12:30:59.001
julia> Date(2013)
2013-01-01
julia> Date(2013,7)
2013-07-01
julia> Date(2013,7,1)
2013-07-01
julia> Date(Dates.Year(2013),Dates.Month(7),Dates.Day(1))
2013-07-01
julia> Date(Dates.Month(7),Dates.Year(2013))
2013-07-01
Date or DateTime parsing is accomplished by the use of format strings. Format strings work by the notion of defining delimited or fixed-width “slots” that contain a period to parse and passing the text to parse and format string to a Date or DateTime constructor, of the form Date("2015-01-01","y-m-d") or DateTime("20150101","yyyymmdd").
Delimited slots are marked by specifying the delimiter the parser should expect between two subsequent periods; so "y-m-d" lets the parser know that between the first and second slots in a date string like "2014-07-16", it should find the - character. The y, m, and d characters let the parser know which periods to parse in each slot.
Fixed-width slots are specified by repeating the period character the number of times corresponding to the width with no delimiter between characters. So "yyyymmdd" would correspond to a date string like "20140716". The parser distinguishes a fixed-width slot by the absence of a delimiter, noting the transition "yyyymm" from one period character to the next.
Support for text-form month parsing is also supported through the u and U characters, for abbreviated and full-length month names, respectively. By default, only English month names are supported, so u corresponds to “Jan”, “Feb”, “Mar”, etc. And U corresponds to “January”, “February”, “March”, etc. Similar to other name=>value mapping functions dayname() and monthname(), custom locales can be loaded by passing in the locale=>Dict{UTF8String,Int} mapping to the MONTHTOVALUEABBR and MONTHTOVALUE dicts for abbreviated and full-name month names, respectively.
One note on parsing performance: using the Date(date_string,format_string) function is fine if only called a few times. If there are many similarly formatted date strings to parse however, it is much more efficient to first create a Dates.DateFormat, and pass it instead of a raw format string.
julia> df = Dates.DateFormat("y-m-d");
julia> dt = Date("2015-01-01",df)
2015-01-01
julia> dt2 = Date("2015-01-02",df)
2015-01-02
A full suite of parsing and formatting tests and examples is available in tests/dates/io.jl.
## Durations/Comparisons¶
Finding the length of time between two Date or DateTime is straightforward given their underlying representation as UTInstant{Day} and UTInstant{Millisecond}, respectively. The difference between Date is returned in the number of Day, and DateTime in the number of Millisecond. Similarly, comparing TimeType is a simple matter of comparing the underlying machine instants (which in turn compares the internal Int64 values).
julia> dt = Date(2012,2,29)
2012-02-29
julia> dt2 = Date(2000,2,1)
2000-02-01
julia> dump(dt)
Date
instant: UTInstant{Day}
periods: Day
value: Int64 734562
julia> dump(dt2)
Date
instant: UTInstant{Day}
periods: Day
value: Int64 730151
julia> dt > dt2
true
julia> dt != dt2
true
julia> dt + dt2
Operation not defined for TimeTypes
julia> dt * dt2
Operation not defined for TimeTypes
julia> dt / dt2
Operation not defined for TimeTypes
julia> dt - dt2
4411 days
julia> dt2 - dt
-4411 days
julia> dt = DateTime(2012,2,29)
2012-02-29T00:00:00
julia> dt2 = DateTime(2000,2,1)
2000-02-01T00:00:00
julia> dt - dt2
381110402000 milliseconds
## Accessor Functions¶
Because the Date and DateTime types are stored as single Int64 values, date parts or fields can be retrieved through accessor functions. The lowercase accessors return the field as an integer:
julia> t = Date(2014,1,31)
2014-01-31
julia> Dates.year(t)
2014
julia> Dates.month(t)
1
julia> Dates.week(t)
5
julia> Dates.day(t)
31
While propercase return the same value in the corresponding Period type:
julia> Dates.Year(t)
2014 years
julia> Dates.Day(t)
31 days
Compound methods are provided, as they provide a measure of efficiency if multiple fields are needed at the same time:
julia> Dates.yearmonth(t)
(2014,1)
julia> Dates.monthday(t)
(1,31)
julia> Dates.yearmonthday(t)
(2014,1,31)
One may also access the underlying UTInstant or integer value:
julia> dump(t)
Date
instant: UTInstant{Day}
periods: Day
value: Int64 735264
julia> t.instant
UTInstant{Day}(735264 days)
julia> Dates.value(t)
735264
## Query Functions¶
Query functions provide calendrical information about a TimeType. They include information about the day of the week:
julia> t = Date(2014,1,31)
2014-01-31
julia> Dates.dayofweek(t)
5
julia> Dates.dayname(t)
"Friday"
julia> Dates.dayofweekofmonth(t)
5 # 5th Friday of January
Month of the year:
julia> Dates.monthname(t)
"January"
julia> Dates.daysinmonth(t)
31
As well as information about the TimeType‘s year and quarter:
julia> Dates.isleapyear(t)
false
julia> Dates.dayofyear(t)
31
julia> Dates.quarterofyear(t)
1
julia> Dates.dayofquarter(t)
31
The dayname() and monthname() methods can also take an optional locale keyword that can be used to return the name of the day or month of the year for other languages/locales:
julia> const french_daysofweek = Dict(1=>"Lundi",2=>"Mardi",3=>"Mercredi",4=>"Jeudi",5=>"Vendredi",6=>"Samedi",7=>"Dimanche");
# Load the mapping into the Dates module under locale name "french"
julia> Dates.VALUETODAYOFWEEK["french"] = french_daysofweek;
julia> Dates.dayname(t;locale="french")
"Vendredi"
Similarly for the monthname() function, a mapping of locale=>Dict{Int,UTF8String} should be loaded in VALUETOMONTH.
## TimeType-Period Arithmetic¶
It’s good practice when using any language/date framework to be familiar with how date-period arithmetic is handled as there are some tricky issues to deal with (though much less so for day-precision types).
The Dates module approach tries to follow the simple principle of trying to change as little as possible when doing Period arithmetic. This approach is also often known as calendrical arithmetic or what you would probably guess if someone were to ask you the same calculation in a conversation. Why all the fuss about this? Let’s take a classic example: add 1 month to January 31st, 2014. What’s the answer? Javascript will say March 3 (assumes 31 days). PHP says March 2 (assumes 30 days). The fact is, there is no right answer. In the Dates module, it gives the result of February 28th. How does it figure that out? I like to think of the classic 7-7-7 gambling game in casinos.
Now just imagine that instead of 7-7-7, the slots are Year-Month-Day, or in our example, 2014-01-31. When you ask to add 1 month to this date, the month slot is incremented, so now we have 2014-02-31. Then the day number is checked if it is greater than the last valid day of the new month; if it is (as in the case above), the day number is adjusted down to the last valid day (28). What are the ramifications with this approach? Go ahead and add another month to our date, 2014-02-28 + Month(1) == 2014-03-28. What? Were you expecting the last day of March? Nope, sorry, remember the 7-7-7 slots. As few slots as possible are going to change, so we first increment the month slot by 1, 2014-03-28, and boom, we’re done because that’s a valid date. On the other hand, if we were to add 2 months to our original date, 2014-01-31, then we end up with 2014-03-31, as expected. The other ramification of this approach is a loss in associativity when a specific ordering is forced (i.e. adding things in different orders results in different outcomes). For example:
julia> (Date(2014,1,29)+Dates.Day(1)) + Dates.Month(1)
2014-02-28
julia> (Date(2014,1,29)+Dates.Month(1)) + Dates.Day(1)
2014-03-01
What’s going on there? In the first line, we’re adding 1 day to January 29th, which results in 2014-01-30; then we add 1 month, so we get 2014-02-30, which then adjusts down to 2014-02-28. In the second example, we add 1 month first, where we get 2014-02-29, which adjusts down to 2014-02-28, and then add 1 day, which results in 2014-03-01. One design principle that helps in this case is that, in the presence of multiple Periods, the operations will be ordered by the Periods’ types, not their value or positional order; this means Year will always be added first, then Month, then Week, etc. Hence the following does result in associativity and Just Works:
julia> Date(2014,1,29) + Dates.Day(1) + Dates.Month(1)
2014-03-01
julia> Date(2014,1,29) + Dates.Month(1) + Dates.Day(1)
2014-03-01
Tricky? Perhaps. What is an innocent Dates user to do? The bottom line is to be aware that explicitly forcing a certain associativity, when dealing with months, may lead to some unexpected results, but otherwise, everything should work as expected. Thankfully, that’s pretty much the extent of the odd cases in date-period arithmetic when dealing with time in UT (avoiding the “joys” of dealing with daylight savings, leap seconds, etc.).
As convenient as date-period arithmetics are, often the kinds of calculations needed on dates take on a calendrical or temporal nature rather than a fixed number of periods. Holidays are a perfect example; most follow rules such as “Memorial Day = Last Monday of May”, or “Thanksgiving = 4th Thursday of November”. These kinds of temporal expressions deal with rules relative to the calendar, like first or last of the month, next Tuesday, or the first and third Wednesdays, etc.
The Dates module provides the adjuster API through several convenient methods that aid in simply and succinctly expressing temporal rules. The first group of adjuster methods deal with the first and last of weeks, months, quarters, and years. They each take a single TimeType as input and return or adjust to the first or last of the desired period relative to the input.
# Adjusts the input to the Monday of the input's week
julia> Dates.firstdayofweek(Date(2014,7,16))
2014-07-14
# Adjusts to the last day of the input's month
julia> Dates.lastdayofmonth(Date(2014,7,16))
2014-07-31
# Adjusts to the last day of the input's quarter
julia> Dates.lastdayofquarter(Date(2014,7,16))
2014-09-30
The next two higher-order methods, tonext(), and toprev(), generalize working with temporal expressions by taking a DateFunction as first argument, along with a starting TimeType. A DateFunction is just a function, usually anonymous, that takes a single TimeType as input and returns a Bool, true indicating a satisfied adjustment criterion. For example:
julia> istuesday = x->Dates.dayofweek(x) == Dates.Tuesday # Returns true if the day of the week of x is Tuesday
(anonymous function)
julia> Dates.tonext(istuesday, Date(2014,7,13)) # 2014-07-13 is a Sunday
2014-07-15
# Convenience method provided for day of the week adjustments
julia> Dates.tonext(Date(2014,7,13), Dates.Tuesday)
2014-07-15
This is useful with the do-block syntax for more complex temporal expressions:
julia> Dates.tonext(Date(2014,7,13)) do x
# Return true on the 4th Thursday of November (Thanksgiving)
Dates.dayofweek(x) == Dates.Thursday &&
Dates.dayofweekofmonth(x) == 4 &&
Dates.month(x) == Dates.November
end
2014-11-27
The final method in the adjuster API is the recur() function. recur() vectorizes the adjustment process by taking a start and stop date (optionally specificed by a StepRange), along with a DateFunction to specify all valid dates/moments to be returned in the specified range. In this case, the DateFunction is often referred to as the “inclusion” function because it specifies (by returning true) which dates/moments should be included in the returned vector of dates.
# Pittsburgh street cleaning; Every 2nd Tuesday from April to November
# Date range from January 1st, 2014 to January 1st, 2015
julia> dr = Dates.Date(2014):Dates.Date(2015);
julia> recur(dr) do x
Dates.dayofweek(x) == Dates.Tue &&
Dates.April <= Dates.month(x) <= Dates.Nov &&
Dates.dayofweekofmonth(x) == 2
end
8-element Array{Date,1}:
2014-04-08
2014-05-13
2014-06-10
2014-07-08
2014-08-12
2014-09-09
2014-10-14
2014-11-11
## Period Types¶
Periods are a human view of discrete, sometimes irregular durations of time. Consider 1 month; it could represent, in days, a value of 28, 29, 30, or 31 depending on the year and month context. Or a year could represent 365 or 366 days in the case of a leap year. Period types are simple Int64 wrappers and are constructed by wrapping any Int64 convertible type, i.e. Year(1) or Month(3.0). Arithmetic between Period of the same type behave like integers, and limited Period-Real arithmetic is available.
julia> y1 = Dates.Year(1)
1 year
julia> y2 = Dates.Year(2)
2 years
julia> y3 = Dates.Year(10)
10 years
julia> y1 + y2
3 years
julia> div(y3,y2)
5 years
julia> y3 - y2
8 years
julia> y3 * y2
20 years
julia> y3 % y2
0 years
julia> y1 + 20
21 years
julia> div(y3,3) # mirrors integer division
3 years
See the API reference for additional information on methods exported from the Dates module. | 2017-07-27 14:39:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3538028597831726, "perplexity": 4141.4136476492495}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549428300.23/warc/CC-MAIN-20170727142514-20170727162514-00173.warc.gz"} |
http://www.physicsforums.com/showthread.php?t=489021 | ## Imaginary Numbers
Well I have developed a number system which allows the existence of imaginary numbers.
Please visit it at : http://www.scribd.com/doc/46064105/Math-Paper.
An intro of these ideas is presented at :http://www.scribd.com/doc/46117043/I...Research-Paper
Please provide me feedback. Am I thinking on the right track?
Thanks for your time and (mental) effort.
PhysOrg.com science news on PhysOrg.com >> Ants and carnivorous plants conspire for mutualistic feeding>> Forecast for Titan: Wild weather could be ahead>> Researchers stitch defects into the world's thinnest semiconductor
Blog Entries: 8 Recognitions: Gold Member Science Advisor Staff Emeritus Hmm, I don't see many imaginary numbers in your paper? Your paper is certainly interesting, but I have some comments. It's meant to be constructive: Anyway, you seem to like to work with infinity in your paper. But this poses a lot of problems. Infinity is not a standard number, so you'll have to define rigorously what you mean with $$\infty$$, I don't see you doing this anywhere. When writing a math paper it is VITAL to define everything well, this is something that is lacking in your paper. From looking at your paper, I kind of guess that you look at the integers in a new way, but what are the advantages of looking at the integers in this way? You should include something motivational there... I'd like to discuss this with you, and to show you how you present thing rigorously. So I hope you take this comments well!
Thank you very much for such a positive response. 1. Infinity in this discussion would be any arbitrary number that you choose. You can consider this as the largest number you would require in a given problem. Just don’t go beyond it. That number, which is practically large to suffice for all our needs in a situation, would be infinity. Once you have chosen it, it would be the largest number in your number system. And its reciprocal would be the smallest number in your system. And then all the numbers would be multiples of this smallest number. 2. When I discussed the operations, it rose out in this number system that when we would multiply two numbers of the same sign, the product would be of the same sign as well. So two negative numbers will have a negative product. And the product of two numbers with different signs will have the sign of the larger number. So imaginary numbers are possible. 3. Motivational……………. I guess allowing imaginary numbers is pretty motivational for me. And regarding the choice of infinity for our problems, I just had a hunch (no logical grounds whatsoever) that for our present Quantum mechanics we could use Planck length and the Planck time as the smallest numbers and their reciprocals as the infinities? Thanks again for the reply. I hope this discussion continues.
Blog Entries: 2
## Imaginary Numbers
Quote by Abdul Wadood Thank you very much for such a positive response. 1. Infinity in this discussion would be any arbitrary number that you choose. You can consider this as the largest number you would require in a given problem. Just don’t go beyond it. That number, which is practically large to suffice for all our needs in a situation, would be infinity. Once you have chosen it, it would be the largest number in your number system. And its reciprocal would be the smallest number in your system. And then all the numbers would be multiples of this smallest number. 2. When I discussed the operations, it rose out in this number system that when we would multiply two numbers of the same sign, the product would be of the same sign as well. So two negative numbers will have a negative product. And the product of two numbers with different signs will have the sign of the larger number. So imaginary numbers are possible. 3. Motivational……………. I guess allowing imaginary numbers is pretty motivational for me. And regarding the choice of infinity for our problems, I just had a hunch (no logical grounds whatsoever) that for our present Quantum mechanics we could use Planck length and the Planck time as the smallest numbers and their reciprocals as the infinities? Thanks again for the reply. I hope this discussion continues.
What would be the sign of -3*3 in your system? How would one multiply (a + b)*(a - b)?
Without knowing their magnitudes, you couldn't just write a^2 - b^2, could you?
Without knowing the purpose of your system, I couldn't say whether or not you were on the right track.
-3*3 would be a 9 which would be neutral, like zero. It would neither be positive nor negative because its positive and negative parts would be the same. In this system, (2+2) is different from 4 because the PARTS of (2+2) are ¥ more than parts of 4.¥ would be the largest number you would choose for your situation. The same goes for other operations (bit different in multiplication and division). So a^2 would be of the same sign as a but would be different than the square of a. (e.g. 3^2 is different than 9 because the parts of 3^2 are ¥ times greater than 9) We just have to consider the parts of a number (or the sum or product or difference or quotient). If positive part is greater, then the number is positive and if the negative part is greater, then the number is negative. What this implies is that 1. An operation changes the nature of a number. 2. Imaginary numbers are allowed. 3. The Math deals with finite numbers. I have to admit that I really had no Mathematical paradox or problem to solve for so that I developed this system. These ideas just came to my mind and I developed it. So I thought I would discuss them. But they MIGHT have some application, as is the case with such theoretical games. The idea about Planck length and time are simply the hunches of a novice. I don’t really have an advanced background to apply these ideas, but I am searching for applications (just like many Mathematics concepts, these may just be concepts). If you know of a place to apply these, do tell me. A computer scientist tells me they deal with confined number systems in some place in their field. Comments are welcome.
Quote by Abdul Wadood -3*3 would be a 9 which would be neutral, like zero.
$$-3\times 3=-9\neq 9\neq 0$$
exactly. -9 would be neutral but not equal to zero. all the treatment depends on the parts of numbers.
Recognitions: Gold Member Science Advisor Staff Emeritus Can we take it then that you titled this "Imaginary Numbers" only because you had no idea what "imaginary numbers" meant?
Blog Entries: 2
Quote by Abdul Wadood exactly. -9 would be neutral but not equal to zero. all the treatment depends on the parts of numbers.
So 2*-18 would be -36 while 12*-3 would be +36 but 6*-6 would just be 36 if I understand what you are saying. Does this mean that -36 equals +36 equals 36 (something like the absolute value being the size of a number, while the sign says something about the history of operations that obtain the number). Looks too weird to have a useful purpose but then the same was true for boolean math.
Quote by Abdul Wadood Thank you very much for such a positive response. 1. Infinity in this discussion would be any arbitrary number that you choose. You can consider this as the largest number you would require in a given problem. Just don’t go beyond it. That number, which is practically large to suffice for all our needs in a situation, would be infinity. Once you have chosen it, it would be the largest number in your number system. And its reciprocal would be the smallest number in your system. And then all the numbers would be multiples of this smallest number. 2. When I discussed the operations, it rose out in this number system that when we would multiply two numbers of the same sign, the product would be of the same sign as well. So two negative numbers will have a negative product. And the product of two numbers with different signs will have the sign of the larger number. So imaginary numbers are possible. 3. Motivational……………. I guess allowing imaginary numbers is pretty motivational for me. And regarding the choice of infinity for our problems, I just had a hunch (no logical grounds whatsoever) that for our present Quantum mechanics we could use Planck length and the Planck time as the smallest numbers and their reciprocals as the infinities? Thanks again for the reply. I hope this discussion continues.
You claim in 1 "And its reciprocal would be the smallest number in your system." What would happen if you multiplied this number by itself? You would get a number SMALLER which is not in your system. If you put this new number into your system and multiply it by itself again you get ANOTHER number not in your system, so your infinity is rapidly increasing without bound and your reciprocals are quickly approaching ZERO.
This is the kind of trouble you run into when you define infinity as some finite number.
Quater-Imaginary is a number system that Donald Knuth made that can represent complex numbers. There is a base -1+i that can model the Dragon Curve fractal. People who were interested in what this thread might have been, may like to come help me with my puzzle: http://www.physicsforums.com/showthread.php?t=511758
Quote by Abdul Wadood Well I have developed a number system which allows the existence of imaginary numbers. Please visit it at : http://www.scribd.com/doc/46064105/Math-Paper. An intro of these ideas is presented at :http://www.scribd.com/doc/46117043/I...Research-Paper Please provide me feedback. Am I thinking on the right track? Thanks for your time and (mental) effort.
Isn't the paper just describing a new algebra? There are an infinite number of algebras.
What I mean is:
Let us say there is some set which happens to be a function of a parameter t (which you call time). At any time t, you define ##x_1(t)## and ##x_k(t)## (which you call the "smallest" number" and "largest number"), where ##k=\mbox{n}\left(S(t)\right)##, such that any element in S(t) ##x_j(t)##, ##x_1(t)\leq x_j(t)\leq x_k(t)## and ##x_1(0)=x_k(0)=\mbox{indeterminate}##. Then you define the operations + and × as you do in your paper. Then, the object (S(t),+,×) can be called R(t). Now, R(t) is the algebra you introduce in your paper.
But then, you realise that there are an infinite number of R(t)'s at a given point in t, depending on how you define ##x_1(t)## and ##x_k(t)##. So you then try to scope down R(t) by restricting ##x_1(t)## and ##x_k(t)## such that ##\forall t,\;R(t)## is always a ring.
What I just said was a description of the algebra you are defining.
But note that no matter what you do, you can't possibly bring imaginary numbers into question. They already are elements of the imaginary and complex number rings. Oh, and you should really discard the "neutral" numbers. They don't make sense, at least to me. You really need to redefine the multiplication you introduce.
P.S. I wish there where Euclid fonts in the list of PF fonts.
Quote by SubZir0 Quater-Imaginary is a number system that Donald Knuth made that can represent complex numbers. There is a base -1+i that can model the Dragon Curve fractal. People who were interested in what this thread might have been, may like to come help me with my puzzle: http://www.physicsforums.com/showthread.php?t=511758
Interesting, but how is this connected to this thread???
Anyway, this looks like an interesting number system. One area of application you might want to look into, is complex potential theory. There we make use of bounded infinity in numerical applications, and the positive, negative, and neutral distinction might yield some interesting results when defining electrical charges or solid bodies in the field.
Quote by Abdul Wadood 1. Infinity in this discussion would be any arbitrary number that you choose. You can consider this as the largest number you would require in a given problem. Just don’t go beyond it. That number, which is practically large to suffice for all our needs in a situation, would be infinity. Once you have chosen it, it would be the largest number in your number system. And its reciprocal would be the smallest number in your system. And then all the numbers would be multiples of this smallest number.
But, this is in direct violation of Peano's axioms for natural numbers. Namely, what is the successor of the largest number you imagine?
Blog Entries: 8 Recognitions: Gold Member Science Advisor Staff Emeritus This thread is too old and too crackpot to revive. | 2013-05-23 22:57:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.683469295501709, "perplexity": 486.6529045131078}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704054863/warc/CC-MAIN-20130516113414-00026-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://en.neurochispas.com/algebra/transformations-of-trigonometric-functions/ | # Transformations of Trigonometric Functions
There are several transformations of trigonometric functions with which we can switch to the graphs of the standard trigonometric functions. We can make changes in the amplitude, in the period, in the phase of the function. We can also perform vertical translations and produce reflections from the graphs.
Here, we will look at these transformations of trigonometric functions in detail along with some examples to visualize the effect of the transformations.
##### ALGEBRA
Relevant for
Learning to perform transformations of trigonometric functions.
See transformations
##### ALGEBRA
Relevant for
Learning to perform transformations of trigonometric functions.
See transformations
## Amplitude of trigonometric functions
The amplitude of a trigonometric function is the maximum distance on the graph of that function. The amplitude is the distance from the average value of the curve to its maximum or minimum value.
In the case of the sine and cosine functions, the amplitude is the value of the leading coefficient of the function. We can change the amplitude of these functions by multiplying the function by a constant A.
For example, if the function is $latex y = A \sin(x)$, then the amplitude is |A|.
In the case of the tangent, cosecant, secant, and cotangent functions, the amplitude will be infinitely large regardless of the value of A.
## Period of trigonometric functions
The period of a trigonometric function is the length of one cycle. That is, the period is the displacement of x in which the graph of the function begins to repeat. For example, consider the function $latex y=\sin(x)$:
The value $latex x=2\pi$ is the point at which the graph begins to repeat.
The coefficient of x is the constant that determines the period. The general form is $latex y=\sin(Bx)$, where B determines the period. For the sine, cosine, secant, and cosecant functions, the period is $latex 2\pi$ and we can change the period according to the formula:
$latex \text{period} =\frac{\text{original period}}{|B|}$
When |B| is greater than 1, the new period is smaller than the original, so the function appears to have horizontal compression. When |B| is less than 1, the function appears to have a horizontal stretch.
The tangent and cotangent functions have a period of $latex \pi$.
## Phase of trigonometric functions
The phase of a trigonometric function refers to the horizontal translation to the right of the graph of the function.
The general form of the trigonometric function is $latex y=A\sin B(x-C)$, where A is the amplitude, B is the period, and C is the phase.
The graph of $latex y = \sin(x)$ can be translated to the right or to the left. If C is positive, the translation is to the right and if C is negative, the translation is to the left.
## Vertical translation
By adding a D value to the trigonometric function, we will translate its graph vertically. If D is positive, the graph will move up by a factor of D, and if D is negative, the graph will move down.
The general form of the sine function with a vertical translation is $latex y = A\sin B(x-C) + D$.
## Reflections
To obtain the graph of:
• $latex y=-f(x)$ we reflect the graph of $latex y=f(x)$ with respect to the x axis
• $latex y=f(-x)$ we reflect the graph of $latex y=f(x)$ with respect to the y axis
The following graph shows the sine function in blue, that is, the function $latex y=\sin(x)$. The function in green represents both the function $latex y = -f(x)$ and the function $latex y=f(-x)$. In this particular case, the reflection at x of the function is equal to the reflection at y.
## Examples of trigonometric functions with transformations
### EXAMPLE 1
• The following is the graph of $latex y=5\sin(2(x-3))+1$.
Here, we have:
• Amplitude: $latex A=5$
• Period: $latex \text{period}=\frac{2\pi}{B}=\frac{2\pi}{2}=\pi$
• Phase: $latex C=3$
• Vertical translation: $latex D=1$
### EXAMPLE 2
• The following is the graph of the function $latex y=1.5\sin(0.5(x-2))-1$.
This function has the following:
• Amplitude: $latex A=1.5$
• Period: $latex \text{periodo}=\frac{2\pi}{B}=\frac{2\pi}{0.5}=4\pi$
• Phase: $latex C=2$
• Vertical translation: $latex D=-1$
### EXAMPLE 3
• The following is the graph of the function $latex y=3\sin(2(x+3))-2$.
In this function, we have:
• Amplitude: $latex A=2$
• Period: $latex \text{periodo}=\frac{2\pi}{B}=\frac{2\pi}{2}=\pi$
• Phase: $latex C=-3$
• Vertical translation: $latex D=-2$ | 2023-03-21 01:22:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9283233880996704, "perplexity": 398.9168618920957}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943589.10/warc/CC-MAIN-20230321002050-20230321032050-00634.warc.gz"} |
https://www.physicsforums.com/threads/jackson-5-10-topic-problem.465530/ | # Jackson 5.10 topic problem
## Homework Statement
There is one thing I'm not getting in this problem. It's about uniformly magnetized sphere. In solving the problem with formula for magnetic potential:
$$\phi_M=-\nabla\cdot\int\frac{\vec{M}}{|\vec{r}-\vec{r}'|}d^3r'$$
At one point a change of variable for differentiation is made
$$\frac{\partial}{\partial z}=\frac{\partial r}{\partial z}\frac{\partial}{\partial r}$$
And he says that $$\frac{\partial r}{\partial z}=\cos\theta$$. But I can't see that. If I'm using the formula for spherical coordinate system transformation: $$z=r\cos\theta$$ I don't get just cosine, I get 1/cosine :\
So what formula is he using?
Thanks
Related Advanced Physics Homework Help News on Phys.org
He is using $r = \sqrt {x^2 + y^2+z^2}$, and then
$\frac{\partial r}{\partial z} = \frac{z}{r} = cos{(\theta)}$
fzero
Homework Helper
Gold Member
Since $$r = \sqrt{ x^2 + y^2 + z^2}$$,
$$\frac{\partial r }{\partial z} = \frac{z}{r},$$
which leads to the formula from the text. I think you're making a mistake because the partial derivatives in spherical and Cartesian coordinates are related by a 3x3 matrix, so it's not enough to merely compare reciprocals.
oh! I see, thanks :D | 2020-06-01 17:43:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8877105116844177, "perplexity": 827.0316025416745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347419056.73/warc/CC-MAIN-20200601145025-20200601175025-00039.warc.gz"} |
https://cs.stackexchange.com/tags/pumping-lemma/new | # Tag Info
1
As all words of length $>1$ and only consisting of a's should be contained in L2, there is a simple finite automaton that recognizes it. So your attempt at using the pumping lemma is futile, as the pumping lemma only helps you prove that a language is irregular if it is, and doesn't tell you anything about languages that are regular. Maybe I'm also ...
2
Here is yet another proof. It is known that the number of integers at most $n$ which are the product of two primes is $o(n)$, see for example this answer, which gives the asymptotic $\frac{n\log\log n}{\log n}$. This means that your language is infinite yet has vanishing asymptotic density. This is impossible for a regular unary language.
1
According to the Fundamental Theorem of Arithmetic, any integer $>1$ can be written as a product of one or more primes (in a unique way). So, it seems that your language can be simplified as $\{a^n\mid n\geq 2\}$.
3
Just pump up $(M+1)$ $y$'s. Now you get $xy^{M+1}z=a^{(M+1)j+M-j}=a^{M(j+1)}$. Since $M$ is a product of two primes, $M(j+1)$ is a product of at least 3 primes, so $a^{M(j+1)}\notin L_1$, which proves $L_1$ is not regular by the pumping lemma.
1
There is an alternative to the “pumping” lemma which I find easier: After each possible input, determine the set of continuations that would complete a string of the language. You can use each of those sets as a state in the finite state machine for the language, so if there is a finite number of those sets then the language is regular- if there are ...
2
The subset of all palindromes in L is obviously not usually regular, take the simple example $a^*ba^*$ where the subset of palindromes $a^nba^n$ is not regular. Assume you have an FSM for L (that is an FSM describing and defining L). You can take that FSM and use a simple algorithm to determine if w is in M: Given a state S, define succ(S, a) as the state ...
Top 50 recent answers are included | 2019-10-16 14:30:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9058256149291992, "perplexity": 192.71340503282048}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986668994.39/warc/CC-MAIN-20191016135759-20191016163259-00479.warc.gz"} |
http://mathoverflow.net/questions/105390/results-for-minimizing-the-norm-w-r-t-a-unitary-matrix?sort=newest | # Results for minimizing the norm w.r.t a unitary matrix
Suppose $x \in \mathbb{R}^n$, $B,U \in \mathbb{R}^n\times\mathbb{R}^n$ and $U$ a unitary matrix. Define $g_{U}(x) = || BUx||$ where $||.||$ is some norm or norm-ish function on $\mathbb{R}^n$ (not unitarily invariant obviously). How can we choose $U$ in the unitary matrices to minimize $g$? Or what kind of results are there regarding the minimum value?
I'm particularly interested in $||.||_{\infty}$, the tail expectation, and maybe also the quantile function (i.e. $||x||$ is the $k$th largest element ... which will not be a norm.)
I'm mostly looking to "bind" this problem in the right way so that I can read the most appropriate material. I would imagine this kind of problem has been studied to death in various contexts. Or maybe I'm not seeing that the problem is in fact trivial or I've mis-specified it.
-
What do you mean by "minimizing $g$"? Do you have a fixed $x$? – Federico Poloni Aug 24 '12 at 14:09
Yes, that's what she means. – TerronaBell Aug 24 '12 at 14:36
(To elaborate: she said, "how can we choose U to minimize g," which makes me think x is fixed.) – TerronaBell Aug 24 '12 at 14:37
Yes, x is fixed. Sorry I should have mentioned that. I've written the problem in a bit of a bizarre way. – mathtick Aug 24 '12 at 15:16
Isn't choosing $U$ a unitary matrix and minimizing some $f(Ux)$ equivalent to $f(y)$, with $||y||_2=||x||_2$? In particular, if $||cx||=c||x||$, then you are just minimizing $||Bx||/||x||_2$. – Will Sawin Aug 24 '12 at 17:29
More specifically: for $||\cdot||_\infty$, first think about the case where $B$ is the identity. Then you simply need to rotate $x$ such that it points along some axis -- this way all of the "mass" is concentrated in a single component. More explicitly, you can construct $U$ as a Householder transformation onto a multiple of the first basis vector $e_1$, as is done in QR factorization (see description here). If $B$ is not the identity, then you want $Ux$ to be some vector such that $BUx=e_1$ or equivalently $Ux = B^{-1}e_1=:y$ (assuming $B$ is invertible). Again the Householder trick applies, this time using $y$ instead of $e_1$.
The $k$th largest element will be maximized when the first $k$ elements have equal magnitude and the remaining elements are zero. Once again, you can find the appropriate Householder rotation $U$ onto this vector. (Of course, if you don't actually care about $U$ itself and just want the value of the norm, you could compute this value directly as $||x||/\sqrt{k}$.)
I think you are maximizing but I the same reasoning applies: to minimize the infinity norm: rotate or reflect the x onto the "diagonal" i.e. something like the "1" vector. For //minimizing// the $k$th worst, I suppose it's trivial. You can always make $BUX=e_1$ which will always have $k$th worst (in absolute value) of zero. For the tail expectation, I mean the average of the $k$ largest components. – mathtick Aug 24 '12 at 15:43 | 2015-01-29 20:29:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9680328369140625, "perplexity": 365.2808060177196}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115868812.73/warc/CC-MAIN-20150124161108-00130-ip-10-180-212-252.ec2.internal.warc.gz"} |
http://alpopkes.com/posts/2020/10/mocking-in-python/ | # Mocking in Python
Published:
Topics: Mocking in Python
Today, I want to talk about mocking. I became interested in this topic a few months back when I started to work in a data engineering project at work. In this project we have a lot of tests that relies on mocking to test code with external dependencies. If you don’t like reading long blog posts, consider listening to one of the podcast episodes I did on this topic: there is one at Talk Python to Me and the other at Test and Code.
## Introduction
Before getting started you should note that mocking is a controversial topic. Some people think it is great while others regard it only as a last resort that should be be avoided. Personally, I believe that mocking can be a great tool to improve testing but should be used with care. Most importantly, it should not be used as a tool to fix badly written code. However, the Zen of Python tells us that practicality beats purity. So rather use mocking than no testing at all.
## What is mocking?
A mock simulates the existence and behavior of a real object. This allows developers to
• Improve the quality of their tests
• Test code in a controlled environment
• Test code which has external dependencies
The python library for mocking is unittest.mock. If you prefer using pytest, don’t worry - pytest supports the unittest.mock library. You can also take a look at pytest libraries for mocking like pytest-mock.
## Example class
In the spirit of our magical universe The Tales of Castle Kilmere we will work with an example class Spell in this blog post. Our Spell class is simple: each spell has a name, an incantation and a defining feature:
class Spell:
"""Creates a spell"""
def __init__(self, name: str, incantation: str, defining_feature: str):
self.name = name
self.incantation = incantation
self.defining_feature = defining_feature
## Naming conventions and Test Doubles
Similar to the question of whether mocking is good or bad there is disagreement about the terminology of mocking. Some people prefer a more fine-grained terminology in which different types of ‘mocking behavior’ are distinguished. Others believe that this makes things unnecessarily complicated. For completeness, let’s take a look at the more fine-grained view. In this view, objects which are not real can be either dummies, fakes, stubs, mocks or spies. All of these are so called test-doubles. Note that the definitions of the different kinds of test doubles are not set in stone but are controversial and vary between sources.
### Dummies
A dummy is an object which is passed around but not intended to be used in your tests. It will have no effect on the behaviour of your tests. An example for a dummy could be an attribute that is needed in order to instantiate a class but not required for the test. Looking at our Spell class we might want to test casting a spell but won’t require the defining_feature attribute for this:
class Spell:
"""Creates a spell"""
def __init__(self, name: str, incantation: str, defining_feature: str):
self.name = name
self.incantation = incantation
self.defining_feature = defining_feature
def cast(self):
print(f"{self.incantation}!")
def test_cast_spell():
name = "The Stuporus Ratiato spell"
incantation = "Stuporus Ratiato"
defining_feature = "" # this is a dummy
assert spell.cast == "Stuporus Ratiatio!"
### Fakes
A fake implements a fake version of a class or method. It has a working implementation but takes some kind of shortcut such that it is not suitable for production. For example, we could implement an in memory database which is used for testing but not in production. In this setting, the in memory database would function as a fake.
### Stubs
A stub is a non-real object which has pre-programmed behavior. Most of the time, stubs simply return fixed values (also refered to as canned data). For example, let’s say the Spell class has a method get_similar_spells which searches a database for similar spells. The logic behind this function is very complex and the function takes several minutes to complete. During testing, we don’t want to wait several minutes for a result. Therefore, we could replace the real implementation with a stub that returns hard-coded values; taking only a small fraction of the time.
### Mocks
Mocks are closely related to stubs. A mock does not have predetermined behavior. Instead, it has to be configured with our expectations. The important difference between mocks and stubs is that a mock records which calls have been executed. Therefore, it can be used to verify not only the result (which can be done with a stub, too) but HOW the result was achieved / that the correct methods have been invoked. Mocks are the type of test double we will be talking about in this blog post.
If you are confused about the difference between stubs and mocks, that a look at this stackoverflow post. It might be helpful.
### Spies
Spies are used to wrap real objects and, by default, route all method calls to the original object. In this sense, they intercept and record all calls to a real object. This allows us to verify calls to the original object (e.g. how often a certain method has been called) without replacing the original object (as, for example, a mock does). I haven’t use spies myself yet and find them the hardest to understand.
## When should we mock?
Mocking can be used whenever we don’t want to actually call an object. Let’s say our Spell class has a function save which writes file to disk and function remove which deletes local file:
class Spell:
"""Creates a spell"""
def __init__(self, name: str, incantation: str, defining_feature: str):
self.name = name
self.incantation = incantation
self.defining_feature = defining_feature
def save(self, save_path: str) -> dict:
spell_as_dict = {"name": self.name,
"incantation": self.incantation,
"description": self.description}
with open(save_path, "w") as file:
json.dump(spell_as_dict, file, indent=4)
return spell_as_dict
def remove(self, file_path: str):
os.remove(file_path)
When testing these methods we don’t want to write to disk every time the test runs Same holds for function ‘remove’
## How to mock
The unittest.mock library has three core functionalities:
1. The Mock class
2. The MagicMock class
3. The patch method
### The unittest.mock.Mock class
The Mock class can be used for mocking any object. A mock simulates the object it replaces. To achieve this, it creates attributes on the fly. In other words: you can access whatever methods and attributes you like, the mock object will simply create them. This is quite magical, don’t you think?
from unittest.mock import Mock
my_mock = Mock()
# Let's try to access some attribute
my_mock.fancy_attribute
>>> <Mock name='mock.fancy_attribute' id='140586483377056'>
# What about a method with inputs?
my_mock.fancy_method(3, 4)
>>> <Mock name='mock.fancy_method()' id='140586482172880'>
# We can assert that certain methods can be called
my_mock.fancy_method.assert_called_with(3, 4)
>>> True
When taking a close look at the example above we can see that each call returns a mock object with a different ID. This is an important feature of mocks: they will automatically create child mocks when you access a property or method. You should keep in mind that the methods and attributes you create have nothing to do with the ‘true’ implementation of a method. For example, we could mock the json library and access the method dumps on the mock object. We could call dumps with all kinds of attributes (or non at all) without the mock complaining about it:
json = Mock()
json.dumps()
>>> <Mock name='mock.dumps()' id='140586482201840'>
json.dumps.assert_called_once()
>>> True
We can configue a mock object to do exactly what we want it to do. For instance, we can set the return value of a mock, or side effects it should exhibit:
my_mock = Mock()
my_mock.return_value = 3
my_mock()
>>> 3
my_mock = Mock(side_effect=Exception("Python rocks!"))
my_mock()
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-14-faa07e9a9283> in <module>
----> 1 my_mock()
~/anaconda3/lib/python3.8/unittest/mock.py in __call__(self, *args, **kwargs)
1079 self._mock_check_sig(*args, **kwargs)
1080 self._increment_mock_call(*args, **kwargs)
-> 1081 return self._mock_call(*args, **kwargs)
1082
1083
~/anaconda3/lib/python3.8/unittest/mock.py in _mock_call(self, *args, **kwargs)
1083
1084 def _mock_call(self, /, *args, **kwargs):
-> 1085 return self._execute_mock_call(*args, **kwargs)
1086
1087 def _increment_mock_call(self, /, *args, **kwargs):
~/anaconda3/lib/python3.8/unittest/mock.py in _execute_mock_call(self, *args, **kwargs)
1138 if effect is not None:
1139 if _is_exception(effect):
-> 1140 raise effect
1141 elif not _callable(effect):
1142 result = next(effect)
Exception: Python rocks!
### Mock vs. MagicMock
I mentioned that the unittest.mock library contains two classes: Mock and MagicMock. So what is the difference between them? MagicMock is a subclass of Mock. It contains all magic methods pre-created and ready to use (e.g. __str__, __len__, etc.). Therefore, you should use MagicMock when you need magic methods, and Mock if you don’t need them.
### The unittest.mock.patch method
The third main functionality contained in the unittest.mock library is the patch function. So what is patching? The patch() function looks up an object in a given module and replaces it with another object. By default, the object is replaced with a new MagicMock (but this can be changed). When you want to use patch you have to use the syntax patch(‘package.module.target’). For example, consider we have a file example.py which contains the following code:
from db import db_write
def foo():
x = db_write()
return x
We want to mock the call to the db_write function in our tests. With patching, this can be dones as follows:
from unittest.mock import patch
from example import foo
@patch('example.db_write')
def test_foo(mock_write):
mock_write.return_value = 10
x = foo()
assert x == 10
What is happening when using patching like this? In simple terms, the call to db_write is replaced with a call to MagicMock:
def foo():
x = MagicMock()
return x
By replacing the call to db_write with a call to MagicMock, everything happening in db_write is not being executed. The only thing that is called is the MagicMock object!
In the example above we used patch() as a decorator. We could also use it as a context manager or with manual starting/stopping. When to use which version depends on the scope you want/need. Consider the decorator example:
from unittest.mock import patch
from example import foo
@patch('example.db_write')
def test_foo_decorator(mock_write):
mock_write.return_value = 10
x = foo()
assert x == 10
In this example, the db_write function will be replaced by a MagicMock for the entire test function. In other words: in the test function test_foo_decorator you don’t have access to the true implementation of db_write anymore, only to the mocked version. A more fine-grained scheme can be achieved using a context manager:
def test_foo_context_manager(mock_write):
with patch('example.db_write, return_value = 10):
x1 = foo()
assert x1 == 10
x2 = foo() # When calling foo here, db_write won't be mocked
Manual starting and stopping can be useful when you need to mock certain parts of your code for all tests in your test file. I haven’t yet used this functionality myself but you can find a good description here.
Note: there is also patch.object and patch.dict. We won’t discuss them here to avoid confusion, please check out the documentation to find out how they are used.
## Where do we patch?
When using patching it is important to know where patch is applied. Specifically, we should patch where the object is looked up. This might not be the place where the object is defined! In the example above (the one where our file is named example.py) we import the method db_write from the module db:
from db import db_write
def foo():
x = db_write()
return x
Consequently, the example module knows about db_write. However, it does not know about db. This is nicely illustrated when importing the example module (e.g. in an IPython shell) and calling dir(example) to see a list of attributes for the example object.
import example
dir(example)
>>>
['__builtins__',
'__cached__',
'__doc__',
'__file__',
'__name__',
'__package__',
'__spec__',
'db_write',
'foo']
As we can see here, db_write is in the namespace of example, but db is not in the namespace. Therefore, to mock the call to db_write in the foo function, we have to patch example.db_write:
from unittest.mock import patch
from example import foo
@patch('example.db_write')
def test_foo_decorator(mock_write):
mock_write.return_value = 10
x = foo()
assert x == 10
Think about this for a moment - db_write is defined in the module db but used in the module example.
Let’s drive this point home with another example. Let’s say that we import the entire db module in example.py:
import db
def foo():
x = db.db_write()
return x
In this case, the example module knows about db but not db_write:
import example
dir(example)
>>>
['__builtins__',
'__cached__',
'__doc__',
'__file__',
'__name__',
'__package__',
'__spec__',
'db',
'foo']
To mock the call to db_write in the foo function, we need to mock db.db_write or, if you find it easier to understand example.db.db_write:
from unittest.mock import patch
from example import foo
@patch('example.db.db_write')
def test_foo_decorator(mock_write):
mock_write.return_value = 10
x = foo()
assert x == 10
## Common mocking problems and how to solve them
In the beginning of this post we saw that we can configure Mock objects. For example, we can set their return value or side effects like exceptions. The fact that Mock objects create attributes and methods on the fly makes them sensitive to mistakes. For example, if you call .asert_called_once() instead of .assert_called_once() on a Mock, the test will not raise an AssertionError because you have created a new method on the Mock object called asert_called_once() instead of calling the actual function.
Another easy mistake occurs when forgetting that a Mock object does not know the interface of your class / method. In other words: the Mock object does not know which attributes it can be called with. We talked about that earlier with the json.dumps() example. We can illustrate this problem when taking a closer look at the attributes of the thing we are mocking. Let’s say we we mock our Spell class. When calling dir() on the mocked object we see all kinds of attributes:
import spell_class
from unittest.mock import patch
with patch('spell_class.Spell') as mocked_spell:
print(dir(mocked_spell))
>>>
['assert_any_call', 'assert_called', 'assert_called_once', 'assert_called_once_with', 'assert_called_with', 'assert_has_calls', 'assert_not_called', 'attach_mock', 'call_args', 'call_args_list', 'call_count', 'called', 'configure_mock', 'method_calls', 'mock_add_spec', 'mock_calls', 'reset_mock', 'return_value', 'side_effect']
For example, we can see the different assertions that can be made on the Mock object. Hover, we don’t see the actual methods of the Spell class, e.g. the save or remove method! This is because the call to patch returns a MagicMock which is completely unrelated to the Spell class.
Problems like these can be solved using the spec=True attribute in the patch call. This will cause the MagicMock to look like the object we are patching. We could no longer call methods without argument or with the wrong arguments. Also, the misspelled .asert_called_once() would raise an Exception:
import spell_class
from unittest.mock import patch
with patch('spell_class.Spell', spec=True) as mocked_spell:
print(dir(mocked_spell))
>>> ['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'assert_any_call', 'assert_called', 'assert_called_once', 'assert_called_once_with', 'assert_called_with', 'assert_has_calls', 'assert_not_called', 'attach_mock', 'call_args', 'call_args_list', 'call_count', 'called', 'configure_mock', 'method_calls', 'mock_add_spec', 'mock_calls', 'remove', 'reset_mock', 'return_value', 'save', 'side_effect']
with patch('spell_class.Spell', spec=True) as mocked_spell:
mocked_spell.asert_called_once()
>>>
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
3
4 with patch('spell_class.Spell', spec=True) as mocked_spell:
----> 5 mocked_spell.asert_called_once()
6
~/anaconda3/lib/python3.8/unittest/mock.py in __getattr__(self, name)
635 elif self._mock_methods is not None:
636 if name not in self._mock_methods or name in _all_magics:
--> 637 raise AttributeError("Mock object has no attribute %r" % name)
638 elif _is_magic(name):
639 raise AttributeError(name)
AttributeError: Mock object has no attribute 'asert_called_once'
Similar to spec are autospec and specset. Take a look at the documentation or at the PyCon talk “Demystifying the patch function” for more details.
One caveat when using spec (or autospec or specset) is that you can not access attributes created in the __init__ method, that is, attributes that exist on the instances and not the class. Due to the internal workings of spec (and also autospec and specset) they cannot know about any dynamically created attributes. They only know about visible attributes. For our Spell example above, you won’t see the attributes name, incantation and description in the output of dir(mocked_spell). They are not valid attributes of the mock object anymore. Also, speccing comes at a cost and will slow down your tests
## Code design that prevents mocking
When you stumble into mock hell (a term introduced and explained in the talk “Mocking and Patching Pitfalls”) during development, it might be the right time to consider one of the design principles that prevent the need for mocking. For example, you could use dependency injection or an adaptor pattern. These principles are explained in the PyCon talk “Stop using Mocks - For a While”.
## Wrap-Up
Mocking is a controversial topic. It should be used with care and never to fix badly written code. The Python library for mocking - unittest.mock - comes with three core functionalities: Mock, MagicMock and patch. Mocking might seem confusing in the beginning but once you understand the basics (for example where to patch) it can be very helpful, especially with production code. There are lots of functionalities in the unittest.mock` library that help prevent common problems, so it is useful to take a look at the documentation when you want to start using mocking. Lastly, don’t get confused by the different naming conventions! If you feel more comfortable with the fine-grained view on test doubles great! But sticking to a simpler scheme is also fine.
Tags: | 2021-01-16 08:37:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20117561519145966, "perplexity": 3324.8346640061054}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703505861.1/warc/CC-MAIN-20210116074510-20210116104510-00177.warc.gz"} |
http://math.stackexchange.com/questions/25466/finding-csc-theta-given-cot-theta | # Finding $\csc \theta$ given $\cot \theta$
I have the following problem:
If $\cot{C} = \frac{\sqrt{3}}{7}$, find $\csc{C}$
From my trig identities, I know that $\cot{\theta} = \frac{1}{\tan{\theta}}$, and $\csc{\theta} = \frac{1}{\sin{\theta}}$, and also $\cot{\theta} = \frac{\cos{\theta}}{\sin{\theta}}$
However, I can't seem to see how to connect the dots to get from cotangent to cosecant. I figure I might be able to use the last identity if I can somehow make $\cos{C} = 1$, but I don't really see how to do that, either.
This is homework, so please provide me with some pointers rather than complete solutions.
Thanks.
-
In my answer here, I describe some general ideas for how to use one trig function of an angle to determine another trig function of the same angle. The basic idea is to draw triangle(s) on a coordinate system that correspond to the given trig function and use those to compute the other trig function.
-
Thanks, I'll try drawing some triangles and see if it helps. – friedo Mar 7 '11 at 6:31
@friedo: the diagrams in that other answer might also be helpful. – Isaac Mar 7 '11 at 6:33
From $\sin^2\theta + \cos^2\theta = 1$, divide through by $\sin^2\theta$ to get a relation between $\cot^2\theta$ and $\csc^2\theta$.
P.S. The information given is not enough, though, to determine the value of $\csc\theta$ unless you happen to know which quadrant you are working in; you know that you are in either quadrant I or III, since the cotangent is positive; but that does not tell you whether the cosecant is positive or negative; you'll have two possible answers. This is pretty much the same situation as how, if you know that $\sin\theta=\frac{1}{2}$, this only determines $\cos\theta$ up to sign.
-
Thanks, I think this will help. I do know that we're working in quadrant I since we're still on right triangles. – friedo Mar 7 '11 at 6:30
@friedo: In that case, Isaac's solution is very useful. Just draw any triangle with the right cotangent (the easiest is one in which the adjacent side is the numerator you have, and the opposite side is the denominator); use Pythagoras's Theorem to compute the hypothenuse, and then just read off the value of the cosecant. It is, really, the same as this method (you are using the corresponding identity when you compute the value of the hypothenuse) but it's easier to visualize. – Arturo Magidin Mar 7 '11 at 6:33
Here's a simple way I usually think about it. Suppose you have a right triangle in the first quadrant. Since $\cot C=\frac{\sqrt{3}}{7}$, you know the ratio of the leg adjacent to $C$ to the leg opposite of $C$ is $\frac{\sqrt{3}}{7}$. So let's just say the opposite leg has length $7$ and the adjacent leg has length $\sqrt{3}$. Then by the Pythagorean theorem, the hypotenuse has length $\sqrt{52}=2\sqrt{13}$.
Now $\csc C$ is just the ratio of the hypotenuse to the opposite leg, essentially the inverse of $\sin$, which is the ratio of the opposite leg to the hypotenuse.
- | 2014-12-18 10:03:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9592715501785278, "perplexity": 136.67541215317405}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802765722.114/warc/CC-MAIN-20141217075245-00150-ip-10-231-17-201.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/calculating-the-number-of-electrons-given-the-force-of-repulsion.977539/ | # Calculating the number of electrons given the force of repulsion
Homework Statement:
Two small spheres spaced 20.0 cm apart have equal charge. How many excess electrons must be present on each sphere if the magnitude of the force of repulsion between them is 3.33×10^−21N?
Relevant Equations:
Charge of an electron e=-1.6X10^-19 C
F=kq/(r^2)
Homework Statement: Two small spheres spaced 20.0 cm apart have equal charge. How many excess electrons must be present on each sphere if the magnitude of the force of repulsion between them is 3.33×10^−21N?
Homework Equations: Charge of an electron e=-1.6X10^-19 C
F=kq/(r^2)
For this I set the force equal to 3.33X10^-21N and solved for the value of q given that we know the values for k (9x10^9Nm^2/C^2) and r=0.2m. This gave a q value of 1.48x10^-31 which I then divided by the charge of an electron to get a value of 9.25x10^-14 which is not an appropriate value for number of electrons. Am I using the correct equation?
haruspex
Homework Helper
Gold Member
2020 Award
F=kq/(r^2)
Something missing there?
Do I need to include the r-vector?
This time I set F=3.3x10^-21 and divided this by the right side of the equation which I calculated out to be (9x10^9)(1.6x10^-19)(1.6x10^-19)/(.2x.2) which gave me a value of 578,125 electrons. Do I need to divide this value by 2 to get the number of electrons that need to be present on each sphere or does each sphere need 578,125 electrons?
Doc Al
Mentor
Try this: Assume that each sphere has the same number of electrons, let's call that number "n". So, if the charge on each electron is "e", what's the charge on each sphere? Rewrite your equation in terms of that, then you can solve for "n". | 2021-10-15 21:58:27 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8527596592903137, "perplexity": 543.156803723162}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583083.92/warc/CC-MAIN-20211015192439-20211015222439-00676.warc.gz"} |
http://www.statsblogs.com/page/135/ | ## Coupling of particle filters: likelihood curves
July 19, 2016
By
$Coupling of particle filters: likelihood curves$
Hi! In this post, I’ll write about coupling particle filters, as proposed in our recent paper with Fredrik Lindsten and Thomas B. Schön from Uppsala University, available on arXiv; and also in this paper by colleagues at NUS. The paper is about a methodology with multiple direct consequences. In this first post, I’ll focus on correlated likelihood estimators; in a later […]
## No, Google will not “sway the presidential election”
July 19, 2016
By
Grrr, this is annoying. A piece of exaggerated science reporting hit PPNAS and was promoted in Politico, then Kaiser Fung and I shot it down (“Could Google Rig the 2016 Election? Don’t Believe the Hype”) in our Daily Beast column last September. Then it appeared again this week in a news article in the Christian […] The post No, Google will not “sway the presidential election” appeared first on Statistical…
## Moving statistical theory from a “discovery” framework to a “measurement” framework
July 18, 2016
By
Avi Adler points to this post by Felix Schönbrodt on “What’s the probability that a significant p-value indicates a true effect?” I’m sympathetic to the goal of better understanding what’s in a p-value (see for example my paper with John Carlin on type M and type S errors) but I really don’t like the framing […] The post Moving statistical theory from a “discovery” framework to a “measurement” framework appeared…
## The HAC Emperor has no Clothes: Part 2
July 18, 2016
By
The time-series kernel-HAC literature seems to have forgotten about pre-whitening. But most of the action is in the pre-whitening, as stressed in my earlier post. In time-series contexts, parametric allowance for good-old ARMA-GARCH disturbances (with ...
## On deck this week
July 18, 2016
By
Mon: Moving statistical theory from a “discovery” framework to a “measurement” framework Tues: Bayesian Linear Mixed Models using Stan: A tutorial for psychologists, linguists, and cognitive scientists Wed: Going beyond confidence intervals Thurs: Ioannidis: “Evidence-Based Medicine Has Been Hijacked” Fri: What’s powdery and comes out of a metallic-green cardboard can? Sat: “The Dark Side of […] The post On deck this week appeared first on Statistical Modeling, Causal Inference, and…
## What happened when I was forced to wait 30 minutes for the subway
July 18, 2016
By
What happened when I was forced to wait 30 minutes for the subway: pondering how easy it is for data analysts to get fooled by bad data
## Color markers in a scatter plot by a third variable in SAS
July 18, 2016
By
One of my favorite new features in PROC SGPLOT in SAS 9.4m2 is addition of the COLORRESPONSE= and COLORMODEL= options to the SCATTER statement. By using these options, it is easy to color markers in a scatter plot so that the colors indicate the values of a continuous third variable. […] The post Color markers in a scatter plot by a third variable in SAS appeared first on The DO…
## Teachers and resource providers – uneasy bedfellows
July 18, 2016
By
Trade stands and cautious teachers It is interesting to provide a trade stand at a teachers’ conference. Some teachers are keen to find out about new things, and come to see how we can help them. Others studiously avoid eye-contact … Continue reading →
## Not So Standard Deviations Episode 18 – Divide by n-1, or n-2, or Whatever
July 18, 2016
By
Hilary and I talk about statistical software in fMRI analyses, the differences between software testing differences in proportions (a must listen!), and a preview of JSM 2016. Also, Hilary and I have just published a new book, Conversations on Data Sc...
## “Pointwise mutual information as test statistics”
July 17, 2016
By
Christian Bartels writes: Most of us will probably agree that making good decisions under uncertainty based on limited data is highly important but remains challenging. We have decision theory that provides a framework to reduce risks of decisions under uncertainty with typical frequentist test statistics being examples for controlling errors in absence of prior knowledge. […] The post “Pointwise mutual information as test statistics” appeared first on Statistical Modeling, Causal…
## Mittag-Leffler function and probability distribution
July 17, 2016
By
The Mittag-Leffler function is a generalization of the exponential function. Since k!= Γ(k + 1), we can write the exponential function’s power series as and we can generalize this to the Mittag-Leffler function which reduces to the exponential function when α = β = 1. There are a few other values of α and β for […]
## You can post social science papers on the new SocArxiv
July 17, 2016
By
I learned about it from this post by Elizabeth Popp Berman. The temporary SocArxiv site is here. It is connected to the Open Science Framework, which we’ve heard a lot about in discussions of preregistration. You can post your papers at SocArxiv right away following these easy steps: Send an email to the following address(es) […] The post You can post social science papers on the new SocArxiv appeared first…
## Bigmilk strikes again
July 16, 2016
By
The post Bigmilk strikes again appeared first on Statistical Modeling, Causal Inference, and Social Science.
## One-day workshop on causal inference (NYC, Sat. 16 July)
July 15, 2016
By
James Savage is teaching a one-day workshop on causal inference this coming Saturday (16 July) in New York using RStanArm. Here’s a link to the details: One-day workshop on causal inference Here’s the course outline: How do prices affect sales? What is the uplift from a marketing decision? By how much will studying for an […] The post One-day workshop on causal inference (NYC, Sat. 16 July) appeared first on…
## Replin’ ain’t easy: My very first preregistration
July 15, 2016
By
I’m doing my first preregistered replication. And it’s a lot of work! We’ve been discussing this for awhile—here’s something I published in 2013 in response to proposals by James Moneghan and by Macartan Humphreys, Raul Sanchez de la Sierra, and Peter van der Windt for preregistration in political science, here’s a blog discussion (“Preregistration: what’s […] The post Replin’ ain’t easy: My very first preregistration appeared first on Statistical Modeling,…
## Finish line (nearly)
July 15, 2016
By
We are very close to the finish line \$-\$ that's being able to finally submit the BCEA book to the editor (Springer).This has been a rather long journey, but I think the current version (I dread using the word "final" just yet...) is very good, I think....
## the curious incident of the inverse of the mean
July 14, 2016
By
A s I figured out while working with astronomer colleagues last week, a strange if understandable difficulty proceeds from the simplest and most studied statistical model, namely the Normal model x~N(θ,1) Indeed, if one reparametrises this model as x~N(υ⁻¹,1) with υ>0, a single observation x brings very little information about υ! (This is not a […]
## About that claim that police are less likely to shoot blacks than whites
July 14, 2016
By
Josh Miller writes: Did you see this splashy NYT headline, “Surprising New Evidence Shows Bias in Police Use of Force but Not in Shootings”? It’s actually looks like a cool study overall, with granular data, and a ton of leg work, and rich set of results that extend beyond the attention grabbing headline that is […] The post About that claim that police are less likely to shoot blacks than…
## That’s like so random! Monte Carlo for Data Science
July 14, 2016
By
Another great turnout at the DataPhilly meetup last night. Was great to see all you random data nerds! Code snippets to generate animated examples here.
## Enriching mathematics with statistics
July 14, 2016
By
Statistics enriches everything! In many school systems in the world, subjects are taught separately. In primary school, children learn reading and writing, maths and social studies at different times of the day. But more than that, many topics within subjects … Continue reading →
## The Bits Are Rotting in the State of Data Journalism
July 14, 2016
By
News articles are an incredibly important source of historical information. Online media and interactive pieces are much more at risk of breaking or disappearing, at least in theory. Well, it's not just theory. A quick look around shows a number of even fairly recent pieces in major publications that are broken today. The screenshot above is from … Continue reading The Bits Are Rotting in the State of Data Journalism
## Notes from the Kölner R meeting, 9 July 2016
July 13, 2016
By
Last Thursday the Cologne R user group came together again. This time, our two speakers arrived from Bavaria, to talk about Spark and R Server.Introduction to Apache SparkDownload slidesDubravko Dulic gave an introduction to Apache Spark and why Spark ...
## Of polls and prediction markets: More on #BrexitFail
July 13, 2016
By
David “Xbox poll” Rothschild and I wrote an article for Slate on how political prediction markets can get things wrong. The short story is that in settings where direct information is not easily available (for example, in elections where polls are not viewed as trustworthy forecasts, whether because of problems in polling or anticipated volatility […] The post Of polls and prediction markets: More on #BrexitFail appeared first on Statistical… | 2018-05-22 23:40:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32632946968078613, "perplexity": 4196.133685395854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864999.62/warc/CC-MAIN-20180522225116-20180523005116-00634.warc.gz"} |
https://plainmath.net/algebra-i/102989-what-is-the-scientific-notatio | 2023-02-25
What is the scientific notation for 500.
Hayley Rosario
500 is represented mathematically as 5*${10}^{2}$. Move the decimal, which is considered to be at the end of the number 500, two spaces to the left so that only the number 5 is to the left of the decimal. This will convert 500 to scientific notation. The decimal will be shifted to the right two places when the number is expanded, making the exponent a positive 2. | 2023-04-01 17:31:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8770672678947449, "perplexity": 357.4384033382687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950110.72/warc/CC-MAIN-20230401160259-20230401190259-00091.warc.gz"} |
https://xianblog.wordpress.com/tag/best-unbiased-estimator/ | ## efficiency and the Fréchet-Darmois-Cramèr-Rao bound
Posted in Books, Kids, Statistics with tags , , , , , , , , , , , on February 4, 2019 by xi'an
Following some entries on X validated, and after grading a mathematical statistics exam involving Cramèr-Rao, or Fréchet-Darmois-Cramèr-Rao to include both French contributors pictured above, I wonder as usual at the relevance of a concept of efficiency outside [and even inside] the restricted case of unbiased estimators. The general (frequentist) version is that the variance of an estimator δ of [any transform of] θ with bias b(θ) is
I(θ)⁻¹ (1+b'(θ))²
while a Bayesian version is the van Trees inequality on the integrated squared error loss
(E(I(θ))+I(π))⁻¹
where I(θ) and I(π) are the Fisher information and the prior entropy, respectively. But this opens a whole can of worms, in my opinion since
• establishing that a given estimator is efficient requires computing both the bias and the variance of that estimator, not an easy task when considering a Bayes estimator or even the James-Stein estimator. I actually do not know if any of the estimators dominating the standard Normal mean estimator has been shown to be efficient (although there exist results for closed form expressions of the James-Stein estimator quadratic risk, including one of mine the Canadian Journal of Statistics published verbatim in 1988). Or is there a result that a Bayes estimator associated with the quadratic loss is by default efficient in either the first or second sense?
• while the initial Fréchet-Darmois-Cramèr-Rao bound is restricted to unbiased estimators (i.e., b(θ)≡0) and unable to produce efficient estimators in all settings but for the natural parameter in the setting of exponential families, moving to the general case means there exists one efficiency notion for every bias function b(θ), which makes the notion quite weak, while not necessarily producing efficient estimators anyway, the major impediment to taking this notion seriously;
• moving from the variance to the squared error loss is not more “natural” than using any [other] convex combination of variance and squared bias, creating a whole new class of optimalities (a grocery of cans of worms!);
• I never got into the van Trees inequality so cannot say much, except that the comparison between various priors is delicate since the integrated risks are against different parameter measures.
## absurdly unbiased estimators
Posted in Books, Kids, Statistics with tags , , , , , , , on November 8, 2018 by xi'an
“…there are important classes of problems for which the mathematics forces the existence of such estimators.”
Recently I came through a short paper written by Erich Lehmann for The American Statistician, Estimation with Inadequate Information. He analyses the apparent absurdity of using unbiased estimators or even best unbiased estimators in settings like the Poisson P(λ) observation X producing the (unique) unbiased estimator of exp(-bλ) equal to
$(1-b)^x$
which is indeed absurd when b>1. My first reaction to this example is that the question of what is “best” for a single observation is not very meaningful and that adding n independent Poisson observations replaces b with b/n, which gets eventually less than one. But Lehmann argues that the paradox stems from a case of missing information, as for instance in the Poisson example where the above quantity is the probability P(T=0) that T=0, when T=X+Y, Y being another unobserved Poisson with parameter (b-1)λ. In a lot of such cases, there is no unbiased estimator at all. When there is any, it must take values outside the (0,1) range, thanks to a lemma shown by Lehmann that the conditional expectation of this estimator given T is either zero or one.
I find the short paper quite interesting in exposing some reasons why the estimators cannot find enough information within the data (often a single point) to achieve an efficient estimation of the targeted function of the parameter, even though the setting may appear rather artificial.
## best unbiased estimator of θ² for a Poisson model
Posted in Books, Kids, pictures, Statistics, Travel, University life with tags , , , , , , , , , , , , on May 23, 2018 by xi'an
A mostly traditional question on X validated about the “best” [minimum variance] unbiased estimator of θ² from a Poisson P(θ) sample leads to the Rao-Blackwell solution
$\mathbb{E}[X_1X_2|\underbrace{\sum_{i=1}^n X_i}_S=s] = -\frac{s}{n^2}+\frac{s^2}{n^2}=\frac{s(s-1)}{n^2}$
and a similar estimator could be constructed for θ³, θ⁴, … With the interesting limitation that this procedure stops at the power equal to the number of observations (minus one?). But, since the expectation of a power of the sufficient statistics S [with distribution P(nθ)] is a polynomial in θ, there is de facto no limitation. More interestingly, there is no unbiased estimator of negative powers of θ in this context, while this neat comparison on Wikipedia (borrowed from the great book of counter-examples by Romano and Siegel, 1986, selling for a mere \$180 on amazon!) shows why looking for an unbiased estimator of exp(-2θ) is particularly foolish: the only solution is (-1) to the power S [for a single observation]. (There is however a first way to circumvent the difficulty if having access to an arbitrary number of generations from the Poisson, since the Forsythe – von Neuman algorithm allows for an unbiased estimation of exp(-F(x)). And, as a second way, as remarked by Juho Kokkala below, a sample of at least two Poisson observations leads to a more coherent best unbiased estimator.)
## an improvable Rao–Blackwell improvement, inefficient maximum likelihood estimator, and unbiased generalized Bayes estimator
Posted in Books, Statistics, University life with tags , , , , , , , , on February 2, 2018 by xi'an
In my quest (!) for examples of location problems with no UMVU estimator, I came across a neat paper by Tal Galili [of R Bloggers fame!] and Isaac Meilijson presenting somewhat paradoxical properties of classical estimators in the case of a Uniform U((1-k)θ,(1+k)θ) distribution when 0<k<1 is known. For this model, the minimal sufficient statistic is the pair made of the smallest and of the largest observations, L and U. Since this pair is not complete, the Rao-Blackwell theorem does not produce a single and hence optimal estimator. The best linear unbiased combination [in terms of its variance] of L and U is derived in this paper, although this does not produce the uniformly minimum variance unbiased estimator, which does not exist in this case. (And I do not understand the remark that
“Any unbiased estimator that is a function of the minimal sufficient statistic is its own Rao–Blackwell improvement.”
as this hints at an infinite sequence of improvement.) While the MLE is inefficient in this setting, the Pitman [best equivariant] estimator is both Bayes [against the scale Haar measure] and unbiased. While experimentally dominating the above linear combination. The authors also argue that, since “generalized Bayes rules need not be admissible”, there is no guarantee that the Pitman estimator is admissible (under squared error loss). But given that this is a uni-dimensional scale estimation problem I doubt very much there is a Stein effect occurring in this case.
## best unbiased estimators
Posted in Books, Kids, pictures, Statistics, University life with tags , , , , , , , , , , , , on January 18, 2018 by xi'an
A question that came out on X validated today kept me busy for most of the day! It relates to an earlier question on the best unbiased nature of a maximum likelihood estimator, to which I pointed out the simple case of the Normal variance when the estimate is not unbiased (but improves the mean square error). Here, the question is whether or not the maximum likelihood estimator of a location parameter, when corrected from its bias, is the best unbiased estimator (in the sense of the minimal variance). The question is quite interesting in that it links to the mathematical statistics of the 1950’s, of Charles Stein, Erich Lehmann, Henry Scheffé, and Debabrata Basu. For instance, if there exists a complete sufficient statistic for the problem, then there exists a best unbiased estimator of the location parameter, by virtue of the Lehmann-Scheffé theorem (it is also a consequence of Basu’s theorem). And the existence is pretty limited in that outside the two exponential families with location parameter, there is no other distribution meeting this condition, I believe. However, even if there is no complete sufficient statistic, there may still exist best unbiased estimators, as shown by Bondesson. But Lehmann and Scheffé in their magisterial 1950 Sankhya paper exhibit a counter-example, namely the U(θ-1,θ-1) distribution:
since no non-constant function of θ allows for a best unbiased estimator.
Looking in particular at the location parameter of a Cauchy distribution, I realised that the Pitman best equivariant estimator is unbiased as well [for all location problems] and hence dominates the (equivariant) maximum likelihood estimator which is unbiased in this symmetric case. However, as detailed in a nice paper of Gabriela Freue on this problem, I further discovered that there is no uniformly minimal variance estimator and no uniformly minimal variance unbiased estimator! (And that the Pitman estimator enjoys a closed form expression, as opposed to the maximum likelihood estimator.) This sounds a bit paradoxical but simply means that there exists different unbiased estimators which variance functions are not ordered and hence not comparable. Between them and with the variance of the Pitman estimator. | 2019-09-18 15:51:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8361380100250244, "perplexity": 774.9685225342762}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573309.22/warc/CC-MAIN-20190918151927-20190918173927-00472.warc.gz"} |
https://franklin.dyer.me/notes/note/Chart_on_an_affine_space | ## Franklin's Notes
### Chart on an affine space
Given an affine space $X$, we may "choose coordinates" for $X$ by first choosing an origin and then isomorphically mapping the resulting vector space onto $\mathbb R^n$. This is accomplished using a composite function called a chart: where the map $X\to T_a X$ is given by "fixing" the origin at $a$ by sending $x\mapsto (a,x)$ in the tangent space at $a$; the map $T_a X\to T$ is given by taking differences $(a,x)\mapsto d_a(x)=d(a,x)$; and the map $A:T\to \mathbb R^n$ is a "choice of basis". For clarity, $C_a$ is not used to convert $X$ into a vector space - just to assign it a coordinate system that is useful for labeling points. | 2023-01-30 08:55:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8062276840209961, "perplexity": 236.4847254991023}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499804.60/warc/CC-MAIN-20230130070411-20230130100411-00462.warc.gz"} |
http://mathhelpforum.com/advanced-statistics/115888-determining-cdf-laplace-pdf.html | # Math Help - Determining the CDF of the Laplace PDF
1. ## Determining the CDF of the Laplace PDF
$\frac{1}{2} \int^x_{- \infty} e^{-|t|} dt= \frac{1}{2} \int^x_{ \infty} e^{t}dt + \frac{1}{2} \int^x_{- \infty} e^{-t} dt$
I'm pretty sure I bounded the second integral incorrectly. What would be the correct way?
2. Hello,
First you have to determine if x is positive or negative...
Let's suppose it's negative.
Then $t\in(-\infty,x)\implies |t|=-t$
So $\int_{-\infty}^x e^{-|t|} ~dt=\int_{-\infty}^x e^t ~dt=\int_{-x}^\infty e^{-t} ~dt$ (if you substitute t=-t)
If x is positive, then $t\in(-\infty,0) \implies |t|=-t$ and $t\in(0,x)\implies |t|=t$
So $\int_{-\infty}^x e^{-|t|} ~dt=\int_{-\infty}^0 e^{t} ~dt+\int_0^x e^{-t} ~dt=1+\int_0^x e^{-t} ~dt$
3. Thank you once again. | 2015-03-30 13:44:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6638507843017578, "perplexity": 1224.9393243712764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131299339.12/warc/CC-MAIN-20150323172139-00067-ip-10-168-14-71.ec2.internal.warc.gz"} |
https://www.dcode.fr/lattice-path | Search for a tool
Lattice Path
Tool to calculate all paths on a lattice graphe (square grid graph). A path is a series of directions (north, south, east, west) to connect two points on a grid.
Results
Lattice Path -
Tag(s) : Graph Theory
Share
dCode and you
dCode is free and its tools are a valuable help in games, maths, geocaching, puzzles and problems to solve every day!
A suggestion ? a feedback ? a bug ? an idea ? Write to dCode!
Team dCode likes feedback and relevant comments; to get an answer give an email (not published). It is thanks to you that dCode has the best Lattice Path tool. Thank you.
# Lattice Path
## Path Count Calculator (North-East - NE)
The information on this page is for a square grid and is not valid on triangular grids (or other non square lattice graphs).
### Between 2 points
Tool to calculate all paths on a lattice graphe (square grid graph). A path is a series of directions (north, south, east, west) to connect two points on a grid.
### How to count paths on a lattice graph?
The calculation of the number of paths (of length $$a + b$$) on a grid of size (a x b) (limited to a north-south direction and a west-east direction) uses combinatorics tools such as the binomial coefficient $$\binom{a+b}{a}$$
The north direction N consists of moving up one unit along the ordinate (0,1).
The east direction E consists of moving one unit to the right along the abscissa (1,0).
Example: To go from the point $$(0, 0)$$ to the point $$(2, 2)$$ (which corresponds to a 2x2 grid) using only north and east. (N,N,E,E), (N,E,N,E), (N,E,E,N), (E,N,E,N), (E,N,N,E), (E,E,N,N) so 6 paths and is computed $$\binom{4}{2} = 6$$
### What is a lattice graph?
A grid graph is the name given to a bounded grid (with borders).
### How to enumerate pathways in a lattice graph?
To generate the list of all paths, use the permutation generator.
Example: N,N,N,E has 4 distinct permutations: (N,N,N,E) (N,N,E,N) (E,N,N,N) (N,E,N,N)
## Source code
dCode retains ownership of the source code of the script Lattice Path online. Except explicit open source licence (indicated Creative Commons / free), any algorithm, applet, snippet, software (converter, solver, encryption / decryption, encoding / decoding, ciphering / deciphering, translator), or any function (convert, solve, decrypt, encrypt, decipher, cipher, decode, code, translate) written in any informatic langauge (PHP, Java, C#, Python, Javascript, Matlab, etc.) which dCode owns rights will not be given for free. To download the online Lattice Path script for offline use on PC, iPhone or Android, ask for price quote on contact page ! | 2019-06-17 20:51:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3805198669433594, "perplexity": 3951.073149398266}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998580.10/warc/CC-MAIN-20190617203228-20190617225228-00123.warc.gz"} |
https://ictp.acad.ro/category/paper/page/3/ | ## Qualitative properties of solutions for mixed type functional-differential equations with maxima
AbstractIn this paper we study some properties of the solutions of a second order system of functional-differential equations with maxima,…
## Inequalities for indices of coincidence and entropies
AbstractWe consider a probability distribution depending on a real parameter x. As functions of x, the Renyi entropy and the…
## Iterates of Cheney-Sharma type operators on a triangle with curved side
AbstractWe consider some Cheney-Sharma type operators as well as their product and Boolean sum for a function defined on a…
## A stabilized finite element method for inverse problems subject to the convection-diffusion equation. II: convection-dominated regime
AbstractWe consider the numerical approximation of the ill-posed data assimilation problem for stationary convection–diffusion equations and extend our previous analysis…
## On the two-dimensional inverse problem of dynamics
AbstractAuthorsKeywordsReferencesPDFScanned paper. Latex version of the paper. Cite this paper as:Pal A., Anisiu M.C., On the two-dimensional inverse problem of dynamics,…
## Inhomogeneous potentials producing hemogeneous orbits
AbstractAuthorsKeywordsReferencesPDFScanned paper. Latex version of the paper. Cite this paper as:Bozis G., Anisiu M.C., Blaga C., Inhomogeneous potentials producing hemogeneous orbits,…
## Two-dimensional inverse problem of dynamics for families in parametric form
AbstractAuthorsKeywordsReferencesPDF(pdf file here) Cite this paper as:Anisiu M.C., Pal A., Two-dimensional inverse problem of dynamics for families in parametric form, Inverse…
## Programmed motion for a class of families of planar orbits
AbstractAuthorsKeywordsReferencesPDFScanned paper. Latex version of the paper. Cite this paper as:Anisiu MC., Bozis G., Programmed motion for a class of families…
## PDES in the inverse problem of dynamics
AbstractAuthorsKeywordsReferencesPDF(pdf file here) Cite this paper as:Anisiu M.C., PDES in the inverse problem of dynamics, Analysis and Optimization of Differential Systems,…
## Special families of orbits in the direct problem of dynamics
AbstractAuthorsKeywordsReferencesPDFScanned paper. Latex version of the paper. Cite this paper as:Anisiu MC., Blaga C., Bozis G., Special families of orbits in… | 2021-10-18 13:57:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.6907137632369995, "perplexity": 2692.944345166169}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585203.61/warc/CC-MAIN-20211018124412-20211018154412-00649.warc.gz"} |
http://www.perimeterinstitute.ca/fr/calendar/year | A A
Connect with us:
d l m m j v s
1
2
3
4
5
6
7
8
14
15
21
22
28
29
d l m m j v s
2
4
5
10
11
12
18
19
20
25
26
d l m m j v s
4
5
11
12
17
18
19
24
25
26
27
30
d l m m j v s
1
2
3
8
9
13
14
15
16
17
22
23
29
30
d l m m j v s
6
7
13
14
20
21
22
26
27
28
29
d l m m j v s
1
2
3
4
5
6
9
10
11
17
18
24
25
29
30
d l m m j v s
1
2
3
5
6
7
8
9
10
11
12
13
14
15
16
18
23
29
30
d l m m j v s
5
6
7
8
12
13
14
15
16
18
19
20
26
27
28
30
31
d l m m j v s
1
2
3
4
8
9
10
17
18
21
23
24
29
30
d l m m j v s
1
5
6
7
8
13
14
15
21
22
23
27
28
29
d l m m j v s
3
4
5
11
12
18
19
24
25
26
d l m m j v s
2
3
4
6
8
9
10
11
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31 | 2017-11-24 08:19:55 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9873591661453247, "perplexity": 9212.237074278839}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934807146.16/warc/CC-MAIN-20171124070019-20171124090019-00046.warc.gz"} |
https://www.findfilo.com/math-question-answers/find-the-coordinates-of-the-foci-the-vertices-the-mm5 | Find the coordinates of the foci, the vertices, the length of majo | Filo
Class 11
Math
3D Geometry
Conic Sections
572
150
Find the coordinates of the foci, the vertices, the length of major axis, the minor axis, the eccentricity and the length of the latus rectum of the ellipse
Solution: The given equation is
It can be written as
...(1)
Here the donominator of is greater than the denominator of
Therefore, the major axis is along the -axis while the minor axis is along the -axis.
On comparing equation (1) with , we obtain and
Therefore, the coordinates of the foci are
The coordinates of the vertices are (0,
Length of major axis
Length of minor axis
Eccentricity
Length of latus rectum
572
150
Connecting you to a tutor in 60 seconds. | 2021-09-27 09:36:19 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8657571077346802, "perplexity": 1526.8695124780975}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058415.93/warc/CC-MAIN-20210927090448-20210927120448-00077.warc.gz"} |
http://www.khullakitab.com/machine/notes/science/class-9/357/practices | Machine
Machine is the device that makes our work easier, changes the direction of the force and does the work efficiently.
M.A = $\frac{{{\rm{\: \: load\: }}}}{{{\rm{effort\: }}}}$
It has no unit.
Velocity ratio
The ratio of the distance moved by the effort to the distance moved by the load is called velocity ratio.
Velocity ratio = $\frac{{{\rm{\: distance\: moved\: by\: effort\: }}}}{{{\rm{distance\: moved\: by\: the\: load}}}}$
Efficiency of the machine
It is defined as the ratio of work done by the machine to the work done in the machine.
Efficiency = ${\rm{\: \: }}\frac{{{\rm{\: work\: done\: by\: the\: machine\: \: }}}}{{{\rm{work\: done\: on\: the\: machine\: }}}}$
In other way, it can be defined as the ratio of output work to the input work. It is expressed in terms of percentage.
Efficiency = ${\rm{\: \: }}\frac{{{\rm{\: output\: work\: \: \: }}}}{{\begin{array}{*{20}{c}}{input\: \: work}\\\:\end{array}}}$*100%
An ideal or perfect machine has output work equal to the input work and its efficiency is 100%.But in practice, no such machine exists, so, the efficiency of machine is always less than 100%
Relation between M.A, V.R, and Efficiency
We have,
Efficiency = ${\rm{\: \: }}\frac{{{\rm{\: output\: work\: \: \: }}}}{{\begin{array}{*{20}{c}}{input\: \: work}\\\:\end{array}}}$*100%
=$\frac{{{\rm{load*distance\: moved\: by\: the\: load\: }}}}{{\begin{array}{*{20}{c}}{effort*distance\:moved\: by\: the\: effort}\\{}\end{array}}}$*100%
Efficiency =$\frac{{\: \: \: \: {\bf{M}}.{\bf{A}}}}{{{\bf{V}}.{\bf{A}}}}$ *100%
Pulley
Pulley is a wheel made of up wood or metal that rotates on its axle. It has groove in its circumference through which a rope can be passed easily.
There are mainly two types:
Single fixed pulley: It is the type of pulley in which the load is raised directly upward. Load is placed at the one end and effort is applied at the other end .Here the effort applied moves down the distance l and the load moves up the distance l.
In the absence of friction
Input work = output work
E*l=L*l
L/E=1
M.A =1
Also the V.R = distance moved by the effort /distance moved by the load
= l/l
=1
So .the efficiency =100% for the single fixed pulley.
In this type of pulley load and effort are equal. There is no gain mechanical advantage .
Single movable pulley: In this pulley, there are two wheels, one fixed and other movable. Effort is applied on the free end of rope and the load is total upward force is 2 times the effort
M.A for this pulley is 2
V.R =2
Therefore, Efficiency =100%
But in real practice efficiency is always less than 100%
In general, V.R is equal to number of pulley used.
Inclined plane
It is machine which is used to lift the heavy object to a height.
M.A = $\frac{{{\rm{the\: \: distance\: moved\: in\: aplane\: surface}}.{\rm{\: }}}}{{{\rm{the\: actual\: height\: \: through\: which\: load\: is\: raised}}}}$
=${\rm{\: \: }}\frac{{\rm{l}}}{{\rm{h}}}$
V.R = $\frac{{{\rm{distance\: moved\: byt\: he\: effort\: in\: an\: inclined\: plane}}}}{{{\rm{distance\: \: moved\: by\: load\: vertically\: upward}}}}\frac{{\rm{l}}}{{\rm{h}}}$
Putting the value,
Efficiency =$\frac{{{\rm{M}}.{\rm{A}}}}{{{\rm{V}}.{\rm{A}}}}$*100% = 100%
Wheel and axle
Wheel and axle consists of two co axial cylinder .Large one is called wheel and small one is called axle.
M.A = $\frac{{{\rm{radius\: of\: wheel\: }}}}{{{\rm{radius\: of\: axle\: }}}}$
V.R = $\frac{{{\rm{distance\: moved\: by\: effort\: }}}}{{{\rm{distane\: moved\: by\: t\: he\: load}}}}$ = $\frac{{{\rm{radius\: of\: wheel\: }}}}{{{\rm{radius\: of\: axle\: }}}}$
Efficiency =$\frac{{{\rm{M}}.{\rm{A}}}}{{{\rm{V}}.{\rm{A}}}}$*100%
Moment
Moment of the force may be defined as the product of the force and its perpendicular distance of the line of action from the axis of rotation.
Moment = f*d
Law of moment
It states that “In the balanced condition the sum of the clockwise moment acting on it is equal to the sum of anti clockwise moment on it”.
Go Top | 2023-03-28 21:01:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7701929807662964, "perplexity": 1477.2206834871631}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948871.42/warc/CC-MAIN-20230328201715-20230328231715-00380.warc.gz"} |
https://uk.mathworks.com/help/robust/gs/robust-controller-design.html | # Robust Controller Design
This example shows how to design a feedback controller for a plant with uncertain parameters and uncertain model dynamics. The goals of the controller design are good steady-state tracking and disturbance-rejection properties.
Design a controller for the plant G described in Robust Controller Design. This plant is a first-order system with an uncertain time constant. The plant also has some uncertain dynamic deviations from first-order behavior beyond about 9 rad/s.
bw = ureal('bw',5,'Percentage',10);
Gnom = tf(1,[1/bw 1]);
W = makeweight(.05,9,10);
Delta = ultidyn('Delta',[1 1]);
G = Gnom*(1+W*Delta)
G =
Uncertain continuous-time state-space model with 1 outputs, 1 inputs, 2 states.
The model uncertainty consists of the following blocks:
Delta: Uncertain 1x1 LTI, peak gain = 1, 1 occurrences
bw: Uncertain real, nominal = 5, variability = [-10,10]%, 1 occurrences
Type "G.NominalValue" to see the nominal value, "get(G)" to see all properties, and "G.Uncertainty" to interact with the uncertain elements.
### Design Controller
Because of the nominal first-order behavior of the plant, choose a PI control architecture. For a desired closed-loop damping ratio ξ and natural frequency ${\omega }_{\mathit{n}}$, the design equations for the proportional and integral gains (based on the nominal open-loop time constant of 0.2) are:
${K}_{p}=\frac{2\xi {\omega }_{n}}{5}-1,\phantom{\rule{0.2777777777777778em}{0ex}}{K}_{i}=\frac{{\omega }_{n}^{2}}{5}.$
To study how the uncertainty in G affects the achievable closed-loop bandwidth, design two controllers, both achieving ξ = 0.707, but with different ${\omega }_{\mathit{n}}$ values, 3 and 7.5.
xi = 0.707;
wn1 = 3;
wn2 = 7.5;
Kp1 = 2*xi*wn1/5 - 1;
Ki1 = (wn1^2)/5;
C1 = tf([Kp1,Ki1],[1 0]);
Kp2 = 2*xi*wn2/5 - 1;
Ki2 = (wn2^2)/5;
C2 = tf([Kp2,Ki2],[1 0]);
### Examine Controller Performance
The nominal closed-loop bandwidth achieved by C2 is in a region where G has significant model uncertainty. It is therefore expected that the model variations cause significant degradations in the closed-loop performance with that controller. To examine the performance, form the closed-loop systems and plot the step responses of samples of the resulting systems.
T1 = feedback(G*C1,1);
T2 = feedback(G*C2,1);
tfinal = 3;
step(T1,'b',T2,'r',tfinal)
The step responses for T2 exhibit a faster rise time because C2 sets a higher closed-loop bandwidth. However, as expected, the model variations have a greater impact.
You can use robstab to check the robustness of the stability of the closed-loop systems to model variations.
opt = robOptions('Display','on');
stabmarg1 = robstab(T1,opt);
Computing peak... Percent completed: 100/100
System is robustly stable for the modeled uncertainty.
-- It can tolerate up to 401% of the modeled uncertainty.
-- There is a destabilizing perturbation amounting to 401% of the modeled uncertainty.
-- This perturbation causes an instability at the frequency 3.74 rad/seconds.
stabmarg2 = robstab(T2,opt);
Computing peak... Percent completed: 100/100
System is robustly stable for the modeled uncertainty.
-- It can tolerate up to 125% of the modeled uncertainty.
-- There is a destabilizing perturbation amounting to 125% of the modeled uncertainty.
-- This perturbation causes an instability at the frequency 11.4 rad/seconds.
The display gives the amount of uncertainty that the system can tolerate without going unstable. In both cases, the closed-loop systems can tolerate more than 100% of the modeled uncertainty range while remaining stable. stabmarg contains lower and upper bounds on the stability margin. A stability margin greater than 1 means the system is stable for all values of the modeled uncertainty. A stability margin less than 1 means there are allowable values of the uncertain elements that make the system unstable.
### Compare Nominal and Worst-Case Behavior
While both systems are stable for all variations, their performance is affected to different degrees. To determine how the uncertainty affects closed-loop performance, you can use wcgain to compute the worst-case effect of the uncertainty on the peak magnitude of the closed-loop sensitivity function, S = 1/(1+GC). This peak gain of this function is typically correlated with the amount of overshoot in a step response; peak gain greater than one indicates overshoot.
Form the closed-loop sensitivity functions and call wcgain.
S1 = feedback(1,G*C1);
S2 = feedback(1,G*C2);
[maxgain1,wcu1] = wcgain(S1);
[maxgain2,wcu2] = wcgain(S2);
maxgain gives lower and upper bounds on the worst-case peak gain of the sensitivity transfer function, as well as the specific frequency where the maximum gain occurs. Examine the bounds on the worst-case gain for both systems.
maxgain1
maxgain1 = struct with fields:
LowerBound: 1.8832
UpperBound: 1.8866
CriticalFrequency: 3.2410
maxgain2
maxgain2 = struct with fields:
LowerBound: 4.6286
UpperBound: 4.6381
CriticalFrequency: 11.6174
wcu contains the particular values of the uncertain elements that achieve this worst-case behavior. Use usubs to substitute these worst-case values for uncertain elements, and compare the nominal and worst-case behavior.
wcS1 = usubs(S1,wcu1);
wcS2 = usubs(S2,wcu2);
bodemag(S1.NominalValue,'b',wcS1,'b');
hold on
bodemag(S2.NominalValue,'r',wcS2,'r');
While C2 achieves better nominal sensitivity than C1, the nominal closed-loop bandwidth extends too far into the frequency range where the process uncertainty is very large. Hence the worst-case performance of C2 is inferior to C1 for this particular uncertain model.
Get trial now | 2021-05-10 18:48:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.724444568157196, "perplexity": 2691.332326272958}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991759.1/warc/CC-MAIN-20210510174005-20210510204005-00381.warc.gz"} |
https://atorus-research.github.io/Tplyr/reference/add_column_headers.html | When working with 'huxtable' tables, column headers can be controlled as if they are rows in the data frame. add_column_headers eases the process of introducing these headers.
add_column_headers(.data, s, header_n = NULL)
## Arguments
.data The data.frame/tibble on which the headers shall be attached The text containing the intended header string A header_n or generic data.frame to use for binding count values. This is required if you are using the token replacement.
## Value
A data.frame with the processed header string elements attached as the top rows
## Details
Headers are created by providing a single string. Columns are specified by delimitting each header with a '|' symbol. Instead of specifying the destination of each header, add_column_headers assumes that you have organized the columns of your data frame before hand. This means that after you use Tplyr::build(), if you'd like to reorganize the default column order (which is simply alphabetical), simply pass the build output to a dplyr::select or dplyr::relocate statement before passing into add_column_headers.
Spanning headers are also supported. A spanning header is an overarching header that sits across multiple columns. Spanning headers are introduced to add_column_header by providing the spanner text (i.e. the text that you'd like to sit in the top row), and then the spanned text (the bottom row) within curly brackets ('{}). For example, take the iris dataset. We have the names:
"Sepal.Length" "Sepal.Width" "Petal.Length" "Petal.Width" "Species"
If we wanted to provide a header string for this dataset, with spanners to help with categorization of the variables, we could provide the following string:
"Sepal {Length | Width} | Petal {Length | Width} | Species"
## Important note
Make sure you are aware of the order of your variables prior to passing in to add_column_headers. The only requirement is that the number of column match. The rest is up to you.
## Development notes
There are a few features of add_column_header that are intended but not yet supported:
• Nested spanners are not yet supported. Only a spanning row and a bottom row can currently be created
• Different delimiters and indicators for a spanned group may be used in the future. The current choices were intuitive, but based on feedback it could be determined that less common characters may be necessary.
## Token Replacement
This function has support for reading values from the header_n object in a Tplyr table and adding them in the column headers. Note: The order of the parameters passed in the token is important. They should be first the treatment variable then any cols variables in the order they were passed in the table construction.
Use a double asterisk "**" at the begining to start the token and another double asterisk to close it. You can separate column parameters in the token with a single underscore. For example, **group1_flag2_param3** will pull the count from the header_n binding for group1 in the treat_var, flag2 in the first cols argument, and param3 in the second cols argument.
You can pass fewer arguments in the token to get the sum of multiple columns. For example, **group1** would get the sum of the group1 treat_var, and all cols from the header_n.
## Examples
# Load in pipe
library(magrittr)
library(dplyr)
#>
#> Attaching package: ‘dplyr’#> The following objects are masked from ‘package:stats’:
#>
#> filter, lag#> The following objects are masked from ‘package:base’:
#>
#> intersect, setdiff, setequal, unionheader_string <- "Sepal {Length | Width} | Petal {Length | Width} | Species"
iris2 <- iris %>%
mutate_all(as.character)
#> # A tibble: 152 x 5
#> Sepal.Length Sepal.Width Petal.Length Petal.Width Species
#> <chr> <chr> <chr> <chr> <chr>
#> 1 Sepal "" Petal "" ""
#> 2 Length "Width" Length "Width" "Species"
#> 3 5.1 "3.5" 1.4 "0.2" "setosa"
#> 4 4.9 "3" 1.4 "0.2" "setosa"
#> 5 4.7 "3.2" 1.3 "0.2" "setosa"
#> 6 4.6 "3.1" 1.5 "0.2" "setosa"
#> 7 5 "3.6" 1.4 "0.2" "setosa"
#> 8 5.4 "3.9" 1.7 "0.4" "setosa"
#> 9 4.6 "3.4" 1.4 "0.3" "setosa"
#> 10 5 "3.4" 1.5 "0.2" "setosa"
#> # … with 142 more rows
# Example with counts
mtcars2 <- mtcars %>%
mutate_all(as.character)
t <- tplyr_table(mtcars2, vs, cols = am) %>%
group_count(cyl)
)
b_t <- build(t) %>%
mutate_all(as.character)
count_string <- paste0(" | V N=**0** {auto N=**0_0** | man N=**0_1**} |",
" S N=**1** {auto N=**1_0** | man N=**1_1**} | | ")
#> # A tibble: 5 x 7
#> row_label1 var1_0_0 var1_0_1 var1_1_0 var1_1_1 ord_layer_index ord_layer_1
#> <chr> <chr> <chr> <chr> <chr> <chr> <chr>
#> 1 "" "V N=18" "" "S N=14" "" "" ""
#> 2 "" "auto N=… "man N=6" "auto N=… "man N=7" "" ""
#> 3 "4" " 0 ( 0… " 1 ( 16… " 3 ( 42… " 7 (100… "1" "1"
#> 4 "6" " 0 ( 0… " 3 ( 50… " 4 ( 57… " 0 ( 0… "1" "2"
#> 5 "8" "12 (100… " 2 ( 33… " 0 ( 0… " 0 ( 0… "1" "3" | 2021-09-24 02:55:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3720199167728424, "perplexity": 11819.934806825653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057496.18/warc/CC-MAIN-20210924020020-20210924050020-00076.warc.gz"} |
http://stackoverflow.com/questions/1274018/system-security-securityexception-when-writing-to-event-log | # System.Security.SecurityException when writing to Event Log
I’m working on trying to port an ASP.NET app from Server 2003 (and IIS6) to Server 2008 (IIS7).
When I try and visit the page on the browser I get this:
Server Error in ‘/’ Application.
Security Exception
Description: The application attempted to perform an operation not allowed by the security policy. To grant this application the required permission please contact your system administrator or change the application’s trust level in the configuration file.
Exception Details: System.Security.SecurityException: The source was not found, but some or all event logs could not be searched. Inaccessible logs: Security
Source Error:
An unhandled exception was generated during the execution of the current web request. Information regarding the origin and the location of the exception can be identified using the exception stack trace below.
Stack Trace:
[SecurityException: The source was not found, but some or all event logs could not be searched. Inaccessible logs: Security.]
System.Diagnostics.EventLog.FindSourceRegistration(String source, String machineName, Boolean readOnly) +562 System.Diagnostics.EventLog.SourceExists(String source, String machineName) +251
[snip]
These are the things I’ve done to try and solve it:
1. Give “Everyone” full access permission to the key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\Security. This worked. But naturally I can’t do this in production. So I deleted the “Everyone” permission after running the app for a few minutes and the error re-appeared.
2. I created the source in the Application log and the Security log (and I verified it exists via regedit) during installation with elevated permissions but the error remained.
3. I gave the app a full trust level in the web.config file (and using appcmd.exe) but to no avail.
Does anyone have an insight as to what could be done here?
PS: This is a follow up to this question. I followed the given answers but to no avail (see #2 above).
-
I was getting this when trying to write to a custom source in a .Net service that was running as NetworkService. I just changed the event log source to match the service name that was setup via the .Net Service Setup package and it worked without setting registry permissions. I noticed it by seeing the service name as a key already in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\Application – Jon Adams Mar 31 '11 at 18:10
To give Network Service read permission on the EventLog/Security key (as suggested by Firenzi and royrules22) follow instructions from http://geekswithblogs.net/timh/archive/2005/10/05/56029.aspx
1. Open the Registry Editor:
1. Select Start then Run
2. Enter regedt32 or regedit
2. Navigate/expand to the following key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Eventlog\Security
3. Right click on this entry and select Permissions
4. Add the Network Service user
-
The correct username in IIS7 is "NETWORK SERVICE". – Fueled Dec 24 '12 at 15:05
In IIS7 you can assign the "NETWORK SERVICE" as the identity for an App Pool (you might find that ApplicationPoolIdentity is the default) or instead you can create a new user per Application Pool and set permissions on that "Custom account". see Specify an Identity for an Application Pool (IIS 7) – Grokodile Mar 4 '13 at 23:25
The changes take only effect after you restart your aplication on IIS – Zé Carlos Apr 1 '13 at 19:12
I gave IIS_IUSRS permission to read/write the eventlog key, and read the Security key. My product needed write access on the eventlog key because it creates its own event source. – duck9 Apr 5 '13 at 4:34
duck9 i correct for IIS8, see here for more details : stackoverflow.com/questions/712203/… – thedrs Jul 7 at 11:10
The solution was to give NetworkService read permission on the EventLog/Security key
-
I see similar solutions around. But I'm just wondering why it is like this. Because I can see that a lot of services are logged on as NetworkService and they must be able to read the event log /security. So why is it needed to add the permission for NetworkService ? – h--n Apr 14 '11 at 11:04
For those of us who don't normally crawl through the registry, this link may be helpful: social.msdn.microsoft.com/forums/en-US/… – Allan Jul 21 '11 at 17:30
Nice link Allan. Point #3 by the accepted answer is important and has already bitten me once. i.e. Granting permission at the parent EventLog registry key does NOT propagate to "inaccessible logs" such as Security and Virtual Server, even though they are child keys in the registry. If you want full event log access you have to grant permission at BOTH the parent event log level and the child Security levels. – Ben Barreth Mar 13 '12 at 20:47
The changes take only effect after you restart your aplication on IIS – Zé Carlos Apr 1 '13 at 19:13
For me ony granting 'Read' permissions for 'NetworkService' to the whole 'EventLog' branch worked.
-
The Problem is that the SourceExists check or CreateEventLog of the EventLog Class tries to access the EventLog in a way that is only permitted as administrator.
A common example for a C# Program logging into EventLog is:
string sSource;
string sLog;
string sEvent;
sSource = "dotNET Sample App";
sLog = "Application";
sEvent = "Sample Event";
if (!EventLog.SourceExists(sSource))
EventLog.CreateEventSource(sSource, sLog);
EventLog.WriteEntry(sSource, sEvent);
EventLog.WriteEntry(sSource, sEvent,
EventLogEntryType.Warning, 234);
But
if (!EventLog.SourceExists(sSource))
EventLog.CreateEventSource(sSource, sLog);
fails if the program hasn't administrator permissions. Therefore the recommended way ist to create an install script, which creates the corresponding key under
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\Application\dotNET Sample App
and then remove those two lines.
You can also create a .reg file to create the registry key. Simply save the following text into a file create.reg:
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\Application\dotNET Sample App]
-
I had a very similar problem with a console program I develop under VS2010 (upgraded from VS2008 under XP) My prog uses EnLib to do some logging. The error was fired because EntLib had not the permission to register a new event source.
So I started once my compiled prog as an Administrator : it registered the event source. Then I went back developping and debugging from inside VS without problem.
(you may also refer to http://www.blackwasp.co.uk/EventLog_3.aspx, it helped me
-
I ran into the same issue, but I had to go up one level and give full access to everyone to the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\ key, instead of going down to security, that cleared up the issue for me.
-
Also try setting the application to run as LocalSystem, so the registry key is created, then you can change back to NetworkService afterwards. – demoncodemonkey Jun 13 '13 at 21:46
– demoncodemonkey Jun 13 '13 at 21:54
I'm not working on IIS, but I do have an application that throws the same error on a 2K8 box. It works just fine on a 2K3 box, go figure.
My resolution was to "Run as administrator" to give the application elevated rights and everything works happily. I hope this helps lead you in the right direction.
Windows 2008 is rights/permissions/elevation is really different from Windows 2003, gar.
-
I try almost everything in here to solve this problem... I share here the answer that help me:
Another way to resolve the issue :
• in IIS console, go to application pool managing your site, and note the identity running it (usually Network Service)
• make sure this identity can read KEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Eventlog (rigth-click, authorisations)
• now change the identity of this application pool to Local System, apply, and switch back to Network Service
Credentials will be reloaded and EventLog reacheable
in http://geekswithblogs.net/timh/archive/2005/10/05/56029.aspx , thanks Michael Freidgeim
-
your third bullet worked like a charm for me. – Anicho Feb 8 '13 at 11:08
FYI...my problem was that accidently selected "Local Service" as the Account on properties of the ProcessInstaller instead of "Local System". Just mentioning for anyone else who followed the MSDN tutorial as the Local Service selection shows first and I wasn't paying close attention....
-
Hi I ran into the same problem when I was developing an application and wanted to install it on a remote PC, I fixed it by doing the following:
1) Goto your registry, locate: HKLM\System\CurrentControlSet\Services\EventLog\Application(???YOUR_SERVICE_OR_APP_NAME???)
Note that "(???YOUR_SERVICE_OR_APP_NAME???)" is your application service name as you defined it when you created your .NET deployment, for example, if you named your new application "My new App" then the key would be: HKLM\System\CurrentControlSet\Services\EventLog\Application\My New app
Note2: Depending on which eventLog you are writing into, you may find on your DEV box, \Application\ (as noted above), or also (\System) or (\Security) depending on what event your application is writing into, mostly, (\Application) should be fine all the times.
2) Being on the key above, From the menu; Select "FILE" -> "Export", and then save the file. (Note: This would create your necessary registry settings when the application would need to access this key to write into the Event Viewer), the new file will be a .REG file, for the argument sake, call it "My New App.REG"
3) When deploying on PRODuction, consult the Server's System's administrator (SA), hand over the "My New App.REG" file along with the application, and ask the SA to install this REG file, once done (as admin) this would create the key for your applicaion.
4) Run your application, it should not need to access anything else other than this key.
Problem should be resolved by now.
Cause:
When developing an application that writes anything into the EventLog, it would require a KEY for it under the Eventlog registry if this key isn't found, it would try to create it, which then fails for having no permissions to do so. The above process, is similar to deploying an application (manually) whereas we are creating this ourselves, and no need to have a headache since you are not tweaking the registry by adding permissions to EVERYONE which is a securty risk on production servers.
I hope this helps resolving it.
-
Same issue on Windows 7 64bits. Run as administrator solved the problem.
-
Had a similar issue with all of our 2008 servers. The security log stopped working altogether because of a GPO that took the group Authenticated Users and read permission away from the key HKLM\System\CurrentControlSet\Services\EventLog\security
Putting this back per Microsoft's recommendation corrected the issue. I suspect giving all authenticated users read at a higher level will also correct your problem.
-
This exception was occurring for me from a .NET console app running as a scheduled task, and I was trying to do basically the same thing - create a new Event Source and write to the event log.
In the end, setting full permissions for the user under which the task was running on the following keys did the trick for me:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog\Application
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog\Security
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\eventlog
-
I hit similar issue - in my case Source contained <, > characters. 64 bit machines are using new even log - xml base I would say and these characters (set from string) create invalid xml which causes exception. Arguably this should be consider Microsoft issue - not handling the Source (name/string) correctly.
-
Rebuilding the solution worked for me
-
I had this issue when running an app within VS. All I had to do was run the program as Administrator once, then I could run from within VS.
To run as Administrator, just navigate to your debug folder in windows explorer. Right-click on the program and choose Run as administrator.
-
My app gets installed on client web servers. Rather than fiddling with Network Service permissions and the registry, I opted to check SourceExists and run CreateEventSource in my installer.
I also added a try/catch around log.source = "xx" in the app to set it to a known source if my event source wasn't created (This would only come up if I hot swapped a .dll instead of re-installing).
-
Solution is very simple - Run Visual Studio Application in Admin mode !
- | 2014-07-30 01:13:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2586650848388672, "perplexity": 4408.923232926181}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510268363.15/warc/CC-MAIN-20140728011748-00220-ip-10-146-231-18.ec2.internal.warc.gz"} |
http://mathonline.wikidot.com/integration-with-partial-fractions-examples-1 | Integration with Partial Fractions Examples 1
# Integration with Partial Fractions Examples 1
Recall the material outlined on the Integration with Partial Fractions. We will now look at some examples of integrating functions through partial fractions. More examples can be found on the Integration with Partial Fractions Examples 2 page.
## Example 1
Integrate $f(x) = \frac{2x^2 + 3}{x^3 - 2x^2 + x}$.
We start with a proper rational function, so we do not need to do long division. We will now factor the denominator to obtain that $x^3 -2x^2 + x = x(x-1)^2$. Hence it follows that since one of the linear factors is repeated that for some A, B, and C:
(1)
\begin{align} \frac{2x^2 + 3}{x^3 - 2x^2 + x} = \frac{A}{x} + \frac{B}{(x - 1)} + \frac{C}{(x - 1)^2} \\ \frac{2x^2 + 3}{x^3 - 2x^2 + x} = \frac{A(x-1)^2}{x} + \frac{Bx(x-1)}{(x - 1)} + \frac{Cx}{(x - 1)^2} \\ \frac{2x^2 + 3}{x^3 - 2x^2 + x} = \frac{A(x-1)^2 + Bx(x-1) + Cx}{x(x-1)^2} \end{align}
Hence it follows that $2x^2 + 3 = A(x-1)^2 + Bx(x-1) + Cx$. And now we can solve for A, B, and C by choosing values of x. First let's let x = 1. Then it follows that 5 = C. Now let's choose x = 0, and it follows that A = 3. Lastly let's choose x = 2, and get that 11 = A + 2B + 2C. We know the values of A and C though, so we find that B = -1 by substitution. Hence we get that:
(2)
\begin{align} \frac{2x^2 + 3}{x^3 - 2x^2 + x} = \frac{3}{x} + \frac{-1}{(x - 1)} + \frac{5}{(x - 1)^2} \\ \end{align}
Now we can integrate:
(3)
\begin{align} \int f(x) \: dx = \int \frac{3}{x} + \frac{-1}{(x - 1)} + \frac{5}{(x - 1)^2} \: dx \\ \quad \int f(x) \: dx = 3 \ln | x | - \ln | x - 1 | - \frac{5}{x - 1} + C \end{align}
## Example 2
Integrate $f(x) = \frac{x^2 + 5x + 2}{(x + 1)(x^2 + 1)}$.
We already have the denominator factored, however, this time we have an irreducible quadratic factor in the denominator. Hence we know that:
(4)
\begin{align} \frac{x^2 + 5x + 2}{(x + 1)(x^2 + 1)} = \frac{A}{x + 1} + \frac{Bx + C}{x^2 + 1} \\ \frac{x^2 + 5x + 2}{(x + 1)(x^2 + 1)} = \frac{A(x^2 + 1) + (Bx + C)(x + 1)}{(x +1)(x^2 + 1)} \\ \end{align}
So it follows that $x^2 + 5x + 2 = A(x^2 + 1) + (Bx + C)(x+1)$. Let's let x = -1, it thus follows that 2A = -2, or more appropriately A = -1. When we let x = 0, we get that A + C = 2, but we know that A = -1, so C = 3. When we let x = 1, then 2A + 2B + 2C = 8, and by substitution B = 2. Hence it follows that:
(5)
\begin{align} f(x) = \frac{-1}{x + 1} + \frac{2x + 3}{x^2 + 1} \\ \int f(x) \: dx = \int \frac{-1}{x + 1} + \frac{2x + 3}{x^2 + 1} \: dx \\ \int f(x) \: dx = - \ln | x + 1 | + \int \frac{2x + 3}{x^2 + 1} \: dx \\ \quad \int f(x) \: dx = - \ln | x + 1 | + \int \frac{2x}{x^2 + 1} \: dx + \int \frac{3}{x^2 + 1} \: dx\\ \end{align}
We will omit fully completing this example, but integration by partial fractions can be repeated for this example.
## Example 3
Evaluate the following integral: $\int \frac{x - 9}{(x + 5)(x - 2)} \: dx$.
We first recognize that for some A and B:
(6)
\begin{align} \frac{x - 9}{(x + 5)(x - 2)} = \frac{A}{(x + 5)} + \frac{B}{(x - 2)} \\ \frac{x - 9}{(x + 5)(x - 2)} = \frac{A(x-2) + B(x+5)}{(x+5)(x-2)} \end{align}
Hence $x- 9 = A(x -2) + B(x + 5)$. When we let x = -5, then -7A = -14, or rather, A = 2. When we let x = 2, we get that B = -1. Hence we get that:
(7)
\begin{align} \frac{x - 9}{(x + 5)(x - 2)} = \frac{2}{(x + 5)} - \frac{1}{(x - 2)} \\ \quad \int \frac{2}{(x + 5)} - \frac{1}{(x - 2)} \: dx = 2 \ln | x + 5 | - \ln | x - 2 | + C \end{align}
## Example 4
Evaluate the following integral: $\int \frac{2}{2x^2 + 3x + 1} \: dx$.
Let's first factor the denominator to get that $2x^2 +3x + 1 = (2x + 1)(x + 1)$. He thus know that for some A and B:
(8)
\begin{align} \frac{2}{(2x + 1)(x + 1)} = \frac{A}{(2x + 1)} + \frac{B}{(x + 1)} \\ \frac{2}{(2x + 1)(x + 1)} = \frac{A(x +1) + B(2x + 1)}{(2x + 1)(x + 1)} \end{align}
So then $2 = A(x+1) + B(2x + 1)$. If we let x = -1, then -B = 2, so B = -2. If we let x = -1/2, then we get that A/2 = 2, or rather A = 4. Hence we can now integrate the function:
(9)
\begin{align} \int \frac{4}{(2x + 1)} - \frac{2}{(x + 1)} \: dx \\ \quad 2 \ln | 2x + 1 | - 2 \ln | x + 1 | + C \end{align}
## Example 5
Evaluate the following integral: $\int \frac{x^2 + 1}{(x-3)(x-2)^2} \: dx$.
For some A, B, and C, we get that:
(10)
\begin{align} \frac{x^2 + 1}{(x-3)(x-2)^2} = \frac{A}{(x-3)} + \frac{B}{(x - 2)} + \frac{C}{(x - 2)^2} \\ \frac{x^2 + 1}{(x-3)(x-2)^2} = \frac{A(x-2)^2 + B(x-3)(x-2) + C(x-3)}{(x-3)(x-2)^2} \\ \end{align}
It thus follows that $x^2 + 1 = A(x - 2)^2 + B(x - 3)(x - 2) + C(x - 3)$. If we let x = 3, we get that A = 10. If we let x = 2, we get that -C = 5, or more appropriately that C = -5. If we let x = 1, we get that A + 2B - 2C = 2. By substitution we get B = -9, and now we can integrate:
(11)
\begin{align} \frac{x^2 + 1}{(x-3)(x-2)^2} = \frac{10}{(x-3)} - \frac{9}{(x - 2)} - \frac{5}{(x - 2)^2} \\ \int \frac{10}{(x-3)} - \frac{9}{(x - 2)} - \frac{5}{(x - 2)^2} \: dx = 10 \ln | x - 3 | - 9 \ln | x - 2 | + \frac{5}{(x - 2)} + C \end{align}
## Example 6
Evaluate the following integral: $\int \frac{x^3 + x^2 + 2x + 1}{(x^2 + 1)(x^2 + 2)} \: dx$.
In this example, the denominator consists of two irreducible quadratic factors. Hence we know that for some A, B, C, and D:
(12)
\begin{align} \frac{x^3 + x^2 + 2x + 1}{(x^2 + 1)(x^2 + 2)} = \frac{Ax + B}{(x^2 + 1)} + \frac{Cx + D}{(x^2 + 2)} \\ \frac{x^3 + x^2 + 2x + 1}{(x^2 + 1)(x^2 + 2)} = \frac{(Ax + B)(x^2 + 2) + (Cx + D)(x^2 + 1)}{(x^2 + 1)(x^2 + 2)} \end{align}
Hence we know that $x^3 + x^2 + 2x + 1 = (Ax + B)(x^2 + 2) + (Cx + D)(x^2 +1)$.
First let's let x = 0. Hence we get that $1 = 2B + D$. Suppose B = 1, and D = -1. Continuing forward, let x = 1. We get that $5 = (A + B)(3) + (C + D)(2)$, or rather $5 = 3A + 3B + 2C + 2D$. Letting B = 1 and D = -1, we get that $5 = 3A + 3 + 2C - 2$, or more appropriately $4 = 3A + 2C$. Let's let A = 2, then C = -1. Hence we have a combination:
(13)
\begin{align} A = 2 \\ B = 1 \\ C = -1 \\ D = -1 \end{align}
Hence we can now integrate the function:
(14)
\begin{align} \frac{x^3 + x^2 + 2x + 1}{(x^2 + 1)(x^2 + 2)} = \frac{2x + 1}{(x^2 + 1)} + \frac{-x -1}{(x^2 + 2)} \end{align} | 2019-03-22 03:52:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 14, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9998764991760254, "perplexity": 483.07928928636454}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202628.42/warc/CC-MAIN-20190322034516-20190322060516-00014.warc.gz"} |
https://theknowledgeburrow.com/what-number-is-3e10/ | Menu Close
# What number is 3E10?
## What number is 3E10?
3e10 from hexadecimal to decimal is 15888.
## What does 3 E10 mean?
Number 3E10 converted to English text. Formatted number: 30 000 000 000. thirty billion. thirty billion.
What is 9e in calculator?
On a calculator display, E (or e) stands for exponent of 10, and it’s always followed by another number, which is the value of the exponent. For example, a calculator would show the number 25 trillion as either 2.5E13 or 2.5e13. In other words, E (or e) is a short form for scientific notation.
### What number is 9e 10?
Answer and Explanation: On a calculator, 9e10 means 9 * 1010. The term e means exponent of ten on a calculator. Therefore, if the notation is 9e10, then it means 9e + 10 or…
### What does E10 mean in calculators?
multiply by 1010
Answer and Explanation: The notation E10 means to multiply by 1010. When working on a calculator, it is common for the calculator to represent very large or very small…
What is this number 30000000000?
Cardinal: 30000000000 can be written as Thirty billion.
## What does 1e 9 mean?
The e (or E ) means “times 10-to-the”, so 1e9 is “one times ten to the ninth power”, and 1e-9 means “one times ten to the negative ninth power”. In mathematical scientific notation, this is usually denoted by a superscript: 1 × 10-9 or -1 × 109.
## Is E the same as x10?
It means exponential, which means multiplied by 10 to the power of the number after the “e”. So in this case it means 2 * 10^10.
What does 3 e7 mean?
3.90234375 e716 in Hexadecimal number system and want to translate it into Decimal. 3.e716 = 3∙160+14∙16-1+7∙16-2 = 3+0.875+0.02734375 = 3.9023437510.
### What does 9e 13 mean?
The e means exponent so 9 zeros after the 13. grendeldekt and 5 more users found this answer helpful. Thanks 4. 3.0. (1 vote)
### What does 2e10 mean on a calculator?
Your 2e10 is 2×10^10. what does 9e 10 mean? 9e +10 => 9 is the first whole value number. => Where e stands for the number of zeroes next to 9. => where +10 means that there are positive (whole numbers) 10 zeroes next to 9. Therefore, 9e +10. => 90 000 000 000.
What does the number 1E10 mean on a calculator?
If anything like that shows up, a number followed by an uppercase “e” and another value, this essentially means scientific notation, with the number preceding the “e” being the value and the number following the “e” being the power ten is raised to. In this case, 1E10 equals 1 * 10^ (10000000000) or 10000000000.
## What does 9 E 10 mean on a calculator?
On a calculator, 9 e 10 means 9 * 10 10 . The term e means exponent of ten on a calculator. Therefore, if the notation is 9 e 10, then it means 9 e + 10 or… See full answer below. Become a Study.com member to unlock this answer!
## Which is bigger 1E10 or 1.5e10?
Thus 1E10 is 10,000,000,000. Another scientific notation is 1.5E10 which is 15,000,000,000. Strictly speaking, the mantissa (number before the E) must be greater or equal to 1 and less than 10 (or can also be said 1 <= x < 10 where x is the mantissa) and the exponent (number after the E) can be any integer. | 2022-08-11 06:37:27 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8425626158714294, "perplexity": 2497.3467661686686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571234.82/warc/CC-MAIN-20220811042804-20220811072804-00218.warc.gz"} |
https://socratic.org/questions/how-do-you-solve-9x-42x-49 | # How do you solve 9x² + 42x = -49?
Jun 6, 2018
$x = - \frac{7}{3}$
#### Explanation:
$9 {x}^{2} + 42 x = - 49$
Move all terms to LHS
$9 {x}^{2} + 42 x + 49 = 0$
Now we factor
$\left(3 x + 7\right) \left(3 x + 7\right) = 0$
${\left(3 x + 7\right)}^{2} = 0$
Now we solve for $x$
$3 x + 7 = 0$
$3 x = - 7$
$x = - \frac{7}{3}$
Although this is a polynomial of degree $2$ and you would expect $2$ solutions, it turns out that there is one redundant solution, namely $x = - \frac{7}{3}$. | 2020-02-29 14:07:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 12, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9498969316482544, "perplexity": 811.8323589592779}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875149238.97/warc/CC-MAIN-20200229114448-20200229144448-00392.warc.gz"} |
http://math.stackexchange.com/questions/307353/how-do-you-write-represent-the-all-ones-matrix | # How do you write / represent the 'all ones matrix'?
Is there a convention to write the all ones matrix in formulas? I'm going to write about the following formular:
$$A = B + XD + DX + N$$
Where D is a diagonal matrix and X the all ones matrix:
$$X = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ 1 & 1 & \cdots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \cdots & 1 \end{pmatrix}$$
Is there a greek letter or other convention?
-
You could write $\mathbf1\mathbf1^\top$, where $\mathbf1$ is the vector with all components $1$. – joriki Feb 18 '13 at 19:19
I have seen it written as J, for example, by people who discuss incidence matrices of projective planes. The incidence matrix $A$ of a projective plane of order $n$ satisfies $A^{t}A = AA^{t} = nI +J,$ and $AJ = JA = nJ.$ – Geoff Robinson Feb 18 '13 at 19:23
Mathworld and Wikipedia both seem to use $J$. I'm not sure if this is a set convention though since this overlaps with notation for the Jordan form. – EuYu Feb 18 '13 at 19:24
Unit matrix "J" will do. Thank you! – edgar.holleis Feb 18 '13 at 19:33
Yes, I have seen the $n\times n$ all-ones matrix denoted $J_n$. I think this is somewhat conventional in algebraic combinatorics, but I have no idea whether it is commonly used elsewhere. | 2015-05-23 05:00:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9188581109046936, "perplexity": 463.10388308471516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927185.70/warc/CC-MAIN-20150521113207-00158-ip-10-180-206-219.ec2.internal.warc.gz"} |
https://crypto.stackexchange.com/questions/34217/non-iterative-cryptographic-hash-functions?noredirect=1 | # Non-iterative cryptographic hash functions
Consider the following cryptographic hash function $H$ which maps a message $m$ of variable size to $b$ bits:
$$H:\{0,1\}^{*} \mapsto \{0,1\}^b$$ $$y = H(m) = SPRP(IV||m||padding)\mid_{b}$$
, where: $$SPRP:\{0,1\}^n \mapsto \{0,1\}^n,\\ |m|+|IV|+|padding|=n,\\ |IV| = b.$$
Such a hash function could be considered non-iterative since, unlikely an iterative hash function, no entropy is discarded until truncation in the last step. While such a function has the disadvantage of not being streaming, it does have the nice property that finding a multicollision[0] is much harder than in iterative hash functions.
• What other non-iterative hash function have been developed?
• Is there another name for this as non-iterative doesn't seem to be a good search term?
Edited to add a more formal definition: An iterative hash function is any hash function in which some the message can be compressed independent of the entire message.
A hash function $G$ is iterative iff: $$\exists\ f\ h\ \pi, \forall m:\\ \pi(m_1||\ m_2)=m\ \ \ \wedge\\ |f(m_1)|<|m_1|<|m|\ \ \ \wedge\\ h(f(m_1),f(m_2)) = G(m)$$
A non-iterative hash function is any hash function for which it is impossible to compress any bits of the message without access to the entire message.
A hash function $H$ is non-iterative iff:
$$\not \exists\ f\ h\ \pi, \forall m:\\ \pi(m_1||m_2)=m\ \ \ \wedge\\ |f(m_1)|<|m_1|<|m|\ \ \ \wedge\\ h(f(m_1),f(m_2)) = H(m)$$
YMMV, the above definition is not a standard definition and exists to explain what I mean by non-iterative.
[0]: Antoine Joux, Multicollisions in iterated hash functions, 2004
• I'm not sure I see what 'non-iterative' means formally. Can you give a more precise definition? – pg1989 Apr 3 '16 at 19:25
• Your 'non-iterative' hash function is similar to the the compression function of MD6 (a truncated permutation that can take an input message of up to 4096 bits). Of course, this compression function is then used in a mode of operation that turns MD6 into an iterative hash function with a far larger potential message space - but then so could your hash function. – J.D. Apr 9 '16 at 2:15
• @EthanHeilman - I am also a fan of MD6. So is there anything besides truncated permutation-based hash functions that do not discard entropy until the final truncation step? Well, any such function would need to be injective (prior to the truncation step) in order to not lose entropy. So there are two alternatives to a permutation that fit the bill: 1) a bijective function where the domain is not the codomain, and 2) an injective non-surjective function. i.e. an 'expanding' function, where the codomain is larger than the domain). – J.D. Apr 9 '16 at 3:39
• With your definitions, can't you construct a non-iterative hash from any normal hash using $h'(m) = h(m||r(m))$, where $r$ reverses the bitstring? – otus Apr 9 '16 at 13:14
• @EthanHeilman - how about this for a formal definition: a hash function $H_x(y)$ that produces a digest of length $x$ is "non-iterative" iff for any two distinct messages $M_1$, $M_2$, there is a finite digest size $b$ such that $H_b(M_1) \neq H_b(M_2)$. For example, think of an injective Random Oracle, $\{0,1\}^*\mapsto \{0,1\}^{\infty}$, where no two finite messages map to the same infinite string. $H_x(y)$ calls the RO, and truncates the output to a string of length $x$. This definition also applies to your example permutation based hash function. – J.D. Apr 10 '16 at 18:22
Per my comment, I'd like to suggest a definition for "non-iterative hash function", and propose some constructions that fit the definition. I will also suggest an alternate name (though it may not help much with searching for papers on the topic).
Let $\mathcal{M}$ be the message space of a hash function, e.g. $\mathcal{M}=\{0,1\}^{*<\ell}$, the set of all binary strings of length less than $\ell$ for some $\ell \in \mathbb{N}$. Let $\mathcal{D}$ be the 'digest space' (codomain) of a hash function, e.g. $\{0,1\}^b$ for some constant $b$. I will use subscripts to denote which hash function is associated with a given message space or digest space. Also, let $subseq_y(x)$ be a function that takes binary strings of arbitrary length and outputs a fixed subsequence of the string of length $y$ (e.g. truncation of the string to its first $y$ bits, or outputting only every third bit up to bit $3y$, etc).
A hash function $H(x)$ with digests of length $b$ is "non-iterative" or "uncompressible" if and only if there exists another function $G(x)$ such that:
• $\mathcal{M}_{H(x)}\subseteq\mathcal{M}_{G(x)}$,
• $|\mathcal{M}_{G(x)}| \le |\mathcal{D}_{G(x)}|$,
• $G(x)$ is injective - no two distinct messages in $\mathcal{M}_{G(x)}$ map to the same digest in $\mathcal{D}_{G(x)}$, and
• For any message $m \in \mathcal{M}_{H(x)}$, $H(m) = subseq_b(G(m))$.
Note that a hash function need not be cryptographically secure to be non-iterative by this definition.
The construction in the question meets this definition: In that case, the function $G(x)$ is simply $SPRP(IV||m||padding)$, without truncation.
As described in my comment, another construction is to truncate an injective Random Oracle. Unlike the permutation-based construction, this doesn't have a fixed limit on the message space size (or digest size) defined by the blocklength of the underlying $SPRP$, and yet is just as "non-iterative" or "uncompressible".
As a concrete instantiation of a "non-iterative" or "uncompressible" hash function with no limit on the message or digest lengths, I propose an 'expanding sponge' function. This is just like an ordinary sponge function, but with two differences: 1) instead of using a fixed size permutation it uses a (keyless or fixed key) variable-length blockcipher (like the BEAR blockcipher), and 2) at each step during the absorption phase, instead of xoring the message blocks into the state, it concatenates the next message block with the state; i.e. $S_n$, the state at step $n$ is equal to $\mathcal{E}(S_{n-1}||m_n)$, where $\mathcal{E}(x)$ is encryption with the variable-length blockcipher.
Edit to clarify: For this expanding sponge function, the injective function $G(x)$ that makes this construction "uncompressible" has the same absorbing stage as $H(x)$, but during the squeezing stage $G(x)$ outputs the entire state at each step instead of only part of the state. The output digest of $H(x)$ is thus a subsequence of the output digest of $G(x)$. $G(x)$ is of course trivially insecure, in the sense that one can easily invert the function to find the preimage of any digest.
Note that this construction is in a sense 'iterative', in that it breaks messages up into blocks (with padding at the end if necessary) and absorbs each message block in turn one at a time using repeated iterations of the same variable-length blockcipher. But, there is no possibility of collisions in the internal state (any two distinct messages will generate distinct internal states). Of course, the internal state will balloon to the size of the message once it is done absorbing. But that is the price of collisionless internal states. For this reason, I propose "uncompressible" rather than "non-iterative".
• I agree that non-iterative is a bad word to use for this property. What do you think about non-streaming compression? I really like where you are going with this however I'm concerned about the following situation: Let $G(x) = md5(x)||x$ and $H(x)=md5(x)$, wouldn't this allow me to define $md5(x)$ as "non-iterative/non-compressing"? – Ethan Heilman Apr 14 '16 at 21:01
• @EthanHeilman - indeed I think it would. While md5 isn't usually defined as a truncation of $G(x)$ like that, defining it that way is functionally equivalent to the usual definition. Clearly my definition for non-iterative is insufficient, and I am not sure at this moment how to fix it (or even if it can be fixed). Even a simpler definition like "a hash function where the internal state does not lose entropy until the final truncation step" would call your md5 example non-iterative. Frankly I'm at a loss, but hopefully this exercise has helped you somewhat. – J.D. Apr 14 '16 at 23:33 | 2019-11-22 06:56:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7460816502571106, "perplexity": 581.1919233946323}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671245.92/warc/CC-MAIN-20191122065327-20191122093327-00102.warc.gz"} |
https://www.stata.com/support/faqs/data-management/neighbors-on-rectangular-grid/ | » Home » Resources & support » FAQs » Neighbors on a rectangular grid
## How do I identify neighbors of points or areas on a rectangular grid in Stata?
Title Neighbors on a rectangular grid Author Nicholas J. Cox, Durham University, UK
Say you have spatial data for points or areas on a rectangular grid, meaning a grid with a rectangular mesh, and not necessarily one whose overall shape is rectangular. Some minor trickery with by: allows you to identify the neighbors of each point and thus be able to carry out some simple spatial processing. Even if you are not especially interested in spatial data, the answer provides good examples of the use of by:, so read on.
Let us imagine a set-up like this:
x coordinates increasing to right
* *
* * * * * *
* * * * * * * * * * * * * *
* * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * * * *
y * * * * * * * * * * * * * * * * * * *
coordinates * * * * * * * * * * * * * * * * * * *
increasing * * * * * * * * * * * * * * * * * * * *
to * * * * * * * * * * * * * * * * * * * * *
top * * * * * * * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * * * * *
* * * * * * * * * * * * * * * * * *
* * * * * * * * * * * * * *
* * * * * *
We are going to assume each row is indexed by the same integer y coordinate and each column is indexed by the same integer x coordinate. As the diagram should suggest, we are not assuming that the overall shape of the grid is rectangular or even some regular geometric shape. For concreteness, we are assuming a Cartesian convention about directions in which coordinates increase. However, a matrix convention in which row numbers increase downwards is easily accommodated by minor changes to what follows.
In addition, we are going to skate over the difference between gridded point data (e.g., altitude or rainfall is measured at a notional point) and gridded area data (e.g., number of people is recorded in a rectangle). Although important otherwise, it is secondary to the question of identifying neighbors for which the principles are identical. The terminology of points will include areas.
Using a geographic convention, let us imagine the grid is oriented so that north is at the top, and we can identify neighbors of each point to the north, northeast, east, southeast, south, southwest, west and northwest.
Suppose we type
. by y (x), sort:
After such a sort, points would be ordered first by y coordinate and then by x coordinate. Thus for any observation, the previous observation in Stata's memory is that to the immediate west, and the observation following is that to the immediate east. Given some variable z with information on what is happening at each x and y (altitude, number of people in grid rectangle, whatever),
. by y (x), sort: generate z_w = z[_n-1]
. by y: generate z_e = z[_n+1]
However, what we just said is not quite true: it is incorrect for the beginnings and ends of rows. For example, the observation before that for the beginning of a row is for the previous row and is not its neighbor to the west. Nevertheless, this code needs no extra tricks to cope with edge problems. Because the first point in any row has no neighbor to the west, z_w for that point will be filled in with missing. This is ensured because everything is done under the aegis of by:. Processing takes place within the groups of observations defined by by:, and there can be no spurious carry-overs between rows.
Reversing the roles of the coordinates, we can get north and south neighbors:
. by x (y), sort: generate z_s = z[_n-1]
. by x: generate z_n = z[_n+1]
The other four neighbors require one extra small trick. The NW and SE neighbors of any point lie on the diagonal through that point for which x + y is constant, and the NE and SW neighbors of any point lie on the other diagonal, for which x − y is constant. Given variables constructed for these diagonal sums, the rest is just a variation on the prevailing theme:
. generate long xpy = x + y
. generate long xmy = x - y
. by xpy (x), sort: generate z_nw = z[_n-1]
. by xpy: generate z_se = z[_n+1]
. by xmy (x), sort: generate z_sw = z[_n-1]
. by xmy (x): generate z_ne = z[_n+1]
That is it, really. We were a little paranoid in insisting on a long variable type, just in case the sums (in particular) were very large. Quite a lot of elementary spatial analysis is now in reach just by using the constructed variables (and without any looping over observations or special programming). For example, you can get an average of NEWS neighbors and one of NW, NE, SW, SE neighbors by
. egen z_news = rowmean(z_n z_e z_w z_s)
. egen z_nwneswse = rowmean(z_nw z_ne z_sw z_se)
The egen function rowmean() and its siblings are especially recommended here for doing what you would usually regard as the right thing whenever one or more variables are missing. | 2022-08-10 08:58:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6568698287010193, "perplexity": 257.48447486768845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571150.88/warc/CC-MAIN-20220810070501-20220810100501-00061.warc.gz"} |
https://latexref.xyz/_005ccaption.html | Up: Floats [Contents][Index]
#### 5.7.1 \caption
Synopsis:
\caption{caption-text}
or
\caption[short-caption-text]{caption-text}
Make a caption for a floating environment, such as a figure or table environment (see figure or table).
In this example, LaTeX places a caption below the vertical blank space that is left by the author for the later inclusion of a picture.
\begin{figure}
\vspace*{1cm}
\caption{Alonzo Cushing, Battery A, 4th US Artillery.}
\label{fig:CushingPic}
\end{figure}
The \caption command will label the caption-text with something like ‘Figure 1:’ for an article or ‘Figure 1.1:’ for a book. The text is centered if it is shorter than the text width, or set as an unindented paragraph if it takes more than one line.
In addition to placing the caption-text in the output, the \caption command also saves that information for use in a list of figures or list of tables (see Table of contents, list of figures, list of tables).
Here the \caption command uses the optional short-caption-text, so that the shorter text appears in the list of tables, rather than the longer caption-text.
\begin{table}
\centering
\begin{tabular}{|*{3}{c}|}
\hline
4 &9 &2 \\
3 &5 &7 \\
8 &1 &6 \\
\hline
\end{tabular}
\caption[\textit{Lo Shu} magic square]{%
The \textit{Lo Shu} magic square, which is unique among
squares of order three up to rotation and reflection.}
\label{tab:LoShu}
\end{table}
LaTeX will label the caption-text with something like ‘Table 1:’ for an article or ‘Table 1.1:’ for a book.
The caption can appear at the top of the figure or table. For instance, that would happen in the prior example by putting the \caption between the \centering and the \begin{tabular}.
Different floating environments are numbered separately, by default. It is \caption that updates the counter, and so any \label must come after the \caption. The counter for the figure environment is named figure, and similarly the counter for the table environment is table.
The text that will be put in the list of figures or list of tables is moving argument. If you get the LaTeX error ‘! Argument of \@caption has an extra }’ then you must put \protect in front of any fragile commands. See \protect.
The caption package has many options to adjust how the caption appears, for example changing the font size, making the caption be hanging text rather than set as a paragraph, or making the caption always set as a paragraph rather than centered when it is short. | 2021-11-30 07:11:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9759640693664551, "perplexity": 1776.7475215351963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358953.29/warc/CC-MAIN-20211130050047-20211130080047-00589.warc.gz"} |
http://www.maplesoft.com/support/help/Maple/view.aspx?path=copy | create a duplicate table or rtable - Maple Help
copy - create a duplicate table or rtable
Calling Sequence copy( a );
Parameters
a - any expression
Description
• The purpose of the copy function is to create a duplicate table (or rtable) which can be altered without changing the original table (or rtable). If a is not a table (or rtable), a is returned.
• This functionality is necessary since the statements s := table(); t := s; leave both names s and t evaluating to the same table structure. Hence, unlike other Maple data structures, assignments made via one of the names affect the values associated with the other name as well.
• Note that copy is not recursive. This means that if a is a table of tables, the table data structure for a is copied but the table structures for the entries of a are not copied.
• For an rtable, copy preserves rtable options and indexing functions, except for the readonly option which is not set.
Examples
> ${s}_{1}:=x$
${{s}}_{{1}}{:=}{x}$ (1)
> $t:=s$
${t}{:=}{s}$ (2)
> ${t}_{1}:=y$
${{t}}_{{1}}{:=}{y}$ (3)
> ${s}_{1}$
${y}$ (4)
> $u:=\mathrm{copy}\left(s\right)$
${u}{:=}{\mathrm{table}}\left(\left[{1}{=}{y}\right]\right)$ (5)
> ${u}_{1}:=z$
${{u}}_{{1}}{:=}{z}$ (6)
> ${s}_{1}$
${y}$ (7)
> $m:=\mathrm{Matrix}\left(1,\mathrm{shape}=\mathrm{symmetric},\left[a\right],\mathrm{readonly}\right)$
${m}{:=}\left[\begin{array}{c}{a}\end{array}\right]$ (8)
> $\mathrm{MatrixOptions}\left(m\right)$
${\mathrm{shape}}{=}\left[{\mathrm{symmetric}}\right]{,}{\mathrm{datatype}}{=}{\mathrm{anything}}{,}{\mathrm{storage}}{=}{{\mathrm{triangular}}}_{{\mathrm{upper}}}{,}{\mathrm{order}}{=}{\mathrm{Fortran_order}}{,}{\mathrm{readonly}}$ (9)
> $n:=\mathrm{copy}\left(m\right)$
${n}{:=}\left[\begin{array}{c}{a}\end{array}\right]$ (10)
> $\mathrm{MatrixOptions}\left(n\right)$
${\mathrm{shape}}{=}\left[{\mathrm{symmetric}}\right]{,}{\mathrm{datatype}}{=}{\mathrm{anything}}{,}{\mathrm{storage}}{=}{{\mathrm{triangular}}}_{{\mathrm{upper}}}{,}{\mathrm{order}}{=}{\mathrm{Fortran_order}}$ (11)
For a table 'a' that contains another table 'b'; when copy is done on 'a' an entirely new copy of 'a' is created. However the objects contained in the table are not duplicated; so both 'a' and the copy of 'a' contain the table 'b'. Thus, if a change is made to the table 'b' in 'a', that change will show up in the copy of 'a' as well, and vice versa.
> $S:=\mathrm{table}\left(\left[45,\mathrm{table}\left(\mathrm{symmetric},\left[\left(1,2\right)=3\right]\right)\right]\right)$
${S}{:=}{\mathrm{table}}\left(\left[{1}{=}{45}{,}{2}{=}{\mathrm{table}}\left({\mathrm{symmetric}}{,}\left[\left({1}{,}{2}\right){=}{3}\right]\right)\right]\right)$ (12)
> ${S}_{1}$
${45}$ (13)
> $T:=\mathrm{copy}\left(S\right)$
${T}{:=}{\mathrm{table}}\left(\left[{1}{=}{45}{,}{2}{=}{\mathrm{table}}\left({\mathrm{symmetric}}{,}\left[\left({1}{,}{2}\right){=}{3}\right]\right)\right]\right)$ (14)
> ${T}_{1}$
${45}$ (15)
> ${T}_{1}:=50$
${{T}}_{{1}}{:=}{50}$ (16)
> ${S}_{1}$
${45}$ (17)
> ${{S}_{2}}_{2,1}$
${3}$ (18)
> ${{T}_{2}}_{2,1}:=5$
${{{T}}_{{2}}}_{{2}{,}{1}}{:=}{5}$ (19)
> ${{S}_{2}}_{2,1}$
${5}$ (20) | 2016-02-10 02:45:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 40, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6653396487236023, "perplexity": 2152.719928356184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701158601.61/warc/CC-MAIN-20160205193918-00184-ip-10-236-182-209.ec2.internal.warc.gz"} |
https://www.edaboard.com/threads/2nd-order-passive-low-pass-rc-filter-design-equation.335846/ | [SOLVED]2nd Order Passive Low Pass RC Filter Design Equation
Status
Not open for further replies.
orionsbelt
Junior Member level 2
Hello everybody
I don't know whether this is the right forum for this particular mathematical problem. It is about the transfer function of 2nd Order Passive Low Pass RC Filter. Please find attached the calculation.
At the end of the calculation, I have circled the term which seems to be the error. But I cannot figure out where I actually made the mistake! It would be helpful if someone could help me with the derivation.
Thanks!
//OB
Attachments
• 1.2 MB Views: 13
Dominik Przyborowski
Advanced Member level 3
You have an error. The transfer function has two poles but without any zeros. The correct form is:
$K_v = \frac{1}{1+s [(C_1+C_2)R_1+C_2 R_2] + s^2 C_1 C_2 R_1 R_2}$
Notice that your output voltage is measure at C_2 not C_1 as You are calculated above.
ravindragudi
Full Member level 3
Agree with Dominik. The Vo calculations have to be done at C2 and not at C1 by forming Z1.
My suggestion would be to use KCL for deriving the TF. Apply first KCL at the node R1, C1, R2 - call it as V1. With this you can get equation having Vi, Vo and V1.
Now apply KCL at Vo. You will get equation having Vo and V1. Rearrange it to get equation for V1 and replace this in earlier equation having Vo and Vi.
With that you would be able to get the proper TF for this filter.
orionsbelt
orionsbelt
points: 2
orionsbelt
Junior Member level 2
Thanks for your help. It's solved now.
Status
Not open for further replies. | 2020-10-31 07:21:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4266728460788727, "perplexity": 1537.9637738924696}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107916776.80/warc/CC-MAIN-20201031062721-20201031092721-00124.warc.gz"} |
https://infinitylearn.com/surge/question/physics/an-aluminumal41030cwire-resistance-r1-and-carbon-w/ | An aluminum αAl=4×10−3/0C wire resistance ‘R1 ’ and carbon wire αC=-0.5×10−3/0C resistance ‘ R2’ are connected in series to have a resultant resistance of 18 ohm at all temperatures . The values of R1 and R2 in ohms
# An aluminum $\left({\mathrm{\alpha }}_{\mathrm{Al}}=4×{10}^{-3}{/}^{0}\mathrm{C}\right)$ wire resistance ‘${\mathrm{R}}_{1}$ ’ and carbon wire $\left({\mathrm{\alpha }}_{\mathrm{C}}=-0.5×{10}^{-3}{/}^{0}\mathrm{C}\right)$ resistance ‘ ${\mathrm{R}}_{2}$’ are connected in series to have a resultant resistance of 18 ohm at all temperatures . The values of R1 and R2 in ohms
1. A
2,16
2. B
12,4
3. C
13,5
4. D
14,4
Register to Get Free Mock Test and Study Material
+91
Verify OTP Code (required)
### Solution:
Given
$\frac{{R}_{1}}{{R}_{2}}=\frac{{\alpha }_{C}}{{\alpha }_{Ai}}=\frac{0.5}{4}=\frac{1}{8}⇒{R}_{2}=8{R}_{1}$
Register to Get Free Mock Test and Study Material
+91
Verify OTP Code (required) | 2023-03-22 03:39:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8953993320465088, "perplexity": 7845.362680945573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943749.68/warc/CC-MAIN-20230322020215-20230322050215-00766.warc.gz"} |
http://openstudy.com/updates/5609b156e4b032660b20f188 | ## anonymous one year ago Can someone help me work through this problem? Write the equation in slope-intercept form. What are the slope and y intercept? -2x-11y=5
1. ckallerid
Do you know how to solve for y? or should I explain it?
2. anonymous
Wouldn't I just have to get y on its own side? @ckallerid
3. ckallerid
yes
4. anonymous
So I got it to -11y=5+2x by adding 2x to each side @ckallerid
5. anonymous
Now divide each side by -11?
6. ckallerid
yes! it may seem weird but it should be like that.
7. anonymous
so... y=-.45-.18x
8. anonymous
Here is the attatchment of the problem @ckallerid
9. ckallerid
well it would be in fraction form which is $y=\frac{ 5 }{ -11 } + \frac{ 2 }{ -11 }x$
10. anonymous
I am pretty sure the answer is D, correct? because slope would be m and the y intercept would be b in y=mx+b
11. ckallerid
yes!
12. anonymous
Okay great! Thanks so much! @ckallerid
13. ckallerid
no problem :D
14. anonymous
Can you check this one please? (if you don't mind) @ckallerid I already have my answer chosen
15. anonymous
16. anonymous
Just kidding it's D.
17. anonymous
not A, right? | 2017-01-19 09:20:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6513241529464722, "perplexity": 3388.971878576023}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280504.74/warc/CC-MAIN-20170116095120-00326-ip-10-171-10-70.ec2.internal.warc.gz"} |
http://tex.stackexchange.com/questions/170010/how-to-wrap-text-in-flalign | # How to wrap text in flalign
not a duplicate of this question because I want to keep the equation numbering, so a tabular won't do (I think).
I am using flalign to typeset philosophical arguments. I also want to use hyperref to be able to hotlink back to the individual lines of the argument later in the document. This works fine until I have a sentence that needs to be wrapped to the next line.
MWE:
\documentclass[11pt]{amsart}
\usepackage{hyperref}
\begin{document}
\begin{flalign}
&& \text{This is premise 1.} && \text{(Premise)} \label{premise1} \\
&& \text{This is premise 2. What happens when the text goes over the line though? } && \text{(From \autoref{premise1})}\label{premise2}
\end{flalign}
Obviously \autoref{premise2} follows from \autoref{premise1}.
\end{document}
-
why use flalign at all??? it seems very strange to use a math alignment and then force it to be text in every cell, why not use a text alignment (or a list, which looks more suitable)? – David Carlisle Apr 6 at 20:32
Because I'll end up with 50 or 60 different lines and I want to be able to autoref them. – shane Apr 6 at 20:36
yes I you want automatic numbering/references but why the math? I couldn't tell from the example where the "long" text is in your real case or whether both fields need to allow long text. – David Carlisle Apr 6 at 20:40
I think a theorem-like structure would be better from a conceptual point point of view, and it allows for numbering and referencing. – Bernard Apr 6 at 20:40
Oh, sometimes the lines will need modal logic symbols, sometimes they need to just be plain text where the symbolism is overkill. ideally both fields would have to allow long text. – shane Apr 6 at 20:42
I develop my suggestion of using a theorem-like structure, using ntheorem and cleveref:
\documentclass[11pt]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{amsmath}
\usepackage{hyperref}
\usepackage[thref, hyperref]{ntheorem}
\usepackage{cleveref}
\theoremstyle{plain}
\theorembodyfont{\upshape}
\theoremseparator{:}
\newtheorem{prem}{Premise}
\begin{document}
\begin{prem}\label{prem1}
This is the text of premise 1. A very promising premise
\end{prem}
\begin{prem}\label{prem2}
This is premise 2. What happens when the text goes over the line though? \\
\footnotesize(from \cref{prem1}).
\end{prem}
Obviously \cref{prem2} follows from \cref{prem1}.
\end{document}
-
You can use a \parbox:
## Notes:
• As others have stated, this is not a recommended approach. You may want to post a new question asking for suggestions on exactly the desired out along with the constraints that you have. I am sure there are many better ways to do this.
## Code:
\documentclass[11pt]{amsart}
\usepackage{hyperref}
\begin{document}
\begin{flalign}
&& \text{This is premise 1.} && \text{(Premise)} \label{premise1} \\
&& \parbox[t]{0.6\linewidth}{This is premise 2. What happens when the text goes over the line though? } && \text{(From \autoref{premise1})}\label{premise2}
\end{flalign}
Obviously \autoref{premise2} follows from \autoref{premise1}.
\end{document}
-
I like this a lot, but now the first line of (2) isn't aligned with the equation tag. Is there a way to fix that? – shane Apr 6 at 20:41
You can, but I wouldn't:-) using fleqn is pretty odd anyway, but putting a fixed size parbox in it is doubly weird as you don't need to align fixed size boxes (they are fixed width anyway) and you don't need them to be in math:-) (It is though the answer to the question asked, of how to wrap text in a flalign) – David Carlisle Apr 6 at 20:42
@shane: See updated solution. – Peter Grill Apr 6 at 20:46
@shane -- from your comment, it sounds like you have the equation numbers on the right. (amsart default is on the left, but you didn't specify [reqno].) where do you want them -- left or right? – barbara beeton Apr 6 at 21:01
Adding to @barbarabeeton's remark, this is very depending on the fact that amsart uses leqno and will dramatically break if the journal where the paper is submitted prefers reqno. – egreg Apr 6 at 21:09
I don't think you want math at all.
\documentclass[11pt]{amsart}
\usepackage{longtable,array,hyperref}
\makeatletter
\newcounter{premise}
\newcommand\premiseautorefname{premise}
\makeatother
\begin{document}
\noindent X\dotfill X
\begin{longtable}{@{\stepcounter{premise}(\thepremise) }
>{\raggedright\arraybackslash}p{9cm} | 2014-11-26 14:12:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9045596718788147, "perplexity": 2041.9813258101287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931006885.62/warc/CC-MAIN-20141125155646-00098-ip-10-235-23-156.ec2.internal.warc.gz"} |
https://blender.stackexchange.com/questions/43345/i-cant-load-an-image-from-a-script | # I can't load an image from a script
I'm trying to use Blender's mesh displacement from a script to create stl's. Once this works, I'll run blender with --background to create the stl files from the command line.
I start Blender with:
blender ~/3D/Models/4x4UVSquare.blend -d -P ./test.py
the script is:
import bpy
bpy.ops.image.open(filepath="junk.jpg", \
directory="/Users/me/3D/Art Work/", \
files=[{"name":"junk.jpg", "name":"junk.jpg"}], \
relative_path=True, show_multiview=False \
)
bpy.data.textures['displaceImage'].image = bpy.data.images['junk.jpg']
I put in the ".reload()" hoping it would help. It doesn't. Blender starts up, loads the .blend file, loads the image and assigns the image to the texture. When the UI comes up and displays the scene, there's no displacement. In the UI Image area under Texture, the filename is correct, but a little message reads "Can't load image".
When I hit the "reload" icon, the image reloads, the displacement appears and all it well.
Why do I have to reload the image at all? Why doesn't image.reload() work in the script?
To load an image from disk doesn't require bpy.ops. You could use:
bpy.data.images.load(filepath, check_existing=False)
# Load a new image into the main database
# bpy.data.images['some_image.png']
# if an image exists by that name already, you could set the name
bpy.data.images['some_image.png.001'].name = 'some_image.png'
# check_exist=True will reuse an existing image with that name
bpy.ops.* are triggered each time you interact in some way with the UI (buttons, shortcuts..). Often the UI triggers a bpy.ops.whatever with information about which window the event occurred in - this is important because you could have two UV image editors open, or two TextEditors (just an example). Therefor sometimes just calling the ops isn't enough, and some ops even expect a UI which isn't present in background mode. | 2021-03-09 11:01:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2944920063018799, "perplexity": 6283.373699678219}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178389798.91/warc/CC-MAIN-20210309092230-20210309122230-00535.warc.gz"} |
https://codegolf.meta.stackexchange.com/questions/7521/would-this-be-eligible-on-code-golf-to-know-before-writing-a-contest-in-sandbox | # Would this be eligible on code golf (to know before writing a contest in Sandbox)?
A while ago I ended up on a StackOverflow question which I believe could fit under category as it's all about killing the idea a C program could beat anything else in speed (here the main competitor is mawk for text processing with regexes).
As there's already been a wide meta-effect on this question on SO, I'm unsure it would be on-topic there or not, so before writing a challenge about it I have two questions:
1. Would it be on-topic (and with which tag ?)
2. Is the test system in place (github+travis) valid to classify answers
Side Note: Would the test system be of some interest for other challenges ?
P.S: Tell me if I should copy here parts of the existing SO and Meta-SO questions to ease the understanding (but the subject is quite vast now and I'm afraid making this one too long by overquoting)
• A King of the Hill is a game or competitive challenge with interaction between competitors (like Checkers or Tank Wars). If you're going for "what's fastest", there is a fastest-code tag that sounds more appropriate. – Geobits Nov 19 '15 at 14:40
• Editing, after reading too much around I used the wrong tag ;) – Tensibai Nov 19 '15 at 14:42
## Sandbox it and see
Go ahead and write up a sandbox post. We'll give you critique on what is unclear, what should be changed, etc. It's much easier to answer "is this a good challenge" when looking at a fleshed-out sandbox post than a short description here. If it works, great! If not, oh well, you've (hopefully) learned something for the future.
# Yes.
We like problems.
In the future, if you want to identify whether a problem is feasible, we have a friendly chat room that would love to critique your problem (as posting that on meta seems kind of overkill IMO).
Edit: I'm not indicating that your post here is off-topic or that we have rules against issues that are "too small", just that I would personally ask in chat.
• Aww, used to meta SO where questions out of scope for main are the best place to go. – Tensibai Nov 19 '15 at 14:47 | 2021-05-16 09:45:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31720224022865295, "perplexity": 1351.783945776107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992516.56/warc/CC-MAIN-20210516075201-20210516105201-00216.warc.gz"} |
http://mathhelpforum.com/pre-calculus/149864-series-sequences.html | Math Help - Series and Sequences
1. Series and Sequences
Ok, first a really simple sequence, but for some reason this online page isn't accepting it.
Find a formula for the general term an of the sequence, assuming that the pattern of the first few terms continues.
I'm putting in 5/(4^n) but it's not taking it. Either this input is messed up, or i'm missing something elementary.
2. Try $a_n=\displaystyle\frac{5}{4^{n-1}}$.
3. Ahh, yes. I was assuming n=0, but I guess I shouldn't assume anything | 2015-05-06 01:00:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9121662378311157, "perplexity": 1189.7885224041847}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430457655609.94/warc/CC-MAIN-20150501052055-00002-ip-10-235-10-82.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/255658/complex-argument-and-equivalence-class | Complex argument and equivalence class
We know that the complex argument of a product of two numbers is equal to the sum of their arguments. I'm aware that one can find this statement written down in many mathematics and engineering texts. However, it is a statement that assumes the argument of a complex number not to be a number but rather an equivalence class of numbers (numbers considered equivalent if they differ by a multiple of $2πi$). If I need the argument as a single number (e.g. in programming) one needs to fix a half-open interval of length $2π$ to which the argument belongs by definition. There are two popular choices: $[0,2π)$ and $(-π,π]$. For both cases it is easy to find numbers in which this statement is violated for the then uniquely defined argument.
My question is how one can avoid or resolve this problem.
-
You would have to add or subtract a multiple of $2\pi$ to land in your chosen interval. – Andrew Dec 14 '12 at 19:46
It is precisely in programming where the notion of a cyclic sum is more natural: $0xFFFF+0x0001=0x0000$.
In this case, all you need to do is define "addition modulo $2\pi$", where you subtract $2\pi$ if your sum ends up outside your desired interval. | 2014-12-18 16:29:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9421619176864624, "perplexity": 92.59374085050659}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802767274.159/warc/CC-MAIN-20141217075247-00110-ip-10-231-17-201.ec2.internal.warc.gz"} |
http://mesbursmith.com/natural-bliss-yot/g2m0u.php?e6dc19=of-the-zeroes-have-a-multiplicity-of-2 | The polynomial function is of degree n. The sum of the multiplicities must be n. Starting from the left, the first zero occurs at $x=-3\/extract_itex]. 4 + 6i, -2 - 11i -1/3, 4 + 6i, 2 + 11i -4 + 6i, 2 - 11i 3, 4 + 6i, -2 - 11i Can I have some guidance Precalculus Write a polynomial function of minimum degree in standard form with real coefficients whose zeros and their multiplicities include those listed. Keep this in mind: Any odd-multiplicity zero that flexes at the crossing point, like this graph did at x = 5, is of odd multiplicity 3 or more. We call this a triple zero, or a zero with multiplicity 3. Therefore the zero of the quadratic function y = x^{2} is x = 0. Multiplicity is how many times a certain solution to the function. The zero associated with this factor, $x=2$, has multiplicity 2 because the factor $\left(x - 2\right)$ occurs twice. Descartes also tells us the total multiplicity of negative real zeros is 3, which forces -1 to be a zero of multiplicity 2 and - \frac {\sqrt {6}} {2} to have multiplicity 1. The sum of the multiplicities is the degree. The x-intercept $x=-1$ is the repeated solution of factor ${\left(x+1\right)}^{3}=0$. Sometimes the graph will cross over the x-axis at an intercept. The sum of the multiplicities must be 6. The graph passes through the axis at the intercept, but flattens out a bit first. The multiplicity of a root is the number of times the root appears. The factor is quadratic (degree 2), so the behavior near the intercept is like that of a quadratic—it bounces off of the horizontal axis at the intercept. It just "taps" it, … The last zero occurs at $x=4$. The x-intercept $x=2\\$ is the repeated solution of equation ${\left(x - 2\right)}^{2}=0\\$. The factor is repeated, that is, the factor $\left(x - 2\right)\\$ appears twice. \[ \begin{align*} 2x+1=0 \\[4pt] x &=−\dfrac{1}{2} \end{align*} The zeros of the function are 1 and $$−\frac{1}{2}$$ with multiplicity 2… For higher odd powers, such as 5, 7, and 9, the graph will still cross through the x-axis, but for each increasing odd power, the graph will appear flatter as it approaches and leaves the x-axis. Don't forget the multiplicity of x, even if it doesn't have an exponent in plain view. The zero associated with this factor, $x=2$, has multiplicity 2 because the factor $\left(x - 2\right)$ occurs twice. Use the graph of the function of degree 6 to identify the zeros of the function and their possible multiplicities. This is a single zero of multiplicity 1. The factor theorem states that is a zero of a polynomial if and only if is a factor of that polynomial, i.e. This is called multiplicity. For more math shorts go to www.MathByFives.com It means that x=3 is a zero of multiplicity 2, and x=1 is a zero of multiplicity 1. 232. We have two unique zeros: #-2# and #4#. The table below summarizes all four cases. Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share … For example, the polynomial P(x) = (x - 2)^237 has precisely one root, the number 2. The zero of –3 has multiplicity 2. The graph passes directly through the x-intercept at $x=-3$. If a polynomial contains a factor of the form ${\left(x-h\right)}^{p}$, the behavior near the x-intercept h is determined by the power p. We say that $x=h$ is a zero of multiplicity p. The graph of a polynomial function will touch the x-axis at zeros with even multiplicities. To put things precisely, the zero set of the polynomial contains from 1 to n elements, in general complex numbers that can, of course, be real. The polynomial function is of degree n which is 6. Degree 3 so 3 roots. For higher odd powers, such as 5, 7, and 9, the graph will still cross through the horizontal axis, but for each increasing odd power, the graph will appear flatter as it approaches and leaves the x-axis. Large inputs, say –100 or –1,000 and ran until April 2015 identify the zeros of the....: degree: 4 zeros: 4 multiplicity of 2, 2i x=-1\\ [ ]. X-3 ) ², repeats twice intercept and changes direction multiplicity is how many possible zeroes of F or. Even and odd multiplicity have roots with multiplicities of 1, 2, and x=1 a... Latex ] x=4\\ [ /latex ] do i know how many zeroes are! Function is of multiplicity 2 the only ones x=-3\\ [ /latex ] appears twice of... The intercept and changes direction we call this a triple zero, we can set the equal... Sometimes the graph passes directly through the axis at the intercept, but flattens a. Degree 5 to identify the zeros of the equation of a graph and approximate the of. That is a factor of that polynomial, i.e just briefly touches the axis at intercept. X 2 + 1 ) degree of the function of degree 6 to identify the zeros of function! Imaginary solutions that come from the other zero will have a graph and approximate the of. } touches the axis at the intercept and changes direction until April 2015 ) ^237 precisely... Graph Would Look Like the One Shown Above forming polynomials using real coefficents: degree 4! ) Give the Formula for a polynomial is called the multiplicity of the output a factor of that polynomial i.e. Possible zeroes of a polynomial is called the multiplicity of the multiplicities must even... Polynomial of Least degree Whose graph Would Look Like the One Shown Above say or. In Figure 7 that the sum of the equation of a of the zeroes have a multiplicity of 2 will simply derive the from! Zeroes are root2 and -root2 every math class i ever took will the! D ) Give the Formula for a polynomial is called the multiplicity is likely 6 because the factor theorem that! ] x=-3 [ /latex ] x=4\\ [ /latex ] two of its zeroes are root2 and -root2,. Like the One Shown Above negative, it will change the direction of the zero x=3, which is to... April 2015 forming polynomials using real coefficents: degree: 4 multiplicity of the multiplicities must be 6 may... Graph Would Look Like the One Shown Above decreases without bound and either! Same is true for very large inputs, say 100 or 1,000, the graphs cross or intersect x-axis. Starting from the calculator ) = ( x ) = ( x + 3 ) 2, and 3 repeats. -3 are the only ones to find how many zeroes there are let ’ s that... Best student in every math class i ever took the x-axis, so of the zeroes have a multiplicity of 2 is!: 4 multiplicity of 2, 2i 2, and 3 each of the must. 1,000, the graphs touch or are tangent to the x-axis at with... Zero corresponds to a single factor of the zero of the multiplicities of the zeroes have a multiplicity of 2 the of! Behavior the end behavior of the graph of the function two imaginary solutions that come from the calculator 6 identify! Their multiplicities for example, we graph the function of degree 6 to identify the zeros and multiplicity. It does n't go through the axis at an intercept x-axis at point C ( 0,0.! Polynomial expression are the values of x, even if it does n't have an exponent in plain view Alma... X=3 is a zero at of multiplicity 2, and i have a story tell.. X=4 [ /latex ] the calculator need an accurate head count to the x-axis, so the is... Roots with multiplicities of 1, 2, and 3 to 0 as x decreases without bound and either. I will simply derive the answer from the learnmath community a function there are two imaginary solutions that come the. Roots method will either ultimately rise or fall as x increases without.. Quadratic function y = x^ { 2 } touches the axis at the intercept but flattens out bit. The only ones find how many times a given factor appears in the factored of!, repeats twice times a certain solution to the x-axis a single zero because the zero corresponds to a zero! Are root2 and -root2 may use a calculator or use the rational roots.... - 2\right ) [ /latex ] fall as x decreases without bound idea improving... Comments ) More posts from the left, the first zero occurs at latex! X ) = ( x ) = ( x + 3 ),. Graph and approximate the zeros and their multiplicities graphs cross or intersect the x-axis in Figure 7 the! Answer from the calculator the intercept and changes direction use a calculator or use the graph of the.... ( s ) come from the factor [ latex ] \left ( x =... Theorem states that is, of the zeroes have a multiplicity of 2 zero corresponds to a single factor of that,! May just want to hide, but flattens of the zeroes have a multiplicity of 2 a bit first ” Alma how! Expression are the only ones the zero Whose graph Would Look Like the One Above! ] x=4\\ [ /latex ] rise or fall as x increases without bound will... Their multiplicity other times the root appears polynomial P ( x + 3 ) 2, and i to! One root, the first zero occurs at [ latex ] x=-3 [ /latex.. Equal to 0 determined by examining the multiplicity is 3 and -3 are values! Will touch the x-axis at zeros with odd multiplicities, the polynomial P ( x 2 + 1 ) bounce! Martinez-Neale 2018 ), i.e we can set the factor ( x - 2\right ) [ /latex.. Using real coefficents: degree: 4 zeros: 4 multiplicity of the zero to! That polynomial, i.e + 3 ) 2, and 3 size of the zero of multiplicity 2,.. To tell. ” Alma and how She Got Her Name ( Martinez-Neale 2018 ) ] appears twice have idea... Numbers, 4i and -4i the rational roots method zeroes of a expression! And will either ultimately rise or fall as x decreases without bound {! Multiplicities, the factor is squared to a single factor of the output just want to hide, of the zeroes have a multiplicity of 2! That factor equal to zero and solve it and x=1 is a at! The multiplicity the Multiversity began in August 2014 and ran until April 2015 am having trouble with polynomials. Roots with multiplicities of 1, 2, and 3 graphs cross or intersect the x-axis, so multiplicity! Corresponds to a single factor of that polynomial, i.e in Figure that! The direction of the zero must be even polynomial if and only if is zero... T for touch and C for cross, has a zero at of multiplicity 2, and x=1 is zero... Of polynomial functions with multiplicity 1, 2, and i have a story to ”. Likely 6 an accurate head count we need an accurate head count 2014 and ran until 2015... The rational roots method the direction of the function of degree n which 6! = x^ { 2 } touches the x-axis, so the multiplicity of a polynomial called! And -4i use the graph passes through the axis at the intercept and changes direction do n't the... Of 2, and x=1 is a zero of multiplicity 6 5 to identify the zeros and their possible.! Multiplicity 3 related to the x-axis and bounce off it is of order 2 appears in the factored of. Function is of order 2 which is related to the factor theorem states that is a zero of the and... Will either rise or fall as x increases without bound root2 and -root2 derive answer. 4I and -4i large inputs, say –100 or –1,000 4 zeros: 4 zeros: zeros!, it will change the direction of the quadratic function y = x^ { 2 is! Zero, we can set the factor ( x 2 + 1 ) multiplicities, the graph through. Are tangent to the function and their possible multiplicities change the direction of the zero to!
Solar Plexus Chakra Yoga Poses, Starfinder Operative Guide 2020, Paladin Of Maglubiyet, 1660 Super Vs 1650 Super, Ford Courier Diesel Engine, Kobalt 80v Trimmer, Walk The Walk Synonym, Noctua Nh-d9l Vs Nh-u12s, | 2021-04-15 14:24:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5860064625740051, "perplexity": 765.4428383297533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038085599.55/warc/CC-MAIN-20210415125840-20210415155840-00004.warc.gz"} |
https://freakonometrics.hypotheses.org/category/courses/act2040/2013/page/2 | # Some heuristics about spline smoothing
Let us continue our discussion on smoothing techniques in regression. Assume that .$\mathbb{E}(Y\vert X=x)=h(x)$ where $h(\cdot)$ is some unkown function, but assumed to be sufficently smooth. For instance, assume that $h(\cdot)$ is continuous, that $h'(\cdot)$ exists, and is continuous, that $h''(\cdot)$ exists and is also continuous, etc. If $h(\cdot)$ is smooth enough, Taylor’s expansion can be used. Hence, for $x\in(\alpha,\beta)$
$h(x)=h(\alpha)+\sum_{k=1}^ d \frac{(x-\alpha)^k}{k!}h^{(k)}(x_0)+\frac{1}{d!}\int_{\alpha}^x [x-t]^d h^{(d+1)}(t)dt$
which can also be writen as
$h(x)=\sum_{k=0}^ d a_k (x-\alpha)^k +\frac{1}{d!}\int_{\alpha}^x [x-t]^d h^{(d+1)}(t)dt$
for some $a_k$‘s. The first part is simply a polynomial.
The second part, is some integral. Using Riemann integral, observe that
$\frac{1}{d!}\int_{\alpha}^x [x-t]^d h^{(d+1)}(t)dt\sim \sum_{i=1}^ j b_i (x-x_i)_+^d$
for some $b_i$‘s, and some
$\alpha < x_1< x_2< \cdots < x_{j-1} < x_j < \beta$
Thus,
$h(x) \sim \sum_{k=0}^ d a_k (x-\alpha)^k +\sum_{i=1}^ j b_i (x-x_i)_+^d$
Nice! We have our linear regression model. A natural idea is then to consider a regression of $Y$ on $\boldsymbol{X}$ where
$\boldsymbol{X} = (1,X,X^2,\cdots,X^d,(X-x_1)_+^d,\cdots,(X-x_k)_+^d )$
given some knots $\{x_1,\cdots,x_k\}$. To make things easier to understand, let us work with our previous dataset,
plot(db)
If we consider one knot, and an expansion of order 1,
attach(db)
library(splines)
B=bs(xr,knots=c(3),Boundary.knots=c(0,10),degre=1)
reg=lm(yr~B)
lines(xr[xr<=3],predict(reg)[xr<=3],col="red")
lines(xr[xr>=3],predict(reg)[xr>=3],col="blue")
The prediction obtained with this spline can be compared with regressions on subsets (the doted lines)
reg=lm(yr~xr,subset=xr<=3)
lines(xr[xr<=3],predict(reg)[xr<=3],col="red",lty=2)
reg=lm(yr~xr,subset=xr>=3)
lines(xr[xr>=3],predict(reg),col="blue",lty=2)
It is different, since we have here three parameters (and not four, as for the regressions on the two subsets). One degree of freedom is lost, when asking for a continuous model. Observe that it is possible to write, equivalently
reg=lm(yr~bs(xr,knots=c(3),Boundary.knots=c(0,10),degre=1),data=db)
So, what happened here?
B=bs(xr,knots=c(2,5),Boundary.knots=c(0,10),degre=1)
matplot(xr,B,type="l")
abline(v=c(0,2,5,10),lty=2)
Here, the functions that appear in the regression are the following
Now, if we run the regression on those two components, we get
B=bs(xr,knots=c(2,5),Boundary.knots=c(0,10),degre=1)
matplot(xr,B,type="l")
abline(v=c(0,2,5,10),lty=2)
If we add one knot, we get
the prediction is
reg=lm(yr~B)
lines(xr,predict(reg),col="red")
Of course, we can choose much more knots,
B=bs(xr,knots=1:9,Boundary.knots=c(0,10),degre=1)
reg=lm(yr~B)
lines(xr,predict(reg),col="red")
We can even get a confidence interval
reg=lm(yr~B)
P=predict(reg,interval="confidence")
plot(db,col="white")
polygon(c(xr,rev(xr)),c(P[,2],rev(P[,3])),col="light blue",border=NA)
points(db)
reg=lm(yr~B)
lines(xr,P[,1],col="red")
abline(v=c(0,2,5,10),lty=2)
And if we keep the two knots we chose previously, but consider Taylor’s expansion of order 2, we get
B=bs(xr,knots=c(2,5),Boundary.knots=c(0,10),degre=2)
matplot(xr,B,type="l")
abline(v=c(0,2,5,10),lty=2)
So, what’s going on? If we consider the constant, and the first component of the spline based matrix, we get
k=2
plot(db)
B=cbind(1,B)
lines(xr,B[,1:k]%*%coefficients(reg)[1:k],col=k-1,lty=k-1)
If we add the constant term, the first term and the second term, we get the part on the left, before the first knot,
k=3
lines(xr,B[,1:k]%*%coefficients(reg)[1:k],col=k-1,lty=k-1)
and with three terms from the spline based matrix, we can get the part between the two knots,
k=4
lines(xr,B[,1:k]%*%coefficients(reg)[1:k],col=k-1,lty=k-1)
and finallty, when we sum all the terms, we get this time the part on the right, after the last knot,
k=5
lines(xr,B[,1:k]%*%coefficients(reg)[1:k],col=k-1,lty=k-1)
This is what we get using a spline regression, quadratic, with two (fixed) knots. And can can even get confidence intervals, as before
reg=lm(yr~B)
P=predict(reg,interval="confidence")
plot(db,col="white")
polygon(c(xr,rev(xr)),c(P[,2],rev(P[,3])),col="light blue",border=NA)
points(db)
reg=lm(yr~B)
lines(xr,P[,1],col="red")
abline(v=c(0,2,5,10),lty=2)
The great idea here is to use functions $(x-x_i)_+$, that will insure continuity at point $x_i$.
Of course, we can use those splines on our Dexter application,
Here again, using linear spline function, it is possible to impose a continuity constraint,
plot(data$no,data$mu,ylim=c(6,10))
abline(v=12*(0:8)+.5,lty=2)
reg=lm(mu~bs(no,knots=c(12*(1:7)+.5),Boundary.knots=c(0,97),
degre=1),data=db)
lines(c(1:94,96),predict(reg),col="red")
But we can also consider some quadratic splines,
plot(data$no,data$mu,ylim=c(6,10))
abline(v=12*(0:8)+.5,lty=2)
reg=lm(mu~bs(no,knots=c(12*(1:7)+.5),Boundary.knots=c(0,97),
degre=2),data=db)
lines(c(1:94,96),predict(reg),col="red")
# Some heuristics about local regression and kernel smoothing
In a standard linear model, we assume that $\mathbb{E}(Y\vert X=x)=\beta_0+\beta_1 x$. Alternatives can be considered, when the linear assumption is too strong.
• Polynomial regression
A natural extension might be to assume some polynomial function,
$\mathbb{E}(Y\vert X=x)=\beta_0+\beta_1 x+\beta_2 x^2 +\cdots +\beta_k x^k$
Again, in the standard linear model approach (with a conditional normal distribution using the GLM terminology), parameters $\boldsymbol{\beta}=(\beta_0,\beta_1,\cdots,\beta_k)$ can be obtained using least squares, where a regression of $Y$ on $\boldsymbol{X}=(1,X,X^2,\cdots,X^k)$ is considered.
Even if this polynomial model is not the real one, it might still be a good approximation for $\mathbb{E}(Y\vert X=x)=h(x)$. Actually, from Stone-Weierstrass theorem, if $h(\cdot)$ is continuous on some interval, then there is a uniform approximation of $h(\cdot)$ by polynomial functions.
Just to illustrate, consider the following (simulated) dataset
set.seed(1)
n=10
xr = seq(0,n,by=.1)
yr = sin(xr/2)+rnorm(length(xr))/2
db = data.frame(x=xr,y=yr)
plot(db)
with the standard regression line
reg = lm(y ~ x,data=db)
abline(reg,col="red")
Consider some polynomial regression. If the degree of the polynomial function is large enough, any kind of pattern can be obtained,
reg=lm(y~poly(x,5),data=db)
But if the degree is too large, then too many ‘oscillations’ are obtained,
reg=lm(y~poly(x,25),data=db)
and the estimation might be be seen as no longer robust: if we change one point, there might be important (local) changes
plot(db)
attach(db)
lines(xr,predict(reg),col="red",lty=2)
yrm=yr;yrm[31]=yr[31]-2
regm=lm(yrm~poly(xr,25))
lines(xr,predict(regm),col="red")
• Local regression
Actually, if our interest is to have locally a good approximation of $h(\cdot)$, why not use a local regression?
This can be done easily using a weighted regression, where, in the least square formulation, we consider
$\min\left\{ \sum_{i=1}^n \omega_i [Y_i-(\beta_0+\beta_1 X_i)]^2 \right\}$
(it is possible to consider weights in the GLM framework, but let’s keep that for another post). Two comments here:
• here I consider a linear model, but any polynomial model can be considered. Even a constant one. In that case, the optimization problem is
$\min\left\{ \sum_{i=1}^n \omega_i [Y_i-\beta_0]^2 \right\}$which can be solve explicitly, since
$\widehat{\beta}_0=\frac{\sum \omega_i Y_i}{\sum \omega_i}$
• so far, nothing was mentioned about the weights. The idea is simple, here: if you can a good prediction at point $x_0$, then $\omega_i$ should be proportional to some distance between $X_i$ and $x_0$: if $X_i$ is too far from $x_0$, then it should not have to much influence on the prediction.
For instance, if we want to have a prediction at some point $x_0$, consider $\omega_i\propto \boldsymbol{1}(\vert X_i-x_0 \vert<1)$. With this model, we remove observations too far away,
Actually, here, it is the same as
reg=lm(yr~xr,subset=which(abs(xr-x0)<1)
A more general idea is to consider some kernel function $K(\cdot)$ that gives the shape of the weight function, and some bandwidth (usually denoted h) that gives the length of the neighborhood, so that
$\omega_i = K\left(\frac{x_0-X_i}{b}\right)$
This is actually the so-called Nadaraya-Watson estimator of function $h(\cdot)$.
In the previous case, we did consider a uniform kernel $K(x)=\boldsymbol{1}(x\in[-1/2,+1/2])$, with bandwith $2$,
But using this weight function, with a strong discontinuity may not be the best idea… Why not a Gaussian kernel,
$K(x)=\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{x^2}{2}\right)$
This can be done using
fitloc0 = function(x0){
w=dnorm((xr-x0))
reg=lm(y~1,data=db,weights=w)
return(predict(reg,newdata=data.frame(x=x0)))}
On our dataset, we can plot
ul=seq(0,10,by=.01)
vl0=Vectorize(fitloc0)(ul)
u0=seq(-2,7,by=.01)
linearlocalconst=function(x0){
w=dnorm((xr-x0))
plot(db,cex=abs(w)*4)
lines(ul,vl0,col="red")
axis(3)
axis(2)
reg=lm(y~1,data=db,weights=w)
u=seq(0,10,by=.02)
v=predict(reg,newdata=data.frame(x=u))
lines(u,v,col="red",lwd=2)
abline(v=c(0,x0,10),lty=2)
}
linearlocalconst(2)
Here, we want a local regression at point 2. The horizonal line below is the regression (the size of the point is proportional to the wieght). The curve, in red, is the evolution of the local regression
Let us use an animation to visualize the construction of the curve. One can use
library(animate)
but for some reasons, I cannot install the package easily on Linux. And it is not a big deal. We can still use a loop to generate some graphs
vx0=seq(1,9,by=.1)
vx0=c(vx0,rev(vx0))
graphloc=function(i){
name=paste("local-reg-",100+i,".png",sep="")
png(name,600,400)
linearlocalconst(vx0[i])
dev.off()}
for(i in 1:length(vx0)) graphloc(i)
and then, in a terminal, I simply use
convert -delay 25 /home/freak/local-reg-1*.png /home/freak/local-reg.gif
Of course, it is possible to consider a linear model, locally,
fitloc1 = function(x0){
w=dnorm((xr-x0))
reg=lm(y~poly(x,degree=1),data=db,weights=w)
return(predict(reg,newdata=data.frame(x=x0)))}
or even a quadratic (local) regression,
fitloc2 = function(x0){
w=dnorm((xr-x0))
reg=lm(y~poly(x,degree=2),data=db,weights=w)
return(predict(reg,newdata=data.frame(x=x0)))}
Of course, we can change the bandwidth
To conclude the technical part this post, observe that, in practise, we have to choose the shape of the weight function (the so-called kernel). But there are (simple) technique to select the “optimal” bandwidth h. The idea of cross validation is to consider
$\min\left\{ \sum_{i=1}^n [Y_i-\widehat{Y}_i(b)]^2 \right\}$
where $\widehat{Y}_i(b)$ is the prediction obtained using a local regression technique, with bandwidth $b$. And to get a more accurate (and optimal) bandwith $\widehat{Y}_i(b)$ is obtained using a model estimated on a sample where the ith observation was removed. But again, that is not the main point in this post, so let’s keep that for another one…
Perhaps we can try on some real data? Inspired from a great post on http://f.briatte.org/teaching/ida/092_smoothing.html, by François Briatte, consider the Global Episode Opinion Survey, from some TV show, http://geos.tv/index.php/index?sid=189 , like Dexter.
library(XML)
file = "geos-tww.csv"
html = htmlParse("http://www.geos.tv/index.php/list?sid=189&collection=all")
html = xpathApply(html, "//table[@id='collectionTable']")[[1]]
data = data[,-3]
names(data)=c("no",names(data)[-1])
data=data[-(61:64),]
Let us reshape the dataset,
data$no = 1:96 data$mu = as.numeric(substr(as.character(data$Mean), 0, 4)) data$se = sd(data$mu,na.rm=TRUE)/sqrt(as.numeric(as.character(data$Count)))
data$season = 1 + (data$no - 1)%/%12
data$season = factor(data$season)
plot(data$no,data$mu,ylim=c(6,10))
segments(data$no,data$mu-1.96*data$se, data$no,data$mu+1.96*data$se,col="light blue")
As done by François, we compute some kind of standard error, just to reflect uncertainty. But we won’t really use it.
plot(data$no,data$mu,ylim=c(6,10))
abline(v=12*(0:8)+.5,lty=2)
for(s in 1:8){reg=lm(mu~no,data=db,subset=season==s)
lines((s-1)*12+1:12,predict(reg)[1:12],col="red") }
Henre, we assume that all seasons should be considered as completely independent… which might not be a great assumption.
db = data
NW = ksmooth(db$no,db$mu,kernel = "normal",bandwidth=5)
plot(data$no,data$mu)
lines(NW,col="red")
We can try to look the curve with a larger bandwidth. The problem is that there is a missing value, at the end. If we (arbitrarily) fill it, we can run a kernel regression,
db$mu[95]=7 NW = ksmooth(db$no,db$mu,kernel = "normal",bandwidth=12) plot(data$no,data$mu,ylim=c(6,10)) lines(NW,col="red") # Regression on variables, or on categories? I admit it, the title sounds weird. The problem I want to address this evening is related to the use of the stepwise procedure on a regression model, and to discuss the use of categorical variables (and possible misinterpreations). Consider the following dataset > db = read.table("http://freakonometrics.free.fr/db2.txt",header=TRUE,sep=";") First, let us change the reference in our categorical variable (just to get an easier interpretation later on) > db$X3=relevel(as.factor(db$X3),ref="E") If we run a logistic regression on the three variables (two continuous, one categorical), we get > reg=glm(Y~X1+X2+X3,family=binomial,data=db) > summary(reg) Call: glm(formula = Y ~ X1 + X2 + X3, family = binomial, data = db) Deviance Residuals: Min 1Q Median 3Q Max -3.0758 0.1226 0.2805 0.4798 2.0345 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -5.39528 0.86649 -6.227 4.77e-10 *** X1 0.51618 0.09163 5.633 1.77e-08 *** X2 0.24665 0.05911 4.173 3.01e-05 *** X3A -0.09142 0.32970 -0.277 0.7816 X3B -0.10558 0.32526 -0.325 0.7455 X3C 0.63829 0.37838 1.687 0.0916 . X3D -0.02776 0.33070 -0.084 0.9331 --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 806.29 on 999 degrees of freedom Residual deviance: 582.29 on 993 degrees of freedom AIC: 596.29 Number of Fisher Scoring iterations: 6 Now, if we use a stepwise procedure, to select variables in the model, we get > step(reg) Start: AIC=596.29 Y ~ X1 + X2 + X3 Df Deviance AIC - X3 4 587.81 593.81 <none> 582.29 596.29 - X2 1 600.56 612.56 - X1 1 617.25 629.25 Step: AIC=593.81 Y ~ X1 + X2 Df Deviance AIC <none> 587.81 593.81 - X2 1 606.90 610.90 - X1 1 622.44 626.44 So clearly, we should remove the categorical variable if our starting point was the regression on the three variables. Now, what if we consider the same model, but slightly different: on the five categories, > X3complete = model.matrix(~0+X3,data=db) > db2 = data.frame(db,X3complete) > head(db2) Y X1 X2 X3 X3A X3B X3C X3D X3E 1 1 3.297569 16.25411 B 0 1 0 0 0 2 1 6.418031 18.45130 D 0 0 0 1 0 3 1 5.279068 16.61806 B 0 1 0 0 0 4 1 5.539834 19.72158 C 0 0 1 0 0 5 1 4.123464 18.38634 C 0 0 1 0 0 6 1 7.778443 19.58338 C 0 0 1 0 0 From a technical point of view, it is exactly the same as before, if we look at the regression, > reg = glm(Y~X1+X2+X3A+X3B+X3C+X3D+X3E,family=binomial,data=db2) > summary(reg) Call: glm(formula = Y ~ X1 + X2 + X3A + X3B + X3C + X3D + X3E, family = binomial, data = db2) Deviance Residuals: Min 1Q Median 3Q Max -3.0758 0.1226 0.2805 0.4798 2.0345 Coefficients: (1 not defined because of singularities) Estimate Std. Error z value Pr(>|z|) (Intercept) -5.39528 0.86649 -6.227 4.77e-10 *** X1 0.51618 0.09163 5.633 1.77e-08 *** X2 0.24665 0.05911 4.173 3.01e-05 *** X3A -0.09142 0.32970 -0.277 0.7816 X3B -0.10558 0.32526 -0.325 0.7455 X3C 0.63829 0.37838 1.687 0.0916 . X3D -0.02776 0.33070 -0.084 0.9331 X3E NA NA NA NA --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 806.29 on 999 degrees of freedom Residual deviance: 582.29 on 993 degrees of freedom AIC: 596.29 Number of Fisher Scoring iterations: 6 Both regressions are equivalent. Now, what about a stepwise selection on this new model? > step(reg) Start: AIC=596.29 Y ~ X1 + X2 + X3A + X3B + X3C + X3D + X3E Step: AIC=596.29 Y ~ X1 + X2 + X3A + X3B + X3C + X3D Df Deviance AIC - X3D 1 582.30 594.30 - X3A 1 582.37 594.37 - X3B 1 582.40 594.40 <none> 582.29 596.29 - X3C 1 585.21 597.21 - X2 1 600.56 612.56 - X1 1 617.25 629.25 Step: AIC=594.3 Y ~ X1 + X2 + X3A + X3B + X3C Df Deviance AIC - X3A 1 582.38 592.38 - X3B 1 582.41 592.41 <none> 582.30 594.30 - X3C 1 586.30 596.30 - X2 1 600.58 610.58 - X1 1 617.27 627.27 Step: AIC=592.38 Y ~ X1 + X2 + X3B + X3C Df Deviance AIC - X3B 1 582.44 590.44 <none> 582.38 592.38 - X3C 1 587.20 595.20 - X2 1 600.59 608.59 - X1 1 617.64 625.64 Step: AIC=590.44 Y ~ X1 + X2 + X3C Df Deviance AIC <none> 582.44 590.44 - X3C 1 587.81 593.81 - X2 1 600.73 606.73 - X1 1 617.66 623.66 What do we get now? This time, the stepwise procedure recommends that we keep one category (namely C). So my point is simple: when running a stepwise procedure with factors, either we keep the factor as it is, or we drop it. If it is necessary to change the design, by pooling together some categories, and we forgot to do it, then it will be suggested to remove that variable, because having 4 categories meaning the same thing will cost us too much if we use the Akaike criteria. Because this is exactly what happens here > library(car) > reg = glm(formula = Y ~ X1 + X2 + X3, family = binomial, data = db) > linearHypothesis(reg,c("X3A=X3B","X3A=X3D","X3A=0")) Linear hypothesis test Hypothesis: X3A - X3B = 0 X3A - X3D = 0 X3A = 0 Model 1: restricted model Model 2: Y ~ X1 + X2 + X3 Res.Df Df Chisq Pr(>Chisq) 1 996 2 993 3 0.1446 0.986 So here, we should pool together categories A, B, D and E (which was here the reference). As mentioned in a previous post, it is necessary to pool together categories that should be pulled together as soon as possible. If not, the stepwise procedure might yield to some misinterpretations. # ROC curves and classification To get back to a question asked after the last course (still on non-life insurance), I will spend some time to discuss ROC curve construction, and interpretation. Consider the dataset we’ve been using last week, > db = read.table("http://freakonometrics.free.fr/db.txt",header=TRUE,sep=";") > attach(db) The first step is to get a model. For instance, a logistic regression, where some factors were merged together, > X3bis=rep(NA,length(X3)) > X3bis[X3%in%c("A","C","D")]="ACD" > X3bis[X3%in%c("B","E")]="BE" > db$X3bis=as.factor(X3bis)
> reg=glm(Y~X1+X2+X3bis,family=binomial,data=db)
From this model, we can predict a probability, not a $\{0,1\}$ variable,
> S=predict(reg,type="response")
Let $\widehat{S}$ denote this variable (actually, we can use the score, or the predicted probability, it will not change the construction of our ROC curve). What if we really want to predict a $\{0,1\}$ variable. As we usually do in decision theory. The idea is to consider a threshold , so that
• if , then will be $1$, or “positive” (using a standard terminology)
• si , then will be $0$, or “negative
Then we derive a contingency table, or a confusion matrix
observed value predicted value “positive“ “négative“ “positive“ TP FP “négative“ FN TN
where TP are the so-called true positive, TN the true negative, FP are the false positive (or type I error) and FN are the false negative (type II errors). We can get that contingency table for a given threshold
> roc.curve=function(s,print=FALSE){
+ Ps=(S>s)*1
+ FP=sum((Ps==1)*(Y==0))/sum(Y==0)
+ TP=sum((Ps==1)*(Y==1))/sum(Y==1)
+ if(print==TRUE){
+ print(table(Observed=Y,Predicted=Ps))
+ }
+ vect=c(FP,TP)
+ names(vect)=c("FPR","TPR")
+ return(vect)
+ }
> threshold = 0.5
> roc.curve(threshold,print=TRUE)
Predicted
Observed 0 1
0 5 231
1 19 745
FPR TPR
0.9788136 0.9751309
Here, we also compute the false positive rates, and the true positive rates,
• TPR = TP / P = TP / (TP + FN) also called sentivity, defined as the rate of true positive: probability to be predicted positve, given that someone is positive (true positive rate)
• FPR = FP / N = FP / (FP + TN) is the rate of false positive: probability to be predicted positve, given that someone is negative (false positive rate)
The ROC curve is then obtained using severall values for the threshold. For convenience, define
> ROC.curve=Vectorize(roc.curve)
First, we can plot $(\widehat{S}_i,Y_i)$ (a standard predicted versus observed graph), and visualize true and false positive and negative, using simple colors
> I=(((S>threshold)&(Y==0))|((S<=threshold)&(Y==1)))
> plot(S,Y,col=c("red","blue")[I+1],pch=19,cex=.7,,xlab="",ylab="")
> abline(v=threshold,col="gray")
And for the ROC curve, simply use
> M.ROC=ROC.curve(seq(0,1,by=.01))
> plot(M.ROC[1,],M.ROC[2,],col="grey",lwd=2,type="l")
This is the ROC curve. Now, to see why it can be interesting, we need a second model. Consider for instance a classification tree
> library(tree)
> ctr <- tree(Y~X1+X2+X3bis,data=db)
> plot(ctr)
> text(ctr)
To plot the ROC curve, we just need to use the prediction obtained using this second model,
> S=predict(ctr)
All the code described above can be used. Again, we can plot $(\widehat{S}_i,Y_i)$ (observe that we have 5 possible values for $\widehat{S}_i$, which makes sense since we do have 5 leaves on our tree). Then, we can plot the ROC curve,
An interesting idea can be to plot the two ROC curves on the same graph, in order to compare the two models
> plot(M.ROC[1,],M.ROC[2,],type="l")
> lines(M.ROC.tree[1,],M.ROC.tree[2,],type="l",col="grey",lwd=2)
The most difficult part is to get a proper interpretation. The tree is not predicting well in the lower part of the curve. This concerns people with a very high predicted probability. If our interest is more on those with a probability lower than 90%, then, we have to admit that the tree is doing a good job, since the ROC curve is always higher, comparer with the logistic regression.
# Logistic regression and categorical covariates
A short post to get back – for my nonlife insurance course – on the interpretation of the output of a regression when there is a categorical covariate. Consider the following dataset
> attach(db)
> tail(db)
Y X1 X2 X3
995 1 4.801836 20.82947 A
996 1 9.867854 24.39920 C
997 1 5.390730 21.25119 D
998 1 6.556160 20.79811 D
999 1 4.710276 21.15373 A
1000 1 6.631786 19.38083 A
Let us run a logistic regression on that dataset
> reg = glm(Y~X1+X2+X3,family=binomial,data=db)
> summary(reg)
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -4.45885 1.04646 -4.261 2.04e-05 ***
X1 0.51664 0.11178 4.622 3.80e-06 ***
X2 0.21008 0.07247 2.899 0.003745 **
X3B 1.74496 0.49952 3.493 0.000477 ***
X3C -0.03470 0.35691 -0.097 0.922543
X3D 0.08004 0.34916 0.229 0.818672
X3E 2.21966 0.56475 3.930 8.48e-05 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 552.64 on 999 degrees of freedom
Residual deviance: 397.69 on 993 degrees of freedom
AIC: 411.69
Number of Fisher Scoring iterations: 7
Here, the reference is modality $A$. Which means that for someone with characteristics $(X_1,X_2,X_3=A)$, we predict the following probability
$p=H(\widehat\beta_0+\widehat\beta_1 X_1+\widehat\beta_2 X_2)$
where $H(\cdot)$ denotes the cumulative distribution function of the logistic distribution
$H(x)=\frac{e^x}{1+e^x}$
For someone with characteristics $(X_1,X_2,X_3=B)$, we predict the following probability
$p=H(\widehat\beta_0+\widehat\beta_1 X_1+\widehat\beta_2 X_2+\widehat\beta_3^{\ (B)})$
For someone with characteristics $(X_1,X_2,X_3=C)$, we predict the following probability
$p=H(\widehat\beta_0+\widehat\beta_1 X_1+\widehat\beta_2 X_2+\widehat\beta_3^{\ (C)})$
(etc.) Here, if we accept $H_0:\beta_3^{\ (C)}=0$ (against $H_1:\beta_3^{\ (C)}\neq0$), it means that modality $C$ cannot be considerd as different from $A$.
A natural idea can be to change the reference modality, and to look at the $p$-values. If we consider the following loop, we get
> M = matrix(NA,5,5)
> rownames(M)=colnames(M)=LETTERS[1:5]
> for(k in 1:5){
+ db$X3 = relevel(X3,LETTERS[k]) + reg = glm(Y~X1+X2+X3,family=binomial,data=db) + M[levels(db$X3)[-1],k] = summary(reg)$coefficients[4:7,4] + } > M A B C D E A NA 0.0004771853 9.225428e-01 0.8186723647 8.482647e-05 B 4.771853e-04 NA 4.841204e-04 0.0009474491 4.743636e-01 C 9.225428e-01 0.0004841204 NA 0.7506242347 9.194193e-05 D 8.186724e-01 0.0009474491 7.506242e-01 NA 1.730589e-04 E 8.482647e-05 0.4743636442 9.194193e-05 0.0001730589 NA and if we simply want to know if the $p$-value exceeds – or not – 5%, we get the following, > M.TF = M>.05 > M.TF A B C D E A NA FALSE TRUE TRUE FALSE B FALSE NA FALSE FALSE TRUE C TRUE FALSE NA TRUE FALSE D TRUE FALSE TRUE NA FALSE E FALSE TRUE FALSE FALSE NA The first column is obtained when $A$ is the reference, and then, we see which parameter should be considered as null. The interpretation is the following: • $C$ and $D$ are not different from $A$ • $E$ is not different from $B$ • $A$ and $D$ are not different from $C$ • $A$ and $C$ are not different from $D$ • $B$ is not different from $E$ Note that we only have, here, some kind of intuition. So, let us run a more formal test. Let us consider the following regression (we remove the intercept to get a model easier to understand) > library(car) > db$X3=relevel(X3,"A")
> reg=glm(Y~0+X1+X2+X3,family=binomial,data=db)
> summary(reg)
Coefficients:
Estimate Std. Error z value Pr(>|z|)
X1 0.51664 0.11178 4.622 3.80e-06 ***
X2 0.21008 0.07247 2.899 0.00374 **
X3A -4.45885 1.04646 -4.261 2.04e-05 ***
X3E -2.23919 1.06666 -2.099 0.03580 *
X3D -4.37881 1.04887 -4.175 2.98e-05 ***
X3C -4.49355 1.06266 -4.229 2.35e-05 ***
X3B -2.71389 1.07274 -2.530 0.01141 *
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 1386.29 on 1000 degrees of freedom
Residual deviance: 397.69 on 993 degrees of freedom
AIC: 411.69
Number of Fisher Scoring iterations: 7
It is possible to use Fisher test to test if some coefficients are equal, or not (more generally if some linear constraints are satisfied)
> linearHypothesis(reg,c("X3A=X3C","X3A=X3D","X3B=X3E"))
Linear hypothesis test
Hypothesis:
X3A - X3C = 0
X3A - X3D = 0
- X3E + X3B = 0
Model 1: restricted model
Model 2: Y ~ 0 + X1 + X2 + X3
Res.Df Df Chisq Pr(>Chisq)
1 996
2 993 3 0.6191 0.892
Here, we clearly accept the assumption that the first three factors are equal, as well as the last two. What is the next step? Well, if we believe that there are mainly two categories, $\{A,C,D\}$ and $\{B,E\}$, let us create that factor,
> X3bis=rep(NA,length(X3))
> X3bis[X3%in%c("A","C","D")]="ACD"
> X3bis[X3%in%c("B","E")]="BE"
> db$X3bis=as.factor(X3bis) > reg=glm(Y~X1+X2+X3bis,family=binomial,data=db) > summary(reg) Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -4.39439 1.02791 -4.275 1.91e-05 *** X1 0.51378 0.11138 4.613 3.97e-06 *** X2 0.20807 0.07234 2.876 0.00402 ** X3bisBE 1.94905 0.36852 5.289 1.23e-07 *** --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 552.64 on 999 degrees of freedom Residual deviance: 398.31 on 996 degrees of freedom AIC: 406.31 Number of Fisher Scoring iterations: 7 Here, all the categories are significant. So we do have a proper model. # Régression de Poisson Mercredi, on finira les arbres de classification, et on commencera la modélisation de la fréquence de sinistre. Les transparents sont en ligne. Comme annoncé lors du premier cours, je suggère de commencer la lecture du Practicionner’s Guide to Generalized Linear Models. Le document correspond au minimum attendu dans ce cours. # Non-observable vs. observable heterogeneity factor This morning, in the ACT2040 class (on non-life insurance), we’ve discussed the difference between observable and non-observable heterogeneity in ratemaking (from an economic perspective). To illustrate that point (we will spend more time, later on, discussing observable and non-observable risk factors), we looked at the following simple example. Let $X$ denote the height of a person. Consider the following dataset > Davis=read.table( + "http://socserv.socsci.mcmaster.ca/jfox/Books/Applied-Regression-2E/datasets/Davis.txt") There is a small typo in the dataset, so let us make manual changes here > Davis[12,c(2,3)]=Davis[12,c(3,2)] Here, the variable of interest is the height of a given person, > X=Davis$height
If we look at the histogram, we have
> hist(X,col="light green", border="white",proba=TRUE,xlab="",main="")
Can we assume that we have a Gaussian distribution ?
$X\sim\mathcal{N}(\theta,\sigma^2)$Maybe not… Here, if we fit a Gaussian distribution, plot it, and add a kernel based estimator, we get
> (param <- fitdistr(X,"normal")$estimate) > f1 <- function(x) dnorm(x,param[1],param[2]) > x=seq(100,210,by=.2) > lines(x,f1(x),lty=2,col="red") > lines(density(X)) If you look at that black line, you might think of a mixture, i.e. something like $X\sim p_1\cdot\mathcal{N}(\theta_1,\sigma_1^2)+p_2\cdot\mathcal{N}(\theta_2,\sigma_2^2)$ (using standard mixture notations). Mixture are obtained when we have a non-observable heterogeneity factor: with probability $p_1$, we have a random variable $\mathcal{N}(\mu_1,\sigma_1^2)$ (call it type [1]), and with probability $p_2$, a random variable $\mathcal{N}(\mu_2,\sigma_2^2)$ (call it type [2]). So far, nothing new. And we can fit such a mixture distribution, using e.g. > library(mixtools) > mix <- normalmixEM(X) number of iterations= 335 > (param12 <- c(mix$lambda[1],mix$mu,mix$sigma))
[1] 0.4002202 178.4997298 165.2703616 6.3561363 5.9460023
If we plot that mixture of two Gaussian distributions, we get
> f2 <- function(x){ param12[1]*dnorm(x,param12[2],param12[4])
+ (1-param12[1])*dnorm(x,param12[3],param12[5]) }
> lines(x,f2(x),lwd=2, col="red") lines(density(X))
Not bad. Actually, we can try to maximize the likelihood with our own codes,
> logdf <- function(x,parameter){
+ p <- parameter[1]
+ m1 <- parameter[2]
+ s1 <- parameter[4]
+ m2 <- parameter[3]
+ s2 <- parameter[5]
+ return(log(p*dnorm(x,m1,s1)+(1-p)*dnorm(x,m2,s2)))
+ }
> logL <- function(parameter) -sum(logdf(X,parameter))
> Amat <- matrix(c(1,-1,0,0,0,0,
+ 0,0,0,0,1,0,0,0,0,0,0,0,0,1), 4, 5)
> bvec <- c(0,-1,0,0)
> constrOptim(c(.5,160,180,10,10), logL, NULL, ui = Amat, ci = bvec)$par [1] 0.5996263 165.2690084 178.4991624 5.9447675 6.3564746 Here, we include some constraints, to insurance that the probability belongs to the unit interval, and that the variance parameters remain positive. Note that we have something close to the previous output. Let us try something a little bit more complex now. What if we assume that the underlying distributions have the same variance, namely $X\sim p_1\cdot\mathcal{N}(\theta_1,\sigma^2)+p_2\cdot\mathcal{N}(\theta_2,\sigma^2)$ In that case, we have to use the previous code, and make small changes, > logdf <- function(x,parameter){ + p <- parameter[1] + m1 <- parameter[2] + s1 <- parameter[4] + m2 <- parameter[3] + s2 <- parameter[4] + return(log(p*dnorm(x,m1,s1)+(1-p)*dnorm(x,m2,s2))) + } > logL <- function(parameter) -sum(logdf(X,parameter)) > Amat <- matrix(c(1,-1,0,0,0,0,0,0,0,0,0,1), 3, 4) > bvec <- c(0,-1,0) > (param12c= constrOptim(c(.5,160,180,10), logL, NULL, ui = Amat, ci = bvec)$par)
[1] 0.6319105 165.6142824 179.0623954 6.1072614
This is what we can do if we cannot observe the heterogeneity factor. But wait… we actually have some information in the dataset. For instance, we have the sex of the person. Now, if we look at histograms of height per sex, and kernel based density estimator of the height, per sex, we have
So, it looks like the height for male, and the height for female are different. Maybe we can use that variable, that was actually observed, to explain the heterogeneity in our sample. Formally, here, the idea is to consider a mixture, with an observable heterogeneity factor: the sex,
$X\sim p_H\cdot\mathcal{N}(\theta_H,\sigma_H^2)+p_F\cdot\mathcal{N}(\theta_F,\sigma_F^2)$
We now have interpretation of what we used to call class [1] and [2] previously: male and female. And here, estimating parameters is quite simple,
> (pM <- mean(sex=="M"))
[1] 0.44
> (paramF <- fitdistr(X[sex=="F"],"normal")$estimate) mean sd 164.714286 5.633808 > (paramM <- fitdistr(X[sex=="M"],"normal")$estimate)
mean sd
178.011364 6.404001
And if we plot the density, we have
> f4 <- function(x) pM*dnorm(x,paramM[1],paramM[2])+(1-pM)*dnorm(x,paramF[1],paramF[2])
> lines(x,f4(x),lwd=3,col="blue")
What if, once again, we assume identical variance? Namely, the model becomes
$X\sim p_H\cdot\mathcal{N}(\theta_H,\sigma^2)+p_F\cdot\mathcal{N}(\theta_F,\sigma^2)$Then a natural idea to derive an estimator for the variance, based on previous computations, is to use
$\sigma^2=\frac{1}{n-2}\left(\sum_{i:H} [X_i-\overline{X}_H]^2+\sum_{i:F} [X_i-\overline{X}_F]^2\right)$
The code is here
> s=sqrt((sum((height[sex=="M"]-paramM[1])^2)+sum((height[sex=="F"]-paramF[1])^2))/(nrow(Davis)-2))
> s
[1] 6.015068
and again, it is possible to plot the associated density,
> f5 <- function(x) pM*dnorm(x,paramM[1],s)+(1-pM)*dnorm(x,paramF[1],s)
> lines(x,f5(x),lwd=3,col="blue")
Now, if we think a little about what we’ve just done, it is simply a linear regression on a factor, the sex of the person,
$X=\mu_H\cdot\boldsymbol{1}(H)+\mu_F\cdot\boldsymbol{1}(F)+\varepsilon$
where $\varepsilon\sim\mathcal{N}(0,\sigma^2)$. And indeed, if we run the code to estimate this linear model,
> summary(lm(height~sex,data=Davis))
Call:
lm(formula = height ~ sex, data = Davis)
Residuals:
Min 1Q Median 3Q Max
-16.7143 -3.7143 -0.0114 4.2857 18.9886
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 164.7143 0.5684 289.80 <2e-16 ***
sexM 13.2971 0.8569 15.52 <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Residual standard error: 6.015 on 198 degrees of freedom
Multiple R-squared: 0.5488, Adjusted R-squared: 0.5465
F-statistic: 240.8 on 1 and 198 DF, p-value: < 2.2e-16
we get the same estimators for the means and the variance as the ones obtained previously. So, as mentioned this morning in class, if you have a non-observable heterogeneity factor, we can use a mixture model to fit a distribution, but if you can get a proxy of that factor, that is observable, then you can run a regression. But most of the time, that observable variable is just a proxy of a non-observable one…
# Introduction à la régression logistique et aux arbres
Pour le second cours ACT2040, on va finir l’introduction (et les rappels de statistique inférentiel) puis attaque la première grosse section, sur la régression logistique et aux arbres de classification. base tirée du livre de Jed Frees, http://instruction.bus.wisc.edu/jfrees/…
> tail(baseavocat)
CASENUM ATTORNEY CLMSEX MARITAL CLMINSUR SEATBELT CLMAGE LOSS
1335 34204 2 2 2 2 1 26 0.161
1336 34210 2 1 2 2 1 NA 0.576
1337 34220 1 2 1 2 1 46 3.705
1338 34223 2 2 1 2 1 39 0.099
1339 34245 1 2 2 1 1 18 3.277
1340 34253 2 2 2 2 1 30 0.688
On dispose d’une variable dichotomique indiquant si un assuré – suite à un accident de la route – a été représenté par un avocat (1 si oui, 2 si non). On connaît le sexe de l’assuré (1 pour les hommes et 2 pour les femmes), le statut marital (1 s’il est marié, 2 s’il est célibataire, 3 pour un veuf, et 4 pour un assuré divorcé). On sait aussi si l’assuré portait ou non une ceinture de sécurité lorsque l’accident s’est produit (1 si oui, 2 si non et 3 si l’information n’est pas connue). Enfin, une information pour savoir si le conducteur du véhicule était ou non assuré (1 si oui, 2 si non et 3 si l’information n’est pas connue). On va recoder un peu les données afin de les rendre plus claires à lire.
Les transparents sont en ligne sur le blog,
Des compléments théoriques sur les arbres peuvent se trouver en ligne http://genome.jouy.inra.fr/…, http://ensmp.fr/…, ou http://ujf-grenoble.fr/… (pour information, nous ne verrons que la méthode CART). Je peux renvoyer au livre (et au blog) de Stéphane Tuffery, ou (en anglais) au livre de Richard Berk, dont un résumé se trouve en ligne sur http://crim.upenn.edu/….
# Assurance IARD
Mercredi aura lieu le premier cours d’assurance IARD de la session (ACT2040). Le plan de cours est en ligne, ainsi que les transparents de la première session. Nous verrons une introduction a la tarification, avec hétérogénéité, et quelques rappels de statistiques.
Des informations suivront, en particulier pour les démonstrations.
# Residuals from a logistic regression
I always claim that graphs are important in econometrics and statistics ! Of course, it is usually not that simple. Let me come back to a recent experience. A got an email from Sami yesterday, sending me a graph of residuals, and asking me what could be done with a graph of residuals, obtained from a logistic regression ? To get a better understanding, let us consider the following dataset (those are simulated data, but let us assume – as in practice – that we do not know the true model, this is why I decided to embed the code in some R source file)
> source("http://freakonometrics.free.fr/probit.R")
> reg=glm(Y~X1+X2,family=binomial)
If we use R’s diagnostic plot, the first one is the scatterplot of the residuals, against predicted values (the score actually)
> plot(reg,which=1)
we is simply
> plot(predict(reg),residuals(reg))
> abline(h=0,lty=2,col="grey")
Why do we have those two lines of points ? Because we predict a probability for a variable taking values 0 or 1. If the tree value is 0, then we always predict more, and residuals have to be negative (the blue points) and if the true value is 1, then we underestimate, and residuals have to be positive (the red points). And of course, there is a monotone relationship… We can see more clearly what’s going on when we use colors
> plot(predict(reg),residuals(reg),col=c("blue","red")[1+Y])
> abline(h=0,lty=2,col="grey")
Points are exactly on a smooth curve, as a function of the predicted value,
Now, there is nothing from this graph. If we want to understand, we have to run a local regression, to see what’s going on,
> lines(lowess(predict(reg),residuals(reg)),col="black",lwd=2)
This is exactly what we have with the first function. But with this local regression, we do not get confidence interval. Can’t we pretend, on the graph about that the plain dark line is very close to the dotted line ?
> rl=lm(residuals(reg)~bs(predict(reg),8))
> #rl=loess(residuals(reg)~predict(reg))
> y=predict(rl,se=TRUE)
> segments(predict(reg),y$fit+2*y$se.fit,predict(reg),y$fit-2*y$se.fit,col="green")
Yes, we can.And even if we have a guess that something can be done, what would this graph suggest ?
Actually, that graph is probably not the only way to look at the residuals. What not plotting them against the two explanatory variables ? For instance, if we plot the residuals against the second one, we get
> plot(X2,residuals(reg),col=c("blue","red")[1+Y])
> lines(lowess(X2,residuals(reg)),col="black",lwd=2)
> lines(lowess(X2[Y==0],residuals(reg)[Y==0]),col="blue")
> lines(lowess(X2[Y==1],residuals(reg)[Y==1]),col="red")
> abline(h=0,lty=2,col="grey")
The graph is similar to the one we had earlier, and against, there is not much to say,
If we now look at the relationship with the first one, it starts to be more interesting,
> plot(X1,residuals(reg),col=c("blue","red")[1+Y])
> lines(lowess(X1,residuals(reg)),col="black",lwd=2)
> lines(lowess(X1[Y==0],residuals(reg)[Y==0]),col="blue")
> lines(lowess(X1[Y==1],residuals(reg)[Y==1]),col="red")
> abline(h=0,lty=2,col="grey")
since we can clearly identify a quadratic effect. This graph suggests that we should run a regression on the square of the first variable. And it can be seen as a significant effect,
Now, if we run a regression including this quadratic effect, what do we have,
> reg=glm(Y~X1+I(X1^2)+X2,family=binomial)
> plot(predict(reg),residuals(reg),col=c("blue","red")[1+Y])
> lines(lowess(predict(reg)[Y==0],residuals(reg)[Y==0]),col="blue")
> lines(lowess(predict(reg)[Y==1],residuals(reg)[Y==1]),col="red")
> lines(lowess(predict(reg),residuals(reg)),col="black",lwd=2)
> abline(h=0,lty=2,col="grey")
Actually, it looks like we back where we were initially…. So what is my point ? my point is that
• graphs (yes, plural) can be used to see what might go wrong, to get more intuition about possible non linear transformation
• graphs are not everything, and they never be perfect ! Here, in theory, the plain line should be a straight line, horizontal. But we also want a model as simple as possible. So, at some stage, we should probably give up, and rely on statistical tests, and confidence intervals. Yes, an almost flat line can be interpreted as flat.
# R tutorials
My course on non-life insurance (ACT2040) will start in a few weeks. I will use R to illustrate predictive modeling. A nice introduction for those who do not know R can be found online.
# Poisson regression on non-integers
In the course on claims reserving techniques, I did mention the use of Poisson regression, even if incremental payments were not integers. For instance, we did consider incremental triangles
> source("https://perso.univ-rennes1.fr/arthur.charpentier/bases.R")
> INC=PAID
> INC[,2:6]=PAID[,2:6]-PAID[,1:5]
> INC
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 3209 1163 39 17 7 21
[2,] 3367 1292 37 24 10 NA
[3,] 3871 1474 53 22 NA NA
[4,] 4239 1678 103 NA NA NA
[5,] 4929 1865 NA NA NA NA
[6,] 5217 NA NA NA NA NA
On those payments, it is natural to use a Poisson regression, to predict future payments
> Y=as.vector(INC)
> D=rep(1:6,each=6)
> A=rep(2001:2006,6)
> base=data.frame(Y,D,A)
> Yp=predict(reg,type="response",newdata=base)
> matrix(Yp,6,6)
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 3155.6 1202.1 49.8 19.1 8.2 21.0
[2,] 3365.6 1282.0 53.1 20.4 8.7 22.3
[3,] 3863.7 1471.8 60.9 23.4 10.0 25.7
[4,] 4310.0 1641.8 68.0 26.1 11.2 28.6
[5,] 4919.8 1874.1 77.6 29.8 12.8 32.7
[6,] 5217.0 1987.3 82.3 31.6 13.5 34.7
and the total amount of reserves would be
> sum(Yp[is.na(Y)==TRUE])
[1] 2426.985
Here, payments were in ‘000 euros. What if they were in ‘000’000 euros ?
> a=1000
> INC/a
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 3.209 1.163 0.039 0.017 0.007 0.021
[2,] 3.367 1.292 0.037 0.024 0.010 NA
[3,] 3.871 1.474 0.053 0.022 NA NA
[4,] 4.239 1.678 0.103 NA NA NA
[5,] 4.929 1.865 NA NA NA NA
[6,] 5.217 NA NA NA NA NA
We can still run a regression here
> Yp=predict(reg,type="response",newdata=base)
> sum(Yp[is.na(Y)==TRUE])*a
[1] 2426.985
and the prediction is exactly the same. Actually, it is possible to change currency, and multiply by any kind of constant, the Poisson regression will return always the same prediction, if we use a log link function,
> homogeneity=function(a=1){
+ Yp=predict(reg,type="response",newdata=base)
+ return(sum(Yp[is.na(Y)==TRUE])*a)
+ }
> Vectorize(homogeneity)(10^(seq(-3,5)))
[1] 2426.985 2426.985 2426.985 2426.985 2426.985 2426.985 2426.985 2426.985 2426.985
The trick here come from the fact that we do like the Poisson interpretation. But GLMs simply mean that we do want to solve a first order condition. It is possible to solve explicitly the first order condition, which was obtained without any condition such that values should be integers. To run a simple code, the intercept should be related to the last value of the matrix, not the first one.
> base$D=relevel(as.factor(base$D),"6")
> base$A=relevel(as.factor(base$A),"2006")
> summary(reg)
Call:
glm(formula = Y ~ as.factor(D) + as.factor(A), family = poisson(link = "log"),
data = base)
Deviance Residuals:
Min 1Q Median 3Q Max
-2.3426 -0.4996 0.0000 0.2770 3.9355
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 3.54723 0.21921 16.182 < 2e-16 ***
as.factor(D)1 5.01244 0.21877 22.912 < 2e-16 ***
as.factor(D)2 4.04731 0.21896 18.484 < 2e-16 ***
as.factor(D)3 0.86391 0.22827 3.785 0.000154 ***
as.factor(D)4 -0.09254 0.25229 -0.367 0.713754
as.factor(D)5 -0.93717 0.32643 -2.871 0.004092 **
as.factor(A)2001 -0.50271 0.02079 -24.179 < 2e-16 ***
as.factor(A)2002 -0.43831 0.02045 -21.433 < 2e-16 ***
as.factor(A)2003 -0.30029 0.01978 -15.184 < 2e-16 ***
as.factor(A)2004 -0.19096 0.01930 -9.895 < 2e-16 ***
as.factor(A)2005 -0.05864 0.01879 -3.121 0.001799 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for poisson family taken to be 1)
Null deviance: 46695.269 on 20 degrees of freedom
Residual deviance: 30.214 on 10 degrees of freedom
(15 observations deleted due to missingness)
AIC: 209.52
The first idea is to run a gradient descent, as follows (the starting point will be coefficients from a linear regression on the log of the observations),
> YNA <- Y
> XNA=matrix(0,length(Y),1+5+5)
> XNA[,1]=rep(1,length(Y))
> for(k in 1:5) XNA[(k-1)*6+1:6,k+1]=k
> u=(1:(length(Y))%%6); u[u==0]=6
> for(k in 1:5) XNA[u==k,k+6]=k
> YnoNA=YNA[is.na(YNA)==FALSE]
> XnoNA=XNA[is.na(YNA)==FALSE,]
> beta=lm(log(YnoNA)~0+XnoNA)$coefficients > for(s in 1:50){ + Ypred=exp(XnoNA%*%beta) + gradient=t(XnoNA)%*%(YnoNA-Ypred) + omega=matrix(0,nrow(XnoNA),nrow(XnoNA));diag(omega)=exp(XnoNA%*%beta) + hessienne=-t(XnoNA)%*%omega%*%XnoNA + beta=beta-solve(hessienne)%*%gradient} > beta [,1] [1,] 3.54723486 [2,] 5.01244294 [3,] 2.02365553 [4,] 0.28796945 [5,] -0.02313601 [6,] -0.18743467 [7,] -0.50271242 [8,] -0.21915742 [9,] -0.10009587 [10,] -0.04774056 [11,] -0.01172840 We are not too far away from the values given by R. Actually, it is just fine if we focus on the predictions > matrix(exp(XNA%*%beta),6,6)) [,1] [,2] [,3] [,4] [,5] [,6] [1,] 3155.6 1202.1 49.8 19.1 8.2 21.0 [2,] 3365.6 1282.0 53.1 20.4 8.7 22.3 [3,] 3863.7 1471.8 60.9 23.4 10.0 25.7 [4,] 4310.0 1641.8 68.0 26.1 11.2 28.6 [5,] 4919.8 1874.1 77.6 29.8 12.8 32.7 [6,] 5217.0 1987.3 82.3 31.6 13.5 34.7 which are exactly the one obtained above. And here, we clearly see that there is no assumption such as “explained variate should be an integer“. It is also possible to remember that the first order condition is the same as the one we had with a weighted least square model. The problem is that the weights are function of the prediction. But using an iterative algorithm, we should converge, > beta=lm(log(YnoNA)~0+XnoNA)$coefficients
> for(i in 1:50){
+ Ypred=exp(XnoNA%*%beta)
+ z=XnoNA%*%beta+(YnoNA-Ypred)/Ypred
+ REG=lm(z~0+XnoNA,weights=Ypred)
+ beta=REG$coefficients + } > > beta XnoNA1 XnoNA2 XnoNA3 XnoNA4 XnoNA5 XnoNA6 3.54723486 5.01244294 2.02365553 0.28796945 -0.02313601 -0.18743467 XnoNA7 XnoNA8 XnoNA9 XnoNA10 XnoNA11 -0.50271242 -0.21915742 -0.10009587 -0.04774056 -0.01172840 which are the same values as the one we got previously. Here again, the prediction is the same as the one we got from this so-called Poisson regression, > matrix(exp(XNA%*%beta),6,6) [,1] [,2] [,3] [,4] [,5] [,6] [1,] 3155.6 1202.1 49.8 19.1 8.2 20.9 [2,] 3365.6 1282.0 53.1 20.4 8.7 22.3 [3,] 3863.7 1471.8 60.9 23.4 10.0 25.7 [4,] 4310.0 1641.8 68.0 26.1 11.2 28.6 [5,] 4919.8 1874.1 77.6 29.8 12.8 32.7 [6,] 5217.0 1987.3 82.3 31.6 13.5 34.7 Again, it works just fine because GLMs are mainly conditions on the first two moments, and numerical computations are based on the first order condition, which has less constraints than the interpretation in terms of a Poisson model. # Provisionnement et tarification, examen final L’examen final du cours ACT2040 avait lieu ce matin. L’énoncé est en ligne ainsi que des éléments de correction. En cas d’erreurs, merci de me le faire savoir rapidement, avant que je ne saisisse les notes. # Overdispersed Poisson et bootstrap Pour le dernier cours sur les méthodes de provisionnement, on s’est arrête aux méthodes par simulation. Reprenons là où on en était resté au dernier billet où on avait vu qu’en faisant une régression de Poisson sur les incréments, on obtenait exactement le même montant que la méthode Chain Ladder, > Y [,1] [,2] [,3] [,4] [,5] [,6] [1,] 3209 1163 39 17 7 21 [2,] 3367 1292 37 24 10 NA [3,] 3871 1474 53 22 NA NA [4,] 4239 1678 103 NA NA NA [5,] 4929 1865 NA NA NA NA [6,] 5217 NA NA NA NA NA > y=as.vector(as.matrix(Y)) > base=data.frame(y,ai=rep(2000:2005,n),bj=rep(0:(n-1),each=n)) > reg2=glm(y~as.factor(ai)+as.factor(bj),data=base,family=poisson) > summary(reg2) Call: glm(formula = y ~ as.factor(ai) + as.factor(bj), family = poisson, data = base) Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 8.05697 0.01551 519.426 < 2e-16 *** as.factor(ai)2001 0.06440 0.02090 3.081 0.00206 ** as.factor(ai)2002 0.20242 0.02025 9.995 < 2e-16 *** as.factor(ai)2003 0.31175 0.01980 15.744 < 2e-16 *** as.factor(ai)2004 0.44407 0.01933 22.971 < 2e-16 *** as.factor(ai)2005 0.50271 0.02079 24.179 < 2e-16 *** as.factor(bj)1 -0.96513 0.01359 -70.994 < 2e-16 *** as.factor(bj)2 -4.14853 0.06613 -62.729 < 2e-16 *** as.factor(bj)3 -5.10499 0.12632 -40.413 < 2e-16 *** as.factor(bj)4 -5.94962 0.24279 -24.505 < 2e-16 *** as.factor(bj)5 -5.01244 0.21877 -22.912 < 2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for poisson family taken to be 1) Null deviance: 46695.269 on 20 degrees of freedom Residual deviance: 30.214 on 10 degrees of freedom (15 observations deleted due to missingness) AIC: 209.52 Number of Fisher Scoring iterations: 4 > base$py2=predict(reg2,newdata=base,type="response")
> round(matrix(base$py2,n,n),1) [,1] [,2] [,3] [,4] [,5] [,6] [1,] 3155.7 1202.1 49.8 19.1 8.2 21.0 [2,] 3365.6 1282.1 53.1 20.4 8.8 22.4 [3,] 3863.7 1471.8 61.0 23.4 10.1 25.7 [4,] 4310.1 1641.9 68.0 26.1 11.2 28.7 [5,] 4919.9 1874.1 77.7 29.8 12.8 32.7 [6,] 5217.0 1987.3 82.4 31.6 13.6 34.7 > sum(base$py2[is.na(base$y)]) [1] 2426.985 Le plus intéressant est peut être de noter que la loi de Poisson présente probablement trop peu de variance… > reg2b=glm(y~as.factor(ai)+as.factor(bj),data=base,family=quasipoisson) > summary(reg2) Call: glm(formula = y ~ as.factor(ai) + as.factor(bj), family = quasipoisson, data = base) Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 8.05697 0.02769 290.995 < 2e-16 *** as.factor(ai)2001 0.06440 0.03731 1.726 0.115054 as.factor(ai)2002 0.20242 0.03615 5.599 0.000228 *** as.factor(ai)2003 0.31175 0.03535 8.820 4.96e-06 *** as.factor(ai)2004 0.44407 0.03451 12.869 1.51e-07 *** as.factor(ai)2005 0.50271 0.03711 13.546 9.28e-08 *** as.factor(bj)1 -0.96513 0.02427 -39.772 2.41e-12 *** as.factor(bj)2 -4.14853 0.11805 -35.142 8.26e-12 *** as.factor(bj)3 -5.10499 0.22548 -22.641 6.36e-10 *** as.factor(bj)4 -5.94962 0.43338 -13.728 8.17e-08 *** as.factor(bj)5 -5.01244 0.39050 -12.836 1.55e-07 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for quasipoisson family taken to be 3.18623) Null deviance: 46695.269 on 20 degrees of freedom Residual deviance: 30.214 on 10 degrees of freedom (15 observations deleted due to missingness) AIC: NA Number of Fisher Scoring iterations: 4 Mais on en reparlera dans un instant. Ensuite, on avait commencé à regarder erreurs commises, sur la partie supérieure du triangle. Classiquement, par construction, les résidus de Pearson sont de la forme $\varepsilon_i=\frac{Y_i-\widehat{Y}_i}{\sqrt{\text{Var}(Y_i)}}$ On avait vu dans le cours de tarification que la variance au dénominateur pouvait être remplacé par le prévision, puisque dans un modèle de Poisson, l’espérance et la variance sont identiques. Donc on considérait $\varepsilon_i=\frac{Y_i-\widehat{Y}_i}{\sqrt{\widehat{Y}_i}}$ > base$erreur=(base$y-base$py2)/sqrt(base$py2) > round(matrix(base$erreur,n,n),1)
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 0.9 -1.1 -1.5 -0.5 -0.4 0
[2,] 0.0 0.3 -2.2 0.8 0.4 NA
[3,] 0.1 0.1 -1.0 -0.3 NA NA
[4,] -1.1 0.9 4.2 NA NA NA
[5,] 0.1 -0.2 NA NA NA NA
[6,] 0.0 NA NA NA NA NA
Le soucis est que si $\widehat{Y}_i$ est – asymptotiquement – un bon estimateur pour $\text{Var}(Y_i)$, ce n’est pas le cas à distance finie, car on a alors un estimateur biaisé pour la variance, et donc la variance des résidus n’a que peu de chances d’être de variance unitaire. Aussi, il convient de corriger l’estimateur de la variance, et on pose alors
$\varepsilon_i=\sqrt{\frac{n}{n-k}}\cdot\frac{Y_i-\widehat{Y}_i}{\sqrt{\widehat{Y}_i}}$
qui sont alors les résidus de Pearson tel qu’on doit les utiliser.
> E=base$erreur[is.na(base$y)==FALSE]*sqrt(21/(21-11))
> E
[1] 1.374976e+00 3.485024e-02 1.693203e-01 -1.569329e+00 1.887862e-01
[6] -1.459787e-13 -1.634646e+00 4.018940e-01 8.216186e-02 1.292578e+00
[11] -3.058764e-01 -2.221573e+00 -3.207593e+00 -1.484151e+00 6.140566e+00
[16] -7.100321e-01 1.149049e+00 -4.307387e-01 -6.196386e-01 6.000048e-01
[21] -8.987734e-15
> boxplot(E,horizontal=TRUE)
En rééchantillonnant dans ces résidus, on peut alors générer un pseudo triangle. Pour des raisons de simplicités, on va générer un peu rectangle, et se restreindre à la partie supérieure,
> Eb=sample(E,size=36,replace=TRUE)
> Yb=base$py2+Eb*sqrt(base$py2)
> Ybna=Yb
> Ybna[is.na(base$y)]=NA > Tb=matrix(Ybna,n,n) > round(matrix(Tb,n,n),1) [,1] [,2] [,3] [,4] [,5] [,6] [1,] 3115.8 1145.4 58.9 46.0 6.4 26.9 [2,] 3179.5 1323.2 54.5 21.3 12.2 NA [3,] 4245.4 1448.1 61.0 7.9 NA NA [4,] 4312.4 1581.7 68.7 NA NA NA [5,] 4948.1 1923.9 NA NA NA NA [6,] 4985.3 NA NA NA NA NA Cette fois, on a un nouveau triangle ! on va alors pouvoir faire plusieurs choses, 1. compléter le triangle para la méthode Chain Ladder, c’est à dire calculer les montants moyens que l’on pense payer dans les années futures 2. générer des scénarios de paiements pour les années futurs, en générant des paiements suivant des lois de Poisson (centrées sur les montants moyens que l’on vient de calculer) 3. générer des scénarios de paiements avec des lois présentant plus de variance que la loi de Poisson. Idéalement, on voudrait simuler des lois qusi-Poisson, mais ce ne sont pas de vraies lois. Par contre, on peut se rappeler que dans ce cas, la loi Gamma devrait donner une bonne approximation. Pour ce dernier point, on va utiliser le code suivant, pour générer des quasi lois, > rqpois = function(n, lambda, phi, roundvalue = TRUE) { + b = phi + a = lambda/phi + r = rgamma(n, shape = a, scale = b) + if(roundvalue){r=round(r)} + return(r) + } Je renvois aux diverses notes de cours pour plus de détails sur la justification, ou à un vieux billet. On va alors faire une petite fonction, qui va soit somme les paiements moyens futurs, soit sommer des générations de scénarios de paiements, à partir d’un triangle, > CL=function(Tri){ + y=as.vector(as.matrix(Tri)) + base=data.frame(y,ai=rep(2000:2005,n),bj=rep(0:(n-1),each=n)) + reg=glm(y~as.factor(ai)+as.factor(bj),data=base,family=quasipoisson) + py2=predict(reg,newdata=base,type="response") + pys=rpois(36,py2) + pysq=rqpois(36,py2,phi=3.18623) + return(list( + cl=sum(py2[is.na(base$y)]),
+ sc=sum(pys[is.na(base$y)]), + scq=sum(pysq[is.na(base$y)])))
+ }
Reste alors à générer des paquets de triangles. Toutefois, il est possible de générer des triangles avec des incréments négatifs. Pour faire simple, on mettra des valeurs nulles quand on a un paiement négatif. L’impact sur les quantiles sera alors (a priori) négligeable.
> for(s in 1:1000){
+ Eb=sample(E,size=36,replace=TRUE)*sqrt(21/(21-11))
+ Yb=base$py2+Eb*sqrt(base$py2)
+ Yb=pmax(Yb,0)
+ scY=rpois(36,Yb)
+ Ybna=Yb
+ Ybna[is.na(base$y)]=NA + Tb=matrix(Ybna,6,6) + C=CL(Tb) + VCL[s]=C$cl
+ VR[s]=C$sc + VRq[s]=C$scq
+ }
Si on regarde la distribution du best estimate, on obtient
> hist(VCL,proba=TRUE,col="light blue",border="white",ylim=c(0,0.003))
> D=density(VCL)
> lines(D)
> I=which(D$x<=quantile(VCL,.05)) > polygon(c(D$x[I],rev(D$x[I])),c(D$y[I],rep(0,length(I))),col="blue",border=NA)
> I=which(D$x>=quantile(VCL,.95)) > polygon(c(D$x[I],rev(D$x[I])),c(D$y[I],rep(0,length(I))),col="blue",border=NA)
Mais on peut aussi visualiser des scénarios basés sur des lois de Poisson (équidispersé) ou des scénarios de lois quasiPoisson (surdispersées), ci-dessous
Dans ce dernier cas, on peut en déduire le quantile à 99% des paiements à venir.
> quantile(VRq,.99)
99%
2855.01
Il faut donc augmenter le montant de provisions de l’ordre 15% pour s’assurer que la compagnie pourra satisfaire ses engagements dans 99% des cas,
> quantile(VRq,.99)-2426.985
99%
428.025 | 2021-10-21 23:42:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 107, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7679370641708374, "perplexity": 3097.91498201089}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585449.31/warc/CC-MAIN-20211021230549-20211022020549-00551.warc.gz"} |
http://devtalk.net/cppamp/performance-of-string-histogram-building-in-cpp-amp/ | # DevTalk.net
ActiveMesa's software development blog. MathSharp, R2P, X2C and more!
## Performance of String Histogram Building in C++ AMP
We all know that GPUs are excellent number crunchers. Given a large amount of data, GPUs can process numerics up to 2 orders of magnitude more efficiently than today’s multi-core CPUs. But to date, the use of GPUs has been mainly restricted exactly to this scientific/mathematical domain, without much concern to alternative uses.
In this post, I want to take a look at other ways of exploiting GPUs – specifically, how individual strings as well as string arrays can be processed on the GPU and what kind of performance benefit we would be able to get. For the purposes of my investigations, I will be using an ATI 6800 series GPU with an ordinary Core 4 Quad. I will use ordinary C++ and C++ AMP as the technologies being compared. I will use primary Latin characters (in a range that can be displayed in a console), but in Unicode (i.e., wchar_t-based) strings as befits a modern framework.
Please note that the examples use minimum optimization of C++ code, i.e., common STL algorithms and approaches are used and no attempt is made to improve or replace basic C++ constructs. This, I beleive, constitutes a fairer test than attempting to fine-tune C++ for peak performance.
### Character histogram
We’ll begin with a simple case: given a rather lengthy text, how long would it take to build a histogram of all the characters present in the string? We’ll use a special function to randomly generate strings of different sizes:
// uses chars 32-127
void fill_string(wchar_t* chars, int count)
{
for (int i = 0; i < count; ++i)
{
chars[i] = 32 + rand() % 95;
}
}
We’ll perform the experiment on strings with length from 2 to 224 to ascertain how the algorithm performs under different conditions. First, we begin with a C++ implementation – without any constraints on the architecture, we’ll use a simple map-based approach:
typedef concurrent_unordered_map<wchar_t,int> histogram;
unique_ptr<histogram> cpu_histogram(wchar_t* str, size_t count)
{
unique_ptr<histogram> result(new histogram);
parallel_for((size_t)0, count, [&](int i)
{
(*result)[str[i]]++;
});
return result;
}
The GPU function is quite a bit more complicated. First of all, there is no real way of treating elements as wchar_t types since C++ AMP does not recognize elements smaller than an uint32_t. Thus, any view over a wchar_t array has to use the uint32_t data type.
In order to test the histogram, I used an existing implementation written by Daniel Moth. I won’t replicate the code here. All I changed in Daniel’s implementation is the definition of the data type, which I set to wchar_t, as well as an implementation of an alternative constructor that took actual data rather than the data size.
From then on, what I did is created strings of sizes 2n and filled them in with random printable characters:
int size = sizes[i];
wchar_t* target = new wchar_t[size+1];
ZeroMemory(target, (size+1) * sizeof(wchar_t));
fill_string(target, size);
Then, I simply timed the ‘cost’ of the CPU and GPU calls and averaged out results:
aggregate = 0.0;
for (int s = 0; s < 20; ++s)
{
Timer gpuTimer;
gpuTimer.Start();
auto gh = gpu_histogram(target, size);
gpuTimer.Stop();
aggregate += gpuTimer.Elapsed();
}
wcout << (aggregate / 20.0) << endl;
### Measuring Performance
For performance measurement I found yet another useful blog post describing the steps necessary to get the measurement just right. I also used a high-res timer from this article (isn’t the internet wonderful?).
The end result is the following performance measurements from the CPU and GPU. This is from a Release build with all optimizations enabled. Each iteration was repeated 20 times with the results averaged.
Figure 1. Comparison of GPU and CPU performance for building a string histogram. The X axis corresponds to a string of length 2x, the Y axis represents the elapsed time in milliseconds, and is binary-log-scaled.
It’s clear that, up to some point, the CPU has an advantage, as the GPU is consistently paying a small but annoying start-up cost. As far as measuring the benefits, the parallel lines on the right hand side of the chart suggest a constant 8× performance improvement when using the GPU. Performance benefits of the GPU can only be appreciated when dealing with strings with length greater than 216 (65536) characters.
### Conclusion
This experiment is just a small performance investigation, probably full of various methodological errors and inaccuracies, but the above has certainly been benefitial in terms of figuring out that:
• The GPU kernel appears to have a fixed start-up cost associated with it. However, the start-up cost seems to be around 2ms, which is fairly insignificant, except maybe for things like high-frequency trading.
• The GPU appears to have a linear advantage over the CPU (about 3 orders of binary magnitude), which is surprising, because I would expect non-linear divergence in the results.
• The current approach seems to be entirely unfit for calculating histograms with unlimited character sets. The reason is that currently, a histogram array matches the size of all possible characters, and then there are equally many numbers of such arrays. Essentially, to make lots of arrays of size 2sizeof(wchar_t) is impossible – the GPU just doesn’t have thas much memory.
I’ll certainly be playing more with C++ AMP and string processing specifically. Meanwhile, if you’d like to get the source code for this article, you can find it here. Who knows, maybe your performance measurements will be entirely different? Or maybe the algorithm won’t even run on your machine? At any rate, let me know. ■
Written by Dmitri Nesteruk
August 24th, 2012 at 8:09 pm
Posted in CppAmp
Tagged with , , | 2013-12-08 20:07:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27033573389053345, "perplexity": 1528.0414286379682}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163806278/warc/CC-MAIN-20131204133006-00002-ip-10-33-133-15.ec2.internal.warc.gz"} |
https://www.physicsforums.com/insights/trending-math-articles/ | # Math Articles, Guides, and Tutorials
## Investigating Some Euler Sums
So, why only odd powers? Mostly because the even powers were solved by Leonard Euler in the 18th century. Since the “mathematical toolbox” at that time did not contain the required tools, he needed 6 years to prove the validity of his deductions. Now, however, we have much more powerful tools available, as I have…
## 10 Math Things We All Learnt Wrong At School
The title is admittedly clickbait. Or a joke. Or a provocation. It depends on with whom you speak, or who reads it with which expectation. Well, I cannot influence any of that. I can only tell how I mean it, namely as an entertaining collection of simple truths which later on turn out to be…
## Computing the Riemann Zeta Function Using Fourier Series
Euler’s amazing identity The mathematician Leonard Euler developed some surprising mathematical formulas involving the number ##\pi##. The most famous equation is ##e^{i \pi} = -1##, which is one of the most important equations in modern mathematics, but unfortunately, it wasn’t invented by Euler.Something that is original with Euler is this amazing identity: Equation 1: ##1…
## How Bayesian Inference Works in the Context of Science
Confessions of a moderate Bayesian part 3 Read part 1: How to Get Started with Bayesian Statistics Read part 2: Frequentist Probability vs Bayesian Probability Bayesian statistics by and for non-statisticians https://www.cafepress.com/physicsforums.13280237 Background One of the things that I like about Bayesian statistics is that it rather closely matches the way that I think about…
## Exploring Frequentist Probability vs Bayesian Probability
Confessions of a moderate Bayesian, part 2 Read Part 1: Confessions of a moderate Bayesian, part 1 Bayesian statistics by and for non-statisticians https://www.cafepress.com/physicsforums.13280237 Background One of the continuous and occasionally contentious debates surrounding Bayesian statistics is the interpretation of probability. For anyone who is familiar with my posts on this forum, I am not…
## How to Get Started with Bayesian Statistics
Confessions of a moderate Bayesian, part 1 Bayesian statistics by and for non-statisticians https://www.cafepress.com/physicsforums.13265286 Background I am a statistics enthusiast, although I am not a statistician. My training is in biomedical engineering, and I have been heavily involved in the research and development of novel medical imaging technologies for the bulk of my career. Due…
## How to Solve Second-Order Partial Derivatives
Introduction A frequent concern among students is how to carry out higher order partial derivatives where a change of variables and the chain rule are involved. There is often uncertainty about exactly what the “rules” are. This tutorial aims to clarify how the higher-order partial derivatives are formed in this case. Note that in general…
## The Analytic Continuation of the Lerch and the Zeta Functions
Introduction In this brief Insight article the analytic continuations of the Lerch Transcendent and Riemann Zeta Functions are achieved via the Euler’s Series Transformation (zeta) and a generalization thereof, the E-process (Lerch). Dirichlet Series is mentioned as a steppingstone. The continuations are given but not shown to be convergent by any means, though if you… | 2021-07-31 13:26:02 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8307815194129944, "perplexity": 1157.3043353102958}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154089.6/warc/CC-MAIN-20210731105716-20210731135716-00331.warc.gz"} |
http://mathonline.wikidot.com/density-of-the-span-of-closed-subsets-in-hilbert-spaces | Density of the Span of Closed Subsets in Hilbert Spaces
# Density of the Span of Closed Subsets in Hilbert Spaces
Lemma 1: Let $H$ be a Hilbert space and let $H \subseteq S$. Then $S^{\perp} = (\mathrm{span}(S))^{\perp}$.
• Proof: Let $x \in S^{\perp}$. Then $x \perp S$, that is, for every $s \in S$ we have that $\langle x, s \rangle = 0$. Let $s' \in \mathrm{span} (S)$. Then $s' = a_1s_1 + a_2s_2 + ... + a_ns_n$ for $a_1, a_2, ..., a_n \in \mathbb{C}$ and for $s_1, s_2, ..., s_n \in S$. So:
(1)
\begin{align} \quad \langle x, s' \rangle &= \langle x, a_1s_2 + a_2s_2 + ... + a_ns_n \rangle \\ &= \overline{a_1} \underbrace{\langle x, s_1 \rangle}_{= 0} + \overline{a_2} \underbrace{\langle x, s_2 \rangle}_{=0} + ... + \overline{a_n} \underbrace{\langle x, s_n \rangle}_{= 0} \\ &= 0 \end{align}
• Therefore $x \in (\mathrm{span} S)^{\perp}$ and so:
(2)
• Now observe that $S \subseteq \mathrm{span} (S)$. Let $x \in (\mathrm{span}(S))^{\perp}$. Then $\langle x, s \rangle = 0$ for every $s \in \mathrm{span} (S)$, and so $\langle x, s \rangle = 0$ for every $s \in S$. Hence:
(3)
• From $(*)$ and $(**)$ we conclude that:
Theorem 2: Let $H$ be a Hilbert space and let $S \subseteq H$ be a closed subset of $H$. Then $\mathrm{span} (S)$ is dense in $H$ if and only if $S^{\perp} = \{ 0 \}$.
• Proof: $\Rightarrow$ Suppose that $\mathrm{span} (S)$ is dense in $H$. Then $\overline{\mathrm{span} (S)} = H$. Observe that $\overline{\mathrm{span} (S)}$ is a closed subspace of $H$, and so:
• Therefore $(\overline{\mathrm{span}(S)})^{\perp} = \{ 0 \}$ and so $(\mathrm{span} (S))^{\perp} = \{ 0 \}$ and lastly, $S^{\perp} = \{ 0 \}$.
• $\Leftarrow$ Suppose that $S^{\perp} = \{ 0 \}$. Then $\mathrm{span} (S)^{\perp} = \{ 0 \}$ and so $\overline{\mathrm{span} (S)}^{\perp} = \{ 0 \}$. Since $H = \overline{\mathrm{span} (S)} \oplus (\overline{\mathrm{span} (S)})^{\perp}$ we have that:
• So $\mathrm{span} (S)$ is dense in $H$. $\blacksquare$ | 2019-08-23 19:43:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 6, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000087022781372, "perplexity": 176.0594988812753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318986.84/warc/CC-MAIN-20190823192831-20190823214831-00272.warc.gz"} |
https://www.physicsforums.com/threads/the-average-density-of-halo-of-non-baryonic-dark-matter.805203/ | # The average density of halo of non-baryonic dark matter?
Tags:
1. Mar 26, 2015
### LavaLynne
I've got a homework question that I'm particularly stuck on:
Suppose that the halo, assumed spherical, of non-baryonic dark matter surrounding our galaxy has mass ~ 5 x10^12 M solar and radius 0.1 Mpc. What it its average density in Kg m-3?
I think that I need to use the formula M= r v^2/ G
G being the gravitational constant.
What I'm really unsure of is how to rework the formula?
I'm assuming that the average density will be the volume?
I've come up with v= M(r)/G and then square the answer
Am I anywhere close?
2. Mar 26, 2015
### BiGyElLoWhAt
1st: I have no idea how you got from the first eq to the second.
2nd: Homework questions have a special place in the homework forum.
3rd: Average density $\neq$ volume
3. Mar 26, 2015
### LavaLynne
Sorry I'm very new to this site. I did look in the homework forum but I could not find a cosmology homework thread.
Also I'm a mature student and I haven't done this level of mathematics in years so I'm trying to re-learn as I go. The second equation was my best effort to re-work the first. :(
4. Mar 26, 2015
### BiGyElLoWhAt
Ok, so what is density? Let's start there.
5. Mar 26, 2015
### Staff: Mentor
I've moved this thread to Advanced Physics Homework, which is appropriate for cosmology questions.
LavaLynne, please fill out the homework template to help facilitate helper's responses:
1. The problem statement, all variables and given/known data
2. Relevant equations
3. The attempt at a solution
6. Mar 26, 2015
### LavaLynne
Sorry I'm not sure what the density is as that's what I'm trying to find. I have previously found the average density of non-baryonic dark matter in the Universe by dividing it's percentage into the critical density. Should I be using that?
7. Mar 26, 2015
### LavaLynne
Sorry I'm not sure what the density is
1. The problem statement, all variables and given/known data
Suppose that the halo, assumed spherical, of non-baryonic dark matter surrounding our galaxy has mass ~ 5 x10^12 M solar and radius 0.1 Mpc. What it its average density in Kg m-3?
2. Relevant equations
M= r v^2/ G
3. The attempt at a solution
Try (unsuccessfully) to rework the above equation.
Last edited by a moderator: Mar 26, 2015
8. Mar 26, 2015
### Staff: Mentor
I think he's asking what the definition of density is.
9. Mar 26, 2015
### BiGyElLoWhAt
That is, in fact, what I'm asking.
10. Mar 26, 2015
### Chalnoth
[Mass] density is mass per unit volume.
11. Mar 26, 2015
### Staff: Mentor
That would give an average over the entire universe, yes, but the problem is not asking you for that. It's asking you for the average density in a particular region (the halo surrounding our galaxy). Averaging over the entire universe won't help with that; our galaxy is not an "average" region of the universe--it's much denser on average than the universe as a whole, since the universe as a whole includes all the empty space between galaxies.
(If this still isn't clear, consider a simpler example. Suppose someone asked you the average density of the Earth. You wouldn't use the figures for ordinary matter in the entire universe--what percentage that average density is of the critical density--to calculate that. You would use numbers for the Earth itself. Similarly, the problem is asking you to use numbers for our galaxy's halo to calculate the average density of dark matter there.)
12. Mar 26, 2015
### BiGyElLoWhAt
So its mass PER (divided by) volume? Whats the mass we're dealing with and what's the volume? You have a radius in megaparsecs. Google can convert that for you. | 2017-08-18 10:09:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5041394233703613, "perplexity": 1490.735404361128}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104631.25/warc/CC-MAIN-20170818082911-20170818102911-00045.warc.gz"} |
http://shcx.wetourist.it/navier-stokes-github.html | View On GitHub LESGO solves the filtered Navier-Stokes equations in the high-Reynolds number limit on a Cartesian mesh. Section 4 is concerned with the diagnosis of the pressure field required to. This implies that one can study sound generated by the flow itself - a branch of. View on GitHub Download this project as a. More #include Inheritance diagram for NavierStokesSolver: Public Member Functions NavierStokesSolver ()=default Default constructor. The aim is to assess the influence of the subgrid model on the inertial range intermittency. Aeroelasticity. Flurry++ is licensed under a GNU General Public License. Dismiss Join GitHub today. Fluid dynamics considers the physics of liquids and gases. , 6 × 50 = 300 neurons per hidden layer), takes the input variables t, x, y, z and outputs c, d, u, v, w, and p. The equations of interest are the Navier-Stokes equations, describing the conservation of momentum for fluid flows. The cell-centered solver can handle either multi-block structured or mixed element (tetrahedral, prismatic, pyramid, and hexahedral) unstructured grids. In SIMPLE, the continuity and Navier-Stokes equations are required to be discretized and solved in a semi-implicit way. The free-stream values are not only used as boundary conditions for the MARKER_FAR option, but also for initialization and non-dimensionalization. com and signed with a verified signature using GitHub’s key. 752 PRNN (WANG ET AL. For the steady case, Newton iterations are provided. Figure 1: Navier-Stokes informed neural networks: A plain vanilla densely connected (physics uninformed) neural network, with 10 hidden layers and 50 neurons per hidden layer per output variable (i. Willis, and J. The software solves numerically a form of the Navier-Stokes nist-equations appropriate for low-speed, thermally-driven flow, with an emphasis on smoke and heat transport from fires. Solving the Navier-Stokes equations using the Vorticity-Streamfunction Formulation ($$\omega$$ and $$\psi$$) Solving the Navier-Stokes equations using primitive variables (u, v, w, p) The second part of the class drills deeper into the many issues encountered in what we just accomplished:. solve_navier_stokes; Variables. Rio Yokota, who was a post-doc in Barba's. „Navier-Stokes-Gleichungen“ suchen mit: Wortformen von korrekturen. It is parallelised using MPI and is capable of scaling to many thousands of processors. Navier stokes indicators in trading in Title/Summary SU2 Educational SU2_EDU is an educational version of the Euler/Navier-Stokes/RANS solver from the SU2 suite. *CoMD: A simple proxy for the computations in a typical molecular dynamics application. the 12 steps to Navier-Stokes, is a practical module for learning the foundations of Computational Fluid Dynamics (CFD) by coding solutions to the basic partial differential equations that describe the physics of fluid flow. IC-FERST is an open source multiphase simulation tool based on Fluidity that is able to solve the Darcy and the Navier-Stokes equations in 2 and 3 dimensions. home documentation community source code gallery events try it online donate documentation community source code gallery events try it online donate. (Left: Re = 100, Right. Bach and Schubert through a biaxial recurrent neural network (RNN). I completed my PhD in early 2018 on the topic of stochastic Navier-Stokes equations on the rotating spheres with stable Lévy noise, under supervision of Prof. SlurmCI 40ed4611cbbb279338c8fe56d11bb163df4d7f28. To evaluate specific myelopathy diagnoses made in patients with suspected idiopathic transverse myelitis (ITM). We present hidden fluid mechanics (HFM), a physics informed deep learning framework capable of encoding an important class of physical laws governing fluid motions, namely the Navier-Stokes equations. Note: The demos in this post rely on WebGL features that might not be implemented in mobile browsers. Tieszen, S. But much has changed following the discovery finite-amplitude solutions to the Navier–Stokes equations, for pipe flow as recently as 2003. View On GitHub LESGO solves the filtered Navier-Stokes equations in the high-Reynolds number limit on a Cartesian mesh. Willis/SoftwareX6(2017)124–127 125 dynamics(CFD)packages–theaimduringdevelopmenthasbeen tomaintainacompactandreadablecode. In this video we will derive the famous Navier-Stokes Equations by having a look at a simple Control Volume (CV). home documentation community source code gallery events try it online donate. Roberts Camellia October. To this end, we introduce the mapping deformation gradient G and the mapping velocity v X as G = ∇ XG, v X = ∂G ∂t X. where denotes the vorticity, the -component of the velocity field, and the -component. However Navier-Stokes equation is a continuous velocity field differential equation, so to discretize it, I applied the SPH technique that is often used in solving astrophysical problems. Data-driven solutions and discovery of Nonlinear Partial Differential Equations View on GitHub Authors. solve_navier_stokes; Variables. But I am not getting any results. The PNP Solver Finite Element Methods for the Poisson-Nernst-Planck equations coupled with Navier-Stokes Solver GitHub The code We are developing, in Python and C++, solvers for simulating charge-transport systems with an arbitrary number of charge-carrying species. com and signed with a verified signature using GitHub’s key. Semi-implicit BDF time discretization of the Navier-Stokes equations with VMS-LES modeling in a high performance computing framework. Use MathJax to format. Small-disturbance flutter solution through coupling of structural and fluid discretizations. We focus here on two independent methods from our multi- delity framework for modeling apping ight; a panel method and a high-order accurate Navier-Stokes method. While the math (for streaming) has been set in stone for some time now, the physics of streaming, in its varied form and function is IMHO, still elusive. We review some results for Navier{Stokes turbulence and compare with results for shell models. If the velocity and pressure are known for the initial time t = 0, then the state of the fluid over time can be described by the Navier-Stokes equations for incompressible flow. A different form of equations can be scary at the beginning but, mathematically, we have only two variables which ha-ve to be obtained during computations: stream vorticity vector ζand stream function Ψ. $$This means that the pressure is instantaneously determined by the velocity field (the pressure is no longer an independent hydrodynamic variable). Note: at beginning of video, the equations shown are of incorrect representation for my code refer to equations shown in my github link below (along with source code); https://github. A new finite element formulation for computational fluids dynamics: the compressible euler and navier stokes equations. Navier-Stokes equations. Three-dimensional premixed Hydrogen/Air flame computed with the RNS code. SU2 is capable of dealing with different kinds of physical problems. Example source code. In any one of these cases, we show that a parallel laminar flow with a Poiseuille's flow profile ceases to be a stationary Navier-Stokes flow, due to the curvature of the background manifold. Remillard, Wilfred J. "Removing the Stiffness of Elastic Force from the Immersed Boundary Method for the 2D Stokes Equations" by T. In particular, variants of PCD (pressure-convection-diffussion) preconditioner from , are implemented. Provide details and share your research! But avoid … Asking for help, clarification, or responding to other answers. The Aither project is hosted on Github. Define the velocity and pressure in a 3D space. Use MathJax to format equations. This is a branch of classical physics and is totally based on Newton’s laws of motion. Convective term dG discritization proposed by Bassi, Crivellini, Di Pietro, and Rebay, Journal of Computational Physics 218(2):794-815, 2006. Configuration File Options. You can interact with the fluid with the left mouse button and visualize both the velocity and the pressure of the fluid. A list of possible values and a description can be found in the following table:. Recently activities have been carried out on Tecnam P2012 and on new regional 90 seats aircraft. Making statements based on opinion; back them up with references or personal experience. For example, the Navier–Stokes equations, a set of nonlinear PDEs that describe the motion of fluid substances, can lead to turbulence, a highly chaotic behavior seen in many fluid flows. The code is currently capable of running scalar advection/diffusion or Euler/Navier-Stokes cases on unstructured mixed grids of quadrilaterals and triangles (2D) or hexahedrons (3D) in the Gmsh format. We present hidden fluid mechanics (HFM), a physics informed deep learning framework capable of encoding an important class of physical laws governing fluid motions, namely the Navier-Stokes equations. However Navier-Stokes equation is a continuous velocity field differential equation, so to discretize it, I applied the SPH technique that is often used in solving astrophysical problems. Navier-Stokes equations¶. These cases were completed with SU2 v7. By applying the empirical interpolation method to generate an affine parameter dependency, the offline-online procedure can be used to compute reduced order solutions for parameter variations. HoneyGinger. Minimizing undesired wave reflections at the domain boundaries in flow simulations with free-surface waves based on the Navier-Stokes equations Project description Wave reflections at the boundaries of the computational domain can cause significant errors in flow simulations, and must therefore be reduced. This commit was created on GitHub. GIAN course on Computational Solution of Hyperbolic PDE at IIT Delhi, 4-15 December, 2017. py", line 12, in from mshr import * ModuleNotFoundError: No module named 'mshr' Aborted (core dumped) So I installed mshr Following this, I tried again: python3 navier_stokes_cylinder. Build Options. HoneyGinger. ) It is a more theoretical approach, but it satisfies mass conservation by construction. Problem Setup. Compute the Roe-averaged state for the 3D Navier Stokes equations. Configuration File Options Several of the key configuration file options for this simulation are highlighted here. field, which satisfies the Navier-Stokes equations (although it neglects the effect of the suspended particles on the flow field), and a kinematic simulation (KS) velocity field, which is a random field designed so that its statistics are in accord with the Kolmogorov theory for fully-developed turbulence. I am simulating an incompressible flow of a newtonian fluid over an oscillating plate using OpenFOAM. This is an example of using PetIBM to build a parallel incompressible flow solver. 流体力学の支配方程式、Navier-Stokes方程式を導く過程をpdfにまとめました。 以下の順に説明しています。 3 時間導関数 4 Eular の膨張公式 5 Reynolds の輸送定理 6 連続の式 7 Cauchy の第一運動法則 8 Cauchy の第二運動法則 9 非圧縮性の式 10 流体の構成式11 …. Discretisation with mixed nite elements (e. Since ˙= ru, we can write r p+ r˙ 1 u˙= f ˙ ru= 0 ru= 0 Nathan V. Geophysical applications often need to incorporate the Coriolis and centrifugal forces in f. ThusOpenpipeflowis. The well-known Mori-Zwanzig theory tells us that model reduction leads to memory effect. This is a list of my codes most of which are freely available. Willis, and J. 15th CSE Poster Exhibition. (unstructured finite-volume scheme of median-dual type, see https://github. By examining and understanding how the introduction of viscosity at the higher delity level impacts the ow- eld and. The problem is find the velocity field $$\mathbf{u}=(u_i)_{i=1}^d$$ and the pressure $$p$$ of a Flow satisfying in the domain $$\Omega \subset \mathbb{R}^d (d=2,3)$$:. To this end, we introduce the mapping deformation gradient G and the mapping velocity v X as G = ∇ XG, v X = ∂G ∂t X. Time dependent decaying flow for standing vortices with the following analytical …. Resistance of a bar; From Liouville to Navier-Stokes. With SPH, field quantities that are only. You can also ask for help by opening up an issue on the IAMR GitHub webpage. However, it's necessary to define it and provide it to the the solver object so that it can then send it to interpolation functions for a characteristic-based reconstruction. Navier Stokes equations: 22 ships destroyed and 11 ships lost. SlurmCI 44893a768e7e6e3e2268f0a76d53c122928636b3. Wang's GitHub account NGA code NGA is large-eddy simulation (LES) and direct numerical simulation (DNS) code capable of solving the low-Mach number variable density Navier-Stokes equations on structured cartesian and cylindrical meshes. The file Extras/ExtractSlice. zip file Download this project as a tar. (Left: Re = 100, Right. PetIBM is an open-source C++ library with ready-to-use application codes to solve the two- and three-dimensional incompressible Navier-Stokes equations on fixed structured Cartesian grids with an. Configuration File Options. 12 videos Play all 12 Steps to Navier-Stokes Manuel Ramsaier Implementing the CFD Basics - 03 - Part 1 - Coding for Lid Driven Cavity Simulation - Duration: 18:14. Chandrashekar); Applied Mathematics and Computation, Vol. Due to the complex nature of the Navier-Stokes equations, analytical solutions range from difficult to practically impossible, so the most common way to make use of the Navier-Stokes equations is through simulation and approximation. The Navier-Stokes equation for incompressible homogeneous fluids forms the basis of a lot of CFD and is used to describe the motion of a fluid. What is the numerical method to solve the incompressible Navier-Stokes equations? There are currently two different incompressible flow solvers implemented in IBAMR. The PNP Solver Finite Element Methods for the Poisson-Nernst-Planck equations coupled with Navier-Stokes Solver GitHub The code We are developing, in Python and C++, solvers for simulating charge-transport systems with an arbitrary number of charge-carrying species. In particular, variants of PCD (pressure-convection-diffussion) preconditioner from , are implemented. ANSYS Fluent solves the Navier-Stokes equations. A Study of Deep Learning Methods for Reynolds-Averaged Navier-Stokes Simulations. For a long time, modeling the memory effect accurately and efficiently has been an important but nearly impossible task in developing a good reduced model. In this work we prove that the original (Bassi and Rebay in J Comput Phys 131:267–279, 1997) scheme (BR1) for the discretization of second order viscous terms within the discontinuous Galerkin collocation spectral element method (DGSEM) with Gauss Lobatto nodes is stable. $$\rho=f(p,T)$$. Code_Saturne is the free, open-source software developed and released by EDF to solve computational fluid dynamics (CFD) applications. This is a list of my codes most of which are freely available. Navier stokes indicators in trading in Title/Summary SU2 Educational SU2_EDU is an educational version of the Euler/Navier-Stokes/RANS solver from the SU2 suite. CaNS - a code for massively-parallel DNS of canonical flows. Spectral element methods for the Navier-Stokes equations. - periodic BC, vertically and horizontally (The fluid is driven by a volumetric force F=(1,0) ). Wang's GitHub account NGA code NGA is large-eddy simulation (LES) and direct numerical simulation (DNS) code capable of solving the low-Mach number variable density Navier-Stokes equations on structured cartesian and cylindrical meshes. Provide details and share your research! But avoid … Asking for help, clarification, or responding to other answers. (日本語ドキュメントもあります) Part 1: Getting Started with the Cavity Flow. 7 Facet spaces and hybrid methods; 2. Use MathJax to format equations. The Navier Stokes equations form the governing set of equations for fluid motion under the continuum assumption. Introduction. task_7()" Show/Hide Code. MiniAero: RK4 unstructured FV, compressible Navier-Stokes; PRK Stencil; Regent: Soleil-X: multi-physics solver (explicit compressible flow, point particles, DOM radiation) Circuit simulation; PENNANT: unstructured mesh, Lagrangian staggered-grid hydrodynamics; MiniAero: RK4 unstructured FV, compressible Navier-Stokes; PageRank; PRK Stencil. PETSc or the Trilinos Project are used for the solution of linear systems on both serial and parallel platforms, and LASPack is. Provide details and share your research! But avoid … Asking for help, clarification, or responding to other answers. Section 4 is concerned with the diagnosis of the pressure field required to. We announce the public release of online educational materials for self-learners of CFD using IPython Notebooks: the CFD Python Class! Update! (Jan. These components form a component-based architecture where they serve as building blocks of customized applications. Construction of Lagrangians and Hamiltonians from the Equation of Motion. An efficient numerical algorithm for solving viscosity contrast Cahn–Hilliard–Navier–Stokes system in porous media Journal of Computational Physics, 2019; C. It is a context for learning fundamentals of computer programming within the context of the electronic arts. Saideep: Main CFD Forum: 10: February 13, 2017 10:29: The Navier stokes equations: Mirza: Main CFD Forum: 6: August 30, 2016 21:21: Solve the Navier stokes equations: Rime: Main CFD Forum: 5: March 11, 2016 06:05: MATLAB Finite Volume Unstructured Triangular. Implementation of parallel Navier-Stokes solver. Matrix details for Goodwin/Goodwin_127. Navier-Stokes solver. Prantl, TUM Xiangyu Hu, TUM. GitHub Gist: instantly share code, notes, and snippets. Oasis is a high-level/high-performance finite element Navier–Stokes solver written from scratch in Python using building blocks from the FEniCS project (fenicsproject. The convective and viscous fluxes are evaluated at the midpoint of an edge. The tutorial explains the fundamental concepts of the finite element method, FEniCS programming, and demonstrates how to quickly solve a range of PDEs. Hello World! I thought it would be fun to show a bit of what I do on my spare time. A list of possible values and a description can be found in the following table:. Matsui; Navier-Stokes Equations in a Rotating Frame in R3 with Initial Data Nondecreasing at In nity, Hokkaido Math. Download Citation | Validation of the cuIBM code for Navier-Stokes equations with immersed boundary methods | We have developed a Navier-Stokes solver, called cuIBM, to simulate incompressible. A Pen by lflee on CodePen. Gervasio, F. Resistance of a bar; From Liouville to Navier-Stokes. com and signed with a verified signature using GitHub’s key. CFD Python: 12 steps to Navier-Stokes. licensed open source software on GitHub 1 since March 2014 [2], w as originally created to fa-. SlurmCI 44893a768e7e6e3e2268f0a76d53c122928636b3. The CatalyticFOAM solver exploits state-of-the-art and new numerical technique in order to enable the simulation. GitHub Gist: instantly share code, notes, and snippets. The GPU variant was implemented using DirectX 11 and used the DirectCompute API. c Initialization of the physics-related variables and function pointers for the 3D Navier-Stokes system. One is a cell-centered approximate projection method-based solver that uses an explicit second-order Godunov scheme to handle the nonlinear advection terms that appear in the. Alpha Channel Fluid Advection Demo. Algebraic fractional-step schemes with spectral methods for the incompressible Navier-Stokes equations. Member Function Documentation. For an incompressible fluid \dot\rho=0. Tieszen, S. Dismiss Join GitHub today. Navier-Stokes calculations for a highly-twisted rotor near stall twitter github. FEATool Multiphysics Navier Stokes Equations Models, Tutorials, and Examples. Other creators. Compute the left eigenvections for the 2D Navier Stokes equations. (6 July 2012). Additive synthesis controlled by the Navier-Stokes fluid equation. Praveen Chandrashekar, Souvik Roy, A. , 5 x 32 = 160 neurons per hidden layer), takes the input variables t, x, y and outputs the displacement, c, u, v, w, and p. The Navier-Stokes equations in their full and simplified forms help with the design of aircraft and cars, the study of blood flow, the. Note: The demos in this post rely on WebGL features that might not be implemented in mobile browsers. It s olves the Navier-Stokes equations for 2D, 2D-axisymmetric and 3D flows, steady or unsteady, laminar or turbulent, incompressible or weakly dilatable, isothermal or not, with scalars transport if required. Making statements based on opinion; back them up with references or personal experience. Solving the Navier-Stokes equations using the Vorticity-Streamfunction Formulation ($$\omega$$ and $$\psi$$) Solving the Navier-Stokes equations using primitive variables (u, v, w, p) The second part of the class drills deeper into the many issues encountered in what we just accomplished:. Also, remember that academic integrity is different from pure copyright (which my answer focused on). „Reynolds-gemittelte Navier-Stokes-Gleichung“ suchen mit: Wortformen von korrekturen. It is parallelised using MPI and is capable of scaling to many thousands of processors. while the Navier-Stokes equation15 formulates conservation of momentum ρ ∂v ∂t +v·∇v = −∇p+ρg+µ∇2v, (7) where g is an external force density field and µ the viscosity of the fluid. person About Me. 2 Incompressible Navier-Stokes equations; 3. 0, and the flow velocities specified in the initial and boundary conditions correspond to a characteristic Mach number of 0. the level set method is used to deal with Navier-Stokes ows, in a variational framework which alleviates the need for the redistancing stage inherent to many level set based algorithms; this idea is continued in [67] in the three-dimensional setting. 原文写作时间: 2007年3月18日 【非逐字逐句翻译】 对这个问题的标准答案是:湍流 —— 三维NS方程在精细尺度上的表现远比在粗糙尺度上更加非线性。. Clone You can also ask for help by opening up an issue on the incflo GitHub webpage. These systems are modeled by the Poisson-Nernst-Planck (PNP) equations with the possibility of coupling to the Navier-Stokes (NS. (2017) This simulation solves the Navier-Stokes equations for incompressible fluids in a GPU fragment shader using a mixed grid-particle model. But I am not getting any results. However Navier-Stokes equation is a continuous velocity field differential equation, so to discretize it, I applied the SPH technique that is often used in solving astrophysical problems. fortran cfd finite-volume numerical-methods navier-stokes paraview lu. Derivation ¶ The derivation of the Navier-Stokes equations contains some equations that are useful for alternative formulations of numerical methods, so we shall briefly recover the steps to arrive at (1) and (2). GitHub is where people build software. 719 Figure 1 shows a sample of the predictions of our system over the test set for the Navier Stokes equations. Flurry++ is licensed under a GNU General Public License. This commit was created on GitHub. Chandrashekar); Applied Mathematics and Computation, Vol. 8 #define BOOST_TEST_MODULE "Test module for heat-conduction related proto operations". It does not use the correct Navier-Stokes simulation for rendering the fluid surface. Then the continuity equation implies$$ abla\cdot u = 0. It begins by covering the process of discretizing and coding partial differential equations in 1D, then moves to 2D and finally applies the same process to the Navier Stokes equations in 2D. Navier Stokes The Fire: 26 ships destroyed and 36 ships lost. More precisely, we prove in the first part that the BR1 scheme preserves energy stability of the skew-symmetric advection. It is a structured multi-block Reynolds-Averaged Navier-Stokes code under active development. SlurmCI 44893a768e7e6e3e2268f0a76d53c122928636b3. SlurmCI e3e67f391d7f751d5688da4d311aa8add2ec234b. If you just want to see running code, it’s on GitHub. Based on FEATool Multiphysics (https://www. 14 Posted by Florin No comments This time we will use the last two steps, that is the nonlinear convection and the diffusion only to create the 1D Burgers' equation; as it can be seen this equation is like the Navier Stokes in 1D as it has the accumulation, convection and diffusion terms. Lorena Barba between 2009 and 2013 in the Mechanical. Solving the compressible Navier-Stokes equations means that acoustic waves are included in the solution (if the resolution is fine enough and if the accuracy of the numerical scheme is high enough to be able to be able to represent the high-frequency low-amplitude acoustic perturbations). This simulation solves the Navier-Stokes equations for incompressible fluid flow past an obstacle in a GPU fragment shader. Lorena Barba's Computational Fluid Dynamics class, as taught between 2010 and 2013 at Boston University is now available. U-g flutter solver with mode tracking. The central challenge in this context is the high dimensionality of Eulerian space-time data sets. Semi-implicit BDF time discretization of the Navier-Stokes equations with VMS-LES modeling in a high performance computing framework. Aeroacoustics. Dismiss Join GitHub today. It is a structured multi-block Reynolds-Averaged Navier-Stokes code under active development. Prantl, Xiangyu Hu TechnicalUniversityofMunich. Perssona J. Saideep: Main CFD Forum: 10: February 13, 2017 10:29: The Navier stokes equations: Mirza: Main CFD Forum: 6: August 30, 2016 21:21: Solve the Navier stokes equations: Rime: Main CFD Forum: 5: March 11, 2016 06:05: MATLAB Finite Volume Unstructured Triangular. Let us consider the Navier-Stokes equation in two dimensions (2D) given explicitly by. Engwirda, November 2005, Undergraduate Honours Thesis, School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney) My undergraduate honours thesis, describing an unstructured finite-volume type solver for the unsteady Navier-Stokes equations. problem of solving Navier-Stokes equations is rather easy. ARC2D (Efficient Two-Dimensional Solution Methods For The Navier-Stokes Equations) ARC-12112-1 ARC2D is a computational fluid dynamics program developed at the NASA Ames Research Center specifically for two-dimensional airfoil and simply connected geometries. In this example we solve the Navier-Stokes equation past a cylinder with the Uzawa algorithm preconditioned by the Cahouet-Chabart method (see [GLOWINSKI2003] for all the details). Navier–Stokes (NS) solvers [1,2] and Lattice Boltzmann (LB) methods [3,4] are well established techniques for simulating in- or weakly compressible fluid flow. The Cauchy momentum equation is a vector partial differential equation put forth by Cauchy that describes the non-relativistic momentum transport in any continuum. SlurmCI 44893a768e7e6e3e2268f0a76d53c122928636b3. *CoMD: A simple proxy for the computations in a typical molecular dynamics application. Navier-Stokes equations. FENaPack is a package implementing preconditioners for Navier-Stokes problem using FEniCS and PETSc packages. Unlike previous approaches where optical. The Problems folder contains a gallery of tests, FEM_CFD_Steady_Test2D solution of the steady 2D Navier-Stokes equations in a backward-facing step channel. Derivation ¶ The derivation of the Navier-Stokes equations contains some equations that are useful for alternative formulations of numerical methods, so we shall briefly recover the steps to arrive at (1) and (2). This function just calls the macro _NavierStokes3DRoeAverage_ and is not used by any functions within the 3D Navier Stokes module. All the same, I like to give few useful words I came to learn about Navier-Stokes equations (NSEs), and could be useful to some readers too. Github repository. The default value of OUTPUT_FILES is (RESTART, PARAVIEW, SURFACE_PARAVIEW). Go to the documentation of this file. RB_library implements Galerkin and least-squares RB methods, POD, greedy algoritm, EIM, DEIM, MDEIM etc. task_7()" Show/Hide Code. This provides a MATLAB example code for the lid-driven cavity flow where incompressible Navier Stokes equation is numerically solved using a simple 2nd order finite difference scheme on a staggered grid system. Reynolds-Averged Navier-Stokes equations. Define the velocity and pressure in a 3D space. 599-620, 2004) MMS Source term, automatically generated by SymPy or Maple. Die Anwendung der Navier-Stokes-Gleichungen, so wie ich sie kenne, teilen sich in drei Bereiche: 1. C, "Finite volume discretization of heat equation and compressible Navier-Stokes equations with weak Dirichlet boundary condition on triangular grids", International Journal of Advances in Engineering Sciences and Applied Mathematics, vol. 8 #define BOOST_TEST_MODULE "Test module for heat-conduction related proto operations". Generally, the user needs to select the best-fit values according to their experimental or theoretical data. py , so it is easier to make comparison of PCD vs. See more of projects at My Github Profile. Investigating - We are investigating elevated errors starting GitHub Actions workflows. Chandrashekar); Applied Mathematics and Computation, Vol. Navier-Stokes equations 219 works Search for books with subject Navier-Stokes equations. Immersed boundary treatment for 3D Navier-Stokes equations. Structure containing variables and parameters specific to the 2D Navier Stokes equations. In this video we will derive the famous Navier-Stokes Equations by having a look at a simple Control Volume (CV). Click here to view in your browser. 12 videos Play all 12 Steps to Navier-Stokes Manuel Ramsaier Implementing the CFD Basics - 03 - Part 1 - Coding for Lid Driven Cavity Simulation - Duration: 18:14. Example (Navier-Stokes Equation) Our next example involves a realistic scenario of incompressible uid ow as described by the ubiquitous Navier-Stokes equations. GitHub; Jenkins; Navier-Stokes equations¶ We solve the Navier-Stokes equations using Taylor-Hood elements. 14th Symposium on Overset Composite Grids and Solution Technology, "Recent Advance on Extending the Added-Mass Partitioned (AMP) Scheme for Solving FSI Problems Coupling Incompressible Flows. A finite-volume, incompressible Navier Stokes model for studies of the ocean on parallel computers John Marshall, Alistair Adcroft, Chris Hill, Lev Perelman, and Curt Heisey Department of Earth, Atmospheric and Planetary Sciences, Massachusetts Institute of Technology, Cambridge Abstract. bufr_X_Y bufr_Y_X k1 k2 k3 kx ky ky_ kz mk1 mk2 mk3 nshells oflag rflag rkstep step stop sx sy sz t_end t_start trigx trigxy trigy trigz u un ur. To this end, we introduce the mapping deformation gradient G and the mapping velocity v X as G = ∇ XG, v X = ∂G ∂t X. Download pdf version. The density and pressure are taken such that the speed of sound is 1. Navier-Stokes equations: theory and numerical analysis 1977, North-Holland Pub. A spectral-element solver for the incompressible Navier-Stokes equations with thermal coupling Instructor : Prof. def add_pressure_bc. spectralDNS. We show here an example of a complex algorithm and or first example of mesh adaptation. The modified momentum and energy. Recently activities have been carried out on Tecnam P2012 and on new regional 90 seats aircraft. The physical definition for the compressible solvers in SU2 based around the definition of the free-stream. It uses dynamic unstructured mesh optimisation and has been parallelized using MPI and also has been tested on the U. spectralDNS contains a classical high-performance pseudo-spectral Navier-Stokes DNS solver for triply periodic domains. 11 Matrix free operator application; 3. Intro to Fluid Mechanics for Engineering Students Part 3 4. Each of these depend on a tuning factor (alpha, beta, C), that needs to be established in a case by case basis. Build Options. Pawan Goyal und über Jobs bei ähnlichen Unternehmen. 2 Incompressible Navier-Stokes equations; 3. Implementation of parallel Navier-Stokes solver. The compressible Navier-Stokes differ mathematically from the incompressible ones mainly in the divergence constraint $$abla \cdot \mathbf u eq 0$$. def add_pressure_bc. Summary : Fluid flows require good algorithms and good triangultions. View the Project on GitHub suhasjains/u3d-VOF-SG. A least squares approach using displacements for magnetohydrodynamic equations. The Aither project is hosted on Github. home documentation community source code gallery events try it online donate. Reynolds number in the Navier-Stokes simulations (Re= 3000) is in characteristic of insect ight, we expect this study to provide general insight into the applicability of the numerical methods across a wide range of animals, including insects, bats and birds. If it is a time-dependent problem, the frequency is based on the time iterations, while for steady-state problems it is based on the outer or inner iterations, depending on whether it is a multi-zone or single-zone problem, respectively. Provide details and share your research! But avoid … Asking for help, clarification, or responding to other answers. 599-620, 2004) MMS Source term, automatically generated by SymPy or Maple. This function just calls the macro _NavierStokes3DLeftEigenvectors_ and is not used by any functions within the 3D Navier Stokes module. at different temporal horizons on the Navier Stokes equations. This project takes a novel approach to solving the Navier-Stokes Equations for turbulence by training a neural network using Bayesian Cluster and SOM neighbor weighting to map ionospheric velocity fields based on 3-dimensional inputs. MOOSE’s navier_stokes module, which is the subject of the present work, is capable of solving both the compressible and incompressible Navier{Stokes equations using a variety of Petrov{Galerkin, discontinuous Galerkin (DG), and nite volume (implemented as low-order DG) discretizations. GitHub is home to over 40 million developers working together to host and review code, manage projects, and build software together. „Reynolds-gemittelte Navier-Stokes-Gleichung“ suchen mit: Wortformen von korrekturen. An interesting research item, which has led to relevant scientific publications, concerns the development of new preliminary design methodologies, obtained through numerous CFD Navier-Stokes aerodynamic analyses. Advanced nuclear reactors and nuclear fuel cycles promise to further improve passive safety, fuel utilization, and environmental impacts of this key energy source. The stationary incompressible Navier-Stokes equations are applied with simulation parameters corresponding to a Reynolds number, Re = 389. About I’m really not that interesting. Below there is a selection of audio-visual projects developed over the last few years using OpenFrameworks (C++), Python and Processing (Java). The algorithm operates on 2D maps of velocity and height, calculates updated maps for the next time step. The Navier-Stokes equations in their most basic form can be derived from a fairly simple physical standpoint, relying only on local conservation of mass and momentum. This commit was created on GitHub. 原文: Why global regularity for Navier-Stokes is hard. This is a branch of classical physics and is totally based on Newton’s laws of motion. Fluid Simulation with Navier Stokes. y 2222 22 2 vuv v vv v p v v1 uv. navier_stokes, rans, inc_navier_stokes, inc_rans, fem_navier_stokes, heat_equation_fvm 7. However, the computational cost and memory demand required by CFD codes may become very high for ows of practical interest, such as in aerodynamic shape optimization. Member Function Documentation. MODEL T = 5 T = 10 T = 50 OURS 0. The solver is an immersed boundary method that uses GPU hardware, cuIBM (open-source under the MIT license). Data-driven solutions and discovery of Nonlinear Partial Differential Equations View on GitHub Authors. verlinkt: Fluidmechanik · Strömungslehre · Strömungsmechanik assoziiert Reynolds-gemittelte Navier-Stokes-Gleichung · Reynolds-Gleichung 2014-01-06 17:05 Synonymfresser. View source code on github. Navier-Stokes equation, in fluid mechanics, a partial differential equation that describes the flow of incompressible fluids. GitHub is where people build software. This repo provides a MATLAB example code for the lid-driven cavity flow where incompressible Navier Stokes equation is numerically solved using a simple 2nd order finite difference scheme on a staggered grid system. Navier Stokes equations on the sphere and discuss hydrostatic, quasi-hydrostatic, and nonhydrostatic regimes. Many forms of the Navier-Stokes equation ap-pear in the literature. ; Salisbury, David F. Implementations of the 12 steps in Lorena Barba's course 12 Steps to Navier-Stokes. Saideep: Main CFD Forum: 10: February 13, 2017 10:29: The Navier stokes equations: Mirza: Main CFD Forum: 6: August 30, 2016 21:21: Solve the Navier stokes equations: Rime: Main CFD Forum: 5: March 11, 2016 06:05: MATLAB Finite Volume Unstructured Triangular. Numerical simulations with FOSLL* for 2D incompressible Navier-Stokes equations. Most of my current work is focused on developing fast, high-order integral equation methods, for simulations of flows governed by the Stokes and Navier-Stokes equations. It was inspired by the ideas of Dr. This function just calls the macro _NavierStokes2DLeftEigenvectors_ and is not used by any functions within the 2D Navier Stokes module. Boundary conditions: - no slip BC at the interface fluid/solid. It does not use the correct Navier-Stokes simulation for rendering the fluid surface. Thanks for contributing an answer to Earth Science Stack Exchange! Please be sure to answer the question. Wang's GitHub account NGA code NGA is large-eddy simulation (LES) and direct numerical simulation (DNS) code capable of solving the low-Mach number variable density Navier-Stokes equations on structured cartesian and cylindrical meshes. Some background. SU/PG discretization of compressible Navier-Stokes equations (experimental). The original example is a heightmap approximation of the water surface, given an infinitesimal point perturbation. 3 EULER EQUATIONS In the context of fluid dynamics, the Euler equations are a simplification of the Navier-Stokes equations, with no viscosity or thermal conductivity ("˘k ˘0). with the gravitational force g and Reynoldsnumber Re and. Weißenow, L. Athena is a grid-based code for astrophysical magnetohydrodynamics (MHD). Navier-Stokes equations 219 works Search for books with subject Navier-Stokes equations. It s olves the Navier-Stokes equations for 2D, 2D-axisymmetric and 3D flows, steady or unsteady, laminar or turbulent, incompressible or weakly dilatable, isothermal or not, with scalars transport if required. Implementations of the 12 steps in Lorena Barba's course 12 Steps to Navier-Stokes. Yonsei University, Seoul, Republic of Korea, 2018. Build Options. Er, JavaScript or WebGL doesn't seem to be running, so basically all you're going to see is the bare code. bufr_X_Y bufr_Y_X k1 k2 k3 kx ky ky_ kz mk1 mk2 mk3 nshells oflag rflag rkstep step stop sx sy sz t_end t_start trigx trigxy trigy trigz u un ur. The derivation of the Navier-Stokes equations contains some equations that are useful for alternative formulations of numerical methods, so we shall briefly recover the steps to arrive at \eqref{ns:NS:mom} and \eqref{ns:NS:mass}. Finite element discretization of the Navier-Stokes and similar transport equations on various geometries from Ralph Goodwin. We focus here on two independent methods from our multi- delity framework for modeling apping ight; a panel method and a high-order accurate Navier-Stokes method. RB_library implements Galerkin and least-squares RB methods, POD, greedy algoritm, EIM, DEIM, MDEIM etc. de · Beolingus Deutsch-Englisch OpenThesaurus ist ein freies deutsches Wörterbuch für Synonyme, bei dem jeder mitmachen kann. The example is that of a lid-driven cavity. İTÜ Uçak ve Uzay Mühendisliğinde yüksek lisans yapmaktayım. In SIMPLE, the continuity and Navier-Stokes equations are required to be discretized and solved in a semi-implicit way. Doing this, the behavior of each component is understood. Low-Mach Navier-Stokes¶. 12 Steps to Navier Stokes. In particular, variants of PCD (pressure-convection-diffussion) preconditioner from , are implemented. But when I am opening I am seeing something like this Screenshot from 2020-03-22 08-56-40 1920×1080 127 KB. If I were teaching this I would add the vorticity-streamfunction formulation before "primitive variable" incompressible Navier-Stokes in 2D. Section 4 is concerned with the diagnosis of the pressure field required to. The Navier-Stokes equation is a set of differential equations for a space and time dependent velocity field. Navier-Stokes informed neural networks: A plain vanilla densely connected (physics uninformed) neural network, with 10 hidden layers and 32 neurons per hidden layer per output variable (i. F11U1 eDu1 w Ñu1 ¶w1 ¶x u1, M U1 u1. Navier-Stokes solver. Then the continuity equation implies abla\cdot u = 0. Each of these depend on a tuning factor (alpha, beta, C), that needs to be established in a case by case basis. C, "Finite volume discretization of heat equation and compressible Navier-Stokes equations with weak Dirichlet boundary condition on triangular grids", International Journal of Advances in Engineering Sciences and Applied Mathematics, vol. Cauchy momentum equation. , a fluid simulation running in TensorFlow opens up the possibility of back-propagating gradients through the simulation as well as running the simulation on GPUs. Maziar Raissi, Zhicheng Wang, Michael Triantafyllou, and George Karniadakis. Dismiss Join GitHub today. Then the motion of the fluid is determinded by the uncompressible Navier-Stokes equation. Example source code. We review some results for Navier{Stokes turbulence and compare with results for shell models. The density and pressure are taken such that the speed of sound is 1. We introduce a multi-spectral decimation scheme for high-Reynolds number turbulence simulations. Navier-Stokes Composite layer solver. The problem is find the velocity field $$\mathbf{u}=(u_i)_{i=1}^d$$ and the pressure $$p$$ of a Flow satisfying in the domain $$\Omega \subset \mathbb{R}^d (d=2,3)$$:. High-Order Navier-Stokes Simulations using a Sparse Line-Based Discontinuous Galerkin Method. An L2-finite element approximation for the incompressible Navier-Stokes equations. (1) In addition, we denote the Jacobian of the mapping by g = det(G). Go to the documentation of this file. These problems are solved on unstructured simplicial meshes. Generally, the user needs to select the best-fit values according to their experimental or theoretical data. Although it is not possible to derive an analytical solution to this test case, very accurate numerical solutions to benchmark reference quantities have been established for the pressure difference, drag, and lift coefficient [1],[2]. This function just calls the macro _NavierStokes2DLeftEigenvectors_ and is not used by any functions within the 2D Navier Stokes module. Introduction. while the Navier-Stokes equation15 formulates conservation of momentum ρ ∂v ∂t +v·∇v = −∇p+ρg+µ∇2v, (7) where g is an external force density field and µ the viscosity of the fluid. Investigating - We are investigating elevated errors starting GitHub Actions workflows. 1993-01-01. It is parallelised using MPI and is capable of scaling to many thousands of processors. SU2 is capable of dealing with different kinds of physical problems. The inlet velocity is given as u inlet = 4u max (y-h step )(1-y)/h inlet 2 where h inlet is the channel height, hstep the expansion step height, and u max = 1 the maximum velocity. Navier-Stokes Composite layer solver. This commit was created on GitHub. RB_library implements Galerkin and least-squares RB methods, POD, greedy algoritm, EIM, DEIM, MDEIM etc. Weißenow, L. ThusOpenpipeflowis. Physics class for Incompressible Navier-Stokes. Use MathJax to format. Baederz University of Maryland, College Park, MD, 20742 Non-linear compact interpolation schemes, based on the Weighted Essentially Non-Oscillatory algorithm, are applied to the unsteady Navier-Stokes equations in this paper. The original example is a heightmap approximation of the water surface, given an infinitesimal point perturbation. Description. The module was part of a course taught by Prof. 2D steady-state viscous simulations of rudders in incompressible water are carried out with the k-w SST turbulence model. 0, and the flow velocities specified in the initial and boundary conditions correspond to a characteristic Mach number of 0. , 214(1):347–365, 2006. Features redbKIT consists of three main packages FEM_library provides 2D/3D finite elements approximations of advection-diffusion-reaction equations, Navier-Stokes equations, nonlinear elastostatic and elastodynamics, and fluid-structure interaction problems. U-g flutter solver with mode tracking. Athena is a grid-based code for astrophysical magnetohydrodynamics (MHD). This function just calls the macro _NavierStokes3DLeftEigenvectors_ and is not used by any functions within the 3D Navier Stokes module. A framework for the automated derivation of finite difference solvers from high-level problem descriptions. This project is licensed under the MIT License - see the License file for details. Navier-Stokes Equations Debojyoti Ghosh, Shivaji Medidayand James D. The main tools we used were functional analysis and distribution theory. In this case the system of equations is not closed and an additional equation of state (EOS) is required to connect the state variables, e. Double click to reset. Three-dimensional premixed Hydrogen/Air flame computed with the RNS code. At flow speeds which are much lower than the speed of sound in the medium, the density is assumed to be constant and the incompressible Navier Stokes equations1,2 are used to obtain the flow field. ERIC Educational Resources Information Center. GIAN course on Computational Solution of Hyperbolic PDE at IIT Delhi, 4-15 December, 2017. This Page's Entity Where possible, edges connecting nodes are given different colours to make them easier to distinguish in large graphs. - coupling of the Navier-Stokes and neutron diffusion equations - coupling of the Navier-Stokes and Maxwell equations. Oh, if only mathematics loved me as much as I love her! Since the time the Clay Mathematics Institute announced The Millennium Prize Problems in 2000, one of them have been solved (Poincaré Conjecture), and another one (Navier–Stokes Equation) has a promising solution that is undergoing a verification. It is parallelised using MPI and is capable of scaling to many thousands of processors. This equation is needed when solving for compressible fluid flows and solving the Navier-Stokes. SUNTANS User Guide Stanford Unstructured Nonhydrostatic Terrain-following Adaptive Navier-Stokes Simulator. Additional Inherited Members Static Public Attributes inherited from GRINS::ParameterUser: static std::string zero_vector_function = std::string("{0}"): A parseable function string with LIBMESH_DIM components, all 0. Barba Group 10 users テクノロジー カテゴリーの変更を依頼 記事元: lorenabarba. Written in English. GitHub YouTube: Water YouTube: Gold Tech. Athena is a grid-based code for astrophysical magnetohydrodynamics (MHD). Vasudeva Murthy the date of receipt and acceptance should be inserted later Abstract A variational approach is used to recover uid motion governed by Stokes and Navier-Stokes equations. The problem is find the velocity field $$\mathbf{u}=(u_i)_{i=1}^d$$ and the pressure $$p$$ of a Flow satisfying in the domain $$\Omega \subset \mathbb{R}^d (d=2,3)$$:. while the Navier-Stokes equation15 formulates conservation of momentum ρ ∂v ∂t +v·∇v = −∇p+ρg+µ∇2v, (7) where g is an external force density field and µ the viscosity of the fluid. See more of projects at My Github Profile. Go to the documentation of this file. A Navier-Stokes Informed Deep Learning Framework for Assimilating Flow Visualization Data View on GitHub Authors. The Shallow Water sample relies on flux splitting method for solving the approximated Navier-Stokes equations. We solve the Navier-Stokes equations using Taylor-Hood elements. Thanks for contributing an answer to Physics Stack Exchange! Please be sure to answer the question. MagIC is a numerical code that can simulate fluid dynamics in a spherical shell. Second algorithm is based on the paper “Navier-Stokes, Fluid Dynamics, and Image and Video Inpainting” by Bertalmio, Marcelo, Andrea L. View the Project on GitHub suhasjains/u3d-VOF-SG. ERIC Educational Resources Information Center. It is a modular, multiblock, finite-volume code developed to solve flow problems in the field of aerodynamics. The file Extras/ExtractSlice. Cauchy momentum equation. file NavierStokes3DInitialize. Authors: Nils Thuerey, TUM K. low_mach_navier_stokes_base. Define the velocity and pressure in a 3D space. These problems are solved on unstructured simplicial meshes. A Study of Deep Learning Methods for Reynolds-Averaged Navier-Stokes Simulations. Summary : Fluid flows require good algorithms and good triangultions. Şuanda bir startup projesinde donanım mühendisi olarak çalışmaktayım. Subroutines. Given the initial conditions of the fluid (which could be parameters in our implementation), we can solve this equation at each time step to find the state of the fluid at the next step. Note: at beginning of video, the equations shown are of incorrect representation for my code refer to equations shown in my github link below (along with source code); https://github. Solid arrows point from a parent (sub)module to the submodule which is descended from it. If you just want to see running code, it’s on GitHub. The Navier-Stokes equations describe the motion of a fluid. Code on Github. 650 PKNIDE BÉZENAC, PAJOT,AND GALLINARI (2018) 0. Basic principle is heurisitic. This is a list of my codes most of which are freely available. Thanks for contributing an answer to Computational Science Stack Exchange! Please be sure to answer the question. It s olves the Navier-Stokes equations for 2D, 2D-axisymmetric and 3D flows, steady or unsteady, laminar or turbulent, incompressible or weakly dilatable, isothermal or not, with scalars transport if required. Bertozzi, and Guillermo Sapiro in 2001. Module Graph. The algorithm attempts to imitate basic approaches used by professional restorators. http://claudiovz. Proteus two-dimensional Navier-Stokes computer code--version 2. (日本語ドキュメントもあります) Part 1: Getting Started with the Cavity Flow. Note: at beginning of video, the equations shown are of incorrect representation for my code refer to equations shown in my github link below (along with source code); https://github. This repo provides a MATLAB example code for the lid-driven cavity flow where incompressible Navier Stokes equation is numerically solved using a simple 2nd order finite difference scheme on a staggered grid system. However, it's necessary to define it and provide it to the the solver object so that it can then send it to interpolation functions for a characteristic-based reconstruction. The tutorial is an updated and expanded version of the popular first chapter of the FEniCS Book. See the complete profile on LinkedIn and discover Yao’s. I am simulating an incompressible flow of a newtonian fluid over an oscillating plate using OpenFOAM. SV Simulation tool can solve the three-dimensional incompressible Navier-Stokes equations in an arbitrary domain, generally a vascular model reconstructed from image data. The convective and viscous fluxes are evaluated at the midpoint of an edge. y 2222 22 2 vuv v vv v p v v1 uv. MOOSE's navier_stokes module, which is the subject of the present work, is capable of solving both the compressible and incompressible Navier{Stokes equations using a variety of Petrov{Galerkin, discontinuous Galerkin (DG), and nite volume (implemented as low-order DG) discretizations. Summary : Fluid flows require good algorithms and good triangultions. Tieszen, S. Hydro3D is a finite difference Navier Stokes solver that permits accurate and efficient Large Eddy Simulation (LES) of turbulent flows. Then the motion of the fluid is determinded by the uncompressible Navier-Stokes equation. Generally, the user needs to select the best-fit values according to their experimental or theoretical data. We develop an efficient fourth-order finite difference method for solving the incompressible Navier-Stokes equations in the vorticity-stream function formulation on a disk. Flow solvers and utilities » Navier-Stokes solver. Features redbKIT consists of three main packages FEM_library provides 2D/3D finite elements approximations of advection-diffusion-reaction equations, Navier-Stokes equations, nonlinear elastostatic and elastodynamics, and fluid-structure interaction problems. home documentation community source code gallery events try it online donate. In particular, variants of PCD (pressure-convection-diffussion) preconditioner from , are implemented. Based on FEATool Multiphysics (https://www. A more complete list can be found on my github, bitbucket and gitlab pages. 35 (2006) 321{364. These systems are modeled by the Poisson-Nernst-Planck (PNP) equations with the possibility of coupling to the Navier-Stokes (NS. Define the velocity and pressure in a 3D space. solve_navier_stokes; Variables. MODEL T = 5 T = 10 T = 50 OURS 0. Doing this, the behavior of each component is understood. ML preconditioners have been used on thousands of processors for a variety of problems, including the incompressible Navier-Stokes equations with heat and mass transfer, linear and nonlinear elasticity equations, the Maxwell equations, semiconductor equations, and more. GitHub Gist: star and fork lflee's gists by creating an account on GitHub. 原文写作时间: 2007年3月18日 【非逐字逐句翻译】 对这个问题的标准答案是:湍流 —— 三维NS方程在精细尺度上的表现远比在粗糙尺度上更加非线性。. Compute the left eigenvections for the 3D Navier Stokes equations. Barba Group 10 users テクノロジー カテゴリーの変更を依頼 記事元: lorenabarba. verlinkt: Fluidmechanik · Strömungslehre · Strömungsmechanik assoziiert Reynolds-gemittelte Navier-Stokes-Gleichung · Reynolds-Gleichung 2014-01-06 17:05 Synonymfresser. The scheme is based on a new spectral-Galerkin approximation for the space variables and a second-order projection scheme for the time variable. cpp in each of these examples. ) It is a more theoretical approach, but it satisfies mass conservation by construction. We propose a sparse regression method capable of discovering the governing partial differential equation(s) of a given system by time series measurements in the spatial domain. Implementations of the 12 steps in Lorena Barba's course 12 Steps to Navier-Stokes. This page contains the results of running MMS for the compressible Navier-Stokes system in order to formally verify the order-of-accuracy for the 2nd-order finite volume solver in SU2. If you just want to see running code, it’s on GitHub. A Study of Deep Learning Methods for Reynolds-Averaged Navier-Stokes Simulations. Download pdf version. The example is that of a lid-driven cavity. About a year and a half ago, I had a passing interest in trying to figure out how to make a fluid simulation. 9 Fourth order equations; 2. home documentation community source code gallery events try it online donate. "Removing the Stiffness of Elastic Force from the Immersed Boundary Method for the 2D Stokes Equations" by T. The Navier-Stokes equations in their full and simplified forms help with the design of aircraft and cars, the study of blood flow, the. This project takes a novel approach to solving the Navier-Stokes Equations for turbulence by training a neural network using Bayesian Cluster and SOM neighbor weighting to map ionospheric velocity fields based on 3-dimensional inputs. This Page's Entity Where possible, edges connecting nodes are given different colours to make them easier to distinguish in large graphs. This will create executables named main2d and main3d (not all examples will generate both, some examples are only 2d or only 3d). On Oct 19, I attended the ACMT 2017 and had a presentation (Using a Meshless Numerical Method for Solving Navier-Stokes Equation with Traction Boundary Conditions, by Chia-Cheng Tsai, Bing-Han Lin, Bang-Fuh Chen) in the MS: Advance and Application in Meshfree (Meshless) Methods, which is oraginated by Judy P. These developments have led to the so-called Navier-Stokes Equations, a precise mathematical model for most fluid flows occurring in Nature. That means even if you don't have any farfield BCs in your problem, it might be important to prescribe physically meaningful values for the. The Ops Lab brings talented teams together with shared development methods in a collaborative space. The source code is available on Github. The example is that of a lid-driven cavity. 1 Parabolic model problem; 3. 2D/3D steady and unsteady Navier-Stokes equations approximated by P2-P1 or P1Bubble-P1 finite elements for velocity and pressure spaces, respectively; P1-P1 finite elements stabilized with the SUPG stabilization (implemented as in the framework of the Variational MultiScale Method). 1057: Remove LinearTemperatureProfile, switch to DecayingTemperatureProfile r=charleskawczynski a=charleskawczynski # Description - Removes LinearTemperatureProfile and replaces its use with. Vortex induced vibrations of bluff bodies occur when the vortex shedding frequency is close to the natural frequency of the structure. I have discretized the Navier Stokes equation as per the Patankar Power Law Scheme. It is a modular, multiblock, finite-volume code developed to solve flow problems in the field of aerodynamics. Introduction. SlurmCI 40ed4611cbbb279338c8fe56d11bb163df4d7f28. Domino, and A. py", line 12, in from mshr import * ModuleNotFoundError: No module named 'mshr' Aborted (core dumped) So I installed mshr Following this, I tried again: \$ python3 navier_stokes_cylinder.
8w7noundq57i sctbdu2v6f0pj1 xfe99dviez5x vdl3nco3yshm1 r4rutds6r0o t7w2tknovnpsio mxe9xua5dguskzf fb9o83n58j uhv4sdui4ro0 oxz90mesww86j 62jjtk6066jt18 0j01dgnijm 0hwrr1ru65d xoi6pl6cv5uu rsoas0g11tv58 hh1zus8lla4vdu9 yjs0dfhadf0m2y v13qw69dal4m9z ph10tft2qg hqv6z8snwprhj bl8gnp261mfjkdd 5wle9zq6ol m7ynpz1r19c9n e9cq99zq1j onzpif7q68pa5l1 x68bci8jnz y3hvp6uayi umosmv07q27dzjr pdl6vumyb74y kryy7h7w7arw0b4 pia91twpstw2 ggnh28rg2j x1jhnibsid5906p kbrlj2gklql5j | 2020-07-11 03:35:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38857075572013855, "perplexity": 2165.4617212668386}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655921988.66/warc/CC-MAIN-20200711032932-20200711062932-00431.warc.gz"} |
https://dukarahisi.com/topic-8-numbers-ii-mathematics-form-1/ | Home MATHEMATICS TOPIC 8: NUMBERS (II) ~ MATHEMATICS FORM 1
# TOPIC 8: NUMBERS (II) ~ MATHEMATICS FORM 1
1090
3
## TOPIC 8: NUMBERS
A Rational Number
Define a rational number
A Rational Number is a real number that can be written as a simple fraction (i.e. as a ratio). Most numbers we use in everyday life are Rational Numbers.
##### Rational?
5
5/1
Yes
1.75
7/4
Yes
.001
1/1000
Yes
-0.1
-1/10
Yes
0.111…
1/9
Yes
√2(square root of 2)
?
NO !
The square root of 2 cannot be written as a simple fraction! And there are many more such numbers, and because they arenot rationalthey are calledIrrational.
#### The Basic Operations on Rational Numbers
Perform the basic operations on rational numbers
To add two or more rational numbers, the denominator of all the rational numbers should be the same. If the denominators of all rational numbers are same, then you can simply add all the numerators and the denominator value will the same. If all the denominator values are not the same, then you have to make the denominator value as same, by multiplying the numerator and denominator value by a common factor.
Example 1
1⁄3+4⁄3=5⁄3
1⁄3 +1⁄5=5⁄15 +3⁄15 =8⁄15
#### Subtraction of Rational Numbers:
To subtract two or more rational numbers, the denominator of all the rational numbers should be the same.
If the denominators of all rational numbers are same, then you can simply subtract the numerators and the denominator value will the same.
If all the denominator values are not the same, then you have to make the denominator value as same by multiplying the numerator and denominator value by a common factor.
Example 2
4⁄3 -2⁄3 =2⁄3
1⁄3-1⁄5=5⁄15-3⁄15=2⁄15
#### Multiplication of Rational Numbers:
Multiplication of rational numbers is very easy. You should simply multiply all the numerators and it will be the resulting numerator and multiply all the denominators and it will be the resulting denominator.
Example 3
4⁄3×2⁄3=8⁄9
#### Division of Rational Numbers:
Division of rational numbers requires multiplication of rational numbers. If you are dividing two rational numbers, then take the reciprocal of the second rational number and multiply it with the first rational number.
Example 4
4⁄3÷2⁄5=4⁄3×5⁄2=20⁄6=10⁄3
#### Irrational Numbers
Define irrational numbers
An irrational number is areal number that cannot be reduced to any ratio between an integer and a natural number q. The union of the set of irrational numbers and the set of rational numbers forms the set of real numbers.
In mathematical expressions, unknown or unspecified irrationals are usually represented byuthroughz. Irrational numbers are primarily of interest to theoreticians.
Abstract mathematics has potentially far-reaching applications in communications and computer science, especially in data encryption and security.
Examples of irrational numbers are √2 (the square root of 2), the cube root of 3, the circular ratiopi, and the natural logarithm base e.
The quantities√2andthe cube root of 3are examples of algebraic numbers. Pi endeared examples of special irrationals known as transcendental numbers.
The decimal expansion of an irrational number is always nonterminating (it never ends) and nonrepeating (the digits display no repetitive pattern)
#### Real Numbers
Real Numbers
Define real numbers
he type of number we normally use, such as 1, 15.82, −0.1, 3/4, etc.Positive or negative, large or small, whole numbers or decimal numbers are all Real Numbers.
They are called “Real Numbers” because they are not Imaginary Numbers.
Absolute Value of Real Numbers
Find absolute value of real numbers
The absolute value of a number is the magnitude of the number without regard to its sign. For example, the absolute value of 𝑥 𝑜𝑟 𝑥 written as
The sign before 𝑥 is ignored. This is because the distance represented is the same whether positive or negative.
For example, a student walking 5 steps forward or 5 steps backwards will be considered to have moved the same distance from where she originally was, regardless of the direction.
The 5 steps forward (+5) and 5 steps backward (-5) have an absolute value of 5
Thus |𝑥| = 𝑥 when 𝑥 is positive (𝑥 ≥ 0), but |𝑥| = −𝑥 when 𝑥 is negative (𝑥 ≤ 0).
For example, |3| = 3 since 3 is positive (3 ≥ 0) And −3 = (−3) =3 since −3 is negative (3 ≤ 0)
Related Practical Problems
Solve related practical problems
Example 5
Solve for 𝑥 𝑖𝑓 |𝑥| = 5
Solution
For any number 𝑥, |𝑥| = 5, there are two possible values. Either 𝑥,= +5 𝑜𝑟 𝑥 = 5
Example 6
Solve for 𝑥, given that |𝑥 + 2| =4
Solution | 2022-05-28 00:21:58 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8780435919761658, "perplexity": 890.5954637675421}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663011588.83/warc/CC-MAIN-20220528000300-20220528030300-00184.warc.gz"} |
http://planetmath.org/node/24051 | # polytrope
Forums:
Hi Chi,
What a good idea, thanks a lot! Indeed as I did put the tabulation looks really ugly. By using your suggested table now is another thing!
Done! But now I have a little problem as I got a table very attached from the top paragraph. So in order to separate it I have used "\\", "\newline", and also "{}\\", "{}\newline", but these things don't work. For sure I'm applying the wrong command, and I think pahio is, at this moment, in his cabin at Livonsaari, Finland, for vacations. So maybe you can help me on this issue. Thanks in advance.
Greetings,
Pedro
### Re: polytrope
Thanks Chi, I better leave it so.
### Re: polytrope
Actually, it doesn't look that bad... What if you try "\\\\" instead? It does look bad, however, when you view it in page mode. I suggest moving the legend out of the table... | 2017-12-15 12:13:34 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8565315008163452, "perplexity": 1925.1311495996308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948569405.78/warc/CC-MAIN-20171215114446-20171215140446-00357.warc.gz"} |
https://aviation.stackexchange.com/questions/20877/what-are-some-examples-of-warning-areas-and-restricted-areas/20884 | What are some examples of warning areas and restricted areas?
Can you please give me some examples of warning and restricted areas?
How can an aircraft enter these areas? What are the procedures you have to follow before entering them?
I'll provide information for the US airspace. However many countries, ICAO members, will have similar concepts and procedures.
You are not required to follow any procedure for entering a warning area or an inactive restricted airspace. In the latter case, you need to ensure the area is actually inactive. A safe practice is to contact the controlling agency of the area prior to enter any warning or restricted one.
The longer answer below develops a bit around special use airspaces which encompass warning and restricted airspaces among other. You'll also find two examples of these areas.
Context
The airspace over the US is open to navigation, a part of this airspace benefit from services provided by air traffic control (ATC), e.g. safe separation and traffic information.
This controlled airspace includes airport areas and airways, and other areas of interest. It is is ruled by air regulations relative to airspace classes. In the US these regulations are found in document 94 CFR 91.
Some limited areas, that may be permanent or temporary, can be prohibited or restricted. Together they constitute the special use airspaces (SUA)
Special use airspaces
SUA (Wikipedia):
Special use airspace includes: restricted airspace, prohibited airspace, military operations areas (MOA), warning areas, alert areas, temporary flight restriction (TFR), national security areas, and controlled firing areas.
The SUA are repertoried by the FAA: SUA Web site.
You may search by name or scan the map. R denotes a restricted airspace, and W a warning area.
More on SUA: Course Notes, Dennis Seals, FAA
Your question is limited to warning areas and restricted airspace.
Restricted airspace
Restricted areas denote the existence of unusual, often invisible, hazards to aircraft such as artillery firing, aerial gunnery, or guided missiles. Penetration of restricted areas without authorization from the using or controlling agency may be extremely hazardous to the aircraft and its occupants.
Entry: Forbidden when active, subject to ATC clearance on a case by case basis.
For areas that have a scheduled activity, times of activation are published in the AIP. When inactive, a R-area is just a like any other location of the surrounding airspace class. When active it is just forbidden to enter it (albeit FAA may deliver specific clearances if solicited).
For VFR flights, contacting ATC prior to entering an inactive restricted area is not mandatory per regulations, but is a good practice regarding safety. IFR flights under monitoring of the ATC, will not be authorized to enter an active restricted area.
Restricted airspace zones may not be active at all times; in such cases there are typically schedules of local dates and times available to aviators specifying when the zone is active, and at other times, the airspace is subject to normal VFR/IFR operation for the applicable airspace class.
Activity time may happen unscheduled for some R-areas. A schedule cannot be published, in this case a NOTAM is used to inform airspace users.
A few zones are activated by NOTAM; an example is R-2503D over Camp Pendleton in southern California, between San Diego and Los Angeles. This particular zone, beginning at 2000ft above sea level over most of southern Camp Pendleton, can only be active for a certain number of days per year, thus allowing small planes to fly a direct route over land between the two metro areas instead of being diverted offshore or into mountainous terrain further inland.
Example: Restricted areas R-2501 around the Twentynine Palms Marine Corps Air Ground Combat Center:
The Twentynine Palms Complex is located in the Southern California desert approximately 115 NMI northeast of Los Angeles, CA. The Twentynine Palms Complex provides a vast land and restricted airspace (R-2501) area for live ordnance employment and combined arms training of infantry units, armored vehicles, artillery, and air support.
The complex includes the following instrumented area: R-2501E
The following are training, and Military Operating Areas (MOAs) associated with the Twentynine Palms complex:
• R-2501N
• R-2501W
• R-2501S
• Bristol MOA
• Sundance MOA
• Range Training Areas (RTA)
Description of R-2501N from Order JO 7400.8S:
R-2501N Bullion Mountains North, CA
Boundaries.
Beginning at lat. 34°30'00"N., lon g. 116°26'23"W.;
to lat. 34°36'00"N., long. 116°28'03"W.;
to lat. 34°40'30"N., long. 116°29'43"W.;
to lat. 34°43'00"N., [...]
Designated altitudes. Unlimited.
Time of designation. Continuous.
Controlling agency. FAA, Los Angeles ARTCC.
Using agency. Commanding General, Marine Corps Base, Twentynine Palms, CA.
This area around a large military complex is active (restricted) 24x7, at any altitude. A pilot who wants to enter this area for some reason needs to contact either the controlling agency (Los Angeles ARTCC) or the using agency (Marine Corps Base).
Warning area
Chapter 3−4−4. Warning Areas of the FAA-AIM.
A warning area is airspace of defined dimensions, extending from three nautical miles outward from the coast of the U.S., that contains activity that may be hazardous to nonparticipating aircraft. The purpose of such warning areas is to warn nonparticipating pilots of the potential danger. A warning area may be located over domestic or international waters or both.
Entry without specific clearance, but can be hazardous. Contacting ATC prior to entering a warning area is not mandatory per regulations, but is a good practice regarding safety.
Example: Pacific Missile Range Facility (PMRF) NS Barking Sands.
The area surrounding Kauai is divided into warning areas with W-186 and W-188 controlled by PMRF. The Fleet Area Control and Surveillance Facility (FACSFAC) controls W-187, 189, and 190. Space, air, and surface tracking are accomplished from PMRF precision-tracking radar sites at elevations of 75 ft., 1700 ft., and 3800 ft. These are supported by radar systems operated by agencies external to PMRF.
Description of W-188 from Order JO 7400.8S:
W-188 Hawaii, HI
Boundaries. Beginning at lat. 21°58'19"N., long. 159°48'45"W.;
to lat. 21°58'27"N., long. 159°59'50"W.; [...]
Altitudes. Surface to unlimited.
Times of use. Continuous.
Controlling agency. FAA, FAA, Honolulu Control Facility.
Using agency. Commander, Pacific Missile Range Facility, HI.
More:
They serve the same basic purpose, most generally military training areas.
The distinction is that restricted areas are all over land, most have a ceiling at 18000 feet and are generally isolated so that routing around is usually only a small diversion, they also comply only with US/FAA standards. Restricted differ from an MOA in the frequency of use and level of activity, as such you are not required to contact the controlling agency before entering an MOA (even if active)[but you should for your own safety], you must contact the controller before entering a restricted area.
Warning areas are over coastal waters and are operated with consideration for ICAO standards, warning areas are individually larger than restricted areas and are mostly in a continuous series so routing around them can be more trouble.
The specific times, altitudes, and contact frequencies are listed on the legend portion of low altitude charts. They may be active by a regular published schedule listed on the chart legend, or by notam. | 2020-09-26 21:21:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43649792671203613, "perplexity": 11450.664515056227}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400245109.69/warc/CC-MAIN-20200926200523-20200926230523-00759.warc.gz"} |
https://quantumcomputing.stackexchange.com/questions/4904/multiple-random-coin-flips/4905#4905 | # Multiple random coin flips
Suppose that in my circuit I have to generate multiple, say n, random coin flips. For example, this coin flips could be used to activate n CNOTs half of the time.
The trivial solution could be to use n different qubits and Hadamard them. However, this gets really huge when n is large.
Is there any better way? By better I mean using a small (fixed??) number of qubits and only a few simple quantum gates.
## 1 Answer
This depends on exactly what you want to do with the outcome. If you want to use the $$n$$ outcomes simultaneously, then you need $$n$$ separate coins. Alternatively, you are happy to implement them all in sequence (one after the other), then what you could do is:
• start with qubit in the state $$|0\rangle$$
• apply Hadamard to it
• measure it in the 0/1 basis
• drive the controlled-not off it
• apply Hadamard to it
• measure it in the 0/1 basis
• drive the controlled-not off it
• apply Hadamard to it
• ...
which only requires the one qubit. This is assuming that when you talk about coin flips, you really mean the classical version (which is why I have the measurements in there). If the coherence were important to you, it might be a different matter.
• Thanks. I've thought about it, implementing it in a slightly different way (swapping states 3 and 4). However, I was thinking if this was the only actual one or there are more clever algorithm involving phase shifts or something like that, avoiding the measurement at all. Dec 11 '18 at 10:16
• that probably depends on how you're using the output. I suspect that unless you do the measurement at each step, there will be detectable differences from independent random coins, it just depends on whether what you do with it later would be sensitive to those differences or not. Dec 11 '18 at 11:56
• If you try to re-use the qubit without measuring, you're probably going to accidentally perform a quantum walk instead of a random walk. Dec 11 '18 at 20:39
• @CraigGidney thanks for the article, pretty useful. Dec 12 '18 at 9:49 | 2021-10-16 21:29:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6489688754081726, "perplexity": 580.3267139357641}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585025.23/warc/CC-MAIN-20211016200444-20211016230444-00169.warc.gz"} |
http://www.askiitians.com/forums/IIT-JEE-Entrance-Exam/37/27071/rankings.htm | i guess a seperate rankk list is prepared for st sc candidate apart frm d general rank list... so how is the councelling procedure then, imean regarding the choices of branches for reserved category ??
3 years ago
Share
Dear ninnee,
The allotment of the seats is according to the ranks in that particular category.In the seat allotment the categories are not mixed.The choice of the branch is according to the student and the seats are alloted according to the rank in that particualr category.
All the best.
Win exciting gifts by answering the questions on Discussion Forum. So help discuss any query on askiitians forum and become an Elite Expert League askiitian.
Suryakanth –IITB
3 years ago
More Questions On IIT JEE Entrance Exam
Post Question
Vouchers
To Win!!!
Hello, I am a 11 th class student preparing for JEE-Mains as well as Advanced.. I started my course this April and since then I have been studying regularly for 2 hrs, and revising...
The only thing is that right now you have to practice a lot of questions, so that whatever you have learnt you will have full grasp of concepts , fundas , theories etc etc....
Students are always welcome.... you just do keep practice of question right now and get familiarise with the types of question which JEE asks. i hope all the best for you.....
Ok Sir, Thanks for your help..:)
Vishal Somani 2 months ago
Three angles of a triangle ABC are in Arithmetic progression and two sides are in the ratio b : c = √3 : √2. Find angle A.
Hii use property of AP and then use the formulae of cosine from solution of triangles to get the answer in the required format. [a^2 = b^2 + c^2 - 2bc\cos\alpha\,] Best
Sourabh Singh 3 months ago
A block is resting on a horizontal plate in the xy plane and the coefficient of friction between block and plate is μ. The plate begins to move with velocity v = bt 2 in x -direction. At...
Hi yash! as the plate begins to move frictional force acts on the block along the direction of velocity to prevent relative motion. when the force exceed the maximum static friction the...
PRATYUSH MISHRA one month ago
no not yet yash I am a student of class 11
PRATYUSH MISHRA one month ago
are you studying in an iit
yash one month ago
Can a MBIPC student can apply for AIIMS exam
Hi Please elaborate the term MBIPC. I think it means you also have Mathematics subject. As long as you have Physics, Chemistry & Biology in Class 12 th you can appear for medical...
Sumreet Johar 3 months ago
Which books should I refer for preparation of AIIMS
AIIMS Delhi Medical examination preparation , Handbook of biology , Solved papers of previous year etc.
Saloni gordhan Rakholiya 5 months ago
AIIMS Delhi Medical examination preparation , Handbook of biology , Solved papers of previous year etc.
Saloni gordhan Rakholiya 5 months ago
centre of mass part 1 doubts
Vineet, In 1 st problem the horizontal component is vsin(2@) and vertical component is vcos(2@) and hence the angle with horizontal is 90-2@. Look into the diagram and equations . Solution...
Aziz Alam one month ago
Hi, Please look into the solution of the 1 st Question.
Aziz Alam one month ago
Hi, Please find the solution to the 2 nd question.
Aziz Alam one month ago
View all Questions » | 2014-12-21 15:09:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22290989756584167, "perplexity": 3256.7014544692875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802771384.149/warc/CC-MAIN-20141217075251-00031-ip-10-231-17-201.ec2.internal.warc.gz"} |
http://nodus.ligo.caltech.edu:8080/40m/page39?&sort=Author | 40m QIL Cryo_Lab CTN SUS_Lab TCS_Lab OMC_Lab CRIME_Lab FEA ENG_Labs OptContFac Mariner WBEEShop
40m Log, Page 39 of 341 Not logged in
ID Date Author Type Category Subject
11532 Thu Aug 27 01:41:41 2015 IgnacioUpdateIOOTriply Improved SISO (T240-X) FF of MCL
Earlier today I constructed yet another SISO filter for MCL. The one thing that stands out about this filter is its strong roll off . This prevents high frequency noise injection into YARM. The caviat, filter performance suffered quite a bit, but there is subtraction going on.
I have realized that Vectfit lacks the ability of constraining the fits it produces, (AC coupling, rolloff, etc) even with very nitpicky weighting. So the way I used vectfit to produce this filter will be explained in a future eLOG, I think it might be promising.
Anyways, the usual plots are shown below.
Filter:
T240-X (SISO)
Training data + Predicted FIR and IIR subtraction:
Online subtraction results:(High freq. stuff shown for noise injection evaluation of the filter)
MCL
YARM
Subtraction performace:
11535 Fri Aug 28 00:59:55 2015 IgnacioUpdateIOOFinal SISO FF Wiener Filter for MCL
This is my final SISO Wiener filter for MCL that uses the T240-X seismo as its witness.
The main difference between this filter and the one on elog:11532 is the actual 1/f rolloff this filter pocesses. My last filter had a pair of complex zeroes at 2kHz, that gave the filter some unusual behavior at high frequencies, thanks Vectfit. This filter has 10 poles and 8 zeroes, something Vectfit doesn't allow for and needs to be done manually.
The nice thing about this filter is the fact that Eric and I turned this filter on during his 40 min PRFPMI lock last night, Spectra for this is coming soon.
This filter lives on the static Wiener path on the OAF machine, MCL to MC2, filter bank 7.
Anyways, the usual plots are shown below.
Filter:
T240-X (SISO)
Training data + Predicted FIR and IIR subtraction:
Online subtraction results:(High freq. stuff shown for noise injection evaluation of the filter)
MCL
YARM
Subtraction performace:
11536 Fri Aug 28 02:20:35 2015 IgnacioUpdateLSCPRFPMI and MCL FF
A day late but here it is.
Eric and I turned on my SISO MCL Wiener filter elog:11535 during his PRFPMI 40min lock. We looked at the CARM_IN and CARM_OUT signals during the lock and with the MCL FF on/off. Here is the spectra:
11541 Sat Aug 29 04:53:24 2015 IgnacioUpdateIOOMCL Wiener Feedforward Final Results
After fighting relentlessly with the mode cleaner, I believe I have achieved final results
I have mostly been focusing on Wiener filtering MCL with a SISO Wiener filter for a reason, simplicity. This simplicity allowed me to understand the dificulties of getting a filter to work on the online system properly and to develope a systematic way of making this online Wiener filters. The next logical step after achieving my final SISO Wiener filter using the T240-X seismometer as witness for MCL (see elog:11535) and learning how to produce good conditioned Wiener filters was to give MISO Wiener filtering of MCL a try.
I tried performing some MISO filtering on MCL using the T240-X and T240-Y as witnesses but the procedure that I used to develope the Wiener filters did not work as well here. I made the decision to ditch it and use some of the training data I saved when the SISO (T240-X) filter was runing overnight to develope another SISO Wiener filter for MCL but this time using T240-Y as witness. I will compare how much more we gain when doing MISO Wiener filtering compared to just a bunch of SISO filtering in series, maybe a lot, or little.
I left both filters running overnight in order to get trainining data for arm and WFS yaw and pitch subtractions.
The SISO filters for MCL are shown below:
The theoretical FIR and IIR subtractions using the above filters:
Running the filters on the online system gave the following subtractions for MCL and YARM:
Comparing the subtractions using only the T240-X filter versus the T240-X and T240-Y:
11543 Sun Aug 30 10:57:29 2015 IgnacioUpdateIOOMCL Wiener Feedforward Final Results
Big thumbnails? Could it have been this? elog:11498.
Anyways, I fixed the plots and plotted an RMS that can actaully be read in my original eLOG. I'll see what can be done with the MC1 and MC2 Wilcoxon (z-channel) for online subtractions.
11546 Sun Aug 30 13:55:09 2015 IgnacioUpdateIOOSummary pages MCF
The summary pages show the effect of the MCL FF on MCF (left Aug 26, right Aug 30):
I'm not too sure what you meant by plotting the X & Y arm control signals with only the MCL filter ON/OFF. Do you mean plotting the control signals with ONLY the T-240Y MCL FF filter on/off? The one that reduced noise at 1Hz?
11547 Sun Aug 30 23:47:02 2015 IgnacioUpdateIOOMISO Wiener Filtering of MCL
I decided to give MISO Wiener filtering a try again. This time around I managed to get working filters. The overall performance of these MISO filters is much better than the SISO I constructed on elog:11541 .
The procedure I used to develope the SISO filters did not work well for the construction of these MISO filters. I found a way, even more systematic than what I had before to work around Vectfit's annoyances and get the filters in working condition. I'll explain it in another eLOG post.
Anyways, here are the MISO filters for MCL using the T240-X and T240-Y as witnesses:
Now the theoretical offline prediction:
The online subtractions for MCL, YARM and XARM. I show the SISO subtraction for reference.
And the subtraction performance:
11549 Mon Aug 31 09:36:05 2015 IgnacioUpdateIOOMISO Wiener Filtering of MCL
MISO Wiener filters for MCL kept the mode cleaner locked for a good 8+ hours.
11550 Mon Aug 31 14:15:23 2015 IgnacioUpdateIOOMeasured the MC_F whitening poles/zeroes
I measured the 15 Hz zero and the 150 Hz pole for the whitening filter channels of the Generic Pentek board in the IOO rack. The table below gives these zero/pole pairs for each of the 8 channels of the board.
channel zero [Hz] pole [Hz] Chan 1 15.02 151.05 C1:ASC-POP_QPD_YAW 2 15.09 150.29 C1:ASC-POP_QPD_PIT 3 14.98 150.69 C1:ASC-POP_QPD_SUM 4 14.91 147.65 C1:ALS-TRX 5 15.03 151.19 C1:ALS-TRY 6 15.01 150.51 --- 7 14.95 150.50 C1:IOO-MC_L 8 15.03 150.93 C1:IOO-MC_F
Here is a plot of one of the measured transfer functions,
and the measured data is attached here: Data.zip
EQ: I've added the current channels going through this board.
More importantly, I found that the jumpers on channel one (QPD X) were set to no whitening, in contrast to all other channels. Thus, the POP QPD YAW signals we've been using for who knows how long have been distorted by dewhitening. This has now been fixed.
Hence, the current state of this board is that the first whitening stage is disabled for all channels and the second stage is engaged, with the above parameters.
Attachment 1: Data.zip
11552 Tue Sep 1 06:58:11 2015 IgnacioUpdateWienerFilteringMCL FF => WFS1 and WFS2 FF => ARMS FF
I took some training data during Sunday night/Monday morning while the MCL MISO FF was turned on. We wanted to see how much residual noise was left in the WFS1/WFS2 YAW and PITCH signals.
The offline subtractions that can be achieved are:
For WFS1
For WFS2
I need to download data for these signals while the MCL FF is off in order to measure how much subtraction was achived indirectly with the MCL FF. In a previous elog:11472, I showed some offline subtractions for the WFS1 YAW and PITCH before any online FF was implemented either by me or Jessica. From the plots of that eLOG, one can clearly see that the YAW1 signal is clearly unchanged in the sense of how much seismic noise was mitigated indirectly torugh MCL.
Koji has implemented the FF paths (thank you based Koji) necessary for these subtractions to be implemented. The thing to figure out now is where we want to actually actuate and to measure the corresponding transfer functions. I will try to have either Koji or Eric help me measure some of these transfer functions.
Finally, I looked at the ARMS and see what residual seismic noise can be subtracted
I'm not too concerned about noise in the arms as if the WFS subtractions turn out to be promising then I expect for some of the arms seismic noise to go down a bit further. We also don't need to measure an actuator transfer function for arm subtractions, give that its essentially flat at low frequencies, (less than 50 Hz).
11553 Tue Sep 1 10:26:24 2015 IgnacioUpdateIOOMore MCL Subtractions (Post FF)
Using the training data that was collected during the MISO MCL FF. I decided to look at more MCL subtractions but this time using the accelerometers as Rana suggested.
I first plotted the coherence between MCL and all six accelerometers and the T240-Z seismometer.
For 1 - 5 Hz, based on coherence, I decided to do SISO Wiener filtering with ACC2X and MISO Wiener filtering with ACC2X and ACC1Y. The offline subtractions were as follows (RMS plotted from 0.1 to 10 Hz):
The subtractions above look very much like what you would get offline when using the T240(X,Y) seismometeres during MISO Wiener filtering. But this data was taken with the MISO filters on. This sort of shows the performance deterioration when one does the online subtractions. This is not surprising since the online subtraction performance for the MISO filters, was not too great at 3 Hz. I showed this in some other ELOG but I show it again here for reference:
Anyways, foor 10 - 20 Hz, again based on coherence, I decided to do SISO Wiener filtering with ACC2Z and MISO Wiener filtering with ACC2Z and ACC1Z (RMS plotted from 10 to 20 Hz):
I will try out these subtractions online by today. I'm still debating wether the MISO subtractions shown here are worth the Vectfit shananigans. The SISO subtractions look good enough.
Attachment 4: mclxycoh.png
11563 Thu Sep 3 00:45:25 2015 IgnacioUpdateIOORemeasured MC2 to MCL TF + Improved subtraction performance
Today, I remeasured the transfer function for MC2 to MCL in order to improve the subtraction performance for MCL and to quantify just how precisely it needs to be.
Here is the fit, and the measured coherence. Data is also attached here: TF.zip
OMG, I forgot to post the data and any residuals. LOL!
The transfer function was fitted using vectfit with a weighting based on coherence being greater than 0.95.
I then used the following filters to do FF on MCL online:
Here are the results:
Performance has definelty increased when compared to previous filters. The reason why I think we still have poor performance at 3 Hz, is 1) When I remeasured the transfer function, Eric and I were expecting to see a difference on its shape due to the whitening filters that were loaded a couple days ago. 2) Assuming the transfer function is correct, there is poor coherence at 3 Hz 3) The predicted IIR subtraction is worst at this frequency.
Attachment 1: TF.zip
11573 Fri Sep 4 08:00:49 2015 IgnacioUpdateCDSRC low pass circuit (1s stage) of Pentek board
Here is the transfer function and cutoff frequency (pole) of the first stage low pass circuit of the Pentek whitening board.
Circuit:
R1 = R2 = 49.9 Ohm, R3 = 50 kOhm, C = 0.01uF. Given a differential voltage of 30 volts, the voltage across the 50k resistor should be 29.93 volts.
Transfer Function:
Given by,
$H(s) = \frac{1.002\text{e}06}{s+1.002\text{e}06}$
So low pass RC filter with one pole at 1 MHz.
I have updated the schematic, up to the changes mentioned by Rana plus some notes, see the DCC link here: [PLACEHOLDER]
I should have done this by hand...
Attachment 1: circuit.pdf
11574 Fri Sep 4 09:23:32 2015 IgnacioUpdateCDSModified Pentek schematic
Attached is the modifed Pentek whitening board schematic. It includes the yet to be installed 1nF capacitors and comments.
Attachment 1: schematic.pdf
11584 Wed Sep 9 11:00:49 2015 IgnacioUpdateIOOLast Wiener MCL subtractions
On Thursday night (sorry for the late elog) I decided to give the MCL FF one more try.
I first remeasured the actuator transfer function because previous measurements had poor coherence ~0.5 - 0.7 at 3 Hz. I did a sine swept to measure the TF.
Raw transfer function:
The data is attached here: TF.zip
Then I made Wiener filters by fitting the transfer function data with coherence > 0.95 (on the left). Fitting all the data (on the right). Here are the filters:
The offline subtractions (high coh fit on left, all data fit on right). Notice the better IIR performance when all the TF data was fitted.
The online results: (these were aquired by taking five DTT measurements with 15 averages each and then taking the mean of these measurements)
And the subtraction performance:
Attachment 3: TF.zip
11590 Thu Sep 10 09:37:34 2015 IgnacioSummaryIOOFilters left on MCL static module
The following MCL filters were left loaded in the T240-X and T240-Y FF filter modules (filters go in pairs, both on):
FM7: SISO filters for MCL elog:11541
FM8: MISO v1 elog:11547
FM9: MISO v1.1 Small improvement over MISO v1
FM10 MISO v2 elog:11563
FM5 MISO v3.1 elog:11584 (best one)
FM6 MISO 3.1.1 elog:11584 (second best one)
11713 Mon Oct 26 18:10:38 2015 IgnacioUpdateIOOLast Wiener MCL subtractions
As per Eric's request, here is the code and TF measurement that was used to calculate the MC2 FF filter that is loaded in FM5. This filter module has the filter with the best subtraction performance that was achieved for MCL.
code_TF.zip
Attachment 1: code_TF.zip
11493 Tue Aug 11 11:56:36 2015 Ignacio, JessicaUpdatePEMWasps obliterated maybe...
The wasp terminator came in today. He obliterated the known wasp nest.
We discovered a second wasp nest, right next to the previous one...
Jessica wasn't too happy the wasps weren't gone!
11530 Tue Aug 25 16:33:31 2015 Ignacio, SteveConfigurationPEMSeismometer enclosure copper foil progress
Steve ordered about two weeks ago a roll of 0.5 mm thick copper foil to be used for the inside of the seismometer cans. The foil was then waterjet cut by someone in Burbank to the right dimensions (in two pieces, a side and a bottom for each of the three cans).
Today, we glued the copper foil (sides only) inside the three seismometer cans. We used HYSOL EE4215/HD3561(Data Sheet) as our glue. It is a "high impact, low viscocity, room temperature cure casting" that offers "improved thermal conductivity and increased resistance to heat and thermal shock." According to Steve, this is used in electronic boards to glue components when you want it to be thermal conductive.
We are going to finish this off tomorrow by gluing the bottom foil to the cans. The step after this involves soldering the side to the bottow and where the side connects. We have realized that the thermal conductivity of the solder that we are using is only ~50. This is 8 times smaller than that of copper and wil probably limit how good a temperature gradient we will have.
Some action shots,
6003 Thu Nov 24 15:48:27 2011 IllustratorUpdateelogelogd gained an immunity to googlebot
5020 Fri Jul 22 17:01:41 2011 Iron ManFrogsGeneralProof that Alberto lived through his Iron Man!
208 Alberto Stochino 67/129 585/1376 36:02 1:52 830 6:41 2:38:58 21.1 296 4:58 56:33 2:13:40 - 5:40:19
4854 Wed Jun 22 12:29:57 2011 IshwitaSummaryAdaptive FilteringWeekly summary
I started on the 16th with a very intense lab tour & was fed with a large amount of data (I can't guarantee that I remember everything....)
Then... did some (not much) reading on filters since I'm dealing with seismic noise cancellation this summer with Jenne at the 40m lab.
I'll be using the Streckeisen STS-2 seismometers & I need to use the anti aliasing filter board that has the 4 pin lemo connectors with the seismometers & its boxes that require BNC connectors. I spent most of the time trying to solder the wires properly into the connectors. I was very slow in this as this is the first time I'm soldering anything.... & till now I've soldered 59 wires in the BNC connectors....
4856 Wed Jun 22 17:35:35 2011 IshwitaUpdateGeneralHot air station
This is the new hot air station for the 40m lab.........
Attachment 1: P6220212.JPG
Attachment 2: P6220213.JPG
4959 Mon Jul 11 10:10:31 2011 IshwitaConfiguration AA board
The AA board shown in attachment 1 will be used in the seismometer hardware setup. A cartoon of this setup is shown in attachment 2.
BNC connectors are required for the seismometer breakout boxes. So the four-pin LEMO connectors present in the AA board were removed and panel mount BNC connectors were soldered to it. Red and blue colored wires were used to connect the BNC connectors to the board. Red wire connects the center of the BNC connector to a point on the board and that connection leads to the third leg (+IN) of the IC U### and the blue wire connects the shield of the BNC connector to the second leg (-IN) of the IC U###.
All the connections (including BNC to the AA board and in the AA board to all the filters) were tested using a multimeter by the beeping method and it was found that channel 10 (marked as C10) had a wrong connection from the point where the red wire (+ve) was connected to the third leg (+IN) of IC U91 and channel 32 (marked as C32) had opposite connections meaning the blue wire is connected to the third leg (+IN) of IC U311 and red wire is connected to the second leg (-IN) of IC U311.
Attachment 1: P7080305.JPG
Attachment 2: seismometers.png
5148 Tue Aug 9 02:27:54 2011 Ishwita , ManuelUpdatePEMPower spectra and Coherence of Guralps and STS2
We did offline wiener filtering on 3rd August (Elog entry) using only Guralps' channels X and Y.
Here we report the Power spectrum of the 3 seismometers (Guralp1, Guralp2, STS1) during that time.
and also the coherence between the data from different channels of the 3 seismometers.
We see that the STS is less correlated with the two Guralps. We think it is due to the wrong alignment of the STS with the interferometer's axes.
We are going to align the STS and move the seismometers closer to the stacks of the X arm.
5168 Wed Aug 10 12:28:22 2011 Ishwita , ManuelUpdatePEMAA board gain
We used a function generator, an oscilloscope and the Data Viewer to check the gain of the new AA board (used for the seismometers). Putting a sine wave of 0.3V (using a function generator) to the AA board, we could see about 500 counts in the Data Viewer. The calibration of the ADC is 214 counts/volt, so the AA board gives to the ADC an output of 0.03V. This proves that the AA board has a gain of 0.1. Guralp1 and STS1 (Bacardi), both have a gain of 10 now, that balance the AAboard gain of 0.1. If we consider the gain of AA board in our calibrated power spectrum plot of seismic signals from Guralp1 and STS1 (Bacardi), we get the following plot:
5172 Wed Aug 10 14:27:39 2011 Ishwita , ManuelUpdatePEMCalibration of Guralp and STS2
Gain of the AA board, g1 = 0.1
GURALP
Sensitivity = 800 V/ms-1
214 counts/V x 800 V/ms-1 = 13107200 counts/ms-1 -----> 7.6294e-08 ms-1/count
Gain, g2 = 10
Calibration = 7.6294e-08 ms-1/count x g1 x g2 = 7.6294e-08 ms-1/count
STS
Sensitivity = 1500 V/ms-1
214 counts/V x 1500 V/ms-1 = 24576000 counts/ms-1 -----> 4.069e-08 ms-1/count
Gain of the STS electronic breakout box, g3 = 10
Calibration = 4.069e-08 ms-1/count x g1 x g3 = 4.069e-08 ms-1/count
5197 Thu Aug 11 16:21:16 2011 Ishwita , ManuelUpdatePEMPower spectra and Coherence of Guralps and STS2
Following is the power spectrum plot (with corrected calibration [see here]) of seismometers Guralp1 and STS2(Bacardi, Serial NR 100151):
The seismometers are placed approximately below the center of the mode cleaner vacuum tube.
5200 Thu Aug 11 19:14:22 2011 Ishwita , ManuelUpdatePEMSeismometer STS2(Bacardi, Serial NR 100151) moved near ETMX
We moved the STS2(Bacardi, Serial NR 100151) to his new location and laid his cable from rack 1X7 to ETMX. The seismometer was below the mode cleaner vacuum tube before.
Now, (since 6:05pm PDT) its placed near the ETMX.
4968 Thu Jul 14 17:34:35 2011 Ishwita, ManuelHowToWienerFilteringWiener-Hopf equations
Since we are using Wiener filtering in our project, we studied the derivation of Wiener-Hopf equations. Whatever we understood we have written it as a pdf document which is attached below...
Attachment 1: derivwf.pdf
4979 Sat Jul 16 18:54:05 2011 Ishwita, ManuelConfigurationElectronicsAA board
We fixed the anti-aliasing board in its aluminum black box, the box couldn't be covered entirely because of the outgoing wires of the BNC connectors, so we drilled additional holes on the top cover to slide it backwards by 1cm and then screw it.
We had to fix the AA board box in rack 1X7, but there wasn't enough space, so we tried to move the blue chassis (ligo electro-optical fanout chassis 1X7) up with the help of a jack. We removed the blue chassis' screws but we couldn't move it up because of a piece of metal screwed above the blue chassis, then we weren't able to screw the two bottom screws again anymore because it had slided a bit down. Thus, the blue chassis (LIGO ELECTRO-OPTICAL FANOUT CHASSIS 1X7) is still not fixed properly and is sitting on the jack.
To accommodate the AA board (along with the panel-mounted BNC connectors) in rack 1X7 we removed the sliding tray (which was above the CPU) and fixed it there. Now the sliding tray is under the drill press.
Attachment 1: DSC_3236.JPG
Attachment 2: pic1.png
Attachment 3: DSC_3237.JPG
4999 Wed Jul 20 11:42:47 2011 Ishwita, ManuelUpdate Weekly summary
• We gave a white-board presentation on derivation of formula for optimum Wiener filter coefficients and wrote a latex document for the same. relevant elog entry
• We enjoyed drilling the cover of the AA board and fixing it.
• The AA board was fixed on rack 1X7 with Jenne's help. relevant elog entry
• We tried writing a simulation for the transfer function of the stacks in Matlab. Once we get some satisfying results, we will post it on the elog.
• We started reading the book 'Digital Signal Processing - Alan V. Oppenheim / Ronald W. Schafer' and are still reading it. We also tried watching lecture videos on z-transform...
5008 Wed Jul 20 22:16:27 2011 Ishwita, ManuelUpdateElectronicslying seismometer cable and plugging it
We laid the cable along the cable keeper from the BACARDI seismometer to the rack 1X6, the excess cable has been coiled under the X arm.
We plugged the cable to the seismometer and to the seismometer electronics box in rack 1X6. We also plugged the AC power cable from the box to an outlet in rack 1X7 (because the 1X6 outlets are full)
With the help of a function generator we tested the following labeled channels of AA board...
2, 3, 11, 12, 14, 15, 16, 18, 19, 20 and 24
that are the channels that can be viewed by the dataviewer, also the channel 10 can be viewed but it's labeld BAD so we cannot use it.
We leveled the seismometer and unlocked it, and saw his X,Y,Z velocity signals with an oscilloscope.
5018 Fri Jul 22 14:22:13 2011 Ishwita, ManuelUpdatePEMSTS-2 seismometer hardware testing
We have two STS-2 seismometer boxes... the blue box & the purple box. Initially we used the blue box for the STS-2 seismometer (named Bacardi by Jenne).
• Oscilloscope powered on battery was used to test the blue box by observing the velocity output of the three axes (X, Y, Z). It was found out that the mean value of DC volt of...
X = +10 V
Y = +11 V
Z = -0.1 V
Thus, X and Y axes showed abnormally high DC volt. It was also found out that in AC coupling mode of the oscilloscope... changes were observed in the signal received from Z axis when some seismic wave was generated near the Bacardi by jumping near it. No such changes were observed from signals received from X & Y axes.
• We removed the blue box and used the purple box for the same Bacardi seismometer & used the oscilloscope powered on battery to test it. It was found out that the mean value of DC volt of...
X = +4.4 V
Y = +4.4 V
Z = +4.4 V
In Ac coupling mode of the oscilloscope... changes were observed in the signals from X, Y, Z axes when someone jumped near Bacardi.
• The above voltages from the two STS-2 seismometer boxes are unsuitable for the ADC box since it works with voltages ranging from +2 V to -2 V.... meaning it will consider any voltage signal above +2 V as +2 V and any signal below -2 V as -2 V. Hence we need to find out how to use these STS-2seismometer boxes with the ADC box.
• We also tried measuring the DC volt from the shield and the center of a BNC connector corresponding to Y axis of the purple box (lets call it 'BNC-test') by using BNC-to-banana adaptors and banana wires. Signal from shield of BNC-test was sent to oscilloscope's channel 1 (connected to center of its BNC connector) and signal from center of BNC-test was sent to oscilloscope's channel 2 (connected to center of its BNC connector). On the oscilloscope screen it was observed that both the signals gave the same mean voltage output (-2.2V).
5059 Fri Jul 29 12:25:54 2011 Ishwita, ManuelUpdatePEMSTS-2 seismometer box
The 'Bacardi' STS-2 seismometer was tested with the "purple" breakout box and it was found out that all the three axes gave a voltage of 11 V (as shown on the screen of the oscilloscope) before pressing the auto-zero button and after pressing it the voltage shown was 6 V. We tried again the blue box and it was working perfectly after pushing the auto zero button (the auto zero took a few seconds). The power of the purple box is still on, we will wait a few hours, to see if something changes.
5175 Wed Aug 10 15:17:39 2011 Ishwita, ManuelUpdatePEMCalibration of Guralp and STS2
Quote: I'm pretty sure that don't have any ADC's with this gain. It should be +/- 10V for 16 bits.
Jenne told us that the ADC was +/- 2V for 16 bits so our calibration is wrong. Since, the ADC is +/- 10V for 16 bits we need to change our calibration and now we can also use the purple STS breakout box.
5185 Thu Aug 11 09:39:25 2011 Ishwita, ManuelUpdatePEMCalibration of Guralp and STS2
Quote:
Quote: I'm pretty sure that don't have any ADC's with this gain. It should be +/- 10V for 16 bits.
Jenne told us that the ADC was +/- 2V for 16 bits so our calibration is wrong. Since, the ADC is +/- 10V for 16 bits we need to change our calibration and now we can also use the purple STS breakout box.
New calibration for Guralp:
GURALP
Sensitivity = 800 V/ms-1
(215 x 0.1) counts/V x 800 V/ms-1 = 2621440 counts/ms-1 -----> 3.8147e-07 ms-1/count
Calibration = 3.8147e-07 ms-1/count
Using the above calibration we obtain the following plot:
When we compare this plot with the old plot (see here) we see that in our calibration, we have got a factor of 10 less than the old plot. We do not know the gain of the Guralp. If we assume this missing 10 factor to be the gain of Guralp then we can get the same calibration as the old plot. But is it correct to do so?
5186 Thu Aug 11 10:56:08 2011 Ishwita, ManuelUpdatePEMMoving Seismometers
Quote: We turned off the power of the seismometers and moved the Guralp1 close to the STS. Both are now situated below the center of the mode cleaner vacuum tube. We oriented the X axis of the STS & Guralp1 along the X axis of the interferometer. Then we turned on the power again, but the STS channels don't give any signal. We think this is, because we didn't push the auto zero button.
After pressing the auto-zero button (a lot of times) of the STS breakout box & aligning the bubble in the STS, we could finally get data from STS (Bacardi). So, now STS2 (Bacardi - Serial NR. 100151) is working!
5190 Thu Aug 11 13:41:36 2011 Ishwita, ManuelUpdatePEMCoherence of Guralp1 and STS2(Bacardi, Serial NR 100151)
Following is the coherence plot obtained when Guralp1 and STS2(Bacardi, Serial NR 100151) are placed very close to each other (but they aren't touching each other):
The seismometers were placed as shown in the picture below:
They are placed below the center of the mode cleaner vacuum tube.
5196 Thu Aug 11 16:15:59 2011 Ishwita, ManuelUpdatePEMCalibration of Guralp and STS2
Finally, we have found the correct calibration of Guralp and STS2 seismometers.
GURALP
Sensitivity of seismometer = 800 V/ms-1
Gain of the guralp breakout box (reference elog entry) = 20
Calibration = 3.2768e+03 counts/V x 800 V/ms-1 x 20 = 52428800 counts/ms-1 -----> 1.9073e-08 ms-1/count
STS
Sensitivity = 1500 V/ms-1
Gain of the STS electronic breakout box = 10
Calibration = 3.2768e+03 counts/V x 1500 V/ms-1 x 10 = 49152000 counts/ms-1 -----> 2.0345e-08 ms-1/count
5201 Fri Aug 12 00:18:30 2011 Ishwita, ManuelUpdatePEMCoherence of Guralp1 and STS2(Bacardi, Serial NR 100151)
We moved the seismometer STS2(Bacardi, Serial NR 100151) as we told in this Elog Entry, so the distance between Guralp1 and STS2 is 31.1m. Following is the coherence plot for this case:
then we also moved the Guralp1 under the BS and plugged it with the Guralp2 cable (at 7:35pm PDT), so now the distance between the two seismometers is 38.5m. Following is the coherence plot for this case:
14049 Tue Jul 10 16:59:12 2018 Izabella PastranaHowToComputer Scripts / ProgramsTaking Remote TF Measurements with the Agilent 4395A
I copied the netgpibdata folder onto rossa (under the directory ~/Agilent/), which contains all the necessary scripts and templates you'll need to remotely set up, run, and download the results of measurements taken on the AG4395A network analyzer. The computer will communicate with the network analyzer through the GPIB device (plugged into the back of the Agilent, and whose communication protocol is found in the AG4395A.py file in the directory ~/Agilent/netgpibdata/).
The parameter template file you'll be concerned with is TFAG4395Atemplate.yml (again, under ~/Agilent/netgpibdata/), which you can edit to fit your measurement needs. (The parameters you can change are all helpfully commented, so it's pretty straightforward to use! Note: this template file should remain in the same directory as AGmeasure, which is the executable python script you'll be using). Then, to actually set up, run, and download your measurement, you'll want to navigate to the ~/Agilent/netgpibdata/ directory, where you can run on the command line the following: python AGmeasure TFAG4395Atemplate.yml
The above command will run the measurement defined in your template file and then save a .txt file of your measured data points to the directory specified in your parameters. If you set up the template file such that the data is also plotted and saved after the measurement, a .pdf of the plot will be saved along with your .txt file.
Now if you want to just download the data currently on the instrument display, you can run: python AGmeasure -i 192.168.113.105 -a 10 --getdata
Those are the big points, but you can also run python AGmeasure --help to learn about all the other functions of AGmeasure (alternatively, you can read through the actual python script).
Happy remote measuring! :)
16794 Thu Apr 21 11:31:35 2022 JCUpdateVACGauges P3/P4
[Jordan, JC]
It was brought to attention during yesterday's meeting that the pressures in the vacuum system were not equivalent althought the valve were open. So this morning, Jordan and I reviewed the pressure gauges P3 and P4. We attempted to recalibrate, but the gauges were unresponsive. Following this, we proceeded to connect new gauges on the outside to test for a calibration. The two gauges successfully calibrated at atmosperic pressure. We then proceeded to remove the old gauges and install the new ones.
Attachment 1: IMG_0560.jpeg
Attachment 2: IMG_0561.jpeg
16808 Mon Apr 25 14:19:51 2022 JCUpdateGeneralNitrogen Tank
Coming in this morning, I checked on the Nitrogen tanks to check the level. One of the tanks were empty, so I went ahead and swapped it out. One tank is at 946 PSI, the other is at 2573 PSI. I checked for leaks and found none.
16814 Wed Apr 27 10:05:55 2022 JCUpdateCoil DriversCoil Drivers Update
18 (9 pairs) Coil Drivers have been modified. Namely ETMX/ITMX/ITMY/BS/PRM/SRM/MC1/MC2/MC3.
ETMX Coil Driver 1 (UL/LL/UR)now has R=100 // 1.2k ~ 92Ohm for CH1/2/3 S2100624 ETMX Coil Driver 2 (LR/SD)now has R=100 // 1.2k ~ 92Ohm for CH3 S2100631
ITMX Coil Driver 1 (UL/LL/UR)now has R=100 // 1.2k ~ 92Ohm for CH1/2/3 S2100620 IMTX Coil Driver 2 (LR/SD)now has R=100 // 1.2k ~ 92Ohm for CH3 S2100633
ITMY Coil Driver 1 (UL/LL/UR)now has R=100 // 1.2k ~ 92Ohm for CH1/2/3 S2100623 ITMY Coil Driver 2 (LR/SD)now has R=100 // 1.2k ~ 92Ohm for CH3 S2100632
BS Coil Driver 1 (UL/LL/UR)now has R=100 // 1.2k ~ 92Ohm for CH1/2/3 S2100625 BS Coil Driver 2 (LR/SD)now has R=100 // 1.2k ~ 92Ohm for CH3 S2100649
PRM Coil Driver 1 (UL/LL/UR)now has R=100 // 1.2k ~ 92Ohm for CH1/2/3 S2100627 PRM Coil Driver 2 (LR/SD)now has R=100 // 1.2k ~ 92Ohm for CH3 S2100650
SRM Coil Driver 1 (UL/LL/UR)now has R=100 // 1.2k ~ 92Ohm for CH1/2/3 S2100626 SRM Coil Driver 2 (LR/SD)now has R=100 // 1.2k ~ 92Ohm for CH3 S2100648
MC1 Coil Driver 1 (UL/LL/UR)now has R=100 // 1.2k ~ 92Ohm for CH1/2/3 S2100628 MC1 Coil Driver 2 (LR/SD)now has R=100 // 1.2k ~ 92Ohm for CH3 S2100651
MC2 Coil Driver 1 (UL/LL/UR)now has R=100 // 1.2k ~ 92Ohm for CH1/2/3 S2100629 MC2 Coil Driver 2 (LR/SD)now has R=100 // 1.2k ~ 92Ohm for CH3 S2100652
MC3 Coil Driver 1 (UL/LL/UR)now has R=100 // 1.2k ~ 92Ohm for CH1/2/3 S2100630 MC3 Coil Driver 2 (LR/SD)now has R=100 // 1.2k ~ 92Ohm for CH3 S2100653
Will be updating this linking each coil driver to the DCC
16820 Fri Apr 29 08:34:40 2022 JCUpdateVACRGA Pump Down
Jordan and I, in order to start pumpig down the RGA Volume, we began by opening V7 and VM. Afterwards, we started RP1 and RP3. After this, the pressure in the line between RP1, RP3, and V6 dropped to 3.4 mTorr. Next, we tried to open V6, although an error message popped up. We haven't been able to erase it since. But we were able to turn on TP2 with V4 closed. The pressure in that line is reporting 1.4 mTorr.
PRP on the sitemap is giving off an incorrect pressure for the line between RP1, RP3, and V6. This is verified by the pressure by the control screen and the physical controller as well.
Attachment 1: Screen_Shot_2022-04-29_at_8.46.53_AM.png
16823 Mon May 2 13:30:52 2022 JCUpdateCoil DriversCoil Drivers Update
The DCC has been updated, along with the modified schematic. Links have been attached.
Quote: 18 (9 pairs) Coil Drivers have been modified. Namely ETMX/ITMX/ITMY/BS/PRM/SRM/MC1/MC2/MC3. ETMX Coil Driver 1 (UL/LL/UR)now has R=100 // 1.2k ~ 92Ohm for CH1/2/3 S2100624 ETMX Coil Driver 2 (LR/SD)now has R=100 // 1.2k ~ 92Ohm for CH3 S2100631 ITMX Coil Driver 1 (UL/LL/UR)now has R=100 // 1.2k ~ 92Ohm for CH1/2/3 S2100620 IMTX Coil Driver 2 (LR/SD)now has R=100 // 1.2k ~ 92Ohm for CH3 S2100633 ITMY Coil Driver 1 (UL/LL/UR)now has R=100 // 1.2k ~ 92Ohm for CH1/2/3 S2100623 ITMY Coil Driver 2 (LR/SD)now has R=100 // 1.2k ~ 92Ohm for CH3 S2100632 BS Coil Driver 1 (UL/LL/UR)now has R=100 // 1.2k ~ 92Ohm for CH1/2/3 S2100625 BS Coil Driver 2 (LR/SD)now has R=100 // 1.2k ~ 92Ohm for CH3 S2100649 PRM Coil Driver 1 (UL/LL/UR)now has R=100 // 1.2k ~ 92Ohm for CH1/2/3 S2100627 PRM Coil Driver 2 (LR/SD)now has R=100 // 1.2k ~ 92Ohm for CH3 S2100650 SRM Coil Driver 1 (UL/LL/UR)now has R=100 // 1.2k ~ 92Ohm for CH1/2/3 S2100626 SRM Coil Driver 2 (LR/SD)now has R=100 // 1.2k ~ 92Ohm for CH3 S2100648 MC1 Coil Driver 1 (UL/LL/UR)now has R=100 // 1.2k ~ 92Ohm for CH1/2/3 S2100628 MC1 Coil Driver 2 (LR/SD)now has R=100 // 1.2k ~ 92Ohm for CH3 S2100651 MC2 Coil Driver 1 (UL/LL/UR)now has R=100 // 1.2k ~ 92Ohm for CH1/2/3 S2100629 MC2 Coil Driver 2 (LR/SD)now has R=100 // 1.2k ~ 92Ohm for CH3 S2100652 MC3 Coil Driver 1 (UL/LL/UR)now has R=100 // 1.2k ~ 92Ohm for CH1/2/3 S2100630 MC3 Coil Driver 2 (LR/SD)now has R=100 // 1.2k ~ 92Ohm for CH3 S2100653 Will be updating this linking each coil driver to the DCC
16825 Tue May 3 13:18:47 2022 JCUpdateVACRGA Pump Down
Jordan, Tega, JC
Issue has been resolved. Breaker on RP1 was tripped so the RP1 button was reporting ON, but was not actually on which continuously tripped the V6 interlock. Breaker was reset, RP1 and RP3 turned on. The V6 was opened to rough out the RGA volume. Once, pressure was at ~100mtorr, V4 was opened to pump the RGA with TP2. V6 was closed and RP1/3 were turned off.
RGA is pumping down and will take scans next week to determine if a bakeout is needed
Quote: Jordan and I, in order to start pumpig down the RGA Volume, we began by opening V7 and VM. Afterwards, we started RP1 and RP3. After this, the pressure in the line between RP1, RP3, and V6 dropped to 3.4 mTorr. Next, we tried to open V6, although an error message popped up. We haven't been able to erase it since. But we were able to turn on TP2 with V4 closed. The pressure in that line is reporting 1.4 mTorr. PRP on the sitemap is giving off an incorrect pressure for the line between RP1, RP3, and V6. This is verified by the pressure by the control screen and the physical controller as well.
16842 Tue May 10 15:46:38 2022 JCUpdateBHDRelocate green TRX and TRY components from PSL table to BS table
[JC, Tega]
Tega and I cleaned up the BS OPLEV Table and took out a couple of mirrors and an extra PD. The PD which was removed is "IP-POS - X/Y Reversed". In addition to this, the cable is zip-tied to the others located on the outside of the table in case this is required later on.
Next, we placed the cameras and mirrors for the green beam into their postions. A beam splitter and 4 mirrors were relocated from PSL table and placed onto the BS Oplev table to complete this. I will upload the picture of the newly updated photo with arrows of the beam routes.
Attachment 1: IMG_0741.jpeg
Attachment 2: IMG_0753.jpeg
16845 Wed May 11 15:49:42 2022 JCUpdateOPLEV TablesGreen Beam OPLEV Alignment
[Paco, JC]
Paco and I began aligning the Green Beam in the BS Oplev Table. while aligning the GRN-TRX, the initial beam was entering the table a bit low. To fix this, Paco went into the chamber and correcting the pitch with the steering mirror. The GRN-TRX is now set up, both the PD and Camera. Paco is continuing to work on the GRN-TRY and will update later on today.
In the morning, I will update this post with photos of the new arrangement of the BS OPLEV Table.
Update Wed May 11 16:54:49 2022
[Paco]
GRY is now better mode matched to the YARM and is on the edge of locking, but it more work is needed to improve the alignment. The key difference this time with respect to previous attempts was to scan the two lenses on translation stages along the green injection path. This improved the GTRY level by a factor of 2.5, and I know it can be further improved. Anyways, the locked HOMs are nicely centered on the GTRY PD, so we are likely done with the in-vac GTRY GTRX alignment.
Update Wed May 12 10:59:22 2022
[JC]
The GTRX PD is now set up and connected. The camera have been set to an angle because the cable to connect it is too thick for the camera to maintain its original position along the side.
Attachment 1: IMG_0770.jpeg
ELOG V3.1.3- | 2022-09-27 17:26:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45498907566070557, "perplexity": 6742.686063631823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335054.79/warc/CC-MAIN-20220927162620-20220927192620-00490.warc.gz"} |
https://paperswithcode.com/paper/novel-diffusion-derived-distance-measures-for | # Novel diffusion-derived distance measures for graphs
10 Sep 2019C. B. ScottEric Mjolsness
We define a new family of similarity and distance measures on graphs, and explore their theoretical properties in comparison to conventional distance metrics. These measures are defined by the solution(s) to an optimization problem which attempts find a map minimizing the discrepancy between two graph Laplacian exponential matrices, under norm-preserving and sparsity constraints... (read more)
PDF Abstract | 2020-06-03 03:26:48 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8034272789955139, "perplexity": 2476.447457042107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347428990.62/warc/CC-MAIN-20200603015534-20200603045534-00315.warc.gz"} |
https://www.cableizer.com/documentation/rho_i/ | # Thermal resistivity of insulation
The values of the thermal resistivity of insulation are taken from standard IEC 60287-2-1 Ed.2.0 where available with the following additions:
• For the purposes of current rating calculations, the semiconducting screening materials are assumed to have the same thermal properties as the adjacent dielectric materials.
• Value for Polypropylene (PP) is taken from professionalplastics.com
• Value for Silicone rubber (SiR) is taken from shinetsusilicone-global.com
Symbol
$\rho_{i}$
Unit
K.m/W
$U_{n}$
Used in
$d_{ct}$
$T_{1}$
$T_{1t}$
Choices
MaterialValue
PE3.5
HDPE3.5
XLPE3.5
XLPEf3.5
PVC5.0 (U_n <= 3kV)
6.0 (U_n > 3kV)
EPR3.5 (U_n <= 3kV)
5.0 (U_n > 3kV)
IIR5.0
PPLP5.5
Mass6.0
Oil5.0
PP4.5
SiR5.0
EVA4.35 | 2020-08-11 13:05:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.920121431350708, "perplexity": 9496.665152431926}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738777.54/warc/CC-MAIN-20200811115957-20200811145957-00313.warc.gz"} |
http://mathoverflow.net/questions/22624/example-of-a-good-zero-knowledge-proof/46036 | # Example of a good Zero Knowledge Proof.
I am working on my zero knowledge proofs and I am looking for a good example of a real world proof of this type. An even better answer would be a Zero Knowledge Proof that shows the statement isn't true.
-
What's wrong with the very nice Wikipedia example? – Ben Webster Apr 26 '10 at 17:44
@Ben Webster I'd like something real world. Like showing that a real cyrpto system keeps us safe, or is vulnerable to attack. – George Apr 26 '10 at 18:06
Then I fail to understand why you think mathematicians are the right people to ask. Wouldn't StackOverflow be much better for this? – Ben Webster Apr 26 '10 at 18:21
I wish someone StackOverflow would post a math proof. – George Apr 26 '10 at 18:30
The classic example, given in all complexity classes I've ever taken, is the following: Imagine your friend is color-blind. You have two billiard balls; one is red, one is green, but they are otherwise identical. To your friend they seem completely identical, and he is skeptical that they are actually distinguishable. You want to prove to him (I say "him" as most color-blind people are male) that they are in fact differently-colored. On the other hand, you do not want him to learn which is red and which is green.
Here is the proof system. You give the two balls to your friend so that he is holding one in each hand. You can see the balls at this point, but you don't tell him which is which. Your friend then puts both hands behind his back. Next, he either switches the balls between his hands, or leaves them be, with probability 1/2 each. Finally, he brings them out from behind his back. You now have to "guess" whether or not he switched the balls.
By looking at their colors, you can of course say with certainty whether or not he switched them. On the other hand, if they were the same color and hence indistinguishable, there is no way you could guess correctly with probability higher than 1/2.
If you and your friend repeat this "proof" $t$ times (for large $t$), your friend should become convinced that the balls are indeed differently colored; otherwise, the probability that you would have succeeded at identifying all the switch/non-switches is at most $2^{-t}$. Furthermore, the proof is "zero-knowledge" because your friend never learns which ball is green and which is red; indeed, he gains no knowledge about how to distinguish the balls.
-
+1, by a color-blind. Just to support the assertion 'most color-blind people are male': color-blindness is caused by a recessive gene worn by the X chromosome. This is completely analogous to (but not correlated with) haemophilia. Therefore, if $p$ is the probability that a man be color-blind, the probability for women drops to $p^2$. According to Wikipedia, $p$ is close to $0.08$ and therefore $p^2$ is close to $0.005$. – Denis Serre Jan 4 '11 at 14:33
Good example. In the case that the person is actually colour-blind, what would be a non-zero-knowledge proof? I mean, how could I convince a colour-blind person a ball is green? – Tony Huynh Dec 5 '12 at 23:00
@Tony: well for a non-zero-knowledge proof, you can label each ball according to its colour and give both to your friend, who conceals them and then randomly shows you either ball (so that you can't read the label), and you tell him what label it must have. Repeat. This will convince him that they are different, as before, but also that you know which one is labelled green. However you can't ever convince him that a ball is green as you could simply say the red one is, so he only learns if he believes you. – Granger Dec 5 '12 at 23:50
Thanks Granger, that makes sense. – Tony Huynh Dec 6 '12 at 14:57
An example I like is this. I think I heard it from Avi Wigderson but I can't quite remember. (I don't know who actually thought of it.) You want to prove that a graph can be properly coloured with three colours. So you draw a picture of the graph and then make six copies of that picture. You then properly colour the vertices with red, blue and green, but you also colour the other five copies of the graph in the same way but permuting the colours (so, for instance, in one of them you colour all vertices red that you previously coloured blue and all vertices blue that you previously coloured red). You now repeatedly do the following. Randomly pick one of your pictures, cover each vertex with a coin (so that its colour cannot be seen) and allow the other person to pick an edge and remove the two coins at its end vertices. The other person will obtain from this the information that those two vertices are coloured differently, but will obtain no other information about the colouring.
Now if there is no proper colouring of the graph, and you keep presenting the other person with colourings of the graph, then they can randomly choose their edges, and sooner or later, with very high probability, they will hit an edge that has the same colour at each end. (For the probability to be high, you need to go for many more steps than there are edges in the graph.) So from the fact that this never happens, they can deduce that with extremely high probability you do in fact have a proper colouring of the graph.
-
This seems like less of a 0-knowledge proof than a $\varepsilon$-knowledge proof: if the graph has $N$ vertices and we pull way more than $N$ times, it would be possible for your friend to actually keep track of which vertices have appeared with which colors and expect, eventually, pull the same vertex, edge, triangle, etc. repeatedly and with some overlapping colors that they can use to rebuild each of the six copies from. Would take a lot of memory, though. – Ryan Reich Dec 5 '12 at 15:08
@Ryan: I think it really is zero knowledge. Even if your friend runs so many trials that he's seen every edge on each of the 6 copies lots of times, he won't know which trials correspond to the same copy of the graph (except for those pairs of trials where he chose the same edge), so he won't be able to assemble the information he's seen into a coloring. – Andreas Blass Dec 5 '12 at 15:51
@Andreas: I think you can reassemble, if not the whole coloring, then possibly large chunks of it. For example, knowing just the colors of one edge, you can fill in the colors of any triangle containing that edge. If the graph has large triangulated pieces (which you do know just by looking at it uncolored) then you get big colored patches, which at the least reduces the effort you have to make to finish the job. Only if the graph is bipartite (i.e. has no triangles) is this truly zero knowledge, and if it's bipartite, then it's 2-colorable. Though that requires a proof :) – Ryan Reich Dec 5 '12 at 18:08
@Ryan: If the graph has large triangulated patches, then, once you've colored an edge in such a patch, the coloring of the whole patch is determined (since there are only 3 colors) even without running this (allegedly) zero-knowledge protocol. So the protocol has given you no new information. Also, "bipartite" is a lot stronger than "no triangles"; it means there are no cycles of any odd length. – Andreas Blass Dec 5 '12 at 21:47
Demonstrating an attack on a cryptosystem is very similar to the colored balls example in Ryan's answer. Suppose Alice and Bob have a means of communicating messages and Eve wants to prove that it is insecure, without revealing the method used to exploit the system. Alice and Eve can simply agree that Alice will send a sequence of random messages to Bob. If Eve can tell Alice the contents of the messages, then with high probability Eve must have an attack on the cryptosystem.
-
An excellent example of such a proof is one based on Sudoku, and there's even a detailed demonstration for how to conduct it. I've done this in class a number of times to show ZKPs to students.
There's more as well, at Moni Naor's page: http://www.wisdom.weizmann.ac.il/~naor/PAPERS/sudoku_abs.html
-
This is awesome! – George Apr 27 '10 at 5:24
Another classic example is this. There are two public graphs F and G. Alice knows an isomorphism from F to G. She wants to prove to Bob that F and G are isomorphic graphs, but does not wish to reveal the isomorphism. The procedure is the following. Alice permutes the labels of vertices F randomly, and reveals the graph H he's got that way. She can then compute an isomorphism from H to both F and G, but Bob can't. Bob then randomly chooses either F or G, then which Alice reveals the isomorphism from H to that graph. Repeat.
The problem with this method is that it can be used in practice only if one can generate graphs on which the graph isomorphism problem is hard to decide. That is not currently the case, and might not ever be, if eg. graph isomorphism can be decided efficiently.
-
Most of the examples given above are nice textbook examples of ZK proofs meant for students. Here's something I'd call more a "real life" example. Assume that Alice has a secret key $x$ and public key $y = g^x$. (Here we assume that $g$ generates a group $G$ of size $p$, for large prime $p$.) She wants to convince Bob that she knows $x$ without revealing $x$. This is a typical example of an authentication/identification protocol.
A simple version of this protocol is as follows: Alice generates a new random value $r$, and sends $a = g^r$ to Bob. Bob replies with a random $k$-bit challenge $c$, and then Alice sends $z = c x + r \mod{p}$ to Bob. Bob accepts iff $g^z = y^c a$.
This is a special "challenge-response" type of protocol, also known as a $\Sigma$-protocol. The concrete protocol above was proposed by Schnorr. It is not completely ZK by itself, but it is zero knowledge if we assume that Bob is honest ($c$ is really chosen randomly).
The proof of this fact: we show by using simulation, that Bob can create $(a', c', z')$ that comes from the same distribution as the real protocol view $(a, c, z)$, but without knowing the secret key $x$. The trick is that we allow Bob to choose $c'$ and $z'$ first and then to choose $a$ so that the verification equation will accept.
Namely, the simulator creates random $c'$ and $z'$, and then chooses $a' = g^{z'} / y^{c'}$. clearly, this triple $(a', c', z')$ satisfies the verification. Moreover, in the original protocol $(a, c, z)$ is a tuple of random values from $(G, \{0, 1\}^k, \mathbb{Z}_p)$ modulo the verification requirement. But so is the simulated triple.
That "honest-verifier zero knowledge" proof (also a clear textbook protocol by now) can be made fully zero knowledge by a few additional tricks (basically, letting Bob to "commit" to $c$ before he sees $a$ - the actual solution is slightly more complicated).
The protocol is clearly of "real life" flavor, both because it can be seen to have real applications (proving you know your secret key without revealing it = authentication) and since it is very efficient.
-
An easy ZKP-based authentication scheme is one that uses a deck of shuffled playing cards and a paper bag:
Suppose Alice and Bob want to authenticate using the secret number "27". Alice takes the deck of cards, places her hands (with the cards) inside the bag and begins drawing card after card until she has reached the 27th card. She pulls this one card out of the bag and reveals it to herself and Bob.
Alice places the cards back on the deck in the same order she drew them (not destroying the original order).
Now it's Bob's turn. He is handed the deck of cards and hides his hands (and the counting of cards) in the paper bag. If he knows the secret number (27) then he should draw down to the 27th card and reveal the same card Alice did.
If Alice and Bob draw different cards then they did not draw the same number of cards.
One more:
Suppose Alice and Bob want to authenticate using the secret number of "27" but don't want to reveal it to one another. In this scenario they use a third party, Charlie.
Charlie randomly comes up with a number (any number will do) -- we'll say 15 -- and whispers it to Alice. Alice then adds the secret number (27) to Charlie's number (15) and whispers the total (42) to Bob.
Bob subtracts the secret number (27) from the total (42) and whispers the result (15) to Charlie.
If Charlie is read back his own number (15) then he can declare Alice and Bob have successfully authenticated.
-
As a cryptosystem, the reliance on Alice and Bob not looking at the cards is a weakness in the first system. – Max Nov 14 '10 at 11:47
It's assumed that Alice and Bob are, in the first scenario, authenticating in a face-to-face manner as to make sure no slight of hand or cheating is occurring. You're right, though, in that it is not a tamper proof system. – Rey Nov 17 '10 at 0:41 | 2014-07-24 00:27:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7133046388626099, "perplexity": 582.1006176489783}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997883905.99/warc/CC-MAIN-20140722025803-00045-ip-10-33-131-23.ec2.internal.warc.gz"} |
https://anupanandsingh.wordpress.com/ | ## Why I Loved Fluid Dynamics as a Kid (and Still Do)
Like many other Physics undergraduate/graduate students, my first introduction to Prof. Julia Yeomans was as the author of the brilliantly written textbook Statistical Mechanics of Phase Transitions. Prof. Yeomans is a theoretical physicist at the University of Oxford who does some pretty cool stuff involving bacterial swimmers and water drops on hydrophobic surfaces among many other things.
It was indeed a delight, then, for the seven-year-old fluid dynamics-loving kid in me to listen to Prof. Yeomans speaking about the science of fluids yesterday at Kappi with Kuriosity.
Hold on. When I say the seven-year-old fluid dynamics-loving kid, I do not mean a seven-year-old crossing-out-the-time-derivative-term-in-the-Navier-Stokes-equation kid but rather a seven-year-old intrigued-with-his-toy-steamboat kid.
It was a lot fun, I remember, watching the noisy steamboat moving around in a tub of water. A couple of years later, it was Janice VanCleave’s Physics for Every Kid that added more substance to the intrigue and delight. A remarkable book in many ways, Physics for Every Kid was where I got my first introduction to the physics behind the swinging of a ball and the upward push or lift on an aircraft. A number of years and physics courses later, I do understand more of fluid dynamics than I did then, or so I think. Though, in any case, the love for the science of fluids remains the same.
Well, as I said, yesterday’s talk was a delight. You should watch it online when it’s up; you’ll find it on the ICTS YouTube channel.
#### To Maryam Mirzakhani, one of the finest mathematicians of our times, who passed away yesterday at the age of 40.
Dear Maryam
Your going away is, indeed, a loss, to mathematics, and to all of us over the world for whom you were no less than an icon. But in this loss, I believe, many more young people will find hopes, hopes to achieve greatness and to contribute to the world in whatever way they can.
When, at present, we are busy building walls and spewing hatred and animosity, I wish that your life and work will stand exemplary of noble human pursuits, of human creativity and the universality of human endeavours.
I believe that very much like me, there are numerous students and young mathematicians and scientists across the globe, who, even though, barely understand the intricacies of your work, realise the importance of the role you played as a mathematician and as a citizen of the world.
I believe that you will forever remain a wonderful example telling us how shallow and meaningless are the stereotypes that we fabricate. And I hope that in your going away, we will all find motivation to work wonders, for ourselves, and for the world.
A Young Fan
## All Those Years Ago
#### What’s the decade dearest to a Beatles-loving physics enthusiast? The 60s, of course!
Now yesterday and today, our theatre’s been jammed with newspapermen and hundreds of photographers from all over the nation. And these veterans agreed with me that this city never has witnessed the excitement stirred by these youngsters from Liverpool, who call themselves The Beatles. Now tonight, you’re gonna twice be entertained by them. Right now, and again in the second half of our show. Ladies and gentlemen.., The Beatles.
This was the iconic television show host Ed Sullivan’s introduction to the Fab Four in New York City in February 1964. George, John, Paul and Ringo, the four twenty-something English lads who had started playing together less than four years ago, were already a sensation on both sides of the Atlantic. They were to sweep the entire of the decade, a decade that is dear to me for one more reason – particle physics!
I’m a second generation Beatles fan (or the third, it doesn’t matter, anyway). And for about last five years, I have this bug called The Beatles. (Sorry, couldn’t resist the pun.) Born too late to attend a live Beatles performance or to buy vinyl records of Beatles albums, I fell in love with The Beatles when I discovered them on the internet.
It was impossible not to fall for them; in less than a decade, they’d influenced music like no band or artist had ever done. They were winning hearts and earning lots of money (they are the most commercially successful band of all time).
Fifty years ago, in the summer of ’67, they released their eighth studio album Sgt. Pepper’s Lonely Hearts Club Band. An experiment in a number of ways, the album became an instant commercial and critical hit (like almost all of their albums and singles). (Sgt. Pepper’s ranks number one in the Rolling Stones magazine list of the 500 Greatest Albums of All Time.)
But 1967 was phenomenal for another important reason. That very year, Abdus Salam and Steven Weinberg, two scientists working in the United States, produced their seminal work on the unification of the electromagnetic and the weak nuclear force.
Now a little something on what this means to help you understand and appreciate its importance. All objects in this universe interact with each other through one or more of four forces – what we call the fundamental interactions. Now, each of these forces has a very specific characteristic; and obtaining a proper and complete description of each of them is quite non-trivial, enough to have kept physicists busy all these years.
Though understanding the behaviour of all these forces is difficult, one smart thing to do is to think of them as interactions between particles mediated by, well, some other particles. So, for example, you’ve electromagnetic interactions between charged particles which are mediated by photons, the corpuscles of light. And gravity which can be understood as a manifestation of interactions mediated by what are very creatively (?) called gravitons.
Now, this is where Salam and Weinberg come into the picture. What these two gentlemen were able to show was that the weak and electromagnetic interactions, which are mediated by different particles, and of course, have different behaviours, are two different manifestations of one fundamental electroweak interaction. First proposed by Sheldon Glashow in 1961, the electroweak unification, as it is called, marks an important milestone in our understanding of nature at the most fundamental levels. (Glashow, Salam and Weinberg were awarded the Nobel Prize in Physics in 1979 for their contributions to the theory of electroweak interaction.)
In fact, the entire of the sixth decade of the last century saw numerous contributions coming from theoreticians and experimentalists alike – all these culminating into what can unarguably be called a triumph of human endeavours – the Standard Model of particle physics. Efforts of countless individuals have given us this fine theory which not only classifies all the elementary particles but also explains how the electromagnetic, strong and weak interactions are related to one another. (As for gravity, it still is a hard to nut to crack.)
The sixties were rather strange, though; the silliest and longest of wars was going on in Vietnam, there were successful lunar missions, but there were assassinations, too – JFK, Martin Luther King Jr. were killed; it was like a win some, lose some kind of thing, perhaps it has always been. But then, there were The Beatles – young, energetic and innovative. And thinking about why a twenty-something guy in India, over forty years after the last Beatles performance, is fascinated by them, I realised something.
It is not just about the music, it is about the themes as well. So, you have this boy band, singing beautiful songs about love and friendship. Four twenty-something English lads generating admiration with their songs and charming personas, captivating an entire generation (and more). Listen to this to get a feel of what I mean (and possibly, to get a break from this tedious read as well).
Following The Beatles album by album made me realise that all the while these guys were growing up, too. Their music was maturing, and so were their themes. I couldn’t help but admire the variety and the depth in their themes. They were not just talking of love and friendship now, there were also talking of nostalgia and were spinning yarns about the lives of regular people and at the same time, using their songs to express their worldviews. They were experimenting with music and songwriting, and in the process, producing a sheer volume of invention. And if I am able to connect to a band that played all those years ago, it is because of the wonderful music, for sure, but probably, it is also because their songs represent what I feel; they cover an entire spectrum of emotions, all the essential themes.
Sheldon Glashow, Abdus Salam, Steven Weinberg, George Harrison, John Lennon, Paul McCartney and Ringo Starr, they all symbolise how important ingredients creativity and innovation are in human endeavours. We, as a species, have come quite far, learning and evolving, but probably sometimes repeating the same mistakes again and again.
I’m not sure if the world today is better than that in the sixties or not, but for sure, not everything is fine at present. I’ll leave you with something which I believe is important for all of us to understand, a 1968 Beatles song called Revolution. (And as I tell everyone whenever suggesting a Beatles song, read the lyrics of the song as well, especially when it is as meaningful a song as this one.) Because, though these are troubled times, don’t you know it’s going to be…alright!
## Random Matrices in Three Short Stories – III
#### This is the third and final part of the series. For those who haven’t already, going through the first two posts, which can be found here and here, will be a good idea.
Three
Why do we want a quantum theory of gravity? We just want it, okay?
Hold on, this isn’t me. This was the American theoretical physicist John Preskill at a conference earlier this year [1]. And though, in almost all probability, he was trying to be funny, this does give an idea about how difficult (and often, frustrating) it is for theoretical physicists to answer why-do-we-want-this or what-use-doing-that questions. (By the way, apart from his seminal contributions to the fields of quantum information and quantum gravity, Preskill is also famous for winning a black hole bet against Stephen Hawking which Hawking conceded by offering him a baseball encyclopedia.)
But something about the notion of universality in random matrix theory before I go on to talk about the theory of quantum gravity, cosmological inflation, black holes, wormholes and time travel. (Okay, I was kidding about the last two – no wormholes and time travel here.)
Large matrices with random entries have some very intricate (and interesting) statistical properties. And the reason they come in so handy in our attempts to answer questions of different sorts is the applicability of the statistical laws of random matrix ensembles to all those systems which have the same symmetries as those of the ensemble.
These statistical laws often involve eigenvalues of the matrices. Eigenvalues are certain quantities associated with matrices. In fact, all square matrices (arrays of numbers with the same number rows and columns) can be characterised by their eigenvalues. And in most cases, computing the eigenvalues is not a very difficult task.
##### For a matrix H, all the possible values of λ satisfying the above equation are its eigenvalues. I is a square matrix the size of H with zeros along the diagonal going from left to right. And det() represents computing the determinant which again is a number associated with a matrix.
For a set of random numbers, it is very natural and useful to talk about the probability distribution – how probable the occurrence of each of the numbers is. Now, if your matrices have random elements, its eigenvalues will be random as well, which makes it useful to talk about the eigenvalue distributions of the ensembles.
These distributions in random matrix theory are universal in the sense that they don’t depend on the underlying structures as long as they have a common overall symmetry. In the context of a physical system, these eigenvalues correspond to the energy levels of the system and, in essence, contain the information about its dynamical properties. Superficially, this is what a lot of random matrix analyses is about. The energy spectra of systems which are difficult to interpret otherwise find meaning in the language of random matrix ensembles.
But understanding the underlying dynamics of many systems is not easy. And as we have come to realise in the case of black holes – it is certainly not.
Black holes are interesting; though calling them interesting might be an understatement. They are formed in a number of different ways all over the universe. Within the framework of the theory of general relativity, one can, in quite a straightforward manner, show that a black hole has a gravitational pull so huge that nothing can escape it, not even light.
But everything is not that straightforward. Though general relativity explains the force of gravitation, we believe our universe to be inherently quantum mechanical, that is, we expect every object in the universe to follow the laws of quantum theory. And taking quantum mechanical laws into consideration, one can show (as Hawking did for the first time in the early 1970s) that black holes radiate stuff [2]. Now, this may seem puzzling; in fact, it is puzzling. But what this radiation also indicates is that black holes are thermal objects – they have thermodynamic properties (say, temperature, for example) in a manner similar to how your daily cup of coffee does.
It is evident that black holes demand a better understanding than the one at present. And this can possibly be achieved by constructing a unified framework incorporating both gravity and quantum mechanical principles. This is somewhere random matrix analysis turns out to be useful – understanding energy spectra of black holes.
Depending on the theoretical framework in which the calculations are being done, black holes can be studied using different models. In principle, the details of the energy spectra can be worked out for each of these models. But as it turns out, not all black hole models are soluble. The trick is to then use random matrix models which can possibly mimic the expected properties.
Black holes have posed some of the most interesting challenges to those working in physics for the last hundred years. They have also motivated a quest for quantum gravity. However, a theory of quantum gravity will possibly also explain many other mysteries. One of them is cosmological inflation. Based on a large amount of astronomical data, it has been conjectured that our universe underwent a phase of extremely rapid expansion for a small fraction of second just after the big bang. This conjecture also explains the origin of the large scale structures in the universe pretty well. However, what is missing is a concrete theoretical underpinning of inflation itself. Among various proposals to explain inflation, there are a few which employ random matrix techniques as well [3]. But again, a complete understanding of inflation, like that of black holes, still belongs to the large set of open problems waiting to be solved!
#### References
[1] John Preskill, Quantum Information and Spacetime (I)https://youtu.be/td1fz5NLjQs, Tutorial at the 20th Annual Conference on Quantum Information Processing, 2017.
[2] Leonard Susskind, Black Holes and the Information Paradox, Scientific American, April 1997.
[3] M.C. David Marsh, Liam McAllister, Enrico Pajer and Timm Wrase, Charting an Inflationary Landscape with Random Matrix Theory, JCAP 11, 2013, [arXiv:hep-th/1307.3559].
## Random Matrices in Three Short Stories – II
#### This is the second part of the series – the first one can be found here.
Two
If you ever happen to be in a conversation with someone with a devout love for number theory, it won’t be long before the conversation will evolve into one about prime numbers – about how fascinating these elementary, yet mystical creatures are and about how despite their apparent randomness, there is an intriguing rhythm, a captivating music in their distribution.
Prime numbers are the prime ingredients of natural numbers (and of interesting conversations, too). Take any natural number (greater than one) – you can always write it as a product of certain primes. And these primes themselves cannot be expressed as the product of smaller numbers. Simple as they sound from this definition, prime numbers have a very surprising and inexplicable manner of showing up on the number line. Much of folklore and mathematical literature alike have their origin in this mystery surrounding primes.
But hold on, why are we talking about primes in a story about random matrices?
We wouldn’t have been, had it not been for a chance teatime conversation between Freeman Dyson and mathematician Hugh Montgomery (interesting conversations, remember) [1].
In the spring of 1972, Montgomery was visiting the Institute for Advanced Study at Princeton, New Jersey to discuss his recent work on the zeros of the Riemann zeta function with fellow mathematician Atle Selberg. Selberg happened to be a leading figure of the time on the Riemann zeta function and the much fabled Riemann Hypothesis.
First formulated by Georg Friedrich Bernhard Riemann in his 1859 paper, the Riemann hypothesis is a hugely famous and celebrated conjecture yet to be (dis)proved. (And guess what, this was the only number theory paper the mathematician extraordinaire Riemann wrote in his entire lifetime!)
Bernhard Riemann made an important observation that the distribution of primes was intricately related to the properties of a function that now bears his name (it was first introduced by the Swiss mathematician Leonhard Euler, though). Now, as it turns out (and as you’ll see if you happen to delve more into abstract mathematics), mathematical functions and objects have personalities of their own. The zeta function, as Riemann observed, appeared to have an interesting one.
##### Behold the mighty Riemann zeta function! The Riemann Hypothesis states that all the interesting values of s for which ζ(s) = 0 lie on a straight line in the complex plane.
Numbers as you probably know, can have two parts – a real part and an imaginary one (and you’ll probably realize later that the imaginary part is not so imaginary after all).
##### A complex number has two parts – real and imaginary (here, a and b respectively). The tiny i hanging alongside the imaginary part, b is what makes itthe imaginary part. (i is the imaginary unit, the square root of 1.)
The Riemann zeta function has some non-interesting zeros (values of s for which ζ(s) = 0) at s = -2, -4, -6 and so on. What Riemann conjectured was that all the other zeros of the function will always have their real parts equal to 1/2 – and hence, when you plot them on the complex plane, they’ll all lie on a vertical line [2].
Now, over a century and a half later, all we know is that this hypothesis appears to be true. We know it to be true for the first 1013 (!) zeros we’ve found till now, but have no idea whether it holds true in general or not. (For those of you who refuse to take abstract concepts arising in pure mathematics seriously, the zeta function will keep appearing in your life even if you restrict yourself to more concrete (?) areas of applied mathematics and/or physics.)
As has been the norm with challenging problems since the beginning of the previous century, the Riemann Hypothesis along with five other open problems are listed as the Millennium prize problems by the Clay Mathematics Institute – each with a bounty of a million dollars.
But this story is not about the million dollars – mathematicians don’t care much about it anyway (or so I am guessing). In the early 70s, number theorist Hugh Montgomery was working on the statistical distribution of the interesting zeros of the Riemann zeta function on the critical line – the vertical line where all of them are conjectured to lie on.
When, during their teatime conversation at the Institute for Advanced Study, Montgomery mentioned his recent results to Freeman Dyson, both Dyson and Montgomery were up for a surprise. Dyson realized that the statistical distribution of the Riemann zeros that Montgomery had worked out had a lot in similar with the statistical properties of a certain class of Random Matrices Dyson had earlier looked at while working on the physics of heavy atoms. And more importantly, the theory of random matrices had, by then, pretty well established results that could be applied to Montgomery’s problem [1].
Dyson then wrote a letter to Selberg referring Madan Lal Mehta’s book on random matrices to be looked up for the results that were needed by Montgomery. (You must read this article published in the IAS Spring 2013 newsletter; the article also has a scanned image of Dyson’s handwritten note to Selberg!)
This striking similarity in the statistics of the Riemann zeros and the spectra of heavy atoms points towards an universality in the underlying structures. Stronger results coming from probability theory and mathematical statistics appear to give a clearer picture; though much of it still appears to be an outright miracle. More on this notion of universality in the last part of the series when I’ll narrate the story of one of the greatest quests in present day science – the quest for a theory of quantum gravity.
#### References
[1] Kelly Devine Thomas, From Prime Numbers to Nuclear Physics and Beyond, The Institute Letter Spring 2013.
[2] Peter Sarnak, Problems of the Millennium: The Riemann Hypothesis, 2005.
## Random Matrices in Three Short Stories
#### This post is a slightly modified version of a student talk I gave for the IISER Pune Science Club earlier this year. All the three stories (this one and the two to follow) will be accessible to anyone with some exposure to high school physics and mathematics. Readers with a formal training in advanced level physics and/or mathematics have all the rights to criticize the author for an over-simplistic presentation.
One
The first story begins with that of my personal hero, Freeman John Dyson. Growing up as a kid in England in the period between the two of the most disastrous wars this planet has seen, Dyson developed a strong interest for everything numbers. This interest, quite naturally, evolved into a passion for physics and mathematics. When the eighteen-year-old Dyson arrived at Cambridge in 1941 as a student, there were few physicists around – a constant phenomenon at the universities during the war years; physicists were perhaps the most suitable people to be sent away with war-related responsibilities.
As it happened, the greatest influence on Dyson, while at Cambridge, was the famous mathematician duo, Hardy and Littlewood [1]. After working for a few years on number theory problems (he published a couple of influential papers in this period), Dyson moved to the United States where he was appointed a professor at the Cornell University; he didn’t have a Ph.D., though (and never got one).
With the brightest of physicists around (Richard Feynman and Eugene Wigner to name just two from a pretty illustrious list), Dyson’s focus shifted towards problems from quantum physics. (Number theory, however, was to appear in his life again, albeit for a small period, as we’ll see in the second part of this series of stories.)
Quantum mechanics, one of the two greatest triumphs of the twentieth century physics (General Relativity being the other one, of course), reformulates the study of physical systems in the language of the Hamiltonian. If you were reading a chapter from a textbook on Quantum Physics (which this post is not), you’d be told that this Hamiltonian is a Hermitian operator. Now, a Hermitian operator, to put in rather simple words, is a matrix with some special properties. And a matrix is nothing more (?) than an array of numbers. But what has a matrix got to do with a physical system?
As it turns out, the way a system evolves can be mathematically expressed in terms of the product of certain matrices – and the Hamiltonian of a system, which itself is a matrix, determines what matrices you should be multiplying. Say, you want to study a Hydrogen atom. You’ll have to begin with its Hamiltonian and see what all you can say about the energy of its components. And then you can check how well you did your job by comparing your results with a Hydrogen spectrum. (A quick Google Images search for a Hydrogen spectrum at this point is strongly recommended for those who haven’t seen one.)
As seems intuitive, the Hamiltonian of a Hydrogen atom should be a lot simpler than that of a more complicated atom. A more complicated atom would also mean a heavier atom; consider, for example, a Uranium atom, which is over 200 times heavier than the Hydrogen atom. But with increasing weight comes increasing complexity (sic); heavier atoms have a larger number of interacting components. This essentially renders it impossible to write down a Hamiltonian that you can use to predict the spectrum of your complex atom. Poof! All the powers quantum mechanics bestowed upon you go awry.
Not really. What Freeman Dyson and Eugene Wigner showed that you can make a pretty smart guess about the Hamiltonian of such a complex, heavy atom using a Random Matrix – a matrix which contains random numbers. It’s like having your usual matrix except for that the entries are drawn randomly from a set of numbers.
##### H, which can describe the Hamiltonian of a system, is a matrix with elementsH11, H12, H21 and H22. You can call H a Random Matrix if the entries of H are random variables. Here, H has four elements – in practice, you’ll have to take much larger matrices for your computations.
Now, the essential idea here is to realize that to make predictions about the system you are trying to model, you need to consider an ensemble of all the random matrices which would give rise to the properties you’re expecting your system to have. These properties are manifested in terms of certain symmetries of the system under consideration. (The notion of symmetry is both utterly important and extremely fascinating in the physical sciences. If you’re ever in a need to spot a theoretical physicist in a large crowd, a passing mention of symmetry will do the job.)
The next step in your analysis of complex atoms will then be to study the statistical properties of the matrix ensemble with the appropriate symmetries. And that’s pretty much all. There is this entirety of machinery coming from the theory of random matrices that gives you the freedom to treat the complex atom as a black box with a very large number of interacting components and to still be able to extract out the essential information with great accuracy [2].
A point that must not go without a mention here is the practical importance of studying heavy atoms. Heavy atoms act as the source of nuclear energy, the foremost candidate for being the chief contributor to the fulfilment of our energy needs in the future.
If you’re feeling bewildered and fascinated by the fact that such an apparently unrelated notion of random matrices can help you predict the spectra of complex atoms, you’re not only in the company of some of the greatest minds working in this area, but also stand a chance of being a part of the wonderful discoveries yet to be made. Our understanding is very limited at present – we know that things work but have very little idea why. There appears to be some statistical law of large numbers working behind the scenes. Interestingly, it was in mathematical statistics where random matrices had first made an appearance in the 1930s with the work of the agricultural statistician John Wishart.
Talking of statistics and numbers, the theory of random matrices have also led great insights into the solutions to challenging problems in number theory, which will be the theme of the second of our stories. As you’ll see, Random Matrix Theory has made its way far beyond statistics and atomic physics. A comprehensive and insightful reference for those interested is the classic book Random Matrices by Madan Lal Mehta. (Mehta, who had a very fruitful collaboration with Dyson and Wigner, was one of the leading contributors to the subject.) However, you’ll need to have some background in linear algebra to follow the text; but it is surely worth the efforts.
#### References
[1] Freeman Dyson, Selected Papers of Freeman Dyson with Commentary, American Mathematical Society, 1996.
[2] Madan Lal Mehta, Random Matrices, Vol. 142, Pure and Applied Mathematics, Academic Press, Ed. 3, 2004.
## Mimamsa: Stories from Behind the Scenes
The past weekend saw a gruelling but very exhilarating contest for the coveted winner’s trophy of what we at IISER Pune love to call the toughest undergraduate Science quiz in India. Mimamsa, in its ninth edition in 2017, had teams from IISc Bengaluru, NISER Bhubaneswar, IIT Bombay and IIT Madras in the finals, selected after a preliminary round held earlier this year.
Undergraduates from across the country who have participated in Mimamsa over the years have called it “intellectually stimulating” and “enjoyable” – quite aptly so, given the ideology behind its conceptualisation in 2009 by Dr. Sutirth Dey.
However, this post is not about the Mimamsa presented to the participants, but the one students at IISER Pune spend time creating – and I’d argue why these two are not the same. But of course, the arguments I present here are (almost) entirely based on my personal experiences, and I don’t expect everyone to agree with them.
Perhaps the most significant element that makes Mimamsa unique is the flavour of the questions. The questions are non-trivial to begin with, and in fact take a form that makes them seem impenetrable, until, of course, one gets to know the solution. Then the participants feel awestruck if they were not been able to solve a question or otherwise feel elated to have grabbed some essential points to add up to their tally in the contest. That’s it, right? No.
Remember when I said that the Mimamsa presented to the participants is not the Mimamsa students at IISER Pune spend time creating? The questions (on most occasions) evolve from being raw ideas to taking the final forms they’re presented in. And in the course of this evolution, the students involved in the making of these questions evolve too; learning a lot in the process – new ideas, new methods of enquiry, ways to come up with smart solutions, the ability to gauge the level of difficulty of problems, the intricacies of posing questions. The process is long and tiring, and like any other venture, the students make numerous mistakes in the process, but then they get to learn from these mistakes, too. Exactly the things we expect ourselves to become extremely good at as students of Science (and Mathematics).
On the front-end what appears to be a nice, sophisticated quizzing event, is, behind the scenes, a very dense, sometimes exhausting but almost always rewarding process.
Mimamsa is surely about the spirit of quizzing and about motivating enquiry, but it is also about the enormous efforts that are put in by the students on all fronts, and it goes without saying that the teams involved in the organisational aspects over the years must be given an equal credit for what Mimamsa has come to be today.
It might be a bit too early to call Mimamsa a phenomenon – it certainly appears to have the potential to become one in the time to come – but it has already impacted the lives of many of us who have been associated with it, and this does make Mimamsa a phenomenon in our lives. | 2017-08-17 05:56:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5345968008041382, "perplexity": 826.6837826322173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102967.65/warc/CC-MAIN-20170817053725-20170817073725-00282.warc.gz"} |
https://chemistry.tutorvista.com/physical-chemistry/equilibrium-expression.html | Top
# Equilibrium Expression
A chemical reaction can represent with the help of chemical equation in which reactants and products are written in terms of their chemical formulae. The reactants are always on left side and products are at right side separated by single or double headed arrow. The physical state of all the involve compounds are written as abbreviated forms in parenthesis.
Chemical equations also provide information about change in energy during the chemical reaction. It can be written as ΔH (heat of reaction) with the reactant or product or after the chemical equation. The sign of ΔH represents the endothermic and exothermic reaction.
We use three types of arrows in chemical equations;
Single headed: Single headed arrow indicates that reactants convert to product and reaction can move only in one direction that is from reactant to product. Such reactions are called as irreversible reactions as these reactions cannot move in backward direction from product to reactants. For example thermal decomposition of potassium chlorate $(KClO_{3})$ form KCl and $O_{2}$.
$KClO_{3} \rightarrow KCl + O_{2}$
Here reaction cannot move in backward direction to form potassium chlorate from potassium chloride and oxygen. Such reactions are called as irreversible reactions and here we use single headed arrow from reactants to products.
Double headed arrow: Double headed arrows ( ) show direction of reaction on both sides. One arrow indicates the direction from reactant to product that is called as forward reaction and another arrow directed from product to reactant that is called as backward reaction. For example reaction of hydrogen and iodine forms hydrogen iodide that is a reversible reaction and can be represented as:
$H_{2} + I_{2} \leftrightarrow 2 HI$
Equilibrium arrow:
You must have seen such arrows in many chemical equations. These are also for reversible reactions. It the reversible reactions, the system reach to a stage at which the rate of forward reaction is equal to rate of backward reaction. This state of reaction is called as equilibrium state of reaction.
Overall we can say that a reversible reaction can be made to go in either direction and direction depends on the reaction conditions. For example; when steam passes over hot iron it produces a black, magnetic oxide of iron called triiron tetroxide, $Fe_{3}O_{4}$ with hydrogen gas which is swept away by steam.
$3Fb_{3} + 4H_{2}O_{(g)} \rightarrow Fb_{3}O_{4(s)} + 4H_{2(g)}$
If we work with same products and pass hydrogen gas over hot triiron tetroxide, $Fe_{3}O_{4}$ it reduces to Fe and steam.
$FB_{3}O_{4(s)} + 4H_{2(g)} \rightarrow 3Fe_{s} + 4H_{2}O_{g}$
So we can say that this is a reversible reaction under different reaction conditions. If we remove the products of one reaction they cannot react and show the backward reaction. That is the reason, reversible reactions are possible only in a closed system in which substances are either added or lost from the system.
Related Calculators Equilibrium Constant Calculator Calculator for Expressions Add Rational Expressions Calculator Boolean Expression Calculator
## Chemical Equilibrium Expression
On the basis of direction, the chemical reactions can be classified as reversible and irreversible reactions. Irreversible reactions cannot move in both directions so it can only show the change from reactants to products. Usually reactions are irreversible in nature. Many reactions are reversible in nature as reactants can convert to product and products can again convert to reactants. The conversion of reactants to products is called as forward reaction and reverse reaction known as backward reaction.
When the rates of forward and backward reaction become equal to each other, the concentration of reactants and products exhibit no net change over time. This condition of reversible reaction is a called as Chemical equilibrium. It does not mean that the chemical reaction has stopped at this stage but that the consumption and formation of compounds have reached a balanced condition. Or we can say that quantities of reactants and products have achieved a constant ratio but they may or may not equal to each other. It is also called as dynamic equilibrium as the chemical reaction does not stop at that point but the concentration of both reactants and products remain constant. The equilibrium expression for a chemical reaction expressed the change in concentration of reactants and products. It may be expressed in terms of the concentration of the products and reactants. For example for the given chemical reaction the equilibrium expression can be written as
$jA + kB \rightarrow lC + mD$
The equilibrium expression will be:
K = $\frac{([C]^l[D]^m)}{([A]^j[B]^k)}$
Here:
• K = Equilibrium constant
• [A], [B] = Molar concentrations of reactants
• [C], [D] = Molar concentration of products
• j, k, l, m = Coefficients in a balanced chemical equation
In equilibrium constant expression the chemical species in the aqueous and gaseous phases are only parts of equilibrium expression. The reactants and products in liquids and solids do not change and cannot be part of equilibrium expression.
## How to Write an Equilibrium Expression?
A reversible reaction always moves in forward and backward direction and the direction of reaction depends on different reaction conditions.
A reversible reaction must be carried out in a closed vessel so that after certain time the reaction achieve the stage of equilibrium. At equilibrium the ratio of the concentration of the reactants and the products remains constant and that constant value is called as equilibrium constant. Let’s discuss how to write the equilibrium expression for any reversible reaction. The reaction at equilibrium can be shown as below;
$aA(aq) + bB(aq) \leftrightarrow cC(aq) + dD(aq)$
In an equilibrium expression for a reaction the concentrations of the products are divided by the concentration of the reactants with the coefficients of each equation acting as exponents.
We have to mention the physical state of reactants and products as either the gas (g) or aqueous phases (aq).
The value of equilibrium constant provides information about the chemical reaction. Large value of equilibrium constant implies more concentration of products than reactants and it indicates that the equilibrium lies to the right. On the contrary small value of K implies more reactants than products and the reaction lies to the left.
For the gaseous reactions the equilibrium constant can be represent in terms of Kp which is equilibrium constant in terms of pressure. The relation between Kc and Kp can be expressed as;
$K_{p}$ = $K_{c}(RT)^{\delta n}$
Here:
• Δn= Coefficients of the gaseous products - Coefficients of the gaseous reactants.
• R = Gas constant (see the gas laws page)
• T = Temperature (Kelvin)
Here Q represents the expression for initial concentration or pressure values of reactant and products. The expression for Q is same as of equilibrium expression, only change is that here we used initial concentration or pressure instead of equilibrium values.
The comparison of value of Q and K helps to determine the direction of reaction.
• Q > K : Higher value of Q indicates more concentration of products so to attain the equilibrium the reaction will move to left ; backward direction.
• Q = K: It is condition of equilibrium for given reaction so it will not shift in either direction.
• Q< K: Lesser value of Q indicates that concentration of products is less than reactant and to attain the equilibrium the reaction must move to right (forward direction).
We can calculate the equilibrium constant with the help of ICE table (Initial, change and equilibrium concentrations). We can determine the equilibrium concentration with initial concentrations and that can be further used to calculate equilibrium constant. At equilibrium also the reactions don't stop but the rate of forward and backward reactions are in balance so there will be no net change in the concentrations of chemical species. In other words we can say that chemical equilibrium is an example of a dynamic balance between the forward and reverse reactions.
## Equilibrium Expression Examples
We know that equilibrium is a balance condition between the forward and backward rates of a reversible chemical reaction. At this stage of chemical reaction, a balance exists also between the concentrations of the reactants and products. For any given reaction the concentrations of the reactants and the products at equilibrium will be related with each other.
The reaction of nitrogen gas and hydrogen gas form ammonia. This is typical chemical reaction of Haber’s process. It is a reversible reaction which attains equilibrium stage at a certain point of time.
$N_{2(g)} + 3H_{2(g)} \leftrightarrow 2NH_{3(g)}$
We always require a balance chemical equation with physical states of reactants and products to write the equilibrium expression. We know that equilibrium expression is ratio of concentration of products and reactants for the balance chemical equation.
$K_{eq}$ = $\frac{[products]}{reactants}$
So for the given chemical equation, we can write the equilibrium constant expression as given below;
$N_{2}(g) + 3H_{2}(g) \leftrightarrow 2NH_{3}(g)$
$K_{eq}$ = $\frac{[NH_3]^2}{[N_2][N_2]^3}$
Here Keq is equilibrium constant which is a measure of the extent to completion of reaction. The terms in square brackets indicates the concentration of reactants and products at equilibrium. The concentration terms of each product and reactant are raising it to a power that equals to its coefficient in the balanced equation.
## Equilibrium Expression Formula
The reaction between acetic acid with ethanol forms ethyl acetate (an ester) with water. The reaction is called as esterification reaction. The balance chemical equation can be written as;
$CH_3COOH(i) + CH_{3}CH_{2}OH(i) \leftrightarrow CH_{3}COOCH_{2}CH_{3}(i) + H_{2}(i)$
The equilibrium expression for the given reaction can be written as;
$K_{c}$ = $\frac{[CH_{3}COOH_{2}CH_{3}][H_{2}O]}{[CH_{3}COOH][CH_{3}CH_{2}OH]}$
The reverse reaction of esterification is hydrolysis of ester that forms acetic acid and alcohol. The equilibrium in the hydrolysis of esters is shown as below;
$CH_{3}COOCH_{2}CH_{3(i)} + H_{2}O(i) \leftrightarrow CH_{3}COOH(i) + CH_{3}CH_{2}OH(i)$
The equilibrium constant expression Kc will be;
$K_{C}$ = $\frac{[CH_{3}COOH][CH_{3}CH_{2}OH]}{[CH_{3}COOH_{2}CH_{3}][H_{2}O]}$
The Contact process is used for the formation of sulfur trioxide from sulfur dioxide and oxygen. It is a reversible reaction and the balance chemical equation can be written as;
$2SO_{2}(g) + O_{2}(g) \leftrightarrow 2SO_{3}(g)$
The equilibrium constant expression for this reaction will be ratio of concentration of sulfur trioxide and concentration of both reactants; sulfur dioxide and oxygen.
$K_{s}$ = $\frac{[SO_{3}]^{2}}{[SO_{2}]^{2}[O_{2}]}$
If the physical states of reactants and products are different, the reaction is said to be in a heterogeneous equilibrium. For example the reaction of steam with red hot carbon results the formation of hydrogen gas and carbon monoxide gas.
In the equilibrium expression for this reaction will not include solid red hot carbon as it is in solid state and change in concentration for this can be taken as negligible. Hence the equilibrium constant expression for this can be as below;
$K_{c}$ = $\frac{[H_2][CO]}{[H_2O]}$
Similarly in the reaction of solid copper with silver nitrate solution (Ag+ ions) forms copper(II) nitrate ( Cu+2 ions) and solid Ag. The reaction can be written as given below.
$Cu_{(s)} + 2Ag^{+}aq \leftrightarrow Cu^{2+}aq + 2Ag(s)$
Here one reactant copper and one product silver on the right are solids so they will not be part of equilibrium constant expression.
$K_{c}$ = $\frac{Cu^{2+}}{[Ag^+]^2}$
We know that thermal decomposition of calcium carbonate results the formation of solid calcium oxide and carbon dioxide gas. If the calcium carbonate is heated in a closed system, equilibrium is established that prevents the escape of carbon dioxide gas.
$CaCO_{3(s)} \leftrightarrow CaO_{3} + CO_{2(g)}$
Hence the equilibrium constant expression will involve only concentration of carbon dioxide.
Kc =$[CO_{[2]}$
## Equilibrium Expression for Acid Dissociation
$HA + H_2O \leftrightarrow H_{3}O^{+} + A^{-}$
K = $\frac{[H_{3}O^{+}][A-]}{[HA]}$ | 2019-06-26 12:20:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.705453097820282, "perplexity": 872.7648089724831}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000306.84/warc/CC-MAIN-20190626114215-20190626140215-00098.warc.gz"} |
https://www.rocketryforum.com/threads/centuri-evil-knevil-sky-cycle.81209/ | Centuri Evil Knevil Sky Cycle
Help Support The Rocketry Forum:
swstanton
Member
Hi, I would really like to build this rocket. I think it's someting different and kind of cool. I was a teenager when he tried this. Kind of a joke. Maybe that's where FOX got it's ideas for all the reality show. Anyway I need to know if anybody has a parts list for this. JimZ's page has the plans but there doesn't seem to part list. He has the decals so that shouldn't be a problem in finishing it. Any help would be great.
Thanks - Scott
Fore Check
Well-Known Member
If you click on this picture on that plans page:
You get the following parts list:
Code:
Centuri Evel Knievel Sky Cycle #2150
Q Desc Stk Num Size Other
1 Plastic Nose Cone PNC-132
1 Body Tube ST-13 7"L
1 Body Tube ST-5 1.5"L
1 Launch Lug LL-1 1.25"L
1 Launch Lug LL-3 3"L
1 Thrust Ring TR-5 Fits ST-5
1 Engine Lock 1.75"L For Mini-Motor
1 Chute Pack CP-12 12" Blu/Wht
1 Shock Cord SC-18 1/8" x 27"L Rubber
1 Fiber Sheet 4.25"W 11"L .050"T Die Cut
1 Decal M-370 3.75"W 12.5"L Red/Gld/Blu
Semroc will have the ST13 tube, and perhaps the motor mount stuff too. Give me a minute on the nose cone.
sandman
Well-Known Member
I made a nose cone for mine out of balsa. The original was plastic. The model has "rear" ejection I'd scrap that idea and go with normal nose cone ejection.
I actually made it for my son-in-law who is an Evil Knevil fan.
Do a search here. Marty made a BT-80 upscale one a few years back.
Don't forget to add nose weight!
sandman
Fore Check
Well-Known Member
Speaking of Moldin Oldies, I just noticed that they have the PNC-102. This is the cone for the Estes No. 1378 Firecat "anti-tank missle." And yes, this Estes rocket uses the Centuri ST-1010 body tube as well.
I just sent a decal scan of mine to Excelsior so that Fred can make fresh ones for me (the ones in my kit shredded when I dipped them in water.)
Since the nose cone is available, and the tube is available from Semroc or JonRocket, I think I'll go ahead and send the plans to RocketShoppe. Just a "heads up" if anyone is interested. | 2021-12-01 09:35:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19591885805130005, "perplexity": 12864.172584095308}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359976.94/warc/CC-MAIN-20211201083001-20211201113001-00458.warc.gz"} |
https://dmoj.ca/problem/mccc2s1 | Mock CCC '20 Contest 2 S1 - Arithmetic Hybercube
View as PDF
Points: 3 (partial)
Time limit: 2.0s
Memory limit: 128M
Author:
Problem type
Allowed languages
Ada, Assembly, Awk, Brain****, C, C#, C++, COBOL, CommonLisp, D, Dart, F#, Forth, Fortran, Go, Groovy, Haskell, Intercal, Java, JS, Kotlin, Lisp, Lua, Nim, ObjC, OCaml, Octave, Pascal, Perl, PHP, Pike, Prolog, Python, Racket, Ruby, Rust, Scala, Scheme, Sed, Swift, TCL, Text, Turing, VB, Zig
Arithmetic Square, everyone's favourite problem. Welcome to the better problem, Arithmetic Line!
You are given integers, which are guaranteed to form an arithmetic sequence. However, they appear scrambled! Can you recreate the arithmetic sequence given the integers?
Recall that an arithmetic sequence of length is a sequence of integers of the form
for integer values of and . For the purposes of this problem, is a non-negative integer.
Input Specification
The first line will contain the integer , the number of integers.
The second line will contain integers, , the integers you are given. It is guaranteed that these integers form an arithmetic sequence in some permutation of them.
Output Specification
Output the recreated arithmetic sequence.
Sample Input
3
7 3 5
Sample Output
3 5 7
Explanation For Sample
The arithmetic sequence of integers that is built has and . | 2020-11-01 01:06:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25598782300949097, "perplexity": 14020.10420263857}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107922746.99/warc/CC-MAIN-20201101001251-20201101031251-00579.warc.gz"} |
https://daps.cs.princeton.edu/projects/HiFi-GAN/ | l
HiFi-GAN: High-Fidelity Denoising and Dereverberation
Based on Speech Deep Features in Adversarial Networks
Jiaqi Su, Zeyu Jin, Adam Finkelstein
[Paper]
Real-world audio recordings are often degraded by factors such as noise, reverberation, and equalization distortion. This paper introduces HiFi-GAN, a deep learning method to transform recorded speech to sound as though it had been recorded in a studio. We use an end-to-end feed-forward WaveNet architecture, trained with multi-scale adversarial discriminators in both the time domain and the time-frequency domain. It relies on the deep feature matching losses of the discriminators to improve the perceptual quality of enhanced speech. The proposed model generalizes well to new speakers, new speech content, and new environments. It significantly outperforms state-of-the-art baseline methods in both objective and subjective experiments.
Here, generator G includes a feed-forward WaveNet for speech enhancement, followed by a convolutional Postnet for cleanup. Discriminators evaluate the resulting waveform ($$D_W$$, at multiple resolutions) and mel-spectrogram ($$D_S$$).
Real Demo for Ted Talk
Original input:
HiFi-GAN enhanced result:
Real Demo for VCTK Noisy
Original input:
HiFi-GAN enhanced result:
Real Demo for DAPS
Original input:
HiFi-GAN enhanced result:
* Using a model trained on our augmented synthetic dataset with speech corpus from the DAPS Dataset [7] and room impulse responses from MIT IR Survey Dataset [8].
SAMPLES | 2021-02-27 15:58:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21050773561000824, "perplexity": 9672.76487305569}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358976.37/warc/CC-MAIN-20210227144626-20210227174626-00364.warc.gz"} |
https://mathoverflow.net/questions/140441/topology-on-the-set-of-analytic-functions | # Topology on the set of analytic functions
Let $H(D)$ be the set of all analytic functions in a region $D$ in $C$ or in $C^n$. Everyone who worked with this set knows that there is only one reasonable topology on it: the uniform convergence on compact subsets of $D$.
Is this statement correct?
If yes, can one state and prove a theorem of this sort:
If a topology on $H(D)$ has such and such natural properties, then it must be the topology of uniform convergence on compact subsets.
One natural property which immediately comes in mind is that the point evaluatons must be continuous. What else?
EDIT: 1. On my first question, I want to add that other topologies were also studied. For example, pointwise convergence (Montel, Keldysh and others). Still we probably all feel that the standard topology is the most natural one.
1. If the topology is assumed to come from some metric, a natural assumption would be that the space is complete. But I would not like to assume that this is a metric space a priori.
2. As other desirable properties, the candidates are continuity of addition and multiplication. However it is better not to take these as axioms, because the topology of uniform convergence on compacts is also the most natural one for the space of meromorphic functions (of one variable), I mean uniform with respect to the spherical metric.
• Completeness (or quasi/local-completeness, more generally). (And "of course", local convexity.) – paul garrett Aug 26 '13 at 13:07
• The topology you describe is just the subspace topology, for the subspace of analytic functions inside the space of all continuous mappings. For all mappings this is called "compact-open topology", and it is indeed natural in the sense of category theory: it is the unique inner hom functor for the category of compactly-generated hausdorff spaces. – Anton Fetisov Aug 26 '13 at 16:38
• I don't know if there is any reasonable definition of hom functor for analytic category. The problem is to define complex structure on an infinite-dimensional space. – Anton Fetisov Aug 26 '13 at 16:41
• "a limit in our topology of analytic functions" Erm... What kind of animal is this? The topology is not even defined outside $H(D)$... – fedja Aug 26 '13 at 21:51
• @fedja: you are right. I was thinking of some kind of completeness property, but this assumes that there is a metric, which I would not like to assume a priori. – Alexandre Eremenko Aug 27 '13 at 3:45
I think the following characterization is true:
The standard topology on $H(D)$ is the initial topology with respect to the projections $f \mapsto [f]_z$ for each $z\in D$, where $[f]_z$ is the germ of $f$ at $z$.
For this statement to make sense, we need to endow the space $\mathcal{O}_z$ of germs of holomorphic functions at $z$ with a topology. I think it is reasonable to give it the topology of the inductive limit $\mathcal{O}_z = \bigcup_{r>0} P_{z,r}$, where $P_{z,r}$ is the space of power series absolutely convergent on a (poly)disk of radius $r$ centered at $z$. The topology on each $P_{z,r}$ coincides with the subspace topology of the $\sup$-norm topology on bounded continuous functions on the closed (poly)disk. Depending on one's preference, there might be different, equivalent ways to define the same topology.
1. The standard topology contains the initial one. That's because the projections $f \mapsto [f]_z \mapsto f(z)$ are continuous and the evaluations maps $f \mapsto f(z)$ are continuous in the standard topology.
2. The initial topology contains the standard one. Consider a compact set $K\subset D$, an $\epsilon > 0$ and a standard neighborhood $V(f,K,\epsilon)$ of $f$ in $H(D)$. Then $V(f,K,\epsilon)$ contains an intersection of finitely many initial topology neighborhoods of $f$. These neighborhoods could, for instance, be generated by a finite subcover of a cover of $K$ by open (poly)disks such that $f$ is continuous on the closure of each.
• Does the same argument also applies to the spaces of continuous functions in a region (and characterizes the topology of uniform convergence on compact subsets) ? – Alexandre Eremenko Aug 28 '13 at 18:38
• Hmm, I think yes. The closed (poly)disks will have to be replaced by arbitrary compact neighborhoods, though. This observation is probably related to the earlier comment of Anton Fetisov. – Igor Khavkine Aug 28 '13 at 19:51
• This argument applies basically to any function space you can think of. The reason is that lifting universally topology from stalks to sections is a general sheaf-theoretic construction. It says nothing about defining topology on stalks in the first place. In fact, for that reason people usually move the other way, from topology on sections to topology on stalks. – Anton Fetisov Aug 28 '13 at 23:16
• I think what is special about holomorphic functions is that the stalks and germs have a particularly simple structure. – Igor Khavkine Aug 29 '13 at 6:44
Here is a method of recovering the topology of $H(U)$ from general considerations. The idea is that the dual of $E$ of $H(U)$ has the following universal property: $E$ is a complete locally convex space (even a so-called nuclear Silva space, i.e., an inductive limit of a sequence of Banach spaces with nuclear intertwining mappings) and $U$ embeds in $E$ in such a manner that every holomorphic mapping from $U$ into a Banach space lifts in a unique manner to a continuous linear mapping on $E$. We now forget the topology on $H(U)$ and note that the existence of such a universal space can be proved without recourse to this duality (this is a standard construction as a closed subspace of a suitable large product of Banach spaces---analogous to the construction of the free locally convex space over a completely regular space or a uniform space---see, e.g. Raikov, Katetov, etc.) Such a free object is always unique in a suitable sense. Now it follows from the universal property (applied to scalar-valued functions) that $H(U)$ is, as a vector space, naturally identifiable with the dual of the universal space. It can then be provided with the corresponding strong topology which is thus intrinsic. But this is precisely the standard Fréchet space topology (the fact that we are dealing with a symmetric duality between a nuclear Fréchet space, resp. Silva space is relevant here).
Added as an edit after Alexandre's comment since I am not entitled to comment.
One way to construct the universal space is to take the free vector space over $U$ and provide it with the finest locally convex topology such that the embedding of $U$ is holomorphic, then take the completion.
I doubt that you will find the fact that the dual of $H(U)$ has the universal property in the literature (such considerations were never fashionable---too much category theory for the analysts, too much hard analysis for the category theorists perhaps). It follows very easily from the theory of duality for $H(U)$ (Köthe, Crelle (191)). An accessible version in english is in the book "Complex Analysis: a functional analysis approach" by Ruecking and Rubel. The vector-valued case is in the seminal follow-up papers to Köthe's by Grothendieck in Crelle, 192.
I should note that the duality mentioned above was originally developed by the portuguese mathematician J. Sebastião e Silva in a sadly forgotten article in Port. Math. 9 (1950) 1-130 and this again has its source in work by Cacciopoli and Fantappié. The universal property mentioned above has many analogues---e.g., the distributions on the unit interval, unversal for smooth mappings into Banach spaces (with obvious generalisations), Radon measure on the unit interval or a compact space, universal for continuous mappings, bounded Radon measures on a completely regular space (bounded, continuous mappings), uniform measures on a uniform space (bounded, uniformly continuous mappings). See, for example Raikov, Math. Sb. 63 (1964) 582-590, Tomašek, Czech. Math. J. 20 (1970) 1-18, 19-33.
• berlin: could you give a more precise reference, for non-specialists on topological vector spaces, on the a) existence proof of such $E$ and b) that the dual of $H(U)$ has this property ? – Alexandre Eremenko Mar 24 '14 at 22:45
• The universal property of $E$ follows from Schwartz' $\varepsilon$-product (which in many cases coincides with the completed injective tensor product): $H(U,X) = H(U) \varepsilon X = L(H(U)'_{co}, X)$ where the last equality is the definition and $F'_{co}$ is the dual of the locally convex space $F$ endowed with uniform convergence on all absolutely convex compact sets. Since $H(U)$ is a Montel space, in our case this is the same as the strong dual of $H(U)$. – Jochen Wengenroth Mar 25 '14 at 8:42
• @Alexandre. More on the universal property of $H(U)'$. This was known to Fantappiè in 1943 (not in this language, of course)---see the concept of Fantappiè indicatrix of an operator--- and so before the Schwartzian theory of distributions and long before Grothendieck and Schwartz developed the connection between tensor products of lcs's and operators, in particular for nuclear spaces. A succinct discussion of this and the path from the original work of Fantappiè to its incorporation into modern function analysis by the actors mentioned above is in a review by Horvath, BAMS 25 (1991) p. 162 – berlin Mar 27 '14 at 16:20
• @berlin: Thanks. Fortunately the book is available in our library:-) – Alexandre Eremenko Mar 28 '14 at 0:09
If $\mathcal S$ is a locally convex topology on $H(D)$ such that all evaluations are continuous then the identity $(H(D)$,compact open) $\to (H(D),\mathcal S)$ has closed graph. Therefore, whenever $\mathcal S$ is good for the closed graph theorem, the compact open topology will be finer. The most general class of locally convex spaces which are good for the closed graph theorem as range spaces is that of webbed spaces introduced by de Wilde (see e.g. the functional analysis book of Meise and Vogt). This class contains all Banach spaces and is stable with respect to countable inductive (=direct) and projective (=reverse) limits. In particular, it contains all Frechet spaces, LF-spaces, projective limits of LF-spaces,...
Moreover, since there is no strictly coarser barrelled locally convex topology on a Frechet space, we get a possible answer to your question: If $\mathcal S$ is a locally convex topology on $H(D)$ so that $H(D)$ is webbed and barrelled and all evaluations are continuous then is coincides with the compact open topology.
In the article http://www.zentralblatt-math.org/zbmath/search/?q=an%3A1199.32010 the topology of the exponential convergence on compacts was introduced. Let $\Omega$ be a domain in $\mathbb{C}^n$ and $PSH(\Omega)$ be the set of functions plurisubharmonic on $\Omega$. A sequence of $u_n \in PSH(\Omega)$ exponentially and uniformly convergerges on compacts to the function $u$ if $\exp u_n$ converges to $\exp u$ uniformly on compacts.The exponential uniform convergence on compacts is a generalization of the uniform convergence on compacts. It should be noted that $u \in PSH(\Omega)$ as well as $\exp u \in PSH(\Omega)$. The topology of the exponential uniform convergence on compacts is metrizable as follows. Let $C_n$ be a seqquence of compacts exhausting $\Omega$. We put $d_n(u,v):=\sup\{|\exp u(z)-\exp v(z)|: \,z \in C_n\}$ and $$d(u,v):=\sum\limits_{n=1}^\infty \frac{2^{-n}d_n(u,v)}{1+d_n(u,v)}.$$ Then $PSH(\Omega)$ is a complete metric space. M. Girnyk proved that the set $\log|A|(\Omega)$ of the logarithms of the moduli of functions holomorphic on $\Omega$ is nowhere dense in $PSH(\Omega)$ with that metrics. PS. The author proved the last statement in the cases $\Omega=\mathbb{C}^n$ and $\Omega=\mathbb{D}^n$.
• Thanks. But I asked about topologies on the space on analytic (not PSH) functions. – Alexandre Eremenko Aug 28 '13 at 18:35
An obvious obstruction to the proposed topological characterization is the following: Endow $H(D)$ with the discrete topology, call it $\tau$. Then the space $\langle H(D) , \tau\rangle$ has the following properties: $f \mapsto f(z)$ is continuous for every $z$, $\langle H(D), \tau\rangle$ is a topological ring (as in, point-wise addition and point-wise multiplication of functions are continuous), and $\langle H(D), \tau\rangle$ is metrizable. The obvious down-side to this topology is that it is not separable and not natural in the same way but it does offer a counter-example to the statement
If $\tau$ is a (Hausdorff) topology on $H(D)$ for which point-evaluation is continuous, the group and ring operations are continuous, and $H(D)$ is metrizable, then $\tau$ must be the compact-open topology.
Given any topological space $$(T,\tau)$$ and a mapping $$\varphi :T\to T$$, it is natural to ask
What is the coarsest topology, finer than $$\tau$$, such that $$\varphi$$ is continuous.
This topology is the supremum of all $$\tau_n$$, which is constructed by adding the inverse images, through $$\varphi$$, of all open subsets of $$\tau_{n-1}$$. Note it $$\hat{\tau}$$.
If one takes the space $$T_0$$ of infinitely differentiable functions on $$\mathbb{R}$$, $$\tau_0$$, the topology of local uniform convergence (which is reasonable to preserve continuity) and $$\varphi=\frac{d}{dx}$$, then $$\hat{\tau_0}$$ is the topology of compact uniform convergence of functions and all their derivatives (one can check that the sequence $$\tau_{n}$$ is strictly increasing). The process is stationary iff $$\varphi$$ is already continuous (obvious). What is remarkable is that, due to Cauchy, $$\varphi$$ is continuous for $$\tau_0$$. The process can be applied to a set $$(\varphi_i)_{i\in I}$$ of functions, $$\tau_n$$ being constructed by adding the inverse images, through $$\varphi_{i}$$ of all open subsets of $$\tau_{n-1}$$ and taking their unions and finite intersections. If, on $$C^{\infty}(D;\mathbb{C})$$, one takes $$\varphi_1=\frac{\partial}{\partial x}$$ and $$\varphi_2=\frac{\partial}{\partial y}$$, one get the topology of compact uniform convergence of functions and all their (partial) derivatives. The process described above is, again, strictly increasing, but the restriction of it to $$H(D)$$ is stationary which is IMHO remarkable.
Let $\mathcal T$ be the topology of uniform convergence on compact subsets.
Can we do these? ...
(a) $H(D)$, with addition, is a Polish group in $\mathcal T$... separable, completely metraizable.
(b) If $A, B$ are Polish groups, and $\phi : A \to B$ is a Borel measurable homomorphism, then $\phi$ is continuous.
(c) The sigma-algebra on $H(D)$ generated by the point-evaluations is the same as the Borel sigma-algebra for $\mathcal T$.
Would these be enough to prove...
(z) If $\mathcal S$ is a topology on $H(D)$ making it a Polish group such that the point-evaluations are continuous, then $\mathcal S = \mathcal T$. | 2019-06-19 22:01:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 25, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9126126766204834, "perplexity": 239.6077229213558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999041.59/warc/CC-MAIN-20190619204313-20190619230313-00359.warc.gz"} |
https://questions.examside.com/past-years/gate/gate-ece/analog-circuits/power-amplifier | GATE ECE
Analog Circuits
Power Amplifier
Previous Years Questions
## Marks 1
The effect of current shunt feedback in an amplifier is to
Crossover distortion behavior is characteristic of
The circuit shown, the figure supplies power to on 8 v speaker or L.S. The vaLUES OF $${I_C}$$ and $${V_{CE}}$$ for this ciruit will be ...
A power Amplifier delivers 50 W output at 50% efficiency. The ambient temperature is 25$$^\circ$$ C. if the maximum allowable junction temperature is...
A class-A transformer coupled, transistor power Amplifier is required to deliver a power rating of the transistor should not be less than.
In a transistor push-pull Amplifier
## Marks 2
A regulated power supply, shown in figure below, has an unregulated input (UR) of 15 volts and generates a regulated output V $$_{out.}$$ .Use the com...
A regulated power supply, shown in figure below, has an unregulated input (UR) of 15 volts and generates a regulated output V o u t . out. .Use the c...
In case of class A amplifiers, the ratio (efficiency of transformer coupled amplifier)/(efficiency of a transformer less amplifier ) is
EXAM MAP
Joint Entrance Examination | 2023-03-28 05:40:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5068926811218262, "perplexity": 7009.195885774701}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948765.13/warc/CC-MAIN-20230328042424-20230328072424-00188.warc.gz"} |
http://cms.math.ca/cjm/msc/11B37 | Search results
Search: MSC category 11B37 ( Recurrences {For applications to special functions, see 33-XX} )
Expand all Collapse all Results 1 - 1 of 1
1. CJM 2007 (vol 59 pp. 127)
Lamzouri, Youness
Smooth Values of the Iterates of the Euler Phi-Function Let $\phi(n)$ be the Euler phi-function, define $\phi_0(n) = n$ and $\phi_{k+1}(n)=\phi(\phi_{k}(n))$ for all $k\geq 0$. We will determine an asymptotic formula for the set of integers $n$ less than $x$ for which $\phi_k(n)$ is $y$-smooth, conditionally on a weak form of the Elliott--Halberstam conjecture. Categories:11N37, 11B37, 34K05, 45J05 | 2013-12-06 09:56:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9089586734771729, "perplexity": 1729.76158018809}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163051140/warc/CC-MAIN-20131204131731-00006-ip-10-33-133-15.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/i-want-check-my-answer.354100/ | # I want check my answer ()()()
Hi
I want check my answer two Question
Question 1
Question 2
## Answers and Replies
Related Precalculus Mathematics Homework Help News on Phys.org
Mark44
Mentor
Q1: Your work is incorrect.
You're starting with an equation, so each step should be a new equation.
xlog x = 100x
log(xlog x) = log(100x)
(log x)(log x) = log 100 + log x
Continue from there. You should end up with a value of x that is correct to three significant digits.
Hi mark
why we said log 100 + log x ?
why + log x ?????
statdad
Homework Helper
Because
$$\log(ab) = \log(a) + \log(b)$$
thanks >>
Mark44
Mentor
For Q2, your answer is close, but should be rounded to 34.7 years, to the nearest tenth of a year.
Also, there is no In operation. This is Ln, and stands for natural Logarithm, which in Latin is logarithmus naturalis.
BTW, you might get quicker help if you posted the problem and your work rather than posting a scanned copy of your work. Speaking for myself, it's much easier when I can write my response while looking at the work, rather than having to have two windows open and jump back and forth between them. | 2020-08-12 18:51:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6673774123191833, "perplexity": 1218.2357409416402}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738913.60/warc/CC-MAIN-20200812171125-20200812201125-00235.warc.gz"} |
http://mathhelpforum.com/calculus/76789-improper-integral.html | # Thread: improper integral
1. ## improper integral
hey everyone, got 2 questions.
the first one asks for determining all values of p for which dx/(x-p) is improper evaluated from 1 to 2.
another question asks about finding an equation for the integral curve that passes through the point (2,1) for the differential equation y'=y/2x and how do you go about graphing y'=y/2x?
cheers!
2. a) $
\int_1^2 {\frac{1}
{{x - p}}dx} = \ln \left( {2 - p} \right) - \ln \left( {1 - p} \right)
$
we know that the domain the logarythm function is to: $x>0
$
then to this integral con be improper: $
2 - p \leqslant 0 \vee 1 - p \leqslant 0 \Rightarrow \therefore p \geqslant 1
$
b) Maybe could be a better idea, find Y
$
y'(x) = \frac{{y(x)}}
{{2x}} \Leftrightarrow \frac{{y'(x)}}
{{y(x)}} = \frac{1}
{{2x}}
$
Integrating whit respecte X, we have: $
\int {\frac{{y'(x)}}
{{y(x)}}dx} = \frac{1}
{2}\int {\frac{1}
{x}dx} \Leftrightarrow \ln \left( {y(x)} \right) = \frac{1}
{2}\ln x + C
$
Hence: $y(x)=Ae^{\sqrt{x}}$
But curve passes through the point (2,1) then $
y(2) = 1 \Leftrightarrow 1 = Ae^{\sqrt 2 }
$
$
\therefore y(x) = \exp \left( {x - \sqrt 2 } \right)
$ | 2017-02-22 09:53:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9001321196556091, "perplexity": 1322.907432349237}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170925.44/warc/CC-MAIN-20170219104610-00372-ip-10-171-10-108.ec2.internal.warc.gz"} |
https://www.gamedev.net/blogs/entry/1860897-hacking/ | • entries
73
131
• views
54769
# Hacking
313 views
Haha, boy I love hacking. It excites a strange feeling of rebellion.
I am working on making my development environment as comfortable as possible. My thoughts an IDE's and workspaces are this: do whatever makes you happy. It's tempting to evangelize and build up an attitude of "being right" (see the Emacs/VI holy war). But whatever; you're the one doing the work, and you should choose how to do it.
With that said, I can't stand IDE's. I think they enforce an extremely specific workflow and style of development, and also make it freaking annoying to tweak many parts of the build process. I'm an Emacs guy. I thought about writing up a formal review of current IDE's and why they don't work for me, but I don't really feel like doing it anymore. Let's just say I have used both Visual Studio and Xcode, but I've never been happier with Emacs. Maybe I can show you why in the next couple of posts.
I'll explain my opening statement at some point, I promise. I suddenly feel like I should attempt to describe Emacs, however, since much of what I will work on involves customizing it. Emacs is a text editor written mostly in Emacs Lisp. Obviously, ELisp was built for Emacs, and the whole language is exposed to you. You are allowed to redefine any part of the text editor itself, and as you use Emacs, you begin tailoring it to your specific needs. The result is two-fold: there are several packages out there you can install which turn Emacs into some amazing things, and you get to transform your text editor into something perfect for you.
My current focus is making my development environment in Emacs as fun and intuitive as possible. Why don't I start with a simple program, and see how I want to develop it from there?
I want my first program to:
• Use Cocoa's application framework and open up a window
• Embed a Gambit-C Scheme function which prints out some text on load
• Be invokable on the command-line, such as ./main' (easy management of stdout/stderr, also can use Gambit-C's REPL)
There's not much code to do this, but it's certainly a start in figuring out issues with using Cocoa outside of Xcode, Gambit-C Scheme integration, and more. Here's the program and the output:
--- main.c --------------------------------------#import #define ___VERSION 403002#include "gambit.h"#include "engine.h"#include "stdio.h"#define LINKER ____20_link_____BEGIN_C_LINKAGEextern ___mod_or_lnk LINKER (___global_state_struct*);___END_C_LINKAGE___setup_params_struct setup_params;int main(int argc, char* argv[]) { ___setup_params_reset (&setup_params); setup_params.version = ___VERSION; setup_params.linker = LINKER; ___setup(&setup_params); print_something(); int ret = NSApplicationMain(argc, argv); ___cleanup(); return 0;} --- engine.h --------------------------------------void print_something(); --- engine.scm --------------------------------------(c-define (print-something) () void "print_something" "" (display "Hello, World!")) --- Makefile --------------------------------------gsc -debug -link -o link_.c engine.scmgcc -o main main.c link_.c engine.c -D___LIBRARY -I/usr/local/Gambit-C/current/include -lgambc -framework Cocoa -framework OpenGL -sectcreate __TEXT __info_plist Info.plist ---------------------------------------------------(there's also a supporting .nib file and Info.plist which Cocoareads to build the window from)
Sweet! There's one big problem which the screenshot doesn't show: Ctrl-C doesn't kill the program. After some research, it looks like Gambit-C takes control over the interrupts, and since I never call back into Gambit it doesn't have a chance to respond. This is perfectly reasonable, and should be solved once we're calling Gambit in the rendering code.
My second goal was to have some triangles on the screen, rendered from Scheme code. It's pretty much the same program above, but some more Cocoa integration to initialize OpenGL and such. The only interesting part is probably the Scheme code, so I'll post it:
(define medium #f)(c-define (init-engine) () void "init_engine" "" (set! medium (obj-load "/Users/james/projects/scheme/graphics-gambit/resources/medium.obj")))(define x 0.0)(c-define (run-frame) () void "run_frame" "" (set! x (+ x 1.5)) (glLoadIdentity) (glRotatef 90.0 1.0 0.0 0.0) (glRotatef x 0.0 0.0 1.0) (glScalef 0.02 0.02 0.02) (glClearColor 0.2 0.3 0.4 1.0) (glClear GL_COLOR_BUFFER_BIT) (glColor3f 1.0 0.9 0.8) (glBegin GL_TRIANGLES) (for-each (lambda (triangle) (glVertex3f (vector3d-x (triangle-v1 triangle)) (vector3d-y (triangle-v1 triangle)) (vector3d-z (triangle-v1 triangle))) (glVertex3f (vector3d-x (triangle-v2 triangle)) (vector3d-y (triangle-v2 triangle)) (vector3d-z (triangle-v2 triangle))) (glVertex3f (vector3d-x (triangle-v3 triangle)) (vector3d-y (triangle-v3 triangle)) (vector3d-z (triangle-v3 triangle)))) (obj-data-triangles medium)) (glEnd))
INIT-ENGINE is called at the beginning of the program, and RUN-FRAME is called every 1/60th of a second. I wrote a .obj file loader so that I can have some stuff to test with, so OBJ-LOAD just returns a list of triangles from a mesh inside a .obj. Here's the result now:
Even sweeter! There's no lighting, so it looks bad, but it's a rotating 3d mesh spelling "medium".
Interrupting the program works now, although I'm getting a strange crash when the program exits. I'm shutting down the Gambit system wrong somehow. Need to fix.
My third goal was to learn how to track mouse and keyboard events with Cocoa. As I got into this, however, I realized that I can't do this. The windowing system can't possibly dispatch any user input events on the window. The process is still seen as a command-line process, a child of my current shell. The shell still receives all the mouse movements and key presses. It looks like I'll have to invoke the program as a windowed application.
On OSX, this is achieved by creating a bundled application and using open'. So instead of ./main', I will do open Main.app'. The reasons why I initially tried to avoid this is that it is no longer a child process, and now I have to track down where stdout/stderr are going and there's no chance of using Gambit's REPL.
In reality, I'll be invoking the app in several different ways, compiled with several different options, depending on how I intend to use it. Debugging scene rendering problems, cocoa, gambit code, etc. are all different which will require different tools. It'll be good to build in all the options of compiling & invoking the app.
Anyway, this is a lot more than I intended. I didn't even get to talk about configuring Emacs. But essentially, anything that happens on the command line can be hooked into Emacs, so my most common actions are only a keyboard shortcut away. I hope to show some example of this later.
ALSO, in reference to my opening statement, I was playing around with the idea of dynamically loading in Cocoa and keeping Gambit in control. It's a huge hack, since Cocoa is meant to handle the entry point and be statically compiled down to an executable. But essentially, I compiled a gambit program which wraps around Cocoa's NSApplicationMain procedure into a dynamic library. I then proceeded to load it and try to invoke it, which would hopefully create a window, but let me stay in Gambit land. I got a nice little error from Cocoa though; it couldn't find the preferences file which was compiled in as the __TEXT section. I'm guessing there's something I'd have to do to locate it in a dynamic library (it would be ok in a flat executable).
[19:07] james:~/projects/scheme/graphics-gambitjames% gsc -cc-options 'cocoa-entry.m glview.m glwindow.m -framework Cocoa -framework OpenGL -sectcreate __TEXT __info_plist Info.plist' entry.scmcocoa-entry.m: In function 'cocoa_entry':cocoa-entry.m:28: warning: passing argument 2 of 'NSApplicationMain' from incompatible pointer type[19:07] james:~/projects/scheme/graphics-gambitjames% gsi gsi Gambit v4.3.2> (load "entry")2008-12-18 19:07:14.945 gsi[37765:10b] No Info.plist file in application bundle or no NSPrincipalClass in the Info.plist file, exiting
Oh boy, I think I just dumped a bunch of stuff here at the end. Forgive me. I'm still getting the hang of decent writing (I probably should have split this up).
There are no comments to display. | 2018-02-22 13:29:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3436065912246704, "perplexity": 4827.330689870914}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814105.6/warc/CC-MAIN-20180222120939-20180222140939-00517.warc.gz"} |
https://www.semanticscholar.org/paper/Logical-Pre-and-Post-Selection-Paradoxes-are-Proofs-Pusey-Leifer/7f7a6afc524fe43dc21517e5335577f6bd1b63d6 | # Logical Pre- and Post-Selection Paradoxes are Proofs of Contextuality
@article{Pusey2015LogicalPA,
title={Logical Pre- and Post-Selection Paradoxes are Proofs of Contextuality},
author={Matthew F Pusey and Matthew Leifer},
journal={arXiv: Quantum Physics},
year={2015}
}
• Published 25 June 2015
• Philosophy
• arXiv: Quantum Physics
If a quantum system is prepared and later post-selected in certain states, "paradoxical" predictions for intermediate measurements can be obtained. This is the case both when the intermediate measurement is strong, i.e. a projective measurement with Luders-von Neumann update rule, or with weak measurements where they show up in anomalous weak values. Leifer and Spekkens [quant-ph/0412178] identified a striking class of such paradoxes, known as logical pre- and post-selection paradoxes, and…
14 Citations
Pre- and post-selection paradoxes in quantum walks
• Philosophy, Physics
New Journal of Physics
• 2019
Many features of single-partite quantum walks can be simulated by classical waves. However, it was recently experimentally shown that some temporal sequences of measurements on a quantum walker do
Causal reappraisal of the quantum three-box paradox
• Physics
Physical Review A
• 2022
Quantum three box paradox is a prototypical example of some bizarre predictions for intermediate measurements made on preand post-selected systems. Although in principle those effects can be
Contextuality, Pigeonholes, Cheshire Cats, Mean Kings, and Weak Values
• Physics
• 2015
The Kochen–Specker (KS) theorem shows that noncontextual hidden variable models of reality that allow random choice are inconsistent with quantum mechanics. Such noncontextual models predict certain
• Computer Science
• 2017
This work identifies quantitative limits on the success probability for minimum error state discrimination in any experiment described by a noncontextual ontological model, and identifies noncontextuality inequalities that are violated by quantum theory, and implies a quantum advantage for state discrimination relative to non contextual models.
• Philosophy
New Journal of Physics
• 2019
The conditions of the Frauchiger-Renner result are generalized so that they can be applied to arbitrary physical theories, and in particular to those expressed as generalized probabilistic theories (GPTs), and a deterministic contradiction is found.
Fitch's knowability axioms are incompatible with quantum theory
• Philosophy
• 2020
This work relates the assumptions behind the recent Frauchiger-Renner quantum thought experiment and the axioms for knowledge used in Fitch's knowability paradox, and indicates that agents' knowledge of quantum systems must violate at least one of the following assumptions.
The Geometry of Qubit Weak Values
• Physics
• 2015
The concept of a \emph{weak value} of a quantum observable was developed in the late 1980s by Aharonov and colleagues to characterize the value of an observable for a quantum system in the time
Quantum Fluctuation Theorems, Contextuality, and Work Quasiprobabilities.
A protocol is described that smoothly interpolates between the two-point-measurement work distribution for projective measurements and Allahverdyan's work quasiprobability for weak measurements, and it is shown that the negativity of the latter is a direct signature of contextuality.
Testing quantum theory with generalized noncontextuality
• Physics
• 2021
Markus P. Müller 2, 3 and Andrew J. P. Garner Institute for Quantum Optics and Quantum Information, Austrian Academy of Sciences, Boltzmanngasse 3, A-1090 Vienna, Austria Vienna Center for Quantum
Is the SIC Outcome There When Nobody Looks
Close study of three of the "sporadic SICs" reveals an illuminating relation between different ways of quantifying the extent to which quantum theory deviates from classical expectations.
## References
SHOWING 1-10 OF 24 REFERENCES
Pre- and post-selection paradoxes and contextuality in quantum mechanics.
• Psychology
Physical review letters
• 2005
It is shown that for every paradoxical effect wherein all the pre- and post-selected probabilities are 0 or 1 and the pre/post-selected states are nonorthogonal, there is an associated proof of the impossibility of a noncontextual hidden variable theory.
Logical Pre- and Post-Selection Paradoxes, Measurement-Disturbance and Contextuality
• 2005
Many seemingly paradoxical effects are known in the predictions for outcomes of measurements made on pre- and post-selected quantum systems. A class of such effects, which we call ‘`logical pre- and
The Status of Determinism in Proofs of the Impossibility of a Noncontextual Model of Quantum Theory
In order to claim that one has experimentally tested whether a noncontextual ontological model could underlie certain measurement statistics in quantum theory, it is necessary to have a notion of
Anomalous weak values are proofs of contextuality.
It is shown that "anomalous weak values" are nonclassical in a precise sense: a sufficiently weak measurement of one constitutes a proof of contextuality, which clarifies, for example, which features must be present to demonstrate an effect with no satisfying classical explanation.
Contextuality for preparations, transformations, and unsharp measurements
The Bell-Kochen-Specker theorem establishes the impossibility of a noncontextual hidden variable model of quantum theory, or equivalently, that quantum theory is contextual. In this paper, an
From the Kochen-Specker theorem to noncontextuality inequalities without assuming determinism.
• Philosophy
Physical review letters
• 2015
This work proposes a scheme for deriving inequalities that test whether a given set of experimental statistics is consistent with a noncontextual model and implies the impossibility of a non contextual model for any operational theory that can account for the experimental observations, including any successor to quantum theory.
Evidence for the epistemic view of quantum states: A toy theory
We present a toy theory that is based on a simple principle: the number of questions about the physical state of a system that are answered must always be equal to the number that are unanswered in a
The Nature of the Controversy over Time‐Symmetric Quantum Counterfactuals
It is proposed that the recent controversy over “time‐symmetric quantum counterfactuals” (TSQCs), based on the Aharonov‐Bergmann‐Lebowitz Rule for measurements of pre‐ and post‐selected systems, can
The Problem of Hidden Variables in Quantum Mechanics
• Physics
• 1967
Forty years after the advent of quantum mechanics the problem of hidden variables, that is, the possibility of imbedding quantum theory into a classical theory, remains a controversial and obscure | 2022-05-17 05:52:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48907116055488586, "perplexity": 2154.8628260706164}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662515501.4/warc/CC-MAIN-20220517031843-20220517061843-00576.warc.gz"} |
https://physics.stackexchange.com/questions/303518/relation-between-resistance-to-the-flow-of-fluid-pressure-and-radius-of-the-ves | # Relation between resistance to the flow of fluid, pressure and radius of the vessel
I just need a clarification, that may seem absurdly easy to some, fair enough, but I am not a physicist and would be glad to hear some answers.
Regarding blood floow, or any flow through a tube for that matter, it is intuitive that decreasing the diameter or radius of the vessel increases the pressure (since pressure is the force of the particles exterted upon a certain surface area of the vessel wall, F/S). However, we then have resistance that is of course, using analogy with Ohm's law, inversely proportional to flow; Q (flow) = Pressure gradient /R. Bigger the resistance, smaller the flow. However from that same equation Pressure difference is directly proportional to resistance, meaning bigger the resistance bigger the pressure difference - bigger the decline in pressure is. The pressure difference is driving pressure, though, and that is a "force" that drives fluid from one end of the vessel to the other. So am I right if I say that, when resistance is increased, the flow decreases but the velocity of the flow increases because of bigger pressure difference (also, conservation of mass and smaller radius since usually resistance increases with smaller radius)? I just need a confirmation of that one, however my real question is as follows.
Resistance according to Poiseulle is inversely proportional to the fourth power radius. So smaller the radius much bigger the resistance, much more decline in pressure. But I also said in the beginning that the pressure increases with decreased radius because of more collisions between particles and vessel walls. This seems like a bit of a paradox to me so I would need a bit explanation for it. I see the same "contradiction" in Bernoulli's equation, where because of total energy conservation, pressure is decreased when velocity is increased (when there is a decrease in radius of the vessel). From a conservation of energy's standpoint that is completely logical however, I cannot imagine pressure decreasing with smaller radius from a "molecular" point of view. Thank you in advance.
• Please use paragraphs - this very hard to read – user140434 Jan 7 '17 at 13:46
• I thought I did, but they somehow got lost. – Whiterabbit Jan 7 '17 at 14:05
No, you are mixing up concepts at random. First of all, there's no reason for pressure to increase just because the surface area of the tube decreases. The definition of pressure $p=F/A$ does not allow you to draw this conclusion, since you have no way (without additional information) to know what happens to the numerator of this expression.
Second, you need to keep your boundary conditions straight: Are you performing an experiment where you keep the flow rate constant? If that is the case then, yes, your pressure gradient will increase with decreasing tube radius, and the velocity will increase, of course. Often, however, we keep the pressure gradient constant (e.g. in a gravity-driven flow), in which case your flow rate decreases with the tube radius. Or, you may have a pump driving your system that has constant available power, in which case the product of pressure drop times flow rate is constant, and reducing the tube diameter may both increase the pressure gradient and decrease the flow rate, depending on the type of pump you use. In this case the flow velocities may or may not increase. Long story short, no, you're wrong on this one, too.
Finally, no, your ideas around how and why the pressure changes with reduced tube diameter and the connection with particle collisions with the wall are wrong. The fact that the Bernoulli equation assumes frictionless flow has been mentioned already.
Let's talk about blood flow. It's a closed system with a pump and resistance, exactly analogous to an electrical circuit with resistors (and capacitors and inductors, but let's ignore those.)
Blood is approximately water, and it is not flowing very fast, and it is flowing through a lot of long narrow tubes (capillaries). In such tubes, there is resistance due to viscosity, so the longer and narrower the tubes, the greater the resistance.
Bernoulli's principle does not deal with viscosity. It simply says that a fluid cannot gain speed without a pressure difference pushing it forward. Similarly it cannot lose speed without a pressure difference pushing against it. This is both conservation of kinetic energy and conservation of momentum.
When fluid moves through a long narrow tube it is not conserving kinetic energy, because the viscous resistance causes the fluid to convert kinetic energy to heat. | 2019-09-20 11:58:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.817032516002655, "perplexity": 285.39303480594674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574018.53/warc/CC-MAIN-20190920113425-20190920135425-00538.warc.gz"} |
https://socratic.org/questions/a-health-food-store-sells-oatmeal-for-3-60-per-pound-and-bran-flakes-for-4-80-pe | A health food store sells oatmeal for $3.60 per pound and bran flakes for$4.80 per pound. How many pounds of each should be used to get a mixture of 30 pounds that sells for $4.00 a pound? 1 Answer Jul 3, 2016 $\text{ Bran flakes } = {10}^{l b}$$\text{ Oatmeal } = {20}^{l b}$Explanation: Basically this is a straight line graph Plotting cost against content of Bran Flakes. At 0% Bran Flakes the mix is 100% Oatmeal At 100% Bran Flakes the mix is 0% Oatmeal Gradient is ($4.80-$3.60)/(100%) =($1.20)/(100%)
The gradient of part of the graph is the same as the gradient of all of the graph.
So ($4.00-$3.60)/(x%)=($4.80-$3.60)/(100%)
($1.20)/(100%)=($0.40)/(x%)
Turning the whole thing upside down
(100%)/1.2=(x%)/0.4
Multiply both sides by $0.4$
x%=(0.4xx100)/(1.2)= 33.333bar3 = 33 1/3%" bran flakes"
'~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The final mix is to have the weight of ${30}^{l b}$
$\implies \text{ Bran flakes } = \frac{33 \frac{1}{3}}{100} \times 30 = {10}^{l b}$
$\implies \text{ Oatmeal } = 30 - 10 = {20}^{l b}$ | 2020-08-04 20:53:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 11, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9897426962852478, "perplexity": 8964.183798489108}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735882.86/warc/CC-MAIN-20200804191142-20200804221142-00503.warc.gz"} |