url stringlengths 14 1.76k | text stringlengths 100 1.02M | metadata stringlengths 1.06k 1.1k |
|---|---|---|
http://mathhelpforum.com/algebra/192968-equation-involving-addition-powers.html | # Math Help - equation involving addition of powers
1. ## equation involving addition of powers
Show that $2^k+2^k-1=2^{k+1}-1$
2. ## Re: equation involving addition of powers
Originally Posted by Jskid
Show that $2^k+2^k-1=2^{k+1}-1$
\displaystyle \begin{align*} 2^k + 2^k - 1 &= 2\cdot 2^k - 1 \\ &= 2^1 \cdot 2^k - 1 \\ &= 2^{k + 1} - 1 \end{align*}
3. ## Re: equation involving addition of powers
Originally Posted by Prove It
\displaystyle \begin{align*} 2^k + 2^k - 1 &= 2\cdot 2^k - 1 \\ &= 2^1 \cdot 2^k - 1 \\ &= 2^{k + 1} - 1 \end{align*}
The very first equality is not apparent to me.
4. ## Re: equation involving addition of powers
Originally Posted by Jskid
The very first equality is not apparent to me.
n + n = 2n
Here your n just happens to be 2^k. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000100135803223, "perplexity": 1565.0135306690377}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447266.73/warc/CC-MAIN-20151124205407-00049-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://crypto.stackexchange.com/questions/50693/des-bruteforce-attack-and-false-positive-keys/53048 | # DES Bruteforce attack and false positive keys
I have read that a bruteforce attack (regardless on which block cipher is used) can lead to a false positive if key space is greater than blocks space. It is pretty clear to me that is due the pigeons hole principle (if I have more keys than available blocks, sooner or later I will use two different keys for the same mapping).
My question is, how can DES exhibit such behavior if the key space is about $2^{56}$ and block space is $2^{64}$.
What I am expecting from those premises is that given a plaintext and all possible keys, not all possible ciphertexts will be addressed.
My question is, how can DES exhibit such behaviour if the key space is about $2^{56}$ and block space is $2^{64}$
Let us assume that the attacker has a single plaintext/ciphertext pair, that is, two eight byte values $P, C$ with $C = \text{DES}_k(P)$, where $k$ is the correct unknown key.
Then, let us consider the values $\text{DES}_{k'}(P)$; there $k'$ ranges over the $2^{56}-1$ possible incorrect keys. If we model DES with the incorrect key as a random permutation, then what we get is a list of $2^{56}-1$ random 8 byte values, each one of which has a $2^{-64}$ probability of just happening to be $C$.
Hence, the expected number of times the value $C$ appears on that list is $(2^{56}-1)2^{-64} \approx 2^{-8}$ (and the probability that the value $C$ appears at least once on the list is a tad smaller).
Hence, there does indeed exist a nontrivial probability that a brute force search would find two keys; the correct key $k$, and another key $k'$ that just happens to map $P$ to $C$.
Of course, in practice, we never really get only one plaintext block and one ciphertext block; we generally get additional ciphertext blocks that, at the very least, attempt to decrypt, and see if they make sense; that'll allow us to distinguish the correct key from any incorrect ones.
There is more going on here than the block length vs keylength issue of DES.
A block cipher under a given key is a pseudorandom permutation of the form $$E_k:\{0,1\}^n \rightarrow \{0,1\}^n$$ where $n$ is the block length.
Consider two distinct keys, and assume the permutations $E_{k_i}$ they determine are random permutations. Then the permutation $$E_{k_2}^{-1}(E_{k_1}(\cdot))$$ is also random.
If the plaintext $x$ is mapped to the same ciphertext under these two keys then $$E_{k_1}(x)=E_{k_2}(x)$$ or $$E_{k_2}^{-1}(E_{k_1}(x))=x.$$ Therefore set of plaintext blocks fixed by this composite permutation is the set of blocks that are mapped to the same ciphertext block by the two original permutations and hence the two keys won't be distinguished by an attack that happens to use any of those blocks. How many such $x$ are there?
By the derangement problem of combinatorics, a random permutation has roughly a fraction $1/e$ of input points that it fixes. So this happens more often that you might expect.
PS: I am assuming this is not a question about false positives in, say, linear cryptanalysis which arise due to linear relations being used to search for the subkeys holding with a certain probability less than one.
• The probability $1/e$ considered is the probability there exists a plaintext/ciphertext pair that matches two fixed distinct keys $k_1$ and $k_2$. This is not the same thing as the probability (about $2^{-8}$ ) that there exists another $k_2$ for a fixed $x$, which is what's considered. – fgrieu Nov 11 '17 at 8:03
The answer is hidden in the internal structure of DES. the Heart of DES is S-box. All internal DES operations are one-to-one mapping, except S-box which in turn, do many-to-one mapping from input to output, because it has 6-bits input and 4-bits output. that means different inputs will lead to the same output. this way some of the sub keys will generate same output for the same data (of course, I am talking for one internal Round). The average number of these subkeys is (2^48 / 2^32 = 2^16 subkeys. calculated from the input/output size of the S-BOXs). But, I still haven't a clear estimation for the number of false positive Main keys. Any ideas?
• The usual way to deal with the problem of evaluating the number of false positives in exhaustive key search is not to consider the inner structure of the bock cipher (examining the structure of rounds, as you do). Rather, it is to assume that the cipher behaves like an ideal cipher. This leads to the simple analysis in the accepted answer. – fgrieu Nov 11 '17 at 8:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8394591212272644, "perplexity": 454.29468378783366}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999853.94/warc/CC-MAIN-20190625152739-20190625174739-00013.warc.gz"} |
http://eprints.iisc.ernet.in/6538/ | # Thermo stimulated luminescence studies of porous $CdSiO_3$ ceramic powders
Chakradhar, Sreekanth RP and Nagabhushana, BM and Nagabhushana, H and Rao, JL and Chandrappa, GT and Ramesh, KP (2006) Thermo stimulated luminescence studies of porous $CdSiO_3$ ceramic powders. In: National Conference on Novel Materials and Technologies (NCNMT-2006), 17-18 February, S. V. University, Tirupati..
Preview
PDF
Porous $CdSiO_3$ ceramic powders have been synthesized by a novel low temperature initiated self-propagating, gas producing solution combustion process. The solution combustion method renders a low-temperature synthetic route to prepare fine-grained $CdSiO_3$ powders with better sintering properties. The effects of temperature on crystalline phase formation, amount of porogens, and particle size of porous $CdSiO_3$ have been investigated. Complete crystalline single phase $CdSiO_3$ has been obtained at 950 °C. It is observed from scanning electron micrograph (SEM) that the powders become more and more porous (increase in pore diameter 0.5 – 5 \mu m) as the calcinations temperature increases. Thermo stimulated luminescence (TSL) studies have been carried out in both powdered and pelletized $CdSiO_3$ irradiated with gamma rays $(^{60}Co)$ (having dose of 10-100 Gy). Two glow peaks one at ~ 390 K and another one at ~ 450 K were recorded in the entire sample. The effect of irradiation on any solid material is known to produce at least a pair of TL glow peaks arising due to recombination of two kinds of holes/electrons deficiency trapping centers with at least one type of electron/electron donor centers. The TL intensity in powdered sample is more compared to the pelletized $CdSiO_3$, which is attributed to the inter particle spacing and pressure-induced defects. Further, the TSL intensity of 450 K- glow peak increases with increase in irradiation dose. This glow peak may be used as a dosimetric peak in radiation dosimetry. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8150424957275391, "perplexity": 5750.567618205319}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118950.30/warc/CC-MAIN-20170423031158-00549-ip-10-145-167-34.ec2.internal.warc.gz"} |
https://www.openfoam.com/documentation/guides/latest/api/cyclicACMIPolyPatchI_8H_source.html | The open source CFD toolbox
cyclicACMIPolyPatchI.H
Go to the documentation of this file.
1/*---------------------------------------------------------------------------*\
2 ========= |
3 \\ / F ield | OpenFOAM: The Open Source CFD Toolbox
4 \\ / O peration |
5 \\ / A nd | www.openfoam.com
6 \\/ M anipulation |
7-------------------------------------------------------------------------------
8 Copyright (C) 2013-2016 OpenFOAM Foundation
9-------------------------------------------------------------------------------
11 This file is part of OpenFOAM.
12
13 OpenFOAM is free software: you can redistribute it and/or modify it
15 the Free Software Foundation, either version 3 of the License, or
16 (at your option) any later version.
17
18 OpenFOAM is distributed in the hope that it will be useful, but WITHOUT
19 ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
20 FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
21 for more details.
22
23 You should have received a copy of the GNU General Public License
24 along with OpenFOAM. If not, see <http://www.gnu.org/licenses/>.
25
26\*---------------------------------------------------------------------------*/
27
28// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
29
31{
32 return nonOverlapPatchName_;
33}
34
35
37{
38 // note: use nonOverlapPatchID() as opposed to patch name to initialise
39 // demand-driven data
40
41 return this->boundaryMesh()[nonOverlapPatchID()];
42}
43
44
46{
47 // note: use nonOverlapPatchID() as opposed to patch name to initialise
48 // demand-driven data
49
50 return const_cast<polyPatch&>(this->boundaryMesh()[nonOverlapPatchID()]);
51}
52
53
55{
56 if (owner())
57 {
59 }
60
62}
63
64
66{
68}
69
70
71// ************************************************************************* //
Addressing for all faces on surface of mesh. Can either be read from polyMesh or from triSurface....
Definition: boundaryMesh.H:63
const polyPatch & nonOverlapPatch() const
Return a const reference to the non-overlapping patch. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9885148406028748, "perplexity": 2965.761203038383}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499954.21/warc/CC-MAIN-20230202003408-20230202033408-00218.warc.gz"} |
https://brilliant.org/problems/things-get-tricky-here/ | # Things get tricky here
Algebra Level pending
• $$n$$ and $$a$$ are different real integers, where $$n≥0$$ and $$a≥0$$. Which of the following equations have the most real pair of integral solutions $$(a,n)$$ for any positive integer $$a$$?
• 1.$$\sqrt n - \sqrt{a-n} = a$$.
• 2.$$\sqrt n - \sqrt{n-a} = a$$
• 3.$$\sqrt n + \sqrt{a-n} = a$$
• 4.$$\sqrt n + \sqrt{n-a} = a$$
• Type your answer (1,2,3,or 4) and try not to use luck to finish this question lol.
× | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8599634170532227, "perplexity": 1280.6578570616953}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282937.55/warc/CC-MAIN-20170116095122-00514-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://guitarknights.com/guitar-tuner-guitar-store-near-me.html | After teaching guitar and music theory to thousands of students over past three decades, I thought that I had basically 'seen it all' when it comes to guitar instruction. Then I discovered Justin’s website, and man was I impressed! Justin’s caring spirit, attention to detail, vast knowledge base, and especially his lucid, laidback and nurturing style, allow students to fall in love with the learning process. You see, it’s not enough to simply find out how to play a few cool licks or chords. A truly great teacher will make you fall in love with the process of discovery so that you can unlock the best within you. Justin is one of these great teachers, and I highly recommend justinguitar.com to anyone who wants to tap into their best selves.
You need to place one finger on whatever fret you want to bar and hold it there over all of the strings on that fret. The rest of your fingers will act as the next finger down the line (second finger barring, so third finger will be your main finger, and so on). You can also buy a capo, so that you don't have to deal with the pain of the guitar's strings going against your fingers. The capo bars the frets for you. This also works with a ukulele.
The ratio of the spacing of two consecutive frets is {\displaystyle {\sqrt[{12}]{2}}} (twelfth root of two). In practice, luthiers determine fret positions using the constant 17.817—an approximation to 1/(1-1/ {\displaystyle {\sqrt[{12}]{2}}} ). If the nth fret is a distance x from the bridge, then the distance from the (n+1)th fret to the bridge is x-(x/17.817).[15] Frets are available in several different gauges and can be fitted according to player preference. Among these are "jumbo" frets, which have much thicker gauge, allowing for use of a slight vibrato technique from pushing the string down harder and softer. "Scalloped" fretboards, where the wood of the fretboard itself is "scooped out" between the frets, allow a dramatic vibrato effect. Fine frets, much flatter, allow a very low string-action, but require that other conditions, such as curvature of the neck, be well-maintained to prevent buzz.
The acoustic bass guitar is a bass instrument with a hollow wooden body similar to, though usually somewhat larger than, that of a 6-string acoustic guitar. Like the traditional electric bass guitar and the double bass, the acoustic bass guitar commonly has four strings, which are normally tuned E-A-D-G, an octave below the lowest four strings of the 6-string guitar, which is the same tuning pitch as an electric bass guitar. It can, more rarely, be found with 5 or 6 strings, which provides a wider range of notes to be played with less movement up and down the neck.
Ernie Ball is the world's leading manufacturer of premium electric, acoustic, and classical guitar strings, bass strings, mandolin, banjo, pedal steel strings and guitar accessories. Our strings have been played on many of the best-selling albums of all time and are used by some of history’s greatest musicians including Paul McCartney, Eric Clapton, Jimmy Page, Slash, The Rolling Stones, Angus Young, Eagles, Jeff Beck, Pete Townshend, Aerosmith, Metallica, and more.
When you get right down to it, Guitar Center's friendly instructors will do everything they can to help you reach your highest level of musical potential. And remember, guitar lessons are available for both newcomers to the instrument, as well as experienced players who want to push the limits of their performance to even greater heights. If you would like to learn more about any upcoming workshops at a Guitar Center near you, feel free to give us a shout via phone or email. Any info you need can be found on our Guitar Center Lessons homepage and we'll gladly answer any questions you may have.
For the second note of the A minor 7 chord, place your second finger on the second fret of the D string. This is the second of the two notes you need to fret to play this chord. Make sure you’re on the tip of your finger and right behind the fret. Now that you have both notes in place, strum the top five strings, remembering to leave the low E string out.
On a day when there's a temptation to go into a dark place, and only see all the bad stuff there is in the world ... greed, cruelty, exploitation, selfishness ... I get days like that pretty often .... it's great to find someone giving out, and giving out good, and operating on an honour basis ... There are so many people who can't afford Guitar lessons .... well, here's a wonderful guy who has set up a whole system of teaching guitar ... Blues, Jazz, Rock, even Songwriting, from the basics, tuning the guitar, etc ... upwards ... If you use his site, it's up to you to determine how much you can contribute ... but this is an amazing site .... he is also very aware of issues in the world which need attention ... a great channel .. Check him out. He's a giver.
## Guitar Compass features hundreds of free guitar lesson videos. These online lessons are designed to teach you how to play guitar by covering the absolute basics up to more advanced soloing concepts and techniques. The lessons span different difficultly levels and genres like blues, rock, country, and jazz. Each lesson is designed to introduce you to a subject and get to know our instructors and their teaching style. To access more lessons and in-depth instruction, try a free 7 day trial of our premium membership.
Yellow Brick Cinema’s Classical Music is ideal for studying, reading, sleeping (for adults and babies) and general relaxation. We’ve compiled only the best quality music from some of the world’s most renowned composers such as Mozart, Beethoven, Bach, Vivaldi, Debussy, Brahms, Handel, Chopin, Schubert, Haydn, Dvorak, Schumann, Tchaikovsky and many more.
What's the best way to learn guitar? No matter which method you choose, or what style of music you want to play, these three rules from guitar teacher Sean L. are sure to put you on the road to success... Learning guitar can be a daunting task when first approached. For many it is seen as only for the musically adept, but in reality anyone can learn guitar. By following these three simple rules, anyone can become a great guitarist. 1. Set Goals There is no one path to take for learning
The ratio of the spacing of two consecutive frets is {\displaystyle {\sqrt[{12}]{2}}} (twelfth root of two). In practice, luthiers determine fret positions using the constant 17.817—an approximation to 1/(1-1/ {\displaystyle {\sqrt[{12}]{2}}} ). If the nth fret is a distance x from the bridge, then the distance from the (n+1)th fret to the bridge is x-(x/17.817).[15] Frets are available in several different gauges and can be fitted according to player preference. Among these are "jumbo" frets, which have much thicker gauge, allowing for use of a slight vibrato technique from pushing the string down harder and softer. "Scalloped" fretboards, where the wood of the fretboard itself is "scooped out" between the frets, allow a dramatic vibrato effect. Fine frets, much flatter, allow a very low string-action, but require that other conditions, such as curvature of the neck, be well-maintained to prevent buzz.
Most electric guitar bodies are made of wood and include a plastic pick guard. Boards wide enough to use as a solid body are very expensive due to the worldwide depletion of hardwood stock since the 1970s, so the wood is rarely one solid piece. Most bodies are made from two pieces of wood with some of them including a seam running down the center line of the body. The most common woods used for electric guitar body construction include maple, basswood, ash, poplar, alder, and mahogany. Many bodies consist of good-sounding, but inexpensive woods, like ash, with a "top", or thin layer of another, more attractive wood (such as maple with a natural "flame" pattern) glued to the top of the basic wood. Guitars constructed like this are often called "flame tops". The body is usually carved or routed to accept the other elements, such as the bridge, pickup, neck, and other electronic components. Most electrics have a polyurethane or nitrocellulose lacquer finish. Other alternative materials to wood are used in guitar body construction. Some of these include carbon composites, plastic material, such as polycarbonate, and aluminum alloys.
School of Rock has a finely-tuned preschoolers program called Little Wing that offers all the benefits of our beginner lessons, but is tailored to capture the attention of these young students and set them on a path towards music proficiency. Through playful exploration of rhythm, song structure, and melody kids are introduced to the guitar and other instruments.
Learning guitar is a lot of fun, and with the right lessons anyone can become a great guitar player. However, to be successful it's important to pick the right learning method and stay focused. We designed our Core Learning System to be a step-by-step system that keeps beginners on-track and having fun. Give it a try today by becoming a Full Access member.
The Beatles668 tabs 637 visualizations1 Elvis Presley542 tabs 410 visualizations2 Nirvana513 tabs 360 visualizations3 Eagles139 tabs 349 visualizations4 Frank Sinatra387 tabs 348 visualizations5 Misc Soundtrack1160 tabs 279 visualizations6 Misc Cartoons897 tabs 265 visualizations7 Green Day650 tabs 254 visualizations8 Metallica382 tabs 247 visualizations9 Johnny Cash441 tabs 221 visualizations10
You can tell whether or not strings are of a thin or thick gauge based on the numbers on the package. The smallest number, which is the gauge of thinnest string, will usually be .9 or lower on thin gauge strings. On thick gauge strings this number will be .12 or higher. Strings that are .10 or .11 are generally considered to be “mediums”, and produce a tone and feel which is the middle ground between these two extremes.
{"eVar4":"shop: guitars","pageName":"[mf] shop: guitars","reportSuiteIds":"musiciansfriendprod","eVar3":"shop","prop18":"skucondition|0||historicalgrossprofit|1||hasimage|1||creationdate|1","prop2":"[mf] shop: guitars","prop1":"[mf] shop: guitars","prop17":"sort by","evar51":"default: united states","prop10":"category","prop11":"guitars","prop5":"[mf] shop: guitars","prop6":"[mf] shop: guitars","prop3":"[mf] shop: guitars","prop4":"[mf] shop: guitars","campaign":"directsourcecode2","channel":"[mf] shop","linkInternalFilters":"javascript:,musiciansfriend.com","prop7":"[mf] category"}
A major chord is made from the I, III and V notes, so C major uses the notes C, E and G. To make a major chord into a minor, you flatten (lower the pitch by one fret, or a half-step) the III note. This means C minor is made up of C, Eb (flat) and G. So now, from the E major scale, E = I, F# (sharp) =II, G# = III, A = IV, B = V, C# = VI and D# = VII, you can work out both the major and minor chords. Sharps are just the opposite of flats, so you raise the pitch by one fret (or half-step). When you're working out the E minor chord, you have to flatten the F#, which just makes it back into a natural (neither flat nor sharp) F.
As stated above, construction has just as much of an impact on a guitar’s tone as material. The factors that make up construction are as follows: gauge, string core, winding type, and string coating. And while these factors are all important, keep in mind that different companies use different approaches to all of them. So never be afraid to try out a variety brands, because while the strings may look the same you will get a different response.
A few years back, I dusted off the ol' Takamine I got in high school to try some 'music therapy' with my disabled son, who was recovering from a massive at-birth stroke. This reignited my long dormant passion to transform myself from a beach strummer to a 'real' musician; however, as a single mom, taking in-person lessons was financially difficult. Then I found Justinguitar! Flash forward to today; my son is almost fully recovered (YAY!), my guitar collection has grown significantly, and I'm starting to play gigs. None of this would have been possible without your guidance and generosity, Justin. Thank you for being part of the journey!
This is by far the best online instruction. The fact you receive instruction from an established professional musician who wants to hear and see you play and has the enthusiasm to want you to become the best guitar player/musician you can be speaks volumes about the level and dedication he has towards his students. This is a great value and I recommend it to everyone I know who is learning guitar.
A capo (short for capotasto) is used to change the pitch of open strings.[28] Capos are clipped onto the fretboard with the aid of spring tension, or in some models, elastic tension. To raise the guitar's pitch by one semitone, the player would clip the capo onto the fretboard just below the first fret. Its use allows players to play in different keys without having to change the chord formations they use. For example, if a folk guitar player wanted to play a song in the key of B Major, they could put a capo on the second fret of the instrument, and then play the song as if it were in the key of A Major, but with the capo the instrument would make the sounds of B Major. This is because with the capo barring the entire second fret, open chords would all sound two semitones (aka one tone) higher in pitch. For example, if a guitarist played an open A Major chord (a very common open chord), it would sound like a B Major chord. All of the other open chords would be similarly modified in pitch. Because of the ease with which they allow guitar players to change keys, they are sometimes referred to with pejorative names, such as "cheaters" or the "hillbilly crutch". Despite this negative viewpoint, another benefit of the capo is that it enables guitarists to obtain the ringing, resonant sound of the common keys (C, G, A, etc.) in "harder" and less-commonly used keys. Classical performers are known to use them to enable modern instruments to match the pitch of historical instruments such as the Renaissance music lute.
The intervals between the notes of a chromatic scale are listed in a table, in which only the emboldened intervals are discussed in this article's section on fundamental chords; those intervals and other seventh-intervals are discussed in the section on intermediate chords. The unison and octave intervals have perfect consonance. Octave intervals were popularized by the jazz playing of Wes Montgomery. The perfect-fifth interval is highly consonant, which means that the successive playing of the two notes from the perfect fifth sounds harmonious.
Unlike the piano, the guitar has the same notes on different strings. Consequently, guitar players often double notes in chord, so increasing the volume of sound. Doubled notes also changes the chordal timbre: Having different "string widths, tensions and tunings, the doubled notes reinforce each other, like the doubled strings of a twelve-string guitar add chorusing and depth".[38] Notes can be doubled at identical pitches or in different octaves. For triadic chords, doubling the third interval, which is either a major third or a minor third, clarifies whether the chord is major or minor.[39]
Open tuning refers to a guitar tuned so that strumming the open strings produces a chord, typically a major chord. The base chord consists of at least 3 notes and may include all the strings or a subset. The tuning is named for the open chord, Open D, open G, and open A are popular tunings. All similar chords in the chromatic scale can then be played by barring a single fret.[16] Open tunings are common in blues and folk music,[17] and they are used in the playing of slide and bottleneck guitars.[16][18] Many musicians use open tunings when playing slide guitar.[17]
YellowBrickCinema’s Sleep Music is the perfect relaxing music to help you go to sleep, and enjoy deep sleep. Our music for sleeping is the best music for stress relief, to reduce insomnia, and encourage dreaming. Our calm music for sleeping uses Delta Waves and soft instrumental music to help you achieve deep relaxation, and fall asleep. Our relaxing sleep music can be used as background music, meditation music, relaxation music, peaceful music and sleep music. Let our soothing music and calming music help you enjoy relaxing deep sleep. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1715477555990219, "perplexity": 3186.6879647645796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710900.9/warc/CC-MAIN-20221202082526-20221202112526-00725.warc.gz"} |
http://math.stackexchange.com/questions/180513/what-are-the-mathematical-and-real-world-applications-of-quadratic-maps-a-t | # What are the mathematical and “real world” applications of “quadratic maps”, a type of dynamical system?
If we suppose that we can get a generating function for any "quadratic map (as in dynamical systems)", what are the mathematical applications? Also, what are the "real world" applications of this? Would this, for instance, allow new computations that were previously unobtainable?
MORE DETAILED EXPLANATION
To show the problem, we start with an equation, which is our "quadratic map": $$a_{n+1} = A(a_n)^2 + B(a_n) + C$$
Then we map it out into a generating function: $$A(x) = a_0 + a_1 x + a_2 x^2 + \dots + a_n x^n + \dots$$
So, for instance, $$a_1 = A(a_0)^2 + B(a_0) + C$$ $$a_2 = A\left(A(a_0)^2 + B(a_0) + C \right)^2 + B\left(A(a_0)^2 + B(a_0) + C \right) + C$$ $$= A^2(a_0)^4+2AB(a_0)^3+(2AC+B^2+A)(a_0)^2+(2BC + B)a_0+(C^2+2C)$$ $$\dots$$
Now we can suppose, for instance, that we know a very simple formula for $A(x)$. In other words, $a_n$ may have a very complicated formula in terms of $a_0$, but the formula for $A(x)$ could be relatively simple in some cases. How can the $A(x)$ simplification be used to advantage?
-
I don't see what's quadratic about your map, except for the deceptive notation in the first equation. – Raskolnikov Aug 9 '12 at 5:09
@Raskolnikov: I didn't intend to be deceptive. I'm talking about "quadratic maps", which are a specific topic, and not just maps that are quadratic, or quadratic equations that are maps. They are defined recursively in a quadratic fashion. I've been trying to find more information about them. There is some literature under "dyanamic systems", if that helps. – Matt Groff Aug 9 '12 at 5:57
I don't see where you're getting a power series. $a_1$ is of degree 2, $a_2$ of degree 4, $a_3$ of degree 8 in $a_0$, etc., but that's a sequence of polynomials - where's the power series? And what do you have in mind when you ask for a generating function? – Gerry Myerson Aug 9 '12 at 6:03
@GerryMyerson: Sorry, changed the question to say I'm using a generating function. There's no power series. I have in mind a closed form in the strictest sense, I believe. Just a function involving the variable $x$ and very basic/elementary arithmetic - not even summations or integrations. The only other things present would be the exact expression for $a_0$. I would like to know whether or not this could be useful. I have asked a few related questions lately. – Matt Groff Aug 9 '12 at 6:16
So your question boils down to "if we have a simple, closed-form expression for the generating function of a sequence, what does it teach us about the sequence itself?", is that it? – D. Thomine Aug 9 '12 at 18:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.760762095451355, "perplexity": 353.83747799865944}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042981969.11/warc/CC-MAIN-20150728002301-00181-ip-10-236-191-2.ec2.internal.warc.gz"} |
https://www.authorea.com/users/74969/articles/141919-the-design-of-hyperfets/_show_article | # The Design of HyperFETs
## Model
### Transistor
The transistor is modeled generically by a heavily simplified virtual-source (short-channel) MOSFET model (Khakifirooz 2009). Although this model was first defined for Silicon transistors, it has been successfully adapted to numerous other contexts, including Graphene (Wang 2011) and Gallium Nitride devices, both HEMTs (Radhakrishna 2013) and MOSHEMT+VO$${}_{2}$$ HyperFETs (Verma 2017). Following Khakifirooz (Khakifirooz 2009), the drain current $$I_{D}$$ is expressed
$$\frac{I_{D}}{W}=Q_{ix_{0}}v_{x_{0}}F_{s}\\$$
where $$Q_{iz_{0}}$$ is the charge at the virtual source point, $$v_{x_{0}}$$ is the virtual source saturation velocity, and $$F_{s}$$ is an empirically fitted ”saturation function” which smoothly transitions between linear ($$F_{s}\propto V_{DS}/V_{DSSAT}$$) and saturation ($$F_{s}\approx 1$$) regimes. The charge in the channel is described via the following semi-empirical form first proposed for CMOS-VLSI modeling (Wright 1985) and employed frequently since (often with modifications, eg (Khakifirooz 2009, Radhakrishna 2013)):
$$Q_{ix_{0}}=C_{\mathrm{inv}}nV_{\mathrm{th}}\ln\left[1+\exp\left\{\frac{V_{GSi}-V_{T}}{nV_{\mathrm{th}}}\right\}\right]\\$$
where $$C_{\mathrm{inv}}$$ is an effective inversion capacitance for the gate, $$nV_{th}\ln 10$$ is the subthreshold swing of the transistor, $$V_{GSi}$$ is the transistor gate-to-source voltage, $$V_{T}$$ is the threshold voltage, and $$V_{\mathrm{th}}$$ is the thermal voltage $$kT/q$$.
For precise modeling, Khakifirooz includes further adjustments of $$V_{T}$$ due to the drain voltage (DIBL parameter) and the gate voltage (strong vs weak inversion shift), as well as a functional form of $$F_{s}$$. For a first-pass, we will ignore these effects, employ a constant $$V_{T}$$, and assume the supply voltage is maintained above the gate overdrive such that $$F_{s}\approx 1$$. However, we will add on a leakage floor with conductance $$G_{\mathrm{leak}}$$. Altogether, the final current expression (for the analytical part of this analysis) is
$$\label{eq:transistor_iv}\frac{I_{D}}{W}=nv_{x_{0}}C_{\mathrm{inv}}V_{th}\ln\left[1+\exp\left\{\frac{V_{\mathrm{GSi}}-V_{\mathrm{T}}}{nV_{th}}\right\}\right]+\frac{G_{\mathrm{leak}}}{W}V_{\mathrm{DSi}}\\$$
## Phase-change resistor
\label{ss:PCR}
The phase-change material is included by a similarly generic and brutally simple model. As done with the transistor, the goal is to capture only the most relevant feature: here, an abrupt change in resistance. However, for a concrete example, the material most frequently used in HyperFET research (Pergament 2013, Shukla 2015) is Vanadium Diozide (VO$${}_{2}$$), which features an S-style (ie current-controlled) and hysteretic negative differential resistance (NDR) region (Pergament 2016, Zimmers 2013) due to an insulator-metal transition (IMT), the underlying mechanism of which has been a source of long-running controversy (Pergament 2013). Though the literature contains numerous examples of voltage-swept I-V curves (Shukla 2015, Zimmers 2013, Radu 2015, Yoon 2014), proper modeling of a current-controlled NDR device in a circuit requires a current-swept I-V, examples of which can be found in (Zimmers 2013, Kumar 2013, Pergament 2016). The cleanest of these is Figure 1(b) of Kumar (Kumar 2013), which is suggested to the reader as a concrete realization of the model used herein.
The phase-change resistor (PCR) will be described by a hysteretic piecewise-linear model:
$$\label{eq:PCR_iv}V_{R}=\left\{\begin{array}[]{llr}I_{R}R_{\mathrm{ins}}&,&I_{R}<I_{\mathrm{IMT}}\\ V_{\mathrm{met}}+I_{R}R_{\mathrm{met}}&,&I_{R}>I_{\mathrm{MIT}}\\ \end{array}\right\}\\$$
where we require $$I_{\mathrm{MIT}}\leq I_{\mathrm{IMT}}$$ to ensure that the model is defined for all values of the current; $$I_{\mathrm{MIT}}=I_{\mathrm{IMT}}$$ would be the case of zero hysteresis. For convenience, we define voltage thresholds, $$V_{\mathrm{IMT}}=I_{\mathrm{IMT}}R_{\mathrm{ins}}$$ and $$V_{\mathrm{MIT}}=I_{\mathrm{MIT}}R_{\mathrm{met}}+V_{\mathrm{met}}$$. Finally, we require $$V_{\mathrm{met}}+I_{IMT}R_{\mathrm{met}}<V_{\mathrm{IMT}}$$ and $$I_{\mathrm{MIT}}R_{\mathrm{ins}}>V_{\mathrm{MIT}}$$ to ensure that the absolute resistance of the metallic state is lower than that of the insulating state wherever they are both defined.
## HyperFET Regimes
When the PCR is attached in series with the source of the transistor, the total device satisfies the above equations with the additional matching $$I=I_{D}=I_{R}$$ and $$V_{\mathrm{GSi}}=V_{GS}-V_{R}$$ where $$I$$ is the current through the device and $$V_{GS}$$ is the voltage between HyperFET gate (the transistor gate) and the HyperFET source (the exterior node of the resistor). We can immediately solve for several regions of the HyperFET model. For this section, it is assumed that the transistor and PCR are scaled such that the hysteretic region is entirely contained within subthreshold, and above the leakage floor; these choices will be discussed in the next section.
### Leakage floor
When the transistor is completely off, only the leakage term of (\ref{eq:transistor_iv}) remains, and combines with the PCR off-state resistance, leading to
$$I=G_{\mathrm{off}}V_{DS},\quad G_{\mathrm{off}}^{-1}=R_{\mathrm{ins}}+1/G_{\mathrm{leak}}\\$$
### Insulating (lower) branch of hysteretic region
For the lower branch (in the region above the leakage floor), we plug $$V_{\mathrm{GSi}}=V_{\mathrm{GS}}-IR_{\mathrm{ins}}$$ into the transistor I-V (\ref{eq:transistor_iv}), and take the subthreshold limit: $$\ln(1+e^{x})\approx e^{x}$$ for $$-x\gg 1$$.
$$\label{eq:insbranch_preW}\frac{I}{W}=nC_{\mathrm{inv}}v_{x_{0}}V_{th}\exp\left\{\frac{V_{\mathrm{GS}}-IR_{\mathrm{ins}}-V_{\mathrm{T}}}{nV_{th}}\right\}\\$$
This can be rearranged and solved in terms of the Lambert $$\mathcal{W}$$ function
$$\label{eq:insbranch}I=\frac{nV_{th}}{R_{\mathrm{ins}}}\mathcal{W}\left[WC_{\mathrm{inv}}v_{x_{0}}R_{\mathrm{ins}}\exp\left\{\frac{V_{\mathrm{GS}}-V_{\mathrm{T}}}{nV_{th}}\right\}\right]\\$$
### Metallic (upper) branch of the hysteretic region
For the upper branch, we plug in $$V_{\mathrm{GSi}}=V_{\mathrm{GS}}-V_{\mathrm{met}}-IR_{\mathrm{met}}$$, and follow the same procedure to find
$$\label{eq:metbranch}I=\frac{nV_{th}}{R_{\mathrm{met}}}\mathcal{W}\left[WC_{\mathrm{met}}v_{x_{0}}R_{\mathrm{met}}\exp\left\{\frac{V_{\mathrm{GS}}-V_{\mathrm{met}}-V_{\mathrm{T}}}{nV_{th}}\right\}\right]\\$$
Note that if the metal-state resistance is small $$IR_{\mathrm{met}}\ll nV_{th}$$, we can approximate
$$\frac{I}{W}\approx nV_{th}C_{\mathrm{met}}v_{x_{0}}\exp\left\{\frac{V_{\mathrm{GS}}-V_{\mathrm{met}}-V_{\mathrm{T}}}{nV_{th}}\right\}\\$$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 9, "x-ck12": 0, "texerror": 0, "math_score": 0.9174749851226807, "perplexity": 1874.7232746266693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886108709.89/warc/CC-MAIN-20170821133645-20170821153645-00622.warc.gz"} |
http://math.stackexchange.com/questions/355418/should-i-be-worried-that-i-am-doing-well-in-analysis-and-not-well-in-algebra | Should I be worried that I am doing well in Analysis and not well in Algebra?
I attend a mostly liberal arts focused university, in which I was able to test out of an "Introduction to Proofs" class and directly into "Advanced Calculus 1" (Introductory Analysis I) and I loved it; I did great in the class. I was not very mathematically mature at the time, but I studied hard and started to outpace many of the senior level students who had a least a good year or more of experience than me. Furthermore, the professor teaching the course was apparently known to be particularly difficult, but I loved his course. I enjoyed the challenge and wound up with a B+, the 2nd highest grade given in the class. I took Advanced Calculus 2 and loved it even more. The professor even suggested that I take a graduate Complex Analysis course in the Fall. (Just a side note here, the undergraduate complex analysis course at my school does not use any theorem's or proofs. The grad version is similar to say, an honors undergraduate course at more traditional math program.) I took this as a high complement, and a verification that I was in fact doing well. I know I am not very deep into Analysis, but I feel comfortable with the subject, even with the more abstract parts.
However, I am really struggling with Abstract Algebra. I can't understand why. I study the material really hard. I am doing "better than most" in the class, and I am maintaining a solid B average, but I really have trouble thinking about Algebra like I do Analysis. I feel like I am mostly just regurgitating theorems and techniques just to pass the exams. I know I can pass the course, but I also know that this mindless memorization will eventually come back to haunt me later on in my mathematical career. Algebra is truly one of the "pillars" of math which is why I really feel terrible that I don't understand it.
Is this a sign that I simply don't have what it takes to succeed in math? I would love to go on to graduate school and hopefully get a Ph.D. In fact, a professor actually said to me "I think it would be a shame if you didn't go to grad school for math." He told me that before I took Algebra, but now I feel like my world is "crashing down" in a sense. Before I was a "good" student, now, I feel like a zombie in the back of the room. Any input is greatly appreciated but I really want to know if this has happened to anyone who has gone on to succeed in a Ph.D math program?
-
For what it's worth, almost all of my math major friends find themselves falling into the category of "analysis person" or "algebra person" (and the graduate students and postdocs sort themselves into finer categories, like "harmonic analysis person" and "algebraic geometry person"). Some are excellent at both but it looks like people's intuitions tend to develop differently in these two areas. – Julien Clancy Apr 9 '13 at 0:54
Abstract algebra isn't a prerequisite for everything. You can still study p.d.e.'s and statistics/probabilities and numerical analysis without much knowledge of algebra. If you want to do pure mathematics though you had probably better study... (by the way: I felt the same way about analysis..) – Cocopuffs Apr 9 '13 at 0:57
@user66345 That's very true and deserves mention: algebra is a very jargon-y discipline. Learning the definitions and how they relate to each other is half the battle. – Alexander Gruber Apr 9 '13 at 2:01
Superman does good; you're doing well. You need to study your grammar, son. – in_wolframAlpha_we_trust Apr 9 '13 at 5:32
Don't worry. I never did good in analysis; but I did just fine in set theory. Algebra was hit and miss. – Asaf Karagila Apr 9 '13 at 9:35
I believe that I may be of some consolation.
I had a very similar experience to you. I started doing "serious" math when I was a senior in high school. I thought I was very smart because I was studying what I thought was advanced analysis--baby Rudin. My ego took a hit when I reached college and realized that while I had a knack for analysis and point-set topology, I could not get this algebra thing down! I just didn't understand what all these sets and maps had to do with anything. I didn't understand why they were useful, and even when I finally did grasp a concept I was entirely impotent when it came to those numbered terrors at the end of chapters.
I held the same fear that you do. I convinced myself that I was destined to be an analyst--I even went as far to say that I "hated" algebra (obnoxious, I know). After about a year of so, with the osmotic effect of being in algebra related classes, and studying tangentially related subjects, I started to understand, and really pick up on algebra. Two years after that (now) I would firmly place myself on the algebraic side of the bridge (if there is such a thing), even though I still enjoy me some analysis!
I think the key for me was picking up the goals and methods of algebra. It is much easier for a gifted math student to "get" analysis straight out of high-school, you have been secretly doing it for years. For the first half of Rudin while I "got it", this was largely thanks to the ability to rely on my calculus background to get why and how we roughly approached things. There was no such helpful intuition for algebra. It was the first type of math I seriously attempted to learn that was "structural", which was qualitative vs. quantitative. My analytic (read calculus) mind was not able to understand why it would ever be obvious to pass from ring X to its quotient, nor why we care that every finitely generated abelian group is a finite product of cyclic groups. I just didn't understand.
But, as I said, as I progressed through more and more courses, learned more and more algebra and related subjects, things just started to click. I not only was able to understand the technical reasons why an exact sequence split, but I understood what this really means intuitively. I started forcing myself to start phrasing other parts of mathematics algebraically, to help my understanding.
The last thing I will say to you, is that you should be scared and worried. I can't tell you how many times in my mathematical schooling I was terrified of a subject. I always thought that I would never understand Subject X or that Concept Y was just beyond me. I can tell you, with the utmost sincerity, that those subjects I was once mortified by, are the subjects I know best. The key is to take your fear that you can't do it, that algebra is just "not your thing", and own it. Be intrigued by this subject you can't understand, read everything you can about it, talk to those who are now good at the subject (even though many of them may have had similar issues), and sooner than you know, by sheer force of will you will find yourself studying topics whose name would make you-right-now die of fright. Stay strong friend, you can do it.
-
+1 especially for the last paragraph. You gotta love the smell of napalm in the morning. – Alexander Gruber Apr 9 '13 at 1:53
I can really connect with you. Believe it or not, I was actually a high school dropout, math was the subject I hated the most. When I finally screwed my head on right and decided to go back to school, I told myself I wouldn't let math stop me. I read about it, studied it, talked to people who did it, and well, here I am now. : ) Thank you for the encouraging words. – Eric Apr 9 '13 at 2:36
Off topic, but where are you going to graduate school, Alex? – Potato Apr 11 '13 at 0:28
@Potato Berkeley – Alex Youcis Apr 11 '13 at 1:39
@AlexYoucis Congratulations! – Potato Apr 11 '13 at 1:52
Some people are just naturally more analytic than algebraic, and vice versa. Personally, I do research level algebra, but if I see $\epsilon$ and $\delta$ on the same page I run screaming.
That's not good though, so I'm making myself do it. I enrolled in a complex analysis class, and awful as y'all's side of the fence is, I'm sticking to it. And though I'm not doing the best in the class, y'know, I'm starting to enjoy parts of it.
You can't expect to excel in every area as a mathematician, so focus on rocking at the stuff you do like to do, but make sure your head stays above water in the areas you don't like. You never know when you might end up needing algebra to accomplish something in analysis. If you need motivation, try thinking about hybrid disciplines like functional analysis / operator theory (there's even an "algebraic analysis"). Relate theorems you learn in algebra back to analysis in any way you can. It will help you remember them and maybe give you some cool ideas for later.
-
@OP Like Alexander Gruber, I progressed with algebra way more quickly than with analysis. By the end of last year I had taken every single algebra class offered at my university including algebraic topology - while only having done a basic course in analysis of metric spaces! That being said, I still forced myself to take a measure theory course. Also, the distinction is not just algebra/analysis - what about subjects like Differential Geometry say? – user38268 Apr 9 '13 at 1:46
+1 for the entire post, but especially for the suggestion regarding hybrid disciplines. I'm a differential geometer at heart, so much of my own motivation for algebra comes from an area that sits in between the two: (complex) algebraic geometry. – Jesse Madnick Apr 11 '13 at 0:46
No reason to be alarmed or worried...it's too early for you to be in a position to worry about it. For a first crack at abstract algebra, don't fret. Most undergraduate math majors inevitably do fall into "analyis-oriented" and "algebra-oriented", just as in highschool, there is often a "partition" of students into those who prefer high school algebra and those who prefer geometry.
But, it takes more than one course to know this about yourself. Abstract algebra, when I took it as an undergraduate, was typically the gateway course into higher-level abstract math. Personally, I loved it! But there were also classes I developed a love for after covering the "tools" and language of the field: i.e., only after having taken a class or two.
If it's any consolation, you are encountering abstract algebra at a very young age, and though you no doubt have mathematical talent, it does take "time" and effort, and not just raw talent, to develop the cognitive and mathematical maturity to reason abstractly and to develop a sense of "grasping a subject intuitively." Sometimes this is facilitate by classes like an "Introduction to the language and practice of mathematics" (which a Univ near me offers as a bridge between calculus/differential equations and introductory linear algebra, and all subsequent course offerings.
At any rate, wrt becoming comfortable with the more abstract nature of what you're encountering: It is largely a matter of exposure to and engagement with abstract math to operate comfortably within that realm, but there is also a purely developmental component which impacts the ease with which this "acclimation" occurs.
Is this a sign that I simply don't have what it takes to succeed in math?
No, it is not. Every math student I know, sooner or later, has encountered a point where they ask themselves that very question. Often times, more than once.
How you respond at points like this, and how you respond when you feel overwhelmed or intimidated (and you will feel that way again, if you persist in your studies!): that will determine whether you have what it takes to succeed in math.
-
Eric: I hope you don't "give up" on algebra yet. Certainly, don't give up math!! My point was/is: don't question your aptitude for further study based on your first experience in a subject. Most highly talented math students are alarmed when they first have to really struggle, partly because math had all previously come more easily. But EVERY serious student of math encounters a point where they feeling rather overwhelmed, even if they won't admit it! It happens to us all, but getting through this is what brings you to the "next level" - and deepens your love for math. – amWhy Apr 9 '13 at 22:27
I am also an undergrad student. I suggest you to calm down. I know I will do a math PhD, so I work hard to enjoy the math and feel no pressure. Sometimes or often I have difficulty in some areas that might be trivial to many people, but I don't feel the rush because I know one day I will understand them, simply by going through them over and over again.
I often compare studying math to playing a computer game with infinitely many levels that gets exponentially harder. Sure, it's good if you make lots of progress and go through many levels fast, but sometimes the point of playing the game is to enjoy the moment and not to always aim at next level while you are playing your current level, because sooner or later you will get stuck at some level and might not go to next for a long long time.
So relax and study and enjoy.
-
Actually, in some levels the point is to collect all the coins in that level. – Rudy the Reindeer Apr 22 '13 at 19:43
I heard somewhere that "Abstract Algebra is the class that separates the boys from the men", so I'd be worried... algebra is more abstract, with concepts that are hard to impossible to visualize. And mathematics is mostly about abstractions. You need to get acquainted with them.
Just my 2¢ as a mathematics minor...
Good luck! Just keep at it, get other sources for different viewpoints/teaching technique/emphasis, not everybody learns the same way. Try doing exercises, check out what cooks here in the area, answer questions.
-
I very much disagree, especially with the quote. I find it difficult to visualise a whole host of analytic ideas and am much more comfortable with algebraic notions, but this is a personal preference! Different people have different skill sets, is say a graduate student in PDEs a boy and I a man because I know what a Dedekind domain is and they don't? Of course not! To Eric, keep doing what you enjoy and what you are good at, try and relate the algebraic things you learn about to the analysis you like, you'll find it a lot easier to understand the ideas with a few examples you like in mind. – Alex J Best Apr 9 '13 at 1:23
All math are abstract. All are equally difficult at higher level. All takes enough devotion in order to succeed. Have faith in yourself. Intelligent and ingenious steps result only through hard and continued effort. First try to finish cover to cover "Jacobson's vol.I, Abstract Algebra". This book will definitely enhance your taste for AA.
-
A warning: Jacobson is tough, and his writing style is not for everyone. – Alexander Gruber Dec 13 '13 at 20:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.408758282661438, "perplexity": 803.0813246414781}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802771374.156/warc/CC-MAIN-20141217075251-00092-ip-10-231-17-201.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/345600/being-ready-to-study-calculus?answertab=votes | # Being ready to study calculus
Some background: I have a degree in computer science, but the math was limited and this was 10 years ago. High school was way before that. A year ago I relearnt algebra (factoring, solving linear equations, etc). However, I have probably forgotten some of that. I never really studied trigonometry properly.
I want to self study calculus and other advanced math, but I feel there are some holes that I should fill before starting. I planned on using MIT OCW for calculus but they don't have a revision course. Is there a video course or book that covers all of this up to calculus? (No textbooks with endless exercises please.) I would like to complete this in a few weeks. Given my background, I think this is possible.
-
Maybe "Essential Calculus with Applications" by Richard Silverman? You can look at the table of contents on Amazon to see if it fits your needs (read also customer reviews). It's at a relatively basic level, but if you start from scratch, it's probably better. – Jean-Claude Arbaut Mar 29 '13 at 11:42
Thanks arbautjc, I was looking for something that covers pre calculus more, i.e. trig, functions, logs, complex numbers and more algebra. – Mark Mar 29 '13 at 12:01
here they suggest "Calculus" Apostol. Much more expensive, but also much more comprehensive. After a quick look ate TOC, it seems to be at the level of first two years in university in France (in scientific studies), and it covers the whole maths courses (I mean the two volumes). – Jean-Claude Arbaut Mar 29 '13 at 12:11
See my previous comment – Mark Mar 29 '13 at 12:32
Math, just like riding a bicicle, is learned by doing. So the "endless exercises" are an asset. Nobody forces you to do all of them, select some and work them over carefully. Perhaps check out Pólya's clasic "How to solve it", problem solving skills will be indispensable later on. – vonbrand Mar 29 '13 at 12:50
On my first day of college, my Calc III professor began by extending the properties of real numbers to $\mathbb{R}^n$ vector space. At the end of class, I came up to him, very much floored that the words "distributive property" ever made a reappearance in my life, and mumbled something about not being cut out for this and asking for a resource to do a massive review. He pointed simply to homework he just assigned and said, "That will do fine."
This is all to say that learning math is a little like learning English. The best way to improve your vocabulary is to read and look up unfamiliar words, not insist on finishing the dictionary first. Fill in holes when you get to them: the exponent rules comes first with limits and differentiation, then logarithms, then trig properties, and then all of them again with integration. Don't go back to high school, even if it is only a few weeks; you'll do just fine without it.
-
Very good. Very astute. Don't try to prepare in advance toooo much for a thing you don't-know-what-is... Rather, arrive there, discover your own needs, and respond then, with that information. – paul garrett Apr 1 '13 at 2:05
I think this is a great approach, I had been put off, however when I heard people warning, "if you don't have Algebra and Trig mastered you will struggle with Calculus". – Mark Apr 1 '13 at 9:20
But where do you start? – Surya Nov 27 '13 at 18:48
The lecture notes by William Chen cover the requested material nicely. The Trillia Group distributes good texts too.
-
Try Paul's Online Math notes covering algebra-precalculus, calculus and differential equations.
-
Thanks, they look good, it doesn't seem to cover Trig though, but I suppose it can be seen elswhere – Mark Mar 29 '13 at 12:29
@Mark there is a link near the bottom called "Algebra/Trig Review" in addition to the cheat sheets at the top of the page. – Tyler Mar 29 '13 at 12:55
@TylerBailey Ok I see. – Mark Mar 29 '13 at 13:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6169793605804443, "perplexity": 1343.9611804859273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500830323.35/warc/CC-MAIN-20140820021350-00359-ip-10-180-136-8.ec2.internal.warc.gz"} |
http://tex.stackexchange.com/questions/33590/print-only-existing-footnotes-suppress-empty-footnotes?answertab=active | # Print only existing footnotes / suppress empty footnotes
This is a follow-up to my earlier question Merge separate footnotes into text.
I now have a similar case, where I need to check whether there actually are footnotes relating to a particular number:
\documentclass[a4paper]{article}
\usepackage{lipsum}
\newcommand{\mylips}[1]{\printfootnote#1 \lipsum[#1]}
\def\printfootnote#1{\footnote{\csname extfootnote#1\endcsname}}
\def\definefootnote#1 #2\endfootnote{%
\expandafter\def\csname extfootnote#1\endcsname{#2}}
\input{footnotefile}
\begin{document}
\mylips{1}
\mylips{2}
\mylips{3}
\mylips{4}
\mylips{5}
\mylips{6}
\mylips{7}
\end{document}
footnotefile.tex would be
\definefootnote1 Some footnote\endfootnote
\definefootnote3 Another footnote\endfootnote
\definefootnote7 And another footnote\endfootnote
The challenge is that I don't want the empty notes to appear at all.
How would I write a test for whether the result of a certain \printfootnote\arabic{somecounter} will be empty?
-
Well, your \printfootnote should check whether the footnote was defined:
\def\printfootnote#1{% | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22186583280563354, "perplexity": 2991.0621293489507}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894473.81/warc/CC-MAIN-20140722025814-00051-ip-10-33-131-23.ec2.internal.warc.gz"} |
https://stats.stackexchange.com/questions/321000/bivariate-normal-distribution-link-bbb-ey-mid-x-x-and-bbb-ex-mid-y-y | # Bivariate normal distribution , link $\Bbb E(Y\mid X=x)$ and $\Bbb E(X\mid Y=y)$ [duplicate]
Note: I edited this question on 1/1/2018 because of the comments on the original question. So some comments relate to the earlier version. It is closed as duplicaten but I dusagree with that
For a bi-variate normal distribution with mean $(\mu_X, \mu_Y)$, variances $\sigma_X^2, \sigma_Y^2$ and correlation $\rho$ it holds (see e.g. http://athenasc.com/Bivariate-Normal.pdf, https://math.stackexchange.com/questions/33993/bivariate-normal-conditional-variance) that
$$\Bbb E(Y\mid{X=x})=\mu_Y + \rho \frac{\sigma_Y}{\sigma_X}(x-\mu_X)$$
By symmetry I would say that (is this correct?)
$$\Bbb E(X\mid{Y=y})=\mu_X + \rho \frac{\sigma_X}{\sigma_Y}(y-\mu_Y)$$
Next question:
These are two lines in an $(x,y)$ plane.
The picture below shows and example of a sample from such a bivariate normal distribution. Conditioning on $X$ means that I 'intersect' along the red lines, conditioning on $Y$ is 'intersecting' along the blue lines.
The contours of constant density for a bi-variate normal distribution are (rotated) ellipses.
Note that the rotation angle of the prinicipal axis wrt to the $x$-axis depends on the correlation $\rho$ (so, as opposed to what @whuber says in his comments below) correlation is the relevant topic here. Because, as argued by @MichaelHardy, there are values for $\rho$ where these lines correspond to these principal axis.
My question is whether the two lines of the conditional means correspond to (one of) the principal axes of these ellipses and if not how this can be explained geometrically.
Note: this question is not answered by the answer of @DilipSarwate here (and which I fully agree with) : Effect of switching response and explanatory variable in simple linear regression because he is using OLS (so regression techniques). The above formula are a theoretical property of the bivariate (multivariate) normal distribitions namely that all conditional distributions of it are also normal (see http://athenasc.com/Bivariate-Normal.pdf), there is no need for referring to regression of OLS to show that.
## marked as duplicate by whuber♦Jan 1 '18 at 14:54
• The "next question" is a duplicate of Effect of switching response and explanatory variable in simple linear regression – Dilip Sarwate Dec 31 '17 at 15:30
• I was referring to the highlighted "next question" which merely shows a bunch of data points and asks about fitting a straight line to them. It doesn't matter in the least as to whether the points are from a bivariate normal distribution of not; the answer is linear regression in either case, and is thoroughly discussed in the cited question. – Dilip Sarwate Dec 31 '17 at 18:08
• "Regression" refers to estimating properties of conditional distributions. That makes this question squarely about regression. Correlation, although related to regression in the Binormal case, is not under discussion here and isn't terribly relevant. The formulas supplied in this question hold for Ordinary Least Squares regression, which is applicable very broadly and is not confined to the Binormal case. Thus, this question is entirely about (linear) regression, which is why you are getting regression-oriented answers. – whuber Dec 31 '17 at 18:51
• A good one is Freedman, Pisani, and Purves, Statistics (any edition). – whuber Dec 31 '17 at 20:26
• Your revision merely obscures the fact that the basic ideas you are asking about are about regression and not about the Binormal distribution or correlation. OLS still applies and is still informative. Indeed, the relationship between the slope of the linear regression and the correlation coefficient holds regardless of the underlying distribution. Thus, your edits actually harm the question rather than help it. Please understand that there's nothing wrong with the new question per se: but you will find it has been thoroughly answered in the second duplicate thread. – whuber Jan 1 '18 at 14:53
Your argument from symmetry is correct.
They do not represent the same line.
That can be seen by looking at $y = mx+b$ and solving for $x,$ getting $x= \frac 1 m y - \frac b m,$ and seeing that the coefficient of $x$ in the first equation and that of $y$ in the second are each other's reciprocals. But $\rho\sigma_X/\sigma_Y$ and $\rho\sigma_Y/\sigma_X$ are not reciprocals of each other.
To see why you ought to expect two different lines, consider the case in which $\rho=0.$ Then $X$ and $Y$ are uncorrelated. Thus the estimated expected value of $Y$ given $X=x$ should not depend on $x,$ and so the line is $y = \mu_X,$ a horizontal line. But similarly you'd get the line $X=\mu_Y,$ a vertical line if the $y$-axis is vertical, to estimate the average value of $X$ given $Y=y.$ Clearly two different lines. Then consider what you should expect if $\rho = 0.01,$ etc.
This has a seemingly paradoxical result: If you find the estimated average $y$-value for a given $x$-value, and then find the estimated average $x$-value of that $y$-value, then you don't return to where you started, but instead get something closer to the average $x$-value. For example, suppose you want to estimate an athlete's performance next week given his performance today. If he performs unusually well or unusually badly today, then this says he will be closer to average performance next week than he is today. But if they're always moving toward the average, how is it that we don't see them all near the average after some time has passed? The answer is that although most of the ones whose performance is far from the average today are closer to average next week, there will be some others whose performance diverges from the average then.
Your argument from symmetry re the formulas for $E[Y\mid X=x]$ and $E[X\mid Y=y]$ is correct. For bivariate normal random variables, $E[Y\mid X=x]$ is a linear function of $x$ and $E[X\mid Y=y]$ is a linear function of $y$ with formulas as you have found them.
With regard to fitting straight lines to a bunch of data points -- as you have asked in the highlighted "Next question" in your query -- a detailed description of what happens with data points as you have shown in the figure and fitting straight lines to them, read the answers to Effect of switching response and explanatory variable in simple linear regression. It doesn't matter in the least whether the points came from a bivariate normal distribution or not: the straight line fit is the same in either case, and, as the answers to the referenced question show, the two straight lines are different unless all the data points lie on a straight line.
• Regression is not about bivariate normal distribution? It s about cirrelation hete. – user83346 Dec 31 '17 at 16:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8083655834197998, "perplexity": 376.84159939635634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986684854.67/warc/CC-MAIN-20191018204336-20191018231836-00263.warc.gz"} |
http://www.scienceforums.com/topic/24187-percent-yield-after-dehydration-of-2-methyl-2-butanol/ | # Percent Yield After Dehydration Of 2-Methyl-2-Butanol
No replies to this topic
### #1 GlitterGirl
GlitterGirl
Curious
• Members
• 1 posts
Posted 23 October 2011 - 12:07 PM
Organic Chemistry Lab, we performed an experiment to dehydrate 2-methyl-2-butanol and formed 2-methyl-2-butene(85.888%) and 2-methyl-1-butene(14.112%), percentages derived from gas chromatoghraphy analysis after dehydration.
We were asked to calculate the percent yield but I'm not sure how to do this. The starting material was 18mL of H2O, 9mL of concentrated sulfuric acid, and 18mL (15grams) of 2-methyl-2-butanol. After dehydration was complete, the final product (which was the mixture of 2-methyl-2-butene and 2-methyl-1-butene, before GC analysis) was weighed at 3.055 grams.
I am confused, is the question of percent yield just asking after dehydration how much was left meaning 3.055g/15gx100 giving only a 20.4% yield?
Any explanation would be GREATLY appreciated. BTW, this calculation is required in a lab report that is due tomorrow. Thanks to anyone who is wiling to help! | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8645046353340149, "perplexity": 4519.276228833469}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542288.7/warc/CC-MAIN-20161202170902-00410-ip-10-31-129-80.ec2.internal.warc.gz"} |
https://www.johanneskirche.at/?q=calendar-node-field-time/year | M D M D F S S
2
3
4
5
6
7
9
14
16
18
19
23
25
26
27
28
30
31
M D M D F S S
2
6
7
8
11
13
14
15
17
18
20
21
22
23
24
25
27
28
M D M D F S S
2
4
6
8
9
10
11
13
15
18
20
22
23
24
25
27
29
30
31
M D M D F S S
1
3
4
5
6
7
8
9
10
11
12
13
15
16
17
19
20
21
22
23
24
25
26
27
28
M D M D F S S
2
3
4
6
7
8
9
10
11
12
13
14
15
16
19
20
21
22
23
24
25
26
27
28
29
30
31
M D M D F S S
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
M D M D F S S
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
M D M D F S S
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
M D M D F S S
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
M D M D F S S
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
M D M D F S S
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
M D M D F S S
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9539062976837158, "perplexity": 7462.146474067424}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499891.42/warc/CC-MAIN-20230131222253-20230201012253-00706.warc.gz"} |
http://laneas.com/publication/stackelberg-game-incentive-proactive-caching-mechanisms-wireless-networks | # A Stackelberg Game for Incentive Proactive Caching Mechanisms in Wireless Networks
Conference Paper
### Authors:
Fei Shen; Kenza Hamidouche; Ejder Baştuğ; Mérouane Debbah
### Source:
IEEE Global Communications Conference, Washington, DC, USA (2016)
### Abstract:
In this paper, an incentive proactive cache mechanism in cache-enabled small cell networks (SCNs) is proposed, in order to motivate the content providers (CPs) to participate in the caching procedure. A network composed of a single mobile network operator (MNO) and multiple CPs is considered. The MNO aims to define the price it charges the CPs to maximize its revenue while the CPs compete to determine the number of files they cache at the MNO's small base stations (SBSs) to improve the quality of service (QoS) of their users. This problem is formulated as a Stackelberg game where a single MNO is considered as the leader and the multiple CPs willing to cache files are the followers. The followers game is modeled as a non-cooperative game and both the existence and uniqueness of a Nash equilibrium (NE) are proved. The closed-form expression of the NE which corresponds to the amount of storage each CP requests from the MNO is derived. An optimization problem is formulated at the MNO side to determine the optimal price that the MNO should charge the CPs. Simulation results show that at the equilibrium, the MNO and CPs can all achieve a utility that is up to $50$% higher than the cases in which the prices and storage quantities are requested arbitrarily.
Full Text: | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46580076217651367, "perplexity": 1330.0694646116508}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589237.16/warc/CC-MAIN-20180716080356-20180716100356-00016.warc.gz"} |
https://arxiv.org/abs/1811.08912?context=astro-ph | astro-ph
# Title:Study of the X-ray pulsar IGR J19294+1816 with NuSTAR: detection of cyclotron line and transition to accretion from the cold disc
Abstract: In the work we present the results of two deep broad-band observations of the poorly studied X-ray pulsar IGR J19294+1816 obtained with the NuSTAR observatory. The source was observed during Type I outburst and in the quiescent state. In the bright state a cyclotron absorption line in the energy spectrum was discovered at $E_{\rm cyc}=42.8\pm0.7$ keV. Spectral and timing analysis prove the ongoing accretion also during the quiescent state of the source. Based on the long-term flux evolution, particularly on the transition of the source to the bright quiescent state with luminosity around $10^{35}$ erg s$^{-1}$, we concluded that IGR J19294+1816 switched to the accretion from the "cold" accretion disc between Type I outbursts. We also report the updated orbital period of the system.
Comments: 7 pages, 8 figures, 2 tables; accepted for publication in A&A Subjects: High Energy Astrophysical Phenomena (astro-ph.HE) Journal reference: A&A 621, A134 (2019) DOI: 10.1051/0004-6361/201833786 Cite as: arXiv:1811.08912 [astro-ph.HE] (or arXiv:1811.08912v1 [astro-ph.HE] for this version)
## Submission history
From: Sergey Tsygankov [view email]
[v1] Wed, 21 Nov 2018 19:00:15 UTC (605 KB) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7747349143028259, "perplexity": 5908.307509578281}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986669546.24/warc/CC-MAIN-20191016190431-20191016213931-00150.warc.gz"} |
https://physics.stackexchange.com/questions/392416/how-to-know-wind-speed-given-the-force | How to know wind speed, given the force
I'm doing a game that is set in a desert and there are sand storms that occur periodically. What I've done is very basic and far from realistic, which is
if (distance > 30) {
windForce = 60000 / distance;
}
Beyond 30 meters you are within the eye of the storm and (supposedly) feel no wind. Given the equation above, at 30 meters the force would be 2.000 newtons and get weaker from there.
How can I know how much wind speed that is in meters/s? I know wind speed is calculated by special equipment and it depends on air density and temperature, (and probably other factors), but a rough approximation would do. And as I said, it's in a desert, so hot and dry climate.
• Air isn't very viscous, so force shouldn't really be a function of distance (not at scales of ~30 m). – Mohammad Athar Mar 15 '18 at 13:52
• Not even in a hurricane? I thought the closer to the eye walls the stronger the wind was, and it gradually decreased in strenght further away. Anyway, that's how I am doing it :P just wanted to know how to translate x force to wind speed in m/s – RealAnyOne Mar 15 '18 at 13:58
• you want to use control volume analysis to get a decent estimate: jove.com/science-education/10444/… I can write up a solution later tonight (when I'm not at work), but that link should get you started – Mohammad Athar Mar 15 '18 at 14:04
The simplest model of air resistance has the force proportional to the square of the velocity. To go a distance $d$ through a fluid at speed $v$, the minimum air resistance requires you to accelerate the air to speed $v$ to move it out of the way. That requires energy:
$$E \propto \frac 1 2 m v^2$$
with the mass, $m$, depending on $d$:
$$m \propto \rho \cdot d$$.
The work done is:
$$W = F\cdot d = E$$
so that:
$$F = E/d \propto (\rho d)v^2/d \propto v^2$$.
The hurricane model has a core of constant vorticity in the eye (basically, it rotates like a solid block). Outside the eye wall, the vorticity is zero. That require zero curl:
$$\vec{\nabla}\times \vec{u} = 0$$
which is solved by a circular flow with:
$$||u|| \propto 1/r$$.
(Note: you can watch airborne big rigs revolve around a tornado vortex, without rotating--that's zero curl irrotational flow).
So in summary:
$$F \propto 1/r^2$$
for $r > r_{eye}$, and:
$$F \propto r^2$$
for $r < r_{eye}$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7669369578361511, "perplexity": 1080.0133406045113}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371665328.87/warc/CC-MAIN-20200407022841-20200407053341-00297.warc.gz"} |
http://crypto.stackexchange.com/questions/14567/exercise-attack-on-a-two-round-des-cipher | # Exercise: Attack on a Two-Round DES Cipher
Working through the exercises in Cryptography Engineering (Schneier, Ferguson & Kohno) I have stalled on the following exercise:
Consider a new block cipher, DES2, that consists only of two rounds of the DES block cipher. DES2 has the same block and key size as DES. For this question you should consider the DES F function as a black box that takes two inputs, a 32-bit data segment and a 48-bit round key, and that produces a 32-bit output.
Suppose you have a large number of plaintext-ciphertext pairs for DES2 under a single, unknown key. Give an algorithm for recovering the 48-bit round key for round 1 and the 48-bit round key for round 2. Your algorithm should have fewer operations than an exhaustive search for an entire 56-bit DES key. Can your algorithm be converted into a distinguishing attack against DES2?
With regards to the first sub-exercise ("Give an algorithm…"), I have proceeded in the following way:
If I assume an initial input of 64 bits giving us two 32-bit blocks $L_0$ and $R_0$, I know that after the first round we have
$L_1 = R_0$
$R_1 = L_0 \oplus F(R_0, K_0)$
Then, after the second round, we have:
$L_2 = R_1 = L_0 \oplus F(R_0, K_0)$
$R_2 = L_1 \oplus F(R_1, K_1) = L_1 ⊕ F(L_0 ⊕ F(R_0, K_0), K_1)$
My thought was that I could then XOR $L_2$ with $L_0$ which is the output of $F(R_0, K_0)$ and then use $R_0$ to retrieve $K_0$. But I'm not sure how to do that… and not at all sure whether I am on the right path.
Any thoughts would be greatly appreciated.
Tylo has pointed out that the $F$ function is to be treated as a black box.
Updated
I’m afraid that I have come so close but can’t seem to get any further. I can get the output of $F(R_0, K_0)$ and I know $R_0$. But I just don’t know how, since I can’t call $F$ directly I don’t know how to get the 48-bit $K_0$.
Can anyone help?
-
"[I'm] not at all sure whether I am on the right path"; you are on the right path. If you know $F(R_0, K_0)$ and $R_0$, how can you recover $K_0$? Hint: look at the details of $F$. – poncho Feb 18 '14 at 14:50
Try multiple inputs with the same $R_0$ or $L_0$ and looking for a pattern. – David Cash Feb 18 '14 at 14:50
Also, you have a number of plaintext/ciphertext pairs (and hence multiple $F(R_0, K_0)$, $R_0$ pairs. How can you use these multiple sets to make recovering $K_0$ even easier? – poncho Feb 18 '14 at 15:10
@Poncho: "Look at the details of F": Do you mean the 4-stage process of expansion, key mixing, substution, and permutation in the Feistel function? – David Brower Feb 18 '14 at 15:30
You might want to read the exercise carefully enough, I think they are looking for a different solution: " ... consider the DES F function as a black box that takes two inputs" and "... converted into a distinguishing attack ..." – tylo Feb 18 '14 at 17:10
Your Formulas are alright, but there is some additional information from the exercise/setup:
The exercise states, that $F$ should be considered as a blackbox (otherwise you could use the internal stages of $F$, as poncho already suggested). However, as I understand it you can stil evaluate $F$ on any input of your choice.
At this point, you can do a couple of things. First, you're already done without knowing it. As a hint: Read the goal of the exercise and compare the complexity with a brute force on your formulas. You only need 1 ciphertext/plaintext pair.
A more complex idea: If you have a lot of ciphertext/plaintext pairs, and you just want to distinguish the permutation from a random oracle, then you can do the following: Look for two plaintexts, where $R_0$ (32 bit) is equal. What happens then to the output? And what would happen in a truly random permutation? This is a distinguishing criteria.
-
Presumably I am not able to run an exhaustive search on all possible 48-bit keys since I am not able to call F directly? I'm only about 6 weeks into learning about encryption and am still taking baby steps. – David Brower Feb 19 '14 at 16:42
Usually black box means, that you have no idea about the internal algorithm (and therefore can't exploit one of its weaknesses), but it does not mean you can't evaluate the function. So yeah, as I understand it, exhaustive search on the 48 bits of $K_0$ and afterwards on the 48 bits of $K_1$ should give you the key in $2 \cdot 2^{47}$ steps (lower than brute force on 56 bit DES). In order to make this an distinguishing attack, you can break the key of one text pair, and try to encrypt other plaintexts with this key. If the results are equal to the according ciphertexts, you got the cipher. – tylo Feb 19 '14 at 17:13
But probably the preferred solution is the other attack from my second hint. It has much less complexity than $2^{48}$, too. – tylo Feb 19 '14 at 17:15
I'm afraid I've come to a dead end. – David Brower Feb 20 '14 at 16:49
Try to think differently, no need to call $F$ directly, if you have many pairs of ciphertext/plaintext. If you have $x$ of these pairs, then there are $x(x-1)/2$ ways to compare two of those. So if $x(x-1)/2$ is greater than $2^{32}$, there should be two plaintexts, which share the same $R_0$. And with this, you can distinguish between this cipher and a truly random permutation. – tylo Feb 20 '14 at 19:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5470111966133118, "perplexity": 452.6181294841818}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398446535.72/warc/CC-MAIN-20151124205406-00346-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://export.arxiv.org/abs/1904.07582 | math.PR
(what is this?)
# Title: Continued fractions, the Chen-Stein method and extreme value theory
Abstract: In this work, we deal with extreme value theory in the context of continued fractions using techniques from probability theory, ergodic theory and real analysis. We give an upper bound for the rate of convergence in the Doeblin-Iosifescu asymptotics for the exceedances of digits obtained from the regular continued fraction expansion of a number chosen randomly from $(0,1)$ according to the Gauss measure. As a consequence, we significantly improve the best known upper bound on the rate of convergence of the maxima in this case. We observe that the asymptotics of order statistics and the extremal point process can also be investigated using our methods.
Comments: Minor revisions following referee report. This is the final version. To appear in Ergodic Theory and Dynamical Systems Subjects: Probability (math.PR); Dynamical Systems (math.DS); Number Theory (math.NT) Cite as: arXiv:1904.07582 [math.PR] (or arXiv:1904.07582v2 [math.PR] for this version)
## Submission history
From: Anish Ghosh [view email]
[v1] Tue, 16 Apr 2019 10:25:47 GMT (11kb)
[v2] Sun, 4 Aug 2019 12:10:34 GMT (12kb)
Link back to: arXiv, form interface, contact. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8537799119949341, "perplexity": 858.0175297841245}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991904.6/warc/CC-MAIN-20210511060441-20210511090441-00146.warc.gz"} |
http://mathhelpforum.com/statistics/67373-probability-distrubtion-answer-check.html | # Math Help - Probability Distrubtion - ANSWER CHECK!
1. Originally Posted by cnmath16
48. A coin is tossed 8 times. Calculate the probability of tossing
a) 6 heads and 2 tails
b) 7 heads and 1 tail
The coin question is binomially distributed with $p = 0.5$ and $n = 8$. The problem of getting $k$ successes out of $n$ trials is calculated by
$\frac{n!}{(n-k)!k!} p^k (1-p)^{n-k}$
For more properties of the distribution, see: Binomial distribution - Wikipedia, the free encyclopedia
2. ## Probability Distrubtion - ANSWER CHECK!
48. A coin is tossed 8 times. Calculate the probability of tossing..
a) 6 heads and 2 tails
Let q= P (tails) =1/2
From the expansion of (p+q)^8, use the term containing p^6.
Thus, P(6 heads) = c (8,2)(1/2)^6 (1/2)^2
= 8! ÷ 6! 2! (1/2)^8
= (56/2)(1/256)
= 28/256
= 14/128
= 7/64 is the probability of 6 heads and 2 tails occuring
b) 7 heads and 1 tail
From the expansion of (p+q)^8, use the term containing p^7
Thus, P(7 heads) = c (8,1) (1/2)^7(1/2)
= 8! ÷ 7! 1! (1/2)^8
= 8/256
= 1/32 is the probability of 7 heads and 1 tail
Thus, the total probability of 6 heads is equal to 28/256 + 8/256 + 1/256 which is equal to P= 37/256
3. Originally Posted by cnmath16
48. A coin is tossed 8 times. Calculate the probability of tossing..
a) 6 heads and 2 tails
Let q= P (tails) =1/2
From the expansion of (p+q)^8, use the term containing p^6.
Thus, P(6 heads) = c (8,2)(1/2)^6 (1/2)^2
= 8! ÷ 6! 2! (1/2)^8
= (56/2)(1/256)
= 28/256
= 14/128
= 7/64 is the probability of 6 heads and 2 tails occuring
b) 7 heads and 1 tail
From the expansion of (p+q)^8, use the term containing p^7
Thus, P(7 heads) = c (8,1) (1/2)^7(1/2)
= 8! ÷ 7! 1! (1/2)^8
= 8/256
= 1/32 is the probability of 7 heads and 1 tail | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9843236207962036, "perplexity": 2410.436030206118}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988650.6/warc/CC-MAIN-20150728002308-00145-ip-10-236-191-2.ec2.internal.warc.gz"} |
https://mathematica.stackexchange.com/questions/182388/evaluating-an-expression-on-one-line | # Evaluating an expression on one line
Is it possible to evaluate an expresson on one line?
I have a linear algebra problem to re-write the straight line $$-x=1-3y=-1-2z$$ to a 'vector form';
line = {x, y, z} /. Solve[{{-x, 1 - 3 y, -1 - 2 z} == {1, 1, 1} t}, {x, y, z}][[1]];
and I want to extract the 'fix/start point' and the directional vector of line (in lack of better knowledge of any inbuilt functions in M.) by
t = 0;
p0 = line;
t = 1;
n = line - p0;
These 4 lines I think could be merged to two lines, but I don't know the syntax for it, something in the line of
p0 = line @{t=0};
n = line-p0 @{t=1};
What is the correct syntax for this?
• mf67, may I suggest that you revisit your questions and consider accepting any answers that best solve your problem.
– kglr
Sep 22, 2018 at 23:08
• There is a blue popup 'flash' each time but I don't manage to read it before it disappears. Are there instructions somewhere to read to correctly process answers?
– mf67
Sep 22, 2018 at 23:54
• mf67, this may be useful: Accepting an aswer: how does it work
– kglr
Sep 23, 2018 at 0:00
You can use ReplaceAll with a list of lists of rules ({a->b}):
line /. {{t -> 0}, {t -> 1}}
{{0, 1/3, -(1/2)}, {-1, 0, -1}}
To get p0 and n in a single line:
{p0, n} = {#, #2 - #} & @@ (line /. {{t -> 0}, {t -> 1}})
{{0, 1/3, -(1/2)}, {-1, -(1/3), -(1/2)}}
Note: The same trick in a simpler example:
x /. {{x -> 100}, {x -> 5}, {x -> abc}}
{100, 5, abc} | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1667943298816681, "perplexity": 2888.3517945651015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00759.warc.gz"} |
https://stats.stackexchange.com/questions/15529/improving-data-analysis-through-a-better-visualization-of-data/15557 | # Improving data analysis through a better visualization of data?
I ran four programs a, b, c, d parallely on two different machines X and Y separately for 10 times. The below is a sample of the data. The running-times (milliseconds) in 10 runs of each program are given under their respective names.
Machine-X:
a b c d
29 40 21 18
28 43 20 18
30 49 20 28
29 50 19 19
28 51 21 19
29 41 30 29
32 47 10 18
29 43 20 18
28 51 30 29
29 41 21 19
Machine-Y:
a b c d
16 24 19 18
16 24 19 18
16 23 19 18
16 24 19 18
16 24 19 18
16 22 19 18
16 24 19 18
16 24 19 18
16 24 19 18
16 24 19 18
I need to create graphs for visualizing the following:
1. Compare each program's performance (i.e. running-time) on both the machines X and Y.
2. Compare the variation in the running-times of each program on both the machines X and Y
3. Which machine is fair in providing computing resources to each program?
4. Compare the total running times (a+b+c+d) of the four programs in each run on both the machines X and Y.
5. Compare variation in the total running-times of the four programs in the 10 runs.
For 1 and 2, I made Figure A, Figure B is for 3, and figure C is for 4 and 5. However, I am not satisfied because there are three graphs and it is difficult to fit all the three graphs in my paper. Moreover, I believe that we can produce better than these. I really appreciate if someone helps me to draw one or two nice graphs instead of three in R while satisfying my requirements. Please see below for the R code I have used to produce these graphs.
Figure A:
Figure B: X-axis shows the runs, Y-axis shows the running-times of the four programs in a particular run.
Figure C:
R Code
> pdf("Figure A.pdf")
> par(mfrow=c(1,2))
> boxplot(x,boxwex=0.4, ylim=c(15, 60))
> mtext("Time", side=2, line=2)
> mtext("Running times of each program in 10 runs", side=3, line=2, at=6,cex=1.4)
> mtext("Machine X", side=3, line=0.5, at=2,cex=1.1)
> boxplot(y,boxwex=0.4, ylim=c(15, 60))
> mtext("Machine Y", side=3, line=0.4, at=2,cex=1.1)
> dev.off()
> pdf("Figure B.pdf")
> par(mfrow=c(1,2))
> boxplot(t(x),boxwex=0.4, ylim=c(0,50))
> mtext("Run Number", side=1, line=2, at=12, cex=1.2)
> mtext("Fairness", side=3, line=2, at=12,cex=1.4)
> mtext("Machine X", side=3, line=0.5, at=5,cex=1.1)
> boxplot(t(y),boxwex=0.4, ylim=c(0,50))
> mtext("Machine Y", side=3, line=0.4, at=5,cex=1.1)
> dev.off()
> pdf("Figure C.pdf")
> par(mfrow=c(1,2))
> barplot(t(x), ylim=c(0,150),names=1:10,col=mycolor)
> mtext("Run Number", side=1, line=2, at=14, cex=1.2)
> mtext("Total Running-Times in 10 Runs", side=3, line=2, at=14, cex=1.2)
> mtext("Machine X", side=3, line=0.5, at=5,cex=1.1)
> barplot(t(y), ylim=c(0,150), names=1:10,col=mycolor)
> mtext("Machine Y", side=3, line=0.5, at=5,cex=1.1)
> legend("topright",legend=c("a","b","c","d"),fill=mycolor,cex=1.1)
> dev.off()
Although the other respondents have provided useful insights, I find myself disagreeing with some of their points of view. In particular, I believe that graphics which can show the details of the data (without being cluttered) are richer and more rewarding to view than those that overtly summarize or hide the data, and I believe all the data are interesting, not just those for computer X. Let's take a look.
(I am showing small plots here to make the point that quite a lot of numbers can be usefully shown, in detail, in small spaces.)
This plot shows the individual data values, all $80 = 2 \times 4 \times 10$ of them. It uses the distance along the y-axis to represent computing times, because people can most quickly and accurately compare distances on a common axis (as Bill Cleveland's studies have shown). To ensure that variability is understood correctly in the context of actual time, the y-axis is extended down to zero: cutting it off at any positive value will exaggerate the relative variation in timing, introducing a "Lie Factor" (in Tufte's terminology).
Graphic geometry (point markers versus line segments) clearly distinguish computer X (markers) from computer Y (segments). Variations in symbolism--both shape and color for the point markers--as well as variation in position along the x axis clearly distinguish the programs. (Using shape assures the distinctions will persist even in a grayscale rendering, which is likely in a print journal.)
The programs appear not to have any inherent order, so it is meaningless to present them alphabetically by their code names "a", ..., "d". This freedom has been exploited to sequence the results by the mean time required by computer X. This simple change, which requires no additional complexity or ink, reveals an interesting pattern: the relative timings of the programs on computer Y differ from the relative timings on computer X. Although this might or might not be statistically significant, it is a feature of the data that this graphic serendipitously makes apparent. That's what we hope a good graphic will do.
By making the point markers large enough, they almost blend visually into a graphical representation of total variability by program. (The blending loses some information: we don't see where the overlaps occur, exactly. This could be fixed by jittering the points slightly in the horizontal direction, thereby resolving all overlaps.)
This graphic alone could suffice to present the data. However, there is more to be discovered by using the same techniques to compare timings from one run to another.
This time, horizontal position distinguishes computer Y from computer X, essentially by using side-by-side panels. (Outlines around each panel have been erased, because they would interfere with the visual comparisons we want to make across the plot.) Within each panel, position distinguishes the run. Exactly as in the first plot--and using the same marker scheme to distinguish the programs--the markers vary in shape and color. This facilitates comparisons between the two plots.
Note the visual contrast in marker patterns between the two panels: this has an immediacy not afforded by the tables of numbers, which have to be carefully scanned before one is aware that computer Y is so consistent in its timings.
The markers are joined by faint dashed lines to provide visual connections within each program. These lines are extra ink, seemingly unnecessary for presenting the data, so I suspect Professor Tufte would eschew them. However, I find they serve as useful visual guides to separate the clutter where markers for different programs nearly overlap.
Again, I presume the runs are independent and therefore the run number is meaningless. Once more we can exploit that: separately within each panel, runs have been sequenced by the total time for the four algorithms. (The x axis does not label run numbers, because this would just be a distraction.) As in the first plot, this sequencing reveals several interesting patterns of correlation among the timings of the four algorithms within each run. Most of the variation for computer X is due to changes in algorithm "b" (red squares). We already saw that in the first graphic. The worst total performances, however, are due to two long times for algorithms "c" and "d" (gold diamonds and green triangles, respectively), and these occurred within the same two runs. It is also interesting that the outliers for programs "a" and "c" both occurred in the same run. These observations could reveal useful information about variation in program timing for computer X. They are examples of how because these graphics show the details of the data (rather than summaries like bars or boxplots or whatever), much can be seen concerning variation and correlations--but I needn't elaborate on that here; you can explore it for yourself.
I constructed these graphics without giving any thought to a "story" or "spinning" the data, because I wanted first to see what the data have to say. Such graphics will never grace the pages of USA Today, perhaps, but due to their ability to reveal patterns by enabling fast, accurate visual comparisons, they are good candidates for communicating results to a scientific or technical audience. (Which is not to say they are without flaws: there are some obvious ways to improve them, including jittering in the first and supplying good legends and judicious labels in both.) So yes, I agree that attention to the potential audience is important, but I am not persuaded that graphics ought to be created with the intention of advocating or pressing a particular point of view.
In summary, I would like to offer this advice.
• Use design principles found in the literature on cartography and cognitive neuroscience (e.g., Alan MacEachren) to improve the chances that readers will interpret your graphic as you intend and that they will be able to draw honest, unbiased, conclusions from them.
• Use design principles found in the literature on statistical graphics (e.g., Ed Tufte and Bill Cleveland) to create informative data-rich presentations.
• Experiment and be creative. Principles are the starting point for making a statistical graphic, but they can be broken. Understand which principles you are breaking and why.
• Aim for revelation rather than mere summary. A satisfying graphic clearly reveals patterns of interest in the data. A great graphic will reveal unexpected patterns and invites us to make comparisons we might not have thought of beforehand. It may prompt us to ask new questions and more questions. That is how we advance our understanding.
• +1 Fantastic answer! The first and last paragraphs alone are great advice succinctly put, and the details in the middle show exactly what great graphics can and should look like. – Aaron Sep 14 '11 at 19:26
• @whuber: Great answer! Thank you. Could you also share the code you used for the figures, please? Is it R code? – samarasa Sep 14 '11 at 20:13
• @kkp It's not R code: these are Mathematica graphics. (If you have access to this software, I would be happy to share the code I used.) They're straightforward to emulate in R using the plot, lines, and points commands. Most of the work involves setting the graphics options and in assigning the horizontal coordinates to the data. Packages like ggplot might reduce some of this work. – whuber Sep 14 '11 at 20:30
• @Aaron Thank you; your opinion, as an expert in statistical graphics, is much appreciated. – whuber Sep 14 '11 at 20:35
Plots let you tell a story, to spin the data in the way that you want the reader to interpret your results. What's the takeaway message? What do you want to stick in their minds? Determine that message, then think about how to make it into a figure.
In your plots, I don't know what message I should learn and you give me too much of the raw data back---I want efficient summaries, not the data themselves.
For plot 1, I'd ask, what comparisons do you want to make? The charts that you have illustrate the run times across program for a given computer. It sounds like you want to do the comparisons across computers for a given program. If this is the case, then you want the stats for program a on computer x to be in the same plot as the stats for program a on computer y. I'd put all 8 boxes in your two boxplots in the same figure, ordered ax, ay, bx, by, ... to facilitate the comparison that you are really making.
The same goes for plot 2, but I find this plot strange. You are basically showing every data point that you have---a box for each run and a run only has 4 observations. Why not just give me a box plot of total run times for computer x and one for computer y?
The same "too much data" critique applies to your last plot as well. Plot 3 doesn't add any new information to plot 2. I can get the overall time if I just multiply the mean time by 4 in plot 2. Here, too, you could plot a box each for computer x and y, but these will literally be multiples of the plot that I proposed to replace plot 2.
I agree with @Andy W that computer y isn't that interesting and maybe you want to just state that and exclude it from the plots for brevity (though I think the suggestions that I made can help you trim these plots down). I don't think that tables are very good ways to go, however.
Your plots seem fine to me, and if you have space constraints you could place them all in one plot instead of three separate ones (e.g. use par(mfrow=c(3,2)) and then just output them to all the same device).
There isn't much to report though for Machine Y, it literally has no variation except for program b. I do think the graphs are informative to see not only how much longer the running times are for Machine X but also how much the running times vary.
If this really is your use case though, it is such simple data placing all of the data in a table would be sufficient to demonstrate the difference between machines (although I believe the graphs are still useful if you can afford room to place them in the document as well). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4959700107574463, "perplexity": 1292.3531151651614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232262369.94/warc/CC-MAIN-20190527105804-20190527131804-00305.warc.gz"} |
http://stackoverflow.com/questions/14615371/phpunit-placeholder-for-empty-tests?answertab=active | # PHPUnit Placeholder for empty tests
I like to have empty functions on occasion for placeholders (primarily empty constructors, since it helps avoid accidental duplication of constructors since my team knows there must always be one somewhere).
I also like to have at least one test for every method of a class (largely because it is a nice easy rule to hold my team against).
My question is simple: what should we put in these empty test methods to prevent the "no tests" warning.
We could just do $this->assertTrue(true), which I know will work just fine. However, I was wondering if there was anything a touch more official and proper (preferably something which makes it so the method isn't counted in the number of tests run, artificially inflating it a bit). Thanks. - ## 2 Answers try this : /** * @covers Controllers\AdminController::authenticate * @todo Implement testAuthenticate(). */ public function testAuthenticate() { // Remove the following lines when you implement this test.$this->markTestIncomplete(
'This test has not been implemented yet.'
);
}
-
That's more along the lines of what I'm looking for and could work. However, the test isn't really incomplete... it tests everything it needs to test (which is nothing) because the function itself does absolutely nothing. – samanime Jan 30 '13 at 23:24
I also realized that causes it to skip anything which @depends on that, so it doesn't really work for my use case. – samanime Jan 31 '13 at 0:19
it is however exactly what PHPUnit recommends you do. How can a test that depends on another truly depend on it, if it isn't yet implemented? Sounds like your dependencies mean something different than intended. – AD7six Jan 31 '13 at 8:10
First, I agree with mpm. If you insist on testing a function that does nothing, then you could verify that the return is null. Less then ideal but makes more sense than $this->assertTrue(true), which says nothing at all about the tested method. – qrazi Jan 31 '13 at 9:06 I agree it is a bit wonky to have a dependency on a constructor. However, in a team environment, it's better if we set this up now so if something is added to the constructor, you just have to add a proper test and away you go, instead of having to go back and add in all of the dependencies then. It is a bit silly, but also helps with consistency. – samanime Feb 1 '13 at 19:08 You try a reflection class on the function and ensure that the method is there. Then the test could simply be that the method exists (empty or not), which will pass without a warning. class MethodExistsTest extends \PHPUnit_Framework_TestCase { protected$FOO;
protected function setUp()
{
$this->FOO = new \FOO(); } /** * @covers \FOO::Bar */ public function testEmptyMethodBarExists() {$ReflectionObject = new \ReflectionObject($this->FOO);$this->assertTrue($ReflectionObject->getMethod('Bar')); } /** * @covers \FOO::__construct */ public function testConstructorExists() {$ReflectionObject = new \ReflectionObject($this->FOO);$this->assertNotNull(\$ReflectionObject->getConstructor());
}
}
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45716801285743713, "perplexity": 1340.7625512547602}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422121981339.16/warc/CC-MAIN-20150124175301-00036-ip-10-180-212-252.ec2.internal.warc.gz"} |
http://math.stackexchange.com/users/2199/santosh-linkha | less info
reputation
1335
bio website location Kathmandu, Nepal age member for 3 years, 10 months seen 20 hours ago profile views 1,164
Now just another ordinary stupid NEET. Abstract Algebra has always been my enemy, now I am trying to conquer it.
17 Calculus proof for the area of a circle 15 How to compute $\sqrt{i + 1}$ 9 What's the formula of this sequence? 8 Definite integral problem 8 Showing that $\int_{0}^{\pi/2}\frac{1}{\sqrt{\sin{x}}}\;{dx}=\int_{0}^{\pi/2}\frac{2}{\sqrt{2-\sin^2{x}}}\;{dx}?$
# 7,702 Reputation
+10 Find the value of the integral $\int_0^{2\pi}\ln|a+b\sin x|dx$ where $0\lt a\lt b$ +5 how to show that $\int_0^\infty \sin(x^2) dx$ converges +5 Is this way of solving integration problem correct? +10 Calculus proof for the area of a circle
# 45 Questions
25 Evaluate:: $2 \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n+1}\left( 1 + \frac12 +\cdots + \frac 1n\right)$ 11 Curl of a vector in spherical coordinates 10 How to solve an exponential equation with two different bases: $3^x - 2^x = 5$ 8 Evaluate: $\sum_{n=1}^{\infty}\frac{1}{n k^n}$ 8 Evaluate: $\int_{0}^\infty e^{-x^2} \cos^n(x) dx$
# 98 Tags
167 calculus × 85 49 definite-integrals × 16 119 integration × 60 33 inequality × 11 93 limits × 38 29 complex-analysis × 19 84 sequences-and-series × 37 25 multivariable-calculus × 19 62 real-analysis × 40 25 trigonometry × 11
# 19 Accounts
Mathematics 7,702 rep 1335 Stack Overflow 7,641 rep 74683 Ask Ubuntu 549 rep 21127 Physics 429 rep 420 Mathematica 250 rep 29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8936302661895752, "perplexity": 3214.40521955946}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500808153.1/warc/CC-MAIN-20140820021328-00037-ip-10-180-136-8.ec2.internal.warc.gz"} |
https://danieltakeshi.github.io/2020/11/07/safe-dagger/ | The seminal DAgger paper from AISTATS 2011 has had a tremendous impact on machine learning, imitation learning, and robotics. In contrast to the vanilla supervised learning approach to imitation learning, DAgger proposes to use a supervisor to provide corrective labels to counter compounding errors. Part of this BAIR Blog post has a high-level overview of the issues surrounding compounding errors (or “covariate shift”), and describes DAgger as an on-policy approach to imitation learning. DAgger itself — short for Dataset Aggregation — is super simple and looks like this:
• Train $\pi_\theta(\mathbf{a}_t \mid \mathbf{o}_t)$ from demonstrator data $\mathcal{D} = \{\mathbf{o}_1, \mathbf{a}_1, \ldots, \mathbf{o}_N, \mathbf{a}_N\}$.
• Run $\pi_\theta(\mathbf{a}_t \mid \mathbf{o}_t)$ to get an on-policy dataset $\mathcal{D}_\pi = \{\mathbf{o}_1, \ldots, \mathbf{o}_M\}$.
• Ask a demonstrator to label $\mathcal{D}_\pi$ with actions $\mathbf{a}_t$.
• Aggregate $\mathcal{D} \leftarrow \mathcal{D} \cup \mathcal{D}_{\pi}$ and train again.
with the notation borrowed from Berkeley’s DeepRL course. The training step is usually done via standard supervised learning. The original DAgger paper includes a hyperparameter $\beta$ so that the on-policy data is actually generated with a mixture:
$\pi = \beta \pi_{\rm supervisor} + (1-\beta) \pi_{\rm agent}$
but in practice I set $\beta=0$, which in this case means all states are generated from the learner agent, and then subsequently labeled from the supervisor.
DAgger is attractive not only in practice but also in terms of theory. The analysis of DAgger relies on mathematical ingredients from regret analysis and online learning, as hinted by the paper title: “A Reduction of Imitation learning and Structured Prediction to No-Regret Online Learning.” You can find some relevant theory in (Kakade and Tewari, NeurIPS 2009).
## The Dark Side of DAgger
Now that I have started getting used to reading and reviewing papers in my field, I can more easily understand tradeoffs in algorithms. So, while DAgger is a conceptually simple and effective method, what are its downsides?
• We have to request the supervisor for labels.
• This has to be done for each state the agent encounters when taking steps in an environment.
Practitioners can mitigate these by using a simulated demonstrator, as I have done in some of my robot fabric manipulation work. In fact, I’m guessing this is the norm in machine learning research papers that use DAgger. This is not always feasible, however, and even with a simulated demonstrator, there are advantages to querying less often.
Keeping within the DAgger framework, an obvious solution would be to only request labels for a subset of data points. That’s precisely what the SafeDAgger algorithm, proposed by Zhang and Cho, and presented at AAAI 2017, intends to accomplish. Thus, let’s understand how SafeDAgger works. In the subsequent discussion, I will (generally) use the notation from the SafeDAgger paper.
## SafeDAgger
The SafeDAgger paper has a nice high-level summary:
In this paper, we propose a query-efficient extension of the DAgger, called SafeDAgger. We first introduce a safety policy that learns to predict the error made by a primary policy without querying a reference policy. This safety policy is incorporated into the DAgger’s iterations in order to select only a small subset of training examples that are collected by a primary policy. This subset selection significantly reduces the number of queries to a reference policy.
Here is the algorithm:
SafeDAgger uses a primary policy $\pi$ and a reference policy $\pi^*$, and introduces a third policy $\pi_{\rm safe}$, known as the safety policy, which takes in the observation of the state $\phi(s)$ and must determine whether the primary policy $\pi$ is likely to deviate from a reference policy $\pi^*$ at $\phi(s)$.
A quick side note: I often treat “states” $s$ and “observations” $\phi(s)$ (or $\mathbf{o}$ in my preferred notation) interchangeably, but keep in mind that these technically refer to different concepts. The “reference” policy is also often referred to as a “supervisor,” “demonstrator,” “expert,” or “teacher.”
A very important fact, which the paper (to its credit) repeatedly accentuates, is that because $\pi_{\rm safe}$ is called at each time step to determine if the reference must be queried, $\pi_{\rm safe}$ cannot query $\pi^*$. Otherwise, there’s no benefit — one might as well dispense with $\pi_{\rm safe}$ all together and query $\pi^*$ normally for all data points.
The deviation $\epsilon$ is defined with the $L_2$ distance:
$\epsilon(\pi, \pi^*, \phi(s)) = \| \pi(\phi(s)) - \pi^*(\phi(s)) \|_2^2$
since actions in this case are in continuous land. The optimal safety policy $\pi_{\rm safe}^*$ is:
$\pi_{\rm safe}^*(\pi, \phi(s)) = \begin{cases} 0, \quad \mbox{if}\; \epsilon(\pi, \pi^*, \phi(s)) > \tau \\ 1, \quad \mbox{otherwise} \end{cases}$
where the cutoff $\tau$ is user-determined.
The real question now is how to train $\pi_{\rm safe}$ from data $D = \{ \phi(s)_1, \ldots, \phi(s)_N \}$. The training uses the binary cross entropy loss, where the label is “are the two policies taking sufficiently different actions”? For a given dataset $D$, the loss is:
\begin{align} l_{\rm safe}(\pi_{\rm safe}, \pi, \pi^*, D) &= - \frac{1}{N} \sum_{n=1}^{N} \pi_{\rm safe}^*(\phi(s)_n) \log \pi_{\rm safe}(\phi(s)_n, \pi) + \\ & (1 - \pi_{\rm safe}^*(\phi(s)_n)) \log(1 - \pi_{\rm safe}(\phi(s)_n, \pi)) \end{align}
again, here, $\pi_{\rm safe}^*$ and $(1-\pi_{\rm safe}^*)$ represent ground-truth labels for the cross entropy loss. It’s a bit tricky; the label isn’t something inherent in a training data, but something SafeDAgger artificially enforces to get desired behavior.
Now let’s discuss the control flow of SafeDAgger. The agent collects data by following a safety strategy. Here’s how it works: at every time step, if $\pi_{\rm safe}(\pi, \phi(s)) = 1$, let the usual agent take actions. Otherwise, $\pi_{\rm safe}(\pi, \phi(s)) = 0$ (remember, this function is binary) and the reference policy takes actions. Since this is done at each time step, the reference policy can return control to the agent as soon as it is back into a “safe” state with low action discrepancy.
Also, when the reference policy takes actions, these are the data points that get labeled to produce a subset of data $D’$ that form the input to $l_{\rm safe}$. Hence, the process of deciding which subset of states should be used to query the reference happens during environment interaction time, and is not a post-processing event.
Training happens in lines 9 and 10 of the algorithm, which updates not only the agent $\pi$, but also the safety policy $\pi_{\rm safe}$.
Actually, it’s somewhat strange why the safety policy should help out. If you notice, the algorithm will continually add new data to existing datasets, so while $D_{\rm safe}$ initially produces a vastly different dataset for $\pi_{\rm safe}$ training, in the limit, $\pi$ and $\pi_{\rm safe}$ will be trained on the same dataset. Line 9, which trains $\pi$, will make it so that for all $\phi(s) \in D$, we have $\pi(\phi(s)) \approx \pi^*(\phi(s))$. Then, line 10 trains $\pi_{\rm safe}$ … but if the training in the previous step worked, then the discrepancies should all be small, and hence it’s unclear why we need a threshold if we know that all observations in the data result in similar actions between $\pi$ and $\pi^*$. In some sense $\pi_{\rm safe}$ is learning a support constraint, but it would not be seeing any negative samples. It is somewhat of a philosophical mystery.
Experiments. The paper uses the driving simulator TORCS with a scripted demonstrator. (I have very limited experience with TORCS from an ICRA 2019 paper.)
• They use 10 tracks, with 7 for training and 3 for testing. The test tracks are only used to evaluate the learned policy (called “primary” in the paper).
• Using a histogram of squared errors in the data, they decide on $\tau = 0.0025$ as the threshold so that 20 percent of initial training samples are considered “unsafe.”
• They report damage per lap as a way to measure policy safety, and argue that policies trained with SafeDAgger converge to a perfect, no-damage policy faster than vanilla DAgger. I’m having a hard time reading the plots, though — their “SafeDAgger-Safe” curve in Figure 2 appears to be perfect from the beginning.
• Experiments also suggest that as the number of DAgger iterations increases, the proportion of time driven by the reference policy decreases.
Future Work? After reading the paper, I had some thoughts about future work directions:
• First, SafeDAgger is a broadly applicable algorithm. It is not specific to driving, and it should be feasible to apply to other imitation learning problems.
• Second, the cost is the same for each data point. This is certainly not the case in real life scenarios. Consider context switching: one can request the reference for help in time steps 1, 3, 5, 7, and 9, or it can request the reference for help in times 3, 4, 5, 6, and 7. Both require the same raw number of references, but it seems intuitive in some way that given a fixed budget of time, a reference policy should want a contiguous time step.
• Finally, one downside strictly from a scientific perspective is that there are no other baseline methods tested other than vanilla DAgger. I wonder if it would be feasible to compare SafeDAgger with an approach such as SHIV from ICRA 2016.
## Conclusion
To recap: SafeDAgger follows the DAgger framework, and attempts to reduce the number of queries to the reference/supervisor policy. SafeDAgger predicts the discrepancy among the learner and supervisor. Those states with high discrepancy are those which get queried (i.e., labeled) and used in training.
There’s been a significant amount of follow-up work on DAgger. If I am thinking about trying to reduce supervisor burden, then SafeDAgger is among the methods that come to my mind. Similar algorithms may get increasingly used in machine learning if DAgger-style methods become more pervasive in machine learning research, and in real life. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5171149373054504, "perplexity": 868.3736445786793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058467.95/warc/CC-MAIN-20210927181724-20210927211724-00483.warc.gz"} |
http://ellenmcraven.soup.io/ | You are at the newest post.
## June242015
### Is Hammer Toe Surgery Painful
Overview
A Hammer toes is a deformity of the second, third or fourth toes. In this condition, the toe is bent at the middle joint, so that it resembles a hammer. Initially, hammer toes are flexible and can be corrected with simple measures but, if left untreated, they can become fixed and require surgery. People with hammer toe may have corns or calluses on the top of the middle joint of the toe or on the tip of the toe. They may also feel pain in their toes or feet and have difficulty finding comfortable shoes.
Causes
Many people develop hammertoe because they wear shoes that are too tight. Shoes with narrow toe boxes squeeze the toes together, forcing some to bend. This causes the toe muscles to contract. If the toes are forced into this cramped position too often, the muscles may permanently tighten, preventing the toes from extending. Chronic hammertoe can also cause the long bones that connect the toes to the foot, called metatarsals, to move out of position. The misaligned metatarsal bones may pinch a nerve running between them, which can cause a type of nerve irritation called a neuroma.
Symptoms
The most obvious symptom of hammertoe is the bent, hammer-like or claw-like appearance of one or more of your toes. Typically, the proximal joint of a toe will be bending upward and the distal joint will be bending downward. In some cases, both joints may bend downward, causing the toes to curl under the foot. In the variation of mallet toe, only the distal joint bends downward. Other symptoms may include Pain and stiffness during movement of the toe, Painful corns on the tops of the toe or toes from rubbing against the top of the shoe's toe box, Painful calluses on the bottoms of the toe or toes, Pain on the bottom of the ball of the foot, Redness and swelling at the joints. If you have any of these symptoms, especially the hammer shape, pain or stiffness in a toe or toes, you should consider consulting your physician. Even if you're not significantly bothered by some of these symptoms, the severity of a hammertoe can become worse over time and should be treated as soon as possible. Up to a point hammertoes can be treated without surgery and should be taken care of before they pass that point. After that, surgery may be the only solution.
Diagnosis
First push up on the bottom of the metatarsal head associated with the affected toe and see if the toe straightens out. If it does, then an orthotic could correct the problem, usually with a metatarsal pad. If the toe does not straighten out when the metatarsal head is pushed up, then that indicates that contracture in the capsule and ligaments (capsule contracts because the joint was in the wrong position for too long) of the MTP joint has set in and surgery is required. Orthotics are generally required post-surgically.
Non Surgical Treatment
Apply a commercial, non-medicated hammer toe pad around the bony prominence of the hammer toe to decrease pressure on the area. Wear a shoe with a deep toe box. If the hammer toe becomes inflamed and painful, apply ice packs several times a day Hammer toe to reduce swelling. Avoid heels more than two inches tall. A loose-fitting pair of shoes can also help protect the foot while reducing pressure on the affected toe, making walking a little easier until a visit to your podiatrist can be arranged. While this treatment will make the hammer toe feel better, it is important to remember that it does not cure the condition. A trip to the podiatrist's office will be necessary to repair the toe to allow for normal foot function. Avoid wearing shoes that are too tight or narrow. Children should have their shoes properly fitted on a regular basis, as their feet can often outgrow their shoes rapidly.
Surgical Treatment
Until recently, wires were used for surgical correction. In this technique, one or more wires are inserted into the bone through both the affected joint and a normally healthy toe joint, and the end of the toe. These wires stay in place for four to six weeks, protruding from the end of the toes. Due to the protruding wire, simple things such working, driving, bathing and even sleeping are difficult while these wires are in place. During this recovery period, patients often experience discomfort during sleep and are subject possible infection.
Tags: Hammertoe
## June072015
### Prevent Hereditary Bunions
Overview
A bunion looks like a bump on the inside of the foot where the big toe joins the foot. Over time, the bunion gets worse. The big toe starts to lean toward neighboring toes instead of pointing straight ahead. (The scientific name for this is hallux valgus or hallux abducto valgus.) The bump is a sign that the bones of the foot are out of alignment. While shoes with high heels or pointed toes may cause the joint to ache, they don't cause bunions. Most often they are due to an inherited foot structure. The tendons and ligaments that hold bones and muscles together at the joint are not working as they should. This structure makes it more likely that a person will develop a bunion.
Causes
Despite the popular belief, wearing high heels and too-narrow shoes does not cause bunions. Wearing them can irritate, aggravate, or accelerate the formation of the bunion, but are not the root cause. Bunions are more commonly inherited, if your parents or grandparents had bunions, you may also get one. Bunions can also be caused by trauma or injury to the joints, ligaments, or bones of the foot.
Symptoms
Bunions are readily apparent, you can see the prominence at the base of the big toe or side of the foot. However, to fully evaluate your condition, the Podiatrist may take x-rays to determine the degree of the deformity and assess the changes that have occurred. Because bunions are progressive, they don't go away, and will usually get worse over time. But not all cases are alike, some bunions progress more rapidly than others. There is no clear-cut way to predict how fast a bunion will get worse. The severity of the bunion and the symptoms you have will help determine what treatment is recommended for you.
Diagnosis
Physical examination typically reveals a prominence on the inside (medial) aspect of the forefoot. This represents the bony prominence associated with the great toe joint ( the medial aspect of the first metatarsal head). The great toe is deviated to the outside (laterally) and often rotated slightly. This produces uncovering of the joint at the base of the big toe (first metatarsophalangeal joint subluxation). In mild and moderate bunions, this joint may be repositioned back to a neutral position (reduced) on physical examination. With increased deformity or arthritic changes in the first MTP joint, this joint cannot be fully reduced. Patients may also have a callus at the base of their second toe under their second metatarsal head in the sole of the forefoot. Bunions are often associated with a long second toe.
Non Surgical Treatment
Conservative Treatment. Apply a commercial, nonmedicated bunion pad around the bony prominence. Wear shoes with a wide and deep toe box. If your bunion becomes inflamed and painful, apply ice packs several times a day to reduce swelling. Avoid high-heeled shoes over two inches tall. See your podiatric physician if pain persists. Orthotics. Shoe inserts may be useful in controlling foot function and may reduce symptoms and prevent worsening of the deformity. Padding & Taping. Often the first step in a treatment plan, padding the bunion minimizes pain and allows the patient to continue a normal, active life. Taping helps keep the foot in a normal position, thus reducing stress and pain. Medication. Anti-inflammatory drugs and cortisone injections are often prescribed to ease the acute pain and inflammations caused by joint deformities. Physical Therapy. Often used to provide relief of the inflammation and from bunion pain. Ultrasound therapy is a popular technique for treating bunions and their associated soft tissue involvement.
Surgical Treatment
Sometimes a screw is placed in the foot to hold a bone in a corrected position, other times a pin, wire or plate is chosen. There are even absorbable pins and screws, which are used for some patients. In British Columbia, pins seem to be used most frequently, as they're easier to insert and less expensive. They are typically--but not always--removed at some point in the healing process. But as a general rule, Dr. Schumacher prefers to use screws whenever possible, as they offer some advantages over pins. First, using screws allows you to close over the wound completely, without leaving a pin sticking out of the foot. That allows for a lower infection rate, it allows you to get your foot wet more quickly following the surgery, and it usually allows for a quicker return to normal shoes. Second, they're more stable than pins and wires. Stability allows for faster, more uneventful, bone healing. Third, they usually don't need to be removed down the road, so there's one less procedure involved.
Tags: Bunions
## June022015
### What Does Over-Pronation Mean
Overview
Pronation is a normal motion that our feet make as they walk. With each step, the heel touches the ground first, then the foot rolls forward to the toes, causing the ankle to roll inward slightly and the arch to flatten out. That?s normal. But when that rolling inward becomes more pronounced, that?s over-pronation, which is a big problem. You can usually see over-pronation by looking at the back of the leg and foot. The Achilles tendon normally runs straight down from the leg to the foot, hitting the floor at a perpendicular angle. In feet that over-pronate, the Achilles tendon will be at a slight angle to the ground and the ankle bone will appear more prominent than usual.
Causes
A common cause of pronation is heredity - we can inherit this biomechanical defect. The second most common cause is due to the way our feet were positioned in the uterus while we were developing; this is called a congenital defect. In either instance, the following occurs in our feet during our development.
Symptoms
If you overpronate, your symptoms may include discomfort in the arch and sole of foot. Your foot may appear to turn outward at the ankle. Your shoes wear down faster on the medial (inner) side of your shoes. Pain in ankle, shins, knees, or hips, especially when walking or running.Unfortunately, overpronation can lead to additional problems with your feet, ankles, and knees. Runners in particular find that overpronation can lead to shin splints, tarsal tunnel syndrome, plantar fasciitis, compartment syndrome, achilles tendonitis, bunions (hallux valgus) patello femoral pain syndrome, heel spurs, metatarsalgia. You do not have to be a runner or athlete to suffer from overpronation. Flat feet can be inherited, and many people suffer from pain on a day-to-day basis. Flat feet can also be traumatic in nature and result from tendon damage over time. Wearing shoes that do not offer enough arch support can also contribute to overpronation.
Diagnosis
To easily get an idea of whether a person overpronates, look at the position and condition of certain structures in the feet and ankles when he/she stands still. When performing weight-bearing activities like walking or running, muscles and other soft tissue structures work to control gravity's effect and ground reaction forces to the joints. If the muscles of the leg, pelvis, and feet are working correctly, then the joints in these areas such as the knees, hips, and ankles will experience less stress. However, if the muscles and other soft tissues are not working efficiently, then structural changes and clues in the feet are visible and indicate habitual overpronation.
Non Surgical Treatment
Overpronation is a term used to describe excessive flattening of the plantar arch. Pronation is a normal part of our gait (the way we walk), and it comprises three movements: dorsiflexion, eversion, and abduction. Dorsiflexion is the upward movement of the foot, eversion describes the foot rolling in, and abduction is ?out toeing,? meaning your toes are moving away from the midline of your body. When these three motions are extreme or excessive, overpronation results. Overpronation is very common in people who have flexible flat feet. Flatfoot, or pes planus, is a condition that causes collapse of the arch during weight bearing. This flattening puts stress on the plantar fascia and the bones of the foot, resulting in pain and further breakdown.
Prevention
Exercises to strengthen and stretch supporting muscles will help to keep the bones in proper alignment. Duck stance: Stand with your heels together and feet turned out. Tighten the buttock muscles, slightly tilt your pelvis forwards and try to rotate your legs outwards. You should feel your arches rising while you do this exercise. Calf stretch: Stand facing a wall and place hands on it for support. Lean forwards until stretch is felt in the calves. Hold for 30 seconds. Bend at knees and hold for a further 30 seconds. Repeat 5 times. Golf ball: While drawing your toes upwards towards your shins, roll a golf ball under the foot between 30 and 60 seconds. If you find a painful point, keep rolling the ball on that spot for 10 seconds. Big toe push:
Stand with your ankles in a neutral position (without rolling the foot inwards). Push down with your big toe but do not let the ankle roll inwards or the arch collapse. Hold for 5 seconds. Repeat 10 times. Build up to longer times and fewer repetitions. Ankle strengthener: Place a ball between your foot and a wall. Sitting down and keeping your toes pointed upwards, press the outside of the foot against the ball, as though pushing it into the wall. Hold for 5 seconds and repeat 10 times. Arch strengthener: Stand on one foot on the floor. The movements needed to remain balanced will strengthen the arch. When you are able to balance for 30 seconds, start doing this exercise using a wobble board.
## May212015
### Physiotherapy And Severs Disease
Overview
When recurring heel pain occurs in children, it is usually due to Sever's Disease, while adult heel pain is usually due to heel spurs, plantar fasciitis, or retrocalcaneal bursitis (Haglund's Deformity). Calcaneus is the anatomical name of the heel bone. Sever's Disease or Calcaneal Apophysitis is an inflammation of the growth plate located at the posterior aspect (back) of the heel.
Causes
Heel pain can also be caused by a stress fracture in the heel, bursitis, tendonitis, bone cysts, and rheumatologic disorders. If the athlete is not active in impact sports or is not between age 9 and 13 years, other conditions should be considered.
Symptoms
Sever's Disease is most commonly seen in physically active girls and boys from ages 10 to 15 years old. These are the years when the growth plate is still ""open,"" and has not fused into mature bone. Also, these are the years when the growth plate is most vulnerable to overuse injuries, which are usually caused by sports activities. The most common symptoms of this disease include. Heel pain in one or both heels. Usually seen in physically active children, especially at the beginning of a new sports season. The pain is usually experienced at the back of the heel, and includes the following areas. The back of the heel (that area which rubs against the back of the shoe). The sides of the heel. Actually, this is one of the diagnostic tests for Sever's Disease, squeezing the rear portion of the heel from both sides at the same time will produce pain. It is known as the Squeeze Test.
Diagnosis
A physical exam of the heel will show tenderness over the back of the heel but not in the Achilles tendon or plantar fascia. There may be tightness in the calf muscle, which contributes to tension on the heel. The tendons in the heel get stretched more in patients with flat feet. There is greater impact force on the heels of athletes with a high-arched, rigid foot.
Non Surgical Treatment
Primary treatment involves the use of heel cups or orthotics with a sturdy, supportive plastic shell. Treatment may also include cutting back on sports activities if pain interferes with performance, calf muscle stretching exercises, icing, and occasionally anti-inflammatory medications. Severe cases may require the short term use of a walking boot or cast.
## March272015
### Heel Pain And Discomfort
Overview
Heel Pain is a problem for many people. It makes standing and even walking around for long periods of time very uncomfortable. Several different conditions can lead to uncomfortable heels, but the most common culprit is plantar fasciitis. This is the inflammation and swelling of the plantar fascia, a tendon that runs along the sole of your foot and attaches to the bottom of the calcaneus, or heel bone. Repeated hard impacts or strain from overuse causes micro-tears to develop in the tendon, irritating it. The minor damage compounds over time and causes the tissue to swell and tighten, painfully pulling on the heel bone.
Causes
Long standing inflammation causes the deposition of calcium at the point where the plantar fascia inserts into the heel. This results in the appearance of a sharp thorn like heel spur on x-ray. The heel spur is asymptomatic (not painful), the pain arises from the inflammation of the plantar fascia.
Symptoms
Plantar fasciitis is a condition of irritation to the plantar fascia, the thick ligament on the bottom of your foot. It classically causes pain and stiffness on the bottom of your heel and feels worse in the morning with the first steps out of bed and also in the beginning of an activity after a period of rest. For instance, after driving a car, people feel pain when they first get out, or runners will feel discomfort for the first few minutes of their run. This occurs because the plantar fascia is not well supplied by blood, which makes this condition slow in healing, and a certain amount of activity is needed to get the area to warm up. Plantar fasciitis can occur for various reasons: use of improper, non-supportive shoes; over-training in sports; lack of flexibility; weight gain; prolonged standing; and, interestingly, prolonged bed rest.
Diagnosis
The diagnosis of heel pain and heel spurs is made by a through history of the course of the condition and by physical exam. Weight bearing x-rays are useful in determining if a heel spur is present and to rule out rare causes of heel pain such as a stress fracture of the heel bone, the presence of bone tumors or evidence of soft tissue damage caused by certain connective tissue disorders.
Non Surgical Treatment
Treatment options for plantar fasciitis include custom prescription foot orthoses (orthotics), weight loss when indicated, steroid injections and physical therapy to decrease the inflammation, night-splints and/or cast boots to splint and limit the stress on the plantar fascia. Orthotripsy (high frequency ultra-sonic shock waves) is also a new treatment option that has been shown to decrease the pain significantly in 50 to 85 percent of patients in published studies. Surgery, which can be done endoscopically, is usually not needed for over 90 percent of the cases of plantar fasciitis. (However, when surgery is needed, it is about 85 percent successful.) Patients who are overweight do not seem to benefit as much from surgery. Generally, plantar fasciitis is a condition people learn to control. There are a few conditions similar to plantar fascia in which patients should be aware. The most common is a rupture of the plantar fascia: the patient continues to exercise despite the symptoms and experiences a sudden sharp pain on the bottom of the heel and cannot stand on his or her toes, resulting in bruising in the arch. Ruptures are treated very successfully by immobilization in a cast boot for two to six weeks, a period of active rest and physical therapy. Another problem with prolonged and neglected plantar fasciitis is development of a stress fracture from the constant traction of this ligament on the heel bone. This appears more common in osteoporotic women, and is also treated with cast boot immobilization. The nerves that run along the heel occasionally become inflamed by the subsequent thickening and inflammation of the adjacent plantar fascia. These symptoms often feel like numbness and burning and usually resolve with physical therapy and injections. Patients should also be aware that heel numbness can be the first sign of a back problem.
Surgical Treatment
Prevention
You should always wear footwear that is appropriate for your environment and day-to-day activities. Wearing high heels when you go out in the evening is unlikely to be harmful. However, wearing them all week at work may damage your feet, particularly if your job involves a lot of walking or standing. Ideally, you should wear shoes with laces and a low to moderate heel that supports and cushions your arches and heels. Avoid wearing shoes with no heels. Do not walk barefoot on hard ground, particularly while on holiday. Many cases of heel pain occur when a person protects their feet for 50 weeks of the year and then suddenly walks barefoot while on holiday. Their feet are not accustomed to the extra pressure, which causes heel pain. If you do a physical activity, such as running or another form of exercise that places additional strain on your feet, you should replace your sports shoes regularly. Most experts recommend that sports shoes should be replaced after you have done about 500 miles in them. It is also a good idea to always stretch after exercising, and to make strength and flexibility training a part of your regular exercise routine.
Tags: Heel Pain
## March082015
### What Leads To Achilles Tendon Pain ?
Overview
The Achilles is a large tendon that connects two major calf muscles to the back of the heel bone. If this tendon is overworked and tightens, the collagen fibres of the tendon may break, causing inflammation and pain. This can result in scar tissue formation, a type of tissue that does not have the flexibility of tendon tissue. Four types of Achilles injuries exist, 1) Paratendonitis - involves a crackly or crepitus feeling in the tissues surrounding the Achilles tendon. 2) Proliferative Tendinitis - the Achilles tendon thickens as a result of high tension placed on it. 3) Degenerative Tendinitis - a chronic condition where the Achilles tendon is permanently damaged and does not regain its structure. 4) Enthesis - an inflammation at the point where the Achilles tendon inserts into the heel bone.
Causes
Achilles tendonitis is an overuse injury that is common especially to joggers and jumpers, due to the repetitive action and so may occur in other activities that requires the same repetitive action. Most tendon injuries are the result of gradual wear and tear to the tendon from overuse or ageing. Anyone can have a tendon injury, but people who make the same motions over and over in their jobs, sports, or daily activities are more likely to damage a tendon. A tendon injury can happen suddenly or little by little. You are more likely to have a sudden injury if the tendon has been weakened over time. Common causes of Achilles tendonitis include, over-training or unaccustomed use,?too much too soon?. Sudden change in training surface e.g. grass to bitumen. Flat (over-pronated) feet, High foot arch with tight Achilles tendon. tight hamstring (back of thigh) and calf muscles, toe walking (or constantly wearing high heels). Poorly supportive footwear, hill running. Poor eccentric strength.
Symptoms
Pain anywhere along the tendon, but most often on or close to the heel. Swelling of the skin over the tendon, associated with warmth, redness and tenderness. Pain on rising up on the toes and pain with pushing off on the toes. If you are unable to stand on your toes you may have ruptured the tendon. This requires urgent medical attention. A painful heel for the first few minutes of walking after waking up in the morning. Stiffness of the ankle, which often improves with mild activity.
Diagnosis
X-rays are usually normal in patients with Achilles tendonitis, but are performed to evaluate for other possible conditions. Occasionally, an MRI is needed to evaluate a patient for tears within the tendon. If there is a thought of surgical treatment an MRI may be helpful for preoperative evaluation and planning.
Nonsurgical Treatment
The latest studies on Achilles tendonitis recommend a treatment plan that incorporates the following three components. Treatment of the inflammation. Strengthening of the muscles that make up the Achilles tendon using eccentric exercise. These are a very specific type of exercise that has been shown in multiple studies to be a critical component of recovering from Achilles tendonitis. Biomechanical control (the use of orthotics and proper shoes). Shockwave therapy.
Surgical Treatment
Surgery is an option of last resort. However, if friction between the tendon and its covering sheath makes the sheath thick and fibrous, surgery to remove the fibrous tissue and repair any tears may be the best treatment option.
Prevention
Achilles tendinitis cannot always be prevented but the following tips will help you reduce your risk. If you are new to a sport, gradually ramp up your activity level to your desired intensity and duration. If you experience pain while exercising, stop. Avoid strenuous activity that puts excessive stress on your Achilles tendon. If you have a demanding workout planned, warm up slowly and thoroughly. Always exercise in shoes that are in good condition and appropriate for your activity or sport. Be sure to stretch your calf muscles and Achilles tendon before and after working out. If you suffer from Achilles tendinitis make sure you treat it properly and promptly. If self-care techniques don?t work, don?t delay. Book a consultation with a foot care expert or you may find yourself sidelined from your favourite sports and activities.
## January172015
### What Is Pain At The Heel
Overview
Plantar Fasciitis is a painful foot condition that affects the Plantar Fascia tendon that runs along the bottom of the foot (as seen in the picture). This tendon runs along the arches of the foot. Sometimes this tendon can become sore from normal use or strenuous activity, but this is not to be confused with the pain associated with Plantar Fasciitis. Small tears in the plantar fascia tendon can cause foot discomfort and pain, if left untreated, can become unbearable (seen in picture below). These tears are made worse by over-use, strenuous activity, weight gain, improper foot wear and a variety of other factors. Although there is no one absolute cause for the condition, it remains clear that this condition, while painful, can be corrected with products such as footwear, night splints, insoles and a variety of other plantar fasaciitis products.
Causes
Plantar Fasciitis is simply caused by overstretching of the plantar fascia ligament under the foot. So why is the ligament being overstretched? There are different factors, over-use, too much sports, running, walking or standing for long periods (e.g. because of your job) weight gain, our feet are designed to carry a 'normal' weight. Any excess weight places great pressure on the bones, nerves, muscles and ligaments in the feet, which sooner or later will have consequences. Even pregnancy (in the last 10 weeks) can cause foot problems! age, as we get older ligaments become tighter & shorter and msucles become weaker; the ideal circumstances for foot problems, unsupportive footwear, 'floppy' shoes with no support as well as thongs affect our walking pattern, walking barefoot, especially on hard surfaces like concrete or tiles, low arch and flat feet or over-pronation. An important contributing factor to Plantar Fasciitis is 'excess pronation' (or over-pronation). This is a condition whereby the feet roll over, the arches collapse and the foot elongates. This unnatural elongation puts excess strain on the ligaments, muscles and nerves in the foot. When the foot is not properly aligned, the bones unlock and cause the foot to roll inward. With every step taken your foot pronates and elongates, stretching the plantar fascia and causing inflammation and pain at the attachment of the plantar fascia into the heel bone. Re-alignment of the foot should therefore an important part of the treament regime.
Symptoms
Plantar fasciosis is characterized by pain at the bottom of the heel with weight bearing, particularly when first arising in the morning; pain usually abates within 5 to 10 min, only to return later in the day. It is often worse when pushing off of the heel (the propulsive phase of gait) and after periods of rest. Acute, severe heel pain, especially with mild local puffiness, may indicate an acute fascial tear. Some patients describe burning or sticking pain along the plantar medial border of the foot when walking.
Diagnosis
The health care provider will perform a physical exam. This may show tenderness on the bottom of your foot, flat feet or high arches, mild foot swelling or redness, stiffness or tightness of the arch in the bottom of your foot. X-rays may be taken to rule out other problems.
Non Surgical Treatment
Careful attention to footwear is critical. Every effort should be made to wear comfortable shoes with proper arch support, fostering proper foot posture. Should arch supports prove insufficient, an orthotic shoe should be considered. Fortunately, most cases of plantar fasciitis respond well to non-operative treatment. Recovery times however vary enormously from one athlete to another, depending on age, overall health and physical condition as well as severity of injury. A broad period between 6 weeks and 6 months is usually sufficient for proper healing. Additionally, the mode of treatment must be flexible depending on the details of a particular athlete’s injury. Methods that prove successful in one patient, may not improve the injury in another. Early treatment typically includes the use of anti-inflammatory medication, icing, stretching activities, and heel inserts and splints. Cortisone injections may be necessary to achieve satisfactory healing and retard inflammation. In later stages of the rehabilitation process, typically after the first week, ice should be discontinued and replaced with heat and massage.
Surgical Treatment
Surgery may be considered in very difficult cases. Surgery is usually only advised if your pain has not eased after 12 months despite other treatments. The operation involves separating your plantar fascia from where it connects to the bone; this is called a plantar fascia release. It may also involve removal of a spur on the calcaneum if one is present. Surgery is not always successful. It can cause complications in some people so it should be considered as a last resort. Complications may include infection, increased pain, injury to nearby nerves, or rupture of the plantar fascia.
## January132015
### What Causes Pain On The Heel To Surface
Overview
The plantar fascia (a connective tissue structure) stretches from the toes and ball of the foot, through the arch, and connects to the heel bone in three places: outside, center and inside. Normally it helps the foot spring as it rolls forward. It also provides support for the arch of the foot. The plantar fascia helps keep the foot on track, cutting down on oscillation. When the foot over-pronates (rolls to the inside) the plantar fascia tries to stabilize it and prevent excessive roll. In time, the inside and sometimes center connections are overstressed and pull away from their attachments. The first sign is usually heel pain as you rise in the morning. When you walk around, the pain may subside, only to return the next morning. Inflammation and increased soreness are the results of long-term neglect and continued abuse. A heel bone spur may develop after a long period of injury when there is no support for the heel. The plantar fascia attaches to the heel bone with small fibers. When these become irritated they become inflamed with blood containing white blood cells. Within the white blood cells are osteoblasts which calcify to form bone spurs and calcium deposits. The body is trying to reduce stress on that area by building a bone in the direction of stress. Unfortunately, these foreign substances cause pain and further irritation in the surrounding soft tissue.
Causes
It is common to see patients with Plantar Fasciitis who have been wearing shoes that are too soft and flexible. The lack of support can be stressful on the heel for those patients who’s feet aren’t particularly stable. If these ill fitting shoes are worn for long enough, the stress will lead to Heel Pain as the inflammation of the fascia persists. Footwear assessment and advice will be essential in order to get on top of the Plantar Fasciitis. It may surprise some people to learn that high heeled shoes are not the cause of Plantar Fasciitis, although they can cause tight calf muscles. High arches can lead to Plantar Fasciitis. This is due to the lack of contact under the sole of the foot. Even sports shoes which appear to have good arch support inside are often too soft and not high enough to make contact with the arch of the foot. Hence, the plantar fascia is unsupported. This can lead to Heel pain and Plantar Fasciitis. Flat feet can lead to Plantar Fasciitis. Flat feet is caused by ligament laxity and leads to foot instability. Other structures such as muscles, tendons and fascia work harder to compensate for this instability. Heel pain or Plantar Fasciitis arises when the instability is too great for these other structures to cope with. The strain on the fascia is too severe and the inflammation sets in. Over stretching can lead to Plantar Fasciitis. Certain calf stretches put the foot into a position that creates a pulling sensation through the sole of the foot. This can cause Plantar Fasciitis which can cause pain in the arch of the foot as well as Heel Pain.
Symptoms
The typical presentation is sharp pain localized at the anterior aspect of the calcaneus. Plantar fasciitis has a partial association with a heel spur (exostosis); however, many asymptomatic individuals have bony heel spurs, whereas many patients with plantar fasciitis do not have a spur.
Diagnosis
X-rays are a commonly used diagnostic imaging technique to rule out the possibility of a bone spur as a cause of your heel pain. A bone spur, if it is present in this location, is probably not the cause of your pain, but it is evidence that your plantar fascia has been exerting excessive force on your heel bone. X-ray images can also help determine if you have arthritis or whether other, more rare problems, stress fractures, bone tumors-are contributing to your heel pain.
Non Surgical Treatment
Over-the-counter arch supports may be useful in patients with acute plantar fasciitis and mild pes planus. The support provided by over-the-counter arch supports is highly variable and depends on the material used to make the support. In general, patients should try to find the most dense material that is soft enough to be comfortable to walk on. Over-the-counter arch supports are especially useful in the treatment of adolescents whose rapid foot growth may require a new pair of arch supports once or more per season. Custom orthotics are usually made by taking a plaster cast or an impression of the individual's foot and then constructing an insert specifically designed to control biomechanical risk factors such as pes planus, valgus heel alignment and discrepancies in leg length. For patients with plantar fasciitis, the most common prescription is for semi-rigid, three-quarters to full-length orthotics with longitudinal arch support. Two important characteristics for successful treatment of plantar fasciitis with orthotics are the need to control over-pronation and metatarsal head motion, especially of the first metatarsal head. In one study, orthotics were cited by 27 percent of patients as the best treatment. The main disadvantage of orthotics is the cost, which may range from $75 to$300 or more and which is frequently not covered by health insurance.
Surgical Treatment
Surgery should be reserved for patients who have made every effort to fully participate in conservative treatments, but continue to have pain from plantar fasciitis. Patients should fit the following criteria. Symptoms for at least 9 months of treatment. Participation in daily treatments (exercises, stretches, etc.). If you fit these criteria, then surgery may be an option in the treatment of your plantar fasciitis. Unfortunately, surgery for treatment of plantar fasciitis is not as predictable as a surgeon might like. For example, surgeons can reliably predict that patients with severe knee arthritis will do well after knee replacement surgery about 95% of the time. Those are very good results. Unfortunately, the same is not true of patients with plantar fasciitis.
Stretching Exercises
## January102015
### What Triggers Heel Discomfort And Approaches To Fix It
Overview
Plantar fasciitis is thickening of the plantar fascia, a band of tissue running underneath the sole of the foot. The thickening can be due to recent damage or injury, or can be because of an accumulation of smaller injuries over the years. Plantar fasciitis can be painful.
Causes
Plantar Fasciitis is simply caused by overstretching of the plantar fascia ligament under the foot. So why is the ligament being overstretched? There are different factors, over-use, too much sports, running, walking or standing for long periods (e.g. because of your job) weight gain, our feet are designed to carry a 'normal' weight. Any excess weight places great pressure on the bones, nerves, muscles and ligaments in the feet, which sooner or later will have consequences. Even pregnancy (in the last 10 weeks) can cause foot problems! age, as we get older ligaments become tighter & shorter and msucles become weaker; the ideal circumstances for foot problems, unsupportive footwear, 'floppy' shoes with no support as well as thongs affect our walking pattern, walking barefoot, especially on hard surfaces like concrete or tiles, low arch and flat feet or over-pronation. An important contributing factor to Plantar Fasciitis is 'excess pronation' (or over-pronation). This is a condition whereby the feet roll over, the arches collapse and the foot elongates. This unnatural elongation puts excess strain on the ligaments, muscles and nerves in the foot. When the foot is not properly aligned, the bones unlock and cause the foot to roll inward. With every step taken your foot pronates and elongates, stretching the plantar fascia and causing inflammation and pain at the attachment of the plantar fascia into the heel bone. Re-alignment of the foot should therefore an important part of the treament regime.
Symptoms
Among the symptoms for Plantar Fasciitis is pain usually felt on the underside of the heel, often most intense with the first steps after getting out of bed in the morning. It is commonly associated with long periods of weight bearing or sudden changes in weight bearing or activity. Plantar Fasciitis also called “policeman’s heel” is presented by a sharp stabbing pain at the bottom or front of the heel bone. In most cases, heel pain is more severe following periods of inactivity when getting up and then subsides, turning into a dull ache.
Diagnosis
Diagnosis of plantar fasciitis is based on a medical history, the nature of symptoms, and the presence of localised tenderness in the heel. X-rays may be recommended to rule out other causes for the symptoms, such as bone fracture and to check for evidence of heel spurs. Blood tests may also be recommended.
Non Surgical Treatment
Although there is no single cure, many treatments can be used to ease pain. In order to treat it effectively for the long-term, the cause of the condition must be corrected as well as treating the symptoms. Rest until it is not painful. It can be very difficult to rest the foot as most people will be on their feet during the day for work. A plantar fasciitis taping technique can help support the foot relieving pain and helping it rest. Plantar fasciitis tapingApply ice or cold therapy to help reduce pain and inflammation. Cold therapy can be applied for 10 minutes every hour if the injury is particularly painful for the first 24 to 48 hours. This can be reduced to 3 times a day as symptoms ease. Plantar fasciitis exercises can be done if pain allows, in particular stretching the fascia is an important part of treatment and prevention. Simply reducing pain and inflammation alone is unlikely to result in long term recovery. The fascia tightens up making the origin at the heel more susceptible to stress. Plantar fasciitis night splint. Plantar fasciitis night splint is an excellent product which is worn overnight and gently stretches the calf muscles preventing it from tightening up overnight.
Surgical Treatment
If you consider surgery, your original diagnosis should be confirmed by the surgeon first. In addition, supporting diagnostic evidence (such as nerve-conduction studies) should be gathered to rule out nerve entrapment, particularly of the first branch of the lateral plantar nerve and the medial plantar nerve. Blood tests should consist of an erythrocyte sedimentation rate (ESR), rheumatoid factor, human leukocyte antigen B27 (HLA-B27), and uric acid. It’s important to understand that surgical treatment of bone spurs rarely improves plantar fasciitis pain. And surgery for plantar fasciitis can cause secondary complications-a troubling condition known as lateral column syndrome.
## January022015
### Symptoms Of leg length discrepancy
If your Fallen Arches feels like a bruise or a dull ache, you may have metatarsalgia People with metatarsalgia will often find that the pain is aggravated by walking in bare feet and on hard floor surfaces. Pain in the ball of your foot can stem from several causes. Ball of foot pain is the pain felt in the ball of foot region. Metatarsalgia is a condition characterized by having pain in ball of foot. The average adult takes about 9,000 steps per day.
Orthotics are shoe insoles, custom-made to guide the foot into corrected biomechanics. Orthotics are commonly prescribed to help with hammer toes, heel spurs, metatarsal problems, bunions, diabetic ulcerations and numerous other problems. They also help to minimize shin splints, back pain and strain on joints and ligaments. Orthotics help foot problems by ensuring proper foot mechanics and taking pressure off the parts of your foot that you are placing too much stress on. Dr. Cherine's mission is to help you realize your greatest potential and live your life to its fullest.
When the tissue of the arch of the foot becomes irritated and inflamed, even simple movements can be quite painful. Plantar fasciitis is the name that describes inflammation of the fibrous band of tissue that connects the heel to the toes. Symptoms of plantar fasciitis include pain early in the morning and pain with long walks or prolonged standing. Arch pain early in the morning is due to the plantar fascia becoming contracted and tight as you sleep through the night. Bunions develop from a weakness in the bone structure of your foot.
Do not consume food items which you are allergic to. Keep dead skin off your lips by lightly scrubbing them at least twice a week using a mild, natural ingredient such as cornflour or a lemon juice-sugar pack. I had a long road workout two weeks ago and immediately after starting having pain on the ball of my foot in this area. I have also learned buying shoes online is easy.
During the average lifetime our feet cover over 70,000 miles, the equivalent of walking four times around the world., so it's not surprising that problems can occur. Indeed around three-quarters of all adults will experience some sort of problem with their feet at some time. And without treatment most foot complaints will become gradually worse with time. This means people often endure painful conditions for far too long, and the problem can get worse. People often assume nothing can be done to help their condition, but in fact these conditions are extremely treatable. Swollen lump on big toe joint; lump may become numb but also make walking painful.
## December162014
### Regarding Achilles Tendinitis
Overview
Achilles Tendinitis is the inflammation of the Achilles Tendon located in the heel, and is typically caused by overuse of the affected limb. Most often, it occurs in athletes who are not training with the proper techniques and/or equipment. When the Achilles Tendon is injured, blood vessels and nerve fibers from surrounding areas migrate into the tendon, and the nerve fibers may be responsible for the discomfort. Healing is often slow in this area due to the comparably low amount of cellular activity and blood flowing through the area.
Causes
Achilles tendonitis most commonly occurs due to repetitive or prolonged activities placing strain on the Achilles tendon. This typically occurs due to excessive walking, running or jumping activities. Occasionally, it may occur suddenly due to a high force going through the Achilles tendon beyond what it can withstand. This may be due to a sudden acceleration or forceful jump. The condition may also occur following a calf or Achilles tear, following a poorly rehabilitated sprained ankle or in patients with poor foot biomechanics or inappropriate footwear. In athletes, this condition is commonly seen in running sports such as marathon, triathlon, football and athletics.
Symptoms
Achilles tendonitis typically starts off as a dull stiffness in the tendon, which gradually goes away as the area gets warmed up. It may get worse with faster running, uphill running, or when wearing spikes and other low-heeled running shoes. If you continue to train on it, the tendon will hurt more sharply and more often, eventually impeding your ability even to jog lightly. About two-thirds of Achilles tendonitis cases occur at the ?midpoint? of the tendon, a few inches above the heel. The rest are mostly cases of ?insertional? Achilles tendonitis, which occurs within an inch or so of the heelbone. Insertional Achilles tendonitis tends to be more difficult to get rid of, often because the bursa, a small fluid-filled sac right behind the tendon, can become irritated as well.
Diagnosis
A podiatrist can usually make the diagnosis by clinical history and physical examination alone. Pain with touching or stretching the tendon is typical. There may also be a visible swelling to the tendon. The patient frequently has difficulty plantarflexing (pushing down the ball of the foot and toes, like one would press on a gas pedal), particularly against resistance. In most cases X-rays don't show much, as they tend to show bone more than soft tissues. But X-rays may show associated degeneration of the heel bone that is common with Achilles Tendon problems. For example, heel spurs, calcification within the tendon, avulsion fractures, periostitis (a bruising of the outer covering of the bone) may all be seen on X-ray. In cases where we are uncertain as to the extent of the damage to the tendon, though, an MRI scan may be necessary, which images the soft tissues better than X-rays. When the tendon is simply inflamed and not severely damaged, the problem may or may not be visible on MRI. It depends upon the severity of the condition.
Nonsurgical Treatment
If caught early enough, simple physical therapy that you can do by yourself should be fine. Over the counter solutions as easy as pain medication, cold compresses, a different pair of shoes, or a new set of stretching exercises can make most of the symptoms of Achilles tendinitis disappear. Further trouble or extreme pain should be regarded as a sign that something more serious is wrong, and you should immediately consult a doctor or physician. They will look to see whether non-surgical or surgical methods are your best options, and from there you can determine what your budget is for dealing with the condition.
Surgical Treatment
Open Achilles Tendon Surgery is the traditional Achilles tendon surgery and remains the 'gold standard' of surgery treatments. During this procedure one long incision (10 to 17 cm in length) is made slightly on an angle on the back on your lower leg/heel. An angled incision like this one allows for the patient's comfort during future recovery during physical therapy and when transitioning back into normal footwear. Open surgery is performed to provide the surgeon with better visibility of the Achilles tendon. This visibility allows the surgeon to remove scar tissue on the tendon, damaged/frayed tissue and any calcium deposits or bone spurs that have formed in the ankle joint. Once this is done, the surgeon will have a full unobstructed view of the tendon tear and can precisely re-align/suture the edges of the tear back together. An open incision this large also provides enough room for the surgeon to prepare a tendon transfer if it's required. When repairing the tendon, non-absorbale sutures may be placed above and below the tear to make sure that the repair is as strong as possible. A small screw/anchor is used to reattach the tendon back to the heel bone if the Achilles tendon has been ruptured completely. An open procedure with precise suturing improves overall strength of your Achilles tendon during the recovery process, making it less likely to re-rupture in the future.
Prevention
You can take measures to reduce your risk of developing Achilles Tendinitis. This includes, Increasing your activity level gradually, choosing your shoes carefully, daily stretching and doing exercises to strengthen your calf muscles. As well, applying a small amount ZAX?s Original Heelspur Cream onto your Achilles tendon before and after exercise.
Older posts are this way If this message doesn't go away, click anywhere on the page to continue loading posts. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2366165667772293, "perplexity": 4033.0628383465823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122174.32/warc/CC-MAIN-20170423031202-00086-ip-10-145-167-34.ec2.internal.warc.gz"} |
https://fractalsoftworks.com/forum/index.php?topic=19531.15 | # Fractal Softworks Forum
• February 08, 2023, 12:37:45 PM
• Welcome, Guest
### News:
Starsector 0.95.1a is out! (12/10/21); In-development patch notes for Starsector 0.96a (02/01/23)
Pages: 1 [2] 3
### AuthorTopic: [0.95.1a] Dynamic Tariffs 1.3 (Read 72536 times)
#### q-rau
• Ensign
• Posts: 26
##### Re: [0.9.1a] Dynamic Tariffs 1.3
« Reply #15 on: April 08, 2021, 03:11:25 PM »
Do you have any interest in having tariffs react to things other than player rep? I think it would be interesting if markets that were in a shortage lowered tariffs on incoming (ie player-sold) goods to attempt to alleviate it.
Logged
#### 5ColouredWalker
• Lieutenant
• Posts: 87
##### Re: [0.9.1a] Dynamic Tariffs 1.3
« Reply #16 on: April 11, 2021, 12:45:23 AM »
Do you have any interest in having tariffs react to things other than player rep? I think it would be interesting if markets that were in a shortage lowered tariffs on incoming (ie player-sold) goods to attempt to alleviate it.
Given that relieving a planet suffering shortages a couple of times easily makes you a millionaire, there's plenty of incentive without adjusting tariffs.
Course, what's really fun is when you load up on supplies, get to the destination, and the AI beat you too it.
Logged
#### AliTanwir
• Ensign
• Posts: 4
##### Re: [0.9.1a] Dynamic Tariffs 1.3
« Reply #17 on: April 27, 2021, 12:25:14 AM »
Hi, is there any way to config the mod to apply to all markets? I know that some modded markets have their own tariffs but Im a bit lazy to write every modded market.
Logged
#### thorkellthetall
• Ensign
• Posts: 5
##### Re: [0.9.1a] Dynamic Tariffs 1.3
« Reply #18 on: April 27, 2021, 02:39:21 PM »
Hi, is there any way to config the mod to apply to all markets? I know that some modded markets have their own tariffs but Im a bit lazy to write every modded market.
The easiest way to do this manually (that I've found) is to use console commands, use the "list" command for "list markets", open the log in the starsector core folder, and copy every market listed there. (Tab out and do this immediately after entering the command, as the output will just be at the bottom on the plaintext log file, but any other actions in game will push it up)
I threw it all into notepad++ then did a few regular expressions to clean it up.
Keep in mind that wrap around will mess up regular expressions for this purpose depending on the expression and the document.
Keep in mind match case too.
Regular expressions will execute from your current document position onward until it reaches the end, unless wrap around is enabled in which it will do whatever it feels like. Execute these from the beginning of the document.
In my case as of starsector .95a-RC15 with console commands version 2021.4.10 for .95a-RC12, and Dynamic Tariffs version 1.3 for .095a-RC12 my starsector log outputs lines that look like this (For anyone from the future that might want to do this, in case the way console commands outputs to the log changes):
Quote
sphinx in Samarra Star System (Hegemony, Cooperative)
staloplanet in Rama Star System (Hegemony, Cooperative)
station_kapteyn in Isirah Star System (pirates, Vengeful)
There is white space in the front, and potentially at the back. The values we need are the first part which is the market id, the rest is just information.
1. ctrl + H to open Replace
Tick Regular Expression on the bottom left.
Find What: \in.*$Replace with: (Above field is deliberately blank) Replace All 2. Edit -> Blank Operations -> Trim Leading and Tailing Space 3. ctrl + H to open Replace Tick Regular Expression on the bottom left Find What: ^ (Above is the regex anchor meaning the beginning of a line) Replace with: " Replace All 4. ctrl + H to open Replace Find What:$
(Above is the regex anchor meaning the end of a line)
Replace with: ",
Replace All
5. Ctrl + A (Select All)
6. Ctrl + J (Edit -> Line Operations -> Join Lines)
(By default this will properly place a space inbetween each collapsed line on the single line. I do not know if notepad++ can be configured otherwise but keep this in mind if it doesn't)
7. Delete the extra comma at the very end of your finished single line.
There, now you have everything. Take that one line and neatly Overwrite the whitelist in this mod's settings, as the existing whitelist is vanilla markets which this output will also have. I don't know what double listings will do and I'm not about to find out. Keep in mind the final market id will have a comma at the end which you need to delete if you haven't already.
Someone could probably make a .bat that does all this, but I do not know how to do that. Depending on the number of faction mods you are running and on your computer literacy level, it might be faster to just manually apply the market ids after fetching them with "list markets" instead of doing this set of instructions.
Keep in mind as the mod author stated, you do this at your own risk depending on what a mod does to its added markets, as it might create mustard gas or something.
Thanks for coming to my TedX Talk.
« Last Edit: April 27, 2021, 04:24:50 PM by thorkellthetall »
Logged
#### Kalos
• Ensign
• Posts: 2
##### Re: [0.9.1a] Dynamic Tariffs 1.3
« Reply #19 on: May 01, 2021, 08:57:54 AM »
How does this interact with nexerelin Tariffs? (the 9 and 18%?)
Will this overide Nexerelin, or will nexerelin override this?
Logged
#### Tuv0x
• Ensign
• Posts: 11
##### Re: [0.9.1a] Dynamic Tariffs 1.3
« Reply #20 on: May 11, 2021, 07:26:06 PM »
How does this interact with nexerelin Tariffs? (the 9 and 18%?)
Will this overide Nexerelin, or will nexerelin override this?
It shouldn't overwrite anything that Nexerelin does, at least from what other mod authors have told me when I was programming this. The mod uses a whitelist in the settings.json file, only the vanilla markets are effected.
The way the mod is set up it should work just fine with all mods, UNLESS they had changed something with the vanilla markets. Also, this mod is a utility mod, it can be removed and added whenever you want.
• Ensign
• Posts: 11
##### Re: [0.95a] Dynamic Tariffs 1.3
« Reply #21 on: May 11, 2021, 11:25:52 PM »
May want to include a primary "dynamic tariffs" folder to the zip file, before you get weird bug reports about it not showing up on the list. Pretty much standard. I missed it, thanks to normally just unpacking mods in bulk when a new update comes.
Logged
#### Simulated Knave
• Ensign
• Posts: 10
##### Re: [0.95a] Dynamic Tariffs 1.3
« Reply #22 on: May 17, 2021, 08:51:40 PM »
You spelled "epiphany" wrong in the tags in the whitelist.
Says a lot about how sympathetic the Path are that this is a wonderful mod that everyone should use and I think I'm the first person to notice.
Logged
#### BHunterSEAL
• Lieutenant
• Posts: 72
##### Re: [0.95a] Dynamic Tariffs 1.3
« Reply #23 on: May 22, 2021, 10:32:28 PM »
May want to include a primary "dynamic tariffs" folder to the zip file, before you get weird bug reports about it not showing up on the list. Pretty much standard. I missed it, thanks to normally just unpacking mods in bulk when a new update comes.
Came here to post this, anyone unpacking multiple archives to their Mods folder is going to wind up with your mod dumped in as lose files.
Logged
#### Oni
• Captain
• Posts: 314
##### Re: [0.95a] Dynamic Tariffs 1.3
« Reply #24 on: June 24, 2021, 05:09:16 PM »
I wonder.... how feasible would making each level a multiple/fraction of the normal rate be?
Such as:
Suspicious: 2x
Neutral: 1.5x
Favorable: 1x
Welcoming: 0.5x
Friendly: 0.25x
Cooperative: 0.01x
That way you could make it affect all markets, even modded ones since it would use whatever rate is "normal" for its base.
Heck it'd probably work with Nexerelin which lowers the default tariff rate.
« Last Edit: June 26, 2021, 04:33:40 PM by Oni »
Logged
#### Starslinger909
• Ensign
• Posts: 15
##### Re: [0.95a] Dynamic Tariffs 1.3
« Reply #25 on: October 14, 2021, 01:57:17 PM »
How do you install this?
Logged
#### WalkerBoh
• Ensign
• Posts: 1
##### Re: [0.95a] Dynamic Tariffs 1.3
« Reply #26 on: December 10, 2021, 08:43:49 PM »
How do you install this?
Unzip the zip file in to your Starsector /mods folder. I noticed it has no folder of its own, so I created a new 'Dynamic Tariffs' folder first in my /mods folder, and unzipped it in to that folder. Start the game, click Mods.. on the launch panel and find the mod in the list. Double-click it to turn it ON, then click save. When you start the game, that mod should now be active for a new game or a saved game.
« Last Edit: December 11, 2021, 10:19:33 AM by WalkerBoh »
Logged
#### Chief_Curtains
• Ensign
• Posts: 3
##### Re: [0.9.1a] Dynamic Tariffs 1.3
« Reply #27 on: January 07, 2022, 08:04:00 PM »
Hi, is there any way to config the mod to apply to all markets? I know that some modded markets have their own tariffs but Im a bit lazy to write every modded market.
The easiest way to do this manually (that I've found) is to use console commands, use the "list" command for "list markets", open the log in the starsector core folder, and copy every market listed there. (Tab out and do this immediately after entering the command, as the output will just be at the bottom on the plaintext log file, but any other actions in game will push it up)
I threw it all into notepad++ then did a few regular expressions to clean it up.
Keep in mind that wrap around will mess up regular expressions for this purpose depending on the expression and the document.
Keep in mind match case too.
Regular expressions will execute from your current document position onward until it reaches the end, unless wrap around is enabled in which it will do whatever it feels like. Execute these from the beginning of the document.
In my case as of starsector .95a-RC15 with console commands version 2021.4.10 for .95a-RC12, and Dynamic Tariffs version 1.3 for .095a-RC12 my starsector log outputs lines that look like this (For anyone from the future that might want to do this, in case the way console commands outputs to the log changes):
Quote
sphinx in Samarra Star System (Hegemony, Cooperative)
staloplanet in Rama Star System (Hegemony, Cooperative)
station_kapteyn in Isirah Star System (pirates, Vengeful)
There is white space in the front, and potentially at the back. The values we need are the first part which is the market id, the rest is just information.
1. ctrl + H to open Replace
Tick Regular Expression on the bottom left.
Find What: \in.*$Replace with: (Above field is deliberately blank) Replace All 2. Edit -> Blank Operations -> Trim Leading and Tailing Space 3. ctrl + H to open Replace Tick Regular Expression on the bottom left Find What: ^ (Above is the regex anchor meaning the beginning of a line) Replace with: " Replace All 4. ctrl + H to open Replace Find What:$
(Above is the regex anchor meaning the end of a line)
Replace with: ",
Replace All
5. Ctrl + A (Select All)
6. Ctrl + J (Edit -> Line Operations -> Join Lines)
(By default this will properly place a space inbetween each collapsed line on the single line. I do not know if notepad++ can be configured otherwise but keep this in mind if it doesn't)
7. Delete the extra comma at the very end of your finished single line.
There, now you have everything. Take that one line and neatly Overwrite the whitelist in this mod's settings, as the existing whitelist is vanilla markets which this output will also have. I don't know what double listings will do and I'm not about to find out. Keep in mind the final market id will have a comma at the end which you need to delete if you haven't already.
Someone could probably make a .bat that does all this, but I do not know how to do that. Depending on the number of faction mods you are running and on your computer literacy level, it might be faster to just manually apply the market ids after fetching them with "list markets" instead of doing this set of instructions.
Keep in mind as the mod author stated, you do this at your own risk depending on what a mod does to its added markets, as it might create mustard gas or something.
Thanks for coming to my TedX Talk.
I just want to chime in, this is amazing assistance. I'm not the one you were replying to but this reply is fantastic and really helped me get this done!
Logged
#### CreamPanzer
• Ensign
• Posts: 3
##### Re: [0.95.1a] Dynamic Tariffs 1.3
« Reply #28 on: February 04, 2022, 04:56:39 PM »
Great Mod, I am enjoying it. And it motives me to increase the relation with other factions.
However, in my next play, I am planning to install Nexerelin. It mentions tariff change as well. Will this cause troubles?
Thanks.
Logged
#### Xarlas
• Ensign
• Posts: 2
##### Re: [0.95.1a] Dynamic Tariffs 1.3
« Reply #29 on: February 26, 2022, 02:48:24 PM »
It seems this mod does not change the rate for player owned markets (colonies).
Is this intended ?
Logged
Pages: 1 [2] 3 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40608224272727966, "perplexity": 5722.795266595016}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500904.44/warc/CC-MAIN-20230208191211-20230208221211-00603.warc.gz"} |
https://plainmath.net/discrete-math/81397-inductive-proof-that-msqrt-k | Lillianna Andersen
2022-07-08
Inductive Proof that $\sqrt{k+1}-\sqrt{k}\le \frac{1}{\sqrt{k+1}}$
Help me understand this:
$\sqrt{k+1}-\sqrt{k}\le \frac{1}{\sqrt{k+1}}$
Dalton Lester
Expert
Step 1
For $K\ge 0$ we have ${K}^{2}\le {K}^{2}+K=K\left(K+1\right)$ and so $K\le \sqrt{K\left(K+1\right)}$ which further implies $K+1\le \sqrt{K\left(K+1\right)}+1,$.
Step 2
Thus we have $K+1-\sqrt{K\left(K+1\right)}\le 1,$, or $\sqrt{K+1}\left(\sqrt{K+1}-\sqrt{K}\right)\le 1,$, which is the required inequality.
Do you have a similar question? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 34, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9940547347068787, "perplexity": 861.5376847597797}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499713.50/warc/CC-MAIN-20230129112153-20230129142153-00032.warc.gz"} |
http://cpc.ihep.ac.cn/article/2002/12 | ## 2002 Vol. 26, No. 12
column
Display Method: |
2002, 26(12): 1195-1200.
Abstract:
High spin states of 125Sb have been investigated for the first time by means of in-beam γ-ray spectroscopy techniques via the 124Sn(7Li, α2n) reaction at 32 MeV beam energy. Based on the measurements of γ-γ coincidence and γ-ray anisotropies, a level scheme including 21 new γ-transitions and 14 new excited levels was established up to 23/2+. Three isomers at 1970, 2110 and 2471 keV levels have been identified and proposed as three-quasiparticle πg7/2ν(h11/2s1/2), πg7/2ν(h11/2d3/2) and πg7/2ν(h211/2) configurations, respectively. The level structure of 125Sb is ?discussed in terms of particle-core coupling
2002, 26(12): 1201-1208.
Abstract:
It is of great interest to build a high luminosity accelerator in τ-charm energy region. The physics interest in this energy region is briefly reviewed. Events rate and events number at different energy in one year's data taking are estimated under typical luminosity of 1033cm-2s-1. It is pointed out that the luminosity of a newly designed accelerator should be optimized between ψ″and ψ(4160), the peak luminosity should be determined by the events rate which the trigger system, detector electronics and data acquisition system could handle.
2002, 26(12): 1209-1213.
Abstract:
We study a flavor-changing toppion production process e+e-→tcΠ0t in the topcolorassisted technicolor(TC2) model. The studies show that, with high centre of mass energy in TESLA collider, the production cross section of e+e-→tcΠ0t is about 0.1 fb in most parameter regions of TC2 model and a few tens events of toppion can be produced each year. The resonance effect can enhance the cross section to a few fb when toppion mass is small. With clean background, the toppion events can possibly be detected at TESLA collider. On the other hand, we find that there exists a narrow peak in the toppion-charm invariant mass distribution which could be clearly detected. Therefore, the toppion production process e+e-→tcΠ0t provides a unique chance to detect toppion events and test the TC2 model.
2002, 26(12): 1214-1222.
Abstract:
Based on the phase-space generating functional for a system with a singular higher-order Lagrangian,the quantal canonical Noether identities under the local and non-local transformation in phase space for such system have been derived. For a gauge-invariant system with a higher-order Lagrangian,the quantal Noether identities under the local and non-local transformation in configuration space have also been derived. It has been pointed out that in certain cases the quantal Noether identities may be converted to the conservation laws at the quantum level. This algorithm to derive the quantal conservation laws is significantly different from the first quantal Noether theorem. The applications to the non-Abelian CS theories with higher-order derivatives are given. The conserved quantities at the quantum level for some local and non-local transformation are found respectively.
2002, 26(12): 1223-1227.
Abstract:
The new experimental data of 11B(p,α1)8Be*(1)(2α) three-body decay show that the continuous α spectrum of the two alpha particles produced by the intermediate nuclear 8Be*(1) looks like a saddle type distribution. To explain the experimental facts, we have written a Monte Carlo simulation program to the p+11B reaction. The calculation results of the program indicate that the anisotropy distribution emission of the decay alpha particles produced by 8Be*(1) can give a satisfying explanation to the experimental spectrum.
2002, 26(12): 1228-1237.
Abstract:
Exact eigen-energies and the corresponding wavefunctions of the interacting sl-boson system in U(2l+1) transitional region are obtained by using an algebraic Bethe ansatz with the infinite dimensional Lie algebraic technique. Numerical algorithm for solving the Bethe ansatz equations by using mathematica package is also outlined.
2002, 26(12): 1238-1246.
Abstract:
Within the isospin dependent Brueckner-Hartree-Fock approach,the equation of state of isospin asymmetric nuclear matter and its isospin dependence have been investigated in the whole isospin range. The present work has been focused on the effects of a microscopic three-body force on the equation of state of asymmetric nuclear matter and nuclear symmetry energy. It is shown that,even with the presence of the three-body force,the empirical parabolic law of the energy per nucleon vs isospin asymmetry is still fulfilled accurately in the whole isospin range (0≤ β ≤1). Around the empirical saturation density ρ0=0.17fm-3,the three-body force effect on the symmetry energy is rather small and the symmetry energy at the saturation density obtained in the presence of the three-body force is 30.71MeV in good agreement with its empirical value 30±4MeV;while at high density,the three-body force provides a strong enhancement of the symmetry energy and makes the symmetry energy increase much more rapid as increasing density. A simple parametrization of the symmetry energy as a function of density is proposed
2002, 26(12): 1247-1253.
Abstract:
The mass and charge distribution of residual products produced in the spallation reactions needs to be studied,because it can provide useful information for the disposal of nuclear waste and residual radioactivity generated by the spallation neutron target system. In present work,the Many Stage Dynamical Model(MSDM) is based on the Cascade-Exciton Model (CEM) We use it to investigate the mass distribution of Nb,Au and Pb proton-induced reactions in energy range from 100 MeV to 3 GeV. The agreement between the MSDM simulations and the measured data is good in this energy range,and deviations mainly show up in the mass range of 90—150 for the high energy proton incident upon Au and Pb.
2002, 26(12): 1254-1263.
Abstract:
Using the hypothesis as well as the γ-ray strength function proposed by us, the neutron radiative capture reaction cross sections and the γ energy spectra have been calculated for 93Nb,natural Ag and 181Ta in the neutron incident energy region from 0.01 to 5MeV as well as for 197Au in the neutron incident energy region from 0.01 to 10MeV. The results which are coincident better with the experimental values were obtained. The comparisons with the experimental values have shown that, not only the abnormal protuberances near and after 5.5MeV of the γ spectra in the nuclear mass regions about 110<A<140 and 180<A<210 are explained better,but also the γ production data can be theoretically calculated for the middle and heavy nuclei by means of this hypothesis and the γ-ray strength function deduced from this hypothesis.
2002, 26(12): 1264-1270.
Abstract:
Effects of the symmetry potential Usym and the isospin dependence of in-medium nucleon-nucleon cross section Nn( Np) on the number of neutron (proton) emitted as well as their dependence on the momentum dependent interaction (MDI) are studied within an isospin dependent Quantum Molecular Dynamics (IQMD) model. The isospin dependence nucleon-nucleon cross section is found to have a much stronger influence on the Nn( Np) especially for the neutron-deficient collision system with MDI in the energy region from about 100 to 400MeV/nucleon. The calculation results are clear to show that the number of neutron (proton) emitted during reaction in the neutron-deficient system,depends sensitively on the isospin dependence of in-medium nucleon-nucleon cross section and weakly on the symmetry potential with MDI. In this case one can make use of the number of neutron (proton) emitted as a probe to extract simultaneously both the magnitude and the in-medium nucleon-nucleon cross section.
2002, 26(12): 1271-1276.
Abstract:
Using a hadron and string cascade model,JPCIAE,and the corresponding Monte Carlo events generator,the behavior of the charged particle ratio event-by-event fluctuations in subsystem depending on energy,centrality,resonance decay and rapidity interval was investigated for Pb+Pb collisions at SPS and ALICE energies,and for Au+Au collisions at RHIC energies. The model results of charged particle ratio event-by-event fluctuations as a function of the rapidity interval in Pb+Pb collisions at SPS energies were comparable with the preliminary NA49 data. It turned out that the charged particle ratio fluctuation has no strong energy,centrality,resonance decay and rapidity interval dependences.
2002, 26(12): 1277-1284.
Abstract:
The non-uniform longitudinal flow model (NUFM) proposed recently is extended to include also the transverse flow. The resulting longitudinally non-uniform collective expansion model (NUCEM) is applied to the calculation of rapidity distribution of kaons, lambdas and protons in relativistic heavy ion collisions at CERN-SPS energies. The model results are compared with the 200A GeV/c S-S and 158 A GeV/c Pb-Pb collision data. The central dips observed in experiments are reproduced in a natural way. ;It is found that the depth of the central dip depends on the magnitude of the parameter e and the mass of produced particles, i.e. the non-uniformity of the longitudinal flow which is described by the parameter e determines the depth of the central dip for produced particles. Comparing with one-dimensional non-uniform longitudinal flow model, the rapidity distribution of lighter strange particle kaon also shows a dip due to the effect of transverse flow.
2002, 26(12): 1285-1290.
Abstract:
A two-dimension position sensitive parallel-plate avalanche (PPAC) detector has been developed for RIBLL. The detector consists of one anode and two cathodes. In each cathode a resistance chain is used to readout position signals. The detector has been tested in different operating gases with an α source. When the detector is at 7mb flowing rate of isobutane and +500V on anode, the position resolution of 0.76mm is obtained. For 7mb C3F8 and +595V on anode, the position resolution is 0.64mm. The efficiencies are around 99.1% in the cases of C3F8 and isobutane.
2002, 26(12): 1291-1296.
Abstract:
The efficiencies of clover and cluster composite detectors using NaI and BGO crystals as the media for detection of high energy γ ray are systematically simulated with Monte Carlo method. It is shown that for the same geometry of detection media concerned the efficiency of the composite BGO detector is much higher than that of the composite NaI detector. Therefore NaI crystal is not a suitable medium of composite detectors for high energy γ ray duo to low efficiency, Doppler broadening and distortion to γ spectrum in comparison with BGO crystal. The composite BGO detectors have many advantages such as large photopeak efficiency, small Doppler effect and regular γ spectrum. As to the clover and cluster composite detectors consisting of the cylinders of BGO crystal with original size 76×127, the intrinsic photopeak efficiencies are over 40% and the enhanced factor of absolute efficiencies is as high as 2.4 and 2.7, respectively, for 22 MeV γ ray.
2002, 26(12): 1297-1301.
Abstract:
Small cell drift chamber is adopted in the design of BESⅢ-the upgrade version of the BES detector, for its lower electron diffusion and better spatial resolution. We did Monte Carlo study for BES-Ⅲ drift chamber using GARFIELD7, and got the characteristics of the output signals of the sense wires. This become the theoretical basis for the design of drift chamber readout electronics.
2002, 26(12): 1302-1308.
Abstract:
To have a high positron yield,it is extremely important to minimize the primary electron beam radius at the positron production target. Various effects on electron beam blow-up have been analyzed.By comparing the measured beam radius with simulation,result at current BEPC positron source,some concrete effects on beam blow-up have been described. A design study on minimizing the electron beam radius at BEPC-Ⅱ positron source is given,and the ways to reach this goal have been summarized.
2002, 26(12): 1309-1315.
Abstract:
We propose to build a GeV γ-ray beam line, Shanghai laser Electron Gamma Source-Ⅱ (SLEGS-Ⅱ), at the Shanghai Synchrotron Radiation Facility (SSRF). By Backward Compton Scattering (BCS) of ultraviolet laser from 3.5 GeV electrons of SSRF, high intense quasi-monochromatic BCS γ-rays with high linear or circular polarization ranging 300-870MeV will be produced. In this paper, we present the outline of SLEGS-Ⅱ and the properties of BCS γ-ray with numerical computation based on the major parameters of SSRF storage ring. The selection of interaction region and tagging position is discussed.
2002, 26(12): 1316-1319.
Abstract:
The identical graphite targets were irradiated by 60keV Ni+and Ar+ ions with the fluences of 1018/cm2 in turns under the same experimental conditions. Using high-resolution transmission electron microscope (HRTEM) with energy dispersive X-ray (EDX) and electron diffraction (ED), we found different-size nanoscale Ar-bubbles embedded in glass-carbon-like membranes for the first time. Moreover, in part of these nanobubbles Ar may have been formed into a solid-like structures.
IF: 5.861
Monthly founded in 1977
ISSN 1674-1137 CN 11-5641/O4
Original research articles, Ietters and reviews Covering theory and experiments in the fieids of
• Particle physics
• Nuclear physics
• Particle and nuclear astrophysics
• Cosmology
Author benefits
News
Meet Editor | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9062501192092896, "perplexity": 2571.9897804680786}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371609067.62/warc/CC-MAIN-20200405181743-20200405212243-00276.warc.gz"} |
https://aas.org/archives/BAAS/v32n3/dps2000/284.htm | DPS Pasadena Meeting 2000, 23-27 October 2000
Session 31. Extra-Solar Planets
Oral, Chairs: W. Cochran, G. Wuchterl, Wednesday, 2000/10/25, 10:30am-12:10pm, Little Theater (C107)
[31.10] High Probabilities of Planet Detection during Microlensing Events.
S. J. Peale (UCSB)
The averaged probability of detecting a planetary companion of a lensing star during a gravitational microlensing event toward the Galactic center when the planet-lens mass ratio is 0.001 is shown to have a maximum exceeding 20% for a distribution of source-lens impact parameters that is determined by the efficiency of event detection, and a maximum exceeding 10% for a uniform distribution of impact parameters. The probability varies as the square root of the planet-lens mass ratio. A planet is assumed detectable if the perturbation of the light curve exceeds 2/(S/N) for a significant number of data points, where S/N is the signal-to noise ratio for the photometry of the source. The probability peaks at a planetary semimajor axis a that is close to the mean Einstein ring radius of the lenses of about 2 AU along the line of sight, and remains significant for 0.6\leq a\leq 10 AU. The low value of the mean Einstein ring radius results from the dominance of M stars in the mass function of the lenses. The probability is averaged over the distribution of the projected position of the planet onto the lens plane, over the lens mass function, over the distribution of impact parameters, over the distribution of lens along the line of sight to the source star, over the I band luminosity function of the sources adjusted for the source distance, and over the source distribution along the line of sight. If two or more parameters of the lensing event are known, such as the I magnitude of the source and the impact parameter, the averages over these parameters can be omitted and the probability of detection determined for a particular event. The calculated probabilities behave as expected with variations in the line of sight, the mass function of the lenses, the extinction and distance to and magnitude of the source, and with a more demanding detection criterion. The relatively high values of the probabilities are robust to plausible variations in the assumptions. The high probabilities offer the promise of gaining statistics rapidly on the frequency of planets in long period orbits, and thereby encourage the expansion of ground based microlensing searches for planets with enhanced capabilities. A ground based microlensing search for planets complements the highly successful radial velocity searches and expanding transit searches by being most sensitive to distant, long period planets, whereas both radial velocity and transit searches are most sensitive to close, massive planets. Existing and proposed astrometric searches are also most sensitive to distant planets, but only with a data time span that is a significant fraction of the orbit period. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9212813973426819, "perplexity": 1086.648639993405}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824201.28/warc/CC-MAIN-20160723071024-00052-ip-10-185-27-174.ec2.internal.warc.gz"} |
https://www.semanticscholar.org/author/M.-Kozlov/2776124 | • Publications
• Influence
Limit on the cosmological variation of mp/me from the inversion spectrum of ammonia.
• Physics, Medicine
• Physical review letters
• 15 June 2007
We obtain the limit on the space-time variation of the ratio of the proton mass to the electron mass, mu=m(p)/m(e), based on comparison of quasar absorption spectra of NH3 with CO, HCO+ and HCNExpand
• 107
• 15
• PDF
Methanol as A Tracer of Fundamental Constants
• Physics
• 8 June 2011
The methanol molecule CH{sub 3}OH has a complex microwave spectrum with a large number of very strong lines. This spectrum includes purely rotational transitions as well as transitions withExpand
• 63
• 6
• PDF
Blackbody-radiation shift in the Sr optical atomic clock
• Physics
• 26 October 2012
We evaluated the static and dynamic polarizabilities of the \$5{s}^{2}\phantom{\rule{0.16em}{0ex}}{}^{1}{S}_{0}\$ and \$5s5p\phantom{\rule{0.16em}{0ex}}{}^{3}{P}_{0}^{o}\$ states of Sr using theExpand
• 82
• 4
• PDF
A new approach for testing variations of fundamental constants over cosmic epochs using FIR fine-structure lines
• Physics
• 18 December 2007
Aims. We aim to obtain limits on the variation of the fine-structure constant α and the electron-to-proton mass ratio μ over different cosmological epochs. Methods. A new approach based on theExpand
• 35
• 4
• PDF
Searching for chameleon-like scalar fields with the ammonia method , II. Mapping of cold molecular cores in NH3 and HC3N lines
Context. In our previous work we found a statistically significant offset ΔV ≈ 27 m s −1 between the radial velocities of the HC3N J = 2− 1a nd NH 3 (J,K) = (1, 1) transitions observed in molecularExpand
• 29
• 4
• PDF
ELECTRIC-DIPOLE AMPLITUDES, LIFETIMES, AND POLARIZABILITIES OF THE LOW-LYING LEVELS OF ATOMIC YTTERBIUM
• Physics
• 1 October 1999
Centre for Optics and Atomic Physics, Sussex University, Falmer, Brighton BN1 9QH, United Kingdom~Received 2 March 1999!The results of ab initio calculations of electric-dipole amplitudes, lifetimes,Expand
• 60
• 3
• PDF
Flavor physics of leptons and dipole moments
This chapter of the report of the “Flavor in the era of the LHC” Workshop discusses the theoretical, phenomenological and experimental issues related to flavor phenomena in the charged lepton sectorExpand
• 281
• 3
• PDF
Λ -doublet spectra of diatomic radicals and their dependence on fundamental constants
\$\Lambda\$-doublet spectra of light diatomic radicals have high sensitivity to the possible variations of the fine structure constant alpha and electron-to-proton mass ratio beta. For molecules OH andExpand
• 33
• 3
• PDF
An upper limit to the variation in the fundamental constants at redshift z = 5.2
• Physics
• 16 March 2012
Aims. We constrain a hypothetical variation in the fundamental physical constants over the course of cosmic time. Methods. We use unique observations of the CO(7-6) rotational line and the (Ci) 3 P2Expand
• 30
• 3
• PDF
SENSITIVITY OF THE H3O+ INVERSION-ROTATIONAL SPECTRUM TO CHANGES IN THE ELECTRON-TO-PROTON MASS RATIO
• Physics
• 20 September 2010
Quantum-mechanical tunneling inversion transition in ammonia (NH3) is actively used as a sensitive tool to study possible variations of the electron-to-proton mass ratio, μ = m e/m p. The moleculeExpand
• 20
• 3
• PDF | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8814496397972107, "perplexity": 8983.127344056154}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704799741.85/warc/CC-MAIN-20210126104721-20210126134721-00549.warc.gz"} |
http://tex.stackexchange.com/questions/95990/declare-a-new-font-shape | # Declare a new font shape
In my document \bfseries is defined to select the "semibold" shape, however the typewriter family (beramono) does not have a "semibold" shape. So I receive a warning:
LaTeX Font Warning: Font shape T1/fvm/sb/n' undefined
(Font) using T1/fvm/m/n' instead on input line 250.
I tried the solution discussed here by using the follwoing command to declare a new font shape for typewriter family.
\DeclareFontShape{T1}{fvm}{sb}{n}{<->ssub*fvm/b/n}{}
or
\DeclareFontShape{\encodingdefault}{\ttdefault}{sb}{n}{<->ssub*\ttdefault/b/n}{}
And for both commands I received the follwoing error:
! LaTeX Error: Font family T1+fvm' unknown.
A working example is:
\documentclass{report}
\usepackage[T1]{fontenc}
\usepackage[oldstyle,semibold,type1]{libertine}
\usepackage[scaled=.85]{beramono}% typewriter font
\DeclareFontShape{T1}{fvm}{sb}{n}{<->ssub * fvm/b/n}{}
\begin{document}
\ttfamily\bfseries Hello World!
\end{document}
-
You have to issue an appropriate \DeclareFontFamily command before any \DeclareFontShape related to it. – egreg Jan 30 '13 at 9:58
@egreg, the font familiy is available in the documnet. I thought it should be declared by the package beramono. If I go to declare the font family, I should also declare the bold shape and maybe different sizes fot it. Is there an easy way to copy bold shape as semi-bold? – Aydin Jan 30 '13 at 10:52
The \DeclareFontFamily command is in the .fd file that's not read until some text has to be typeset with a font from that family. Can you add a minimal working example (MWE)? – egreg Jan 30 '13 at 10:55
@Aydin You can just load the .fd file with \input before issuing your \DeclareFontShape. – Stephan Lehmke Jan 30 '13 at 10:59
@StephanLehmke, thanks alot, adding .fd solved the problem. – Aydin Jan 30 '13 at 11:41
LaTeX needs to see a \DeclareFontFamily declaration before a \DeclareFontShape command can be issued.
A simple way out is to add
\sbox0{\ttfamily X}
after \usepackage[<options>]{beramono}, which will cause the corresponding .fd file to be read in, which contains the appropriate \DeclareFontFamily command.
-
It is not necessary to use a box. A simple \ttfamily or (if you fear side effects) a local {\ttfamily} is enough to trigger the loading of the fd-file. – Ulrike Fischer Jan 30 '13 at 14:39
Thanks to egreg and Stephan, loading the .fd file before \DeclareFontShape solved the problem.
\documentclass{report}
\usepackage[T1]{fontenc}
\usepackage[oldstyle,semibold,type1]{libertine}
\usepackage[scaled=.85]{beramono}% typewriter font
\makeatletter
\input{t1fvm.fd}
\makeatother
\DeclareFontShape{T1}{fvm}{sb}{n}{<->ssub * fvm/b/n}{}
\begin{document}
\ttfamily\bfseries Hello World!
\end{document}
-
You can force the loading also with \sbox0{\ttfamily X}` – egreg Jan 30 '13 at 11:48
@egreg, thanks. I will do so, as it's more general. – Aydin Jan 30 '13 at 12:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9505282044410706, "perplexity": 7363.828069673585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768050.31/warc/CC-MAIN-20141217075248-00165-ip-10-231-17-201.ec2.internal.warc.gz"} |
http://clay6.com/qa/36912/a-mass-m-is-supported-by-a-massless-string-wound-around-a-uniform-hollow-cy | Browse Questions
# A mass $m$ is supported by a massless string wound around a uniform hollow cylinder of mass $m$ and radius $R$. If the string does not slip on the cylinder, with what acceleration will the mass fall on release?
$\begin{array}{1 1}(A)\;\frac{5g}{6}\\ (B)\;g \\(C)\;\frac{2g}{3} \\(D)\;\frac{g}{2} \end{array}$
The acceleration will the mass fall on release is $\large\frac{g}{2}$
Hence D is the correct answer.
+1 vote | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.755368173122406, "perplexity": 494.17297494646243}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171418.79/warc/CC-MAIN-20170219104611-00235-ip-10-171-10-108.ec2.internal.warc.gz"} |
https://physics.stackexchange.com/questions/213248/general-formula-for-expanding-wave-function-in-terms-of-orthogonal-states | # General formula for expanding wave function in terms of orthogonal states?
Given a wave function $\psi(x) = \langle \psi | x \rangle$. It can be expanded in terms of orthogonal states:
$$\langle \psi | x \rangle = \sum_n \langle \psi | n \rangle \langle n |x \rangle$$
Questions:
1. Can this always be done for an arbitary action?
2. Given a general action $S(x,\dot{x})$, is there a general formula to find $\langle \psi |n \rangle$ and $\langle x |n \rangle$ in terms of the action, S, and the wavefunction $\psi(x)$?
All I know is that the ground state can be given formally by the path integral:
$$\langle x | 0 \rangle = \int\limits^{y_0=x}_{y_{-\infty}=0} e^{iS(y,\dot{y})} Dy$$
and so (I think)
$$\langle \psi | 0 \rangle = \int \psi(y_0)e^{iS(y,\dot{y})} Dy$$
1. Are there any general formulae for $|n\rangle$ for $n>0$ ?
2. Also, can this be done in the case of full QFT or just for free fields?
• The expansion is just a matter of using the fact that the states |n> form a complete set, you can express that as the so-called closure relation: $$\sum_n |n\rangle\langle n| = 1$$ The path integral gives the time evolution, the matrix element you wrote down is to be interpreted in the Heisenberg picture, so it gives the amplitude for a particle at position 0 to evolve to position x. You can manipulate that via various closure relations and express it in terms of wavefunctions. – Count Iblis Oct 18 '15 at 19:42
• To get the ground state from the path integral, you can use the Gell-Mann and Low theorem – Count Iblis Oct 18 '15 at 19:42
• Hi, yes, I got the ground state using the path integral. I just wondered if there is a way to get the other states using the path integral? Maybe there is no general way of doing it? The states are eigenstates of the energy if this helps? Maybe there is no general formula? – zooby Oct 18 '15 at 19:52
(1) Can this always be done for an arbitrary action?
The answer to this was given by CountIblis in his comment. The expansion you are asking about follows from the closure formula for basis $\{|n\rangle\}$, $I = \sum_n{|n\rangle\langle n|}$: $$\langle \psi|x\rangle = \langle \psi|I|x\rangle = \sum_n{\langle\psi|n\rangle\langle n|x\rangle}$$ It is independent of the action that governs the evolution of $|\psi\rangle$.
(2) Given a general action $S(x,\dot{x})$, is there a general formula to find $\langle \psi|n\rangle$ and $\langle x|n\rangle$ in terms of the action, $S$, and the wavefunction $\psi(x)$?
It we know the wavefunctions $\langle x| \psi \rangle$ and $\langle x| n \rangle$, the overlap $\langle n |\psi\rangle = \langle \psi|n\rangle^*$ can be found (trivially) as $$\langle n |\psi\rangle = \int{dx \;\langle n |x\rangle \langle x|\psi\rangle} = \int{dx\; \langle x |n\rangle^* \langle x|\psi\rangle}$$
You can look at this as another application of the closure relation, this time for the $\{|x\rangle\}$ basis, $I = \int{dx\; |x\rangle \langle x|}$. Again, it is independent of the action for the evolution of $|\psi\rangle$.
The problem of finding the wavefunctions $\langle x| n \rangle$ for excited states in the path integral representation is another issue, and not a simple one. You may want to take a look at these papers to get an idea of what's involved:
• T. E. Sorensen & W. B. England, Molecular Physics 89, 1577 (1996)
• A.G. Ushveridze, Physics Letters A 110(4), 217–220 (1985)
• Unfortunately this did not answer the question. I did not say the $\langle x | n \rangle$ was known. This is one of the unknowns that we are trying to find! All that is known is S and the wavefunction. All you've done is express one unknown in terms of the other unknown! – zooby Oct 20 '15 at 13:25
• In answering (2) I said "If we know ...$\langle x|n\rangle$ ...", followed by (last paragraph) "The problem of finding the wavefunctions $\langle x|n\rangle$ ... is another issue". This means "we can calculate $\langle n| \psi\rangle$ provided we know $\langle x|n\rangle$ in addition to $\langle x|\psi\rangle$, but $\langle x|n\rangle$ itself is not so easy to calculate in the first place in path integral representation even if we know S exactly". However, I did provide 2 refs. that consider exactly this path integral calculation for excited states, closing the problem you asked about. – udrv Oct 20 '15 at 18:48
I don't think there is a general way. Also, probably the state basis is only applicable probably to free field theories. Just the quadratic part of the action.
In general it is a very difficult if not impossible task to diagonalise the Hamiltonian. That is why path integrals are used for interactions. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9581353664398193, "perplexity": 205.50520639125384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540530857.12/warc/CC-MAIN-20191211103140-20191211131140-00277.warc.gz"} |
http://math.stackexchange.com/questions/457181/short-integral-question | # Short integral question
Can anyone here just tell me this is true? I just need a YES/NO, because I am a bit confused right now...
\begin{align} \int\limits_{-\infty}^{\infty}\exp\left[{-\frac{x^2}{a}}\right]dx = \left.\left( -\frac{a}{2x} \right)\exp\left[{-\frac{x^2}{a}}\right]\right|_{-\infty}^{\infty} \end{align}
-
No. The right hand is not the primitive of the integrand in the left hand, if that's what you meant to write. – DonAntonio Aug 1 '13 at 11:32
Yes that is what i wanted to write. Where is the catch here??? – 71GA Aug 1 '13 at 11:33
It seems like you are trying to use the chain rule in reverse... the problem is that it doesn't work like that. You would need to use substitution; unfortunately, though, you don't have an $x$ to cancel out the derivative of $-\frac{x^2}{a}$, and so you can't carry out substitution here. – Nicholas R. Peterson Aug 1 '13 at 11:37
Have you tried differentiating the expression on the right-hand side? – Mark Bennet Aug 1 '13 at 11:37
i think you should use polar coordinates – what'sup Aug 1 '13 at 11:38
Here's a way to find out the simplest case (understand and explain each step):
$$I:=\int\limits_{-\infty}^\infty e^{-x^2}dx\implies I^2=\left(\int\limits_{-\infty}^\infty e^{-x^2}dx\right)^2=\int\limits_{-\infty}^\infty e^{-x^2}dx\int\limits_{-\infty}^\infty e^{-y^2}dy=$$
$$=\int\limits_{-\infty}^\infty\int\limits_{-\infty}^\infty e^{-(x^2+y^2)}dxdy\stackrel{\text{polar coord.}}=\int\limits_0^\infty\int\limits_0^{2\pi}re^{-r^2}d\theta dr=$$
$$=\left.-\pi\int\limits_0^\infty(-2r\,dr)e^{-r^2}=-\pi e^{-r^2}\right|_0^\infty=-\pi(0-1)=\pi$$
and from here
$$I=\sqrt\pi$$
Now your integral, assuming $\,a>0\,$:
$$J:=\int\limits_{-\infty}^\infty e^{-x^2/a}dx\;\ldots\;\;\text{substitution}:\;\;u:=\frac x{\sqrt a}\;,\;dx=\sqrt a\,du\implies$$
$$J=\sqrt a\int\limits_{-\infty}^\infty e^{-u^2}du=\sqrt{a\pi}$$
-
oh man i just posted an answer like your answer – what'sup Aug 1 '13 at 11:46
i didn't see your answer i swear – what'sup Aug 1 '13 at 11:47
@what'sup , don't worry: if it bothers you a lot delete your answer, or else leave it as it is and let others decide which approach they like better. This happens a lot with these basic questions and no need to feel bad. – DonAntonio Aug 1 '13 at 11:48
ok thank you DonAntonio – what'sup Aug 1 '13 at 11:50
Thanks @SamiBenRomdhane , fixed. – DonAntonio Aug 1 '13 at 11:56
Hint
To find the value of the integral:
Multiply the integral by $\int\limits_{-\infty}^{\infty}\exp\left[{-\frac{y^2}{a}}\right]dy$ then use the polar coordinates.
-
Nice suggestion, Sami. – amWhy Apr 25 at 12:03
evaluate $$\large{ \int_{-\infty}^{\infty} e^{\frac{-x^2}{a}} \ dx}$$
now we have $$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-\frac{x^2+y^2}{a}} \ dx \ dy = 4\int_0^{\infty} \int_0^{\infty} e^{-\frac{x^2+y^2}{a}} \ dx \ dy$$ (ok i like 0 to inf ) to polar coordinate
$$4\int_0^{\frac{\pi}{2}} \int_0^{\infty} re^{-\frac{r^2}{a}} \ dr \ d\theta$$
now it became easy and note that
$$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-\frac{x^2+y^2}{a}} \ dx \ dy = \left(\large{ \int_{-\infty}^{\infty} e^{\frac{-x^2}{a}} \ dx} \right)^2$$
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9344140887260437, "perplexity": 1741.5235514074159}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657136966.6/warc/CC-MAIN-20140914011216-00313-ip-10-234-18-248.ec2.internal.warc.gz"} |
http://www.pratiyogi.com/assessment/chemical-bonds-test-5/100 | # Chemical Bonds Test 5
Total Questions:50 Total Time: 60 Min
Remaining:
## Questions 1 of 50
Question:In the following which substance will have highest boiling point
### Answers Choices:
$$He$$
$$CsF$$
$$N{H_3}$$
$$CHC{l_3}$$
## Questions 2 of 50
Question:Phosphate of a metal M has the formula $${M_3}{\left( {P{O_4}} \right)_2}.$$ The formula for its sulphate would be
### Answers Choices:
$$MS{O_4}$$
$$M{\left( {S{O_4}} \right)_2}$$
$${M_2}{\left( {S{O_4}} \right)_3}$$
$${M_3}{\left( {S{O_4}} \right)_2}$$
## Questions 3 of 50
Question:Chemical formula for calcium pyrophosphate is $${120^o}$$ The formula for ferric pyrophosphate will be
### Answers Choices:
$$F{e_3}{({P_2}{O_7})_3}$$
$$F{e_4}{P_4}{O_{14}}$$
$$F{e_4}{({P_2}{O_7})_3}$$
$$F{e_3}P{O_4}$$
## Questions 4 of 50
Question:An atom with atomic number 20 is most likely to combine chemically with the atom whose atomic number is
11
14
16
10
## Questions 5 of 50
Question:In covalency
### Answers Choices:
Electrons are transferred
Electrons are equally shared
The electron of one atom are shared between two atoms
None of the above
## Questions 6 of 50
Question:Octet rule is not valid for the molecule
### Answers Choices:
$$C{O_2}$$
$${H_2}O$$
$$CO$$
$${O_2}$$
## Questions 7 of 50
Question:The interatomic distances in $${H_2}$$ and $$C{l_2}$$ molecules are 74 and 198 pm respectively. The bond length of $$HCl$$ is
272 pm
136 pm
124 pm
248 pm
## Questions 8 of 50
Question:Which of the following atoms has minimum covalent radius
B
C
N
Si
## Questions 9 of 50
Question:Number of electrons in the valence orbit of nitrogen in an ammonia molecule are
8
5
6
7
## Questions 10 of 50
Question:Which has a coordinate bond
### Answers Choices:
$$SO_3^{2 - }$$
$$C{H_4}$$
$$C{O_2}$$
$$N{H_3}$$
## Questions 11 of 50
Question:Which of the following would have a permanent dipole moment
### Answers Choices:
$$B{F_3}$$
$$Si{F_4}$$
$$S{F_4}$$
$$Xe{F_4}$$
## Questions 12 of 50
Question:Carbon tetrachloride has no net dipole moment because of
### Answers Choices:
Its planar structure
Its regular tetrahedral structure
Similar sizes of carbon and chlorine atoms
Similar electron affinities of carbon and chlorine
## Questions 13 of 50
Question:Which shows the least dipole moment
### Answers Choices:
$$CC{l_4}$$
$$CHC{l_3}$$
$$C{H_3}C{H_2}OH$$
$$C{H_3}COC{H_3}$$
## Questions 14 of 50
Question:Which molecule has zero dipole moment
### Answers Choices:
$${H_2}O$$
AgI
$$PbS{O_4}$$
HBr
## Questions 15 of 50
Question:Polarization is the distortion of the shape of an anion by an adjacently placed cation. Which of the following statements is correct
### Answers Choices:
Maximum polarization is brought about by a cation of high charge
Minimum polarization is brought about by a cation of low radius
A large cation is likely to bring about a large degree of polarization
A small anion is likely to undergo a large degree of polarization
## Questions 16 of 50
Question:The bonds between $$P$$ atoms and $$Cl$$ atoms in $$PC{l_5}$$ are likely to be
### Answers Choices:
Ionic with no covalent character
Covalent with some ionic character
Covalent with no ionic character
Ionic with some metallic character
## Questions 17 of 50
Question:Which of the following has zero dipole moment
### Answers Choices:
ClF
$$PC{l_3}$$
$$Si{F_4}$$
$$CFC{l_3}$$
## Questions 18 of 50
Question:Which of the following compounds has least dipole moment
### Answers Choices:
$$P{H_3}$$
$$CHC{l_3}$$
$$N{H_3}$$
$$B{F_3}$$
## Questions 19 of 50
Question:The most acidic compound among the following is
### Answers Choices:
$$C{H_3}C{H_2}OH$$
$${C_6}{H_5}OH$$
$$C{H_3}COOH$$
$$C{H_3}C{H_2}C{H_2}OH$$
## Questions 20 of 50
Question:Which of the following is not correct
### Answers Choices:
A sigma bond is weaker than $$\pi$$bond
A sigma bond is stronger than $$\pi$$bond
A double bond is stronger than a single bond
A double bond is shorter than a single bond
## Questions 21 of 50
Question:The bond angle in ethylene is
### Answers Choices:
$${180^o}$$
$${120^o}$$
$${109^o}$$
$${90^o}$$
## Questions 22 of 50
Question:Compound formed by $$s{p^3}d$$hybridization will have structure
### Answers Choices:
Planar
Pyramidal
Angular
Trigonal bipyramidal
## Questions 23 of 50
Question:Which of the following is the correct electronic formula of chlorine molecule
### Answers Choices:
$$:\,\mathop {Cl}\limits_{.\,.}^{\,\,\,.\,.} \,:\,\mathop {Cl}\limits_{.\,.}^{\,\,\,.\,.} \,:$$
$$:\,\mathop {C{l^ - }}\limits_{.\,.}^{\,\,\,.\,.} \,:\,\,:\,\,\mathop {C{l^ + }}\limits_{.\,.}^{\,.\,.} \,:$$
$$:\,\mathop {Cl}\limits_{}^{\,\,\,.\,.} \,:\,\mathop {Cl}\limits_{}^{\,\,\,.\,.} \,:$$
$$:\,\mathop {Cl}\limits_{}^{\,\,\,.\,.} \,:\,\mathop {\,:\,Cl}\limits_{}^{\,\,\,.\,.} \,:$$
## Questions 24 of 50
Question:In $$Xe{F_4}$$ hybridization is
### Answers Choices:
$$s{p^3}{d^2}$$
$$s{p^3}$$
$$s{p^3}d$$
$$s{p^2}d$$
## Questions 25 of 50
Question:The structural formula of a compound is $$C{H_3} - CH = C = C{H_2}.$$ The type of hybridization at the four carbons from left to right are
### Answers Choices:
$$s{p^2},\,\,sp,\,\,s{p^2},\,\,s{p^3}$$
$$s{p^2},\,\,s{p^3},\,\,s{p^2},\,\,sp$$
$$s{p^3},\,\,s{p^2},\,\,sp,\,\,s{p^2}$$
$$s{p^3},\,\,s{p^2},\,\,s{p^2},\,\,s{p^2}$$
## Questions 26 of 50
Question:Acetate ion contains
### Answers Choices:
One $$C,\,\,O$$single bond and one $$C,\,\,O$$ double bond
Two $$C,\,\,O$$ single bonds
Two $$C,\,\,O$$double bonds
None of the above
## Questions 27 of 50
Question:In the compound $$C{H_3} \to OCl$$ which type of orbitals have been used by the circled carbon in bond formation
### Answers Choices:
$$s{p^3}$$
$$s{p^2}$$
$$sp$$
p
## Questions 28 of 50
Question:The correct order of the $$O - O$$ bond length in $${O_2},\,\,{H_2}{O_2}$$ and $${O_3}$$ is
### Answers Choices:
$${O_2} > {O_3} > {H_2}{O_2}$$
$${O_3} > {H_2}{O_2} > {O_2}$$
$${H_2}{O_2} > {O_3} > {O_2}$$
$${O_2} > {H_2}{O_2} > {O_3}$$
## Questions 29 of 50
Question:Which of the following is isoelectronic as well as has same structure as that of $${N_2}O$$
### Answers Choices:
$${N_3}H$$
$${H_2}O$$
$$N{O_2}$$
$$C{O_2}$$
## Questions 30 of 50
Question:$$CC{l_4}$$ has the hybridization
### Answers Choices:
$$s{p^3}d$$
$$ds{p^2}$$
$$sp$$
$$s{p^3}$$
## Questions 31 of 50
Question:Pyramidal shape would be of
### Answers Choices:
$$NO_3^ -$$
$${H_2}O$$
$${H_3}{O^ + }$$
$$NH_4^ +$$
## Questions 32 of 50
Question:What is the correct mode of hybridization of the central atom in the following compounds : $$NO_2^ + ,S{F_{4,}}P{F_6}^ -$$
### Answers Choices:
$$s{p^2},\,\,s{p^3},\,\,{d^2}s{p^3}$$
$$s{p^3},\,\,s{p^3}{d^2},\,\,s{p^3}{d^2}$$
$$sp,\,\,s{p^3}d,\,\,s{p^3}{d^2}$$
$$sp,\,\,s{p^2},\,\,s{p^3}$$
## Questions 33 of 50
Question:As the s-character of hybridisation orbital increases, the bond angle
Increases
Decreases
Becomes zero
Does not change
## Questions 34 of 50
Question:The shape of $$I{F_7}$$molecule is
### Answers Choices:
Octahedral
Pentagonal bipyramidal
Trigonal bipyramidal
Tetrahedral
## Questions 35 of 50
Question:Among the following compounds the one that is polar and has central atom with $$s{p^2} -$$hybridization is
### Answers Choices:
$${H_2}C{O_3}$$
$$B{F_3}$$
$$Si{F_4}$$
$$HCl{O_2}$$
## Questions 36 of 50
Question:Which of the following molecules has pyramidal shape
### Answers Choices:
$$PC{l_3}$$
$$S{O_3}$$
$$CO_3^{2 - }$$
$$NO_3^ -$$
## Questions 37 of 50
Question:Resonance is due to
### Answers Choices:
Delocalization of sigma electrons
Delocalization of pi electrons
Migration of H atoms
Migration of protons
## Questions 38 of 50
Question:$$Xe{F_6}$$ is
### Answers Choices:
Octahedral
Distorted octahedral
Planar
Tetrahedral
## Questions 39 of 50
Question:Which of the following species is planar
### Answers Choices:
$$CO_3^{2 - }$$
$$N{H_2}$$
$$PC{l_3}$$
None of these
## Questions 40 of 50
Question:The bond angle in ammonia molecule is
### Answers Choices:
$${91^o}8'$$
$${93^o}3'$$
$${106^o}45'$$
$${109^o}28'$$
## Questions 41 of 50
Question:Out of the following which has smallest bond length
### Answers Choices:
$${O_2}$$
$$O_2^ +$$
$$O_2^ -$$
$$O_2^{2 - }$$
## Questions 42 of 50
Question:The energy of a $$2p$$orbital except hydrogen atom is
### Answers Choices:
Less than that of $$s{p^2}$$ orbital
More than that of $$2s$$orbital
Equal to that of $$2s$$ orbital
Double that of $$2s$$ orbital
## Questions 43 of 50
Question:Which bond is strongest
### Answers Choices:
$$F - F$$
$$Br - F$$
$$Cl - F$$
$$I - F$$
## Questions 44 of 50
Question:Which of the following is paramagnetic
### Answers Choices:
$$O_2^ +$$
$$C{N^ - }$$
CO
$${N_2}$$
## Questions 45 of 50
Question:The paramagnetic molecule at ground state among the following is
### Answers Choices:
$${H_2}$$
$${O_2}$$
$${N_2}$$
$$CO$$
## Questions 46 of 50
Question:The reason for exceptionally high boiling point of water is
### Answers Choices:
Its high specific heat
Its high dielectric constant
Low ionization of water molecules
Hydrogen bonding in the molecules of water
## Questions 47 of 50
Question:Hydrogen bonding is maximum in
Ethanol
Diethyl ether
Ethyl chloride
Triethyl amine
## Questions 48 of 50
Question:$$N{H_3}$$ has a much higher boiling point than $$P{H_3}$$ because
### Answers Choices:
$$N{H_3}$$ has a larger molecular weight
$$N{H_3}$$ undergoes umbrella inversion
$$N{H_3}$$ forms hydrogen bond
$$N{H_3}$$ contains ionic bonds whereas $$P{H_3}$$ contains covalent bonds
## Questions 49 of 50
Question:Which one of the following substances consists of small discrete molecules
### Answers Choices:
$$CO$$
Graphite
Copper
Dry ice
## Questions 50 of 50
Question:Blue vitriol has
Ionic bond
Coordinate bond
Hydrogen bond
All the above | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5632762312889099, "perplexity": 13286.023631169397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816912.94/warc/CC-MAIN-20180225190023-20180225210023-00767.warc.gz"} |
http://link.springer.com/chapter/10.1007%2F978-3-642-27440-4_26 | Chapter
Monte Carlo and Quasi-Monte Carlo Methods 2010
Volume 23 of the series Springer Proceedings in Mathematics & Statistics pp 471-486
Date:
# On Monte Carlo and Quasi-Monte Carlo Methods for Series Representation of Infinitely Divisible Laws
• Reiichiro KawaiAffiliated withDepartment of Mathematics, University of Leicester Email author
• , Junichi ImaiAffiliated withFaculty of Science and Technology, Keio University
* Final gross prices may vary according to local VAT.
## Abstract
Infinitely divisible random vectors and Lévy processes without Gaussian component admit representations with shot noise series. To enhance efficiency of the series representation in Monte Carlo simulations, we discuss variance reduction methods, such as stratified sampling, control variates and importance sampling, applied to exponential interarrival times forming the shot noise series. We also investigate the applicability of the generalized linear transformation method in the quasi-Monte Carlo framework to random elements of the series representation. Although implementation of the proposed techniques requires a small amount of initial work, the techniques have the potential to yield substantial improvements in estimator efficiency, as the plain use of the series representation in those frameworks is often expensive. Numerical results are provided to illustrate the effectiveness of our approaches. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8986670970916748, "perplexity": 723.879146239481}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738662541.24/warc/CC-MAIN-20160924173742-00112-ip-10-143-35-109.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/151699/intersection-numbers-on-a-surface | # Intersection numbers on a surface
Some (probably very easy) questions on intersection theory on surfaces...
Say $S$ is a smooth projective surface over $\mathbb{C}$ with canonical divisor $K_S$.
• If $S$ is not ruled and $H$ is a hyperplane section (for an arbitrary embedding), do we always have that $K_S \cdot H > 0$? I know that $K_S \cdot H \geq 0$, but why would $K_S \cdot H = 0$ be impossible? EDIT: OK, this can happen if $K_S = 0$, of course (see QiL's answer). But if I moreover assume that $K_S^2 > 0$, can we still have $K_S \cdot H = 0$?
• If $D \cdot H < 0$ for some divisor $D$ and some hyperplane section $H$, does it follow that $H^0(D,\mathcal{O}_S(D)) = 0$? It seems reasonable but also too simple, so I'm not so sure.
-
1. It can happen that $K_S=0$ (i.e. $S$ is a K3 surface), then $K_S\cdot H=0$.
2. If $H^0(D, O_S(D))\ne 0$, then up to linear equivalence, $D\ge 0$, so $D\cdot H\ge 0$ because $H$ is ample.
Thank you, QiL. Concerning (1), I added an extra hypothesis - does the result hold with this hypothesis? Concerning (2), what is the formal argument showing that $D \cdot H \geq 0$ if $D$ is effective and $H$ is ample? Intuitively I feel that this has to be right, but I don't see a formal proof... – Evariste May 30 '12 at 22:07
$K3$-surfaces aren't ruled so it can still happen that $K_S=0$ with your extra hypothesis. Moreover, even if $K_S = 0$ you can have that $K_S\cdot H =0$, I think. – Harry May 31 '12 at 7:54
Yes, but I added the hypothesis $K_S^2 > 0$... – Evariste May 31 '12 at 16:14
@Evariste, then $K_S\cdot H>0$. See Hodge index theorem (Hartshorne, V.1.9). For $D\cdot H>0$: there is a hyplane $H'$ which doesn't contain $D$. So $D\cdot H=D\cdot H'>0$. See also Hartshorne V.1.10 for the inverse implication. – user18119 May 31 '12 at 22:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9695913195610046, "perplexity": 260.82233041070333}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824337.54/warc/CC-MAIN-20160723071024-00034-ip-10-185-27-174.ec2.internal.warc.gz"} |
https://www.all-dictionary.com/sentences-with-the-word-at%20variance | # Sentence Examples with the word at variance
Paul's attitude towards nepotism was at variance with his character as a reformer.
The Solomonic authorship has long since been given up: the historical setting of the work and its atmosphere - the silent assumption of monotheism and monogamy, the nonnational tone, the attitude towards kings and people, the picture of a complicated social life, the strain of philosophic reflection - are wholly at variance with what is known of the 10th century B.C. and with the Hebrew literature down to the 5th or 4th century B.C. The introduction of Solomon, the ideal of wisdom, is a literary device of the later time, and probably deceived nobody.
In the interval there had been other questions on which he found himself at variance with Gladstonian Liberalism, for instance, as regards the Sudan and the Transvaal, nor was he inclined to stomach the claims of the Caucus or the Birmingham programme.
View more
Irapa, beyond, contrary to, S6Ea, opinion), a proposition or statement which appears to be at variance with generally-received opinion, or which apparently is self-contradictory, absurd or untrue, but either contains a concealed truth or may on examination be proved to be true.
Further, since Socrates and the Socratics were educators, they too might be, and in general were, regarded as sophists; but, as they conceived truth - so far as it was attainable - rather than success in life, in the law court, in the assembly, or in debate, to be the right end of intellectual effort, they were at variance with their rivals, and are commonly ranked by historians, not with the sophists, who confessedly despaired of knowledge, but with the philosophers, who, however unavailingly, continued to seek it.
At the same time all ancient Welsh laws and customs, which were at variance with the recognized law of England, were now declared illegal, and Cymric land tenure by gavelkind, which had been respected by Edward I., was expressly abolished and its place taken by the ordinary practice of primogeniture.
The missionaries from the first often found themselves at variance 1 It appears that the first persons to treat the Bushmen other than as animals to be destroyed were two missionaries, Messrs J.
His memoir (1775) on the rotatory motion of a body contains (as the author was aware) conclusions at variance with those arrived at by Jean le Rond, d'Alembert and Leonhard Euler in their researches on the same subject.
A Landtag was first called together in 1387, and the landgraves were con stantly at variance with the electors of Mainz, who had large temporal possessions in the country.
Cloistered seclusion is an artificial condition quite at variance with human instincts and habits, and the treatment, long continued, has proved injurious to health, inducing mental breakdown. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8456448316574097, "perplexity": 4152.472747391007}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607649.26/warc/CC-MAIN-20170523183134-20170523203134-00405.warc.gz"} |
https://astarmathsandphysics.com/index.php?option=com_content&view=article&id=4765:failure-of-the-galilean-transformation&catid=125&Itemid=1782 | ## Failure of the Galilean Transformation
The speed of light was known by the Greeks to be high - much higher than the speed of sound. You can see a bolt of lightning seconds before you hear the thunderclap. There was no, no reason for scientists to believe that light did no obey the Galilean Transformation.
Anyone carrying a light source would emit light that traveled faster in their direction of travel, and slower to a stationary observer behind the.
Repeated experiments showed however, that the speed of light was a fixed number
$c=299792458$
m/s, whatever speed of source and observer. Light, and in fact all electromagnetic radiation did not obey the Galilean transformation.
In addition there was the puzzling form of the Lorentz Force
$\mathbf{F}=q \mathbf{E}+q \mathbf{v} \times \mathbf{B}$
.
The Lorentz law implied a force that depended on speed. Because the speed changes under the Galilean Transformation, so should the force, then using
$\mathbf{F}=m \mathbf{a}$
, so should the acceleration. This directly contradicts the Galilean transformation, which implies forces and accelerations are unchanged by the Galilean Transformation. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.950334370136261, "perplexity": 581.5569767425457}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107904039.84/warc/CC-MAIN-20201029095029-20201029125029-00423.warc.gz"} |
https://zrna.org/docs/module/dual-arbitrary-wave-gen | DualArbitraryWaveGen
An ArbitraryWaveGen that splits its wavetable in two to drive two independent outputs.
Parameters
counter_reset_value
Options
output_mode $$\in \{$$ HALF $$,$$ HOLD $$\}$$
output_phase $$\in \{$$ PHASE1 $$,$$ PHASE2 $$\}$$
opamp_mode $$\in \{$$ DEFAULT $$,$$ CHOPPER_STABILIZED $$\}$$
Lookup Table
lookup_table $$= ( V_0, \; V_1 \;, \; ... \;, \; V_{255} ), \; -3.0 \leq V_i \leq 3.0$$
A sequence of 256 floats that defines two 128 value output voltage wavetables.
A counter indexes into this table at a rate proportional to the primary module clock.
Outputs
output1 Half-Cycle
output2 Half-Cycle
Analog Resource Usage
$$\begin{array}{|c|c|} \hline \text{Opamps} & \text{2 of 8} \\ \hline \text{Capacitors} & \text{6 of 32} \\ \hline \text{Lookup Table} & \text{1 of 1} \\ \hline \text{Counter} & \text{1 of 1} \\ \hline \end{array}$$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.672515869140625, "perplexity": 19645.53015782119}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145774.75/warc/CC-MAIN-20200223123852-20200223153852-00043.warc.gz"} |
https://www.dlubal.com/en-US/support-and-learning/support/faq/003376 | New
FAQ 003376 EN-US
07/31/2019
# According to which formula is the elastic critical buckling load for the torsional buckling Ncr,T calculated in RF-/STEEL EC3?
The elastic critical buckling load for torsional buckling Ncr,T is calculated as follows:
${\mathrm N}_{\mathrm{cr},\mathrm T}\;=\frac1{{\mathrm i}_{\mathrm M}^2}\;\cdot\;\left(\frac{\mathrm\pi^2\;\cdot\;\mathrm E\;\cdot\;{\mathrm I}_{\mathrm w}}{{\mathrm L}_{\mathrm T}^2}\;+\;\mathrm G\;\cdot\;{\mathrm I}_{\mathrm t}\right)$
${\mathrm i}_{\mathrm M}\;=\;\sqrt{{\mathrm i}_{\mathrm u}^2\;+\;{\mathrm i}_{\mathrm v}^2\;+\;{\mathrm u}_{\mathrm M}^2\;+\;{\mathrm v}_{\mathrm M}^2}$
with
E Modulus of elasticity G Shear modulus Iw Warping resistance It Torsion moment of inertia iu, iv Pricipial radius of gyration um, vm Shear center coordinates in the principal axis system LT Torsional buckling critical length
#### Reference
[1] Roik, K.; Carl, J.; Lindner, J. (1972). Biegetorsionsprobleme gerader dünnwandiger Stäbe. Berlin: Ernst & Sohn. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9648489952087402, "perplexity": 27541.863478096027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574018.53/warc/CC-MAIN-20190920113425-20190920135425-00515.warc.gz"} |
https://chemistry.stackexchange.com/questions/15690/why-are-dcm-and-chloroform-so-resistant-towards-nucleophilic-substitution | # Why are DCM and chloroform so resistant towards nucleophilic substitution?
In the book Organic Chemistry by J. Clayden, N. Greeves, S. Warren, and P. Wothers I found the following reasoning:
You may have wondered why it is that, while methyl chloride (chloromethane) is a reactive electrophile that takes part readily in substitution reactions, dichloromethane (DCM) is so unreactive that it can be used as a solvent in which substitution reactions of,other alkyl halides take place. You may think that this is a steric effect: Indeed, $$\ce{Cl}$$ is bigger than $$\ce{H}$$. But $$\ce{CH2Cl2}$$ is much less reactive as an electrophile than ethyl chloride or propyl chloride: there must be more to its unreactivity. And there is: Dichloromethane benefits from a sort of 'permanent anomeric effect.' One lone pair of each chlorine is always anti-periplanar to the other $$\ce{C–Cl}$$ bond so that there is always stabilization from this effect.
So, in MO-terms the situation would look something like this.
The reasoning looks plausible to me. The interaction between the free electron pair on $$\ce{Cl}$$ and the $$\sigma^{*}$$ orbital of the neighboring $$\ce{C-Cl}$$ bond, which would be the LUMO of DCM, lowers the energy of the free-electron-pair-orbital, thus stabilizing the compound and it raises the energy of the LUMO, thus making DCM less reactive towards nucleophiles.
But how important is this anomeric effect actually for explaining the unreactiveness of DCM toward nucleophiles, especially compared to the steric effect? And how important is the steric effect actually: Is the steric hindrance exerted by the second $$\ce{Cl}$$ in $$\ce{CH2Cl2}$$ really that much larger than the steric hindrance exerted by the methyl group in $$\ce{CH3CH2Cl}$$ (which is a good electrophile for $$\mathrm{S_N2}$$ reactions)?
I'm a little skeptical because if this anomeric effect was very important I would have expected that the $$\ce{C-Cl}$$ bond length in DCM would be slightly higher than in methyl chloride because there should be some transfer of electron density from the free electron pair into the antibonding $$\sigma^{*}$$ orbital, which should weaken the $$\ce{C-Cl}$$ bond. But the actual bond lengths don't show this. They show rather the contrary:
$$\begin{array}{c|c} \hline \text{Species} & \text{Average }\ce{C-Cl}\text{ bond length / Å} \\ \hline \ce{CH3Cl} & 1.783 \\ \ce{CH2Cl2} & 1.772 \\ \ce{CHCl3} & 1.767 \\ \ce{CCl4} & 1.766 \\ \hline \end{array}$$ (source: Wikipedia)
Now, I know that the stronger polarization of the $$\ce{C}$$ atomic orbitals in di-, tri-, and tetrachlorinated methane as compared to methyl chloride (due to the electronegativity of $$\ce{Cl}$$) should lead to an overall strengthening of the $$\ce{C-Cl}$$ bonds in those compounds. But I would have expected a trend that would show only a slight decrease (or maybe even an increase) of bond length when going from methyl chloride to DCM and then a more pronounced decrease when going from DCM to chloroform followed by a similar decrease when going from chloroform to tetrachloromethane. But instead the polarization effect seems to only slightly increase on adding more chlorine atoms.
• Can you tell which chapter if it's not a problem? – Marko Aug 29 '14 at 20:43
• @Marko It's in the first edition, chapter 42, page 1133. – Philipp Aug 29 '14 at 20:48
• Never mind if one of the starting orbitals was an anti-bonding one. Then electrons aren't put in this anti-bonding orbital, but in the newly formed orbital which has the lowest energy in that system. The new anti-bonding orbital (new LUMO) is very high on the energy scale, but without any electrons in it, no destabilization in the system occurs. – Marko Aug 30 '14 at 12:51
• @Marko But the in-phase combination of the $\sigma^{*}(\ce{C-Cl})$ orbital and the free-electron-pair-orbital will have to some extent a $\ce{C-Cl}$-antibonding character. The amount of $\sigma^{*}(\ce{C-Cl})$-character in the newly formed orbital will of course depend on the energetic difference between the interacting orbitals and their orbital overlap. But if the interaction is strong enough to raise the LUMO enough to make DCM unreactive then I would definitely expect some bond-lengthening effect as well. – Philipp Aug 30 '14 at 12:58
• @Marko, I don't understand what your objection is. I think the claim is that in-phase combination of $\sigma^{*}\ce{(C-Cl)}$ with $n\ce{(e^{-})}$ is energy-lowering in the reactant ground state. The destabilization would occur in the transition state as the electrons of the nucleophile interact with the LUMO (which has been made more energetic by inclusion of the orbital combination described above). – Greg E. Aug 30 '14 at 19:45
# Introduction (and Abstract, TLDR)
In very short words you can say, that the anomeric effect is responsible for the lack of reactiveness. The electronic effect may very well be compensating for the the steric effect, that could come from the methyl moiety. In any way, most of the steric effects can often been seen as electronic effects in disguise.
# Analysis of Molecular Orbitals
I will analyse the bonding picture based on calculations at the density fitted density functional level of theory, with a fairly large basis set: DF-BP86/def2-TZVPP. As model compounds I have chosen chloromethane, dichloromethane, chloroform and chloroethane.
First of all let me state, that the bond lengths are a little larger at this level, however, the general trend for shortening can also be observed. In this sense, chloroethane behaves like chloromethane. An attempt to explain this will be given at the end of this article.
\begin{array}{lr}\hline \text{Compound} & \mathbf{d}(\ce{C-Cl})\\\hline \ce{ClCH3} & 1.797\\ \ce{Cl2CH2} & 1.786\\ \ce{Cl3CH} & 1.783\\\hline \ce{ClCH2CH3} & 1.797\\\hline \end{array}
In the canonical bonding picture, it is fairly obvious, that the electronic effects dominate and are responsible for the lack of reactivity. In other words, the lowest unoccupied molecular orbital is very well delocalised in the dichloromethane and chloroform case. This is effectively leaving no angle to attack the antibonding orbitals.
In the mono substituted cases there is a large coefficient at the carbon, where a nucleophile can readily attack.
One can also analyse the bonding situation in terms of localised orbitals. Here I make use of a Natural Bond Orbital (NBO) analysis, that transforms the canonical orbitals into hybrid orbitals, which all have an occupation of about two electrons. Due to the nature of the approach, it is no longer possible to speak of HOMO or LUMO, when analysing the orbitals. Due to the nature of the calculations, i.e. there are polarisation functions, the values do not necessarily add up to 100%. The deviation is so small, that it can be omitted.
The following table shows the composition (in $\%$)of the carbon chloro bond and anti bond. \begin{array}{lrr}\hline \text{Compound} &\sigma-\ce{C-Cl} & \sigma^*-\ce{C-Cl}\\\hline \ce{ClCH3} & 45\ce{C}(21s79p) 55\ce{Cl}(14s85p) & 55\ce{C}(21s79p) 45\ce{Cl}(14s85p)\\ \ce{Cl2CH2} & 46\ce{C}(22s77p) 54\ce{Cl}(14s85p) & 54\ce{C}(22s77p) 46\ce{Cl}(14s85p)\\ \ce{Cl3CH} & 48\ce{C}(24s76p) 52\ce{Cl}(14s86p) & 52\ce{C}(24s76p) 48\ce{Cl}(14s86p)\\\hline \ce{ClCH2CH3} & 44\ce{C}(19s81p) 56\ce{Cl}(14s85p) & 56\ce{C}(19s81p) 44\ce{Cl}(14s85p)\\\hline \end{array} As we go from mono- to di- to trisubstituted methane, the carbon contribution increases slightly, along with the percentage of $s$ character. More $s$ character usually means also a stronger bond, which often results in a shorter bond distance. Of course, delocalization will have a similar effect on its own.
The reason, why dichloromethane and chloroform are fairly unreactive versus nucleophiles, has already been pointed out in terms of localised bonding. But we can have a look at these orbitals as well.
In the case of chloromethane, the LUMO has more or less the same scope of the canonical orbital, with the highest contribution from the carbon. If we compare this antibonding orbital to an analogous orbital in dichloromethane or chloroform, we can expect the same form. We soon run into trouble, because of the localised $p$ lone pairs of chlorine. Not necessarily overlapping, but certainly in the way of the "backside" of the bonding orbital. In the case of chloroethane we can observe hyperconjugation. However, this effect is probably less strong, and from the canonical bonding picture we could also assume, that this increases the polarisation of the antibonding orbital in favour of carbon.
In the following pictures, occupied orbitals are coloured red and yellow, while virtual orbitals are coloured purple and orange.
(Note that in chloroform two lone pair orbitals are shown.)
# Conclusion
Even though this article does not use the Valence Bond Approach, one can clearly see the qualitative manifestation of Bent's Rule (compare also: Utility of Bent's Rule - What can Bent's rule explain that other qualitative considerations cannot?). A higher $s$ character means a shorter bond. The lack of reactivity towards nucleophiles can be explained electronically with a delocalised LUMO. In terms of localised bonding, the lone pairs of any additional chlorine atom would provide sufficient electron density, to shield the backside attack on the carbon.
• Awesome answer, thank you very much. But since I'm not very familiar with NBO could you maybe explain why you say that the localised $\mathrm{p}$ lone pairs of chlorine are "not necessarily overlapping" with the (anti-)bonding orbital of the $\ce{C-Cl}$ bond? Because looking at the form of the orbitals it look to me that they could overlap quite well, but then again, as I said, I don't know much about NBO. Also would you share your opinion on my own answer which is mostly concerned with solving the apparent contradiction concerning the the bond lengths? Do you think the reasoning is sound? – Philipp Sep 1 '14 at 17:06
• NBO transforms canonical orbitals into localised orbitals, i.e. it uses linear combinations to form orbitals that are concentrated to a certain spatial area. Symmetry constraints are lifted and these orbitals are no eigenfunctions of the Schrödinger equation. The canonical orbitals are already the optimal solution. With "not necessarily overlapping" I mean, that in the ground state you have two entirely separate orbitals. They do not form new orbitals. They could interact with each other when an external potential would necessitate this. – Martin - マーチン Sep 2 '14 at 3:42
But the main question remains: Is the anomeric effect the main cause for the unreactiveness of DCM towards nucleophiles or is it the steric hinderance exerted by the second Cl ?
Assessment of Steric Effects
Bonds, like humans, have a length and a girth. In some reactions, the reaction is most likely to occur when the attacking group strikes an atom or substituent "head on". In other reactions, like the $\ce{S_{N}2}$, the attacking group approaches from the back side and slips by the various substituents in a side-ways manner as it approaches the carbon under attack.
There is a nice way to measure this side-ways (girth of a bond, if you will) interference in organic systems. Axial substituents in a cyclohexane ring are subject to what is termed "A-strain" produced by the interactions depicted in the figure below. These interactions are very analogous to the gauche interactions
in butane and arise from side-ways interaction of the substituent and the two diaxial hydrogens located two carbons away. Equatorial substituents avoid these repulsive interactions. The smaller the substituent, the less repulsive the 1,3 interaction will be and, at equilibrium, more of the substituent should exist in the axial position. Conversely, the larger the substituent, more of it should reside in the equatorial position. By measuring the axial/equatorial ratio for various substituents we can meaningfully assess the side-ways steric bulk of the substituent.
For chlorocyclohexane the axial/equatorial ratio is ca. 31/69 ($\ce{\Delta G = 0$.$48 kcal/mol}$) at room temperature. For methylcyclohexane the axial/equatorial ratio is ca. 5/95 ($\ce{\Delta G = 0$.$1.7 kcal/mol}$) at room temperature. This analysis clearly suggests that the chloro substituent is "smaller" than a methyl substituent when we use this side-ways steric probe.
For the interested reader, here is a Table of axial/equatorial $\ce{\Delta G's}$ (in kcal/mol) for other substituents.
Since this analysis suggests that a chlorine substituent is smaller than a methyl substituent, in terms of the side-ways interaction expected in a back side $\ce{S_{N}2}$ attack, it would be difficult to use a steric argument to explain the reduced reactivity in the polyhalomethanes towards $\ce{S_{N}2}$ reaction. This leaves the anomeric effect as a reasonable explanation for the reduced reactivity of the polyhalomethanes. But...
Other Thoughts
• How unreactive are these compounds to $\ce{S_{N}2}$ reaction? Dichloromethane, chloroform and carbon tetrachloride will all undergo the $\ce{S_{N}2}$ reaction with various nucleophiles, under various conditions. Before I sign up for the anomeric effect explanation, it would be nice to know exactly what reaction Clayden, et. al. were discussing. Perhaps some other factor (hydrogen bonding, who knows what) suppressed the nucleophile's reactivity in Clayden's series of compounds.
• Bond strengths: $\ce{C-Cl}$ bond strengths decrease from chloromethane (80 kcal/mol) to carbon tetrachloride (70 kcal/mol). This would cause us to suspect the chloromethane might react the slowest and carbon tetrachloride the fastest. That this is not the case is another argument in support of the anomeric effect playing a key role.
• Thanks for bringing the A-strain into the discussion and explaining it so nicely. As for Clayden et al.: You can find the passage on p. 1133 of the book (first ed.) if you want to read it. They discuss this anomeric effect in the context of heterocycles and bring in some unusual examples such as that of DCM. The statement about the unreactiveness of DCM is made without reference to specific reactions. But they also write in a small box: "Dichloromethane will react as an electrophile, but it needs a very powerful nucleophile and long reaction times" and give the following example reaction... – Philipp Sep 1 '14 at 18:30
• ... $\ce{PhSNa ->[\substack{\ce{CH2Cl2}\\ \text{as solvent}} ][\text{several days}] (PhS)2CH2}$ – Philipp Sep 1 '14 at 18:30
• I really dislike the concept of strain. If you put a hydrogen and a chlorine atom in close proximity then this would be primarily an attractive interaction (dipole-dipole and electrostatics and orbitals). The same applies to other substituents. Only if you put them too close the nuclei start to repel each other (but this only starts when it is significantly shorter than the covalent radii). I would expect that hyperconjugation has a much larger effect in stabilising the equatorial position, than stabilising the axial position of the chlorine. – Martin - マーチン Sep 2 '14 at 6:31
• On a completely unrelated matter: Why do you put pictures in the middle of a sentence? I always have the feeling there is missing something... – Martin - マーチン Sep 2 '14 at 6:43
• @Martin As far as I know the origin of the strain is the interaction of filled orbitals on the two neighboring atoms. Only the axial bonds are in the right position to let the bond orbitals overlap appreciably. So, in an axial position of a cyclohexylchloride ring you have the $\sigma(\ce{C-Cl})$ orbital interacting with a neighboring axial $\sigma(\ce{C-H})$ orbital. As both orbitals are filled you get a 2-center-4-electron interaction which will be destabilizing (because the out-of-phase combination will more destabilized than the in-phase combination is stabilized) as it raises the energy. – Philipp Sep 2 '14 at 19:34
Thinking about it some more made me realize how the problem with the bond lenghts might be resolved. Let's look at the orbital picture again:
I still think, that the in-phase combination of the $\sigma^{∗}(\ce{C-Cl})$ orbital and the free-electron-pair-orbital $n(\ce{e-})$ will have to some extent - depending on the energetic difference between the interacting orbitals and their orbital overlap - an $\ce{C-Cl}$-antibonding character, thus weakening the blue $\ce{C-Cl}$ bond (a simplified view would be, that on interaction electron density flows from the free electron pair into antibonding $\sigma^{∗}(\ce{C-Cl})$ orbital and more electron density in an antibonding orbital mean a weakening of the corresponding bond). But my mistake was not to also consider the $\pi$-bonding interaction between the red $\ce{Cl}$ and $\ce{C}$ in the in-phase combination. This $\pi$-bonding interaction will of course strengthen the red $\ce{C-Cl}$ bond. So resulting from this one interaction pictured above there should be a bond-shortening of the red $\ce{C-Cl}$ bond and a bond-lengthening of the blue $\ce{C-Cl}$ bond.
But the anomeric effect works not only in one direction in DCM (or Chloroform). The blue $\ce{Cl}$ atom has free electron pairs, too. And there is also a $\sigma^{∗}(\ce{C-Cl})$ orbital for the red bond that can interact with one of those free electron pairs in exactly the same way described before. Now, this interaction will strengthen the blue $\ce{C-Cl}$ bond while weakening the red $\ce{C-Cl}$ bond. So, it counteracts the effects of the interaction presented above and the bond-shortening and bond-lengthening effects should cancel each other out to some extent. But the raising of the LUMO, which makes DCM so unreactive remains unchanged.
Now, the only thing that needs some thinking is: Do the $\pi$-bonding and the $\sigma$-antibonding in the in-phase combination have a comparable magnitude? This question I can't answer in any quantitative way. So I will only list some factors of influence here. The $\ce{C-Cl}$ bond is rather long. This means the $\pi$-overlap won't be optimal. And the $n(\ce{e-})$ orbital will lie somewhat lower in energy than the $\sigma^{∗}(\ce{C-Cl})$ orbital thus weakening the covalency of the $\pi$-interaction. On the other hand, the same factors will also lower the amount of $\sigma^{∗}(\ce{C-Cl})$-character contained in the in-phase combination MO. All this leaves me with the gut feeling that the $\pi$-bonding might indeed be comparable to or even a little stronger than the $\sigma$-antibonding effect. And this would mean that the observed bond lengths are in accord with the orbital situation and the anomeric effect might very well be quite important for explaining the unreactiveness of DCM towards nucleophiles.
But the main question remains: Is the anomeric effect the main cause for the unreactiveness of DCM towards nucleophiles or is it the steric hinderance exerted by the second $\ce{Cl}$?
Maybe one could go about it by employing the van der Waals radii of a chlorine atom and a methyl group. According to Wikipedia, the radius of a chlorine atom is 175 pm while the radius of a carbon atom (without any hydrogen atoms around) is 170 pm (so it should be safe to say that the radius of a methyl group is higher than that). Here is a picture of the van der Waals surface of methyl chloride created with Avogadro
which seems to support the view that the spacial requirements of a chlorine atom and a methyl group are rather similar (but I don't know how reliable the van der Waals surface calculations by Avogadro actually are). If those lengths mirror the real spacial requirements of the groups in the context of a nucleophilic substitution reaction then a chlorine atom wouldn't exert a larger steric hinderance than a methyl group and the anomeric effect would clearly be the determining factor when explaining the unreactiveness of DCM. But I'm unsure whether resorting to van der Waals radii is justified.
• Your qualitative analysis is sound. A (partial) $\pi$ contribution would strengthen the one bond while weaken the other. But you are interpreting only a concept here, something that is not there. Since you are arguing with orbitals that cannot be assigned energies, magnitudes would have no meaning. The anomeric effect is therefore a concept, that allows us to understand delocalisation in terms of a much simpler concept - it is not the true reason. – Martin - マーチン Sep 2 '14 at 4:01
• @Martin Thanks for the comment. Your point is well taken. But as for "... something that is not there. Since you are arguing with orbitals that cannot be assigned energies, magnitudes would have no meaning": Are you sure that there is not a little more reality in there than you give it credit. I mean, the procedure of looking at the interaction of the $n(\ce{e-})$ and the $\sigma^{*}(\ce{C-Cl})$ orbitals is basically the usage of perturbational MO-theory on the problem of what happens if I take a $\ce{{}^{+}CH2-Cl}$ fragment and let it interact with a $\ce{Cl-}$ fragment... – Philipp Sep 2 '14 at 19:21
• ...of course this treatment is far from being exact and additional symmetry consideration would have to be made but in principle I think it should be possible to assign energies to the fragment orbitals used here (although I would never go so far as to really do that with any accuracy as MO-theory usually serves only a qualitative purpose). – Philipp Sep 2 '14 at 19:25
• @Philipp There are many schemes that decompose energy based on orbital interactions, but they all work in the same way, i.e. using the original AO. I guess you can go ahead and do a full Valence Bond Theory approach, to find out which resonance structure contributes how much to the total structure and then you get an idea of how strong the $\pi$ and $\sigma$ interactions are - then this approach would be (at least semi) quantitative. The unfortunate thing is, that you can only observe the total bonding directly and not the singular contributions. – Martin - マーチン Sep 3 '14 at 2:02
• Thinking about it a little more, the EDA-NOCV may give you exactly what you are looking for. (I do not have access to this program, so I cannot help out here.) I would suspect your analysis would be correct. On the other hand, I have grown quite fond of the simplicity of Bent's rule, which explains the shortening of the bonds. So maybe there is no use overanalyzing it. As for many things, there is no definite truth when it comes to interpretation, we always put some of our own to any analysis. Your analysis is well founded and (from my point of view) a valid approach to this problem. – Martin - マーチン Sep 3 '14 at 2:11
The anomeric effect where σ(C−C) orbital overlaps with the σ∗(C−Cl) should be considerably less than the overlap of n(e−) and σ∗(C−Cl). Therefore by comparing the relative rates of substitution of DCM, and a primary alkyl chloride, whose alkyl group is similar in size as the chlorine atom, against MeCl should give us the answer. I would be thankful if someone could provide the data.
• Actually, I would say it is even justified to say that there can't be anything like the anomeric effect present in alkylchlorides. A $\sigma(\ce{C-C})$/$\sigma(\ce{C-H})$ orbital is much too low in energy compared to a $\sigma^{*}(\ce{C-Cl})$ orbital to show any appreciable interaction. As for the test you suggested: I think it will be hard to say whether a primary alkyl group/methyl group and a chlorine atoms exert a similar steric effect in the context of a nucleophilic substitution reaction. – Philipp Aug 31 '14 at 17:03
• Maybe one could go about it by employing the van der Waals radii of a chlorine atom and a methyl group. According to Wikipedia, the radius of a chlorine atom is 175 pm while the radius of a carbon atom (without any hydrogen atoms around) is 170 pm. If those lengths mirror the real spacial requirements of the groups in the context of a nucleophilic substitution reaction then a chlorine atom wouldn't exert a larger steric hinderance compared to a methyl group and the anomeric effect would clearly be the determining factor when explaining the unreactiveness of DCM. – Philipp Aug 31 '14 at 17:28
• The A-values for methyl and chloro are 1.7 and 0.4 respectively, suggesting that methyl is sterically larger than chloro. – jerepierre Aug 31 '14 at 18:56
• @jerepierre I am not sure if it is valid to apply the A-values to these acyclic molecules, as A-values are derived from cyclohexane conformations. – Marko Sep 1 '14 at 9:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 18, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.713774561882019, "perplexity": 1346.0154003178639}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257731.70/warc/CC-MAIN-20190524184553-20190524210553-00503.warc.gz"} |
https://nrich.maths.org/6401/solution | ### N000ughty Thoughts
Factorial one hundred (written 100!) has 24 noughts when written in full and that 1000! has 249 noughts? Convince yourself that the above is true. Perhaps your methodology will help you find the number of noughts in 10 000! and 100 000! or even 1 000 000!
### Mod 3
Prove that if a^2+b^2 is a multiple of 3 then both a and b are multiples of 3.
### Novemberish
a) A four digit number (in base 10) aabb is a perfect square. Discuss ways of systematically finding this number. (b) Prove that 11^{10}-1 is divisible by 100.
##### Stage: 3 and 4 Challenge Level:
We received lots of good solutions to this problem. Thanks to everyone who submitted a solution! Unfortunately there were so many we can't mention you all by name. A special well done to the pupils of Beaconsfield High School for all their great solutions - we're glad you enjoyed the problem so much!
Here is a really nice solution submitted by Oliver from Loreto College:
Assuming n is an integer, there are no values of n so that $2^n$ is a multiple of 10 because $2^n$ doesn't contain any necessary factors of 5.
The unit digits of $2^n$ for n=1,2,3... are 2, 4, 8, 6 then repeat. For $3^n$ they go 3, 9, 7, 1 then repeat. If n is odd, the units that are being added are either 2 + 3 or 7 + 8, which both end in 5. So $2^n + 3^n$ where n is odd always ends in 5. This is a stronger conclusion than saying 'it's a multiple of 5' as a multiple of 5 can also end in 0.
If n is a multiple of 4 the units being added are 6 + 1 so in this case it will always end in 7.
$1^n + 2^n + 3^n$ is even for all values of n. This is obvious because $1^n$ and $3^n$ are always odd and $2^n$ is always even, and odd + odd + even =even.
$1^n + 2^n + 3^n + 4^n$ is a multiple of 10 for when 4 does not divide n:
The unit digits of the powers of 1, 2, 3 and 4 are:
x^1 x^2 x^3 x^4 1 1 1 1 2 4 8 6 3 9 7 1 4 6 4 6
Summing down the columns gives 10, 20, 20 and 14. This shows that when n is not divisible by 4, the last digit of $1^n + 2^n + 3^n + 4^n$ is 0 so it's a multiple of 10. (When n is divisible by 4, the last digit is 4).
$1^n + 2^n + 3^n + 4^n + 5^n$ ends in 5 for n not divisible by 4. This is obvious if we consider the previous result because $5^n$ ends in 5 for all n, so adding this to the multiples of 10 will give a final digit of 5.
Ryan from Renaissance College Hong Kong had another good explanation for why $2^n$ cannot be a multiple of 10:
Power of 2 Answer Units Digit 1 2 2 2 4 4 3 8 8 4 16 6 5 32 2 6 64 4 7 128 8 8 256 6
For which values of n will $2^n$ be a multiple of 10?
As can be seen from the following table, the unit digits of the powers of
two are in a repetitious pattern of 2, 4, 8, 6, 2, 4, 8, 6…
All multiples of 10 have a unit digit of 0. However, as seen from the
pattern, none of the powers of 2 have their unit digit ending in a 0.
Therefore, no power of 2 is a multiple of 10.
Alexander from University College School noticed the following about the extension questions:
If the power n in $4^n + 5^n + 6^n$ is an odd number, then the last digit of this sum is 5; if the power n is even then the last digit of the sum is 7.
$3^n + 8^n$: For consecutive values of n (n = 1, 2, 3, …) the last digit of the sum goes in a pattern of 1, 3, 9, 7.
$2^n + 4^n + 6^n$: For consecutive values of n (n = 1, 2, 3, …) the last digit of the sum goes in a pattern of 2, 6, 8, 8.
$3^n +5^n +7^n$: For consecutive values of n (n = 1, 2, 3, …) the last digit of the sum goes in a pattern of 5, 3, 5, 7.
$3^n-2^n$: For consecutive values of n (n = 1, 2, 3, …) the last digit of the sum goes in a pattern of 1, 5, 9, 5.
Can you explain why this might be true?
Ryan also had some interesting ideas about how he could extend his results:
The unit digit of $1^n$ is 1, 1, 1, 1…
The unit digit of $2^n$ is 2, 4, 8, 6, 2, 4, 8, 6,...
The unit digit of $3^n$ is 3, 9, 7, 1, 4, 9, 7, 1,...
The unit digit of $4^n$ is 4, 6, 4, 6…
The unit digit of $5^n$ is 5, 5, 5, 5…
The unit digit of $6^n$ is 6, 6, 6, 6…
The unit digit of $7^n$ is 7, 9, 3, 1, 7, 9, 3, 1,...
The unit digit of $8^n$ is 8, 4, 2, 6, 8, 4, 2, 6,...
The unit digit of $9^n$ is 9, 1, 9, 1…
The unit digit of $10^n$ is 0, 0, 0, 0…
Thus the unit digits of the powers of any number repeat in a cycle four digits long, as all numbers end with one of the above 10 digits.Let the pattern of the unit digits of the powers of x be a, b, c, d... and let The pattern of the unit digits of the powers of another number y be e, f, g, h…
As a result, the sum/difference of the unit digits of the powers of x and y are in a repeating pattern of: a + e, b + f, c + g, and d + h, or a - e, b - f, c - g and d – h.
As a result, no matter what combination of powers, adding or subtracting, the unit digits are always in a pattern of 4 repeating numbers.
Well done everyone! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5471634268760681, "perplexity": 250.6001355418168}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469258948335.92/warc/CC-MAIN-20160723072908-00323-ip-10-185-27-174.ec2.internal.warc.gz"} |
https://brilliant.org/discussions/thread/xyz-its-easy-as-abc-d/ | ×
# XYZ! It's easy as abc! :D
Let $$x, y, z$$ be positive numbers such that:
$$(1)$$ $$x = \frac{a}{a+b}$$
$$(2)$$ $$y = \frac{b}{b+c}$$
$$(3)$$ $$z = \frac{c}{c+a}$$
Prove the following:
$$x+y+z > 1$$
Note by Thomas Kim
3 years, 1 month ago
Sort by:
Hint: fund $$\frac{1}{x}$$ and similar, apply AM-HM on $$x,y,z$$, and then try to simplify · 3 years, 1 month ago | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9754056334495544, "perplexity": 5620.327511502527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424756.92/warc/CC-MAIN-20170724062304-20170724082304-00110.warc.gz"} |
http://paperity.org/search/?q=authors%3A%22Qiang+Zhao%22 | # Search: authors:"Qiang Zhao"
46 papers found.
Use AND, OR, NOT, +word, -word, "long phrase", (parentheses) to fine-tune your search.
#### Whole genome sequence of the Treponema pallidum subsp. pallidum strain Amoy: An Asian isolate highly similar to SS14
: Man-Li Tong, Jian-Jun Niu, Zhi-Liang Ji, Tian-Ci Yang. Data curation: Qiang Zhao, Li-Li Liu, Hui-Lin Zhang. Formal analysis: Qiang Zhao, Xiao-Zhen Zhu, Kun Gao. Funding acquisition: Man-Li Tong, Li ... -Li Liu, Li-Rong Lin, Jian-Jun Niu, Tian-Ci Yang. Investigation: Li-Li Liu. Methodology: Qiang Zhao, Kun Gao, Hui-Lin Zhang, Li-Rong Lin. Project administration: Tian-Ci Yang. Resources: Xiao-Zhen
#### Open charm contributions to the E1 transitions of $\psi (3686)$ and $\psi (3770)\rightarrow \gamma \chi _{cJ}$
The E1 transitions of $\psi (3686)$ and $\psi (3770)\rightarrow \gamma \chi _{cJ}$ are investigated in a non-relativistic effective field theory (NREFT) where the open charm effects are included systematically as the leading corrections. It also allows a self-consistent inclusion of the S–D mixing in the same framework. We are able to show that the open charm contributions are ...
#### MicroRNA-32 promotes calcification in vascular smooth muscle cells: Implications as a novel marker for coronary artery calcification
Cardiovascular calcification is one of the most severe outcomes associated with cardiovascular disease and often results in significant morbidity and mortality. Previous reports indicated that epigenomic regulation of microRNAs (miRNAs) might play important roles in vascular smooth muscle cell (VSMC) calcification. Here, we identified potential key miRNAs involved in vascular ...
#### Mid-term results of coronary bypass graft surgery in patients with ischaemic left ventricular systolic dysfunction and no detected myocardial viability
OBJECTIVES There are concerns about effects of surgical revascularization on patients with ischaemic systolic dysfunction when no signs of myocardial viability have been detected by nuclear imaging preoperatively. We reviewed our data to determine the efficacy of coronary bypass graft in this special patient cohort.
#### Rapid Detection of Ricin in Serum Based on Cu-Chelated Magnetic Beads Using Mass Spectrometry
residue present at the 4324 position from a highly conserved loop region of 28S rRNA. This activity prevents Yong-Qiang Zhao and Jian Song contributed equally to this work. the formation of the critical
#### Association of AT1R polymorphism with hypertension risk: An update meta-analysis based on 28,952 subjects
Background: Previous studies have shown that angiotensin II AT1 receptor gene (AT1R) polymorphisms are associated with the risk for hypertension. However, the results remain controversial. In the present study, we performed a meta-analysis to systematically summarize the association between AT1R genetic polymorphisms and the risk for hypertension.
#### Molecular diagnosis and comprehensive treatment of multiple endocrine neoplasia type 2 in Southeastern Chinese
Background Multiple endocrine neoplasia type 2 (MEN2) is an autosomal dominant inherited endocrine malignancy syndrome. Early and normative surgery is the only curative method for MEN 2-related medullary thyroid carcinoma (MTC). In patients with adrenal pheochromocytoma, cortical-sparing adrenalectomy (CSA) can be utilized to preserve adrenocortical function. Methods We present ...
#### Fine mapping of a large-effect QTL conferring Fusarium crown rot resistance on the long arm of chromosome 3B in hexaploid wheat
Background Fusarium crown rot (FCR) is a major cereal disease in semi-arid areas worldwide. Of the various QTL reported, the one on chromosome arm 3BL (Qcrs.cpi-3B) has the largest effect that can be consistently detected in different genetic backgrounds. Nine sets of near isogenic lines (NILs) for this locus were made available in a previous study. To identify markers that could ...
#### Outcomes of Technical Variant Liver Transplantation versus Whole Liver Transplantation for Pediatric Patients: A Meta-Analysis
Objective To overcome the shortage of appropriate-sized whole liver grafts for children, technical variant liver transplantation has been practiced for decades. We perform a meta-analysis to compare the survival rates and incidence of surgical complications between pediatric whole liver transplantation and technical variant liver transplantation. Methods To identify relevant ...
#### Over-Expression of a Tobacco Nitrate Reductase Gene in Wheat (Triticum aestivum L.) Increases Seed Protein Content and Weight without Augmenting Nitrogen Supplying
Xiao-Qiang Zhao 0 Xuan-Li Nie 0 Xing-Guo Xiao 0 Haibing Yang, Purdue University, United States of America 0 State Key Laboratory of Plant Physiology and Biochemistry, College of Biological Sciences
#### Meta-Analysis of Apolipoprotein E Gene Polymorphism and Susceptibility of Myocardial Infarction
A number of case-control studies have been conducted to clarify the association between ApoE polymorphisms and myocardial infarction (MI); however, the results are inconsistent. This meta-analysis was performed to clarify this issue using all the available evidence. Searching in PubMed retrieved all eligible articles. A total of 33 studies were included in this meta-analysis, ...
#### Prognostic analysis of esophageal cancer in elderly patients: metastatic lymph node ratio versus 2010 AJCC classification by lymph nodes
Background Recent studies have proposed a new prognostic factor (metastatic lymph node ratio, or MLNR) for patients with esophageal cancer (EC). However, to the best of our knowledge, there have been no studies conducted to date regarding MLNR in elderly patients. The aim of this study was to determine the prognostic value of MLNR staging compared with the 2010 American Joint ...
#### Tantalum Nitride-Decorated Titanium with Enhanced Resistance to Microbiologically Induced Corrosion and Mechanical Property for Dental Application
Microbiologically induced corrosion (MIC) of metallic devices/implants in the oral region is one major cause of implant failure and metal allergy in patients. Therefore, it is crucial to develop practical approaches which can effectively prevent MIC for broad clinical applications of these materials. In the present work, tantalum nitride (TaN)-decorated titanium with promoted ...
#### Hunting for a scalar glueball in exclusive B decays
Although glueballs, as one of the type of exotic hadrons allowed by QCD, have been well established on the lattice, experimental searches up to this date for bound states of gluons have only produced controversial signals. In this work, using flavor SU(3) symmetry for the light quarks validated by the available experimental data, we propose an intuitive way to hunt for a scalar ...
#### Application of temporarily functional antibiotic-containing bone cement prosthesis in revision hip arthroplasty
Purpose To investigate the clinical outcome of two-stage revision total hip arthroplasty for infected hip arthroplasty using antibiotic-impregnated cement prosthesis. Materials and methods Forty-one patients, who suffered from an infection after hip replacement or internal fixation of femoral neck and trochanteric fractures, were treated with a two-stage revision hip arthroplasty ...
#### MicroRNA-92a Inhibition Attenuates Hypoxia/Reoxygenation-Induced Myocardiocyte Apoptosis by Targeting Smad7
Zhao 0 Yao Liang Tang, Georgia Regents University, United States of America 0 Department of Cardiac Surgery, Ruijin Hospital, Shanghai Jiaotong University School of Medicine , Shanghai , China
#### Bone Selective Protective Effect of a Novel Bone-seeking Estrogen on Trabecular Bone in Ovariectomized Rats
The drawbacks of estrogen restrict the clinical use of hormone replacement therapy, and it would be most helpful to explore new estrogenic substances that could prevent bone loss and be free from any adverse effects. We synthesized a new compound named bone-seeking estrogen (SE2) by combining 17β-estradiol (E2) with iminodiacetic acid through the Mannich reaction. E2 and SE2 were ...
#### Highly Oxygenated Limonoids and Lignans from Phyllanthus flexuosus
the known lignan glycoside, phyllanthusmin C, with the IC50 values of 11.5 (1), 8.5 (2), and 7.8 (phyllanthusmin C) lM, respectively. glycosides - Jian-Qiang Zhao and Yan-Ming Wang contributed
#### Five new sucrose esters from the whole plants of Phyllanthus cochinchinensis
Chemical investigation of the whole plants of Phyllanthus cochinchinensis (Euphorbiaceae) led to the isolation of five new sucrose benzoyl esters, 3,6′-di-O-benzoylsucrose (1), 3,6′-di-O-benzoyl-2′-O-acetylsucrose (2), 3,6′-di-O-benzoyl-4′-Oacetylsucrose (3), 3,6′-di-O-benzoyl-3′-O-acetylsucrose (4) and 3-O-benzoyl-6′-O-(E)-cinnamoylsucrose (5), together with two known secoiridoid ... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23820935189723969, "perplexity": 17149.892740689163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814002.69/warc/CC-MAIN-20180222041853-20180222061853-00240.warc.gz"} |
https://brilliant.org/problems/verschlimmbesserung/ | # Abc
Algebra Level 2
True or false:
$$\quad$$ For real numbers $$a$$, $$b$$ and $$c$$, if $$ab=ac$$, then $$b=c$$.
× | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9820920825004578, "perplexity": 2828.5151275374305}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608652.65/warc/CC-MAIN-20170526090406-20170526110406-00140.warc.gz"} |
http://umj.imath.kiev.ua/volumes/issues/?lang=en&year=2004&number=3 | 2017
Том 69
№ 12
# Volume 56, № 3, 2004
Anniversaries (Ukrainian)
### Dmytro Yakovych Petryna (on his 70 th birthday)
Ukr. Mat. Zh. - 2004. - 56, № 3. - pp. 291-292
Article (Ukrainian)
### Creative Contribution of D. Ya. Petrina to the Development of Contemporary Mathematical Physics
Ukr. Mat. Zh. - 2004. - 56, № 3. - pp. 293-308
This is a brief survey of the results obtained by Prof. D. Ya. Petrina in various branches of contemporary mathematical physics.
Article (English)
### BCS Model Hamiltonian of the Theory of Superconductivity as a Quadratic Form
Ukr. Mat. Zh. - 2004. - 56, № 3. - pp. 309-338
Bogolyubov proved that the average energies (per unit volume) of the ground states for the BCS Hamiltonian and the approximating Hamiltonian asymptotically coincide in the thermodynamic limit. In the present paper, we show that this result is also true for all excited states. We also establish that, in the thermodynamic limit, the BCS Hamiltonian and the approximating Hamiltonian asymptotically coincide as quadratic forms.
Article (Russian)
### On the Optimal Coefficient of Efficiency of a Semi-Markov System in the Scheme of Phase Lumping
Ukr. Mat. Zh. - 2004. - 56, № 3. - pp. 339-345
By using methods of the theory of semi-Markov processes, we analyze the problem of detecting signals in a multichannel system. We construct an optimal strategy for the motion of a search device in a multichannel system and obtain the corresponding estimate for the search efficiency.
Article (Ukrainian)
### On Permutable Congruences on Antigroups of Finite Rank
Ukr. Mat. Zh. - 2004. - 56, № 3. - pp. 346-351
We find necessary and sufficient conditions for any two congruences on an antigroup of finite rank to be permutable.
Article (Ukrainian)
### Coconvex Approximation of Functions with More than One Inflection Point
Ukr. Mat. Zh. - 2004. - 56, № 3. - pp. 352-365
Assume that fC[−1, 1] belongs to C[−1, 1] and changes its convexity at s > 1 different points y i, $\overline {1,s}$ , from (−1, 1). For nN, n ≥ 2, we construct an algebraic polynomial P n of order ≤ n that changes its convexity at the same points y i as f and is such that $$|f(x) - P_n (x)|\;\; \leqslant \;\;C(Y)\omega _3 \left( {f;\frac{1}{{n^2 }} + \frac{{\sqrt {1 - x^2 } }}{n}} \right),\;\;\;\;\;x\;\; \in \;\;[ - 1,\;1],$$ where ω3(f; t) is the third modulus of continuity of the function f and C(Y) is a constant that depends only on $\mathop {\min }\limits_{i = 0,...,s} \left| {y_i - y_{i + 1} } \right|,\;\;y_0 = 1,\;\;y_{s + 1} = - 1$ , y 0 = 1, y s + 1 = −1.
Article (Russian)
### Nevanlinna–Pick Problem for Stieltjes Matrix Functions
Ukr. Mat. Zh. - 2004. - 56, № 3. - pp. 366-380
We consider the Nevanlinna–Pick interpolation problem for Stieltjes matrix functions. We obtain two criteria for the indeterminacy of the Nevanlinna–Pick problem with infinitely many interpolation nodes. In the indeterminate case, we describe the general solution of the Nevanlinna–Pick problem in terms of fractional-linear transformations.
Article (Russian)
### Boundary Functionals of a Semicontinuous Process with Independent Increments on an Interval
Ukr. Mat. Zh. - 2004. - 56, № 3. - pp. 381-398
We investigate boundary functionals of a semicontinuous process with independent increments on an interval with two reflecting boundaries. We determine the transition and ergodic distributions of the process, as well as the distributions of boundary functionals of the process, namely, the time of first hitting the upper (lower) boundary, the number of hittings of the boundaries, the number of intersections of the interval, and the total sojourn time of the process on the boundaries and inside the interval. We also present a limit theorem for the ergodic distribution of the process and asymptotic formulas for the mean values of the distributions considered.
Article (Ukrainian)
### Euler Approximations of Solutions of Abstract Equations and Their Applications in the Theory of Semigroups
Ukr. Mat. Zh. - 2004. - 56, № 3. - pp. 399-410
Using the Euler approximations of solutions of abstract differential equations, we obtain new approximation formulas for C 0-semigroups and evolution operators.
Article (Russian)
### On the Discreteness of the Structural Space of Weakly Completely Continuous Banach Algebras
Ukr. Mat. Zh. - 2004. - 56, № 3. - pp. 411-418
We consider a class of Banach algebras with irreducible finite-dimensional representations and prove that, for amenable Banach algebras from this class, the weak complete continuity implies the discreteness of their structural space.
Article (Ukrainian)
### On the Decomposition of an Operator into a Sum of Four Idempotents
Ukr. Mat. Zh. - 2004. - 56, № 3. - pp. 419-424
We prove that operators of the form (2 ± 2/n)I + K are decomposable into a sum of four idempotents for integer n > 1 if there exists the decomposition K = K 1K 2 ⊕ ... ⊕ K n, $\sum\nolimits_1^n {K_i = 0}$ , of a compact operator K. We show that the decomposition of the compact operator 4I + K or the operator K into a sum of four idempotents can exist if K is finite-dimensional. If n tr K is a sufficiently large (or sufficiently small) integer and K is finite-dimensional, then the operator (2 − 2/n)I + K [or (2 + 2/n)I + K] is a sum of four idempotents.
Brief Communications (Ukrainian)
### Interpolation Sequences for the Class of Functions of Finite η-Type Analytic in the Unit Disk
Ukr. Mat. Zh. - 2004. - 56, № 3. - pp. 425-430
We establish conditions for the existence of a solution of the interpolation problem f n ) = b n in the class of functions f analytic in the unit disk and such that $$\left( {\exists \;c_1 > 0} \right)\;\left( {\forall z,\;|\;z\;| < 1} \right):\;\;\left| {f\left( z \right)} \right|\;\; \leqslant \;\;\;\exp \left( {c_1 \eta \left( {\frac{{c_1 }}{{1 - \left| z \right|}}} \right)} \right).$$ Here, η : [1; +∞) → (0; +∞) is an increasing function convex with respect to ln t on the interval [1; +∞) and such that ln t = o(η(t)), t → ∞. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8982329964637756, "perplexity": 644.6173674723195}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812327.1/warc/CC-MAIN-20180219032249-20180219052249-00319.warc.gz"} |
https://greenemath.com/Algebra_Practice/Solving-Inequalities-Parentheses/Solving-Linear-Inequalities-Parentheses.html | Practice Objectives
• Demonstrate an understanding of the addition property of inequality
• Demonstrate an understanding of the multiplication property of inequality
• Demonstrate the ability to solve a multi-step linear inequality with parentheses
## Practice Solving Linear Inequalities with Parentheses
Instructions:
Solve each inequality for x.
Problem:
Correct!
Not Correct!
$$x$$
$$x$$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8666954636573792, "perplexity": 3207.8493192961932}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038089289.45/warc/CC-MAIN-20210416191341-20210416221341-00529.warc.gz"} |
http://www.bmj.com/content/342/bmj.d671.long | CCBYNC Open access
Research
# Association of alcohol consumption with selected cardiovascular disease outcomes: a systematic review and meta-analysis
BMJ 2011; 342 (Published 22 February 2011) Cite this as: BMJ 2011;342:d671
1. Paul E Ronksley, doctoral student1,
2. Susan E Brien, postdoctoral fellow1,
3. Barbara J Turner, professor of medicine and director2,
4. Kenneth J Mukamal, associate professor of medicine3,
5. William A Ghali, scientific director and professor14
1. 1Calgary Institute for Population and Public Health, Department of Community Health Sciences, Faculty of Medicine, University of Calgary, Alberta, Canada T2N 4Z6
2. 2REACH Center, University of Texas Health Science Center, San Antonio, TX, USA, and Health Outcomes Research, University Health System, San Antonio
3. 3Harvard Medical School and Associate in Medicine, Division of General Medicine and Primary Care, Beth Israel Deaconess Medical Center, Boston, MA, USA
4. 4Department of Medicine, Faculty of Medicine, University of Calgary
1. Correspondence to: W Ghali wghali{at}ucalgary.ca
• Accepted 12 December 2010
## Abstract
Objective To conduct a comprehensive systematic review and meta-analysis of studies assessing the effect of alcohol consumption on multiple cardiovascular outcomes.
Design Systematic review and meta-analysis.
Data sources A search of Medline (1950 through September 2009) and Embase (1980 through September 2009) supplemented by manual searches of bibliographies and conference proceedings.
Inclusion criteria Prospective cohort studies on the association between alcohol consumption and overall mortality from cardiovascular disease, incidence of and mortality from coronary heart disease, and incidence of and mortality from stroke.
Studies reviewed Of 4235 studies reviewed for eligibility, quality, and data extraction, 84 were included in the final analysis.
Results The pooled adjusted relative risks for alcohol drinkers relative to non-drinkers in random effects models for the outcomes of interest were 0.75 (95% confidence interval 0.70 to 0.80) for cardiovascular disease mortality (21 studies), 0.71 (0.66 to 0.77) for incident coronary heart disease (29 studies), 0.75 (0.68 to 0.81) for coronary heart disease mortality (31 studies), 0.98 (0.91 to 1.06) for incident stroke (17 studies), and 1.06 (0.91 to 1.23) for stroke mortality (10 studies). Dose-response analysis revealed that the lowest risk of coronary heart disease mortality occurred with 1–2 drinks a day, but for stroke mortality it occurred with ≤1 drink per day. Secondary analysis of mortality from all causes showed lower risk for drinkers compared with non-drinkers (relative risk 0.87 (0.83 to 0.92)).
Conclusions Light to moderate alcohol consumption is associated with a reduced risk of multiple cardiovascular outcomes.
## Introduction
Possible cardioprotective effects of alcohol consumption seen in observational studies continue to be hotly debated in the medical literature and popular media. In the absence of clinical trials, clinicians must interpret these data when answering patients’ questions about taking alcohol to reduce their risk of cardiovascular disease. Systematic reviews and meta-analyses have addressed the association of alcohol consumption with cardiovascular disease outcomes1 2 3 4 5 6 7 8 but have not uniformly addressed associations between alcohol use and mortality from cardiovascular disease, as well as the incidence and mortality from coronary heart disease and stroke. Additionally, further studies have been published since 2006, when the most recent reviews appeared. The continuing debate on this subject warrants an in depth reassessment of the evidence.
In this paper, we synthesise results from longitudinal cohort studies comparing alcohol drinkers with non-drinkers for the outcomes of overall mortality from cardiovascular disease, incident coronary heart disease, mortality from coronary heart disease, incident stroke, and mortality from stroke. Because of the many biological effects of alcohol consumption, we also examine the association of alcohol with mortality from all causes when this is reported in studies. We conducted meta-analyses for each of these outcomes and a sensitivity analysis with lifetime abstainers as the reference category to account for the heterogeneity within the reference group of non-drinkers. We also examined the effect of confounding on the strength of observed associations. In our companion paper,110 we link these cardiovascular outcomes with experimental trials of alcohol consumption on candidate causal molecular markers.
## Methods
### Data sources and searches
We performed a systematic review and meta-analysis following a predetermined protocol in accordance with the Meta-analysis of Observational Studies in Epidemiology (MOOSE) reporting guidelines.9 We identified all potentially relevant articles regardless of language by searching Medline (1950 through September 2009) and Embase (1980 through September 2009). Searches were enhanced by scanning bibliographies of identified articles and review articles, as well as reviewing conference proceedings from three major scientific meetings (American Heart Association, American College of Cardiology, and European Heart Congress) between 2007 and 2009. Experts in the field were contacted regarding missed, ongoing, or unpublished studies.
To search electronic databases, we used the strategy recommended for systematic reviews of observational studies.10 We specified three comprehensive search themes:
• To identify relevant terms related to the exposure of interest (theme 1), the first Boolean search used the term “or” to explode (search by subject heading) and map (search by keyword) the medical subject headings “ethanol” or “alcohol” or “alcoholic beverages” or “drinking behaviour” or “alcohol drinking” or text words “drink$” or “liquor$” or “ethanol intake” or “alcohol$drink$” or “ethanol drink$” • To identify relevant outcomes (theme 2), a second Boolean search was performed using the term “or” to explode and map the medical subject headings “stroke” or “cardiovascular diseases” or “myocardial infarction” or “myocardial ischemia” or “coronary artery disease” or “heart infarction” or text words “cva$” or “infarct$” or “ischem$” or “cvd” or “ami” or “ihd” or “cad”
• To identify relevant study designs (theme 3), a final Boolean search using the term “or” to explode and map the medical subject headings “cohort studies” or “follow-up studies” or “incidence” or “prognosis” or “early diagnosis” or “survival analysis” or text words “course” or predict$” or “prognos$” was performed.
These three comprehensive search themes were then combined using the Boolean operator “and” in varying combinations.
### Study selection
Two individuals (SEB and PER) independently reviewed all identified abstracts for eligibility. All abstracts reporting on the association between alcohol intake and cardiovascular disease events were selected for full text review. This stage was intentionally liberal. We discarded only those abstracts that clearly did not meet the aforementioned criteria. The inter-rater agreement for this review was high (κ=0.86 (95% confidence interval 0.80 to 0.91)). Disagreements were resolved by consensus.
The same reviewers performed the full text review of articles that met the inclusion criteria and articles with uncertain eligibility. Articles were retained if they met the inclusion criteria for study design (prospective cohort design), study population (adults ≥18 years old without pre-existing cardiovascular disease), exposure (current alcohol use with a comparison group of non-drinkers), and outcome (overall cardiovascular disease mortality or atherothrombotic conditions, specifically incident coronary heart disease, coronary heart disease mortality, incident stroke, or stroke mortality). Both published and unpublished studies were eligible for inclusion. Authors were contacted if the risk profile of the cohort was unclear.
### Data extraction and quality assessment
The primary exposure variable was the presence of active alcohol drinking at baseline compared with a reference group of non-drinkers. Because of the heterogeneity of this reference group, we identified the subset of studies using lifetime abstainers as the reference group and studies that distinguished former drinkers from non-drinkers. Whenever available, we extracted information on amount of alcohol consumed, using grams of alcohol per day as the common unit of measure. When a study did not specifically report the grams of alcohol per unit, we used 12.5 g/drink for analysis.11 We standardised portions as a 12 oz (355 ml) bottle or can of beer, a 5 oz (148 ml) glass of wine, and 1.5 oz (44 ml) glass of 80 proof (40% alcohol) distilled spirits. Volume of intake was categorised as <2.5 g/day (<0.5 drink), 2.5–14.9 g/day (about 0.5–1 drink), 15–29.9 g/day (about 1–2.5 drinks), 30–60 g/day (about 2.5–5 drinks), and >60 g/day (≥5 drinks).
The outcome variables of interest were defined as the presence or absence of death from cardiovascular disease (that is, fatal cardiovascular or stroke events), incident coronary heart disease (fatal or non-fatal incident myocardial infarction, angina, ischaemic heart disease, or coronary revascularisation), death from coronary heart disease (fatal myocardial infarction or ischaemic heart disease), incident stroke (ischaemic or haemorrhagic events), or death from stroke. A secondary analysis was performed within these selected studies to determine the association between alcohol consumption and the risk of death from all causes.
Both reviewers independently extracted data from all studies fulfilling the inclusion criteria, and any disagreement was resolved by consensus. We extracted the data elements of cohort name, sample size, and population demographics (country, percentage male, mean age or age range). We also extracted information for key indicators of study quality in observational studies proposed by Egger et al10 and Laupacis et al.12 Specifically, we evaluated the effect on each outcome of the number of potential confounding variables and the number of years participants were followed.
### Data synthesis and analysis
The relative risk was used as the common measure of association across studies. Hazard ratios and incidence density ratios were directly considered as relative risks. Where necessary, odds ratios were transformed into relative risks with this formula:
• Relative risk=odds ratio/[(1–Po)+(Po×odds ratio)], in which Po is the incidence of the outcome of interest in the non-exposed group.13
The standard error of the resulting converted relative risk was then determined with this formula:
• SElog(relative risk)=SElog(odds ratio)×log(relative risk)/log(odds ratio).
Because these transformations can underestimate the variance of the relative risks derived from the odds ratios,14 15 we performed a sensitivity analysis that excluded four studies for which this transformation had been applied. All analyses were performed with Stata 10.0 (StataCorp, College Station TX, USA). The Stata “metan” command was used to pool the ln(relative risks) across studies according to the DerSimonian and Laird random effects model.16
In some studies, a single relative risk (or odds ratio) was not available for drinkers versus non-drinkers because the data were presented as only a dose-response (that is, several alcohol consumption levels relative to non-drinkers). In these cases, we first pooled across levels of intake within the study using a random effects model to derive a single relative risk for drinkers versus non-drinkers. The resulting single, study-specific relative risk was then pooled with those of other studies.
To visually assess the relative risk estimates and corresponding 95% confidence intervals across studies, we generated forest plots sorted by year of publication. Analyses were stratified by study quality criteria and by participant characteristics.
To assess heterogeneity of relative risks across studies, we inspected forest plots and calculated Q (significance level of P≤0.10) and I2 statistics.17 18 In the presence of heterogeneity, random effects models were used (rather than fixed effects models) to obtain pooled effect estimates across studies. Sensitivity analyses and stratified analyses were performed to assess the associations of selected study quality and clinical factors on cardiovascular risk, including number of confounding factors and duration of follow-up dichotomised at the median value. We also performed a sensitivity analysis excluding studies reporting only odds ratios. We conducted a cumulative meta-analysis of studies ordered chronologically to assess the sequential contributions of studies published over time.19 Finally, we assessed evidence of publication bias through visual inspection of funnel plots and Begg’s rank correlation test for asymmetry.20 21
## Results
### Identification of studies
Our initial search yielded a total of 4235 unique citations (fig 1). After two rounds of reviews and searching citations of retained articles, we identified 131 studies as potentially relevant for analysis. We excluded studies of cardiovascular outcomes predefined as ineligible (such as chronic congestive heart failure or stable angina), non-atherothrombotic end points (such as arrhythmias), composite end points, or non-cardiovascular outcomes (such as cancer), and duplicate reports. This left 84 studies for our systematic review and meta-analysis. Table 1 provides details of the included studies.22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 Of these 84 studies, 34 (40%) reported on all-male cohorts, six (7%) reported on women only, and 44 (52%) included both men and women.
Fig 1 Details of study selection for review
Table 1
Details of studies included in meta-analysis of association of alcohol consumption with selected cardiovascular disease outcomes
View this table:
### Study quality
We evaluated two primary features of study quality—the number of years that participants were followed and adjustment for confounding. Duration of follow-up for study end points ranged from 2.5 to 35 years, with a mean follow-up of 11 years (standard deviation 6 years) (table 1). Of the included studies, 13 (15%) had ≤5 years of follow-up. Similarly, studies varied in the degree of confounder adjustment, ranging from none to 18 variables, with a mean of six (SD 4). Most studies (68) presented adjusted estimates, but eight reported only unadjusted estimates and another eight adjusted only for basic demographic information. Methods of adjustment, effect measure, and confounding variables used in each study are presented in the appendix tables 1–5 on bmj.com for each of our primary outcomes.
### Primary analyses of cardiovascular disease mortality, coronary heart disease incidence and mortality, and stroke incidence and mortality
For cardiovascular disease mortality and both end points for coronary heart disease, alcohol consumption was associated with lower risk, with relative risks of about 0.75 (table 2). In general, relative risks derived from the more highly adjusted and from the less adjusted results were similar. Figures 2–4 reveal little visual evidence of heterogeneity despite statistical evidence of heterogeneity (P<0.001, I2=72.2%), probably driven by the large number of participants (>1 million). All the point estimates were <1.0 in studies, except for one study for cardiovascular disease mortality and two studies for coronary heart disease incidence and mortality.
Fig 2 Forest plot of mortality from cardiovascular disease associated with alcohol consumption
Fig 3 Forest plot of incident coronary heart disease associated with alcohol consumption
Fig 4 Forest plot of mortality from coronary heart disease associated with alcohol consumption
Table 2
Stratified analyses of pooled relative risks (95% CI) for cardiovascular and stroke outcomes (number of pooled studies in parentheses after each effect estimate)
View this table:
In contrast, the overall associations of alcohol intake with stroke incidence and mortality were close to null, both in minimally adjusted and more highly adjusted models (table 2, figs 5 and 6). However, this null association seemed to obscure nearly significant but opposite associations with subtypes of incident stroke. Among the 12 studies on incident haemorrhagic stroke, the pooled relative risk for current alcohol drinkers compared with non-drinkers was 1.14 (95% confidence interval 0.97 to 1.34), whereas the eight studies on ischaemic stroke showed a moderate reduction in the pooled relative risk of 0.92 (0.85 to 1.00). Alcohol use was not associated with stroke mortality, but few studies assessed the risk of mortality from haemorrhagic or ischaemic stroke separately. Furthermore, only two studies reported relative risks on stroke end points for former drinkers compared with non-drinkers.
Fig 5 Forest plot of incident stroke associated with alcohol consumption
Fig 6 Forest plot of mortality from stroke associated with alcohol consumption
### Analyses of dose response
Analyses of the dose of alcohol consumed showed that 2.5–14.9 g alcohol (about ≤1 drink) per day was protective for all five outcomes compared with no alcohol (table 2). For coronary heart disease outcomes, all levels of intake >2.5 g/day had similar degrees of risk reduction. For cardiovascular disease mortality as well as stroke incidence and mortality, the dose-response relations were less clear and more consistent with U or J shaped curves, suggesting an increased risk among drinkers of greater amounts of alcohol. Specifically, those who consumed >60 g/day were at a significantly increased risk of incident stroke compared with abstainers (relative risk 1.62 (1.32 to 1.98)).
### Sensitivity analyses
In an analysis of differences in associations by sex, any amount of alcohol consumption relative to none was associated with greater reduction in cardiovascular disease mortality, stroke incidence, and stroke mortality for women than men. However, the association with stroke should be interpreted with caution, as the risk estimates for women are based on only three pooled studies. On the other hand, similar associations by sex were observed for coronary heart disease incidence and mortality (table 2).
Sensitivity analyses that were confined to only studies that controlled for the important confounders of smoking, age, and sex revealed generally similar results for all of the outcomes. Additional sensitivity analyses that account for the median number of confounding variables in the multivariable analyses of included studies revealed that those with fewer (less than the median) confounding variables generally reported slightly lower relative risk estimates. However, this pattern was inconsistent across the outcomes. Specifically, an increased risk of stroke mortality was observed for studies with limited adjustment for confounding. A similar trend was observed when considering the duration of follow-up. Using the pooled median number of years as the cut point, we found that studies with shorter follow-up reported a greater risk reduction for all outcomes except cardiovascular disease and coronary heart disease mortality (table 2).
Among those studies that used long term abstainers as the referent category, excluding former drinkers or evaluating them separately, the estimated association between drinking and both incidence and mortality estimates did not change substantively (table 2). Among studies that evaluated former drinkers separately, the risk of death (from cardiovascular disease and coronary heart disease) was significantly higher in former drinkers than in drinkers. However, former drinkers did not have an increased risk of incident cardiovascular events (coronary heart disease or stroke).
Finally, a sensitivity analysis that excluded the few studies where only odds ratios instead of relative risks were presented had little effect on the results. In cumulative meta-analyses of cardiovascular disease and coronary heart disease outcomes (appendix figs 1–3 on bmj.com), there was little variation in the relative risk associated with alcohol consumption on cardiovascular disease mortality or incident coronary heart disease with addition of new studies after 1999; for coronary heart disease mortality, this plateau in incremental change from new studies occurred as early as 1992–3.
### Mortality from all causes
Of the 84 studies addressing alcohol and cardiovascular disease events, 31 also examined the association of alcohol consumption with all cause mortality. The pooled estimates from these studies showed a lower risk of all cause mortality for drinkers compared with non-drinkers (relative risk 0.87 (0.83 to 0.92)) (fig 7). However, the association was J shaped, with the lowest risk for those consuming 2.5–14.9 g/day (relative risk 0.83 (0.80 to 0.86), 16 studies) and an elevated risk in those consuming >60 g/day (relative risk 1.30 (1.22 to 1.38), 8 studies).
Fig 7 Forest plot of mortality from all causes associated with alcohol consumption
### Publication bias
Visual inspection of the funnel plot for each outcome did not show asymmetry, an indication that significant publication bias was not likely. This was further confirmed by a non-significant Begg’s test for each outcome (for cardiovascular disease mortality, P=0.40; incident coronary heart disease, P=0.75; coronary heart disease mortality, P=0.089; incident stroke, P=0.33; stroke mortality, P=0.59; all cause mortality, P=0.26).
## Discussion
In this review of 84 studies of alcohol consumption and cardiovascular disease, alcohol consumption at 2.5–14.9 g/day (about ≤1 drink a day) was consistently associated with a 14–25% reduction in the risk of all outcomes assessed compared with abstaining from alcohol. Such a reduction in risk is potentially of clinical importance, but consumption of larger amounts of alcohol was associated with higher risks for stroke incidence and mortality.
To our knowledge, this systematic review and meta-analysis is the most comprehensive to date. Although roughly similar estimates of lower risk were observed in previous meta-analyses of both coronary heart disease and stroke,1 2 3 4 5 6 7 8 our review extends the findings by assessing a broader array of relevant cardiovascular outcomes and adding several new important studies. Our review clarifies several discrepancies among prior reports. Corrao et al reported a J shaped relation between alcohol intake and coronary heart disease,2 whereas the review by Maclure described this relation as L shaped because he did not observe an increase in coronary heart disease risk associated with higher alcohol consumption.6 Our updated meta-analysis supports the latter association for coronary heart disease, with a 25–35% risk reduction for light to moderate drinking106 that also is present with heavier drinking.
Our analysis of multiple cardiovascular outcomes also shows the complexities inherent in the study of alcohol consumption. Modest alcohol intake was associated with lower stroke incidence and mortality, but the risk increased substantially with heavier drinking (that is, a J shaped relation). Furthermore, the association of alcohol consumption is complex and differs by stroke subtype, with a slightly lower risk of ischaemic stroke but higher risk of haemorrhagic stroke. These differential associations probably reflect the known antithrombotic effects of alcohol.107 Alcohol consumption, particularly at high doses, also seems to have an adverse association with blood pressure that may account, in part, for the higher risk of haemorrhagic stroke associated with heavier drinking.108 Additionally, our analysis does not consider other known detrimental effects of high alcohol consumption.3 Therefore, our findings lend further support for limits on alcohol consumption.106 109
Our review also highlights other important aspects of the relation between alcohol consumption and cardiovascular disease. Firstly, the lower risk of coronary heart disease associated with alcohol consumption was at least as strong for women as for men. Limited evidence suggests that the risk of stroke related to alcohol is lower for women than men, but this may only reflect lower alcohol intake among women. Secondly, inclusion of former drinkers did not seem to bias the association of alcohol consumption with cardiovascular disease. Thirdly, when studies were summarised chronologically, we found that the overall association between drinking and cardiovascular disease and coronary heart disease became apparent at least a decade ago, and ongoing studies have done little to revise the estimated associations.
### An argument for causation
From the extensive body of literature summarised here, the association between alcohol consumption and decreased cardiovascular risk is not in question, as additional research has not changed this conclusion. Rather, the lingering question is whether this association is causal. Clearly, observational studies cannot establish causation. However, when the present results are coupled with those from our companion review paper summarising interventional mechanistic studies focusing on biomarkers associated with cardiovascular disease,110 the argument for causation becomes more compelling. Indeed, the mechanistic biomarker review shows biological plausibility for a causal association by showing favourable changes in pathophysiologically relevant molecules.
Therefore, we can now examine the argument for causation based on Hill’s criteria.111 Beyond the biological plausibility argument discussed above, there is an appropriate temporal relation with alcohol use preventing cardiovascular disease. Secondly, we have observed a greater protective association with increasing dose, except that it seems to be offset somewhat by negative associations with the risk of haemorrhagic stroke. Thirdly, the protective association of alcohol has been consistently observed in diverse patient populations and in both women and men. Fourthly, the association is specific: moderate drinking (up to 1 drink or 12.5 g alcohol per day for women and 2 drinks or 25 g alcohol per day for men106) is associated with lower rates of cardiovascular disease but is not uniformly protective for other conditions, such as cancer.112 Lastly, the reduction in risk is notable even when controlling for known confounders (such as smoking, diet, and exercise). Any potential unmeasured confounder would need to be very strong to explain away the apparently protective association.
### Limitations of study
The results of our meta-analysis should be interpreted in context of the limitations of available data. Firstly, the quality of individual studies varied, with some studies having limited follow-up and limited adjustment for potential confounding. With respect to study follow-up, it is possible that misclassification of alcohol consumption may increase with study length because of changes in drinking habits over time. It is also possible that potential biological effects of alcohol vary with time of exposure. However, arguing against both these possibilities, the analysis stratified by length of follow-up did not show different associations between alcohol intake and outcome for shorter follow-up times versus longer times.
Secondly, only a limited subset of studies provided specific risk estimates for different beverages. Although there is great interest in differences between beer, wine, and spirits, alcoholic drinks generally have similar effects on high density lipoprotein cholesterol,113 and it is likely that any particular benefit of wine is prone to confounding by diet and socioeconomic status.114 115 None the less, this remains an interesting topic for further investigation.
Thirdly, we found only limited information on the relation between alcohol intake and mortality from subtypes of stroke, so this topic continues to be important for large observational cohort studies. Finally, we observed significant heterogeneity across studies for several of our pooled analyses. This may be due in great part to large study sample sizes, which can confer greater statistical power to heterogeneity tests, whereas the clinical relevance of this heterogeneity may be quite modest.10 Visual inspection of our various forest plots and the relative consistency of pooled relative risks across clinical and methodological variables suggest that there is considerable consistency in the relative risk findings across studies and across strata.
### Implications
Given the consistency observed in our findings and compelling mechanistic data pointing to causation in our companion review, additional observational studies will have limited value except to elucidate more precisely the association of alcohol and stroke.116 Rather, debate should centre now on how to integrate this evidence into clinical practice and public health messages. In the realm of clinical practice, the evidence could form a foundation for proposing counselling for selected patients to incorporate moderate amounts of alcohol into their diets to improve their coronary heart disease risk. However, such a clinical strategy requires formal evaluation in pragmatic clinical trials that assess the questions of optimal patient selection, compliance, risks, and benefits. The focus of such trials would shift from assessing the association between alcohol and disease outcomes to evaluating the receptivity of both physicians and patients to the recommended consumption of alcohol for therapeutic purposes and the extent to which it can be successfully and safely implemented. In support of implementation trials, our two papers show that alcohol consumption in moderation has reproducible and plausible effects on markers of coronary heart disease risk.
With respect to public health messages, there may now be an impetus to better communicate to the public that alcohol, in moderation, may have overall health benefits that outweigh the risks in selected subsets of patients. Again, any such strategy would need to be accompanied by rigorous study and oversight of impacts. One approach would be to undertake public health messaging pilot studies on well defined target populations (such as a workplace or in a small jurisdiction) to permit detailed evaluation of effects on measures such as knowledge, attitudes, self reported drinking behaviours, and perhaps, secondarily, health outcomes.
The debate on how to integrate this evidence into clinical practice and public health messages will require integration of all possible effects of alcohol—from injury and violence to glucose metabolism and inflammation—and recognition that these effects may be distributed unequally across the population. For example, injury risk probably disproportionately affects younger individuals, whereas cardiovascular disease mainly affects older adults. Robust studies that examine multiple outcomes simultaneously are needed to identify those subsets of the population in which reduced cardiovascular risk might dominate against those for whom the risks of social and medical problems (including several cancers and injury112 117) are too great. Despite the latter concerns, results of our secondary analysis of overall mortality (fig 5) support the notion that moderate alcohol consumption is associated with net benefit, at least in populations similar to those studied in the extant literature.
Our two systematic review papers summarise a surprisingly extensive body of literature on the relation between alcohol and cardiovascular disease. Our findings point to the need to define implications for clinical and public health practice. These reviews and the perspectives above provide a foundation for that dialogue.
#### What is already known on this topic:
• Systematic reviews have addressed the association of alcohol consumption with various cardiovascular outcomes
• However, these reviews are somewhat out of date, and none has comprehensively studied a broad spectrum of relevant cardiovascular end points
• This meta-analysis provides a summary of current knowledge regarding alcohol associations with six meaningful clinical end points—cardiovascular disease mortality, coronary heart disease incidence and mortality, stroke incidence and mortality, and all cause mortality
• The results confirm the beneficial effects of moderate alcohol consumption and the need to elucidate the underlying pathophysiological mechanisms
## Notes
Cite this as: BMJ 2011;342:d671
## Footnotes
• Preliminary results from this manuscript were presented at the 32nd annual meeting of the Society of General Internal Medicine, Miami, Florida, 14 May 2009.
• Contributors: All authors conceived the study and developed the protocol. PER and SEB conducted the search, abstracted the data for the analysis, and performed the statistical analysis. PER, SEB, and WAG wrote the first draft of the manuscript. All authors had access to the data, critically reviewed the manuscript for important intellectual content, and approved the final version of the manuscript. WAG will act as guarantor for the paper.
• Funding: This work was supported by a contracted operating grant from Program of Research Integrating Substance Use Information into Mainstream Healthcare (PRISM) funded by the Robert Wood Johnson Foundation, project No 58529, with cofunding by the Substance Abuse and Mental Health Services and the Administration Center for Substance Abuse Treatment. PER is supported by a Frederick Banting and Charles Best Canada Graduate Scholarship from the Canadian Institutes of Health Research. SEB is supported by a Postdoctoral Fellowship Award from the Alberta Heritage Foundation for Medical Research. WAG is supported by a Canada Research Chair in Health Services Research and by a Senior Health Scholar Award from the Alberta Heritage Foundation for Medical Research. The study was conducted independently of funding agencies. None of the funding agencies played an active role in the preparation, review, or editing of this manuscript.
• Competing interests: All authors have completed the Unified Competing Interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare: support from the Robert Wood Johnson Foundation, the Substance Abuse and Mental Health Services, and the Administration Center for Substance Abuse Treatment (as detailed above) for the submitted work, no financial relationships with any organisations that might have an interest in the submitted work in the previous three years, no other relationships or activities that could appear to have influenced the submitted work.
• Ethical approval: Not required.
• Data sharing: Statistical code and datasets available from the corresponding author at wghali@ucalgary.ca
This is an open-access article distributed under the terms of the Creative Commons Attribution Non-commercial License, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited, the use is non commercial and is otherwise in compliance with the license. See: http://creativecommons.org/licenses/by-nc/2.0/ and http://creativecommons.org/licenses/by-nc/2.0/legalcode.
View Abstract | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.493392676115036, "perplexity": 4859.79910592632}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948588251.76/warc/CC-MAIN-20171216143011-20171216165011-00662.warc.gz"} |
http://www.gradesaver.com/textbooks/math/calculus/calculus-early-transcendentals-8th-edition/chapter-2-section-2-3-calculating-limits-using-the-limit-laws-2-3-exercises-page-103/21 | ## Calculus: Early Transcendentals 8th Edition
$\lim\limits_{h\to 0}\frac{\sqrt{9+h}-3}{h}=\frac{1}{6}$
$A=\lim\limits_{h\to 0}\frac{\sqrt{9+h}-3}{h}$ Multiply both numerator and denominator by $\sqrt{9+h}+3$, we have $A=\lim\limits_{h\to 0}\frac{(9+h)-9}{h(\sqrt{9+h}+3)}$ (We see that ($\sqrt{9+h}-3)(\sqrt{9+h}+3)=(9+h)-9$ since $(a-b)(a+b)=a^2-b^2$) $A=\lim\limits_{h\to 0}\frac{h}{h(\sqrt{9+h}+3)}$ $A=\lim\limits_{h\to 0}\frac{1}{\sqrt{9+h}+3}$ (divide numerator and denominator by $h$) $A=\frac{1}{\sqrt{9+0}+3}$ $A=\frac{1}{6}$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999816417694092, "perplexity": 195.57642769327475}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823067.51/warc/CC-MAIN-20171018180631-20171018200631-00870.warc.gz"} |
https://courantklein.wordpress.com/tag/discussion/ | # Research Polymath
## July 28, 2009
First of all, we need to gather some resources to tackle this problem. I found John Preskill’s notes on quantum computation to be very valuable. But what about $\mathbb{Z}_3$ is special? Moreover, why are we considering spaces of the form $\mathbb{Z}_p$ where $p$ is prime? As mentioned before here, quantum entanglement can be modelled more generally than tensor products in hilbert spaces. We can consider cartesian products of various sets. But will this general view help us tackle our more specific problem? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 3, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36879414319992065, "perplexity": 507.5115313221684}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824570.79/warc/CC-MAIN-20171021043111-20171021063111-00248.warc.gz"} |
https://www.ecb.europa.eu/pub/economic-bulletin/articles/2020/html/ecb.ebart202002_01~1a58c02776.hu.html | Keresési lehetőségek
Kezdőlap Média Kisokos Kutatás és publikációk Statisztika Monetáris politika Az €uro Fizetésforgalom és piacok Karrier
Javaslatok
Rendezési szempont
Magyar nyelven nem elérhető
# Multinational enterprises, financial centres and their implications for external imbalances: a euro area perspective
Prepared by Virginia Di Nino, Maurizio Michael Habib and Martin Schmitz
Published as part of the ECB Economic Bulletin, Issue 2/2020.
This article analyses how the operations of large multinational enterprises (MNEs) affect the external account of the euro area and, in general, financial centres. The increased ease of moving intangible assets, profits and headquarters across borders poses challenges to the current framework of international statistics and economic analysis. First, the article shows how MNE operations are recorded in cross-border statistics, as well as the challenges in measuring such data. Second, the article highlights evidence of the impact that MNEs have on the external account of the euro area – this is most evident in current account balances and foreign direct investment in euro area financial centres, often involving special-purpose entities (SPEs). Third, the article looks at the tendency of financial centres to report current account surpluses that may be tentatively attributed, in part, to the activity of MNEs. Multilateral initiatives could help to improve the transparency of MNE operations and ensure an exchange of information across borders for statistical and tax purposes.
## 1 Introduction
The rise of large, profitable, global firms and the mobility of intangible assets[1] have increased the relevance of firms’ profit-shifting activities, posing challenges to the current framework of international statistics. The balance sheets of large multinational enterprises (MNEs)[2] have become very sizeable. The assets of the largest listed companies in major advanced economies, amounting to a value of several hundred billions of US dollars, are roughly equal to the gross domestic product of many small open economies. In order to reduce their tax burden, MNEs carry out a range of activities: these include shifting profits to low-tax jurisdictions by manipulating transfer pricing[3] and shifting intra-company positions – often this involves complex financial structures and the creation of SPEs in low-tax, or no-tax, jurisdictions. These activities are extremely difficult to track. The novelty of some activities – in particular the growth in intellectual property products and improved opportunities to strategically choose their location – poses significant challenges for the existing framework of national and international statistics, which is based on the concept of residence[4].
International tax avoidance by MNEs is not a novel phenomenon but its rapid growth increasingly attracts the attention of academics and policy makers.[5] Global firms respond to tax incentives when recording worldwide income among affiliates. A recent survey of this literature finds that a decrease by one percentage point in the statutory corporate tax rate translates into a 1% expansion of before-tax income for global firms.[6] Importantly, this study shows that the estimated impact appears to be increasing over time. Transfer pricing and licensing seem to be the main channels of tax avoidance – these appear to be more important than financial planning.[7] International taxation may also alter the geography of foreign direct investment (FDI): a higher statutory tax rate in a target investment country discourages the acquisition of firms in that country, while lower tax burdens may attract FDI related to profit-shifting activities.[8] Another area of research focuses on the implications of these tax-avoidance activities for the measurement of the external wealth of nations and the diminished ability of governments when it comes to taxing the corporate profits of global firms.[9]
A number of policy initiatives at the international level have been launched to counteract the intensification of tax avoidance. The Organisation for Economic Co-operation and Development (OECD) estimates that 240 billion US dollars in tax revenues are lost globally every year as a result of tax avoidance by MNEs. As a result, the OECD and the G20 sponsored the Base Erosion and Profit Shifting (BEPS) Project, including an action plan that identifies 15 actions intended to limit international tax avoidance.[10] This initiative currently involves over 135 countries, including the European Union (EU) Member States. The EU built on the BEPS Project’s recommendations by adopting two Anti-Tax Avoidance Directives, which entered into force between 2019 and 2020. The EU reform package includes concrete measures to reduce tax avoidance, boost tax transparency and move towards a level playing field for all businesses in the EU, but also new requirements for MNE financial reporting (see Box 1).[11]
## Box 1 Tax avoidance and transparency: policy initiatives at the international and EU level
Prepared by Maurizio Michael Habib and Martin Schmitz
At the international level, the OECD, with the support of the G20, championed work on limiting tax avoidance. The OECD/G20 BEPS Project, finalised in 2015, proposes measures to reduce tax avoidance; it also includes new requirements for MNE financial reporting, in particular for country-by-country reporting by 2025. Many of the recommendations of the OECD/G20 BEPS Project have been transposed at the EU level via the European Commission’s broad Anti-Tax Avoidance Package.[12] This package also includes the revision of the Administrative Cooperation Directive, proposing country-by-country reporting between Member States' tax authorities on key tax-related information concerning multinationals operating in the EU.
Statistical compilers need to closely cooperate internationally to ensure that MNE activities are recorded consistently from country to country. This means that they have to share confidential data on MNEs and their subsidiaries across borders. The GNI pilot project, launched by the European Statistical System Committee in 2018, takes steps in this direction; it aims to jointly assess the consistency of statistical recording among national statistical authorities, using a sample of 25 MNEs in Europe.
Moreover, some national statistical authorities have set up large case units to monitor the activities of MNEs nationally. However, no formal coordination exists yet at the international level. Further development of legal entity identifiers and business registers would also be instrumental in improving national accounts and b.o.p. statistics.[13]
The traces of MNE operations are particularly apparent in the external statistics of financial centres. Since the euro area hosts some significant financial centres, this article discusses the dynamics of their external accounts. We adopt a standard operational definition of financial centres on the basis of the size of their stock of foreign liabilities relative to GDP. These are therefore economies where financial activities tend to dominate domestic economic activity. In particular, financial centres are defined as the ten advanced economies with the largest ratios of foreign liabilities to GDP in a large sample of more than 60 countries. These ten financial centres include six euro area economies (Belgium, Cyprus, Ireland, Luxembourg, Malta and the Netherlands) and four non-euro area economies (Hong Kong SAR, Singapore, Switzerland and the United Kingdom).[14] Chart 1 shows the ratio of foreign liabilities to GDP for three groups of countries: advanced economies (excluding financial centres), financial centres and emerging market economies. In contrast to the effect it had on other advanced economies, the global financial crisis in 2008 does not appear to have dented the rise in the international financial integration of financial centres. In financial centres the median value of foreign liabilities increased, from around seven times GDP before the global financial crisis, to almost 11 times GDP at the end of 2018; the dispersion of the distribution of this statistic – foreign liability to GDP – markedly increased over the same period.
The importance of MNEs within the global economy has increased over time – as has the role of financial centres. It is worth considering whether this has an impact on current account imbalances, particularly on those of large financial centres. Financial centres tend to record large current account surpluses: eight out of the ten financial centres, as defined in this article, had a current account surplus over the past two decades on average. However, each one has its own business model, which is reflected in the diverse composition of their current accounts. Chart 2 shows the breakdown of the current accounts of these economies into their main subcomponents since 2010, when the stock of FDI liabilities started to grow rapidly. For the first group of economies – Singapore, Switzerland, the Netherlands and Ireland – the current account surplus is mainly the outcome of a large surplus in the balance of goods. As explained in Section 2 and Section 3, the activities of MNEs (such as merchanting and contract manufacturing) may boost the goods balance of financial centres. For a second group of economies – Luxembourg, Malta and Hong Kong – the surplus is mostly due to the service balance, in turn driven by the financial services sector.
The correct measurement of external statistics, such as those discussed in this article, is important for central banks. Large external imbalances may raise concerns about the sustainability of economic growth and about financial stability, which can affect monetary policy and macroprudential policies. For instance, central banks monitor external accounts to assess the equilibrium value of exchange rates, while noting potential misalignments – this is because abrupt and significant corrections in exchange rates may influence inflation developments. A distorted representation of aggregate current account imbalances could provide flawed signals to policy makers.
This article is structured as follows. Section 2 explains how typical operations by MNEs are recorded in balance of payments (b.o.p.) and international investment position (i.i.p.) statistics; it also highlights relevant challenges faced when measuring these statistics. Section 3 aims to gauge the quantitative relevance of MNE operations for the external accounts of euro area countries, in particular distinguishing financial centres from other euro area economies, and focusing on aspects of trade and the composition of euro area FDI. Section 4 summarises and concludes the article.
## 2 Recording multinational enterprise operations in balance of payments statistics
### 2.1 The origins of measurement challenges
The operations of large MNEs affect national accounts statistics and, in particular, external accounts, thus creating challenges for statistical compilation and economic analysis.[15] This section reviews how typical MNE operations are captured in b.o.p. and i.i.p. statistics; it also highlights some of the associated measurement challenges. MNE tax planning strategies mainly affect b.o.p. data in three ways: (i) by shifting profits to affiliates in low-tax jurisdictions, which can involve moving IPPs or manipulation of transfer prices on intra-firm trade; (ii) by shifting intra-firm debt obligations and capital linkages; (iii) by redomiciling headquarters and legal incorporations to financial centres with favourable tax arrangements. This section also shows why these activities have different implications for the current account and i.i.p. of countries hosting MNEs and their affiliates.
Measurement challenges are caused by friction between residence-based national statistics methodologies and the global activities and ownership structures of large MNEs. B.o.p. and national accounts statistics, and their associated data collection processes, are based on the residency concept, according to which each institutional unit[16] is resident of one economic territory: the place where they have their centre of predominant economic interest. However, MNEs tend to organise their production chains and corporate structures across the globe involving numerous legal entities, including SPEs (see Box 2).[17] Data on these entities are recorded in the national b.o.p. statistics for the economy of the country where they reside. Consequently these data are not consolidated across borders with the home country of their parent MNE.[18]
## Box 2 Towards a recording of special-purpose entities in cross-border statistics
Prepared by Martin Schmitz
The use of SPEs by MNEs has increased rapidly in recent years.[19] According to a recent Task Force of the Balance of Payments Committee (BOPCOM) at the International Monetary Fund (IMF), an SPE is: (i) a formally registered or incorporated legal entity that is resident in an economy and recognised as an institutional unit with little or no employment (up to a maximum of five employees), little or no physical presence, and little or no physical production activities in the host economy; (ii) directly or indirectly controlled by non-residents; (iii) established to obtain specific advantages provided by the host jurisdiction; (iv) transacting almost entirely with non-residents with large parts of the financial balance of a cross-border nature.[20] The IMF BOPCOM Task Force proposed this internationally agreed definition of SPEs with the aim of collecting comparable cross-country data that separately identify SPEs in cross-border statistics. This is because the size of SPE-related cross-border financial flows and positions often tends to be outsized relative to a country’s domestic economy, blurring the analysis of macroeconomic statistics in the affected countries.
There is a high presence of SPEs in a number of euro area countries. This group of countries includes Cyprus, Ireland, Luxembourg, Malta and the Netherlands, which are all part of the financial centres group shown in Chart 1. In these economies SPEs have a significant impact on the i.i.p. and cross-border transactions, mainly affecting FDI but also portfolio and other investment. Moreover, in some cases, SPEs have non-financial assets (such as IPPs) on their balance sheet.
EU economies with SPE presence tend to a have a well-developed legal, financial and consulting services sector.[21] MNEs may set up SPEs to organise their internal financing arrangement, which requires the availability of highly specialised service providers such as lawyers, tax consultants and financial sector experts in the economies that are hosting SPEs. Tax-avoidance strategies, for instance, often involve the establishment of complex corporate structures involving SPEs across several EU countries.
The IMF BOPCOM Task Force’s definition of SPEs would be helpful in ensuring the availability of internationally consistent external sector statistics with a separate breakdown for SPEs. The IMF BOPCOM aims to publish data that separately identify SPEs in cross-border statistics by the end of 2021. Achieving this goal would require further practical guidance on the application of the definition of SPEs in the light of their heterogeneous nature and their cross-border activities.
Measurement challenges are exacerbated by digitalisation and the increasing importance of IPPs, which are particularly relevant for financial centres. Over time the corporate structures of MNEs have become increasingly dynamic as a result of the redomiciling of headquarters and the increased relevance of intangible assets (such as patents and copyrights), which can be moved across borders with greater ease than physical assets, such as factories. These phenomena can have large effects in terms of magnitude and volatility of statistical indicators, which become especially visible in those economies where MNE transactions and balance sheets are large relative to the size of the domestic economy.
### 2.2 MNEs and current account balances
To trace the impact that MNE operations have on external accounts, various components of the b.o.p. need to be looked at separately.[22] According to the b.o.p. identity, it holds that
$C A + K A + E O = F A$ (1)
where CA stands for the current account balance, KA for the capital account balance (comprising mainly transfers of capital and non-produced non-financial assets), EO for errors and omissions (capturing any statistical discrepancy), and FA for the financial account balance.[23]
MNE operations affect various items of a country’s current account balance, the key variable measuring trade, and income and transfer flows vis-à-vis non-residents. The current account consists of the trade balances in goods and services as well as cross-border factor income (primary income) and transfers (secondary income), with the first three being directly affected by the actions of MNEs:
(2)
Cross-border production arrangements and merchanting activities related to MNEs can affect the trade-in-goods component of the current account. This might involve foreign subsidiaries of MNEs (in what is known as offshoring) or an unrelated foreign company (i.e. outsourcing). B.o.p. statistics are based on the concept of change in economic ownership. Which means, in contrast to international trade statistics that measure all goods crossing a country’s border, trade in goods recorded in b.o.p. statistics also includes contract manufacturing and merchanting. In contract manufacturing, an MNE hires a foreign company to produce a good. During the production process, the ownership of the inputs remains with the MNE and hence no trade flows are recorded in the b.o.p. (with the exception of an import by the MNE of manufacturing services from the foreign company that is producing the good). However, the b.o.p. does include the sale of the final products to third countries, which is consistent with the change in ownership principle. Merchanting is the process whereby a company purchases a good from an entity resident abroad, and subsequently sells it to a buyer in a third country without the good crossing the border of the country where the merchant is based.[24] If such transactions involve foreign entities belonging to the same group, their pricing has a decisive impact on the amount and location of profits booked, which is in line with the well-established concept of transfer pricing.[25]
MNE business operations affect trade in services, reflecting the rise of the knowledge economy and digitalisation. As IPPs can often be easily moved across borders within an MNE group, possibly involving SPEs, they affect exports and imports. [26] IPPs are hard to value at market prices and, therefore, MNEs may use them to avoid taxation. For example, one entity of an MNE might own the group’s IPP assets, while other entities in the same group pay licence fees and royalties for its use.
The primary income balance, which is dominated by investment income flows, is another component of the current account affected by MNE operations.[27] Investment income reflects the receipts and payments generated by an economy’s external assets and liabilities (such as dividends and interest), and can be further decomposed into functional categories of the b.o.p. (FDI, portfolio investment, other investment and reserve assets).
MNE operations are particularly visible in FDI income.[28] Income on FDI comes from its equity and debt components. Equity income can be further decomposed into dividends (profits distributed to the direct investor) and reinvested earnings (profits retained in the foreign affiliate). Crucially, the direct investor’s decision to reinvest earnings (i.e. to keep them in a foreign subsidiary) is recorded twice in offsetting ways in the b.o.p. – once as income on FDI, and once as a reinvestment of equal size in the financial account. In practice, MNEs can use complex corporate structures to optimise their tax burden – for example, by concentrating reinvested earnings in certain jurisdictions and by organising intragroup debt obligations. Apart from FDI, the cross-border ownership of MNEs may also affect portfolio investment in equity. In portfolio investment equity, only dividend payments are recorded in the income account, while non-distributed profits are not included.[29]
The MNE operations described in this article mainly affect the composition of a country’s current account balance, while leaving the level of the current account balance unchanged. For instance, let’s first assume that a company residing in “country A” manufactures a pharmaceutical product and exports it to “country B”. This will generate a trade surplus in “country A” and a trade deficit in “country B”. Now, assume that the company resident in “country A” decides to move production offshore to a subsidiary, which is resident in “country C” (a financial centre economy) and subsequently the goods are sold to “country B”. This implies, all other things being equal, that the current account of “country A” records a profit – from the subsidiary in “country C” – equal in size to the net exports recorded before the decision to move production offshore. Thus, the value of the current account balance of “country A” is the same in either scenario, but the composition is altered in the second scenario because an investment income surplus replaces a trade surplus.
In contrast, MNE redomiciliation strategies – i.e. relocating their headquarters to another country – may have a significant impact on headline current account balances.[30] Even if the redomiciliation of an MNE is not associated with additional economic activity in the economy of residency, the current account balance may be affected in several ways (e.g. due to attribution of net exports resulting from contract manufacturing or IPP related services trade). Primary income may be affected due to the differing treatment of reinvested earnings in FDI and portfolio equity. The country hosting the redomiciled global firm will record an improvement in the net FDI position and deterioration in the net portfolio equity position, to the extent that its shareholders are located outside the economy that hosts the new headquarters, which is typically the case for a small FDI hub. However, these two offsetting positions produce two different income streams. Reinvested earnings from foreign subsidiaries are recorded as income receipts and boost the recorded current account balance, whereas profits payments to foreign MNE shareholders are only recorded if they are distributed as dividends (in portfolio investment).
### 2.3 MNEs and cross-border financial and national accounts
Mirroring the current account, MNE operations also affect the financial account of the b.o.p. and external assets and liabilities. Changes to a country’s net i.i.p. can be broken down into net financial transactions as captured in the financial account (FA), revaluations due to changes in exchange rates and other asset prices (REV) and other volume changes (OVC).[31]
(3)
MNEs have a particularly large impact on FDI, both in the i.i.p. and the financial account. All FDI transactions (such as withdrawals of equity and reinvestment of earnings) are recorded in the financial account and hence affect the i.i.p. as shown in equation (3). Redomiciliations, which imply cross-border movements of MNE balance sheets, may give rise to OVC as defined in equation (3) and can thereby substantially change a country’s i.i.p.
Finally, it should be noted that MNE activities not only impact cross-border statistics but also affect the broader national accounts. A case in point is Ireland, where investment income flows, related to redomiciled MNEs, the depreciation of IPPs and aircraft leasing, had a large impact on Irish GDP and GNI.[32] As a result, Ireland’s Central Statistics Office publishes a number of modified economic indicators (such as GNI* and a modified current account CA*) that exclude these phenomena and thereby provide a more focused view of domestic economic developments.
## 3 How do multinational enterprise activities affect the euro area balance of payments?
MNE operations affect the external accounts of the euro area, though their impact varies markedly across the 19 euro area countries. The aggregate b.o.p. of the euro area masks the varied impact of MNE activities on the external statistics of each individual country. Euro area countries can be classified into two groups, which present marked differences in their external accounts: six economies that are specialised in providing financial services[33] and another 13 economies that are not.
The size, composition and volatility of the current account and financial account balances of euro area financial centres are significantly affected by MNE transactions. Section 3.1 presents stylised facts on the euro area b.o.p. related to the activity of specialised subsidiaries, such as SPEs in financial centres, whose location is primarily determined by tax-related, financial and regulatory considerations. Section 3.2 then focuses on the impact that SPEs have on FDI.
### 3.1 Euro area current account
When comparing the composition and size of the current accounts of financial centres with those of other economies in the euro area, five key features stand out.
First, financial centres in the euro area share a similar current account composition: they exhibit large trade surpluses that are partly counterbalanced by income deficits. This is shown in Chart 3 and corroborated by the empirical evidence in Box 3 based on a larger sample of the top ten global financial centres. The trade surpluses of financial centres often reflect exports with large value added, such as those related to licences in the field of information and communications technology. The literature on global value chains (GVCs) has established that value added is mainly created in very upstream activities (e.g. research and development, design and financial services) or very downstream activities (e.g. merchanting, logistics, royalties from licences, branding and marketing) – financial centres appear to have comparative advantages in several of these activities.[34] If production is fragmented across borders, the allocation of value added across the firm’s network may result in financial centres appropriating a significant part of the value added on a global level. Income deficits can also reflect the practice of booking profits in financial centres.
## Box 3 Financial centres and current account imbalances
Prepared by Maurizio Michael Habib
This box provides an empirical assessment of the size of current account imbalances in financial centres compared with other countries. As noted throughout this article, MNE activities widen the gross external positions and the current accounts of financial centres, while also affecting their composition. Moreover, financial centres tend to report current account surpluses. To a large extent, these observed patterns may be ascribed to the concentration of financial activities in a limited number of financial centres, which may not exclusively reflect MNE activities, but also those of banks, other financial intermediaries and individual investors resident in financial centres. It is, therefore, important to widen this analysis to the various subcomponents of financial centre current accounts, including the goods balance, the services balance and the investment income balance.
Empirical evidence confirms that the current account surpluses of financial centres, after controlling for other potential determinants of current account balances, are particularly large from a global perspective. Current account balances and their main subcomponents, across a panel of more than 60 economies since the early 2000s, are regressed on a number of traditional drivers, such as the net foreign asset position, GDP growth, terms of trade, the oil trade balance and per capita GDP. Table A reports the regressions results for the dummy variable identifying financial centres. Notably, this variable is positive and statistically significant in the first two columns of Table A. This confirms that, everything else being equal, financial centres tend to have larger current account surpluses and trade in goods surpluses – the latter is potentially the outcome of MNE merchanting and contract manufacturing activities. Financial centres post particularly large surpluses in the services balance (see column (3) of Table A), possibly related to financial activities that are not necessarily related to MNEs. In contrast, financial centres tend to report larger deficits in the investment income balance because the dummy in column (4) is negative and statistically significant, providing further support to the finding related to the income balance of euro area economies in Section 3.1. Finally, further analysis – not included here – suggests that the positive relationship between the status of financial centres and the current account (and the negative relationship between financial centres and investment income) has become stronger in recent years.
Second, the negative income balances recorded by euro area financial centres partly reflect the redistribution of profits to foreign shareholders. The sum of the income deficits in financial centres was 5% of their cumulated GDP in 2018, whereas the primary surplus in the other euro area economies stood at 1.6% of GDP. The global value added retained in financial centres is ultimately owned by foreign investors that receive an after tax profit which is recorded as income deficits. In practice, however, while aggregate income deficits are very common in euro area financial centres, not all arise from FDI income. They may also be driven by portfolio income, as in the case of Luxembourg and Cyprus. Heterogeneity in income balance composition reflects specific business models, i.e. different net direct investment and portfolio investment asset positions, as well as their position in the global capital network and in relation to other financial centres.
Third, the practice of moving value added to low-tax euro area jurisdictions may also inflate their trade surpluses, while producing the opposite effect in higher-tax economies. This is suggested by the different scale of the vertical axes in Chart 3. MNEs pursue several strategies aimed at avoiding taxes that, while vested differently, ultimately boil down to value added being shifted across borders; these strategies affect the trade balances of euro area countries.
Available evidence shows that, as a result, the trade surplus of euro area financial centres stood at 13% of their combined GDP at the end of 2018. As shown in Chart 3, this contrasts with a surplus of less than 3% in the average of other euro area economies. Moreover, the surplus recorded by financial centres has tripled over the past decade, mirroring the growth in FDI recorded in the financial account of the b.o.p.
Fourth, contract manufacturing and merchanting conducted by entities resident in financial centres have generated a growing discrepancy between b.o.p. statistics and international trade statistics for euro area financial centres. Different concepts underlying the compilation of b.o.p. data with that of international trade statistics lead to some differences (see Section 2.2). In the euro area the gap between these two sources has been growing over time, in particular since 2015 (see Chart 4). Among euro area countries, financial centres account for the bulk of the growing discrepancy, whereas the discrepancy has remained stable for the other economies. This may be partly driven by MNE practices such as change of domicile and outsourcing of merchanting activities to specialised subsidiaries located in financial centres.
Fifth, the trade surplus of financial centres is mainly driven by value added that is produced elsewhere (i.e. foreign value added) and then re-exported. This contrasts with the group of other euro area economies, whose cumulated trade surplus primarily reflects domestic value added that is traded with final consumers. For a more detailed discussion of this feature, see Box 4.
## Box 4 A representation of trade balances in terms of value added: financial centres versus other euro area economies
Prepared by Virginia Di Nino
The goods and services we buy are composed of inputs from various countries from around the world. As a result, the trade balance of each country can be decomposed in terms of (i) the value added that the exporting country itself has produced in every relevant transaction, and (ii) the value added produced by its partner economies in every relevant transaction. The former is called domestic value added (DVA). The latter is known as foreign value added (FVA). An additional useful distinction can be made between transactions directly involving the country that absorb the production (DIR) and transactions related to the intermediate stages of GVCs. This taxonomy helps better understand the mechanisms generating the large surpluses of financial centres in the euro area as well as their contribution to the creation of global value added.[35]
Financial centres usually present large trade surpluses in value added derived from other countries, which cross the borders of these financial centres before reaching final consumers abroad (FVA-DIR). In other words, while financial centres import very little FVA that is absorbed domestically, they re-export large amounts of FVA directly to the final consumers in other countries, see Chart A – the green bars. This is not the case elsewhere. In particular, in the other euro area economies the trade surpluses reflect primarily domestic value added that is directly traded with the final consumers (DVA-DIR), as shown in Chart B – the blue bars.
Financial centres also typically present large deficits in the balance of domestic value added that is further re-exported (DVA-GVC). This reflects the fact that financial centres tend to occupy the very last stage(s) in the production chain as they are located more downstream – i.e. they are closer to the final consumers – than any other participants in the global production network.
While domestic value added exported to final consumers (DVA-DIR) is the dominant component in the trade balance of other euro area economies, it is interesting to observe that the same component measures however more than twice the size in financial centres (see blue bars in Charts A and B). Financial centres’ domestic contribution to the multi-stage production of goods and services is primarily in intangibles – the value of these is added at the very last stage and constitutes the difference between the final price and the factory price of a product.
If tax avoidance is one of the main factors shaping the trade balances in financial centres, then one should expect such balances to primarily reflect bilateral balances with higher tax, non-financial centres. Practices that manipulate trade prices mostly concern the bilateral trade relationships between financial and non-financial centres (i.e. low and higher taxation economies), thus resulting in selective trade surpluses. As a result, a more granular decomposition of the bilateral trade balances, expressed in terms of value added content, shows that financial centres hold large trade surpluses only in relation to higher taxation jurisdictions, especially euro area economies (whereas the positions in relation to other financial centres are more balanced).
In conclusion, the dissection of the trade balance in value added shows that financial centres are also conduits for real transactions. A tiny fraction of their total trade is for their own domestic consumption, whereas a significant share of their trade responds to different objectives, including escaping profit taxation.
### 3.2 Euro area foreign direct investment
FDI is a very significant component of the euro area’s financial account. In recent years it has gained prominence as a result of the striking expansion of gross transactions channelled by euro area financial centres (see Chart 5). The increase in gross FDI flows in turn reflects MNE activities, as discussed in this subsection.
The size of gross FDI flows going through financial centres is so large that they drive the aggregate developments of gross FDI in the euro area as a whole. FDI transiting through financial centres is, on average, between two and three times higher than that recorded by the other euro area economies. It is also three times more volatile. On a net basis, however, the FDI flows of the other euro area economies are more important in determining the aggregate net external position of the euro area (see Chart 5).
As a result of MNE activity, gross FDI transactions in the euro area have become less stable and less predictable compared with when FDI mostly consisted of mergers and acquisitions and greenfield investment.[36] Furthermore, the volatility of gross FDI flows in the euro area, once considered a stable source of external financing, rose above that of other financial flows in the post-crisis period (see Chart 5). Conversely, over the same period the volatility of gross FDI flows in the other euro area economies declined compared to pre-crisis values.
Another defining feature of FDI is the strong positive correlation between gross assets and liabilities, especially in financial centres. The very large degree of co-movement of FDI inflows and outflows is determined by capital passing through financial centres en route to other destinations (Chart 5).[37] Complex international investment schemes have been engineered to take advantage of favourable corporate tax and legal conditions; this makes financial centres highly interconnected while also allowing them to preserve their own business models.
The bulk of FDI transactions in financial centres are carried out by financial subsidiaries or holding companies of MNEs, including SPEs. In fact, other financial institutions’ transactions (which include these entities) dominate the size and dynamics of FDI in financial centres, whereas NFCs drive gross asset and liabilities flows in the other euro area economies (see Chart 6). According to the dedicated IMF Task Force (see Box 2), SPEs are set up by MNEs specifically to access capital markets or sophisticated financial services; isolate owner(s) from financial risks; and/or reduce regulatory and tax burden; and/or safeguard confidentiality of their transactions and owner(s).[38] Euro area financial centres offer many of these advantages. In particular, they have developed sophisticated financial instruments, such as securitised products. The SPEs located in euro area financial centres typically hold MNE equities, manage corporate MNE debt-issuance, and allocate financing across parent and subsidiaries.[39]
SPEs channel European and global capital around the world, also involving securitisation schemes. Some SPEs operate by pooling parent company debts and often transferring asset backed securities to a third subsidiary entity that is legally separate and possibly resident in another financial centre within or outside the euro area. This set of within-group financial transactions accounts for part of the earnings of SPEs and other subsidiaries in financial centres and represents another potential profit-shifting channel. Finally, to the extent that these securitisation schemes consist of within-group financial operations, neither the assets nor the risk underlying the securitised assets are shifted off the balance sheet consolidated at group level.
MNEs not only exert a significant impact on the size of gross FDI flows, but can also be a source of asymmetries in the measurement of bilateral external positions. These asymmetries are particularly pronounced for bilateral FDI income recorded in US and euro area b.o.p. (see Box 5).
## Box 5 Euro area-US current account asymmetries: the role of foreign direct investment income in the presence of multinational enterprises
Prepared by Fausto Pastoris and Martin Schmitz
In the context of recent discussions on trade policies between the United States and its trading partners, bilateral current account balances have received growing attention from policy makers and the media. However, the interpretability of bilateral current account statistics may be affected by the existence of bilateral asymmetries.[40]
In 2018 the euro area recorded a bilateral current account surplus of €131 billion vis-à-vis the United States, according to ECB data, while the euro area surplus amounted to only €40 billion in US Bureau of Economic Analysis (BEA) data (see panel (a) of Chart A).[41] The euro area surplus was around €90 billion smaller according to BEA data, due to a €23 billion smaller area goods surplus and larger euro area deficits for services and primary income (by €17 billion and €55 billion, respectively).[42] Panel (b) of Chart A reveals that the current account asymmetry has increased over time, largely due to the primary income balance, in particular in FDI.
The divergence in recording FDI income is particularly pronounced. In 2018 a paradoxical situation arose, in which both the euro area (according to ECB data) and the United States (according to BEA data) recorded positive income balances vis-à-vis each other (see panel (a) of Chart B). A large difference is observable for FDI income paid to US investors on their investments in the euro area, with the ECB recording a value around €85 billion lower than the corresponding figure reported by the BEA. In contrast, the income euro area residents earned on their FDI investment in the United States was relatively consistent in 2018 (diverging by around €18 billion).[43] The large discrepancy in FDI income paid by the euro area to the United States arises primarily from data on US FDI investment in the Netherlands, Luxembourg and Ireland.
US MNEs often resort to complex chains of ownership – involving multiple FDI relationships in several euro area countries – which complicate the estimation of FDI income. According to BEA data (see panel (b) of Chart B), more than 60% of US FDI in the euro area is invested in holding companies, while only around 10% directly reaches euro area manufacturing entities. Holding companies – which are often SPEs – may serve as the first links between US MNEs and their euro area subsidiaries. Crucially, the income of these holding companies also includes the profits earned from other entities in MNE ownership chains (known as indirectly owned affiliates). [44] Recording such income – in particular for retained earnings – is challenging for statisticians because it requires comprehensive access to MNE balance sheets and their ownership links. Differences in the information available on US MNEs may partly explain why FDI income paid to US investors is lower in European statistics compared to US statistics.
Differences in the identification of the immediate counterpart country may also contribute to the observed asymmetries in FDI income. The complexity of MNE corporate structures makes it difficult for statisticians to attribute linkages to the correct counterpart countries. There is some evidence pointing to differences between the United States and the euro area, as euro area countries attribute sizeable parts of FDI income paid to immediate counterparts in offshore financial centres (in line with international statistical standards). Subsequently, these income flows are likely to be passed through to the United States. [45] The BEA may partly attribute such income as directly received from the euro area (rather than from offshore centres).
Several work streams are active between b.o.p. compilers, monitoring and analysing the observed asymmetries of euro area countries vis-à-vis the United States – in particular in the context of FDI income flows.
## 4 Conclusions
This article analysed how the operations of large multinational enterprises (MNEs) are affecting the external accounts of the euro area and, in general, financial centres. First, the article presented how MNE operations are recorded in cross-border statistics, as well as the related measurement challenges. Second, this article showed the impact of MNEs on the external accounts of the euro area, which is most evident in the current account balances and in foreign direct investment of euro area financial centres, often involving special-purpose entities. Third, financial centre economies generally report current account surpluses that may be attributed, in part, to the activity of MNEs.
Multilateral initiatives to improve the transparency of MNE operations are necessary to ensure exchanges of information across borders both for tax and statistical purposes. Such initiatives should help national authorities to take action against tax avoidance. Moreover, close international cooperation between statistical compilers – including sharing of potentially confidential information – would help to ensure consistent cross-border recording of MNE activities, thereby improving the quality and consistency of macroeconomic statistics. In particular, such initiatives could help to ensure clarity by disentangling the transactions conducted by SPEs in the context of FDI in the b.o.p.
1. Intangible assets include non-physical items such as goodwill items, brand recognition products and intellectual property products (IPPs). IPPs, such as licenses and patents, result from varying combinations of research, development, investigation and innovation that lead to knowledge; using this knowledge is restricted by laws or other means of protection (see European system of accounts - ESA 2010). Research and development leading to assets of intellectual property are recorded as gross fixed capital formation.
2. Multinational enterprises are enterprises producing goods or delivering services in more than one country. MNE headquarters are rarely located in more than one country (the home country). However they operate in a number of other countries (the host countries).
3. Transfer pricing refers to the rules and methods for pricing transactions within and between enterprises under common ownership or control.
4. See Avdjiev, S., Everett, M., Lane, P.R. and Shin, H.S., “Tracking the international footprints of global firms”, BIS Quarterly Review, March 2018.
5. See, for example, Tørsløv, L., Wier, L. and Zucman, G., “The Missing Profits of Nations”, NBER working paper, No 24701, August 2018.
6. See Beer. S. de Mooij, R and Liu, L., “International corporate tax avoidance: A review of the channels, magnitudes, and blind spots”, Journal of Economic Surveys, Special issue, January 2019, pp. 1-29.
7. See Heckemeyer, J. H., Overesch, M., “Multinationals’ profit response to tax differentials: Effect size and shifting channels”, Canadian Journal of Economics/Revue canadienne d'économique, Vol. 50, No 4, 2017.
8. See Arulampalam, W., Devereux, M.P. and Liberini, F., “Taxes and the location of targets, Journal of Public Economics, Vol. 176, 2019, pp. 161-178.
9. See Zucman, G., “Taxing across Borders: Tracking Personal Wealth and Corporate Profits”, Journal of Economic Perspectives, fall, Vol. 28, No 4, 2014, pp. 121-148.
10. See OECD BEPS 2015 Final Reports.
11. See Directive (EU) 2016/1164 of 12 July 2016 laying down rules against tax avoidance practices that directly affect the functioning of the internal market (OJ L 193, 19.7.2016, p. 1) and Directive (EU) 2017/952 of 29 May 2017 amending Directive (EU) 2016/1164 as regards hybrid mismatches with third countries (OJ L 144, 7.6.2017, p. 1).
12. See the European Commission’s Anti-Tax Avoidance Package.
13. Initiatives in this field include the LEI (Legal Entity Identifier), the Register of Institutions and Affiliates Database (RIAD) – which is a business register, operated by the European System of Central Banks (ESCB) – and the Eurogroup’s Register (EGR), which is used for statistical purposes on MNEs in the EU and operated by the European Statistical System.
14. These economies (with the exception of the United Kingdom) are also the largest hubs in terms of the stock of foreign direct investment (FDI) to GDP. FDI is a component of the balance of payments (b.o.p.) and international investment position (i.i.p.) that is closely related to the activities of MNEs. In this article, to identify financial centres and exclude oil producing countries that tend to report large gross foreign asset positions, we focus on gross foreign liabilities instead of the sum of assets and liabilities, using IMF Balance of Payments Statistics. The activities of small off-shore financial centres fall outside the scope of this article. This is because detailed b.o.p. statistics are not always available. Moreover, the huge size of the external balance sheet of offshore centres relative to their GDP would distort some of the results shown in the article. It should be noted that advanced economies classed as financial centres are not necessarily considered to be tax havens for corporate taxation purposes. In general, these financial centres have relatively low corporate tax rates, but this is not necessarily always the case. For instance, the statutory corporate tax rates of Belgium, the Netherlands and Malta are above the average rate of all other economies in our sample.
15. See Stapel-Weber, S. et al., “Meaningful Information for Domestic Economies in the Light of Globalization - Will Additional Macroeconomic Indicators and Different Presentations Shed Light?”, NBER Working Paper, No 24859, 2018.
16. The following qualify as institutional units: households, corporations, non-profit institutions, government units and legal or social entities recognised by law or society, or other entities that may own or control them.
17. The UNCTAD World Investment Report 2015 shows that larger MNEs are associated with a greater complexity of their internal ownership structures. The top 100 MNEs in UNCTAD’s Transnationality Index have on average more than 500 affiliates across more than 50 countries, with seven hierarchical levels, involving 20 holding companies.
18. The BIS provides accounts for international banking groups consolidated to their home country (in the locational banking statistics by nationality). In a similar vein, Tissot 2016 (“Globalisation and financial stability risks: is the residency-based approach of the national accounts old-fashioned?” BIS Working Papers, No 587, 2016) argues that large MNE groups should be consolidated with the home country. This would require the sharing of confidential data across borders, as statistical data collection is also organised according to the residency principle.
19. See Lane, P.R. and Milesi-Ferretti, G.M., “International Financial Integration in the Aftermath of the Global Financial Crisis”, IMF Economic Review, 66, 2018, pp. 189–222.
20. See the IMF Committee on Balance of Payments Statistics (BOPCOM)’s Final Report of the Task Force on Special Purpose Entities, 2018.
21. See Jellema, T., Pastoris, F. and Picon-Aguilar, C., “A European perspective to observing and reporting on SPEs”, ISI World Statistics Congress, 2019, and Galstyan, V., Maqui, E., McQuade, P., "International debt and Special Purpose Entities: evidence from Ireland", ECB Working Paper Series, No 2301, ECB, Frankfurt am Main, July 2019.
22. Lane, P.R., “Risk Exposures in International and Sectoral Balance Sheet Data", World Economics, Vol. 16, Issue 4, 2015, pp. 55-76.
23. The financial account balance is defined in terms of net financial outflows, i.e. the net purchases of foreign assets by domestic residents minus the net incurrence of liabilities by domestic residents vis-à-vis foreign residents.
24. The difference between revenues from the sale and purchase of the good (net of any expenses incurred to finance, insure, store and transport the good) is recorded as net exports of merchanting in the goods balance of the country where the company resides.
25. In many countries tax authorities apply what is known as the arms-length principle to transfer pricing (i.e. the rules for pricing intra-group transactions). According to this principle, intra-group transactions need to be priced in the same way as transactions with unrelated firms.
26. Trade in IPPs is included in the other business services category of the b.o.p., while the royalties and fees for use of these assets are recorded as charges for the use of intellectual property. Non-produced intangible assets are recorded in the b.o.p.’s capital account.
27. Primary income also includes compensation of employees and other primary income.
28. An FDI relationship exists when a foreign direct investor holds equity that entitles it to 10% (or more) of the voting power in the direct investment enterprise. Once the FDI relationship is established between two entities, all financial transactions between them are recorded as FDI.
29. The asymmetric treatment of reinvested earning in FDI and portfolio investment equity is seen, in some studies, as creating biases in the current account. See, for example, Thomas J. Jordan’s speech at the University of Basel from the 23 November 2017, which notes an upward bias for the Swiss current account surplus as the FDI profits (distributed and retained) earned by Swiss MNEs are included in the Swiss current account. As these MNEs are to a large extent owned by non-Swiss residents via portfolio equity investments only dividend payments “leave” Switzerland via the income account. While not recorded in the current account, the non-distributed profits should increase the market value of the Swiss MNEs and hence increase the portfolio equity liabilities in the i.i.p. of Switzerland.
30. For a numerical example on the impact of redomiciliation on the current account, see Avdjiev et al., “Tracking the international footprints of global firms”, BIS Quarterly Review, March 2018.
31. Other volume changes include, for example, reclassifications, write-downs, breaks arising from changes in sources and methods, and changes in the residency of companies.
32. See Lane, P.R., “Notes on the treatment of global firms in national accounts”, Economic Letter Series, Vol. 2017, No 1, Central Bank of Ireland, 2017.
33. This first group includes Cyprus, Luxembourg, Ireland, the Netherlands, Malta and Belgium. They are defined as financial centres according to the size of their foreign liabilities to GDP, as described in Section 1 of this article.
34. See Cheng, K., Rehman, S., Seneviratne, D., Zhang, S., “Reaping the benefits from Global Value chains”, IMF, 2015; “Mapping Global Value Chains”, OECD, 2013; “Interconnected Economies: benefiting from Global Value Chains”, OECD, 2013. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16206824779510498, "perplexity": 5276.718891985489}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358469.34/warc/CC-MAIN-20211128043743-20211128073743-00089.warc.gz"} |
https://forum.azimuthproject.org/profile/comments/712/David%20Tanzer | #### Howdy, Stranger!
It looks like you're new here. If you want to get involved, click one of these buttons!
# David Tanzer
David Tanzer
Joined
-
Visits
902
Last Active
Roles
• Welcome, Johan! I studied c.s., but have a gap in my learning when it comes to Haskell. Can you write a little bit about some of the key ways in which constructs from category theory are used in Haskell? Of course I could find this on the web, bu…
• Welcome, Steele! That sounds really intriguing. Could you describe a small example of how you apply category theory to the design of software?
• The number of comparisons QUICKSORT makes to sort a list of $$n$$ values is a random variable. Let $$X_n$$ be the number of comparisons QUICKSORT makes to sort a list of $$n$$ values, and let $$M_n = E[X_n]$$. Let $$Y$$ be rank of the number $$x_i… • From Introduction, cont'd. Analysis of QUICKSORT. QUICKSORT(\(x_1, ..., x_n$$): If $$n = 0$$ or $$n = 1$$, no sorting needing, so return. Randomly choose $$x_i$$. Divide up the remaining values into two sets $$L$$ and $$H$$, where $$L$$ is the …
• Notes from Introduction. Examples of stochastic systems: CPU with jobs arriving in random fashion; network multiplexor with packets arriving randomly; a store with random demands on its inventory. Prob. A monkey hits keys on a typewriter randoml…
• Great!! This was very helpful in explaining -- in a mathematically entertaining way -- the pith of some nice topics in applied category theory.
• Good idea. I just created a category called Applied Category Theory Formula Examples. Thanks Fredrick.
Comment by David Tanzer May 2018
• Hi Maria, I am curious to know about the connections between music and category theory that you have drawn. Would it be possible for you to post a link to a paper or two that you consider to be of interest? I tried to download a couple from your …
Comment by David Tanzer May 2018
• in Matthew Doty's puzzles MD1 - MD3 we learn that the logical operations "and" and "or" can be described as right and left adjoints. Wow!
• I think of meet as what is common to the two -- the point at which they meet -- so that's the intersection. Joining implies combining them to make something bigger, which is the union.
• Great!!
• Great, Fredrick!! Thank you
• Let P be a poset. Then the least upper bound of {} is, naturally, the least of the upper bounds of {}. Every member of P is an upper bound for {}. So the least upper bound for {} is the least member of P, i.e., the minimum element of P -- if suc…
• There's a good function going the other way, $$f^{\ast}: PY \rightarrow PX$$, the preimage function, defined by $$f^{\ast}(S \in PY) = \{x \in X: f(x) \in S\} .$$ Claim: this is right adjoint to the image function $$f_{\ast}: PX \rightarrow PY$$. …
• Inverse functions as a special case of adjoints: if $$A$$ and $$B$$ be preorders, where the ordering is the identity relation, then $$f: A \rightarrow B$$ and $$g: B \rightarrow A$$ are adjoint iff they are inverse functions.
• BTW I've been with the Azimuth Project for some time now, and am running the server for the wiki and the forum. I'm not from the first generation of Azimuth, though, which was ending as I joined. This third generation is very exciting! If peopl…
• It makes logical sense to have a separate discussion for each exercise. The downside is that there will be a lot more discussions, which could make it harder to navigate to the main discussions - especially the lectures. We could control this usi…
• Puzzle 7. Why does any set with a reflexive and transitive relation $$\le$$ yield a category with at most one morphism from any object $$x$$ to any object $$y$$? That is: why are reflexivity and transitivity enough? To answer this, let's go thro…
• Michael Hong wrote: In wikipedia, it says a poset can only have at most one relation per pair. Why is this? Two relations per pair would mean both $$x \le y$$ and $$y \le x$$. By antisymmetry, then $$x = y$$. Then the "pair" is just…
• Michael Hong wrote: Must posets be connected? Let S be a set, which is ordered only by the identity relation -- i.e., all that we have is $$x \le x$$, for all $$x$$. That's a poset. And it's as disconnected as it gets: there are no …
• In comment # 101, Matthew Doty addressed this question: What are all the total orders that are also equivalence relations? And proved that for these orders: $$\forall x, y. x \leq y$$. That's correct, but a stronger conclusion follows: the o…
• Example of a monotone mapping: Let T be the nodes of a tree, ordered by the following relation: $$x \le y$$ means $$x$$ is an ancestor of $$y$$ in the tree. Let $$h(n)$$ be the height of the node in the tree, i.e. the number of edges in the path …
• Example of a monotone mapping: a function that maps $$\mathbb{Z}$$ to $$\mathbb{Z}$$ by adding a constant $$c$$ to the input integer, i.e., the translation function lambda x: x + c.
• And scaling by a negative integer is an antimonotone mapping.
• Scaling by a nonnegative integer is a monotone mapping from the integers into the integers (using the standard ordering).
• Lingo: monotone functions are also called order-preserving functions. The dual notion is an order-reversing function, aka an anti-monotone function.
• The powerset $$2 ^ S$$, which consists of all subsets of $$S$$, is a preorder (and a poset) under the inclusion relation. For another set $$T$$, the function from $$2 ^ S$$ into $$2 ^ T$$ that is defined by intersection with $$T$$ is monotone.
• The identity function on a preorder is a monotone function. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8706285357475281, "perplexity": 499.7428639954674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247490107.12/warc/CC-MAIN-20190219122312-20190219144312-00466.warc.gz"} |
https://canvas.upenn.edu/courses/1413701/quizzes/2059197 | # Regression Penalties
• Due No due date
• Points 10
• Questions 10
• Time Limit None
• Allowed Attempts Unlimited
## Instructions
Suppose we are given some data
X= $\begin{array}{cc}1& 1\\ 0& 2\\ 1& 1\\ -2& -2\\ 0& 1\\ -5& -5\end{array}$
and values
Y=$\begin{array}{c}2\\ -3\\ 3\\ -5\\ 0\\ -9\end{array}$
Using ordinary least squares regression we get the following weights ${w}_{1}=3,{w}_{2}=-1$${w}_{1}=3,{w}_{2}=-1$ (rounded to nearest integer for simplicity)
Only registered, enrolled users can take graded quizzes | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9866044521331787, "perplexity": 19461.927118995558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578528481.47/warc/CC-MAIN-20190420020937-20190420042937-00155.warc.gz"} |
http://blog.invibe.net/ | # 2016-07-16 Predictive coding of motion in an aperture
After reading the paper http://www.jneurosci.org/content/34/37/12601.full by Helena X. Wang, Elisha P. Merriam, Jeremy Freeman, and David J. Heeger (The Journal of Neuroscience, 10 September 2014, 34(37): 12601-12615; doi: 10.1523/JNEUROSCI.1034-14.2014), I was interested to test the hypothesis they raise in the discussion section :
The aperture-inward bias in V1–V3 may reflect spatial interactions between visual motion signals along the path of motion (Raemaekers et al., 2009; Schellekens et al., 2013). Neural responses might have been suppressed when the stimulus could be predicted from the responses of neighboring neurons nearer the location of motion origin, a form of predictive coding (Rao and Ballard, 1999; Lee and Mumford, 2003). Under this hypothesis, spatial interactions between neurons depend on both stimulus motion direction and the neuron's relative RF locations, but the neurons themselves need not be direction selective. Perhaps consistent with this hypothesis, psychophysical sensitivity is enhanced at locations further along the path of motion than at motion origin (van Doorn and Koenderink, 1984; Verghese et al., 1999).
Concerning the origins of aperture-inward bias, I want to test an alternative possibility. In some recent modeling work:
Laurent Perrinet, Guillaume S. Masson. Motion-based prediction is sufficient to solve the aperture problem. Neural Computation, 24(10):2726--50, 2012 http://invibe.net/LaurentPerrinet/Publications/Perrinet12pred
I was surprised to observe a similar behavior: the trailing edge was exhibiting a stronger activation (i. e. higher precision revealed by a lower variance in this probabilistic model) while I would have thought intuitively the leading edge would be more informative. In retrospect, it made sense in a motion-based prediction algorithm as information from the leading edge may propagate in more directions (135° for a 45° bar) than in the trailing edge (45°, that is a factor of 3 here). While we made this prediction we did not have any evidence for it.
In this script the predictive coding is done using the MotionParticles package and for a http://motionclouds.invibe.net/ within a disk aperture.
Read more…
# 2016-06-25 compiling notebooks into a report
For a master's project in computational neuroscience, we adopted a quite novel workflow to go all the steps from the learning of the small steps to the wrtiting of the final thesis. Though we were flexible in our method during the 6 months of this work, a simple workflow emerged that I describe here.
Read more…
# 2016-06-01 Compiling and using pyNN + NEST + python3
PyNN is a neural simulation language which works well with the NEST simulator. Here I show my progress in using both with python 3 and how to show results in a notebook.
Read more…
# 2016-02-19 Compiling and using pyNN + NEST + python3
PyNN is a neural simulation language which works well with the NEST simulator. Here I show my progress in using both with python 3 and how to show results in a notebook.
Read more…
# 2016-01-20 Using scratch to illustrate the Flash-Lag Effect
Scratch (see https://scratch.mit.edu/) is a programming language aimed at introducing coding litteracy to schools and education. Yet you can implement even complex algorithms and games. It is visual, multi-platform and critically, open-source. Also, the web-site educates to sharing code and it is very easy to "fork" an existing project to change details or improve it. Openness at its best!
During a visit of a 14-year schoolboy at the lab, we used that to make a simple psychopysics experiment available at https://scratch.mit.edu/projects/92044597/ :
Read more…
# 2016-01-19 élasticité trames V1
L'installation Elasticité dynamique agit comme un filtre et génère de nouveaux espaces démultipliés, comme un empilement quasi infini d'horizons. Par principe de réflexion, la pièce absorbe l'image de l'environnement et accumule les points de vue ; le mouvement permanent requalifie continuellement ce qui est regardé et entendu.
On va maintenant utiliser des forces elastiques pour coordonner la dynamique des lames dans la trame.
Read more…
# 2016-01-18 bootstraping posts for élasticité
L'installation Elasticité dynamique agit comme un filtre et génère de nouveaux espaces démultipliés, comme un empilement quasi infini d'horizons. Par principe de réflexion, la pièce absorbe l'image de l'environnement et accumule les points de vue ; le mouvement permanent requalifie continuellement ce qui est regardé et entendu.
.. media:: http://vimeo.com/150813922
Ce meta-post gère la publication sur http://blog.invibe.net.
Read more…
# 2015-12-11 Reproducing Olshausen's classical SparseNet (part 4)
In this notebook, we write an example script for the sklearn library showing the improvement in the convergence of dictionary learning induced by the introduction of Olshausen's homeostasis.
See also :
Read more…
# 2015-12-11 Reproducing Olshausen's classical SparseNet (part 3)
In this notebook, we test the convergence of SparseNet as a function of different learning parameters. This shows the relative robusteness of this method according to the coding parameters, but also the importance of homeostasis to obtain an efficient set of filters:
• first, whatever the learning rate, the convergence is not complete without homeostasis,
• second, we achieve better convergence for similar learning rates and on a certain range of learning rates for the homeostasis
• third, the smoothing parameter alpha_homeo has to be properly set to achieve a good convergence.
• last, this homeostatic rule works with the diferent variants of sparse coding.
See also :
Read more…
# 2015-12-08 homebrew cask : updating an existing cask
• A new version of owncloud is out, I will try today to push that new infomation to http://caskroom.io/
• I will base things on this previous contribution
• set-up variables
cd \$(brew --prefix)/Library/Taps/caskroom/homebrew-cask
github_user='meduz'
project='owncloud'
git remote -v
Read more… | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6828696727752686, "perplexity": 6965.364165952474}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982296721.55/warc/CC-MAIN-20160823195816-00152-ip-10-153-172-175.ec2.internal.warc.gz"} |
https://learnzillion.com/lesson_plans/8076-use-precise-language-to-manage-complexity | Use precise language to manage complexity
teaches Common Core State Standards CCSS.ELA-Literacy.W.9-10.2d http://corestandards.org/ELA-Literacy/W/9-10/2/d
You have saved this lesson!
Here's where you can access your saved items.
Dismiss
Card of | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8509781956672668, "perplexity": 25319.609644067365}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542851.96/warc/CC-MAIN-20161202170902-00240-ip-10-31-129-80.ec2.internal.warc.gz"} |
https://astronomy.stackexchange.com/questions/30243/are-astronomers-waiting-to-see-something-in-an-image-from-a-gravitational-lens-t | # Are astronomers waiting to see something in an image from a gravitational lens that they've already seen in an adjacent image?
@RobJeffries' answer to the question Does gravitational lensing provide time evolution information? points out that there can be a substantial different in arrival times of light from a given source seen in different images from a gravitational lens.
The linked paper there shows "$$\Delta t$$" values of the order of 30 days, but it is hard for me to understand what the actual observable is.
What I'm asking for here is (ideally) if there is a well defined event that a lay person could understand, something blinking or disappearing or brightening substantially that has already been seen in one image produced by a gravitational lens that has not yet been seen in one of the other images, and is expected to be seen in the (presumably near) future.
If something like this does not exist, a substitute could be a case where this has happened, and the second sighting of the same event was predicted, waited for, and observed on time.
I have no idea if this happens all the time, or has never happened yet.
• FWIW, this kind of thing is a bit easier in radio astronomy. I've been looking for a good relevant article, but without success, but I've only been looking for stuff on HTML pages, not PDF links. I read about it years ago, with the astronomer comparing tapes of radio data, with a time delay of several months, but I can't remember where I read it. Apr 5 '19 at 10:59
• I think the observable here is that you cross-correlate time-series observations, usually at radio wavelengths, to determine the delay. The "events" are just the general variability of the background quasar/AGN. I think there is an example where a type Ia supernova was seen in >1 image at different times. Apr 5 '19 at 11:09
• @PM2Ring re the typo, next time feel free to just make an edit. It's pretty common in the more civil SE sties for people to edit each other's posts I think. As for radio, I'm not so interested in time correlations as I am "...a well defined event that a lay person could understand, something blinking or disappearing or brightening substantially..."
– uhoh
Apr 5 '19 at 11:17
• @RobJeffries ditto re "event". So a supernova would exactly fit the bill!
– uhoh
Apr 5 '19 at 11:21
• Optical observations may be easier for a lay person to relate to, but radio gives you a much more useful fingerprint / barcode. Bear in mind that the different paths mean that the signals experience different filtering & distortions, and the source isn't a point, so the images aren't of exactly the same thing, so it can be quite hard to even verify that they actually come from the same source. Apr 5 '19 at 11:48
What you do is cross-correlate the observational datasets for the multiple sources and look for the "lag" that maximises the cross-correlation function. Generally speaking, the "events" are not really individual flares or dips, but the summation of all the time variability that is seen.
The variability in question usually comes about from the central portions of the "central engine" of a quasar or active galactic nucleus. For a supermassive black hole at the centre of a quasar, the innermost stable circular orbit is at 3 times the Schwarzschild radius ($$= 6GM_{\rm BH}/c^2$$). This basically defines the inner edge of any accretion disk and if we divide this by $$c$$ then we get a a timescale for the most rapid variations in luminosity output. So this is very nearly the same formula as presented in the linked question $$\tau \sim 3\times 10^{-5} \left(\frac{M_{\rm BH}}{M_{\odot}}\right)\ {\rm sec}\, ,$$ except that the supermassive black holes are much less massive than entire foreground lensing galaxies (usually). This the timescale of variation is much shorter than the potential delay time due to gravitational lensing. It is this difference in timescales that means there is plenty of "structure" within the light curves that can be locked onto by the cross-correlation.
There is a notable example however of a type Ia supernova being seen in a multiply lensed image (Goobar et al. (2017), but the predicted delay in the light curves was $$<35$$ hours and the light curves are not good enough to measure this. This technique is an active area of research and a major bit of science that is exprected to be achieved by the Large Synoptic Survey Telescope (Huber et al. 2019).
Finally, the thing you are really looking for has happened in terms of SN "Refsdal". This was a type II supernova seen to "go off" in a multiply imaged galaxy, seen through/around a galaxy cluster. A prediction was made, based on a model for the cluster gravitational potential, that another image ought to appear within a year or two. This further image was then detected by Kelly et al. (2016) in a paper entitled "Deja vu all over again".
From Kelly et al. (2016) ("Deja vu all over again"). See "SX" in the third panel:
Figure 1. Coadded WFC3-IR F125W and F160W exposures of the MACS J1149.5+2223 galaxy-cluster field taken with HST. The top panel shows images acquired in 2011 before the SN appeared in S1–S4 or SX. The middle panel displays images taken on 2015 April 20 when the four images forming the Einstein cross are close to maximum brightness, but no flux is evident at the position of SX. The bottom panel shows images taken on 2015 December 11 which reveal the new image SX of SN Refsdal. Images S1–S3 in the Einstein cross configuration remain visible in the 2015 December 11 coadded image (see Kelly et al. 2015a and Rodney et al. 2015b for analysis of the SN light curve).
Kelly, P. L., Brammer, G., Selsing, J., et al. 2015a, ApJ, submitted (arXiv:1512.09093)
Rodney, S. A., Strolger, L.-G., Kelly, P. L., et al. 2015b, ApJ, in press (arXiv:1512.05734)
• I've added Figure 1 from "Deja vu all over again". I hope you don't mind, it's just so cool!
– uhoh
Apr 5 '19 at 11:55
• Named after the gravitational lensing pioneer Sjur Refsdal. Apr 5 '19 at 11:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6504616737365723, "perplexity": 983.4006476180311}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362969.51/warc/CC-MAIN-20211204094103-20211204124103-00368.warc.gz"} |
http://www.turkmath.org/beta/seminer.php?id_seminer=2645 | # turkmath.org
Türkiye'deki Matematiksel Etkinlikler
28 Nisan 2021, 13:15
### İstanbul Üniversitesi Matematik Bölümü Seminerleri
Nazife Erkurşun Özcan
Hacettepe Üniversitesi, Türkiye
It is a well-known fact that the asymptotic behavior of certain physical systems constitutes an integral part of physics and mathematics. The solutions of equations are typically expressed by one-parameter operator semigroups, and therefore, it is essential to know their long-term behavior. One needs to have effective methods and techniques for the examination of their asymptotic behavior. Great attention has been attracted to the study of the connection between uniform ergodicities and ergodic coefficients of semigroups defined on the classical function spaces. However, in those investigations, the limiting operator was taken as a rank-one projection. If one wants to consider the limiting operators as some projections then the standard ergodic coefficient is no longer effective. Consequently, a generalized Dobrushin ergodicity coefficient $δ_P(T)$ for Markov operators (acting on an abstract state space) concerning a projection P has been introduced.
The main goal of the talk is to explore stability and perturbation bounds for positive $C_0$-semigroups defined on abstract state spaces using a generalized Dobrushin ergodicity coefficient. It is worth noting that we also establish that the uniform and weak stabilities of time average Markov operators through the generalized ergodicity coefficient shed new light on this subject.
NOT: The seminar will be held online via the Zoom program. Those who want to participate should send an e-mail to "huseyinuysal@istanbul.edu.tr" in order to receive the Zoom meeting ID and Passcode.
Matematik Türkçe
Online
iu 26.04.2021
## İLETİŞİM
Akademik biriminizin ya da çalışma grubunuzun ülkemizde gerçekleşen etkinliklerini, ilan etmek istediğiniz burs, ödül, akademik iş imkanlarını veya konuk ettiğiniz matematikçileri basit bir veri girişi ile kolayca turkmath.org sitesinde ücretsiz duyurabilirsiniz. Sisteme giriş yapmak için gerekli bilgileri almak ya da görüş ve önerilerinizi bildirmek için iletişime geçmekten çekinmeyiniz. Katkı verenler listesi için tıklayınız.
Özkan Değer ozkandeger@gmail.com
## DESTEK VERENLER
31. Journees Arithmetiques Konferansı Organizasyon Komitesi
Web sitesinin masraflarının karşılanması ve hizmetine devam edebilmesi için siz de bağış yapmak, sponsor olmak veya reklam vermek için lütfen iletişime geçiniz. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8863357901573181, "perplexity": 11907.224635426697}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949533.16/warc/CC-MAIN-20230331020535-20230331050535-00609.warc.gz"} |
https://hal.archives-ouvertes.fr/LPNHE/tel-03114988v4 | # Approche multi-diagnostique des émissions de haute énergie des microquasars à trous noirs
Abstract : Accretion-ejection phenomena are seen across the whole Universe, in all wavelenghts and size scales; from the formation of young stars, to active galactic nuclei. Typical timescales are also very different; a few seconds for g-ray bursts to several billion years for stellar formation.In our galaxy, accreting black holes, usually called microquasars, have the advantage of evolving on human timescales: from one day to several weeks. However, the link between accretion and ejection is misundurstanding. The connection between these mechanisms is the aim of my thesis work. In the first part, I study the spectrotemporal changes observed during outbursts. I compare the fast temporal variabilities observed during four outbursts of GX 339–4 to physical parameters from an accretion-ejection model. In this model, the spectral evolution is due to an interplay between two accretion flows: a standard accretion disk in the outer parts and a jet-emitting disk in the inner parts. I highlight the link between the observed variability and the transitional radius between the two accretion flows. In the second part, I use the INTEGRAL satellite which is an ideal instrument to probe the behaviour of these sources at higher energies, typically between 100 keV and 1000 keV. Here, I research a non-thermal emission signature for several sources which have very distinct spectral behaviour. This emission is, hitherto, unconstrained and its origin is widely debated. I detect and characterize this component for several sources and spectral states. I then present results of a polarimetric study performed on the same sources. Polarization from this emission brings crucial and decisive insights into the emission mechanism from this component. I finally discuss about the implication of these results and the potential origin of this emission while comparing properties of the different sources.
Keywords :
Document type :
Theses
https://tel.archives-ouvertes.fr/tel-03114988
Contributor : Abes Star : Contact
Submitted on : Tuesday, May 4, 2021 - 11:18:09 AM
Last modification on : Wednesday, June 2, 2021 - 4:27:42 PM
### File
CANGEMI_Floriane_va2.pdf
Version validated by the jury (STAR)
### Identifiers
• HAL Id : tel-03114988, version 4
### Citation
Floriane Cangemi. Approche multi-diagnostique des émissions de haute énergie des microquasars à trous noirs. Astrophysique [astro-ph]. Université de Paris, 2020. Français. ⟨NNT : 2020UNIP7075⟩. ⟨tel-03114988v4⟩
Record views | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9296389222145081, "perplexity": 4194.571773533768}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487586390.4/warc/CC-MAIN-20210612193058-20210612223058-00587.warc.gz"} |
https://conference.ippp.dur.ac.uk/event/530/session/7/contribution/31 | XQCD 2016
1-3 August 2016
Plymouth University
Europe/London timezone
The 14th International workshop on QCD in extreme conditions, Aug. 1-3 2016
Home > Timetable > Session details > Contribution details
Contribution Talk
Plymouth University - Portland Square
Quantum Field Theories of dense, cold matter
Study of the phase diagram of dense two-color QCD with $N_f=2$ within lattice simulation
Speakers
• Mr. Aleksandr NIKOLAEV
Content
In this talk we present our results on the low-temperature scan of the phase diagram of dense two-color QCD with two flavors of quarks. The study is conducted using lattice simulation with rooted staggered quarks and real baryon chemical potential. At small chemical potential we observe the hadronic phase, where the theory is in a confining state, chiral symmetry is broken, the baryon density is zero and there is no diquark condensate. At the critical point $\mu = m_{\pi}/2$ we observe the expected second order transition to Bose-Einstein condensation of scalar diquarks. In this phase the system is still in confinement in conjunction with non-zero baryon density, but the chiral symmetry is restored in the chiral limit. We have also found that in the first two phases the system is well described by chiral perturbation theory. For larger values of the chemical potential the system turns into another phase, where the relevant degrees of freedom are fermions residing inside the Fermi sphere, and the diquark condensation takes place on the Fermi surface. In this phase the system is still in confinement, chiral symmetry is restored and the system is very similar to the quarkyonic state predicted by SU($N_c$) theory at large $N_c$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9538401365280151, "perplexity": 1113.7493396196605}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805809.59/warc/CC-MAIN-20171119210640-20171119230640-00251.warc.gz"} |
https://secure.sky-map.org/starview?object_type=1&object_id=2077&object_name=HD+168914&locale=DE | SKY-MAP.ORG
Home Getting Started To Survive in the Universe News@Sky Astro Photo The Collection Forum Blog New! FAQ Press Login
# 107 Her
Contents
### Images
DSS Images Other Images
### Related articles
Rotational velocities of A-type stars in the northern hemisphere. II. Measurement of v sin iThis work is the second part of the set of measurements of v sin i forA-type stars, begun by Royer et al. (\cite{Ror_02a}). Spectra of 249 B8to F2-type stars brighter than V=7 have been collected at Observatoirede Haute-Provence (OHP). Fourier transforms of several line profiles inthe range 4200-4600 Å are used to derive v sin i from thefrequency of the first zero. Statistical analysis of the sampleindicates that measurement error mainly depends on v sin i and thisrelative error of the rotational velocity is found to be about 5% onaverage. The systematic shift with respect to standard values fromSlettebak et al. (\cite{Slk_75}), previously found in the first paper,is here confirmed. Comparisons with data from the literature agree withour findings: v sin i values from Slettebak et al. are underestimatedand the relation between both scales follows a linear law ensuremath vsin inew = 1.03 v sin iold+7.7. Finally, thesedata are combined with those from the previous paper (Royer et al.\cite{Ror_02a}), together with the catalogue of Abt & Morrell(\cite{AbtMol95}). The resulting sample includes some 2150 stars withhomogenized rotational velocities. Based on observations made atObservatoire de Haute Provence (CNRS), France. Tables \ref{results} and\ref{merging} are only available in electronic form at the CDS viaanonymous ftp to cdsarc.u-strasbg.fr (130.79.125.5) or viahttp://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/393/897 On the Near-Infrared Size of VegaNear-infrared (2.2 μm) long baseline interferometric observations ofVega are presented. The stellar disk of the star has been resolved, andthe data have been fitted with a limb-darkened stellar disk of diameterΘLD=3.28+/-0.01 mas. The derived effective temperatureis Teff=9553+/-111 K. However, the residuals resulting fromthe stellar disk model appear to be significant and display organizedstructure. Instrumental artifacts, stellar surface structure, stellaratmosphere structure, and extended emission/scattering from the debrisdisk are discussed as possible sources of the residuals. While thecurrent data set cannot uniquely determine the origin of the residuals,the debris disk is found to be the most likely source. A simple debrisdisk model, with 3%-6% of Vega's flux emanating from the disk at r<~4AU, can explain the residuals. The Physical Basis of Luminosity Classification in the Late A-, F-, and Early G-Type Stars. I. Precise Spectral Types for 372 StarsThis is the first in a series of two papers that address the problem ofthe physical nature of luminosity classification in the late A-, F-, andearly G-type stars. In this paper, we present precise spectralclassifications of 372 stars on the MK system. For those stars in theset with Strömgren uvbyβ photometry, we derive reddenings andpresent a calibration of MK temperature types in terms of the intrinsicStrömgren (b-y)0 index. We also examine the relationshipbetween the luminosity class and the Strömgren c1 index,which measures the Balmer jump. The second paper will address thederivation of the physical parameters of these stars, and therelationships between these physical parameters and the luminosityclass. Stars classified in this paper include one new λ Bootisstar and 10 of the F- and G-type dwarfs with recently discoveredplanets. HD 169981 - an overlooked photometric binary?In 1999 and 2000 we obtained spectroscopic and photometric observationsof the A-type binary star HD 169981. The observations were part of acampaign to search for short-term photometric and radial-velocityvariations among early-type binaries. From the radial velocities of 18metal lines we derived more precise orbital elements. Quiteunexpectedly, our photometric data show a dip that could be caused by aneclipse. The same feature is also visible in the Hipparcos data. Fromour analysis of the available observations we have estimated thephysical parameters of the binary. Neither the spectroscopic nor thephotometric observations hint at any short-term variations. Based onspectroscopic observations made with the 2-m telescope at theThüringer Landessternwarte Tautenburg, Germany, and photometricobservations made with the 0.6-m telescope of the National AstronomicalObservatory Rozhen, Bulgaria. A spectroscopic survey for lambda Bootis stars. II. The observational datalambda Bootis stars comprise only a small number of all A-type stars andare characterized as nonmagnetic, Population i, late B to early F-typedwarfs which show significant underabundances of metals whereas thelight elements (C, N, O and S) are almost normal abundant compared tothe Sun. In the second paper on a spectroscopic survey for lambda Bootisstars, we present the spectral classifications of all program starsobserved. These stars were selected on the basis of their Strömgrenuvbybeta colors as lambda Bootis candidates. In total, 708 objects insix open clusters, the Orion OB1 association and the Galactic field wereclassified. In addition, 9 serendipity non-candidates in the vicinity ofour program stars as well as 15 Guide Star Catalogue stars were observedresulting in a total of 732 classified stars. The 15 objects from theGuide Star Catalogue are part of a program for the classification ofapparent variable stars from the Fine Guidance Sensors of the HubbleSpace Telescope. A grid of 105 MK standard as well as pathological''stars guarantees a precise classification. A comparison of our spectralclassification with the extensive work of Abt & Morrell(\cite{Abt95}) shows no significant differences. The derived types are0.23 +/- 0.09 (rms error per measurement) subclasses later and 0.30 +/-0.08 luminosity classes more luminous than those of Abt & Morrell(\cite{Abt95}) based on a sample of 160 objects in common. The estimatederrors of the means are +/- 0.1 subclasses. The characteristics of oursample are discussed in respect to the distribution on the sky, apparentvisual magnitudes and Strömgren uvbybeta colors. Based onobservations from the Observatoire de Haute-Provence, OsservatorioAstronomico di Padova-Asiago, Observatório do Pico dosDias-LNA/CNPq/MCT, Chews Ridge Observatory (MIRA) and University ofToronto Southern Observatory (Las Campanas). Catalogue of Apparent Diameters and Absolute Radii of Stars (CADARS) - Third edition - Comments and statisticsThe Catalogue, available at the Centre de Données Stellaires deStrasbourg, consists of 13 573 records concerning the results obtainedfrom different methods for 7778 stars, reported in the literature. Thefollowing data are listed for each star: identifications, apparentmagnitude, spectral type, apparent diameter in arcsec, absolute radiusin solar units, method of determination, reference, remarks. Commentsand statistics obtained from CADARS are given. The Catalogue isavailable in electronic form at the CDS via anonymous ftp tocdsarc.u-strasbg.fr (130.79.128.5) or viahttp://cdsweb.u-strasbg.fr/cgi-bin/qcar?J/A+A/367/521 Hot Inner Disks that Appear and Disappear around Rapidly Rotating A-Type DwarfsAt any one time, approximately one-quarter of the most rapidly rotatingnormal A-type dwarfs (V sin i >= 200 km s-1) show shell lines of TiII in the near-ultraviolet. Our observations during 22 years show thatthe lines appear and disappear on timescales of decades but do notdisplay significant changes within 1 year. This implies that they arenot remnants of the star formation but rather are probably caused bysporadic mass-loss events. A working hypothesis is that all A-type starsthat are rotating near their limits have these shells, but for onlyone-quarter of the time. Because these lines do not appear in stars withsmaller sin i, the shells must be disks. These are hot inner disks thatmay or may not be related to the cool outer disks seen by Smith andTerrile around beta Pic or through infrared excesses around Vega andother A-type dwarfs. The similar, limited line widths indicate that thedisks are ~7 R* above the stellar surfaces. The Relation between Rotational Velocities and Spectral Peculiarities among A-Type StarsAbstract image available at:http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1995ApJS...99..135A&db_key=AST Vitesses radiales. Catalogue WEB: Wilson Evans Batten. Subtittle: Radial velocities: The Wilson-Evans-Batten catalogue.We give a common version of the two catalogues of Mean Radial Velocitiesby Wilson (1963) and Evans (1978) to which we have added the catalogueof spectroscopic binary systems (Batten et al. 1989). For each star,when possible, we give: 1) an acronym to enter SIMBAD (Set ofIdentifications Measurements and Bibliography for Astronomical Data) ofthe CDS (Centre de Donnees Astronomiques de Strasbourg). 2) the numberHIC of the HIPPARCOS catalogue (Turon 1992). 3) the CCDM number(Catalogue des Composantes des etoiles Doubles et Multiples) byDommanget & Nys (1994). For the cluster stars, a precise study hasbeen done, on the identificator numbers. Numerous remarks point out theproblems we have had to deal with. An Atlas of Balmer Lines - H-Delta and H-GammaAbstract image available at:http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1993A&AS..101..599C&db_key=AST The radial velocity curve and the two-dimensional spectral classification of the Delta SCT variable HD 18878The radial velocities of HD 18878 were measured from spectra of this newDelta Sct type variable. The radial velocity curve with elementsdetermined from the photoelectric observations of Frolov et al. (1990)is derived. The mean radial velocity is -20 km/s, and the amplitude ofits variation is 34 km/s. On the basis of spectral criteria and thelocation of this star on a two-color diagram, it is argued that HD 18878is an A9III star. ICCD speckle observations of binary stars. I - A survey for duplicity among the bright starsA survey of a sample of 672 stars from the Yale Bright Star Catalog(Hoffleit, 1982) has been carried out using speckle interferometry onthe 3.6-cm Canada-France-Hawaii Telescope in order to establish thebinary star frequency within the sample. This effort was motivated bythe need for a more observationally determined basis for predicting thefrequency of failure of the Hubble Space Telescope (HST) fine-guidancesensors to achieve guide-star lock due to duplicity. This survey of 426dwarfs and 246 evolved stars yielded measurements of 52 newly discoveredbinaries and 60 previously known binary systems. It is shown that thefrequency of close visual binaries in the separation range 0.04-0.25arcsec is 11 percent, or nearly 3.5 times that previously known. Catalogue of the energy distribution data in spectra of stars in the uniform spectrophotometric system.Not Available Prediction of spectral classification from photometric observations - Application of the UVBY beta photometry and the MK spectra classification. II - General caseAbstract image available at:http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1980A&A....85...93M&db_key=AST Prediction of spectral classification from photometric observations-application to the UVBY beta photometry and the MK spectral classification. I - Prediction assuming a luminosity classAn algorithm based on multiple stepwise and isotonic regressions isdeveloped for the prediction of spectral classification from photometricdata. The prediction assumes a luminosity class with reference touvbybeta photometry and the MK spectral classification. The precisionattained is about 90 percent and 80 percent probability of being withinone spectral subtype respectively for luminosity groups I and V and forluminosity groups III and IV. A list of stars for which discrepanciesappear between photometry and spectral classification is given. Detection of errors in spectral classification by cluster analysisCluster analysis methods are applied to the photometric catalogue ofuvby-beta measurements by Hauck and Lindemann (1973) and point out 249stars the spectral type of which should be reconsidered or thephotometric indices of which should be redetermined. Absolute luminosity calibration of Stroemgren's 'late group'A statistical parallax method based on the principle of maximumlikelihood is used to calibrate absolute luminosities for samples ofcooler stars constituting the 'late group' defined by Stromgren (1966).The samples examined include 415 stars of all luminosity classes and asubset comprising 86 main-sequence stars. Linear calibration relationsinvolving the Stromgren beta, (b-y), and bracketted c1 indices arederived which yield mean absolute magnitudes with an accuracy of 0.09magnitude for the overall sample and 0.13 magnitude for themain-sequence subsample. Several second-order relations are considered,and the results are compared with Crawford's (1975) calibrations as wellas with mean absolute magnitudes obtained from trigonometric parallaxes.The possible effect of interstellar absorption on the calibrationrelations is also investigated. Catalogue of early-type stars measured in a narrow-band photometric systemA compilation of the photoelectric measurements in the Barbier-Morguleffsystem is presented. The catalogue includes data for 773 stars ofspectral type 08 to F6. 706 stars have been measured at least twice. Further observations of stars in the intermediate-age open cluster NGC 2477.Abstract image available at:http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1974ApJ...192..391H&db_key=AST Rotation and shell spectra among A-type dwarfs.Abstract image available at:http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1973ApJ...182..809A&db_key=AST K-Line Photometry of Southern a StarsAbstract image available at:http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1971ApJS...23..421H&db_key=AST K-Line Photometry of a StarsAbstract image available at:http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1969ApJS...18...47H&db_key=AST
Submit a new article
• - No Links Found - | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8625633716583252, "perplexity": 7117.792518208832}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664752.70/warc/CC-MAIN-20191112051214-20191112075214-00526.warc.gz"} |
http://mathhelpforum.com/calculus/142775-integral-eqn-laplace-tranforms.html | # Math Help - Integral Eqn - laplace tranforms
1. ## Integral Eqn - laplace tranforms
I am suppose to solve the following equation using laplace transforms and have started but am both unsure if i'm going about it the right way and where to go next,
$
y(t) + \int_{0}^{t} e^{2(t - T)} . y(T) . dT = e^{2t} - t
$
where T is supposed to be tau.
i re-arranged the equation to have y(t) = the rest , then used convolution theorem to say;
$
y(t) = e^{2t} - t -$
(y * $e^{2}$)(t)
where (y * $e^{2}$)(t) is the convolution
i then took la place transforms to give;
$
L(y(t)) = \frac{1}{s-2} - \frac{1}{s^{2}} - (L(y(t)) . \frac{1}{s-2}
$
From here i simplify but end up with terms i can't take the inverse laplace transform of in order to solve, so am thinking i must ahve gone wrong up to this point?
Can anyone help here?
Cheers,
2. Use y-hat for the transform and when I take the transform of the equation and simplify the convolution, I get: $\widehat{y}+\frac{1}{s-2}\widehat{y}=\frac{1}{s-2}-\frac{1}{s^2}$. Solving for yhat, I get: $\widehat{y}=\frac{2}{s-1}-\frac{2}{s^2}-\frac{1}{s}$. Now, you can take the inverse transform of that right?
3. Yes thanks!
My problem was in simplifying to something i could take the inverse transform of, but finally got it there.
Cheers. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9859440922737122, "perplexity": 1008.4068824782863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644063881.16/warc/CC-MAIN-20150827025423-00096-ip-10-171-96-226.ec2.internal.warc.gz"} |
http://www.gradesaver.com/textbooks/math/algebra/algebra-a-combined-approach-4th-edition/chapter-5-section-5-2-negative-exponents-and-scienti-c-notation-vocabulary-and-readiness-check-page-353/9 | ## Algebra: A Combined Approach (4th Edition)
Published by Pearson
# Chapter 5 - Section 5.2 - Negative Exponents and Scientific Notation - Vocabulary and Readiness Check: 9
#### Answer
$4y^{3}$
#### Work Step by Step
We know that $a^{−n}=\frac{1}{a^{n}}$ and $\frac{1}{a^{-n}}=a^{n}$ (as long as a is a nonzero real number and n is an integer). Therefore, $\frac{4}{y^{-3}}=4\times y^{3}=4y^{3}$.
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7192350625991821, "perplexity": 1916.3096264134285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946721.87/warc/CC-MAIN-20180424135408-20180424155408-00465.warc.gz"} |
https://papers.nips.cc/paper/2014/file/5d616dd38211ebb5d6ec52986674b6e4-Reviews.html | Paper ID: 1017 Title: Sequential Monte Carlo for Graphical Models
Current Reviews
Submitted by Assigned_Reviewer_34
Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http://nips.cc/PaperInformation/ReviewerInstructions)
This paper is concerned with Sequential Monte Carlo Methods for Probabilistic Graphical Models (PGM). The main contribution of this paper is that it introduces a sequence of auxiliary distributions defined on a monotonically increasing sequence of probability spaces. The authors make use of the structure of the PGM to define a sequence of intermediate target distributions for the sampler. The SMC sampler that is proposed can be then used within a Particle MCMC algorithm to come with efficient algorithms both for parameter and state estimation.
The paper is generally well written and the authors do a good job to explain what the algorithm is and illustrate its performance versus existing algorithms.
This is a nicely written paper which proposes a novel efficient SMC sampler for Probabilistic Graphical Models and its performance is illustrated via various examples.
Submitted by Assigned_Reviewer_42
Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http://nips.cc/PaperInformation/ReviewerInstructions)
The authors propose a method for applying sequential Monte Carlo (SMC) inference to probabilistic graphical models (PGMs). Specifically, the authors show how SMC can be applied to infer latent variables and estimate the partition function (model evidence, posterior normalisation) when a PGM is represented as a factor graph. The authors also show how SMC inference for PGMs can be used as a kernel in an MCMC sampler (via PMCMC) to perform block sampling of latent variables. Finally, the authors apply their method to several datasets showing that their method can i) accurately estimate the partition function and ii) PMCMC sampling can be used to introduce block resampling moves of latent variables with very low auto-correlation.
Using SMC for inference in PGMs is an interesting idea that exploits the fact that SMC inference can be done on fairly arbitrary sequence of distributions, provided that the final distribution in the sequence is the desired target distribution. The authors clearly explain how the factor graph representation of a PGM can be decomposed to define such a sequence of distributions.
I found the review of factor graphs well written but lacking in references to more detailed resources. I felt a similar section reviewing SMC inference, independent of the application to PGMs would greatly aid paper. As the paper is written it may be difficult for readers without a fairly detailed knowledge of the SMC literature to access the paper. For example the authors talk about adjustment multipliers and fully adapted samplers in section 3.2. While these are important concepts for building efficient SMC samplers, they are also potentially unnecessary complications when presenting the basic idea of SMC inference in PGMs.
The experiment in section 5.1 is sound and demonstrates that the estimates of the partition function generated by SMC are comparable to state of the art methods.
I am not satisfied with the results of section 5.2. I found it surprising that the authors were able to include $\theta$ as part of the SMC chain, particularly that the variable would be added first during inference. For the synthetic dataset (Figure 6a) the authors show that the SMC algorithm performs reasonably well. I suspect this is due to the small number of topics and the corresponding small dimension of $\theta$. It would be useful to repeat the experiment with a larger number of topics so that the dimension of $\theta$ is larger and see if performance remains competitive with LRS. It is also not clear how the authors are determining the performance on the real data (Figures 6b-c); are they comparing against LRS or looking at the variability of estimates?
The experiment in section 5.3 explores the use of PMCMC for PGMs. Building PMCMC samplers that can update blocks of latent variables by SMC and parameters by Metropolis-Hastings or Gibss sampling could be very powerful. Unfortunately, I feel the experiment is incomplete. The authors only examine auto-correlation times (Figure 7), but do not show a comparison of time complexity of the iterations for each method, or how accurate the different inference schemes are for a fixed computational budget.
A few other small issues
- For figures 3 and 6a I am not sure what N on the x-axis refers to.
- The sentence "Both methods converges to the true value, however, despite the fact that it uses much fewer computational SMC performs significantly better in terms of estimator variance." on lines 369-371 is grammatically incorrect and confusing.
- In supplemental section 1.1 I couldn't find the definition for the functions $\kappa$ or $\mu$. They are defined implicitly as the mean and dispersion of a Von Mise distribution, but it would be useful to explicitly state what they are.
This work presents a novel and interesting inference scheme for PGMs. However, the current presentation of the model is not sufficiently clear and the benchmarking is not adequate.
Submitted by Assigned_Reviewer_43
Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http://nips.cc/PaperInformation/ReviewerInstructions)
The authors apply a Sequential Monte Carlo sampling algorithm in the context of graphical models. The key idea is to construct an artificial sequence of auxiliary distributions that build up to the target distribution of interest; the authors do this through a factor graph representation of the target graphical model. Whilst the authors refer to an intuitive ordering of the sequence of artificial targets used in the general algorithmic case, I would have found it more instructive to include a detailed discussion on this point within Section 3. This appears to be a key issue and feels like it is superficially treated in the paper as it stands. As a natural extension, the authors offer a Particle MCMC-based algorithm that exploits their SMC sampler based technique. Section 4 also requires additional detail on the partial blocked decomposition definition. The reader is left with vague statements, which are only partially illuminated during the experiments section. The experiments section is well written and considers a good range of example models. I believe that this is a very good contribution to the scientific literature. The paper is well written, the references are complete, and the presentation is accurate.
I believe that this paper should be published at NIPS. In my opinion it will be of interest to the scientific community.
Author Feedback
Author Feedback
Q1:Author rebuttal: Please respond to any concerns raised in the reviews. There are no constraints on how you want to argue your case, except for the fact that your text should be limited to a maximum of 6000 characters. Note however, that reviewers and area chairs are busy and may not read long vague rebuttals. It is in your own interest to be concise and to the point.
We would like to thank the reviewers for the many insightful comments and valuable suggestions for improvements. We will make sure to take these into account.
To clarify the LDA likelihood estimation example (Section 5.2), we use marginalisation of $\theta$ within the SMC algorithm (cf. the L-R [15], and LRS [16] methods) and a fully adapted SMC sampler. This mitigates the degeneracy issue, although we are aware that it is not completely resolved. Nevertheless, the intention with this example was to show that even an out-of-the-box application of SMC can give comparable results to a special-purpose state-of-the-art method, such as LRS.
We agree that repeating the simulation experiment with a larger number of topics is of interest.
The comment raised in the review report motivated us to run additional simulations with a larger number of topics (up to 200) and words (up to 100). These simulations resulted in similar performance gains for SMC compared to LRS as those reported for the smaller example in the paper.
Regarding the real data LDA analysis (also Section 5.2): Unfortunately (and embarrassingly), we realized after submission that we had included the wrong plot in Fig. 6(c). We sincerely apologize for the confusion caused by this mistake, as expressed by Assigned_Reviewer_42. The correct values for the estimates of \log(Z) for Fig. 6(c) are:
(mean, std from bootstrapping)
LRS 1: (-13540, 11)
LRS 2: (-13508, 9)
SMC 1: (-13509, 10)
SMC 2: (-13496, 11)
The performance of the algorithms on the real data is assessed by comparing the mean values of the estimates. By computing the logarithm of the normalizing constant estimate, a negative bias is introduced (cf. Jensen’s inequality and recall that SMC gives an unbiased estimate of Z). Consequently, larger values of \log(Z) typically implies more accurate results. Based on this criteria and the numbers reported above, we see that SMC gives slightly better results than LRS also for the ‘20 newsgroups’ data (SMC1 and LRS2 give similar performance, and SMC2 gives the overall most accurate results).
The Gaussian MRF example (Section 5.3) was included primarily as a proof-of-concept for PMCMC and not to reflect any actual application where the use of PMCMC is justified. In particular, we wanted to illustrate the potential of mimicking the tree-sampler [30] using PMCMC, thereby extending the scope of this powerful method to PGMs where it is otherwise not applicable. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9122809767723083, "perplexity": 645.9384694325299}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057018.8/warc/CC-MAIN-20210920040604-20210920070604-00400.warc.gz"} |
http://www.aimsciences.org/search/author?author=Alberto%20%20Boscaggin | # American Institute of Mathematical Sciences
## Journals
DCDS
Discrete & Continuous Dynamical Systems - A 2013, 33(1): 89-110 doi: 10.3934/dcds.2013.33.89
We study the problem of existence and multiplicity of subharmonic solutions for a second order nonlinear ODE in presence of lower and upper solutions. We show how such additional information can be used to obtain more precise multiplicity results. Applications are given to pendulum type equations and to Ambrosetti-Prodi results for parameter dependent equations.
keywords:
PROC
Conference Publications 2009, 2009(Special): 72-81 doi: 10.3934/proc.2009.2009.72
It is proved the existence of infinitely many solutions to a superquadratic Dirac-type boundary value problem of the form $\tau z = \nabla_z F(t,z)$, $y(0) = y(\pi) = 0$ ($z=(x,y)\in \mathbb{R}^2$). Solutions are distinguished by using the concept of rotation number. The proof is performed by a global bifurcation technique.
keywords:
DCDS
Discrete & Continuous Dynamical Systems - A 2016, 36(10): 5231-5244 doi: 10.3934/dcds.2016028
We deal with positive solutions for the Neumann boundary value problem associated with the scalar second order ODE $$u'' + q(t)g(u) = 0, \quad t \in [0, T],$$ where $g: [0, +\infty[\, \to \mathbb{R}$ is positive on $\,]0, +\infty[\,$ and $q(t)$ is an indefinite weight. Complementary to previous investigations in the case $\int_0^T q(t) < 0$, we provide existence results for a suitable class of weights having (small) positive mean, when $g'(u) < 0$ at infinity. Our proof relies on a shooting argument for a suitable equivalent planar system of the type $$x' = y, \qquad y' = h(x)y^2 + q(t),$$ with $h(x)$ a continuous function defined on the whole real line.
keywords: | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7617875337600708, "perplexity": 346.44452064282876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210735.11/warc/CC-MAIN-20180816113217-20180816133217-00354.warc.gz"} |
http://mathoverflow.net/users/22112/lev-glebsky?tab=activity | # Lev Glebsky
less info
reputation
37
bio website location age member for 2 years, 5 months seen Aug 17 at 18:13 profile views 337
# 35 Actions
Jul8 comment Fixed points of $x\mapsto 2^{2^{2^{2^x}}} \mod p$ Here are an estimates (From above and, I think, very imprecise)of number of solutions of $q^{x^{x^x}}=x \mod p$ based on similar constructions. There are some annoying details due to $x+k (\mod p)= x+k (\mod p-1)$ or $x+k(\mod p-1) +1$. So, $2^{x+k}= 2^x2^k$ or $22^x2^k\mod p$. I don't think that there is an easy generalization for points of period 4. I think that the proof goes through due to the group $\langle a,b,w\;|\;b^{-1}ab=b^2,w^{-1}aw=b, w^3=1\rangle$ is finite. (We may define "almost action" of this group, with $xa=x+1$, $xb=2x$, $xw=2^x\mod p$) Jul8 comment Fixed points of $x\mapsto 2^{2^{2^{2^x}}} \mod p$ Here an estimates of Dec14 comment A special residually finite group @IanAgol: did you mean some Wilson's construction of just infinte groups of type N(h)? Is it published now? Sep26 revised The free group $F_2$ has index 12 in SL(2,$\mathbb{Z}$) edited body Sep24 awarded Necromancer Sep6 awarded Necromancer Mar14 awarded Yearling Feb3 answered Measures idempotent with respect to addition and multiplication. Feb3 awarded Commentator Feb3 comment The Higman group II @Ashot. This answers my first question! Thank you. Feb3 awarded Scholar Feb3 accepted The Higman group II Feb2 revised The Higman group II added 45 characters in body Feb2 comment The Higman group II @Yves and @Derek. Oops, you are right. I will make the corresponding corrections. Thank you. Feb1 asked The Higman group II Jan24 comment Eigenvalues of the products of a fixed unitari matrix with diagonal unitari matrices @ Michael I have not notice your comment before. I just consider it as a set... About b). Let $n$ be "very large" Then in example 2) al matrices $DU$ have "very small" spectral gaps. So, the question: for which $U$ all $DU$ have small maximal spectral gap? As $\\{DU\\}$ may be considered as a point in the flag manifold, one could try to relate this gap with a Reimann distance on the flag manifold.... Jan10 revised Eigenvalues of the products of a fixed unitari matrix with diagonal unitari matrices added 89 characters in body Jan9 asked Eigenvalues of the products of a fixed unitari matrix with diagonal unitari matrices Jan9 comment A subgroup intersects every conjugacy class Yes, a free group seems to have a large subgroup. It may be constructed inductively adding $g$ to $$for each g with g^F\cap =\emptyset. (We need to start with a good initial$$ to avoid getting all $F$.) Dec28 comment Polynomial bijection from ZxZ to Z? @Dicman and @Boumol: Thank you for interesting references. Interesting, AMM6028 asks for polynomials with integer coefficients. In fact, the bijection $\mathbb{N}\times\mathbb{N}\to\mathbb{N}$ I know has rational coefficients. Does there exist polynomial $\mathbb{N}\times\mathbb{N}\to\mathbb{N}$ bijection with integer coefficients? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8744832873344421, "perplexity": 1764.1059005025274}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535922089.6/warc/CC-MAIN-20140901014522-00252-ip-10-180-136-8.ec2.internal.warc.gz"} |
http://bayesianthink.blogspot.com/2013/01/a-game-of-pots-and-gold.html | ## Saturday, January 19, 2013
### A Game of Pots and Gold
Q: You are in a game with a bankroll of 13 gold coins. The game involves a 13 pots lined up. There is a 96% chance that one of the pots has 22 gold coins in it. You get to inspect a pot by paying 1 gold coin. The game organizer tells you that there is a 90% chance that the very first pot has the gold coins in it. You pay 1 gold coin to inspect the first pot and you find there is no gold in it. Should you continue to play?
Fifty Challenging Problems in Probability with Solutions (Dover Books on Mathematics)
A: This is good example of a puzzle where at first blush it appears that there is no merit in continuing. It almost gives the impression that "most" of your probability of winning is lost right from the get go as the first pot, which was known to have a 90% chance of having the 13 gold coins does not contain gold.
Let us assume that the probability of winning is $$x$$ downstream of the first pot (that is pots 2 through 13). Next, lets estimate the probability of not winning at all in this game. The probability of not winning at the first pot is $$1 - 0.9 = 0.1$$ and that of not winning on any of the remaining pots is $$1 - x$$. The net probability we know to be $$1 - 0.96 = 0.04$$. Thus we can state this as
$$0.1 \times ( 1 - x ) = 0.04$$
Solving for $$x$$ yields
$$x = 60\%$$
You are left with 12 gold coins. Your expected pay off is $$0.6 \times 22 = 13.2$$ coins, so you must play.
As an aside, notice that the probability of a win is independent of the number of pots that are lined up. But the expected pay off will vary.
Some must buy books on Probability
40 Puzzles and Problems in Probability and Mathematical Statistics (Problem Books in Mathematics)
A new entrant and seems promising
Fifty Challenging Problems in Probability with Solutions (Dover Books on Mathematics)
This book is a great compilation that covers quite a bit of puzzles. What I like about these puzzles are that they are all tractable and don't require too much advanced mathematics to solve.
Introduction to Algorithms
This is a book on algorithms, some of them are probabilistic. But the book is a must have for students, job candidates even full time engineers & data scientists
Introduction to Probability Theory
An Introduction to Probability Theory and Its Applications, Vol. 1, 3rd Edition
The Probability Tutoring Book: An Intuitive Course for Engineers and Scientists (and Everyone Else!)
Introduction to Probability, 2nd Edition
The Mathematics of Poker
Good read. Overall Poker/Blackjack type card games are a good way to get introduced to probability theory
Let There Be Range!: Crushing SSNL/MSNL No-Limit Hold'em Games
Easily the most expensive book out there. So if the item above piques your interest and you want to go pro, go for it.
Quantum Poker
Well written and easy to read mathematics. For the Poker beginner.
Bundle of Algorithms in Java, Third Edition, Parts 1-5: Fundamentals, Data Structures, Sorting, Searching, and Graph Algorithms (3rd Edition) (Pts. 1-5)
An excellent resource (students/engineers/entrepreneurs) if you are looking for some code that you can take and implement directly on the job.
Understanding Probability: Chance Rules in Everyday Life A bit pricy when compared to the first one, but I like the look and feel of the text used. It is simple to read and understand which is vital especially if you are trying to get into the subject
Data Mining: Practical Machine Learning Tools and Techniques, Third Edition (The Morgan Kaufmann Series in Data Management Systems) This one is a must have if you want to learn machine learning. The book is beautifully written and ideal for the engineer/student who doesn't want to get too much into the details of a machine learned approach but wants a working knowledge of it. There are some great examples and test data in the text book too.
Discovering Statistics Using R
This is a good book if you are new to statistics & probability while simultaneously getting started with a programming language. The book supports R and is written in a casual humorous way making it an easy read. Great for beginners. Some of the data on the companion website could be missing.
1. Nice puzzle. BTW you should make it clear that 90% refers to the *unconditional* chance that the first pot has the 22 coins, at first I thought you meant the chance the first pot has the gold given that one of the pots has it.
Also, a nice, intuitive and quick way to see the answer is to draw a box and partition it into gold exists, gold doesn't exist, and gold is in first pot.
Cheers,
Matt
1. Thanks Matt. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5508809089660645, "perplexity": 591.8289985549757}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246652631.96/warc/CC-MAIN-20150417045732-00229-ip-10-235-10-82.ec2.internal.warc.gz"} |
http://openstudy.com/updates/50f30c2ce4b0694eaccf7834 | Here's the question you clicked on:
55 members online
• 0 viewing
## Mrfootballman97 2 years ago How do you simplify: Delete Cancel Submit
• This Question is Closed
1. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
$\sqrt{72}$
• 2 years ago
Best Response
You've already chosen the best response.
2
Okay so some easy steps to start simplifying.
• 2 years ago
Best Response
You've already chosen the best response.
2
First you're going to want to try to split that 72 into something you know can factor.
• 2 years ago
Best Response
You've already chosen the best response.
2
Any idea what 72 can be split into?
5. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
36?
• 2 years ago
Best Response
You've already chosen the best response.
2
and what?
7. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
ohhhhh. Do you mean like 8 times 9?
• 2 years ago
Best Response
You've already chosen the best response.
2
yes, or like tyou said 36 times what? Your first guess was correct.
9. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
so then what?
10. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
or is that the answer? lol
• 2 years ago
Best Response
You've already chosen the best response.
2
$\sqrt{36 * ?}$
12. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
2
• 2 years ago
Best Response
You've already chosen the best response.
2
$\sqrt{36 * 2}$
• 2 years ago
Best Response
You've already chosen the best response.
2
now can you simplify sqrt(36 or sqrt(2)?
• 2 years ago
Best Response
You've already chosen the best response.
2
WE don't split it up, so sorry if that last comment is confusing....
• 2 years ago
Best Response
You've already chosen the best response.
2
WE can just look at each piece now, and determine what's needed....
17. abb0t
• 2 years ago
Best Response
You've already chosen the best response.
0
You can also use 8 and 9...
• 2 years ago
Best Response
You've already chosen the best response.
2
yes, we can abb0t.... but this is easier. I was going to show 8 and 9 after.
19. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
i can simplify 36 into
20. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
$\sqrt{6 * 6}$
• 2 years ago
Best Response
You've already chosen the best response.
2
nono, we don't need to simplify it that way. I meant does sqrt(36) or sqrt(2)= anything?
22. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
oh yeah, 36= 6 and 2= 1.414214562. do i have to simplify 2?
• 2 years ago
Best Response
You've already chosen the best response.
2
correct so we get $6 \sqrt{2}$ which is our answer.
24. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
ahhhhhhh thanks a lot
• 2 years ago
Best Response
You've already chosen the best response.
2
we don't need to simplify 2, we just need to get it into it's simpliest form.
• 2 years ago
Best Response
You've already chosen the best response.
2
so for example, lets use 8 and 9.
• 2 years ago
Best Response
You've already chosen the best response.
2
$\sqrt{72} = \sqrt{8 * 9} = 3 \sqrt{ 4 * 2} = 2 *3 \sqrt{2} = 6 \sqrt{2}$
• 2 years ago
Best Response
You've already chosen the best response.
2
Its' really just trying to take some ugly sqrt thart we cannot solve and split it into things that we can solve for :).
29. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
i liked the 36 and 2 one better lol. Thanks again!
• 2 years ago
Best Response
You've already chosen the best response.
2
Yeah, it's easier, but you can see how it can be applied with any set :). Good luck, I hope you understand the concepts.
31. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
so lets see if i have this:
• 2 years ago
Best Response
You've already chosen the best response.
2
Sure, throw up a random # and lets see if we can solve.
• 2 years ago
Best Response
You've already chosen the best response.
2
as long as it's not prime... AHAH..
34. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
$\sqrt{75} = 5\sqrt{3}$
• 2 years ago
Best Response
You've already chosen the best response.
2
yes.
36. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
yaaahooooo
37. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
ok thanks cya later!
• 2 years ago
Best Response
You've already chosen the best response.
2
$\sqrt{75} = \sqrt{25 * 3} = 5 \sqrt{3}$
• 2 years ago
Best Response
You've already chosen the best response.
2
np good luck.
40. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
Hey here is a new type of problem:
41. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
$\sqrt{5} \times \sqrt{35}$
42. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
Think you can do that?
• 2 years ago
Best Response
You've already chosen the best response.
2
hmm Idk man you're asking a lot here... :p
• 2 years ago
Best Response
You've already chosen the best response.
2
lets start off basic. Multiply them together, what do we get?
• 2 years ago
Best Response
You've already chosen the best response.
2
or, if you want, you can split sqrt(35) into 2 parts, and go that way :).
46. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
well i dont need to get the answer to the equation, i think i just need to simplify it. Its under the section of simplifying
47. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
so do i just simplify each part?
• 2 years ago
Best Response
You've already chosen the best response.
2
simplifying is the same thing.
• 2 years ago
Best Response
You've already chosen the best response.
2
you basically are given an eq, to solve. IT could be multiplication, addition, etc, etc. Then you want to use the elements to combine and form something you are able to simplify and take something out of the sqrt. That is our main objective, to get a real number.
• 2 years ago
Best Response
You've already chosen the best response.
2
We are doing exactly what we did before, exept we are adding an additional step. Instead of 35 and 5, we had 36 and 2. or 9 and 8(9 * 4 *2)
• 2 years ago
Best Response
You've already chosen the best response.
2
I guess above I shouldn't have said you shouldn't split them out, because in the end you can, and it doesn't matter.... sqrt(36) * sqrt(2) is the same thing
• 2 years ago
Best Response
You've already chosen the best response.
2
6 * sqrt(2)
53. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
so would it be sqrt(5) and 35= 7 * 5
54. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
$\sqrt{5} \times \sqrt{7} \times \sqrt{5}$?
55. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
but those are primes....
• 2 years ago
Best Response
You've already chosen the best response.
2
yessir, but you can combine them in other ways, no?
• 2 years ago
Best Response
You've already chosen the best response.
2
sqrt(5) * sqrt(5) = ?
58. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
i dont know. So then what do i do?
59. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
• 2 years ago
Best Response
You've already chosen the best response.
2
Always remember when we have 2 of the same sqrt() we get the actual number
61. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
25?
62. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
oops 5
• 2 years ago
Best Response
You've already chosen the best response.
2
sqrt(25)yes.
• 2 years ago
Best Response
You've already chosen the best response.
2
and yes.
65. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
wait...
• 2 years ago
Best Response
You've already chosen the best response.
2
67. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
so how do i write down this answer?
• 2 years ago
Best Response
You've already chosen the best response.
2
5sqrt(7)
69. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
ohhhh ok
• 2 years ago
Best Response
You've already chosen the best response.
2
Math is awesome that you can combine and move things around to find whatever you need. Lets say we have sqrt(3) sqrt(3) and sqrt(4) the 2 sqrt(3) become a 3. and the sqrt(4) becomes a 2. Our answer is 6.
• 2 years ago
Best Response
You've already chosen the best response.
2
But if we had lets say sq(15) and sqrt(3) we will have to split up the sqrt(15) into sqrt(3) sqrt(5) and then another sqrt(3) which we end up getting 3sqrt(5)
72. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
ahhhhhhhh that makes sense! Are you a math teacher or something?
• 2 years ago
Best Response
You've already chosen the best response.
2
haha na...
• 2 years ago
Best Response
You've already chosen the best response.
2
I just know how much it sucks to not get proper help.....
75. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
yeah man. Ok well you saved my grade! Thanks again
• 2 years ago
Best Response
You've already chosen the best response.
2
No problem good luck ,anymore questions you can do @konradzuse to get me.
77. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
ok lets hope i dont need to do that :P
• 2 years ago
Best Response
You've already chosen the best response.
2
There are lots more math in your future :).
79. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
true that.
• 2 years ago
Best Response
You've already chosen the best response.
2
:) I just got done with Linear Algebra and Calculus 2 :)
81. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
im half way through geometry...
82. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
hey would the answer to sqrt of 9 over 5 = sqrt of 45 over 5?
83. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
|dw:1358109390649:dw|
84. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
• 2 years ago
Best Response
You've already chosen the best response.
2
nope... what's sqrt(9)?
86. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
3
87. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
would it be:
88. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
$\sqrt{3\sqrt{5}}$
• 2 years ago
Best Response
You've already chosen the best response.
2
nope 3/sqrt(5)...
90. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
ohh
91. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
so just:
92. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
|dw:1358110687298:dw|
93. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
or should it have a sqrt sign over the entire thing?
• 2 years ago
Best Response
You've already chosen the best response.
2
think about it. sqrt(9)/sqrt(5) is the same thing as sqrt(9/5) sqrt(9) is 3.
95. Mrfootballman97
• 2 years ago
Best Response
You've already chosen the best response.
1
ok thanks. Have a good day
• 2 years ago
Best Response
You've already chosen the best response.
2
yup.
• 2 years ago
Best Response
You've already chosen the best response.
2
you can also double check everything with a calculator...
98. Not the answer you are looking for?
Search for more explanations.
• Attachments:
Find more explanations on OpenStudy
##### spraguer (Moderator) 5→ View Detailed Profile
23
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999967813491821, "perplexity": 10012.555851133831}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936469016.33/warc/CC-MAIN-20150226074109-00245-ip-10-28-5-156.ec2.internal.warc.gz"} |
https://www.eevblog.com/forum/eda/autodesk-buys-eagle/msg978101/ | 0 Members and 1 Guest are viewing this topic.
#### f5r5e5d
• Frequent Contributor
• Posts: 346
« Reply #125 on: July 07, 2016, 02:50:06 am »
I just hope they don't throw the Fusion 360 dev team at it
apparently they think "agile development" means you can skip old style Mech E CAD hard/deterministic curve generation, tangent, dimensioning feature requirements and go with flashy "organic" "sculpting features only "documented" in video clips
and while I may be a few sigma out there on allergy to video as a learning tool people have been complaining for nearly the entire public history of the project about the lack of in tool help and actual logically structured decent quality written documentation anywhere on the site - transcripts of the video presenters yaking isn't that
#### rx8pilot
• Super Contributor
• Posts: 3462
• Country:
• If you want more money, be more valuable.
« Reply #126 on: July 07, 2016, 04:04:56 am »
FWIW, I disagree that the current interface is "clunky."
When I need to holes to be a certain distance apart in Eagle - I cannot do that directly. I have to figure out the position of each hole relative to the origin and do the math to figure out the distance.
When I need to move a whole design to accommodate an outline change - I have to select all (no problem) and then use the CLI to type: MOVE (>0 0) (1.25 0) which gets the job done, but is much more clunky than a context sensitive dialog box asking for an X-Y value for the move.
When routing off grid, I hold the ALT key to get finer movements but there is no (apparent) way to lock in 45deg or 90deg traces. I end up with a lot of slightly crooked traces.
I also enjoy standard shift sports cars. Please don't fix it to be like SolidWorks.
To be fair - I don't want Eagle to be like SolidWorks either, I want it to be sharply focused on the task of electronics design in the same way SolidWorks is focused on 3D mechanical design. There are some cool 2D tricks in SolidWorks, AutoCAD, etc that would be nice to have as a component of the final Eagle solution - but EDA software should obviously consider the task not copy another unrelated solution. I am not an Eagle genius by any means because I don't use it daily but I am also a periodic user of all my software packages and don't struggle nearly as much with any other title. I have watched a lot of Eagle videos and written tutorials by what seemed to be expert users and it takes them a long time too, so maybe daily use would only speed me up a little.
Factory400 - the worlds smallest factory. https://www.youtube.com/c/Factory400
#### LabSpokane
• Super Contributor
• Posts: 1899
• Country:
« Reply #127 on: July 07, 2016, 04:37:39 am »
Techno..,
Thank you.
#### KE5FX
• Super Contributor
• Posts: 1035
• Country:
« Reply #128 on: July 07, 2016, 05:30:29 am »
Techno..,
Don't mess with the UI. Please.
Thank you.
#### H.O
• Frequent Contributor
• Posts: 604
• Country:
« Reply #129 on: July 07, 2016, 05:35:30 am »
When I need to holes to be a certain distance apart in Eagle - I cannot do that directly. I have to figure out the position of each hole relative to the origin and do the math to figure out the distance.
You'll probably consider it to be a workaround but if you use the Mark command you can place whatever features you want relative to the position of the mark. Place the mark at your "reference hole" (or your G54 zero if you like :-) ) and the coordinates displayed is relative to that. You CAN then of course also use the much hated command line to enter the feature to place, the size and the position directly hole 0.003 (R 0.1 0.25) something like that.
Another workaround is of course to place the first hole at the origin and then change the grid to whatever spacing you want.
#### Karel
• Super Contributor
• Posts: 1326
• Country:
« Reply #130 on: July 07, 2016, 05:46:59 am »
@technolomaniac, what ever you are going to do, please don't break compatibility with existing ULP's and scripts.
Don't introduce new features without accompanying "console" commands.
Don't throw out the existing realtime forward/backward annotation.
For me, the user interface is the least of the problems. What would make me happy is:
- a correct functioning IDF export based on geometries drawn in layers 50, 57 and 58 (bdCAD, tCAD and bCAD).
- cam processor ODB++ export.
- cam processor Gerber X2 export.
- improved impedance controlled routing.
- push & shove
- a library/schematic diff function a la: http://teuniz.net/eagle/eaglelibcheck/
The difference between theory and practice is less in theory than
the difference between theory and practice in practice.
Expensive tools cannot compensate for lack of experience.
#### rx8pilot
• Super Contributor
• Posts: 3462
• Country:
• If you want more money, be more valuable.
« Reply #131 on: July 07, 2016, 05:49:50 am »
Techno....
If your team chooses not to radically deal with UI improvements - please let me know now so I can get a professional tool. (hope that is not too sharp, but it is total shit from my professional perspective). If the new version still requires constant CLI and scripting to get basic tasks done - I am out in a flash. You don't (and shouldn't) have to take those things away, but add the features to the software natively that people are using ULP's to work around. I love have the ULP option to make custom output for my particular P&P and have no expectation that would be included as a standard feature. Moving groups, panelization, part creation, re-size all text, re-size a trace, are just a few examples of critically absent native features. The ULP system has pushed the development to the end-user and I don't have the time for that.
The question is if the future of Eagle is to be a tool for enthusiasts, hobbyists, makers or is it a professional tool that is also friendly to enthusiasts, hobbyists, and makers? I need speed which comes from delicately developed features from the beginning of the process to the very end. The less I deal with the software, the more I can focus on my design. The more I focus on my design, the better it is. If I get a better design to the market faster - the cost of Altium is all of a sudden a bargain.
Factory400 - the worlds smallest factory. https://www.youtube.com/c/Factory400
The following users thanked this post: Robaroni
#### rx8pilot
• Super Contributor
• Posts: 3462
• Country:
• If you want more money, be more valuable.
« Reply #132 on: July 07, 2016, 06:01:29 am »
When I need to holes to be a certain distance apart in Eagle - I cannot do that directly. I have to figure out the position of each hole relative to the origin and do the math to figure out the distance.
You'll probably consider it to be a workaround but if you use the Mark command you can place whatever features you want relative to the position of the mark. Place the mark at your "reference hole" (or your G54 zero if you like :-) ) and the coordinates displayed is relative to that. You CAN then of course also use the much hated command line to enter the feature to place, the size and the position directly hole 0.003 (R 0.1 0.25) something like that.
Another workaround is of course to place the first hole at the origin and then change the grid to whatever spacing you want.
Those are good options, but yes I would consider them extra effort for a simple task. I don't hate command line at all - it has it's place. I learned my computer skills starting in the early 80's when command line was it. The reason I don't like it for these applications is because it is another memory item that I have to keep fresh. After coding in C, Python, of BASH languages all day - I don't want to remember what the command or syntax of the command is just to move, size, position something. GUI's allow the user to focus on the task and not the interface and are particularly good for users that occasionally use the software. If I used Eagle everyday, I would certainly have a lot more proficiency in the commands - but I don't. I just want a dialog box to pop up and give me my options in context of the selection. If I want a line to be longer, I just want to type in a length - not calculate a start point and end point. If I want two lines to connect off-grid - I expect the software to at least offer to snap two ends of a line together.
To be clear - the command line is indeed powerful and I would not want it to go away. The ULP scripting engine allows outside the box features. What I don't care for is when these things are required for tasks that should be integrated in the core of the system as mouse clickable tools.
Factory400 - the worlds smallest factory. https://www.youtube.com/c/Factory400
#### LabSpokane
• Super Contributor
• Posts: 1899
• Country:
« Reply #133 on: July 07, 2016, 06:27:21 am »
Quote
The question is if the future of Eagle is to be a tool for enthusiasts, hobbyists, makers or is it a professional tool that is also friendly to enthusiasts, hobbyists, and makers? I need speed which comes from delicately developed features from the beginning of the process to the very end. The less I deal with the software, the more I can focus on my design. The more I focus on my design, the better it is. If I get a better design to the market faster - the cost of Altium is all of a sudden a bargain.
This. ^
Makers and hobbyists do not pay for software. They have decided that free is the only acceptable price. (The will only pay for ICs on breadboard-able PCBs.) Make the tool accessible to "makers" if you feel a social need, but catering to makers is a complete waste of resources. The market that will pay consists of Jill/Jack-of-all-trades professionals whose sole job is not PCB design. I believe that market is hugely underestimated. There are no shortage of professionals and businesses trapped - making do with third party hardware and paying through the nose for the privilege - believing that the designs tools are either too primitive or too expensive and time consuming to jump in and succeed.
People that build cell phones and high zoot spectrum analyzers (etc.) will continue to rightfully remain on the high end platforms.
The untapped market is in the middle.
The following users thanked this post: Bassman59
#### PCB.Wiz
• Frequent Contributor
• Posts: 298
• Country:
« Reply #134 on: July 07, 2016, 07:23:44 am »
... Yesterday, I needed to change the outline of my board and it took a very long time since each line and radius had to be manually entered. Arcs are defined only by end points and degrees - so when I need a sharp corner to have a radius added, it's a slow and manual job. In any 2D CAD software like Autocad, you simply pick a radius tool, tell it what radius you want and click on any sharp corner and the radius is added.
That's a common issue across most PCB packages, and it seems Autodesk could easily do a 2-D clipboard, where you Select an entity, or group, then paste into a proper 2D editor and then replace original.
Other packages have DXF import/export into PCB area and Footprint editors, but it tends to be coarse-grained.
It's likely to be much easier for Autodesk to include a base-Real-CAD tool, than mess about trying to re-code any editing engine.
#### PCB.Wiz
• Frequent Contributor
• Posts: 298
• Country:
« Reply #135 on: July 07, 2016, 08:04:17 am »
I agree and as the guy from Autodesk / EAGLE on the board, the thing I'd say is we should look to approach this from multiple directions. 1) We need to handle input data better. Ok, fair enough. The standards for this including IPC, JEDEC, etc - along with what the mfg's have been producing - however, mean there's just SO little consistency in how this data shared. This I think is the elephant in the room. This wreaks havoc on anyone building parts. Grids and reference points and such are all good, but let's call the input data what it is...messy! (some mfg's being MUCH better than others of course)
Bottom line, we need to flex a bit from the tools side to meet the incoming data in the middle or we are attempting to swim upstream against 40 years of information that's all over the map.
There are lowest-common-denominator files that can do some useful web-harvesting - those are Gerber and DXF.
These are almost universal, so should be Import/export supported, but those are not easily scriptable or edited.
A much better system for scripting information, is to publish a form of S-expression file
see:
https://en.wikipedia.org/wiki/S-expression#Parsing
Important to notice this key comment:
S-Expressions are often compared to XML, a key difference being that S-Expressions are far simpler in syntax, therefore being much easier to parse.
#### chris_leyson
• Super Contributor
• Posts: 1233
• Country:
« Reply #136 on: July 07, 2016, 08:20:33 am »
True, but don't forget about step and igis, pcb design so much more about simple 2D these days.
#### PCB.Wiz
• Frequent Contributor
• Posts: 298
• Country:
« Reply #137 on: July 07, 2016, 08:29:47 am »
...So it will be a long slow switch to Kicad.
Err, you do know KiCad can now simply import an Eagle design (and can import Altium too, via P-CAD) ? - See image.
Search Web.Find EagleFile.Import.
This makes the switch to KiCad very rapid indeed, and it is already underway for many web-published designs.
KiCad has an impressive library resource, and I predict others will soon be adding 'Import KiCad Library' buttons.
#### PCB.Wiz
• Frequent Contributor
• Posts: 298
• Country:
« Reply #138 on: July 07, 2016, 08:54:20 am »
True, but don't forget about step and igis, pcb design so much more about simple 2D these days.
Of course, but my point was more to not ignore the widespread but less lofty common-denominator imports.
For example, below is a DXF file of a relay, that someone like Autodesk should be able to import, and with a few smart mouse clicks, create a footprint.
Select outline -> Silkscreen
Select circles -> Add Terminals, use circle X.Y.D as Seed. Prompt for drill size. If multiple concentric circles seed Drill & mask too..
Delete construction lines. Save footprint, use DXF name as a seed.
https://www.omron.com/ecb/products/DXF/G6DN.DXF
Export footprint as DXF, using simple layer name rules and circle rules like above, should also be possible.
#### chris_leyson
• Super Contributor
• Posts: 1233
• Country:
« Reply #139 on: July 07, 2016, 09:11:44 am »
Sorry man had my 3D head on, but you're right, use a common file format for 2D footprints, DXF and GErber, it makes a lot sense. Thanks
#### technolomaniac
• Contributor
• Posts: 42
• Country:
« Reply #140 on: July 07, 2016, 02:59:25 pm »
@technolomaniac
I have been an Eagle user since 3.x, got an academic/educational license at 4.03. I certainly appreciate and agree with you that Autodesk will do what it sees fit to do, and doesn't need to be told.
FWIW, I disagree that the current interface is "clunky." In fact, I rather enjoy having control rather than having some piece of software telling me what it wants to do, which gets back to my second sentence above. I also enjoy standard shift sports cars. Please don't fix it to be like SolidWorks.
The real point of this post is to ask that another class of customer be considered -- maybe something like a loyalty discount for long-time users or retired users. The last time I looked, Eagle had a package for about $169 that offered 6 layers, a reasonable size board, and was only for non-commercial use. Since the education package is gone, I hope you can keep a similarly priced (i.e, <$200), very functional package available.
Regards, John
Hi John --
Thanks for the suggestions. We'll definitely keep the cost low and continue the $169 Make license. This is essential to EAGLE's success and we want to avoid anything that might make it harder for folks to use the product. This community is largely responsible for the glut of content, tutorials, and other resources available and we want to be sure we enable everyone to continue to make & share resources that make designing electronics easier. This includes both keeping the product cost low but also making sure we don't force people to adopt a complex data management system when they have already decided on a model for sharing and communicating data. (Point being, we want to reduce the friction and work with the community rather than try and build our own wonky ecosystem that forces others to join it if they want to play along.) Regarding the UI, what we will absolutely avoid is making it heavy like so many other packages with their dozens of workspace panels and menus and toolbars, etc. (I mean honestly, if your Preferences dialog has hyperlinks that launch another series of dialogs, it might be time for a refactoring Hope that helps set some expectations. Please feel free to contact me directly anytime with questions, concerns, ideas, etc. I'm at @technolomaniac on Hackaday.io & Twitter and you can email me directly at matt@cadsoft.com or matt.b-e-r-g-g-r-e-n@autodesk.com. (no dashes). #### technolomaniac • Contributor • Posts: 42 • Country: ##### Re: Autodesk buys Eagle « Reply #141 on: July 07, 2016, 03:03:36 pm » @technolomaniac, what ever you are going to do, please don't break compatibility with existing ULP's and scripts. Don't introduce new features without accompanying "console" commands. Don't throw out the existing realtime forward/backward annotation. For me, the user interface is the least of the problems. What would make me happy is: - a correct functioning IDF export based on geometries drawn in layers 50, 57 and 58 (bdCAD, tCAD and bCAD). - cam processor ODB++ export. - cam processor Gerber X2 export. - improved impedance controlled routing. - push & shove - a library/schematic diff function a la: http://teuniz.net/eagle/eaglelibcheck/ Oh man, this is awesome. So we have a all of these on the list, even the last one. Routing is actually SUPER high on my list but depends on real-time DRC in PCB coming into the fold. So we have some sequencing to get right but it's all very clear what needs to happen. The other mfg output is all in the pipe. As is the interface to mechanical. Let me ask though... do you want IDF or would you prefer a "real" mechanical interface? something that supported bringing a design into e.g. Fusion or Inventor (or whatever else you might be using)? Id suspect most folks would say "just give me an interface to a mechanical tool" as the IDF format is pretty sparse. especially if we can preserve copper features and layer construction, etc. #### technolomaniac • Contributor • Posts: 42 • Country: ##### Re: Autodesk buys Eagle « Reply #142 on: July 07, 2016, 03:05:04 pm » True, but don't forget about step and igis, pcb design so much more about simple 2D these days. Of course, but my point was more to not ignore the widespread but less lofty common-denominator imports. For example, below is a DXF file of a relay, that someone like Autodesk should be able to import, and with a few smart mouse clicks, create a footprint. Select outline -> Silkscreen Select circles -> Add Terminals, use circle X.Y.D as Seed. Prompt for drill size. If multiple concentric circles seed Drill & mask too.. Delete construction lines. Save footprint, use DXF name as a seed. https://www.omron.com/ecb/products/DXF/G6DN.DXF Export footprint as DXF, using simple layer name rules and circle rules like above, should also be possible. Indeed, this is helpful. Let us have a crack at this and see what we can come up with. There's a lot we can do in this space to make footprint generation easier. Best regards, Matt (Autodesk / Cadsoft) #### technolomaniac • Contributor • Posts: 42 • Country: ##### Re: Autodesk buys Eagle « Reply #143 on: July 07, 2016, 03:06:34 pm » True, but don't forget about step and igis, pcb design so much more about simple 2D these days. We're pushing hard on mechanical interfaces / content. So expect some interesting things to happen here soon -ish! Best regards, Matt (Autodesk / Cadsoft) #### EEVblog • Administrator • Posts: 28638 • Country: ##### Re: Autodesk buys Eagle « Reply #144 on: July 07, 2016, 03:10:10 pm » @Dave, it's not going subscription. So there. At this stage, that isn't anywhere on my roadmap. Thought about it. Decided against it. Can I say that we will never in the life of any product do that? No, of course not. That would be at best unfair, at worst dishonest. But I have so many things that are more pressing. The point of my response - which I agree was unclear was - routing, real-time DRC, some improvements to polygon handling, better revision management and versioning, better BOM tools, better interface to manufacturing, some library improvements, interface to 3D, etc are all good things to worry about today as they drive value for the users. Those are the priority. We'll shelve the other stuff until get to a place where that makes sense. That was the point of that comment. I've got other stuff on my radar. And I think that the shortlist today is pretty much a who's-who of what folks have been asking for for some time. Only now we have a combined development team that can really drive some of this home. Thanks for calling me out...I sounded like a politician and it was totally fair. Thanks for the clarification! #### EEVblog • Administrator • Posts: 28638 • Country: ##### Re: Autodesk buys Eagle « Reply #145 on: July 07, 2016, 03:17:31 pm » But I have so many things that are more pressing. The point of my response - which I agree was unclear was - routing, real-time DRC, some improvements to polygon handling, better revision management and versioning, better BOM tools, better interface to manufacturing, some library improvements, interface to 3D, etc are all good things to worry about today as they drive value for the users. Those are the priority. Baring in mind I'm not an Eagle user... Out of that list of items the only thing I would say to drop is better revision control. I'd put that waaay down the list. It might be important for the mid to high level packages like Altium and their professional customers, but let's face it, Eagle isn't exactly competing in that mid to high level space. It's for the makers, the one man bands, and the small few people companies making relatively simple products. They either don't use version control, or they can implement it themselves. #### EEVblog • Administrator • Posts: 28638 • Country: ##### Re: Autodesk buys Eagle « Reply #146 on: July 07, 2016, 03:18:55 pm » Matt, what do you think about the new$995 Altium Circuit Studio move?
Do you think more than coincidence in timing with the Eagle buyout?
#### LabSpokane
• Super Contributor
• Posts: 1899
• Country:
« Reply #147 on: July 07, 2016, 03:30:46 pm »
Out of that list of items the only thing I would say to drop is better revision control. I'd put that waaay down the list.
Yup. Forget about the rev control for now.
#### PCB.Wiz
• Frequent Contributor
• Posts: 298
• Country:
« Reply #148 on: July 07, 2016, 04:58:53 pm »
...Out of that list of items the only thing I would say to drop is better revision control. I'd put that waaay down the list.
The key element is to not break revision control that users may already have.
Provided Eagle sticks with an ASCII file, and maybe even adds the easier to parse S-expression file I linked above, users revision control they have now should work.
#### Karel
• Super Contributor
• Posts: 1326
• Country: | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21132875978946686, "perplexity": 3066.74848975464}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998440.47/warc/CC-MAIN-20190617063049-20190617085049-00034.warc.gz"} |
http://newsgroups.derkeiler.com/Archive/Comp/comp.text.tex/2007-06/msg00254.html | # Re: How to change "References" to "Bibliography"
Robin Fairbairns wrote:
Does any one know why the article class uses \refname and, say, book, used \bibname?
hysterical raisins[*]. it was done that way when this mechanism was
introduced into latex 2.09 in ca.1992, and no-one complained then so
it's remained that way ever since, for compatibility purposes.
if one was starting again...
[*] aka historical reasons; it occurs to me this joke may not be
common away from my environment.
no problem, here.
I added a list of all the standard names controlled by babel and mentioned that there is a difference between article and book-like classes.
I was just woundering
--
/daleif (remove RTFSIGNATURE from email address)
LaTeX FAQ: http://www.tex.ac.uk/faq
LaTeX book: http://www.imf.au.dk/system/latex/bog/ (in Danish)
Remember to post minimal examples, see URL below
http://www.tex.ac.uk/cgi-bin/texfaq2html?label=minxampl
.
## Relevant Pages
• Re: General encoding question
... Robin Fairbairns writes: ... encoding, when `\'e' is correctly interpreted, no matter the encoding. ... if latex is reading using the correct inputenc, ... keyboards tend to have keys for the characters of the local language. ...
(comp.text.tex)
• Re: varioref, colons and french
... you're supposed to be able to read all the replies at the trouble ticket ... i've sent a message to the latex team asking about the bugs web script, ... Robin Fairbairns, Cambridge ...
(comp.text.tex)
• Re: Help
... Robin Fairbairns wrote: ... intro texts on latex -- if they mention the topic at all, ... you about packages for this job. ... until he/she is more fluent with what latex can do. ...
(comp.text.tex)
• Re: Get the first letter of a word
... letters, I guess LaTeX too... ... a curious beast (a macro processor with a built-in typesetting ... Robin Fairbairns, Cambridge ... It is my first "big" redefinition since I work with LaTeX and I was pretty ...
(comp.text.tex)
• Re: hyperref, subfig and natbib with sort&compress
... Robin Fairbairns wrote: ... I can compile it with latex with any problem, but using pdflatex, I receive warning/error: ... you're probably using an extended pdftex, ...
(comp.text.tex) | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.91853928565979, "perplexity": 10471.514236126979}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011151170/warc/CC-MAIN-20140305091911-00074-ip-10-183-142-35.ec2.internal.warc.gz"} |
http://mathhelpforum.com/calculus/21986-hard-limit.html | # Math Help - Hard Limit
1. ## Hard Limit
How do get the following
$\displaystyle\lim_{x\to 1}\frac{x^\frac{1}{7}-1}{x^\frac{1}{5}-1}$
Without l'Hopital's Rule....
2. One way to appraoch it is to do some fancy factoring.
$\lim_{x\rightarrow{1}}\frac{x^{\frac{1}{7}}-1}{x^{\frac{1}{5}}-1}$
= $\lim_{x\rightarrow{1}}\frac{x^{\frac{4}{35}}+x^{\f rac{3}{35}}+x^{\frac{2}{35}}+x^{\frac{1}{35}}+1}{x ^{\frac{6}{35}}+x^{\frac{4}{35}}+x^{\frac{3}{35}}+ x^{\frac{2}{35}}+x^{\frac{1}{7}}+x^{\frac{1}{35}}+ 1}=\boxed{\frac{5}{7}}$
3. Originally Posted by polymerase
$\displaystyle\lim_{x\to 1}\frac{x^\frac{1}{7}-1}{x^\frac{1}{5}-1}$
Substitute $u^{35}=x,$ the limit becomes to
$\lim_{u\to1}\frac{u^5-1}{u^7-1}.$
The factor $u-1$ is bothering the top & bottom, so we pull it out:
$u^5-1=(u-1)(u^4+u^3+u^2+u+1),$ $u^7-1=(u-1)(u^6+u^5+u^4+u^3+u^2+u+1),$ so
$\lim_{u\to1}\frac{u^5-1}{u^7-1}=\lim_{u\to1}\frac{u^4+u^3+u^2+u+1}{u^6+u^5+u^4+ u^3+u^2+u+1},$
and the conclusion follows $\blacksquare$
4. Originally Posted by galactus
One way to appraoch it is to do some fancy factoring.
$\lim_{x\rightarrow{1}}\frac{x^{\frac{1}{7}}-1}{x^{\frac{1}{5}}-1}$
= $\lim_{x\rightarrow{1}}\frac{x^{\frac{4}{35}}+x^{\f rac{3}{35}}+x^{\frac{2}{35}}+x^{\frac{1}{35}}+1}{x ^{\frac{6}{35}}+x^{\frac{4}{35}}+x^{\frac{3}{35}}+ x^{\frac{2}{35}}+x^{\frac{1}{7}}+x^{\frac{1}{35}}+ 1}=\boxed{\frac{5}{7}}$
How do you do that....what's the "rule"(lack of a better word)?
5. It's basically the same as the previous post, but K eliminated the fractional exponents. Just factoring. See the pattern?.
6. Originally Posted by galactus
It's basically the same as the previous post, but K eliminated the fractional exponents. Just factoring. See the pattern?.
I get K's completely but i dont see the pattern for yours
7. Originally Posted by polymerase
I get K's completely but i dont see the pattern for yours
Its just writting $x^{1/7}-1$ and $x^{1/5}-1$ as sums of powers of $x^{1/35}$ , which can be done because $5\times 7=35$
RonL | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9346275925636292, "perplexity": 1819.0413283746861}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207932705.91/warc/CC-MAIN-20150521113212-00017-ip-10-180-206-219.ec2.internal.warc.gz"} |
https://www.ideals.illinois.edu/handle/2142/79123 | ## Files in this item
FilesDescriptionFormat
application/vnd.openxmlformats-officedocument.presentationml.presentation
369936.pptx (5MB)
PresentationMicrosoft PowerPoint 2007
application/pdf
1008.pdf (22kB)
AbstractPDF
## Description
Title: NEW ACCURATE WAVENUMBERS OF H35Cl+ AND H37Cl+ ROVIBRATIONAL TRANSITIONS IN THE v=0_1 BAND OF THE 2_ STATE. Author(s): Domenech, Jose Luis Contributor(s): Drouin, Brian; Cernicharo, Jose; Tanarro, Isabel; Herrero, Victor Jose; Cueto, Maite Subject(s): Astronomy Abstract: HCl$^+$ is a key intermediate in the interstellar chemistry of chlorine. It has been recently identified in space from {it Herschel's} spectrafootnote{M. De Luca et al., {it Astrophys. J. Lett.} {bf 751}, L37 (2012)} and it has also been detected in the laboratory through its optical emissionfootnote{W. D. Sheasley and C. W. Mathews, {it J. Mol. Spectrosc.} {bf 47}, 420 (1973)}, infraredfootnote{P. B. Davies, P. A. Hamilton, B. A. Johnson, {it Mol. Phys.} {bf 57}, 217 (1986) } and mm-wave spectrafootnote{H. Gupta, B. J. Drouin, and J. C. Pearson, {it Astrophys. J. Lett.} {bf 751}, L37 (2012) }. Now that {it Hershchel} is decomissioned, further astrophysical studies on this radical ion will likely rely on ground-based observations in the mid-infrared. We have used a difference frequency laser spectrometer coupled to a hollow cathode discharge to measure the absorption spectrum of $rm{H}^{35}rm{Cl}^+$ and $rm{H}^{37}rm{Cl}^+$ in the $v=0-1$ band of the $^2Pi$ state with Dopppler limited resolution. The accuracy of the individual measurements ($sim$ 10 MHz (3$sigma$)) relies on a solid state wavemeter referenced to an iodine-stabilized Ar$^+$ laser. The new data are being fit using the CALPGM software from JPL, and the current status will be presented. Issue Date: 25-Jun-15 Publisher: International Symposium on Molecular Spectroscopy Citation Info: ACS Genre: CONFERENCE PAPER/PRESENTATION Type: Text Language: English URI: http://hdl.handle.net/2142/79123 Date Available in IDEALS: 2016-01-05
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42817944288253784, "perplexity": 23968.880365166267}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122619.60/warc/CC-MAIN-20170423031202-00145-ip-10-145-167-34.ec2.internal.warc.gz"} |
http://annals.math.princeton.edu/articles/category/1991/134-1991/134-2 | Volume 134, Issue 2, September 1991
## Regularity properties of Fourier integral operators
Pages 231-251 by Andreas Seeger, Christopher D. Sogge, Elias M. Stein
## A boundary Harnack principle in twisted Hölder domains
Pages 253-276 by Richard F. Bass, Krzysztof Burdzy
## Floer homology and splittings of manifolds
Pages 277-323 by Tomoyoshi Yoshida
## Singular spaces, characteristic classes, and intersection homology
Pages 325-374 by Sylvain E. Cappell, Julius L. Shaneson
## Stratified symplectic spaces and reduction
Pages 375-422 by Reyer Sjamaar, Eugene Lerman
## $\mathrm{C}^\ast$-algebras associated with groups with Kazhdan’s Property T
Pages 423-431 by Simon Wassermann
## Szegö’s extremum problem on the unit circle
Pages 433-453 by Attila Máté, Paul Nevai, Vilmos Totik | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8054503798484802, "perplexity": 21756.030093904148}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314696.33/warc/CC-MAIN-20190819073232-20190819095232-00342.warc.gz"} |
http://mymathforum.com/abstract-algebra/33439-two-questions-approach.html | My Math Forum Two questions - Approach?
Abstract Algebra Abstract Algebra Math Forum
January 25th, 2013, 06:17 AM #1 Newbie Joined: Jan 2013 Posts: 26 Thanks: 0 Two questions - Approach? Let U be a universal set, A and B two subsets of U. (1) Show that B ? A ? (B ? A). _ _ (2) A = B if and only if there exists a subset X of U such that A ? X = B ? X and X\A = X\B. For (1), I did this: B ? A ? (B ? A). _ B ? (A?B)?(A?A) B ? (A?B)?(u) Therefore since A and B are both in the universe, B ? A ? (B ? A) For (2), I did this: Property of double-inclusion _ _ A ? X = B ? X and X\A = X\B, Show A=B Let X ? A Case 1: X?X X?X ---> X?AUX ---> X?BUX --->X?B Case 2 X?X ---> X?X\A ---> X?X\B ---> X?X or X?B, but X?X so X?B This one, I don't really know how to approach... If someone could prove it or even just point me in the direction, I'd appreciate it. Let A, B, C and D four non-empty sets. If f : A ? B and g : C ? D are two functions, we de?ne a new function h : A×C ? B×D as follows: ?(a, c) ? A×C, h(a, c) = (f(a), g(c)). Show that h is bijective if and only if f and g are bijective. This is really my first EVER exposure to abstract math, so at this point I'm kind of fumbling in the dark; any pointers would be great.
January 25th, 2013, 06:19 AM #2 Newbie Joined: Jan 2013 Posts: 26 Thanks: 0 Re: Two questions - Approach? The format I used didn't properly show that for question 1, it's B ? A ? (B ? A). where A with the intersection of B is a COMPLEMENT A A ? X = B ? X and X\A = X\B. Here the A and B in the XA XB difference is also complements.
January 25th, 2013, 07:22 AM #3 Senior Member Joined: Mar 2012 Posts: 294 Thanks: 88 Re: Two questions - Approach? if you want to show that: B ? A?(B?A^c) you know to show that every element of B is in the set on the right. now B = B?U = B ? (AUA^c) = (B?A) U (B?A^c). if b in B is in B ? A, then it lies in A, so certainly lies in A?(B?A^c). otherwise, b is in B?A^c, so lies in A?(B?A^c). with number 2, you cannot conclude that just because x lies in B?X, it lies in B. this is false. however, since x is assumed to lie in A, it is NOT in X\A. therefore it is NOT in X\B, therefore it IS in B (why? because B?X = X\B U B). this shows that A is a subset of B. now what you need to do is show if y is in B, y must also lie in A. it might help to draw some pictures. even that is only "half" of 2: showing that the existence of X with A?X = B?X, and A\X = B\X implies A = B. now you have a different proof: showing that if A = B, there is some X with that property. will X = U work? ***************************** for your last problem, you need a definition of injective and surjective. the one i will use is: f is injective if f(x) = f(y) implies x = y, and f is surjective if for ANY b in B there is some a in A with f(a) = b. bijective functions are both. one part (as is often the case) of this "iff (if an only if)" proof is fairly simple: suppose f,g are injective, and that h(a,c) = h(a',c'). this means that (f(a),g(c)) = (f(a'),g(c')). by the definition of equality on BxD, this means f(a) = f(a'), and g(c) = g(c'). since BOTH f,g are injective, a = a', and c = c'. therefore (a,c) = (a',c') and we have proved h is injective. now suppose f,g are surjective. then for any (b,d) in BxD, we have b = f(a), d = g(c), for some a in A, and c in C. hence h(a,c) = (f(a),g(c)) = (b,d), so h is surjective. now...the "other direction". suppose all we know is that h is bijective. well this means h is injective and surjective. now suppose f(a) = f(a'). then h(a,c) = (f(a),g(c)) = (f(a'),g(c)) = h(a',c), and since h is injective, (a,c) = (a',c), which means a = a' and c = c. we don't care about c right now, but a = a' shows f is injective. a similar proof works for g. so both f,g are injective if h is. now let b be any element of B. we need to find an a such that f(a) = b to show f is surjective. since h is surjective, for any element of BxD, we have (a,c) in AxC with h(a,c) = (b,d). so pick some random element d of D (we can do this, D is non-empty) and consider (b,d). there exists SOME (a,c) in AxC with h(a,c) = (b,d). so let's use the a of that particular (a,c). since h(a,c) = (f(a),g(c)) = (b,d), we have f(a) = b. again, a similar proof works for g.
January 25th, 2013, 10:44 AM #4 Newbie Joined: Jan 2013 Posts: 26 Thanks: 0 Re: Two questions - Approach? OK I'm a little confused... for the first question: B ? A?(B?A^c) to B = B?U = B ? (AUA^c) = (B?A) U (B?A^c). Where is the B?U from? Am I just looking at it backwards? AUA^c is just the universe, can't I switch it using that property? Is my method of using associative properties, followed by the AUA^c property to show the universe not sufficient when it shows B and A are both in the universe? Man this stuff is stressing me out lol.
January 25th, 2013, 10:00 PM #5 Senior Member Joined: Mar 2012 Posts: 294 Thanks: 88 Re: Two questions - Approach? I'm not sure I even understand what you wrote (part of this could be because however you are denoting "complement" isn't showing up). We want to show B is contained in a certain set, which is a union of two other sets. Since A,B are arbitrary, we can easily imagine situations where B is not contained in A, nor in the intersection of B with the complement of A. For example, suppose U = {1,2,3,4,5,6}, A = {1,2,3}, and B = {2,3,4}. then B clearly isn't contained in A. Now A^c = {4,5,6}, and B?A^c = {4}, so B clearly isn't contained in this set, either. So, we want to split up U somehow so that when we take the part of B that is in the first part of U, we have A?B, and when we take the part of B that is in the second part of U, we have A^c?B. this way we split up U should be "clean", exactly 2 pieces with no overlap. U = A?A^c does this for us. this isn't just a union, but a very special kind of union called a DISJOINT union, which is much easier to talk about (x is either in A, or A^c, never both). If you understand this, then I could have just written: B = (A?B)?(A^c?B) without mentioning U <---the same disjoint union of U, induces a disjoint union of B. This is just the same as saying: B = (B-A)?(A?B) <--B-A, or B\A is another way of writing (B?A^c). This is important for describing the "interaction" of A and B, A partitions B into two, non-overlapping sets: B\A and A?B. This is implicitly appealing to the logical: Law of the Excluded Middle - either x in B is in A as well, or it's not, there are no other choices. This is a basic principle of set-membership, and this exercise is trying to make this clear to you.
January 26th, 2013, 01:16 PM #6 Newbie Joined: Jan 2013 Posts: 26 Thanks: 0 Re: Two questions - Approach? OK. I took what you said, and this is what I've developed: So B?A?(B?A^c) To show this, we can say that when x?B, implies there exists an x?A?(B?A^c) We will let x?B, Case 1: x? A x? A--->x? A and x?(B?A^c). So clearly, x?A. Case 2: x?A x?A or x?(B?A^c). We have declared x?A, so x?B and x?A^c. Now we note that X?A^c is = to X?A. Thus proving that B?A?(B?A^c)
January 26th, 2013, 09:55 PM #7 Senior Member Joined: Mar 2012 Posts: 294 Thanks: 88 Re: Two questions - Approach? So very close. When you say: "....that when x is in B, there exists an x in x?A?(B?A^c)...." it should not be: "there exists", that is too general. it must be the very same x we are supposing we have in B. That is: to say X?Y is equivalent to saying: (x is in X) implies (x is in Y). Case 1) is wrong. If x is in A, x is NOT in A^c, and it is certainly not in B?A^c. you need to change the "and" to "or", that is the meaning of "union". Like this: suppose x is in B. If x is in A, then x is in A?(B?A^c). That's all you need to write. Case 2) is much better, but I would START with: Case 2: x?A we note that X?A^c is = to X?A. Hence x is in B?A^c, and thus in A?(B?A^c). ********************************** In general, for any two sets A and B: A?B ? A ? A?B A?B ? B ? A?B If you want to show something (say an element x) is in A?B, you can do this one of two ways: 1) show it is in A, or it is in B (but at least one; doesn't have to be both). 2) assume it is not in A, and show it must lie in B (this is really the same thing....if it's in A, then it surely lies in A?B by dint of the inclusions listed above. So if it's not in A, it had better be in B - this is a round-about way of saying: A?B = [(A^c)?(B^c)]^c). If you want to show something is in A?B, there is really only one way: 1) show it is in A, and it lies in B. Both must be true.
September 20th, 2013, 11:50 PM #8 Global Moderator Joined: Dec 2006 Posts: 20,820 Thanks: 2159 1. A?(B?A^c) = (A?B)?(A?A^c) (distributive law) = (A?B)?(U) = A?B (as A and B are subsets of U) ? B.
Tags approach, questions
### let A and B be two any sets, if A intersection X=B intersection X and A union B =BuX,then prove that A=B .
Click on a term to search for related topics.
Thread Tools Display Modes Linear Mode
Similar Threads Thread Thread Starter Forum Replies Last Post matthius Calculus 0 February 20th, 2012 07:26 AM JohnA Algebra 2 February 19th, 2012 09:29 AM tiba Math Books 0 February 19th, 2012 07:03 AM akle Calculus 1 July 9th, 2010 10:11 AM cmmcnamara Advanced Statistics 4 February 10th, 2010 05:49 AM
Contact - Home - Forums - Cryptocurrency Forum - Top | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9492806792259216, "perplexity": 1317.6879915958361}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526560.40/warc/CC-MAIN-20190720173623-20190720195623-00027.warc.gz"} |
https://www.groundai.com/project/time-dependent-solutions-for-a-stochastic-model-of-gene-expression-with-molecule-production-in-the-form-of-a-compound-poisson-process/ | Time-dependent solutions for a stochastic model of gene expression with molecule production in the form of a compound Poisson process
# Time-dependent solutions for a stochastic model of gene expression with molecule production in the form of a compound Poisson process
Jakub Jędrak Institute of Physical Chemistry, Polish Academy of Sciences, ul. Kasprzaka 44/52, 01-224 Warsaw, Poland Anna Ochab-Marcinek Institute of Physical Chemistry, Polish Academy of Sciences, ul. Kasprzaka 44/52, 01-224 Warsaw, Poland
July 6, 2019July 6, 2019
July 6, 2019July 6, 2019
###### Abstract
We study a stochastic model of gene expression, in which protein production has a form of random bursts whose size distribution is arbitrary, whereas protein decay is a first-order reaction. We find exact analytical expressions for the time evolution of the cumulant-generating function for the most general case when both the burst size probability distribution and the model parameters depend on time in an arbitrary (e.g. oscillatory) manner, and for arbitrary initial conditions. We show that in the case of periodic external activation and constant protein degradation rate, the response of the gene is analogous to the RC low-pass filter, where slow oscillations of the external driving have a greater effect on gene expression than the fast ones. We also demonstrate that the -th cumulant of the protein number distribution depends on the -th moment of the burst size distribution. We use these results to show that different measures of noise (coefficient of variation, Fano factor, fractional change of variance) may vary in time in a different manner. Therefore, any biological hypothesis of evolutionary optimization based on the nonmonotonicity of a chosen measure of noise must justify why it assumes that biological evolution quantifies noise in that particular way. Finally, we show that not only for exponentially distributed burst sizes but also for a wider class of burst size distributions (e.g. Dirac delta and gamma) the control of gene expression level by burst frequency modulation gives rise to proportional scaling of variance of the protein number distribution to its mean, whereas the control by amplitude modulation implies proportionality of protein number variance to the mean squared.
###### pacs:
82.39.Rt, 87.10.Mn, 87.17.Aa
## I Introduction
It has been confirmed experimentally that in living cells both mRNA Golding et al. (2005) and protein Ozbudak et al. (2002); Cai et al. (2006); Yu et al. (2006); Taniguchi et al. (2010); Choi et al. (2008) production may take form of stochastic bursts of a random size. The presence of bursts may be a result of processes involving short-lived molecules (e.g. the mRNA in case of protein production), concentration of which may be treated as a fast degree of freedom Friedman et al. (2006); Shahrezaei and Swain (2008). The number of protein molecules that can be produced from a single mRNA molecule before the latter is degraded is a random variable, and its distribution may, in the several experimentally known cases, be well approximated by geometric or exponential distribution Cai et al. (2006); Yu et al. (2006); Taniguchi et al. (2010). For that reason, in most of the existing models of bursty gene expression, the exponential (or geometric in a discrete case) bursts of protein Paulsson and Ehrenberg (2000); Friedman et al. (2006); Shahrezaei and Swain (2008); Aquino et al. (2012); Lin and Galla (2016) or mRNA Aquino et al. (2012) production are considered.
However, in the case of eukaryotic cells, certain models predict nonexponential distributions of burst sizes Schwabe et al. (2012); Elgart et al. (2011); Kuwahara et al. (2015). In particular, in the case of transcriptional bursts the molecular ratchet model predicts peaked distributions, that resemble gamma distribution Schwabe et al. (2012). Therefore, it seems desirable to study the analytically tractable models of bursty gene expression dynamics with a general, nonexponential form of burst size distributions.
Also, for the majority of stochastic models of gene expression proposed to date, even if the time-dependent solutions are considered Iyer-Biswas et al. (2009); Tabaka and Hołyst (2010); Ramos et al. (2011); Feng et al. (2012); Pendar et al. (2013); Kumar et al. (2014), it is usually assumed that model parameters are time-independent. However, taking into account the time variation of the model parameters, in particular the periodic time dependence of the rate of protein production Mugler et al. (2010) gives us an opportunity to model in a simple manner the response of a genetic circuit to oscillatory regulation and to indicate some qualitative properties of solutions for other oscillating parameters.
In this paper, we investigate a simple gene expression model, which is a natural generalization of the analytical framework proposed in Ref. Friedman et al. (2006), and which may serve as a model of both transcription and translation Aquino et al. (2012). Namely, in contrast to Ref. Friedman et al. (2006) we consider the case of an arbitrary (not necessarily exponential) burst size probability distribution and time-dependent model parameters. However, gene autoregulation is neglected. To the best of our knowledge, the time-dependent solutions of the model of Ref. Friedman et al. (2006) have not been known to date even in the absence of gene autoregulation or for the simplest case of time-independent model parameters.
We find the explicit time dependence of the cumulant-generating function for the probability distribution of molecule (protein) concentration. This general result is then applied to describe the oscillatory response of a gene to periodic modulation of the rate of protein production. In particular, we consider a gene driven by a single-frequency, sinusoidal regulation. In such a case, the time dependence of the mean molecule concentration consists of both the transient, exponentially decaying part and of the periodic part, whose amplitude depends on the driving frequency. We also point out that the division of the system’s response into periodic and transient part remains true in a more general case, when the model parameters are periodic functions of time.
We also show a simple relationship that links the -th cumulant of the protein number distribution and the -th moment of the burst size distribution. In particular, this relationship is proportional in the steady state. We use these results to discuss the question of possible evolutionary optimization of cellular processes with respect to noise intensity. Since it has been shown experimentally that distributions of protein numbers have universal scaling properties (variance proportional to mean or variance proportional to mean squared) Salman et al. (2012); Bar-Even et al. (2006), we use our results to gain an insight into possible origins of such scalings in the properties of the burst size distributions.
Stochastic models with bursty dynamics similar to the model considered here are known both in mathematics (so-called Takacs processes Cox and Isham (1986); Takacs et al. (1961)) and in physics (under the name of compound Poisson processes), where such models are used not only to describe stochastic dynamics of transcription or translation, but also to model such diverse phenomena as diffusion with jumps Luczka et al. (1995); Czernik et al. (1997); Łuczka et al. (1997), time dependence of soil moisture Porporato and D’Odorico (2004); Daly and Porporato (2006, 2010); Suweis et al. (2011); Mau et al. (2014), dynamics of snow avalanches Perona et al. (2012), statistics of the solar flares Wheatland (2008, 2009) and oil prices on the stock market Askari and Krichene (2008). And therefore, our results may be relevant to other fields beyond stochastic modeling of gene expression.
## Ii Results
Let us consider a source (gene) that creates objects (protein or mRNA molecules) of a single type, denoted by X, which are subsequently degraded or diluted due to the system size expansion, e.g. cell growth and division,
DNA I(t)−−→ X, Xγ(t)−−→∅. (1)
We focus on the simplest situation, when the molecules interact neither with each other, nor with the source. In consequence, the probability of degradation of a single molecule does not depend on the total number of molecules in the system. This assumption leads to a linear decay process (first-order reaction), which is the simplest, but arguably the most natural choice here. Still, we assume that both the source intensity and the decay parameter may vary with time in an arbitrary manner. Therefore, although we assume that the characteristics of the source are independent on the number of molecules present in the system (feedback effects are neglected), we allow the source (gene) to be externally regulated. If the number of molecules is sufficiently large, the continuous approximation is justified and the molecule concentration may be used instead of the exact copy number of molecules.
In order to obtain the stochastic description of the system, we assume that the molecule production takes the form of bursts of random size. Namely, the number of newly created molecules (or the magnitude of a concentration jump in the present continuous model), , is a stochastic variable drawn from the probability distribution , which may be explicitly time-dependent. It is assumed here, that burst duration is short enough that even large bursts can be treated as instantaneous. The time of appearance of each burst is also a random variable.
The occurrence of stochastic bursts in a given system may be due to the presence of some processes that are much faster than production or degradation of molecules in question; such processes are not explicitly taken into account within the model. For example, translational bursts of proteins are attributed to the existence of short-lived mRNA molecules Shahrezaei and Swain (2008); Lin and Galla (2016). However, it is not our aim here to relate the functional form of the burst size probability distribution to dynamics of fast degrees of freedom. Rather, we treat bursty dynamics as a well-justified approximation leading to reasonable effective description of the system at hand.
The deterministic model describing the kinetics of reactions (1) is given by a simple rate equation (60), see Appendix A. Its stochastic counterpart is the following Langevin-like equation
˙x = I(t)−γ(t)x, (2)
where is the molecule concentration and dot denotes the time derivative. appearing in (2) is now a compound Poisson process, i.e.
I(t) = N(t)∑k=1ukδ(t−tk), (3)
where is the size of the molecule burst (concentration jump) that takes place at and is the number of concentration jumps in the interval .
Stochastic differential equations similar to (2) have been used to model diffusion in asymmetric periodic potentials Luczka et al. (1995); Czernik et al. (1997); Łuczka et al. (1997), soil moisture dynamics and other phenomena in geophysics Perona et al. (2012); Porporato and D’Odorico (2004); Daly and Porporato (2006, 2010); Suweis et al. (2011); Mau et al. (2014), astrophysics Wheatland (2008, 2009) and economics Askari and Krichene (2008).
Instead of Eq. (2) it is more convenient to study the corresponding master equation111According to the terminology of Ref. Gardiner (2009), Eq. (4) is a special case of differential Chapman-Kolmogorov equation., proposed in Ref. Friedman et al. (2006)
∂p(x,t)∂t = γ(t)∂∂x[xp(x,t)] (4) + k(t)∫x0w(x−x′,t)p(x′,t)dx′.
In the above equation, is a time-dependent probability distribution of molecule concentration in the population of cells. We also have
w(u,t)=ν(u,t)−δ(u), (5)
where is the burst size probability distribution, denotes Dirac delta distribution, is the burst size, whereas and are time-dependent model parameters (in Ref. Friedman et al. (2006), only time independent model parameters have been considered).
Note that from Eq. (4) one can obtain equations for the time evolution of moments of , see Appendix B. However, solution of the moment equations is tedious, and it is usually much more convenient to work with the moment generating function.
In order to solve Eq. (4), we apply the Laplace transform: , , i.e., . In result, Eq. (4) is transformed into the following first-order linear partial differential equation
∂^p(s,t)∂t+γ(t)s∂^p(s,t)∂s−k(t)^w(s,t)^p(s,t)=0, (6)
which can be solved by the standard method of characteristics van Kampen (2007); Arfken et al. (2011). We obtain
^p(s,t)=Φ(Ω(t)s)eG(Ω(t)s,t), (7)
where
Φ(z)=^p(z,t0)=L{p(x,t0)} (8)
is the Laplace transform of the initial probability distribution ;
Ω(t)=exp(−∫tt0γ(t′)dt′), (9)
whereas is defined as
G(z,t)=∫tt0k(t′)^w(zΩ(t′),t′)dt′. (10)
It can be easily verified that for (7) we have
^p(s,t0)=Φ(s), ^p(0,t)=1. (11)
If , , and are periodic functions of time (including constant function treated as a special case of periodic function), and at least one of these three functions is not a constant function, time evolution of has an oscillatory character. More precisely, it is shown that each cumulant of consists of both the periodic part and the exponentially decaying transient terms, cf. Appendix C.
In most cases, given by (7) cannot be expressed in terms of elementary or standard special functions. Even if for some choice of , , and functions it is feasible to obtain a closed analytical formula for , the analytical evaluation of the inverse Laplace transform and hence the explicit analytical form of is usually out of question (a notable exception, for which the explicit form of can be obtained is analyzed in Section II.3).
However, making use of the relationship between , moment generating function and the cumulant generating function ,
^p(s,t)=M(−s,t)=∞∑m=0μm(t)(−s)mm!, (12)
ln[^p(s,t)]=K(−s,t)=∞∑m=1κm(t)(−s)mm!, (13)
one may find the exact analytical form of the time evolution of moments and cumulants of van Kampen (2007); Gardiner (2009). The cumulants of are of special interest here; from (7), (8), (9), (10), and (13) one gets
κr(t) = (−1)r(∂rln[^p(s,t)]∂sr)s=0 (14) = [Ω(t)]r(κr(0)+∫tt0k(t′)mr(t′)[Ω(t′)]rdt′).
In the above equation, denotes -th moment of the burst size probability distribution (5), i.e.,
mr(t)=∫∞0urν(u,t)du. (15)
From (14) we see that the time evolution of depends only on its initial value, , on the time dependence of the model parameters , , and on the time evolution of -th moment of , but it does not depend explicitly on any other cumulants of or moments of . Note that by using Eqs. (13) and (14) we can reconstruct (at least in principle) the time evolution of , provided that the time evolution of all moments of as well as the initial distribution are given.
Eq. (14) can also be obtained in an alternative way, which does not require the solution of Eq. (6). Namely, dividing Eq. (6) by we obtain the following equation for given by Eq. (13)
∂K(−s,t)∂t+γ(t)s∂K(−s,t)∂s−k(t)^w(s,t)=0. (16)
If we compute the -th derivative of Eq. (16) with respect to -variable, and subsequently put , we get the time-evolution equation for
˙κr(t)+rγ(t)κr(t)−k(t)mr(t)=0, (17)
from which we immediately obtain (14).
The two most important cumulants are the mean molecule concentration and variance . In particular, is given by
κ1(t)=Ω(t)[κ1(0)+∫tt0k(t′)m1(t′)Ω(t′)dt′], (18)
cf. Eq. (67) in Appendix B. and are of special interest also with the connection with two standard noise measures frequently used in biology: the Fano factor and the coefficient of variation , defined as
F(t)=κ2(t)κ1(t), η(t)=√κ2(t)κ1(t). (19)
### ii.1 Periodic gene regulation
Let us now analyze the case of a time-independent, but otherwise arbitrary burst size probability distribution , constant decay rate and molecule production rate (burst frequency) of the form
k(t)=C1sin(ωft+φ)+C2, (20)
where . In other words, our gene is periodically driven with a single angular frequency,
ωf=2π/T, (21)
where is an oscillation period; is the initial phase. Making use of (14) and (20), one can easily compute time evolution of -th cumulant of . Assuming for simplicity , we get
κr(t) = κr(0)e−rγt+C2mrrγ(1−e−rγt) (22) + C1mrsin(ωft+φ+β)√r2γ2+ω2f − C1mrsin(φ+β)e−rγt√r2γ2+ω2f,
where
β=arctan(−ωfrγ). (23)
given by (22) contains both the transient, exponentially decaying terms and the terms which are periodic functions of time, oscillating with an angular frequency of the driving. What is important, and easily visible when is written in a form (22), the oscillation amplitude depends on both and ,
Ar(γ,ωf)=C1√r2γ2+ω2f. (24)
(24) is a monotonically decreasing function of , therefore in the present case no resonant behavior should be expected.
In Fig. 1 we plot the time evolution of the average protein number as a function of dimensionless time variable and for various oscillation frequencies corresponding to (blue), (green) and (orange), as well as for the limiting case of nonoscillatory driving (). We assume that , therefore . Also, we assume here that is an exponential distribution (subscript stands for ’exponential’)
νϵ(u)=1bexp(−ub). (25)
Moments of (25) are given by
m(ϵ)n=bnn!. (26)
In particular, we have
m(ϵ)1=b, m(ϵ)2=2b2. (27)
Exponentially (or geometrically) distributed sizes of translational bursts have been observed in E. coli Cai et al. (2006); Yu et al. (2006); Taniguchi et al. (2010); Choi et al. (2008). For that reason, (25) appears to be a natural choice of the burst size distribution in the case of stochastic models of gene expression in which particle concentration is used instead of discrete particle number. Note that any other choice of can only affect values of and in Eq. (22); this results in identical rescaling of each plot along the -axis. As can be inferred from Eq. (22), the amplitude of oscillation is the largest for the largest oscillation period.
Similarly, in Fig. 2 we plot the time evolution of the Fano factor (19), again as a function of dimensionless time variable and for the same model parameters as in Fig 1. By employing the L’Hôpital’s rule, it can be shown that for and (25) we have
limτ→0F(τ)=2b, (28)
which is close to value () obtained in Ref. (Thattai and Van Oudenaarden, 2001) for a similar discrete model.
Most of the results of the present Section can be immediately generalized to the case of arbitrary periodic dependence of burst frequency
k(t)=a(k)0+∞∑q=1[a(k)qcos(qωft)+b(k)qsin(qωft)]. (29)
Invoking (14), for given by (29) we obtain
κr(t) = Tr(t)+Pr(t)+a(k)0mrrγ, (30)
where
Tr(t) = ⎛⎝κr(0)+∞∑q=0b(k)qqωf−a(k)qrγr2γ2+q2ω2fmr⎞⎠e−rγt,
and
Pr(t) = ∞∑q=1a(k)q⎛⎝qωfsin(qωft)+rγcos(qωft)r2γ2+q2ω2f⎞⎠mr + ∞∑q=1b(k)q⎛⎝rγsin(qωft)−qωfcos(qωft)r2γ2+q2ω2f⎞⎠mr.
In Appendix C we show that the division of into constant, transient and periodic part as given by (30) remains valid when not only , but also or are periodic functions of time.
Finally, let us note that Eq. (17) with given by (20) or, in general case, by (29) has a simple mechanical interpretation. Namely, it is the equation of motion of a particle moving with velocity in a viscous medium under the influence of both the drag force (, with constant ) and the external periodic force . Perhaps an even more compelling analogy is the RC low-pass filter: Fast oscillations of the external driving of gene expression Mugler et al. (2010) have less effect than slow ones.
### ii.2 Time-independent model parameters
#### Time evolution of p(x,t)
When the model parameters do not depend on time, i.e., , , and , Eq. (7) may be rewritten as
^p(s,t)=Φ(sΩ(t))exp[aΨ(s)−aΨ(sΩ(t))], (33)
where
a=kγ, (34)
Ψ(z)=∫^w(z)zdz. (35)
is given again by Eq. (8), whereas
Ω(t)=exp(−γt) (36)
is a special case of (9). In the steady-state limit, from (33) we obtain
limt→∞^p(s,t)≡^p(s)=exp[a(Ψ(s)−Ψ(0))]. (37)
The form of stationary distribution (we distinguish stationary and nonstationary probability distribution functions by the number of arguments) depends neither on the values of and parameters alone, nor on the initial condition, but only on the functional form of the burst size pdf and value of the parameter (34).
Using (37), we may rewrite (33) as
^p(s,t)=Φ(sΩ(t))[^p(sΩ(t))]−1^p(s). (38)
Invoking the following property of Laplace transform Abramowitz and Stegun (1964)
L−1[^f(αs)]=1αf(xα), (39)
where , by taking the inverse Laplace transform of (38) we can express as the convolution of three terms
p(x,t)=1Ω(t)p(xΩ(t),0)∗p(x)∗1Ω(t)q(xΩ(t)), (40)
where
p(x)=L−1[^p(s)], q(x)=L−1[1/^p(s)]. (41)
(37) and cannot simultaneously satisfy the necessary conditions required for the Laplace transform of an ordinary function, in particular the condition . Clearly, the latter condition should be obeyed by , hence we have . This implies that (41) is not an ordinary function, but a distribution consisting of (apart from some ordinary function) superposition of delta distribution and its derivatives. In particular, if is a polynomial of degree ,
1^p(s)=M∑k=0qksk, (42)
we obtain
q(x)=M∑k=0qkδ(k)(x). (43)
If the explicit form of both and (43) is known, it may be feasible to find the explicit form of by invoking Eq. (40) and the identity
(δ(k)∗f)(x)=f(k)(x). (44)
Derivative on the r.h.s of Eq. (44) should be understood as a distribution derivative Schwartz and Denise (1965). Namely, if has a discontinuity at , but is at least times differentiable for , the -th distribution derivative of reads
f(m)={f(m)}+σ0δ(m−1)+σ1δ(m−2)+…++σm−1δ, (45)
where denotes distribution related to treated as an ordinary function (not defined at ), whereas Schwartz and Denise (1965). In Appendix F we apply Eqs. (38)-(45) to obtain solution of Eq. (4) with the exponential probability distribution of burst sizes (25) in an alternative way than the one used in Section II.3.
#### Time evolution of cumulants of p(x,t)
If the model parameters do not depend on time, Eq. (14) takes a remarkably simple form
κr(t)=κr(0)e−rγt+a(1−e−rγt)mrr, (46)
where is given by Eq. (15) with . In the limit, from (46) we obtain
κr=κr(∞)=amrr. (47)
which also follows from Eq. (79) of Appendix D (in this Appendix, we further elaborate on the relationship between functional form of the burst size probability distribution and the functional form of the corresponding steady-state distribution of protein concentration, ). Using (47), we may rewrite (46) as
κr(t)=κr(0)e−rγt+κr(∞)(1−e−rγt). (48)
The time evolution of as given by (46) or (48) consists of the exponentially decaying contribution coming from the initial probability distribution , as well as the contribution proportional to the stationary distribution (47); the latter is completely determined solely by the values of and .
In the present case, if only the initial distribution is known, the time evolution of cumulants may be immediately recovered from (46) if needed. This allows us to concentrate solely on the stationary limit (). Next, by making use of (13) and (46), we can obtain in the form of a power series in variable.
From Eqs. (46) or (48) we see that the higher the cumulant order is, the faster approaches its stationary value. In particular, variance approaches stationary value faster than the mean protein concentration. For , from (47) we obtain a simple relation,
κ1=μ1=m1a. (49)
The parameter as defined by Eq. (34) is equal to the burst frequency, , multiplied by the characteristic time scale of the system, . Therefore is proportional to the mean number of bursts (in Ref. Friedman et al. (2006) parameter itself is called the burst frequency) and (49) has a simple interpretation, i.e., the average protein concentration (number) is the average burst size times the mean number of bursts in time interval of the length .
For , Eq. (46) can be rewritten as
κ2(t)=κ2(0)e−2γt+a2(1−e−2γt)(σ2(u)+b2), (50)
where and is the variance of . The term proportional to in Eq. (50) is related to the stochasticity of the burst size distribution. However, even for the dispersionless () burst size distribution,
νδ(u)=δ(u−b), (51)
we have an irreducible contribution to coming from the term proportional to in Eq. (50). For a fixed , (51) minimizes the variance of , a result which could be intuitively expected.
### ii.3 Example: p(x,t) corresponding to the exponential burst size distribution
In this section we find the time-dependent solution of Eq. (4) for the exponential burst size distribution (25). Apart from gene expression models, exponential distribution (25), as well as closely related two-sided exponential distribution found applications in models of other phenomena Porporato and D’Odorico (2004); Daly and Porporato (2006, 2010); Suweis et al. (2011); Mau et al. (2014). It should be also noted that in most cases only for of the form (25) both Eq. (4) and its generalizations (e.g., jump-diffusion equations Luczka et al. (1995); Czernik et al. (1997); Łuczka et al. (1997)) are analytically tractable.
As shown in Ref. Friedman et al. (2006), for the time-independent model parameters, (25) leads to stationary distribution in the form of gamma distribution,
(52)
This can be readily verified by making use of Eqs. (35) and (37).
From (26) and (46) we have
κ(ϵ)n(t)=κn(0)e−nγt+a(1−e−nγt)bn(n−1)! (53)
In the limit, we obtain , i.e., the cumulants of gamma distribution (52). From (13) and (53) the Taylor series expansion of can be reconstructed, we get
ln[^pϵ(s)]=a∞∑n=1(−bs)nn=ln[1(sb+1)a], (54)
hence , which is indeed the inverse Laplace transform of gamma distribution (52).
Interestingly, in the present case both the explicit expression for and even for can be obtained, at least for the initial distribution of the form
pϵ(x,0)=δ(x−x0), (55)
where is the initial molecule concentration. The Laplace transform of (55) is , and hence from (33) we obtain
^pϵ(s,t) = ⎛⎝se−γt+1bs+1b⎞⎠aexp(−x0e−γts). (56)
For simplicity, we put (which is arguably the most natural choice in the case of gene expression models). Moreover, we confine our attention to , as only in this case we were able to find compact analytical expression for the inverse Laplace transform of (56). Still, (56) is valid for arbitrary real . It is also convenient to change the independent variable according to . In such a case, reads
~pϵ,n(x,ω) = ωnδ(x)+n∑i=1(ni)(1−ω)iωn−i(i−1)!bixi−1e−xb ≡ ωnδ(x)+n∑i=1(ni)(1−ω)iωn−iqγ(x;i,b),
where is given by (52), whereas by we denote (56) for and similarly for and . Each of functions (LABEL:solution_for_p_no_hat_x) for is a superposition of gamma distributions (Dirac delta can be also treated as a limiting case of the gamma distribution) with different integer values of and time-dependent weights. Hence, (LABEL:solution_for_p_no_hat_x) is a natural time-dependent generalization of the gamma distribution (52) with , obtained in Friedman et al. (2006), where only the stationary limit of Eq. (4) has been considered.
Note that for , the dependence of (56) on and the mean burst size is of the form (92), therefore , and in particular (LABEL:solution_for_p_no_hat_x) have the characteristic dependence on variable and parameter as given by (93), cf. Appendix E.
An alternative way of obtaining (LABEL:solution_for_p_no_hat_x), its generalization for and its explicit form for small are discussed in Appendix F.
## Iii Discussion, biological insights
The stochastic description of the simple system studied here shares a common feature with the corresponding deterministic model: The time evolution of the average protein number predicted by the stochastic model is identical with the time evolution of the protein concentration obtained from deterministic equations of kinetics (see Appendix A). On the other hand, the evolution of the -th cumulant of the protein number distribution in time depends solely on the behavior of the -th moment of the burst size distribution in time, but it does not depend on its other moments. In consequence, the time evolution of the average molecule number is identical for all burst size distributions which have the same first moments, if only the remaining model parameters are identical. If additionally the time dependence of the second moments of the burst size distributions is identical, we obtain an identical time dependence of the coefficient of variation and the Fano factor of the protein number distributions, the two important measures of gene expression noise. And therefore, the predictions of stochastic models with bursty molecule production are, to a large extent, universal as they do not depend on other details of the burst statistics. This may explain the success of gene expression models that commonly assume exponentially distributed burst sizes, despite the fact that the experimental evidence for this particular burst size distribution can be found in only a few papers Cai et al. (2006); Yu et al. (2006); Taniguchi et al. (2010); Choi et al. (2008). (Note that a somewhat similar conclusion about an unexpected universality of coarse-grained models was drawn by Pedraza et al. Pedraza and Paulsson (2008) in regards to statistics of waiting times between mRNA bursts.)
It should also be noted that the effective bursty dynamics results from the approximation based on integrating out fast degrees of freedom. In order to check the range of validity of this approximation, the dynamics of the effective model (e.g. with protein but without mRNA, as considered here) should be compared with the dynamics of the full model including both slow (protein copy number) and fast (mRNA copy number) degrees of freedom. However, it is expected that predictions of the latter model are in agreement with the predictions of the former for greater than few mRNA lifetimes Thattai and Van Oudenaarden (2001); Shahrezaei and Swain (2008).
The Eq. (46) shows that the relaxation of the variance is twice faster than that of the mean (Fig. 3 A). This has been shown previously for the model of gene expression where mRNA was explicitly taken into account and all reactions were Poissonian Thattai and Van Oudenaarden (2001). The same has been shown in ref. Bar-Even et al. (2006) (supplementary information therein), without referring to any particular reaction statistics. Eq. (46), on the other hand, links that result with the moments of an arbitrary distribution of protein bursts. Below, we will discuss these results in the context of evolutionary optimization of biological processes with respect to time-dependent noise intensity, and also we will relate the behavior of Eq. (46) to experimentally measurable scaling relations between protein mean and variance. Although our model does not account for extrinsic noise nor feedback in gene regulation, our analysis may shed some light on understanding of the relation between protein number statistics and underlying burst statistics.
### iii.1 Optimization of protein level detection with respect to noise is dependent on the assumed measure of noise
Suppose that a cell population expresses a protein at a certain level in given environmental conditions, and then the conditions abruptly change, which results in a change in the expression level. How does the width of the protein distribution vary over time before it reaches a new steady state? Although the stationary behavior of noise in gene circuits has been widely studied, somewhat less studies have been devoted to transient behavior of noise (see e.g. Thattai and Van Oudenaarden (2001); Tabaka et al. (2008); Palani and Sarkar (2012); Dixon et al. (2016)).
The difference in relaxation time scales of the protein mean and variance may result in a nonmonotonic or, at least, nonlinear dependence of noise on time. It would be tempting to put forward a hypothesis that this feature may be exploited by evolution for optimization of some processes with respect to noise: For example, let the gene expression be reduced from an induced level to a basal level, and suppose that this reduction should trigger some other processes in the cell. For the trigger to be maximally precise (such that all cells can detect the decrease in protein concentration at almost the same time), its threshold should not necessarily be located precisely at the basal expression level, but perhaps somewhere higher, where the noise is minimal.
We will show below, however, that such interpretations are dependent on the function assumed to quantify noise. We do not know what measure of noise does the biological evolution use – that probably depends on the nature of a specific biological process. Coefficient of variation (19) seems to be a relatively natural choice because it measures the ratio of distribution width to its mean, so it is a dimensionless quantity. However, Fano factor (19) is also frequently used in literature, a function that measures the ratio of variance to the mean, i.e. the deviation of the process from Poissonian statistics. On the other hand, in the context of detection of transition between two expression levels, an equally natural choice may be the fractional change of the distribution width between the initial and final (stationary) state. One can easily see that each of these quantities behaves differently.
For visualization of the problem, suppose that the proteins are produced in exponential bursts of a mean size . The number of proteins is therefore gamma-distributed with mean and variance . Let the initial expression level be proteins, and after the abrupt environmental change it tends to . Such a change can be attained by two mechanisms: Decreasing (frequency modulation, FM) or decreasing (amplitude modulation, AM). Experimental evidence suggests that cells are able to adjust both and Padovan-Merhar et al. (2015). For and , a ten-fold decrease in by changing at fixed results in a ten-fold change in variance (i). The same decrease in mean protein concentration by changing at fixed yields a change in variance by the factor of (ii).
In the case (i), the coefficient of variation has a deep minimum in , and the Fano factor has a minimum at (a relatively deep one, compared to the initial and final values). These dependencies are different in the case (ii): Here, the coefficient of variation has a shallow minimum at , and the Fano factor decreases almost monotonically by one order of magnitude, with a minimum that is insignificant compared to the total change of (see Fig. 3 B, C).
The situation is still different if one takes into consideration the fractional change in the protein distribution width between the initial and final state. It immediately follows from Eq. (48) that the fractional change of the -th moment,
Kr(t)≡κr(t)−κr(∞)κr(0)−κr(∞)=e−rγt. (58)
In particular, the square root of the fractional change of the variance is equal to the fractional change of the mean (Fig. 3). If (or its increasing function) is used as the measure of the distribution width, then its minimal value is at . And therefore, the optimization of the position of a detection threshold to minimize noise would be ambiguous, depending on a function chosen to quantify noise.
The above example shows that any biological hypotheses regarding the evolutionary optimization of some processes with respect to the amount of noise must assume that evolution has a specified method of measurement of that noise. If such optimizations really take place in nature, then it seems that the way how the evolution quantifies noise depends on a particular biological process. To date, it is not clear, however, which measure of noise is important in which process. This problem deserves a deepened experimental analysis.
### iii.2 Frequency modulation and amplitude modulation cause different scalings of protein number variance to mean
Experimental results suggest that cells can control gene expression levels both by adjusting the mean burst frequency (frequency modulation, FM) and the mean burst size (amplitude modulation, AM) Padovan-Merhar et al. (2015). With the Eq. (46) of our model, we can relate these two types of burst control to the scaling of mean and variance of protein distributions.
According to Eqs. (46) and (47), the scaling of the variance of the protein number distribution with the mean depends on three parameters that describe burst statistics: mean burst frequency , mean burst size , and the second moment of the burst size distribution, . For simplicity of notation, in the following discussion we will denote by a constant whose value is universal for a set of different genes or for a single gene in cells cultured in various conditions. If the protein number distributions produced by the studied genes obey the scaling (i), then the moments of the burst size distribution depend on each other so that and the mean burst frequency can be arbitrary. On the other hand, if one observes the scaling (ii), then the three burst parameters depend on each other so that
If additionally the burst size distribution is such that , as in our examples in the main text and in the Appendix E ( for delta burst size distribution, for exponential, and for gamma distribution, with defined in the Eq. 101), then (i) implies that the mean burst size is universal, and the gene expression levels in the studied gene set, or in the set of conditions studied, are modulated by varying the mean burst frequency (FM). If, on the other hand, (ii), then the mean burst frequency is universal and the gene expression level is modulated by mean burst size (AM). This dependence of the variance-to-mean relationship on AM or FM has been known Hornung et al. (2012), but an explicit or implicit assumption was that the burst size distributions are exponential. Here, we show that this property also extends to a class of nonexponential distributions.
Scaling (i) was observed, e.g., in S. cerevisiae Bar-Even et al. (2006), where different promoters controlled transcription of their native proteins fused with GFP, under different environmental conditions. Assuming burst size distributions such that , would it be possible that the mean size of protein burst was the same in all these experiments and only the burst frequency varied? This could perhaps be conceivable, if the protein burst size were globally limited by the availability of translational machinery, or if the mRNA of different GFP fusions had simultaneously similar stability and similar translation rates, such that the average number of proteins produced from one mRNA molecule was the same regardless of the gene. The burst frequencies could differ from gene to gene depending on the on/off switching rates of different promoters.
However, we note that if the parameters of the burst size distribution are independent on time, then this fact imposes a particular form of scaling of the mean and variance of protein number distribution with time (46). In the Ref. Bar-Even et al. (2006), the authors observed that when the variance and mean were normalized with respect to the initial state, , then the normalized variance was proportional to the normalized mean even in the time-dependent case out of the stationary state. Although the authors supposed that their theoretical model explained this scaling, it does not seem to be the case. The equations for nonstationary mean and covariance proposed in Bar-Even et al. (2006) are fully consistent with the moment equations of our model and they imply that
Δκ2(t)Δκ1(t)=Δκ2(∞)Δκ1(∞)(1+e−γt). (59)
The value in the braces changes from to , so cannot be maintained constant in time within our model, if the parameters of the burst distribution are constant in time. What if they are time-dependent? Using Eq. 14, one can easily check that no simple substitution of exponential dependence of or allows the function to lose the dependence on time. The problem with time-dependent scaling suggests that as simple a model as ours may not be suitable for description of the data presented in Bar-Even et al. (2006). This also suggests that more detailed studies are needed on time dependence of protein noise and its relation to the properties of burst statistics.
Scaling (ii) was observed, e.g., in E. coli and S. cerevisiae cultured in different conditions Salman et al. (2012). The GFP gene was inserted under the control of three different promoters in multiple-copy plasmids (5 or 15 copies), or integrated into the genome in a single copy. If the distributions of burst sizes were such that , would it be possible to explain this scaling behavior within our model? Gene expression level should be then modulated by the mean burst size, and the burst frequency should be universal. The quadratic scaling of mean vs. variance (Salman et al. (2012), Fig. 3 therein) can be fitted by a one-parameter parabola . We note that the values of are different for the three different promoters. Within our model, this would mean that each promoter has its own characteristic frequency of bursting. This would sound reasonable if single gene copies were studied. However, in the experiments of Salman et al. (2012), the promoters were present in variable numbers of copies. The burst frequencies of the gene copies should then add up (see Eq. 83) and a single promoter should have a lower burst frequency than its multiple copies, unless there is some mechanism of dosage compensation in cells, which keeps the total burst frequency independent on the copy number of a given promoter. Moreover, the universal scaling of the full protein number distributions in Salman et al. (2012) was defined by a function (where denotes standard deviation), so, for example, gamma distribution produced by exponential bursts does not obey that scaling. Therefore, the validity of our model with time-independent parameters and the AM modulation of gene expression seems unlikely in the case of the data presented in Salman et al. (2012).
Yet, the above considerations based on our simple theory reveal that there are still unexplored problems in the field of stochastic gene expression: Are the distributions of protein burst sizes constant in time? Do they always belong to the wide class of those fulfilling , which includes the exponential distribution, commonly assumed in modeling? Under what biological conditions are the protein number distributions controlled by amplitude modulation of protein bursts or by frequency modulation (AM vs. FM)? Do these mechanisms undergo dosage compensation in the case of gene multiplication? The present discussion may be, therefore, an inspiration for a deepened experimental analysis of time dependence of protein number statistics on the underlying burst size statistics.
## Acknowledgments
The research was partly supported by the Ministry of Science and Higher Education Grant No. 0501/IP1/2013/72 (Iuventus Plus).
Corresponding author, e-mail: jjedrak@ichf.edu.pl
## Appendix A Deterministic rate equation
The deterministic model of the reaction kinetics for the system described by Eq. (1) is given by the following rate equation
˙x = c(t)−γ(t)x, (60)
where is the molecule concentration, is the source intensity, whereas is the decay parameter; dot denotes the time derivative. Note that describes a deterministic birth process, in contrast to the random source intensity of the corresponding stochastic model (2). The solution to the Eq. (60) for can be readily obtained:
x(t)=e−∫tt0γ(t′)dt′[x0+∫tt0c(t′)e∫t′t0γ(~t)d~tdt′]. (61)
The existence of the stationary solution to Eq. (60), as well as the possible oscillatory character of molecule concentration depends on the functional form of and . Clearly, in the case of time independent model parameters, i.e., and , the unique, stable stationary point of (60) is given by
xs=cγ. (62)
## Appendix B Time-evolution of the moments of p(x,t)
Multiplying Eq. (4) by and integrating such obtained equation, one gets the time evolution equation for the -th moment of ,
μr(t)=∫∞0xrp(x,t)dx. (63)
In the resulting time evolution equation for , the term derived from the first term on the r.h.s of Eq. (4) is readily integrated by parts, whereas in the term containing we have to change order of integration with respect to and as well as to change the independent variables according to , , cf. Ref. Feng et al. (2012). In result, we get
˙μr(t)=−rγ(t)μr(t)+k(t)r∑q=1(rq)μr−q(t)mq(t), (64)
where denotes -th moment of the burst size distribution as given by Eq. (15), i.e.,
mr(t)=∫∞0urν(u,t)du. (65)
We assume here that the burst size pdf is properly normalized and that the normalization is conserved during time evolution, i.e. , but we impose no other restrictions on the functional form of . On the other hand, the normalization of , i.e., the condition follows immediately from Eq. (64). For , Eq. (64) reads
˙μ1(t)=k(t)m1(t)−γ(t)μ1(t). (66)
Eq. (66) is identical to Eq. (60) provided that we put . In such a case, the time evolution of the average molecule number is the same as the time evolution of molecule concentration in the corresponding deterministic model (60); this is a general property of linear deterministic dynamical systems van Kampen (2007); McQuarrie (1967). Therefore, making use of Eq. (61), we may immediately write down the solution of Eq. (66)
μ1(t)=Ω(t)[μ1(0)+∫tt0k(t′)m1(t′)Ω(t′)dt′], (67)
where is defined by Eq. (9), i.e.,
Ω(t)=exp(−∫tt0γ(t′)dt′). (68)
For each , Eqs. (64) with form closed hierarchy of linear differential equations (time evolution equation for does not depend on if ). Therefore, in principle, starting from , the explicit analytical formula for can be found iteratively for arbitrary .
## Appendix C Periodic time dependence of the model parameters
Below, we show that the possibility of division of into two parts, the exponentially decaying and periodic one, as demonstrated on the simple example analyzed in Subsection II.1, is in fact a general feature of the present model when its parameters depend periodically on time. We assume now that not only (29), but also , and appearing in (14) are period | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9582430720329285, "perplexity": 566.7062209823932}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107869933.16/warc/CC-MAIN-20201020050920-20201020080920-00349.warc.gz"} |
https://ham.stackexchange.com/questions/9145/linux-rtl-sdr-waterfall-via-the-command-line/9153 | # Linux RTL-SDR waterfall via the command line
Using Raspbian (Debian) Linux on a Raspberry Pi 2, how can I view data from my software-defined radio via the command line over SSH? Is there an RTL-SDR client with terminal emulation?
• How much data bandwidth for l does your SSH connection have? Why not simply send the data necessary for visualization over that connection and plot on your local machine? – Marcus Müller Aug 31 '17 at 7:17
• or use one of the multiple web SDR frontend things? That would be way more useful than a ascii art DFT – Marcus Müller Aug 31 '17 at 19:41
• Bandwidth won't be limited. I do need it to display right from SSH because I'm using an online SSH client (shellinabox). A good front end would be nice though because it would work in the browser. – Finn Bear Sep 6 '17 at 1:20
• Band width is always limited, unless your RTL dongle computer has a bigger internet uplink than 48 Mb/s (roughly the rate the RTL dongle can produce) :) my point is that you really don't; if you can connect via SSH, you can also run a web frontend, either publicly or privately forwarded to your local machine – Marcus Müller Sep 6 '17 at 18:22
• The thing is, the two computers are on the same network so bandwidth is high. One is a raspberry pi and one is a laptop. The web front end idea is a good one though :) – Finn Bear Sep 6 '17 at 22:11
This is actually implementable, as specified, but you may have to code your own app.
I wanted to measure whether 2 signals were present. So I ran rtl_tcp on my Pi 3 to serve RTL-SDR IQ samples over a socket. I then wrote 2 DSP filters (in C) that connected to rtl_tcp and measured the magnitudes of the two signals, ran the filters on the Pi 3, and printed the results to stdout over ssh. I suppose I could have presented the 2 magnitudes as ASCII art of a 2 bar waterfall spectrograph.
My benchmarks show that a Pi 3 could also easily run an FFT on the IQ data as well as 2 simpler filters for a wider waterfall.
• Seems like you understand the requirements. Any chance you could provide a bit of your code? Would it be easy to adapt from 2 bars to 1 bar per character of terminal width? – Finn Bear Sep 6 '17 at 1:22
• This Q&A seem appropriate for info on ascii art plotting: stackoverflow.com/questions/123378/… – hotpaw2 Sep 6 '17 at 2:00
• I converted portions of the FFT waterfall code to Swift 3 with Metal shader graphics, and posted it to github: gist.github.com/hotpaw2 . I'll leave it as an exercise for the student to convert the FFT output into some ASCII-art plot routines. – hotpaw2 Sep 6 '17 at 2:06
• Thanks for that. I will be making a cusotm ASCII plotter if I can't get a web interface to work. – Finn Bear Sep 6 '17 at 22:17
You didn't specify the OS of the client from which you would like to view the remote session.
On the Pi side, use the built in RealVNC to allow remote access.
If your remote client is Linux based, take a look at Remmina. It is a free, open source project with support for several Linux variants.
If your remote client is Windows based, evaluate TightVNC. It is free download if you don't need any extensions.
• I specified that I need to be able to view the waterfall "via the command line over SSH". All I have is SSH via shellinabox. – Finn Bear Sep 6 '17 at 1:18
This is two questions.
I will presume the answer to the first is to simply start your sdrsharp software
sdrsharp
Then, to see it remotely, you need an xterm-compatible connection.
I suggest that your best option is to install a graphical remote-viewing setup.
Three ways to do this:
First, enable X11 over ssh
Procedure is to sudo vi /etc/ssh/ssh_config and change it to allow X11 connections.
Then, you can use
ssh -Y hostname
echo $TERM The response should be xterm Then you can issue the command to start the software sdrsharp Second, install tightvnc so you can control the console of the system. Once that works you can open a terminal window and do the same. Third, install Remote Desktop Protocol, which allows you to make virtual connections to the system which are compatible with Windows Remote Desktop client, or remmina. sudo apt-get install xrdp Then do the same. If you are on Windows, you should be able to install OpenSSH, then use SSH -X clientname to get it to pull up an xterm. • I can't install software such as sdrsharp. All I have access to is a browser. Right now, I'm using shellinabox to do ssh. – Finn Bear Sep 6 '17 at 1:21 • OK, I get it now. It is a VT-100 emulator via AJAX in a browser. I do not really see any way to do X11 with it. Same with Gate One. Hmm... So you do not have control of the server enough to do VNC or RDP, then. I answered the question for a normal command line, not a browser-based terminal emulator. This is like Lynx in reverse. --- But why would you think this could do RTL-SDR at all, then, if you can't do installs? I think I'm confused about what will help you most. For myself, I have found that SDR# works much better on Windows... – SDsolar Sep 6 '17 at 5:58 • I have full control over the server (my raspberry pi). The client is limited (school laptop) and all I have is a web browser (chrome), no VNC client. I can use SSH over shellinabox. – Finn Bear Sep 6 '17 at 22:13 • So you are using a Chromebook? There simply MUST be a way to get a real Xterm server running on it. Please tell me the make and model and such and I'll look up your options. Of course, you know that you can put a monitor and keyboard on a Rpi. I have a Rpi3 with a nice monitor and keyboard that I use all the time to do ssh into my other systems. It makes a decent web browser and Chromium synchronizes my bookmarks with my Chrome browsers - Ubuntu Linux, Window 7 and 8.1 and Rpi. Well worth the effort to turn one into a workstation. – SDsolar Sep 7 '17 at 1:35 • btw, here is my Ubuntu system: hardwarerecs.stackexchange.com/questions/7624/… - It has turned out to be the bedrock of my network. And only cost a tad more than a Rpi.$45 to start, for a real PC. I run Ubuntu 16.04 LTS on it. Also beefed it up a bit, though, with a solid-state drive that boots in about 20 seconds and more RAM. Sounds to me like you might need an upgrade from that old system. – SDsolar Sep 7 '17 at 1:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16254866123199463, "perplexity": 2581.238366256302}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250598726.39/warc/CC-MAIN-20200120110422-20200120134422-00520.warc.gz"} |
https://www.clutchprep.com/chemistry/practice-problems/74229/suppose-a-small-lake-is-contaminated-with-an-insecticide-that-decomposes-with-ti | # Problem: Suppose a small lake is contaminated with an insecticide that decomposes with time. An analysis done in June shows the decomposition product concentration to be 3.13 x 10-4 mol/L. An analysis done 35 days later shows the concentration of decomposition product to be 7.33 x 10-4 mol/L. Assume the lake volume remains constant and calculate the average rate of decomposition of the insecticide.
93% (74 ratings)
###### Problem Details
Suppose a small lake is contaminated with an insecticide that decomposes with time. An analysis done in June shows the decomposition product concentration to be 3.13 x 10-4 mol/L. An analysis done 35 days later shows the concentration of decomposition product to be 7.33 x 10-4 mol/L. Assume the lake volume remains constant and calculate the average rate of decomposition of the insecticide.
What scientific concept do you need to know in order to solve this problem?
Our tutors have indicated that to solve this problem you will need to apply the Average Rate of Reaction concept. You can view video lessons to learn Average Rate of Reaction. Or if you need more Average Rate of Reaction practice, you can also practice Average Rate of Reaction practice problems.
What professor is this problem relevant for?
Based on our data, we think this problem is relevant for Professor Osambo's class at University of New Haven. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9712182283401489, "perplexity": 949.0058376856329}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141723602.80/warc/CC-MAIN-20201203062440-20201203092440-00296.warc.gz"} |
http://www.ck12.org/book/CK-12-Earth-Science-Concepts-For-Middle-School/r9/section/7.7/ | <img src="https://d5nxst8fruw4z.cloudfront.net/atrk.gif?account=iA1Pi1a8Dy00ym" style="display:none" height="1" width="1" alt="" />
# 7.7: Mesosphere
Difficulty Level: Basic Created by: CK-12
Estimated3 minsto complete
%
Progress
Practice Mesosphere
Progress
Estimated3 minsto complete
%
Next up: A field trip to the mesosophere!
Not so fast. The mesosphere is the least known layer of the atmosphere. The mesosphere lies above the highest altitude an airplane can go. It lies below the lowest altitude a spacecraft can orbit. Maybe that's just as well. If you were in the mesosphere without a space suit, your blood would boil! This is because the pressure is so low that liquids would boil at normal body temperature.
### Mesosphere
The mesosphere is the layer above the stratosphere. It rises to about 85 kilometers (53 miles) above the surface. Temperature decreases with altitude in this layer.
#### Temperature in the Mesosphere
There are very few gas molecules in the mesosphere. This means that there is little matter to absorb the sun’s rays and heat the air. Most of the heat that enters the mesosphere comes from the stratosphere below. That’s why the mesosphere is warmest at the bottom.
#### Meteors in the Mesosphere
Did you ever see a meteor shower, like the one in Figure below? Meteors burn as they fall through the mesosphere. The space rocks experience friction with the gas molecules. The friction makes the meteors get very hot. Many meteors burn up completely in the mesosphere.
The Perseid meteor shower with the Milky Way.
#### Red Sprites and Blue Jets
Red sprites and blue jets are electrical discharges. They are not the same thing as lightning. They more closely resemble the discharges seen in fluorescent tubes. These events occur much higher up in the atmosphere than lightning.
Red sprites occur higher in the atmosphere than blue jets, while both occur higher than lightning.
#### Polar Mesospheric Clouds
Clouds in the mesosphere are very rare. The ones that exist occur near the poles. The clouds are called polar mesospheric clouds. At the edge of these clouds are noctilucent clouds. They are forming more often now, perhaps as a result of climate change.
Noctilucent clouds are the highest clouds in the atmosphere.
#### Mesopause
At the top of the mesosphere is the mesopause. Temperatures here are colder than anywhere else in the atmosphere. They are as low as -100° C (-212° F)! Nowhere on Earth’s surface is that cold.
### Vocabulary
• mesosphere: Layer between the stratosphere and thermosphere; temperature decreases with altitude.
• noctilucent cloud: Seen only rarely, these clouds are the highest in the atmosphere.
### Summary
• The mesosphere has a very low density of gas molecules.
• Temperature decreases in the mesosphere with altitude. This is because the heat source is the stratosphere.
• The mesosphere has red sprites, blue jets, and two types of clouds.
• The mesosphere is no place for human life!
### Practice
Use this resource to answer the questions that follow.
Mesosphere and Thermosphere at http://www.youtube.com/watch?feature=player_embedded&v=mUZ4faPCiDY (1:57)
1. Where is the mesophere found?
2. What does it protect us from?
3. What are noctilucent clouds?
4. When can noctilucent clouds be seen?
5. Why can they only be seen at that time?
### Review
1. How does temperature change with altitude in the mesosphere?
2. What types of clouds are found in the mesosphere?
3. How can meteors burn in the mesosphere when the air density is so low?
### My Notes/Highlights Having trouble? Report an issue.
Color Highlighted Text Notes
### Vocabulary Language: English
mesosphere
Layer between the stratosphere and thermosphere; temperature decreases with altitude.
noctilucent cloud
Seen only rarely, these clouds are the highest in the atmosphere.
Show Hide Details
Description
Difficulty Level:
Basic
Authors:
Tags:
Subjects: | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8277028203010559, "perplexity": 4399.985841727563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824226.79/warc/CC-MAIN-20160723071024-00030-ip-10-185-27-174.ec2.internal.warc.gz"} |
https://terrytao.wordpress.com/2008/05/13/a-global-compact-attractor-for-high-dimensional-defocusing-non-linear-schrodinger-equations-with-potential/ | I’ve just uploaded to the arXiv my paper “A global compact attractor for high-dimensional defocusing non-linear Schrödinger equations with potential“, submitted to Dynamics of PDE. This paper continues some earlier work of myself in an attempt to understand the soliton resolution conjecture for various nonlinear dispersive equations, and in particular, nonlinear Schrödinger equations (NLS). This conjecture (which I also discussed in my third Simons lecture) asserts, roughly speaking, that any reasonable (e.g. bounded energy) solution to such equations eventually resolves into a superposition of a radiation component (which behaves like a solution to the linear Schrödinger equation) plus a finite number of “nonlinear bound states” or “solitons”. This conjecture is known in many perturbative cases (when the solution is close to a special solution, such as the vacuum state or a ground state) as well as in defocusing cases (in which no non-trivial bound states or solitons exist), but is still almost completely open in non-perturbative situations (in which the solution is large and not close to a special solution) which contain at least one bound state. In my earlier papers, I was able to show that for certain NLS models in sufficiently high dimension, one could at least say that such solutions resolved into a radiation term plus a finite number of “weakly bound” states whose evolution was essentially almost periodic (or almost periodic modulo translation symmetries). These bound states also enjoyed various additional decay and regularity properties. As a consequence of this, in five and higher dimensions (and for reasonable nonlinearities), and assuming spherical symmetry, I showed that there was a (local) compact attractor $K_E$ for the flow: any solution with energy bounded by some given level E would eventually decouple into a radiation term, plus a state which converged to this compact attractor $K_E$. In that result, I did not rule out the possibility that this attractor depended on the energy E. Indeed, it is conceivable for many models that there exist nonlinear bound states of arbitrarily high energy, which would mean that $K_E$ must increase in size as E increases to accommodate these states. (I discuss these results in a recent talk of mine.)
In my new paper, following a suggestion of Michael Weinstein, I consider the NLS equation
$i u_t + \Delta u = |u|^{p-1} u + Vu$
where $u: {\Bbb R} \times {\Bbb R}^d \to {\Bbb C}$ is the solution, and $V \in C^\infty_0({\Bbb R}^d)$ is a smooth compactly supported real potential. We make the standard assumption $1 + \frac{4}{d} < p < 1 + \frac{4}{d-2}$ (which is asserting that the nonlinearity is mass-supercritical and energy-subcritical). In the absence of this potential (i.e. when V=0), this is the defocusing nonlinear Schrödinger equation, which is known to have no bound states, and in fact it is known in this case that all finite energy solutions eventually scatter into a radiation state (which asymptotically resembles a solution to the linear Schrödinger equation). However, once one adds a potential (particularly one which is large and negative), both linear bound states (solutions to the linear eigenstate equation $(-\Delta + V) Q = -E Q$) and nonlinear bound states (solutions to the nonlinear eigenstate equation $(-\Delta+V)Q = -EQ - |Q|^{p-1} Q$) can appear. Thus in this case the soliton resolution conjecture predicts that solutions should resolve into a scattering state (that behaves as if the potential was not present), plus a finite number of (nonlinear) bound states. There is a fair amount of work towards this conjecture for this model in perturbative cases (when the energy is small), but the case of large energy solutions is still open.
In my new paper, I consider the large energy case, assuming spherical symmetry. For technical reasons, I also need to assume very high dimension $d \geq 11$. The main result is the existence of a global compact attractor K: every finite energy solution, no matter how large, eventually resolves into a scattering state and a state which converges to K. In particular, since K is bounded, all but a bounded amount of energy will be radiated off to infinity. Another corollary of this result is that the space of all nonlinear bound states for this model is compact. Intuitively, the point is that when the solution gets very large, the defocusing nonlinearity dominates any attractive aspects of the potential V, and so the solution will disperse in this case; thus one expects the only bound states to be bounded. The spherical symmetry assumption also restricts the bound states to lie near the origin, thus yielding the compactness. (It is also conceivable that the localised nature of V also restricts bound states to lie near the origin, even without the help of spherical symmetry, but I was not able to establish this rigorously.)
In view of my previous results concerning local compact attractors, the main difficulty is to show that spherically symmetric almost periodic solutions – solutions which range inside a compact subset of the energy space – enjoy a universal upper bound on their energy and mass. (This can be viewed as a “quasi-Liouville theorem”, in analogy with other recent Liouville theorems in the literature which classify various types of almost periodic solutions.)
This is accomplished in two stages. Firstly, by extensive use of the Duhamel formula and the dispersive properties of the free Schrödinger propagator (as in my previous papers), one shows that spherically symmetric almost periodic solutions exhibit quite strong decay away from the origin (more than is predicted just from the finite energy hypothesis); indeed, they decay like the Newton potential $|x|^{2-d}$ (which makes sense, if one looks at the bound state equation). In high dimension, this gives additional moment bounds on the solution. For instance, in 11 and higher dimensions, it implies that not only do almost periodic solutions have finite mass (which means that $\int_{{\Bbb R}^d} |u(t,x)|^2\ dx$ is finite) but that the sixth moment $\int_{{\Bbb R}^d} |u(t,x)|^2 |x|^6\ dx$ is also finite.
These moment conditions allow one to use some exotic virial identities. The basic virial identity for NLS is given by the formula
$\partial_t \int_{{\Bbb R}^d} \nabla a \cdot \hbox{Im}( \overline{u} \nabla u )\ dx$
$= 2 \int_{{\Bbb R}^d} \hbox{Hess}(a)( \nabla u, \overline{\nabla u} )\ dx$
$+ \frac{p-1}{p+1} \int_{{\Bbb R}^d} |u|^{p+1} \Delta a\ dx$
$- \frac{1}{2} \int_{{\Bbb R}^d} |u|^2 \Delta \Delta a\ dx$
$- \int_{{\Bbb R}^d} (\nabla a \cdot \nabla V) |u|^2\ dx$
where $a: {\Bbb R}^d \to {\Bbb R}$ is a weight function which has to obey some reasonable regularity and growth hypotheses but is otherwise arbitrary. The more moment conditions one has on u, the more rapid one can take the growth of a to be.
Different choices of the weight a yield different interesting consequences. For instance, $a(x):=1$ gives the momentum conservation law, while $a(x) := |x|$ gives the Morawetz inequalities. The choice $a(x):=|x|^2$ gives the virial identity of Glassey, which I use to establish a universal bound on the energy. It turns out that the choice $a(x) := |x|^4$ gives an identity that can give a universal bound on the mass (coming from the $\Delta \Delta a$ term in the identity), which yields the main theorem; the dimension hypothesis $d \geq 11$ is needed to get enough decay on the almost periodic solution in order to justify the formal application of the virial identity with this quartic weight. (By working a bit harder I was able to weaken this hypothesis to $d \geq 7$, but the correct hypothesis should be $d \geq 5$, in analogy with the classical theory of resonances for the linear Schrödinger operator with potential.)
One technical feature that comes up when dealing with superquadratic weights such as $|x|^4$ is that the mass term that involves $\Delta \Delta a$ is negative, which looks unfavourable. Fortunately, it turns out that one can use Hardy’s inequality and the term coming from the Hessian $\hbox{Hess}(a)$ to convert this negative term into a positive one.
There is an amusing consequence of these results; once one has a global compact attractor for a PDE, it becomes possible in principle to establish soliton resolution for this PDE by a finite amount of rigorous numerics on that attractor (or on some larger compact set containing that attractor), combined with some quantitative nonlinear stability results on all the soliton states. However such a program would be extremely complicated to execute in practice. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 30, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9739375114440918, "perplexity": 308.2446707801207}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463611569.86/warc/CC-MAIN-20170528220125-20170529000125-00264.warc.gz"} |
https://brilliant.org/problems/fibonacci-like-sequences/ | # Fibonacci-like sequences
Probability Level 4
Conside The Fibonacci sequence is defined by $f_1 = 1, f_2 = 1, f_n = f_{n-1} + f_{n-2} \text{ for } n \geq 3.$ We have that $f_{13} = 233.$ Consider a sequence such that
$g_1 = 1, g_2 = x, g_n = g_{n-1} + g_{n-2} \text{ for } n \geq 3.$
Determine the sum of all positive integer values of $x \geq 2$ for which $233$ is a term of the sequence $g$.
× | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9936404824256897, "perplexity": 199.54752637797034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986677412.35/warc/CC-MAIN-20191018005539-20191018033039-00019.warc.gz"} |
http://hal.in2p3.fr/in2p3-01278370 | # Second 0+ state of unbound $^{12}$O: Scaling of mirror asymmetry
7 CSNSM SNO
CSNSM - Centre de Spectrométrie Nucléaire et de Spectrométrie de Masse, CSNSM - Centre de Sciences Nucléaires et de Sciences de la Matière : UMR8609
Abstract : The unbound O12 nucleus was studied via the two-neutron transfer (p,t) reaction in inverse kinematics using a radioactive O14 beam at 51 MeV/u. Excitation energy spectra and differential cross sections were deduced by the missing mass method using MUST2 telescopes. We achieved much higher statistics compared to the previous experiments of O12, which allowed accurate determination of resonance energy and unambiguous spin and parity assignment. The O12 resonance previously reported using the same reaction was confirmed at an excitation energy of 1.62±0.03(stat.)±0.10(syst.). MeV and assigned spin and parity of 0+ from a distorted-wave Born approximation analysis of the differential cross sections. Mirror symmetry of O12 with respect to its neutron-rich partner Be12 is discussed from the energy difference of the second 0+ states. In addition, from systematics of known 0+ states, a distinct correlation is revealed between the mirror energy difference and the binding energy after carrying out a scaling with the mass and the charge. We show that the mirror energy difference of the observed 0+ state of O12 is highly deviated from the systematic trend of deeply bound nuclei and in line with the scaling relation found for weakly bound nuclei with a substantial 2s1/2 component. The importance of the scaling of mirror asymmetry is discussed in the context of ab initio calculations near the drip lines and universality of few-body quantum systems.
Document type :
Journal articles
Cited literature [86 references]
http://hal.in2p3.fr/in2p3-01278370
Contributor : Sandrine Guesnon <>
Submitted on : Tuesday, March 17, 2020 - 3:50:36 PM
Last modification on : Friday, April 30, 2021 - 10:20:59 AM
Long-term archiving on: : Thursday, June 18, 2020 - 3:41:14 PM
### File
PhysRevC.93.024316.pdf
Files produced by the author(s)
### Citation
D. Suzuki, H. Iwasaki, D. Beaumel, M. Assié, H. Baba, et al.. Second 0+ state of unbound $^{12}$O: Scaling of mirror asymmetry. Physical Review C, American Physical Society, 2016, 93 (2), pp.024316. ⟨10.1103/PhysRevC.93.024316⟩. ⟨in2p3-01278370⟩
Record views | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9400026202201843, "perplexity": 4893.172691912252}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056856.4/warc/CC-MAIN-20210919095911-20210919125911-00688.warc.gz"} |
https://tex.stackexchange.com/questions/382920/define-custom-environment | # Define custom environment
I'm currently writing a .sty-file that defines the look of assignment-sheets for students. I now want do define some custom environments for the questions and answers.
The answer blocks are printed inside of a tcolorbox and only shown, if a boolean is set to true. So in the definition of my custom environment I first check the boolean and then paste the tcolorbox opening tag. Unfortunately this returns the following error:
LaTeX Error: \begin{tcb@savebox} on input line 62 ended by \end{enumerate}.
Here is the code:
\newenvironment{solution}
{
\ifthenelse{\boolean{solution}}{
\begin{tcolorbox}[breakable, width=\textwidth, colframe=red, colback=white]
}
{
\end{tcolorbox}
}{}
}
I used the code like this:
\begin{solution}
\begin{align*}
x^2 + y^2 &= z^2\\
\Rightarrow x &= \sqrt{z^2 - y^2}\\
&= ...
\end{align*}
\end{solution}
What have I done wrong? Thank you very much in advance!
# Clarification
It seems that my question is a bit unclear, so I'm trying to explain it a little better by the use of the following picture.
Note that the page on the left only displays the assignments and the page on the right also includes the solutions to the problems. What I want to do, is to be able to compile both of these documents out of one single .tex file. I don't want to have a file for the assignments and one for the solutions. In my preamble i want to set a boolean to either true or false. If the boolean solution is set to false, the solution should NOT be compiled, so the resulting document is the one on the left. If the boolean is set to true, the solutions should be compiled, so the resulting document is the one on the right.
The solution-environment I asked for should check if the boolean is set to true and if so, compile it's content.
Until now it worked if I coded like this:
\begin{enumerate}[a)]
%
%
%%%%%%%%%%%%%%%
%% Question
\item Compute the Fourier transform of $e^{-|x|}$ for $x\in \mathbb{R}$.
%
%
%%%%%%%%%%%%%%%
%% Solution
\ifthenelse{\boolean{solution}}{
\begin{tcolorbox}[breakable, width=\textwidth, colframe=red, colback=white]
\begin{eqnarray*}
\hat{f}(\xi)&=&\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-|x|}e^{-ix\xi}dx\\
&=&\frac{1}{\sqrt{2\pi}}\int_{0}^{\infty}e^{-x-ix\xi}dx+\int_{-\infty}^0e^{x-ix\xi}dx\\
&=&\frac{1}{\sqrt{2\pi}}\int_{0}^{\infty}(e^{-x-ix\xi}-e^{-x+ix\xi})dx\\
&=&\frac{1}{\sqrt{2\pi}}[\frac{1}{-(1+i\xi)}(-1)-\frac{1}{-1+i\xi}(-1)]\\
&=&\frac{1}{\sqrt{2\pi}}[\frac{1-i\xi}{1+\xi^2}+\frac{-(1+i\xi)}{1+\xi^2}]\\
&=&\frac{1}{\sqrt{2\pi}}\frac{-2i\xi}{1+\xi^2}\\
&=&-\sqrt{\frac{2}{\pi}}\frac{i\xi}{1+\xi^2}
\end{eqnarray*}
\end{tcolorbox}
}{}
%
%
\item Compute the Fourier transform of $e^{-a|x|^2},~a>0$, directly, where $x\in \mathbb{R}$.\\
\ifthenelse{\boolean{solution}}{
\begin{tcolorbox}[breakable, width=\textwidth, colframe=red, colback=white]
\begin{eqnarray*}
\hat{f}(\xi)&=&\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-a|x|^2}e^{-ix\xi}dx\\
&=&\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-a(x+\frac{i\xi}{2a})^2+\frac{-\xi^2}{4a}}dx~~~~~~~~x'\doteq x+\frac{i\xi}{2a}\\
&=&\frac{1}{\sqrt{2\pi}}e^{-\frac{\xi^2}{4a}}\int_{-\infty}^{\infty}e^{-ax^2}dx\\
&=&\frac{e^{-\frac{\xi^2}{4a}}}{2a}
\end{eqnarray*}
\end{tcolorbox}
}{}
\end{enumerate}
The custom solution-environment should combine these two lines (and their closing tags) to make programming quicker and cleaner:
\ifthenelse{\boolean{solution}}{
\begin{tcolorbox}[breakable, width=\textwidth, colframe=red, colback=white]
Hopefully this clears up the misunderstandings. If you have any further question, feel free to ask. :)
• Off-Topic: The eqnarray* environment is outdated – user31729 Jul 27 '17 at 18:30
• See the update please – user31729 Jul 27 '17 at 18:43
• To the off-topic: I'm actually using align*, I just copied this math code from the internet because I didn't want to write something out myself. Back to the main question: @ChristianHupfer, you're awesome!! This now really works as expected. Great! – Sam Jul 27 '17 at 22:33
The \ifthenelse condition ends prematurely and leaves an open environment hanging around in the middle of nowhere.
In conjunction with tcolorbox environment, the end - delimiter is \endtcolorbox and I suggest to use two \ifthenelse statements, one for the start code of the environment and another one for the end code.
A better approach would use \DeclareTColorbox, in my opinion or a weird \scantokens construct.
Also possible: Use \tcolorboxenvironment to wrap around an existing solution environment.
\documentclass{article}
\usepackage{ifthen}
\usepackage[most]{tcolorbox}
\newboolean{solution}
\newenvironment{solution}{%
\ifthenelse{\boolean{solution}}{%
\tcolorbox[breakable, width=\textwidth, colframe=red, colback=white]
}{%
}%
}{\ifthenelse{\boolean{solution}}{\endtcolorbox}{}}
\begin{document}
\setboolean{solution}{true}
\begin{solution}
\begin{align*}
x^2 + y^2 &= z^2\\
\Rightarrow x &= \sqrt{z^2 - y^2}\\
&= ...
\end{align*}
\end{solution}
\setboolean{solution}{false}
\begin{solution}
\begin{align*}
x^2 + y^2 &= z^2\\
\Rightarrow x &= \sqrt{z^2 - y^2}\\
&= ...
\end{align*}
\end{solution}
\end{document}
Cleaner solution with two different environments
\documentclass{article}
\usepackage[most]{tcolorbox}
\tcbset{
commonboxes/.style={nobeforeafter},
nobox/.style={commonboxes,blank,breakable},
solutionbox/.style={commonboxes,breakable, colframe=red, colback=white}
}
\newtcolorbox{solutionbox}[1][]{
solutionbox,#1
}
\newtcolorbox{solutionbox*}[1][]{%
nobox,#1
}
\begin{document}
\begin{solutionbox*}
\begin{align*}
x^2 + y^2 &= z^2\\
\Rightarrow x &= \sqrt{z^2 - y^2}\\
&= ...
\end{align*}
\end{solutionbox*}
\begin{solutionbox}
\begin{align*}
x^2 + y^2 &= z^2\\
\Rightarrow x &= \sqrt{z^2 - y^2}\\
&= ...
\end{align*}
\end{solutionbox}
\end{document}
Third installment of a solution with \NewEnviron and the \BODY command.
\documentclass{article}
\usepackage{environ}
\usepackage{ifthen}
\usepackage[shortlabels]{enumitem}
\usepackage{amssymb}
\usepackage{mathtools}
\usepackage[most]{tcolorbox}
\newboolean{solution}
\tcbset{
commonboxes/.style={nobeforeafter,breakable},
nobox/.style={commonboxes,blank,breakable},
solutionbox/.style={commonboxes,breakable, colframe=red, colback=white}
}
\NewEnviron{solution}[1][]{%
\ifthenelse{\boolean{solution}}{%
\tcolorbox[solutionbox, width=\textwidth,#1]
\BODY
}{%
}%
}[\ifthenelse{\boolean{solution}}{\endtcolorbox}{}]
\begin{document}
\begin{enumerate}[label={\alph*)}]
\item Compute the Fourier transform of $e^{-|x|}$ for $x\in \mathbb{R}$.
\begin{solution}[colframe=blue]
\begin{align*}
\hat{f}(\xi)&=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-|x|}e^{-ix\xi}dx\\
&=\frac{1}{\sqrt{2\pi}}\int_{0}^{\infty}e^{-x-ix\xi}dx+\int_{-\infty}^0e^{x-ix\xi}dx\\
&=\frac{1}{\sqrt{2\pi}}\int_{0}^{\infty}(e^{-x-ix\xi}-e^{-x+ix\xi})dx\\
&=\frac{1}{\sqrt{2\pi}}[\frac{1}{-(1+i\xi)}(-1)-\frac{1}{-1+i\xi}(-1)]\\
&=\frac{1}{\sqrt{2\pi}}[\frac{1-i\xi}{1+\xi^2}+\frac{-(1+i\xi)}{1+\xi^2}]\\
&=\frac{1}{\sqrt{2\pi}}\frac{-2i\xi}{1+\xi^2}\\
&=-\sqrt{\frac{2}{\pi}}\frac{i\xi}{1+\xi^2}
\end{align*}
\end{solution}
\item Compute the Fourier transform of $e^{-a|x|^2},~a>0$, directly, where $x\in \mathbb{R}$.\\
\begin{solution}
\begin{align*}
\hat{f}(\xi)&=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-a|x|^2}e^{-ix\xi}dx\\
&=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-a(x+\frac{i\xi}{2a})^2+\frac{-\xi^2}{4a}}dx~~~~~~~~x'\doteq x+\frac{i\xi}{2a}\\
&=\frac{1}{\sqrt{2\pi}}e^{-\frac{\xi^2}{4a}}\int_{-\infty}^{\infty}e^{-ax^2}dx\\
&=\frac{e^{-\frac{\xi^2}{4a}}}{2a}
\end{align*}
\end{solution}
\end{enumerate}
\setboolean{solution}{true}
\begin{enumerate}[label={\alph*)}]
\item Compute the Fourier transform of $e^{-|x|}$ for $x\in \mathbb{R}$.
\begin{solution}[colframe=blue]
\begin{align*}
\hat{f}(\xi)&=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-|x|}e^{-ix\xi}dx\\
&=\frac{1}{\sqrt{2\pi}}\int_{0}^{\infty}e^{-x-ix\xi}dx+\int_{-\infty}^0e^{x-ix\xi}dx\\
&=\frac{1}{\sqrt{2\pi}}\int_{0}^{\infty}(e^{-x-ix\xi}-e^{-x+ix\xi})dx\\
&=\frac{1}{\sqrt{2\pi}}[\frac{1}{-(1+i\xi)}(-1)-\frac{1}{-1+i\xi}(-1)]\\
&=\frac{1}{\sqrt{2\pi}}[\frac{1-i\xi}{1+\xi^2}+\frac{-(1+i\xi)}{1+\xi^2}]\\
&=\frac{1}{\sqrt{2\pi}}\frac{-2i\xi}{1+\xi^2}\\
&=-\sqrt{\frac{2}{\pi}}\frac{i\xi}{1+\xi^2}
\end{align*}
\end{solution}
\item Compute the Fourier transform of $e^{-a|x|^2},~a>0$, directly, where $x\in \mathbb{R}$.\\
\begin{solution}
\begin{align*}
\hat{f}(\xi)&=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-a|x|^2}e^{-ix\xi}dx\\
&=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-a(x+\frac{i\xi}{2a})^2+\frac{-\xi^2}{4a}}dx~~~~~~~~x'\doteq x+\frac{i\xi}{2a}\\
&=\frac{1}{\sqrt{2\pi}}e^{-\frac{\xi^2}{4a}}\int_{-\infty}^{\infty}e^{-ax^2}dx\\
&=\frac{e^{-\frac{\xi^2}{4a}}}{2a}
\end{align*}
\end{solution}
\end{enumerate}
\end{document}
The \BODY command contains the environment 'text' and is printed only in the case solution is true.
• Works fine, but the tcolorbox doesn't do page-breaks anymore. As you see, I had the argument breakable in my previous code. I also used \tcbuselibrary{breakable} to allow page breaks which worked. Now with the new \begin{solution}-environment, it doesn't perform the breaks anymore. How could I fix this? PS: I used your first method. – Sam Jul 23 '17 at 20:51
• @Sam: I forgot the breakable option in the nobox style, I've added it to the second solution, it can be used in commonboxes style as well, since it should apply to all boxes of this kind. – user31729 Jul 23 '17 at 20:55
• No, I'm using your very first solution, not the cleaner one. – Sam Jul 23 '17 at 20:58
• @Sam: well, I tried the first solution and the breaking works for me. – user31729 Jul 23 '17 at 21:00
• Maybe, using code={\ifthenelse{\boolean{solution}}{\tcbset{solutionbox}}{\tcbset{nobox}}} is another alternative, if the boolean switch is important for the OP? – Thomas F. Sturm Jul 24 '17 at 5:30
Here is the code you wrote:
\newenvironment{solution}
{% open 1
\ifthenelse{\boolean{solution}}{% open 2
\begin{tcolorbox}[breakable, width=\textwidth, colframe=red, colback=white]
}% close 1
{% open 3
\end{tcolorbox}
}{}% close 2, open & close 4
}% close 3
But this is mixing open and closed (and TeX doesn’t look at indentation). Here is the code TeX sees:
\newenvironment{solution}
{% open 1
\ifthenelse{\boolean{solution}}{% open 2
\begin{tcolorbox}[breakable, width=\textwidth, colframe=red, colback=white]
}% close 2
{% open 3
\end{tcolorbox}
}{}% close 3, open & close 4
}% close 1
And of course, this doesn’t make sense. The environment is missing its second argument; if solution, then the tcolorbox opens (but never closes); if not solution, then the tcolorbox closes (but never opened). And {} #4 does nothing. Something more along the lines of your original would be:
\newenvironment{solution}
{% open 1
\ifthenelse{\boolean{solution}}{% open 2
\begin{tcolorbox}[breakable, width=\textwidth, colframe=red, colback=white]
}{% close 2, open 3
\begin{comment}
}% close 3
}% close 1
{% open 4
\ifthenelse{\boolean{solution}}{% open 5
\end{tcolorbox}
}{% close 5, open 6
\end{comment}
}% close 6
}% close 4
where we use the comment package to discard the answer. (You may want to use different variable names, so that solution isn't an environment and a boolean.)
• How is this different to my first solution? – user31729 Jul 27 '17 at 9:16
• @ChristianHupfer Unfortunately it's not different to your's at all. – Sam Jul 27 '17 at 9:25
• @Teepeemm I see wat my mistake was, that caused the error, but it still does not answer the main issue. – Sam Jul 27 '17 at 9:26
• Took me awhile to spot the "main issue". I've added the comment environment to discard the solution when necessary. – Teepeemm Jul 29 '17 at 0:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9071361422538757, "perplexity": 2493.95093600124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703522133.33/warc/CC-MAIN-20210120213234-20210121003234-00206.warc.gz"} |
http://archive.financialexpress.com/news/poor-rain-doesnt-equal-costly-grain/1259189?rhheader | # Poor rain doesn't equal costly grain
Jun 12 2014, 12:27 IST
SummaryWhat happens to both production as well as inflation depends upon a combination of circumstances.
Though a deficient monsoon should logically mean lower production of foodgrains as well as fruits and vegetables, and hence higher inflation, history suggests this is not necessarily true, reports fe Bureau in New Delhi.
What happens to both production as well as inflation depends upon a combination of circumstances. If years of deficient rain — the Met has forecast a 33% probability of a monsoon rain of under 90% of the long period average this year — are preceded by a year of good rainfall, the impact is more muted. Last year, for instance, was a good year, and reservoir levels are 57% above the 10-year average. Given the greatest impact of the poor monsoon is expected in northwest India — a 15% shortfall is expected here — and this area is highly irrigated, the impact on production will be less.
The year 2002 saw an 80.8% monsoon level, on top of a fairly poor 92.2% rainfall level in 2001. As a result, foodgrain production in the year fell a whopping 18%.
The year 2009, by contrast, had an even worse rainfall level of 78.2%, but this was preceded by a healthy 98.3% monsoon precipitation the previous year — as a result, foodgrain production in
2009 fell by just 7%. Quite the same thing happened in 2004, another poor monsoon year with a precipitation level of just 86.2%.
The price impact, in turn, is not just dependent upon the rainfall, a lot depends on how minimum support prices (MSP) are raised. In 2002, despite the dramatic fall in production, inflation levels were subdued, not just in overall terms, but also for foodgrains as well as fruits and vegetables. In 2009, however, while food production fell 7%, inflation levels were much higher. The reason is that MSP hikes in 2002 were muted (1.6% for wheat and nothing for rice) versus hefty MSP hikes in 2009 (11.8% for rice and 8% for wheat). | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8474166393280029, "perplexity": 3680.390607692524}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931006593.41/warc/CC-MAIN-20141125155646-00207-ip-10-235-23-156.ec2.internal.warc.gz"} |
https://brilliant.org/practice/abstract-data-types-level-3-challenges/ | Computer Science
Abstract Data Types: Level 3-4 Challenges
One fateful day, 9 people numbered 1, 2, 3, 4, 5, 6, 7, 8 and 9 are trying to cross a road that contains several deep holes. To pass through a hole with depth 3, the 3 leading people have to go into the holes in order so that the other people can safely cross the hole. Then, the last people who crossed the hole will pull the highest people from the hole up, and so on. For clarity, look at the diagram below where 4 people is crossing a hole with depth 3 :
If the output sequence is $1,2,3,4,5,6,7,8,9$ and the $30$ holes they must pass through in order has depth of
1 3 5 9 8 5 1 8 5 4 5 3 8 6 7 2 6 3 9 2 5 2 7 8 6 7 3 6 9 2 5
respectively, what is the input sequence?
If you think the output sequence is $6,5,4,3,2,1$, input your answer as 654321.
You are advised to solve the Easy and Medium version of this problem first.
A Mathematician was tasked to do a magic show in front of a group of elderly in an elderly home. Using his mathematical knowledge, he made a magic trick on the spot but was stuck at the last step, can you help him? The magic trick is as follows:
1. Get a member of the audience to pick a card from the deck and with only the audience knowing the value of the card, place the card at the bottom of the pile.
2. Take the top half of the deck (26 cards) and place them at the bottom of the card.
3. Take the 26 cards in the middle and place them on top of the pile.
4. Remove the bottom 26 cards from the deck.
5. Once again, take 14 cards from the centre of the deck out and remove the other cards.
6. For the last time, we take 6 cards from the middle and remove everything else.
7. He then places all removed cards on top of the current deck in the order they were removed.
8. Pull out the $n^{th}$ card from the top and show it to the audiences.
The question is, what is $n$?
Definition of terms used
• $n$ cards from the middle - We take the number of cards into the deck to be 5. if $n=3$, if means the we take the 2nd, 3rd and 4th card.
• The order they were removed - Assuming we have 7 cards and we remove the middle 3 cards(3rd, 4th, 5th) and then remove the middle 2 cards(2nd and 6th), we will then put the cards in order meaning that we first place the 3rd card, followed by the 4th, then the 5th, then the 2nd and lastly the 6th.
Image credit: Wikipedia Wuprisha
10 students are standing in a row. From left to right, they are labeled as 1 to 10. When the teacher left for the bathroom, they start to switch positions. When the teacher come back on the $k^\text{th}$ minute, the queue becomes the $k^\text{th}$ lexicography order.
The teacher needs your help to answer 2 types of queries:
• K L N : On the $k^\text{th}$ minute, what is the label of the $n^\text{th}$ person from the left?
• K P N : On the $k^\text{th}$ minute, what is the position of the person labeled $n$?
This file contains 1000 queries. What is the sum of all output?
Sample Input
1 2 3 4 5 6 7 8 1 L 1 1 L 2 1 L 10 2 L 10 1 P 1 1 P 2 1 P 10 2 P 9
Sample Output
1 2 3 4 5 6 7 8 1 2 10 9 1 2 10 10
For this example, the answer is 45.
× | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 13, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5741726756095886, "perplexity": 277.6191196446013}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250595787.7/warc/CC-MAIN-20200119234426-20200120022426-00062.warc.gz"} |
https://tel.archives-ouvertes.fr/tel-02402347 | # Contribution to ellipsoidal and zonotopic set-membership state estimation
Abstract : In the context of dynamical systems, this thesis focuses on the development of robust set-membership state estimation procedures for different classes of systems. We consider the case of standard linear time-invariant systems, subject to unknown but bounded perturbations and measurement noises. The first part of this thesis builds upon previous results on ellipsoidal set-membership approaches. An extended ellipsoidal set-membership state estimation technique is applied to a model of an octorotor used for radar applications. Then, an extension of this ellipsoidal state estimation approach is proposed for descriptor systems. In the second part, we propose a state estimation technique based on the minimization of the P-radius of a zonotope, applied to the same model of the octorotor. This approach is further extended to deal with piecewise affine systems. In the continuity of the previous approaches, a new zonotopic constrained Kalman filter is proposed in the last part of this thesis. By solving a dual form of an optimization problem, the algorithm projects the state on a zonotope forming the envelope of the set of constraints that the state is subject to. Then, the computational complexity of the algorithm is improved by replacing the original possibly large-scale zonotope with a reduced form, by limiting its number of generators.
Keywords :
Document type :
Theses
Cited literature [185 references]
https://tel.archives-ouvertes.fr/tel-02402347
Contributor : Abes Star <>
Submitted on : Tuesday, December 10, 2019 - 2:04:06 PM
Last modification on : Monday, February 3, 2020 - 7:26:38 PM
### File
81818_MERHY_2019_archivage.pdf
Version validated by the jury (STAR)
### Identifiers
• HAL Id : tel-02402347, version 1
### Citation
Dory Merhy. Contribution to ellipsoidal and zonotopic set-membership state estimation. Automatic Control Engineering. Université Paris-Saclay, 2019. English. ⟨NNT : 2019SACLS362⟩. ⟨tel-02402347⟩
Record views | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8016738295555115, "perplexity": 1283.971503348953}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875147628.27/warc/CC-MAIN-20200228170007-20200228200007-00119.warc.gz"} |
https://intl.siyavula.com/read/za/physical-sciences/grade-10/the-atom/04-the-atom-04 | We think you are located in United States. Is this correct?
# 4.4 Structure of the atom
## 4.4 Structure of the atom (ESAAZ)
As a result of the work done by previous scientists on atomic models, scientists now have a good idea of what an atom looks like. This knowledge is important because it helps us to understand why materials have different properties and why some materials bond with others. Let us now take a closer look at the microscopic structure of the atom (what the atom looks like inside).
So far, we have discussed that atoms are made up of a positively charged nucleus surrounded by one or more negatively charged electrons. These electrons orbit the nucleus.
Before we look at some useful concepts we first need to understand what electrons, protons and neutrons are.
### The electron (ESABA)
The electron is a very tiny particle. It has a mass of $$\text{9,11} \times \text{10}^{-\text{31}}$$ $$\text{kg}$$. The electron carries one unit of negative electric charge (i.e. $$-\text{1,6} \times \text{10}^{-\text{19}}$$ $$\text{C}$$).
### The nucleus (ESABB)
Unlike the electron, the nucleus can be broken up into smaller building blocks called protons and neutrons. Together, the protons and neutrons are called nucleons.
Scientists believe that the electron can be treated as a point particle or elementary particle meaning that it cannot be broken down into anything smaller.
#### The proton
The electron carries one unit of negative electric charge (i.e. $$-\text{1,6} \times \text{10}^{-\text{19}}$$ $$\text{C}$$, C is Coulombs).
Each proton carries one unit of positive electric charge (i.e. $$\text{+1,6} \times \text{10}^{-\text{19}}$$ $$\text{C}$$). Since we know that atoms are electrically neutral, i.e. do not carry any extra charge, then the number of protons in an atom has to be the same as the number of electrons to balance out the positive and negative charge to zero. The total positive charge of a nucleus is equal to the number of protons in the nucleus. The proton is much heavier than the electron ($$\text{10 000}$$ times heavier!) and has a mass of $$\text{1,6726} \times \text{10}^{-\text{27}}$$ $$\text{kg}$$. When we talk about the atomic mass of an atom, we are mostly referring to the combined mass of the protons and neutrons, i.e. the nucleons.
#### The neutron
The neutron is electrically neutral i.e. it carries no charge at all. Like the proton, it is much heavier than the electron and its mass is $$\text{1,6749} \times \text{10}^{-\text{27}}$$ $$\text{kg}$$ (slightly heavier than the proton).
proton neutron electron Mass ($$\text{kg}$$) $$\text{1,6726} \times \text{10}^{-\text{27}}$$ $$\text{1,6749} \times \text{10}^{-\text{27}}$$ $$\text{9,11} \times \text{10}^{-\text{31}}$$ Units of charge $$\text{+1}$$ $$\text{0}$$ $$-\text{1}$$ Charge ($$\text{C}$$) $$\text{1,6} \times \text{10}^{-\text{19}}$$ $$\text{0}$$ $$-\text{1,6} \times \text{10}^{-\text{19}}$$
Table 4.2: Summary of the particles inside the atom.
### Atomic number and atomic mass number (ESABC)
The chemical properties of an element are determined by the charge of its nucleus, i.e. by the number of protons. This number is called the atomic number and is denoted by the letter Z.
Atomic number (Z)
The number of protons in an atom.
You can find the atomic number on the periodic table (see periodic table at front of book). The atomic number is an integer and ranges from 1 to about 118.
The mass of an atom depends on how many nucleons its nucleus contains. The number of nucleons, i.e. the total number of protons plus neutrons, is called the atomic mass number and is denoted by the letter A.
Currently element 118 is the highest atomic number for an element. Elements of high atomic numbers (from about 93 to 118) do not exist for long as they break apart within seconds of being formed. Scientists believe that after element 118 there may be an “island of stability” in which elements of higher atomic number occur that do not break apart within seconds.
A nuclide is a distinct kind of atom or nucleus characterised by the number of protons and neutrons in the atom. To be absolutely correct, when we represent atoms like we do here, then we should call them nuclides.
Atomic mass number (A)
The number of protons and neutrons in the nucleus of an atom.
The atomic number (Z) and the mass number (A) are indicated using a standard notation, for example carbon will look like this: $$_{6}^{12}\text{C}$$
Standard notation shows the chemical symbol, the atomic mass number and the atomic number of an element as follows:
For example, the iron nucleus which has 26 protons and 30 neutrons, is denoted as $$_{26}^{56}\text{Fe}$$ where the atomic number is $$Z = 26$$ and the mass number $$A = 56$$. The number of neutrons is simply the difference $$N = A - Z = 30$$.
Do not confuse the notation we have used here with the way this information appears on the periodic table. On the periodic table, the atomic number usually appears in the top left-hand corner of the block or immediately above the element's symbol. The number below the element's symbol is its relative atomic mass. This is not exactly the same as the atomic mass number. This will be explained in "Isotopes". The example of iron is shown below.
For a neutral atom the number of electrons is the same as the number of protons, since the charge on the atom must balance. But what happens if an atom gains or loses electrons? Does it mean that the atom will still be part of the same element? A change in the number of electrons of an atom does not change the type of atom that it is. However, the charge of the atom will change. The neutrality of the atom has changed. If electrons are added, then the atom will become more negative. If electrons are taken away then the atom will become more positive. The atom that is formed in either of these two cases is called an ion. An ion is a charged atom. For example: a neutral sodium atom can lose one electron to become a positively charged sodium ion ($$\text{Na}^{+}$$). A neutral chlorine atom can gain one electron to become a negatively charged chlorine ion ($$\text{Cl}^{-}$$). Another example is $$\text{Li}^{+}$$ which has lost one electron and now has only 2 electrons, instead of 3. Or consider $$\text{F}^{-}$$ which has gained one electron and now has 10 electrons instead of 9.
## Worked example 1: Standard notation
Use standard notation to represent sodium and give the number of protons, neutrons and electrons in the element.
### Give the element symbol
$$\text{Na}$$
### Find the number of protons
Sodium has 11 protons, so we have: $$_{11}^{23}\text{Na}$$
### Find the number of electrons
Sodium is neutral, so it has the same number of electrons as protons. The number of electrons is $$\text{11}$$.
### Find $$A$$
From the periodic table we see that $$A = 23$$.
### Work out the number of neutrons
We know $$A$$ and $$Z$$ so we can find $$N$$: $$N = A - Z = 23 - 11 = 12$$.
### Write the answer
In standard notation sodium is given by: $$_{11}^{23}\text{Na}$$. The number of protons is 11, the number of neutrons is 12 and the number of electrons is 11.
## The structure of the atom
Textbook Exercise 4.2
Explain the meaning of each of the following terms:
1. nucleus
2. electron
3. atomic mass
Solution not yet available
Complete the following table:
Element Atomic mass units Atomic number Number of protons Number of electrons Number of neutrons $$\text{Mg}$$ $$\text{24}$$ $$\text{12}$$ $$\text{O}$$ $$\text{8}$$ $$\text{17}$$ $$\text{Ni}$$ $$\text{28}$$ $$\text{40}$$ $$\text{20}$$ $$\text{Zn}$$ $$\text{0}$$ $$\text{C}$$ $$\text{12}$$ $$\text{6}$$ $$\text{Al}^{3+}$$ $$\text{13}$$ $$\text{O}^{2-}$$ $$\text{10}$$
Solution not yet available
Use standard notation to represent the following elements:
1. potassium
2. copper
3. chlorine
1. $$_{19}^{40}\text{K}$$
2. $$_{29}^{64}\text{Cu}$$
3. $$_{17}^{35}\text{Cl}$$
For the element $$_{17}^{35}\text{Cl}$$, give the number of...
1. protons
2. neutrons
3. electrons
... in the atom.
Solution not yet available
Which of the following atoms has 7 electrons?
1. $$_{2}^{5}\text{He}$$
2. $$_{6}^{13}\text{C}$$
3. $$_{3}^{7}\text{Li}$$
4. $$_{7}^{15}\text{N}$$
Solution not yet available
In each of the following cases, give the number or the element symbol represented by X.
1. $$_{18}^{40}\text{X}$$
2. $$_{20}^{X}\text{Ca}$$
3. $$_{X}^{31}\text{P}$$
Solution not yet available
Complete the following table:
A Z N $$_{92}^{235}\text{U}$$ $$_{92}^{238}\text{U}$$
In these two different forms of uranium...
1. What is the same?
2. What is different?
Solution not yet available | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7277102470397949, "perplexity": 366.7644393273356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499524.28/warc/CC-MAIN-20230128054815-20230128084815-00469.warc.gz"} |
https://library2.smu.ca/handle/01/26059?show=full | # Stellar-mass black hole spin constraints from disk reflection and continuum modeling
## Files in this item
Published Version: | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8686412572860718, "perplexity": 16930.928627817346}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363309.86/warc/CC-MAIN-20211206163944-20211206193944-00514.warc.gz"} |
https://search.datacite.org/repositories/spbpu.mpm?resource-type-id=text&affiliation-id=ror.org%2F00em04n91 | ### Homogeneous horizontal and vertical seismic barriers: mathematical foundations and dimensional analysis
V.A. Bratov, A.V. Ilyashenko, S.V. Kuznetsov, T.-K. Lin & N.F. Morozov
The concept of a vertical barrier embedded in soil to protect from seismic waves of the Rayleigh type is discussed. Horizontal barriers are also analyzed. The principle idea for such a barrier is to reflect and scatter energy of an oncoming wave by the barrier, thus decreasing the amplitude of surface vibrations beyond the barrier. Numerical FE simulations of a plane model are presented and discussed.
• 2020
1
• Text
1
#### Affiliations
• Moscow State University of Civil Engineering
1
• Institute for Problems in Mechanics
1
• National Chiao Tung University
1
• Saint Petersburg State University
1
• Institute of Problems of Mechanical Engineering
1 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8438383340835571, "perplexity": 6124.5934449003225}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499695.59/warc/CC-MAIN-20230128220716-20230129010716-00560.warc.gz"} |
https://brilliant.org/discussions/thread/mad-mh-contest/ | Ever wondered what every average human thinks when looking at non-arithmetic M@H?
Now I challenge YOU to make the craziest possible math problem EVER!!!
Here are the rules:
• No words: introductions, if-then's, assumptions, ... Exception: you may use words briefly for definitions, but make sure to MINIMIZE the usage - i.e. use math language instead where possible (# vs. number, etc.)
• One part of the problem must be solvable by > 90% of the masses (such as 1+1=2)
• Must be visually terrorizing
• Must be mentally terrorizing
Post links below for your submissions of the craziest math problem. The contest will be judged by votes: majority rule. Downvotes allowed. I will post a congratulation note to the winner along with the link to the problem. Hopefully it will be powered by re-shares from the big boys (Calvin).
The contest begin today and ends August 1st.
$\huge{\text{GO MAD-M@H CRAZY!!!}}$
3 years, 6 months ago
MarkdownAppears as
*italics* or _italics_ italics
**bold** or __bold__ bold
- bulleted- list
• bulleted
• list
1. numbered2. list
1. numbered
2. list
Note: you must add a full line of space before and after lists for them to show up correctly
paragraph 1paragraph 2
paragraph 1
paragraph 2
[example link](https://brilliant.org)example link
> This is a quote
This is a quote
# I indented these lines
# 4 spaces, and now they show
# up as a code block.
print "hello world"
# I indented these lines
# 4 spaces, and now they show
# up as a code block.
print "hello world"
MathAppears as
Remember to wrap math in $$...$$ or $...$ to ensure proper formatting.
2 \times 3 $$2 \times 3$$
2^{34} $$2^{34}$$
a_{i-1} $$a_{i-1}$$
\frac{2}{3} $$\frac{2}{3}$$
\sqrt{2} $$\sqrt{2}$$
\sum_{i=1}^3 $$\sum_{i=1}^3$$
\sin \theta $$\sin \theta$$
\boxed{123} $$\boxed{123}$$
Sort by:
$$n$$ is a positive non-zero integer. Solve for $$n$$. Clues for this one can be found in one of my notes.
You may use a computer.
- 3 years, 6 months ago
Woops I past my deadline, lol. Apparently the only downvote you got was from me ^.^ Well, could you please provide me with the LaTeX of the problem, or is that its best rendering over there?
- 3 years, 5 months ago
If you copy and paste the image, it can be enlarged nicely, with little loss of detail. But I suggest you first have a look at my messageboard to get a hint how to solve this.
To actually try to compute this expression, even with a computer and math software, is a fairly formidable task.
- 3 years, 5 months ago
Hm ok so what you're saying is that it is ACTUALLY DIFFICULT - versus looking difficult. I see. I was more looking for a problem that LOOKS difficult - regardless of its true difficulty. And I really think my problem outdid yours in that respect. "I THINK" - no flames please, lol. Okay, YOU ARE THE WINNER!!! But since I'm unsatisfied with the result, I will make this an annual contest ^.^
So, what do you think about my precious new VE about derivatives?
- 3 years, 5 months ago
John, here's the thing. Your posed problem is actually a legitimate one, i.e., something that can be done by hand, if one is patient enough, and willing enough to understand the concept of irrational derivatives. Mine can only be understood if first deconstructed, i.e., go back to the source of other prime number formulas that make use of floor functions (there are now quite a few out there). Otherwise, as I said, it's a formula that even computers can find it difficult to compute. It's a very real thing, it's not made up fantasy. The formula I posted in my messageboard will indeed deliver P(n), or the nth prime number, for integer n. So, the correct answer to the problem I posted is $$3$$.
Given that it's not realistic for anyone to do my problem without first knowing its antecedents, I'd vote for yours.
- 3 years, 5 months ago
3?!?!? LOOOL !!! My History & Government and Politics teacher always said the answer to all my math problems is 3 - because 3 is the magic number. WHO WOULD HAVE THOUGHT THAT I WOULD BE THE ONE TO FIND IT MOST USEFUL?! But yes I've heard from Numberphile how insanely hard it is to compute prime numbers - and since I'm clueless at number theory, I'll leave it at that ^.^ Sooo... did you get a chance to look at my new VE post?
- 3 years, 5 months ago
Oh, that was you. The guy who posted that long treatise on derivatives. What does "VE" mean? But let me go through your treatise, and after I've had my little jog today, I'll get back to you on that. For things like this, I'd like to think before I respond. Unlike the way I solve some problems here.
Edit: Okay, see my first comments on this
VE
By the way, "VE" to someone like me means "Victory in Europe". I think one has to be really old to get this.
- 3 years, 5 months ago
Meh. Looks lazy and repetitive. Lots of white space.
Not scary.
(Just my opinion ^.^)
- 3 years, 6 months ago
Yours doesn't have much white space in the same way graffiti art doesn't leave much room for white space. But what do I know about art? It's just an opinion. Hoo-hah.
- 3 years, 6 months ago
Bah come on my problem looks far more intimidating and you know it. In fact, there are like 15 maniacs who "solved" it somehow, apparently, according to the problem statistics.
But... I can't promise my problem is harder if we remove the bottom-most fraction. Everything is simply simplification, that's all. Kk I'ma back away into my corner now...
- 3 years, 6 months ago
Just wait until the next bell.
- 3 years, 6 months ago
$|x|<0$
- 3 years, 6 months ago
Poon, algebras with negative norms is a thing. It's not true that $$|x|<0$$ is a mathematical impossibility. Here's a paper discussing spinors and Clifford algebras, both of which play prominent roles in theoretical physics
The Construction of Spinors in Geometric Algebra
On pages 2 and 6, it speaks of vectors of negative norms. This is not an isolated instance. What makes this paper slightly different from most other papers on mathematical physics is that the construction of the relevant Clifford algebra for spinors uses the utility of vectors of negative norms. Most other physics papers that run into negative norms complain about how they keep cropping up like weeds, and they talk about how to get rid of them because they seem to suggest physical nonsense--like negative energy and negative distance. Kind of like how that black, sticky goo that sometimes came out of the ground back before the 19th century was considered to be an odious nuisance best to be gotten rid of. But, mathematically speaking, "negative norms" are not un-mathematical.
- 3 years, 6 months ago
Maaan... how many courses do I need to take to understand that lol
Though on the other hand I feel like I'm beyond this stuff and just feel intimidated by the notations or am just lazy ;p
- 3 years, 6 months ago
Oh, don't think I read this stuff professionally. Who has the time, except the professionals themselves?
- 3 years, 6 months ago
^.^
- 3 years, 6 months ago
$$\color{white}{\text{Too scary! OMG! Save me!}}$$
- 3 years, 6 months ago
O_o...
You're onto something.
- 3 years, 6 months ago
It must look crazier than this!:
Here is my submission (without math-speak). GO BEAT IT!
- 3 years, 6 months ago
I cannot recommend this idea.
Words taken away from math is just Kaboobly doo, like life taken away from a person would just be a bag of chemicals. The essence of mathematics is in its beauty and simplicity, not to scare the damn out of people.
Personally, I've no idea what joy a sane mathematician would get out of solving your problem
Staff - 3 years, 6 months ago
The point is not to make it "right," or even elegant. The point is to make the most crazy-looking problem you can, so that it intimidates the neurons out of those who look at it (besides the math pros).
- 3 years, 6 months ago
I agree that what the world needs are sane mathematicians.
- 3 years, 6 months ago
- 3 years, 6 months ago
I'm still waiting for a solution to this problem!
- 3 years, 6 months ago
I have all of it besides the bottom fraction. I lost the notebook on where I copied the solution down to... Meh...
But the moment I'm finding I'm flying straight over here!
- 3 years, 6 months ago
The only thing that really throws me for a loop is that derivative of irrational order. Where can I learn more about that?
- 3 years, 6 months ago
Pith Derivative
In fact, all but the bottom fraction are scattered throughout my profile. It's an easter egg hunt ^.^
- 3 years, 6 months ago | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9183779954910278, "perplexity": 1862.1130543050872}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583658981.19/warc/CC-MAIN-20190117123059-20190117145059-00449.warc.gz"} |
https://learnmech.com/introduction-to-heat-transfer/ | # Modes of Heat Transfer- Conduction, Convection, Radiation
## Introduction Heat Transfer/ What is Heat Transfer?
• The heat will always be transferred from higher temperature to lower temperature independent of the mode. The energy transferred is measured in Joules (kcal or Btu). The rate of energy transfer, more commonly called heat transfer, is measured in Joules/second (kcal/hr or Btu/hr).
• Heat transfer plays a major role in the design of many other devices, such as car radiators, solar collectors, various components of power plants, and even spacecraft.
## Heat Transfer Mechanisms
• Heat can be transferred in three different modes: conduction, convection, and radiation.
• All modes of heat transfer require the existence of a temperature difference, and all modes are from the high-temperature medium to a lower temperature one.
Heat is transferred by three primary modes:
• Conduction (Energy transfer in a solid)
• Convection (Energy transfer in a fluid)
• Radiation (Does not need a material to travel through)
## Conduction
• Conduction is the transfer of energy from the more energetic particles of a substance to the adjacent less energetic ones as a result of interactions between the particles. Conduction can take place in solids, liquids, or gases.
• In gases and liquids, conduction is due to the collisions and diffusion of the molecules during their random motion.
• In solids, it is due to the combination of vibrations of the molecules in a lattice and the energy transport by free electrons.
• The rate of heat conduction through a medium depends on the geometry of the medium, its thickness, and the material of the medium, as well as the temperature difference across the medium.
• We know that wrapping a hot water tank with glass wool (an insulating material) reduces the rate of heat loss from the tank. The thicker the insulation, the smaller the heat loss.
• We also know that a hot water tank will lose heat at a higher rate when the temperature of the room housing the tank is lowered. Further, the larger the tank, the larger the surface area and thus the rate of heat loss.
• Consider steady heat conduction through a large plane wall of thickness Δx = L and area A, as shown in the figure. The temperature difference across the wall is ΔT = T2-T1
• The rate of heat conduction through a plane layer is proportional to the temperature difference across the layer and the heat transfer
area but is inversely proportional to the thickness of the layer. That is,
Rate of heat conduction ∝ ( Area) (Temperature Difference ) / Thickness
• Fourier’s law of heat conduction
Fourier’s law of conduction of heat is expressed as
Q ∝ A × (dt / dx)
Where,
Q = heat flow through a body per unit time (in watts W)
A = Surface area of heat flow m2,
dt = Temperature difference in oC or K
dx = Thickness of the body in the direction of flow, m.
Hence, we can express the Heat Conduction formula by
Q = – k × A (dt / dx)
Where
k = thermal conductivity of the body and it is a Constant of proportionality
Heat is conducted in the direction of decreasing temperature, and the temperature gradient becomes negative when temperature decreases with increasing x. The negative sign in Eq. ensures that heat transfer in the positive x-direction is a positive quantity.
### Factors affecting the conduction of heat:-
i)The cross-sectional area of the rod (A)
ii)The temperature difference between the two surfaces of the conductor (θ1- θ2)
iii) Time for which heat flows. (t)
iv)Distance between two surfaces. (d)
### Applications of conduction-
1. Fins provided on a motorcycle engine
2. Electric fuse cut off
3. Electric heater
4. Carbonization of coal
5. Melting of iron in a blast furnace
6. Fission reactions in nuclear fuel rods of nuclear reactors.
7. Electrical wiring in housing
8. Electric discharge machining in manufacturing
### Typical units of measure for conductive heat transfer are:
Per unit area (for a given thickness)
Metric (SI): Watt per square meter (W/m)
Overall
Metric (SI) : Watt (W) or kilowatts (kW)
### Thermal Conductivity
• The thermal conductivity of a material can be defined as the rate of heat transfer through a unit thickness of the material per unit area per unit temperature difference.
• The thermal conductivity of a material is a measure of the ability of the material to conduct heat.
• A high value for thermal conductivity indicates that the material is a good heat conductor, and a low value indicates that the material is a poor heat conductor or insulator.
• Note that materials such as copper and silver that are good electrical conductors are also good heat conductors, and have high values of thermal conductivity.
• Materials such as rubber, wood, and styrofoam are poor conductors of heat and have low conductivity values.
## Convection
• Convection is the mode of energy transfer between a solid surface and the adjacent liquid or gas that is in motion, and it involves the combined effects of conduction and fluid motion.
• The faster the fluid motion, the greater the convection heat transfer. In the absence of any bulk fluid motion, heat transfer between a solid surface and the adjacent fluid is by pure conduction.
• The presence of bulk motion of the fluid enhances the heat transfer between the solid surface and the fluid, but it also complicates the determination of heat transfer rates.
• Consider the cooling of a hot block by blowing cool air over its top surface (Figure).
• For example, in the absence of a fan, heat transfer from the surface of the hot block in the figure will be by natural convection since any motion in the air, in this case, will be due to the rise of the warmer (and thus lighter) air near the surface and the fall of the cooler (and thus heavier) air to fill its place.
• Heat transfer between the block and the surrounding air will be by conduction if the temperature difference between the air and the block is not large enough to overcome the resistance of air to movement and thus to initiate natural convection currents.
• Energy is first transferred to the air layer adjacent to the block by conduction.
• This energy is then carried away from the surface by convection, that is, by the combined effects of conduction within the air that are due to the random motion of air molecules and the bulk or macroscopic motion of the air that removes the heated air near the surface and replaces it by the cooler air.
### Types of Convection:
• Forced Convection- Convection is called forced convection if the fluid is forced to flow over the surface by external means such as a fan, pump, or the wind.
• Natural or Free Convection- In contrast, convection is called natural (or free) convection if the fluid motion is caused by buoyancy forces that are induced by density differences due to the variation of temperature in the fluid (Figure)
### Convection Formula :
The rate of convection heat transfer is observed to be proportional to the temperature difference, and is conveniently expressed by Newton’s law of cooling as,
Q = hA ( Ts – T∞)
Where,
h is the convection heat transfer coefficient in W/m^2 °C.
A is the surface area through which convection heat transfer takes place.
Ts is the surface temperature
T∞ is the temperature of the fluid sufficiently far from the surface.
The convection heat transfer coefficient h is not a property of the fluid. It is an experimentally determined parameter whose value depends on all the variables influencing convection such as the surface geometry, the nature of fluid motion, the properties of the fluid, and the bulk fluid velocity.
### Units of measure for the rate of convective heat transfer are:
Metric (SI) : Watt (W) or kilowatts (kW)
### Applications of convection-
1. Forced Convection is used to cool down the headed plate.
2. Forced Convection is used to cool down the heated engine of the vehicle.
3. Forced convection is used to cool down the laptop and supercomputer etc.
4. Forced convection is used to cool down the human body in the summer season.
5. Radiator – Puts warm air out at the top and draws in cooler air at the bottom.
## Difference Between Conduction and Convection ;
Comparison between conduction and convection are as follows ;
Sr. no. Conduction Convection
1.It is the mode of heat transfer from one part of substance to another part of same substance or one substance to another without displacement of molecules or due to the vibrations of molecules.It is the mode of heat transfer from one part of a substance to another part of same substance or one substance to another with a displacement of molecules or due to the fluid flowing.
2.It is the mode of heat transfer in which fluid particles do not mix with each other.It is the mode of heat transfer in which fluid particles mix with each other.
3.It occurs in solid.It occurs in liquid and gases.
4.It governs by Fourier‟s law of heat conduction.It governs by Newton‟s law of convection heat transfer.
5.Example: Heat flow from one end to other end of metal rod.Example: Heat flow from boiler shell to water.
• Radiation is the energy emitted by matter in the form of electromagnetic waves (or photons) as a result of the changes in the electronic configurations of the atoms or molecules.
• Unlike conduction and convection, the transfer of energy by radiation does not require the presence of an intervening medium. In fact, energy transfer by radiation is the fastest (at the speed of light) and it suffers no attenuation in a vacuum. This is how the energy of the sun reaches the earth.
### The mechanism of the heat flow by radiation consists of three distinct phases:
1.Conversion of thermal energy of the hot source into electromagnetic waves:
• All bodies above absolute zero temperature are capable of emitting radiant energy. The energy released by a radiating surface is not continuous but is in the form of successive and separate (discrete) packets or quanta of energy called photons. The photons are propagated through space as rays; the movement of a swarm of photons is described as electromagnetic waves.
2. Passage of wave motion through intervening space:
• The photons, as carries of energy, travel with unchanged frequency in straight paths with speed equal to that of light.
3. Transformation of waves into heat:
• When the photons approach the cold receiving surface, there occurs reconversion of wave motion into thermal energy which is partly absorbed, reflected, or transmitted through the receiving surface.
• In heat transfer studies we are interested in thermal radiation, which is the form of radiation emitted by bodies because of their temperature. It differs from other forms of electromagnetic radiation such as x-rays, gamma rays, microwaves, radio waves, and television waves that are not related to temperature.
### Radiation Heat transfer equation :
The net exchange of heat between the two radiating surfaces is due to the face that one at a higher temperature radiates more and receives less energy for its absorption.
Q = σ ε Ai Fij ( Ti^4 – Tj^4 )
Where,
Q = Heat flow rate from surface i to j
σ = Stephan- boltzman constant
ε = Emmissivity
Ai = area of surface i
Fij = Form factor between surface i and j
Ti and Tj = absolute temperatures of the surfaces
The maximum rate of radiation that can be emitted from a surface at an absolute temperature (in K) is given by the Stefan–Boltzmann law as
Eb = σb AT^4
Where Eb is the energy radiated by the black body, σb is the Stefan Boltzman constant
## Terms Related to radiation :
Transmissivity:
It is the fraction of energy that is transmitted through the body.
Or
The ratio of the amount of energy transmitted to the amount of energy incident on a body.
Black body: A black body is an object that absorbs all the radiant energy reaching its surface from all the directions with all the wavelengths.
Grey Body: A gray body is defined as a body whose absorptivity of a surface does not vary with variation in temperature and wavelength of the incident radiation. It absorbs a definite percentage of incident energy irrespective of wavelength. Its absorptivity lies between 0 to 1.
Reflectivity:
It is defined as the ratio of the amount of energy reflected in the amount of energy incident on a body.
Typical units of measure for the rate of radiant heat transfer
Metric (SI) ——Watt per square meter (W/m
Example of radiation: Energy emitted by the sun reaches the earth through radiation.
## Applications of heat transfer:
1) Fins provided on the motorcycle engine.
2) Cooling jackets provided in cylinder blocks
4) Heat carried away by exhaust gases
5) Heat transfer from sun rays into the cabin/car
6) HVAC system etc.
###### Mechanical Subjectwise Basic Concept Notes ,Articles
Sachin Thorat
Sachin is a B-TECH graduate in Mechanical Engineering from a reputed Engineering college. Currently, he is working in the sheet metal industry as a designer. Additionally, he has interested in Product Design, Animation, and Project design. He also likes to write articles related to the mechanical engineering field and tries to motivate other mechanical engineering students by his innovative project ideas, design, models and videos.
This site uses Akismet to reduce spam. Learn how your comment data is processed. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.847463846206665, "perplexity": 775.1121746112701}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300722.91/warc/CC-MAIN-20220118032342-20220118062342-00289.warc.gz"} |
https://inteltoday.org/2020/11/05/kryptos-30-years-anniversary-the-solution-of-section-ii/ | ## KRYPTOS 30 Years Anniversary — The Solution of Section II
Kryptos is a sculpture by the American artist Jim Sanborn located on the grounds of the Central Intelligence Agency (CIA) in Langley, Virginia. Of the four parts of the message, the first three have been solved. The last part of the message remains as one of the most famous unsolved code in the world.
November 5 2020 — The ciphertext on the left-hand side of the sculpture (as seen from the courtyard) of the main sculpture contains 869 characters in total (865 letters and 4 question marks). The right-hand side of the sculpture comprises a keyed Vigenère encryption tableau, consisting of 867 letters. In our last post about KRYPTOS, we learned how to break a Vigenère code. In this post, we finish the job regarding the entire section II. Follow us on Twitter: @INTEL_TODAY
RELATED POST: KRYPTOS 30 Years Anniversary — How to Break a Vigenère Code
Top parts of the sculpture
Bottom parts of the sculpture
Where do we start?
This post will discuss only some part — although a big one — of the message coded in the (LEFT) top panel. Many people have realized something ‘promising’ in the fourth line.
GGWHKK? DQM CPFQZ DQM MIAGPFXHQRLG
The sequence ‘DQM’ appears twice (separated by a length of eight characters). In English, a three-letter word is very likely to be ‘THE’. We are going to start our journey by assuming that the entire text from DQM to the end of the first panel has been coded by a unique method (Vigenère table) using the same key and passphrase. Perhaps, Sanborn even gave us the correct key — KRYPTOS — in the right panels? First, we need to learn a few concepts.
The Vigenère Cipher
The Vigenère square or Vigenère table, also known as the tabula recta, can be used for encryption and decryption.
A table of alphabets, termed a tabula recta, Vigenère square, or Vigenère table can be used to encrypt a message. It consists of the alphabet written out 26 times in different rows, each alphabet shifted cyclically to the left compared to the previous alphabet, corresponding to the 26 possible Caesar ciphers. At different points in the encryption process, the cipher uses a different alphabet from one of the rows. The alphabet used at each point depends on a repeating passphrase. [Wikipedia]
For example, suppose that the plaintext to be encrypted is:
ATTACKATDAWN
The person sending the message chooses a passphrase and repeats it until it matches the length of the plaintext, for example, the passphrase “LEMON”:
Plaintext ATTACKATDAWN
Passphrase LEMONLEMONLE
Ciphertext LXFOPVEFRHNR
The ‘Keyed’ Vigenère Cipher
The ‘Keyed’ Vigenère Cipher uses an alternate tableau. The “Alphabet Key” helps decide the alphabet to use to encrypt and decrypt the message. Instead of just using the alphabet from A to Z in order, the alphabet key puts a series of letters first, making the cipher even tougher to break. The “passphrase” is the code word/sentence used to select columns in the tableau.
For instance, in the right panels of his sculpture, Sanborn used the key “KRYPTOS” before the normal sequence of the alphabet: KRYPTOSABCDE…
The Index of Coincidence
The Index of Coincidence [IC] measures the probability that any two randomly chosen source-language letters are the same. This probability — also known as the $kappa$ index — is about 0.067 for monocase English while the probability of a coincidence for a uniform random selection from the alphabet is 1/26 = 0.0385.
$\displaystyle \kappa =\frac{\sum_{i=1}^{c}n_i(n_i -1)}{N(N-1)}$
where c is the size of the alphabet (26 for English), N is the length of the text, and $n_1$ through $n_c$ are the observed ciphertext letter frequencies, as integers.
The IC for ‘Section II’ of KRYPTOS
Therefore, I have calculated the IC for each passphrase length from 1 to 16.
Here are my calculations for length going from 1 to 10. The last number is the average of each columns.
0.0452
0.0455 0.0507 0.0481
0.0434 0.0434 0.05 0.0456
0.0523 0.0646 0.0541 0.0529 0.0560
0.0439 0.0606 0.0485 0.0452 0.0462 0.04888
0.0461 0.0481 0.0512 0.0485 0.0451 0.0496 0.0481
0.039 0.0426 0.0363 0.0527 0.0352 0.0574 0.0379 0.0430
0.0755 0.079 0.0732 0.0674 0.0732 0.0488 0.089 0.0549 0.0701
0.039 0.0345 0.0495 0.0405 0.0375 0.048 0.0541 0.0435 0.0429 0.0432
0.0374 0.0702 0.0455 0.0549 0.0473 0.0511 0.0417 0.0568 0.036 0.0625 0.0503
Please, keep in mind that these numbers do NOT apply to the full section II. Nevertheless, it is absolutely clear that the length of the passphrase is 8.
The peaks above average at length 4, 12 and 16– respectively 0.0560, 0.0565 and 0.0663– are due to some mathematical consequences.
Let me explain this. The text has been encrypted with a L = 8 long passphrase. Then, when we divide the text in four column, one every two letters has been substituted with the same alphabet.
1 2 3 4
5 6 7 8
1 2 3 4
5 6 7 8
1 2 3 4
So, the probability that two randomly selected letters are identical will depend on whether these letters are in an odd or even position. Therefore such a probability will be about (0.067+0.0385)/2 = 0.0528.
The same logic applies if you divide the text in twelve columns.
1 2 3 4 5 6 7 8 1 2 3 4
5 6 7 8 1 2 3 4 5 6 7 8
1 2 3 4 5 6 7 8 1 2 3 4
5 6 7 8 1 2 3 4 5 6 7 8
Finally, if you divide the text in a length twice longer than the passphrase, all letters are encoded with the same alphabet. Therefore, the IC should again be close to 0.067. At this point, you should have no doubt whatsoever on the fact that the passphrase is 8 letters long.
Comment — Unlike David Stein, my calculations show no roughness/anomaly at length 13. Perhaps, one (or both) of us made a mistake. We both worked out the whole thing by hand. It is of course FUN, but not best for exactitude and reliability…] Or perhaps, the ‘signal’ detected by Stein is a statistical ‘fluke’ resulting from a short number of letters and the fact that Stein and myself worked on different parts of the real section II. This issue MUST be investigated a bit further. I will make my calculations available to the readers. Obviously, a computer code will decide quickly on this issue. — End of Comment
Anyway, the hard work has been done. The fun can begin.
The Cipher in 8 columns
So, I re-wrote the coded text in eight columns. I know that ALL letters in a given column have been encrypted with the same “Vigenère alphabet”.
Back to our early hunch…
At this point, a frequency analysis of the letters contained in each column will reveal which letters in the plaintext have been substituted in the coded text. But that is a bit mathematical and not nearly as much fun as following our early hunch!
I will now search for the ‘Vigenère alphabets’ [keyed with KRYPTOS] that transform ‘DQM’ into ‘THE’. Clearly, the passphrase characters for the first, second, and third column correspond to the letters: S, S and A.
K R Y P T O S A B C D E F G H I J L M N Q U V W X Z
S A B C D E F G H I J L M N Q U V W X Z K R Y P T O
S A B C D E F G H I J L M N Q U V W X Z K R Y P T O
A B C D E F G H I J L M N Q U V W X Z K R Y P T O S
How does it look?
How to keep going?
At this point, it seems very likely that we are on the right path. Let us keep going. Anyone will go his own way. I simply noticed that the first three letters in row 41 are ‘WES’ which is likely to be followed by a ‘T’. This implies that the 4th letter in the passphrase is again a ‘A’.
Now, words begin to appear. For instance, GATH in row 7 is likely to be followed by a ‘E’. This implies that the 5th in the passphrase is a ‘B’.
We could go on…. But we can be smarter. We KNOW that the passphrase is eight characters long. And so far, we have identify SSAAB. Assuming that the passphrase is an english word — and this is why you should NEVER use english words in a passphrase — the word obviously must start with a ‘A’ as there are no English words with a double A. So we are looking for a word that looks like this: AB – – – SSA. You will find only one such word in English: ABSCISSA.
The TEXT at Last
And now?
Here is thus the plaintext message:
“They used the Earths magnetic field X The information was gathered and transmitted undergruund to an unknown location X Does Langley know about this? They should Its buried out there somewhere X Who knows the exact location? Only WW This was his last message X Thirty eight degrees fifty seven minutes six point five seconds north Seventy seven degrees eight minutes forty four seconds west ID by rows”
Sanborn acknowledges a mistake
However, we are not done for several reasons. First, Jim Sanborn has acknowledged that he made an error in the sculpture by omitting an “X” used to separate sentences, for aesthetic reasons. Therefore, the deciphered text that ends “…FOUR SECONDS WEST ID BY ROWS” is NOT correct. Let us now fix that.
“FOUR SECONDS WEST ID BY ROWS” should actually be “FOUR SECONDS WEST X LAYER TWO”.
Question: What “letter” did Jim Sanborn forgot to put in the sculpture?
OK. I will tell you. The last line misses one ‘S”:
DQUMEBEDMHDAFMJGZNUPLGESWJLLAETG
TESFORTYFOURSECONDSWESTXLAYERTWO
Where does Section II begin?
Since our passphrase starts in the middle of a word ‘SSA’ from ‘ABSCISSA’, we can safely conclude that the coded text starts earlier than I had assumed. Clearly, it starts 5 + (8*N) characters earlier!
And indeed, with a bit of additional work, you will come to the conclusion that the 37 characters [5 + (8*4)] before the question mark
VFPJUDEEHZWETZYVGWHKKQETGFQJNCEGGWHKK?
have been encoded with the same key and passphrase. We can now easily decode this too.
And the result is:
“It was totally invisible Hows that possible?”
KRYPTOS — SECTION II
“It was totally invisible Hows that possible? They used the Earths magnetic field X The information was gathered and transmitted undergruund to an unknown location X Does Langley know about this? They should Its buried out there somewhere X Who knows the exact location? Only WW This was his last message X Thirty eight degrees fifty seven minutes six point five seconds north Seventy seven degrees eight minutes forty four seconds west X Layer two”
Finally, the puzzle is also — and perhaps most importantly — about solving the underlying riddle. But, we are not ready for this yet. Tiny steps for tiny feet…
When it is done, we may understand so much better the failures of the US Intelligence Agencies since the fall of the Berlin wall. Here is a hint: Hubris.
Stay tuned!
Kryptos: The CIA’s Unsolved Secret Code
“Kryptos remains one of the most famous unsolved codes in the world today.
Since the encrypted sculpture was placed on display by American artist Jim Sanborn on the grounds of the Central Intelligence Agency (CIA) in Langley, Virginia, in 1989, there has been much speculation about the meaning of the encrypted messages it bears.
Of the four messages, three have been solved, with the fourth remaining a mystery. Over the years, hints and slight cracks have appeared in the armour of this puzzle, however its continuity of being one of the greatest enigmas of all time continues to provide a diversion for cryptanalysts, both amateur and professional, who are attempting to decrypt the final section.
Even after unsolving the final section, the final riddle of this enigma within an enigma must be worked out, which from the solutions so far seem to be connected with (1) Illusion in Darkness, (2) Using the Earth’s magnetic field being transmitted to something buried underground and (3) Ancient Egyptian tombs, with the clue (4) still waiting to be solved after more than 2 decades.”
REFERENCES
Kryptos — Wikipedia
Stein, David D. (1999). “The Puzzle at CIA Headquarters: Cracking the Courtyard Crypto” (pdf). Studies in Intelligence. 43 (1).
=
The KRYPTOS Code — The Solution of Section II
KRYPTOS Week 2019 — The Solution of Section II
KRYPTOS 30 Years Anniversary — The Solution of Section II
This entry was posted in KRYPTOS and tagged , , . Bookmark the permalink. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 4, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5674784779548645, "perplexity": 1111.7683566968174}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335514.65/warc/CC-MAIN-20221001003954-20221001033954-00744.warc.gz"} |
http://intellectualmathematics.com/blog/active-learning-implementation-ideas/ | # Active learning implementation ideas
You don’t want to lecture for hours on end to passive students; you want to stimulate discussion, interaction, active engagement in developing the ideas. But how? I suggest using a system such as WeBWorK to translate the brilliant insights and illuminating trains of thought from your lectures into a worksheet-style format. This gives the discussion a clear structure and purpose, and incentivises students to participate wholeheartedly since answers are automatically graded and count toward their final grade. This works on many levels and to many ends that go beyond the advantages of using WeBWorK for computational practice problems.
At the level of lecture discussion it makes students eager to attend class and attentively follow your reasoning since this will give them “free answers.” For instance, you are introducing multivariable functions and want to convey the idea that a great way of analysing them are by their cross-sections with horizontal and vertical planes. So you pose this problem and work out at least part of it on the board. Students are primed to appreciate your point since it answers a direct need of theirs. And by stopping short of giving away the final answer you force students to pay attention to the underlying method since they will need it to complete the problem.
You also want to create substantive students discussions in pairs or small groups. To this end it is nice to have conceptual questions that allow for multiple reasonable standpoints. An example is this “paradox” on how one integral can have several “different” answers. You can ask students to work in pairs, one checking one method, the other the other, and then try to convince each other that they are right. Heated discussion ensues, ultimately leading to some reflection on the meaning of the answer––a lesson that cannot be taught often enough.
Full-class discussion or group work is especially stimulating for problems that involve more open-ended conceptual thinking, interpretation, and reflection, rather than single-track computation. This and this are examples that work very well.
Exam-oriented thinking is a plague that prevents students from learning and teachers from teaching. Many a traditional course shoots itself in the foot already before it starts by being structured around the idea of a final exam consisting of a fixed number of highly standardised, computational problems. This corrupts the teacher, who in this mindset asks questions that are “good practice for the exam” instead of asking what lines of inquiry are best for actually learning mathematics in a meaningful way. It also corrupts the students, who quickly conclude that rote computations is all they “really need to know” and hence zone out at any attempts by the teacher to explain underlying reasoning.
The worksheet model frees us from this tyranny. Teachers are no longer crippled by the straightjacket of having to ask only “exam-type” questions, and students find that a large part of their grade comes from a variety of questions involving genuine thought rather than a restrictive set of archetype calculations. We are free to pursue interesting “one-off” problems that make you think, instead of having to discard them as “unexaminable” just because they are not replicable ad infinitum with different formulas and numbers. Since a large part of the graded work takes part in a formative, discussion-oriented setting, we are not constrained to ask self-contained, unambiguous questions of a fixed level of difficulty, as a traditional high-stakes exam requires. Instead of designing our course with the exam in mind, we can design it with mathematical thinking and learning in mind.
Here are a number of examples of problems in this vein. These questions make you think about the material from various vantage points: the “why” behind certain formulas; visual, intuitive, qualitative interpretation of what you are doing; and at the end even some “cultural interest” connections.
These types of problems can be incorporated in a class in various ways depending on the format of the class. In a larger lecture setting they can be used as the basis for the lecture, in which case the students have an extra incentive to follow along since they need the answers. They can also be used to break up the lecture for a few minutes of reflection and discussion among students. In some settings the boundary between class discussion material and exercise assignments need not be sharp: one can assign a number of these kinds of problems and let student requests determine which get discussed in class and which are left as homework. A small class could even be entirely student-driven thanks to the structure that a well-thought-out sequence of questions affords.
The worksheet format is also suited for longer “story” problems such as these, which allow us to work out substantial problems from first principles, such as setting up a differential equation before solving it. These problems too can be incorporated in various ways, from making homework more interesting to making extended discussions of applications viable in a lecture (since it is now truly part of the [graded] course content rather than “enrichment” material as in a traditional course). In classes of moderate size one can also assign such problems to individual students to present to the class. Since everyone needs to enter the answers in the online system, it’s everyone’s points on the line and the class will listen attentively and try to catch any errors. Assigning such problems based on individual student interests is also a way of drawing on existing expertise and connecting the course with other parts of their study program.
The worksheet format also allows us to break down the traditional division between “theory” and “practice.” Again, this very unfortunate and harmful aspect of conventional teaching is in large part a product of examination needs: having students run through computation problems that can be multiplied at will is very convenient for examination purposes, whereas asking for explanations and conceptual reasoning is very messy. But with the WeBWorK worksheet model we can make the latter realistically implementable.
One useful way of getting students engaged with proof-oriented thinking is asking them to evaluate purported proofs, like this. In a traditional course, the teacher and textbook may model many examples of good proofs, but students are seldom confronted with erroneous reasoning. Therefore they often come to associate proofs more with superficial aspects such as phraseology than with actual content. Reasoning-evaluation problems like this forces them to look deeper and cultivates a healthy critical mindset for reading proofs in general.
Here’s another theory example: I introduce the fundamental theorem of calculus, give an intuitive proof, and then ask some follow-up questions that should be easy if you followed the proof, but often prove not so easy since students have so little experience with this type of mathematical reasoning––which is exactly why we need these kinds of questions. In a classroom I might go through the given proof on half a board and then ask the students to complete the proof of the follow-up case in parallel on the other half of the board, mimicking the steps of the first proof with minor adjustments as needed. I might ask for a volunteer student to come to the board to carry this out with the help of suggestions from the class and maybe some leading questions from me if needed. If I refuse to do it any other way (i.e., explain it myself) the students will be pushed to go along with it: after all, if they don’t they will have to do this as a homework problem, which will be much harder and more work.
The second follow-up problem asks for much greater conceptual insight. I marked it with a dagger $\dagger$, signalling that it is a challenge problem. I like to include some problems like this for ambitious students to puzzle about, while others do not need to worry about them since the grading scale will reflect that these kinds of problems are for those aspiring to the very highest grades.
Much other theoretical material will be of this “$\dagger$ type”: it is not at all required for average students in introductory courses, but on the other hand you want to encourage students with substantial mathematical aspirations to start reflecting on more theoretical aspects as early as possible. One way of doing this is to include some of the more theoretical material as $\dagger$-marked readings with various interspersed comprehension questions. Here are a number of examples of how this can be implemented in WeBWorK. With such an “interactive textbook” type of presentation, ambitious students are rewarded for reading the theory and given a “training wheels” guide to reflective reading of mathematical texts. These are some examples.
All of the above problems I have written myself. WeBWorK comes with a large library of standard practice problems (which I also use), but to reach all the goals I highlighted above we must go beyond this restricted notion of what an online homework system is for. | {"extraction_info": {"found_math": true, "script_math_tex": 3, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6756466627120972, "perplexity": 748.52767617897}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823462.26/warc/CC-MAIN-20171019194011-20171019214011-00708.warc.gz"} |
https://physicsoverflow.org/28239/intuition-for-homological-mirror-symmetry | # Intuition for Homological Mirror Symmetry
+ 5 like - 0 dislike
135 views
first of all, I need to confess my ignorance with respect to any physics since I'm a mathematician. I'm interested in the physical intuition of the Langlands program, therefore I need to understand what physicists think about homological mirror symmetry. This question is related to this other one Intuition for S-duality.
As I have heard, the mirror symmetry can be derived from S-duality by picking a topological sector using an element of the super Lie algebra of the Lorentz group $Q$, such that things commutes with $Q$, $Q^2 = 0$ and some other properties that I actually don't understand. Then to construct this $Q$, one would need to recover the action of $\text{Spin}(6)$ (because the dimension 4 is a reduction of a 10 dimensional theory? is this correct?) and there are different ways of doing this. Anyway, passing through all the details, this is a twisting of the theory giving a families of topological field theory parametrized by $\mathbb{P}^1$.
Compactifying this $M_4 = \Sigma \times X$ gives us a topological $\sigma$-model with values in Hitchin moduli space (that is hyperkähler). The Hitchin moduli space roughly can be described as semi-stable flat $G$ bundles or vector bundles with a Higgs field. However since the Hitchin moduli is kähler, there will be just two $\sigma$-models: A-models and B-models. I don't want to write more details, so, briefly there is an equivalence between sympletic structures and complex structures (for more details see http://arxiv.org/pdf/0906.2747v1.pdf).
So the main point is that Lagrangian submanifolds (of a Kähler-Einstein manifold) with a unitary local system should be dual to flat bundles.
1) But what's the physical interpretation of a Lagrangian submanifold with a unitary local system?
2) What's the physical intuition for A-models and B-models (or exchanging "models" by "branes")?
3) What's the physical interpretation of this interplay between complex structures and sympletic ones (coming from the former one)?
This post imported from StackExchange Physics at 2015-03-17 04:42 (UTC), posted by SE-user user40276
asked Mar 6, 2015
edited Mar 17, 2015
Re 'And, then the theory would be non-perturbative, since it would be defined "for all" τ, because amplitudes are computed with an expansion in power series in τ':
Actually, to a physicist, such a power-series expansion is the hallmark of (the outcome of) a perturbative theory: Such power series typically correspond to some perturbation calculated to (arbitrarily) high order. A base in the coefficient corresponds to a physical coupling constant and causes such approaches to become invalid for large (e.g. unity) coupling constants as the power series no longer converges.
Please take this comment with a grain of salt: I am myself from a foreign field (to theoretical physics) as I am a mere quantumoptics experimental physicist curious about expanding my mental horizon. This is just my first "that's usually like this" association.
The answer to this question would easily take thousands of pages. A first thousand is the Clay math book "Mirror Symmetry".
Please use answers only to (at least partly) answer questions. To comment, discuss, or ask for clarification, leave a comment instead. To mask links under text, please type your text, highlight it, and click the "link" button. You can then enter your link URL. Please consult the FAQ for as to how to format your post. This is the answer box; if you want to write a comment instead, please use the 'add comment' button. Live preview (may slow down editor) Preview Your name to display (optional): Email me at this address if my answer is selected or commented on: Privacy: Your email address will only be used for sending these notifications. Anti-spam verification: If you are a human please identify the position of the character covered by the symbol $\varnothing$ in the following word:p$\hbar$ysicsOverflo$\varnothing$Then drag the red bullet below over the corresponding character of our banner. When you drop it there, the bullet changes to green (on slow internet connections after a few seconds). To avoid this verification in future, please log in or register. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.804456353187561, "perplexity": 846.4801483527274}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027321140.82/warc/CC-MAIN-20190824130424-20190824152424-00278.warc.gz"} |
http://www.anderswallin.net/tag/hene/ | ## Measuring the thermal expansion of ULE glass
Here's an experiment I've done recently:
(Time-lapse of ca 18 hour experiment. Bottom left is a spectrum-analyzer view of the beat-note signal. Top left is a frequency counter reading of the beat-note. Bottom right is a screen showing the a camera-view of the output-beam from the resonator)
This is a measurement of the thermal expansion of a fancy optical resonator made from Corning "Ultra Low Expansion" (ULE) glass. This material has a specified thermal expansion of 0.03 ppm/K around room temperature. This thermal expansion is roughly 800-times smaller than Aluminium, around 400-times smaller than Steel, and 40-times better than Invar - a steel grade specifically designed for low thermal expansion.
Can we do even better? Yes! Because ULE glass has a coefficient of thermal expansion (CTE) that crosses zero. Below a certain temperature it shrinks when heated, and above the zero-crossing temperature it expands when heated (like most materials do). This kind of behavior sounds exotic, but is found is something as common as water! (water is heaviest at around 4 C). If we can use the ULE resonator at or very close to this magic zero-crossing temperature it will be very very insensitive to small temperature fluctuations.
So in the experiment I am changing the temperature of the ULE glass and looking for the temperature where the CTE crosses zero (let's call this temperature T_ZCTE). The effect is fairly small: if we are 1 degree C off from T_ZCTE we would expect the 300 mm long piece of ULE glass to be 200 pm (picometers) longer than at T_ZCTE. That's about the size of a single water-molecule, so this length change isn't exactly something you can go and measure with your digital calipers!
Here's how it's done (this drawing is simplified, but shows the essential parts of the experiment):
We take a tuneable HeNe laser and lock the frequency of the laser to the ULE-cavity. The optical cavity/resonator is formed between mirrors that are bonded to the ends of the piece of ULE glass. We can lock the laser to one of the modes of the cavity, corresponding to a situation where (twice) the length of the cavity is an integer number of wavelenghts. Now as we change the temperature of the ULE-glass the laser will stay locked, and as the glass shrinks/expands the wavelength (or frequency/color) of the laser will change slightly.
Directly measuring the frequency of laser light isn't possible. Instead we take second HeNe laser, which is stabilized to have a fixed frequency, and detect a beat-note between the stabilized laser and the tuneable laser. The beat-note will have a frequency corresponding to the (absolute value of the) difference in frequency between the two lasers. Now measuring a length-change corresponding to the size of a single water-molecule (200 pm) shouldn't be that hard anymore!
Let's say the stabilized laser has a wavelength of $\lambda_1 = 632.8 \,\mathrm{nm}$ (red light). Its frequency will be $\nu_1 = {c \over \lambda_1} = 474083438685209 \,\mathrm{Hz}$ (that's around 474 THz). When the tuneable laser is locked to the cavity we force its wavelength to agree with $\lambda_2 = {2L\over m}$ where $m$ is an integer and $L$ is the length of the cavity. I've drawn only a small number of wavelengths in the figure, but a realistic integer is $m=948167$. We get $\lambda_2 = 632.7999181579 \,\mathrm{nm}$ and $\nu_2 = {c \over \lambda_2} = 474083500000000 \,\mathrm{Hz}$, very nearly but not quite the same wavelength/frequency as the stabilized laser. Now our photodiode which measures the beat-note will measure a frequency of $\nu_{beat} = | \nu_1-\nu_2 | = 61.314 \,\mathrm{ MHz}$.
How does this change when the ULE glass expands by 200 pm? When we heat or cool the cavity by 1 degree C the length changes to 300 mm + 200 pm, and the wavelength of the tuneable laser will change to
$\lambda_3 = 632.7999185797\,\mathrm{nm}$. Now our beat-note detector will show $\nu_{beat} = | \nu_1-\nu_3 | = 60.998 \,\mathrm{ MHz}$. That's a change in the beat-note of more than 300 kHz - easily measurable!
That's how you measure a length-change corresponding to the diameter of a water molecule!
Why do this? Some of the best ultra-stable lasers known are made by locking the laser to this kind of ULE-resonator. Narrow linewidth ultra-stable lasers are interesting for a host of atomic physics and other fundamental physics experiments.
Update 2013 August: I made a drawing in inkscape of the experimental setup.
This figure shows most (if not all?) of the important components of this experiment. The AOM is not strictly required but I found it useful to shift the tuneable HeNe laser by +80 MHz to reach a TEM-00 mode of the ULE resonator. Not shown is a resonance-circuit (LC-tank) between the 2.24MHz sinewave-generator and the EOM. The EOM was temperature controlled by a TEC with an NTC thermistor giving temperature feedback.
## Laser noise
I've been measuring the beat-note (wikipedia talks about sound-waves, but it works for light-waves too) between two HeNe lasers. It jumps around maybe +/- 5 MHz quite rapidly which is not nice at all:
One laser is a commercial stabilized laser (I've tried both a HP5501A and a Mark-Tech 7900), and the other laser is a tunable one which I want to use for my experiment. But with this much jumping around the tunable laser is no good for the experiment I want to do 🙁 | {"extraction_info": {"found_math": true, "script_math_tex": 11, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 11, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7631548643112183, "perplexity": 1435.1047765439132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655900614.47/warc/CC-MAIN-20200709162634-20200709192634-00396.warc.gz"} |
https://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/07-Kalman-Filter-Math.ipynb | # Kalman Filter Math¶
In [1]:
%matplotlib inline
In [2]:
#format the book
import book_format
book_format.set_style()
Out[2]:
If you've gotten this far I hope that you are thinking that the Kalman filter's fearsome reputation is somewhat undeserved. Sure, I hand waved some equations away, but I hope implementation has been fairly straightforward for you. The underlying concept is quite straightforward - take two measurements, or a measurement and a prediction, and choose the output to be somewhere between the two. If you believe the measurement more your guess will be closer to the measurement, and if you believe the prediction is more accurate your guess will lie closer to it. That's not rocket science (little joke - it is exactly this math that got Apollo to the moon and back!).
To be honest I have been choosing my problems carefully. For an arbitrary problem designing the Kalman filter matrices can be extremely difficult. I haven't been too tricky, though. Equations like Newton's equations of motion can be trivially computed for Kalman filter applications, and they make up the bulk of the kind of problems that we want to solve.
I have illustrated the concepts with code and reasoning, not math. But there are topics that do require more mathematics than I have used so far. This chapter presents the math that you will need for the rest of the book.
## Modeling a Dynamic System¶
A dynamic system is a physical system whose state (position, temperature, etc) evolves over time. Calculus is the math of changing values, so we use differential equations to model dynamic systems. Some systems cannot be modeled with differential equations, but we will not encounter those in this book.
Modeling dynamic systems is properly the topic of several college courses. To an extent there is no substitute for a few semesters of ordinary and partial differential equations followed by a graduate course in control system theory. If you are a hobbyist, or trying to solve one very specific filtering problem at work you probably do not have the time and/or inclination to devote a year or more to that education.
Fortunately, I can present enough of the theory to allow us to create the system equations for many different Kalman filters. My goal is to get you to the stage where you can read a publication and understand it well enough to implement the algorithms. The background math is deep, but in practice we end up using a few simple techniques.
This is the longest section of pure math in this book. You will need to master everything in this section to understand the Extended Kalman filter (EKF), the most common nonlinear filter. I do cover more modern filters that do not require as much of this math. You can choose to skim now, and come back to this if you decide to learn the EKF.
We need to start by understanding the underlying equations and assumptions that the Kalman filter uses. We are trying to model real world phenomena, so what do we have to consider?
Each physical system has a process. For example, a car traveling at a certain velocity goes so far in a fixed amount of time, and its velocity varies as a function of its acceleration. We describe that behavior with the well known Newtonian equations that we learned in high school.
\begin{aligned} v&=at\\ x &= \frac{1}{2}at^2 + v_0t + x_0 \end{aligned}
Once we learned calculus we saw them in this form:
$$\mathbf v = \frac{d \mathbf x}{d t}, \quad \mathbf a = \frac{d \mathbf v}{d t} = \frac{d^2 \mathbf x}{d t^2}$$
A typical automobile tracking problem would have you compute the distance traveled given a constant velocity or acceleration, as we did in previous chapters. But, of course we know this is not all that is happening. No car travels on a perfect road. There are bumps, wind drag, and hills that raise and lower the speed. The suspension is a mechanical system with friction and imperfect springs.
Perfectly modeling a system is impossible except for the most trivial problems. We are forced to make a simplification. At any time $t$ we say that the true state (such as the position of our car) is the predicted value from the imperfect model plus some unknown process noise:
$$x(t) = x_{pred}(t) + noise(t)$$
This is not meant to imply that $noise(t)$ is a function that we can derive analytically. It is merely a statement of fact - we can always describe the true value as the predicted value plus the process noise. "Noise" does not imply random events. If we are tracking a thrown ball in the atmosphere, and our model assumes the ball is in a vacuum, then the effect of air drag is process noise in this context.
In the next section we will learn techniques to convert a set of higher order differential equations into a set of first-order differential equations. After the conversion the model of the system without noise is:
$$\dot{\mathbf x} = \mathbf{Ax}$$
$\mathbf A$ is known as the systems dynamics matrix as it describes the dynamics of the system. Now we need to model the noise. We will call that $\mathbf w$, and add it to the equation.
$$\dot{\mathbf x} = \mathbf{Ax} + \mathbf w$$
$\mathbf w$ may strike you as a poor choice for the name, but you will soon see that the Kalman filter assumes white noise.
Finally, we need to consider any inputs into the system. We assume an input $\mathbf u$, and that there exists a linear model that defines how that input changes the system. For example, pressing the accelerator in your car makes it accelerate, and gravity causes balls to fall. Both are control inputs. We will need a matrix $\mathbf B$ to convert $u$ into the effect on the system. We add that into our equation:
$$\dot{\mathbf x} = \mathbf{Ax} + \mathbf{Bu} + \mathbf{w}$$
And that's it. That is one of the equations that Dr. Kalman set out to solve, and he found an optimal estimator if we assume certain properties of $\mathbf w$.
## State-Space Representation of Dynamic Systems¶
We've derived the equation
$$\dot{\mathbf x} = \mathbf{Ax}+ \mathbf{Bu} + \mathbf{w}$$
However, we are not interested in the derivative of $\mathbf x$, but in $\mathbf x$ itself. Ignoring the noise for a moment, we want an equation that recursively finds the value of $\mathbf x$ at time $t_k$ in terms of $\mathbf x$ at time $t_{k-1}$:
$$\mathbf x(t_k) = \mathbf F(\Delta t)\mathbf x(t_{k-1}) + \mathbf B(t_k)\mathbf u (t_k)$$
Convention allows us to write $\mathbf x(t_k)$ as $\mathbf x_k$, which means the the value of $\mathbf x$ at the $k^{th}$ value of $t$.
$$\mathbf x_k = \mathbf{Fx}_{k-1} + \mathbf B_k\mathbf u_k$$
$\mathbf F$ is the familiar state transition matrix, named due to its ability to transition the state's value between discrete time steps. It is very similar to the system dynamics matrix $\mathbf A$. The difference is that $\mathbf A$ models a set of linear differential equations, and is continuous. $\mathbf F$ is discrete, and represents a set of linear equations (not differential equations) which transitions $\mathbf x_{k-1}$ to $\mathbf x_k$ over a discrete time step $\Delta t$.
Finding this matrix is often quite difficult. The equation $\dot x = v$ is the simplest possible differential equation and we trivially integrate it as:
$$\int\limits_{x_{k-1}}^{x_k} \mathrm{d}x = \int\limits_{0}^{\Delta t} v\, \mathrm{d}t$$$$x_k-x_{k-1} = v \Delta t$$$$x_k = v \Delta t + x_{k-1}$$
This equation is recursive: we compute the value of $x$ at time $k$ based on its value at time $k-1$. This recursive form enables us to represent the system (process model) in the form required by the Kalman filter:
\begin{aligned} \mathbf x_k &= \mathbf{Fx}_{k-1} \\ &= \begin{bmatrix} 1 & \Delta t \\ 0 & 1\end{bmatrix} \begin{bmatrix}x_{k-1} \\ \dot x_{k-1}\end{bmatrix} \end{aligned}
We can do that only because $\dot x = v$ is simplest differential equation possible. Almost all other in physical systems result in more complicated differential equation which do not yield to this approach.
State-space methods became popular around the time of the Apollo missions, largely due to the work of Dr. Kalman. The idea is simple. Model a system with a set of $n^{th}$-order differential equations. Convert them into an equivalent set of first-order differential equations. Put them into the vector-matrix form used in the previous section: $\dot{\mathbf x} = \mathbf{Ax} + \mathbf{Bu}$. Once in this form we use of several techniques to convert these linear differential equations into the recursive equation:
$$\mathbf x_k = \mathbf{Fx}_{k-1} + \mathbf B_k\mathbf u_k$$
Some books call the state transition matrix the fundamental matrix. Many use $\mathbf \Phi$ instead of $\mathbf F$. Sources based heavily on control theory tend to use these forms.
These are called state-space methods because we are expressing the solution of the differential equations in terms of the system state.
### Forming First Order Equations from Higher Order Equations¶
Many models of physical systems require second or higher order differential equations with control input $u$:
$$a_n \frac{d^ny}{dt^n} + a_{n-1} \frac{d^{n-1}y}{dt^{n-1}} + \dots + a_2 \frac{d^2y}{dt^2} + a_1 \frac{dy}{dt} + a_0 = u$$
State-space methods require first-order equations. Any higher order system of equations can be reduced to first-order by defining extra variables for the derivatives and then solving.
Let's do an example. Given the system $\ddot{x} - 6\dot x + 9x = u$ find the equivalent first order equations. I've used the dot notation for the time derivatives for clarity.
The first step is to isolate the highest order term onto one side of the equation.
$$\ddot{x} = 6\dot x - 9x + u$$
We define two new variables:
\begin{aligned} x_1(t) &= x \\ x_2(t) &= \dot x \end{aligned}
Now we will substitute these into the original equation and solve. The solution yields a set of first-order equations in terms of these new variables. It is conventional to drop the $(t)$ for notational convenience.
We know that $\dot x_1 = x_2$ and that $\dot x_2 = \ddot{x}$. Therefore
\begin{aligned} \dot x_2 &= \ddot{x} \\ &= 6\dot x - 9x + u\\ &= 6x_2-9x_1 + u \end{aligned}
Therefore our first-order system of equations is
\begin{aligned}\dot x_1 &= x_2 \\ \dot x_2 &= 6x_2-9x_1 + u\end{aligned}
If you practice this a bit you will become adept at it. Isolate the highest term, define a new variable and its derivatives, and then substitute.
### First Order Differential Equations In State-Space Form¶
Substituting the newly defined variables from the previous section:
$$\frac{dx_1}{dt} = x_2,\, \frac{dx_2}{dt} = x_3, \, ..., \, \frac{dx_{n-1}}{dt} = x_n$$
into the first order equations yields:
$$\frac{dx_n}{dt} = \frac{1}{a_n}\sum\limits_{i=0}^{n-1}a_ix_{i+1} + \frac{1}{a_n}u$$
Using vector-matrix notation we have:
$$\begin{bmatrix}\frac{dx_1}{dt} \\ \frac{dx_2}{dt} \\ \vdots \\ \frac{dx_n}{dt}\end{bmatrix} = \begin{bmatrix}\dot x_1 \\ \dot x_2 \\ \vdots \\ \dot x_n\end{bmatrix}= \begin{bmatrix}0 & 1 & 0 &\cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -\frac{a_0}{a_n} & -\frac{a_1}{a_n} & -\frac{a_2}{a_n} & \cdots & -\frac{a_{n-1}}{a_n}\end{bmatrix} \begin{bmatrix}x_1 \\ x_2 \\ \vdots \\ x_n\end{bmatrix} + \begin{bmatrix}0 \\ 0 \\ \vdots \\ \frac{1}{a_n}\end{bmatrix}u$$
which we then write as $\dot{\mathbf x} = \mathbf{Ax} + \mathbf{B}u$.
### Finding the Fundamental Matrix for Time Invariant Systems¶
We express the system equations in state-space form with
$$\dot{\mathbf x} = \mathbf{Ax}$$
where $\mathbf A$ is the system dynamics matrix, and want to find the fundamental matrix $\mathbf F$ that propagates the state $\mathbf x$ over the interval $\Delta t$ with the equation
\begin{aligned} \mathbf x(t_k) = \mathbf F(\Delta t)\mathbf x(t_{k-1})\end{aligned}
In other words, $\mathbf A$ is a set of continuous differential equations, and we need $\mathbf F$ to be a set of discrete linear equations that computes the change in $\mathbf A$ over a discrete time step.
It is conventional to drop the $t_k$ and $(\Delta t)$ and use the notation
$$\mathbf x_k = \mathbf {Fx}_{k-1}$$
Broadly speaking there are three common ways to find this matrix for Kalman filters. The technique most often used is the matrix exponential. Linear Time Invariant Theory, also known as LTI System Theory, is a second technique. Finally, there are numerical techniques. You may know of others, but these three are what you will most likely encounter in the Kalman filter literature and praxis.
### The Matrix Exponential¶
The solution to the equation $\frac{dx}{dt} = kx$ can be found by:
$$\begin{gathered}\frac{dx}{dt} = kx \\ \frac{dx}{x} = k\, dt \\ \int \frac{1}{x}\, dx = \int k\, dt \\ \log x = kt + c \\ x = e^{kt+c} \\ x = e^ce^{kt} \\ x = c_0e^{kt}\end{gathered}$$
When $t=0$, $x=x_0$. Substitute these to equation above.
$$\begin{gathered}x_0 = c_0e^{k(0)} \\ x_0 = c_01 \\ x_0 = c_0 \\ x = x_0e^{kt}\end{gathered}$$
Using similar math, the solution to the first-order equation
$$\dot{\mathbf x} = \mathbf{Ax} ,\, \, \, \mathbf x(0) = \mathbf x_0$$
where $\mathbf A$ is a constant matrix, is
$$\mathbf x = e^{\mathbf At}\mathbf x_0$$
Substituting $F = e^{\mathbf At}$, we can write
$$\mathbf x_k = \mathbf F\mathbf x_{k-1}$$
which is the form we are looking for! We have reduced the problem of finding the fundamental matrix to one of finding the value for $e^{\mathbf At}$.
$e^{\mathbf At}$ is known as the matrix exponential. It can be computed with this power series:
$$e^{\mathbf At} = \mathbf{I} + \mathbf{A}t + \frac{(\mathbf{A}t)^2}{2!} + \frac{(\mathbf{A}t)^3}{3!} + ...$$
That series is found by doing a Taylor series expansion of $e^{\mathbf At}$, which I will not cover here.
Let's use this to find the solution to Newton's equations. Using $v$ as a substitution for $\dot x$, and assuming constant velocity we get the linear matrix-vector form
$$\begin{bmatrix}\dot x \\ \dot v\end{bmatrix} =\begin{bmatrix}0&1\\0&0\end{bmatrix} \begin{bmatrix}x \\ v\end{bmatrix}$$
This is a first order differential equation, so we can set $\mathbf{A}=\begin{bmatrix}0&1\\0&0\end{bmatrix}$ and solve the following equation. I have substituted the interval $\Delta t$ for $t$ to emphasize that the fundamental matrix is discrete:
$$\mathbf F = e^{\mathbf A\Delta t} = \mathbf{I} + \mathbf A\Delta t + \frac{(\mathbf A\Delta t)^2}{2!} + \frac{(\mathbf A\Delta t)^3}{3!} + ...$$
If you perform the multiplication you will find that $\mathbf{A}^2=\begin{bmatrix}0&0\\0&0\end{bmatrix}$, which means that all higher powers of $\mathbf{A}$ are also $\mathbf{0}$. Thus we get an exact answer without an infinite number of terms:
\begin{aligned} \mathbf F &=\mathbf{I} + \mathbf A \Delta t + \mathbf{0} \\ &= \begin{bmatrix}1&0\\0&1\end{bmatrix} + \begin{bmatrix}0&1\\0&0\end{bmatrix}\Delta t\\ &= \begin{bmatrix}1&\Delta t\\0&1\end{bmatrix} \end{aligned}
We plug this into $\mathbf x_k= \mathbf{Fx}_{k-1}$ to get
\begin{aligned} x_k &=\begin{bmatrix}1&\Delta t\\0&1\end{bmatrix}x_{k-1} \end{aligned}
You will recognize this as the matrix we derived analytically for the constant velocity Kalman filter in the Multivariate Kalman Filter chapter.
SciPy's linalg module includes a routine expm() to compute the matrix exponential. It does not use the Taylor series method, but the Padé Approximation. There are many (at least 19) methods to compute the matrix exponential, and all suffer from numerical difficulties[1]. You should be aware of the problems, especially when $\mathbf A$ is large. If you search for "pade approximation matrix exponential" you will find many publications devoted to this problem.
In practice this may not be of concern to you as for the Kalman filter we normally just take the first two terms of the Taylor series. But don't assume my treatment of the problem is complete and run off and try to use this technique for other problem without doing a numerical analysis of the performance of this technique. Interestingly, one of the favored ways of solving $e^{\mathbf At}$ is to use a generalized ode solver. In other words, they do the opposite of what we do - turn $\mathbf A$ into a set of differential equations, and then solve that set using numerical techniques!
Here is an example of using expm() to solve $e^{\mathbf At}$.
In [3]:
import numpy as np
from scipy.linalg import expm
dt = 0.1
A = np.array([[0, 1],
[0, 0]])
expm(A*dt)
Out[3]:
array([[1. , 0.1],
[0. , 1. ]])
### Time Invariance¶
If the behavior of the system depends on time we can say that a dynamic system is described by the first-order differential equation
$$g(t) = \dot x$$
However, if the system is time invariant the equation is of the form:
$$f(x) = \dot x$$
What does time invariant mean? Consider a home stereo. If you input a signal $x$ into it at time $t$, it will output some signal $f(x)$. If you instead perform the input at time $t + \Delta t$ the output signal will be the same $f(x)$, shifted in time.
A counter-example is $x(t) = \sin(t)$, with the system $f(x) = t\, x(t) = t \sin(t)$. This is not time invariant; the value will be different at different times due to the multiplication by t. An aircraft is not time invariant. If you make a control input to the aircraft at a later time its behavior will be different because it will have burned fuel and thus lost weight. Lower weight results in different behavior.
We can solve these equations by integrating each side. I demonstrated integrating the time invariant system $v = \dot x$ above. However, integrating the time invariant equation $\dot x = f(x)$ is not so straightforward. Using the separation of variables techniques we divide by $f(x)$ and move the $dt$ term to the right so we can integrate each side:
$$\begin{gathered} \frac{dx}{dt} = f(x) \\ \int^x_{x_0} \frac{1}{f(x)} dx = \int^t_{t_0} dt \end{gathered}$$
If we let $F(x) = \int \frac{1}{f(x)} dx$ we get
$$F(x) - F(x_0) = t-t_0$$
We then solve for x with
$$\begin{gathered} F(x) = t - t_0 + F(x_0) \\ x = F^{-1}[t-t_0 + F(x_0)] \end{gathered}$$
In other words, we need to find the inverse of $F$. This is not trivial, and a significant amount of coursework in a STEM education is devoted to finding tricky, analytic solutions to this problem.
However, they are tricks, and many simple forms of $f(x)$ either have no closed form solution or pose extreme difficulties. Instead, the practicing engineer turns to state-space methods to find approximate solutions.
The advantage of the matrix exponential is that we can use it for any arbitrary set of differential equations which are time invariant. However, we often use this technique even when the equations are not time invariant. As an aircraft flies it burns fuel and loses weight. However, the weight loss over one second is negligible, and so the system is nearly linear over that time step. Our answers will still be reasonably accurate so long as the time step is short.
#### Example: Mass-Spring-Damper Model¶
Suppose we wanted to track the motion of a weight on a spring and connected to a damper, such as an automobile's suspension. The equation for the motion with $m$ being the mass, $k$ the spring constant, and $c$ the damping force, under some input $u$ is
$$m\frac{d^2x}{dt^2} + c\frac{dx}{dt} +kx = u$$
For notational convenience I will write that as
$$m\ddot x + c\dot x + kx = u$$
I can turn this into a system of first order equations by setting $x_1(t)=x(t)$, and then substituting as follows:
\begin{aligned} x_1 &= x \\ x_2 &= \dot x_1 \\ \dot x_2 &= \ddot x_1 = \ddot x \end{aligned}
As is common I dropped the $(t)$ for notational convenience. This gives the equation
$$m\dot x_2 + c x_2 +kx_1 = u$$
Solving for $\dot x_2$ we get a first order equation:
$$\dot x_2 = -\frac{c}{m}x_2 - \frac{k}{m}x_1 + \frac{1}{m}u$$
We put this into matrix form:
$$\begin{bmatrix} \dot x_1 \\ \dot x_2 \end{bmatrix} = \begin{bmatrix}0 & 1 \\ -k/m & -c/m \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ 1/m \end{bmatrix}u$$
Now we use the matrix exponential to find the state transition matrix:
$$\Phi(t) = e^{\mathbf At} = \mathbf{I} + \mathbf At + \frac{(\mathbf At)^2}{2!} + \frac{(\mathbf At)^3}{3!} + ...$$
The first two terms give us
$$\mathbf F = \begin{bmatrix}1 & t \\ -(k/m) t & 1-(c/m) t \end{bmatrix}$$
This may or may not give you enough precision. You can easily check this by computing $\frac{(\mathbf At)^2}{2!}$ for your constants and seeing how much this matrix contributes to the results.
### Linear Time Invariant Theory¶
Linear Time Invariant Theory, also known as LTI System Theory, gives us a way to find $\Phi$ using the inverse Laplace transform. You are either nodding your head now, or completely lost. I will not be using the Laplace transform in this book. LTI system theory tells us that
$$\Phi(t) = \mathcal{L}^{-1}[(s\mathbf{I} - \mathbf{A})^{-1}]$$
I have no intention of going into this other than to say that the Laplace transform $\mathcal{L}$ converts a signal into a space $s$ that excludes time, but finding a solution to the equation above is non-trivial. If you are interested, the Wikipedia article on LTI system theory provides an introduction. I mention LTI because you will find some literature using it to design the Kalman filter matrices for difficult problems.
### Numerical Solutions¶
Finally, there are numerical techniques to find $\mathbf F$. As filters get larger finding analytical solutions becomes very tedious (though packages like SymPy make it easier). C. F. van Loan [2] has developed a technique that finds both $\Phi$ and $\mathbf Q$ numerically. Given the continuous model
$$\dot x = Ax + Gw$$
where $w$ is the unity white noise, van Loan's method computes both $\mathbf F_k$ and $\mathbf Q_k$.
I have implemented van Loan's method in FilterPy. You may use it as follows:
from filterpy.common import van_loan_discretization
A = np.array([[0., 1.], [-1., 0.]])
G = np.array([[0.], [2.]]) # white noise scaling
F, Q = van_loan_discretization(A, G, dt=0.1)
In the section Numeric Integration of Differential Equations I present alternative methods which are very commonly used in Kalman filtering.
## Design of the Process Noise Matrix¶
In general the design of the $\mathbf Q$ matrix is among the most difficult aspects of Kalman filter design. This is due to several factors. First, the math requires a good foundation in signal theory. Second, we are trying to model the noise in something for which we have little information. Consider trying to model the process noise for a thrown baseball. We can model it as a sphere moving through the air, but that leaves many unknown factors - ball rotation and spin decay, the coefficient of drag of a ball with stitches, the effects of wind and air density, and so on. We develop the equations for an exact mathematical solution for a given process model, but since the process model is incomplete the result for $\mathbf Q$ will also be incomplete. This has a lot of ramifications for the behavior of the Kalman filter. If $\mathbf Q$ is too small then the filter will be overconfident in its prediction model and will diverge from the actual solution. If $\mathbf Q$ is too large than the filter will be unduly influenced by the noise in the measurements and perform sub-optimally. In practice we spend a lot of time running simulations and evaluating collected data to try to select an appropriate value for $\mathbf Q$. But let's start by looking at the math.
Let's assume a kinematic system - some system that can be modeled using Newton's equations of motion. We can make a few different assumptions about this process.
We have been using a process model of
$$\dot{\mathbf x} = \mathbf{Ax} + \mathbf{Bu} + \mathbf{w}$$
where $\mathbf{w}$ is the process noise. Kinematic systems are continuous - their inputs and outputs can vary at any arbitrary point in time. However, our Kalman filters are discrete (there are continuous forms for Kalman filters, but we do not cover them in this book). We sample the system at regular intervals. Therefore we must find the discrete representation for the noise term in the equation above. This depends on what assumptions we make about the behavior of the noise. We will consider two different models for the noise.
### Continuous White Noise Model¶
We model kinematic systems using Newton's equations. We have either used position and velocity, or position, velocity, and acceleration as the models for our systems. There is nothing stopping us from going further - we can model jerk, jounce, snap, and so on. We don't do that normally because adding terms beyond the dynamics of the real system degrades the estimate.
Let's say that we need to model the position, velocity, and acceleration. We can then assume that acceleration is constant for each discrete time step. Of course, there is process noise in the system and so the acceleration is not actually constant. The tracked object will alter the acceleration over time due to external, unmodeled forces. In this section we will assume that the acceleration changes by a continuous time zero-mean white noise $w(t)$. In other words, we are assuming that the small changes in velocity average to 0 over time (zero-mean).
Since the noise is changing continuously we will need to integrate to get the discrete noise for the discretization interval that we have chosen. We will not prove it here, but the equation for the discretization of the noise is
$$\mathbf Q = \int_0^{\Delta t} \mathbf F(t)\mathbf{Q_c}\mathbf F^\mathsf{T}(t) dt$$
where $\mathbf{Q_c}$ is the continuous noise. The general reasoning should be clear. $\mathbf F(t)\mathbf{Q_c}\mathbf F^\mathsf{T}(t)$ is a projection of the continuous noise based on our process model $\mathbf F(t)$ at the instant $t$. We want to know how much noise is added to the system over a discrete intervat $\Delta t$, so we integrate this expression over the interval $[0, \Delta t]$.
We know the fundamental matrix for Newtonian systems is
$$F = \begin{bmatrix}1 & \Delta t & {\Delta t}^2/2 \\ 0 & 1 & \Delta t\\ 0& 0& 1\end{bmatrix}$$
We define the continuous noise as
$$\mathbf{Q_c} = \begin{bmatrix}0&0&0\\0&0&0\\0&0&1\end{bmatrix} \Phi_s$$
where $\Phi_s$ is the spectral density of the white noise. This can be derived, but is beyond the scope of this book. See any standard text on stochastic processes for the details. In practice we often do not know the spectral density of the noise, and so this turns into an "engineering" factor - a number we experimentally tune until our filter performs as we expect. You can see that the matrix that $\Phi_s$ is multiplied by effectively assigns the power spectral density to the acceleration term. This makes sense; we assume that the system has constant acceleration except for the variations caused by noise. The noise alters the acceleration.
We could carry out these computations ourselves, but I prefer using SymPy to solve the equation.
$$\mathbf{Q_c} = \begin{bmatrix}0&0&0\\0&0&0\\0&0&1\end{bmatrix} \Phi_s$$
In [4]:
import sympy
from sympy import (init_printing, Matrix, MatMul,
integrate, symbols)
init_printing(use_latex='mathjax')
dt, phi = symbols('\Delta{t} \Phi_s')
F_k = Matrix([[1, dt, dt**2/2],
[0, 1, dt],
[0, 0, 1]])
Q_c = Matrix([[0, 0, 0],
[0, 0, 0],
[0, 0, 1]])*phi
Q = integrate(F_k * Q_c * F_k.T, (dt, 0, dt))
# factor phi out of the matrix to make it more readable
Q = Q / phi
MatMul(Q, phi)
Out[4]:
$\displaystyle \left[\begin{matrix}\frac{\Delta{t}^{5}}{20} & \frac{\Delta{t}^{4}}{8} & \frac{\Delta{t}^{3}}{6}\\\frac{\Delta{t}^{4}}{8} & \frac{\Delta{t}^{3}}{3} & \frac{\Delta{t}^{2}}{2}\\\frac{\Delta{t}^{3}}{6} & \frac{\Delta{t}^{2}}{2} & \Delta{t}\end{matrix}\right] \Phi_{s}$
For completeness, let us compute the equations for the 0th order and 1st order equations.
In [5]:
F_k = Matrix([[1]])
Q_c = Matrix([[phi]])
print('0th order discrete process noise')
integrate(F_k*Q_c*F_k.T,(dt, 0, dt))
0th order discrete process noise
Out[5]:
$\displaystyle \left[\begin{matrix}\Delta{t} \Phi_{s}\end{matrix}\right]$
In [6]:
F_k = Matrix([[1, dt],
[0, 1]])
Q_c = Matrix([[0, 0],
[0, 1]]) * phi
Q = integrate(F_k * Q_c * F_k.T, (dt, 0, dt))
print('1st order discrete process noise')
# factor phi out of the matrix to make it more readable
Q = Q / phi
MatMul(Q, phi)
1st order discrete process noise
Out[6]:
$\displaystyle \left[\begin{matrix}\frac{\Delta{t}^{3}}{3} & \frac{\Delta{t}^{2}}{2}\\\frac{\Delta{t}^{2}}{2} & \Delta{t}\end{matrix}\right] \Phi_{s}$
### Piecewise White Noise Model¶
Another model for the noise assumes that the that highest order term (say, acceleration) is constant for the duration of each time period, but differs for each time period, and each of these is uncorrelated between time periods. In other words there is a discontinuous jump in acceleration at each time step. This is subtly different than the model above, where we assumed that the last term had a continuously varying noisy signal applied to it.
We will model this as
$$f(x)=Fx+\Gamma w$$
where $\Gamma$ is the noise gain of the system, and $w$ is the constant piecewise acceleration (or velocity, or jerk, etc).
Let's start by looking at a first order system. In this case we have the state transition function
$$\mathbf{F} = \begin{bmatrix}1&\Delta t \\ 0& 1\end{bmatrix}$$
In one time period, the change in velocity will be $w(t)\Delta t$, and the change in position will be $w(t)\Delta t^2/2$, giving us
$$\Gamma = \begin{bmatrix}\frac{1}{2}\Delta t^2 \\ \Delta t\end{bmatrix}$$
The covariance of the process noise is then
$$Q = \mathbb E[\Gamma w(t) w(t) \Gamma^\mathsf{T}] = \Gamma\sigma^2_v\Gamma^\mathsf{T}$$
.
We can compute that with SymPy as follows
In [7]:
var = symbols('sigma^2_v')
v = Matrix([[dt**2 / 2], [dt]])
Q = v * var * v.T
# factor variance out of the matrix to make it more readable
Q = Q / var
MatMul(Q, var)
Out[7]:
$\displaystyle \left[\begin{matrix}\frac{\Delta{t}^{4}}{4} & \frac{\Delta{t}^{3}}{2}\\\frac{\Delta{t}^{3}}{2} & \Delta{t}^{2}\end{matrix}\right] \sigma^{2}_{v}$
The second order system proceeds with the same math.
$$\mathbf{F} = \begin{bmatrix}1 & \Delta t & {\Delta t}^2/2 \\ 0 & 1 & \Delta t\\ 0& 0& 1\end{bmatrix}$$
Here we will assume that the white noise is a discrete time Wiener process. This gives us
$$\Gamma = \begin{bmatrix}\frac{1}{2}\Delta t^2 \\ \Delta t\\ 1\end{bmatrix}$$
There is no 'truth' to this model, it is just convenient and provides good results. For example, we could assume that the noise is applied to the jerk at the cost of a more complicated equation.
The covariance of the process noise is then
$$Q = \mathbb E[\Gamma w(t) w(t) \Gamma^\mathsf{T}] = \Gamma\sigma^2_v\Gamma^\mathsf{T}$$
.
We can compute that with SymPy as follows
In [8]:
var = symbols('sigma^2_v')
v = Matrix([[dt**2 / 2], [dt], [1]])
Q = v * var * v.T
# factor variance out of the matrix to make it more readable
Q = Q / var
MatMul(Q, var)
Out[8]:
$\displaystyle \left[\begin{matrix}\frac{\Delta{t}^{4}}{4} & \frac{\Delta{t}^{3}}{2} & \frac{\Delta{t}^{2}}{2}\\\frac{\Delta{t}^{3}}{2} & \Delta{t}^{2} & \Delta{t}\\\frac{\Delta{t}^{2}}{2} & \Delta{t} & 1\end{matrix}\right] \sigma^{2}_{v}$
We cannot say that this model is more or less correct than the continuous model - both are approximations to what is happening to the actual object. Only experience and experiments can guide you to the appropriate model. In practice you will usually find that either model provides reasonable results, but typically one will perform better than the other.
The advantage of the second model is that we can model the noise in terms of $\sigma^2$ which we can describe in terms of the motion and the amount of error we expect. The first model requires us to specify the spectral density, which is not very intuitive, but it handles varying time samples much more easily since the noise is integrated across the time period. However, these are not fixed rules - use whichever model (or a model of your own devising) based on testing how the filter performs and/or your knowledge of the behavior of the physical model.
A good rule of thumb is to set $\sigma$ somewhere from $\frac{1}{2}\Delta a$ to $\Delta a$, where $\Delta a$ is the maximum amount that the acceleration will change between sample periods. In practice we pick a number, run simulations on data, and choose a value that works well.
### Using FilterPy to Compute Q¶
FilterPy offers several routines to compute the $\mathbf Q$ matrix. The function Q_continuous_white_noise() computes $\mathbf Q$ for a given value for $\Delta t$ and the spectral density.
In [9]:
from filterpy.common import Q_continuous_white_noise
from filterpy.common import Q_discrete_white_noise
Q = Q_continuous_white_noise(dim=2, dt=1, spectral_density=1)
print(Q)
[[0.333 0.5 ]
[0.5 1. ]]
In [10]:
Q = Q_continuous_white_noise(dim=3, dt=1, spectral_density=1)
print(Q)
[[0.05 0.125 0.167]
[0.125 0.333 0.5 ]
[0.167 0.5 1. ]]
The function Q_discrete_white_noise() computes $\mathbf Q$ assuming a piecewise model for the noise.
In [11]:
Q = Q_discrete_white_noise(2, var=1.)
print(Q)
[[0.25 0.5 ]
[0.5 1. ]]
In [12]:
Q = Q_discrete_white_noise(3, var=1.)
print(Q)
[[0.25 0.5 0.5 ]
[0.5 1. 1. ]
[0.5 1. 1. ]]
### Simplification of Q¶
Many treatments use a much simpler form for $\mathbf Q$, setting it to zero except for a noise term in the lower rightmost element. Is this justified? Well, consider the value of $\mathbf Q$ for a small $\Delta t$
In [13]:
import numpy as np
np.set_printoptions(precision=8)
Q = Q_continuous_white_noise(
dim=3, dt=0.05, spectral_density=1)
print(Q)
np.set_printoptions(precision=3)
[[0.00000002 0.00000078 0.00002083]
[0.00000078 0.00004167 0.00125 ]
[0.00002083 0.00125 0.05 ]]
We can see that most of the terms are very small. Recall that the only equation using this matrix is
$$\mathbf P=\mathbf{FPF}^\mathsf{T} + \mathbf Q$$
If the values for $\mathbf Q$ are small relative to $\mathbf P$ then it will be contributing almost nothing to the computation of $\mathbf P$. Setting $\mathbf Q$ to the zero matrix except for the lower right term
$$\mathbf Q=\begin{bmatrix}0&0&0\\0&0&0\\0&0&\sigma^2\end{bmatrix}$$
while not correct, is often a useful approximation. If you do this for an important application you will have to perform quite a few studies to guarantee that your filter works in a variety of situations.
If you do this, 'lower right term' means the most rapidly changing term for each variable. If the state is $x=\begin{bmatrix}x & \dot x & \ddot{x} & y & \dot{y} & \ddot{y}\end{bmatrix}^\mathsf{T}$ Then $\mathbf Q$ will be 6x6; the elements for both $\ddot{x}$ and $\ddot{y}$ will have to be set to non-zero in $\mathbf Q$.
## Stable Compution of the Posterior Covariance¶
I've presented the equation to compute the posterior covariance as
$$\mathbf P = (\mathbf I - \mathbf{KH})\mathbf{\bar P}$$
and while strictly speaking this is correct, this is not how I compute it in FilterPy, where I use the Joseph equation
$$\mathbf P = (\mathbf I-\mathbf {KH})\mathbf{\bar P}(\mathbf I-\mathbf{KH})^\mathsf T + \mathbf{KRK}^\mathsf T$$
I frequently get emails and/or GitHub issues raised, claiming the implementation is a bug. It is not a bug, and I use it for several reasons. First, the subtraction $(\mathbf I - \mathbf{KH})$ can lead to nonsymmetric matrix results due to floating point errors. Covariances must be symmetric, and so becoming nonsymmetric usually leads to the Kalman filter diverging, or even for the code to raise an exception because of the checks built into NumPy.
A traditional way to preserve symmetry is the following formula:
$$\mathbf P = (\mathbf P + \mathbf P^\mathsf T) / 2$$
This is safe because $\sigma_{ij} = \sigma_{ji}$ for all covariances in the matrix. Hence this operation averages the error between the differences of the two values if they have diverged due to floating point errors.
If you look at the Joseph form for the equation above, you'll see there is a similar $\mathbf{ABA}^\mathsf T$ pattern in both terms. So they both preserve symmetry. But where did this equation come from, and why do I use it instead of
$$\mathbf P = (\mathbf I - \mathbf{KH})\mathbf{\bar P} \\ \mathbf P = (\mathbf P + \mathbf P^\mathsf T) / 2$$
Let's just derive the equation from first principles. It's not too bad, and you need to understand the derivation to understand the purpose of the equation, and, more importantly, diagnose issues if your filter diverges due to numerical instability. This derivation comes from Brown[4].
First, some symbology. $\mathbf x$ is the true state of our system. $\mathbf{\hat x}$ is the estimated state of our system - the posterior. And $\mathbf{\bar x}$ is the estimated prior of the system.
Given that, we can define our model to be
$$\mathbf x_{k+1} = \mathbf F_k \mathbf x_k + \mathbf w_k \\ \mathbf z_k = \mathbf H_k \mathbf x_k + \mathbf v_k$$
In words, the next state $\mathbf x_{k+1}$ of the system is the current state $k$ moved by some process $\mathbf F_k$ plus some noise $\mathbf w_k$.
Note that these are definitions. No system perfectly follows a mathematical model, so we model that with the noise term $\mathbf w_k$. And no measurement is perfect due to sensor error, so we model that with $\mathbf v_k$
I'll dispense with the subscript $k$ since in the remainder of the derivation we will only consider values at step $k$, never step $k+1$.
Now we define the estimation error as the difference between the true state and the estimated state
$$\mathbf e = \mathbf x - \mathbf{\hat x}$$
Again, this is a definition; we don't know how to compute $\mathbf e$, it is just the defined difference between the true and estimated state.
This allows us to define the covariance of our estimate, which is defined as the expected value of $\mathbf{ee}^\mathsf T$:
\begin{aligned} P &= E[\mathbf{ee}^\mathsf T] \\ &= E[(\mathbf x - \mathbf{\hat x})(\mathbf x - \mathbf{\hat x})^\mathsf T] \end{aligned}
Next, we define the posterior estimate as
$$\mathbf {\hat x} = \mathbf{\bar x} + \mathbf K(\mathbf z - \mathbf{H \bar x})$$
That looks like the equation from the Kalman filter, and for good reason. But as with the rest of the math so far, this is a definition. In particular, we have not defined $\mathbf K$, and you shouldn't think of it as the Kalman gain, because we are solving this for any problem, not just for linear Kalman filters. Here, $\mathbf K$ is just some unspecified blending value between 0 and 1.
Now we have our definitions, let's perform some substitution and algebra.
The term $(\mathbf x - \mathbf{\hat x})$ can be expanded by replacing $\mathbf{\hat x}$ with the definition above, yielding
$$(\mathbf x - \mathbf{\hat x}) = \mathbf x - (\mathbf{\bar x} + \mathbf K(\mathbf z - \mathbf{H \bar x}))$$
Now we replace $\mathbf z$ with $\mathbf H \mathbf x + \mathbf v$:
\begin{aligned} (\mathbf x - \mathbf{\hat x}) &= \mathbf x - (\mathbf{\bar x} + \mathbf K(\mathbf z - \mathbf{H \bar x})) \\ &= \mathbf x - (\mathbf{\bar x} + \mathbf K(\mathbf H \mathbf x + \mathbf v - \mathbf{H \bar x})) \\ &= (\mathbf x - \mathbf{\bar x}) - \mathbf K(\mathbf H \mathbf x + \mathbf v - \mathbf{H \bar x}) \\ &= (\mathbf x - \mathbf{\bar x}) - \mathbf{KH}(\mathbf x - \mathbf{ \bar x}) - \mathbf{Kv} \\ &= (\mathbf I - \mathbf{KH})(\mathbf x - \mathbf{\bar x}) - \mathbf{Kv} \end{aligned}
Now we can solve for $\mathbf P$ if we note that the expected value of $(\mathbf x - \mathbf{\bar x})$ is the prior covariance $\mathbf{\bar P}$, and that the expected value of $\mathbf v$ is $E[\mathbf{vv}^\mathbf T] = \mathbf R$:
\begin{aligned} \mathbf P &= E\big[[(\mathbf I - \mathbf{KH})(\mathbf x - \mathbf{\bar x}) - \mathbf{Kv})] [(\mathbf I - \mathbf{KH})(\mathbf x - \mathbf{\bar x}) - \mathbf{Kv}]^\mathsf T\big ] \\ &= (\mathbf I - \mathbf{KH})\mathbf{\bar P}(\mathbf I - \mathbf{KH})^\mathsf T + \mathbf{KRK}^\mathsf T \end{aligned}
which is what we came here to prove.
Note that this equation is valid for any $\mathbf K$, not just the optimal $\mathbf K$ computed by the Kalman filter. And that is why I use this equation. In practice the Kalman gain computed by the filter is not the optimal value both because the real world is never truly linear and Gaussian, and because of floating point errors induced by computation. This equation is far less likely to cause the Kalman filter to diverge in the face of real world conditions.
Where did $\mathbf P = (\mathbf I - \mathbf{KH})\mathbf{\bar P}$ come from, then? Let's finish the derivation, which is simple. Recall that the Kalman filter (optimal) gain is given by
$$\mathbf K = \mathbf{\bar P H^\mathsf T}(\mathbf{H \bar P H}^\mathsf T + \mathbf R)^{-1}$$
Now we substitute this into the equation we just derived:
\begin{aligned} &= (\mathbf I - \mathbf{KH})\mathbf{\bar P}(\mathbf I - \mathbf{KH})^\mathsf T + \mathbf{KRK}^\mathsf T\\ &= \mathbf{\bar P} - \mathbf{KH}\mathbf{\bar P} - \mathbf{\bar PH}^\mathsf T\mathbf{K}^\mathsf T + \mathbf K(\mathbf{H \bar P H}^\mathsf T + \mathbf R)\mathbf K^\mathsf T \\ &= \mathbf{\bar P} - \mathbf{KH}\mathbf{\bar P} - \mathbf{\bar PH}^\mathsf T\mathbf{K}^\mathsf T + \mathbf{\bar P H^\mathsf T}(\mathbf{H \bar P H}^\mathsf T + \mathbf R)^{-1}(\mathbf{H \bar P H}^\mathsf T + \mathbf R)\mathbf K^\mathsf T\\ &= \mathbf{\bar P} - \mathbf{KH}\mathbf{\bar P} - \mathbf{\bar PH}^\mathsf T\mathbf{K}^\mathsf T + \mathbf{\bar P H^\mathsf T}\mathbf K^\mathsf T\\ &= \mathbf{\bar P} - \mathbf{KH}\mathbf{\bar P}\\ &= (\mathbf I - \mathbf{KH})\mathbf{\bar P} \end{aligned}
Therefore $\mathbf P = (\mathbf I - \mathbf{KH})\mathbf{\bar P}$ is mathematically correct when the gain is optimal, but so is $(\mathbf I - \mathbf{KH})\mathbf{\bar P}(\mathbf I - \mathbf{KH})^\mathsf T + \mathbf{KRK}^\mathsf T$. As we already discussed the latter is also correct when the gain is suboptimal, and it is also more numerically stable. Therefore I use this computation in FilterPy.
It is quite possible that your filter still diverges, especially if it runs for hundreds or thousands of epochs. You will need to examine these equations. The literature provides yet other forms of this computation which may be more applicable to your problem. As always, if you are solving real engineering problems where failure could mean loss of equipment or life, you will need to move past this book and into the engineering literature. If you are working with 'toy' problems where failure is not damaging, if you detect divergence you can just reset the value of $\mathbf P$ to some 'reasonable' value and keep on going. For example, you could zero out the non diagonal elements so the matrix only contains variances, and then maybe multiply by a constant somewhat larger than one to reflect the loss of information you just injected into the filter. Use your imagination, and test.
## Deriving the Kalman Gain Equation¶
If you read the last section, you might as well read this one. With this we will have derived the Kalman filter equations.
Note that this derivation is not using Bayes equations. I've seen at least four different ways to derive the Kalman filter equations; this derivation is typical to the literature, and follows from the last section. The source is again Brown [4].
In the last section we used an unspecified scaling factor $\mathbf K$ to derive the Joseph form of the covariance equation. If we want an optimal filter, we need to use calculus to minimize the errors in the equations. You should be familiar with this idea. If you want to find the minimum value of a function $f(x)$, you take the derivative and set it equal to zero: $\frac{x}{dx}f(x) = 0$.
In our problem the error is expressed by the covariance matrix $\mathbf P$. In particular, the diagonal expresses the error (variance) of each element in the state vector. So, to find the optimal gain we want to take the derivative of the trace (sum) of the diagonal.
Brown reminds us of two formulas involving the derivative of traces:
$$\frac{d\, trace(\mathbf{AB})}{d\mathbf A} = \mathbf B^\mathsf T$$$$\frac{d\, trace(\mathbf{ACA}^\mathsf T)}{d\mathbf A} = 2\mathbf{AC}$$
where $\mathbf{AB}$ is square and $\mathbf C$ is symmetric.
We expand out the Joseph equation to:
$$\mathbf P = \mathbf{\bar P} - \mathbf{KH}\mathbf{\bar P} - \mathbf{\bar P}\mathbf H^\mathsf T \mathbf K^\mathsf T + \mathbf K(\mathbf H \mathbf{\bar P}\mathbf H^\mathsf T + \mathbf R)\mathbf K^\mathsf T$$
Now we need to the the derivative of the trace of $\mathbf P$ with respect to $\mathbf K$: $\frac{d\, trace(\mathbf P)}{d\mathbf K}$.
The derivative of the trace the first term with respect to $\mathbf K$ is $0$, since it does not have $\mathbf K$ in the expression.
The derivative of the trace of the second term is $(\mathbf H\mathbf{\bar P})^\mathsf T$.
We can find the derivative of the trace of the third term by noticing that $\mathbf{\bar P}\mathbf H^\mathsf T \mathbf K^\mathsf T$ is the transpose of $\mathbf{KH}\mathbf{\bar P}$. The trace of a matrix is equal to the trace of it's transpose, so it's derivative will be same as the second term.
Finally, the derivative of the trace of the fourth term is $2\mathbf K(\mathbf H \mathbf{\bar P}\mathbf H^\mathsf T + \mathbf R)$.
This gives us the final value of
$$\frac{d\, trace(\mathbf P)}{d\mathbf K} = -2(\mathbf H\mathbf{\bar P})^\mathsf T + 2\mathbf K(\mathbf H \mathbf{\bar P}\mathbf H^\mathsf T + \mathbf R)$$
We set this to zero and solve to find the equation for $\mathbf K$ which minimizes the error:
$$-2(\mathbf H\mathbf{\bar P})^\mathsf T + 2\mathbf K(\mathbf H \mathbf{\bar P}\mathbf H^\mathsf T + \mathbf R) = 0 \\ \mathbf K(\mathbf H \mathbf{\bar P}\mathbf H^\mathsf T + \mathbf R) = (\mathbf H\mathbf{\bar P})^\mathsf T \\ \mathbf K(\mathbf H \mathbf{\bar P}\mathbf H^\mathsf T + \mathbf R) = \mathbf{\bar P}\mathbf H^\mathsf T \\ \mathbf K= \mathbf{\bar P}\mathbf H^\mathsf T (\mathbf H \mathbf{\bar P}\mathbf H^\mathsf T + \mathbf R)^{-1}$$
This derivation is not quite iron clad as I left out an argument about why minimizing the trace minimizes the total error, but I think it suffices for this book. Any of the standard texts will go into greater detail if you need it.
## Numeric Integration of Differential Equations¶
We've been exposed to several numerical techniques to solve linear differential equations. These include state-space methods, the Laplace transform, and van Loan's method.
These work well for linear ordinary differential equations (ODEs), but do not work well for nonlinear equations. For example, consider trying to predict the position of a rapidly turning car. Cars maneuver by turning the front wheels. This makes them pivot around their rear axle as it moves forward. Therefore the path will be continuously varying and a linear prediction will necessarily produce an incorrect value. If the change in the system is small enough relative to $\Delta t$ this can often produce adequate results, but that will rarely be the case with the nonlinear Kalman filters we will be studying in subsequent chapters.
For these reasons we need to know how to numerically integrate ODEs. This can be a vast topic that requires several books. However, I will cover a few simple techniques which will work for a majority of the problems you encounter.
### Euler's Method¶
Let's say we have the initial condition problem of
$$\begin{gathered} y' = y, \\ y(0) = 1 \end{gathered}$$
We happen to know the exact answer is $y=e^t$ because we solved it earlier, but for an arbitrary ODE we will not know the exact solution. In general all we know is the derivative of the equation, which is equal to the slope. We also know the initial value: at $t=0$, $y=1$. If we know these two pieces of information we can predict the value at $y(t=1)$ using the slope at $t=0$ and the value of $y(0)$. I've plotted this below.
In [14]:
import matplotlib.pyplot as plt
t = np.linspace(-1, 1, 10)
plt.plot(t, np.exp(t))
t = np.linspace(-1, 1, 2)
plt.plot(t,t+1, ls='--', c='k');
You can see that the slope is very close to the curve at $t=0.1$, but far from it at $t=1$. But let's continue with a step size of 1 for a moment. We can see that at $t=1$ the estimated value of $y$ is 2. Now we can compute the value at $t=2$ by taking the slope of the curve at $t=1$ and adding it to our initial estimate. The slope is computed with $y'=y$, so the slope is 2.
In [15]:
import kf_book.book_plots as book_plots
t = np.linspace(-1, 2, 20)
plt.plot(t, np.exp(t))
t = np.linspace(0, 1, 2)
plt.plot([1, 2, 4], ls='--', c='k')
book_plots.set_labels(x='x', y='y');
Here we see the next estimate for y is 4. The errors are getting large quickly, and you might be unimpressed. But 1 is a very large step size. Let's put this algorithm in code, and verify that it works by using a small step size.
In [16]:
def euler(t, tmax, y, dx, step=1.):
ys = []
while t < tmax:
y = y + step*dx(t, y)
ys.append(y)
t +=step
return ys
In [17]:
def dx(t, y): return y
print(euler(0, 1, 1, dx, step=1.)[-1])
print(euler(0, 2, 1, dx, step=1.)[-1])
2.0
4.0
This looks correct. So now let's plot the result of a much smaller step size.
In [18]:
ys = euler(0, 4, 1, dx, step=0.00001)
plt.subplot(1,2,1)
plt.title('Computed')
plt.plot(np.linspace(0, 4, len(ys)),ys)
plt.subplot(1,2,2)
t = np.linspace(0, 4, 20)
plt.title('Exact')
plt.plot(t, np.exp(t));
In [19]:
print('exact answer=', np.exp(4))
print('difference =', np.exp(4) - ys[-1])
print('iterations =', len(ys))
exact answer= 54.598150033144236
difference = 0.0010919448029866885
iterations = 400000
Here we see that the error is reasonably small, but it took a very large number of iterations to get three digits of precision. In practice Euler's method is too slow for most problems, and we use more sophisticated methods.
Before we go on, let's formally derive Euler's method, as it is the basis for the more advanced Runge Kutta methods used in the next section. In fact, Euler's method is the simplest form of Runge Kutta.
Here are the first 3 terms of the Taylor expansion of $y$. An infinite expansion would give an exact answer, so $O(h^4)$ denotes the error due to the finite expansion.
$$y(t_0 + h) = y(t_0) + h y'(t_0) + \frac{1}{2!}h^2 y''(t_0) + \frac{1}{3!}h^3 y'''(t_0) + O(h^4)$$
Here we can see that Euler's method is using the first two terms of the Taylor expansion. Each subsequent term is smaller than the previous terms, so we are assured that the estimate will not be too far off from the correct value.
### Runge Kutta Methods¶
Runge Kutta is the workhorse of numerical integration. There are a vast number of methods in the literature. In practice, using the Runge Kutta algorithm that I present here will solve most any problem you will face. It offers a very good balance of speed, precision, and stability, and it is the 'go to' numerical integration method unless you have a very good reason to choose something different.
$$\ddot{y} = \frac{d}{dt}\dot{y}$$
.
We can substitute the derivative of y with a function f, like so
$$\ddot{y} = \frac{d}{dt}f(y,t)$$
.
Deriving these equations is outside the scope of this book, but the Runge Kutta RK4 method is defined with these equations.
$$y(t+\Delta t) = y(t) + \frac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4) + O(\Delta t^4)$$\begin{aligned} k_1 &= f(y,t)\Delta t \\ k_2 &= f(y+\frac{1}{2}k_1, t+\frac{1}{2}\Delta t)\Delta t \\ k_3 &= f(y+\frac{1}{2}k_2, t+\frac{1}{2}\Delta t)\Delta t \\ k_4 &= f(y+k_3, t+\Delta t)\Delta t \end{aligned}
Here is the corresponding code:
In [20]:
def runge_kutta4(y, x, dx, f):
"""computes 4th order Runge-Kutta for dy/dx.
y is the initial value for y
x is the initial value for x
dx is the difference in x (e.g. the time step)
f is a callable function (y, x) that you supply
to compute dy/dx for the specified values.
"""
k1 = dx * f(y, x)
k2 = dx * f(y + 0.5*k1, x + 0.5*dx)
k3 = dx * f(y + 0.5*k2, x + 0.5*dx)
k4 = dx * f(y + k3, x + dx)
return y + (k1 + 2*k2 + 2*k3 + k4) / 6.
Let's use this for a simple example. Let
$$\dot{y} = t\sqrt{y(t)}$$
with the initial values
\begin{aligned}t_0 &= 0\\y_0 &= y(t_0) = 1\end{aligned}
In [21]:
import math
import numpy as np
t = 0.
y = 1.
dt = .1
ys, ts = [], []
def func(y,t):
return t*math.sqrt(y)
while t <= 10:
y = runge_kutta4(y, t, dt, func)
t += dt
ys.append(y)
ts.append(t)
exact = [(t**2 + 4)**2 / 16. for t in ts]
plt.plot(ts, ys)
plt.plot(ts, exact)
error = np.array(exact) - np.array(ys)
print(f"max error {max(error):.5f}")
max error 0.00005
## Bayesian Filtering¶
Starting in the Discrete Bayes chapter I used a Bayesian formulation for filtering. Suppose we are tracking an object. We define its state at a specific time as its position, velocity, and so on. For example, we might write the state at time $t$ as $\mathbf x_t = \begin{bmatrix}x_t &\dot x_t \end{bmatrix}^\mathsf T$.
When we take a measurement of the object we are measuring the state or part of it. Sensors are noisy, so the measurement is corrupted with noise. Clearly though, the measurement is determined by the state. That is, a change in state may change the measurement, but a change in measurement will not change the state.
In filtering our goal is to compute an optimal estimate for a set of states $\mathbf x_{0:t}$ from time 0 to time $t$. If we knew $\mathbf x_{0:t}$ then it would be trivial to compute a set of measurements $\mathbf z_{0:t}$ corresponding to those states. However, we receive a set of measurements $\mathbf z_{0:t}$, and want to compute the corresponding states $\mathbf x_{0:t}$. This is called statistical inversion because we are trying to compute the input from the output.
Inversion is a difficult problem because there is typically no unique solution. For a given set of states $\mathbf x_{0:t}$ there is only one possible set of measurements (plus noise), but for a given set of measurements there are many different sets of states that could have led to those measurements.
Recall Bayes Theorem:
$$P(x \mid z) = \frac{P(z \mid x)P(x)}{P(z)}$$
where $P(z \mid x)$ is the likelihood of the measurement $z$, $P(x)$ is the prior based on our process model, and $P(z)$ is a normalization constant. $P(x \mid z)$ is the posterior, or the distribution after incorporating the measurement $z$, also called the evidence.
This is a statistical inversion as it goes from $P(z \mid x)$ to $P(x \mid z)$. The solution to our filtering problem can be expressed as:
$$P(\mathbf x_{0:t} \mid \mathbf z_{0:t}) = \frac{P(\mathbf z_{0:t} \mid \mathbf x_{0:t})P(\mathbf x_{0:t})}{P(\mathbf z_{0:t})}$$
That is all well and good until the next measurement $\mathbf z_{t+1}$ comes in, at which point we need to recompute the entire expression for the range $0:t+1$.
In practice this is intractable because we are trying to compute the posterior distribution $P(\mathbf x_{0:t} \mid \mathbf z_{0:t})$ for the state over the full range of time steps. But do we really care about the probability distribution at the third step (say) when we just received the tenth measurement? Not usually. So we relax our requirements and only compute the distributions for the current time step.
The first simplification is we describe our process (e.g., the motion model for a moving object) as a Markov chain. That is, we say that the current state is solely dependent on the previous state and a transition probability $P(\mathbf x_k \mid \mathbf x_{k-1})$, which is just the probability of going from the last state to the current one. We write:
$$\mathbf x_k \sim P(\mathbf x_k \mid \mathbf x_{k-1})$$
In practice this is extremely reasonable, as many things have the Markov property. If you are driving in a parking lot, does your position in the next second depend on whether you pulled off the interstate or were creeping along on a dirt road one minute ago? No. Your position in the next second depends solely on your current position, speed, and control inputs, not on what happened a minute ago. Thus, cars have the Markov property, and we can make this simplification with no loss of precision or generality.
The next simplification we make is do define the measurement model as depending on the current state $\mathbf x_k$ with the conditional probability of the measurement given the current state: $P(\mathbf z_k \mid \mathbf x_k)$. We write:
$$\mathbf z_k \sim P(\mathbf z_k \mid \mathbf x_k)$$
We have a recurrence now, so we need an initial condition to terminate it. Therefore we say that the initial distribution is the probablity of the state $\mathbf x_0$:
$$\mathbf x_0 \sim P(\mathbf x_0)$$
These terms are plugged into Bayes equation. If we have the state $\mathbf x_0$ and the first measurement we can estimate $P(\mathbf x_1 | \mathbf z_1)$. The motion model creates the prior $P(\mathbf x_2 \mid \mathbf x_1)$. We feed this back into Bayes theorem to compute $P(\mathbf x_2 | \mathbf z_2)$. We continue this predictor-corrector algorithm, recursively computing the state and distribution at time $t$ based solely on the state and distribution at time $t-1$ and the measurement at time $t$.
The details of the mathematics for this computation varies based on the problem. The Discrete Bayes and Univariate Kalman Filter chapters gave two different formulations which you should have been able to reason through. The univariate Kalman filter assumes that for a scalar state both the noise and process are linear model are affected by zero-mean, uncorrelated Gaussian noise.
The Multivariate Kalman filter makes the same assumption but for states and measurements that are vectors, not scalars. Dr. Kalman was able to prove that if these assumptions hold true then the Kalman filter is optimal in a least squares sense. Colloquially this means there is no way to derive more information from the noisy measurements. In the remainder of the book I will present filters that relax the constraints on linearity and Gaussian noise.
Before I go on, a few more words about statistical inversion. As Calvetti and Somersalo write in Introduction to Bayesian Scientific Computing, "we adopt the Bayesian point of view: randomness simply means lack of information"[3]. Our state parameterizes physical phenomena that we could in principle measure or compute: velocity, air drag, and so on. We lack enough information to compute or measure their value, so we opt to consider them as random variables. Strictly speaking they are not random, thus this is a subjective position.
They devote a full chapter to this topic. I can spare a paragraph. Bayesian filters are possible because we ascribe statistical properties to unknown parameters. In the case of the Kalman filter we have closed-form solutions to find an optimal estimate. Other filters, such as the discrete Bayes filter or the particle filter which we cover in a later chapter, model the probability in a more ad-hoc, non-optimal manner. The power of our technique comes from treating lack of information as a random variable, describing that random variable as a probability distribution, and then using Bayes Theorem to solve the statistical inference problem.
## Converting Kalman Filter to a g-h Filter¶
I've stated that the Kalman filter is a form of the g-h filter. It just takes some algebra to prove it. It's more straightforward to do with the one dimensional case, so I will do that. Recall
$$\mu_{x}=\frac{\sigma_1^2 \mu_2 + \sigma_2^2 \mu_1} {\sigma_1^2 + \sigma_2^2}$$
which I will make more friendly for our eyes as:
$$\mu_{x}=\frac{ya + xb} {a+b}$$
We can easily put this into the g-h form with the following algebra
\begin{aligned} \mu_{x}&=(x-x) + \frac{ya + xb} {a+b} \\ \mu_{x}&=x-\frac{a+b}{a+b}x + \frac{ya + xb} {a+b} \\ \mu_{x}&=x +\frac{-x(a+b) + xb+ya}{a+b} \\ \mu_{x}&=x+ \frac{-xa+ya}{a+b} \\ \mu_{x}&=x+ \frac{a}{a+b}(y-x)\\ \end{aligned}
We are almost done, but recall that the variance of estimate is given by
\begin{aligned} \sigma_{x}^2 &= \frac{1}{\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}} \\ &= \frac{1}{\frac{1}{a} + \frac{1}{b}} \end{aligned}
We can incorporate that term into our equation above by observing that
\begin{aligned} \frac{a}{a+b} &= \frac{a/a}{(a+b)/a} = \frac{1}{(a+b)/a} \\ &= \frac{1}{1 + \frac{b}{a}} = \frac{1}{\frac{b}{b} + \frac{b}{a}} \\ &= \frac{1}{b}\frac{1}{\frac{1}{b} + \frac{1}{a}} \\ &= \frac{\sigma^2_{x}}{b} \end{aligned}
We can tie all of this together with
\begin{aligned} \mu_{x}&=x+ \frac{a}{a+b}(y-x) \\ &= x + \frac{\sigma^2_{x}}{b}(y-x) \\ &= x + g_n(y-x) \end{aligned}
where
$$g_n = \frac{\sigma^2_{x}}{\sigma^2_{y}}$$
The end result is multiplying the residual of the two measurements by a constant and adding to our previous value, which is the $g$ equation for the g-h filter. $g$ is the variance of the new estimate divided by the variance of the measurement. Of course in this case $g$ is not a constant as it varies with each time step as the variance changes. We can also derive the formula for $h$ in the same way. It is not a particularly illuminating derivation and I will skip it. The end result is
$$h_n = \frac{COV (x,\dot x)}{\sigma^2_{y}}$$
The takeaway point is that $g$ and $h$ are specified fully by the variance and covariances of the measurement and predictions at time $n$. In other words, we are picking a point between the measurement and prediction by a scale factor determined by the quality of each of those two inputs.
## References¶
• [1] C.B. Molwer and C.F. Van Loan "Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later,", SIAM Review 45, 3-49. 2003.
• [2] C.F. van Loan, "Computing Integrals Involving the Matrix Exponential," IEEE Transactions Automatic Control, June 1978.
• [3] Calvetti, D and Somersalo E, "Introduction to Bayesian Scientific Computing: Ten Lectures on Subjective Computing,", Springer, 2007.
• [4] Brown, R. G. and Hwang, P. Y.C., "Introduction to Random Signals and Applied Kalman Filtering", Wiley and Sons, Fourth Edition, p.143-147, 2012. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9824296236038208, "perplexity": 403.1997674851035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104248623.69/warc/CC-MAIN-20220703164826-20220703194826-00201.warc.gz"} |
https://bertsblogs.com/vectors/ | # Vectors
We manipulate scalar information all the time using plus, minus, multiply and divide operators. The scalar quantities are not always of the same type. We can add a count of pears to a count of apples and give the answer as pieces of fruit. It would make little sense to multiply a quantity of apples by a quantity of pears but their are many unlike quantities we multiply together to get a result. For example we continuously sum the product of voltage by current taken from the mains by time to get the energy we use in our homes.
Vectors are scalars with a direction that matters. One way of showing both size and directions is to draw vectors on graph paper using a coordinate system. The length of an arrow represents the size or scalar part of the vector. The alignment of the arrow and its head represents the direction of the vector.
If you and I push an object in the same direction the scalars of our pushes add. If we push in opposite directions they subtract. If I am pushing an object northward and you are pushing on the same object with an identical force but toward the south west we can guess that the pushed object will move, if it can do so, both to the north and west. More accurately by combining vectors geometrically as shown in the diagram we can get a combined force with both size and direction. Of course we must draw the vector lengths to some scale and with the right relative directions.
It may be more convenient and more accurate to work out the result algebraically. The top triangle from the above is illustrated here and we can use the equality of the cosine rule (see my blog on trigonometry) to determine the size of the result.
Once we have R we can determine the part of it that acts to move the object west and the part that acts to move the object north. These are as shown in green. Like the M and Y forces that actually delivered R these two component vector forces would deliver the same R force result. We have if you like replaced the M and Y forces with an east/west component and a north/south one that would deliver the same result. We call this process resolving. In this case we have resolved the M and Y forces into the convenient north/south and east/west directions but resolving can take place in any direction.
We can look at these resolved component vectors shown green in another way and consider them as scalar multiples of unit vectors. One is a scalar multiple of the unit west vector shown in red and the other a scalar multiple of the unit north vector in blue.
We can clearly add and subtract vectors but can we and what does it mean to multiply two vectors ?
Let us consider a practical example. The diagram shows the basic operation of an electrical motor. View the currents as going away from you under the North magnetic pole (into the screen) and coming toward you (out of the screen) under the South magnetic pole. The role of a commutator is to deliver current flow directions under each magnetic pole such that the interactions between current and magnetic field output forces that support one another and produce rotary motion. The forces, current flows and magnetic field strength are all vectors.
The force vectors are due to the interactions of the current vectors with the magnetic field vector. Their values are related to the product of current and field values and at a maximum when current and field directions are at right angles, as in the diagram. Force direction is perpendicular to the plane of the field and current vectors.
The force vectors, shown in green, are most effective in delivering the rotary torque. But there are are many other less effective force vectors, as in yellow; less effective because the strength of the magnetic field acting on current conductors not under the poles is diminished and because only a component of their output forces is effective in delivering the rotary motion. We say the force vector is the cross product of the field and current vectors.
The vector dot product
If in a 3d coordinate system we have a vector A with component lengths x1 yz1 and a vector B with component lengths x2 y2 z2 then if a vector C = vector A – vector B it will have component lengths of x1 –x2 y1 –y2 and z1 –z2.
If A and B are perpendicular to one another then Pythagoras tells us that the length of C squared ||C||2 equals ||A||2 + ||B||2 . It means (x1 –x2)+ (y1 –y2)+ (z1 –z2) must equal (x12 +y12 + z12 ) + (x22 +y22 + z22 ).
Simplified this means x1x2 + y1y2 + z1z2 = 0 when two vectors are perpendicular to one another. We call the term x1x2 + y1y2 + z1z2 the dot product of the two vectors and show it as A ⋅ B
Where two vectors A and B have an angle Θ between them the Cosine rule tells us that ||C||2 = ||A||2 + ||B||– 2||A|| ||B||cosθ. Substituting all of the above paragraph x, y and z values for ||C||2, ||A||2 and ||B||2 into this formula we discover the more general formula in which the dot product A ⋅ B = x1x2 + y1y+ z1z2 = ||A|| ||B||cosθ . So the dot, sometimes called scalar product, is the product of one vector length multiplied by the other vector length projected onto it.
The vectors cross product
Consider two vectors A and B, shown in red and yellow. Each has components along the x, y and z axes shown as Ax Ay Az and Bx By Bz. Those vector components can be regarded as scalars with their directions determined by unit vectors i , j or k. So for example Ax i describes vector A’s component along the x axis, Ax being its size and i setting its vector direction.
We saw above how a motor force only arose when there were components of magnetic field and current acting across one another. No force arose when the magnetic field and current aligned. So to get a resultant cross product vector C = A x B we must multiply each component of vector A by each component of vector B but ignore the products of aligned components (i.e we ignore any multiplications involving components with like i, j or k directions as they contribute nothing to the cross product vector. It is also worth noting that as our cross product vector C will be perpendicular to the plane of A and B that both C ⋅ A and C ⋅ B will be zero.
The very nature of a cross product is such that an i direction component multiplied by a j direction component will give us a k direction component. In fact i j = k, jk = i, ki = j, ji = -k, ik = -j and kj = -i as per the cyclic diagram shown. Now, knowing the above we do the cross product.
A x B = (Ax i + Ay j + Az k) x (Bx i + By j + Bz k)
= Ax By ij + Ax Bz ik + Ay Bx ji + Ay Bz jk + Az Bx ki + Az By k j
= Ax By k – Ax Bz j – Ay Bx k + Ay Bzi + Az Bx j – Az By i
Simplifying the above A x B= i ( Ay Bz – Az By ) – j( Ax Bz – Az Bx) + k( Ax By – Ay Bx )
The above cross product can be written in the simpler matrix form left. Follow the above terms in the matrix. We write the j term in the above cross product as a negative because the seen matrix gives us Ax Bz – Az Bx but we could have written it as a a positive and multiplied the terms in cyclic fashion regarding i as after k. and thereby getting Az Bx – Ax Bz.
On the right, we show a cross product vector geometrically. It is at right angles to the plane containing the multiplied vectors and its magnitude is that of the yellow area AB sinθ.
In our motor example the angle between current (say A) and field (B) is a right angle and the yellow shape would be a rectangle. The force( A x B) is at right angles to the plane of current and field and its size is related to the yellow area. Such a force will clearly reduce in value as the angle between current and field gets smaller becoming nothing when current and field have like direction. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8424481153488159, "perplexity": 656.6956625889861}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711126.30/warc/CC-MAIN-20221207021130-20221207051130-00616.warc.gz"} |
http://cust-serv@ams.org/bookstore?fn=20&arg1=stmlseries&ikey=STML-54 | New Titles | FAQ | Keep Informed | Review Cart | Contact Us Quick Search (Advanced Search ) Browse by Subject General Interest Logic & Foundations Number Theory Algebra & Algebraic Geometry Discrete Math & Combinatorics Analysis Differential Equations Geometry & Topology Probability & Statistics Applications Mathematical Physics Math Education
Glimpses of Soliton Theory: The Algebra and Geometry of Nonlinear PDEs
Alex Kasman, College of Charleston, SC
SEARCH THIS BOOK:
Student Mathematical Library
2010; 304 pp; softcover
Volume: 54
ISBN-10: 0-8218-5245-0
ISBN-13: 978-0-8218-5245-3
List Price: US$46 Institutional Members: US$36.80
All Individuals: US\$36.80
Order Code: STML/54
Solitons are explicit solutions to nonlinear partial differential equations exhibiting particle-like behavior. This is quite surprising, both mathematically and physically. Waves with these properties were once believed to be impossible by leading mathematical physicists, yet they are now not only accepted as a theoretical possibility but are regularly observed in nature and form the basis of modern fiber-optic communication networks.
Glimpses of Soliton Theory addresses some of the hidden mathematical connections in soliton theory which have been revealed over the last half-century. It aims to convince the reader that, like the mirrors and hidden pockets used by magicians, the underlying algebro-geometric structure of soliton equations provides an elegant and surprisingly simple explanation of something seemingly miraculous.
Assuming only multivariable calculus and linear algebra as prerequisites, this book introduces the reader to the KdV Equation and its multisoliton solutions, elliptic curves and Weierstrass $$\wp$$-functions, the algebra of differential operators, Lax Pairs and their use in discovering other soliton equations, wedge products and decomposability, the KP Equation and Sato's theory relating the Bilinear KP Equation to the geometry of Grassmannians.
Notable features of the book include: careful selection of topics and detailed explanations to make this advanced subject accessible to any undergraduate math major, numerous worked examples and thought-provoking but not overly-difficult exercises, footnotes and lists of suggested readings to guide the interested reader to more information, and use of the software package Mathematica® to facilitate computation and to animate the solutions under study. This book provides the reader with a unique glimpse of the unity of mathematics and could form the basis for a self-study, one-semester special topics, or "capstone" course.
Undergraduate and graduate students interested in nonlinear PDEs; applications of algebraic geometry to differential equations.
Reviews
"[T]his introduction to soliton theory is ideal for precisely the type of course for which it is intended - a .single semester special topics class' or a 'capstone experience . . . course.' . . . One of the delightful bonuses found in the text is the list of sources for additional reading found at the end of each chapter. In addition, the appendix, Ideas for Independent Projects,' provides both the student and the teacher many options for even more connections and/or more depth in numerous areas of study. Recommended."
-- J. T. Zerger, CHOICE
"The book is well written and contains numerous worked-out examples as well as many exercises and a guide to the literature for further reading. In particular, I feel that it serves its intended purpose quite well."
-- Gerald Teschl, Mathematical Reviews
AMS Home | Comments: webmaster@ams.org © Copyright 2014, American Mathematical Society Privacy Statement | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1598440706729889, "perplexity": 1678.569121511278}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510270399.7/warc/CC-MAIN-20140728011750-00192-ip-10-146-231-18.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/can-you-help-me-in-this-homework.185606/ | # Can you help me in this homework?
1. Sep 19, 2007
### coconut88
1. The problem statement, all variables and given/known data
Aboy whirls aball on a string in ahorizontal circle of radius 0.8m. How many revolutions per minute must the ball make if the magnitude of its centriptal acceleration is to be the same as the free-fall acceleration due to gravity g?
2. Relevant equations
3. The attempt at a solution
2. Sep 19, 2007
### learningphysics
3. Sep 19, 2007
### PFStudent
Hey,
Consider what acceleration is being referred to here, centripetal or linear?
Also, then consider that acceleration as it relates to the number of revolutions per minute.
Thanks,
-PFStudent
4. Sep 19, 2007
### Kushal
the best way to solve these kinds of problems is to find the appropriate equations and determine which of the quantity has to be common in the equations(usually two equations).
then equate these equations to find the unknown, which should be in one of the relevant equations.
5. Sep 19, 2007
### coconut88
a=v2/r this is the first equation that I used but I don't kow what is the secend will be ?
what does the revoluition mean ? what does the equation we need to use ?
pleace help me I me international student fot that I have a lot of diffeculties
6. Sep 19, 2007
### learningphysics
Yes, you can find v using a = v^2/r
number of revolutions means number of times it goes around in a circle...
let n = total number of revolutions. let d = distance. let t = time.
So
$$d = n*2{\pi}r$$
$$\frac{d}{t}= \frac{n*2{\pi}r}{t}$$
v = d/t
number of revolutions per second = n/t
Using these you should be able to get number of revolutions per second. What do you get?
7. Sep 20, 2007
### coconut88
I used this v^2/r and the answer is 2.8 m/s after that I used 2rr and the answar is 5.0/1 rev that means 1 rev/5.0
aftet that I can't do any thing because I don't have the d !!
??
8. Sep 20, 2007
### learningphysics
you don't need d. let X = number of revolutions/second
you know that v = X*(distance per revolution) = X*5.0
so v = X*5.0
solve for X.
9. Sep 20, 2007
### coconut88
The x will be .56
10. Sep 20, 2007
### learningphysics
looks right to me. 0.56 revolutions/second
11. Sep 20, 2007
### coconut88
12. Sep 20, 2007
### learningphysics
you're welcome. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8345707654953003, "perplexity": 1754.8930750361067}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128321497.77/warc/CC-MAIN-20170627170831-20170627190831-00494.warc.gz"} |
http://napitupulu-jon.appspot.com/posts/confidence-interval-coursera-statistics.ipynb | { "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Confidence Interval (CI)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Confidence interval is the range of plausible values in which we want to capture the population parameter. For example if estimate the point estimate, if we guess the exact value chances are we will miss. But if we take range of plausible values (net fishing instead of doing it with spear), there's a good chance that we capture population parameter.Note that sample statistics acts as a point estimate to our population parameter. So if we want to get a population mean, we get a point estimate mean. In this case, the sample statistics and point estimate is synonymous." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "\n", "*Snapshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/69) 02:53*\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So typically CI is within two standard error, within 95% of sampling distribution. The question really is, when we take sample and calculate the point estimate, the mean, what is the probability that the mean of the sample is within two standard deviation of sampling distribution. Remember $\\bar{x}$ is our mean sample. So there are two mutually exclusive events here. Either the sample mean within 95% CI, or it doesn't.Remember that within 95% means within 2 standard deviation.Half of this range is often called **margin of error (ME)**." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "CLT have its own requirement, if point estimates is about mean:\n", "\n", "* Assert that population distribution is normal. If we can't check that, we can infer from the shape of sample distribution taken.\n", "* The sample size >= 30 or population distribution is normal, larger if the distribution is more skewed.\n", "* The samples taken is independent.\n", "* the distribution of sample mean will be nearly normal, and we can calculate with CLT advantage.\n", "* The larger the sample size, the less concern it will be about the shape of population distribution.\n", "\n", "If the sample is taking by random sampling/assignment, and the sample size is less than 10% of population, then CI can take advantage of CLT. Sampling distributions that doesn't support CLT skew,size, and independence will not be normal.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's take a look at one of the example." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "\n", "*Snapshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/69) 06:17*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this study, we can find the following:\n", "\n", "$\\bar{x}$ = 64.5%\n", "\n", "$\\sigma$ = 4%" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So the first example is always true. Conceptually, the higher the sample size, the less variability of error, led to lower standard error. Mathematically, taking formula for calculating of standard error, inversely, sample size increase, will decrease the standard error.\n", "\n", "The second example is also correct 95% means two standard deviation, give us Margin of Error:\n", "\n", " two standard deviation * 2 = 8%\n", "\n", "Third case, as we said earlier, two standard deviation, but this example only gives us 1 standard deviation, hence it's not correct. The final options is true. Although it's different level of confidence, it's also valid. within 99.7% would means 3 standard deviaion hence:\n", "\n", " 3*sd = 12%" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "\n", "*Snapshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/69) 08:17*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So because the CI is based on the CLT, CI has same requirements, though stricter. For the skewness, CLT let's you at least more than 30 size of the sample. But CI on the other hand require larger size if the population is very skew. Here we see that any interval is correct, as long we specify the standard error accordingly.Again we use s to denote standard deviation of sample distributions. Since we're almost always doesn't get true population mean, we use mean of sample.\n", "\n", "Note that because we also use z-score in conjunction with the SE, that doesn't mean standard deviation is similar to standard error. $\\sigma\\$ is used to describe the variability of your data. While SE is used to decribed the error variety of your samples point estimates, taken with same size and from the same population." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "\n", "*Snapshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/69) 11:02*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So we know that 95% is only approximate to 2 standard deviation. We're doing that because we want to rounding up. Taking the exact standard deviation will need calculation above. We can simply take half of the proportion, cutting the distribution and finding what are the standard deviation. Since we're doing qnorm without specifying the lower.tail, it will give the lower tail hence left standard deviation. Critical value is always positive, and for right standard deviation, we only need to make it positive value." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Accuracy vs Precision" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "\n", "*Snapshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/71) 02:06*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Taking the plot above, we know that 95% is famous for CI with 1.96SE. But it could also work well with 90%,98%,99%. Suppose we take same sample size and plot it 25 times. It turns out, one sample in its interval is not capturing true population. In this case what we get is 24/25 = 0.96. Note that because it is sampling, and therefore have sampling variability, there's a chance that our sampling doesn't catch point estimate, population mean for example." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "\n", "*Snapshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/71) 03:17*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Increase the confidence level, will increase the range of the interval as well. as The level gets higher, the interval is also get higher. We see that plot above, 95% will take all data within 1.96 deviations, 99% will take all data within 2.58 standard deviation. Sadly it's no free lunch, it comes with a cost.\n", "\n", "So we're going to compare the accuracy vs precision.The accuracy is about whether we're exactly predicting the value, the precision is about the width of the interval. Suppose we're predicting a value based on Confidence Interval. Is the value that we get is precise? Basically no. So as the higher confidence level, the lower the precision.Why? Because **as we increase the confidence level, our standard error is also increase**, that is the width of our interval. So how we get best of both worlds? We increase the sample size. As we get higher the sample size, the variability of error(SE) will decrease.This would means get lower precision. \n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "\n", "*Snapshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/71) 07:17*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* a is False. This is not about observing the individuals, rather, the population parameter, the average.\n", "* b is True. This is the exact definition of the confidence interval\n", "* c is False. this is not the probability in which one events occurs between certain range vs they dont. 95% is the interval, not probability. Saying 95% probability that occurs in a range 3.53-3.83 is not correct.\n", "* d is False. Confidence Interval is not talking about the sample, it's talking about the population parameter. This would be correct if it's said 100% confidence, because that's what we are getting sample mean as a basis of CLT. But getting sample mean only is not truly interesting." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So in summary:\n", "\n", "* We can say something like 'We are 95% confident that true average population have been between this and this'\n", "* We can say that 95% of random samples with same size and from same population will yield an interval that capture the true population parameter.\n", "* We can say confidence interval will capture true population parameter.\n", "* We can say 95% is the percentage of random samples that capture true population parameter.\n", "\n", "However:\n", "\n", "* we can't say '95% of the time, true average lies between ..', this is not a probability.\n", "* we can't say '95% of the samples will have average between ...', CI is about the true population parameter, not sample.\n", "\n", "Master statement CI from various problem will help you through various problem.**Always mention what your population is and the parameter**." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Required sample size for ME" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "\n", "*Snapshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/71) 00:44*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Consider following example where we can plug everything in." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "\n", "*Snapshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/71) 03:21*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We know that the margin of error should at maximum 4, can't go any higher than that. Getting critical value for 90% confidence level can be done as:\n", "\n", " (1 - 0.9)/2 = 0.05\n", " \n" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "\n", "[-1.644854]" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%R qnorm(0.05)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So we get approximately 1.65 (if using z-table). Now that we have everything in our hand, calculating the sample size should be easy. The result that we get is 55.13. But since we're calculating people here, desimal should not be in place. As the result stated the minimum value is 55.13, rounding down would be mistake in statistics, eventhough it should be mathematically. As this is the requirement, we have to always round up to nearest value, therefore 56. Notice that the relationship between sample size and ME is **inverse exponential**. Half the margin of error, we have to increase to quarduple the sample size. This confirm the CLT earlier, where if you want 1/3 margin of error, you have to multiply n by 9.\n", "\n", "\n", "The Z-score that we get is negative, but it's only a sign that the calculation observe the lower tail. So as distance is absolute number, you convert it to positive. Say, what you concern is only one-tail, that is less than, not greater than. Why we still divide it by 2? Because no matter one/two tail, CI will always symmetric, that is you always concern about two tails, eventhough you only consider one tail." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### CI(Mean) examples" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "\n", "*Snapshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/75) 01:21*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Remember that the confidence level is about what the population mean reallys is, and the interval which capture the data. So we can say, 'We are 95% confident that Americans on average have 3.40 to 4.24 have bad mental health days per month.'" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here another question. **In this context, what does a 95% confidence level mean?**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If we're able to capture the mean of confidence interval through various problem, then you're able to master the material.Remember that confidence interval was achieved through gather samples with same size and from same population, plot it into sampling distribution. we can say that 95% of random samples of 1151 Americans will have CIs that yields interval that capture the true population average of bad mentah health days." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**If researchers found 99% interval is more appropriate, is the interval wider or narrower?**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Wider of course.Depending of your error, if you don't want to increase the standard error, you should increase your larger size exponentially." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "\n", "\n", "*Snapshot taken from [Coursera](https://class.coursera.org/statistics-003/lecture/75) 04:26*" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "All the condition are met. We know the sample size, the mean, and standard deviation. But first, we must checked at the conditions.\n", "\n", "* The sample never stated by random sampling. But **50 is less than 10% of college students.** By checking that, we have good assumption that the sample is independent.\n", "* n > 30 & not so skewed sample.Knowing that 50 is more than 30, we can make assumption the sampling distributions will be nearly normal.\n", "\n", "After confirming those, we can make a calculation." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's us state that:\n", "\n", " n = 50\n", " mu = 3.2\n", " s = 1.74\n", " z = 1.96\n", "\n", "$$SE = \\frac{s}{n} = \\frac{1.74}{\\sqrt{50}} \\approx 0.246$$\n", "\n", "$$\\bar{x} \\pm z * SE = 3.2 \\pm 1.96 * 0.246$$\n", "\n", "$$3.2 \\pm 0.48 = (2.72,3.68)$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "So we are 95% confident that true average of mutually exclusive college students have been in 2.72 - 3.68 exclusive relationships\n", "\n", "Margin of error is half the confidence interval, so the distance between the middle of interval to either direction.\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "> **REFERENCES**:\n", "\n", "> * https://class.coursera.org/statistics-003/" ] } ], "metadata": { "kernelspec": { "display_name": "Python 2", "language": "python", "name": "python2" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.9" } }, "nbformat": 4, "nbformat_minor": 0 } | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8316851258277893, "perplexity": 4797.961152605538}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823785.24/warc/CC-MAIN-20181212065445-20181212090945-00190.warc.gz"} |