url (string, 14 to 1.76k chars)
text (string, 100 to 1.02M chars)
metadata (string, 1.06k to 1.1k chars)
http://www.reference.com/browse/Harmonic+Distortion
# Total harmonic distortion

The total harmonic distortion, or THD, of a signal is a measurement of the harmonic distortion present and is defined as the ratio of the sum of the powers of all harmonic components to the power of the fundamental frequency. Lower THD, for example, allows the components in a loudspeaker, amplifier, microphone, or other equipment to make a violin sound like a violin when played back, and not like a cello or simply distorted noise.

## Explanation

In most cases, the transfer function of a system is linear and time-invariant. When a signal passes through a non-linear device, additional content is added at the harmonics of the original frequencies. THD is a measurement of the extent of that distortion. The measurement is most commonly the ratio of the sum of the powers of all harmonic frequencies above the fundamental frequency to the power of the fundamental:

$$\mathrm{THD} = \frac{\sum \text{harmonic powers}}{\text{fundamental frequency power}} = \frac{P_2 + P_3 + P_4 + \cdots + P_n}{P_1}$$

Other calculations for amplitudes, voltages, currents, and so forth are equivalent. For a voltage signal, for instance, the ratio of the squares of the RMS voltages is equivalent to the power ratio:

$$\mathrm{THD} = \frac{V_2^2 + V_3^2 + V_4^2 + \cdots + V_n^2}{V_1^2}$$

In this calculation, $V_n$ means the RMS voltage of harmonic $n$, where $n=1$ is the fundamental. One can also calculate THD using all harmonics ($n=\infty$):

$$\mathrm{THD} = \frac{V_{\mathrm{RMS}}^2 - V_1^2}{V_1^2}$$

Other definitions may be used. Many authors define THD as an amplitude ratio rather than a power ratio, which results in a definition of THD that is the square root of the one given above. In terms of voltages, for example, the definition would be:

$$\mathrm{THD} = \frac{\sqrt{V_2^2 + V_3^2 + V_4^2 + \cdots + V_n^2}}{V_1}$$

This latter definition is commonly used in audio distortion (percentage THD) specifications. It is unfortunate that these two conflicting definitions of THD (one a power ratio, the other an amplitude ratio) are both in common usage. If the THD is expressed in dB, the two definitions are equivalent; this is not the case if the THD is expressed as a percentage. The power-ratio THD (the IEEE version) can exceed 100%; for audio measurements, where 100% is preferred as the maximum, the IEC version is used instead (Rohde & Schwarz and Brüel & Kjær use it):

$$\mathrm{THD} = \frac{V_{\mathrm{RMS}}^2 - V_1^2}{V_{\mathrm{RMS}}^2}$$

A measurement must also specify how it was made. Measurements for calculating the THD are made at the output of a device under specified conditions. The THD is usually expressed in percent as a distortion factor or in dB as distortion attenuation, and a meaningful measurement must state the number of harmonics included.

## THD+N

THD+N means total harmonic distortion plus noise. This measurement is much more common and more comparable between devices. It is usually made by inputting a sine wave, notch-filtering the output in question, and measuring the ratio between the output signal with and without the sine wave:

$$\mathrm{THD{+}N} = \frac{\sum \text{harmonic powers} + \text{noise power}}{\text{total output power}}$$

A meaningful measurement must include the bandwidth of measurement. This measurement includes effects from intermodulation distortion, interference, and so on, in addition to harmonic distortion.
For a given input frequency and amplitude, THD+N is the reciprocal of SINAD, provided the bandwidth for the noise measurement is the same for both (the Nyquist bandwidth).
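As an illustration of the amplitude-ratio definition above, here is a minimal Python sketch that estimates THD of a sampled signal from its FFT. The helper name `thd`, the choice of 5 harmonics, and the test tone are assumptions made for the example, not part of any standard.

```python
import numpy as np

def thd(signal, fs, f0, n_harmonics=5):
    """Estimate THD (amplitude-ratio definition) of `signal` sampled at
    rate `fs`, given fundamental frequency `f0`.  Returns a fraction
    (multiply by 100 for percent)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

    def amplitude(f):
        # take the FFT bin nearest the target frequency
        return spectrum[np.argmin(np.abs(freqs - f))]

    v1 = amplitude(f0)                                   # fundamental V1
    harmonics = [amplitude(k * f0) for k in range(2, n_harmonics + 2)]
    return np.sqrt(np.sum(np.square(harmonics))) / v1    # sqrt(V2^2+...)/V1

# Example: a 1 kHz tone with a 1% second harmonic should give THD ~ 0.01.
fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t) + 0.01 * np.sin(2 * np.pi * 2000 * t)
print(thd(x, fs, 1000))  # ~0.01, i.e. 1% THD
```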
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9551474452018738, "perplexity": 1229.4721850746805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413558067075.78/warc/CC-MAIN-20141017150107-00247-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.amplifiedparts.com/products/luthier-parts-tools?sort=price_high_to_low&filters=3395a3427c3388a3395c3387a3388c3427a3445
# Luthier Parts & Tools

Potentiometer - PMT, Humbucker Control, Coil Filters

The HCP Humbucker Control Pot is a dual-mode variable coil tap selector for electric guitars and basses that eliminates the additional switches associated with coil splitting a pair of humbuckers. In TAP mode, rotating the pot counterclockwise fades from humbuckers to outer coils (neck pickup South coil & bridge pickup North coil). Pulling the HCP control up and rotating it counterclockwise fades from humbuckers to inner coils (neck pickup North coil & bridge pickup South coil). Tonal control is expanded with the unique ability to coil tap both humbuckers in varying amounts. In COIL FILTER mode, turning the HCP control applies varying amounts of a specially tuned high-frequency filter to only one coil of the selected coils of the humbuckers (either North or South coils can be filtered). This makes each pickup a single coil at high frequencies and a humbucker at lower frequencies, which produces a sound similar to a pair of single coils but with a stronger, fuller tone and 50% less noise than coil tapping. $22.75

Potentiometer - PMT, Dual Mode Tone Control

The DMT is a passive dual-mode guitar tone control designed specifically for electric guitars that can operate in either HIGH PASS or LOW PASS mode, greatly expanding the instrument's tonal possibilities without the use of a battery or active electronics. Micro switches on the circuit board give the user the ability to select one of two different frequency ranges for both high- and low-pass modes. Easy installation with no capacitors or other parts needed. Replace the tone control in your guitar with one that you will actually use! Starting at $19.50

Don't see what you're looking for? Send us your product suggestions!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4090500771999359, "perplexity": 8876.252730034506}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585265.67/warc/CC-MAIN-20211019105138-20211019135138-00582.warc.gz"}
http://www.ipam.ucla.edu/abstract/?tid=7885&pcode=LE2009
## Computing eigensolutions with meshless methods - advantages and difficulties

#### Carlos Alves, Technical University of Lisbon

In this talk we present a meshless method of fundamental solutions to compute eigenvalues and eigenfunctions of the Laplacian (2D, 3D) and of the Bilaplacian. We present some numerical experiments related to some known problems (e.g. the quasi-stadium conjecture). We will also discuss the conditioning and boundary regularity difficulties related to these Trefftz-type methods, and present some new results. [Joint work with my PhD student, P. Antunes.]

Back to Laplacian Eigenvalues and Eigenfunctions: Theory, Computation, Application
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8566315174102783, "perplexity": 2000.7574257246092}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152156.49/warc/CC-MAIN-20210726215020-20210727005020-00375.warc.gz"}
https://en.wikibooks.org/wiki/Calculus_Course/Integration/Indefinite_Integral
# Calculus Course/Integration/Indefinite Integral

## Indefinite Integral

The indefinite integral is the inverse operation of differentiation: the indefinite integral of a function $f$ is the family of functions whose derivative is $f$,

$\int f(x)\,dx = F(x) + C, \quad \text{where } F'(x) = f(x).$

## Indefinite Integral Rules

Because antidifferentiation is the inverse operation of differentiation, antidifferentiation theorems and rules are obtained from those on differentiation. Thus, the following theorems can be proven from the corresponding differentiation theorems:

• General antidifferentiation rule: $\int dx = x + C$

• The general antiderivative of a constant times a function is the constant multiplied by the general antiderivative of the function: $\int a f(x)\,dx = a \int f(x)\,dx$

• If ƒ and g are defined on the same interval, then the general antiderivative of the sum of ƒ and g equals the sum of the general antiderivatives of ƒ and g: $\int [f(x)+g(x)]\,dx = \int f(x)\,dx + \int g(x)\,dx$

• If n is a real number, $\int x^{n}\,dx = \begin{cases} \frac{x^{n+1}}{n+1}+C, & \text{if } n\neq -1 \\ \ln|x|+C, & \text{if } n=-1 \end{cases}$

• $\int f'(x)\,dx = f(x) + C$

• $\int \frac{f'(x)}{f(x)}\,dx = \ln|f(x)| + c$

• Integration by parts: $\int UV = U\int V - \int\left(U'\int V\right)$

The exponential $e^{x}$ generates itself under differentiation and is susceptible to the same treatment. For example, integrating $e^{-x}\sin x$ by parts twice:

$\int e^{-x}\sin x\,dx = (-e^{-x})\sin x - \int (-e^{-x})\cos x\,dx = -e^{-x}\sin x + \int e^{-x}\cos x\,dx = -e^{-x}(\sin x + \cos x) - \int e^{-x}\sin x\,dx + c$

We now have our required integral on both sides of the equation, so

$\int e^{-x}\sin x\,dx = -\frac{1}{2}e^{-x}(\sin x + \cos x) + c$

• $f(x) = m$: $\int m\,dx = mx + C$

• $f(x) = x^{n}$: $\int x^{n}\,dx = \frac{1}{n+1}x^{n+1} + c \quad (n \neq -1)$

• $f(x) = \frac{1}{x}$: $\int \frac{1}{x}\,dx = \ln|x| + C$
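To sanity-check rules like the integration-by-parts example above, a computer algebra system can compute antiderivatives symbolically. A minimal SymPy sketch (note SymPy omits the constant of integration, so the $+C$ is implicit):

```python
import sympy as sp

x = sp.symbols('x')

# The worked example: integrate e^(-x) sin x (done by parts twice above).
F = sp.integrate(sp.exp(-x) * sp.sin(x), x)
print(sp.simplify(F))            # -exp(-x)*sin(x)/2 - exp(-x)*cos(x)/2

# Verify it really is an antiderivative: differentiating recovers the integrand.
print(sp.simplify(sp.diff(F, x) - sp.exp(-x) * sp.sin(x)))  # 0

# The power rule, including the n = -1 case:
print(sp.integrate(x**3, x))     # x**4/4
print(sp.integrate(1/x, x))      # log(x)
```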
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 19, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9767515659332275, "perplexity": 876.5680651581907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00145.warc.gz"}
https://gmatclub.com/forum/if-a-b-and-c-are-integers-such-that-b-a-is-b-c-a-78848.html
If a, b and c are integers such that b > a, is b+c > a : GMAT Data Sufficiency (DS)

If a, b and c are integers such that b > a, is b + c > a ?

(1) c > a
(2) abc > 0

Reply (26 May 2009): E.

a = -4 or 3, b = -3 or 4: these satisfy b > a, using negative or positive values as examples.

(1) c > a: c = -2 or 4. Insufficient.
(2) abc > 0: this means either two of the integers are negative or all of them are positive. Insufficient.

Reply (26 May 2009): b > a <==> 0 > a - b <==> a - b is negative. Question: is b + c > a? Equivalently: is c > a - b?

(1) Insufficient: a - b < 0 allows a < 0 or a > 0, so both c < 0 and c >= 0 are possible.
(2) We have an even number of negatives, i.e. 0 or 2. With 0 negatives it holds; with 2 it can fail.

I can see the pattern; here are the Yes/No cases. Yes: 4, 5, 7. No: -9, 3, -10. Final answer, E.

Reply (27 May 2009): For me it's straight A.

Reply (27 May 2009): Why can't it be C? Take b > a, c > a and abc > 0 together. If a < 0 and b < 0, then c > 0 (a = -6, b = -4, c = 2) => b + c > a. If a < 0, b > 0, c < 0 (a = -3, b = 5, c = -1) => b + c > a. If all are positive, then also b + c > a. Hence, C. Are there any numbers which satisfy both conditions simultaneously and give a No? What is the OA?

Reply (27 May 2009): Statement 1 does work for positives, but not for negatives: a = -2, b = -1, c = -1. Statement 2 doesn't work on its own because c could be way less than a, negating b: a = -3, b = 2, c = -5. However, both together mean that there can only be two negatives (or none), and c must be larger than a. For any value of a, negative or not, two values that are greater than it, with at least one positive, are together going to be greater than a. If a is positive: a = 2, c = 3, b = 3; or a = 1, b = 2, c = 2. If a is negative: a = -3, c = -2, b = 1; or a = -2, b = -1, c = 1. There is no solution with (1) and (2) true that doesn't end up with b + c > a. GMATPrep is right.

Reply (27 May 2009): Has anybody considered taking conditions 1 and 2 together? What is the OA for this? I still think it's C.

Reply (27 May 2009): We know b > a.

Statement 1: c > a, so b + c > 2a. We cannot tell whether b + c > a; it may or may not hold. Insufficient.

Statement 2: abc > 0 and b > a. If a, b, c are all positive, then definitely b + c > a. If a and b are negative and c is positive, then definitely b + c > a (since c is positive and b > a). If a and c are negative and b is positive, we know b is positive but not whether c > a, so we can't tell whether b + c > a. Insufficient.

Combining: yes, we have what we wanted, so C. Thanks gmatprep09 and dk94588.

Reply (27 May 2009): tkarthik wrote "c > a so b + c > 2a; we cannot tell whether b + c > a". If b + c > 2a, how in the world can it not be > a!!!! I'm confused. Can you please elaborate or give example numbers?

Reply (27 May 2009): Hi mdfrahim, you need to take some negative values. If a, b, c are all positive, then b + c > a and b + c > 2a. But say b = -2, c = -3, a = -4 (so c > a and b > a): then b + c = -5, which is less than a but greater than 2a. That is all I meant to say. Hope I have made it clear.

Reply (28 May 2009): If you look at both together, we have b > a and c > a, so b + c > 2a, and we're almost sufficient, but only if 2a > a: yes if a > 0, no if a < 0. (2) tells us that 0 or 2 of the unknowns are negative. If all 3 are positive, sufficient. If 2 are negative, then a has to be one of them: suppose a were positive; then b and c would both be negative, and b > a and c > a would be false. Hence a has to be negative. (C) Very tricky.

Reply (22 Oct 2009): (1) Because b and c can both be negative, (1) is insufficient. (2) Insufficient: for example, a = -2, b = 1, c = -4. Both together are sufficient, because either b or c is always greater than a.

Math Expert (Bunuel, 10 Aug 2012): If a, b and c are integers such that b > a, is b + c > a ?

Question: is $$b+c > a$$? Equivalently: is $$b+c-a > 0$$?

(1) c > a. If $$a=1$$, $$b=2$$ and $$c=3$$, then the answer is clearly YES, but if $$a=-3$$, $$b=-2$$ and $$c=-1$$, then the answer is NO. Not sufficient.

(2) abc > 0. Either all three unknowns are positive (answer YES), or two unknowns are negative and the third one is positive. Notice that in the second case one of the negative unknowns must be $$a$$ (because if $$a$$ is not negative, then $$b$$ is also not negative, so we wouldn't have two negative unknowns). To get a NO answer for the second case, consider $$a=-3$$, $$b=1$$ and $$c=-4$$. Not sufficient.

(1)+(2) We have that $$b > a$$ and $$c > a$$ (so $$c-a>0$$), and that either all three unknowns are positive or $$a$$ and one of $$b$$, $$c$$ are negative. For the first case the answer is obviously YES. As for the second case: say $$a$$ and $$c$$ are negative and $$b$$ is positive; then $$b+c-a=b+(c-a)=positive+positive>0$$ (the same reasoning applies if $$a$$ and $$b$$ are negative and $$c$$ is positive). So in both cases the answer is YES. Sufficient. Answer: C.

Reply (21 Oct 2014): The question stem says that b > a. The possibilities are: both positive, say b = 5 and a = 2; both negative, say b = -3 and a = -5; one positive and one negative, say b = 3 and a = -1; or either of the two zero.

1. c > a: c can be positive or negative and can fit any of the above scenarios. Insufficient.
2. abc > 0: either all are positive or any two are negative. Since b > a, there are numerous possibilities for c. Insufficient.

c > a and abc > 0 together: either all are positive, so b + c > a, or there are two negatives and one positive; then a has to be negative, either b or c has to be negative, and the other one positive. This also means b + c > a. Ans - C.

Reply (07 Mar 2016): Bunuel, could you please explain how you can solve the problem with the approach above in 2 minutes? It took me almost 3.5... Thanks!

Reply (16 Mar 2016): Either two are negative and one positive, or all are positive. Individually both statements are insufficient. Combining 1 and 2: sufficient. Hence C.

Reply (18 Mar 2016): Bunuel, in this problem can we subtract the inequalities b > a and b + c > a and arrive at "is c > 0?"?
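The consensus answer (C) is easy to confirm by brute force. Here is a minimal Python sketch (not from the thread; the search range and helper name are arbitrary choices for the example):

```python
from itertools import product

def check(stmt1, stmt2, lo=-10, hi=10):
    """Collect the truth values of 'b + c > a' over all integer triples
    satisfying the stem (b > a) plus the selected statements."""
    answers = set()
    for a, b, c in product(range(lo, hi + 1), repeat=3):
        if b <= a:
            continue                      # stem: b > a
        if stmt1 and not (c > a):
            continue                      # statement (1)
        if stmt2 and not (a * b * c > 0):
            continue                      # statement (2)
        answers.add(b + c > a)
    return answers

print(check(True, False))   # {True, False} -> (1) alone insufficient
print(check(False, True))   # {True, False} -> (2) alone insufficient
print(check(True, True))    # {True}        -> together sufficient: C
```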
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7958216071128845, "perplexity": 3246.3570740533683}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170569.99/warc/CC-MAIN-20170219104610-00472-ip-10-171-10-108.ec2.internal.warc.gz"}
http://mymathforum.com/calculus/344929-equation-tangent-line.html
My Math Forum: equation of a tangent line

September 19th, 2018, 07:08 PM, #1, Senior Member (Joined: Apr 2017, From: New York):

Compute an equation of the tangent line to the curve q(s) = (s, sin(πs^2), cos(3πs^2)) at the point (2, 0, 1), where s ∈ R.

For this question my steps will be these:
Step 1: match the x, y, z components of the point and q(s) to find the parameter s.
Step 2: find q'(s).
Step 3: insert the parameter s into q'(s) to find the tangent vector at the point of tangency.
Step 4: what is the equation of a tangent line in 3D? In 2D it is y - y1 = m(x - x1).

September 19th, 2018, 09:17 PM, #2, Senior Member (Joined: Apr 2017, From: New York):

This is the work I have done. [Attached images of the work]

September 20th, 2018, 08:33 PM, #3, Senior Member (Joined: Sep 2015, From: USA):

You seem to be misunderstanding this a bit.

$q(s) = (s,~\sin(\pi s^2),~\cos(3 \pi s^2))$

$p=(2,~0,~1) \Rightarrow s=2$

We find the tangent vector at $p$ by differentiating $q(s)$ and letting $s=2$:

$\dfrac{dq}{ds} = \left(1,~2 \pi s \cos\left(\pi s^2\right),~-6 \pi s \sin\left(3 \pi s^2\right)\right)$

$\left . \dfrac{dq}{ds}\right|_{s=2} = (1,~4 \pi ,~0)$

and our unit tangent vector at $p$ is thus

$T = \dfrac{1}{\sqrt{16\pi^2 + 1}}(1,~4\pi,~0)$

and the equation for our line is simply $\ell(u) = uT + p$:

$\ell(u) =\left( \dfrac{u}{\sqrt{1+16 \pi ^2}}+2,~\dfrac{4 \pi u}{\sqrt{1+16 \pi ^2}},~1\right)$
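A quick symbolic check of the reply's computation, as a SymPy sketch added here for illustration (the variable names are just choices for the example; normalizing the direction vector is optional for a parametric line):

```python
import sympy as sp

s, u = sp.symbols('s u')
q = sp.Matrix([s, sp.sin(sp.pi * s**2), sp.cos(3 * sp.pi * s**2)])

dq = q.diff(s)                 # tangent vector as a function of s
dq_at_2 = dq.subs(s, 2)        # evaluate at s = 2
print(dq_at_2.T)               # Matrix([[1, 4*pi, 0]])

# Parametric tangent line through p = (2, 0, 1); no need to normalize
# the direction unless an arc-length parametrization is wanted.
p = sp.Matrix([2, 0, 1])
line = p + u * dq_at_2
print(line.T)                  # Matrix([[u + 2, 4*pi*u, 1]])
```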
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3982592821121216, "perplexity": 4939.26909073073}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529276.65/warc/CC-MAIN-20190723105707-20190723131707-00263.warc.gz"}
http://www.physicsforums.com/showthread.php?p=3682867
How do you deal with crackpots? by bigfooted

Hi! I was on a forum recently where I saw a typical message from what some people would call a crackpot. These people are usually easy to identify: the message is full of spelling errors; it usually starts with the claim that a great discovery has been made; they never use math beyond high-school mathematics; they address people like Einstein as "Dr. Einstein"; they respond very aggressively to friendly but skeptical replies; and they never use standard mathematical notation. Although my first impulse is to try to help these people, I usually find that they are beyond help. The discussion becomes grim very fast, most of the arguments are ad hominem ("You do not accept my new theory because you belong to the establishment"), and I always hope the topic dies before it reaches Godwin's law. My question to you: what would you do? Try to help them? Ignore them from the start? When is enough enough for you?

Mentor (Ryan_m_b): There are many other subtle ways to tell. Whilst many crackpots join the site and post with "i thnk iv made a spaceship lke dr. einstein", some simply post normally but then slowly, over time, start introducing innocuous-looking yet crazy ideology into their posts. Sometimes I do try to help people: I try to break down what they believe and why (break down as in lay it out in fundamentals, not destroy) and address each point. However, almost always it eventually comes down to an almost religious belief in whatever pseudo-science they are peddling.

HW Helper (I Like Serena): I help people who want to be helped. If people show they don't want to be helped, or at least not by me, I move on. I don't like to be dragged into endless fruitless discussions, so I usually quit responding after a couple of posts. Sometimes it takes an effort to distance myself, but there are other people around that I'd rather help.

PF Gold: I take Ryan's observation a little further and say that such folks are usually, in at least some very real sense, religious fanatics, and there is absolutely no point whatsoever in trying to reason with them. I Like Serena has the right idea: helping folks is a good thing, but as soon as you realize that you are doing what the military call "pissing up a rope", it's time to move on.

Member: "How do you deal with crackpots?" depends on how the crackpot idea is presented. Strangely enough, this can fall under the social system, due to aspects of mankind's past. For the most part, anything you try to tell them is pointless, because they BELIEVE they are right and everyone else is wrong. The motivation of the presenter can also be a problem: some ideas exist for nothing more than making money off some fool. If someone is questioning the belief, then you may be able to present facts and help them; regrettably, that is the rare case. I deal with a lot of the free-energy garbage out there. It would surprise you what people can be convinced of, and what they will waste money on. It is easy to confuse an individual and then present some alternative idea as being right. From there, correcting such an "idea" becomes a problem. Good luck with that.

Emeritus Sci Advisor PF Gold (Ivan Seeking): I think a lot of debunkers have unrealistic expectations. If a person has been led down the garden path of silliness, it may take some time for them to get their bearings. Don't expect to change any minds with a single argument. These things need time to sink in. And there is often an emotional investment, which takes even more time to get past. My approach was to present the best information available and let the chips fall where they may. Once the answer or information is out there in cyberspace, it is there for all who wish to see it. I always tried to view this more as a library than a forum. The goal was to present the information, not to prove it to anyone. If a person is genuinely deluded or irrational, then an argument isn't going to change that anyway.

PF Gold (Pengwuino): Quote by Ivan Seeking: "I think a lot of debunkers have unrealistic expectations. [...]" Exactly. A lot of people don't realize that for many crackpots, their belief in their idea is as deeply ingrained as our own beliefs in mainstream physics and science. What sort of evidence would it take for all of us to no longer believe in physics? How long could that take? Is it possible in a single argument? Of course not, so why many people feel they can convince crackpots that they're wrong is beyond me. It's not like you're talking to another physicist and trying to convince someone that the lowest energy state of a quantum SHO is $\frac{1}{2}\hbar \omega$ and not $\hbar \omega$, where they just need to find the small error in their calculation. You're typically talking to a non-scientist who has no understanding of how science works in the first place.

Mentor (Ryan_m_b): Quote by Pengwuino: "Exactly. A lot of people don't realize that for many crackpots, their belief in their idea is as deeply ingrained as our own beliefs in mainstream physics/science. [...]" I have a quibble with this; belief in pseudo-science and acceptance of science are generally not comparable. If the only reason someone accepts a pseudo-scientific proposal is that they think the burden of proof has been met, then they can be corrected simply by educating them about the actual status of the evidence (this may be a challenge, depending on how educated they are in logic and epistemology). However, the majority of the time, in my experience, people do not accept pseudo-science because they have seen and accepted evidence, but because there is some sort of emotional and ideological dimension. People accept things like healing crystals, psychic powers, alternate planes of existence etc. because they have a religious belief in these things and then retrospectively tack on any science they think supports it. As an example of this, a few years ago I was hitch-hiking and was picked up by a man on his way to Glastonbury Tor. He told me he was a spiritual healer and that someone from his pagan group had told him that if he went to the Tor that night and ascended through the seven gates, he would meet a goddess who would grant him a new power. I didn't really want to be kicked out of the car, so I had to be more timid than I usually am in these conversations; when I asked him why he thought he had powers, what evidence he had of them, etc., he would respond with things like "well, physics shows there are 10 dimensions, which is similar to the number of spirit realms" and "Auras have been scientifically proven; did you know that the human body has a magnetic field and that our DNA emits photons?" Pretty much everything he said was a massive distortion of real science, because he had taken something he didn't understand, identified some vague semantic resemblance to his belief (i.e. dimension and realm), and dragged it through an ideological filter to construct some sort of justification for his belief. Note, though, that he had no need of such a scientific justification; all he was doing was reaffirming his belief. This type of crackpot is near-impossible to educate, because all information presented to them is not taken on its own merits but instead distorted and altered until it fits within their world view.

HW Helper (I Like Serena): I've observed that well-educated science people can become pretty emotional about what, for instance, a word means exactly. If the other party is a supposed crackpot, scientific people join in an emotional fight to put the supposed crackpot down, which can become pretty ugly. It seems to me that it borders on religious fanaticism.

Emeritus PF Gold (Ivan Seeking): Quote by Ryan_m_b: "I have a quibble with this; belief in pseudo-science and acceptance of science are generally not comparable. [...]" I agree with everything except the near-impossible part, which gets back to my point that this can't be viewed over short delta Ts. Over a period of years, people can make a complete 180. I've seen it happen many times. Truthfully, I never worried much about the individual arguments. I wasn't going to worry about changing the mind of some guy in Jersey who's been drinking too much. To me the point here was more a matter of information flow. In the short term it seems that chaos is winning the information war. But there is the underlying belief on my part that, with time and the free flow of information, the truth will sort itself out and the masses will follow. Just don't expect a watched [crack]pot to boil.

Sci Advisor PF Gold (Chronos): I have engaged in many such arguments, probably due to my dogmatic, mainstream views. They usually call my bluff. I then admit their logic is irrefutable and move on. But I am satisfied with having implanted a seed of doubt that will infect the body of their argument.

Emeritus PF Gold (Ivan Seeking): Quote by Chronos: "But, I am satisfied with having implanted a seed of doubt that will infect the body of their argument." The same technique was used by Janeway to destroy the Borg.

Sci Advisor PF Gold: An empty head is not really empty; it's stuffed full of rubbish. If the post is an honest request for explanation of something, I'll try. A discussion is an exchange of facts, but an argument is an exchange of ignorance, where silence is golden.

PF Gold (Pengwuino): Quote by Ryan_m_b: "I have a quibble with this [...] because they have a religious belief in these things and then retrospectively tack on any science they think supports it." Sorry, I didn't mean to say they've gone through the proper checks to convince themselves of their beliefs in the way that we do. I think most people have their own belief system that has its own way of distinguishing right and wrong. For a lot of people, they believe nothing except what their own eyes see and what they "feel" the right answer is. You can throw textbooks at them, lecture them for hours, bring up a dozen examples, but in the end they'll never be convinced unless they see it for themselves, because that's how they convince themselves. I actually think that's the big problem. It's impossible to convince people of something. People must convince themselves of things. :)

Emeritus Sci Advisor PF Gold (Ivan Seeking): How about this one: most people aren't geared to be scientists or engineers. In many cases, people simply choose to believe what makes them happy. In fact this probably applies to everyone to some extent. While it isn't appropriate to allow faith-based, unscientific, or pseudoscientific beliefs to be posted at a place like PF, perhaps fantasies are what allow people to function. What if, on average, most people need fantasies? Perhaps this is simply human nature and a defense mechanism that is necessary for most people to cope with a hostile and confusing world? What if, by proving a person's beliefs to be wrong or fallacious, you are actually inflicting psychological damage? Do we know if this is possible? I would bet that it is. That is to say, they will be less happy and it won't improve their life in the slightest.

PF Gold (Pengwuino): Quote by Ivan Seeking: "How about this one: Most people aren't geared to be scientists or engineers. [...]" One of my students told me that learning about science is actually quite depressing. She said it took a lot of the mystery out of life, and she said that in a kind of disappointed tone. I can imagine there exists a percentage of people who really do see the world as some sort of exciting fantasy full of mystery, and to whom science is on par with a world of elves and fairies. And why wouldn't they? For 18 years a child is bombarded with fantastic versions of reality on TV, in books, and in theaters, and this is subsequently reinforced by parents who, for the most part, don't think critically either. Why would they think science is the correct description of the world, and that not a single unicorn has ever existed..... outside of a special ranch that I am forbidden to speak of? Part of me thinks it's less of a coping mechanism and more of a simple upbringing issue. Then again, science is all about right vs. wrong, fact vs. fiction. Can it be that people are scared of being wrong and, since science is all about finding out what is right and wrong, are subsequently scared of science? P.S. I eventually rekindled my student's interest in string theory (she said she loved that kind of stuff) at the end, so it wasn't a totally depressing conversation.

PF Gold: Quote by Ivan Seeking: "Most people aren't geared to be scientists or engineers. [...] That is to say, they will be less happy and it won't improve their life in the slightest." Ivan Seeking, I have experienced exactly what you describe here... some folks are so comfortable with their "unscientific beliefs" that they do not want to hear from some scientist that they are mistaken. Yes, it is possible to inflict psychological damage on them (their egos) if their cherished beliefs are attacked. I have noticed this more than once. You are exactly right in saying they will be less happy and it would not improve their life at all, so in some cases, rather than trying to debunk someone's mythical belief, it is better to just remain silent. As for the OP: I use two guides to help me recognise crackpots: The Ten Questions to Detect Baloney, by Michael Shermer (HTTP://HOMEPAGES.WMICH.EDU/~KORISTA/BALONEY.HTML) and Carl Sagan's Baloney Detection Kit (http://www.carlsagan.com/index_ideascontent.htm). By the way, in these days of many different media newscasts, the above two sets of criteria help me sort out crackpot news from "more believable" news stories.

Member: Quote by the previous poster: "Yes, it is possible to inflict psychological damage on them (their egos) if their cherished beliefs are attacked. [...] it is better to just remain silent." But believing in wrong things can be harmful, for the person him/herself or for others. Homeopathy is an obvious example, but so are even more harmless-looking things such as playing the lottery: a lot of small bad choices (based on wrong conceptions, or often even wrong chains of reasoning) can lead to considerable harm in one's life. Do others agree? And does it weigh into the consideration when deciding whether to "convert" someone to reason?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2759294807910919, "perplexity": 1133.049544568028}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510276584.58/warc/CC-MAIN-20140728011756-00324-ip-10-146-231-18.ec2.internal.warc.gz"}
http://www.ck12.org/geometry/Segments-from-Chords/lesson/Segments-from-Chords/r13/
# Segments from Chords

What if you were given a circle with two chords that intersect each other? How could you use the length of some of the segments formed by their intersection to determine the lengths of the unknown segments? After completing this Concept, you'll be able to use the Intersecting Chords Theorem to solve problems like this one.

### Guidance

When we have two chords that intersect inside a circle, as shown below, the two triangles that result are similar. This makes the corresponding sides in each triangle proportional and leads to a relationship between the segments of the chords, as stated in the Intersecting Chords Theorem.

Intersecting Chords Theorem: If two chords intersect inside a circle so that one is divided into segments of length $a$ and $b$ and the other into segments of length $c$ and $d$, then $ab = cd$.

#### Example A

Find $x$ in each diagram below. Use the formula from the Intersecting Chords Theorem.

a) $12 \cdot 8 = 10x$, so $96 = 10x$ and $x = 9.6$.

b) $15x = 5 \cdot 9$, so $15x = 45$ and $x = 3$.

#### Example B

Solve for $x$. Use the Intersecting Chords Theorem.

a) $8 \cdot 24 = (3x+1) \cdot 12$, so $192 = 36x + 12$, $180 = 36x$, and $x = 5$.

b) $(x-5) \cdot 21 = (x-9) \cdot 24$, so $21x - 105 = 24x - 216$, $111 = 3x$, and $x = 37$.

#### Example C

Ishmael found a broken piece of a CD in his car. He places a ruler across two points on the rim, and the length of the chord is 8.5 cm. The distance from the midpoint of this chord to the nearest point on the rim is 1.75 cm. Find the diameter of the CD.

Think of this as two chords intersecting each other. If we were to extend the 1.75 cm segment, it would be a diameter. So, if we find $x$ in the diagram below and add it to 1.75 cm, we will find the diameter. The 8.5 cm chord is bisected by the diameter, giving two segments of 4.25 cm each:

$4.25 \cdot 4.25 = 1.75x$, so $18.0625 = 1.75x$ and $x \approx 10.3$ cm, making the diameter $10.3 + 1.75 \approx 12$ cm, which is the actual diameter of a CD.

### Guided Practice

Find $x$ in each diagram below. Simplify any radicals. For all problems, use the Intersecting Chords Theorem.

1. $15 \cdot 4 = 5x$, so $60 = 5x$ and $x = 12$.

2. $18x = 9 \cdot 3$, so $18x = 27$ and $x = 1.5$.

3. $12x = 9 \cdot 16$, so $12x = 144$ and $x = 12$.

### Practice

Fill in the blanks for each problem below and then solve for the missing segment.

$20x = \underline{\;\;\;\;\;\;\;}$

$\underline{\;\;\;\;\;\;} \cdot 4 = \underline{\;\;\;\;\;\;\;} \cdot x$

Find $x$ in each diagram below. Simplify any radicals. Find the value of $x$.

1. Suzie found a piece of a broken plate. She places a ruler across two points on the rim, and the length of the chord is 6 inches. The distance from the midpoint of this chord to the nearest point on the rim is 1 inch. Find the diameter of the plate.

2. Fill in the blanks of the proof of the Intersecting Chords Theorem.

Given: Intersecting chords $\overline{AC}$ and $\overline{BE}$. Prove: $ab = cd$

Statement | Reason
1. Intersecting chords $\overline{AC}$ and $\overline{BE}$ with segments $a, \ b, \ c,$ and $d$ | 1. __________
2. __________ | 2. Congruent Inscribed Angles Theorem
3. $\triangle ADE \sim \triangle BDC$ | 3. __________
4. __________ | 4. Corresponding parts of similar triangles are proportional
5. $ab = cd$ | 5. __________
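Example C generalizes to any broken circular object, since the diameter perpendicular to a chord bisects it. A small Python sketch of the computation (the function name is just for illustration):

```python
def diameter_from_chord(chord, depth):
    """Given a chord of a circle and `depth` (distance from the chord's
    midpoint to the nearest rim point), return the diameter.  By the
    Intersecting Chords Theorem, (chord/2)^2 = depth * x, where x is the
    rest of the diameter, so the diameter is depth + x."""
    half = chord / 2
    x = half**2 / depth
    return depth + x

print(diameter_from_chord(8.5, 1.75))  # ~12.07 cm, a CD (Example C)
print(diameter_from_chord(6, 1))       # 10 inches (the plate problem)
```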
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 30, "texerror": 0, "math_score": 0.7744250297546387, "perplexity": 635.2228016964651}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507442420.22/warc/CC-MAIN-20141017005722-00188-ip-10-16-133-185.ec2.internal.warc.gz"}
https://link.springer.com/article/10.1007/s00526-019-1660-7?error=cookies_not_supported&code=96cc218d-dc7f-443d-a5c0-eca2b6a66538
# A note on the singular set of area-minimizing hypersurfaces

## Abstract

We prove an isoperimetric-type bound on the $$(n-7)$$-dimensional measure of the singular set for a large class of area-minimizing n-dimensional hypersurfaces, in terms of the geometry of their boundary.

Communicated by C. de Lellis.

Edelen, N. A note on the singular set of area-minimizing hypersurfaces. Calc. Var. 59, 18 (2020). https://doi.org/10.1007/s00526-019-1660-7
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8559560775756836, "perplexity": 5956.844193432027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323584554.98/warc/CC-MAIN-20211016074500-20211016104500-00594.warc.gz"}
http://math.stackexchange.com/users/29066/lena-richman?tab=reputation
Lena Richman

Reputation: 84 (next privilege at 100 Rep: edit community wikis). Impact: ~2k people reached, 0 posts edited.

Recent reputation events include upvotes on "The assignment $R\mapsto\operatorname{Iso}_{R\text{-alg}}(A\otimes_k R,M_n(R))$ is a scheme?" and "Does every quasi-affine variety have an open cover of affine dense subsets?", and an accept on "If $n\geq 2$, why is $k[\mathbb{A}^n\setminus\{p\}]=k[\mathbb{A}^n]$?".
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25333157181739807, "perplexity": 12995.732604808334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394414.43/warc/CC-MAIN-20160624154954-00002-ip-10-164-35-72.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/ode-where-have-i-gone-wrong.84139/
# ODE - where have I gone wrong?

1. Aug 4, 2005

### Benny

Hi, can someone please help me out with the following DE question? I'm reading ahead so as to keep myself up to date, so I don't know how the part in italics (below) can be used; could someone please explain it to me. I'm also having trouble with the ODE itself.

Q. Given that y = x^7 is a solution of $$x^2 y'' - 12xy' + 42y = 0$$...(1), find the general solution of $$x^2 y'' - 12xy' + 42y = 280x^2 + 150x - 168$$...(2). Hence find a second linearly independent solution of (1) and a particular solution of (2).

Here is what I've tried. I think this is one of those DEs with a 'partially known complementary solution.' Let y = u(x)v(x) = (x^7)v. Then:

$$y' = 7x^6 v + x^7 \frac{dv}{dx}$$

$$y'' = 42x^5 v + 7x^6 \frac{dv}{dx} + 7x^6 \frac{dv}{dx} + x^7 \frac{d^2 v}{dx^2} = 42x^5 v + 14x^6 \frac{dv}{dx} + x^7 \frac{d^2 v}{dx^2}$$

Substituting into equation (2) and simplifying I get:

$$x^9 \frac{d^2 v}{dx^2} + 2x^8 \frac{dv}{dx} = 280x^2 + 150x - 168$$

$$\frac{d^2 v}{dx^2} + \frac{2}{x}\frac{dv}{dx} = 280x^{-7} + 150x^{-8} - 168x^{-9}$$

This is a first order linear ODE in dv/dx.

$$IF = \mu\left(x\right) = \exp\left(\int \frac{2}{x}\,dx\right) = x^2$$

$$\frac{d}{dx}\left(\mu\left(x\right)\frac{dv}{dx}\right) = \mu\left(x\right)\left(280x^{-7} + 150x^{-8} - 168x^{-9}\right)$$

$$\Rightarrow x^2 \frac{dv}{dx} = \int \left(280x^{-5} + 150x^{-6} - 168x^{-7}\right) dx$$

$$\Rightarrow \frac{dv}{dx} = -70x^{-6} - 30x^{-7} + 28x^{-8} + c_1 x^{-2}$$

Hmm, never mind about this part... I realised that when I did this on paper I wrote (d/dx)(IF·v) rather than (d/dx)(IF(dv/dx)). Anyway the general solution I get, and according to the answer it is correct, is:

$$y = 14x^2 + 5x - 4 + c_2 x^6 + c_3 x^7$$

Can someone tell me how to find a second linearly independent solution of (1) and a particular solution of (2) from the general solution that I have found? Any help would be good, thanks.

Edit: A while ago when I did questions where you substitute y = Ae^(rx) as a solution, i.e. the ones where you have an auxiliary/characteristic equation and you solve for the roots to find the solution to the DE, the particular solution was the 'non-complementary' part of the general solution. Comparing those sorts of questions with the general solution I found, the arbitrary constants suggest to me that (c_2)(x^6) + (c_3)(x^7) is a solution to the homogeneous equation while 14x^2 + 5x - 4 is a particular solution. Is that how I am supposed to deduce a particular solution to (2)? Also, how would I deduce a second linearly independent solution of (1)? When I did the easier questions involving characteristic equations I simply multiplied the particular solution, I think it was, by x whenever there was a repeated root; not sure if that is related to this though.

Last edited: Aug 4, 2005

2. Aug 4, 2005

### HallsofIvy

You write, correctly, that $$y = 14x^2 + 5x - 4 + c_2 x^6 + c_3 x^7$$. I don't know what you mean by a "second linearly independent solution". You have two constants, c_2 and c_3. The TWO linearly independent solutions are their coefficients: x^6 and x^7 (only the solution set of the homogeneous equation forms a vector space, so the term "linearly independent" properly applies only there). To find a specific solution, take whatever values you want for c_2 and c_3; c_2 = c_3 = 0 would be simple.

Last edited by a moderator: Aug 4, 2005

3.
Aug 4, 2005

### lurflurf

$$y = 14x^2 + 5x - 4 + c_2 x^6 + c_3 x^7$$

This is simple. For a particular solution choose any c2, c3; the typical choice would be c2 = c3 = 0, giving 14x^2 + 5x - 4. For a linearly independent solution take the difference of two particular solutions, making sure to use a different c2 for each; the typical choice would be c2(first) - 1 = c2(second), c3(first) = c3(second), giving x^6. This is also clear by inspection: for a particular solution take the stuff without constants, 14x^2 + 5x - 4; for an independent homogeneous solution take the stuff with a constant that you did not get the first time, x^6.

Now do this simple one for practice: y'' + y = exp(x), whose general solution is y = exp(x)/2 + c1 cos(x) + c2 sin(x). Find two linearly independent solutions to the homogeneous problem and a particular solution.

Last edited: Aug 4, 2005

4. Aug 4, 2005

### saltydog

For the record, this is a particular case of the Euler-Cauchy equation:

$$x^2y^{''}+axy^{'}+by=0$$

In general, powers $y=x^m$ decrease by 1 when we differentiate: $y^{'}=mx^{m-1}$, $y^{''}=m(m-1)x^{m-2}$. Hence they should solve linear differential equations built from y, $xy^{'}$, and $x^2y^{''}$. So, letting $y=x^m$ and substituting into the ODE:

$$m(m-1)x^m-12mx^m+42x^m=0$$

Avoiding the case x=0 and dividing through by $x^m$ yields:

$$m^2-13m+42=0$$

for which the roots are 7 and 6. That is, the general solution of the homogeneous equation is:

$$y(x)=c_1x^7+c_2x^6$$

5. Aug 5, 2005

### Benny

Thanks for the help guys. I'll need to go over some definitions. At the moment it's probably best if I just stick with solving some DEs until the theory is covered.
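As a quick sanity check of the algebra in this thread (my addition, not part of the original posts), SymPy confirms that the stated general solution satisfies equation (2) and independently recovers the Euler-Cauchy structure:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
c2, c3 = sp.symbols('c2 c3')

# Candidate general solution from the thread
y = 14*x**2 + 5*x - 4 + c2*x**6 + c3*x**7

# Left-hand side of equation (2): x^2 y'' - 12 x y' + 42 y
lhs = x**2*sp.diff(y, x, 2) - 12*x*sp.diff(y, x) + 42*y

# Should reduce to the right-hand side 280 x^2 + 150 x - 168
print(sp.simplify(lhs - (280*x**2 + 150*x - 168)))  # expect 0

# dsolve on the full ODE should reproduce the x^6, x^7 homogeneous pair
yf = sp.Function('y')
ode = sp.Eq(x**2*yf(x).diff(x, 2) - 12*x*yf(x).diff(x) + 42*yf(x),
            280*x**2 + 150*x - 168)
print(sp.dsolve(ode, yf(x)))
```

The first print should give 0, and dsolve returns the same $x^6$ and $x^7$ homogeneous solutions together with a particular part equivalent to 14x^2 + 5x - 4.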
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9434164762496948, "perplexity": 666.6730628855546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647671.73/warc/CC-MAIN-20180321160816-20180321180816-00077.warc.gz"}
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=E1BMAX_2011_v48n3_555
GLOBAL STABILITY OF THE VIRAL DYNAMICS WITH CROWLEY-MARTIN FUNCTIONAL RESPONSE

Authors: Zhou, Xueyong; Cui, Jingan

Abstract: It is well known that mathematical models provide very important information for research on the human immunodeficiency virus. However, the infection rate in almost all mathematical models is linear, which reflects only a simple interaction between the T-cells and the viral particles. In this paper, a differential equation model of HIV infection of $CD4^+$ T-cells with a Crowley-Martin functional response is studied. We prove that if the basic reproduction number $R_0 < 1$, the HIV infection is cleared from the T-cell population and the disease dies out; if $R_0 > 1$, the HIV infection persists in the host. We find that the chronic-disease steady state is globally asymptotically stable if $R_0 > 1$. Numerical simulations are presented to illustrate the results.

Keywords: HIV infection; permanence; globally asymptotical stability

Language: English
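The abstract above does not reproduce the model equations, so the sketch below is only an illustration: it assumes the commonly used three-compartment form with Crowley-Martin incidence $\beta T V / ((1+aT)(1+bV))$, and every parameter value is invented for demonstration, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed viral-dynamics model with Crowley-Martin incidence (not copied
# from the paper): T = healthy CD4+ T-cells, I = infected cells, V = virus.
# All parameter values below are illustrative only.
lam, d, beta, a, b, delta, p, c = 10.0, 0.01, 0.002, 0.1, 0.01, 0.3, 10.0, 3.0

def rhs(t, y):
    T, I, V = y
    incidence = beta * T * V / ((1 + a * T) * (1 + b * V))  # Crowley-Martin
    return [lam - d * T - incidence,
            incidence - delta * I,
            p * I - c * V]

# Next-generation R0 at the infection-free state T0 = lam/d for this
# assumed form: R0 = beta * p * T0 / (delta * c * (1 + a * T0))
T0 = lam / d
R0 = beta * p * T0 / (delta * c * (1 + a * T0))
print(f"R0 = {R0:.3f}")

sol = solve_ivp(rhs, (0, 500), [T0, 1.0, 1.0])
print("final state (T, I, V):", sol.y[:, -1])
```

With these made-up parameters $R_0 \approx 0.22 < 1$, so the trajectory settles back to the infection-free state, matching the abstract's first claim; increasing $\beta$ pushes $R_0$ above 1 and the simulation approaches a chronic steady state instead.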
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 4, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2801660895347595, "perplexity": 2645.428103661986}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937161.15/warc/CC-MAIN-20180420061851-20180420081851-00389.warc.gz"}
https://www.semanticscholar.org/paper/Convolutions-of-sets-with-bounded-VC-dimension-are-Sisask/66103c748cd7bfe9d2ef9c4466c1aaeedc57c793
Corpus ID: 119328562

# Convolutions of sets with bounded VC-dimension are uniformly continuous

@article{Sisask2018ConvolutionsOS,
  title={Convolutions of sets with bounded VC-dimension are uniformly continuous},
  journal={arXiv: Combinatorics},
  year={2018}
}

Published 2018 · Mathematics · arXiv: Combinatorics

We introduce a notion of VC-dimension for subsets of groups, defining this for a set $A$ to be the VC-dimension of the family $\{ A \cap(xA) : x \in A\cdot A^{-1} \}$. We show that if a finite subset $A$ of an abelian group has bounded VC-dimension, then the convolution $1_A*1_{-A}$ is Bohr uniformly continuous, in a quantitatively strong sense. This generalises and strengthens a version of the stable arithmetic regularity lemma of Terry and Wolf in various ways. In particular, it directly…
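To make the definition in the abstract concrete, here is a small brute-force illustration (my own sketch, not from the paper): it builds the family $\{A \cap (x+A) : x \in A-A\}$ for a subset $A$ of $\mathbb{Z}_n$, written additively, and computes its VC-dimension by checking which subsets of $A$ are shattered. The particular choice of $A$ is arbitrary.

```python
from itertools import combinations

def vc_dimension(family, ground):
    """Largest size of a subset S of `ground` shattered by `family`:
    every subset of S must arise as S & F for some F in the family."""
    fam = [frozenset(F) for F in family]
    best = 0
    for k in range(1, len(ground) + 1):
        found = False
        for S in combinations(ground, k):
            S = frozenset(S)
            traces = {S & F for F in fam}
            if len(traces) == 2 ** len(S):   # all 2^|S| traces realised
                best, found = k, True
                break
        if not found:                         # no set of size k shattered
            break                             # so none larger is either
        k += 1
    return best

# Example in the cyclic group Z_12 (illustrative choice of A)
n = 12
A = frozenset({0, 1, 3, 7})
diffs = {(a - b) % n for a in A for b in A}                      # A - A
family = [frozenset((x + a) % n for a in A) & A for x in diffs]  # A ∩ (x+A)
print("VC-dimension of {A ∩ (x+A)}:", vc_dimension(family, sorted(A)))
```

The early-exit loop is valid because any subset of a shattered set is shattered; the brute force is exponential in |A|, so this is purely a didactic check of the definition.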
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9621604681015015, "perplexity": 14859.635879847889}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991801.49/warc/CC-MAIN-20210515100825-20210515130825-00621.warc.gz"}
http://eprints.iisc.ernet.in/16149/
# Formation of stable ultra-thin pentagon Cu nanowires under high strain rate loading

Sutrakar, Vijay Kumar and Mahapatra, Roy D (2008) Formation of stable ultra-thin pentagon Cu nanowires under high strain rate loading. In: Journal of Physics: Condensed Matter, 20 (33). pp. 335206-1.

## Abstract

Molecular dynamics (MD) simulations of $<100>/\{100\}$ Cu nanowires at 10 K with varying cross-sectional areas ranging from $0.3615\times0.3615\ nm^2$ to $2.169\times2.169\ nm^2$ have been performed using the embedded atom method (EAM) to investigate their structural behaviors and properties at high strain rate. Our studies reported in this paper show the reorientation of $<100>/\{100\}$ square cross-sectional Cu nanowires into a series of stable ultra-thin pentagon Cu nanobridge structures with a diameter of $\sim 1$ nm under high strain rate tensile loading. The strain rates used for the present studies range from $1 \times 10^9$ to $0.5 \times 10^7\ s^{-1}$. The pentagonal multi-shell nanobridge structure is observed for cross-sectional dimensions < 1.5 nm. From these results we anticipate the application of pentagonal Cu nanowires even with diameters of $\sim 1$ nm in nano-electronic devices. A much larger plastic deformation is observed in the pentagonal multi-shell nanobridge structure as compared to structures that do not form such a nanobridge, which indicates that the pentagonal nanobridge is stable. The effect of strain rate on the mechanical properties of Cu nanowires is also analyzed and shows a decreasing yield stress and yield strain with decreasing strain rate for a given cross-section. Also, a decreasing yield stress and decreasing yield strain are observed for a given strain rate with increasing cross-sectional area. The elastic modulus is found to be $\sim 100$ GPa and is independent of strain rate and size effects for a given temperature.

Item Type: Journal Article
Publisher: Copyright of this article belongs to the Institute of Physics.
Department: Division of Mechanical Sciences > Aerospace Engineering (Formerly, Aeronautical Engineering)
Deposited: 07 Oct 2008 10:31
Last Modified: 19 Sep 2010 04:51
URI: http://eprints.iisc.ernet.in/id/eprint/16149
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9421332478523254, "perplexity": 2780.0873143635067}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393533.44/warc/CC-MAIN-20160624154953-00006-ip-10-164-35-72.ec2.internal.warc.gz"}
https://www.shaalaa.com/textbook-solutions/c/rd-sharma-solutions-10-mathematics-chapter-1-real-numbers_287
# RD Sharma solutions for Class 10 Mathematics chapter 1 - Real Numbers

## Chapter 1: Real Numbers

Ex. 1.10 | Ex. 1.20 | Ex. 1.30 | Ex. 1.40 | Ex. 1.50 | Ex. 1.60 | Others

#### Chapter 1: Real Numbers Exercise 1.10 solutions [Page 10]

Ex. 1.10 | Q 1 | Page 10 If a and b are two odd positive integers such that a > b, then prove that one of the two numbers (a+b)/2 and (a-b)/2 is odd and the other is even. Ex. 1.10 | Q 2 | Page 10 Prove that the product of two consecutive positive integers is divisible by 2. Ex. 1.10 | Q 3 | Page 10 Prove that the product of three consecutive positive integers is divisible by 6. Ex. 1.10 | Q 4 | Page 10 For any positive integer n, prove that $n^3 - n$ is divisible by 6. Ex. 1.10 | Q 5 | Page 10 Prove that if a positive integer is of the form 6q + 5, then it is of the form 3q + 2 for some integer q, but not conversely. Ex. 1.10 | Q 6 | Page 10 Prove that the square of any positive integer of the form 5q + 1 is of the same form. Ex. 1.10 | Q 7 | Page 10 Prove that the square of any positive integer is of the form 3m or 3m + 1 but not of the form 3m + 2. Ex. 1.10 | Q 8 | Page 10 Prove that the square of any positive integer is of the form 4q or 4q + 1 for some integer q. Ex. 1.10 | Q 9 | Page 10 Prove that the square of any positive integer is of the form 5q, 5q + 1, 5q + 4 for some integer q. Ex. 1.10 | Q 10 | Page 10 Show that the square of an odd positive integer is of the form 8q + 1, for some integer q. Ex. 1.10 | Q 11 | Page 10 Show that any positive odd integer is of the form 6q + 1 or 6q + 3 or 6q + 5, where q is some integer.

#### Chapter 1: Real Numbers Exercise 1.20 solutions [Pages 27 - 28]

(A short extended-Euclid code sketch illustrating these HCF problems appears at the end of this page.)

Ex. 1.20 | Q 1.01 | Page 27 Define HCF of two positive integers and find the HCF of the following pair of numbers: 32 and 54 Ex. 1.20 | Q 1.02 | Page 27 Define HCF of two positive integers and find the HCF of the following pair of numbers: 18 and 24 Ex. 1.20 | Q 1.03 | Page 27 Define HCF of two positive integers and find the HCF of the following pair of numbers: 70 and 30 Ex. 1.20 | Q 1.04 | Page 27 Define HCF of two positive integers and find the HCF of the following pair of numbers: 56 and 88 Ex. 1.20 | Q 1.05 | Page 27 Define HCF of two positive integers and find the HCF of the following pair of numbers: 475 and 495 Ex. 1.20 | Q 1.06 | Page 27 Define HCF of two positive integers and find the HCF of the following pair of numbers: 75 and 243 Ex. 1.20 | Q 1.07 | Page 27 Define HCF of two positive integers and find the HCF of the following pair of numbers: 240 and 6552 Ex. 1.20 | Q 1.08 | Page 27 Define HCF of two positive integers and find the HCF of the following pair of numbers: 155 and 1385 Ex. 1.20 | Q 1.09 | Page 27 Define HCF of two positive integers and find the HCF of the following pair of numbers: 100 and 190 Ex. 1.20 | Q 1.1 | Page 27 Define HCF of two positive integers and find the HCF of the following pair of numbers: 105 and 120 Ex. 1.20 | Q 2.1 | Page 27 Using Euclid's division algorithm, find the H.C.F. of 135 and 225 Ex. 1.20 | Q 2.2 | Page 27 Using Euclid's division algorithm, find the H.C.F. of 196 and 38220 Ex. 1.20 | Q 3.1 | Page 27 Find the HCF of the following pairs of integers and express it as a linear combination of 963 and 657. Ex. 1.20 | Q 3.2 | Page 27 Find the HCF of the following pairs of integers and express it as a linear combination of 592 and 252. Ex. 1.20 | Q 3.3 | Page 27 Find the HCF of the following pairs of integers and express it as a linear combination of 506 and 1155. Ex.
1.20 | Q 3.4 | Page 27 Find the HCF of the following pairs of integers and express it as a linear combination of 1288 and 575. Ex. 1.20 | Q 4 | Page 27 Find the largest number which divides 615 and 963 leaving remainder 6 in each case. Ex. 1.20 | Q 5 | Page 27 If the HCF of 408 and 1032 is expressible in the form 1032m − 408 × 5, find m. Ex. 1.20 | Q 6 | Page 27 If the HCF of 657 and 963 is expressible in the form 657x + 963y − 15, find x. Ex. 1.20 | Q 7 | Page 27 An army contingent of 616 members is to march behind an army band of 32 members in a parade. The two groups are to march in the same number of columns. What is the maximum number of columns in which they can march? Ex. 1.20 | Q 8 | Page 27 A merchant has 120 liters of oil of one kind, 180 liters of another kind and 240 liters of a third kind. He wants to sell the oil by filling the three kinds of oil in tins of equal capacity. What should be the greatest capacity of such a tin? Ex. 1.20 | Q 9 | Page 27 During a sale, colour pencils were being sold in packs of 24 each and crayons in packs of 32 each. If you want full packs of both and the same number of pencils and crayons, how many of each would you need to buy? Ex. 1.20 | Q 10 | Page 28 144 cartons of Coke cans and 90 cartons of Pepsi cans are to be stacked in a canteen. If each stack is of the same height and is to contain cartons of the same drink, what would be the greatest number of cartons each stack would have? Ex. 1.20 | Q 11 | Page 28 Find the greatest number which divides 285 and 1249 leaving remainders 9 and 7 respectively. Ex. 1.20 | Q 12 | Page 28 Find the largest number which exactly divides 280 and 1245 leaving remainders 4 and 3, respectively. Ex. 1.20 | Q 13 | Page 28 What is the largest number that divides 626, 3127 and 15628 and leaves remainders of 1, 2 and 3 respectively? Ex. 1.20 | Q 14 | Page 28 Find the greatest number that will divide 445, 572 and 699 leaving remainders 4, 5 and 6 respectively. Ex. 1.20 | Q 15 | Page 28 Find the greatest number which divides 2011 and 2623 leaving remainders 9 and 5 respectively. Ex. 1.20 | Q 17 | Page 28 Two brands of chocolates are available in packs of 24 and 15 respectively. If I need to buy an equal number of chocolates of both kinds, what is the least number of boxes of each kind I would need to buy? Ex. 1.20 | Q 18 | Page 28 A mason has to fit a bathroom with square marble tiles of the largest possible size. The size of the bathroom is 10 ft. by 8 ft. What would be the size in inches of the tile required that has to be cut, and how many such tiles are required? Ex. 1.20 | Q 19 | Page 28 15 pastries and 12 biscuit packets have been donated for a school fete. These are to be packed in several smaller identical boxes with the same number of pastries and biscuit packets in each. How many biscuit packets and how many pastries will each box contain? Ex. 1.20 | Q 20 | Page 28 105 goats, 140 donkeys and 175 cows have to be taken across a river. There is only one boat which will have to make many trips in order to do so. The lazy boatman has his own conditions for transporting them. He insists that he will take the same number of animals in every trip and they have to be of the same kind. He will naturally like to take the largest possible number each time. Can you tell how many animals went in each trip? Ex. 1.20 | Q 21 | Page 28 The length, breadth and height of a room are 8 m 25 cm, 6 m 75 cm and 4 m 50 cm, respectively. Determine the longest rod which can measure the three dimensions of the room exactly. Ex.
1.20 | Q 22 | Page 28 Express the HCF of 468 and 222 as 468x + 222y, where x, y are integers, in two different ways.

#### Chapter 1: Real Numbers Exercise 1.30 solutions [Page 35]

Ex. 1.30 | Q 1.1 | Page 35 Express each of the following integers as a product of its prime factors: 420 Ex. 1.30 | Q 1.2 | Page 35 Express each of the following integers as a product of its prime factors: 468 Ex. 1.30 | Q 1.3 | Page 35 Express each of the following integers as a product of its prime factors: 945 Ex. 1.30 | Q 1.4 | Page 35 Express each of the following integers as a product of its prime factors: 7325 Ex. 1.30 | Q 2.1 | Page 35 Determine the prime factorisation of each of the following positive integers: 20570 Ex. 1.30 | Q 2.2 | Page 35 Determine the prime factorisation of each of the following positive integers: 58500 Ex. 1.30 | Q 2.3 | Page 35 Determine the prime factorisation of each of the following positive integers: 45470971 Ex. 1.30 | Q 3 | Page 35 Explain why 7 × 11 × 13 + 13 and 7 × 6 × 5 × 4 × 3 × 2 × 1 + 5 are composite numbers. Ex. 1.30 | Q 4 | Page 35 Check whether $6^n$ can end with the digit 0 for any natural number n.

#### Chapter 1: Real Numbers Exercise 1.40 solutions [Pages 39 - 40]

Ex. 1.40 | Q 1.1 | Page 39 Find the LCM and HCF of the following pairs of integers and verify that LCM × HCF = Product of the integers: 26 and 91 Ex. 1.40 | Q 1.2 | Page 39 Find the LCM and HCF of the following pairs of integers and verify that LCM × HCF = Product of the integers: 510 and 92 Ex. 1.40 | Q 1.3 | Page 39 Find the LCM and HCF of the following pairs of integers and verify that LCM × HCF = Product of the integers: 336 and 54 Ex. 1.40 | Q 2.1 | Page 39 Find the LCM and HCF of the following integers by applying the prime factorisation method: 12, 15 and 21 Ex. 1.40 | Q 2.2 | Page 39 Find the LCM and HCF of the following integers by applying the prime factorisation method: 17, 23 and 29 Ex. 1.40 | Q 2.3 | Page 39 Find the LCM and HCF of the following integers by applying the prime factorisation method: 8, 9 and 25 Ex. 1.40 | Q 2.4 | Page 39 Find the LCM and HCF of the following integers by applying the prime factorisation method: 40, 36 and 126 Ex. 1.40 | Q 2.5 | Page 39 Find the LCM and HCF of the following integers by applying the prime factorisation method: 84, 90 and 120 Ex. 1.40 | Q 2.6 | Page 39 Find the LCM and HCF of the following integers by applying the prime factorisation method: 24, 15 and 36 Ex. 1.40 | Q 3 | Page 39 Given that HCF (306, 657) = 9, find LCM (306, 657). Ex. 1.40 | Q 4 | Page 40 Can two numbers have 16 as their HCF and 380 as their LCM? Give reason. Ex. 1.40 | Q 5 | Page 40 The HCF of two numbers is 145 and their LCM is 2175. If one number is 725, find the other. Ex. 1.40 | Q 6 | Page 40 The HCF of two numbers is 16 and their product is 3072. Find their LCM. Ex. 1.40 | Q 7 | Page 40 The LCM and HCF of two numbers are 180 and 6 respectively. If one of the numbers is 30, find the other number. Ex. 1.40 | Q 8 | Page 40 Find the smallest number which when increased by 17 is exactly divisible by both 520 and 468. Ex. 1.40 | Q 9 | Page 40 Find the smallest number which leaves remainders 8 and 12 when divided by 28 and 32 respectively. Ex. 1.40 | Q 10 | Page 40 What is the smallest number that, when divided by 35, 56 and 91, leaves remainders of 7 in each case? Ex. 1.40 | Q 11 | Page 40 A rectangular courtyard is 18 m 72 cm long and 13 m 20 cm broad. It is to be paved with square tiles of the same size. Find the least possible number of such tiles. Ex.
1.40 | Q 12 | Page 40 Find the greatest number of 6 digits exactly divisible by 24, 15 and 36. Ex. 1.40 | Q 13 | Page 40 Determine the number nearest to 110000 but greater than 100000 which is exactly divisible by each of 8, 15 and 21. Ex. 1.40 | Q 14 | Page 40 Find the least number that is divisible by all the numbers between 1 and 10 (both inclusive). Ex. 1.40 | Q 15 | Page 40 A circular field has a circumference of 360 km. Three cyclists start together and can cycle 48, 60 and 72 km a day, round the field. When will they meet again? Ex. 1.40 | Q 16 | Page 40 In a morning walk, three persons step off together; their steps measure 80 cm, 85 cm and 90 cm respectively. What is the minimum distance each should walk so that he can cover the distance in complete steps?

#### Chapter 1: Real Numbers Exercise 1.50 solutions [Page 49]

Ex. 1.50 | Q 1.1 | Page 49 Show that the following numbers are irrational. $\frac{1}{\sqrt{2}}$ Ex. 1.50 | Q 1.2 | Page 49 Show that the following numbers are irrational. $7\sqrt{5}$ Ex. 1.50 | Q 1.3 | Page 49 Show that the following numbers are irrational. $6 + \sqrt{2}$ Ex. 1.50 | Q 1.4 | Page 49 Show that the following numbers are irrational. $3 - \sqrt{5}$ Ex. 1.50 | Q 2.1 | Page 49 Prove that the following numbers are irrational: $\frac{2}{\sqrt{7}}$ Ex. 1.50 | Q 2.2 | Page 49 Prove that the following numbers are irrational: $\frac{3}{2\sqrt{5}}$ Ex. 1.50 | Q 2.3 | Page 49 Prove that the following numbers are irrational: $4 + \sqrt{2}$ Ex. 1.50 | Q 2.4 | Page 49 Prove that the following numbers are irrational: $5\sqrt{2}$ Ex. 1.50 | Q 3 | Page 49 Show that $2 - \sqrt{3}$ is an irrational number. Ex. 1.50 | Q 4 | Page 49 Show that $3 + \sqrt{2}$ is an irrational number. Ex. 1.50 | Q 5 | Page 49 Prove that $4 - 5\sqrt{2}$ is an irrational number. Ex. 1.50 | Q 6 | Page 49 Show that $5 - 2\sqrt{3}$ is an irrational number. Ex. 1.50 | Q 7 | Page 49 Prove that $2\sqrt{3} - 1$ is an irrational number. Ex. 1.50 | Q 8 | Page 49 Prove that $2 - 3\sqrt{5}$ is an irrational number. Ex. 1.50 | Q 9 | Page 49 Prove that $\sqrt{5} + \sqrt{3}$ is irrational. Ex. 1.50 | Q 11 | Page 49 Prove that for any prime positive integer p, $\sqrt{p}$ is an irrational number. Ex. 1.50 | Q 12 | Page 49 If p, q are prime positive integers, prove that $\sqrt{p} + \sqrt{q}$ is an irrational number.

#### Chapter 1: Real Numbers Exercise 1.60 solutions [Pages 56 - 58]

Ex. 1.60 | Q 1.1 | Page 56 Without actually performing the long division, state whether the following rational numbers will have a terminating decimal expansion or a non-terminating repeating decimal expansion. $\frac{23}{8}$ Ex. 1.60 | Q 1.2 | Page 56 Without actually performing the long division, state whether the following rational numbers will have a terminating decimal expansion or a non-terminating repeating decimal expansion. $\frac{125}{441}$ Ex. 1.60 | Q 1.3 | Page 56 Without actually performing the long division, state whether the following rational numbers will have a terminating decimal expansion or a non-terminating repeating decimal expansion. $\frac{35}{50}$ Ex. 1.60 | Q 1.4 | Page 57 Without actually performing the long division, state whether the following rational numbers will have a terminating decimal expansion or a non-terminating repeating decimal expansion. $\frac{77}{210}$ Ex.
1.60 | Q 1.5 | Page 56 Without actually performing the long division, state whether the following rational numbers will have a terminating decimal expansion or a non-terminating repeating decimal expansion. $\frac{129}{2^2 \times 5^7 \times 7^{17}}$ Ex. 1.60 | Q 1.6 | Page 56 Without actually performing the long division, state whether the following rational numbers will have a terminating decimal expansion or a non-terminating repeating decimal expansion. $\frac{987}{10500}$ Ex. 1.60 | Q 2.1 | Page 56 Write down the decimal expansions of the following rational numbers by writing their denominators in the form $2^m \times 5^n$, where m, n are non-negative integers. $\frac{3}{8}$ Ex. 1.60 | Q 2.2 | Page 56 Write down the decimal expansions of the following rational numbers by writing their denominators in the form $2^m \times 5^n$, where m, n are non-negative integers. $\frac{13}{125}$ Ex. 1.60 | Q 2.3 | Page 56 Write down the decimal expansions of the following rational numbers by writing their denominators in the form $2^m \times 5^n$, where m, n are non-negative integers. $\frac{7}{80}$ Ex. 1.60 | Q 2.4 | Page 56 Write down the decimal expansions of the following rational numbers by writing their denominators in the form $2^m \times 5^n$, where m, n are non-negative integers. $\frac{14588}{625}$ Ex. 1.60 | Q 2.5 | Page 56 Write down the decimal expansions of the following rational numbers by writing their denominators in the form $2^m \times 5^n$, where m, n are non-negative integers. $\frac{129}{2^2 \times 5^7}$ Ex. 1.60 | Q 4.1 | Page 57 What can you say about the prime factorisations of the denominators of the following rationals: 43.123456789 Ex. 1.60 | Q 4.2 | Page 57 What can you say about the prime factorisations of the denominators of the following rationals: $43.\overline{123456789}$ Ex. 1.60 | Q 4.3 | Page 57 What can you say about the prime factorisations of the denominators of the following rationals: $27.\overline{142857}$ Ex. 1.60 | Q 4.4 | Page 57 What can you say about the prime factorisations of the denominators of the following rationals: 0.120120012000120000 ... Q 16 | Page 58 If p and q are two prime numbers, then what is their HCF?

#### Chapter 1: Real Numbers solutions [Pages 50 - 58]

Q 1 | Page 57 State Euclid's division lemma. Q 2 | Page 57 State the Fundamental Theorem of Arithmetic. Q 3 | Page 57 Write 98 as a product of its prime factors. Q 4 | Page 57 Write the exponent of 2 in the prime factorisation of 144. Q 5 | Page 57 Write the sum of the exponents of the prime factors in the prime factorisation of 98. Q 6 | Page 57 If the prime factorisation of a natural number n is $2^3 \times 3^2 \times 5^2 \times 7$, write the number of consecutive zeros in n. Q 7 | Page 57 If the product of two numbers is 1080 and their HCF is 30, find their LCM. Q 8 | Page 58 Write the condition to be satisfied by q so that a rational number $\frac{p}{q}$ has a terminating decimal expansion. Q 9 | Page 58 Write the condition to be satisfied by q so that a rational number $\frac{p}{q}$ has a terminating decimal expansion. Q 10 | Page 58 Complete the missing entries in the following factor tree. Q 11 | Page 58 The decimal expansion of the rational number $\frac{43}{2^4 \times 5^3}$ will terminate after how many places of decimals? Q 12 | Page 58 Has the rational number $\frac{441}{2^2 \times 5^7 \times 7^2}$ a terminating or a non-terminating decimal representation? Q 13 | Page 58 Write whether $\frac{2\sqrt{45} + 3\sqrt{20}}{2\sqrt{5}}$ on simplification gives a rational or an irrational number. Q 14 | Page 58 What is an algorithm?
Q 15 | Page 58 What is a lemma? Q 17 | Page 58 If p and q are two prime numbers, then what is their LCM? Q 18 | Page 58 What is the total number of factors of a prime number? Q 19 | Page 58 What is a composite number? Q 20 | Page 58 What is the HCF of the smallest composite number and the smallest prime number? Q 21 | Page 58 HCF of two numbers is always a factor of their LCM (True/False). Q 22 | Page 58 π is an irrational number (True/False). Q 23 | Page 58 The sum of two prime numbers is always a prime number (True/False). Q 24 | Page 58 The product of any three consecutive natural numbers is divisible by 6 (True/False). Q 25 | Page 58 Every even integer is of the form 2m, where m is an integer (True/False). Q 26 | Page 58 Every odd integer is of the form 2m − 1, where m is an integer (True/False). Q 27 | Page 58 The product of two irrational numbers is an irrational number (True/False). Q 28 | Page 58 The sum of two irrational numbers is an irrational number (True/False). Q 29 | Page 50 For what value of n does $2^n \times 5^n$ end in 5? Q 30 | Page 58 If a and b are relatively prime numbers, then what is their HCF? Q 31 | Page 58 If a and b are relatively prime numbers, then what is their LCM? Q 32 | Page 58 Two numbers have 12 as their HCF and 350 as their LCM (True/False).

#### Chapter 1: Real Numbers solutions [Pages 10 - 61]

Q 1 | Page 59 The exponent of 2 in the prime factorisation of 144, is • 4 • 5 • 6 • 3 Q 2 | Page 59 The LCM of two numbers is 1200. Which of the following cannot be their HCF? • 600 • 500 • 400 • 200 Q 3 | Page 59 If n = $2^3 \times 3^4 \times 5^4 \times 7$, then the number of consecutive zeros in n, where n is a natural number, is • 2 • 3 • 4 • 7 Q 4 | Page 59 The sum of the exponents of the prime factors in the prime factorisation of 196, is • 1 • 2 • 4 • 6 Q 5 | Page 59 The number of decimal places after which the decimal expansion of the rational number $\frac{23}{2^2 \times 5}$ will terminate, is • 1 • 2 • 3 • 4 Q 6 | Page 59 If $p_1$ and $p_2$ are two odd prime numbers such that $p_1 > p_2$, then $p_1^2 - p_2^2$ is • an even number • an odd number • an odd prime number • a prime number Q 7 | Page 59 If two positive integers a and b are expressible in the form $a = pq^2$ and $b = p^3q$, p, q being prime numbers, then LCM (a, b) is • pq • $p^3q^3$ • $p^3q^2$ • $p^2q^2$ Q 8 | Page 59 In Q.No. 7, HCF (a, b) is • pq • $p^3q^3$ • $p^3q^2$ • $p^2q^2$ Q 9 | Page 59 If two positive integers m and n are expressible in the form $m = pq^3$ and $n = p^3q^2$, where p, q are prime numbers, then HCF (m, n) = • pq • $pq^2$ • $p^3q^2$ • $p^2q^2$ Q 10 | Page 60 If the LCM of a and 18 is 36 and the HCF of a and 18 is 2, then a = • 2 • 3 • 4 • 1 Q 11 | Page 60 The HCF of 95 and 152, is • 57 • 1 • 19 • 38 Q 12 | Page 60 If HCF (26, 169) = 13, then LCM (26, 169) = • 26 • 2 • 3 • 4 Q 13 | Page 10 If $a = 2^3 \times 3$, $b = 2 \times 3 \times 5$, $c = 3^n \times 5$ and LCM (a, b, c) = $2^3 \times 3^2 \times 5$, then n = • 1 • 2 • 3 • 4 Q 14 | Page 10 The decimal expansion of the rational number $\frac{14587}{1250}$ will terminate after • one decimal place • two decimal places • three decimal places • four decimal places Q 15 | Page 60 If p and q are co-prime numbers, then $p^2$ and $q^2$ are • coprime • not coprime • even • odd Q 16 | Page 60 Which of the following rational numbers has a terminating decimal? • $\frac{16}{225}$ • $\frac{5}{18}$ • $\frac{2}{21}$ • $\frac{7}{250}$ • None of the above Q 17 | Page 60 If 3 is the least prime factor of number a and 7 is the least prime factor of number b, then the least prime factor of a + b, is • 2 • 3 • 5 • 10 Q 18 | Page 60 $3.\overline{27}$
is • an integer • a rational number • a natural number • an irrational number Q 19 | Page 60 The smallest number by which $\sqrt{27}$ should be multiplied so as to get a rational number is • $\sqrt{27}$ • $3\sqrt{3}$ • $\sqrt{3}$ • 3 Q 20 | Page 60 The smallest rational number by which $\frac{1}{3}$ should be multiplied so that its decimal expansion terminates after one place of decimal, is • $\frac{3}{10}$ • $\frac{1}{10}$ • 3 • $\frac{3}{100}$ Q 21 | Page 60 If n is a natural number, then $9^{2n} - 4^{2n}$ is always divisible by • 5 • 13 • both 5 and 13 • None of these Q 22 | Page 60 If n is any natural number, then $6^n - 5^n$ always ends with • 1 • 3 • 5 • 7 Q 23 | Page 61 If the LCM and HCF of two rational numbers are equal, then the numbers must be • prime • co-prime • composite • equal Q 24 | Page 61 If the sum of the LCM and HCF of two numbers is 1260 and their LCM is 900 more than their HCF, then the product of the two numbers is • 203400 • 194400 • 198400 • 205400 Q 25 | Page 61 The remainder when the square of any prime number greater than 3 is divided by 6, is • 1 • 3 • 2 • 4

## RD Sharma solutions for Class 10 Mathematics chapter 1 - Real Numbers

RD Sharma solutions for Class 10 Maths chapter 1 (Real Numbers) include all questions with solutions and detailed explanations. This will clear students' doubts about any question and improve application skills while preparing for board exams. The detailed, step-by-step solutions will help you understand the concepts better and clear your confusion, if any. Shaalaa.com presents the CBSE 10 Mathematics solutions in a manner that helps students grasp basic concepts better and faster. Further, we at Shaalaa.com are providing such solutions so that students can prepare for written exams. RD Sharma textbook solutions can be a core help for self-study and act as perfect self-help guidance for students.

Concepts covered in Class 10 Mathematics chapter 1 Real Numbers are Revisiting Rational Numbers and Their Decimal Expansions, Revisiting Irrational Numbers, Proofs of Irrationality, Fundamental Theorem of Arithmetic Motivating Through Examples, Fundamental Theorem of Arithmetic, Euclid's Division Lemma, Real Numbers Examples and Solutions, Introduction of Real Numbers.

Using RD Sharma Class 10 solutions for the Real Numbers exercise is an easy way for students to prepare for the exams, as the solutions are arranged chapter-wise and also page-wise. The questions involved in RD Sharma Solutions are important questions that can be asked in the final exam. Most students of CBSE Class 10 prefer RD Sharma Textbook Solutions to score more in exams. Get the free view of chapter 1 Real Numbers Class 10 extra questions for Maths, and use Shaalaa.com to keep it handy for your exam preparation.
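As noted above in the Exercise 1.20 section, many of those problems ask for an HCF via Euclid's division algorithm and for expressing it as a linear combination. Here is a minimal sketch of the extended Euclidean algorithm in Python (my own addition, not part of the textbook), applied to the listed pair 963 and 657:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) = a*x + b*y,
    computed via Euclid's division algorithm."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    # g = b*x + (a % b)*y = b*x + (a - (a//b)*b)*y
    #   = a*y + b*(x - (a//b)*y)
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(963, 657)
print(g, x, y)                    # 9 -15 22, i.e. 9 = 963*(-15) + 657*22
assert 963 * x + 657 * y == g
```

The same function answers the other "express the HCF as a linear combination" exercises, such as the pairs 592 and 252, or 506 and 1155.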
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.579304039478302, "perplexity": 1444.1027154743074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314904.26/warc/CC-MAIN-20190819180710-20190819202710-00247.warc.gz"}
https://gateoverflow.in/64328/recurrence-relation-and-probability
N rooms are there, numbered from 1 to N. A person P is in charge of room allocation and allocates these rooms in the following way:

• Each query asks for two consecutive rooms.
• P selects two consecutive rooms out of the vacant rooms and serves the query. Once allocated, those two rooms cannot be reallocated. Example: if the 2nd and 3rd rooms are vacant, P can consider selecting them. If the 4th, 5th and 7th rooms are vacant, (5,7) cannot be considered allocatable (because 5 and 7 are not consecutive), but (4,5) can be considered allocatable. Out of all allocatable vacant pairs at present, P selects any two consecutive rooms at random (uniformly).
• Queries are coming continuously and P is serving them.
• At some point of time, all allocatable rooms are exhausted. At that time further queries are not processed. The process is STOPPED.

K is a positive integer $\leq$ N. What is the recurrence relation for the probability of the Kth room being filled after the room allocation process has stopped?

One example: if initially 4 rooms are given [1,2,3,4] and on the first query P selects (2,3), then the second query onwards cannot be processed, because although 1 and 4 are vacant, these rooms are not consecutive.

How can we do this? :-/ I could only figure out that, in your example (1,2,3,4): for 1 to be filled, 2 should be filled; for 2 to be filled, 1 or 3 should be filled; for 3 to be filled, 2 and 4 should be filled; and for 4 to be filled, 3 should be filled. For corner rooms, the probability that they are filled is 1/nC2; for other rooms, the probability that they are filled is 2/nC2 (considering their adjacent rooms). Pls help further in solving and correct me :)
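While the exact recurrence is the open part of the question, the target probability is easy to estimate by simulation, which is useful for checking any proposed formula. The following Monte-Carlo sketch is my own addition (not from the thread); names and trial counts are illustrative:

```python
import random

def simulate(n):
    """Run the allocation once: repeatedly pick a uniformly random
    adjacent vacant pair until none remains; return occupancy flags."""
    occupied = [False] * (n + 1)          # rooms 1..n; index 0 unused
    while True:
        pairs = [i for i in range(1, n)
                 if not occupied[i] and not occupied[i + 1]]
        if not pairs:
            return occupied
        i = random.choice(pairs)          # uniform over allocatable pairs
        occupied[i] = occupied[i + 1] = True

def prob_room_filled(n, k, trials=100_000):
    return sum(simulate(n)[k] for _ in range(trials)) / trials

# Example: N = 4. Room 1 stays empty only when the first pick is (2,3);
# the picks (1,2) and (3,4) each leave one more allocatable pair, so
# P(room 1 filled) should come out near 2/3, and room 2 is always filled.
print(prob_room_filled(4, 1), prob_room_filled(4, 2))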
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49636250734329224, "perplexity": 3729.6505701747337}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934803848.60/warc/CC-MAIN-20171117170336-20171117190336-00062.warc.gz"}
http://physics.stackexchange.com/questions/56729/quantum-chemistry-kronecker-delta-formality
# Quantum Chemistry Kronecker Delta formality

In semi-empirical quantum chemistry, one frequently encounters the so-called zero differential overlap approximation $$\langle \mu \nu | \lambda \sigma \rangle = \delta_{\mu\nu}\delta_{\lambda\sigma} \langle \mu \mu | \lambda \lambda \rangle .$$ Why is it rather not written as $$\langle \mu \nu | \lambda \sigma \rangle = \delta_{\mu\nu}\delta_{\lambda\sigma} \langle \mu \nu | \lambda \sigma \rangle = \langle \mu \mu | \lambda \lambda \rangle$$ since on the right hand side of the first equation there are no $\nu$ nor $\sigma$ contained anymore. So either all four variables plus the Kronecker deltas (middle expression of the second equation), or only the "remaining" variables after evaluation of the Kronecker deltas (last expression of the second equation). - I think there is a summation which can't be seen in this notation. You have tensors of rank 2 there. –  Andyk Mar 13 '13 at 12:12 Can you expand on your comment a bit? I don't understand where you suspect a summation. –  TMOTTM Mar 13 '13 at 13:39 Something like this $\langle \mu \nu | \lambda \sigma \rangle =\langle \chi^{\lambda},\chi_{\mu}\rangle \langle \chi^{\sigma},\chi_{\nu}\rangle=\langle \chi^{\lambda},\chi_{\mu} \rangle \langle \delta_{\lambda}^{\sigma}\chi^{\lambda},\delta_{\nu}^{\mu}\chi_{\mu} \rangle=\delta_{\lambda}^{\sigma}\delta_{\nu}^{\mu}\langle \chi^{\lambda},\chi_{\mu} \rangle \langle \chi^{\lambda}, \chi_{\mu}\rangle$ –  Andyk Mar 13 '13 at 17:47 So you have the deltas acting only on the second inner product (or the first, but only on one of them). Actually it would be better to change the indexes to $\lambda'$ and $\mu'$ in the second inner product in order not to confuse them. But there is no problem... –  Andyk Mar 13 '13 at 17:53 $\mu$ and $\nu$ are dummy variables; they can take any values from $1$ to $N$. As such, you cannot evaluate the Kronecker delta before specifying what $\mu$ and $\nu$ are. For example, $\delta_{14} = 0$, and $\delta_{NN} = 1$, but that is only because you have been given what $\mu$ and $\nu$ are. So the first line still contains $\nu$ and $\sigma$! What the equation is saying is that, when $\mu = \nu$, and $\lambda = \sigma$, for any $\mu, \lambda = 1, \cdots, N$, then $\langle \mu \mu | \lambda \lambda \rangle = \langle \mu \mu | \lambda \lambda \rangle$, which is obviously true (though it doesn't tell you what the numerical value is). But if any one of those conditions is not true then $\langle \mu \nu | \lambda \sigma \rangle = 0$. Now what you wrote doesn't make sense. If $\langle \mu \nu | \lambda \sigma \rangle = \delta_{\mu \nu} \delta_{\lambda \sigma} \langle \mu \nu | \lambda \sigma \rangle$ then $1 = \delta_{\mu \nu} \delta_{\lambda \sigma}$ (if $\langle \mu \nu | \lambda \sigma \rangle \neq 0$). But this is obviously a false statement, since if say $\mu = 1, \nu = 2$ then we have $1 = 0$. The bottom line is that $\mu, \nu, \lambda, \sigma$ are dummy variables, and you cannot evaluate the Kronecker delta without being given what the two indices are. I get the mechanism of the Kronecker delta and now I also get how to read the first equation. So in the integral of the right hand side in the first equation, the second $\mu$ is implicitly the $\nu$, which I plug in for the $\nu$ in the first Kronecker delta. –  TMOTTM Mar 13 '13 at 11:31
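As a concrete numerical illustration of the accepted reading (my own addition, not from the thread): starting from a random four-index array standing in for $\langle \mu\nu|\lambda\sigma\rangle$, the ZDO approximation keeps exactly the entries with $\mu = \nu$ and $\lambda = \sigma$ and zeroes everything else, with no summation anywhere.

```python
import numpy as np

N = 4                                      # number of basis functions
rng = np.random.default_rng(0)
eri = rng.random((N, N, N, N))             # stand-in for <mu nu | lam sig>

# ZDO: <mu nu|lam sig> -> delta_{mu nu} delta_{lam sig} <mu mu|lam lam>,
# i.e. only entries with mu == nu and lam == sig survive.
zdo = np.zeros_like(eri)
for mu in range(N):
    for lam in range(N):
        zdo[mu, mu, lam, lam] = eri[mu, mu, lam, lam]

print(zdo[1, 1, 2, 2] == eri[1, 1, 2, 2])  # True: mu = nu, lam = sig kept
print(zdo[1, 2, 2, 2])                     # 0.0: mu != nu, so zeroed
```

The explicit loop makes the index logic visible: the deltas act as a mask over the free indices, nothing is contracted.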
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.935865581035614, "perplexity": 174.2341173550485}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997858962.69/warc/CC-MAIN-20140722025738-00225-ip-10-33-131-23.ec2.internal.warc.gz"}
https://www.campusgate.co.in/2012/10/maxima-and-minima.html
# Maxima and Minima

Maxima and minima is a very important chapter for exams such as CAT, XAT and SNAP. Before you read this lesson, read Slope of polynomial.

1. Find the minimum value of ${x^2} - 4x + 5$.

We know that the graph of this expression is concave up. As you can see from the graph, it does not touch the x-axis, so the equation has no real root. We can find where this graph attains its minimum, but it has no maximum. Differentiating the given function we get $2x - 4$. We equate this expression to zero to find where the minimum exists: $2x - 4 = 0$, so $x = 2$. Substituting in the given expression we get ${2^2} - 4 \cdot 2 + 5 = 1$.

2. Find the maximum value of $2 - 2x - {x^2}$.

The coefficient of ${x^2}$ is negative. As this graph is concave down, it has a maximum; we cannot find a minimum. Differentiating the given expression we get $-2 - 2x$. Equating to zero, we get $x = -1$. So at $x = -1$ it attains its maximum, which is equal to $2 - 2(-1) - {\left( { - 1} \right)^2} = 3$.

3. If y = max (2 - 2x, x - 3), find the minimum value of this function.

The given function is a combination of two linear expressions. $2 - 2x$ is a downward-sloping line, and $x - 3$ is an upward-sloping line. As y is defined as the max of these two expressions, y can be represented by the graph marked with the red line. That is, up to some point between 1 and 2 it decreases, and it starts increasing after that point. So the graph attains its minimum where the two lines intersect. Equating $2 - 2x = x - 3$, we get $x = 5/3$. The minimum value can be obtained by substituting this x value into either of the linear expressions: $2 - 2(5/3) = -4/3$.

Three very important rules:

Rule 1: For positive variables, if the sum of the variables is a constant, the product of the variables will be maximum when all the variables are equal.

Eg: If a + b + c = 21, find the maximum value of abc. Here the sum of the variables is constant, so the product is maximum when all three variables are equal, i.e., $a = b = c = 7$ (from $3a = 21$). So the maximum product = 7 x 7 x 7 = 343.

Rule 2: For positive variables, if the product of the variables is a constant, the sum of the variables will be minimum when all the variables are equal.

Eg: Find the minimum value of $\displaystyle\frac{a}{b} + \frac{b}{c} + \frac{c}{a}$. Here the product of the variables is $\displaystyle\frac{a}{b} \times \frac{b}{c} \times \frac{c}{a} = 1$. So the given sum is minimum when all the terms are equal: $\displaystyle\frac{a}{b} = 1,\frac{b}{c} = 1,\frac{c}{a} = 1$, and the sum = 1 + 1 + 1 = 3. Hence $\displaystyle\frac{a}{b} + \frac{b}{c} + \frac{c}{a} \ge 3$.

Rule 3: For positive variables, the Arithmetic Mean (AM) is always greater than or equal to the Geometric Mean (GM), i.e., $AM \ge GM$.

Eg: If $xy = 16$, find the minimum value of $x + y$. The AM of x, y is $\displaystyle\frac{{x + y}}{2}$ and the GM of x, y is $\sqrt {xy}$. From the AM-GM rule, $\displaystyle\frac{{x + y}}{2} \ge \sqrt {xy}$. Substituting $xy = 16$, we get $\displaystyle\frac{{x + y}}{2} \ge 4$, or $x + y \ge 8$.

Other examples:

1. Find the greatest value of ${a^2}{b^3}{c^4}$ subject to the condition $a+b+c=18$.

Sol: Though the sum of the variables is constant in this question, we cannot directly apply the rules learned above.
We have to modify the given expression to suit the above rules. Let $Z = {a^2}{b^3}{c^4}$. Then $Z = {2^2} \cdot {3^3} \cdot {4^4} \cdot {\left( {\displaystyle\frac{a}{2}} \right)^2}{\left( {\displaystyle\frac{b}{3}} \right)^3}{\left( {\displaystyle\frac{c}{4}} \right)^4}$ [in any question of this type, we rewrite ${a^p}$ as ${\left( {\frac{a}{p}} \right)^p}$ and so on, and multiply by suitable powers to keep the expression equal to the original one].

Z will be maximum when ${\left( {\displaystyle\frac{a}{2}} \right)^2}{\left( {\displaystyle\frac{b}{3}} \right)^3}{\left( {\displaystyle\frac{c}{4}} \right)^4}$ is maximum. But this is a product of $2+3+4=9$ factors whose sum is $2\left( {\displaystyle\frac{a}{2}} \right) + 3\left( {\displaystyle\frac{b}{3}} \right) + 4\left( {\displaystyle\frac{c}{4}} \right) = a + b + c = 18$. The product will be maximum if all the factors are equal, i.e., if $\displaystyle\frac{a}{2} = \frac{b}{3} = \frac{c}{4} = \frac{{a + b + c}}{9} = \frac{{18}}{9} = 2$. So the maximum value of Z is ${2^2} \cdot {3^3} \cdot {4^4} \cdot {2^2} \cdot {2^3} \cdot {2^4} = {2^{19}} \cdot {3^3}$.

Alternate method: The greatest value of ${a^m}{b^n}{c^p}\ldots$, with m, n, p positive integers and $a+b+c+\ldots$ constant, is given by ${m^m}{n^n}{p^p}\ldots{\left( {\displaystyle\frac{{a + b + c + \ldots}}{{m + n + p + \ldots}}} \right)^{m + n + p + \ldots}}$. Applying this: ${2^2} \cdot {3^3} \cdot {4^4} \cdot {\left( {\displaystyle\frac{{18}}{9}} \right)^9} = {2^{19}} \cdot {3^3}$.

2. If $2x+3y=7$, find the greatest value of ${x^3}{y^4}$.

Solution: Let $Z = {x^3}{y^4}$ [we change the original function by taking ${\left( {\displaystyle\frac{{2x}}{3}} \right)^3}$ instead of ${x^3}$ and ${\left( {\displaystyle\frac{{3y}}{4}} \right)^4}$ instead of ${y^4}$]. So $Z = {x^3}{y^4} = {\left( {\displaystyle\frac{3}{2}} \right)^3}{\left( {\displaystyle\frac{4}{3}} \right)^4}{\left( {\displaystyle\frac{{2x}}{3}} \right)^3}{\left( {\displaystyle\frac{{3y}}{4}} \right)^4}$.

But ${\left( {\displaystyle\frac{{2x}}{3}} \right)^3}{\left( {\displaystyle\frac{{3y}}{4}} \right)^4}$ is a product of $3 + 4 = 7$ factors, whose sum is $3\left( {\displaystyle\frac{{2x}}{3}} \right) + 4\left( {\displaystyle\frac{{3y}}{4}} \right) = 2x + 3y = 7$. Therefore the product will be maximum if all the factors are equal, i.e., $\displaystyle\frac{{2x}}{3} = \frac{{3y}}{4} = \frac{{2x + 3y}}{{3 + 4}} = \frac{7}{7} = 1$. So the maximum value of Z is ${\left( {\displaystyle\frac{3}{2}} \right)^3}{\left( {\displaystyle\frac{4}{3}} \right)^4}{(1)^3}{(1)^4} = \displaystyle\frac{{27}}{8} \times \frac{{256}}{{81}} = \frac{{32}}{3}$.

Alternate method: We partially differentiate the given function w.r.t. x and then w.r.t. y, and take the ratio. We also partially differentiate $2x+3y = 7$ w.r.t. x and then w.r.t. y, and take the ratio. Equating these two ratios, we find y in terms of x.
$\Rightarrow \displaystyle\frac{{3{x^2}{y^4}}}{{4{x^3}{y^3}}} = \frac{2}{3} \Rightarrow \displaystyle\frac{{3y}}{{4x}} = \frac{2}{3} \Rightarrow \displaystyle\frac{y}{x} = \frac{8}{9} \Rightarrow y = \displaystyle\frac{8}{9}x$

Substituting in $2x + 3y = 7$ we get $x = \displaystyle\frac{3}{2}$, and then $y = \displaystyle\frac{4}{3}$. So the maximum value of ${x^3}{y^4}$ is ${\left( {\displaystyle\frac{3}{2}} \right)^3}{\left( {\displaystyle\frac{4}{3}} \right)^4} = \displaystyle\frac{{32}}{3}$.

3. If x, y, z are positive reals such that ${x^3}{y^2}{z^4} = 7$, find the minimum value of $2x + 5y + 3z$.

We modify the product to apply the AM-GM rule. Consider the product ${\left( {\displaystyle\frac{{2x}}{3}} \right)^3}{\left( {\displaystyle\frac{{5y}}{2}} \right)^2}{\left( {\displaystyle\frac{{3z}}{4}} \right)^4}$. This is a product of nine quantities. Applying AM $\ge$ GM:

$\Rightarrow \displaystyle\frac{{\left( {3 \cdot \displaystyle\frac{{2x}}{3} + 2 \cdot \displaystyle\frac{{5y}}{2} + 4 \cdot \displaystyle\frac{{3z}}{4}} \right)}}{{3 + 2 + 4}} \ge {\left\{ {{{\left( {\displaystyle\frac{{2x}}{3}} \right)}^3}{{\left( {\displaystyle\frac{{5y}}{2}} \right)}^2}{{\left( {\displaystyle\frac{{3z}}{4}} \right)}^4}} \right\}^{1/9}}$

$\Rightarrow \displaystyle\frac{{\left( {3 \cdot \displaystyle\frac{{2x}}{3} + 2 \cdot \displaystyle\frac{{5y}}{2} + 4 \cdot \displaystyle\frac{{3z}}{4}} \right)}}{{3 + 2 + 4}} \ge {\left\{ {{{\left( {\displaystyle\frac{2}{3}} \right)}^3}{{\left( {\displaystyle\frac{5}{2}} \right)}^2}{{\left( {\displaystyle\frac{3}{4}} \right)}^4}{x^3}{y^2}{z^4}} \right\}^{1/9}}$

$\Rightarrow 2x + 5y + 3z \ge 9{\left\{ {\displaystyle\frac{8}{{27}} \cdot \frac{{25}}{4} \cdot \frac{{81}}{{256}} \cdot 7} \right\}^{1/9}}$

$\Rightarrow 2x + 5y + 3z \ge 9{\left\{ {\displaystyle\frac{{525}}{{{2^7}}}} \right\}^{1/9}}$

4. Find the maximum value of ${\left( {7 - x} \right)^4}{\left( {2 + x} \right)^5}$ when x lies between −2 and 7.

To apply any of the above rules, we first check that the given terms are positive: $7-x$ and $2+x$ are both positive for x between −2 and 7. We have to find the maximum value of ${\left( {7 - x} \right)^4}{\left( {2 + x} \right)^5}$, i.e. of ${A^4}{B^5}$ where $A + B = 9$. It will be maximum if ${\left( {\displaystyle\frac{A}{4}} \right)^4}{\left( {\displaystyle\frac{B}{5}} \right)^5}$ is maximum. The sum of these $4+5=9$ factors is $4\left( {\displaystyle\frac{A}{4}} \right) + 5\left( {\displaystyle\frac{B}{5}} \right) = A + B = 9$. For the maximum product, $\displaystyle\frac{A}{4} = \frac{B}{5} = \frac{{A + B}}{{4 + 5}} = \frac{9}{9} = 1$, so A = 4 and B = 5, and the maximum product is ${4^4} \cdot {5^5}$.

Alternate method: We know that the greatest value of ${a^m}{b^n}{c^p}\ldots$, with m, n, p positive integers and $a+b+c+\ldots$ constant, is given by ${m^m}{n^n}{p^p}\ldots{\left( {\displaystyle\frac{{a + b + c + \ldots}}{{m + n + p + \ldots}}} \right)^{m + n + p + \ldots}}$. Therefore the maximum value of the above is ${4^4}{5^5}{\left( {\displaystyle\frac{{7 - x + 2 + x}}{{4 + 5}}} \right)^{4 + 5}} = {4^4}{5^5}{\left( {\displaystyle\frac{9}{9}} \right)^{4 + 5}} = {4^4}{5^5}$.
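As a quick sanity check of example 4, here is a short numerical sketch in R (not part of the original lesson): maximizing the function on the given interval should return $x = 3$ and the value ${4^4} \cdot {5^5} = 800000$.

# numerically maximize (7-x)^4 * (2+x)^5 on (-2, 7)
f <- function(x) (7 - x)^4 * (2 + x)^5
opt <- optimize(f, interval = c(-2, 7), maximum = TRUE)
opt$maximum    # close to 3, where (7-x)/4 = (2+x)/5
opt$objective  # close to 4^4 * 5^5 = 800000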
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9165634512901306, "perplexity": 2476.0913819629573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578770163.94/warc/CC-MAIN-20190426113513-20190426135124-00041.warc.gz"}
https://abcl.org/trac/changeset/13855
# Changeset 13855

Timestamp: 02/05/12 09:28:54 (9 years ago)
Message: manual: minor corrections to previous commit.
File: 1 edited (diff against r13854; removed lines are prefixed with "-", added lines with "+", unchanged context is unprefixed)

  (jcall "getBaseLoader" cl-user::*classpath-manager*))
  (defmethod print-object ((device-id (java:jclass "dto.nbi.service.hdm.alcatel.com.NBIDeviceID" *other-classloader*))
      stream)
  ;;; ... code is greater than 0x00ff.
- \section{Overloading of the CL:REQUIRE mechanism}
- The CL:REQUIRE mechanism is overloaded in the following ways:
+ \section{Overloading of the CL:REQUIRE Mechanism}
+ The \code{CL:REQUIRE} mechanism is overloaded by attaching the following semantic to the execution of \code{REQUIRE} on the following symbols:
  \begin{description}
- \item{ASDF} Loads the ASDF implementation shipped with the implementation.  After ASDF has been loaded in this manner, symbols passed to CL:REQUIRE which are otherwise unresolved, are passed to ASDF for a chance for resolution.  This means, for instance, that if CL-PPCRE can be located as a loadable ASDF system, \code{(require 'cl-ppcre)} is equivalent to \code{(asdf:load-system 'cl-ppcre)}.
- \item{ABCL-CONTRIB} Locates and pushes the toplevel contents of ``abcl-contrib.jar'' into the ASDF central registry.
+ \item{\code{ASDF}} Loads the \textsc{ASDF} implementation shipped with the implementation.  After \textsc{ASDF} has been loaded in this manner, symbols passed to \code{CL:REQUIRE} which are otherwise unresolved, are passed to ASDF for a chance for resolution.  This means, for instance, if \code{CL-PPCRE} can be located as a loadable \textsc{ASDF} system, \code{(require 'cl-ppcre)} is equivalent to \code{(asdf:load-system 'cl-ppcre)}.
+ \item{\code{ABCL-CONTRIB}} Locates and pushes the toplevel contents of ``abcl-contrib.jar'' into the \textsc{ASDF} central registry.
  \end{description}
- The user may extend the CL:REQUIRE mechanism by pushing function hooks into SYSTEM:*MODULE-PROVIDER-FUNCTIONS*.  Each such hook function takes a single argument containing the symbol passed to CL:REQUIRE and returns a non-nil value if it can successfully resolve the symbol.
- \subsection{JSS optionally extends the Reader}
+ The user may extend the \code{CL:REQUIRE} mechanism by pushing function hooks into \code{SYSTEM:*MODULE-PROVIDER-FUNCTIONS*}.  Each such hook function takes a single argument containing the symbol passed to \code{CL:REQUIRE} and returns a non-\code{NIL} value if it can successfully resolve the symbol.
+ \section{JSS optionally extends the Reader}
  The JSS contrib constitutes an additional, optional extension to the
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4846542179584503, "perplexity": 21267.77094074904}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141685797.79/warc/CC-MAIN-20201201231155-20201202021155-00041.warc.gz"}
https://www.physicsforums.com/threads/cyclic-subspaces-question.122504/
# Cyclic subspaces question.

1. May 31, 2006 ### MathematicalPhysicist
Prove that Z(v,T)=Z(u,T) iff g(T)(u)=v, where g(t) is prime compared to a nullify -T of u (which means f(t) is the minimal polynomial of u, i.e. f(T)(u)=0). (I think that by 'is prime compared to' they mean that f(t)=ag(t) for some scalar 'a'.)
I tried proving it this way: suppose g(T)(u)=v, and suppose v belongs to V with v not equal to 0. Then f(T)(u)=0=av, so a=0. I need to show that some polynomial function of v belongs to V too, but I don't really know how. And I also need help with the other direction of the proof (supposing Z(v,T)=Z(u,T)).

2. May 31, 2006 ### matt grime
What is Z(v,T), what is T (we can guess from context but shouldn't have to), and what is a 'nullify -T of u' (that makes no sense as a sentence; nullify is a verb)? For what it's worth, two polynomials f and g are prime if the ideal they span is all of the polynomial ring, or equivalently there exist polys h and k such that fh+gk=1 (or they have no common factors); it does not mean one is a scalar multiple of the other.

3. May 31, 2006 ### MathematicalPhysicist
Z(v,T) is a T-cyclic subspace of V which is spanned by v. v is a vector which belongs to V, and T is a linear operator T:V->V. The nullify is the unique polynomial f(t) (f(t)=t^k+...+a_1 t+a_0) of lowest degree, whose leading coefficient is 1, such that f(T)(v)=0.

4. May 31, 2006 ### matt grime
Z(v,T) is the subspace *generated* by T and v, i.e. the smallest subspace invariant under T and containing v. It is *not* 'spanned' by v; it is spanned by v, Tv, T^2v, ..., T^kv (and these are a basis, where k is the k in your expression for f). Span and generation have subtly different meanings, and I think you should keep them separate. Nullify is still a verb. f(t) is the minimal poly which nullifies v (apparently), but that does not mean you can call it 'the nullify'.
Last edited: May 31, 2006

5. May 31, 2006 ### MathematicalPhysicist
OK, now that matters are cleared up, any other hints on this question?

6. May 31, 2006 ### matt grime
Yes, as I put in my first post, start with the definition of coprime polynomials to see what is going on. (Note that your assertion that prime meant f(t)=ag(t) can't be what was meant, since f(T)u=0 by definition, hence ag(T)u=0, and Z(0,T)=0, which is not in general Z(u,T).)
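For readers who want to see the objects concretely, here is a small numerical sketch in R (my own illustration; the matrix and vector below are arbitrary examples, not from the thread): since Z(v,T) is spanned by v, Tv, T^2 v, ..., its dimension is the rank of the Krylov matrix built from those vectors.

# dim Z(v,T) = rank of the Krylov matrix [v, Tv, ..., T^{n-1} v]
set.seed(1)
n <- 4
Tmat <- matrix(rnorm(n^2), n, n)   # a linear operator T on R^n
v <- matrix(rnorm(n), ncol = 1)
K <- v
for (k in 1:(n - 1)) K <- cbind(K, Tmat %*% K[, k])  # append T^k v
qr(K)$rank  # dimension of Z(v,T); for a generic T and v this equals n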
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9063528180122375, "perplexity": 1789.2064338843043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645613.9/warc/CC-MAIN-20180318110736-20180318130736-00483.warc.gz"}
https://math.stackexchange.com/questions/3769659/very-slow-convergence-of-picard-method-for-solving-nonlinear-system-of-equations/3778159
# very slow convergence of Picard method for solving nonlinear system of equations

I have a nonlinear system of equations $$\left(\mathbf{K}_{\mathbf{L}}+\mathbf{K}_{\mathbf{N L}}(\mathbf{X})\right) \mathbf{X}=\mathbf{F}$$ in which $$\mathbf{K}_{\mathbf{N L}}(\mathbf{X})$$ represents the nonlinear stiffness matrix, which depends on $$\mathbf{X}$$. I'm solving it with Picard iteration, like this:

1. First ignore the nonlinear stiffness matrix and solve the linear system for $$\mathbf{X}$$.
2. Put the resulting $$\mathbf{X}$$ into the nonlinear stiffness matrix and solve the full equation for $$\mathbf{X}$$.
3. Check convergence, and repeat step 2 if convergence is not reached.

The problem I have here is that when the force vector ($$\mathbf{F}$$) is small, the nonlinear equation solves very fast, but when I increase the force beyond some threshold it takes ages to converge. I have tried solving it using the Matlab fsolve function with algorithms like 'trust-region' and 'levenberg-marquardt', but the same thing happens with large force vectors. Is there any way I can improve the convergence speed?

P.S. Here is a GIF of the result vector $$\mathbf{X}$$ inside the convergence loop, with a force vector slightly over the threshold.

edit (more details): my problem is the bending of a nonlinear Timoshenko beam, which has the three governing equations below: $$-\frac{d}{d x}\left\{A_{x x}\left[\frac{d u}{d x}+\frac{1}{2}\left(\frac{d w}{d x}\right)^{2}\right]+B_{x x} \frac{d \phi_{x}}{d x}\right\}=0$$ $$-\frac{d}{d x}\left\{A_{x x} \frac{d w}{d x}\left[\frac{d u}{d x}+\frac{1}{2}\left(\frac{d w}{d x}\right)^{2}\right]+B_{x x} \frac{d w}{d x} \frac{d \phi_{x}}{d x}\right\}-\frac{d}{d x}\left[S_{x x}\left(\frac{d w}{d x}+\phi_{x}\right)\right]=q$$ $$-\frac{d}{d x}\left\{D_{x x} \frac{d \phi_{x}}{d x}+B_{x x}\left[\frac{d u}{d x}+\frac{1}{2}\left(\frac{d w}{d x}\right)^{2}\right]\right\}+S_{x x}\left(\frac{d w}{d x}+\phi_{x}\right)=0$$ along with the proper boundary conditions; discretized with finite differences and assembled, they form: $$\left(\mathbf{K}_{\mathbf{L}}+\mathbf{K}_{\mathbf{N L}}(\mathbf{X})\right) \mathbf{X}=\mathbf{F}$$

• Have you tried ode23s and others? – user10354138 Jul 26 '20 at 9:56
• @user10354138 What's the difference between fsolve and ode23s? – omgtheykilledkenny Jul 26 '20 at 10:23
• The ordinary Picard algorithm converges linearly (notice that the scheme revolves around a fixed-point iteration). So it is not "superfast" even if you are close to the solution; this can be a nightmare (or a night with eyes open until the end of the computation) for some large systems. Still, you may want to check whether the conditions for using the scheme hold, i.e. the function and its derivative are continuous at the point where you start your iterations (typically the origin). – Basco Jul 28 '20 at 21:23
• The function is continuous, but I didn't really consider the derivative. Do you mean the derivative with respect to time, or to X itself? Because X has three separate variables {u; w; s}. – omgtheykilledkenny Jul 29 '20 at 12:19
• This approach is reasonable only if $\mathbf K_{NL} \ll \mathbf K_L$. Otherwise you should use a linearization of $\mathbf K_{NL}$, either full (Newton's method) or partial. The smaller the nonlinear term, the faster the convergence will be. – uranix Jul 29 '20 at 20:25

I assume that $$K_{NL}(0) = 0$$. Currently you are using the iteration $$(K_L + K_{NL}(X_n))X_{n+1} = F$$ with $$X_0 = 0$$. Instead, first solve $$(K_L + K_{NL}(X_\sigma))X_\sigma = \sigma F$$ for small $$\sigma$$, using this method.
This converges quickly, as you noticed. Then solve $$(K_L + K_{NL}(X_{\sigma'}))X_{\sigma'} = \sigma' F$$ for some $$\sigma' > \sigma$$, using the same iteration but now starting with the previously found $$X_\sigma$$ instead of the zero vector. And so on, until the right-hand side is $$F$$. For example, choose $$\sigma = N^{-1}, \, \sigma' = 2 N^{-1}$$ and so on, for sufficiently large $$N$$. Of course Anderson acceleration is also a good idea here :)

• Oh, so you mean that I start solving the nonlinear equations for small forces, and with their solution start increasing the force and converging again? Hmm, seems like a reasonable idea; let me try that. I have read the Anderson acceleration papers, but it looks very complicated. I did try the suggestion of @hyperplane, but it gives different answers for different beta ratios. – omgtheykilledkenny Aug 3 '20 at 7:59

Here's a simple idea you could try with very little extra effort: your GIF shows that the iterates oscillate back and forth, a phenomenon that can also occur in classical gradient descent algorithms if the problem is badly conditioned. A very popular and powerful method to alleviate this kind of problem is called momentum, which basically consists of averaging over previous iterations. So instead of throwing away all the previous iterates, you can do something like $$x_{k+1} = (1-\beta)g(x_{k}) + \beta x_k$$ Note that when $$\beta=0$$, we recover the standard fixed-point iteration.

Consider a simple fixed-point problem like $$x=\cos(x)$$, which exhibits the oscillatory phenomenon. Then, starting from the same seed, here are the residuals $$|x_*-x_k|$$ for different values of $$\beta$$: $$\small\begin{array}{lllllll} k & \beta=0 & \beta=0.1 &\beta=0.2 &\beta=0.3 &\beta=0.4 &\beta=0.5 \\\hline 0 & 5.45787 &5.45787 &5.45787 &5.45787 &5.45787 &5.45787 \\1 & 0.2572 & 0.777267 & 1.29733 & 1.8174 & 2.33747 & 2.85754 \\2 & 0.19566 & 0.538475 & 0.690985 & 0.555697 & 0.107195 & 0.610102 \\3 & 0.116858 & 0.162927 & 0.0696096 & 0.00419339 & 0.00218156 & 0.0454083 \\4 & 0.0835784 & 0.0908543 & 0.0249916 & 0.000723828 & 8.0351e-06 & 0.0070347 \\5 & 0.053654 & 0.0431759 & 0.00828335 & 0.000124022 & 3.34983e-08 & 0.0011389 \\6 & 0.0371882 & 0.0224696 & 0.00282738 & 2.12772e-05 & 1.39595e-10 & 0.000185622 \\7 & 0.0245336 & 0.0112062 & 0.000955803 & 3.64953e-06 & 5.81757e-13 & 3.02859e-05 \\8 & 0.0167469 & 0.00571477 & 0.000324182 & 6.26001e-07 & 2.44249e-15 & 4.94232e-06 \\9 & 0.0111768 & 0.00288222 & 0.000109831 & 1.07377e-07 & 1.11022e-16 & 8.06552e-07 \end{array}$$

A well-chosen momentum can speed up convergence tremendously! A variant of this idea specific to fixed-point iterations is known as Anderson Acceleration.

• I like the proposal, but although I cannot confirm it from the information in the question as posed by @omgtheykilledkenny, it seems that those oscillations are the result of plotting the solution for the linear stiffness and the update considering the nonlinear part. If using damping factors, it may be necessary to consider dissipation in some problems (physical systems may not conserve energy under some of these schemes). – Basco Jul 29 '20 at 0:50
• Thank you for your proposal, @Hyperplane. I am going to try this method and will let you know how it works. If I got it correctly, the function g(x) gives the result of my system of nonlinear equations, right? – omgtheykilledkenny Jul 29 '20 at 12:14
• Isn't $g(x_k)$ supposed to be $x_{k-1}$? Otherwise you are just using the previous iterate.
– Mick Aug 2 '20 at 13:24
• @Mick $g$ returns the solution to the linear system with the stiffness matrix evaluated at $x_k$, i.e. $g(x_k) = \text{solve}(K_L+K_{NL}(x_k),\, F)$ – Hyperplane Aug 4 '20 at 10:37

You can try some of the convergence acceleration algorithms, which can work very well for fixed-point (Picard-type) iterations. If you are using R, there is a package called SQUAREM which implements a reliable convergence acceleration scheme. It is based on the paper Varadhan and Roland, Simple and Globally Convergent Methods for Accelerating the Convergence of Any EM Algorithm, Scandinavian Journal of Statistics, 2008. EM algorithms are essentially Picard-like algorithms: they are contraction mappings which converge slowly.
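To make the momentum recipe concrete, here is a minimal sketch in base R (my own illustration, not taken from the answers above), applied to the toy problem $x=\cos(x)$; $\beta=0$ is the plain Picard iteration:

# damped / momentum fixed-point iteration x_{k+1} = (1-beta)*g(x_k) + beta*x_k
picard <- function(g, x0, beta = 0, niter = 20) {
  x <- x0
  for (k in 1:niter) x <- (1 - beta) * g(x) + beta * x
  x
}
xstar <- picard(cos, x0 = 6, beta = 0, niter = 10000)  # reference solution
for (beta in c(0, 0.2, 0.4)) {
  x10 <- picard(cos, x0 = 6, beta = beta, niter = 10)
  cat("beta =", beta, " residual after 10 iterations:", abs(x10 - xstar), "\n")
}

With a starting point of the same magnitude as in the table above, the residual after ten iterations drops by many orders of magnitude as beta approaches 0.4, consistent with the table.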
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 29, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9327524304389954, "perplexity": 310.4964540753159}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988828.76/warc/CC-MAIN-20210507211141-20210508001141-00184.warc.gz"}
https://matnoble.me/tech/programming/matlab-support-high-dpi-screens-on-linux/
# Fixing MATLAB's tiny fonts on high-DPI screens under Linux

◎ Before the settings below, the MATLAB fonts are too small to read
◎ Afterwards, MATLAB looks normal again

MATLAB supports high-DPI screens on Linux starting from R2017b. Tuning a high-DPI Linux system requires two steps:

1. Setting the MATLAB scale factor
2. Calibrating the system's DPI

The MATLAB scale factor affects the MATLAB desktop and the size/position of windows. The system DPI determines the scale and font size of axes and labels. The two tuning steps are described below:

1. To set the MATLAB scale factor, execute the following commands in the MATLAB Command Window (here the scale factor has been set to 1.5):

>> s = settings;s.matlab.desktop.DisplayScaleFactor
>> s.matlab.desktop.DisplayScaleFactor.PersonalValue = 1.5

2. To calibrate the system's DPI to match the scale factor, use the following terminal commands:

$ xdpyinfo | grep resolution
resolution: 96x96 dots per inch
$ xrandr --dpi 144

The DPI value chosen should be the resolution found with "xdpyinfo" multiplied by the MATLAB scale factor that was set. In this example, 96 × 1.5 = 144. MATLAB must be restarted after Step 2.

Note: In releases earlier than R2017b, high-DPI screens on Linux are not supported. The possible workarounds mentioned below may help improve the visual appearance:

• You can increase the font sizes of text in the different windows. However, the icon and font size of the toolbar cannot be changed.
• You can switch the high-DPI monitor to a lower screen resolution, for example 1920x1080, or as preferred.
• You can connect a lower-resolution monitor and use MATLAB on that monitor.

On my own machine (see the configuration below), I use a scale factor of 1.9, which gives 96 × 1.9 = 182.4:

>> s = settings;s.matlab.desktop.DisplayScaleFactor
>> s.matlab.desktop.DisplayScaleFactor.PersonalValue = 1.9

$ xdpyinfo | grep resolution
resolution: 96x96 dots per inch
$ xrandr --dpi 182.4

Note (system configuration):
• Ubuntu 20.04 LTS
• Intel® Core™ i5-10210U CPU @ 1.60GHz × 8
• Resolution 2560 × 1600 (16:10)
• GNOME 3.36.0
• 64-bit
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7016181349754333, "perplexity": 7735.286176343776}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988741.20/warc/CC-MAIN-20210506053729-20210506083729-00394.warc.gz"}
https://freakonometrics.hypotheses.org/date/2013/02
# Somewhere else, part 35

This week, as usual, there is much more to read somewhere else than on my own blog. For French readers, there are, as always, several interesting posts in French. Did I miss something?

# Job for life? Bishop of Rome?

The job of Bishop of Rome – i.e. the Pope – is considered to be a life-long commitment. I mean, it usually was. There have been 266 popes since 32 A.D. (according to http://oce.catholic.com/…): almost all popes have served until their death. But that does not mean that they were in the job for long… One can easily extract the data from the website,

> L2=scan("http://oce.catholic.com/index.php?title=List_of_Popes",what="character")
> index=which(L2=="</td><td>Reigned")
> X=L2[index+1]
> Y=strsplit(X,split="-")

But one should work a little bit, because sometimes there are inconsistencies, e.g. 911-913 and then 913-14, so we need some more lines. Further, we can extract from this file the year each pope started to reign, the year the reign ended, and its length, using the functions

> diffyears=function(x){
+ s=NA
+ if(sum(substr(x,1,1)=="c")>0){x[substr(x,1,1)=="c"]=substr(x[substr(x,1,1)=="c"],3,nchar(x[substr(x,1,1)=="c"]))}
+ if(length(x)==1){s=1}
+ if(length(x)==2){s=diff(as.numeric(x))}
+ return(s)}
> whichyearsbeg=function(x){
+ s=NA
+ if(sum(substr(x,1,1)=="c")>0){x[substr(x,1,1)=="c"]=substr(x[substr(x,1,1)=="c"],3,nchar(x[substr(x,1,1)=="c"]))}
+ if(length(x)==1){s=as.numeric(x)}
+ if(length(x)==2){s=as.numeric(x)[1]}
+ return(s)}
> whichyearsend=function(x){
+ s=NA
+ if(sum(substr(x,1,1)=="c")>0){x[substr(x,1,1)=="c"]=substr(x[substr(x,1,1)=="c"],3,nchar(x[substr(x,1,1)=="c"]))}
+ if(length(x)==1){s=as.numeric(x)}
+ if(length(x)==2){s=as.numeric(x)[2]}
+ return(s)}

On our file, we have

> Years=unlist(lapply(Y,whichyearsbeg))
> YearsB=c(Years[1:91],752,Years[92:length(Years)])
> YearsB[187]=1276
> Years=unlist(lapply(Y,whichyearsend))
> YearsE=c(Years[1:91],752,Years[92:length(Years)])
> YearsE[187]=1276
> YearsE[266]=2013
> YearsE[122]=914
> W=unlist(lapply(Y,diffyears))
> W=c(W[1:91],1,W[92:length(W)])
> W[W==-899]=1
> which(is.na(W))
[1] 187 266
> W[187]=1
> W[266]=2013-2005

If we plot it, we have the following graph,

> plot(YearsB,W,type="h")

and if we look at the average length, we have the following graph,

> n=200
> YEARS = seq(0,2000,length=n)
> Z=rep(NA,n)
> for(i in 2:(n-1)){
+ index=which((YearsB>YEARS[i]-50)&(YearsE<YEARS[i]+50))
+ Z[i] = mean(W[index])}
> plot(YEARS,Z,type="l",ylim=c(0,30))
> n=50
> YEARS = seq(0,2000,length=n)
> Z=rep(NA,n)
> for(i in 2:(n-1)){
+ index=which((YearsB>YEARS[i]-50)&(YearsE<YEARS[i]+50))
+ Z[i] = mean(W[index])}
> lines(YEARS,Z,type="l",col="grey")

which does not reflect the mortality improvements observed over two millennia. It might be related to the fact that the average age at the time of election has increased over time (for instance, Benedict XVI was elected at 78 – one of the oldest popes to be elected). Actually, serving a bit more than 7 years is almost the median,

> mean(W>=7.5)
[1] 0.424812

(42% of the Popes did stay at least 7 years in charge) or we can look at the histogram,

> hist(W,breaks=0:35)

Unfortunately, I could not find a more detailed database (including the years of birth, for instance) to start a life table of Popes.
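As a quick complement to the "almost the median" claim above, one can look at the quantiles of the reign lengths directly (a one-line sketch, assuming the vector W built above is still in the workspace):

# median and quartiles of the length of a papal reign, in years
median(W)
quantile(W, probs = c(0.25, 0.5, 0.75))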
# Spring break, and count data

As announced in class (for those who want to use the spring break to prepare), part of the midterm exam will be based on the dataset

> base=read.table("http://freakonometrics.free.fr/baseaffairs.txt",header=TRUE)
> tail(base)
SEX AGE YEARMARRIAGE CHILDREN RELIGIOUS EDUCATION OCCUPATION SATISFACTION Y
596 1 47 15.0 1 3 16 4 2 7
597 1 22 1.5 1 1 12 2 5 1
598 0 32 10.0 1 2 18 5 4 6
599 1 32 10.0 1 2 17 6 5 2
600 1 22 7.0 1 3 18 6 2 2
601 0 32 15.0 1 3 14 1 5 1

This dataset was built from the data of the article A Theory of Extramarital Affairs, by Ray Fair, published in 1978 in the Journal of Political Economy. The variable of interest is (as its name suggests) Y, the number of extramarital affairs during the past year, with several explanatory variables:

• sex: 0 for a woman, and 1 for a man
• age: age of the person interviewed
• yearmarriage: number of years of marriage
• children: 0 if the person has no children (with his or her spouse), and 1 otherwise
• religious: degree of religiosity, from 1 (anti-religious) to 5 (very religious)
• education: number of years of education, 9=grade school, 12=high school, …, up to 20=PhD
• occupation: built from the Hollingshead scale (cf http://cba.uah.edu/berkowd/….)
• Higher executives of large concerns, proprietors, and major professionals (1)
• Clerical and sales workers, technicians, and owners of little businesses (4)
• Skilled manual employees (5)
• Machine operators and semiskilled employees (6)
• Unskilled employees (7)
• satisfaction: perception of the marriage, from very unhappy (1) to very happy (5)

A priori, I will not answer questions about this dataset. Good luck, and enjoy the break.

# Somewhere else, part 34

Time to share interesting posts and articles found this week on the internet, with a popular article to start with (retweeted by almost 50 followers), and still a lot of posts here and there, with, as often, a few articles in French, starting with several articles on Alain Desrosières, who passed away at the beginning of the week, and also several miscellaneous articles. Did I miss something?

# Further readings on GLMs and ratemaking

Some articles found in actuarial journals on ratemaking, in the CAS forums, and among ASTIN conference papers.

# British Statisticians and American Gangsters

A few months ago, I published a post (in French) following my reading of Leonard Mlodinow's The Drunkard's Walk. More precisely, I mentioned a paragraph that I found extremely informative. But it looks like those gangsters were not only stealing money. They were also stealing ideas, here from a British statistician, namely Leonard Henry Caleb Tippett. Leonard Tippett is famous in Extreme Value Theory for his theorem (the so-called Fisher-Tippett theorem, which gives the possible limiting distributions for a normalized version of the maximum of an i.i.d. sequence; see old posts). According to Martin Gardner, Leonard Tippett suggested using the middle digits (not the last ones) of larger numbers to generate (pseudo) random sequences; more precisely, in 1927 he "published a table of 41,600 random numbers, obtained by taking the middle digits of the area of parishes in England". I could not get a copy of the book Random Sampling Numbers by Leonard Tippett (I could only find reviews, e.g. Nair (1938)), but I do believe that this technique should work to generate sequences that do look like sequences of random numbers.
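Here is a small sketch of that middle-digit idea in R (my own illustration; since Tippett's actual table of parish areas is not reproduced here, the "areas" below are simulated):

# take the middle digits of each "area" to build pseudo-random digits
set.seed(42)
areas <- round(runif(1000, min = 10000, max = 999999))  # hypothetical areas
middle_digits <- function(x, k = 2) {
  s <- as.character(x)
  m <- ceiling(nchar(s) / 2)        # position of the middle digit
  substr(s, m, m + k - 1)           # keep k digits around the middle
}
u <- as.numeric(middle_digits(areas))
hist(u, breaks = 20)  # should look roughly uniform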
Note that several techniques were mentioned in previous posts (in French) published a few years ago. Now, I should also take some time to apologize, because sometimes I am the one playing the gangster: I do steal a lot of illustrations on the internet, and I would like to apologize to their authors. On my previous blog, I did try – once – to add a short line at the end of a post, explaining where the illustration was coming from (trying to give credit to the illustrator). Less than 10 days after adding this short line, I received an email from a 'publisher', telling me that there were rights attached to the picture, and that I had 24 hours to remove it (if not, their lawyers would see what to do). Of course, I did remove the picture, and the mention. Now, I use pictures, and no mention. And I feel guilty. So I wanted to apologize for stealing others' work. I am still discussing hiring an illustrator, to illustrate my blog. Work in progress….

# Somewhere else, part 33

Like every Sunday (almost), it is time to share links to interesting posts I found here and there. This week, we should start with, among many other posts and articles, a few interesting posts in French. Did I miss something?

# Modeling individual claim costs in ratemaking

Before finishing the course on ratemaking, we will talk about the modeling of individual claim costs. We will discuss Gamma and lognormal distributions (for the latter, I suggest rereading what was said about log-linear models in the regression course, recalled in a short post published in the fall). We will also discuss mixtures of distributions, and multinomial distributions. The slides are online here.

To go further, there is the paper by Fu & Moncher (2004) on the Gamma versus lognormal comparison, http://casact.org/…, or Holler, Sommer & Trahair (1999), http://casact.org/…, which proposed a state of the art some fifteen years ago. Otherwise, I recommend reading the Practitioner's Guide to Generalized Linear Models, online at http://casact.org/….

# From Simpson's paradox to pies

Today, I wanted to publish a post on economics and decision theory. And probability too… Those who follow my blog should know that I am a big fan of Simpson's paradox. I also love to mention it in my econometrics classes. It raises important questions, which I relate to multicollinearity and to the interpretation of regression models with multiple (negatively correlated) explanatory variables. This paradox has amazing pedagogical virtues. I mentioned it several times on this blog (I should probably mention that I discovered this paradox via Marco Scarsini, who taught me a lot of things, in decision theory and in probability). For those who do not know this paradox, here is an example that Marco gave in one of his talks, a few years ago. Consider the following statistics, for healthy people entering some hospital:

| hospital | total | survivors | deaths | survival rate |
|---|---|---|---|---|
| hospital A | 600 | 590 | 10 | 98% |
| hospital B | 900 | 870 | 30 | 97% |

while, for sick people entering the same hospitals:

| hospital | total | survivors | deaths | survival rate |
|---|---|---|---|---|
| hospital A | 400 | 210 | 190 | 53% |
| hospital B | 100 | 30 | 70 | 30% |

Somehow, whatever your health situation, you should choose hospital A. Now, if we aggregate:

| hospital | total | survivors | deaths | survival rate |
|---|---|---|---|---|
| hospital A | 1000 | 800 | 200 | 80% |
| hospital B | 1000 | 900 | 100 | 90% |

i.e. without any doubt, people should choose hospital B.
So each subgroup favors hospital A, while the aggregate favors hospital B. With symbolic notations, one can have at the same time $\frac{590}{600}>\frac{870}{900}$ and $\frac{210}{400}>\frac{30}{100}$, with nevertheless $\frac{800}{1000}<\frac{900}{1000}$, as shown on the graph below.

There should be a connection between Simpson's paradox and the ecological fallacy (an issue I recently discovered and found extremely interesting, related again to the difficulty of interpreting regressions). But that's another story. My point today is that Colin Blyth mentioned another nice paradox, related this time to stochastic orderings. The idea is the following. Consider the three spinners drawn below (imagine some arrows in those circles):

• spinner A: no matter where the arrow stops, the gain is 3,
• spinner B: 56% chance to gain 2, 22% chance to gain 4, and 22% chance to gain 6,
• spinner C: 51% chance to gain 1, 49% chance to gain 5.

Instead of spinners, it is also possible to consider three different lotteries. You play against a friend: you pick a spinner, while the friend picks another. Everyone flicks his arrow, and the highest number wins (no matter the difference). Let us compute the odds. First case, A against B, from A's perspective:

| | B=2 (56%) | B=4 (22%) | B=6 (22%) |
|---|---|---|---|
| A=3 | +1, win | −1, lose | −3, lose |

In that case, A has a 56% chance of beating B. Second case, A against C, from A's perspective:

| | C=1 (51%) | C=5 (49%) |
|---|---|---|
| A=3 | +1, win | −2, lose |

In that case, A has a 51% chance of beating C. Third (and final) case, B against C, from B's perspective. Assuming independence between the spinners, joint probabilities can easily be computed:

| | C=1 | C=5 |
|---|---|---|
| B=2 | 28.56%, +1, win | 27.44%, −3, lose |
| B=4 | 11.22%, +3, win | 10.78%, −1, lose |
| B=6 | 11.22%, +5, win | 10.78%, +1, win |

In that case, B has a 61.78% chance of beating C. So, if we try to summarize:

• A is the best choice, since it beats both with – always – more than a 50% chance,
• C is the worst choice, since it is beaten by both with – always – more than a 50% chance.

Now, assume that you play not against one friend, but two friends, and everyone picks a different spinner. Let us compute the odds, one more time. First case, A against B and C, from A's perspective:

| | B=2, C=1 | B=2, C=5 | B=4, C=1 | B=4, C=5 | B=6, C=1 | B=6, C=5 |
|---|---|---|---|---|---|---|
| A=3 | 28.56%, +1, win | 27.44%, −2, lose | 11.22%, −1, lose | 10.78%, −1, lose | 11.22%, −3, lose | 10.78%, −3, lose |

In that case, A has a 28.56% chance of beating B and C. Second case, B against A and C, from B's perspective:

| | A=3, C=1 | A=3, C=5 |
|---|---|---|
| B=2 | 28.56%, −1, lose | 27.44%, −2, lose |
| B=4 | 11.22%, +1, win | 10.78%, −1, lose |
| B=6 | 11.22%, +3, win | 10.78%, +1, win |

In that case, B has a 33.22% chance of beating A and C. Third (and final) case, C against A and B, from C's perspective:

| | A=3, B=2 | A=3, B=4 | A=3, B=6 |
|---|---|---|---|
| C=1 | 28.56%, −2, lose | 11.22%, −3, lose | 11.22%, −5, lose |
| C=5 | 27.44%, +2, win | 10.78%, +1, win | 10.78%, −1, lose |

In that case, C has a 38.22% chance of beating A and B. So, if we try to summarize, this time:

• C is the best choice, since it has (strictly) more than a 1/3 chance to win, which is the highest probability,
• A is the worst choice, since it has (strictly) less than a 1/3 chance to win, which is the lowest probability.

Odd, isn't it? Now, is there an interpretation of that paradox? Yes. Martin Gardner, in his paper on induction and probability, mentioned the case of drug testing. The value we had with the spinner is the health level, rated from 1 to 6. Thus, taking drug A, you always get an average health level of 3. With drug C, on the other hand, you get either very sick (level 1) or very well (level 5). Consider now a doctor who wants to maximize the patient's chance of being well. If only pills A and C are available, then the doctor should choose A. This is what we've seen in the first part.
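Those odds are easy to double-check by simulation; here is a quick Monte Carlo sketch in R (my own check, reproducing the numbers above up to sampling noise):

# simulate the three spinners and compare pairwise vs three-way win rates
set.seed(1)
n <- 1e6
A <- rep(3, n)
B <- sample(c(2, 4, 6), n, replace = TRUE, prob = c(.56, .22, .22))
C <- sample(c(1, 5),    n, replace = TRUE, prob = c(.51, .49))
mean(A > B)          # ~ 0.56   : A beats B
mean(A > C)          # ~ 0.51   : A beats C
mean(B > C)          # ~ 0.6178 : B beats C
mean(A > B & A > C)  # ~ 0.2856 : but A wins the three-way game least often
mean(C > A & C > B)  # ~ 0.3822 : while C wins it most often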
Assume that now a company delivers a third pill, called drug B. Then the doctor should find C more interesting…. Odd, isn't it?

Colin Blyth gave a more amusing application. Assume that you like to go to the restaurant, and you like to get a dessert there. Dessert A – the apple pie – is the average one, with a standard level, that you rank 3 (on a scale from 1 to 6). Dessert C – the cheese cake – can either be awful (ranked 1) or delicious (ranked 5). You'd better go for the apple pie if you want to maximize the probability of not being disappointed (i.e. maximizing your "best chance" according to Colin Blyth, but I guess it can be interpreted as regret minimization too). Now assume that dessert B – the blueberry pie – is available (with ranks given by the spinner). Then you should go for the cheese cake. I let you imagine the discussion that you can have, then, with your favorite waitress:

– Hi Mr Freakonometrics, do you want a piece of apple pie? (yes, actually she also comes frequently on my blog, and knows me from my pseudo…)
– Probably. But actually, I was wondering if you did have your blueberry pie today?
– Yes, in fact we do….
– Great, in that case, I'll go for the cheese cake.

She'll probably think that I am a freak… so I hope she'll come and read my post, to understand that, actually, it does make a lot of sense to go for what was supposed to be my worst case.

# Modeling individual losses with mixtures

Usually, the sentence that I keep saying in my regression classes is "please, look at your data". In our previous post, we were playing like most econometricians: we did not look at the data. Actually, if we look at the distribution of individual losses in the dataset, we see the following,

> n=nrow(couts)
> plot(sort(couts$cout),(1:n)/(n+1),xlim=c(0,10000),type="s",lwd=2,col="green")

It looks like there are fixed-cost claims in our database. How do we deal with it in the standard case (e.g. in the Loss Models textbook)? We can use a mixture of – at least – three distributions here,

$f(y) = p_1 {\color{Blue} f_1(}y{\color{Blue} )} + p_2 {\color{Magenta} \delta_{\kappa}(}y{\color{Magenta} )} + p_3 {\color{Red} f_3(}y{\color{Red} )}$

with

• a distribution for small claims, ${\color{Blue} f_1(}\cdot{\color{Blue} )}$, e.g. an exponential distribution
• a Dirac mass at ${\color{Magenta} \kappa}$, i.e. ${\color{Magenta} \delta_{\kappa}(}\cdot{\color{Magenta} )}$
• a distribution for larger claims, ${\color{Red} f_3(}\cdot{\color{Red} )}$, e.g. a Gamma, or a lognormal, distribution

> I1=which(couts$cout<1120)
> I2=which((couts$cout>=1120)&(couts$cout<1220))
> I3=which(couts$cout>=1220)
> (p1=length(I1)/nrow(couts))
[1] 0.3284823
> (p2=length(I2)/nrow(couts))
[1] 0.4152807
> (p3=length(I3)/nrow(couts))
[1] 0.256237
> X=couts$cout
> (kappa=mean(X[I2]))
[1] 1171.998
> X0=X[I3]-kappa
> u=seq(0,10000,by=20)
> F1=pexp(u,1/mean(X[I1]))
> F2= (u>kappa)
> F3=plnorm(u-kappa,mean(log(X0)),sd(log(X0))) * (u>kappa)
> F=F1*p1+F2*p2+F3*p3
> lines(u,F)

In our previous post, we discussed the idea that all parameters might be related to some covariates, i.e.
$f(y|\boldsymbol{X}) = p_1(\boldsymbol{X}) {\color{Blue} f_1(}y|\boldsymbol{X}{\color{Blue} )} + p_2(\boldsymbol{X}) {\color{Magenta} \delta_{\kappa}(}y{\color{Magenta} )} + p_3(\boldsymbol{X}) {\color{Red} f_3(}y|\boldsymbol{X}{\color{Red} )}$

which yields the following premium model,

$\mathbb{E}(Y|\boldsymbol{X}) = {\color{Blue} {\underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq s_1)}_{A} \cdot {\underbrace{\mathbb{P}(Y\leq s_1|\boldsymbol{X})}_{D}}}}\\+{\color{Purple} {{\underbrace{\mathbb{E}(Y|Y\in( s_1,s_2], \boldsymbol{X}) }_{B}}\cdot {\underbrace{\mathbb{P}(Y\in( s_1,s_2]| \boldsymbol{X})}_{D}}}}\\+{\color{Red} {{\underbrace{\mathbb{E}(Y|Y> s_2, \boldsymbol{X}) }_{C}}\cdot {\underbrace{\mathbb{P}(Y> s_2| \boldsymbol{X})}_{D}}}}$

For the ${\color{Blue} A}$, ${\color{Magenta} B}$ and ${\color{Red} C}$ terms, that's easy, we can use standard models we've seen in the course. For the probabilities, we should use a multinomial model. Recall that for the logistic regression model, if $(\pi,1-\pi)=(\pi_1,\pi_2)$, then $\log \frac{\pi}{1-\pi}=\log \frac{\pi_1}{\pi_2} =\boldsymbol{X}'\boldsymbol{\beta}$ i.e. $\pi_1 = \frac{\exp(\boldsymbol{X}'\boldsymbol{\beta})}{1+\exp(\boldsymbol{X}'\boldsymbol{\beta})}$ and $\pi_2 = \frac{1}{1+\exp(\boldsymbol{X}'\boldsymbol{\beta})}$

To derive a multivariate extension, write $\pi_1 = \frac{\exp(\boldsymbol{X}'\boldsymbol{\beta}_1)}{1+\exp(\boldsymbol{X}'\boldsymbol{\beta}_1)+\exp(\boldsymbol{X}'\boldsymbol{\beta}_2)}$ $\pi_2 = \frac{\exp(\boldsymbol{X}'\boldsymbol{\beta}_2)}{1+\exp(\boldsymbol{X}'\boldsymbol{\beta}_1)+\exp(\boldsymbol{X}'\boldsymbol{\beta}_2)}$ and $\pi_3 = \frac{1}{1+\exp(\boldsymbol{X}'\boldsymbol{\beta}_1)+\exp(\boldsymbol{X}'\boldsymbol{\beta}_2)}$

Again, maximum likelihood techniques can be used, since $\mathcal{L}(\boldsymbol{\pi},\boldsymbol{y})\propto \prod_{i=1}^n \prod_{j=1}^3 \pi_{i,j}^{Y_{i,j}}$ where here, variable $Y_{i}$ – which takes three levels – is split into three indicators (like any categorical explanatory variable in a standard regression model). Thus, $\log \mathcal{L}(\boldsymbol{\beta},\boldsymbol{y})\propto \sum_{i=1}^n \sum_{j=1}^2 \left(Y_{i,j} \boldsymbol{X}_i'\boldsymbol{\beta}_j\right) - n_i\log\left[1+\exp(\boldsymbol{X}'\boldsymbol{\beta}_1)+\exp(\boldsymbol{X}'\boldsymbol{\beta}_2)\right]$ and, as for the logistic regression, we can then use the Newton-Raphson algorithm to compute the maximum likelihood numerically. In R, first we have to define the levels, e.g.
> seuils=c(0,1120,1220,1e+12)
> couts$tranches=cut(couts$cout,breaks=seuils,
+ labels=c("small","fixed","large"))
nocontrat no garantie cout exposition zone puissance agevehicule
1 1870 17219 1RC 1692.29 0.11 C 5 0
2 1963 16336 1RC 422.05 0.10 E 9 0
3 4263 17089 1RC 549.21 0.65 C 10 7
4 5181 17801 1RC 191.15 0.57 D 5 2
5 6375 17485 1RC 2031.77 0.47 B 7 4
ageconducteur bonus marque carburant densite region tranches
1 52 50 12 E 73 13 large
2 78 50 12 E 72 13 small
3 27 76 12 D 52 5 small
4 26 100 12 D 83 0 small
5 46 50 6 E 11 13 large

Then, we can run a multinomial regression, from

> library(nnet)

using some selected covariates

> reg=multinom(tranches~ageconducteur+agevehicule+zone+carburant,data=couts)
# weights: 30 (18 variable)
initial value 2113.730043
iter 10 value 2063.326526
iter 20 value 2059.206691
final value 2059.134802
converged

The output is here

> summary(reg)
Call:
multinom(formula = tranches ~ ageconducteur + agevehicule + zone + carburant, data = couts)

Coefficients:
(Intercept) ageconducteur agevehicule zoneB zoneC
fixed -0.2779176 0.012071029 0.01768260 0.05567183 -0.2126045
large -0.7029836 0.008581459 -0.01426202 0.07608382 0.1007513
zoneD zoneE zoneF carburantE
fixed -0.1548064 -0.2000597 -0.8441011 -0.009224715
large 0.3434686 0.1803350 -0.1969320 0.039414682

Std. Errors:
(Intercept) ageconducteur agevehicule zoneB zoneC zoneD
fixed 0.2371936 0.003738456 0.01013892 0.2259144 0.1776762 0.1838344
large 0.2753840 0.004203217 0.01189342 0.2746457 0.2122819 0.2151504
zoneE zoneF carburantE
fixed 0.1830139 0.3377169 0.1106009
large 0.2160268 0.3624900 0.1243560

To visualize the impact of a covariate (one only), one can also use spline functions

> library(splines)
> reg=multinom(tranches~agevehicule,data=couts)
# weights: 9 (4 variable)
initial value 2113.730043
final value 2072.462863
converged
> reg=multinom(tranches~bs(agevehicule),data=couts)
# weights: 15 (8 variable)
initial value 2113.730043
iter 10 value 2070.496939
iter 20 value 2069.787720
iter 30 value 2069.659958
final value 2069.479535
converged

For instance, if the covariate is the age of the car, we have the following probabilities

> predict(reg,newdata=data.frame(agevehicule=5),type="probs")
small fixed large
0.3388947 0.3869228 0.2741825

and for all ages from 0 to 20. For instance, for new cars, the proportion of fixed costs is rather small (here in purple), and it keeps increasing with the age of the car. If the covariate is the density of population in the area where the driver lives, we obtain the following probabilities

> reg=multinom(tranches~bs(densite),data=couts)
# weights: 15 (8 variable)
initial value 2113.730043
iter 10 value 2068.469825
final value 2068.466349
converged
> predict(reg,newdata=data.frame(densite=90),type="probs")
small fixed large
0.3484422 0.3473315 0.3042263

Based on those probabilities, it is then possible to derive the expected cost of a claim, given some covariates (e.g. the density).
But first, define subsets of the whole dataset

> sousbaseA=couts[couts$tranches=="small",]
> sousbaseB=couts[couts$tranches=="fixed",]
> sousbaseC=couts[couts$tranches=="large",]

with a threshold given by

> (k=mean(sousbaseB$cout))
[1] 1171.998

Then, let us run our four models; regA and regB, the models for the small and the fixed cost components, are fit on sousbaseA and sousbaseB along the same lines as regC below,

> reg=multinom(tranches~bs(densite),data=couts)
> regC=glm((cout-k)~bs(densite),data=sousbaseC,family=Gamma(link="log"))

We can now compute predictions based on those models,

> nouveau=data.frame(densite=seq(10,100))
> proba=predict(reg,newdata=nouveau,type="probs")
> predA=predict(regA,newdata=nouveau,type="response")
> predB=predict(regB,newdata=nouveau,type="response")
> predC=predict(regC,newdata=nouveau,type="response")+k
> pred=cbind(predA,predB,predC)

To visualize the impact of each component on the premium, we can compute probabilities, as well as expected costs (given a claim in each subset),

> cbind(proba,pred)[seq(10,90,by=10),]
small fixed large predA predB predC
10 0.3344014 0.4241790 0.2414196 423.3746 1171.998 7135.904
20 0.3181240 0.4471869 0.2346892 428.2537 1171.998 6451.890
30 0.3076710 0.4626572 0.2296718 438.5509 1171.998 5499.030
40 0.3032872 0.4683247 0.2283881 451.4457 1171.998 4615.051
50 0.3052378 0.4620219 0.2327404 463.8545 1171.998 3961.994
60 0.3136136 0.4417057 0.2446807 472.3596 1171.998 3586.833
70 0.3279413 0.4056971 0.2663616 473.3719 1171.998 3513.601
80 0.3464842 0.3534126 0.3001032 463.5483 1171.998 3840.078
90 0.3652932 0.2868006 0.3479061 440.4925 1171.998 4912.379

Now, it is possible to plot those figures in a graph,

> barplot(t(proba*pred))
> abline(h=mean(couts$cout),lty=2)

(the dotted horizontal line is the average cost of a claim in our dataset).
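Putting the pieces together, the pure premium at each density level is the probability-weighted sum of the three expected costs; here is a short sketch reusing the objects proba, pred, nouveau and couts defined above:

# expected cost of a claim, as a function of the density of population
prime <- rowSums(proba * pred)
plot(nouveau$densite, prime, type = "l",
     xlab = "density", ylab = "expected cost of a claim")
abline(h = mean(couts$cout), lty = 2)  # overall average, as a benchmark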
Actually, it is mentioned in More Mathematical Morsels by Ross Honsberger, related to some story on marching band. The idea is simple: consider a marching band, a rectangular one. Here are my players > library(scatterplot3d) > scatterplot3d(rep(1:nl,nc),rep(1:nc,each=nl), as.vector(M1), + col.axis="blue",angle=40, + col.grid="lightblue", main="", xlab="", ylab="", zlab="", + pch=21, box=FALSE, cex.symbols=1,type="h",color="red",axis=FALSE) Quite messy, isn’t it ? At least, this is what the leader of the band though, since some tall players were hiding shorter ones. So, he brought the shorter ones forward, and moved the taller ones in the back. But still on the same line, > m=scatterplot3d(rep(1:nl,nc),rep(1:nc,each=nl), as.vector(M2), > col.axis="blue",angle=40, + col.grid="lightblue", main="", xlab="", ylab="", zlab="", + pch=21, box=FALSE, cex.symbols=1,type="h",color="red",axis=FALSE) From the leader’s perspective, everything was fine, > M=M2 > for(i in 1:nl){ + for(j in 1:(nc-1)){ + pts=m$xyz.convert(x=c(i,i),y=c(j,j+1),z=c(M[i,j],M[i,j+1])) + segments(pts$x[1],pts$y[1],pts$x[2],pts$y[2]) + }} But someone in the public (on the right of this graph) did not have the same perspective. > for(j in 1:nc){ + for(i in 1:(nl-1)){ + pts=m$xyz.convert(x=c(i,i+1),y=c(j,j),z=c(M[i,j],M[i+1,j])) + segments(pts$x[1],pts$y[1],pts$x[2],pts$y[2]) + }} So the person in the audience ask – one more time – players to move, but this time, to match with his perspective. Since I consider someone on the right, some minor adjustments should be made here > sortrev=function(x) sort(x,decreasing=TRUE) > M3b=apply(M2,2,sortrev) This time, it is much bettter, > m=scatterplot3d(rep(1:nl,nc),rep(1:nc,each=nl), as.vector(M3b), + col.axis="blue",angle=40, + col.grid="lightblue", main="", xlab="", ylab="", zlab="", + pch=21, box=FALSE, cex.symbols=1,type="h",color="red",axis=FALSE) And not only from the public’ perspective, > M=M3b > for(j in 1:nc){ + for(i in 1:(nl-1)){ + pts=m$xyz.convert(x=c(i,i+1),y=c(j,j),z=c(M[i,j],M[i+1,j])) + segments(pts$x[1],pts$y[1],pts$x[2],pts$y[2]) + }} but also for the leader of the marching band Nice, isn’t it ? And why is this property always valid ? Actually, it comes from the pigeonhole theorem (one more time), a nice explanation can be found in The Power of the Pigeonhole by Martin Gardner (a pdf version can also be found on http://www.ualberta.ca/~sgraves/..). As mentioned at the end of the paper, there is also an interpretation of that result that can be related to some magic trick, discussing – in picture – a few month ago on http://www.futilitycloset.com/… : deal cards into any rectangular array: Then put each row into numerical order: Now put each column into numerical order: That last step hasn’t disturbed the preceding one: rows are still in order. And that’s a direct result from  pigeonhole theorem. That’s awesome, isn’t it ? # Crash course on R for financial and actuarial econometrics The crash course announced for this Friday, in Montréal, entitled Econometric Modeling in Finance and Insurance with the R Language, has been canceled by IFM2 – or to be more specific postponed. I will upload the slides on financial and actuarial applications later on (even if most of the material can be found, here and here, on this blog). Sorry about this late announcement. # Visualizing overdispersion (with trees) This week, we started to discuss overdispersion when modeling claims frequency. In my previous post, I discussed computations of empirical variances with different exposure. 
But I used only one factor to compute the classes. Of course, it is possible to use many more factors. For instance, using Cartesian products of factors,

> X=as.factor(paste(sinistres$carburant,sinistres$zone,
+ cut(sinistres$ageconducteur,breaks=c(17,24,40,65,101))))
> E=sinistres$exposition
> Y=sinistres$nbre
> vm=vv=ve=rep(NA,length(levels(X)))
> for(i in 1:length(levels(X))){
+   Ei=E[X==levels(X)[i]]
+   Yi=Y[X==levels(X)[i]]
+   ve[i]=sum(Ei) # total exposure of the class
+   vm[i]=meani=weighted.mean(Yi/Ei,Ei) # mean
+   vv[i]=variancei=sum((Yi-meani*Ei)^2)/sum(Ei) # variance
+   cat("Class ",levels(X)[i],"average =",meani," variance =",variancei,"\n")
+ }
Class D A (17,24] average = 0.06274415 variance = 0.06174966
Class D A (24,40] average = 0.07271905 variance = 0.07675049
Class D A (40,65] average = 0.05432262 variance = 0.06556844
Class D A (65,101] average = 0.03026999 variance = 0.02960885
Class D B (17,24] average = 0.2383109 variance = 0.2442396
Class D B (24,40] average = 0.06662015 variance = 0.07121064
Class D B (40,65] average = 0.05551854 variance = 0.05543831
Class D B (65,101] average = 0.0556386 variance = 0.0540786
Class D C (17,24] average = 0.1524552 variance = 0.1592623
Class D C (24,40] average = 0.0795852 variance = 0.09091435
Class D C (40,65] average = 0.07554481 variance = 0.08263404
Class D C (65,101] average = 0.06936605 variance = 0.06684982
Class D D (17,24] average = 0.1584052 variance = 0.1552583
Class D D (24,40] average = 0.1079038 variance = 0.121747
Class D D (40,65] average = 0.06989518 variance = 0.07780811
Class D D (65,101] average = 0.0470501 variance = 0.04575461
Class D E (17,24] average = 0.2007164 variance = 0.2647663
Class D E (24,40] average = 0.1121569 variance = 0.1172205
Class D E (40,65] average = 0.106563 variance = 0.1068348
Class D E (65,101] average = 0.1572701 variance = 0.2126338
Class D F (17,24] average = 0.2314815 variance = 0.1616788
Class D F (24,40] average = 0.1690485 variance = 0.1443094
Class D F (40,65] average = 0.08496827 variance = 0.07914423
Class D F (65,101] average = 0.1547769 variance = 0.1442915
Class E A (17,24] average = 0.1275345 variance = 0.1171678
Class E A (24,40] average = 0.04523504 variance = 0.04741449
Class E A (40,65] average = 0.05402834 variance = 0.05427582
Class E A (65,101] average = 0.04176129 variance = 0.04539265
Class E B (17,24] average = 0.1114712 variance = 0.1059153
Class E B (24,40] average = 0.04211314 variance = 0.04068724
Class E B (40,65] average = 0.04987117 variance = 0.05096601
Class E B (65,101] average = 0.03123003 variance = 0.03041192
Class E C (17,24] average = 0.1256302 variance = 0.1310862
Class E C (24,40] average = 0.05118006 variance = 0.05122782
Class E C (40,65] average = 0.05394576 variance = 0.05594004
Class E C (65,101] average = 0.04570239 variance = 0.04422991
Class E D (17,24] average = 0.1777142 variance = 0.1917696
Class E D (24,40] average = 0.06293331 variance = 0.06738658
Class E D (40,65] average = 0.08532688 variance = 0.2378571
Class E D (65,101] average = 0.05442916 variance = 0.05724951
Class E E (17,24] average = 0.1826558 variance = 0.2085505
Class E E (24,40] average = 0.07804062 variance = 0.09637156
Class E E (40,65] average = 0.08191469 variance = 0.08791804
Class E E (65,101] average = 0.1017367 variance = 0.1141004
Class E F (17,24] average = 0 variance = 0
Class E F (24,40] average = 0.07731177 variance = 0.07415932
Class E F (40,65] average = 0.1081142 variance = 0.1074324
Class E F (65,101] average = 0.09071118 variance = 0.1170159

Again, one can plot the variance against the average,
> plot(vm,vv,cex=sqrt(ve),col="grey",pch=19,
+ xlab="Empirical average",ylab="Empirical variance")
> points(vm,vv,cex=sqrt(ve))
> abline(a=0,b=1,lty=2)

An alternative is to use a tree. The tree can be obtained from another variable (whether or not the insured had a claim during the period considered), but it should be rather close to the one we would like to model (the number of claims over the period considered). Here, I used the whole database (with more than 600,000 lines),

> library(tree)
> T=tree((nombre>0)~as.factor(zone)+as.factor(puissance)+
+ as.factor(marque)+as.factor(carburant)+as.factor(region)+
+ agevehicule+ageconducteur,data=baseFREQ,
+ split = "gini",minsize =25000)

The tree is the following,

> plot(T)
> text(T)

Now, each leaf of the tree defines a class, which is supposed to be homogeneous, and which can be used exactly as before,

> X=as.factor(T$where)
> E=sinistres$exposition
> Y=sinistres$nbre
> vm=vv=ve=rep(NA,length(levels(X)))
> for(i in 1:length(levels(X))){
+   Ei=E[X==levels(X)[i]]
+   Yi=Y[X==levels(X)[i]]
+   ve[i]=sum(Ei) # total exposure of the class
+   vm[i]=meani=weighted.mean(Yi/Ei,Ei) # mean
+   vv[i]=variancei=sum((Yi-meani*Ei)^2)/sum(Ei) # variance
+   cat("Class ",levels(X)[i],"average =",meani," variance =",variancei,"\n")
+ }
Class 6 average = 0.04010406 variance = 0.04424163
Class 8 average = 0.05191127 variance = 0.05948133
Class 9 average = 0.07442635 variance = 0.08694552
Class 10 average = 0.4143646 variance = 0.4494002
Class 11 average = 0.1917445 variance = 0.1744355
Class 15 average = 0.04754595 variance = 0.05389675
Class 20 average = 0.08129577 variance = 0.0906322
Class 22 average = 0.05813419 variance = 0.07089811
Class 23 average = 0.06123807 variance = 0.07010473
Class 24 average = 0.06707301 variance = 0.07270995
Class 25 average = 0.3164557 variance = 0.2026906
Class 26 average = 0.08705041 variance = 0.108456
Class 27 average = 0.06705214 variance = 0.07174673
Class 30 average = 0.05292652 variance = 0.06127301
Class 31 average = 0.07195285 variance = 0.08620593
Class 32 average = 0.08133722 variance = 0.08960552
Class 34 average = 0.1831559 variance = 0.2010849
Class 39 average = 0.06173885 variance = 0.06573939
Class 41 average = 0.07089419 variance = 0.07102932
Class 44 average = 0.09426152 variance = 0.1032255
Class 47 average = 0.03641669 variance = 0.03869702
Class 49 average = 0.0506601 variance = 0.05089276
Class 50 average = 0.06373107 variance = 0.06536792
Class 51 average = 0.06762947 variance = 0.06926191
Class 56 average = 0.06771764 variance = 0.07122379
Class 57 average = 0.04949142 variance = 0.05086885
Class 58 average = 0.2459016 variance = 0.2451116
Class 59 average = 0.05996851 variance = 0.0615773
Class 61 average = 0.07458053 variance = 0.0818608
Class 63 average = 0.06203737 variance = 0.06249892
Class 64 average = 0.07321618 variance = 0.07603106
Class 66 average = 0.07332127 variance = 0.07262425
Class 68 average = 0.07478147 variance = 0.07884597
Class 70 average = 0.06566728 variance = 0.06749411
Class 71 average = 0.09159605 variance = 0.09434413
Class 75 average = 0.03228927 variance = 0.03403198
Class 76 average = 0.04630848 variance = 0.04861813
Class 78 average = 0.05342351 variance = 0.05626653
Class 79 average = 0.05778622 variance = 0.05987139
Class 80 average = 0.0374993 variance = 0.0385351
Class 83 average = 0.06721729 variance = 0.07295168
Class 86 average = 0.09888492 variance = 0.1131409
Class 87 average = 0.1019186 variance = 0.2051122
Class 88 average = 0.05281703 variance = 0.0635244
Class 91 average = 0.08332136 variance = 0.09067632
Class 96 average = 0.07682093 variance = 0.08144446
Class 97 average = 0.0792268 variance = 0.08092019
Class 99 average = 0.1019089 variance = 0.1072126
Class 100 average = 0.1018262 variance = 0.1081117
Class 101 average = 0.1106647 variance = 0.1151819
Class 103 average = 0.08147644 variance = 0.08411685
Class 104 average = 0.06456508 variance = 0.06801061
Class 107 average = 0.1197225 variance = 0.1250056
Class 108 average = 0.0924619 variance = 0.09845582
Class 109 average = 0.1198932 variance = 0.1209162

Here, when plotting the empirical variance (per leaf) against the empirical average of claims, we can identify classes where some heterogeneity remains.

# Large claims, and ratemaking

During the course, we have seen that it is natural to assume that not only the individual claims frequency, but also individual costs, can be explained by some covariates. Of course, appropriate families should be considered to model the distribution of the cost $Y$, given some covariates $\boldsymbol{X}$. Here is the dataset we'll use,

> sinistre=read.table("http://freakonometrics.free.fr/sinistreACT2040.txt",
+ header=TRUE,sep=";")
> sinistres=sinistre[sinistre$garantie=="1RC",]
> sinistres=sinistres[sinistres$cout>0,]
> couts=merge(sinistres,contrat)
> tail(couts)
     nocontrat    no garantie    cout exposition zone puissance agevehicule
1919   6104006 11933      1RC 5376.04       0.37    E         6           1
1920   6107355 12349      1RC   51.63       0.74    E         4           1
1921   6108364 13229      1RC 1320.00       0.74    B         9           1
1922   6109171 11567      1RC 1320.00       0.74    B        13           1
1923   6111208 14161      1RC  970.20       0.49    E        10           5
1924   6111650 14476      1RC 1940.40       0.48    E         4           0
     ageconducteur bonus marque carburant densite region
1919            32    57     12         E      93     10
1920            45    57     12         E      72     10
1921            32   100     12         E      83      0
1922            56    50     12         E      93     13
1923            30    90     12         E      53      2
1924            69    50     12         E      93     13

Here, each line is a claim. Usual families to model the cost are the Gamma distribution, or the inverse Gaussian. Or the lognormal distribution (which is not in the exponential family, but one can assume that the logarithm of the cost can be modeled with a Gaussian distribution). Consider here only one covariate – e.g. the age of the car – and two different models: a Gamma one, and a lognormal one.

> age=0:20
> reggamma.sp <- glm(cout~agevehicule,family=Gamma(link="log"),
+ data=couts)
> Pgamma <- predict(reggamma.sp,newdata=data.frame(agevehicule=age),type="response")

For the Gamma regression, it is a simple GLM, so it is not difficult. For the lognormal distribution, one should remember that the expected value of a lognormal distribution is not the exponential of the expected value of the underlying Gaussian distribution: a correction should be made to get an unbiased estimator of the average cost,

> reglm.sp <- lm(log(cout)~agevehicule,data=couts)
> sigma <- summary(reglm.sp)$sigma
> mu <- predict(reglm.sp,newdata=data.frame(agevehicule=age))
> Pln <- exp(mu+sigma^2/2)

We can plot those two predictions on a single graph,

> plot(age,Pgamma,xlab="",ylab="",col="red",type="b",pch=4)
> lines(age,Pln,col="blue",type="b")

Here it is. Observe that it is also possible to use splines, since there might be no reason for the age to appear here in a multiplicative way. Here, the two models are rather close. Nevertheless, one should remember that the Gamma model can be extremely sensitive to large claims (I mean here really large claims). On the other hand, with the log-transformation of the lognormal model, that model seems less sensitive to large events. Actually, if I use the complete dataset, the regressions are the following, i.e.
with a lognormal distribution, the average cost is decreasing with the age of the car, while it is increasing with a Gamma model. The main reason here is that there is one large (not to say huge) claim in the dataset,

> couts[which.max(couts$cout),]
        cout exposition zone puissance agevehicule ageconducteur
7842 4024601       0.22    B         9          13            19
     marque carburant densite region
7842      2         E      93     24

One young driver got a $4 million claim, with a 13-year-old car. This is an outlier for the Gamma regression, and it clearly influences the estimation (the second largest claim is only one third of this one). Since there is a clear influence of large claims on the estimation of the average cost, a natural idea might be to remove those large claims. Or perhaps to see them as different from normal claims: normal claims can be explained by some covariates, but perhaps large claims should be shared not only within their own class, but among all the insured in the portfolio. To formalize this idea, observe that we can write

$\mathbb{E}(Y|\boldsymbol{X}) = {\color{Blue} {\underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq s)}_{A} \cdot {\underbrace{\mathbb{P}(Y\leq s|\boldsymbol{X})}_{B}}}}+{\color{Red} {{\underbrace{\mathbb{E}(Y|Y> s, \boldsymbol{X}) }_{C}}\cdot {\underbrace{\mathbb{P}(Y> s| \boldsymbol{X})}_{B}}}}$

where the blue part is associated with normal-sized claims, while large ones correspond to the red part. It is then possible to run three regressions: one on normal-sized claims, one on large claims, and one on the indicator of having a large claim, given that a claim occurred. The code here is something like this: a large claim – here – is one above $10,000 (one has to fix a threshold),

> s=10000
> couts$normal=(couts$cout<=s)
> mean(couts$normal)
[1] 0.9818087

which means that large claims represent about 2% of the claims in our dataset. We can then run three sets of regressions, with smoothed regressions on the age of the car. The first one models the individual costs of large claims,

> indice = which(couts$cout>s)
> mean(couts$cout[indice])
[1] 34471.59
> library(splines)
> regB=glm(cout~bs(agevehicule),data=couts,
+ subset=indice,family=Gamma(link="log"))
> ypB=predict(regB,newdata=data.frame(agevehicule=age),type="response")
> ypB2=mean(couts$cout[indice])

the second one models the individual costs of normal claims,

> indice = which(couts$cout<=s)
> mean(couts$cout[indice])
[1] 1335.878
> regA=glm(cout~bs(agevehicule),data=couts,
+ subset=indice,family=Gamma(link="log"))
> ypA=predict(regA,newdata=data.frame(agevehicule=age),type="response")
> ypA2=mean(couts$cout[indice])

and finally, a third one, on the probability of having a normal-sized claim, given that a claim occurred,

> regC=glm(normal~bs(agevehicule),data=couts,family=binomial)
> ypC=predict(regC,newdata=data.frame(agevehicule=age),type="response")
> regC2=glm(normal~1,data=couts,family=binomial)
> ypC2=predict(regC2,newdata=data.frame(agevehicule=age),type="response")

Note that we have, each time, something that can be interpreted either as $\mathbb{E}(Y|\boldsymbol{X},Y\gtrless s)$, or $\mathbb{E}(Y|Y\gtrless s)$ – i.e. no covariate is considered in the latter. On the graph below, we plot

$\mathbb{E}(Y|\boldsymbol{X}) = {\color{Blue} {\underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq s)}_{A} \cdot {\underbrace{\mathbb{P}(Y\leq s|\boldsymbol{X})}_{B}}}}+{\color{Red} {{\underbrace{\mathbb{E}(Y|Y> s, \boldsymbol{X}) }_{C}}\cdot {\underbrace{\mathbb{P}(Y> s| \boldsymbol{X})}_{B}}}}$

where Gamma regressions – with splines – are considered for the average costs, while logistic regressions – again with splines – are considered to model the probabilities.
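To combine the three fits into the expected cost plotted on that graph — a minimal sketch, using the objects defined above — one can simply write

> cost=ypC*ypA+(1-ypC)*ypB
> plot(age,cost,type="b",xlab="age of the car",ylab="expected cost")

since $\mathbb{P}(Y>s|\boldsymbol{X})=1-\mathbb{P}(Y\leq s|\boldsymbol{X})$.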
(but be careful with splines: on the borders, since we do not have a lot of observations, the behavior can be… odd, and adjustments should be made to obtain an adequate level of premium). If it is legitimate to assume that normal-sized claims can be explained by some covariates, perhaps large claims (or extremely large ones) are just purely random, i.e. not a function of any covariate at all, i.e.

$\mathbb{E}(Y|\boldsymbol{X}) = {\color{Blue} {\underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq s)}_{A} \cdot {\underbrace{\mathbb{P}(Y\leq s|\boldsymbol{X})}_{B}}}}+{\color{Red} {{\underbrace{\mathbb{E}(Y|Y> s) }_{C'}}\cdot {\underbrace{\mathbb{P}(Y> s| \boldsymbol{X})}_{B}}}}$

To go one step further, it might also be possible to assume that not only is the size of the claim (given that it is a large one) not a function of any covariate, but perhaps neither is the probability of having an extremely large claim,

$\mathbb{E}(Y|\boldsymbol{X}) = {\color{Blue} {\underbrace{\mathbb{E}(Y|\boldsymbol{X},Y\leq s)}_{A} \cdot {\underbrace{\mathbb{P}(Y\leq s)}_{B'}}}}+{\color{Red} {{\underbrace{\mathbb{E}(Y|Y> s) }_{C'}}\cdot {\underbrace{\mathbb{P}(Y> s)}_{B'}}}}$

From the first part, we've seen that the distribution considered has an impact on the prediction, and in the second, we've seen that the definition of large claims (and how to deal with them) also has an impact. So clearly, actuaries have some leverage when working on ratemaking…

# Somewhere else, part 32

A little bit late this week (busy weekend) and, as usual, a lot of interesting posts here and there – and, as always, a few articles in French. Did I miss something?
https://www.maplesoft.com/support/help/Maple/view.aspx?path=SignalProcessing/PowerSpectrum
PowerSpectrum - Maple Help

SignalProcessing[PowerSpectrum] – compute the power spectrum of an array of samples

Calling Sequence

PowerSpectrum( A, options )
PowerSpectrum( Br, Bi, options )

Parameters

A - rtable or list of numeric values for the signal or spectrum.
Br - rtable or list of numeric values for the real parts of the signal or spectrum.
Bi - rtable or list of numeric values for the imaginary parts of the signal or spectrum.

Options

• container: (optional) Predefined rtable of float[8] datatype having the same dimensions as the input rtable(s) to store the power spectrum.
• dimension: (optional) Integer, non-empty list of integers, all, or "all", which specifies the dimensions of the signal to which the Discrete Fourier Transform (DFT) is to be applied. The default is 1.
• fftnormalization: (optional) One of none, symmetric, or full, indicating the normalization to be applied when using the DFT. The default is symmetric.
• frequencyunit: (optional) Unit which specifies the unit of frequency. The default is Unit(Hz). Either of the forms algebraic or Unit(algebraic) is accepted, and the unit must be convertible to a valid unit of frequency.
• periodogramoptions: (optional) List of additional plot options to be passed when creating the periodogram. The default is [].
• powerscale: (optional) Unit which indicates the scaling, if any, to be applied to the power spectrum. Either of the forms algebraic or Unit(algebraic) is accepted, and the unit must be convertible to a valid unit of power (see below for more details). The default is Unit(1).
• samplerate: (optional) Positive numeric value for the sampling rate. The default is 1.0.
• temperendpoints: (optional) Either true or false, specifying whether the power spectrum is to be tempered at the endpoints. The default is false.
• timeunit: (optional) Unit which specifies the unit of time. The default is Unit(s). Either of the forms algebraic or Unit(algebraic) is accepted, and the unit must be convertible to a valid unit of time.
• variety: (optional) Either signal or fft, specifying whether the data passed is a signal or the DFT of a signal. The default is fft.
• window: (optional) Either a list, name, or string, specifying the windowing command to be applied to the signal. The default is "none" (for no windowing to be applied). If a list is passed, the first element provides the name of the windowing command, and any remaining terms are passed as options to the command.
• windownormalization: (optional) Either true or false, indicating whether the windowing function is to be normalized. The default is true.
• output: (optional) The type of output. The supported options are:
– frequencies: Returns a Vector of float[8] datatype containing the frequencies.
– periodogram: Returns a periodogram which displays the power spectrum versus the frequencies.
– power: Returns an rtable of float[8] datatype containing the power spectrum. This is the default.
– times: Returns a Vector of float[8] datatype containing the times.
– record: Returns a record with the previous options.
– list of any of the above options: Returns an expression sequence with the corresponding outputs, in the same order.

Description

• The PowerSpectrum(A) command computes the power spectrum of the rtable or list A, and returns the result in an rtable of datatype float[8] having the same dimensions as A.
• To determine the power spectrum of a 1-D signal:
1. Apply a window, if any, to the signal.
2. Compute the DFT of the windowed signal.
3. Square the magnitudes of the elements.
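For concreteness, here is a rough sketch of those three steps in R (this is not Maple code; the function name is ours, and it mimics only the default one-dimensional behavior, assuming the "symmetric" DFT normalization divides by sqrt(n) and the window is RMS-normalized):

> power_spectrum <- function(x, window = rep(1, length(x))) {
+   w <- window / sqrt(mean(window^2)) # RMS-normalized window (windownormalization = true)
+   X <- fft(x * w) / sqrt(length(x))  # DFT with "symmetric" normalization
+   Mod(X)^2                           # squared magnitudes
+ }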
• When variety = signal, the size of all transform dimensions cannot be less than two.
• When temperendpoints = true, the input has one or two dimensions, there is exactly one transform dimension, and the transform dimension has size no less than three, the endpoints of the power spectrum in the transform dimension are halved.
• The PowerSpectrum(Br,Bi) command computes the same result as the PowerSpectrum(A) command, but the real and imaginary parts of the complex numbers are stored, respectively, in Br and Bi. Of course, Br and Bi must have the same dimensions and be coercible to datatype float[8].
• The rtable subtype returned by the PowerSpectrum command will be the same as that of the first rtable passed, or an Array if a list was passed first. For example, if A is a row Vector, then PowerSpectrum(A) will be a row Vector, and if Br is a Matrix, then PowerSpectrum(Br,Bi) will be a Matrix.
• The value of window, when not passed as a list, should be the name or string, with or without the Window suffix, that corresponds to the windowing command. For example, to use a Hamming window, you can pass window=Hamming or window="HammingWindow". In both cases, the command SignalProcessing[HammingWindow] will be used internally. Similarly, you can pass window=["Exponential",0.5] or window=[ExponentialWindow,0.5] to use SignalProcessing[ExponentialWindow] with parameter value 0.5.
• To apply a window to a Vector $V$ of length $n$, the window is first applied to another Vector $W$ of size $n$ filled with ones, and then $V$ is multiplied element-wise by $W$. When windownormalization=true, $W$ is first normalized with respect to its Root Mean Square (RMS).
• A window can only be applied when the input has one or two dimensions, and there is exactly one transform dimension.
• To scale the power spectrum with the powerscale option, units which are dimensionally equivalent to the following are accepted:
• 1: No further scaling is performed.
• 1/Hz: The power spectrum is divided by $r=\mathrm{samplerate}$.
• 1/rad/Hz: The power spectrum is divided by $2\pi r$.
• dB: Each element $u$ of the power spectrum is replaced with $10\log_{10}(u)$.
• dB/Hz: Each element $u$ of the power spectrum is replaced with $10\log_{10}\!\left(\frac{u}{r}\right)$.
• dB/rad/Hz: Each element $u$ of the power spectrum is replaced with $10\log_{10}\!\left(\frac{u}{2\pi r}\right)$.
• The frequencies and times Vectors can only be computed when there is exactly one transform dimension. If this is the case, the frequencies and times Vectors are of the same size as the transform dimension, say $n$, and have components defined by, respectively, $F_{i}=\frac{(i-1)\,r}{n}$ and $T_{i}=\frac{i-1}{r}$, where $r=\mathrm{samplerate}$.
• The samplerate option can also include a unit of frequency. If a unit is provided, and it differs from frequencyunit, then the sample rate will be converted to use the same unit as frequencyunit.
• A periodogram can only be created when the input has one or two dimensions, and there is exactly one transform dimension. In the two-dimensional case, the periodogram is a plot Array, with the separate plots being the periodograms corresponding to the separate channels defined by the transform dimension.
• If A or Br is an rtable of type AudioTools:-Audio and variety=signal, the sample rate is inferred from the attributes. Should samplerate also be passed, it will be overridden.
• Before the code performing the computation runs, any input containers are converted to datatype complex[8] (for the calling sequence with A) or float[8] (for the calling sequence with Br and Bi) if they do not have this datatype already. For this reason, it is most efficient if input containers have this datatype beforehand.
• The input rtables cannot have an indexing function, must use rectangular storage, and must have the same order (C_order or Fortran_order).
• If the container=C option is provided, then the results are stored in C and C is returned. With this option, no additional memory is allocated to store the result.
• As the underlying implementation of the SignalProcessing package is a module, it is also possible to use the form SignalProcessing:-PowerSpectrum to access the command from the package. For more information, see Module Members.
• The PowerSpectrum command is not thread safe.

Examples

> with(SignalProcessing):

Example 1

> a := Array([1.+I, 2.-3.*I, 4., -1.*I], 'datatype' = 'complex[8]');
      [1.+1.*I  2.-3.*I  4.+0.*I  0.-1.*I]        (1)
> PowerSpectrum(a);
      [2.  13.  16.  1.]                          (2)
> c := Array(1..numelems(a), 'datatype' = 'float[8]'):
> PowerSpectrum(a, 'container' = c);
      [2.  13.  16.  1.]                          (3)
> c;
      [2.  13.  16.  1.]                          (4)

Example 2

> r := Array([1., 2., 4., 0.], 'datatype' = 'float[8]');
      [1.  2.  4.  0.]                            (5)
> i := Array([1., -3., 0., -1.], 'datatype' = 'float[8]');
      [1.  -3.  0.  -1.]                          (6)
> PowerSpectrum(r, i);
      [2.  13.  16.  1.]                          (7)
> PowerSpectrum(r, i, 'container' = c);
      [2.  13.  16.  1.]                          (8)
> c;
      [2.  13.  16.  1.]                          (9)

Example 3

> m := Array(1..2, 1..2, [[1.+I, 2.-I], [-3.+2., -4.+2*I]], 'datatype' = 'complex[8]');
      [ 1.+1.*I   2.-1.*I]
      [-1.       -4.+2.*I]                        (10)
> PowerSpectrum(m);
      [2.   5.]
      [1.  20.]                                   (11)
> n := Array(1..2, 1..2, 'datatype' = 'float[8]'):
> PowerSpectrum(m, 'container' = n);
      [2.   5.]
      [1.  20.]                                   (12)
> n;
      [2.   5.]
      [1.  20.]                                   (13)

Example 4

• Consider the following signal:

> num_points := 4096;
      num_points := 4096                          (14)
> sample_rate := evalhf((num_points - 1)/(2*Pi));
      sample_rate := 651.739491961311387          (15)
> Times := Vector(num_points, i -> evalhf((i - 1)/sample_rate), 'datatype' = 'float[8]');
      _rtable[36893628626787217404]               (16)
> Signal := Vector(num_points, i -> evalhf(3*sin(200*Times[i]) - I*cos(500*Times[i])), 'datatype' = 'complex[8]');
      _rtable[36893628626787219204]               (17)

• Now, apply a Hamming window to the signal, and return everything in a record:

> R := PowerSpectrum(Signal, 'samplerate' = sample_rate, 'variety' = 'signal', 'window' = 'Hamming', 'powerscale' = 'dB/Hz', 'output' = 'record'):

• Periodogram:

> R['periodogram'];

Example 5

• The PowerSpectrum command can produce multiple periodograms from two-dimensional input. Here, the columns represent separate channels:

> n := 1024;
      n := 1024                                   (18)
> a := 0;
      a := 0                                      (19)
> b := 2*Pi;
      b := 2 Pi                                   (20)
> fs := evalhf((n - 1)/(b - a));
      fs := 162.815506783008942                   (21)
> T := Vector['column'](n, i -> evalhf((i - 1)/fs), 'datatype' = 'float[8]');
      _rtable[36893628626787212812]               (22)
> A := < Vector['column'](n, i -> sin(100*T[i]), 'datatype' = 'float[8]') | Vector['column'](n, i -> cos(2000*T[i]), 'datatype' = 'float[8]') >;
      _rtable[36893628626787206676]               (23)
> PowerSpectrum(A, 'samplerate' = fs, 'variety' = 'signal', 'dimension' = 1, 'output' = 'periodogram', 'powerscale' = 'dB/Hz');

Compatibility

• The SignalProcessing[PowerSpectrum] command was introduced in Maple 17.
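The values in Example 1 above are easy to verify by hand: with the default variety = fft, the input is already treated as a spectrum, so each output entry is just the squared magnitude of the corresponding input. A one-line cross-check in R (not Maple code):

> Mod(c(1+1i, 2-3i, 4+0i, 0-1i))^2
[1]  2 13 16  1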
https://www.techwhiff.com/learn/you-want-to-obtain-a-sample-to-estimate-a/56671
# You want to obtain a sample to estimate a population proportion. Based on previous evidence, you...

###### Question:

You want to obtain a sample to estimate a population proportion. Based on previous evidence, you believe the population proportion is approximately 35%. You would like to be 99% confident that your estimate is within 1% of the true population proportion. How large of a sample size is required?

n =
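A minimal sketch of the standard computation (assuming the usual formula $n = p(1-p)\,(z/E)^2$, with $z$ the two-sided 99% normal quantile; rounding conventions vary slightly between textbooks):

> p <- 0.35; E <- 0.01
> z <- qnorm(0.995)   # about 2.576
> ceiling(p * (1 - p) * (z / E)^2)
[1] 15095

Using the rounded value z = 2.576 instead gives 15097.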
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=CJGHBH_2016_v20n1_14
Effect of AP Particle Size on the Physical Properties of HTPB/AP Propellant

Yim, Yoo Jin; Park, Eun Ji; Kwon, Tae Ha; Choi, Seong Han

Abstract

The viscosity and mechanical properties of HTPB/AP composite solid propellant are profoundly affected by the particle size of AP. In an HTPB/AP propellant formulated with a bimodal AP size distribution (190 μm and 7 μm), the propellant was found to be much less viscous at the end of mixing when the coarse/fine AP ratio ranged from 70/30 to 60/40, owing to the high solid packing fraction. Tensile tests showed that the toughness of the HTPB/AP propellant increased with the proportion of coarse AP. Considering both the lower viscosity and the better tensile strength, the optimum coarse/fine AP ratio was estimated to be 70/30.

Keywords: HTPB/AP Propellant; Solid Packing Fraction; Tensile Strength; Viscosity

Language: Korean
https://arxiv.org/abs/1104.0711
astro-ph.CO

# Title: An estimate of the electron density in filaments of galaxies at z~0.1

Abstract: Most of the baryons in the Universe are thought to be contained within filaments of galaxies, but as yet, no single study has published the observed properties of a large sample of known filaments to determine typical physical characteristics such as temperature and electron density. This paper presents a comprehensive large-scale search conducted for X-ray emission from a population of 41 bona fide filaments of galaxies to determine their X-ray flux and electron density. The sample is generated from Pimbblet et al.'s (2004) filament catalogue, which is in turn sourced from the 2 degree Field Galaxy Redshift Survey (2dFGRS). Since the filaments are expected to be very faint and of very low density, we used stacked ROSAT All-Sky Survey data. We detect a net surface brightness from our sample of filaments of (1.6 +/- 0.1) x 10^{-14} erg cm^{-2} s^{-1} arcmin^{-2} in the 0.9-1.3 keV energy band for 1 keV plasma, which implies an electron density of n_{e} = (4.7 +/- 0.2) x 10^{-4} h_{100}^{1/2} cm^{-3}. Finally, we examine if a filament's membership to a supercluster leads to an enhanced electron density as reported by Kull & Bohringer (1999). We suggest it remains unclear if supercluster membership causes such an enhancement.

Comments: Accepted for publication in MNRAS. v2: typos corrected
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Astrophysical Phenomena (astro-ph.HE)
Journal reference: Mon.Not.Roy.Astron.Soc.415:1961-1966,2011
DOI: 10.1111/j.1365-2966.2011.18847.x
Cite as: arXiv:1104.0711 [astro-ph.CO] (or arXiv:1104.0711v2 [astro-ph.CO] for this version)

## Submission history

From: Kevin A. Pimbblet
[v1] Mon, 4 Apr 2011 23:41:24 GMT (280kb)
[v2] Wed, 6 Apr 2011 23:04:26 GMT (280kb)
http://www.varsitytutors.com/isee_upper_level_math-help/how-to-find-the-surface-area-of-a-cube
# ISEE Upper Level Math : How to find the surface area of a cube

## Example Questions

### Example Question #37 : Solid Geometry

The length of the side of a cube is . Give its surface area in terms of .

Explanation: Substitute into the formula for the surface area of a cube, $SA = 6a^2$, where $a$ is the side length.

### Example Question #38 : Solid Geometry

If a cube has one side measuring cm, what is the surface area of the cube?

Explanation: To find the surface area of a cube, use the formula $SA = 6a^2$, where $a$ represents the length of the side. Since the side of the cube is given, we can substitute it in for $a$.

### Example Question #1 : How To Find The Surface Area Of A Cube

Your friend gives you a puzzle cube for your birthday. If the length of one edge is 5 cm, what is the surface area of the cube?

Explanation: To find the surface area of a cube, use the following formula: $SA = 6a^2$. This works because we have 6 sides, each of which has an area of $a^2$. Plug in our known value to get our answer: $SA = 6\cdot 5^2 = 150\ \text{cm}^2$.

### Example Question #40 : Solid Geometry

A cube has a side length of ; what is the surface area of the cube?

Explanation: The surface area of a cube can be found as follows: $SA = 6a^2$. Plug in the given side length to find the answer.

### Example Question #2 : How To Find The Surface Area Of A Cube

If one of the edges has a length of 6 inches, what is the surface area of the box?

Explanation: We can find the surface area of a cube by squaring the length of the side and then multiplying it by 6: $SA = 6\cdot 6^2 = 216\ \text{in}^2$.

### Example Question #3 : How To Find The Surface Area Of A Cube

Find the surface area of a cube that has a width of 6 cm.

Explanation: To find the surface area of a cube, we will use the following formula: $SA = 6a^2$, where $a$ is the length of any side of the cube. Now, we know the width of the cube is 6 cm. Because it is a cube, all sides are 6 cm; that is why we can choose any side to substitute into the formula. Substituting, we get $SA = 6\cdot 6^2 = 216\ \text{cm}^2$.

### Example Question #4 : How To Find The Surface Area Of A Cube

Find the surface area of a cube with a length of 7 in.

Explanation: To find the surface area of a cube, we will use the following formula: $SA = 6a^2$, where $a$ is the length of any side of the cube. Note that all sides of a cube are equal; that is why we can use any length in the formula. Now, we know the length of the cube is 7 in. Substituting, we get $SA = 6\cdot 7^2 = 294\ \text{in}^2$.

### Example Question #5 : How To Find The Surface Area Of A Cube

Find the surface area of a cube with a width of 4 cm.

Explanation: To find the surface area of a cube, we will use the following formula: $SA = 6a^2$, where $a$ is the length of any side of the cube. Now, we know the width of the cube is 4 cm. Because it is a cube, all sides are equal (this is why we can use any length in the formula). So, we will use 4 cm in the formula. We get $SA = 6\cdot 4^2 = 96\ \text{cm}^2$.

### Example Question #6 : How To Find The Surface Area Of A Cube

Find the surface area of a cube with a length of 12 in.

Explanation: To find the surface area of a cube, we will use the following formula: $SA = 6\,l\,w$, where $l$ is the length and $w$ is the width of the cube. Now, we know the length of the cube is 12 in. Because it is a cube, all lengths, widths, and heights are the same. Therefore, the width is also 12 in. Knowing this, we can substitute into the formula.
We get $SA = 6 \cdot 12 \cdot 12 = 864\ \text{in}^2$.

### Example Question #7 : How To Find The Surface Area Of A Cube

While exploring an ancient ruin, you discover a small puzzle cube. You measure the side length to be . Find the cube's surface area.

Explanation: To find the surface area, use the following formula: $SA = 6a^2$.
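All of these follow the same one-line computation; as a quick sanity check in R (the function name is ours), using the side lengths from the questions above:

> surface_cube <- function(a) 6 * a^2
> surface_cube(c(5, 6, 7, 4, 12))
[1] 150 216 294  96 864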
http://nrich.maths.org/2007/index?nomenu=1
If you write plus signs between each of the digits $1$ to $9$, this is what you get: $1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 = 45$ However, if you alter where the plus signs go, you could also get: $12 + 3 + 45 + 6 + 7 + 8 + 9 = 90$ Can you put plus signs in so this is true? $1\;$ $2\;$ $3\;$ $4\;$ $5\;$ $6\;$ $7\;$ $8\;$ $9\;$ $= 99$ How many ways can you do it?
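One way to answer the counting question is brute force: try all $2^8$ ways of inserting (or not) a plus sign in each of the eight gaps between the digits. A sketch in R (the approach is the point; run it to get the count):

> digits <- as.character(1:9)
> count <- 0
> for (mask in 0:(2^8 - 1)) {
+   bits <- as.integer(intToBits(mask))[1:8]
+   expr <- digits[1]
+   for (g in 1:8) expr <- paste0(expr, ifelse(bits[g] == 1, "+", ""), digits[g + 1])
+   if (eval(parse(text = expr)) == 99) count <- count + 1
+ }
> count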
https://chemistry.stackexchange.com/questions/73683/why-do-helium-balloons-expand-in-volume-as-they-go-higher/73688
# Why do helium balloons expand in volume as they go higher?

I realize that as balloons go higher, the atmospheric pressure decreases, doing less to counteract the force of the gas particles pushing against the inner walls of the balloon. But at the same time, doesn't the outside temperature decrease, causing the gas particles in the balloon to lose energy and have less impactful collisions and a lower velocity? Is the reason the volume nonetheless increases that the pressure outside decreases faster than the temperature? Or am I overthinking this?

I didn't know that balloons expanded in flight because of thermodynamics, and I didn't know how high they can fly, but a quick search shows that a partially inflated regular balloon can fly up to an altitude of around $\pu{25 km}$. Now, $\pu{25 km}$ means that it reaches the first part of the stratosphere, with temperatures of $\pu{-60 ^\circ C}$ that gradually increase to $\pu{0 ^\circ C}$ at $\pu{50 km}$. As for the pressure, it goes from around $\pu{40 mmHg}$ to $<\pu{1 mmHg}$ in the range $25\text{–}\pu{50 km}$. If you try a $pV=nRT$ calculation on these data, you see that the gas is already at around 10 times its initial volume at $\pu{40 mmHg}$ pressure and a $\pu{213 K}$ temperature, and that at the $\pu{50 km}$ point the volume has increased 700 times! Also: while the trend of the pressure is quite logical, that of the temperature is caused by complex interactions (e.g. solar radiation heating particles). You can find this image quite interesting:

You are exactly correct that it is a matter of atmospheric pressure decreasing at a rate great enough to overcome the contraction due to the decrease in temperature. On a nice, clear, dry 25 °C day at sea level, atmospheric pressure decreases by about 12% per km, whereas the air temperature decreases by about 3% per km. This is very similar to the process that allows convectional clouds to expand with height and the resulting thunderstorms to form. If a rising air mass started to contract due to cooling, that would be the end of that convection event.

Thanks to the ideal gas law, the volume of the balloon doesn't really depend on the fact that the gas inside is helium in particular. So we should get the same volume if we filled the balloon with ordinary air and hoisted it into the atmosphere mechanically instead of letting it rise by its own buoyancy. In this modified thought experiment, the density of the gas inside the balloon is exactly the same as the density of the surrounding air, assuming that both pressure and temperature are equalized. If the volume of the balloon stayed the same, this would require that the atmosphere have the same density at all altitudes. But then there would need to be infinitely much of it, which is absurd and in conflict with observations: it is well known that the density in space approaches zero, and since the atmosphere is not a liquid, it has no distinct upper edge, so its density must decrease gradually as we move up. Also, if the density were the same everywhere, air at sea level would not be compressed by the weight of the atmosphere above us, which is also absurd. (Consider what happens if we compress a portion of air with a piston -- say, a bicycle pump. Both pressure and temperature will increase -- but temperature does not rise so fast that the volume stays constant!)
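A quick numerical check of the $pV=nRT$ estimates in the first answer, in R (taking sea level as $\pu{760 mmHg}$ and $\pu{298 K}$; these reference values are our assumption):

> ratio <- function(p2, T2) (760 / p2) * (T2 / 298)   # V2/V1 = (p1/p2)(T2/T1)
> ratio(40, 213)   # at ~25 km: roughly a ten-fold expansion
[1] 13.58054
> ratio(1, 273)    # at ~50 km: about 700
[1] 696.2416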
http://physics.stackexchange.com/questions/66412/can-the-vanishing-of-the-riemann-tensor-be-determined-from-causal-relations
# Can the vanishing of the Riemann tensor be determined from causal relations?

Given a Lorentzian manifold and metric tensor, "$( M, g )$", the corresponding causal relations between its elements (events) may be derived; i.e. for every pair of (in general) distinct events in set $M$, an assignment is obtained of whether the pair is timelike separated, lightlike separated, or neither (spacelike separated).

In turn, I'd like to better understand whether causal separation relations, given abstractly as "$( M, s )$", allow one to characterize the corresponding Lorentzian manifold/metric. As an exemplary and surely relevant characteristic (cf. the answer here), let's consider whether the Riemann curvature tensor vanishes, or not, at each event of the whole set $M$ (or perhaps at suitable subsets of $M$).

Are there particular causal separation relations which would be indicative, or counter-indicative, of the Riemann curvature tensor vanishing at all events of set $M$ (or, if this may simplify considerations, at all events of a chart of the manifold); or on some subset of $M$?

To put my question still more concretely, consider as a possible illustration of "counter-indication":

(a) Can the Riemann curvature tensor vanish at least in one event of a 3+1 dimensional Lorentzian manifold if each of its charts contains

• fifteen events (conveniently organized as five triples): $A, B, C$; $\,\,\,\, F, G, H$; $\,\,\,\, J, K, L$; $\,\,\,\, N, P, Q\,\,\,\,$, and $\,\,\,\, U, V, W$,
• where (to specify the causal separation relations among all corresponding one-hundred-and-five event pairs):
$s[ A, B ]$ and $s[ A, C ]$ and $s[ B, C ]$ are timelike,
$s[ F, G ]$ and $s[ F, H ]$ and $s[ G, H ]$ are timelike,
$s[ J, K ]$ and $s[ J, L ]$ and $s[ K, L ]$ are timelike,
$s[ N, P ]$ and $s[ N, Q ]$ and $s[ P, Q ]$ are timelike,
$s[ U, V ]$ and $s[ U, W ]$ and $s[ V, W ]$ are timelike,
$s[ A, G ]$ and $s[ G, C ]$ and $s[ A, K ]$ and $s[ K, C ]$ and $s[ A, P ]$ and $s[ P, C ]$ and $s[ A, V ]$ and $s[ V, C ]$ are lightlike,
$s[ F, B ]$ and $s[ B, H ]$ and $s[ F, K ]$ and $s[ K, H ]$ and $s[ F, P ]$ and $s[ P, H ]$ and $s[ F, V ]$ and $s[ V, H ]$ are lightlike,
$s[ J, B ]$ and $s[ B, L ]$ and $s[ J, G ]$ and $s[ G, L ]$ and $s[ J, P ]$ and $s[ P, L ]$ and $s[ J, V ]$ and $s[ V, L ]$ are lightlike,
$s[ N, B ]$ and $s[ B, Q ]$ and $s[ N, G ]$ and $s[ G, Q ]$ and $s[ N, K ]$ and $s[ K, Q ]$ and $s[ N, V ]$ and $s[ V, Q ]$ are lightlike,
$s[ U, B ]$ and $s[ B, W ]$ and $s[ U, G ]$ and $s[ G, W ]$ and $s[ U, K ]$ and $s[ K, W ]$ and $s[ U, P ]$ and $s[ P, W ]$ are lightlike,
the separations of all ten pairs among the events $A, F, J, N, U$ are spacelike,
the separations of all ten pairs among the events $B, G, K, P, V$ are spacelike,
the separations of all ten pairs among the events $C, H, L, Q, W$ are spacelike,
and finally the separations of all twenty remaining event pairs are timelike?
Conversely, consider as a possible illustration of "indication":

(b) Is there a 3+1 dimensional Lorentzian manifold with nowhere vanishing Riemann curvature tensor such that all of its charts contain

• twenty-four events, conveniently organized as four triples ($A, B, C$; $\,\,\,\, F, G, H$; $\,\,\,\, J, K, L$; $\,\,\,\, N, P, Q$) and six pairs ($D, E$; $\,\,\,\, S, T$; $\,\,\,\, U, V$; $\,\,\,\, W, X$; $\,\,\,\, Y, Z$; $\,\,\,\, {\it\unicode{xA3}}, {\it\unicode{x20AC}\,}$),
• where (again explicitly, please bear with me$\, \!^*$):
the sixty-six separations among the twelve events belonging to the four triples are exactly as in question part (a),
each of the six pairs is timelike separated,
the separations of all fifteen pairs among the events $D, S, U, W, Y, {\it\unicode{xA3}}$ are spacelike,
the separations of all fifteen pairs among the events $E, T, V, X, Z, {\it\unicode{x20AC}\,}$ are spacelike,
$s[ D, {\it\unicode{x20AC}\,} ]$ and $s[ S, Z ]$ and $s[ U, X ]$ are spacelike,
$s[ E, {\it\unicode{xA3}} ]$ and $s[ T, Y ]$ and $s[ V, W ]$ are spacelike,
$s[ A, {\it\unicode{xA3}} ]$ and $s[ A, Y ]$ and $s[ A, W ]$ are spacelike,
$s[ A, {\it\unicode{x20AC}\,} ]$ and $s[ A, Z ]$ and $s[ A, X ]$ are timelike,
$s[ A, E ]$ and $s[ A, T ]$ and $s[ A, V ]$ are timelike,
$s[ C, {\it\unicode{x20AC}\,} ]$ and $s[ C, Z ]$ and $s[ C, X ]$ are spacelike,
$s[ C, {\it\unicode{xA3}} ]$ and $s[ C, Y ]$ and $s[ C, W ]$ are timelike,
$s[ C, D ]$ and $s[ C, S ]$ and $s[ C, U ]$ are timelike,
$s[ F, {\it\unicode{xA3}} ]$ and $s[ F, D ]$ and $s[ F, S ]$ are spacelike,
$s[ F, {\it\unicode{x20AC}\,} ]$ and $s[ F, E ]$ and $s[ F, T ]$ are timelike,
$s[ F, V ]$ and $s[ F, X ]$ and $s[ F, Z ]$ are timelike,
$s[ H, {\it\unicode{x20AC}\,} ]$ and $s[ H, E ]$ and $s[ H, T ]$ are spacelike,
$s[ H, {\it\unicode{xA3}} ]$ and $s[ H, D ]$ and $s[ H, S ]$ are timelike,
$s[ H, U ]$ and $s[ H, W ]$ and $s[ H, Y ]$ are timelike,
$s[ J, D ]$ and $s[ J, U ]$ and $s[ J, Y ]$ are spacelike,
$s[ J, E ]$ and $s[ J, V ]$ and $s[ J, Z ]$ are timelike,
$s[ J, T ]$ and $s[ J, X ]$ and $s[ J, {\it\unicode{x20AC}\,} ]$ are timelike,
$s[ L, E ]$ and $s[ L, V ]$ and $s[ L, Z ]$ are spacelike,
$s[ L, D ]$ and $s[ L, U ]$ and $s[ L, Y ]$ are timelike,
$s[ L, S ]$ and $s[ L, W ]$ and $s[ L, {\it\unicode{xA3}} ]$ are timelike,
$s[ N, D ]$ and $s[ N, S ]$ and $s[ N, W ]$ are spacelike,
$s[ N, E ]$ and $s[ N, T ]$ and $s[ N, X ]$ are timelike,
$s[ N, V ]$ and $s[ N, Z ]$ and $s[ N, {\it\unicode{x20AC}\,} ]$ are timelike,
$s[ Q, E ]$ and $s[ Q, T ]$ and $s[ Q, X ]$ are spacelike,
$s[ Q, D ]$ and $s[ Q, S ]$ and $s[ Q, W ]$ are timelike,
$s[ Q, U ]$ and $s[ Q, Y ]$ and $s[ Q, {\it\unicode{xA3}} ]$ are timelike,
and finally the separations of all ninety-six remaining event pairs are lightlike?

(*: The two sets of causal separation relations stated explicitly in question part (a) and part (b) are of course not arbitrary, but have motivations that are somewhat outside the immediate scope of my question -- considering Lorentzian manifolds -- itself. It may nevertheless be helpful, if not overly suggestive, to attribute the relations of part (a) to "five participants, each finding coincident pings from the four others", and the relations of part (b) to "ten participants -- four as vertices of a regular tetrahedron and six as middles between these vertices -- pinging among each other".)
- This sounds like a very complicated set of conditions - have you tried to see if you can satisfy your (a) and (b) counterexamples by doing the usual trick of taking Minkowski space and identifying various points, lines etc. to change the causal structure? –  twistor59 May 30 '13 at 6:26 @twistor59: "This sounds like a very complicated set of conditions" -- still the simplest I could think of that I expect to be relevant. (The footnote may hint at why I thought so.) "have you tried [...] the usual trick of taking Minkowski space and identifying [...] the causal structure?" -- (1): thanks for pointing out that this "usual trick" applies to both question parts (a) and (b), as presently stated (may I rephrase (b), perhaps? ...). Is the applicable "causal structure of Minkowski space" even spelt out anywhere explicitly enough; even by J. W. Schutz, or A. A. Robb? ... And (2): –  user12262 May 30 '13 at 19:31 ... (2): yes, I tried; case (a) seriously for about a week, until I had an idea how to solve it just a few days ago; while case (b), as presently stated, is sort of trivial once it is clear that I'm asking about Minkowski space (but I have also been interested for quite a while, and unable to quite solve so far, the question of whether the structure of case (b) would then identify "mutual rest" of participants). Finally (3): I failed to realize that this "usual trick" is directly applicable because the answer here (see above) doesn't mention it either. –  user12262 May 30 '13 at 19:32 p.s. Reading the first comment again I now also notice that I had significantly crippled the suggested "usual trick" in quoting. Sorry, FWIW; I still took the occasion to edit part (b) of my question, to be perhaps less easily answered, and perhaps even be addressable by applying the suggested "usual trick" as intended ... –  user12262 May 30 '13 at 20:14 The causal structure of spacetime is invariant under conformal transformations, i.e. transformations where the metric $g$ is changed to a new metric $\tilde{g} = \Omega^2 g$ [1]. The Riemann tensor as a whole is not invariant under conformal transformations, and therefore it cannot be a good measure of causal structure.
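The sign argument behind that last point fits in one line: for any tangent vector $v$,

$$\tilde{g}(v,v) = \Omega^2\, g(v,v), \qquad \Omega^2 > 0 \;\Longrightarrow\; \operatorname{sign}\tilde{g}(v,v) = \operatorname{sign} g(v,v),$$

so the timelike/lightlike/spacelike character of every pair is preserved under the rescaling, while the Riemann tensor of $\tilde g$ in general differs from that of $g$.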
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8529587984085083, "perplexity": 348.12353563977683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802767453.104/warc/CC-MAIN-20141217075247-00103-ip-10-231-17-201.ec2.internal.warc.gz"}
http://jmlr.org/papers/v17/gulchere16a.html
## Knowledge Matters: Importance of Prior Information for Optimization

Çağlar Gülçehre, Yoshua Bengio; 17(8):1−32, 2016.

### Abstract

We explored the effect of introducing prior knowledge into the intermediate level of deep supervised neural networks on two tasks. On a task we designed, all of the black-box state-of-the-art machine learning algorithms that we tested failed to generalize well. We motivate our work from the hypothesis that there is a training barrier inherent in the nature of such tasks, and that humans learn useful intermediate concepts from other individuals by means of supervision or guidance using a curriculum. Our results provide positive evidence in favor of this hypothesis. In our experiments, we trained a two-tiered MLP architecture on a dataset in which each input image contains three sprites, and the binary target class is $1$ if all three shapes belong to the same category and $0$ otherwise. In terms of generalization, black-box machine learning algorithms could not perform better than chance on this task. Standard deep supervised neural networks also failed to generalize. However, using a particular structure and guiding the learner by providing intermediate targets in the form of intermediate concepts (the presence of each object) allowed us to solve the task efficiently. We obtained much better than chance, but imperfect, results by exploring different architectures and optimization variants. This observation might be an indication of optimization difficulty when the neural network is trained without hints on this task. We hypothesize that the learning difficulty is due to the composition of two highly non-linear tasks. Our findings are also consistent with hypotheses on cultural learning, inspired by the observation that the training of neural networks sometimes gets stuck even though good solutions exist, both in terms of training and generalization error. [abs][pdf][bib]
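As a rough sketch of the guidance idea described above (this is not the authors' code; all sizes, architecture widths, and the loss weighting are illustrative assumptions):

```python
# Sketch of intermediate-concept supervision ("hints") with a two-tiered MLP;
# shapes and hyperparameters are illustrative only.
import torch
import torch.nn as nn

N_SPRITES, N_CATS, IN_DIM = 3, 8, 64 * 64  # hypothetical task sizes

class TwoTierMLP(nn.Module):
    def __init__(self):
        super().__init__()
        # Tier 1 predicts the category logits of each sprite (the hints).
        self.tier1 = nn.Sequential(
            nn.Linear(IN_DIM, 256), nn.ReLU(),
            nn.Linear(256, N_SPRITES * N_CATS),
        )
        # Tier 2 maps the intermediate concepts to the binary target.
        self.tier2 = nn.Sequential(
            nn.Linear(N_SPRITES * N_CATS, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        concepts = self.tier1(x)
        return concepts, self.tier2(concepts)

model = TwoTierMLP()
x = torch.randn(32, IN_DIM)                       # fake batch of images
sprite_cats = torch.randint(0, N_CATS, (32, N_SPRITES))
same = (sprite_cats.min(1).values == sprite_cats.max(1).values).float()

concepts, logit = model(x)
hint_loss = nn.CrossEntropyLoss()(
    concepts.view(-1, N_CATS), sprite_cats.view(-1))
final_loss = nn.BCEWithLogitsLoss()(logit.squeeze(1), same)
loss = hint_loss + final_loss  # the hint term guides the hard optimization
```

Dropping `hint_loss` from the total recovers the unguided setting in which, per the abstract, training tends to get stuck.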
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5424478054046631, "perplexity": 755.9786342539708}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647153.50/warc/CC-MAIN-20180319214457-20180319234457-00238.warc.gz"}
https://wq.io/1.0/docs/setup-windows
# Installing wq on Windows 8.1

The following steps should help you install wq and get a wq-powered web application running on Windows 8.1, Windows 10, or Windows Server 2012. These steps are tested on Windows 8.1.

## Using both wq.db and wq.app

FIXME: This section is a WIP.

## Using only wq.app

If you are only interested in using wq.app, you can run `pip3 install wq.app` or simply download the latest release directly from GitHub. You will likely want to set up your project with the following layout (inspired by volo):

    project/
      js/
        lib/ -> /path/to/wq.app/js/
        myapp/
          main.js
          myapp.js
      css/
        lib/ -> /path/to/wq.app/css/
        myapp.css
      scss/
        lib/
          wq/ -> /path/to/wq.app/scss/wq
          compass/ -> /path/to/compass_stylesheets/stylesheets/compass/
        themes.scss
      images/
      templates/
      index.html
      wq.yml

Note that wq.app currently comes bundled with all of its JavaScript dependencies vendored in. So, for many applications, you should be able to use wq.app's js/ folder directly as your js/lib/ folder. The typical workflow is to symbolically link to wq.app's js/ folder from your app's js/lib/ folder, and similarly for css and scss. If you have other dependencies, or want to use different versions of the vendored apps, create your js/lib/ folder first, and link to wq.app's js/wq folder from js/lib/wq/. In either case, wq init can do the linking automatically. If you use the default js/, css/, and/or scss/ folder names, wq init will work without any configuration required.

That said, you'll likely want to create a configuration file called wq.yml with an optimize section (which is required to run the build process). An example wq.yml can be obtained from the Django wq template. The full list of options is documented in the wq build section.

Download and install Python 3 and Node if you don't have them already. When installing Python, be sure to enable the option to add Python.exe to the system path. (You might need to log out and back in for the setting to take effect.) Then run the following from a command prompt:

    pip3 install wq.app

Next, create a project folder with js & css subdirectories and a wq.yml configuration file. Then run the following from a command prompt:

    cd C:\path\to\my\project
    wq init

If you get an error about symbolic link privilege not held, try running the command prompt as administrator. If you are unable to do this, follow the Python 2 instructions below.

#### Python 2.7

You can also use wq.app with Python 2.7, though this usage is deprecated. Python 2.7 on Windows does not support symbolic links, so wq init will not work. You can copy the folder wq/app/js into your project's js folder and rename it to "lib". Similarly, copy wq/app/css into your project's css folder and rename it to "lib".

### Utilizing wq.app

Once you have done this you should be able to reference wq.app's modules from your JavaScript code:

    // myapp/mymodule.js
    define(['wq/chart'], function(chart) {
        // do something
    });

See the wq.app module list for available modules, and the build docs for information about available build options.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.168104887008667, "perplexity": 6306.736518360512}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583831770.96/warc/CC-MAIN-20190122074945-20190122100945-00128.warc.gz"}
http://urania.sissa.it/xmlui/handle/1963/2844
# A note on the power divergence in lattice calculations of $\Delta I = 1/2$ $K\to\pi\pi$ amplitudes at $M_{K}=M_{\pi}$

## This item appears in the following Collection(s)

• Papers — articles, preprints, proceedings, book chapters, books, lecture notes
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7144686579704285, "perplexity": 6258.229032846816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718423.65/warc/CC-MAIN-20161020183838-00238-ip-10-171-6-4.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/18406/a-very-simple-discrete-dynamical-system-with-pebbles
# A very simple discrete dynamical system with pebbles

Let us suppose we have $n$ slots $1, \ldots, n$ and $k$ pebbles, each of which is initially placed in some slot. Now the pebbles want to space themselves out as evenly as possible, and so they do the following. At each time step $t$, each pebble moves to the slot closest to the halfway point between its neighboring pebbles; if there is a tie, it chooses the slot to the left. The leftmost and rightmost pebbles apply the same procedure, but imagining that there are slots numbered $0$ and $n+1$ with pebbles in them. Formally, numbering the pebbles $1, \ldots, k$ from left to right, and letting $x_i(t)$ be the slot of the $i$'th pebble at time $t$, we have $$x_i(t+1) = \left\lfloor \frac{x_{i-1}(t)+x_{i+1}(t)}{2} \right\rfloor, \quad i = 2, \ldots, k-1,$$ where $\lfloor \cdot \rfloor$ rounds down to the closest integer. Similarly, $$x_1(t+1) = \left\lfloor \frac{x_2(t)}{2} \right\rfloor, \qquad x_k(t+1) = \left\lfloor \frac{x_{k-1}(t) + (n+1)}{2} \right\rfloor.$$ Now the fixed point of this procedure is an arrangement in which $x_{i+1} - x_i$ and $x_i - x_{i-1}$ differ by at most $1$. My question: is it true that this fixed point is reached by the above procedure after sufficiently many iterations? Why I care: no concrete reason really; I am just reading about finite difference methods, and this seemed like a simple problem connected with some of the things which are confusing me. - You're missing a factor of 1/2 in your expression for $x_1(t+1)$. –  mjqxxxx Jan 21 '11 at 7:40 @mjqxxxx - thanks, fixed now. –  angela o. Jan 21 '11 at 12:38 **Update.** Even simpler: 5 slots. $(1,4) \rightarrow (2,3) \rightarrow (1,4)$. You have a total number of unoccupied spaces $n-k$, divided into $k+1$ intervals between pebbles. Your procedure looks at adjacent pairs of empty intervals and tries to make them more similar. If they differ by more than one, the larger of the two intervals will be reduced and the smaller increased so that their difference becomes zero or one. If they differ by zero, they will be left alone. If they differ by one, they will be swapped if necessary so that the larger of the two intervals is on the right. In short, the only steady states will be those where each interval is the same size as, or one greater than, its neighboring interval on the left -- there are many of these. If the update rules are applied one pebble at a time, a steady state will always be reached, because the total discrepancy, $D \equiv \sum_{i=1}^{k} |x_{i+1} - 2x_i + x_{i-1} - 1/2|$, decreases with each move. But if the update rules are applied simultaneously, it is less clear that you must reach a steady state. I think that you still must, but I don't have as simple a proof.
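The two-cycle $(1,4) \rightarrow (2,3) \rightarrow (1,4)$ from the answer is easy to reproduce mechanically. A minimal sketch of the synchronous update rule (the function names are mine):

```python
# Synchronous pebble update: each pebble jumps to the floor of the midpoint of
# its neighbours, with virtual pebbles fixed at slots 0 and n+1.
def step(x, n):
    k = len(x)
    new = []
    for i in range(k):
        left = x[i - 1] if i > 0 else 0
        right = x[i + 1] if i < k - 1 else n + 1
        new.append((left + right) // 2)
    return tuple(new)

def iterate(x, n, max_steps=50):
    """Run the map until a state repeats, reporting the cycle it entered."""
    seen = {}
    for t in range(max_steps):
        if x in seen:
            return f"cycle of length {t - seen[x]} entered at step {seen[x]}"
        seen[x] = t
        x = step(x, n)
    return "no repeat within max_steps"

print(iterate((1, 4), 5))   # -> cycle of length 2 entered at step 0
```

With the one-pebble-at-a-time variant from the answer, the same harness would instead terminate at a steady state.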
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.796898365020752, "perplexity": 244.77003869077035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00201-ip-10-147-4-33.ec2.internal.warc.gz"}
https://leanprover-community.github.io/archive/stream/113489-new-members/topic/Running.20the.20linter.20locally.html
## Stream: new members ### Topic: Running the linter locally #### Frédéric Dupuis (Jul 21 2020 at 00:15): Is there a way to run the linter locally to make sure all the checks pass before committing? #### Jalex Stark (Jul 21 2020 at 00:26): #lint #### Jalex Stark (Jul 21 2020 at 00:26): i usually put it at the bottom of the file but maybe one can put it anywhere instead #### Floris van Doorn (Jul 21 2020 at 05:09): To elaborate on Jalex: #lint checks all declarations in that file (above the #lint command). There is also #lint_mathlib that checks in all files (in mathlib) that you've imported. #### Frédéric Dupuis (Jul 21 2020 at 13:21): Thanks! Last updated: May 14 2021 at 05:20 UTC
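A minimal sketch of the placement being described, assuming a Lean 3 project with mathlib on the path (the lemma is a throwaway):

```lean
import tactic  -- mathlib; provides the #lint user command

/-- A throwaway lemma so the linter has something to check. -/
lemma two_mul_two : 2 * 2 = 4 := rfl

#lint  -- reports linter failures for all declarations above this line
```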
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5974053144454956, "perplexity": 10734.307517996896}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991648.40/warc/CC-MAIN-20210514060536-20210514090536-00613.warc.gz"}
http://cpr-nuclth.blogspot.com/2013/08/13080418-hirotaka-shimoyama-et-al.html
## Di-neutron correlation in monopole two-neutron transfer modes in Sn isotope chain    [PDF]

Hirotaka Shimoyama, Masayuki Matsuo

We study microscopic structures of monopole pair vibrational modes and associated two-neutron transfer amplitudes in neutron-rich Sn isotopes by means of the linear response formalism of the quasiparticle random phase approximation (QRPA). For this purpose we introduce a method to decompose the transfer amplitudes with respect to two-quasiparticle components of the QRPA eigenmode. It is found that pair-addition vibrational modes in neutron-rich $^{132-140}$Sn and the pair rotational modes in $^{142-150}$Sn are commonly characterized by coherent contributions of quasiparticle states having high orbital angular momenta $l \gtrsim 5$, which suggests transfer of a spatially correlated neutron pair. The calculation also predicts a high-lying pair vibration, the giant pair vibration, emerging near the one-neutron separation energy in $^{110-130}$Sn, and we find that it has the same di-neutron character as that of the low-lying pair vibration in $^{132-140}$Sn. View original: http://arxiv.org/abs/1308.0418
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9287623167037964, "perplexity": 6816.95547349551}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320736.82/warc/CC-MAIN-20170626115614-20170626135614-00606.warc.gz"}
http://swaper.72vanni.ru/net+liquidating+value/32.html
# Net liquidating value

The book value is the value of the asset as listed on the balance sheet. The balance sheet lists assets at historical cost, so the value of assets may be higher or lower than market prices. In an economic environment with rising prices, the book value of assets is lower than the market value. For a cash account, this is equal to the lesser of ELV or Previous Day ELV less the Initial Margin Requirement. The net liquidating value is the estimated amount of money that an asset or company could quickly be sold for, such as if it were to go out of business. It is calculated as $$\text{net liquidating value} = \text{total asset value} - \text{total liabilities}.$$ Your P&L and returns are computed based on changes in your net liquidation value.
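As a small illustration, assuming the common brokerage convention that an account's net liquidating value is cash plus the marked-to-market value of open positions (the quoted page may intend a different exact formula):

```python
# Net liquidating value under the usual brokerage convention:
# cash plus the current market value of every open position.
def net_liquidating_value(cash, positions, prices):
    """positions: {symbol: quantity}; prices: {symbol: market price}."""
    return cash + sum(qty * prices[sym] for sym, qty in positions.items())

nlv = net_liquidating_value(
    cash=10_000.0,
    positions={"XYZ": 100, "ABC": -50},   # short positions count negatively
    prices={"XYZ": 25.0, "ABC": 40.0},
)
print(nlv)  # 10000 + 2500 - 2000 = 10500.0
```

Tracking this quantity between two dates is what "P&L computed from changes in net liquidation value" amounts to.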
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26739412546157837, "perplexity": 2837.168640535328}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267862929.10/warc/CC-MAIN-20180619115101-20180619135101-00018.warc.gz"}
https://leanprover-community.github.io/archive/stream/113489-new-members/topic/squeezing.20lemma.html
## Stream: new members

### Topic: squeezing lemma

#### Hanting Zhang (Dec 25 2020 at 20:22):

Is there any lemma for $a \leq b \leq a$ implies $a = b$?

#### Eric Wieser (Dec 25 2020 at 20:28):

docs#le_antisymm I think?

#### Eric Wieser (Dec 25 2020 at 20:29):

tactic#library_search would find that

#### Hanting Zhang (Dec 25 2020 at 20:42):

Mhm. Thanks, I guess it'll take a while to pick up on how to use everything. Didn't even know library_search existed. :o

Last updated: May 14 2021 at 05:20 UTC
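For reference, a minimal usage sketch in Lean 3 (mathlib's `le_antisymm` has type `a ≤ b → b ≤ a → a = b` on any partial order):

```lean
-- No extra imports needed for ℕ; le_antisymm is available from core/order.
example (a b : ℕ) (h1 : a ≤ b) (h2 : b ≤ a) : a = b :=
le_antisymm h1 h2
```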
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7455417513847351, "perplexity": 22973.452259930807}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991648.40/warc/CC-MAIN-20210514060536-20210514090536-00078.warc.gz"}
http://science.sciencemag.org/content/ns-12/288/72.2
Letters

# Note on Breeding-Habits of the Bill-Fish (Tylosurus longirostris)

Science, 10 Aug 1888: Vol. ns-12, Issue 288, pp. 72. DOI: 10.1126/science.ns-12.288.72-a

This is a PDF-only article. The first page of the PDF of this article appears above.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8122674226760864, "perplexity": 23469.28692162427}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824912.16/warc/CC-MAIN-20181213145807-20181213171307-00423.warc.gz"}
http://appliedmechanics.asmedigitalcollection.asme.org/article.aspx?articleid=1415587
TECHNICAL PAPERS

# Stokes Mechanism of Drag Reduction

Author and Article Information: Autonomous Systems and Technology Department, Naval Undersea Warfare Center, Newport, RI 02841; bandyopadhyaypr@npt.nuwc.navy.mil

J. Appl. Mech 73(3), 483-489 (Sep 20, 2005) (7 pages), doi:10.1115/1.2125974

History: Received November 23, 2003; Revised September 20, 2005

## Abstract

The mechanism of drag reduction due to spanwise wall oscillation in a turbulent boundary layer is considered. Published measurements and simulation data are analyzed in light of Stokes' second problem. A kinematic vorticity reorientation hypothesis of drag reduction is first developed. It is shown that spanwise oscillation seeds the near-wall region with oblique and skewed Stokes vorticity waves. They are attached to the wall and gradually align to the freestream direction away from it. The resulting Stokes layer has an attenuated nature compared to its laminar counterpart. The attenuation factor increases in the buffer and viscous sublayer as the wall is approached. The mean velocity profile at the condition of maximum drag reduction is similar to that due to polymer. The final mean state of maximum drag reduction due to turbulence suppression appears to be universal in nature. Finally, it is shown that the proposed kinematic drag reduction hypothesis describes the measurements significantly better than current direct numerical simulation does.

## Figures

Figure 1: Measurements of change in axial velocity due to spanwise oscillation of a turbulent boundary layer, normalized by freestream velocity. Symbols: ◯=1 Hz; ◻=3 Hz; ▵=5 Hz; ×=7 Hz of wall-oscillation frequency. Reproduced from Choi (5).

Figure 2: Comparison of simulation and measurements of drag reduction due to spanwise wall-oscillation (4). Symbols are measurements in turbulent boundary layers: ◯=Choi (5) and ▵=Laadhari (6); ——— is DNS simulation in a channel flow due to Akhavan and co-workers (3). Reproduced from Choi (5).

Figure 3: Schematic of the drag reduction hypothesis of vorticity reorientation.

Figure 4: Drag reduction due to the proposed vorticity reorientation hypothesis.

Figure 5: The Stokes phase variation of total axial velocity. Symbols: ———=1 Hz; ×=3 Hz; ◻=5 Hz; ◯=7 Hz. Measurements are due to Choi (5).

Figure 6: Total axial velocity in normal wall-layer velocity and length scales. Solid line is U+=y+. Symbols: ———=1 Hz; ×=3 Hz; ◻=5 Hz; ◯=7 Hz. Measurements are due to Choi (5).

Figure 7: Regular law-of-the-wall representation of the effects of increasing frequency on total axial velocity. The reference sublayer profile and the logarithmic velocity profile of unperturbed turbulent boundary layers are shown (———). Virk's (11) ultimate polymer profile for the condition of maximum drag reduction is also shown (– – –). Symbols: ◯=1 Hz; ◻=3 Hz; ▵=5 Hz; ×=7 Hz. Measurements are due to Choi (5).

Figure 8: Stokes layer scaling of change in axial velocity due to spanwise oscillation. Symbols: ◻=3 Hz; ▵=5 Hz; ×=7 Hz. Measurements are due to Choi (5).

Figure 9: Stokes layer phase lag representation of spanwise fluid material displacement (symbols; ———=Δz∕Z) in the oscillating turbulent boundary layer, compared with the laminar Stokes velocity profile (--•--=w∕W). Symbols: ×=5 Hz (sample 1); ◻=5 Hz (sample 2); ▵=2 Hz. Band in η indicates thickness of the laser light sheet used for flow visualization. Data extracted from flow visualization video due to Choi (5,10).

Figure 10: Attenuated nature of the near-wall Stokes layer in an oscillating turbulent boundary layer. Symbols: ◯=1 Hz; ◻=3 Hz; ▵=5 Hz; ×=7 Hz. Measurements are due to Choi (5).

Figure 11: Oblique "two-dimensional" Stokes waves. Freestream direction is from left to right. Frame extracted from flow visualization video due to Choi (5). Freestream velocity is 1.5 m∕s; wall-oscillation frequency is 5 Hz; amplitude of spanwise oscillation is 50 mm. Figure is roughly to scale; height of laser light sheet is 1±0.5 mm from the wall.

Figure 12: Extrapolation of Choi's (5) experimental condition (symbols), carried out via the proposed hypothesis of drag reduction (solid line) for the determination of the condition for maximum drag reduction.

Figure 13: Relationship between vorticity reorientation angle and Choi's oscillation parameter. Symbols are from extrapolation of the flow visualization data shown in Fig. 9 to the wall.

Figure 14: Recovery of the mean velocity profile for maximum drag reduction from an unperturbed profile by means of attenuated Stokes layer modeling. Symbols: ◯=unperturbed flow; ▵=computed maximum drag reduction. Broken and thick solid lines are Virk's polymer maximum drag reduction profiles (11).

Figure 15: Calculated variation of the Stokes attenuation parameter across the boundary layer for the condition of maximum drag reduction.

Figure 16: Validation of the vorticity reorientation hypothesis of drag reduction. Symbols are measurements in turbulent boundary layers: ◯=Choi (5) and ◻=Laadhari (6); - - - is DNS simulation in a channel flow due to Akhavan and co-workers (3), and ——— is the presently proposed vorticity reorientation hypothesis.

Figure 17: Comparison of the yaw angle (α, line) of the resultant wall vorticity due to turbulence as per the vorticity reorientation hypothesis [Eq. 1], with hydrogen bubble measurements of the variation of the angle of inclination γ (symbol) of wall streaks with respect to the mean streamwise direction of flow, with fractional drag reduction (12).
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8710774779319763, "perplexity": 9761.619277464513}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221211719.12/warc/CC-MAIN-20180817045508-20180817065508-00294.warc.gz"}
http://mathhelpforum.com/advanced-algebra/542-need-help.html
1. ## need help

I really need help. How do you do this problem?

Factor 4g from: $8g^2j - 12g^3hj$

2. ## Factor question

Factor 4g from: 8g^2j - 12g^2hj

Do you mean?: $8g^{2j} - 12g^{2hj}$

3. I wrote it the way the book shows the problem.

Factor 4g from: $8g^2 j - 12g^3 hj$

I'm really confused; I've looked in the book and I cannot find an example of this problem anywhere.

4. i think what you mean is: $8g^2j - 12g^3hj$

If that is the case then it is pretty easy to factor out a 4g. You would get: $2gj - 3g^2hj$

You simply divide by 4g.

5. Thanks for the help.

6. ## Factoring out

Just to be more precise: $8g^2j - 12g^3hj$

To factor out 4g you would write the following: $4g[2gj - 3g^2hj]$

7. Thank you all once again for the help.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8422662615776062, "perplexity": 2711.391848123403}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687642.30/warc/CC-MAIN-20170921044627-20170921064627-00056.warc.gz"}
https://forum.sumatrapdfreader.org/t/can-you-see-if-a-pdf-is-locked-against-editing-or-not/2182
# Can you see if a PDF is locked (against editing) or not?

bugmenot: Is there any setting that can tell you if the PDF is like this?

GitHubRulesOK: The only way is to use the dropdown File Properties or Ctrl+D, where if some settings such as no copy are active, it will show at the bottom as "Denied Permissions - copying text".
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9080073833465576, "perplexity": 3389.227086703486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00283.warc.gz"}
http://crypto.stackexchange.com/questions/11072/dsa-generate-signature-and-verify
# DSA generate signature and verify

I'm trying to generate a signature for DSA with the following parameters: $p = 23$, $q = 11$, $g = 3$, $H(m) = 8$, $x = 5$. For the life of me, I cannot choose a random $k$ ($0 < k < q$) that will give me $r$, $s$ that 'add up' when calculating $w$, $u_1$, $u_2$, and verifying. I don't know if I'm just doing my maths wrong, but I've tried every possible $k$ between $0$ and $11$ and I just can't get $v = r$ at the end of verification. - What is the signature you produced, and what is the private key ($x$)? –  B-Con Oct 16 '13 at 4:51 @B-Con oops, I forgot to give the private key: x=5. As for the signature - I have produced multiple ones while attempting to find a value of k that 'adds up' when calculating v. I think I may be doing something wrong in the calculations, because surely at least one value $0 < k < 11$ must work. –  jonsnowed Oct 16 '13 at 7:24 Ok, back to my initial answer (which I edited to the last version, thinking that you did not choose an appropriate generator): I now think that you may be calculating the inverses wrongly. I tried it with $k=2$ and get: $r=9, k^{-1}=6, s=10, w=10, u_1=3, u_2=2$, which works out. Choosing a generator Since the order $11$ is prime, you can simply choose an arbitrary element of $Z_{23}^*$, say $h$, and then compute your $g$ as $g=h^{22/11} \pmod{23}$; i.e., every element that lies in this subgroup is a generator of this subgroup. Take for instance $h=2$ and compute $g=h^{2}\pmod{23}=4$ (in your case, $3$ is also fine). General case: For the general case $Z_p^*$ with $p$ being prime, an element $g$ is a generator if for all prime divisors $q_i$ of the order $p-1$ the following holds: $g^{(p-1)/q_i}\neq 1 \pmod{p}$. Typically, you construct $p$ as a safe prime, i.e., choosing a prime $q$ and setting $p=2q+1$, as then you know the prime divisors $2$ and $q$ by construction. - hi! thank you so much for answering. I see what you're saying and I had not thought of it before. HOWEVER, this is revision I am doing and the question is on a worksheet given to the class. I didn't choose g... they gave us the values for g, p, q, x and H(m). That's why I said I couldn't come up with a suitable value for k... the only value they didn't specify in the question. –  jonsnowed Oct 16 '13 at 9:01 Then you should point out that the worksheet is buggy ;) –  DrLecter Oct 16 '13 at 9:07 are you sure?? I mean, the very first question in this set is to show that g has order q as required. Our lectures say to calculate this by showing that $g^q \bmod p = 1$, which it does: $3^{11} \bmod 23 = 1$. So how can it be wrong? –  jonsnowed Oct 16 '13 at 9:28 You are right, I thought this would be the mistake. But actually, I think you may make some errors when computing multiplicative inverses? –  DrLecter Oct 16 '13 at 9:54 sorry but... how do you get $6$ from $2^{-1}$?? –  jonsnowed Oct 16 '13 at 9:58
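The accepted numbers can be reproduced end to end with a few lines of Python (3.8+ for the modular inverse via `pow`; toy parameters only, and the variable layout is mine):

```python
# Toy DSA with p=23, q=11, g=3, x=5, H(m)=8, k=2 -- matching the answer's numbers.
p, q, g = 23, 11, 3
x, H, k = 5, 8, 2
y = pow(g, x, p)                       # public key: 3^5 mod 23 = 13

# Sign
r = pow(g, k, p) % q                   # (3^2 mod 23) mod 11 = 9
s = (pow(k, -1, q) * (H + x * r)) % q  # 6 * (8 + 45) mod 11 = 10

# Verify
w = pow(s, -1, q)                      # 10^-1 mod 11 = 10
u1, u2 = (H * w) % q, (r * w) % q      # 3 and 2
v = (pow(g, u1, p) * pow(y, u2, p) % p) % q

print(r, s, w, u1, u2, v)              # 9 10 10 3 2 9
assert v == r                          # the signature verifies
```

In particular, `pow(2, -1, 11)` returns 6, which answers the last comment: $2 \cdot 6 = 12 \equiv 1 \pmod{11}$.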
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8804041147232056, "perplexity": 332.4607891258917}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928780.77/warc/CC-MAIN-20150521113208-00166-ip-10-180-206-219.ec2.internal.warc.gz"}
https://sppgway.jhuapl.edu/pspbiblio?aname=Pecora
# PSP Bibliography

## Found 6 entries in the Bibliography.

### Showing entries from 1 through 6

2022

Isotropization and Evolution of Energy-containing Eddies in Solar Wind Turbulence: Parker Solar Probe, Helios 1, ACE, WIND, and Voyager 1

We examine the radial evolution of correlation lengths perpendicular ($\lambda_C^\perp$) and parallel ($\lambda_C^\parallel$) to the magnetic-field direction, computed from solar wind magnetic-field data measured by Parker Solar Probe (PSP) during its first eight orbits, Helios 1, Advanced Composition Explorer (ACE), WIND, and Voyager 1 spacecraft. Correlation lengths are grouped by an interval's alignment angle, the angle between the magnetic-field and solar wind velocity vectors ($\Theta_{BV}$). Parallel a ...

Cuesta, Manuel; Chhiber, Rohit; Roy, Sohom; Goodwill, Joshua; Pecora, Francesco; Jarosik, Jake; Matthaeus, William; Parashar, Tulasi; Bandyopadhyay, Riddhi. Published by: ApJL, June 2022. DOI: 10.3847/2041-8213/ac73fd

On the Transmission of Turbulent Structures across the Earth's Bow Shock

Collisionless shocks and plasma turbulence are crucial ingredients for a broad range of astrophysical systems. The shock-turbulence interaction, and in particular the transmission of fully developed turbulence across the quasi-perpendicular Earth's bow shock, is here addressed using a combination of spacecraft observations and local numerical simulations. An alignment between the Wind (upstream) and Magnetospheric Multiscale (downstream) spacecraft is used to study the transmission of turbulent structures across the shock, r ...

Trotta, Domenico; Pecora, Francesco; Settino, Adriana; Perrone, Denise; Hietala, Heli; Horbury, Timothy; Matthaeus, William; Burgess, David; Servidio, Sergio; Valentini, Francesco. Published by: ApJ, July 2022. DOI: 10.3847/1538-4357/ac7798

Magnetic Switchback Occurrence Rates in the Inner Heliosphere: Parker Solar Probe and 1 au

The subject of switchbacks, defined either as large angular deflections or polarity reversals of the magnetic field, has generated substantial interest in the space physics community since the launch of the Parker Solar Probe (PSP) in 2018. Previous studies have characterized switchbacks in several different ways and have been restricted to data available from the first few orbits. Here, we analyze the frequency of occurrence of switchbacks per unit distance for the first full eight orbits of PSP. In this work, events that r ...

Pecora, Francesco; Matthaeus, William; Primavera, Leonardo; Greco, Antonella; Chhiber, Rohit; Bandyopadhyay, Riddhi; Servidio, Sergio. Published by: ApJL, April 2022. DOI: 10.3847/2041-8213/ac62d4

2021

Parker solar probe observations of helical structures as boundaries for energetic particles

Energetic particle transport in the interplanetary medium is known to be affected by magnetic structures. It has been demonstrated for solar energetic particles in near-Earth orbit studies, and also for the more energetic cosmic rays. In this paper, we show observational evidence that intensity variations of solar energetic particles can be correlated with the occurrence of helical magnetic flux tubes and their boundaries. The analysis is carried out using data from Parker Solar Probe orbit 5, in the period 2020 May 24 to Ju ...

Pecora, F.; Servidio, S.; Greco, A.; Matthaeus, W. H.; McComas, D. J.; Giacalone, J.; Joyce, C. J.; Getachew, T.; Cohen, C. M. S.; Leske, R. A.; Wiedenbeck, M. E.; McNutt, R. L.; Hill, M. E.; Mitchell, D. G.; Christian, E. R.; Roelof, E. C.; Schwadron, N. A.; Bale, S. D. Published by: MNRAS, September 2021. DOI: 10.1093/mnras/stab2659

2020

Identification of coherent structures in space plasmas: The magnetic helicity-PVI method

Context. Plasma turbulence can be viewed as a magnetic landscape populated by large- and small-scale coherent structures. In this complex network, large helical magnetic tubes might be separated by small-scale magnetic reconnection events (current sheets). However, the identification of these magnetic structures in a continuous stream of data has always been a challenging task. Aims: Here, we present a method that is able to characterize both the large- and small-scale structures of the turbulent solar wind, based on the c ...

Pecora, F.; Servidio, S.; Greco, A.; Matthaeus, W. Published by: Astronomy and Astrophysics, June 2020. DOI: 10.1051/0004-6361/202039639

2019

Single-spacecraft Identification of Flux Tubes and Current Sheets in the Solar Wind

A novel technique is presented for describing and visualizing the local topology of the magnetic field using single-spacecraft data in the solar wind. The approach merges two established techniques: the Grad-Shafranov (GS) reconstruction method, which provides a plausible regional two-dimensional magnetic field surrounding the spacecraft trajectory, and the Partial Variance of Increments (PVI) technique that identifies coherent magnetic structures, such as current sheets. When applied to one month of Wind magnetic field d ...

Pecora, Francesco; Greco, Antonella; Hu, Qiang; Servidio, Sergio; Chasapis, Alexandros; Matthaeus, William. Published by: The Astrophysical Journal, August 2019. DOI: 10.3847/2041-8213/ab32d9
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6290965676307678, "perplexity": 8907.14060462166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00282.warc.gz"}
https://researchportal.port.ac.uk/en/publications/micro-finite-element-models-of-the-vertebral-body-validation-of-l
# Micro finite element models of the vertebral body: validation of local displacement predictions

Maria Costa, Gianluca Tozzi, Luca Cristofolini, Valentina Danesi, Marco Viceconti, Enrico Dall'Ara

Research output: Contribution to journal › Article › peer-review

## Abstract

The estimation of local and structural mechanical properties of bones with micro Finite Element (microFE) models based on Micro Computed Tomography images depends on the quality with which bone geometry is captured, reconstructed and modelled. The aim of this study was to validate microFE model predictions of local displacements for vertebral bodies and to evaluate the effect of the elastic tissue modulus on the models' predictions of axial forces. Four porcine thoracic vertebrae were axially compressed in situ, in a step-wise fashion, and scanned at approximately 39 µm resolution in preloaded and loaded conditions. A global digital volume correlation (DVC) approach was used to compute the full-field displacements. Homogeneous, isotropic and linear elastic microFE models were generated with boundary conditions assigned from the interpolated displacement field measured with the DVC. Measured and predicted local displacements were compared for the cortical and trabecular compartments in the middle of the specimens. Models were run with two different tissue moduli, defined from microindentation data (12.0 GPa) and from a back-calculation procedure (4.6 GPa). The predicted sum of axial reaction forces was compared to the experimental values for each specimen. MicroFE models predicted more than 87% of the variation in the displacement measurements (R² = 0.87-0.99). However, model predictions of axial forces were largely overestimated (80-369%) for a tissue modulus of 12.0 GPa, whereas differences in the range 10-80% were found for the back-calculated tissue modulus. The specimen with the lowest density showed a large number of elements strained beyond yield and the highest predictive errors. This study shows that the simplest microFE models can accurately predict the local displacements quantitatively, and the strain distribution within the vertebral body qualitatively, independently of the bone type considered.

Original language: English
Article number: e0180151
Journal: PLoS One
Volume: 12
Issue: 7
DOI: https://doi.org/10.1371/journal.pone.0180151
Publication status: Published - 11 Jul 2017

• RCUK
• EPSRC
• EP/K03877X/1
• RS
• RG130831
• RG150012
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8011826872825623, "perplexity": 7828.804200930814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104597905.85/warc/CC-MAIN-20220705174927-20220705204927-00173.warc.gz"}
http://math.stackexchange.com/questions/82259/finding-volume-of-a-cone-through-integration
# Finding volume of a cone through integration

I am trying to find the volume of a cone using integration through horizontal slicing. The cone has a base radius of 10 cm and a height of 5 cm. I am assuming this means I should integrate with respect to y, but I am not entirely sure how to set this up. I know that the volume of a cylinder is given by the following: $$V = \pi r^2h$$ So I am assuming that the integral would be: $$\pi \int_0^5 f(y)^2\,dy$$ I am not sure how the radius of 10 cm (since it is not with respect to y) should fit into the equation, though. Also, sorry for the pseudo-code style. I do not know how to use the math typesetting yet.

- volume of cone –  pedja Nov 15 '11 at 5:50

Place your cone so the center of the base is at $(0,0)$, and the apex is at $(0,5)$. If you imagine looking at the cone straight on, it will look like a triangle with base $20$ and height $5$. If you make a horizontal slice at level $y$, then you get a figure with two similar triangles:

          ^               ^
         / \              |
        /   \             |      ^
       /_____\      5     |      y
      /  2r   \           |      |
     /         \          |      |
    /___________\         V      V
    |---- 20 ----|

Then, using similar triangles, note that the height of the triangle on the top is $5-y$ and its base is $2r$. So we have $$\frac{2r}{20} = \frac{5-y}{5}.$$ From this, we can express $r$ in terms of $y$. This is where you use the fact that the base radius of your original cone is $10$ cm.

- Sorry, I am a bit thick-headed here. I understand how you derived 5-y, but I am not sure how you derived the above equality (specifically how you arrived at the ratio of r/10). Can you explain a bit more? –  Dylan Nov 15 '11 at 5:40
@Dylan: Does the picture help? –  Arturo Magidin Nov 15 '11 at 5:50
OK. That makes sense. So solving for y from the above, I get y = -1/2r + 5. This should be plugged into the function f(y)dy I am assuming, and then integrated (where b-a -> 5-0)? –  Dylan Nov 15 '11 at 5:57
@Dylan: You don't want to solve for $y$, you want to solve for $r$: the volume of the cylindrical slice at level $y$ is $\pi r^2\Delta y$, so you want to express $r$ in terms of $y$, not the other way around. –  Arturo Magidin Nov 15 '11 at 6:02
Hmm, but when I solve the above equality for r then I get 10 - 2y. I am not sure this agrees with Andre's response above (or at least I don't see how it does - it appears he is taking the slope delta y over delta x??) –  Dylan Nov 15 '11 at 6:17

You can set things up so that you integrate with respect to $x$, or you can set things up so that you integrate with respect to $y$. It's your pick! For each solution, you should draw the picture that goes with that solution.

With respect to $x$: Look at the line that passes through the origin and the point $(5,10)$. Rotate the region below this line, above the $x$-axis, from $x=0$ to $x=5$, about the $x$-axis. This will generate a cone with base radius $10$ and height $5$. The main axis of this cone is along the $x$-axis. Kind of a sleeping cone.

Take a slice of width "$dx$" at $x$, perpendicular to the $x$-axis. The ordinary name for this would be a vertical slice. This slice is almost a very thin cylinder: if you are hungry, think of a thin ham slice taken from a conical ham. Let us find the radius of this slice. The line through the origin that goes through $(5,10)$ has slope $2$, so it has equation $y=2x$. Thus at $x$ the radius of our almost-cylinder is $2x$. It follows that the thin slice has (almost) volume $\pi(2x)^2\,dx$. "Add up" (integrate) from $x=0$ to $x=5$.
The volume of our cone is equal to $$\int_{x=0}^5 \pi(2x)^2 \,dx=\int_0^5 4\pi x^2\,dx.$$ The integration is easy. We get $\dfrac{500\pi}{3}$.

With respect to $y$: It is a matter of taste whether our cone is point up or point down. Since an answer with point up has already been posted, we imagine the cone with point down at the origin. Look at the line that goes through the origin and passes through the point $(10,5)$. Take the region to the left of this line, to the right of the $y$-axis, from $y=0$ to $y=5$. Rotate this region about the $y$-axis. We get a cone with base radius $10$ and height $5$.

Take a horizontal slice of width "$dy$" at height $y$. This looks almost like a flat cylindrical coin. We want to find the volume of that coin. The line through the origin and $(10,5)$ has slope $1/2$, so it has equation $y=x/2$. So $x=2y$, and therefore the radius of our thin slice is $2y$ and its volume is $\pi(2y)^2\,dy$. Thus the volume of the cone is $$\int_{y=0}^5 \pi(2y)^2\,dy.$$ This is the same definite integral as our previous one. Only the name of the variable of integration has changed. Naturally, the result is the same.

- Andre, which is easier/more reasonable: deriving the general formula and then substituting the values, or deriving directly for the particular values? ... also, why use cylinders when you can simply use discs? –  Quixotic Nov 15 '11 at 6:09
Indeed, the general formula is just as easy to derive. However, sometimes concrete specific numbers can help the initial understanding. After that, the work with $r$ and $h$ seems reasonable. As to why not circles: the cross-sections are indeed circles. But I wanted to convey, unfortunately without a picture, that we are adding up the volumes of very thin disks. If one practices that enough times, the intuition behind some formulas of, say, Physics becomes clearer. –  André Nicolas Nov 15 '11 at 6:17
Aha..., I got it! I guess in these kinds of solids, thinking about the volume interpretation is better than the discs; they might lead to the same thing, but the volume interpretation is much more neat. Another example would be computing the volume of a paraboloid. –  Quixotic Nov 15 '11 at 6:34
Or total displacement as definite integral of velocity, or fluid pressure, or moment about the $y$-axis, or many other things. –  André Nicolas Nov 15 '11 at 6:58
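Both definite integrals can be checked symbolically; a quick sketch using SymPy (the variable names are mine):

```python
# Verify the cone volume integral from either slicing direction.
import sympy as sp

x, y = sp.symbols('x y')
v_dx = sp.integrate(sp.pi * (2 * x) ** 2, (x, 0, 5))  # slices along the axis
v_dy = sp.integrate(sp.pi * (2 * y) ** 2, (y, 0, 5))  # horizontal coin slices

print(v_dx, v_dy)                      # 500*pi/3 500*pi/3
assert v_dx == v_dy == sp.Rational(500, 3) * sp.pi
assert v_dx == sp.pi * 10**2 * 5 / 3   # matches V = (1/3) * pi * r^2 * h
```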
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9650681614875793, "perplexity": 171.9066989613472}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207925201.39/warc/CC-MAIN-20150521113205-00295-ip-10-180-206-219.ec2.internal.warc.gz"}
http://ieeexplore.ieee.org/xpl/tocresult.jsp?reload=true&isnumber=4278348&punumber=55
# IEEE Electron Device Letters

Publication Year: 2007, Page(s): C1 - C4

• ### IEEE Electron Device Letters publication information
Publication Year: 2007, Page(s): C2

• ### Enhanced Gate Swing in InP HEMTs With High Threshold Voltage by Means of InAlAsSb Barrier
Publication Year: 2007, Page(s): 669 - 671
We demonstrated the suitability of the InP HEMTs with the InAlAsSb Schottky barrier to realize the high threshold voltage (enhancement mode), low gate current, and low power consumption. This quaternary compound material increases the conduction band discontinuity to the InGaAs channel by introducing only 10% of antimony to InAlAs. The gate current is reduced by an order of magnitude (or even …

• ### 35-nm Zigzag T-Gate $\text{In}_{0.52}\text{Al}_{0.48}\text{As}/\text{In}_{0.53}\text{Ga}_{0.47}\text{As}$ Metamorphic GaAs HEMTs With an Ultrahigh $f_{max}$ of 520 GHz
Publication Year: 2007, Page(s): 672 - 675
Metamorphic GaAs high electron mobility transistors (mHEMTs) with the highest $f_{max}$ reported to date are presented here. The 35-nm zigzag T-gate $\text{In}_{0.52}\text{Al}_{0.48}\text{As}/\text{In}_{0.53}\text{Ga}_{0.47}\text{As}$ metamorphic GaAs HEMTs show $f_{max}$ of 520 GHz, $f_T$ of 440 GHz, and maximum transconductance ($g_m$) of 1100 mS/mm…

• ### On-Resistance Modulation of High Voltage GaN HEMT on Sapphire Substrate Under High Applied Voltage
Publication Year: 2007, Page(s): 676 - 678
The 620-V/1.4-A GaN high-electron mobility transistors on sapphire substrate were fabricated and the ON-resistance modulations caused by current collapse phenomena were measured under high applied voltage. Since the fabricated devices had insulating substrates, no field-plate (FP) effect was expected and the ON-resistance increases of these devices were larger than those on an n-SiC substrate even…

• ### InGaAsSb/InP Double Heterojunction Bipolar Transistors Grown by Solid-Source Molecular Beam Epitaxy
Publication Year: 2007, Page(s): 679 - 681
This letter investigates the dc characteristics of a double heterojunction bipolar transistor (DHBT) with a compressively strained InGaAsSb base, which is grown by solid-source molecular beam epitaxy. The novel InP/InGaAsSb HBT has a lower base/emitter (B/E) junction turn-on voltage, a lower offset voltage, and a junction ideality factor closer to unity than the conventional InP/InGaAs composite c…

• ### High Transconductance MISFET With a Single InAs Nanowire Channel
Publication Year: 2007, Page(s): 682 - 684
Metal-insulator field-effect transistors (FETs) are fabricated using a single n-InAs nanowire (NW) with a diameter of d = 50 nm as a channel and a silicon nitride gate dielectric. The gate length and dielectric scaling behavior is experimentally studied by means of dc output- and transfer-characteristics and is modeled using the long-channel MOSFET equations. The device properties are studi…
• ### Ultrahigh-Speed 0.5 V Supply Voltage $\text{In}_{0.7}\text{Ga}_{0.3}\text{As}$ Quantum-Well Transistors on Silicon Substrate
Publication Year: 2007, Page(s): 685 - 687
The direct epitaxial growth of ultrahigh-mobility InGaAs/InAlAs quantum-well (QW) device layers onto silicon substrates using metamorphic buffer layers is demonstrated for the first time. In this letter, 80 nm physical gate length depletion-mode InGaAs QW transistors with saturated transconductance $g_m$ of 930 µS/µm and $f_T$ of 260 GHz at $V_{DS}$ = 0.5 V are achieved on 3.2 µ…

• ### An Improved Planar Triode With ZnO Nanopin Field Emitters
Publication Year: 2007, Page(s): 688 - 690
In this letter, an improved planar triode with ZnO nanopin field emitters has been proposed. Compared with a conventional planar triode, a layer of ZnO nanostructures is deposited between the cathode and gate electrode. These ZnO nanostructures are used as field emitters. Because both electrodes and the ZnO layer can be deposited with the screen-printing method, the fabrication process of an improved pl…

• ### Novel Approach to Reduce Source/Drain Series and Contact Resistance in High-Performance UTSOI CMOS Devices Using Selective Electrodeless CoWP or CoB Process
Publication Year: 2007, Page(s): 691 - 693
This letter reports a selective metal deposition process using an electrodeless technique for MOSFETs fabricated in an ultrathin silicon-on-insulator (UTSOI) substrate. A layer of metal (CoWP or CoB) is formed on the source and drain nickel and cobalt silicides without depositing on the dielectric spacers. Leakage current information, which is an indication of selectivity of the process, is presen…

• ### Use of a High-Work-Function Ni Electrode to Improve the Stress Reliability of Analog SrTiO3 Metal–Insulator–Metal Capacitors
Publication Year: 2007, Page(s): 694 - 696
We have studied the stress reliability of low-energy-bandgap high-κ metal-insulator-metal capacitors under constant voltage stress. By using a high-work-function Ni electrode (5.1 eV), we reduced the degrading effects of stress on the capacitance variation ($\Delta C/C$), the quadratic voltage coefficient of capacitance (VCC-$\alpha$), and the long-term reliability, in contrast with using a TaN. The impro…

• ### Analysis of Temperature in Phase Change Memory Scaling
Publication Year: 2007, Page(s): 697 - 699
We analyze constant-voltage isotropic and non-isotropic scaling issues for phase change memory (PCM) based on electrothermal physics. Various analytical and simulation models of general and typical PCM cells that support the analysis are also provided. The analysis shows that the maximum temperature in the PCM cell, which is a key parameter for PCM operation, is independent of geometrical sizes and…

• ### Thickness Scaling and Reliability Comparison for the Inter-Poly High-κ Dielectrics
Publication Year: 2007, Page(s): 700 - 702
In this letter, the inter-poly dielectric (IPD) thickness, scaling, and reliability characteristics of Al2O3 and HfO2 IPDs are studied, which are then compared with conventional oxide/nitride/oxide (ONO) IPD.
Regardless of deposition tools, drastic leakage current reduction and reliability improvement have been demonstrated by replacing ONO IPD with high-permittivi…

• ### Sub-0.1-eV Effective Schottky-Barrier Height for NiSi on n-Type Si (100) Using Antimony Segregation
Publication Year: 2007, Page(s): 703 - 705
We report a new method of forming nickel silicide (NiSi) on n-Si with low contact resistance, which achieves a Schottky barrier height of as low as 0.074 eV. Antimony (Sb) and nickel were introduced simultaneously and annealed to form NiSi on n-Si (100). Sb dopant atoms were found to segregate at the NiSi/Si interface. The devices with Sb segregation show complete nickel monosilicide formation on…

• ### Low-Temperature Polymer-Based Three-Dimensional Silicon Integration
Publication Year: 2007, Page(s): 706 - 709
We describe a low-temperature polymer-based 3D integration technique for wafer-scale transplantation of micrometer thick circuit and device layers onto another host wafer. The maximum temperature of this approach is 340 °C. It incorporates a low-k semiconductor compatible dielectric bonding media, employs tools that are readily available within a fabrication environment, and is very sim…

• ### An Efficient Macromodeling Approach for Simulating Carbon-Nanotube Field-Emission Triode Devices in Display Applications
Publication Year: 2007, Page(s): 710 - 712
The advances of using carbon-nanotube (CNT) triode structure field-emission (FE) devices for display applications require an accurate and efficient SPICE-compatible device model for evaluating their electrical behaviors in the early circuit and system design stage. This letter presents a simple and efficient macromodeling approach that can accurately model the CNT triode FE devices independent of…

• ### Amorphous-SiCBN-Based Metal–Semiconductor–Metal Photodetector for High-Temperature Applications
Publication Year: 2007, Page(s): 713 - 715
A photodetector (PD) with metal-semiconductor-metal (MSM) structure has been developed using an amorphous SiCBN film. The amorphous SiCBN film was deposited on the silicon substrate using reactive RF magnetron sputtering. The optoelectronic performance of the SiCBN MSM devices has been examined through photocurrent measurements. Temperature effect, with respect to photocurrent ratios, has been stu…

• ### Numerical Simulation of Low-Frequency Noise in Polysilicon Thin-Film Transistors
Publication Year: 2007, Page(s): 716 - 718
Numerical simulations of low-frequency noise are carried out in two technologies of N-channel polysilicon thin-film transistors (TFTs) biased from weak to strong inversion and operating in the linear mode. Noise is simulated by generation/recombination processes. The contribution of grain boundaries on the noise level is higher in the strong inversion region. The microscopic noise parameter that i…
• ### SiGe-Channel Confinement Effects for Short-Channel PFETs With Nonbandedge Gate Workfunctions
Publication Year: 2007, Page(s): 719 - 721
Thin SiGe-channel confinement is found to provide significant control of the short channel effects typically associated with nonbandedge gate electrodes, in an analogous manner to ultrathin-body approaches. Gate workfunction requirements for thin-SiGe-channel p-type field effect transistors are therefore relaxed substantially more than what is expected from a simple observation of the difference b…

• ### High-Performance Polycrystalline-Silicon TFT by Heat-Retaining Enhanced Lateral Crystallization
Publication Year: 2007, Page(s): 722 - 724
High-performance low-temperature polycrystalline-silicon thin-film transistors (TFTs) have been fabricated by heat-retaining enhanced crystallization (H-REC). In the H-REC technology, a heat-retaining capping layer (HRL) is applied on the prepattern amorphous silicon islands to slow down the heat dissipation effectively. It thereby retains a long duration of the melting process and further enhances poly…

• ### Defect Passivation by Selenium-Ion Implantation for Poly-Si Thin Film Transistors
Publication Year: 2007, Page(s): 725 - 727
Low-dose ($10^{13}$ cm$^{-2}$) selenium-ion implantation prior to pulsed-excimer-laser crystallization is investigated as a low-thermal-budget defect-passivation technique for polycrystalline silicon TFTs. Selenium defect passivation is found to be effective for improving TFT performance and for providing superior TFT reliability as compared with hydrogenation. Ion implantation, passiv…

• ### High-Voltage Self-Aligned p-Channel DMOS-IGBTs in 4H-SiC
Publication Year: 2007, Page(s): 728 - 730
SiC power MOSFETs designed for blocking voltages of 10 kV and higher face the problem of high drift layer resistance that gives rise to a high internal power dissipation in the ON-state. For this reason, the ON-state current density must be severely restricted to keep the power dissipation below the package limit. We have designed, optimized, and fabricated high-voltage SiC p-channel doubly-impla…

• ### Border-Trap Characterization in High-κ Strained-Si MOSFETs
Publication Year: 2007, Page(s): 731 - 733
In this letter, we focus on the border-trap characterization of TaN/HfO2/Si and TaN/HfO2/strained-Si/Si0.8Ge0.2 n-channel MOSFET devices. The equivalent oxide thickness for the gate dielectrics is 2 nm. The drain-current hysteresis method is used to characterize the border traps, and it is found that border traps are higher in the case of high-κ films on…

• ### Extraction of the Threshold-Voltage Shift by the Single-Pulse Technique
Publication Year: 2007, Page(s): 734 - 736
Methods for extracting the threshold-voltage shift ($\Delta V_{th}$) in high-κ transistors using the single-pulse drain current-gate voltage ($I_d$-$V_g$) technique were compared with respect to their accuracies and limitations. It is concluded that an accurate estimation of the $\Delta V_{th}$ caused by charge trapping in high-κ dielectrics can be obtained from the h…
• ### Pinch-Off Voltage-Adjustable High-Voltage Junction Field-Effect Transistor
Publication Year: 2007, Page(s): 737 - 739
In this letter, a novel type of high-voltage n-channel junction field-effect transistor (JFET) was designed using a conventional n-channel laterally diffused metal-oxide-semiconductor (n-LDMOS) without changing any step in the process. The high-voltage JFET can be a start-up device in power factor correction, dc-ac converters, and ac-dc converters for providing a self-powered circuit and minimizing st…

## Aims & Scope

IEEE Electron Device Letters publishes original and significant contributions relating to the theory, modeling, design, performance and reliability of electron devices.

## Meet Our Editors

Editor-in-Chief: Tsu-Jae King Liu, tking@eecs.berkeley.edu
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2508772611618042, "perplexity": 23815.717224317046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812880.33/warc/CC-MAIN-20180220050606-20180220070606-00113.warc.gz"}
http://maxwell3d.net/2015/01/orbital-period-calculator-cplusplus/
# Orbital Period Calculator (C++)

Being a tad obsessed with space, I have created a calculator for approximating orbital periods of satellites in our solar system. It is a very straightforward program I wrote in C++. Based upon Kepler's Third Law of Planetary Motion, it evaluates the following formula:

$T = 2\pi \sqrt{\frac{a^{3}}{\mu}}$

where:
• $a$ is the orbit's semi-major axis
• $\mu = GM$ is the standard gravitational parameter
• $G$ is the gravitational constant
• $M$ is the mass of the central body (strictly, the combined mass of the two bodies, but a satellite's mass is negligible next to its primary's)

Kepler's Third Law states: the square of the orbital period of a planet is directly proportional to the cube of the semi-major axis of its orbit. It has been used to describe the relationship between the distance of planets from the Sun and their orbital periods. This notion can be generalised to satellites orbiting planets to yield the above equation.

Here is the program's interface. I have chosen the International Space Station as the satellite whose orbital period we want to know. We begin by selecting a central body and stating the mass and distance of our satellite. We see, below, the formula evaluates to 92.6 minutes. Wikipedia states the International Space Station's orbital period to be 92.69 minutes. Great! We now know the C++ algorithm works. Let's try the moon. And Titan. We can even calculate Arnold Schwarzenegger's orbit around Mars (assuming a semi-major axis of 5000km).

I wrote this program purely for the fun of it. It has no practical merit. However, should Schwarzenegger ever be launched into an orbit around Mars, I will be the first to know his orbital time! I may extend the calculator in the future to contain a larger database of planets and satellites. It would be interesting to see other metrics such as average satellite speed.
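The post does not show the program's source itself, so here is a minimal sketch of what the core computation might look like; the function name, constants, and the ISS figures below are my own illustrative choices, not taken from the original program:

#include <cmath>
#include <cstdio>

const double PI = 3.14159265358979323846;
const double G  = 6.674e-11; // gravitational constant [m^3 kg^-1 s^-2]

// Kepler's Third Law: T = 2*pi*sqrt(a^3 / mu), with mu = G*M.
// a: semi-major axis [m], M: mass of the central body [kg].
// Returns the orbital period in seconds.
double orbitalPeriod(double a, double M) {
    double mu = G * M;
    return 2.0 * PI * std::sqrt(a * a * a / mu);
}

int main() {
    // ISS example: Earth's mass, semi-major axis ~ 6371 km radius + ~415 km altitude.
    double T = orbitalPeriod((6371.0 + 415.0) * 1000.0, 5.972e24);
    std::printf("Orbital period: %.1f minutes\n", T / 60.0);
    return 0;
}

With these inputs the sketch prints roughly 92.7 minutes, in line with the figure the post reports for the ISS.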
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 5, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9005839228630066, "perplexity": 638.435396646253}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320736.82/warc/CC-MAIN-20170626115614-20170626135614-00654.warc.gz"}
http://clay6.com/qa/28908/consider-the-following-statements-statement-1-a-solenoid-tends-to-expand-wh
Consider the following statements

Statement 1: A solenoid tends to expand when a current passes through it
Statement 2: Two parallel metallic wires carrying current in the same direction repel each other

$\begin{array}{1 1} (a)\;Both \: the\: statements\: are\: false \\ (b)\;Both\: the\: statements\: are\: true.\: Statement \: 2\: is \: a \: correct\: explanation\: of \: statement\: 1 \\ (c)\;Statement \: 1 \: is \: true, \: statement\: 2\: is\: false \\ (d)\;Statement \: 2 \: is \: true, \: statement\: 1\: is\: false \end{array}$

The opposite of both statements is true: adjacent turns of a solenoid carry parallel currents flowing in the same direction and therefore attract, so the solenoid tends to contract; likewise, two parallel wires carrying current in the same direction attract rather than repel.

Ans : (a)
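For reference, the standard result behind the answer (not shown in the original): the force per unit length between two long parallel wires carrying currents $I_1$ and $I_2$ a distance $d$ apart is

$$\frac{F}{L} = \frac{\mu_0 I_1 I_2}{2\pi d},$$

attractive when the currents flow in the same direction. The same attraction between adjacent, co-directed turns is why a solenoid tends to contract rather than expand.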
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4073430299758911, "perplexity": 447.59749997511784}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267155413.17/warc/CC-MAIN-20180918130631-20180918150631-00385.warc.gz"}
https://math.stackexchange.com/questions/865041/prove-or-disprove-that-there-are-n-consecutive-odd-positive-integers-that-are
# Prove or disprove that there are $n$ consecutive odd positive integers that are prime

Question: Prove or disprove that there are $n$ consecutive odd positive integers that are prime. If my answer for the question above is correct, then a new question arises.

My Attempt: Odd numbers consist of multiples of $5$. I think that addresses the question.

New Question: Are there at most $3$ consecutive odd primes? If so, how would someone tackle this?

• There are plenty of odd numbers that are not a multiple of $5$, such as $31$, $47$, $999$. For the solution, hint: If we take $3$ consecutive odd numbers, one of them is divisible by $3$. –  André Nicolas Jul 12 '14 at 11:48
• @AndréNicolas I'm just wondering if you are talking about the first question. –  JoeyAndres Jul 12 '14 at 11:59
• If you are not talking about the first question, that means there are at most 3 consecutive odd integers which are prime, namely 3, 5, 7. For numbers $5$ or more, we will stumble upon an odd integer divisible by 3. Right? –  JoeyAndres Jul 12 '14 at 12:02
• Yes, that's right. The only $3$ consecutive are $3$, $5$, $7$, which does not extend to $4$ consecutive. –  André Nicolas Jul 12 '14 at 13:10

E.g. $17,19,21,23$ are $4$ consecutive odd numbers, none of which are divisible by $5$. So the observation implies the maximum number of consecutive odd numbers that are primes is at most $4$ (and we should be careful: there is one prime that ends in $5$). We know $3,5,7$ are three consecutive odd numbers that are primes. So we next have one of two tasks:
• Find an example of four consecutive odd numbers that are primes; or
• Prove that no such example exists.

It's natural to consider the numbers $a,a+2,a+4$ modulo $3$ (to see if we find a factor of $3$); the cases are completed in the note after this thread.
• If $a \equiv 0 \pmod 3$, then ...?
• If $a \equiv 1 \pmod 3$, then ...?
• If $a \equiv 2 \pmod 3$, then ...?

• AndreNicolas above said that if we take 3 consecutive odd numbers, one of them is divisible by 3. How does 17, 19, 21, 23 imply that there are 4 consecutive odd primes? –  JoeyAndres Jul 12 '14 at 12:52
• They are four consecutive odd numbers that are not multiples of $5$. All I'm saying is that your attempt does not cover this case. (They're not all prime.) –  Rebecca J. Stones Jul 12 '14 at 12:55

Hint: All primes except for $2$ and $3$ are of the form $6n\pm1$.
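Completing the hinted case analysis (a worked note, not part of the original answers): among any three consecutive odd numbers $a, a+2, a+4$, exactly one is divisible by $3$, since

$$a \equiv 0 \pmod 3 \implies 3 \mid a, \qquad a \equiv 1 \pmod 3 \implies 3 \mid (a+2), \qquad a \equiv 2 \pmod 3 \implies 3 \mid (a+4).$$

Hence in any run of four consecutive odd primes, one of the numbers would be a multiple of $3$, forcing it to equal $3$; but neither $1,3,5,7$ nor $3,5,7,9$ consists of primes ($1$ is not prime and $9 = 3^2$), so $3,5,7$ is the longest possible run.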
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.820427656173706, "perplexity": 241.19196681213776}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540547536.49/warc/CC-MAIN-20191212232450-20191213020450-00420.warc.gz"}
http://mathhelpforum.com/differential-geometry/142376-convergent-subsequence.html
# Math Help - Convergent subsequence

1. ## Convergent subsequence

Prove or disprove: Let $x_n \in \mathbb{R}$ and suppose the sequence $\{\sin(x_n)\}$ converges. The sequence $\{x_n\}$ must contain a convergent subsequence. I am not able to figure out how to prove or disprove this. Any help will be appreciated.

2. Originally Posted by serious331
Prove or disprove: Let $x_n \in \mathbb{R}$ and suppose the sequence $\{\sin(x_n)\}$ converges. …
False: Let $x_k = 2\pi k$

3. Originally Posted by southprkfan1
False: Let $x_k = 2\pi k$
Thanks southparkfan. Does the sequence $\{\sin(2\pi k)\}$ converge to 0, because it is 0 for all values of $k$? And how can we show that the subsequence of $\{2\pi k\}$ doesn't converge?

4. Originally Posted by serious331
how can we show that the subsequence of $\{2\pi k\}$ doesn't converge?
You are joking aren't you? You cannot be serious.

5. A convergent sequence is always bounded!

6. Given any $M>0$, let $N=\frac{M}{2\pi}$. Then $k>N$ implies $2\pi k > M$. And so $\lim_{k\to \infty} 2\pi k = +\infty$. It's pretty clear why any subsequence cannot converge, since the tail of any subsequence will still tend to $+\infty$.

7. Originally Posted by Plato
You are joking aren't you? You cannot be serious.
Okay, my question was retarded. Make as much fun as you can!

8. Originally Posted by serious331
Okay, my question was retarded. Make as much fun as you can!
I have to say Plato, I did think that was a tiny bit arrogant to ask. It should be obvious, but better to ask than to just assume! If you wanna formalize why there can be no convergent subsequence, we can assume to the contrary. Let $s_{n} = 2\pi n$ and assume that there is a subsequence $(s_{n_{k}})$ such that given $\epsilon > 0$, there exists $N$ such that $n_{k} > N$ implies $|s_{n_{k}} - a| < \epsilon$, where $a$ is the finite number that we suppose the subsequence converges to. And so this means for all $n_{k} > N$, $s_{n_{k}} < \epsilon + a$ (assume that $a > 0$, because if it weren't, then clearly after some point the sequence would have to be negative). But since $\epsilon + a > 0$, we know for all $n > \frac{\epsilon + a}{2\pi}$, $2\pi n > \epsilon + a$, and so arises the contradiction.

9. Originally Posted by Pinkk
I have to say Plato, I did think that was a tiny bit arrogant to ask. …
Thank you for the detailed explanation Pinkk. Not everyone is as smart as Mr. Plato. I have another question: how can we prove that if the sequence $\{|x_n|\}$ converges, then the sequence $\{x_n\}$ has a convergent subsequence?

10.
Since $|x_{n}|$ converges to let's say $a \ge 0$ (obviously the limit must be nonnegative since $|x_{n}| \ge 0$ by virtue of the absolute value), then we have some $N$ such that $n > N$ implies $||x_{n}| - a| < 1$, so $|x_{n}|$ is bounded by $1+a$ for all $n > N$. So let $M = \max\{|x_{1}|, |x_{2}|, ... , |x_{N}|, 1+a\}$. Then we have $|x_{n}|\le M$ for all $n$, so by definition, $(x_{n})$ is a bounded sequence. Thus, by the Bolzano-Weierstrass Theorem, $(x_{n})$ has a convergent subsequence.
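A side remark (an addition, not from the thread): the conclusion is sharp in the sense that $(x_n)$ itself need not converge; e.g. $x_n = (-1)^n$ satisfies $|x_n| \to 1$ while $(x_n)$ diverges, yet its subsequence $x_{2k} = 1$ converges, exactly as the theorem guarantees.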
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 57, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9955556392669678, "perplexity": 174.78530865641292}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398445219.14/warc/CC-MAIN-20151124205405-00354-ip-10-71-132-137.ec2.internal.warc.gz"}
http://clay6.com/qa/31466/which-set-of-configuration-is-of-natural-rubber-and-gutta-percha-
# Which set of configurations corresponds to natural rubber and gutta-percha?

$\begin{array}{1 1}(a)\;\text{Cis;trans}\\(b)\; \text{Trans;trans}\\(c)\;\text{ Cis;Cis}\\(d)\;\text{Trans;cis}\end{array}$

Natural rubber is cis-1,4-polyisoprene, while gutta-percha is its geometric isomer, trans-1,4-polyisoprene. Hence (a) is the correct answer.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5663464665412903, "perplexity": 2878.416052759789}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171933.81/warc/CC-MAIN-20170219104611-00198-ip-10-171-10-108.ec2.internal.warc.gz"}
http://robertovitillo.com/category/mozilla/
# Recommending Firefox add-ons with Spark

We are currently evaluating possible replacements for our Telemetry map-reduce infrastructure. As our current data munging machinery isn't distributed, analyzing days worth of data can be quite a pain. Also, many algorithms can't easily be expressed with a simple map/reduce interface.

So I decided to give Spark another try. "Another" because I have played with it in the past but I didn't feel it was mature enough to be run in production. And I wasn't the only one to think that, apparently. I feel like things have changed though with the latest 1.1 release and I want to share my joy with you.

What is Spark? In a nutshell, "Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala and Python, and an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming."

Spark's primary abstraction is the Resilient Distributed Dataset (RDD), which one can imagine as a distributed pandas or R data frame. The RDD API comes with all kinds of distributed operations, among which also our dear map and reduce. Many RDD operations accept user-defined Scala or Python functions as input, which allows average Joe to write distributed applications like a pro. An RDD can also be converted to a local Scala/Python data structure, assuming the dataset is small enough to fit in memory. The idea is that once you have chopped off the data you are not interested in, what you are left with fits comfortably on a single machine. Oh, and did I mention that you can issue Spark queries directly from a Scala REPL? That's great for performing exploratory data analyses.

The greatest strength of Spark though is the ability to cache RDDs in memory. This allows you to run iterative algorithms up to 100x faster than using the typical Hadoop-based map-reduce framework! It has to be remarked though that this feature is purely optional. Spark works flawlessly without caching, albeit slower. In fact, in a recent benchmark Spark was able to sort 1PB of data 3X faster using 10X fewer machines than Hadoop, without using the in-memory cache.

Setup

A Spark cluster can run in standalone mode or on top of YARN or Mesos. At the very least, for a cluster you will need some sort of distributed filesystem, e.g. HDFS or NFS. The easiest way to play with it, though, is to run Spark locally, i.e. on OSX:

brew install spark
spark-shell --master "local[*]"

The above commands start a Scala shell with a local Spark context. If you are more inclined to run a real cluster, the easiest way to get you going is to launch an EMR cluster on AWS:

aws emr create-cluster --name SparkCluster --ami-version 3.3 --instance-type m3.xlarge \
  --instance-count 5 --ec2-attributes KeyName=vitillo --applications Name=Hive \
  --bootstrap-actions Path=s3://support.elasticmapreduce/spark/install-spark

Then, once connected to the master node, launch Spark on YARN:

yarn-client /home/hadoop/spark/bin/spark-shell --num-executors 4 --executor-cores 8 \
  --executor-memory 8g --driver-memory 8g

The parameters of the executors (aka worker nodes) should obviously be tailored to the kind of instances you launched. It's imperative to spend some time understanding and tuning the configuration options as Spark doesn't automagically do it for you.

Now what? Time for some real code.
Since Spark makes it so easy to write distributed analyses, the bar for a Hello World application should consequently be much higher. Let's write then a simple, albeit functional, Recommender Engine for Firefox add-ons.

In order to do that, let's first quickly go over the math involved. It turns out that given a matrix of the rankings of each user for each add-on, the problem of finding a good recommendation can be reduced to a matrix factorization problem: the model maps both users and add-ons to a joint latent factor space of dimensionality $F$. Both users and add-ons are thus seen as vectors in that space. The factors express latent characteristics of add-ons, e.g. if an add-on is related to security or to UI customization. The ratings are then modeled as inner products in that space, which are related to the cosine of the angle between the two vectors. The closer the characteristics of an add-on align to the preferences of the user in the latent factor space, the higher the rating.

But wait, Firefox users don't really rate add-ons. In fact the only information we have in Telemetry is binary: either a user has a certain add-on installed or he hasn't. Let's assume that if someone has a certain add-on installed, he probably likes that add-on. That's not true in all cases and a more significant metric like "usage time" or similar should be used. I am not going to delve into the details, but having binary ratings changes the underlying model slightly from the conceptual one we have just seen. The interested reader should read this paper. MLlib, a machine learning library for Spark, comes out of the box with a distributed implementation of ALS which implements the factorization.

Implementation

Now that we have an idea of the theory, let's have a look at how the implementation looks in practice. Let's start by initializing Spark:

val sc = new SparkContext(conf)

As the ALS algorithm requires tuples of (user, addon, rating), let's munge the data into place:

val ratings = sc.textFile("s3://mreid-test-src/split/").map(raw => {
  val parsedPing = parse(raw.substring(37))
  (parsedPing \ "clientID", parsedPing \ "addonDetails" \ "XPI")
}).filter{
  // Remove sessions with missing id or add-on list
  case (JNothing, _) => false
  case (_, JNothing) => false
  case (_, JObject(List())) => false
  case _ => true
}.map{ case (id, xpi) => {
}}.filter{ case (id, addonList) => {
  // Remove sessions with empty add-on lists
}}.flatMap{ case (id, addonList) => {
  // Create add-on ratings for each user
}}

Here we extract the add-on related data from our json Telemetry pings and filter out missing or invalid data. The ratings variable is an RDD and as you can see we used the distributed map, filter and flatMap operations on it. In fact it's hard to tell apart vanilla Scala code from the distributed one.

As the current ALS implementation doesn't accept strings for the user and add-on representations, we will have to convert them to numeric ones. A quick and dirty way of doing that is to hash the strings:

// Positive hash function
def hash(x: String) = x.hashCode & 0x7FFFFF

val hashedRatings = ratings.map{ case(u, a, r) => (hash(u), hash(a), r) }.cache

We are nearly there. To avoid overfitting, ALS uses regularization, the strength of which is determined by a parameter $\lambda$. As we don't know beforehand the optimal value of the parameter, we can try to find it by minimizing the mean squared error over a pre-defined grid of $\lambda$ values using k-fold cross-validation.
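Spelled out (in my notation, not the post's), the cross-validation criterion used below is the root-mean-squared error over the held-out ratings $T$ of each fold,

$$\mathrm{RMSE} = \sqrt{\frac{1}{|T|} \sum_{(u,a) \in T} \left(r_{ua} - \hat{r}_{ua}\right)^2},$$

averaged across the $10$ folds for each $\lambda$; the $\lambda$ with the smallest average error wins.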
// Use cross validation to find the optimal number of latent factors
val folds = MLUtils.kFold(hashedRatings, 10, 42)
val lambdas = List(0.1, 0.2, 0.3, 0.4, 0.5)
val iterations = 10
val factors = 100 // use as many factors as computationally possible

val factorErrors = lambdas.flatMap(lambda => {
  folds.map{ case(train, test) =>
    val model = ALS.trainImplicit(train.map{ case(u, a, r) => Rating(u, a, r) }, factors, iterations, lambda, 1.0)
    val usersAddons = test.map{ case (u, a, r) => (u, a) }
    val predictions = model.predict(usersAddons).map{ case Rating(u, a, r) => ((u, a), r) }
    val ratesAndPreds = test.map{ case (u, a, r) => ((u, a), r) }.join(predictions)
    val rmse = sqrt(ratesAndPreds.map { case ((u, a), (r1, r2)) =>
      val err = (r1 - r2)
      err * err
    }.mean)
    (model, lambda, rmse)
  }
}).groupBy(_._2)
  .map{ case(k, v) => (k, v.map(_._3).reduce(_ + _) / v.length) }

Finally, it's just a matter of training ALS on the whole dataset with the optimal $\lambda$ value and we are good to go to use the recommender:

// Train model with optimal number of factors on all available data
val model = ALS.trainImplicit(hashedRatings.map{ case(u, a, r) => Rating(u, a, r) }, factors, iterations, optimalLambda._1, 1.0)

def recommend(userID: Int) = {
  val top = predictions.top(10)(Ordering.by[Rating,Double](_.rating))
}

recommend(hash("UUID..."))

I omitted some details but you can find the complete source on my github repository. To submit the packaged job to YARN run:

spark-submit --class AddonRecommender --master yarn-client --num-executors 4 \

So what?

Question is, how well does it perform? The mean squared error isn't really telling us much, so let's take some fictional user sessions and see what the recommender spits out.

For user A, who has only the add-on Ghostery installed, the top recommendations are, in order:
• NoScript
• Web of Trust
• Symantec Vulnerability Protection
• Better Privacy
• LastPass
• DuckDuckGo Plus
• HTTPS-Everywhere
• Lightbeam

One could argue that 1 out of 10 recommendations isn't appropriate for a security aficionado. Now it's the turn of user B, who has only the Firebug add-on installed:
• Web Developer
• FiddlerHook
• Greasemonkey
• ColorZilla
• User Agent Switcher
• McAfee
• RealPlayer Browser Record Plugin
• FirePHP
• Session Manager

There are just a couple of add-ons that don't look that great but the rest could fit the profile of a developer. Now, considering that the recommender was trained only on a couple of days of data for Nightly, I feel like the result could easily be improved with more data and tuning, like filtering out known Antivirus, malware and bloatware.

# Popular hw/sw configurations of Firefox users

Knowing the distribution of our users wrt. cpu/gpu/os etc. is one of those questions that comes up time and time again. After a couple of times of running a custom map-reduce job on our Telemetry data, I decided to write a periodic job so that we can keep track of it and quickly get the most updated data.

Here is a distribution tree of all the data collected on the 20th of October on the release channel: There are many ways of displaying the data but this is one I find particularly useful, as I am more interested in the frequency of the combination of factors than the distribution of the single factors. The size of the factors tries to be proportional to the frequency of the prefix. For instance, the most common machine configuration has a spinning disk, 4 GB of memory, 4 cores, an Intel GPU and runs Firefox 32.0.3 on Windows 6.1 (aka Windows 7).
Note that the GPU refers to the main one detected when Firefox launches. This means that if a machine has more than one accelerator, only the one active during startup is reported. This is clearly suboptimal and we have a bug on file to address this issue. The online dashboard also allows one to dig into the individual nodes and show the cumulative percentage of users for the hovered node.

# Correlating Firefox add-ons to slow shutdown times

This is a follow-up to my earlier findings about add-ons and startup times. In this post I am going to dig deeper into the relations between add-ons and shutdown times. A slow shutdown doesn't seem to be a big deal. If one considers though that a new instance of Firefox can't be launched if the old one is still shutting down, the issue becomes more serious.

It turns out that for shutdown times a simple linear model is not good enough, while a log-linear one instead has a reasonably good performance. Log transforming the shutdown times slightly complicates the meaning of the coefficients, as they have to be interpreted as the percentage change of the average shutdown time. E.g. if an add-on has a coefficient of 100%, it means that it might (correlation is not causation!) slow down shutdown by 2 times. The idea of using a log-linear model comes from our contributors Jeremy Atia and Martin Gubri [1], who discovered the relationship during a preliminary analysis.

Unlike in the startup case, there are fewer strong relationships here. Some patterns start to emerge though: the Yandex add-on, for instance, seems to be associated with both slower startup and shutdown timings. We started to keep track of those results on a weekly basis through a couple of iacomus dashboards: one for startup and the other for shutdown time correlations. The dashboards are surely not going to win any design award but they get the job done and didn't require any effort to set up. I am confident that by spotting consistently ill-behaved add-ons through the time-series we should be able to spot real tangible offenders.

[1] If you love probability, statistics and machine learning and are looking for an open-source project to contribute to, Firefox is a cool place to start! Get in touch with me if that sounds interesting.

# Correlating Firefox add-ons to performance bottlenecks

Update: I re-ran the analysis on more data, as some add-ons had very few entries with extreme outliers that were skewing the results; I also considered more add-ons.

I started looking into exploiting our Telemetry data to determine which add-ons are causing performance issues with Firefox. So far there are three metrics that I plan to correlate with add-ons:
• startup time,
• shutdown time,
• background hangs.

In this post I am going over my findings for the first scenario, i.e. the relation between startup time and installed add-ons. In an ideal world, all add-ons would have a uniform way to initialize themselves which could be instrumented. Unfortunately that's not possible: many add-ons use asynchronous facilities and/or rely on observer notifications for initialization. In other words, there is no good way to easily measure the initialization time for all add-ons without possibly touching their codebases individually.

This is the sort of problem that screams for a multi-way ANOVA but, after some thought and data exploration, it turns out that the interaction terms can be dropped between add-ons, i.e. the relation between add-ons and the startup time can be modeled as a pure additive one.
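To make the two models concrete (my notation, not the posts'): the startup analysis fits an additive model of the form

$$y = \beta_0 + \sum_j \beta_j x_j + \varepsilon, \qquad x_j = 1 \text{ if add-on } j \text{ is installed, else } 0,$$

while the shutdown analysis above fits the same right-hand side to $\log y$, so that each coefficient acts multiplicatively: an add-on with coefficient $\beta_j$ scales the average shutdown time by $e^{\beta_j}$.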
Since a multi-way ANOVA is equivalent to a linear regression between a set of predictors and their interactions, the problem can be modeled with a generalized linear model where for each Telemetry submission the add-on map is represented as a boolean vector of dummy variables that can assume a value of 0 or 1, corresponding to "add-on not installed" and "add-on installed", respectively.

Startup time depends on many other factors that are not taken into account in the model, like current system load and hard drive parameters. This means that it would be very surprising, to say the least, if one could predict the startup time without those variables. That doesn't mean that we can't explain part of the variance! In fact, after training the model on the data collected during the past month, it yielded an $R^2$ score of about 0.15, which in other words means that we can explain about 15% of the variance. Again, as we are not trying to predict the startup time accurately, this is not necessarily a bad result. The F ratio, which relates the variance between add-ons to the variance within add-ons, is significant, which confirms that having or not having certain add-ons installed does influence the startup time.

Many of the p-values of the predictors' coefficients are highly significant (<< 0.001); it's just a matter of sorting the significant results by their effect size to determine the add-ons that cause a notable slowdown of Firefox during startup: The horizontal axis measures the startup time overhead with respect to the average startup time of Firefox. For instance, Yandex Elements seems to be slowing down startup by about 8 seconds on average. The error-bars represent the standard errors of the sampling distributions of the coefficients.

Note that the model is based on a very small fraction of our user-base, i.e. the subset that has Telemetry enabled, so there clearly is some implicit bias. The picture might be different for a truly random sample of our users; nevertheless it is an indication of where to start digging deeper. The next step is to "dashboardify" the whole thing and contact the developers of the various add-ons. We are also considering notifying users, in a yet to be determined way, when the browser detects add-ons that are known to cause performance issues.

References: map-reduce job

# Telemetry meets Clojure.

tldr: Data related telemetry alerts (e.g. histograms or main-thread IO) are now aggregated by medusa, which allows devs to post, view and filter alerts. The dashboard allows users to subscribe to search criteria or individual metrics.

As mentioned in my previous post, we recently switched to a dashboard generator, "iacomus", to visualize the data produced by some of our periodic map-reduce jobs. Given that the dashboards gained some metadata that describes their datasets, writing a regression detection algorithm based on the iacomus data-format followed naturally. The algorithm generates a time-series for each possible combination of the filtering and sorting criteria of a dashboard, compares the latest data-point to the distribution of the previous N, and generates an alert if it detects an outlier. Stats 101.

Alerts are aggregated by medusa, which provides a RESTful API to submit alerts and exposes a dashboard that allows users to view and filter alerts using regular expressions and subscribe to alerts.

Writing the aggregator and regression detector in Clojure[script] has been a lot of fun.
I found particularly attractive the fact that Clojure doesn't have any big web framework a la Ruby or Python that forces you into one specific mindset. Instead you can roll your own using a wide set of libraries, like:
• HTTP-Kit, an event-driven HTTP client/server
• Compojure, a routing library
• Korma, a SQL DSL
• Liberator, RESTful resource handlers
• om, React.js interface for Clojurescript
• secretary, a client-side routing library

The ability to easily compose functionality from different libraries is exceptionally well explained by a quote from Alan Perlis: "It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures". And so, as it happens, instead of each library having its own set of independent abstractions and data-structures, Clojure libraries tend to use mostly just lists, vectors, sets and maps, which greatly simplifies interoperability.

Lisp gets criticized for its syntax, or lack thereof, but I don't feel that's fair. Using any editor that inserts and balances parentheses for you does the trick. I also feel like I didn't have to run a background thread in my mind to think about whether what I was writing would please the compiler or not, unlike in Scala for instance. Not to speak of the ability to use macros, which allows one to easily extend the compiler with user-defined code. The expressiveness of Clojure also means that more thought is required per LOC, but that might just be a side-effect of not being a full-time functional programmer.

What I do miss in the Clojure ecosystem is a strong set of tools for statistics and machine learning. Incanter is a wonderful library but, coming from an R and python/scipy background, I had the impression that there is still a lot of catching up to do.

# Dashboard generator for custom Telemetry jobs

tldr: Next time you are in need of a dashboard similar to the one used to monitor main-thread IO, please consider using my dashboard generator, which takes care of displaying periodically generated data.

So you wrote your custom analysis for Telemetry, your map-reduce job is finally giving you the desired data and you want to set it up so that it runs periodically. You will need some sort of dashboard to monitor the weekly runs, but since you don't really care how it's done, what do you do? You copy-paste the code of one of our current dashboards, a little tweak here and there, and off you go. That basically describes all of the recent dashboards, like the one for main-thread IO (mea culpa).

Writing dashboards is painful when the only thing you care about is data. Once you finally have what you were looking for, the way you present it is often considered an afterthought at best. But maintaining N dashboards quickly becomes unpleasant.

But what makes writing and maintaining dashboards so painful exactly? It's simply that the more controls you have, the more different kinds of events you have to handle and the more easily things get out of hand. You start with something small and beautiful that just displays some csv and, presto, you end up with what should have been properly described as a state machine but instead is a mess of intertwined event handlers.

What I was looking for was something along the lines of Shiny for R, but in javascript and with the option to have a client-only based interface. It turns out that React does more or less what I want. It's not necessarily meant for data analysis, so there aren't any plotting facilities, but everything is there to roll your own.
What exactly makes Shiny and React so useful is that they embrace reactive programming. Once you define a state and a set of dependencies, i.e. a data flow graph in practical terms, changes that affect the state end up being automatically propagated to the right components. Even though this can be seen as overkill for small dashboards, it makes it extremely easy to extend them when the set of possible states expands, which is almost always what happens.

To make things easier for developers I wrote a dashboard generator, iacumus, for use-cases similar to the ones we currently have. It can be used in simple scenarios when:
• the data is collected in csv files on a weekly basis, usually using build-ids;
• the dashboard should compare the current week against the previous one and mark differences in rankings;
• it should be possible to go back and forward in time;
• the dashboard should provide some filtering and sorting criteria.

Iacumus is customizable through a configuration file that is specified through a GET parameter. Since it's hosted on github, you just have to provide the data and don't even have to spend time deploying the dashboard somewhere, assuming the machine serving the configuration file supports CORS. Here is how the end result looks, using the data for the add-on start-up correlations dashboard. Note that currently Chrome doesn't properly handle our gzipped datasets and is unable to display anything, in case you wonder…

My next immediate goal is to simplify writing map-reduce jobs for the above mentioned use cases, or at the very least write down some guidelines. For instance, some of our dashboards are based on Firefox's version numbers and not on build-ids, which is really what you want when you desire to make comparisons of Nightly on a weekly basis. Another interesting thought would be to automatically detect differences in the dashboards and send alerts. That might not be as easy with the current data, since a quick look at the dashboards makes it clear that the rankings fluctuate quite a bit. We would have to collect daily reports and account for the variance of the rankings in those, as just using a few weekly datapoints is not reliable enough to account for the deviation.

# Regression detection for Telemetry histograms.

tldr: An automatic regression detector system for Telemetry data has been deployed; the detected regressions can be seen in the dashboard.

Mozilla is collecting over 1,000 Telemetry probes which give rise to histograms, like the one in the figure below, that change slightly every day. Until lately, the only way to monitor those histograms was to sit down and literally stare at the screen until something interesting was spotted. Clearly there was the need for an automated system able to discern between noise and real regressions. Noise is a major challenge, even more so than with Talos data, as Telemetry data is collected from a wide variety of computers, configurations and workloads. A reliable means of detecting regressions, improvements and changes in a measurement's distribution is fundamental, as erroneous alerts (false positives) tend to annoy people to the point that they just ignore any warning generated by the system.
I have looked at various methods to detect changes in histograms, like:
• Correlation Coefficient
• Chi-Square statistic
• U statistic (Mann-Whitney)
• Kolmogorov-Smirnov statistic of the estimated densities
• One Class Support Vector Machine
• Bhattacharyya Distance

Only the Bhattacharyya distance proved satisfactory for our data. There are several reasons why each of the previous methods fails with our dataset. For instance, a one-class SVM wouldn't be a bad idea if some distributions didn't change dramatically over the course of time due to regressions and/or improvements in our code; so, in other words, how do you define what a distribution should look like? You could just take the daily distributions of the past week as a training set, but that wouldn't be enough data to get anything meaningful from an SVM. A Chi-Square statistic instead is not always applicable, as it doesn't allow cells with an expected count of 0. We could go on for quite a while and there are ways to get around those issues, but the reader is probably more interested in the final solution. I evaluated how good those methods actually are at pinpointing some past known regressions, and the Bhattacharyya distance proved to be able to detect the kind of pattern changes we are looking for, like distribution shifts or bin swaps, while minimizing the number of false positives.

Having a relevant distance metric is only part of the deal, since we still have to decide what to compare. Should we compare the distribution of today's build-id against the one from yesterday? Or the one from a week ago? It turns out that trying to mimic what a human would do yields a good algorithm: if
• the variance of the distance between the histogram of the current build-id and the histograms of the past N build-ids is small enough, and
• the distance between the histograms of the current build-id and the previous build-id is above a cutoff value K, yielding a significant difference, and
• a significant difference is also present in the next K build-ids,
then a distribution change is reported. Furthermore, histograms that don't have enough data are filtered out and the cut-off values and parameters are determined empirically from past known regressions.

I am pretty satisfied with the detected regressions so far; for instance, the system was able to correctly detect a regression caused by the OMTC patch that landed the 20th of May, which caused a significant change in the average frame interval during tab open animation:

We will soon roll out a feature to allow histogram authors to be notified through e-mail when a histogram change occurs. In the meantime you can have a look at the detected regressions in the dashboard.
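To make the chosen metric concrete, here is a minimal sketch (my own illustration, not the production detector) of the Bhattacharyya distance between two histograms sharing the same binning:

#include <cmath>
#include <vector>

// Bhattacharyya distance between two histograms with identical bins.
// Each histogram is first normalized to a discrete probability
// distribution; the coefficient BC = sum_i sqrt(p_i * q_i) is 1 for
// identical distributions and 0 for non-overlapping ones (in which
// case the distance D = -ln(BC) diverges to infinity).
double bhattacharyya_distance(const std::vector<double>& h1,
                              const std::vector<double>& h2) {
    double n1 = 0.0, n2 = 0.0;
    for (double v : h1) n1 += v;
    for (double v : h2) n2 += v;

    double bc = 0.0;
    for (std::size_t i = 0; i < h1.size() && i < h2.size(); ++i)
        bc += std::sqrt((h1[i] / n1) * (h2[i] / n2));

    return -std::log(bc);
}

A detector along the lines described above would then alert when the distance between consecutive build-ids jumps above the empirically chosen cutoff after having been stable over the previous N build-ids.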
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 5, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3401043117046356, "perplexity": 1654.2238447038678}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400376728.38/warc/CC-MAIN-20141119123256-00063-ip-10-235-23-156.ec2.internal.warc.gz"}
http://farside.ph.utexas.edu/teaching/jk1/lectures/node144.html
# Gauge Invariance

The electric and magnetic fields are obtained from the vector and scalar potentials according to the prescription

$\mathbf{E} = -\nabla\phi - \dfrac{\partial \mathbf{A}}{\partial t}$   (1738)

$\mathbf{B} = \nabla\times\mathbf{A}$   (1739)

These fields are important because they determine the electromagnetic forces exerted on charged particles. Note that the previous prescription does not uniquely determine the two potentials. It is possible to make the following transformation, known as a gauge transformation, that leaves the fields unaltered:

$\phi \rightarrow \phi - \dfrac{\partial \psi}{\partial t}$   (1740)

$\mathbf{A} \rightarrow \mathbf{A} + \nabla\psi$   (1741)

where $\psi$ is a general scalar field. (The fields are indeed unaltered, because $\nabla\times\nabla\psi = 0$ in the expression for $\mathbf{B}$, and the extra terms in the expression for $\mathbf{E}$ cancel since $\nabla(\partial\psi/\partial t) = \partial(\nabla\psi)/\partial t$.) It is necessary to adopt some form of convention, generally known as a gauge condition, to fully specify the two potentials. In fact, there is only one gauge condition that is consistent with Equations (1733). This is the Lorenz gauge condition,

$\nabla\cdot\mathbf{A} + \dfrac{1}{c^2}\dfrac{\partial \phi}{\partial t} = 0$   (1742)

Note that this condition can be written in the Lorentz invariant form

$\partial_\mu \Phi^\mu = 0$   (1743)

where $\Phi^\mu$ is the potential 4-vector. This implies that if the Lorenz gauge holds in one particular inertial frame then it automatically holds in all other inertial frames. A general gauge transformation can be written

$\Phi^\mu \rightarrow \Phi^\mu + \partial^\mu \psi$   (1744)

Note that, even after the Lorenz gauge has been adopted, the potentials are undetermined to a gauge transformation using a scalar field, $\psi$, that satisfies the sourceless wave equation

$\partial_\mu \partial^\mu \psi = 0$   (1745)

However, if we adopt sensible boundary conditions in both space and time then the only solution to the previous equation is $\psi = 0$.

Richard Fitzpatrick 2014-06-27
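One way to convince yourself that the transformation (1740)-(1741) really leaves the fields unaltered is to check it symbolically; the sketch below does so with sympy, assuming nothing beyond the prescription above:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = (x, y, z)

phi = sp.Function('phi')(t, x, y, z)                    # scalar potential
A = [sp.Function(f'A_{c}')(t, x, y, z) for c in 'xyz']  # vector potential
psi = sp.Function('psi')(t, x, y, z)                    # gauge function

def fields(phi, A):
    # E = -grad(phi) - dA/dt and B = curl(A), per the prescription above
    E = [-sp.diff(phi, c) - sp.diff(Ai, t) for c, Ai in zip(coords, A)]
    B = [sp.diff(A[2], y) - sp.diff(A[1], z),
         sp.diff(A[0], z) - sp.diff(A[2], x),
         sp.diff(A[1], x) - sp.diff(A[0], y)]
    return E + B

# Gauge transformation: phi -> phi - dpsi/dt, A -> A + grad(psi)
phi_new = phi - sp.diff(psi, t)
A_new = [Ai + sp.diff(psi, c) for Ai, c in zip(A, coords)]

diffs = [sp.simplify(f1 - f2)
         for f1, f2 in zip(fields(phi, A), fields(phi_new, A_new))]
assert all(d == 0 for d in diffs)   # E and B are unchanged
```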
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9902485609054565, "perplexity": 428.0873519577209}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815918.89/warc/CC-MAIN-20180224172043-20180224192043-00033.warc.gz"}
https://bibbase.org/network/publication/kent-laslo-rafaeli-interactivityinonlinediscussionsandlearningoutcomes-2016
Interactivity in online discussions and learning outcomes. Kent, C., Laslo, E., & Rafaeli, S. Computers and Education, 97:116-128, Elsevier Ltd, June 2016.

The increased use of online discussions in learning environments both formal and informal, positions the construct of interactivity as central to learning. Interactivity in learning communities' online discourse is viewed in this study as a socio-constructivist process. It is the network of interactions among content items and participants which drives a collective knowledge construction process. Conceptualizing interactivity in the literature is still unclear and not enough is known about its role in knowledge construction and about its relationship to learning outcomes. In addition, assessing learning outcomes using analytics has not matured fully and is still subject to intense development. This study thus sets out to investigate the role of interactivity as a process of knowledge construction within online discussions, and in particular, its association with learning outcomes, as measured by formal assessment tasks. We present significant positive correlations between various interactivity measures, taken from various learning communities, and a set of well-known learning assessments. We suggest that patterns of interactivity among learners can be measured, and teach us, not just about group dynamics and collaboration, but also about the actual individual learning process.

@article{
  title = {Interactivity in online discussions and learning outcomes},
  type = {article},
  year = {2016},
  keywords = {Computer-mediated communication, Cooperative/collaborative learning, Evaluation methodologies, Interactive learning environments, Learning communities},
  pages = {116-128},
  volume = {97},
  month = {6},
  publisher = {Elsevier Ltd},
  day = {1},
  id = {ba740df9-16f8-30e4-813b-3d938a8da1d2},
  created = {2020-02-03T15:25:15.796Z},
  accessed = {2020-02-03},
  file_attached = {false},
  profile_id = {66be748e-b1e3-36e1-95e1-5830d0ccc3ca},
  group_id = {ed1fa25d-c56b-3067-962d-9d08ff49394c},
  last_modified = {2020-02-03T15:25:41.188Z},
}
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43293696641921997, "perplexity": 6117.102653381923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945292.83/warc/CC-MAIN-20230325002113-20230325032113-00335.warc.gz"}
https://www.mathscinotes.com/2016/12/another-angle-measurement-using-roller-gages-plus-error-analysis-example/
# Another Angle Measurement Using Roller Gages Plus Error Analysis Example

Quote of the Day

Einstein himself, of course, arrived at the same Lagrangian but without the help of a developed field theory, and I must admit that I have no idea how he guessed the final result. We have had troubles enough arriving at the theory - but I feel as though he had done it while swimming underwater, blindfolded, and with his hands tied behind his back!

— Feynman Lectures on Gravitation (1995) p. 87. Feynman was showing how to derive the theory of general relativity. Even Feynman struggled to understand how Einstein obtained his theory of general relativity given the state of knowledge at the time.

## Introduction

Figure 1: Example of Measuring a Small Angle.

While I covered angle measurement in a previous post, that approach can be difficult to apply to acute angles. The approach presented in this post works well for acute angles, but will not work for obtuse angles. As part of this post, I will also demonstrate how to perform a tolerance analysis on this approach. The tolerance analysis is important in understanding the level of accuracy required in your linear measurements to achieve the desired angle accuracy. This example was motivated by material presented on this web page.

## Background

All the background required is covered in my previous metrology posts.

## Analysis

### Definition and Derivation

Figure 2 shows the configuration of the two roller gages of diameter D within the angle. A slip gage is used to measure the distance L between the outer gage and the upper leg of the angle. I have included a red-legged reference triangle in Figure 2. The formula for θ is found by applying the definition of the sine of an angle (opposite/hypotenuse). This means that $\sin \left( \theta \right)=\frac{L}{D}$.

Figure 2: Symbol Definitions.

### Example

Figure 3 shows the calculations associated with the example of Figure 1. My results are in reasonable agreement with the angle measurement taken from Figure 1, which is a scale drawing.

Figure 3: Calculations for the Example of Figure 1.

### Tolerance Analysis

Figure 4 shows how I performed my tolerance analysis. In this analysis, I wanted to estimate the impact of tolerance errors in the roller (±0.0001 in) and slip gages (±0.001 in). These errors produce an error in the angle measurement of 2.5 arcminutes. The error analysis makes use of the concept of differentials.

Figure 4: Error Analysis.

## Conclusion

This post ends my series on using roller gages and gage balls. This effort has been part of my attempts to gather information and tutorials on making accurate measurements of tough parameters.

This entry was posted in Metrology.

### One Response to Another Angle Measurement Using Roller Gages Plus Error Analysis Example

1. Malcolm Frame says:

The most accurately machined spheres ever manufactured (about the size of a ping-pong ball) are now orbiting the Earth in gyroscopes that are part of a project to prove that Einstein was right: https://einstein.stanford.edu/TECH/technology1.html
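For readers who want to reproduce the differential-based tolerance analysis described above, here is a small Python sketch. Since the actual gage dimensions live in the figures, the values below (L = 0.25 in, D = 0.5 in) are hypothetical placeholders, not the numbers from Figures 3 and 4:

```python
import math

def angle_and_error(L, D, dL, dD):
    # theta = asin(L / D); propagate tolerances with differentials:
    # d(theta) = |d(theta)/dL| * dL + |d(theta)/dD| * dD
    ratio = L / D
    theta = math.asin(ratio)
    root = math.sqrt(1 - ratio**2)
    dtheta = (1 / (D * root)) * dL + (L / (D**2 * root)) * dD
    return theta, dtheta

# hypothetical gage dimensions; tolerances from the post
theta, err = angle_and_error(L=0.25, D=0.5, dL=0.001, dD=0.0001)
print(f"theta = {math.degrees(theta):.3f} deg "
      f"± {math.degrees(err) * 60:.1f} arcmin")
```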
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8712003231048584, "perplexity": 1031.370381656588}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738888.13/warc/CC-MAIN-20200812083025-20200812113025-00534.warc.gz"}
http://mathhelpforum.com/algebra/53262-exponential-problem.html
## Exponential problem

I've got this problem:

$\frac{1}{4}\left[-2a\,e^{-a/2} - 4e^{-a/2} + 4\right] = 0.05$

and I'm supposed to solve for a. But I'm not very good with exponentials and logarithms. I start out like this:

$\frac{1}{4}(-2a\,e^{-a/2}) - e^{-a/2} + 1 = 0.05$

$-0.5a\,e^{-a/2} - e^{-a/2} + 1 = 0.05$

$-0.5a\,e^{-a/2} - e^{-a/2} = -0.95$

$-e^{-a/2}(1 + 0.5a) = -0.95$

$-e^{-a/2} = \frac{-0.95}{1 + 0.5a}$

$e^{-a/2} = \frac{0.95}{1 + 0.5a}$

Am I right so far? And if so, where do I go next? I'd be very thankful if someone could help me out with this. And please also explain the steps.
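One thing worth noting about the last line: a appears both inside and outside the exponential, so the equation has no elementary closed-form solution (it can be rewritten with the Lambert W function, or solved numerically). A minimal numerical sketch, assuming SciPy is available:

```python
import math
from scipy.optimize import brentq

def f(a):
    return 0.25 * (-2*a*math.exp(-a/2) - 4*math.exp(-a/2) + 4) - 0.05

# f(0) = -0.05 and f(a) approaches 0.95 as a grows, so a root is bracketed
root = brentq(f, 0.0, 10.0)
print(root)   # roughly 0.71
```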
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8862122893333435, "perplexity": 121.01274440832917}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657128337.85/warc/CC-MAIN-20140914011208-00089-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
http://www.tutorpaul.com/2-6/
## 3 thoughts on “2.6”

1. kristian8495 says:
For part B, is it an ellipse with the dimensions of 2D and D because of the equations Dcos(wt) and 2Dsin(wt)?

1. tutorpaul says:
We know it is an ellipse because $x_p = A \cos(\omega t)$, $y_p = B \sin(\omega t)$, and $A \neq B$. We know that it is taller than it is wide because $B > A$; specifically, we know that it is twice as tall as it is wide because $B = 2A$.
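Eliminating $t$ makes this explicit: since $\cos^2(\omega t) + \sin^2(\omega t) = 1$,

$$\left(\frac{x_p}{A}\right)^2 + \left(\frac{y_p}{B}\right)^2 = \cos^2(\omega t) + \sin^2(\omega t) = 1,$$

which is the standard equation of an ellipse with semi-axes $A$ and $B$.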
{"extraction_info": {"found_math": true, "script_math_tex": 5, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8839685916900635, "perplexity": 635.0398205016784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825728.30/warc/CC-MAIN-20181214114739-20181214140239-00474.warc.gz"}
https://www.library2.smu.ca/xmlui/browse?type=author&value=Unsworth%2C+C.
# Browsing by Author "Unsworth, C."

• (American Physical Society, 2017-06-28) How does nature hold together protons and neutrons to form the wide variety of complex nuclei in the Universe? Describing many-nucleon systems from the fundamental theory of quantum chromodynamics has been the greatest ...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9527114033699036, "perplexity": 4150.261820577729}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964360881.12/warc/CC-MAIN-20211201173718-20211201203718-00475.warc.gz"}
http://aimsciences.org/search/author?author=Victor%20S.%20Kozyakin
# American Institute of Mathematical Sciences

## Journals

DCDS
The influence of the driving system on a skew-product flow generated by a triangular system of differential equations can be perturbed in two ways: directly, by perturbing the vector field of the driving system component itself, or indirectly, by perturbing its input variable in the vector field of the coupled component. The effect of such perturbations on a nonautonomous attractor of the driven component is investigated here. In particular, it is shown that a perturbed nonautonomous attractor with nearby components exists in the indirect case if the driven system has an inflated nonautonomous attractor, and that the direct case can be reduced to this case if the driving system is shadowing.
keywords: perturbations, attractors, skew product flow, shadowing.

DCDS
A nonautonomous or cocycle dynamical system that is driven by an autonomous dynamical system acting on a compact metric space is assumed to have a uniform pullback attractor. It is shown that discretization by a one-step numerical scheme gives rise to a discrete time cocycle dynamical system with a uniform pullback attractor, the component subsets of which converge upper semicontinuously to their continuous time counterparts as the maximum time step decreases to zero. The proof involves a Lyapunov function characterizing the uniform pullback attractor of the original system.
keywords: perturbations, discretization, cocycle dynamical systems, attractors.

DCDS-S
We consider discrete time systems $x_{k+1}=U(x_{k};\lambda)$, $x\in\mathbb{R}^{N}$, with a complex parameter $\lambda$. The map $U(\cdot;\lambda)$ at infinity contains a principal linear term, a bounded positively homogeneous nonlinearity, and a smaller part. We describe the sets of parameter values for which large-amplitude $n$-periodic trajectories exist for a fixed $n$. In the related problems on small periodic orbits near zero, similarly defined parameter sets, known as Arnold tongues, are more narrow.
keywords: Arnold tongue, positively homogeneous nonlinearity, discrete time system, bifurcation at infinity, Poincaré map, periodic trajectory, saturation.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.935393214225769, "perplexity": 654.7455635167247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891377.59/warc/CC-MAIN-20180122133636-20180122153636-00781.warc.gz"}
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Chirality/Stereoisomers/Chirality_and_Symmetry/Enantiomorphism/Conformational_Enantiomorphism
# Conformational Enantiomorphism

The Fischer projection formula of meso-tartaric acid has a plane of symmetry bisecting the C2–C3 bond, as shown on the left in the diagram below, so this structure is clearly achiral. The eclipsed orientation of bonds that is assumed in the Fischer drawing is, however, an unstable conformation, and we should examine the staggered conformers that undoubtedly make up most of the sample molecules. The four structures that are shown to the right of the Fischer projection consist of the achiral Fischer conformation (A) and three staggered conformers, all displayed in both sawhorse and Newman projections. The second and fourth conformations (B & D) are dissymmetric, and are in fact enantiomeric structures. The third conformer (C) has a center of symmetry and is achiral.

### Conformations of meso-Tartaric Acid (Fischer Projection)

• A: eclipsed, achiral
• B: staggered, chiral
• C: staggered, achiral
• D: staggered, chiral

Since a significant proportion of the meso-tartaric acid molecules in a sample will have chiral conformations, the achiral properties of the sample (e.g. optical inactivity) should not be attributed to the symmetry of the Fischer formula. Equilibria among the various conformations are rapidly established, and the proportion of each conformer present at equilibrium depends on its relative potential energy (the most stable conformers predominate). Since enantiomers have equal potential energies, they will be present in equal concentration, thus canceling their macroscopic optical activity and other chiral behavior. Simply put, any chiral species that are present are racemic.

It is interesting to note that chiral conformations are present in most conformationally mobile compounds, even in the absence of any chiral centers. The gauche conformers of butane, for example, are chiral and are present in equal concentration in any sample of this hydrocarbon. The following illustration shows the enantiomeric relationship of these conformers, which are an example of a chiral axis rather than a chiral center.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8910893201828003, "perplexity": 2552.5507818209976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154089.6/warc/CC-MAIN-20210731105716-20210731135716-00544.warc.gz"}
https://www.physicsforums.com/threads/linear-momentum-and-collisions-of-meteor.69974/
# Linear Momentum and Collisions of meteor

1. Apr 4, 2005
### wildr0id
A meteor whose mass was about 10^8 kg struck the Earth (m = 6.0×10^24 kg) with a speed of about 11 km/s and came to rest in the Earth. (a) What was the Earth's recoil speed? (m/s) (b) What fraction of the meteor's kinetic energy was transformed to kinetic energy of the Earth? (%) (c) By how much did the Earth's kinetic energy change as a result of this collision? (J)

I know this problem requires a look at conservation of momentum and conservation of energy principles, but I am having trouble just trying to start this problem out :grumpy:

2. Apr 4, 2005
### dextercioby
You know that you need to apply the law of conservation of momentum. Well, then do it... I'm afraid you're dealing with a plastic (perfectly inelastic) collision, for which the KE is not conserved...

Daniel.

3. Apr 4, 2005
### HallsofIvy
Staff Emeritus
This is a "completely inelastic" collision: kinetic energy is not conserved, so you cannot use that. You do, however, know that the Earth has 0 velocity initially and that both the Earth and the asteroid have the same velocity after. $M_a v_a + M_e v_e = M_a v'_a + M_e v'_e$ ("e" subscripts are "Earth", "a" subscripts are "asteroid"; $v'$ is after the collision) becomes $M_a v = (M_a + M_e)v'$. You know $M_a$, $M_e$, and $v$. Solve for $v'$. Once you know that, you can calculate the kinetic energy of the asteroid and Earth after the collision and compare it with those values before the collision.
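For anyone checking their numbers, a quick numeric sketch following HallsofIvy's setup:

```python
M_a, M_e = 1e8, 6.0e24     # meteor and Earth masses (kg)
v = 11e3                   # impact speed (m/s)

# Perfectly inelastic collision: M_a*v = (M_a + M_e)*v'
v_prime = M_a * v / (M_a + M_e)       # (a) recoil speed, ~1.8e-13 m/s

KE_meteor = 0.5 * M_a * v**2          # meteor's kinetic energy before impact
dKE_earth = 0.5 * M_e * v_prime**2    # (c) Earth's kinetic energy gain, ~0.1 J
fraction = dKE_earth / KE_meteor      # (b) ~1.7e-17, i.e. ~1.7e-15 percent

print(v_prime, fraction, dKE_earth)
```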
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9144910573959351, "perplexity": 1348.7072427445053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542213.61/warc/CC-MAIN-20161202170902-00097-ip-10-31-129-80.ec2.internal.warc.gz"}
https://forum.effectivealtruism.org/posts/SbmJfGk5wH3g2XbXc/a-relatively-atheoretical-perspective-on-astronomical-waste
Crossposted from the Global Priorities Project

# Introduction

It is commonly objected that the “long-run” perspective on effective altruism rests on esoteric assumptions from moral philosophy that are highly debatable. Yes, the long-term future may overwhelm aggregate welfare considerations, but does it follow that the long-term future is overwhelmingly important? Do I really want my plan for helping the world to rest on the assumption that the benefit from allowing extra people to exist scales linearly with population when large numbers of extra people are allowed to exist?

In my dissertation on this topic, I tried to defend the conclusion that the distant future is overwhelmingly important without committing to a highly specific view about population ethics (such as total utilitarianism). I did this by appealing to more general principles, but I did end up delving pretty deeply into some standard philosophical issues related to population ethics. And I don’t see how to avoid that if you want to independently evaluate whether it’s overwhelmingly important for humanity to survive in the long-term future (rather than, say, just deferring to common sense).

In this post, I outline a relatively atheoretical argument that affecting long-run outcomes for civilization is overwhelmingly important, and attempt to side-step some of the deeper philosophical disagreements. It won’t be an argument that preventing extinction would be overwhelmingly important, but it will be an argument that other changes to humanity’s long-term trajectory overwhelm short-term considerations. And I’m just going to stick to the moral philosophy here. I will not discuss important issues related to how to handle Knightian uncertainty, “robust” probability estimates, or the long-term consequences of accomplishing good in the short run. I think those issues are more important, but I’m just taking on one piece of the puzzle that has to do with moral philosophy, where I thought I could quickly explain something that may help people think through the issues.

In outline form, my argument is as follows:

1. In very ordinary resource conservation cases that are easy to think about, it is clearly important to ensure that the lives of future generations go well, and it’s natural to think that the importance scales linearly with the number of future people whose lives will be affected by the conservation work.
2. By analogy, it is important to ensure that, if humanity does survive into the distant future, its trajectory is as good as possible, and the importance of shaping the long-term future scales roughly linearly with the expected number of people in the future.
3. Premise (2), when combined with the standard set of (admittedly debatable) empirical and decision-theoretic assumptions of the astronomical waste argument, yields the standard conclusion of that argument: shaping the long-term future is overwhelmingly important.

As when I have discussed this issue in other contexts (such as Nick Bostrom’s papers “Astronomical Waste” and “Existential Risk Prevention as Global Priority,” and my dissertation), this conversation is going to generally assume that we’re talking about good accomplished from an impartial perspective, and will not attend to deontological, virtue-theoretic, or justice-related considerations.

# A review of the astronomical waste argument and an adjustment to it

The standard version of the astronomical waste argument runs as follows:

1. The expected size of humanity's future influence is astronomically great.
2. If the expected size of humanity's future influence is astronomically great, then the expected value of the future is astronomically great.
3. If the expected value of the future is astronomically great, then what matters most is that we maximize humanity’s long-term potential.
4. Some of our actions are expected to reduce existential risk in not-ridiculously-small ways.
5. If what matters most is that we maximize humanity’s future potential and some of our actions are expected to reduce existential risk in not-ridiculously-small ways, what it is best to do is primarily determined by how our actions are expected to reduce existential risk.
6. Therefore, what it is best to do is primarily determined by how our actions are expected to reduce existential risk.

I’ve argued for adjusting the last three steps of this argument in the following way:

4’. Some of our actions are expected to change our development trajectory in not-ridiculously-small ways.
5’. If what matters most is that we maximize humanity’s future potential and some of our actions are expected to change our development trajectory in not-ridiculously-small ways, what it is best to do is primarily determined by how our actions are expected to change our development trajectory.
6’. Therefore, what it is best to do is primarily determined by how our actions are expected to change our development trajectory.

The basic thought here is that what the astronomical waste argument really shows is that future welfare considerations swamp short-term considerations, so that long-term consequences for the distant future are overwhelmingly important in comparison with purely short-term considerations (apart from long-term consequences that short-term consequences may produce).

# Astronomical waste may involve changes in quality of life, rather than size of population

Often, the astronomical waste argument is combined with the idea that the best way to minimize astronomical waste is to minimize the probability of premature human extinction. How important it is to prevent premature human extinction is a subject of philosophical debate, and the debate largely rests on whether it is important to allow large numbers of people to exist in the future. So when someone complains that the astronomical waste argument rests on esoteric assumptions about moral philosophy, they are implicitly objecting to premise (2) or (3). They are saying that even if human influence on the future is astronomically great, maybe changing how well humanity exercises its long-term potential isn’t very important, because maybe it isn’t important to ensure that there are a large number of people living in the future.

However, the concept of existential risk is wide enough to include any drastic curtailment of humanity’s long-term potential, and the concept of a “trajectory change” is wide enough to include any small but important change in humanity’s long-term development. And the value of these existential risks or trajectory changes need not depend on changes in the population. For example,

• In “The Future of Human Evolution,” Nick Bostrom discusses a scenario in which evolutionary dynamics result in substantial decreases in quality of life for all future generations, where the main problem is not a population deficit.
• Paul Christiano outlined long-term resource inequality as a possible consequence of developing advanced machine intelligence.
• I discussed various specific trajectory changes in a comment on an essay mentioned above.
# There is limited philosophical debate about the importance of changes in the quality of life of future generations

The main group of people who deny that it is important that future people exist have “person-affecting views.” These people claim that if I must choose between outcome A and outcome B, and person X exists in outcome A but not outcome B, it’s not possible to affect person X by choosing outcome A rather than B. Because of this, they claim that causing people to exist can’t benefit them and isn’t important. I think this view suffers from fatal objections which I have discussed in chapter 4 of my dissertation, and you can check that out if you want to learn more. But, for the sake of argument, let’s agree that creating “extra” people can’t help the people created and isn’t important.

A puzzle for people with person-affecting views goes as follows:

Suppose that agents as a community have chosen to deplete rather than conserve certain resources. The consequences of that choice for the persons who exist now or will come into existence over the next two centuries will be “slightly higher” than under a conservation alternative (Parfit 1987, 362; see also Parfit 2011 (vol. 2), 218). Thereafter, however, for many centuries the quality of life would be much lower. “The great lowering of the quality of life must provide some moral reason not to choose Depletion” (Parfit 1987, 363). Surely agents ought to have chosen conservation in some form or another instead. But note that, at the same time, depletion seems to harm no one. While distant future persons, by hypothesis, will suffer as a result of depletion, it is also true that for each such person a conservation choice (very probably) would have changed the timing and manner of the relevant conception. That change, in turn, would have changed the identities of the people conceived and the identities of the people who eventually exist. Any suffering, then, that they endure under the depletion choice would seem to be unavoidable if those persons are ever to exist at all. Assuming (here and throughout) that that existence is worth having, we seem forced to conclude that depletion does not harm, or make things worse for, and is not otherwise “bad for,” anyone at all (Parfit 1987, 363). At least: depletion does not harm, or make things worse for, and is not "bad for," anyone who does or will exist under the depletion choice.

The seemingly natural thing to say if you have a person-affecting view is that because conservation doesn’t benefit anyone, it isn’t important. But this is a very strange thing to say, and people having this conversation generally recognize that saying it involves biting a bullet. The general tenor of the conversation is that conservation is obviously important in this example, and people with person-affecting views need to provide an explanation consonant with that intuition. Whatever the ultimate philosophical justification, I think we should say that choosing conservation in the above example is important, and this has something to do with the fact that choosing conservation has consequences that are relevant to the quality of life of many future people.

# Intuitively, giving N times as many future people higher quality of life is N times as important

Suppose that conservation would have consequences relevant to 100 times as many people in case A as it would in case B. How much more important would conservation be in case A? Intuitively, it would be 100 times more important.
This generally fits with Holden Karnofsky’s intuition that a 1/N probability of saving N lives is about as important as saving one life, for any N:

I wish to be the sort of person who would happily pay $1 for a robust (reliable, true, correct) 10/N probability of saving N lives, for astronomically huge N - while simultaneously refusing to pay $1 to a random person on the street claiming s/he will save N lives with it.

More generally, we could say:

Principle of Scale: Other things being equal, it is N times better (in itself) to ensure that N people in some position have higher quality of life than other people who would be in their position than it is to do this for one person.

I had to state the principle circuitously to avoid saying that things like conservation programs could “help” future generations, because according to people with person-affecting views, if our "helping" changes the identities of future people, then we aren't "helping" anyone and that's relevant. If I had said it in ordinary language, the principle would have said, “If you can help N people, that’s N times better than helping one person.” The principle could use some tinkering to deal with concerns about equality and so on, but it will serve well enough for our purposes.

The Principle of Scale may seem obvious, but even it is debatable. You wouldn’t find philosophical agreement about it. For example, some philosophers who claim that additional lives have diminishing marginal value would claim that in situations where many people already exist, it matters much less if a person is helped. I attack these perspectives in chapter 5 of my dissertation, and you can check that out if you want to learn more. But, in any case, the Principle of Scale does seem pretty compelling—especially if you’re the kind of person that doesn’t have time for esoteric debates about population ethics—so let’s run with it.

Now for the most questionable steps: Let’s assume with the astronomical waste argument that the expected number of future people is overwhelming, and that it is possible to improve the quality of life for an overwhelming number of future people through forward-thinking interventions. If we combine this with the principle from the last paragraph and wave our hands a bit, we get the conclusion that shifting quality of life for an overwhelming number of future people is overwhelmingly more important than any short-term consideration. And that is very close to what the long-run perspective says about helping future generations, though importantly different because this version of the argument might not put weight on preventing extinction. (I say “might not” rather than “would not” because if you disagree with the people with person-affecting views but accept the Principle of Scale outlined above, you might just accept the usual conclusion of the astronomical waste argument.)

# Does the Principle of Scale break down when large numbers are at stake?

I have no argument that it doesn’t, but I note that (i) this wasn’t Holden Karnofsky’s intuition about saving N lives, (ii) it isn’t mine, and (iii) I don’t really see a compelling justification for it. The main reason I can think of for wanting it to break down is not liking the conclusion that affecting long-run outcomes for humanity is overwhelmingly important in comparison with short-term considerations.
If you really want to avoid the conclusion that shaping the long-term future is overwhelmingly important, I believe it would be better to accommodate this idea by appealing to other perspectives and a framework for integrating the insights of different perspectives—such as the one that Holden has talked about—rather than altering this perspective. For such people, my hope would be that reading this post would cause you to put more weight on the perspectives that place great importance on the future.

# Summary

To wrap up, I’ve argued that:

1. Reducing astronomical waste need not involve preventing human extinction—it can involve other changes in humanity’s long-term trajectory.
2. While not widely discussed, the Principle of Scale is fairly attractive from an atheoretical standpoint.
3. The Principle of Scale—when combined with other standard assumptions in the literature on astronomical waste—suggests that some trajectory changes would be overwhelmingly important in comparison with short-term considerations. It could be accepted by people who have person-affecting views or people who don’t want to get too bogged down in esoteric debates about moral philosophy.

The perspective I’ve outlined here is still philosophically controversial, but it is at least somewhat independent of the standard approach to astronomical waste. Ultimately, any take on astronomical waste—including ignoring it—will be committed to philosophical assumptions of some kind, but perhaps the perspective outlined would be accepted more widely, especially by people with temperaments consonant with effective altruism, than perspectives relying on more specific theories or a larger number of principles.

Nice post. It's also worth noting that this version of the far-future argument appeals even to negative utilitarians, strongly anti-suffering prioritarians, Buddhists, antinatalists, and others who don't think it's important to create new lives for reasons other than holding a person-affecting view. I also think even if you want to create lots of happy lives, most of the relevant ways to tackle that problem involve changing the direction in which the future goes rather than whether there is a future. The most likely so-called "extinction" event in my mind is human replacement by AIs, but AIs would be their own life forms with their own complex galaxy-colonization efforts, so I think work on AI issues should be considered part of "changing the direction of the future" rather than "making sure there is a future".

I think it's an open question whether "even if you want to create lots of happy lives, most of the relevant ways to tackle that problem involve changing the direction in which the future goes rather than whether there is a future." But I broadly agree with the other points. In a recent talk on astronomical waste stuff, I recommended thinking about AI in the category of "long-term technological/cultural path dependence/lock in," rather than the GCR category (though that wasn't the main point of the talk). Link here: http://www.gooddoneright.com/#!nick-beckstead/cxpp, see slide 13.

Thanks Nick. I like the abstraction to see precisely which features allow you to run these arguments. Although my best guess agrees with it, I am a little more hesitant about the principle of scale than you are. There are some reasons for scepticism:

1) Very many population axiologies reject it.
Indeed it looks as though it will cut somewhere close to where a suitable separability axiom would -- which already gets you to summing utility functions (not necessarily preference-based ones). But perhaps I'm wrong about quite where it cuts; it could be interesting to explore this.

2) As well as doing the work in this argument, the principle of scale is a key part of what can make you vulnerable to Pascal's Mugging. I'd hope we can resolve that without giving up this principle, but I don't think it's entirely settled.

3) You say you see no great justification for the principle to break down when large numbers are at stake. But when not-so-large numbers are at stake, there are very compelling justifications to endorse the principle (and not just for improving quality of life). And these reasons do apply for a larger range of ethical views than would agree with it at large scale. So you might think that you only believed it for these reasons, and have no reason to support it in their absence.

Re 1, yes it is philosophically controversial, but it also does speak to people with a number of different axiologies, as Brian Tomasik points out in another comment. One way to frame it is that it's doing what separability does in my dissertation, but noticing that astronomical waste can run without making assumptions about the value of creating extra people. So you could think of it as running that argument with one less premise.

Re 2, yes it pushes in an unbounded utility function direction, and that's relevant if your preferred resolution of Pascal's Mugging is to have a bounded utility function. But this is also a problem for standard presentations of the astronomical waste argument. As it happens, I think you can run stuff like astronomical waste with bounded utility functions. Matt Wage has some nice stuff about this in his senior thesis, and I think Carl Shulman has a forthcoming post which makes some similar points. I think astronomical waste can be defended from more perspectives than it has been in the past, and it's good to show that. This post is part of that project.

Re 3, I'd frame it this way: "We use this all the time and it's great in ordinary situations. I'm doing the natural extrapolation to strange situations." Yes, it might break down in weird situations, but it's the extrapolation I'd put most weight on.

Yes, I really like this work in terms of pruning the premises. Which is why I'm digging into how firm those premises really are (even if I personally tend to believe them). It seems like the principle of scale is in fact implied by separability. I'd guess it's rather weaker, but I don't know of any well-defined examples which accept scale but not separability.

I do find your framing of 3 a little suspect. When we have a solid explanation for just why it's great in ordinary situations, and we can see that this explanation doesn't apply in strange situations, it seems like the extrapolation shouldn't get too much weight.

Actually most of my weight for believing the principle of scale comes from the fact that it's a consequence of separability.

One more way the principle might break down:

4) You might accept the principle for helping people at a given time, but not as a way of comparing between helping people at different times. Indeed in this case it's not so clear most people would accept the small-scale version (probably because intuitions are driven by factors such as the fact that improving lives earlier leaves more time for indirect effects that improve lives later).
Assuming I'm understanding the principle of scale correctly, I would have thought that the Average View is an example of something where Scale holds but Separability fails, since it seems that whenever Scale is applied, the population is the same size in both cases (via a suppressed other-things-equal clause).

Yes, good example.

"Reducing astronomical waste need not involve preventing human extinction—it can involve other changes in humanity's long-term trajectory." Glad to see this gaining more traction in the x-risk community!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6463277339935303, "perplexity": 999.2980929822228}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301264.36/warc/CC-MAIN-20220119064554-20220119094554-00102.warc.gz"}
http://physics.stackexchange.com/users/1637/user6818?tab=bounties&sort=offered
# user6818

## 7 Offered bounties for 350 reputation

### Some more questions on conformal spinors of $SO(n,2)$ (+50, jun 11 '12)

### Central charge at the fixed point of the ${\cal N}=2$ Landau-Ginzburg theory in $1+1$ dimensions (+50, may 15 '12)

### Argument for quantum theoretic conformality of $\cal{N}=2$ super-Chern-Simon's theory in $2+1$ dimensions - Part 2 (+50, jul 1 '11)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3132103383541107, "perplexity": 6474.20159254198}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507450252.23/warc/CC-MAIN-20141017005730-00132-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/geometry/CLONE-68e52840-b25a-488c-a775-8f1d0bdf0669/chapter-11-section-11-3-the-tangent-ratio-and-other-ratios-exercises-page-512/34b
## Elementary Geometry for College Students (6th Edition)

We divide cosine by sine to find: $\cot 57^{\circ} = \frac{\cos 57^{\circ}}{\sin 57^{\circ}} = \frac{0.54}{0.84} \approx 0.65$
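A one-line check with Python's math module:

```python
import math
print(math.cos(math.radians(57)) / math.sin(math.radians(57)))  # 0.6494...
```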
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8499031066894531, "perplexity": 9780.006693422678}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143455.25/warc/CC-MAIN-20200217235417-20200218025417-00343.warc.gz"}
https://futurism.com/an-interesting-turn-of-events-for-proponents-of-mond/
The “holy grail” of a scientific theory is one that can make accurate predictions of phenomena in advance. Recently, a group of researchers developing Modified Newtonian Dynamics (MOND), which is a modified law of gravity, successfully predicted the motions of 10 dwarf galaxies in orbit around the Andromeda galaxy in advance. This could be a huge leap forward for MOND.

What is MOND? MOND is an alternative to dark matter and attempts to explain the gravitational discrepancies seen in the motions of galaxies. In the early 20th century, astronomers discovered galaxies were moving in a way that didn't make sense given all of the observable matter. This led to the shocking realization that either there is an invisible substance adding to the mass of the galaxy, or our current laws of gravity are incomplete. Dark matter appeals to the first solution and MOND appeals to the second. Thus far, dark matter has proven itself to be the more reliable theory, providing a little more accuracy when describing the motions of galaxies, but it has some deep-seated problems. The winds might be changing, as MOND is developing further.

The 10 dwarf galaxies that were surveyed have some huge discrepancies from one another, so it's a fantastic place for MOND to prove its worth. Since the galaxies aren't alike, if MOND were able to predict their motions with accuracy, it would give credence to the hypothesis. Indeed, the hypothesis succeeded in its predictions, according to a paper published by Stacy McGaugh, who, among other credentials, is one of the founders of MOND.

The universe is currently thought to be composed of vast quantities of dark matter: about 26.8% of the visible universe, in contrast to the 4.9% that is normal "visible" matter. Even though dwarf galaxies are small, containing only a few thousand stars, when you're dealing with a roughly 1:5 ratio of regular matter to dark matter, you infer the presence of a ton of dark matter. That is assuming gravity is conventional instead of being modified.

"Most scientists are more comfortable with the dark matter interpretation, but we need to understand why MOND succeeds with these predictions. We don't even know how to make this prediction with dark matter. At stake now is whether the universe is predominantly made of an invisible substance that persistently eludes detection in the laboratory, or whether we are obliged to modify one of our most fundamental theories, the law of gravity." McGaugh said in an interview about the findings.

According to the MOND hypothesis, gravity at very low accelerations deviates from the standard Newtonian prediction. Once you start getting to higher accelerations, gravity returns to "normal" and the original Newtonian equations work just fine. Hypothetically, the modification of gravity at these small accelerations is sufficient to eliminate the need for dark matter altogether, because it's able to account for the currently observed mass discrepancies. According to the paper, this iteration of MOND was able to accurately describe small but measurable discrepancies seen in the gravitational fields of Andromeda's dwarf galaxies in spite of their distance to the large spiral galaxy. According to dark matter, no such distinction should be expected.
In this case, dark matter is able to make an "after the fact" prediction (when you modify the dark matter values for the dwarfs, you are able to make accurate calculations), whereas the MOND hypothesis was able to make accurate predictions without "cheating." While talking about MOND's falsifiability, McGaugh said, "The influence of the host galaxy may provide a test to distinguish between dark matter and MOND. Dark matter provides a cocoon for the dwarfs, protecting the stars from tidal influence by the host galaxy. With MOND, the influence of the host is more pronounced."

To date, McGaugh has been able to successfully predict the velocity dispersion of 17 dwarf galaxies, which is fantastic news for proponents of MOND. Dark matter still has a ton of observational evidence in its favor that MOND will have to contend with, but the fact that the hypothesis is making predictions unattainable by dark matter is a step in the right direction. At this point, it is important to note that dark matter is still the commonly accepted theory used to describe the mass discrepancy seen in the universe. The overwhelming majority of physicists support dark matter and believe it more accurately fits the evidence we can see. MOND is widely considered a "fringe theory," though it is not considered a pseudoscience. Personally, I prefer MOND over dark matter simply because I think MOND is a more elegant explanation, but MOND needs a lot of work before it will be able to replace dark matter, assuming it has what it takes. I think Richard Feynman puts it best: "It doesn't matter how beautiful your theory is, it doesn't matter how smart you are. If it doesn't agree with experiment, it's wrong." Only time and hard work will tell what's in store for MOND.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8238836526870728, "perplexity": 633.7617024574555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320887.15/warc/CC-MAIN-20170627013832-20170627033832-00092.warc.gz"}
https://www.lessonplanet.com/teachers/p-p-letter-recognition
# P p - Letter Recognition In this letter recognition activity, students color a black line picture of a penguin. They color a large upper case P and a large lower case p.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8183753490447998, "perplexity": 5183.27651202919}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814036.49/warc/CC-MAIN-20180222061730-20180222081730-00483.warc.gz"}
https://electronics.stackexchange.com/questions/29527/can-we-remove-the-neutral-wire
# Can we remove the neutral wire?

If we have the connection shown below (a star connection of the transformer secondary), and the load on each phase is equal (a balanced load), then the vector sum of I1, I2 and I3 is zero. In this case, I would like to ask whether we can remove the neutral wire from the connection, since no current will flow in it (theoretically at least). And if we can remove it, where is the return path (to form a closed circuit) for each phase current?

• The return path is distributed over the other two phases. If I1 is positive, then at least I2 or I3 has to be negative to make their sum (I1 + I2 + I3) zero. – stevenvh Apr 9 '12 at 15:15
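As a quick numerical supplement to that answer (not part of the original thread; amplitude and frequency are illustrative values), one can check that three equal-amplitude sinusoidal currents spaced 120° apart sum to zero at every instant:

    import math

    AMPLITUDE = 10.0   # peak phase current in amperes (illustrative)
    FREQ = 50.0        # mains frequency in Hz (illustrative)

    # Sample one full cycle and confirm the instantaneous sum of the
    # three phase currents is (numerically) zero for a balanced load.
    for k in range(8):
        t = k / (8 * FREQ)  # time within one cycle
        i1 = AMPLITUDE * math.sin(2 * math.pi * FREQ * t)
        i2 = AMPLITUDE * math.sin(2 * math.pi * FREQ * t - 2 * math.pi / 3)
        i3 = AMPLITUDE * math.sin(2 * math.pi * FREQ * t + 2 * math.pi / 3)
        print(f"t={t*1000:6.2f} ms  I1+I2+I3 = {i1 + i2 + i3:+.1e} A")

The sum stays at zero (up to floating-point rounding), which is why the neutral carries no current for a perfectly balanced load.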
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9194517135620117, "perplexity": 449.97499692107226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525973.56/warc/CC-MAIN-20190719012046-20190719034046-00340.warc.gz"}
http://tex.stackexchange.com/questions/162626/htlatex-converting-all-occurences-of-ff-and-fi-to-null-characters-in-html-ou
# htlatex converting all occurrences of 'ff' and 'fi' to null characters in HTML output

I'm running MiKTeX under Windows 7, trying to compile some LaTeX to HTML using htlatex. htlatex appears to be converting every instance of 'ff' and 'fi' to a NUL byte in the HTML. Was wondering if anybody had any insight! Here's the config file I'm using (some stuff I got on the internet that is probably the whole problem):

    \Preamble{xhtml}
    \Configure{HtmlPar}
    {\EndP\Tg<p>}
    {\EndP\Tg<p>}
    {\HCode{</p>\Hnewline}}
    {\HCode{</p>\Hnewline}}
    \Configure{emph}{\ifvmode\ShowPar\fi\HCode{<em>}}{\HCode{</em>}}
    \Configure{textbf}{\ifvmode\ShowPar\fi\HCode{<b>}}{\HCode{</b>}}
    \begin{document}
    \EndPreamble

And here is an example tex file that illustrates my problem:

    \documentclass{article}
    \begin{document}
    \section{Letters}
    \subsection{Valid Letters}
    AA Aa aA aa\\
    BB Bb bB bb\\
    CD Cd cD cd\\
    \subsection{Invalid Letters}
    FF Ff fF ff\\
    FI Fi fI fi\\
    \section{Strings}
    a string of text\\
    a fine string of text\\
    a definition of an efficient and fine string of text\\
    finally, the problem is solved!\\
    \end{document}

And here is the output I get after running the command htlatex example.tex MyFonts.cfg "xhtml, NoFonts, -css" -utf8 -shell-escape:

    1 Letters
    1.1 Valid Letters
    AA Aa aA aa
    BB Bb bB bb
    CD Cd cD cd
    1.2 Invalid Letters
    FF Ff fF
    FI Fi fI
    2 Strings
    a string of text
    a ne string of text
    a denition of an ecient and ne string of text
    nally, the problem is solved!

If I look in the HTML output, all occurrences of 'ff' and 'fi' have been replaced by the NUL character. Does anybody know why? Thanks!

- I saved the first snippet as MyFonts.cfg, but NoFonts gives an error; if I remove it, I get errors because of -utf8 and -shell-escape. –  egreg Feb 27 at 0:15
- The error seems to be caused by the -css argument... but why? Also, you can replicate the error (or I can) with just the following command: htlatex minimal.tex MyFonts.cfg "xhtml, -css" –  Robert Kelly Feb 27 at 0:20
- With that command line I get, for instance, "a fine string of text" in the .html file. –  egreg Feb 27 at 0:22
- Hm. For some reason the -css is killing it for me. –  Robert Kelly Feb 27 at 0:31

You compile it in the wrong way. The correct compiling order is:

    htlatex filename "tex4ht.sty opt" "tex4ht command opt" "t4ht command opt" "latex opt"

In your case this means this command:

    htlatex filename "MyFonts, NoFonts, -css" " -utf8" "" -shell-escape

This generates fine html for me. Some further notes: you request the -utf8 option for the tex4ht command, but you don't provide an option for unicode fonts, -cunihtf or -cmozhtf, so the generated html is in latin-1 encoding. The correct compile sequence for unicode is:

    htlatex filename "MyFonts, NoFonts, -css, charset=utf-8" " -utf8 -cunihtf" "" -shell-escape

Note that in this case ligatures are transformed to unicode characters, which maybe isn't what you want. You can simplify the process with my make4ht tool: you can move the options for tex4ht.sty to the cfg file:

    \Preamble{xhtml,NoFonts, -css}

so you don't need to specify them on the command line. Now you can simply call:

    make4ht -u -c MyFonts -s filename

Also note that the NoFonts option may be dangerous. If you add to your tex file

    \textbf{příliš} \emph{žluťoučký} \textit{ďábelské}

and \usepackage[utf8]{inputenc}, the generated html looks like:

    <b>Hello příliš</b> <em>žluťoučký</em> ďábelské

You can see that there are html tags for \textbf and \emph, because you provided configuration for these commands, but \textit doesn't produce any markup.
If you remove the NoFonts option, then due to a bug in the tex4ht command, a lot of unnecessary elements around accented characters are produced, which is probably the reason why you use NoFonts. To fix this issue, you can configure \textbf and \emph to turn html fonts off:

    \Configure{emph}{\ifvmode\ShowPar\fi\HCode{<em>}\NoFonts}{\EndNoFonts\HCode{</em>}}
    \Configure{textbf}{\ifvmode\ShowPar\fi\HCode{<b>}\NoFonts}{\EndNoFonts\HCode{</b>}}

But if you don't want to provide such configurations for all font-changing commands, you can use make4ht filters. Create a make file filename.mk4:

    local filter = require "make4ht-filter"
    local process = filter{"cleanspan", "fixligatures", "hruletohr"}
    Make:htlatex()
    Make:htlatex()
    Make:match("html$", process)

Three filters are used in this sample:

• cleanspan removes spurious span elements around accented letters
• fixligatures replaces ligatures with base characters
• hruletohr replaces a series of ----- characters with an <hr /> element

If you remove the NoFonts option, the generated html is now:

    <b>příliš</b> <em>žluťoučký</em> <span class="cmti-10">ďábelské </span>

So even though \textit doesn't produce semantically meaningful code, the text is italicized thanks to CSS.

- Thanks! Very informative and helpful answer –  Robert Kelly Feb 27 at 22:36
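An aside not from the original thread: since the missing characters are exactly the f-ligatures, another workaround sometimes used is to stop the ligatures from forming in the source before conversion. This is a sketch, assuming the microtype package is installed and a pdfTeX-based latex is in use (which \DisableLigatures requires):

    % Sketch: suppress f-ligatures so 'ff' and 'fi' stay as plain letter pairs.
    % Assumes the microtype package and a pdfTeX-based engine.
    \documentclass{article}
    \usepackage{microtype}
    \DisableLigatures{encoding = *, family = *}% turn off ligaturing everywhere
    \begin{document}
    a fine string of text, finally with ff and fi intact
    \end{document}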
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8425516486167908, "perplexity": 17448.00872829918}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931012025.85/warc/CC-MAIN-20141125155652-00015-ip-10-235-23-156.ec2.internal.warc.gz"}
https://control.com/forums/threads/interested.3363/
# Interested

#### Curt Wuollet

Hi Bill,

Welcome! What can you do? We need work on languages, testing, distribution building, documentation, drivers, applications; practically anything you want to do is available. The best place to start is understanding. The materials on the website will get you the background, and I'm sure we can find a way to apply your talents. We do seem to be having a problem getting people over the initial hump and on board so, I would be in your debt if you could tell me what I can do to get you onboard. We're programmers and somewhat lacking in people skills, I guess. To you and all the folks who have inquired recently, please don't be put off if I can't assign you neat packages of things to do. We really don't have a board or meetings to generate to do lists. I'm trying to do something like this but with volunteers it's like herding cats. Someone who can tell me how to mentor programmers into the group would be doing the project a great service. Please be patient and give us a second and maybe third chance. We do need you. I will keep all off-list communications strictly private.

Regards,
cww

_______________________________________________
LinuxPLC mailing list
[email protected]
http://linuxplc.org/mailman/listinfo/linuxplc

#### Jiri Baum

Curt Wuollet:
> Best place to start is understanding. The materials on the website will
> get you the background and I'm sure we can find a way to apply your
> talents.

I'm responding a bit late, but a good starting point is the recent article. You can get it from the CVS or on the web at: http://www.linuxplc.org/cgi-bin/viewcvs.cgi/~checkout~/doc/lplc-article.txt

> We do seem to be having a problem getting people over the initial hump
> and on board so, I would be in your debt if you could tell me what I can
> do to get you onboard.

Seconded.

> To you and all the folks who have inquired recently, please don't be put
> off if I can't assign you neat packages of things to do. We really don't
> have a board or meetings to generate to do lists.

At this stage, it's basically "anything you'd want in LPLC that isn't already written".

> I'm trying to do something like this but with volunteers it's like
> herding cats.

Have you tried a laser pointer?

Jiri
--
Jiri Baum <[email protected]>
Connect the power cable, interface cable and ground wire only in the methods indicated in the this manual. It may lead to fire. -- OKIPAGE 8z user's manual

_______________________________________________
LinuxPLC mailing list
[email protected]
http://linuxplc.org/mailman/listinfo/linuxplc
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26533353328704834, "perplexity": 1111.8312458486475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540511946.30/warc/CC-MAIN-20191208150734-20191208174734-00369.warc.gz"}
http://mathoverflow.net/questions/43805/when-is-g-isomorphic-to-g-times-g/44027
# When is $G$ isomorphic to $G \times G$?

Is there a finitely generated nontrivial group $G$ such that $G \cong G \times G$? Here are some properties which such a group $G$ has to satisfy:

• $G$ is not abelian (otherwise $G$ is a Noetherian $\mathbb{Z}$-module, and the composition of an isomorphism $G \cong G \times G$ with the first projection $G \times G \to G$ will be bijective, forcing $G$ to be trivial; see the expanded argument below).
• $G$ is perfect (apply the first observation to $G/G'$).

- Apart from the trivial group, I guess. –  wood Oct 27 '10 at 14:50
- $G$ must also not be residually finite (as a finitely generated residually finite group is Hopfian, i.e. has no isomorphic proper quotients). –  Jonathan Kiehlmann Nov 1 '10 at 17:59
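An expanded write-up of the abelian-case bullet above (my own filling-in of the standard Noetherian/Hopfian step, not part of the original post): suppose $G$ is finitely generated abelian and $\phi : G \to G \times G$ is an isomorphism. Let $\pi_1 : G \times G \to G$ be the first projection, so $\pi_1 \circ \phi : G \to G$ is a surjective endomorphism. A finitely generated abelian group is a Noetherian $\mathbb{Z}$-module, and a surjective endomorphism of a Noetherian module is injective, so $\pi_1 \circ \phi$ is bijective. But $\ker(\pi_1 \circ \phi) = \phi^{-1}(\{0\} \times G) \cong G$, which forces $G$ to be trivial.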
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8512717485427856, "perplexity": 446.2653955890133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115858580.32/warc/CC-MAIN-20150124161058-00190-ip-10-180-212-252.ec2.internal.warc.gz"}
http://www.intmath.com/blog/learn-math/comfort-stress-and-learning-in-math-classes-477
# Comfort, stress and learning in math classes

By Murray Bourne, 05 Dec 2006

Greater comfort doesn't result in better education (article no longer available), by Karen Utley of the Statesman Journal in Oregon, pushes the line that learning should involve some stress.

Researchers asked students to rate the pleasure factor of their math classes and compared the kids' responses with their standardized test scores. They discovered that students who characterized their math experiences as "scary" or "unpleasant" were performing better on the exams than those who described their classes as happy and comfortable.

She goes on to say:

Schools, like parents, have succumbed to the wishful optimism of advertising that promises to make learning -- through the latest method or game or computerized gadget -- convenient, pleasurable, comfortable and stress-free.

I am in two minds on this issue. There are thousands of students who end up very miserable in mathematics classes. Blame it on bad curriculum, bad teaching, bad scheduling, bad societal support, whatever. But we must cater for these stressed out students and try to reduce their hatred of mathematics. On the other hand, we know from research that a reasonable amount of stress is important for effective learning. (See my earlier article, Memory, stress, fish and sleep.) Some stress is important for everything in life, really. After all, getting up in the morning is most often about relieving hunger or bladder stress.

But back to Utley's article, I agree with her when she says:

Conversely, [students are] inspired to achievement when schools and parents insist on consistent effort and rigorous scholarship, because high expectations reassure students by saying, "We know it's hard -- but we know you can do it."

Yes, but let's not make it hard because we can do it, and let's not make it stressful because that's how we like to learn. One thing I think we should do is talk more with students about math stress and help them decide where on the stress curve they should aim for.

### 6 Comments on "Comfort, stress and learning in math classes"

1. John Atkins says:

Whether tragic events touch your family personally or are brought into your home via newspapers and television, you can help children cope with the anxiety that violence, death, and disasters can cause. Listening and talking to children about their concerns can reassure them that they will be safe. Start by encouraging them to discuss how they have been affected by what is happening around them. Even young children may have specific questions about tragedies. Children react to stress at their own developmental level. The Caring for Every Child's Mental Health Campaign offers these pointers for parents and other caregivers:

* Encourage children to ask questions. Listen to what they say. Provide comfort and assurance that address their specific fears. It's okay to admit you can't answer all of their questions.
* Talk on their level. Communicate with your children in a way they can understand. Don't get too technical or complicated.
* Find out what frightens them. Encourage your children to talk about fears they may have. They may worry that someone will harm them at school or that someone will try to hurt you.
* Focus on the positive. Reinforce the fact that most people are kind and caring. Remind your child of the heroic actions taken by ordinary people to help victims of tragedy.
* Pay attention. Your children's play and drawings may give you a glimpse into their questions or concerns.
Ask them to tell you what is going on in the game or the picture. It's an opportunity to clarify any misconceptions, answer questions, and give reassurance.
* Develop a plan. Establish a family emergency plan for the future, such as a meeting place where everyone should gather if something unexpected happens in your family or neighborhood. It can help you and your children feel safer.

If you are concerned about your child's reaction to stress or trauma, call your physician or a community mental health center.

2. james elsey says:

I am doing another maths degree with the OU @ age 75. The stress article is most illuminating. Cut-off dates, assignments and examinations seem to keep me sharp, so to speak. Computer difficulties with maths software can stress me to the point of despondency, although when you get them working smoothly they induce a harmless elation. My printers can take me through the whole spectrum of emotions. Keep the newsletter going; it has given me a whole range of extremely useful contacts.

3. Murray says:

Hi James and good to hear from you again. It's great that you are doing Open University courses. What math software are you using?

4. james elsey says:

I have just completed MST209 using Mathcad. Now I am registered to do M248 using Minitab. Thanks for your enquiry. Jim Elsey.

5. pallavi says:

It's an interesting fact that eustress activates learning. Being a teacher of Mathematics, I am interested in undertaking research on the effect of constructivism on students' stress and achievement. Please post me related articles if any are available.

6. Murray says:

Hi Pallavi. This Google search turns up plenty of interesting possibilities. Good luck with it!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22641980648040771, "perplexity": 4094.6945160643327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424945.18/warc/CC-MAIN-20170725002242-20170725022242-00625.warc.gz"}
https://2021.help.altair.com/2021/hwsolvers/acusolve/topics/acusolve/udfgetelmcnn_acusolve_udf.htm
# udfGetElmCnn()

Return the element connectivity.

## Syntax

elemCnn = udfGetElmCnn( udfHd ) ;

## Type

User Defined Element

## Parameters

udfHd
The opaque handle (pointer) which was passed to the user function.

## Return Value

elemCnn (Integer*)
Pointer to a two-dimensional integer array of element connectivity. The first (fastest) dimension of the array is the number of elements in the element set, nItems, and the second (slower) dimension is the number of element nodes, nElemNodes.

## Description

This routine returns the array of element connectivity. This is the array, without the first column, of the parameter elements of the command ELEMENT_SET in the input file. For example,

Integer* elemCnn ;
Integer  elemNode, elem, node, nElemNodes ;
...
nElemNodes = udfGetElmNElemNodes( udfHd ) ;  /* nodes per element */
elemCnn    = udfGetElmCnn( udfHd ) ;         /* connectivity array */
for ( elem = 0 ; elem < nItems ; elem++ ) {
    for ( elemNode = 0 ; elemNode < nElemNodes ; elemNode++ ) {
        /* The element index varies fastest, so node 'elemNode' of
           element 'elem' is stored at offset elemNode*nItems + elem. */
        node = elemCnn[elemNode*nItems+elem] ;
        ...
    }
}

## Errors

• This routine expects a valid udfHd.
• This routine may only be called within a Body Force, Material Model or Component Model user function.
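To make the storage order concrete, here is a small self-contained sketch (my own illustration with made-up sizes; it fills a toy array laid out the same way instead of calling the AcuSolve API):

#include <stdio.h>

/* Toy connectivity for 3 elements with 4 nodes each, stored with the
   element index varying fastest, as udfGetElmCnn() returns it. */
#define NITEMS      3   /* elements in the set (illustrative) */
#define NELEMNODES  4   /* nodes per element   (illustrative) */

int main(void) {
    int elemCnn[NITEMS * NELEMNODES];

    /* Fill with recognizable values: node j of element e gets 100*e + j. */
    for (int j = 0; j < NELEMNODES; j++)
        for (int e = 0; e < NITEMS; e++)
            elemCnn[j * NITEMS + e] = 100 * e + j;

    /* Read back element 1's nodes: they sit at offsets 1, 4, 7, 10. */
    for (int j = 0; j < NELEMNODES; j++)
        printf("element 1, node %d -> %d\n", j, elemCnn[j * NITEMS + 1]);
    return 0;
}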
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38259056210517883, "perplexity": 15614.357999898688}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500392.45/warc/CC-MAIN-20230207071302-20230207101302-00784.warc.gz"}
https://curriculum.illustrativemathematics.org/HS/students/2/1/4/index.html
# Lesson 4 Construction Techniques 2: Equilateral Triangles

• Let’s identify what shapes are possible within the construction of a regular hexagon.

### 4.1: Notice and Wonder: Circles Circles Circles

What do you notice? What do you wonder?

### 4.2: What Polygons Can You Find?

Here is a straightedge and compass construction of a regular hexagon inscribed in a circle just before the last step of drawing the sides:

1. Use the polygon tool (the one that looks like a triangle) to draw at least 2 polygons on the figure. The vertices of your polygon should be intersection points in the figure. Shade in your polygons using different colors to make them easier to see. Use the style bar to change the color.

### 4.3: Spot the Equilaterals

Use straightedge and compass moves to construct at least 2 equilateral triangles of different sizes.

1. Examine the figure carefully. What different shapes is it composed of? Be specific.
2. Figure out how to construct the figure with a compass and straightedge.
3. Then, cut it out, and see if you can fold it up into a container like this.

### Summary

The straightedge allows us to construct lines and line segments, and the compass allows us to make circles with a specific radius. With these tools, we can reason about distances to explain why certain shapes have certain properties. For example, when we construct a regular hexagon using circles of the same radius, we know all the sides have the same length because all the circles are the same size. (A quick arithmetic check of this claim appears after the glossary.) The hexagon is called inscribed because it fits inside the circle and every vertex of the hexagon is on the circle.

Similarly, we could use the same construction to make an inscribed triangle. If we connect every other point around the center circle, it forms an equilateral triangle. We can conjecture that this triangle has 3 congruent sides and 3 congruent angles because the entire construction seems to stay exactly the same whenever it is rotated $$\frac{1}{3}$$ of a full turn around the center.

### Glossary Entries

• circle
A circle of radius $$r$$ with center $$O$$ is the set of all points that are a distance $$r$$ units from $$O$$. To draw a circle of radius 3 and center $$O$$, use a compass to draw all the points at a distance 3 from $$O$$.
• conjecture
A reasonable guess that you are trying to either prove or disprove.
• inscribed
We say a polygon is inscribed in a circle if it fits inside the circle and every vertex of the polygon is on the circle. We say a circle is inscribed in a polygon if it fits inside the polygon and every side of the polygon is tangent to the circle.
• line segment
A set of points on a line with two endpoints.
• parallel
Two lines that don't intersect are called parallel. We can also call segments parallel if they extend into parallel lines.
• perpendicular bisector
The perpendicular bisector of a segment is a line through the midpoint of the segment that is perpendicular to it.
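A quick supplement to the Summary’s side-length claim (my own check, using the standard chord-length formula: a chord subtending a central angle $$\theta$$ in a circle of radius $$r$$ has length $$2r\sin(\theta/2)$$):

$$\text{hexagon side} = 2r\sin\left(\frac{2\pi/6}{2}\right) = 2r\sin\frac{\pi}{6} = r, \qquad \text{triangle side} = 2r\sin\left(\frac{2\pi/3}{2}\right) = 2r\sin\frac{\pi}{3} = r\sqrt{3}.$$

Since each expression is the same for every side, all six hexagon sides are congruent, and all three triangle sides are congruent.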
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5531815886497498, "perplexity": 391.9127640058415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359073.63/warc/CC-MAIN-20211130201935-20211130231935-00182.warc.gz"}
http://catalog.flatworldknowledge.com/bookhub/reader/2223?e=cooperecon-ch10_s01
## 10.1 A Walk Down Wall Street

### Learning Objectives

1. What are the different types of assets traded in financial markets?
2. What can you earn by owning an asset?
3. What risks do you face?

Wall Street in New York City is the financial capital of the United States. There are other key financial centers around the globe: Shanghai, London, Paris, Hong Kong, and many other cities. These financial centers are places where traders come together to buy and sell assets. Beyond these physical locations, opportunities for trading assets abound on the Internet as well. We begin the chapter by describing and explaining some of the most commonly traded assets.

Ownership of an asset gives you the right to some future benefit or a stream of benefits. Very often, these benefits come in the form of monetary payments; for example, ownership of a stock gives you the right to a share of a firm’s profits. Sometimes, these benefits come in the form of a flow of services: ownership of a house gives you the right to enjoy the benefits of living in it.

## Stocks

One of the first doors you find on Wall Street is called the stock exchange. The stock exchange is a place where, as the name suggests, stocks are bought and sold. A stock (or share) is an asset that comes in the form of (partial) ownership of a firm. The owners of a firm’s stock are called the shareholders of that firm because the stock gives them the right to a share of the firm’s profits. More precisely, shareholders receive payments whenever the board of directors of the firm decides to pay out some of the firm’s profits in the form of dividends: payments from a firm to its shareholders based on the firm’s profits. Some firms, for example a small family firm like a corner grocery store, are privately owned. This means that the shares of the firm are not available for others to purchase. Other firms are publicly traded, which means that anyone is free to buy or sell their stocks. In many cases, particularly for large firms such as Microsoft Corporation or Nike, stocks are bought and sold on a minute-by-minute basis. You can find information on the prices of publicly traded stocks in newspapers or on the Internet.
## Stock Market Indices

Most often, however, we hear not about individual stock prices but about baskets of stocks. The most famous basket of stocks is called the Dow Jones Industrial Average (DJIA). Each night of the week, news reports on the radio and television and newspaper stories tell whether the value of the DJIA increased or decreased that day. The DJIA is more than a century old (it started in 1896) and is a bundle of 30 stocks representing some of the most significant firms in the US economy. Its value reflects the prices of these stocks. Very occasionally, one firm will be dropped from the index and replaced with another, reflecting changes in the economy. You can learn more about the DJIA if you go to NYSE Euronext, “Dow Jones Industrial Average,” accessed March 14, 2011, http://www.nyse.com/marketinfo/indexes/dji.shtml.

Figure 10.3 “The DJIA: October 1928 to July 2007.” This figure shows the closing prices for the DJIA between 1928 and 2010.

Figure 10.3 “The DJIA: October 1928 to July 2007” shows the Dow Jones Industrial Average from 1928 to 2011. Over that period, the index rose from about 300 to about 12,500, which is an average growth rate of about 4.5 percent per year. You can see that this growth was not smooth, however. There was a big decrease at the very beginning, known as the stock market crash of 1929. There was another very significant drop in October 1987. Even though the 1929 crash looks smaller than the 1987 decrease, the 1929 crash was much more severe. In 1929, the stock market lost about half its value and took many years to recover. In 1987, the market lost only about 25 percent of its value and recovered quite quickly.

One striking feature of Figure 10.3 “The DJIA: October 1928 to July 2007” is the very rapid growth in the DJIA in the 1990s and the subsequent decrease around the turn of the millennium. The 1990s saw the so-called Internet boom, when there was a lot of excitement about new companies taking advantage of new technologies. Some of these companies, such as Amazon, went on to be successful, but most others failed. As investors came to recognize that most of these new companies would not make money, the market fell in value. There was another rise in the market during the 2000s, followed by a substantial fall during the global financial crisis that began around 2008. Very recently, the market has recovered again.

If these ups and downs in the DJIA were predictable, it would be easy to make money on Wall Street. Suppose you knew the DJIA would increase 10 percent next month. You would buy the stocks in the average now, hold them for a month, and sell them for an easy 10 percent profit. If you knew the DJIA would decrease next month, you could still make money. If you currently owned DJIA stocks, you could sell them and then buy them back after the price decreased. Even if you don’t own these stocks right now, there is still a way of selling first and buying later. You can sell (at today’s high price) a promise to deliver the stocks in a month’s time. Then you buy the stocks after the price has decreased. This is called a forward sale. If this sounds as if it is too easy a way to make money, that’s because it is. The ups and downs in the DJIA are not perfectly predictable, so there are no easy profit opportunities of the kind we just described. We have more to say about this later in the chapter.

Although the DJIA is the most closely watched stock market index, many others are also commonly reported.
The Standard and Poor’s 500 (S&P 500) is another important index. As the name suggests, it includes 500 firms, so it is more representative than the DJIA. If you want to understand what is happening to stock prices in general, you are better off looking at the S&P 500 than at the DJIA. The Nasdaq is another index, consisting of the stocks traded in an exchange that specializes in technology-based firms.

We mentioned earlier that the DJIA has increased by almost 5 percent per year on average since 1928. On the face of it, this seems like a fairly respectable level of growth. Yet we must be careful. The DJIA and other indices are averages of stock prices, which are measured in dollar terms. To understand what has happened to the stock market in real terms, we need to adjust for inflation. Between 1928 and 2007, the price level rose by 2.7 percent per year on average. The average growth in the DJIA, adjusted for inflation, was thus 4.8 percent − 2.7 percent = 2.1 percent.

## The Price of a Stock

As a shareholder, there are two ways in which you can earn income from your stock. First, as we have explained, firms sometimes choose to pay out some of their income in the form of dividends. If you own some shares and the company declares it will pay a dividend, either you will receive a check in the mail or the company will automatically reinvest your dividend and give you extra shares. But there is no guarantee that a company will pay a dividend in any given year.

The second way you can earn income is through capital gains. Suppose you own a stock whose price has gone up. If that happens, you can, if you want, sell your stock and make a profit on the difference between the price you paid for the stock and the higher price you sold it for. Capital gains are the income you obtain from the increase in the price of an asset. (If the asset decreases in value, you instead incur a capital loss.)

To see how this works, suppose you buy, for $100.00, a single share of a company whose stock is trading on an exchange. In exchange for $100.00, you now have a piece of paper indicating that you own a share of a firm. After a year has gone by, imagine that the firm declares it will pay out dividends of $6.00 per share. Also, at the end of the year, suppose the price of the stock has increased to $105.00. You decide to sell at that price. So with your $100.00, you received $111.00 at the end of the year, for an annual return of 11 percent (a $6.00 dividend plus a $5.00 capital gain on a $100.00 purchase):

$\frac{6.00 + 5.00}{100.00} = 0.11 = 11\%.$

(We have used the term return a few times. We will give a more precise definition of this term later. At present, you just need to know that it is the amount you obtain, in percentage terms, from holding an asset for a year. A short code sketch of this calculation, together with the exchange-rate example later in this section, appears after the Key Takeaways.)

Suppose that a firm makes some profits but chooses not to pay out a dividend. What does it do with those funds? They are called retained earnings and are normally used to finance business operations. For example, a firm may take some of its profits to build a new factory or buy new machines. If a firm is being managed well, then those expenditures should allow a firm to make higher profits in the future and thus be able to pay out more dividends at a later date. Presuming once again that the firm is well managed, retained earnings should translate into extra dividends that will be paid in the future. Furthermore, if people expect that a firm will pay higher dividends in the future, then they should be willing to pay more for shares in that firm today.
This increase in demand for a firm’s shares will cause the share price to increase. So if a firm earns profits but does not pay a dividend, you should expect to get some capital gain instead. We come back to this idea later in the chapter and explain more carefully the connection between a firm’s dividend payments and the price of its stock.

## The Riskiness of Stocks

Figure 10.3 “The DJIA: October 1928 to July 2007” reminds us that stock prices decrease as well as increase. If you choose to buy a stock, it is always possible its price will fall, in which case you suffer a capital loss rather than obtain a capital gain. The riskiness of stocks comes from the fact that the underlying fortunes of a firm are uncertain. Some firms are successful and earn high profits, which means that they are able to pay out large dividends, either now or in the future. Other firms are unsuccessful, through either bad luck or bad management, and do not pay dividends. Particularly unsuccessful firms go bankrupt; shares in such a firm become close to worthless. When you buy a share in a firm, you have the chance to make money, but you might lose money as well.

## Bonds

Wall Street is also home to many famous financial institutions, such as Morgan Stanley, Merrill Lynch, and many others. These firms act as the financial intermediaries that link borrowers and lenders. If desired, you could use one of these firms to help you buy and sell shares on the stock exchange. You can also go to one of these firms to buy and sell bonds. A bond is a promise to make cash payments (the coupons) to a bondholder at predetermined dates (such as every year) until the maturity date. At the maturity date, a final payment of principal and interest is made to the bondholder. Firms and governments that are raising funds issue bonds. A firm may wish to buy some new machinery or build a new plant, so it needs to borrow to finance this investment. Or a government might issue bonds to finance the construction of a road or a school.

The easiest way to think of a bond is that it is the asset associated with a loan. Here is a simple example. Suppose you loan a friend $100 for a year at a 6 percent interest rate. This means that the friend has agreed to pay you $106 a year from now. Another way to think of this agreement is that you have bought, for a price of $100, an asset that entitles you to $106 in a year’s time. More generally (as the definition makes clear), a bond may entitle you to an entire schedule of repayments.

## The Riskiness of Bonds

Bonds, like stocks, are risky.

• The coupon payments of a bond are almost always specified in dollar terms. This means that the real value of these payments depends on the inflation rate in an economy. Higher inflation means that the value of a bond has less worth in real terms.
• Bonds, like stocks, are also risky because of the possibility of bankruptcy. If a firm borrows money but then goes bankrupt, bondholders may end up not being repaid. The extent of this risk depends on who issues the bond. Government bonds usually carry a low risk of bankruptcy. It is unlikely that a government will default on its debt obligations, although it is not impossible: Iceland, Ireland, Greece, and Portugal, for example, have recently been at risk of default.
In the case of bonds issued by firms, the riskiness obviously depends on the firm. An Internet start-up firm operated from your neighbor’s garage is more likely to default on its loans than a company like the Microsoft Corporation. There are companies that evaluate the riskiness of firms; the ratings provided by these companies have a tremendous impact on the cost that firms incur when they borrow.

Inflation does not have the same effect on stocks as it does on bonds. If prices increase, then the fixed nominal payments of a bond unambiguously become less valuable. But if prices increase, firms will typically set higher nominal prices for their products, earn higher nominal profits, and pay higher nominal dividends. So inflation does not, in and of itself, make stocks less valuable. You can review the meaning and calculation of the inflation rate in the toolkit.

One way to see the differences in the riskiness of bonds is to look at the cost of issuing bonds for different groups of borrowers. Generally, the rate at which the US federal government can borrow is much lower than the rate at which corporations borrow. As the riskiness of corporations increases, so does the return they must offer to compensate investors for this risk.

## Real Estate and Cars

As you continue to walk down the street, you are somewhat surprised to see a real estate office and a car dealership on Wall Street. (But this is a fictionalized Wall Street, so why not?) Real estate is another kind of asset. Suppose, for example, that you purchase a home and then rent it out. The rental payments you receive are analogous to the dividends from a stock or the coupon payments on a bond: they are a flow of money you receive from ownership of the asset. Real estate, like other assets, is risky. The rent you can obtain may increase or decrease, and the price of the home can also change over time. The fact that housing is a significant, and risky, financial asset became apparent in the global financial crisis that began in 2007. There were many aspects of that crisis, but an early trigger of the crisis was the fact that housing prices decreased in the United States and around the world.

If you buy a home and live in it yourself, then you still receive a flow of services from your asset. You don’t receive money directly, but you receive money indirectly because you don’t have to pay rent to live elsewhere. You can think about measuring the value of the flow of services as rent you are paying to yourself. Our fictional Wall Street also has a car dealership, not only because all the financial traders need somewhere convenient to buy their BMWs but also because cars, like houses, are an asset. They yield a flow of services, and their value is linked to that service flow.

## The Foreign Exchange Market

Further down the street, you see a small store listing a large number of different three-letter symbols: BOB, JPY, CAD, EUR, NZD, SEK, RUB, SOS, ADF, and many others. Stepping inside to inquire, you learn that, in this store, they buy and sell foreign currencies. (These three-letter symbols are the currency codes established by the International Organization for Standardization (http://www.iso.org/iso/home.htm). Most of the time, the first two letters refer to the country, and the third letter is the initial letter of the currency unit. Thus, in international dealings, the US dollar is referenced by the symbol USD.) Foreign currencies are another asset, and a simple one to understand.
The return on foreign currency depends on how the exchange rate changes over the course of a year. The (nominal) exchange rate is the price of one currency in terms of another. For example, if it costs US$2 to purchase €1, then the exchange rate for these two currencies is 2. An exchange rate can be looked at in two directions. If the dollar price of a euro is 2, then the euro price of a dollar is 0.5: with €0.5, you can buy US$1.

Suppose that the exchange rate this year is US$2 to the euro, and suppose you have US$100. You buy €50 and wait a year. Now suppose that next year the exchange rate is US$2.15 to the euro. With your €50, you can purchase US$107.50 (because US$(50 × 2.15) = US$107.50). Your return on this asset is 7.5 percent. Holding euros was a good investment because the dollar became less valuable relative to the euro. Of course, the dollar might increase in value instead. Holding foreign currency is risky, just like holding all the other assets we have considered. The currency market is also discussed in Chapter 8 "Why Do Prices Change?".

The foreign exchange market brings together suppliers and demanders of different currencies in the world. In these markets, one currency is bought using another. The law of demand holds: as the price of a foreign currency increases, the quantity demanded of that currency decreases. Likewise, as the price of a foreign currency increases, the quantity supplied of that currency increases. Exchange rates are determined just like other prices, by the interaction of supply and demand. At the equilibrium exchange rate, the quantity of the currency supplied equals the quantity demanded. Shifts in the supply or demand for a currency lead to changes in the exchange rate. You can review the foreign exchange market and the exchange rate in the toolkit.

## Foreign Assets

Having recently read about the large returns on the Shanghai stock exchange and having seen that you can buy Chinese currency (the yuan, which has the international code CNY), you might wonder whether you can buy shares on the Shanghai stock exchange. In general, you are not restricted to buying assets in your home country. After all, there are companies and governments around the world who need to finance projects of various forms. Financial markets span the globe, so the bonds issued by these companies and governments can be purchased almost anywhere. You can buy shares in Australian firms, Japanese government bonds, or real estate in Italy. (Some countries have restrictions on asset purchases by noncitizens; for example, it is not always possible for foreigners to buy real estate. But such restrictions notwithstanding, the menu of assets from which you can choose is immense.) Indeed, television, newspapers, and the Internet report on the behavior of both US stock markets and those worldwide, such as the FTSE 100 on the London stock exchange, the Hang Seng index on the Hong Kong stock exchange, the Nikkei 225 index on the Tokyo stock exchange, and many others.

You could buy foreign assets from one of the big financial firms that you visited earlier. It will be happy to buy foreign stocks or bonds on your behalf. Of course, if you choose to buy stocks or bonds associated with foreign companies or governments, you face all the risks associated with buying domestic stocks and bonds. The dividends are uncertain, there might be inflation in the foreign country, the price of the asset might change, and so on. In addition, you face exchange rate risk.
If you purchase a bond issued in Mexico, you don’t know what exchange rate you will face in the future for converting pesos to your home currency. You may feel hesitant about investing in other countries. You are not alone in this. Economists have detected something they call home bias: all else being equal, investors are more likely to buy assets issued by corporations and governments in their own country rather than abroad.

## A Casino

Toward the end of your walk, you are particularly surprised to see a casino. Stepping inside, you see a casino floor, such as you might find in Las Vegas, Monaco, or Macau near Hong Kong. You are confronted with a vast array of betting opportunities. The first one you come across is a roulette wheel. The rules are simple enough. You place your chip on a number. After the wheel is spun, you win if, and only if, you guessed the number that is called. There is no skill, only luck.

Nearby are the blackjack tables, where a version of 21 is played. In contrast to roulette, blackjack requires some skill. As a gambler in blackjack, you have to make choices about taking cards or not. The objective is to get cards whose sum is as high as possible without going over 21. If you do go over 21, you lose. If the dealer goes over 21 and you don’t, you win. If neither of you goes over 21, then the winner is the one with the highest total. There is skill involved in deciding whether or not to take a card. There is also a lot of luck involved through the draw of the cards.

You always thought of stocks and bonds as serious business. Yet, as you watch the players on the casino floor, you come to realize that it might not be so peculiar to see a casino on Wall Street. Perhaps there are some similarities between risking money at a gambling table and investing in stocks, bonds, or other assets. As this chapter progresses, you will see that there are some similarities between trading in financial assets and gambling in a casino. But you will learn that there are important differences as well.

### Key Takeaways

• Many different types of assets, such as stocks, bonds, real estate, and foreign currency, are traded in financial markets.
• Your earnings from owning an asset depend on the type of asset. If you own a stock, then you are paid dividends and also receive a capital gain or incur a capital loss from selling the asset. If you own real estate, then you have a flow of rental payments from the property and also receive a capital gain or incur a capital loss from selling the asset.
• Risks also depend on the type of asset. If you own a bond issued by a company, then you bear the risk of that company going bankrupt and being unable to pay off its debt.
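As a supplement to the two worked examples in this section (the stock return and the euro exchange-rate return), here is a minimal sketch of both calculations; the function names are my own, not from the text:

    def stock_return(purchase_price, sale_price, dividend):
        """One-year return on a stock: dividend plus capital gain, over cost."""
        return (dividend + sale_price - purchase_price) / purchase_price

    def fx_return(dollar_price_now, dollar_price_later):
        """One-year return on holding a foreign currency, where each argument
        is the US dollar price of one unit of that currency."""
        return dollar_price_later / dollar_price_now - 1

    # The chapter's numbers: buy at $100.00, get a $6.00 dividend, sell at $105.00.
    print(f"stock: {stock_return(100.00, 105.00, 6.00):.0%}")  # prints 11%

    # Buy euros at US$2.00 each; a year later a euro is worth US$2.15.
    print(f"euros: {fx_return(2.00, 2.15):.1%}")               # prints 7.5%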
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.15415045619010925, "perplexity": 1288.2282839969132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698493317/warc/CC-MAIN-20130516100133-00069-ip-10-60-113-184.ec2.internal.warc.gz"}
https://quant.stackexchange.com/users/2299/lehalle?tab=topactivity
lehalle

24 academic papers about market making
24 How can I go about applying machine learning algorithms to stock markets?
23 Book on market microstructure
23 Multilayer Perceptron (Neural Network) for Time Series Prediction
21 What exactly is meant by “microstructure noise”?

### Reputation (8,632)

+30 What is the best alternative of Quantlib library
+10 Meta-view of different time-series similarity measures?
+10 Why non-stationary data cannot be analyzed?
+10 How to use a realized kernel?

### Question (1)

12 Strategy Risk and Portfolio Allocation Model (copy from nuclear phynance)

### Tags (170)

249 market-microstructure × 40
151 high-frequency × 25
83 trading × 16
83 market-making × 8
70 volatility × 11
61 machine-learning × 5
56 time-series × 14
55 high-frequency-estimators × 5
54 equities × 16
50 prediction × 4

### Bookmarks (55)

284 What data sources are available online?
135 How can I go about applying machine learning algorithms to stock markets?
52 Paradoxes in quantitative finance
31 How to calculate historical intraday volatility?
30 Have Goldman Sachs Quantitative Strategies Research Notes been published as a book or a comprehensive collection?

### Accounts (21)

Quantitative Finance: 8,632 rep, 1 gold badge, 36 silver badges, 67 bronze badges
TeX - LaTeX: 158 rep, 5 bronze badges
Stack Overflow: 101 rep, 3 bronze badges
Cross Validated: 101 rep, 2 bronze badges
Mathematics: 101 rep, 2 bronze badges
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8140373826026917, "perplexity": 10540.933409147854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107900200.97/warc/CC-MAIN-20201028162226-20201028192226-00260.warc.gz"}
http://www.cs.utexas.edu/node/47849
# Research Preparation Examination: Pravesh Kothari, GDC 4.816

Contact Name: Lydia Griffith
Date: Oct 11, 2013
Time: 11:00 am - 12:00 pm

We study the complexity of approximate representation and learning of submodular functions over the uniform distribution on the Boolean hypercube. Our main result is the following structural theorem: any submodular function is $\epsilon$-close in $\ell_2$ to a real-valued decision tree (DT) of depth $O(1/\epsilon^2)$. This immediately implies that any submodular function is $\epsilon$-close to a function of at most $2^{O(1/\epsilon^2)}$ variables (independent of the ambient dimension) and has a spectral $\ell_1$ norm of $2^{O(1/\epsilon^2)}$. It also implies the closest previous result, which states that submodular functions can be approximated by polynomials of degree $O(1/\epsilon^2)$ (Cheraghchi et al., 2012). Our result is proved by constructing an approximation of a submodular function by a DT of rank $4/\epsilon^2$ and a proof that any rank-$r$ DT can be $\epsilon$-approximated by a DT of depth $\frac{5}{2}(r+\log(1/\epsilon))$. We show that these structural results can be exploited to give an attribute-efficient PAC learning algorithm for submodular functions running in time $\tilde{O}(n^2) \cdot 2^{O(1/\epsilon^{4})}$. The best previous algorithm for the problem requires $n^{O(1/\epsilon^{2})}$ time and examples (Cheraghchi et al., 2012), but it works also in the agnostic setting. In addition, we give improved learning algorithms for a number of related settings. We also prove that our PAC and agnostic learning algorithms are essentially optimal via two lower bounds: (1) an information-theoretic lower bound of $2^{\Omega(1/\epsilon^{2/3})}$ on the complexity of learning monotone submodular functions in any reasonable model; (2) a computational lower bound of $n^{\Omega(1/\epsilon^{2/3})}$ based on a reduction to learning of sparse parities with noise, widely believed to be intractable. These are the first lower bounds for learning of submodular functions over the uniform distribution.
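To connect the two stated bounds (my own arithmetic, just combining the numbers in the abstract): applying the depth bound to the rank-$4/\epsilon^2$ tree gives

$\text{depth} \le \frac{5}{2}\left(\frac{4}{\epsilon^{2}} + \log\frac{1}{\epsilon}\right) = \frac{10}{\epsilon^{2}} + \frac{5}{2}\log\frac{1}{\epsilon} = O\!\left(\frac{1}{\epsilon^{2}}\right),$

which recovers the $O(1/\epsilon^2)$ depth in the structural theorem.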
https://hpmuseum.org/forum/thread-17440-post-166264.html
FORTH for the SHARP PC-E500 (S) 10-13-2022, 11:30 PM Post: #81 Helix Member Posts: 251 Joined: Dec 2013 RE: FORTH for the SHARP PC-E500 (S) (09-16-2022 06:54 PM)robve Wrote:  Also included in Forth500 2.0 is a new text editor "TED". TED.FTH is located in the Forth500 additions folder. With TED you can interactively write, edit and run Forth code in Forth500: Code: TEDI MYWORK.FTH ↲ ↲                           \ start editing (press enter) .( TED is great!) ↲        \ a line of Forth (press enter to save) [CCE]                       \ end editing and read MYWORK.FTH TED is great! I've tried TED, and it works perfectly! It's very easy now to write Forth definitions without being linked to a PC, which is the charm of these pocket calculators after all. I've also tried TLOAD, but it requires more manipulations than TED. So I can confirm: TED is great! Jean-Charles 11-06-2022, 04:15 AM (This post was last modified: 11-06-2022 01:57 PM by robve.) Post: #82 robve Senior Member Posts: 360 Joined: Sep 2020 RE: FORTH for the SHARP PC-E500 (S) Recently started a new Forth project. This time for the Sharp PC-850(V)(S). - Rob "I count on old friends" -- HP 71B,Prime|Ti VOY200,Nspire CXII CAS|Casio fx-CG50...|Sharp PC-G850,E500,2500,1500,14xx,13xx,12xx... 11-06-2022, 01:20 PM Post: #83 rprosperi Super Moderator Posts: 5,543 Joined: Dec 2013 RE: FORTH for the SHARP PC-E500 (S) (11-06-2022 04:15 AM)robve Wrote:  Recently started a new Forth project. This time for the Sharp PC-850(V)(S)... @rob - I'd move this to a new thread, dedicated to the 850V version, it will make it easier to find for folks with that machine. --Bob Prosperi 11-14-2022, 12:55 AM Post: #84 Helix Member Posts: 251 Joined: Dec 2013 RE: FORTH for the SHARP PC-E500 (S) (11-06-2022 04:15 AM)robve Wrote:  Recently started a new Forth project. This time for the Sharp PC-850(V)(S). Rob, In the Forth850 thread, you presented an interesting example on how to use machine code inside Forth definitions. I've not investigated this question (I own a G850VS, but now I'm busy enough with the E500S ), but is the same technique possible with Forth500, just writing HEX codes? Jean-Charles 11-14-2022, 01:18 AM Post: #85 robve Senior Member Posts: 360 Joined: Sep 2020 RE: FORTH for the SHARP PC-E500 (S) (11-14-2022 12:55 AM)Helix Wrote: (11-06-2022 04:15 AM)robve Wrote:  Recently started a new Forth project. This time for the Sharp PC-850(V)(S). Rob, In the Forth850 thread, you presented an interesting example on how to use machine code inside Forth definitions. I've not investigated this question (I own a G850VS, but now I'm busy enough with the E500S ), but is the same technique possible with Forth500, just writing HEX codes? Writing an assembler for the ESR-L in Forth would take some time, something I'm short of. Using the PC-G850's Assembler with Forth850 is cheap: it's already there. I didn't have to write one or find a Z80 assembler to integrate with Forth850. Note that Forth500 is a lot more powerful than Forth850 (thanks to the fact that the E500 is a professional machine with reasonably powerful FCS and IOCS system), so you're not missing out on anything really, except speed perhaps. I'm not sure how many folks would actually use an ESR-L assembler in Forth500 besides you and me? I hope I'm mistaken, but our world with our "toys" is pretty small. I will keep updating Forth500 and Forth850 (as well as other side projects.) But I tend to move quickly between things to do. 
I recently acquired two PC-1600s that piqued my interest, one close to NIB with 2 64K RAM modules and one with a CE-1600 printer. Forth850 may as well be ported to that PC too. Now that I've said it, I probably can't get that out of my head... oh no - Rob "I count on old friends" -- HP 71B,Prime|Ti VOY200,Nspire CXII CAS|Casio fx-CG50...|Sharp PC-G850,E500,2500,1500,14xx,13xx,12xx... 11-14-2022, 02:17 AM (This post was last modified: 11-14-2022 06:35 PM by Helix.) Post: #86 Helix Member Posts: 251 Joined: Dec 2013 RE: FORTH for the SHARP PC-E500 (S) (11-14-2022 01:18 AM)robve Wrote:  Writing an assembler for the ESR-L in Forth would take some time, something I'm short of. My question was not about an assembler. Here is the section that interested me: (11-13-2022 10:15 PM)robve Wrote:  Alternatively, the BEEP word can also be defined with HEX codes in Forth850 as follows, which takes more effort but with the same result: Code: NFA, BEEP       ( -- )   HEX   F3 C,         \       di              ;   F3 C,         \       di              ; disable interrupts   21 C, 0 ,     \       ld hl,0000h     ;   AF C,         \       xor a           ;   D3 C, 18 C,   \ loop: out (18h),a     ; loop, out audio port   D2 C,         \ wait: dec l           ;   loop   20 C, FD C,   \       jr nz,wait      ;   until --l=0   2F C,         \       cpl             ; switch on/off   25 C,         \       dec h           ;   20 C, F7 C,   \       jr nz,loop      ; until --h=0   FB C,         \       ei              ; enable interrupts   FD C, E9 C,   \       jp (iy)         ; next   DECIMAL       \ 18 bytes Here you don't use the built-in assembler, so I was only asking if the same thing is possible in Forth500. But I think I know the answer now. (11-14-2022 01:18 AM)robve Wrote:  I will keep updating Forth500 and Forth850 (as well as other side projects.) But I tend to move quickly between things to do. I recently acquired two PC-1600s that piqued my interest, one close to NIB with 2 64K RAM modules and one with a CE-1600 printer. Forth850 may as well be ported to that PC too. Now that I've said it, I probably can't get that out of my head... oh no Ha ha! I don't own a PC-1600, but I have a TI-92 Plus and a TI Voyage 200. I see in your signature that you also have a TI Voyage 200… A Forth for these machines would be great too. Jean-Charles « Next Oldest | Next Newest » User(s) browsing this thread: 1 Guest(s)
https://www.physicsforums.com/threads/a-little-help-on-some-physics-questions.645892/
# A little help on some physics questions

1. Oct 21, 2012

### venture

Are velocity and displacement vector quantities or scalar quantities? If an object is going up, is that when its acceleration is negative, and when that same object is coming back down, does its acceleration become positive? If a car is going downhill, is its acceleration/velocity negative?

2. Oct 21, 2012

### bossman27

They are definitely vector quantities, even though a lot of times in simple problems, such as an object falling, they might appear to be scalar since there's only one dimension in question.

No, you're confusing acceleration with velocity, and then some. If you're talking about throwing an object in the air, the acceleration is a constant. The acceleration is the rate of change of velocity, and is dependent only on force. The force of gravity is constant, thus acceleration due to gravity is constant. You can define the positive and negative directions whichever way you want, but when the object is going up the velocity is (positive or negative) and when it's falling back down, the velocity is the opposite sign.

This is a similar problem in which it totally depends on whether you define up or down to be positive. It really doesn't matter, as long as it's consistent throughout the problem.

3. Oct 21, 2012

### Angry Citizen

Velocity is a vector. Displacement is a scalar, since it measures the distance from some origin. You can define your coordinate system in any way you choose. But acceleration is constantly pointed in whatever direction you consider "down", so long as no other force but gravity acts.

Again, you are a (budding) physicist here. The best thing about being a physicist is that you get to create the framework to solve the problem in whatever way you deem fit. You can define whatever coordinate system you want, so long as it is right handed and each direction is perpendicular to the other two directions. If you think it makes the math easier to say that a car going downhill has a negative y component, then that is your freedom. You can even define a coordinate system such that the car is not traveling downhill at all, but is instead moving along an axis you've defined. It may not necessarily be ideal to do so, but sometimes it can be. Point being, you have total freedom here, so both yes and no are correct answers.

4. Oct 21, 2012

### Staff: Mentor

Welcome to the PF, venture. Please re-read the Rules link at the top of the page. You are supposed to show your own attempt at solving the questions before you get tutorial help. Unfortunately, your homework has been done by a couple of our helpers this time...
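To make the sign-convention point concrete, here is the standard one-dimensional example (added for illustration): take "up" as positive for a ball thrown straight upward with initial speed $v_0$. Then

$$v(t) = v_0 - gt, \qquad a = -g \ \text{throughout the flight},$$

so the acceleration keeps the same sign on the way up and on the way down; only the velocity changes sign at the top. Choosing "down" as positive simply flips every sign consistently: $v(t) = -v_0 + gt$ and $a = +g$.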
https://stats.meta.stackexchange.com/questions/1529/can-my-question-be-moved-to-crossvalidated-please
# Can my question be moved to CrossValidated please?

Can my question be moved to CrossValidated please? I have flagged the question on SO and asked for it to be moved here. Reason: the package author is active on CV but not SO. He has kindly given some advice about this issue in comments on another question of mine on CV, but I fear that if I post a new question on CV it might be closed as a duplicate/cross post.

No offence, but the only valid reason for migration is when a question is off-topic on the source SE site and on-topic on the target SE site. If you want to reach the package author directly, it is a better idea to send an e-mail than to chase him across the SE network with off-topic posts.

• Thanks, I understand what you are saying, but I don't think it's completely off-topic here, as it may be a statistical problem to do with the data itself rather than a programming problem. I have tried emailing him before, but didn't get any response. He seems happy to comment on the problem in the comments on the unrelated question on CV. Is it a crazy idea to migrate it here and hope to get his input, and later migrate it back to SO if it turns out to be a programming problem? ;) Feb 2 '13 at 15:36
• Robert, it all depends on what the question is. The one you reference very clearly is off topic here and on topic at SO, because it asks for R techniques to diagnose a crash. If you do have evidence that this is data-related and can introduce additional information about exactly how the data might be causing such a problem, you might be able to put the question in a statistical manner, which would make it on-topic here. – whuber Mod Feb 2 '13 at 17:20
• @whuber I do have evidence that it is data-related in the sense that it crashes with certain subsets of the data and not with others (I think I mentioned that in the post), and I am working on generating small enough anonymised datasets that can be posted [related question: is there a recommended way to host data which is too big to use dput()?]. Also, I wasn't specifically asking for R techniques to diagnose the crash - just generally "how can I diagnose the problem". Feb 2 '13 at 17:39
• In this case it is difficult to separate the R programming issues with diagnosis from any statistical issues. What statistical techniques of diagnosis might you have in mind? – whuber Mod Feb 2 '13 at 17:40
• @whuber possibly issues to do with small cluster sizes and/or specifying the predictor matrix (off the top of my head), but I was hoping Stef van Buuren would be able to weigh in. To be honest it doesn't seem to be a problem with R programming, because it runs fine with exactly the same code (and data structure) on other subsets of the data. Feb 2 '13 at 17:46
• I see you're getting somewhere interesting, but how are we supposed to help with this if you don't make these concerns explicit in your question and unless you provide sample data? – whuber Mod Feb 2 '13 at 18:00
• @whuber I am going to attempt to provide sample data shortly in a major edit to the question. The problem is that it's far too big to use dput() - is it OK to host it externally? I fear that if I cut it down so far that I can use dput() (that is, assuming it still crashes on such a small dataset) it won't be representative of the actual data. I have now managed to obtain a small-ish sample (~2800 obs on 17 vars) which still crashes, and I just need to completely anonymise/jitter it. Feb 2 '13 at 18:18
• Yes, hosting externally is a good idea. – whuber Mod Feb 2 '13 at 18:28
• @whuber OK - done... hope it's an improvement...
Feb 2 '13 at 19:37
http://math.usf.edu/research/seminars/analysis/fall19/
Department of Mathematics & Statistics

# Analysis (Leader: Prof. Dmitry Khavinson <dkhavins (at) usf.edu>)

## Friday, November 22, 2019

Title: A Zagier-type formula for special multiple Hurwitz zeta values
Speaker: Cezar Lupu (Texas Tech University)
Time: 4:00pm–5:00pm
Place: CMC 130

Abstract

In this talk, we provide a Zagier-type formula for the multiple $t$-values (special Hurwitz zeta values),
\begin{gather*} t\left(k_{1},k_{2},\dotsc,k_{r}\right)=2^{-\left(k_{1}+k_{2}+\dotsb+k_{r}\right)}\zeta\left(k_{1},k_{2},\dotsc,k_{r};-\frac{1}{2},-\frac{1}{2},\dotsc,-\frac{1}{2}\right) \\ =\sum_{1\le n_{1}< n_{2}<\dotsb< n_{r}}\frac{1}{\left(2n_{1}-1\right)^{k_{1}}\left(2n_{2}-1\right)^{k_{2}}\dotsm\left(2n_{r}-1\right)^{k_{r}}}. \end{gather*}
Our formula is similar to Zagier's formulas for the MZVs $$\zeta(2,\dotsc,2,3)$$ and will involve $$\mathbb{Q}$$-linear combinations of powers of $$\pi$$ and odd zeta values. The derivation of the formula for $$t(2,\dotsc,2,3)$$ relies on a rational zeta series approach via a Gauss hypergeometric argument.

## Friday, September 20, 2019

Title: Stahl's theorem on a Riemann surface, Part III
Speaker: E. A. Rakhmanov
Time: 4:00pm–5:00pm
Place: CMC 130

## Friday, September 13, 2019

Title: Stahl's theorem on a Riemann surface, Part II
Speaker: E. A. Rakhmanov
Time: 4:00pm–5:00pm
Place: CMC 130

## Friday, September 6, 2019

Title: Stahl's theorem on a Riemann surface
Speaker: E. A. Rakhmanov
Time: 4:00pm–5:00pm
Place: CMC 130

Abstract

H. Stahl's theorem on convergence of Padé approximants for analytic functions with branch points is one of the fundamental results in the theory of rational approximation of analytic functions. It is also one of the basic facts in the theory of orthogonal polynomials. Denominators of Padé approximants are complex (non-Hermitian) orthogonal polynomials, and Stahl created an original method of investigating the asymptotics of complex orthogonal polynomials based directly on complex orthogonality.
https://neuron.yale.edu/neuron/static/py_doc/programming/gui/texteditor.html
# TextEditor

class TextEditor

Syntax:

e = h.TextEditor()
e = h.TextEditor(string)
e = h.TextEditor(string, rows, columns)

Description: For editing or displaying multiline text. Default is 5 rows, 30 columns.

Warning: At this time there are no scroll bars and not much functionality. Mouse editing and emacs-style editing work.

TextEditor.text()

Syntax:

string = e.text()
string = e.text(string)

Description: Returns the text of the TextEditor in a strdef. If the argument exists, replaces the text by the string and returns the new text (string).

TextEditor.readonly()

Syntax:

boolean = e.readonly()
boolean = e.readonly(boolean)

Description: Returns True if the TextEditor is in read-only mode. Returns False if text entry by the user is allowed. Change the mode with the argument form using False (or 0) or True (or 1). Prior to NEURON 7.6, this method returned 0 or 1 instead of False or True.

TextEditor.map()

Syntax:

e.map()
e.map(title)
e.map(title, left, bottom, width, height)

Description: Map the text editor onto the screen at the indicated coordinates with the indicated title bar.

Note: title is a string.
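A minimal usage sketch from Python (assuming a standard NEURON installation; only the methods documented above are used):

```python
from neuron import h  # NEURON's HOC/GUI interface

# A 10-row, 40-column editor pre-loaded with two lines of text.
e = h.TextEditor("Notes on the model\nSecond line", 10, 40)

e.readonly(False)       # allow the user to edit the contents
e.map("Session notes")  # show the window with this title bar

# Later, retrieve whatever the user typed.
print(e.text())
```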
https://www.physicsforums.com/threads/find-the-value-of-p-and-q-that-make-the-function-continuous.597943/
# Homework Help: Find the value of p and q that make the function continuous

1. Apr 18, 2012

### egc

1. The problem statement, all variables and given/known data

Find the values of p and q that make the function continuous.

2. Relevant equations

f(x) = x − 2 if x ≥ 2, $\sqrt{p-x^{2}}$ if −2 < x < 2, q − x if x ≤ −2

3. The attempt at a solution

lim f(x) = x − 2 as x → 2⁺
lim f(x) = q − x as x → −2⁻

I really have no idea how to continue; the teacher never explained this and I have a test tomorrow. Please help!

Last edited: Apr 18, 2012

2. Apr 18, 2012

### egc

Does q = −2 and p = 4?
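For what it's worth, a quick check of that guess (matching the one-sided limits at the two break points):

$$\lim_{x\to 2^-}\sqrt{p-x^2}=\sqrt{p-4} \quad\text{must equal}\quad \lim_{x\to 2^+}(x-2)=0 \;\Rightarrow\; p=4,$$

$$\lim_{x\to -2^+}\sqrt{4-x^2}=0 \quad\text{must equal}\quad \lim_{x\to -2^-}(q-x)=q+2 \;\Rightarrow\; q=-2.$$

So yes, $p = 4$ and $q = -2$ make all three pieces join continuously.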
https://www.physicsforums.com/threads/how-to-determine-if-vortex-shedding-will-occur.394410/
# How to determine if vortex shedding will occur?

1. Apr 11, 2010

### kentigens

With given object dimensions and speed moving inside a fluid, how do I decide if vortex shedding will occur? It is sometimes obvious just from looking at the speed and dimensions: you can say it will not occur because the object is going too slowly, or vice versa. Is there a formula or some mathematical criterion to predict vortex shedding?

Thank you, Kent

2. Apr 11, 2010

### Cyrus

Hi Kent, I believe the parameter you are interested in is the Strouhal number. However, there isn't going to be a number where you can automatically say 'shedding occurs'. Just like you cannot say above a certain Reynolds number flow is turbulent (a common misconception many students have when posting here, because textbooks say turbulent flow sets in at about Re = 15k for pipe flow). What you can say, though, is that if your model and your prototype have the same Strouhal number, then they will both encounter shedding, at a particular frequency. You would have to find the prototype shedding frequency by solving for the equality of dynamic similitude. It should just go as the ratio of L/V, as per the definition.

Last edited: Apr 11, 2010

3. Apr 19, 2010

### kentigens

Thank you Cyrus. I know the Strouhal number, and it only gives information provided that vortex shedding does occur. But sometimes when making an engineering design, hmmm... don't we need to take vortex shedding into consideration, instead of making a prototype and seeing if there's any presence of vortex shedding??? And by the way, any suggested materials I can have a read of??

Thank you, Kent

4. Apr 19, 2010

### Cyrus

Of course you do, but that does not mean you can calculate when and where it will occur. That's why we put things in the wind tunnel, or run CFD analysis. Typically, this is done on a model (experimentally), or at prototype Reynolds number (computational). If you are concerned about vortex shedding, my recommendation is to look for papers on similar designs and use those as a historical guideline.
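For reference, the Strouhal number mentioned above is defined (standard definition, added here for completeness) as

$$\mathrm{St} = \frac{f\,L}{V},$$

where $f$ is the vortex-shedding frequency, $L$ is a characteristic length such as the cylinder diameter, and $V$ is the flow speed. For a circular cylinder, $\mathrm{St} \approx 0.2$ over a wide range of Reynolds numbers, which is how a shedding frequency is usually estimated in practice.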
http://mathhelpforum.com/advanced-algebra/72404-matrices-square-matrix.html
# Math Help - Matrices, Square matrix

1. ## Matrices, Square matrix

(a) Prove that the transpose of the sum of two matrices A and B is the sum of their transposes.

(b) Verify this is so when $A = \begin{pmatrix}1 & 2 & 3\\4 & 5 & 6\end{pmatrix}$ and $B = \begin{pmatrix}4 & 6 & 8\\5 & 7 & 9\end{pmatrix}$

(c) Prove that if A is a square matrix then $S = \tfrac{1}{2}(A - A^T)$ is skew-symmetric.

(d) Verify that this S is skew-symmetric when $A = \begin{pmatrix}4 & 3\\2 & 1\end{pmatrix}$

(e) Prove that if A is a square matrix with complex elements, then $H = \tfrac{1}{2}(A + A^*)$ is Hermitian.

(f) Verify that this H is Hermitian when $A = \begin{pmatrix}1+i & 4-3i\\2-i & 3+2i\end{pmatrix}$

Any help with any of these questions would be greatly appreciated!

2. ## Transpose of a matrix

Hello mr_motivator

(a) Denote the elements in row $i$, column $j$ of the matrices $A$ and $B$ by $a_{i,j}$ and $b_{i,j}$. In the matrix sum $A+B$, the element in row $i$, column $j$ is therefore $a_{i,j} + b_{i,j}$. This is then the element in row $j$, column $i$ of $(A+B)^T$, which is the sum of the elements in row $j$, column $i$ of $A^T$ and $B^T$. This is true for all valid $i$ and $j$. Hence $(A+B)^T = A^T + B^T$.

(b) is a simple matter of checking this out.

$A+B = \begin{pmatrix}5 & 8 & 11\\9 & 12 & 15\end{pmatrix}$ $\Rightarrow (A+B)^T = \begin{pmatrix}5 & 9\\8 & 12\\11 & 15\end{pmatrix}$ = etc...

3. (c) Prove that if A is a square matrix then S = 1/2 (A - A^t) is skew-symmetric.

You have to prove: $S^T = -S$

Use these 3 properties:
• $(cA)^T = cA^T$
• $(A+B)^T = A^T + B^T$
• $\left(A^T\right)^T = A$

So:
\begin{aligned} S^{T} & = \left[\frac{1}{2} \left(A - A^{T}\right)\right]^{T} \\ & = \frac{1}{2}\left(A - A^T\right)^{T} \\ & = \cdots \end{aligned}

I'm sure you can finish off. For (d), just compute $S$ and check that $S^T = -S$.

4. ## Hermitian Matrices

Hello mr_motivator

A matrix is Hermitian if its transpose is also its conjugate. In other words, if $(a_{i,j})^* = a_{j,i}$ for all valid $i, j$. We have to prove that $\tfrac{1}{2}(A + (A^T)^*)$ is Hermitian.

Now suppose in matrix $A$, $a_{i,j}$ is written $p + iq$ and $a_{j,i}$ is written $r + is$. Then the element in row $i$, column $j$ of $\tfrac{1}{2}(A + (A^T)^*)$ is

$\tfrac{1}{2}(a_{i,j} + (a_{j,i})^*) = \tfrac{1}{2}[(p+iq)+(r-is)] = \tfrac{1}{2}[(p+r) + (q-s)i]$ (1)

and the element in row $j$, column $i$ of $\tfrac{1}{2}(A + (A^T)^*)$ is

$\tfrac{1}{2}(a_{j,i} + (a_{i,j})^*) = \tfrac{1}{2}[(r+is)+(p-iq)] = \tfrac{1}{2}[(p+r) + (s-q)i]$

which is the conjugate of the element in (1). This then is the required proof.

I think you can complete part (f) now.
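A quick numerical check of parts (b), (d) and (f) (a sketch using NumPy, with the matrices given in the problem):

```python
import numpy as np

# (b) transpose of a sum equals the sum of the transposes
A = np.array([[1, 2, 3], [4, 5, 6]])
B = np.array([[4, 6, 8], [5, 7, 9]])
assert np.array_equal((A + B).T, A.T + B.T)

# (d) S = (1/2)(A - A^T) is skew-symmetric: S^T = -S
C = np.array([[4, 3], [2, 1]])
S = 0.5 * (C - C.T)
assert np.array_equal(S.T, -S)

# (f) H = (1/2)(A + A^*) is Hermitian: H equals its conjugate transpose
D = np.array([[1 + 1j, 4 - 3j], [2 - 1j, 3 + 2j]])
H = 0.5 * (D + D.conj().T)
assert np.array_equal(H, H.conj().T)

print("all checks pass")
```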
https://pub.uni-bielefeld.de/publication/2337407
# THE STATES OF MATTER IN QCD. (TALK)

Satz, Helmut (1982). Journal Article | Published | English. No fulltext has been uploaded; references only.
https://crypto.stackexchange.com/questions/42427/what-is-the-difference-between-regular-and-twisted-ecc-curves
# What is the difference between regular and "twisted" ECC curves?

When I do:

openssl ecparam -list_curves

I get, among other entries:

brainpoolP512r1: RFC 5639 curve over a 512 bit prime field
brainpoolP512t1: RFC 5639 curve over a 512 bit prime field

Apparently the "t" means it is a twisted ECC curve. Is this slightly more secure or slightly less secure? I'd rather give up a few milliseconds of performance than give up any security margin.

• Twisted curves are isomorphic and therefore have the same security strength. – user27950 Dec 20 '16 at 13:22
• @MaartenBodewes Behind the hot water pipes ... youtube.com/watch?v=52a7QbLr4ys – DepressedDaniel Dec 20 '16 at 20:25

TL;DR: There is no difference.

Given an elliptic curve $E$ defined over $\mathbb{F}_p$ for some prime $p$, we say that a second curve $E_t$ defined over $\mathbb{F}_p$ is a twist of $E$ when $E_t\cong E$. That is, when there is an isomorphism between $E_t$ and $E$, defined over $\bar{\mathbb{F}}_p$ (the algebraic closure of $\mathbb{F}_p$). From this we can conclude that every curve is a twisted curve, as every curve is isomorphic to itself. Thus the distinction between a regular curve and a twisted curve is nonsensical; there is no difference.

You may be left wondering why we care about twists in the first place. Well, it turns out that given some curve $E/\mathbb{F}_p$, in some cases one can force a user to work on some twist of $E$ instead (there could be many twists). This twist could have different security properties (notice the definition of twist with respect to $E$), which could in turn lead to an attack. This is an example of a so-called invalid-curve attack.

Edit: Note that ${\tt brainpoolPXXXr1}$ and ${\tt brainpoolPXXXt1}$ are trivial twists of each other (see Definition 9.5.1). That means that their security properties are essentially the same. The reason why both these curves are specified is that ${\tt brainpoolPXXXr1}$ is pseudo-randomly generated (and therefore supposedly leaves its designers unable to plant backdoors), yet has a large, essentially random curve parameter $A$. By specifying ${\tt brainpoolPXXXt1}$, which has $A=-3$, we can make some improvements in the curve arithmetic, making operations more efficient (see EFD).

• Could you explain (more clearly) why the twisted curves are named for brainpool in the first place? – Maarten Bodewes Dec 20 '16 at 14:17
• That is actually quite simple, and has nothing to do with security. In fact, the standard does not seem to mention twist attacks at all. They pseudo-randomly generate some curves which satisfy some pre-defined properties (chapter 3 of the standard), and one of the requirements is that a curve $E$ (i.e. brainpoolPXXXr1) is isomorphic over $\mathbb{F}_p$ to a curve $E_t$ with $A=-3$ (i.e. brainpoolPXXXt1). This isomorphic curve is a twist, and the reason we want $A=-3$ is because the arithmetic simplifies. – CurveEnthusiast Dec 20 '16 at 14:42
• OK, now for the last step, the reason for this is that this simplification allows for fast implementation of the curve, right? I mean the question is directly about the twisted curves as mentioned in the brainpool page. You correctly specify why there is no security difference, but it still doesn't really explain why the named curve is listed in the first place. – Maarten Bodewes Dec 20 '16 at 15:40
• I'm not sure if I understand you correctly. The twist is named precisely because it allows for fast curve arithmetic, and the document does not seem to mention any other reason (as far as I can see). See tools.ietf.org/html/rfc5639 section 2.2.3.
– CurveEnthusiast Dec 20 '16 at 16:15 • Looks like you understand me, I just tried to make the information in the comments part of the answer. And although I have extensively worked with these curves I'd rather have the answer be provided by a certain CurveEnthusiast :) – Maarten Bodewes Dec 20 '16 at 18:18
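For concreteness, here is the standard short-Weierstrass statement of a trivial twist (a textbook fact, added for reference rather than quoted from the thread): two curves over $\mathbb{F}_p$,

$$E: y^2 = x^3 + Ax + B \qquad\text{and}\qquad E': y^2 = x^3 + u^4A\,x + u^6B \quad (u \in \mathbb{F}_p^\times),$$

are isomorphic over $\mathbb{F}_p$ itself via $(x, y) \mapsto (u^2x, u^3y)$. This is the relation RFC 5639 uses to pair each ${\tt r1}$ curve with its ${\tt t1}$ counterpart, with $u$ chosen so that $u^4A \equiv -3 \pmod p$, which is why their security properties coincide.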
https://galileo.phys.virginia.edu/classes/252/Classical_Waves/Classical_Waves.html
# Classical Wave Equations

Michael Fowler, University of Virginia

## Introduction

The aim of this section is to give a fairly brief review of waves in various shaped elastic media: beginning with a taut string, then going on to an elastic sheet, a drumhead, first of rectangular shape then circular, and finally considering elastic waves on a spherical surface, like a balloon. The reason we look at this material here is that these are "real waves", hopefully not too difficult to think about, and yet mathematically they are the solutions of the same wave equation the Schrödinger wave function obeys in various contexts, so should be helpful in visualizing solutions to that equation, in particular for the hydrogen atom.

We begin with the stretched string, then go on to the rectangular and circular drumheads. We derive the wave equation from $F=ma$ for a little bit of string or sheet. The equation corresponds exactly to the Schrödinger equation for a free particle with the given boundary conditions.

The most important section here is the one on waves on a sphere. We find the first few standing wave solutions. These waves correspond to Schrödinger's wave function for a free particle on the surface of a sphere. This is what we need to analyze to understand the hydrogen atom, because using separation of variables we split the electron's motion into radial motion and motion on the surface of a sphere. The potential only affects the radial motion, so the motion on the sphere is free particle motion, described by the same waves we find for vibrations of a balloon. (There is the generalization to complex non-standing waves, parallel to the one-dimensional extension from $\sin kx$ and $\cos kx$ to $e^{ikx}$ and $e^{-ikx}$, but this does not affect the structure of the equations.)

## Waves on a String

Let's begin by reminding ourselves of the wave equation for waves on a taut string, stretched between $x=0$ and $x=L$, tension $T$ newtons, density $\rho$ kg/meter. Assuming the string's equilibrium position is a straight horizontal line (and, therefore, ignoring gravity), and assuming it oscillates in a vertical plane, we use $f(x,t)$ to denote its shape at instant $t$, so $f(x,t)$ is the instantaneous upward displacement of the string at position $x$. We assume the amplitude of oscillation remains small enough that the string tension can be taken constant throughout.

The wave equation is derived by applying $F=ma$ to an infinitesimal length $dx$ of string (see the diagram below). We picture our little length of string as bobbing up and down in simple harmonic motion, which we can verify by finding the net force on it as follows.

At the left hand end of the string fragment, point $x$, say, the tension $T$ is at a small angle $df(x)/dx$ to the horizontal, since the tension acts necessarily along the line of the string. Since it is pulling to the left, there is a downward force component $T\,df(x)/dx$. At the right hand end of the string fragment there is an upward force $T\,df(x+dx)/dx$. Putting $f(x+dx) = f(x) + (df/dx)\,dx$, and adding the almost canceling upwards and downwards forces together, we find a net force $T(d^2f/dx^2)\,dx$ on the bit of string. The string mass is $\rho\,dx$, so $F=ma$ becomes

$$T\frac{\partial^2 f(x,t)}{\partial x^2}\,dx = \rho\,dx\,\frac{\partial^2 f(x,t)}{\partial t^2}$$

giving the standard wave equation

$$\frac{\partial^2 f(x,t)}{\partial x^2} = \frac{1}{c^2}\frac{\partial^2 f(x,t)}{\partial t^2}$$

with wave velocity given by $c^2 = T/\rho$. (A more detailed discussion is given in my Physics 152 Course, plus an animation here.)
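For later reference, with both ends held fixed, $f(0,t) = f(L,t) = 0$, the standing-wave solutions are the familiar normal modes (a standard result, quoted here without derivation):

$$f_n(x,t) = A\sin\frac{n\pi x}{L}\cos\left(\frac{n\pi c}{L}t\right), \qquad n = 1, 2, 3, \dots$$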
This wave equation can of course be solved by separation of variables, $f(x,t) = f(x)g(t)$, and the equation for $f(x)$ is identical to the time independent Schrödinger equation for a particle confined to $(0,L)$ by infinitely high walls at the two ends. This is why the eigenfunctions (states of definite energy) for a Schrödinger particle confined to $(0,L)$ are identical to the modes of vibration of a string held between those points. (However, it should be realized that the time dependence of the string wave equation and the Schrödinger time-dependent equation are quite different, so a nonstationary state, one corresponding to a sum of waves of different energies, will develop differently in the two systems.)

## Waves on a Rectangular Drumhead

Let us now move up to two dimensions, and consider the analogue to the taut string problem, which is waves in a taut horizontal elastic sheet, like, say, a drumhead. Let us assume a rectangular drumhead to begin with. Then, parallel to the argument above, we would apply $F=ma$ to a small square of elastic with sides parallel to the $x$ and $y$ axes. The tension from the rest of the sheet tugs along all four sides of the little square, and we realize that tension in a sheet of this kind must be defined in newtons per meter, so the force on one side of the little square is given by multiplying this "tension per unit length" by the length of the side.

Following the string analysis, we take the vertical displacement of the sheet at instant $t$ to be given by $f(x,y,t)$. We assume this displacement is quite small, so the tension itself doesn't vary, and that each bit of the sheet oscillates up and down (the sheet is not tugged to one side). Suppose the bottom left-hand corner (so to speak) of the square is $(x,y)$, the top right-hand corner $(x+dx, y+dy)$. Then the left and right edges of the square have lengths $dy$. Now, what is the total force on the left edge? The force is $T\,dy$, in the local plane of the sheet, perpendicular to the edge $dy$. Factoring in the slope of the sheet in the direction of the force, the vertically downward component of the force must be $T\,dy\,\partial f(x,y,t)/\partial x$. By the same argument, the force on the right hand edge has to have an upward component $T\,dy\,\partial f(x+dx,y,t)/\partial x$.

Thus the net upward force on the little square from the sheet tension tugging on its left and right sides is

$$T\,dy\left(\frac{\partial f(x+dx,y,t)}{\partial x} - \frac{\partial f(x,y,t)}{\partial x}\right) = T\,dy\,dx\left(\frac{\partial^2 f}{\partial x^2}\right).$$

The net vertical force from the sheet tension on the other two sides is the same with $x$ and $y$ interchanged.

The mass of the little square of elastic sheet is $\rho\,dx\,dy$, and its upward acceleration is $\partial^2 f/\partial t^2$. Thus $F=ma$ becomes:

$$T\,dy\,dx\left(\frac{\partial^2 f}{\partial x^2}\right) + T\,dy\,dx\left(\frac{\partial^2 f}{\partial y^2}\right) = \rho\,dx\,dy\left(\frac{\partial^2 f}{\partial t^2}\right),$$

giving

$$\left(\frac{\partial^2 f}{\partial x^2}\right) + \left(\frac{\partial^2 f}{\partial y^2}\right) = \frac{1}{c^2}\left(\frac{\partial^2 f}{\partial t^2}\right),$$

with $c^2 = T/\rho$. This equation can be solved by separation of variables, and the time independent part is identical to the Schrödinger time independent equation for a free particle confined to a rectangular box.
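For a drumhead clamped along its edges at $x = 0, a$ and $y = 0, b$, the separable standing-wave modes are (again a standard result, stated for comparison with the particle-in-a-box eigenfunctions):

$$f_{mn}(x,y,t) = A\sin\frac{m\pi x}{a}\sin\frac{n\pi y}{b}\cos(\omega_{mn}t), \qquad \omega_{mn} = c\pi\sqrt{\frac{m^2}{a^2}+\frac{n^2}{b^2}}, \quad m,n = 1,2,3,\dots$$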
## Waves on a Circular Drumhead

A similar argument gives the wave equation for a circular drumhead, this time in $(r,\varphi)$ coordinates (we use $\varphi$ rather than $\theta$ here because of its parallel role in the spherical case, to be discussed shortly).

This time, instead of a tiny square of elastic, we take the small area $r\,dr\,d\varphi$ bounded by the circles of radius $r$ and $r+dr$ and lines through the origin at angles $\varphi$ and $\varphi+d\varphi$.

Now, the downward force from the tension $T$ in the sheet on the inward curved edge, which has length $r\,d\varphi$, is $T\,r\,d\varphi\,\partial f(r,\varphi,t)/\partial r$. On putting this together with the upward force from the other curved edge, it is important to realize that the $r$ in $T\,r\,d\varphi$ varies as well as $\partial f/\partial r$ on going from $r$ to $r+dr$, so the sum of the two terms is $T\,d\varphi\,(\partial/\partial r)(r\,\partial f/\partial r)\,dr$. To find the vertical elastic forces from the straight sides, we need to find how the sheet slopes in the direction perpendicular to those sides. The measure of length in that direction is not $\varphi$, but $r\varphi$, so the slope is $(1/r)(\partial f/\partial\varphi)$, and the net upward elastic force contribution from those sides (which have length $dr$) is $T\,dr\,d\varphi\,(\partial/\partial\varphi)\left((1/r)(\partial f/\partial\varphi)\right)$.

Writing $F=ma$ for this small area of elastic sheet, of mass $\rho\,r\,dr\,d\varphi$, gives then

$$T\,d\varphi\,\frac{\partial}{\partial r}\left(r\frac{\partial f}{\partial r}\right)dr + T\,dr\,d\varphi\,\frac{\partial}{\partial\varphi}\left(\frac{1}{r}\frac{\partial f}{\partial\varphi}\right) = \rho\,r\,dr\,d\varphi\,\frac{\partial^2 f}{\partial t^2}$$

which can be written

$$\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial f}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2 f}{\partial\varphi^2} = \frac{1}{c^2}\frac{\partial^2 f}{\partial t^2}.$$

This is the wave equation in polar coordinates. Separation of variables gives a radial equation called Bessel's equation; the solutions are called Bessel functions. The corresponding electron standing waves have actually been observed for an electron captured in a circular corral on a surface.
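To see where Bessel's equation comes from (a standard separation, included for reference): writing $f(r,\varphi,t) = R(r)\cos m\varphi\,\cos\omega t$ and putting $k = \omega/c$, the polar wave equation above reduces to

$$r^2\frac{d^2R}{dr^2} + r\frac{dR}{dr} + \left(k^2r^2 - m^2\right)R = 0,$$

whose solution regular at the origin is the Bessel function $R(r) = J_m(kr)$; the allowed frequencies of a drumhead of radius $a$ then follow from the rim condition $J_m(ka) = 0$.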
## Waves on a Spherical Balloon

Finally, let us consider elastic waves on the surface of a sphere, such as an inflated spherical balloon. The natural coordinate system here is spherical polar coordinates, with $\theta$ measuring latitude, but counting the north pole as zero, the south pole as $\pi$. The angle $\varphi$ measures longitude from some agreed origin.

We take a small elastic element bounded by longitude lines $\varphi$ and $\varphi+d\varphi$ and latitudes $\theta$ and $\theta+d\theta$. For a sphere of radius $r$, the sides of the element have lengths $r\sin\theta\,d\varphi$ and $r\,d\theta$. Beginning with one of the longitude sides, length $r\,d\theta$, tension $T$, the only slightly tricky point is figuring its deviation from the local horizontal, which is $(1/r\sin\theta)(\partial f/\partial\varphi)$, since increasing $\varphi$ by $d\varphi$ means moving an actual distance $r\sin\theta\,d\varphi$ on the surface, just analogous with the circular case above. Hence, by the usual method, the actual "vertical" force from tension on the two longitude sides is

$$T\,r\,d\theta\,d\varphi\,\frac{\partial}{\partial\varphi}\left(\frac{1}{r\sin\theta}\frac{\partial f}{\partial\varphi}\right).$$

To find the force on the latitude sides, taking the top one first, the slope is given by $(1/r)(\partial f/\partial\theta)$, so the force is just $T\,r\sin\theta\,d\varphi\,(1/r)(\partial f/\partial\theta)$. On putting this together with the opposite side, it is necessary to recall that $\sin\theta$ as well as $f$ varies with $\theta$, so the sum is given by $T\,r\,d\varphi\,d\theta\,(\partial/\partial\theta)\left(\sin\theta\,(1/r)(\partial f/\partial\theta)\right)$. We are now ready to write down $F=ma$ once more; the mass of the element is $\rho r^2\sin\theta\,d\theta\,d\varphi$. Canceling out elements common to both sides of the equation, we find:

$$\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial f}{\partial\theta}\right) + \frac{1}{\sin^2\theta}\frac{\partial^2 f}{\partial\varphi^2} = \frac{r^2}{c^2}\frac{\partial^2 f}{\partial t^2}.$$

Again, this wave equation is solved by separation of variables. The time-independent solutions are called the Legendre functions. They are the basis for analyzing the vibrations of any object with spherical symmetry, for example a planet struck by an asteroid, or vibrations in the sun generated by large solar flares, or the cosmic background microwave radiation.

## Simple Solutions to the Spherical Wave Equation

Recall that for the two dimensional circular case, after separation of variables the angular dependence was all in the solution to $\partial^2 f/\partial\varphi^2 = -\lambda f$, and the physical solutions must fit smoothly around the circle (no kinks, or it would not satisfy the wave equation at the kink), leading to solutions $\sin m\varphi$ and $\cos m\varphi$ (or $e^{im\varphi}$) with $m$ an integer, and $\lambda = m^2$ (this is why we took $\lambda$ with a minus sign in the first equation).

For the spherical case, the equation containing all the angular dependence is

$$\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial f}{\partial\theta}\right) + \frac{1}{\sin^2\theta}\frac{\partial^2 f}{\partial\varphi^2} = -\lambda f$$

The standard approach here is, again, separation of variables. Taking the first term on the left hand side over to the right, and multiplying throughout by $\sin^2\theta$, isolates the $\varphi$ term:

$$\frac{\partial^2 f}{\partial\varphi^2} = -\sin^2\theta\left(\lambda f + \frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial f}{\partial\theta}\right)\right)$$

Writing now $f(\theta,\varphi) = f_\theta(\theta)\,f_\varphi(\varphi)$ in the above equation, and dividing throughout by $f$, we find as usual that the left hand side depends only on $\varphi$, the right hand side only on $\theta$, so both sides must be constants. Taking the constant as $-m^2$, the $\varphi$ solution is $e^{\pm im\varphi}$, and one can insert that in the $\theta$ equation to give

$$\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial f_\theta}{\partial\theta}\right) - \frac{m^2}{\sin^2\theta}f_\theta = -\lambda f_\theta$$

What about possible solutions that don't depend on $\varphi$? The equation would be the simpler

$$\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial f}{\partial\theta}\right) = -\lambda f$$

Obviously, $f = \text{constant}$ is a solution (for $m = 0$) with eigenvalue $\lambda = 0$.

Try $f = \cos\theta$. It is easy to check that this is a solution with $\lambda = 2$ (a one-line verification is given at the end of this section).

Try $f = \sin\theta$. This is not a solution. In fact, we should have realized it cannot be a solution to the wave equation by visualizing the shape of the elastic sheet near the north pole. If $f = \sin\theta$, $f = 0$ at the pole, but rises linearly (for small $\theta$) going away from the pole. Thus the pole is at the bottom of a conical valley. But this conical valley amounts to a kink in the elastic sheet: the slope of the sheet has a discontinuity if one moves along a line passing through the pole, so the shape of the sheet cannot satisfy the wave equation at that point. This is somewhat obscured by working in spherical coordinates centered there, but locally the north pole is no different from any other point on the sphere; we could just switch to local $(x,y)$ coordinates, and the cone configuration would clearly not satisfy the wave equation.

However, $f = \sin\theta\sin\varphi$ is a solution to the equation. It is a worthwhile exercise to see how the $\varphi$ term gets rid of the conical point at the north pole by considering the value of $f$ as the north pole is approached for various values of $\varphi$: $\varphi = 0, \pi/2, \pi, 3\pi/2$, say. The sheet is now smooth at the pole! We find $f = \sin\theta\cos\varphi$, $\sin\theta\sin\varphi$ (and so $\sin\theta\,e^{i\varphi}$) are solutions with $\lambda = 2$.

It is straightforward to verify that $f = \cos^2\theta - 1/3$ is a solution with $\lambda = 6$. Finally, we mention that other $\lambda = 6$ solutions are $\sin\theta\cos\theta\sin\varphi$ and $\sin^2\theta\sin 2\varphi$.

We do not attempt to find the general case here, but we have done enough to see the beginnings of the pattern. We have found the series of eigenvalues 0, 2, 6, … . It turns out that the complete series is given by $\lambda = l(l+1)$, with $l = 0, 1, 2, \dots$. This integer $l$ is the analogue of the integer $m$ in the wave on a circle case. Recall that for the wave on the circle, if we chose real wave functions ($\cos m\varphi$, $\sin m\varphi$, not $e^{im\varphi}$) then $2m$ gave the number of nodes the wave had (that is, $m$ complete wavelengths fitted around the circle). It turns out that on the sphere $l$ gives the number of nodal lines (or circles) on the surface. This assumes that we again choose the $\varphi$-component of the wave function to be real, so that there will be $m$ nodal circles passing through the two poles corresponding to the zeros of the $\cos m\varphi$ term. We find that there are $l - m$ nodal latitude circles corresponding to zeros of the function of $\theta$.
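As promised, a quick check of the claim above that $f = \cos\theta$ solves the $\varphi$-independent equation with $\lambda = 2$:

$$\frac{1}{\sin\theta}\frac{d}{d\theta}\left(\sin\theta\,\frac{d\cos\theta}{d\theta}\right) = \frac{1}{\sin\theta}\frac{d}{d\theta}\left(-\sin^2\theta\right) = \frac{-2\sin\theta\cos\theta}{\sin\theta} = -2\cos\theta,$$

consistent with $\lambda = l(l+1) = 2$ for $l = 1$.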
## Summary: First Few Standing Waves on the Balloon

| $\lambda$ | $l$ | $m$ | form of solution (unnormalized) |
|-----------|-----|-----|---------------------------------|
| 0 | 0 | 0 | constant |
| 2 | 1 | 0 | $\cos\theta$ |
| 2 | 1 | 1 | $\sin\theta\,e^{i\varphi}$ |
| 2 | 1 | −1 | $\sin\theta\,e^{-i\varphi}$ |
| 6 | 2 | 0 | $\cos^2\theta - 1/3$ |
| 6 | 2 | ±1 | $\cos\theta\sin\theta\,e^{\pm i\varphi}$ |
| 6 | 2 | ±2 | $\sin^2\theta\,e^{\pm 2i\varphi}$ |

## The Schrödinger Equation for the Hydrogen Atom: How Do We Separate the Variables?

In three dimensions, the Schrödinger equation for an electron in a potential can be written:

$$-\frac{\hbar^2}{2m}\left(\frac{\partial^2\psi}{\partial x^2} + \frac{\partial^2\psi}{\partial y^2} + \frac{\partial^2\psi}{\partial z^2}\right) + V(x,y,z)\psi = E\psi$$

This is the obvious generalization of our previous two-dimensional discussion, and we will later be using the equation in the above form to discuss electron wave functions in metals, where the standard approach is to work with standing waves in a rectangular box.

Recall that in our original "derivation" of the Schrödinger equation, by analogy with the Maxwell wave equation for light waves, we argued that the differential wave operators arose from the energy-momentum relationship for the particle, that is

$$\frac{p_x^2 + p_y^2 + p_z^2}{2m}\psi \equiv -\frac{\hbar^2}{2m}\left(\frac{\partial^2\psi}{\partial x^2} + \frac{\partial^2\psi}{\partial y^2} + \frac{\partial^2\psi}{\partial z^2}\right) = -\frac{\hbar^2\nabla^2\psi}{2m}$$

so that the time-independent Schrödinger wave equation is nothing but the statement that $E = \text{K.E.} + \text{P.E.}$ with the kinetic energy expressed as the equivalent operator.

To make further progress in solving the equation, the only trick we know is separation of variables. Unfortunately, this won't work with the equation as given above in $(x,y,z)$ coordinates, because the potential energy term is a function of $x, y$ and $z$ in a nonseparable form. The solution is, however, fairly obvious: the potential is a function of radial distance from the origin, independent of direction. Therefore, we need to take as our coordinates the radial distance $r$ and two parameters fixing direction, $\theta$ and $\varphi$. We should then be able to separate the variables, because the potential only affects radial motion. No potential term will appear in the equations for $\theta, \varphi$ motion, which will describe free particle motion on the surface of a sphere.

## Momentum and Angular Momentum with Spherical Coordinates

It is worth thinking about what are the natural momentum components for describing motion in spherical polar coordinates $(r,\theta,\varphi)$. The radial component of momentum, $p_r$, points along the radius, of course. The $\theta$-component $p_\theta$ points along a line of longitude, away from the north pole if positive (remember $\theta$ itself measures latitude, counting the north pole as zero). The $\varphi$-momentum component, $p_\varphi$, points along a line of latitude.

It will be important in understanding the hydrogen atom to connect these momentum components $(p_r, p_\theta, p_\varphi)$ with the angular momentum components of the atom. Evidently, momentum in the $r$-direction, which passes directly through the center of the atom, contributes nothing to the angular momentum.
Consider now a particle for which $p_r = p_\theta = 0$, only $p_\varphi$ being nonzero. Classically, such a particle is circling the north pole at constant latitude $\theta$, say, so it is moving in space in a circle of radius $r\sin\theta$ in a plane perpendicular to the north-south axis of the sphere. Therefore, it has an angular momentum about that axis. (The standard transformation from $(x,y,z)$ coordinates to $(r,\theta,\varphi)$ coordinates is to take the north pole of the $\theta,\varphi$ sphere to be on the $z$-axis.)

As we shall see in detail below, the wave equation describing the $\varphi$ motion is a simple one, with solutions of the form $e^{im\varphi}$ with integer $m$, just as in the two-dimensional circular well. This just means that the component of angular momentum along the $z$-axis is quantized, $L_z = m\hbar$, with $m$ an integer.

## Total Angular Momentum and Waves on a Balloon

The total angular momentum is $L = rp_\perp$, where $p_\perp$ is the component of the particle's momentum perpendicular to the radius, so $p_\perp^2 = p_\varphi^2 + p_\theta^2$. Thus the square of the total angular momentum is (apart from a constant factor) the kinetic energy of a particle moving freely on the surface of a sphere. The equivalent Schrödinger equation for such a particle is the wave equation given in the last section for waves on a balloon. (This can be established by the standard change of variables routine on the differential operators.) Therefore, the solutions we found for elastic waves on a sphere actually describe the angular momentum wave function of the hydrogen atom. We conclude that the total angular momentum is quantized, $L^2 = l(l+1)\hbar^2$, with $l$ an integer.

## Angular Momentum and the Uncertainty Principle

The conclusions of our above waves-on-a-sphere analysis of the angular momentum of a quantum mechanical particle are a little strange. We found that the component of angular momentum in the $z$-direction must be a whole number of $\hbar$ units, yet the square of the total angular momentum $L^2 = l(l+1)\hbar^2$ is not a perfect square! One might wonder if the component of angular momentum in the $x$-direction isn't also a whole number of $\hbar$ units as well, and if not, why not?

The key is that in questions of this type we are forgetting the essentially wavelike nature of the particle's motion, or, equivalently, the uncertainty principle. Recall first that the $z$-component of angular momentum, that is, the angular momentum $L_z$ about the $z$-axis, is the product of the particle's momentum in the $xy$-plane and the distance of the line of that motion from the origin. There is no contradiction in specifying that momentum and that position simultaneously, because they are in perpendicular directions. However, we cannot at the same time specify either of the other components $L_x, L_y$ of the angular momentum, because that would involve measuring some component of momentum in a direction in which we have just specified a position measurement. We can measure the total angular momentum; that involves additionally only the component $p_\theta$ of momentum perpendicular to the $p_\varphi$ needed for the $z$-component. Thus the uncertainty principle limits us to measuring at the same time only the total angular momentum and the component in one direction. Note also that if we knew the $z$-component of angular momentum to be $m\hbar$, and the total angular momentum were $L^2 = l^2\hbar^2$ with $l = m$, then we would also know that the $x$ and $y$ components of the angular momentum were exactly zero.
Thus we would know all three components, in contradiction to our uncertainty principle arguments. This is the essential reason why the square of the total angular momentum is greater than the maximum square of any one component.  (For $l = 1$, for instance, $L^2 = 2\hbar^2$, while the largest possible $L_z^2$ is $\hbar^2$, leaving $L_x^2 + L_y^2 = \hbar^2$ unaccounted for.)  It is as if there were a “zero point motion” fuzzing out the direction.

Another point related to the uncertainty principle concerns measuring just where in its circular (say) orbit the electron is at any given moment. How well can that be pinned down?  There is an obvious resemblance here to measuring the position and momentum of a particle at the same time, where we know the fuzziness of the two measurements is related by $\Delta x\cdot\Delta p \sim h$.  Naïvely, for a circular orbit of radius $r$ in the $xy$-plane, $rp = L_z$, and distance measured around the circle is $rθ$, so $\Delta x\cdot\Delta p \sim h$ suggests $\Delta θ\cdot\Delta L_z \sim h$.  That is to say, precise knowledge of $L_z$ implies no knowledge of where on the circle the particle is.  This is not surprising, because we have found that for $L_z = m\hbar$ the wave has the form $e^{imφ}$, and so $|\psi|^2$, the relative probability of finding the particle, is the same anywhere on the circle.  On the other hand, if we have a time-dependent wave function describing a particle orbiting the nucleus, so that the probability of finding the particle at a particular place varies with time, the particle cannot be in a definite angular momentum state.  This is just the same as saying that a particle described by a wave packet cannot have a definite momentum.

## The Schrödinger Equation in $(r, θ, φ)$ Coordinates

It is worth writing first the energy equation for a classical particle in the Coulomb potential:

$\frac{1}{2m}\left(p_r^2 + p_θ^2 + p_φ^2\right) - \frac{1}{4\pi\varepsilon_0}\frac{e^2}{r} = E.$

This makes it possible to see, term by term, what the various parts of the Schrödinger equation signify.  In spherical polar coordinates, Schrödinger’s equation is:

$-\frac{\hbar^2}{2m}\left(\frac{1}{r}\frac{\partial^2}{\partial r^2}(r\psi) + \frac{1}{r^2}\left\{\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial\psi}{\partial\theta}\right) + \frac{1}{\sin^2\theta}\frac{\partial^2\psi}{\partial\varphi^2}\right\}\right) - \frac{1}{4\pi\varepsilon_0}\frac{e^2}{r}\psi = E\psi.$

## Separating the Variables: the Messy Details

We look for separable solutions of the form

$\psi(r,\theta,\varphi) = R(r)\Theta(\theta)\Phi(\varphi).$

We now follow the standard technique.  That is to say, we substitute $R\Theta\Phi$ for $\psi$ in each term in the above equation.  We then observe that the differential operators only actually operate on one of the factors in any given part of the expression, so we put the other two factors to the left of these operators.  We then divide the entire equation by $R\Theta\Phi$, to get

$-\frac{\hbar^2}{2m}\left(\frac{1}{R}\frac{1}{r}\frac{\partial^2}{\partial r^2}(rR) + \frac{1}{r^2}\left\{\frac{1}{\Theta}\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial\Theta}{\partial\theta}\right) + \frac{1}{\sin^2\theta}\frac{1}{\Phi}\frac{\partial^2\Phi}{\partial\varphi^2}\right\}\right) - \frac{1}{4\pi\varepsilon_0}\frac{e^2}{r} = E.$

## Separating Out and Solving the Φ($φ$) Equation

The above equation can be rearranged to give:

$\frac{1}{R}\frac{1}{r}\frac{\partial^2}{\partial r^2}(rR) + \frac{1}{r^2}\left\{\frac{1}{\Theta}\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial\Theta}{\partial\theta}\right) + \frac{1}{\sin^2\theta}\frac{1}{\Phi}\frac{\partial^2\Phi}{\partial\varphi^2}\right\} = -\frac{2m}{\hbar^2}\left(E + \frac{1}{4\pi\varepsilon_0}\frac{e^2}{r}\right).$

Multiplying through by $r^2\sin^2\theta$ and isolating the $\Phi$ term,

$\frac{1}{\Phi}\frac{\partial^2\Phi}{\partial\varphi^2} = \sin^2\theta\left[-r^2\left\{\frac{2m}{\hbar^2}\left(E + \frac{1}{4\pi\varepsilon_0}\frac{e^2}{r}\right) + \frac{1}{R}\frac{1}{r}\frac{\partial^2}{\partial r^2}(rR)\right\} - \frac{1}{\Theta}\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial\Theta}{\partial\theta}\right)\right].$

At this point, we have achieved the separation of variables!  The left-hand side of this equation is a function only of $φ$; the right-hand side is a function only of $r$ and $θ$.  The only way this can make sense is if both sides of the equation are in fact constant (and of course equal to each other).
Taking the left-hand side to be equal to a constant we denote, for later convenience, by $-m^2$:

$\frac{\partial^2\Phi(\varphi)}{\partial\varphi^2} = -m^2\Phi(\varphi).$

We write the constant as $-m^2$ because we know that, as a factor in a wave function, $\Phi(\varphi)$ must be single valued as $φ$ increases through $2\pi$, so an integer number of oscillations must fit around the circle: $\Phi$ must be $\sin m\varphi$, $\cos m\varphi$ or $e^{im\varphi}$ with $m$ an integer.  These are the solutions of the above equation.  Of course, this is very similar to the particle in the circle in two dimensions: $m$ signifies units of angular momentum about the $z$-axis.

## Separating Out the Θ(θ) Equation

Backing up now to the equation in the form

$\frac{1}{R}\frac{1}{r}\frac{\partial^2}{\partial r^2}(rR) + \frac{1}{r^2}\left\{\frac{1}{\Theta}\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial\Theta}{\partial\theta}\right) + \frac{1}{\sin^2\theta}\frac{1}{\Phi}\frac{\partial^2\Phi}{\partial\varphi^2}\right\} = -\frac{2m}{\hbar^2}\left(E + \frac{1}{4\pi\varepsilon_0}\frac{e^2}{r}\right),$

we can replace the $\frac{1}{\Phi}\frac{\partial^2\Phi}{\partial\varphi^2}$ term by $-m^2$, and move the $r$ terms over to the right, to give

$\frac{1}{\Theta}\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial\Theta}{\partial\theta}\right) - \frac{m^2}{\sin^2\theta} = -r^2\left\{\frac{2m}{\hbar^2}\left(E + \frac{1}{4\pi\varepsilon_0}\frac{e^2}{r}\right) + \frac{1}{R}\frac{1}{r}\frac{\partial^2}{\partial r^2}(rR)\right\}.$

We have again managed to separate the variables: the left-hand side is a function only of $θ$, the right-hand side a function only of $r$.  Therefore both must be equal to the same constant, which we set equal to $-\lambda$.  This gives the $\Theta(\theta)$ equation:

$\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial\Theta(\theta)}{\partial\theta}\right) - \frac{m^2}{\sin^2\theta}\Theta(\theta) = -\lambda\Theta(\theta).$

This is exactly the wave equation we discussed above for the elastic sphere, and the allowed eigenvalues $λ$ are $l(l+1)$, where $l = 0, 1, 2, \ldots$ with $l \geq |m|$.

## The R(r) Equation

Replacing the $θ, φ$ operator with the value found just above in the original Schrödinger equation gives the equation for the radial wave function:

$-\frac{\hbar^2}{2m}\left(\frac{1}{r}\frac{\partial^2}{\partial r^2}(rR(r)) - \frac{l(l+1)}{r^2}R(r)\right) - \frac{1}{4\pi\varepsilon_0}\frac{e^2}{r}R(r) = ER(r).$

The first term in this radial equation is the usual radial kinetic energy term, equivalent to $p_r^2/2m$ in the classical picture.  The third term is the Coulomb potential energy.  The second term is an effective potential representing the centrifugal force.  This is clarified by reconsidering the energy equation for the classical case,

$\frac{1}{2m}\left(p_r^2 + p_θ^2 + p_φ^2\right) - \frac{1}{4\pi\varepsilon_0}\frac{e^2}{r} = E.$

The angular momentum squared is $L^2 = r^2(p_θ^2 + p_φ^2) = l(l+1)\hbar^2$.  Thus for fixed angular momentum, we can write the above “classical” equation as

$\frac{1}{2m}\left(p_r^2 + \frac{l(l+1)\hbar^2}{r^2}\right) - \frac{1}{4\pi\varepsilon_0}\frac{e^2}{r} = E.$

The parallel to the radial Schrödinger equation is then clear.

We must find the solutions of the radial Schrödinger equation that decay for large $r$.  These will be the bound states of the hydrogen atom.  We work in natural units, measuring lengths in units of the first Bohr radius and energies in Rydbergs, writing $ρ$ for the radial coordinate and $ε$ for the energy in these units.  Finally, taking $u(\rho) = \rho R(\rho)$, the radial equation becomes

$-\frac{d^2u(\rho)}{d\rho^2} + \frac{l(l+1)}{\rho^2}u(\rho) - \frac{2}{\rho}u(\rho) = \varepsilon u(\rho).$
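The separated equations lend themselves to a quick symbolic check.  The following sketch (ours, not part of the lecture; it assumes the sympy library) verifies that the balloon standing waves tabulated earlier satisfy the angular equation with $λ = l(l+1)$, and that $u(\rho) = \rho e^{-\rho}$ solves the radial equation with $ε = -1$, the hydrogen ground state at one Rydberg below zero.

```python
import sympy as sp

theta, phi, rho = sp.symbols('theta phi rho', positive=True)

# Angular part of the Laplacian, as it appears in the spherical
# Schrodinger equation above.
def angular(Y):
    t1 = sp.diff(sp.sin(theta) * sp.diff(Y, theta), theta) / sp.sin(theta)
    t2 = sp.diff(Y, phi, 2) / sp.sin(theta)**2
    return t1 + t2

# (lambda, solution) pairs from the summary table of balloon waves.
table = [
    (0, sp.Integer(1)),
    (2, sp.cos(theta)),
    (2, sp.sin(theta) * sp.exp(sp.I * phi)),
    (6, sp.cos(theta)**2 - sp.Rational(1, 3)),
    (6, sp.cos(theta) * sp.sin(theta) * sp.exp(sp.I * phi)),
    (6, sp.sin(theta)**2 * sp.exp(2 * sp.I * phi)),
]
for lam, Y in table:
    # The angular equation reads: angular(Y) = -lambda * Y.
    print(lam, Y, sp.simplify(angular(Y) + lam * Y) == 0)

# Radial check: u = rho*exp(-rho) solves -u'' + (l(l+1)/rho^2 - 2/rho)u = eps*u.
l = 0
u = rho * sp.exp(-rho)
eps = sp.simplify((-sp.diff(u, rho, 2) + (l*(l+1)/rho**2 - 2/rho) * u) / u)
print(eps)   # -1: the ground-state energy in Rydberg units
```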
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 258, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9839412569999695, "perplexity": 444.8660467614744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488556133.92/warc/CC-MAIN-20210624141035-20210624171035-00038.warc.gz"}
# Equivalent Fractions

In this math worksheet, learners solve 10 problems in which they make equivalent fractions, filling in the missing numerator in each. The denominators for the equivalent fractions are provided. Grade: 4th–5th. Subjects: arithmetic & pre-algebra. Resource type: problem-solving worksheet.
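The arithmetic behind each fill-in is a one-liner; a small sketch of it (ours, not part of the resource):

```python
# For a/b = x/d, the missing numerator is x = a * d / b, which is an
# integer whenever d is a multiple of b, as in these worksheet problems.
def missing_numerator(a, b, d):
    assert (a * d) % b == 0
    return a * d // b

print(missing_numerator(1, 2, 6))    # 1/2 = 3/6
print(missing_numerator(2, 3, 12))   # 2/3 = 8/12
```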
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9822450280189514, "perplexity": 16219.875720809014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423927.54/warc/CC-MAIN-20170722082709-20170722102709-00716.warc.gz"}
# 5.4 Complex Numbers (p. 272)

## Presentation transcript:

Imaginary Unit. Until now, you have always been told that you can’t take the square root of a negative number. If you use imaginary units, you can! The imaginary unit is i, defined by i = √(−1). It is used to write the square root of a negative number.

Property of the square root of negative numbers: if r is a positive real number, then √(−r) = i√r. Examples: √(−4) = 2i, √(−3) = i√3.

*For larger exponents of i, divide the exponent by 4, then use the remainder as your exponent instead. Example: i^14 = i^2 = −1, since 14 ÷ 4 leaves remainder 2.

Complex Numbers. A complex number has a real part & an imaginary part. Standard form is a + bi, with real part a and imaginary part bi. Example: 5 + 4i.

The Complex Plane. Complex numbers are graphed on a plane with a real axis and an imaginary axis.

Multiplying. Treat the i’s like variables, then change any powers of i that are not to the first power. Example: (2i)(3i) = 6i² = −6.

Absolute Value of a Complex Number. The distance the complex number is from the origin on the complex plane: |a + bi| = √(a² + b²).

Examples: which of these two complex numbers is closest to the origin? −2 + 5i, or …
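Python’s built-in complex type follows exactly these rules, so the slides can be checked directly; a small sketch (ours, not part of the presentation):

```python
# Powers of i cycle with period 4, so i**n equals i**(n % 4).
for n in range(8):
    print(n, (1j)**n)

# Absolute value = distance from the origin: |a+bi| = sqrt(a^2 + b^2).
print(abs(-2 + 5j))    # ~5.39
print(abs(3 + 1j))     # ~3.16; 3+1i (a made-up comparison point) would
                       # be closer to the origin than -2+5i
```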
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8956279158592224, "perplexity": 1088.6429654979734}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887973.50/warc/CC-MAIN-20180119105358-20180119125358-00336.warc.gz"}
We describe a recognition algorithm for a subset of binary linear context-free rewriting systems (LCFRS) with running time O(n^{ωd}), where M(m) = O(m^ω) is the running time for m × m matrix multiplication and d is the “contact rank” of the LCFRS—the maximal number of combination and non-combination points that appear in the grammar rules. We also show that this algorithm can be used as a subroutine to obtain a recognition algorithm for general binary LCFRS with running time O(n^{ωd+1}). The currently best known ω is smaller than 2.38. Our result provides another proof for the best known result for parsing mildly context-sensitive formalisms such as combinatory categorial grammars, head grammars, linear indexed grammars, and tree-adjoining grammars, which can be parsed in time O(n^{4.76}). It also shows that inversion transduction grammars can be parsed in time O(n^{5.76}). In addition, binary LCFRS subsumes many other formalisms and types of grammars, for some of which we also improve the asymptotic complexity of parsing.

The problem of grammar recognition is a decision problem of determining whether a string belongs to a language induced by a grammar. For context-free grammars (CFGs), recognition can be done using parsing algorithms such as the CKY algorithm (Kasami 1965; Younger 1967; Cocke and Schwartz 1970) or the Earley algorithm (Earley 1970). The asymptotic complexity of these chart-parsing algorithms is cubic in the length of the sentence.

In a major breakthrough, Valiant (1975) showed that context-free grammar recognition is no more complex than Boolean matrix multiplication for a matrix of size m × m, where m is linear in the length of the sentence, n. With current state-of-the-art results in matrix multiplication, this means that CFG recognition can be done with an asymptotic complexity of O(n^{2.38}).

In this article, we show that the problem of linear context-free rewriting system (LCFRS) recognition can also be reduced to Boolean matrix multiplication. Current chart-parsing algorithms for binary LCFRS have an asymptotic complexity of O(n^{3f}), where f is the maximal fan-out of the grammar. Our algorithm takes time O(n^{ωd}), for a constant d which is a function of the grammar (and not the input string), and where the complexity of n × n matrix multiplication is M(n) = O(n^ω). The parameter d can be as small as f, meaning that we reduce parsing complexity from O(n^{3f}) to O(n^{ωf}), and that, in general, the savings in the exponent is larger for more complex grammars.

LCFRS is a broad family of grammars. As such, we are able to support the findings of Rajasekaran and Yooseph (1998), who showed that tree-adjoining grammar (TAG) recognition can be done in time O(M(n^2)) = O(n^{4.76}) (TAG can be reduced to LCFRS with d = 2). As a result, combinatory categorial grammars, head grammars, and linear indexed grammars can be recognized in time O(M(n^2)). In addition, we show that inversion transduction grammars (ITGs; Wu 1997) can be parsed in time O(nM(n^2)) = O(n^{5.76}), improving the best asymptotic complexity previously known for ITGs.

### 1.1 Matrix Multiplication State of the Art

Our algorithm reduces the problem of LCFRS parsing to Boolean matrix multiplication. Let M(n) be the complexity of multiplying two such n × n matrices. These matrices can be naïvely multiplied in O(n^3) time by computing for each output cell the dot product between the corresponding row and column in the input matrices (each such product is an O(n) operation).
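As a concrete illustration (ours; it assumes numpy), a Boolean product can be computed by an ordinary integer product followed by thresholding, which is what lets fast matrix multiplication algorithms apply:

```python
import numpy as np

# Boolean matrix multiplication: C[i][j] = OR over k of (A[i][k] AND B[k][j]),
# realized here as an integer matrix product followed by a threshold.
A = np.array([[1, 0], [1, 1]], dtype=bool)
B = np.array([[0, 1], [1, 0]], dtype=bool)
C = (A.astype(int) @ B.astype(int)) > 0
print(C)   # [[False  True] [ True  True]]
```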
Strassen (1969) discovered a way to do the same multiplication in O(n^{2.8704}) time—his algorithm is a divide and conquer algorithm that eventually uses only seven operations (instead of eight) to multiply 2 × 2 matrices.

With this discovery, there have been many attempts to further reduce the complexity of matrix multiplication, relying on principles similar to Strassen’s method: a reduction in the number of operations it takes to multiply sub-matrices of the original matrices to be multiplied. Coppersmith and Winograd (1987) identified an algorithm that has the asymptotic complexity of O(n^{2.375477}). Others have slightly improved that algorithm, and currently there is an algorithm for matrix multiplication with M(n) = O(n^ω) such that ω = 2.3728639 (Le Gall 2014). It is known that M(n) = Ω(n^2 log n) (Raz 2002).

Although the asymptotically best matrix multiplication algorithms have large constant factors lurking in the O-notation, Strassen’s algorithm does not, and is widely used in practice. Benedí and Sánchez (2007) show speed improvement when parsing natural language sentences using Strassen’s algorithm as the matrix multiplication subroutine for Valiant’s algorithm for CFG parsing. This indicates that similar speed-ups may be possible in practice using our algorithm for LCFRS parsing.

### 1.2 Main Result

Our main result is a matrix multiplication algorithm for unbalanced, single-initial binary LCFRS with asymptotic complexity M(n^d) = O(n^{ωd}), where d is the maximal number of combination points in all grammar rules. The constant d can be easily determined from the grammar at hand:

d = max over rules A → B C of max{φ(B) + φ(C) − φ(A), φ(A) + φ(B) − φ(C), φ(A) + φ(C) − φ(B)},

where A → B C ranges over rules in the grammar and φ(A) is the fan-out of nonterminal A. Single-initial grammars are defined in Section 2, and include common formalisms such as tree-adjoining grammars. Any LCFRS can be converted to single-initial form by increasing its fan-out by at most one.

The notion of unbalanced grammars is introduced in Section 4.4, and it is a condition on the set of LCFRS grammar rules that is satisfied by many practical grammars. In cases where the grammar is balanced, our algorithm can be used as a subroutine so that it parses the binary LCFRS in time O(n^{ωd+1}). A similar procedure was applied by Nakanishi et al. (1998) for multiple component context-free grammars. See more discussion of this in Section 7.5.

Our results focus on the asymptotic complexity as a function of string length. We do not give explicit grammar constants. For other work that focuses on reducing the grammar constant in parsing, see, for example, Eisner and Satta (1999), Dunlop, Bodenstab, and Roark (2010), and Cohen, Satta, and Collins (2013). For a discussion of the optimality of the grammar constants in Valiant’s algorithm, see, for example, Abboud, Backurs, and Williams (2015).

This section provides background on LCFRS, and establishes notation used in the remainder of the paper. A reference table of notation is also provided in Appendix A.

For an integer n, let [n] denote the set of integers {1, …, n}. Let [n]_0 = [n] ∪ {0}. For a set X, we denote by X^+ the set of all sequences of length 1 or more of elements from X. A span is a pair of integers denoting left and right endpoints for a substring in a larger string. The endpoints are placed in the “spaces” between the symbols in a string. For example, the span (0, 3) spans the first three symbols in the string. For a string of length n, the set of potential endpoints is [n]_0.

We turn now to give a succinct definition for binary LCFRS.
For more details about LCFRS and their relationship to other grammar formalisms, see Kallmeyer (2010). A binary LCFRS is a tuple (𝒩, 𝒯, ℛ, φ, S) such that:

• 𝒩 is the set of nonterminal symbols in the grammar.

• 𝒯 is the set of terminal symbols in the grammar. We assume 𝒩 ∩ 𝒯 = ∅.

• φ is a function specifying a fixed fan-out for each nonterminal (φ: 𝒩 → ℕ).

• ℛ is a set of productions. Each production p has the form A → g[B, C] where A, B, C ∈ 𝒩, and g is a composition function g : (𝒯*)^{φ(B)} × (𝒯*)^{φ(C)} → (𝒯*)^{φ(A)}, which specifies how to assemble the φ(B) + φ(C) spans of the right-hand side nonterminals into the φ(A) spans of the left-hand side nonterminal. We use square brackets as part of the syntax for writing productions, and parentheses to denote the application of the function g. The function g must be linear and non-erasing, which means that if g is applied on a pair of tuples of strings, then each input string appears exactly once in the output, possibly as a substring of one of the strings in the output tuple. Rules may also take the form A → g[], where g returns a constant tuple of strings over 𝒯.

• S ∈ 𝒩 is a start symbol. Without loss of generality, we assume φ(S) = 1.

The language of an LCFRS G is defined as follows:

• We define first the set yield(A) for every A ∈ 𝒩:
  • For every A → g[] ∈ ℛ, g() ∈ yield(A).
  • For every A → g[B, C] ∈ ℛ and all tuples β ∈ yield(B), γ ∈ yield(C), g(β, γ) ∈ yield(A).
  • Nothing else is in yield(A).

• The string language of G is L(G) = {w | 〈w〉 ∈ yield(S)}.

Intuitively, the process of generating a string from an LCFRS grammar consists of first choosing, top–down, a production to expand each nonterminal, and then, bottom–up, applying the composition functions associated with each production to build the string.

As an example, the following context-free grammar:

S → A B,  A → a,  B → b

corresponds to the following (binary) LCFRS:

S → g1[A, B],  g1(〈β1〉, 〈γ1〉) = 〈β1γ1〉
A → g2[],      g2() = 〈a〉
B → g3[],      g3() = 〈b〉

The only derivation possible under this grammar consists of the function application g1(g2(), g3()) = 〈ab〉.

The following notation will be used to precisely represent the linear non-erasing composition functions g used in a specific grammar. For each production rule that operates on nonterminals A, B, and C, we define variables from the set 𝒱 = {β1, …, β_{φ(B)}, γ1, …, γ_{φ(C)}}. In addition, we define variables αi for each rule, where i ∈ [φ(A)], taking values from 𝒱^+. We write an LCFRS function as:

g(〈β1, …, β_{φ(B)}〉, 〈γ1, …, γ_{φ(C)}〉) = 〈α1, …, α_{φ(A)}〉,

where each αi = αi,1 ⋯ αi,ni specifies the parameter strings that are combined to form the ith string of the function’s result tuple. For example, for the rule in Equation (1), α1,1 = β1 and α1,2 = γ2.

We adopt the following notational shorthand for LCFRS rules in the remainder of the article. We write the rule A → g[B, C], with g as above, as:

A[α] → B[β] C[γ],

where α consists of a tuple of strings over the alphabet {β1, …, β_{φ(B)}, γ1, …, γ_{φ(C)}}. In this notation, β is always the tuple 〈β1, …, β_{φ(B)}〉, and γ is always 〈γ1, …, γ_{φ(C)}〉. We include β and γ in the rule notation merely to remind the reader of the meaning of the symbols in α. For example, with CFGs, rules have the form:

A[〈β1γ1〉] → B[〈β1〉] C[〈γ1〉],

indicating that B and C each have one span, and are concatenated in order to form A.

A binary TAG can also be represented as a binary LCFRS (Vijay-Shanker and Weir 1994). Figure 1 demonstrates how the adjunction operation is done with binary LCFRS. Each gray block denotes a span, and the adjunction operator takes the first span of nonterminal B and concatenates it to the first span of nonterminal C (to get the first span of A), and then takes the second span of C and concatenates it with the second span of B (to get the second span of A).
For TAGs, rules therefore have the form:

A[〈β1γ1, γ2β2〉] → B[〈β1, β2〉] C[〈γ1, γ2〉].    (2)

Figure 1: An example of a combination of spans for TAGs for the adjunction operation in terms of binary LCFRS. The rule in Equation (2) specifies how two nonterminals B and C are combined together into a nonterminal A.

The fan-out of a nonterminal is the number of spans in the input sentence that it covers. The fan-out of CFG rules is 1, and the fan-out of TAG rules is 2. The fan-out of the grammar, f, is the maximum fan-out of its nonterminals:

f = max over A ∈ 𝒩 of φ(A).

We sometimes refer to the skeleton of a grammar rule A[α] → B[β] C[γ], which is just the context-free rule A → B C, omitting the variables. In that context, a logical statement such as (A → B C) ∈ ℛ is true if there is a rule A[α] → B[β] C[γ] ∈ ℛ for some α, β, and γ.

For our parsing algorithm, we assume that the grammar is in a normal form such that the variables β1, …, β_{φ(B)} appear in order in α, that is, that the spans of B are not re-ordered by the rule, and similarly we assume that γ1, …, γ_{φ(C)} appear in order. If this is not the case in some rule, then the grammar can be transformed by introducing a new nonterminal for each permutation of a nonterminal that can be produced by the grammar. We further assume that α1,1 = β1, that is, that the first span of A begins with material produced by B rather than by C. If this is not the case for some rule, B and C can be exchanged to satisfy this condition.

We refer to an LCFRS rule A → B C as single-initial if the leftmost endpoint of C is internal to a span of A, and dual-initial if the leftmost endpoint of C is the beginning of a span of A. Our algorithm will require the input LCFRS to be in single-initial form, meaning that all rules are single-initial. We note that grammars for common formalisms including TAG and synchronous context-free grammar (SCFG) are in this form. If a grammar is not in single-initial form, dual-initial rules can be converted to single-initial form by adding an empty span to B that combines with the first span of C immediately to its left, as shown in Figure 2. Specifically, for each dual-initial rule A → B C, if the first span of C appears between spans i and i + 1 of B, create a new nonterminal B′ with φ(B′) = φ(B) + 1, and add a rule B′ → B, where B′ produces B along with a span of length zero between spans i and i + 1 of B. We then replace the rule A → B C with A → B′ C, where the new span of B′ combines with C immediately to the left of C’s first span. Because the new nonterminal B′ has fan-out one greater than B, this grammar transformation can increase a grammar’s fan-out by at most one.

Figure 2: Conversion of a dual-initial rule to a single-initial rule.

By limiting ourselves to binary LCFRS grammars, we do not necessarily restrict the power of our results. Any LCFRS with arbitrary rank (i.e., with an arbitrary number of nonterminals in the right-hand side) can be converted to a binary LCFRS (with potentially a larger fan-out). See discussion in Section 7.6.

Example 1. Consider the phenomenon of cross-serial dependencies that exists in certain languages. It has been used in the past (Shieber 1985) to argue that Swiss–German is not context-free.
One can show that there is a homomorphism between Swiss–German and the alphabet {a, b, c, d} such that the image of the homomorphism, intersected with the regular language a*b*c*d*, gives the language L = {a^m b^n c^m d^n | m, n ≥ 1}. Because L is not context-free, this implies that Swiss–German is not context-free, because context-free languages are closed under intersection with regular languages. Tree-adjoining grammars, on the other hand, are mildly context-sensitive formalisms that can handle such cross-serial dependencies (where the a’s are aligned with the c’s and the b’s are aligned with the d’s). For example, a tree-adjoining grammar for generating L would include initial and auxiliary trees in which nodes marked by an asterisk are nodes where adjunction is not allowed [trees omitted], and this TAG corresponds to a binary LCFRS [rules omitted]. Here we have one unary LCFRS rule for the initial tree, one unary rule for each adjunction tree, and one null-ary rule for each nonterminal producing a tuple of empty strings in order to represent TAG tree nodes at which no adjunction occurs. The LCFRS just described does not satisfy our normal form, which requires each rule to have either two nonterminals on the right-hand side with no terminals in the composition function, or zero nonterminals with a composition function returning fixed strings of terminals. However, it can be converted to such a form through a process analogous to converting a CFG to Chomsky Normal Form. For adjunction trees, the two strings returned by the composition function correspond to the material to the left and right of the foot node. The composition function merges terminals at the leaves of the adjunction tree with material produced by internal nodes of the tree at which adjunction may occur.

In general, binary LCFRS are more expressive than TAGs, because they can have nonterminals with fan-out greater than 2, and because they can interleave the arguments of the composition function in any order.

Our algorithm for LCFRS string recognition is inspired by the algorithm of Valiant (1975). It introduces a few important novelties that make it possible to use matrix multiplication for the goal of LCFRS recognition. The algorithm relies on the observation that it is possible to construct a matrix T with a specific non-associative multiplication and addition operator such that multiplying T by itself k times on the left or on the right yields k-step derivations for a given string. The row and column indices of the matrix together assemble a set of spans in the string (the fan-out of the grammar determines the number of spans). Each cell in the matrix keeps track of the nonterminals that can dominate these spans. Therefore, computing the transitive closure of this matrix yields in each matrix cell the set of nonterminals that can dominate the assembled indices’ spans for the specific string at hand.

There are several key differences between Valiant’s algorithm and our algorithm. Valiant’s algorithm has a rather simple indexing scheme for the matrix: the rows correspond to the left endpoints of a span and the columns correspond to its right endpoints. Our matrix indexing scheme can mix both left endpoints and right endpoints at either the rows or the columns. This is necessary because with LCFRS, spans for the right-hand side of an LCFRS rule can combine in various ways into a new set of spans for the left-hand side.
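To make this interleaving of spans concrete, here is a small sketch of our own (not the paper’s grammar), using the cross-serial language of Example 1: one fan-out-2 nonterminal carries the pair (a^m, c^m), another carries (b^n, d^n), and a composition function interleaves the four spans.

```python
# comp interleaves the spans of two fan-out-2 yields as <r1 t1 r2 t2>.
def comp(r, t):
    return (r[0] + t[0] + r[1] + t[1],)

def ac_pair(m):               # yield of the a/c nonterminal: (a^m, c^m)
    return ('a' * m, 'c' * m)

def bd_pair(n):               # yield of the b/d nonterminal: (b^n, d^n)
    return ('b' * n, 'd' * n)

print(comp(ac_pair(2), bd_pair(3)))   # ('aabbbccddd',) = a^2 b^3 c^2 d^3
```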
In addition, our indexing scheme is “over-complete.” This means that different cells in the matrix T (or its matrix powers) are equivalent and should consist of the same nonterminals. The reason we need such an over-complete scheme is again because of the possible ways spans of a right-hand side can combine in an LCFRS. To address this over-completeness, we introduce into the multiplication operator a “copy operation” that copies nonterminals between cells in order to maintain the same set of nonterminals in equivalent cells.

To give a preliminary example, consider the tree-adjoining grammar rule shown in Figure 1. We consider an application of the rule with the endpoints of each span instantiated as shown in Figure 3. With our algorithm, this operation translates into the following sequence of matrix transformations. We start with two matrices, T1 and T2: T1 has the nonterminal B in the cell with row address (1, 8) and column address (2, 7), and T2 has the nonterminal C in the cell with row address (2, 7) and column address (4, 5).

Figure 3: A demonstration of a parsing step for the combination of spans in Figure 1. During parsing, the endpoints of each span are instantiated with indices into the string. The variables for these indices shown on the left correspond to the logical induction rule on the right. The specific choice of indices shown at the bottom is used in our matrix multiplication example in Section 3.

For T1, for example, the fact that B appears for the pair of addresses (1, 8) (for the row) and (2, 7) (for the column) denotes that B spans the constituents (1, 2) and (7, 8) in the string (this is assumed to be true—in practice, it is the result of a previous step of matrix multiplication). Similarly, with T2, C spans the constituents (2, 4) and (5, 7). Note that (2, 7) are the two positions in the string where B and C meet, and that because B and C share these two endpoints, they can combine to form A. In the matrix representation, (2, 7) appears as the column address of B and as the row address of C, meaning that B and C appear in cells that are combined during matrix multiplication. The result of multiplying T1 by T2 is a matrix with A in the cell with row address (1, 8) and column address (4, 5). Now A appears in the cell that corresponds to the spans (1, 4) and (5, 8). This is the result of merging the spans (1, 2) and (2, 4) (left span of B and left span of C) into (1, 4), and of merging the spans (5, 7) and (7, 8) (right span of C and right span of B) into (5, 8). Finally, an additional copying operation leads to a matrix in which A also appears in the cell with row address (1, 4) and column address (5, 8): we copy the nonterminal A from the address with row (1, 8) and column (4, 5) into the address with row (1, 4) and column (5, 8). Both of these addresses correspond to the same spans, (1, 4) and (5, 8). Note that matrix row and column addresses can mix both starting points of spans and ending points of spans.

We turn next to give a description of the algorithm. Our description is constructed as follows (a small worked sketch of the address arithmetic and the multiplication step follows this overview):

• In Section 4.1 we describe the basic matrix structure used for LCFRS recognition. This construction depends on a parameter d, the contact rank, which is a function of the underlying LCFRS grammar we parse with. We also describe how to create a seed matrix, for which we need to compute the transitive closure.
• In Section 4.2 we define the multiplication operator between cells of the matrices we use. This multiplication operator is distributive, but not associative, and as such we use Valiant’s specialized transitive closure algorithm to compute the transitive closure of the seed matrix given a string.

• In Section 4.3 we define the contact rank parameter d. The smaller d is, the more efficient it is to parse with the specific grammar.

• In Section 4.4 we define when a binary LCFRS is “balanced.” This is an end case that increases the final complexity of our algorithm by a factor of O(n). Nevertheless, it is an important end case that appears in applications, such as inversion transduction grammars.

• In Section 4.5 we tie things together, and show that computing the transitive closure of the seed matrix we define in Section 4.1 yields a recognition algorithm for LCFRS.

### 4.1 Matrix Structure

The algorithm will seek to compute the transitive closure of a seed matrix T(d), where d is a constant determined by the grammar (see Section 4.3). The matrix rows and columns are indexed by the set N(d) defined as:

N(d) = ⋃ for d′ from 1 to d of ([n]_0 × {0, 1})^{d′},

where n denotes the length of the sentence, and the exponent d′ denotes a repeated Cartesian product. Thus each element of N(d) is a sequence of indices into the string, where each index is annotated with a bit (an element of the set {0, 1}) indicating whether it is marked or unmarked. Marked indices will be used in the copy operator defined later. Indices are unmarked unless specified as marked: we use x̄ to denote a marked index (x, 1), with x ∈ [n]_0. In the following, it will be safe to assume sequences from N(d) are monotonically increasing in their indices.

For an i ∈ N(d), we overload notation, and often refer to the set of all elements in the first coordinate of each element in the sequence (ignoring the additional bits). As such:

• The set i ∪ j is defined for i, j ∈ N(d).

• If we state that i is in N(d) and includes a set of endpoints, it means that i is a sequence of these integers (ordered lexicographically) with the bit part determined as explained in the context (for example, all unmarked).

• The quantity |i| denotes the length of the sequence.

• The quantity min i denotes the smallest index among the first coordinates of all elements in the sequence i (ignoring the additional bits).

We emphasize that the variables i, j, and k are mostly elements of N(d) as overloaded above, not integers, throughout this article; we choose the symbols i, j, and k by analogy to the variables in the CKY parsing algorithm, and also because we use the sequences as addresses for matrix rows and columns.

For i, j ∈ N(d), we define m(i, j) to be the set of pairs {(ℓ1, ℓ2), (ℓ3, ℓ4), …, (ℓ_{2f′−1}, ℓ_{2f′})} such that ℓ_k < ℓ_{k+1} for k ∈ [2f′ − 1] and (ℓ_k, 0) ∈ i ∪ j for k ∈ [2f′]. This means that m(i, j) takes as input the two sequences in matrix indices, merges them, sorts them, and then divides this sorted list into a set of f′ consecutive pairs. Whenever min j ≤ min i, m(i, j) is undefined. The interpretation of this is that ℓ1 should always belong to i and not j. See more details in Section 4.2. In addition, if any element of i or j is marked, m(i, j) is undefined.

We define an order < on elements i and j of N(d) by first sorting the sequences i and j and then comparing i and j lexicographically (ignoring the bits). This ensures that i < j if min i < min j. We assume that the rows and columns of our matrices are arranged in this order.
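As promised above, here is a small worked sketch (ours; plain Python over unmarked addresses, omitting the mark bits and the configuration checks of Figure 5) of the address function m and of the multiplication step from the Section 3 example:

```python
def m(i, j):
    # Merge the two address sequences, sort, and pair off consecutive
    # endpoints into spans; undefined when min(j) <= min(i).
    if min(j) <= min(i):
        return None
    merged = sorted(i + j)
    return tuple(zip(merged[0::2], merged[1::2]))

print(m((1, 8), (4, 5)))   # ((1, 4), (5, 8))
print(m((1, 4), (5, 8)))   # ((1, 4), (5, 8)) -- an equivalent address pair

# Toy multiplication step: entries combine when the column address of T1
# matches the row address of T2 (here (2, 7), where B and C meet).
T1 = {((1, 8), (2, 7)): {'B'}}
T2 = {((2, 7), (4, 5)): {'C'}}
rules = [('A', 'B', 'C')]                  # skeleton A -> B C

product = {}
for (i, k), bs in T1.items():
    for (k2, j), cs in T2.items():
        if k == k2:
            for A, B, C in rules:
                if B in bs and C in cs:
                    product.setdefault((i, j), set()).add(A)

print(product)   # {((1, 8), (4, 5)): {'A'}}: A over spans (1, 4) and (5, 8)
```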
For the rest of the discussion, we assume that d is a constant, and refer to T(d) as T and N(d) as N. We also define the set of triples M as the following Cartesian product:

M = (𝒩 ∪ {→1, →2, →3, ←1, ←2, ←3}) × N × N,

where →1, →2, →3, ←1, ←2, and ←3 are six special pre-defined symbols (we write →1, →2, →3 for the three stages that move an index from a row address to a column address, and ←1, ←2, ←3 for the three stages that move an index from a column address to a row address). Each cell Tij in T is a set such that Tij ⊆ M.

The intuition behind matrices of the type of T (meaning T and, as we see later, products of T with itself, or its transitive closure) is that each cell indexed by (i, j) in such a matrix consists of all nonterminals that can be generated by the grammar when parsing a sentence such that these nonterminals span the constituents m(i, j) (whenever m(i, j) is defined). Our normal form for LCFRS ensures that spans of a nonterminal are never re-ordered, meaning that it is not necessary to retain information about which indices demarcate which components of the nonterminal, because one can sort the indices and take the first two indices as delimiting the first span, the second two indices as delimiting the second span, and so on. The two additional N elements in each triple in a cell are actually just copies of the row and column indices of that cell. As such, they are identical for all triples in that cell. The additional →1, →2, →3, ←1, ←2, ←3 symbols indicate to the matrix multiplication operator that a “copying operation” should happen between equivalent cells (Section 4.2).

Figure 4 gives an algorithm to seed the initial matrix T. Entries added in Step 2 of the algorithm correspond to entries in the LCFRS parsing chart that can be derived immediately from terminals in the string. Entries added in Step 3 of the algorithm do not depend on the input string or input grammar, but rather initialize elements used in the copy operation described in detail in Section 4.2. Because the algorithm only initializes entries with i < j, the matrix T is guaranteed to be upper triangular, a fact which we will take advantage of in Section 4.2.

Figure 4: An algorithm for computing the seed matrix T. The function remove(v, x) takes a sequence of integers v and removes x from it, if it is in there. The function insert(v, x) takes a sequence of integers and adds x to it.

#### 4.1.1 Configurations

Our matrix representation requires that a nonterminal appear in more than one equivalent cell in the matrix, and the specific set of cells required depends on the specific patterns in which spans are combined in the LCFRS grammar. We now present a precise description of these cells by defining the configuration of a nonterminal in a rule. The concept of a configuration is designed to represent which endpoints of spans of the rule’s right-hand side (r.h.s.) nonterminals B and C meet one another to produce larger spans, and which endpoints, on the other hand, become endpoints of spans of the left-hand side (l.h.s.) nonterminal A. For each of the three nonterminals involved in a rule, the configuration is the set of endpoints in the row address of the nonterminal’s matrix cell. To make this precise, for a nonterminal B with fan-out φ(B), we number the endpoints of spans with integers in the range 1 to 2φ(B). In a rule A[α] → B[β] C[γ], the configuration of B is the subset of [2φ(B)] of endpoints of B that do not combine with endpoints of C in order to form a single span of A.
These endpoints will form the row address for B. Formally, let β = 〈β1, …, β_{φ(B)}〉, and let α = 〈α1,1 ⋯ α1,n1, …, α_{φ(A),1} ⋯ α_{φ(A),n_{φ(A)}}〉. Then the set of non-combination endpoints of B is defined as:

config2(r) = {2k | there exists j such that β_k = α_{j,n_j}} ∪ {2k − 1 | there exists j such that β_k = α_{j,1}},

where the first set defines right ends of spans of B that are right ends of some span of A, and the second set defines left ends of spans of B that are left ends of some span of A. For example, given that CFG rules have the form

A[〈β1γ1〉] → B[〈β1〉] C[〈γ1〉],

the configuration config2(r) is {1} because, of B’s two endpoints, only the first is also an endpoint of A. For the TAG rule t shown in Figure 1, config2(t) = {1, 4} because, of B’s four endpoints, the first and fourth are also endpoints of A.

For the second r.h.s. nonterminal of a rule r, the configuration consists of the set of endpoints in the row address for C, which are the endpoints that do combine with B:

config3(r) = {2k | for all j, γ_k ≠ α_{j,n_j}} ∪ {2k − 1 | for all j, γ_k ≠ α_{j,1}},

where the first set defines right ends of spans of C that are internal to some span of A, and the second set defines left ends of spans of C that are internal to some span of A. For example, any CFG rule r has configuration config3(r) = {1}, because the first endpoint of C is internal to A. For the TAG rule t shown in Figure 1, config3(t) = {1, 4} because, of C’s four endpoints, the first and fourth are internal to A.

For the l.h.s. nonterminal A of the rule, matrix multiplication will produce an entry in the matrix cell where the row address corresponds to the endpoints from B, and the column address corresponds to the endpoints from C. To capture this partition of the endpoints of A, we define

config1(r) = {2j | α_{j,n_j} ∈ {β1, …, β_{φ(B)}}} ∪ {2j − 1 | α_{j,1} ∈ {β1, …, β_{φ(B)}}},

where the first set defines right ends of spans of A that are formed from B, and the second set defines left ends of spans of A that are formed from B. For example, any CFG rule r has configuration config1(r) = {1}, because only the first endpoint of A is derived from B. For the TAG rule t shown in Figure 1, config1(t) = {1, 4} because, of A’s four endpoints, the first and fourth are derived from B.

### 4.2 Definition of Multiplication Operator

We need to define a multiplication operator ⊗ between a pair of elements R, S ⊆ M. Such a multiplication operator induces multiplication between matrices of the type of T, just by defining, for two such matrices T1 and T2, a new matrix of the same size T1 ⊗ T2 such that:

[T1 ⊗ T2]_{ij} = ⋃ over k of ([T1]_{ik} ⊗ [T2]_{kj}).    (5)

We also use the ∪ symbol to denote coordinate-wise union of cells in the matrices it operates on. The operator ⊗ we define is not associative, but it is distributive over ∪. This means that for R, S1, S2 ⊆ M it holds that:

R ⊗ (S1 ∪ S2) = (R ⊗ S1) ∪ (R ⊗ S2),  (S1 ∪ S2) ⊗ R = (S1 ⊗ R) ∪ (S2 ⊗ R).

In addition, whenever R = ∅, then for any S, R ⊗ S = S ⊗ R = ∅. This property maintains the upper-triangularity of the transitive closure of T.

Figure 5 gives the algorithm for multiplying two elements of the matrix. The algorithm is composed of two components. The first component (Step 2 in Figure 5) adds nonterminals, for example, A, to cell (i, j), if there is some B and C in (i, k) and (k, j), respectively, such that there exists a rule A → B C and the span endpoints denoted by k are the points where the rule specifies that spans of B and C should meet.

Figure 5: An algorithm for the product of two matrix elements.

In order to make this first component valid, we have to make sure that k can indeed serve as a concatenation point for (i, j). Step 2 verifies this using the concept of configurations defined earlier.
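For illustration, config2 can be computed directly from the formal definition above. The helper below is our own hypothetical sketch; it assumes the spans of α are given as tuples of variable names:

```python
# Endpoint 2k (right end of B's span k) is non-combining when beta_k ends
# some span of A; endpoint 2k-1 (left end) when beta_k begins some span of A.
def config2(alpha, beta):
    cfg = set()
    for k, b in enumerate(beta, start=1):
        if any(span[-1] == b for span in alpha):
            cfg.add(2 * k)
        if any(span[0] == b for span in alpha):
            cfg.add(2 * k - 1)
    return cfg

# TAG rule of Figure 1: A[<b1 c1, c2 b2>] -> B[<b1, b2>] C[<c1, c2>]
alpha = [('b1', 'c1'), ('c2', 'b2')]
print(config2(alpha, ('b1', 'b2')))   # {1, 4}, as in the text
```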
To apply a rule r : A[α] → B[β] C[γ], we must have an entry for (B, i, k) in cell (i, k), where i is a set of indices corresponding to the endpoints of B selected by config2(r), and k is a set of indices corresponding to the endpoints of B selected by [2φ(B)] \ config2(r). This condition is enforced by Step 2c of Figure 5. Similarly, we must have an entry for (C, k, j) in cell (k, j), where k is a set of indices corresponding to the endpoints of C selected by config3(r), and j is a set of indices corresponding to the endpoints of C selected by [2φ(C)] \ config3(r). This is enforced by Step 2d. Finally, the spans defined by B and C must not overlap in the string. To guarantee that the spans do not overlap, we sort the endpoints of A and check that each position in the sorted list is derived from either B or C, as required by the configuration of A in r. This check is performed in Step 2e of Figure 5.

Given that T is initialized to be upper triangular, the properties of matrix multiplication guarantee that all matrix powers of T are upper triangular. We now proceed to show that upper-triangular matrices are sufficient in terms of the grammar. In particular, we need to show the following lemma:

Lemma 1. For each application of a single-initial rule A → B C, it is possible to create an entry for A by multiplying two upper-triangular matrices T1 and T2, where T1 contains an entry for B, and T2 contains an entry for C.

Proof. A nonterminal B appears in a cell above the diagonal if its row address is smaller than its column address, which in turn occurs if the leftmost endpoint of B appears in the row address rather than the column address. The row address for B contains the endpoints of B that are also endpoints of A. Our normal form for LCFRS rules ensures that the leftmost endpoint of B forms the leftmost endpoint of A. Therefore the leftmost endpoint of B is in B’s row address, and B is above the diagonal. The row address of nonterminal C in T2 must contain the endpoints of C that combine with endpoints of B. For single-initial rules, these endpoints include the leftmost endpoint of C, guaranteeing that C appears above the diagonal. Because each instance of A can be produced by combining elements of T1 and T2 that are above the diagonal, each instance of A can be produced by multiplying two upper-triangular matrices. ∎

#### 4.2.1 Copy Operations

The first component of the algorithm is sound, but not complete. If we were to use just this component in the algorithm, then we would obtain in each cell (i, j) of the transitive closure of T a subset of the possible nonterminals that can span m(i, j). The reason this happens is that our addressing scheme is “over-complete.” This means that any pair of addresses (i, j) and (k, ℓ) are equivalent if m(i, j) = m(k, ℓ). We need to ensure that the transitive closure, using ⊗, propagates, or copies, nonterminals from one cell to its equivalents. This is done by the second component of the algorithm, in Steps 3–6.

The algorithm does this kind of copying by using the set of six special “copy” symbols, {→1, →2, →3, ←1, ←2, ←3}. These symbols copy nonterminals from one cell to the other in multiple stages. Suppose that we need to copy a nonterminal from cell (i, j) to cell (k, ℓ), where m(i, j) = m(k, ℓ), indicating that the two cells describe the same set of indices in the input string. We must move the indices in i ∩ ℓ from the row address to the column address, and we must move the indices in j ∩ k from the column address to the row address. We will move one index at a time, adding nonterminals to intermediate cells along the way.
We now illustrate how our operations move a single index from a row address to a column address (moving from column to row is similar). Let x indicate the index we wish to move, meaning that we wish to copy a nonterminal in cell (i, j) to cell (remove(i, x), insert(j, x)). Because we want our overall parsing algorithm to take advantage of fast matrix multiplication, we accomplish the copy operations through a sequence of three matrix multiplications, as shown in Figure 6. The first multiplication involves the nonterminal A in cell (i, j) in the left matrix, and a →1 symbol in cell (j, insert(j, x̄)) in the right matrix, resulting in a matrix with nonterminal A in cell (i, insert(j, x̄)). This intermediate result is redundant in the sense that the index x appears in the row address and the marked index x̄ appears in the column address. To remove x from the row address, we multiply on the left with a matrix containing the →2 symbol in cell (remove(i, x), i), resulting in a matrix with nonterminal A in cell (remove(i, x), insert(j, x̄)). Finally, we multiply by a third matrix to replace the marked index x̄ with the unmarked index x. This is done by multiplying on the right with a matrix containing the →3 symbol in cell (insert(j, x̄), insert(j, x)).

Figure 6: An example of moving an index from the row address to the column address. Nonterminal B in T1 is copied from cell ((1, 8), (2, 7)) to cell ((1), (2, 7, 8)) through three matrix multiplications. First, multiplying by T2 on the right yields T1T2. Multiplying this matrix by T3 on the left yields T3T1T2. Finally, multiplying this matrix by T4 on the right yields T3T1T2T4.

The key idea behind this three-step process is to copy elements from one cell to another through intermediate cells. In matrix multiplication, only cells that share a row or a column index actually interact when doing multiplication. Therefore, in order to copy a nonterminal from (i, j) to another cell which represents the same set of spans, we have to copy it through cells such as (i, insert(j, x̄)) that share the index i with (i, j).

In order to guarantee that our operations copy nonterminals only into cells with equivalent addresses, the seed matrix contains the special symbol →1 only in cells (j, k) such that k = insert(j, x̄) for some x. When a →1 in cell (j, k) combines with a nonterminal A in cell (i, j), the result contains A only if x ∈ i, guaranteeing that the index added to the column address was originally present in the row address. In addition, the condition that i contains only unmarked indices (in the multiplication operator) and the condition that j contains only unmarked indices (in the initialization of the seed matrix) guarantee that only one index is marked in the address of any non-empty matrix cell.

Similar conditions apply to the →2 operation. The seed matrix contains →2 only in cells (i, k) such that i = remove(k, x) for some x, guaranteeing that the operation only removes one index at a time.
Furthermore, when a →2 in cell (i, k) combines with a nonterminal A in cell (k, j), the result contains A only if x̄ ∈ j. This guarantees that the new entry includes all the original indices, meaning that any index we remove from the row address is still present as a marked index in the column address. The →3 operator removes the mark on the index x̄ in the column address, completing the entire copying process. The condition |i ∪ j| = 2φ(A) ensures that the removal of the mark from x̄ does not take place until after x has been removed from the row address. Taken together, these conditions ensure that after a sequence of one →1, one →2, and one →3, A is copied into all cells having the form (remove(i, x), insert(j, x)) for some x.

To move an index from the column address to the row address, we use one ←1 operation followed by one ←2 operation and one ←3 operation. The conditions on these three special symbols are analogous to the conditions on →1, →2, and →3 outlined earlier, and ensure that we copy from cell (i, j) to cells of the form (insert(i, x), remove(j, x)) for some x.

We now show that matrix powers of the upper-triangular seed matrix T copy nonterminals between all equivalent cells above the diagonal.

Lemma 2. Let (i, j) and (k, ℓ) be unmarked matrix addresses, in a seed matrix T indexed by row and column addresses from N(d), where d > min{|i|, |j|} and d > min{|k|, |ℓ|}. Assume that min i = min k and either k = remove(i, x) and ℓ = insert(j, x) for some x, or k = insert(i, x) and ℓ = remove(j, x) for some x. If A appears in cell (i, j) of T^{(n)}, then A appears in cell (k, ℓ) of T^{(n+3)}. Furthermore, the copy operations do not introduce nonterminals into any other cells with unmarked addresses.

Proof. The condition on d guarantees that we can form row and column addresses long enough to hold the redundant representations with one address shared between row and column. This condition is only relevant in the case where i, j, k, and ℓ are all of the same length; in this case we need to construct temporary addresses with length one greater, as in the example in Figure 6. A can be added to cell (k, ℓ) through a sequence of three matrix multiplications by combining with the symbols →1, →2, and →3, or with ←1, ←2, and ←3. Because T^{(n)} is upper triangular, min i = min (i ∪ j), meaning that A’s leftmost index is in its row address. The condition min i = min k implies that we are not moving this leftmost index from row to column. The addresses of the three copy symbols required are all formed by adding or removing x or x̄ to or from the row and column addresses (i, j); because the leftmost index of i is not modified, the copy symbols that are required are all above the diagonal, and are present in the seed matrix T. Therefore, A appears in cell (k, ℓ) of T^{(n+3)}.

To see that nonterminals are not introduced into any other cells, observe that →3 and ←3 are the only symbols that introduce nonterminals into unmarked addresses. They can only apply when a marked index is present, and when the total number of indices is 2φ(A). This can only occur after either a →1 has introduced a marked index and a →2 has removed the corresponding unmarked index, or a ←1 has introduced a marked index and a ←2 has removed the corresponding unmarked index. ∎

Putting together sequences of these operations to move indices, we arrive at the following lemma:

Lemma 3. Let (i, j) and (k, ℓ) be matrix addresses such that m(i, j) = m(k, ℓ), in a seed matrix T indexed by row and column addresses from N(d), where d > min{|i|, |j|} and d > min{|k|, |ℓ|}.
Then, for any nonterminal A in cell (i, j) in T^{(n)}, A will also appear in cell (k, ℓ) of the power matrix T^{(n+6d)}.

Proof. Nonterminal A can be copied through a series of intermediate cells by moving one index at a time from i to ℓ, and from j to k. We begin by moving indices from the row address i to the column address if |i| > |j|, or from the column address j to the row address otherwise. We must move up to d indices from row to column, and up to d indices from column to row. Each move takes three matrix multiplications, for a total of at most 6d matrix multiplications. ∎

### 4.3 Determining the Contact Rank

The dimensions of the matrix T (and its transitive closure) are |N| × |N|. The set N is of size O(n^d), where d is a function of the grammar. When a given pair of cells in two matrices of the type of T are multiplied, we are essentially combining endpoints from the first multiplicand’s column address with endpoints from the second multiplicand’s row address. As such, we have to ensure that d allows us to generate all possible sequences of endpoints that could potentially combine with a given fixed LCFRS.

We refer to the endpoints at which a rule’s r.h.s. nonterminals meet as combining points. For example, in the simple case of a CFG with a rule S → NP VP, there is one combining point where NP and VP meet. For the TAG rule shown in Figure 1, there are two combining points where nonterminals B and C meet. For each rule r in the LCFRS grammar, we must be able to access the combining points as row and column addresses in order to apply the rule with matrix multiplication. Thus, d must be at least the maximum number of combining points of any rule in the grammar. The number of combining points δ(r) for a rule r can be computed by comparing the number of spans on the l.h.s. and r.h.s. of the rule:

δ(r) = φ(B) + φ(C) − φ(A).

Note that δ(r) depends only on the skeleton of r (see Section 2), and therefore it can be denoted by δ(A → B C).

For each nonterminal on the r.h.s. of the rule, the address of its matrix cell consists of the combination points in one dimension (either row or column), and the other points in the other dimension of the matrix. For the r.h.s. nonterminal B in rule A → B C, the number of non-combination endpoints is:

2φ(B) − δ(A → B C).

Thus, taking the maximum size over all addresses in the grammar, the largest addresses needed are of length:

d = max over (A → B C) ∈ ℛ of max{δ(A → B C), 2φ(B) − δ(A → B C), 2φ(C) − δ(A → B C)}.

We call this number the contact rank of the grammar. As examples, the contact rank of a CFG is 1, while the contact rank of a TAG is 2. A simple algebraic manipulation shows that the contact rank can be expressed as follows:

d = max over (A → B C) ∈ ℛ of max{φ(B) + φ(C) − φ(A), φ(A) + φ(B) − φ(C), φ(A) + φ(C) − φ(B)}.

We require our grammars to be in single-initial form, as described in Section 2. Because the process of converting an LCFRS grammar to single-initial form increases its fan-out by at most one, the contact rank is also increased by at most one.

### 4.4 Balanced Grammars

We define the configuration set of a nonterminal A to be the set of all configurations (Section 4.1.1) in which A appears in a grammar rule, including both appearances on the r.h.s. and as the l.h.s. For example, in a CFG, the configuration set of any nonterminal is {{1}}, because, as shown in Section 4.1.1, nonterminals are always used in the unique configuration {1}. For TAG, the configuration set of any nonterminal is {{1, 4}} because, as in CFG, nonterminals are always used in the same configuration.

A configuration c of nonterminal B is balanced if |c| = φ(B). This means that the number of contact points and non-contact points are the same.
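The closed form for the contact rank in the previous section translates directly into code; a short sketch (ours):

```python
# Contact rank: for each rule A -> B C, take the maximum address length
# delta(r), 2*phi(B) - delta(r), or 2*phi(C) - delta(r), then maximize
# over all rules.
def contact_rank(rules, phi):
    def per_rule(A, B, C):
        delta = phi[B] + phi[C] - phi[A]        # combining points delta(r)
        return max(delta, 2 * phi[B] - delta, 2 * phi[C] - delta)
    return max(per_rule(*r) for r in rules)

# CFG: all fan-outs 1, so d = 1; TAG: all fan-outs 2, so d = 2.
print(contact_rank([('S', 'NP', 'VP')], {'S': 1, 'NP': 1, 'VP': 1}))  # 1
print(contact_rank([('A', 'B', 'C')], {'A': 2, 'B': 2, 'C': 2}))      # 2
```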
The contact rank d defined in the previous section is the maximum size of any configuration of any nonterminal in any rule. For a given nonterminal B, if φ(B) < d, then we can copy entries between equivalent cells. To see this, suppose that we are moving from cell (i, j) to (k, ℓ), where the length of i is greater than the length of j. As long as we move the first index from row to column, rather than from column to row, the intermediate results will require addresses no longer than the length of i. However, if φ(B) = d, then every configuration in which B appears is balanced. If φ(B) = d and B appears in more than one configuration, that is, |config(B)| > 1, it is impossible to copy entries for B between the cells using a matrix of size (2n)^d. This is because we cannot move indices from row to column or from column to row without creating an intermediate row or column address of length greater than d as a result of the first →1 or ←1 operation.

We define a balanced grammar to be a grammar containing a nonterminal B such that φ(B) = d and |config(B)| > 1. As examples, a CFG is not balanced because, although for each nonterminal B, φ(B) = d = 1, the number of configurations |config(B)| is 1. Similarly, TAG is not balanced, because each nonterminal has only one configuration. ITGs are balanced, because, for each nonterminal B, φ(B) = d = 2, and nonterminals can be used in two configurations, corresponding to straight and inverted rules.

The following condition will determine which of two alternative methods we use for the top level of our parsing algorithm.

Condition 4.1 (Unbalanced Grammar Condition). There is no nonterminal B such that φ(B) = d and |config(B)| > 1.

This condition guarantees that we can move nonterminals as necessary with matrix multiplication:

Lemma 4. Let (i, j) and (k, ℓ) be matrix addresses such that m(i, j) = m(k, ℓ). Under Condition 4.1, for any nonterminal A in cell (i, j) in T^{(n)}, A will also appear in cell (k, ℓ) of the power matrix T^{(n+6d)}.

Proof. The number of A’s endpoints is 2φ(A) = |i| + |j| = |k| + |ℓ|. If the grammar is not balanced, then d > φ(A), and therefore d > min{|i|, |j|} and d > min{|k|, |ℓ|}. By Lemma 3, A will appear in cell (k, ℓ) of the power matrix T^{(n+6d)}. ∎

### 4.5 Computing the Transitive Closure of T

The transitive closure T^+ of a matrix T is the result of repeated applications of the matrix multiplication operator described in Equation (5). With T being the seed matrix, we define

T^+ = ⋃ over i ≥ 1 of T^{(i)},

where T^{(i)} is defined recursively as:

T^{(1)} = T,  T^{(i)} = ⋃ over j from 1 to i − 1 of (T^{(j)} ⊗ T^{(i−j)}).

Under Condition 4.1, one can show that given an LCFRS derivation tree t over the input string, each node in t must appear in the transitive closure matrix T^+. Specifically, for each node in t representing a nonterminal A spanning endpoints {(ℓ1, ℓ2), (ℓ3, ℓ4), …, (ℓ_{2φ(A)−1}, ℓ_{2φ(A)})}, the cell [T^+]_{ij} contains A at every (i, j) such that m(i, j) = {(ℓ1, ℓ2), (ℓ3, ℓ4), …, (ℓ_{2φ(A)−1}, ℓ_{2φ(A)})}. This leads to the following result:

Lemma 5. Under Condition 4.1, the transitive closure of T is such that [T^+]_{ij} represents the set of nonterminals that are derivable for the given spans in m(i, j).

Proof. The proof is by induction over the length of the LCFRS derivations. By Lemma 1, derivations consisting of a single rule A[α] → B[β] C[γ] produce A ∈ T^{(2)} for i and j corresponding to the non-combination points of B and C. For all other i and j such that m(i, j) = {(ℓ1, ℓ2), (ℓ3, ℓ4), …, (ℓ_{2φ(A)−1}, ℓ_{2φ(A)})}, an entry is produced in T^{(6d+2)} by Lemma 4.
By induction, T(s(6d+2)) contains entries for all LCFRS derivations of depth s, and T+ contains entries for all LCFRS derivations of any length. In the other direction, we need to show that all entries A in T+ correspond to a valid LCFRS derivation of nonterminal A spanning endpoints m(i, j). This can be shown by induction over the number of matrix multiplications. During each multiplication, entries created in the product matrix correspond either to the application of an LCFRS rule with l.h.s. A, or to the movement of an index between row and column address for a previously recognized instance of A. ∎

The transitive closure still yields a useful result, even when Condition 4.1 does not hold. To show how it is useful, we need to define the "copying" operator, Π, which takes a matrix T′ of the same type as T, and computes Π(T′) using the following procedure:

1. Define e(i, j) = {(i′, j′) | m(i′, j′) = m(i, j)}—that is, the set of configurations equivalent to (i, j).
2. Set [Π(T′)]ij to the union of [T′]i′j′ over all (i′, j′) ∈ e(i, j).

This means that Π takes a completion step, and copies all nonterminals between all equivalent addresses in T′. Note that the Π operator can be implemented such that it operates in time O(nd). All it requires is taking O(nd) unions of sets (corresponding to the sets of nonterminals in the matrix cells), where each set is of size O(1) with respect to the sentence length (i.e., the size is only a function of the grammar), and each union is over O(1) sets.

This procedure leads to a recognition algorithm for binary LCFRSs that do not satisfy Condition 4.1 (we also assume that these binary LCFRSs have no unary cycles or ϵ-rules). This algorithm is given later, in Figure 9. It operates by iterating through transitive closure steps and copying steps until convergence. When we take the transitive closure of T, we are essentially computing a subset of the derivable nonterminals. Then, the copying step (with Π) propagates nonterminals through equivalent cells. Now, if we take the transitive closure again, and there is any way to derive new nonterminals because of the copying step, the resulting matrix will have at least one new nonterminal. Otherwise, it will not change, and as such, we have recognized all possible derivable nonterminals in each cell.

Lemma 6 For any single-initial LCFRS, when Step 2 of the algorithm in Figure 9 converges, T is such that [T]ij represents the set of nonterminals that are derivable for the given spans in m(i, j).

Proof Any LCFRS derivation of a nonterminal can be decomposed into a sequence of rule applications and copy operations, and by induction over the length of the derivation, all derivations will be found. Each matrix operation only produces derivable LCFRS nonterminals, and by induction over the number of steps of the algorithm, only derivable nonterminals will be found. ∎

#### 4.5.1 Reduction of Transitive Closure to Boolean Matrix Multiplication

Valiant (1975) showed that his algorithm for computing the multiplication of two matrices, in terms of a multiplication operator similar to ours, can be reduced to the problem of Boolean matrix multiplication. His transitive closure algorithm requires as a black box this two-matrix multiplication algorithm. We follow here a similar argument. We can use Valiant's algorithm for the computation of the transitive closure, because our multiplication operator is distributive (with respect to ∪).
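The shape of this setup can be seen in a few lines of code (our own sketch, with set-valued cells represented as Ruby arrays; the ⊗ product of Equation (5) is supplied as a black box, exactly as Valiant's closure algorithm requires). This naive fixed point computes the same closure as Valiant's method, only without the speedup:

```ruby
# Union of two matrices whose cells are arrays of entries.
def mat_union(a, b)
  a.each_index.map { |i| a[i].each_index.map { |j| a[i][j] | b[i][j] } }
end

# Naive fixed point T+ = T ∪ (T ⊗ T) ∪ ...; `product` implements ⊗.
def closure_with(product, seed)
  t = seed
  loop do
    succ = mat_union(t, product.(t, t))
    return t if succ == t  # no new entries anywhere: this is T+
    t = succ
  end
end
```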
To complete our argument, we need to show, similarly to Valiant, that the product of two matrices using our multiplication operator can be reduced to Boolean matrix multiplication (Figure 7).

Figure 7 Reduction of transitive closure to Boolean matrix multiplication. Boolean matrix operations implementing the matrix multiplication example of Section 3.

Consider the problem of multiplying two matrices T1 and T2, and say T1 ⊗ T2 = T3. To reduce it to Boolean matrix multiplication, we create pairs of matrices, Gr and Hr, where r ranges over the set of rules in the grammar. The size of Gr and Hr is |N| × |N|. If r = A[α] → B[β] C[γ], we set [Gr]ik to be 1 if the nonterminal B appears in [T1]ik and B, i, and k meet the conditions of Step 2c of Figure 5. Similarly, we set [Hr]kj to be 1 if the nonterminal C appears in [T2]kj and C, k, and j meet the conditions of Step 2d. All other cells, in both Gr and Hr, are set to 0. Note that Gr and Hr for all r are upper triangular Boolean matrices.

In addition, we create pairs of matrices, GA and HA, where A ranges over the set of nonterminals. We set [GA]ik to be 1 if the nonterminal A appears in [T1]ik, regardless of the conditions of Step 2c of Figure 5. Similarly, we set [HA]kj to be 1 if the nonterminal A appears in [T2]kj, regardless of the conditions of Step 2d. All other cells, in both GA and HA, are set to 0. Again, GA and HA for all A are upper triangular Boolean matrices.

Finally, we create six additional matrices, one for each of the six copying symbols of Section 4.1 (the "from row", "from column", "to row", "to column", "unmark row", and "unmark column" symbols). These matrices indicate the positions in which each symbol appears in the seed matrix T defined in Figure 4: for each copying symbol s, the corresponding Boolean matrix has a 1 in cell (i, j) only if (s, i, j) ∈ T.

Now, for each rule r, we compute the matrix Ir = Gr Hr. The total number of matrix multiplications required is constant in n. Now, T3 can be obtained by multiplying these matrices, and applying the conditions of Figure 5:

1. For each (i, j) and each rule r = A → B C, check whether [Ir]ij = 1. If Step 2e is satisfied for A, i, and j, then add A to [T3]ij.
2.–7. For each nonterminal A and each of the six copying symbols, compute the product JA of the corresponding pair of matrices (GA or HA together with the Boolean matrix of that copying symbol). For each (i, j), add A to [T3]ij if [JA]ij = 1 and the side condition of Figure 5 associated with that symbol holds for (i, j) (these conditions require the presence of a marking symbol x in i or in j, or, for the unmarking steps, that |i| + |j| = 2φ(A)).

Lemma 7 The matrix product operation for two matrices of size (2n)d × (2n)d can be computed in time O(nωd), if two m × m Boolean matrices can be multiplied in time O(mω).

Proof The result of the algorithm above is guaranteed to be the same as the result of matrix multiplication using the ⊗ operation of Figure 5, because it considers all combinations of i, j, and k and all pairs of nonterminals and copy symbols, and applies the same set of conditions.
This is possible because each of the conditions in Figure 5 applies either to a pair (i, k) or (k, j), in which case we apply the condition to the input matrices of the Boolean matrix multiplication, or to the pair (i, j), in which case we apply the condition to the result of the Boolean matrix multiplication. Crucially, no condition in Figure 5 involves i, j, and k simultaneously. The Boolean matrix algorithm takes time O(nωd) for each matrix multiplication, while the pre- and post-processing steps for each matrix multiplication take only O(n2d). The number of Boolean matrix multiplications depends on the grammar, but is constant with respect to n, yielding an overall runtime of O(nωd). ∎

The final parsing algorithm is given in Figure 8. It works by computing the seed matrix T, and then finding its transitive closure. Finally, it checks whether the start symbol appears in a cell with an address that spans the whole string. If so, the string is in the language of the grammar.

Figure 8 Algorithm for recognizing binary linear context-free rewriting systems when Condition 4.1 is satisfied by the LCFRS.

As mentioned in the previous section, the algorithm in Figure 8 finds the transitive closure of a matrix under our definition of matrix multiplication. The operations ∪ and ⊗ used in our matrix multiplication distribute. The ⊗ operator takes the cross product of two sets, and applies a filtering condition to the results; the fact that (x ⊗ y) ∪ (x ⊗ z) = x ⊗ (y ∪ z) follows from the fact that it does not matter whether we take the cross product of the union, or the union of the cross products. However, unlike in the case of standard matrix multiplication, our ⊗ operation is not associative. In general, x ⊗ (y ⊗ z) ≠ (x ⊗ y) ⊗ z, because the combination of y and z may be allowed by the LCFRS grammar, whereas the combination of x and y is not.

Lemma 8 The transitive closure of a matrix of size (2n)d × (2n)d can be computed in time O(nωd), if 2 < ω < 3, and two m × m Boolean matrices can be multiplied in time O(mω).

Proof We can use the algorithm of Valiant for finding the closure of upper triangular matrices under distributive, non-associative matrix multiplication. Because we can perform one matrix product in time O(nωd) by Lemma 7, the algorithm of Valiant (1975, Theorem 2) can be used to compute the transitive closure also in time O(nωd). ∎

When Valiant's paper was published, the best known algorithm for such multiplication was Strassen's algorithm, with M(n) = O(n2.8074). Since then, it has been found that M(n) = O(nω) for ω < 2.38 (see also Section 1). There are ongoing attempts to further reduce ω, or to find lower bounds for M(n).

The algorithm for transitive closure gives one of the main results of this article:

Theorem 1 A single-initial binary LCFRS meeting Condition 4.1 can be parsed in time O(nωd), where d is the contact rank of the grammar, 2 < ω < 3, and two m × m Boolean matrices can be multiplied in time O(mω).

Proof By Lemma 8, Step 2 of the algorithm in Figure 8 takes O(nωd). By Lemma 5, the result of Step 2 gives all nonterminals that are derivable for the given spans in m(i, j). ∎

Parsing a binary LCFRS rule A → B C with standard chart parsing techniques requires time O(nφ(A)+φ(B)+φ(C)). Let p be the maximum of φ(A) + φ(B) + φ(C) over all rules in the grammar. The worst-case complexity of LCFRS chart parsing techniques is O(np).
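As a worked instance of this comparison: for a CFG in Chomsky Normal Form, every fan-out is 1, so d = 1 and p = 3, and the algorithm of Figure 8 wins whenever ω < 3:

$$\omega d = \omega \approx 2.38 < 3 = p \quad\Longrightarrow\quad O(n^{\omega}) = o(n^{3}).$$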
We can now ask the question: in which cases is the algorithm in Figure 8 asymptotically more efficient than standard chart parsing techniques with respect to n? That is, in which cases is nωd = o(np)? Clearly, this would hold whenever ωd < p. By definition of d and p, a sufficient condition for this is that for any rule A → B C it holds that:4

ω · max{φ(B) + φ(C) − φ(A), φ(A) + φ(B) − φ(C), φ(A) + φ(C) − φ(B)} < φ(A) + φ(B) + φ(C)

This means that for any rule, each of the three corresponding inequalities should hold. Algebraic manipulation shows that this is equivalent to requiring that each fan-out be sufficiently large relative to the sum of the other two, with threshold ratio (ω − 1)/(ω + 1): below 0.41 for the best known algorithm for matrix multiplication, and approximately 0.47 for Strassen's algorithm.

We turn now to analyze the complexity of the algorithm in Figure 9, giving the main result of this article for arbitrary LCFRS:

Theorem 2 A single-initial binary LCFRS can be parsed in time O(nωd+1), where d is the contact rank of the grammar, 2 < ω < 3, and two m × m Boolean matrices can be multiplied in time O(mω).

Proof The algorithm of Figure 9 works by iteratively applying the transitive closure and the copying operator until convergence. At convergence, we have recognized all derivable nonterminals, by Lemma 6. Each transitive closure has the asymptotic complexity of O(nωd) by Lemma 8. Each Π application has the asymptotic complexity of O(nd). As such, the total complexity is O(tnωd), where t is the number of iterations required to converge. At each iteration, we discover at least one new nonterminal. The total number of nodes in the derivation for the recognized string is O(n) (assuming no unary cycles or ϵ-rules). As such, t = O(n), and the total complexity of this algorithm is O(nωd+1). ∎

Figure 9 Algorithm for recognizing binary LCFRS when Condition 4.1 is not necessarily satisfied by the LCFRS.

Our algorithm is a recognition algorithm that is applicable to binary LCFRS. As such, it can be applied to any LCFRS by first reducing it to a binary LCFRS. We discuss results for specific classes of LCFRS in this section, and return to the general binarization process in Section 7.6.

LCFRS subsumes context-free grammars, which is the formalism that Valiant (1975) focused on. Valiant showed that the problem of CFG recognition can be reduced to the problem of matrix multiplication, and, as such, the complexity of CFG recognition in that case is O(nω). Our result generalizes Valiant's result. CFGs (in Chomsky Normal Form) can be reduced to a binary LCFRS with f = 1. As such, d = 1 for CFGs, and our algorithm yields a complexity of O(nω). (Note that CFGs satisfy Condition 4.1, and therefore we can use a single transitive closure step.)

LCFRS is a broad family of grammars, and it subsumes many other well-known grammar formalisms, some of which were discovered or developed independently of LCFRS. Two such formalisms are tree-adjoining grammars (Joshi and Schabes 1997) and synchronous context-free grammars. In the next two sections, we explain how our algorithmic result applies to these two formalisms.

### 6.1 Mildly Context-Sensitive Language Recognition

Linear context-free rewriting systems fall under the realm of mildly context-sensitive grammar formalisms.
They subsume four important mildly context-sensitive formalisms that were developed independently and later shown to be weakly equivalent by Vijay-Shanker and Weir (1994): tree-adjoining grammars (Joshi and Schabes 1997), linear indexed grammars (Gazdar 1988), head grammars (Pollard 1984), and combinatory categorial grammars (Steedman 2000). Weak equivalence here refers to the idea that any language generated by a grammar in one of these formalisms can also be generated by some grammar in any of the other formalisms among the four. It can be verified that all of these formalisms are unbalanced, single-initial LCFRSs, and as such, the algorithm in Figure 8 applies to them.

Rajasekaran and Yooseph (1998) show that tree-adjoining grammars can be parsed with an asymptotic complexity of O(M(n2)) = O(n4.76). Although they did not discuss this, the weak equivalence between the four formalisms mentioned here implies that all of them can be parsed in time O(M(n2)). Our algorithm generalizes this result. We now give the details.

Our starting point for this discussion is head grammars. Head grammars are a specific case of linear context-free rewriting systems, not just in the formal languages they define, but also in the way these grammars are described. They are described using concatenation production rules and wrapping production rules, which are directly transferable to LCFRS notation. Their fan-out is 2. We focus in this discussion on "binary head grammars," defined analogously to binary LCFRS: the rank of all production rules has to be 2. The contact rank of binary head grammars is 2. As such, our work shows that the complexity of recognizing binary head grammar languages is O(M(n2)) = O(n4.76).

Vijay-Shanker and Weir (1994) show that linear indexed grammars (LIGs) can actually be reduced to binary head grammars. Linear indexed grammars are extensions of CFGs, and a linguistically motivated restricted version of indexed grammars, the latter of which were developed by Aho (1968) for the goal of handling variable binding in programming languages. The main difference between LIGs and CFGs is that the nonterminals carry a "stack," with a separate set of stack symbols. LIG production rules copy the stack on the left-hand side to one of the nonterminal stacks on the right-hand side,5 potentially pushing or popping one symbol in the new copy of the stack. For our discussion, the important detail about the reduction of LIGs to head grammars is that it preserves the rank of the production rules. As such, our work shows that binary LIGs can also be recognized in time O(n4.76).

Vijay-Shanker and Weir (1994) additionally address the issue of reducing combinatory categorial grammars to LIGs. The combinators they allow are function application and function composition. The key detail here is that their reduction of CCG is to an LIG with rank 2, and, as such, our algorithm applies to CCGs as well, which can be recognized in time O(n4.76).

Finally, Vijay-Shanker and Weir (1994) reduced tree-adjoining grammars to combinatory categorial grammars. The TAGs they tackle are in "normal form," such that the auxiliary trees are binary (all TAGs can be reduced to normal-form TAGs). Such TAGs can be converted to a weakly (but not necessarily strongly) equivalent CCG, and as such, our algorithm applies to TAGs as well. As mentioned earlier, this supports the result of Rajasekaran and Yooseph (1998), who show that TAG can be recognized in time O(M(n2)).
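The two bounds quoted in this section coincide; with contact rank d = 2 for all four formalisms and M(m) = O(mω):

$$M(n^{2}) = O\big((n^{2})^{\omega}\big) = O(n^{2\omega}) \approx O(n^{4.76}) = O(n^{\omega d}).$$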
For an earlier discussion of the connections between TAG parsing and Boolean matrix multiplication, see Satta (1994).

### 6.2 Synchronous Context-Free Grammars

SCFGs are widely used in machine translation to model the simultaneous derivation of translationally equivalent strings in two natural languages, and are equivalent to the syntax-directed translation schemata of Aho and Ullman (1969). SCFGs are a subclass of LCFRS where each nonterminal has fan-out 2: one span in one language and one span in the other. Because the first span of the l.h.s. nonterminal always contains spans from both r.h.s. nonterminals, SCFGs are always single-initial. Binary SCFGs, also known as ITGs, have no more than two nonterminals on the r.h.s. of a rule, and are the most widely used model in syntax-based statistical machine translation. Synchronous parsing with traditional tabular methods for ITG is O(n6), as each of the three nonterminals in a rule has a fan-out of two.

ITGs, unfortunately, do not satisfy Condition 4.1, and therefore we have to use the algorithm in Figure 9. Still, just like with TAG, each rule combines two nonterminals of fan-out 2 using two combination points. Thus, d = 2, and we achieve a bound of O(n2ω+1) for ITG, which is O(n5.76) using the current state of the art for matrix multiplication.

We achieve even greater gains for the case of multi-language synchronous parsing. Generalizing ITG to allow two nonterminals on the right-hand side of a rule in each of k languages, we have an LCFRS with fan-out k. Traditional tabular parsing has an asymptotic complexity of O(n3k), whereas our algorithm has the complexity of O(nωk+1).

Another interesting case of a synchronous formalism for which our algorithm improves the best known result is that of binary synchronous TAGs (Shieber and Schabes 1990), that is, synchronous TAGs in which all auxiliary trees are binary. This formalism can be reduced to a binary LCFRS. A tabular algorithm for such a grammar has the asymptotic complexity of O(n12). With our algorithm, d = 4 for this formalism, and as such its asymptotic complexity in that case is O(n9.52).

In this section, we discuss some extensions to our algorithm and open problems.

### 7.1 Turning Recognition into Parsing

The algorithm we presented focuses on recognition: given a string and a grammar, it can decide whether the string is in the language of the grammar or not. From an application perspective, perhaps a more interesting algorithm is one that returns an actual derivation tree, if it identifies that the string is in the language. It is not difficult to adapt our algorithm to return such a parse, without changing the asymptotic complexity of O(nωd+1). Once the transitive closure of T is computed, we can backtrack to find such a parse, starting with the start symbol in a cell spanning the whole string. When we are in a specific cell, we check all possible combination points (there are d of those) and nonterminals, and if we find pairs of combination points and nonterminals that are valid in the chart, then we backtrack to the corresponding cells. The asymptotic complexity of this post-processing step is O(nd+1), which is less than O(nωd) (ω > 2, d ≥ 1). This post-processing step corresponds to an algorithm that finds a parse tree, given a pre-calculated chart. If the chart were not already available when our algorithm finishes, the asymptotic complexity of this step would correspond to the asymptotic complexity of a naïve tabular parsing algorithm.
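The claim that the backtracking step is dominated by the main computation reduces to a one-line inequality:

$$d + 1 < \omega d \iff 1 < (\omega - 1)\,d,$$

which holds for every d ≥ 1 as soon as ω > 2.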
It remains an open problem to adapt our algorithm to probabilistic parsing, for example, finding the highest-scoring parse given a probabilistic or a weighted LCFRS (Kallmeyer and Maier 2010). See more details in Section 7.3.

### 7.2 General Recognition for Synchronous Parsing

Similarly to LCFRS, the rank of an SCFG is the maximal number of nonterminals that appear in the right-hand side of a rule. Any SCFG can be binarized into an LCFRS grammar. However, when the SCFG rank is arbitrary, the fan-out of the resulting LCFRS grammar can be larger than 2. This happens because binarization creates intermediate nonterminals that span several substrings, denoting binarization steps of the rule. These substrings are eventually combined into two spans, to yield the language of the SCFG grammar (Huang et al. 2009).

Our algorithm does not always improve the asymptotic complexity of SCFG parsing over tabular methods. For example, Figure 10 shows the combination of spans for the rule [S → A B C D, B D A C], along with a binarization into three simpler LCFRS rules. A naïve tabular algorithm for this rule would have the asymptotic complexity of O(n10), but the binarization shown in Figure 10 reduces this to O(n8). Our algorithm gives a complexity of O(n9.52), as the second step in the binarization shown consists of a rule with d = 4.

Figure 10 Upper left: Combination of spans for SCFG rule [S → A B C D, B D A C]. Upper right and bottom row: Three steps in parsing the binarized rule.

### 7.3 Generalization to Weighted Logic Programs

Weighted logic programs (WLPs) are declarative programs, in the form of Horn clauses similar to those that Prolog uses, that can be used to formulate parsing algorithms such as CKY and other types of dynamic programming algorithms or NLP inference algorithms (Eisner, Goldlust, and Smith 2005; Cohen, Simmons, and Smith 2011). For a given Horn clause, WLPs also require a "join" operation that sums (in some semiring) over a set of possible values of the free variables in the Horn clause. With CKY, for example, this sum is performed over the mid-point concatenating two spans. This join operation is also the type of operation we address in this paper (for LCFRS) in order to improve its asymptotic complexity.

It remains an open question whether we can generalize our algorithm to arbitrary weighted logic programs. In order to create an algorithm that takes as input a weighted logic program (and a set of axioms) and "recognizes" whether the goal is achievable, we would need a generic way of specifying the set N, which was specialized to LCFRS in this case. Not only that, we would have to specify N in such a way that the asymptotic complexity of the WLP would improve over a simple dynamic programming algorithm (or a memoization technique).

In addition, in this paper we focus on the problem of recognition and parsing for unweighted grammars. Benedí and Sánchez (2007) showed how to generalize Valiant's algorithm in order to compute inside probabilities for a PCFG and a string. Even if we were able to generalize our addressing scheme to WLPs, it remains an open question whether we can go beyond recognition (or unweighted parsing).

### 7.4 Rytter's Algorithm

Rytter (1995) gives an algorithm for CFG parsing with the same time complexity as Valiant's, but a somewhat simpler divide-and-conquer strategy.
Rytter's algorithm works by first recursively finding all chart items entirely within the first half of the string and all chart items entirely within the second half of the string. The combination step uses a shortest-path computation to identify the sequence of chart items along a spine of the final parse tree, where the spine extends from the root of the tree to the terminal in position n/2. Rytter's algorithm relies on the fact that this spine, consisting of chart items that cross the midpoint of the string, forms a single path from the root to one leaf of the derivation tree. This property does not hold for general LCFRS, because two siblings in the derivation tree may both correspond to multiple spans in the string, each containing material on both sides of the string midpoint. For this reason, Rytter's algorithm does not appear to generalize easily to LCFRS.

### 7.5 Relation to Multiple Context-Free Grammars

Nakanishi et al. (1998) develop a matrix multiplication parsing algorithm for multiple context-free grammars (MCFGs). When these grammars are given in a binary form, they can be reduced to binary LCFRS. Similarly, binary LCFRS can be reduced to binary MCFGs. The algorithm that Nakanishi et al. develop is simpler than ours, and does not directly tackle the problem of transitive closure for LCFRS. More specifically, Nakanishi et al. multiply a seed matrix such as our T by itself in several steps, and then follow up with a copying operation between equivalent cells. They repeat this n times, where n is the sentence length. As such, the asymptotic complexity of their algorithm is identical for both balanced and unbalanced grammars, a distinction they do not make.

The complexity analysis of Nakanishi et al. is different from ours, but in certain cases it yields identical results. For example, if φ(A) = f for all nonterminals A, and the grammar is balanced, then both our algorithm and their algorithm give a complexity of O(nωf+1). If the grammar is unbalanced, then our algorithm gives a complexity of O(nωf), whereas the asymptotic complexity of their algorithm remains O(nωf+1). As such, Nakanishi et al.'s algorithm does not generalize Valiant's algorithm: its asymptotic complexity for context-free grammars is O(nω+1) and not O(nω).

Nakanishi et al. pose in their paper an open problem, which can loosely be reworded as the problem of finding an algorithm that computes the transitive closure of T without the extra O(n) factor that their algorithm incurs. In our paper, we provide a solution to this open problem for the case of single-initial, unbalanced grammars. The core of the solution lies in the matrix multiplication copying mechanism described in Section 4.2.

### 7.6 Optimal Binarization Strategies

The two main grammar parameters that affect the asymptotic complexity of parsing with LCFRS (in its general form) are the fan-out of the nonterminals and the rank of the rules. With tabular parsing, we can actually refer to the parsing complexity of a specific rule in the grammar. Its complexity is O(np), where the parsing complexity p is the total fan-out of all nonterminals in the rule. For binary rules of the form A → B C, p = φ(A) + φ(B) + φ(C). To optimize the time complexity of tabular parsing with a binary LCFRS equivalent to another non-binary LCFRS, we would want to minimize the time complexity it takes to parse each rule. As such, our goal is to minimize φ(A) + φ(B) + φ(C) in the resulting binary grammar.
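Writing the tabular objective and the contact-rank objective (Section 4.3) side by side for a rule A → B C makes the contrast discussed next explicit:

$$p(r) = \varphi(A) + \varphi(B) + \varphi(C), \qquad d(r) = \max\{\varphi(B)+\varphi(C)-\varphi(A),\; \varphi(A)+\varphi(B)-\varphi(C),\; \varphi(A)+\varphi(C)-\varphi(B)\}.$$

Since the three quantities inside the maximum sum exactly to p(r), we also get p(r) ≤ 3 d(r), the bound used below.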
Gildea (2011) has shown that this tabular metric corresponds to the treewidth of a dependency graph that is constructed from the grammar. It is not known whether finding the optimal binarization of an LCFRS is an NP-complete problem, but Gildea shows that a polynomial-time algorithm would imply improved approximation algorithms for the treewidth of general graphs.

In general, the optimal binarization for tabular parsing may not be the same as the optimal binarization for parsing with our algorithm based on matrix multiplication. In order to optimize the complexity of our algorithm, we want to minimize d, which is the maximum over all rules A → B C of max{φ(B) + φ(C) − φ(A), φ(A) + φ(B) − φ(C), φ(A) + φ(C) − φ(B)}. For a fixed binarized grammar, d is always less than p, the tabular parsing complexity, and, hence, the optimal d* over binarizations of an LCFRS is always less than the optimal p* for tabular parsing. However, whether any savings can be achieved with our algorithm depends on whether ωd* < p*, or ωd* + 1 < p* in the case of balanced grammars. Our criterion does not seem to correspond closely to a well-studied graph-theoretic concept such as treewidth, and it remains an open problem to find an efficient algorithm that minimizes this definition of parsing complexity.

It is worth noting that p ≤ 3d. As such, this gives a lower bound on the time complexity of our algorithm relative to tabular parsing using the same binarized grammar: if O(nt1) is the asymptotic complexity of our algorithm, and O(nt2) is the asymptotic complexity of a tabular algorithm, then t1 ≥ (ω/3)t2.

We described a parsing algorithm for binary linear context-free rewriting systems that has the asymptotic complexity of O(nωd+1), where ω < 2.38, d is the "contact rank" of the grammar (the maximal number of combination points in the rules of the grammar in single-initial form), and n is the string length. Our algorithm has the asymptotic complexity of O(nωd) for the subset of binary LCFRSs that are unbalanced. Our result generalizes the algorithm of Valiant (1975), and also reinforces existing results about mildly context-sensitive parsing for tree-adjoining grammars (Rajasekaran and Yooseph 1998). Our result also implies that inversion transduction grammars can be parsed in time O(n2ω+1), and that synchronous parsing with k languages has the asymptotic complexity of O(nωk+1), where k is the number of languages.

Table A.1 provides a table of notation for symbols used in this article.

Table A.1

| Symbol | Description | 1st mention |
| --- | --- | --- |
| M(n) | The complexity of Boolean n × n matrix multiplication | §1 |
| ω | Best known complexity for M(n), M(n) = O(nω) | §1 |
| [n] | Set of integers {1, … , n} | §2 |
| [n]0 | [n] ∪ {0} | §2 |
|  | Nonterminals of the LCFRS | §2 |
|  | Terminal symbols of the LCFRS | §2 |
|  | Variables that denote spans in the grammar | §2 |
|  | Rules of the LCFRS | §2 |
| A, B, C | Nonterminals | §2 |
| f | Maximal fan-out of the LCFRS | Eq. (3) |
| φ(A) | Fan-out of nonterminal A | §2 |
| y | Denotes a variable (potentially subscripted) | §2 |
| T | Seed matrix | §3 |
| N, N(d) | Set of indices for addresses in the matrix | Eq. (4) |
| i, j | Indices for cells in T; i, j ∈ N | §4.1 |
| d | Grammar contact rank | §4.1 |
| M | Tij is a subset of M | §4.1 |
|  | Copying/marking symbols for rows | §4.1 |
|  | Copying/marking symbols for columns | §4.1 |
| n | Length of sentence to be parsed | §1 |
|  | Total order on the set of indices of T | §4.1 |
| m(i, j) | Merged sorted sequence of i and j, divided into pairs | §4.1 |
| remove(v, x) | Removal of x from a sequence v | Figure 5 |
| insert(v, x) | Insertion of x into a sequence v | §4.5 |
| Π | Copying operator | §4.5 |
## Acknowledgments

The authors thank the anonymous reviewers for their comments, and Adam Lopez and Giorgio Satta for useful conversations. This work was supported by NSF grant IIS-1446996 and by EPSRC grant EP/L02411X/1.

## Notes

1 Without placing a bound on f, the problem of recognition of LCFRS languages is NP-hard (Satta 1992).

2 The symbols will be used for "copying commands": (1) "from row"; (2) "from column"; (3) "to row"; (4) "to column"; (5) "unmark row"; (6) "unmark column".

3 To see that Equation (6) is true, consider that if we take φ(B) + φ(C) variables from the spans of the r.h.s. and try to combine them together into φ(A) sequences per span of the l.h.s., we will get φ(B) + φ(C) − φ(A) points where variables "touch." If φ(A) = 1, then this is clearly true. For φ(A) > 1, consider that for each span, we "lose" one contact point.

4 For two sets of real numbers, X and Y, it holds that if for all x ∈ X there is a y ∈ Y such that x < y, then max X < max Y.

5 General indexed grammars copy the stack to multiple nonterminals on the right-hand side.

## References

Abboud, Amir, Arturs Backurs, and Virginia Vassilevska Williams. 2015. If the current clique algorithms are optimal, so is Valiant's parser. arXiv preprint arXiv:1504.01431.

Aho, Alfred V. 1968. Indexed grammars—an extension of context-free grammars. Journal of the ACM, 15(4):647–671.

Aho, Alfred V. and Jeffery D. Ullman. 1969. Syntax directed translations and the pushdown assembler. Journal of Computer and System Sciences, 3:37–56.

Benedí, José-Miguel and Joan-Andreu Sánchez. 2007. Fast stochastic context-free parsing: A stochastic version of the Valiant algorithm. In J. Martí, A. M. Mendonça, and J. Serat, editors, Pattern Recognition and Image Analysis. Springer, pages 80–88.

Cocke, John and Jacob T. Schwartz. 1970. Programming languages and their compilers: Preliminary notes. Technical report, Courant Institute of Mathematical Sciences, New York University.

Cohen, Shay B., Giorgio Satta, and Michael Collins. 2013. Approximate PCFG parsing using tensor decomposition.
In Proceedings of the 2013 Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL-13), pages 487–496, Atlanta, GA.

Cohen, Shay B., Robert J. Simmons, and Noah A. Smith. 2011. Products of weighted logic programs. Theory and Practice of Logic Programming, 11(2–3):263–296.

Coppersmith, D. and S. Winograd. 1987. Matrix multiplication via arithmetic progressions. In Proceedings of the 19th Annual ACM Conference on Theory of Computing, pages 1–6, New York, NY.

Dunlop, Aaron, Nathan Bodenstab, and Brian Roark. 2010. Reducing the grammar constant: An analysis of CYK parsing efficiency. Technical report CSLU-2010-02, OHSU.

Earley, Jay. 1970. An efficient context-free parsing algorithm. Communications of the ACM, 13(2):94–102.

Eisner, Jason, Eric Goldlust, and Noah A. Smith. 2005. Compiling Comp Ling: Practical weighted dynamic programming and the Dyna language. In Proceedings of HLT-EMNLP, pages 281–290, Vancouver.

Eisner, Jason and Giorgio Satta. 1999. Efficient parsing for bilexical context-free grammars and head automaton grammars. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 457–464, Baltimore, MD.

Gazdar, Gerald. 1988. Applicability of indexed grammars to natural languages. Springer.

Gildea, Daniel. 2011. Grammar factorization by tree decomposition. Computational Linguistics, 37(1):231–248.

Huang, Liang, Hao Zhang, Daniel Gildea, and Kevin Knight. 2009. Binarization of synchronous context-free grammars. Computational Linguistics, 35(4):559–595.

Joshi, Aravind K. and Yves Schabes. 1997. Tree-adjoining grammars. In G. Rozenberg and A. Salomaa, editors, Handbook of Formal Languages. Springer, pages 69–123.

Kallmeyer, Laura. 2010. Parsing Beyond Context-Free Grammars. Cognitive Technologies. Springer.

Kallmeyer, Laura and Wolfgang Maier. 2010. Data-driven parsing with probabilistic linear context-free rewriting systems. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010), pages 537–545, Beijing.

Kasami, Tadao. 1965. An efficient recognition and syntax-analysis algorithm for context-free languages. Technical Report AFCRL-65-758, Air Force Cambridge Research Lab.

Le Gall, François. 2014. Powers of tensors and fast matrix multiplication. In Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation, ISSAC '14, pages 296–303, New York, NY.

Nakanishi, Ryuichi, Keita Takada, Hideki Nii, and Hiroyuki Seki. 1998. Efficient recognition algorithms for parallel multiple context-free languages and for multiple context-free languages. IEICE Transactions on Information and Systems, 81(11):1148–1161.

Pollard, Carl J. 1984. Generalized Phrase Structure Grammars, Head Grammars and Natural Languages. Ph.D. thesis, Stanford University.

Rajasekaran, Sanguthevar and Shibu Yooseph. 1998. TAL parsing in O(M(n2)) time. Journal of Computer and System Sciences, 56:83–89.

Raz, Ran. 2002. On the complexity of matrix product. In Proceedings of the 34th Annual ACM Symposium on Theory of Computing, pages 144–151, Montreal.

Rytter, Wojciech. 1995. Context-free recognition via shortest paths computation: A version of Valiant's algorithm. Theoretical Computer Science, 143(2):343–352.

Satta, Giorgio. 1992. Recognition of linear context-free rewriting systems.
In Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics, pages 89–95, Newark, DE.

Satta, Giorgio. 1994. Tree-adjoining grammar parsing and Boolean matrix multiplication. Computational Linguistics, 20(2):173–191.

Shieber, S. M. 1985. Evidence against the context-freeness of natural language. Linguistics and Philosophy, 8:333–343. D. Reidel Publishing Company.

Shieber, Stuart M. and Yves Schabes. 1990. Synchronous tree-adjoining grammars. In Proceedings of the 13th Conference on Computational Linguistics, Volume 3, pages 253–258, Stroudsburg, PA.

Steedman, Mark. 2000. The Syntactic Process. Language, Speech, and Communication. MIT Press, Cambridge, MA.

Strassen, V. 1969. Gaussian elimination is not optimal. Numerische Mathematik, 14(3):354–356.

Valiant, Leslie G. 1975. General context-free recognition in less than cubic time. Journal of Computer and System Sciences, 10:308–315.

Vijay-Shanker, K. and David Weir. 1994. The equivalence of four extensions of context-free grammars. Mathematical Systems Theory, 27:511–546.

Wu, Dekai. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–403.

Younger, Daniel H. 1967. Recognition and parsing of context-free languages in time n3. Information and Control, 10(2):189–208.

## Author notes

* School of Informatics, University of Edinburgh, Edinburgh, EH8 9AB, United Kingdom. E-mail: scohen@inf.ed.ac.uk.

** Department of Computer Science, University of Rochester, Rochester, NY 14627, United States. E-mail: gildea@cs.rochester.edu.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8699492812156677, "perplexity": 1081.6751786384928}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710237.57/warc/CC-MAIN-20221127105736-20221127135736-00231.warc.gz"}
http://mathhelpforum.com/calculus/81408-integration.html
# Math Help - Integration

1. ## Integration

How do you evaluate $\int x^{1/2} \sin x \, dx$?

2. You can use the Taylor expansion of $\sin x$ around $x=0$, then multiply each term of the series by $x^{\frac{1}{2}}$ and finally integrate term by term...

Kind regards

$\chi$ $\sigma$

3. Originally Posted by CandyKanro

How do you evaluate $\int x^{1/2} \sin x \, dx$?

An exact answer cannot be found using a finite number of elementary functions. Where has this integral come from? Is an exact answer required?
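Carrying out the term-by-term integration suggested above:

$$\int x^{1/2}\sin x\,dx = \sum_{k=0}^{\infty}\frac{(-1)^k}{(2k+1)!}\int x^{2k+\frac{3}{2}}\,dx = \sum_{k=0}^{\infty}\frac{(-1)^k\,x^{2k+\frac{5}{2}}}{(2k+1)!\left(2k+\frac{5}{2}\right)} + C$$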
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9957938194274902, "perplexity": 771.9974808867496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657133417.25/warc/CC-MAIN-20140914011213-00336-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
http://www.mathwarehouse.com/solid-geometry/cylinder/formula-area-of-cylinder.php
# Formula Area of a Cylinder

## How to Find a Cylinder's Area

This page examines the properties of a right circular cylinder. A cylinder has a radius (r) and a height (h) (see picture below). This shape is similar to a can. The surface area is the sum of the areas of the top and bottom circles (which are the same) and the area of the rectangle (the label that wraps around the can).

## The Cylinder Area Formula

The picture below illustrates how the formula for the area of a cylinder is simply the sum of the areas of the top and bottom circles plus the area of a rectangle. This rectangle is what the cylinder would look like if we 'unraveled' it. Below is a picture of the general formula for area.

Practice Problems on Area of a Cylinder

What is the area of the cylinder with a radius of 2 and a height of 6?

What is the area of the cylinder with a radius of 3 and a height of 5?

What is the area of the cylinder with a radius of 6 and a height of 7?
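Written out, the general formula described above (top circle, bottom circle, and the unrolled rectangle of width 2πr and height h) is:

$$A = 2\pi r^{2} + 2\pi r h$$

For example, for the first practice problem (r = 2, h = 6): A = 2π(2)² + 2π(2)(6) = 8π + 24π = 32π ≈ 100.5 square units.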
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9077808856964111, "perplexity": 275.1570857302499}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400378862.11/warc/CC-MAIN-20141119123258-00248-ip-10-235-23-156.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1091096/isomorphism-of-extensions
# Isomorphism of Extensions

Let $\mathbb{L}$ be an extension field of $\mathbb{K}$ and $\alpha, \beta\in\mathbb{L}$. If they have the same minimal polynomial then $\mathbb{K}(\alpha)\simeq\mathbb{K}(\beta)$, because if: $$\begin{array}{rccccrccc} \phi:&\frac{\mathbb{K}[x]}{<p(x)>}&\longrightarrow&\mathbb{K}[\beta]&\mbox{and}&\psi:&\mathbb{K}[\alpha]&\longrightarrow&\frac{\mathbb{K}[x]}{<p(x)>}\\ &f(x)+<p(x)>&\longrightarrow&f(\beta)&&&f(\alpha)&\longrightarrow&f(x)+<p(x)> \end{array},$$ are isomorphisms, where $p(x)$ is the minimal polynomial of $\alpha$ and $\beta$, then $\varphi:\mathbb{K}(\alpha)\longrightarrow\mathbb{K}(\beta)$, defined by $\varphi=\phi\circ\psi$, is an isomorphism for which $\left.\varphi\right|_\mathbb{K}$ is the identity on $\mathbb{K}$, so $\varphi$ is an isomorphism of extensions and $\mathbb{K}[\alpha]\simeq\mathbb{K}[\beta]$ (note that $\frac{\mathbb{K}[x]}{<p(x)>}$ is a field because $<p(x)>$ is a maximal ideal, so $\mathbb{K}[\alpha]$ is a field and $\mathbb{K}[\alpha]=\mathbb{K}(\alpha)$). I believe this is right, but I was thinking: if we just consider $$\begin{array}{rccc} \sigma:&\mathbb{K}(\alpha)&\longrightarrow&\mathbb{K}(\beta)\\ &f(\alpha)&\longrightarrow&f(\beta) \end{array},$$ then $\left.\sigma\right|_\mathbb{K}$ is also the identity on $\mathbb{K}$ and $\sigma$ is an isomorphism, so I don't need $\alpha$ and $\beta$ to have the same minimal polynomial for $\mathbb{K}(\alpha)$ and $\mathbb{K}(\beta)$ to be isomorphic? Can I find some conditions on $\mathbb{L}$ and $\mathbb{K}$ such that there is always a minimal polynomial of which both $\alpha$ and $\beta$ are roots? Thanks.

• $\sigma$ is an isomorphism because: Surjective: If $f(\beta)\in\mathbb{K}(\beta)\Longrightarrow f(x)\in\mathbb{K}[x]\Longrightarrow f(\alpha)\in\mathbb{K}(\alpha)\Longrightarrow\sigma(f(\alpha))=f(\beta)$. Injective: Let $f(\beta), g(\beta)\in\mathbb{K}(\beta)$ such that $f(\beta)=g(\beta)$. We know that because $\mathbb{K}(\beta)$ is a simple extension of $\mathbb{K}$, any element of $\mathbb{K}(\beta)$ has a unique expression of the form $m(\beta)$, where $m(x)\in\mathbb{K}[x]$ and the degree of $m(x)$ is less than the degree of the minimal polynomial of $\beta$, so $f(x)=m(x)=g(x)\Longrightarrow f(\alpha)=g(\alpha)$. Homomorphism: $\sigma(f(\alpha)+g(\alpha))=\sigma((f+g)(\alpha))=(f+g)(\beta)=f(\beta)+g(\beta)=\sigma(f(\alpha))+\sigma(g(\alpha))$, $\sigma(f(\alpha)g(\alpha))=\sigma((fg)(\alpha))=(fg)(\beta)=f(\beta)g(\beta)=\sigma(f(\alpha))\sigma(g(\alpha))$.
• What isomorphism exactly is that $\;\sigma\;$ in the last part of your question? – Timbuc Jan 4 '15 at 19:32
• In general your map $\sigma$ won't be well defined. In order that $\sigma$ is well defined, you need that $f(\alpha) = 0$ implies $f(\beta) = 0$... – Hans Giebenrath Jan 4 '15 at 19:35
• If I have $f(\alpha)=0\Longrightarrow f(\beta)=0$ then are the Surjective, Injective and Homomorphism steps right? – donikvep Jan 4 '15 at 20:06

Your map is in general not a morphism. Consider $\mathbb K = \mathbf Q$ and $\alpha = \sqrt 2$ as well as $\beta = \sqrt 3$. Then by definition $\sigma(\sqrt2) = \sqrt 3$. But then $$0 = \sqrt 3^2 - 3 = \sigma(\sqrt 2)^2 - \sigma(3) = \sigma((\sqrt 2)^2 - 3) = \sigma(-1) = - \sigma(1) = -1.$$ So $\sigma$ cannot be a morphism. Note that $\sigma$ being a morphism implies that every polynomial $f \in \mathbf Q[X]$ with $f(\alpha) = 0$ will also satisfy $f(\beta) = 0$. This gives the condition on the minimal polynomial you need.
• If I have this condition, then is what I did to show that $\sigma$ is an isomorphism right? – donikvep Jan 4 '15 at 20:23
• I think so. But you have to be careful at the step $\sigma((fg)(\alpha)) = (fg)(\beta)$, because $fg$ can have degree larger than $m$. Moreover, when you have this condition, you immediately get that $\alpha$ and $\beta$ have the same minimal polynomial: Let $m_\alpha$ and $m_\beta$ be the minimal polynomials. From $m_\alpha(\alpha) = 0$ we get $m_\alpha(\beta) = 0$. Thus $m_\beta$ divides $m_\alpha$. Since both are irreducible we get equality. – Hans Giebenrath Jan 4 '15 at 20:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9828569889068604, "perplexity": 109.5967089393454}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540510336.29/warc/CC-MAIN-20191208122818-20191208150818-00427.warc.gz"}
https://brilliant.org/problems/gravitation-on-rigid-body/
# Gravitation on Rigid Body

Classical Mechanics

A large spherical mass $$M$$ is fixed at one position and two identical point masses $$m$$ are kept on a line passing through the centre of $$M$$. The point masses are connected by a rigid massless rod of length $$l$$ and this assembly is free to move along the line connecting them. All three masses interact only through their mutual gravitational interaction. When the point mass nearer to $$M$$ is at distance $$r = 3l$$ from $$M$$, the tension in the rod is zero for $$m = k \left( \frac M {288} \right)$$. Find the value of $$k$$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6926053166389465, "perplexity": 278.1293660282558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689686.40/warc/CC-MAIN-20170923141947-20170923161947-00681.warc.gz"}
http://monkeysocietyblog.blogspot.com/2012_07_23_archive.html
Monday, July 23, 2012

The Ultimate Answer to the Ultimate Question

Why do we exist? Aha! Good God... Here we go again. Damn. The deathless question about existence. Why do we exist? Oh brother, how can we ever answer that question? I think there is no answer. Why? Because it is pathetic to ask such a thing. But somehow, asking such a question may be as noble as it sounds. It shows a good mark of intelligence to ever come to the point of asking in such a tone. I call it the upper level, or the higher consciousness. However, no matter what, there can never be a complete answer to the question of existence. Perhaps there is a reason why there can't be an answer. Perhaps the question itself is wrong in the first place.

Look. Long time ago, you were born... sent to school... taught by people around... and then arrived at a moment where you can be proud of all the knowledge you have accumulated your entire life. But there is a lingering problem. The problem of NEXT. What is next? What is next. Hell, what is next? And it can only be possible to question that way because deep inside you are quite bored. So the next question would naturally arise, the ultimate question: WHY DO WE EXIST?

But then again... the question is perhaps wrong. The question is false from the onset. What I am saying therefore is to look for the right question first before we can ever hope to find the right answer. So what is the right question then? Oh damn! That is the answer!
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8083450794219971, "perplexity": 803.226042796141}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637899531.38/warc/CC-MAIN-20141030025819-00209-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.linstitute.net/archives/711870
# CIE A Level Physics Revision Notes 20.2.2 Magnetic Flux Linkage

• The magnetic flux linkage is a quantity commonly used for solenoids, which are made of N turns of wire
• Magnetic flux linkage is defined as:

The product of the magnetic flux and the number of turns

• It is calculated using the equation:

ΦN = BAN

• Where:
  • Φ = magnetic flux (Wb)
  • N = number of turns of the coil
  • B = magnetic flux density (T)
  • A = cross-sectional area (m2)
• The flux linkage ΦN has the units of weber turns (Wb turns)
• As with magnetic flux, if the field lines are not completely perpendicular to the plane of the area they are passing through, only the perpendicular component of the flux density contributes
  • Therefore, the flux linkage in this case is equal to: ΦN = BAN cos(θ)

#### Worked Example

A solenoid of circular cross-section with radius 0.40 m and 300 turns is facing perpendicular to a magnetic field with magnetic flux density of 5.1 mT. Determine the magnetic flux linkage for this solenoid.

Step 1: Write out the known quantities

Cross-sectional area, A = πr2 = π(0.40)2 = 0.503 m2

Magnetic flux density, B = 5.1 mT

Number of turns of the coil, N = 300 turns

Step 2: Write down the equation for the magnetic flux linkage

ΦN = BAN

Step 3: Substitute in values and calculate

ΦN = (5.1 × 10-3) × 0.503 × 300 = 0.7691 = 0.77 Wb turns (2 s.f.)
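If the same solenoid were tilted, the perpendicular-component equation above applies; for example, with a hypothetical angle of θ = 60° between the field and the normal to the coil:

$$\Phi N = BAN\cos\theta = 0.7691 \times \cos 60^{\circ} \approx 0.38\ \text{Wb turns}$$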
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9730314016342163, "perplexity": 2013.301220443317}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00104.warc.gz"}
https://www.proctor-it.com/category/functional-programming/
# Ruby Tuesday – composing filter

As mentioned in the last Ruby Tuesday on using compose and map, I ended by saying that we would take a look at how we could take advantage of composition for our filter. As a refresher, here is our Sequence module.

module Sequence
  def self.my_map(f, items)
    do_map = lambda do |accumulator, item|
      accumulator.dup << f.call(item)
    end
    my_reduce([], do_map, items)
  end

  def self.my_filter(predicate, items)
    do_filter = lambda do |accumulator, item|
      if (predicate.call(item))
        accumulator.dup << item
      else
        accumulator
      end
    end
    my_reduce([], do_filter, items)
  end

  def self.my_reduce(initial, operation, items)
    return nil unless items.any?{|item| true}
    accumulator = initial
    for item in items do
      accumulator = operation.call(accumulator, item)
    end
    accumulator
  end

  def self.my_compose(functions, initial)
    apply_fn = ->(accum, fn) { fn.(accum) }
    my_reduce(initial, apply_fn, functions)
  end

  @@map = method(:my_map).curry
  @@filter = method(:my_filter).curry
  @@reduce = method(:my_reduce).curry
  @@compose = method(:my_compose).curry

  def self.map
    @@map
  end

  def self.filter
    @@filter
  end

  def self.reduce
    @@reduce
  end

  def self.compose
    @@compose
  end
end

So let's start taking a look at how we would be able to compose our filter lambdas together, by seeing if we can find all numbers that are multiples of three, and then multiples of five.

multiple_of_three = ->(x) { x % 3 == 0}
# => #<Proc:0x007faeb28926b0@(pry):1 (lambda)>
multiple_of_five = ->(x) { x % 5 == 0}
# => #<Proc:0x007faeb3866c58@(pry):2 (lambda)>
Sequence.compose.([Sequence.filter.(multiple_of_three), Sequence.filter.(multiple_of_five)]).((1..100))
# => [15, 30, 45, 60, 75, 90]

The catch again is that we are traversing the first list of all 100 elements to find the items that are multiples of 3, and then filtering the filtered list to find only those that are multiples of 5. What if we could do this in one pass, and compose the predicates together?

And since we want to test this to make sure we get a good result, we will compose those predicates together and just test our composed function with the number 15.

three_and_five = Sequence.compose.([multiple_of_three, multiple_of_five])
# => #<Proc:0x007faeb4905d20 (lambda)>
three_and_five.(15)
# NoMethodError: undefined method `%' for true:TrueClass
# from (pry):2:in `block in __pry__'

It fails with a NoMethodError telling us that we can't use the method % on an object of type TrueClass. Don't we want to get back a true instead of an error though?

If we roll back the compose into its nested function call version, it starts to become clear where the error is coming from.

multiple_of_five.(multiple_of_three.(15))
# NoMethodError: undefined method `%' for true:TrueClass
# from (pry):2:in `block in __pry__'

So what is happening is that we get back true from calling multiple_of_three with 15, and we then pass that true into multiple_of_five, when what we wanted to do was to pass in the 15 and make sure it was a multiple of 5 as well.

What we really want is to evaluate every lambda with the value of 15, and then see if all of the predicate functions succeed in their checks. So we will start with a very naive, and "un-curried", version to prove out our concept. Our first pass will be to map over each predicate and invoke it with the value we want to test, resulting in a list of booleans showing whether each check succeeded or failed. We then reduce all of those items together via an and operation to get an overall success status.
def my_all_succeed_naive(predicates, value)
  check_results = Sequence.map.(->(f) {f.(value)}, predicates)
  Sequence.reduce.(true, ->(accum, item) {accum && item}, check_results)
end

my_all_succeed_naive([multiple_of_three, multiple_of_five], 15)
# => true
my_all_succeed_naive([multiple_of_three, multiple_of_five], 14)
# => false
my_all_succeed_naive([multiple_of_three, multiple_of_five], 5)
# => false

It looks like this works: 15 is a multiple of both 3 and 5, while 14 and 5 are not. But we still check whether the item is a multiple of 5 even when our multiple of 3 check has already failed. Could we do better, and short circuit our evaluation if we get a false? Let's try.

def my_all_succeed(predicates, value)
  for predicate in predicates do
    return false unless predicate.(value)
  end
  true
end

So let's take a look at the difference between the two, just to make sure we are on the right track. First we will create a "long running predicate check" to add to our chain.

pass_after = ->(x, value) { sleep(x); true }.curry
# => #<Proc:0x007faeb310d498 (lambda)>

Then we will time it using Benchmark#measure (and don't forget to require 'benchmark' first). First with a success, and then with a failure against the first predicate.

Benchmark.measure do
  my_all_succeed([multiple_of_three, pass_after.(3), multiple_of_five], 15)
end
# => #<Benchmark::Tms:0x007faeb28f24c0
#     @cstime=0.0,
#     @cutime=0.0,
#     @label="",
#     @real=3.0046792929642834,
#     @stime=0.0,
#     @total=0.0,
#     @utime=0.0>

Benchmark.measure do
  my_all_succeed([multiple_of_three, pass_after.(3), multiple_of_five], 14)
end
# => #<Benchmark::Tms:0x007faeb6018c88
#     @cstime=0.0,
#     @cutime=0.0,
#     @label="",
#     @real=2.5073997676372528e-05,
#     @stime=0.0,
#     @total=0.0,
#     @utime=0.0>

We can see that it takes three seconds to run if we succeed, but only a fraction of a second if we fail on the first check. And just to make sure our earlier assumptions were correct, we will benchmark the same predicate list against the naive version with the value of 14, so it will fail on the first check of a multiple of three.

Benchmark.measure do
  my_all_succeed_naive([multiple_of_three, pass_after.(3), multiple_of_five], 14)
end
# => #<Benchmark::Tms:0x007faeb487d218
#     @cstime=0.0,
#     @cutime=0.0,
#     @label="",
#     @real=3.0028793679666705,
#     @stime=0.0,
#     @total=0.0,
#     @utime=0.0>

And it does indeed take just over 3 seconds to complete. So let's add this to our Sequence module, and get it to be able to be curried.

def self.my_all_succeed(predicates, value)
  for predicate in predicates do
    return false unless predicate.(value)
  end
  true
end

@@all_succeed = method(:my_all_succeed).curry

def self.all_succeed
  @@all_succeed
end

And we check that we can use it off our Sequence module partially applied.

Sequence.all_succeed.([multiple_of_three, multiple_of_five]).(14)
# => false
Sequence.all_succeed.([multiple_of_three, multiple_of_five]).(15)
# => true

So now we can get back to how we would use this with filter, by using our new Sequence::all_succeed.

three_and_five_multiple = Sequence.all_succeed.([multiple_of_three, multiple_of_five])
# => #<Proc:0x007faeb28685e0 (lambda)>
Sequence.filter.(three_and_five_multiple).((1..100))
# => [15, 30, 45, 60, 75, 90]

And there we go, we have now composed our predicates into one function which we can then pass to Sequence::filter and only have to walk through the list once.
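As a quick aside, Ruby's built-in Enumerable#all? short-circuits in just the same way, returning false as soon as a predicate fails, so my_all_succeed could also be written in terms of it. This is only an alternative sketch of the same behavior, not a change to the Sequence module above.

def my_all_succeed(predicates, value)
  # all? stops evaluating the block at the first falsy result
  predicates.all? { |predicate| predicate.(value) }
end

my_all_succeed([multiple_of_three, pass_after.(3), multiple_of_five], 14)
# returns false almost immediately, since all? never reaches the sleeping predicate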
As we have seen how to compose predicates together for an "and" style composition, next week we will look at building an "or" style composition of predicates.

–Proctor

# Ruby Tuesday – compose and map

As mentioned last week, now that we have our compose function, we will take a look at some of the properties we get when using our map and compose functions together. Here is our Sequence module as we left it, with compose added.

module Sequence
  def self.my_map(f, items)
    do_map = lambda do |accumulator, item|
      accumulator.dup << f.call(item)
    end
    my_reduce([], do_map, items)
  end

  def self.my_filter(predicate, items)
    do_filter = lambda do |accumulator, item|
      if (predicate.call(item))
        accumulator.dup << item
      else
        accumulator
      end
    end
    my_reduce([], do_filter, items)
  end

  def self.my_reduce(initial, operation, items)
    return nil unless items.any?{|item| true}
    accumulator = initial
    for item in items do
      accumulator = operation.call(accumulator, item)
    end
    accumulator
  end

  def self.my_compose(functions, initial)
    apply_fn = ->(accum, fn) { fn.(accum) }
    my_reduce(initial, apply_fn, functions)
  end

  @@map = method(:my_map).curry
  @@filter = method(:my_filter).curry
  @@reduce = method(:my_reduce).curry
  @@compose = method(:my_compose).curry

  def self.map
    @@map
  end

  def self.filter
    @@filter
  end

  def self.reduce
    @@reduce
  end

  def self.compose
    @@compose
  end
end

Like last week, we have a list of names, and our goal is to get the list of "first" names back and have them be capitalized.

names = ["jane doe", "john doe", "arthur dent", "lori lemaris", "DIANA PRINCE"]
# => ["jane doe", "john doe", "arthur dent", "lori lemaris", "DIANA PRINCE"]

So we start with our base functions that we are going to use in our map calls.

split = ->(delimiter, str) {str.split(delimiter)}.curry
# => #<Proc:0x007ffdfa203a00 (lambda)>
whitespace_split = split.(" ")
# => #<Proc:0x007ffdfa8391e0 (lambda)>
first = ->(xs){ xs[0] }.curry
# => #<Proc:0x007ffdfa2a26a0 (lambda)>
capitalize = ->(s) { s.capitalize }
# => #<Proc:0x007ffdfa210ed0@(pry):13 (lambda)>

And we create our different instances of map calls, which are partially applied map functions with the appropriate lambda passed to them.

name_parts = Sequence.map.(whitespace_split)
# => #<Proc:0x007ffdfa1a0248 (lambda)>
firsts = Sequence.map.(first)
# => #<Proc:0x007ffdfa169568 (lambda)>
capitalize_all = Sequence.map.(capitalize)
# => #<Proc:0x007ffdfa86a1f0 (lambda)>

And as we saw last week, we can nest the function calls together,

capitalize_all.(firsts.(name_parts.(names)))
# => ["Jane", "John", "Arthur", "Lori", "Diana"]

or we can use compose to create a "pipeline" of function calls.

capitalized_first_names = Sequence.compose.([name_parts, firsts, capitalize_all])
# => #<Proc:0x007ffdfa1c1c18 (lambda)>
capitalized_first_names.(names)
# => ["Jane", "John", "Arthur", "Lori", "Diana"]

Here's where things can start to get interesting. In our capitalized first names example, we go through the list once per transformation we want to apply. First we transform the list of names into a list of split names, then we transform that into a list of only the first item of each split name, and finally we transform that into a list of the capitalized names. That could be a lot of processing if we had more transformations, and/or a much longer list. This seems like a lot of work. Let's look at it from near the other extreme: the case where we only have one name in our list.

capitalized_first_names.(['tARa cHAsE'])
# => ["Tara"]

In this case, the fact that we have a list at all is almost incidental.
For a list of only one item, we split the string, get the first element, and capitalize that value. So what if we did that for just a single string, not even in a list of some sort?

capitalize_first_name = Sequence.compose.([whitespace_split, first, capitalize])
# => #<Proc:0x007ffdfa121100 (lambda)>
capitalize_first_name.("tARa cHAsE")
# => "Tara"

We can compose each of these individual operations together to get a function that will transform a name into the result we want. And since we have a "transformation" function, we can pass that function to our map function for a given list of names.

Sequence.map.(capitalize_first_name, names)
# => ["Jane", "John", "Arthur", "Lori", "Diana"]

Lo and behold, we get the same results as above for when we composed our map function calls.

Sequence.compose.([Sequence.map.(whitespace_split),
                   Sequence.map.(first),
                   Sequence.map.(capitalize)]).(names)
# => ["Jane", "John", "Arthur", "Lori", "Diana"]

Sequence.map.(Sequence.compose.([whitespace_split, first, capitalize])).(names)
# => ["Jane", "John", "Arthur", "Lori", "Diana"]

This leads us to the property that the composition of the map of function f and the map of function g is equivalent to the map of the composition of functions f and g.

$\big(map\ f \circ map\ g\big)\ list = map\big(f \circ g\big)\ list$

Where the circle ($\circ$) symbol represents function composition expressed in mathematical notation. Because of this, we can now traverse the sequence via map only once, applying the composed transformation functions to each item as we encounter it, without having to revisit it again.

Next time we will take a look at how we can do the same kind of operation on filter, as we can't just pipeline one filter's predicate into another, since a predicate returns a boolean value, which is not what we would want to feed into the next predicate call.

–Proctor

# Ruby Tuesday – Refactoring towards compose

We have seen filter, map, reduce, partial application, and updating the former functions to take advantage of partial application, so how much further can we go? In this case, we will take a look at how we can chain the functions together to build up bigger building blocks out of their smaller components. First, as a reminder, here is our Sequence module as we have built it up over the past few posts.

module Sequence
  def self.my_map(f, items)
    do_map = lambda do |accumulator, item|
      accumulator.dup << f.call(item)
    end
    my_reduce([], do_map, items)
  end

  def self.my_filter(predicate, items)
    do_filter = lambda do |accumulator, item|
      if (predicate.call(item))
        accumulator.dup << item
      else
        accumulator
      end
    end
    my_reduce([], do_filter, items)
  end

  def self.my_reduce(initial, operation, items)
    return nil unless items.any?{|item| true}
    accumulator = initial
    for item in items do
      accumulator = operation.call(accumulator, item)
    end
    accumulator
  end

  @@map = method(:my_map).curry
  @@filter = method(:my_filter).curry
  @@reduce = method(:my_reduce).curry

  def self.map
    @@map
  end

  def self.filter
    @@filter
  end

  def self.reduce
    @@reduce
  end
end

Suppose we want to get a list of capitalized "first" names from a list of names, and we have a bunch of smaller functions that handle the different parts of the transformation process that we can reuse. It might look like the following.
names = ["jane doe", "john doe", "arthur dent", "lori lemaris"] name_parts_map = Sequence.map.(->(name) {name.split}) # => #<Proc:0x007fea82200638 (lambda)> first_map = Sequence.map.(->(xs) {xs[0]}) #=> #<Proc:0x007fea82842780 (lambda)> capitalize_map = Sequence.map.(->(s) {s.capitalize}) # => #<Proc:0x007fea82a05ce8 (lambda)> initials_map = Sequence.map.(->(strings) {first_map.(strings)}) # => #<Proc:0x007fea82188638 (lambda)> capitalize_map.(first_map.(name_parts_map.(names))) # => ["Jane", "John", "Arthur", "Lori"] And if we wanted to get a list of the initials as a list themselves, we might have something like this. initials_map.(name_parts_map.(names)) # => [["j", "d"], ["j", "d"], ["a", "d"], ["l", "l"]] And maybe somewhere else, we need to do some mathematical operations, like transform numbers into one less than their square. square_map = Sequence.map.(->(i) {i*i}) # => #<Proc:0x007fea82183ef8 (lambda)> dec_map = Sequence.map.(->(i) {i-1}) # => #<Proc:0x007fea82a37018 (lambda)> dec_map.(square_map.([1,2,3,4,5])) # => [0, 3, 8, 15, 24] And yet another place, we have a calculation to turn Fahrenheit into Celsius. minus_32 = ->(x) {x-32} # => #<Proc:0x007fea8298d4c8@(pry):36 (lambda)> minus_32_map = Sequence.map.(minus_32) # => #<Proc:0x007fea82955488 (lambda)> five_ninths = ->(x) {x*5/9} # => #<Proc:0x007fea83836330@(pry):38 (lambda)> five_ninths_map = Sequence.map.(five_ninths) # => #<Proc:0x007fea821429f8 (lambda)> five_ninths_map.(minus_32_map.([0, 32, 100, 212])) # => [-18, 0, 37, 100] Setting aside for the moment, all of the Procs making up other Procs if this is still foreign to you, there is a pattern here that we have been doing in all of these examples to compose a larger function out of a number of smaller functions. The pattern is that we are taking the return result of calling one function with a value, and feeding that into the next function, lather, rinse, and repeat. Does this sound familiar? This is our reduce. We can use reduce to define a function compose that will take a list of functions as our items, and an initial value for our functions, and our function to apply will be to call the function in item against our accumulated value. That’s a bit of a mouthful, so let’s look at it as code, and we will revisit that statement. def self.my_compose(functions, initial) apply_fn = ->(accum, fn) { fn.(accum) } my_reduce(initial, apply_fn, functions) end @@compose = method(:my_compose).curry def self.compose @@compose end Now that we have the code to reference, let’s go back and inspect against what was described above. First we create a lambda apply_fn, which will be our “reducing” function to apply to the accumulator and each item in the list, which in the case of my_compose is a list of functions to call. apply_fn like all our “reducing” functions so far takes in an accumulator value, the result of the composed function calls so far, and the current item, which is the function to call. The result for the new accumulator value is the result of applying the function with the accumulator as its argument. We were able to build yet another function out of our reduce, but this time we operated on a list of functions as our values. Let that sink in for a while. So let’s see how we use that. We will start with creating a composed function to map the Fahrenheit to Celsius conversion and see what different temperatures are in Celsius, including the past few days of highs and lows here at DFW airport. 
f_to_c_map = Sequence.compose.([minus_32_map, five_ninths_map])
# => #<Proc:0x007fea82a1f9b8 (lambda)>
f_to_c_map.([0, 32, 100, 212])
# => [-18, 0, 37, 100]
dfw_highs_in_celsius = f_to_c_map.([66, 46, 55, 48, 64, 68])
# => [18, 7, 12, 8, 17, 20]
dfw_lows_in_celsius = f_to_c_map.([35, 27, 29, 35, 45, 40])
# => [1, -3, -2, 1, 7, 4]

And if we take the initials example above and compose the map calls together, we get the following.

get_initials_map = Sequence.compose.([name_parts_map, initials_map])
# => #<Proc:0x007fea82108ff0 (lambda)>
get_initials_map.(names)
# => [["j", "d"], ["j", "d"], ["a", "d"], ["l", "l"]]

Doing the same for our capitalized first names we get:

capitalized_first_names_map = Sequence.compose.([name_parts_map, first_map, capitalize_map])
# => #<Proc:0x007fea821d1108 (lambda)>
capitalized_first_names_map.(names)
# => ["Jane", "John", "Arthur", "Lori"]

By having our compose function, we are able to be more explicit that capitalized_first_names_map, along with the rest of the examples, is just a composition of smaller functions that have been assembled into a data transformation pipeline. They don't have any logic other than being the result of chaining the other functions together to get some intended behavior. Not only that, but we can now reuse our capitalized_first_names_map mapping function against other lists of names nicely, since we have it able to be partially applied as well.

capitalized_first_names_map.(["bob cratchit", "pete ross", "diana prince", "tara chase"])
# => ["Bob", "Pete", "Diana", "Tara"]

Even better is that compose can work on any function (Proc or lambda) that takes a single argument, such as a Fahrenheit to Celsius function that operates against a single value instead of a list.

f_to_c = Sequence.compose.([minus_32, five_ninths])
# => #<Proc:0x007fea8294c4c8 (lambda)>
f_to_c.(212)
# => 100
f_to_c.(32)
# => 0
Sequence.map.(f_to_c, [0, 32, 100, 212])
# => [-18, 0, 37, 100]

Next week we will look at some other properties of our functions, and show how compose can potentially help us in those cases as well.

–Proctor

# Ruby Tuesday – Partial Application of map, filter, and reduce

Now that we have covered how to get to a basic implementation of map, filter, and reduce in Ruby, as well as how to take advantage of Method#curry, we are going to see how we can get some extra power from our code by combining their use. The Ruby versions of Enumerable's map, reduce, and select operate against a specific object, such as an array of users.

class User
  def initialize(name:, active:)
    @name = name
    @active = active
  end

  def active?
    @active
  end

  def name
    @name
  end
end

users = [User.new(name: "johnny b. goode", active: true),
         User.new(name: "jasmine", active: true),
         User.new(name: "peter piper", active: false),
         User.new(name: "mary", active: true),
         User.new(name: "elizabeth", active: true),
         User.new(name: "jennifer", active: false)]

users.map{|user| user.name}
# => ["johnny b. goode", "jasmine", "peter piper", "mary", "elizabeth", "jennifer"]
users.select{|user| user.active?}
# => [#<User:0x007fa37a13eb68 @active=true, @name="johnny b. goode">,
#     #<User:0x007fa37a13eaa0 @active=true, @name="jasmine">,
#     #<User:0x007fa37a13e910 @active=true, @name="mary">,
#     #<User:0x007fa37a13e848 @active=true, @name="elizabeth">]

If we want the names of a different collection, we need to call map (and use the same block) directly on that collection as well, like if we had a collection of active Users.
users.select{|user| user.active?}.map{|user| user.name}
# => ["johnny b. goode", "jasmine", "mary", "elizabeth"]

This could be made a little more generic by having methods like get_user_names and get_active_users defined on User, but that still leaves us a bit shallow, so let's see what else we can do based on what we have seen so far. We will try it with our versions of map, filter, and reduce, and see how we can distill some of this logic and raise the level of abstraction higher to make it more generic.

module Sequence
  def self.my_map(f, items)
    do_map = lambda do |accumulator, item|
      accumulator.dup << f.call(item)
    end
    my_reduce([], do_map, items)
  end

  def self.my_filter(predicate, items)
    do_filter = lambda do |accumulator, item|
      if (predicate.call(item))
        accumulator.dup << item
      else
        accumulator
      end
    end
    my_reduce([], do_filter, items)
  end

  def self.my_reduce(initial, operation, items)
    return nil unless items.any?{|item| true}
    accumulator = initial
    for item in items do
      accumulator = operation.call(accumulator, item)
    end
    accumulator
  end
end

And we look at how we call it using our Sequence module defined above.

Sequence.my_map(->(user) {user.name}, users)
# => ["johnny b. goode", "jasmine", "peter piper", "mary", "elizabeth", "jennifer"]
Sequence.my_filter(->(user) {user.active?}, users)
# => [#<User:0x007fa37a13eb68 @active=true, @name="johnny b. goode">,
#     #<User:0x007fa37a13eaa0 @active=true, @name="jasmine">,
#     #<User:0x007fa37a13e910 @active=true, @name="mary">,
#     #<User:0x007fa37a13e848 @active=true, @name="elizabeth">]

Granted, at this point, this is not a great improvement, if any, on its own. BUT…. Since we take the collection to operate on as the last argument to our methods, we can combine our versions of my_map, my_filter, and my_reduce with partial application to get a Proc that will do a specific operation against any Enumerable. Let's see how this would work. First, we will update our Sequence module to have a map, filter, and reduce that can be partially applied.

module Sequence
  def self.my_map(f, items)
    do_map = lambda do |accumulator, item|
      accumulator.dup << f.call(item)
    end
    my_reduce([], do_map, items)
  end

  def self.my_filter(predicate, items)
    do_filter = lambda do |accumulator, item|
      if (predicate.call(item))
        accumulator.dup << item
      else
        accumulator
      end
    end
    my_reduce([], do_filter, items)
  end

  def self.my_reduce(initial, operation, items)
    return nil unless items.any?{|item| true}
    accumulator = initial
    for item in items do
      accumulator = operation.call(accumulator, item)
    end
    accumulator
  end

  @@map = method(:my_map).curry
  @@filter = method(:my_filter).curry
  @@reduce = method(:my_reduce).curry

  def self.map
    @@map
  end

  def self.filter
    @@filter
  end

  def self.reduce
    @@reduce
  end
end

Next, with the ability to partially apply our map, filter, and reduce, we can now save these off to variables that we can invoke later, just passing in the Users enumerable we wish to operate against.

names = Sequence.map.(->(user) {user.name})
# => #<Proc:0x007f9c53155990 (lambda)>
names.(users)
# => ["johnny b. goode", "jasmine", "peter piper", "mary", "elizabeth", "jennifer"]
get_active = Sequence.filter.(->(user) {user.active?})
# => #<Proc:0x007f9c531af3f0 (lambda)>
get_active.(users)
# => [#<User:0x007f9c5224e960 @active=true, @name="johnny b. goode">,
#     #<User:0x007f9c5224e8c0 @active=true, @name="jasmine">,
#     #<User:0x007f9c5224e780 @active=true, @name="mary">,
#     #<User:0x007f9c5224e6e0 @active=true, @name="elizabeth">]

Or, if we want to, we can use Symbol#to_proc so we don't have to define our lambda for checking if an item is active?.

get_active = Sequence.filter.(:active?.to_proc)
# => #<Proc:0x007f9c531e62d8 (lambda)>
get_active.(users)
# => [#<User:0x007f9c5224e960 @active=true, @name="johnny b. goode">,
#     #<User:0x007f9c5224e8c0 @active=true, @name="jasmine">,
#     #<User:0x007f9c5224e780 @active=true, @name="mary">,
#     #<User:0x007f9c5224e6e0 @active=true, @name="elizabeth">]

And now that we have our partially applied functions, we can also chain our calls together to get the names of active Users.

names.(get_active.(users))
# => ["johnny b. goode", "jasmine", "mary", "elizabeth"]

Not only that, but say we have some collections of Product objects,

class Product
  def initialize(id:, name:, active:, brand:)
    @id = id
    @name = name
    @active = active
    @brand = brand
  end

  def active?
    @active
  end

  def name
    @name
  end
end

products = [Product.new(id: 0, name: "Prefect", active: false, brand: "Ford"),
            Product.new(id: 7, name: "SICP", active: true, brand: "MIT Press"),
            Product.new(id: 16, name: "HTDP", active: true, brand: "MIT Press"),
            Product.new(id: 17, name: "MRI", active: true, brand: "Ruby"),
            Product.new(id: 42, name: "HHGTTG", active: true, brand: "HHGTTG"),
            Product.new(id: 53, name: "Windows 3.1", active: false, brand: "Microsoft")]

Session objects,

class Session
  def initialize(name:, duration:)
    @name = name
    @duration = duration
  end

  def name
    @name
  end

  def active?
    @duration < 15
  end
end

sessions = [Session.new(name: "session A", duration: 3),
            Session.new(name: "session A", duration: 30),
            Session.new(name: "session A", duration: 17),
            Session.new(name: "session A", duration: 9),
            Session.new(name: "session A", duration: 1)]

and even SalesLead objects.

class SalesLead
  def initialize(name:, active:)
    @name = name
    @active = active
  end

  def active?
    @active
  end

  def name
    @name
  end
end

And because our Product class has the methods name and active?, we can use our names and get_active variables that hold our partially applied Procs against the list of Products,

names.(products)
# => ["Prefect", "SICP", "HTDP", "MRI", "HHGTTG", "Windows 3.1"]
get_active.(products)
# => [#<Product:0x007f9c530b7dd0 @active=true, @brand="MIT Press", @id=7, @name="SICP">,
#     #<Product:0x007f9c530b7ce0 @active=true, @brand="MIT Press", @id=16, @name="HTDP">,
#     #<Product:0x007f9c530b7bf0 @active=true, @brand="Ruby", @id=17, @name="MRI">,
#     #<Product:0x007f9c530b7ad8 @active=true, @brand="HHGTTG", @id=42, @name="HHGTTG">]
names.(get_active.(products))
# => ["SICP", "HTDP", "MRI", "HHGTTG"]

Sessions,

names.(sessions)
# => ["session A", "session A", "session A", "session A", "session A"]
get_active.(sessions)
# => [#<Session:0x007f9c52153560 @duration=3, @name="session A">,
#     #<Session:0x007f9c52152f98 @duration=9, @name="session A">,
#     #<Session:0x007f9c52152ca0 @duration=1, @name="session A">]
names.(get_active.(sessions))
# => ["session A", "session A", "session A"]

and SalesLeads.

names.(leads)

With this in mind, we will update the definition of our names and get_active to show that it is not just "users" they operate against, but any item.
names = Sequence.map.(->(x) {x.name})
# => #<Proc:0x007f9c53107f38 (lambda)>
get_active = Sequence.filter.(->(x) {x.active?})
# => #<Proc:0x007f9c530bfcd8 (lambda)>

So with this, we have taken our map and select, which with Ruby's Enumerable worked on one specific collection only, and, without redefining them or moving them into methods living on User somewhere, turned them into Procs that are applicable to anything that responds to the right methods; and these Procs can be defined and used anywhere.

–Proctor

# Ruby Tuesday – Partial Application

As we continue with the theme we have been pursuing in the last couple of posts, we take a brief pit stop and look at partial application before we move on to the next method we want to define. Partial application is the ability to provide only a subset of the arguments to a function or method, and get back a new function to be called later that will keep the original context. We will start our examples with the two methods double and triple.

def double(y)
  2 * y
end

def triple(y)
  3 * y
end

double(3)
# => 6
triple(5)
# => 15

We can think of these methods as defined in terms of a more generic function multiply that we call with a hard-coded value.

def multiply(x, y)
  x * y
end

def double(y)
  multiply(2, y)
end

def triple(y)
  multiply(3, y)
end

What partial application allows us to do is to define double and triple in terms of multiply, by calling it with the first argument only (the 2 in the case of double), and saving the resulting function to be invoked later. To do this in Ruby we can use Method#curry, or Proc#curry. Method#curry will return a new Proc that can then be invoked with only a subset of its arguments.

method(:multiply).curry
# => #<Proc:0x007fc02b225950 (lambda)>

So for our double and triple functionality, we can make those variables which hold the resulting Proc of passing their value to multiply, and invoke them by only passing in the value we want to double or triple.

double = method(:multiply).curry.(2)
# => #<Proc:0x007fc02b1eeea0 (lambda)>
double.(8)
# => 16
triple = method(:multiply).curry.(3)
# => #<Proc:0x007fc02b186260 (lambda)>
triple.(17)
# => 51

At this point you might be wondering what this gets you, as the examples of double, triple, and multiply might seem a bit simplistic at best, and maybe even contrived. I would agree; it is a simple example, but it is mainly meant to show what partial application is. Now we will take our filter, map, and reduce from the previous posts and update them to show some of the power of partial application. As a reminder, this is the map, filter, and reduce as defined previously.

def map(items, do_map)
  reduce([], items, lambda do |accumulator, item|
    accumulator.dup << do_map.call(item)
  end)
end

def filter(items, predicate)
  reduce([], items, lambda do |accumulator, item|
    if (predicate.call(item))
      accumulator.dup << item
    else
      accumulator
    end
  end)
end

def reduce(initial, items, operation)
  return nil if items.empty?
  accumulator = initial
  for item in items do
    accumulator = operation.call(accumulator, item)
  end
  accumulator
end

We will update these definitions to take the items collection last, as that is the most general argument.
def map(f, items)
  do_map = lambda do |accumulator, item|
    accumulator.dup << f.call(item)
  end
  reduce([], do_map, items)
end

def filter(predicate, items)
  do_filter = lambda do |accumulator, item|
    if (predicate.call(item))
      accumulator.dup << item
    else
      accumulator
    end
  end
  reduce([], do_filter, items)
end

def reduce(initial, operation, items)
  return nil unless items.any?{|item| true}
  accumulator = initial
  for item in items do
    accumulator = operation.call(accumulator, item)
  end
  accumulator
end

You might be wondering why I said that the collection of items is the most general argument. That is because, if we want to sum a sequence of numbers, double a sequence of numbers, or even pick out the even numbers, we can do that against a number of different collections of items.

reduce(0, :+.to_proc, (1..5))
# => 15
reduce(0, :+.to_proc, [2, 4, 6, 8])
# => 20
map(double, (2..7))
# => [4, 6, 8, 10, 12, 14]
map(double, [1, 2, 3, 5, 8, 13])
# => [2, 4, 6, 10, 16, 26]
filter(lambda{|x| x.even?}, (5..10))
# => [6, 8, 10]
filter(lambda{|x| x.even?}, (0..100).step(10))
# => [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]

And because we now have the generic items last, we can use Method#curry to build up Procs that represent the parts that stay the same, and give those Procs descriptive names.

concat = method(:reduce).curry.("", :+.to_proc)
# => #<Proc:0x007fc02b207f68 (lambda)>
concat.(("a".."d"))
# => "abcd"
concat.(["alpha", "beta", "gamma", "delta"])
# => "alphabetagammadelta"
sum = method(:reduce).curry.(0, :+.to_proc)
# => #<Proc:0x007fc02b25c450 (lambda)>
sum.((1..10))
# => 55
sum.((2..20).step(2))
# => 110
evens_only = method(:filter).curry.(lambda{|x| x.even?})
# => #<Proc:0x007fc02b155688 (lambda)>
evens_only.([1, 2, 3, 5, 8, 13, 21])
# => [2, 8]
evens_only.([1, 2, 4, 7, 11])
# => [2, 4]
doubles = method(:map).curry.(double)
# => #<Proc:0x007fc02b0c0308 (lambda)>
doubles.((1..10))
# => [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
doubles.([2, 4, 6, 8])
# => [4, 8, 12, 16]

And in the last example, doubles, we did a curry of map, and used the partially applied function double as the argument to be partially applied to map. So with partial application, we can start to abstract common behavior out into Procs that can operate against more generic data. One last example would be filtering items down to those that are active. With partial application, we can take filter and create a partially applied version that is given a Proc that will check whether an item is active?.

active_items = method(:filter).curry.(lambda{|item| item.active?})
# => #<Proc:0x007fc02b207f68 (lambda)>

With this active_items Proc, we can then use it against any collection of objects, as long as they support the method active?, e.g. Users, Orders, Sessions, Blog Posts, etc.

active_users = active_items.(users)
active_blog_posts = active_items.(blog_posts)
active_sessions = active_items.(sessions)
active_orders = active_items.(orders)

As you can hopefully start to see, we can get some very small, focused, and powerful functions that are nicely abstracted to work against a broader range of input. Next week, we will take the new versions of map, filter, and reduce, along with partial application, and show how we can reuse and assemble these smaller pieces of code together to get more advanced behaviors.
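One more small note before signing off: Proc#curry, mentioned above alongside Method#curry, behaves the same way for lambdas we define inline. A minimal sketch, reusing the multiply idea from earlier:

multiply = ->(x, y) { x * y }
# Proc#curry gives back a curried Proc, just like Method#curry did for methods
double = multiply.curry.(2)
double.(21)
# => 42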
–Proctor

# Ruby Tuesday – Refactoring Towards Creating reduce

As we continue with the theme we have been pursuing in the last couple of posts, we will look at refactoring towards reduce, and take a look at how we can use it with what we have built up in previous posts. Again for reference, here is our setup data: a User object

require 'date'

class User
  attr_reader :name, :date_of_birth, :date_of_death, :languages_created

  def initialize(name:, is_active:, date_of_birth: nil, date_of_death: nil, languages_created: [])
    @name = name
    @is_active = is_active
    @date_of_birth = date_of_birth
    @date_of_death = date_of_death
    @languages_created = languages_created
  end

  def active?
    @is_active
  end

  def to_s
    inspect
  end
end

and our list of users.

alan_kay = User.new(name: "Alan Kay", is_active: true,
                    date_of_birth: Date.new(1940, 5, 17),
                    languages_created: ["Smalltalk", "Squeak"])
john_mccarthy = User.new(name: "John McCarthy", is_active: true,
                         date_of_birth: Date.new(1927, 9, 4),
                         date_of_death: Date.new(2011, 10, 24),
                         languages_created: ["Lisp"])
robert_virding = User.new(name: "Robert Virding", is_active: true,
                          languages_created: ["Erlang", "LFE"])
dennis_ritchie = User.new(name: "Dennis Ritchie", is_active: true,
                          date_of_birth: Date.new(1941, 9, 9),
                          date_of_death: Date.new(2011, 10, 12),
                          languages_created: ["C"])
james_gosling = User.new(name: "James Gosling", is_active: true,
                         date_of_birth: Date.new(1955, 5, 19),
                         languages_created: ["Java"])
matz = User.new(name: "Yukihiro Matsumoto", is_active: true,
                date_of_birth: Date.new(1965, 4, 14),
                languages_created: ["Ruby"])
nobody = User.new(name: "", is_active: false)

users = [alan_kay, john_mccarthy, robert_virding, dennis_ritchie, james_gosling, matz, nobody]

In our theoretical code base we have some code that will find the oldest language creator.

def oldest_language_creator(users)
  oldest = nil
  for user in users do
    next unless user.date_of_death.nil?
    next if user.date_of_birth.nil?
    if (oldest.nil? || oldest.date_of_birth > user.date_of_birth)
      oldest = user
    end
  end
  oldest
end

oldest_language_creator(users).name
# => "Alan Kay"

That is pretty nasty, so let's see if we can clean it up some and see what happens. First, inside the for we have both an if and an unless, so let's refactor the unless to be an if.

def oldest_language_creator(users)
  oldest = nil
  for user in users do
    next if (not user.date_of_death.nil?)
    next if user.date_of_birth.nil?
    if (oldest.nil? || oldest.date_of_birth > user.date_of_birth)
      oldest = user
    end
  end
  oldest
end

oldest_language_creator(users).name
# => "Alan Kay"

And while we are at it, we will refactor out the conditions in the ifs to give them clarifying names.

def alive?(user)
  user.date_of_death.nil?
end

def has_known_birthday?(user)
  not user.date_of_birth.nil?
end

def oldest_language_creator(users)
  oldest = nil
  for user in users do
    next if not alive?(user)
    next if not has_known_birthday?(user)
    if (oldest.nil? || oldest.date_of_birth > user.date_of_birth)
      oldest = user
    end
  end
  oldest
end

oldest_language_creator(users).name
# => "Alan Kay"

Still works, and now that we have multiple ifs in our for loop, we can think back to a couple of posts ago and realize we have a couple of filters happening on our list, and then some logic around who has the earliest birth date. So let's refactor out the filters and see what our method starts to look like.

def oldest_language_creator(users)
  alive_users = filter(users, lambda{|user| alive?(user)})
  with_birthdays = filter(alive_users, lambda{|user| has_known_birthday?(user)})
  oldest = nil
  for user in with_birthdays do
    if (oldest.nil? || oldest.date_of_birth > user.date_of_birth)
      oldest = user
    end
  end
  oldest
end

oldest_language_creator(users).name
# => "Alan Kay"

This next refactoring might be a bit of a jump, but I am not too fond of starting out with a nil and having to check for it every time, since it will be replaced on the first pass through the loop, so let's clean that up.

def oldest_language_creator(users)
  alive_users = filter(users, lambda{|user| alive?(user)})
  with_birthdays = filter(alive_users, lambda{|user| has_known_birthday?(user)})
  oldest, *rest = with_birthdays
  for user in rest do
    if (oldest.date_of_birth > user.date_of_birth)
      oldest = user
    end
  end
  oldest
end

Let's refactor our for loop into another method, so we can look at it on its own.

def user_with_earliest_birthday(users)
  oldest, *rest = users
  for user in rest do
    if (oldest.date_of_birth > user.date_of_birth)
      oldest = user
    end
  end
  oldest
end

def oldest_language_creator(users)
  alive_users = filter(users, lambda{|user| alive?(user)})
  with_birthdays = filter(alive_users, lambda{|user| has_known_birthday?(user)})
  user_with_earliest_birthday(with_birthdays)
end

Now we have a pattern here, and it has been present in our filter and map as well, if you can see it, so let's see if we can identify it.

def user_with_earliest_birthday(users)
  oldest, *rest = users
  for user in rest do
    if (oldest.date_of_birth > user.date_of_birth)
      oldest = user
    end
  end
  oldest
end

def filter(items, predicate)
  matching = []
  for item in items do
    if (predicate.call(item))
      matching << item
    end
  end
  matching
end

def map(items, do_map)
  results = []
  for item in items do
    results << do_map.call(item)
  end
  results
end

If you haven't untangled it yet, the pattern is:

1. We have some initial value,
2. for every item in a list we do some operation against that item and the current accumulated value, which results in a new accumulated value, and
3. we return the accumulated value.

With our user_with_earliest_birthday method, the initial accumulated value is the first user, the operation is a comparison against each item to find the oldest user so far, and we return the oldest user. With filter, the initial accumulated value is an empty Array, the operation is an append if some criteria is met, and the return is the accumulated array of those items that meet that criteria. With map, the initial accumulated value is an empty Array, the operation is an append of the result of a transformation function on each value, and the return is the accumulated array of the transformed results.

This pattern is called reduce. So what would this look like generically?

def reduce(initial, items, operation)
  accumulator = initial
  for item in items do
    accumulator = operation.call(accumulator, item)
  end
  accumulator
end

Let's write our user_with_earliest_birthday using this new reduce then, and consume it in our oldest_language_creator.

def oldest_language_creator(users)
  alive_users = filter(users, lambda{|user| alive?(user)})
  with_birthdays = filter(alive_users, lambda{|user| has_known_birthday?(user)})
  reduce(with_birthdays.first, with_birthdays.drop(1), lambda do |oldest, user|
    oldest.date_of_birth > user.date_of_birth ? user : oldest
  end)
end

Our accumulator starts with the first user in the list, uses the rest of the list to iterate through, and then returns either the accumulator (oldest) or the current item (user), which is assigned to the accumulator for the next iteration. So how would we write our map and filter to use this new reduce?
def map(items, do_map)
  reduce([], items, lambda do |accumulator, item|
    accumulator.dup << do_map.call(item)
  end)
end

def filter(items, predicate)
  reduce([], items, lambda do |accumulator, item|
    if (predicate.call(item))
      accumulator.dup << item
    else
      accumulator
    end
  end)
end

For our new map, the operation is to call the do_map lambda given to the function, and add the transformed value to a duplicate of the original accumulator. While in these cases it is not necessary to duplicate the original accumulator, I did so here to mirror that in reduce we are getting what can be considered a completely new value, as we have with our oldest_language_creator version that uses reduce. And for our new filter, the operation either returns the original accumulator, or adds the item to a new copy of the accumulated list if the predicate passed to filter returns true. Again, we could leave out the duplication, but for purity's sake, and for working out the logic, we will keep it in there.

So let's step through our new filter and see what happens one step at a time now that it uses reduce.

filter((1..9), lambda{|item| item.odd?})

If we inline reduce, substituting the variables given to filter, it looks like the following.

reduce([], (1..9), lambda do |accumulator, item|
  if (lambda{|item| item.odd?}.call(item))
    accumulator.dup << item
  else
    accumulator
  end
end)

And if we expand the body of reduce, and rename it to filter_odds, we get

def filter_odds()
  accumulator = []
  for item in (1..9) do
    accumulator = lambda do |accumulator, item|
      if (lambda{|item| item.odd?}.call(item))
        accumulator.dup << item
      else
        accumulator
      end
    end.call(accumulator, item)
  end
  accumulator
end

And we inline the call to the lambda that came in from the predicate

def filter_odds()
  accumulator = []
  for item in (1..9) do
    accumulator = lambda do |accumulator, item|
      if (item.odd?)
        accumulator.dup << item
      else
        accumulator
      end
    end.call(accumulator, item)
  end
  accumulator
end

and inline the lambda for the operation given to reduce

def filter_odds()
  accumulator = []
  for item in (1..9) do
    accumulator = if (item.odd?)
                    accumulator.dup << item
                  else
                    accumulator
                  end
  end
  accumulator
end

And we can see how, through filter and reduce, we get back to something that looks like the original selecting of the odd numbers out of a list.

And to test out reduce further, let's add some numbers together. We will call reduce with our initial accumulated "sum" of 0, the numbers from 1 to 10, and a lambda that adds the two numbers together to produce a new running sum.

reduce(0, (1..10), lambda{|accum, item| accum + item})
# => 55

And we do the same for a reduce that computes the product of a list of numbers. This time our initial accumulator value is 1, which is the identity element for multiplication.

reduce(1, (1..10), lambda{|accum, item| accum * item})
# => 3628800

But if we call it with an empty list, we still get back 1.

reduce(1, [], lambda{|accum, item| accum * item})
# => 1

So we need to clean up our reduce some to make it more robust in the case of reducing against empty lists.

def reduce(initial, items, operation)
  return nil if items.empty?
  accumulator = initial
  for item in items do
    accumulator = operation.call(accumulator, item)
  end
  accumulator
end

And now our reduce handles empty lists nicely, or at least a little more sanely.
reduce(1, [], lambda{|accum, item| accum * item})
# => nil

With all of that, we have refactored our code into something close to Ruby's Enumerable#reduce, except that we return nil if the enumerable is empty, instead of the initial value for the accumulator.

–Proctor

# Ruby Tuesday – Refactoring Towards Creating map

Today's Ruby Tuesday continues from where we left off with last week's look at refactoring to filter. For reference, we had a User class,

require 'date'

class User
  attr_reader :name, :date_of_birth, :date_of_death, :languages_created

  def initialize(name:, is_active:, date_of_birth: nil, date_of_death: nil, languages_created: [])
    @name = name
    @is_active = is_active
    @date_of_birth = date_of_birth
    @date_of_death = date_of_death
    @languages_created = languages_created
  end

  def active?
    @is_active
  end

  def to_s
    inspect
  end
end

a list of User objects,

alan_kay = User.new(name: "Alan Kay", is_active: true,
                    date_of_birth: Date.new(1940, 5, 17),
                    languages_created: ["Smalltalk", "Squeak"])
john_mccarthy = User.new(name: "John McCarthy", is_active: true,
                         date_of_birth: Date.new(1927, 9, 4),
                         date_of_death: Date.new(2011, 10, 24),
                         languages_created: ["Lisp"])
robert_virding = User.new(name: "Robert Virding", is_active: true,
                          languages_created: ["Erlang", "LFE"])
dennis_ritchie = User.new(name: "Dennis Ritchie", is_active: true,
                          date_of_birth: Date.new(1941, 9, 9),
                          date_of_death: Date.new(2011, 10, 12),
                          languages_created: ["C"])
james_gosling = User.new(name: "James Gosling", is_active: true,
                         date_of_birth: Date.new(1955, 5, 19),
                         languages_created: ["Java"])
matz = User.new(name: "Yukihiro Matsumoto", is_active: true,
                date_of_birth: Date.new(1965, 4, 14),
                languages_created: ["Ruby"])
nobody = User.new(name: "", is_active: false)

users = [alan_kay, john_mccarthy, robert_virding, dennis_ritchie, james_gosling, matz, nobody]

and a helper method to get the list of names for a list of Users.

def get_names_for(users)
  names = []
  for user in users do
    names << user.name
  end
  names
end

get_names_for(users)
# => ["Alan Kay", "John McCarthy", "Robert Virding", "Dennis Ritchie", "James Gosling", "Yukihiro Matsumoto", ""]

Elsewhere in our (imaginary, but based on real events, with names changed to protect the innocent) code base, we have some logic to get a listing of languages created by the users.

def get_languages(users)
  languages = []
  for user in users do
    languages << user.languages_created
  end
  languages
end

get_languages(users)
# => [["Smalltalk", "Squeak"], ["Lisp"], ["Erlang", "LFE"], ["C"], ["Java"], ["Ruby"], []]

And yet somewhere else, there is logic to get a listing of the years the different users were born.

def get_birth_years(users)
  birth_years = []
  for user in users do
    birth_years << (user.date_of_birth ? user.date_of_birth.year : nil)
  end
  birth_years
end

get_birth_years(users)
# => [1940, 1927, nil, 1941, 1955, 1965, nil]

As with the filter we looked at last week, we have quite a bit of duplicated logic in all of these methods. If we turn our head and squint a little, we can see the methods all look something like this:

def transform_to(items)
  results = []
  for item in items do
    results << do_some_transformation(item)
  end
  results
end

This method:

1. takes a list of items to iterate over,
2. creates a working result set,
3. iterates over every item in the items given, and for each item:
   - some transformation of the item into a new value is computed, and
   - the result is added to the working result set,
4. the end results are returned.

The only thing that is different between each of the functions above, once we have rationalized the variable names, is the transformation to be done on each item in the list. And that differing transformation is just calling a function on each item, which in mathematics is called a map: a way of associating each element in a given set with a corresponding value. So we will "map" over all of the items to get a new list of items, which makes our generic function look like the following, after we update the names to match our new terminology.

def map(items)
  results = []
  for item in items do
    results << do_map(item)
  end
  results
end

This is starting to come together, but we still don't have anything specific for what do_map represents yet. We will follow our previous example in filter and make the generic function we want to call an anonymous function, specifically a lambda in Ruby, and pass that in to our map method.

def map(items, do_map)
  results = []
  for item in items do
    results << do_map.call(item)
  end
  results
end

Time to test it out by using our previous calls and making the specifics a lambda.

map(users, lambda{|user| user.languages_created})
# => [["Smalltalk", "Squeak"], ["Lisp"], ["Erlang", "LFE"], ["C"], ["Java"], ["Ruby"], []]
map(users, lambda{|user| user.name})
# => ["Alan Kay", "John McCarthy", "Robert Virding", "Dennis Ritchie", "James Gosling", "Yukihiro Matsumoto", ""]
map(users, lambda{|user| user.date_of_birth ? user.date_of_birth.year : nil})
# => [1940, 1927, nil, 1941, 1955, 1965, nil]

And to test whether we got this generic enough to work against lists of other types, we'll do some conversions from characters to Integers, Integers to characters, and cube some integers.

map(("a".."z"), lambda{|char| char.ord})
# => [97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114,
#     115, 116, 117, 118, 119, 120, 121, 122]
map((65..90), lambda{|ascii_value| ascii_value.chr})
# => ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O", "P", "Q", "R",
#     "S", "T", "U", "V", "W", "X", "Y", "Z"]
map((1..7), lambda{|i| i*i*i})
# => [1, 8, 27, 64, 125, 216, 343]

So like last week's post, where we were able to genericize the logic of conditionally plucking items out of a list based on some condition, here we were able to genericize the transformation of a list of one set of values into a list of another set of values. If you are familiar with Ruby, you will likely recognize this as Enumerable#map, a.k.a. Enumerable#collect, but now you have seen how you could have gone down the road of creating your own, if Ruby hadn't already provided it for you.

-Proctor

# Giving an Intro to Erlang workshop at LambdaConf 2015

Just a quick post for anyone who is going to be at LambdaConf 2015 this weekend in Boulder, Colorado. I will be there giving an Intro to Erlang workshop this coming Friday, the 22nd of May, so if you are going, make sure to look for me. 😀 Feel free to track me down and say hi, as I would love to meet you as well. The offer is open for you to chat me up about Erlang specifically, functional programming in general, or whatever else we might find interesting. And if you are going and we meet up, I might just have some Functional Geekery stickers to give away. Looking forward to seeing you there.

–Proctor

# Clojure function has-factors-in?

Just another quick post this evening to share a new function I created as part of cleaning up my solution to Problem 1 of Project Euler.
Was just responding to a comment on Google+ on my update sharing the post Project Euler in Clojure – Problem 16, and I saw the commenter had his own solution to problem 1. In sharing my solution I realized that I could clean up my results even further, and added a function has-factors-in?. These updates have also been pushed to my Project Euler in Clojure GitHub repository for those interested.

(defn has-factors-in? [n coll]
  (some #(factor-of? % n) coll))

Given the previous version of problem1:

(defn problem1
  ([] (problem1 1000))
  ([n] (sum (filter #(or (factor-of? 3 %)
                         (factor-of? 5 %))
                    (range n)))))

It now becomes:

(defn problem1
  ([] (problem1 1000))
  ([n] (sum (filter #(has-factors-in? % [3 5]) (range n)))))

This change makes my solution read even more like the problem statement given.
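For anyone following along from the Ruby Tuesday posts above, a rough Ruby analogue of has-factors-in? could look like the following; the names and the lambda style here are just for illustration, not part of the Clojure solution.

# any? short-circuits the same way Clojure's some does
has_factors_in = ->(n, factors) { factors.any? { |f| (n % f).zero? } }
has_factors_in.(15, [3, 5])
# => true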
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 2, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.306801974773407, "perplexity": 6506.361843603233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806419.21/warc/CC-MAIN-20171121151133-20171121171133-00205.warc.gz"}
https://minhyongkim.wordpress.com/2010/02/06/cubic-curves/
## Cubic curves

Dear Professor Kim,

Thank you for your reply. I have another question, this time regarding the group law on a cubic, $C$. I am happy with showing that the points of $C$ form a group. However, I am a little unsure of the 'simplified' group law as described in the textbook. I believe that if we take $O=(0:1:0)$ as the identity element we can treat the cubic as an affine curve. However, as this lies on the 'line at infinity' in projective 2-space, what point $(x,y)$ does it correspond to in affine 2-space?

———————————————————————

If $C$ is the curve $Y^2Z=X^3+aXZ^2+bZ^3$ in $P^2$, then it meets the line $Z=0$ in the point $O=(0:1:0)$, as explained in the book. So we then express $C$ as a union $C=\{O\}\cup C_0$ where $C_0=C\cap \{(X:Y:Z) | Z\neq 0\}.$ We then note that $\{(X:Y:Z) | Z\neq 0\}\simeq A^2$ by the map $(X:Y:Z)\mapsto (X/Z,Y/Z).$ Then $C_0$ goes to the curve in $A^2$ with equation $y^2=x^3+ax+b.$ Of course the point $O$ does not lie on this curve, because the affine curve corresponds exactly to the complement of $O$ in $C$. Does this answer your question? Let me know again if you're still curious.
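To spell out that last step: for a point with $Z\neq 0$, set $x=X/Z$ and $y=Y/Z$; dividing the homogeneous equation through by $Z^3$ gives

$\frac{Y^2Z}{Z^3}=\frac{X^3}{Z^3}+\frac{aXZ^2}{Z^3}+\frac{bZ^3}{Z^3},$

that is, $y^2=x^3+ax+b$, exactly the affine equation above.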
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 18, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8804243206977844, "perplexity": 89.97528479483692}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463613780.89/warc/CC-MAIN-20170530031818-20170530051818-00273.warc.gz"}