| url (string, 14 to 2.42k chars) | text (string, 100 to 1.02M chars) | date (string, 19 chars) | metadata (string, 1.06k to 1.1k chars) |
|---|---|---|---|
http://clay6.com/qa/24251/ozone-layer-is-present-in
|
# Ozone layer is present in
$(a)\;Troposphere\qquad(b)\;Thermosphere\qquad(c)\;Lithosphere\qquad(d)\;Stratosphere$
The ozone layer is present in the stratosphere.
Hence (d) is the correct answer.
answered Jan 20, 2014
|
2017-09-25 13:20:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3235384225845337, "perplexity": 6413.9696195404995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818691830.33/warc/CC-MAIN-20170925130433-20170925150433-00290.warc.gz"}
|
https://math.stackexchange.com/questions/1862871/how-to-find-the-solution-for-this-inequality
|
# How to find the solution for this inequality?
The question is $(2+\sqrt3)^{x^2-x}+(2-\sqrt3)^{x^2-x}\ge14$
How should I proceed with this question?
I'm not able to think of any way to solve it.
• Try to prove the function is monotone on a certain interval (which may have an infinite endpoint) using calculus techniques. – user175968 Jul 18 '16 at 6:03
• A graph might help. google.com/… – user175968 Jul 18 '16 at 6:03
• Is it possible using rules of logarithms? – danny Jul 18 '16 at 6:04
Hint: $2-\sqrt 3=\dfrac1{2+\sqrt3}$. And you should be able to proceed.
As a rule of thumb, every time you have something like $A^x+B^x=C$, you either find a trick like the one above or you have little hope for an algebraic solution.
• Solve $T+\frac1T\ge14$ for appropriate $T$. – user228113 Jul 18 '16 at 6:13
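Filling in the steps the hint and the comments point to (a sketch): with $T=(2+\sqrt3)^{x^2-x}>0$ we have $(2-\sqrt3)^{x^2-x}=\frac1T$, so
$$T+\frac1T\ge14 \iff T^2-14T+1\ge0 \iff T\ge 7+4\sqrt3=(2+\sqrt3)^2 \;\text{ or }\; T\le 7-4\sqrt3=(2+\sqrt3)^{-2}.$$
Since $2+\sqrt3>1$, the first case means $x^2-x\ge2$, i.e. $x\le-1$ or $x\ge2$; the second would need $x^2-x\le-2$, which is impossible because $x^2-x\ge-\frac14$. So the solution set is $x\le-1$ or $x\ge2$.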
|
2020-01-22 14:39:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8645449280738831, "perplexity": 373.6555533547174}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250607118.51/warc/CC-MAIN-20200122131612-20200122160612-00410.warc.gz"}
|
http://aramram.com/forum/8739c3-electrophilic-addition-questions
|
**Electrophilic addition of alkenes.** Alkenes belong to the group of unsaturated hydrocarbons: one molecule of an alkene contains at least one carbon-carbon double bond. Due to the presence of the pi electrons, they show addition reactions in which an electrophile attacks the carbon-carbon double bond to form the addition products. An electrophilic addition reaction is a reaction in which a substrate is initially attacked by an electrophile, and the overall result is the addition of one or more relatively simple molecules across a multiple bond. The double bond is made up of two parts: a sigma bond, where the bonding pair of electrons is held on the line between the two nuclei, and a pi bond, where the … The pi cloud acts as a region of attraction for positive species: the approach of, say, a bromine molecule to the electron-rich double bond induces a temporary polarisation within the bromine molecule. This is also why bromine can be used to test for unsaturation: bromine is a dark red liquid in its pure form, while the product (1,2-dibromoethane in the case of ethene) is colourless.
**Markovnikov's rule.** The key to understanding Markovnikov's rule is the stability of the intermediate, which is a carbonium ion (carbocation). Tertiary carbonium ions are more stable than secondary carbonium ions, and alkyl groups can be considered to act as little electron donors, so each alkyl group on a positively charged carbon atom stabilises the intermediate even more. This gives rise to a stabilisation order for carbocations, and the reaction preferentially proceeds via the more stable ion. It is generally observed that in electrophilic addition of haloacids to alkenes, the more substituted carbon is the one that ends up bonded to the heteroatom of the acid, while the less substituted carbon is protonated: in the addition of hydrogen bromide above, the hydrogen atom from the hydrogen bromide adds to the left-hand-side carbon (the overall result), and the alternative product would only be a minor product (perhaps only a couple of percent).
**Energy diagram exercise.** Consider the second step in the electrophilic addition of HBr to an alkene, and draw out an energy diagram of this step. Is this step exergonic or endergonic, and does the transition state represent the product or the reactant (cation)? As shown, going from the intermediate cation to the final product, the step is exergonic, and the transition state of this second step represents the reactant (the cation).
**Question: reactivity order.** Find the order of the ease of electrophilic addition of the following (alkenes bearing O-, F-, Cl- and NO2-containing substituents, numbered 1-4). According to me, it should be 1 > 4 > 2 > 3, but my textbook says 1 > 2 > 3 > 4.
**Answer.** Reactivity in this case is measured by the stability of the carbocation: the more stable the intermediate, the greater the time for the subsequent attack by the anion. The general path of electrophilic addition in the case of alkenes is
$$\ce{R-CH=CH2 + E -Nu -> R-CH^+-CH2-E -> R-CH(Nu)-CH2E}$$
The $\pi$-electron cloud of the double bond first attacks the electrophile, and then a carbocation is created adjacent to the carbon where the electrophile is added. Pertaining to the question in hand, we will be forming a secondary carbocation in all the cases, as the only other option is a very unstable primary carbocation. Let's analyse the carbocations one by one:
- When the $\ce{R}$ is $\ce{F}$, there is a significantly stronger $+R$ effect (stronger, that is, than with the other halogens) due to $\ce{2p\pi -2p\pi}$ overlap: fluorine donates a lone pair to carbon, leading to a positive charge on F. Although F has a positive charge, this resonating form has special stability, since it has more bonds and the octet of all atoms (and the duplet of each $\ce{-H}$) is complete. So the carbocation gets a slight stabilisation due to this $+R$ effect, but there is also a $-I$ effect in the case of $\ce{F}$, which is a destabilising factor. Fluorine has a greater pulling tendency than oxygen, so it destabilises the carbocation more than oxygen does.
- There will be $+R$ (or $+M$) donation in the cases involving both F and Cl, with $+R$ being better for F than for Cl due to better orbital overlap with C: chlorine's valence orbital is larger than carbon's, so the overlap is poor.
- When the $\ce{R}$ is $\ce{NO2}$, there is a $-I$ effect and also a dominant $-R$ (or $-M$) effect. There is no lone pair on N in $\ce{-NO_2}$, so there is no $+R$ effect in this case; the nitro group actively destabilises the intermediate, and its $-I$ effect is much stronger than that of fluorine. If we take $\ce{F-CH=CH2}$ and $\ce{NO2-CH=CH2}$, the results after electrophilic addition would be $\ce{F-CH+-CH2E}$ and $\ce{NO2-CHE-CH2+}$, where E is the electrophile ("since both F and NO2 are $-I$ groups, the further the $\ce{C+}$ carbocation is situated, the more stable is the compound"). With nitroethylene, the fraction of molecules forming the carbocation is extremely small, which in effect increases the activation energy, so the reaction becomes very difficult. Note that the $+M$ effect decides only the orientation of the addition, not the reactivity.
This gives the stability, and hence reactivity, order
$$\ce{ O > F > Cl > NO2}$$
which is also the order given by your book.
**Industrial hydration of alkenes.** The reaction between ethene and steam only proceeds to equilibrium, with a small quantity of ethanol formed, but by recycling the unreacted gases the conversion can be taken much further. The conditions have to be adjusted, and the alkene is mixed with steam and passed over a phosphoric acid catalyst at 300 °C and 60-70 atmospheres pressure; you should know these typical conditions for the industrial production of ethanol. Other industrially important alcohols can also be manufactured by this process.
**Question: Dow reaction.** The Dow reaction involves (a) electrophilic addition, (b) nucleophilic addition, (c) electrophilic substitution, (d) nucleophilic substitution. The Dow process is an aromatic nucleophilic substitution reaction, so the correct answer is option (d).
**Other exercises.**
- Which of the following reaction types is characteristic of alkenes?
- Which of the following metals is used as a catalyst in the catalytic hydrogenation of both alkenes and alkynes?
- What will be the major product obtained from the acid-catalysed hydration of pent-1-ene?
- Rank in order of reactivity toward electrophilic addition: (CH3CH2)CH=CH2, (CH3CH2)2C=CHCH3, (CH3CH2)CH=CHCH3.
- Degradation of unsaturated compounds causes fats and oils to taste and smell rancid. What is the major chemical process through which this degradation occurs?
- Using curved arrows, write the next step of the electrophilic addition of HI to propene.
- From this mechanism of electrophilic addition, what is meant by a trans-attack?
|
2021-02-28 13:56:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33953624963760376, "perplexity": 3123.775247461001}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178360853.31/warc/CC-MAIN-20210228115201-20210228145201-00550.warc.gz"}
|
https://electronics.stackexchange.com/questions/432300/how-does-the-output-jump-up-to-7-5-volts-when-the-switch-is-closed-and-opened-ag
|
How does the output jump up to 7.5 volts when the switch is closed and opened again?
Sorry if the question might sound dumb. In this circuit, when the switch is closed, the output voltage drops to zero and then rapidly charges back up to 5 volts. My question is: when the switch is opened back up again, the voltage spikes up to 7.5 volts, as shown in the attached oscilloscope figure. I am confused, as the direction of the current would be the opposite of what it was when the switch was closed. Can anyone help me understand the behavior of this circuit? Thanks!
• When the switch is closed, $C_1$'s left side is tied to $0\:\text{V}$ while the right side is pulled upward, over time, such that it eventually rises to $5\:\text{V}$. So, after some time, there is $5\:\text{V}$ across $C_1$ with the right side more positive than the left. When the switch opens, $C_1$ discharges via both resistors developing $2.5\:\text{V}$ across each. But the mid-point of those resistors is tied to $5\:\text{V}$ so $V_\text{OUT}$ must be $2.5\:\text{V}$ above the $5\:\text{V}$ rail and the left side of $C_1$ then $2.5\:\text{V}$ below it. – jonk Apr 13 '19 at 6:02
• Thank you so much for your great help! – Mohammed Osama Apr 13 '19 at 8:59
1. Here, capacitor $C_1$ has long been discharged so the voltage difference across its leads is $0\:\text{V}$. So there is no current in $R_1$ or $R_2$ and therefore no voltage drop across either resistor. The output voltage will be $5\:\text{V}$.
2. Now the switch is closed. $R_2$ has its own current but doesn't participate in charging $C_1$, so I removed it as "distracting" so that you can focus on the capacitor charging activity. $V_\text{OUT}$ will rise, exponentially, until it reaches $V_\text{OUT}=+5\:\text{V}$ and then the charging process stops.
3. The switch is re-opened and the circuit returns to the configuration shown in step 1 above. However, there is a difference. Now, node $N_2$ is $+5\:\text{V}$ relative to node $N_1$. Because of this, current immediately starts flowing through $R_1$ and then through $R_2$ in order to start discharging $C_1$ via this path. The two resistors have the same value, so their mid-point (which is attached to a $+5\:\text{V}$ voltage source) will be half-way between, or $2.5\:\text{V}$ lower than node $N_2$ and $2.5\:\text{V}$ higher than node $N_1$. So it must be the case, right after opening the switch, that $V_{\text{N}_1}=+2.5\:\text{V}$ and $V_{\text{N}_2}=+7.5\:\text{V}$. Note that there is still just the original $5\:\text{V}$ voltage difference across $C_1$. That didn't change (yet.) But now $V_\text{OUT}$ must initially start higher than $+5\:\text{V}$ (by the voltage drop across $R_1$.) Now, $C_1$ discharges via the two resistors and eventually returns to its discharged state. In the meantime, the current through the two resistors declines and therefore also the voltage at $V_\text{OUT}$ gradually returns to $+5\:\text{V}$.
You can adjust the ratio of $R_1$ and $R_2$ to get different peak voltages. For example, if you change $R_1$ to $2\:\text{k}\Omega$ then $V_\text{OUT}$ will rise above $8.3\:\text{V}$ for a short time -- more than the $7.5\:\text{V}$ you see in your trace.
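To make step 3 concrete, here is a minimal numeric sketch. The component values are assumed, since the post doesn't give them ($R_1 = R_2 = 1\:\text{k}\Omega$, $C_1 = 1\:\mu\text{F}$, a $5\:\text{V}$ rail): right after the switch opens, $C_1$ still holds $5\:\text{V}$, so $V_\text{OUT}$ jumps to $V_\text{CC} + V_\text{CC}\cdot R_1/(R_1+R_2)$ and then decays back to $V_\text{CC}$ with time constant $\tau=(R_1+R_2)\,C_1$.

```python
import numpy as np

# Assumed component values (not given in the original post).
R1, R2, C1, VCC = 1e3, 1e3, 1e-6, 5.0

tau = (R1 + R2) * C1                   # discharge time constant
peak = VCC + VCC * R1 / (R1 + R2)      # 7.5 V when R1 == R2

t = np.linspace(0.0, 5 * tau, 6)       # sample a few points of the decay
vout = VCC + (peak - VCC) * np.exp(-t / tau)
for ti, vi in zip(t, vout):
    print(f"t = {ti * 1e3:6.2f} ms   Vout = {vi:.3f} V")
```

With $R_1 = 2\:\text{k}\Omega$ the same formula gives a peak of $5 + 5\cdot\tfrac{2}{3} \approx 8.3\:\text{V}$, matching the remark above.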
|
2020-04-07 01:46:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 37, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6988081932067871, "perplexity": 256.7237383423666}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371662966.69/warc/CC-MAIN-20200406231617-20200407022117-00258.warc.gz"}
|
https://hackage.haskell.org/package/Agda-2.6.0/docs/Agda-TypeChecking-Conversion.html
|
Agda-2.6.0: A dependently typed functional programming language and proof assistant
Agda.TypeChecking.Conversion
Contents
Synopsis
# Documentation
Try whether a computation runs without errors or new constraints (may create new metas, though). Restores state upon failure.
tryConversion' :: TCM a -> TCM (Maybe a) Source #
Try whether a computation runs without errors or new constraints (may create new metas, though). Return Just the result upon success. Return Nothing and restore state upon failure.
Check if two lists of arguments are the same (and all variables). Precondition: the lists have the same length.
intersectVars us vs checks whether all relevant elements in us and vs are variables, and if yes, returns a prune list which says True for arguments which are different and can be pruned.
equalTerm :: Type -> Term -> Term -> TCM () Source #
equalAtom :: Type -> Term -> Term -> TCM () Source #
Ignore errors in irrelevant context.
compareTerm :: Comparison -> Type -> Term -> Term -> TCM () Source #
Type directed equality on values.
assignE :: CompareDirection -> MetaId -> Elims -> Term -> (Term -> Term -> TCM ()) -> TCM () Source #
Try to assign meta. If meta is projected, try to eta-expand and run conversion check again.
compareTel :: Type -> Type -> Comparison -> Telescope -> Telescope -> TCM () Source #
compareTel t1 t2 cmp tel1 tel2 checks whether pointwise tel1 cmp tel2 holds, and complains that t2 cmp t1 failed if not.
etaInequal :: Comparison -> Type -> Term -> Term -> TCM () Source #
Raise UnequalTerms if there is no hope that by meta solving and subsequent eta-contraction these terms could become equal. Precondition: the terms are in reduced form (with no top-level pointer) and failed to be equal in the compareAtom check.
By eta-contraction, a lambda or a record constructor term can become anything.
Compute the head type of an elimination. For projection-like functions this requires inferring the type of the principal argument.
compareAtom :: Comparison -> Type -> Term -> Term -> TCM () Source #
Syntax directed equality on atomic values
Arguments
:: Free c
=> Comparison   -- ^ cmp: the comparison direction
-> Dom Type     -- ^ a1: the smaller domain
-> Dom Type     -- ^ a2: the other domain
-> Abs b        -- ^ b1: the smaller codomain
-> Abs c        -- ^ b2: the bigger codomain
-> TCM ()       -- ^ continuation if mismatch in Hiding
-> TCM ()       -- ^ continuation if mismatch in Relevance
-> TCM ()       -- ^ continuation if comparison is successful
-> TCM ()
Check whether a1 cmp a2 and continue in context extended by a1.
When comparing argument spines (in compareElims) where the first arguments don't match, we keep going, substituting the anti-unification of the two terms in the telescope. More precisely:
@
(u = v : A)[pid]   w = antiUnify pid A u v   us = vs : Δ[w/x]
-------------------------------------------------------------
u us = v vs : (x : A) Δ
@
The simplest case of anti-unification is to return a fresh metavariable (created by blockTermOnProblem), but if there's shared structure between the two terms we can expose that.
This is really a crutch that lets us get away with things that otherwise would require heterogeneous conversion checking. See for instance issue #2384.
compareElims :: [Polarity] -> [IsForced] -> Type -> Term -> [Elim] -> [Elim] -> TCM () Source #
compareElims pols fors a v els1 els2 performs type-directed equality on eliminator spines. a is the type of the head v.
compareIrrelevant :: Type -> Term -> Term -> TCM () Source #
Compare two terms in irrelevant position. This always succeeds. However, we can dig for solutions of irrelevant metas in the terms we compare. (Certainly not the systematic solution, that'd be proof search...)
compareWithPol :: Polarity -> (Comparison -> a -> a -> TCM ()) -> a -> a -> TCM () Source #
compareArgs :: [Polarity] -> [IsForced] -> Type -> Term -> Args -> Args -> TCM () Source #
Type-directed equality on argument lists
# Types
compareType :: Comparison -> Type -> Type -> TCM () Source #
Equality on Types
leqType :: Type -> Type -> TCM () Source #
coerce v a b coerces v : a to type b, returning a v' : b with maybe extra hidden applications or hidden abstractions.
In principle, this function can host coercive subtyping, but currently it only tries to fix problems with hidden function types.
Precondition: a and b are reduced.
coerceSize :: (Type -> Type -> TCM ()) -> Term -> Type -> Type -> TCM () Source #
Account for situations like k : (Size< j) <= (Size< k + 1)
Actually, the semantics is (Size<= k) ∩ (Size< j) ⊆ rhs which gives a disjunctive constraint. Mmmh, looks like stuff TODO.
For now, we do a cheap heuristics.
Precondition: types are reduced.
# Sorts and levels
leqSort :: Sort -> Sort -> TCM () Source #
Check that the first sort is less than or equal to the second.
We can put SizeUniv below Inf, but otherwise, it is unrelated to the other universes.
equalLevel' :: Level -> Level -> TCM () Source #
Precondition: levels are normalised.
equalSort :: Sort -> Sort -> TCM () Source #
Check that the first sort is equal to the second.
forallFaceMaps :: Term -> (Map Int Bool -> MetaId -> Term -> TCM a) -> (Substitution -> TCM a) -> TCM [a] Source #
type Conj = (Map Int (Set Bool), [Term]) Source #
leqInterval :: [Conj] -> [Conj] -> TCM Bool Source #
leqInterval r q = r ≤ q in the I lattice. (∨ r_i) ≤ (∨ q_j) iff ∀ i. ∃ j. r_i ≤ q_j
leqConj r q = r ≤ q in the I lattice, when r and q are conjunctions.

(∧ r_i) ≤ (∧ q_j)
  iff (∧ r_i) ∧ (∧ q_j) = (∧ r_i)
  iff {r_i | i} ∪ {q_j | j} = {r_i | i}
  iff {q_j | j} ⊆ {r_i | i}
equalTermOnFace :: Term -> Type -> Term -> Term -> TCM () Source #
equalTermOnFace φ A u v = _ , φ ⊢ u = v : A
compareTermOnFace' :: (Comparison -> Type -> Term -> Term -> TCM ()) -> Comparison -> Term -> Type -> Term -> Term -> TCM () Source #
|
2021-06-18 16:23:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24016383290290833, "perplexity": 11468.671141783841}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487637721.34/warc/CC-MAIN-20210618134943-20210618164943-00367.warc.gz"}
|
https://lammps.sandia.gov/doc/angle_cross.html
|
# angle_style cross command
## Syntax
angle_style cross
## Examples
angle_style cross
angle_coeff 1 200.0 100.0 100.0 1.25 1.25 107.0
## Description
The cross angle style uses a potential that couples the bond stretches of a bend with the angle stretch of that bend:
E = KSS (r12 - r12,0)(r32 - r32,0) + KBS0 (r12 - r12,0)(theta - theta0) + KBS1 (r32 - r32,0)(theta - theta0)
where r12,0 is the rest value of the bond length between atoms 1 and 2, r32,0 is the rest value of the bond length between atoms 3 and 2, and theta0 is the rest value of the angle. KSS is the force constant of the bond stretch-bond stretch term, and KBS0 and KBS1 are the force constants of the bond stretch-angle stretch terms.
The following coefficients must be defined for each angle type via the angle_coeff command as in the example above, or in the data file or restart files read by the read_data or read_restart commands:
• KSS (energy/distance^2)
• KBS0 (energy/distance)
• KBS1 (energy/distance)
• r12,0 (distance)
• r32,0 (distance)
• theta0 (degrees)
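As a quick sanity check of how these six coefficients enter the potential, here is a small sketch (the function name and trial geometry are illustrative, not part of LAMMPS) that evaluates the energy above for one angle, using the coefficients from the example line:

```python
import math

def cross_angle_energy(r12, r32, theta,
                       Kss, Kbs0, Kbs1, r12_0, r32_0, theta0):
    """Energy of one cross-term angle; theta and theta0 in radians."""
    dr1, dr2, dth = r12 - r12_0, r32 - r32_0, theta - theta0
    return Kss * dr1 * dr2 + Kbs0 * dr1 * dth + Kbs1 * dr2 * dth

# Coefficients from: angle_coeff 1 200.0 100.0 100.0 1.25 1.25 107.0
print(cross_angle_energy(1.30, 1.28, math.radians(110.0),
                         200.0, 100.0, 100.0,
                         1.25, 1.25, math.radians(107.0)))
```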
|
2019-04-25 12:21:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4717540442943573, "perplexity": 3434.8357969474496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578721441.77/warc/CC-MAIN-20190425114058-20190425140058-00489.warc.gz"}
|
https://keplerlounge.com/posts/archimedes-constant/
|
# Archimedes’ Constant is absolutely normal
Using the theory of Algorithmic Probability, we demonstrate that Archimedes’ Constant is absolutely normal.
Aidan Rocke https://github.com/AidanRocke
04-11-2022
Upon closer examination, this analysis implies that Archimedes’ Constant is a physical constant.
### Lemma: Prime encodings are algorithmically random sequences.
Prime encodings $X_N = \{x_n\}_{n=1}^N \in \{0,1\}^N$, where $x_n = 1$ if $n$ is prime and $x_n=0$ otherwise, are algorithmically random sequences.
### Proof:
Based on an information-theoretic demonstration of the Erdős-Kac theorem [1] and the Riemann Hypothesis [4], the Algorithmic Probability with which a prime number of magnitude $p \in \mathbb{N}$ is observed is on the order of:
$$m(p) = \lim_{N \to \infty} P(X \bmod p = 0) = \frac{1}{p}, \quad X \sim U([1,N]) \tag{1}$$
As a result, using Levin’s Coding theorem, the Kolmogorov Complexity of $p \in \mathbb{N}$ is on the order of:
$$K_U(p) = -\log_2 m(p) = \log_2 p \tag{2}$$
which implies that a prime $p \in \mathbb{N}$ might as well be generated by $\log_2 p$ coin flips.
### Theorem: Archimedes’ Constant is absolutely normal.
In the following analysis, we shall demonstrate that Archimedes’ Constant is finite-state incompressible using the Three Master Keys for Probabilistic Number Theory [2].
### Proof:
Given the Euler product,
$$\frac{\pi}{4} = \big(\prod_{p \equiv 1 (\bmod 4)} \frac{p}{p-1}\big) \cdot \big(\prod_{p \equiv 3 (\bmod 4)} \frac{p}{p+1}\big) = \frac{3}{4} \cdot \frac{5}{4} \cdot \frac{7}{8} \cdot \frac{11}{12} \cdots \tag{3}$$
we may reformulate this product as follows:
$$\frac{\pi}{4} = \prod_{p \in \mathbb{P} \setminus \{2\}} \frac{p}{f(p)} \tag{4}$$
where $f(X) = 4 \cdot (\text{argmin}_{\lambda \in \mathbb{N}} \lvert X - 4 \cdot \lambda \rvert)$.
Now, considering that $K_U(f) = \mathcal{O}(1)$ for any computable function $f$, we have:
$$K_U\big(\frac{\pi}{4}\big) = \lim_{N \to \infty} K_U \big(\prod_{n=2}^N p_n \big) + \mathcal{O}(1) \tag{5}$$
Furthermore, considering the information-theoretic derivation of the Prime Number Theorem [3], the statistical distribution of primes $p_n$ may be modeled by independent random variables $\widehat{p_n} \sim U([1,N])$. Using the fact that if $f$ is invertible and $X,Y$ are independent random variables, the entropy satisfies:
$$H(f(X,Y)) = H(X) + H(Y) \tag{6}$$
so we have:
$$\mathbb{E}\big[K_U \big(\prod_{n=2}^N \widehat{p_n} \big)\big] \sim \sum_{n=2}^N \mathbb{E}[K_U(\widehat{p_n})] \sim \sum_{n=2}^N H(\widehat{p_n}) \tag{7}$$
where invertibility is guaranteed by the Unique Factorization Theorem.
It follows that we may derive the asymptotic relation:
$$\sum_{n=2}^N H(\widehat{p_n}) \sim \pi(N) \cdot \big(\sum_{p \leq N} m(p) \cdot \ln p\big) \sim \pi(N) \cdot \big(\sum_{p \leq N} \frac{1}{p} \cdot \ln p\big) \tag{8}$$
which may be deduced from the Lemma.
Using the Shannon Source coding theorem, the expression (8) may be identified with:
$$\mathbb{E}[K_U(X_N)] \sim \pi(N) \cdot \big(\sum_{k=1}^N \frac{1}{k} \big) \sim \pi(N) \cdot \ln N \sim N \tag{9}$$
where $X_N$ is the prime encoding of length $N$.
Now, using the Asymptotic Equipartition Property we may note that the average information gained from observing a prime number of unknown magnitude in the interval $[1,N]$ is dominated by the typical probability $\frac{1}{N}$:
$$\frac{1}{N} \cdot \big(-\ln \prod_{k=1}^N P(x_k = 1)\big) = \frac{-\ln \prod_{k=1}^N \frac{1}{k}}{N} = \frac{\ln N!}{N} \sim \frac{N \cdot \ln N - N}{N} \sim \ln N \tag{10}$$
Hence, a representation of Archimedes’ Constant of length $N$ relative to any description language $U$ corresponds to $\pi(N)$ incompressible strings $\widehat{x_p} \in \{0,1\}^*$ of length $\sim \ln N$ where each string occurs with Algorithmic Probability:
$$m(\widehat{x_p}) \sim e^{-\ln N} = \frac{1}{N} \tag{11}$$
As the entropy of Archimedes’ Constant is dominated by the product of uniformly distributed random variables, its entropy is invariant to permutations of these variables. It follows that the Algorithmic Probability of observing the entire string of length $N$ is given by:
$$P(\widehat{x}_{p_1},...,\widehat{x}_{p_{\pi(N)}}) = \prod_{p \leq N} m(\widehat{x_p}) \sim \big(\frac{1}{N}\big)^{\pi(N)} = e^{-\pi(N) \cdot \ln N} = e^{-N + o(N)} \tag{12}$$
which allows us to determine the expected information gained:
$$-\ln P(\widehat{x}_{p_1},...,\widehat{x}_{p_{\pi(N)}}) \sim -\ln \big(\frac{1}{N}\big)^{\pi(N)} \sim \pi(N) \cdot \ln N \sim N \tag{13}$$
Hence, we may conclude that Archimedes’ Constant has a Maximum Entropy distribution such that randomly sampled substrings of equal length have equal entropy. Therefore, Archimedes’ Constant is absolutely normal.
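As an empirical illustration (not a proof) of base-10 normality, one can tally digit frequencies in a long prefix of $\pi$; each digit should appear with frequency close to $\tfrac{1}{10}$. A minimal sketch, assuming the mpmath library:

```python
from collections import Counter
from mpmath import mp

mp.dps = 10_000                 # work with ~10,000 decimal digits of pi
digits = str(mp.pi)[2:]         # drop the leading "3."
freq = Counter(digits)
for d in sorted(freq):
    print(d, freq[d] / len(digits))   # each ratio should be near 0.1
```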
## References:
1. Rocke (2022, Jan. 11). Kepler Lounge: An information-theoretic proof of the Erdős-Kac theorem. Retrieved from keplerlounge.com
2. Rocke (2022, Jan. 15). Kepler Lounge: Three master keys for Probabilistic Number Theory. Retrieved from keplerlounge.com
3. Rocke (2022, Jan. 12). Kepler Lounge: An information-theoretic derivation of the Prime Number Theorem. Retrieved from keplerlounge.com
4. Rocke (2022, March 8). Kepler Lounge: The Von Neumann Entropy and the Riemann Hypothesis. Retrieved from keplerlounge.com
### Citation
Rocke (2022, April 11). Kepler Lounge: Archimedes' Constant is absolutely normal. Retrieved from keplerlounge.com
@misc{rocke2022archimedes,
  author = {Rocke, Aidan},
  title = {Kepler Lounge: Archimedes' Constant is absolutely normal},
  url = {https://keplerlounge.com},
  year = {2022}
}
|
2022-05-17 18:19:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 13, "x-ck12": 0, "texerror": 0, "math_score": 0.9991747140884399, "perplexity": 4698.973747131646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662519037.11/warc/CC-MAIN-20220517162558-20220517192558-00144.warc.gz"}
|
https://collegephysicsanswers.com/openstax-solutions/football-player-punts-ball-450circ-angle-without-effect-wind-ball-would-travel
|
Question
A football player punts the ball at a $45.0^\circ$ angle. Without an effect from the wind, the ball would travel 60.0 m horizontally. (a) What is the initial speed of the ball? (b) When the ball is near its maximum height it experiences a brief gust of wind that reduces its horizontal velocity by 1.50 m/s. What distance does the ball travel horizontally?
Question by OpenStax is licensed under CC BY 4.0.
Final Answer
a) $24.2 \textrm{ m/s}$
b) $57.4 \textrm{ m}$
Solution Video
# OpenStax College Physics Solution, Chapter 3, Problem 47 (Problems & Exercises) (6:09)
#### Sign up to view this solution video!
View sample solution
## Calculator Screenshots
Video Transcript
This is College Physics Answers with Shaun Dychko. When this football is punted at an angle of 45 degrees, it has a range of 60 meters. So we can use the range formula, since the final level of the ball will be the same as the initial level, and use it to find the initial speed. We’ll solve for v naught by multiplying both sides by g over sine of two theta, then switch the sides around and take the square root of both sides, giving us: v naught is the square root of the range times the acceleration due to gravity divided by the sine of two times the angle. So it’s the square root of 60 times 9.8 (sine of 90 being 1), and that gives 24.2 meters per second as the initial speed.
Then in part (b), we’re told that the ball, as it’s travelling along its trajectory, will experience a gust of wind at the halfway point that reduces its horizontal velocity by 1.5 meters per second. There are going to be two intervals to consider here. The first interval is before the gust of wind happens, between the initial x position and x1, and the second interval of time will be from there to x2. Now the time intervals will be the same, because this gust of wind occurs at the halfway point; so this occurs at t1, which is the total time divided by two. We know what the position is here, because since it’s the halfway point, it’s half of 60: x1 is 30 meters.
Now the part where we need to do some work is to figure out what this x2 is. We know the time interval t2: it’s also going to be the total time over two. We’re going to calculate the speed during the second interval, and it’s going to be the initial x component of velocity minus 1.5. When we plug this speed and this time into our horizontal position formula, we will find the final position x2: it’s going to be x1 plus the speed during the second interval multiplied by the time of the second interval.
We do need to know the total time in the air, though. To do that, we consider the vertical direction: the final y position equals the initial y position plus the initial vertical component of the velocity times time, plus one half times the vertical acceleration times time squared. The final and initial heights are both zero, so that makes it a bit more convenient. We’re solving for t, by the way, so we subtract the v naught y times t term from both sides, and then we’re left with this line, but with v naught times sine theta substituted in place of v naught y, because that’s the vertical component of this velocity: it’s the opposite leg of this triangle, and to find the opposite leg you take the sine of the angle multiplied by the length of the hypotenuse. Then we divide both sides by t, divide both sides by ay, and multiply both sides by two. Let’s write that a bit more nicely here: multiplying by two over ay t, the t’s cancel, leaving t to the power of one, and of course the two and the ay cancel on the other side, isolating t. We’re left with negative two times the initial speed times sine theta over the vertical acceleration. So that’s negative two times 24.2487 meters per second, which we calculated in part (a), times sine 45, divided by negative 9.8 meters per second squared, giving a total time of 3.4993 seconds in the air. This gust of wind, since it affects only the horizontal direction, has no effect on the amount of time that the ball spends in the air; it still spends the same total time in the air.
We have now calculated the total time, so we can figure out what t2 is, and v2 is the horizontal component of the initial velocity minus the 1.5 meters per second resulting from the gust of wind. We substitute this and this into here to get this line here: x2 is x1 plus v naught cos theta minus one and a half, all times t over two. So, plugging in numbers: 30 meters plus 24.2487 meters per second times cos 45, minus 1.5, all times the total time of 3.4993 seconds over two, gives a final range of 57.4 meters. As a result of this gust of wind, our range is now less than 60, and so this answer makes sense.
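The arithmetic in the transcript can be checked with a short script (a sketch; it simply re-runs the formulas above with g = 9.8 m/s² and θ = 45°):

```python
import math

g, theta, R = 9.8, math.radians(45.0), 60.0

v0 = math.sqrt(R * g / math.sin(2 * theta))   # part (a): initial speed
t_total = 2 * v0 * math.sin(theta) / g        # total time of flight
x1 = R / 2                                    # position when the gust hits
v2x = v0 * math.cos(theta) - 1.5              # horizontal speed after the gust
x2 = x1 + v2x * (t_total / 2)                 # part (b): final horizontal distance

print(f"v0 = {v0:.1f} m/s")      # 24.2 m/s
print(f"t  = {t_total:.4f} s")   # 3.4993 s
print(f"x2 = {x2:.1f} m")        # 57.4 m
```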
|
2019-02-19 21:48:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8096129894256592, "perplexity": 352.03745143117084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247492825.22/warc/CC-MAIN-20190219203410-20190219225410-00461.warc.gz"}
|
http://math.stackexchange.com/questions/271245/continuity-of-solutions-to-convex-optimization-problems
|
# Continuity of solutions to convex optimization problems
Let $x_A$ solve $$\min J(x) \quad \text{subject to} \quad Ax=b$$ and $x_B$ solve $$\min J(x) \quad \text{subject to} \quad Bx=b.$$ Given that $\|A-B\|_\text{operator} \leq \epsilon$ and that $J$ is convex (though not necessarily differentiable), what can I say about $\| x_A - x_B \|_2$?
Sadly, you can say nothing. All you know is that the optimization occurs at one of the vertex points, so changing the slope ever so slightly could send you to the next vertex point, which is very far away. – Calvin Lin Jan 6 '13 at 0:57
Ok thanks. Do you know if there are some additional constraints I can place on A or B that would allow me to say something? – dranxo Jan 7 '13 at 3:35
Let $J(x) = x_1^2+(x_2-10)^2$. Let $b=0$, $A_\epsilon=\begin{bmatrix}0 & \epsilon\end{bmatrix}$, $B_\epsilon=\begin{bmatrix}\epsilon & 0\end{bmatrix}$. Then if $\epsilon>0$, $x_{A_\epsilon}=\binom{0}{0}$, $x_{B_\epsilon} = \binom{0}{10}$. The norm $\|A_\epsilon-B_\epsilon\|$ can be made as small as you want, but $\|x_{A_\epsilon}-x_{B_\epsilon}\|_\infty = 10$.
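The counterexample can also be verified numerically (a sketch; eps is taken as 1e-3 to keep the equality constraint well scaled, and scipy's SLSQP handles it):

```python
import numpy as np
from scipy.optimize import minimize

def J(x):
    return x[0]**2 + (x[1] - 10.0)**2

def solve(M):
    # minimize J subject to M @ x = 0 (b = 0 in the example)
    cons = {"type": "eq", "fun": lambda x: M @ x}
    return minimize(J, x0=[1.0, 1.0], constraints=[cons]).x

eps = 1e-3
xA = solve(np.array([0.0, eps]))   # constraint: eps * x2 = 0  ->  x2 = 0
xB = solve(np.array([eps, 0.0]))   # constraint: eps * x1 = 0  ->  x1 = 0

print(xA)                          # ~ (0, 0)
print(xB)                          # ~ (0, 10)
print(np.linalg.norm(xA - xB))     # ~ 10, although ||A - B|| = eps * sqrt(2)
```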
|
2014-08-28 15:42:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.927855372428894, "perplexity": 216.37434006130312}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500830903.34/warc/CC-MAIN-20140820021350-00194-ip-10-180-136-8.ec2.internal.warc.gz"}
|
https://www.mathway.com/examples/finite-math/statistical-distributions/describing-distributions-two-properties?id=716
|
# Finite Math Examples
Describe the Distribution's Two Properties
Step 1
A discrete random variable takes a set of separate values (such as 0, 1, 2, ...). Its probability distribution assigns a probability P(x) to each possible value x. For each x, the probability P(x) falls between 0 and 1 inclusive, and the sum of the probabilities for all the possible values equals 1.
1. For each x, 0 <= P(x) <= 1.
2. The sum of P(x) over all possible values equals 1.
Step 2
The first probability in the table is between 0 and 1 inclusive, which meets the first property of the probability distribution.
Step 3
The second probability is between 0 and 1 inclusive, which meets the first property of the probability distribution.
Step 4
The third probability is between 0 and 1 inclusive, which meets the first property of the probability distribution.
Step 5
The fourth probability is between 0 and 1 inclusive, which meets the first property of the probability distribution.
Step 6
The fifth probability is between 0 and 1 inclusive, which meets the first property of the probability distribution.
Step 7
The sixth probability is not less than or equal to 1, which doesn't meet the first property of the probability distribution.
Step 8
The seventh probability is between 0 and 1 inclusive, which meets the first property of the probability distribution.
Step 9
The probability does not fall between 0 and 1 inclusive for all values, which does not meet the first property of the probability distribution.
The table does not satisfy the two properties of a probability distribution
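These two checks are mechanical, so a small helper (a sketch, not Mathway's code) makes them explicit:

```python
def is_probability_distribution(probs, tol=1e-9):
    """Check the two properties of a discrete probability distribution."""
    each_in_range = all(0.0 <= p <= 1.0 for p in probs)   # property 1
    sums_to_one = abs(sum(probs) - 1.0) <= tol            # property 2
    return each_in_range and sums_to_one

print(is_probability_distribution([0.1, 0.2, 0.3, 0.4]))  # True
print(is_probability_distribution([0.3, 1.2, -0.5]))      # False: 1.2 > 1
```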
|
2022-06-29 22:13:22
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8503971099853516, "perplexity": 809.0067983525905}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103645173.39/warc/CC-MAIN-20220629211420-20220630001420-00103.warc.gz"}
|
http://mathforces.com/problems/216/
|
# Integer sequence
Author: mathforces
Problem has been solved: 11 times
Русский язык | English Language
For a sequence $x_n$ of positive integers, it is given that $x_{n+2}=\frac{x_n+2009}{1+x_{n+1}}$ for all positive integers $n$. Find the smallest possible value of $x_1+x_2$.
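A brute-force sketch for exploring the problem (illustrative only: it tests the necessary condition that the recurrence keeps producing positive integers for many steps, and does not by itself prove which pairs work for all $n$):

```python
def survives(x1, x2, steps=200):
    """Check the recurrence stays a positive integer for `steps` steps."""
    a, b = x1, x2
    for _ in range(steps):
        num, den = a + 2009, 1 + b
        if num % den != 0 or num // den < 1:
            return False
        a, b = b, num // den
    return True

candidates = [(x1 + x2, x1, x2)
              for x1 in range(1, 200) for x2 in range(1, 200)
              if survives(x1, x2)]
print(min(candidates))   # smallest surviving x1 + x2 in this search window
```

Note that any 2-periodic solution needs $x_1 x_2 = 2009$, since $x_3 = x_1$ forces $x_1(1+x_2) = x_1 + 2009$.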
|
2021-09-21 16:27:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20679526031017303, "perplexity": 453.8838053316355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057225.57/warc/CC-MAIN-20210921161350-20210921191350-00184.warc.gz"}
|
https://orfe.princeton.edu/events/2006/jia-shun-jin-purdue-university
|
# Jia-shun Jin, Purdue University
Inferences on the Proportion of Non-Null Effects in Large-Scale Multiple Comparisons
Date
Nov 9, 2006, 4:30 pm – 5:30 pm
Event Description
The immediate need for effective massive data mining has given rise to a new field in statistics: large-scale multiple simultaneous testing, or multiple comparisons. In such settings, one tests thousands or even millions of hypotheses simultaneously:
H1, H2, ..., Hn,
where associated with each hypothesis is a summary test statistic
X1, X2, ..., Xn.
A problem of particular interest is to estimate the proportion of non-null effects, i.e., the proportion of hypotheses that are untrue.
In this talk, we report some recent progress on estimating the proportion. We model each Xj as normally distributed with individual mean μj and individual variance σj², where the parameters satisfy (μj, σj) = (0, 1) if Hj is true, and (μj, σj) ≠ (0, 1) otherwise. We show that, under natural identifiability conditions, a universal oracle equivalence of the proportion can be constructed, which equals the true proportion for any n and any set of parameters. The oracle naturally yields real estimators, which are uniformly consistent for the proportion over a wide class of situations.
This talk is based on collaborative work with (alphabetically) Tony Cai, David Donoho, Mark Low, Jie Peng, and Pei Wang.
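As a toy illustration of the setting (not the estimator from the talk), one can simulate such a mixture and recover the proportion crudely from tail counts:

```python
import numpy as np

rng = np.random.default_rng(0)
n, pi1, mu = 100_000, 0.10, 3.0          # true proportion of non-nulls: 10%

is_nonnull = rng.random(n) < pi1
X = rng.normal(0.0, 1.0, n) + mu * is_nonnull   # null: N(0,1); non-null: N(3,1)

# Crude tail-count estimate: excess mass beyond |X| > 2 over what nulls give.
t = 2.0
null_tail = 0.0455                        # P(|N(0,1)| > 2)
pi1_hat = max(np.mean(np.abs(X) > t) - null_tail, 0.0)
print(pi1_hat)   # roughly 0.08 here: biased low, unlike a consistent oracle
```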
|
2022-11-29 14:09:31
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8316236138343811, "perplexity": 2368.012505225853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710698.62/warc/CC-MAIN-20221129132340-20221129162340-00528.warc.gz"}
|
http://smartistera.com/nootropics-genius-brain-pill.html
|
A week later: Golden Sumatran, 3 spoonfuls, a more yellowish powder. (I combined it with some tea dregs to hopefully cut the flavor a bit.) Had a paper to review that night. No (subjectively noticeable) effect on energy or productivity. I tried 4 spoonfuls at noon the next day; nothing except a little mental tension, for lack of a better word. I think that was just the harbinger of what my runny nose that day and the day before was, a head cold that laid me low during the evening.
Tuesday: I went to bed at 1am, and first woke up at 6am, and I wrote down a dream; the lucid dreaming book I was reading advised that waking up in the morning and then going back for a short nap often causes lucid dreams, so I tried that - and wound up waking up at 10am with no dreams at all. Oops. I take a pill, but the whole day I don’t feel so hot, although my conversation and arguments seem as cogent as ever. I’m also having a terrible time focusing on any actual work. At 8 I take another; I’m behind on too many things, and it looks like I need an all-nighter to catch up. The dose is no good; at 11, I still feel like at 8, possibly worse, and I take another along with the choline+piracetam (which makes a total of 600mg for the day). Come 12:30, and I disconsolately note that I don’t seem any better, although I still seem to understand the IQ essays I am reading. I wonder if this is tolerance to modafinil, or perhaps sleep catching up to me? Possibly it’s just that I don’t remember what the quasi-light-headedness of modafinil felt like. I feel this sort of zombie-like state without change to 4am, so it must be doing something, when I give up and go to bed, getting up at 7:30 without too much trouble. Some N-backing at 9am gives me some low scores but also some pretty high scores (38/43/66/40/24/67/60/71/54 or ▂▂▆▂▁▆▅▇▄), which suggests I can perform normally if I concentrate. I take another pill and am fine the rest of the day, going to bed at 1am as usual.
Choline is a nootropic: it enhances your ability to pay attention and learn efficiently,[18] probably because you use a lot of acetylcholine during mentally-demanding tasks, and choline helps you synthesize enough to work harder and go longer.[19] Choline also links to decreased brain inflammation in a dose-dependent manner — the more choline you eat, the less inflamed your brain tends to be.[20]
As you may or may not know, curcumin has become a darling of the nutrition world in the last several years, thanks to a flurry of research that indicates the turmeric derivative can do everything from support the brain to reduce painful body-wide inflammation to even support positive mood.
It’s that time of the year again. It’s Blue Monday. We’re halfway into January, trudging through the deepest and darkest of the winter months, as we try to keep our heads high after the Christmas festivities with the motivation of our New Year’s resolutions. Some of you may have never heard of Blue Monday and let’s just say you’re not exactly missing out.
Clinical psychiatrist Emily Deans has a private practice in Massachusetts and teaches at Harvard Medical School. She told me by phone that, in principle, there's "probably nothing dangerous" about the occasional course of nootropics for a hunting trip, finals week, or some big project. Beyond that, she suggests considering that it's possible to build up a tolerance to many neuroactive products if you use them often enough.
(As I was doing this, I reflected how modafinil is such a pure example of the money-time tradeoff. It’s not that you pay someone else to do something for you, which necessarily they will do in a way different from you; nor is it that you have exchanged money to free yourself of a burden of some future time-investment; nor have you paid money for a speculative return of time later in life like with many medical expenses or supplements. Rather, you have paid for 8 hours today of your own time.)
Supplements, medications, and coffee certainly might play a role in keeping our brains running smoothly at work or when we’re trying to remember where we left our keys. But the long-term effects of basic lifestyle practices can’t be ignored. “For good brain health across the life span, you should keep your brain active,” Sahakian says. “There is good evidence for ‘use it or lose it.’” She suggests brain-training apps to improve memory, as well as physical exercise. “You should ensure you have a healthy diet and not overeat. It is also important to have good-quality sleep. Finally, having a good work-life balance is important for well-being.” Try these 8 ways to get smarter while you sleep.
This supplement is dangerous and should not be sold. I have taken brain supplements for a while and each of them are very similar, EXCEPT for Addium. On the day I took Addium, many blood vessels in my hands burst, and two on my face burst. With their "proprietary blend" not detailing the amount of each ingredient (only listed in the aggregate of 500mg Proprietary Blend), you have no way of determining which ingredient may or may not be too much. I can only recommend staying away from this supplement.
Caffeine (Examine.com; FDA adverse events) is of course the most famous stimulant around. But consuming 200mg or more a day, I have discovered the downside: it is addictive and has a nasty withdrawal - headaches, decreased motivation, apathy, and general unhappiness. (It’s a little amusing to read academic descriptions of caffeine addiction[9]; if caffeine were a new drug, I wonder what Schedule it would be in and if people might be even more leery of it than modafinil.) Further, in some ways, aside from the ubiquitous placebo effect, caffeine combines a mix of weak performance benefits (Lorist & Snel 2008, Nehlig 2010) with some possible decrements, anecdotally and scientifically:
Do you sometimes feel like you are only half-there in your daily conversations because you lack concentration, or mental focus? With Cognizance you will no longer be wondering if the people conversing with you realize your lack of mental focus as you interact. This supplement helps by improving mental clarity and focus, boosting intelligence levels, memory function, and increasing your level of concentration and alertness. As an added bonus, Cognizance can provide you with an increased level of energy and improved mood.

COGNIZANCE BENEFITS:
- Improves mood
- Boosts memory function
- Raises intelligence levels
- Increases physical energy
- Improves mental clarity
- Boosts ability to focus
- Improves concentration
- Increases level of alertness

The proprietary ingredients in Cognizance improve the functioning of the mind and body in several ways. One ingredient, dimethylaminoethanol, is responsible for improving mood, boosting the function of the memory, raising intelligence levels, and increasing physical energy. Another, L-pyroglutamic acid, works to improve mental focus and concentration. These ingredients, combined with the others in Cognizance, allow it to offer these benefits and more.
Farah was one of several scholars who contributed to a recent article in Nature, "Towards Responsible Use of Cognitive Enhancing Drugs by the Healthy". The optimistic tone of the article suggested that some bioethicists are leaning towards endorsing neuroenhancement. "Like all new technologies, cognitive enhancement can be used well or poorly," the article declared. "We should welcome new methods of improving our brain function. In a world in which human workspans and lifespans are increasing, cognitive-enhancement tools - including the pharmacological - will be increasingly useful for improved quality of life and extended work productivity, as well as to stave off normal and pathological age-related cognitive declines. Safe and effective cognitive enhancers will benefit both the individual and society." The BMA report offered a similarly upbeat observation: "Universal access to enhancing interventions would bring up the baseline level of cognitive ability, which is generally seen to be a good thing."
Essential fatty acids (EFAs) cannot be made by the body which means they must be obtained through diet. The most effective omega-3 fats occur naturally in oily fish in the form of EPA and DHA. Good plant sources include linseed (flaxseed), soya beans, pumpkin seeds, walnuts and their oils. These fats are important for healthy brain function, the heart, joints and our general wellbeing. What makes oily fish so good is that they contain the active form of these fats, EPA and DHA, in a ready-made form, which enables the body to use it easily. The main sources of oily fish include salmon, trout, mackerel, herring, sardines, pilchards and kippers. Low DHA levels have been linked to an increased risk of dementia, Alzheimer's disease and memory loss whilst having sufficient levels of both EPA and DHA is thought to help us manage stress and helps make the good mood brain chemical, serotonin. If you're vegetarian or vegan, you may wish to add seeds like linseed and chia to your diet, or consider a plant-based omega-3 supplement. If you are considering taking a supplement speak to your GP first.
Difficulty concentrating. As mentioned previously, this may not be a direct result of age—though it can be a common side-effect of struggling with fatigue and brain fog. When it takes more mental energy to think, it is harder to stay with it for a long time. Many of us also are surrounded by distractions clambering for our limited attention. Modern life is fast-paced, stressful, and overcrowded.
Participants (n=205) [young adults aged 18-30 years] were recruited between July 2010 and January 2011, and were randomized to receive either a daily 150 µg (0.15mg) iodine supplement or daily placebo supplement for 32 weeks…After adjusting for baseline cognitive test score, examiner, age, sex, income, and ethnicity, iodine supplementation did not significantly predict 32 week cognitive test scores for Block Design (p=0.385), Digit Span Backward (p=0.474), Matrix Reasoning (p=0.885), Symbol Search (p=0.844), Visual Puzzles (p=0.675), Coding (p=0.858), and Letter-Number Sequencing (p=0.408).
The reality is that cognitive impairment and dementia are also on the rise, and sometimes symptoms of forgetfulness and confusion are not so innocuous. According to the Alzheimer’s Association, someone in the United States is diagnosed with Alzheimer’s disease every 66 seconds. By the middle of this century, that is expected to grow to every 33 seconds.
Stayed up with the purpose of finishing my work for a contest. This time, instead of taking the pill as a single large dose (I feel that after 3 times, I understand what it’s like), I will take 4 doses over the new day. I took the first quarter at 1 AM, when I was starting to feel a little foggy but not majorly impaired. Second dose, 5:30 AM; feeling a little impaired. 8:20 AM, third dose; as usual, I feel physically a bit off and mentally tired - but still mentally sharp when I actually do something. Early on, my heart rate seemed a bit high and my limbs trembling, but it’s pretty clear now that that was the caffeine or piracetam. It may be that the other day, it was the caffeine’s fault as I suspected. The final dose was around noon. The afternoon crash wasn’t so pronounced this time, although motivation remains a problem. I put everything into finishing up the spaced repetition literature review, and didn’t do any n-backing until 11:30 PM: 32/34/31/54/40%.
After 7 days, I ordered a kg of choline bitartrate from Bulk Powders. Choline is standard among piracetam-users because it is pretty universally supported by anecdotes about piracetam headaches, has support in rat/mice experiments[28], and also some human-related research. So I figured I couldn’t fairly test piracetam without some regular choline - the eggs might not be enough, might be the wrong kind, etc. It has a quite distinctly fishy smell, but the actual taste is more citrus-y, and it seems to neutralize the piracetam taste in tea (which makes things much easier for me).
I find this very troubling. The magnesium supplementation was harmful enough to do a lot of cumulative damage over the months involved (I could have done a lot of writing September 2013 - June 2014), but not so blatantly harmful as to be noticeable without a randomized blind self-experiment or at least systematic data collection - neither of which are common among people who would be supplementing magnesium. I would much prefer it if my magnesium overdose had come with visible harm (such as waking up in the middle of the night after a nightmare soaked in sweat), since then I’d know quickly and surely, as would anyone else taking magnesium. But the harm I observed in my data? For all I know, that could be affecting every user of magnesium supplements! How would we know otherwise?
At small effects like d=0.07, a nontrivial chance of negative effects, and an unknown level of placebo effects (this was non-blinded, which could account for any residual effects), this strongly implies that LLLT is not doing anything for me worth bothering with. I was pretty skeptical of LLLT in the first place, and if 167 days can’t turn up anything noticeable, I don’t think I’ll be continuing with LLLT usage and will be giving away my LED set. (Should any experimental studies of LLLT for cognitive enhancement in healthy people surface with large quantitative effects - as opposed to a handful of qualitative case studies about brain-damaged people - and I decide to give LLLT another try, I can always just buy another set of LEDs: it’s only ~$15, after all.)

Paul Phillips was unusual for a professional poker player. When he joined the circuit in the late 1990s he was already a millionaire: a twentysomething tech guy who helped found an internet portal called go2net and cashed in at the right moment. He was cerebral and at times brusque. On the international poker scene Phillips cultivated a geeky New Wave style. He wore vintage shirts in wild geometric patterns; his hair was dyed orange or silver one week, shaved off the next. Most unusual of all, Phillips talked freely about taking prescription drugs - Adderall and, especially, Provigil - in order to play better cards.

Ampakines are structurally derived from a popular nootropic called “aniracetam”. Their basic function is to activate AMPA glutamate receptors (AMPARs). Glutamate (a neurotransmitter) is the primary mediator of excitatory synaptic transmission in mammalian brains, which makes it crucial for synaptic plasticity (the adaptation of synapses, the space between neurons across which information is sent), learning and memory, so when you activate or stimulate glutamate receptors, you can trigger many of these functions. AMPARs are distributed across the central nervous system and are stimulated by incoming glutamate to begin the neuroenhancing benefits they’re often used for. But it is possible to have too much glutamate activity. When excess glutamate is produced, accumulates and binds to AMPARs, the result is excitotoxicity, which is a state of cell death (in the case of the central nervous system and your brain, neuron death) resulting from the toxic levels of excitatory amino acids. Excitotoxicity is believed to play a major role in the development of various degenerative neurological conditions such as schizophrenia, delirium and dementia.

I decided to try out day-time usage on 2 consecutive days, taking the 100mg at noon or 1 PM. On both days, I thought I did feel more energetic but nothing extraordinary (maybe not even as strong as the nicotine), and I had trouble falling asleep on Halloween, thinking about the meta-ethics essay I had been writing diligently on both days. Not a good use compared to staying up a night.

To our partners, community supporters, and funders: The Brainfood journey has taken us many places, and at each fork in the road we discovered an amazing network of youth advocates ready to help lift our work to the next level. Whether you donated pro-bono consulting hours, connected us to allies in the city, or came in to meet our students and see a class, you helped us build something really special. Thanks for believing in us.

Mosconi uses a pragmatic approach to improve your diet for brain health. The book is divided into three parts. The first provides information about the brain’s nutritional requirements. The second teaches you how to eat better. And the third tests you to find out where you are in terms of feeding yourself well. This includes an 80-question test that grades you as either Beginner/Intermediate/Advanced. “Beginner” means you have little food awareness and eat a lot of processed food. “Advanced” means you eat very healthily, mainly organic foods. And “Intermediate” falls in between.

But he has also seen patients whose propensity for self-experimentation to improve cognition got out of hand. One chief executive he treated, Ngo said, developed an unhealthy predilection for albuterol, because he felt the asthma inhaler medicine kept him alert and productive long after others had quit working. Unfortunately, the drug ended up severely imbalancing his electrolytes, which can lead to dehydration, headaches, vision and cardiac problems, muscle contractions and, in extreme cases, seizures.

“Most people assume that because it’s a supplement, it can’t be bad for you because it’s natural,” says Louis Kraus, M.D., a psychiatrist with Rush University Medical Center in Chicago. In 2016, he chaired a committee that investigated nootropics for the American Medical Association. After reviewing the science, the committee found little to no evidence to support the efficacy or safety of nootropics.

This calculation - reaping only $\frac{7}{9}$ of the naive expectation - gives one pause. How serious is the sleep rebound? In another article, I point to a mouse study finding that sleep deficits can take 28 days to repay. What if the gain from modafinil is entirely wiped out by repayment and all it did was defer sleep? Would that render modafinil a waste of money? Perhaps. Thinking on it, I believe deferring sleep is of some value, but I cannot decide whether it is a net profit.

In August 2011, after winning the spaced repetition contest and finishing up the Adderall double-blind testing, I decided the time was right to try nicotine again. I had since learned that e-cigarettes use nicotine dissolved in water, and that nicotine-water was a vastly cheaper source of nicotine than either gum or patches. So I ordered 250ml of water at 12mg/ml (total cost: $18.20). A cigarette apparently delivers around 1mg of nicotine, so half a ml would be a solid dose of nicotine, making that ~500 doses. Plenty to experiment with. The question is, besides the stimulant effect, nicotine also causes habit formation; what habits should I reinforce with nicotine? Exercise, and spaced repetition seem like 2 good targets.
A common dose for this combination is 500 milligrams per day of Lion’s Mane, 240 milligrams per day of Ginkgo Biloba, and 100 milligrams twice per day of Bacopa Monnieri. Consider buying each ingredient in bulk to have stock and experiment with. If you are not experiencing positive results after 12 weeks, try adjusting the dosages in small increments. For example, you can start by adjusting Bacopa Monnieri to 150 milligrams twice per day for a couple weeks. Be patient: the end result is worth the trial and error.
I posted a link to the survey on my Google+ account, and inserted the link at the top of all gwern.net pages; 51 people completed all 11 binary choices (most of them coming from North America & Europe), which seems adequate since the 11 questions are all asking the same question, and 561 responses to one question is quite a few. A few different statistical tests seem applicable: a chi-squared test whether there’s a difference between all the answers, a two-sample test on the averages, and most meaningfully, summing up the responses as a single pair of numbers and doing a binomial test:
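For illustration, the last of those tests could look like this in R; the counts below are hypothetical placeholders, not the survey's actual tallies:

```r
# Hypothetical totals: suppose 310 of the 561 binary responses favored
# one option; binom.test checks this against the 50/50 null.
binom.test(x = 310, n = 561, p = 0.5)

# The chi-squared test would instead take an 11x2 matrix of per-question
# yes/no counts (per_question_counts is a hypothetical name):
# chisq.test(per_question_counts)
```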
Here’s how it works: Donepezil boosts serotonin and acetylcholine in the brain, chemicals that are usually found in high concentrations in the brains of young children and that naturally decrease with age. As a cholinesterase inhibitor, Donepezil boosts brain function by increasing the amount of acetylcholine around nerve endings. In dementia and Alzheimer’s patients, the drug has been shown to improve memory function.
Gibson and Green (2002), talking about a possible link between glucose and cognition, wrote that research in the area …is based on the assumption that, since glucose is the major source of fuel for the brain, alterations in plasma levels of glucose will result in alterations in brain levels of glucose, and thus neuronal function. However, the strength of this notion lies in its common-sense plausibility, not in scientific evidence… (p. 185).
50 pairs of active/placebos or 100 days. With 120 tablets and 4 tablets used up, that leaves me 58 doses. That might seem adequate except the paired t-test approximation is overly-optimistic, and I also expect the non-randomized non-blinded correlation is too high which means that is overly-optimistic as well. The power would be lower than I’d prefer. I decided to simply order another bottle of Solgar’s & double the sample size to be safe.
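For intuition, the power of such a design can be checked in R with power.t.test; the effect size here is an illustrative assumption, not an estimate from my data:

```r
# Power of a paired t-test with 50 pairs, assuming (hypothetically) a
# standardized effect of d = 0.3, i.e. delta/sd = 0.3:
power.t.test(n = 50, delta = 0.3, sd = 1, sig.level = 0.05, type = "paired")

# Inverted: how many pairs would 80% power require at the same assumed effect?
power.t.test(power = 0.80, delta = 0.3, sd = 1, type = "paired")
```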
Though coffee gives instant alertness and many cups are downed throughout the day, the effect lasts only a short while. People who drink coffee every day may develop caffeine tolerance, which is why it is still important to control your daily intake; it is advisable not to consume more than 300mg of caffeine a day. Caffeine, the world’s favourite nootropic, has very few side effects, but abnormally high consumption can result in nausea, restlessness, nervousness and hyperactivity. This is why people who need increased sharpness would rather take L-theanine, or some other nootropic, along with caffeine. Today, you can find various smart drugs that contain caffeine. OptiMind, one of the best and most sought-after nootropics in the U.S., contains caffeine and is considered more effective and efficient than other focus drugs on the market today.
Reason: Acetyl-L-carnitine can protect the brain from neurotoxicity. It can also ward off oxygen deprivation. Acetyl-L-carnitine can even preserve cells energy-producing mitochondria. Plus, it can rejuvenate mental and physical function. Dosages for studies have been in the 1,500 – 4,000 mg range. These are divided into two or three doses. However, we recommend no more than 1,000 mg of acetyl-L-carnitine a day without medical supervision.
And many people swear by them. Neal Thakkar, for example, is an entrepreneur from Marlboro, New Jersey, who claims nootropics improved his life so profoundly that he can’t imagine living without them. His first breakthrough came about five years ago, when he tried a piracetam/choline combination, or “stack,” and was amazed by his increased verbal fluency. (Piracetam is a cognitive-enhancement drug permitted for sale in the U. S. as a dietary supplement; choline is a natural substance.)
This was so unexpected that I wondered if I had somehow accidentally put the magnesium pills into the placebo pill baggie or had swapped values while typing up the data into a spreadsheet, and checked into that. The spreadsheet accorded with the log above, which rules out data entry mistakes; and looking over the log, I discovered that some earlier slip-ups were able to rule out the pill-swap: I had carelessly put in some placebo pills made using rice, in order to get rid of them, and that led to me being unblinded twice before I became irritated enough to pick them all out of the bag of placebos - but how could that happen if I had swapped the groups of pills?
It all comes down to my personal investigation and exploration into how one can use a variety of compounds to enhance the mind, all while combining ancestral wisdom and herbs such as bacopa and gingko with modern science and tactics such as LSD and racetams. The fact is, I’ve taken a deep dive in the wonderful world of smart drugs, nootropics and psychedelics, and have had the opportunity to interview some of the brightest minds in this unique field of brain enhancement on my podcast. So in this article, I’ll spill the beans on it all, including how to navigate the oft-confusing world of smart drugs and nootropics, the best brain supplement stacks I’ve discovered and experimented with, how to procure and microdose psychedelics and much more.
Alex's sense of who uses stimulants for so-called "non-medical" purposes is borne out by two dozen or so scientific studies. In 2005 a team led by Sean Esteban McCabe, a professor at the University of Michigan, reported that in the previous year 4.1% of American undergraduates had taken prescription stimulants for off-label use - at one school the figure was 25%, while a 2002 study at a small college found that more than 35% of the students had used prescription stimulants non-medically in the previous year.
Looking at the prices, the overwhelming expense is for modafinil. It’s a powerful stimulant - possibly the single most effective ingredient in the list - but dang expensive. Worse, there’s anecdotal evidence that one can develop tolerance to modafinil, so we might be wasting a great deal of money on it. (And for me, modafinil isn’t even very useful in the daytime: I can’t even notice it.) If we drop it, the cost drops by a full $800 from$1761 to $961 (almost halving) and to$0.96 per day. A remarkable difference, and if one were genetically insensitive to modafinil, one would definitely want to remove it.
Hericium erinaceus (Examine.com) was recommended strongly by several on the ImmInst.org forums for its long-term benefits to learning, apparently linked to Nerve growth factor. Highly speculative stuff, and it’s unclear whether the mushroom powder I bought was the right form to take (ImmInst.org discussions seem to universally assume one is taking an alcohol or hotwater extract). It tasted nice, though, and I mixed it into my sleeping pills (which contain melatonin & tryptophan). I’ll probably never know whether the $30 for 0.5lb was well-spent or not.

28,61,36,25,61,57,39,56,23,37,24,50,54,32,50,33,16,42,41,40,34,33,31,65,23,36,29,51,46,31,45,52,30,50,29,36,57,60,34,48,32,41,48,34,51,40,53,73,56,53,53,57,46,50,35,50,60,62,30,60,48,46,52,60,60,48,47,34,50,51,45,54,70,48,61,43,53,60,44,57,50,50,52,37,55,40,53,48,50,52,44,50,50,38,43,66,40,24,67,60,71,54,51,60,41,58,20,28,42,53,59,42,31,60,42,58,36,48,53,46,25,53,57,60,35,46,32,26,68,45,20,51,56,48,25,62,50,54,47,42,55,39,60,44,32,50,34,60,47,70,68,38,47,48,70,51,42,41,35,36,39,23,50,46,44,56,50,39

Burke says he definitely got the glow. “The first time I took it, I was working on a business plan. I had to juggle multiple contingencies in my head, and for some reason a tree with branches jumped into my head. I was able to place each contingency on a branch, retract and go back to the trunk, and in this visual way I was able to juggle more information.”

Microdosing involves ingesting small amounts of psychedelics to induce a very subtle physical and mental effect accompanied by a very noticeable, overall positive, health effect. When you take a microdose of a psychedelic, it is typically referred to as a sub-perceptual dose. A sub-perceptual dose will not have a major impact on your ability to function normally, but the effect will definitely be present in your mood and behavior. The microdose of a particular psychedelic is correlated with the lowest dose that will produce a noticeable effect, which is also known as the threshold dose. Since the goal is not to get a hallucinogenic effect, a microdose can be well below the psychedelic’s threshold dose. By integrating the correct doses of psychedelics into your weekly routine, you can achieve higher creativity levels, more energy, improved mood, increased focus, and better relational skills. There is a growing body of research that shows microdosing to improve depression, anxiety, PTSD, and emotional imbalance, help with alcohol and tobacco addiction, and decrease ADD and ADHD behaviors.

✅ ENERGIZE - REJUVENATE & SUPPORT YOUR BRAIN WITH OUR UNIQUE DAY & NIGHT FORMULA - Steele Spirit Neuro Brain Clarity is an All Natural 24hr Nootropics brain booster, formulated by an anti-ageing expert. Unlike other Brain Supplements, it provides your brain with the Day and Night support it requires to help you function better during the day, and then supports learning, memory retention, repair and rejuvenation while you sleep. An explanation for each ingredient is in the "Product Description."

My first time was relatively short: 10 minutes around the F3/F4 points, with another 5 minutes to the forehead. Awkward holding it up against one’s head, and I see why people talk of LED helmets; it’s boring waiting. No initial impressions except maybe feeling a bit mentally cloudy, but that goes away within 20 minutes of finishing when I took a nap outside in the sunlight. Lostfalco says “Expectations: You will be tired after the first time for 2 to 24 hours. It’s perfectly normal.”, but I’m not sure - my dog woke me up very early and disturbed my sleep, so maybe that’s why I felt suddenly tired. On the second day, I escalated to 30 minutes on the forehead, and tried an hour on my finger joints. No particular observations except less tiredness than before and perhaps less joint ache. Third day: skipped forehead stimulation, exclusively knee & ankle. Fourth day: forehead at various spots for 30 minutes; tiredness. 5/6/7/8th day (11/12/13/4): skipped. Ninth: forehead, 20 minutes. No noticeable effects.

Brain-enhancing drugs - the steroids of the mental world - are compounds, artificial or natural, that are not recommended for casual consumption. If taken over a long period of time, they can and will result in permanent and debilitating damage, and if taken wrongly, they can and will result in injury, illness, and death. So far from being the best brain pill that they loop around and punch the actual best brain pill in the face.

In this large population-based cohort, we saw consistent robust associations between cola consumption and low BMD in women. The consistency of pattern across cola types and after adjustment for potential confounding variables, including calcium intake, supports the likelihood that this is not due to displacement of milk or other healthy beverages in the diet. The major differences between cola and other carbonated beverages are caffeine, phosphoric acid, and cola extract. Although caffeine likely contributes to lower BMD, the result also observed for decaffeinated cola, the lack of difference in total caffeine intake across cola intake groups, and the lack of attenuation after adjustment for caffeine content suggest that caffeine does not explain these results. A deleterious effect of phosphoric acid has been proposed (26). Cola beverages contain phosphoric acid, whereas other carbonated soft drinks (with some exceptions) do not.

For starters, it’s one of the most antioxidant-rich foods known to man, including vitamin C and vitamin K and fiber. Because of their high levels of gallic acid, blueberries are especially good at protecting our brains from degeneration and stress. Get your daily dose of brain berries in an Omega Blueberry Smoothie, Pumpkin Blueberry Pancakes or in a Healthy Blueberry Cobbler.

One thing I did do was piggyback on my Noopept self-experiment: I blinded & randomized the Noopept for a real experiment, but simply made sure to vary the Magtein without worrying about blinding or randomizing it. (The powder is quite bulky.) The correlation the experiment turned in was an odds-ratio of 1.9; interesting and in the right direction (higher is better), but since the magnesium part wasn’t random or blind, not a causal result.
Eventually one morning you wake up and realize it has been years since you felt like yourself. It takes so much more effort than it did before to string thoughts together. Your clarity is gone, you can never focus for more than two seconds at a time, and penetrating insights have been replaced by a swamp of distraction, confusion, and forgetfulness. Your thoughts feel frayed, worn—like ragged fabric flapping in the breeze.
The evidence? In small studies, healthy people taking modafinil showed improved planning and working memory, and better reaction time, spatial planning, and visual pattern recognition. A 2015 meta-analysis claimed that “when more complex assessments are used, modafinil appears to consistently engender enhancement of attention, executive functions, and learning” without affecting a user’s mood. In a study from earlier this year involving 39 male chess players, subjects taking modafinil were found to perform better in chess games played against a computer.
“By drawing on more than fifteen years of scientific research and experience, Dr. Mosconi provides expert advice to prevent medical decline and sharpen memory. Her brain healthy recipes will help you maintain peak cognitive performance well into old age and therefore delay and may even prevent the appearance of debilitating diseases like Alzheimer’s.”
We felt that the price for this product was OK but were concerned about how cheap it was on some websites. Our experience suggests that this could reflect the standard of the product: the quality of ingredients may be poor and the dosage low so that sellers can cut prices, which leaves consumers having to take more to reach the same level as other products. This can lead to all sorts of issues regarding overdosing, so for these reasons, until further testing can be carried out, we could not place this higher on our scoreboard.
In avoiding experimenting with more Russian Noopept pills and using instead the easily-purchased powder form of Noopept, there are two opposing considerations: Russian Noopept is reportedly the best, so we might expect anything I buy online to be weaker or impure or inferior somehow, making the effect size smaller than in the pilot experiment; but by buying my own supply & using powder I can double or triple the dose to 20mg or 30mg (to compensate for the original under-dosing of 10mg), making the effect size larger than in the pilot experiment.
I’ve spent over a million dollars hacking my own biology. The lion’s share has gone to making my brain produce as much energy as it can. I even wrote a book, Head Strong, about neurofeedback, oxygen deprivation, supplements, deeper sleep, meditation, cold exposure, and about a dozen other brain hacks, and how you can use them to make your brain stronger than you thought possible.
She speaks from professional and personal experience. When she first moved to the United States from Italy at age 24 she was struck by how shifting from the Mediterranean-style diet she grew up on to a standard American diet negatively impacted her physical health and work performance. The experience led her to more closely study nutrition and the link between diet and brain health. In this excerpt from a longer interview, she discusses the brain foods you should be eating.
And when it comes to your brain, it’s full of benefits, too. Coconut oil works as a natural anti-inflammatory, suppressing cells responsible for inflammation. It can help with memory loss as you age and destroy bad bacteria that hangs out in your gut. (5) Get your dose of coconut oil in this Baked Grouper with Coconut Cilantro Sauce or Coconut Crust Pizza.
The acid is also known to restore the vitamin C and E levels in the body. Alpha Lipoic Acid’s efficient antioxidant property protects brain cells from damage during any injury. This helps in making sure that your brain functions normally even if there is any external or internal brain injury. OptiMind, one of the best nootropic supplements that you can find today contains Alpha Lipoic Acid that can help in enhancing your brain’s capabilities.
The ‘Brain-Gut Axis’ is a term used to describe the two-way communication system between our digestive tract and the brain. A growing body of research into this axis demonstrates how much influence the gut can have over the brain and vice versa (1). When we speak about reactions to foods, we most commonly understand them as immediate and often dangerous allergic responses, such as the constriction of the throat and trouble breathing, or dizziness and fainting. It is usually easy to pinpoint the food that causes these reactions because of the immediate immune system response, caused by a type of immune cell known as IgE antibodies. In contrast to this, food intolerances are mediated by IgG antibodies and these reactions can take up to 48 hours to have an effect. Symptoms related to IgG reactions can often be manifested as chronic issues like joint ache, IBS and depression or anxiety, which are often overlooked and not associated with what we eat.
Thursday: 3g piracetam/4g choline bitartrate at 1; 1 200mg modafinil at 2:20; noticed a leveling of fatigue by 3:30; dry eyes? no bad after taste or anything. a little light-headed by 4:30, but mentally clear and focused. wonder if light-headedness is due simply to missing lunch and not modafinil. 5:43: noticed my foot jiggling - doesn’t usually jiggle while in piracetam/choline. 7:30: starting feeling a bit jittery & manic - not much or to a problematic level but definitely noticeable; but then, that often happens when I miss lunch & dinner. 12:30: bedtime. Can’t sleep even with 3mg of melatonin! Subjectively, I toss & turn (in part thanks to my cat) until 4:30, when I really wake up. I hang around bed for another hour & then give up & get up. After a shower, I feel fairly normal, strangely, though not as good as if I had truly slept 8 hours. The lesson here is to pay attention to wikipedia when it says the half-life is 12-15 hours! About 6AM I take 200mg; all the way up to 2pm I feel increasingly less energetic and unfocused, though when I do apply myself I think as well as ever. Not fixed by food or tea or piracetam/choline. I want to be up until midnight, so I take half a pill of 100mg and chew it (since I’m not planning on staying up all night and I want it to work relatively soon). From 4-12PM, I notice that today as well my heart rate is elevated; I measure it a few times and it seems to average to ~70BPM, which is higher than normal, but not high enough to concern me. I stay up to midnight fine, take 3mg of melatonin at 12:30, and have no trouble sleeping; I think I fall asleep around 1. Alarm goes off at 6, I get up at 7:15 and take the other 100mg. Only 100mg/half-a-pill because I don’t want to leave the half laying around in the open, and I’m curious whether 100mg + ~5 hours of sleep will be enough after the last 2 days. Maybe next weekend I’ll just go without sleep entirely to see what my limits are.
Vitamin C has long been thought to have the power to increase mental agility, and some research suggests that a deficiency may be a risk factor for age-related brain degeneration including dementia and Alzheimer's. Furthermore, interesting studies demonstrate that vitamin C may be useful in managing anxiety and stress. One of the best sources of this vital vitamin are blackcurrants. Others include red peppers, citrus fruits such as oranges and broccoli.
The U. S. nootropics industry was valued at more than $1.3 billion in 2015 and is projected to reach$6 billion by 2024. This growth is due in part to slick marketing from biohacking “experts” such as Dave Asprey (founder of Bulletproof) and Josiah Zayner, Ph.D. (CEO of the Odin), who’ve built big social-media and podcast followings as well as customer bases. At the grassroots level, there are meetups across the country like the one at Idea Coffee, plus a vibrant online community.
Amphetamines are synthetic stimulants and were first created in 1887. These are among the most powerful stimulant-based smart drugs in use and work primarily by targeting dopamine, serotonin and noradrenaline/norepinephrine. Given what you’ve already learned about the dopaminergic effects of modafinil and methylphenidate, you should already be wary of amphetamines’ targeting of dopamine. Hormones and neurotransmitters such as dopamine, serotonin, norepinephrine and histamine are known as monoamines, and amphetamines block their uptake by being taken up instead themselves by monoamine transporters. This leads to higher levels of monoamines in synapses, and consequently to the psychostimulant effects characteristic of drugs like Adderall.
Pre and Post-Natal Depression are both complex conditions that can have multifactorial underlying drivers, including genetic and environmental influences. These are currently poorly investigated and the gold standard of treatment is often medication to help stabilise mood. Whilst SSRIs and other types of antidepressants have proven to be helpful for many, they do not address potential causes or drivers of poor mental health and can often mask symptoms. Antidepressants are also not regularly recommended during pregnancy, which is why being more mindful of nutrition and lifestyle habits can be a safer option for you and your baby. There are some natural, evidence-based steps you can take to help support optimal mental wellbeing:
If you’re a coffee or tea drinker, keep sipping: Caffeine may help protect against age-related cognitive decline. “Studies have indicated that caffeine—for example, roughly 500 milligrams daily, the equivalent of about five cups of coffee—may help stave off memory issues in humans,” says Bruce Citron, PhD, a neuroscientist at Bay Pines VA Healthcare System and the USF Morsani College of Medicine in Florida. (Experts warn against taking caffeine supplements, which flood your body with a lot of caffeine all at once.)
And without those precious nutrients, your brain will start to wither. In a recent Bulletproof Radio podcast episode [iTunes], I talked with neuroscientist Dale Bredesen about why neurodegeneration happens. One of the three most common causes of brain aging is a lack of specific brain nutrients (check out the episode to hear about the other two main causes of brain aging, and what you can do about them).
Still, putting unregulated brain drugs into my system feels significantly scarier than downing a latte or a Red Bull—not least because the scientific research on nootropics’ long-term effects is still so thin. One 2014 study found that Ritalin, modafinil, ampakines, and other similar stimulants could eventually reduce the “plasticity” of some of the brain’s neural networks by providing them with too much dopamine, glutamate and norepinephrine, and potentially cause long-term harm in young people whose brains were still developing. (In fact, in young people, the researchers wrote, these stimulants could actually have the opposite effect the makers intended: “Healthy individuals run the risk of pushing themselves beyond optimal levels into hyperdopaminergic and hypernoradrenergic states, thus vitiating the very behaviors they are striving to improve.”) But the researchers found no evidence that normal doses of these drugs were harmful when taken by adults.
There’s been a lot of talk about the ketogenic diet recently—proponents say that minimizing the carbohydrates you eat and ingesting lots of fat can train your body to burn fat more effectively. It’s meant to help you both lose weight and keep your energy levels constant. The diet was first studied and used in patients with epilepsy, who suffered fewer seizures when their bodies were in a state of ketosis. Because seizures originate in the brain, this discovery showed researchers that a ketogenic diet can definitely affect the way the brain works. Brain hackers naturally started experimenting with diets to enhance their cognitive abilities, and now a company called HVMN even sells ketone esters in a bottle; to achieve these compounds naturally, you’d have to avoid bread and cake.
Smart drugs offer significant memory enhancing benefits. Clinical studies of the best memory pills have shown gains to focus and memory. Individuals seek the best quality supplements to perform better for higher grades in college courses or become more efficient, productive, and focused at work for career advancement. It is important to choose a high quality supplement to get the results you want.
The task of building a better mousetrap just got a lot harder. Scientists at Princeton University recently created a strain of smarter mice by inserting a gene that boosts the activity of brain cells. The mice can learn to navigate mazes and find or recognize objects faster than run-of-the-mill rodents. The news, announced in the Sept. 2, 1999 issue of the journal Nature, raises the possibility that genetic engineers may someday be able to help humans learn and remember faster, too.
Besides Adderall, I also purchased on Silk Road 5x250mg pills of armodafinil. The price was extremely reasonable, 1.5btc or roughly $23 at that day’s exchange rate; I attribute the low price to the seller being new and needing feedback, and offering a discount to induce buyers to take a risk on him. (Buyers bear a large risk on Silk Road since sellers can easily physically anonymize themselves from their shipment, but a buyer can be found just by following the package.) Because of the longer active-time, I resolved to test the armodafinil not during the day, but with an all-nighter.
Its high levels of collagen help reduce intestinal inflammation, and healing amino acids like proline and glycine keep your immune system functioning properly and help improve memory. Bone broth is what I prescribe most frequently to my patients because it truly helps heal your body from the inside out. You’ll also be surprised at how simple and economical it is to make at home with my Beef Bone Broth Recipe.
The nootropics community is surprisingly large and involved. When I wade into forums and the nootropics subreddit, I find members trading stack recipes and notifying each other of newly synthesized compounds. Some of these “psychonauts” seem like they’ve studied neuroscience; others appear to be novices dipping their toes into the world of cognitive enhancement. But all of them have the same goal: amplifying the brain’s existing capabilities without screwing anything up too badly. It’s the same impulse that grips bodybuilders—the feeling that with small chemical tweaks and some training, we can squeeze more utility out of the body parts we have. As Taylor Hatmaker of the Daily Dot recently wrote, “Together, these faceless armchair scientists seek a common truth—a clean, unharmful way to make their brains better—enforcing their own self-imposed safety parameters and painstakingly precise methods, all while publishing their knowledge for free, in plain text, to relatively crude, shared databases."
I don’t believe there’s any need to control for training with repeated within-subject sampling, since there will be as many samples on both control and active days drawn from the later trained period as with the initial untrained period. But yes, my D5B scores seem to have plateaued pretty much and only very slowly increase; you can look at the stats file yourself.
Piracetam is used to increase memory, learning, and concentration. It is not reported to be toxic even at high doses, but healthy people are reported to not get that much of a boost from it, and it is understood to be most effective for older people. It’s been found to reduce the chances of a breath-holding spell in children, enhance cellular membrane fluidity, and prevent blood clotting on par with aspirin.
With the new wave of mindful eating, I feel like we're getting a step closer to eliminate the "diet culture" that is constantly sending us messages that our bodies aren't enough, how we need to comply with certain beauty standards, and restrict ourselves from certain meals because they affect the way we look. An important shift needs to be made in the latter: we should pay attention to the way food makes us feel, not to the way it makes us look.
In 2011, as part of the Silk Road research, I ordered 10x100mg Modalert (5btc) from a seller. I also asked him about his sourcing, since if it was bad, it’d be valuable to me to know whether it was sourced from one of the vendors listed in my table. He replied, more or less: “I get them from a large Far Eastern pharmaceuticals wholesaler. I think they’re probably the supplier for a number of the online pharmacies.” 100mg seems likely to be too low, so I treated this shipment as 5 doses:
But, thanks to the efforts of a number of remarkable scientists, researchers and plain-old neurohackers, we are beginning to put together a “whole systems” model of how all the different parts of the human brain work together and how they mesh with the complex regulatory structures of the body. It’s going to take a lot more data and collaboration to dial this model in, but already we are empowered to design stacks that can meaningfully deliver on the promise of nootropics “to enhance the quality of subjective experience and promote cognitive health, while having extremely low toxicity and possessing very few side effects.” It’s a type of brain hacking that is intended to produce noticeable cognitive benefits.
|
2018-12-14 10:04:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3195810914039612, "perplexity": 2985.5076468598463}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825512.37/warc/CC-MAIN-20181214092734-20181214114234-00250.warc.gz"}
|
https://physics.stackexchange.com/questions/108008/lie-algebra-and-lie-group-about-quantum-harmonic-oscillator/108096
|
# Lie algebra and Lie group about quantum harmonic oscillator
We know that in the quantum harmonic oscillator, the operators $H=a^\dagger a$, $a^\dagger$, $a$, and $1$ span a Lie algebra, where $a$ and $a^\dagger$ are the annihilation and creation operators and $H$ is the Hamiltonian operator.
$$[H,a^\dagger]= a^\dagger$$ $$[H,a]=-a$$ $$[a,a^\dagger]=1$$ So these four operators, $H=a^\dagger a$, $a^\dagger$, $a$, $1$, span a Lie algebra, because the commutator satisfies closure and the Jacobi identity.
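As a quick check, all three brackets follow from $[a,a^\dagger]=1$ alone, using $[AB,C]=A[B,C]+[A,C]B$:
$$[H,a^\dagger]=[a^\dagger a,a^\dagger]=a^\dagger[a,a^\dagger]=a^\dagger\:,\qquad [H,a]=[a^\dagger a,a]=[a^\dagger,a]\,a=-a\:.$$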
We know that for any Lie algebra $\mathscr{G}$ there exists, up to differences in global topology, essentially one Lie group $G$ whose Lie algebra is $\mathscr{G}$ (the simply connected one is unique).
So what is the Lie group whose Lie algebra is spanned by $\{H=a^\dagger a, a^\dagger, a, 1\}$?
I apologize, this is my third correction to my answer. This question is very subtle indeed. I hope this answer is the ultimate one!
First of all, if you want to take advantage of Lie's theorem you mention (sometimes called the third Lie theorem), the Lie algebra has to be real, as it must be the Lie algebra of a real Lie group. Then, if you are interested in quantum mechanics applications, that is, if you wish the given generators to also be generators of a unitary representation of a Lie group, the generators must at least be Hermitian, and $a$, $a^\dagger$ are not.
So you first have to pass to anti self-adjoint generators (*), for instance, introducing two constants $\omega, m >0$:
$$-iI,-iH,-iP, -iK:= -imX\qquad (1)$$
where, up to real factors (so without changing the real Lie algebra), $X$ and $P$ are given by $a+a^\dagger$ and $i(a-a^\dagger)$ as is well known.
$m$ has the physical meaning of the mass of the particle, and $H = \hbar \omega(a^\dagger a + \frac{1}{2}I)$ can be re-arranged to:
$$H = \frac{1}{2m}P^2 + \frac{m\omega^2}{2}X^2$$
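(To see this explicitly, one may restore the conventional factors (an assumed normalization, since the text above only fixes $X$ and $P$ up to real factors), taking $X=\sqrt{\hbar/2m\omega}\,(a+a^\dagger)$ and $P=i\sqrt{m\hbar\omega/2}\,(a^\dagger-a)$; then, using $[a,a^\dagger]=1$,
$$\frac{1}{2m}P^2+\frac{m\omega^2}{2}X^2=\frac{\hbar\omega}{4}\left[(a+a^\dagger)^2-(a^\dagger-a)^2\right]=\hbar\omega\left(a^\dagger a+\tfrac{1}{2}I\right)\:.)$$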
Operators (1) are, in fact, essentially self-adjoint on the dense set made of finite linear combinations of vectors $|n\rangle$, eigenstates of $H$.
Classically, the Galileo group in one dimension includes time translations, space translations along $x$, and Galilean boosts along $x$. If we think of the point $(x,p)$ in phase space as the vector $(1,1,x,p) \in \mathbb R^4$, and denote the generic element of $G$ by a triple $(t,a,v)$ (time translation + space translation + boost), then $G$ acts on the system as
$$(1,1, x, p)^t \mapsto A(t,a,v) (1,1, x, p)^t$$
where, for the harmonic oscillator system, $A(t,a,v)$ is (barring errors in computations) the $4\times 4$ real matrix
$$A(t,a,v) = \begin{bmatrix} I & 0 \\ R_{t}T_{a,v}& R_t \end{bmatrix}$$
where $R_t$ and $T_{a,v}$ are $2\times 2$ respectively matrices defined as:
$$R_t = \begin{bmatrix} \cos \omega t & -\frac{\sin \omega t}{m\omega} \\ m\omega \sin \omega t & \cos \omega t \end{bmatrix}$$
and
$$T_{a,v} = \begin{bmatrix} a & 0 \\ 0 & mv \end{bmatrix}$$
In this way, we have $3$ generators $h, \pi, k$, obtained by taking the derivative of $A(t,a,v)$ respectively in $t$, $a$ and $v$ at $(0,0,0)$. The commutation relations of these generators are the same as for $H,P,K$, with the following exception: $$[\pi, k]=0\quad \mbox{instead of}\quad [\pi, k] = m\:,$$ to be compared with: $$[-iP,-iK]= - m(-iI)\:.$$ Notice that this commutator is just a number, so that, when you exponentiate the generators, it gives rise to a phase which commutes with all operators. In other words, if you wish to construct a unitary representation of $G$ acting in the Hilbert space of the harmonic oscillator, you face a problem with the composition rule, as you find a so-called unitary-projective representation: $$U(g)U(g')= e^{i\alpha(g,g')}U(gg')\qquad (2)$$ The phase $e^{i\alpha(g,g')}$ arises when $g$ and $g'$ include transformations generated by the momentum $P$ and the boost $K$. It is possible to compute $\alpha(g,g')$ using several procedures, e.g. the Baker–Campbell–Hausdorff identity. Notice that the mass $m$ explicitly shows up in $\alpha$ (which has just the form $\alpha(g,g')= m f(g,g')$), and this is related to Bargmann's superselection rule.
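As a sketch of such a computation (with $\hbar=1$ and $K=mX$, so that $[P,K]=-imI$ as above), the Baker–Campbell–Hausdorff identity truncates because the commutator is central:
$$e^{-iaP}\,e^{-ivK}=e^{-iaP-ivK}\,e^{\frac{1}{2}[-iaP,\,-ivK]}=e^{-iaP-ivK}\,e^{\frac{i}{2}mav}\:.$$
Composing the translation and the boost in the opposite order flips the sign of the phase, so the two orderings differ by $e^{imav}$, exhibiting the announced form $\alpha(g,g')=mf(g,g')$.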
To obtain a true unitary representation of some Lie group, one can proceed as follows. Start from the group $U(1) \times G$ (a so-called central extension of $G$) with the composition rule:
$$(e^{ia}, g) \circ (e^{ia'}, g') = (e^{i(a+a'+ \alpha(g,g'))}, gg')$$
and define the map:
$$U(1)\times G \ni (e^{ia}, g) \mapsto V_{(e^{ia}, g)} := e^{ia}U_g\:.$$
Just in view of (2), this is a proper unitary representation of $U(1)\times G$.
Notice that $U(1)\times G$ now has a further generator commuting with all the other generators, in view of the fact that we have “added” $U(1)$ to the initial group $G$. This generator, in the Hilbert space, is proportional to $-iI$. The anti-self-adjoint generators are just:
$$-iI,-iH,-iP, -iK\:.$$
So, we can conclude that the considered generators are a representation of the Lie algebra of a central extension of a group $G$, representing the action of the Galileo group along the $x$ axis on the harmonic oscillator.
There are some open issues.
(1) $U(1) \times G$ is a Lie group. What is the differential structure but also the topology on it? This is a delicate problem solved by Wigner.
(2) In view of the commutation relations of $H$ and $P$, the latter is not a conserved quantity along time evolution. This is a consequence of the fact that the system, obviously, is not invariant under space translations (the location of the minimum value of the harmonic potential fixes a natural origin). Nevertheless the system admits a conserved quantity associated with the generator $P$.
Since $-iP$ belongs to the Lie algebra of the representation, $$e^{-itH} (-iP) e^{itH}$$ still belongs to that Lie algebra, in view of the fact that $e^{-itH}$ is a one-parameter subgroup of the representation. As a matter of fact (barring trivial errors in computations) $$e^{-itH} P e^{itH} = -\frac{\omega\sin (\omega t)}{m} K + \cos(\omega t) P\:.$$ Therefore the explicitly time-dependent observable in the Schroedinger picture, $$P(t) := -\frac{\omega\sin (\omega t)}{m} K + \cos(\omega t) P\:,$$ turns out to be constant in the Heisenberg picture: $$P(t)_H = e^{itH} P(t) e^{-itH} = P\:.$$
This is exactly the procedure exploited to associate a constant quantity (always in Heisenberg picture) to the boost generator, even in relativistic theories.
(*) When one unitarily represents Lie groups, the Lie algebra of the group is isomorphic to the corresponding Lie algebra of anti-self-adjoint generators of the unitary representation. This holds when identifying the Lie algebra commutator with the operator commutator.
There is one more option. You can check that $aa$, $\{a,a^+\}$ and $a^+a^+$ form the Lie algebra $sp(2)\sim sl(2)$. Then you can add $a^+$ and $a$, treating them as supergenerators. These are words that tell you to take anticommutators of $a$ and $a^+$, as I did in the first line. Then you get a $5$-dimensional superalgebra, which is $osp(1|2)$. There is a supergroup $OSP(1|2)$.
Another viewpoint is just to take all the generators mentioned above as they are, exponentiate them, and investigate the group law. For $\exp(\alpha a+\beta a^++\gamma)$ it is fairly easy, and you find the Heisenberg group $H_2$, which is a semidirect product of two-vectors and numbers. If you add the bilinears, whose algebra is $sp(2)$, then exponentiating them gives $SL(2)\sim SP(2)$, and the full five-dimensional group is the semidirect product of $SP(2)$ and $H_2$. $\{a,a^+\}$ is just one particular generator, corresponding to the Cartan element.
|
2021-05-15 15:29:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8984761238098145, "perplexity": 189.04843999939266}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991370.50/warc/CC-MAIN-20210515131024-20210515161024-00526.warc.gz"}
|
https://stats.stackexchange.com/questions/111319/how-to-apply-shapiro-test-in-r?noredirect=1
|
# How to apply Shapiro test in R? [duplicate]
I'm pretty new to statistics and I need your help. I just installed the R software and I have no idea how to work with it. I have a small sample that looks as follows:
Group A : 10, 12, 14, 19, 20, 23, 34, 41, 12, 13
Group B : 8, 12, 14, 15, 15, 16, 21, 36, 14, 19
I want to apply a t-test, but before that I would like to apply the Shapiro test to know whether my sample comes from a population which has a normal distribution. I know there is a function shapiro.test(), but how can I give my numbers as input to this function?
Can I simply enter shapiro.test(10,12,14,19,20,23,34,41,12,13, 8,12, 14,15,15,16,21,36,14,19)?
• 1) the Shapiro Wilk test doesn't tell you your data is normal; it sometimes tells you when it isn't. 2) Your data certainly won't be normal anyway (looks like they're positive integers for starters), so you're answering the wrong question with a test of normality. 3) The mixture distribution obtained from the combined samples are not assumed in the t-test to be normal, so even if it made sense to formally test the assumptions, you wouldn't test that. 4) Your R syntax is wrong, since shapiro.test takes a vector argument, and you're supplying a comma-separated collection of arguments. – Glen_b Aug 10 '14 at 2:07
• Thanks. The data in groups A and B are not real; they are just examples. You just mentioned it is not correct! Then what is the correct way? How can I check normality? There are many tutorials showing this is the way to check normality and I am confused. Please see yatani.jp/teaching/doku.php?id=hcistats:datatransformation – Bahador Saket Aug 10 '14 at 2:16
• A visual assessment of normality, such as the QQ plot at your link, at least is looking at a measure of effect size (how non-normal is it?). Indeed, the t-test is pretty robust to non-normality (increasingly so at large sample size), so a goodness of fit test will more often tend to reject when it matters least. A better option would be to examine how sensitive the test behaviour would be under similar conditions via simulation, as was done in the answer here, ... (ctd) – Glen_b Aug 10 '14 at 2:26
• (ctd) ... or simply to avoid the assumption if you don't think it's reasonable to make it. You could always go to a permutation test, for example, unless sample sizes are especially small (whereupon the problem is lack of suitable significance levels to use). – Glen_b Aug 10 '14 at 2:26
• Maybe it would be better to explain my problem in a better way, then you can suggest the best thing that I can do. I have 2 tools. Tool A and Tool B. I recruited 16 participants and asked them to perform some tasks using Tool A and then Tool B. I recorded their performance time. Then I applied t-test to see whether difference is significant. But my advisor asked me to check normality of my data. So that is why i am looking for checking normality. I'm pretty new to statistics. – Bahador Saket Aug 10 '14 at 2:39
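Following the last comment, a minimal sketch of the intended syntax (using the example values from the question): `shapiro.test()` takes a single numeric vector, so the values must be wrapped in `c()`, and each group is tested on its own rather than pooled.

```r
a <- c(10, 12, 14, 19, 20, 23, 34, 41, 12, 13)  # Group A
b <- c(8, 12, 14, 15, 15, 16, 21, 36, 14, 19)   # Group B
shapiro.test(a)   # normality check for Group A alone
shapiro.test(b)   # normality check for Group B alone
t.test(a, b)      # Welch two-sample t-test
```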
|
2021-06-19 21:19:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5316619277000427, "perplexity": 502.6816388282571}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487649731.59/warc/CC-MAIN-20210619203250-20210619233250-00023.warc.gz"}
|
https://socratic.org/questions/59428f6eb72cff475e6a5aef
|
# A 20 g mass of $\text{CaCl}_2$ was dissolved in 700 g of water. What is the molality of the solution with respect to calcium chloride?
Jun 15, 2017
$\approx 0.26$ molal

#### Explanation:

Chemical formula of calcium chloride: $\text{CaCl}_2$

Molar mass of $\text{CaCl}_2$: $110.98\ \text{g mol}^{-1}$

Mass of calcium chloride (given): $20\ \text{g}$

$$\text{molality} = m = \frac{\text{moles of solute}}{\text{kg of solvent}}$$

Now, to calculate the number of moles:

$$\text{moles} = \frac{\text{mass}}{\text{molar mass}} = \frac{20\ \text{g}}{110.98\ \text{g mol}^{-1}} = 0.180\ \text{mol}$$

Water is the solvent, while $\text{CaCl}_2$ is the solute. Converting the solvent mass from $\text{g}$ to $\text{kg}$: $700\ \text{g} = 0.700\ \text{kg}$.

Now,

$$\text{molality} = \frac{0.180\ \text{mol}}{0.700\ \text{kg}} = 0.257\ \text{mol kg}^{-1} \approx 0.26\ \text{molal}$$
Jun 15, 2017
Concentration $\approx 0.257$ molal.
#### Explanation:
$$\text{Molality} \equiv \frac{\text{moles of solute}}{\text{kilograms of solvent}}$$

And thus:

$$\text{molality} = \frac{20\ \text{g}/(110.98\ \text{g mol}^{-1})}{700\ \text{g} \times 10^{-3}\ \text{kg g}^{-1}} = 0.257\ \text{mol kg}^{-1}\:.$$

Note that at (relatively!) low concentrations, the calculated molality is almost the same as the solution molarity. Molality is used because the expression is fairly independent of temperature. What is the molal concentration with respect to chloride ion?
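A minimal sketch of the arithmetic above in R; the final line answers the closing question, using the fact that each formula unit of $\text{CaCl}_2$ yields two chloride ions:

```r
mass_g     <- 20                     # g of CaCl2
molar_mass <- 110.98                 # g/mol
solvent_kg <- 0.700                  # 700 g of water
moles       <- mass_g / molar_mass   # ~0.180 mol
molality    <- moles / solvent_kg    # ~0.257 mol/kg
molality_Cl <- 2 * molality          # ~0.515 mol/kg with respect to Cl-
```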
|
2021-10-22 00:14:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 39, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7983608841896057, "perplexity": 5604.370945447857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585449.31/warc/CC-MAIN-20211021230549-20211022020549-00243.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-1-foundations-for-algebra-get-ready-page-1/10
|
## Algebra 1
Multiply the base by itself as many times as the exponent indicates, so $4^{3}$ = $4\times4\times4$ = 64
|
2020-04-07 01:23:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7708316445350647, "perplexity": 658.8906185149344}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371662966.69/warc/CC-MAIN-20200406231617-20200407022117-00464.warc.gz"}
|
https://abaqus-docs.mit.edu/2017/English/SIMACAEKEYRefMap/simakey-r-connectorfriction.htm
|
# *CONNECTOR FRICTION
Define friction forces and moments in connector elements.
Related Topics: *CONNECTOR BEHAVIOR, *CONNECTOR DERIVED COMPONENT, *CONNECTOR POTENTIAL, *FRICTION. In Other Guides: Connection types; Connector behavior; Connector friction behavior.
Products: Abaqus/Standard, Abaqus/Explicit, Abaqus/CAE
Type: Model data
Level: Model
Abaqus/CAE: Interaction module
## Optional parameters
PREDEFINED
Include this parameter to specify predefined friction behavior (if available for the connection type). Abaqus defines the contact forces and the magnitude of the tangential tractions automatically, as illustrated in Connection types.
STICK STIFFNESS
Set this parameter equal to the stick stiffness associated with frictional behavior. If this parameter is omitted, a default value (which usually is appropriate) is chosen.
## Optional parameters used to specify user-defined friction (mutually exclusive with the PREDEFINED parameter)
COMPONENT
Set this parameter equal to the connector's component of relative motion for which user-defined frictional behavior is specified.
Omit this parameter and use the CONNECTOR POTENTIAL option in conjunction with the CONNECTOR FRICTION option to specify coupled user-defined frictional behavior.
CONTACT FORCE
Set this parameter equal to the name of the associated CONNECTOR DERIVED COMPONENT option or the number of the connector component of relative motion that defines the friction-generating contact force.
DEPENDENCIES
Set this parameter equal to the number of field variable dependencies included in the definition of the connector friction data, in addition to temperature. If this parameter is omitted, it is assumed that the friction forces and moments or the contact normal force contributions are independent of field variables. See Material data definition for more information.
EXTRAPOLATION
Set EXTRAPOLATION=CONSTANT (default unless CONNECTOR BEHAVIOR, EXTRAPOLATION=LINEAR is used) to use constant extrapolation of the dependent variables outside the specified range of the independent variables.
Set EXTRAPOLATION=LINEAR to use linear extrapolation of the dependent variables outside the specified range of the independent variables.
INDEPENDENT COMPONENTS
Set INDEPENDENT COMPONENTS=POSITION (default) to specify dependencies on components of relative position included in the frictional behavior definition.
Set INDEPENDENT COMPONENTS=CONSTITUTIVE MOTION to specify dependencies on components of constitutive relative motion included in the frictional behavior definition.
REGULARIZE
This parameter applies only to Abaqus/Explicit analyses.
Set REGULARIZE=ON (default unless CONNECTOR BEHAVIOR, REGULARIZE=OFF is used) to regularize the user-defined tabular connector friction data.
Set REGULARIZE=OFF to use the user-defined tabular connector friction data directly without regularization.
RTOL
This parameter applies only to Abaqus/Explicit analyses.
Set this parameter equal to the tolerance to be used to regularize the connector friction data.
If this parameter is omitted, the default is RTOL=0.03 unless the tolerance is specified on the CONNECTOR BEHAVIOR option.
## Data line to define the parameters (geometric constants and internal contact forces) for predefined frictional behavior (the PREDEFINED parameter is included)
First (and only) line
1. First parameter used to specify predefined friction, as illustrated in Connection types.
2. Second friction parameter.
3. Etc., up to as many friction parameters discussed in Connection types.
No data line is required for connection type SLIPRING.
## Data lines to define the internal contact forces for user-defined friction that does not depend on the relative positions or motions in one or more component directions (both the PREDEFINED and INDEPENDENT COMPONENTS parameters are omitted)
First line
1. Internal contact force/moment generating friction.
2. Accumulated slip.
3. Temperature.
4. First field variable.
5. Second field variable.
6. Etc., up to five field variables.
Subsequent lines (only needed if the DEPENDENCIES parameter has a value greater than five)
1. Sixth field variable.
2. Etc., up to eight field variables per line.
Repeat this set of data lines as often as necessary to define the internal contact force as a function of accumulated slip, temperature, and field variables. Omit these data lines if internal contact forces do not need to be specified.
## Data lines to define the internal contact forces for user-defined friction that depends on the relative positions or motions in one or more component directions (the PREDEFINED parameter is omitted and the INDEPENDENT COMPONENTS parameter is included)
First line
1. First independent component number (1–6).
2. Second independent component number (1–6).
3. Etc., up to $N_i$ entries (maximum six).
Subsequent lines
1. Internal contact force/moment generating friction.
2. Connector relative position or constitutive relative motion in the first independent component identified on the first data line.
3. Connector relative position or constitutive relative motion in the second independent component identified on the first data line.
4. Etc., up to $N_i$ entries as identified on the first data line.
5. Accumulated slip.
6. Temperature.
7. First field variable.
8. Second field variable.
9. If the number of data entries exceeds the limit of eight entries per line, continue the input on the next data line.
Continuation line (if needed)
1. Third field variable.
2. Etc., up to eight entries per line.
Do not repeat the first data line. Repeat the subsequent data lines as often as necessary to define the internal contact force as a function of connector relative position or constitutive relative motion, accumulated slip, temperature, and other predefined field variables.
|
2022-12-05 17:51:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.686719536781311, "perplexity": 3549.0852501929776}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711042.33/warc/CC-MAIN-20221205164659-20221205194659-00601.warc.gz"}
|
https://inquiryintoinquiry.com/2020/05/28/
|
# Daily Archives: May 28, 2020
## Sign Relations • Discussion 1
Thus, if a sunflower, in turning towards the sun, becomes by that very act fully capable, without further condition, of reproducing a sunflower which turns in precisely corresponding ways toward the sun, and of doing so with the same reproductive …
## Sign Relations • Anthesis
Thus, if a sunflower, in turning towards the sun, becomes by that very act fully capable, without further condition, of reproducing a sunflower which turns in precisely corresponding ways toward the sun, and of doing so with the same reproductive …
|
2023-03-21 18:22:55
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9194908738136292, "perplexity": 3866.7156251506062}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943704.21/warc/CC-MAIN-20230321162614-20230321192614-00454.warc.gz"}
|
http://nrich.maths.org/152
|
### Writing Digits
Lee was writing all the counting numbers from 1 to 20. She stopped for a rest after writing seventeen digits. What was the last number she wrote?
### What Number?
I am less than 25. My ones digit is twice my tens digit. My digits add up to an even number.
### One of Thirty-six
Can you find the chosen number from the grid using the clues?
If you put three beads onto a tens/units abacus you could make the numbers $3$, $30$, $12$ or $21$.
|
2014-09-16 07:33:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3407139778137207, "perplexity": 1085.5330751391111}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657114105.77/warc/CC-MAIN-20140914011154-00050-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
|
https://www.spp2026.de/projects/20/
|
20
Compactifications and Local-to-Global Structure for Bruhat-Tits Buildings
The project is concerned with rigidity, compactifications and local-to-global principles in CAT(0) geometry.
One aim is to give a uniform construction of compactifications of euclidean buildings, using Gromov's embedding into spaces of continuous functions. The ultimate goal is to study the dynamics of discrete group actions on the building, using the compactification.
LG-rigidity of a metric space $$X$$ means that there is some $$r>0$$ such that if $$Y$$ is a metric space in which every $$r$$-ball is isometric to some $$r$$-ball in $$X$$, then there is a covering map $$X\to Y$$ which is a local isometry on all $$r$$-balls. The project intends to investigate LG-rigidity and non-rigidity for the 1-skeletons and chamber graphs of general Bruhat-Tits buildings.
## Team Members
JProf. Dr. Petra Schwer
|
2019-02-21 10:35:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7194357514381409, "perplexity": 1039.163389677351}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247503844.68/warc/CC-MAIN-20190221091728-20190221113728-00138.warc.gz"}
|
https://wikidoc.org/index.php/Biogas
|
# Biogas
Biogas-bus in Bern, Switzerland
Biogas typically refers to a (biofuel) gas produced by the anaerobic digestion or fermentation of organic matter, including manure, sewage sludge, municipal solid waste, biodegradable waste or any other biodegradable feedstock, under anaerobic conditions. Biogas consists primarily of methane and carbon dioxide.
Depending on where it is produced, biogas is also known by other names, for example landfill gas or gober gas.

Biogas containing methane is a valuable by-product of anaerobic digestion which can be utilised in the production of renewable energy [1]. Biogas can be used as a vehicle fuel or for generating electricity. It can also be burned directly for cooking, heating, lighting, process heat and absorption refrigeration.
## Biogas and anaerobic digestion
Biogas production by anaerobic digestion is popular for treating biodegradable waste because valuable fuel can be produced while destroying disease-causing pathogens and reducing the volume of disposed waste products. It burns more cleanly than coal, and emits less carbon dioxide per unit of energy. The harvesting of biogas is an important part of waste management because methane is a greenhouse gas with a greater global warming potential than carbon dioxide. The carbon in biogas was generally recently extracted from the atmosphere by photosynthetic plants, so releasing it back into the atmosphere adds less total atmospheric carbon than burning fossil fuels.
Recently, developed countries have been making increasing use of biogas generated from both wastewater and landfill sites or produced by mechanical biological treatment systems for municipal waste. High energy prices and increases in subsidies for electricity from renewable sources (such as renewables obligation certificates) and drivers such as the EU Landfill Directive have led to much greater use of biogas sources.
## Landfill gas
Electricity from biogas (GWh)[2]

| Country | 2006 | 2005 |
|---|---|---|
| Germany | 7 338 | 4 708 |
| UK | 4 997 | 4 690 |
| Italy | 1 234 | 1 198 |
| Spain | 675 | 620 |
| Greece | 579 | 179 |
| France | 501 | 483 |
| Austria | 410 | 70 |
| Netherlands | 286 | 286 |
| Denmark | 285 | 275 |
| Poland | 241 | 175 |
| Belgium | 237 | 240 |
| Czech Republic | 175 | 161 |
| Ireland | 108 | 106 |
| Sweden | 54 | 54 |
| Portugal | 33 | 35 |
| Luxembourg | 33 | 27 |
| Slovenia | 32 | 32 |
| Hungary | 22 | 25 |
| Finland | 22 | 22 |
| Estonia | 7 | 7 |
| Slovakia | 4 | 4 |
| Malta | 0 | 0 |
| EU (GWh) | 17 272 | 13 397 |
Biogas in EU 2006 (GWh)[2]

| Country | Total | Landfill | Sludge | Other |
|---|---|---|---|---|
| Germany | 22 370 | 6 670 | 4 300 | 11 400 |
| UK | 19 720 | 17 620 | 2 100 | 0 |
| Italy | 4 110 | 3 610 | 10 | 490 |
| Spain | 3 890 | 2 930 | 660 | 300 |
| France | 2 640 | 1 720 | 870 | 50 |
| Netherlands | 1 380 | 450 | 590 | 340 |
| Austria | 1 370 | 130 | 40 | 1 200 |
| Denmark | 1 100 | 170 | 270 | 660 |
| Poland | 1 090 | 320 | 770 | 10 |
| Belgium | 970 | 590 | 290 | 90 |
| Greece | 810 | 630 | 180 | 0 |
| Finland | 740 | 590 | 150 | 0 |
| Czech Republic | 700 | 300 | 360 | 40 |
| Ireland | 400 | 290 | 60 | 50 |
| Sweden | 390 | 130 | 250 | 10 |
| Hungary | 120 | 0 | 90 | 40 |
| Portugal | 110 | 0 | 0 | 110 |
| Luxembourg | 100 | 0 | 0 | 100 |
| Slovenia | 100 | 80 | 10 | 10 |
| Slovakia | 60 | 0 | 50 | 10 |
| Estonia | 10 | 10 | 0 | 0 |
| Malta | 0 | 0 | 0 | 0 |
| EU (GWh) | 62 200 | 36 250 | 11 050 | 14 900 |
Landfill gas is produced from organic waste disposed of in landfills. The waste is covered and compressed mechanically and by the pressure of the material above. As conditions become anaerobic, the organic waste is broken down and landfill gas is produced. This gas builds up and is slowly released into the atmosphere. This is hazardous for three key reasons: the gas can accumulate to explosive concentrations, methane is a potent greenhouse gas, and landfill gas carries volatile organic compounds (VOCs) that contribute to smog.
### Biogas composition
The composition of biogas varies depending upon the origin of the anaerobic digestion process. Landfill gas typically has methane concentrations around 50%. Advanced waste treatment technologies can produce biogas with 55–75% CH4 [3].
Typical composition of biogas[4]

| Matter | % |
|---|---|
| Methane, CH4 | 50–75 |
| Carbon dioxide, CO2 | 25–50 |
| Nitrogen, N2 | 0–10* |
| Hydrogen, H2 | 0–1 |
| Hydrogen sulphide, H2S | 0–3 |
| Oxygen, O2 | 0–2* |

*often 5 % of air is introduced for microbiological desulphurisation
### Siloxanes and gas engines
In some cases, biogas from landfills and sewage treatment contains siloxanes. During combustion of biogas containing siloxanes, silicon is released and can combine with free oxygen or various other elements in the combustion gas. Deposits are formed containing mostly silica ($SiO_2$) or silicates ($Si_xO_y$) in general, but they can also contain calcium, sulphur, zinc, and phosphorus, as indicated by the analysis of piston scrapings from biogas-fired engines. These (mostly white) deposits can ultimately build to a surface thickness of several millimetres and are difficult to remove by chemical or mechanical means.
In internal combustion engines, deposits on pistons and cylinder heads are extremely abrasive, and even a small amount is sufficient to cause enough damage to require a complete engine overhaul at 5,000 h or less of operation. The damage is similar to that caused by carbon build-up during light-load running of diesel engines. Deposits on the turbine of the turbocharger will eventually reduce the charger's efficiency.
Fortunately, simply cooling the gas to roughly −4 °C is sufficient to remove siloxanes by condensation.
Stirling engines are more resistant to siloxanes, though deposits on the tubes of the heat exchanger will reduce their efficiency.[5][6]
## Biogas to natural gas
If biogas is cleaned up sufficiently, it has the same characteristics as natural gas. In this instance the producer of the biogas can utilize the local gas distribution networks. The gas must be very clean to reach pipeline quality. Water (H2O), hydrogen sulfide (H2S) and particulates are removed if present at high levels or if the gas is to be completely cleaned. Carbon dioxide is less frequently removed, but it must also be separated to achieve pipeline-quality gas. If the gas is to be used without extensive cleaning, it is sometimes cofired with natural gas to improve combustion. Biogas cleaned up to pipeline quality is called renewable natural gas or biomethane.
### Applications of renewable natural gas
In this form, the gas can now be used in any application that natural gas is used for. Such applications include distribution via the natural gas grid, electricity production, space heating, water heating and process heating. If compressed, it can replace compressed natural gas for use in vehicles, where it can fuel an internal combustion engine or fuel cells.
#### Cooking
Gober gas is a biogas generated from cow dung. In India, gober gas is generated at countless micro plants (an estimated more than two million) attached to households. The gober gas plant is basically an airtight circular pit made of concrete with a pipe connection. The manure is directed to the pit (usually straight from the cattle shed). The pit is then filled with a required quantity of water (usually waste water). The gas pipe is connected to the kitchen fireplace through control valves. The flammable methane gas generated this way is practically odorless and smokeless. The residue left after the extraction of the gas is used as biofertiliser. Owing to its simplicity of implementation and its use of cheap raw materials in the villages, it is often cited as one of the most environmentally sound energy sources for rural needs.
#### Railway transport
A biogas-powered train has been in service in Sweden since 2005 [7].
## Landfill gas legislation
### United States
In the United States, because landfill gas contains VOCs, the United States Clean Air Act and Title 40 of the Code of Federal Regulations (CFR) require landfill owners to estimate the quantity of non-methane organic compounds (NMOCs) emitted. If the estimated NMOC emissions exceed 50 tonnes per year, the landfill owner is required to collect the landfill gas and treat it to remove the entrained NMOCs. Treatment of the landfill gas is usually by combustion. Because of the remoteness of landfill sites, it is sometimes not economically feasible to produce electricity from the gas.
|
2020-08-11 13:16:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5039936900138855, "perplexity": 5379.827943437304}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738777.54/warc/CC-MAIN-20200811115957-20200811145957-00509.warc.gz"}
|
https://www.clutchprep.com/chemistry/practice-problems/137763/what-volume-of-o2-at-798-mmhg-and-41-c-is-required-to-synthesize-12-5-mol-of-no-
|
# Problem: What volume of O2 at 798 mmHg and 41 °C is required to synthesize 12.5 mol of NO? Express your answer to three significant figures and include the appropriate units.What volume of H2O(g) is produced by the reaction under the same conditions? Express your answer to three significant figures and include the appropriate units.
###### FREE Expert Solution
The reaction of NH3 with O2 is as follows: 4 NH3(g) + 5 O2(g) → 4 NO(g) + 6 H2O(g)
For part (a), we can calculate the volume using the ideal gas law:
$$PV = nRT$$
where P = pressure
V = volume
n = number of moles
R = gas constant
T = temperature
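A minimal sketch of the computation in R (a hedged illustration, assuming R = 0.08206 L atm mol⁻¹ K⁻¹ and the mole ratios from the balanced equation above):

```r
Rgas <- 0.08206            # L atm / (mol K)
Pres <- 798 / 760          # mmHg -> atm
Temp <- 41 + 273.15        # degrees C -> K
n_O2  <- 12.5 * 5 / 4      # mol O2 required for 12.5 mol NO
n_H2O <- 12.5 * 6 / 4      # mol H2O produced alongside 12.5 mol NO
V_O2  <- n_O2  * Rgas * Temp / Pres   # ~384 L
V_H2O <- n_H2O * Rgas * Temp / Pres   # ~460 L
```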
###### Problem Details
What volume of O2 at 798 mmHg and 41 °C is required to synthesize 12.5 mol of NO? Express your answer to three significant figures and include the appropriate units.
What volume of H2O(g) is produced by the reaction under the same conditions? Express your answer to three significant figures and include the appropriate units.
|
2020-07-02 19:55:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8225153088569641, "perplexity": 1663.3586287962842}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655879738.16/warc/CC-MAIN-20200702174127-20200702204127-00281.warc.gz"}
|
https://www.springerprofessional.de/springer-handbook-of-science-and-technology-indicators/17331896
|
## About this Book
This handbook presents the state of the art of quantitative methods and models to understand and assess the science and technology system. Focusing on various aspects of the development and application of indicators derived from data on scholarly publications, patents and electronic communications, the individual chapters, written by leading experts, discuss theoretical and methodological issues, illustrate applications, highlight their policy context and relevance, and point to future research directions.
A substantial portion of the book is dedicated to detailed descriptions and analyses of data sources, presenting both traditional and advanced approaches. It addresses the main bibliographic metrics and indexes, such as the journal impact factor and the h-index, as well as altmetric and webometric indicators and science mapping techniques on different levels of aggregation and in the context of their value for the assessment of research performance as well as their impact on research policy and society. It also presents and critically discusses various national research evaluation systems.
Complementing the sections reflecting on the science system, the technology section includes multiple chapters that explain different aspects of patent statistics, patent classification and database search methods to retrieve patent-related information. In addition, it examines the relevance of trademarks and standards as additional technological indicators.
The Springer Handbook of Science and Technology Indicators is an invaluable resource for practitioners, scientists and policy makers wanting a systematic and thorough analysis of the potential and limitations of the various approaches to assess research and research performance.
## Table of Contents
### 1. The Journal Impact Factor: A Brief History, Critique, and Discussion of Adverse Effects
The journal impact factor (JIF) is, by far, the most discussed bibliometric indicator. Since its introduction over 40 years ago, it has had enormous effects on the scientific ecosystem: transforming the publishing industry, shaping hiring practices and the allocation of resources, and, as a result, reorienting the research activities and dissemination practices of scholars. Given both the ubiquity and impact of the indicator, the JIF has been widely dissected and debated by scholars of every disciplinary orientation. Drawing on the existing literature as well as original research, this chapter provides a brief history of the indicator and highlights well-known limitations, such as the asymmetry between the numerator and the denominator, differences across disciplines, the insufficient citation window, and the skewness of the underlying citation distributions. The inflation of the JIF and the weakening of its predictive power are discussed, as well as the adverse effects on the behaviors of individual actors and the research enterprise. Alternative journal-based indicators are described, and the chapter concludes with a call for responsible application and a commentary on future developments in journal indicators.
Vincent Larivière, Cassidy R. Sugimoto
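As a hedged illustration of the two-year JIF arithmetic discussed in this chapter (the counts below are invented), the indicator for year $y$ divides citations received in $y$ to items published in $y-1$ and $y-2$ by the number of citable items published in those two years:

```r
cites <- c(y1 = 1200, y2 = 950)   # citations in year y to items from y-1, y-2
items <- c(y1 = 400,  y2 = 350)   # citable items published in y-1, y-2
jif   <- sum(cites) / sum(items)  # 2150 / 750 = 2.87
```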
### 2. Bibliometric Delineation of Scientific Fields
Delineation of scientific domains (fields, areas of science) is a preliminary task in bibliometric studies at the mesolevel, far from straightforward in domains with high multidisciplinarity, variety, and instability. Sect. 2.2 shows the connection of the delineation problem to the question of disciplines versus invisible colleges, through three combinable models: ready-made classifications of science, classical information-retrieval searches, and mapping and clustering. They differ in the role and modalities of supervision. Sect. 2.3 sketches various bibliometric techniques against the background of information retrieval (IR), data analysis, and network theory, showing both their power and their limitations in delineation processes. The role and modalities of supervision are emphasized. Sect. 2.4 addresses the comparison and combination of bibliometric networks (actors, texts, citations) and the various ways to hybridize. In Sect. 2.5, typical protocols and further questions are proposed.
Michel Zitt, Alain Lelu, Martine Cadot, Guillaume Cabanac
### 3. Knowledge Integration: Its Meaning and Measurement
Interdisciplinary research depends on research traditions and fields originating from different research teams, countries, and regions. Its essence is knowledge integration. As a dynamic and interactive process, it continuously pushes the structure of science to become a complex, diverse system. In this chapter, we provide a systematic review of interdisciplinary research, starting from a definition of interdisciplinary research, its elements, and its role in scientific progress, and focusing in particular on how to identify the activity of interdisciplinary research, how to measure it, and the limitations of existing approaches. Stating that one can measure knowledge integration implies that this notion refers to a continuum, ranging from no integration (disciplinary research) to a large degree of integration (highly interdisciplinary). Following Stirling, Rafols and Meyer, we show that knowledge integration can be measured by two main factors: a diversity factor and a network coherence factor. The diversity factor itself consists of three aspects: variety (number of categories taken into account), evenness, and similarity between categories. In accordance with the Jost–Leinster–Cobbold approach, we prefer a so-called true diversity measure. As an illustration, we provide a simple example of a study on interdisciplinarity in the field of synthetic biology, using the true diversity measure derived from the Rao–Stirling measure. Finally, we include some suggestions for future research.
Ronald Rousseau, Lin Zhang, Xiaojun Hu
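As a hedged sketch of the kind of diversity measure discussed in this chapter, the Rao–Stirling form sums $d_{ij} p_i p_j$ over pairs of categories, with $p$ the category proportions and $d$ a dissimilarity matrix (the values below are invented; conventions differ on whether pairs are counted once or twice):

```r
p <- c(0.5, 0.3, 0.2)                 # proportions over three categories
d <- matrix(c(0,   0.8, 0.6,
              0.8, 0,   0.4,
              0.6, 0.4, 0), 3, 3)     # symmetric dissimilarities, zero diagonal
rao_stirling <- sum(d * outer(p, p))  # sums over all ordered pairs i != j
```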
### 4. Google Scholar as a Data Source for Research Assessment
Emilio Delgado López-Cózar, Enrique Orduña-Malea, Alberto Martín-Martín
### 5. Disentangling Gold Open Access
This chapter focuses on the analysis of current publication trends in gold open access (OA). The purpose of the chapter is to develop a full understanding of country patterns, OA journal characteristics, and citation differences between gold OA and non-gold OA publications. For this, we will first review the current literature regarding open access and its ostensible citation advantage. Starting with a chronological perspective, we will describe its development, how different countries are promoting OA publishing, and its effects on the journal publishing industry. We will deepen the analysis by investigating the research output produced by different units of analysis. First, we will focus on the production of countries, with a special emphasis on citation and disciplinary differences. A point of interest will be the identification of national idiosyncrasies and the relation between OA publication and research of local interest. This will lead to our second unit of analysis, OA journals indexed in Web of Science. Here we will focus on journal characteristics and publisher types to clearly identify factors which may affect citation differences between OA and traditional journals and which may not necessarily be derived from the OA factor. Gold OA publishing, as opposed to green OA, is being encouraged in many countries. This chapter aims at fully understanding how it affects researchers' publication patterns and whether it ensures the alleged citation advantage over non-gold OA publications.
Daniel Torres-Salinas, Nicolas Robinson-García, Henk F. Moed
### 6. Science Forecasts: Modeling and Communicating Developments in Science, Technology, and Innovation
In a knowledge-based economy, science and technology are omnipresent, and their importance is undisputed. Equally evident is the need to allocate resources, both monetary and human, in an effective way to foster innovation [6.1, 6.2]. In the preceding decades, science policy has embraced data mining and metrics to gain insights into the structure and evolution of science and to devise metrics and indicators [6.3], but it has not invested significant efforts into mathematical, statistical, and computational models that can predict future developments in science, technology, and innovation (STI) in support of data-driven decision making. Recent advances in computational power combined with the unprecedented volume and variety of data concerning science and technology developments (e. g., publications, patents, funding, clinical trials, and stock market and social media data) yielded ideal conditions for the advancement of computational modeling approaches that can be not only empirically validated, but used to simulate and understand the structure and dynamics of STI in support of improved human decision making. In this chapter, we review and demonstrate the power of computational models for simulating and predicting possible STI developments and futures. In addition, we discuss novel means to visualize and broadcast STI forecasts to make them more accessible to general audiences.
Katy Börner, Staša Milojević
### 7. Science Mapping Analysis Software Tools: A Review
Scientific articles are one of the most important types of output of a researcher. In that sense, bibliometrics is an essential tool for assessing and analyzing academic research output, contributing to the progress of science in many different ways. It provides objective criteria to assess research developed by researchers, and is increasingly valued as a tool for measuring scholarly quality and productivity. Science mapping is a bibliometric tool to analyze and mine scientific output. The aim of this chapter is to present a thorough review of science mapping software tools, showing their strengths and limitations. Six software tools that meet the criteria of being free, fully featured, and allowing the whole analysis to be performed are analyzed: BibExcel, CiteSpace II, CitNetExplorer, SciMAT, the Sci² Tool, and VOSviewer. This analysis describes aspects related to data processing, analysis options, and visualization. The particular properties of each tool are presented; the choice of a particular tool depends on the type of actor to be analyzed and the output expected.
Jose A. Moral-Munoz, Antonio G. López-Herrera, Enrique Herrera-Viedma, Manuel J. Cobo
### 8. Creation and Analysis of Large-Scale Bibliometric Networks
In the more than a decade since the last Handbook of Quantitative Science and Technology Research [8.1] was published, a sea change has occurred in the creation and analysis of bibliometric networks that describe the Science & Technology (S&T) landscape. Previously, networks were typically restricted in size to hundreds or thousands of objects (papers, journals, authors, etc.) due to lack of data access and computing capacity. However, recent years have seen the increased availability of full databases, increased computing capacity, and the development of partitioning and community detection algorithms that can work effectively at large scale. As a result, much larger networks, comprised of millions or tens of millions of objects, are being created and analyzed. These large-scale networks have enabled analyses that were simply not possible in the past, analyses that require the context of complete networks to give accurate results. In this chapter, we focus on large-scale, global bibliometric networks, and on the types of analysis that are enabled by these networks. We start by providing a historical perspective that sets the stage for recent advances that have culminated in the ability to create and analyze large-scale bibliographic networks. We then discuss data sources and the methods that are commonly used to create large-scale networks. We review many of these networks, along with the types of unique analyses that they enable, and ways that their results can be effectively communicated. After reviewing the state of the art, we discuss our most recent large-scale topic-level model of science in detail as an example of a global bibliometric model and show how it can be used for various applications.
Kevin W. Boyack, Richard Klavans
### 9. Science Mapping and the Identification of Topics: Theoretical and Methodological Considerations
This chapter focuses on the drivers for the advancement of the mapping of science and the detection of topics as often applied in scientometrics. The chapter identifies three different drivers for this advancement: technological innovation resulting in increased computational power, the improved community detection approaches available today, and advancements in scientometrics itself with respect to the actual linking of documents through citations or lexical approaches. We will show that the main drivers are the first two, with the last one somewhat lagging behind. Next, severe methodological issues that have been identified in network science in relation to the application of these techniques for community detection are presented; the resolution limit and the degeneracy problem are described. The last section shows how different approaches are taken to enable scientometricians to create global maps of science and how they come to comparable results at higher levels of granularity, but that the validity of more fine-grained clusters and topics suffers strongly from the problems discussed, which raises serious questions about the applicability of these globally oriented techniques when a strong local focus is required.
Bart Thijs
### 10. Measuring Science: Basic Principles and Application of Advanced Bibliometrics
We begin with a short history of measuring science and discuss how the Science Citation Index has revolutionized the quantitative study of science and created a strong application potential. After reviewing the rationale of bibliometric analysis, we present the basic principle of the bibliometric methodology, with complex citation networks as a starting point. We show that the two main pillars of advanced bibliometric methods, citation-based analysis and science mapping, are both reducible to one and the same principle. From this basic principle we deduce a set of main indicators, particularly for the assessment of research output and international impact. Important elements include new approaches for identifying fields and research themes on the basis of a publication-level rather than a journal-level network; publication and citation counting; normalization of citation measures; the use of indicators based on averages versus those based on citation distributions; and weighting procedures and statistical reliability. In this account of the state of the art of advanced bibliometrics, we highlight in particular the developments in our Leiden institute, given its long-standing, extensive, and broad experience. The next part of this chapter deals with practical applications of indicators, particularly real-life examples of evaluation studies. We further discuss several crucial issues such as the use of journal impact factors and the h-index; the relation between peer review judgment and bibliometric findings; definition and delimitation of fields; assignment of publications; the influence of open access; webometrics and altmetrics; ranking of universities; and general objections to bibliometric analysis. The second main pillar of the advanced bibliometric methodology is the development of science maps. We discuss the basic elements and the construction of both citation-relation and word-relation science maps. Further, we present a method to combine the two main pillars: the integration of citation analysis in science maps. This combined citation analysis and science mapping can be used to explore research related to socioeconomic problems. Recently developed bibliometric instruments enable tunable mapping, which opens up new analytical opportunities in monitoring scientific research. Finally, we contend that bibliometric indicators and maps are not just evaluation tools for science policymakers, research managers, and individual researchers, but also powerful instruments in the study of science.
Anthony van Raan
### 11. Field Normalization of Scientometric Indicators
When scientometric indicators are used to compare research units active in different scientific fields, there is often a need to make corrections for differences between fields, for instance, differences in publication, collaboration, and citation practices. Field-normalized indicators aim to make such corrections. The design of these indicators is a significant challenge. We discuss the main issues in the design of field-normalized indicators and present an overview of the different approaches that have been developed for dealing with the problem of field normalization. We also discuss how field-normalized indicators can be evaluated and consider the sensitivity of scientometric analyses to the choice of a field-normalization approach.
Ludo Waltman, Nees Jan van Eck
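A hedged sketch of the simplest mean-based normalization in this family: each paper's citation count is divided by the average citation count of the papers in its field (toy data, not from the chapter):

```r
cites <- c(10, 3, 0, 25, 7)
field <- c("bio", "bio", "math", "bio", "math")
field_mean <- ave(cites, field, FUN = mean)  # per-paper field averages
normalized <- cites / field_mean             # > 1 means above field average
```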
### 12. All Along the h-Index-related Literature: A Guided Tour
In this chapter, a survey of the literature related to the $h$-index (referred to as $h$-related literature) between 2005 and 2016 is presented. In the first section, the basic definitions and a brief historical account are given. After providing an overview of the typology of the $h$-related publications and some earlier reviews, the more than 3000 $h$-related publications collected from four databases (Web of Science, Scopus, Google Scholar and Microsoft Academic) are analyzed by bibliometric methods. Document types, publication sources, subject categories, geographical distributions, authors and institutions, citations and references are listed and mapped. Several examples of applications of the $h$-index, within and outside the area of scientometrics, are presented, with particular attention to the possibilities for using the $h$-related indices as a network measure. Among the mathematical models used to explain and interpret the index and its relatives, Hirsch's model, the Lotkaian framework, models based on extreme value theory and on fuzzy integrals, and axiomatic approaches are demonstrated.
András Schubert, Gábor Schubert
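A minimal sketch of the basic definition surveyed in this chapter: the $h$-index of a citation record is the largest $h$ such that at least $h$ publications have at least $h$ citations each.

```r
h_index <- function(cites) {
  s <- sort(cites, decreasing = TRUE)  # rank papers by citation count
  sum(s >= seq_along(s))               # count ranks r with cites >= r
}
h_index(c(10, 8, 5, 4, 3, 0))          # returns 4
```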
### 13. Citation Classes: A Distribution-based Approach for Evaluative Purposes
In this chapter, we describe a scientometric assessment tool that was first introduced as early as the second half of the 1980s, but due to the high computational requirements at that time, the method fell undeservedly into oblivion. The method is called Characteristic Scores and Scales (CSS) and is aimed at providing a more detailed picture of citation impact, with particular regard to the high end of performance. More than two decades after its introduction, the method experienced a revival as a consequence of the burning need for improved and versatile assessment tools, facilitated by the rapid development of information technology and the broad access to electronic data sources. The first part of this chapter will describe the model, its background and the statistical properties underlying this approach, while the following sections will deal with its implementation within the framework of research evaluation at different levels of aggregation and in various disciplinary and multidisciplinary contexts. Special attention is paid to the applicability to various aggregation levels, such as national research performance, the comparative analysis of institutional research output, as a tool to assist the assessment of individual researchers, and as journal impact measures. A graphical sketch of possible applications is used as a road map throughout the chapter to navigate the various methodological issues and fields of use. The chapter begins with a review of previous work, but also aims at presenting new insights and applications in a systematic manner. In addition to the presentation of new results, future perspectives and possible applications of this model within and outside traditional scientometrics will be sketched and highlighted.
Wolfgang Glänzel, Bart Thijs, Koenraad Debackere
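A hedged sketch of the CSS idea on toy data, following the usual description of the method: each threshold is the mean of the papers at or above the previous threshold, and the thresholds bound the citation classes.

```r
css_thresholds <- function(cites, k = 3) {
  th <- numeric(k); x <- cites
  for (i in seq_len(k)) {
    th[i] <- mean(x)       # b_i: mean of the remaining tail
    x <- x[x >= th[i]]     # keep papers at or above b_i
  }
  th
}
css_thresholds(c(0, 0, 1, 2, 3, 5, 8, 13, 40))  # b1 = 8, b2 ~ 20.3, b3 = 40
```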
### 14. An Overview of Author-Level Indicators of Research Performance
The purpose of this chapter is to present a critical overview of author-level indicators of research production (ALIRP), discuss their appropriate application and provide a tool to support the informed use of ALIRP. A brief history of the development of ALIRP begins with a chronological discussion of the major trends in indicator development, which documents the quick adaptation of ALIRP in evaluation practice, and consequently sets the argument for the need to monitor and evaluate present-day indicator production, which is the major theme of this chapter. The characteristics and common mathematical properties of ALIRP are used to highlight the challenges we face in applying appropriate ALIRP in evaluation. The construction and validity of 69 ALIRP are analyzed, and the results presented in table form for easy reference. These tables are also available as interactive tables provided as e-material to this chapter. This analysis, combined with the deconstruction of indicators in the chapter sections, argues that ALIRP are mathematical models, and the numerical values they produce should never be confused with the reality they are trying to model in evaluation practice.
Lorna Wildgaard
### 15. Challenges, Approaches and Solutions in Data Integration for Research and Innovation
In order to be implemented by policy makers, science, technology, and innovation (STI) policies and indicator building need data. Whenever we need data, we need a method for data management, and in the era of big data, a crucial role is played by data integration. Therefore, STI policies and indicator development need data integration. Two main approaches to data integration exist, namely procedural and declarative. In this chapter, we follow the latter approach and focus our attention on the ontology-based data integration (OBDI) paradigm. The main principles of OBDI are: (i) Leave the data where they are. (ii) Build a conceptual specification of the domain of interest (ontology), in terms of knowledge structures. (iii) Map such knowledge structures to concrete data sources. (iv) Express all services over the abstract representation. (v) Automatically translate knowledge services to data services. We introduce the main challenges of data integration for research and innovation (R&I) and show that reasoning over an ontology connected to data may be very helpful for the study of R&I. We also provide examples by using Sapientia, an ontology specifically defined for multidimensional research assessment.
Maurizio Lenzerini, Cinzia Daraio
### 16. Synergy in Innovation Systems Measured as Redundancy in Triple Helix Relations
The Triple Helix (TH) of university–industry–government relations can first be considered as an institutional network. However, the correlations in the patterns of relations provide another topology: that of a vector space. Meanings are provided from positions in this latter topology and from the perspective of hindsight. Meanings can be shared, and sharing generates redundancy. Increasing redundancy provides new options and reduces uncertainty; reducing uncertainty improves the innovative climate, and the generation of options (redundancy) is crucial for innovation. The knowledge base provides an engine of the economy by evolving in terms of generating new options. The trade-off between the evolutionary generation of redundancy and the historical variation providing uncertainty can be measured as negative and positive information, respectively. In a number of studies of national systems of innovation (e. g., Sweden, Germany, Spain, China), this TH synergy indicator has been used to analyze regions and sectors in which uncertainty was significantly reduced. The quality of innovation systems can thus be quantified at different geographical scales and in terms of sectors such as high- and medium-tech manufacturing or knowledge-intensive services (KIS).
Loet Leydesdorff, Inga Ivanova, Martin Meyer
### 17. Scientometrics Shaping Science Policy and vice versa, the ECOOM Case
It is difficult to imagine a world without science policy. Ever since Vannevar Bush published his seminal insights on the role of science in society, science policy has become deeply ingrained in public policy. Alongside this, the discipline of scientometrics developed. It started from library and information needs, helping the ever-growing scientific community to access, retrieve and disseminate its ever-increasing output. However, along the way, scientometrics developed into a powerful set of scientifically validated data, indicators and tools. It diffused across many disciplines in the social sciences. Over time, this evolution came to the attention of policymakers. The wealth of data and indicators developed in the field of scientometrics (later extended to informetrics and webometrics) elicited interest in their use for policy purposes. A symbiosis between scientometrics and science policy was born. Using the case of the Flemish Centre for Research & Development Monitoring (ECOOM), we describe and illustrate this coevolution between scientometrics and science policy, its opportunities and its challenges, and its do's and don'ts.
Koenraad Debackere, Wolfgang Glänzel, Bart Thijs
### 18. Different Processes, Similar Results? A Comparison of Performance Assessment in Three Countries
Monitoring the scientific performance of a country, region, or organization has become a high priority for research managers and government agencies. Research assessments have been implemented to provide evidence and facilitate their decisions. They differ in the methodologies applied, the disciplinary and regional breadth, and the consequences that follow. We sought to examine the extent to which quantitative, indicator-based analysis can contribute to identifying and better understanding the effects and effectiveness of the different assessment regimes. To this end, we analyzed the publications from three countries (Australia, the United Kingdom, and Germany) with contrasting systems in place, seeking to demonstrate the possibilities and limitations of using an indicator-based methodology for determining the outcomes from different approaches to assessment.We intentionally selected three countries with different assessment regimes, expecting to see the effects of this in the bibliometric analyses we undertook. However, we found that the data alone do not allow us to conclude that any one system has a beneficial or detrimental influence on performance. Rather, the data suggest that it is not the specific system that makes a difference but the fact that performance becomes a central topic of conversation.In order to better understand the mechanisms behind changing performance, restricting scrutiny to mere numbers is insufficient. Contextual information at various levels of aggregation—within and outside the institutions—is highly relevant.
Sybille Hinze, Linda Butler, Paul Donner, Ian McAllister
### 19. Scientific Collaboration Among BRICS: Trends and Priority Areas
The political and economic partnership known as BRIC (for Brazil, Russia, India and China) was formally established in 2008. Three years later, in a joint meeting in Cape Town, a new member, South Africa, was included in the group. In this meeting, Brazil, Russia, India, China and South Africa (BRICS) delegates elaborated a list of priority areas for enhancing bi- or multilateral cooperation in the fields of science, technology and innovation. Considering the growing importance of BRICS in the global economy and other sectors, the present study investigates the performance of the group in the scientific arena before and after its formalization in 2008, looking closely at BRICS collaborative publications, in order to identify whether the priority areas established in the Cape Town declaration are actually being pursued. Data were collected during February and March 2017 from the Web of Science database, covering the period 2000–2015. To match scientific collaborations, specific searches were carried out by combining the names of two BRICS members and time periods. Various bibliometric techniques were used, including diachronic analysis, Bradford's law and journal co-citation analysis. Among the key findings highlighted here are a marked increase in BRICS participation during the period, widely varying levels of collaboration among members, and the presence of physics as a central field for most members. The chapter concludes with an in-depth discussion focusing on correlations between the fields with greater collaboration and the priority areas.
Jacqueline Leta, Raymundo das Neves Machado, Roberto Mario Lovón Canchumani
### 20. The Relevance of National Journals from a Chinese Perspective
The process of journal evaluation began in the 1930s when the famous British scholar S.C. Bradford published his study of geophysics and lubrication, which presented the empirical law now known as Bradford's law of scattering, as well as the concept of core journals. The citation indicator system and citation analysis theory were founded in the middle of the twentieth century, and now have extensive influence. In the 1960s, Garfield carried out a large-scale statistical analysis of citations in journal literature. Generally speaking, the journal evaluation system has been gradually improved over time, producing evaluation results that meet the development needs of science and technology. As one of the countries producing important science and technology outputs, China has ranked second in the number of scientific articles in recent years. At the same time, China has over 5000 scholarly journals; however, only 4% of them have been indexed in Web of Science and 10% of them in Scopus. A similar situation is found in Russia, Japan, Korea, and other non-English-speaking countries. Therefore, China has carried out much research and practice in the field of journal evaluation in order to explore more applicable and effective ways of assessing and improving national academic journal development. We review the development of scientific, technical and medical (STM) journals in China to understand the demand for a national journal evaluation system. From a comparative study of international and national evaluation systems and indicators of academic journals in China, we can identify the characteristics of national journal evaluation within a framework of their respective evaluation purposes, evaluation methods, key features, and evaluation criteria. We introduce two cases of China's STM journal research and evaluation work: the development of the boom index and its monitoring function, and the definition and application of comprehensive performance scores (CPSs) for Chinese scientific and technical journals. English-language science and technology journals in China are more similar to international journals but are developing along a particular path. Therefore we also introduce three other cases: statistics and analysis of English-language science and technology journals in China, the communication value of Chinese-published English-language academic journals according to citation analysis, and the atomic structure model for evaluating English-language scientific journals published in non-English countries.
Zheng Ma
### 21. Bibliometric Studies on Gender Disparities in Science
Understanding gender related disparities in science is an essential step in tackling these issues. Through the years, bibliometric studies have designed several methodologies to analyze scholarly output and demonstrate that there are significant gaps between men and women in the scientific arena. However, gender identification in itself is an enormous challenge, since bibliographic data does not reveal it. These bibliometric studies not only focused on publication output and impact, but also on cross-referencing output, promotions and tenure data, and other related curriculum vitae (CV) information. This chapter discusses the challenges of tracking gender disparities in science through bibliometrics and reviews the various approaches taken by bibliometricians to identify gender and analyze the bibliographic data in order to point to gender disparities in science.
Gali Halevi
### 22. How Biomedical Research Can Inform Both Clinicians and the General Public
This study involved the collection of clinical practice guidelines (CPGs) on five noncommunicable disease (NCD) areas from 21 European countries, and extraction of their evidence base in the form of papers in journals processed on the Web of Science (WoS). We analyzed these cited papers to see how their geographical provenance compared with European research in the respective subjects and found that European research (and that from the USA, Australia, and New Zealand) was over-cited compared with that from East Asia. In cancer, surgery and radiotherapy research made important contributions to the CPGs. We also collected medical research stories from 30 newspapers from 22 European countries and the WoS papers that they cited. There was a heavy emphasis on cancer, particularly breast cancer, and its epidemiology, genetics, and prognosis, but new treatment methods were seldom reported, particularly surgery and radiotherapy. Some of the stories quoted commentators, with those from the two UK newspapers often mentioning medical research charities, which thereby gained much free publicity. Both sets of cited research papers showed a marked tendency to be over-cited by documents from their countrymen; the ratio was higher the smaller the country's contribution to research in the subject area.
Elena Pallari, Grant Lewison
### 23. Societal Impact Measurement of Research Papers
What are the results of public investment in research from which society actually derives a benefit? The scope of research evaluations becomes broader when societal products (outputs), societal use (societal references), and societal benefits (changes in society) of research are considered. This chapter presents an overview of the literature in the area of societal impact measurement of scientific papers. It describes major research projects on societal impact measurements. Problems of societal impact assessments are discussed as well as proposals to measure societal impact. The chapter discusses the role of alternative metrics (altmetrics) in measuring societal impact. There is an ongoing debate in scientometrics as to whether altmetrics are able to measure this kind of impact.
Lutz Bornmann, Robin Haunschild
### 24. Econometric Approaches to the Measurement of Research Productivity
The measurement of research productivity is receiving more and more attention. Besides scholars that are interested in understanding how research works and evolves over time, there are supranational, national and local governments, and national evaluation agencies, as well as various stakeholders, including managers of academic and research institutions, scholars and more generally the wider public, who are interested in the accountability and transparency of the scholarly production process.The main objective of this chapter is to analyze econometric approaches to research productivity and efficiency, highlighting what econometric approaches to research assessment can offer and what their benefit is, compared to traditional bibliometric or informetric approaches. We describe the nature of, and the ambiguities connected to, the measurement of research productivity, as well as the potential of econometric approaches for research measurement and assessment. Finally, we propose a checklist when developing econometric models of research assessment as a starting point for further research.
Cinzia Daraio
### 25. Developing Current Research Information Systems (CRIS) as Data Sources for Studies of Research
Current research information systems (CRIS) are increasingly being used to standardize and ease documentation, communication, and administration of research. With broad coverage and sufficient completeness, data quality, and standardization, CRIS systems can also be used as data sources for studies of research. Making CRIS interoperable and comparable across institutions and countries is necessary for the further development of CRIS for research purposes. Integration of CRIS for administrative purposes is already on the European agenda. This chapter focuses on challenges and solutions to the development of internationally integrated CRIS. Most of the remaining challenges are not related to technical solutions, but to an efficient sharing and use of contents. The chapter starts with the situation at the international level before it moves on to an example of CRIS at the national level to describe challenges and possible solutions even more concretely. The last section of the chapter provides examples of the type of studies that can be performed if progress is made for internationally integrated CRIS.
Gunnar Sivertsen
### 26. Social Media Metrics for New Research Evaluation
This chapter approaches, from both a theoretical and practical perspective, the most important principles and conceptual frameworks that can be considered in the application of social media metrics for scientific evaluation. We propose conceptually valid uses for social media metrics in research evaluation. The chapter discusses frameworks and uses of these metrics as well as principles and recommendations for the consideration and application of current (and potentially new) metrics in research evaluation.
Paul Wouters, Zohreh Zahedi, Rodrigo Costas
### 27. Reviewing, Indicating, and Counting Books for Modern Research Evaluation Systems
In this chapter, we focus on the specialists who have helped to improve the conditions for book assessments in research evaluation exercises, with empirically based data and insights supporting their greater integration. Our review highlights the research carried out by four types of expert communities—the monitors, the subject classifiers, the indexers, and the indicator constructionists. Many challenges lie ahead for scholars affiliated with these communities, particularly the latter three. By acknowledging their unique yet interrelated roles, we show where the greatest potential is for both quantitative and qualitative indicator advancements in book-inclusive evaluation systems.
Alesia Zuccala, Nicolas Robinson-García
### 28. Scholarly Twitter Metrics
Twitter has unarguably been the most popular among the data sources that form the basis of so-called altmetrics. Tweets to scholarly documents have been heralded as both early indicators of citations and measures of societal impact. This chapter provides an overview of Twitter activity as the basis for scholarly metrics from a critical point of view and equally describes the potential and limitations of scholarly Twitter metrics. By reviewing the literature on Twitter in scholarly communication and analyzing 24 million tweets linking to scholarly documents, it aims to provide a basic understanding of what tweets can and cannot measure in the context of research evaluation. Going beyond the limited explanatory power of low correlations between tweets and citations, this chapter considers what types of scholarly documents are popular on Twitter, and how, when and by whom they are diffused in order to understand what tweets to scholarly documents measure. Although the chapter is not able to solve the problems associated with the creation of meaningful metrics from social media, it highlights particular issues and aims to provide the basis for advanced scholarly Twitter metrics.
Stefanie Haustein
### 30. Data Collection from the Web for Informetric Purposes
This chapter reviews the development of data collection procedures on the web with an emphasis on current practices, data cleansing and matching, data quality and transparency. There are several issues to be considered when collecting data from the web. Transparency is essential to know what is included in the data source, how recent and comprehensive the data are, what timeframe is covered etc. Data quality relates to reliability and accuracy. Mistakes are inevitable, data providers, aggregators, and researchers all make mistakes, but these mistakes should be reduced to a minimum so that meaningful conclusions may be reached from the data analysis. Extensive data cleansing before starting the analysis is needed to try to correct mistakes in the data. When several data sources are used, data from different sources should be matched, and duplicates should be removed.
Judit Bar-Ilan
Kayvan Kousha
### 32. Usage Bibliometrics as a Tool to Measure Research Activity
Edwin A. Henneken, Michael J. Kurtz
### 33. Online Indicators for Non-Standard Academic Outputs
This chapter reviews webometric, altmetric, and other online indicators for the impact of nonstandard academic outputs, such as software, data, presentations, images, videos, blogs, and grey literature. Although the main outputs of academics are journal articles in science and the social sciences, and monographs, chapters, or edited books to some extent in the arts and humanities, many scholars also produce other primary research outputs. For nonstandard outputs, it is important to provide evidence to justify a claim for a type of impact, and online indicators may help with this. Using the web, academics may obtain data to present as evidence for a specific impact claim. The research reviewed in this chapter describes the types of evidence that can be gathered, the nature of the claims that can be made, and methods to collect and process the raw data. The chapter concludes by discussing the limitations of online data and summarizing recommendations for interpreting impact evidence.
Mike Thelwall
### 34. Information Technology-Based Patent Retrieval Models
This chapter presents information technology (IT) based patent retrieval models. It first compares and contrasts information retrieval (IR) with patent retrieval, and highlights their key differences. For instance, IR can be considered as a precision-oriented retrieval, whereas patent retrieval can be considered as a recall-oriented retrieval. The chapter then describes the Boolean retrieval model, which was designed for IR but can be used for patent retrieval. To facilitate effective patent retrieval, a basic patent retrieval model is presented. With this model, representative keyword terms are extracted from the user query and are ranked according to their importance so that the top-$k$ relevant patents can be retrieved with irrelevant patents eliminated. Moreover, the chapter also presents some enhancements and extensions to the basic patent retrieval model, which include incorporation of relevance feedback, estimation of the importance of keyword terms, text preprocessing of patent documents, and handling of patent category frequency. In addition, two dynamic patent retrieval models are also described. These two models perform interactive patent retrieval via dispersion or accumulation to dynamically rank the patents. Experimental results with real-life datasets show that the models presented in this chapter outperformed many conventional search systems with respect to time and cost. While this chapter focuses on the theoretical aspects of IT based patent retrieval models which are of interest to IT specialists, practical illustrative examples in the chapter demonstrate the empirical aspects of patent retrieval models which are helpful to IT practitioners.
Carson Leung, Wookey Lee, Justin Jongsu Song
### 35. The Role of the Patent Attorney in the Filing Process
The role of the legal representative in patent filing processes is, so far, under-explored in patent statistics. This chapter addresses the question of the role and the impact of the patent attorney in the filing process. One of the core assumptions is that more experienced attorneys have more in-depth knowledge of the intricacies of the patent system and, thus, are more likely to pursue more elaborate and successful filing strategies.The results show a high concentration of attorneys and filing action in absolute as well as in relative terms in some countries, namely Germany and the UK, and numbers worth mentioning also in other larger applicant countries like France, Italy, Sweden, or the Netherlands. Explanations for this biased distribution in Europe are language advantages in the case of the UK (and also Ireland) and geographical proximity to the European Patent Office (EPO), as well as economies of scale in the case of Germany.The experience of the representative has a considerable impact on the outcome. Multivariate analyses suggest that the (financial) resource endowment is a decisive factor in the hiring of patent attorneys. It was shown that the patents of more experienced representatives were significantly more often withdrawn (but neither refused nor granted with a higher probability), and they were less often opposed than the ones by less experienced attorneys.
Rainer Frietsch, Peter Neuhäusler
### 36. Exploiting Images for Patent Search
Patent offices worldwide receive considerable numbers of patent documents that aim at describing and protecting innovative artifacts, processes, algorithms, and other inventions. These documents apart from the main text description may contain figures, drawings, and diagrams in an effort to better explain the patented object. Two main directions are presented in this chapter; concept-based and content-based patent retrieval. Concept-based search utilizes textual and visual information, fusing them in a classification late fusion stage. Conversely, content-based retrieval is based on the shape/content information from patent images and is therefore based on the visual descriptors that are extracted from binary images. Concepts are extracted using classification techniques, such as support vector machines and random forests. Adaptive hierarchical density histograms serve as binary image retrieval techniques that combine high efficiency and effectiveness, while being compact and therefore capable of dealing with large binary image databases. Given the vast number of images included in patent documents, it is highly significant for the patent experts to be able to examine them in their attempt to understand the patent contents and identify relevant inventions. Therefore, patent experts would benefit greatly from a tool that supports efficient patent image retrieval and extends standard figure browsing and metadata-based retrieval by providing content-based search according to the query-by-example paradigm.
Ilias Gialampoukidis, Anastasia Moumtzidou, Stefanos Vrochidis, Ioannis Kompatsiaris
### 37. Methodological Challenges for Creating Accurate Patent Indicators
The chapter deals with new methodological issues of retrieval for patent indicators linked to the change of the patent system in the last 20 years and the new ways to access patent data. In particular, it describes international flows of patent applications between the US, Europe, and Southeast Asia, and illustrates methods for an appropriate cross-country comparison. A central topic of this chapter is the implications of the frequently used Patent Cooperation Treaty (PCT) route of patent applications on the conception of search strategies and the interpretation of search results. Furthermore, the possibilities of search with the new international Cooperative Patent Classification (CPC) are explained. In addition, the patenting activities of very large companies and patent value are discussed.
Ulrich Schmoch, Mosahid Khan
### 38. Using Text Mining Algorithms for Patent Documents and Publications
In this chapter we present an overview of text mining approaches that can be used to conduct science and technology studies that rely on assessing the (content) similarity between patent documents and/or scientific publications. We highlight the rationale behind vector space models, latent semantic analysis, and probabilistic topic models. In addition, several validation studies pertaining to patent documents and publications are presented. These studies reveal that choices in terms of algorithms, pre-processing, and calculation options have non-trivial consequences in terms of outcomes and their validity. As such, scholars should pay attention to the technicalities implied when engaging in text mining efforts in order for outcomes to become relevant and informative.
Bart Van Looy, Tom Magerman
### 39. Application of Text-Analytics in Quantitative Study of Science and Technology
The quantitative study of science, technology and innovation (ST&I) has experienced significant growth with advancements in disciplines such as mathematics, computer science and the information sciences. From early studies utilizing statistical methods, graph theory, citations or co-authorship, the state of the art in quantitative methods now leverages natural language processing and machine learning. However, there is no unified methodological approach within the research community or a comprehensive understanding of how to exploit text-mining potentials to address ST&I research objectives. Therefore, this chapter intends to present the state of the art of text mining within the framework of ST&I. The major contribution of the chapter is twofold: first, it provides a review of the literature on how text mining extended the quantitative methods applied in ST&I and highlights major methodological challenges. Second, it discusses two hands-on detailed case studies on how to implement the text analytics routine.
Samira Ranaei, Arho Suominen, Alan Porter, Tuomo Kässi
### 40. Functional Patent Classification
Patent classifications are systematically used in patent analysis for a number of purposes. Existing classifications not only shape the administrative activities of recording and reporting and the search for prior art, but also create the backbone of the construction of science and technology indicators used in economic analysis, policy making, and business and competitive intelligence.Yet the current classification system of patents, despite significant and continuous efforts to update, suffers from a number of limitations. In particular, it fails to capture the full potential of inventions to cut across industrial boundaries, does not allow fine-grained technology intelligence, and misses almost entirely the opportunities for lateral vision.We suggest integrating existing schemes with a full scale functional classification, i. e., based on the main functions performed by a technology, rather than on the inventive solutions or their potential applications. The functional approach allows us to overcome most of the limits of traditional classification, due to the generality and abstraction of the representation of functions. In this chapter, we will first review the conceptual background of the functional approach in epistemology and analytical philosophy and illustrate its recent developments in engineering design, design theory, artificial intelligence, computational linguistics, and data mining. We then discuss three short case studies of the application of the methodology for the definition of patent sets (in particular within a technology foresight exercise), prior art analysis, and technology crossover identification and mapping.
Andrea Bonaccorsi, Gualtiero Fantoni, Riccardo Apreda, Donata Gabelloni
### 41. Computer-Implemented Inventions in Europe
The dispute between proponents and opponents of the patent system has been especially visible with regard to the patenting of computer programs. Different developments have resulted in the fact that there are large differences in the patent practices between the European Patent Office (EPO) and the U.S. Patent and Trademark Office (USPTO). While software as such is patentable at the USPTO, the EPO prohibits patenting of pure computer programs and only allows patenting of computer-implemented inventions (CII). In this chapter, we investigate the differences between the European and American patent systems with regard to patenting computer programs by also addressing the historical developments that have resulted in the national differences. Based on these considerations, a definition of CII is derived, which enables us to carry out empirical analyses. By applying a conservative estimate, our results show that the share of CII filings at the EPO lies at around 25% at present, while at the USPTO a current margin of approximately 33% is reached. Thus, at least every fourth patent at the EPO and every third patent at the USPTO is a CII filing. In order to take account of the factual (technological and economical) relevance of computer-implemented inventions, we argue for clear rules with regard to patenting CII, as they are essential to reduce uncertainties and provide the relevant incentives for innovation.
Peter Neuhäusler, Rainer Frietsch
### 42. Interplay of Patents and Trademarks as Tools in Economic Competition
Integrated manufacturing-service systems have been receiving attention recently. The phenomenon of services-to-artifacts companies, namely those specializing in intermediate goods and complex equipment, is increasingly instrumental for long-run competitiveness in fast-changing, high-quality global markets. The debate has so far remained largely qualitative, and the effective role and relevance of services is rather fuzzy. Against this background, this chapter brings in empirical evidence concerning the evolving business models of a variety of leading innovative manufacturing companies. For this purpose, over 50 manufacturing companies listed in the European Union (EU) research & development (R&D) investment scoreboard are analyzed in terms of patents and trademarks. In particular, trademark strategies are studied in greater depth, and they are sub-divided into goods and services marks and into high and low sophistication. Service marks are used as a supplement to patents, as the service component of industrial offerings is not covered by classic indicators of technical change. The economic data from the EU (EU Scoreboard R&D, sales, growth, employees, profits, or investment) are linked to the patent and trademark data in order to see which balance of goods and service capabilities leads to favorable economic results.
Sandro Mendonça, Ulrich Schmoch, Peter Neuhäusler
### 43. Post Catch-up Trajectories: Publishing and Patenting Activities of China and Korea
This chapter seeks to explore the sequential cyclical growth of science, technology, and science-based technology for two economies—China and South Korea—in the course of transitioning to the postcatching-up phase. Both China and South Korea intend to capitalize on scientific and technological knowledge in order to transition to the postcatching-up phase of development. This chapter highlights the production trajectories of science and technology towards the postcatching-up phase in terms of: (1) scientific publications, (2) granted patents, (3) copatenting patterns, (4) forward citations, and (5) science-based patents. China and South Korea have been active in terms of scientific publication and patenting activities. In regard to patenting, both economies have shown the capability to produce patents and are able to converge the growth of patents with that of publications. This chapter highlights a generic cyclical growth path for science, technology, and science-based technology in the course of transitioning to an advanced knowledge-based economy. It is nonetheless important to explore whether there are different paths pursued by other emerging economies.
Chan-Yuan Wong, Hon-Ngen Fung
### 44. Standardization and Standards as Science and Innovation Indicators
The focus of innovation policies has shifted from knowledge creation and protection (e. g., by patents) to knowledge diffusion (e. g., via open access) in order to promote their implementation. This has led to an increasing need for innovation indicators that reflect the implementation of knowledge within innovative products and services. Standardization as a kind of open innovation process, and standards as its output, represents a new type of innovation indicator. In this chapter, we begin with a discussion of existing opportunities for using standards and standardization as innovation indicators, including three specific examples of input, throughput, and output indicators. Next we identify challenges that must be addressed to close the data gaps—which are still very significant when compared with patent data. In addition, the broader concept of quality infrastructure is introduced in order to point out the complexity of standards implementation, and its close link to innovation as well. The chapter concludes with examples of how decision makers in industry and policy could make use of a comprehensive database of standardization and standards to evaluate innovation policy initiatives.
Knut Blind
### Backmatter
|
2019-11-22 18:34:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2981802523136139, "perplexity": 2290.75325092597}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671411.14/warc/CC-MAIN-20191122171140-20191122200140-00303.warc.gz"}
|
http://m-phi.blogspot.com/2011/04/how-to-write-proofs-quick-guide.html
|
## Tuesday, 12 April 2011
### "How to write proofs: a quick guide"
In introductory logic, students are asked to answer problems like,
• Show that the formula $P \rightarrow (P \rightarrow Q)$ is equivalent to $P \rightarrow Q$.
So, the student writes down a truth table with sentence letters $P$ and $Q$, and a column $P \rightarrow (P \rightarrow Q)$ and a column for $P \rightarrow Q$ and checks that the truth values of these two columns all match. Alternatively, a student might be asked to give a formal derivation of $P \rightarrow Q$ from $P \rightarrow (P \rightarrow Q)$ and vice versa.
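For concreteness (a table added here, not in the original post), the completed truth table looks like this; the last two columns match row for row:

| $P$ | $Q$ | $P \rightarrow Q$ | $P \rightarrow (P \rightarrow Q)$ |
| --- | --- | --- | --- |
| T | T | T | T |
| T | F | F | F |
| F | T | T | T |
| F | F | T | T |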
In intermediate logic, students are asked to answer problems like
• Suppose $S_0$ is $P \rightarrow Q$ and $S_{n+1}$ is $P \rightarrow S_n$. Show that, for all $n$, $S_n$ is equivalent to $P \rightarrow Q$
This involves something like a genuine mathematical proof, using induction. When philosophy students step up from introductory logic to intermediate logic, they often find it challenging to come up with informal mathematical proofs of such claims. For philosophy students who do not intend to focus on theoretical philosophy, this needn't matter (though I believe that, increasingly, it will). But for advanced philosophy students who want to focus on topics in logic and parts of metaphysics, philosophy of language, mathematics and science, at some point it becomes necessary to be able to understand, and write out, informal proofs of a mathematical nature.
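To make this concrete (a worked sketch added here, not part of the original post), the induction for the problem above runs as follows. Base case: $S_0$ is $P \rightarrow Q$, which is trivially equivalent to $P \rightarrow Q$. Inductive step: suppose $S_n$ is equivalent to $P \rightarrow Q$. Then $S_{n+1}$, which is $P \rightarrow S_n$, is equivalent to $P \rightarrow (P \rightarrow Q)$, and by the introductory exercise this is equivalent to $P \rightarrow Q$. Hence, by induction, $S_n$ is equivalent to $P \rightarrow Q$ for all $n$.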
Here is a link to a short guide on writing proofs, for mathematics students, by Eugenia Cheng, a category theorist at The University of Sheffield.
#### 2 comments:
1. When I wanted to learn about mathematical proofs as a PhD student, I found the book An Introduction to Mathematical Reasoning, by Peter J. Eccles, very helpful. It also taught me some maths.
2. Thanks, Campbell. I think I'll continue with this theme from time to time.
|
2015-05-22 19:05:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5849595665931702, "perplexity": 427.4392137152079}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207926620.50/warc/CC-MAIN-20150521113206-00145-ip-10-180-206-219.ec2.internal.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/calculus/calculus-3rd-edition/chapter-7-exponential-functions-7-4-exponential-growth-and-decay-exercises-page-349/1
|
## Calculus (3rd Edition)
(a) $$P(0)= 2000.$$ (b) $$t=\frac{\ln5}{1.3}.$$
(a) The number is given at $t=0$, so we have $$P(0)=2000e^0=2000.$$ (b) We have $$10{,}000=2000e^{1.3t}\Longrightarrow e^{1.3t}=5\Longrightarrow 1.3t=\ln 5\Longrightarrow t=\frac{\ln5}{1.3}.$$
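Check (a verification step added here, not part of the original solution): substituting back gives $$P\left(\tfrac{\ln 5}{1.3}\right)=2000e^{1.3\cdot\frac{\ln 5}{1.3}}=2000e^{\ln 5}=2000\cdot 5=10{,}000,$$ as required.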
|
2019-11-17 00:01:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9617443084716797, "perplexity": 709.4984023159116}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668772.53/warc/CC-MAIN-20191116231644-20191117015644-00353.warc.gz"}
|
http://www.jnr.ac.cn/CN/10.11849/zrzyxb.2014.12.004
|
• Resource Ecology •
### Eco-Risk Analysis of Oasis Region Based on Landscape Structure and Spatial Statistics Method—A Case Study of the Wuwei and Minqin Oases of the Shiyang River
1. 1. College of Geographical and Environment Science, Northwest Normal University, Lanzhou 730070, China;
2. Management Bureau of the Shiyang River Basin, Gansu Provincial Department of Water Resources, Wuwei 733000, Gansu, China;
3. School of Urban Economics and Tourism Culture, Lanzhou City University, Lanzhou 730070, China
• Received: 2013-10-21; Revised: 2014-02-25; Online: 2014-12-20; Published: 2014-12-20
• Corresponding author: SHI Pei-ji, male, professor, engaged in regional economics research. E-mail: Shipj@nwnu.edu.cn
• About the author: WEI Wei (1982-), male, PhD, lecturer, engaged in applied research on ecological remote sensing and GIS. E-mail: weiweigis2006@126.com
• Funding:
National Natural Science Foundation of China (41261104, 41271133); National Social Science Fund of China Youth Project (12CTJ001); Natural Science Foundation of Gansu Province (1107RJZA104).
### Eco-risk Analysis of Oasis Region Based on Landscape Structure and Spatial Statistics Method—A Case Study of Wuwei and Minqin Oases
WEI Wei1, SHI Pei-ji1, LEI Li2, ZHOU Jun-ju1, XIE Bin-bin3
1. 1. College of Geographical and Environment Science, Northwest Normal University, Lanzhou 730070, China;
2. Management Bureau of Shiyang River Basin, Gansu Provincial Department of Water Resources, Wuwei 733000, China;
3. School of Urban Economics and Tourism Culture, Lanzhou City University, Lanzhou 730070, China
• Received:2013-10-21 Revised:2014-02-25 Online:2014-12-20 Published:2014-12-20
Abstract:
Eco-risk assessment is a research hotspot that has arisen over the past 20 years, owing to the spatial heterogeneity and complexity involved in assessment, and it forms an integrated junction for geography, ecology, and environmental risk evaluation. The study of ecological risk is helpful for understanding the local ecological environment, reducing ecological risk, and ultimately improving the interactions between human beings and nature. To reveal the impact of ecological risk of oasis change in the small watershed of the Shiyang River Basin, the Wuwei and Minqin oases were chosen as the study area. Landscape information was obtained from satellite remote sensing TM images of 1987, 2000 and 2010. A 4 km × 4 km grid was created as the auxiliary evaluation unit, and GIS technology was employed as the data integration and analysis platform. In order to generate maps of oasis eco-risk in 1987, 2000 and 2010, the spatial overlay method was used to make the index a spatial variable. The landscape interference index, landscape fragility index, landscape dominance index and fragmentation index were used to analyze the relationship between landscape pattern and eco-risk degree with the support of the ArcGIS 10.0, ArcView 3.2 and FRAGSTATS software packages. Based on the overlay analysis of the landscape indices in each fishnet cell, the eco-risk degree was expressed spatially. Meanwhile, spatial statistics methods were also used to analyze the spatio-temporal process of landscape structures and ecological risk. The results showed that: 1) Urban and rural land expanded rapidly from 1987 to 2010. At the same time, farmland and grassland decreased on a large scale, and the dominant landscape changed from farmland and grassland to farmland and construction land. 2) The ecological risk of the Wuwei oasis experienced a transition from high to moderate, and its ecological risk as a whole tended to improve, while the Minqin oasis eco-risk turned from medium/higher-risk to higher-risk/highest-risk. The ecological risk of the whole study area deteriorated over the past 20 years. 3) The Wuwei oasis elements mainly tended to cluster, with values higher than the average; the low eco-risk areas were much more clustered, and this trend was increasing. Comparatively, the elements of the Minqin oasis that were lower than the average tended to cluster, and the high eco-risk areas clustered strongly; these cluster characteristics increased in area and expanded in space. Therefore, the Wuwei oasis should expand the scale of facility agriculture and high-efficiency water saving. At the same time, key ecological factors such as water-conservation forests and alpine grassland should be protected. Moreover, desertification control in the Minqin oasis is particularly important: measures such as straw checkerboards (grass squares), cotton stems, corn straw, and drought-tolerant plants can effectively control desertification and improve resistance to eco-environmental change.
• X820.4
|
2022-12-01 20:21:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19757220149040222, "perplexity": 9009.043814968663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710869.86/warc/CC-MAIN-20221201185801-20221201215801-00068.warc.gz"}
|
https://mathhelpboards.com/threads/mathmaniac-here.3677/page-2
|
# Mathmaniac here!!!
#### agentmulder
##### Active member
Why are you less active here, agent?
The questions here are usually much harder and beyond my reach.
I consider MHB a valuable site because of the tutorials... now I just have to make the effort to understand... I'll post whenever I feel I can make a worthwhile contribution.
|
2021-08-02 16:09:00
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8344900608062744, "perplexity": 2917.0697845815025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154321.31/warc/CC-MAIN-20210802141221-20210802171221-00298.warc.gz"}
|
https://www.physicsforums.com/threads/is-continous-symmetry-breaking-the-necessary-or-adequate-condition-for-nambu-modes.490736/
|
Is continuous symmetry breaking the necessary or sufficient condition for Nambu modes?
Dear all,
I have a question regarding the usual Goldstone theorem, which states that, for a system with continuous symmetry breaking, massless bosons must appear. However, if you look at the derivations of this theorem [1], the crucial assumption seems to be that the conserved quantity associated with this symmetry has a local form, i.e., one can define its density and the corresponding current density. As long as this condition is met, the massless modes follow definitely. If so, then the symmetry need not necessarily be continuous, and the conditions can be relaxed to: (1) there exists a symmetry that leaves the Hamiltonian invariant but alters the ground state; (2) the conserved quantity derived from this symmetry has a local form.
May I say that?
[1]Gene F. Mazenko, Fluctuations, order and defects, p215
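For context, a compressed version of the standard argument (my summary, not a quote from [1]): local current conservation $\partial_\mu j^\mu = 0$ gives a charge $Q = \int d^3x\, j^0(\mathbf{x}, t)$, and spontaneous breaking means there is a field $\phi$ with $\langle 0|[Q, \phi(0)]|0\rangle \neq 0$. Inserting a complete set of intermediate states, the time-independence of this commutator (which follows from current conservation plus locality) forces the spectrum to contain states with $E(\mathbf{p}) \to 0$ as $\mathbf{p} \to 0$, i.e., the gapless Nambu modes. The continuity of the symmetry is what guarantees a conserved Noether current in the first place, which is why asking whether conditions (1) and (2) alone suffice is a natural question.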
|
2021-08-03 08:37:59
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9573079943656921, "perplexity": 526.4840187697685}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154432.2/warc/CC-MAIN-20210803061431-20210803091431-00552.warc.gz"}
|
https://www.greencarcongress.com/2014/01/20140111-petrobras.html
|
## Petrobras launches new 50 ppm ultra-low-sulfur gasoline throughout Brazil
##### 11 January 2014
Petrobras launched new regular and premium ultra-low-sulfur gasoline throughout Brazil on 1 January 2014, entirely replacing the previous regular and premium gasoline.
The new gasoline is called S-50 because it has a maximum sulfur content of 50 mg/kg, or parts per million (ppm)—a 94% decrease compared to the gasoline previously sold in Brazil (the outgoing standard was S-800, i.e., 800 ppm, and (800 − 50)/800 ≈ 94%)—allowing the introduction of vehicles with modern emissions-treatment technology and reducing sulfur oxide (SOx) emissions by 35,000 tons/year.
(California, Europe, Japan, South Korea, and several other countries have gasoline sulfur limits of 10 ppm. Earlier post.)
Sample of outgoing S-800 and incoming S-50.
This gasoline reduces gas pollutants emitted from the exhausts of engines manufactured after 2009 by up to 60% for nitrogen oxides (NOx), 45% for carbon monoxide (CO) and 55% for hydrocarbons (HC).
The new fuel has other benefits, such as lower deposit formation on valves, fuel injectors, and within the combustion chamber; increased performance and extended life of the catalytic converter; reduced engine wear; and longer-lasting lubricant, maintaining energy efficiency with lower maintenance costs.
Petrobras was the first to remove lead completely from Brazilian gasoline in 1989.
The new gasoline is available at Petrobras service stations throughout Brazil as well as other national distributors and continues to identify “regular gasoline” or “premium gasoline” at service station pumps depending on what type of fuel it is.
Between 2005 and 2013, Petrobras invested R$20.6 billion (US$8.7 billion) on 21 new units allowing S-50 gasoline to be produced at all of its refineries.
|
2023-02-06 02:49:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21484683454036713, "perplexity": 10803.724806061851}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500303.56/warc/CC-MAIN-20230206015710-20230206045710-00634.warc.gz"}
|
https://nikhiljha.com/posts/implicitderivative/
|
# Cool Math Tricks #1: Implicit Derivative
Nikhil Jha | | 6 minute read
So you're in Calculus BC, and you just learned how to find the derivative of a function ($dy/dx$) when the variables aren't isolated. Unfortunately, the "standard" process that is usually taught is, although useful, really slow. Lucky for us, math always has shortcuts. In this post, I will describe a fast shortcut for implicit differentiation.
!! Do NOT use the shortcut and forget to learn the traditional method. You might have trouble with the conceptual questions that are asked if you do so.
You're given a function: $4x^3 + 3xy + 6y^3 = 9$
You differentiate everything like normal, except whenever you take the derivative of $y$, you multiply it by $dy/dx$ (as a result of the chain rule). If a term has both x and y, you use the product rule. It goes something like this...
Find the derivative of the following via implicit differentiation:
d/dx(4x^3 + 3xy + 6y^3) = d/dx(9)
12x^2 + d/dx(3xy) + 18y^2(dy/dx) = 0
12x^2 + 3y + 3x(dy/dx) + 18y^2(dy/dx) = 0
12x^2 + 3y = -3x(dy/dx) - 18y^2(dy/dx)
12x^2 + 3y = -(dy/dx)(3x + 18y^2)
dy/dx = -(12x^2 + 3y)/(3x + 18y^2)
This is very helpful when you are doing related-rate type problems, as you have d?/d? in your equation already, ready to substitute in. This is also necessary to understand conceptual problems, as the shortcut handwaves it away. But how can we do it faster?
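A quick way to sanity-check the result above (a minimal sketch added here, using SymPy's `idiff`; it is not part of the original post):

```python
# Sanity check of the implicit derivative above with SymPy (illustrative).
from sympy import symbols, idiff, simplify

x, y = symbols('x y')
f = 4*x**3 + 3*x*y + 6*y**3 - 9           # rewrite the relation as f(x, y) = 0

dydx = idiff(f, y, x)                      # implicit derivative dy/dx
hand = -(12*x**2 + 3*y) / (3*x + 18*y**2)  # the hand-derived answer

print(simplify(dydx - hand))               # prints 0, so the two agree
```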
## Shortcut
### Background
On the day of the implicit derivative lesson in math, I was trying to solve the problem on my own without listening to the teacher (don't do this, I ended up with a... low... grade on the quiz because of it). I solved the problem incorrectly, but I did it again with a different problem just in case. It was at this point that I noticed a pattern.
Compared with the correct answers, my answers:
1. Always had the wrong sign.
2. Were flipped upside down (reciprocal).
This was too consistent for me to ignore, so I went to ##math on Freenode to ask for help.
[2018-09-19 02:35:10] <fyber> In finding dy/dx for some simple function (like 3x^2 + 7xy + 2y^2 = 3)...
[2018-09-19 02:35:10] <fyber> If I treat y as a constant (dx/d?) I can get 6x + 7y
[2018-09-19 02:35:10] <fyber> If I treat x as a constant (dy/d?) I can get 7x + 4y
[2018-09-19 02:35:10] <fyber> Dividing (dy/d?) by (dx/d?), I get (7x + 4y) / (6x + 7y)
[2018-09-19 02:35:10] <fyber> The correct answer is the negative reciprocal of that, so I think I'm doing something right. Can I use this method and take the negative reciprocal in all cases?
[2018-09-19 02:36:09] <a____ptr> fyber: you could look up the implicit function theorem, which tells you when you can do it and why it has a negative reciprocal from "what you would expect"
[2018-09-19 02:37:58] <mancha> fyber: i dunno what those "?" are, but you don't treat them like constants, because they're not.
[2018-09-19 02:38:54] <fyber> I guess the "?" just represent the other side of the equation
[2018-09-19 02:39:09] <fyber> but if I think about it that way then I get 0/0 for the other side
[2018-09-19 02:39:10] <mancha> don't guess, it's your notation
... snip...
[2018-09-19 02:42:00] <fyber> I don't really know what I'm doing, it's just a neat pattern I noticed in my calculus class
[2018-09-19 02:42:16] <mancha> patterns are important
[2018-09-19 02:42:41] <a____ptr> fyber: if you think of 3x^2 + 7xy + 2y^2 as a function f(x,y), then what you're doing here by """(dx/d?)""" is taking the partial derivative of f wrt to x
[2018-09-19 02:42:53] <a____ptr> symbot: tex \partial
[2018-09-19 02:42:53] <symbot> a____ptr: ∂
[2018-09-19 02:43:11] <a____ptr> fyber: you'd write it as ∂/∂x f(x,y)... usually if you're using leibniz notation
!!! Apart from the amazing < 1 minute response time by a____ptr on IRC (❤️ IRC), we got the name of a theorem that confirms that what I'm doing should work in most cases, as well as the name of the calculus concepts that I was unknowingly using. From here, we can use Google to figure out what they are, and go from there. Now that all the background is out of the way, here's how to actually do the shortcut.
### The Actual Shortcut
You're given a function: $4x^3 + 3xy + 6y^3 = 9$
| Description | Math |
| --- | --- |
| Move all terms to one side except constants. | $4x^3 + 3xy + 6y^3 = 9$ |
| Pretend that y is a constant. Take the derivative. | $12x^2 + 3y$ |
| Pretend that x is a constant. Take the derivative. | $3x + 18y^2$ |
| Divide the x thing by the y thing and make it negative. | $-\dfrac{12x^2 + 3y}{3x + 18y^2}$ |
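If you want to check the shortcut on a particular function, here is a minimal sketch using sympy (assuming sympy is available; idiff is sympy's built-in implicit differentiation helper):

```python
import sympy as sp

x, y = sp.symbols('x y')
F = 4*x**3 + 3*x*y + 6*y**3 - 9   # everything moved to one side

shortcut = -sp.diff(F, x) / sp.diff(F, y)   # -F_x / F_y, the shortcut
traditional = sp.idiff(F, y, x)             # sympy's implicit differentiation

print(sp.simplify(shortcut - traditional))  # prints 0, so the two agree
```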
### Why does it work?
Adapted from Wikipedia: The theorem states that if the equation F(x, y) = 0 satisfies some mild conditions on its partial derivatives, then one can in principle (though not necessarily with an analytic expression) express one of the variables in terms of the others within some disc.
A simple ""proof"" of this (proof is in quotes because it doesn't actually prove anything, but it helps with understanding) is as follows.
Take a circle $f(x,y) = x^2 + y^2 - 1$. The partial derivatives are just $2x$ and $2y$. If you do the implicit derivative the normal way (chain rule), you get $2xdx + 2ydy = 0$. If you solve for $dy/dx$, you get $dy/dx = -x/y$, which is basically what the shortcut states. It turns out that if you only have two variables, dividing the partial derivative wrt x by the partial derivative wrt y and making the whole thing negative works.
Hope this helps!
|
2021-07-26 17:08:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7422429323196411, "perplexity": 850.3403268262815}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152144.81/warc/CC-MAIN-20210726152107-20210726182107-00115.warc.gz"}
|
https://www.physicsforums.com/threads/help-on-partial-derivative.708983/
|
Help on partial derivative
Hi, I was reading something on conservative fields; in this example $\phi$ is a scalar potential (please refer to the attached thumbnail). It uses partial derivatives, but I'm not sure why: in the d$\phi$/dx * dx term, shouldn't the dx cancel out and leave d$\phi$? So the integral should be -3∫d$\phi$. I know this is wrong, but I'm not sure why; can someone explain?
Thanks
[Attachment: Untitled.png]
HallsofIvy
You are trying to apply the "one variable" chain rule to a multivariable function. The chain rule for multivariable functions is
$$\frac{d\phi}{dt}= \frac{\partial \phi}{\partial x}\frac{dx}{dt}+ \frac{\partial \phi}{\partial y}\frac{dy}{dt}$$
or in "differential form"
$$d\phi= \frac{\partial \phi}{\partial x}dx+ \frac{\partial \phi}{\partial y}dy$$
Erland
It does not say ##\frac{d\phi}{dx}dx## etc, it says ##\frac{\partial \phi}{\partial x}dx## etc. and that is not the same thing. A function of three variables ##\phi(x,y,z)## changes if any of three variables changes, not just ##x##, and if all the variables change, then these changes all contribute to the change in ##\phi##.
To derive the formula, choose a path from ##a## to ##b## and parametrize it, and then evaluate the line integral using this parametrization.
vanhees71
To answer your question, it's better to go back to the definition of a line integral. To that end we give the curve in parameter representation
$$C: \quad \vec{x}=\vec{x}(t), \quad t \in [t_1,t_2].$$
Let further be $\vec{V}(\vec{x})$ a vector field. Then by definition the line integral of this field along the curve is given by
$$\int_C \mathrm{d} \vec{x} \cdot \vec{V}(\vec{x})=\int_{t_1}^{t_2} \mathrm{dt} \frac{\mathrm{d} \vec{x}}{\mathrm{d} t} \cdot \vec{V}[\vec{x}(t)].$$
Now suppose $\vec{V}=-\vec{\nabla} \phi$. Now according to the chain rule for multi-variable functions we have
$$\frac{\mathrm{d}}{\mathrm{d} t} \phi[\vec{x}(t)]=\frac{\mathrm{d} x}{\mathrm{d}t} \frac{\partial \phi}{\partial x}+\frac{\mathrm{d} y}{\mathrm{d}t} \frac{\partial \phi}{\partial y}+\frac{\mathrm{d} z}{\mathrm{d}t} \frac{\partial \phi}{\partial z}=\frac{\mathrm{d} \vec{x}}{\mathrm{d} t} \cdot \vec{\nabla} \phi[\vec{x}(t)]=-\frac{\mathrm{d} \vec{x}}{\mathrm{d} t} \cdot \vec{V}[\vec{x}(t)].$$
Plugging this into the above integral gives
$$\int_{C} \mathrm{d} \vec{x} \cdot \vec{V}(\vec{x})=-\int_{t_1}^{t_2} \mathrm{d} t \frac{\mathrm{d}}{\mathrm{d} t} \phi[\vec{x}(t)]=-[\phi(\vec{x}_2)-\phi(\vec{x}_1)],$$
where $\vec{x}_1=\vec{x}(t_1)$ and $\vec{x}_2=\vec{x}(t_2)$ are the boundary points of the curve.
Note that this result implies that if the vector field is conservative, i.e., if it is the gradient of a scalar field, the line integral connecting two points is independent of the shape of the curve.
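As a concrete check of this result, here is a minimal sympy sketch (illustrative only; the potential and the path are made up): the line integral of $-\vec{\nabla} \phi$ along a parametrized curve depends only on the endpoint values of $\phi$.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')

phi = x**2 * y + sp.sin(z)                 # an arbitrary scalar potential
V = [-sp.diff(phi, v) for v in (x, y, z)]  # V = -grad(phi)

r = (t, t**2, sp.pi * t / 2)               # some path r(t), t in [0, 1]
sub = dict(zip((x, y, z), r))

# line integral: integrate V(r(t)) . r'(t) over t
integrand = sum(Vi.subs(sub) * sp.diff(ri, t) for Vi, ri in zip(V, r))
line_integral = sp.integrate(integrand, (t, 0, 1))

phi_start = phi.subs(dict(zip((x, y, z), (0, 0, 0))))
phi_end = phi.subs(dict(zip((x, y, z), (1, 1, sp.pi / 2))))

print(sp.simplify(line_integral + (phi_end - phi_start)))  # prints 0
```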
verty
For me, the most intuitive way to think about this is to pretend that ##dx## is a rate, so that makes ##\phi_x dx = {\partial\phi \over \partial x} dx## the related rate and ##\phi_x## the x-sensitivity of ##\phi##. ##\phi##'s rate of change is the dot product of ##\phi##'s sensitivity vector (the gradient) and the variable rates of change.
|
2022-07-02 23:14:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9278845191001892, "perplexity": 495.0761328248286}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104205534.63/warc/CC-MAIN-20220702222819-20220703012819-00049.warc.gz"}
|
https://gitlab.osupytheas.fr/wbanfield/tcntools/-/blame/9dd2d5a385a300c0ecda83204512d4f0a937bd06/devs/rnotebooks/TCNtools_eulerian_solver.Rmd
|
TCNtools_eulerian_solver.Rmd 6.56 KB
---
title: 'Introducing the Eulerian solver for concentration calculations'
output:
  html_document:
    df_print: paged
---

# Objectives of this Notebook

## Background

We are going to consider simple computations of concentration under various conditions in terms of erosion, depth or age. This will be done using an Eulerian point of view, which is the most straightforward and fastest way to perform such computation. In this case the quantity of interest (concentration) is computed at fixed depths below the surface, while the exhumed material is moving through this reference frame during its trajectory toward the surface. More details on the differences between Eulerian and Lagrangian approaches, and their application to complex exposition/denudation histories, will be provided in another tutorial.

The relevant equation is the following,

$$C=C_0e^{-\lambda t} + \sum_i \frac{P_i}{\frac{\rho \varepsilon}{\Lambda_i}+\lambda}e^{\frac{-\rho z}{\Lambda_i}}(1-e^{-(\frac{\rho \varepsilon}{\Lambda_i}+\lambda)t})$$

with the following variables and parameters,

- $C$ the concentration (as a function of time $t$ and depth $z$)
- $C_0$ the inherited concentration
- $\lambda$ the decay constant for the considered nuclide
- $P_i$ the scaled surface production rate for the nuclide of interest and the $i$-th production pathway (spallation, stopped muons, fast muons)
- $\rho$ the density of the medium
- $\Lambda_i$ the attenuation length for the particules of the $i$-th production pathway
- $\varepsilon$ surface denudation

In order to stick with usual conventions, in the following time will be measured in years (a), the unit of length will be cm and depths will be expressed in g/cm$^2$ (i.e. actual depth $\times \rho$).

## Set up

The first thing we have to do is to load the **TCNtools** library (once it has been installed).

```{r ck_1}
library("TCNtools")
```

We should then define the basic parameters we are going to use for the computation, which are two vectors:

- a vector with the attenuation lengths for different particules (in g/cm$^2$)
    - neutrons for spallation reactions $\Lambda_{spal}$
    - stopping muons $\Lambda_{stop}$
    - fast muons $\Lambda_{fast}$
- a vector (or matrix) with the SLHL production rates (in at/g/a), in this case for the *st* scaling scheme (@stone2000air), and decay constant $\lambda$ (in 1/a) for the nuclide(s) of interest.

```{r ck_2}
# Attenuation lengths
Lambda = c(160,1500,4320) # g/cm2
names(Lambda) <- c("Lspal","Lstop","Lfast") # we just give names to the elements of the vector
# Production and decay parameters
prm = matrix(c( 4.01 , 0.012 , 0.039 , log(2)/1.36e6,
               27.93 , 0.84  , 0.081 , log(2)/0.717e6), nrow = 4, ncol = 2)
colnames(prm) <- c("Be10","Al26") # we just give names to the columns of the matrix
rownames(prm) <- c("Pspal","Pstop","Pfast","lambda") # we just give names to the rows of the matrix
# material density
rho = 2.7
```

We also need to define the properties of our site of interest and compute the relevant scaling parameters.

```{r ck_3}
altitude = 1000 # elevation in m
latitude = 45 # latitude in degrees
P = atm_pressure(alt=altitude,model="stone2000") # compute atmospheric pressure at site
st = scaling_st(P,latitude) # compute the scaling parameters according to Stone (2000)
```

# Concentration along a profile

We first compute the changes in concentration with depth $z$ along a profile. We are going to use the solv_conc_eul function. As always, the notice of the function, including its various arguments, can be obtained by typing ?solv_conc_eul in the R console. We consider no inheritance ($C_0=0$), so the evolution starts from a profile with homogeneous zero concentration.

```{r ck_4}
z = seq(0,500,by=10) * rho # a vector containing depths from 0 to 500 cm by 10 cm increments, converted into g/cm2
C0 = 0 # inherited concentration
age = 10000 # the time in a
ero = 10 * 100/1e6*rho # denudation rate expressed in m/Ma and converted in g/cm2/a
C = solv_conc_eul(z,ero,age,C0,prm[,"Be10"],st,Lambda) # compute concentration
plot(C,z,type="l",ylim=rev(range(z)),lwd=3,xlab="Concentration (at/g)",ylab="Depth (g/cm2)")
```

Try to modify the age and ero (always keeping it in g/cm$^2$/a) parameters above, to see their influence on the profile.

# Evolution of concentration with time

Now we are going to consider the evolution of concentration with time $t$. The computation will be carried out at the surface ($z=0$), but this could be done at any arbitrary depth.

```{r ck_5}
age = seq(0,100e3,by=100) # a vector containing time from 0 to 100 ka by 100 a steps
z = 0 * rho # depth at which we are going to perform the calculation (cm converted to g/cm2)
C0 = 0 # inherited concentration
ero = 10 * 100/1e6*rho # denudation rate expressed in m/Ma and converted in g/cm2/a
C = solv_conc_eul(z,ero,age,C0,prm[,"Be10"],st,Lambda) # compute concentration
plot(age/1000,C,type="l",lwd=3,ylab="Concentration (at/g)",xlab="Time (ka)")
```

We see here the progressive build-up of concentration through time and the establishment of a balance between gains (production) and losses (denudation and decay), leading to the concentration plateau at steady state. Try to modify the ero (always keeping it in g/cm$^2$/a) parameter above, to see its influence on the time needed to reach steady state and on the final concentration.

# Evolution of concentration with denudation rate

Now we are going to consider the evolution of concentration with denudation rate $\varepsilon$. The computation will be carried out at the surface ($z=0$), but this could be done at any arbitrary depth. We will consider that $t=+\infty$ and that we have reached the plateau concentration.

```{r}
ero = 10^seq(log10(0.1),log10(1000),length.out = 100) * 100/1e6*rho # a log-spaced vector for denudation rate expressed in m/Ma and converted in g/cm2/a
age = Inf # infinite age
z = 0 * rho # depth at which we are going to perform the calculation (cm converted to g/cm2)
C0 = 0 # inherited concentration
C = solv_conc_eul(z,ero,age,C0,prm[,"Be10"],st,Lambda) # compute concentration
plot(ero/100*1e6/rho,C,log="xy",type="l",lwd=3,ylab="Concentration (at/g)",xlab="Denudation rate (m/Ma)")
```

This figure highlights the strong inverse relationship, at steady state, between denudation rate ($\varepsilon$) and concentration ($C$), which is the foundation of many geomorphological studies trying to establish landscape evolution rates. Note the change in the relationship at very low denudation rates, which corresponds to the situation where the effects of radioactive decay become predominant.

# Dealing with inheritance

# References
|
2022-09-26 22:33:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6471530199050903, "perplexity": 778.6267767494028}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00113.warc.gz"}
|
https://www.vedantu.com/question-answer/a-heap-of-wheat-is-in-the-form-of-a-cone-whose-class-10-maths-cbse-5f60d378a863034f429f73f7
|
Question
# A heap of wheat is in the form of a cone whose diameter is 10.5 m and height is 3 m. Find its volume. The heap is to be covered by canvas to protect it from rain. Find the area of the canvas required.
Hint: First of all, we should find the radius of the cone. We know that the volume of the cone is equal to V if r is the radius of the cone and h is the height of the cone, then $V=\dfrac{1}{3}\pi {{r}^{2}}h$. By using this formula, we can find the volume of the cone. We know that the curved surface area of the cone is equal to A if r is the radius of the cone, h is the height of the cone and l is the slant height of the cone, then $A=\pi rl$ where $l=\sqrt{{{r}^{2}}+{{h}^{2}}}$. By using this formula, we can find the area of the canvas.
Complete step-by-step solution:
From the question, it is clear that a heap of wheat is in the form of a cone whose diameter is 10.5 m and the height is 3 m.
We know that the volume of the cone is equal to V if r is the radius of the cone and h is the height of the cone, then $V=\dfrac{1}{3}\pi {{r}^{2}}h$.
We were given that the diameter of the cone is equal to 10.5 m.
Let us assume the diameter of the cone is equal to d.
$\Rightarrow d=10.5....(1)$
We know that if r is the radius of the cone and d is the diameter of the cone, then $d=2r$.
Now let us assume the radius of the cone is equal to r.
$\Rightarrow 10.5=2r$
By using cross multiplication, we get
\begin{align} & \Rightarrow r=\dfrac{10.5}{2} \\ & \Rightarrow r=5.25.....(2) \\ \end{align}
We were given that the height of the cone is equal to 3m.
Let us assume the height of the cone is equal to h.
$\Rightarrow h=3......(3)$
Let us assume the volume of the cone is equal to V.
We know that the volume of the cone is equal to V if r is the radius of the cone and h is the height of the cone, then $V=\dfrac{1}{3}\pi {{r}^{2}}h$.
\begin{align} & \Rightarrow V=\dfrac{1}{3}\pi {{\left( 5.25 \right)}^{2}}\left( 3 \right) \\ & \Rightarrow V=86.59.....(4) \\ \end{align}
From equation (4), it is clear that the volume of the cone is equal to $86.59{{m}^{3}}$.
Now we should find the area of the canvas.
We know that the curved surface area of the cone is equal to A if r is the radius of the cone, h is the height of the cone and l is the slant height of the cone, then $A=\pi rl$ where $l=\sqrt{{{r}^{2}}+{{h}^{2}}}$.
So, let us assume l is the slant height of the cone.
\begin{align} & \Rightarrow l=\sqrt{{{\left( 3 \right)}^{2}}+{{\left( 5.25 \right)}^{2}}} \\ & \Rightarrow l=\sqrt{36.5625} \\ & \Rightarrow l=6.04669.....(5) \\ \end{align}
From equation (5), it is clear that the slant height of the cone is equal to 6.04669 m.
Let us assume the curved surface area of the cone is equal to A.
\begin{align} & \Rightarrow A=\pi \left( 5.25 \right)\left( 6.04669 \right) \\ & \Rightarrow A=99.73.......(6) \\ \end{align}
From equation (6), it is clear that the area of the canvas is equal to $99.73{{m}^{2}}$.
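As a quick numeric check of the values above (a sketch, not part of the original solution):

```python
import math

d, h = 10.5, 3.0                 # diameter and height in metres
r = d / 2                        # radius
V = math.pi * r**2 * h / 3       # volume of a cone
l = math.sqrt(r**2 + h**2)       # slant height
A = math.pi * r * l              # curved surface area

print(round(V, 2), round(l, 5), round(A, 2))   # 86.59 6.04669 99.73
```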
Note: Students may assume that the curved surface area of the cone is equal to A if r is the radius of the cone, h is the height of the cone, then $A=\pi rh$. Students may also assume that that the volume of the cone is equal to V if r is the radius of the cone, h is the height of the cone and l is the slant height of the cone, then $V=\dfrac{1}{3}\pi {{r}^{2}}l$ where $l=\sqrt{{{r}^{2}}+{{h}^{2}}}$. But we know that these are incorrect. So, these misconceptions should be avoided.
|
2020-09-25 22:37:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 4, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9995002746582031, "perplexity": 112.7938854006902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400228998.45/warc/CC-MAIN-20200925213517-20200926003517-00505.warc.gz"}
|
https://blogs.ams.org/mathgradblog/2017/07/27/adapting-problems-improve-groupworthiness/
|
# Adapting Problems to Improve their Groupworthiness
In my last blog post, I discussed the importance of using groupworthy tasks with your students. For a task to be groupworthy, it should satisfy three criteria: interdependence (the task is mathematically rich enough that students have to work together), multiple abilities (many different mathematical strengths are needed, e.g. verbal, written, spatial, visual), and multiple representations (e.g. graphical, numeric, linguistic and symbolic).
Many teachers do not have such groupworthy tasks in their curriculum, though, and do not have access to such problems. Many problems that we do have in our textbooks have potential; we just need to learn how to make them groupworthy.
When selecting a problem, it is often helpful to look at problems that involve real-world applications. Many real-world application problems, though, give step-by-step directions, which often obviates the need for a group of students to work together on the problem.
In the following example, drawn from the 6th edition of A Graphical Approach to College Algebra, students are asked to use a given equation to study the height of a ball thrown vertically on the moon with relationship to time (p. 187):
An astronaut on the moon throws a baseball upward. The astronaut is $6$ feet, $6$ inches tall and the initial velocity of the ball is 30 feet per second. The height of the ball is approximated by the function: $s(t) = -2.7t^2 + 30t + 6.5$ where t is the number of seconds after the ball was thrown.
1. After how many seconds is the ball $12$ feet above the moon’s surface?
2. How many seconds after it is thrown will the ball return to the surface?
3. The ball will never reach a height of $100$ feet. How can this be determined analytically?
This problem uses quadratic equations, which could be mathematically rich, but due to the fact that the problem is in section 3.3: Quadratic Equations and Inequalities, the problem ends up being more of a rote exercise given that it covers only the material addressed in the section in which the problem appears.
The inclusion of a real-world context can often be a sign of a groupworthy problem, but this problem provides the equation, which removes most of the opportunity to build a model.
To make this problem groupworthy, we should start by removing the equation, the step-by-step directions, the height of the astronaut and the speed of the ball. Instead, we can let students choose their own height for the astronaut (perhaps using one of their own heights), and figure out a reasonable figure for velocity. If your students know some calculus, they could even find the formula themselves. (If you take the acceleration on the moon, about $5.3$ ft/$\text{s}^2$, and integrate it twice with respect to time, you get $\frac{1}{2}(5.3)t^2 \approx 2.7t^2$, the quadratic term from the original problem.)
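Here is a small sympy sketch of that double integration (illustrative only; it uses the original problem's numbers, and the textbook evidently rounds $2.65$ to $2.7$):

```python
import sympy as sp

t = sp.symbols('t', nonnegative=True)
a = -5.3            # lunar gravity in ft/s^2, acting downward
v0, s0 = 30, 6.5    # initial velocity (ft/s) and release height (ft) from the original problem

v = sp.integrate(a, t) + v0   # velocity is the integral of acceleration
s = sp.integrate(v, t) + s0   # position is the integral of velocity

print(s)                          # -2.65*t**2 + 30*t + 6.5
print(sp.solve(sp.Eq(s, 12), t))  # the two times at which the ball is 12 ft up
```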
It does not take calculus to understand that position involves $\text{time}^2$, since velocity is acceleration times time and position is velocity times time. Where students are likely to have trouble without calculus is in figuring out that the coefficient of $t^2$ is half the value of the acceleration.
In preparing this blog post, I consulted several math teachers. The traditional way this is done is to give students the basic formulas for velocity and position of a free-falling object:
$v = at, \qquad x = .5a t^2$
I think, though, that those formulas are best taught after students understand where the $t^2$ comes from, so that they are not just a hand-waving kind of thing to students.
Another teacher suggested the following method: over one unit of time, students could find the average velocity by taking $(v + (v-a))/2$, the mean of the starting and ending velocities. Since the object has no initial velocity, we can use $v = 0$. Simplifying, we get $-a/2$, and thus we see where the $0.5a$ comes from.
With all of these ideas in mind, here is the final version of this task that I would give to the students:
An astronaut on the moon throws a baseball upward. Choose a reasonable height for the astronaut and the velocity for the ball and find an equation to describe the position (height) of the ball at time $t$. Then demonstrate various facts of your choice about the path of the ball, such as the maximum height, and when it will reach the ground. Create a poster, using words, graphs, tables, and symbols, to explain how you found your equation and the facts about the path that you chose.
Is this task groupworthy? This task is mathematically rich in that students have to understand not only quadratic equations, but position, velocity, and acceleration. The problem encourages interdependence both by having a group product and being sufficiently challenging that a single student cannot solve it on their own. It also specifically encourages students to utilize multiple representations.
In the next blog post, I will discuss strategies for managing groupwork such as group roles, huddles, and techniques for when a group is getting stuck on a problem.
|
2018-03-25 05:20:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7577162384986877, "perplexity": 530.3331736974943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257651820.82/warc/CC-MAIN-20180325044627-20180325064627-00564.warc.gz"}
|
https://www.semanticscholar.org/paper/Anderson-and-Gorenstein-duality-Greenlees-Stojanoska/16bc79d844e07abdf8dd5d0bb2dac9afb75cf8c8
|
# Anderson and Gorenstein duality
@article{Greenlees2016AndersonAG,
title={Anderson and Gorenstein duality},
author={John Greenlees and Vesna Stojanoska},
journal={arXiv: Algebraic Topology},
year={2016}
}
• Published 27 July 2016
• Mathematics
• arXiv: Algebraic Topology
The paper relates the Gorenstein duality statements studied by the first author to the Anderson duality statements studied by the second author, and explains how to use local cohomology and invariant theory to understand the numerology of shifts in simple cases.
ANDERSON DUALITY FOR DERIVED STACKS (NOTES)
• Mathematics
• 2018
In these notes, we will prove that many naturally occuring derived stacks in chromatic homotopy theory, which arise as even periodic refinements of Deligne-Mumford stacks, are Gorenstein (in the
Equivariant Gorenstein Duality
This thesis concerns the study of two flavours of duality that appear in stable homotopy theory and their equivariant reformulations. Concretely, we look at the Gorenstein duality framework
Topological modular forms with level structure: Decompositions and duality
Topological modular forms with level structure were introduced in full generality by Hill and Lawson. We will show that these decompose additively in many cases into a few simple pieces and give an
The topological modular forms of $\mathbb{R}P^2$ and $\mathbb{R}P^2 \wedge \mathbb{C}P^2$
• Mathematics
• 2021
In this paper, we study the elliptic spectral sequence computing $tmf_*(\mathbb{R}P^2)$ and $tmf_*(\mathbb{R}P^2 \wedge \mathbb{C}P^2)$. Specifically, we compute all differentials and resolve exotic extensions by $2$, $\eta$, and $\nu$. For $tmf_*(\mathbb{R}P$
|
2022-07-02 12:04:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7900286912918091, "perplexity": 1939.2451259316892}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104054564.59/warc/CC-MAIN-20220702101738-20220702131738-00001.warc.gz"}
|
https://manual.gromacs.org/documentation/2020-beta1/user-guide/environment-variables.html
|
# Environment Variables
GROMACS programs may be influenced by the use of environment variables. First of all, the variables set in the GMXRC file are essential for running and compiling GROMACS. Some other useful environment variables are listed in the following sections. Most environment variables function by being set in your shell to any non-NULL value. Specific requirements are described below if other values need to be set. You should consult the documentation for your shell for instructions on how to set environment variables in the current shell, or in configuration files for future shells. Note that requirements for exporting environment variables to jobs run under batch control systems vary and you should consult your local documentation for details.
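For example, in a POSIX shell such as bash (a sketch only; adapt to your own shell and batch system):

```bash
# Set for the current shell session and everything launched from it.
export GMX_MAXBACKUP=-1

# Or set for a single command only.
GMX_MAXBACKUP=0 gmx mdrun -deffnm prod
```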
## Output Control
GMX_CONSTRAINTVIR
Print constraint virial and force virial energy terms.
GMX_DUMP_NL
Neighbour list dump level; default 0.
GMX_MAXBACKUP
GROMACS automatically backs up old copies of files when trying to write a new file of the same name, and this variable controls the maximum number of backups that will be made (default 99). If set to 0, GROMACS fails to run if any output file already exists; if set to -1, it overwrites any output file without making a backup.
GMX_NO_QUOTES
if this is explicitly set, no cool quotes will be printed at the end of a program.
GMX_SUPPRESS_DUMP
prevent dumping of step files during (for example) blowing up during failure of constraint algorithms.
GMX_TPI_DUMP
dump all configurations to a pdb file that have an interaction energy less than the value set in this environment variable.
GMX_VIEW_XPM
GMX_VIEW_XVG, GMX_VIEW_EPS and GMX_VIEW_PDB, commands used to automatically view xvg, xpm, eps and pdb file types, respectively; they default to xv, xmgrace, ghostview and rasmol. Set to empty to disable automatic viewing of a particular file type. The command will be forked off and run in the background at the same priority as the GROMACS tool (which might not be what you want). Be careful not to use a command which blocks the terminal (e.g. vi), since multiple instances might be run.
GMX_LOG_BUFFER
the size of the buffer for file I/O. When set to 0, all file I/O will be unbuffered and therefore very slow. This can be handy for debugging purposes, because it ensures that all files are always totally up-to-date.
GMX_LOGO_COLOR
set display color for logo in gmx view.
GMX_PRINT_LONGFORMAT
use long float format when printing decimal values.
GMX_COMPELDUMP
Applies to computational electrophysiology setups only (see reference manual). The initial structure gets dumped to a pdb file, which allows one to check whether multimeric channels have the correct PBC representation.
GMX_TRAJECTORY_IO_VERBOSITY
Defaults to 1, which prints frame count e.g. when reading trajectory files. Set to 0 for quiet operation.
GMX_ENABLE_GPU_TIMING
Enables GPU timings in the log file for CUDA. Note that CUDA timings are incorrect with multiple streams, as happens with domain decomposition or with both non-bondeds and PME on the GPU (this is also the main reason why they are not turned on by default).
GMX_DISABLE_GPU_TIMING
Disables GPU timings in the log file for OpenCL.
## Debugging
GMX_PRINT_DEBUG_LINES
when set, print debugging info on line numbers.
GMX_DD_NST_DUMP
number of steps that elapse between dumping the current DD to a PDB file (default 0). This only takes effect during domain decomposition, so it should typically be 0 (never), 1 (every DD phase) or a multiple of nstlist.
GMX_DD_NST_DUMP_GRID
number of steps that elapse between dumping the current DD grid to a PDB file (default 0). This only takes effect during domain decomposition, so it should typically be 0 (never), 1 (every DD phase) or a multiple of nstlist.
GMX_DD_DEBUG
general debugging trigger for every domain decomposition (default 0, meaning off). Currently only checks global-local atom index mapping for consistency.
GMX_DD_NPULSE
over-ride the number of DD pulses used (default 0, meaning no over-ride). Normally 1 or 2.
GMX_DISABLE_ALTERNATING_GPU_WAIT
disables the specialized polling wait path used to wait for the PME and nonbonded GPU tasks completion to overlap to do the reduction of the resulting forces that arrive first. Setting this variable switches to the generic path with fixed waiting order.
There are a number of extra environment variables like these that are used in debugging - check the code!
## Performance and Run Control
GMX_DO_GALACTIC_DYNAMICS
planetary simulations are made possible (just for fun) by setting this environment variable, which allows setting epsilon-r to -1 in the mdp file. Normally, epsilon-r must be greater than zero to prevent a fatal error. See webpage for example input files for a planetary simulation.
GMX_BONDED_NTHREAD_UNIFORM
Value of the number of threads per rank from which to switch from uniform to localized bonded interaction distribution; optimal value dependent on system and hardware, default value is 4.
GMX_CUDA_NB_EWALD_TWINCUT
force the use of twin-range cutoff kernel even if rvdw equals rcoulomb after PP-PME load balancing. The switch to twin-range kernels is automated, so this variable should be used only for benchmarking.
GMX_CUDA_NB_ANA_EWALD
force the use of analytical Ewald kernels. Should be used only for benchmarking.
GMX_CUDA_NB_TAB_EWALD
force the use of tabulated Ewald kernels. Should be used only for benchmarking.
GMX_DISABLE_CUDA_TIMING
Deprecated. Use GMX_DISABLE_GPU_TIMING instead.
GMX_CYCLE_ALL
times all code during runs. Incompatible with threads.
GMX_CYCLE_BARRIER
calls MPI_Barrier before each cycle start/stop call.
GMX_DD_ORDER_ZYX
build domain decomposition cells in the order (z, y, x) rather than the default (x, y, z).
GMX_DD_USE_SENDRECV2
during constraint and vsite communication, use a pair of MPI_Sendrecv calls instead of two simultaneous non-blocking calls (default 0, meaning off). Might be faster on some MPI implementations.
GMX_DLB_BASED_ON_FLOPS
do domain-decomposition dynamic load balancing based on flop count rather than measured time elapsed (default 0, meaning off). This makes the load balancing reproducible, which can be useful for debugging purposes. A value of 1 uses the flops; a value > 1 adds (value - 1)*5% of noise to the flops to increase the imbalance and the scaling.
GMX_DLB_MAX_BOX_SCALING
maximum percentage box scaling permitted per domain-decomposition load-balancing step (default 10)
GMX_DD_RECORD_LOAD
record DD load statistics for reporting at end of the run (default 1, meaning on)
GMX_DETAILED_PERF_STATS
when set, print slightly more detailed performance information to the log file. The resulting output is the way performance summary is reported in versions 4.5.x and thus may be useful for anyone using scripts to parse log files or standard output.
GMX_DISABLE_SIMD_KERNELS
disables architecture-specific SIMD-optimized (SSE2, SSE4.1, AVX, etc.) non-bonded kernels thus forcing the use of plain C kernels.
GMX_DISABLE_GPU_TIMING
timing of asynchronously executed GPU operations can have a non-negligible overhead with short step times. Disabling timing can improve performance in these cases.
GMX_DISABLE_GPU_DETECTION
when set, disables GPU detection even if gmx mdrun was compiled with GPU support.
GMX_GPU_APPLICATION_CLOCKS
setting this variable to a value of “0”, “ON”, or “DISABLE” (case insensitive) allows disabling the CUDA GPU application clock support.
GMX_DISRE_ENSEMBLE_SIZE
the number of systems for distance restraint ensemble averaging. Takes an integer value.
GMX_EMULATE_GPU
emulate GPU runs by using algorithmically equivalent CPU reference code instead of GPU-accelerated functions. As the CPU code is slow, it is intended to be used only for debugging purposes.
GMX_ENX_NO_FATAL
disable exiting upon encountering a corrupted frame in an edr file, allowing the use of all frames up until the corruption.
GMX_FORCE_UPDATE
update forces when invoking mdrun -rerun.
GMX_GPU_ID
set in the same way as mdrun -gpu_id, GMX_GPU_ID allows the user to specify different GPU IDs for different ranks, which can be useful for selecting different devices on different compute nodes in a cluster. Cannot be used in conjunction with mdrun -gpu_id.
GMX_GPUTASKS
set in the same way as mdrun -gputasks, GMX_GPUTASKS allows the mapping of GPU tasks to GPU device IDs to be different on different ranks, if e.g. the MPI runtime permits this variable to be different for different ranks. Cannot be used in conjunction with mdrun -gputasks. Has all the same requirements as mdrun -gputasks.
GMX_IGNORE_FSYNC_FAILURE_ENV
allow gmx mdrun to continue even if a file is missing.
GMX_LJCOMB_TOL
when set to a floating-point value, overrides the default tolerance of 1e-5 for force-field floating-point parameters.
GMX_MAXCONSTRWARN
if set to -1, gmx mdrun will not exit if it produces too many LINCS warnings.
GMX_NB_MIN_CI
neighbor list balancing parameter used when running on GPU. Sets the target minimum number of pair-lists in order to improve multi-processor load-balance for better performance with small simulation systems. Must be set to a non-negative integer; the 0 value disables list splitting. The default value is optimized for supported GPUs, therefore changing it is not necessary for normal usage, but it can be useful on future architectures.
GMX_NBLISTCG
use neighbor list and kernels based on charge groups.
GMX_NBNXN_CYCLE
when set, print detailed neighbor search cycle counting.
GMX_NBNXN_EWALD_ANALYTICAL
force the use of analytical Ewald non-bonded kernels, mutually exclusive of GMX_NBNXN_EWALD_TABLE.
GMX_NBNXN_EWALD_TABLE
force the use of tabulated Ewald non-bonded kernels, mutually exclusive of GMX_NBNXN_EWALD_ANALYTICAL.
GMX_NBNXN_SIMD_2XNN
force the use of 2x(N+N) SIMD CPU non-bonded kernels, mutually exclusive of GMX_NBNXN_SIMD_4XN.
GMX_NBNXN_SIMD_4XN
force the use of 4xN SIMD CPU non-bonded kernels, mutually exclusive of GMX_NBNXN_SIMD_2XNN.
GMX_NOOPTIMIZEDKERNELS
deprecated, use GMX_DISABLE_SIMD_KERNELS instead.
GMX_NO_CART_REORDER
used in initializing domain decomposition communicators. Rank reordering is default, but can be switched off with this environment variable.
GMX_NO_LJ_COMB_RULE
force the use of LJ parameter lookup instead of using combination rules in the non-bonded kernels.
GMX_NO_INT, GMX_NO_TERM, GMX_NO_USR1
disable signal handlers for SIGINT, SIGTERM, and SIGUSR1, respectively.
GMX_NO_NODECOMM
do not use separate inter- and intra-node communicators.
GMX_NO_NONBONDED
skip non-bonded calculations; can be used to estimate the possible performance gain from adding a GPU accelerator to the current hardware setup – assuming that this is fast enough to complete the non-bonded calculations while the CPU does bonded force and PME computation. Freezing the particles will be required to stop the system blowing up.
GMX_PULL_PARTICIPATE_ALL
disable the default heuristic for when to use a separate pull MPI communicator (at >=32 ranks).
GMX_NOPREDICT
shell positions are not predicted.
GMX_NO_UPDATEGROUPS
turns off update groups. May allow for a decomposition of more domains for small systems at the cost of communication during update.
GMX_NSCELL_NCG
the ideal number of charge groups per neighbor searching grid cell is hard-coded to a value of 10. Setting this environment variable to any other integer value overrides this hard-coded value.
GMX_PME_NUM_THREADS
set the number of OpenMP or PME threads; overrides the default set by gmx mdrun; can be used instead of the -npme command line option, also useful to set heterogeneous per-process/-node thread count.
GMX_PME_P3M
use P3M-optimized influence function instead of smooth PME B-spline interpolation.
GMX_PME_THREAD_DIVISION
PME thread division in the format “x y z” for all three dimensions. The sum of the threads in each dimension must equal the total number of PME threads (set in GMX_PME_NUM_THREADS).
GMX_PMEONEDD
if the number of domain decomposition cells is set to 1 for both x and y, decompose PME in one dimension.
GMX_REQUIRE_SHELL_INIT
require that shell positions are initiated.
GMX_REQUIRE_TABLES
require the use of tabulated Coulombic and van der Waals interactions.
GMX_SCSIGMA_MIN
the minimum value for soft-core sigma. Note that this value is set using the sc-sigma keyword in the mdp file, but this environment variable can be used to reproduce pre-4.5 behavior with respect to this parameter.
GMX_TPIC_MASSES
should contain multiple masses used for test particle insertion into a cavity. The center of mass of the last atoms is used for insertion into the cavity.
GMX_USE_GRAPH
use graph for bonded interactions.
GMX_VERLET_BUFFER_RES
resolution of buffer size in Verlet cutoff scheme. The default value is 0.001, but can be overridden with this environment variable.
HWLOC_XMLFILE
Not strictly a GROMACS environment variable, but on large machines the hwloc detection can take a few seconds if you have lots of MPI processes. If you run the hwloc command lstopo out.xml and set this environment variable to point to the location of this file, the hwloc library will use the cached information instead, which can be faster.
MPIRUN
the mpirun command used by gmx tune_pme.
MDRUN
the gmx mdrun command used by gmx tune_pme.
GMX_DISABLE_DYNAMICPRUNING
disables dynamic pair-list pruning. Note that gmx mdrun will still tune nstlist to the optimal value picked assuming dynamic pruning. Thus for good performance the -nstlist option should be used.
GMX_NSTLIST_DYNAMICPRUNING
overrides the dynamic pair-list pruning interval chosen heuristically by mdrun. Values should be between the pruning frequency value (1 for CPU and 2 for GPU) and nstlist - 1.
GMX_USE_TREEREDUCE
use tree reduction for nbnxn force reduction. Potentially faster for large number of OpenMP threads (if memory locality is important).
## OpenCL management
Currently, several environment variables exist that help customize some aspects of the OpenCL version of GROMACS. They are mostly related to the runtime compilation of OpenCL kernels, but they are also used in device selection.
GMX_OCL_NOGENCACHE
If set, disable caching for OpenCL kernel builds. Caching is normally useful so that future runs can re-use the compiled kernels from previous runs. Currently, caching is always disabled, until we solve concurrency issues.
GMX_OCL_GENCACHE
Enable OpenCL binary caching. Only intended to be used for development and (expert) testing as neither concurrency nor cache invalidation is implemented safely!
GMX_OCL_NOFASTGEN
If set, generate and compile all algorithm flavors, otherwise only the flavor required for the simulation is generated and compiled.
GMX_OCL_DISABLE_FASTMATH
Prevents the use of -cl-fast-relaxed-math compiler option.
GMX_OCL_DUMP_LOG
If defined, the OpenCL build log is always written to the mdrun log file. Otherwise, the build log is written to the log file only when an error occurs.
GMX_OCL_VERBOSE
If defined, it enables verbose mode for OpenCL kernel build. Currently available only for NVIDIA GPUs. See GMX_OCL_DUMP_LOG for details about how to obtain the OpenCL build log.
GMX_OCL_DUMP_INTERM_FILES
If defined, intermediate language code corresponding to the OpenCL build process is saved to file. Caching has to be turned off in order for this option to take effect (see GMX_OCL_NOGENCACHE).
• NVIDIA GPUs: PTX code is saved in the current directory with the name device_name.ptx
• AMD GPUs: .IL/.ISA files will be created for each OpenCL kernel built. For details about where these files are created check AMD documentation for -save-temps compiler option.
GMX_OCL_DEBUG
Use in conjunction with OCL_FORCE_CPU or with an AMD device. It adds the debug flag to the compiler options (-g).
GMX_OCL_NOOPT
Disable optimisations. Adds the option cl-opt-disable to the compiler options.
GMX_OCL_FORCE_CPU
Force the selection of a CPU device instead of a GPU. This exists only for debugging purposes. Do not expect GROMACS to function properly with this option on, it is solely for the simplicity of stepping in a kernel and see what is happening.
GMX_OCL_DISABLE_I_PREFETCH
Disables i-atom data (type or LJ parameter) prefetch allowing testing.
GMX_OCL_ENABLE_I_PREFETCH
Enables i-atom data (type or LJ parameter) prefetch allowing testing on platforms where this behavior is not default.
GMX_OCL_NB_ANA_EWALD
Forces the use of analytical Ewald kernels. Equivalent of CUDA environment variable GMX_CUDA_NB_ANA_EWALD
GMX_OCL_NB_TAB_EWALD
Forces the use of tabulated Ewald kernel. Equivalent of CUDA environment variable GMX_OCL_NB_TAB_EWALD
GMX_OCL_NB_EWALD_TWINCUT
Forces the use of twin-range cutoff kernel. Equivalent of CUDA environment variable GMX_CUDA_NB_EWALD_TWINCUT
GMX_OCL_FILE_PATH
Use this parameter to force GROMACS to load the OpenCL kernels from a custom location. Use it only if you want to override GROMACS default behavior, or if you want to test your own kernels.
GMX_OCL_DISABLE_COMPATIBILITY_CHECK
Disables the hardware compatibility check. Useful for developers and allows testing the OpenCL kernels on non-supported platforms (like Intel iGPUs) without source code modification.
GMX_OCL_SHOW_DIAGNOSTICS
Use Intel OpenCL extension to show additional runtime performance diagnostics.
## Analysis and Core Functions
GMX_QM_ACCURACY
accuracy in Gaussian L510 (MC-SCF) component program.
GMX_QM_ORCA_BASENAME
prefix of tpr files, used in Orca calculations for input and output file names.
GMX_QM_CPMCSCF
when set to a nonzero value, Gaussian QM calculations will iteratively solve the CP-MCSCF equations.
GMX_QM_MODIFIED_LINKS_DIR
location of modified links in Gaussian.
DSSP
used by gmx do_dssp to point to the dssp executable (not just its path).
GMX_QM_GAUSS_DIR
directory where Gaussian is installed.
GMX_QM_GAUSS_EXE
name of the Gaussian executable.
GMX_DIPOLE_SPACING
spacing used by gmx dipoles.
GMX_MAXRESRENUM
sets the maximum number of residues to be renumbered by gmx grompp. A value of -1 indicates all residues should be renumbered.
GMX_NO_FFRTP_TER_RENAME
Some force fields (like AMBER) use specific names for N- and C- terminal residues (NXXX and CXXX) as rtp entries that are normally renamed. Setting this environment variable disables this renaming.
GMX_PATH_GZIP
gunzip executable, used by gmx wham.
GMX_FONT
name of X11 font used by gmx view.
GMXTIMEUNIT
the time unit used in output files, can be anything in fs, ps, ns, us, ms, s, m or h.
GMX_QM_GAUSSIAN_MEMORY
memory used for Gaussian QM calculation.
MULTIPROT
name of the multiprot executable, used by the contributed program do_multiprot.
NCPUS
number of CPUs to be used for Gaussian QM calculation
GMX_ORCA_PATH
directory where Orca is installed.
GMX_QM_SA_STEP
simulated annealing step size for Gaussian QM calculation.
GMX_QM_GROUND_STATE
defines state for Gaussian surface hopping calculation.
GMX_TOTAL
name of the total executable used by the contributed do_shift program.
GMX_ENER_VERBOSE
make gmx energy and gmx eneconv loud and noisy.
VMD_PLUGIN_PATH
where to find VMD plug-ins. Needed to be able to read file formats recognized only by a VMD plug-in.
VMDDIR
base path of VMD installation.
GMX_USE_XMGR
sets viewer to xmgr (deprecated) instead of xmgrace.
|
2022-12-01 19:51:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3806077837944031, "perplexity": 5075.351823858451}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710869.86/warc/CC-MAIN-20221201185801-20221201215801-00560.warc.gz"}
|
https://www.trustudies.com/question/2322/q-5-following-table-shows-the-points-/
|
# Q.5 Following table shows the points of each player scored in four games: Now answer the following questions: (i) Find the mean to determine A’s average number of points scored per game. (ii) To find the mean number of points per game for C, would you divide the total points by 3 or by 4? Why? (iii) B played in all the four games. How would you find the mean? (iv) Who is the best performer?
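(The original table is not reproduced in the source; the values below are reconstructed from the solution that follows.)

| Player | Game 1 | Game 2 | Game 3 | Game 4 |
| --- | --- | --- | --- | --- |
| A | 14 | 16 | 10 | 10 |
| B | 0 | 8 | 6 | 4 |
| C | 8 | 11 | Did not play | 13 |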
(i) Number of points scored by A in all games are Game 1 = 14, Game 2 = 16, Game 3 = 10, Game 4 = 10
Therefore Average score $$=\frac{14+16+10+10}{4}=\frac{50}{4}=12.5$$
(ii) Since, C did not play Game 3, he played only 3 games. So, the total will be divided by 3.
(iii) Number of points scored by B in all the games are Game 1 = 0, Game 2 = 8, Game 3 = 6, Game 4 = 4
Therefore Average score $$=\frac{0+8+6+4}{4}=\frac{18}{4}= 4.5$$
(iv) Mean score of C $$=\frac{8+11+13}{3}=\frac{32}{3}=10.67$$
Mean score of C = 10.67
While mean score of A = 12.5
Clearly, A is the best performer.
|
2023-03-29 00:59:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18539589643478394, "perplexity": 909.4499439926456}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948900.50/warc/CC-MAIN-20230328232645-20230329022645-00305.warc.gz"}
|
https://numbersandshapes.net/posts/steffensen/
|
# A note on Steffensen's method for solving equations
Share on:
Steffensen's method is based on Newton's iteration for solving a non-linear equation $$f(x)=0$$:
$x\leftarrow x-\frac{f(x)}{f'(x)}$
Newton's method can fail to work in a number of ways, but when it does work it displays quadratic convergence, the number of correct significant figures roughly doubling at each step. However, it also has the disadvantage of needing to compute the derivative as well as the function, which may be difficult for some functions.
Steffensen's idea was to use the quotient approximation of the derivative:
$f'(x)\approx\frac{f(x+h)-f(x)}{h}$
when $$h$$ is small, and since we trying to solve $$f(x)=0$$, we may assume that in the neighbourhood of the solution $$f(x)$$ is itself small, so can be used for $$h$$. This means we can write
$f'(x)\approx\frac{f(x+f(x))-f(x)}{f(x)}$
which leads to the following version of Newton's method:
$x\leftarrow x-\frac{f(x)^2}{f(x+f(x))-f(x)}.$
This is a neat idea, and in fact when it works it converges almost as fast as Newton's method. However, it is very sensitive to the starting value. For example, suppose we want to find the value of \(W(10)\), where \(W(x)\) is Lambert's \(W\) function, the inverse of \(y=xe^x\). Finding \(W(10)\) then means solving the equation
$xe^x-10=0.$
Newton's method uses the iteration
$x\leftarrow x-\frac{xe^x-10}{e^x(x+1)}$
and with a positive starting value not too big will converge; the first 50 places of the solution are:
$1.74552800274069938307430126487538991153528812908093$
Staring with $$x=2$$ will produce over 1000 correct decimal places in 12 steps.
If we apply Steffensen's method starting with $$x=2$$ we'll see values that wobble about 1.9 for ages 1.9 before converging to the wrong value. Newton's method will work for almost any value (although the larger the initial value, the long the iterations take to "settle down"); Steffensen's method will only work when the initial value is close to 1.7.
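Here is a minimal Python sketch of the iteration, if you want to experiment (my own illustration; the function is the Lambert-\(W\) example above):

```python
import math

def f(x):
    return x * math.exp(x) - 10

def steffensen(f, x, steps=12):
    # one step: x <- x - f(x)^2 / (f(x + f(x)) - f(x))
    for _ in range(steps):
        fx = f(x)
        denom = f(x + fx) - fx
        if denom == 0:
            break
        x -= fx * fx / denom
    return x

print(steffensen(f, 1.7))  # approx 1.7455280027406994: fast, Newton-like convergence
print(steffensen(f, 2.0))  # still above 1.9 after a dozen steps: the sensitivity described above
```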
## A slight improvement
Using the "central" approximation of the derivative:
$f'(x)\approx\frac{f(x+h)-f(x-h)}{2h}$
makes a considerable difference; this leads to the iteration
$x\leftarrow x-\frac{2f(x)^2}{f(x+f(x))-f(x-f(x))}$
This does however require the computation of three function values, rather than just the original two. A slightly faster version of the above is
$x\leftarrow x-\frac{f(x)^2}{f(x+\frac{1}{2}f(x))-f(x-\frac{1}{2}f(x))}.$
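Reusing f from the sketch above, this variant is a one-line change:

```python
def steffensen_central(x, steps=60):
    for _ in range(steps):
        fx = f(x)
        denom = f(x + 0.5 * fx) - f(x - 0.5 * fx)
        if denom == 0:
            break
        x -= fx * fx / denom
    return x

print(steffensen_central(2.0))   # noticeably more robust than the basic form
```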
|
2022-08-07 22:19:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9223964810371399, "perplexity": 390.95639289105674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570730.59/warc/CC-MAIN-20220807211157-20220808001157-00307.warc.gz"}
|
https://www.aimsciences.org/article/doi/10.3934/cpaa.2011.10.639
|
# American Institute of Mathematical Sciences
March 2011, 10(2): 639-651. doi: 10.3934/cpaa.2011.10.639
## Uniform attractor for non-autonomous nonlinear Schrödinger equation
1 Universite de Picardie Jules Verne, LAMFA UMR 7352, 33 rue Saint-Leu, 80039 Amiens cedex 2 Département de Mathématiques, Faculté des Sciences de Monastir, Av. de l'environement, 5000 Monastir, Tunisia
Received March 2010 Revised September 2010 Published December 2010
We consider a weakly coupled system of nonlinear Schrödinger equations which models a Bose-Einstein condensate with an impurity. The first equation is dissipative, while the second one is conservative. We consider this dynamical system within the framework of non-autonomous dynamical systems, the solution to the conservative equation being the symbol of the semi-process. We prove that the first equation possesses a uniform attractor, which attracts the solutions in the weak topology of the underlying energy space. We then study the limit of this attractor as the coupling parameter converges to $0$.
Citation: Olivier Goubet, Wided Kechiche. Uniform attractor for non-autonomous nonlinear Schrödinger equation. Communications on Pure & Applied Analysis, 2011, 10 (2) : 639-651. doi: 10.3934/cpaa.2011.10.639
|
2020-12-04 14:56:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7200568914413452, "perplexity": 8647.986117636709}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141737946.86/warc/CC-MAIN-20201204131750-20201204161750-00651.warc.gz"}
|
https://www.physicsforums.com/threads/maximum-current-amplitude-in-rlc-circuit.307608/
|
# Maximum current amplitude in RLC circuit
1. Apr 15, 2009
### John 123
Using the following ODE:
$$L\frac{d^2i}{dt^2}+R\frac{di}{dt}+\frac{1}{C}i=\frac{d}{dt}E(t)$$
The following problem has several parts all of which I have solved except for the one below.
L=1/20
R=5
$$C=4\times 10^{-4}$$
$$\frac{dE}{dt}=200\cos100t$$
Where L is an inductance in henries, R is a resistance in ohms, C is a capacitance in farads and E is the emf in volts.
The part I cannot agree with the book is as follows.
Firstly:
What should the frequency of the input E(t) be in order that it be in resonance with the system? [This I have solved correctly as :
$$100\sqrt5$$
But this part leads to the next, with which I can't agree.
What is the maximum value of the current amplitude for this resonant frequency?
John
Last edited: Apr 15, 2009
2. Apr 15, 2009
### LennoxLewis
Have you solved the differential equation to obtain i(t)? No doubt that frequency enters somewhere in the equation; then substitute 100sqrt(5), differentiate and set equal to 0 to find the maximum or minimum current, then differentiate a second time and check that it's negative, so that you're sure you're at a maximum, not a minimum.
3. Apr 15, 2009
### rl.bhat
If you integrate dE = 200cos100t·dt, you will get Emax.
At resonance inductive reactance XL cancels capacitive reactance XC leaving only resistance in the circuit. Now find the maximum current.
4. Apr 16, 2009
### John 123
Yes the steady state current is:
$$i_s=\frac{2}{85}(\sin100t+4\cos100t)$$
When you substitute the frequency
$$100\sqrt5$$
Differentiate and set to zero you get
$$t=\frac{\tan^{-1}0.25}{100\sqrt5}$$
But this leads to a max current of 0.097 Amp [the book answer is 2/5 Amp]?
Regards
John
5. Apr 16, 2009
### John 123
You have used the frequency 100 rad/sec whereas the frequency for resonance is
$$100\sqrt5$$
?
Regards
John
6. Apr 16, 2009
### rl.bhat
If E(t) = 2sin100t, what is dE(t)/dt ?
In the given problem Emax = 2V.
The maximum current at resonance is Emax/R.
7. Apr 16, 2009
### John 123
Hi again
Am I misunderstanding these two parts of the question?
Part 1.
What should the frequency of the input E(t) be in order that it be in resonance with the system?
$$100\sqrt5\ \text{rad sec}^{-1}$$
Part 2.
What is the maximum value of the amplitude for THIS RESONANT FREQUENCY?[My caps bold]
Well if
$$\frac{dE(t)}{dt}=200\cos{(100\sqrt5)t}$$
then
$$E(t)=\frac{2}{\sqrt5}\sin{(100\sqrt5)t}$$
so
$$i_{\max}=\frac{2}{5\sqrt5}\ \text{amps}?$$
Regards
John
8. Apr 16, 2009
### John 123
My apologies, there is an error in the last posting.
It should be:
$$\frac{dE(t)}{dt}=200\cos(100\sqrt5)t$$
and
$$E(t)=\frac{2}{\sqrt5}\sin(100\sqrt5)t$$
i(max) = 2/(5√5)
John
9. Apr 16, 2009
### John 123
Here is another question with the same problem.
a. Find the steady state current if L=1/20, R=20, C=1/10000, E=100cos200t.
b.What is the frequency of the input E(t) in order that it be in resonance with the system.
c. What is the maximum value of the amplitude for this resonant frequency?
I have answered parts a. and b. correctly as:
a. $$i=\cos200t-2\sin200t$$
b.
$$\omega=200\sqrt5$$
But once again I cannot see the book answer for part c, which is 5 amp?
Regards
John
10. Apr 16, 2009
### rl.bhat
In both problems E is given. You can bring the circuit to resonance keeping Eo constant by changing either L or C, or by changing the frequency of E while keeping Eo constant; in these problems the second method is adopted. At resonance the current is maximum, and that is the frequency of E they ask for. At resonance the impedance is purely resistive, hence Imax = Eo/R = 100/20.
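A quick numerical illustration of this point, using the component values from the second problem above (the frequency grid below is an arbitrary choice):

```python
import numpy as np

L, R, C, E0 = 1/20, 20, 1/10000, 100
w = np.linspace(1, 2000, 200_000)
Z = np.sqrt(R**2 + (w * L - 1 / (w * C))**2)   # impedance magnitude
amp = E0 / Z                                    # steady-state current amplitude
k = amp.argmax()
print(w[k], 200 * np.sqrt(5))   # peak at ~447.2 rad/s = 200*sqrt(5)
print(amp[k], E0 / R)           # peak amplitude ~5 A = E0/R
```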
11. Apr 17, 2009
### John 123
Many thanks.
Yes, the amplitude E remains the same. I think the wording of the question is confusing in asking for the maximum amplitude FOR THIS FREQUENCY. However, as you say, whatever the frequency, the amplitude hasn't been changed.
Regards
John
|
2017-08-16 13:40:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7706879377365112, "perplexity": 1686.307225926936}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886101966.48/warc/CC-MAIN-20170816125013-20170816145013-00613.warc.gz"}
|
http://mathoverflow.net/questions/107586/asymptotic-behavior-of-convex-functions
|
Asymptotic behavior of convex functions
Let $f:\mathbb{R}^n\rightarrow\mathbb{R}$ be a $C^2$ convex function which is strictly positive. If $x_n$ is a sequence of points such that $f(x_n)\rightarrow 0$, show that (or give a counterexample) the gradient $\nabla f(x_n)$ also tends to zero.
By an accident I posted my answer twice. And don't know how to delete the second one:-) – Alexandre Eremenko Sep 19 '12 at 23:25
A counterexample is $$f=\sqrt{y^2+e^{-x}}.$$ You can verify by computing the second derivatives that this is convex. As a sequence $x_n$ you can take $(n,1/n)$. Then $f(x_n)\to 0$ but the derivative with respect to $y$ tends to 1. Thus the gradient does not tend to 0.
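One can check this numerically with a short sympy sketch (the grid points below are arbitrary):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.sqrt(y**2 + sp.exp(-x))

# Spot-check convexity: leading principal minors of the Hessian on a grid.
H = sp.hessian(f, (x, y))
for xv in (-2, 0, 3, 10):
    for yv in (-1, 0.01, 2):
        Hn = H.subs({x: xv, y: yv})
        assert float(Hn[0, 0]) > 0 and float(Hn.det()) > 0

# Along x_n = (n, 1/n): f -> 0 while df/dy -> 1.
fy = sp.diff(f, y)
for n in (1, 5, 10, 20):
    pt = {x: n, y: sp.Rational(1, n)}
    print(float(f.subs(pt)), float(fy.subs(pt)))
```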
Thank you very much. Now, if we suppose the gradient map of $f$ limited (which is the case I have in mind), do we get a positive answer? This problem arose in the following setting: Suppose $\Sigma$ is a complete hypersurface in $\mathbb{R}^{n+1}$ such that the position vector is everywhere transverse to it and $\Sigma$ is locally strongly convex (whith the position vector pointing to the convex side). Then I was trying to show that the property of $\Sigma$ being asymptotic (or not) to the boundary of the convex cone $\mathcal{C}$ which it generates is equivalent to the property of its – Henrique Sep 20 '12 at 20:43
(continuing....) conormal image $\Sigma^*\subset(\mathbb{R}^{n+1})^*$ be closed in $(\mathbb{R}^{n+1})^*$. – Henrique Sep 20 '12 at 20:45
|
2015-05-28 16:23:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9656288623809814, "perplexity": 153.8626644695139}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929422.8/warc/CC-MAIN-20150521113209-00071-ip-10-180-206-219.ec2.internal.warc.gz"}
|
https://rpg.stackexchange.com/questions/168062/optimal-algorithm-for-building-raises-in-7th-sea
|
# Optimal algorithm for building raises in 7th Sea?
I'm writing a Discord bot for an online 7th Sea campaign. Players will type something like
roll finesse 4 + weaponry 3 + 1
and the bot will spit out something like
3 raises (9+2, 8+3, 6+4, leftover 5)
The hard part is grouping the dice into raises. I would like to do this automatically. Is there a known optimal algorithm for this? I played around with a straightforward greedy approach, but I'm not convinced that it is optimal in all cases. The algorithm I'm using now is:
1. Start a new group.
2. Repeatedly add the largest die that will not make the group exceed 10 (or 15, if applicable).
3. If the group isn't 10 yet, add the smallest die.
I've found cases where this “almost” doesn't produce the optimal outcome, but no actual failures. Still, I'm not completely convinced. Is there any existing research on this?
• Please don't use code formatting for quotes. I know we've got a meta on this, but basically it 'reads' it wrong for those using screen readers and should only be used for actual code. Apr 23 '20 at 17:09
• @NautArch it is an actual code, isn't it? A command for the bot, written in specific syntax Apr 23 '20 at 17:12
• Here is the meta on the reasoning behind this. But thank you for your understanding! Apr 23 '20 at 17:14
• I'll need to think about this more, but it sounds like you'll want a recursive algorithm that passes an array of remaining dice, sorted ascending in value that checks for all pairs that are exactly ten first, then move on to what you're doing. I haven't try to code it out yet Apr 23 '20 at 18:46
• I believe I have a counterexample for your greedy algorithm: (8, 3, 3, 3, 3, 3, 3, 3, 1, 1). You can get three raises out of those dice by grouping them as 8 + 3 = 11 and two groups of 3 + 3 + 3 + 1 = 10, but your algorithm would instead first group up 8 + 1 + 1 = 10 and then be left with seven 3s that cannot make two raises. Apr 23 '20 at 22:56
Your problem appears to be an instance of the maximum set packing problem:
Given a (multi)set of positive integers, how many disjoint subsets with sum ≥ 10 can be formed from them?
(Allowing subsets with sum ≥ 15 to count as two raises turns it into a weighted maximum set packing problem instead, with subsets summing to 15 or more having twice the weight of those with sum between 10 and 14.)
In general, the maximum set packing problem is known to be hard to solve, and I suspect that even this specific instance of it may be difficult to solve exactly for large inputs. Fortunately, the limited number of dice available to players means that a (smart) exhaustive search of the solution space is probably tractable.
In particular, the maximum set packing problem can be represented as an integer linear program, and solved (or approximated) using any software package designed for solving such programs. You don't say what language you're writing your bot in, but e.g. for Python a quick Google search turns up several possible libraries such as Python-MIP. (To be honest, most of those tools are probably way overkill for this task, but since they exist already, it may be easier to use them than to try to come up with an algorithm from scratch.)
Also note that there are some preprocessing steps you can do to simplify the problem, and in some cases even solve it completely:
• Any roll of 10 (or more, with bonuses) can be set aside as its own group.
• Any pair of dice that sum to exactly 10 can also be safely set aside as a group: there's no situation where breaking up such a pair could increase the number of possible raises. (At least, I have a proof sketch of this that I'm pretty sure is correct.)
• The maximum number of additional raises that can be obtained from the dice remaining after the simplification steps described above is bounded by their sum divided by 10. In particular, if this sum is less than 20, the problem is trivial.
Unfortunately, the simplifications above assume that the skill rank 4 bonus that allows counting any groups summing to 15+ as two raises is not in play. If it is, you could obviously still safely set aside any single rolls of 15+, but I'm not sure those can ever arise. And I'm not even really sure whether setting aside pairs summing to 15 is guaranteed to be optimal in that case.
Still, at the very least, you can obtain a lower bound on the number of possible raises using e.g. your greedy solution algorithm, and an upper bound by multiplying the total sum of the dice with 1/10 (or 2/15). If those bounds match, you'll know that your greedy solution is optimal. If they don't, you could either try a more complicated exhaustive search or just display the greedy solution accompanied by a warning to the player that a better grouping might exist.
Actually, thinking about this a little more, you probably don't need anything like a full ILP solver for this; a simple recursive search with memoization ought to be more than sufficient for, say, less than 20 dice. In pseudocode, it could look something like this:
cache = map(multiset of integers -> integer)
function max_raises_for(rolls: multiset of integers) -> integer:
if rolls in cache: return cache[rolls]
upper_bound = round_down(sum(rolls) / 10)
if upper_bound ≤ 1: return upper_bound
max_raises = 0
for each group in feasible_groups_for(rolls):
raises = 1 + max_raises_for(rolls - group)
if raises > max_raises: max_raises = raises
if max_raises = upper_bound: end loop
cache[rolls] = max_raises
return max_raises
where the helper function feasible_groups_for(rolls) generates every distinct subset of rolls that sums to at least 10 and has no extra dice that could be removed without bringing the sum below 10.
Note that, for efficiency, you'll definitely want to store the multiset rolls in some canonical form, at least for the cache lookups, so that e.g. looking up (4, 5, 4, 3) will find an existing cache entry for (3, 4, 4, 5). Simply sorting the list of rolls before looking it up in the cache will work, but you could also represent the multiset e.g. as a (sorted) map of dice values to counts, so that e.g. (3, 4, 4, 5) would be represented as {3:1, 4:2, 5:1}. Also, you'll probably want to apply the preprocessing steps I suggested above to simplify the problem before running this algorithm on it.
I'll leave modifying the pseudocode to handle the "double raise on 15" rule as an exercise; basically you'll just need to consider more feasible groups (i.e. those that sum to 15, with no extra dice) and adjust the upper bound calculation to something like round_down(sum / 15) + (1 if sum % 15 ≥ 10 else 0) (and return early if it's at most 2). Also, if you want to return the actual groups of dice in the optimal solution and not just how many of them there are, you can do that easily enough by storing the actual list of found groups in the cache (and returning it from the function) instead of just its length.
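For reference, here is one possible Python rendering of the pseudocode (basic 10-point raises only, no 15-for-two bonus; functools.lru_cache stands in for the explicit cache):

```python
from functools import lru_cache
from itertools import combinations

def max_raises(rolls):
    """Maximum number of raises (groups summing to >= 10) from a dice multiset."""

    def feasible_groups(remaining):
        # Minimal subsets with sum >= 10: removing the group's smallest die
        # would drop the sum below 10, so no die is superfluous.
        groups = set()
        for size in range(1, len(remaining) + 1):
            for combo in combinations(remaining, size):
                s = sum(combo)
                if s >= 10 and (size == 1 or s - combo[0] < 10):
                    groups.add(combo)
        return groups

    @lru_cache(maxsize=None)
    def solve(remaining):                     # remaining: sorted tuple of dice
        upper = sum(remaining) // 10          # can't do better than total/10
        if upper <= 1:
            return upper
        best = 0
        for group in feasible_groups(remaining):
            rest = list(remaining)
            for d in group:
                rest.remove(d)
            best = max(best, 1 + solve(tuple(rest)))
            if best == upper:                 # upper bound reached: stop early
                break
        return best

    return solve(tuple(sorted(rolls)))

# The greedy counterexample from the comments: three raises are possible.
print(max_raises((8, 3, 3, 3, 3, 3, 3, 3, 1, 1)))  # -> 3
```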
• The moment I saw this question I knew who'd be answering it x) Apr 24 '20 at 19:22
|
2021-12-02 13:43:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6011521816253662, "perplexity": 527.2373921111252}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362219.5/warc/CC-MAIN-20211202114856-20211202144856-00589.warc.gz"}
|
https://socratic.org/questions/57d33cd37c01493784c694f0
|
# Find the freezing point (in °C) for a solution consisting of 60.0 g water and 40.0 g ethylene glycol?
## Find the freezing point (in °C) for a solution consisting of 60.0 g water and 40.0 g ethylene glycol? The $K_f$ of water is $1.86\ {}^\circ\text{C}\cdot\text{kg/mol}$, the $K_b$ of water is $0.512\ {}^\circ\text{C}\cdot\text{kg/mol}$, and the molar masses of water and ethylene glycol are $18.015\ \text{g/mol}$ and $62.0\ \text{g/mol}$, respectively. a) $20.1^\circ\text{C}$ b) $-20.1^\circ\text{C}$ c) $5.50^\circ\text{C}$ d) $28.42^\circ\text{C}$
Sep 10, 2016
B -20.1 degrees C
#### Explanation:
The freezing point depression for water is $-1.86^\circ\text{C}$ per molal solution (using the given $K_f$).
A molal solution is 1 mole of solute particles per kg of solvent.
As ethylene glycol is a covalent molecule it does not ionize in water. Because of this 1 mole of ethylene glycol produces 1 mole of particles.
To find the number of moles of ethylene glycol divide the mass of ethylene glycol by the molar mass.
$\frac{40}{62} = 0.645$ moles.
There are 1000 grams of water in a kg. To find the number of kg of solvent, divide the 60 grams of water by 1000 grams per kg.
$\frac{60\ \text{g}}{1000\ \text{g/kg}} = 0.06$ kg.
The molal concentration is the moles divided by the Kg.
0.645 moles / 0.06 kg = 10.75 m (molal concentration)
The freezing point depression is $-1.86\ {}^{\circ}\text{C}$ per 1 molal.
To find the depression, multiply $-1.86 \times 10.75$.
$-1.86 \times 10.75 = -20.0$
The normal freezing point is ${0}^{\circ} C$, so the new freezing point is
$0 + (-20.0) = -20.0^{\circ}\text{C}$
The closest answer is B.
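The arithmetic is easy to check in a few lines of Python (values from the question, with $i = 1$ for a non-electrolyte):

```python
m_water, m_eg = 60.0, 40.0     # g
M_water, M_eg = 18.015, 62.0   # g/mol
Kf = 1.86                      # degC*kg/mol

molality = (m_eg / M_eg) / (m_water / 1000)  # mol solute per kg solvent
dTf = -1 * Kf * molality                     # i = 1 (non-electrolyte)
print(molality, dTf)   # ~10.75 m, ~ -20.0 degC
```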
Sep 10, 2016
I got $\text{b}$.
a) is incorrect because freezing point depression is negative.
c) is incorrect because it results from using ${K}_{b}$ instead of ${K}_{f}$, and yet the change in boiling point is added to the freezing point...
d) is incorrect because it results from having calculated the molality for water instead of for ethylene glycol, and then used ${K}_{b}$ instead of ${K}_{f}$, and then still added the change in boiling point to the freezing point.
The $K_f$ of water is $1.86\ {}^\circ\text{C/m}$, and recall the freezing point depression equation:
$\boldsymbol{\Delta {T}_{f} = - i {K}_{f} m}$
where:
• $i$ is the van't Hoff factor, which is the number of ions in a fully ionic compound. This was not given so we have to work from a different equation or estimate $i$.
• ${K}_{f}$ is the freezing point depression constant.
• $m$ is the molality of the solution, i.e. $\text{mols solute"/"kg solvent}$.
• $\Delta {T}_{f} = {T}_{f} - {T}_{f}^{\text{*}}$ is the change in freezing point, where ${T}_{f}^{\text{*}}$ is the freezing point of the pure solvent and ${T}_{f}$ is the new freezing point.
Ethylene glycol is not ionic, but it is polar, so it is miscible in water. However, its $\text{pKa}$ is significantly higher than that of water ($25$ vs. $15.7$), so we can say it does not dissociate significantly in water. Therefore, $i \approx 1$. It's probably a little higher than $1$ for a real solution, however.
The molality of the solution is based on which substance is the solvent, i.e. which one there is more of.
$\frac{60.0\ \text{g}}{18.015\ \text{g/mol}} = 3.331\ \text{mol water}$
$\frac{40.0\ \text{g}}{62.0\ \text{g/mol}} = 0.6452\ \text{mol EG}$
Therefore, water is the solvent.
$m_{\text{soln}} = \frac{\text{mols EG}}{\text{kg water}} = \frac{0.6452\ \text{mol EG}}{60.0\ \text{g water}\times\frac{1\ \text{kg}}{1000\ \text{g}}} = 10.75\ \text{m solution}$
Therefore, the change in freezing point is:
$\Delta T_f = T_f - T_f^{*} \approx -(1)\left(1.86\ {}^\circ\text{C}\cdot\text{kg/mol}\right)\left(10.75\ \text{mol/kg}\right) \approx -20.0\ {}^\circ\text{C}$
Since the freezing point must decrease, the final freezing point is:
$T_f = -20.0^{\circ}\text{C} + T_f^{*}$
$= -20.0^{\circ}\text{C}$
The apparent answer is therefore $- {20.1}^{\circ} \text{C}$, which is $\text{B}$. None of the other answers are right because they are positive and not negative.
Based on this answer, can you see that $i \approx 1.005$? It was a good estimate to say that $i \approx 1$, but it's not exactly $1$.
|
2021-04-20 23:50:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 60, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7466747164726257, "perplexity": 987.2169207054045}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039491784.79/warc/CC-MAIN-20210420214346-20210421004346-00301.warc.gz"}
|
https://math.stackexchange.com/questions/3126096/products-coproducts-and-morphisms
|
# Products, coproducts and morphisms
The universal properties of products and coproducts "amount" to the statements
$$\hom(\coprod_i X_i , Y) = \prod_i \hom(X_i, Y) \quad \text{and} \quad \hom(X,\prod_i Y_i) = \prod_i \hom(X,Y_i)$$
for any category and any objects for which the (co)product is defined. I am wondering if there is any general statement about the other two cases: are there any similar formulas that describe $$\hom(X,\coprod_i Y_i)$$ and $$\hom(\prod_i X_i , Y)$$? I don't expect a general formula to exist, but maybe one can say something at least for abelian categories?
• In general categories, I think you should not expect any meaningful answer, here. In abelian categories you have finite biproducts, so if the products/coproducts are finite, then of course you get the same formulas. This leaves the infinite product in an abelian category case -- I once again doubt you'll get anything nice, but take that with a grain of salt. – Mees de Vries Feb 25 at 14:36
• In an extensive category, $X$ is a connected object if $\mathsf{Hom}(X,-)$ preserves all coproducts, i.e.the canonical morphism $\coprod_i\mathsf{Hom}(X,Y_i)\to\mathsf{Hom}(X,\coprod_i Y_i)$ is an isomorphism. – Derek Elkins Feb 25 at 19:55
|
2019-04-25 21:45:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7418723702430725, "perplexity": 290.69093227295195}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578742415.81/warc/CC-MAIN-20190425213812-20190425235812-00209.warc.gz"}
|
http://en.wikipedia.org/wiki/Encephalization
|
# Encephalization
Encephalization is defined as the amount of brain mass exceeding that related to an animal's total body mass. Quantifying an animal's encephalization has been argued to be directly related to that animal's level of intelligence. Aristotle wrote in 335 B.C. "Of all the animals, man has the brain largest in proportion to his size."[1] Also, in 1871, Charles Darwin wrote in his book The Descent of Man: "No one, I presume, doubts that the large proportion which the size of man's brain bears to his body, compared to the same proportion in the gorilla or orang, is closely connected with his mental powers."[2]
In 2004, Dennis Bramble and Daniel Lieberman proposed that early Homo were scavengers that used stone tools to harvest meat off carcasses and to open bones. They proposed that humans specialized in long-distance running to compete with other scavengers in reaching carcasses.[3] It has been suggested that such an adaptation ensured a food supply that made large brains possible.
More encephalized species tend to have longer spinal shock duration.
Encephalization may also refer to the tendency for a species toward larger brains through evolutionary time. Anthropological studies indicate that bipedalism preceded encephalization in the human evolutionary lineage after divergence from the chimpanzee lineage. Compared to the chimpanzee brain, the human brain is larger and certain brain regions have been particularly altered during human evolution.[4] Most brain growth of chimpanzees happens before birth while most human brain growth happens after birth.[5]
## Encephalization quotient
$E=CS^r$
In Snell's equation of simple allometry,[6] $E$ is the weight of the brain, $C$ is the cephalization factor, $S$ is body weight, and $r$ is the exponential constant. The exponential constant for primates is 0.28[6] and either 0.56 or 0.66 for mammals in general.[7]
The "Encephalization Quotient" (EQ) is the ratio of "C" over the expected value for "C" of an animal of given weight "S".[7]
| Species | EQ[7] | Species | EQ[7] |
| --- | --- | --- | --- |
| Human | 7.44 | Cat | 1.00 |
| Dolphin | 5.31 | Horse | 0.86 |
| Chimpanzee | 2.49 | Sheep | 0.81 |
| Rhesus Monkey | 2.09 | Mouse | 0.50 |
| Elephant | 1.87 | Rat | 0.40 |
| Whale[clarification needed] | 1.76 | Rabbit | 0.40 |
| Dog | 1.17 | | |
This measure of approximate intelligence is more accurate for mammals than for other phyla of Animalia.
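As an illustration, an EQ can be computed directly from the allometric fit; the constant 0.12 and exponent 2/3 used below are common textbook values for mammals (Jerison), not figures taken from this table:

```python
def eq(brain_g, body_g, c=0.12, r=2/3):
    # EQ = actual brain mass / expected brain mass for the body mass
    return brain_g / (c * body_g**r)

print(eq(1350, 65_000))   # human (~1350 g brain, ~65 kg body): roughly 7
```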
## Evolution of the EQ
The evolution of the EQ shows a close correlation with the evolution of the diversity of life generally. During the Paleozoic the EQ generally increased throughout the period, peaking in the late Carboniferous and early Permian. This rate of increase, had it continued, would have produced a species with close to a human EQ 70 million years ago. However, the Permian-Triassic mega-extinction event 251 million years ago reversed this trend. Occurring through the probable release of oceanic methane clathrates and the burning of coal by the Siberian volcanic basalt traps, it eliminated 96% of species and slowed the rate of EQ development, such that only by the end of the Cretaceous Period had the EQ recovered to its earlier level, with the appearance of the dromaeosaurids. A second extinction event, the Cretaceous-Paleogene (K-Pg) event of 66 million years ago, which killed off the non-avian dinosaurs, ammonites and many other creatures, saw 78% of species become extinct. These events had a huge effect in setting back the further evolution of the EQ.
|
2014-04-25 08:46:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47196224331855774, "perplexity": 4596.741468596258}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
|
http://www.nag.com/numeric/CL/nagdoc_cl24/html/F01/f01jfc.html
|
# NAG Library Function Document: nag_matop_real_gen_matrix_frcht_pow (f01jfc)
## 1 Purpose
nag_matop_real_gen_matrix_frcht_pow (f01jfc) computes the Fréchet derivative $L\left(A,E\right)$ of the $p$th power (where $p$ is real) of the real $n$ by $n$ matrix $A$ applied to the real $n$ by $n$ matrix $E$. The principal matrix power ${A}^{p}$ is also returned.
## 2 Specification
#include <nag.h>
#include <nagf01.h>
void nag_matop_real_gen_matrix_frcht_pow (Integer n, double a[], Integer pda, double e[], Integer pde, double p, NagError *fail)
## 3 Description
For a matrix $A$ with no eigenvalues on the closed negative real line, ${A}^{p}$ ($p\in ℝ$) can be defined as
$A^p = \exp\left(p\log\left(A\right)\right)$
where $\mathrm{log}\left(A\right)$ is the principal logarithm of $A$ (the unique logarithm whose spectrum lies in the strip $\left\{z:-\pi <\mathrm{Im}\left(z\right)<\pi \right\}$).
The Fréchet derivative of the matrix $p$th power of $A$ is the unique linear mapping $E⟼L\left(A,E\right)$ such that for any matrix $E$
$\left\|\left(A+E\right)^p - A^p - L\left(A,E\right)\right\| = o\left(\left\|E\right\|\right).$
The derivative describes the first-order effect of perturbations in $A$ on the matrix power ${A}^{p}$.
nag_matop_real_gen_matrix_frcht_pow (f01jfc) uses the algorithms of Higham and Lin (2011) and Higham and Lin (2013) to compute ${A}^{p}$ and $L\left(A,E\right)$. The real number $p$ is expressed as $p=q+r$ where $q\in \left(-1,1\right)$ and $r\in ℤ$. Then ${A}^{p}={A}^{q}{A}^{r}$. The integer power ${A}^{r}$ is found using a combination of binary powering and, if necessary, matrix inversion. The fractional power ${A}^{q}$ is computed using a Schur decomposition, a Padé approximant and the scaling and squaring method. The Padé approximant is differentiated in order to obtain the Fréchet derivative of ${A}^{q}$ and $L\left(A,E\right)$ is then computed using a combination of the chain rule and the product rule for Fréchet derivatives.
## 4 References
Higham N J (2008) Functions of Matrices: Theory and Computation SIAM, Philadelphia, PA, USA
Higham N J and Lin L (2011) A Schur–Padé algorithm for fractional powers of a matrix SIAM J. Matrix Anal. Appl. 32(3) 1056–1078
Higham N J and Lin L (2013) An improved Schur–Padé algorithm for fractional powers of a matrix and their Fréchet derivatives MIMS Eprint 2013.1 Manchester Institute for Mathematical Sciences, School of Mathematics, University of Manchester http://eprints.ma.man.ac.uk/
## 5 Arguments
1: nIntegerInput
On entry: $n$, the order of the matrix $A$.
Constraint: ${\mathbf{n}}\ge 0$.
2: a[$\mathit{dim}$]doubleInput/Output
Note: the dimension, dim, of the array a must be at least ${\mathbf{pda}}×{\mathbf{n}}$.
The $\left(i,j\right)$th element of the matrix $A$ is stored in ${\mathbf{a}}\left[\left(j-1\right)×{\mathbf{pda}}+i-1\right]$.
On entry: the $n$ by $n$ matrix $A$.
On exit: the $n$ by $n$ principal matrix $p$th power, ${A}^{p}$.
3: pdaIntegerInput
On entry: the stride separating matrix row elements in the array a.
Constraint: ${\mathbf{pda}}\ge {\mathbf{n}}$.
4: e[$\mathit{dim}$]doubleInput/Output
Note: the dimension, dim, of the array e must be at least ${\mathbf{pde}}×{\mathbf{n}}$.
The $\left(i,j\right)$th element of the matrix $E$ is stored in ${\mathbf{e}}\left[\left(j-1\right)×{\mathbf{pde}}+i-1\right]$.
On entry: the $n$ by $n$ matrix $E$.
On exit: the Fréchet derivative $L\left(A,E\right)$.
5: pdeIntegerInput
On entry: the stride separating matrix row elements in the array e.
Constraint: ${\mathbf{pde}}\ge {\mathbf{n}}$.
6: pdoubleInput
On entry: the required power of $A$.
7: failNagError *Input/Output
The NAG error argument (see Section 3.6 in the Essential Introduction).
## 6 Error Indicators and Warnings
NE_ALLOC_FAIL
Dynamic memory allocation failed.
NE_BAD_PARAM
On entry, argument $⟨\mathit{\text{value}}⟩$ had an illegal value.
NE_INT
On entry, ${\mathbf{n}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{n}}\ge 0$.
NE_INT_2
On entry, ${\mathbf{pda}}=⟨\mathit{\text{value}}⟩$ and ${\mathbf{n}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{pda}}\ge {\mathbf{n}}$.
On entry, ${\mathbf{pde}}=⟨\mathit{\text{value}}⟩$ and ${\mathbf{n}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{pde}}\ge {\mathbf{n}}$.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
NE_NEGATIVE_EIGVAL
$A$ has eigenvalues on the negative real line. The principal $p$th power is not defined in this case; nag_matop_complex_gen_matrix_frcht_pow (f01kfc) can be used to find a complex, non-principal $p$th power.
NE_SINGULAR
$A$ is singular so the $p$th power cannot be computed.
NW_SOME_PRECISION_LOSS
${A}^{p}$ has been computed using an IEEE double precision Padé approximant, although the arithmetic precision is higher than IEEE double precision.
## 7 Accuracy
For a normal matrix $A$ (for which ${A}^{\mathrm{T}}A=A{A}^{\mathrm{T}}$), the Schur decomposition is diagonal and the computation of the fractional part of the matrix power reduces to evaluating powers of the eigenvalues of $A$ and then constructing ${A}^{p}$ using the Schur vectors. This should give a very accurate result. In general, however, no error bounds are available for the algorithm. See Higham and Lin (2011) and Higham and Lin (2013) for details and further discussion.
If the condition number of the matrix power is required then nag_matop_real_gen_matrix_cond_pow (f01jec) should be used.
## 8 Parallelism and Performance
nag_matop_real_gen_matrix_frcht_pow (f01jfc) is threaded by NAG for parallel execution in multithreaded implementations of the NAG Library.
nag_matop_real_gen_matrix_frcht_pow (f01jfc) makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
## 9 Further Comments
The real allocatable memory required by the algorithm is approximately $6×{n}^{2}$.
The cost of the algorithm is $O\left({n}^{3}\right)$ floating-point operations; see Higham and Lin (2011) and Higham and Lin (2013).
If the matrix $p$th power alone is required, without the Fréchet derivative, then nag_matop_real_gen_matrix_pow (f01eqc) should be used. If the condition number of the matrix power is required then nag_matop_real_gen_matrix_cond_pow (f01jec) should be used. If $A$ has negative real eigenvalues then nag_matop_complex_gen_matrix_frcht_pow (f01kfc) can be used to return a complex, non-principal $p$th power and its Fréchet derivative $L\left(A,E\right)$.
## 10 Example
This example finds ${A}^{p}$ and the Fréchet derivative of the matrix power $L\left(A,E\right)$, where $p=0.2$,
$A = \begin{pmatrix} 3 & 3 & 2 & 1 \\ 3 & 1 & 0 & 2 \\ 1 & 1 & 4 & 3 \\ 3 & 0 & 3 & 1 \end{pmatrix} \quad \text{and} \quad E = \begin{pmatrix} 1 & 0 & 2 & 1 \\ 0 & 4 & 5 & 2 \\ 1 & 0 & 0 & 0 \\ 2 & 3 & 3 & 0 \end{pmatrix}.$
### 10.1 Program Text
Program Text (f01jfce.c)
### 10.2 Program Data
Program Data (f01jfce.d)
### 10.3 Program Results
Program Results (f01jfce.r)
|
2016-08-31 16:05:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 91, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9985617995262146, "perplexity": 1544.7368951552503}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982295966.49/warc/CC-MAIN-20160823195815-00132-ip-10-153-172-175.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/discrete-math/62061-generalized-permutations-combinations.html
|
# Math Help - Generalized Permutations and Combinations
1. ## Generalized Permutations and Combinations
How many ways are there to distribute 12 indistinguishable balls into 9 distinguisable bins?
2. Originally Posted by aaronrj
How many ways are there to distribute 12 indistinguishable balls into 9 distinguisable bins?
The number of ways distribute K indistinguishable balls into N distinguishable bins is
${{K+N-1} \choose K}$.
C(9+12-1, 12) = C(20, 12) = 20! / (8! 12!) = 125970
I GOT THAT BY LOOKING AT EXAMPLE 9 ON PAGE 377
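The value is easy to confirm in Python:

```python
import math
# C(K + N - 1, K) with K = 12 balls and N = 9 bins
print(math.comb(12 + 9 - 1, 12))   # 125970
```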
|
2014-03-08 09:39:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7629294991493225, "perplexity": 1430.4867786506786}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999654285/warc/CC-MAIN-20140305060734-00088-ip-10-183-142-35.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/force-and-direction-of-motion.208727/
|
# Force and direction of motion
1. Jan 13, 2008
### rohanprabhu
Newton's $2^{nd}$ law states that the net force on a body equals the rate of change of its momentum, $F = \frac{dp}{dt}$.
Let us assume that a train of 1000 tonnes is moving with a constant velocity [so, $\frac{dp}{dt} = 0$ (p = momentum)] on a rough surface in the $\hat{i}$ direction.
Now, I apply a small force, of a very small magnitude, in the $- \hat{i}$ direction, i.e. in the opposite direction of the motion. Will it cause the train's direction to reverse? Here, the force applied by the engine is just enough to counteract the force due to friction. So, with even a small force [which I could apply myself] on the train, can I reverse the direction of the train?
I'm asking this because it seems quite like a paradox to me. The 1000 tonnes figure is more or less for the perceptual impact :D.
2. Jan 13, 2008
### nicksauce
The rate of change of the momentum would be in the -i direction (for as long as the force is applied), but the momentum itself would still be in the +i direction, until the momentum reaches zero. Therefore you would either need to apply a very large force, or apply a small force for a very long time, to actually reverse the direction of the train.
3. Jan 14, 2008
### awvvu
While you may be making the net force on the train point in the opposite direction to its motion, that doesn't mean it'll instantly reverse direction. Changing the large (let's say) positive velocity to a negative velocity will take a large acceleration, or a small one for a long time (as nicksauce said).
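A rough back-of-the-envelope calculation makes the time scale concrete (the 10 m/s speed and the 100 N push are illustrative assumptions, not from the original post):

```python
m = 1000e3   # 1000 tonnes in kg
v0 = 10.0    # m/s, assumed initial speed
F = 100.0    # N, a force a person could plausibly apply
t_stop = m * v0 / F          # time for the momentum to reach zero
print(t_stop, t_stop / 3600) # 1e5 s, i.e. about 28 hours just to stop
```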
|
2016-10-24 22:12:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6839597225189209, "perplexity": 479.30086392753435}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719784.62/warc/CC-MAIN-20161020183839-00388-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://astronomy.stackexchange.com/questions/26980/at-what-speed-does-something-have-to-travel-away-from-us-for-it-to-red-shift-eno
|
# At what speed does something have to travel away from us for it to red shift enough that it becomes invisible to the human eye?
Are there stars, galaxies etc that we cannot see because they are traveling too fast and their spectrum is shifted below our visible range? From what I understand, red shift is caused by stars and such moving away from the viewer. At what speed does a star etc have to travel to be invisible to us?
What if you are in one galaxy on one side of the universe that is expanding looking at a galaxy on the other expanding edge side. They would be traveling faster than the speed of light away from each other. Would they be invisible to each other?
• Do you mean invisible to human vision? Your question is not very well defined. Note that cosmological redshift is not caused by motion. – ProfRob Jul 17 '18 at 22:35
• To repeat @RobJeffries, do you just mean “visible light”? Cosmic microwave background radiation, for example, is highly redshifted and isn’t in the “visible” frequencies. – Chappo Hasn't Forgotten Monica Jul 18 '18 at 0:31
• The answer is still going to depend on the object, or more specifically on the type(s) of electromagnetic radiation that it is emitting or reflecting. A strong gamma ray source, such as an exploding supernova, would still be visible at very high redshifts because the gamma rays would be redshifted down to visible light. – Steve Linton Jul 18 '18 at 12:08
• Also, stars that are far enough away to have an appreciable redshift are by no means visible to the human eye in the first place, simply because they're too far away and hence too faint. You can still detect them in a telescope, but since telescopes can "see" infrared anyway, they won't redshift out of a telescope's visible range. – pela Jul 18 '18 at 12:31
• The question has now been edited into something that is quite different from the original and it is not clear that this was the intent of the OP. – ProfRob Jul 18 '18 at 13:05
The following deals only with redshift caused by motion (Doppler effect).
The wavelength (wl) shift for an object moving away from a stationary observer is calculated by the following formula: shift = wl × V/C, with V the speed of the moving object and C the speed of light (source: https://en.wikipedia.org/wiki/Doppler_effect)
Let's say we want a Sun-like star to become invisible to the human eye. Our Sun's emission starts around 250 nm and human vision ends around 700 nm (source: https://en.wikipedia.org/wiki/Sunlight#/media/File:Solar_spectrum_en.svg).
So we want a minimum shift of 700 - 250 = 450 nm for wl 250 nm. The formula yields V/C = 450 / 250 = 1.8, which (1) is impossible, because nothing moves faster than light, and (2) makes the classical formula irrelevant anyway
The relativistic formula (for objects moving at speeds > C/10) is: shift / wl = SQRT( (1 + V/C) / (1 - V/C) ) -1 with SQRT the square root (source: https://en.wikipedia.org/wiki/Relativistic_Doppler_effect)
Rounding (shift / wl) up to 2 (so that the observed wavelength is 3 times the emitted one), the relativistic formula yields V/C = 8/10
The speed V would have to be even higher for a hotter (white to bluish) star with UV emission starting below 250 nm.
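A quick numeric check of the relativistic formula above (a Python sketch, not part of the original answer):

emitted, observed = 250.0, 700.0     # nm: shortest solar emission vs. end of visible range
k = observed / emitted               # k = 1 + shift/wl
# invert k = sqrt((1 + V/C) / (1 - V/C)) for V/C:
beta = (k**2 - 1) / (k**2 + 1)
print(beta)                          # about 0.77; rounding shift/wl up to 2 (k = 3) gives 0.8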
So my answer to the question is: very close to the speed of light (80% in above calculation) and therefore not a real-life scenario (see below).
"Among the nearby stars, the largest radial velocities with respect to the Sun are +308 km/s" (source: https://en.wikipedia.org/wiki/Doppler_effect), that is 1000 times smaller than C = 3e5 km/s. Rounding down to 300 km/s, it would produce a maximum shift of 700 x 300 / 3e5 = 0,7 nm, several hundred times smaller than required for invisibility.
Besides, the star would still appear on an infrared image.
• Hi, welcome on the Astronomy SE! Note, the site supports Latex, just write $5\cdot 5$ and you will get $5\cdot 5$. – peterh - Reinstate Monica Jul 26 '18 at 12:38
• I really liked your answer. Please come back and look at some others. – Muze Jul 26 '18 at 20:25
• Among stars which are not "nearby" there are many whose light is red-shifted by a factor of 3 or more, so that 250nm UV would be shifted to 700nm or longer IR. Such galaxies will also contain stars (and other things) that emit much shorter wavelength radiation, though, so some visible light will probably reach Earth. That said such a galaxy will be much too faint to see with the naked eye. – Steve Linton Jul 27 '18 at 21:40
|
2021-03-08 04:06:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5574177503585815, "perplexity": 700.8550859551727}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178381803.98/warc/CC-MAIN-20210308021603-20210308051603-00187.warc.gz"}
|
https://math.stackexchange.com/questions/2057354/a-possible-numerical-argument-for-the-riemann-hypothesis
|
# A possible numerical argument for the Riemann hypothesis
According to an answer on this MO post, showing that $$\int_{0}^{\infty}\frac{(1-12t^2)}{(1+4t^2)^3}\int_{1/2}^{\infty}\log|\zeta(\sigma+it)|~d\sigma ~dt=\frac{\pi(3-\gamma)}{32}$$
($\gamma$ is the Euler-Mascheroni constant) is equivalent to the Riemann hypothesis.
I have two questions:
$(1)$ Has any serious attempt been made to evaluate this numerically or determine strong bounds?
$(2)$ Would numerically evaluating this integral be a valid heuristic argument in favour of the Riemann hypothesis?
Certainly, no amount of numerical accuracy constitutes a proof. However, if we showed the equality holds to, say, a quadrillion digits, it would be true for all intents and purposes; I doubt any mathematician would then seriously deny the validity of the conjecture.
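Out of curiosity, here is a rough numerical sketch of the left-hand side (my own exploratory code, assuming mpmath; the infinite limits are truncated for speed, so this probes the identity only to a few digits, nowhere near the precision contemplated above):

from mpmath import mp, mpf, log, fabs, zeta, quad, pi, euler
mp.dps = 15

def inner(t):
    # integral of log|zeta(sigma + i t)| over sigma, truncated at sigma = 40
    # (the integrand decays roughly like 2^(-sigma) for large sigma)
    return quad(lambda s: log(fabs(zeta(s + 1j * t))), [mpf('0.5'), 40])

weight = lambda t: (1 - 12 * t**2) / (1 + 4 * t**2)**3   # decays like t^(-4)
lhs = quad(lambda t: weight(t) * inner(t), [0, 50])      # truncate t at 50
rhs = pi * (3 - euler) / 32
print(lhs, rhs)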
• Not an answer to your question but they have checked that the "first" order-of-trillions of zeroes appear on the line $\operatorname {Re}(z)=1/2$. I'm not sure how this compares to your version of heuristic proof. – Elliot G Dec 13 '16 at 18:21
• It does, the lower values of $\zeta(s)$ are related to the real part of the lower zeros which we know are $1/2$ @ElliotG – reuns Dec 13 '16 at 18:22
• @MathematicsStudent1122 And if the RH is false, what is the value ? – reuns Dec 13 '16 at 18:28
• Already very few mathematicians seriously deny the validity of the conjecture because of numerical evidence... – Peter Humphries Dec 14 '16 at 18:56
|
2019-08-21 22:12:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6303479671478271, "perplexity": 1328.8663815520715}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316549.78/warc/CC-MAIN-20190821220456-20190822002456-00184.warc.gz"}
|
http://blog.bigsmoke.us/2010/10/08/growing-a-qcow2-image-file
|
# Growing a qcow2 image file
First, convert it to raw:
qemu-img convert system.qcow2 -O raw system.raw
Then use dd to write a single zero byte at the very end of the new file. It will automatically create a hole (sparse region) in the file:
# Make sure the seek value is bigger than the file size, otherwise it would put a zero somewhere in the middle of the file.
dd if=/dev/zero of=system.raw bs=1 count=1 seek=100G conv=notrunc
Then resize the partition. I did that by binding the image to a loop device:
losetup /dev/loop0 system.raw
Then you can use fdisk on /dev/loop0 to alter the partition table. parted didn’t want to resize my file system because it had a journal (argh…) so I just used fdisk and made sure that the start of the partition was the same.
Then you detach the loop device and attach the partition:
losetup -d /dev/loop0
# 32256 is 63*512. 63 is the start sector, which fdisk can tell you (with the u option)
losetup -o 32256 /dev/loop0 system.raw
Then I used resize2fs on /dev/loop0 and detached it again.
1. Comment by halfgaar
On October 8, 2010 at 18:10
Hmm. even though the partition mounted fine, it doesn’t boot because of corruption errors…
2. Comment by halfgaar
On March 9, 2017 at 10:27
I received this by e-mail from Richard, not sure why through e-mail?:
Hi,
I have had the same problem, I use this :
Context : qcow2 file with two partitions type 8e (LVM)
Stop the VM
$ qemu-img resize trafalgar.qcow2 +5G
Start the VM with SystemRescueCD to delete / recreate the partition with fdisk, then stop the VM.
$ fdisk -l /dev/sda
d    (delete the existing partition)
n    (create a new, larger partition at the same start sector)
t    (change the partition type)
8e   (type 8e = Linux LVM)
w    (write the table and exit)
q    (quit, if still in fdisk)
Start the VM normally, for grow physical volume.
pvresize /dev/sda2
Richard
|
2018-01-20 12:43:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43529269099235535, "perplexity": 6473.3659125159875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889617.56/warc/CC-MAIN-20180120122736-20180120142736-00425.warc.gz"}
|
https://tex.stackexchange.com/questions/106563/trouble-creating-properly-aligned-matrix-within-a-matrix
|
# Trouble creating properly aligned matrix within a matrix
I'm trying to insert parentheses around a group of elements creating a matrix within a larger matrix. So far I have this:
$$\begin{bmatrix} \begin{matrix} 0 & 0 \\ 0 & \omega_0 \\ -\omega_0 & 0 \\ 0 & 0 \end{matrix} & \begin{matrix} -\frac{p_1}{2} & -\frac{p_2}{2} & 0 \\ 0 & 0 & \frac{p_2}{2} \\ -\frac{p_2}{2} & \frac{p_1}{2} & 0 \\ \end{matrix} \\ \begin{matrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{matrix} & \omega_0 \mathbf{I}^{-1} \begin{pmatrix} I_{yx} & (I_{yy} - I_{zz}) & 2 I_{yz} \\ (I_{zz} - I_{xx}) & -I_{xy} & -2 I_{xz} \\ -I_{yz} & I_{xz} & 0 \end{pmatrix} \end{bmatrix}$$
Basically, it's four smaller matrices combined into one larger one. However, the elements aren't aligned between these matrices. Sorry there's no picture, I don't have the reputation yet to include one.
\documentclass{article}
\usepackage{amsmath}
\usepackage{scalerel}
\begin{document}
Original:
$$\begin{bmatrix} \begin{matrix} 0 & 0 \\ 0 & \omega_0 \\ -\omega_0 & 0 \\ 0 & 0 \end{matrix} & \begin{matrix} -\frac{p_1}{2} & -\frac{p_2}{2} & 0 \\ 0 & 0 & \frac{p_2}{2} \\ -\frac{p_2}{2} & \frac{p_1}{2} & 0 \\ \end{matrix} \\ \begin{matrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{matrix} & \omega_0 \mathbf{I}^{-1} \begin{pmatrix} I_{yx} & (I_{yy} - I_{zz}) & 2 I_{yz} \\ (I_{zz} - I_{xx}) & -I_{xy} & -2 I_{xz} \\ -I_{yz} & I_{xz} & 0 \end{pmatrix} \end{bmatrix}$$
Revised:
\def\x{\begin{array}{c} x\\x\\x\end{array}}
$$\begin{bmatrix} \begin{array}{c} 0 \\ 0 \\ -\omega_0 \\ 0 \\0 \\0 \\0 \\ \end{array} & \begin{array}{c@{\hspace{0ex}}} 0 \\ \omega_0 \\ 0 \\ 0 \\0 \\0 \\0 \\ \end{array} & \begin{array}{@{\hspace{0ex}}c} \\ \\ \\ \\ \\ \omega_0 \mathbf{I}^{-1} \\ \\ \end{array} \begin{array}{@{\hspace{0ex}}c@{\hspace{0ex}}} \\ \\ \\ \\ \scalerel*[1.8ex]{(}{\x} \\ \end{array} & \begin{array}{@{\hspace{0ex}}c} -p_1/2 \\ 0 \\ -p_2/2 \\ \\ I_{yx} \\(I_{zz} - I_{xx}) \\ -I_{yz} \\ \end{array} & \begin{array}{c} -p_2/2 \\ 0 \\ p_1/2 \\ \\ (I_{yy} - I_{zz}) \\ -I_{xy} \\ -I_{xz} \\ \end{array} & \begin{array}{c@{\hspace{0ex}}} 0 \\p_2/2 \\ 0 \\ \\ 2I_{yz} \\ -2I_{xz} \\ 0 \\ \end{array} & \begin{array}{@{\hspace{0ex}}c@{\hspace{0ex}}} \\ \\ \\ \\ \scalerel*[1.8ex]{)}{\x} \\ \end{array} \end{bmatrix}$$
\end{document}
• As egreg pointed out, I am indeed missing a row in the top right quadrant, but the solution provided by Mr. Segletes should do the trick with a little modification! – user2236918 Apr 7 '13 at 16:12
|
2019-10-22 23:59:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 3, "x-ck12": 0, "texerror": 0, "math_score": 0.949942409992218, "perplexity": 805.6041626668651}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987826436.88/warc/CC-MAIN-20191022232751-20191023020251-00008.warc.gz"}
|
https://www.rdocumentation.org/packages/igraph/versions/1.0.1/topics/sample_forestfire
|
sample_forestfire
Forest Fire Network Model
This is a growing network model that resembles the way a forest fire spreads by igniting nearby trees.
Keywords
graphs
Usage
sample_forestfire(nodes, fw.prob, bw.factor = 1, ambs = 1,
directed = TRUE)
Arguments
nodes
The number of vertices in the graph.
fw.prob
The forward burning probability, see details below.
bw.factor
The backward burning ratio. The backward burning probability is calculated as bw.factor*fw.prob.
ambs
The number of ambassador vertices.
directed
Logical scalar, whether to create a directed graph.
Details
The forest fire model intends to reproduce the following network characteristics, observed in real networks:
• Heavy-tailed in-degree distribution.
• Heavy-tailed out-degree distribution.
• Communities.
• Densification power-law. The network is densifying in time, according to a power-law rule.
• Shrinking diameter. The diameter of the network decreases in time.
The network is generated in the following way. One vertex is added at a time. This vertex connects to (cites) ambs vertices already present in the network, chosen uniformly at random. Now, for each cited vertex $v$ we do the following procedure (a minimal code sketch follows the list):
1. We generate two random numbers, $x$ and $y$, that are geometrically distributed with means $p/(1-p)$ and $rp/(1-rp)$. ($p$ is fw.prob, $r$ is bw.factor.) The new vertex cites $x$ outgoing neighbors and $y$ incoming neighbors of $v$, from those which are not yet cited by the new vertex. If there are fewer than $x$ or $y$ such vertices available then we cite all of them.
2. The same procedure is applied to all the newly cited vertices.
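For illustration, here is a minimal Python re-implementation of this procedure (my own sketch, not the igraph source; it stores adjacency as plain sets):

import random

def forest_fire(n, fw_prob, bw_factor=1.0, ambs=1, seed=0):
    # assumes 0 < fw_prob < 1 and bw_factor * fw_prob < 1
    rng = random.Random(seed)
    out_nbrs = [set() for _ in range(n)]   # v -> vertices that v cites
    in_nbrs = [set() for _ in range(n)]    # v -> vertices citing v

    def geom(mean):
        # geometric variate on {0, 1, 2, ...} with the given mean
        q = 1.0 / (1.0 + mean)             # success probability
        k = 0
        while rng.random() > q:
            k += 1
        return k

    p = fw_prob
    r = bw_factor * fw_prob                # backward burning probability
    for v in range(1, n):
        visited = set()
        frontier = rng.sample(range(v), min(ambs, v))   # ambassadors
        while frontier:
            w = frontier.pop()
            if w in visited:
                continue
            visited.add(w)
            out_nbrs[v].add(w)             # the new vertex cites w
            in_nbrs[w].add(v)
            x = geom(p / (1.0 - p))        # forward burning, mean p/(1-p)
            y = geom(r / (1.0 - r))        # backward burning, mean rp/(1-rp)
            fwd = [u for u in out_nbrs[w] if u not in visited]
            bwd = [u for u in in_nbrs[w] if u not in visited and u != v]
            frontier += rng.sample(fwd, min(x, len(fwd)))
            frontier += rng.sample(bwd, min(y, len(bwd)))
    return out_nbrs

out = forest_fire(1000, fw_prob=0.37, bw_factor=0.32/0.37)
print(sum(len(s) for s in out) / 1000.0)   # average out-degree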
Value
A simple graph, possibly directed if the directed argument is TRUE.
Note
The version of the model in the published paper is incorrect in the sense that it cannot generate the kind of graphs the authors claim. A corrected version is available from http://www.cs.cmu.edu/~jure/pubs/powergrowth-tkdd.pdf, our implementation is based on this.
References
Jure Leskovec, Jon Kleinberg and Christos Faloutsos. Graphs over time: densification laws, shrinking diameters and possible explanations. KDD '05: Proceeding of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, 177--187, 2005.
See Also
barabasi.game for the basic preferential attachment model.
Aliases
• forest.fire.game
• sample_forestfire
Examples
# NOT RUN {
# Generate a 10,000-vertex forest fire graph and compare its
# in- and out-degree distributions on a log-log plot.
g <- sample_forestfire(10000, fw.prob=0.37, bw.factor=0.32/0.37)
dd1 <- degree_distribution(g, mode="in")
dd2 <- degree_distribution(g, mode="out")
plot(seq(along=dd1)-1, dd1, log="xy")       # in-degree distribution
points(seq(along=dd2)-1, dd2, col=2, pch=2) # out-degree distribution, overlaid
# }
Documentation reproduced from package igraph, version 1.0.1, License: GPL (>= 2)
|
2019-11-15 06:12:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.399920254945755, "perplexity": 3559.8386607117704}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668585.12/warc/CC-MAIN-20191115042541-20191115070541-00056.warc.gz"}
|
https://gmatclub.com/forum/ashok-and-brian-are-both-walking-east-along-the-same-path-160840.html
|
# Ashok and Brian are both walking east along the same path
Manager
Joined: 09 Nov 2012
Posts: 62
Ashok and Brian are both walking east along the same path [#permalink]
01 Oct 2013, 07:54
Ashok and Brian are both walking east along the same path; Ashok walks at a faster constant speed than does Brian. If Brian starts 30 miles east of Ashok and both begin walking at the same time, how many miles will Brian walk before Ashok catches up with him?
(1) Brian’s walking speed is twice the difference between Ashok’s walking speed and his own.
(2) If Ashok’s walking speed were five times as great, it would be three times the sum of his and Brian’s actual walking speeds.
Math Expert
Joined: 02 Sep 2009
Posts: 53066
Re: Ashok and Brian are both walking east along the same path [#permalink]
02 Oct 2013, 01:01
Ashok and Brian are both walking east along the same path; Ashok walks at a faster constant speed than does Brian. If Brian starts 30 miles east of Ashok and both begin walking at the same time, how many miles will Brian walk before Ashok catches up with him?
A---(30 miles)---B--->
(1) Brian’s walking speed is twice the difference between Ashok’s walking speed and his own. b=2(a-b) --> a-b=b/2, where a and b are Ashok and Brian speeds, respectively. Ashok catches up in (time)=(distance)/(relative rate)=30/(a-b)=30/(b/2)=60/b. In that time Brian will cover (distance)=(rate)*(time)=b*60/b=60 miles. Sufficient.
(2) If Ashok’s walking speed were five times as great, it would be three times the sum of his and Brian’s actual walking speeds --> 5a=3(a+b) --> 2a=3b --> the same info as above. Sufficient.
Hope it's clear.
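A quick numeric check of this result (a Python sketch, not part of the original post; the speed below is an arbitrary assumption):

b = 4.0              # Brian's speed, arbitrary choice
a = 1.5 * b          # from 2a = 3b
t = 30 / (a - b)     # hours for the 30-mile gap to close
print(b * t)         # 60.0 -- miles Brian walks, independent of b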
##### General Discussion
Current Student
Status: Chasing my MBB Dream!
Joined: 29 Aug 2012
Posts: 1118
Location: United States (DC)
WE: General Management (Aerospace and Defense)
Re: Ashok and Brian are both walking east along the same path [#permalink]
01 Oct 2013, 08:08
(quoting the question above)
Statement 1 :
B= 2[A-B]-> 3B=2A,
So A=(3/2)B
We have both A and B's walking speed, so we can find the distance.. Statement 1 is sufficient,
So eliminate : B, C and E.
Statement 2 :
If Ashok's speed were five times as great, i.e. 5A,
then 5A = 3[A+B],
Here also we can find the distance, since we have both A and B's walking speed. Statement 2 is also sufficient.
So answer is D- Each statement alone is sufficient.
Manager
Joined: 09 Nov 2012
Posts: 62
Re: Ashok and Brian are both walking east along the same path [#permalink]
01 Oct 2013, 15:55
GNPTH wrote:
saintforlife wrote:
Statement 1 :
B= 2[A-B]-> 3B=2A,
So A=(3/2)B
We have both A and B's walking speed, so we can find the distance.. Statement 1 is sufficient,
At this point you just have one equation that connects A and B, i.e. Ashok's and Brian's speeds. How do you conclude at this stage that the statement is sufficient without substituting for Ashok's distance (D + 30) and Brian's distance D and solving the equation for D?
We have Brian’s rate is D / t. Ashok’s rate is (D + 30) / t.
D = 2(D + 30 – D)
D = 2(30)
D = 60
Don't you need to do at least some of the above steps before concluding Statement 1 is sufficient? Am I missing an obvious trick here?
Current Student
Status: Chasing my MBB Dream!
Joined: 29 Aug 2012
Posts: 1118
Location: United States (DC)
WE: General Management (Aerospace and Defense)
Re: Ashok and Brian are both walking east along the same path [#permalink]
01 Oct 2013, 22:33
(quoting saintforlife's question above)
Hi, yes, I'm aware we have to use Ashok's distance, and these are given in the question. Here we don't need to find the actual answer from the statements;
we only have to see whether the statements are sufficient to solve the problem.
In this question each statement alone is sufficient, so we go ahead and mark the answer as D.
Hope it helps
Senior Manager
Joined: 13 Jan 2012
Posts: 282
Weight: 170lbs
GMAT 1: 740 Q48 V42
GMAT 2: 760 Q50 V42
WE: Analyst (Other)
Re: Ashok and Brian are both walking east along the same path [#permalink]
10 Dec 2013, 14:47
(quoting Bunuel's solution above)
Bunuel, would you mind elaborating on the concept/logic at play here? I'm guessing that this is such a hard problem for people because they fail to see how you can use the ratio of the speeds to solve the problem.
Math Expert
Joined: 02 Sep 2009
Posts: 53066
Re: Ashok and Brian are both walking east along the same path [#permalink]
11 Dec 2013, 02:13
(quoting the exchange above)
We are using the relative speed concept here. The distance between A and B is 30 miles and they move in the same direction. Their relative speed is a-b miles per hour, thus A will need (time)=(distance)/(relative rate)=30/(a-b) hours to catch up. In that time B will cover (distance)=(rate)*(time)=b*30/(a-b), which with a-b=b/2 gives 60 miles.
Hope it's clear.
SVP
Joined: 06 Sep 2013
Posts: 1693
Concentration: Finance
Re: Ashok and Brian are both walking east along the same path [#permalink]
31 Dec 2013, 07:28
(quoting the question above)
Clearly testing the concept of relative rates and ratios
Statement 1
B = 2(A-B)
3B = 2A
We have the ratio, so for every 3 miles A travels, B travels 2 miles; A therefore closes the 30-mile gap at a steady rate, and the distance B walks can be determined.
Statement 2
Basically gives the same info
5A = 3(A+B)
2A = 3B
Hence Sufficient
Hope it helps
Cheers!
J
Intern
Joined: 13 Apr 2014
Posts: 11
Re: Ashok and Brian are both walking east along the same path [#permalink]
15 Apr 2014, 06:55
(quoting Bunuel's solution above)
tnx for this
e-GMAT Representative
Joined: 04 Jan 2015
Posts: 2597
Re: Ashok and Brian are both walking east along the same path [#permalink]
12 May 2015, 22:37
An alternate solution without using the concept of Relative Speed:
Representing the given information visually, we easily see that:
(Time taken by Brian to cover D miles) = (Time taken by Ashok to cover (D+30) miles) . . . .(1)
Now, $$Time = \frac{Distance}{Speed}$$
So, we can rewrite Equation 1 as:
$$\frac{D}{B} = \frac{D+30}{A}$$ . . . (2)
From Equation 2, it's clear that, in order to find the value of D, we need to know either the values of A and B, or the ratio of A and B.
With this understanding, let's move to the two statements:
St. 1 says
B = 2(A - B)
From this equation, we can get the ratio of A and B.
Sufficient.
St. 2 says
5A = 3(A+B)
From this equation as well, we can get the ratio of A and B
Sufficient.
So, correct answer: Option D.
Hope this was useful!
Japinder
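A symbolic check of the same equation (a sketch in Python assuming sympy; not part of the original post):

from sympy import symbols, Eq, solve, Rational
D, B = symbols('D B', positive=True)
A = Rational(3, 2) * B                      # ratio from either statement: 2A = 3B
print(solve(Eq(D / B, (D + 30) / A), D))    # [60]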
Manager
Joined: 26 Feb 2015
Posts: 114
Ashok and Brian are both walking east along the same path [#permalink]
23 May 2015, 01:13
I found that the easiest way to answer this question is to use actual numbers
From Statement 1 we get: $$B=2(A-B)$$ --> $$2A = 3B$$
So if B walks at a pace of 20 mph, A will walk at 30 mph.
Now: For A to catch up, he needs to first cover those 30 miles.
30x = 20x + 30 --> 10x = 30. x = 3
So in three hours, A will catch up to B.
Now, since B walks at 20 mph, he will have covered 20 * 3, 60 miles before A is right next to him.
This works no matter the actual speed, as long as their ratio is the same.
Try with A walks 15 mph, and B walks 10 mph.
Now, we have: 15x = 10x + 30. --> x = 6
So in 6 hours, B will have walked 6 * 10 = 60 miles.
Same logic applies to Statement 2.
Current Student
Joined: 02 Jul 2017
Posts: 293
Concentration: Entrepreneurship, Technology
GMAT 1: 730 Q50 V38
Re: Ashok and Brian are both walking east along the same path [#permalink]
02 Sep 2017, 22:53
Let Speed of Ashok = Sa
Speed of Brian = Sb
Now given :
Both starts walking towards east
Sa > Sb
A -----30 miles-----B------x miles-----Meeting point of A and B
So B starts 30 miles ahead of A, and both start at the same time.
Say A catches up with B after B has walked x more miles...
So distance traveled by B = x miles.
distance traveled by A = 30 +x miles.
Both took same time to reach the point. Time= t
as time = distance /speed.
For A time = t= (30+x)/Sa
For B time =t= x/Sb
We can equate time for both as
(30+x)/Sa = x/Sb.
To solve this we should know Sa, Sb and x
1. Given Sb = 2(Sa-Sb) => 3Sb = 2Sa. Since we know the relation between Sa and Sb, we can find the value of x:
substituting for Sb in the equation makes the speed terms cancel out, leaving only x.
Sufficient
2. Given 5Sa = 3(Sa+Sb) => 2Sa = 3Sb. Again we know the relation between Sa and Sb, so we can find x:
substituting for Sb in the equation makes the speed terms cancel out, leaving only x.
Sufficient.
Intern
Joined: 26 Oct 2014
Posts: 22
Re: Ashok and Brian are both walking east along the same path [#permalink]
13 Nov 2017, 08:09
(quoting Bunuel's solution above)
Hi Bunuel,
I got the answer - 60 miles. But I did not understand the question. I think the question is trying too hard to be simple and yet ends up confusing. Where does this question come from? Is it from GMAC or GMAT Club? I am sorry, I am just frustrated.
Math Expert
Joined: 02 Sep 2009
Posts: 53066
Re: Ashok and Brian are both walking east along the same path [#permalink]
13 Nov 2018, 08:14
(quoting Liza99's question above)
You can check the source among the tags above the first post.
Study Buddy Forum Moderator
Joined: 04 Sep 2016
Posts: 1300
Location: India
WE: Engineering (Other)
Re: Ashok and Brian are both walking east along the same path [#permalink]
24 Dec 2017, 17:09
niks18 Bunuel amanvermagmat
Can you explain why we did not take Ashok's speed as A-B, as in the relative velocity concept?
I was unable to derive a unique solution for $$\frac{D}{B} = \frac{D+30}{A-B}$$
when I substituted it into the EgmatQuantExpert / Nikkb approach.
Retired Moderator
Joined: 25 Feb 2013
Posts: 1217
Location: India
GPA: 3.82
Re: Ashok and Brian are both walking east along the same path [#permalink]
25 Dec 2017, 08:39
(quoting adkikani's question above)
A-B is the relative speed between the two individuals as would be visible to each other. Suppose you are walking ahead of me at a speed of 30 units and I am walking at a speed of 20 units, then for me your effective speed is only 30-20=10 units because I am also moving. But this does not mean your speed has reduced to 10 units in fact your actual speed remains 30 units. So you could not substitute A-B for the speed of A
CEO
Joined: 11 Sep 2015
Posts: 3447
Ashok and Brian are both walking east along the same path [#permalink]
06 Nov 2018, 06:18
(quoting the question above)
GIVEN: When the men start walking, Brian has a 30-mile lead
Let B = Brian's walking speed (in miles per hour)
Let A = Ashok's walking speed (in miles per hour)
Since Ashok's speed is greater than Brian's speed, the rate at which the gap shrinks = (A - B) miles per hour
For example, if A = 5 and B = 2, then the 30-mile gap will shrink at a rate of (5 - 2) mph.
time = distance/speed
So, time for 30-mile gap to shrink to zero = 30/(A - B)
Target question: How many miles will Brian walk before Ashok catches up with him?
This is a good candidate for rephrasing the target question.
distance = (speed)(time)
So, the distance Brian travels = (B)(30/(A - B))
Simplify to get: 30B/(A - B)
REPHRASED target question: What is the value of 30B/(A - B)?
Statement 1: Brian’s walking speed is twice the difference between Ashok’s walking speed and his own.
We can write: B = 2(A - B)
Expand: B = 2A - 2B
This means: 3B = 2A
So: 3B/2 = A
Or we can say: 1.5B = A
Now take 30B/(A - B) and replace A with 1.5B to get: 30B/(1.5B - B)
Simplify: 30B/(0.5B)
Simplify: 30/0.5
Evaluate to get: 60 (miles)
Perfect!! The answer to the REPHRASED target question is Brian will travel 60 miles
Since we can answer the REPHRASED target question with certainty, statement 1 is SUFFICIENT
Statement 2: If Ashok’s walking speed were five times as great, it would be three times the sum of his and Brian’s actual walking speeds.
We can write: 5A = 3(A + B)
Expand: 5A = 3A + 3B
Rewrite as: 2A = 3B
We get: A = 3B/2 = 1.5B
At this point, we're at the same place we got to for statement 1.
So, since statement 1 is sufficient, we know that statement 2 is also sufficient.
Intern
Joined: 03 Oct 2018
Posts: 4
Re: Ashok and Brian are both walking east along the same path [#permalink]
06 Nov 2018, 12:26
May I ask why statement one fails when I use the equations:
a*t = 30 + b*t
(3/2)*b*t = 30 + b*t
0.5*b*t = 30
which obviously has two unknown variables.
RC Moderator
Joined: 24 Aug 2016
Posts: 684
Concentration: Entrepreneurship, Operations
GMAT 1: 630 Q48 V28
GMAT 2: 540 Q49 V16
Re: Ashok and Brian are both walking east along the same path [#permalink]
06 Nov 2018, 13:43
Say the distance be x, A's speed a, and B's speed b.
Thus, according to the question stem, (30+x)/a = x/b --- eqn 1
1) b = 2(a-b) ==> 2a = 3b; now if we replace the value of either a or b we get a unique value of x --- thus Sufficient.
2) 5a = 3(a+b) ==> 2a = 3b --- which is essentially Statement 1, and Sufficient for the same reason.
Hence Ans D.
Target Test Prep Representative
Status: Founder & CEO
Affiliations: Target Test Prep
Joined: 14 Oct 2015
Posts: 4972
Location: United States (CA)
Re: Ashok and Brian are both walking east along the same path [#permalink]
07 Nov 2018, 18:17
(quoting the question above)
This is a catching up problem. Recall that the time needed for the faster person (Ashok) to catch up with the slower person (Brian) is (difference in distances)/(difference in speeds). Here the difference in their distances is 30, and the difference in their speeds is (a - b), where a is Ashok's speed and b is Brian's speed. So the time for Ashok to catch up with Brian is 30/(a - b). If we can determine that, then we can determine the distance walked by Brian.
Statement One Alone:
Brian’s walking speed is twice the difference between Ashok’s walking speed and his own.
We are given that b = 2(a - b). That is, b = 2a - 2b → 3b = 2a → a = 3b/2.
Therefore, the time for Ashok to catch up with Brian is 30/(a - b) = 30/(3b/2 - b) = 30/(b/2) = 60/b, and the distance walked by Brian is b x 60/b = 60 miles. Statement one alone is sufficient.
Statement Two Alone:
If Ashok’s walking speed were five times as great, it would be three times the sum of his and Brian’s actual walking speeds.
We are given that 5a = 3(a + b). That is, 5a = 3a + 3b → 2a = 3b → a = 3b/2.
We can see that this statement provides the same information as statement one. So statement two is also sufficient.
|
2019-02-23 13:04:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6355190873146057, "perplexity": 4167.292308590917}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249501174.94/warc/CC-MAIN-20190223122420-20190223144420-00311.warc.gz"}
|
https://www.gemcircuits.co.uk/tools/inductance_edge_coupled.aspx
|
Inductance
Edge Coupled Trace Inductance Calculator
Inputs: Trace Width (w), Trace Separation (s), Trace Length (l), Trace Thickness (t), Relative Permeability (µr)
Output
Inductance: 3.75 x 10^-8 H
Inductance per unit length: 7.50 x 10^-7 H/m
Calculation Notes
Coplanar traces are commonly found in printed circuit boards, where one trace is the signal and the other is the return. The trace separation should be constant along the trace length.
Factors that influence the inductance calculation include:-
• Trace width (w)
• Trace separation (s)
• Trace length (l)
• Trace thickness (t)
• Relative permeability (µr)
$\mu_{0} = 4\pi \times 10^{-7}\;\mathrm{Wb\,A^{-1}\,m^{-1}}$ (permeability of free space)
Inductance $\approx l\times\frac{\mu_{0}\mu_{r}}{\pi}\times\cosh^{-1}\frac{s}{w}$
where $s \gg w$ and $w > t$
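As a small numerical illustration (a Python sketch; the input values below are my own examples, not the calculator's defaults):

import math
mu0 = 4 * math.pi * 1e-7            # permeability of free space, H/m
mur = 1.0
w, s, l = 0.2e-3, 2.0e-3, 50e-3     # width, separation, length in metres (s >> w)
L = l * (mu0 * mur / math.pi) * math.acosh(s / w)
print(L, 'H;', L / l, 'H/m')        # roughly 6.0e-8 H and 1.2e-6 H/m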
References:
• Electrical Circuit Theory and Technology (ISBN 978-1-85617-770-2)
• Signal Integrity Issues and Printed Circuit Board Design (ISBN 0-13-335947-6)
Disclaimer: The information and this tool are provided with no liability of any kind whatsoever, use at your own risk.
|
2022-08-10 17:32:33
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9621412754058838, "perplexity": 13804.123483113543}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571198.57/warc/CC-MAIN-20220810161541-20220810191541-00228.warc.gz"}
|
https://boredofstudies.org/threads/need-help-past-hsc-absolute-error-and-arc-length-questions.387405/
|
# Need Help!! Past HSC Absolute Error and Arc Length Questions
#### shyliix
##### New Member
I have an assessment where I need to create a summary booklet of the different topics we cover in maths over the prelim and HSC years, and I'm really struggling to find past HSC questions on Absolute Error and Arc Length to put into the booklet. Any help is appreciated!
#### blyatman
##### Active Member
I'm assuming this is for new syllabus? The old HSC syllabus didn't have absolute error nor arc length, so you won't find any of those questions in the past papers.
#### BLIT2014
##### The pessimistic optimist.
Moderator
I'm assuming this is for new syllabus? The old HSC syllabus didn't have absolute error nor arc length, so you won't find any of those questions in the past papers.
I've seen arc length and absolute error questions in the Mathsquest book for 2014/2015 HSC students. So this is not entirely correct.
#### BLIT2014
##### The pessimistic optimist.
Moderator
What’s the earliest you’ve been going back? Might be some when General Mathematics was known as Maths In Society.
#### blyatman
##### Active Member
My bad, thought it was referring to 2u/3u/4u. No idea what the general math syllabus is.
I'm assuming arc length refers to the equation:
$L=\int_a^b \sqrt{1+[f'(x)]^2}\,dx$
As this is not examined in 2u/3u/4u, I'd be very surprised if it was in general math. Unless, of course, it's some other non-calculus-based formula.
#### BLIT2014
##### The pessimistic optimist.
Moderator
(quoting blyatman's post above)
No. Arc length of a circle formula found on page 2 of the General Mathematics formula sheet https://educationstandards.nsw.edu.au/wps/wcm/connect/d1794864-ef77-4a58-9a70-330fdc9714b2/mathematics-general-2-formulae-and-data-sheet-hsc.pdf?MOD=AJPERES&CVID=
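For completeness, that formula-sheet arc length (l = (theta/360) x 2*pi*r) is easy to sanity-check in code (a minimal Python sketch):

import math
def arc_length(theta_deg, r):
    # fraction of the full circumference subtended by the central angle
    return (theta_deg / 360.0) * 2 * math.pi * r
print(arc_length(90, 5))   # quarter circle of radius 5 -> about 7.854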
Does it have to be HSC questions?
|
2019-07-19 12:44:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7361035943031311, "perplexity": 2342.4770815651864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526237.47/warc/CC-MAIN-20190719115720-20190719141720-00068.warc.gz"}
|
https://motls.blogspot.com/2005/01/help-victims.html?m=1
|
## Tuesday, January 04, 2005
### Help the victims
I only made my first contribution right now, using the "American Red Cross" interface at
If you don't have an account at amazon.com yet, you should definitely create one. The payment is very easy and convenient.
This short posting is addressed to you - I really mean you who is just reading this sentence, not anyone else! You may be surprised how could I know in advance that exactly you would be reading this sentence exactly at this moment - but you know, string theory is a very predictive theory. ;-) The people in Asia suffered a lot, and many of them believe that God was punishing them. You know that it's not true, don't you?
The people in the affected areas will be happy (and grateful) for every package of rice they get. This is a definitely good investment for you if you want to do something good that does not cost too much.
So far, the world has collected roughly 3 billion dollars (at least 630 million comes from America, which includes the government, corporations, as well as individuals, and which is a higher fraction than the share America has of the world's GDP - the latter is one fifth), and the modest estimates say that at least 2 billion more will be needed for some kind of basic reconstruction - and I hope that you share my belief that the survivors deserve more than that. The Japanese government became the largest donor among the governments (500 million).
I would specifically like to encourage the left-wing readers to donate something because so far it does not seem too obvious that the Left - which normally cares about the poor and unlucky people so much, at least verbally - has contributed a significant amount of money. What are the George Soroses and Michael Moores doing right now when their money is needed? The donations and aid is, so far, dominated by the right-wing US government and the "greedy" corporations.
I guess that the really left-wing people and governments won't donate much. But I urge others, moderate left-wing people and governments. Gerhard Schröder: I know that you're reading my blog. 20 million is pathetic. What about raising it to 500 million Euro, for example? I will appreciate it. Thank you.
1. Very nice of you, Prof. Motl.
I did my bit. I like to diversify so chose
* Red Cross
* Oxfam
* Medecins sans Frontiers (Doctors withothout Borders)
* Sarvodaya
It is good that the personal contributions are exceeding national government contributions: that is the way it should be.
Slightly off-topic, but actually, most poor countries would prefer trade to 'foreign aid'---"Trade not Aid" is the slogan. Trade distributes aid more equitably; aid ends up benefitting very few (corrupt) individuals.
So the 'left-wing' (including N America and Europe) would do best to gradually start removing subsidies, especially agricultural subsidies, which are hurting the poorest nations immeasurably.
you are the one who deserves the thanks! Such a diversity suggests that you were not quite "stingy". ;-) I also prefer the situation in which the real people, as opposed to abstract governments and other institutions, help other people - because it is more authentic, legitimate, and emotional.
I also agree with you that the most affected countries should be given the freedom to export their products and services anywhere in the world - without any trade barriers - because this is the most natural way to help nearly everyone, as opposed to - possibly corrupt - officials. It's the way to help not only with the hunger and potential diseases, but also with the feeling of every person that he or she lives a full life.
It will be important for the region to switch to some active mode of life - as opposed to just saving their lives - and my personal guess is that the help of spontaneously chosen leaders will be more efficient than the help that is waiting for the signatures of all nations in the U.N.
Sincerely
Lubos
3. when you refer to "tax dollars," I am sure you understand this is a gift from the people of a democracy?
But yes of course it is a good thing, such help.
Maybe a UN infrastructure for early warning systems for people in isolated areas (e.g., horns blaring) would be good for the world community?
4. The donations and aid is, so far, dominated by the right-wing US government and the "greedy" corporations.
Just to set the record straight on your left bashing. The (left wing) Canadian government has put up $35 million, and the Canadian people have donated $65 million to private charity (those numbers will likely both go up, check cbc.ca for the latest numbers). Per person, that's comparable to the US donations. Not to mention that you explicitly say
The Japanese government became the largest donor among the governments (500 million).
Which is sorta at odds with your other claim.
Also, from what I've been hearing and reading, the single most helpful thing in the region at the moment is the presence of troops (US and Australian, IIRC). In that sense, I think government aid is probably more valuable than private donations, at least in the immediate aftermath of such a horrible disaster. The US army has helicopters, C-130 cargo planes, desalination ships, engineering corps, and a whole host of other things that no amount of private donations, or non-govermental organization can replace.
Yes, surprisingly, I realize that the government money is tax money from the people (who can't really decide how this money is going to be spent). 80% of this tax money was paid by the richest 20% of people, or however the counting goes, and most of these are on the right wing.
Also, for another person who posted. Japan confirms my claims even better because the Japanese government is currently formed by LDP, which is the largest right-wing conservative party in Japan:
http://en.wikipedia.org/wiki/Liberal_Democratic_Party_of_Japan
http://en.wikipedia.org/wiki/Junichiro_Koizumi
Also, Canadians are not always left-wing, and Alberta is even a part of the Jesusland. ;-) I think it is not too easy to falsify the statement that the more left-wing a group of people is, the less he contributed.
Without detracting from the idealization of the helping hand, I think healthcare does not like to discriminate between the rich man and the poor man, but if you have a few more dollars, treatment is always a little better, and quicker?
Here's a story from the left-wing Lubos and "Jesusland"?LOL
Mouseland – A Political Fable told by Tommy Douglas in 1944. It's the story of a place called Mouseland. Mouseland was a place where all the little mice lived and played, were born and died. And they lived much the same as you and I do.
They even had a Parliament. And every four years they had an election. Used to walk to the polls and cast their ballots. Some of them even got a ride to the polls. And got a ride for the next four years afterwards too. Just like you and me. And every time on election day all the little mice used to go to the ballot box and they used to elect a government. A government made up of big, fat, black cats.
Now if you think it strange that mice should elect a government made up of cats, you just look at the history of Canada for last 90 years and maybe you'll see that they weren't any stupider than we are.
Now I'm not saying anything against the cats. They were nice fellows. They conducted their government with dignity. They passed good laws--that is, laws that were good for cats. But the laws that were good for cats weren't very good for mice. One of the laws said that mouseholes had to be big enough so a cat could get his paw in. Another law said that mice could only travel at certain speeds--so that a cat could get his breakfast without too much effort.
All the laws were good laws. For cats. But, oh, they were hard on the mice. And life was getting harder and harder. And when the mice couldn't put up with it any more, they decided something had to be done about it. So they went en masse to the polls. They voted the black cats out. They put in the white cats.
Now the white cats had put up a terrific campaign. They said: "All that Mouseland needs is more vision." They said: "The trouble with Mouseland is those round mouseholes we got. If you put us in we'll establish square mouseholes." And they did. And the square mouseholes were twice as big as the round mouseholes, and now the cat could get both his paws in. And life was tougher than ever.
And when they couldn't take that anymore, they voted the white cats out and put the black ones in again. Then they went back to the white cats. Then to the black cats. They even tried half black cats and half white cats. And they called that coalition. They even got one government made up of cats with spots on them: they were cats that tried to make a noise like a mouse but ate like a cat.
You see, my friends, the trouble wasn't with the colour of the cat. The trouble was that they were cats. And because they were cats, they naturally looked after cats instead of mice.
Presently there came along one little mouse who had an idea. My friends, watch out for the little fellow with an idea. And he said to the other mice, "Look fellows, why do we keep on electing a government made up of cats? Why don't we elect a government made up of mice?" "Oh," they said, "he's a Bolshevik. Lock him up!" So they put him in jail.
But I want to remind you: that you can lock up a mouse or a man but you can't lock up an idea.
Hope you like it. Good for you, for taking the initiative, even if you are a mouse.:)
Quoting Wikipedia about Japan's LDP: "The Liberal Democratic party is Japan's largest right-wing and conservative party. Its name is a misnomer; the party is not liberal."
They are wrong: it is not a misnomer. It's just that the common connotation of the word "liberal" in North America is utterly twisted and ahistoric. Liberalism, especially in Europe, has a proud tradition of being a principled approach resting firmly on conservative values. The naive interpretation of the word liberal as meaning "I don't care, I am pro-choice", "Let them do", etc. pp., reveals sub-standard education. In fact, it is quite amusing that left-wingers incorrectly call themselves liberals, giving away their foolishness from the outset every chance they get. :))
Incidentally, Germany's conservative, pro-business party FDP is widely referred to as "Die Liberalen" ("the liberals"). They were part of the government for 16 consecutive years before the current ochlocratic, socialist government was elected (which has now been in power for over 6 years).
BTW, I know that none of you, in particular not Lubos, made the mistake. It's Wikipedia that's not up to speed.
Best,
Michael
PS: I am proud to say that I am a conservative liberal and, of course, I did vote for Bush. ;))
8. Concerning the word "liberal".
Michael, I kind of agree; you may guess that as a European, I naturally consider the word "liberal" in the European definition and tradition - myself being a liberal conservative, which can only be a contradiction in America. ;-) The word "liberal" (EU) really means "libertarian", kind of.
You know that I did not make a mistake, don't you? What I wrote was that the LDP is a right-wing party. Is that wrong?
Wikipedia is in various cases not up to speed, but various people may disagree what are the cases. ;-)
9. By the way, it seems as if the left-wing German government has decided to donate 500 million Euros... how does that fit into your picture of missing contributions from non-right-wing persons / organisations? ;-)
10. As far as I can see, you got it right, Lubos. I was criticizing Wikipedia.
To the previous poster: Some German socialists are talking about 500 million Euros in aid now, yes. It just is not going to happen. Germany's government is so cash-strapped by now, they had to cancel the official New Year's firework at the Brandenburg Gate in Berlin. Of course, they say it's out of respect for the victims, but the truth is that they sent one bag of rice to Asia and went bankrupt. ;))
So far they have given but 20 Million Euros. Let's not give them any credit for words without commitment.
Michael
11. This comment has been removed by a blog administrator.
12. OK, Michael, you are (unfortunately) right... but maybe you should call them social democrats and not socialists... ;-)
13. Yeah, right... how could I make such a stupid mistake!? ;))
Michael
14. What do I say if the German moderately socialist government really pays 500 million euro? I will take credit for it because this was the goal of my article! ;-)
15. The main culprits are G8 countries - Germany, Canada, US, Italy, Japan, UK and France – who not only signed up to the Millennium Goals but agreed more than 30 years ago to spend 0.7 percent of their national incomes on aid. Thirty-four years on none of the G8 have met this target and on average, rich countries today give half as much as a proportion of their income as they did in the 1960s.
Predictably, rich countries have other priorities. In a world in which a staggering $600 billion (450 billion Euro) is spent on defence globally each year, wealthy nations spend between just$50-60 billion on overseas aid. In 2003, US spending on foreign aid measured just one-tenth of what it spent on Iraq.
And no wonder the public is sometimes cynical. New research shows that currently only 40% of the money counted officially as aid from rich governments actually reaches the poorest and even that is often seriously delayed. For example 20% of aid from the EU arrives over a year late and 92% of Italian aid is spent on Italian goods and services. They are not alone – the US ties about 70% to US firms – not a bad Christmas present for American businesses.
http://www.oxfam.org.uk/what_we_do/issues/debt_aid/art_hobbs_price.htm
16. Listen to this leftist! (our favorite south pacific doctor)
http://mediamatters.org/items/200501050006
17. Hi Lubos,
could it be that your last paragraph addressing Gerhard Schroeder changed slightly after your first post... ;-) Otherwise, if they really do pay the promised amount of money, list it as an achievement in your CV... ;-) ...it always looks good.
|
2022-01-21 00:04:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25974103808403015, "perplexity": 2543.0473162631165}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302706.62/warc/CC-MAIN-20220120220649-20220121010649-00263.warc.gz"}
|
http://experiment-ufa.ru/Prime-factorization-of-555
|
# Prime factorization of 555
If this is not what you are looking for, type your own integer in the field below and you will get the solution.
Prime factorization of 555:
To find the prime factorization of 555 we follow 5 simple steps:
1. We write the number 555 above a 2-column table
2. We divide 555 by the smallest possible prime factor
3. We write the prime factor on the left side of the table and the next number to factorize on the right side
4. We continue to factor in this fashion (we deal with odd numbers by trying small prime factors)
5. We continue until we reach 1 on the right side of the table
555

| prime factors | number to factorize |
|---|---|
| 3 | 185 |
| 5 | 37 |
| 37 | 1 |
Prime factorization of 555 = $1 \times 3 \times 5 \times 37$
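The five steps above are plain trial division. A minimal sketch in Python (the function name is ours, purely illustrative, not part of this site):

```python
# Trial-division factorization, mirroring the 5 steps above.
def prime_factors(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:          # divide out the current prime factor repeatedly
            factors.append(d)
            n //= d
        d += 1 if d == 2 else 2    # after 2, only odd candidates need testing
    if n > 1:                      # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(555))          # [3, 5, 37]
```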
|
2018-05-25 18:20:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5025382041931152, "perplexity": 7769.00857119738}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867173.31/warc/CC-MAIN-20180525180646-20180525200646-00340.warc.gz"}
|
http://www.math.columbia.edu/~woit/wordpress/
|
## Something Deeply Hidden
Sean Carroll’s new (available in stores early September) book, Something Deeply Hidden, is a quite good introduction to issues in the understanding of quantum mechanics, unfortunately wrapped in a book cover and promotional campaign of utter nonsense. Most people won’t read much beyond the front flap, where they’ll be told:
Most physicists haven’t even recognized the uncomfortable truth: physics has been in crisis since 1927. Quantum mechanics has always had obvious gaps—which have come to be simply ignored. Science popularizers keep telling us how weird it is, how impossible it is to understand. Academics discourage students from working on the “dead end” of quantum foundations. Putting his professional reputation on the line with this audacious yet entirely reasonable book, Carroll says that the crisis can now come to an end. We just have to accept that there is more than one of us in the universe. There are many, many Sean Carrolls. Many of every one of us.
This kind of ridiculous multi-worlds woo is by now rather tired, you can find variants of it in a host of other popular books written over the past 25 years. The great thing about Carroll’s book though is that (at least if you buy the hardback) you can tear off the dust jacket, throw it away, and unlike earlier such books, you’ll be left with something well-written, and if not “entirely reasonable”, at least mostly reasonable.
Carroll gives an unusually lucid explanation of what the standard quantum formalism says, making clear the ways in which it gives a coherent picture of the world, but one quite a bit different than that of classical mechanics. Instead of the usual long discussions of alternatives to QM such as Bohmian mechanics or dynamical collapse, he deals with these expeditiously in a short chapter that appropriately explains the problems with such alternatives. The usual multiverse mania that has overrun particle theory (the cosmological multiverse) is relegated to a short footnote (page 122) which just explains that that is a different topic. String theory gets about half a page (discussed with loop quantum gravity on pages 274-5). While the outrageously untrue statement is made that string theory “makes finite predictions for all physical quantities”, there’s also the unusually reasonable “While string theory has been somewhat successful in dealing with the technical problems of quantum gravity, it hasn’t shed much light on the conceptual problems.” AdS/CFT gets a page or so (pages 303-4), with half of it devoted to explaining that its features are specific to AdS space, about which “Alas, it’s not the real world.” He has this characterization of the situation:
There’s an old joke about the drunk who is looking under a lamppost for his lost keys. When someone asks if he’s sure he lost them there, he replies, “Oh no, I lost them somewhere else, but the light is much better over here.” In the quantum-gravity game, AdS/CFT is the world’s brightest lamppost.
I found Carroll’s clear explanations especially useful on topics where I disagree with him, since reading him clarified for me several different issues. I wrote recently here about one of them. I’ve always been confused about whether I fall in the “Copenhagen/standard textbook interpretation” camp or “Everett” camp, and reading this book got me to better understanding the difference between the two, which I now think to a large degree comes down to what one thinks about the problem of emergence of classical from quantum. Is this a problem that is hopelessly hard or not? Since it seems very hard to me, but I do see that limited progress has been made, I’m sympathetic to both sides of that question. Carroll does at times too much stray into the unfortunate territory of for instance Adam Becker’s recent book, which tried to make a morality play out of this difference, with Everett and his followers fighting a revolutionary battle against the anti-progress conservatives Bohr and Heisenberg. But in general he’s much less tendentious than Becker, making his discussion much more useful.
The biggest problem I have with the book is the part referenced by the unfortunate material on the front flap. I’ve never understood why those favoring so-called “Multiple Worlds” start with what seems to me like a perfectly reasonable project, saying they’re trying to describe measurement and classical emergence from quantum purely using the bare quantum formalism (states + equation of motion), but then usually start talking about splitting of universes. Deciding that multiple worlds are “real” never seemed to me to be necessary (and I think I’m not the only one who feels this way, evidently Zurek also objects to this). Carroll in various places argues for a multiple world ontology, but never gives a convincing argument. He finally ends up with this explanation (page 234-5):
The truth is, nothing forces us to think of the wave function as describing multiple worlds, even after decoherence has occurred. We could just talk about the entire wave function as a whole. It’s just really helpful to split it up into worlds… characterizing the quantum state in terms of multiple worlds isn’t necessary – it just gives us an enormously useful handle on an incredibly complex situation… it is enormously convenient and helpful to do so, and we’re allowed to take advantage of this convenience because the individual worlds don’t interact with one another.
My problem here is that the whole splitting thing seems to me to lead to all sorts of trouble (how does the splitting occur? what counts as a separate world? what characterizes separate worlds?), so if I’m told I don’t need to invoke multiple worlds, why do so? According to Carroll, they’re “enormously convenient”, but for what (other than for papering over rather than solving a hard problem)?
In general I’d rather avoid discussions of what’s “real” and what isn’t (e.g. see here) but, if one is going to use the term, I am happy to agree with Carroll’s “physicalist” argument that our best description of physical reality is as “real” as it gets, so the quantum state is preeminently “real”. The problem with declaring “multiple worlds” to be “real” is that you’re now using the word to mean something completely different (one of these worlds is the emergent classical “reality” our brains are creating out of our sense experience). And since the problem here (classical emergence being just part of it) is that you don’t understand the relation of these two very different things, any argument about whether another “world” besides ours is “real” or not seems to me hopelessly muddled.
Finally, the last section of the book deals with attempts by Carroll to get “space from Hilbert space”, see here, which the cover flap refers to as “His [Carroll’s] reconciling of quantum mechanics with Einstein’s theory of relativity changes, well, everything.” The material in the book itself is much more reasonable, with the highly speculative nature of such ideas emphasized. Since Carroll is such a clear writer, reading these chapters helped me understand what he’s trying to do and what tools he is using. From everything I know about the deep structure of geometry and quantum theory, his project seems to me highly unlikely to give us the needed insight into the relation of these two subjects, but no reason he shouldn’t try. On the other hand, he should ask his publisher to pulp the dust jackets…
Update: Carroll today on Twitter has the following argument from his book for “Many Worlds”:
Once you admit that an electron can be in a superposition of different locations, it follows that a person can be in a superposition of having seen the electron in different locations, and indeed that reality as a whole can be in a superposition, and it becomes natural to treat every term in that superposition as a separate "world".
“Becomes natural” isn’t much of an argument (faced with a problem, there are “natural” things to do which are just wrong and don’t solve the problem). To me, saying one is going to “treat every term in that superposition as a separate “world”” may be natural to you, but it doesn’t actually solve any problem, instead creating a host of new ones.
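Schematically (our notation, not Carroll's), the superposition-of-observers argument is just the standard von Neumann measurement chain:

$$\big(\alpha|x_1\rangle + \beta|x_2\rangle\big)\otimes|\text{ready}\rangle \;\longrightarrow\; \alpha\,|x_1\rangle\otimes|\text{saw }x_1\rangle \;+\; \beta\,|x_2\rangle\otimes|\text{saw }x_2\rangle.$$

Calling each branch on the right a separate "world" is exactly the extra step being objected to here.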
The book Many Worlds?: Everett, Quantum Theory and Reality gathers various essays, including
Simon Saunders, Introduction
David Wallace, Decoherence and Ontology
Adrian Kent, One World Versus Many
David Wallace’s book, The Emergent Multiverse.
Blog postings from Jess Riedel here and here.
This from Wojciech Zurek, especially the last section, including parts quoted here.
Last Updated on
Posted in Book Reviews, Multiverse Mania | 21 Comments
## What’s the difference between Copenhagen and Everett?
I’ve just finished reading Sean Carroll’s forthcoming new book, will write something about it in the next few weeks. Reading the book and thinking about it did clarify various issues for me, and I thought it might be a good idea to write about one of them here. Perhaps readers more versed in the controversy and literature surrounding this issue can point me to places where it is cogently discussed.
Carroll (like many others before him, for a recent example see here), sets up two sides of a controversy:
• The traditional "Copenhagen" or "textbook" point of view on quantum mechanics: quantum systems are determined by a vector in the quantum state space, evolving unitarily according to the Schrödinger equation, until such time as we choose to do a measurement or observation. Measuring a classical observable of this physical system is a physical process which gives results that are eigenvalues of the quantum operator corresponding to the observable, with the probability of occurrence of an eigenvalue given in terms of the state vector by the Born rule (written out just after this list).
• The “Everettian” point of view on quantum mechanics: the description given here is “The formalism of quantum mechanics, in this view, consists of quantum states as described above and nothing more, which evolve according to the usual Schrödinger equation and nothing more.” In other words, the physical process of making a measurement is just a specific example of the usual unitary evolution of the state vector, there is no need for a separate fundamental physical rule for measurements.
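For reference, the Born rule invoked in the first bullet, in standard Dirac notation (notation ours, not a quote from Carroll): measuring an observable with eigenvectors $|a_i\rangle$ on a normalized state $|\psi\rangle$ yields the eigenvalue $a_i$ with probability

$$P(a_i) = |\langle a_i|\psi\rangle|^2.$$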
I don’t want to discuss here the question of whether the Everettian point of view implies a “Many Worlds” ontology, that’s something separate which I’ll write about when I get around to writing about the new book.
What strikes me when thinking about these two supposedly very different points of view on quantum mechanics is that I’m having trouble seeing why they are actually any different at all. If you ask a follower of Copenhagen (let’s call her “Alice”) “is the behavior of that spectrometer in your lab governed in principle by the laws of quantum mechanics” I assume that she would say “yes”. She might though go on to point out that this is practically irrelevant to its use in measuring a spectrum, where the results it produces are probability distributions in energy, which can be matched to theory using Born’s rule.
The Everettian (let’s call him “Bob”) will insist on the point that the behavior of the spectrometer, coupled to the environment and system it is measuring, is described in principle by a quantum state and evolves according to the Schrödinger equation. Bob will acknowledge though that this point of principle is useless in practice, since we don’t know what the initial state is, couldn’t write it down if we did, and couldn’t solve the relevant Schrödinger equation even if we could write down the initial state. Bob will explain that for this system, he expects “emergent” classical behavior, producing probability distributions in energy, which can be matched to theory using Born’s rule.
So, what’s the difference between the points of view of Alice and Bob here? It only seems to involve the question of how classical behavior emerges from quantum, with Alice saying she doesn’t know how this works, Bob saying he doesn’t know either, but conjectures it can be done in principle without introducing new physics beyond the usual quantum state/Schrödinger equation story. Alice likely will acknowledge that she has never seen or heard of any evidence of such new physics, so has no reason to believe it is there. They both can agree that understanding how classical emerges from quantum is a difficult problem, well worth studying, one that we are in a much better position now to work on than we were way back when Bohr, Everett and others were struggling with this.
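To make Bob's appeal to emergence slightly more concrete: in the standard decoherence story (schematic, our notation), tracing the environment out of the post-measurement state leaves an approximately diagonal reduced density matrix,

$$\rho_S \;=\; \mathrm{Tr}_E\,|\Psi\rangle\langle\Psi| \;\approx\; \sum_i |c_i|^2\,|s_i\rangle\langle s_i|,$$

whose diagonal entries are the Born-rule weights. What neither Alice nor Bob can yet derive is why a single definite outcome is experienced.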
Last Updated on
Posted in Quantum Mechanics | 26 Comments
## Where We Are Now
For much of the last 25 years, a huge question hanging over the field of fundamental physics has been that of what judgement the results from the LHC would provide about supersymmetry, which underpins the most popular speculative ideas in the subject. These results are now in, and conclusively negative. In principle one could still hope for the HL-LHC (operating in 2026-35) to find superpartners, but there is no serious reason to expect this. Going farther out in the future, there are proposals for an extremely expensive 100km larger version of the LHC, but this is at best decades away, and there again is no serious reason to believe that superpartners exist at the masses such a machine could probe.
The reaction of some parts of the field to this falsification of hopes for supersymmetry has been not at all the abandonment of the idea that one would expect. For example, today brings the bizarre news that failure has been rewarded with a $3 million Special Breakthrough Prize in Fundamental Physics for supergravity. For uncritical media coverage, see for instance here, here, and here.

Some media outlets do better. I first heard about this from Ryan Mandelbaum, who writes here. Ian Sample at the Guardian does note that negative LHC results are "leading many physicists to go off the theory" and quotes one of the awardees as saying:

We're going through a very tough time… I'm not optimistic. I no longer encourage students to go into theoretical particle physics.

At Nature, the sub-headline is "Three physicists honoured for theory that has been hugely influential — but might not be a good description of reality" and Sabine Hossenfelder is quoted. At her blog, she ends with the following excellent commentary:

Awarding a scientific prize, especially one accompanied by so much publicity, for an idea that has no evidence speaking for it, sends the message that in the foundations of physics contact to observation is no longer relevant. If you want to be successful in my research area, it seems, what matters is that a large number of people follow your footsteps, not that your work is useful to explain natural phenomena. This Special Prize doesn't only signal to the public that the foundations of physics are no longer part of science, it also discourages people in the field from taking on the hard questions. Congratulations.

In related news, yesterday I watched this video of a recent discussion between Brian Greene and others which, together with a lot of promotional material about string theory, included significant discussion of the implications of the negative LHC results. A summary of what they had to say would be:

• Marcelo Gleiser has for many years been writing about the limits of scientific knowledge, and sees this as one more example.
• Michael Dine has since 2003 been promoting the string theory landscape/multiverse, with the idea that one could do statistical predictions using it. Back then we were told that "it is likely that this leads to a prediction of low energy supersymmetry breaking" (although Dine soon realized this wasn't working out, see here.) In 2007 Physics Today published his String theory in the era of the Large Hadron Collider (discussed here), which complained about how "weblogs" had it wrong that string theory had no relation to experiment. That piece claimed that "A few years ago, there seemed little hope that string theory could make definitive statements about the physics of the LHC. The development of the landscape has radically altered that situation." and that "The Large Hadron Collider will either make a spectacular discovery or rule out supersymmetry entirely." Confronted by Brian with the issue of LHC results, Dine looks rather uncomfortable, but claims that there still is hope for string theory and the landscape, that now big data and machine learning can be applied to the problem (for commentary on this, see here). He doesn't though expect to see success in his lifetime.
• Andy Strominger doesn't discuss supersymmetry in particular, but about the larger superstring theory unification idea, tries to make the case that it hasn't been a failure at all, but a success way beyond what was expected.
The argument is basically that the search for a unified string theory was like Columbus's search for a new sea route to China. He didn't find it, but found something much more exciting, the New World. In this analogy, instead of finding some tedious reductionist new layer of reality as hoped, string theorists have found some revolutionary new insight about the emergent nature of gravity:

I think that the idea that people were excited about back in 1985 was really a small thing, you know, to kind of complete that table that you put down in the beginning of the spectrum of particles… We didn't do that, we didn't predict new things that were going to be measured at the Large Hadron Collider, but what has happened is so much more exciting than our original vision… we're getting little hints of a radical new view of the nature of space and time, in which it really just is an approximate concept, emergent from something deeper. That is really, really more exciting, I mean it's as exciting as quantum mechanics or general relativity, probably even more so.

The lesson Strominger seems to have learned from the failure of the 1985 hopes is that when you've lost your bet on one piece of hype, the thing to do is double down, go for twice the hype…

Update: The Breakthrough Prize campaign to explain why supergravity is important despite having no known relation to reality has led to various nonsense making its way to the public, as reporters desperately try to make sense of the misleading information they have been fed. For instance, you can read (maybe after first reading this comment) here that

Witten showed in 1981 that the theory could be used to simplify the proof for general relativity, initiating the integration of the theory into string theory.

You could learn here that

When the theory of supersymmetry was developed in 1973, it solved some key problems in particle physics, such as unifying three forces of nature (electromagnetism, the weak nuclear force, and the strong nuclear force)

Update: On the idea that machine learning will solve the problems of string theory, see this yesterday from the Northeastern press office, which explains that the goal is to "unify string theory with experimental findings":

Using data science to learn more about the large set of possibilities in string theory could ultimately help scientists better understand how theoretical physics fits into findings from experimental physics. Halverson says one of the ongoing questions in the field is how to unify string theory with experimental findings from particle physics and cosmology…

Update: Physics World has a story about this that emphasizes the sort of criticism I've been making here. As mentioned in the comments, I took a closer look at the citation for the prize. The section on supersymmetry is really outrageous, using "supersymmetry stabilizes the weak scale" as an argument for SUSY, despite the fact that this has been falsified by LHC results.

Update: Jim Baggott writes about this story and post-empirical science here. Noah Smith here gets the most remarkable aspect of this right. String theory has always had the feature that the strings were not supposed to be visible at accessible energies, so not directly testable. Supersymmetry is quite different: it has always been advertised as a directly testable idea, with superpartners supposed to appear at the electroweak scale and be seen at the latest at the LHC.
Giving a huge prize to a theoretical idea that has just been conclusively shown to not work is something both new and outrageous.

Update: Tommaso Dorigo's take is here, which I'd characterize as basically "any publicity is good publicity, but it's pretty annoying the cash is going to theorists for failed theories instead of experimentalists" (he does say he wanted to entitle the piece "Billionaire Awards Prizes To Failed Theories"):

[Rant mode on] An exception to the above is, of course, the effect that this not insignificant influx of cash and 23rd-hour recognition has on theoretical physicists. For they seem to be the preferred recipients of the breakthrough prize as of late, not unsurprisingly. Apparently, building detectors and developing new methods to study subnuclear reactions, which are our only way to directly fathom the unknown properties of elementary particles, is not considered enough of a breakthrough by Milner's jury as it is to concoct elegant, albeit wrong, theories of nature. [Rant mode off]

Going back to the effect on laypersons: this is of course positive. Already the sheer idea that you may earn enough cash to buy a Ferrari and a villa in Malibu beach in one shot by writing smart formulas on a sheet of paper is suggestive, in a world dominated by the equation "is paid very well, so it is important". But even more important is the echo that the prize – somewhere by now dubbed "the Oscar of Physics" – is having on the media. Whatever works to bring science to the fore is welcome in my book.

Last Updated on

Posted in Uncategorized | 48 Comments

## Quick Links

A few quick links:

• Philip Ball at Quanta has a nice article on "Quantum Darwinism" and experiments designed to exhibit actual toy examples of the idea in action (I don't think "testing" the idea is quite the right language in this context). What's at issue is the difficult problem of how to understand the way in which classical behavior emerges from an underlying quantum system. For a recent survey article discussing the ideas surrounding Quantum Darwinism, see this from Wojciech Zurek. Jess Riedel at his blog has a new FAQ About Experimental Quantum Darwinism which gives more detail about what is actually going on here.
• This year's TASI summer school made the excellent choice of concentrating on issues in quantum field theory. Videos, mostly well worth watching, are available here.
• This month's Notices of the AMS has a fascinating article about Grothendieck, by Paulo Ribenboim. It comes with a mysterious "Excerpt from" title and editor's note: "Ribenboim's original piece contains some additional facts that are not included in this excerpt. Readers interested in the full text should contact the author."
• I've finally located a valuable Twitter account, this one.

Last Updated on

Posted in Uncategorized | 12 Comments

## Prospects for contact of string theory with experiments

Nima Arkani-Hamed today gave a "vision talk" at Strings 2019, entitled Prospects for contact of string theory with experiments, which essentially admitted there are no such prospects. He started by joking that he had been assigned this talk topic by someone who wanted to see him give a short talk for a change, or perhaps someone who wanted to "throw him to the wolves".
The way he dealt with the challenge was by dropping "string theory", entitling his talk "Connecting Fundamental Theory to the Real World" and only discussing the question of SUSY (he's still for Split SUSY, negative LHC results are irrelevant since if SUSY were natural it would have been seen at LEP, and maybe a 100km pp machine will see something, or ACME will see an electron edm). He did discuss the string theory landscape, and explained it was one reason that about 15 years ago he mostly stopped working on phenomenological HEP theory and started doing the more mathematical physics amplitudes stuff. David Gross used to argue that the danger of the multiverse was that it would convince people to give up on trying to understand fundamental issues about HEP theory (where does the Standard Model come from?). It's now clear that this is no longer a danger for the future but a reality of the present.

In order to go over time, Arkani-Hamed dropped the topic of his title and turned to discussing his hopes for his amplitudes work. The "long shot fantasy" is that a formulation of QFT will be found in which amplitudes are given by integrating some abstract geometrical quantities.

The conference ended with a "vision" panel discussion. Others may see things differently, but what most struck me about this was the absence of any sort of plausible vision.

Update: Taking a look at the slides from the ongoing EPS-HEP 2019 conference, Ooguri seems to strongly disagree with Arkani-Hamed, claiming in his last slide here that a CMB polarization experiment (LiteBIRD) to fly in 8 years, "provides an unprecedented opportunity for String Theory to be falsified." I find this extremely hard to believe. Does anyone else other than Ooguri believe that detection/non-detection of CMB B-modes can falsify string theory?

Last Updated on

Posted in Strings 2XXX | 20 Comments

## Against Symmetry

One of the great lessons of twentieth century science is that our most fundamental physical laws are built on symmetry principles. Poincaré space-time symmetry, gauge symmetries, and the symmetries of canonical quantization largely determine the structure of the Standard Model, and local Poincaré symmetry that of general relativity. For the details of what I mean by the first part of this, see this book.

Recently though there has been a bit of an "Against Symmetry" publicity campaign, with two recent examples to be discussed here. Quanta Magazine last month published K.C. Cole's The Simple Idea Behind Einstein's Greatest Discoveries, with summary

Lurking behind Einstein's theory of gravity and our modern understanding of particle physics is the deceptively simple idea of symmetry. But physicists are beginning to question whether focusing on symmetry is still as productive as it once was.

It includes the following:

"There has been, in particle physics, this prejudice that symmetry is at the root of our description of nature," said the physicist Justin Khoury of the University of Pennsylvania. "That idea has been extremely powerful. But who knows? Maybe we really have to give up on these beautiful and cherished principles that have worked so well.
So it's a very interesting time right now."

After spending some time trying to figure out how to write something sensible here about Cole's confused account of the role of symmetry in physics and encountering mystifying claims such as

the Higgs boson that was detected was far too light to fit into any known symmetrical scheme…

and

symmetry told physicists where to look for both the Higgs boson and gravitational waves

I finally hit the following:

"naturalness" — the idea that the universe has to be exactly the way it is for a reason, the furniture arranged so impeccably that you couldn't imagine it any other way.

At that point I remembered that Cole is the most incompetent science writer I've run across (for more about this, see here), and realized best to stop trying to make sense of this. Quanta really should do better (and usually does).

For a second example, the Kavli IPMU recently put out a press release claiming Researchers find quantum gravity has no symmetry. This was based on the paper Constraints on symmetry from holography, by Harlow and Ooguri. The usually reliable Ethan Siegel was taken in, writing a long piece about the significance of this work, Ask Ethan: What Does It Mean That Quantum Gravity Has No Symmetry? To his credit, one of the authors (Daniel Harlow) wrote to Siegel to explain to him some things he had wrong:

I wanted to point out that there is one technical problem in your description… our theorem does not apply to any of the symmetries you mention here! … It isn't widely appreciated, but in the standard model of particle physics coupled to gravity there is actually only one global symmetry: the one described by the conservation of B-L (baryon number minus lepton number). So this is the only known symmetry we are actually saying must be violated!

What Harlow doesn't mention is that this is a result about AdS gravity, and we live in dS, not AdS space, so it doesn't apply to our world at all. Even if it did apply, and thus would have the single application of telling us B-L is violated, it says nothing about how B-L is violated or what the scale of B-L violation is, so would be pretty much meaningless.

By the way, I'm thoroughly confused by the Kavli IPMU press release, which claims:

Their result has several important consequences. In particular, it predicts that the protons are stable against decaying into other elementary particles, and that magnetic monopoles exist.

Why does Harlow-Ooguri imply (if it applied to the real world, which it doesn't…) that protons are stable?

What is driving a lot of this "Against Symmetry" fashion is "it from qubit" hopes that gravity can be understood as some sort of emergent phenomenon, with its symmetries not fundamental. I've yet though to see anything like a real (i.e., consistent with what we know about the real world, not AdS space in some other dimension) theory that embodies these hopes. Maybe this will change, but for now, symmetry principles remain our most powerful tools for understanding fundamental physical reality, and "Against Symmetry" has yet to get off the ground.

Update: Quanta seems to be trying to make up for the KC Cole article by today publishing a good piece about space-time symmetries, Natalie Wolchover's How (Relatively) Simple Symmetries Underlie Our Expanding Universe. It makes the argument that, just as the Poincaré group can be thought of as a "better" space-time symmetry group than the Galilean group, the deSitter group is "better" than Poincaré.
In terms of quantization, the question becomes that of understanding the irreducible unitary representations of these groups. I do think the story of the representations of the Poincaré group (see for instance my book about QM and representation theory) is in some sense "simpler" than the Galilean group story (no central extensions needed). The deSitter group is a simple Lie group, and comparing its representation theory to that of Poincaré raises various interesting issues. A couple minutes of Googling turned up this nice Master's thesis that has a lot of background.

Last Updated on

Posted in Uncategorized | 18 Comments

## What happens when we can't test scientific theories?

Just got back from a wonderful trip to Chile, where the weather was perfect for watching the solar eclipse from the beach at La Serena. While I was away, the Guardian Science Weekly podcast I participated in before leaving for Chile went online and is available here. Thanks to Ian Sample, Graihagh Jackson, and the others at Science Weekly who put this together, I think they did a great job.

The issues David Berman, Eleanor Knox and I discussed in the podcast will be familiar to readers of this blog. Comparing to the arguments over string theory that took place 10-15 years ago, one thing that strikes me is that we're no longer hearing any claims of near term tests of the theory. Instead the argument is now often made, by Berman and others, that it may take centuries to understand and test string theory. This brings into focus the crucial question here: how do you evaluate a highly speculative and very technical research program like this one? Given the all too human nature of researchers, those invested in it cannot be relied upon to provide an unbiased evaluation of progress. So, absent experimental results providing some sort of definitive judgment, where will such an evaluation come from?

Last Updated on

Posted in Uncategorized | 12 Comments

## Various

First something really important: chalk. If you care about chalk, you should watch this video and read this story.

Next, something slightly less important: money. The Simons Foundation in recent years has been having a huge (positive, if you ask me…) effect on research in mathematics and physics. Their 2018 financial report is available here. Note that not only are they spending $300 million/year or so funding research, but at the same time they're making even more ($400 million or so) on their investments (presumably RenTech funds). So, they're running a huge profit (OK, they're a non-profit…), as well as taking in each year $220 million in new contributions.
Various particle physics-related news:
• The people promoting the FCC-ee proposal have put out FCC-ee: Your Questions Answered, which I think does a good job of making the physics case for this as the most promising energy-frontier path forward. I don’t want to start up again the same general discussion that went on here and elsewhere, but I do wonder about one specific aspect of this proposal (money) and would be interested to hear from anyone well informed about it.
The FCC-ee FAQ document lists the cost (in Swiss francs or dollars, worth exactly the same today) as 11.6 billion (7.6 billion for tunnel/infrastructure, 4 billion for machine/injectors). The timeline has construction starting a couple years after the HL-LHC start (2026) and going on in parallel with HL-LHC operation over a decade or so. This means that CERN will have to come up with nearly 1.2 billion/year for FCC-ee construction, roughly the size of the current CERN budget. I have no idea what fraction of the current budget could be redirected to new collider construction, while still running the lab (and the HL-LHC). It is hard to see how this can work, without a source of new money, and I have no idea what prospects are for getting a large budget increase from the member states. Non-member states might be willing to contribute, but at least in the case of US, any budget commitments for future spending are probably not worth the paper they might be printed on.
Then again, Jim Simons has a net worth of 21.5 billion, and maybe he’ll just buy the thing for us…
• Stacy McGaugh has an interesting blog post about the sociology of physics and astronomy. His description of his experience with physicists at Princeton sounds all too accurate (if he’d been there a couple years earlier, I would have been one of the arrogant, hard-to-take young particle theorists he had to put up with).
McGaugh’s specialty is dark matter and he has some comments about that. If you want some more discouragement about prospects for detecting dark matter, today you have your choice of Sabine Hossenfelder, Matt Buckley, or Will Kinney. I don’t want to start a discussion of everyone’s favorite ideas about dark matter, but wouldn’t mind hearing from an expert whether my suspicion is well-founded that some relatively simple right-handed neutrino model might both solve the problem and be essentially impossible to test.
• Lattice 2019 is going on this week. Slides here, streaming video here.
• Strings 2019 talk titles are starting to appear here. I’ll be very curious to hear what Arkani-Hamed has to say. His talk title is “Prospects for contact of string theory with experiments (vision talk)” and while he’s known for giving very long talks, I don’t see at all how this one could not be extremely short.
On a more personal front, yesterday I did a recording for a podcast from my office, with the exciting feature of an unannounced fire drill happening towards the end. Presumably this will get edited out, and I’ll post something here when the result is available.
Next week I’ll be heading out for a two week trip to Chile, with one goal to see the total solar eclipse there on July 2. Will start out up in the Atacama desert.
Update: John Horgan has an interview with Peter Shor. I very much agree with Shor’s take on the problems of HEP theory:
High-energy physicists are now trying to produce new physics without either experiment or proof to guide them, and I don’t believe that they have adequate tools in their toolbox to let them navigate this territory.
My impression, although I may be wrong about this, is that in the past, one way that physicists made advances is by coming up with all kinds of totally crazy ideas, and keeping only the ones that agreed with experiment. Now, in high energy physics, they’re still coming up with all kinds of totally crazy ideas, but they can no longer compare them with experiments, so which of their ideas get accepted depends on some complicated sociological process, which results in theories of physics that may not bear any resemblance to the real world. This complicated sociological process certainly takes beauty into account, but I don’t think that’s what is fundamentally leading physicists astray. I think a more important problem is this sociological process leads high-energy physicists to collectively accept ideas prematurely, when there is still very little evidence in favor of them. Then the peer review process leads the funding agencies to mainly fund people who believe in these ideas when there is no guarantee that it is correct, and any alternatives to these ideas are for the most part neglected.
Update: I think John Preskill and Urs Schreiber miss the point in their response here to Peter Shor. Shor is not calling for an end to research on quantum gravity or saying it can’t be done without experimental input. The problem he’s pointing to is a “sociological process” and so potentially fixable. This problem, “collectively accept[ing] ideas prematurely”, not realizing the difference between a solid foundation you can build on, and a speculative framework that may be seriously flawed is one that those exposed to the sociological culture of the math community are much more aware of. Absent experimental checks, mathematicians understand the need to pay close attention to what is solid (there’s a “proof”), and what isn’t.
Last Updated on
Posted in Uncategorized | 17 Comments
## Not So Spooky Action at a Distance
I’ve recently read another new popular book about quantum mechanics, Quantum Strangeness by George Greenstein. Before getting to saying something about the book, I need to get something off my chest: what’s all this nonsense about Bell’s theorem and supposed non-locality?
If I go to the Scholarpedia entry for Bell’s theorem, I’m told that:
Bell’s theorem asserts that if certain predictions of quantum theory are correct then our world is non-local.
but I don’t see this at all. As far as I can tell, for all the experiments that come up in discussions of Bell’s theorem, if you do a local measurement you get a local result, and only if you do a non-local measurement can you get a non-local result. Yes, Bell’s theorem tells you that if you try and replace the extremely simple quantum mechanical description of a spin 1/2 degree of freedom by a vastly more complicated and ugly description, it’s going to have to be non-local. But why would you want to do that anyway?
The Greenstein book is short, the author’s very personal take on the usual Bell’s inequality story, which you can read about many other places in great detail. What I like about the book though is the last part, in which the author has, at 11 am on Friday, July 10, 2015, an “Epiphany”. He realizes that his problem is that he had not been keeping separate two distinct things: the quantum mechanical description of a system, and the every-day description of physical objects in terms of approximate classical notions.
“How can a thing be in two places at once?” I had asked – but buried within that question is an assumption, the assumption that a thing can be in one place at once. That is an example of doublethink, of importing into the world of quantum mechanics our normal conception of reality – for the location of an object is a hidden variable, a property of the object … and the new science of experimental metaphysics has taught us that hidden variables do not exist.
I think here Greenstein does an excellent job of pointing to the main source of confusion in “interpretations” of quantum mechanics. Given a simple QM system (say a fixed spin 1/2 degree of freedom, a vector in C2), people want to argue about the relation of the QM state of the system to measurement results which can be expressed in classical terms (does the system move one way or the other in a classical magnetic field?) . But there is no relation at all between the two things until you couple your simple QM system to another (hugely complicated) system (the measurement device + environment). You will only get non-locality if you couple to a non-local such system. The interesting discussion generated by an earlier posting left me increasingly suspicious that the mystery of how probability comes into things is much like the “mystery” of non-locality in the Bell’s inequality experiment. Probability comes in because you only have a probabilistic (density matrix) description of the measurement device + environment.
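The "extremely simple quantum mechanical description" at issue here really is simple to compute with. A minimal numpy sketch (all names ours, purely illustrative) reproducing the singlet correlation $E(a,b) = -\,\mathbf{a}\cdot\mathbf{b}$ quoted above:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_along(n):
    """Spin observable n . sigma for a unit vector n."""
    return n[0] * sx + n[1] * sy + n[2] * sz

# Singlet state (|01> - |10>)/sqrt(2) in C^2 tensor C^2
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Correlation <singlet| (a.sigma tensor b.sigma) |singlet>; equals -a.b."""
    op = np.kron(spin_along(a), spin_along(b))
    return np.real(singlet.conj() @ op @ singlet)

a = np.array([0.0, 0.0, 1.0])
b = np.array([np.sin(0.4), 0.0, np.cos(0.4)])
print(E(a, b), -a @ b)   # both approximately -cos(0.4)
```

Both printed numbers agree, which is exactly the quantum prediction the CHSH expression above is compared against.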
For some other QM related links:
• Arnold Neumaier has posted a newer article about his “thermal interpretation” of quantum mechanics. He also has another interesting preprint, relating quantum mechanics to what he calls “coherent spaces”.
• Philip Ball at Quanta magazine explains a recent experiment that demonstrates some of the subtleties that occur in the quantum mechanical description of a transition between energy eigenstates (as opposed to the unrealistic cartoon of a “quantum jump”).
• There’s a relatively new John Bell Institute for the Foundations of Physics. I fear though that the kinds of “foundations” of interest to the organizers seem rather orthogonal to the “foundations” that most interest me.
• If you are really sympathetic to Einstein’s objections to quantum mechanics, and you have a lot of excess cash, you could bid tomorrow at Christie’s for some of Einstein’s letters on the topic, for instance this one.
Last Updated on
Posted in Book Reviews, Quantum Mechanics | 62 Comments
## Various News Items
For physicists:
• For the latest news on US HEP funding, see presentations at this recent HEPAP meeting. It is rarely publicly acknowledged by scientists, but during the Trump years funding for a lot of scientific research has increased, often dramatically. This has been due not to Trump administration policy initiatives, but instead to the Republican party's embrace of fiscal irresponsibility whenever there's a Republican in the White House. After bitter complaints about the size of the budget deficit and demands for reduction in domestic spending during the Obama years, after Trump's election the congressional Republicans turned on a dime and every year have voted for huge across-the-board spending increases, tax decreases, and corresponding deficit increases. Each year the Trump administration produces a budget document calling for unrealistically large budget decreases which is completely ignored, with Congress passing large increases and Trump signing them into law.
For specific numbers, see for instance page 20 of this presentation, which shows numbers for the DOE HEP budget in recent years. The pattern for FY2020 looks the same: a huge proposed decrease, and a huge likely increase (see the number for the House Mark).
The result of all this is that far greater funds are available than expected during the last P5 planning exercise, so instead of having to make the difficult decisions P5 expected, a wider list of projects can be funded.
For mathematicians:
• Michael Harris has a new article in Quanta magazine, mentioning suggestions by two logicians that the Wiles proof of Fermat’s Last Theorem should be formalized and checked by a computer. He explains why most number theorists think this sort of project is besides the point:
Wiles and the number theorists who refined and extended his ideas undoubtedly didn’t anticipate the recent suggestions from the two logicians. But — unlike many who follow number theory at a distance — they were certainly aware that a proof like the one Wiles published is not meant to be treated as a self-contained artifact. On the contrary, Wiles’ proof is the point of departure for an open-ended dialogue that is too elusive and alive to be limited by foundational constraints that are alien to the subject matter.
I don’t know who the “two logicians” Harris is referring to are, or what the nature of their concerns about the Wiles proof might be. I had thought this might have something to do with number theorist Kevin Buzzard’s Xena Project, but in a comment Buzzard describes such a formalization as currently impractical, with no clear motivation.
Taking a look at the page describing the motivation for the Xena Project, I confess to finding it unconvincing. The idea of revamping the undergraduate math curriculum to make it based on computer checkable proofs seems misguided, since I don’t see at all why this is a good way to teach mathematical concepts or motivate undergraduate students. The complaints about holes in the math literature (e.g. details of the classification of finite simple groups) don’t seem to me to be something that can be remedied by a computer.
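For readers who have never seen one: a machine-checked proof, at toy scale, looks like this in Lean, the proof assistant behind the Xena Project (modern Lean 4 syntax shown; this is a deliberately trivial example, nothing like the scale of the Wiles proof):

```lean
-- A toy theorem and its proof term; the Lean kernel checks every step.
theorem toy_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```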
• For some cutting-edge number theory, with no computers in sight, see the lecture notes from a recent workshop on geometrization of local Langlands.
• Finally, congratulations to this year’s Shaw Prize winner, Michel Talagrand. Talagrand in recent years has been working on writing up a book on quantum field theory for mathematicians, and I see that Sourav Chatterjee last fall taught a course based on it, producing lecture notes available here.
For a wonderful recent interview with Talagrand, see here.
I first got to know Michel when he started sending me very helpful comments and corrections on my QM book when it was a work in progress. He’s single-handedly responsible for a lot of significant improvements in the quality of the book.
I’ve recently received significant help from someone else, Lasse Schmieding, who has sent me a very helpful list of mistakes and typos in the published version of the book. I’ve now fixed just about all of them. Note that the version of the book available on my website has all typos/mistakes fixed. For the published version, there’s a list of errata.
Update: For more about the Michael Harris vs. Kevin Buzzard argument, see here, or plan on attending their face-off in Paris next week.
Last Updated on
Posted in Uncategorized | 18 Comments
|
2019-08-21 20:28:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4224540889263153, "perplexity": 852.3299050609479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316194.18/warc/CC-MAIN-20190821194752-20190821220752-00363.warc.gz"}
|
http://openstudy.com/updates/55a98d63e4b038ffab0b8b2d
|
## anonymous one year ago What is the discontinuity of f(x) = the quantity negative x squared plus x plus 20 over the quantity x plus 4 (i.e., f(x) = (-x^2 + x + 20)/(x + 4))?
1. anonymous
@Astrophysics
2. anonymous
[drawing attachment]
3. anonymous
So I think it is -4?
4. anonymous
Am I right?
5. Astrophysics
So you're taking $\lim_{x \rightarrow -4} f(x)$
6. anonymous
Um..I suppose:).
7. Astrophysics
Mhm, you know calculus?
8. anonymous
No. It's Algebra 2
9. Astrophysics
But yes, that's right -4
10. Astrophysics
As the domain is all real numbers where x cannot equal -4.
11. Astrophysics
So you're looking for what x is basically then, so factor the top, that should give it away :)
12. Astrophysics
To find the zeros you have to factor the numerator, it's just finding the roots, hence x.
13. anonymous
So (x-4)(x+5)
14. anonymous
do I just plug in -4?
15. Astrophysics
You should've got -(x+4)(x-5)
16. anonymous
So it is (-4,1)
17. Astrophysics
Then you can cancel out the numerator and denominator of (x+4) and you'll have x = 5 as your zero
18. anonymous
So, if I were to graph it, the point would be -4,1 ? Correct?
19. anonymous
Is it working now?
20. Astrophysics
Oh I see what you mean now, so if we take the limit we would indeed get 9
21. Astrophysics
I really can't remember doing it in algebra 2, mhm I will have to do some research
22. anonymous
haha<3 Ok! Thank-you:)
23. anonymous
What is the graph of the function f(x) = (-x² - 2x - 2)/(x - 2)?
24. Astrophysics
use this site to graph https://www.desmos.com/calculator
25. anonymous
I don't mean to post another question because usually I can figure things out by myself, but I can't graph using desmos. It won't work
26. anonymous
Maybe I'm doing it wrong. Because when I typed it in, I only got one graph when it should have been two. Like 2 lines:)
27. anonymous
Any thoughts?
28. Astrophysics
Well, you can't put words if that's what you're trying, so what do those words tell us mathematically?
29. anonymous
-x^2-2x-2/x-2
30. anonymous
And I didn't :) I typed in that
31. Astrophysics
I got this, make sure you put y = , or f(x) = (graph omitted)
32. anonymous
Ok:) But shouldn't there two?
33. anonymous
Nevermind! Thank-you<3
34. Astrophysics
Np ^.^
35. anonymous
A function $f(x)$ is said to be continuous at a point $x=c$ of its domain $\iff \lim_{x \rightarrow c} f(x) = f(c)$.
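As a supplementary check (not part of the original thread), here is a minimal SymPy sketch of the same computation:

# Minimal SymPy sketch (supplementary, not from the thread): locate the
# removable discontinuity of f(x) = (-x^2 + x + 20)/(x + 4).
import sympy as sp

x = sp.symbols('x')
f = (-x**2 + x + 20) / (x + 4)

print(sp.factor(-x**2 + x + 20))  # -(x - 5)*(x + 4), so the factor x + 4 cancels
print(sp.limit(f, x, -4))         # 9, so the graph has a hole at (-4, 9)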
http://mathematica.stackexchange.com/questions/30516/something-is-wrong-with-information
# Something is wrong with “Information”
When a new session of the kernel is started, searching for help with the "?" question mark in a notebook works well, for example (screenshot omitted).
But after some time, the output changes (screenshot omitted).
I have no idea what I did that caused this problem. The situation has occurred several times after using the notebook extensively; I think some command triggered it. I'm using Mathematica 9.0.1.0 on Mac OS X Mountain Lion.
-
I believe the Attributes are there from the beginning. (Try ??.) Therefore the question is: what happened to the usage message? Correct? When the system is in the second "mode" what do you get if you enter Sin::usage? – Mr.Wizard Aug 15 at 16:39
Now I'm afraid that if I terminate the kernel I won't be able to regenerate the issue. But I think using "?" for a built-in function won't generate Attributes. Now in the second "mode", after I enter ??Sin, the same output as ?Sin appears. After I enter Sin::usage, nothing appears. – Joe Li Aug 15 at 16:53
In a fresh session ??Sin produces the usage line as shown in your first example and the Attributes line shown in the second. For Sin::usage just enter it by itself on a new line. It should output "Sin[z] gives the sine of z." If it does not something is causing that to be lost; possibly a bug in one of the packages that is auto-loaded when you use a particular function. – Mr.Wizard Aug 15 at 16:57
Does this happen with all notebooks, or just one or two? – rcollyer Aug 15 at 17:23
I restarted the kernel and tried all the input between the last "?" input that worked and the one that failed, and was not able to regenerate the issue again. It should be noted that the output System`Sin suggests the kernel treats the built-in function as a user-defined function. – Joe Li Aug 15 at 17:26
https://stt.wiki/wiki/Klingon_K%27Vort_Bird-of-Prey_Schematic
# Klingon K'Vort Bird-of-Prey
**Klingon K'Vort Bird of Prey** (Tier 2)

- Affiliation: Klingon Empire
- Active: 24th Century
- Battle stations: 1x Security, 1x Engineering
- Shields: 13200
- Hull: 26400
- Shield Regen: 336
- Attack: 2
- Accuracy: 2
- Evasion: 3
- DPS [Att. Power * Att. Speed]: 1900 [1900 * 1.00] at L1
- Accuracy / Evasion Power: 0.89 [1900 / 2125] at L1
- Crit Rating / Bonus: 425 / 5000
- Traits: Klingon, Cloaking Device
- Antimatter: 1250
The Klingon K'Vort Bird of Prey is a tier 2 ship in Star Trek Timelines. It requires 75 Schematics to produce.
The Bird of Prey is a class of warship used by the Klingon Empire. The presence of the Klingon Bird of Prey in Star Trek Timelines was first revealed in a video released on March 27th 2015. It was stated that the model in Star Trek Timelines would be a standard 24th century Bird of Prey, with unique models such as the IKS Rotarran and HMS Bounty also being included. It was revealed that the focus of the ship is its attacking and cloaking abilities.
The ship's cloak engages when you activate its Evasion action, and will disengage prematurely if you activate an Attack action.
## Battle Actions
- Boosts Attack by 3. Initialize: 2s, Cooldown: 10s, Duration: 4s. Bonus Ability: 30% of incoming damage is also taken by the attacker.
- +2 Engage Cloak: Boosts Evasion by 2 and grants Cloaked status to your ship. Initialize: 8s, Cooldown: 18s, Duration: 5s.
Klingon K'Vort Bird of Prey, Build Cost: 75 Schematics

| Level | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| Shields | 13200 | 14916 | 16855 | 19046 | 21522 | 24320 | 27482 |
| Hull | 26400 | 29832 | 33710 | 38092 | 43044 | 48640 | 54963 |
| Attack | 2 | 3 | 3 | 4 | 4 | 4 | 4 |
| Accuracy | 2 | 3 | 3 | 3 | 4 | 4 | 5 |
| Evasion | 3 | 3 | 3 | 4 | 4 | 4 | 5 |
| Attack Power | 1900 | 2075 | 2325 | 2650 | 2775 | 2975 | 3375 |
| Attack Speed | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| DPS | 1900 | 2075 | 2325 | 2650 | 2775 | 2975 | 3375 |
| Accuracy Power | 1900 | 2050 | 2250 | 2486 | 2650 | 2950 | 3500 |
| Evasion Power | 2125 | 2300 | 2500 | 2800 | 3100 | 3400 | 3760 |
| [Acc / Eva] | 0.89 | 0.89 | 0.89 | 0.89 | 0.85 | 0.87 | 0.93 |
| Shield Regen | 336 | 380 | 429 | 485 | 548 | 619 | 700 |
| Crit Rating | 425 | 450 | 475 | 525 | 575 | 625 | 675 |
| Crit Bonus | 5000 | 5000 | 5000 | 5000 | 5000 | 5000 | 5000 |
| Antimatter | 1250 | 1300 | 1350 | 1400 | 1450 | 1500 | 1550 |
| Schematics to Next Level | 15 | 20 | 35 | 65 | 100 | 120 | Max |
## Schematic Drops
Schematics are available as Normal Rewards, Rare Rewards (1-Time), and in the Faction Shop.
### Drop chance
| Item | Units | Cost/Unit | Runs/Unit | Runs | From |
|---|---|---|---|---|---|
| Klingon K'Vort Bird-of-Prey Schematic (x3) | 465 | 12.4 | 1 | 482 | Beyond the Call |
| Klingon K'Vort Bird-of-Prey Schematic (x2) | 46 | 17.4 | 1.7 | 80 | Smiles and Knives |
| Klingon K'Vort Bird-of-Prey Schematic (x2) | 652 | 16 | 1.6 | 1044 | Putting the Free in Freedom |
| Klingon K'Vort Bird-of-Prey Schematic | 86 | 27.8 | 3.5 | 299 | Ishka Issues |
| Klingon K'Vort Bird-of-Prey Schematic (x3) | 141 | 16.1 | 1 | 142 | Leverage |
http://clay6.com/qa/7601/prove-by-vector-method-the-sum-of-the-squares-of-the-diagonals-of-a-paralle
# Prove by the vector method that the sum of the squares of the diagonals of a parallelogram is equal to the sum of the squares of its sides.
Toolbox:
• For any two vectors $\hat a \: and \: \hat b$ $(\hat a + \hat b)^2=(\hat a)^2+2\hat a.\hat b+(\hat b)^2=a^2+2\hat a.\hat b+b^2$ $(\hat a-\hat b)^2=a^2-2\hat a.\hat b+b^2$ $(\hat a+\hat b).(\hat a-\hat b)=a^2-b^2$
• By $\Delta$ law of vectors if $\overrightarrow a+\overrightarrow b=\overrightarrow c \: or \: \overrightarrow a+\overrightarrow b=-\overrightarrow c$ then the vectors form the sides of a $\Delta$
Let ABCD be the parallelogram with diagonals AC and BD (figure omitted).
Now $\overrightarrow {AC}= \overrightarrow {AB}+ \overrightarrow {BC}$
$\overrightarrow {BD}= \overrightarrow {BA}+ \overrightarrow {AD}$
$\overrightarrow {AC}^2 = AC^2 = ( \overrightarrow {AB}+ \overrightarrow {BC})^2= \overrightarrow {AB}^2+2\, \overrightarrow {AB}\cdot \overrightarrow {BC}+ \overrightarrow {BC}^2$
$= AB^2+2\, \overrightarrow {AB}\cdot \overrightarrow {AD}+ BC^2 \qquad$ (i) $\qquad$ (since $\overrightarrow{BC} = \overrightarrow{AD}$)
$\overrightarrow {BD}^2 = BD^2=( \overrightarrow {BA}+ \overrightarrow {AD})^2=( \overrightarrow {AD}- \overrightarrow {AB})^2= \overrightarrow {AD}^2-2\, \overrightarrow {AD}\cdot \overrightarrow {AB}+ \overrightarrow {AB}^2$
$= AD^2-2\,\overrightarrow {AB}\cdot\overrightarrow {AD}+AB^2 \qquad$ (ii)
Adding (i) and (ii), the cross terms cancel:
$AC^2+BD^2=AB^2+BC^2+AD^2+AB^2 = AB^2+BC^2+CD^2+DA^2 ,$ since the opposite sides of a parallelogram are equal. Hence proved.
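As a supplementary numerical check (not part of the original solution), the identity can be verified for a randomly generated parallelogram; NumPy and the seed value here are assumptions:

# Supplementary check: |AC|^2 + |BD|^2 == |AB|^2 + |BC|^2 + |CD|^2 + |DA|^2
# for a randomly generated parallelogram ABCD.
import numpy as np

rng = np.random.default_rng(42)          # arbitrary seed
A = rng.random(2)
ab = rng.random(2)                       # side vector AB
ad = rng.random(2)                       # side vector AD
B, D, C = A + ab, A + ad, A + ab + ad    # ABCD is then a parallelogram

sq = lambda v: float(np.dot(v, v))
diagonals = sq(C - A) + sq(D - B)
sides = sq(B - A) + sq(C - B) + sq(D - C) + sq(A - D)
print(np.isclose(diagonals, sides))      # True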
http://motionclouds.invibe.net/
# Motion Clouds
MotionClouds are random dynamic stimuli optimized to study motion perception.
In particular, these stimuli can be made closer to naturalistic textures compared to usual stimuli such as gratings and random-dot kinetograms. They have controlled information content: we simplified the definition to parametrically define these "Motion Clouds" around the most prevalent feature axes (mean and bandwidth): direction, scale (spatial frequency), and orientation. These scripts implement a framework to generate these random texture movies.
The description of this method was published in: (citation omitted)
and recently in: (citation omitted)
This method was also used in the following paper:
• Claudio Simoncini, Laurent U. Perrinet, Anna Montagnini, Pascal Mamassian, Guillaume S. Masson. More is not always better: dissociation between perception and action explained by adaptive gain control. Nature Neuroscience, 2012 URL
This work was supported by ANR project "ANR Speed" ANR-13-BSHS2-0006.
This work was supported by the European Union project Number FP7-269921, "BrainScaleS" (Brain-inspired multiscale computation in neuromorphic hybrid systems), an EU FET-Proactive FP7-funded research project. The project started on 1 January 2011. It is a collaboration of 18 research groups from 10 European countries.
### Code example - demo
Motion Clouds are built using a collection of scripts that provides a simple way of generating complex stimuli suitable for neuroscience and psychophysics experiments. It is meant to be an open-source package that can be combined with other packages such as PsychoPy or NeuroTools.
All functions are implemented in one main script called MotionClouds.py that handles the Fourier cube, the envelope functions as well as the random phase generation and all Fourier related processing. Additionally, all the auxiliary visualization tools to plot the spectra and the movies are included. Specific scripts such as test_color.py, test_speed.py, test_radial.py and test_orientation.py explore the role of different parameters for each individual envelope (respectively color, speed, radial frequency, orientation). Our aim is to keep the code as simple as possible in order to be comprehensible and flexible. To sum up, when we build a custom Motion Cloud there are 3 simple steps to follow:
1. set the MC parameters and construct the Fourier envelope, then visualize it as iso-surfaces:
import MotionClouds as mc
import numpy as np
# define Fourier domain
fx, fy, ft = mc.get_grids(mc.N_X, mc.N_Y, mc.N_frame)
# define an envelope
envelope = mc.envelope_gabor(fx, fy, ft,
V_X=1., V_Y=0., B_V=.1,
sf_0=.15, B_sf=.1,
theta=0., B_theta=np.pi/8, alpha=1.)
# Visualize the Fourier Spectrum
mc.visualize(envelope)
2. perform the IFFT and contrast normalization; visualize the stimulus as a 'cube' visualization of the image sequence,
movie = mc.random_cloud(envelope)
movie = mc.rectif(movie)
# Visualize the Stimulus
mc.cube(movie, name=name + '_cube')
3. export the stimulus as a movie (.mpeg format available), as separate frames (.bmp and .png formats available) in a compressed zipped folder, or as a Matlab matrix (.mat format).
mc.anim_save(movie, name, display=False, vext='.mpeg')
If some parameters are not given, they are set to default values corresponding to a "standard" Motion Cloud. Moreover, the user can easily explore a range of different Motion Clouds simply by setting an array of values for a given parameter. Here, for example, we generate 8 MCs with increasing spatial frequency sf_0 while keeping the other parameters fixed to default values:
for sf_0 in [0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6]:
name_ = 'figures/' + name + '-sf_0-' + str(sf_0).replace('.', '_')
# function performing plots for a given set of parameters
mc.figures_MC(fx, fy, ft, name_, sf_0=sf_0)
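The same pattern extends to the other envelope parameters. As a hedged sketch (the values below are arbitrary, and passing B_theta through mc.figures_MC is assumed to work the same way as sf_0 above):

# Hypothetical sweep over the orientation bandwidth B_theta; the values
# are arbitrary and only illustrate the parameter-exploration pattern.
for B_theta in [np.pi/32, np.pi/16, np.pi/8, np.pi/4]:
    name_ = 'figures/' + name + '-B_theta-' + str(B_theta).replace('.', '_')
    mc.figures_MC(fx, fy, ft, name_, B_theta=B_theta)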
https://paws-public.wmflabs.org/paws-public/53852699/aggregation.ipynb
import pandas as pd
import numpy as np
%pylab inline
data = pd.read_csv("https://docs.google.com/uc?export=download&id=1mr-KGEeKq-QS7xKtNKakWlDrjdLukv47",encoding='utf8')
data.dropna(inplace=True)
data.head()
better_0 _unit_id _started_at _created_at _trust _worker_id _city age similarity_0 explanation_0 asi1 time_spent score
0 Your keyword 4 1/10/2018 15:55:41 1/10/2018 16:20:41 0.385118 32 Ernakulam 36-50 6 they are all dressed well and using computers ... 4 00:25:00 84
1 The two keywords are completely identical 6 1/10/2018 17:04:22 1/10/2018 17:23:42 0.033270 13 Kolkata 36-50 6 Almost identicalexcept the tiny spelling diffe... 3 00:19:20 52
4 The two keywords are completely identical 6 1/11/2018 05:14:03 1/11/2018 05:21:25 0.708808 70 Mangalagiri 19-25 7 both are similar 5 00:07:22 16
5 The two keywords are completely identical 20 1/10/2018 16:41:16 1/10/2018 17:06:26 0.899786 95 Patna 26-35 7 they both describe the same kind of people 3 00:25:10 42
6 Your keyword 13 1/10/2018 15:47:20 1/10/2018 16:01:19 0.873825 37 Ulhasnagar 19-25 6 We can see a relaxed state in that images 5 00:13:59 41
## Time spent on a question (can be useful for worker ability)
pd.to_datetime(data['_created_at'])- pd.to_datetime(data['_started_at']) #use pd.to_numeric() to convert to number of ns
len(data)
data.describe()
_unit_id _trust _worker_id similarity_0 asi1 time_spent
count 63.000000 63.000000 63.000000 63.000000 63.000000 63
mean 15.460317 0.544987 46.571429 5.555556 3.650794 0 days 00:20:49.984126
std 8.918674 0.321671 28.953642 1.329295 1.109471 0 days 00:06:18.184563
min 1.000000 0.033270 1.000000 2.000000 0.000000 0 days 00:06:07
25% 7.000000 0.282123 21.000000 4.000000 3.000000 0 days 00:17:42
50% 15.000000 0.609291 49.000000 6.000000 4.000000 0 days 00:21:55
75% 24.000000 0.852443 70.500000 7.000000 4.000000 0 days 00:25:19
max 31.000000 0.974416 98.000000 7.000000 5.000000 0 days 00:29:49
Let's see how many judgments we have per unit
data.groupby('_unit_id').size()
data.groupby('_unit_id').size().values
data.groupby('_unit_id').size().hist()
Let's remove the units that have only one judgment
(data.groupby('_unit_id').size()==1).values
a = np.where((data.groupby('_unit_id').size()==1))
a
a = list(a[0])
a
data[data['_unit_id'].isin(a)]
data = data[~data['_unit_id'].isin(a)]
len(data)
1. Create a column with time spent (use pd.to_datetime)
2. Compute the average time per worker
data['time_spent'] = pd.to_datetime(data['_created_at']) - pd.to_datetime(data['_started_at'])
data.groupby('_worker_id')['time_spent'].mean()  # mean works directly on a timedelta column
# Basic aggregation
## Quantitative variables
data.groupby('_unit_id')['similarity_0'].mean()
If we are also doing a per-worker analysis, we can compute values for each worker
data.groupby('_worker_id')['_trust'].mean().values
data.groupby('_worker_id')['_trust'].mean().hist()
## Categorical variables
Now we can't do the following, because better_0 is a categorical variable:
data.groupby('_unit_id')['better_0'].mean()
Let's explore what is this column and decide what to do
data.groupby('_unit_id')['better_0'].describe()
print(data['better_0'].unique())
len(data['better_0'].unique())
The majority vote of an array is simply the mode
data['better_0'].mode()
How is the variable distributed?
data.groupby('better_0')['better_0'].size()
Let's compute the majority voting
data.groupby('_unit_id')['better_0'].apply(lambda x: x.mode())
Sometimes this returns two values; let's take the first in that case (a better way would be to pick one at random)
data.groupby('_unit_id')['better_0'].apply(lambda x: x.mode()[0])
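A hedged alternative for the tie case, picking one of the modes at random instead of always the first:

# Break ties between modes at random (non-deterministic by design)
data.groupby('_unit_id')['better_0'].apply(lambda x: x.mode().sample(1).iloc[0])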
# Weighted measures
## Weighted mean
def weigthed_mean(df, weights, values):  # df is a dataframe containing a single question
    sum_values = (df[weights] * df[values]).sum()
    total_weight = df[weights].sum()
    return sum_values / total_weight
data.groupby('_unit_id').apply(lambda x: weigthed_mean(x,'_trust','similarity_0'))
data.groupby('_unit_id').apply(lambda x: (x['_trust']*x['similarity_0']).sum()/(x['_trust'].sum()))
## Weighted majority voting
Now we need, for each unit, to find the category with the highest trust score
data.head()
def weigthed_majority(df, weights, values):  # df is a dataframe containing a single question
    # print(df.groupby(values)[weights].sum())
    best_value = df.groupby(values)[weights].sum().idxmax()  # idxmax returns the label of the category with the highest total weight
    return best_value
data.groupby('_unit_id').apply(lambda x: weigthed_majority(x,'_trust','better_0'))
## Creating a summary table
results = pd.DataFrame()
results['better'] = data.groupby('_unit_id').apply(lambda x: weigthed_majority(x,'_trust','better_0'))
results['similarity'] = data.groupby('_unit_id').apply(lambda x: weigthed_mean(x,'_trust','similarity_0'))
results['better_code'] = results['better'].astype('category').cat.codes
results
# Free text
Now we analyse the case in which we have free text
data['better_0'].unique()
array(['Your keyword', 'The two keywords are completely identical',
'Search engine query'], dtype=object)
data['explanation_0'].unique()
array(['they are all dressed well and using computers so its more like a business scenario.',
'Almost identicalexcept the tiny spelling difference.',
'both are similar', 'they both describe the same kind of people',
'We can see a relaxed state in that images', 'YES',
'A person is generalized and one cannot find the images of Einstein or kids in them.',
'they are calm', 'genious', 'only 1 image',
'interested in their work',
'i think this is correct that calm person because every one is calm in this images',
'images looks like taking a deep breath',
'it now seems more like to give these results whn we think of interested person rather than thinking and surprising',
'based on result of image', 'whipping', 'Yes',
'calm person and calmness same',
'result suits more to this kerword', 'yes', 'anger',
'hot air baloon', 'both are the same', 'same attitude of boss',
'the results are same',
'all my words are feature of Search engine query',
'They all are working in the office',
'in image person looking very casual',
'both refer to the same traits but intelligent word is more suited',
'i know', 'Because all people here look casual.', 'both are same',
'Casualness is used in both the words',
'interested person only can do Research, smart, thinging',
'Casual person is more accurate of the images.',
'i believe this is my personal theory..so i think aggressive person would be better keyword for these images',
'My keyword "happy people" and Search engine query "calm person" is almost same.',
'My answer is more specific regarding images.',
'i know need search engine when i already knew it',
'BOTH ARE SIMILAR',
'by query image i understood that person seems very angry',
'Everything is related with warm',
'it gives better ideas about all the image',
'we got the same image when search in google',
'with the facial expression we can find him too aggresive',
'very much about that', "It's the image of that",
'Smart Person Bring Innovation and must have high IQ',
'person in aggression is shouting at others',
'they are all were casual dress', 'By nature',
'because it shows that',
'people are working i guess working people is more apt',
'They also look happy',
'On detailed viewing smart person might be a better keyword.',
'everybody is yelling',
'a complete act of expression works out here'], dtype=object)
We can't use weighted majority voting here! We first need to assign a score to these values.
## Exercise
• Create a function that assigns a score to each value of the column 'explanation_0' (for example the text length, len(text), or whether it contains some words from a list, str in value); look here for reference https://pandas.pydata.org/pandas-docs/stable/text.html
• create a column with this score
• generate a weighted mean for it (using '_trust')
def compute_score(text):
    score = 0
    for i in ['similar', 'name', 'something']:
        if i in text:
            score += 1
    return score
data['score'] = data['explanation_0'].apply(compute_score)
data.groupby('_unit_id').apply(lambda x: weigthed_mean(x,'_trust','score'))
_unit_id
1 0.000000
2 0.000000
3 0.000000
4 0.000000
6 0.955167
7 0.000000
10 0.000000
13 0.000000
14 0.000000
15 0.000000
16 0.000000
17 0.467747
20 0.000000
21 0.000000
23 0.369922
24 0.000000
25 0.464503
26 0.000000
27 0.000000
30 0.000000
31 0.000000
dtype: float64
data['time_spent'] = pd.to_datetime(data['_created_at']) - pd.to_datetime(data['_started_at'])
data['time'] = pd.to_numeric(data['time_spent'])/1e9
data.groupby('_unit_id').apply(lambda x: weigthed_mean(x,'_trust','time'))
_unit_id
1 1226.583864
2 1346.047720
3 1541.481515
4 1396.977989
6 474.190359
7 1522.166724
10 1016.627134
13 1064.354107
14 1700.328074
15 1330.235426
16 690.252255
17 697.133681
20 1510.000000
21 1315.286800
23 1025.093862
24 1356.460091
25 1196.422715
26 1626.726723
27 1244.551237
30 681.941651
31 1631.000000
dtype: float64
https://codereview.stackexchange.com/questions/216545/constructor-for-a-packagetarget-struct
# Constructor for a packagetarget struct
As many of you know, goto is usually a sign of code smell. However, I thought this could be an appropriate case, and would like confirmation or criticism.
Unnecessary sections, such as the called functions, were removed. Every non-standard function returns an int as a status, where 0 is "OK", except linkedlist_open(), which returns a pointer that may be NULL if the system runs out of memory.
packagetarget_close() does implement the necessary NULL checks.
packagetarget *packagetarget_open()
{
packagetarget *target = (packagetarget*) malloc(sizeof(packagetarget));
if (!target) return NULL;
if (packagetarget_setname(target, ""))
goto disposer;
if (packagetarget_setsys(target, PACKAGETARGET_SYS))
goto disposer;
if (packagetarget_setarch(target, PACKAGETARGET_ARCH))
goto disposer;
if (packagetarget_setmin(target, PACKAGETARGET_MIN))
goto disposer;
if (packagetarget_setver(target, PACKAGETARGET_VER))
goto disposer;
if (packagetarget_setmax(target, PACKAGETARGET_MAX))
goto disposer;
linkedlist *comp = linkedlist_open();
if (!comp) goto disposer;
target->comp = comp;
return target;
disposer:
packagetarget_close(target);
return NULL;
}
Each setter function follows a simple pattern. Since each setter is identical, I will only put the code for packagetarget_setname.
int packagetarget_setname(packagetarget *target, char *name)
{
if (!target) return 1;
if (!name) return 2;
target->name = realloc(target->name, strlen(name) + 1); /* +1 for the terminating '\0' */
if (!target->name) return 3;
strcpy(target->name, name);
return 0;
}
Here is the packagetarget_close function:
void packagetarget_close(packagetarget *target)
{
if (!target) return;
if (target->name) free(target->name);
if (target->sys) free(target->sys);
if (target->arch) free(target->arch);
if (target->min) free(target->min);
if (target->ver) free(target->ver);
if (target->min) free(target->min);
free(target);
return;
}
linkedlist_open is a similar function to the packagetarget_open.
linkedlist *linkedlist_open()
{
linkedlist *list = (linkedlist*) malloc(sizeof(linkedlist));
if (!list) return NULL;
list->length = 0;
list->remhook = NULL;
return list;
}
The one thing most of these functions have in common is that they may fail when the system runs out of memory, so I have implemented checks for each step.
• "However I thought this could be an appropriate case, and would like confirmation or criticism." Yet you've stripped the code out of all it's context so we can't determine whether you're right or not. The code you've provided looks like it should've been used in a wrapper, not with goto constructs. Please provide more code and an explanation of what it's supposed to do so we can see how this snippet is being used. Stack Overflow likes minimal examples, Code Review does absolutely not. Please take a look at our FAQ on asking questions. – Mast Mar 30 at 21:09
• Not all of the functions are written, yet. This function is a constructor for a struct named packagetarget. The idea is to allocate the struct in memory, and then call the setter methods for each field, which may fail. EDIT: If any step fails, the destructor function is called, once; and the function returns a null pointer. – utkumaden Mar 30 at 21:11
• If not all the functions are written, are you sure it works the way it should? And how can you be sure this is the right way to do it if you haven't completed the rest? It sounds like you're simply too early in the process got get this meaningfully reviewed and your question answered. – Mast Mar 30 at 21:15
• I have updated the question to include more code. – utkumaden Mar 30 at 21:27
• What is a constructor in C? I know what a constructor is in C++. Did you mean C++? – pacmaninbw Mar 30 at 22:12
## packagetarget_close()
First of all, you have a copy-and-paste error in packagetarget_close(), where you attempt to free target->min twice, but not target->max.
Next, note that most of the if statements are superfluous. As per the standard behaviour for free(),
If ptr is a null pointer, the function does nothing.
## packagetarget_open()
You have undefined behaviour due to the way your error-handling works. The chunk of memory returned by malloc() contains arbitrary junk. If packagetarget *target = (packagetarget*) malloc(sizeof(packagetarget)) succeeds, but one of the setters fails, then you would call packagetarget_close(), which would then interpret that arbitrary junk as pointers to memory to be freed. A good way to fix that is to zero the memory before calling any of the setters. You can either use calloc() instead of malloc(), or memset(), or a struct initializer.
## Avoiding malloc()
In C, I prefer the init-cleanup idiom over new-destroy (which you call open-close). In the init-cleanup idiom, the caller is responsible for providing the chunk of memory to be initialized, which gives the caller the option of providing either stack-based or heap-based memory.
## Avoiding goto
While indiscriminate use of goto leads to spaghetti code, there are some circumstances where goto is justifiable, if used in a readily recognizable pattern.
I think that your use of goto isn't horrible, but personally I would prefer to write the if statements as a chain of && expressions.
## Suggested solution
#include <stdlib.h>
#include <string.h>
typedef struct packagetarget {
char *name;
char *sys;
char *arch;
char *min;
char *ver;
char *max;
} packagetarget;
int packagetarget_setname(packagetarget *target, const char *name) {
if (!target) return 1;
if (!name) return 2;
target->name = realloc(target->name, strlen(name) + 1); /* +1 for the terminating '\0' */
if (!target->name) return 3;
strcpy(target->name, name);
return 0;
}
int packagetarget_setsys(packagetarget *target, const char *sys) {
…
}
int packagetarget_setarch(packagetarget *target, const char *arch) {
…
}
int packagetarget_setmin(packagetarget *target, const char *min) {
…
}
int packagetarget_setver(packagetarget *target, const char *ver) {
…
}
int packagetarget_setmax(packagetarget *target, const char *max) {
…
}
packagetarget *packagetarget_cleanup(packagetarget *target) {
if (target) {
free(target->name);
free(target->sys);
free(target->arch);
free(target->min);
free(target->ver);
free(target->max);
}
return target;
}
packagetarget *packagetarget_init(packagetarget *target) {
static const packagetarget empty = {0}; /* {0} is portable C; empty braces require C23 */
if (target) {
*target = empty;
if (!(0 == packagetarget_setname(target, "") &&
0 == packagetarget_setsys(target, PACKAGETARGET_SYS) &&
0 == packagetarget_setarch(target, PACKAGETARGET_ARCH) &&
0 == packagetarget_setmin(target, PACKAGETARGET_MIN) &&
0 == packagetarget_setver(target, PACKAGETARGET_VER) &&
0 == packagetarget_setmax(target, PACKAGETARGET_MAX))) {
packagetarget_cleanup(target);
return NULL;
}
}
return target;
}
int main() {
packagetarget t;
if (packagetarget_init(&t)) {
…
}
packagetarget_cleanup(&t);
}
• I did find the undefined behavior in packagetarget_close() myself too, but it was very late so I couldn't edit the question to fix it. Since I am very early in the project I may switch over idioms like you suggested because it made more sense. I used the open/close idioms because the only c api I am familiar with was Lua, which has the same structure. Also, marked as answer. – utkumaden Mar 31 at 4:34
https://tex.stackexchange.com/questions/446425/change-of-font-in-my-moderncv-cv?noredirect=1
# Change of font in my moderncv CV
When I recompile my CV, which I last changed in August 2017, I suddenly get a different font.
Old font: (screenshot omitted)
New font: (screenshot omitted)
Here's the preamble of my CV:
\documentclass[10pt,a4paper,roman]{moderncv}
\moderncvstyle{banking}
\moderncvcolor{blue}
\nopagenumbers{}
\usepackage[utf8]{inputenc}
\usepackage[scale=0.75]{geometry}
EDIT 1: (log file omitted)
EDIT 2:
Here's a compilable example extracted from my cv:
\documentclass[10pt,a4paper,roman]{moderncv}
\moderncvstyle{banking}
\moderncvcolor{blue}
\nopagenumbers{}
\usepackage[utf8]{inputenc}
\usepackage[scale=0.75]{geometry}
\firstname{Simon}
\familyname{Jakobi}
\begin{document}
\begin{minipage}[t][0pt]{\linewidth}
\makecvtitle
\section{Studium}
\end{minipage}
\end{document}
• How do you compile? Maybe you have compiled with pdfLaTeX the last time and now used a different engine? But without a minimal compilable example, it is hard to help you here. (Btw: Do you have the log file of both runs?) Aug 17, 2018 at 13:02
• Surely you can just take your CV, remove all personal information and add a few bogus definitions to show the fonts. Aug 17, 2018 at 13:18
• The old CV loaded tgpagella (TeX Gyre Pagella), the new one doesn't. Aug 17, 2018 at 13:23
• @sjakobi The banking style automatically uses tgpagella, iff it is installed. You had the font installed in your old TeX installation but not in the current one. Just install the tex-gyre package using your package manager. Aug 17, 2018 at 13:26
• @MarcelKrüger I would like to ask you to write up a short answer instead of closing. The behaviour of the class is unusual enough that this could confuse more people. I personally think \IfFileExsits for package loading is not a great idea and will file a feature request to drop it. Aug 17, 2018 at 13:34
The "old font" is the standard font of the banking style in moderncv, TeX Gyre Pagella. You do not have to write anything in your tex file to activate it, but it has to be installed.
Sadly moderncv uses a trick to avoid issuing an error message if the font is not found, so the only indication that the font is missing is the automatic fallback to the "new font" (This is the TeX standard font Computer Modern).
To install "TeX Gyre Pagella" depends on your system. According to your log file you use Debian or Ubuntu, so you have to install the package named tex-gyre.
sudo apt-get install tex-gyre
then follow the instructions from apt-get. Afterwards compiling your document gives the right font again.
https://dsp.stackexchange.com/questions/25735/python-write-wav-to-chosen-directory
# Python: write .wav to chosen directory [closed]
I can only write .wav files to the current directory
fname = 'bassswoon.wav'
wav = wave.open(fname,'w')
wav.writeframes(struct.pack('h', bassswoon))
wav.close()
Adding a path to fname, i.e. fname = '/path/to/bassswoon.wav' does not work - the file is written neither in the intended directory ('/path/to/') nor in the working directory.
I don't see anything of use in the documentation - is it simply not possible with the wave object? Surely, there must be a way to hack it...
• This question is off-topic. You should post at Stack Overflow. – Jason R Sep 9 '15 at 11:34
• From what I can see, you are not writing, but you are trying to open... I suggest you to use scipy.io.wavfile module anyway. – jojek Sep 9 '15 at 11:37
• I thought it may be of interest to other people @ DSP – yunque Sep 9 '15 at 11:38
• @jojek see my edit for the rest of the code. I guess I may be looking at the wrong step in the process... will check scipy.io.wavfile – yunque Sep 9 '15 at 11:40
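A minimal sketch (not from the original thread) of the scipy.io.wavfile route suggested in the comments; the output directory and the generated signal here are made up:

# Write a .wav to a chosen directory; '/path/to' is a hypothetical target.
import os
import numpy as np
from scipy.io import wavfile

rate = 44100
t = np.linspace(0, 1, rate, endpoint=False)
signal = (0.5 * np.sin(2 * np.pi * 110 * t) * 32767).astype(np.int16)  # 1 s tone

out_dir = '/path/to'                     # hypothetical target directory
os.makedirs(out_dir, exist_ok=True)
wavfile.write(os.path.join(out_dir, 'bassswoon.wav'), rate, signal)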
https://www.cfm.brown.edu/people/dobrush/am34/glossary/
## Glossary
**Abel, Niels**: Niels Henrik Abel (1802--1829) was a Norwegian mathematician who made pioneering contributions in a variety of fields.

**Abel's formula**: Abel's formula (or Abel's identity) is an equation that expresses the Wronskian of two solutions of a homogeneous second-order linear ordinary differential equation in terms of a coefficient of the original differential equation.

**Abscissa**: The first coordinate (usually horizontal) of a point in a coordinate system.

**Abscissa of convergence**: The infimum of the real numbers σ such that a Laplace transform (or Dirichlet series) converges for ℜ(s) > σ.

**Adiabatic invariant**: When the parameters of a physical system vary slowly under the effect of an external perturbation, some quantities are constant to any order of the variable describing the slow rate of change. Such a quantity is called an adiabatic invariant. This does not mean that these quantities are exactly constant, but rather that their variation goes to zero faster than any power of the small parameter.

**Adjoint**: Suppose that A is a linear operator from one vector space with inner product ⟨ , ⟩ into another vector space with inner product. The adjoint operator to A is the linear operator A* such that $$\left\langle A\,f, g \right\rangle = \left\langle f, A^{\ast} g \right\rangle$$ for any elements f and g. For example, if $$A = a_2 (x)\, \texttt{D}^2 + a_1 (x)\, \texttt{D} + a_0 (x) ,$$ where $$\texttt{D} = {\text d}/{\text d}x$$ is the derivative operator, then its adjoint operator acts on a function u as $$A^{\ast}\, u = \texttt{D}^2 \left( a_2 \, u \right) - \texttt{D} \left( a_1 \, u \right) + a_0 (x)\, u(x) .$$

**Analytic function**: A function is analytic at a point if the function has a power series expansion valid in some neighborhood of that point. An analytic function may consist of many holomorphic functions, called branches of the analytic function.

**Arakelian set**: A closed set E ⊂ ℂ, without holes, is an Arakelian set if, for every closed disc D ⊂ ℂ, the union of all holes of E ∪ D is a bounded set.

**Arakelian's theorem**: States that for every f continuous on E and holomorphic in the interior of E, and for every ε > 0, there exists g holomorphic in Ω such that |g − f| < ε on E, if and only if Ω* \ E is connected and locally connected.

**Asymptotic expansion**: Given a function f(x) and an asymptotic series { gk(x) } at x0, the formal series $$\sum_{k=0}^{\infty}a_k\,g_k(x),$$ where the { ak } are given constants, is said to be an asymptotic expansion of f(x) if $$f(x) - \sum_{k=0}^{n}a_k \,g_k(x)=o(g_n(x))$$ as x → x0 for every n; this is expressed as $$f(x) \sim \sum_{k=0}^{\infty}a_k\, g_k(x) .$$

**Basin of attraction**; **Bendixson, I.O.**; **Bernoulli, Daniel**; **Bessel, F.W.**; **Bessel equation**; **Bessel functions**; **Bessel inequality**; **Bessel series**

**Beta function**: The beta function, also called the Euler integral of the first kind, is a special function that is closely related to the gamma function and to binomial coefficients: $B(x,y) = \int_0^1 t^{x-1} \left( 1- t \right)^{y-1} {\text d} t = \frac{\Gamma (x)\,\Gamma (y)}{\Gamma (x+y)} .$

**Bifurcation**: A bifurcation occurs when a small smooth change made to the parameter values (the bifurcation parameters) of a system causes a sudden qualitative or topological change in its behavior.

**Blasius problem**: The Blasius boundary value problem on the semi-infinite interval 0 ≤ x < ∞ is $2\,f_{xxx} + f\,f_{xx} =0, \qquad f(0) = f_x (0) =0, \quad f_x (\infty ) = 1,$ where fx denotes the derivative of f(x) with respect to x. It is named after Paul Richard Heinrich Blasius (1883--1970), who introduced it in 1911.
**Blasius constant**: The constant fxx(x=0) ≈ 0.33205733621... is called the Blasius constant.

**Boundary data**: Given a differential equation, the value of the dependent variable on the boundary may be given in many different ways.

**Boussinesq equation**: The Boussinesq equation (1872): $\frac{\partial^2 \eta}{\partial t^2} = gh\,\frac{\partial^2 \eta}{\partial x^2} + gh \frac{\partial^2}{\partial x^2} \left( \frac{3}{2}\,\frac{\eta^2}{h} + \frac{1}{3}\,h^2 \frac{\partial^2 \eta}{\partial x^2} \right) .$

**Camassa–Holm equation**: The Camassa–Holm equation (1993) was introduced by Roberto Camassa and Darryl Holm as a bi-Hamiltonian model for waves in shallow water: $u_t + 2 \kappa\,u_x - u_{xxt} + 3u\,u_x = 2u_x u_{xx} + u\,u_{xxx} .$

**Clenshaw algorithm**: Also called Clenshaw summation, this is a recursive method to evaluate a linear combination of Chebyshev polynomials. It is a generalization of Horner's method for evaluating a linear combination of monomials.

**Companion matrix**: The companion matrix of the monic polynomial $p(\lambda ) = \lambda^n + c_{n-1} \lambda^{n-1} + \cdots + c_1 \lambda + c_0$ is the n×n matrix with ones on the subdiagonal and $-c_0 , -c_1 , \ldots , -c_{n-1}$ in the last column.

**Conjugate harmonic functions**: A pair of real harmonic functions u and v which are the real and imaginary parts of some analytic function $$f = u + {\bf j}\,v$$ of a complex variable.

**Crocco's equation**: Crocco's equation $\phi\,\frac{\partial^2 \phi}{\partial h^2} + \frac{1}{2}\,f(h) =0 ,$ where f is a given positive function, is usually considered on the unit interval 0 ≤ h ≤ 1 subject to some (mixed) boundary conditions.

**d'Alembert, J.**: Jean-Baptiste le Rond d'Alembert (1717--1783) was a French mathematician, mechanician, physicist, philosopher, and music theorist.

**Determinant**: The determinant of an n×n matrix A is a scalar value that can be computed from the elements of a square matrix and encodes certain properties of the linear transformation described by the matrix. The determinant of a matrix A is denoted det(A), det A, or |A| and is equal to (−1)^n times the constant term in the characteristic polynomial of A.

**Dirichlet boundary conditions**: The dependent variable is prescribed on the boundary. This is also called a boundary condition of the first kind.

**Dispersion relation**: In applied mathematics, when a solution of a partial differential equation can be represented as the Ehrenpreis integral over some contour L, $u(x,t) = \frac{1}{2\pi} \,\int_{L} {\text d}k\, e^{- \omega (k)\, t - {\bf j}kx} \rho (k) , \qquad {\bf j}^2 = -1,$ then the function ω(k) is referred to as the dispersion relation for the given differential equation. The concept of dispersion relations entered physics with the work of Kronig and Kramers in optics (known as the Kramers–Kronig relations).

**Dym equation**: The Dym equation (HD) is the third-order partial differential equation $u_t = u^3 u_{xxx} .$ The Dym equation first appeared in a paper by Martin Kruskal and is attributed to an unpublished paper by Harry Dym (born 1938).

**Ehrenpreis principle**: The Ehrenpreis fundamental principle was established by Ehrenpreis and Palamodov in 1970. It states that for the evolution partial differential equation $$u_t + \omega \left( -{\bf j}\partial_x \right) u = 0 ,$$ where ω(ν) is a polynomial, there exists a measure μ(ν) with support L such that $$u(x,t) = \int_L e^{{\bf j}\nu x - \omega (\nu )t} \,{\text d}\mu (\nu ) ;$$ however, the measure μ is not constructed explicitly.

**Euler's reflection formula**: $\Gamma \left( 1-z \right) \Gamma (z) = \frac{\pi}{\sin (\pi z)} , \qquad z \notin \mathbb{Z} .$

**Fejér, L.**: Lipót Fejér (1880--1959) was a Hungarian mathematician of Jewish heritage. Fejér was born Weisz Leopold ("Weisz" means "white") and changed to the Hungarian name Fejér (which also means "white") around 1900. During the period 1911--1959 he held the chair at Budapest University and led a highly successful Hungarian school of analysis. He was the thesis advisor of mathematicians such as John von Neumann, Paul Erdős, George Pólya, Pál Turán, and many others.
Fejér was born Weisz (which means "white") Leopold, and changed to the Hungarian name Fejér (which also means "white") around 1900. During the period (1911--1959) he was the chair at Budapest University and led a highly successful Hungarian school of analysis. He was the thesis advisor of mathematicians such as John von Neumann, Paul Erdős, George Pólya, Pál Turán, and many others. Fejér theorem Fejér's theorem, named for Hungarian mathematician Lipót Fejér, states that if f: ℝ → ℂ is a continuous function with period 2π, then the sequence (σn) of Cesàro means of the sequence (sn) of partial sums of the Fourier series of f converges uniformly to f on [-π,π]. Fixed point A fixed point, also known as an invariant point of a function is an element of the function's domain that is mapped to itself by the function. Fokas method The Fokas method (or unified transform method) was originally introduced by A.S. Fokas in 1990s. The method allows to construct solutions to evolution partial differential equations (that admit Lax pairs) in the explicit form that are always uniformly convergent at the boundaries. Fourier, J. Jean-Baptiste Joseph Fourier (1768--1830) was a French mathematician, physicis, and polytician who used Fourier series to solve heat transfer problems. Fourier accompanied Napoleon Bonaparte on his Egyptian expedition in 1798, as scientific adviser, and was appointed secretary of the Institut d'Égypte. Fourier transform There are several common conventions for defining the Fourier transform of an integrable complex-valued function f : ℝ → ℂ. We use the following notation for the Fourier transformation and its inverse. $\hat{f} (\xi ) =ℱ\left[ f(x) \right] (\xi ) = f^F (\xi ) = \int_{-\infty}^{\infty} f(t)\,e^{{\bf j} \xi\cdot t} \,{\text d}t$ with the inverse (that is valid for functions satisfying the Dirichlet conditions) $f(t) = ℱ^{-1} \left( \hat{f} \right) = \text{V.P.} \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat{f}(\xi )\,e^{-{\bf j} \xi\cdot t} \,{\text d}\xi = \lim_{N\to \infty} \frac{1}{2\pi} \int_{-N}^N \hat{f}(\xi )\,e^{-{\bf j} \xi\cdot t} \,{\text d}\xi = \frac{f(t+0) + f(t-0)}{2} .$ Gamma function The gamma function was introduced by Leonhard Euler, who suggested to use Γ, (the capital letter gamma from the Greek alphabet) for its notation $\Gamma \left( z \right) = \int_0^{\infty} x^{z-1} e^{-x} {\text d}x , \qquad \Re (z) > 0,$ Gauss, C.F. Johann Carl Friedrich Gauss (1777--1855) was a German mathematician and physicist who made significant contributions to many fields in mathematics and sciences. Sometimes referred to as the Princeps mathematicorum (Latin for "the foremost of mathematicians"). Glukhovsky-- Dolzhanksy system is a system of the form $$\begin{split} \dot{x} = -\sigma \left( x - y \right) -a yz , \\ \dot{y} = rx - y -xz , \\ \dot{z} = - bz + xy , \end{split}$$ where σ, a, r, b are physical parameters. Green, G George Green (1793--1841) was a British mathematical physicist who introduced several important concepts, among them a theorem similar to the modern Green's theorem. Green function A Green's function is the impulse response of an inhomogeneous linear differential equation defined on a domain, with specified initial conditions or boundary conditions. 
**Green theorem**: Green's theorem gives the relationship between a line integral around a simple closed curve C and a double integral over the plane region D bounded by C: $$\int_C P\,{\text d}x + Q\,{\text d}y = \iint_{D} \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right) {\text d}A .$$

**Horner rule**: Also known as Horner's method or Horner's scheme; a polynomial evaluation method named after the British mathematician William George Horner (1786--1837), expressed by $$p(x) = a_0 + a_1 x + \cdots + a_n x^n = a_0 + x \left( a_1 + x \left( a_2 + x \left( a_3 + \cdots + x \left( a_{n-1} + x\,a_n \right) \right) \right) \right) .$$

**Holomorphic function**: A function is holomorphic at a point if the function has a power series expansion valid in some neighborhood of that point.

**Hypergeometric function**: The Gaussian or ordinary hypergeometric function 2F1(a,b;c;x) is a special function represented by the hypergeometric series, $$_2F_1 (a,b,c;x) = \sum_{n\ge 0} \frac{a^{\overline{n}} b^{\overline{n}}}{c^{\overline{n}}} \, \frac{x^n}{n!} ,$$ that includes many other special functions as specific or limiting cases. It is a solution of a second-order linear ordinary differential equation (ODE), called the hypergeometric differential equation: $$x \left( 1- x \right) \frac{{\text d}^2 y}{{\text d}x^2} + \left[ c - (a+b+1)\,x \right] \frac{{\text d} y}{{\text d}x} -ab\,y =0 .$$ Here $$a^{\overline{n}} = a \left( a+1 \right) \cdots \left( a+n-1 \right)$$ is the rising factorial (sometimes called the Pochhammer function).

**Inhomogeneous equation**: An ordinary or partial differential equation is called inhomogeneous (or nonhomogeneous) if it contains an input (driving) function.

**Integrable equation**: An evolution partial differential equation is called integrable if it admits a Lax pair.

**Jeffery–Hamel flow**: The Jeffery–Hamel flow is a flow created by a converging or diverging channel with a source or sink of fluid volume at the point of intersection of the two plane walls. In dimensionless variables, it can be modeled by the boundary value problem for the third-order differential equation $$F''' + 2\,R_e\,\alpha \, F\,F' + 4\alpha^2 F' =0 , \qquad F(0) = 1, \quad F' (0) = 0 , \quad F(1) =0 ,$$ where α is the channel half-angle and Re is the Reynolds number of the flow.

**Kadomtsev–Petviashvili equation**: The Kadomtsev–Petviashvili equation – or KP equation, named after Boris Borisovich Kadomtsev and Vladimir Iosifovich Petviashvili – is a partial differential equation describing nonlinear wave motion: $$\partial_x \left( \partial_t u + u\,\partial_x u + \epsilon^2 \partial_{xxx} u \right) + \lambda \,\partial_{yy} u =0 ,$$ where λ = ±1.

**Kaup–Kupershmidt equation**: The Kaup–Kupershmidt equation (named after David J. Kaup and Boris Abram Kupershmidt) is the nonlinear fifth-order partial differential equation $u_t = u_{xxxxx} + 10\,u_{xxx}u + 25\,u_{xx} u_x + 20\,u^2 u_x = \frac{1}{6} \left( 6\,u_{xxxx} + 60\,u\,u_{xx} + 45\,u_x^2 + 40\,u^3 \right)_x .$

**Korteweg–de Vries (KdV) equation**: The Korteweg–de Vries (KdV) equation is a mathematical model of waves on shallow water surfaces. The KdV equation is a nonlinear, dispersive partial differential equation for a function ϕ of two real variables, space x and time t: $$\partial_t \phi + \partial_x^3 \phi -6\,\phi\,\partial_x \phi =0 ,$$ with ∂x and ∂t denoting partial derivatives with respect to x and t. The constant 6 in front of the last term is conventional but of no great significance.
The linearized KdV equation: $$u_t + u_x + u_{xxx} =0 .$$

**Lane–Emden equation**: The Lane–Emden equation is a dimensionless form of Poisson's equation for the gravitational potential of a Newtonian self-gravitating, spherically symmetric, polytropic fluid: $\frac{1}{\xi^2}\, \frac{\text d}{{\text d}\xi} \left( \xi^2 \frac{{\text d}\theta}{{\text d}\xi} \right) + \theta^n =0 .$ It is named after the astrophysicists Jonathan Homer Lane and Robert Emden.

**Lax pair**: A Lax pair is a pair of matrices or operators L(t), P(t) dependent on time and acting on a fixed Hilbert space, and satisfying Lax's equation $$\partial_t L = \left[ P, L \right] ,$$ where [P,L] = PL − LP is the commutator. In other words, a partial differential equation (PDE) in two independent variables for a function u(x,t) has a Lax pair formulation if the PDE can be written as $A_t - B_x + \left[ A, B \right] = 0 ,$ where both A and B are matrix functions.

**Morse potential**: The Morse potential, named after the physicist Philip M. Morse, is a convenient interatomic interaction model for the potential energy of a diatomic molecule. Its Hamiltonian is $H(p,q) = \frac{p^2}{2} + D\left( 1 - e^{-rq} \right)^2 ,$ where q stands for the bond length, D for the dissociation energy, and r for the anharmonic parameter. The exact solution is $q(t) = - \frac{1}{r} \,\ln \frac{1 - (E/D)^{1/2} \cos (\omega t + \varphi_0 )}{1 - E/D} ,$ where E is the total energy, ω is the anharmonic frequency of the oscillator given by $$\omega = \left( 2D - 2E \right)^{1/2} ,$$ and φ0 is the initial phase.

**Neighborhood**: A neighborhood is any set of points containing the point or subset of interest inside some open set. For example, a neighborhood containing the origin in one dimension could be [−0.1, 1], as it contains the point 0 inside the open symmetric interval (−0.1, 0.1). But [0, 1] is not a neighborhood of the origin, as it does not contain any open interval centered at zero. In a two-dimensional space, a neighborhood of the origin could be any set containing an open disc of radius ε (x² + y² < ε²) centered about the origin. See: Part II, iv.

**Nonlinear Schrödinger equation**: ${\bf j}\,\psi_t = -\frac{1}{2}\,\psi_{xx} + \kappa \left\vert \psi \right\vert^2 \psi ,$ where j is the unit vector in the positive vertical direction on the complex plane ℂ.

**Ordinate**: The second coordinate (usually vertical) of a point in a coordinate system.

**Poincaré map**: In dynamical systems, a first recurrence map or Poincaré map, named after Henri Poincaré (1854--1912), is the intersection of a periodic orbit in the state space of a continuous dynamical system with a certain lower-dimensional subspace, called the Poincaré section, transversal to the flow of the system. More precisely, one considers a periodic orbit with initial conditions within a section of the space, which leaves that section afterwards, and observes the point at which this orbit first returns to the section. One then creates a map sending the first point to the second, hence the name first recurrence map. The transversality of the Poincaré section means that periodic orbits starting on the subspace flow through it and not parallel to it. See: Part III, Chaos.

**Radiation condition**: The radiation condition states that a wave equation has no waves incoming from an infinite distance, only outgoing waves.
For example, the equation $$u_{t t}=\Delta\, u$$ might have the radiation condition $$u(x,t)\simeq A_{-}\exp(ik(t+x))$$ as $$x\to -\infty$$ and $$u(x,t)\simeq A_{+}\exp(ik(t-x))$$ as $$x\to +\infty ,$$ so that both waves travel away from the source. This is also called the Sommerfeld radiation condition.

Regular function A function is regular or holomorphic at a point if the function has a power series expansion valid in some neighborhood of that point.

Resolvent The resolvent of a linear operator A is $$R_{\lambda} = \left( \lambda\,I - A \right)^{-1} ,$$ where I is the identity operator.

Resolvent method The resolvent method was developed by Vladimir Dobrushkin in the 1980s. The method reduces a boundary value problem to an integral equation of the second kind on the boundary, so it reduces an n-dimensional problem to an (n-1)-dimensional one.

Schwarzian derivative If y = y(x), then the Schwarzian derivative of y with respect to x is defined to be $\displaystyle \{y,x\} \equiv \frac{y'''}{y'} - \frac{3}{2} \left( \frac{y''}{y'} \right)^2 .$

Shock A shock is a narrow region in which the dependent variable undergoes a large change. Also called a "layer" or a "propagating discontinuity."

Sine-Gordon equation There are two equivalent forms of the sine-Gordon equation. In the (real) space-time coordinates, denoted (x, t), the equation reads: $\varphi_{tt} - \varphi_{xx} + \sin\varphi =0 ,$ where partial derivatives are denoted by subscripts. Passing to the light cone coordinates (u, v), akin to asymptotic coordinates where $$u = \frac{x+t}{2} , \quad v = \frac{x-t}{2} ,$$ the equation takes the form: $$\varphi_{uv} = \sin\varphi .$$

Sobolev S.L. Sergei Lvovich Sobolev (1908--1989) was a Russian mathematician who first introduced the generalized functions that were later called distributions. He was the first director of the Institute of Mathematics at Akademgorodok near Novosibirsk (Siberia). A Sobolev space is a vector space of functions equipped with a norm that is a combination of Lp-norms of the function itself and its derivatives up to a given order. The derivatives are understood in a suitable weak sense to make the space complete, thus a Banach space. Intuitively, a Sobolev space is a space of functions with sufficiently many derivatives for some application domain, such as partial differential equations, and equipped with a norm that measures both the size and regularity of a function.

Sturm J. Jacques Charles François Sturm (1803--1855) was a French mathematician.

Sturm chain A Sturm chain or Sturm sequence is a finite sequence of polynomials p0, p1, ... , pm, of decreasing degree with the following properties: p0 is square-free, i.e., it has no factors of the form $$q^2 (x)$$ for any polynomial q(x). If a is a root of p0(x), then the sign of p1(a) is the same as the sign of the derivative p0'(a), and in particular is nonzero. If a is a root of pi(x), for some i with 0 < i < m, then both pi-1(a) and pi+1(a) are nonzero. Moreover, the sign of pi-1(a) is the opposite of the sign of pi+1(a). pm(x)'s sign is constant and nonzero for all x.
Sturm--Liouville theory A classical Sturm--Liouville theory, named after French mathematicians Jacques Charles François Sturm (1803--1855) and Joseph Liouville (1809--1882), is a generalization of the eigenvalue problem to unbounded operators; namely, it is the theory of a real second-order linear differential equation of the form $$\displaystyle \frac{\text d}{{\text d}x} \left[ p(x)\, \frac{{\text d}y}{{\text d}x} \right] + q(x)\, y + \lambda\,w(x)\, y =0 ,$$ where y is a function of the free variable x. Here the functions p(x), q(x), and w(x) > 0 are specified at the outset. In the simplest of cases all coefficients are continuous on the finite closed interval [a,b], and p has a continuous derivative.

Singular point A point is a singular point of a linear differential equation if at least one of its coefficient functions fails to be analytic there; otherwise it is an ordinary point.

Zeeman model In 1972 (Zeeman, E.C.: Differential Equations and Nerve Impulse. Towards a Theoretical Biology, 4, pp. 8-67), Zeeman presented an important set of nonlinear dynamical equations for heartbeat modelling, based on the Van der Pol--Lienard equation: $\begin{split} \varepsilon\,\frac{{\text d}x}{{\text d}t} &= T\, x - x^3 - y, \quad T > 0, \\ \frac{{\text d}y}{{\text d}t} &= x - x_d . \end{split}$ Here the variable x represents the length of a muscle fiber in the heart and the variable y is an electrical control variable that triggers the electro-chemical wave leading to the heart contraction. The positive constant T represents the tension of the muscle and is related to blood pressure. The constant ε characterizes the heart. The initial conditions are usually taken as x(0) = 1 and y(0) = 0.
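As a quick illustration of the Zeeman model above (my addition, not part of the original glossary), the system can be integrated numerically. The values chosen for ε, T and x_d below are illustrative assumptions; only the initial conditions x(0) = 1, y(0) = 0 come from the entry:

```python
# Sketch: integrate the Zeeman heartbeat equations with SciPy.
# eps, T and x_d are assumed example values.
import numpy as np
from scipy.integrate import solve_ivp

eps, T, x_d = 0.2, 1.0, 0.0

def zeeman(t, state):
    x, y = state
    dx = (T * x - x**3 - y) / eps   # fast variable: muscle-fiber length
    dy = x - x_d                    # slow variable: electrical control
    return [dx, dy]

sol = solve_ivp(zeeman, (0.0, 20.0), [1.0, 0.0], max_step=0.01)
print(sol.y[0][-5:])   # tail of x(t): relaxation-type oscillations
```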
|
2021-10-18 15:57:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8916005492210388, "perplexity": 410.22459624648855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585204.68/warc/CC-MAIN-20211018155442-20211018185442-00019.warc.gz"}
|
http://cms.math.ca/10.4153/CJM-2003-006-2
|
# On the Zariski-van Kampen Theorem
Let $f \colon E\to B$ be a dominant morphism, where $E$ and $B$ are smooth irreducible complex quasi-projective varieties, and let $F_b$ be the general fiber of $f$. We present conditions under which the homomorphism $\pi_1 (F_b)\to \pi_1 (E)$ induced by the inclusion is injective.
|
2016-02-13 15:24:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9745745658874512, "perplexity": 216.87716955493178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701166739.77/warc/CC-MAIN-20160205193926-00108-ip-10-236-182-209.ec2.internal.warc.gz"}
|
https://www.ias.ac.in/listing/bibliography/boms/AHMED_DHAHRI
|
• AHMED DHAHRI
Articles written in Bulletin of Materials Science
• Structural, optical spectroscopy, optical conductivity and dielectric properties of BaTi$_{0.5}$(Fe$_{0.33}$W$_{0.17}$)O$_{3}$ perovskite ceramic
Fe and W co-substituted BaTiO3 perovskite ceramics, with compositional formula BaTi$_{0.5}$(Fe$_{0.33}$W$_{0.17}$)O$_3$, were synthesized by the standard solid-state reaction method and studied by X-ray diffraction, scanning electron microscopy and spectroscopic ellipsometry. The prepared sample contains two phases with the perovskite structure. The structure refinement of the BaTi$_{0.5}$(Fe$_{0.33}$W$_{0.17}$)O$_3$ sample was performed in the double cubic and hexagonal settings of the Fm$\bar{3}$m and P6$_3$/mmc space groups. Spectral dependences of the optical parameters (real and imaginary parts of the dielectric function, refractive index, extinction coefficient and absorption coefficient) were measured in the range between 1.4 and 4.96 eV by using ellipsometry experiments. A direct bandgap energy of 4.36 eV was found from the analysis of the absorption coefficient vs. photon energy. In addition, the oscillator energy, dispersion energy and zero-frequency refractive index values were found from the analysis of the experimental data using the Wemple–DiDomenico single-effective-oscillator model.
• # Bulletin of Materials Science
Volume 44, 2021
|
2021-06-16 22:13:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.460575133562088, "perplexity": 8133.469249993258}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487626122.27/warc/CC-MAIN-20210616220531-20210617010531-00430.warc.gz"}
|
http://stackoverflow.com/questions/9833392/break-string-into-list-of-characters-in-python
|
# Break string into list of characters in Python
So what I want to do is essentially pull a line of text from a .txt file, assign the characters to a list, and then create a list of all the separate characters.
So a list of lists.
At the moment, I've tried:
fO = open(filename, 'rU')
and that's all I'm up to. I don't quite know how to extract the single characters and assign them to a new list.
I want to do something like:
fL = 'FHFF HHXH XXXX HFHX'
^^^ so that being the line i got from the .txt file.
And then turn it into this:
['F', 'H', 'F', 'F', 'H' ...]
^^^ and that being the new list, with each single character on it's own.
-
Strings are iterable (just like a list).
I'm interpreting that you really want something like:
fd = open(filename,'rU')
chars = []
for line in fd:
    for c in line:
        chars.append(c)
or
fd = open(filename, 'rU')
chars = []
for line in fd:
    chars.extend(line)
or
chars = []
with open(filename, 'rU') as fd:
    list(map(chars.extend, fd))  # wrap in list() so it runs under Python 3, where map is lazy
chars would contain all of the characters in the file.
-
Brilliant koblas, thank you! – FlexedCookie Mar 23 '12 at 2:44
@FlexedCookie itertools.chain is really the simplest for this -- chars = list(itertools.chain.from_iterable(open(filename, 'rU'))). – agf Mar 23 '12 at 3:00
The code above does not account for the whitespaces, i.e., " " – Sebastian Raschka Jul 25 '13 at 4:34
You can do this using list:
fNewList = list(fL)
Be aware that any spaces in the line will be included in this list, to the best of my knowledge.
-
fO = open(filename, 'rU')
-
In python many things are iterable including files and strings. Iterating over a filehandler gives you a list of all the lines in that file. Iterating over a string gives you a list of all the characters in that string.
charsFromFile = []
filePath = r'path\to\your\file.txt' #the r before the string lets us use backslashes
for line in open(filePath):
    for char in line:
        charsFromFile.append(char)
        # apply code on each character here
or if you want a one liner
#the [0] at the end is the line you want to grab.
#the [0] can be removed to grab all lines
[list(a) for a in list(open('test.py'))][0]
Edit: as agf mentions you can use itertools.chain.from_iterable
His method is better, unless you want the ability to specify which lines to grab: list(itertools.chain.from_iterable(open(filename, 'rU')))
This does however require one to be familiar with itertools, and as a result loses some readability.
If you only want to iterate over the chars, and don't care about storing a list, then I would use the nested for loops. This method is also the most readable.
-
Or use a fancy list comprehension, which is supposed to be "computationally more efficient" when working with very large files/lists:
fd = open(filename,'r')
chars = [c for line in fd for c in line if c != " "]
fd.close()
Btw: The answer that was accepted does not account for the whitespaces...
-
So to add the string hello to a list as individual characters, try this:
newlist = []
newlist[:0] = 'hello'
print (newlist)
['h','e','l','l','o']
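One more variant, added here for completeness rather than taken from the thread: on modern Python the whole job is a one-liner with pathlib (newlines and spaces are kept, so filter them out if you don't want them):

```python
from pathlib import Path

chars = list(Path("input.txt").read_text())       # every character in the file
letters = [c for c in chars if not c.isspace()]   # drop spaces and newlines
```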
|
2014-03-14 21:10:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4124531149864197, "perplexity": 2792.702985104158}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678694628/warc/CC-MAIN-20140313024454-00066-ip-10-183-142-35.ec2.internal.warc.gz"}
|
https://www.acmicpc.net/problem/3205
|
Time limit | Memory limit | Submissions | Accepted | Solvers | Ratio
1 s | 128 MB | 4 | 2 | 1 | 33.333%
Problem
A text in a book consists of a sequence of lines. A line may contain references to footnotes. A footnote consists of one or more lines and it has to be printed together with its reference on the same page. Once a footnote is printed on a page, only another footnote may follow it on that page. The maximal number of lines that can be printed on one page is known. No page of the book may contain more than that number of lines, including footnotes.
Write a program that will compute the minimal number of pages a book can have.
Input
The first line of input contains two integers: N, the number of lines in the document (2 ≤ N ≤ 1000), and K, the maximal number of lines a page of the book may contain (2 ≤ K ≤ 1000), separated by a space character.
The second line of input contains an integer F, 1 ≤ F ≤ 100, a number of footnotes in a book.
Each of the next F lines consists of two numbers, X and Y, separated by a space character, meaning that the X-th line of the text has a reference to a footnote consisting of Y lines. The footnote descriptions will be sorted with respect to the lines where they are referenced.
Note: Input data will be chosen so that a solution always exists.
Output
The first and only line of output should contain the minimal number of pages a book can have.
Sample Input 1
5 5
1
3 2
Sample Output 1
2
Sample Input 2
7 3
2
2 1
4 2
Sample Output 2
4
Sample Input 3
10 5
5
3 3
4 1
6 2
6 1
9 3
Sample Output 3
6
|
2021-10-28 14:57:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3109460175037384, "perplexity": 925.9848041548711}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588341.58/warc/CC-MAIN-20211028131628-20211028161628-00675.warc.gz"}
|
http://mathhelpforum.com/advanced-algebra/128342-interpreting-zero-norm-complex-vector.html
|
# Math Help - Interpreting a zero norm of complex vector
1. ## Interpreting a zero norm of complex vector
What can we conclude about the norm of a complex vector (dimension 2) when it's equal to 0? What does it mean? What can be said about their components x and y?
2. Originally Posted by janie_t
What can we conclude about the norm of a complex vector (dimension 2) when it's equal to 0? What does it mean? What can be said about their components x and y?
$z = x + iy$
$|z| = 0 \Longleftrightarrow z = 0$, that is $x = 0$ and $y = 0$
3. But what if x and y are not equal to 0 and the norm is 0. Basically x and y are complex vectors and their norm is 0 so what can I conclude from that?
4. Originally Posted by janie_t
But what if x and y are not equal to 0 and the norm is 0. Basically x and y are complex vectors and their norm is 0 so what can I conclude from that?
which one is it? x, y are the components or x,y are the complex vectors?
anyway, x,y should be 0
5. Suppose you have the following vector x= (5+2i,-2+5i), then the norm ||x||=0.
6. Originally Posted by janie_t
Suppose you have the following vector x= (5+2i,-2+5i), then the norm ||x||=0.
what is your definition for norm here?
7. Let x1 and x2 be the components of the vector x, then the norm of x is SQUARE ROOT [ (x1)^2 + (x2)^2 ]
8. Originally Posted by janie_t
Let x1 and x2 be the components of the vector x, then the norm of x is SQUARE ROOT [ (x1)^2 + (x2)^2 ]
That is not the norm of a 2-vector over $\mathbb{C}$. Assuming $\bold{x}$ is a column vector, the usual norm is:
$\|\bold{x}\| = \sqrt{\overline{\bold{x}}^T \bold{x}} = \sqrt{x_1\overline{x_1} + x_2\overline{x_2}} = \sqrt{|x_1|^2 + |x_2|^2}$
which is $0$ if and only if both $x_1$ and $x_2$ are $0$.
CB
9. Originally Posted by janie_t
Let x1 and x2 be the components of the vector x, then the norm of x is SQUARE ROOT [ (x1)^2 + (x2)^2 ]
this is not define a "norm" because a norm must satisfy $||x|| = 0 \leftrightarrow x = 0$
in here, you have $||x|| = 0$ but $x \not = 0$
10. I am talking about the Euclidean norm.
Suppose you have the following vector x= (5+2i,-2+5i), then how come I get ||x||=0. I don't understand why the Euclidean Norm of the vector is 0 when its components are not zero.
11. Because it's not the Euclidean norm, as others have pointed out.
12. x=(5+2i,-2+5i)
Well,
||x||^2 = (5+2i)^2 + (-2+5i)^2
||x||^2 = (25 + 10i + 4i^2) + ( 4 -10i + 25i^2)
||x||^2 = (25+10i-4) + (4-10i-25) if we let (i^2 = -1)
||x||^2 = 25 + 10i -4 + 4 - 10i - 25
||x||^2 = 0
||x|| = 0
? Is there any mistake in my calculations? The initial components of x were not 0, yet the norm is zero.
?
13. I suggest you read this.
14. Sorry for bothering you all! I found my mistake!
15. And it was exactly what everyone had been telling you all along?
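A quick numerical check of the point made in this thread (added here, not part of the original posts): NumPy's norm uses the conjugate (Hermitian) inner product, so it is nonzero for this vector, while the naive sum of squares vanishes.

```python
import numpy as np

x = np.array([5 + 2j, -2 + 5j])

print(np.linalg.norm(x))        # sqrt(|5+2i|^2 + |-2+5i|^2) = sqrt(58) ≈ 7.6158
print(np.sqrt(np.sum(x * x)))   # naive "norm" without conjugation: 0
```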
|
2015-02-28 05:39:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9192503690719604, "perplexity": 513.442074002517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936461650.2/warc/CC-MAIN-20150226074101-00075-ip-10-28-5-156.ec2.internal.warc.gz"}
|
https://marked2app.com/help/Multi-File_Documents.html
|
# Multi-file Documents
Marked allows several different syntaxes for including one file within another.
## Marked Syntax
You can include external files in a single preview document by using the syntax <<[path/file] at the beginning of a line. The line should have blank lines above and below it, and the path is assumed to be relative to the main document unless it begins with a slash (/) or a tilde (~). Slash (root directory) and tilde (home directory) may be used to define absolute paths to files. No path is needed if the external files are in the same folder as the main document, just put the filename (case sensitive and including extension) in the square brackets.
You can use the metadata headers “Include Base” or “Transclude Base” to change the base location for included files, e.g.:
Transclude Base: ~/Desktop
Note that when viewing documents with included files, you can type “I” (shift-i) to see which included file is in the visible area. Pressing return while the included file path is displayed will open the included file in the default editor.
Using this feature you can build large documents/books using multiple files (e.g. a file for each chapter) and then specify the document order in a single index file. It doesn’t matter how any of the files are named or how the folders are organized; the file you open in Marked will be considered the index and the files listed inside it will be included. An example of an index file for a three-part document:
Folder structure:
Index.md:
# Document title
## Section 1
<<[sections/section1.md]
## Section 2
<<[sections/section2.md]
## Section 3
<<[sections/section3.md]
Opening Index.md in Marked will display its contents with all three included files expanded inside. All included files will be watched for changes. Unlike the open document in Marked, included file tracking depends on Spotlight to obtain updates, so included files must live in a Spotlight-indexed folder on your disk.
You can also include code snippets and raw html or text using variations of this syntax.
The final HTML export of a document containing includes will have HTML comments containing the relative path of the included file at the beginning and end of the imported text.
Note: the more files included in a document, the slower the overall compile time of the preview will be. Marked tries to optimize and cache the process, but expect some rendering delays as your document size increases.
## MultiMarkdown Transclude Syntax
You can also use {{filename}} syntax based on the newer MultiMarkdown spec. Marked will recognize Transclude Base: path in MMD metadata and use it as the base for file transclusion.
Transclude Base will only be recognized in the parent document, not in additional included documents. All nested includes must have paths based on the initial Transclude Base, or from the location of the parent document.
The fenced code syntax that MultiMarkdown provides for including files without processing will not work in Marked. To do this, please use the <<(file) (code block) or <<{file} (raw) syntax.
## IA Writer Block syntax
Marked 2.5.11+ supports the IA Writer Content Block syntax. This is a reference beginning with a forward slash (/) on its own line. It can be a code sample, an image, a markdown file, or a CSV file. All will be handled appropriately based on the extension of the included file, and CSVs will be converted into Markdown tables if possible.
In IA writer, included files are brought into the iCloud container and don’t always require “actual” paths. In Marked, unless included files already exist in the same folder as the file being previewed, this syntax should be used with a path, either absolute or relative. The first slash will be ignored, so if it’s an absolute path, start with two slashes.
A code snippet in the same folder as the document being previewed:
/snippet.h
Relative path to a subdirectory called “images”:
/images/image.png "optional title"
Absolute path to the Documents folder:
//Users/username/Documents/content.csv
## Book Formats
Marked also supports index files in formats like Leanpub, GitBook and mmd_merge (MultiMarkdown). Files included in book format indexes will be watched for changes and the result is a complete preview of your compiled document, just like the “Index.md” example above.
### Leanpub
If you enable the option in the Marked 2 Preferences, Apps pane under Leanpub/GitBook support, files named “Book.txt” will be treated as Leanpub index files automatically. The older “frontmatter:” format is also recognized.
Leanpub documentation.
Leanpub Book.txt example:
frontmatter:
Acknowledgments.txt
Preface.txt
Introduction.txt
mainmatter:
Markdown.txt
Sample Books.txt
Inserting Images.txt
### mmd_merge
For mmd_merge, Marked requires that the first line be "#merge" (a special Marked trigger for mmd_merge, treated as a comment and ignored by other processors).
mmd_merge documentation.
mmd_merge example:
#merge
Chapter-1.md
sub-chapter-1-1.md
sub-chapter-1-2.md
Chapter-2.md
sub-chapter-2-1.md
sub-chapter-2-2.md
FAQ.md
Acknowledgments.md
### GitBook
GitBook formatting uses a Markdown list to create the structure and Table of Contents. If GitBook support is enabled in the Marked 2 Preferences, Apps pane under Leanpub/GitBook support, a file named SUMMARY.md will be read and automatically converted to mmd_merge format, allowing a full preview of your GitBook document.
GitBook documentation.
GitBook SUMMARY.md example:
# Summary
* [Writing is nice](part1/writing.md)
* [GitBook is nice](part1/gitbook.md)
* [Better tools for authors](part2/better_tools.md)
GitBook allows for anchors to be used in SUMMARY.md table of contents, but Marked will ignore these and only include the base document one time.
## Multi-file Document Preview Features
When viewing a document containing included files, you can use two features to help figure out which file you’re looking at.
• Keyboard: Pressing I will briefly display a popup showing the title of the file currently visible at the scroll position of the preview.
• Pressing Return following I will edit the displayed file with your external editor.
• Mouse: Selecting “Show Boundaries of Included Files” from the Gear menu (B) will add a colored bar to the left side of the preview, segmented to show the beginning and end of included files. It also shows nested includes. Hovering over a section of this bar will show the name of the file it represents, and clicking it will open that file in your chosen editor.
|
2020-08-03 23:11:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43832656741142273, "perplexity": 4976.960798281662}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735836.89/warc/CC-MAIN-20200803224907-20200804014907-00467.warc.gz"}
|
https://mathtuition88.com/2015/03/19/solution-to-hp-a4-printer-paper-mysterious-question/
|
## Solution to HP A4 Printer Paper Mysterious Question
A while ago, I posted the HP A4 Paper Mysterious Question which goes like this:
Problem of the Week
Suppose $f$ is a function from positive integers to positive integers satisfying $f(1)=1$, $f(2n)=f(n)$, and $f(2n+1)=f(2n)+1$, for all positive integers $n$.
Find the maximum of $f(n)$ when $n$ is greater than or equal to 1 and less than or equal to 1994.
So far no one seems to have solved the question on the internet yet!
I have given it a try, and will post the solution below!
If you are interested in Math Olympiad, it is a good idea to invest in a good book to learn more tips and tricks about Math Olympiad. One excellent Math Olympiad author is Titu Andreescu, trainer of the USA IMO team. His book 104 Number Theory Problems: From the Training of the USA IMO Team is highly recommended for training specifically on Number Theory Olympiad questions, one of the most arcane and mysterious fields of mathematics. He does write on other Math Olympiad subjects too, like Combinatorics, so do check it out by clicking the link above, and looking at the Amazon suggested books.
Now, to the solution of the Mysterious HP A4 Paper Question:
We will solve the problem in a few steps.
## Step 1
First, we will prove that $\boxed{f(2^n-1)=n}$. We will do this by induction. When $n=1$, $f(2^1-1)=f(1)=1$. Suppose $f(2^k-1)=k$. Then,
\begin{aligned} f(2^{k+1}-1)&=f(2(2^k)-1)\\ &=f(2(2^k-1)+1)\\ &=f(2(2^k-1))+1\\ &=f(2^k-1)+1\\ &=k+1 \end{aligned}
Thus, we have proved that $f(2^n-1)=n$ for all integers n.
## Step 2
Next, we will prove a little lemma. Let $g(x)=2x+1$. We will prove, again by induction, that $\boxed{g^n (1)=2^{n+1}-1}$. Note that $g^n(x)$ means the composition of the function g with itself n times.
Firstly, for the base case, $g^1(1)=2+1=3=2^2-1$ is true. Suppose $g^k (1)=2^{k+1}-1$ is true. Then, $g^{k+1}(1)=2(2^{k+1}-1)+1=2^{k+2}-1$. Thus, the statement is true.
## Step 3
Next, we will prove that if $y<2^n-1$, then $f(y)<n$. We will write $y=2^{\alpha_1}x_1$, where $x_1$ is odd. We have that $x_1<2^{n-\alpha_1}$.
\begin{aligned} f(y)&=f(2^{\alpha_1} x_1)\\ &=f(x_1) \end{aligned}
Since $x_1$ is odd, we have $x_1=2k_1+1$, where $k_1<2^{n-\alpha_1-1}$.
Continuing, we have
\begin{aligned} f(x_1)&=f(2k_1+1)\\ &=f(2k_1)+1\\ &=f(k_1)+1 \end{aligned}
We will write $k_1=2^{\alpha_2}x_2$, where $x_2$ is odd. We have $x_2<2^{n-\alpha_1-\alpha_2-1}$.
\begin{aligned} f(k_1)+1&=f(2^{\alpha_2}x_2)+1\\ &=f(x_2)+1 \end{aligned}
where $x_2=2k_2+1$, and $k_2<2^{n-\alpha_1-\alpha_2-2}$.
\begin{aligned} f(x_2)+1&=f(2k_2)+1+1\\ &=f(k_2)+2\\ &=\cdots\\ &=f(k_j)+j \end{aligned}
where $k_j=1$, $1=k_j<2^{n-\alpha_1-\alpha_2-\cdots-\alpha_j-j}$.
Case 1: All the $\alpha_i$ are 0. Then $y=2(2(\cdots 2(k_j)+1\cdots)+1)+1=g^j(1)=2^{j+1}-1$. Since $y<2^n-1$, we have $j+1<n$, i.e. $j<n-1$.
Thus, $f(y)=f(k_j)+j<1+n-1=n$.
Case 2: Not all the $\alpha_i$ are 0. Then $1=k_j<2^{n-\alpha_1-\alpha_2-\cdots-\alpha_j-j}\leq 2^{n-j-1}$. We have $2^0=1<2^{n-j-1}$, thus $0<n-j-1$, which means that $j<n-1$. Thus, $f(y)=f(k_j)+j<1+n-1=n$.
## Step 4 (Conclusion)
Using Step 1, we have $f(1023)=f(2^{10}-1)=10$, $f(2047)=f(2^{11}-1)=11$. Using Step 3, we guarantee that if $y<2047$, then $f(y)<11$. Thus, the maximum value of f(n) is 10.
Ans: 10
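As a computational sanity check (my addition, not part of the original solution), the recursion can be evaluated directly, which confirms the maximum:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n):
    if n == 1:
        return 1
    # f(2n) = f(n); f(2n+1) = f(2n) + 1
    return f(n // 2) if n % 2 == 0 else f(n - 1) + 1

print(max(f(n) for n in range(1, 1995)))  # prints 10
```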
http://mathtuition88.com
### 6 Responses to Solution to HP A4 Printer Paper Mysterious Question
1. ivasallay says:
I reached the same conclusion after finding all the values of f(x) up to f(15) = 4 and knowing that f(16) = 1 again, but I still felt a bit unsure of myself. You did a wonderful job proving the answer. Thank you!
• Thanks for reading the solution! It’s strange no one else on the Internet is interested in solving the HP printer paper question!
2. ivasallay says:
Or at least had as much confidence as you did to post an answer!
• Thanks! I just realized this problem is a little similar to the Collatz Conjecture, a famous unsolved problem! Math is mysterious indeed.
3. Again, thank you for posting a proof.
I’m a maths tutor and found the problem interesting when I bought the HP A4 paper last week, and shared it with a student. I liked it because I would like to encourage students to see that doing maths is puzzling over novel challenges and situations, not just getting to grips with the methods on their particular exam syllabus.
I came to the solution (10) by building up a graph of the function, working step by step from n=1 to n=2 to n=3 onwards, and observed that f(n) peaked, reaching progressively higher maxima at n equal to 7, then 15, then 31, followed by a drop to f(n) equal to 1, and an uneven climb after that.
I then noticed that 7, 15 and 31 are each one less than a power of 2 i.e. 2^n – 1.
Then it was satisfying to notice that the height of each new maximum peak was equal to the value of n in 2^n – 1.
From that, it was a short step to see that 1994 is between 2^10 and 2^11, so the solution had to be 10.
What I’m interested to know is whether my path to discovering the answer was similar to the paths of others who have found it?
Deriving the proof demonstrates unequivocally that the answer is right, but I wonder whether only publishing the proof might give the impression to inquisitive maths students having a go at the problem, that deriving the proof was how the answer (10) was found. I’d bet that you found the answer by a similar exploratory route to me, and set about deriving a proof afterwards. Am I right?
Also, as I came to my conjecture that the answer was 10 by inspecting f(n) as far as n=50 by hand on graph paper, I then decided to work out how to use an Excel spreadsheet to find all the values of f(n) from n=1 to n=1994.
(This turns out to need dynamic cell referencing.)
I got Excel to give me all the values of f(n) by first filling cells in column A in turn with 1,2,3 … up to 1994.
I used column B for each value of f(n).
The first row of column B I filled with 1 as given in the problem. I figured that an Excel function that would work would be one which would test whether the current row was odd or even. If even, the function would need to return the value of the cell in the row whose number was half the current row ( as f(n) = f(2n) was stated in the problem). Otherwise the current row number must be odd, and so the Excel function would need to return the value of 1 more than the previous row ( as f(2n+1)=f(2n) +1 ).
After some googling to find out how to dynamically make cell references – which turns out to be the INDIRECT function – I worked out this formula which worked, when I copied it down all the cells in column B starting from row 2 up to row 1994.
=IF(ISEVEN(A2), INDIRECT("B"&(ROW()/2)), B1 + 1).
• Hi Jonathan, thanks for your detailed comment! “I’d bet that you found the answer by a similar exploratory route to me, and set about deriving a proof afterwards. Am I right?” Yes, you are right indeed! My favorite approach to solving questions is Construct Examples, Find Patterns, and write a proof!
I hope you and your student enjoyed solving this problem as much as I did.
|
2017-09-22 20:39:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 53, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6542119979858398, "perplexity": 534.4097894858003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689192.26/warc/CC-MAIN-20170922202048-20170922222048-00337.warc.gz"}
|
https://forums.nsclient.org/t/using-registry-and-http-for-nsclient-ini/4382
|
# Using Registry and http for nsclient.ini
#1
Nsclient++ 5.0.62 x64
I am trying to get the http settings working correctly but currently I get this error-
C:\Program Files\NSClient++>nscp settings --switch http://server/nsclient/nsclient.ini
E settings Failed to download C:\Program Files\NSClient++/cache\nsclient.ini.tmp: Failed to GET http://server:80 302: c:\source\master\include\settings/impl/settings_http.hpp:96
E settings Failed to download C:\Program Files\NSClient++/cache\nsclient.ini.tmp: Failed to GET http://server:80 302: c:\source\master\include\settings/impl/settings_http.hpp:96
Current settings instance loaded:
E settings Failed to download C:\Program Files\NSClient++/cache\nsclient.ini.tmp: Failed to GET http://server:80 302: c:\source\master\include\settings/impl/settings_http.hpp:96
I am able to open the file in the browser and I am able to telnet to the server via port 80 with no issues…
Can I just check as well: if I use the http approach, do I have to use the registry settings?
Also- it says on https://docs.nsclient.org/manual/settings.html that the refresh period is configurable… how?
#2
Also- how are these settings set during install?
I want to use the registry and set the http location during install
|
2018-02-25 19:50:42
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8113204836845398, "perplexity": 14170.576943451399}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816912.94/warc/CC-MAIN-20180225190023-20180225210023-00157.warc.gz"}
|
http://supercgis.com/relative-error/relative-error-percent.html
|
# Relative Error Percent
In the mathematical field of numerical analysis, the numerical stability of an algorithm indicates how the error is propagated by the algorithm. Absolute error is simply the amount of physical error in a measurement.
Celsius temperature is measured on an interval scale, whereas the Kelvin scale has a true zero and so is a ratio scale. Unlike absolute error, where the error measures how much the measured value deviates from the true value, the relative error is expressed as a percentage ratio of the absolute error to the true value. The limits of these deviations from the specified values are known as limiting errors or guarantee errors. See also: accepted and experimental value, relative difference, uncertainty, experimental uncertainty analysis, propagation of uncertainty.
## Relative Error Formula
Relative error compares the absolute error against the size of the thing you were measuring. The error itself is the difference between the result of the measurement and the true value of what you were measuring.
The best way to learn how to calculate error is to go ahead and calculate it.
The relative error expresses the "relative size of the error" of the measurement in relation to the measurement itself. If you tried to measure something that was 12 inches long and your measurement was off by 6 inches, the relative error would be very large. Another example would be if you measured a beaker and read 5 mL ...
Here absolute error is expressed as the difference between the expected and actual values.
## Relative Error Definition
How do I calculate relative error when the true value is zero? Far away, where the signal is microvolts, I need precision down to the nanovolt; but near the source, where the signal is a few volts, I need millivolt precision. The solution is to weight the absolute error by the inverse of a yardstick signal that has similar fall-off properties to the signals of interest and is positive everywhere. Even if your space is anisotropic, as long as you still use $1/r^2$ as the denominator, the ratio would still work well as a relative error.
You'll need to calculate both types of error in science, so it's good to understand the difference between them and how to calculate them. Absolute error is a measure of how far "off" a measurement is from a true value. For example, if a measurement made with a metric ruler is 5.6 cm and the ruler has a precision of 0.1 cm, then the tolerance interval in this measurement is 5.6 ± 0.05 cm.
The percent error is the relative error expressed in terms of per 100.
To do so, simply subtract the measured value from the expected one. To continue the example of measuring between two trees: your absolute error was 2 feet, and the actual value was 20 feet, so the relative error is $\frac{2\,\mathrm{ft}}{20\,\mathrm{ft}} = 0.1$.
What if some of the experimental values are negative? Absolute errors do not always give an indication of how important the error may be: if you are measuring a small machine part (< 3 cm), an absolute error of 1 cm is very significant.
Relative error problems: below are some worked examples. Question 1: John measures the size of a metal ball as 3.97 cm but ... Find: (a) the absolute error in the measured length of the field; (c) the percentage error in the measured length of the field. Answer: (a) the absolute error in the length of the field is 8 feet.
For curve fitting, first consider that you have $[X(i),Y(i)]$ data points and that you want to adjust a model such as $$Y = a + bX + cX^2 .$$ Among your data points, you have one for ...
Then find the absolute deviation using the formula: absolute deviation $\Delta x$ = true value - measured value = $x - x_o$. Then substitute the absolute deviation value $\Delta x$ into the relative error formula. Even if the result is negative, make it positive. Normally people use absolute error, relative error, and percent error to represent such a discrepancy: absolute error = |Vtrue - Vused|; relative error = |(Vtrue - Vused)/Vtrue|.
The error is then a smaller percentage of the total measurement: $\frac{2\,\mathrm{ft}}{20\,\mathrm{ft}} = 0.1$ and $0.1 \times 100 = 10\%$ relative error. In the formula for relative error, the true signal itself is used as the yardstick, but it doesn't have to be, to produce the behaviour you expect from the relative error.
This is your absolute error! Example: you want to know how accurately you estimate distances by pacing them off. You pace from one tree to another and estimate that they're 18 feet apart.
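A short numeric illustration of the three quantities above (added here; the tree-pacing numbers come from the article's example):

```python
measured, true = 18.0, 20.0                  # paced estimate vs. actual distance

absolute_error = abs(true - measured)        # 2.0 feet
relative_error = absolute_error / abs(true)  # 0.1 (dimensionless)
percent_error = 100 * relative_error         # 10.0 %

print(absolute_error, relative_error, percent_error)
```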
|
2017-06-23 17:06:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5993944406509399, "perplexity": 949.5107011097119}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320077.32/warc/CC-MAIN-20170623170148-20170623190148-00070.warc.gz"}
|
https://www.iacr.org/cryptodb/data/author.php?authorkey=1564
|
## CryptoDB
### William E. Skeith III
#### Publications
Year
Venue
Title
2013
CRYPTO
2008
EPRINT
Searching and modifying public-key encrypted data (without having the decryption key) has received a lot of attention in recent literature. In this paper we re-visit this important problem and achieve much better amortized communication-complexity bounds. Our solution resolves the main open question posed by Boneh et al., \cite{BKOS07}. First, we consider the following much simpler to state problem (which turns out to be central for the above): A server holds a copy of Alice's database that has been encrypted under Alice's public key. Alice would like to allow other users in the system to replace a bit of their choice in the server's database by communicating directly with the server, despite other users not having Alice's private key. However, Alice requires that the server should not know which bit was modified. Additionally, she requires that the modification protocol should have "small" communication complexity (sub-linear in the database size). This task is referred to as private database modification, and is a central tool in building a more general protocol for modifying and searching over public-key encrypted data with small communication complexity. The problem was first considered by Boneh et al., \cite{BKOS07}. The protocol of \cite{BKOS07} to modify $1$ bit of an $N$-bit database has communication complexity $\mathcal{O}(\sqrt N)$. Naturally, one can ask if we can improve upon this. Unfortunately, \cite{OS08} give evidence to the contrary, showing that using current algebraic techniques, this is not possible to do. In this paper, we ask the following question: what is the communication complexity when modifying $L$ bits of an $N$-bit database? Of course, one can achieve naive communication complexity of $\mathcal{O}(L\sqrt N)$ by simply repeating the protocol of \cite{BKOS07}, $L$ times. Our main result is a private database modification protocol to modify $L$ bits of an $N$-bit database that has communication complexity $\mathcal{O}(\sqrt{NL^{1+\alpha}}\textrm{poly-log~} N)$, where $0<\alpha<1$ is a constant. (We remark that in contrast with recent work of Lipmaa \cite{L08} on the same topic, our database size {\em does not grow} with every update, and stays exactly the same size.) As sample corollaries to our main result, we obtain the following: \begin{itemize} \item First, we apply our private database modification protocol to answer the main open question of \cite{BKOS07}. More specifically, we construct a public key encryption scheme supporting PIR queries that allows every message to have a non-constant number of keywords associated with it. \item Second, we show that one can apply our techniques to obtain more efficient communication complexity when parties wish to increment or decrement multiple cryptographic counters (formalized by Katz et al.~\cite{KMO01}). \end{itemize} We believe that "public-key encrypted" amortized database modification is an important cryptographic primitive in its own right and will be useful in other applications.
2008
CRYPTO
2007
CRYPTO
2007
PKC
2007
EPRINT
In this paper we survey the notion of Single-Database Private Information Retrieval (PIR). The first Single-Database PIR was constructed in 1997 by Kushilevitz and Ostrovsky and since then Single-Database PIR has emerged as an important cryptographic primitive. For example, Single-Database PIR turned out to be intimately connected to collision-resistant hash functions, oblivious transfer and public-key encryptions with additional properties. In this survey, we give an overview of many of the constructions for Single-Database PIR (including an abstract construction based upon homomorphic encryption) and describe some of the connections of PIR to other primitives.
2007
EPRINT
In cryptography, there has been tremendous success in building primitives out of homomorphic semantically-secure encryption schemes, using homomorphic properties in a black-box way. A few notable examples of such primitives include items like private information retrieval schemes and collision-resistant hash functions. In this paper, we illustrate a general methodology for determining what types of protocols can be implemented in this way and which cannot. This is accomplished by analyzing the computational power of various algebraic structures which are preserved by existing cryptosystems. More precisely, we demonstrate lower bounds for algebraically generating generalized characteristic vectors over certain algebraic structures, and subsequently we show how to directly apply these abstract algebraic results to put lower bounds on algebraic constructions of a number of cryptographic protocols, including PIR-writing and private keyword search protocols. We hope that this work will provide a simple "litmus test" of feasibility for use by other cryptographic researchers attempting to develop new protocols that require computation on encrypted data. Additionally, a precise mathematical language for reasoning about such problems is developed in this work, which may be of independent interest.
2007
EPRINT
Consider the following problem: Alice wishes to maintain her email using a storage-provider Bob (such as a Yahoo! or hotmail e-mail account). This storage-provider should provide for Alice the ability to collect, retrieve, search and delete emails but, at the same time, should learn neither the content of messages sent from the senders to Alice (with Bob as an intermediary), nor the search criteria used by Alice. A trivial solution is that messages will be sent to Bob in encrypted form and Alice, whenever she wants to search for some message, will ask Bob to send her a copy of the entire database of encrypted emails. This however is highly inefficient. We will be interested in solutions that are communication-efficient and, at the same time, respect the privacy of Alice. In this paper, we show how to create a public-key encryption scheme for Alice that allows PIR searching over encrypted documents. Our solution provides a theoretical solution to an open problem posed by Boneh, DiCrescenzo, Ostrovsky and Persiano on "Public-key Encryption with Keyword Search", providing the first scheme that does not reveal any partial information regarding user's search (including the access pattern) in the public-key setting and with non-trivially small communication complexity. The main technique of our solution also allows for Single-Database PIR writing with sub-linear communication complexity, which we consider of independent interest.
2005
CRYPTO
|
2020-11-23 20:15:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6569156050682068, "perplexity": 791.8204763006466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141164142.1/warc/CC-MAIN-20201123182720-20201123212720-00661.warc.gz"}
|
https://math.stackexchange.com/questions/593397/fields-and-irreducible-polynomial-of-pn-degree
|
# Fields and irreducible polynomial of $p^n$ degree
Let $K$ be a field of $p$ elements.
Let $f(x) \in K [x]$ be an irreducible polynomial of degree $n$.
Prove that the field $K[x]/(f(x))$ has $p^n$ elements.
By given theorem, let $K$ be a field, $P(x)\in K[x]$ an irreducible polynomial. Then $\exists$ a field $F$ s.t. $P(x)$ has a root in $F$, then $K \subseteq F$.
Below are some notes that may get me to the proof.
Can anyone provide me help?
• By the way, is it supposed to say, there exists a field $F$... AND $K\subseteq F$? – LASV Dec 5 '13 at 0:41
Hint 1: You can write all the elements of $K[x]/(f(x))$ in the form $a_{n-1}x^{n-1}+a_{n-2}x^{n-2}+\cdots+a_1x+a_0+(f(x))$, where $n=\deg(f)$. (Why?)
Hint 2: Consider $K[x]/(P(x))$
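A concrete instance (my addition, not part of the original question or hints) may make the count visible: take $p=2$ and $f(x)=x^2+x+1$, which is irreducible over the field $K$ with two elements. Every coset has a unique representative of degree less than $2$:
$$K[x]/(f(x)) = \{\, 0+(f),\; 1+(f),\; x+(f),\; x+1+(f) \,\}, \qquad \left| K[x]/(f(x)) \right| = 2^2 = p^n .$$
In general, the representatives $a_0+a_1x+\cdots+a_{n-1}x^{n-1}$ are determined by $n$ coefficients, each with $p$ choices, giving $p^n$ elements.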
|
2019-12-10 03:15:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9514375329017639, "perplexity": 160.42491832253205}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540525781.64/warc/CC-MAIN-20191210013645-20191210041645-00115.warc.gz"}
|
https://easyelectronics.co.in/solar-cell-photovoltaic-cell/
|
# SOLAR CELL – Photovoltaic Cell
In this lecture, we are going to learn about the solar cell: how a solar cell works and what its operating principle is. The construction, applications, and advantages of solar cells will also be discussed.
## What is Solar Cell?
A Solar cell is a device that converts solar energy into electrical energy.
• The solar cell, or solar energy converter, is essentially a large photodiode designed to operate as a photovoltaic device and to give as much output power as possible.
• Semiconductor materials like silicon, gallium arsenide (GaAs), indium arsenide (InAs), and cadmium arsenide (CdAs) are used for manufacturing solar cells.
• Silicon and selenium are the most widely used materials for solar cells.
• The solar cell can convert solar energy directly into electrical energy with high conversion efficiency, can provide nearly permanent power at a low operating cost, and is virtually free of pollution.
• The maximum theoretical efficiency of a solar cell depends on the bandgap of the semiconductor material.
## Principle of Solar Cell
Solar cells operate on the principle of photovoltaic action. The photovoltaic action refers to their voltage-generating capability. Since these cells generate a voltage proportional to sunlight intensity, the solar cells are also called photovoltaic cells.
## Construction of Solar Cell
• The solar cells consist of a single semiconductor crystal that has been doped with both p- and n-type impurities, thereby forming a p-n junction.
• The basic construction of a p-n junction solar cell and its circuit symbol is shown in the figure below.
• The thickness of the p-type material is made extremely thin so that light can penetrate to the junction. A nickel-plated ring around the p-type material is the positive output terminal. The thickness of the n-region is also kept small to allow holes generated near the surface to diffuse to the junction before they recombine. A metal plating at the bottom of the n-material is the negative output terminal. The p-n diode is enclosed in a can with a glass window on top so that light may fall upon the p- and n-type materials.
## Working of Solar Cell
• The net current in an open-circuit p-n junction is zero. This is because the current due to minority carriers is balanced by the current due to the majority carriers.
• However, when the p-n junction is illuminated, an incident light photon at the junction may collide with a valence electron and impart sufficient energy for it to make a transition to the conduction band.
• As a result, an electron-hole pair is formed. The newly formed minority carriers in the p-region and n-region get injected across the junction, thereby increasing the current due to minority carriers.
• Since the junction is open-circuited, the net current must still remain zero. Therefore the current due to majority carriers must increase by an equal amount.
• The rise in carrier current is possible only if the field at the junction is reduced. Thus, the barrier height is lowered. This leads to the accumulation of majority carriers on both sides of the junction.
• This gives rise to a photovoltaic voltage (also called the open-circuit voltage $V_{oc}$) across the junction in the open-circuit condition. This voltage is equal to the decrease in the barrier potential.
• When the p-n junction photovoltaic cell is used as a solar cell, it is important to use an optimum load resistance so as to extract maximum power.
• The conversion efficiency of solar cells is given by,
$$\eta = \frac{\text{Output Power}}{\text{Input Power}} = \frac{I_m V_m}{P_{in}}$$
where $V_m$ and $I_m$ are the voltage and current at the maximum power point, and $P_{in}$ is the power density of the incident sunlight.
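As a quick numerical sketch (the figures below are illustrative assumptions, not measurements from this article):

```python
# Conversion efficiency of a solar cell: eta = (I_m * V_m) / P_in
# The three values below are made-up, illustrative numbers.
I_m = 2.8    # current at the maximum power point, in amperes
V_m = 0.5    # voltage at the maximum power point, in volts
P_in = 10.0  # sunlight power incident on the cell, in watts

eta = (I_m * V_m) / P_in
print(f"Conversion efficiency: {eta:.1%}")  # -> 14.0%, the typical silicon figure
```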
• The maximum efficiency of silicon solar cells is about 14% even though the theoretical efficiency is about 25% when using sunlight as the light source.
• Apart from the bandgap energy, the current-voltage characteristics of a solar cell also influence its efficiency.
• Typical V-I characteristics of a solar cell corresponding to different levels of illumination are shown in the figure below:
• The maximum power output is obtained when the cell is operated at the knee of the curve.
• The available output current depends upon the light intensity, cell efficiency, and the size of the active area of the cell face. The conversion efficiency depends upon the spectral content and the illumination.
## Advantages of Solar Cell
1. They can be operated satisfactorily over a wide range of temperatures.
2. They have the ability to generate voltage without any bias.
3. They have extremely fast responses.
## Applications of Solar Cell
1. Solar cells are extensively used as a source of power in satellites and solar vehicles, to supply power to electronic and other equipment or to charge the battery.
2. Solar cells are used to generate power in calculators and watches.
3. Solar cells are used in photodetection(visible and invisible), demodulation, logic circuits, switching, and so on.
## Frequently Asked Questions Related to Solar Cells
Question: What are crystalline silicon solar cells?
Answer: Crystalline silicon cells are made of silicon atoms connected to one another to form a crystal lattice. This lattice provides an organized structure that makes the conversion of light into electricity more efficient.
Question: Why are solar cells used to power space vehicles?
Answer: Solar cells are very useful in powering space vehicles such as satellites and telescopes (e.g. Hubble). They provide a very economical and reliable way of powering objects which would otherwise need expensive and cumbersome fuel sources.
Question: What is a drawback of solar cells?
Answer: Solar cells are very sensitive in terms of their location, which means that if there is shade on your lot, it is difficult to exploit a solar installation optimally.
Question: How does a solar cell generate electricity?
Answer: Silicon crystals are laminated into p-type and n-type layers, stacked on top of each other. Light striking the crystals induces the "photovoltaic effect," which generates electricity.
Question: Can solar energy be harnessed at night?
Answer: No, it can only be harnessed in the presence of sunlight.
|
2023-03-23 01:24:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4984957277774811, "perplexity": 1073.471497052448}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944606.5/warc/CC-MAIN-20230323003026-20230323033026-00118.warc.gz"}
|
https://api-project-1022638073839.appspot.com/questions/what-is-pi-in-degrees
|
# What is -pi in degrees?
##### 1 Answer
May 16, 2015
$- \pi$ radians $= - {180}^{o}$
A radian is the angle subtended by an arc of a circle of length equal to the radius. If a circle has radius 1, then an angle of 1 radian defines an arc whose curved length is 1. $2 \pi$ radians will take you once round the circle.
Degrees divide the angle of a complete circle into 360 divisions. ${360}^{o}$ takes you once round the circle.
So $2 \pi$ radians is the same as ${360}^{o}$ and $\pi$ radians is ${180}^{o}$.
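As a quick sanity check, Python's standard math module performs the same conversion:

```python
import math

# math.degrees converts radians to degrees: deg = rad * 180 / pi
print(math.degrees(-math.pi))     # -180.0
print(math.degrees(2 * math.pi))  # 360.0, once around the circle
```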
|
2021-10-21 08:27:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9518566131591797, "perplexity": 541.0893685229752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585382.32/warc/CC-MAIN-20211021071407-20211021101407-00455.warc.gz"}
|
http://mca.nowgray.com/2017/02/solved-real-life-description-for-a-a.html
|
# [Solved]: Real life description for (~A->A)->A
Problem Detail:
It can be shown that the logical proposition $(\lnot A \to A) \to A$ is a theorem (always true). I want to know if anybody knows a real-life description for the proposition above? I mean an expression in computing, economics, mathematics, politics, or anything else that fits that proposition.
Since $A$ is always the same atomic statement, every direct translation is going to be weird. Also, natural language does not handle (nested) implications well; we typically say "leads to", not "logically implies", in reality.
I suggest you transform the formula to
$$\begin{align*} &(\lnot A \to A) \to A \\ \equiv\ &\lnot(\lnot A \to A) \lor A \\ \equiv\ &\lnot(A \lor A) \lor A \\ \equiv\ &\lnot A \lor A, \end{align*}$$
which is, of course, a tautology (as you've stated):
You will help me, or you won't [, your choice].
Coming up with more complicated sentences won't give you more (logical) meaning -- all tautologies are equivalent (in Boolean logic), after all.
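A brute-force truth-table check (a small Python sketch of mine, not part of the original answer) confirms it:

```python
def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

# (~A -> A) -> A holds for both truth values of A, so it is a tautology.
for A in (False, True):
    assert implies(implies(not A, A), A)
print("(~A -> A) -> A is a tautology")
```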
|
2017-08-19 22:36:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9945566654205322, "perplexity": 3452.316196245235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105927.27/warc/CC-MAIN-20170819220657-20170820000657-00442.warc.gz"}
|
https://socratic.org/questions/what-is-the-arc-length-of-f-x-2-x-4-1-x-3-7-6-on-x-in-3-oo
|
# What is the arc length of f(x)=2/x^4-1/(x^3+7)^6 on x in [3,oo]?
Jun 18, 2018
$\infty$
#### Explanation:
The arc length formula is:
$$s = \int_{x_0}^{x_1}\sqrt{1+\left(\frac{df}{dx}\right)^2}\,dx$$
In this case, $f(x)=\frac{2}{x^4}-\frac{1}{(x^3+7)^6}$, which immediately suggests that the resulting mess will be too complex to evaluate. Let's produce the mess anyhow:
$$\frac{df}{dx}=-\frac{8}{x^5}+\frac{18x^2}{(x^3+7)^7},$$
so
$$s=\int_3^\infty\sqrt{1+\frac{64}{x^{10}}-\frac{288}{x^3(x^3+7)^7}+\frac{324x^4}{(x^3+7)^{14}}}\,dx$$
Yep, that is way too messy to hope to tackle. Feeding it into integrals.wolfram.com doesn't even produce an answer.
However, the rather strange infinite limit on the requested arc-length interval tells us the answer straight away. Whatever the function $f(x)$ does (and it exists for all values in the interval), you're measuring the length of a curve that runs out to infinity, so the length of the curve cannot be less than infinite, if that concept can be said to have meaning in this context at all. Thus, immediately from inspection: $s = \infty$.
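To pin the inspection argument down: the integrand satisfies $\sqrt{1+(df/dx)^2}\ge 1$ everywhere on $[3,\infty)$, so the arc length is bounded below by a divergent integral:
$$s=\int_3^\infty\sqrt{1+\left(\frac{df}{dx}\right)^2}\,dx\;\ge\;\int_3^\infty 1\,dx=\infty.$$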
|
2022-01-17 16:01:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9493433237075806, "perplexity": 472.4407672211791}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300574.19/warc/CC-MAIN-20220117151834-20220117181834-00207.warc.gz"}
|
http://gmatclub.com/forum/where-to-find-difficult-quant-questions-98859.html
|
Where to find difficult quant questions?
Where to find difficult quant questions? [#permalink] 09 Aug 2010, 21:53
I just started studying for the GMAT, which I am taking in two weeks. I downloaded the GMATPrep software and today took the first practice test.
My score was a 760, which I was pretty happy with. Raw scores were 49 for quant, and 46 for verbal.
I'm not too worried about the verbal, as I had plenty of time (25 minutes leftover when I finished), but the math was excruciating.
Unfortunately, most of the prep problems that I have looked at so far fail to mirror the difficulty of the quantitative section. I knew I was getting the problems mostly correct, but it was taking me so long to work through them that I was seriously risking running out of time. Where is a good resource for finding the most difficult GMAT math problems? I want to practice at a high level so that I am better prepared to pace myself when the real test comes.
Any other tips for what I can do to maintain or raise this score?
Re: Where to find difficult quant questions? [#permalink] 09 Aug 2010, 22:05
hmmm......try these GMAT Club Tests
http://gmatclub.com/tests/
Re: Where to find difficult quant questions? [#permalink] 10 Aug 2010, 20:09
Your GMATPrep score is amazing! You're off to a very good start.
I second intellijat's suggestion. The GMATClub Tests are of very high quality and will challenge you! In total, there are 925 (37 questions x 25 tests) questions. The entire package costs $79, but it's worth it. Also look into Jeff Sackmann's Extreme Challenge set. The set contains 100 questions and costs$25. It's a tad pricey, but Sackmann's questions are very GMAT-like. And his explanations are very helpful.
Re: Where to find difficult quant questions? [#permalink] 12 Aug 2010, 14:51
Can you also guide me to some good compilations of high quality GMAT problems at GMATCLUB
Re: Where to find difficult quant questions? [#permalink] 12 Aug 2010, 15:27
underdog wrote:
Can you also guide me to some good compilations of high quality GMAT problems at GMATCLUB
Here's a list of 700-level PS problems:
search.php?search_id=tag&tag_id=187
And here's a list for 700-level DS problems:
search.php?search_id=tag&tag_id=180
Make sure to first become comfortable with easier questions (500, 600, 650, etc.) before practicing with these, though.
|
2016-02-13 03:49:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3827366530895233, "perplexity": 9459.76398868283}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701165697.9/warc/CC-MAIN-20160205193925-00037-ip-10-236-182-209.ec2.internal.warc.gz"}
|
https://www.gamedev.net/forums/topic/148224-just-starting-out-and-i-already-have-a-problem/
|
Just Starting out... and I already have a problem
Hey all... I am just starting programming, and I'm working through tutorials that use SDL. I took this tutorial from Code3. You may know him; he has a message board on this forum, which seems to be down at the moment. Anyway, I am running Microsoft Visual C++ 6 and the code compiles fine, but I get no output: the screen is black. Could someone take a look at the code and let me know if they have any tips or hints to make this work? The object of the program is to draw a background and then put an image on top of that background. Thanks.
```cpp
#include <stdio.h>
#include <stdlib.h>
#include <SDL/SDL.h>

// SDL surface variables: back -> background, image -> sprite image.
// Global so all functions can use them (these could be wrapped in a Surface class).
SDL_Surface *back;
SDL_Surface *image;
SDL_Surface *screen;

// Used for the location of the image box
int xpos = 0, ypos = 0;

// Loads the BMP files into the surface variables.
// The SDL_LoadBMP argument is the path of the picture.
int InitImages()
{
    back  = SDL_LoadBMP("pics/pinstripe.bmp");
    image = SDL_LoadBMP("pics/triangles.bmp");
    return 0;
}

// First of 2 DrawIMG functions:
// blits the whole image onto the screen surface at (x, y).
void DrawIMG(SDL_Surface *img, int x, int y)
{
    SDL_Rect dest;
    dest.x = x;
    dest.y = y;
    SDL_BlitSurface(img, NULL, screen, &dest);
}

// Second of 2 DrawIMG functions:
// blits only a w-by-h region of the source, taken from (x2, y2),
// onto the screen at (x, y). Used to repaint part of the background.
void DrawIMG(SDL_Surface *img, int x, int y, int w, int h, int x2, int y2)
{
    SDL_Rect dest;
    dest.x = x;
    dest.y = y;

    SDL_Rect dest2;
    dest2.x = x2;
    dest2.y = y2;
    dest2.w = w;
    dest2.h = h;

    SDL_BlitSurface(img, &dest2, screen, &dest);
}

// Draws the background to the screen surface starting at (0, 0).
// No screen locking needed (that is only required for pixel manipulation).
void DrawBG()
{
    DrawIMG(back, 0, 0);
}

// Repaints the patch of background around the sprite's position
// (erasing its trail), draws the sprite, then updates the screen.
void DrawScene()
{
    DrawIMG(back, xpos - 2, ypos - 2, 132, 132, xpos - 2, ypos - 2);
    DrawIMG(image, xpos, ypos);
    SDL_Flip(screen); // screen update
}

int main(int argc, char *argv[])
{
    Uint8 *keys;

    // Standard init stuff
    if (SDL_Init(SDL_INIT_AUDIO | SDL_INIT_VIDEO) < 0) {
        printf("Unable to init SDL: %s\n", SDL_GetError());
        exit(1);
    }
    atexit(SDL_Quit);

    screen = SDL_SetVideoMode(640, 480, 16, SDL_HWSURFACE | SDL_DOUBLEBUF);
    if (screen == NULL) {
        printf("Unable to set 640x480 video: %s\n", SDL_GetError());
        exit(1);
    }

    InitImages(); // load background and sprite
    DrawBG();     // draw the background to the screen

    // Game loop
    int done = 0;
    while (done == 0) {
        SDL_Event event;
        while (SDL_PollEvent(&event)) {
            if (event.type == SDL_QUIT) {
                done = 1;
            }
            if (event.type == SDL_KEYDOWN) {
                if (event.key.keysym.sym == SDLK_ESCAPE) {
                    done = 1;
                }
            }
        }

        // Key positions and manipulations
        keys = SDL_GetKeyState(NULL);
        if (keys[SDLK_UP])    { ypos -= 1; }
        if (keys[SDLK_DOWN])  { ypos += 1; }
        if (keys[SDLK_LEFT])  { xpos -= 1; }
        if (keys[SDLK_RIGHT]) { xpos += 1; }

        DrawScene();
    }

    return 0;
}
```
NEVERMIND! I am such a dummy, the executables are two different files from the build directly from the Editor and the EXE is creates INSIDE the debug funtion... DOH!
in the future put large code blocks in these tags
<source>code...code...</source>//using [ ] instead of < >
weird
quote:
Original post by AdolphousC
NEVERMIND! I am such a dummy, the executables are two different files from the build directly from the Editor and the EXE is creates INSIDE the debug funtion... DOH!
I was doing a Linux port of a Half-Life mod and couldn't figure out why the math library wasn't linking in as I had specified. I was getting all kinds of errors about this missing, that missing, everything.
I spent two hours trying to figure it out on my own and finally went into #tuxlinux (I think). The guy there tried helping me but we still couldn't figure it out.
So finally I go to open the Makefile but I typo.. and type in makefile. There were two of them, separated only by case.. and gcc was using the lowercase one while I had been editing the uppercase one.
God bless compilers.
|
2018-03-18 06:31:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1741696298122406, "perplexity": 6751.219700490731}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645538.8/warc/CC-MAIN-20180318052202-20180318072202-00690.warc.gz"}
|
https://stacks.math.columbia.edu/tag/09X5
|
Lemma 21.31.4. Let $f : X \to Y$ be a morphism of $\textit{LC}$. If $f$ is proper and surjective, then $\{ f : X \to Y\}$ is a qc covering.
Proof. Let $y \in Y$ be a point. For each $x \in X_ y$ choose a quasi-compact neighbourhood $E_ x \subset X$. Choose $x \in U_ x \subset E_ x$ open. Since $f$ is proper the fibre $X_ y$ is quasi-compact and we find $x_1, \ldots , x_ n \in X_ y$ such that $X_ y \subset U_{x_1} \cup \ldots \cup U_{x_ n}$. We claim that $f(E_{x_1}) \cup \ldots \cup f(E_{x_ n})$ is a neighbourhood of $y$. Namely, as $f$ is closed (Topology, Theorem 5.17.5) we see that $Z = f(X \setminus U_{x_1} \cup \ldots \cup U_{x_ n})$ is a closed subset of $Y$ not containing $y$. As $f$ is surjective we see that $Y \setminus Z$ is contained in $f(E_{x_1}) \cup \ldots \cup f(E_{x_ n})$ as desired. $\square$
|
2019-04-26 07:49:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9843525886535645, "perplexity": 123.26897669332833}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578762045.99/warc/CC-MAIN-20190426073513-20190426095513-00301.warc.gz"}
|
https://franklin.dyer.me/post/198
|
## Franklin Pezzuti Dyer
### Counting elements of lists in Agda
In this entry, I just want to describe a simple concept that I've personally had a lot of trouble grasping that involves case splits in the proof assistant Agda. I've written a couple of posts in the past involving Agda, such as this post introducing how type theory, and in particular the theory of dependent types, can help us represent mathematics programmatically with propositions and proofs corresponding to types and their elements. In this later post I dived a little deeper into the algebraic side of things, showing how a common tedious part of our proofs (involving repeated applications of associativity) can be automated. I'd recommend checking out these posts before reading this one, if you haven't already. I'll also use some of the more basic functions defined in Martín Escardó's notes, since this is where I've learned the most about proofwriting and foundations in Agda.
Originally, I was planning on writing this blog post about my attempt to implement the insertion sort algorithm in Agda and prove that it correctly sorts lists. This was a huge challenge, and I've spent several weeks puzzling over it between semesters, but I finally managed to make it work. But this would make for a monster of a blog post, and a lot of my more recent blog posts have ended up a lot longer than I meant for them to be. So, instead, I'll be zooming in on a small part of this much larger problem with a focus on the part that has given me the most trouble: case splits in proofs.
The statement we'll be trying to prove is the following:
Given two lists $\mathtt{ls1}$ and $\mathtt{ls2}$, the number of times that the element $a$ appears in $\mathtt{append}(\mathtt{ls1},\mathtt{ls2})$ equals the number of times that it appears in $\mathtt{ls1}$ plus the number of times that it appears in $\mathtt{ls2}$.
...to the surprise and amazement of nobody. The tricky part of this task is not really convincing ourselves that this is true, but convincing the type checker that this is true. (Although, to be honest, after explicitly and pedantically writing out all of the components of this proof in Agda, I do feel like I understand it on a deeper level, even if it seems like a "trivial" statement.)
We need to do a little bit of housekeeping first. We're going to use the following inductive definition of list types:
data List (A : 𝓤 ̇) : 𝓤 ̇ where [] : List A cons : A → List A → List A
The type $\mathtt{List} A$ is the type of lists of elements of $A$, where $A$ is any other type (in some type universe). There are only two ways of getting a list of elements of $A$ - we can either produce the empty list, or we can add an element of $A$ to the beginning of a preexisting list of elements of $A$. We aren't quite ready to formulate our statement, because we still need to tell Agda what we mean by "appending two lists" and "the number of times an element appears in a list". We can define an $\mathtt{append}$ function pretty easily as follows:
append : {A : 𝓤 ̇} → List A → List A → List A append [] ls2 = ls2 append (cons x ls1) ls2 = cons x (append ls1 ls2)
Notice that we're pattern matching on the first argument. If the list we're prepending is empty, we don't need to make any changes to the second list. If the list we're prepending starts with some element $x$, then this will become the first element of the resulting list. Defining an "element counting" function involves a slight complication. To count the number of occurrences of some element $a:A$ in a list $\mathtt{ls}:\mathtt{List} A$, we have to iterate through the elements of the list, check whether each one of them is equal to $a$, and increment our result for each element that is equal to $a$. The problem is that it's not a given that every type is equipped with decidable equality. To write this function, we'll need a function that is capable of determining whether two given elements of $A$ are equal or not, i.e. a function with the type signature $(x\ y : A) \to \mathtt{decidable}(x \equiv y)$. If we nonconstructively decide to assume the law of excluded middle, we can always procure a function of this type, since LEM allows us to construct elements of $P+\neg P$ for any type $P$. However, in order to keep things constructive and make the weakest assumptions necessary, we'll just assume that the type $A$ comes equipped with an "equality decider", and this assumption will take the form of an additional argument to our counting function. Hence, rather than writing a function with the type signature $\mathtt{List}\,A \to A \to \mathbb{N}$, we will write a function with the signature $\mathtt{has\text{-}decidable\text{-}equality}\,A \to \mathtt{List}\,A \to A \to \mathbb{N}$, where $\mathtt{decidable}(P)$ represents $P+\neg P$. By the way, this problem is by no means unique to Agda. If you attempt to write a polymorphic function in Haskell with the type signature a -> [a] -> Int that counts the number of occurrences of some element in a list, you'll probably find yourself running into this issue as well. The solution is to add something called a constraint to the function signature, so that it looks like (Eq a) => a -> [a] -> Int. This has the effect of only allowing the type variable a to range over types that instance the Eq class, which requires its instances to possess an equality testing function == :: a -> a -> Bool, rather than allowing a to range over all types. This is pretty much analogous to the fix we've implemented above in Agda, where we accept an equality decider on $A$ as an additional argument to our counting function. Anyways, here's our counting function:
counts : {A : 𝓤 ̇} → has-decidable-equality A → List A → A → ℕ counts deq [] a = 0 counts deq (cons x ls) a = +-recursion (λ _ → succ) (λ _ → id) (deq x a) (counts deq ls a)
What exactly is going on here? We're pattern-matching on the list argument, returning zero if the list is empty and recursing if the list has at least one element. Although the call to +-recursion is a little cryptic-looking, it's essentially a glorified if-else statement. The type signature of +-recursion, a function defined in Escardó's notes, is $(A \to C) \to (B \to C) \to (A + B \to C)$ for any three types $A,B,C$: given a function $f:A\to C$ and another $g:B\to C$, it "smushes together" the domains of these two functions to form a function $h:A+B\to C$ originating from the coproduct $A+B$, so that the behavior of $h$ is determined by the behavior of $f$ on the "left side" of the coproduct and by $g$ on the "right side" of the coproduct. In particular, if this coproduct looks like $P+\neg P$ - i.e. a decision of whether the proposition $P$ is true or false - and if the functions $f$ and $g$ are constant and $\mathtt{dec}$ is something of type $P+\neg P$, then (+-recursion f g dec) produces the first constant if $P$ is true and the second constant if $P$ is false. In the above definition, the result of our evaluation of +-recursion is either the successor function succ (if x equals a) or the identity function id (otherwise). Hence, the effect is to increment the occurrence count of $a$ in the tail of the list if the first element is equal to $a$, or simply return the occurrence count of $a$ in the tail if the first element is not equal to $a$.
Now we're ready to at least state the proposition that we want to prove. Here is its type signature:
append-sum-counts : {A : 𝓤 ̇} → (deq : has-decidable-equality A) → (ls1 ls2 : List A) → (a : A) → counts deq (append ls1 ls2) a ≡ (counts deq ls1 a) +̇ (counts deq ls2 a)
To paraphrase this in English, given a type $A$, a function that decides equality of elements of $A$, two lists of type $\mathtt{List} A$, and an element $a:A$, the count of $a$ in the concatenation of the two lists (tallied using the provided equality deciding function) equals the sum of the counts of $a$ in the two lists separately (also tallied using the equality deciding function). We'll want to pattern match on the first input list for this proof, since the definition of append is defined in this way. The case when ls1 is empty is pretty easy to write:
append-sum-counts deq [] ls2 a = refl (counts deq ls2 a)
Why does this work? Agda knows from our definition of append that (append [] ls2) is defined to be equal to ls2, and it also knows that (counts deq [] a) is defined to be zero. Also, not only is $0+n$ equal to $n$ for any natural number $n:\mathbb N$, but this fact is part of the very definition of the addition function as it's defined in my arithmetic.agda module:
_+̇_ : ℕ → ℕ → ℕ 0 +̇ y = y (succ x) +̇ y = succ (x +̇ y)
so Agda is able to determine without any help from us that counts deq (append [] ls2) a and (counts deq ls1 a) +̇ (counts deq ls2 a) are equal. It is capable of simplifying both of these expressions to (counts deq ls2 a) using only the definitions of the functions involved, allowing us to supply nothing more than refl (counts deq ls2 a).
The second case won't be quite as simple, and we'll see why in a second, but we can already start to sketch out in our heads what it should look like. Since we've already covered the case in which the first list is empty, the second case should define the proof append-sum-counts deq (cons x ls1) ls2 a in terms of previous proofs of simpler cases, most likely the case append-sum-counts deq ls1 ls2 a. We know that if x and a are equal, then appending x shouldn't change the occurrence count of a, so we can probably return the previous proof unchanged - but if they are equal, then both sides of the equality will be incremented, so we'll probably need to use an ap succ somewhere to transform the previous case into the current one. Since the way in which we transform the previous equality to obtain the desired equality depends on whether or not $x=a$, our proof will have to involve some kind of case split. Even though the proof may not seem terribly complicated, we're going to write a pair of helper functions first, one for each of these two cases, with the following type signatures:
cons-eq-succ-count : {A : 𝓤 ̇} → (deq : has-decidable-equality A) → (ls : List A) → (a x : A) → (x ≡ a) → (counts deq (cons x ls) a) ≡ succ (counts deq ls a) cons-neq-same-count : {A : 𝓤 ̇} → (deq : has-decidable-equality A) → (ls : List A) → (a x : A) → ¬ (x ≡ a) → (counts deq (cons x ls) a) ≡ counts deq ls a
The first helper function will essentially be a proof that appending some element of $A$ that is equal to $a:A$ to a list will increment its count for that element, and the second will be a proof that appending an element distinct from $a$ will not affect the list's count for that element. Since this is essentially how we defined the counts function, it feels like these equalities should be completely definitional. For the first function, we might be tempted to write something like
cons-eq-succ-count deq ls a x eq = refl (succ (counts deq ls a))
but Agda's type checker does not like this: it's not capable of verifying that the two quantities are definitionally equal. It can simplify the expression (counts deq (cons x ls) a) to the following:
+-recursion (λ _ → succ) (λ _ → id) (deq x a) (counts deq ls a)
but it's not able to simplify the +-recursion expression call to either succ or id without knowing whether or not deq x a results in something that looks like inl equal or inr not-equal. And even though something of type $x \equiv a$ is passed to this function as an argument, Agda can't make the final leap of deducing from this that whatever is returned from deq x a will have to fall inside the left half of the coproduct $\mathtt{decidable}(x\equiv a)$, or the half containing proofs that $x\equiv a$. A little piece of insight is missing, namely the fact that receiving something from the right half of the coproduct, i.e. the half containing proofs that $x\not \equiv a$, would contradict the preexisting proof eq that $x\equiv a$. That's right - there's actually a small proof by contradiction hidden inside the proof of this innocuous claim!
Since Agda cannot infer what the output of (deq x a) will look like, we'll have to perform a case-split depending on whether its output looks like inl equal or inr not-equal. The second case, of course, will be an "absurd" case. This time, however, we can't use +-recursion to do our case split. This is because we will actually be defining a dependent function out of the sum type $(x\equiv a) + \neg (x\equiv a)$. That is, +-recursion will only help us when the output has the same type for inputs in the left half and the right half of the domain. But for the function we're writing, when (deq x a) evaluates to something that looks like inl equal, the +-recursion on the left hand side of the equality type defining the output can be reduced, giving the following output type:
succ (counts deq ls a) ≡ succ (counts deq ls a)
which is clearly inhabited by refl (succ (counts deq ls a)). On the other hand, if deq x a were to evaluate to something of the form inr not-equal, then the output type would simplify to
counts deq ls a ≡ succ (counts deq ls a)
which of course is not an inhabited type - but this is okay, because deq x a should never evaluate to a proof of inequality, since a proof of equality was passed as an argument. We don't actually need to procure an element of the above type - we just need to show that inr not-equal would produce a contradiction, i.e. an element of the empty type $\mathbb 0$, and then use an absurd pattern. Here's a picture visualizing our plan of attack:
Now we're finally ready to write the body of this function:
cons-eq-succ-count : {A : 𝓤 ̇} → (deq : has-decidable-equality A) → (ls : List A) → (a x : A) → (x ≡ a) → (counts deq (cons x ls) a) ≡ succ (counts deq ls a) cons-eq-succ-count deq ls a x eq = ap (λ f → f (counts deq ls a)) (+-induction (λ yneq → +-recursion (λ _ → succ) (λ _ → id) yneq ≡ succ) (λ eq' → refl succ) (λ neq → ex-nihilo (neq eq)) (deq x a))
Notice that +-induction takes an additional argument (its first argument) used to explicitly specify the type family for its output. In the case in which deq x a evaluates to something of the form inl eq', Agda knows how to finish the job - hence the second argument λ eq' → refl succ. The second case, where we employ the absurd pattern, is the tricky one. If deq x a were to instead evaluate to something of the form inr neq, we could still obtain a proof of the desired type by first obtaining something of type $\mathbb 0$, which can be done by evaluating $\mathtt{neq}:(x\equiv a)\to\mathbb 0$ at the argument $\mathtt{eq}:x\equiv a$, and from there anything follows. Notice that this part of the case split doesn't really have any computational content, since it can never actually be evaluated. It's just a "formality" to convince the type checker that we're taking all possibilities into account.
We can write a similar proof for our second helper function, but for this one the first case will be the absurd one, since the assumption is that $x$ and $a$ are unequal:
cons-neq-same-count : {A : 𝓤 ̇} → (deq : has-decidable-equality A) → (ls : List A) → (a x : A) → ¬ (x ≡ a) → (counts deq (cons x ls) a) ≡ counts deq ls a cons-neq-same-count deq ls a x neq = ap (λ f → f (counts deq ls a)) (+-induction (λ yneq → +-recursion (λ _ → succ) (λ _ → id) yneq ≡ id) (λ eq → ex-nihilo (neq eq)) (λ neq' → refl id) (deq x a))
Now we're ready to write our final proof! We've proven our proposition for the case in which the first list was empty, so now we need to consider the following case:
append-sum-counts deq (cons x ls1) ls2 a = _
Depending on whether $x$ and $a$ or equal or not, we will follow one of two different lines of reasoning - but our output type will be the same, namely the type
counts deq (append (cons x ls1) ls2) a ≡ (counts deq (cons x ls1) a) +̇ (counts deq ls2 a)
so it makes sense to use +-recursion rather than +-induction for this case split. So we'll want to write a definition that looks something like this:
append-sum-counts deq (cons x ls1) ls2 a = = +-recursion (λ eq → _) (λ neq → _) (deq x a)
If it turns out that $x\equiv a$, then there are three intermediate equalities involved in arriving at the desired result. Firstly, we will have that counts deq (append (cons x ls1) ls2) a is equal to succ (counts deq (append ls1 ls2) a), the result of one of our previous lemmas. Secondly, we will have that this is equal to succ ((counts deq ls1 a) +̇ (counts deq ls2 a)), the result of a recursive call to the previous case of our function (in which the first argument has one element fewer). Thirdly, we will have that this is equal to (counts deq (append x ls1) a) +̇ (counts deq ls2 a), which also follows from our first lemma. Similarly, the case of $x\not\equiv a$ can be split into three intermediate equalities, with the difference being that there is no succ applied to both sides of the equality. Hence, we can arrive at the following final implementation:
append-sum-counts deq (cons x ls1) ls2 a = +-recursion (λ eq → (cons-eq-succ-count deq (append ls1 ls2) a x eq) ∙ (ap succ (append-sum-counts deq ls1 ls2 a)) ∙ (ap (λ y → y +̇ (counts deq ls2 a)) (cons-eq-succ-count deq ls1 a x eq) ⁻¹)) (λ neq → (cons-neq-same-count deq (append ls1 ls2) a x neq) ∙ (append-sum-counts deq ls1 ls2 a) ∙ (ap (λ y → y +̇ (counts deq ls2 a)) (cons-neq-same-count deq ls1 a x neq) ⁻¹)) (deq x a)
And this type-checks, completing our proof of the proposition! It still baffles me how long it took me to understand how to deal with the case split, and why it was even necessary to use +-induction at all.
|
2023-03-27 02:30:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6191964745521545, "perplexity": 827.0269469267641}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946584.94/warc/CC-MAIN-20230326235016-20230327025016-00587.warc.gz"}
|
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Path_(topology)
|
# Path (topology)
In mathematics, a path in a topological space X is a continuous map f from the unit interval I = [0,1] to X
f : I → X.
The initial point of the path is f(0) and the terminal point is f(1). One often speaks of a "path from x to y" where x and y are the initial and terminal points of the path. Note that a path is not just a subset of X which "looks like" a curve, it also includes a parametrization. For example, the maps f(x) = x and g(x) = x2 represent two different paths from 0 to 1 on the real line.
A loop in X based at x ∈ X is a path from x to x. A loop may be equally well regarded as a map f : I → X with f(0) = f(1) or as a continuous map from the unit circle S1 to X
f : S1 → X.
This is because S1 may be regarded as a quotient of I under the identification 0 ∼ 1.
A topological space for which there exists a path connecting any two points is said to be path-connected. Any space may be broken up into a set of path-connected components. The set of path-connected components of a space X is often denoted π0(X).
One can compose paths in a topological space in an obvious manner. Suppose f is a path from x to y and g is a path from y to z. The path fg is defined as the path obtained by first traversing f and then traversing g:
$fg(s) = \begin{cases}f(2s) & 0\leq s \leq \frac{1}{2} \\ g(2s-1) & \frac{1}{2} \leq s \leq 1\end{cases}$
It is important to note that path composition is not associative due to problems with parametrization.
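For example (a standard illustration, not from the original article), in (fg)h the path f is traversed on [0, 1/4], while in f(gh) it is traversed on [0, 1/2]:
$$(fg)h:\ f \text{ on } [0,\tfrac{1}{4}],\ g \text{ on } [\tfrac{1}{4},\tfrac{1}{2}],\ h \text{ on } [\tfrac{1}{2},1];\qquad f(gh):\ f \text{ on } [0,\tfrac{1}{2}],\ g \text{ on } [\tfrac{1}{2},\tfrac{3}{4}],\ h \text{ on } [\tfrac{3}{4},1].$$
The two composites trace the same route with different parametrizations, so they are different paths (they are, however, homotopic, as discussed below).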
## Homotopy theory
Paths and loops are extremely important in the branch of algebraic topology called homotopy theory. A homotopy of paths makes precise the notion of continuously deforming a path while keeping its endpoints fixed.
Specifically, a homotopy of paths in X is a family of paths ft : I → X such that
• ft(0) = x0 and ft(1) = x1 are fixed.
• the map F : I × IX given by F(s, t) = ft(s) is continuous.
The paths f0 and f1 connected by a homotopy are said to be homotopic. One can likewise define a homotopy of loops keeping the base point fixed.
The property of being homotopic defines an equivalence relation on paths in a topological space. The equivalence class of a path f under this relation is called the homotopy class of f, often denoted [f].
Although path composition is not associative at the level of paths, it is associative at the level of homotopy. That is, [(fg)h] = [f(gh)]. Path composition defines a group structure on the set of homotopy classes of loops based at a point x in X. The resultant group is called the fundamental group of X based at x, usually denoted π1(X,x).
|
2013-05-22 18:00:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7414629459381104, "perplexity": 513.665500752466}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702185502/warc/CC-MAIN-20130516110305-00036-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://openstudy.com/updates/4e5c738f0b8b1f45b48c12c2
|
## anonymous 4 years ago integration of ln(sinx) ?
1. anonymous
$\int\limits_{}^{}\ln(sinx)$
2. anonymous
You must remember the by parts rule right?
3. anonymous
yes i know.
4. anonymous
1*ln(sinx)
5. anonymous
Take 1 as the function to integrate and ln(sinx) to differentiate
6. anonymous
$\int u\,v\,dx = u\int v\,dx - \int\left(\frac{du}{dx}\cdot\int v\,dx\right)dx$
7. anonymous
is it right ?
8. anonymous
ah Yes it is U = ln sinx V =1
9. anonymous
i got stuck at $x \ln(\sin x) - \int \frac{x\cos x}{\sin x}\,dx$; i can't make the substitution $\sin x = t$ as $x$ remains there
10. anonymous
okay use by parts after substitution again
11. anonymous
$x = sin^{-1}t$
12. anonymous
trying ...
13. anonymous
$x=\sin^{-1} t$ didnt work well ..it gets me more to intgrate
14. anonymous
yup okay I have to try it on my notebook
15. anonymous
thank you very much . sensei
16. anonymous
17. anonymous
ah I didn't get much ahead but see the integral scroll down a little
18. anonymous
This one was pretty tough. wondering why these questions are given to IIT aspirants
19. anonymous
lol
20. anonymous
Yea I know did you solve quadratic for IIT
21. anonymous
ah I still can't figure out how am I supposed to do them in 2 minutes I have posted a question go to my profile you will see
22. anonymous
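A note to close the thread out (a well-known result, not something established in the thread itself): $\int \ln(\sin x)\,dx$ has no elementary antiderivative, which is why the integration-by-parts attempts above keep stalling. The standard, tractable variant is the definite integral
$$\int_0^{\pi/2} \ln(\sin x)\,dx = -\frac{\pi}{2}\ln 2,$$
which can be evaluated using the symmetry $\sin(\pi/2 - x) = \cos x$ together with the identity $\sin 2x = 2\sin x\cos x$.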
|
2016-08-25 18:31:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.862410306930542, "perplexity": 6387.661777164151}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982293922.13/warc/CC-MAIN-20160823195813-00223-ip-10-153-172-175.ec2.internal.warc.gz"}
|
https://code.tutsplus.com/articles/cross-platform-sass-and-compass-in-wordpress--wp-30611
|
# Cross-Platform Sass and Compass in WordPress
I find it particularly interesting when other developers share their workflow tips and tricks. It can be very helpful to take a sneak-peak into somebody else's development world and find out what tools they are using to make their own life easier.
Well today I'm going to show you a portion of my own workflow - specifically how to use Sass and Compass when developing a WordPress theme. Instead of simply explaining the configuration and tools needed, I thought it would be better to start from scratch and show you exactly what's needed to get going when developing a WordPress theme that uses Sass and Compass.
I hope this article sounds interesting to you and I'm looking forward to sharing a small part of my workflow with you - I encourage you to do the same.
## What You'll Need
After much experimentation, Compass.app is the best tool I have found for cross-platform Sass and Compass support. It is a menu-bar-only app that can compile Sass files into CSS (it also has live-reload). It is not free, but at \$10.00 I've found it more than worthwhile.
### Alternatives
In the interest of providing a solution for all readers, regardless of platform, this tutorial will provide configuration for the app mentioned above. There are of course other alternatives, but be aware that things may need slightly different configuration than what you see here.
• Mac alternative - Codekit
• Windows alternative - I've not come across a decent Windows GUI alternative other than the app we'll be using in this tutorial. If you know of one, please feel free to share in the comments below.
The _s theme is a design-less theme perfectly suited for developers. As stated on their website "I'm a theme meant for hacking so don't use me as a Parent Theme." - Sounds perfect for us. Head along to their website, _s theme, and use the 'Generate' command on their homepage to download a custom build. You could simply download the theme directly from GitHub, but then you'd have to manually search for all instances of _s within the theme and replace them with your own prefix. Using 'Generate' does that part for you automatically.
Once you have your custom build downloaded, unzip the theme directory into wp-content/themes. For this tutorial I used the generator to create the theme wp-tuts and the directory structure should now look like this:
You can now go ahead and activate the theme from the Admin Panel.
## 2. Configuration for Sass and Compass
In the theme's root directory, we'll have a folder called sass. This is where we'll put all of our .scss files. Compass.app will then watch that directory and compile everything into the single style.css file that lives in the root of the theme.
1. In the root of the theme, create a folder called sass.
2. Also in the root, create a file called config.rb
These are the settings that will work well with WordPress:
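A typical config.rb for this setup might look like the following. This is a sketch based on the options discussed in this article (notably output_style = :compressed); adjust the directory names to match your own theme:

```ruby
# config.rb -- lives in the root of the theme
http_path       = "/"
css_dir         = "."       # compile style.css into the theme root
sass_dir        = "sass"    # the folder of .scss files we just created
images_dir      = "images"
javascripts_dir = "js"

# Strip whitespace and ordinary comments from the compiled CSS.
# Comments that start with /*! are preserved (WordPress needs its theme header).
output_style = :compressed

# Keep the compiled CSS free of line-number comments.
line_comments = false
```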
Ok, we have our sass folder and our config.rb both sitting in the root of our theme. We are now ready to rip apart the theme's CSS file and create individual files that will be easier to build upon / maintain in the future.
## 3. Convert the Theme's CSS to Sass
One of the advantages to using any CSS preprocessor is the ability to split our CSS into many small files. This helps our workflow tremendously as we can organize our code into related chunks that are easier to maintain and work with. So instead of having everything crammed into one giant CSS file, we can have a separate file that is only for resets. Then we could also have a separate file that only handles the menu, one file for media, etc. etc. We can have as many .scss files as we like, and after compilation they will all be compressed down into a single style.css.
If you look at the style.css file that comes shipped with the theme we downloaded, you'll see that the author has put comments that separate the content into sections (Reset, Typography, Navigation, Widgets, Media, and so on).
We'll use those comments as a guide for breaking up this stylesheet into separate .scss files.
Within the sass directory, create a file called style.scss - This is the file that we'll use to import all of the other files. Also, this is the only scss file that will NOT be prefixed with an underscore ("_"). This tells our compiler that this file should be converted into an actual CSS file.
Now run through the style.css file and for each commented section, create a new file in the sass folder that is prefixed with an underscore and has the file extension .scss. Copy the contents of that section into the newly created file.
For example, where you see this in the style.css, you would create a file called _navigation.scss and put it within the sass folder:
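A section marker in the stylesheet looks something like this (the exact comment decoration varies between _s versions, and the rules shown are only a sample):

```css
/*--------------------------------------------------------------
Navigation
--------------------------------------------------------------*/
.main-navigation {
    clear: both;
    display: block;
}
```

Everything from that comment down to the next section comment moves into _navigation.scss.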
After running through the entire stylesheet, your sass directory will contain one underscore-prefixed partial per section. (Notice that style.scss is the only file that is not prefixed with an underscore; everything else is considered a partial and will not be compiled into a separate CSS file.)
Now that we've put all the CSS into separate SCSS files, we need to import them into the style.scss file and also add the theme information.
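Here's the shape of the finished style.scss. The header values are illustrative, and the @import names should match the partials you actually created:

```scss
/*!
Theme Name: WP Tuts
Theme URI: http://example.com/
Author: Your Name
Description: A starter theme built with Sass and Compass.
Version: 1.0
License: GNU General Public License v2 or later
*/

@import "reset";
@import "typography";
@import "elements";
@import "forms";
@import "navigation";
@import "widgets";
@import "content";
@import "media";
```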
Ensure these files are imported in the same order that the CSS appeared in the original document. You can see that we start with reset and add the rest in the correct order. You still have to think about the order in which rules are defined in the final CSS!
Important: Also note the exclamation mark (!) on the first line. This tells the compiler not to strip out this important comment. We need to do this because earlier we set the option output_style = :compressed (in the config.rb file). This means that all white-space and comments will be removed from the compiled version. This is a great thing and you certainly want that to happen - but not here! If this comment were removed by the compiler then WordPress would not recognize this theme.
## 4. Compiling Into CSS
We've done all the manual work, now it's time to bring the automation into play. Go ahead and delete the style.css file from the root of the theme as we no longer need it. Now, if you have successfully followed all the steps above, then you should be able to open up Compass.app and choose Watch a Folder. Select your theme's root directory (in our case, it's the wp-tuts folder inside of wp-content/themes)
1. Open Compass.app
2. Select Watch a Folder
3. Select your theme's root directory
After a very short delay, you should see a new style.css file that has been generated. Open it, and you should see a minified version. This is an indication that everything worked as expected.
## 5. Using Compass
At this point, we've converted the theme's base CSS into small, manageable chunks of code and now we'll look at using Compass with our project.
Compass is a framework that provides a lot of powerful features to make your life easier when crafting CSS. Because we're using Compass.app, we can bring in the functionality provided by Compass by simply importing the required module in our style.scss file. For example, if you want the CSS3 modules of Compass, just do this:
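Importing the whole CSS3 module is a single line at the top of style.scss:

```scss
@import "compass/css3";
```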
That's really it, now you can head over to the Compass website and when you're ready to use any of its features in your project, you'll know exactly how to do it.
You now have all you need to start using Sass and Compass when building themes in WordPress. Next, we'll take a look at a couple of very simple examples of how to use them and whilst this tutorial is not an introduction to Sass and Compass, the examples below should help beginners further recognize the benefits of using a CSS pre-processor.
## 6. Examples
### _vars.scss
As we are now leveraging the power of a pre-processor, we can be more efficient when writing CSS and avoid repeating ourselves. One of the things I have on every single WordPress project is a _vars.scss file where I would keep anything that is project specific in variables. That way, I can refer to the variable names throughout many files, and should I need to change something, I would only have to do it in one place.
To use them across your entire collection of .scss files, just @import it like any other file into style.scss, but just make sure it's the first one, or just after reset would be ok too. Once you have imported it, use the variables like this:
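A minimal sketch, with names and values that are purely illustrative:

```scss
// _vars.scss - project-specific values
$color-primary: #21759b;
$color-text:    #404040;
$font-main:     Helvetica, Arial, sans-serif;
```

Then, in any other partial:

```scss
body {
    color: $color-text;
    font-family: $font-main;
}

a {
    color: $color-primary;
}
```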
### Compass
Often, many people will only use Compass for its vendor-prefixing abilities. I fall into that category myself and here's a small example to show why:
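With the compass/css3 module imported, one mixin call covers the prefixes for you:

```scss
.box {
    @include border-radius(4px);
}
```

which compiles to something like:

```css
.box {
    -webkit-border-radius: 4px;
    -moz-border-radius: 4px;
    -ms-border-radius: 4px;
    -o-border-radius: 4px;
    border-radius: 4px;
}
```

One line of Sass instead of five properties typed by hand.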
## Conclusion
I hope this tutorial has been helpful in showing a simple but effective workflow when using Sass and Compass within WordPress. The two examples I gave at the end are the absolute basics of using Sass and Compass and you should look into both separately to make full use of them.
Saying that, you'll still be improving your workflow a great deal with what you've learned here. By using these tools to split up CSS into small files, using variables to reduce repetition, and removing the need to type vendor prefixes - you're on your way to a better development workflow.
|
2021-10-20 22:42:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18308793008327484, "perplexity": 1354.5986995248606}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585353.52/warc/CC-MAIN-20211020214358-20211021004358-00330.warc.gz"}
|
https://codereview.stackexchange.com/questions/228833/rails-initializer-to-be-more-clean
|
# Rails initializer to be more clean [closed]
I need to refactor my initialize method a little, because I think it will make the initializer more flexible and readable.
class LogAdminData
  DEFAULT_EXCLUDED_PARAMS = %w[
  ].freeze

  attr_reader :old_data, :new_data, :action_type

  def initialize(admin_obj:, action_type:, old_data:, new_data:, excluded_params: %w[])
    excluded_params += DEFAULT_EXCLUDED_PARAMS
    @old_data = old_data.reject { |k, _v| excluded_params.include? k }
    @new_data = new_data.reject { |k, _v| excluded_params.include? k }
    @action_type = action_type
  end

  def call
    # NOTE: the method receiving these arguments was elided in the original post
    action_type: action_type,
    new_data: new_data,
    old_data: old_data,
    )
  end
end
Maybe something like
def cleanup_data(data)
excluded_params += DEFAULT_EXCLUDED_PARAMS
data.reject { |k, _v| excluded_params.include? k }
end
But how would I call it in the initializer for both old_data and new_data?
## closed as unclear what you're asking by 200_success, dfhwze, Heslacher, AlexV, Dannnno Sep 12 at 14:05
• What does this code do, and how is it used? Is this the entire class? As it is, it looks like some essential parts of the code are missing, and your question makes little sense. – 200_success Sep 12 at 3:24
You could replace @old_data = old_data.reject { |k, _v| excluded_params.include? k } with @old_data = old_data.except(*excluded_params) - note the splat, since except takes a list of keys rather than an array.
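A sketch of the full initializer using that suggestion (assumes ActiveSupport, or Ruby 3.0+ where Hash#except is built in):

```ruby
def initialize(admin_obj:, action_type:, old_data:, new_data:, excluded_params: [])
  excluded = excluded_params + DEFAULT_EXCLUDED_PARAMS
  @old_data = old_data.except(*excluded)   # drop excluded keys in one call
  @new_data = new_data.except(*excluded)
  @action_type = action_type
end
```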
|
2019-10-19 01:27:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2814722955226898, "perplexity": 7084.616009216109}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986685915.43/warc/CC-MAIN-20191018231153-20191019014653-00323.warc.gz"}
|
https://www.ctan.org/ctan-ann/id/mailman.1978.1508826839.5216.ctan-ann@ctan.org
|
# CTAN update: quran
Date: October 24, 2017 8:33:48 AM CEST
Seiied-Mohammad-Javad Razavian submitted an update to the quran package.

Version number: 1.3 2017-10-22
License type: lppl1.3
Summary description: An easy way to typeset any part of The Holy Quran

Announcement text:
Typesetting transliteration of the Holy Quran is supported now. All macros defined in the package for typesetting any part of the Quran have an "lt" version that typesets the transliteration, e.g. \quransurah has \quransurahlt.
The package's Catalogue entry can be viewed at http://www.ctan.org/pkg/quran The package's files themselves can be inspected at http://mirror.ctan.org/macros/xetex/latex/quran
Thanks for the upload. For the CTAN Team Erik Braun
We are supported by the TeX users groups. Please join a users group; see http://www.tug.org/usergroups.html .
## quran – An easy way to typeset any part of The Holy Quran
This package offers the user an easy way to typeset The Holy Quran. It has been inspired by the lipsum and ptext packages and provides several macros for typesetting the whole or any part of the Quran based on its popular division, including surah, ayah, juz, hizb, quarter, and page.
Besides the Arabic original, translations to English, German, French, and Persian are provided, as well as an English transliteration.
Package: quran
Version: 1.81 2021-02-02
Copyright: 2015–2021 Seiied-Mohammad-Javad Razavian
Maintainer: Seiied-Mohammad-Javad Razavian
|
2023-03-29 13:38:45
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8104910850524902, "perplexity": 12928.713001928509}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948976.45/warc/CC-MAIN-20230329120545-20230329150545-00231.warc.gz"}
|
https://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/chapter_split/05-Multivariate-Gaussians.ipynb
|
# Multivariate Gaussians - Modeling Uncertainty in Multiple Dimensions¶
In [1]:
#format the book
%matplotlib inline
from __future__ import division, print_function
from book_format import load_style, set_figsize, figsize
Out[1]:
## Introduction¶
The techniques in the last chapter are very powerful, but they only work with one variable or dimension. Gaussians represent a mean and variance that are scalars - real numbers. They provide no way to represent multidimensional data, such as the position of a dog in a field. You may retort that you could use two Kalman filters from the last chapter. One would track the x coordinate and the other the y coordinate. That does work, but suppose we want to track position, velocity, acceleration, and attitude. These values are related to each other, and as we learned in the g-h chapter we should never throw away information. Through one key insight we will achieve markedly better filter performance than was possible with the equations from the last chapter.
In this chapter I will introduce you to multivariate Gaussians - Gaussians for more than one variable, and the key insight I mention above. Then, in the next chapter we will use the math from this chapter to write a complete filter in just a few lines of code.
## Multivariate Normal Distributions¶
In the last two chapters we used Gaussians for a scalar (one dimensional) variable, expressed as $\mathcal{N}(\mu, \sigma^2)$. A more formal term for this is univariate normal, where univariate means 'one variable'. The probability distribution of the Gaussian is known as the univariate normal distribution.
What might a multivariate normal distribution be? Multivariate means multiple variables. Our goal is to be able to represent a normal distribution across multiple dimensions. I don't necessarily mean spatial dimensions - it could be position, velocity, and acceleration. Consider a two dimensional case. Let's say we believe that $x = 2$ and $y = 17$. This might be the x and y coordinates for the position of our dog, it might be the position and velocity of our dog on the x-axis, or the temperature and wind speed at our weather station. It doesn't really matter. We can see that for $N$ dimensions, we need $N$ means, which we will arrange in a column matrix (vector) like so:
$$\mu = \begin{bmatrix}{\mu}_1\\{\mu}_2\\ \vdots \\{\mu}_n\end{bmatrix}$$
Therefore for this example we would have
$$\mu = \begin{bmatrix}2\\17\end{bmatrix}$$
The next step is representing our variances. At first blush we might think we would also need N variances for N dimensions. We might want to say the variance for x is 10 and the variance for y is 4, like so.
$$\sigma^2 = \begin{bmatrix}10\\4\end{bmatrix}$$
This is incorrect because it does not consider the more general case. For example, suppose we were tracking house prices vs total $m^2$ of the floor plan. These numbers are correlated. It is not an exact correlation, but in general houses in the same neighborhood are more expensive if they have a larger floor plan. We want a way to express not only what we think the variance is in the price and the $m^2$, but also the degree to which they are correlated. The covariance describes how two variables are correlated. Covariance is short for correlated variances.
We use a covariance matrix to denote covariances with multivariate normal distributions, and it looks like this:
$$\Sigma = \begin{bmatrix} \sigma_1^2 & \sigma_{12} & \cdots & \sigma_{1n} \\ \sigma_{21} &\sigma_2^2 & \cdots & \sigma_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{n1} & \sigma_{n2} & \cdots & \sigma_n^2 \end{bmatrix}$$
If you haven't seen this before it is probably a bit confusing. Instead of starting with the mathematical definition I will build your intuition with thought experiments. At this point, note that the diagonal contains the variance for each state variable, and that all off-diagonal elements (covariances) represent how much the $i$th (horizontal row) and $j$th (vertical column) state variables are linearly correlated to each other. In other words, covariance is a measure for how much they change together.
A couple of examples. Generally speaking as the square footage of a house increases the price increases. These variables are correlated. As the temperature of an engine increases its life expectancy lowers. These are inversely correlated. The price of tea and the number of tail wags my dog makes have no relation to each other, and we say they are not correlated - each can change independent of the other.
Correlation implies prediction. If our houses are in the same neighborhood, and you have twice the square footage I can predict that the price is likely to be higher. This is not guaranteed as there are other factors such as proximity to garbage dumps which also affect the price. If my car engine significantly overheats I start planning on replacing it soon. If my dog wags his tail I don't conclude that tea prices will be increasing.
A covariance of 0 indicates no correlation. So, for example, if the variance for x is 10, the variance for y is 4, and there is no linear correlation between x and y, then we would say
$$\Sigma = \begin{bmatrix}10&0\\0&4\end{bmatrix}$$
If there was a small amount of correlation between x and y we might have
$$\Sigma = \begin{bmatrix}10&1.2\\1.2&4\end{bmatrix}$$
where 1.2 is the covariance between x and y. Note that this is always symmetric - the covariance between x and y is always equal to the covariance between y and x. That is, $\sigma_{xy}=\sigma_{yx}$ for any x and y.
Now, without explanation, here is the multivariate normal distribution in $n$ dimensions.
$$f(\mathbf{x},\, \mu,\,\Sigma) = \frac{1}{(2\pi)^{\frac{n}{2}}|\Sigma|^{\frac{1}{2}}}\, \exp \Big [{ -\frac{1}{2}(\mathbf{x}-\mu)^\mathsf{T}\Sigma^{-1}(\mathbf{x}-\mu) \Big ]}$$
I urge you to not try to remember this function. We will program it in a Python function and then call it if we need to compute a specific value. Plus, the Kalman filter equations compute this for us automatically; we never have to explicitly compute it. However, note that it has the same form as the univariate normal distribution. It uses matrices instead of scalar values, and the root of $\pi$ is scaled by $n$. If you set n=1 then it turns into the univariate equation. Here is the univariate equation for reference:
$$f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp \Big [{-\frac{1}{2}}{(x-\mu)^2}/\sigma^2 \Big ]$$
The multivariate version merely replaces the scalars of the univariate equations with matrices. If you are reasonably well-versed in linear algebra this equation should look quite manageable; if not, don't worry! Let's plot it and see what it looks like.
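Before we do, here is a minimal NumPy sketch of the density above, just to make the equation concrete. This is my own sketch, not FilterPy's optimized implementation, which we import below:

```python
import numpy as np

def multivariate_gaussian_pdf(x, mu, sigma):
    """Density of an n-dimensional Gaussian at x, per the equation above."""
    x, mu = np.asarray(x, float), np.asarray(mu, float)
    sigma = np.atleast_2d(sigma)
    n = mu.size
    diff = (x - mu).reshape(-1, 1)                     # column vector (x - mu)
    norm = 1.0 / np.sqrt((2 * np.pi)**n * np.linalg.det(sigma))
    return (norm * np.exp(-0.5 * diff.T @ np.linalg.inv(sigma) @ diff)).item()
```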
In [41]:
import mkf_internal
mkf_internal.plot_3d_covariance((2, 17), [[10., 0], [0, 4.]])
This is a plot of a two dimensional multivariate Gaussian with a mean of $\mu=[\begin{smallmatrix}2\\17\end{smallmatrix}]$ and a covariance of $\Sigma=[\begin{smallmatrix}10&0\\0&4\end{smallmatrix}]$. The three dimensional shape shows the probability density for any value of (x,y) in the z-axis. I have projected the variance for x and y onto the walls of the chart - you can see that they take on the normal Gaussian bell curve shape. The curve for x is wider than the curve for y, which is explained by $\sigma_x^2=10$ and $\sigma_y^2=4$. The highest point of the curve is centered over (2, 17), the means for x and y.
All multivariate Gaussians form this shape. If we think of this as the Gaussian for the position of a dog, the z-value at each point of (x, y) is the probability density of it being at that position. So, he has the highest probability of being near (2, 17), a modest probability of being near (5, 14), and a very low probability of being near (10, 10).
More details are in the Kalman Filter Math chapter. Here we need to understand the following.
1. The diagonal of the matrix contains the variance for each variable. This is because the covariance between x and itself is the variance of x: $\sigma_{xx} = \sigma_x^2$.
2. Each off-diagonal element contains $\sigma_{ij}$ - the covariance between i and j. This tells us how much linear correlation there is between the two variables. 0 means no correlation, and as the number gets higher the correlation gets greater.
3. $\sigma_{ij} = \sigma_{ji}$: if i gets larger when j gets larger, then it must be true that j gets larger when i gets larger.
4. This chart only shows a 2 dimensional Gaussian, but the equation works for any number of dimensions > 0.
FilterPy [2] implements the equation with the function filterpy.stats.multivariate_gaussian. I am not showing the code here because I have taken advantage of the linear algebra solving apparatus of NumPy to efficiently compute a solution - the code does not correspond to the equation in a one to one manner.
In the last chapter we did not have to explicitly program the univariate equation into our filter. The filter equations were generated by substituting the univariate equation into Bayes' equation. The same is true for the multivariate case. You will not be using this function very often in this book, so I would not spend a lot of time mastering it unless it interests you.
SciPy's stats module implements the multivariate normal equation with multivariate_normal(). It implements a 'frozen' form where you set the mean and covariance once, and then calculate the probability for any number of values for x over any arbitrary number of calls. This is much more efficient than recomputing everything in each call. So, if you have version 0.14 or later you may want to substitute my function for the built-in version. Use scipy.version.version to get the version number. I named my function multivariate_gaussian() to ensure it is never confused with the SciPy version. I will say that for a single call, where the frozen variables do not matter, mine consistently runs faster as measured by the timeit function.
The tutorial[1] for the scipy.stats module explains 'freezing' distributions and other very useful features.
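For example, a frozen distribution is created once and queried repeatedly; the values here match the example that follows:

```python
from scipy.stats import multivariate_normal

rv = multivariate_normal(mean=[2.0, 7.0], cov=[[8.0, 0.0], [0.0, 4.0]])  # frozen
print(rv.pdf([2.5, 7.3]))  # ~0.02739, the same value we compute below
```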
In [42]:
from filterpy.stats import gaussian, multivariate_gaussian
I'll demonstrate using it, and then move on to more interesting things.
First, let's find the probability density for our dog being at (2.5, 7.3) if we believe he is at (2, 7) with a variance of 8 for $x$ and a variance of 4 for $y$.
Start by setting $x$ to (2.5, 7.3). You can use a tuple, list, or NumPy array.
In [43]:
x = [2.5, 7.3]
Next, we set the mean of our belief:
In [44]:
mu = [2.0, 7.0]
Finally, we have to define our covariance matrix. In the problem statement we did not mention any correlation between $x$ and $y$, and we will assume there is none. This makes sense; a dog can choose to independently wander in either the $x$ direction or $y$ direction without affecting the other. If there is no correlation between the values, place the variances in the diagonal, and set the off-diagonal elements to zero. I will use the name P. Kalman filters use the name $\textbf{P}$ for the covariance matrix, and we need to become familiar with the conventions.
In [45]:
P = [[8., 0.], [0., 4.]]
Now call the function
In [46]:
print('{:.4}'.format(multivariate_gaussian(x, mu, P)))
0.02739
These numbers are not easy to interpret. Let's view a plot of it.
In [47]:
import mkf_internal
import matplotlib.pyplot as plt
ax = mkf_internal.plot_3d_covariance(mu, P)
The result is clearly a 3D bell shaped curve. We can see that the Gaussian is centered around (2,7), and that the probability density quickly drops away in all directions. On the sides of the plot I have drawn the Gaussians for $x$ in greens and for $y$ in orange.
Let's look at this in a slightly different way. Instead of plotting a surface showing the probability distribution I will generate 1,000 points with the distribution of $[\begin{smallmatrix}8&0\\0&4\end{smallmatrix}]$.
In [48]:
mkf_internal.plot_3d_sampled_covariance(mu, P)
We can think of the sampled points as being possible locations for our dog given those particular mean and covariances. The contours on the side show the variance in the points for $x$ and $y$ only. We can see that he is far more likely to be at (2, 7) where there are many points, than at (-5, 5) where there are few.
As beautiful as this is, it is hard to get useful information. For example, it is not easy to tell if $x$ and $y$ both have the same variance. In most of the book I'll display Gaussians using contour plots. Helper functions in FilterPy plot them for us. If you are interested in linear algebra look at the code used to produce these contours, otherwise feel free to ignore it.
In [49]:
with figsize(y=5):
mkf_internal.plot_3_covariances()
For those of you viewing this online or in IPython Notebook on your computer, here is an animation.
(source: http://git.io/vqxLS)
From a mathematical perspective these display the values that the multivariate Gaussian takes for a specific standard deviation. This is like taking a horizontal slice out of the 3D plot. By default it displays one standard deviation, but you can use the variance parameter to control what is displayed. For example, variance=3**2 would display the 3rd standard deviation, and variance=[1,4,9] would display the 1st, 2nd, and 3rd standard deviations as in the chart below. This takes 3 different horizontal slices of the multivariate Gaussian chart and displays them in 2D.
In [50]:
from filterpy.stats import plot_covariance_ellipse
P = [[2, 0], [0, 9]]
plot_covariance_ellipse((2, 7), P, facecolor='g', alpha=0.2,
variance=[1, 2**2, 3**2],
axis_equal=True, title='|2 0|\n|0 9|')
However, the solid colors may suggest that the probability distribution is constant between the standard deviations. This is not true, as you can tell from the 3D plot of the Gaussian. Here is a 2D shaded representation of the probability distribution for the covariance $(\begin{smallmatrix}2&1.2\\1.2&1.3\end{smallmatrix})$.
In [51]:
from nonlinear_plots import plot_cov_ellipse_colormap
plot_cov_ellipse_colormap(cov=[[2, 1.2], [1.2, 1.3]])
Thinking about the physical interpretation of these plots clarifies their meaning. The mean and covariance of the first plot are
$$\mathbf{\mu} =\begin{bmatrix}2\\7\end{bmatrix},\, \, \Sigma = \begin{bmatrix}2&0\\0&2 \end{bmatrix}$$
Let this be our current belief about the position of our dog in a field. In other words, we believe that he is positioned at (2,7) with a variance of $\sigma^2=2$ for both x and y. The contour plot shows where we believe the dog is located with the '+' in the center of the ellipse. The ellipse shows the boundary for $1\sigma$. As in the univariate case 68% of the data will fall within this ellipse. Recall from the Gaussians chapter the 68-95-99.7 rule - 68% of all values will fall within 1 standard deviation ($1\sigma$), 95% within $2\sigma$, and 99.7% within $3\sigma$. This rule applies for any dimensional size. The dog could be at (356443, 58483), but the chances for values that far away from the mean are infinitesimally small.
A Bayesian way of thinking about this is that the ellipse shows us the amount of error in our belief. A tiny circle would indicate that we have a very small error, and a very large circle indicates a lot of error in our belief. We will use this throughout the rest of the book to display and evaluate the accuracy of our filters at any point in time.
The second plot is for the mean and covariance
$$\mu =\begin{bmatrix}2\\7\end{bmatrix}, \, \, \, \Sigma = \begin{bmatrix}2&0\\0&9\end{bmatrix}$$
This time we use a different variance for $x$ ($\sigma_x^2=2$) vs $y$ ($\sigma^2_y=9$). The result is a tall and narrow ellipse. We can see that there is a lot more uncertainty in the $y$ value than in $x$. Our belief that the value is (2, 7) is the same in both cases, but the uncertainties are different. In this case the standard deviation in $x$ is $\sigma_x = \sqrt{2}=1.414$ and the standard deviation for $y$ is $\sigma_y = \sqrt{9}=3$. This sort of thing happens naturally as we track objects in the world - one sensor has a better view of the object or is closer than another sensor, resulting in different uncertainties in each axis.
The third plot shows the mean and covariance
$$\mu =\begin{bmatrix}2\\7\end{bmatrix}, \, \, \, \Sigma = \begin{bmatrix}2&1.2\\1.2&2\end{bmatrix}$$
This is the first contour that has values in the off-diagonal elements of the covariance, and this is the first contour plot with a slanted ellipse. This is not a coincidence. The two facts are telling us the same thing. A slanted ellipse tells us that the $x$ and $y$ values are somehow correlated. We denote that in the covariance matrix with values off the diagonal.
What does this mean in physical terms? Think of parallel parking a car. You can not pull up beside the spot and then move sideways into the space because cars cannot drive sideways. $x$ and $y$ are not independent. This is a consequence of the steering mechanism. When the steering wheel is turned the car rotates around its rear axle while moving forward. Or think of a horse attached to a pivoting exercise bar in a corral. The horse can only walk in circles, he cannot vary $x$ and $y$ independently, which means he cannot walk in a straight line or a zig zag. If $x$ changes, $y$ must also change in a defined way.
When we see this ellipse we know that $x$ and $y$ are correlated, and that the correlation is "strong". The size of the ellipse shows how much error we have in each axis, and the slant shows the relative sizes of the variance in $x$ and $y$. For example, a very long and narrow ellipse tilted almost to the horizontal has a strong correlation between $x$ and $y$ (because the ellipse is narrow), and the variance of $x$ is much larger than that of $y$ (because the ellipse is much longer in $x$).
## Using Correlations to Improve Estimates¶
Suppose we believe our dog is at position (5, 10) with some given covariance. If the standard deviation in x and y is each 2 meters, but they are strongly correlated, the covariance contour would look something like this.
In [53]:
import matplotlib.pyplot as plt
P = [[4, 3.9], [3.9, 4]]
plot_covariance_ellipse((5, 10), P, edgecolor='k',
variance=[1, 2**2, 3**2])
plt.xlabel('X')
plt.ylabel('Y');
Now suppose I were to tell you that the actual position of the dog in the x-axis is 7.5, what can we infer about his position in the y-axis? The position is extremely likely to lie within the 3$\sigma$ covariance ellipse. We can infer the position in y based on the covariance matrix because there is a correlation between x and y. I've roughly illustrated the likely value for y as a blue filled circle.
In [76]:
mkf_internal.plot_correlation_covariance()
A word about correlation and independence. If variables are independent they can vary separately. If you walk in an open field, you can move in the $x$ direction (east-west), the $y$ direction (north-south), or any combination thereof. Independent variables are always also uncorrelated. Except in special cases, the reverse does not hold true. Variables can be uncorrelated, but dependent. For example, consider the pair $(x,y)$ where $y=x^2$. Correlation is a linear measurement, so $x$ and $y$ are uncorrelated. However, they are obviously dependent on each other.
## Multiplying Multidimensional Gaussians¶
In the previous chapter we incorporated an uncertain measurement with an uncertain estimate by multiplying their Gaussians together. The result was another Gaussian with a smaller variance. If two pieces of uncertain information corroborate each other we should be more certain in our conclusion. The graphs look like this:
In [77]:
mkf_internal.plot_gaussian_multiply()
The combination of measurement 1 and 2 yields more certainty, so the new Gaussian is taller and narrower - the variance became smaller. The same happens in multiple dimensions with multivariate Gaussians.
Here are the equations for multiplying multivariate Gaussians. They are generated by plugging the Gaussians for the prior and the estimate into Bayes Theorem. I gave you the algebra for the univariate case in the last section of the last chapter. You will not need to remember these equations, as they are computed by Kalman filter equations that will be presented shortly. This computation is also available in FilterPy using the multivariate_multiply() method, which you can import from filterpy.stats.
\begin{aligned} \mu &= \Sigma_2(\Sigma_1 + \Sigma_2)^{-1}\mu_1 + \Sigma_1(\Sigma_1 + \Sigma_2)^{-1}\mu_2 \\ \Sigma &= \Sigma_1(\Sigma_1+\Sigma_2)^{-1}\Sigma_2 \end{aligned}
For reference, here are the univariate equations from the last chapter:

\begin{aligned} \mu &=\frac{\sigma_1^2 \mu_2 + \sigma_2^2 \mu_1} {\sigma_1^2 + \sigma_2^2}, \\ \sigma^2 &= \frac{1}{\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}} = \frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}
The multivariate equations look similar to the univariate ones. This will be more obvious if you recognize that matrix inversion, denoted by the -1 power, is like division, since $AA^{-1} =I$. I will rewrite the inversions as divisions - this is not a mathematically correct thing to do, but it does help us see what is going on.
\begin{aligned} \mu &\approx \frac{\Sigma_2\mu_1 + \Sigma_1\mu_2}{\Sigma_1 + \Sigma_2} \\ \\ \Sigma &\approx \frac{\Sigma_1\Sigma_2}{(\Sigma_1+\Sigma_2)} \end{aligned}
In this form we can surmise that these equations are the linear algebra form of the univariate equations.
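As a quick concreteness check, the multivariate product equations translate directly into NumPy. This is only a sketch; FilterPy's multivariate_multiply, used below, is the version we'll rely on:

```python
import numpy as np

def multiply_gaussians(mu1, sigma1, mu2, sigma2):
    """Product of two multivariate Gaussians, per the equations above."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    sigma1, sigma2 = np.atleast_2d(sigma1), np.atleast_2d(sigma2)
    s_inv = np.linalg.inv(sigma1 + sigma2)
    mu = sigma2 @ s_inv @ mu1 + sigma1 @ s_inv @ mu2   # posterior mean
    sigma = sigma1 @ s_inv @ sigma2                    # posterior covariance
    return mu, sigma
```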
Now let's explore multivariate Gaussians in terms of a concrete example. Suppose that we are tracking an aircraft with two radar systems. I will ignore altitude so I can use two dimensional plots. Radars give us the range and bearing to a target. We start out being uncertain about the position of the aircraft, so the covariance, which is our uncertainty about the position, might look like this. In the language of Bayesian statistics this is our prior.
In [78]:
P0 = [[6, 0], [0, 6]]
plot_covariance_ellipse((10, 10), P0, facecolor='y', alpha=0.6)
Now suppose that there is a radar to the lower left of the aircraft. Further suppose that the radar is very accurate in the bearing measurement, but not very accurate at the range. That covariance, which is the uncertainty in the reading, might look like this (plotted in blue):
In [79]:
P1 = [[2, 1.9], [1.9, 2]]
plot_covariance_ellipse((10, 10), P0, facecolor='y', alpha=0.6)
plot_covariance_ellipse((10, 10), P1, facecolor='b', alpha=0.6)
Recall that Bayesian statistics calls this the evidence. The ellipse points towards the radar. It is very long because the range measurement is inaccurate, and the aircraft could be within a considerable distance of the measured range. It is very narrow because the bearing estimate is very accurate and thus the aircraft must be very close to the bearing estimate.
We want to find the posterior - the mean and covariance of incorporating the evidence into the prior. As in every chapter so far we multiply them together. I have the equations for this and we could use those, but I will use FilterPy's multivariate_multiply method.
In [80]:
from filterpy.stats import multivariate_multiply
P2 = multivariate_multiply((10, 10), P0, (10, 10), P1)[1]
plot_covariance_ellipse((10, 10), P0, facecolor='y', alpha=0.2)
plot_covariance_ellipse((10, 10), P1, facecolor='b', alpha=0.6)
plot_covariance_ellipse((10, 10), P2, facecolor='y')
Here I have plotted the original estimate (prior) in a very transparent yellow, the radar reading in blue (evidence), and the final estimate (posterior) in yellow.
The Gaussian retained the same shape and position as the radar measurement, but is smaller. We've seen this with one dimensional Gaussians. Multiplying two Gaussians makes the variance smaller because we are incorporating more information, hence we are less uncertain. But the main point I want to make is that the covariance shape reflects the physical layout of the aircraft and the radar system.
Now let's say we get a measurement from a second radar, this one to the lower right, which I will plot in blue against the yellow covariance of our current belief.
In [81]:
P3 = [[2, -1.9], [-1.9, 2.2]]
plot_covariance_ellipse((10, 10), P2, facecolor='y', alpha=0.6)
plot_covariance_ellipse((10, 10), P3, facecolor='b', alpha=0.6)
Again, to incorporate this new information we will multiply the Gaussians together.
In [82]:
P4 = multivariate_multiply((10, 10), P2, (10, 10), P3)[1]
plot_covariance_ellipse((10, 10), P2, facecolor='y', alpha=0.2)
plot_covariance_ellipse((10, 10), P3, facecolor='b', alpha=0.6)
plot_covariance_ellipse((10, 10), P4, facecolor='y')
You can see how the multivariate Gaussian's shape reflects the geometry of the problem. The first radar system was at a 45 degree angle to the aircraft, and its error in the bearing measurement was much smaller than the error in the range. This resulted in a long and narrow covariance ellipse whose major axis was aligned with the angle to the radar system. The next radar system was also at a 45 degree angle, but to the right, so the two measurements were orthogonal to each other. This allowed us to triangulate on the aircraft, resulting in a very accurate estimate. We didn't explicitly write any code to perform triangulation; it was a natural outcome of multiplying the Gaussians of each measurement together.
To make sure you understand this, what would the Gaussian look like if we only had one radar station, and we received several measurements from it over a short period of time? Clearly the Gaussian would remain elongated in the axis of the bearing angle. Without a second radar station no information would be provided to reduce the error on that axis, so it would remain quite large. As the aircraft moves the bearing will typically change by a small amount, so over time some of the error will be reduced, but it will never be reduced as much as a second radar station would provide.
To round this out let's quickly redo this example but with the first radar system in a different position. I will position it directly to the left of the aircraft. The only change I need to make is to the Gaussian for the measurement from the radar. In the previous example I used
$$\Sigma = \begin{bmatrix}2&1.9\\1.9&2\end{bmatrix}$$
Why did this result in a 45 degree ellipse? Think about that before reading on. It was 45 degrees because the values in the diagonal were identical. So if x=10 then y=10, and so on. We can alter the angle by making the variance for x or y different, like so:
In [83]:
P1 = [[2, 1.9], [1.9, 8]]
plot_covariance_ellipse((10, 10), P1, facecolor='y', alpha=0.6)
The radar is to the left of the aircraft, so I can use a covariance of
$$\Sigma = \begin{bmatrix}2&0\\0&0.2\end{bmatrix}$$
to model the measurement. Incidentally, I invented those values. We haven't learned how to transform a matrix from one coordinate system to another.
In the next graph I plot the original estimate in a very light yellow, the radar measurement in blue, and the new estimate based on multiplying the two Gaussians together in yellow.
In [84]:
P1 = [[2, 0], [0, .2]]
P2 = multivariate_multiply((10, 10), P0, (10, 10), P1)[1]
plot_covariance_ellipse((10, 10), P0, facecolor='y', alpha=0.2)
plot_covariance_ellipse((10, 10), P1, facecolor='b', alpha=0.6)
plot_covariance_ellipse((10, 10), P2, facecolor='y')
Now we can incorporate the measurement from the second radar system, which we will leave in the same position as before.
In [85]:
P3 = [[2, -1.9], [-1.9, 2.2]]
P4 = multivariate_multiply((10, 10), P2, (10, 10), P3)[1]
plot_covariance_ellipse((10, 10), P2, facecolor='y', alpha=0.2)
plot_covariance_ellipse((10, 10), P3, facecolor='b', alpha=0.6)
plot_covariance_ellipse((10, 10), P4, facecolor='y')
Our estimate is not as accurate as the previous example. The two radar stations are no longer orthogonal to each other relative to the aircraft's position so the triangulation is not optimal. Imagine standing on the ground and trying to triangulate on an aircraft in the sky with a transit. If you took a measurement, moved the transit 5 meters and took a second measurement the tiny change in angle between the two measurements would result in a very poor measurement because a very small error in either measurement would give a wildly different result. Think of the measurements as two nearly parallel lines. Changing the angle between them slightly will move the intersection between the two by a large amount. If you were to take the measurements from positions 100 km apart the lines might be nearly perpendicular to each other, in which case a small measurement error would result in a very small shift in the intersection point.
## Hidden Variables¶
You can probably already see why a multivariate Kalman filter can perform better than a univariate one. The last section demonstrated how we can use correlations between variables to significantly improve our estimates. We can take this much further. This section contains the key insight to this chapter, so read carefully.
Let's say we are tracking an aircraft and we get the following data for the $x$ and $y$ coordinates at time $t$=1,2, and 3 seconds. What does your intuition tell you the value of $x$ will be at time $t$=4 seconds?
In [86]:
import mkf_internal
mkf_internal.show_position_chart()
It appears that the aircraft is flying in a straight line and we know that aircraft cannot turn on a dime. The most reasonable guess is that at $t$=4 the aircraft is at (4,4). I will depict that with a green arrow.
In [87]:
mkf_internal.show_position_prediction_chart()
You made this inference because you inferred a constant velocity for the airplane. The reasonable assumption is that the aircraft is moving one unit each in x and y per time step.
Think back to the g-h filter chapter when we were trying to improve the weight predictions of a noisy scale. We incorporated weight gain into the equations because it allowed us to make a better prediction of the weight the next day. The g-h filter uses the g parameter to scale the amount of significance given to the current weight measurement, and the h parameter to scale the amount of significance given to the weight gain.
We are going to do the same thing with our Kalman filter. After all, the Kalman filter is a form of a g-h filter. In this case we are tracking an airplane, so instead of weight and weight gain we need to track position and velocity. Weight gain is the derivative of weight, and of course velocity is the derivative of position. It's impossible to plot and understand the 4D chart that would be needed to plot x and y and their respective velocities so let's do it for $x$, knowing that the math generalizes to more dimensions.
At time 1 we might be fairly certain about the position (x=0) but have no idea about the velocity. We can plot that with a covariance matrix like this. The narrow width expresses our relative certainty about position, and the tall height expresses our lack of knowledge about velocity.
In [88]:
mkf_internal.show_x_error_chart(1)
Now after one second we get a position update of x=5.
In [89]:
mkf_internal.show_x_error_chart(2)
This implies that our velocity is roughly 5 m/s. But of course position and velocity are correlated. If the velocity is 5 m/s the position would be 5, but if the velocity was 10 m/s the position would be 10. So let's draw a velocity covariance matrix in red.
In [90]:
mkf_internal.show_x_error_chart(3)
This superposition of the two covariances is where the magic happens. The only reasonable estimate at time t=1 (where position=5) is roughly the intersection between the two covariance matrices! More exactly, we can use the math from the last section and multiply the two covariances together. From a Bayesian point of view we multiply the prior with the evidence to get the posterior. If we multiply the position covariance with the velocity covariance using the Bayesian equations we get the result shown in the next chart.
In [91]:
mkf_internal.show_x_error_chart(4)
We can see that the new covariance (the posterior) lies at the intersection of the position covariance and the velocity covariance. It is slightly tilted, showing that there is some correlation between the position and velocity. Far more importantly, it is much smaller than either the position or velocity covariances. In the previous chapter our variance would get smaller each time we performed an update() because the previous estimate was multiplied by the new measurement. The same thing happens here. However, the amount by which the covariance shrinks is much larger in this chapter. This is because we are using two different kinds of information which are nevertheless correlated. Knowing the velocity approximately and the position approximately allows us to very quickly hone in on the correct answer.
This is a key point in Kalman filters, so read carefully! Our sensor is only detecting the position of the aircraft (how doesn't matter). This is called an observed variable. It does not have a sensor that provides velocity. But based on the position estimates we can compute velocity. In Kalman filters we would call the velocity a hidden variable. Hidden means what it sounds like - there is no sensor that is measuring velocity, thus its value is hidden from us. We are able to use the correlation between position and velocity to infer its value very accurately.
To round out the terminology there are also unobserved variables. For example, the aircraft's state includes things such as heading, engine RPM, weight, color, the first name of the pilot, and so on. We cannot sense these directly using the position sensor so they are not observed. There is no way to infer them from the sensor measurements and correlations (red planes don't go faster than white planes), so they are not hidden. Instead, they are unobservable. If you include an unobserved variable in your filter state the estimate for that variable will be nonsense.
What makes this possible? Imagine for a moment that we superimposed the velocity from a different airplane over the position graph. Clearly the two are not related, and there is no way that combining the two could possibly yield any additional information. In contrast, the velocity of this airplane tells us something very important - the direction and speed of travel. So long as the aircraft does not alter its velocity the velocity allows us to predict where the next position is. After a relatively small amount of error in velocity the probability that it is a good match with the position is very small. Think about it - if you suddenly change direction your position is also going to change a lot. If the measurement of the position is not in the direction of the velocity change it is very unlikely to be true. The two are correlated, so if the velocity changes so must the position, and in a predictable way.
It is important to understand that we are taking advantage of the fact that velocity and position are correlated. We get a rough estimate of velocity from the distance and time between two measurements, and use Bayes theorem to produce very accurate estimates after only a few observations. Please reread this section if you have any doubts. If you grasp this point the rest is straightforward. If you do not you will quickly find it impossible to reason about what you will learn in the rest of this chapter.
In summary we have taken advantage of the geometry and correlations of the system to produce a very accurate estimate. The math does not care whether we are working with two positions, or a position and a correlated velocity, or if these are spatial dimensions. If floor space is correlated to house price you can write a Kalman filter to track house prices. If age is correlated to disease incidence you can write a Kalman filter to track diseases. If the zombie population is inversely correlated with the number of shotguns then you can write a Kalman filter to track zombies. I showed you this in terms of geometry and talked about triangulation. That was just to build your intuition. Get used to thinking of these as Gaussians with correlations. If we can express our uncertainties as a multidimensional Gaussian we can then multiply the prior with the evidence and get a much more accurate result.
|
2022-07-01 10:22:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7362076044082642, "perplexity": 419.97467571355185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103940327.51/warc/CC-MAIN-20220701095156-20220701125156-00471.warc.gz"}
|
http://www.macdevcenter.com/2004/03/05/examples/latex-html/node13.html
|
## Tables
Tables are something that we all use in our documents. LaTeX enables you to create high-quality tables through the table and tabular environments. Using these environments, you can easily produce very well-structured and readable tables of information. The basic syntax for creating a table is as follows:
\begin{table}[where]
\caption{Table caption}
\centering
\begin{tabular}[pos]{cols}
column 1 & column 2 ... & column k \\
...
\end{tabular}
\end{table}
Where LaTeX places a table is very important. For example, if you have a large table in your document, you would rather see it on a single page, rather than broken up across pages. In the LaTeX literature, you will see the term "float" or "floating object" used to refer to this idea. This describes the situation where a table or figure cannot fit on its current page, and is placed on a separate so-called floating page. LaTeX enables control over the placement of tables through the where parameter. The where parameter defines where the table is displayed on the page. A value of b places the table at the bottom of the page, h places the table here, t at the top of the page, and p on a separate float page containing no text, only floats. The concept of floating objects also applies to figures and footnotes.
You use the tabular environment to construct the table. The pos and cols parameters control how the table is formatted. The pos parameter controls the vertical position of the whole tabular environment. The values are either t (align with top row) or b (align with bottom row). The cols parameter controls the column formatting; l = format text left, r = format text right, c = format text center. The p{wd} parameter creates a column of fixed width wd whose contents can wrap across multiple lines.
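For example, here is a small table combining these options; the data is purely illustrative:

```latex
\begin{table}[ht]
  \caption{Column formats l, c, and r}
  \centering
  \begin{tabular}[t]{l c r}
    Item   & Count & Price \\
    apples & 3     & 1.50  \\
    pears  & 12    & 4.80  \\
  \end{tabular}
\end{table}
```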
Kevin O'Malley 2004-03-05
|
2017-10-19 16:04:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7546281814575195, "perplexity": 2091.7545153686747}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823350.23/warc/CC-MAIN-20171019160040-20171019180040-00124.warc.gz"}
|
https://www.natedayta.com/2018/08/04/maps-with-the-new-ggplot2-v3-0-0/
|
#### Maps with the new ggplot2 v3.0.0
##### August 4, 2018 - 2 minutes
Civic Data ggplot2 tidyverse
In honor of ggplot2 turning version 3 on CRAN I decided to make some maps of the 2010 census in Charlottesville, Virginia, to show off the new geom_sf() layer.
#### Packages
library(magrittr) # viva la %<>%
library(tidyverse)
#### Theme prep
Universal settings for all of my ggplot's. These make typing easier and documents more consistent.
theme_set(cowplot::theme_map() +
theme(panel.grid.major=element_line(colour="transparent")))
scale_fill_continuous <- function(...) ggplot2::scale_fill_continuous(..., type = "viridis")
#### Census Data
The tract level summary is available on the city’s ODP. But you could also use the tidycensus package for another city’s record.
tracts <- sf::read_sf("https://opendata.arcgis.com/datasets/63f965c73ddf46429befe1132f7f06e2_15.geojson")
tracts %<>% select(OBJECTID, area = AREA_, Population:Asian)
Let’s look at that census data now and since we have geom_sf() thowing on aesthetics is easy. Here I’ll use tracts$Population as fill. ggplot(tracts, aes(fill = Population)) + geom_sf() Ok that’s pretty freaking easy. No suprise that the city’s largest population is around UVA’s grounds and the Corner. Lets’ use our favorite facets with geom_sf() to explore the racial distribution of Whites, Blacks, American Indians, and Asians in the city. long_tracts <- tracts %>% gather("race", "pop", White:Asian) ggplot(long_tracts, aes(fill = pop)) + geom_sf() + facet_wrap(~ race) Damn, Charlottesville is really, really white. To make a better viz about the non-white population patterns it would be nice to free the fill scales in each facet. And because this is ggplot() now, I can use on my favorite grid helper tool, cowplot::plot_grid(). Any alternatives, like gridextra, egg or patchwork, are on the table too. long_tracts %>% split(.$race) %>%
map(~ ggplot(., aes(fill = pop)) +
geom_sf() +
facet_wrap(~race) ) %>%
cowplot::plot_grid(plotlist = .)
That’s pretty fast and now we have a much better picture of each race’s distribution in the city.
Being able to manipulate and make maps with the tidyverse is awesome. Working with ggplot2 layers is straightforward, and there already exists a ton of accessory packages, like cowplot, that make formatting these ggobjects straightforward too!
|
2019-04-22 20:57:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19305898249149323, "perplexity": 14427.013304313998}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578582584.59/warc/CC-MAIN-20190422195208-20190422221208-00310.warc.gz"}
|
https://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/A/Priestley_space
|
# Priestley space
In mathematics, a Priestley space is an ordered topological space with special properties. Priestley spaces are named after Hilary Priestley who introduced and investigated them.[1] Priestley spaces play a fundamental role in the study of distributive lattices. In particular, there is a duality ("Priestley duality"[2]) between the category of Priestley spaces and the category of bounded distributive lattices.[3][4]
## Definition
A Priestley space is an ordered topological space (X,τ,≤), i.e. a set X equipped with a partial order ≤ and a topology τ, satisfying the following two conditions:
1. (X,τ) is compact.
2. If $x \not\leq y$, then there exists a clopen up-set U of X such that x ∈ U and y ∉ U. (This condition is known as the Priestley separation axiom.)
## Properties of Priestley spaces
• Each Priestley space is Hausdorff. Indeed, given two points x, y of a Priestley space (X,τ,≤), if x ≠ y, then as ≤ is a partial order, either $x \not\leq y$ or $y \not\leq x$. Assuming, without loss of generality, that $x \not\leq y$, condition (2) provides a clopen up-set U of X such that x ∈ U and y ∉ U. Therefore, U and V = X − U are disjoint open subsets of X separating x and y.
• Each Priestley space is also zero-dimensional; that is, each open neighborhood U of a point x of a Priestley space (X,τ,≤) contains a clopen neighborhood C of x. To see this, one proceeds as follows. For each y ∈ X − U, either $x \not\leq y$ or $y \not\leq x$. By the Priestley separation axiom, there exists a clopen up-set or a clopen down-set containing x and missing y. The intersection of these clopen neighborhoods of x does not meet X − U. Therefore, as X is compact, there exists a finite intersection of these clopen neighborhoods of x missing X − U. This finite intersection is the desired clopen neighborhood C of x contained in U.
It follows that for each Priestley space (X,τ,≤), the topological space (X,τ) is a Stone space; that is, it is a compact Hausdorff zero-dimensional space.
Some further useful properties of Priestley spaces are listed below.
Let (X,τ,≤) be a Priestley space.
(a) For each closed subset F of X, both ↑F = {x ∈ X : y ≤ x for some y ∈ F} and ↓F = {x ∈ X : x ≤ y for some y ∈ F} are closed subsets of X.
(b) Each open up-set of X is a union of clopen up-sets of X and each open down-set of X is a union of clopen down-sets of X.
(c) Each closed up-set of X is an intersection of clopen up-sets of X and each closed down-set of X is an intersection of clopen down-sets of X.
(d) Clopen up-sets and clopen down-sets of X form a subbasis for (X,τ).
(e) For each pair of closed subsets F and G of X, if ↑F ∩ ↓G = ∅, then there exists a clopen up-set U such that F ⊆ U and U ∩ G = ∅.
A Priestley morphism from a Priestley space (X,τ,≤) to another Priestley space (X′,τ′,≤′) is a map f : X → X′ which is continuous and order-preserving.
Let Pries denote the category of Priestley spaces and Priestley morphisms.
## Connection with spectral spaces
Priestley spaces are closely related to spectral spaces. For a Priestley space (X,τ,≤), let τu denote the collection of all open up-sets of X. Similarly, let τd denote the collection of all open down-sets of X.
Theorem:[5] If (X,τ,≤) is a Priestley space, then both (X,τu) and (X,τd) are spectral spaces.
Conversely, given a spectral space (X,τ), let τ# denote the patch topology on X; that is, the topology generated by the subbasis consisting of compact open subsets of (X,τ) and their complements. Let ≤ denote the specialization order of (X,τ).
Theorem:[6] If (X,τ) is a spectral space, then (X,τ#,≤) is a Priestley space.
In fact, this correspondence between Priestley spaces and spectral spaces is functorial and yields an isomorphism between Pries and the category Spec of spectral spaces and spectral maps.
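For a concrete instance of this correspondence: the Sierpiński space (two points, with exactly one singleton open) is spectral, its patch topology is the discrete topology on two points, and its specialization order (here taken as x ≤ y iff x lies in the closure of {y}) makes it the two-element chain — hence a Priestley space.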
## Connection with bitopological spaces
Priestley spaces are also closely related to bitopological spaces.
Theorem:[7] If (X,τ,≤) is a Priestley space, then (X,τu,τd) is a pairwise Stone space. Conversely, if (X,τ1,τ2) is a pairwise Stone space, then (X,τ,≤) is a Priestley space, where τ is the join of τ1 and τ2 and ≤ is the specialization order of (X,τ1).
The correspondence between Priestley spaces and pairwise Stone spaces is functorial and yields an isomorphism between the category Pries of Priestley spaces and Priestley morphisms and the category PStone of pairwise Stone spaces and bi-continuous maps.
Thus, one has the following isomorphisms of categories:
${\displaystyle \mathbf {Spec} \cong \mathbf {Pries} \cong \mathbf {PStone} }$
One of the main consequences of the duality theory for distributive lattices is that each of these categories is dually equivalent to the category of bounded distributive lattices.
## Notes
1. Priestley, (1970).
2. Cignoli, R.; Lafalce, S.; Petrovich, A. (September 1991). "Remarks on Priestley duality for distributive lattices". Order. 8 (3): 299–315. doi:10.1007/BF00383451.
3. Cornish, (1975).
4. Bezhanishvili et al. (2010)
5. Cornish, (1975). Bezhanishvili et al. (2010).
6. Cornish, (1975). Bezhanishvili et al. (2010).
7. Bezhanishvili et al. (2010).
|
2021-08-01 17:31:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 7, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9605458378791809, "perplexity": 1149.346576754702}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154214.63/warc/CC-MAIN-20210801154943-20210801184943-00305.warc.gz"}
|
http://mrandrewandrade.com/blog/2015/10/21/battery-testing.html
|
# Background
As previously mentioned, my fourth year design project (FYDP) involves building an energy storage system (ESS) using repurposed electric vehicle (EV) batteries. After individual battery cells are removed from the large high-voltage EV battery, the first step of the project is testing the battery cells to see if they are “healthy” and ready to use. This means we have to charge and discharge the cells while monitoring both the voltage and the temperature. We are planning on using a BeagleBone Black (BBB) as our microcontroller (shown below), and have to pick sensors to measure voltage and temperature.
# Project Scope
Since we are running lean, our goal is to get to the stage of safely charging and discharging the battery autonomously as quickly as possible. For our minimum viable product (MVP), we had the following goals:
1. Label every battery cell with a unique identifier, and keep a log of all tests and data.
2. Be able to autonomously charge the battery. This means being able to stop charging when either the battery reaches 50 degrees C or the voltage across the battery cell exceeds 7.2 V (a rough sketch of this logic follows the list).
3. Constantly measure the voltage across the battery over time.
4. Constantly measure the temperature of the battery over time
5. Be able to autonomously discharge the battery by presenting a simple resistive load.
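Here is the rough sketch of the cutoff logic behind goal 2. Note that `read_voltage()`, `read_temperature()`, `stop_charging()`, and `log()` are hypothetical placeholders for whatever sensor and relay interfaces we end up building on the BBB — not real APIs:

```python
import time

# Cutoff thresholds from MVP goal 2.
MAX_TEMP_C = 50.0        # stop charging at 50 degrees C
MAX_CELL_VOLTAGE = 7.2   # stop charging above 7.2 V across the cell

def charge_until_cutoff(read_voltage, read_temperature, stop_charging, log):
    """Poll the cell until either cutoff condition trips, logging as we go."""
    while True:
        v = read_voltage()      # hypothetical ADC read, in volts
        t = read_temperature()  # hypothetical sensor read, in degrees C
        log(time.time(), v, t)  # goal 1: keep a log of all test data
        if t >= MAX_TEMP_C or v >= MAX_CELL_VOLTAGE:
            stop_charging()     # hypothetical relay/charger control
            return
        time.sleep(1.0)         # poll once per second
```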
A couple of stretch goals:
1. Stream data online
2. Test multiple cells
3. Coulomb counting
Let’s start with choosing the sensors and figuring out how to take measurements.
# Instrumentation and Measurement
## Measuring Voltage
Measuring voltage using a BBB is simple: we can use the analog input pins on the BBB to measure voltage. The first issue is that, according to the BBB documentation, to safely read analog values the input voltage to the analog-to-digital converter (ADC) is limited to 1.8 V. This means the input voltage must be properly limited to ensure safe operation. To achieve this, a simple voltage divider can be used (by applying Ohm’s Law $V = I R$).
### Voltage Divider
Based on Ohm’s Law, the circuit above gives $V_o = I R_2$ and $V_{in} = I (R_1 + R_2)$; dividing the two and cancelling the current $I$ brings it to this form:
$V_o=\frac{R_2}{R_1+R_2}V_{in}$
Now, given a desired $V_o$ and a known $V_{in}$, $R_1$ and $R_2$ can be solved for. We wanted the ability to read up to 30 V (including contingency) while still limiting the voltage into the BBB to 1.8 V, so here $V_{in}=30V$ and $V_o=1.8V$. Setting the smaller resistor ($R_2$ in this case) to $10k\Omega$, solving gives $R_1 = R_2(V_{in}/V_o - 1) \approx 156.7k\Omega$. Note that resistors come in standard values, so one must round to keep the limit safe; in this example, $R_1$ should be rounded up to $160k\Omega$, not down. Why up? Think about how the voltage divider works: a larger $R_1$ pulls $V_o$ lower, keeping it under the 1.8 V ceiling.
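As a quick sanity check on the arithmetic, using the component values above:

```python
# Voltage divider: Vo = R2 / (R1 + R2) * Vin, rearranged for R1.
v_in = 30.0   # maximum voltage we want to be able to read (V)
v_out = 1.8   # maximum safe ADC input on the BBB (V)
r2 = 10e3     # chosen lower-leg resistor (ohms)

r1 = r2 * (v_in / v_out - 1)
print(f"R1 = {r1 / 1e3:.1f} kOhm")  # 156.7 kOhm -> round UP to the standard 160 kOhm
```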
### How to not drain battery while measuring
The issue with this method is that with a total resistance of $R_1 + R_2 = 170k\Omega$, the total current drawn is $I_{total} = V_{in} / R_{total} = 30V / 170k\Omega \approx 1.76\times10^{-4} A$. While this is a very small current, slowly but surely it will drain the battery. The way to reduce this effect is to increase the resistors to the $M\Omega$ range, so that the current drawn is very low. The issue with that is the measurement current may no longer exceed the leakage current of the ADC pin. In short, the current cannot be too small or it will cause measurement issues. But can this be solved? Yes! The trick is to add a small capacitor in parallel with the $R_2$ resistor. This improves the readings of the output voltage, yet lets us maintain a very low current.
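To put numbers on the trade-off, here is the divider drain current at the as-built resistance and at 10x and 100x scales:

```python
# Divider drain current I = Vin / R_total at several total resistances.
v_in = 30.0
for r_total in (170e3, 1.7e6, 17e6):  # ohms: as-built, 10x, 100x
    print(f"R_total = {r_total / 1e6:5.2f} MOhm -> I = {v_in / r_total * 1e6:7.2f} uA")
```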
## Temperature Sensors
There exist many different types of temperature sensors, which function in different ways. The choice of temperature sensor and its implementation will be covered in the next blog post!
|
2017-07-26 18:27:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 6, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3493254482746124, "perplexity": 1042.6276891331024}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549426372.41/warc/CC-MAIN-20170726182141-20170726202141-00113.warc.gz"}
|
https://socratic.org/questions/what-is-the-distance-between-8-2-and-4-7
|
# What is the distance between (8, 2) and (4, 7) ?
Dec 29, 2015
It's $\sqrt{41}$
#### Explanation:
We first need to calculate the difference along the x axis and along the y axis:
$\triangle x = 8 - 4 = 4$
$\triangle y = 7 - 2 = 5$
Having these values, we now have a right triangle with legs $\triangle x$ and $\triangle y$. Now, just apply Pythagoras' formula:
${\text{line}}^{2} = \triangle {x}^{2} + \triangle {y}^{2}$
${\text{line}}^{2} = {4}^{2} + {5}^{2}$
$\text{line} = \sqrt{41}$
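In general, the same argument gives the distance formula for points $({x}_{1}, {y}_{1})$ and $({x}_{2}, {y}_{2})$: $d = \sqrt{{({x}_{2}-{x}_{1})}^{2} + {({y}_{2}-{y}_{1})}^{2}}$. Here that is $\sqrt{{4}^{2}+{5}^{2}} = \sqrt{41} \approx 6.40$.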
|
2020-06-05 13:52:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5779642462730408, "perplexity": 1020.9137428499437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348500712.83/warc/CC-MAIN-20200605111910-20200605141910-00292.warc.gz"}
|